First Break - Volume 24, Issue 11, 2006
After 75 years CGG is still making seismic history
By A. McBarnet

The 75th anniversary of the CGG Group is being celebrated alongside the company's preparations for its merger with Veritas DGC. Andrew McBarnet brings some perspective based on conversations with CGG chairman and CEO Robert Brunck.

It was something of a whirlwind romance when, just a month or two ago, CGG announced its intention to merge with Veritas DGC. Robert Brunck, chairman and CEO of CGG, quips that it took the company 75 years to settle down with a partner. The proposed $3.1 billion purchase of Veritas in a combination of shares and cash, likely to be finalized early next year, has certainly provided a dramatic punctuation point in the history of the oldest European geoscience services company, and it has added poignancy to some memorable CGG parties this year celebrating 75 years in the business. By next March, Brunck says, he will have been to 12 different celebrations around the world, and each one will have been well worth it. 'For us, it is a tremendous opportunity to meet with employees, shareholders, and clients in many parts of the world. It also makes the statement that seismic is back centre stage and that the hard work of our incredibly loyal employees over the last five years has paid off.'

There is no doubt that the pending acquisition marks a watershed in the CGG story which, Brunck is the first to admit, has had its ups and downs. 'I can only speak about what I have seen or experienced personally,' he says, 'but I would agree that CGG literally missed the boat in the early 1990s. At that time the company failed to realize the importance of the global offshore seismic market, and Sercel was under-performing.'

Brunck identifies the turning point for CGG as the mid-1990s under his predecessor Yves Lesage, when the company began to focus on building up its 3D marine seismic operations and developing its seismic acquisition manufacturing subsidiary Sercel under the leadership of Thierry Le Roux. In the Brunck era that strategy for the CGG Group appears to have come up trumps: in 1999 CGG was a $400 million company; today it is a $1.5 billion one, and it will soon be more than double that.

Brunck admits that he wasn't always confident it would work out. 'At the end of the 1990s we were looking at vessel over-capacity, low oil prices, oil company consolidations, and unfortunately, like everyone else, lay-offs in our company. Then there was September 11 and the weak economy that followed. It meant that there was a lot of doubt out there. I told my colleagues that we were facing some tough years but we would definitely bounce back. But in 2003 and early 2004 I wasn't so sure about the timescale for recovery. Luckily, by the end of 2004, the prognosis was much better.'
E&P companies face investment challenge for their cash
The annual study of the oil and gas E&P business by US independent petroleum research company John S. Herold and UK-based global oil and gas corporate adviser Harrison Lovegrove & Company has consistently provided an excellent guide to current trends, and once again comes up with some surprising conclusions. We publish here an abridged version of the report, now in its 39th year, which is based on the performance of some 200 companies representing a cross-section of the global upstream sector.

Tight markets and escalating prices fuelled a 32% jump in the average price per barrel of oil equivalent (boe) realized by the Global Upstream Performance Review universe of over 200 oil companies in 2005; the sector enjoyed a revenue gain of $190 billion over the prior year. Clearly, it is a challenge for the industry to invest such a surge of funds, particularly given the restrictions on access to basins with large resource potential. For context, the entire industry's finding and development investment during 2004 was $164 billion.

Upstream capital investment did soar to a record for the third consecutive year, increasing by 31% to $277 billion. Yet the incremental outlays represented only 35% of 2005's incremental revenue. Reserve purchases also set a new high in dollar terms, rising 13% to $54 billion. In spite of the increased capital and acquisition programmes, the industry still found itself with surplus capital. Rather than going back into the ground, $128 billion of this money was channelled back to stakeholders through dividends and share repurchases. In fact, funds devoted by the oils to stock buybacks exceeded proved reserve purchases by almost 20% and were nearly 80% higher than exploration outlays last year.

On a regional basis, budget allocations have been shifting rather slowly. The US continues to draw a smaller share of finding and development outlays, even though profitability per barrel in the US is the highest among our six global regions. Over the last several years, the industry has been making an effort to direct its capital toward areas where relatively large targets may be drilled. But competition for these areas is intense, and accessing adequate opportunities is challenging. Although total exploration investment rose to a record $36 billion, it drew the smallest share of industry investment over the last five years (13% versus 15% in 2001).

Exploration remains under a magnifying glass because discovered volumes continue to disappoint. This is a significant problem. Without access to some of the prolific hydrocarbon regions currently closed to most major oils, new field discoveries are likely to continue getting smaller. The industry will be forced to choose between reliance on higher-cost resources such as tar sands, LNG, and GTL, or development of ever-smaller accumulations, which also implies continually rising investment per new barrel.

Our work has shown that the historical allocation of upstream capital to development activities fluctuates little over time: development outlays receive a smaller share of the total budget when acquisitions claim a larger part of the pie. Since the timing of development wells can be highly discretionary, and desirable acquisitions can present themselves with little notice, this cyclic diversion of funds is not surprising.
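For readers who want to retrace the arithmetic behind the 35% figure quoted above, a quick back-of-envelope sketch in Python using the report's own rounded numbers (so the result is approximate):

```python
# All figures in $ billion, taken from the report text above.
capex_2005 = 277            # record upstream capital investment in 2005
capex_growth = 0.31         # "increasing by 31%"
incremental_revenue = 190   # revenue gain over the prior year

capex_2004 = capex_2005 / (1 + capex_growth)    # implied 2004 base, ~211
incremental_capex = capex_2005 - capex_2004     # ~66

share = incremental_capex / incremental_revenue
print(f"incremental outlays ~${incremental_capex:.0f}bn, "
      f"{share:.1%} of incremental revenue")    # ~34.5%, i.e. the ~35% quoted
```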
Determination of reservoir properties from the integration of CSEM, seismic, and well-log data
Authors P.E. Harris and L.M. MacGregor

Peter Harris, Rock Solid Images, and Lucy MacGregor, Offshore Hydrocarbon Mapping, discuss the advantages in reservoir interpretation of integrating controlled source electromagnetic (CSEM) and seismic data, illustrated by an example from the Nuggets-1 gas reservoir in the UK Northern North Sea.

The problem of remote characterization of reservoir properties is of significant economic importance to the hydrocarbon industry. For example, in exploration the ability to determine the gas saturation in an identified prospect would avoid the costly drilling of uneconomic low-saturation accumulations. During development and production, a detailed knowledge of the reservoir properties and geometry, and of changes in these parameters through time, can aid optimization of well placement and enhance overall recovery rates.

A range of geophysical techniques can be applied to this problem. Seismic data are commonly used to develop geological models of structure and stratigraphy. Amplitude variation with offset (AVO) and inversion for acoustic and elastic impedance may also be used to constrain reservoir properties such as elastic moduli and density. These can in turn be related to mineralogy, porosity, and fluid properties through rock physics relationships (for example, Mavko et al., 1998). However, in many situations seismic data alone cannot give a complete picture of the reservoir. Ambiguities exist, for example, in AVO responses, which may be caused either by fluid or by lithological variations, and these cannot be separated on the basis of the seismic data alone.

The controlled source electromagnetic (CSEM) method is becoming widely used in the offshore hydrocarbon industry, and has been applied successfully in a variety of settings (see, for example, Srnka et al., 2006; MacGregor et al., 2006; Moser et al., 2006). The CSEM method uses a high-powered horizontal electric dipole to transmit a low-frequency electromagnetic signal through the seafloor to an array of multi-component electromagnetic receivers. Variations in the received signal as the source is towed through the array of receivers are interpreted to provide the bulk electrical resistivity of the seafloor, through a combination of forward modelling, geophysical inversion, and imaging.

The bulk resistivity of a porous rock is to a large degree controlled by the properties and distribution of fluids within it. Typical brine-saturated sediments have a resistivity in the range 1-5 Ωm. Replacing the brine with resistive hydrocarbon can result in an increase in the bulk resistivity of the formation by one to two orders of magnitude. CSEM sounding exploits this dramatic change in physical properties to distinguish water-bearing formations from those containing hydrocarbons. However, as for seismic data, potential ambiguities exist in the interpretation of CSEM data. For example, tight limestones, volcanics, or salt bodies may also have high resistivity, and could give a CSEM response similar to that of a hydrocarbon reservoir. In addition, because of the diffusive nature of electromagnetic fields in the earth, the structural resolution is generally lower than that given by seismic data.

Since the CSEM and seismic responses are controlled by very different physical processes, a careful combination of the two data types, exploiting the strengths of each, can supply information that is unavailable or unreliable from either type of data alone, thus reducing ambiguity and risk.
A number of approaches to the integration of disparate data types have been proposed (e.g., Musil et al., 2003; Gallardo & Meju, 2004; Hoversten et al., 2006). Here we illustrate the advantages of an integrated interpretation using CSEM and seismic data collected over the Nuggets-1 gas reservoir.
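The one-to-two-orders-of-magnitude contrast described above can be illustrated with Archie's empirical law, a standard rock-physics relation (the article does not specify its petrophysical model; the porosity, brine resistivity, saturations, and exponents below are illustrative assumptions, not Nuggets-1 values):

```python
# Archie's empirical law: R_t = R_w * phi**(-m) * S_w**(-n)
def archie_resistivity(r_w, phi, s_w, m=2.0, n=2.0):
    """Bulk resistivity (ohm-m) for brine resistivity r_w (ohm-m),
    porosity phi, and water saturation s_w (both fractions)."""
    return r_w * phi ** (-m) * s_w ** (-n)

r_w, phi = 0.1, 0.25   # assumed saline brine and clean-sand porosity

brine = archie_resistivity(r_w, phi, s_w=1.0)     # fully brine-saturated
low_gas = archie_resistivity(r_w, phi, s_w=0.8)   # uneconomic 20% gas
high_gas = archie_resistivity(r_w, phi, s_w=0.2)  # commercial 80% gas

print(f"brine sand:   {brine:5.1f} ohm-m")     # ~1.6, within the 1-5 range
print(f"low-sat gas:  {low_gas:5.1f} ohm-m")   # ~2.5, barely distinguishable
print(f"high-sat gas: {high_gas:5.1f} ohm-m")  # ~40, order-of-magnitude jump
```

Note how the low-saturation case stays close to the brine background while the commercial case stands out sharply; this is exactly the saturation discrimination that makes CSEM useful for avoiding uneconomic accumulations.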
Electromagnetic prospect scanning moves seabed logging from risk reduction to opportunity creation
Authors D. Ridyard, T.A. Wicklund, and B.P. Lindhom

Dave Ridyard, Tor Atle Wicklund, and Bjorn Petter Lindhom of Norwegian company Electromagnetic Geoservices (emgs) describe an adaptation of conventional controlled source electromagnetic surveying aimed at marine hydrocarbon exploration in frontier areas.

In the past four years emgs has conducted over 200 seabed-logging surveys. This method is now well established as an effective technique for verifying the presence of hydrocarbons in prospects initially identified by seismic analysis. Surveys can be designed to measure the resistivity of subsurface bodies in a growing number of geological settings. In the hands of a skilled interpreter, survey data can help to determine whether or not a structure contains hydrocarbons and, if it does, to delineate the reservoir. A new application of this technique, known as 'scanning', provides a means of identifying prospects in frontier areas working only from basin-scale knowledge. In this article, we introduce the concept of scanning and discuss some of the issues concerning survey design and application.

What is scanning?

The traditional approach to exploring a frontier area requires considerable effort in seismic imaging and drilling before there is any real evidence that hydrocarbons are present. In contrast, electromagnetic (EM) prospect scanning identifies prospective areas much earlier in the process. By acquiring data from a grid of seabed-logging receivers, areas containing significant resistive anomalies can be identified, and exploration efforts can then be focused on those leads. An exploration workflow that includes EM prospect scanning in this way can dramatically reduce the overall time and effort required to find a new reservoir.

The difference between scanning and a conventional seabed-logging survey is that scanning data are acquired on a relatively sparse grid (compared with the typically 1 km receiver spacing used in traditional seabed logging). As a result, large areas can be scanned quickly. Because preliminary data analysis is also quite quick, it may be possible to identify potential reservoirs before the survey vessel leaves the area. This provides the option to undertake a more detailed infill characterization of the potential reservoir. Then, after seismic imaging, drilling can be targeted on the most prospective areas. This approach may reduce typical field development times by up to a year.

The technique can also be applied in more mature areas to identify by-passed pay, which can extend the life of a field using existing production infrastructure. Another opportunity for the use of scanning is in areas where permits for seismic acquisition can be hard to obtain, such as areas of environmental sensitivity. Scanning is most effective when used to survey large areas, so collaboration between several oil companies and licensing authorities can create the greatest value.
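The efficiency argument can be made concrete with simple grid arithmetic: receiver count scales with the inverse square of grid spacing. A minimal sketch, assuming a hypothetical 2500 km² frontier block and an illustrative 3 km scanning spacing (only the 1 km conventional figure comes from the article):

```python
def receivers_needed(area_km2, spacing_km):
    """Approximate receiver count for a regular grid covering the area."""
    return area_km2 / spacing_km ** 2

area = 2500.0  # km^2, hypothetical frontier block

dense = receivers_needed(area, spacing_km=1.0)   # conventional seabed logging
sparse = receivers_needed(area, spacing_km=3.0)  # assumed scanning-style grid

print(f"conventional grid: ~{dense:.0f} receiver positions")   # ~2500
print(f"scanning grid:     ~{sparse:.0f} receiver positions")  # ~278, 9x fewer
```

Roughly the same saving applies to deployment and recovery time, which is what makes it feasible to map resistive anomalies over a large area before the vessel leaves.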
Airborne electromagnetics in Europe: recent activities and future goals
Authors A.M. Steuer and U. Meyer

Annika Steuer and Uwe Meyer of the Federal Institute for Geosciences and Natural Resources (BGR) report on a recent gathering of organizations in Europe using airborne electromagnetics.

In terms of numbers, European systems are dominated by frequency-domain instruments. The Geological Surveys of Norway (NGU), Germany (BGR), and Austria (GBA), as well as the Alfred Wegener Institute (AWI), use helicopter electromagnetic (HEM) systems, which differ in the number of frequencies and the configuration of the coil systems. NGU and GBA use the Geotech Hummingbird, NGU with three horizontal coplanar and two horizontal coaxial coils, whereas GBA uses four horizontal coplanar coils. BGR operates a DIGHEM bird from Fugro with five horizontal coplanar coils. AWI developed a dual-frequency HEM system specially designed for sounding sea-ice thickness in polar regions.

The Geological Surveys of the United Kingdom (BGS) and Finland (GTK) established a 'Joint Airborne-geoscience Capability' (JAC) based on a system originally developed by GTK. They operate a fixed-wing aircraft carrying a four-frequency system using vertical coplanar coils. The Swedish Geological Survey and the University of Uppsala also use a fixed-wing aircraft; together they developed a new VLF instrument working in the wide frequency band of 1-350 kHz.

At present, the only time-domain system in Europe is operated by the University of Aarhus. The Aarhus SkyTEM system is a transient electromagnetic technique in a 'central loop'-like configuration, towed by a helicopter.
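A note on why the frequency-domain systems above carry several frequencies: each frequency senses to a different depth, as the standard plane-wave skin-depth relation δ ≈ 503√(ρ/f) metres suggests (a textbook relation, not from the report; the ground resistivity and frequency spread below are illustrative assumptions):

```python
import math

def skin_depth_m(resistivity_ohm_m, frequency_hz):
    """Plane-wave skin depth: depth at which the EM field decays to 1/e."""
    return 503.0 * math.sqrt(resistivity_ohm_m / frequency_hz)

rho = 100.0  # ohm-m, an assumed moderately resistive ground
for f in (400, 1_800, 8_000, 40_000, 140_000):  # illustrative HEM-style band
    print(f"{f:>7} Hz -> skin depth ~ {skin_depth_m(rho, f):5.0f} m")
# Low frequencies penetrate hundreds of metres, the highest only ~15 m,
# which is why a small set of well-chosen frequencies yields a depth sounding.
```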
Vertical seismic pitfalls: problems in archiving VSP data
By E.L. Jack

Companies need to make sure that their treasured VSP data are archived and in good order, warns Eleanor Jack, senior geophysicist in Landmark's Information Management Practice. She provides some horror stories of what all too frequently can go wrong to prove her point.

VSPs (vertical seismic profiles) are the bridge between wells and seismic, the point where time and distance meet, where the generalities of the surface seismic data are anchored to the certainty of the well. As such, VSP data should be the securest items in the archive, the ones in which we have the most confidence. And yet, when we come to archiving VSP data, we find ourselves in a no-man's land of conflicting formats, procedures, and practices, where hardly anything is standardized and everything is at risk. How can this be?

Summary of problems

The problem is partly that no-one really knows where to put VSP data, which do not fit comfortably into a well database, because they are mainly seismic, and do not fit into a seismic database, because they belong to a well. Moreover, there is, or rather should be, a variety of different data types such as reports and calibrated logs in addition to the seismic, all of which have different archiving and indexing requirements.

VSP seismic data, too, have their own special problems, involving as they generally do X, Y, and Z components. Multicomponent VSP data arrived on the scene well before anyone had devised a format to cope with them, and the data therefore had to be shoe-horned into the SEG-Y format in a variety of ingenious ways. (SEG-Y Rev 1 finally addressed this problem in 2002, by which time a backlog of some 20 years of multi-component data in original SEG-Y had accrued.) Processing of these data has never really been standardized, and this has resulted in a great diversity of processed data. Data were processed with very little concern over who else, other than the perpetrator, would need to read them, and this led to some truly bizarre interpretations of what the SEG-Y format actually meant.

As if this were not enough, there is a fundamental difference of approach between the well and seismic camps on the matter of encapsulation. Use of TIF (Tape Image Format) encapsulation is standard for well logs but not, unfortunately, for seismic. An archive involving both logs and seismic may well therefore be a mixture of encapsulated and unencapsulated data; but if archivists attempt a unified approach and encapsulate their seismic data, this will render it unreadable to most seismic applications. This subject will be revisited in detail in one of the examples.
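A data manager triaging a mixed archive can at least automate a first-pass format check. The sketch below is a hypothetical heuristic (not a Landmark tool): a raw SEG-Y file opens with a 3200-byte EBCDIC textual header whose 80-character lines conventionally begin with 'C', so anything failing that test, for instance TIF-wrapped seismic, is flagged for manual inspection:

```python
def looks_like_raw_segy(path):
    """Heuristic: does this file start with an EBCDIC SEG-Y textual header?"""
    with open(path, "rb") as f:
        header = f.read(3200)
    if len(header) < 3200:
        return False
    # Decode as EBCDIC (cp037); a readable header full of 'C' lines is a
    # strong hint that this is unencapsulated SEG-Y.
    text = header.decode("cp037", errors="replace")
    lines = [text[i:i + 80] for i in range(0, 3200, 80)]
    c_lines = sum(1 for line in lines if line.startswith("C"))
    return c_lines >= 30  # 40 lines in a standard header; allow some slack

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        verdict = "raw SEG-Y?" if looks_like_raw_segy(path) else "wrapped or non-SEG-Y"
        print(f"{path}: {verdict}")
```

Given the 'bizarre interpretations' of the format described above, such a check can only sort the archive into piles for closer inspection; legitimate SEG-Y files that flout the 'C' convention will be mis-flagged.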
Early fortunes of CGG in a volatile world
In recognition of the 75th anniversary of Compagnie Générale de Géophysique, Europe's longest surviving geophysical services company, we publish here extracts with photos from a new history of the company.

The beginning: 1912-1931

It was in the summer of 1912 that the young Conrad Schlumberger, who had studied at the prestigious École Polytechnique, carried out his first electrical surveying experiments on iron-rich synclines on his Val Richer property in Normandy. Using the principle that water conducts electric current, he constructed a 'black box' which recorded the electric current flowing between two rods acting as electrodes. He then drew a map which he compared with topographical contour lines. Comparison of the two sets of results provided clear evidence that electrical soundings could be used to build up a picture of the vertical layering of sedimentary strata, as a function of their resistivity. The following year, he measured the natural electrical currents created by pyrites at Sain-Bel near Lyons and at Bor in Yugoslavia.

After the war, Conrad Schlumberger returned to his research and made a number of important discoveries. Some of the first involved applied geophysics: in connection with oil exploration associated with the Aricesti dome in Romania in 1923, and with mining exploration at Noranda, Canada, in 1924. Two years later, he formed the Société de Prospection Electrique (SPE) with his brother Marcel Schlumberger. This company, located at 30, rue Fabert, Paris, specialized in oil and mining exploration and civil engineering. It was here that the first 'subsurface' drilling techniques, soon to be used worldwide by SPE-Schlumberger crews, were developed, together with the methods of prospecting used some years later by the emerging CGG.

In 1927, Conrad Schlumberger experimented with electrical coring at the Pechelbronn site, the only known French oil deposit. He used drilling rigs to conduct vertical measurements, in order to gain a better understanding of the nature of the stratigraphic layers encountered, and to record the levels that showed promise for future drilling. This was the exploration technique that came to be known as logging, and it was destined to have a brilliant future. A year later, the SPE had nearly 100 employees, more than half of them engineers.

The economic crisis that followed the stock market crash of 1929, and the ensuing fall in oil prices, left the SPE in financial difficulties. One consequence was SPE's merger with the Société Géophysique de Recherches Minières (SGRM) and the founding in 1931 of the Compagnie Générale de Géophysique (CGG). It is a venture that has continued to do business ever since, adapting to all of the changes and crises that have marked the history of oil.
Location and evaluation of flow barriers with 4D seismic
Authors Y.A. Almaskeri and C. MacBeth

One of the key uncertainties in understanding a reservoir for flow-simulation purposes is the location, geometry, and properties of the internal discontinuities to fluid or pressure communication. Barriers are created by faults, cemented fractures, lithology changes, or boundaries between individual genetic flow units. Description and evaluation of these features are necessary for a precise understanding of the dynamic behaviour of the reservoir, affecting predictions of permeable fairways, bypassed hydrocarbons, and hence production. Furthermore, missing, mis-located, or mis-represented barriers can lead to inadequate predictions of reservoir performance and hence to non-optimal reservoir management and hydrocarbon recovery.

Compartmentalization and the presence of barriers can, to some extent, be recognized during production by unusually low pressures, early liberation of solution gas, and poor pressure support from injectors, surrounded by high pressures. Additionally, once the simulation model has been constructed, it can be used to position and quantify expected barriers by history-matching their properties to the produced volumes and pressures. Contributing to this process are 3D seismic and geological interpretation, which provide information on the likely positions of potentially important structural discontinuities, although these data cannot help in directly constraining dynamic flow performance. In such a process, subtle connectivities between sand channels are often lost, and local high-permeability pathways cannot be accurately quantified. Fault seal capacity can be estimated by considering the fault zone heterogeneity via reservoir juxtaposition and shale gouge ratio calibrated by laboratory tests (Manzocchi et al., 1999). A further technique to determine the degree of communication across barriers for engineering purposes is well-to-well transient interference testing (Stewart et al., 1984). Reservoir connectivity and the presence of flow boundaries in the reservoir volume can also be assessed by interpreting transient analysis and simulation of extended well tests in combination with the initial pressure data from repeat formation tests (for example, Richardson et al., 1997).

Techniques based on well data for pressure information lack areal coverage and resolution. In principle, this gap can be filled by 4D seismic, which could supply the extra detailed information required for a general reservoir connectivity assessment at sub-simulator-cell resolution. Past case studies have highlighted this value. For example, Sonneland et al. (2000) used a saturation-based approach to assess seal along a fault network with 4D seismic in the Gullfaks field. Barkved et al. (2003) indicated that a sealed fault block, previously unrecognized on the 3D seismic, could be detected after pressure depletion on Valhall. Parr and Marsh (2000) gave examples of the visual location of barriers inferred from 4D signature evolution due to horizontal producers, and also of the added information on barrier breakdown during production, in West of Shetland data. Finally, Almaskeri and MacBeth (2004) showed how discontinuities in the 4D seismic signatures might be used as a fine-scale attribute to reveal the location of barriers.

Building on this latter work, the aim here is to provide a further evaluation of barriers by applying a recently developed technique for horizontal permeability estimation (MacBeth and Almaskeri, 2005).
Specifically, by modifying the way in which the estimation technique is implemented, it is possible to focus on the low magnitude components of the permeability spectrum related predominantly to the barrier contribution, and then to use the magnitude of the permeability drops to assess the extent to which the barrier can transmit pressure. This method results in a seismic map of barriers and hence an assessment of the degree of communication.
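To make the idea of barriers appearing as 4D discontinuities concrete, here is a toy sketch only (not the MacBeth and Almaskeri estimation technique, whose details are in the cited papers): it builds a synthetic 4D amplitude-difference map in which pressure change is confined to one compartment, then flags the sealing boundary with a simple gradient edge detector:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny = 100, 100
dmap = rng.normal(0.0, 0.02, (nx, ny))  # background 4D repeatability noise
dmap[:, 50:] += 0.3                     # depletion signal in one compartment
                                        # only: a sealing barrier at column 50

g_row, g_col = np.gradient(dmap)        # finite-difference gradients
edge = np.hypot(g_row, g_col)           # gradient magnitude

threshold = edge.mean() + 4.0 * edge.std()
barrier_cols = np.unique(np.argwhere(edge > threshold)[:, 1])
print("candidate barrier columns:", barrier_cols)
# Mostly columns 49-50 (the true barrier); isolated noise hits are possible,
# which is why real workflows calibrate against wells and the simulation model.
```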
Understanding stochastic inversion: part 1
By A.M. Francis

Ashley Francis, managing director of UK consultancy Earthworks Environment & Resources, provides the first of a two-part tutorial on the theory of deterministic and stochastic inversion, with some comparison of the pros and cons of the two methods. The second part will appear in the December issue of First Break.

Seismic inversion tools designed to estimate impedance have been available to geophysicists for over 20 years. Most of the available methods are based on forward convolution of a reflectivity model with the estimated wavelet, comparison of the modelled output with the observed seismic trace, and then updating the reflectivity model (inverting) to minimize the difference between the modelled and observed traces. Whether generalized linear inversion, sparse spike, or simulated annealing, all the algorithms work on this basic principle of minimization. Methods based on minimization are commonly referred to as 'deterministic'. The output of a deterministic inversion is a relatively smooth (or blocky) estimate of the impedance. Because of its smoothness, deterministic inversion is generally unsuited to constraining reservoir models used for volumetric calculations, estimation of connectivity, or fluid flow simulation.

Stochastic seismic inversion generates a suite of alternative heterogeneous impedance representations that agree with the 3D seismic volume. Taken together, the suite of possible impedance models or realizations captures the uncertainty, or non-uniqueness, associated with the seismic inversion process. Stochastic seismic inversion is complementary to deterministic inversion: the deterministic seismic inversion is the average of all the possible non-unique stochastic realizations.

Although the principles of stochastic seismic inversion were published over 12 years ago, commercial implementation and application have only started to grow in the last five years or so. For many geophysicists, understanding and being able to make a discerning judgement on the possible benefits of this new technique is difficult. A number of misconceptions concerning stochastic seismic inversion have arisen, particularly related to resolution. A commonplace but incorrect statement is that stochastic techniques can '…allow substantially increased resolution, capturing details well beyond seismic bandwidth'.

It is the purpose of this tutorial to provide a clear theoretical basis for deterministic and stochastic inversion and to assist the geophysicist in making an informed decision concerning the application of stochastic seismic inversion to his or her particular reservoir description problem. In order to understand stochastic seismic inversion we will have to understand some of the limitations of conventional seismic inversion (often referred to as 'deterministic' inversion), provide a grounding in some geostatistical concepts, and also consider the general problem of estimation at unmeasured locations.

This tutorial will only consider seismic inversion in the sense of estimating an impedance model of the subsurface. 'Impedance' will be taken in a very general sense to refer to any rock property estimated from surface seismic data. This could include acoustic or elastic impedance, extended elastic impedance, or any other more elaborate pre-stack inversion scheme to estimate Vp, Vs, and density.
Stochastic inversion may be applied to any of these objectives; the aim of this tutorial is to describe only the general limitations of deterministic inversion and the possible advantages of a stochastic inversion framework, not to consider the specifics of pre- or post-stack inversion objectives.
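As a concrete companion to part 1, here is a minimal, self-contained 1D sketch of the deterministic principle described above: convolve a reflectivity model with a wavelet, compare with the observed trace, and update the model to reduce the misfit. Plain gradient descent stands in for GLI, sparse-spike, or annealing, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
true_r = np.zeros(n)
true_r[[50, 90, 140]] = [0.12, -0.08, 0.10]     # sparse synthetic reflectivity

t = np.arange(-30, 31)
f0 = 0.06                                       # peak frequency, cycles/sample
wavelet = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)  # Ricker

observed = np.convolve(true_r, wavelet, mode="same") + rng.normal(0, 0.005, n)

L = np.abs(np.fft.rfft(wavelet, n)).max() ** 2  # bound on the operator norm
r, lr = np.zeros(n), 1.0 / L                    # start model and stable step
for _ in range(500):
    resid = np.convolve(r, wavelet, mode="same") - observed  # forward + compare
    grad = np.correlate(resid, wavelet, mode="same")         # adjoint of convolution
    r -= lr * grad                                           # update the model

final = np.convolve(r, wavelet, mode="same") - observed
print("final misfit:", float(np.sum(final ** 2)))
# The recovered r is a band-limited, smoothed image of true_r: frequencies
# outside the wavelet band are never updated by the data, which is exactly
# why deterministic outputs are smooth and why stochastic realizations are
# needed to represent the missing detail as uncertainty.
```

The closing comment previews the tutorial's central point: minimization cannot restore frequencies the wavelet never carried, so any apparent detail beyond seismic bandwidth in a stochastic realization is a sample of uncertainty, not added resolution.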