First Break - Volume 24, Issue 6, 2006
Global context of today’s oil business
By D. Yergin

Leading energy analyst Daniel Yergin last month presented to the US House of Representatives Committee on Energy and Commerce a tour de force analysis of the current perplexing world of oil – its price, supply and demand, and security. Not everyone will agree with his conclusions, especially on ‘peak oil’, but we reproduce the text here (in slightly abbreviated form) in the belief that it truly manages to capture the global context of the oil business and as such provides valuable insight into the energy challenges ahead.

We are at a historic juncture. After a quarter century, the great cushion of surplus oil production capacity that was created by the energy turbulence of the 1970s and early 1980s has been largely spent - at least for the time being. It is on that relatively narrow band of ‘spare capacity’ that so much of the drama in world oil markets is playing out. Today, the balance between supply and demand in the world oil market is very tight. Part of the reason is the surge in economic growth in both developed and developing countries - of which the growth of China and, to a lesser extent, India provide the most noteworthy examples. But the demand surge gave way to slower growth in 2005, and the data are still preliminary for 2006. Meanwhile, the focus of the market has shifted from demand to supply. We are currently experiencing a slow-motion supply shock: the aggregate disruption of more than two million barrels per day.

What explains the sharp rise in oil prices over the past eight weeks? The first factor is the real disruption of a significant part of Nigeria’s oil production owing to an insurgency in Nigeria’s Delta region. Workers have been evacuated, and the local insurgents are threatening further attacks. This means the loss of a high-quality light sweet oil particularly well-suited to making gasoline. The second is the ratcheting up of tensions over Iran’s nuclear programme, with fears of a disruption of Iran’s 2.5 million b/d of exports. Some Iranian spokesmen threaten to unleash an ‘oil crisis’, while others seek to separate oil from atoms. But in a market this tight, the risk of escalation is enough to send crude oil prices up. The third factor is at home (United States): the rapid switchover from MTBE to ethanol on the East Coast and in Texas has added pressure to what has been for a number of years the most difficult period in the gasoline market - the spring makeover of gasoline from winter to summer blends. We would expect that the transition will be complete by the time most Americans begin their serious summer driving. But there is little reason to think that the tension over Iran’s nuclear programme will abate, and much uncertainty remains over what will happen in Nigeria. So we must look to fundamentals for price moderation - the build-up of supplies from elsewhere, the relatively high level of crude oil inventories, and the demand response to higher prices.
Over/under acquisition and data processing: the next quantum leap in seismic technology?
David Hill, Leendert Combee, and John Bacon, WesternGeco, introduce a new configuration for towed-streamer seismic data acquisition. They argue that the ‘over/under’ technique is a major advance on conventional techniques, providing previously unattained signal bandwidth, where the low-frequency content gives deeper penetration and therefore improved imaging beneath basalt, salt, and other highly absorptive overburdens.

In a conventional towed-streamer marine acquisition configuration, shallow sources and shallow cables increase the high-frequency content of the seismic data needed for resolution. However, shallow sources and shallow cables attenuate the low frequencies, which are necessary for stratigraphic and structural inversion and for imaging deep objectives. Towing shallow also makes the data more susceptible to environmental noise. In contrast, deep sources and deep cables enhance the low frequencies but attenuate the high frequencies. In addition, the data recorded via a deep tow have a higher signal-to-ambient-noise ratio owing to the more benign towing environment. A conventional towed-streamer survey design therefore attempts to balance these conflicting aspects to arrive at a tow depth for the sources and cables that optimizes the bandwidth and signal-to-noise ratio of the data for a specific target depth or two-way travel time, often at the expense of other shallower or deeper objectives.

An over/under towed-streamer configuration is a method of acquiring seismic data in which cables are typically towed in pairs at two different depths, with one cable vertically above the other. The depths of these paired cables are typically significantly greater than would be used for a conventional towed-streamer configuration. In conjunction with these paired cables, it is possible to acquire data with paired sources at two different source depths; again, these depths are typically significantly greater than in a conventional configuration. The seismic data recorded by the over/under towed-streamer configuration are combined in data processing into a single dataset that has the high-frequency characteristics of conventional data recorded at a shallow towing depth and the low-frequency characteristics of conventional data recorded at a deeper towing depth. This combination process is commonly referred to in the geophysical literature as deghosting.

The current benefits of over/under data compared with conventional data can be summarized as:
- A significantly broader signal bandwidth, where the low-frequency content gives deeper penetration and therefore improved imaging beneath basalt, salt, and other highly absorptive overburdens. Moreover, the bandwidth extension to lower frequencies makes seismic inversion less dependent upon model-based methods
- A simpler signal wavelet, with the bandwidth extension to higher frequencies giving enhanced resolving power and allowing a more detailed stratigraphic interpretation
- A higher signal-to-ambient-noise ratio as a consequence of the deeper towed-cable pairs
- An extended weather window enabled by the deeper towed-cable pairs.
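The deghosting combination described in this abstract can be illustrated with a toy frequency-domain calculation. Below is a minimal sketch, assuming vertical incidence, a flat sea surface with reflection coefficient -1, and both records redatumed to a common level so that each obeys R_k(f) = U(f)·G_k(f). The damped least-squares weighting, function names, and tow depths are illustrative assumptions, not WesternGeco's production algorithm.

```python
import numpy as np

def ghost_response(freq_hz, depth_m, velocity_ms=1500.0):
    # Receiver ghost for a cable at depth_m below a flat sea surface
    # (reflection coefficient -1), vertical incidence: spectral notches
    # occur at multiples of f = velocity / (2 * depth).
    return 1.0 - np.exp(-1j * 4.0 * np.pi * freq_hz * depth_m / velocity_ms)

def combine_over_under(spec_shallow, spec_deep, freq_hz,
                       depth_shallow_m, depth_deep_m, damping=1e-3):
    # Damped least-squares estimate of the ghost-free spectrum from the
    # two cable records: notches of one depth are filled by the other.
    g1 = ghost_response(freq_hz, depth_shallow_m)
    g2 = ghost_response(freq_hz, depth_deep_m)
    num = np.conj(g1) * spec_shallow + np.conj(g2) * spec_deep
    den = np.abs(g1) ** 2 + np.abs(g2) ** 2 + damping
    return num / den

# Illustrative use on synthetic traces (depths and values are assumptions)
nt, dt = 2048, 0.002
freq = np.fft.rfftfreq(nt, dt)
trace_over, trace_under = np.random.randn(nt), np.random.randn(nt)
combined = np.fft.irfft(
    combine_over_under(np.fft.rfft(trace_over), np.fft.rfft(trace_under),
                       freq, depth_shallow_m=18.0, depth_deep_m=25.0), n=nt)
```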
Innovative hybrid algorithm designed to enhance seismic characterization
Pascal Klein and Andy Peloso of Paradigm present a method for multi-disciplinary interpretation of rock and fluid properties by classifying data into seismic facies volumes, used to describe and characterize seismic heterogeneities and properties.

Seismic facies analysis has been performed for as long as seismic data have been used in E&P. The traditional method of seismic interpretation involves analyzing the seismic reflection patterns, including configurations (e.g. sigmoidal, hummocky) and their associated attributes (e.g. amplitude, frequency, continuity). These patterns and/or configurations were mapped to generate a seismic facies map. This technique, however, is painstakingly slow, very dependent on the interpreter’s skills, and limited to 2D. With the introduction of computer-aided seismic facies techniques, the process has become automated and volume-based. These techniques classify all samples from a set of seismic attribute volumes over a user-specified zone to produce a volume of classified samples. The multi-attribute seismic classification methodology performs a clustering of the samples of a set of input attributes. These techniques continue to grow and play a vital role in interpretation workflows within the industry.

In recent years, there has been an explosion in the number of seismic attributes available for use in E&P. Use of these attributes helps analyze the subsurface and can reveal important features, from regional geology to detailed reservoir properties. To make effective use of this multitude of seismic attributes, Paradigm has developed classification techniques to support the quantitative assessment of exploration targets and to improve reservoir characterization within field development projects (Peloso et al., 2005). The objective of the facies classification process is to describe characteristics within the seismic data, relate these characteristics to the interpretation of rock and fluid properties, and help identify quality hydrocarbon accumulations.
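As a schematic illustration of the volume-based, multi-attribute classification idea (not Paradigm's proprietary hybrid algorithm), the sketch below clusters co-located samples from several attribute volumes with ordinary k-means; the attribute names, number of facies, and standardization step are assumptions for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_facies(attribute_volumes, n_facies=6, random_state=0):
    # Cluster co-located samples from several seismic attribute volumes
    # (all the same shape) into facies classes; returns a label volume.
    shape = attribute_volumes[0].shape
    # One row per sample, one column per attribute
    X = np.column_stack([v.ravel() for v in attribute_volumes])
    # Standardize so no single attribute dominates the distance metric
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    labels = KMeans(n_clusters=n_facies, n_init=10,
                    random_state=random_state).fit_predict(X)
    return labels.reshape(shape)

# Illustrative call with synthetic volumes (names and sizes are assumptions)
amp = np.random.randn(50, 50, 100)        # stand-in amplitude volume
inst_freq = np.random.randn(50, 50, 100)  # stand-in instantaneous frequency
facies_volume = classify_facies([amp, inst_freq], n_facies=5)
```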
Subsurface correlation of the Triassic of the UK southern Central Graben: new look at an old problem
Authors: M. de Jong, D. Smith, S.D. Nio and N. Hardy

Mat De Jong, David Smith, S. Djin Nio, and Nick Hardy discuss climate change as a primary driver of vertical lithofacies change, allowing a time-significant stratigraphic classification to be derived from a standard facies-sensitive wireline log such as the GR.

Knowledge of Triassic stratigraphy in the Central Graben is impeded by the lack of regional seismic markers, poor recovery of microfossils, and regional structural complexity. Of the various sources of data available (seismic, logs, cuttings, core/sidewall core), the most reliable and continuous must surely be the wireline log. With the exception of occasional short intervals of bad data, logs are more or less universally available for all North Sea wells, providing an unparalleled source of objective and closely spaced samples of various physical quantities. An ideal stratigraphic method would extract time-significant information from the logs, allowing correlation at a resolution approaching that of the logs themselves. We here describe the experimental application of just such an approach to 16 wells in the southern part of the UK Central Graben, quads 22 and 30 (Figure 1). Our method relies on (a) a new method of extracting trends in the spectral (wavelength, frequency, and phase) content of a wireline log, and (b) the interpretation of this information in terms of orbital forcing of climate change in the 10^4 to 10^5 year waveband. We first outline the regional background to the need for a unifying scheme for Triassic correlation that is readily applicable to all significant Triassic well penetrations in the area. We next describe the key principles underlying the method. Finally, we describe the stratigraphic scheme that emerges from this initial study and discuss its implications.
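The general idea of tracking trends in the spectral content of a log can be illustrated, very crudely, with a sliding-window Fourier transform; the authors' actual method is not described in this abstract, so the window length, overlap, and sample interval below are assumptions made for the example.

```python
import numpy as np

def sliding_spectrum(gr_log, sample_interval_m=0.1524, window_m=30.0):
    # Short-window amplitude spectra down a GR log: a crude way to track
    # how the dominant cyclicity (wavelength) changes with depth.
    n_win = int(window_m / sample_interval_m)
    taper = np.hanning(n_win)
    wavenumbers = np.fft.rfftfreq(n_win, d=sample_interval_m)  # cycles/m
    spectra, centres = [], []
    for start in range(0, len(gr_log) - n_win, n_win // 2):  # 50% overlap
        seg = gr_log[start:start + n_win]
        seg = (seg - seg.mean()) * taper
        spectra.append(np.abs(np.fft.rfft(seg)))
        centres.append((start + n_win // 2) * sample_interval_m)
    return np.array(centres), wavenumbers, np.array(spectra)

# Dominant wavelength per window (illustrative, synthetic log)
gr = np.random.randn(8000)
depth, k, S = sliding_spectrum(gr)
dominant_wavelength_m = 1.0 / k[np.argmax(S[:, 1:], axis=1) + 1]
```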
Fresh insight into time migration tomography
Authors: R. Leggott and R. Morgan

Richard Leggott and Richard Morgan of Veritas DGC provide this discussion of a tomographic algorithm which offers the potential for an improved velocity model for seismic data analysis.

In hydrocarbon exploration and development, imaging of seismic data is used for accurate geological interpretation, estimation of rock properties, pore pressure prediction, and many other purposes. All such analyses depend on a good seismic image, and this in turn requires an accurate velocity model. As more information is extracted from seismic data in areas of increasing structural complexity, the role of a good velocity model has become even more crucial. In this article an idealized tomography algorithm is described, which simultaneously images the seismic data and updates the velocity model using the same imaging algorithm. By using data from every sample of the seismic image, the tomographically updated velocity will exhibit high-resolution features that other velocity analysis tools cannot resolve. Results from this tomography are shown using time migration imaging in both isotropic and anisotropic media.

Conventional velocity analysis
The conventional method for generating a velocity model for imaging is vertical updating (Deregowski, 1990). Here, an initial velocity model is used to image the acquired seismic data to form common image gathers (CIGs). An updated RMS velocity field is picked using normal moveout (NMO) to flatten selected seismic events on the CIGs. This velocity is often smoothed, converted to an interval velocity, re-smoothed, and clipped. A new seismic image is then formed using the updated velocity model. If required, the procedure is iterated until the primary seismic events are flat on the CIGs.

Vertical updating is only suitable for simple geology. The method assumes that applying a residual moveout with NMO is a reasonable approximation to re-imaging the seismic data with an updated velocity model; this can only be a valid assumption when the geology has little structure. Additionally, care must be taken when picking RMS velocities at a high temporal resolution to avoid an unstable interval velocity. This means that the velocity for thin layers cannot be picked accurately, and that a velocity contrast between distinct lithological layers can only be identified when that boundary coincides with a significant seismic reflection.

Many different methods exist for imaging seismic data (Kirchhoff time migration, Kirchhoff depth migration, wave-equation migration, etc.). Different imaging techniques will use the velocity model in different ways to form a seismic image. Hence, a velocity model suitable for one imaging algorithm cannot be assumed to be suitable for another. For example, an appropriate velocity model for a Kirchhoff time migration will not generally be adequate for a Kirchhoff depth migration. There is a coupling between the velocity model and the imaging algorithm used. In other words, seismic imaging and velocity analysis are two aspects of the same problem, and this should be reflected in how the velocity model is generated.
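For reference, the "converted to an interval velocity" step mentioned above is the classical Dix relation, v_int,n^2 = (v_rms,n^2 t_n - v_rms,n-1^2 t_(n-1)) / (t_n - t_(n-1)). The short sketch below applies it directly; the clipping of unphysical negative values and the illustrative picks are assumptions of the example, not part of the authors' tomography.

```python
import numpy as np

def dix_interval_velocity(v_rms, t0):
    # Dix conversion: interval velocities from picked RMS velocities.
    # v_rms, t0: RMS velocity (m/s) and two-way time (s) per picked event.
    # Dividing by small time differences is numerically unstable, which is
    # why picked RMS fields are usually smoothed before conversion.
    v2t = v_rms ** 2 * t0
    v_int_sq = np.diff(v2t) / np.diff(t0)
    return np.sqrt(np.maximum(v_int_sq, 0.0))  # clip unphysical values

# Illustrative picks (values are assumptions)
t0 = np.array([0.5, 1.0, 1.5, 2.0])             # two-way times, s
v_rms = np.array([1800.0, 2000.0, 2150.0, 2300.0])  # m/s
print(dix_interval_velocity(v_rms, t0))
```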
Application of geophysics to North American prehistoric sites
Authors: W.J. Johnson and D.W. Johnson

William J. Johnson and Donald W. Johnson, members of the US-based Archaeology and Geophysics Consortium, explain how North America has some catching up to do with Europe in applying geophysical methods to archaeological investigations.

Archaeological geophysics, sometimes called remote sensing by archaeologists, is the measurement of geophysical properties at the ground surface to create images of the subsurface that can be interpreted by a geophysicist and an archaeologist working together to identify subsurface features of cultural origin. As such, geophysics is a tool to infer the presence of cultural features before excavations actually take place. Geophysical measurements also provide a continuity of subsurface information that complements the detailed point information from shovel tests or unit excavations. The geophysics can then provide context for the excavations made by the archaeologists.

Geophysical measurements are not routinely obtained at North American prehistoric sites. This situation contrasts with European practice, where geophysical studies are a routine part of archaeological investigations. A contributing factor is the subtlety of North American targets: there is a perception among many archaeologists that, because prehistoric peoples in North America did not routinely leave massive stone foundations, there is little to image with geophysics. Another factor is that professional geophysicists have not been involved with the majority of the available case histories. Even the case histories published in the North American Database of Archaeological Geophysics (NADAG) at the University of Arkansas (http://www.cast.uark.edu/nadag/) are not of uniform quality. Of the approximately 80 results of geophysical surveys presented for prehistoric sites by NADAG, more than three quarters do not demonstrate representative results and/or appropriate procedures. Nevertheless, within this group there are also some excellent case histories. Especially worthy of note are recent studies conducted at the Hollywood Mounds site in Tunica County, Mississippi, by the University of Mississippi (Johnson et al., 2000); the Mit-tutta-hang-kush Village (Fort Clark State Historic Site, 32ME2), North Dakota, presented by Kvamme (2001); and the Double Ditch Indian Village State Park (32BL8) presented by Kvamme (2002). Many results of the geophysical surveying appear promising but have not been ground-truthed, such as the work at the Greenbriar site in Arkansas (Johnson et al., 1999). The work by the University of Arkansas at the Toltec Mounds (Lockhart, 2001) shows interesting results that could also benefit from additional ground truthing.
Using dual-azimuth data to image below salt domes
Authors: F.J. Dewey, M. van der Meulen and P.J. Whitfield

In complex geological areas, subsurface illumination is largely determined by the acquisition geometry. Multi-azimuth and wide-azimuth surveys are increasingly being used to maximize subsurface coverage, but these can present new challenges; for example, how to combine the images from different azimuths into a single migrated image for use by the interpreter. Traditionally, the common practice has been to ignore potential azimuthal velocity variations and build a single vertical velocity model, in some cases incorporating VTI anisotropy. Data from different azimuths are pre-stack migrated separately, but with a single velocity model usually derived from one azimuth, and then combined into a single image after application of residual moveout corrections. We present a case history of building a single velocity model for pre-stack depth migration of dual-azimuth data in the southern North Sea. The dual-azimuth data gave us enhanced illumination of the target zone, but, somewhat to our surprise, it also showed strong azimuthal velocity variations in the overburden. We devised a methodology and built a single velocity model that incorporated these azimuthal velocity variations so that all events are migrated to their correct position. When the data sets are combined, primary events are reinforced while noise cancels out, giving one optimal data set.
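One standard way to express the kind of azimuthal velocity variation reported above is an NMO ellipse, 1/V^2(phi) = cos^2(phi - phi_fast)/V_fast^2 + sin^2(phi - phi_fast)/V_slow^2. The sketch below simply evaluates that textbook parameterization for two acquisition azimuths; the values are assumptions and this is not the authors' model-building methodology.

```python
import numpy as np

def nmo_ellipse_velocity(azimuth_deg, v_fast, v_slow, fast_azimuth_deg):
    # Azimuth-dependent NMO velocity for an elliptical variation:
    # 1/V^2(phi) = cos^2(phi - phi_fast)/V_fast^2 + sin^2(phi - phi_fast)/V_slow^2
    phi = np.radians(azimuth_deg - fast_azimuth_deg)
    inv_v2 = np.cos(phi) ** 2 / v_fast ** 2 + np.sin(phi) ** 2 / v_slow ** 2
    return 1.0 / np.sqrt(inv_v2)

# Velocities seen by two acquisition azimuths 45 degrees apart (assumed values)
for az in (0.0, 45.0):
    print(az, nmo_ellipse_velocity(az, v_fast=2100.0, v_slow=2000.0,
                                   fast_azimuth_deg=20.0))
```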
Hild structure multi-azimuth seismic experiment
Authors: A. Riou, K. Kravik, P.A. Sexton, L.L. Lemaistre, V.T. Aubin and F. Bertini

Illumination of complex structures has always been a matter of discussion. While in the ‘old’ days it was preferable to shoot in the dip direction of the main structural trend, recent advances in processing have led us towards shooting in the most ‘economical’ direction. The recent development of OBC (ocean-bottom cable) acquisition has provided us with high-fold multi-azimuth data and has demonstrated dramatic improvements in image quality. As early as 1996, a test was carried out by Elf in Gabon, acquiring four surveys at different azimuths across a salt body (Houllevigue, Delesalle, and De Bazelaire, 1999). The result of this experiment showed the incomplete but complementary information extracted from the various acquisition directions. PGS carried out the same kind of experiment across the Norwegian Varg field, where they shot two surveys at 60° from the old existing survey (Hegna and Gaus, 2003). The published results showed improved imaging at the reservoir level.
Characterizing fracture networks: an effective approach using seismic anisotropy attributes
Authors: Y. Freudenreich, E. Angerer, A. Amato del Monte and C. Reiser

The development of naturally fractured reservoirs is significantly influenced by the characteristics of the fracture network, since these control the volume and flow direction of fluid through the reservoir rocks. The presence of fractures can be very beneficial, since knowledge of their characteristics allows the design of well paths that intersect a larger number of permeable fractures, thus increasing production. It also enables optimized placement of injectors for improved sweep efficiency and better control of reservoir pressure. However, fractures can also damage the economic potential of a field. They can create preferential flow paths, which may lead to premature water breakthrough, or, conversely, act as barriers that impede production. A good understanding of the fracture network in terms of intensity, orientation, and spatial distribution is therefore essential for improved reservoir development. Some fracture information is available from core observations and image log interpretation, but these data are only valid in the vicinity of the borehole and, when extrapolated beyond this, can lead to erroneous prediction of the overall reservoir dynamics. Even though geostatistical methods can help to reduce the uncertainty associated with spatial predictions by taking into account the geological heterogeneities, a true 3D attribute is necessary in order to accurately characterize a fractured reservoir.

Following the observations of Crampin (1985a, 1985b), Crampin et al. (1986), Thomsen (1988), and others, it is now widely recognized that fracture systems are often aligned in a preferential direction. This induces a directional (or azimuthal) dependence of seismic properties such as traveltime, velocity, and reflection amplitude. This directional dependence, also referred to as anisotropy, can cause seismic shear waves to split in preferential directions related to the alignment of the fractures: the fast shear waves are polarized parallel to the fractures and the slow shear waves perpendicular to them. It can also affect the amplitude of compressional waves depending on the azimuthal direction of propagation. An analysis of the anisotropy effects observed in 3D seismic data can therefore provide insight into the fracture characteristics (Thomsen, 1995; Lynn et al., 1995, 1996). Methods based on shear-wave splitting analysis are well established (Crampin, 2000), but unfortunately shear-wave data are relatively expensive to acquire and process. As a result, in the last few years there has been a growing interest in P-wave azimuthal amplitude variation. Rüger and Tsvankin (1997) demonstrated that reliable estimates of the anisotropy parameters could be obtained from the P-wave amplitude. Later, the approximations of the P-wave reflection coefficient presented by Rüger (1998) were extended into a linearized form by Jenner (2002) and used by many authors for inferring fracture properties. Angerer et al. (2003a) then went on to propose an integrated workflow for seismic fracture characterization from wide-azimuth, large-offset P-wave and PS data. This approach (Figure 1) includes a state-of-the-art geostatistical decomposition technique combined with azimuthal anisotropy analysis, and yields quantitative estimates of fracture intensity and direction.
It can be applied either to horizon-based attributes, such as interval velocity maps and RMS amplitude maps, or to 3D attribute volumes such as seismic amplitude and AVO attributes. Here we present the results of a recent fracture characterization study based on P-wave azimuthal anisotropy from a wide-azimuth 3D land dataset. The goal of the study was to characterize the fracture distribution in a carbonate reservoir using several attributes, such as interval velocities, RMS amplitude, and seismic amplitude. In addition, to confirm the validity of the estimated anisotropy attributes, the study also included an analysis of the fractures using FMI/FMS log data from six wells. Several comparisons of anisotropic and conventional processing illustrate the benefits of our approach at different stages of the processing. Finally, we review the accuracy and efficiency of each seismic attribute in terms of characterizing fracture orientation, magnitude, and distribution.
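The linearized P-wave form referred to above (Rüger, 1998; Jenner, 2002) is commonly written R(theta, phi) ≈ A + [B_iso + B_ani cos^2(phi - phi_sym)] sin^2(theta). Below is a minimal least-squares sketch of fitting that form to amplitudes observed at several azimuths; it is a generic illustration with assumed variable names and values, not the Angerer et al. (2003a) workflow.

```python
import numpy as np

def fit_azimuthal_avo(amplitudes, incidence_deg, azimuth_deg):
    # Least-squares fit of R(theta, phi) ~ A + [B_iso + B_ani*cos^2(phi - phi_sym)]*sin^2(theta),
    # rewritten with double-angle terms so the problem becomes linear.
    theta = np.radians(incidence_deg)
    phi = np.radians(azimuth_deg)
    s2 = np.sin(theta) ** 2
    # Basis: [1, sin^2, sin^2*cos(2*phi), sin^2*sin(2*phi)]
    G = np.column_stack([np.ones_like(s2), s2,
                         s2 * np.cos(2 * phi), s2 * np.sin(2 * phi)])
    m, *_ = np.linalg.lstsq(G, amplitudes, rcond=None)
    a, b_total, c1, c2 = m
    b_ani = 2.0 * np.hypot(c1, c2)                   # magnitude only; the
    phi_sym = 0.5 * np.degrees(np.arctan2(c2, c1))   # 90-degree ambiguity is inherent
    b_iso = b_total - 0.5 * b_ani
    return a, b_iso, b_ani, phi_sym

# Example with assumed picked amplitudes at three azimuths and two angles
amps = np.array([0.11, 0.13, 0.10, 0.12, 0.09, 0.12])
inc = np.array([20.0, 35.0, 20.0, 35.0, 20.0, 35.0])
azi = np.array([0.0, 0.0, 60.0, 60.0, 120.0, 120.0])
print(fit_azimuthal_avo(amps, inc, azi))
```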
Experience with CSEM offshore southeast Brazil
During 2004 and 2005, Shell Brasil (SBEP) purchased, on a spec basis, controlled-source electromagnetic (CSEM or EM) datasets in the Campos, Santos, and Espirito Santo Basins of southeast Brazil (Figure 1). PGS acquired the 2004 dataset (~1600 km), while Electromagnetic Geoservices (emgs) acquired the 2005 dataset (~1200 km). Shell evaluated the CSEM data over a number of field analogues and prospects, including Shell’s 2002 O-North discovery in the Campos Basin. In the process, we obtained calibration of the technology and a successful pre-drill EM prediction prior to Shell’s Nautilus discovery. In interpreting these large surveys, we also encountered situations where CSEM technology was ineffective. With this experience, we now have a more realistic view of the technique’s strengths and limitations. As a result, we can often use CSEM in an integrated approach to influence prospect portfolio decisions. We are also evaluating new technology to increase the resolving power of EM.