First Break - Volume 21, Issue 12, 2003
Marine acquisition: Moving beyond the signal-to-noise ratio?
Andrew Long, PGS Technology (Perth), offers a challenge to conventional thinking on signal-to-noise ratio in the context of 3D multi-streamer acquisition surveys, arguing for more attention to all the factors contributing to ‘noise’ in seismic images.

Exactly what comprises ‘signal’ and ‘noise’ on seismic data is ill-defined, but noise can be said to be the unwanted component of the target frequency spectra that is not directly related to, or correlated with, the primary reflection energy. If signal and noise could be separated into two distinct amplitude spectra, the maxima of those spectra might be closely aligned (less easily separable) or more distinct (more easily separable). Irrespective of the comparative spectra, the signal-to-noise (S/N) ratio is typically defined simply as the logarithmic ratio of the maximum amplitudes of the signal and noise spectra. This is obviously too vague to be usefully descriptive of data quality; nevertheless, the S/N ratio is a universally used term in seismic data analysis. Quite commonly it serves as a qualitative, colloquial description of general data coherency and resolution.

Improvements in S/N quality are generally attributed to the fold of stack: by the well-known square-root relationship, increasing the stack fold suppresses random (incoherent) noise. However, many other factors contribute to the ‘noise’ contaminating seismic images, as discussed below. Historical efforts have been made to quantify the nature of the S/N ratio and its relationship to data quality. Junger (1964) observed that ‘For a signal-to-noise ratio greater than two, the signal predominates visually, and only a slight improvement in quality can be obtained with additional improvements in the signal-to-noise ratio’.
Hence, once the random noise component is suppressed below a certain threshold, factors other than mere fold are clearly contributing to the quality of the seismic image. It is poorly established how more complicated acquisition parameters, such as multi-streamer spread dimensions and shooting templates, influence the ‘S/N ratio’ of seismic data, particularly after the application of multichannel pre-stack processing algorithms, notably pre-stack migration. In the sections below, I describe how ‘noise’ is manifested both during acquisition and processing, in the context of 3D marine multi-streamer acquisition parameters, and demonstrate how misleading and inappropriate it is to consider data quality only in terms of fold and simple S/N ratio measurements.
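The square-root fold relationship can be illustrated with a short, self-contained sketch (the signal frequency and noise level used here are hypothetical): stacking N traces containing the same signal but independent random noise improves the RMS S/N ratio by roughly √N.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 30 Hz "reflection" sampled over one second.
n_samples = 2000
signal = np.sin(2 * np.pi * 30 * np.linspace(0.0, 1.0, n_samples))

def stacked_snr(fold, noise_rms=2.0):
    """RMS signal-to-noise ratio after stacking `fold` noisy copies."""
    stack = np.zeros(n_samples)
    for _ in range(fold):
        stack += signal + rng.normal(0.0, noise_rms, n_samples)
    stack /= fold
    noise = stack - signal          # residual random noise after stacking
    return signal.std() / noise.std()

snr_1 = stacked_snr(1)
snr_64 = stacked_snr(64)
# For purely random noise, fold 64 should give roughly sqrt(64) = 8x
# the single-trace S/N; coherent noise would NOT be suppressed this way.
print(f"fold 1:  S/N ~ {snr_1:.2f}")
print(f"fold 64: S/N ~ {snr_64:.2f}")
```

Note that this improvement applies only to the random component; the coherent noise sources discussed in the article are untouched by fold.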
Improved imaging of 3D marine seismic data from offshore Costa Rica with CRS processing
G. Gierse, J. Pruessmann, E. Laggiard, C. Boennemann and H. Meyer show how the Common Reflection Surface (CRS) imaging technique developed by German research and commercial organizations can be successfully applied to a 3D dataset, in this case from a seismic survey off Costa Rica.

The macro-model-independent Common Reflection Surface (CRS) imaging technique has proved to produce superior images in various 2D seismic case studies. An application to a 3D marine dataset demonstrates similar capabilities of the CRS technique for 3D data: the signal-to-noise ratio is strongly increased and dipping features are better resolved. The marine dataset comes from the active continental margin offshore Costa Rica, and the CRS processing aims at enhancing the image of the slope sediments and deeper crustal structures.

The resolution of complex subsurface structures in 2D and 3D still represents a major challenge to seismic exploration. Continuous efforts have been made throughout the oil and gas industry to improve the imaging of complex structures, with the main focus on prestack depth imaging. A seismic wavefront travelling through the complex subsurface is likely to deviate from a spherical shape after passing all sorts of inhomogeneities. Prestack depth migration has the advantage of not assuming a spherical wavefront, as conventional techniques do, since it calculates the actual deformations of the wavefront from a more or less coarse model of the subsurface. The derivation of the model, however, is a crucial step at which prestack depth migration might fail. A very low signal-to-noise ratio in the seismic data often prevents the definition of a reliable basic model and the identification of the main horizons in the prestack data. Likewise, model building can fail in areas of complex tectonics, such as overthrust areas. Thus the strength of model-based imaging cannot be exploited.
For such cases, recent advances in time-domain imaging with the CRS technique offer an alternative. CRS processing strongly increases the signal-to-noise ratio and produces a significant improvement in imaging results; poststack depth migration then allows the improved resolution to be transferred from the time domain to depth.

In general, time processing has seen fewer efforts to improve imaging techniques than depth processing. In many exploration projects, the conventional NMO/DMO processing flow for producing the zero-offset stack still dominates seismic processing in the time domain, and this standard technique has prevailed nearly unchanged throughout the seismic industry during the last two decades. NMO/DMO processing uses a type of macro model given by the stacking velocity field, which is derived from Common Midpoint (CMP) gathers. The velocity field describes the CMP reflection time curves, which are assumed to be hyperbolic. This assumption corresponds to undisturbed wavefronts from reflection points in a subsurface with plane horizontal layering. In the case of dipping layers, the one-dimensional subsurface model in the NMO approach leads to reflection-point smearing, and requires a partial migration via the Dip Moveout (DMO) correction.

Time-domain imaging approaches that avoid the simplified subsurface model of the established NMO/DMO technique have frequently been proposed. At the end of the 1980s, de Bazelaire (1986, 1988) and Gelchinsky (1988, 1989) proposed new strategies for zero-offset imaging. In contrast to the NMO/DMO technique, as well as prestack depth migration, these strategies do not require a macro model, but estimate the imaging parameters directly from the prestack data.
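The hyperbolic CMP traveltime assumption behind NMO can be written t(x) = √(t0² + x²/v²), where t0 is the zero-offset two-way time and v the stacking velocity. A minimal sketch of the moveout and its correction (the offset, time and velocity values below are illustrative only):

```python
import numpy as np

def nmo_traveltime(offset_m, t0_s, v_stack_ms):
    """Two-way reflection time at a given source-receiver offset,
    assuming plane horizontal layering (hyperbolic moveout)."""
    return np.sqrt(t0_s**2 + (offset_m / v_stack_ms) ** 2)

def nmo_correction(offset_m, t0_s, v_stack_ms):
    """Moveout to be removed to flatten the event before stacking."""
    return nmo_traveltime(offset_m, t0_s, v_stack_ms) - t0_s

offsets = np.array([0.0, 500.0, 1000.0, 2000.0])   # metres
t0 = 1.0                                            # zero-offset time, s
v = 2000.0                                          # stacking velocity, m/s
print(nmo_correction(offsets, t0, v))               # grows with offset
```

It is exactly this one-dimensional assumption that breaks down for dipping layers, which is why DMO (or a macro-model-free method such as CRS) is needed.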
A deep-towed multichannel seismic streamer for very high-resolution surveys in full ocean depth
Monika Breitzke and Jörg Bialas, research scientists at the GEOMAR Research Centre for Marine Geosciences, Kiel, Germany, describe the latest testing of a deep-towed multichannel streamer combined with a sidescan sonar system to achieve improved resolution in deep water, with gas hydrates the main interest.
Using geology to develop a better depth product in Central Green Canyon Roho
Lynn Anderson (CGG) discusses some of the techniques available to better understand the Roho basin in the deep-water Green Canyon area of the Gulf of Mexico.
An evaluation of peak and bubble tuning in sub-basalt seismology: modelling and results from OBS data
Authors Z.C. Lunnon, P.A.F. Christie and R.S. White

As part of the iSIMM project (White et al. 2002), a 6360 in³ airgun source array was used to acquire, in two passes, a deep seismic profile into a 480 km array of ocean bottom seismometers (OBS) east of the Faroe Islands. The first pass used peak tuning and the second pass used bubble tuning, with other source parameters held constant. The objective was to deliver low-frequency energy for deep, long-offset, sub-basalt penetration. The results suggest that towing large guns deep is more important than the tuning method. However, for the gun configuration used, the bubble-tuned data are more compact, less reverberant and easier to pick.
Geo-electrical exploration for groundwater in a hard rock region of Hyderabad, India
By K.P. Singh

Geo-electrical methods are employed very commonly in geohydrological investigations, as they are more economical and effective than other geophysical techniques. The direct current resistivity (DCR) method is an effective tool in groundwater exploration, geothermal studies, civil engineering applications, and in monitoring water pollution and contamination. To achieve these objectives, conventional Schlumberger and Wenner soundings and electrical resistivity tomography (ERT) are currently used worldwide.

As with other geophysical methods, interpretation of DCR data provides a non-unique solution for the real geological model. In recent years, two types of technique have been developed to interpret DCR sounding data. The first is the ‘direct method’, which requires no information on the number of layers, and in which the apparent resistivities are assumed to be true resistivities (Koefoed 1979; Zhody 1989). In the second, the ‘indirect method’, an initial-guess model is required to initialize the inversion of the observed data (Jupp & Vozoff 1975). Non-uniqueness of the interpretation makes it difficult to select the model that is closest to the real geological model, and interpretation of DCR sounding data becomes more ambiguous with increasing depth. It is very difficult to interpret layer resistivity and thickness with sufficient accuracy, and both synthetic and field examples show that the resolution of thin layers (with thicknesses less than one-tenth of their depth) is particularly difficult (Singh 2003). The properties of a thin conducting layer can be determined in terms of its longitudinal conductance, and those of a resistive layer by its transverse resistance (Yungul 1996). In the present study, layer models have been obtained using a stable iterative algorithm proposed by Jupp and Vozoff (1975). The effectiveness of the resistivity method in the determination of aquifer parameters was demonstrated by Rijo et al. (1977).
Several factors that create problems in the detection of an aquifer/conducting layer, such as the presence of a conducting surface layer, effects of anisotropy, and the screening effect of overlying layers, have been discussed in detail by Singh (1998a). It is well established that ambiguity in the interpretation of resistivity data increases very rapidly with depth (Singh 1998b). Singh (2003) suggested a new approach to the detection of hidden aquifers in hard-rock regions based on resistivity data transforms.

Here the resistivity method is applied to determine the groundwater potential of an area and to study its relationship with existing wells. 3D maps of the depth and thickness of the aquifer in the Osmania University Campus (OUC) were prepared in order to study the variation in the level of the water table. The role of the thickness of the aquifer in water resources management, and the natural/artificial recharge of the aquifer, has been studied in detail. The main objective of the present study was to delineate the subsurface distribution of groundwater in the OUC. In addition, the relationships between the surface/subsurface layer parameters, the yield of existing boreholes, and the recharge of the water table were examined. The study has also proved very useful in identifying new sites that are suitable for groundwater exploitation.

Location and geology of the area

The study area is located between latitudes 17° 24' 30" and 17° 25' 30" north, and longitudes 78° 27' and 78° 29' east, an area of 3.1 × 2.9 km covering the entire OUC. It is a typical hard-rock region where water is found in small pockets within fractures. The whole area is covered by granitic soil, and granitic rocks are also exposed at several locations. Known as the Hyderabad granites, the rocks exposed in this area of slightly elevated topography are of Archæan age.
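The thin-layer equivalences cited from Yungul (1996) reduce to two simple products: the longitudinal conductance S = h/ρ of a thin conductor and the transverse resistance T = hρ of a thin resistor, since the individual thickness h and resistivity ρ are not separately resolved. A minimal sketch (the aquifer thickness and resistivity values below are hypothetical, not taken from the OUC survey):

```python
def longitudinal_conductance(thickness_m, resistivity_ohm_m):
    """S = h / rho (siemens): the resolvable property of a thin conductor."""
    return thickness_m / resistivity_ohm_m

def transverse_resistance(thickness_m, resistivity_ohm_m):
    """T = h * rho (ohm-m^2): the resolvable property of a thin resistor."""
    return thickness_m * resistivity_ohm_m

# Hypothetical 5 m fractured-zone aquifer of 50 ohm-m within resistive granite:
S = longitudinal_conductance(5.0, 50.0)
T = transverse_resistance(5.0, 50.0)
print(f"S = {S} S, T = {T} ohm-m^2")
# A 10 m layer of 100 ohm-m has the same S (0.1 S) -- the equivalence
# that makes thin-layer interpretation non-unique.
```

This equivalence is one source of the interpretational ambiguity the abstract describes: any h, ρ pair with the same product (or quotient) fits the sounding curve equally well.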
Case study in NW Greece of passive seismic tomography: a new tool for hydrocarbon exploration
Authors S. Kapotas, G.A. Tselentis and N. Martakis

We have learned more about the structure of the Earth and its crust from earthquake seismology, in all its facets, than from any other single geophysical or geological method. Yet the vast amount of information provided by the Earth's natural seismicity has largely been restricted to classical seismological problems or to large-scale investigations of the Earth's interior, with relatively little attention paid to small-scale hydrocarbon exploration. Listening to the Earth passively, and using the collected seismological information wisely, can be successfully applied to hydrocarbon exploration, as the present investigation shows.

Controlled-source seismology uses conventional surface sources such as vibroseis, explosives or airguns to generate seismic waves whose travel times and amplitude distribution through the earth are used to determine structural images and bulk physical properties of the subsurface. In contrast, passive seismic tomography uses micro-earthquakes as the energy source to probe Earth structure. It is a fairly simple concept, based on the fundamental principle that all small movements and ‘roars’ in the Earth are themselves seismic sources. Both compressional and shear waves are emitted from an earthquake source, and can be used for independent estimates of the compressional (Vp) and shear (Vs) velocities of the various geological formations.

The definition of velocity structure in a complex tectonic environment is a challenging task for conventional reflection velocity analysis based on NMO methods. In recent years, there has been an increase in exploration activity in geologically complex areas, such as fold and thrust belts, and even in seeking good seismic images beneath high-impedance layers such as basalt.
Exploration in these areas is challenging as well as expensive, and is driving the oil exploration industry towards state-of-the-art techniques. A key aspect of tackling the ‘complex geology’ problem has been the design and implementation of new types of seismic acquisition and processing strategies, and the passive seismic tomography method falls into this category. Conventional land seismic is a labour-intensive business with expensive recording crews and set-ups of hundreds of miles of cable out on the ground, while geophones have to be deployed and retrieved manually. Surface access is needed for vibrators and shot-hole rigs, while permitting and other environmental issues mean high costs.

The rationale for applying passive tomography as a complementary imaging tool is manifold. It is a cost-effective way to image a large area where the terrain is difficult (mountainous, or even shallow water) and where, as a consequence, conventional seismic is expensive and may be of poor quality. Since the seismic energy from micro-earthquakes comes from below the target of interest, it can readily be used to map complex tectonic regions (e.g. overthrust belts, sub-basalt targets, shallow carbonates) characterised by seismic-energy penetration problems. Another advantage of passive seismic tomography is the capability to measure an intrinsic Vp/Vs ratio accurately. This is a direct consequence of the production of high-amplitude shear waves by small earthquakes, which are reliably recorded by three-component surface receivers. In contrast, 3D reflection seismic methods employ man-made sources (e.g. explosions) that do not produce large shear waves, so that detecting and identifying the weak shear waves reflected deep in the medium is not generally reliable, and often not even possible.
Consequently, the active seismic reflection methods currently employed by the petroleum industry do not adequately provide material-parameter information related to the shear velocities in the medium.

Typically, a passive seismic 3D survey will cost several orders of magnitude less than a conventional 2D survey. Drilling and explosives, for example, generally account for almost half of conventional 3D seismic cost; these costs are eliminated with passive seismic. Furthermore, the recording-station density required by this methodology is significantly lower, meaning a significantly smaller equipment inventory and crew size: a passive seismic crew will normally number five to 10 people, compared with a conventional crew of upwards of 50. Another important aspect of the technique is that it is environmentally friendly. The absence of explosives and of heavy vehicle support for conventional sources permits activity in terrains that might otherwise be inaccessible, and almost eliminates environmental issues.

In the following sections we present a successful application of passive seismic tomography for hydrocarbon exploration in the area of Epirus, Greece.
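The intrinsic Vp/Vs measurement described above is classically obtained from a Wadati-type analysis: for a common origin time t0, the S-P time grows linearly with P travel time, ts − tp = (Vp/Vs − 1)(tp − t0), so the slope of a straight-line fit to station arrivals gives Vp/Vs. A minimal sketch with synthetic arrival times (the origin time, picks and true ratio are all hypothetical):

```python
import numpy as np

# Assumed event origin time and P-wave arrival picks at four
# three-component stations (all values synthetic).
t0 = 10.0                                  # origin time, s
tp = np.array([12.0, 13.5, 15.0, 17.0])    # P arrivals, s
vp_vs_true = 1.73
ts = t0 + vp_vs_true * (tp - t0)           # consistent S arrivals

# Wadati fit: S-P time versus P travel time; slope = Vp/Vs - 1.
slope, _ = np.polyfit(tp - t0, ts - tp, 1)
vp_vs_est = slope + 1.0
print(f"estimated Vp/Vs = {vp_vs_est:.3f}")
```

With real micro-earthquake data the picks are noisy and t0 must itself be estimated, but the linear-fit principle is the same, and it is the reliable S-wave recordings that make it possible.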