Geophysical Prospecting - Volume 57, Issue 5, 2009
Modelling and migration with orthogonal isochron rays
Authors: Eduardo Filpo Ferreira da Silva and Paul Sava

Abstract: For increasing time values, isochrons can be regarded as expanding wavefronts and their perpendicular lines as the associated orthogonal isochron rays. The speed of isochron movement depends on the medium velocity and the source-receiver position. We introduce the term equivalent velocity to refer to the speed of isochron movement. In the particular case of zero-offset data, the equivalent velocity is half of the medium velocity. We use the concepts of orthogonal isochron rays and equivalent velocity to extend the application of the exploding reflector model to non-zero-offset imaging problems. In particular, we employ these concepts to extend the use of zero-offset wave-equation algorithms to the modelling and imaging of common-offset sections. In our imaging approach, common-offset migration is implemented as a trace-by-trace algorithm in three steps: equivalent-velocity computation, data conditioning for zero-offset migration and zero-offset wave-equation migration. We apply this methodology to modelling and imaging synthetic common-offset sections using two kinds of algorithms: finite-difference and split-step wavefield extrapolation. We also illustrate the isochron-ray imaging methodology with a field-data example and compare the results with conventional common-offset Kirchhoff migration. This methodology is attractive because it permits wave-equation depth migration of common-offset sections, or selected portions of them; it extends the use of robust zero-offset algorithms; it has favourable features for parallel processing; it permits the creation of hybrid migration algorithms; and it is appropriate for migration velocity analysis.
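The equivalent-velocity concept can be illustrated numerically. The sketch below, which assumes a constant-velocity medium (all names and values are hypothetical, not taken from the paper), computes the isochron-front speed v_eq = v / |∇t| from the two-way traveltime t(x) = (|x − s| + |x − r|) / v and confirms that it reduces to half the medium velocity at zero offset:

```python
import numpy as np

def equivalent_velocity(x, s, r, v):
    """Speed of the isochron front at image point x for source s and
    receiver r in a constant-velocity medium: v_eq = v / |grad t|,
    with t(x) = (|x - s| + |x - r|) / v."""
    x, s, r = map(np.asarray, (x, s, r))
    us = (x - s) / np.linalg.norm(x - s)   # unit vector, source -> image point
    ur = (x - r) / np.linalg.norm(x - r)   # unit vector, receiver -> image point
    grad_t = (us + ur) / v                 # gradient of the two-way traveltime
    return 1.0 / np.linalg.norm(grad_t)

v = 2000.0  # m/s, hypothetical medium velocity
# Zero offset: source and receiver coincide, so v_eq = v / 2.
v_zero = equivalent_velocity([500.0, 1000.0], [0.0, 0.0], [0.0, 0.0], v)
# Finite offset: the front moves faster, v_eq = v / (2 cos(theta/2)) >= v / 2.
v_off = equivalent_velocity([500.0, 1000.0], [-300.0, 0.0], [300.0, 0.0], v)
print(v_zero, v_off)
```

That the front speed exceeds v/2 at finite offset is consistent with the data-conditioning step the abstract describes before zero-offset migration is applied.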
Fast free surface multiples attenuation workflow for three‐dimensional ocean bottom seismic data
Authors: Bärbel Traub, Anh Kiet Nguyen and Matthias Riede

Abstract: Water-layer multiples can be very strong and contaminate the primary reflections. This can cause problems in the processing flow and in the interpretation of the subsurface image; hence, multiple suppression is an important part of the preprocessing flow. We present a fast workflow for the attenuation of free-surface-related multiples in 2D and 3D ocean-bottom seismic data, based on the wave-equation approach. The workflow includes:
1. calibration of the pressure and vertical-velocity components using wavefield splitting;
2. data interpolation using offset projection;
3. a fast Radon transform using the fast fractional Fourier transform.
The advantages of this workflow are that it is fast and efficient, and that its only requirement is the recording of both the pressure and vertical particle-velocity components at some point below the source in the water column.
A hybrid fast algorithm for first arrivals tomography
Abstract: A hybrid algorithm, combining Monte Carlo optimization with simultaneous iterative reconstruction technique (SIRT) tomography, is used to invert first-arrival traveltimes from seismic data for building a velocity model. Stochastic algorithms may localize a point around the global minimum of the misfit function but are not suitable for identifying the precise solution. On the other hand, a tomographic model reconstruction based on a local linearization will only be successful if an initial model already close to the best solution is available. To overcome these problems, in the method proposed here a first model, obtained using a classical Monte Carlo-based optimization, is used as a good initial guess for starting the local search with the SIRT tomographic reconstruction. In the forward problem, the first-break times are calculated by solving the eikonal equation through a velocity model with a fast finite-difference method instead of the traditional, slower ray-tracing technique. In addition, for the SIRT tomography the seismic energy from sources to receivers is propagated by applying a fast Fresnel-volume approach which, when combined with turning rays, can handle models with both positive and negative velocity gradients. The performance of this two-step optimization scheme has been tested on synthetic and field data for building a geologically plausible velocity model. This is an efficient and fast search mechanism, which permits the insertion of geophysical, geological and geodynamic a priori constraints into the grid model while explicit ray-path computation is completely avoided. Extension of the technique to 3D data, and to the solution of 'static correction' problems, is easily feasible.
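The two-step logic of the scheme, global stochastic localization followed by local iterative refinement, can be sketched generically. In the toy below (all functions and values hypothetical), a multimodal misfit stands in for the traveltime residual norm and plain finite-difference gradient descent stands in for the SIRT step:

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(m):
    # Toy multimodal misfit standing in for the traveltime residual norm.
    return (m[0] - 1.5) ** 2 + (m[1] + 0.5) ** 2 + 0.3 * np.sin(5 * m[0]) ** 2

# Step 1: Monte Carlo search localizes the region of the global minimum.
samples = rng.uniform(-4.0, 4.0, size=(5000, 2))
m0 = samples[np.argmin([misfit(s) for s in samples])]

# Step 2: local iterative refinement starting from the Monte Carlo model
# (a stand-in for the SIRT tomographic reconstruction of the paper).
m = m0.copy()
for _ in range(200):
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = 1e-6
        g[i] = (misfit(m + e) - misfit(m - e)) / 2e-6  # central difference
    m -= 0.05 * g

print(m0, misfit(m0), m, misfit(m))
```

The design point is the division of labour: the stochastic step only needs to land in the right basin, after which the cheap local step does the precise work.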
Post‐stack velocity analysis in the dip‐angle domain using diffractions
Authors: Moshe Reshef and Evgeny Landa

Abstract: Interval velocity analysis in complex geological areas is often considered an unresolved problem. A novel approach to improving the velocity analysis process is to perform the analysis in a non-conventional domain and to use seismic events that are usually ignored during standard data processing and imaging. In this study, a method to analyse diffraction data for migration velocity analysis in the time or depth domain is presented. The method is based on the clear distinction between diffractions and reflections in the post-migration dip-angle domain. The attractive possibility of performing the analysis using only stacked data as input is demonstrated on synthetic and real data examples.
The Petersen‐Middleton theorem and sampling of seismic data
Abstract: The Petersen-Middleton sampling theorem can lead to weaker sampling requirements than the Shannon sampling theorem and therefore offers economically advantageous alternatives for the acquisition and processing of seismic data. In this paper we present a tutorial review of the Petersen-Middleton sampling theorem and a unified description of its applications to seismic surveying: the interpolation of spatially aliased seismic data, the dealiasing of seismic common-midpoint and common-offset gathers, and the recording and processing of seismic data on a hexagonal sampling grid. The important aspects of this sampling theorem are highlighted and illustrated by synthetic examples.
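For the hexagonal-grid application, the classical payoff of the Petersen-Middleton theorem is easy to quantify: for a signal whose spectrum is confined to a disk, spectral replicas can be packed hexagonally rather than on a square grid, cutting the required sampling density by about 13.4%. A minimal arithmetic check (not code from the paper):

```python
import math

W = 1.0  # band radius of the circularly band-limited signal (hypothetical units)

# Square sampling: Nyquist spacing 1/(2W) in each direction, so the
# spectral replicas sit on a square lattice with spacing 2W.
rect_density = (2 * W) ** 2                # samples per unit area

# Hexagonal sampling: replicas pack like circles on a hexagonal lattice,
# the densest lattice packing in the plane; dual unit-cell area is
# (sqrt(3)/2) * (2W)^2, giving a lower sample density.
hex_density = 2 * math.sqrt(3) * W ** 2

savings = 1 - hex_density / rect_density   # = 1 - sqrt(3)/2
print(f"hexagonal sampling needs {savings:.1%} fewer samples")
```

This 13.4% figure is the standard economic argument for hexagonal acquisition grids mentioned in the abstract.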
Anisotropic P‐wave attenuation measured from a multi‐azimuth surface seismic reflection survey
Abstract: A system of aligned vertical fractures produces azimuthal variations in stacking velocity and in amplitude variation with offset, characteristics often reported in seismic reflection data for hydrocarbon exploration. Studies of the associated attenuation anisotropy have been mostly theoretical, laboratory or vertical-seismic-profiling based. We used an 11-common-midpoint-long portion of each of four marine surface-seismic reflection profiles, intersecting each other at 45° within circa 100 m of a common location, to measure the azimuthal variation of effective attenuation, Q⁻¹_eff, and stacking velocity in a shallow interval, about 100 m thick, in which consistently orientated vertical fracturing was expected due to underlying salt diapirism. We found qualitative and quantitative consistency between the azimuthal variations in attenuation and stacking velocity and published amplitude-variation-with-offset results. The 135° azimuth line showed the least apparent attenuation (1000 Q⁻¹_eff = 16 ± 7) and the fastest stacking velocity, hence we infer it to be closest to the fracture trend; the orthogonal 45° line showed the most apparent attenuation (1000 Q⁻¹_eff = 52 ± 15) and the slowest stacking velocity. The variation of Q⁻¹_eff with azimuth φ is well fitted by 1000 Q⁻¹_eff = 34 − 18 cos[2(φ + 40°)], giving a fracture direction of 140 ± 23° (±1 SD, derived from 'bootstrapping' fits to all 114 combinations of individual common-midpoint/azimuth measurements), compared to 134 ± 47° from published amplitude-variation-with-offset data. The effects of short-window spectral estimation and the choices of spectral-ratio bandwidth and offset ranges used in the attenuation analysis individually give uncertainties of up to ±13° in fracture direction.

This magnitude of azimuthal variation can be produced by credible crack geometries (e.g., dry cracks, radius 6.5 m, aspect ratio 3 × 10⁻⁵, crack density 0.2), but we do not claim these to be the actual properties of the interval studied, because of the lack of well control (and its consequences for the choice of theoretical model and host-rock physical properties) and the small number of azimuths available here.
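The azimuthal fit quoted in the abstract can be evaluated directly. The check below confirms that the cosine fit reproduces the quoted extreme values on the 45° and 135° lines and attains its attenuation minimum at the inferred fracture direction of 140°:

```python
import math

def attenuation_fit(phi_deg):
    """Azimuthal fit quoted in the abstract: 1000 / Q_eff = 34 - 18 cos[2(phi + 40 deg)]."""
    return 34 - 18 * math.cos(math.radians(2 * (phi_deg + 40)))

q45 = attenuation_fit(45)    # most attenuating line, quoted as 52 +/- 15
q135 = attenuation_fit(135)  # least attenuating line, quoted as 16 +/- 7
# Attenuation is minimal where the cosine equals 1, i.e. phi + 40 = 180,
# so phi = 140 degrees: the inferred fracture direction.
print(q45, q135, attenuation_fit(140.0))
```

Both evaluated values fall well inside the quoted uncertainty ranges, so the fit, the extreme-line measurements and the fracture azimuth are mutually consistent.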
A two‐step wavelet‐based regularization for linear inversion of geophysical data
Authors: Ali Gholami and Hamid Reza Siahkoohi

Abstract: Regularization methods are used to recover a unique and stable solution in ill-posed geophysical inverse problems. Because the homogeneous operators that arise in many geophysical inverse problems are connected to the Fourier basis, classical regularization methods possess some limitations for these operators that one may try to circumvent by wavelet techniques.
In this paper, we introduce a two-step wavelet-based regularization method that combines classical regularization methods with the wavelet transform to solve ill-posed linear inverse problems in geophysics. The power of the two-step wavelet-based regularization for linear inversion is twofold. First, the choice of regularization parameter is straightforward; it is obtained from an a priori estimate of the data variance. Second, in two-step wavelet-based regularization the basis can simultaneously diagonalize both the operator and the prior information about the model to be recovered. The latter is performed by wavelet-vaguelette decomposition using orthogonal symmetric fractional B-spline wavelets.
In the two-step wavelet-based regularization method, in the first step, where fully classical tools are used, the data are inverted for the Moore-Penrose solution of the problem, which is subsequently used as a preliminary input model for the second step. Also in this step, a model-independent estimate of the data variance is made using nonparametric estimation and L-curve analysis. In the second step, wavelet-based regularization is used to partially recover the smoothness properties of the exact model from the oscillatory preliminary model.
We illustrate the efficiency of the method by applying it to synthetic vertical seismic profiling data. The results indicate that a simple non-linear operation of weighting and thresholding of wavelet coefficients can consistently outperform classical linear inverse methods.
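As an illustration of the kind of non-linear coefficient operation the authors describe, here is a minimal sketch using a one-level Haar transform and soft thresholding. The paper itself uses orthogonal symmetric fractional B-spline wavelets and a wavelet-vaguelette decomposition; everything below is a simplified stand-in with hypothetical data:

```python
import numpy as np

def soft_threshold(w, tau):
    """Non-linear shrinkage: keep the sign, shrink the magnitude by tau,
    zero anything whose magnitude is below tau."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def haar_level(x):
    """One level of an orthonormal Haar transform: (approximation, detail)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_level_inv(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0, 0.0], 64)            # piecewise-constant "model"
noisy = clean + 0.1 * rng.standard_normal(192)    # oscillatory preliminary model
a, d = haar_level(noisy)
denoised = haar_level_inv(a, soft_threshold(d, 3 * 0.1 / np.sqrt(2)))
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_noisy, err_denoised)
```

Because the clean model is sparse in the wavelet detail coefficients while the noise is not, thresholding removes noise energy with little damage to the model, which is the mechanism behind the abstract's closing claim.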
Low‐frequency symmetric waves in fluid‐filled boreholes and pipes with radial layering
Authors: A. Sidorov, A. Bakulin, B. Kashtan, S. Ziatdinov and D. Alexandrov

Abstract: Many tasks in geophysics and acoustics require the estimation of mode velocities in cylindrically layered media. For example, acoustic logging or monitoring in open and cased boreholes needs to account for radial inhomogeneity caused by layers inside the borehole (sand screen, gravel pack, casing) as well as layers outside it (cement, altered and unaltered formation layers). For these purposes it is convenient to study a general model of cylindrically layered media with an inner fluid layer and a free surface on the outside. An unbounded surrounding medium can be described as a limiting case of this general model in which the thickness of the outer layer is infinite. At low frequencies such composite media support two symmetric modes, called the Stoneley (tube) and plate (extensional) waves. Simple expressions, valid at zero frequency, are obtained for these two mode velocities. They are written in a general form using elements of a propagator matrix describing axisymmetric waves in the entire layered composite. This allows one to apply the same formalism and compute velocities for n-layered composites as well as anisotropic pipes. It is demonstrated that the model of periodic cylindrical layers is equivalent to a homogeneous radially transversely isotropic medium when the number of periods increases to infinity while their thickness goes to zero. Numerical examples confirm the validity of the obtained expressions and suggest that even a small number of periods may already be well described by an equivalent homogeneous anisotropic medium.
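For the simplest configuration covered by such zero-frequency expressions, a fluid column in an unbounded elastic formation, the textbook tube-wave limit is 1/(ρ_f V_T²) = 1/K_f + 1/μ. The sketch below evaluates it for hypothetical water and formation properties; it is not the n-layer propagator-matrix expression of the paper:

```python
import math

def tube_wave_velocity(vf, rho_f, mu):
    """Zero-frequency Stoneley (tube) wave speed for a fluid column in an
    unbounded elastic formation: 1/(rho_f * V_T^2) = 1/K_f + 1/mu."""
    kf = rho_f * vf ** 2                  # fluid bulk modulus from vf, rho_f
    return 1.0 / math.sqrt(rho_f * (1.0 / kf + 1.0 / mu))

# Hypothetical values: a water column in a moderately stiff formation.
vf, rho_f = 1500.0, 1000.0                # fluid speed (m/s), density (kg/m^3)
mu = 9.0e9                                # formation shear modulus (Pa)
vt = tube_wave_velocity(vf, rho_f, mu)
print(vt)  # slower than the fluid speed, since the borehole wall is compliant
```

The rigid-wall limit (μ → ∞) recovers the fluid speed, which is a quick sanity check on the formula's structure.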
Validation of elastic wave measurements of rock fracture compliance using numerical discrete particle simulations
Authors: M. Möllhoff and C.J. Bean

Abstract: We test various methods of quantifying the compliance of single and multiple rock fractures from synthetic ultrasonic data. The data are generated with a 2D discrete particle scheme which has previously been shown to treat fractures in agreement with linear-slip theory. Studying single fractures, we find that delays derived from peak amplitudes do not correspond to group delays, as might be expected. This is due to waveform distortion caused by the frequency-dependent transmission across the fracture. Instead, the delays correspond to an expression for phase delays, which we derive from linear-slip theory. Phase delays are a unique function of compliance, whereas group delays are non-uniquely related to compliance. We believe that this property of group delays has hindered the wider application of deriving fracture compliances from traveltimes. We further show that transmission coefficients derived from waveform spectra yield more accurate fracture compliances than those obtained from ratios of signal peak amplitudes. We also investigate the compliance of a set of parallel fractures. Fracture compliance can only be determined from transmission coefficients if the fracture spacing is so large that the first-arriving pulse is not contaminated by reverberations. In the case of contamination, the direct measurement of group or phase delays is not practical. However, we demonstrate that in such cases of strong waveform distortion the coda-wave interferometry method is very effective for determining relative fracture compliance. First-break delays in the fracture-set data are related to those observed in the single-fracture simulations. This means that fracture-set compliance can be estimated from first-break data if used together with numerical simulations.
Elimination of the water‐layer response from multi‐component source and receiver marine electromagnetic data
Authors: Janniche Iren Nordskag, Lasse Amundsen, Lars Løseth and Egil Holvik

Abstract: This paper presents the theory for eliminating, from recorded multi-component source, multi-component receiver marine electromagnetic measurements, the effect of the physical source radiation pattern and the scattering response of the water layer. The multi-component sources are assumed to be orthogonally aligned above the receivers on the seabottom. Other than the position of the sources, no source characteristics are required. The integral-equation method, denoted for short as Lorentz water-layer elimination, follows from Lorentz' reciprocity theorem. It requires information only on the electromagnetic parameters at the receiver level to decompose the electromagnetic measurements into upgoing and downgoing constituents. Lorentz water-layer elimination replaces the water layer with a homogeneous half-space with properties equal to those of the sea-bed, and the source is redatumed to the receiver depth.
When the subsurface is arbitrarily anisotropic but horizontally layered, the Lorentz water-layer elimination scheme simplifies greatly and can be implemented as deterministic multi-component source, multi-component receiver multidimensional deconvolution of common-source gathers. The Lorentz-deconvolved data can be further decomposed into the scattering responses that would be recorded from idealized transverse electric and transverse magnetic mode sources and receivers. This combined electromagnetic field decomposition on the source and receiver sides gives data equivalent to those from a hypothetical survey with the water layer absent, with idealized single-component transverse electric and transverse magnetic mode sources and idealized single-component transverse electric and transverse magnetic mode receivers.
When the subsurface is isotropic or transversely isotropic and horizontally layered, the Lorentz deconvolution decouples into pure transverse electric and transverse magnetic mode data-processing problems, for which a scalar-field formulation of the multidimensional Lorentz deconvolution is sufficient. In this case single-component source data suffice to eliminate the water-layer effect.
We demonstrate the Lorentz deconvolution using numerically modelled data over a simple isotropic layered model illustrating controlled-source electromagnetic hydrocarbon exploration. In shallow water there is a decrease in controlled-source electromagnetic sensitivity to thin resistors at depth. The Lorentz deconvolution scheme is designed to overcome this effect by eliminating the water-layer scattering, including the field's interaction with the air.
A lateral model parameter correlation procedure for one‐dimensional inverse modelling
Authors: Niels B. Christensen and Rasmus J. Tølbøll

Abstract: We present a new, fast and versatile method, the lateral parameter correlation method, for invoking lateral smoothness in model sections of one-dimensional (1D) models. Modern continuous electrical and electromagnetic methods are capable of recording very large data sets and, except in a few cases, standard inversion methodology still relies on 1D models. In environments where the lateral rate of change of resistivity is small, 1D inversion can be justified, but model sections of concatenated 1D models do not necessarily display the expected lateral smoothness.
The lateral parameter correlation method has three steps. First, all sounding data are inverted individually. Next, a laterally smooth version of each model parameter, one at a time, is found by solving a simple constrained inversion problem: identity is postulated between the uncorrelated and correlated parameters, and the equations are solved including a model covariance matrix. As a last step, all sounding data are inverted again, now constrained by including the correlated parameter values as a priori values, to produce models that better fit the data. Because the method separates the inversion from the correlation, it is much faster than methods in which the inversion and correlation are solved simultaneously, typically by a factor of 200–500.
Theoretical examples show that the method produces laterally smooth model sections in which the main influence comes from the well-determined parameters, in such a way that problems with equivalence and poor resolution are alleviated. A field example is presented, demonstrating the improved resolution obtained with the lateral parameter correlation method. The method is very flexible and is capable of coupling models from the inversion of different data types with information from boreholes.
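The second step, laterally correlating one parameter at a time, amounts to a small linear smoothing problem: postulate identity between the uncorrelated and correlated parameter values, weight by the parameter variances and add a lateral roughness (covariance) constraint. A minimal sketch with a first-difference constraint; all names and values are hypothetical, not the authors' implementation:

```python
import numpy as np

def lateral_smooth(m_raw, sigma, lam):
    """Solve min (m - m_raw)^T C^-1 (m - m_raw) + lam * ||R m||^2,
    i.e. identity between raw and correlated parameters, weighted by
    their variances, subject to lateral first-difference smoothness."""
    n = m_raw.size
    Cinv = np.diag(1.0 / sigma ** 2)      # inverse parameter covariance (diagonal)
    R = np.diff(np.eye(n), axis=0)        # first-difference roughening matrix
    return np.linalg.solve(Cinv + lam * R.T @ R, Cinv @ m_raw)

rng = np.random.default_rng(2)
true_res = 50 + 20 * np.sin(np.linspace(0, np.pi, 40))   # smooth lateral trend
sigma = np.full(40, 5.0)                                  # parameter std devs
raw = true_res + sigma * rng.standard_normal(40)          # scattered 1D results
smooth = lateral_smooth(raw, sigma, lam=2.0)
print(np.std(np.diff(raw)), np.std(np.diff(smooth)))
```

Because the roughening matrix annihilates constants, the variance-weighted mean of the section is preserved exactly while lateral scatter is damped, which is the behaviour the abstract describes for well-determined parameters.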