72nd EAGE Conference and Exhibition incorporating SPE EUROPEC 2010
- Conference date: 14 Jun 2010 - 17 Jun 2010
- Location: Barcelona, Spain
- ISBN: 978-90-73781-86-3
- Published: 14 June 2010
1 - 100 of 797 results
Geologically Constrained Full Waveform Inversion – Theory
Authors: A. Guitton, F. Ortigosa and G. Gonzales
The waveform inversion problem is inherently ill-posed. Traditionally, regularization terms are used to address this issue. For waveform inversion, where the model is expected to have many details reflecting the physical properties of the Earth, regularization and data fitting can work in opposite directions, slowing down convergence. In this paper, we constrain the velocity model with a model-space preconditioning scheme based on directional Laplacian filters. This preconditioning strategy preserves the details of the velocity model while smoothing the solution along known geological dips. The Laplacian filters can smooth or suppress local events according to a local dip field. By construction, these filters can be inverted and used in a preconditioned waveform-inversion scheme to yield geologically meaningful models. We illustrate on a 2-D synthetic example how preconditioning with non-stationary directional Laplacian filters outperforms traditional waveform inversion when sparse data are inverted. We believe that preconditioning could benefit waveform inversion of real data where, for instance, irregular geometry, coherent noise and a lack of low frequencies are present.
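The dip-guided smoothing that these filters perform can be sketched numerically. The code below is a hypothetical illustration (simple shift-and-average smoothing along a constant dip, not the authors' invertible directional Laplacian filters): smoothing a dipping two-layer model along its own dip leaves it essentially unchanged, while smoothing across the dip destroys the interface.

```python
import numpy as np

def directional_smooth(model, d, n=4, half=3):
    """Average `model` over shifts along direction d = (dz, dx), applied n times.
    A crude stand-in for dip-guided (directional-Laplacian) preconditioning."""
    out = model.astype(float).copy()
    for _ in range(n):
        acc = np.zeros_like(out)
        for k in range(-half, half + 1):
            acc += np.roll(out, (k * d[0], k * d[1]), axis=(0, 1))
        out = acc / (2 * half + 1)
    return out

# Dipping two-layer model: interface along z = x (45-degree dip in grid units).
nz = nx = 64
z, x = np.meshgrid(np.arange(nz), np.arange(nx), indexing="ij")
model = np.where(z > x, 2.0, 1.0)

along = directional_smooth(model, (1, 1))    # smooth along the dip
across = directional_smooth(model, (1, -1))  # smooth across the dip

inner = (slice(16, 48), slice(16, 48))       # ignore wrap-around edges of np.roll
err_along = np.abs(along - model)[inner].max()
err_across = np.abs(across - model)[inner].max()
```

Along-dip smoothing preserves the dipping interface exactly in the interior, which is the sense in which such preconditioning "preserves details while smoothing along known geological dips".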
Efficient Gauss-Newton Hessian for Full Waveform Inversion
Full waveform inversion (FWI) has received an increasing amount of attention thanks to its ability to provide a high-resolution velocity model of the subsurface. The computational cost still presents a challenge, however, and the convergence rate of the FWI problem is usually very slow without proper preconditioning of the gradient. While preconditioners based on the Gauss-Newton Hessian matrix can provide significant improvements in the convergence of FWI, computation of the Hessian matrix itself has been considered highly impractical because of its computational time and storage requirements. In this paper, we design preconditioners based on an approximate Gauss-Newton Hessian matrix obtained using the phase-encoding method. The new method requires only 2Ns forward simulations, compared with the Ns(Nr + 1) forward simulations required in conventional approaches, where Ns and Nr are the numbers of sources and receivers. We apply the diagonal of the phase-encoded Gauss-Newton Hessian to both sequential-source FWI and encoded simultaneous-source FWI. Numerical examples on the Marmousi model demonstrate that the phase-encoded Gauss-Newton Hessian improves the convergence of FWI significantly.
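The diagonal Hessian preconditioner at the heart of this approach can be mimicked with random-phase probing of a generic matrix. The sketch below is a small dense stand-in for the Jacobian, not an FWI code; each probe is the analogue of two simulations with a phase-encoded composite source, which is where the 2Ns-vs-Ns(Nr + 1) saving comes from.

```python
import numpy as np

rng = np.random.default_rng(0)
ns, nm = 40, 30                                # "data" rows, model parameters
A = rng.standard_normal((ns, nm)) + 1j * rng.standard_normal((ns, nm))
H_diag_true = np.sum(np.abs(A) ** 2, axis=0)   # exact diag(A^H A)

# Hutchinson-style estimate with random-phase encoding vectors:
# diag(M)_j = E[ conj(z_j) * (M z)_j ] when E[z_j conj(z_k)] = delta_jk.
n_probes = 4000
est = np.zeros(nm)
for _ in range(n_probes):
    z = np.exp(2j * np.pi * rng.random(nm))    # unit-modulus random phases
    est += (np.conj(z) * (A.conj().T @ (A @ z))).real   # one "pair of simulations"
est /= n_probes

rel_err = np.abs(est - H_diag_true).max() / H_diag_true.max()
```

Each probe costs one application of A and one of A^H, independent of the number of columns, which is the structural point of phase encoding.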
Acoustic Waveform Inversion Applicability on Elastic Data
Authors: D. V. Vigh, E. W. Starr and P. Elapavuluri
Full waveform inversion is a computationally intensive process, especially for 3D seismic data. After a tremendous number of synthetic examples, the industry has finally begun to tackle real 3D data sets. As field data are dominated by P-waves, one feasible approach is to use the acoustic approximation. The full waveform inversion described here is acoustic, whereas real data are more accurately described by an elastic model. It is common practice to apply acoustic inversion, especially for 3D data sets, because elastic modeling is prohibitively expensive. Although the long offsets may suffer from elastic effects, our experiment shows that the velocity fields obtained by acoustic inversion of elastic data are reasonable in spite of the difference between the modeling and mother Earth. Although elastic propagation would provide a better match to the acquired data, its cost is still prohibitive compared with acoustic propagation, especially if large 3D surveys are under consideration.
Application of an Impedance-based Full-waveform Inversion Method for Dual-sensor, Single-streamer Field Recordings
Authors: S. Kelly, J. Ramos-Martinez, B. Tsimelzon and S. Crawley
Thus far, waveform-based inversion methods have utilized the spatial distribution of back-projected reflectivities in order to refine velocity (and density) models. While these methods excel at refining a velocity model for high-wavenumber features, they face significant difficulties when attempting to invert for long-wavelength features several kilometres below the depth of acquisition. In this abstract, we develop the theory for a waveform-based inversion method that utilizes an impedance image, rather than a reflectivity image, in order to extract spatial variations in velocity and density. Results are presented for the first stage of an inversion using a 2-D line of dual-sensor, single-streamer field recordings. It is shown that features with vertical scale sizes up to 0.5 km can be determined at depths up to 4 km, using data with a maximum offset of only 8 km and frequencies below 7.5 Hz.
2D Acoustic Full Waveform Inversion of a Land Seismic Line
Authors: W. A. Mulder, C. Perkins and M. J. van de Rijzen
The application of waveform inversion to land data is even more challenging than the marine case because of strong elastic effects such as ground roll, near-surface attenuation, scattering due to rapid geological variations, and topography. In this paper, rather than considering full elastic waveform inversion, which may be too difficult for standard geophone data, we consider the application of acoustic 2D full waveform inversion with a frequency-domain code. At lower frequencies the data are very noisy, so we carried out a fixed number of iterations with a small band of low frequencies and then repeated this for increasingly larger bandwidths. We present the results from this procedure, which show that waveform inversion improved the continuity of the main reflectors.
3D Elastic Wavefield Inversion in the Time Domain
Authors: L. Guasch, M. R. Warner, I. Stekl and A. P. Umpleby
We have developed a 3D tomographic wavefield inversion code that solves the fully elastic wave equation in the time domain using finite differences. We show results of applying this elastic code to different synthetic 3D problems.
Time-lapse Elastic Full Waveform Inversion using Injected Grid Method
Authors: S. C. Singh and G. T. Royle
Full waveform inversion is increasingly being used in the oil industry to quantify P- and S-wave velocities of the subsurface. So far, waveform inversion methods have been either 3D acoustic or 2D elastic. Furthermore, only low frequencies have been used because of the high computational cost. The application of elastic full waveform inversion to 3D reservoir monitoring is still beyond the reach of present-day computing technology. Here, we demonstrate the application of an injected-grid method, where the forward modelling for iterative elastic waveform inversion is performed in a small volume surrounding the reservoir, significantly reducing the cost of full waveform inversion. The algorithm is demonstrated on the Marmousi II data set in a time-lapse mode.
Waveform Tomography by Correlation Optimisation
Authors: T. van Leeuwen and W. A. Mulder
An important ingredient of any tomography-based velocity inversion method is the determination of traveltime differences. When a ray-based method is used, the modelled traveltimes are directly available and the traveltimes in the observed data need to be picked only once. When using wave-equation-based methods, however, the traveltime difference between two complex waveforms needs to be determined at each iteration. A straightforward approach automatically picks the onset of relevant arrivals, either directly or via a correlation of the two waveforms. If the waveforms are not very similar, however, this approach is problematic. We propose to measure the traveltime difference via a weighted norm of the correlation of the two waveforms. The weighted norm can be used directly as an optimisation criterion for waveform tomography. We illustrate this with synthetic and real data examples.
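A minimal sketch of such a lag-weighted correlation measure (hypothetical code; the quadratic lag penalty is one possible weight choice, not necessarily the authors') behaves like a squared traveltime difference:

```python
import numpy as np

def ricker(t, f=10.0):
    """Zero-phase Ricker wavelet of peak frequency f."""
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def correlation_shift_misfit(d_obs, d_syn, dt):
    """Lag-weighted norm of the cross-correlation of two traces,
    normalised by the total correlation energy."""
    c = np.correlate(d_syn, d_obs, mode="full")
    lags = dt * (np.arange(c.size) - (d_obs.size - 1))
    w = lags ** 2                      # quadratic penalty on correlation lag
    return np.sum(w * c ** 2) / np.sum(c ** 2)

dt = 0.002
t = np.arange(-0.5, 0.5, dt)
obs = ricker(t)
m0 = correlation_shift_misfit(obs, obs, dt)                # no shift
m1 = correlation_shift_misfit(obs, ricker(t - 0.01), dt)   # 10 ms shift
m2 = correlation_shift_misfit(obs, ricker(t - 0.05), dt)   # 50 ms shift
```

For two copies of the same wavelet the measure grows exactly quadratically with the time shift (m2 - m0 equals the squared 50 ms shift), which is what makes it usable directly as a tomography misfit without picking.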
Elastic Inversion of 3D Seismic Data – A Market Benchmark
Authors: I. Escobar and H. Peder Hansen
Because of the increasing use of seismic inversion products within Maersk Oil, a benchmarking exercise was needed to identify seismic inversion/reservoir characterization contractor(s) of choice. To ensure the robustness of the benchmark, a high-quality dataset was used (high SNR, good well coverage and a well-established rock physics model) from a clastic Palaeocene reservoir showing a strong class II AVO response. The exercise included four contractors and Maersk's internal inversion group, each asked to provide relative and absolute acoustic impedance and Poisson's ratio volumes, along with Bayesian seismo-facies prediction. The comparison was carried out by measuring correlations between inverted and measured relative acoustic impedance and relative Poisson's ratio, combined with visual inspection of different sections, maps and along well locations in terms of predictability and continuity of events. We found that the reliability in estimating acoustic impedance was similar for all the participants, and it was in fact the ability to estimate Poisson's ratio that was the main differentiator. Other important aspects were significant differences in software runtimes, efficiency of workflows and processing/pre-conditioning capabilities. This benchmark has given us a sound overview of the abilities of the participating contractors.
Sensitivity of Time-lapse Changes in Pressure and Saturation to Seismic AVO and Time-shifts
Authors: M. Trani, R. Arts, O. Leeuwenburgh and J. Brouwer
An inversion scheme that solves for reservoir pressure and fluid saturation changes from time-lapse pre-stack seismic attributes and post-stack seismic time-shifts is presented. It makes use of four equations expressing the changes in zero-offset and gradient reflectivities, and compressional- and shear-wave time-shifts, as functions of production-induced changes in fluid properties. The method has been successfully tested on a realistic synthetic reservoir, where seismic data have been modeled before and after 30 years of production and water injection. Results show very accurate estimations if information about the vertically averaged reservoir porosity is available. The use of the gradient reflectivity equation causes biased estimations of the real changes in saturation and strong leakage between the two parameters. However, if the equation related to the S-wave time-shift can replace the gradient reflectivity equation, the inversion results may be very accurate. In cases where shear-wave data might not be acquired, the approximation of the exact changes in this seismic attribute becomes more accurate if quadratic terms in relative changes of seismic properties are not neglected. The improved forward approximation of this attribute leads to inversion results characterized by weaker leakage and sharper discrimination between different fluid effects.
4D Pre-stack Inversion Workflow Integrating Reservoir Model Control and Lithology Supervised Classification
Authors: S. Toinet, S. Maultzsch, V. Souvannavong and O. Colnard
4D pre-stack inversion is used in the industry to image reservoir changes due to production and injection, and to make reservoir management decisions in order to optimize hydrocarbon recovery. We present an innovative workflow to prepare, constrain and compute 4D pre-stack inversion attributes. Specific properties of the studied field (huge time-shifts due to gas coming out of solution, various turbiditic contexts) implied building a composite warping result, filtered using a 4D mask, to build the initial monitor model for 4D inversion. The pre-stack 4D inversion workflow integrates not only seismic information but also well information, used to discriminate sand from shale during the 4D mask building, and a 4D rock-physics model. Applied to simulated reservoir properties, the rock-physics model defines a range of relative density and velocity variations within which the inversion results can vary. Moreover, because water-bearing sands are hard to discriminate from shales in some of the field reservoirs using a cross-plot of P and S impedances, information from the reservoir grid was also introduced to help locate water-bearing sands in the 4D mask. Preliminary analyses of 4D inversion attributes show an improved image compared with previous 4D attributes.
Ray Impedance Inversion on the Tight-sand Gas Reservoir
In a tight-sand gas reservoir, with its complicated geology and extremely low porosity and permeability, conventional inversion methods cannot always accomplish the task of characterizing reservoir distribution. In this paper, we apply ray-impedance inversion to a tight-sand gas reservoir. We perform inversion on constant-ray-parameter profiles, starting from a robust wavelet estimate and a stable reflectivity series. Comparison of the inverted ray impedance with the acoustic, elastic and shear impedances from commercial software, as well as with these elastic parameters extracted from well logs, shows that the frequency-dependent ray impedance is superior in identifying fracture zones and characterizing reservoir distribution for this tight-sand gas reservoir.
Statistical Rock Physics and Bayesian Classification – Are Marginal Distributions Important?
Authors: I. Escobar, H. Peder Hansen and M. Beller
We analyzed the importance and impact of the way marginal (unconditional) distributions are treated in statistical rock physics and Bayesian classification of inverted seismic data. Two scenarios were compared: one assuming perfect knowledge of the probability density functions and perfect, unbiased sampling; and another in which an unclassified group with a given distribution is included in order to account for imperfections in the data, models and estimation techniques. Using a dataset from a clastic Palaeocene field in the North Sea, it was shown that assuming the first scenario (perfectly known distributions) leads to probabilistic volumetric estimations of hydrocarbon-saturated sands at least three times larger than in the second case, where an unclassified group is included. For the sake of further downstream engineering and facilities analyses, it is straightforward to realize the impact of such over-predictions on the estimation of P10, P50 and P90 volumes of hydrocarbons in place.
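The effect of the unclassified group can be reproduced with a one-dimensional toy classifier (all pdfs, priors and impedance values below are hypothetical): moving prior mass to a broad unclassified class necessarily lowers the posterior probability of hydrocarbon sand at every sample, which is the direction of the over-prediction reported above.

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

x = 5400.0                          # an inverted acoustic impedance sample (arbitrary units)

# hypothetical rock-physics likelihoods
L_sand = gauss(x, 5200.0, 300.0)    # hydrocarbon sand
L_shale = gauss(x, 6200.0, 400.0)   # shale
L_unc = 1.0 / 6000.0                # broad uniform "unclassified" class over [3000, 9000]

# scenario 1: perfectly known pdfs, no unclassified group (priors 0.3 / 0.7)
p_sand_1 = 0.3 * L_sand / (0.3 * L_sand + 0.7 * L_shale)

# scenario 2: 15% of the prior mass moved to the unclassified group
pu = 0.15
p_sand_2 = (1 - pu) * 0.3 * L_sand / (
    (1 - pu) * (0.3 * L_sand + 0.7 * L_shale) + pu * L_unc)
```

Thresholding such posteriors over a volume, scenario 1 classifies systematically more cells as hydrocarbon sand, consistent with the factor-of-three over-prediction the study reports.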
Applicability of AVO Inversion Based on Effective Reflection Coefficients to Long-offset Data from Curved Interfaces
Authors: L. V. Skopintseva, A. M. Aizenberg, M. A. Ayzenberg, M. Landrø and T. V. Nefedkina
AVO inversion is an essential and powerful interpretational tool. However, the conventional AVO-inversion workflow is unlikely to succeed when applied to long-offset data, because it is limited to relatively plane interfaces, weak parameter contrasts and moderate incidence angles, where the offset dependence of the reflection amplitude can be described by linearized plane-wave reflection coefficients (PWRCs). It is also known that the PWRC is insensitive to interface curvature and breaks down at near-critical offsets, where the head wave is generated, as well as at post-critical offsets, where the reflected and head waves interfere. To avoid these inconsistencies, we exploit effective reflection coefficients (ERCs), which generalize PWRCs to curved interfaces and non-plane waves. Based on our previous results for plane interfaces, we generalize the improved approach to AVO inversion for curved interfaces. Using a synthetic data example, we also show that the theoretical description of the actual reflection from a curved interface is directly applicable in AVO inversion.
A 3D Ray-based Pulse Estimation for Seismic Inversion of PSDM Data
Authors: F. Georgsen, O. Kolbjørnsen and I. Lecomte
We develop a methodology for estimating the point-spread function of prestack depth-migrated (PSDM) data. The parametrization of the point-spread function is given by a ray-based approach that incorporates the effects of wave propagation through the use of illumination vectors. The only unknown factor in the expression for the point-spread function is a 1D pulse. This pulse is estimated from reflection coefficients in wells and co-located PSDM data using a least-squares approach. The estimate of the point-spread function is the first step towards seismic inversion of PSDM data. In comparison with traditional wavelet estimation, the model using PSDM data has a larger range of validity, i.e. we are able to remove the assumption of a horizontally layered earth. In examples, we show that our method is identical to 1D convolution when the earth has a constant dip, but gives an improvement when this assumption is violated.
Pilot Fracture Characterization Study Using Seismic Attributes Derived from Singular Value Decomposition of AVOAz Data
Authors: G. Chao and S. Maultzsch
This work presents a pilot study of the application of a seismic inversion technique for fracture density and fracture orientation in an area where available image-log data revealed the presence of open fractures at a given well location. The inversion method is based on a singular value decomposition of azimuthal AVO data. This decomposition allows us to calculate seismic attributes that are linked to the fracture density of the fracture network through anisotropic rock physics modeling. The results reveal an area of high fracture density around the well where open fractures had been observed. The fracture orientations obtained are consistent with the interpreted image logs and outcrop studies in the area. The outcome of this pilot study is promising, since the obtained results are consistent with existing image-log data. Further testing and research on a larger area will be valuable to assess whether the fracture characterization results are consistent with regional geological interpretations and other analyses of fracture networks in the reservoir.
Simultaneous Sources – Processing and Applications
Author: I. Moore
There has been considerable interest recently in data acquisition using simultaneous sources because of the enormous improvements in acquisition efficiency and source sampling that the method proffers. Realizing these improvements in practice requires an appropriate combination of survey design, acquisition technology and data processing capabilities. Given a suitably designed survey, I show that simultaneous-source data can be separated effectively into equivalent datasets for each source. These datasets may then be processed using conventional techniques, which benefit naturally from any improvements in sampling. Several field datasets, employing a variety of acquisition geometries, illustrate the viability and limitations of this approach. The main limitation comes from ambiguities in the separation process, which can be resolved to a large degree by using source-dithering techniques in combination with a separation algorithm that includes an effective sparseness constraint. The use of more than two sources simultaneously adds to the potential of the method and is shown to be viable provided the survey is designed appropriately.
Iterative Method for the Separation of Blended Encoded Shot Records
Authors: A. Mahdad and G. Blacquière
Seismic acquisition is a trade-off between economic and quality considerations. Generally, seismic data are recorded with large time delays between illuminating sources in order to avoid interference in time. The consequence is a poorly sampled shot domain. In the concept of blended acquisition, however, the time delay between sources is reduced significantly. Moreover, the sources may transmit encoded signals. Depending on the acquisition objectives, blended acquisition significantly improves the economics, the quality, or both, by adding degrees of freedom to the acquisition design. The individual source responses are retrieved by a deblending procedure. However, the deblended result contains residual noise due to interference from the other sources. The level of this noise depends on the choice of blending parameters. For example, with a simple code such as linear phase encoding (i.e., applying time delays), it is larger than with a more sophisticated code such as transmitting sweeps, as in vibroseis technology. In this paper, an iterative approach to deblending is proposed, based on the estimation and subsequent adaptive subtraction of the interference noise. The type of coding applied is one of the factors that determine the required number of iterations.
High Quality Separation of Simultaneous Sources by Sparse Inversion
Authors: R. L. Abma, T. Manning, M. Tanis, J. Yu and M. Foster
This paper demonstrates a method of producing pre-stack gathers from blended-acquisition data that are as noise-free as gathers from conventional acquisition. Filtering out the interference from overlapping shots and stacking the data are fairly effective in attenuating the interference from blended shots. However, high-quality separation of the interference from the pre-stack data would make the data more suitable for amplitude-dependent analysis, such as amplitude-versus-offset, time-lapse measurements and fracture detection. The method presented here produces seismic records in which the interference is attenuated to the point that it is well below the background noise. This high-quality attenuation is achieved by using a sparse inversion process that solves a modified version of Berkhout's matrix system, allowing the source responses to be separated for high-quality amplitude measurements. The success of this source-separation step in the processing of blended-acquisition data means we can produce results with a quality comparable to that of conventional acquisition. Furthermore, this method improves the data quality while retaining the lower cost and higher productivity of ISS™ acquisition compared with conventional acquisition.
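The sparseness-constrained inversion step can be illustrated on a generic underdetermined linear system standing in for Berkhout's blending matrix (an illustrative sketch, not the authors' solver), solved with iterative soft thresholding (ISTA):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 60, 120, 6                    # blended data size, unknowns, active spikes
A = rng.standard_normal((m, n))         # stand-in for the blending operator
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
d = A @ x_true                          # "blended" observations (fewer than unknowns)

# ISTA: gradient step on ||d - Ax||^2 followed by soft thresholding (L1 prior)
lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(5000):
    x = x + step * (A.T @ (d - A @ x))
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Without the sparseness constraint the system has infinitely many solutions; with it, the few-spike solution is recovered almost exactly, which is the mechanism that pushes separation residuals below the background noise.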
Separation of Blended Impulsive Sources using an Iterative Approach
Authors: P. Doulgeris, A. Mahdad and G. Blacquiere
Traditional data acquisition practice dictates sufficient time intervals between the firing of sequential impulsive sources in the field. However, much attention has been drawn recently to the possibility of shooting in an overlapping fashion. Numerous publications have addressed the issue from different perspectives (denoising, compression, blind signal separation, etc.), while others have established the theoretical background. The term 'blending' has been introduced to describe this new trend in acquisition design: time-overlapping data acquisition. In turn, the term 'deblending' refers to an algorithm that recovers the data as if they had been shot in the traditional way. Such an algorithm is presented in this paper, designed specifically for the case of impulsive sources that fire with small time delays. The algorithm is based on iterative interference estimation and subtraction. The key to signal extraction from blended data is the incoherency of the interference (as opposed to the coherency of the signal), accomplished by re-sorting the data into a domain other than the common-source domain. The method is applied to a real marine dataset, for which the blending process has been simulated numerically.
Signal-to-noise Estimates of Time-reverse Images
Locating subsurface sources from passive seismic recordings is difficult when attempted with low signal-to-noise data that do not contain observable arrivals. Using time-reversal techniques, recorded energy can be focused at its source depth. However, when a focus cannot be matched to a particular event in the data, it can be difficult to distinguish true focusing from artifacts. Artificial focusing can arise from numerous causes, including surface waves, local noise sources, acquisition geometry and velocity model effects. We present a method to better locate subsurface sources that reduces the ambiguity of the results by creating an estimate of the signal-to-noise ratio in the image domain. Time-reverse imaging techniques are used to image the recorded data and a noise model. In the data domain, the noise model only approximates the energy of local noise sources. After imaging, however, the result also captures the effects of acquisition geometry and spurious focusing due to the velocity model. The noise image is then used to correct the data image to produce an estimate of the signal-to-noise ratio. Synthetic data examples with various amounts of noise demonstrate the versatility of this technique.
Multichannel Matching Pursuit for Seismic Trace Decomposition
Author: Y. H. Wang
Matching pursuit can decompose a seismic trace adaptively into a series of wavelets. However, the solution is not unique and is strongly affected by data noise. To improve the stability of the procedure, matching pursuit is implemented in a multichannel fashion that exploits lateral coherence as a constraint to overcome the non-uniqueness of the solution. Each constituent wavelet extracted from a seismic trace has an optimal correlation coefficient with neighboring traces. The scheme stabilizes the wavelet decomposition, greatly improving spatial continuity along a seismic profile. It is demonstrated using two examples. One shows that the multichannel scheme is able to remove a strong coal-seam reflection from a seismic profile, so that target reservoirs with weak reflections at the top can be characterized. The other generates a reliable, spatially continuous time-frequency spectrum, on which low-frequency shadows can be used for gas-reservoir detection.
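Single-channel matching pursuit, the starting point that the paper stabilizes, can be sketched with a dictionary of shifted Ricker wavelets (hypothetical code; the multichannel variant would additionally require each picked wavelet to correlate well with neighboring traces):

```python
import numpy as np

def ricker(n, f, dt):
    """Unit-norm zero-phase Ricker wavelet centred in an n-sample window."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    w = (1 - 2 * a) * np.exp(-a)
    return w / np.linalg.norm(w)

n, dt = 256, 0.002
atom = ricker(n, 25.0, dt)
# dictionary: every circular shift of the unit-norm atom
D = np.stack([np.roll(atom, s) for s in range(n)], axis=1)

# synthetic trace: three wavelets of known shift and amplitude
trace = 1.0 * D[:, 60] - 0.7 * D[:, 130] + 0.4 * D[:, 190]

# greedy matching pursuit: pick the best-correlating atom, subtract, repeat
resid = trace.copy()
picked = []
for _ in range(3):
    corr = D.T @ resid
    j = int(np.argmax(np.abs(corr)))
    picked.append((j, float(corr[j])))
    resid = resid - corr[j] * D[:, j]

rel_resid = np.linalg.norm(resid) / np.linalg.norm(trace)
```

With well-separated wavelets the three shifts and amplitudes are recovered essentially exactly; it is when wavelets overlap or noise is present that the decomposition becomes non-unique, which is the problem the multichannel constraint addresses.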
Fast Sparse Time-frequency Decomposition
Authors: A. Gholami, N. Amini, H. R. Siahkoohi and A. Edalat
Time-frequency analysis plays an important role in seismic data processing and interpretation. A fast algorithm is presented for sparse time-frequency decomposition. A sparsity constraint is used to render the decomposition process unique while producing high-resolution energy distribution maps, which can be used as a reliable attribute for delineating reservoirs.
Can Thin Beds Be Identified Through Statistical Phase Estimation?
Authors: J. A. Edgar and J. I. Selvage
We introduce a method of identifying phase changes caused by thin-bed interference on the seismic wavelet, directly from the seismic data alone. We are primarily concerned with the identification of thin beds from seismic sections, but our approach offers scope to map thin beds laterally. We have modified a kurtosis-based statistical wavelet estimation technique to enable the identification of regions of phase change within seismic data. The method is applied to real seismic data which, from well-log lithology identification, were known to contain a series of thin beds. We expect the thin beds to cause phase changes within the seismic data and demonstrate that our statistical method is sensitive to them. This study emphasises the potential of statistical methods to extract useful information from seismic data that might otherwise be missed.
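The kurtosis criterion behind such statistical phase estimation can be demonstrated on a synthetic trace (a hypothetical sketch using constant-phase rotation via the analytic signal, not the authors' modified algorithm): rotating the data back to zero phase maximizes kurtosis, so a phase scan recovers the rotation imposed on the data.

```python
import numpy as np

def analytic(x):
    """FFT-based analytic signal (same construction as scipy.signal.hilbert)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0          # n is even here
    return np.fft.ifft(X * h)

def kurtosis(x):
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

rng = np.random.default_rng(2)
n, dt = 1024, 0.002
t = (np.arange(n) - n // 2) * dt
a = (np.pi * 30.0 * t) ** 2
wavelet = (1 - 2 * a) * np.exp(-a)                 # zero-phase Ricker

refl = np.zeros(n)                                  # sparse, well-separated spikes
refl[np.arange(60, 1020, 80)] = rng.standard_normal(12)
trace0 = np.convolve(refl, wavelet, mode="same")

phi0 = np.deg2rad(60.0)                             # impose a 60-degree phase rotation
data = np.real(analytic(trace0) * np.exp(1j * phi0))

# scan constant phase rotations; the spikiest (most kurtotic) result wins
a_d = analytic(data)
phis = np.deg2rad(np.arange(-90, 90))
kurt = [kurtosis(np.real(a_d * np.exp(1j * p))) for p in phis]
est_rotation = -np.rad2deg(phis[int(np.argmax(kurt))])
```

For sparse reflectivity the zero-phase trace is the most "spiky" one, so the estimated rotation lands close to the imposed 60 degrees (up to the usual 180-degree polarity ambiguity of kurtosis).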
High Productivity without Compromise – The Relationship between Productivity, Quality and Vibroseis Group Size
Authors: T. Dean, P. Kristiansen and P. L. Vermeer
It has been shown that reducing the source line interval can significantly improve the acquisition footprint of land data. Achieving this reduction without impacting acquisition rates requires the use of high-productivity techniques such as slip-sweep and ISS, typically utilising single vibrators with extended sweeps. These techniques result in a decrease in shot-record quality as the sweeps from different vibrators overlap. This paper discusses an altogether more tractable solution: vibrator groups with short sweeps. High productivity can be achieved without interference between different sweeps.
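The group-size trade-off rests on a textbook rule of thumb, assumed here rather than taken from the paper: against random ambient noise, the correlated signal-to-noise ratio scales with n√T for n vibrators sweeping together for T seconds. A group of four with a short sweep then matches a single vibrator with a 16x longer sweep:

```python
import math

def snr_gain_db(n_vibs, sweep_s, ref_sweep_s=6.0):
    """SNR relative to one vibrator with the reference sweep, assuming
    coherent source summation against random ambient noise (n * sqrt(T) rule)."""
    return 20 * math.log10(n_vibs * math.sqrt(sweep_s / ref_sweep_s))

single_long = snr_gain_db(1, 96.0)   # one vibrator, 96 s extended sweep
group_short = snr_gain_db(4, 6.0)    # four-vibrator group, 6 s sweep
```

Under this rule both configurations deliver the same SNR, but the group spends one sixteenth of the sweep time per shot and its short sweeps do not overlap with neighbouring records, which is the paper's argument for groups over single vibrators with extended sweeps.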
On the Accuracy of Harmonic Estimation from Weighted-sum Ground Force
Authors: V. C. Do and C. Bagaini
The harmonic noise generated by hydraulic seismic vibrators is not a major concern in conventional vibroseis acquisition with upsweeps and sweeps longer than the listening time. However, its significance is magnified when the acquisition is conducted with simultaneous shooting techniques based on phase encoding or slip-sweep. We show that using the ground force estimated at the vibrator location yields a worse approximation of the earth's response to the harmonics than that determined from the seismic data. The inaccuracy of the estimated ground force in characterizing the harmonic noise increases with frequency and, for the considered dataset, leads to an amplification rather than an attenuation of the harmonic noise above 40 Hz. It is therefore recommended to use the surface data to determine the harmonic noise.
Deharmonics, a Method for Harmonic Noise Removal on Vibroseis Data
Authors: F. D. Martin and P. A. Munoz
With the advent of slip sweeps to increase vibroseis acquisition productivity, harmonic noise removal has become more critical for preserving data quality compared with conventional "flip-flop" vibroseis data. Several methods for harmonic noise attenuation are available, such as those of HPVA, Jeffryes, Bagaini, Ziolkowski, Sicking et al. and others. Most of these methods are based on recording the vibrator ground-force signal to design the operator. However, in some cases the signal is lost or not representative of the vibroseis array. Some of the methods require uncorrelated data, which implies handling a large amount of data every day. We have successfully implemented a method to remove the harmonic noise without the ground-force signal. The method is based on collapsing the harmonic noise by adding the pilot fundamental phase to a correlated record and subtracting the theoretical phase of the harmonic to be collapsed, followed by a deterministic surgical edit of the first breaks. Transformation back to the original, noise-free correlated record is trivial: the inverse phase operation is applied, i.e., adding the harmonic phase and subtracting the fundamental. The method is demonstrated with synthetic and real data from a slip-sweep acquisition.
Keeping the Data Quality of High Productivity Vibroseis Acquisitions Under Control
Author: C. Bagaini
The popularity of the slip-sweep method for high-productivity vibroseis acquisition is growing. However, for high values of the sweep-length to slip-time ratio, that is, for aggressive slip-sweep acquisitions, the harmonic contamination is severe. The first part of this paper presents a new technique for harmonic noise attenuation in aggressive slip-sweep acquisition. The second part introduces the dithered acquisition method in the framework of vibroseis acquisition. It is shown that, after separation of the dithered records using a modeling and inversion technique, the quality of the final image (using the same number of shot gathers) is not affected by the dithered interference. The combination of dithered and slip-sweep acquisition is proposed here as a method to increase vibroseis productivity to values close to the limits obtainable when the vibrators sweep independently, while keeping the interference noise due to simultaneous shooting under control.
Harmonic Distortion Reduction of Seismic Vibrator Using Vibrator Control Electronics
It is well known that the mechanical system of seismic hydraulic vibrators introduces harmonic distortion into the output ground force. Most research and development related to reducing harmonic distortion has focused on the mechanical structure and hydraulic system of the vibrator. However, recent advancements in source control technology demonstrate effective reduction of the harmonic distortion caused by the servo-hydraulic system. This paper discusses these advancements and provides field-test examples of harmonic distortion reduction using various types of seismic vibrators.
What if I...? – The Use of Vibroseis 'Energy Tests' as an Aid in Parameter Choice
Authors P. Kristiansen, J. Quigley, D. Holmes and T. Dean
A core component in planning a land seismic survey is the choice of source and its parameters. Parameter tests are often performed as part of the survey start-up, but interpretation of the results from these tests is frequently subjective, particularly when a complete line or swath is not acquired and processed. The typical instinct is to err on the side of caution, and this may result in the application of an excessive amount of source energy, thus negatively impacting survey efficiency and cost. However, the effect of different sources and source parameters on signal-to-noise ratio and other key quality indicators can be tested and analyzed in a systematic way. This allows comparison of a much wider variety of source options than is possible through the acquisition of a very limited number of costly and time-consuming test lines. We can also identify possible efficiency improvements resulting from acquiring data with equivalent signal-to-noise ratios but different parameter combinations. A suite of such tests has been developed within WesternGeco; they are referred to as energy tests. In this paper, we describe the results from such tests and their interpretation.
Low-frequency Generation Using Seismic Vibrators
Authors G. J. M. Baeten, A. Egreteau, J. Gibson, F. Lin, P. Maxwell and J. J. Sallas
Typical specifications for a seismic vibrator include a low-frequency limit determined by the reaction-mass weight, the piston stroke and the pump flow rating. Emission of frequencies well below this so-called displacement limit is investigated using dedicated sweep designs. The impact of the low-frequency emissions on geophysical and mechanical particle motions is investigated using a variety of different sensors: downhole, along a 2D receiver line and on the seismic vibrator itself.
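The displacement limit referred to above follows from simple kinematics of the reaction mass. The sketch below uses hypothetical numbers (not the authors' vibrator specifications) to show where the stroke-limited force curve meets the rated force.

```python
import numpy as np

# Back-of-envelope sketch of the displacement limit (all numbers assumed).
# Below this limit the peak output force is set by the reaction mass M_r
# swept over its maximum stroke x_max:  F(f) = M_r * (2*pi*f)^2 * x_max,
# i.e. achievable force rolls off at 12 dB/octave towards low frequencies.
M_r = 4000.0      # reaction mass (kg), assumed value
x_max = 0.05      # peak usable stroke (m), assumed value
F_rated = 270e3   # hold-down-limited rated force (N), assumed value

def stroke_limited_force(f_hz):
    """Peak force achievable at frequency f_hz under the stroke limit."""
    return M_r * (2 * np.pi * f_hz) ** 2 * x_max

# frequency at which the stroke limit meets the rated force
f_limit = np.sqrt(F_rated / (M_r * x_max)) / (2 * np.pi)
print(f"displacement limit for these numbers: {f_limit:.2f} Hz")
```

Halving the frequency quarters the available force, which is why dedicated sweep designs are needed to emit useful energy well below the limit.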
Directive Geophone
There may be good reason to consider planting geophones at an angle to enhance energy arriving at high incidence angles. This paper examines two issues: the fidelity of geophones planted with a tilt, and the potential benefit of data collected with tilted geophones. Lab measurements have shown that tilting a conventional geophone at 40° from vertical causes a loss of 70% of its maximum response (Bertram et al., 1999). The author has measured geophone properties in the field under static (passive source) and dynamic (active source) modes. In static mode, conventional geophones were observed operating normally up to 30° of tilt. In dynamic mode, a tilt of 60° was observed to cause a loss of about 90-95% of the maximum response, while no losses were observed at tilt angles of 15-45°. The fact that geophones can maintain response integrity at tilted angles can be used to create directionally tuned arrays that enhance energy arriving at higher incidence angles. The directive or "Steering and Fixed Angle Geophone" (patent pending) presented here is an attempt to improve the recovery of energy from longer offsets at the acquisition stage.
Field Data Results of Elimination of Free-surface-related Events for Marine Over/Under Streamer Data
Authors M. Majdanski, C. Kostov, E. Kragh, I. Moore, M. Thompson and J. Mispel
Attenuation of multiples in marine seismic data is typically achieved using a multiple prediction and adaptive subtraction process. An alternative method, based on deconvolution of up-going by down-going pressure data, has been successfully applied to ocean-bottom data, where it removes all free-surface-related events (source- and receiver-side ghosts as well as free-surface multiples). Here, we present a field data application of this method to towed-streamer data acquired in an over/under configuration. To overcome the requirement of accurately recording direct arrivals, we use an estimate based on source near-field hydrophone recordings. The final results are compared to the standard SRME technique and show similar performance, but within a single operation and without the requirement of adaptive subtraction.
Surface Related Multiple Elimination for WATS Data
Authors E. Kurin, D. Lokshtanov and H. K. Helgesen
A version of the 3D SRME algorithm for WATS data is considered. In this version, missing traces required for 3D multiple prediction are reconstructed on the fly based on the azimuth-dependent NMO for a few selected horizons. The subtraction scheme with multichannel multidimensional filters averaged over a streamer, which avoids re-sorting of traces, proved effective for such data. The use of synthetic data, generated from a realistic model with an acquisition pattern corresponding to the real case, helps to analyze the strong and weak points of the studied demultiple scheme. The results show that the suggested algorithm is effective for WATS data, especially at near-to-middle offsets. For far offsets combined with complex geological settings, there is still room for improvement. An effective data-handling scheme for scalable implementation of the algorithm on compute clusters is suggested, in which receiver-side traces are accessed in sequential order and processed according to pre-computed index tables of trace contributions.
3D Predictive Deconvolution for Wide-azimuth Gathers
Authors P. Hugonnet, J. L. Boelle, P. Herrmann, F. Prat and S. Navion
Usual pre-stack predictive deconvolution solutions for multiple attenuation are either monochannel or designed for 2D gathers. These existing 1D/2D solutions are suboptimal for today's modern acquisition geometries, which allow the construction of 3D, wide-azimuth pre-stack collections. We therefore present a 3D pre-stack predictive deconvolution algorithm suited to today's WAZ HD HR gathers. It targets the attenuation of multiples (either surface or internal) in horizontally layered media, and is applied to densely sampled, wide-azimuth gathers from marine, land, or OBC surveys (cross-spread gathers, receiver or shot gathers, mega bin, macro bin, ...). As usual with predictive deconvolution algorithms, it is most useful for short- to medium-period multiples.
2D Multiple Prediction in the Curvelet Domain
Authors D. Donno, H. Chauris and M. Noble
The suppression of multiples is a crucial task when processing seismic reflection data. We investigate how curvelets can be used for surface-related multiple prediction. From a geophysical point of view, a curvelet can be seen as the representation of a local plane wave, and is particularly well suited for seismic data decomposition. For the prediction of multiples in the curvelet domain, we propose to first decompose the input data into curvelet coefficients. These coefficients are then convolved together to predict the coefficients associated with multiples, and the final result is obtained by applying the inverse curvelet transform. The curvelet transform offers two advantages. The directional characteristic of curvelets allows us to exploit Snell's law at the sea surface. Moreover, possible aliasing in the predicted multiples can be better managed by using the curvelet multi-scale property to weight the prediction according to the low-frequency part of the data. 2D synthetic and field data examples show that artifacts and aliasing effects can indeed be reduced in the multiple prediction with the use of curvelets.
Incorporating the Source Array into Primary Estimation
Authors G. J. A. van Groenestijn and D. J. Verschuur
In the surface-related multiple elimination (SRME) method the source array is assumed to act as a single stable source. When the behavior of the source array differs too much from this assumption, it affects the primary estimation. We demonstrate this with two cases: unstable sources and large source arrays. We discuss which measures to take to incorporate the source array into SRME. On the same synthetic data examples we demonstrate that it is very easy to bring the source array into the recently introduced estimation of primaries by sparse inversion (EPSI) method. For the unstable-source case, EPSI can estimate the source wavelet that was used during each shot. For the case of large source arrays, EPSI can estimate primary impulse responses from the data as if they came from a single point-source acquisition, thus removing the blending effect of the source array.
Stabilized Estimation of Primaries by Sparse Inversion
Authors T. T. Y. Lin and F. J. Herrmann
Estimation of Primaries by Sparse Inversion (EPSI) is a recent method for surface-related multiple removal using a direct estimation approach closely related to Amundsen inversion, in which, under a sparsity assumption, the primary impulse response is determined directly from a data-driven wavefield inversion process. One of the major difficulties in its practical adoption is that one must have precise knowledge of a time window that contains multiple-free primaries during each update. Moreover, due to the nuances involved in regularizing the model impulse response in the inverse problem, the EPSI approach has an additional number of inversion parameters for which it may be difficult to choose reasonable values. We show that the specific sparsity constraint on the EPSI updates leads to an inherently intractable problem, and that the time window and other inversion variables arise in the context of additional regularizations that attempt to drive towards a meaningful solution. We furthermore suggest a way to remove almost all of these parameters via convexification, which stabilizes the inversion while preserving the crucial sparsity assumption in the primary impulse response model.
Surface Multiple Attenuation Through Sparse Inversion – Attenuation Results for Complex Synthetics and Real Data
Authors T. Savels, K. de Vos and J. W. de Maag
We present new results of the recently introduced multiple attenuation through sparse inversion approach (van Groenestijn and Verschuur, 2009). This method aims at attenuating surface multiples of all periodicities without the need for an adaptive subtraction or a near-offset extrapolation. The aim of our paper is twofold. First, we demonstrate the viability of the sparse inversion approach on two complex 2D synthetic data sets, showing that sparse inversion may serve as a powerful tool to attenuate short- and long-period multiples in complex settings. We additionally illustrate that the resulting primary estimations match the synthetically modelled ones. Second, we apply the 2D sparse inversion approach to a 2D line of a real 3D deep-marine data set. We observe that the estimated multiples exhibit a time shift compared to the true ones, leading to a degraded multiple attenuation. In a further synthetic study we demonstrate that, as expected, this time shift is induced by the presence of cross-line dip. Our observations confirm the necessity of a full 3D approach in order for the sparse inversion method to be effective in practice.
Energy Reassignment of an Image for Improved Picking of Velocity Dispersion Curves
Authors C. W. Hyslop and M. S. Diallo
Many algorithms have been proposed to improve the resolution of the velocity dispersion curve through the use of different transforms to the frequency-velocity domain. The reassignment proposed in this paper is an image processing and picking technique that has the ability to overcome problems intrinsic to poor sampling and complex modal structure. We use an iterative approach in applying energy reassignment to move energy points closer to the crest of the dispersive mode in the frequency-velocity image. In this sense, energy reassignment of the image can be used as a mode separation technique. Directly reassigning the image is also computationally efficient, thus facilitating interactive analysis and processing of surface waves. An example from within the processing flow of a surface wave mitigation algorithm is given to show the advantage in using this process on beamformed dispersive modes.
A Marine Broadband Case Study Offshore China
Authors T. J. Bunting, B. J. Lim, C. H. Lim, S. W. Pei, S. K. Yang, Z. B. Zhang, Y. H. Xie and L. Li
The effect of the sea-surface ghost on marine seismic acquisition is well understood. Shallow tow geometries recover high frequencies at the expense of attenuating low frequencies, and deep tow geometries recover low frequencies at the expense of attenuating high frequencies. In recent years two dual-streamer tow-depth solutions (Over-Under and Sparse-Under) have been deployed, both of which use two streamers towed at different depths. A 2D survey was acquired offshore China in August 2009 utilizing three separate streamer depths (5, 17 and 23 m). This three-depth configuration allows the benefits of the two broadband solutions to be evaluated against each other and against a shallow single-depth streamer measurement. This paper reviews the theory behind the two combination techniques, compares the seismic datasets, and closes with some conclusions on the relative benefits both in terms of seismic imaging and acquisition efficiency.
First 3-D Dual-sensor Streamer Acquisition in the North Sea – An Example from the Luno Field
Authors B. Osnes, A. Day, M. Widmaier, J. E. Lie, V. Danielsen, D. Harrison-Fox, M. Lesnes and O. El Baramony
In 2009, the first 3-D dual-sensor towed-streamer survey in the North Sea was acquired over the Luno field. This field presents a challenging area for seismic imaging that requires the best possible resolution to image the complexity of the reservoir, together with maximum signal penetration due to the presence of an overlying chalk interval. Dual-sensor technology is therefore attractive due to its ability to remove the receiver ghost, thereby increasing the usable bandwidth, which in turn permits increased towing depth to improve the low-frequency signal-to-noise ratio. In this paper, the acquisition and processing of this dataset are reviewed. It is shown that, in addition to the data-quality benefits, deep-towed dual-sensor acquisition has a number of operational advantages over conventional acquisition. In particular, dual-sensor streamer operations are less susceptible to weather-related noise and seismic interference.
Mitigating the Environmental Footprint of Towed Streamer Seismic Surveys
Authors P. M. Fontana and C. P. Zickerman
The environmental risks associated with marine towed-streamer seismic surveys can be categorized into four main emission components: solid, fluid, gaseous, and acoustic. Sources for each component can be identified with respect to the maritime technologies incorporated into the design of the seismic survey vessel and the seismic acquisition technologies and methods applied to specific geophysical objectives. To facilitate this analysis, we introduce the concept of an emission risk matrix, within which the potential risk level of each emission component associated with the survey vessel and the seismic equipment can be assessed. Once the primary risks have been identified, mitigation measures can be identified which, when applied, can significantly reduce the environmental footprint of the towed-streamer survey effort.
Recon 3D – A Fast and Cheap 3D Acquisition Approach to Large Scale Exploration
Authors M. D. MacNeill and J. F. McNutt
In deep-water environments, energy companies are exploring increasingly subtle plays, in ever shorter time frames, to discover commercial hydrocarbon volumes. Advances in deep-water drilling and completions, subsea equipment and facilities technology underpin many of the recent commercially successful deep-water developments; advances in exploration seismic have lagged those in engineering. In deep-water frontier environments, seismic stratigraphic and structural interpretation, in addition to hydrocarbon-system and regional analyses, are key components of prospect evaluation and risking leading to successful exploration and development programs. This clear business need provides an opportunity for more cost-effective seismic acquisition technology. This paper introduces the novel RECONnaissance 3D marine seismic acquisition method, which provides the spatial and temporal resolution of conventional marine 3D seismic at a significantly reduced cost. RECON 3D is an ideal method for acquiring deep-water seismic data to explore large areas, to upgrade separately or previously acquired azimuthally restricted data, or to acquire fast-track data covering numerous fields, prospects, or leads.
First Wide-azimuth Time-lapse Seismic Acquisition Using Ocean Bottom Seismic Nodes at Atlantis Field – Gulf of Mexico
Authors G. J. Beaudoin, M. D. Reasnor, M. Pfister and G. Openshaw
The world's first ocean bottom seismic (OBS) node-on-node time-lapse monitor survey was acquired at Atlantis Field in the Gulf of Mexico in 2009. The baseline survey, acquired in 2005-2006, was the first large-scale deepwater OBS survey employing autonomous nodes (Beaudoin and Ross, 2007). Following first oil in October 2007, studies were initiated for a time-lapse survey to monitor production changes in the Atlantis reservoir. The baseline survey demonstrated that remotely-operated vehicles (ROVs) could deploy and recover autonomous nodes with high positional accuracy even in the presence of severe bathymetric challenges. This capability underpinned the design, planning and execution of a highly repeatable monitor survey. For the monitor survey, an ROV deployed 500 nodes to a subset of the original 1628 baseline node locations. These 500 monitor locations were chosen to image the expected area of reservoir changes after careful analysis of image quality as baseline data were progressively decimated. The ROV deployed 91% of the nodes to within 5 meters of their respective baseline locations and 98% within 10 meters. This remarkable geometric repeatability within a producing field has laid the foundation for highly repeatable monitor surveys even in challenging seafloor environments.
A Case Study of a High Latitude Towed Streamer 3D Seismic Survey
Authors A. Ross, S. Hildebrand and S. Viceer
In the summer of 2009, CGGVeritas collected a towed seismic streamer survey in the Canadian Beaufort Sea for BP. This was the highest latitude (>71 degrees) towed streamer 3D survey collected by either company. The presence of first-year and multi-year sea ice dominated the operation and taught both companies a great deal about high latitude seismic operations.
Randomized Sampling Strategies
Seismic exploration relies on the collection of massive data volumes that are subsequently mined for information during seismic processing. While this approach has been extremely successful in the past, the current trend towards higher quality images in increasingly complicated regions continues to reveal fundamental shortcomings in our workflows for high-dimensional data volumes. Two causes can be identified. First, there is the so-called "curse of dimensionality" exemplified by Nyquist's sampling criterion, which puts disproportionate strain on current acquisition and processing systems as the size and desired resolution of our survey areas continues to increase. Secondly, there is the recent "departure from Moore's law" that forces us to lower our expectations to compute ourselves out of this curse of dimensionality. In this paper, we offer a way out of this situation by a deliberate randomized subsampling combined with structure-exploiting transform-domain sparsity promotion. Our approach is successful because it reduces the size of seismic data volumes without loss of information. As such we end up with a new technology where the costs of acquisition and processing are no longer dictated by the size of the acquisition but by the transform-domain sparsity of the end-product.
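The recipe described above (randomized subsampling combined with transform-domain sparsity promotion) can be sketched on a toy 1-D signal; sizes, sampling fraction and threshold below are hypothetical, and the solver is a plain iterative soft-thresholding scheme rather than the authors' implementation.

```python
import numpy as np

# Toy compressive-sampling sketch: a signal that is sparse in the Fourier
# domain is sampled at 25% of Nyquist at random locations and recovered by
# iterative soft thresholding (ISTA) on its Fourier coefficients.
rng = np.random.default_rng(1)
n = 256
t = np.arange(n)
signal = (np.sin(2 * np.pi * 9 * t / n)
          + 0.5 * np.sin(2 * np.pi * 31 * t / n))

mask = rng.permutation(n) < n // 4           # keep a random 25% of samples
y = mask * signal                            # "acquired" data, zeros elsewhere

x = np.zeros(n, dtype=complex)               # unknown Fourier coefficients
lam = 0.05                                   # sparsity-promoting threshold
for _ in range(300):
    resid = y - mask * np.fft.ifft(x, norm="ortho").real
    x = x + np.fft.fft(mask * resid, norm="ortho")  # gradient step
    mag = np.abs(x)                                  # complex soft threshold
    x = np.where(mag > lam, x * (mag - lam) / np.maximum(mag, 1e-12), 0.0)

recon = np.fft.ifft(x, norm="ortho").real
err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
```

The random mask turns coherent aliases into incoherent noise that the sparsity-promoting threshold removes, which is why far fewer samples than Nyquist suffice here.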
Acquisition Design for Incoherent Shooting
Authors G. Blacquiere and A. J. Berkhout
In blended seismic data acquisition, the blended source wavefield should be designed in such a way that it has a large spatial bandwidth without notches. This corresponds to using blended source arrays with a high degree of incoherency. The three-dimensional (x,y,t) autocorrelation function of the blended source wavefield at the surface is proposed as a surface-related, quantitative measure of incoherency. In this assessment the subsurface is not involved. A second measure is proposed that does include the propagation effects of the near- and subsurface on the illumination by the blended source wavefield. For each subsurface gridpoint the autocorrelation function of the incident blended source wavefield - being represented by a dispersed time series - is judged for its whiteness. This result can be extended to angle-dependent illumination by computing the cross-correlation function as well.
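A 1-D analogue of the proposed incoherency measure, reduced from the paper's (x,y,t) autocorrelation to time only and with hypothetical numbers, compares the autocorrelation sidelobes of a regular and a dithered firing schedule:

```python
import numpy as np

# Represent a blended source array by its aggregate firing-time series and
# judge incoherency by the whiteness of its autocorrelation: random dithers
# suppress the strong sidelobes of a strictly regular slip-time schedule.
rng = np.random.default_rng(2)
n, nsrc, slip = 4000, 8, 400

def firing_series(dithered):
    s = np.zeros(n)
    for k in range(nsrc):
        t0 = k * slip + (int(rng.integers(-150, 150)) if dithered else 0)
        s[min(max(t0, 0), n - 1)] = 1.0
    return s

def max_sidelobe(s):
    """Largest autocorrelation sidelobe, normalized; 1.0 = fully coherent."""
    ac = np.correlate(s, s, mode="full")
    ac[len(ac) // 2] = 0.0            # ignore the zero-lag peak
    return ac.max() / nsrc

regular = max_sidelobe(firing_series(False))   # periodic: strong sidelobes
incoherent = max_sidelobe(firing_series(True)) # dithered: near-white
```

A whiter autocorrelation (smaller sidelobes) corresponds to the larger notch-free spatial bandwidth the abstract calls for.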
Time-lapse Refraction Seismic Monitoring
Authors F. Hansteen, P. B. Wills, J. C. Hornman, L. Jin and S. J. Bourne
A novel seismic technique for reservoir monitoring has been developed and tested in a recent field trial at the Peace River heavy oil field in Alberta, Canada. By measuring time shifts on first-arrival head waves from a refracting layer below the reservoir, the method aims to produce high-resolution areal maps of reservoir time shifts at a much lower cost than conventional 4D seismic. Good lateral resolution is achieved by numerically redatuming the wavefield recorded at the surface to a datum just above the reservoir. Preliminary results show plausible one-way time-lapse time shifts in the reservoir of the order of 2 ms, caused by variation in reservoir fluid pressure in the vicinity of active steam injectors.
Benefits of Hydrophones for Land Seismic Monitoring
Authors E. Rebel and E. Forgues
CGGVeritas has conducted a 4D project for Shell Canada based on a network of buried mini-vibrators associated with buried sensors. This paper compares the signal and noise recorded on different types of sensors (surface DSUs, buried geophones and buried hydrophones). We conclude that buried hydrophones provided the best data quality: (i) they are free of shear-wave energy, (ii) they present a better signal-to-noise ratio (20 dB gain), and (iii) they show better repeatability. Hydrophones are therefore well adapted to permanent land seismic acquisition for 4D monitoring.
A New Approach to Efficient 4D Acquisition
Authors P. B. Sabel, L. Fenstad and S. Darling
To acquire good 4D data it is important to repeat the acquisition parameters, especially the baseline's source and receiver positions. If one cannot compensate for feather, one has to shoot 4D infill, which adds to the cost of the monitor survey. We present a novel approach to pre-plot planning and positioning QC during acquisition. From analysis of the baseline's post-plot data for the unique coverage contribution of each source-receiver pair, and using a field-specific tolerable positioning error, we derive uniqueness criteria; lines not matching these criteria are removed from the pre-plot. From analysis of the baseline data we define how much feather deviation we can tolerate and introduce a robustness criterion, the feather aperture. All pre-plot lines for the monitor survey then have an associated, line-specific feather aperture value. The feather aperture concept assists in choosing lines with a wide feather aperture for periods of unpredictable currents, and can also help to reduce time spent waiting on unnecessarily tight feather-matching criteria. This paper demonstrates a technique that should change 4D QC and significantly improve acquisition efficiency.
Time-lapse Acquisition With a Dual-sensor Streamer over a Conventional Baseline Survey
Authors A. Day, M. Widmaier, T. Høy and B. Osnes
This paper describes an experiment to validate the use of a dual-sensor streamer for time-lapse acquisition. Dual-sensor streamer data were acquired at a depth of 15 m over five adjacent sail lines that had been acquired some months earlier using a conventional streamer recording the pressure wavefield at 8 m depth. The dual-sensor streamer data were processed to obtain the total pressure field at 8 m depth. Both these data and the conventional data were then taken through a state-of-the-art time-lapse processing flow. The difference in the final images is very small, as would be expected for two surveys conducted only a few months apart in an area where there is no production. The repeatability is shown to conform to industry requirements, thereby demonstrating that a dual-sensor streamer can be used to perform time-lapse acquisition in areas where earlier surveys were acquired with conventional streamers.
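Repeatability of the kind discussed above is conventionally quantified with the NRMS metric; a minimal sketch on synthetic traces (the traces and noise level are hypothetical):

```python
import numpy as np

# NRMS, the standard time-lapse repeatability yardstick:
#   NRMS = 200 * rms(a - b) / (rms(a) + rms(b))   [percent]
# 0% means identical traces; 200% is the maximum (opposite polarity).
def nrms(a, b):
    rms = lambda v: np.sqrt(np.mean(np.square(v)))
    return 200.0 * rms(a - b) / (rms(a) + rms(b))

rng = np.random.default_rng(3)
base = rng.standard_normal(1000)
monitor = base + 0.1 * rng.standard_normal(1000)  # small non-repeatable part

score = nrms(base, monitor)
print(f"NRMS between base and monitor: {score:.1f}%")
```

Low NRMS values (well below the values typical of uncorrelated traces) are what "conforming to industry requirements" is usually taken to mean.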
The Correlated Leakage Method – Its Application to Better Quantify Timing Shifts on 4D Data
Authors P. Paramo, D. N. Whitcombe, N. Philip, A. Toomey, T. Redshaw and S. Linn
We present a new methodology for estimating time shifts between 4D surveys, the Correlated Leakage Method (CLM). Its ability to provide accurate estimates of time shifts for a wide range of window sizes makes it an attractive alternative to existing algorithms. Timing shifts are calculated from the gradient of a line fitted to a cross-plot of the amplitude difference between baseline and monitor against the amplitude difference between the baseline-monitor survey average and a time-shifted version of this average. The nature of the CLM fitting process suppresses time-shift noise and makes the method stable. Additionally, the CLM calculations can be implemented in 1D, 2D and 3D and can easily be modified to estimate vertical or lateral shifts between time-lapse surveys. Here we review the CLM theory and discuss the advantages and limitations of the method.
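The cross-plot recipe can be sketched in one dimension on a synthetic trace (all parameters hypothetical, not the authors' implementation): the estimated shift is the probe shift multiplied by the slope of the fitted line.

```python
import numpy as np

# 1-D sketch of the cross-plot idea: fit a line to (base - monitor) versus
# (average - shifted average); slope * probe_shift estimates the time shift.
def fshift(x, s):
    """Delay x by s samples via the Fourier shift theorem."""
    f = np.fft.fftfreq(len(x))
    return np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * s)).real

t = np.arange(512.0)
base = np.exp(-((t - 256.0) / 20.0) ** 2) * np.cos(0.3 * (t - 256.0))
true_shift = 0.8                       # samples: the 4D signal to recover
monitor = fshift(base, true_shift)

tau = 1.0                              # probe shift applied to the average
diff = base - monitor
avg = 0.5 * (base + monitor)
ref = avg - fshift(avg, tau)
slope = np.dot(diff, ref) / np.dot(ref, ref)   # least-squares line fit
est_shift = slope * tau
```

To first order both cross-plot axes are proportional to the derivative of the average trace, so the slope is the ratio of the true shift to the probe shift, and the fit averages out uncorrelated noise.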
Simultaneous Multi-vintage Multi-parameter Time-lapse Matching
Authors E. Zabihi Naeini, H. Hoeber, G. Poole, F. Buker and M. van Schaack
4D matching is an important processing step that is used repeatedly within a time-lapse processing sequence to remove residual amplitude, time and phase differences between vintages of seismic data. In this paper we introduce a new matching algorithm that minimizes the global NRMS between an arbitrary number of vintages and thus automatically achieves the best possible overall repeatability in a single matching step. The algorithm can work in the time or frequency domain and simultaneously finds all matching parameters for all vintages. By using constraints, a unique solution to this non-linear optimisation problem is found without the need to define a reference vintage up front. This minimises the risk of propagating artefacts that may be present on one of the vintages to all other datasets. We show two example applications of the new algorithm. First, we use it for pre-imaging, sail-line-consistent removal of acquisition-related artefacts (destriping) of multiple vintages. Second, we present examples of multi-vintage matching, as applicable to 4D residual local matching on imaged data.
Retrieving 4D Signal in Complex Media Using the Full Waveform Inversion Paradigm
Authors P. Thore, H. Allouche, P. O. Lys and I. Tarrass
Various 4D processing techniques are based on a 1D approximation, and their applicability is limited in the case of dipping reservoirs. In this paper we present a synthetic case study, with dips up to 40° and superimposed reservoirs, in which the data have been generated using full waveform modelling. New processing based on the full waveform inversion technique has been applied to the data in order to extract the 4D signal. Our results show that seismic monitoring can be considered even in a complex environment.
Time Lapse Monitoring of the Elgin HPHT Field
Authors A. Grandi, O. Rahmanov, V. Neillo, F. Bourgeois, C. Deplanté and L. Ben-Brahim
Monitoring is essential to understanding HPHT fields and extending their life. The high cost of infill drilling requires minimising the risks of well failure and improving the understanding of compartmentalisation, flow connectivity and reservoir quality away from control wells. Time-lapse seismic, when fully integrated with the other geoscience disciplines, can bring essential information to help achieve these aims. Using a dedicated warping inversion, a new and more stable 4D signal has been inverted at the reservoir level. This signal correlates well with compaction patterns predicted by the reservoir model and brings essential information on compartmentalisation and vertical connectivity within the reservoir. Both the 4D attributes and the 4D-calibrated geomechanical model are used in various operational studies targeting the reservoir and the overburden, confirming 4D as a key source of information for improving the understanding of the Elgin Field.
Coil Shooting on Tulip Discovery – Processing Challenges and Results
Authors M. Buia, R. Vercesi, D. Nikolenko, A. T. Waluyo, M. Tham and S. L. Ng
The circular (Coil) shooting geometry, compared with conventional 3D marine acquisition, introduces several differences and a number of new challenges across the modeling, acquisition and processing workflows. This paper describes the positive seismic processing experience gained on the Tulip Coil data in Indonesia. After an ad hoc sequence, aimed both at overcoming the challenges and at taking advantage of the opportunities offered by circular shooting, the results proved far superior to the vintage seismic data in the area.
Multi-azimuth High-resolution Tomography – Application to Offshore Nile Delta
Authors D. W. van der Burg, S. Lin, C. Zhou and J. Jiao
Multi-azimuth acquisition has been shown to be beneficial for attenuating multiple diffraction energy and improving illumination. In this case study on a multi-azimuth dataset from the Nile Delta we demonstrate that multi-azimuth reflection tomographic inversion is able to resolve smaller scale velocity variations than narrow-azimuth inversion, and hence enables more accurate velocity model building in depth. Multi-azimuth inversion is capable of resolving these small scale velocity heterogeneities because of the azimuthal diversity. The gathers obtained from multi-azimuth inversion are flatter and the stack is improved compared to narrow-azimuth inversion. The resolved small scale velocity variations correlate very well with geology.
Exploring Oligocene Targets Using Multi-azimuth Acquisitions – Application of Non-linear Slope Tomography
Authors J. P. Gruffeille, P. Guillaume, G. Lambaré, H. Ladegaard, A. Spedding and A. El Fattah
Non-linear slope tomography offers an accurate and efficient way of combining dip and residual-moveout information from several narrow-azimuth acquisitions. We present an application of non-linear slope tomography to a multi-azimuth acquisition in the western Mediterranean Sea. We demonstrate the improved resolution obtained in the velocity model thanks to the complementarity and redundancy of the azimuths.
The Benefits of Multi-azimuth Depth Migration over the Tidepole Field, North West Shelf, Australia
Authors D. Dickinson and T. Ridsdill-Smith
The Tidepole gas field, located in WA-5-L on the North West Shelf, Australia was discovered in 1971. The field is covered by the high-resolution Demeter 3D seismic survey acquired over the central North West Shelf in 2003, but further appraisal of the Tidepole field remained difficult due to poor seismic data quality. The successful results from model studies led to the acquisition in 2007 of two additional azimuths over the Tidepole field to complement the existing Demeter data. The two new datasets were acquired at azimuths 60 and 120 degrees apart from Tidepole to maximize the azimuthal coverage. Time domain processing of the multi-azimuth data revealed the existence of strong localized velocity differences in the different acquisition directions. To resolve these differences multi-azimuth anisotropic pre-stack depth migration was proposed. During the model building process the conventional residual moveout from all three azimuths was used to perform a multi-azimuth tomography. Initially an isotropic model was used to solve for heterogeneity. Then anisotropic layers were introduced where residual moveout remained. The resulting single model was used to migrate all three azimuths to the same depth. This paper demonstrates the benefits of multi-azimuth depth migration.
-
-
-
Using High–density OBC Seismic to Optimise the Andrew Satellites Development
Authors L. Padmos, D. Davies, M. Davies and J. McGarrity
Successful development of the Andrew satellite fields (Central North Sea) will depend strongly on the ability to place development wells in an optimal position in the poorly imaged, subtle structures. Towed-streamer seismic quality is poor at the Palaeocene reservoir level due to the presence of anomalously fast Eocene sand channels in the overburden. In order to achieve a step change in data quality, a 140 km² high shot density, wide-azimuth Ocean Bottom Cable (OBC) seismic survey has been acquired over Kinnoull, Kidd and Farragon. The survey was acquired between December 2008 and July 2009, and the processed P/Z product was on the workstation in December 2009. The new high-density OBC data has been very successful in improving the imaging below the Eocene channels. Signal-to-noise ratio and resolution have been improved, resulting in reservoir reflections that can be interpreted with much more confidence. The OBC data has also allowed a better estimation of the zero-offset and gradient AVO products. The improved seismic quality will play an important role in optimising well trajectories and help de-risk upside development options.
-
-
-
5D Data Reconstruction Using the Anti-leakage Fourier Transform
By G. Poole Ltd
Wide-azimuth datasets allow us to incorporate more dimensions into data regularisation. Using a multi-dimensional Fourier transform that handles irregular data we can either output traces on a regular grid or interpolate additional source and receiver lines. This allows us to fill holes more effectively and regularise in the offset and azimuth directions whilst preserving amplitude variation with offset and azimuth (AVO and AVAz). We show how the anti-leakage Fourier transform can be used with four spatial dimensions and use OBC and land data examples to highlight improvement in continuity and accurate reconstruction of missing data.
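The abstract leaves the algorithm implicit. The core anti-leakage idea is to estimate the strongest Fourier component on the irregular grid, subtract its contribution (and hence its spectral leakage) from the data, and repeat. A toy one-dimensional Python sketch, with function names and parameters of my own choosing rather than the authors' 4D implementation:

```python
import numpy as np

def alft_1d(x, d, wavenumbers, n_iter=60):
    """Toy 1D anti-leakage Fourier transform: repeatedly estimate the
    strongest Fourier component of the residual on the irregular grid,
    accumulate it, and subtract its contribution to suppress leakage."""
    res = d.astype(complex)
    coeffs = np.zeros(len(wavenumbers), dtype=complex)
    for _ in range(n_iter):
        c = np.array([np.mean(res * np.exp(-2j * np.pi * k * x))
                      for k in wavenumbers])
        j = int(np.argmax(np.abs(c)))            # strongest component wins
        coeffs[j] += c[j]
        res = res - c[j] * np.exp(2j * np.pi * wavenumbers[j] * x)
    return coeffs

def evaluate(coeffs, wavenumbers, x_out):
    """Evaluate the estimated spectrum on any (e.g. regular) output grid."""
    return np.real(sum(c * np.exp(2j * np.pi * k * x_out)
                       for c, k in zip(coeffs, wavenumbers)))

# Usage: reconstruct a 3-cycle cosine from 80 irregularly placed samples
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 80))
d = np.cos(2 * np.pi * 3.0 * x)
ks = np.arange(-8.0, 9.0)
coeffs = alft_1d(x, d, ks)
x_reg = np.linspace(0.0, 1.0, 50, endpoint=False)
err = np.max(np.abs(evaluate(coeffs, ks, x_reg) - np.cos(2 * np.pi * 3.0 * x_reg)))
```

The greedy subtraction is what distinguishes this from a plain non-uniform DFT, which would leak energy between wavenumbers on irregular grids; the 5D case in the paper applies the same loop over four spatial dimensions.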
-
-
-
Can We Correct for Azimuthal Variations of Residual Moveout in Land WAZ Context Using Depth Non-linear Slope Tomography?
Authors S. Zimine, G. Lambaré, P. Guillaume, J. P. Montel, J. P. Touré, N. Deladerrière, X. Zhang, A. Prescott, D. Lecerf, S. Navion, J. L. Boelle, A. Belmokhtar and A. Ladmek
High-density wide azimuth (WAZ) land surface acquisitions have demonstrated superior imaging capabilities. However, processing of such data exhibits several challenges related to the traditionally poor S/N ratio of land recording and the necessity of reconciling the kinematics of the various azimuths. We present an imaging case history involving WAZ non-linear slope tomography. Based on kinematic invariants, velocity model update is performed both in depth and time from the same picking. Our dense automated dip and residual move-out picking is done on an initial pre-stack time migrated (PreSTM) dataset after application of a structurally consistent filtering that greatly improves the S/N ratio. Our case study demonstrates that non-linear slope tomography in the depth domain greatly improves the imaging of the structures when compared to the initial PreSTM result. Even if tomography in the time domain significantly enhances imaging, it cannot successfully honour the kinematics of the various azimuths within the constraints of time imaging assumptions. On the contrary, WAZ non-linear slope tomography in the depth domain offers an efficient way to reconcile these kinematics, thus promoting the use of depth imaging when processing high-density WAZ data, even in the context of mild geological complexity.
-
-
-
Analysis Methodology for Azimuthal Anisotropy
Authors K. Bishop, A. Osadchuk and M. Stanley
Here is a prescription for the implementation of a data-driven analysis procedure for determining accurate horizontal anisotropic velocity parameters. The high-resolution method is simple and fast and can be quickly applied to 3D surveys. Wide- and multiple-azimuth datasets are making it clear that azimuthal anisotropy is more prevalent than most imagers realize. In many regions, significant horizontal stress can deform the rock or pore matrix or cause fracturing that results in directionally dependent horizontal velocity or horizontal transverse isotropy (HTI). This method extends the parameterization accuracy by defining an elliptical gradient technique. The normally abundant spatial sampling of data provides sufficient statistics for computing gradient surfaces and finding the minimum to solve for the anisotropy parameters.
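As a rough illustration of the elliptical (HTI) parameterization mentioned above: azimuthal velocity variation is often modeled as V(theta) ~ a0 + a1·cos(2·theta) + a2·sin(2·theta), which is linear in the coefficients and can be fit by least squares. A hypothetical sketch, not the authors' gradient-surface procedure:

```python
import numpy as np

def fit_velocity_ellipse(az_deg, v):
    """Least-squares fit of an elliptical azimuthal velocity variation
    V(theta) ~ a0 + a1*cos(2*theta) + a2*sin(2*theta), the usual HTI form.
    Returns mean velocity, variation magnitude, and fast direction (deg)."""
    th = np.radians(az_deg)
    G = np.column_stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)])
    a0, a1, a2 = np.linalg.lstsq(G, v, rcond=None)[0]
    anis = np.hypot(a1, a2)                          # magnitude of variation
    fast_az = 0.5 * np.degrees(np.arctan2(a2, a1))   # azimuth of fast velocity
    return a0, anis, fast_az % 180.0

# Synthetic check: 2500 m/s mean, 100 m/s variation, fast direction 30 deg
az = np.arange(0.0, 360.0, 15.0)
v_obs = 2500.0 + 100.0 * np.cos(2 * np.radians(az - 30.0))
vm, an, fa = fit_velocity_ellipse(az, v_obs)
```

The 180-degree ambiguity in the fast direction is intrinsic to the cos(2·theta) periodicity, which is why the result is reported modulo 180.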
-
-
-
Controlled-source Electromagnetic Modelling Studies – Utility of Auxiliary Potentials for Low-frequency Stabilization
Authors R. Streich, C. Schwarzbach, M. Becken and K. Spitzer
When simulating controlled-source electromagnetic data at frequencies significantly lower than 1 Hz, standard formulations based, e.g., on the vector Helmholtz equation for the electric field, may fail to produce faithful results, because the term in the Helmholtz equation involving conductivity decays linearly with frequency. To stabilize low-frequency simulations, we introduce an auxiliary potential, constrained to be zero by boundary conditions, and explicitly enforce a divergence condition. We present an implementation of our stabilization technique within a finite-difference frequency-domain algorithm. The utility of the stabilized approach is demonstrated by showing synthetic data for frequencies as low as 0.001 Hz for a 3D model roughly mimicking the geometry of the CO2 sequestration pilot site in Ketzin, Germany. In these synthetic data, we observe that instability occurring primarily in the air for non-stabilized simulations disappears for stabilized computations. As expected, the auxiliary potential assumes near-zero values, but its amplitude increases with decreasing frequency due to numerical limitations. The amplitude characteristics of the auxiliary potential, and good accuracy of our synthetic data in comparison with 1D simulation results, suggest that the stabilization is usable down to frequencies at which electromagnetic fields can effectively be considered static.
-
-
-
Exploiting the Airwave for Land CSEM Reservoir Monitoring
Authors M. Wirianto, W. A. Mulder and E. C. Slob
Displacement of oil by saline water leads to time-lapse differences in CSEM measurements. Because the time-lapse effect may be weak and difficult to observe, acquisition optimization plays an important role. Applying a horizontal dipole source on the surface creates a source-induced airwave component. We studied the effect of the airwave on time-lapse land CSEM measurements of resistivity changes after oil production by running numerical simulations for several configurations. Our study shows that the source-induced airwave is the predominant component in increasing the amplitude of the time-lapse fields, the difference between the recorded EM data before and after oil production. Moreover, the lateral extent of the depleted part of the oil reservoir is much better defined when the airwave is present.
-
-
-
Using CSEM to Monitor the Production of a Complex 3D Gas Reservoir – A Synthetic Case Study
Authors D. L. Andreis and L. M. MacGregor
The marine CSEM method is currently being applied to the problem of detecting and characterizing hydrocarbons in a variety of settings for exploration purposes. Because of its physics it can, on one hand, distinguish between fluids within formations. On the other hand, it can suffer from poor resolution in positioning those bodies in the vertical plane. The purpose of this presentation is to highlight the possibilities and challenges the CSEM technique will have to face to successfully play a key role during the monitoring phase of a hydrocarbon reservoir's life cycle. This will be illustrated using synthetic data from a complex meandriform anticline channel charged with gas.
-
-
-
Fast-track Marine CSEM Processing and 3D Inversion
Authors J. P. Morten and A. K. Bjørke
We present CSEM 3D inversion results where the receiver rotation angle relative to North is treated as a free parameter in the optimization. This allows us to omit receiver orientation estimation and data rotation pre-processing in the workflow. Moreover, the initial resistivity model is prepared independently from the CSEM data using available seismic and well log information. In this way, calibrated frequency domain data can be input directly to the anisotropic 3D inversion tool, with arbitrary initial values for the rotation angles. Synthetic data is generated using speed-optimized modeling parameters. This demonstrates a fast-track 3D inversion workflow which can deliver a 3D inverted resistivity volume within a few days after the data has been uploaded from the vessel.
-
-
-
Comparison of Marine TEM and FEM Techniques – Offshore West Africa
Authors L. D. Masnaghetti, M. Watts and G. Cairns
Marine CSEM data acquired using a continuously towed source and a 50% duty-cycle waveform were processed to give both time- and frequency-domain responses from an array of receivers in water about 100 m deep. 1D and 2.5D inversion gives results which are consistent with seismic data.
-
-
-
The Value of CSEM Data in Exploration
Authors A. Buland, L. O. Løseth and T. Røsten
A recent internal Statoil review of CSEM performance in exploration shows clear progress with time from the first early surveys to the more recent surveys. CSEM in hydrocarbon exploration has a short commercial history of less than ten years, and the progress can be explained by improvements during these years in acquisition, processing, interpretation techniques, experience, skills and tools. The economic value of CSEM data can be predicted for specific exploration settings using standard decision analysis. Based on performance tracking and review of the prediction strength, conservative estimates of the economic value of CSEM data can be more than 10 times the typical cost of a CSEM survey and analysis.
-
-
-
Scalable Solutions for Nonlinear Inverse Uncertainty Using Model Reduction, Constraint Mapping, and Sparse Sampling
Authors M. J. Tompkins and J. Fernandez Martinez
We present a general nonlinear inverse uncertainty estimation method that allows for the comprehensive search of model posterior space while maintaining computational efficiencies similar to deterministic inversions. Integral to this method is the combination of a parameter reduction technique, like Principal Component Analysis, a parameter bounds mapping routine, a sparse sampling scheme, and a forward solver. Parameter reduction, based on linearized model covariances, is used to reduce the model space by orders of magnitude. Parameter constraints are then mapped to this reduced space, using a linear programming scheme, defining a bounded posterior polytope. Sparse deterministic grids are employed to sample this feasible model region, while forward evaluations determine which model samples are equi-probable. The resulting ensemble represents the equivalent model space, consistent with Principal Components, that is used to infer uncertainty measures. The number of forward evaluations is determined adaptively and minimized by finding the sparsest sampling required for convergence of uncertainty measures. We demonstrate, with a surface electromagnetic example, that this method has the potential to reduce the nonlinear inverse uncertainty problem to a deterministic sampling problem in only a few dimensions, requiring limited forward solves, and resulting in an optimally sparse representation of the posterior model space.
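The parameter-reduction step can be illustrated in isolation. The sketch below projects a model ensemble onto its leading principal components and reconstructs it; the bounds mapping, sparse-grid sampling, and forward solver of the full method are omitted, and all names are hypothetical:

```python
import numpy as np

def pca_reduce(models, n_keep):
    """Project a model ensemble (n_models x n_params) onto its leading
    principal components, reducing the dimension of the search space."""
    mean = models.mean(axis=0)
    # SVD of the centered ensemble; rows of Vt are principal directions
    _, _, Vt = np.linalg.svd(models - mean, full_matrices=False)
    basis = Vt[:n_keep]                 # reduced basis (n_keep x n_params)
    coeffs = (models - mean) @ basis.T  # each model as n_keep coefficients
    return mean, basis, coeffs

# Synthetic ensemble: 200 models in 50 parameters that effectively live
# on 3 latent directions plus small noise, so 3 PCs should suffice
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
models = latent @ mixing + 0.01 * rng.normal(size=(200, 50))
mean, basis, coeffs = pca_reduce(models, 3)
recon = mean + coeffs @ basis
err = np.max(np.abs(recon - models))
```

In the full method, sampling then happens in the 3-coefficient space (subject to the mapped constraints) rather than in the original 50-parameter space, which is where the orders-of-magnitude saving comes from.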
-
-
-
Sensitivity Study of Multi-sources Receivers CSEM Data for TI-anisotropy Medium using 2.5D Forward and Inversion Algorithm
Authors A. Abubakar, J. Liu, M. Li, T. M. Habashy and K. MacLennan
Our study analyzes the sensitivity of multicomponent electromagnetic fields to the reservoir conductivity. We assume the reservoir to be transversely isotropic (TI), and the field components include both electric and magnetic data. Several authors have studied the 1D case and shown that the response from a TI-anisotropic thin resistive layer is dominated by the vertical conductivity, but to date, anisotropy in a 2D model has not been adequately studied. In addition to electric dipole sources, we also investigated the sensitivity of electric and magnetic fields due to magnetic dipole sources, which could provide complementary information on the reservoir conductivity.
-
-
-
Seismic Imaging with Multiple Scattering
Authors A. J. Berkhout and D. J. Verschuur
This paper focuses on the promising concept of using multiple scattering in the migration process. This means that the input of the proposed migration algorithm consists of seismic data that includes (blended) multiples. It also means that both primary and multiple scattering contribute to the seismic image of the subsurface, utilizing the double illumination property.
-
-
-
Angle-dependent Least-squares Imaging of Incoherent Wavefields
Authors D. J. Verschuur and A. J. Berkhout
In traditional seismic acquisition, temporal overlap between the recordings of consecutive shots is avoided. In this paper we investigate the proposal to abandon this acquisition constraint by introducing dense source distributions in an overlapping fashion ('blended acquisition'). The result is an improved source space with a favorable azimuth distribution. We show that blended measurements can be directly used in an angle-dependent migration process with a multi-shift least-squares imaging condition, such that a densely sampled illumination is obtained in the ray-parameter domain.
-
-
-
Prestack Data Enhancement Using Local Traveltime Approximation
Authors V. Buzlukov, R. Baina and E. Landa
The quality of recorded seismic data depends on many factors such as complexity of the subsurface, strong noise levels, the topography of the earth's surface, near-surface inhomogeneities, etc. We propose to use a local common-offset (CO) approximation for the traveltime stacking surface description. It allows us to approximate traveltimes of reflection events in the vicinity of an arbitrary ray and thus an arbitrary offset. We present the general workflow and the implementation of a signal enhancement scheme.
-
-
-
3D Prestack Beam Migration with Compensation for Frequency Dependent Absorption and Dispersion
Authors Y. Xie, C. Notfors, J. Sun, K. Xin, A. Biswal and M. Balasubramaniam
Spatial variations in the transmission properties of the overburden cause seismic amplitude attenuation, wavelet phase distortion and seismic resolution reduction on deeper horizons. This poses problems for seismic interpretation, tying of migration images with well-log data, and AVO analysis. We developed an efficient prestack beam Q migration approach to compensate for the frequency-dependent dissipation effects in the migration process. A 3D tomographic amplitude inversion approach may be used for the estimation of the absorption model. Examples show that the method can mitigate these frequency-dependent dissipation effects caused by transmission anomalies and should be considered as one of the processes for amplitude-preserving processing that is important for AVO analysis when transmission anomalies are present.
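For orientation, the amplitude part of Q compensation is often written as the inverse of the absorption factor exp(-pi·f·t/Q), applied in the frequency domain with a gain cap for stability. A minimal sketch of that amplitude-only correction (it ignores the phase/dispersion correction the paper also compensates, and is not the authors' beam implementation; all names are mine):

```python
import numpy as np

def q_compensate(trace, dt, q, t_travel, max_gain=100.0):
    """Frequency-domain gain undoing anelastic absorption exp(-pi*f*t/Q).
    The gain is capped at max_gain so ambient noise is not blown up at
    frequencies where the signal has been attenuated into the noise."""
    n = len(trace)
    f = np.fft.rfftfreq(n, dt)                       # frequencies in Hz
    gain = np.minimum(np.exp(np.pi * f * t_travel / q), max_gain)
    return np.fft.irfft(np.fft.rfft(trace) * gain, n)

# Usage: attenuate a wavelet as if it traveled 1 s through Q=50 material,
# then restore it with the capped inverse gain
dt, n = 0.002, 512
t = np.arange(n) * dt
w = np.exp(-0.5 * ((t - 0.3) / 0.02) ** 2)           # smooth test wavelet
f = np.fft.rfftfreq(n, dt)
attenuated = np.fft.irfft(np.fft.rfft(w) * np.exp(-np.pi * f * 1.0 / 50.0), n)
restored = q_compensate(attenuated, dt, 50.0, 1.0)
err = np.max(np.abs(restored - w))
```

In a migration context t_travel varies per ray/beam, which is why the compensation belongs inside the migration operator rather than as a single post-stack filter.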
-
-
-
Pure P-wave Propagators Versus Pseudo-acoustic Propagators for RTM in VTI Media
Authors X. Du, R. P. Fletcher and P. J. Fowler
We propose a new approach to obtain a pure P-wave propagator for vertical transversely isotropic (VTI) media. We compare it with existing "pseudo-acoustic" propagators as modeling kernels for prestack reverse time migration (RTM).
-
-
-
Angle Gathers by Reverse-time Migration
Authors M. Vyas, E. Mobley, D. Nichols and J. Perdomo
One of the ways to compute angle gathers for reverse-time migration (RTM) is through time lags. An algorithm for the transformation of time lags to angles has been proposed previously by other authors. However, the proposed workflow requires a dip field and is increasingly inaccurate in areas with conflicting dips and for dips greater than 60 degrees. In this paper we propose methods to overcome the dip limitation and to perform the angle transformation without explicitly using the dip field. The angle gathers thus obtained can be used for migration velocity analysis or to improve the image quality through stacking. We use a synthetic 2D model to compare results obtained using our proposed methods and the conventional workflow. Finally, we show RTM angle gathers for the SEAM model using our methods.
-
-
-
Kirchhoff Beam Q Migration
Authors J. Liu and G. Palacharla
We developed a Kirchhoff beam Q migration method that combines the advantages of beam migration and Q migration to correctly handle both multi-arrivals and amplitude/frequency restoration. The method was successfully applied to a 2-D synthetic data set and produced images that are superior to those from single-arrival Kirchhoff Q migration or Kirchhoff beam migration without Q compensation.
-
-
-
Impact of TTI Anisotropy on Elastic and Acoustic Reverse Time Migration
Authors R. Lu, P. Traynin, J. E. Anderson and T. Dickens
In this study, we use synthetic seismic data to demonstrate the impact of proper anisotropic parameterization as well as the choice of elastic migration algorithm on the output image. We synthesize an elastic dataset using a 2D TTI elastic model based on the SEAM 3D model. Then we migrate this dataset using isotropic acoustic RTM, VTI acoustic RTM, TTI acoustic RTM, and TTI elastic RTM algorithms, respectively. Image quality improves progressively as we use more realistic models as well as a more accurate elastic wave-propagation engine. Upgrading from TTI acoustic RTM to TTI elastic RTM results in better delineation of the salt boundary and better sediment termination against faults and salt. These differences result from the fundamental limitations of migrating elastic data using acoustic RTM and from the proper positioning of converted waves.
-
-
-
Airborne TDEM by He-filled balloon
Performing Time Domain Electromagnetic surveys in rugged terrain is a challenging task and, as an alternative to laboriously laying out loops or using a helicopter system, a central-loop configuration Time Domain EM system has been fitted to a 5 m diameter He-filled balloon with the capacity to lift a ~48 kg payload. The transmitter and receiver loops have diameters of 10 and 5 m, respectively. The balloon is handled by an operator and 3 assistants and measures while drifting above the topography. The TDEM electronics was custom-built by Elta-Geo in Novosibirsk, Russia and records the soundings to an HP iPAQ handheld PC via Bluetooth communication. A maximum transmitter current of up to 20 A is possible using a conventional half-sine waveform. One of the areas where the balloon was employed was the N'teisha gold occurrence in Yemen, where the gold is situated in narrow shear zones, between 0.5 and 2 m wide. Although the shear zones are possibly too narrow to be detected by the TDEM system, an interesting conductivity anomaly was detected in the vicinity of the shear zones that has still to be further investigated.
-
-
-
Shallow Water CSEM Using a Surface-towed Source
Authors D. V. Shantsev, F. Roth, C. Twarz, A. Frisvoll and A. K. Nguyen
We address challenges for using controlled-source electromagnetic (CSEM) surveys at small water depths. A novel deployment setup is proposed in which the electrodes of a conventional CSEM source are suspended from two GPS-positioned buoys and towed 10 m below the sea surface. This setup allows better control of the source position and orientation along with improved speed and manoeuvrability, as demonstrated by a test survey in the North Sea. The finite-difference 3D modelling code used in data interpretation has been improved by a careful representation of sources and receivers, allowing accurate modelling in shallow water. Modelling results assuming a 2.2 km deep resistor demonstrate that a surface-towed source has essentially the same efficiency in detecting the target as the traditional deep-towed source if the water depth is within a few hundred meters. Additional attenuation of EM fields travelling through the water layer for surface towing may be compensated by a better knowledge of these fields due to precise control of source position and orientation.
-
-
-
Case Study – A Towed EM Test at the Peon Discovery in the North Sea
Authors J. M. Mattsson, L. L. Lund, J. L. Lima, F. E. Engelmark and A. M. McKay
A newly developed towed EM system has been tested offshore in the North Sea. In this paper we use modeling and inversion to investigate the ability of the towed EM system to detect and characterize a shallow gas discovery. We show that the measured electric field data are of sufficient quality and signal-to-noise ratio for successful detection and inversion of the high-resistivity reservoir area, including distinction of some of the shallow gas accumulations above the reservoir. 1D inversion in the frequency domain has been performed on individual common midpoints (CMPs) along a survey line across the reservoir with robust results. The estimated model from the 1D inversion outside the reservoir is used as the 1D background model in 3D modeling. The imposed 3D resistivity model is based on seismic data and interpreted horizons. It is concluded from the 3D modeling that the resistivity values in the reservoir obtained from 1D inversion are lower than those in the refined 3D model, which is also supported by the well-log data.
-
-
-
3D Inversion of Transient EM Data – A Case Study From the Alvheim Field, North Sea
Authors B. A. Hobbs, M. S. Zhdanov, A. Gribenko, A. Paterson, G. Wilson and C. Clarke
We present a case study leading to the 3D inversion of transient electromagnetic (EM) data for delineating reservoir extent at the Alvheim field in the Norwegian sector of the North Sea. The survey was conducted in July and August 2008 using one method of marine EM surveying, namely a two-ship operation and ocean bottom cables. One ship laid a receiver cable with 30 receivers on the sea floor, and the second ship placed a source cable on the sea floor which was used to generate a coded transient signal. The configuration of the source and receiver spread was analogous to 2D seismic acquisition, as the system was rolled along to obtain multi-fold coverage of the subsurface. The survey spanned 20 km, resulting in measurements at 1270 source-receiver locations. The measured electric field for each source-receiver pair was deconvolved by the measured source current to determine the impulse response function. Preliminary inversions were made for each source-receiver pair using a 1D model, and the results were stitched to a 2D image. Having defined a background model, all data were then simultaneously inverted in 3D with focusing regularization. This revealed high-resistivity volumes corresponding to the known hydrocarbon-bearing reservoirs of the Alvheim field.
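The deconvolution step described above can be illustrated with a water-level frequency-domain division, a standard way to stabilize an impulse-response estimate when the source spectrum is band-limited. The sketch below is a generic illustration with invented names, not the authors' processing code:

```python
import numpy as np

def deconvolve(recorded, source, eps=1e-3):
    """Water-level deconvolution: estimate the impulse response h from
    recorded = source (*) h. The water level eps*max|S| prevents division
    by near-zero spectral values outside the source bandwidth."""
    S = np.fft.fft(source)
    R = np.fft.fft(recorded)
    wl = eps * np.max(np.abs(S))
    H = R * np.conj(S) / np.maximum(np.abs(S) ** 2, wl ** 2)
    return np.real(np.fft.ifft(H))

# Usage: recover a two-spike impulse response from a smooth source wavelet
n = 256
t = np.arange(n)
source = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)        # band-limited source
h_true = np.zeros(n)
h_true[40] = 1.0
h_true[90] = -0.5
recorded = np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(h_true)))
h_est = deconvolve(recorded, source)
```

Because the division removes the source's own delay, the estimated spikes sit at their true positions (samples 40 and 90), band-limited to the usable part of the source spectrum.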
-
-
-
Inversion of CSEM Data in the Presence of Shallow Resistive Inhomogeneities
Authors L. D. Masnaghetti, M. Watts and L. Bornatici
Marine controlled-source EM data can be strongly distorted by shallow conductive (shale diapirs) or resistive (salt) bodies. Interpretation strategies using a priori models and sharp-boundary inversion are discussed.
-
-
-
2.5D Anisotropic Inversion of Marine CSEM Survey Data – An Example from West Africa
Authors C. J. Ramananjaona and L. M. MacGregor
An example of 2D interpretation of marine CSEM survey data is presented. Following the conclusion of initial 1D modelling of the response measured in line with the source dipole, it is shown that assuming vertical anisotropy of the 2D earth model improves the inversion results significantly compared to the isotropic assumption.
-
-
-
3D Iterative Migration of Marine Controlled–source Electromagnetic Data with Focusing Regularization
Authors M. S. Zhdanov, M. Cuma and G. A. Wilson
We present our implementation of an iterative migration algorithm for marine controlled-source electromagnetic (MCSEM) data based on the 3D integral equation method with inhomogeneous background conductivity and focusing regularization with a priori terms. The use of focusing stabilizers makes it possible to recover subsurface models with sharper geoelectric contrasts and boundaries than can be obtained using traditional smooth stabilizers. The method is implemented in a fully parallelized code which makes it practical to run large-scale 3D iterative migration within a day on multi-component, multi-line MCSEM surveys for models with millions of cells. We present a suite of interpretations obtained from different iterative migration scenarios for a synthetic 3D MCSEM survey computed from a very detailed model of stacked anticline structures and reservoir units of the Shtokman gas field in the Russian sector of the Barents Sea.
-
-
-
Understanding Wave-equation Image-gathers – A Prerequisite for Advanced Post-processing and Velocity Model Building
Authors J. Sirgue, P. Jousselin and F. Audebert
Wave-equation migration has evolved into a routine tool to generate final stack images, but the computation of wave-equation image gathers has not yet spread through the industry. Nevertheless, provided a good understanding of wave-equation gathers' typology, this prestack information proves to be useful to improve final wave-equation stack images and to build accurate velocity models. In this paper, we explore these new image-gathers, describe their behaviour in terms of true specular events as opposed to artefacts, and propose solutions for the application of these image-gathers in improved imaging and in velocity model building. We first describe what image-gathers are produced by wave-equation migration. Then we characterize the specular events and the acquisition and processing artefacts they exhibit. On analytical and synthetic image-gathers, we illustrate the artefacts linked with acquisition truncation. Although feeding tomography with these gathers may require a sophisticated method to recover only the specular part, an adapted mute can prove very efficient in a post-processing workflow. We illustrate on a real 3D subsalt dataset how the use of wave-equation image gathers significantly improves the seismic stack.
-
-
-
On the Gradient of Wave-equation Migration Velocity Analysis
Wave Equation Migration Velocity Analysis (WEMVA) is an image-domain tomography based on a wave-equation propagator (one-way or two-way). Different algorithms for residual calculation, propagation and optimization add varying flavors to WEMVA. The two most popular ways to compute the residual are either using Differential Semblance Optimization (DSO) or using Differential Residual Migration (DRM). In this paper, we present the relevant theory to understand the difference these two algorithms make in the final gradient calculation. We compare and contrast the two methods with the help of numerical examples. Both methods have some advantages and disadvantages over the other, and these should be kept in mind while making the choice. The theory and results presented are based on one-way wave-equation migration.
-
-
-
Differential Semblance Wavefield Tomography Using Extended Images
Authors T. Yang and P. C. Sava
Wavefield tomography can be formulated based on extended images constructed at discrete locations in the subsurface, i.e. by analysis of extended common-image-point gathers. Such gathers can be built at locations that conform to the geologic structure, thus achieving optimal sampling of the image. The extended CIPs indicate velocity error by defocusing from the origin of the space- and time-lag domain. We can formulate an objective function based on this information by penalizing the image departure from its ideal shape. Such a penalty resembles the process used for differential semblance optimization on more conventional common-image gathers and is characterized by similar robustness and simplicity.
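The penalty described above can be illustrated on a toy space-lag gather: image energy is weighted by the lag magnitude, so a gather focused at zero lag (correct velocity) scores low and a defocused gather (velocity error) scores high. A hypothetical sketch, not the authors' formulation:

```python
import numpy as np

def dso_penalty(cip, lags):
    """Differential-semblance-style penalty on an extended image gather:
    energy away from zero lag is weighted by |lag|, normalized by total
    energy so the score measures defocusing rather than amplitude."""
    energy = np.sum(cip ** 2)
    return np.sum((lags[:, None] * cip) ** 2) / energy

# Toy space-lag gathers for 50 image points: a narrow lag distribution
# stands in for a focused image, a broad one for a defocused image
lags = np.linspace(-10.0, 10.0, 21)
depth = np.ones(50)
focused = np.exp(-0.5 * (lags[:, None] / 1.0) ** 2) * depth
defocused = np.exp(-0.5 * (lags[:, None] / 4.0) ** 2) * depth
j_focused = dso_penalty(focused, lags)
j_defocused = dso_penalty(defocused, lags)
```

In a tomography loop this scalar (summed over CIP locations) is the objective whose gradient with respect to the velocity model drives the update.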
-
-
-
Local Plane Wave Tomography
Authors D. Lokshtanov, J. K. Lotsberg and E. Kurin
We propose a local plane wave tomography where we use the output of a wave equation migration in a consistent manner as input to the tomography. The tomography is based on an efficient layer-stripping inversion of residual moveout of offset slowness gathers after double-square-root migration. The method does not require closeness of the initial model to the true model and it does not involve ray tracing through complex overburden. The results of testing on 2D synthetic FD data from the SEG-EAGE salt model are very promising. In the current work we use split-step-Fourier migration, but it is straightforward to extend the suggested approach to any other migration which provides information on incident and reflected angles at the image point. The considered tomography itself is very fast, but the whole layer-stripping process can be quite time consuming. The method requires at least one new migration for each layer included in the velocity model. Therefore, optimally, it should be used for target oriented velocity inversion in combination with faster approaches for simple overburden. Also, fast and reliable auto-picking of residual moveout is of crucial importance.
-
-
-
Acoustic Wave Equations for a Virtual Source Shift
The direct relation between the shape of the wavefield and the source location can provide insights useful for velocity estimation and interpolation. As a result, I derive partial differential equations that relate changes in the wavefield shape to perturbations in the source location, especially along the Earth's surface. These partial differential equations have the same structure as the wave equation, with a source function that depends on the background (original source) wavefield. The similarity in form implies that we can use familiar numerical methods to solve the perturbation equations, including finite difference and downward continuation. The solutions of the perturbation equations represent the coefficients of a Taylor's series type expansion for the wavefield. As a result, we can speed up the wavefield calculation because we can approximate the wavefield shape in the vicinity of the original source. The new formula introduces changes to the background wavefield only in the presence of lateral velocity variation or, in general terms, velocity variations in the perturbation direction. The accuracy of the representation is demonstrated on the Marmousi model.
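The Taylor-expansion idea can be illustrated with an analytic 1D wavefield: a first-order source-perturbation coefficient (here approximated by finite differences rather than by solving the perturbation equations the abstract derives) predicts the wavefield for a nearby source location. All names and parameters are illustrative:

```python
import numpy as np

def wavefield(x, t, s, c=1.0, width=0.1):
    """Analytic 1D wavefield: a Gaussian pulse radiated from a source at s
    in a homogeneous medium with velocity c."""
    return np.exp(-0.5 * ((t - np.abs(x - s) / c) / width) ** 2)

x = np.linspace(-2.0, 2.0, 401)
t, s0, ds = 1.5, 0.0, 0.02
# First-order coefficient of the Taylor expansion in the source location,
# approximated here by a central finite difference
du_ds = (wavefield(x, t, s0 + 0.005) - wavefield(x, t, s0 - 0.005)) / 0.01
u_taylor = wavefield(x, t, s0) + ds * du_ds     # first-order prediction
u_exact = wavefield(x, t, s0 + ds)
err_taylor = np.max(np.abs(u_taylor - u_exact))
err_zeroth = np.max(np.abs(wavefield(x, t, s0) - u_exact))
```

The first-order prediction should beat simply reusing the original-source wavefield, which is the saving the abstract points to: one background solve plus cheap perturbation coefficients instead of a full solve per source position.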
-
-
-
Velocity-independent "tau"-p Moveout in a Layered VTI Medium
Authors L. Casasanta and S. Fomel
Local seismic event slopes carry complete information about the structure of the subsurface. This information is enough to accomplish all time-domain imaging steps, without knowing the seismic velocity model. In this paper, we develop a velocity-independent tau-p imaging approach to perform moveout correction in 2D layered VTI media. The effective and interval anisotropic parameters turn into data attributes through the use of slopes and they are directly mappable to the zero-slope traveltime. The tau-p transform is the natural domain for anisotropy parameter estimation in layered media because it simplifies data processing and allows more accurate traveltime modeling and inversion than traditional t-x methods. Synthetic and field data tests show the practical effectiveness of our method.
-
-
-
Quantifying Structural Uncertainty in Anisotropic Depth Imaging – Gulf of Mexico Case Study
Authors K. Osypov, D. Nichols, Y. Yang, F. Qiao, M. O'Briain and O. Zdraveva
Quantifying structural uncertainty in the process of anisotropic model building is of paramount importance, especially in volumetrics estimation and drilling risk analysis. The modern concept of geophysical model building is based on integration of various data types and constraining by knowledge databases and targeted numerical modeling. Tomographic inversion of common-image-point gathers is the main engine for building the earth model. However, this inversion alone is very ambiguous, especially in the presence of anisotropy. Therefore, it is necessary to add well information and other measurements like electromagnetics and gravity in a simultaneous joint inversion process. Furthermore, geological knowledge, basin and geomechanical modeling, and lithoclassification are important constraints for model building. This novel methodology using uncertainty analysis delivers an entire suite of models that fit all available data equally well, allowing the user to select the most geologically plausible solution. In other words, uncertainty analysis has the capability to provide a new paradigm for model building. A case study for the Walker Ridge area of the Gulf of Mexico is presented in this paper.
-
-
-
Rapid Salt Model Scenario Testing with Fast Pre-stack Beam Migration
Authors A. Osen, S. K. Foss, M. Rhodes, D. Michel, C. Lojewski and N. Ettrich
We develop a system that enables the interpreter and the processor to quickly test different salt models and their impact on the seismic image. The migration algorithm is a parsimonious Kirchhoff-type method.
-
-
-
Simultaneous Joint Inversion as a Salt Detector in South Gabon Land Exploration
Authors M. Mantovani and T. Dugoujard
In the special case of non-Gardner bodies (e.g., salt) inside a standard Gardner background setting, simultaneous joint inversion offers a quantitative approach to QC interpreted layer boundaries for such formations. Misplacements or boundary errors in salt horizons appear as inconsistencies in the simultaneous joint inversion velocity and density outputs, and may therefore indicate layer-geometry corrections to otherwise regular solutions. In the case presented here, a boundary of validity between positive and negative velocity-resistivity correlation is detected by simultaneous joint inversion and used as a salt top/bottom edge indicator. For hydrocarbon exploration in the South Gabon sub-basin, because the seismic method alone was failing in salt and subsalt imaging, Perenco hypothesized that gravity data could be used to enhance the imaging of pre-salt seismic reflectors by providing additional input for prestack depth migration. Two vintage 2D seismic lines were reprocessed using simultaneous joint inversion technology.
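The "non-Gardner" criterion rests on Gardner's empirical relation rho = a * Vp^b. A minimal sketch of flagging salt-like deviations from it follows; the classic Gardner et al. (1974) coefficients are assumed, and the tolerance, values, and function names are illustrative rather than the authors' workflow:

```python
import numpy as np

# Gardner's relation: rho = a * Vp**b (Vp in m/s, rho in g/cc),
# with the classic Gardner et al. (1974) coefficients.
A, B = 0.31, 0.25

def gardner_density(vp):
    """Density predicted from P-velocity by Gardner's relation."""
    return A * np.asarray(vp) ** B

def flag_non_gardner(vp, rho, tol=0.10):
    """Boolean mask of samples whose observed density deviates from
    the Gardner prediction by more than `tol` (relative error),
    a crude indicator of salt-like (non-Gardner) bodies."""
    pred = gardner_density(vp)
    return np.abs(np.asarray(rho) - pred) / pred > tol

# Example: a clastic point close to Gardner vs. a salt point far from it.
vp = [3000.0, 4500.0]   # m/s
rho = [2.30, 2.16]      # g/cc; 2.16 is a typical halite density
```

Salt is a useful test case precisely because its density stays near 2.16 g/cc while its velocity is high, so the Gardner prediction overshoots by well over 10%.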
-
-
-
Gulf of Mexico Case Study of Localized Anisotropic Tomography with Checkshot
Authors A. V. Bakulin, Y. Liu and O. Zdraveva
Reliably deriving parameters for anisotropic depth models requires the use of borehole information. Localized tomographic inversion attempts to streamline and automate this process by directly incorporating the available well data into conventional reflection tomography. We present a case study from the Gulf of Mexico in which we perform local VTI anisotropic tomography on a joint dataset consisting of seismic and checkshot data. Because this area has low structural dip, the results can be compared with more traditional manual 1D layer-stripping inversion. Tomographic inversion for three VTI parameters produces a smooth velocity model that both fits the checkshot traveltimes and flattens all seismic gathers. To regularize the tomographic inversion, we apply smoothing operators that are oriented along the predominant dips of seismic events and have large lateral extent. The anisotropic profiles derived by tomography and by 1D inversion have similar low-frequency components but differ in finer details. Borehole data require careful conditioning before joint inversion because of potential differences in water velocity between seismic and well surveys. The workflow we present can be applied to calibrating anisotropic parameters in the more general case of 3D models with structural dip and borehole data from deviated wells.
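Conditioning checkshot data starts from interval velocities computed between receiver depths. A minimal sketch follows; the function name and example values are hypothetical and do not represent the authors' conditioning workflow:

```python
import numpy as np

def checkshot_interval_velocities(depths, times):
    """Interval velocities from a checkshot survey.

    depths: receiver depths (m), strictly increasing.
    times:  one-way vertical traveltimes (s) at those depths.
    Returns the interval velocity (m/s) within each depth interval,
    i.e. delta-depth over delta-time.
    """
    depths = np.asarray(depths, dtype=float)
    times = np.asarray(times, dtype=float)
    return np.diff(depths) / np.diff(times)

# Hypothetical checkshot: 0-1000 m at 2000 m/s, 1000-2000 m at 2500 m/s.
depths = [0.0, 1000.0, 2000.0]
times = [0.0, 0.5, 0.9]
```

Differences in assumed water velocity between the seismic and well surveys show up as a bulk time shift in `times`, which is why the abstract stresses conditioning the borehole data before joint inversion.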
-
-
-
Introducing A Priori Information in Non-linear Slope Tomography as Applied to the Minagish Survey in Kuwait
Authors P. Guillaume, D. Carotti, A. L. Al-Kandari, A. El-Emam and G. Lambaré
Introducing a priori information into velocity model building is certainly one of the most challenging problems in seismic imaging. In this context, non-linear slope tomography, which expresses the velocity model-building problem as a multi-data, multi-parameter optimization problem, appears particularly well adapted for dealing with various types of constraints. With an application to the land Minagish dataset in Kuwait, we address the problem of introducing information from wells into the velocity model-building process. We show how short vertical wavelengths of the velocity at wells, which cannot be estimated by the inversion process, can be incorporated in the tomography update, resulting in a very accurate depth velocity model that ensures both a fit at the wells and good focusing.
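The split between short and long vertical wavelengths of a well-log velocity can be illustrated with a simple moving-average decomposition. This is a generic sketch with hypothetical names, not the smoothing operator used by the authors:

```python
import numpy as np

def split_wavelengths(v_log, window):
    """Split a well-log velocity profile into long- and short-
    wavelength components with a moving average.

    v_log:  velocity samples along depth (1-D array).
    window: odd smoothing length in samples.
    Returns (long_wl, short_wl) with long_wl + short_wl == v_log,
    so the short-wavelength residual can be injected back into a
    tomographic model that only resolves the long wavelengths.
    """
    kernel = np.ones(window) / window
    pad = window // 2
    # Edge padding keeps the smoothed output the same length as the input.
    padded = np.pad(np.asarray(v_log, dtype=float), pad, mode='edge')
    long_wl = np.convolve(padded, kernel, mode='valid')
    short_wl = v_log - long_wl
    return long_wl, short_wl
```

Because the decomposition is exact by construction, adding the well's short-wavelength residual onto a tomography-derived long-wavelength model reproduces the log at the well while leaving the smooth background untouched elsewhere.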
-
-
-
Application of Isotropic PP/PS-stereotomography to a Gas Cloud Context – The Snøhvit Gas Field
Authors M. Alerini, P. Eliasson, S. Nag, M. Buddensiek and C. Ravaut
Gas clouds may considerably distort the energy of P-waves travelling through them. This complicates imaging, and particularly velocity estimation. When seismic data cannot be used, a priori information may help to estimate a satisfactory model. Stereotomography is a flexible tool, but the parameterization of the estimated model by cubic B-splines makes the introduction of a priori information difficult. Here, we use a hybrid model consisting of a smooth layered model with smooth variations on top of it. This model parameterization is smooth, allowing the use of paraxial ray tracing, while at the same time being tied to layers, allowing the introduction of a priori information. Using such a model parameterization, we show that we can significantly improve the estimated results on a 2D dataset that exhibits a low signal-to-noise ratio in the PP-data underneath a gas cloud at the Snøhvit gas field.
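The hybrid parameterization (a layered background plus a smooth perturbation) can be sketched as follows. This uses an interpolating cubic spline from SciPy as a stand-in for the authors' B-spline machinery, and all names and numbers are illustrative:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def layered_velocity(z, tops, v_int):
    """Piecewise-constant layered background: v_int[i] applies from
    tops[i] down to tops[i+1]; the last layer extends downward."""
    idx = np.searchsorted(tops, z, side='right') - 1
    return np.asarray(v_int)[np.clip(idx, 0, len(v_int) - 1)]

def hybrid_velocity(z, tops, v_int, knot_z, knot_dv):
    """Hybrid model: layered background plus a smooth cubic-spline
    perturbation defined by coarse knots. The result is smooth enough
    for paraxial ray tracing, yet tied to interpreted layers where
    a priori information (e.g. from wells) can be imposed."""
    smooth = make_interp_spline(knot_z, knot_dv, k=3)
    return layered_velocity(z, tops, v_int) + smooth(z)

# Illustrative three-layer background with a zero perturbation.
tops = [0.0, 1000.0, 2000.0]
v_int = [1500.0, 2200.0, 3000.0]
knot_z = [0.0, 700.0, 1400.0, 2100.0]
knot_dv = [0.0, 0.0, 0.0, 0.0]
```

Constraints at a layer act on `tops`/`v_int`, while the coarse `knot_dv` values carry the smooth lateral and vertical variations estimated by the tomography.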
-
-
-
Multi Directional Residual Curvature Analysis to Improve Seismic Anisotropy Estimations
…om dipping horizons and fault planes (multi directional residual curvature analysis – MDRCA). Real 3D seismic data examples are included.
-