72nd EAGE Conference and Exhibition - Workshops and Fieldtrips
- Conference date: 14 Jun 2010 - 17 Jun 2010
- Location: Barcelona, Spain
- ISBN: 978-90-73781-87-0
- Published: 13 June 2010
Building starting model for full waveform inversion from wide-aperture data by stereotomography
Authors: V. Prieux, G. Lambaré, S. Operto and J. Virieux

Building a reliable starting model remains one of the most topical issues for successful application of full waveform inversion (FWI). In this study, we assess stereotomography as a tool to build a reliable starting model for frequency-domain FWI from long-offset (i.e., wide-aperture) data. Stereotomography is a slope tomography method based on the traveltimes and slopes of locally coherent events in the data cube. A key feature of stereotomography is that it can be coupled efficiently with semi-automatic picking, which partially frees one from tedious and difficult interpretive traveltime picking. We assess a tomographic workflow based on stereotomography and frequency-domain FWI with the 2D acoustic synthetic Valhall case study. The Valhall model is mainly characterized by a large-scale low-velocity zone associated with gas layers above the reservoir level. We first computed an acoustic full-wavefield dataset using a finite-difference time-domain modeling engine for a wide-aperture survey with a maximum offset of 16 km. The source bandwidth is between 10 and 45 Hz. Compared to the conventional application of stereotomography, we assess in this study the benefits provided by the joint inversion of refraction and reflection traveltimes from long-offset data. The use of refraction traveltimes is expected to stabilize and improve the reconstruction of the shallow part of the model. In a similar manner for frequency-domain FWI, we design a multiscale approach which proceeds hierarchically from the wide-aperture to the short-aperture angles to mitigate the non-linearity of the inversion. Starting models for FWI were built by stereotomography using two sets of picked events. For the first data set, the picking is limited to reflection traveltimes with a maximum offset of 4 km, while both refracted and reflected events were picked in the second case using the full range of offsets (± 16 km). We highlight the improvements of the FWI results obtained from the starting stereotomographic model built from the long-offset data set. The improvements are observed at the reservoir level below the gas layers, but also in the upper part of the model, where the joint use of refraction and reflection traveltimes helps to improve the ray illumination.
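The rationale for such hierarchical strategies (low frequencies and long wavelengths first) can be illustrated with the classic cycle-skipping argument: for a monochromatic arrival of frequency f, the least-squares misfit as a function of a residual traveltime error dt behaves like 1 - cos(2*pi*f*dt), so the basin of attraction around dt = 0 is only half a period wide. A minimal numerical sketch, with illustrative frequencies not taken from the study:

```python
import numpy as np

def basin_width(f, dt_max=0.5, n=100001):
    """Locate the first misfit maximum for dt > 0, i.e. the edge of the
    basin of attraction around dt = 0 for frequency f (in Hz)."""
    dt = np.linspace(0.0, dt_max, n)
    misfit = 1.0 - np.cos(2 * np.pi * f * dt)
    first_max = np.argmax(np.diff(misfit) < 0.0)  # where misfit starts to fall
    return dt[first_max]

# Low frequencies tolerate a much larger initial traveltime error
# before the inversion cycle-skips: the basin is half a period wide.
print(basin_width(3.0))   # ≈ 1 / (2 * 3)  ≈ 0.167 s
print(basin_width(30.0))  # ≈ 1 / (2 * 30) ≈ 0.017 s
```

Proceeding from data components that constrain long wavelengths (wide apertures, low frequencies) to those that constrain short ones keeps the inversion inside this basin at every stage.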
Full Waveform Teleseismic Tomography: Theory and Applications
Authors: S. Roecker, B. Baker and J. McLaughlin

We have adapted a 2D spectral-domain finite-difference waveform tomography algorithm, previously used in active-source seismological imaging, to the case of a plane wave propagating through a 2.5D viscoelastic medium, in order to recover P and S wavespeed variations from body waves recorded at teleseismic distances. The primary motivation for choosing this algorithm is an efficiency that permits the recovery of arbitrarily heterogeneous models on moderately sized computers. Synthetic waveforms can be generated either by specifying an analytic solution for a background plane wave in a 1D model and solving for the source distribution that would produce it, or by solving for a scattered field excited by a plane-wave source and then adding the background wavefield to it. Because the former approach typically involves a concentration of sources at the free surface, the latter tends to be more stable numerically. To maintain tractability, we adopt a gradient approach to solving the inverse problem; calculating the gradient does not require much more computational effort than the forward problem. The waveform tomography algorithm can be applied in a straightforward way to perform receiver function migration and traveltime inversion. We will discuss an application of this technique to imaging the crust and upper mantle beneath the Tien Shan range in central Asia.
Full waveform inversion in the Laplace and Laplace-Fourier domains
Authors: C. Shin, W. Ha, W. Chung and H. Seuk Bae

We present a review of Laplace- and Laplace-Fourier-domain waveform inversion. The wave equation in the Laplace and Laplace-Fourier domains can be solved by changing the real frequencies of the Fourier transform into imaginary frequencies. The initial model for Laplace-domain inversion can be built from scratch, such as a homogeneous velocity model. The inversion, which uses the zero-frequency components of the damped wavefield, provides a long-wavelength velocity model that can be used as a starting velocity model for conventional waveform inversion. Laplace-Fourier-domain inversion can recover long-, medium- and short-wavelength velocity models by adjusting the complex frequencies. Careful muting of noise before the first arrival should be applied, because the damped wavefield is sensitive to random noise. Numerical experiments and real data examples show that full waveform inversion in the Laplace and Laplace-Fourier domains can provide an alternative for seismic velocity estimation.
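The two ideas above — that the Laplace-domain datum is the zero-frequency Fourier component of the exponentially damped trace, and that noise ahead of the first arrival is heavily weighted by the damping — can be checked on a toy trace. A minimal sketch, with an illustrative damping constant and arrival times:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 4.0, dt)

# Synthetic trace: a first arrival at 0.8 s and a deeper event at 2.1 s.
pulse = lambda t0: np.exp(-((t - t0) / 0.02) ** 2)
d = pulse(0.8) + pulse(2.1)

sigma = 3.0  # Laplace damping constant (an illustrative value)

# The Laplace-domain datum is the zero-frequency Fourier component of
# the damped wavefield exp(-sigma * t) * d(t).
laplace_datum = np.sum(np.exp(-sigma * t) * d) * dt

# The exponential weight decays with traveltime, so early events (and
# any noise ahead of the first arrival) dominate the damped wavefield,
# hence the need for careful muting before the first break.
early = np.sum(np.exp(-sigma * t) * pulse(0.8)) * dt
late = np.sum(np.exp(-sigma * t) * pulse(2.1)) * dt
print(early / late)  # ≈ exp(sigma * (2.1 - 0.8)) ≈ 49
```

The 49:1 weighting of the shallow event over the deep one is what makes the Laplace-domain gradient smooth and long-wavelength, and equally why a small noise burst before the first arrival can bias the datum.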
Elastic (Visco) Full Waveform Inversion of multi-component marine seismic data in the time domain: A tribute to Albert Tarantola
Authors: S. C. Singh, G. Royle, T. Sears, M. Roberts and P. Barton

Amplitude-versus-offset (AVO) analyses can be used to estimate P- and S-wave impedances. Since the method is local, i.e. it assumes 1D media and a linear approximation to the reflection coefficient, and ignores interference effects, the results are only approximate. In the 1980s, Tarantola's group in Paris started developing elastic full waveform inversion of near-offset data, while other groups were focusing on different types of migration algorithms using more sophisticated mathematical techniques. Tarantola (1986) first set up the mathematical foundation of full waveform inversion in acoustic media and then extended it to fully elastic media (Tarantola, 1988). In the early 1990s our group started working on 1D elastic full waveform inversion (Singh et al., 1993), but used long-offset data to obtain the medium- to large-scale velocity of the subsurface. We showed that wide-angle reflection data (Neves and Singh, 1996) have sensitivity to intermediate-wavelength information. Joint inversion of near- and post-critical-angle reflection data allowed convergence towards the global minimum (Shipp and Singh, 2002). Since then we have extended the algorithm to multi-component OBC data to invert for P- and S-wave velocity (Sears et al., 2008; Roberts et al., 2008) and, recently, for attenuation (Royle and Singh, 2010). We start by inverting wide-angle data first, followed by critical-angle and then near-offset data. For a stable inversion, we invert for P-wave velocity first from vertical-component data, then for medium-scale S-wave velocity from vertical-component data, and finally for short-wavelength S-wave velocity from horizontal-component data. Although our group has made significant progress, computation remains the main issue in applying elastic full waveform inversion on a routine basis.

In this talk, I will give a historical perspective of elastic full waveform inversion, particularly the work related to Albert Tarantola, then present state-of-the-art techniques of full elastic waveform inversion, and propose a strategy for future waveform inversion. I will particularly highlight the importance of elastic inversion for reservoir characterization, and show how full elastic waveform inversion could be extended to 3D media in a time-lapse mode (Royle and Singh, 2010; Queisser and Singh, 2010). We are presently taking full waveform inversion a step further by jointly inverting both seismic and controlled-source electromagnetic data (Brown et al., 2010).
Improved Near-surface Velocity Models from Waveform Tomography Applied to Vibroseis MCS Reflection Data
Authors: B. Smithyman and R. Clowes

Multichannel vibroseis reflection surveys are prevalent in the land exploration seismic industry because of benefits in speed and cost, along with reduced environmental impact compared to explosive sources. Since the downgoing energy must travel through the shallow subsurface, an improved model of near-surface velocity can in theory substantially improve the resolution of deeper reflections. We describe techniques aimed at allowing the use of vibroseis data for long-offset refraction processing of first-arrival traveltimes and waveforms. Waveform tomography combines inversion of first-arrival traveltime data with full waveform inversion of densely sampled refracted arrivals. A number of challenges are presented by the characteristics of vibroseis acquisition; we discuss some of these challenges and techniques to mitigate them. Through the use of waveform tomography, we plan to build useful, detailed near-surface velocity models for both the reflection workflow and direct interpretation.
Seismic anisotropy effects in 3D wavefield tomography
Authors: I. Stekl, A. Umpleby and M. Warner

We present results showing how seismic anisotropy may affect waveform inversion images. Results from our Marmousi model, extended to 3D as a 2.5D model, show that not including appropriate anisotropy in the modelling algorithm can lead to mispositioning of anomalies in the images.
Time vs frequency for 3D wavefield tomography
Authors: A. Umpleby, M. Warner and I. Stekl

Unlike the situation in two dimensions, where direct factorisation of the matrix equations makes frequency-domain methods much faster than explicit solution in the time domain, the computational resources required for practical wavefield tomography in 3D can be rather similar in the two domains. We have developed and optimised schemes that undertake wavefield tomography using explicit time stepping in the time domain, and that iteratively solve the matrix equations of the implicit problem in the frequency domain.

We have applied these two methods systematically to the same suite of problems. In the frequency domain, the principal advantages are that the initial tomographic updates for the lowest frequencies are often seen more quickly, and spatial resolution can be better at the highest frequencies. In the time domain, one of the principal advantages is that it is possible to mute and/or weight the field data in time, and consequently the method can be made to work more effectively with difficult datasets. In practice, both approaches are useful, and both should be available within a comprehensive suite of inversion tools.
3D GOM WAZ survey experiment using Full Waveform Inversion
Exploration in geologically more complex areas requires new tools and methodologies to address these challenges. The recently introduced wide-azimuth data acquisition method offers better illumination, noise attenuation and lower frequencies to determine a velocity field for imaging more accurately. The methodology in this paper follows a layer-stripping approach, in which we developed the supra-salt sediment velocities, followed by the top of salt, salt flanks and base of salt, and finished with a limited subsalt update. The inversion stages were carefully QC-ed through gather displays to ensure the kinematics were honoured. In order to approximate the observed data, the acoustic inversion had attenuation, anisotropy, and the acquisition source and receiver depths incorporated in the propagator. The final results were validated by reverse time migration using the inverted velocity field versus the final tomography velocity regime.
An overview of the SEISCOPE project on frequency-domain Full Waveform Inversion: Multiparameter inversion and efficient 3D full-waveform inversion
Authors: J. Virieux, S. Operto, H. Ben Hadj Ali, R. Brossier, V. Etienne, Y. Gholami, G. Hu, Y. Jia, D. Pageot, V. Prieux and A. Ribodetti

We present an overview of the SEISCOPE project on frequency-domain full waveform inversion (FWI). The two main objectives are the reconstruction of multiple classes of parameters and 3D acoustic and elastic FWI. The optimization relies on a preconditioned L-BFGS algorithm which provides scaled gradients of the misfit function for each class of parameters. For onshore applications where body waves and surface waves are jointly inverted, P- and S-wave velocities (VP and VS) must be reconstructed simultaneously using a hierarchical inversion algorithm with two nested levels of data preconditioning, with respect to frequency and arrival time. Simultaneous inversion of multiple frequencies, rather than successive inversions of single frequencies, significantly increases the S/N ratio of the models. For offshore applications where VS can have a minor footprint in the data, a hierarchical approach which first reconstructs VP in the acoustic approximation from the hydrophone component, followed by the joint reconstruction of VP and VS from the geophone components, can be the approach of choice. Among all the possible minimization criteria, we found that the L1 norm provides the most robust and easy-to-tune criterion, as expected for this norm. In particular, it allowed us to successfully reconstruct VP and VS on a realistic synthetic offshore case study when white noise with outliers had been added to the data. The feasibility of 3D FWI is highly dependent on the efficiency of the seismic modelling. Frequency-domain modelling based on a direct solver allows one to tackle small-scale problems involving a few million unknowns at low frequencies. If the seismic modelling engine embeds expensive source-dependent tasks, source encoding can be used to mitigate the computational burden of multiple-source modelling. However, we have shown the sensitivity of source encoding to noise in the framework of efficient frequency-domain FWI, where a limited number of frequencies is inverted sequentially. Simultaneous inversion of multiple frequencies is required to achieve an acceptable S/N ratio with a reasonable number of FWI iterations. Therefore, time-domain modelling for the estimation of harmonic components of the solution can be the approach of choice for 3D frequency-domain FWI, because it allows one to extract an arbitrary number of frequencies at minimum extra cost.
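The robustness of the L1 norm to outliers, which the abstract reports for noisy offshore data, has a simple one-parameter analogue: the L2 misfit is minimized by the mean of the residuals, the L1 misfit by their median. A toy sketch (not the SEISCOPE code; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 "residual" measurements of a single value, a few badly corrupted.
true_value = 5.0
data = true_value + 0.1 * rng.standard_normal(200)
data[:10] += 50.0  # strong outliers, as in noisy field records

# The L2 misfit is minimized by the mean, which the outliers drag away;
# the L1 misfit is minimized by the median, which barely moves.
l2_estimate = data.mean()
l1_estimate = np.median(data)

print(abs(l2_estimate - true_value))  # pulled off by roughly 2.5
print(abs(l1_estimate - true_value))  # remains close to zero
```

The same insensitivity to a few grossly wrong residuals is what makes the L1 criterion easy to tune in FWI: no outlier-rejection threshold needs to be chosen.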
3D full-wavefield tomography: imaging beneath heterogeneous overburden
Authors: M. Warner, A. Umpleby and I. Stekl

We have developed computer codes and workflows for 3D acoustic waveform inversion in both the frequency and time domains. We have applied these methods to several 3D field datasets with a variety of acquisition geometries and target depths. In each case, wavefield tomography was able to obtain a high-resolution, high-fidelity velocity model of the heterogeneous overburden, and consequently to improve subsequent depth imaging of an underlying target.
Improvements in Imaging and Reduction of Uncertainty in Velocity Determination by the Use of Wide Azimuth Surveys
Authors: A. Bartana and D. Kosloff

Seismic velocity determination has suffered from insufficient coverage in data acquisition. For this reason, only smooth, long-wavelength components of the velocity variation can be reliably recovered. We show by means of a theoretical study that multi-azimuth data have the potential to significantly improve velocity determination. In this study we examine the capability of multi-azimuth acquisition to resolve small velocity anomalies by means of a 3D synthetic example. The model consists of a layered structure which contains two small velocity anomalies. The study compares the resolution when the migrated gathers contain no azimuthal information to the case when the gathers are binned according to both offset and azimuth. The results show that conventional gathers can only obtain a blurred image of the velocity anomalies, whereas with multi-azimuth gathers the velocity anomalies appear distinctly.
Coil Shooting on Tulip discovery in Indonesia: a summary of the work done and lessons learned until now
By M. Buia

Coil Shooting [French, Cole, 1984; Durrani, 1987] is a technique in which a marine towed-streamer vessel acquires an almost continuous sequence of circular "lines". The circular line geometry is repeated in the X and Y directions to build up fold, offset and azimuth distribution. This method allows for full-azimuth (FAZ) acquisition using a single vessel shooting on a continuous turn. The time between each circular line is of the order of minutes, as opposed to hours for conventional race-track acquisition. This results in high acquisition uptime and efficiency. Eni Indonesia and WesternGeco shot, and processed through PSDM, a full 3D Circular Shooting (Coil) survey over the Tulip discovery in Indonesia between August 2008 and February 2010. Compared to "traditional" streamer surveys, the circular geometry introduces several differences and a number of new challenges, including proper offset/azimuth stacking. This paper presents the steps of the whole project: design, onboard illumination QC and final imaging results of this new full-azimuth (FAZ) seismic effort.
CRS - More than a stack: A workflow from time to depth
Authors: D. Gajewski, M. Baykulov and S. Dümmong

Stacking is one of the most stable processes in reflection seismic data processing. Although the stacked section provides a distorted picture of the subsurface, it has remained the first image in the processing chain since the CMP concept was invented more than half a century ago. The stability of the stacking process results from the limited assumptions made in its derivation; in particular, no assumption on the type of model is made. This applies as well to the extension of the CMP concept, the Common Reflection Surface (CRS) method. Not just one but several CMP locations are considered when determining the stacking attributes, which automatically accounts for the dip of the events. This improves the structural quality of the stack. Moreover, since several CMPs are considered, more traces contribute to the stack. The stack is just one product of this procedure. The stacking attributes, or CRS attributes, are determined for each sample of the data. These attributes (three in the 2D situation) have many important applications in seismic data processing, such as velocity model building, multiple suppression, pre-stack data enhancement and data regularization. What started out as a stack has evolved into a reflection seismic data processing workflow from time to depth, producing structural images of high fidelity.
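The stability argument above — more contributing traces means a stronger stack — is the textbook square-root-of-fold noise suppression: stacking N moveout-corrected traces leaves the coherent signal intact while reducing random noise power by a factor of N. A minimal sketch with synthetic traces (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

n_traces, n_samples = 64, 500  # fold of 64, one "event" per trace
signal = np.sin(2 * np.pi * np.arange(n_samples) / 50.0)

# A moveout-corrected gather: identical signal, independent random noise.
gather = signal + rng.standard_normal((n_traces, n_samples))

stack = gather.mean(axis=0)

# Residual noise power drops by roughly the fold, i.e. the amplitude
# signal-to-noise ratio improves by about sqrt(fold) = 8.
noise_in = np.var(gather[0] - signal)
noise_out = np.var(stack - signal)
print(noise_in / noise_out)  # ≈ n_traces
```

The CRS method exploits exactly this: by stacking over several neighbouring CMPs along the fitted stacking surface, the effective fold (and hence the noise suppression) grows well beyond that of a single CMP gather.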
Neural-network based multi-azimuth processing
Authors: A. Huck, P. de Groot, T. Manning and W. Rietveld

This paper describes the results of a series of experiments with neural networks, dip-steered noise reduction filters and other techniques aimed at combining multi-azimuth data. The seismic data were first pre-processed by applying dip-steered noise reduction filters, amplitude correction and inter-volume trace matching for dynamic shift corrections. The individual azimuthal stacks were then combined using first unsupervised and then supervised neural networks, using custom-made semi-automated workflows.

The main conclusions drawn from this study are that incremental improvements were achieved by, consecutively, aligning the azimuth volumes, unsupervised stacking and supervised stacking. Alignment proved to be a mandatory step. Unsupervised segmentation provided a useful segment volume that highlights the areas affected by stacking issues, and the same segmentation was also used for re-stacking the seismic data. The main improvements were achieved by selecting the relative weights to use for stacking. Supervised neural network stacking was further used to smooth the transitions between segments. The "MLP weighted" output is considered better than the input multi-azimuth stack, and is well suited for interpretation since no processing-related artifacts were introduced. The workflow was also adapted to the pre-stack domain, but no additional gains were obtained.
Multi-dimensional data reconstruction and noise attenuation for optimal wide azimuth stack
Authors: G. Poole and R. Wombell

Over recent years, the value of wide-azimuth acquisition has been well documented. As well as significant improvements in the imaging of complex structures due to improved illumination, these data have also demonstrated benefits in the suppression of coherent and random noise and multiple energy. Two of the key factors controlling the quality of wide-azimuth datasets are high-density regular sampling and good signal-to-noise ratio. Using simple synthetics, we demonstrate the importance of regular sampling to the stack response. We continue by showing how data can be regularised and interpolated with 5D Fourier reconstruction to stack out more noise and improve the stack response of primary energy. In addition, we highlight how multi-dimensional denoising techniques can be used to enhance weak energy where the signal-to-noise ratio is a problem.
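The core of Fourier reconstruction — fitting a limited set of Fourier coefficients to irregular samples by least squares, then evaluating them on a regular grid — can be shown in one dimension. This is only a 1D stand-in for the 5D method described in the abstract; grid size, bandwidth and decimation are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Regular target grid and a band-limited signal along one spatial axis.
n = 64
x = np.arange(n)
true = np.cos(2 * np.pi * 3 * x / n) + 0.5 * np.sin(2 * np.pi * 5 * x / n)

# Irregular, decimated acquisition: keep 40 of the 64 trace positions.
keep = np.sort(rng.choice(n, size=40, replace=False))
obs = true[keep]

# Least-squares fit of low-wavenumber Fourier coefficients to the
# irregular samples, then evaluation on the full regular grid.
k = np.arange(-8, 9)
basis = np.exp(2j * np.pi * np.outer(keep, k) / n)
coef, *_ = np.linalg.lstsq(basis, obs.astype(complex), rcond=None)
recon = (np.exp(2j * np.pi * np.outer(x, k) / n) @ coef).real

print(np.max(np.abs(recon - true)))  # near machine precision
```

In five dimensions the same least-squares problem is posed over (for example) inline, crossline, offset-x, offset-y and time frequencies, which is what allows sparse wide-azimuth geometries to be regularised before stack.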
Least Squares Migration of Stacked Supergathers
Authors: G. T. Schuster, W. Dai and G. Zhan

We show that phase-encoded shot gathers can be stacked together to form supergathers and efficiently migrated using an iterative least-squares migration (LSM) method. The major problem of cross-talk can be largely eliminated by iterative stacking of the phase-encoded migrations and a multisource preconditioning factor, where random static shifts are used for the phase-encoding function. Empirical results with synthetic seismic data suggest that increasing the number of stacked shot gathers requires an attendant increase in the number of LSM iterations. A key merit of phase-encoded LSM of supergathers is that, under ideal conditions, computational cost, I/O and storage requirements can be reduced by several orders of magnitude compared to conventional LSM.
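Why iterative stacking suppresses cross-talk can be seen in the frequency domain: a random static shift tau_i encodes shot i with the phase exp(2j*pi*f*tau_i), so cross terms between different shots carry a random relative phase and average towards zero when the encoding is redrawn and the results are stacked. A toy sketch of that averaging (shot count, frequency and shift range are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

n_shots = 16
f = 30.0     # a representative frequency in Hz (illustrative)
t_max = 0.5  # spread of the random static time shifts in seconds

# Accumulate the i != j cross terms over many redrawn encodings, as an
# analogue of re-encoding the supergather at each LSM iteration.
n_iterations = 400
crosstalk = 0.0
for _ in range(n_iterations):
    tau = rng.uniform(0.0, t_max, size=n_shots)
    phase = np.exp(2j * np.pi * f * tau)
    s = phase.sum()
    crosstalk += (s * np.conj(s)).real - n_shots  # drop the i == j terms
crosstalk /= n_iterations

# The coherent i == j "signal" terms have magnitude n_shots, while the
# averaged cross-talk is far smaller.
print(abs(crosstalk) / n_shots)
```

This also illustrates the abstract's empirical observation: with more shots per supergather there are more cross terms to average down, so more iterations (more redraws) are needed to reach the same cross-talk level.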
Beam, wavelets and enhanced seismic attributes for interpretation
Authors: K. Sherwood and J. Sherwood

In beam migration, it is possible to maintain a one-to-one mapping between a coherent event in unmigrated time and the corresponding event in migrated depth. The mapping is accompanied by many seismic attributes, including dip/azimuth of the reflector, angle of incidence at the reflector, raypath to the reflector, coherency, and wavefront curvature. During reconstruction, one or more of these seismic attributes can be used to filter the data, creating unique seismic volumes that aid in model building and interpretation. The benefits derived from these volumes can lead to significant improvements in the imaging of challenging areas.
Anti-alias Optimal Interpolation with Priors
Authors: M. Vassallo, A. Özbek, A. K. Özdemir, D. Molteni and Y. K. Alp

We introduce a new technique, referred to as Optimal Interpolation with Priors (OIP), for the interpolation of irregularly sampled signals using prior estimates of their spectral content, which is optimal in the least-squares sense. In this paper, after introducing the technique and describing its basic advantages with respect to other state-of-the-art regularization techniques, we demonstrate its potential to interpolate signals that are spatially aliased, based on realistic prior information. We also propose an algorithm to obtain a reliable prior estimate of the signal spectrum. The combined use of this algorithm and OIP, referred to henceforth as Anti-Alias OIP (AA-OIP), can be applied to datasets irregularly sampled in multi-dimensional spaces.
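The role of the spectral prior can be demonstrated with a least-squares Fourier fit, which shares the spirit of OIP without reproducing its actual formulation: sparse samples of a high-wavenumber event are ambiguous (aliased), but restricting the fitted spectral support to where the prior says the energy lives resolves the ambiguity. A sketch with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(4)

# Fine-grid truth: a single steeply dipping event at wavenumber k = 12
# cycles across the aperture.
n = 128
x = np.arange(n)
k_true = 12
true = np.cos(2 * np.pi * k_true * x / n)

# Sparse irregular sampling that aliases this wavenumber.
sample_x = np.sort(rng.choice(n, size=20, replace=False))
obs = true[sample_x]

def fit(ks):
    """Least-squares fit of the Fourier modes ks, evaluated on the fine grid."""
    basis = np.exp(2j * np.pi * np.outer(sample_x, ks) / n)
    coef, *_ = np.linalg.lstsq(basis, obs.astype(complex), rcond=None)
    return (np.exp(2j * np.pi * np.outer(x, ks) / n) @ coef).real

# Without the prior, a broadband low-wavenumber fit misreads the data;
# with a prior restricting the support to |k| in [10, 14], the aliased
# event is interpolated correctly.
naive = fit(np.arange(-9, 10))
prior = fit(np.concatenate([np.arange(-14, -9), np.arange(10, 15)]))

print(np.max(np.abs(naive - true)))  # large interpolation error
print(np.max(np.abs(prior - true)))  # near zero
```

The prior turns an underdetermined, alias-prone problem into a well-posed one, which is exactly the leverage AA-OIP seeks by first estimating a reliable signal spectrum.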
Advances in onshore imaging
Authors: J.-W. de Maag, H. Rynja, E. van Dedem, P. Milcik and M. van de Rijzen

Onshore data typically pose additional challenges for processing and imaging in comparison with offshore data: limited access for acquisition, more variation in source and receiver coupling, more severe random noise conditions, the presence of coherent (shear) noise such as ground roll, more complicated multiple systems, processing no longer being done from a flat datum, and signal distortion from a rapidly varying shallow overburden. To overcome these challenges, several advances towards a better stack are being made; some of these will be discussed below. Examples shown will be from two onshore datasets: a sparser Libyan survey and a high-density wide-azimuth survey acquired in the south of Oman.
Seismic stacking in a wider perspective
Stacking can be seen as part of the well-known correlation process: 'stacking is zero-shift cross-correlation'. Hence, the stack yields one element out of a larger data volume. By computing this larger data volume ('generalized stacking'), the original unstacked input can be fully recovered ('generalized destacking'). If we look at the physics behind these mathematical transformations, generalized stacking represents a focussing process and generalized destacking represents a defocussing process. In this paper, it is proposed to extend the traditional stack to a focal transformation. In addition, it is proposed to formulate the focal transformation in terms of constrained inversion.