72nd EAGE Conference and Exhibition incorporating SPE EUROPEC 2010
- Conference date: 14 Jun 2010 - 17 Jun 2010
- Location: Barcelona, Spain
- ISBN: 978-90-73781-86-3
- Published: 14 June 2010
Geologically Constrained Full Waveform Inversion – Theory
Authors A. Guitton, F. Ortigosa and G. Gonzales
The waveform inversion problem is inherently ill-posed. Traditionally, regularization terms are used to address this issue. For waveform inversion, where the model is expected to have many details reflecting the physical properties of the Earth, regularization and data fitting can work in opposite directions, slowing down convergence. In this paper, we constrain the velocity model with a model-space preconditioning scheme based on directional Laplacian filters. This preconditioning strategy preserves the details of the velocity model while smoothing the solution along known geological dips. The Laplacian filters have the property of smoothing or killing local events according to a local dip field. By construction, these filters can be inverted and used in a preconditioned waveform-inversion scheme to yield geologically meaningful models. We illustrate on a 2-D synthetic example how preconditioning with non-stationary directional Laplacian filters outperforms traditional waveform inversion when sparse data are inverted. We think that preconditioning could benefit waveform inversion of real data where (for instance) irregular geometry, coherent noise and a lack of low frequencies are present.
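As an illustration of the annihilation property such filters rely on, the sketch below builds a toy 2-D model containing a single dipping event and applies a directional second-difference (Laplacian) filter along an assumed constant integer dip. The filter, the dip value and the model are hypothetical stand-ins of mine; the paper's filters are non-stationary and follow a spatially varying dip field.

```python
import numpy as np

def directional_laplacian(m, dip):
    """Second difference along a constant integer dip (samples per trace).
    Events that follow the dip are annihilated; others leave residue."""
    nz, nx = m.shape
    out = np.zeros_like(m)
    for j in range(1, nx - 1):
        left = np.roll(m[:, j - 1], dip)    # align the left trace to column j
        right = np.roll(m[:, j + 1], -dip)  # align the right trace to column j
        out[:, j] = left - 2.0 * m[:, j] + right
    return out

# Toy model: one event dipping 2 samples per trace
m = np.zeros((50, 20))
for x in range(20):
    m[10 + 2 * x, x] = 1.0

out_dip = directional_laplacian(m, 2)    # dip-conformal: event annihilated
out_flat = directional_laplacian(m, 0)   # wrong dip: event survives
```

Smoothing along the correct dip leaves the dipping event untouched by the penalty, whereas a non-directional filter would smear it, which is the property the preconditioner exploits.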
Efficient Gauss-Newton Hessian for Full Waveform Inversion
Full waveform inversion (FWI) has received an increasing amount of attention thanks to its ability to provide a high-resolution velocity model of the subsurface. The computational cost still presents a challenge, however, and the convergence rate of the FWI problem is usually very slow without proper preconditioning of the gradient. While preconditioners based on the Gauss-Newton Hessian matrix can provide significant improvements in the convergence of FWI, computation of the Hessian matrix itself has been considered highly impractical due to its computational time and storage requirements. In this paper, we design preconditioners based on an approximate Gauss-Newton Hessian matrix obtained using the phase-encoding method. The new method requires only 2Ns forward simulations, compared to the Ns(Nr+1) forward simulations required in conventional approaches, where Ns and Nr are the numbers of sources and receivers. We apply the diagonal of the phase-encoded Gauss-Newton Hessian to both sequential-source FWI and encoded simultaneous-source FWI. Numerical examples on the Marmousi model demonstrate that the phase-encoded Gauss-Newton Hessian improves the convergence of FWI significantly.
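The cost saving comes from replacing per-source, per-receiver simulations with a few randomly encoded ones. The sketch below is a generic stand-in (a random dense matrix plays the Jacobian; the encoding here is randomized ±1 probing rather than the paper's phase encoding), but it shows the underlying principle: the diagonal of JᵀJ can be estimated from a handful of Hessian-vector products instead of forming the matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
ns, nr, nm = 40, 30, 25                  # sources, receivers, model size
J = rng.standard_normal((ns * nr, nm))   # stand-in for the Jacobian
H = J.T @ J                              # Gauss-Newton Hessian
exact_diag = np.diag(H)

# Randomized probing: diag(H) = E[z * (H z)] for random +/-1 vectors z.
# Each probe costs one Hessian-vector product (two simulations in FWI
# terms) instead of building H column by column.
n_probe = 2000
est = np.zeros(nm)
for _ in range(n_probe):
    z = rng.choice([-1.0, 1.0], size=nm)
    est += z * (H @ z)
est /= n_probe
```

The estimate converges to the exact diagonal as the number of probes grows; in FWI only a few encoded simulations are affordable, so the approximation is used purely as a preconditioner, where modest accuracy suffices.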
Acoustic Waveform Inversion Applicability on Elastic Data
Authors D. V. Vigh, E. W. Starr and P. Elapavuluri
Full waveform inversion is a computationally intensive process, especially for 3D seismic data. After a tremendous number of synthetic examples, the industry has finally begun to tackle real 3D data sets. As field data are dominated by P-waves, one feasible approach is to use the acoustic approximation. The full waveform inversion described here is acoustic, while real data are more accurately described by an elastic model. It is common practice to apply acoustic inversion, especially for 3D data sets, because elastic modeling is prohibitively expensive. Although the long offsets may suffer from elastic effects, our experiment shows that the velocity fields obtained by applying acoustic inversion to elastic data are reasonable in spite of the difference between the modeling and mother Earth. Although elastic propagation would provide a better match to the acquired data, its cost remains prohibitive compared to acoustic propagation, especially when large 3D surveys are under consideration.
Application of an Impedance-based Full-waveform Inversion Method for Dual-sensor, Single-streamer Field Recordings
Authors S. Kelly, J. Ramos-Martinez, B. Tsimelzon and S. Crawley
Thus far, waveform-based inversion methods have utilized the spatial distribution of back-projected reflectivities in order to refine velocity (and density) models. While these methods excel at refining a velocity model for high-wavenumber features, they face significant difficulties when attempting to invert for long-wavelength features several kilometres below the depth of acquisition. In this abstract, we develop the theory for a waveform-based inversion method that utilizes an impedance image, rather than a reflectivity image, in order to extract spatial variations in velocity and density. Results are presented for the first stage of an inversion using a 2-D line of dual-sensor, single-streamer field recordings. It is shown that features with vertical scale sizes up to 0.5 km can be determined at depths up to 4 km, using data with a maximum offset of only 8 km and frequencies below 7.5 Hz.
2D Acoustic Full Waveform Inversion of a Land Seismic Line
Authors W. A. Mulder, C. Perkins and M. J. van de Rijzen
The application of waveform inversion to land data is even more challenging than the marine case because of strong elastic effects such as ground roll, near-surface attenuation, scattering due to rapid geological variations, and topography. In this paper, rather than considering full elastic waveform inversion, which may be too difficult for standard geophone data, we consider the application of acoustic 2D full waveform inversion with a frequency-domain code. At lower frequencies, the data are very noisy, so we carried out a fixed number of iterations with a small band of low frequencies and then repeated this for an increasingly large bandwidth. We present the results from this procedure, which show that the waveform inversion improved the continuity of the main reflectors.
3D Elastic Wavefield Inversion in the Time Domain
Authors L. Guasch, M. R. Warner, I. Stekl and A. P. Umpleby
We have developed a 3D tomographic wavefield inversion code that solves the fully elastic wave equation in the time domain using finite differences. We show results of applying this elastic code to different synthetic 3D problems.
Time-lapse Elastic Full Waveform Inversion Using an Injected Grid Method
Authors S. C. Singh and G. T. Royle
Full waveform inversion is increasingly being used in the oil industry to quantify P- and S-wave velocities of the subsurface. So far, waveform inversion methods have been either 3D acoustic or 2D elastic. Furthermore, only low frequencies have been used because of the high computational cost. The application of elastic full waveform inversion to 3D reservoir monitoring is still beyond the reach of present-day computing technology. Here, we demonstrate the application of an injected grid method, in which the forward modelling for iterative elastic waveform inversion is performed in a small volume surrounding the reservoir, significantly reducing the cost of full waveform inversion. The application of the algorithm is demonstrated on the Marmousi II data set in a time-lapse mode.
Waveform Tomography by Correlation Optimisation
Authors T. van Leeuwen and W. A. Mulder
An important ingredient of any tomography-based velocity inversion method is the determination of traveltime differences. When a ray-based method is used, the modelled traveltimes are directly available and the traveltimes in the observed data need to be picked only once. When using wave-equation-based methods, however, the traveltime difference between two complex waveforms needs to be determined at each iteration. A straightforward approach automatically picks the onset of relevant arrivals, either directly or via a correlation of the two waveforms. If the waveforms are not very similar, however, this approach is problematic. We propose to measure the traveltime difference via a weighted norm of the correlation of the two waveforms. The weighted norm can be used directly as an optimisation criterion for waveform tomography. We illustrate this with a synthetic and a real data example.
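One simple instance of such a weighted correlation norm (an assumption of mine, not necessarily the authors' exact choice of weight) is the centroid of the squared cross-correlation, which yields a shift estimate that degrades gracefully when the two waveforms differ:

```python
import numpy as np

def ricker(t, f=10.0):
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.001
t = np.arange(-0.5, 0.5, dt)
shift = 0.030                         # true traveltime difference (s)
d_obs = ricker(t)                     # observed arrival
d_syn = ricker(t - shift)             # modelled arrival, 30 ms late

c = np.correlate(d_syn, d_obs, mode="full")
lags = np.arange(-(len(t) - 1), len(t)) * dt
w = c ** 2                            # weight: squared correlation
t_est = np.sum(lags * w) / np.sum(w)  # centroid recovers the shift
```

Unlike picking the correlation maximum, the centroid is a smooth functional of the data, which is what makes it usable directly as an optimisation criterion.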
Elastic Inversion of 3D Seismic Data – A Market Benchmark
Authors I. Escobar and H. Peder Hansen
Because of the increasing use of seismic inversion products within Maersk Oil, a benchmarking exercise was needed to identify the seismic inversion/reservoir characterization contractor(s) of choice. To ensure the robustness of the benchmark, a high-quality dataset was used (high SNR, good well coverage and a well-established rock physics model) from a clastic Palaeocene reservoir showing a strong class II AVO response. The exercise included four contractors and Maersk's internal inversion group, each asked to provide relative and absolute acoustic impedance and Poisson's ratio volumes, along with Bayesian seismo-facies prediction. The comparison was carried out by measuring correlations between inverted and measured relative acoustic impedance and relative Poisson's ratio, combined with visual inspection of different sections, maps and well locations in terms of the predictability and continuity of events. We have found that the reliability in estimating acoustic impedance was similar for all the participants, and it was in fact the ability to estimate Poisson's ratio that was the main differentiator. Other important aspects were significant differences in software runtimes, efficiency of workflows and processing/pre-conditioning capabilities. This benchmark has given us a sound overview of the abilities of the participating contractors.
Sensitivity of Time-lapse Changes in Pressure and Saturation to Seismic AVO and Time-shifts
Authors M. Trani, R. Arts, O. Leeuwenburgh and J. Brouwer
An inversion scheme that solves for reservoir pressure and fluid saturation changes from time-lapse pre-stack seismic attributes and post-stack seismic time-shifts is presented. It makes use of four equations expressing the changes in zero-offset and gradient reflectivities, and in compressional- and shear-wave time-shifts, as functions of production-induced changes in fluid properties. The method has been successfully tested on a realistic synthetic reservoir, where seismic data have been modeled before and after 30 years of production and water injection. Results show very accurate estimations if information about the vertically averaged reservoir porosity is available. The use of the gradient reflectivity equation causes biased estimations of the real changes in saturation and strong leakage between the two different parameters. However, if the equation related to the S-wave time-shift can replace the gradient reflectivity equation, the inversion results may be very accurate. In cases where shear-wave data might not be acquired, the approximation of the exact changes in this seismic attribute becomes more accurate if quadratic terms in relative changes of seismic properties are not neglected. The improved forward approximation in this attribute leads to inversion results characterized by weaker leakage and sharper discrimination between different fluid effects.
4D Pre-stack Inversion Workflow Integrating Reservoir Model Control and Lithology Supervised Classification
Authors S. Toinet, S. Maultzsch, V. Souvannavong and O. Colnard
4D pre-stack inversion is used in the industry to image reservoir changes due to production and injection, and to make reservoir management decisions in order to optimize hydrocarbon recovery. We present an innovative workflow to prepare, constrain and compute 4D pre-stack inversion attributes. Specific properties of the studied field (large time-shifts due to gas coming out of solution, various turbiditic contexts) implied building a composite warping result, filtered using a 4D mask, to build the initial monitor model for 4D inversion. The pre-stack 4D inversion workflow integrates not only seismic information but also well information, used to discriminate sand from shale during the 4D mask building, and a 4D rock-physics model. Applied to simulated reservoir properties, the rock-physics model defines a range of relative density and velocity variations within which the inversion results can vary. Moreover, because water-bearing sands are hard to discriminate from shales in some of the field reservoirs using a cross-plot of P and S impedances, information from the reservoir grid was also introduced to help locate water-bearing sands in the 4D mask. Preliminary analyses of 4D inversion attributes show an improved image compared to previous 4D attributes.
Ray Impedance Inversion on a Tight-sand Gas Reservoir
In a tight-sand gas reservoir, with its complicated geology and extremely low porosity and permeability, conventional inversion methods cannot always accomplish the task of characterizing the reservoir distribution. In this paper, we apply ray-impedance inversion to a tight-sand gas reservoir. We perform inversion on constant-ray-parameter profiles, starting from a robust estimate of the wavelet and a stable reflectivity series. Comparing the inverted ray impedance with the acoustic impedance, elastic impedance and shear impedance from commercial software, as well as with the elastic parameters extracted from well logs, shows that the frequency-dependent ray impedance is superior in identifying fracture zones and characterizing the reservoir distribution for this tight-sand gas reservoir.
Statistical Rock Physics and Bayesian Classification – Are Marginal Distributions Important?
Authors I. Escobar, H. Peder Hansen and M. Beller
The importance and impact of the way marginal (unconditional) distributions are treated in statistical rock physics and Bayesian classification of inverted seismic data was analyzed. Two scenarios were compared: one assuming perfect knowledge of the probability density functions and perfect, unbiased sampling; and another where an unclassified group with a given distribution is included in order to account for imperfections in the data, models, and estimation techniques. Using a dataset from a clastic Palaeocene field in the North Sea, it was shown that assuming the first scenario (perfectly known distributions) leads to probabilistic volumetric estimations of hydrocarbon-saturated sands at least three times larger than in the second case, where an unclassified group is included. For the sake of further downstream engineering and facilities analyses, it is straightforward to realize the impact of such over-predictions on the estimation of P10, P50 and P90 volumes of hydrocarbons in place.
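To see why an unclassified group deflates volumetric estimates, consider a minimal 1-D toy (all distributions and numbers below are hypothetical, not from the study): two Gaussian facies likelihoods plus an optional flat "unclassified" class that absorbs posterior probability wherever neither facies model fits confidently.

```python
import numpy as np

# Hypothetical 1-D acoustic-impedance likelihoods for two facies
mu = {"sand": 5500.0, "shale": 6500.0}
sig = {"sand": 400.0, "shale": 400.0}
prior = {"sand": 0.5, "shale": 0.5}

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def posterior_sand(x, p_unclassified=0.0, unclass_pdf=1e-4):
    """P(sand | x); the optional flat 'unclassified' class soaks up
    probability that would otherwise be forced onto sand or shale."""
    ps = (1.0 - p_unclassified) * prior["sand"] * gauss(x, mu["sand"], sig["sand"])
    ph = (1.0 - p_unclassified) * prior["shale"] * gauss(x, mu["shale"], sig["shale"])
    pu = p_unclassified * unclass_pdf
    return ps / (ps + ph + pu)

p_perfect = posterior_sand(5500.0)                      # ideal-knowledge scenario
p_hedged = posterior_sand(5500.0, p_unclassified=0.3)   # with unclassified group
```

Summing such posteriors over a volume is exactly how the probabilistic sand volumes are built, so uniformly lower per-sample posteriors translate directly into smaller P10/P50/P90 estimates.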
Applicability of AVO Inversion Based on Effective Reflection Coefficients to Long-offset Data from Curved Interfaces
Authors L. V. Skopintseva, A. M. Aizenberg, M. A. Ayzenberg, M. Landrø and T. V. Nefedkina
AVO inversion is an essential and powerful interpretational tool. However, a conventional AVO-inversion workflow is unlikely to succeed when applied to long-offset data, because it is limited to relatively plane interfaces, weak parameter contrasts and moderate incidence angles, where the offset dependence of the reflection amplitude can be described by linearized plane-wave reflection coefficients (PWRCs). It is also known that the PWRC is insensitive to interface curvature and breaks down at near-critical offsets, where the head wave is generated, as well as at post-critical offsets, where the reflected and head waves interfere. To avoid these inconsistencies, we exploit effective reflection coefficients (ERCs), which generalize PWRCs to curved interfaces and non-plane waves. Building on our previous results for plane interfaces, we generalize the improved approach to AVO inversion for curved interfaces. Using a synthetic data example, we also show that the theoretical description of the actual reflection from a curved interface is directly applicable in AVO inversion.
A 3D Ray-based Pulse Estimation for Seismic Inversion of PSDM Data
Authors F. Georgsen, O. Kolbjørnsen and I. Lecomte
We develop a methodology for estimating the point-spread function of prestack depth-migrated (PSDM) data. The parametrization of the point-spread function is given by a ray-based approach that incorporates the effects of wave propagation through the use of illumination vectors. The only unknown factor in the expression for the point-spread function is a 1D pulse. This pulse is estimated from reflection coefficients in wells and co-located PSDM data using a least-squares approach. The estimate of the point-spread function is the first step towards seismic inversion of PSDM data. In comparison to traditional wavelet estimation, the model using PSDM data has a larger range of validity, i.e. we are able to remove the assumption of a horizontally layered earth. In examples we show that our method is identical to 1D convolution when the earth has a constant dip, but gives an improvement when this assumption is violated.
Pilot Fracture Characterization Study Using Seismic Attributes Derived from Singular Value Decomposition of AVOAz Data
Authors G. Chao and S. Maultzsch
This work presents a pilot study of the application of a seismic inversion technique for fracture density and fracture orientation in an area where available image-log data revealed the presence of open fractures at a given well location. The inversion method is based on a singular value decomposition of azimuthal AVO data. This decomposition allows us to calculate seismic attributes that are linked to the density of the fracture network through anisotropic rock physics modeling. The results reveal an area of high fracture density around the well where open fractures had been observed. The fracture orientations obtained are consistent with the interpreted image logs and outcrop studies in the area. The outcome of this pilot study is promising, since the obtained results are consistent with existing image-log data. Further testing and research on a larger area will be valuable to assess whether the fracture characterization results are consistent with regional geological interpretations and other analyses of fracture networks in the reservoir.
Simultaneous Sources – Processing and Applications
By I. Moore
There has been considerable interest recently in data acquisition using simultaneous sources because of the enormous improvements in acquisition efficiency and source sampling that the method proffers. Realizing these improvements in practice requires an appropriate combination of survey design, acquisition technology and data processing capabilities. Given a suitably designed survey, I show that simultaneous-source data can be separated effectively into equivalent datasets for each source. These datasets may then be processed using conventional techniques, which benefit naturally from any improvements in sampling. Several field datasets, employing a variety of acquisition geometries, illustrate the viability and limitations of this approach. The main limitation comes from ambiguities in the separation process, which can be resolved to a large degree by using source-dithering techniques in combination with a separation algorithm that includes an effective sparseness constraint. The use of more than two sources simultaneously adds to the potential of the method and is shown to be viable provided the survey is designed appropriately.
Iterative Method for the Separation of Blended Encoded Shot Records
Authors A. Mahdad and G. Blacquière
Seismic acquisition is a trade-off between economic and quality considerations. Generally, seismic data are recorded with large time delays between illuminating sources in order to avoid interference in time. The consequence is a poorly sampled shot domain. In the concept of blended acquisition, however, the time delay between sources is reduced significantly. Moreover, the sources may transmit encoded signals. Depending on the acquisition objectives, blended acquisition significantly improves the economics or the quality, or both, by adding degrees of freedom to the acquisition design. A deblending procedure then retrieves the individual source responses. However, the deblended result contains residual noise due to the interference from other sources. The level of this noise depends on the choice of blending parameters. For example, in the case of a simple code such as linear phase encoding (i.e., applying time delays), it is larger than in the case of a more sophisticated code such as transmitting sweeps, as in vibroseis technology. In this paper, an iterative approach to deblending is proposed, based on the estimation and subsequent adaptive subtraction of the interference noise. The type of coding that is applied is one of the factors that determine the required number of iterations.
High Quality Separation of Simultaneous Sources by Sparse Inversion
Authors R. L. Abma, T. Manning, M. Tanis, J. Yu and M. Foster
This paper demonstrates a method of producing pre-stack gathers from blended acquisition data that are as noise-free as gathers from conventional acquisition. Filtering out the interference from the overlapping shots and stacking the data are fairly effective in attenuating the interference from blended shots. However, high-quality separation of the interference from the pre-stack data would make the data more suitable for amplitude-dependent analysis, such as that for amplitude-versus-offset, time-lapse measurements, and fracture detection. The method presented here produces seismic records in which the interference is attenuated to the point that it is well below the background noise. This high-quality attenuation is achieved by using a sparse inversion process that solves a modified version of Berkhout's matrix system, allowing the source responses to be separated for high-quality amplitude measurements. The success of this source separation step in the data processing of blended acquisition means we can produce results with a quality that is comparable to that of conventional acquisition. Furthermore, this method improves the data quality while retaining the lower cost and higher productivity of ISS™ acquisition compared with conventional acquisition.
Separation of Blended Impulsive Sources using an Iterative Approach
Authors P. Doulgeris, A. Mahdad and G. Blacquière
Traditional data acquisition practice dictates sufficient time intervals between the firing of sequential impulsive sources in the field. However, much attention has been drawn recently to the possibility of shooting in an overlapping fashion. Numerous publications have addressed the issue from different perspectives (de-noising, compression, blind signal separation, etc.), while others have defined the theoretical background. The term 'blending' has been introduced to describe this new trend in acquisition design: time-overlapping data acquisition. In turn, the term 'deblending' refers to an algorithm that recovers the data as if they had been shot in the traditional way. Such an algorithm is presented in this paper, specifically designed for the case of impulsive sources that fire with small time delays. The algorithm is based on iterative interference estimation and subtraction. The key to signal extraction from blended data is the incoherency of the interference (as opposed to the coherency of the signal), accomplished by re-sorting the data into a domain other than the common-source domain. The method is applied to a real marine dataset, where the blending process has been simulated numerically.
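A toy numpy version of the estimate-and-subtract loop (the geometry, the dithers and the median-based coherency pass are simplifications of mine, not the paper's adaptive scheme): two sources with flat events are blended with per-trace firing delays; in each source's gather the interference is incoherent, so a lateral median isolates the coherent signal, whose blended prediction is then subtracted from the other gather.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, ntr = 128, 24
dither = rng.permutation(np.arange(5, 5 + ntr))   # distinct per-trace delays

def flat_event(t0):
    r = np.zeros((ntr, nt))
    r[:, t0] = 1.0            # coherent (flat) event across all traces
    return r

s1, s2 = flat_event(30), flat_event(60)

# Blend: each trace records source 1 plus a dithered copy of source 2
blended = s1.copy()
for i in range(ntr):
    blended[i] += np.roll(s2[i], dither[i])

def align(rec, shifts, sign):
    """Apply (+1) or remove (-1) the per-trace firing delays."""
    return np.stack([np.roll(rec[i], sign * shifts[i]) for i in range(ntr)])

est1 = np.zeros_like(s1)
est2 = np.zeros_like(s2)
for _ in range(3):
    # source-1 gather: subtract the predicted source-2 interference,
    # then keep what is laterally coherent (median across traces)
    d1 = blended - align(est2, dither, +1)
    est1 = np.median(d1, axis=0) * np.ones((ntr, 1))
    # source-2 gather: remove the source-1 prediction and undo the dithers
    d2 = align(blended - est1, dither, -1)
    est2 = np.median(d2, axis=0) * np.ones((ntr, 1))
```

In the source-2 gather the dithers make the source-1 event jitter from trace to trace, so the median rejects it; after a few passes both estimates converge to the unblended records.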
Signal-to-noise Estimates of Time-reverse Images
Locating subsurface sources from passive seismic recordings is difficult when attempted with low signal-to-noise data that do not contain observable arrivals. Using time-reversal techniques, recorded energy can be focused at its source depth. However, when a focus cannot be matched to a particular event in the data, it can be difficult to distinguish true focusing from artifacts. Artificial focusing can arise from numerous causes, including surface waves, local noise sources, acquisition geometry and velocity model effects. We present a method to better locate subsurface sources that reduces the ambiguity of the results by creating an estimate of the signal-to-noise ratio in the image domain. Time-reverse imaging techniques are used to image the recorded data and a noise model. In the data domain, the noise model only approximates the energy of local noise sources. After imaging, however, the result also captures the effects of acquisition geometry and spurious focusing due to the velocity model. The noise image is then used to correct the data image to produce an estimate of the signal-to-noise ratio. Synthetic data examples with various amounts of noise demonstrate the versatility of this technique.
Multichannel Matching Pursuit for Seismic Trace Decomposition
By Y. H. Wang
Matching pursuit can decompose a seismic trace adaptively into a series of wavelets. However, the solution is not unique and is strongly affected by data noise. To improve the stability of the procedure, matching pursuit is implemented in a multichannel fashion that exploits lateral coherence as a constraint to overcome the non-uniqueness of the solution. Each constituent wavelet extracted from a seismic trace has an optimal correlation coefficient with neighboring traces. The scheme stabilizes wavelet decomposition with a great improvement in spatial continuity along a seismic profile. It is demonstrated using two examples. One shows that the multichannel scheme is able to remove a strong coal-seam reflection from a seismic profile, so that target reservoirs with weak reflections at the top can be characterized. The other generates a reliable, spatially continuous time-frequency spectrum, on which low-frequency shadows can be used for gas reservoir detection.
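The single-channel core of matching pursuit is easy to sketch (the multichannel lateral-coherence constraint is the paper's contribution and is omitted here); the dictionary frequencies, positions and amplitudes below are hypothetical:

```python
import numpy as np

def ricker(nt, f, dt=0.002):
    t = (np.arange(nt) - nt // 2) * dt
    a = (np.pi * f * t) ** 2
    w = (1.0 - 2.0 * a) * np.exp(-a)
    return w / np.linalg.norm(w)          # unit-norm atom

nt = 256
freqs = [15.0, 30.0, 60.0]
# Dictionary: unit-norm Ricker atoms at every time shift and frequency
atoms, labels = [], []
for f in freqs:
    base = ricker(nt, f)
    for shift in range(nt):
        atoms.append(np.roll(base, shift - nt // 2))
        labels.append((f, shift))
D = np.array(atoms)                       # (n_atoms, nt)

def matching_pursuit(trace, n_iter=5):
    residual = trace.copy()
    picks = []
    for _ in range(n_iter):
        corr = D @ residual               # inner product with every atom
        k = np.argmax(np.abs(corr))       # best-matching wavelet
        picks.append((labels[k], corr[k]))
        residual -= corr[k] * D[k]        # peel it off and repeat
    return picks, residual

trace = (2.0 * np.roll(ricker(nt, 30.0), 80 - nt // 2)
         + 1.0 * np.roll(ricker(nt, 15.0), 180 - nt // 2))
picks, res = matching_pursuit(trace, n_iter=4)
```

With noise-free data and well-separated events the greedy picks recover the constituent wavelets almost exactly; the non-uniqueness the paper addresses appears once noise or overlapping atoms make several picks nearly equivalent.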
Fast Sparse Time-frequency Decomposition
Authors A. Gholami, N. Amini, H. R. Siahkoohi and A. Edalat
Time-frequency analysis plays an important role in seismic data processing and interpretation. A fast algorithm is presented for sparse time-frequency decomposition. A sparsity constraint is used to render the decomposition process unique while producing high-resolution energy distribution maps, which can be used as a reliable attribute for delineating reservoirs.
Can Thin Beds Be Identified Through Statistical Phase Estimation?
Authors J. A. Edgar and J. I. Selvage
We introduce a method of identifying phase changes caused by thin-bed interference on the seismic wavelet, directly from the seismic data alone. We are primarily concerned with the identification of thin beds from seismic sections, but our approach offers scope to map thin beds laterally. We have modified a kurtosis-based statistical wavelet estimation technique to enable the identification of regions of phase change within seismic data. The method is applied to real seismic data which, from well-log lithology identification, was known to contain a series of thin beds. We expect the thin beds to cause phase changes within the seismic data and demonstrate that our statistical method is sensitive to them. This study emphasises the potential of statistical methods to extract useful information from seismic data that may otherwise be missed.
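A minimal sketch of the kurtosis principle behind such statistical wavelet estimation (the localized phase-change detection is the paper's contribution and is not reproduced here): for a sparse reflectivity, the trace is most "spiky", i.e. has maximum kurtosis, when the wavelet is rotated back to zero phase. The wavelet, spike positions and amplitudes below are hypothetical.

```python
import numpy as np

def ricker(n=101, f=25.0, dt=0.002):
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def rotate_phase(x, phi):
    """Constant phase rotation: multiply positive frequencies by exp(i*phi)."""
    X = np.fft.rfft(x)
    return np.fft.irfft(X * np.exp(1j * phi), n=len(x))

def kurtosis(x):
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

# Sparse reflectivity convolved with a 90-degree-phase wavelet
spikes = np.zeros(420)
for pos, amp in [(50, 1.0), (120, -0.8), (200, 0.6), (290, -1.2), (360, 0.9)]:
    spikes[pos] = amp
wavelet = rotate_phase(ricker(), np.pi / 2)
trace = np.convolve(spikes, wavelet, mode="same")

# Scan rotations over [0, pi); kurtosis peaks where the wavelet is rotated
# back to zero phase (a 180-degree polarity ambiguity always remains)
phis = np.linspace(0.0, np.pi, 181)
best_phi = phis[np.argmax([kurtosis(rotate_phase(trace, p)) for p in phis])]
```

Scanning this criterion in local windows, rather than globally, is what turns it into a detector of laterally varying phase.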
High Productivity without Compromise – The Relationship between Productivity, Quality and Vibroseis Group Size
Authors T. Dean, P. Kristiansen and P. L. Vermeer
It has been shown that reducing the source line interval can significantly improve the acquisition footprint of land data. Achieving this reduction without impacting acquisition rates requires the use of high-productivity techniques such as slip-sweep and ISS, typically utilising single vibrators with extended sweeps. These techniques result in a decrease in shot record quality as the sweeps from different vibrators overlap. This paper discusses an altogether more tractable solution: vibrator groups with short sweeps. High productivity can be achieved without interference between different sweeps.
On the Accuracy of Harmonic Estimation from Weighted-sum Ground Force
Authors V. C. Do and C. Bagaini
The harmonic noise generated by hydraulic seismic vibrators is not a major concern in conventional vibroseis acquisition with upsweeps and sweeps longer than the listening time. However, its significance is magnified when the acquisition is conducted with simultaneous shooting techniques based on phase encoding or slip-sweep. We show that using the ground force estimated at the vibrator location leads to a worse approximation of the response of the earth's interior to the harmonics than that determined from the seismic data. The inaccuracy of the estimated ground force in characterizing the harmonic noise increases with frequency and, for the considered dataset, leads to an amplification rather than an attenuation of the harmonic noise above 40 Hz. It is therefore recommended to use the surface data to determine the harmonic noise.
Deharmonics, a Method for Harmonic Noise Removal on Vibroseis Data
Authors F. D. Martin and P. A. Munoz
With the advent of slip sweeps to increase vibroseis acquisition productivity, harmonic noise removal became more critical for preserving data quality compared to conventional "flip-flop" vibroseis data. Several methods for harmonic noise attenuation are available, such as HPVA and those of Jeffryes, Bagaini, Ziolkowski, Sicking et al. and others. Most of the methods are based on recording the vibrator ground-force signal to design the operator. However, in some cases the signal is lost or is not representative of the vibroseis array. Some of the methods require uncorrelated data, which implies handling a large amount of data every day. We have successfully implemented a method to remove the harmonic noise without the ground-force signal. The method is based on collapsing the harmonic noise by adding the pilot fundamental phase to a correlated record and subtracting the theoretical phase of the harmonic to be collapsed, followed by a deterministic surgical edit of the first breaks. Transformation back to the original, noise-free correlated record is trivial by applying the inverse phase operation, i.e. adding the harmonic phase and subtracting the fundamental. The method is demonstrated with synthetic and real data from a slip-sweep acquisition.
Keeping the Data Quality of High Productivity Vibroseis Acquisitions Under Control
By C. Bagaini
The popularity of the slip-sweep method for high-productivity vibroseis acquisition is growing. However, for high values of the sweep-length to slip-time ratio, that is, for aggressive slip-sweep acquisitions, the harmonic contamination is severe. The first part of this paper presents a new technique for harmonic noise attenuation in aggressive slip-sweep acquisition. The second part introduces the dithered acquisition method in the framework of vibroseis acquisition. It is shown that, after separation of the dithered records using a modeling and inversion technique, the quality of the final image (using the same number of shot gathers) is not affected by the dithered interference. The combination of dithered and slip-sweep acquisition is proposed here as a method to increase vibroseis productivity to values close to the limits obtainable when the vibrators sweep independently, while keeping the interference noise due to simultaneous shooting under control.
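The contamination mechanism behind slip-sweep harmonics is easy to reproduce (sweep parameters and harmonic level below are arbitrary): correlating a record that contains a second harmonic with the pilot sweep smears the harmonic energy over a broad range of lags, which in slip-sweep shooting lands on the neighbouring record. For an upsweep, this smeared energy appears at negative correlation lags.

```python
import numpy as np

dt = 0.002
T = 8.0                                   # sweep length (s)
t = np.arange(0.0, T, dt)
f0, f1 = 8.0, 80.0                        # linear upsweep
phase = 2.0 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t ** 2)
pilot = np.sin(phase)
ground = np.sin(phase) + 0.2 * np.sin(2.0 * phase)   # weak 2nd harmonic

def tail_energy(x, y):
    """Correlate y with pilot x and sum squared amplitude beyond +/-1 s lag."""
    c = np.correlate(y, x, mode="full")
    lags = np.arange(-(len(x) - 1), len(x)) * dt
    return np.sum(c[np.abs(lags) > 1.0] ** 2)

e_clean = tail_energy(pilot, pilot)       # Klauder-wavelet sidelobes only
e_dist = tail_energy(pilot, ground)       # plus smeared harmonic energy
```

The more aggressive the slip-sweep timing, the closer the neighbouring record sits to these smeared lags, which is why harmonic attenuation becomes essential.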
Harmonic Distortion Reduction of Seismic Vibrator Using Vibrator Control Electronics
It is well known that the mechanical system of seismic hydraulic vibrators introduces harmonic distortion into the output ground force. Most research and development related to reducing harmonic distortion has focused on the mechanical structure and hydraulic system of the vibrator. However, recent advancements in source control technology demonstrate effective reduction of the harmonic distortion caused by the servo-hydraulic system. This paper discusses these advancements and provides field-test examples of harmonic distortion reduction using various types of seismic vibrators.
-
-
-
What if I...? – The Use of Vibroseis 'Energy Tests' as an Aid in Parameter Choice
Authors P. Kristiansen, J. Quigley, D. Holmes and T. Dean
A core component in planning a land seismic survey is the choice of source and its parameters. Parameter tests are often performed as part of the survey start-up, but interpretation of the results from these tests is frequently subjective, particularly when a complete line or swath is not acquired and processed. The typical instinct is to err on the side of caution, which may result in the application of an excessive amount of source energy, negatively impacting survey efficiency and cost. However, the effect of different sources and source parameters on signal-to-noise ratio and other key quality indicators can be tested and analyzed in a systematic way. This allows comparison of a much wider variety of source options than is possible through the acquisition of a very limited number of costly and time-consuming test lines. We can also identify possible efficiency improvements resulting from acquiring data with equivalent signal-to-noise ratios but different parameter combinations. A suite of such tests, referred to as energy tests, has been developed within WesternGeco. In this paper, we describe the results from these tests and their interpretation.
-
-
-
Low-frequency Generation Using Seismic Vibrators
Authors G. J. M. Baeten, A. Egreteau, J. Gibson, F. Lin, P. Maxwell and J. J. Sallas
Typical specifications for a seismic vibrator include a low-frequency limit determined by the reaction mass weight, the piston stroke and the pump flow rating. Emission of frequencies well below this so-called displacement limit is investigated using dedicated sweep designs. The impact of the low frequency emissions on geophysical and mechanical particle motions is investigated using a variety of different sensors, downhole, along a 2D receiver line and on the seismic vibrator.
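For context, the displacement limit mentioned above follows from elementary mechanics: with a sinusoidal drive, the peak inertial force the reaction mass can deliver is m·ω²·(stroke/2), so the available ground force falls off as f² towards low frequencies. A small sketch of this standard relation (the parameter values in any example are illustrative, not a particular vibrator's specification):

```python
import math

def displacement_limited_force(freq_hz, reaction_mass_kg, peak_stroke_m):
    """Peak force available from reaction-mass displacement alone,
    F = m * (2*pi*f)**2 * (stroke / 2) for a sinusoidal drive.
    Below the displacement limit it is the stroke, not the hydraulic
    supply, that caps the vibrator output."""
    omega = 2.0 * math.pi * freq_hz
    return reaction_mass_kg * omega ** 2 * peak_stroke_m / 2.0
```

Halving the drive frequency costs a factor of four in available force, which is why dedicated low-frequency sweep designs, as investigated in the paper, are needed well below the displacement limit.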
-
-
-
Directive Geophone
There may be good reason to consider planting geophones at an angle to enhance energy arriving at high incidence angles. This paper examines two issues: the fidelity of geophones planted with a tilt, and the potential benefit of data collected with tilted geophones. Lab measurements have shown that if a conventional geophone is tilted at 40° from vertical, this causes a 70% loss of its maximum response (Bertram et al., 1999). The author has measured geophone properties in the field under static (passive source) and dynamic (active source) modes. In static mode, conventional geophones were observed operating normally up to 30° of tilt. In dynamic mode, a tilt of 60° was observed to cause about a 90-95% loss of the maximum response, while no losses were observed at tilt angles ranging from 15° to 45°. The fact that geophones can maintain response integrity at tilted angles can be used to create directionally tuned arrays to enhance energy arriving at higher incidence angles. The directive or "Steering and Fixed Angle Geophone" (patent pending), presented here, is an attempt to improve the recovery of energy from longer offsets at the acquisition stage.
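For comparison with the figures above, the purely geometric part of a tilted vertical geophone's sensitivity loss follows a cosine law; since cos 60° accounts for only a 50% loss, the 90-95% dynamic-mode losses reported at 60° must be dominated by electromechanical effects rather than projection alone. A minimal sketch of just the geometric term:

```python
import math

def geometric_tilt_loss(tilt_deg):
    """Fractional loss of vertical-component sensitivity from geometric
    projection alone: a geophone tilted by angle t senses only cos(t)
    of the vertical particle motion.  Real geophones add
    electromechanical losses on top of this, as the lab and field
    measurements quoted in the abstract show."""
    return 1.0 - math.cos(math.radians(tilt_deg))
```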
-
-
-
Field Data Results of Elimination of Free-surface-related Events for Marine Over/Under Streamer Data
Authors M. Majdanski, C. Kostov, E. Kragh, I. Moore, M. Thompson and J. Mispel
Attenuation of multiples in marine seismic data is typically achieved using a multiple prediction and adaptive subtraction process. An alternative method based on deconvolution of up- by down-going pressure data has been successfully applied to ocean-bottom data, where it achieves removal of all free-surface-related events (source- and receiver-side ghosts, as well as free-surface multiples). Here, we present a field data application of this method to towed-streamer data, acquired in an over/under configuration. To overcome the requirement of accurately recording direct arrivals, we use an estimation based on source near-field hydrophone recordings. The final results are compared to the standard SRME technique, showing similar performance but within a single operation and without the requirement of adaptive subtraction.
-
-
-
Surface Related Multiple Elimination for WATS Data
Authors E. Kurin, D. Lokshtanov and H. K. Helgesen
A version of the 3D SRME algorithm for WATS data is considered. In this version, missing traces required for 3D multiple prediction are reconstructed on the fly based on the azimuth-dependent NMO for a few selected horizons. The subtraction scheme with multichannel multidimensional filters averaged over a streamer, which avoids re-sorting of traces, proved effective for such data. The use of synthetic data, generated from a realistic model with an acquisition pattern corresponding to the real case, helps to analyze the strong and weak points of the studied demultiple scheme. The results show that the suggested algorithm is effective for WATS data, especially at near-to-middle offsets. For far offsets combined with complex geological settings, there is still room for improvement. An effective data handling scheme for scalable implementation of the algorithm on compute clusters is suggested, in which receiver-side traces are accessed in sequential order and processed according to pre-computed index tables of trace contributions.
-
-
-
3D Predictive Deconvolution for Wide-azimuth Gathers
Authors P. Hugonnet, J. L. Boelle, P. Herrmann, F. Prat and S. Navion
Usual pre-stack predictive deconvolution solutions for multiple attenuation are either mono-channel or designed for 2D gathers. These existing 1D/2D solutions are suboptimal for today's modern acquisition geometries, which allow the construction of 3D, wide-azimuth pre-stack collections. We therefore present a 3D pre-stack predictive deconvolution algorithm suited to today's WAZ HD HR gathers. It targets the attenuation of multiples (either surface or internal) in horizontally layered media, and is applied on densely sampled, wide-azimuth gathers from marine, land, or OBC surveys (cross-spread gathers, receiver or shot gathers, mega bin, macro bin, ...). As usual with predictive deconvolution algorithms, it is most useful for short- to medium-period multiples.
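As a reference point for the 3D extension described above, here is a sketch of the classical mono-channel (1D) predictive deconvolution it generalizes: a Wiener prediction filter with a given prediction distance (gap) is designed from the trace autocorrelation, and the predictable, periodic (multiple) part of the trace is subtracted. All names and parameter choices here are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def predictive_decon(trace, gap, nfilt, eps=1e-3):
    """Mono-channel predictive deconvolution sketch.

    A prediction filter f of length `nfilt` with prediction distance
    `gap` (in samples) is found from the normal equations R f = g,
    where R is the autocorrelation matrix and g holds autocorrelation
    lags gap .. gap+nfilt-1.  The predicted (periodic, multiple)
    energy is then subtracted.  `eps` is prewhitening.
    """
    n = len(trace)
    ac = np.correlate(trace, trace, mode="full")[n - 1 : n - 1 + gap + nfilt]
    R = np.array([[ac[abs(i - j)] for j in range(nfilt)] for i in range(nfilt)])
    R += eps * ac[0] * np.eye(nfilt)
    g = ac[gap : gap + nfilt]
    f = np.linalg.solve(R, g)
    prediction = np.convolve(trace, f)[:n]
    out = trace.copy()
    out[gap:] -= prediction[: n - gap]
    return out
```

On a trace containing a decaying multiple train of period 50 samples, a filter with gap 45 and length 10 removes nearly all of the repeated energy while leaving the primary untouched.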
-
-
-
2D Multiple Prediction in the Curvelet Domain
Authors D. Donno, H. Chauris and M. Noble
The suppression of multiples is a crucial task when processing seismic reflection data. We investigate how curvelets can be used for surface-related multiple prediction. From a geophysical point of view, a curvelet can be seen as the representation of a local plane wave, and is particularly well suited for seismic data decomposition. For the prediction of multiples in the curvelet domain, we propose to first decompose the input data into curvelet coefficients. These coefficients are then convolved together to predict the coefficients associated with multiples, and the final result is obtained by applying the inverse curvelet transform. The curvelet transform offers two advantages. The directional characteristic of curvelets allows us to exploit Snell's law at the sea surface. Moreover, possible aliasing in the predicted multiples can be better managed by using the curvelet multi-scale property to weight the prediction according to the low-frequency part of the data. 2D synthetic and field data examples show that some artifacts and aliasing effects can indeed be reduced in the multiple prediction with the use of curvelets.
-
-
-
Incorporating the Source Array into Primary Estimation
Authors G. J. A. van Groenestijn and D. J. Verschuur
In the surface-related multiple elimination (SRME) method the source array is assumed to act as a single stable source. When the behavior of the source array differs too much from this assumption, it affects the primary estimation. We demonstrate this with two cases: unstable sources and large source arrays. We discuss which measures to take to incorporate the source array in SRME. On the same synthetic data examples we demonstrate that it is straightforward to incorporate the source array into the recently introduced estimation of primaries by sparse inversion (EPSI) method. For the unstable-source case, EPSI can estimate the source wavelet that was used during each shot. For the case of large source arrays, EPSI can estimate primary impulse responses from the data as if they came from a single point-source acquisition, thus removing the blending effect of the source array.
-
-
-
Stabilized Estimation of Primaries by Sparse Inversion
Authors T. T. Y. Lin and F. J. Herrmann
Estimation of Primaries by Sparse Inversion (EPSI) is a recent method for surface-related multiple removal using a direct estimation method closely related to Amundsen inversion, where under a sparsity assumption the primary impulse response is determined directly from a data-driven wavefield inversion process. One of the major difficulties in its practical adoption is that one must have precise knowledge of a time window that contains multiple-free primaries during each update. Moreover, due to the nuances involved in regularizing the model impulse response in the inverse problem, the EPSI approach has a number of additional inversion parameters for which it may be difficult to choose reasonable values. We show that the specific sparsity constraint on the EPSI updates leads to an inherently intractable problem, and that the time window and other inversion variables arise in the context of additional regularizations that attempt to drive towards a meaningful solution. We furthermore suggest a way to remove almost all of these parameters via convexification, which stabilizes the inversion while preserving the crucial sparsity assumption in the primary impulse response model.
-
-
-
Surface Multiple Attenuation Through Sparse Inversion – Attenuation Results for Complex Synthetics and Real Data
Authors T. Savels, K. de Vos and J. W. de Maag
We present new results of the recently introduced multiple attenuation through sparse inversion approach (van Groenestijn and Verschuur, 2009). This method aims at attenuating surface multiples of all periodicities without the need for an adaptive subtraction or a near-offset extrapolation. The aim of our paper is twofold. First, we demonstrate the viability of the sparse inversion approach on two complex 2D synthetic data sets, showing that sparse inversion may serve as a powerful tool to attenuate short and long-period multiples in complex settings. We additionally illustrate that the resulting primary estimations match the synthetically modelled ones. Second, we apply the 2D sparse inversion approach to a 2D line of a real 3D deep-marine data set. We observe that the estimated multiples exhibit a time shift compared to the true ones, leading to a degraded multiple attenuation. In a further synthetic study we demonstrate that, as expected, this time shift is induced by the presence of cross-line dip. Our observations confirm the necessity of a full 3D approach in order for the sparse inversion method to be effective in practice.
-
-
-
Energy Reassignment of an Image for Improved Picking of Velocity Dispersion Curves
Authors C. W. Hyslop and M. S. Diallo
Many algorithms have been proposed to improve the resolution of the velocity dispersion curve through the use of different transforms to the frequency-velocity domain. The reassignment proposed in this paper is an image processing and picking technique that has the ability to overcome problems intrinsic to poor sampling and complex modal structure. We use an iterative approach in applying energy reassignment to move energy points closer to the crest of the dispersive mode in the frequency-velocity image. In this sense, energy reassignment of the image can be used as a mode separation technique. Directly reassigning the image is also computationally efficient, thus facilitating interactive analysis and processing of surface waves. An example from within the processing flow of a surface wave mitigation algorithm is given to show the advantage in using this process on beamformed dispersive modes.
-
-
-
A Marine Broadband Case Study Offshore China
Authors T. J. Bunting, B. J. Lim, C. H. Lim, S. W. Pei, S. K. Yang, Z. B. Zhang, Y. H. Xie and L. Li
The effect of the sea-surface ghost on marine seismic acquisition is well understood. Shallow tow geometries recover high frequencies at the expense of attenuating low frequencies, and deep tow geometries recover low frequencies at the expense of attenuating high frequencies. In recent years two dual-depth streamer solutions (Over-Under and Sparse-Under) have been deployed, both of which use two streamers towed at different depths. A 2D survey was acquired offshore China, in August 2009, utilizing three separate streamer depths (5, 17 and 23 m). This three-depth configuration allows the benefits of the two broadband solutions to be evaluated against each other and against a shallow single-depth streamer measurement. This paper reviews the theory behind the two combination techniques, compares the seismic datasets, and concludes with some observations on the relative benefits both in terms of seismic imaging and acquisition efficiency.
-
-
-
First 3-D Dual-sensor Streamer Acquisition in the North Sea – An Example from the Luno Field
Authors B. Osnes, A. Day, M. Widmaier, J. E. Lie, V. Danielsen, D. Harrison-Fox, M. Lesnes and O. El Baramony
In 2009, the first 3-D dual-sensor towed-streamer survey in the North Sea was acquired over the Luno field. The field presents a challenging area for seismic imaging, requiring the best possible resolution to image the complexity of the reservoir, together with maximum signal penetration through an overlying chalk interval. Dual-sensor technology is therefore attractive due to its ability to remove the receiver ghost, thereby increasing the usable bandwidth, which in turn permits increased towing depth to improve the low-frequency signal-to-noise ratio. In this paper, the acquisition and processing of this dataset are reviewed. It is shown that, in addition to the data quality benefits, deep-towed dual-sensor acquisition has a number of operational advantages over conventional acquisition. In particular, dual-sensor streamer operations are less susceptible to weather-related noise and seismic interference.
-
-
-
Mitigating the Environmental Footprint of Towed Streamer Seismic Surveys
Authors P. M. Fontana and C. P. Zickerman
The environmental risks associated with marine towed-streamer seismic surveys can be categorized into four main emission components: solid, fluid, gaseous, and acoustic. Sources for each component can be identified with respect to the maritime technologies incorporated into the design of the seismic survey vessel and the seismic acquisition technologies and methods applied to specific geophysical objectives. To facilitate this analysis, we introduce the concept of an emission risk matrix, within which the potential risk level of each emission component associated with the survey vessel and the seismic equipment can be assessed. Once the primary risks have been identified, mitigation measures can be identified which, when applied, can significantly reduce the environmental footprint of the towed-streamer survey effort.
-
-
-
Recon 3D – A Fast and Cheap 3D Acquisition Approach to Large Scale Exploration
Authors M. D. MacNeill and J. F. McNutt
In deep-water environments, energy companies are exploring increasingly subtle plays, in ever shorter time frames, to discover commercial hydrocarbon volumes. Advances in deep-water drilling and completions, subsea equipment and facilities technology underpin many of the recent commercially successful deep-water developments; exploration seismic advancements have lagged those in engineering. In deep-water frontier environments, seismic stratigraphic and structural interpretation, in addition to hydrocarbon system and regional analyses, are clearly key components of the prospect evaluation and risking that lead to successful exploration and development programs. This clear business need provides an opportunity for more cost-effective seismic acquisition technology. This paper introduces the novel RECONnaissance 3D marine seismic acquisition method, which provides the spatial and temporal resolution of conventional marine 3D seismic at a significantly reduced cost. RECON 3D is an ideal method for acquiring deep-water seismic data to explore large areas, to upgrade separately or previously acquired azimuthally restricted data, or to acquire fast-track data covering numerous fields, prospects, or leads.
-
-
-
First Wide-azimuth Time-lapse Seismic Acquisition Using Ocean Bottom Seismic Nodes at Atlantis Field – Gulf of Mexico
Authors G. J. Beaudoin, M. D. Reasnor, M. Pfister and G. Openshaw
The world's first ocean bottom seismic (OBS) node-on-node time-lapse monitor survey was acquired at Atlantis Field in the Gulf of Mexico in 2009. The baseline survey, acquired in 2005-2006, was the first large-scale deepwater OBS survey employing autonomous nodes (Beaudoin and Ross, 2007). Following first oil in October 2007, studies were initiated for a time-lapse survey to monitor production changes in the Atlantis reservoir. The baseline survey demonstrated that remotely-operated vehicles (ROVs) could deploy and recover autonomous nodes with high positional accuracy even in the presence of severe bathymetric challenges. This capability underpinned the design, planning and execution of a highly repeatable monitor survey. For the monitor survey, an ROV deployed 500 nodes to a subset of the original 1628 baseline node locations. These 500 monitor locations were chosen to image the expected area of reservoir changes after careful analysis of image quality as baseline data were progressively decimated. The ROV deployed 91% of the nodes to within 5 meters of their respective baseline locations and 98% within 10 meters. This remarkable geometric repeatability within a producing field has laid the foundation for highly repeatable monitor surveys even in challenging seafloor environments.
-
-
-
A Case Study of a High Latitude Towed Streamer 3D Seismic Survey
Authors A. Ross, S. Hildebrand and S. Viceer
In the summer of 2009, CGGVeritas acquired a towed-streamer seismic survey in the Canadian Beaufort Sea for BP. This was the highest-latitude (>71 degrees) towed-streamer 3D survey collected by either company. The presence of first-year and multi-year sea ice dominated the operation and taught both companies a great deal about high-latitude seismic operations.
-
-
-
Randomized Sampling Strategies
Seismic exploration relies on the collection of massive data volumes that are subsequently mined for information during seismic processing. While this approach has been extremely successful in the past, the current trend towards higher-quality images of increasingly complicated regions continues to reveal fundamental shortcomings in our workflows for high-dimensional data volumes. Two causes can be identified. First, there is the so-called "curse of dimensionality", exemplified by Nyquist's sampling criterion, which puts disproportionate strain on current acquisition and processing systems as the size and desired resolution of our survey areas continue to increase. Second, there is the recent "departure from Moore's law", which forces us to lower our expectations of computing ourselves out of this curse of dimensionality. In this paper, we offer a way out of this situation through deliberate randomized subsampling combined with structure-exploiting transform-domain sparsity promotion. Our approach is successful because it reduces the size of seismic data volumes without loss of information. As such, we end up with a new technology where the costs of acquisition and processing are no longer dictated by the size of the acquisition but by the transform-domain sparsity of the end product.
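The principle of randomized subsampling plus transform-domain sparsity promotion can be illustrated with a toy compressed-sensing recovery. The DCT basis, the ISTA solver, and all sizes below are illustrative assumptions for the sketch, not the authors' workflow.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; rows are cosine basis functions."""
    j = np.arange(n)
    D = np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2.0)
    return D

def ista_recover(y, A, lam, n_iter=500):
    """Iterative soft thresholding for min 0.5*||A c - y||^2 + lam*||c||_1.
    A unit step size is safe because A is a row restriction of an
    orthonormal transform, so the Lipschitz constant is at most 1."""
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        c = c + A.T @ (y - A @ c)                          # gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # sparsity promotion
    return c

rng = np.random.default_rng(0)
n, m = 128, 60
D = dct_matrix(n)
c_true = np.zeros(n)
c_true[[3, 17, 40]] = [1.0, -0.8, 0.6]        # 3-sparse in the transform domain
signal = D.T @ c_true
rows = rng.choice(n, size=m, replace=False)   # deliberate randomized subsampling
A = D.T[rows]                                 # measurement operator
y = signal[rows]
c_hat = ista_recover(y, A, lam=0.005)
rel_err = np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true)
```

With 60 of 128 samples kept at random, the sparse coefficient model is recovered with small relative error; regular (every-other-sample) decimation would instead alias, which is why the subsampling must be randomized.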
-
-
-
Acquisition Design for Incoherent Shooting
Authors G. Blacquiere and A. J. Berkhout
In blended seismic data acquisition, the blended source wavefield should be designed in such a way that it has a large spatial bandwidth without notches. This corresponds to using blended source arrays with a high degree of incoherency. The three-dimensional (x,y,t) autocorrelation function of the blended source wavefield at the surface is proposed as a surface-related, quantitative measure of incoherency. In this assessment the subsurface is not involved. A second measure is proposed that does include the propagation effects of the near- and subsurface on the illumination by the blended source wavefield. For each subsurface gridpoint the autocorrelation function of the incident blended source wavefield - being represented by a dispersed time series - is judged for its whiteness. This result can be extended to angle-dependent illumination by computing the cross-correlation function as well.
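A scalar time-axis analogue of the proposed measure is easy to sketch: compare the zero-lag energy of the autocorrelation of the incident time series against its total autocorrelation energy, so that a perfectly white (impulse-like) series scores 1. The 3D (x,y,t) measure in the paper extends this idea spatially; this one-dimensional version is only an illustration.

```python
import numpy as np

def whiteness(series):
    """Crude whiteness measure of a source time series: zero-lag
    autocorrelation energy relative to total autocorrelation energy.
    Returns 1.0 for a perfect impulse and small values for coherent,
    slowly varying wavefields."""
    series = np.asarray(series, dtype=float)
    ac = np.correlate(series, series, mode="full")
    return ac[len(series) - 1] ** 2 / np.sum(ac ** 2)
```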
-
-
-
Time-lapse Refraction Seismic Monitoring
Authors F. Hansteen, P. B. Wills, J. C. Hornman, L. Jin and S. J. Bourne
A novel seismic technique for reservoir monitoring has been developed and tested in a recent field trial at the Peace River heavy oil field in Alberta, Canada. By measuring time shifts on first-arrival head waves from a refracting layer below the reservoir, the method aims to produce high-resolution areal maps of reservoir time shifts at a much lower cost than conventional 4D seismic. Good lateral resolution is achieved by numerically redatuming the wavefield recorded at the surface to a datum just above the reservoir. Preliminary results show plausible one-way time-lapse time shifts in the reservoir of the order of 2 ms, caused by variations in reservoir fluid pressure in the vicinity of active steam injectors.
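The basic time-shift measurement underlying this kind of monitoring can be illustrated with a simple cross-correlation picker. This is integer-sample precision only, and the field method involves redatuming and far more care, so the sketch below is purely illustrative.

```python
import numpy as np

def time_shift(base, monitor, dt):
    """Estimate the time shift (in seconds) of `monitor` relative to
    `base` from the peak of their cross-correlation.  Positive values
    mean the monitor arrival is later than the baseline arrival."""
    xc = np.correlate(monitor, base, mode="full")
    lag = int(np.argmax(xc)) - (len(base) - 1)
    return lag * dt
```

In practice, subsample precision would be obtained by interpolating around the correlation peak; shifts of the order of 2 ms correspond to only a few samples at typical sampling intervals.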
-
-
-
Benefits of Hydrophones for Land Seismic Monitoring
Authors E. Rebel and E. Forgues
CGGVeritas has conducted for Shell Canada a 4D project based on a network of buried mini-vibrators associated with buried sensors. This paper shows a comparison of signal and noise recorded on different types of sensors (surface DSUs, buried geophones and hydrophones). We conclude that buried hydrophones provided the best data quality: (i) they are free of shear waves, (ii) they present a better signal-to-noise ratio (20 dB gain), and (iii) they show better repeatability. Hydrophones are therefore well adapted to permanent land seismic acquisition used in 4D monitoring.
-