68th EAGE Conference and Exhibition incorporating SPE EUROPEC 2006
- Conference date: 12 Jun 2006 - 15 Jun 2006
- Location: Vienna, Austria
- ISBN: 978-90-73781-00-9
- Published: 12 June 2006
Building 4D Constraint for History Matching from Stratigraphic Pre-Stack inversion - Application to the Girassol Field
Authors: P. Nivlet, T. Tonellot, P. Sexton, J. L. Piazza, F. Lefeuvre and O. Duplantier
Adequate history matching is mandatory to properly understand the production-related physical changes occurring in a hydrocarbon field. However, this process has no unique solution. 4D seismic, which consists of repeating a 3D seismic survey at different calendar times, is an additional source of data that provides information about production effects. Its integration in the history matching process can therefore help to better identify permeability barriers and by-passed oil zones, and thus influence field management.
In this paper, we focus on the definition of this 4D seismic constraint on Girassol, a giant deep-water field offshore Angola. The methodology we have used is based on a sequential stratigraphic inversion of pre-stack data from the different seismic vintages. The inversion of the monitor survey is divided into four steps:
- time realignment of seismic events from the monitor survey on the base survey;
- verification of the invariability of wavelets with calendar time;
- updating of the a priori model;
- pre-stack stratigraphic inversion of the monitor survey.
Finally, we interpret the variations of inverted impedances between the different vintages in terms of production effects.
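The first step of such a workflow, time realignment of the monitor survey on the base survey, is commonly implemented with windowed cross-correlations. A minimal sketch, not the authors' code; function names, window sizes and lag range are all illustrative:

```python
import numpy as np

def window_time_shifts(base, monitor, dt, win=64, step=32, max_lag=10):
    """Estimate the time shift (in seconds) of `monitor` relative to
    `base` in sliding windows, by picking the cross-correlation lag
    that best aligns the two traces."""
    lags = np.arange(-max_lag, max_lag + 1)
    shifts = []
    for start in range(0, len(base) - win, step):
        b = base[start:start + win]
        m = monitor[start:start + win]
        # correlation of the back-shifted monitor window with the base window
        # (np.roll wraps around, acceptable for a short illustrative window)
        cc = [np.dot(np.roll(m, -lag), b) for lag in lags]
        shifts.append(lags[int(np.argmax(cc))] * dt)
    return np.array(shifts)
```

A positive shift means the monitor event arrives later than the base event; the resulting shift profile is what would then be applied before comparing vintages.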
Integration of 4D Seismic Data and the Dynamic Reservoir Model - Revealing New Targets in Gannet C
Authors: P. S. Rowbotham, R. Staples, A. Cook, J. Braisby and A. Mabillard
Two sets of 4D seismic data gave major new insights into the structure and dynamic behaviour of the Gannet C oil and gas reservoir in the UK Central North Sea. The 4D data revealed major extensions of reservoir units previously presumed to be absent or thin over much of the reservoir. Furthermore, in a subsea field with significant uncertainty in production allocation, 4D also proved an invaluable history matching constraint for the dynamic model. Together, the dynamic model and the 4D data led to the identification of one recompletion opportunity and two infill well opportunities to produce oil volumes in existing and newly identified reservoir sands.
Integrated 3D/4D Structural and Stratigraphic Interpretation on Nelson Accounts for Variable Fluid Contact Levels
Authors: A. MacLellan, P. S. Rowbotham, R. Rogers and J. Millington
To optimize the value of time-lapse seismic data, it is essential to understand the reservoir architecture and structure in 3D. Only then can we attribute 4D signal to specific reservoir flow units, and 4D signal boundaries to flow barriers such as stratigraphic margins or faults. We present a case study on the UK Nelson oil field, which, with its multiple seismic monitor surveys and strong 4D water-sweep signal, is a world-class example of 4D data. We show how 3D and 4D interpretation can explain variability in sweep and thus identify infill drilling requirements.
Impact of Time Lapse Processing on 4D Simultaneous Inversion - The Marlim Field Case Study
Authors: A. Castoro, C. Reiser, E. Thedy Ambrosini, P. Johann Schroeder and P. Lee
This paper presents a 4D simultaneous inversion case study from the Marlim field, offshore Brazil, and aims to illustrate the influence of processing on time-lapse AVO inversion.
An AVO inversion of the Marlim 4D seismic dataset was first performed on the original dataset, which had been neither acquired nor processed for time-lapse purposes. A seismic pre-conditioning sequence was applied to the post-stack migrated angle stacks to improve seismic repeatability. The sequence included the creation of a 'common' seismic dataset generated from the coherent signal between base and monitor data; the common dataset has a higher signal-to-noise ratio than either the base or the monitor vintage. The pre-conditioning efficiently improves the repeatability of the vintages. However, the inversion results show a significant amount of noise contaminating the 4D signal.
For this reason, the 4D dataset has subsequently been reprocessed in parallel by CGG on behalf of Petrobras. The reprocessing applied the same processing sequence to both vintages, and the AVO inversion is being repeated on the new dataset. The 4D parallel reprocessing dramatically increases the repeatability of the two vintages, allowing an efficient reduction of the 4D noise for the inversion.
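Repeatability of this kind is usually quantified with the normalised RMS (NRMS) metric of Kragh and Christie. A minimal sketch of the standard definition, illustrative rather than the authors' code:

```python
import numpy as np

def nrms(base, monitor):
    """Normalised RMS difference between two traces over a window:
    0 for identical traces, about 1.4 for uncorrelated noise of equal
    power, and 2 for polarity-reversed traces."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 2.0 * rms(monitor - base) / (rms(monitor) + rms(base))
```

In practice the metric is computed trace by trace in a window above or below the reservoir, before and after each pre-conditioning or reprocessing step, to track the repeatability improvements described above.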
Plan, Acquire, Process and Interpret - How to Turn Around a Time Lapse Survey in Three Weeks
Authors: C. R. Schiott and A. King
The turnaround of seismic surveys, and of time-lapse data in particular, can be greatly reduced if the objectives are well defined at an early project stage. With these objectives defined by an integrated reservoir management team of subsurface and production specialists, the geophysicists can design the survey geometry and processing workflow and prepare an interpretation program that delivers initial answers on sweep geometry and compartmentalization. We demonstrate how a towed-streamer monitor survey shot in September-October 2005 was planned and prepared so that amplitude and time-change maps were available for analysis within three weeks of the last shot fired, while still including full processing and prestack migration of the monitor survey.
Enhanced Streamer Positioning for 3D and 4D Seismic
Authors: J. A. Musser, M. Burnham and D. Ridyard
Streamer positioning technology has evolved from streamer-shape computations based on magnetic compasses to hybrid systems that combine compasses with limited acoustic sensors to reduce positioning errors. Correlated positioning errors can have a detrimental impact on the resolution of 3D surveys, and their potential impact on 4D seismic is even more serious. These problems can be addressed by more rigorous, fully cross-braced acoustic positioning networks. Such networks can improve positioning accuracy by 60% and facilitate the use of steerable streamers for minimizing streamer separation errors, streamer offset from obstructions, and repeatability errors in 4D seismic monitoring surveys.
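At the heart of an acoustic positioning network is range-based location: each node's position is recovered by least squares from measured distances to nodes whose positions are known. A hedged sketch of that underlying computation via Gauss-Newton trilateration; this is illustrative and not the authors' system:

```python
import numpy as np

def locate_node(anchors, ranges, guess, iters=10):
    """Least-squares position of one network node from acoustic ranges
    to nodes with known positions (Gauss-Newton on linearised ranges)."""
    anchors = np.asarray(anchors, float)
    x = np.asarray(guess, float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)   # predicted ranges
        J = (x - anchors) / d[:, None]            # d(range)/d(position)
        r = ranges - d                            # range residuals
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + dx
    return x
```

A fully cross-braced network measures many such ranges between vessels, tail buoys and streamer sections simultaneously, so the combined system is strongly over-determined and correlated errors are suppressed.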
Understanding 4D Repeatability Variograms
Authors: H. C. Hoeber, S. Butt, C. Lacombe, S. Campbell and D. N. Whitcombe
Recent discussions of 4D repeatability have focused primarily on the effect of acquisition differences prior to processing. We show that, in areas with target reflector dip, acquisition differences will always result in non-repeatability prior to regularization, because the reflection points differ. Processing elements such as 4D binning and regularization attempt to move traces to a common subsurface position (in a 4D sense) and to their bin centre. They are designed to overcome differences in acquisition and cannot be overlooked in a full repeatability analysis. Since regularization and imaging destroy the association between traces and navigation attributes, a variogram analysis based on differences in source and receiver positions cannot be performed beyond these steps. Consequently, it is difficult to predict final repeatability from a variogram analysis prior to regularization.
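A repeatability variogram of the kind discussed here is, in essence, a binned average of a trace-pair repeatability measure (such as NRMS) against the combined source-plus-receiver position difference dS + dR. A minimal sketch; the function name, binning scheme and inputs are assumptions, not the authors' implementation:

```python
import numpy as np

def repeatability_variogram(dsr, nrms_vals, bin_width=5.0):
    """Average trace-pair NRMS as a function of the combined source-
    plus-receiver positioning difference dSR = |dS| + |dR| (metres)."""
    dsr = np.asarray(dsr)
    nrms_vals = np.asarray(nrms_vals)
    bins = np.floor(dsr / bin_width).astype(int)
    centres, means = [], []
    for b in np.unique(bins):
        sel = bins == b
        centres.append((b + 0.5) * bin_width)   # bin centre in metres
        means.append(np.mean(nrms_vals[sel]))
    return np.array(centres), np.array(means)
```

The paper's point is that this analysis is only meaningful before regularization, while each trace pair can still be tagged with its navigation attributes.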
Time-Spectral Analysis for 4D Data Q-Controlled Calibration
Authors: D. Lecerf, M. Rogers and F. Lefeuvre
Conventional approaches in time-lapse studies often ignore seismic transmission effects such as attenuation. Gas injection, for example, may change the quality factor Q, so that amplitude and phase variations in time-lapse seismic data may be wrongly interpreted. To correct such spectral distortion, we present a cross-equalisation technique based on differential Q-controlled calibration. The methodology should be applied in a 4D context when attenuation varies inside the reservoir or in the overburden.
The spectral characterisation of the data is tested with two different techniques, multi-taper and wavelet decomposition, and we present their respective advantages and disadvantages for time-lapse studies.
The cross-equalisation is defined as "attribute-driven processing". Using the constant-Q definition, an attribute called 4DQ is computed simultaneously on the base and monitor surveys by linear fitting of the logarithmic spectral ratio. The calibration, controlled by the 4DQ attribute, removes the effect of the absorption variation.
The methodology is tested on real data. The measured 4DQ attribute shows a clear spatial correlation with the reservoir under production, although its direct interpretation remains delicate. Furthermore, the calibration process and a single-Q compensation are applied, providing a high-resolution 4D signature.
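Under the constant-Q model, amplitude decays as exp(-pi f t / Q) over traveltime t, so the logarithmic spectral ratio of monitor to base is linear in frequency and its slope yields a differential Q. A minimal sketch of such a 4DQ-style estimate; illustrative only, as the paper's actual attribute computation may differ:

```python
import numpy as np

def q4d_from_spectra(freqs, amp_base, amp_monitor, t_travel):
    """Differential Q from the linear fit of the logarithmic spectral
    ratio: ln(A_mon / A_base) = intercept - pi * f * t / Q4D."""
    log_ratio = np.log(amp_monitor / amp_base)
    slope, intercept = np.polyfit(freqs, log_ratio, 1)  # linear fit in f
    return -np.pi * t_travel / slope
```

The fit's intercept absorbs frequency-independent amplitude differences, which is what makes the slope a usable differential-attenuation attribute.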
Use of Spectral Decomposition to Detect Dispersion Anomalies Associated with Gas Saturation
Authors: M. Chapman, J. Zhang, E. Odebeatu, E. Liu and X. Y. Li
Significant evidence suggests that hydrocarbon deposits are associated with abnormally high values of seismic attenuation, and the ability to detect such zones would aid seismic exploration. Unfortunately, attenuation is difficult to measure, and it is not clear how observed frequency responses should be interpreted. Based on forward modelling, we believe that the frequency-dependent reflection coefficient resulting from high dispersion in the hydrocarbon-saturated zone can often be the dominant observable effect. We show how spectral decomposition can be used to detect such effects and validate the technique with synthetic data. We show examples of spectral anomalies associated with gas reservoirs in field data, and demonstrate how these anomalies can be modelled in terms of gas-induced dispersion.
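Spectral decomposition of this kind can be sketched as a Gaussian-windowed Fourier (Gabor) transform returning the amplitude of one frequency component at every time sample. This is a generic illustration, not necessarily the authors' decomposition method:

```python
import numpy as np

def iso_frequency_amplitude(trace, dt, f0, sigma_t=0.02):
    """Single-frequency amplitude attribute: magnitude of the trace's
    Gaussian-windowed Fourier component at f0 for every output time."""
    n = len(trace)
    t = np.arange(n) * dt
    out = np.empty(n)
    for i in range(n):
        w = np.exp(-0.5 * ((t - t[i]) / sigma_t) ** 2)  # Gaussian window at t[i]
        out[i] = np.abs(np.sum(trace * w * np.exp(-2j * np.pi * f0 * t)) * dt)
    return out
```

Comparing such iso-frequency amplitude maps at low versus high f0 is the usual way anomalies of the kind described in the abstract are displayed.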
Seismic Wave Attenuation and Dispersion in Patchy-Saturated Rocks - Numerical Experiments
Authors: F. Krzikalla, T. M. Müller, B. Hardy and B. Gurevich
Hydrocarbon-bearing reservoir rocks often contain a mixture of several fluids (e.g. oil, water and gas) in their pore space. If the pore fluids are immiscible and form pockets on a mesoscopic length scale (exceeding the typical pore size but still small compared to the seismic wavelength), the fluid saturation is referred to as patchy saturation. Elastic waves travelling through such a rock exhibit characteristic frequency-dependent attenuation and velocity dispersion, and these dynamic effects are believed to contribute significantly to the overall character of the seismic wavefield. We numerically simulate wave propagation in a partially saturated rock model containing water with gas inclusions and extract attenuation and dispersion from synthetic seismograms. The results are compared to a theory of frequency-dependent attenuation and velocity dispersion in partially saturated media, and our numerical results are in reasonable agreement with the theoretical predictions. Furthermore, we are able to accurately infer the size of the patches from the extracted frequency-dependent attenuation, which underlines the possibility that fluid-patch size can be estimated from seismic data.
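The low- and high-frequency end members of patchy saturation are often illustrated with standard Gassmann fluid substitution: a Wood (fine-scale) fluid mix for uniform saturation versus a Gassmann-Hill harmonic average of the saturated P-wave moduli for patchy saturation. A sketch of these textbook rock-physics bounds, with illustrative parameters and not the paper's numerical experiment:

```python
import numpy as np

def gassmann_k(k_dry, k_min, k_fl, phi):
    """Gassmann fluid substitution: saturated bulk modulus."""
    b = 1.0 - k_dry / k_min
    return k_dry + b ** 2 / (phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2)

def vp_bounds(sw, k_dry, g_dry, k_min, k_w, k_g, phi, rho):
    """P-velocity for uniform saturation (Gassmann with a Wood fluid
    mix) versus patchy saturation (Gassmann-Hill: harmonic average of
    the P-wave moduli of the water- and gas-saturated rock)."""
    m = lambda k: k + 4.0 * g_dry / 3.0                   # P-wave modulus
    k_wood = 1.0 / (sw / k_w + (1.0 - sw) / k_g)          # fine-scale fluid mix
    m_uniform = m(gassmann_k(k_dry, k_min, k_wood, phi))
    m_water = m(gassmann_k(k_dry, k_min, k_w, phi))
    m_gas = m(gassmann_k(k_dry, k_min, k_g, phi))
    m_patchy = 1.0 / (sw / m_water + (1.0 - sw) / m_gas)  # mesoscopic patches
    return np.sqrt(m_uniform / rho), np.sqrt(m_patchy / rho)
```

The gap between the two velocities at intermediate water saturation is precisely the dispersion band that the numerical experiments in this paper and the next probe as a function of frequency and patch size.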
Attenuation and Dispersion in Partially Saturated Porous Rock - Random vs Periodic Models
Authors: J. Toms, T. M. Müller, B. Gurevich and D. L. Johnson
Fluid heterogeneities in porous rocks often occur on a mesoscopic scale, that is, a scale greater than the pore scale but smaller than the wavelength scale. The presence of mesoscopic fluid patches may cause substantial phase velocity dispersion and attenuation as a result of wave-induced fluid flow, which arises when a passing wave induces spatial gradients in fluid pressure. The attenuation and dispersion arising from mesoscopic heterogeneities are affected by the spatial distribution of the saturating fluids. Here we compare theoretical models for attenuation and dispersion that assume a 3D random versus a periodic distribution of fluid heterogeneities. In particular, the periodic model proposed by Johnson (2001) is reinterpreted within the context of the random model. Good agreement between estimates of attenuation and phase velocity is obtained, showing that, with the right choice of parameters, Johnson's model can describe random as well as periodic distributions of fluid patches.
Relationships Among Ultrasonic Velocities and Attenuations of Carbonate Reservoir Rocks
Authors: A. I. Best, J. Sothcott, T. A. Johansen, P. Avseth and C. McCann
Ultrasonic P- and S-wave velocities (Vp, Vs) and attenuations (Qp-1, Qs-1), including azimuthal variations in Vs (wave propagation parallel to bedding), were measured on 12 core samples from a Russian carbonate reservoir at 40 MPa effective pressure using a laboratory pulse-echo system. While relationships between velocities, attenuations, porosity and permeability do not reveal any significant features, cross-plots of the ratios Vp/Vs and Qp-1/Qs-1 allow some categorisation of these carbonate rocks in terms of porosity and permeability ranges. In particular, low-porosity (< 5%), low-permeability (< 0.1 mD) carbonate rocks plot in a separate group from intermediate-porosity and -permeability carbonate rocks. The results show the value of combined velocity and attenuation datasets for reservoir characterisation.
Complex Fault Network Generation Using a Fused Hierarchy
Authors: K. S. Hoffman, J. W. Neave and E. H. Nilsen
Fault network modeling of complexly faulted structures, those containing hundreds if not thousands of faults, can be an extremely difficult and time-consuming process. A variety of methods exist to create the appropriate relationships between faults, such as truncations, crossings, or offsets due to younger episodes of faulting, but each has limitations that make working in multi-hundred-fault areas unwieldy. The fused fault block technique removes the limitations of previous fault modeling techniques and provides a robust, repeatable fault network builder. This new technique expands on the concept of a binary fault tree but allows compound truncations that are impossible with a strict binary tree approach. The fault network is the framework for subsequent modeling, whether stratigraphic, facies, or petrophysical, and can be used to generate a grid suitable for reservoir simulation.
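Compound truncations amount to letting one fault be cut by several others, which turns the binary fault tree into a directed acyclic graph. A toy sketch of such a structure, with hypothetical names and no relation to the fused-fault-block implementation itself; it assumes the truncation relationships contain no cycles:

```python
from dataclasses import dataclass, field

@dataclass
class Fault:
    name: str
    # faults that cut (truncate) this one; more than one is a compound
    # truncation, which a strict binary tree cannot represent
    truncated_by: list = field(default_factory=list)

def build_order(faults):
    """Return a processing order in which every fault comes after all
    faults that truncate it (topological sort of the truncation DAG)."""
    order, seen = [], set()
    def visit(f):
        if f.name in seen:
            return
        seen.add(f.name)
        for parent in f.truncated_by:
            visit(parent)
        order.append(f.name)
    for f in faults:
        visit(f)
    return order
```

Building geometry in such an order guarantees that, when a fault surface is clipped, all the surfaces that clip it already exist.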
Discrete Element Modelling for Coupling Geomechanics and Seismics in Reservoir Monitoring
Authors: H. T. I. Alassi and R. M. Holt
The discrete element method (DEM) is used as a tool for modeling seismic wave propagation as well as the stresses and strains associated with reservoir depletion. The advantage of DEM lies in its ability to model strain localization and the evolution of fractures, which can be directly linked to time-lapse seismics and micro-seismic events. For this purpose, DEM is first tested to verify its ability to model wave propagation. A simplified 2D synthetic model of a typical North Sea field is then constructed, and the simulated geomechanical response to reservoir depletion is compared to that obtained with the finite element method. Finally, a simple 4D seismic profile is created for the reservoir top.
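The wave-propagation test mentioned here can be illustrated in one dimension: a chain of equal masses coupled by linear springs, stepped explicitly in time, is the simplest discrete-element wave model. A sketch with illustrative parameters, not the paper's 2D model:

```python
import numpy as np

def propagate(n=400, steps=300, k=1.0, m=1.0, dt=0.1):
    """Pulse propagation along a 1D chain of equal masses coupled by
    linear springs: a minimal discrete-element wave model, integrated
    with the explicit symplectic Euler scheme."""
    u = np.zeros(n)              # particle displacements
    v = np.zeros(n)              # particle velocities
    u[n // 2] = 1.0              # initial displacement pulse at the centre
    for _ in range(steps):
        f = np.zeros(n)
        f[1:] += k * (u[:-1] - u[1:])    # spring force from left neighbour
        f[:-1] += k * (u[1:] - u[:-1])   # spring force from right neighbour
        v += f / m * dt
        u += v * dt
    return u
```

The pulse splits and travels outwards at the chain's sound speed sqrt(k/m) cells per unit time; in 2D/3D DEM the same contact-force update runs over an unstructured particle packing, which is what allows localization and fracturing.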
Improving Seismic Interpretation Integration in Reservoir Model
Authors: C. Brunel and T. Modiano
Building a reservoir model from seismic interpretation can be very time-consuming and, without a specific methodology, can produce incorrect or poor-quality results. The limited resolution of seismic data relative to the level of reservoir detail required, incomplete horizon or fault picking, and 3D structural incoherencies can prevent the construction of structural surfaces suitable for reservoir modeling. In addition, geomodeling software limitations must be taken into account to ensure that the critical structural heterogeneities identified from seismic interpretation are ranked and adapted for appropriate modeling. This paper proposes a methodological workflow that delivers representative structural surfaces ready for reservoir modeling, and shows how artifacts, incoherencies and software limitations can be corrected. All specialists (geophysicist, geologist and reservoir engineer) must work together early in the process to define the structural modeling strategy and choices in agreement with structural, sedimentological and dynamic criteria. Such a workflow ensures a better description of the essential heterogeneities. In addition, because the reservoir grid remains consistent with the seismic data, lithoseismic cubes can be used to constrain the facies and petrophysical modeling.
Integrating Geology and Depth Imaging in a Mature Overthrust Area - A Case History
Authors: W. Ritchie, M. Popovici and M. Fliedner

Integrated Prestack Depth Migration/Inversion and Simulated Annealing Optimization for Structural Model Building
Authors: S. Operto, A. Ribodetti, W. Agudelo and J. Virieux
We present a depth-domain processing flow based on iterative ray+Born migration/inversion and Very Fast Simulated Annealing (VFSA) inversion. The quantitative migrated image computed by ray+Born inversion is used as input to the VFSA inversion to build a structural velocity model. The aim of the VFSA inversion is to remove limited-bandwidth effects in the migrated image resulting from the limited bandwidth of the source and the limited aperture coverage. The inputs of the VFSA inversion are velocity profiles of the migrated images, and the output models are the corresponding structural velocity profiles. The forward problem associated with the VFSA inversion is approximated by a time-domain convolution with the source wavelet, which makes a global exploration of the model space possible.
The flow is assessed on the Marmousi model. We computed nine iterations of single-arrival ray+Born modelling/inversion, which allowed us to derive an accurate true-amplitude migrated image. We then applied the VFSA inversion using as input all the vertical profiles of both the true perturbation model (obtained by band-pass filtering the Marmousi model) and the ray+Born migrated image. The output is a structural velocity model that mimics the true Marmousi model.

Automating the Velocity Building Process
Authors: D. Bevc, M. M. Fliedner and J. VanderKwaak
Whether refining seismic images to evaluate opportunities in mature areas and exploit the maximum resource, or exploring in frontier areas, determining an accurate velocity model within the turnaround time constraints of reservoir management and exploration timeframes is critical. Speed, robustness, and accuracy are equally important. Seismic imaging has made great strides in recent years with the advent of so-called wave-equation migration imaging methods. Given the correct acoustic propagation velocity for seismic waves in the Earth's subsurface, these wave-equation methods yield the highest-resolution and most accurate images of the earth. However, determining the correct acoustic propagation velocity can be an elusive, time-consuming, and costly procedure. We describe an approach that both shortens the process and makes it less biased and more accurate. The process is shortened by automating the labor-intensive portion of the workflow, and made less biased, more robust and more accurate by using much more data than is commonly used in manual picking approaches.

Resolving Small-Scale Heterogeneities with Dense Non-Hyperbolic Seismic Tomography
Authors: J. Brittan and J. Yuan
In seismic reflection tomography, the velocity model of the subsurface is updated by back-projecting travel-time residuals along ray-paths. The travel-time residuals are picked from the seismic data itself, and the methodology used to gather these picks is a fundamental part of any velocity inversion workflow. In particular, the density at which the residuals are represented in the four-dimensional data space (inline, crossline, offset and depth/time) has a significant effect on the precision of the velocity updates output from the tomographic inversion. Dense hyperbolic (or parabolic) fitting means that the residuals are finely sampled in the data space but does not necessarily represent their true values with great accuracy. Dense non-hyperbolic fitting offers a similarly fine sampling but with greater adherence to the true residual values. These two methodologies have been compared and contrasted on a complex synthetic dataset. The comparison shows that dense non-hyperbolic tomography offers greater potential for resolving small-scale velocity heterogeneities in the Earth.

3D CRS-Based Velocity Model Building - An Accurate and Cost-Effective Approach
Authors: D. Della Moretta, T. Kluever and P. Marchetti
The main goal of the 3D CRS (Common Reflection Surface) stack is the improvement of the image with respect to the NMO/DMO processing route. However, the set of parameters defining the CRS stacking surface is also a valuable source of information on subsurface properties. In 3D, the CRS second-order traveltime trajectory is defined by eight parameters, which can be divided into three groups: KN and KNIP (each a 2x2 symmetric matrix with three independent parameters) and the angles (azimuth and emergence) describing the normal-ray emergence. KN is mainly related to the geometry of the subsurface reflectors, and KNIP to velocity. The NIP-wave tomography method described in Duveneck (2004) uses KNIP and the angles to reconstruct a smooth velocity model in depth, suited for PSDM and/or petrophysical studies. In this paper, the first application of the method to a real 3D dataset (from West Africa) is presented. The comparison between common-image gathers obtained from a reference velocity model and those obtained using NIP-wave tomography shows that a reliable velocity model can be reconstructed in a very cost-effective way directly from CRS processing.

The resolution of first-arrival traveltime tomography is limited by the size of the first Fresnel zone of each ray, the experimental setup and the structure studied itself. This leads to uneven ray coverage and locally varying resolution. Moreover, the sensitivity kernel of each ray induces high-frequency information even where the theoretical resolution is poor. Smoothing constraints are often added to the tomographic system to deal with these problems. We propose an alternative adaptive parametrization based on the second-generation wavelet transform, as introduced by Wim Sweldens.
We explain the principles and useful properties of the wavelet transform and show how we insert it in the tomography process. We then present a synthetic example showing the difficulties generated by rays in velocity model reconstruction, and compare the effects of classical Gaussian smoothing with those of our adaptive wavelet parametrization. We show that the wavelet transform allows better control of the resolution power depending on parameter location and that it is more adaptive than classical methods.
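For reference, the CRS stacking surface discussed above has a simple closed form. In 2D it reduces to three parameters, the emergence angle and the N- and NIP-wave curvatures; the 3D eight-parameter form replaces the scalar curvatures with 2x2 matrices. A sketch of the standard 2D hyperbolic traveltime, with illustrative parameter values:

```python
import numpy as np

def crs_traveltime(dx, h, t0, v0, alpha, k_n, k_nip):
    """2D hyperbolic CRS traveltime for midpoint displacement dx and
    half-offset h, given zero-offset time t0, near-surface velocity v0,
    emergence angle alpha, and N-/NIP-wave curvatures k_n, k_nip."""
    sa, ca = np.sin(alpha), np.cos(alpha)
    t_sq = (t0 + 2.0 * sa * dx / v0) ** 2 \
         + 2.0 * t0 * ca ** 2 / v0 * (k_n * dx ** 2 + k_nip * h ** 2)
    return np.sqrt(t_sq)
```

Scanning these parameters for maximum stack coherence is what produces the KNIP and angle fields that NIP-wave tomography then inverts for velocity.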