72nd EAGE Conference and Exhibition - Workshops and Fieldtrips
- Conference date: 14 Jun 2010 - 17 Jun 2010
- Location: Barcelona, Spain
- ISBN: 978-90-73781-87-0
- Published: 13 June 2010
1 - 100 of 105 results
Integrated interactive framework for the prestack interpretation arena
Authors: O. Kvamme Leirfall, S.-K. Foss, A. Osen, M. Rhodes and Ø. Skjæveland

Creating an arena for interactive collaboration between geology and geophysics has long been identified as a way to strengthen seismic exploration. The objective has always been to release the synergies of combining these two fields of expertise. The challenges of providing an arena capable of true interactivity have historically limited the options for real-time processing. Over the last few years there have been several publications on interactive migration, indicating that interactive processing is scalable within the current standards of production high-performance computing (HPC) configurations. In this article we present a scalable framework of visualization and interactive processing limited only by the combined available memory of an HPC installation and its high-speed interconnect. This, in addition to interpretation software and visualization capabilities, defines the hardware of an interactive prestack interpretation arena.
Optimizing interpretation knowledge with full azimuth information and data in a common canvas
Pre-stack seismic data structures are the result of a systematic and formulated reconstruction of the recorded seismic wavefield. Seismic processing and imaging specialists spend most of their time generating, refining, analyzing, and inverting these pre-stack data structures in order to attenuate noise, build velocity models, image the subsurface structure, search for direct hydrocarbon indicators and derive elastic properties. Seismic interpreters, on the other hand, spend most or all of their time performing operations on the stacked image. While modern interpretation systems are rich in data mining, visualization, and extraction procedures, these operations, and the information extracted from them, are limited by the dimensionality of the data and the loss of information suffered during the stacking process. To overcome these deficiencies, approximations in the form of limited-dimensional stacks over acquisition offset or subsurface angle are frequently used as part of the interpretation process.
Pre-Stack Pro: Scaleable Pre-Stack Computing on the Interpretation Desktop
Authors: B. Shea and F.-J. Pfreundt

…rated by in-house and contract processors, and are forced to rely on stacking to reduce data volumes to “interpretable” dimensions. Pre-Stack Pro, a new breed of software application, harnesses several high-performance computing (HPC) technologies to bring large-scale pre-stack computing to the interpreter’s desktop. By fully exploiting parallelism throughout the system, we simplify pre-stack analysis and deliver a system whose throughput scales efficiently with total hardware investment.
Waveform inversion from a poor starting model – using a residual ‘drip-feed’ strategy
Authors: N. K. Shah, M. R. Warner, L. Guasch, I. Stekl and A. P. Umpleby

We present a new waveform inversion scheme designed to avert the need for an accurate starting model and low-frequency content in the data – a necessary key step in making the technique work on a much wider range of exploration datasets and targets than it currently can. The scheme operates by preceding the inversion of the field data with inversion of intermediate target datasets, synthesised out of the curl-free (irrotational) part of the phase mismatch at the lowest useable frequency. We demonstrate its effectiveness over the corresponding conventional approach by inverting data from the Marmousi model with a 1-D starting model and a minimum frequency of 5 Hz.
Building anisotropic models for depth imaging: from imaging parameters to Earth model
Authors: A. Bakulin, O. Zdraveva, M. Woodward, K. Osypov, P. Fowler, D. Nichols and Y. Liu

The velocity model is the bridge that links our data with our images of the subsurface; our images can only be as good as our velocity models. Moving to “difficult oil” in sub-salt, sub-basalt and generally deeper targets, we can no longer afford the compromises of the simplistic models of the past, which lead to poor or absent image areas. To fully leverage the potential of new data types (wide azimuth, long offsets), we have to build realistic complexity into our models. To address these challenges the industry has adopted anisotropic earth models as a new standard (vertical and tilted transverse isotropy, or VTI and TTI). Incorporating anisotropy increases our ability to fit the data and image every single piece of it. However, growing expectations require not only focusing the image but also accurately positioning seismic images for drilling. While this is achievable with anisotropic models, it happens only when geology and data from boreholes are intimately incorporated into velocity model building from the very start.
Challenges of depth model building for Tilted TI media
Authors: P. M. Bakker and A. Stopin

In recent years, Shell has accepted anisotropic velocity model building and pre-stack depth migration as the norm for its seismic imaging. This has reduced the number of sidetracks to be drilled as a consequence of sub-optimal imaging or positioning, which is a clear demonstration of the impact of including anisotropy in the model building cycle. While VTI is a commonly accepted working model, with a workflow that is conceptually clear, this is less obvious for Tilted TI. Tilted TI may be the preferred model at the flanks of salt bodies and minibasins, where the sediments are strongly dipping. We demonstrate such a real data case from the Gulf of Mexico, where the application of Tilted TI simultaneously explained mis-ties of well markers and sub-optimal focusing of dipping reflectors; for a VTI model, the mis-ties and residual moveout were in conflict. Modelling experiments confirm that this can indeed be the case if the subsurface is Tilted TI. We also show that the availability of wide-azimuth data should be helpful in recognizing Tilted TI in the subsurface. Usually, anellipticity causes only subtle effects in focusing, considering the trade-off between Eta and moveout-related velocity, and in the Gulf of Mexico one generally finds Eta to be small. Therefore, model building is frequently restricted to vertically elliptic models, at least in the initial phase. Reflection tomography with such a model will already flatten residual moveout to a large extent. If a tilt angle is subsequently introduced into such a model, this affects the well-ties as well as the residual moveout in the common image gathers. A method is discussed to fit well markers in such a situation while preserving moveout.
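The Eta/moveout-velocity trade-off mentioned in this abstract follows from the standard Alkhalifah–Tsvankin parameterization of TI media. A minimal sketch, with illustrative parameter values that are not taken from the paper:

```python
import numpy as np

def eta(epsilon, delta):
    # Alkhalifah-Tsvankin anellipticity parameter for TI media
    return (epsilon - delta) / (1.0 + 2.0 * delta)

def vnmo(vp0, delta):
    # Short-spread NMO velocity over a horizontal reflector (VTI)
    return vp0 * np.sqrt(1.0 + 2.0 * delta)

# Illustrative values: near-elliptic anisotropy (epsilon close to delta)
# gives a small eta, as the abstract notes is typical for the Gulf of Mexico
print(eta(0.10, 0.08))      # small (~0.017): nearly elliptic
print(vnmo(2500.0, 0.08))   # NMO velocity in m/s
```

When eta is this small, a vertically elliptic model (eta = 0) already fits most of the moveout, which is why the authors start tomography from such a model.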
Migration velocity analysis using a transversely isotropic medium with tilt normal to the reflector dip
Authors: T. Alkhalifah and P. Sava

A transversely isotropic model in which the tilt is constrained to be normal to the dip (DTI model) allows for simplifications in the imaging and velocity model building efforts compared to a general TTI model. Though this model cannot always be represented physically, as in the case of conflicting dips, it handles all dips under the assumption of a symmetry axis normal to the dip, and provides a process in which areas that meet this condition are handled properly. We use efficient downward continuation algorithms that utilize the reflection features of such a model. Phase-shift migration can easily be extended to approximately handle lateral inhomogeneity, because unlike the general TTI case the DTI model reduces to VTI for zero dip. We also equip these continuation algorithms with tools that expose inaccuracies in the velocity. We test this model on synthetic data of general TTI nature and show its resilience, even coping with complex models such as the recently released anisotropic BP model.
Extended common-image-point gathers for anisotropic wave-equation migration
Authors: P. Sava and T. Alkhalifah

In regions characterized by complex subsurface structure, wave-equation depth migration is a powerful tool for accurately imaging the earth’s interior. The quality of the final image greatly depends on the quality of the model, which includes anisotropy parameters (Gray et al., 2001). In particular, it is important to construct subsurface velocity models using techniques that are consistent with the methods used for imaging. Generally speaking, there are two possible strategies for velocity estimation from surface seismic data in the context of wavefield-based imaging (Sava et al., 2010). One possibility is to formulate an objective function in the data space, prior to migration, by matching the recorded data with simulated data; techniques in this category are known as waveform inversion. Another possibility is to formulate an objective function in the image space, after migration, by measuring and correcting image features that indicate model inaccuracies; techniques in this category are known as wave-equation migration velocity analysis (MVA).
Extended common-image-point gathers for anisotropic imaging with blended sources
In regions characterized by complex geology, the accuracy of imaging is controlled by the quality of the Earth model used to simulate wave propagation in the subsurface (Gray et al., 2001). Thus, accurate model building is a critical prerequisite for imaging the interior of the Earth. This requirement is even more stringent in regions characterized by strong anisotropy. Furthermore, it is important to construct subsurface velocity models using techniques that are consistent with the methods used for imaging. As reported in the recent literature, in the context of wavefield-based imaging there are two main strategies for velocity estimation from surface seismic data: data-space methods, which operate by matching the recorded data with simulated data, and image-space methods, which operate by correcting image features that indicate model inaccuracies (Sava et al., 2010).
Imaging of VTI media from wide-aperture data by frequency-domain full-waveform inversion
Authors: Y. Gholami, A. Ribodetti, R. Brossier, S. Operto and J. Virieux

Seismic imaging of anisotropic media is one of the most challenging problems in exploration geophysics because of the possible coupling between the different anisotropic parameters. In this study, we develop a frequency-domain full-waveform inversion (FWI) method for imaging 2D visco-elastic VTI media from wide-aperture data. The forward problem relies on a finite-element discontinuous Galerkin method on unstructured triangular meshes that allows for accurate seismic modeling in complex media with reflectors of arbitrary shape. The inversion relies on a quasi-Newton algorithm which allows for proper scaling of the misfit-function gradients associated with different parameter classes. The model parameters are either the P and S wave speeds on the symmetry axis and Thomsen’s parameters ε and δ, or the stiffness coefficients c11, c33, c13 and c44. We first present a review of the diffraction patterns of each anisotropic parameter class to assess the best parameterization of the inverse problem. Second, we present some preliminary examples of FWI with a simple synthetic model. The results highlight the coupling between the parameters and the difficulty of imaging the ε parameter from surface data.
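Thomsen's ε and δ enter VTI imaging through the weak-anisotropy approximation of the P-wave phase velocity. A minimal sketch of that relation, with hypothetical model values chosen purely for illustration:

```python
import numpy as np

def vp_weak_vti(theta, vp0, eps, delta):
    # Thomsen (1986) weak-anisotropy P-wave phase velocity in a VTI medium;
    # theta is the phase angle from the vertical symmetry axis (radians)
    s, c = np.sin(theta), np.cos(theta)
    return vp0 * (1.0 + delta * s**2 * c**2 + eps * s**4)

# Hypothetical values for illustration only
vp0, eps, delta = 3000.0, 0.2, 0.1
print(vp_weak_vti(0.0, vp0, eps, delta))        # vertical propagation: vp0
print(vp_weak_vti(np.pi / 2, vp0, eps, delta))  # horizontal: approx vp0 * (1 + eps)
```

The formula makes the coupling visible: ε controls the horizontal velocity while δ controls near-vertical behavior, so surface data with limited aperture constrain the two very differently.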
Numerical analysis of parameter sensitivity in 3D TTI tomography
We investigate the resolution of anisotropic model parameters in ray-based TTI reflection tomography. Our approach is to perform numerical experiments using 3D acoustic TTI finite-difference model data. In the first part of this presentation, we introduce the 3D TTI model used for this study. The size of this model is x=20 km, y=17.5 km and z=10 km. Among its important features are steeply dipping layers (dips up to 70 degrees) exhibiting relatively strong anisotropy with a structurally conformable orientation of the anisotropic symmetry axis. The model also contains salt bodies that allow us to investigate the effects of TTI anisotropy in overburden sediments on the imaging of subsalt events.
Gaining insights about imaging structural uncertainty: TTI 3-D synthetic case study
Authors: K. Osypov, B. Nolte, U. Albertin, M. O’Briain, Y. Yang, D. Nichols, M. Woodward and F. Qiao

Osypov et al. (SEG 2008, EAGE 2010) introduced an uncertainty analysis method for anisotropic migration velocity analysis which generates model samples from the posterior distribution by using eigendecomposition of anisotropic tomography operators and null-space projection. A realistic synthetic model is an important object for validating the uncertainty method because the ground truth is known and various “what if” scenarios can be played out. This study analyzes a set of such scenarios for the BP TTI 3-D model to gain insight into the ambiguity of anisotropic velocity parameter estimation and the scale dependency of uncertainty. The analysis was done using TTI offset ray tracing of the model for an OBC mirror geometry.
Azimuthal anisotropy? The time and depth imaging points of view: an imaging case history
The superior imaging capabilities of wide-azimuth (WAZ) acquisition are now well established. Processing such acquisitions requires adaptation of the processing workflows and tools: for example, the recorded azimuthal information must be kept, and the tools must be able to deal with the increased amount of data. Concerning velocity model building, the question remains whether azimuthal anisotropy needs to be introduced. In this paper we address this point with a time and depth imaging case study for a high-density wide-azimuth (WAZ) land surface acquisition. We show on this dataset that the azimuthally varying residual moveout observed on time-migrated common image gathers (CIG) definitely disappears on the depth-migrated CIGs (in both cases no azimuthal anisotropy is introduced in the velocity model). This illustration highlights the limits of the assumptions of time imaging, thus promoting the use of depth imaging when processing high-density WAZ data, even in the context of mild geological complexity.
Modeling overburden heterogeneity in terms of Vp and TI for PSDM, Williston Basin, U.S.A.
Authors: G. M. Johnson and J. Dorsey

We present a Williston Basin case study in which, using a new anisotropic depth migration velocity analysis workflow and available well data, Vp and TI parameters in the overburden are modeled and validated in sufficient detail to have a direct impact on the quality of the seismic image at depth. Comparison with isotropic modeling demonstrates the need for highly detailed spatial and vertical modeling of overburden TI heterogeneity prior to azimuthal attribute analysis and other reservoir characterization flows.
TTI model interpretation when anisotropic migration velocity analysis is impossible: A case history from the Colombian Llanos Basin
Authors: R. Vestrum and I. C. Florez

The objective of the anisotropic depth imaging was to obtain more accurate positioning of the events for an appropriate well trajectory. The original well location was determined from the isotropic depth migration, and it was clear early in the drilling cycle that there were lateral-position errors on the seismic image. The TTI anisotropic depth migration project ran during drilling to refine the well trajectory. We used a collaborative, geologically constrained approach that integrates all available geologic information into the interpretation of the seismic velocity model. This area has interbedded siliciclastic rocks with high dips and vertical and lateral velocity contrasts, giving considerable lateral movement in the depth images when we correct for seismic anisotropy and lateral-velocity heterogeneity. The position of the structure and the dips on the final depth image were confirmed by two wells drilled in the area.
High resolution surveying - Historical perspectives - how and why we got here
By: A. B. Reid

High-resolution and high-sensitivity potential fields surveying has a long pedigree, going back to the early recognition that sediments and hydrocarbon reduction effects could have observable magnetic signatures, and that very high resolution views of basement geology could be obtained with close-spaced survey lines. Since then, close-spaced gravity and magnetic surveying has lived up to its early promise, delineating sedimentary and basement structure with extraordinary effectiveness.
HiRES airborne geophysical surveys in the UK: the Anglesey magnetic perspective
Authors: D. Beamish and J. White

This paper provides a comparison of vintage, UK national-scale aeromagnetic data and modern, high-resolution airborne data using a case study across the island of Anglesey, NW Wales. Deeper responses associated with magnetic basement are masked by a near-surface Palaeogene dyke swarm. In order to extract both shallow and deeper basement features, the two data sets are processed in a consistent manner using azimuthal and spectral filtering procedures. A joint assessment is then carried out on the resolution capabilities of both data sets using established edge and depth location methodologies.
An equivalent source approach to the removal of cultural noise from HRAM survey data: examples from Midland region of central Ireland and Haynesville, Louisiana, United States
Authors: A. Salem, K. Lei, C. Green, D. Fairhead and G. Stanley

High-resolution aeromagnetic surveys are commonly flown close to the ground surface (~80 m) and sample the magnetic field at about 10 m spacing. Such surveys are thus susceptible to magnetic anomalies resulting from populated and/or industrial areas (rigs, pipelines, etc.). These anomalies are generally called “cultural noise” and need to be removed from the survey data if one wishes to carry out accurate microlevelling to identify and interpret subtle anomalies resulting from subsurface geological structures. Conventional algorithms for cultural noise removal tend to be based on Fast Fourier Transform (FFT) operations, either on their own or together with identification and removal of cultural noise signals using either manual or nonlinear filtering methods. These algorithms can have difficulty interpolating across the edited sections of a profile (i.e., data gaps where cultural noise has been removed) and can introduce artificial anomalies. For these reasons, we have developed a semiautomated method that both identifies sections of profile data containing cultural noise and uses the equivalent source approach to recover the magnetic responses of subtle geological anomalies and interpolate their field across the cultural noise gaps in the profile. Theoretical examples of combined subtle magnetic anomalies and cultural noise are used to test the effectiveness of the proposed method, which is shown to provide results closer to the original magnetic data without the cultural noise. We demonstrate the practical utility of the approach using high-resolution aeromagnetic data from Ireland and the United States.
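The equivalent-source idea can be sketched in one dimension: fit fictitious sources below the profile to the culture-free samples, then predict the field across the edited gap from those sources. This is only a toy sketch with a simplified point-source kernel and invented geometry, not the authors' algorithm:

```python
import numpy as np

def kernel(x_obs, x_src, depth):
    # Simplified vertical-field-like point-source response (~depth / r^3);
    # a stand-in for the true dipole kernel used in practice
    dx = x_obs[:, None] - x_src[None, :]
    return depth / (dx**2 + depth**2) ** 1.5

x = np.linspace(0.0, 1000.0, 101)        # 10 m sample spacing, as quoted above
x_src = np.linspace(0.0, 1000.0, 26)     # equivalent-source layer at 100 m depth

# Synthetic "geology": one deeper source whose smooth field we want to recover
field = 50.0 * kernel(x, np.array([400.0]), 150.0).ravel()

keep = (x < 450.0) | (x > 600.0)         # samples outside the cultural-noise gap
coef, *_ = np.linalg.lstsq(kernel(x[keep], x_src, 100.0), field[keep], rcond=1e-6)

# Field predicted from the fitted sources, including across the gap
predicted = kernel(x, x_src, 100.0) @ coef
gap_error = np.max(np.abs(predicted - field)[~keep])
```

Because the equivalent layer sits above the true source, its fields are smoother than anything in the data, so the least-squares fit interpolates stably across the gap instead of ringing the way an FFT-based fill can.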
Aeromagnetic mapping of Norway and adjacent ocean areas - from the Quaternary to the Precambrian
Authors: O. Olesen, M. Brönner, J. Ebbing, L. Gernigon, J. Koziel, T. Lauritsen, R. Myklebust and S. Usov

Aeromagnetic surveys have in the past mainly been used for mapping depths to magnetic basement and igneous units in sedimentary basins. Since 1994, NGU has acquired high-resolution aeromagnetic surveys and has also revealed the existence of significant magnetic anomalies arising from sedimentary layers. We have recognized that susceptibility measurements on core samples, hand specimens and in situ on bedrock exposures are essential for the interpretation of these anomalies. Petrophysical data (magnetic susceptibility and remanence) from 40,000 rock samples from the Norwegian mainland and offshore wells and drill holes have been acquired to constrain the interpretation of the aeromagnetic data. Sub-cropping Late Paleozoic to Tertiary sedimentary units along the Trøndelag-Nordland coast produce a very distinct anomaly pattern. The asymmetry of the anomalies, with a steep gradient and a negative anomaly to the east and a more gentle gradient to the west, relates the anomalies to strata gently dipping westward. The susceptibility measurements on Sintef’s cores indicate that these coast-parallel anomalies are caused by 1) alternating beds of sandstone and claystone/siltstone/mudstone [mean susceptibility 0.00013 and 0.00025 SI], 2) siderite-cemented sedimentary rocks [mean susceptibility 0.00135 SI], and 3) sedimentary units containing detrital Fe-Ti oxides [susceptibility 0.00100–0.01000 SI]. Negative anomalies are caused by low-magnetic gypsum, anhydrite, salt or coal [mean susceptibility 0.00007 SI]. Recent aeromagnetic surveys in the Barents Sea have also revealed distinct negative magnetic anomalies clearly associated with salt diapirs.
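For context, the susceptibilities quoted above translate into induced magnetizations via M = χH. A back-of-envelope sketch, assuming (hypothetically) a 50,000 nT ambient field; the field value is an assumption, the susceptibilities are those quoted in the abstract:

```python
import numpy as np

MU0 = 4.0e-7 * np.pi     # vacuum permeability, T*m/A

B = 50_000e-9            # assumed ambient field in tesla (not from the paper)
H = B / MU0              # ambient field strength in A/m (~39.8)

# Susceptibility values quoted in the abstract (SI units)
for unit, chi in [("alternating sandstone/mudstone beds", 0.00025),
                  ("siderite-cemented rocks", 0.00135),
                  ("detrital Fe-Ti oxide units (upper bound)", 0.01000)]:
    print(f"{unit}: induced M = {chi * H:.4f} A/m")
```

Even the strongest sedimentary units here are two orders of magnitude weaker than typical igneous magnetizations, which is why high-resolution, low-altitude surveying is needed to see them at all.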
Full tensor magnetic gradiometry processing and interpretation developments
Authors: D. Fitzgerald, H. Holstein and S. Letts

In recent years, Anglo/De Beers have championed the development of a Full Tensor Magnetic Gradient (FTGM) instrument from IPHT. Multiple surveys of this quantity have been made in Southern Africa. With the advent of this new potential-field full tensor gradient instrumentation, new methods have been developed to de-noise and process these curvature gradients. Traditional Fourier-domain methods and minimum least-squares residuals of the linear differential tensor relationships have been adapted, leading to levelling, gridding and grid filtering innovations. The result is a full tensor grid representation of the curvature gradients that is coherent and compliant with the physics at all points in the grid; all of the observed data is thus honoured in the tensor grid. Isolating the signal and then refining it to ensure there are no distortions has dominated efforts to date. Superior anomaly interpretation regarding the full magnetic history and inferences can then be made. A survey from the Groblersdal platinum mine is shown in the context of the structural geology interpretation. In particular, the dolerite dykes and faults are seen; the Hornfels footwall contact is very strong; the phase map traces the Platreef contact. The Upper Zone magnetites are more pervasive, and fine layered structure is revealed there. None of the granites can be seen. A 3D geology model is in preparation, and the observed FTGM signal will be compared to the predicted thin-body responses from the model. There is more directly inferable structural geology in this tensor signal than can be found in a conventional TMI signal.
Sensitivity of gravity and magnetic data to basalt and subbasalt structures – A case study from the Møre margin, mid-Norway
Authors: J. Ebbing, M. Aarset, R. F. Reynisson and T. Vattekar

This study investigates the feasibility of using gravity and magnetic inversion to image basalts and sub-basalt structures in sedimentary basins affected by volcanism. Multiple gravity and magnetic data sets, flown at different altitudes and with different line spacings, are available for the Møre margin, mid-Norway. Previously, a 3D model was constructed for the study area based on a wealth of seismic and petrophysical information, but the resolution of the regional 3D model prevented detailed imaging of the basalts and sub-basaltic structures. While it is difficult to identify the lateral extent of the volcanic features (at a depth of 6 km) in the gravity and magnetic data, as well as in Full Tensor Gravity (FTG) data, the sub-basaltic basement architecture can be identified. The gravity gradients provide valuable information on the vertical and lateral extent below the basalts, despite the small density contrast with the surroundings, if the basalts are located at depths of less than 4 km. In such cases, inversion of the gravity and magnetic residuals gives better insight into the extent and thickness of the basaltic and sub-basaltic layers.
When resolution is the key – a review of modern airborne magnetic, radiometric and gravity survey
While standard airborne magnetic, radiometric and gravity technologies have not undergone significant changes over the last decade, each technique has advanced incrementally. Most of the improvements are related to better acquisition systems which allow higher-speed sampling, improved processing techniques, and, more often than not, the choice of aircraft platform, which can benefit lower and slower survey flying. Using a series of example datasets, we review the current “state of the art” in each technology, demonstrating what can be achieved from a resolution perspective and ultimately setting a benchmark for the sort of data quality industry should demand when resolution is key to project success.
Qualitative and quantitative interpretation of airborne gravity and HRAM to target structures for prospect-scale 2D seismic: Southern Maranon Basin, Peru
Authors: H. D. Geiger, E. Velasquez, J. F. Ceron and D. M. Dolberg

In conjunction with our partners Ecopetrol, Talisman flew a 12,220 km2 aerogravity/aeromagnetic survey (Sander) in Blocks 134/158, Southern Maranon Basin, Peru in mid-2008. The primary objective of the survey was to reduce exploration cycle time and costs by eliminating a round of regional reconnaissance 2D seismic, moving directly to prospect-scale 2D seismic over the most promising anomalies. The survey coverage included areas to the north and west of the area of interest with existing structures defined on 2D seismic, some with well control. 2D and simple 3D modelling studies were used to evaluate airborne gravity vs airborne gravity gradiometry. A careful analysis of expected signal-to-noise performance over the wavelengths of interest suggested that both methods would perform well. Airborne gravity was selected as the most cost-effective, and for its reduced contamination of signal from near-surface geologic noise. Qualitative interpretation was based primarily on bandpass-filtered Bouguer anomaly maps. In areas with 2D seismic, there was a strong correlation between structural closures mapped from seismic and anomalies identified on specific filter products. The spatial scale-length of structures appears to be consistent and extends into the area of interest where there is no previous subsurface mapping. The revised structural interpretation agrees with preferred tectonic models. Existing structures on key seismic lines were modelled to determine the 2D anomalous Bouguer gravity response, and compared with profiles extracted from filtered products. Anomaly amplitudes and corresponding contour intervals were chosen as proxies for prospect areas. The prospect areas associated with mapped anomaly contours were then used to design a prospect-scale 2D seismic program. Constrained 3D gravity inversion has since confirmed the prospect locations and scales. The dataset was decimated and reprocessed to estimate signal-to-noise ratios, confirming the earlier signal-to-noise estimates.
3D and 4D high resolution microgravity – case stories from mining and geoengineering
By: J. Mrlina

Gravimetry has a long tradition in various geological investigations. For regional exploration projects an accuracy of 0.2–0.5 mGal was sufficient for decades. Detailed surveys, on the other hand, and especially microgravity, require not only high resolution of the gravimeter’s readings, but also high accuracy/repeatability in the range of 0.00X mGal (X μGal). Therefore, we also focused on the reduction of disturbing effects affecting microgravity measurements (wind, sun, rain, vibrations, etc.), but mainly on data processing. We developed special software for accurate determination of gravimeter system drift from field observations. This technique allows a significant reduction of the repeatability error. As the presented case stories span about three decades, the resolution of gravimeters in relation to expected and observed signals will be discussed. Case histories will be presented on the detection of voids that often exist in mines as a result of natural processes or unknown historical mining, including the monitoring of surface collapse risk. Another example, microgravity monitoring of stress evolution in a deep coal mine, will be shown. Likewise, the monitoring of groundwater variations in an open-pit mine waste dump will demonstrate the efficiency of repeated microgravity in hazard control: microgravity indicated a critical increase of groundwater before a damaging slide of the waste dump mass. A special case is represented by 4D gravity in oil EOR. The results suggest that high-resolution microgravity may contribute, among other methods, to mine monitoring systems aimed at risk mitigation.
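Drift determination of the kind the author's software performs can be illustrated with a minimal base-station loop. The readings and times below are invented for illustration; real processing also handles tides, tares and nonlinear drift:

```python
import numpy as np

# Repeated base-station readings over one field loop (hypothetical values)
t_base = np.array([0.0, 2.0, 4.5])               # hours since loop start
g_base = np.array([100.012, 100.019, 100.027])   # relative gravity, mGal

# Least-squares linear drift; here a few microGal per hour
rate, intercept = np.polyfit(t_base, g_base, 1)

# Remove the drift from field-station readings taken during the loop
t_obs = np.array([0.5, 1.5, 3.0])
g_obs = np.array([99.850, 99.910, 99.905])
g_corrected = g_obs - rate * (t_obs - t_base[0])
```

Uncorrected, a drift of this size would swamp the 0.00X mGal repeatability target within a single loop, which is why base-station reoccupation and careful drift modelling are central to microgravity work.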
High resolution AIRGrav surveys: Advances in hydrocarbon exploration, mineral exploration and geodetic applications
Sander Geophysics has now operated its AIRGrav airborne gravity system for over ten years. More than 2,500,000 line-km of AIRGrav surveys have been flown, predominantly for hydrocarbon exploration. Recent advances in SGL's gravity data processing, involving advanced analysis of system dynamics and improved filtering, help to further reduce system noise and allow the generation of high-quality, low-noise gravity data through a wider range of survey conditions than was previously possible. In the past year, a number of AIRGrav projects with innovative survey design parameters have been successfully completed. Mineral exploration projects have been flown using a helicopter at extremely slow acquisition speed (30 knots) combined with tight (50 m) line spacing to produce data sets with higher resolution and higher accuracy. At the other end of the spectrum, the AIRGrav system produced excellent results when installed in a NASA DC-8 flown from 500 to 11,000 m altitude at 300 knots, covering approximately 9,000 km in 12-hour flights, with differential GPS baselines as long as 3,000 km. New data processing techniques have allowed the extraction of the horizontal gravity components from the airborne gravity data in addition to the traditionally used scalar gravity measurement.
The Value of Integration of Gravity Gradiometry with Seismic and Well Data – an example from a frontal thrust zone of the UAE-Oman Fold Belt
Authors: J. A. Protacio, J. Watson, F. van Kleef and D. Jackson

The east of the Emirate of Dubai is dominated by the geologically complex western thrust front of the northern Omani Mountains. This deformation front is the boundary between the western foredeep basin and the eastern Omani fold-and-thrust belt. The complex geology makes conventional exploration challenging. The reservoir (Thamama Group) structures are thrusted anticlines, with the overlying Tertiary units showing large-scale thrusting as well. The Lower Cretaceous Thamama Group limestone is one of the main hydrocarbon reservoirs in the Middle East. It forms a major hydrocarbon-producing reservoir in the U.A.E., Iraq, Bahrain and Oman and has high hydrocarbon potential in southeast Iraq, offshore Oman and offshore northeast Saudi Arabia. Due to the significant density contrast between the reservoir and the overlying sediments, Margham Dubai Establishment commissioned an airborne gravity gradiometry (GG) survey to improve the confidence in the top-reservoir location and to aid ongoing exploration activity. GG, magnetic and LiDAR data were acquired and used in an integrated interpretation with existing seismic and well data. The integration of these data allowed a better understanding of the thrust linkages at different levels, and better insight into the interaction of thrusts, backthrusts, detachment levels, imbricated zones, and lateral ramps. The survey was designed around the airborne GG technology known as the Full Tensor Gravity Gradiometer (FTG). GG measures the rate of change of the Earth’s gravitational field, while conventional gravity (CG) measures the vertical acceleration. Acquired from an aircraft, GG has a strategic advantage over CG due to the resolution limitation imposed by the Differential Global Positioning System (DGPS): the DGPS limits airborne gravity resolution to wavelengths > ~4,000 m, while GG can resolve wavelengths > ~300 m. The shorter wavelengths are crucial to accurately model the geology above the reservoir. In complex geology, multiple lithological units contribute to the GG signal, so to map the reservoir it is vital to isolate its response from the overlying geology. The high-resolution GG data facilitated an accurate investigation of the 3D Shallow Earth Model (SEM). Modelling of seismic sections constrained by GG and magnetic data exploits their complementary nature: seismic data respond well to horizontal discontinuities, while potential field data respond to vertical discontinuities. This produces a geologically realistic SEM. The GG signal forward-calculated from the 3D SEM was subtracted from the observed signal, and the SEM-corrected data were then used to interpret the Thamama reservoir structure.
-
-
-
Exploration Play Models and FTG Gravity data
Authors C. A. Murphy and J. Dickinson
Exploration play models are employed by mineral and hydrocarbon exploration companies to establish and help explain the geological setting of their targeted resource. Such models are routinely enhanced through the deployment of geophysical technologies to improve the understanding of the geological setting and the size, shape and depth of the target. FTG Gravity data offer an additional layer of information for assessing many exploration play models, from salt, fault-block closure and igneous intrusion models to more mainstream fault mapping. Many such models have been confirmed or refined through direct incorporation of FTG data into the exploration workflow, a testament to the acceptance of this innovative technology by leading exploration companies. This paper will present data examples demonstrating the usefulness of FTG Gravity when investigating the prospectivity of exploration play models. Examples from survey work targeting fault-block structural closures and igneous intrusives will be shown.
-
-
-
Integration of High Resolution Falcon™ Gravity Gradiometry and Seismic data: an example from Northern Argentina
Authors L. Braga, M. L. Fernandez, J. C. S. de Oliveira Lyrio, S. V. Yalamanchili and A. Morgan
Newly acquired airborne gravity gradient and magnetic data were used to model the tectonic framework and basement configuration beneath the Chirete concession area, onshore northern Argentina, using the existing seismic data as constraints. The study area comprises approximately 3,675 square km, covering approximately 35 km in an east-west direction and 105 km in a north-south direction. The main project goal was to model the basement and associated major structural elements for the selection of oil & gas prospects. The basement-related fault/lineament maps were generated using several enhancements of the gravity gradient and magnetic data. 2D seismic depth sections were used as initial constraints for 2.5D and 3D gravity and magnetic modeling. Well-log densities and velocities were also used as constraints in this modeling process. The basement depth estimates were computed from the total magnetic intensity profile data. The Werner, Euler, and Peters half-slope techniques were used in the basement depth computations, as well as for the depths to magnetic sources within the sedimentary section. This interpretation was then refined by utilizing the gravity gradient data and seismic data in 2.5D and 3D modeling. Several of these depths are related to volcanic sources and were identified as igneous provinces, probably related to the rift and post-rift periods. The basement depths show significant variation from south to north, ranging from approximately 4 to 13.5 km below sea level. The enhanced potential field data yielded the basement faults and structural framework of the area. The basement fault trends strike mostly in an ENE-WSW direction. Two major east-west faults were identified as the major bounding faults of the Lomas de Olmedo rift. There is a remarkable correlation between these faults derived independently from seismic reflection data and from the enhanced airborne gravity gradiometry data. A number of positive structural features were identified with associated faulting and may provide new prospects for well drilling.
-
-
-
Interpretation of Gravity Gradiometry data and integration with PSDM workflow – imaging sub-salt structures in Gabon
Authors J. Watson, J. Barraud, F. Assouline and N. Dyer
FTG data were acquired over an onshore Gabon Block. The objectives of this survey were to delineate accurately the salt structures, to derive a base-salt structure map and to map basement structures associated with rifting. The survey area is situated along a clear trend of oil & gas subsalt fields that runs roughly north-south from offshore fields in south Gabon to the Lambaréné horst in the north. As a proven reservoir rock, the Gamba Sandstone is the primary objective. It is overlain by the Gamba Formation Vembo Shale, which in turn is overlain by the Ezanga Salt formation. The trap is formed by reactivation of Cocobeach Formation faults due to the sediment load of the overlying Gamba and Ezanga Formations and the Madiela Group limestone and dolomite. The Ezanga salt also provides the top seal. Prior to the FTG survey, three wells were drilled to target the Cretaceous Gamba sandstones. Although one of the wells appeared to be gas-bearing, the other two wells were dry. In addition, the significant discrepancies between predicted and actual depths demonstrated the need for new and independent geophysical data that would shed new light on the “salt problem”. Acquisition of seismic data in this area is difficult given the environment, described as coastal, marshy, tree-covered terrain with small rolling hills. Processing and depth-migrating the 2D seismic data is also difficult for several reasons, including a thick and variable weathering layer, the 3D nature of the salt structure and uncertainties in velocities. These problems resulted in misinterpretations of the seismic data and significant errors in the predicted depths of formation tops. The excellent quality of the FTG data and the large density contrast between salt and the surrounding sediments (clastics and carbonates) ensured that the workflows employed returned a successful result.
Compared to traditional land gravity techniques, airborne FTG data have the advantage of offering fast and complete horizontal coverage, and of providing the high resolution and bandwidth necessary to image the shallow salt structures with confidence. The workflow involved a back-stripping approach in which the model is constructed from top to bottom. Independent data (seismic and magnetic) were also used to constrain the results. Following on from this, an area of interest was defined to focus the interpretation effort on the most promising targets. In this core area, eleven seismic lines were selected for reprocessing and depth migration. The new PSDM seismic data were then used as constraints, together with well data, to build a model in 3D. Manual 3D forward modelling was used to build detailed surfaces with maximum control on the geometry.
-
-
-
Nigeria’s Nationwide High Resolution Airborne Geophysical Surveys
By S. Reford
Nigeria has gained near-nationwide airborne geophysical coverage, with high resolution horizontal-gradiometer magnetic and radiometric surveys flown at 500 m line spacing and 80 m mean terrain clearance, totalling almost 2 million line-km. The surveys were flown, per the index map, as follows: 2003 – Pilot Project – Ogun State; 2005-07 – Phase 1 – Blocks A+C and B; 2007-09 – Phase 2 – Blocks D1, D2, D3 and D4. All surveys were carried out by Fugro Airborne Surveys on behalf of the Nigerian Geological Survey Agency. Phase 2 forms part of the World Bank-supported Sustainable Management for Mineral Resources Project. As part of Phase 1, time-domain electromagnetic surveys were flown at 200 m line spacing in 2008-09 with the Tempest system over three blocks, totalling 24,000 line-km. Additional TDEM surveys are planned for Phase 2. To complete the airborne coverage, the Niger Delta block will be flown in 2010 with magnetics at 1 km line spacing; in addition, a quarter of the block will incorporate airborne gravity. The survey data are currently being interpreted by Fugro Airborne Surveys (Phase 1) and by Paterson, Grant & Watson Limited (Phase 2). PGW has prepared nationwide merged grids, and will integrate the two interpretations. The data have proven extremely valuable for:
- depth-to-source mapping of the inland sedimentary basins, delineating areas of interest for oil & gas exploration, as well as mapping shallow basement with extensions of known mineral belts;
- determining signatures of known occurrences such as gold deposits, lead-zinc deposits and kimberlite pipes;
- mineral potential mapping;
- characterization of the “Older” and “Younger” granites;
- mapping intrasedimentary igneous sources, sedimentary horizons and structure;
- the complementary mapping capabilities of radiometric and electromagnetic data in both hard-rock terrains and exposed sediments (e.g. Benue Trough).
An oral presentation will provide an overview and key highlights. It will also discuss the challenges of compiling and integrating data from a multi-year campaign utilizing as many as seven aircraft in a survey block. A poster presentation will provide a more in-depth analysis of specific areas of geophysical and geological significance, as well as the contrast of the new surveys with the low resolution magnetic data from the 1970s.
-
-
-
Waveform inversion for shallow velocity recovery
Authors U. Albertin, J. Mika, M. Zhou, A. Ford, N. Robinson and G. Schurter
We demonstrate the effectiveness of frequency-domain waveform inversion for shallow velocity recovery in a number of geologic settings around the world. Waveform inversion uses the entire wavefield, including direct-arrival information, in order to achieve high resolution, and we find it remains stable in areas historically known to have difficulty with coherent noise such as multiples. We demonstrate the effectiveness of the technique in offshore Trinidad, offshore Pacific Asia, and the Caspian Sea.
-
-
-
Waveform Tomography of Land Data from a Complex Thrust-Fold Belt in Western Canada: What Works, What Doesn’t, and What Needs to Improve
Authors A. J. Brenders, R. G. Pratt and S. Charles
In the Canadian Foothills, long-offset seismic data are occasionally acquired in an effort to "undershoot" areas of steeply dipping faults and a severely weathered near-surface. Since long offsets are one method of acquiring the low-wavenumber information necessary for Waveform Tomography (diving-wave tomography followed by full-waveform inversion), these data provide an excellent opportunity for demonstrating the efficacy of the method in building velocity models with land seismic data in areas possessing large lateral velocity variations. However, the acquisition of low-frequency field data remains a challenge: the majority of land-seismic field crews use 10 Hz geophones. The use of MEMS accelerometers (with a broadband response in the acceleration domain) was proposed as a solution to this problem, but the results of a recent high-effort, long-offset acquisition demonstrate that the records possess sub-optimal low-frequency data, dominated by instrument noise. In addition, full-waveform inversion of land seismic data must account for the effects of elastic modes such as ground roll. Without an efficient, multi-parameter, elastic inversion scheme, we must use one based upon the acoustic wave equation, and care must be taken to mitigate the effects of elastic modes during pre-processing of the seismic data. If seismic data are recorded with an appropriately designed acquisition (i.e., long offsets and instruments capable of recording the low frequencies), and sufficient, appropriate preprocessing is applied to the input data, acoustic full-waveform inversion can produce complex velocity models of the sub-surface from field seismic data acquired on land in complex geological settings.
-
-
-
Elastic wavefield inversion in three dimensions
Authors L. Guasch, I. Stekl, A. Umpleby and M. Warner
Although acoustic wavefield inversion is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The code uses explicit time-stepping by finite differences that are 4th order in space and 2nd order in time.
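The stencil orders quoted here (4th order in space, 2nd order in time) can be illustrated with a minimal 1D scalar sketch. This is only an illustration of the finite-difference scheme, not the authors' 3D elastic code; the grid and source parameters below are invented for the example.

```python
import numpy as np

def step_wave_1d(u_prev, u_curr, c, dx, dt):
    """One leapfrog step of u_tt = c^2 u_xx using a 4th-order central
    difference in space and a 2nd-order difference in time."""
    lap = (-u_curr[:-4] + 16 * u_curr[1:-3] - 30 * u_curr[2:-2]
           + 16 * u_curr[3:-1] - u_curr[4:]) / (12 * dx**2)
    u_next = u_curr.copy()                      # edge samples stay fixed (~0 here)
    u_next[2:-2] = 2 * u_curr[2:-2] - u_prev[2:-2] + (c * dt)**2 * lap
    return u_next

nx, dx, dt, c = 401, 5.0, 1e-3, 2000.0          # Courant number c*dt/dx = 0.4
x = np.arange(nx) * dx
u0 = np.exp(-((x - 1000.0) / 40.0)**2)          # Gaussian pulse centred at 1000 m
u_prev, u_curr = u0.copy(), u0.copy()           # zero initial particle velocity
for _ in range(300):                            # propagate for 0.3 s
    u_prev, u_curr = u_curr, step_wave_1d(u_prev, u_curr, c, dx, dt)
# the pulse splits into two half-amplitude pulses travelling at +/- c,
# so after 0.3 s the right-going peak sits near x = 1600 m
```

The same leapfrog structure carries over to the 3D elastic case, where the scalar Laplacian is replaced by the elastic stress-divergence terms.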
-
-
-
Acoustic waveform tomography of elastic wavefields: Application to marine OBS data
Among the varieties of waveform inversion techniques, acoustic waveform inversion has been a popular choice because of its simple formulation and modest computational costs. However, the earth consists of elastic materials, and thus there remain concerns about reliability, since behaviors of elastic wavefields, such as P-S conversion, are not properly accounted for. We demonstrate the practical validity of acoustic waveform tomography in marine settings using real data and synthetic studies. Ocean Bottom Seismograph (OBS) data from the seismogenic Nankai subduction zone were inverted with the acoustic implementation. We clearly delineated major geological features, including the mega-splay fault and thrusts in the accretionary prisms. The mega-splay fault is accompanied by a low velocity layer, which indicates fluid migration or a lithology change. The fault structure underneath the ridge remains debatable, owing to its similarity to the topography. Synthetic waveforms coincide kinematically well with observed waveforms, but discrepancies remain in amplitudes. The results validate the applicability of waveform tomography to elastic wavefields, but the elastic and attenuation effects need to be investigated further. In order to validate the real-data results, a preliminary 1D evaluation of the inverted results was conducted with synthetic elastic wavefields. The recovery of major structures was verified, but degradation was noted in the vertical velocity contrasts. 2D synthetic results will be computed to further investigate the ability of the acoustic implementation to retrieve spatial velocity contrasts, and the contamination by topography effects.
-
-
-
Examples of full-waveform inversion that utilize the low-frequency content available from dual-sensor, single-streamer acquisition
Authors S. Kelly, J. Ramos-Martínez and S. Crawley
In this abstract, we present both 2-D and 3-D examples of acoustic full-waveform inversion using synthetic recordings of the up-going pressure. These data are representative of those obtained through de-ghosting, by utilizing dual-sensor, single-streamer recording. We compare the results of “conventional” full-waveform inversion with those obtained from a method based on impedance, which we have found useful for improving the recovery of the lowest wavenumbers in the inverted model. We also present results for the 2-D inversion of a line of dual-sensor field data recorded offshore Cyprus. Both this inversion and the synthetic inversion study indicate that features of ~0.5 km can be accurately recovered at depths of 3.5 km using maximum offsets of only 8 km.
-
-
-
Fast Three-Dimensional Full Wave Seismic Inversion using Source Encoding
Authors J. R. Krebs, J. E. Anderson, D. Hinkley, S. Lee, A. Baumstein, Y. Ho Cha, R. Neelamani and M. -D. Lacasse
In this paper we demonstrate that the computational effort of FWI can be reduced significantly by applying it to data formed by encoding and summing source gathers, provided the encoding of the sources is changed between iterations. Changing the encoding between iterations changes the crosstalk noise caused by the summation of the sources. Thus, the source crosstalk noise stacks out of the inverted earth model, allowing summation of a large number of encoded sources. We call this method encoded simultaneous-source FWI (ESSFWI). We demonstrate the technique on 2D and 3D synthetic data.
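The claim that crosstalk "stacks out" when the encoding changes between iterations can be checked with a toy numpy sketch. Here G is a stand-in matrix of pairwise source interactions (purely illustrative, not a real FWI gradient): its diagonal plays the role of the wanted per-source contributions, its off-diagonal entries the crosstalk, and a fresh random ±1 code per iteration makes the crosstalk average toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
ns = 64                                   # number of sources
G = rng.normal(size=(ns, ns))             # G[i, j]: interaction of sources i and j
wanted = np.trace(G)                      # sum of per-source (i == j) contributions

est, n_iter = 0.0, 5000
for _ in range(n_iter):
    e = rng.choice([-1.0, 1.0], size=ns)  # fresh random encoding each iteration
    est += e @ G @ e                      # encoded simultaneous-source value:
                                          # trace(G) + sum_{i != j} e_i e_j G[i, j]
est /= n_iter                             # crosstalk terms average toward zero
```

Since E[e_i e_j] = δ_ij for independent ±1 codes, the expectation of eᵀGe is exactly trace(G); holding one fixed encoding over all iterations would instead leave the crosstalk term as a constant bias in the model.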
-
-
-
Deep Ocean 3D tomography on field data
Authors J. Morgan, G. Christeson and M. Warner
We have run a suite of 2D and 3D wavefield inversions to recover fine-scale velocity structure in the upper oceanic crust. The data were acquired on a plateau, close to a transform fault in deep water in the Pacific Ocean. At the fault, a vertical section of oceanic crust is exposed and has been mapped using submersibles; hence our inverted velocity models can be directly compared with adjacent outcrop data. Synthetic tests using the 2D inversion code suggest that the inverted velocity structure may contain artefacts caused by offline arrivals and feathering of the streamer. Synthetic tests using the 3D inversion code show that the true velocity structure can be recovered, and 3D inversions of the real data suggest that there is a velocity inversion in the upper oceanic crust in the area modelled.
-
-
-
Lithospheric imaging from teleseismic data by frequency-domain elastic full-waveform tomography
Authors D. Pageot, S. Operto, M. Vallée, J. Virieux and R. Brossier
We shall present a 2D elastic frequency-domain full-waveform tomography method suitable for lithospheric imaging from teleseismic data. In the teleseismic configuration, the source is a plane wave impinging on the base of the lithospheric target located below the receiver network. The plane-wave source is implemented in the frequency-domain forward problem using a scattered-field formulation. The wave modeling is performed with a finite-element discontinuous Galerkin method on unstructured triangular meshes. The inverse problem is solved in the frequency domain using a quasi-Newton L-BFGS optimization and the adjoint-state method. Preliminary applications in the framework of the acoustic approximation were presented to highlight the resolution improvements provided by the inversion of topside reflections after the first reflection at the free surface. These shorter-aperture converted phases dramatically increase the high-wavenumber coverage in the model space, which would otherwise have been rather poor. We assess, in a realistic teleseismic setting with a 0.2-2 Hz source bandwidth, the frequency sampling required to avoid wraparound of lithospheric reflectors, which results from the narrow-aperture illumination provided by plane-wave sources when temporal frequencies are not sufficiently finely sampled. Before considering application to real data, the obliquity of plane-wave sources with respect to the imaged section must be addressed, either by implementation of the 2.5D wave equation or by applying empirical corrections to velocities.
-
-
-
Some 3D applications of full waveform inversion
Authors R. -E. Plessix, S. Michelet, H. Rynja, H. Kuehl, C. Perkins, J. W. de Maag and P. Hatchell
Over the past 25 years, several synthetic and real examples of Full Waveform Inversion (FWI) have been published. The main advantages and limitations of this approach were already explained in the 1980s. It was soon realized that FWI requires long offsets and low frequencies to update the velocity background. Thanks to the improvements in acquisition over the last 20 years, long-offset data became available and several research groups could demonstrate the relevance of FWI with 2D data sets. Over the last 5 years, the increase in computer power has made 3D FWI affordable, at least with low frequency data. In this presentation, we discuss a few applications of FWI with marine data sets in the context of velocity model building and time-lapse studies, in which we invert only the low frequencies. We also illustrate the relevance of an anisotropic FWI to correctly handle short and long offsets.
-
-
-
Waveform tomography – Marine vs Land: Targets, Challenges and Opportunities
Authors A. J. Brenders, R. G. Pratt, R. Kamei and S. Charles
Waveform tomography yields sub-wavelength-scale velocity resolution through formal inversion of the recorded seismic wavefield. Velocity estimation through waveform inversion works best where wide-angle (large offset) refractions and reflections are available in the input data, where low frequencies with good signal-to-noise ratios are available, and where sources and receivers are adequately coupled and reliably consistent. Waveform tomography results have been obtained by our group for both marine and land data settings. Marine seismic data, in most cases, lend themselves well to waveform tomography; especially where OBS recordings are available, it is ideally suited to providing geologically significant velocity images of deeper structures. Land seismic data, in contrast, pose significant challenges for waveform tomography, particularly when topographic relief, weathering, and near-surface conditions are severe. The challenges posed by land data mean that waveform tomography can require extensive manual intervention and repeated parameter testing, driving the costs up dramatically.
-
-
-
Building starting model for full waveform inversion from wide-aperture data by stereotomography
Authors V. Prieux, G. Lambaré, S. Operto and J. Virieux
Building a reliable starting model remains one of the most topical issues for successful application of full waveform inversion (FWI). In this study, we assess stereotomography as a tool to build a reliable starting model for frequency-domain FWI from long-offset (i.e., wide-aperture) data. Stereotomography is a slope tomography method based on the use of traveltimes and slopes of locally coherent events in the data cube. A key feature of stereotomography is that it can be coupled efficiently with semi-automatic picking, which partially frees one from tedious and difficult interpretive traveltime picking. We assessed a tomographic workflow based on stereotomography and frequency-domain FWI with the 2D acoustic synthetic Valhall case study. The Valhall model is mainly characterized by a large-scale low velocity zone associated with gas layers above the reservoir level. We first computed an acoustic full-wavefield dataset using a finite-difference time-domain modeling engine for a wide-aperture survey with a maximum offset of 16 km. The source bandwidth is between 10 and 45 Hz. Compared to the conventional application of stereotomography, we assess in this study the benefits provided by the joint inversion of refraction and reflection traveltimes from long-offset data. Use of refraction traveltimes is expected to stabilize and improve the reconstruction of the shallow part of the model. In a similar manner, for frequency-domain FWI we design a multiscale approach which proceeds hierarchically from the wide-aperture to the short-aperture angles to mitigate the non-linearity of the inversion. Starting models for FWI were built by stereotomography using two sets of picked events. For the first data set, the picking was limited to reflection traveltimes with a maximum offset of 4 km, while both refracted and reflected events were picked in the second case using the full range of offsets (± 16 km). We highlight the improvements of the FWI results obtained from the starting stereotomographic model built from the long-offset data set. The improvements are observed at the reservoir level below the gas layers, but also in the upper part of the model, where the joint use of refraction and reflection traveltimes helps to improve the ray illumination.
-
-
-
Full Waveform Teleseismic Tomography: Theory and Applications
Authors S. Roecker, B. Baker and J. McLaughlin
We have adapted a 2D spectral-domain finite difference waveform tomography algorithm previously used in active-source seismological imaging to the case of a plane wave propagating through a 2.5D viscoelastic medium, in order to recover P and S wavespeed variations from body waves recorded at teleseismic distances. A transferable efficacy that permits recovery of arbitrarily heterogeneous models on moderately sized computers provides the primary motivation for choosing this algorithm. Synthetic waveforms can be generated either by specifying an analytic solution for a background plane wave in a 1D model and solving for the source distribution that would produce it, or by solving for a scattered field excited by a plane wave source and then adding the background wavefield to it. Because the former approach typically involves a concentration of sources at the free surface, the latter tends to be more stable numerically. To maintain tractability, we adopt a gradient approach to solve the inverse problem; calculating the gradient does not require much more computational effort than does the forward problem. The waveform tomography algorithm can be applied in a straightforward way to perform receiver-function migration and travel-time inversion. We will discuss an application of this technique to imaging the crust and upper mantle beneath the Tien Shan range in central Asia.
-
-
-
Full waveform inversion in the Laplace and Laplace-Fourier domains
Authors C. Shin, W. Ha, W. Chung and H. Seuk Bae
We present a review of Laplace- and Laplace-Fourier-domain waveform inversion. The wave equation in the Laplace and Laplace-Fourier domains can be solved by changing the real frequencies of the Fourier transform into imaginary frequencies. The initial model for Laplace-domain inversion can be built from scratch, for example a homogeneous velocity model. The inversion, which uses the zero-frequency components of the damped wavefield, provides a long-wavelength velocity model that can be used as a starting velocity model for conventional waveform inversion. Laplace-Fourier-domain inversion can recover long-, medium- and short-wavelength velocity models by adjusting the complex frequencies. Careful muting of noise before the first arrival should be applied, because the damped wavefield is sensitive to random noise. Numerical experiments and real data examples show that full waveform inversion in the Laplace and Laplace-Fourier domains can provide an alternative for seismic velocity estimation.
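The "zero-frequency component of the damped wavefield" can be made concrete with a small numeric sketch (the trace, arrival times and damping constant below are invented for illustration). Damping by e^(-st) before summing shows why the Laplace-domain value is dominated by the earliest energy, and hence why noise before the first arrival must be muted:

```python
import numpy as np

dt, nt = 0.002, 2000
t = np.arange(nt) * dt

def laplace_component(d, t, s):
    """Laplace-domain value of a trace: the zero-frequency (DC) sum of
    the wavefield after damping by exp(-s t)."""
    return np.sum(d * np.exp(-s * t)) * (t[1] - t[0])

first = np.exp(-((t - 0.5) / 0.02)**2)   # first arrival at 0.5 s
late  = np.exp(-((t - 2.5) / 0.02)**2)   # later event at 2.5 s
s = 5.0                                  # damping constant (1/s)
c_first = laplace_component(first, t, s)
c_late  = laplace_component(late, t, s)
# the damping ratio e^(-s*0.5) / e^(-s*2.5) = e^10, so the first arrival
# dominates the Laplace-domain value by roughly four orders of magnitude
```

The same arithmetic explains the sensitivity to pre-first-arrival noise: any noise at small t is damped least, so it enters the Laplace-domain value with the largest weight.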
-
-
-
Elastic (Visco) Full Waveform Inversion of multi-component marine seismic data in the time domain: A tribute to Albert Tarantola
Authors S. C. Singh, G. Royle, T. Sears, M. Roberts and P. Barton
Amplitude-versus-offset (AVO) analyses can be used to estimate P- and S-wave impedances. Since the method is local, i.e. it assumes 1D media and a linear approximation to the reflection coefficient, and ignores interference effects, the results are very approximate. In the 1980s, Tarantola's group in Paris started developing elastic full waveform inversion of near-offset data, while other groups were focusing on different types of migration algorithms using more sophisticated mathematical techniques. Tarantola (1986) first set up the mathematical foundation of full waveform inversion in acoustic media and then extended it to full elastic media (Tarantola, 1988). In the early 1990s our group started working on 1D elastic full waveform inversion (Singh et al., 1993), but used long-offset data to obtain the medium- to large-scale velocity of the sub-surface. We showed that wide-angle reflection data (Neves and Singh, 1996) are sensitive to intermediate-wavelength information. Joint inversion of near- and post-critical-angle reflection data allowed convergence towards the global minimum (Shipp and Singh, 2002). Since then we have extended the algorithm to multi-component OBC data to invert for P- and S-wave velocity (Sears et al., 2008; Roberts et al., 2008) and recently for attenuation (Royle and Singh, 2010). We start by inverting wide-angle data first, followed by critical-angle and then near-offset data. For a stable inversion, we invert for P-wave velocity first from vertical-component data, then medium-scale S-wave velocity from vertical-component data, and finally short-wavelength S-wave velocity from horizontal-component data. Although our group has made significant progress, computation remains the main issue in applying elastic full waveform inversion on a routine basis.
In this talk, I will give a historical perspective of elastic full waveform inversion, particularly the work related to Albert Tarantola, then present state-of-the-art full elastic waveform inversion techniques, and propose a strategy for future waveform inversion. I will particularly highlight the importance of elastic inversion for reservoir characterization, and show how full elastic waveform inversion could be extended to 3D media in a time-lapse mode (Royle and Singh, 2010; Queisser and Singh, 2010). We are presently taking full waveform inversion a step further by jointly inverting both seismic and controlled-source electromagnetic data (Brown et al., 2010).
-
-
-
Improved Near-surface Velocity Models from Waveform Tomography Applied to Vibroseis MCS Reflection Data
Authors B. Smithyman and R. Clowes
Multichannel vibroseis reflection surveys are prevalent in the land exploration seismic industry because of benefits in speed and cost, along with reduced environmental impact when compared to explosive sources. Since the downgoing energy must travel through the shallow subsurface, an improved model of near-surface velocity can in theory substantially improve the resolution of deeper reflections. We describe techniques aimed at allowing the use of vibroseis data for long-offset refraction processing of first-arrival traveltimes and waveforms. Waveform tomography combines inversion of first-arrival traveltime data with full waveform inversion of densely sampled refracted arrivals. A number of challenges are presented by the characteristics of vibroseis acquisition; we discuss some of these challenges and techniques to mitigate them. Through the use of waveform tomography, we plan to build useful, detailed near-surface velocity models for both the reflection workflow and direct interpretation.
-
-
-
Seismic anisotropy effects in 3D wavefield tomography
Authors I. Stekl, A. Umpleby and M. Warner
We present results showing how seismic anisotropy may affect waveform inversion images. Results from our Marmousi model, extended to 3D as a 2.5D model, show that failing to include appropriate anisotropy in the modelling algorithm can lead to mispositioning of anomalies in the images.
-
-
-
Time vs frequency for 3D wavefield tomography
Authors A. Umpleby, M. Warner and I. Stekl
Unlike the situation in two dimensions, where direct factorisation of the matrix equations makes frequency-domain methods much faster than explicit solution in the time domain, the computational resources required for practical wavefield tomography in 3D can be rather similar in the two domains. We have developed and optimised schemes that undertake wavefield tomography using explicit time stepping in the time domain, and that iteratively solve the matrix equations of the implicit problem in the frequency domain.
We have applied these two methods systematically to the same suite of problems. In the frequency domain, the principal advantages are that the initial tomographic updates for lowest frequencies are often seen more quickly, and spatial resolution can be better at the highest frequencies. In the time domain, one of the principal advantages is that it is possible to mute and/or weight the field data in time, and consequently the method can be made to work more effectively with difficult datasets. In practice, both approaches are useful, and both should be available within a comprehensive suite of inversion tools.
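The time-domain advantage named here, muting and weighting the field data in time, can be sketched minimally. The linear first-break time t0 = offset / v_mute, the cosine taper, and all parameters below are hypothetical choices for illustration, not the authors' implementation:

```python
import numpy as np

def mute_before_first_break(data, offsets, v_mute, dt, taper_len=20):
    """Zero each trace before its estimated first-break time, with a short
    cosine taper so the mute does not introduce a hard edge."""
    nt = data.shape[1]
    out = data.copy()
    for i, off in enumerate(offsets):
        i0 = int(off / v_mute / dt)          # first-break sample for this offset
        w = np.zeros(nt)
        ramp_end = min(i0 + taper_len, nt)
        w[ramp_end:] = 1.0                   # pass everything after the taper
        if i0 < nt:
            n = ramp_end - i0
            w[i0:ramp_end] = 0.5 * (1 - np.cos(np.pi * np.arange(n) / max(n, 1)))
        out[i] *= w
    return out

# demo: flat traces at two offsets; samples before t0(offset) are zeroed
data = np.ones((2, 200))
muted = mute_before_first_break(data, offsets=[0.0, 1000.0],
                                v_mute=2000.0, dt=0.004)
```

Because the window acts sample-by-sample in time, it has no direct counterpart once the data have been transformed to single frequencies, which is the flexibility the abstract ascribes to the time-domain formulation.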
-
-
-
3D GOM WAZ survey experiment using Full Waveform Inversion
Exploration in geologically more complex areas requires new tools and methodologies to address these challenges. The recently introduced wide-azimuth data acquisition method offers better illumination, noise attenuation and lower frequencies, allowing a velocity field for imaging to be determined more accurately. The methodology in this paper follows a layer-stripping approach in which we developed the supra-salt sediment velocity, followed by the top of salt, salt flanks and base of salt, and finished with a limited subsalt update. The inversion stages were carefully QC'ed through gather displays to ensure the kinematics were honoured. In order to approximate the observed data, the acoustic inversion had attenuation, anisotropy, and the acquisition source and receiver depths incorporated in the propagator. The final results were validated by reverse time migration using the inverted velocity field versus the final tomography velocity model.
-
-
-
An overview of the SEISCOPE project on frequency-domain Full Waveform Inversion: Multiparameter inversion and efficient 3D full-waveform inversion
Authors J. Virieux, S. Operto, H. Ben Hadj Ali, R. Brossier, V. Etienne, Y. Gholami, G. Hu, Y. Jia, D. Pageot, V. Prieux and A. RibodettiWe present an overview of the SEISCOPE project on frequency-domain full waveform inversion (FWI). The two main objectives are the reconstruction of multiple classes of parameters and 3D acoustic and elastic FWI. The optimization relies on a preconditioned L-BFGS algorithm, which provides scaled gradients of the misfit function for each class of parameters. For onshore applications where body waves and surface waves are jointly inverted, P- and S-wave velocities (VP and VS) must be reconstructed simultaneously using a hierarchical inversion algorithm with two nested levels of data preconditioning with respect to frequency and arrival time. Simultaneous inversion of multiple frequencies rather than successive inversions of single frequencies significantly increases the S/N ratio of the models. For offshore applications where VS can have a minor footprint in the data, a hierarchical approach which first reconstructs VP in the acoustic approximation from the hydrophone component followed by the joint
reconstruction of VP and VS from the geophone components can be the approach of choice. Among all the possible minimization criteria, we found that the L1 norm provides the most robust and easy-to-tune criterion, as expected for this norm. In particular, it allowed us to successfully reconstruct VP and VS in a realistic synthetic offshore case study when white noise with outliers was added to the data. The feasibility of 3D FWI is highly dependent on the efficiency of the seismic modelling. Frequency-domain modelling based on a direct solver allows one to tackle small-scale problems involving a few million unknowns at low frequencies. If the seismic modelling engine embeds expensive source-dependent tasks, source encoding can be used to mitigate the computational burden of multiple-source modelling. However, we have shown the sensitivity of source encoding to noise in the framework of efficient frequency-domain FWI, where a limited number of frequencies is inverted sequentially. Simultaneous
inversion of multiple frequencies is required to achieve an acceptable S/N ratio with a reasonable number of FWI iterations. Therefore, time-domain modelling for the estimation of harmonic components of the solution can be the approach of choice for 3D frequency-domain FWI because it allows one to extract an arbitrary number of frequencies at a minimum extra cost.
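The abstract's closing point, extracting harmonic components of the solution from time-domain modelling, amounts to accumulating a discrete Fourier sum during time stepping. The toy example below (parameters and the synthetic trace are illustrative assumptions, not from the paper) sketches the idea: one complex accumulator per target frequency, updated at each time step.

```python
import numpy as np

# Toy "time stepper": a wavefield trace delivered sample by sample.
# In a real FWI code, u_n would come from the finite-difference update.
dt, nt = 0.002, 1000                      # 2 ms sampling, 2 s of data
freqs = np.array([3.0, 5.0, 7.0])         # frequencies to extract (Hz)
t = np.arange(nt) * dt
trace = np.sin(2 * np.pi * 5.0 * t)       # stands in for the modelled field

# On-the-fly DFT: accumulate one complex sum per target frequency at each
# time step -- only O(n_freq) extra work per step of the simulation.
u_hat = np.zeros(len(freqs), dtype=complex)
for n in range(nt):
    u_n = trace[n]                        # value delivered by the stepper
    u_hat += u_n * np.exp(-2j * np.pi * freqs * n * dt) * dt

amps = np.abs(u_hat)
print(amps.argmax())   # -> 1  (the 5 Hz component dominates)
```

Because the accumulation runs alongside the time loop, an arbitrary number of frequencies can be extracted without storing or transforming the full time history, which is the "minimum extra cost" the abstract refers to.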
-
-
-
3D full-wavefield tomography: imaging beneath heterogeneous overburden
Authors M. Warner, A. Umpleby and I. SteklWe have developed computer codes and work-flows for 3D acoustic waveform inversion in both the frequency and time domains. We have applied these methods to several 3D field datasets with a variety of acquisition geometries and target depths. In each case, wavefield tomography was able to obtain a high-resolution high-fidelity velocity model of the heterogeneous overburden, and consequently to improve subsequent depth imaging of an underlying target.
-
-
-
Improvements in Imaging and Reduction of Uncertainty in Velocity Determination by the Use of Wide Azimuth Surveys
Authors A. Bartana and D. KosloffSeismic velocity determination has suffered from insufficient acquisition coverage. For this reason, only smooth, long-wavelength components of the velocity variation can be reliably recovered. We show by means of a theoretical study that multi-azimuth data have the potential to significantly improve velocity determination. In this study we examine the capability of multi-azimuth acquisition to resolve small velocity anomalies by means of a 3D synthetic example. The model consists of a layered structure containing two small velocity anomalies. The study compares the resolution when the migrated gathers contain no azimuthal information with the case when the gathers are binned according to both offset and azimuth. The results show that conventional gathers can only obtain a blurred image of the velocity anomalies, whereas with multi-azimuth gathers the velocity anomalies appear distinctly.
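The offset-versus-offset/azimuth binning the study compares is a grouping choice rather than a new computation. This sketch (bin widths and geometry are assumptions for illustration) shows how adding an azimuth key multiplies the number of gather classes available to the velocity analysis.

```python
import numpy as np

# Hypothetical midpoint-gather geometry: source-receiver offset vectors.
rng = np.random.default_rng(0)
hx = rng.uniform(-3000, 3000, 500)        # offset x-components (m)
hy = rng.uniform(-3000, 3000, 500)        # offset y-components (m)

offset = np.hypot(hx, hy)
azimuth = np.degrees(np.arctan2(hy, hx)) % 360.0

off_bin = (offset // 500).astype(int)     # 500 m offset classes
az_bin = (azimuth // 45).astype(int)      # eight 45-degree sectors

# Conventional gathers use one key; azimuth-sectored gathers a composite key.
conv_keys = set(off_bin.tolist())
maz_keys = set(zip(off_bin.tolist(), az_bin.tolist()))
print(len(conv_keys), len(maz_keys))
```

The extra keys are what let small anomalies, which perturb traveltimes differently along different azimuths, stand out instead of being averaged into a blurred update.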
-
-
-
Coil Shooting on Tulip discovery in Indonesia: a summary of the work done and lessons learned so far
By M. BuiaCoil Shooting [French, Cole, 1984; Durrani, 1987] is a technique in which a marine towed-streamer vessel acquires an almost continuous sequence of circular "lines". The circular line geometry is repeated in the X and Y directions to build up fold, offset and azimuth distribution. This method allows for full-azimuth (FAZ) acquisition using a single vessel shooting on a continuous turn. The time between each circular line is of the order of minutes, as opposed to hours for conventional race-track acquisition. This results in high acquisition uptime and efficiency. Eni Indonesia and WesternGeco shot and processed through PSDM a full 3D Circular Shooting (Coil) survey over the Tulip Discovery in Indonesia between August 2008 and February 2010. Compared to “traditional” streamer surveys, the circular geometry introduces several differences and a number of new challenges, including proper offset/azimuth stacking. This paper presents the steps of the whole project: design, onboard illumination QC and final imaging results of this new “Full Azimuth” (FAZ) seismic effort.
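The geometry the abstract describes, circles tiled in X and Y, is easy to visualise with a sketch. The radius, spacing and shot density below are made-up illustrative values, not the Tulip survey parameters.

```python
import numpy as np

# Illustrative coil geometry: circular sail lines of radius R whose
# centres are repeated on a grid in X and Y (values are assumptions).
R = 5000.0                     # coil radius (m)
dxy = 7500.0                   # centre spacing in x and y (m)
n_theta = 360                  # shot positions per circle

theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
centres = [(i * dxy, j * dxy) for i in range(3) for j in range(2)]

shots = np.concatenate([
    np.column_stack((cx + R * np.cos(theta), cy + R * np.sin(theta)))
    for cx, cy in centres
])
# Each coil samples every shooting azimuth, giving full-azimuth coverage
# from a single vessel on a continuous turn.
print(shots.shape)             # (2160, 2)
```

Overlapping adjacent coils is what builds up fold and offset distribution; the sketch only generates the sail-line positions.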
-
-
-
CRS - More than a stack: A workflow from time to depth
Authors D. Gajewski, M. Baykulov and S. DümmongStacking is one of the most stable processes in reflection seismic data processing. Although the stacked section provides a distorted picture of the subsurface, it has remained the first image in the processing chain since the CMP concept was invented more than half a century ago. The stability of the stacking process results from the limited assumptions made in its derivation; in particular, no assumption on the type of model is made. This applies as well to the extension of the CMP concept, the Common Reflection Surface (CRS) method. Not just one but several CMP locations are considered when determining the stacking attributes, which automatically accounts for the dip of the events. This improves the structural quality of the stack. Moreover, since several CMPs are considered, more traces contribute to the stack. The stack is just one product of this procedure. The stacking attributes, or CRS attributes, are determined for each sample of the data. These attributes (three for the 2-D situation) have many important applications in seismic data processing, such as velocity model building, multiple suppression, prestack data enhancement and data regularization. What started out as a stack has evolved into a reflection seismic data processing workflow from time to depth, producing structural images of high fidelity.
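For readers unfamiliar with the three 2-D CRS attributes mentioned: in the general CRS literature (quoted here from that literature, not from this paper) they are commonly the emergence angle $\alpha$ and the radii of curvature $R_\mathrm{NIP}$ and $R_\mathrm{N}$ of the NIP and normal wavefronts, entering the hyperbolic CRS traveltime operator for midpoint displacement $\Delta x_m$ and half-offset $h$:

```latex
t^2(\Delta x_m, h) =
\left(t_0 + \frac{2\sin\alpha}{v_0}\,\Delta x_m\right)^{2}
+ \frac{2\,t_0\cos^{2}\alpha}{v_0}
\left(\frac{\Delta x_m^{2}}{R_\mathrm{N}} + \frac{h^{2}}{R_\mathrm{NIP}}\right)
```

Setting $\Delta x_m = 0$ reduces the operator to the conventional CMP/NMO hyperbola, which is why CRS is "more than a stack": the additional terms span several CMPs and encode the dip and curvature information used downstream.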
-
-
-
Neural-network based multi-azimuth processing
Authors A. Huck, P. de Groot, T. Manning and W. RietveldThis paper describes the results of a series of experiments with neural networks, dip-steered noise reduction filters and other techniques aimed at combining multi-azimuth data. The seismic data were first pre-processed by applying dip-steered noise reduction filters, amplitude corrections and inter-volume trace matching for dynamic shift corrections. The individual azimuthal stacks were then combined using first unsupervised and then supervised neural networks in custom-made semi-automated workflows.
The main conclusions drawn from this study are that incremental improvements were achieved by consecutively aligning the azimuth volumes, unsupervised stacking and supervised stacking. Alignment proved to be a mandatory step. Unsupervised segmentation provided a useful segment volume that highlights the areas affected by stacking issues, and the same segmentation was also used for re-stacking the seismic data. The main improvements were achieved by selecting the relative weights used for stacking. Supervised neural-network stacking was further used to smooth the transitions between segments. The “MLP weighted” output is considered better than the input multi-azimuth stack. The “MLP weighted” stack is well suited for interpretation since no processing-related artifacts were introduced. The workflow was adapted to the pre-stack domain, but no additional gains were obtained.
-
-
-
Multi-dimensional data reconstruction and noise attenuation for optimal wide azimuth stack
Authors G. Poole and R. WombellOver recent years the value of wide-azimuth acquisition has been well documented. As well as significant improvements in the imaging of complex structures due to improved illumination, these data have also demonstrated benefits in the suppression of coherent noise, random noise and multiple energy. Two of the key factors controlling the quality of wide-azimuth datasets are high-density regular sampling and a good signal-to-noise ratio. Using simple synthetics, we demonstrate the importance of regular sampling to the stack response. We continue by showing how data can be regularised and interpolated with 5D Fourier reconstruction to stack out more noise and improve the stack response of primary energy. In addition, we highlight how multi-dimensional denoising techniques can be used to enhance weak energy where the signal-to-noise ratio is a problem.
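The "simple synthetics" argument about regular sampling can be reproduced in a few lines. This sketch (geometry, frequency and jitter are assumed values, not the authors' synthetics) compares the stack response of a regular and a jittered receiver line for linear events of slowness p: the regular grid places deep notches that cancel dipping energy, while irregular sampling lets that energy leak back into the stack.

```python
import numpy as np

# Stack response |sum_k exp(2*pi*i*f*p*x_k)| / n for a linear event of
# slowness p across n traces at positions x_k.
rng = np.random.default_rng(1)
n, dx, f = 48, 25.0, 30.0                  # traces, spacing (m), Hz
x_reg = np.arange(n) * dx
x_irr = x_reg + rng.uniform(-10.0, 10.0, n)   # +/-10 m positioning error

p = np.linspace(0.0, 1e-3, 400)            # slownesses (s/m)
resp = lambda x: np.abs(np.exp(2j * np.pi * f * np.outer(p, x)).sum(axis=1)) / n
r_reg, r_irr = resp(x_reg), resp(x_irr)

print(r_reg[0], r_irr[0])                  # a flat event (p = 0) stacks to 1.0 on both
print(r_reg[1:].mean(), r_irr[1:].mean())  # dipping energy leaks more on the irregular grid
```

Regularising the data onto the nominal grid before stacking, e.g. with the 5D Fourier reconstruction the abstract describes, restores the deep rejection notches and so "stacks out" more noise.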
-
-
-
Least Squares Migration of Stacked Supergathers
Authors G. T. Schuster, W. Dai and G. ZhanWe show that phase-encoded shot gathers can be stacked together to form supergathers and efficiently migrated using an iterative least squares migration (LSM) method. The major problem of cross-talk can be largely eliminated by iterative stacking of the phase-encoded migrations and a multisource preconditioning factor, where random static shifts are used for the phase-encoding function. Empirical results with synthetic seismic data suggest that increasing the number of stacked shot gathers requires an attendant increase in the number of LSM iterations. A key merit of phase-encoded LSM of supergathers is that, under ideal conditions, computational cost, I/O and storage requirements can be reduced by several orders of magnitude compared to conventional LSM.
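The cross-talk mechanism and its suppression by stacking can be illustrated without a migration operator. In this toy sketch (random traces stand in for shot records; the encoding is the random static shift the abstract names, but nothing else is the authors' algorithm), decoding a supergather recovers the target shot coherently while the other shot's contribution averages down roughly like 1/sqrt(K) over K encodings.

```python
import numpy as np

# Two "shot records" encoded with random static (time) shifts, summed into
# a supergather, decoded for shot 1, and stacked over K realisations.
rng = np.random.default_rng(2)
nt, K = 256, 64
d1, d2 = rng.standard_normal(nt), rng.standard_normal(nt)

acc = np.zeros(nt)
for _ in range(K):
    s1, s2 = rng.integers(0, nt, size=2)          # random static shifts
    super_g = np.roll(d1, s1) + np.roll(d2, s2)   # encoded supergather
    acc += np.roll(super_g, -s1)                  # decode for shot 1, stack

est = acc / K
# Coherent part matches d1; the residual is attenuated cross-talk from d2.
err = np.linalg.norm(est - d1) / np.linalg.norm(d1)
print(err < 0.5)    # residual well below the unencoded level (~1.0)
```

This is also why more stacked shot gathers need more LSM iterations: each extra shot adds a cross-talk term that the iterative stacking must average down.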
-
-
-
Beam, wavelets and enhanced seismic attributes for interpretation
Authors K. Sherwood and J. SherwoodIn beam migration, it is possible to maintain a one-to-one mapping between a coherent event in unmigrated time and the corresponding event in migrated depth. The mapping is accompanied by many seismic attributes, including dip/azimuth of the reflector, angle of incidence at the reflector, raypath to the reflector, coherency, and wavefront curvature. During reconstruction, one or more of these seismic attributes can be used to filter the data, creating unique seismic volumes that aid in model building and interpretation. The benefits derived from these volumes can lead to significant improvements in the imaging of challenging areas.
-
-
-
Anti-alias Optimal Interpolation with Priors
Authors M. Vassallo, A. Özbek, A. K. Özdemir, D. Molteni and Y. K. AlpWe introduce a new technique, referred to as Optimal Interpolation with Priors (OIP), for the interpolation of irregularly sampled signals using prior estimates of their spectral content, which is optimal in the least-squares sense. In this paper, after introducing the technique and describing its basic advantages with respect to other state-of-the-art regularization techniques, we demonstrate its potential to interpolate signals that are spatially aliased, based on realistic prior information. We also propose an algorithm to obtain a reliable prior estimate of the signal spectrum. The combined use of this algorithm and OIP, referred to henceforth as Anti-Alias OIP (AA-OIP), can be applied to datasets irregularly sampled in multi-dimensional spaces.
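The core idea, a least-squares interpolation constrained by a prior on the signal's spectral support, can be sketched in one dimension. This is an illustration in the spirit of the abstract, not the authors' OIP algorithm: restricting the model to Fourier components inside an assumed prior band lets irregular samples determine the signal uniquely even when plain gridding would alias.

```python
import numpy as np

# Irregular samples of a band-limited signal; the "prior" is the assumed
# spectral support |k| <= 6 (which here contains the true components).
rng = np.random.default_rng(3)
x_irr = np.sort(rng.uniform(0.0, 1.0, 40))          # irregular sample points
signal = lambda x: np.sin(2*np.pi*3*x) + 0.5*np.cos(2*np.pi*5*x)
d = signal(x_irr)

k_prior = np.arange(-6, 7)                          # prior spectral support
F = np.exp(2j*np.pi*np.outer(x_irr, k_prior))       # forward operator
# Least-squares fit of the Fourier coefficients within the prior support.
c, *_ = np.linalg.lstsq(F, d, rcond=None)

x_reg = np.arange(64) / 64                          # regular output grid
recon = (np.exp(2j*np.pi*np.outer(x_reg, k_prior)) @ c).real
err = np.abs(recon - signal(x_reg)).max()
print(err < 1e-6)    # exact recovery: the prior contains the true support
```

When the prior is wrong or too wide, the fit degrades or aliases return, which is why the paper pairs the interpolator with an algorithm for estimating a reliable prior spectrum.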
-
-
-
Advances in onshore imaging
Authors J. -W. de Maag, H. Rynja, E. van Dedem, P. Milcik and M. van de RijzenOnshore data typically pose additional challenges for processing and imaging in comparison with offshore data: limited access for acquisition, more variation in source and receiver coupling, more severe random noise conditions, the presence of coherent (shear) noise such as groundroll, more complicated multiple systems, processing no longer being done from a flat datum, and signal distortion from a rapidly varying shallow overburden. To overcome these challenges, several advances towards a better stack are being made. Some of these will be discussed below. Examples shown will be from two onshore datasets: a sparser Libyan survey and a high-density wide-azimuth survey acquired in the south of Oman.
-
-
-
Seismic stacking in a wider perspective
Stacking can be seen as part of the well-known correlation process: ‘stacking is zero-shift cross-correlation’. Hence, the stack yields one element out of a larger data volume. By computing this larger data volume (‘generalized stacking’), the original unstacked input can be fully recovered (‘generalized destacking’). If we look at the physics behind these mathematical transformations, generalized stacking represents a focussing process and generalized destacking represents a defocussing process. In this paper it is proposed to extend the traditional stack to a focal transformation. In addition, it is proposed to formulate the focal transformation in terms of constrained inversion.
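The opening identity, 'stacking is zero-shift cross-correlation', can be checked directly: summing the traces of a gather equals the zero-lag sample of the cross-correlation of the gather with an all-ones operator along the trace axis. (The gather below is random illustrative data.)

```python
import numpy as np

rng = np.random.default_rng(5)
gather = rng.standard_normal((24, 100))   # 24 traces x 100 time samples
stack = gather.sum(axis=0)                # conventional stack

# Cross-correlation over the trace axis with an all-ones operator; the
# zero-lag sample (index n_traces - 1 in 'full' mode) is the stack.
xcorr0 = np.array([
    np.correlate(gather[:, t], np.ones(24), mode='full')[23]
    for t in range(100)
])
print(np.allclose(stack, xcorr0))   # True
```

The non-zero lags of that correlation are the 'larger data volume' of the abstract: keeping them (generalized stacking) is what makes the transformation invertible, i.e. allows generalized destacking.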
-
-
-
Virtual Outcrop Models: Case Study from the Paleozoic Sandstone Reservoir and Aquifer Analogs, Saudi Arabia
Authors O. Abdullatif and M. MakkawiExcellently exposed Paleozoic sandstone strata in central and southern Saudi Arabia provide good outcrop analogs for many subsurface formations, hydrocarbon reservoirs and groundwater aquifers. The study of these outcropping rocks provides an invaluable opportunity to examine different scales of sedimentary heterogeneity and to understand their impact on reservoir and aquifer quality and behaviour in the subsurface. This might help to refine and better characterize geological models of reservoirs and aquifers based on subsurface information.
-
-
-
The Ainsa quarry outcrop revisited via orientation models built from LIDAR data
Authors P. Arbués, D. García-Sellés, O. Falivene, Ò. Gratacós and J. A. MuñozThe Ainsa quarry outcrop is located 1.5 km south of the town of Ainsa, in the southern Pyrenees, Spain. The strata in the exposure are Eocene and were deposited in a submarine slope setting undergoing synchronous thrusting and related folding. They have tectonic dips of about 22º to the WSW. The succession comprises a 20 m thick turbidite sandstone body (Figure 1) sandwiched between mudstone-dominated mass-transport deposits, together representing the Ainsa-1 turbidite channel complex. The outcrop has been the subject of numerous studies that have contributed to the global understanding of turbidite systems (Mutti and Normark, 1987; Schuppers, 1995; Clark and Pickering, 1996). The succession is also important in that it has been regarded as an analogue for reservoirs offshore West Africa. The outcrop section is about 400 m long, and in map view its trace forms a very open angle, limiting the validity of 3-D reconstructions away from the outcrop face to mere extrusion. However, the quarry face is clean and allows for the observation of multiple bedding surfaces, especially the sharp soles of turbidite sandstones on top of mudstone beds (Figure 1). These surfaces were studied from their LIDAR point-cloud expression. The results, a local 3-D reconstruction, will be used to revisit an existing depositional and architectural interpretation of the outcrop (Arbués et al., 2008) that had been built on the basis of conventional outcrop characterization techniques.
-
-
-
LIDAR-based 3D reconstruction and modelling of a flat-topped non-rimmed carbonate platform: Aptian, Maestrat Basin, Spain
Authors T. Bover-Arnal, O. Gratacós, D. García-Sellés, O. Falivene, R. Salas and J. A. MuñozOutcrop-scale reconstruction of the depositional geometries and facies distribution of carbonate systems improves our knowledge of their heterogeneity distribution, stacking patterns and stratal architecture. The collected data and derived models can be used as analogues for characterizing and modelling potential subsurface reservoirs. Traditional sedimentological analyses of outcropping carbonate systems have limited accuracy depending on exposure conditions, accessibility or past erosive processes. On the other hand, there is a need to complement classical sedimentological approaches with quantitative characterizations and models of sedimentary bodies. In this respect, the processing of three-dimensional (3D) point clouds captured by terrestrial LIght Detection And Ranging (LIDAR) technology, combined with a real-time kinematic global positioning system, offers field geologists the possibility of constructing virtual 3D digital outcrop models (DOMs), which allow for more accurate analyses, reconstructions and quantification of the outcropping facies distribution than conventional digital terrain models. We present a LIDAR 3D DOM of an Aptian flat-topped non-rimmed carbonate platform margin from the western Maestrat Basin (Spain). The DOM served as a starting point for a 3D reconstruction that shows the relationship between the depositional architecture and facies distribution of the carbonate system. The reconstruction not only highlights the value of digital outcrop models for characterizing virtual attributes not observable in the outcrops, owing to the limitations of the 2D views of the exposures, but also allows outcrop-scale sequence stratigraphic analyses to be refined.
In addition, the 3D sequence stratigraphic framework obtained, together with the 3D facies distribution model generated, can be used as an analogue for the characterization of subsurface carbonate reservoirs with similar depositional profiles. The workflow of this study followed these steps: 1) acquisition of the outcrop 3D point dataset using a ground-based terrestrial LIDAR equipped with a differential GPS; 44 overlapping scans were needed to cover the entire outcrops of the flat-topped non-rimmed carbonate system characterized, and each scan has an associated high-resolution photograph; 2) mapping of stratigraphic surfaces and pseudowells describing 5 lithofacies onto each individual photograph using a CAD-based tool; the mapping is carried out directly on the photographs because manipulating the images and interpreting the details is easier than digitizing directly onto the point cloud; 3) projection of the features mapped onto the photographs into the corresponding point cloud in order to georeference them; 4) global georeferencing of the locally georeferenced individual point clouds and attached interpretations by means of the UTM coordinates of each scan; 5) use of the mapped stratigraphic boundaries to reconstruct the surfaces bounding stratigraphic units; and 6) population of the internal facies distribution conditioned to the pseudowells. This methodology allows information to be extracted efficiently from point clouds, and resulted in the construction of a high-resolution 3D geological model displaying the stratal architecture and facies heterogeneity of sedimentary bodies, confined within a 3D sequence stratigraphic framework.
-
-
-
Application of ground-based LIDAR for the study of the Huesca Fluvial Fan (Northern Ebro Basin, Spain): modelling the Montearagón outcrop
Authors R. Calvo, P. Arbués, D. García, P. Cabello and E. RamosThe emergence of new techniques usually awakens strong interest within the scientific community in testing their potential applications in different fields. Thus, any new tool or methodology should be validated prior to its systematic application. The validation process must be carried out in sufficiently well-known settings in order to verify that the results are consistent with those expected. This work aims to incorporate the Light Detection And Ranging (LIDAR) technique into the study of sedimentary outcrops (Bellian et al., 2005). The methodology used includes both classical field study and geometric information extracted from the analysis of a LIDAR-based Virtual Outcrop (VO). In this case, the chosen study area is the Montearagón outcrop. As illustrated in Fig. 1, Montearagón is located on the northern margin of the Ebro Foreland Basin (Arenas et al., 2001; Luzón, 2005), 11 km northeast of Huesca (Aragón). The outcropping materials correspond to the Oligo-Miocene Huesca Fluvial Fan (Hirst, 1991; Nichols, 2004; Donselaar et al., 2008) and are mainly composed of different typologies of sandstone bodies enclosed in a matrix of floodplain shale and silt (Fig. 2). The main objective of this work is to create a 3D model of the Huesca Fluvial Fan. This model will be built using the proposed methodology and by integrating several outcrops that represent various sectors of the fan. Studying outcropping ancient fluvial systems like the Huesca Fluvial Fan is of great interest to the oil industry, as it provides insight into the behaviour of similar buried fluvial reservoirs that are hard to image and to model accurately.
-
-
-
Characterization of an analogue of fractured reservoir using LIDAR, GPR and conventional data
Authors M. Coll, D. García-Sellés, M. Grasmueck, G. P. Eberli, J. Lamarche and K. PomarThe Solvay quarry displays karstified and heavily fractured strata of peritidal platform carbonates of late Barremian age that can serve as an analogue to subsurface fractured reservoirs. In addition to being a potential analogue, this study also aims to improve the methodology used in building DOMs (Digital Outcrop Models). The originality of the applied methodology is the integration of conventional outcrop analysis, LIDAR (Light Detection and Ranging) and GPR (Ground Penetrating Radar) data. The goal is to produce an accurate and efficient DOM that resolves the three-dimensional sub-seismic heterogeneity of the fracture distribution in the strata. Stratigraphic and fracture analysis with conventional methods was performed on about 2 km of exposed cliff faces, which were subsequently scanned with the LIDAR equipment. Transversal and longitudinal 2D GPR lines and 6 GPR cubes were acquired on the quarry floor to correlate with the quarry walls. The 2D GPR data were statically corrected using the GPS horizontal coordinates of the transects, the high-resolution topography provided by the LIDAR data, and a replacement velocity of 0.098 m/ns. The GPR and LIDAR data were loaded into 3D CAD software to interpret each horizon and to reconstruct the structural framework. To characterize the fracture distribution, scanline measurements were performed along the quarry walls, the 3D migrated GPR data were interpreted by delineating high-amplitude zones originating from focused diffractions that define fracture surfaces (Grasmueck et al., 2005), and the LIDAR point clouds were processed to reveal the main plane families that form the rough wall surface. Two of the GPR cubes show the coexistence of four sub-vertical fracture families trending N-S, E-W, NW-SE and NE-SW.
The NE-SW fracture family is not detected in the outcrop using the scanline method because these fractures are parallel to the direction of the quarry wall; however, the LIDAR algorithm found two plane families oriented close to this fracture family. These planes are related to morphological features of the NE-SW joints, such as twist hackles. The 3D fractures constructed from the GPR data make it possible to filter and understand the planes computed from the LIDAR data and to determine the sampling bias due to scanline orientation. Subsequently, the LIDAR data and the scanline measurements allow a continuous distribution of the fracture families along the quarry to be obtained, characterizing dip and azimuth variations.
-
-
-
A workflow for the automatic characterization of geological surfaces from terrestrial LIDAR data
Authors O. Falivene, D. García-Sellés, P. Arbués, O. Gratacós, J. A. Muñoz and S. TavaniPoint clouds acquired with terrestrial LIDAR are used as a digital support to accurately and precisely georeference outcrop characterizations, as well as to resolve accessibility problems and improve outcrop characterizations. The LIDAR data allow for efficient visualization and analysis of the outcrop on the computer, and are also useful for revisiting field data in the office or for teaching purposes. The common practice for virtual outcrop interpretation is visual identification and manual digitization of pointsets or polylines using 3-D CAD-like modules. Other, less generic, approaches are oriented towards the automated or semi-automated extraction of geological features, based either on the processing of intensity or other attributes of the virtual outcrop (RGB, hyperspectral) or on geometric parameters calculated from positions. In this presentation, we propose a workflow for the automatic characterization of planar surfaces (typically stratigraphic bedding or fractures) from LIDAR data. The workflow directly uses the point cloud; therefore no decimation, smoothing, or intermediate triangulated or gridded surfaces are required. It is designed to minimize user interaction and so allow simple, fast, objective and semi-automated use. The result of the workflow is the reconstruction of planar surfaces identified in the point cloud by means of TIN surfaces, organized into families according to their orientations. These surfaces can be used as seeds for building surface-based models of the outcrop, or can be further analysed to investigate their characteristics (geometry, morphology, spacing, dimensions, intersections, etc.). The workflow is based on planar regressions carried out for each point in the point cloud.
These regressions allow the subsequent filtering of points based on normal-vector orientation, planar-regression quality, the relative locations of points or the differences between their normal vectors. This is aimed at isolating planar patches with geological significance. A coarse grid-search strategy is implemented to speed up neighbouring-point searches and to allow multimillion-point clouds to be handled. The workflow is illustrated using synthetic and natural examples.
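The per-point planar regression step can be sketched with a covariance (PCA) fit over each point's neighbourhood: the eigenvector of the smallest eigenvalue is the local normal, and normals can then be grouped into orientation families. This toy example uses two synthetic planes and brute-force neighbour search; it is an illustration of the general technique, not the authors' workflow.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
# Two synthetic "outcrop" planes: z = 0 (bedding-like) and x = 0 (fracture-like).
bed = np.column_stack((rng.uniform(0, 10, n), rng.uniform(0, 10, n), np.zeros(n)))
fra = np.column_stack((np.zeros(n), rng.uniform(0, 10, n), rng.uniform(0, 10, n)))
cloud = np.vstack((bed, fra))

def normal_at(i, k=12):
    # k nearest neighbours by brute force (fine for a toy cloud)
    d2 = ((cloud - cloud[i]) ** 2).sum(axis=1)
    nb = cloud[np.argsort(d2)[:k]]
    cov = np.cov((nb - nb.mean(axis=0)).T)
    w, v = np.linalg.eigh(cov)
    return v[:, 0]          # eigenvector of smallest eigenvalue = local normal

normals = np.array([normal_at(i) for i in range(len(cloud))])
# Orientation family: which axis the (unsigned) normal is most aligned with.
fam = np.abs(normals).argmax(axis=1)       # 2 -> bedding-like, 0 -> fracture-like
print((fam[:n] == 2).mean(), (fam[n:] == 0).mean())
```

Points near the intersection of the two planes pick up mixed neighbourhoods and may be misclassified, which is exactly the kind of case the quality and relative-orientation filters in the workflow are meant to remove.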
-
-
-
Collection, processing, interpretation and modelling of digital outcrop data using VRGS: An integrated approach to outcrop modelling
By D. HodgettsMuch focus has been given to the hardware and data collection techniques for digital outcrop analogue work. Software development has, however, lagged behind, with many geoscientists relying on applications designed for civil engineering or surveying purposes. Though these approaches have yielded interesting and often impressive results, without dedicated software applications the true power of digital outcrop data will never be realised. For the past 7 years, software dedicated to 3D digital outcrop work has been under development at Manchester University, with a focus on using very large datasets (collected from LiDAR or other digital sources) effectively and efficiently, but importantly also on integrating these approaches with more traditional data collection methods such as sedimentary logging and field mapping. The software facilitates the processing of point cloud data from LiDAR and satellite sources (such as Digital Elevation Models), the triangulation of those data into meshes, and interpretation on both point clouds and meshes where appropriate. Interpretation tools include typical polyline mapping tools, structural measurement tools and sedimentary logging tools, as well as more automated interpretation and mesh/point-cloud classification approaches. Owing to the rapid, large-scale data collection possible with modern surveying systems and the abundance of publicly available satellite imagery and DEM data, digital outcrop datasets can be very large. This presents problems in the time taken to interpret and extract surfaces, structures and geostatistics from these data. One solution to reduce the time needed for interpretation and classification is the application of artificial intelligence to the problem. Artificial neural networks try to replicate the same learning process used by humans and other animals.
These artificial neural networks (ANNs) potentially provide very powerful ways of classifying data. Examples will be given showing the application of these ANN approaches to the classification of point cloud and mesh data, in particular addressing the problem of extracting structural data on plane orientations such as fracture and bedding planes. The application of other soft computing and artificial intelligence approaches will also be presented. The integration of multiple data sources into one environment facilitates the development of new modelling approaches. A predictive approach to surface modelling will be presented that relies on the use of structural data from dip-azimuth measurements on bedding planes and on polyline interpretations of key stratal surfaces. This modelling approach relies on converging a triangulated mesh, based on the control data, onto a solution matching the input data, rather than on traditional interpolation/extrapolation approaches. With the rapid evolution of computer hardware, particularly the development of high-power graphics-card based computing, the application of modern graphics-card features to the processing, visualisation and rendering of large digital outcrop datasets will be demonstrated. These hardware advancements will prove of significant benefit to the geosciences, but only if software applications are written to take advantage of them.
-
-
-
Advances in Virtual Outcrop Geology
Authors J. A. Howell, S. Buckley, T. Kurz, A. Rittersbacher and A. SimaVirtual outcrops, in which geological exposures are digitally captured in a workstation, provide a new and rapidly emerging tool for the collection and analysis of field data. The advantages of virtual outcrops are primarily twofold: the rapid collection of accurate, spatially constrained measurements of geological features, and the improved visualization of outcrops, which allows better correlation and mapping coupled with an ability to illustrate and communicate field observations to a wider audience. Virtual outcrop geology is a rapidly expanding field of study which has grown over the last 10 years from photogrammetry and basic digital mapping to the advanced data collection and visualization methods utilized today. This presentation addresses two recent developments: the collection and utilization of very large datasets, and the integration of hyperspectral imagery to allow the remote mapping of lithology and mineralogy. To date, the majority of photo-realistic virtual outcrops have been generated from ground-based lidar systems. While producing excellent results, these systems are limited by mobility and range, especially when studying very large outcrops. A solution to this problem is to mount the lidar system in a helicopter and scan the outcrop obliquely. This allows the rapid collection of very large volumes of data and has the added advantage of optimizing the angle at which both the scan and the associated photos are taken, reducing the occurrence of scan shadows. Very large virtual outcrops that cover tens of kilometres can be collected in hours. Despite the speed of acquisition, heli-based data present a new set of challenges, not least the creation of very large datasets which cannot be visualized using conventional software. The acquisition, processing and utilization of these data will be illustrated with examples from fluvial and shallow marine systems in Utah.
Airborne hyperspectral imagery is an established remote sensing method which utilizes the absorption characteristics of light outside the visible range (near infrared) to identify mineralogy and other surface features (vegetation, land use, etc.). Mounting a similar camera on a tripod and obliquely scanning geological outcrops allows the remote mapping of lithology and mineralogy. The acquisition of oblique data from surfaces with significant topography has presented challenges for the processing of such data. Integration with the detailed terrain mapping provided by the lidar has allowed the spectral absorption response to be modelled and meaningful virtual outcrops, textured with quantitative mineralogical information, to be produced. The result is a virtual outcrop textured with false-colour images that record mineralogy and can be accurately and rapidly investigated for quantitative information.
-
-
-
3D Digital Outcrop Modeling and aquifer/reservoir characterization of a slope system tufa complex. La Peña del Manto, Soria (Spain)
Field surveys have been performed on an excellent outcrop of a Quaternary perched springline (slope system) tufa complex (La Peña del Manto, Soria, Spain), integrating LIDAR (Light Detection and Ranging), DGPS (differential global positioning system), GPR (ground penetrating radar) and ERT (electrical resistivity tomography) technologies with conventional field studies. The latter include: 1) detailed GIS-based geological and geomorphological mapping; 2) description and characterization of sedimentary facies; 3) logging of stratigraphic sections; 4) palaeocurrent measurement; 5) sampling for petrographic, microtextural, geochemical and geochronological analysis; together with 6) sampling for petrophysical characterization (porosity and hydraulic conductivity analysis) of the different lithofacies, which will be carried out in a following step of the research project. PETREL software (Schlumberger) is being used to integrate the data set and to build a digital outcrop model (DOM) and a 3D facies model of this slope tufa complex. These models will allow the accurate reconstruction of sedimentary geometries and quantification of the spatial distribution of lithofacies and their physical properties. They are envisaged as a highly valuable tool for unravelling the sedimentological development and evolution of the cascade tufa complex and its aquifer characterization, providing key insights for understanding the Quaternary geomorphological evolution of the fluvial drainage network of the area. In addition, the results will help to improve the current knowledge and understanding of tufa sedimentary systems (comparatively much less studied than other carbonate sedimentary systems) and will provide valuable information for aquifer and reservoir analogues of comparable sedimentary bodies.
-
-
-
Methods for Analyzing High Resolution 3D Digital Outcrop Geology: Deepwater Jackfork Sandstone at Big Rock Quarry, Arkansas
Authors C. L. V. Aiken, M. I. Olariu, M. Wang, J. P. Bhattacharya and J. F. Ferguson
Qualitative facies distributions and quantitative bed/channel dimensions in three-dimensional virtual outcrops, obtained using ground-based remote sensing and analysis of terrain surfaces, form a basis for geologic mapping and interpretation of deepwater deposits at Big Rock Quarry, Arkansas, located in the southeastern part of the Ouachita Mountains in North Little Rock, Arkansas (Fig 1). Three-dimensional views of the lower part of the upper Jackfork Group (Olariu et al., 2008) allow three-dimensional reconstruction of facies architectural elements: stacked channels that lack levees and overflow deposits, forming a submarine channel complex deposited at the base of slope, estimated as 9.6 km by 16 to 24 km and pinching out 4 km north of the quarry. Flow indicators are oriented west-southwest.
-
-
-
Automated Methods for Fully Exploring and Interpreting LIDAR Data Points
By S. Viseur
LIDAR scanning combined with digital photograph mapping techniques (Bellian et al. 2005) has become a privileged tool for obtaining a 3D georeferenced reconstruction of an outcrop, often termed a DOM (Digital Outcrop Model). For several years, many geoscience applications have used DOMs as a support for manual interpretations of strata or fractures and for facies mapping. However, the LIDAR tool produces huge data sets that quickly become difficult to manipulate interactively, and hence to interpret. A new challenge in geomodelling is therefore to extract geological features from a DOM in an automated way. Different kinds of strategies have been proposed in the literature, based on either LIDAR points or DOMs. For example, some authors have proposed to use maximum curvature values (Ahlgren et al., 2003) in order to obtain statistics about fracture networks (orientations, density). Automated detection methods have been presented in Kudelski et al. (2009), Viseur (2008) and Viseur et al. (2009); they are applied to DOMs and aim at extracting, as polygonal lines, the strata or fracture paths observed along the outcrop. Other authors (Garcia-Selles et al., 2008; Franceschi et al., 2009) use properties computed (geometrical attributes) or available (intensity) from LIDAR data points to highlight or detect geological features. Finally, approaches based on "ant tracking" algorithms applied to the colours of the mapped pictures have been proposed (Monsen et al., 2007). In this paper, a series of algorithms is presented. They are integrated into a workflow to fully explore and interpret numerical outcrops, from data points to horizon or fracture surface constructions. Indeed, working on DOMs requires building surfaces from very dense multivalued XYZ data points, which is time consuming and generally leads to mesh decimation in order to obtain triangulated surfaces light enough to manipulate. These operations may damage the information contained in the topography geometry.
Therefore, working directly on the XYZ data points may be a good alternative and allows the display of subtle relief signals. Moreover, LIDAR instruments are undergoing new developments, and LIDAR data points with RGB flags are increasingly provided. The proposed approach aims at first extracting, as polygonal lines, the limits of geological objects from the LIDAR data points. Surfaces may then be built to model the detected fractures or strata interfaces.
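A minimal sketch of curvature-based feature detection on raw LIDAR points, in the spirit of the maximum-curvature statistics cited above: each point gets a curvature proxy from the PCA of its local neighbourhood, which can then be thresholded and linked into polygonal lines. This is our own illustration, not the algorithm of any of the cited papers; the function name and the neighbourhood size `k` are assumptions.

```python
import numpy as np

def local_curvature(points, k=12):
    """Curvature proxy per XYZ point from the PCA of its k nearest
    neighbours: the fraction lam_min / sum(lam) of the covariance
    eigenvalues is ~0 on planar bedding surfaces and rises at sharp
    relief such as fracture traces or bed limits."""
    points = np.asarray(points, dtype=float)
    curv = np.empty(len(points))
    for i, p in enumerate(points):
        # brute-force k-nearest neighbours (a k-d tree would be used in practice)
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        lam = np.sort(np.linalg.eigvalsh(cov))
        curv[i] = lam[0] / max(lam.sum(), 1e-12)
    return curv
```

Points whose proxy exceeds a chosen threshold are candidate feature points; connecting them along the outcrop yields the polygonal lines from which surfaces can later be built.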
-
-
-
GeoAnalysis Tools - ArcScene Extension for the Analysis of 3D Geological Outcrop Models
Authors L. White, C. Aiken, M. Alfarhan and J. Cline
GeoAnalysis Tools is an ESRI ArcScene extension developed by Geological & Historical Virtual Models, LLC, based upon research performed in collaboration with the University of Texas at Dallas. GeoAnalysis Tools allows for the interactive analysis of orientation and deformation features of a 3D model of a geological outcrop. The model can be either a photorealistic solid model constructed by draping photographs on a triangulated irregular network (TIN) derived from a LIDAR point cloud, or the point cloud itself. Basic field measurements such as strike-dip, trend-plunge, and bedding thickness can be made on the 3D model. The program provides for the extrusion of features in a trend-plunge direction to elucidate the nature of the deformation. Down-plunge cross sections are rapidly created from traces of features such as bedding contacts and displayed in ArcMap.
-
-
-
Challenges and Pitfalls of Modelling Old and Deep Petroleum Systems: Examples from North Africa and the Middle East
Authors J. Craig, D. Grigo, A. Rebora, G. Serafini and E. Tebaldi
Older and deeply buried petroleum systems are usually characterised by complex geological histories, and this is certainly the case for the Neoproterozoic and Palaeozoic petroleum systems of North Africa and the Middle East. In these systems, the efficiency of the source rocks and the potential to generate, migrate and trap hydrocarbons in a time frame that allows hydrocarbons to be retained are often the most critical risks. Hydrocarbons can usually only be trapped for a few tens of millions of years because even the most perfect seals are permeable over longer periods of time. One of the most critical issues determining the efficiency of older and/or deeply buried petroleum systems is, therefore, their burial history, and specifically the existence of a ‘late’ burial phase that can allow hydrocarbons to be generated, expelled, migrated and trapped in a suitably recent timeframe. Exceptions, such as the Neoproterozoic petroleum system of the Amadeus and Officer basins of Australia or the Late Neoproterozoic-Early Cambrian petroleum systems of Oman and the Indian Sub-continent, generally occur where evaporite super-seals are present and/or where the post-trapping history is dominated by extreme tectonic stability.
-
-
-
Automated reconstructions of sedimentary basins in frontier areas
Authors L. H. Rüpke and D. W. Schmid
The self-consistent reconstruction of the thermal, tectonic, and stratigraphic evolution of sedimentary basins is a challenging task. Good results have been obtained (e.g., Bellingham and White 2002, Fjeldskaar et al. 2004, Kooi et al. 1992, Poplavskii et al. 2001, Rüpke et al. 2008) based on McKenzie’s pioneering work (1978). However, with current petroleum prospects moving further and further into frontier areas, characterized by deep water and extreme stretching of the lithosphere, the McKenzie approach no longer suffices to obtain a valid reconstruction. The additional physics required include depth-dependent stretching, formation of new oceanic crust, and mineral phase transitions. We have implemented all standard as well as these frontier-area-relevant mechanisms in a software package called TECMOD2D, which allows for automated thermotectonostratigraphic reconstructions of sedimentary basins. Key to this is the coupling of a forward model to an inverse scheme for automated parameter update. The forward model solves simultaneously for lithosphere processes (e.g. thinning, flexure, temperature) and sedimentary basin processes (e.g. sedimentation, compaction, maturation). The inverse algorithm automatically updates crustal and mantle thinning factors as well as paleo-water depth until the input stratigraphy is fitted to the desired accuracy.
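The coupling of a forward model to an inverse scheme for automated parameter update can be caricatured in a few lines. The toy below inverts a single uniform crustal thinning factor β from an observed water-loaded syn-rift subsidence by bisection against a one-line Airy-isostasy forward model (thermal subsidence and flexure neglected). TECMOD2D's actual forward and inverse machinery is of course far richer; the function names and constants here are illustrative only.

```python
def forward_subsidence(beta, tc=32.0, rho_m=3300.0, rho_c=2800.0, rho_w=1030.0):
    """Toy forward model: water-loaded syn-rift subsidence (km) for uniform
    thinning of a crust of initial thickness tc (km) by a factor beta.
    Airy isostasy only; thermal terms are neglected."""
    return tc * (1.0 - 1.0 / beta) * (rho_m - rho_c) / (rho_m - rho_w)

def invert_beta(observed_km, tol=1e-6):
    """Automated parameter update: bisect on beta until the forward model
    reproduces the observed subsidence, mimicking the forward/inverse loop
    (subsidence increases monotonically with thinning)."""
    lo, hi = 1.0 + 1e-9, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if forward_subsidence(mid) < observed_km:
            lo = mid  # modelled basin too shallow: needs more thinning
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In the real workflow the "observation" is the full input stratigraphy and the unknowns are laterally varying crustal and mantle thinning factors plus paleo-water depth, but the update logic is the same in spirit.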
-
-
-
Basin modelling of the Ghana transform margin: implications for the structural, thermal and hydrocarbon evolution of the Tano Basin
Authors L. H. Rüpke, D. W. Schmid, E. H. Hartz and B. Martinsen
This study explores the structural and thermal evolution of the Ghana transform margin. The main objective is to explore how the opening of the Atlantic Ocean and subsequent interaction with the Mid-Atlantic Ridge (MAR) has affected the margin’s structural and thermal evolution. Two representative evolution scenarios are described: a reference case that neglects the influence of continental break-up, and a second scenario that does account for a possible heat influx during the passage of the MAR as well as magmatic underplating. These two scenarios have further been analyzed for their implications for the hydrocarbon potential of the region. The scenario analysis builds on a suite of 2D realizations performed with TECMOD2D, a modeling software for automated basin reconstructions. Taking the presently observed stratigraphy as input, the structural and thermal evolution of a basin is automatically reconstructed. This is achieved through the coupling of a lithosphere-scale forward model with an inverse algorithm for model parameter optimization. We find that lateral heat transport from the passing MAR, in combination with flexure of the lithosphere, can explain the observed uplift of the margin. These results were obtained for a broken-plate elasticity solution with a relatively large value for the effective elastic thickness (Te = 15) and necking level (15 km). Lateral heat flow from oceanic lithosphere is clearly visible in elevated basement heat flow values up to 50 km away from the OCT. This influx of heat does not, however, seem to have significantly affected the maturation history along the margin. Only the deepest sediments close to the OCT show slightly elevated vitrinite reflectances in simulations that account for the passage of the MAR. In conclusion, it appears that lateral heat transport from the oceanic lithosphere is instrumental in shaping the Ghana transform margin but seems to have only limited control on the maturation history.
-
-
-
What can we expect from process-based source rock modelling: Examples from high and low resolution data sets
By U. Mann
In order to quantify geological processes in a distinct part of a sedimentary basin, two prerequisites are essential: first, a reasonable description of the most relevant processes taking place, and second, the description of these processes in three dimensions. The process-based source rock modelling software OF-Mod 3D aims at predicting source rock units in sedimentary basins in terms of distribution and properties. It simulates the most important processes relevant for organic matter accumulation in sediments, and the interactions between them. The modelled processes are: supply and distribution of marine and terrigenous organic matter, degradation in the water column, burial efficiency at the sea floor under oxic and anoxic (oxygen minimum zones, anoxic bottom water) conditions, as well as dilution of the organic matter with siliciclastic sediments. The results can be calibrated to, or simply compared with, analytical data from well samples. The advantage of such process-based modelling of organic sedimentation is that the process descriptions substitute to some degree for missing data, thereby giving the modelling predictive power. In addition, complex parameter interactions are considered and the influence of each control parameter can be identified easily. In terms of petroleum systems modelling, it is also notable that the process-based forward modelling approach yields initial, not maturity-altered, source rock properties. This is important since geochemical data from exploration wells are often heavily maturity altered and thus provide no further information on source rock properties that can be used as input into hydrocarbon generation and migration modelling.
-
-
-
Which equations to pick: a comparison of equations for calculating marine organic carbon deposition
Any quantitative description of a geological process requires a mathematical model describing the relevant processes, as well as values for the input parameters. The processes involved in the deposition and preservation of marine organic matter include the flux of the primary produced organic matter from the sea surface to the sea floor, the burial efficiency of the material that reaches the bed, and finally the amount of total marine organic carbon that is preserved in the deposit. These three processes are commonly modelled using empirical equations, mostly derived from fits to modern data sets. A range of equations exists for each process, derived by different authors from different data sets (although older data sets are commonly included in newer derivations). This means that a range of answers can be expected when using different combinations of equations to describe marine organic carbon deposition in a given area. The input parameters for the equations describing these processes are primary productivity, water depth, sedimentation rate, and oxygen conditions at the bed. For present-day simulations, input parameters are available from measurements. This is unfortunately not the case for simulations of the geological past, but in that case they can be estimated from data measured in cores. The parameter estimation can be done using the same empirical equations as used for the process descriptions. As a range of equations exists, again a range of estimated values can be expected. Several equations and combinations of equations were used to investigate the range of answers that different approaches give.
The different equations were used in a Monte Carlo simulation of the calculation of marine organic carbon values, to estimate values of primary productivity with published core data (other input parameters were obtained from the core measurements), and to simulate the spatial distribution of marine organic matter with the forward model OF-Mod (Organic Facies Model).
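To make the point concrete, a minimal Monte Carlo of the kind described can be sketched as follows: two alternative empirical flux-versus-depth equations are sampled at random together with an uncertain primary productivity, and the spread of the resulting carbon fluxes measures the equation-choice uncertainty. The coefficients are the commonly cited fits of Suess (1980) and Betzer et al. (1984), quoted here from memory and to be checked against the originals; the function names and parameter ranges are ours, and this is not the OF-Mod implementation.

```python
import random

def flux_suess(pp, z):
    # Organic carbon flux at water depth z (m) for primary productivity
    # pp (gC/m2/yr), after Suess (1980); coefficients as commonly cited.
    return pp / (0.0238 * z + 0.212)

def flux_betzer(pp, z):
    # Alternative empirical fit after Betzer et al. (1984); same caveat.
    return 0.409 * pp ** 1.41 / z ** 0.628

def mc_flux(z, pp_range=(50.0, 250.0), n=10_000, seed=0):
    """Monte Carlo over both an uncertain input (primary productivity)
    and the choice of empirical equation; returns (min, mean, max) of
    the computed flux, i.e. the range of answers different approaches give."""
    rng = random.Random(seed)
    eqs = (flux_suess, flux_betzer)
    samples = [rng.choice(eqs)(rng.uniform(*pp_range), z) for _ in range(n)]
    return min(samples), sum(samples) / n, max(samples)
```

The same pattern extends to burial efficiency and preservation: each additional empirical relation becomes another random choice, and the output distribution widens accordingly.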
-
-
-
Numerical Modelling of Hydrothermal Fault-Related Dolomitization
Authors F. H. Nader, J. -M. Daniel, O. Lerat and B. Doligez
Classical diagenesis studies make use of a wide range of methods and analytical techniques in order to suggest conceptual models that explain specific, relatively well time-constrained diagenetic processes (such as dolomitisation) and their impacts on reservoirs. Modern techniques usually combine petrographic analyses (by means of conventional, cathodoluminescence, fluorescence, and scanning electron microscopic techniques), geochemical measurements (major/trace elements, micro-probe, stable oxygen and carbon isotopes, Sr radiogenic isotopes) and fluid inclusion analyses, providing independent arguments to support the proposed model. Still, conceptual models are qualitative and do not yield "real" data for direct use by reservoir engineers for rock-typing and geomodelling. This contribution provides new insights into numerical modelling of hydrothermal dolomitisation.
-
-
-
Challenges in the modelling of hydrocarbon systems from seismic cubes
By Ø. Sylta
Seismic data have long been used to build geologic models for basin modeling purposes. The basin models used in migration studies have typically been built as 2D section profiles (Figure 1), but over the last 10 years we have seen 3D stratigraphic geometries being built from interpreted seismic horizons. The interpretations have been depth-converted and merged into a 3D structural framework, and the "inside" of the layers has thereafter been populated with flow properties from geological libraries. These libraries will often be very elementary in their representation of the flow properties, perhaps resulting in overly simple hydrocarbon migration flow patterns in modeled basins.
-
-
-
Integration between Pore Pressure Prediction and Petroleum System Modelling Methodologies
Authors P. Sibin, M. Della Martera, M. Tonetti and C. Andreoletti
The necessity to satisfy the world's needs for oil & gas presses oil companies to drill in conditions that are getting harder and harder in terms of the geopressure environment. In exploration, pore pressure prediction is critical for the evaluation of vertical and lateral sealing, the estimation of the maximum possible column of hydrocarbons in place, and consequently for ranking prospects and evaluating their economics. Moreover, overpressures have created serious problems during drilling operations, both in the past and at the present time; for all these reasons, geopressure prediction is important for defining the best well design in order to reduce NPT, costs and reservoir damage. In order to build the appropriate model and to face this complex problem in the right way, the necessary information has to be collected, interpreted, elaborated and evaluated by several disciplines; for this kind of problem, geology, geophysics and engineering have to be strongly integrated to give their best. It has been known for several years that information about geopressure can be derived from seismic velocities, and several relationships exist and have been applied with considerable success; but generally, in complex and deep areas, conventional velocity fields derived from seismic time processing are not accurate enough to make a correct pore pressure prediction. For this reason, several methods have been developed to try to obtain more appropriate velocity fields.
-
-
-
Seismic imaging solutions by multi-geophysical measurements and joint inversion
Authors D. Colombo and T. Keho
A wide range of near surface geological features challenge seismic acquisition and processing in arid land environments: sand dunes, collapsed karsts, dry river beds, sabkhas, outcropping refractors, high velocity near surface layers, velocity reversals, layered basalts and rough topography, to cite a few. These features introduce sharp velocity changes in the vertical and horizontal directions that are difficult to model using seismic data alone (e.g. velocity inversions, karsts). As a consequence, their imprints remain in the seismic images from the surface to reservoir depths. The problems introduced by unresolved near surface velocity anomalies range from lack of seismic image quality to misidentification of prospective low-relief structures and erroneous depth conversions. Conventional statics and seismic acquisition practices often fail in areas with complex near surface conditions. Therefore, new and even unconventional approaches should be considered to address the near surface challenge. Among these, non-seismic methods such as precision gravity, shallow electromagnetics (EM) and/or electrical resistivity techniques could be effective in reconstructing near surface features correlated to seismic velocity anomalies.
-
-
-
Integration of seismic, well, potential-field and geological data for ore prospecting in the Iberian Pyrite Belt
Authors J. Carvalho, P. Sousa, J. X. Matos and C. Pinto
Ore prospecting using gravimetric and magnetic data has become one of the traditional approaches of the last decades, often complemented with electric and electromagnetic methods. However, due to the problem of non-uniqueness inherent in potential-field modelling, constraints provided by structural methods such as seismic reflection are often used. During the exploration for massive sulphide polymetallic minerals in the Figueira de Cavaleiros sector of the Iberian Pyrite Belt, located in the Sado Tertiary Basin, several gravimetric and magnetic anomalies were considered as interesting targets. In order to reduce the ambiguity of the gravimetric modelling and to confirm the geological model of the area, two seismic reflection profiles were acquired. The interpretation of these profiles was assisted by three mechanical boreholes, two of them located in the research area, in order to make a seismostratigraphic interpretation. Unfortunately, the gravimetric modelling suggests that the anomaly has a lithological and structural origin and is not related to massive sulphides. Nevertheless, a good agreement between the seismic and potential-field data was achieved, and new insights into the geological model for the region were obtained from this work, with accurate data about the Tertiary cover and Palaeozoic basement.
-
-
-
Possibilities for multidisciplinary, integrated approaches in near-surface geophysics
By R. Ghose
We show that it is possible, under certain boundary conditions, to integrate different methods or disciplines based on the underlying physics to address near-surface characterization challenges. The benefits are improved efficiency and marked enhancements in accuracy and reliability. Very divergent disciplines (e.g. small-strain seismic VS and large-strain geotechnical CPT qc) can be integrated provided there is a convexity in the property domain. It is important to reduce different observations to comparable scales. The integration approaches based on poroelasticity theories show promising results on field data even at low frequencies, and appear to be robust against noise and uncertainties in data and physical models.
-
-
-
The use of structurally coupled cooperative inversion in conjunction with cluster analysis towards a comprehensive subsurface characterization
Authors T. Günther, C. Rücker and M. Müller-Petke
The use of multiple physical principles and data is a common rule in geophysics in order to narrow the variety of possible interpretations. However, in most cases this is done at the interpretation level. A more rigorous reduction of ambiguity can be achieved by coupling at the inversion level. In order to combine data that are not directly related to each other, two main ways exist:
• use of petrophysical relations to redirect the output parameters to a common parameter set
• structural coupling of otherwise independent inversions based on the model characteristics
We use the latter, for which various approaches have been presented. Günther & Rücker (2006) used a generalized smoothness-constrained inversion scheme, and on this basis Günther & Bentley (2006) presented a structural coupling between resistivity and velocity using the gradients of the individual models. An IRLS function is used to predict weights for the model boundary based on co-located model gradients of the other method. As a result, we obtain two physical properties on the same discretisation. Further methods such as cluster analysis can be used to produce a comprehensive subsurface model. Fuzzy c-means clustering yields not only the cluster membership for each model cell; a matching function can also be derived that is of valuable help in the interpretation.
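As an illustration of the clustering step, a minimal fuzzy c-means over co-located properties might look like the sketch below (our own illustration, not the authors' implementation): each model cell carries, e.g., a log-resistivity and a velocity value from the structurally coupled inversions, and the algorithm returns fuzzy memberships rather than hard labels.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, init=None):
    """Minimal fuzzy c-means. X is (n_cells, n_properties), e.g. columns
    of log-resistivity and velocity on the common discretisation. Returns
    cluster centres (c, n_properties) and memberships U (n_cells, c)."""
    X = np.asarray(X, dtype=float)
    # initialise centres on the first c cells unless given explicitly
    centres = X[:c].copy() if init is None else np.asarray(init, dtype=float)
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        # membership update: u_ik proportional to d_ik^(-2/(m-1))
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = d ** -p
        U /= U.sum(axis=1, keepdims=True)
        # centre update: membership-weighted means with weights u^m
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
    return centres, U
```

Cells whose largest membership is well below 1 flag zones where the two property models do not jointly support a single cluster, which is where such a matching measure is most informative.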
-
-
-
Surface-Subsurface Integration Reveals Faults in Gulf of Suez Oilfields
Authors A. Laake, M. Sheneshen, C. Strobbia, L. Velasco and A. Cutts
…ling software for the surface-subsurface integration. The joint analysis of Rayleigh wave data with satellite imagery provides a near surface structural geologic model, which can be interpreted for shallow drilling risks related to fault outcrops. The suite of near surface geological products – Rayleigh wave velocity mapping, short offset ray-parameter interferometry and shallow fault mapping – is enabled by the acquisition, processing and interpretation of point-receiver seismic data. For the first time, detailed structural geology comprising faults and lithology changes was imaged in the near surface, a data regime that is conventionally contaminated by the seismic acquisition footprint.
-
-
-
From independent data to comprehensive models
Authors M. Mueller-Petke, T. Guenther and U. Yaramanci
Geophysical exploration has become more and more multi-parameter and multi-method driven over the last decades. Such data sets make it possible to obtain, connect and interpret subsurface properties more reliably. However, the potential of these data sets often goes unused, and interpretation is reduced to independent inversions. We give an overview of basic principles and differences in exploiting the potential of such data sets, and show examples using data from Magnetic Resonance Sounding (MRS) and geoelectrics.
-
-
-
In-situ permeability from physics-based integration of poroelastic reflection coefficients
Authors K. van Dalen and R. Ghose
A reliable estimate of the in-situ permeability of a porous layer in the subsurface is extremely difficult to obtain. We have observed that at the field seismic frequency band the poroelastic behaviour for different seismic wave modes can differ in such a way that their combination can give unique estimates of in-situ permeability and porosity simultaneously. We have integrated the angle- and frequency-dependent poroelastic reflection coefficients of different seismic wave modes, and have tested the results through numerical simulations. The estimated values of permeability and porosity appear to be robust against uncertainties in the employed poroelastic attenuation mechanism. Potential applications of this approach exist in hydrocarbon exploration, hydrogeology, and geotechnical engineering.
-
-
-
Recent advances and open problems in the integration of near-surface geophysical data
Authors A. Vesnaver, D. Nieto, L. Baradello, M. Romanelli and A. Vuan
The integration of different geophysical techniques is the best way to reduce the ambiguities of any single prospecting method when characterizing geobodies by their rock properties. Seismic imaging is the main tool for delineating deep targets in 3D, but its quality may increase when the near surface effects are compensated for by gravity or electromagnetic methods (den Boer et al. 2000, Dell’Aversana 2003, Colombo et al. 2008, 2010, among others). Classical refraction statics, in fact, break down when velocities are not monotonically increasing or the shallow formations are very inhomogeneous. In the last decade, new contributions have been emerging from unusual information sources such as vibrator controllers (Al-Ali et al. 2003, Ley et al. 2006) and geological maps and satellite imagery (Vesnaver et al. 2006b, 2009, Laake et al. 2008). Here we review some of these recent results and highlight a few problems that require further analysis. We also describe an ongoing experiment for expanding the bandwidth of active seismic surveys by integrating them with passive ones.
-
-
-
In-situ soil properties from transmission seismic measurements using frequency-dependent wave attributes
Authors A. Zhubayev and R. Ghose
A new concept for a physics-based integration of the velocity and attenuation of seismic waves in the shallow subsoil is proposed and tested. The theories of poroelasticity explaining frequency-dependent seismic wave propagation have been explored. The integration leads to the simultaneous estimation of two or more important soil properties under undisturbed conditions, which is otherwise difficult if not impossible to achieve. The results of application to field data look promising.
-
-
-
Static and dynamic aspects of near surface characterization through physics-based integration of GPR, ERT, SIP and SP data in the time-lapse mode
Authors G. Cassiani, A. Binley, A. Brovelli, R. Deiana, P. Dietrich, A. Flores, A. Kemna, E. Rizzo and U. Werban
The use of geophysics for the characterization of the near surface increasingly requires that data be analysed quantitatively to offer meaningful information for the specific discipline under investigation. This is true for all applications, including environmental studies, hydrology, soil science and geotechnics. This tendency effectively supersedes the classical approach to geophysics as a pure imaging technique, and requires in-depth understanding of the information contained in each specific physical measurement. Irrespective of the specific application, the geophysical response of the near surface is essentially controlled by a combination of geological (“static”) and ambient (“dynamic”) factors. The latter include moisture content and temperature variations. The separation of static and dynamic factors is the key step towards a quantitative use of near surface geophysics, as individual disciplines and applications may be interested selectively in one or more of the static or dynamic aspects, or in combinations of them. Physico-mathematical modelling is often a fundamental tool that helps to discriminate between static and dynamic aspects, extracting the factors of specific interest for the application at hand. A link between measured geophysical quantities and the corresponding quantities of practical interest can only be established in the form of quantitative constitutive relationships. As many applications can benefit from the joint application of multivariate geophysical measurements (e.g. ERT, GPR, SIP), it would be highly advantageous to develop constitutive laws that depend on a few parameters that can be independently measured and that have a common, albeit different, impact on several geophysical data sets.
In this contribution we illustrate the above general framework with a number of applications including catchment hydrology, digital soil mapping, contaminated site characterization and subsurface hydrology.
-
-
-
Seismic body and surface wave data integration for near surface characterisation
Authors L. V. Socco, D. Boiero, S. Foti and C. Piatti
Seismic methods are widely used in near surface characterisation, and very often different seismic datasets relating to body and surface waves are acquired at the same site. In the majority of cases, these data are acquired and interpreted separately to provide different information, disregarding the synergies between the methods in both acquisition and inversion. In particular, the joint or constrained inversion of different datasets may overcome the intrinsic limitations of individual techniques and provide a more reliable and consistent final velocity model. Moreover, the different information coming from different datasets provides a comprehensive site characterisation.
-
-
-
The Limits of Automatic History Matching
By B. Davies
After thirty-plus years of development of commercial reservoir simulators, and twenty-plus years of research into history matching, manual and otherwise, we continue to be surprised by what our new wells encounter in the dynamically changing subsurface, and by what our old wells produce. What are the limits of what is achievable by automatic history matching? One approach to this question is to posit the existence of an infallible automatic history matcher of some description, and then to consider what the implications would be for oilfield operational practice. The author looks at the ways in which ostensibly revolutionary technological breakthroughs are actually adopted and normalised by practicing engineers, and the long-term implications for the delivery of their early promise of savings in time, money or skilled labor. Several examples are introduced from the recent history of other information-driven industries, and from the author's field experience in the delivery and application of predictive reservoir models in the different phases of the reservoir lifecycle. In many cases, so-called "automatic" history matchers find their greatest utility not as black boxes that deliver a perfect model, but as guides to the intelligent use of more conventional manual matching techniques. What are the differences between a "perfect matcher" and a "helpful matching assistant"? Can both these design goals be achieved in a single piece of software, or is a different architectural approach required? Finally, the author speculates about the implications of these findings for the future of reservoir modelling practice, and considers how the non-specialist might be better served by the technology providers.
-
-
-
Conditioning the models with … uncertainties
By T. V. Nguyen
Geomodelling has become the principal tool for geological representation of the subsurface over the last 10-15 years. It has undergone substantial technical and commercial development and growth and, as a consequence, has also led the way to a rapid evolution of reservoir modelling and of the use of dynamic data. In particular, the contribution from geostatistical methods has been a key factor for success. Figure 1 shows the classical streamlined process from data processing/analysis and interpretation through geomodelling and reservoir simulation to the final evaluation of IHIP and production/reserves. The interesting point is that many feed-back loops exist today, not only from the reservoir model to the geomodel but also from the geomodel back to earlier data processing/analysis and interpretation steps. These feed-back loops clearly identify the need to go further and further upstream in order to better condition the final reservoir model to the field monitoring and production history data. The practice of geomodelling, and now of the feed-back loops, increases the need for team integration and a cross-discipline approach. One important reason for this need is the presence of uncertainties within the different types of data, generally due to the scarcity and quality of the acquisition. Processing and interpretation can sometimes become so difficult that only data integration can relieve the situation.
-
-
-
Time-lapse seismic provides key constraints to dynamic models
By P. Hatchell
Time-lapse seismic is one of the few technologies that provide a full-field areal picture of what is happening in the subsurface, and it is routinely used to update static and dynamic models. It is a mature technology in some parts of the world (marine + high porosity), and progress is continually being made in more difficult settings (onshore, HPHT, near infrastructure, lower porosity). Under the right conditions, time-lapse seismic is a proven method to detect and image differences due to changing fluid saturation and pore pressure inside the reservoir, and deformations, such as those related to reservoir compaction, outside the reservoir. This capability often provides information on: (i) the progress of an injected fluid front, (ii) the ingress of an aquifer, (iii) the expansion of a gas cap, (iv) gas evolved due to depletion below bubble point, and (v) the distribution of reservoir compaction. The areal and vertical resolution of this information is typically at the scale of tens of meters. This technology addresses important uncertainties in our knowledge of reservoir connectivity and heterogeneity.
-
-
-
From History to Prediction - Techniques for Conditioning Reservoir Models to Dynamic Data
Reservoir simulation models should always be built for specific business goals. It is an accepted rule that models used for production forecasts should reproduce the production history. Although most history matching processes are the result of a complex team effort, the objectives for using simulation models and the required level of detail are quite diverse. Applications range from prospect evaluation with limited available calibration data to designing detailed production planning scenarios for mature fields with highly constraining well production data. In either case, the applied techniques, workflow requirements and level of complexity will naturally differ. In recent years, assisted history matching techniques and optimization workflows have been established and included in best-practice guidelines at an increasing number of companies in the oil and gas industry. The application of assisted history matching techniques is often motivated by the need to handle increasingly complex problem statements as well as the desire to improve workflow efficiency and transparency. Initially, the focus was on finding single best models. Modelling paradigms, however, are changing. More recently, the industry has taken a stronger interest in understanding a distribution of alternative scenarios which more realistically captures the uncertainty envelope. This step is non-trivial, since there is no natural extension from the paradigm of single best history-matched models with deterministic forecasting capabilities to the paradigm of establishing a distribution of alternative production forecasts. This defines a major challenge for the reservoir engineering workflow and for the question of handling multiple models with alternative outcomes. This talk reviews selected techniques used in history matching workflows.
It discusses practical considerations for finding a compromise between “accurate” history-matched models with deterministic forecasting capabilities and the newer paradigm of sufficient coverage of the uncertainty space for establishing uncertainty distributions.
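The core idea of assisted history matching can be sketched in a few lines: perturb uncertain model parameters, run the simulator, and keep perturbations that reduce the mismatch with observed production. The toy below is a hedged illustration only; the one-parameter "simulator", the parameter name, and all numbers are invented stand-ins, and real workflows use full flow simulators with ensemble or gradient-based optimisers rather than random search.

```python
import random

def simulate(perm_multiplier):
    """Toy stand-in for a reservoir simulator: production rate over 10
    timesteps as a simple function of one uncertain parameter."""
    return [100.0 * perm_multiplier / (1.0 + 0.1 * t) for t in range(10)]

def mismatch(simulated, observed):
    """Sum of squared differences between simulated and observed rates."""
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

# Synthetic "observed" history generated with a known true value of 1.5.
observed = simulate(1.5)

# Assisted matching as a simple accept-if-better random search.
random.seed(0)
best_m = 1.0
best_err = mismatch(simulate(best_m), observed)
for _ in range(200):
    trial = best_m + random.uniform(-0.2, 0.2)
    err = mismatch(simulate(trial), observed)
    if err < best_err:
        best_m, best_err = trial, err

print(round(best_m, 2))  # converges near the true multiplier 1.5
```

Replacing the accept-if-better loop with an ensemble of starting points is what turns a single best-matched model into the distribution of alternative scenarios discussed in the abstract.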
-
-
-
The evolution of HPC and its opportunities and challenges for Seismic Imaging
By N. Bienati
One of the most important factors for the Oil & Gas industry (as for any other industry) is the ability to make predictions about the future. In particular, in this workshop we are concerned with forecasts about the future of HPC and its impact on the seismic imaging industry. Needless to say, everyone can predict that hardware performance will continue to increase, but one interesting question to address is: by how much? One reliable answer can come from the Top500 list. Indeed, Professor Hans Meuer of the University of Mannheim, one of the fathers of the Top500, has shown (Meuer, 2008) that, according to historical data, the performance of the system ranked at the bottom of the list follows a linear trend on a logarithmic scale (see Figure 1). The rate of growth is around 2x every 13 months, faster than Moore's law, which assumes 2x every 18 months. The close fit of the data to this trend gives good confidence in using it for extrapolation. The result of such extrapolation is the prediction that between 2015 and 2016 all the systems in the list will exceed a performance of 1 Petaflop/s. Likewise, it is not unreasonable to predict that this figure will become the minimum standard for all the major players in the seismic imaging industry, both oil companies and service companies. This is certainly good news for seismic imaging applications such as Reverse Time Migration, Full Waveform Inversion and Seismic Modelling, which are amongst the most compute intensive.
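The extrapolation described in this abstract is a short calculation. The sketch below assumes the 13-month doubling period cited above; the ~25 Tflop/s baseline for the last-ranked system around 2010 is an illustrative assumption, not a figure taken from the abstract.

```python
import math

def months_to_reach(target_tflops, baseline_tflops, doubling_months=13.0):
    """Months for performance to grow from baseline to target,
    assuming the Top500 trend of roughly 2x every 13 months."""
    doublings = math.log2(target_tflops / baseline_tflops)
    return doublings * doubling_months

# Assumed illustrative baseline: the #500 system circa 2010 at ~25 Tflop/s.
years = months_to_reach(1000.0, 25.0) / 12.0
print(round(years, 1))  # roughly 5-6 years, i.e. the 2015-2016 horizon
```

Because growth is exponential, the answer depends only weakly on the exact baseline: halving it adds just one more doubling period (13 months) to the estimate.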
-
-
-
Trends in Multicore Processors
By A. González
Moore's law has fueled a dramatic evolution in microprocessors and will keep doing so in forthcoming generations. Microprocessor designers have leveraged the improvements in process technology to enhance the microarchitecture of processors in different ways. In this quest to deliver higher performance, the whole industry has recently started a journey into the land of multicores. Multicores are very effective at increasing computing density, by increasing the number of processing units generation after generation. The scalability of multicore processors faces multiple challenges that will require significant innovation in applications, programming paradigms and tools, and architectures. In this talk, I will describe some of the research avenues that are being pursued to address these challenges.
-
-
-
Programming Seismic Algorithms for GPUs
Authors S. Morton, T. Cullison, I. Terentyev and S. Ma
Graphics processing units (GPUs) have been shown to be capable of efficiently running computationally demanding seismic imaging algorithms, and the recent significant increase in expenditure by the petroleum industry on GPU clusters indicates that these systems are cost effective. With this hurdle cleared, the adoption of GPUs is probably limited mainly by our ability to program seismic algorithms for GPUs. At Hess Corporation, we have moved the most computationally intensive parts of our seismic imaging codes from CPUs to GPUs over the past few years. The effort involved has varied widely from code to code, from a cost of a man-month to nearly a man-year. Our one-way wave-equation migration for GPUs is a direct port of the computational algorithm used on CPUs. The Kirchhoff code required manual optimization of many of its components. An optimized reverse-time migration library was constructed by screening a set of automatically generated kernels. In this talk we will present the computational algorithms for these seismic imaging codes and discuss our software approaches and performance results.
-
-
-
Accelerating seismic processing applications with FPGAs
By O. Pell
Microprocessors have been hitting the limits of attainable clock frequencies for the past few years, resulting in the current multi-core processor solutions provided by the major microprocessor vendors. Multiple cores on a chip must share the same pins to reach the memory system and the communication channels to other machines. This leads to a "memory wall", since the number of pins per chip does not scale with the number of cores, and a "power wall", since chips must still be cooled within the same physical space. Many geophysically important applications, such as finite-difference modeling, downward-continuation-based migration and sparse matrix solvers, already exhibit significantly worse than linear scaling on multiple cores, a problem that is only going to worsen as the major microprocessor vendors move beyond quad-core chips to many-core architectures. Maxeler streaming accelerators implemented on Field Programmable Gate Arrays (FPGAs) allow us to bypass the memory wall by minimizing access to external memory and explicitly forwarding data on-chip at very high bandwidth (over 10 TB/s on the latest chips). The high performance attainable with such architectures has been established for a range of applications (for example [1], [2], [3]). At the same time, since FPGA performance is achieved by massive parallelism at relatively low clock frequencies (hundreds of MHz), we avoid the "power wall" and can configure our FPGA-based HPC systems very densely, with accompanying savings in operational costs for power, space, maintenance, etc.
-
-
-
Seismic Imaging and HPC: how to preserve our investment and prepare for the future?
By H. Calandra
An extraordinary challenge the oil industry must face in hydrocarbon exploration is developing leading-edge technologies to reconstruct the three-dimensional structure of the Earth. The seismic imaging industry has been made possible by the growing capacity of computers to process more and more data in a shorter and shorter time; this extraordinary progress has served us for almost 40 years and will continue to do so in the years to come. The seismic imaging industry has also been made possible by tremendous progress in data acquisition technology. But again, that technology would not have been developed had we not been able to process, with the help of large HPC systems, the huge amounts of data generated by seismic acquisition. For more than 30 years, seismic reflection has been the main technology used in our industry. The physics is well known and is based on solving different approximations of the wave equation. Anticipating and taking advantage of constantly evolving computer technology, geophysicists have been able to find the numerical implementations best adapted to the computer hardware: from 2D to 3D, post- to pre-stack, asymptotic to band-limited, one-way to RTM, ray tomography to wave tomography and Full Waveform Inversion. All these evolutions closely follow the progress of HPC.
-