Geophysical Prospecting - Volume 61, Issue 4, 2013
The use of low frequencies in a full‐waveform inversion and impedance inversion land seismic case study
ABSTRACT
Velocity model building and impedance inversion generally suffer from a lack of intermediate-wavenumber content in seismic data. Intermediate wavenumbers can be retrieved directly from seismic data sets if enough low frequencies are recorded. In recent years, improvements in acquisition have made it possible to obtain seismic data with a broader frequency spectrum. To illustrate the benefits of broadband acquisition, notably the recording of low frequencies, we discuss the inversion of land seismic data acquired in Inner Mongolia, China. This data set contains frequencies in the range 1.5–80 Hz. We show that the velocity estimate based on an acoustic full-waveform inversion approach is superior to one obtained from reflection traveltime inversion, because after full-waveform inversion the background velocity conforms to the geology. We also illustrate the added value of low frequencies in an impedance estimate.
Determining the focal mechanisms and depths of relatively small earthquakes using a few stations by full‐waveform modelling
Authors Hussam Busfar and M. Nafi Toksöz
ABSTRACT
Determining the focal mechanism of earthquakes helps us to better define faults and understand the stress regime. This technique can also be applied to microseismic events in the oil and gas industry. The objective of this paper is to find the double-couple focal mechanisms, excluding scalar seismic moments, and the depths of small earthquakes using data from relatively few local stations. This objective is met by generating three-component synthetic seismograms to match the observed normalized velocity seismograms. We first calculate Green's functions for a series of depths, given an initial estimate of the earthquake's hypocentre, the locations of the seismic recording stations and a 1D velocity model of the region. We then calculate the moment tensor for different combinations of strike, dip and rake at each depth. These moment tensors are combined with the Green's functions and convolved with a source time function to produce synthetic seismograms. We use a grid search to find the synthetic seismogram that maximizes an objective function measuring the fit to all three components of the observed velocity seismogram. The corresponding parameters define the focal mechanism solution of the earthquake. We tested the method using three earthquakes in Southern California with moment magnitudes of 5.0, 5.1 and 4.4, using the frequency range 0.1–2.0 Hz. The source mechanisms of these events had been determined independently using data from a multitude of stations. Our results, obtained from as few as three stations, generally match those obtained by the Southern California Earthquake Data Center. The main advantage of this method is that we use relatively high-frequency full waveforms, including those from short-period instruments, which makes it possible to find the focal mechanism and depth of earthquakes using as few as three stations when the velocity structure is known.
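The grid search described in this abstract can be sketched in a few lines. The toy below is a hedged stand-in, not the authors' implementation: the station geometry, the random "Green's function" traces and the coarse strike/dip/rake/depth grid are all hypothetical. It builds double-couple moment tensors with the standard Aki & Richards formulas, forms synthetics as linear combinations of the Green's functions, and picks the combination maximizing a normalized cross-correlation objective, so the scalar moment cancels as in the paper.

```python
import numpy as np

def moment_tensor(strike, dip, rake):
    """Double-couple moment tensor (unit scalar moment), Aki & Richards convention,
    returned as the 6-vector (Mxx, Myy, Mzz, Mxy, Mxz, Myz)."""
    f, d, l = np.radians([strike, dip, rake])
    return np.array([
        -(np.sin(d) * np.cos(l) * np.sin(2 * f) + np.sin(2 * d) * np.sin(l) * np.sin(f) ** 2),
        np.sin(d) * np.cos(l) * np.sin(2 * f) - np.sin(2 * d) * np.sin(l) * np.cos(f) ** 2,
        np.sin(2 * d) * np.sin(l),
        np.sin(d) * np.cos(l) * np.cos(2 * f) + 0.5 * np.sin(2 * d) * np.sin(l) * np.sin(2 * f),
        -(np.cos(d) * np.cos(l) * np.cos(f) + np.cos(2 * d) * np.sin(l) * np.sin(f)),
        -(np.cos(d) * np.cos(l) * np.sin(f) - np.cos(2 * d) * np.sin(l) * np.cos(f)),
    ])

rng = np.random.default_rng(0)
depths = [4.0, 6.0, 8.0]          # hypothetical trial depths [km]
# Stand-in Green's functions: (station, component, tensor element, time sample)
G = {z: rng.standard_normal((3, 3, 6, 64)) for z in depths}

def synthetic(strike, dip, rake, depth):
    # Combine Green's functions with the moment tensor (convolution with a
    # source time function is folded into the Green's functions here).
    return np.tensordot(G[depth], moment_tensor(strike, dip, rake), axes=([2], [0]))

def ncc(obs, syn):
    """Normalized cross-correlation objective: insensitive to the scalar moment."""
    return np.sum(obs * syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))

obs = synthetic(40, 60, -90, 6.0)  # "observed" data from a known true mechanism
best, best_fit = None, -np.inf
for z in depths:
    for s in range(0, 360, 20):
        for dp in range(10, 91, 10):
            for r in range(-90, 91, 30):
                c = ncc(obs, synthetic(s, dp, r, z))
                if c > best_fit:
                    best_fit, best = c, (s, dp, r, z)
```

Note the intrinsic double-couple ambiguity: the auxiliary fault plane produces an identical moment tensor, so the recovered (strike, dip, rake) should be compared at the tensor level.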
Water‐bottom multiple attenuation by Kirchhoff extrapolation
Authors Emmanuel Spadavecchia, Vincenzo Lipari, Nicola Bienati and Giuseppe Drufuca
ABSTRACT
Despite being less general than 3D surface-related multiple elimination (3D-SRME), multiple prediction based on wavefield extrapolation can still be of interest because it is less CPU- and I/O-demanding than 3D-SRME and does not require any prior data regularization. Here we propose a fast implementation of water-bottom multiple prediction that uses the Kirchhoff formulation of wavefield extrapolation. With wavefield extrapolation, multiple prediction is usually obtained through the cascade of two extrapolation steps. By applying Fermat's principle (i.e., minimum reflection traveltime), we show that the cascade of two operators can be replaced by a single approximate extrapolation step. The approximation holds as long as the water bottom is not too complex. The proposed approach has proved to work well on synthetic and field data when the water bottom is such that wavefront triplications are negligible, as is the case in many practical situations.
Spectral sparse Bayesian learning reflectivity inversion
Authors Sanyi Yuan and Shangxu Wang
ABSTRACT
A spectral sparse Bayesian learning reflectivity inversion method, combining spectral reflectivity inversion with sparse Bayesian learning, is presented in this paper. The method retrieves a sparse reflectivity series by sequentially adding, deleting or re-estimating hyper-parameters, without pre-setting the number of non-zero reflectivity spikes. The spikes with the largest amplitude are usually the first to be resolved. The method is tested on a series of data sets, including synthetic data, physical modelling data and field data sets. The results show that the method can identify thin beds below tuning thickness and highlight stratigraphic boundaries. Moreover, the reflectivity series, which is inverted trace-by-trace, preserves the lateral continuity of layers.
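The flavour of sparse Bayesian learning for reflectivity can be illustrated with a minimal evidence-maximization loop. This sketch uses generic MacKay-style hyper-parameter updates on a time-domain convolution model, not the authors' sequential add/delete/re-estimate algorithm or their spectral formulation; the wavelet, spike positions and noise precision are hypothetical. A per-sample precision alpha grows without bound for irrelevant samples, pruning them and leaving a sparse spike series.

```python
import numpy as np

def ricker(f0=30.0, dt=0.004, half=16):
    t = np.arange(-half, half + 1) * dt
    a = (np.pi * f0 * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def conv_matrix(w, n):
    """Columns are shifted copies of the wavelet, so W @ m convolves m with w."""
    W = np.zeros((n + len(w) - 1, n))
    for j in range(n):
        W[j:j + len(w), j] = w
    return W

def sbl_reflectivity(W, d, beta=1e4, n_iter=100):
    """Minimal sparse Bayesian learning: fixed noise precision beta, MacKay
    updates for the per-coefficient precisions alpha (large alpha => pruned)."""
    n = W.shape[1]
    alpha = np.ones(n)
    WtW, Wtd = W.T @ W, W.T @ d
    mu = np.zeros(n)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(beta * WtW + np.diag(alpha))
        mu = beta * Sigma @ Wtd                     # posterior mean reflectivity
        gamma = np.clip(1.0 - alpha * np.diag(Sigma), 1e-12, None)
        alpha = np.minimum(gamma / (mu ** 2 + 1e-12), 1e12)
    return mu

n = 60
m_true = np.zeros(n)
m_true[[15, 19, 40]] = [1.0, -0.6, 0.8]   # two spikes below tuning, one isolated
W = conv_matrix(ricker(), n)
d = W @ m_true                            # noise-free synthetic trace
m = sbl_reflectivity(W, d)
```

The closely spaced pair at samples 15 and 19 mimics the thin-bed case discussed in the abstract: the sparsity prior resolves interfering reflections that a band-limited least-squares solution would smear.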
Non‐linear prestack seismic inversion with global optimization using an edge‐preserving smoothing filter
Authors Yan Zhe and Gu Hanming
ABSTRACT
Estimating elastic parameters from prestack seismic data remains a subject of interest for the exploration and development of hydrocarbon reservoirs. In geophysical inverse problems, data and models are in general non-linearly related. Linearized inversion methods often have the disadvantage of strong dependence on the initial model: when the initial model is far from the global minimum, the inversion iteration is likely to converge to a local minimum. This problem can be avoided by using global optimization methods.
In this paper, we implemented and tested a prestack seismic inversion scheme based on a quantum‐behaved particle swarm optimization (QPSO) algorithm aided by an edge‐preserving smoothing (EPS) operator. We applied the algorithm to estimate elastic parameters from prestack seismic data. Its performance on both synthetic data and real seismic data indicates that QPSO optimization with the EPS operator yields an accurate solution.
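A minimal QPSO kernel conveys the core update rule. This is a generic sketch following the standard quantum-behaved PSO update (particles sampled around attractors drawn between personal and global bests, with a contraction-expansion coefficient), applied to a toy quadratic objective rather than the paper's prestack elastic objective, and without the EPS operator; all parameter choices are hypothetical.

```python
import numpy as np

def qpso(f, bounds, n_particles=30, n_iter=300, seed=0):
    """Minimal quantum-behaved particle swarm optimization (QPSO) sketch."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    pbest = x.copy()
    pval = np.array([f(xi) for xi in x])
    g = pbest[np.argmin(pval)].copy()
    for it in range(n_iter):
        beta = 1.0 - 0.5 * it / n_iter            # contraction-expansion, 1.0 -> 0.5
        mbest = pbest.mean(axis=0)                # mean of personal bests
        phi = rng.uniform(size=(n_particles, dim))
        p = phi * pbest + (1 - phi) * g           # local attractor per particle
        u = rng.uniform(1e-12, 1.0, size=(n_particles, dim))
        sign = np.where(rng.uniform(size=(n_particles, dim)) < 0.5, -1.0, 1.0)
        x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        x = np.clip(x, lo, hi)
        val = np.array([f(xi) for xi in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pval)].copy()
    return g, pval.min()

# Toy usage: minimize a 3-D sphere function on [-5, 5]^3.
sphere = lambda v: float(np.sum(v ** 2))
best_x, best_val = qpso(sphere, [(-5.0, 5.0)] * 3)
```

Because QPSO needs no gradient, the objective can be any misfit between observed and modelled prestack gathers, which is what makes it attractive for the non-linear AVO problem described above.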
Suppressing non‐Gaussian noises with scaled receiver wavefield for reverse‐time migration: comparison of different approaches
Authors Yi Tao and Mrinal K. Sen
ABSTRACT
Numerical implementation of the gradient of the cost function in gradient-based full-waveform inversion (FWI) is essentially a migration operator, as used in wave-equation migration. In FWI, minimizing different data-residual norms results in different weighting strategies for the data residuals at the receiver locations prior to back-propagation into the medium. In this paper, we propose different scaling methods for the receiver wavefield and compare their performance. Using time-domain reverse-time migration (RTM), we show that, compared to conventional algorithms, this type of scaling can significantly suppress non-Gaussian noise, i.e., outliers. Our tests also show that scaling the receiver wavefield by its absolute norm produces better results than the other approaches.
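The effect of re-weighting residuals before back-propagation can be illustrated numerically. The weightings below (raw L2 residual, L1 sign, a Huber-style hybrid, and per-sample scaling by absolute value) are generic stand-ins for the kinds of norms compared in the paper, not necessarily the authors' exact scalings:

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.standard_normal(200) * 0.1       # Gaussian residuals at the receivers
r[50] = 25.0                             # one non-Gaussian outlier (e.g., a spike)

w_l2 = r                                 # L2 norm: raw residual is back-propagated
w_l1 = np.sign(r)                        # L1 norm: only the sign survives
w_hyb = r / np.sqrt(1.0 + r ** 2)        # Huber/hybrid-style bounded weighting
w_abs = r / (np.abs(r) + 1e-8)           # per-sample absolute-value scaling (~ sign)

def infl(w):
    """Relative influence of the outlier sample on the adjoint source."""
    return np.abs(w[50]) / np.abs(w).sum()
```

Under the L2 norm the single outlier dominates the adjoint source and hence the migration image, while sign-based and bounded weightings cap its influence, which is the mechanism behind the noise suppression reported above.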
L1 norm inversion method for deconvolution in attenuating media
ABSTRACT
To achieve good pulse compression, the conventional spike deconvolution method requires that the wavelet be stationary. This requirement is never met in practice, since the seismic wave always suffers high-frequency attenuation and dispersion as it propagates in real materials. The data therefore need to pass through some kind of inverse-Q filter. Most methods attempt to correct the attenuation effect by applying greater gains to the high-frequency components of the signal; the problem with this procedure is that it generally boosts high-frequency noise. To deal with this problem, we present a new inversion method designed to estimate the reflectivity function in attenuating media. The key feature of the proposed method is the use of the least absolute error (L1 norm) to define both the data error and the model error in the objective functional. The L1 norm is more immune to noise than the usual L2 norm, especially when the data are contaminated by discrepant sample values. When used to define the model error in the regularization of the inverse problem, it also favours sparse reflectivity and increases resolution, since efficient pulse compression is attained. Tests on synthetic and real data demonstrate the efficacy of the method in raising the resolution of the seismic signal without boosting its noise component.
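An L1-L1 objective of this kind can be minimized with iteratively reweighted least squares (IRLS). The sketch below is a generic stand-in for the paper's method: it uses a stationary wavelet for brevity (the paper's operator would incorporate attenuation and dispersion through a Q model), and the wavelet, spike positions and regularization weight are made up. It does, however, show the characteristic immunity to a discrepant data sample.

```python
import numpy as np

def conv_matrix(w, n):
    """Columns are shifted copies of the wavelet, so W @ m convolves m with w."""
    W = np.zeros((n + len(w) - 1, n))
    for j in range(n):
        W[j:j + len(w), j] = w
    return W

def irls_l1(W, d, lam=0.01, eps=1e-3, n_iter=50):
    """Minimize |W m - d|_1 + lam |m|_1 by iteratively reweighted least squares."""
    m = np.linalg.lstsq(W, d, rcond=None)[0]       # L2 solution as starting point
    for _ in range(n_iter):
        Wd = 1.0 / (np.abs(W @ m - d) + eps)       # L1 data-error weights
        Wm = lam / (np.abs(m) + eps)               # L1 sparsity weights on the model
        m = np.linalg.solve((W.T * Wd) @ W + np.diag(Wm), (W.T * Wd) @ d)
    return m

t = np.arange(-16, 17) * 0.004                     # hypothetical 4 ms sampling
w = (1 - 2 * (np.pi * 30 * t) ** 2) * np.exp(-(np.pi * 30 * t) ** 2)  # 30 Hz Ricker
n = 80
m_true = np.zeros(n)
m_true[[20, 45, 60]] = [1.0, -0.7, 0.5]
W = conv_matrix(w, n)
d = W @ m_true
d[30] += 5.0                                       # discrepant sample (non-Gaussian noise)
m = irls_l1(W, d)
```

The L1 data weights shrink toward zero at the corrupted sample, so the inversion simply refuses to fit it, whereas a least-squares fit would smear the outlier into the reflectivity estimate.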
An automated cross‐correlation based event detection technique and its application to a surface passive data set
Authors Farnoush Forghani-Arani, Jyoti Behura, Seth S. Haines and Mike Batzle
ABSTRACT
In studies of heavy oil, shale reservoirs, tight gas and enhanced geothermal systems, the use of surface passive seismic data to monitor microseismicity induced by fluid flow in the subsurface is becoming more common. However, passive seismic records typically span days to months of data, and analysing them manually can be expensive and inaccurate. Moreover, in the presence of noise, detecting the arrival of weak microseismic events becomes challenging. Hence, an automated, accurate and computationally fast technique for event detection in passive seismic data is essential. The conventional automatic event-identification algorithm computes a running-window energy ratio of the short-term average to the long-term average of the passive seismic data for each trace. We show that for the common case of a low signal-to-noise ratio in surface passive records, the conventional method is not sufficiently effective at event identification. Here, we extend the conventional algorithm by introducing a technique based on the cross-correlation of the energy ratios computed by the conventional method. With our technique we can measure the similarity of the computed energy ratios across different traces. Our approach succeeds in improving the detectability of events with a low signal-to-noise ratio that are not detectable with the conventional algorithm. Our algorithm also has the advantage of identifying whether an event is common to all stations (a regional event) or to a limited number of stations (a local event). We provide examples of applying our technique to synthetic data and to a field surface passive data set recorded at a geothermal site.
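The conventional detector and the idea of combining energy ratios across traces can be sketched as follows. The multiplicative stack used here is only a simple zero-lag proxy for the paper's cross-correlation of energy ratios, and the geometry, noise level and event parameters are invented for illustration:

```python
import numpy as np

def energy_ratio(trace, ns=10, nl=100):
    """Conventional STA/LTA: short-term over long-term average of trace energy.
    Both windows trail and end on the same sample (index j ends at j + nl - 1)."""
    c = np.concatenate([[0.0], np.cumsum(trace ** 2)])
    sta = (c[ns:] - c[:-ns]) / ns
    lta = (c[nl:] - c[:-nl]) / nl
    n = len(lta)
    return sta[-n:] / (lta + 1e-12)

rng = np.random.default_rng(2)
nt, n_traces = 1000, 5
traces = rng.standard_normal((n_traces, nt))           # low-SNR surface noise
event = 3.0 * np.sin(2 * np.pi * np.arange(30) / 10)   # weak event on all traces
traces[:, 400:430] += event
traces[2, 150] += 30.0                                 # strong glitch on one trace only

ratios = np.array([energy_ratio(tr) for tr in traces])
stack = np.prod(ratios, axis=0) ** (1.0 / n_traces)    # rewards inter-trace similarity
peak = np.argmax(stack)
```

On any single trace the STA/LTA detector triggers hardest on the local glitch, but the cross-trace stack peaks only where the energy ratios are elevated on all traces at once, which is how the extension separates genuine (regional) events from single-station (local) disturbances.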
Consistent joint elastic‐electrical differential effective‐medium modelling of compacting reservoir sandstones
Authors Erling Hugo Jensen, Leiv-J. Gelius, Tor Arne Johansen and Zhong Wang
ABSTRACT
Improved reservoir characterization and monitoring can be achieved by combining seismic and controlled-source electromagnetic techniques. This requires developing coherent rock physics descriptions. In this paper we demonstrate consistent joint elastic-electrical modelling according to the differential effective-medium theory. We test our modelling against data from a compaction experiment on a set of 11 sandstone core samples from the same quarry location. The presented approach is analogous to calibrating a rock physics model to a particular reservoir based on data from available well logs and core samples. For simplicity we use multivariable non-linear regression in the inversion; this technique is able to identify solutions that are physically sound, although a more rigorous inversion method might be considered in future implementations. To identify the critical parameters, we test the elastic-electrical sensitivity to the various unknown variables involved. The most sensitive parameters identified are then perturbed during the modelling. The mineralogy consists mainly of quartz, which we assume to be spherical, and kaolinite. We use the resistivity to calibrate the aspect ratio of the clay grains and to estimate the porosity reduction due to compaction. These values are in turn used in inverse modelling of the bulk and shear moduli. The solid minerals make up the inclusion material in the differential effective-medium modelling for both the elastic and electrical properties; hence, this formulation constitutes a consistent joint elastic-electrical modelling scheme. We achieve good fits between the model results and the laboratory measurements for most of the samples. The poorer fit for some of the samples may be due to measurement errors in the laboratory, which is supported by the abnormal stiffness-compaction trends observed for those samples.
An estimation method for effective stress changes in a reservoir from 4D seismics data
Authors Alejandro Garcia and Colin MacBeth
ABSTRACT
Advances in seismic acquisition and processing and the widespread use of 4D seismics have made reliable production-induced subsurface deformation data available in the form of overburden time-shifts. Inversion of these data is now beginning to be used as an aid to monitoring a reservoir's effective stress. Past solutions to this inversion problem have relied upon analytic calculations for an unrealistically simplified subsurface, which can lead to uncertainties. To enhance the accuracy of this approach, a method based on transfer functions is proposed, in which the function itself is calibrated using numerically generated overburden strain deformation calculated for a small, select group of reference sources. This technique proves to be a good compromise between the slower but more accurate history match of the overburden strain using a geomechanical simulator and the faster, less accurate analytic method. Synthetic tests using a coupled geomechanical and fluid-flow simulator for the South Arne field confirm the efficacy of the method. Application to measured time-shifts from observed 4D seismics indicates compartmentalization in the Tor reservoir, more heterogeneity than is currently considered in the simulation model and moderate connectivity with the overlying Ekofisk formation.
A statistical review of mudrock elastic anisotropy
By S.A. Horne
ABSTRACT
Mudrocks, defined as fine-grained siliciclastic sedimentary rocks such as siltstones, claystones, mudstones and shales, are often anisotropic due to lamination and the microscopic alignment of clay platelets. The resulting elastic anisotropy is non-negligible for many applications in the earth sciences, such as wellbore stability, well stimulation and seismic imaging. Anisotropic elastic properties reported in the open literature have been compiled and statistically analysed. Correlations between elastic parameters are observed, which will be useful in the typical case where only limited information on a rock's elastic properties is known. For example, the highest degree of correlation is observed between the horizontal elastic stiffnesses C11 and C66. The results of the statistical analysis are generally consistent with prior observations. In particular, Thomsen's ε and γ parameters are almost always positive, ε and γ are well correlated, δ is most frequently small and ε is generally larger than δ. These observations suggest that the typical range of mudrock elastic properties spans a subspace of lower dimension than the five elastic constants required to fully define a vertically transverse isotropic (VTI) medium. Principal component analysis confirms this, showing that four principal components suffice to span the space of observed elastic parameters.
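For reference, Thomsen's parameters are simple combinations of the five VTI stiffnesses, which is the form in which the compiled data above would be compared. A direct transcription of the standard Thomsen (1986) definitions (stiffnesses in any consistent units; the ratios are unit-free):

```python
import numpy as np

def thomsen(c11, c33, c44, c66, c13):
    """Thomsen (1986) anisotropy parameters of a VTI medium."""
    epsilon = (c11 - c33) / (2.0 * c33)
    gamma = (c66 - c44) / (2.0 * c44)
    delta = ((c13 + c44) ** 2 - (c33 - c44) ** 2) / (2.0 * c33 * (c33 - c44))
    return epsilon, gamma, delta

# Sanity check: an isotropic medium (c11 = c33, c66 = c44, c13 = c33 - 2*c44)
# must give epsilon = gamma = delta = 0.
iso = thomsen(40.0, 40.0, 10.0, 10.0, 20.0)
```

The observation that ε and γ are almost always positive corresponds to c11 > c33 and c66 > c44, i.e., horizontally propagating waves being faster than vertically propagating ones, as expected for bedding-parallel lamination.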
Quantitative geophysical pore‐type characterization and its geological implication in carbonate reservoirs
Authors Luanxiao Zhao, Mosab Nasser and De-hua Han
ABSTRACT
This paper addresses two questions in carbonate reservoir characterization: how to characterize the pore-type distribution quantitatively from well observations and seismic data based on a geological understanding of the reservoir, and what geological implications stand behind the pore-type distribution in carbonate reservoirs. To answer these questions, three geophysical pore types (reference pores, stiff pores and cracks) are defined to represent the average effective elastic properties of complex pore structures. The variability of elastic properties in carbonates can then be quantified using a rock physics scheme associated with different volume fractions of the geophysical pore types. We also explore the likely geological processes in carbonates based on the proposed rock physics template. The pore-type inversion result from well-log data fits well with the pore geometry revealed by an FMI log and core information. Furthermore, the S-wave prediction based on the pore-type inversion result shows better agreement than the Greenberg-Castagna relationship, suggesting the potential of this rock physics scheme to characterize porosity heterogeneity in carbonate reservoirs. We also apply an inversion technique to quantitatively map the geophysical pore-type distribution from a 2D seismic data set over a carbonate reservoir offshore Brazil. The spatial distributions of the geophysical pore types contain clues about the geological history that overprinted these rocks. We therefore analyse how the likely geological processes redistribute the pore space of the reservoir rock from the initial depositional porosity and, in turn, how they impact reservoir quality.
The marine controlled source electromagnetic response of a steel borehole casing: applications for the NEPTUNE Canada gas hydrate observatory
Authors Andrei Swidinsky, R. Nigel Edwards and Marion Jegen
ABSTRACT
Gas hydrates are a potential energy resource, a possible factor in climate change and an exploration geohazard. The University of Toronto has deployed a permanent seafloor time-domain controlled-source electromagnetic (CSEM) system offshore Vancouver Island, within the framework of the NEPTUNE Canada underwater cabled observatory. Hydrates are known to be present in the area and, due to their electrically resistive nature, can be monitored by five permanent electric-field receivers. However, two cased boreholes may be drilled near the CSEM site in the near future. To understand any potential distortion of the electric fields due to the metal, we model the marine electromagnetic response of a conductive steel borehole casing. First, we consider the commonly used canonical model consisting of a 100 Ωm, 100 m thick resistive hydrocarbon layer embedded at a depth of 1000 m in a 1 Ωm conductive host medium, with the addition of a typical steel production casing extending from the seafloor to the resistive zone. Results show that in both the frequency and time domains the distortion produced by the casing occurs at smaller transmitter-receiver offsets than those required to detect the resistive layer. Second, we consider the experimentally determined model of the offshore Vancouver Island hydrate zone, consisting of a 5.5 Ωm, 36 m thick hydrate layer overlying a 0.7 Ωm sedimentary half-space, with the addition of two borehole casings extending 300 m into the seafloor. In this case, results show that the distortion produced by casings located within a 100 m safety zone of the CSEM system will be measurable at four of the five receivers. We conclude that the boreholes must be positioned at least 200 m away from the CSEM array to minimize the effects of the casings.
Fast mapping of magnetic basement depth, structure and nature using aeromagnetic and gravity data: combined methods and their application in the Paris Basin
Authors G. Martelet, J. Perrin, C. Truffert and J. Deparis
ABSTRACT
Assessment of deeply buried basin/basement relationships using geophysical data is a challenge for the energy and mining industries, as well as for geothermal and CO2 storage purposes. In deep environments, few methods can provide geological information; magnetic and gravity data remain among the most informative and cost-effective. Here, in order to derive fast first-order information on the basement/basin interface, we propose a combination of existing and original approaches to potential-field data analysis. Specifically, we investigate the geometry (i.e., depth and structure) and the nature of a deeply buried basement through a case study SW of the Paris Basin. Joint processing of new high-resolution magnetic data and up-to-date gravity data provides an updated overview of the deep basin.
First, the main structures of the magnetic basement are highlighted using Euler deconvolution and are interpreted in a structural sketch map. The new high‐resolution aeromagnetic map actually offers a continuous view of regional basement structures and reveals poorly known and complex deformation at the junction between major domains of the Variscan collision belt.
Second, Werner deconvolution and an ad hoc post‐processing analysis allow the extraction of a set of magnetic sources at (or close to) the basin/basement interface. Interpolation of these sources together with the magnetic structural sketch provides a Werner magnetic basement map displaying realistic 3D patterns and basement depths consistent with data available in deep petroleum boreholes.
The last step of processing was designed as a way to quickly combine gravity and magnetic information and to simply visualize first‐order petrophysical patterns of the basement lithology. This is achieved through unsupervised classification of suitably selected gravity and magnetic maps and, as compared to previous work, provides a realistic and updated overview of the cartographic distribution of density/magnetization of basement rocks.
Altogether, the three processing steps proposed in this paper quickly provide relevant information on the structure, geometry and nature (through petrophysics) of a deeply buried basement. Nevertheless, the proposed procedure has limitations: in the case of the Paris Basin, for instance, this study does not provide proper information on pre-Mesozoic basins, some of which have been sampled in deep boreholes.
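The Euler-deconvolution step used in the first stage of the workflow can be illustrated on a toy profile. The source below is hypothetical: a compact body whose anomaly amplitude falls off as 1/r^3 (structural index N = 3, the dipole-like case), observed along a surface profile with analytic derivatives. Real processing would apply windowed Euler solutions to gridded field data with numerically computed gradients.

```python
import numpy as np

# Hypothetical compact magnetic source at x0 = 2.0 km, depth z0 = 1.5 km.
x0, z0, C, N = 2.0, 1.5, 100.0, 3.0
x = np.linspace(-10.0, 10.0, 401)       # observation profile at the surface z = 0
dx, dz = x - x0, -z0                    # source-to-observation offsets
r2 = dx ** 2 + dz ** 2
T = C * r2 ** -1.5                      # anomaly, homogeneous of degree -N
Tx = -3.0 * C * dx * r2 ** -2.5         # horizontal derivative
Tz = -3.0 * C * dz * r2 ** -2.5         # vertical derivative

# Euler's homogeneity equation: (x - x0) Tx + (z - z0) Tz = -N T   (with z = 0).
# Rearranged per observation point:   x0 * Tx + z0 * Tz = x * Tx + N * T
A = np.column_stack([Tx, Tz])
b = x * Tx + N * T
x0_est, z0_est = np.linalg.lstsq(A, b, rcond=None)[0]
```

Each observation point contributes one linear equation in the unknown source coordinates, so the over-determined system pins down both the lateral position and the depth of the magnetic basement source.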
The variable projection method for waveform inversion with an unknown source function
ABSTRACT
This paper compares three alternative algorithms for estimating a source wavelet simultaneously with an earth model in full-waveform inversion: (i) simultaneous descent, (ii) alternating descent and (iii) descent with the variable projection method. The latter is a technique for solving separable least-squares problems that is well known in the applied mathematics literature. When applied to full-waveform inversion, it involves making the source wavelet an implicit function of the earth model via a least-squares filter-estimation process. Since the source wavelet becomes purely a function of the medium parameters, it no longer needs to be treated as a separate unknown in the inversion. Essentially, the predicted data are projected onto the measured data in a least-squares sense at every function evaluation, exploiting the fact that the filter-estimation problem is trivial compared to the full-waveform inversion problem. Numerical tests on a simple 1D model indicate that the variable projection method gives the best result, producing results whose quality is very similar to that of control experiments with a known, correct wavelet.
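The filter-estimation step that makes the wavelet an implicit function of the model is just a linear least-squares solve. A minimal sketch, with a toy 1D impulse response standing in for the earth model's predicted data (the spike positions, wavelet and filter length are all hypothetical):

```python
import numpy as np

def conv_matrix(g, nw):
    """Columns are shifted copies of g, so G @ w convolves a length-nw filter w with g."""
    G = np.zeros((len(g) + nw - 1, nw))
    for j in range(nw):
        G[j:j + len(g), j] = g
    return G

def projected_misfit(g, d, nw):
    """Variable projection: eliminate the wavelet by least squares, so the misfit
    depends on the model only through its impulse response g."""
    G = conv_matrix(g, nw)
    w = np.linalg.lstsq(G, d, rcond=None)[0]   # best-fit source wavelet for this model
    r = G @ w - d
    return 0.5 * float(r @ r), w

# Toy example: "observed" data from a true impulse response and a true wavelet.
g_true = np.zeros(50)
g_true[[5, 17, 30]] = [1.0, -0.8, 0.5]
t = np.linspace(-1, 1, 21)
w_true = (1 - 2 * (5 * t) ** 2) * np.exp(-(5 * t) ** 2)   # Ricker-like pulse
d = np.convolve(g_true, w_true)

f_true, w_est = projected_misfit(g_true, d, len(w_true))  # correct model: misfit ~ 0
g_wrong = g_true.copy()
g_wrong[17] = -0.3                                        # perturbed model
f_wrong, _ = projected_misfit(g_wrong, d, len(w_true))
```

Because the inner solve is exact, the descent over medium parameters only ever sees the projected misfit, which is what distinguishes variable projection from simultaneous or alternating updates of model and wavelet.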
Research note: Seismic attenuation due to wave‐induced fluid flow at microscopic and mesoscopic scales
Authors J. Germán Rubino and Klaus Holliger
ABSTRACT
Wave-induced fluid flow at microscopic and mesoscopic scales arguably constitutes the major cause of intrinsic seismic attenuation throughout the exploration seismic and sonic frequency ranges. The quantitative analysis of these phenomena is, however, complicated by the fact that the governing physical processes may be interdependent. The reason is that the presence of microscopic heterogeneities, such as micro-cracks or broken grain contacts, causes the stiffness of the so-called modified dry frame to be complex-valued and frequency-dependent, which in turn may affect the viscoelastic behaviour in response to fluid flow at mesoscopic scales. In this work, we propose a simple but effective procedure to estimate the seismic attenuation and velocity-dispersion behaviour associated with wave-induced fluid flow due to both microscopic and mesoscopic heterogeneities, and we discuss the results obtained for a range of pertinent scenarios.
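How a complex-valued, frequency-dependent stiffness translates into attenuation and dispersion can be shown with the simplest viscoelastic element. The standard linear solid (Zener) modulus below is a generic stand-in for the note's "modified dry frame", with made-up relaxation times and modulus; the attenuation is 1/Q = Im(M)/Re(M), peaking at the angular frequency 1/sqrt(tau_s * tau_e).

```python
import numpy as np

# Standard linear solid (Zener) complex modulus -- hypothetical parameters.
M0 = 10e9                                  # relaxed modulus [Pa]
tau_s, tau_e = 1.0e-3, 1.2e-3              # stress / strain relaxation times [s]
w = 2 * np.pi * np.logspace(0, 5, 500)     # angular frequencies over 5 decades

M = M0 * (1 + 1j * w * tau_e) / (1 + 1j * w * tau_s)
Qinv = M.imag / M.real                     # attenuation 1/Q
v = np.sqrt(M.real)                        # dispersion trend (unit density proxy)

w_peak = w[np.argmax(Qinv)]                # should sit near 1/sqrt(tau_s * tau_e)
```

The same recipe applies when M(w) comes from a mesoscopic fluid-flow model instead of a Zener element, which is why a complex microscopic frame stiffness feeds directly into the mesoscopic attenuation estimate discussed in the note.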