Geophysical Prospecting - Online First
1 - 50 of 92 results
A comparison of shot‐encoding schemes for wave‐equation migration
Authors: Jeff Godwin and Paul Sava
Available online: 03 May 2013
ABSTRACT: In the last decade the seismic imaging industry has begun collecting data volumes with a substantial amount of data redundancy through new acquisition geometries, including wide‐azimuth, rich‐azimuth and full‐azimuth geometries. The increased redundancy significantly improves image quality in areas with complex geology, but requires considerably greater computational power to construct an image because of the additional data and the need to use advanced imaging algorithms. One way to reduce the computational cost of processing such datasets is to blend shot‐records together, using shot‐encoding, prior to imaging, which reduces the number of migrations necessary for imaging. The downside is that blending introduces strong, non‐physical cross‐talk noise into the final image. By carefully choosing the shot‐encoding scheme, we can reduce the noise inserted into the image while maximally reducing the number of migrations necessary. We describe a theory of blended imaging that explains all shot‐encoding schemes, and use the theory to design a new class of encodings that use amplitude weights instead of phase‐shifts or time‐delays. We are able to use amplitude encoding to produce blended images of the same quality as previous encoding schemes at a similar computational cost. Furthermore, we compare the results of amplitude encoding with well‐known shot‐encoding schemes from previous work, including plane‐wave migration, random time‐delay migration, modulated‐shot migration and decimated shot‐record migration. In our comparison, we find that plane‐wave migration is in many ways an optimal shot‐encoding scheme. However, we find that plane‐wave migration produces results that are comparable to decimated shot‐record migration when the total cost of imaging is taken into account, thereby calling into question the utility of shot‐encoding in general.
Overall, this work questions the potential of shot‐encoding in standard (shot‐record) seismic imaging because, given the quality of the blended image compared to decimated shot‐record migration, blended imaging does not appear to sufficiently reduce the cost of imaging.
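The amplitude-weight encoding discussed in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation; the array shapes and the random ±1 code are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 8 shot records, each with 4 traces of 16 time samples.
n_shots, n_traces, n_samples = 8, 4, 16
shots = rng.standard_normal((n_shots, n_traces, n_samples))

def blend_amplitude(shots, weights):
    """Blend shot records into one supergather using amplitude weights
    (instead of the phase shifts or time delays of classical encodings)."""
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return (w * shots).sum(axis=0)

# One possible amplitude code: a random +/-1 weight per shot. A single
# migration of `blended` then stands in for n_shots individual migrations,
# at the price of cross-talk between the encoded shots.
weights = rng.choice([-1.0, 1.0], size=n_shots)
blended = blend_amplitude(shots, weights)
```

The choice of the weight vector is the whole design space here: different weight families trade cross-talk noise against the number of blended migrations required.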
Elastic moduli and the aspect ratio spectrum of rock using simulated annealing
Authors: Satoshi Izumotani and Shigenobu Onozuka
Available online: 17 April 2013
ABSTRACT: We propose a new method for estimating the pore volume concentrations associated with inclusions of different aspect ratios, as well as the rock matrix and pore fluid moduli, using very fast simulated annealing. We use the Kuster and Toksöz effective modulus formulations as a forward model that takes the pore shapes into consideration.
To validate this method, we first estimated the model parameters, then calculated the P‐ and S‐wave velocities as a function of pressure under dry and saturated conditions, and finally compared the calculated velocities with measured ultrasonic velocities of sandstone, limestone and granite. The calculated velocities fitted the measured velocities well. Furthermore, we verified the calculated bulk and shear moduli of the rock matrix. As these moduli were consistent with the results of other experiments and were almost the same as those obtained by the inversion method, we believe that this method can satisfactorily calculate the moduli.
Next, we conducted the optimization under several moduli settings. We obtained the best results using an independent shear modulus under saturated conditions. This result indicates that the shear modulus of sandstone varies according to the fluid in the pores.
Finally, we optimized the average aspect ratio of the rock and found that it may depend on the type of rock. The velocity calculated with a single aspect ratio is similar to the velocity calculated using a spectrum of aspect ratios for the same rock.
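The optimization engine named in the abstract is very fast simulated annealing; a minimal annealing-style search can be sketched as follows. The 1/k cooling schedule, step size and toy misfit are assumptions for illustration, not the authors' algorithm or the Kuster–Toksöz forward model:

```python
import math
import random

def simulated_annealing(misfit, x0, step, n_iter=2000, t0=1.0, seed=0):
    """Schematic annealing search: accept uphill moves with probability
    exp(-dE/T) under a fast (1/k) cooling schedule, tracking the best model."""
    rng = random.Random(seed)
    x, e = list(x0), misfit(x0)
    best_x, best_e = list(x), e
    for k in range(1, n_iter + 1):
        t = t0 / k  # fast cooling: temperature shrinks as 1/k
        cand = [xi + step * rng.uniform(-1.0, 1.0) for xi in x]
        ec = misfit(cand)
        if ec < e or rng.random() < math.exp(-(ec - e) / max(t, 1e-12)):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = list(x), e
    return best_x, best_e

# Toy misfit standing in for velocity residuals: minimum at (1, 2).
f = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
sol, err = simulated_annealing(f, [0.0, 0.0], step=0.5)
```

In the paper's setting, the model vector would hold pore concentrations and moduli, and the misfit would compare Kuster–Toksöz predicted velocities with the ultrasonic measurements.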
Decomposition of a compliance tensor for fractures and transversely isotropic medium
Author: Çağrı Diner
Available online: 17 April 2013
ABSTRACT: The purpose of this paper is to develop an analytical method to decompose an observed anisotropic compliance tensor into two transversely isotropic (TI) tensors that are associated with layers and fractures. Specifically, the fracture parameters and the TI background medium parameters are obtained from a given monoclinic compliance tensor. Here the set of parallel fractures and the TI medium can be arbitrarily oriented; they are not constrained to the vertical and horizontal directions, respectively. First, the summation of the two TI tensors, which represent the fractures and the layering, is obtained in order to have the form of the resultant monoclinic medium. The orientation of each TI medium is represented by one Euler angle once the mirror plane normal of the monoclinic tensor is determined. This is because the mirror plane normal of the monoclinic medium is perpendicular to the rotation axes of the two TI tensors. Thus, a layered medium with one set of parallel fractures is represented by nine parameters: two for rotationally symmetric fractures, five for a generic background TI medium and two Euler angles for the orientations of the structures. The decomposition problem, which is to find these nine parameters from a given monoclinic compliance tensor with thirteen parameters, is solved in the paper. Finally, the decomposition method is extended to media with three structures, namely two sets of fractures and a layering whose rotation axes lie in the same plane.
Controlled laboratory experiments to assess the geomechanical influence of subsurface injection and depletion processes on 4D seismic responses
Authors: Rune M. Holt and Jørn F. Stenebråten
Available online: 08 April 2013
ABSTRACT: Laboratory experiments are performed with soft synthetic reservoir sandstone cemented under stress and with a synthetic overburden (caprock) material consisting of compacted clay (kaolinite) in brine. The rock‐like materials are loaded mechanically under stress paths representative of stress changes occurring in the subsurface as a result of injection (increasing pore pressure), or of depletion followed by injection into a storage reservoir. Static stress‐strain behaviour and multidirectional P‐ and S‐wave velocities are monitored during the tests. The tests with sandstone are performed on dry material, and simple poroelastic modelling is performed to relate these data to the behaviour of fluid (water/CO2) saturated samples under the same stress paths. The focus is on identifying 4D seismic attributes that may be used in the field to interpret monitoring measurements. This could help diagnose stress changes in the overburden, signalling the risk of CO2 leakage from a reservoir if the compressive or tensile strength limit of the overburden is reached, and, of course, help quantify the amount of CO2 stored.
2D tomographic inversion of complex resistivity data on cylindrical models
Available online: 06 March 2013
ABSTRACT: The resistive and capacitive response of a multiphase subsoil can be analysed through amplitude and phase models of the electrical complex resistivity. The main goal of this work is to extend the 2D transformed formulation used for electrical site investigations to cylindrical laboratory models, solving the complex resistivity forward problem starting from the Complete Electrode Model approach. This formulation is tested by comparison with the full 3D solution and is proven to be stable and accurate. Inversion of complex resistivity data is achieved through a Matlab interface included in the EIDORS environment, with the addition of numerous new functions. Three synthetic examples are discussed to establish the potential and limits of this approach in comparison with the 3D inversion. Laboratory experiments on a cylindrical model with 10 electrodes on a horizontal cross‐section validated the synthetic results. The model, 1 m high and 500 mm in diameter, is made of sand contaminated from the top by an engineered fluid with electrical properties similar to chlorinated solvents.
Measurement of the normal/tangential fracture compliance ratio (ZN/ZT) during hydraulic fracture stimulation using S‐wave splitting data
Authors: James P. Verdon and Andreas Wüstefeld
Available online: 28 February 2013
ABSTRACT: We develop a method to invert S‐wave splitting (SWS) observations, measured on microseismic event data, for the ratio of normal to tangential compliance (ZN/ZT) of sets of aligned fractures. We demonstrate this method by inverting for ZN/ZT using SWS measurements made during hydraulic fracture stimulation of the Cotton Valley tight gas reservoir, Texas. When the full SWS data set is inverted, we find that ZN/ZT = 0.74 ± 0.04. By windowing the data in time, we observe variations in ZN/ZT as the fracture stimulation progresses. Most notably, we observe an increase in ZN/ZT contemporaneous with proppant injection. Rock physics models and laboratory observations have shown that ZN/ZT can be sensitive to (1) the stiffness of the fluid filling the fracture, (2) the extent to which this fluid can flow in and out of the fracture during the passage of a seismic wave and (3) the internal architecture of the fracture, including the roughness of the fracture surfaces, the number and size of any asperities and the presence of material filling the fracture. These factors have direct implications for modelling the fluid‐flow properties of fractures. Consequently, the ability to image ZN/ZT using SWS will provide useful information about fractured rocks and allow additional constraints to be placed on reservoir behaviour.
Adaptive scaling for an enhanced dynamic interpretation of 4D seismic data
Authors: Reza Falahat, Asghar Shams and Colin MacBeth
Available online: 27 February 2013
ABSTRACT: In this study, attention is drawn to the role of engineering principles when interpreting dynamic reservoir changes from 4D seismic data. In particular, it is found that in clastic reservoirs the principal parameters controlling mapped 4D signatures are not the pressure and saturation changes per se, but these changes scaled by the thickness (or pore volume) of the reservoir volume that they occupy. For this reason, pressure and saturation changes cannot strictly be recovered by themselves, and this holds for all data interpretation. This understanding is validated both with numerical modelling and with analytic calculation. Interestingly, the study also indicates that the impact of gas saturation on the seismic response can be written using a linear term, but that inversion for gas saturation can yield at best only the total thickness/pore volume of the distribution. The above provides the basis for a linear equation that can readily and accurately be used to estimate pressure and saturation changes. Quantitative updates of the static and dynamic components of the simulation model can be achieved by comparing thickness‐ or pore volume‐scaled changes from the simulator with the corresponding quantities in the inverted observations.
Imaging by forward propagating the data: theory and application
Authors: Akbar Zuberi and Tariq Alkhalifah
Available online: 27 February 2013
ABSTRACT: The forward (modelled) wavefield for conventional reverse time migration (RTM) is computed by extrapolating the wavefield from an estimated source wavelet. In the typical case of a smooth subsurface velocity, this wavefield lacks the components, including surface reflections, necessary to image multiples in the observed data. We, instead, introduce the concept of forward propagating the recorded data, including direct arrivals, as part of RTM. We analyse the influence of the main components of the data on the imaging process, which include direct arrivals, primaries and surface‐related multiples. In our RTM methodology, this implies correlating the forward extrapolated recorded data wavefield with its reversely extrapolated version prior to applying the zero‐lag cross‐correlation imaging condition. The interaction of the data components with each other in the cross‐correlation process will image primaries and multiples, as well as introduce cross‐talk artefact terms. However, some of these artefacts are present in conventional RTM implementations and they tend to be relatively weak. In fact, for the surface seismic experiment, forward propagating the direct arrivals is almost equivalent to forward propagating a source, and it tends to contribute the majority of the data imaging energy. In addition, primaries and multiples recorded in the data become multiples of one higher order. Forward propagating the recorded data to recreate the source relieves us from the requirement of estimating the source function. It also includes near‐surface information necessary to improve the image in areas with near‐surface complexity. Data from a simple synthetic layered model, as well as the Marmousi model, are used to demonstrate some of these features.
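The zero-lag cross-correlation imaging condition named in this abstract can be written compactly. This is a schematic sketch; the wavefield arrays, their shapes and the toy "event" are assumptions, and the actual extrapolation step is omitted:

```python
import numpy as np

def zero_lag_image(source_wf, receiver_wf):
    """Zero-lag cross-correlation imaging condition: multiply the two
    extrapolated wavefields sample-by-sample and sum over time at every
    image point. Both wavefields are assumed precomputed with shape
    (n_t, n_z, n_x)."""
    return (source_wf * receiver_wf).sum(axis=0)

# Tiny synthetic check: an event coincident in both wavefields stacks
# constructively at its location and nowhere else.
n_t, n_z, n_x = 32, 5, 5
fwd = np.zeros((n_t, n_z, n_x))
rev = np.zeros((n_t, n_z, n_x))
fwd[:, 2, 3] = 1.0
rev[:, 2, 3] = 1.0
image = zero_lag_image(fwd, rev)
```

In the authors' variant, `fwd` would be the forward-propagated recorded data (rather than a modelled source wavefield) and `rev` its reverse-extrapolated version.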
Pseudo‐remote reference processing of magnetotelluric data: a fast and efficient data acquisition scheme for local arrays
Available online: 27 February 2013
ABSTRACT: The basic physical properties of the magnetic source field, namely its homogeneity and spatial coherence, have been used for a variety of magnetotelluric processing techniques, including remote reference processing. In the present work we propose a data acquisition and processing technique for a large number of stations distributed over a localized area, ideally on a grid. Pseudo‐remote reference processing requires the following station setup: five‐component MT data are measured only at some sites (base stations), while at the majority of sites (local stations) only the electric and vertical magnetic fields are recorded. The impedance tensor and vertical magnetic transfer functions at each local station are computed by assigning the magnetic fields of a base station to the local station as if they had been measured there. This approach can lead to biased or erroneous estimates of local transfer functions at stations in the vicinity of strong conductivity contrasts; these can be corrected using the interstation transfer functions between the horizontal magnetic fields measured at the base station(s).
We test this approach with a data set collected in the vicinity of the Groß Schönebeck geothermal test site. Magnetotelluric data were collected at 146 local and 5 base stations distributed over an approximately 5 km × 25 km grid, with site spacing ranging from 500 m × 500 m to 1000 m × 1000 m, in the frequency range 128–0.001 Hz. The obtained pseudo‐remote reference transfer functions are generally smooth and consistent, and conductivity models obtained from 2D inversion of the data are in agreement with previous conductivity models from the study area.
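The core of the pseudo-remote reference idea — solving E = Z H with the base station's horizontal magnetic field standing in for the local one — can be sketched as a least-squares fit. This is a hypothetical illustration, not the authors' processing code; the synthetic field values and the noise-free setup are assumptions:

```python
import numpy as np

def impedance_from_base(E_local, H_base):
    """Estimate the 2x2 impedance tensor Z from E = Z @ H in the
    least-squares sense, with the base station's horizontal magnetic
    field assigned to the local station as if measured there."""
    # E_local, H_base: (n_samples, 2) arrays of horizontal field components.
    Z_T, *_ = np.linalg.lstsq(H_base, E_local, rcond=None)
    return Z_T.T

# Synthetic check: fields generated from a known impedance are recovered.
rng = np.random.default_rng(1)
Z_true = np.array([[0.1, 2.0], [-2.0, -0.1]])
H = rng.standard_normal((200, 2))   # stand-in for base-station magnetics
E = H @ Z_true.T                    # local electric field, E = Z @ H per sample
Z_est = impedance_from_base(E, H)
```

In practice the estimation is done per frequency band on complex spectra, with robust weighting rather than plain least squares.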
Applicability of 1D and 2.5D marine controlled source electromagnetic modelling
Authors: Ali Moradi Tehrani and Evert Slob
Available online: 27 February 2013
ABSTRACT: We present two‐and‐a‐half‐dimensional (2.5D) and three‐dimensional (3D) integral equation modelling of the marine controlled source electromagnetic method. We implement 2.5D modelling using a point source and a two‐dimensional reservoir, and compare the results with point source responses from one‐dimensional and three‐dimensional reservoir models. These methods are based on an electric field domain integral equation formulation. We show how the 2.5D method performs in terms of both accuracy and computing speed for different configurations. We compare the results from 1D, 2.5D and 3D modelling, for a symmetrically placed reservoir and the in‐line acquisition configuration, as a function of reservoir size in the cross‐line direction, thickness, frequency and depth. Depending on the model parameters, 2.5D modelling can be considered an accurate and fast method for marine controlled source electromagnetic acquisition optimization and interpretation. If the thickness of the reservoir is less than one fifth of the skin depth of the embedding, and if the depth of the reservoir is two or more times the skin depth of the embedding, the largest amplitude difference between two‐dimensional and three‐dimensional reservoirs is less than 10% when the source is above the centre of the reservoir. We discuss supporting examples with different configurations, where the 2.5D results lead to an optimistic detection estimate. Phase differences between 2.5D and 3D modelling are even smaller, and the 2.5D solution can be used to assess the ability to detect the reservoir with a given acquisition configuration.
Spectral decomposition with f−x−y preconditioning
Authors: David Bonar and Mauricio Sacchi
Available online: 27 February 2013
ABSTRACT: Spectral decomposition, or local time‐frequency analysis, tries to enhance the amount of information one can obtain from a seismic volume by finding the frequency content of the seismic data at each time sample. However, if a small amount of noise is present within the seismic amplitude volume, it has the potential to become more prominent in the spectrally decomposed data, especially if high‐resolution or sparsity‐promoting methods are utilized. To combat this problem, post‐processing noise removal has commonly been employed, but these techniques can potentially degrade the resolution of small‐scale geological structures in their attempt to remove the noise. Rather than de‐noising the spectrally decomposed data after they are generated, we propose to incorporate the ideas of f−x−y deconvolution within the spectral decomposition process, creating an algorithm that de‐noises the time‐frequency representation of the data as it is being generated. By incorporating the spatial prediction error filters utilized for f−x−y deconvolution into the spectral decomposition problem, a spatially smooth time‐frequency representation that maintains its sparsity, or high‐resolution characteristics, can be obtained. This spatially smooth, high‐resolution time‐frequency representation is less likely to exhibit the random noise present in the more conventionally obtained time‐frequency representation. Tests on a real data set demonstrate that, by de‐noising while the time‐frequency representation is being constructed, small‐scale geological structures are more likely to maintain their resolution, since the de‐noised time‐frequency representation is specifically built to reconstruct the data.
Influence of borehole‐eccentred tools on wireline and logging‐while‐drilling sonic logging measurements
Authors: David Pardo, Pawel J. Matuszyk, Carlos Torres‐Verdin, Angel Mora, Ignacio Muga and Victor M. Calo
Available online: 13 February 2013
ABSTRACT: We describe a numerical study to quantify the influence of tool eccentricity on wireline (WL) and logging‐while‐drilling (LWD) sonic logging measurements. Simulations are performed with a height‐polynomial‐adaptive (hp) Fourier finite‐element method that delivers highly accurate solutions of linear visco‐elasto‐acoustic problems in the frequency domain. The analysis focuses on WL instruments equipped with monopole or dipole sources and on LWD instruments with monopole excitation. Analysis of the main propagation modes obtained from frequency dispersion curves indicates that the additional high‐order modes arising as a result of borehole eccentricity interfere with the main modes (i.e., Stoneley, pseudo‐Rayleigh and flexural). This often decreases the estimated shear and compressional formation velocities, which should be increased to correct for borehole‐eccentricity effects. Undesired interferences between different modes can occur at different frequencies, depending upon the properties of the formation and the size of the fluid annulus, which may hamper the estimation of the formation velocities.
Application of seismic full waveform inversion to monitor CO2 injection: modelling and a real data example from the Ketzin site, Germany
Authors: Fengjiao Zhang, Christopher Juhlin, Monika Ivandic and Stefan Lüth
Available online: 29 January 2013
ABSTRACT: Seismic monitoring of the injected carbon dioxide (CO2) distribution at depth is an important issue in the geological storage of CO2. To help monitor changes in the subsurface during CO2 injection, a series of 2D seismic surveys were acquired within the framework of the CO2SINK and CO2MAN projects at Ketzin, Germany, at different stages of the injection process. Here we investigate the use of seismic full waveform inversion as a qualitative tool for time‐lapse seismic monitoring, given the constraints of the limited maximum offsets of the 2D seismic data. Prior to applying the inversion to the real data, we first made a number of benchmark tests on synthetic data using a geometry similar to that of the real data. Results from the synthetic benchmark tests show that it is difficult to recover the true value of the velocity anomaly due to the injection, but that it is possible to qualitatively locate the distribution of the injected CO2. After the synthetic studies, we applied seismic full waveform inversion to the real time‐lapse data from the Ketzin site, along with conventional time‐lapse processing. Both methods show a similar qualitative distribution of the injected CO2 and agree well with expectations based upon the more extensive 3D time‐lapse monitoring in the area.
Review paper: Instrumentation for marine magnetotelluric and controlled source electromagnetic sounding
Available online: 29 January 2013
ABSTRACT: We review and describe the electromagnetic transmitters and receivers used to carry out magnetotelluric and controlled source soundings in the marine environment. Academic studies using marine electromagnetic methods started in the 1970s, but during the last decade these methods have been used extensively by the offshore hydrocarbon exploration industry. The principal sensors (magnetometers and non‐polarizing electrodes) are similar to those used on land, but magnetotelluric field strengths are not only much smaller on the deep sea‐floor but also fall off more rapidly with increasing frequency. As a result, magnetotelluric signals approach the noise floor of electric field and induction coil sensors (0.1 nV/m and 0.1 pT) at around 1 Hz in typical continental shelf environments. Fluxgate magnetometers have higher noise than induction coils at periods shorter than 500 s but can still be used to collect sea‐floor magnetotelluric data down to 40–100 s. Controlled source transmitters using electric dipoles can be towed continuously through the seawater or on the sea‐bed, achieving output currents of 1000 A or more, limited by the conductivity of seawater and the power that can be transmitted down the cables used to tow the devices behind a ship. The maximum source–receiver separation achieved in controlled source soundings depends on both the transmitter dipole moment and the receiver noise floor, and is typically around 10 km in continental shelf exploration environments. The position of both receivers and transmitters needs to be navigated using either long‐baseline or short‐baseline acoustic ranging, while sea‐floor receivers need additional measurements of orientation from compasses and tiltmeters. All equipment has to be packaged to withstand the high pressure (up to 40 MPa) and corrosive properties of seawater. Receiver instruments are usually self‐contained, battery powered and equipped with highly accurate clocks for timekeeping, even when towed on the sea‐floor or in the water column behind a transmitter.
3D pseudo‐seismic imaging of transient electromagnetic data – a feasibility study
Authors: G.Q. Xue, L.‐J. Gelius, L. Xiu, Z.P. Qi and W.Y. Chen
Available online: 15 January 2013
ABSTRACT: We investigate a pseudo‐seismic approach based on the so‐called inverse Q‐transform as an alternative way of processing transient electromagnetic (TEM) data. This technique transforms the diffusive TEM response into that of propagating waves obeying the standard wave equation. The transformed data can be input into standard seismic migration schemes, with the potential of giving higher‐resolution subsurface images. Such images contain geometrical and qualitative information about the medium, but no quantitative results are obtained as in model‐based inversion techniques. The reconstructed images can be used directly for geological interpretation or to further constrain possible inversions. We extend the original Q‐transform, based on an electrical‐source formulation, to the case of a large‐loop TEM source. Moreover, an efficient discrete version of the inverse of this modified Q‐transform is presented using a regularization method. Application of this inverse transform to the measured TEM responses gives the corresponding pseudo‐seismic data, which are input into a 3D migration scheme. We use a 3D boundary‐element type of Kirchhoff migration to ensure high computational efficiency. The proposed method was applied to both synthetic data and field measurements taken from an engineering geology survey. The results indicate that the resolution of the TEM data is significantly improved when compared with standard apparent‐resistivity plots, demonstrating that higher‐resolution 3D transient electromagnetic imaging is feasible using this method.
Estimated source wavelet‐incorporated reverse‐time migration with a virtual source imaging condition
Authors: Youngseo Kim, Yongchae Cho and Changsoo Shin
Available online: 15 January 2013
ABSTRACT: Many geophysicists perform reverse‐time migration using a variety of artificial sources to obtain the source wavefield. Upon processing the seismic data, however, it is difficult to recover the original phase and amplitude of the source wavelet used for seismic exploration, regardless of the source. Artificial source wavelets expressed by well‐known functions, such as the Ricker or first‐derivative Gaussian wavelets, are therefore commonly used. There are some differences between these artificial source wavelets and the original source wavelets, resulting in imperfect migration images: artificial source wavelets tend to distort the exact location of subsurface reflectors and create noise around the boundaries of strata. To solve this problem, we applied a source estimation technique to the reverse‐time migration algorithm. The source estimation technique approximates the original exploration source wavelet by a deconvolution method. This technique is used in full waveform inversion and provides better inversion results, as demonstrated by other studies. To demonstrate the effect of reverse‐time migration with source estimation, we tested this algorithm on the Sigsbee2a model, the SEG/EAGE 3D salt model and 3D real field land data. From the resulting images of these three models, we found that the source estimation technique can yield better migration images. To suppress the artefacts produced in the migration image, we used a wavenumber filter and a Laplacian filter on the 2D and 3D examples, respectively. Furthermore, we used the pseudo‐Hessian, similar to source illumination, to scale the migration image, because the virtual source imaging condition was used for reverse‐time migration.
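Source estimation by deconvolution, as referred to in this abstract, can be illustrated with a damped frequency-domain sketch. This is not the authors' algorithm; the damping term `eps` and the impulse test traces are assumptions for the example:

```python
import numpy as np

def estimate_wavelet(observed, modelled, eps=1e-3):
    """Estimate a source correction filter by deconvolving the modelled
    trace from the observed trace in the frequency domain, stabilized
    with a small damping term to avoid division by near-zero spectra."""
    O = np.fft.rfft(observed)
    M = np.fft.rfft(modelled)
    W = O * np.conj(M) / (M * np.conj(M) + eps)
    return np.fft.irfft(W, n=len(observed))

# Check on an impulse: if the modelled trace is a unit spike, the
# estimated filter is (approximately) the observed trace itself.
modelled = np.zeros(64)
modelled[0] = 1.0
observed = 2.0 * modelled
w = estimate_wavelet(observed, modelled)
```

In the migration setting, applying such a filter to the artificial wavelet nudges it towards the unknown field wavelet before the source wavefield is extrapolated.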
Robust error on magnetotelluric impedance estimates
Authors: Pierre Wawrzyniak, Pascal Sailhac and Guy Marquis
Available online: 07 January 2013
ABSTRACT: We propose a new, robust methodology to estimate the errors on a magnetotelluric (MT) impedance tensor. The method is developed with the bounded influence remote‐reference processing (BIRRP) code in a single‐site configuration. The error is estimated by reinjecting an electric field residual, obtained after the calculation of an impedance tensor, into a tensor function calculation procedure. We show using synthetic examples that the error tensor calculated with our method yields a more reliable error estimate than the one calculated from jackknife statistics. The modulus of realistic error estimates can be used as a quality control and an accurate inversion constraint for MT surveys. Moreover, reliable error estimates are necessary for new applications of MT to dynamic subsurface processes such as reservoir monitoring.
Influence of a velocity model and source frequency on microseismic waveforms: some implications for microseismic locations
Authors: P.J. Usher, D.A. Angus and J.P. Verdon
Available online: 10 December 2012
ABSTRACT: In this paper, we examine the influence of the velocity model and microseismic source frequency on microseismic waveforms and event locations. Finite‐difference waveform synthetics are generated based on the Cotton Valley hydraulic fracture experiment, where we vary the vertical heterogeneity of the velocity models as well as the microseismic source frequencies. We find that differences between plausible velocity models lead to changes in arrival times of approximately 0.0035 s for P‐waves and 0.0085 s for S‐waves. Based on the average P‐ and S‐wave velocities, the difference in the P‐ and S‐wave traveltimes is equivalent to approximately 20 m in location difference. Significant increases in the waveform coda develop with increasing model heterogeneity and increasing source frequency. The presence of signal noise, as well as other sources of error (e.g., uncertainty in geophone location), will likely increase the location error estimates further. Thus we note that location error due to an incorrect velocity model cannot be ignored.
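The quoted ~20 m figure is essentially a velocity-times-traveltime-shift product; a back-of-envelope check is below. The arrival-time shifts are taken from the abstract, but the average velocities are hypothetical values assumed for illustration, not numbers from the paper:

```python
# Arrival-time shifts between plausible velocity models (from the abstract):
dt_p = 0.0035   # s, P-wave
dt_s = 0.0085   # s, S-wave

# Assumed average velocities for a tight-gas setting (hypothetical values):
vp = 4500.0     # m/s
vs = 2500.0     # m/s

# Distance mislocation implied by each phase: roughly v * dt,
# both landing on the ~20 m scale quoted in the abstract.
err_p = vp * dt_p
err_s = vs * dt_s
```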
Statics‐preserving projection filtering
Authors: Yann Traonmilin and Necati Gulunay
Available online: 10 December 2012
ABSTRACT: Projection filtering has been used for many years in seismic processing as a tool to extract a signal out of noisy data. The effectiveness of projection filtering reaches a limit when seismic events are affected by static shifts. Such shifts degrade the lateral coherency of the data, which is the strongest assumption made by projection filtering. We propose an algorithm to estimate projection filters and static shifts simultaneously in order to perform noise attenuation in the presence of static shifts in the data. We then show results on synthetic and real data to demonstrate the denoising capabilities of our algorithm.
Automatic detection and imaging of diffraction points using pattern recognition
Authors: J. J. S. de Figueiredo, F. Oliveira, E. Esmi, L. Freitas, J. Schleicher, A. Novais, P. Sussner and S. Green
Available online: 07 December 2012
ABSTRACT: Hydrocarbon reservoirs are generally located beneath complex geological structures. Frequently, such areas contain seismic diffractors that carry detailed structural information on the order of the seismic wavelength. Therefore, the development of computational tools capable of detecting diffraction points with good resolution is desirable, but has been a challenge in the area of seismic processing. In this work, we present a method for the detection of diffraction points in the common‐offset‐gather domain. The method applies a two‐class k‐nearest‐neighbours (kNN) pattern recognition technique to amplitudes along diffraction traveltime curves to distinguish between diffractions and reflections or noise. While the method, in principle, requires knowledge of the migration velocity field, it is very robust with respect to an erroneous model. Numerical examples using synthetic seismic and field ground‐penetrating‐radar (GPR) data demonstrate the feasibility of the technique and show its usefulness for automatically mapping diffraction points in a seismic section. In our applications, the method was able to detect all diffractions present in the data and did not produce any false positives.
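A minimal two-class kNN vote of the kind the method applies can be sketched as follows. The toy feature vectors and class labels are invented for illustration; the real features are amplitudes extracted along diffraction traveltime curves:

```python
import math
from collections import Counter

def knn_classify(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance), as in a two-class kNN scheme."""
    nearest = sorted(zip(train, labels), key=lambda p: math.dist(p[0], x))[:k]
    votes = Counter(lab for _, lab in nearest)
    return votes.most_common(1)[0][0]

# Toy training set: two clusters standing in for 'diffraction' vs
# 'reflection/noise' amplitude features.
train = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
         (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
labels = ["noise", "noise", "noise",
          "diffraction", "diffraction", "diffraction"]
label = knn_classify(train, labels, (0.95, 1.05))
```

The robustness to velocity-model errors reported in the abstract comes from the classifier voting on amplitude patterns rather than on exact traveltime fits.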
Curvature analysis to differentiate magnetic sources for geologic mapping
Authors: Madeline Lee, William Morris, George Leblanc and Jeff Harris. Available online: 23 November 2012.
ABSTRACT: Curvature of a surface is typically applied in seismic data interpretation; however, this work outlines its application to a potential field, specifically aeromagnetic data. The curvature of a magnetic grid (from point data) is calculated by fitting a quadratic surface within a moving window at each grid node. The overall and directional curvatures calculated within this window provide insight into the geometry of the magnetic grid surface and the causative sources. The curvature analysis comprises both qualitative (graphical) and quantitative (statistical) approaches and involves the calculation of full, profile and plan curvatures. The magnitude, sign and relative ratios enable the user to define source location and geometry and also to discriminate source type; for example, to differentiate between a fault and a normal-polarity dyke. The reliability of the analysis is refined when a priori geological knowledge is available and basic statistics are considered. By allotting a weighting scheme to various statistical populations (e.g., standard deviation), increased detail is extracted on the different lithologies and structures represented by the data set. Furthermore, the curvature's behaviour is analogous to derivative calculations (vertical, horizontal and tilt) in producing a zero value at the source edge and either a local maximum or minimum over the source. Application prior to semi-automated methods may help identify the correct indices necessary for identification of magnetic sources. Curvature analysis is successfully applied to an aeromagnetic data set over the 2.6–1.85 Ga Paleoproterozoic Wopmay orogen, Northwest Territories, Canada. This area has undergone regional and local-scale faulting and is host to multiple generations of dyke swarms. As the area has been extensively mapped, this data set proved to be an ideal test site.
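The moving-window quadratic fit described above is easy to prototype. The sketch below is not the authors' code: it fits z = ax² + by² + cxy + dx + ey + f by least squares over one window and returns Roberts-style most-positive/most-negative curvature combinations of the quadratic coefficients; the 3×3 test window is synthetic:

```python
import numpy as np

def window_curvature(z, dx=1.0):
    """Fit z = a x^2 + b y^2 + c x y + d x + e y + f over a square window by
    least squares and return the most-positive / most-negative curvatures
    (Roberts-style combinations of the quadratic coefficients)."""
    half = (z.shape[0] - 1) / 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1] * dx
    A = np.column_stack([x.ravel() ** 2, y.ravel() ** 2, (x * y).ravel(),
                         x.ravel(), y.ravel(), np.ones(x.size)])
    a, b, c, d, e, f = np.linalg.lstsq(A, z.ravel(), rcond=None)[0]
    root = np.sqrt((a - b) ** 2 + c ** 2)
    return a + b + root, a + b - root   # k_max, k_min

# synthetic bowl z = x^2 + 2 y^2 on a 3x3 window: expect k_max = 4, k_min = 2
y, x = np.mgrid[-1:2, -1:2].astype(float)
kmax, kmin = window_curvature(x ** 2 + 2 * y ** 2)
print(round(kmax, 6), round(kmin, 6))   # → 4.0 2.0
```

In a full implementation this fit would be slid across every node of the magnetic grid, producing curvature maps to compare against the derivative-based (vertical, horizontal, tilt) products mentioned in the abstract.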
Sea‐bed diffractions and their impact on 4D seismic data
Authors: Sissel Grude, Bård Osdal and Martin Landrø. Available online: 23 November 2012.
ABSTRACT: A sea-bed reflection is used to estimate changes in water-layer velocities between time-lapse seismic surveys. Such corrections are crucial in order to obtain high-quality 4D seismic data. This might be a challenge in areas where rough sea-bed topography creates sea-bed diffractions that interfere with the sea-bed reflection, as in several of the northern fields in the Norwegian Sea. These diffractions and diffracted multiples are difficult to attenuate during data processing and become a source of noise in time-lapse data.
In this work we study how cross-correlation analysis of time-lapse seismic data at a sea-bed reflection may be perturbed by the presence of diffractions. The sea-bed topography from the Norne field is used in 2D finite-difference modelling to explain some of the observed variation in the water-layer time-shift in field data. We find that a rough sea-bed and the diffracted energy it induces cause residual noise in the 4D data. The variation in the water-layer time-shift increases with sea-bed complexity and is amplified by interaction with other sources of non-repeatability such as water-column variations, mis-positioning and the strength of the ice scours creating the diffracted energy. Time-shift variations with mis-positioning and velocity changes between the surveys seem to best explain the observed variation in the time-shift for the time-lapse seismic field data from Norne.
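The cross-correlation time-shift estimate that underlies such an analysis can be sketched as follows; the Gaussian "sea-bed arrival", the 4 ms sampling and the 8 ms delay are invented test values, not parameters from the Norne data:

```python
import numpy as np

def xcorr_timeshift(base, monitor, dt):
    """Time shift (s) of the monitor trace relative to the base trace,
    taken from the peak of their full cross-correlation."""
    n = len(base)
    c = np.correlate(monitor, base, mode='full')   # lags -(n-1) .. (n-1)
    lag = np.argmax(c) - (n - 1)
    return lag * dt

dt = 0.004                                          # 4 ms sampling
t = np.arange(256) * dt
base = np.exp(-((t - 0.5) / 0.02) ** 2)             # Gaussian "sea-bed" arrival
monitor = np.exp(-((t - 0.508) / 0.02) ** 2)        # same arrival, 8 ms later
shift = xcorr_timeshift(base, monitor, dt)
print(shift)                                        # → 0.008
```

Interfering diffractions distort the correlation peak, which is precisely how they perturb the water-layer time-shift estimate discussed above; sub-sample refinement (e.g. parabolic peak interpolation) is usually added in production processing.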
P‐wave attenuation anisotropy in fractured media: A seismic physical modelling study
Authors: A.M. Ekanem, J. Wei, X.-Y. Li, M. Chapman and I.G. Main. Available online: 08 November 2012.
ABSTRACT: We used a laboratory-scale model to study the effects aligned fractures might have on seismic wave propagation at a larger scale in real Earth imaging. Our main objective was to investigate the effect of aligned fractures on seismic P-wave amplitude through the estimation of the induced attenuation. The physical model was constructed from a mixture of epoxy resin and silicon rubber, with inclusions designed to simulate two sets of inclined fractures at an angle of 29.2° to each other. Two-dimensional reflection data were acquired using the pulse and transmission method in three principal directions relative to the fracture strike azimuth, with the model submerged in a water tank. We used the Quality Versus Offset (QVO) method, an extension of the classical spectral-ratio method for determining attenuation, to estimate the induced attenuation (the inverse of the seismic quality factor) from the pre-processed common-midpoint (CMP) gathers. The results of our analysis show that the induced P-wave attenuation is anisotropic, with elliptical (cos 2θ) variations with respect to the survey azimuth angle θ. The minor axis of the Q ellipse corresponds to the fracture normal; in this direction, i.e., across the material grain, attenuation is at a maximum. The major axis corresponds to the fracture-strike direction (parallel to the material grain), where minimum attenuation occurs. These attenuation results are consistent with the azimuthal anisotropy observed in the stacking velocities in the fractured layer and with the physical model, and thus provide a physical basis for using attenuation anisotropy to derive fracture properties from seismic data.
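The classical spectral-ratio estimate that QVO extends can be illustrated with synthetic spectra: the slope of ln(A_far/A_near) versus frequency equals −πΔt/Q, where Δt is the extra traveltime through the attenuating layer. All numbers below are invented, not from the physical-model experiment:

```python
import numpy as np

def q_from_spectral_ratio(f, amp_near, amp_far, dt_travel):
    """Classical spectral-ratio Q estimate: the slope of ln(A_far/A_near)
    versus frequency is -pi * dt_travel / Q."""
    slope = np.polyfit(f, np.log(amp_far / amp_near), 1)[0]
    return -np.pi * dt_travel / slope

f = np.linspace(10.0, 80.0, 50)            # usable bandwidth (Hz)
Q_true, dt_travel = 40.0, 0.3              # quality factor, extra traveltime (s)
amp_near = np.exp(-f ** 2 / 5000.0)        # arbitrary common source spectrum
amp_far = amp_near * np.exp(-np.pi * f * dt_travel / Q_true)
q_hat = q_from_spectral_ratio(f, amp_near, amp_far, dt_travel)
print(round(q_hat, 3))                     # → 40.0
```

Because the source spectrum cancels in the ratio, the estimate depends only on the slope; repeating it per azimuth is what reveals the elliptical cos 2θ pattern described in the abstract.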
The impact of CO2 on the electrical properties of water bearing porous media – laboratory experiments with respect to carbon capture and storage
Authors: Jana H. Börner, Volker Herdegen, Jens-Uwe Repke and Klaus Spitzer. Available online: 08 November 2012.
ABSTRACT: We conducted a detailed experimental investigation of the effect of CO2 injection on the electrical conductivity of water-bearing porous media, needed for improved geophysical monitoring of CO2 storage reservoirs. To this end, we developed an experimental set-up that allows us to investigate the electrical characteristics of the injection process as well as the impact of dissolved CO2 on pore-water conductivity. We found that a gaseous, fluid or supercritical pure CO2 phase bears no relevant conductivity at pressures up to 13 MPa and temperatures up to 50°C. When CO2 dissolves in pore water, pressure-dependent dissociation processes can double the pore-water conductivity, which can be used for leakage detection. This is quantified by an adaptation of Archie's law. The empirical adaptation and the experimental data are confirmed by combined geochemical-geoelectrical modelling. Furthermore, water-saturated sand samples were investigated while CO2 displaced the pore water at pressures up to 13 MPa and temperatures up to 40°C. A decrease in electrical conductivity by a factor of up to 33 was measured, corresponding to a residual water saturation of 14–19%. Qualitatively, a decrease was also demonstrated under supercritical conditions. As an integrative interpretation, a conceptual model of electrical rock properties during CO2 sequestration is presented.
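The link between the measured conductivity drop and the quoted residual saturation follows directly from Archie's law. A minimal sketch, assuming a typical saturation exponent n = 2 for sand (the paper fits its own adapted parameters, so this is only a consistency check):

```python
# Archie's law: sigma = sigma_w * phi**m * Sw**n. During CO2 drainage the
# porosity phi and cementation exponent m are fixed, so a conductivity drop
# by a factor F implies a water saturation Sw = F**(-1/n).
n = 2.0                        # saturation exponent (assumed; typical for sand)
F = 33.0                       # measured conductivity decrease
Sw_residual = F ** (-1.0 / n)
print(round(Sw_residual, 3))   # → 0.174
```

The resulting 17.4% sits inside the 14–19% range reported in the abstract, consistent with an n close to 2.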
Suitability of 10 Hz vertical geophones for seismic noise array measurements based on frequency‐wavenumber and extended spatial autocorrelation analyses
Authors: S. Rosa-Cintas, J.J. Galiana-Merino, J. Rosa-Herranz, S. Molina and J. Giner-Caturla. Available online: 07 November 2012.
ABSTRACT: Microzonation studies using ambient noise measurements constitute a promising approach to seismic hazard evaluation in urban areas. Among the existing techniques, seismic noise array measurements have become a valuable tool for estimating Vs profiles and thus the characteristics of a soil structure. Although methods based on the analysis of seismic noise are simpler, cheaper and faster than borehole drilling and down-hole or cross-hole logs for deriving shear-wave velocity profiles, array deployment requires several stations (broadband or short-period sensors) that are not always available. In this paper, we compare the results obtained by 10 Hz vertical-geophone arrays with those provided by 1 Hz sensor arrays. Two sites in the Bajo Segura Basin (SE Spain), with different soil characteristics, were chosen for array deployment. The comparison is carried out in terms of dispersion curves using frequency-wavenumber (f-k) and extended spatial autocorrelation (ESAC) techniques. Both analyses show good agreement between the 1 Hz sensors and the 10 Hz geophones; moreover, they demonstrate that it is possible to extend the analysis to a frequency range well below the natural frequency of the geophones. The results of our study confirm the suitability of standard seismic refraction/reflection equipment for ambient noise array measurements as well, which constitutes a cheaper and faster way of investigating soil characteristics.
Downhole interferometric illumination diagnosis and balancing
Available online: 07 November 2012.
ABSTRACT: With seismic interferometry or the virtual source method, controlled sources can be redatumed from the Earth's surface to generate so-called virtual sources at downhole receiver locations. Generally this is done by cross-correlation of the recorded downhole data and stacking over source locations. By studying the retrieved data at zero time lag, downhole illumination conditions that determine the virtual source radiation pattern can be analysed without a velocity model. This can be beneficial for survey planning in time-lapse experiments. Moreover, the virtual source radiation pattern can be corrected by multi-dimensional deconvolution or directional balancing. Such an approach can help to improve virtual source repeatability, posing major advantages for reservoir monitoring. An algorithm is proposed for so-called illumination balancing (being closely related to directional balancing). It can be applied to single-component receiver arrays with limited aperture below a strongly heterogeneous overburden. The algorithm is demonstrated on synthetic 3D elastic data to retrieve time-lapse amplitude attributes.
Preserved‐traveltime smoothing
Authors: V. Vinje, A. Stovas and D. Reynaud. Available online: 07 November 2012.
ABSTRACT: Most ray-based migration and tomography methods require a certain degree of smoothness of the depth velocity model. Since smoothing changes the velocity model, it is generally impossible to preserve the traveltime between all pairs of points in the model. With conventional smoothing, the traveltime errors are particularly large at discontinuities in the velocity model. These errors are offset-dependent and cause errors in both the depth and the residual moveout (RMO) of depth-migrated images. Here we propose a new method, Preserved-Traveltime Smoothing (PTS), with the objective of preserving traveltimes (and hence depths) at these discontinuities. This is accomplished by smoothing velocity moments for anisotropic models using a specific cross-correlation filter with the desired smoothness properties. The method is valid for isotropic, transversely isotropic with a vertical symmetry axis (VTI) and structural transversely isotropic (STI) models. The variations of the model parameters perpendicular to the symmetry axis of the anisotropy are assumed to be small.
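A quick way to see why smoothing velocity moments (rather than velocity itself) matters: vertical traveltime is thickness times mean slowness, so averaging slowness across a discontinuity preserves traveltime exactly while averaging velocity does not. This is only a toy illustration of the principle, not the PTS algorithm itself:

```python
import numpy as np

# Vertical traveltime through a stack of layers is t = sum(dz / v), i.e.
# thickness times *mean slowness*. Replacing two layers by one layer with
# the mean slowness preserves t exactly; using the mean velocity does not.
v = np.array([1500.0, 3000.0])    # m/s, a sharp velocity discontinuity
dz = 100.0                        # m per layer
t_exact = float(np.sum(dz / v))
t_mean_slowness = 2 * dz * float(np.mean(1.0 / v))
t_mean_velocity = 2 * dz / float(np.mean(v))
print(round(t_exact, 6), round(t_mean_slowness, 6), round(t_mean_velocity, 6))
```

The mean-velocity replacement is about 11% too fast here; errors of that kind at model discontinuities are exactly the offset-dependent depth and RMO errors the abstract describes.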
Absorption related velocity dispersion below a possible gas hydrate geobody
By Ian F. Jones. Available online: 07 November 2012.
ABSTRACT: Velocity dispersion is not usually a problem in surface seismic data processing, as the seismic bandwidth is relatively narrow and thus, for most Q values, dispersive effects are not noticeable. However, for highly absorptive bodies, such as the overpressured free-gas accumulations associated with some gas hydrates or high-porosity normally pressured gas sands, dispersive effects may be seen. In this work I analyse one such data set from offshore the north-east coast of India. I demonstrate that the effect is measurable and that compensating for it in either data processing or migration can improve the wavelet character, as well as delivering an estimate of the effective Q values in the associated geobody. I also raise the question of whether velocities derived using low-frequency waveform inversion over such dispersive geobodies are wholly appropriate for migration of full-seismic-bandwidth data.
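The size of the effect can be gauged with the standard constant-Q (Kolsky–Futterman) relation v(f) = v_ref [1 + ln(f/f_ref)/(πQ)]. The velocity, reference frequency and Q below are illustrative values, not measurements from the Indian data set:

```python
import numpy as np

def kolsky_velocity(f, v_ref, f_ref, Q):
    """Constant-Q (Kolsky-Futterman) phase velocity vs frequency."""
    return v_ref * (1.0 + np.log(f / f_ref) / (np.pi * Q))

# strong absorption (low Q): dispersion across a decade of frequency
v6, v60 = kolsky_velocity(np.array([6.0, 60.0]), 2000.0, 30.0, 20.0)
print(round(v6, 1), round(v60, 1))   # → 1948.8 2022.1
```

A Q of 20 produces a few percent of velocity difference between the low frequencies used in waveform inversion and the full seismic bandwidth, which is the heart of the concern raised in the final sentence of the abstract.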
Borehole receiver orientation using a 3D velocity model
Authors: Giovanni Menanno, Aldo Vesnaver and Michael Jervis. Available online: 18 October 2012.
ABSTRACT: The orientation of three-component borehole geophones used for recording during a microseismic monitoring experiment is estimated. The standard technology for deploying multi-component geophones in a deep borehole is wireline-based, in which the azimuthal rotation of the geophone string cannot be controlled. Each receiver can have a different rotation angle that is compensated by the particle motion analysis of the direct P-wave arrivals, picked from a walk-around VSP carried out in the proximity of the well. Knowing the orientation of borehole receivers is critical, as inaccuracies lead to systematic errors in determining the hypocentral coordinates of microseismic events. Additional errors arise from over-simplifications of P- and S-wave velocity Earth models.
In this paper, we propose a tomographic approach for improving the orientation estimates of borehole receivers based on hodogram analysis. The initial velocity model built from well logs and upholes is refined by 3D tomographic inversion of walk‐around VSP data and some string shots fired in a nearby borehole. Taking into account ray bending, the estimated errors due to local velocity anomalies can be reduced. This makes our estimates of fracture orientation and microseismic hypocentres more reliable when borehole receivers are used in passive seismic surveys.
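The hodogram (particle-motion) step can be sketched as a principal-component analysis of the two horizontal components over the P-wave window. This toy recovers a synthetic polarization azimuth; it illustrates only the basic polarization analysis, not the tomographic refinement the paper adds on top of it:

```python
import numpy as np

def pwave_azimuth(h1, h2):
    """Polarization azimuth (radians) of a P arrival from the dominant
    eigenvector of the covariance matrix of the two horizontal components."""
    w, v = np.linalg.eigh(np.cov(np.vstack([h1, h2])))
    e = v[:, np.argmax(w)]                   # eigenvector of largest eigenvalue
    return np.arctan2(e[1], e[0]) % np.pi    # a 180-degree ambiguity remains

# synthetic P wavelet, linearly polarized at 30 degrees in the horizontal plane
t = np.linspace(0.0, 1.0, 200)
wavelet = np.sin(2 * np.pi * 10 * t) * np.exp(-20 * (t - 0.5) ** 2)
theta = np.deg2rad(30.0)
az_deg = np.rad2deg(pwave_azimuth(wavelet * np.cos(theta),
                                  wavelet * np.sin(theta)))
print(round(az_deg, 3))   # → 30.0
```

The 180° ambiguity is resolved in practice using the known source azimuth of each walk-around VSP shot; ray bending, which the paper corrects tomographically, biases the apparent azimuth when a straight-ray assumption is used.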
Acoustic full‐waveform inversion of surface seismic data using the Gauss‐Newton method with active constraint balancing
Authors: Yonghwan Joo, Soon Jee Seol and Joongmoo Byun. Available online: 18 October 2012.
ABSTRACT: We propose a full-waveform inversion algorithm for surface seismic data in the frequency domain that uses the Gauss-Newton method with active constraint balancing, a spatially variant damping factor and a source-normalized wavefield approach. The active constraint balancing technique automatically determines the optimum distribution of damping factors that control the stability and resolution of the Gauss-Newton inversion, using a parameter resolution matrix and spread-function analysis. Through numerical experiments, we show that the active constraint balancing scheme provides stable inversion results without a severe loss of resolution compared with the conventional Gauss-Newton method. The reconstructed image using active constraint balancing more closely resembles the true model in regions of low sensitivity, and the estimates converge faster and to a smaller RMS error than those of the conventional Gauss-Newton method. We also implement the normalized wavefield method to overcome the lack of precise knowledge of the source. The source-normalized wavefield approach effectively removes potential inversion errors from source estimation because the source spectrum is eliminated during the normalization procedure. Our inversion algorithm with the source-normalization scheme provides excellent results even when the data are generated by two slightly different source wavelets. We show that the frequency-selection scheme proposed by Sirgue and Pratt, which is based on the average amplitude of the whole received data, provides a useful guideline for selecting the proper frequencies for our inversion. The algorithm successfully reconstructs the velocity model within 10–30 iterations even when starting from a homogeneous or linearly increasing velocity model.
In addition, to test the performance of our inversion algorithm on a more complicated structure, we apply it to the SEG/EAGE overthrust model. Successful inversion is achieved: the reconstructed model approaches the true model with a consistently converging RMS error even though insufficient data are used.
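The damped Gauss-Newton update at the core of such a scheme can be written dm = (JᵀJ + diag(λ))⁻¹ Jᵀr, where active constraint balancing chooses the spatially variant damping vector λ. The tiny linear test problem below is invented to show the update mechanics only, not the authors' resolution-matrix machinery:

```python
import numpy as np

def damped_gn_update(J, residual, damping):
    """One Gauss-Newton model update with a (possibly spatially variant)
    damping vector: dm = (J^T J + diag(damping))^{-1} J^T r."""
    H = J.T @ J + np.diag(damping)
    return np.linalg.solve(H, J.T @ residual)

# tiny linear test problem d = G m: one undamped step recovers m exactly,
# while damping trades resolution for stability (it shrinks the update)
G = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, 1.0]])
m_true = np.array([0.5, -1.0])
r = G @ m_true                                    # residual from a zero model
dm = damped_gn_update(G, r, damping=np.zeros(2))
print(np.allclose(dm, m_true))                    # → True
```

Assigning larger damping to well-resolved parameters and smaller damping to poorly resolved ones is the balancing act that the parameter resolution matrix automates in the paper.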
3D geophysical inversions of the north‐east Amer Belt and their relationship to the geologic structure
Authors: V. Tschirhart, W.A. Morris, C.W. Jefferson, P. Keating, J.C. White and L. Calhoun. Available online: 15 October 2012.
ABSTRACT: The Amer Lake area is located within the Churchill Structural Province in the Kivalliq Region of Nunavut, approximately 160 km north-west of Baker Lake. Two distinct geophysical-geological entities are structurally intercalated: an Archean mixed granitoid gneiss – metasedimentary-metavolcanic basement and the unconformably overlying Paleoproterozoic Amer Group metasediments. From east of Amer Lake stretching toward the south-west, these two entities form the Amer fold and thrust belt. At the north-east end of this belt, high-resolution aeromagnetic data define a distinctive oval shape that has been interpreted as a south-west-trending, doubly plunging synform. Outcrop within the interior of this structure is sparse, resulting in limited structural data and speculative geological interpretations with multiple possible geometries. The high-resolution aeromagnetic data compiled through an industry-government consortium and newly acquired detailed gravity profiles were modelled to provide constraints on the geometry of this synform.
We document a geophysical-geological feedback process whereby the available geological and geophysical data were used to derive constraints on inversion models for the synform. Starting from the limited available litho-structural data, the presence of a doubly plunging synform was directly inferred from the aeromagnetic data. Segments of the aeromagnetic data have 2D morphology and so can be modelled using a simple parametric 2D dipping-slab inversion approach. Models of profiles extracted from the aeromagnetic data were used to provide preliminary dip and magnetic-susceptibility constraints for the Three Lakes mudstone with iron formation and the Five Mile Lake basalt. Landsat imagery outlined the spatial limits of the stratigraphically underlying, non-magnetic Ayagaq quartzite. Incorporating these outputs as bounds in the input/reference model for a UBC-GIF 3D magnetic inversion helped to accentuate the geological structure in the output mesh: an enhanced inversion that incorporates both geological and geophysical constraints. The validity of the resulting inversion model was tested by computing 2D forward models of the gravity profile data. The inversion model generated by this study emphasizes the importance of integrating information from as many knowledge sources as one can find. More trust can be placed on forward and inversion models where there is agreement among all data sets and a coherency of structural style.
Building starting models for full waveform inversion from wide‐aperture data by stereotomography
Authors: Vincent Prieux, Gilles Lambaré, Stéphane Operto and Jean Virieux. Available online: 15 October 2012.
ABSTRACT: Building an accurate initial velocity model for full waveform inversion (FWI) is a key issue in guaranteeing convergence of FWI towards the global minimum of a misfit function. In this study, we assess joint refraction and reflection stereotomography as a tool to build a reliable starting model for frequency-domain FWI from long-offset (i.e., wide-aperture) data. Stereotomography is a slope tomographic method based on the inversion of traveltimes and slopes of locally coherent events in a data cube. One advantage of stereotomography over conventional traveltime reflection tomography is the semi-automatic picking of locally coherent events, which is easier than picking continuous events and can lead to a higher density of picks. While conventional applications of stereotomography only consider short-offset reflected waves, we assess the benefits provided by the joint inversion of reflected and refracted arrivals. Introducing the refracted waves allows the construction of a starting model that kinematically fits the first arrivals, a necessary requirement for FWI. In a similar way to frequency-domain FWI, we design a multiscale approach to stereotomography, which proceeds hierarchically from the wide-aperture to the short-aperture components of the data, to reduce the non-linearity of the stereotomographic inversion of long-offset data. This workflow, which combines stereotomography and full waveform inversion, is applied to synthetic and real data case studies for the Valhall oil-field target.
The synthetic results show that joint refraction and reflection stereotomography for a 24-km maximum-offset data set provides a more reliable initial model for full waveform inversion than reflection stereotomography performed for a 4-km maximum-offset data set, in particular in low-velocity gas layers and in the deep part of the structure below the reservoir. Application of joint stereotomography, full waveform inversion and reverse-time migration to real data reveals that the FWI models and reverse-time migration images computed from the stereotomography model share several features with those computed from an anisotropic reflection-traveltime tomography model, although stereotomography was performed in the isotropic approximation. Implementing anisotropy in joint refraction and reflection stereotomography of long-offset data is a key issue in further improving the accuracy of the method.
Convergence improvement and noise attenuation considerations for beyond alias projection onto convex sets reconstruction
Authors: Jianjun Gao, Aaron Stanton, Mostafa Naghizadeh, Mauricio D. Sacchi and Xiaohong Chen. Available online: 15 October 2012.
ABSTRACT: Projection Onto Convex Sets (POCS) is an effective, uncomplicated and robust reconstruction method for the recovery of irregularly missing seismic traces. However, slow convergence of the POCS reconstruction method can jeopardize its computational appeal. For this reason, we investigate the performance of the POCS reconstruction method under different threshold schedules and present a new data-driven threshold that leads to an efficient implementation of the POCS method. In particular, we show that high-quality solutions can be obtained in a few iterations. In addition, we address an important limitation of classical implementations of POCS reconstruction: they cannot interpolate regularly missing data. To solve this problem, we introduce into the POCS iteration a masking operator based on a dominant-dip scanning method. Finally, we present a variant of the POCS method that permits de-noising of seismic volumes during the reconstruction stage. This is achieved by defining a weighted trace re-insertion strategy that alleviates the influence of noisy traces on the final reconstruction of the seismic volume. We show the effectiveness of the proposed method using synthetic and field data.
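A minimal 1D POCS iteration conveys the basic idea: threshold in the Fourier domain, then re-insert the observed samples, and repeat. The sketch below uses a fixed hard relative threshold and a synthetic single-harmonic signal; it is not the authors' implementation, which operates on seismic volumes with a data-driven threshold schedule and weighted re-insertion:

```python
import numpy as np

def pocs_reconstruct(data, mask, n_iter=100, thresh=0.5):
    """POCS: alternate a hard Fourier-domain threshold with re-insertion of
    the observed samples (mask == 1 where a sample was actually recorded)."""
    x = data.copy()
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[np.abs(X) < thresh * np.abs(X).max()] = 0.0   # keep strong coefficients
        x = mask * data + (1 - mask) * np.real(np.fft.ifft(X))
    return x

rng = np.random.default_rng(0)
n = 128
t = np.arange(n)
signal = np.cos(2 * np.pi * 5 * t / n)           # one spectral component
mask = (rng.random(n) > 0.4).astype(float)       # roughly 40% of samples missing
rec = pocs_reconstruct(signal * mask, mask)
print(float(np.abs(rec - signal).max()) < 0.05)  # → True
```

The paper's contribution sits precisely in the parts this sketch fixes arbitrarily: the threshold schedule (their data-driven schedule converges in few iterations) and the re-insertion weights (their weighted variant de-noises during reconstruction).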
Estimating primaries by sparse inversion, a generalized approach
Authors: F.H.C. Ypma and D.J. Verschuur. Available online: 11 October 2012.
ABSTRACT: For an accurate interpretation of seismic data, multiple-free data are of great value. Removing surface multiples and interbed multiples proves challenging in many cases. The now widely used method of Surface-Related Multiple Elimination (SRME) has lately been redefined as a full-waveform inversion process, resulting in the method of Estimation of Primaries by Sparse Inversion (EPSI). The new method is shown to be more accurate than the former in several situations, because it estimates primaries such that they, together with their multiples, explain the input data. Its main advantage is that the minimum-energy assumption of traditional multiple subtraction is avoided. The SRME methodology has been extended to the case of internal multiples by several authors; however, the subtraction of the predicted multiples involved is probably even more challenging than in the surface-multiple case. Therefore, in this paper the EPSI method is generalized to remove both surface and interbed multiples. As in previous implementations of internal multiple removal based on data-driven convolution, the newly proposed scheme requires some knowledge of the subsurface: the data should be divided into (macro) layers and appropriate time windows must be selected. The method is tested on two 2D synthetic datasets to prove its viability. Furthermore, application to a 2D field dataset showed improved accuracy compared to conventional prediction and subtraction.
Coherent and random noise attenuation via multichannel singular spectrum analysis in the randomized domain
Available online: 19 July 2012.
ABSTRACT: The attenuation of coherent and random noise still poses technical challenges in seismic data processing, especially in onshore environments. Multichannel Singular Spectrum Analysis (MSSA) is an existing and effective technique for random-noise reduction. Incorporating a randomizing operator into MSSA creates a new and powerful filtering method that can attenuate both coherent and random noise simultaneously. The randomizing operator exploits the fact that primary events after NMO are relatively horizontal: it randomly rearranges the order of the input data, reorganizing coherent noise into incoherent noise while having a minimal effect on nearly horizontal primary reflections. This randomizing process enables MSSA to suppress both coherent and random noise simultaneously. The new filter, MSSARD (MSSA in the randomized domain), also resembles a combination of eigenimage and Cadzow filters. I start with a synthetic data set to illustrate the basic concept and then apply MSSARD filtering to a 3D cross-spread data set that was severely contaminated with ground roll and scattered noise. MSSARD filtering gives superior results compared with a conventional 3D f-k filter. For a random-noise example, the application of MSSARD filtering to time-migrated offset-vector-tile (OVT) gathers also produces images with higher signal-to-noise ratios than a conventional f-xy deconvolution filter.
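The rank-reduction kernel shared by MSSA, eigenimage and Cadzow filtering can be sketched on a single frequency slice: Hankel embedding, truncated SVD, then anti-diagonal averaging back to a signal. The linear-event test signal below is synthetic, and the sketch omits the paper's randomizing operator:

```python
import numpy as np

def hankel_rank_reduce(s, rank):
    """One MSSA/Cadzow step on a 1D frequency slice: Hankel embedding,
    truncated SVD, then averaging the anti-diagonals back to a signal."""
    n = len(s)
    L = n // 2 + 1
    H = np.array([s[i:i + n - L + 1] for i in range(L)])    # Hankel matrix
    U, d, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * d[:rank]) @ Vt[:rank]               # rank truncation
    out = np.zeros(n, dtype=complex)
    cnt = np.zeros(n)
    for i in range(L):                                      # anti-diagonal average
        for j in range(n - L + 1):
            out[i + j] += Hr[i, j]
            cnt[i + j] += 1
    return out / cnt

rng = np.random.default_rng(1)
x = np.arange(40)
clean = np.exp(1j * 0.3 * x)            # one linear event -> rank-1 Hankel
noisy = clean + 0.3 * (rng.standard_normal(40) + 1j * rng.standard_normal(40))
den = hankel_rank_reduce(noisy, rank=1)
print(np.linalg.norm(den - clean) < np.linalg.norm(noisy - clean))   # → True
```

Coherent noise also forms low-rank Hankel matrices and therefore survives this step; randomizing the trace order, as MSSARD does, destroys that coherence so the rank reduction removes it along with the random noise.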
Fast waveform inversion without source‐encoding
Authors: Tristan van Leeuwen and Felix J. Herrmann. Available online: 10 July 2012.
ABSTRACT: Randomized source-encoding has recently been proposed as a way to dramatically reduce the cost of full waveform inversion. The main idea is to replace all sequential sources by a small number of simultaneous sources. This introduces random cross-talk into the model updates, and special stochastic optimization strategies are required to deal with it. Two problems arise with this approach: i) source-encoding can only be applied to fixed-spread acquisition setups, and ii) stochastic optimization methods tend to converge very slowly, relying on averaging to suppress the cross-talk. Although the slow convergence is partly offset by a low iteration cost, we show that conventional optimization strategies are bound to outperform stochastic methods in the long run. In this paper we argue that we do not need randomized source-encoding to reap the benefits of stochastic optimization, and we review an optimization strategy that combines the benefits of both conventional and stochastic optimization. The method uses a gradually increasing batch of sources: iterations are initially very cheap, which allows the method to make fast progress in the beginning, and as the batch size grows the method behaves like conventional optimization, allowing for fast convergence. Stylized numerical examples suggest that the stochastic and hybrid methods perform equally well with and without source-encoding, and that the hybrid method outperforms both conventional and stochastic optimization. The method does not rely on source-encoding techniques and can thus be applied to marine data. We illustrate this on a realistic synthetic model.
Reconstructing frequency‐magnitude statistics from detection limited microseismic data
Authors: Michael J. Williams and Joel Le Calvez. Available online: 10 July 2012.
ABSTRACT: Microseismic monitoring, particularly the monitoring of hydraulic fracturing in gas- and oil-bearing shales, has developed significantly over the last ten years. Early work focused on the location of microseismic events but more recently there have been attempts to extract more of the information afforded by this rich data source. In particular, the recovery of the frequency-magnitude distribution, which is expected to follow a Gutenberg-Richter distribution, may provide insights into the prevailing effective stress regime in the vicinity of the events. This stress regime varies with distance from the hydraulic fracturing: at the propagating fracture one expects conditions for tensile or shear failure; away from the fracture one may broadly expect microseismicity associated with pre-existing weakness in the rock, occurring at effective stress conditions close to the conditions existing prior to the treatment.
All geophysical experiments are detection limited and microseismic monitoring does not differ in this regard. In constructing a statistical indicator such as the distribution of moment magnitudes, we would like the estimate to be robust and to use as much of the data as possible. In analysing earthquake catalogues, the predominant practice is to determine a magnitude of completeness denoting the detection limit of the catalogue. This approach defines a minimum magnitude above which all events are thought to have been reliably recorded. In effect, it imposes an artificial, conservative detection limit to replace the unknown detection limit of the catalogue. We present the case of an arbitrary detection limit and introduce an approach from astronomy that is particularly suited to the single-well observing geometry most prevalent in hydraulic fracture monitoring.
We calculate b‐values for a set of event magnitudes from the Barnett Shale formation, where multiple stimulation treatments were applied in a pair of wells (‘zipper frac’) followed by a four‐stage treatment in a third well and find significant variations in the b‐value between the pumped stages.
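For context, the standard completeness-magnitude approach the paper improves upon uses Aki's maximum-likelihood estimator b = log₁₀(e)/(⟨M⟩ − m_c) on events above the completeness level m_c. The sketch below checks it on a synthetic Gutenberg-Richter catalogue (all values invented); it is the conservative baseline, not the astronomy-derived method of the paper:

```python
import numpy as np

def b_value(magnitudes, m_c):
    """Aki-style maximum-likelihood b-value for events above completeness m_c."""
    m = magnitudes[magnitudes >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# synthetic Gutenberg-Richter catalogue with b = 1: magnitudes above the
# completeness level are exponentially distributed with rate b * ln(10)
rng = np.random.default_rng(2)
m_c, b_true = -3.0, 1.0
mags = m_c + rng.exponential(scale=1.0 / (b_true * np.log(10)), size=50000)
b_hat = b_value(mags, m_c)
print(round(b_hat, 2))
```

Discarding everything below m_c is exactly the data loss the paper's arbitrary-detection-limit treatment is designed to avoid, which matters for small per-stage microseismic catalogues.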
A passive low‐frequency seismic experiment in the Albertine Graben, Uganda
Authors: F. Martini, I. Lokmer, K. Jonsdottir, L. De Barros, M. Möllhoff, C. J. Bean, F. Hauser, J. Doherty, C. Ryan and J. Mongan. Available online: 05 July 2012.
ABSTRACT: A passive seismic experiment was conducted in April/May 2010 in the Albertine Graben region in Uganda to record low-frequency seismic signals and explore the possibility of their exploitation in this area as a direct hydrocarbon indicator (DHI). Recordings were made at locations directly overlying both hydrocarbon and water-bearing strata within the sedimentary basin as well as reference sites external to the basin, directly on the basement. Contrary to findings published in some literature to date, we found that spatial variations in the analysed wavefield parameters correlate with the underlying geology rather than the presence or absence of hydrocarbons. Inversion of the surface-wave (fundamental mode) dispersion curve as well as the observed horizontal-to-vertical spectral ratio of both surface and body waves provide evidence that the observed spectral variations can be explained solely by a simple layered/gradient velocity model, without the presence of any kind of anomaly that could be attributed exclusively to a hydrocarbon reservoir.
Consequently, it is recommended that knowledge of the geological and velocity structure be sought when analysing passive low‐frequency seismic data sets. This is a fundamental prerequisite for guarding against misinterpretation of the spatial variation of seismically derived attributes as DHIs.
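The horizontal‐to‐vertical spectral ratio mentioned in the abstract can be sketched as below. This is a generic illustration (function name and parameters are assumptions, and the spectral smoothing, windowing and ensemble averaging used in practice are omitted).

```python
import numpy as np

def hv_spectral_ratio(north, east, vertical, fs, nfft=1024):
    """Horizontal-to-vertical spectral ratio from three-component
    records: quadratic mean of the two horizontal amplitude spectra
    divided by the vertical amplitude spectrum."""
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    n_spec = np.abs(np.fft.rfft(north, nfft))
    e_spec = np.abs(np.fft.rfft(east, nfft))
    v_spec = np.abs(np.fft.rfft(vertical, nfft))
    # A water level guards against division by near-zero vertical bins.
    v_safe = np.maximum(v_spec, 1e-10 * v_spec.max())
    return freqs, np.sqrt((n_spec ** 2 + e_spec ** 2) / 2.0) / v_safe

# Horizontal motion twice the vertical gives an H/V peak of 2.
fs, nfft = 100.0, 1024
t = np.arange(nfft) / fs
f0 = 50 * fs / nfft                 # exactly on FFT bin 50
vert = np.sin(2 * np.pi * f0 * t)
freqs, hv = hv_spectral_ratio(2 * vert, 2 * vert, vert, fs, nfft)
```

The paper's point is that peaks in such ratios track layered velocity structure, so an H/V anomaly alone is not evidence of hydrocarbons.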
-
-
-
Two‐dimensional fast generalized Fourier interpolation of seismic records
Authors Mostafa Naghizadeh and Kristopher A. Innanen. Available online: 05 July 2012. ABSTRACT: The fast generalized Fourier transform algorithm is extended to two‐dimensional data cases. The algorithm provides a fast, non‐redundant alternative for the simultaneous time‐frequency and space‐wavenumber analysis of data with time‐space dependencies. The transform decomposes data based on local slope information, making it possible to extract a weight function based on dominant dips from alias‐free low frequencies. By projecting the extracted weight function to alias‐contaminated high frequencies and utilizing a least‐squares fitting algorithm, a beyond‐alias interpolation method is accomplished. Synthetic and real data examples are provided to examine the performance of the proposed interpolation method.
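The premise behind projecting low‐frequency dip information to higher frequencies is that, for data composed of linear events, spectral energy at temporal frequency f concentrates along wavenumbers proportional to f. The demonstration below illustrates only this scaling on a single synthetic event, not the authors' fast generalized Fourier transform; all grid parameters are illustrative assumptions.

```python
import numpy as np

# A linear event t = t0 + p*x concentrates its 2-D spectral energy along
# k = -p*f (the sign follows the FFT convention), so a dip weight
# measured at an alias-free low frequency can be stretched along k to
# any higher frequency.
nt, nx, dt, dx = 256, 64, 0.004, 10.0
p = 0.0002                              # event slope in s/m
t = np.arange(nt) * dt
data = np.zeros((nt, nx))
for ix in range(nx):
    # Gaussian wavelet delayed linearly with offset
    data[:, ix] = np.exp(-((t - 0.3 - p * ix * dx) / 0.01) ** 2)

spec = np.fft.fftshift(np.abs(np.fft.fft2(data)))
freqs = np.fft.fftshift(np.fft.fftfreq(nt, dt))
ks = np.fft.fftshift(np.fft.fftfreq(nx, dx))

# The spectral peak of the ~20 Hz slice sits within one wavenumber bin
# of k = -p*f.
f_idx = int(np.argmin(np.abs(freqs - 20.0)))
k_peak = ks[np.argmax(spec[f_idx, :])]
```

Because the dip trajectory is known once it is measured where no aliasing occurs, the high‐frequency fit can be restricted to that trajectory, which is what makes interpolation beyond the alias limit possible.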
-
-
-
Constrained 1D joint inversion of seismic surface waves and P‐refraction traveltimes
Authors C. Piatti, L.V. Socco, D. Boiero and S. Foti. Available online: 15 May 2012. ABSTRACT: We present a joint inversion scheme that couples P‐wave refraction and seismic surface‐wave data for a layered subsurface. The algorithm is implemented with a damped least‐squares approach. The estimated parameters are S‐ and P‐wave velocities and layer thicknesses, while densities are assumed constant during inversion. The coupling is both geometric and physical: layer thicknesses are the same for the S‐ and P‐wave velocity profiles, and P‐wave velocities enter both forward algorithms. Sensitivity analysis, performed on synthetic data, reveals that surface‐wave dispersion curves can also be sensitive to the P‐wave velocity of some layers (especially for Poisson's ratio values smaller than about 0.35), allowing synergic resolution of this parameter. Applications to both synthetic and field data show that the proposed approach mitigates the hidden‐layer problem of seismic refraction and leads to more accurate results than individual inversions, also for surface waves. Additional constraints on a priori Poisson's ratio values in the objective function prevent unrealistic and inadmissible VP and VS values; such constraints were applied in one field case using the a priori information available about water‐table depth. It is also shown that estimation of porosity can help in selecting the proper constraint on a priori Poisson's ratio.
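The damped least‐squares machinery named in the abstract can be sketched generically. The toy exponential forward model below merely stands in for the actual dispersion‐curve and traveltime kernels, which are not reproduced here; all names and values are illustrative assumptions.

```python
import numpy as np

def damped_least_squares(forward, jacobian, d_obs, m0, lam=0.1, n_iter=100):
    """Damped (Levenberg-Marquardt-style) least-squares inversion:
    each iteration solves (J^T J + lam*I) dm = J^T (d_obs - g(m))."""
    m = np.asarray(m0, dtype=float)
    for _ in range(n_iter):
        r = d_obs - forward(m)
        J = jacobian(m)
        dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
        m = m + dm
    return m

# Hypothetical two-parameter forward model standing in for the real
# dispersion and traveltime kernels.
xs = np.linspace(0.0, 1.0, 20)

def forward(m):
    return m[0] * np.exp(-m[1] * xs)

def jacobian(m):
    e = np.exp(-m[1] * xs)
    return np.column_stack([e, -m[0] * xs * e])

d_obs = forward(np.array([2.0, 3.0]))       # noise-free synthetic data
m_est = damped_least_squares(forward, jacobian, d_obs, m0=[1.0, 1.0])
```

The damping term `lam` stabilizes the step where the Jacobian is poorly conditioned, which is the usual situation when a layer is a hidden layer for one of the two data types; a priori bounds such as the Poisson's ratio constraint enter the same objective function as additional penalty terms.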
-
-
-
THREE DIMENSIONAL GRAVITY SURVEY*
By W. DOMZALSKI. Available online: 09 October 2008. ABSTRACT: The paper describes and discusses the results of an experimental gravity survey carried out underground on different levels of a mine, in the mine shafts, and on the surface above the mine workings.
The paper is composed of three complementary sections. The part dealing with gravity measurements in the shafts also gives attention to the particular problem of terrain corrections underground due to the surface topography. The interval densities from gravity measurements in the shafts are computed, adjusted in accordance with the known geology, and compared with the stratigraphical columns of the shafts. The effect of the ore body on the stations in the shaft is derived theoretically and compared with the observed one.
The gravity contours are constructed on different levels in the mine workings and discussed in relation to the known extent of the ore body. A gravity profile across a fault underground is presented and discussed. Another gravity profile was run underground in the same plan position as a surface traverse 1000 feet above it; the line of boreholes along this traverse gives a good account of the geology, which includes step‐faulting, and this known geology is compared with the deductions based on the gravity results. The same is done for another gravity profile run over a known geological section. A number of gravity measurements were also taken in the same plan position, separated by a vertical distance of 800–1,100 feet; these points were placed by the boreholes previously drilled in the area. An attempt is made to correlate these measurements with the gravity results.
The densities computed from the gravity measurements are compared with the laboratory determinations of the densities, carried out on samples from different parts of the mine.
The contours on the top of the base formation are constructed from the information obtained from the boreholes, and are compared with the gravity contours on the surface above.
A simple method of computation of the effects of slabs and blocks is presented as applied to the calculation of the corrections for underground drifts and faults. A table is appended for use with this method.
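The building block of such slab computations is the attraction of an infinite horizontal slab, 2πGρt. The sketch below evaluates only that textbook formula, not the paper's tabulated method for finite slabs and blocks; note the infinite slab is an upper bound on the effect of a finite drift, and the numbers are illustrative.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_effect_mgal(density_kg_m3, thickness_m):
    """Vertical attraction of an infinite horizontal slab (Bouguer
    slab), 2*pi*G*rho*t, converted from m/s^2 to milligal."""
    return 2.0 * math.pi * G * density_kg_m3 * thickness_m * 1e5

# A 3 m high air-filled drift in rock of density 2670 kg/m^3 gives a
# slab effect of about a third of a milligal, large enough to matter
# for underground stations.
print(round(slab_effect_mgal(2670.0, 3.0), 3))  # → 0.336
```

This order of magnitude explains why corrections for nearby drifts are essential before interval densities between underground levels can be interpreted.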
-
-
-
DRILLING METHODS IN SAND AND GRAVEL FORMATIONS*
By G. MURATORI. Available online: 09 October 2008. ABSTRACT: In the development of seismic drilling methods in Northern Italy, the most important considerations have been ease of transport and speed of operation. Different drilling systems are described, adapted to the different types of near‐surface formation encountered in the area. In soft sands a high‐pressure water jet is used, a rotary table not being required in a great many cases. In medium‐consistency sand and small gravel, the drill rods are rotated by special roller‐wrench equipment instead of the more usual kelly. In coarse formations, gravel and boulders, casing is driven by special pile‐driving equipment with a collapsible mast for easy transport.
-
-
-
SOME THEORETICAL CONSIDERATIONS ON THE USE OF MULTIPLE GEOPHONES ARRANGED LINEARLY ALONG THE LINE OF TRAVERSE*
Authors F. W. HALES and T. E. EDWARDS. Available online: 09 October 2008. ABSTRACT: An analytical treatment is given of the response of linear arrays of multiple geophones as a function of the direction of incidence of the wave and of the wave frequency. The relation between the response and the direction of incidence may be quite complicated, the response being zero in several directions. As a function of wave frequency, the response may also have several zero values within the frequency band in which reflections are expected. This would result in a serious modification of the filter characteristics of the amplifier. It is shown, however, how such a modification of the filter characteristics may be avoided by a judicious choice of the number and spacing of the geophones in a multiple group.
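The response of such a uniformly weighted in‐line array is the classic ratio |sin(Nπfds)/(N sin(πfds))|, where d is the geophone spacing and s the apparent along‐line slowness of the arrival. A minimal sketch of its evaluation follows; the numbers are illustrative and not taken from the paper.

```python
import math

def array_response(n, spacing_m, slowness_s_m, freq_hz):
    """Normalized amplitude response of n equally spaced, equally
    weighted in-line geophones to a plane wave of the given apparent
    (along-line) slowness: |sin(n*pi*f*d*s) / (n*sin(pi*f*d*s))|."""
    x = math.pi * freq_hz * spacing_m * slowness_s_m
    if abs(math.sin(x)) < 1e-12:   # in-phase arrival: full response
        return 1.0
    return abs(math.sin(n * x) / (n * math.sin(x)))

# Six geophones at 10 m spacing against a horizontally travelling
# 1000 m/s wave: the first null falls where f*d*s = 1/6, i.e. near
# 16.7 Hz, inside a typical reflection band.
r_null = array_response(6, 10.0, 1.0 / 1000.0, 1000.0 / 60.0)
```

The nulls move with N and d, which is exactly the design freedom the abstract describes: choosing the number and spacing of geophones places the zeros of the response outside the reflection band.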
-
-
-
BOOK REVIEW
Available online: 09 October 2008. Book reviews in this article:
L. W. Sorokin. – Lehrbuch der Geophysikalischen Methoden zur Erkundung von Erdölvorkommen [Textbook of Geophysical Methods for the Exploration of Petroleum Deposits].
M. R. J. Wyllie. – The fundamentals of electric log interpretation. Academic Press Inc., New York, Sept. 1954. Price $3.60.
-
-
-