Geophysical Prospecting - Online First
Automatic detection and imaging of diffraction points using pattern recognition
Authors: J. J. S. de Figueiredo, F. Oliveira, E. Esmi, L. Freitas, J. Schleicher, A. Novais, P. Sussner and S. Green
Available online: 07 December 2012

ABSTRACT
Hydrocarbon reservoirs are generally located beneath complex geological structures. Frequently, such areas contain seismic diffractors that carry detailed structural information on the order of the seismic wavelength. The development of computational tools capable of detecting diffraction points with good resolution is therefore desirable, but has remained a challenge in seismic processing. In this work, we present a method for the detection of diffraction points in the common-offset-gather domain. The method applies a two-class k-nearest-neighbours (kNN) pattern-recognition technique to amplitudes along diffraction traveltime curves to distinguish diffractions from reflections or noise. While the method in principle requires knowledge of the migration velocity field, it is very robust with respect to an erroneous model. Numerical examples using synthetic seismic and field ground-penetrating-radar (GPR) data demonstrate the feasibility of the technique and show its usefulness for automatically mapping diffraction points in a seismic section. In our applications, the method detected all diffractions present in the data and produced no false positives.
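The classification core, a two-class kNN vote over amplitude feature vectors, can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation; the toy feature vectors and labels are invented for the example.

```python
import numpy as np

def knn_classify(train_feats, train_labels, query, k=3):
    """Two-class k-nearest-neighbours vote: label a query feature
    vector (e.g. amplitudes sampled along a diffraction traveltime
    curve) by majority vote among its k closest training examples."""
    d = np.linalg.norm(train_feats - query, axis=1)  # Euclidean distances
    nearest = np.argsort(d)[:k]                      # indices of k closest
    return 1 if train_labels[nearest].sum() > k / 2 else 0

# Toy training set: class 1 (diffraction-like) clusters near (1, 1),
# class 0 (reflection/noise-like) near (0, 0)
feats = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
label = knn_classify(feats, labels, np.array([0.95, 1.0]))  # classified as 1
```

In practice the feature vectors would be amplitude windows extracted along candidate diffraction traveltime curves computed from the (possibly erroneous) migration velocity field.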
Curvature analysis to differentiate magnetic sources for geologic mapping
Authors: Madeline Lee, William Morris, George Leblanc and Jeff Harris
Available online: 23 November 2012

ABSTRACT
Curvature of a surface is typically applied in seismic data interpretation; however, this work outlines its application to a potential field, specifically aeromagnetic data. The curvature of a magnetic grid (from point data) is calculated by fitting a quadratic surface within a moving window at each grid node. The overall and directional curvatures calculated within this window provide insight into the geometry of the magnetic grid surface and its causative sources. The curvature analysis combines qualitative (graphical) and quantitative (statistical) approaches, and involves the calculation of full, profile and plan curvatures. The magnitude, sign and relative ratios enable the user to define source location and geometry and also to discriminate source type; for example, to differentiate between a fault and a normal-polarity dyke. The reliability of the analysis is refined when a priori geological knowledge is available and basic statistics are considered. By allotting a weighting scheme to various statistical populations (e.g., standard deviation), increased detail is extracted on the different lithologies and structures represented by the data set. Furthermore, the curvature's behaviour is analogous to derivative calculations (vertical, horizontal and tilt) in producing a zero value at the source edge and either a local maximum or minimum over the source. Application prior to semi-automated methods may help identify the correct indices necessary for identification of magnetic sources. Curvature analysis is successfully applied to an aeromagnetic data set over the 2.6–1.85 Ga Paleoproterozoic Wopmay orogen, Northwest Territories, Canada. This area has undergone regional and local-scale faulting and is host to multiple generations of dyke swarms. As the area has been extensively mapped, this data set proved to be an ideal test site.
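The node-by-node quadratic-surface fit can be sketched as follows: a minimal numpy illustration using a 3x3 window, with grid-spacing and scaling conventions assumed rather than taken from the paper.

```python
import numpy as np

def window_curvature(window, dx=1.0):
    """Least-squares fit of z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f
    to a 3x3 grid window; mean and Gaussian curvature then follow
    from the quadratic coefficients (up to a grid-scale factor)."""
    ys, xs = np.mgrid[-1:2, -1:2] * dx
    A = np.column_stack([xs.ravel()**2, (xs * ys).ravel(), ys.ravel()**2,
                         xs.ravel(), ys.ravel(), np.ones(9)])
    a, b, c, d, e, f = np.linalg.lstsq(A, window.ravel(), rcond=None)[0]
    return a + c, 4 * a * c - b**2   # mean, Gaussian curvature

# Synthetic bowl z = x^2 + y^2: both curvatures positive
ys, xs = np.mgrid[-1:2, -1:2]
k_mean, k_gauss = window_curvature((xs**2 + ys**2).astype(float))
```

Sweeping such a window over the aeromagnetic grid and mapping the sign changes and extrema of these attributes is what locates source edges and helps discriminate source geometry.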
Sea‐bed diffractions and their impact on 4D seismic data
Authors: Sissel Grude, Bård Osdal and Martin Landrø
Available online: 23 November 2012

ABSTRACT
A sea-bed reflection is used to estimate changes in water-layer velocities between time-lapse seismic surveys. Such corrections are crucial for obtaining high-quality 4D seismic data. This can be a challenge in areas where rough sea-bed topography creates sea-bed diffractions that interfere with the sea-bed reflection, as in several of the northern fields in the Norwegian Sea. These diffractions and diffracted multiples are difficult to attenuate during data processing and become a source of noise in time-lapse data.
In this work we study how cross-correlation analysis of time-lapse seismic data at a sea-bed reflection may be perturbed by the presence of diffractions. The sea-bed topography from the Norne field is used in 2D finite-difference modelling to explain some of the observed variation in the water-layer time-shift in field data. We find that a rough sea-bed and the diffracted energy it induces cause residual noise in the 4D data. The variation in the water-layer time-shift increases with sea-bed complexity and is amplified by interaction with other sources of non-repeatability such as water-column variations, mis-positioning and the strength of the ice scours creating the diffracted energy. Time-shift variations with mis-positioning and velocity changes between the surveys seem to best explain the observed variation in the time-shift for time-lapse seismic field data from Norne.
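The basic time-shift measurement underlying such an analysis, picking the lag of the cross-correlation maximum between baseline and monitor traces, can be sketched as below. This is a minimal single-trace illustration with synthetic pulses, not the authors' workflow.

```python
import numpy as np

def xcorr_timeshift(base, monitor, dt):
    """Time-shift between two traces from the lag of the
    cross-correlation maximum; positive means 'monitor' arrives
    later than 'base'."""
    cc = np.correlate(monitor, base, mode='full')   # lags -(n-1)..(n-1)
    lag = int(np.argmax(cc)) - (len(base) - 1)
    return lag * dt

dt = 0.004                                   # 4 ms sampling
t = np.arange(256) * dt
base = np.exp(-((t - 0.5) / 0.02)**2)        # sea-bed pulse at 0.50 s
monitor = np.exp(-((t - 0.508) / 0.02)**2)   # same pulse, 8 ms later
shift = xcorr_timeshift(base, monitor, dt)   # 0.008 s
```

Diffractions that interfere with the sea-bed reflection distort the shape of the correlation peak, which is how they perturb the estimated water-layer time-shift.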
P‐wave attenuation anisotropy in fractured media: A seismic physical modelling study
Authors: A.M. Ekanem, J. Wei, X.-Y. Li, M. Chapman and I.G. Main
Available online: 08 November 2012

ABSTRACT
We used a laboratory-scale model to study the effects aligned fractures might have on seismic-wave propagation at a larger scale in real Earth imaging. Our main objective was to investigate the effect of aligned fractures on seismic P-wave amplitude through estimation of the induced attenuation. The physical model was constructed from a mixture of epoxy resin and silicon rubber, with inclusions designed to simulate two sets of fractures inclined at an angle of 29.2° to each other. Two-dimensional reflection data were acquired using the pulse and transmission method in three principal directions relative to the fracture strike azimuth, with the model submerged in a water tank. We used the Quality Versus Offset (QVO) method, an extension of the classical spectral-ratio method for determining attenuation, to estimate the induced attenuation (the inverse of the seismic quality factor) from the pre-processed common-midpoint (CMP) gathers. The results of our analysis show that the induced P-wave attenuation is anisotropic, with elliptical (cos 2θ) variations with respect to the survey azimuth angle θ. The minor axis of the Q ellipse corresponds to the fracture normal; in this direction, i.e. across the material grain, the attenuation is a maximum. The major axis corresponds to the fracture-strike direction (parallel to the material grain), where minimum attenuation occurs. These attenuation results are consistent with the azimuthal anisotropy observed in the stacking velocities in the fractured layer and with the physical model, and thus provide a physical basis for using attenuation anisotropy to derive fracture properties from seismic data.
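The classical spectral-ratio idea that QVO extends can be sketched on synthetic spectra: the log of the spectral ratio across an attenuating interval decays linearly with frequency, with slope -pi*dt/Q. A minimal numpy illustration, not the QVO method itself:

```python
import numpy as np

def spectral_ratio_q(freqs, amp_above, amp_below, dt_interval):
    """Fit a line to ln(A_below / A_above) versus frequency; for an
    interval of traveltime dt_interval the slope equals -pi*dt/Q,
    so Q = -pi*dt/slope."""
    slope = np.polyfit(freqs, np.log(amp_below / amp_above), 1)[0]
    return -np.pi * dt_interval / slope

# Synthetic spectra across a layer with Q = 50 and 0.2 s traveltime
freqs = np.linspace(10.0, 60.0, 26)
amp_above = np.ones_like(freqs)
amp_below = np.exp(-np.pi * freqs * 0.2 / 50.0)
q_est = spectral_ratio_q(freqs, amp_above, amp_below, 0.2)  # close to 50
```

Repeating such estimates as a function of survey azimuth is what reveals the elliptical (cos 2θ) attenuation variation described above.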
The impact of CO2 on the electrical properties of water bearing porous media – laboratory experiments with respect to carbon capture and storage
Authors: Jana H. Börner, Volker Herdegen, Jens-Uwe Repke and Klaus Spitzer
Available online: 08 November 2012

ABSTRACT
We conducted a detailed experimental investigation of the effect of CO2 injection on the electrical conductivity of water-bearing porous media, needed for improved geophysical monitoring of CO2 storage reservoirs. To this end, we developed an experimental set-up that allows us to investigate the electrical characteristics of the injection process as well as the impact of dissolved CO2 on pore-water conductivity. We found that a gaseous, fluid or supercritical pure CO2 phase bears no relevant conductivity at pressures up to 13 MPa and temperatures up to 50 °C. When CO2 dissolves in pore water, pressure-dependent dissociation processes can double the pore-water conductivity, which can be exploited for leakage detection. This effect is quantified by an adaptation of Archie's law; the empirical adaptation and the experimental data are confirmed by combined geochemical-geoelectrical modelling. Furthermore, water-saturated sand samples were investigated while CO2 displaced the pore water at pressures up to 13 MPa and temperatures up to 40 °C. A decrease in electrical conductivity by a factor of up to 33 was measured, corresponding to a residual water saturation of 14–19%. Qualitatively, a decrease was also demonstrated under supercritical conditions. As an integrative interpretation, a conceptual model of electrical rock properties during CO2 sequestration is presented.
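The saturation dependence quantified above follows the general shape of Archie's law; the sketch below uses assumed exponents (m = n = 2), not the authors' fitted adaptation.

```python
def archie_conductivity(sigma_w, phi, s_w, m=2.0, n=2.0):
    """Archie's law for a clean water-bearing rock: bulk conductivity
    sigma = sigma_w * phi**m * s_w**n, with pore-water conductivity
    sigma_w, porosity phi, water saturation s_w, and (assumed)
    cementation and saturation exponents m and n."""
    return sigma_w * phi**m * s_w**n

# CO2 displacing pore water down to ~17% residual water saturation
full = archie_conductivity(sigma_w=1.0, phi=0.3, s_w=1.0)
residual = archie_conductivity(sigma_w=1.0, phi=0.3, s_w=0.17)
drop = full / residual   # roughly 35x, same order as the observed factor 33
```

With n = 2, a residual saturation in the measured 14–19% range indeed predicts a conductivity drop of the same order as the factor 33 reported in the experiments.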
Suitability of 10 Hz vertical geophones for seismic noise array measurements based on frequency‐wavenumber and extended spatial autocorrelation analyses
Authors: S. Rosa-Cintas, J.J. Galiana-Merino, J. Rosa-Herranz, S. Molina and J. Giner-Caturla
Available online: 07 November 2012

ABSTRACT
Microzonation studies using ambient-noise measurements constitute a promising way of evaluating seismic hazard in urban areas. Among the existing techniques, seismic-noise array measurements have become a valuable tool for estimating Vs profiles and thus the characteristics of a soil structure. Although methods based on the analysis of seismic noise are simpler, cheaper and faster than borehole drilling and down-hole or cross-hole logs for deriving shear-wave velocity profiles, array deployment requires the use of several stations (broadband or short-period sensors) that are not always available. In this paper, we compare the results obtained by 10-Hz vertical-geophone arrays with the results provided by 1-Hz sensor arrays. Two sites in the Bajo Segura Basin (SE Spain), with different soil characteristics, were chosen for array deployment. The comparison is carried out in terms of dispersion curves by using frequency-wavenumber (f-k) and extended spatial autocorrelation (ESAC) techniques. Both analyses show good agreement using either 1-Hz sensors or 10-Hz geophones; moreover, they demonstrate that it is possible to extend the analysis to a frequency range well below the natural frequency of the geophones. The results of our study confirm that standard seismic refraction/reflection equipment is also suitable for ambient-noise array measurements, which constitutes a cheaper and faster way of investigating soil characteristics.
Downhole interferometric illumination diagnosis and balancing
Available online: 07 November 2012

ABSTRACT
With seismic interferometry, or the virtual-source method, controlled sources can be redatumed from the Earth's surface to generate so-called virtual sources at downhole receiver locations. Generally this is done by cross-correlation of the recorded downhole data and stacking over source locations. By studying the retrieved data at zero time lag, the downhole illumination conditions that determine the virtual-source radiation pattern can be analysed without a velocity model. This can be beneficial for survey planning in time-lapse experiments. Moreover, the virtual-source radiation pattern can be corrected by multi-dimensional deconvolution or directional balancing. Such an approach can help to improve virtual-source repeatability, offering major advantages for reservoir monitoring. An algorithm is proposed for so-called illumination balancing (closely related to directional balancing). It can be applied to single-component receiver arrays with limited aperture below a strongly heterogeneous overburden. The algorithm is demonstrated on synthetic 3D elastic data to retrieve time-lapse amplitude attributes.
Preserved‐traveltime smoothing
Authors: V. Vinje, A. Stovas and D. Reynaud
Available online: 07 November 2012

ABSTRACT
Most ray-based migration and tomography methods require a certain degree of smoothness of the depth velocity model. Since smoothing changes the velocity model, it is generally impossible to preserve the traveltime between all pairs of points in the model. With conventional smoothing, the traveltime errors are particularly large at discontinuities in the velocity model. These errors are offset-dependent and cause errors in both the depth and the residual moveout (RMO) of depth-migrated images. Here we propose a new method, Preserved-Traveltime Smoothing (PTS), with the objective of preserving traveltimes (and hence depths) at these discontinuities. This is accomplished by smoothing velocity moments for anisotropic models using a specific cross-correlation filter with the desired smoothness properties. The method is valid for isotropic, transversely isotropic with vertical symmetry axis (VTI) and structural transverse isotropy (STI) models. The variations of the model parameters perpendicular to the symmetry axis of the anisotropy are assumed to be small.
Absorption related velocity dispersion below a possible gas hydrate geobody
Author: Ian F. Jones
Available online: 07 November 2012

ABSTRACT
Velocity dispersion is not usually a problem in surface seismic data processing, as the seismic bandwidth is relatively narrow and thus, for most Q values, dispersive effects are not noticeable. However, for highly absorptive bodies, such as the overpressured free-gas accumulations associated with some gas hydrates or high-porosity, normally pressured gas sands, dispersive effects may be seen. In this work I analyse one such data set from offshore the north-east coast of India. I demonstrate that the effect is measurable and that compensating for it in either data processing or migration can improve the wavelet character, as well as delivering an estimate of the effective Q values in the associated geobody. I also raise the question of whether velocities derived using low-frequency waveform inversion over such dispersive geobodies are wholly appropriate for migration of full seismic-bandwidth data.
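The magnitude of the effect can be gauged with the standard constant-Q (Kolsky-Futterman) dispersion relation; a sketch with an assumed reference frequency, not anything specific to this data set.

```python
import numpy as np

def futterman_velocity(f, v_ref, f_ref, q):
    """Constant-Q phase-velocity dispersion:
    v(f) = v_ref * (1 + ln(f / f_ref) / (pi * Q)).
    For low Q the logarithmic term is no longer negligible across
    the seismic band, which is when dispersion becomes visible."""
    return v_ref * (1.0 + np.log(f / f_ref) / (np.pi * q))

# A Q = 20 geobody: ~3% velocity difference between 10 Hz and 80 Hz
v_low = futterman_velocity(10.0, 1500.0, 10.0, 20.0)
v_high = futterman_velocity(80.0, 1500.0, 10.0, 20.0)
```

The frequency dependence of v(f) is also the root of the closing question: a velocity estimated from the low-frequency band need not be the right one for migrating the full bandwidth.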
Borehole receiver orientation using a 3D velocity model
Authors: Giovanni Menanno, Aldo Vesnaver and Michael Jervis
Available online: 18 October 2012

ABSTRACT
The orientation of three-component borehole geophones used for recording during a microseismic monitoring experiment is estimated. The standard technology for deploying multi-component geophones in a deep borehole is wireline-based, so the azimuthal rotation of the geophone string cannot be controlled. Each receiver can have a different rotation angle, which is compensated by particle-motion analysis of the direct P-wave arrivals picked from a walk-around VSP carried out in the proximity of the well. Knowing the orientation of borehole receivers is critical, as inaccuracies lead to systematic errors in determining the hypocentral coordinates of microseismic events. Additional errors arise from over-simplifications of the P- and S-wave velocity Earth models.
In this paper, we propose a tomographic approach for improving the orientation estimates of borehole receivers based on hodogram analysis. The initial velocity model built from well logs and upholes is refined by 3D tomographic inversion of walk‐around VSP data and some string shots fired in a nearby borehole. Taking into account ray bending, the estimated errors due to local velocity anomalies can be reduced. This makes our estimates of fracture orientation and microseismic hypocentres more reliable when borehole receivers are used in passive seismic surveys.
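The particle-motion (hodogram) step can be sketched as a principal-component analysis of the horizontal components of the direct P arrival; a minimal numpy illustration with an invented polarization angle, not the authors' tomographic refinement.

```python
import numpy as np

def p_wave_azimuth(east, north):
    """Dominant particle-motion azimuth (degrees, 0-180) from the
    principal eigenvector of the horizontal-component covariance.
    Comparing it with the known source-receiver azimuth from a
    walk-around VSP yields the receiver's rotation angle."""
    w, v = np.linalg.eigh(np.cov(np.vstack([east, north])))
    e, n = v[:, np.argmax(w)]          # eigenvector of largest eigenvalue
    return np.degrees(np.arctan2(e, n)) % 180.0

# Rectilinear P motion polarized 30 degrees east of north
t = np.linspace(0.0, 1.0, 200)
pulse = np.sin(2 * np.pi * 5 * t)
az = p_wave_azimuth(pulse * np.sin(np.radians(30.0)),
                    pulse * np.cos(np.radians(30.0)))   # close to 30
```

Ray bending in a heterogeneous 3D model changes the expected arrival azimuth at the receiver, which is why the tomographic velocity refinement described above improves the orientation estimates.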
Acoustic full‐waveform inversion of surface seismic data using the Gauss‐Newton method with active constraint balancing
Authors: Yonghwan Joo, Soon Jee Seol and Joongmoo Byun
Available online: 18 October 2012

ABSTRACT
We propose a full-waveform inversion algorithm for surface seismic data in the frequency domain that uses the Gauss-Newton method with active constraint balancing, i.e. a spatially variant damping factor, together with a source-normalized wavefield approach. The active constraint balancing technique automatically determines the optimum distribution of damping factors, which control the stability and resolution of the Gauss-Newton inversion, by using a parameter resolution matrix and spread-function analysis. Through numerical experiments, we show that the active constraint balancing scheme provides stable inversion results without a severe loss of resolution compared with the conventional Gauss-Newton method. The image reconstructed using active constraint balancing more closely resembles the true model in regions of low sensitivity, and the estimate converges faster, and to a smaller RMS error, than that of the conventional Gauss-Newton method. We also implement the normalized-wavefield method to overcome the lack of precise knowledge of the source. The source-normalized wavefield approach effectively removes potential inversion errors from source estimation, because the source spectrum is eliminated during the normalization procedure. Our inversion algorithm with the source-normalization scheme provides excellent results even when the data are generated by two slightly different source wavelets. We show that the frequency-selection scheme proposed by Sirgue and Pratt, which is based on the average amplitude of the whole received data, provides a useful guideline for selecting the proper frequencies for our inversion. Our algorithm successfully reconstructs the velocity model within 10–30 iterations despite starting from a homogeneous or linearly increasing velocity model.
In addition, to test the performance of our inversion algorithm on a more complicated structure, we apply it to the SEG/EAGE overthrust model. Successful inversion is achieved, with the reconstructed image approaching the true model and a consistently converging RMS error, even though insufficient data are used.
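The model-update step being regularized can be sketched as a generic damped Gauss-Newton update on a linear toy problem; this is not the authors' FWI code. Active constraint balancing corresponds to making the damping a vector (one value per model parameter) chosen from the parameter-resolution analysis.

```python
import numpy as np

def damped_gn_step(J, residual, damping):
    """One damped Gauss-Newton update dm = -(J^T J + D)^-1 J^T r,
    where D is damping*I for a scalar damping, or diag(damping) for
    a spatially variant (active-constraint-balancing style) factor."""
    n = J.shape[1]
    D = damping * np.eye(n) if np.isscalar(damping) else np.diag(damping)
    return -np.linalg.solve(J.T @ J + D, J.T @ residual)

# Linear toy problem: with zero damping a single step solves it
rng = np.random.default_rng(0)
J = rng.standard_normal((12, 4))          # toy Jacobian (data x model)
m_true = np.array([1.0, -2.0, 0.5, 3.0])
d = J @ m_true
m = np.zeros(4)
m = m + damped_gn_step(J, J @ m - d, damping=0.0)   # lands on m_true
```

In the nonlinear FWI setting the same step is repeated with a fresh Jacobian and residual each iteration, and the spatial distribution of the damping trades stability against resolution.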
3D geophysical inversions of the north‐east Amer Belt and their relationship to the geologic structure
Authors: V. Tschirhart, W.A. Morris, C.W. Jefferson, P. Keating, J.C. White and L. Calhoun
Available online: 15 October 2012

ABSTRACT
The Amer Lake area is located within the Churchill Structural Province in the Kivalliq Region of Nunavut, approximately 160 km north-west of Baker Lake. Two distinct geophysical-geological entities are structurally intercalated: an Archean mixed granitoid gneiss and metasedimentary-metavolcanic basement, and the unconformably overlying Paleoproterozoic Amer Group metasediments. From east of Amer Lake stretching toward the south-west, these two entities form the Amer fold-and-thrust belt. At the north-east end of this belt, high-resolution aeromagnetic data define a distinctive oval shape that has been interpreted as a south-west-trending, doubly plunging synform. Outcrop within the interior of this structure is sparse, resulting in limited structural data and speculative geological interpretations with multiple possible geometries. The high-resolution aeromagnetic data compiled through an industry-government consortium, together with newly acquired detailed gravity profiles, were modelled to provide constraints on the geometry of this synform.
We document a geophysical-geological feedback process whereby the available geological and geophysical data were used to derive constraints on inversion models for the synform. Starting from the limited available litho-structural data, the presence of a doubly plunging synform was inferred directly from the aeromagnetic data. Segments of the aeromagnetic data have 2D morphology and so can be modelled using a simple parametric 2D dipping-slab inversion approach. Models of profiles extracted from the aeromagnetic data were used to provide preliminary dip and magnetic-susceptibility constraints for the Three Lakes mudstone with iron formation and the Five Mile Lake basalt. Landsat imagery outlined the spatial limits of the stratigraphically underlying, non-magnetic Ayagaq quartzite. Incorporating these outputs as bounds in the input/reference model for a UBC-GIF 3D magnetic inversion helped to accentuate the geological structure in the output mesh: an enhanced inversion that incorporates both geological and geophysical constraints. The validity of the resulting inversion model was tested by computing 2D forward models of the gravity-profile data. The inversion model generated by this study emphasizes the importance of integrating information from as many knowledge sources as one can find. More trust can be placed in forward and inversion models where there is agreement among all data sets and a coherency of structural style.
Building starting models for full waveform inversion from wide‐aperture data by stereotomography
Authors: Vincent Prieux, Gilles Lambaré, Stéphane Operto and Jean Virieux
Available online: 15 October 2012

ABSTRACT
Building an accurate initial velocity model for full waveform inversion (FWI) is a key issue in guaranteeing convergence of FWI towards the global minimum of the misfit function. In this study, we assess joint refraction and reflection stereotomography as a tool to build a reliable starting model for frequency-domain FWI from long-offset (i.e., wide-aperture) data. Stereotomography is a slope tomographic method based on the inversion of traveltimes and slopes of locally coherent events in a data cube. One advantage of stereotomography over conventional traveltime reflection tomography is the semi-automatic picking of locally coherent events, which is easier than picking continuous events and can lead to a higher density of picks. While conventional applications of stereotomography consider only short-offset reflected waves, we assess the benefits provided by the joint inversion of reflected and refracted arrivals. Introducing the refracted waves allows the construction of a starting model that kinematically fits the first arrivals, a necessary requirement for FWI. In a similar way to frequency-domain FWI, we design a multiscale approach to stereotomography, which proceeds hierarchically from the wide-aperture to the short-aperture components of the data, to reduce the non-linearity of the stereotomographic inversion of long-offset data. This workflow, which combines stereotomography and FWI, is applied to synthetic and real data case studies for the Valhall oil-field target.
The synthetic results show that joint refraction and reflection stereotomography for a 24-km maximum-offset data set provides a more reliable initial model for FWI than reflection stereotomography performed on a 4-km maximum-offset data set, in particular in the low-velocity gas layers and in the deep part of the structure below the reservoir. Application of joint stereotomography, FWI and reverse-time migration to real data reveals that the FWI models and reverse-time migration images computed from the stereotomography model share several features with the FWI velocity models and migrated images computed from an anisotropic reflection-traveltime tomography model, although stereotomography was performed in the isotropic approximation. Implementation of anisotropy in joint refraction and reflection stereotomography of long-offset data is a key issue in further improving the accuracy of the method.
Convergence improvement and noise attenuation considerations for beyond alias projection onto convex sets reconstruction
Authors: Jianjun Gao, Aaron Stanton, Mostafa Naghizadeh, Mauricio D. Sacchi and Xiaohong Chen
Available online: 15 October 2012

ABSTRACT
The reconstruction method known as Projection Onto Convex Sets (POCS) is an effective, uncomplicated and robust method for the recovery of irregularly missing seismic traces. However, the slow convergence of POCS reconstruction can jeopardize its computational appeal. For this reason, we investigate the performance of POCS reconstruction under different threshold schedules and present a new data-driven threshold that leads to an efficient implementation of the method; in particular, we show that high-quality solutions can be obtained in a few iterations. In addition, we address an important limitation of classical implementations of POCS reconstruction: they cannot interpolate regularly missing data. To solve this problem, we introduce a masking operator, based on a dominant-dip scanning method, into the POCS iteration. Finally, we present a variant of the POCS method that permits denoising of seismic volumes during the reconstruction stage. This is achieved by defining a weighted trace re-insertion strategy that alleviates the influence of noisy traces on the final reconstruction of the seismic volume. We show the effectiveness of the proposed method using synthetic and field data.
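The basic POCS iteration with a decaying threshold schedule can be sketched in 1D; a minimal numpy illustration of the concept, using an invented linear schedule rather than the paper's data-driven threshold.

```python
import numpy as np

def pocs_reconstruct(observed, mask, n_iter=100):
    """POCS recovery of irregularly missing samples: alternate
    between thresholding the Fourier spectrum (sparsity projection)
    and re-inserting the recorded samples (data-fit projection).
    The threshold decays linearly from ~max|spectrum| towards zero."""
    x = observed.copy()
    for it in range(n_iter):
        X = np.fft.fft(x)
        thresh = 0.99 * np.abs(X).max() * (1.0 - it / n_iter)
        X[np.abs(X) < thresh] = 0.0        # keep only strong coefficients
        x = np.real(np.fft.ifft(X))
        x[mask] = observed[mask]           # honour recorded samples
    return x

# Two-harmonic 'trace' with roughly half the samples missing at random
rng = np.random.default_rng(1)
n = 128
t = np.arange(n)
clean = np.sin(2 * np.pi * 4 * t / n) + 0.5 * np.sin(2 * np.pi * 9 * t / n)
mask = rng.random(n) < 0.5
rec = pocs_reconstruct(np.where(mask, clean, 0.0), mask)
```

The schedule matters: a threshold that decays too slowly wastes iterations, which is the motivation for the data-driven schedule proposed in the paper; the weighted re-insertion variant replaces the hard `x[mask] = observed[mask]` step to down-weight noisy traces.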
Estimating primaries by sparse inversion, a generalized approach
Authors: F.H.C. Ypma and D.J. Verschuur
Available online: 11 October 2012

ABSTRACT
For an accurate interpretation of seismic data, multiple-free data are of great value, but removing surface multiples and interbed multiples proves challenging in many cases. The widely used method of Surface-Related Multiple Elimination (SRME) has lately been redefined as a full-waveform inversion process, resulting in the method of Estimation of Primaries by Sparse Inversion (EPSI). The new method is shown to be more accurate than the former in several situations, because it estimates primaries such that they, together with their multiples, explain the input data. Its main advantage is that the minimum-energy assumption of traditional multiple subtraction is avoided. The SRME methodology has been extended to the case of internal multiples by several authors; however, the subtraction of predicted multiples involved is probably even more challenging than in the surface-multiple case. Therefore, in this paper the EPSI method is generalized to remove both surface and interbed multiples. As in previous implementations of internal-multiple removal based on data-driven convolution, the newly proposed scheme requires some knowledge of the subsurface: the data should be divided into (macro) layers and appropriate time windows must be selected. The method is tested on two 2D synthetic datasets to prove its viability. Furthermore, application to a 2D field dataset showed improved accuracy compared to conventional prediction and subtraction.
Coherent and random noise attenuation via multichannel singular spectrum analysis in the randomized domain
Available online: 19 July 2012

ABSTRACT
The attenuation of coherent and random noise still poses technical challenges in seismic data processing, especially in onshore environments. Multichannel Singular Spectrum Analysis (MSSA) is an existing and effective technique for random-noise reduction. Incorporating a randomizing operator into MSSA creates a new and powerful filtering method that can attenuate both coherent and random noise simultaneously. The randomizing operator exploits the fact that primary events after NMO are relatively horizontal: it randomly rearranges the order of the input traces, turning coherent noise into incoherent noise while having minimal effect on nearly horizontal primary reflections. The new filter, MSSARD (MSSA in the randomized domain), also resembles a combination of eigenimage and Cadzow filters. I start with a synthetic data set to illustrate the basic concept and then apply MSSARD filtering to a 3D cross-spread data set that was severely contaminated with ground roll and scattered noise; MSSARD filtering gives superior results when compared with a conventional 3D f-k filter. For a random-noise example, the application of MSSARD filtering to time-migrated offset-vector-tile (OVT) gathers also produces images with higher signal-to-noise ratios than a conventional f-xy deconvolution filter.
Fast waveform inversion without source‐encoding
Authors: Tristan van Leeuwen and Felix J. Herrmann
Available online: 10 July 2012

ABSTRACT
Randomized source-encoding has recently been proposed as a way to dramatically reduce the cost of full waveform inversion. The main idea is to replace all sequential sources by a small number of simultaneous sources. This introduces random cross-talk into the model updates, and special stochastic optimization strategies are required to deal with it. Two problems arise with this approach: i) source-encoding can only be applied to fixed-spread acquisition setups, and ii) stochastic optimization methods tend to converge very slowly, relying on averaging to suppress the cross-talk. Although the slow convergence is partly offset by a low cost per iteration, we show that conventional optimization strategies are bound to outperform stochastic methods in the long run. In this paper we argue that randomized source-encoding is not needed to reap the benefits of stochastic optimization, and we review an optimization strategy that combines the advantages of both conventional and stochastic optimization. The method uses a gradually growing batch of sources: iterations are initially very cheap, which allows the method to make fast progress at the start, and as the batch size grows the method behaves like conventional optimization, allowing fast convergence. Stylized numerical examples suggest that the stochastic and hybrid methods perform equally well with and without source-encoding, and that the hybrid method outperforms both conventional and stochastic optimization. Because the method does not rely on source-encoding techniques, it can be applied to marine data; we illustrate this on a realistic synthetic model.
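The gradually growing batch idea can be sketched on a linear least-squares misfit where rows play the role of sources; a stylized numpy illustration of the schedule, not the authors' FWI implementation (the batch sizes, step length and phase lengths are invented).

```python
import numpy as np

def growing_batch_descent(A, b, x0, schedule=(2, 4, 8), steps=60, lr=0.2):
    """Gradient descent on f(x) = 0.5*||Ax - b||^2 using only a
    batch of rows ('sources') per phase. Early phases are cheap;
    once the batch reaches the full problem the method behaves like
    conventional full-gradient descent."""
    rng = np.random.default_rng(0)
    x = x0.copy()
    for batch in schedule:
        rows = rng.choice(A.shape[0], size=batch, replace=False)
        Ab, bb = A[rows], b[rows]
        for _ in range(steps):
            x = x - lr * Ab.T @ (Ab @ x - bb) / batch   # batch gradient
    return x

# Consistent toy problem: the full misfit should drop sharply
rng = np.random.default_rng(3)
A = rng.standard_normal((8, 3))
x_true = np.array([1.0, -1.0, 2.0])
b = A @ x_true
x = growing_batch_descent(A, b, np.zeros(3))
```

Because no random source superposition is involved, the same schedule applies to non-fixed-spread (e.g. marine) geometries, which is the point of avoiding source-encoding.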
Reconstructing frequency‐magnitude statistics from detection limited microseismic data
Authors: Michael J. Williams and Joel Le Calvez
Available online: 10 July 2012

ABSTRACT
Microseismic monitoring, particularly the monitoring of hydraulic fracturing in gas- and oil-bearing shales, has developed significantly over the last ten years. Early work focused on the location of microseismic events, but more recently there have been attempts to extract more of the information afforded by this rich data source. In particular, recovery of the frequency-magnitude distribution, which is expected to follow a Gutenberg-Richter distribution, may provide insights into the prevailing effective-stress regime in the vicinity of the events. This stress regime varies with distance from the hydraulic fracturing: at the propagating fracture one expects conditions for tensile or shear failure, while away from the fracture one may broadly expect microseismicity associated with pre-existing weaknesses in the rock, occurring at effective-stress conditions close to those existing prior to the treatment.
All geophysical experiments are detection-limited, and microseismic monitoring is no exception. In constructing a statistical indicator such as the distribution of moment magnitudes, we would like the estimate to be robust and to use as much of the data as possible. In analysing earthquake catalogues, the predominant practice is to determine a magnitude of completeness denoting the detection limit of the catalogue. This approach defines a minimum magnitude above which all events are thought to have been reliably recorded; in effect, it imposes an artificial, conservative detection limit to replace the unknown detection limit of the catalogue. We present the case of an arbitrary detection limit and introduce an approach from astronomy that is particularly suited to the single-well observing geometry most prevalent in hydraulic fracture monitoring.
We calculate b-values for a set of event magnitudes from the Barnett Shale formation, where multiple stimulation treatments were applied in a pair of wells ('zipper frac') followed by a four-stage treatment in a third well, and find significant variations in the b-value between the pumped stages.
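The conventional completeness-magnitude treatment that the text contrasts with can be sketched via Aki's maximum-likelihood estimator; this is the standard formula, not the astronomy-derived approach the authors introduce.

```python
import numpy as np

def b_value_mle(mags, m_c):
    """Aki's maximum-likelihood b-value for the Gutenberg-Richter
    relation log10 N = a - b*M, using only events at or above the
    magnitude of completeness m_c (the conventional, conservative
    treatment of the detection limit)."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Synthetic catalogue drawn with a true b-value of 1.0
rng = np.random.default_rng(0)
mags = -2.0 + rng.exponential(scale=np.log10(np.e), size=20000)
b = b_value_mle(mags, m_c=-2.0)   # close to 1.0
```

Choosing m_c above the true detection limit discards usable data, which is the motivation for handling an arbitrary detection limit instead.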
A passive low‐frequency seismic experiment in the Albertine Graben, Uganda
Authors: F. Martini, I. Lokmer, K. Jonsdottir, L. De Barros, M. Möllhoff, C. J. Bean, F. Hauser, J. Doherty, C. Ryan and J. Mongan
Available online: 05 July 2012

ABSTRACT
A passive seismic experiment was conducted in April/May 2010 in the Albertine Graben region of Uganda to record low-frequency seismic signals and explore the possibility of exploiting them in this area as a direct hydrocarbon indicator (DHI). Recordings were made at locations directly overlying both hydrocarbon- and water-bearing strata within the sedimentary basin, as well as at reference sites external to the basin, directly on the basement. Contrary to findings published in some of the literature to date, we found that spatial variations in the analysed wavefield parameters correlate with the underlying geology rather than with the presence or absence of hydrocarbons. Inversion of the surface-wave (fundamental-mode) dispersion curve, as well as the observed horizontal-to-vertical spectral ratio of both surface and body waves, provides evidence that the observed spectral variations can be explained solely by a simple layered/gradient velocity model, without any kind of anomaly that could be attributed exclusively to a hydrocarbon reservoir.
Consequently, it is recommended that knowledge of the geological and velocity structure be sought when analysing passive low-frequency seismic data sets. This is a fundamental prerequisite for guarding against misinterpretation of the spatial variation of seismic-derived attributes as DHIs.
Two‐dimensional fast generalized Fourier interpolation of seismic records
Authors: Mostafa Naghizadeh and Kristopher A. Innanen
Available online: 05 July 2012

ABSTRACT
The fast generalized Fourier transform algorithm is extended to the case of two-dimensional data. The algorithm provides a fast and non-redundant alternative for the simultaneous time-frequency and space-wavenumber analysis of data with time-space dependencies. The transform decomposes data based on local slope information and therefore makes it possible to extract a weight function based on dominant dips from alias-free low frequencies. By projecting the extracted weight function onto alias-contaminated high frequencies and utilizing a least-squares fitting algorithm, a beyond-alias interpolation method is accomplished. Synthetic and real data examples are provided to examine the performance of the proposed interpolation method.