Geophysical Prospecting - Volume 62, Issue 5, 2014
Review Paper: An outlook on the future of seismic imaging, Part I: forward and reverse modelling
ABSTRACT
The next generation of seismic imaging algorithms will use full wavefield migration, which regards multiple scattering as indispensable information. These algorithms will also include autonomous velocity updating in the migration process, called joint migration inversion. Full wavefield migration and joint migration inversion address industrial requirements to improve the images of highly complex reservoirs, as well as the industrial ambition to produce these images more automatically (automation in seismic processing).
In these vision papers on seismic imaging, full wavefield migration and joint migration inversion are formulated in terms of a closed-loop estimation algorithm that can be physically explained by an iterative double-focusing process (full wavefield Common Focus Point technology). A critical module in this formulation is forward modelling, allowing feedback from the migrated output to the unmigrated input (‘closing the loop’). For this purpose, a full wavefield modelling module has been developed, which uses an operator description of complex geology. Full wavefield modelling is pre-eminently suited to function in the feedback path of a closed-loop migration algorithm.
‘The Future of Seismic Imaging’ is presented as a coherent trilogy of papers that propose the migration framework of the future. In Part I, the theory of full wavefield modelling is explained, showing the fundamental distinction from the finite-difference approach. Full wavefield modelling allows the computation of complex shot records without the specification of velocity and density models. Instead, an operator description of the subsurface is used. The capability of full wavefield modelling is illustrated with examples. Finally, the theory of full wavefield modelling is extended to full wavefield reverse modelling (FWMod⁻¹), which allows accurate estimation of (blended) source properties from (blended) shot records.
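As a schematic illustration of the operator description, the recursive structure of full wavefield modelling can be sketched as follows; the notation paraphrases the general WRW/CFP convention of this literature and is not necessarily the paper's exact formulation:

```latex
% Schematic FWMod recursion (a paraphrase in WRW/CFP-style notation,
% not necessarily the paper's exact symbols).
% p^{\pm}: down-/upgoing wavefields at depth levels z_n,
% W: propagation operators, R^{\cap}/R^{\cup}: reflection operators,
% \delta T^{\pm}: transmission corrections, s^{\pm}: source wavefields.
\begin{align*}
\vec{p}^{\,+}(z_m) &= \sum_{n \le m} \mathbf{W}(z_m, z_n)\,\vec{q}^{\,+}(z_n),
\qquad
\vec{p}^{\,-}(z_m) = \sum_{n \ge m} \mathbf{W}(z_m, z_n)\,\vec{q}^{\,-}(z_n),\\
\vec{q}^{\,\pm}(z_n) &= \vec{s}^{\,\pm}(z_n)
  + \mathbf{R}^{\,\cap/\cup}(z_n)\,\vec{p}^{\,\mp}(z_n)
  + \delta\mathbf{T}^{\pm}(z_n)\,\vec{p}^{\,\pm}(z_n).
\end{align*}
```

Each evaluation of the recursion adds one order of scattering (one ‘roundtrip’), so multiples are built up iteratively from the reflection and transmission operators rather than from velocity and density volumes as in finite-difference modelling.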
Review Paper: An outlook on the future of seismic imaging, Part II: Full‐Wavefield Migration
ABSTRACT
The next-generation seismic imaging algorithms will consider multiple scattering as indispensable information, an approach referred to as Full-Wavefield Migration. In addition, these algorithms will include autonomous velocity updating in the migration process, referred to as Joint Migration Inversion. Full-Wavefield Migration and Joint Migration Inversion address the industrial need to improve images of very complex reservoirs, as well as the industrial ambition to produce these images in a more automatic manner (automation in seismic processing).
In this vision paper on seismic imaging, Full‐Wavefield Migration and Joint Migration Inversion are formulated in terms of a closed‐loop estimation algorithm that can be physically explained by an iterative double focusing process (full‐wavefield common‐focus‐point technology). A critical module in this formulation is forward modelling, allowing feedback from migrated output to unmigrated input (closing the loop). For this purpose, a full‐wavefield modelling module has been developed, which utilizes an operator description of complex geology. The full‐wavefield modelling module is pre‐eminently suited to function in the feedback path of a closed‐loop migration algorithm.
‘The Future of Seismic Imaging’ is presented as a coherent trilogy, proposing the migration framework of the future in three consecutive parts. In Part I, it was shown that the proposed full-wavefield modelling algorithm differs fundamentally from finite-difference modelling because velocities and densities need not be provided. Instead, an operator description of the subsurface is used. In addition, the concept of reverse modelling was introduced. In Part II, it is shown how the theory of Primary Wavefield Migration can be extended to Full-Wavefield Migration by correcting for angle-dependent transmission effects and by utilizing multiple scattering. The potential of the Full-Wavefield Migration algorithm is illustrated with numerical examples. A multidirectional migration strategy is proposed that navigates the Full-Wavefield Migration algorithm through the seismic data cube in different directions.
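The closed-loop aspect can be summarized by a schematic update in which the full-wavefield modelling operator maps the current reflectivity estimate to simulated data and the residual is fed back; this is a sketch of the idea only, with illustrative symbols, and the paper's actual update rule may differ:

```latex
% Schematic closed-loop reflectivity update (sketch; symbols are
% illustrative: \mathcal{M} = full-wavefield modelling operator,
% \mathcal{M}^{\dagger} its adjoint (migration), \alpha_i = step length).
\begin{align*}
\Delta\vec{d}_i &= \vec{d}_{\mathrm{obs}} - \mathcal{M}\!\left(\mathbf{R}_i\right)
  && \text{(compare modelled with measured data)}\\
\mathbf{R}_{i+1} &= \mathbf{R}_i + \alpha_i\,\mathcal{M}^{\dagger}\!\left(\Delta\vec{d}_i\right)
  && \text{(migrate the residual, update the image)}
\end{align*}
```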
Review Paper: An outlook on the future of seismic imaging, Part III: Joint Migration Inversion
ABSTRACT
The next-generation seismic imaging algorithms will consider multiple scattering as indispensable information, referred to as Full Wavefield Migration. In addition, these algorithms will include autonomous velocity updating in the migration process, referred to as Joint Migration Inversion. Full wavefield migration and joint migration inversion address the industrial need to improve images of very complex reservoirs, as well as the industrial ambition to produce these images in a more automatic manner (‘automation in seismic processing’).
In this vision paper on seismic imaging, full wavefield migration and joint migration inversion are formulated in terms of a closed-loop estimation algorithm that can be physically explained by an iterative double-focusing process (full wavefield Common-Focus-Point technology). A critical module in this formulation is forward modelling, allowing feedback from migrated output to unmigrated input (‘closing the loop’). For this purpose, a full wavefield modelling module has been developed that utilizes an operator description of complex geology. The full wavefield modelling module is pre-eminently suited to function in the feedback path of a closed-loop migration algorithm.
‘The Future of Seismic Imaging’ is presented as a coherent trilogy, proposing in three consecutive parts the migration framework of the future. In Part I, it was shown that the proposed full wavefield modelling algorithm differs fundamentally from finite-difference modelling, as velocities and densities need not be provided. Instead, the full wavefield modelling module uses an operator description of the subsurface. In Part II, it was shown how the theory of Primary Wavefield Migration can be extended to Full Wavefield Migration by correcting for elastic transmission effects and by utilizing multiple scattering. In Part III, it is shown how the full wavefield migration technology can be extended to Joint Migration Inversion, allowing full wavefield migration of blended data without prior knowledge of the velocity. Velocities are part of the joint migration inversion output, obtained by an operator-driven parametric inversion process. The potential of the proposed joint migration inversion algorithm is illustrated with numerical examples.
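Schematically, joint migration inversion extends the closed loop by treating both the reflectivity operators and the propagation (velocity) operators as unknowns of a single data-fitting problem; the formulation below is a paraphrase for illustration, not the paper's exact equations:

```latex
% Schematic JMI objective (illustrative notation): reflectivity R and
% propagation operators W (parameterized by velocity) are estimated
% jointly from the same data residual.
\begin{equation*}
\{\hat{\mathbf{R}},\,\hat{\mathbf{W}}\}
  = \arg\min_{\mathbf{R},\,\mathbf{W}}
    \sum_{\omega}\bigl\lVert \vec{d}_{\mathrm{obs}}(\omega)
      - \mathcal{M}(\mathbf{R},\mathbf{W})(\omega) \bigr\rVert_2^{2}.
\end{equation*}
```

In this picture, the update of R is the imaging step and the update of W is the parametric velocity-estimation step, alternated within one loop.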
Dimensionality‐reduced estimation of primaries by sparse inversion
Authors: Bander Jumah and Felix J. Herrmann
ABSTRACT
Wave-equation based methods, such as the estimation of primaries by sparse inversion, have been successful in mitigating the adverse effects of surface-related multiples on seismic imaging and migration-velocity analysis. However, the reliance of these methods on multidimensional convolutions with fully sampled data exposes the ‘curse of dimensionality’, which leads to disproportionate growth in computational and storage demands when moving to realistic 3D field data. To remove this fundamental impediment, we propose a dimensionality-reduction technique where the ‘data matrix’ is approximated adaptively by a randomized low-rank factorization. Compared with conventional methods, which require a pass through all of the data at each iteration (possibly with on-the-fly interpolation), our randomized approach reduces the total number of passes to between one and three. In addition, the low-rank matrix factorization leads to considerable reductions in the storage and computational costs of the matrix multiplies required by the sparse inversion. Application of the proposed method to two-dimensional synthetic and real data shows that significant performance improvements in speed and memory use are achievable at a low computational up-front cost required by the low-rank factorization.
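As background, here is a minimal sketch of a generic randomized low-rank factorization of the kind the abstract refers to, following the well-known Halko-Martinsson-Tropp recipe; the function name and parameter choices are illustrative, not the authors' implementation:

```python
import numpy as np

def randomized_lowrank(D, rank, oversample=10, n_passes=1, seed=0):
    """Approximate D ~ Q @ B, with Q having `rank + oversample` columns.

    One multiplication with D builds the sketch; each optional extra
    power pass sharpens the range estimate for slowly decaying spectra,
    so the full data matrix is touched only a handful of times.
    """
    rng = np.random.default_rng(seed)
    m, n = D.shape
    Omega = rng.standard_normal((n, rank + oversample))
    Y = D @ Omega                          # first pass through the data
    for _ in range(n_passes - 1):          # optional power iterations
        Y = D @ (D.conj().T @ Y)
    Q, _ = np.linalg.qr(Y)                 # orthonormal range basis
    B = Q.conj().T @ D                     # small (rank + p) x n factor
    return Q, B                            # storage ~ (m + n) * rank
```

Subsequent matrix multiplies in the sparse inversion can then act on the small factors Q and B instead of the full data matrix, which is the source of the storage and cost reductions described above.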
Fidelity and repeatability of wave fields reconstructed from multicomponent streamer data
ABSTRACT
Wave field reconstruction – the estimation of a three-dimensional (3D) wave field representing upgoing, downgoing or the combined total pressure at an arbitrary point within a marine streamer array – is enabled by simultaneous measurements of the crossline and vertical components of particle acceleration in addition to pressure in a multicomponent marine streamer. We examine a repeated sail line of North Sea data acquired by a prototype multicomponent towed-streamer array for both wave field reconstruction fidelity (or accuracy) and reconstruction repeatability. Data from six cables, finely sampled in-line but spaced at 75 m crossline, are reconstructed and placed on a rectangular data grid uniformly spaced at 6.25 m in-line and crossline. Benchmarks are generated using recorded pressure data and compared with wave fields reconstructed from pressure alone, and from combinations of pressure, crossline acceleration and vertical acceleration. We find that reconstruction using pressure and both crossline and vertical acceleration has excellent fidelity, recapturing highly aliased diffractions that are lost by interpolation of pressure-only data. We model wave field reconstruction error as a linear function of distance from the nearest physical sensor and find, for this data set with some mismatched shot positions, that the reconstructed wave field error sensitivity to sensor mispositioning is one-third that of the recorded wave field sensitivity. Multicomponent reconstruction is also more repeatable, outperforming single-component reconstruction in which wave field mismatch correlates with geometry mismatch. We find that adequate repeatability may mask poor reconstruction fidelity and that aliased reconstructions will repeat if the survey geometry repeats. Although the multicomponent 3D data have only 500 m in-line aperture, limiting the attenuation of non-repeating multiples, the level of repeatability achieved is extremely encouraging compared to full-aperture, pressure-only, time-lapse data sets at an equivalent stage of processing.
Improved normalization of time‐lapse seismic data using normalized root mean square repeatability data to improve automatic production and seismic history matching in the Nelson field
Authors: Karl D. Stephen and Alireza Kazemi
ABSTRACT
Updating reservoir models by history matching of 4D seismic data along with production data gives us a better understanding of changes to the reservoir, reduces risk in forecasting and leads to better management decisions. This process of seismic history matching requires an accurate representation of predicted and observed data so that they can be compared quantitatively when using automated inversion. However, observed seismic data are often obtained only as a relative measure of the reservoir state or its change. The data, usually attribute maps, need to be calibrated before they can be compared with predictions. In this paper we describe an alternative approach in which we normalize the data by scaling to the model data in regions where predictions are good. To remove measurements of high uncertainty and make normalization more effective, we use a measure of repeatability of the monitor surveys to filter the observed time-lapse data.
We apply this approach to the Nelson field. We normalize the 4D signature by deriving a least-squares regression equation between the observed and synthetic data, which consist of attributes representing measured acoustic impedances and predictions from the model. Two regression equations are derived as part of the analysis. For one, the whole 4D signature map of the reservoir is used, while in the second, 4D seismic data are used from the vicinity of wells with a good production match. The repeatability of the time-lapse seismic data is assessed using the normalized root mean square of measurements outside of the reservoir. Where the normalized root mean square is high, observations and predictions are ignored. Net-to-gross ratio and permeability are modified to improve the match.
The best results are obtained by using the normalized-root-mean-square-filtered maps of the 4D signature, which better constrain the normalization. The misfit of the first six years of history data is reduced by 55% while the forecast misfit of the following three years is reduced by 29%. The well-based normalization uses fewer data when repeatability is used as a filter, and the result is poorer. The value of the seismic data is demonstrated by production-only matching, where the history and forecast misfit reductions are 45% and 20% respectively while the seismic misfit increases by 5%; in the best case using seismic data, the seismic misfit dropped by 6%. We conclude that normalization with repeatability-based filtering is a useful approach in the absence of full calibration and improves the reliability of seismic data.
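For reference, the normalized root mean square (NRMS) repeatability used here is conventionally defined as in Kragh and Christie (2002); below is a minimal sketch of that definition and of a repeatability-filtered least-squares normalization in the spirit of the paper (function names and the threshold value are illustrative, not the paper's):

```python
import numpy as np

def nrms(base, monitor):
    """NRMS repeatability in percent (standard definition)."""
    rms = lambda x: np.sqrt(np.mean(np.asarray(x, float) ** 2))
    return 200.0 * rms(np.asarray(monitor) - np.asarray(base)) / (
        rms(base) + rms(monitor))

def normalize_4d(observed, synthetic, nrms_map, nrms_max=60.0):
    """Scale the observed 4D attribute to the synthetic by least-squares
    regression, using only samples whose repeatability is acceptable.
    `nrms_max` is an illustrative threshold, not the paper's value."""
    ok = nrms_map < nrms_max                         # repeatability filter
    A = np.vstack([observed[ok], np.ones(ok.sum())]).T
    slope, intercept = np.linalg.lstsq(A, synthetic[ok], rcond=None)[0]
    return slope * observed + intercept
```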
Time‐lapse pre‐stack seismic data registration and inversion for CO2 sequestration study at Cranfield
Authors: Rui Zhang, Xiaolei Song, Sergey Fomel, Mrinal K. Sen and Sanjay Srinivasan
ABSTRACT
Pre-stack seismic data are indicative of subsurface elastic properties through their amplitude-versus-offset behaviour and can be used to detect elastic rock-property changes caused by injection. We perform time-lapse pre-stack 3D seismic data analysis for monitoring CO2 sequestration at Cranfield. The time-lapse amplitude differences of the Cranfield datasets are found to be entangled with time shifts. To disentangle these two effects, we apply a local-correlation-based warping method to register the time-lapse pre-stack datasets, which can effectively separate the time shifts from the time-lapse seismic amplitude differences without changing the original amplitudes. We demonstrate the effectiveness of our registration method by evaluating the inverted elastic properties. These inverted time-lapse elastic properties can be reliably used for monitoring the CO2 plume.
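A much-simplified stand-in for the registration step is windowed cross-correlation, which estimates a time-variant shift between base and monitor traces; the paper's local-correlation warping is more sophisticated, and the parameters below are illustrative only:

```python
import numpy as np

def local_shifts(base, monitor, win=64, step=32, max_lag=12):
    """Estimate time-varying shifts (in samples) of `monitor` relative
    to `base` by cross-correlating short sliding windows."""
    lags = np.arange(-(win - 1), win)        # lag axis for 'full' mode
    keep = np.abs(lags) <= max_lag           # restrict the search range
    centres, shifts = [], []
    for i0 in range(0, len(base) - win + 1, step):
        b = base[i0:i0 + win] - base[i0:i0 + win].mean()
        m = monitor[i0:i0 + win] - monitor[i0:i0 + win].mean()
        cc = np.correlate(m, b, mode="full")  # correlation at all lags
        shifts.append(lags[keep][np.argmax(cc[keep])])
        centres.append(i0 + win // 2)
    return np.asarray(centres), np.asarray(shifts)
```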
Double parameterized regularization inversion method for migration velocity analysis in transversely isotropic media with a vertical symmetry axis
Authors: Caixia Yu, Yanfei Wang, Jingtao Zhao and Zhenli Wang
ABSTRACT
Simultaneous estimation of velocity gradients and anisotropic parameters from seismic reflection data is one of the main challenges in migration velocity analysis for transversely isotropic media with a vertical symmetry axis. In migration velocity analysis, we usually construct the objective function using the l2 norm along with a linear conjugate gradient scheme to solve the inversion problem. Nevertheless, for seismic data this inversion scheme is not stable and may not converge in finite time. In order to ensure the uniform convergence of the parameter inversion and improve the efficiency of migration velocity analysis, this paper develops a double parameterized regularization model and gives the corresponding algorithms. The model is based on the combination of the l2 norm and the non-smooth l1 norm. To solve such an inversion problem, the quasi-Newton method is utilized to keep the iterative process stable, which ensures the positive definiteness of the Hessian matrix. Numerical simulation indicates that this method allows fast convergence to the true model and simultaneously generates inversion results with higher accuracy. Therefore, our proposed method is very promising for practical migration velocity analysis in anisotropic media.
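A schematic form of such a doubly parameterized objective, consistent with the abstract's description (the paper's exact functional and weighting may differ):

```latex
% Sketch: data misfit plus a smooth (l2) and a non-smooth (l1) penalty,
% with two regularization parameters \alpha, \beta ("double
% parameterization"); F is the forward map, m the model, d the data.
\begin{equation*}
\min_{\mathbf{m}}\; J_{\alpha,\beta}(\mathbf{m})
  = \bigl\lVert F(\mathbf{m}) - \mathbf{d} \bigr\rVert_2^{2}
  + \alpha\,\lVert\mathbf{m}\rVert_2^{2}
  + \beta\,\lVert\mathbf{m}\rVert_1,
\qquad \alpha,\beta > 0,
\end{equation*}
```

solved with a quasi-Newton (BFGS-type) iteration whose Hessian approximation remains positive definite, which is the stability property the abstract emphasizes.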
Misfit functionals in Laplace‐Fourier domain waveform inversion, with application to wide‐angle ocean bottom seismograph data
Authors: Rie Kamei, R. Gerhard Pratt and Takeshi Tsuji
ABSTRACT
In seismic waveform inversion, non-linearity and non-uniqueness require appropriate strategies. We formulate four types of L2-normed misfit functionals for Laplace-Fourier domain waveform inversion: i) subtraction of complex-valued observed data from complex-valued predicted data (the ‘conventional phase-amplitude’ residual); ii) a ‘conventional phase-only’ residual in which amplitude variations are normalized; iii) a ‘logarithmic phase-amplitude’ residual; and iv) a ‘logarithmic phase-only’ residual in which only the imaginary part of the logarithmic residual is used. We evaluate these misfit functionals using a wide-angle field Ocean Bottom Seismograph (OBS) data set with a maximum offset of 55 km. The conventional phase-amplitude approach is restricted in illumination and delineates only shallow velocity structures. In contrast, the other three misfit functionals retrieve detailed velocity structures with clear lithological boundaries down to the deeper part of the model. We also test the performance of additional phase-amplitude inversions starting from the logarithmic phase-only inversion result. The resulting velocity updates are prominent only in the high-wavenumber components, sharpening the lithological boundaries. We argue that the discrepancies in the behaviours of the misfit functionals are primarily caused by the sensitivity of the model gradient to strong amplitude variations in the data. As the observed data amplitudes are dominated by the near-offset traces, the conventional phase-amplitude inversion primarily updates the shallow structures. In contrast, the other three misfit functionals naturally eliminate the strong dependence on amplitude variation and enhance the depth of illumination. We further suggest that the phase-only inversions are sufficient to obtain robust and reliable velocity structures, and that the amplitude information is of secondary importance in constraining subsurface velocity models.
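With u the predicted and d the observed complex data in the Laplace-Fourier domain, the four residual choices take the following schematic forms (a standard rendering of these functionals; the paper's normalizations may differ):

```latex
% Sums run over sources, receivers and (complex) frequencies.
\begin{align*}
E_{1} &= \tfrac{1}{2}\textstyle\sum \lvert u - d \rvert^{2}
  && \text{(conventional phase-amplitude)}\\
E_{2} &= \tfrac{1}{2}\textstyle\sum \bigl\lvert u/\lvert u\rvert - d/\lvert d\rvert \bigr\rvert^{2}
  && \text{(conventional phase-only)}\\
E_{3} &= \tfrac{1}{2}\textstyle\sum \lvert \ln(u/d) \rvert^{2}
  && \text{(logarithmic phase-amplitude)}\\
E_{4} &= \tfrac{1}{2}\textstyle\sum \bigl( \operatorname{Im}\,\ln(u/d) \bigr)^{2}
  && \text{(logarithmic phase-only)}
\end{align*}
```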
Dual sensor streamer technology used in Sleipner CO2 injection monitoring
Authors: Anne-Kari Furre and Ola Eiken
ABSTRACT
CO2 has been injected into the saline aquifer of the Utsira Formation at the Sleipner field since 1996. In order to monitor the movement of the CO2 in the subsurface, the seventh seismic monitor survey was acquired in 2010 with dual-sensor streamers, which enabled optimal towing depths compared with previous surveys. Here we report both on the time-lapse observations and on the improved resolution relative to the conventional streamer surveys. This study shows that the CO2 is still contained in the subsurface, with no indications of leakage. The time-lapse repeatability of the dual-sensor streamer data versus conventional data is sufficient for interpreting the time-lapse effects of the CO2 at Sleipner, and the higher resolution of the 2010 survey has enabled a refinement of the interpretation of nine CO2-saturated layers, with improved thickness estimates of the layers. In particular, we have estimated the thickness of the uppermost CO2 layer based on an analysis of amplitude strength together with the time separation between the top and base of this layer, and found the maximum thickness to be 11 m. This refined interpretation provides a good baseline for future time-lapse surveys at the Sleipner CO2 injection site.
Effective wavefield extrapolation in anisotropic media: accounting for resolvable anisotropy
ABSTRACT
Spectral methods provide artefact-free and generally dispersion-free wavefield extrapolation in anisotropic media. Their apparent weakness is in accessing the medium-inhomogeneity information in an efficient manner. This is usually handled through a velocity-weighted summation (interpolation) of representative constant-velocity extrapolated wavefields, with the number of these extrapolations controlled by the effective rank of the original mixed-domain operator or, more specifically, by the complexity of the velocity model. Conversely, with pseudo-spectral methods, because only the space derivatives are handled in the wavenumber domain, we obtain relatively efficient access to the inhomogeneity in isotropic media, but we often resort to weak approximations to handle the anisotropy efficiently. Utilizing perturbation theory, I isolate the contribution of anisotropy to the wavefield extrapolation process. This allows us to factor as much of the inhomogeneity in the anisotropic parameters as possible out of the spectral implementation, yielding effectively a pseudo-spectral formulation. This is particularly true if the inhomogeneity of the dimensionless anisotropic parameters is mild compared with that of the velocity (i.e., factorized anisotropic media). I improve the accuracy by using the Shanks transformation to incorporate a denominator in the expansion that predicts the higher-order omitted terms; thus, we deal with fewer terms for a high level of accuracy. In fact, when we use this new separation-based implementation, the anisotropy correction to the extrapolation can be applied separately as a residual operation, which provides a tool for anisotropic-parameter sensitivity analysis. The accuracy of the approximation is high, as demonstrated on a complex tilted transversely isotropic model.
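The Shanks transformation referred to here is the standard sequence accelerator: if A_{n-1}, A_n, A_{n+1} are successive partial sums of the perturbation expansion, then

```latex
\begin{equation*}
S(A_n) \;=\; \frac{A_{n+1}\,A_{n-1} - A_n^{2}}{A_{n+1} + A_{n-1} - 2A_n},
\end{equation*}
```

which is equivalent to replacing the truncated series by a rational (Padé-like) approximant; the denominator is what ‘predicts the higher-order omitted terms’ in the abstract's wording.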
Logarithm of short‐time Fourier transform for extending the seismic bandwidth
Authors: Muhammad Sajid and Deva Ghosh
ABSTRACT
Improving seismic resolution is essential for obtaining more detailed structural and stratigraphic information. We present a new algorithm to increase seismic resolution with a minimum of user-defined parameters. The algorithm inherits useful properties of both the short-time Fourier transform and the cepstrum to smooth and broaden the frequency spectrum at each translation of the spectral decomposing window. The key idea is to replace the amplitude spectrum with its logarithm in each window of the short-time Fourier transform. We describe the mathematical formulation of the algorithm and its testing on synthetic and real seismic data to obtain broader frequency spectra and thus enhance the seismic resolution.
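A minimal sketch of the key idea described above: take the STFT, replace each window's amplitude spectrum by its logarithm while keeping the phase, and invert. The log1p scaling below is an illustrative choice to keep amplitudes non-negative and is not necessarily the authors' exact formulation:

```python
import numpy as np
from scipy.signal import stft, istft

def log_stft_broaden(trace, fs, nperseg=64):
    """Replace the amplitude spectrum by its logarithm in each
    short-time window (phase preserved), then invert the STFT."""
    f, t, Z = stft(trace, fs=fs, nperseg=nperseg)
    amp, phase = np.abs(Z), np.angle(Z)
    Z_log = np.log1p(amp) * np.exp(1j * phase)   # flatter, broader spectrum
    _, out = istft(Z_log, fs=fs, nperseg=nperseg)
    return out[:len(trace)]
```

Because the logarithm compresses the dynamic range of the spectrum, each windowed spectrum becomes flatter and effectively broader, which is the cepstrum-like property the abstract exploits.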
A comparison of continuous mass‐lumped finite elements with finite differences for 3‐D wave propagation
Authors: Elena Zhebel, Sara Minisini, Alexey Kononov and Wim A. Mulder
ABSTRACT
The finite-difference method on rectangular meshes is widely used for time-domain modelling of the wave equation. It is relatively easy to implement high-order spatial discretization schemes and parallelization. Also, the method is computationally efficient. However, the use of finite elements on tetrahedral unstructured meshes is more accurate in complex geometries near sharp interfaces. We compared the standard eighth-order finite-difference method to fourth-order continuous mass-lumped finite elements in terms of accuracy and computational cost. The results show that, for simple models like a cube with constant density and velocity, the finite-difference method outperforms the finite-element method by at least an order of magnitude. Outside the application area of rectangular meshes, i.e., for a model with interior complexity and topography well described by tetrahedra, however, finite-element methods are about two orders of magnitude faster than finite-difference methods, for a given accuracy.
Forced imbibition into a limestone: measuring P‐wave velocity and water saturation dependence on injection rate
Authors: Sofia Lopes, Maxim Lebedev, Tobias M. Müller, Michael B. Clennell and Boris Gurevich
ABSTRACT
Quantitative interpretation of time-lapse seismic data requires knowledge of the relationship between elastic wave velocities and fluid saturation. This relationship is not unique but depends on the spatial distribution of the fluid in the pore space of the rock. In turn, the fluid distribution depends on the injection rate. To study this dependency, forced imbibition experiments with variable injection rates were performed on an air-dry limestone sample. Water was injected into a cylindrical sample and was monitored by X-ray computed tomography and ultrasonic time-of-flight measurements across the sample. The measurements show that the P-wave velocity decreases well before the saturation front approaches the ultrasonic raypath. This decrease is followed by an increase as the saturation front crosses the raypath. The observed patterns of the acoustic response and water saturation as functions of the injection rate are consistent with previous observations on sandstone. The results confirm that the injection rate has a significant influence on the fluid distribution and the corresponding acoustic response. The complexity of the acoustic response, which is not monotonic with changes in saturation and which at the same saturation varies between hydrostatic conditions and states of dynamic fluid flow, may have implications for the interpretation of time-lapse seismic responses.
Model‐based attenuation for scattered dispersive waves
Authors: Claudio Strobbia, Alexander Zarkhidze, Roger May and Fatma Ibrahim
ABSTRACT
Coherent noise in land seismic data primarily consists of source-generated surface-wave modes. The component traditionally considered most relevant is the so-called ground roll, consisting of surface-wave modes propagating directly from sources to receivers.
In many geological situations, near‑surface heterogeneities and discontinuities, as well as topography irregularities, diffract the surface waves and generate secondary events, which can heavily contaminate records.
The diffracted and converted surface waves are often called scattered noise and can be a severe problem, particularly in areas with shallow or outcropping hard lithological formations. Conventional noise attenuation techniques are not effective against scattering: they can usually address the tails but not the apices of the scattered events. Large source and receiver arrays can attenuate scattering, but only at the expense of signal fidelity and resolution.
We present a model-based technique for scattering attenuation, based on the estimation of surface-wave properties and on the prediction of surface waves with complex paths involving diffractions.
The properties are estimated first, to produce surface-consistent volumes of the propagation properties. Then, for each gather to be filtered, we integrate the contributions of all possible diffractors to build a scattering model. The estimated scattered wavefield is then subtracted from the data. The method can work in different domains and copes with aliased surface waves. The benefits of the method are demonstrated with synthetic and real data.
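In schematic form, the scattered-wavefield model is a sum over candidate diffractor positions of dispersive two-leg propagation; the symbols below are introduced here for illustration and the paper's operator details are more general:

```latex
% m: modelled scattered wavefield for source x_s and receiver x_r;
% x_j: candidate diffractor positions; \tau: dispersive surface-wave
% traveltimes derived from the surface-consistent property volumes;
% A_j: diffractor strengths estimated from the data.
\begin{equation*}
m(x_s, x_r, \omega) \;=\; \sum_{j} A_j(\omega)\,
  e^{-\,i\omega\left[\tau(x_s,\,x_j,\,\omega) \,+\, \tau(x_j,\,x_r,\,\omega)\right]},
\end{equation*}
```

after which the modelled wavefield is adaptively subtracted from each gather.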
Avoidable Euler Errors – the use and abuse of Euler deconvolution applied to potential fields
Authors: Alan B. Reid, Jörg Ebbing and Susan J. Webb
ABSTRACT
Window-based Euler deconvolution is commonly applied to magnetic and sometimes to gravity interpretation problems. For the deconvolution to be geologically meaningful, care must be taken to choose parameters properly. The following proposed process design rules are based partly on mathematical analysis and partly on experience.
- The interpretation problem must be expressible in terms of simple structures with integer Structural Index (SI) and appropriate to the expected geology and geophysical source.
- The field must be sampled adequately, with no significant aliasing.
- The grid interval must fit the data and the problem, neither meaninglessly over‐gridded nor so sparsely gridded as to misrepresent relevant detail.
- The required gradient data (measured or calculated) must be valid, with sufficiently low noise, adequate representation of necessary wavelengths and no edge‐related ringing.
- The deconvolution window size must be at least twice the original data spacing (line spacing or observed grid spacing) and more than half the desired depth of investigation.
- The ubiquitous sprays of spurious solutions must be reduced or eliminated by judicious use of clustering and reliability criteria, or else recognized and ignored during interpretation.
- The process should be carried out using Cartesian coordinates if the software is a Cartesian implementation of the Euler deconvolution algorithm (most accessible implementations are Cartesian).
If these rules are not adhered to, the process is likely to yield grossly misleading results. An example from southern Africa demonstrates the effects of poor parameter choices.
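For reference, the quantity solved in each deconvolution window is Euler's homogeneity relation (standard form):

```latex
% T: measured field, B: regional background level,
% (x_0, y_0, z_0): source position, N: structural index.
\begin{equation*}
(x - x_0)\frac{\partial T}{\partial x}
+ (y - y_0)\frac{\partial T}{\partial y}
+ (z - z_0)\frac{\partial T}{\partial z}
\;=\; N\,(B - T).
\end{equation*}
```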
Euler deconvolution in a radial coordinate system
ABSTRACT
This paper introduces the conversion of Euler's equation from a Cartesian coordinate system to a radial coordinate system, and then demonstrates that for sources of the type 1/r^N (where r is the distance to the source and N is the structural index) it can be solved at each point in space without the need for inversion, for a known structural index. It is shown that although the distance to the source that is obtained from Euler's equation depends on the structural index used, the direction to the source does not. For some models, such as the gravity and magnetic response of a contact, calculation of the analytic signal amplitude of the data is necessary prior to the application of the method. Effective noise attenuation strategies, such as the use of moving windows of data points, are also discussed. The method is applied to gravity and magnetic data from South Africa, and yields plausible results.
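The central claim follows directly from the 1/r^N form; a short derivation sketch (notation introduced here): for f = c/r^N with r the distance from the source and r-hat the unit vector from the source,

```latex
\begin{equation*}
\nabla f \;=\; -\,\frac{N c}{r^{N+1}}\,\hat{\mathbf{r}}
\quad\Longrightarrow\quad
\hat{\mathbf{r}} \;=\; -\,\frac{\nabla f}{\lvert\nabla f\rvert},
\qquad
r \;=\; N\,\frac{f}{\lvert\nabla f\rvert}.
\end{equation*}
```

The direction obtained from the gradient is independent of N (up to sign), while the distance scales linearly with it, consistent with the statement above.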
Interpretation of magnetic and gravity gradient tensor data using normalized source strength – A case study from McFaulds Lake, Northern Ontario, Canada
Authors: Majid Beiki, Pierre Keating and David A. Clark
ABSTRACT
In this paper, we present a case study on the use of the normalized source strength (NSS) for the interpretation of magnetic and gravity gradient tensor data. This application arises in the exploration of nickel, copper and platinum group element (Ni-Cu-PGE) deposits in the McFaulds Lake area, Northern Ontario, Canada. In this study, we have used the normalized source strength function derived from recent high-resolution aeromagnetic and gravity gradiometry data for locating geological bodies.
In our algorithm, we use maxima of the normalized source strength to estimate the horizontal location of the causative body. We then estimate the depth to the source and the structural index at that point using the ratio between the normalized source strength and its vertical derivative calculated at two levels: the measurement level and a height h above the measurement level. To discriminate more reliable solutions from spurious ones, we reject solutions with unreasonable estimated structural indices.
This method uses an upward continuation filter, which reduces the effect of high-frequency noise. In the magnetic case, the advantage is that, in general, the normalized magnetic source strength is relatively insensitive to magnetization direction and thus provides more reliable information than standard techniques when geologic bodies carry remanent magnetization. For dipping gravity sources, the calculated normalized source strength yields a reliable estimate of the source location by peaking directly above the top surface.
Application of the method to the aeromagnetic and gravity gradient tensor data sets from the McFaulds Lake area indicates that most of the gravity and magnetic sources are located just beneath an overburden of 20 m average thickness, and that the delineated magnetic and gravity sources, which can probably be approximated by geological contacts and thin dykes, extend up to the overburden.
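The two-level construction can be sketched as follows (symbols introduced here; the paper's exact expressions may differ). If, above the source, the NSS decays as mu(z) = c/(z_0 + z)^n, with z the height above the measurement level, z_0 the source depth below it and n a fall-off rate tied to the structural index, then the ratio rho(z) = mu/|d(mu)/dz| = (z_0 + z)/n is linear in height, so evaluating it at the measurement level (z = 0) and at height h via upward continuation gives

```latex
\begin{equation*}
n \;=\; \frac{h}{\rho(h) - \rho(0)},
\qquad
z_0 \;=\; n\,\rho(0).
\end{equation*}
```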
Transient electromagnetic modelling of an isolated wire loop over a conductive medium
ABSTRACT
A large closed wire loop is generally used in field experiments for testing airborne electrical exploration equipment. Thus, methods are required for the precise calculation of the electromagnetic response in the presence of a closed wire loop. We develop a fast and precise scheme for calculating the transient response of such a closed loop laid out at the surface of a horizontally layered conductive ground. Our scheme is based on the relationship between the magnetic flux flowing through a closed loop and the current induced in it. The developed scheme is compared with 2D and 3D finite-element modelling for several positions of an airborne electromagnetic system flying over a closed loop. We also study the coupling effect between the current flowing in the closed loop and the current flowing in the horizontally layered conductive medium. The results show that for the central position of the transmitter, the difference between axisymmetric finite-element modelling and our scheme is less than 1%. Moreover, for the non-coaxial transmitter–receiver–loop system, the solution obtained by our scheme is in good agreement with full 3D finite-element modelling, and our total simulation time is substantially lower: 1 minute versus 120 hours.
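The flux-current relationship at the heart of the scheme can be sketched as a frequency-domain circuit balance (a schematic view with symbols introduced here, not the authors' exact formulation): the induced loop current balances the electromotive force of the external flux against the loop's own impedance,

```latex
% I: induced loop current, \Phi_ext: flux of the airborne transmitter
% field through the loop, R_w: wire resistance, L: loop self-inductance,
% Z_g: impedance contribution from coupling with the layered ground.
\begin{equation*}
Z(\omega)\,I(\omega) \;=\; -\,i\omega\,\Phi_{\mathrm{ext}}(\omega),
\qquad
Z(\omega) \;=\; R_{\mathrm{w}} + i\omega L + Z_{\mathrm{g}}(\omega),
\end{equation*}
```

after which the transient response follows by transforming back to the time domain.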