Exploration Geophysics - Volume 49, Issue 6, 2018
Research Articles
Least-squares Kirchhoff migration with non-smooth regularisation strategy for subsurface imaging
Authors: Jie Hou and Yanfei Wang

A fast non-smooth regularisation method is proposed to overcome the difficulties of classical least-squares Kirchhoff migration. It not only accounts for irregular and incomplete data sampling, but also compensates for anomalous ray coverage and multipathing. Numerical experiments show that the method works well.

During the past several decades, many wave-equation migration methods have arisen for subsurface structure imaging. Classical Kirchhoff migration, however, is still widely adopted in the petroleum industry owing to its flexibility and computational efficiency. In constant-density isotropic acoustic media, a basic assumption of Kirchhoff migration is that every point of the subsurface model acts as a diffractor that scatters wavefield energy in every direction; collecting the scattered energy from all directions is therefore the basic requirement for focusing the diffractor. Factors influencing the final image quality include incomplete data acquisition, multipathing from the surface to the imaging point, and insufficient illumination under complex overburden. All of these factors can, in theory, be taken into account through the migration weighting coefficient, but computing that coefficient is demanding. In view of this difficulty, a fast regularising least-squares Kirchhoff migration algorithm is presented in this paper. It not only accounts for irregular and incomplete data sampling (e.g. limited recording aperture, coarse sampling and acquisition gaps), but also compensates for anomalous ray coverage and multipathing, except in shadow zones of the medium. To attenuate migration artefacts and provide a clear, accurate image of subsurface reflectivity, regularisation strategies are applied. Classical regularisation can easily lead to over-regularisation or insufficient regularisation; we seek to balance these two effects with a hybrid regularisation that incorporates smoothing and non-smoothing scale operators. The algorithm is implemented with a fast gradient descent method whose step length is based on the Rayleigh quotient. Numerical experiments show that this hybrid regularisation method is effective in handling both the sparsity and the smoothness of the model parameters.
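The regularised least-squares idea in the abstract can be sketched in a few lines. This is a generic illustration, not the authors' hybrid scheme: a quadratic objective |Lm - d|^2 + lam*|Dm|^2 minimised by steepest descent, where the exact step length for a quadratic takes a Rayleigh-quotient-like form. The operators `L` (a stand-in for de-migration) and `D` (a first-difference roughness operator) are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a regularised least-squares update, NOT the authors'
# hybrid scheme: minimise |Lm - d|^2 + lam*|Dm|^2 by steepest descent
# with an exact (Rayleigh-quotient) step length for the quadratic.
rng = np.random.default_rng(0)
n = 50
L = rng.standard_normal((80, n))              # stand-in de-migration operator
m_true = np.zeros(n); m_true[[10, 30]] = 1.0  # sparse "reflectivity"
d = L @ m_true
D = np.eye(n) - np.eye(n, k=1)                # first-difference operator
lam = 0.1
A = L.T @ L + lam * D.T @ D                   # normal-equation operator
b = L.T @ d

m = np.zeros(n)
for _ in range(200):
    g = A @ m - b                             # gradient of the quadratic objective
    alpha = (g @ g) / (g @ (A @ g))           # exact step: inverse Rayleigh quotient of A on g
    m = m - alpha * g

misfit = np.linalg.norm(L @ m - d)
print("data misfit after 200 iterations:", misfit)
```

The step `alpha` is the reciprocal of the Rayleigh quotient of `A` evaluated at the gradient, which is one common reading of a "Rayleigh quotient based" gradient descent.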
Amplification characteristics at Iedang Reservoir dam sites determined using H/V spectral ratio with background noise, S-wave and coda wave energy
Authors: Jun Kyoung Kim, Soung Hoon Wee, Seong Hwa Yoo and Kwang Hee Kim

Seismograms are mainly composed of three key factors: the seismic source, path effects and site amplification characteristics. Among these, amplification characteristics are critical in evaluating the reliability not only of seismic design for engineering but also of seismic source and crustal attenuation characteristics for seismology. In this study, the horizontal-to-vertical (H/V) spectral ratio method was applied. Calculation of the H/V spectral ratio in the frequency domain was first proposed by Y. Nakamura in 1989, and has recently been extended to evaluate site amplification using S-waves, coda waves and background noise. H/V spectral ratios were analysed using six earthquake seismograms (from earthquakes near Boryeong in Chungnam Province and Baekyeong Island) observed at four temporary dam sites (IDS, IDU, IDD and IDF) near Iedang Reservoir, Korea. We simultaneously compared amplification characteristics using the S-wave, coda wave and background noise of each seismogram. The simultaneous comparison of S-waves and coda waves (excluding background noise) at the four dam sites showed consistent site amplification characteristics. Each site showed different low- and high-frequency characteristics and had its own resonance frequency (IDS: ~11 Hz, IDU: ~4 Hz, IDD: ~7 Hz), except for the IDF site, which is situated on hard bedrock. In addition, the IDD site showed first- and second-order site resonance frequency harmonics. By comparing our results with those from studies using other methods, we hope to add new information to future studies of dynamic behaviour and site classification in Korea.

Amplification characteristics are critical in evaluating the reliability of seismic design for engineering and of seismic source and crustal attenuation characteristics for seismology. Horizontal-to-vertical spectral ratios were analysed using six seismograms observed at four sites near Iedang Reservoir in Korea. Amplification characteristics were compared using the S-wave, coda wave and background noise of each seismogram.
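The core H/V computation is straightforward. Below is a minimal sketch on synthetic three-component noise; the windowing, Konno-Ohmachi-style smoothing and multi-event averaging used in practice are simplified to a short moving average, and the sampling rate, resonance frequency and noise levels are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a Nakamura-style H/V spectral ratio on synthetic
# three-component records; all parameters are illustrative.
fs = 100.0                               # sampling rate (Hz), assumed
t = np.arange(0, 20.0, 1 / fs)
rng = np.random.default_rng(1)
f0 = 5.0                                 # synthetic site resonance at 5 Hz
resonant = np.sin(2 * np.pi * f0 * t)    # amplified horizontal motion
north = resonant + 0.2 * rng.standard_normal(t.size)
east = resonant + 0.2 * rng.standard_normal(t.size)
vert = 0.2 * rng.standard_normal(t.size) # vertical: noise only

freqs = np.fft.rfftfreq(t.size, 1 / fs)
N, E, V = (np.abs(np.fft.rfft(x)) for x in (north, east, vert))
h_over_v = np.sqrt((N**2 + E**2) / 2) / np.maximum(V, 1e-12)
h_over_v = np.convolve(h_over_v, np.ones(5) / 5, mode="same")  # crude smoothing

band = (freqs > 1) & (freqs < 20)        # band of engineering interest
peak = freqs[band][np.argmax(h_over_v[band])]
print(f"resonance estimate: {peak:.2f} Hz")
```

Real studies would average many windows and use S-wave, coda and noise segments separately, as the abstract describes.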
Geophysical strata rating (GSR) as an aid in carbonate reservoir characterisation: an example from the South Pars gas field, Persian Gulf Basin
In this study, the geophysical strata rating (GSR), an empirical measure of rock competency, is calculated from petrophysical data. The GSR is then extended across the whole South Pars gas field in the framework of 3D seismic data through acoustic impedance poststack seismic inversion.

In this study, the geophysical strata rating (GSR) is calculated from petrophysical data using the equations developed for clastic rocks. The region investigated is the South Pars gas field in the Persian Gulf Basin, where the Permian–Triassic Dalan and Kangan reservoirs host the largest accumulations of gas in the world. A 3D GSR model is estimated from 3D poststack seismic data using a probabilistic neural network model. Two methodologies are thus used to obtain GSR values at the different scales of wireline logs and 3D seismic data. Strong correlations between neural network predictions and actual GSR data at a blind well demonstrate the validity of the intelligent model for estimating GSR. The GSR results are also in good agreement with the porosity and elastic moduli of these carbonate rocks. Discrimination between reservoir and non-reservoir shaly units can easily be achieved by comparing GSR and well logs: very low GSR values with high gamma ray log responses indicate shaly intervals, which can cause washouts, casing collapse and other drilling problems, whereas intervals with low GSR values and low gamma ray log responses indicate the presence of good reservoir units.
Comparison of the projection onto convex sets and iterative hard thresholding methods for seismic data interpolation and denoising*
Authors: Benfeng Wang and Chenglong Gu

The projection onto convex sets (POCS) method is deduced using the iterative hard thresholding (IHT) algorithm and a projection operator, with more detailed physical illustrations. The interpolation performances on noise-free and noisy data are explained in detail, and the reasons behind these performances are fully discussed, providing clues for further improving interpolation accuracy.

Because of environmental limitations, irregularity appears in observed seismic data. In addition, observed seismic data contain random noise from the acquisition equipment and the surrounding environment, which affects the performance of multi-channel techniques such as surface-related multiple elimination (SRME) and amplitude variation with offset (AVO) analysis. The projection onto convex sets (POCS) method, known as an efficient interpolation method, is suitable for high signal-to-noise ratio (SNR) situations; however, existing random noise may affect its final performance. In our previously published paper, the POCS formula was deduced from the viewpoint of the iterative hard thresholding (IHT) method using a projection operator. In this paper, more physical illustrations of its detailed deduction are provided to show the differences between IHT and POCS in noise-free and noisy situations. The performances of the POCS and IHT methods are then compared in both situations, in terms of seismograms, frequency-wavenumber (FK) spectra and single traces. For noise-free data, both the POCS and IHT methods achieve good interpolation results. For noisy data, the POCS method is unsuitable because the observed noisy data are inserted directly, while the IHT performance is satisfactory because its thresholding operator eliminates random noise. Numerical examples on noise-free datasets demonstrate the validity of the POCS and IHT methods for interpolation. Tests on data contaminated with additive white Gaussian noise demonstrate the superior noise robustness of the IHT method compared with the POCS method.
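The structural difference described above, POCS re-inserting the observed samples exactly while IHT takes a data-correction step after thresholding, can be illustrated on a 1D toy interpolation problem. The FFT stands in for the sparsifying transform (the paper works with seismic gathers), and the threshold schedule is an illustrative assumption.

```python
import numpy as np

# Toy comparison of POCS and IHT interpolation in 1D with the FFT as
# the sparsifying transform; noise-free data, illustrative parameters.
rng = np.random.default_rng(2)
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)
mask = rng.random(n) > 0.4                     # ~60% of samples observed
d_obs = np.where(mask, signal, 0.0)

def run(method, niter=150):
    x = d_obs.copy()
    lam0 = np.abs(np.fft.fft(x)).max()         # initial threshold level
    for k in range(niter):
        lam = lam0 * 0.97 ** k                 # exponentially decreasing threshold
        c = np.fft.fft(x)
        x = np.real(np.fft.ifft(np.where(np.abs(c) >= lam, c, 0.0)))
        if method == "POCS":                   # re-insert observed samples exactly
            x = np.where(mask, d_obs, x)
        else:                                  # IHT: correction step toward the data
            x = x + np.where(mask, d_obs - x, 0.0)
    return x

errors = {}
for method in ("POCS", "IHT"):
    errors[method] = np.linalg.norm(run(method) - signal) / np.linalg.norm(signal)
    print(method, f"relative error: {errors[method]:.3f}")
```

On noise-free data both variants recover the gaps, matching the abstract's observation; with noisy data the POCS re-insertion step would also re-insert the noise, which is the weakness the paper analyses.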
Sensitivity evaluation of a seismic interpolation algorithm
Authors: Doan Huy Hien, Seonghuyng Jang, Ta Quang Minh, Bui Viet Dung and Nguyen Thanh Tung

To maintain the initial resolution of the seismic image from sparse seismic acquisition, we propose several ways to sample data irregularly but periodically. At every processing step, we quantified the effect of interpolation by comparing the results with those from the fully sampled data. The results show that using 60% of the available data is feasible.

Sparse seismic acquisition is a new trend in seismic exploration, as it costs much less than conventional methods. To maintain the initial resolution of the seismic image, we propose several ways to sample data irregularly but periodically. These were tested by decimating synthetic data, then interpolating, imaging and inverting. At every processing step, we quantified the effect of interpolation by comparing the results with those from the fully sampled data. Once the numerical tests suggested the best decimation scheme, we proceeded to test the real dataset. This test confirmed that sparse acquisition using 60% of the available data is feasible.
A high expansion implicit finite-element prestack reverse time migration method
Authors: Limin Liu, Yun Wang, Yong Wang, Jing Chen and Yanqiu Liu

Building on the concepts of cohesion degree and local relaxation, we propose an integrated hierarchical equilibrium parallel finite-element reverse time migration (HEP-FE-RTM) algorithm, a fine-grained central processing unit (CPU) parallel computation method in a two-level host/sub-processor mode. A single master process is responsible for reading data and controlling the progress of the calculation, while each subordinate process handles a part of the depth-domain space. The algorithm can perform single-source forward modelling and inversion calculations using more than 2000 CPUs. Provided the number of iterations for convergence is controlled, sub-processors communicate only with their adjacent counterparts and the host processor, so the amount of data exchange is proportional to the cohesion degree. The HEP-FE-RTM algorithm has the distinct advantage that parallel efficiency does not decrease as the number of processors increases. In the two-level host/sub-processor mode, more than 2000 processors are used and one billion unknowns are solved. By incorporating the implicit dynamic Newmark integration scheme of the finite-element method, this approach achieves prestack reverse time migration (RTM) with high expansion (scalability). Making full use of the high accuracy and strong boundary adaptability of the finite-element method, and through optimisation of the finite-element solver, the HEP-FE-RTM algorithm improves the efficiency of parallel computing and achieves an RTM implementation using finite elements. Model tests show that the method performs well in both imaging efficiency and accuracy.

Building on the concepts of cohesion degree and local relaxation, we propose an integrated hierarchical equilibrium parallel finite-element reverse time migration (HEP-FE-RTM) algorithm, which has the distinct advantage that parallel efficiency does not decrease as the number of processors increases.
Poynting vector-guided imaging condition for imaging fractures using microseismic data
Authors: Yeonghwa Jo, Soon Jee Seol, Hyungwook Choi and Joongmoo Byun

We image pre-existing fractures using the elastic reverse time migration with source-independent converted phase (ERTM SICP) imaging condition, based on seismic signals observed during microseismic monitoring. However, the results showed linear spurious events caused by the velocity difference between P- and S-waves. To suppress these events, we modified the imaging condition by adding a weighting function calculated from the Poynting vectors of the P- and S-waves.

Hydraulic fracturing has been implemented in enhanced geothermal systems and in developing unconventional reservoirs to increase the permeability of earth media. To improve safety during hydraulic fracturing, we propose a method for imaging pre-existing fractures adjacent to the hydro-fractured zones, based on seismic signals observed during microseismic monitoring, using the ERTM SICP imaging condition. The ERTM SICP imaging condition is more computationally efficient than conventional ERTM and can perform migration without information on the locations of source events; it is therefore well suited to microseismic data. However, because of the difference between P- and S-wave velocities and the complex geometry, the ERTM SICP imaging condition produces spurious events that can be confused with fractures when applied to reflected waves. Based on the observation that the propagation directions of the P- and S-waves are very similar at mode-converting points but usually different elsewhere, we modified the imaging condition by adding a weighting function calculated from the Poynting vectors of the P- and S-waves. The weighting function takes different values depending on the angle between the propagation directions of the P- and S-waves. Tests of the imaging performance of this modified imaging condition confirm that our method successfully suppresses the linear spurious events that appear in non-mode-converting regions.
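The weighting idea is compact enough to show directly: compute the angle between the P- and S-wavefield Poynting vectors at an image point and down-weight points where the directions disagree. The cosine power and clipping below are illustrative choices, not the authors' exact weighting function.

```python
import numpy as np

# Sketch of the angle-based weighting idea: mode-converting points are
# where the P and S propagation directions (Poynting vectors) nearly
# agree, so those points keep weight ~1 and others are suppressed.
def poynting_weight(sp, ss, power=4):
    """sp, ss: Poynting vectors of the P and S wavefields at one point."""
    cos_t = np.dot(sp, ss) / (np.linalg.norm(sp) * np.linalg.norm(ss) + 1e-30)
    return np.clip(cos_t, 0.0, 1.0) ** power   # ~1 when aligned, ~0 otherwise

w_aligned = poynting_weight(np.array([1.0, 0.0]), np.array([1.0, 0.0]))
w_ortho = poynting_weight(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
print(w_aligned, w_ortho)  # aligned vs orthogonal propagation directions
```

In an elastic RTM code the Poynting vectors themselves would be computed from the stress and particle-velocity fields of the separated P and S wavefields; here they are given directly.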
Analysis of reservoir heterogeneities and depositional environments: a new method
Authors: Cyril D. Boateng and Li-Yun Fu

A new methodology for quantifying reservoir heterogeneities is proposed, based on a Monte Carlo parameter estimation technique using fractal parameters derived from the von Karman autocorrelation function. Reservoir heterogeneity parameters computed from sonic logs in an oilfield show that these parameters can serve as discriminants for depositional environments.

A new methodology is presented to quantify reservoir heterogeneities from sonic logs based on a Monte Carlo parameter estimation technique. The acoustic reservoir heterogeneities are quantified as an indicator to differentiate depositional environments and reservoir facies for geological interpretation. Fractal statistics provides a framework for modelling reservoir heterogeneities with different Hurst numbers, correlation lengths and fluctuation standard deviations from sonic logs. These fractal parameters are derived from the von Karman autocorrelation model and estimated by an improved methodology using a Monte Carlo parameter estimation technique, which is more stable than the regular estimation method. The resulting Hurst numbers, correlation lengths and root mean square (RMS) heights from 20 sonic logs are mapped in a reservoir with sandstone–mudstone sequences over complex continental deposits in north-eastern China. The spatial distribution of the estimated reservoir heterogeneity parameters is then correlated with depositional facies interpretations derived from seismic attributes and petrophysical properties based on prior geological knowledge. Numerical experiments show that the Monte Carlo parameter estimation technique successfully recovers acoustic heterogeneity parameters from sonic logs. Qualitative correlational analysis of sonic logs from the complex continental deposits in north-eastern China shows that these parameters can be key discriminants for depositional facies and environments and may be used as a constraint in reservoir characterisation. Maps of Hurst numbers and correlation lengths correspond strongly with reservoir facies distributions, whereas maps of RMS heights correlate significantly with fluvial depositional patterns. The results were generally uniform even when different depth ranges within the formation of interest were used as input for parameter estimation. Numerical values of reservoir heterogeneity parameters may not fully recover the geology of a given reservoir, but their spatial distribution is an important indicator of the morphological features of depositional environments.
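The von Karman autocorrelation and a crude Monte Carlo parameter search can be sketched as follows. This is a deliberately simple stand-in for the paper's estimation workflow: the lag axis, the parameter ranges, the fixed standard deviation and the number of random draws are all assumptions, and a real application would fit an ACF estimated from sonic logs rather than a synthetic one.

```python
import numpy as np
from scipy.special import gamma, kv

# Hedged sketch: von Karman autocorrelation
#   R(r) = sigma^2 * (r/a)^nu * K_nu(r/a) / (2^(nu-1) * Gamma(nu))
# and a crude Monte Carlo search for (correlation length a, Hurst nu).
def von_karman(r, a, nu, sigma=1.0):
    x = np.maximum(r / a, 1e-12)                 # avoid the r=0 singularity
    return sigma**2 * x**nu * kv(nu, x) / (2**(nu - 1) * gamma(nu))

r = np.linspace(0.1, 50.0, 200)                  # lag axis (m), assumed
observed = von_karman(r, a=10.0, nu=0.3)         # synthetic "measured" ACF

rng = np.random.default_rng(3)
best = (np.inf, None)
for _ in range(2000):                            # Monte Carlo parameter draws
    a, nu = rng.uniform(1, 30), rng.uniform(0.05, 0.95)
    misfit = np.sum((von_karman(r, a, nu) - observed) ** 2)
    best = min(best, (misfit, (a, nu)))
print("best misfit:", best[0], "estimated (a, nu):", best[1])
```

Random sampling of the parameter space avoids the local minima that can trap derivative-based fits, which is one reading of the stability advantage the abstract claims for the Monte Carlo approach.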
Development and numerical tests of a Bayesian approach to inferring shallow velocity structures using microtremor arrays
Authors: Ikuo Cho and Takaki Iwata

We have developed an empirical Bayesian approach to inferring shallow S-wave velocity structures. The approach can automatically determine the number of layers of a velocity structure model and can identify the more plausible assumption of surface-wave theory.

We propose an empirical Bayesian approach to inferring shallow (from a few to several tens of metres deep) S-wave velocity structures using microtremor arrays, and execute numerical tests to assess its feasibility. In our approach, the estimate of the S-wave structure (posterior) is derived from an empirical S-wave structure model (prior) and phase velocities of Rayleigh waves obtained with microtremor arrays. In other words, we aim to find a model that is close to the empirical model and explains the phase velocities with a 1D surface-wave theory. The inversion is stabilised by the constraints from the prior model, so model parameterisation with many thin layers can be adopted. The velocity structure is estimated separately under each of two assumptions: fundamental-mode dominance, and inclusion of the higher modes. Optimal values of the model parameters (e.g. a thickness parameter) are found using Akaike's Bayesian Information Criterion (ABIC), and the choice of the better surface-wave assumption is also based on ABIC. Numerical tests, in which synthetic data are generated from a horizontally stratified two-layer model, indicate that the relative weight between the prior model and the observed data is appropriately adjusted by ABIC. The value of the thickness parameter required to reproduce the given two-layer model is successfully found by ABIC. We also show that a plausible choice of the surface-wave assumption can be made with ABIC, unless the observation error is extremely large.
Acoustic VTI reverse time migration based on an improved source wavefield storage strategy
Authors: Ying Shi, Xiuzheng Fang, Weihong Wang and Xuan Ke

Advances in computational capability and ongoing improvements in storage strategies have made reverse time migration (RTM) a feasible method for imaging complex structures. However, large storage requirements still restrict RTM applications, especially in anisotropic media. Utilising a first-order quasi-P-wave equation in vertically transversely isotropic (VTI) media, we investigate anisotropy and derive an RTM formulation for a staggered-grid high-order finite difference (FD) scheme incorporating a perfectly matched layer (PML) boundary. We also develop an improved source wavefield storage strategy via a PML boundary method for VTI-medium RTM using graphics processing unit (GPU) accelerated computation. Our proposed method significantly reduces the total volume of data storage required by conventional RTM while increasing computation time only slightly. Checkpoints can be set according to GPU memory size, yielding high-precision, high-efficiency subsurface images. We carried out a series of numerical tests on simple anisotropic media and the complex Hess 2D VTI model to verify the effectiveness of the proposed method.

The reverse time migration (RTM) equations for a staggered-grid high-order finite difference scheme incorporating a perfectly matched layer boundary in vertically transversely isotropic (VTI) media are proposed, and checkpointing and GPU-accelerated techniques are utilised for data storage and computational efficiency. Hess 2D model tests demonstrate the effectiveness of the proposed algorithm.
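The storage trade-off behind such strategies rests on a general fact: a leapfrog FD scheme is time-reversible, so the source wavefield can be rebuilt backward from its final two snapshots instead of being stored at every step (with absorbing or PML boundaries one must additionally save thin boundary strips each step, which is where the memory saving lies). A 1D sketch of the reconstruction, with reflecting boundaries for simplicity and illustrative grid parameters:

```python
import numpy as np

# Sketch of the reconstruction idea behind wavefield-storage-saving RTM:
# propagate forward keeping only the last two snapshots, then rebuild
# the wavefield backward with the same time-reversible stencil.
nx, nt, r = 201, 400, 0.5           # grid points, time steps, Courant number
u = np.exp(-0.01 * (np.arange(nx) - 100.0) ** 2)   # initial pulse
u_prev = u.copy()                                   # zero initial velocity

def step(u, u_prev, r):
    """One leapfrog step; fixed (reflecting) boundaries for simplicity."""
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r**2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    return u_next

u0 = u.copy()
for _ in range(nt):                 # forward: keep only the last two snapshots
    u, u_prev = step(u, u_prev, r), u
for _ in range(nt):                 # backward: same stencil with roles swapped
    u, u_prev = u_prev, step(u_prev, u, r)

err = np.max(np.abs(u - u0))
print("reconstruction error:", err)  # limited only by floating-point round-off
```

In a production VTI RTM code the backward reconstruction runs alongside the receiver-wavefield back-propagation, so the imaging condition can be applied without ever storing the full source-wavefield history.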
A new staggered grid finite difference scheme optimised in the space domain for the first order acoustic wave equation
Authors: Wenquan Liang, Xiu Wu, Yanfei Wang, Jingjie Cao, Chaofan Wu and Baoqing He

In this paper, we propose a new finite difference (FD) scheme which uses different staggered-grid FD operators for the different first-order spatial derivatives in the first-order acoustic wave equation. Dispersion analysis and numerical simulation demonstrate the effectiveness of the proposed method.

Staggered-grid finite difference (FD) methods are widely used to synthesise seismograms theoretically, and are also the basis of reverse time migration and full waveform inversion. Grid dispersion is one of the key problems for FD methods, so it is desirable to have an FD scheme that accelerates wave equation simulation while preserving high accuracy. In this paper, we propose a new staggered-grid FD scheme which uses different staggered-grid FD operators for the different first-order spatial derivatives in the first-order acoustic wave equation. We determine the FD coefficients in the space domain with the least-squares method. Dispersion analysis and numerical simulation demonstrate the effectiveness of the proposed method.
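Determining staggered-grid FD coefficients by least squares, as the abstract describes, amounts to a small linear fit: the operator's wavenumber response (2/h) Σ c_n sin((n - 1/2) k h) is matched to the exact first-derivative spectrum k over a chosen band. The operator length and band limit below are illustrative, not the authors' choices.

```python
import numpy as np

# Sketch of least-squares design of staggered-grid FD coefficients:
# fit the discrete derivative spectrum to the exact spectrum k over a
# wavenumber band (kh in grid units; band limit is an assumption).
N = 4                                    # operator half-length (8-point stencil)
band = 0.7 * np.pi                       # fit up to 70% of Nyquist
kh = np.linspace(1e-3, band, 400)
G = 2 * np.sin(np.outer(kh, np.arange(N) + 0.5))  # response of each coefficient
c, *_ = np.linalg.lstsq(G, kh, rcond=None)        # least-squares coefficients

max_err = np.max(np.abs(G @ c - kh) / kh)
print("coefficients:", np.round(c, 5))
print("max relative dispersion error in band:", max_err)
```

Compared with Taylor-series coefficients of the same length, the least-squares fit trades a little low-wavenumber accuracy for much smaller dispersion near the band edge, which is what allows coarser grids at the same accuracy.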
Reconstruction of 3D non-uniformly sampled seismic data along two spatial coordinates using non-equispaced curvelet transform
Authors: Hua Zhang, Su Diao, Haiyang Yang, Guangnan Huang, Xiao Chen and Lei Li

Seismic data acquisition often faces the challenge of non-uniformly sampled data with missing traces, and only a few existing multitrace reconstruction methods can natively handle such data. In this paper, we propose a non-equispaced fast discrete curvelet transform (NFDCT)-based reconstruction method for 3D seismic data that are non-uniformly sampled along two spatial coordinates. By partitioning 3D seismic datasets into time slices along the source–receiver coordinates, we introduce the 2D non-equispaced fast Fourier transform into the conventional fast discrete curvelet transform and formulate a regularised inversion of the operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled data. Numerically, the uniform curvelet coefficients are calculated by solving an L1-norm problem via the spectral projected-gradient algorithm. With the uniform curvelet coefficients, the NFDCT is completed via the conventional inverse curvelet transform and used to reconstruct the 3D non-uniformly sampled data. Reconstruction results from synthetic and field data demonstrate that the proposed method shows significant improvement over the conventional anti-leakage Fourier transform-based reconstruction method. The method, which has strong anti-aliasing and anti-noise ability, can be used to reconstruct the observed data onto a specified uniform grid along two spatial coordinates.

We propose a non-equispaced fast discrete curvelet transform-based reconstruction method for 3D seismic data that are non-uniformly sampled along two spatial coordinates. The method, which has strong anti-aliasing and anti-noise ability, can be used to reconstruct the non-uniformly sampled data onto a specified uniform grid.
Simple assessment of shallow velocity structures with small-scale microtremor arrays: interval-averaged S-wave velocities
Authors: Ikuo Cho, Atsushi Urabe, Tsutomu Nakazawa, Yoshiki Sato and Kentaro Sakata

This article describes a method for processing microtremor records from a small-scale seismic array that allows interval-averaged S-wave velocities to be estimated for 10-m depth ranges down to a depth of 30 m. The method was applied to microtremor data obtained in the town of Mashiki, Kumamoto Prefecture, Japan, and the results were evaluated through comparison with available PS logs and sections obtained by surface-wave methods. The interval-averaged S-wave velocity estimates may be subject to errors of up to 20–30% in absolute value, but the method can help evaluate relative spatial variations in S-wave velocity. In view of the simplicity of the analysis, the analyser-independent nature of the results and the limitations on accuracy, the interval-averaged S-wave velocity estimation method presented here could serve as an effective tool for preliminary analysis of microtremor data from small-scale seismic arrays.

This article describes a simple method for estimating interval-averaged S-wave velocities for 10-m depth ranges down to a depth of 30 m. The possibilities and limitations of the method are examined using analysis results for microtremor data obtained in the town of Mashiki, Kumamoto Prefecture, Japan.
Integration of 3D reflection seismics and magnetic data for deep platinum mine planning and risk mitigation: a case study from Bushveld Complex, South Africa
Authors: Stephanie E. Scheiber-Enslin and Musa Manzi

Loss-of-ground due to slumping, faulting or replacement of mineral horizons is a common problem during mining in the Bushveld Complex, South Africa. Integrating high-resolution aeromagnetic and 3D reflection seismic data to delineate geological features allows for efficient mine planning and risk reduction. We use high-resolution seismic data from the western Bushveld Complex to image slump structures, iron-rich ultramafic pegmatoids (IRUPs), faults, dykes and diapirs that affect the economic UG-2 horizon. The seismic data resolve faults with throws as small as ~10 m. The data reveal a slump structure in the north of the survey area, extending ~5 km along strike and causing up to 1 km of vertical displacement. Because no boreholes have been drilled in this region, the strong magnetic anomalies that characterise it are interpreted to be associated with IRUPs. The structure is bounded on several sides by faults and IRUPs. Another feature better imaged by the seismic data is a large-scale fault mapped at the surface (locally known as the Chaneng structure); on the seismic data it appears as a complex fault network in the west of the study area and a region of material flow in the east. The seismic data also map dykes well, owing to their close association with faults, which displace the economic horizons. Finally, the seismic data show disrupted amplitude zones associated with a diapir (~6 km in diameter) linked to the upwelling of basement rocks during emplacement of the complex. This diapir displaces the economic UG-2 horizon at mining levels by ~0.05 s two-way traveltime, or ~175 m. This information could be used in future mine planning and design to assess and mitigate the risks these features pose during mining.

We use high-resolution aeromagnetic and 3D reflection seismic data to delineate geological structures and bodies defined as loss-of-ground features in the Bushveld Complex, South Africa, including faults, replacement pegmatoids and slump features. This is done for more efficient mine planning and risk reduction.
Two dimensional cross-gradient joint inversion of gravity and magnetic data sets constrained by airborne electromagnetic resistivity in the Capricorn Orogen, Western Australia
Airborne collection of electromagnetic and potential field data is a common strategy for extensive resource exploration and reconnaissance. Because these datasets contain information about different properties at different depths, they are normally considered complementary and interpreted separately. Using airborne data acquired in Western Australia, we explore the viability of their joint inversion and show the advantages of combined analysis and interpretation.

In many geological scenarios, interpreting multiple geophysical datasets through joint inversion has become common practice, provided all data share compatible spatial resolution. Unfortunately, this requirement has limited the use of airborne electromagnetic (AEM) data in joint inversion: airborne gravity and magnetic signals are commonly assumed to originate largely from depths of a few kilometres, whereas co-located AEM signals can only penetrate a few hundred metres, rendering the datasets spatially incompatible. We believe, however, that a fraction of these signals originates from the same structures, providing common ground for structural joint inversion strategies. We explore the viability of jointly inverting such datasets using potential field and AEM data acquired in Western Australia, with three comparative experiments. First, we generate conventional 2D separate models for each dataset to gauge their individual resolution capability. We then perform 2D cross-gradient joint inversion of the gravity and magnetic datasets. Finally, we adapt the structural joint inversion to include the AEM resistivity model as a constraint. We show that there is an area commonly sensed by the three datasets and that the coupled resolution influences both the shallow and the deep structures of the joint models. This yields a coherent integrated interpretation of the shallow and deep structures of the studied section, validated by comparison with a nearby seismic traverse section.
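The cross-gradient coupling term used in such joint inversions is compact enough to show directly: it vanishes wherever the gradients of the two models are parallel, i.e. where their structures agree. This is a generic sketch of the quantity itself, not the authors' inversion implementation, and the two synthetic models are illustrative.

```python
import numpy as np

# Sketch of the cross-gradient structural coupling term:
#   t = grad(m1) x grad(m2)
# t = 0 where the two models' gradients are parallel (consistent structure).
def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    g1z, g1x = np.gradient(m1, dz, dx)      # derivatives along z (axis 0), x (axis 1)
    g2z, g2x = np.gradient(m2, dz, dx)
    return g1z * g2x - g1x * g2z            # out-of-plane component in the z-x plane

z, x = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 60), indexing="ij")
density = z                                  # layered model: varies with depth only
suscept = 2.0 * z + 1.0                      # same structure, different property
t_aligned = cross_gradient(density, suscept)
t_crossed = cross_gradient(density, x)       # gradients orthogonal everywhere
print(np.abs(t_aligned).max(), np.abs(t_crossed).max())
```

In the joint inversion, a penalty on |t| pulls the gravity and magnetic (and here resistivity-constrained) models toward shared structural boundaries without forcing any petrophysical relationship between the properties.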
Multiple-point geostatistical simulation for mine evaluation with aeromagnetic data
Authors: Jinpyo Hong, Seokhoon Oh and Seong-Jun Cho

Multiple-point geostatistical simulation (MPS) was applied to develop 3D ore models matched to surrounding geological information, accompanied by aeromagnetic data, using a training image. This study proposes a method for reducing the uncertainty of the 3D ore model by applying MPS to create probabilistic ore models and analysing the correlation between the models and geophysical data.

Multiple-point geostatistical simulation (MPS) was applied to develop 3D ore models matched to surrounding geological information, accompanied by aeromagnetic data, using a training image (TI). Conventional 3D geological models generated from a limited number of boreholes and other geological information may be useful for evaluating mineral resources around the boreholes, but carry uncertainty when evaluating the ore body over the entire area. Geostatistical analysis accompanying geophysical interpretation is adopted to reduce the uncertainty of the 3D ore model. Among geostatistical methods, MPS based on a TI built from available geological information is chosen to simulate the configuration and distribution of the ore body according to the geological structure. The present study proposes a method for reducing the uncertainty of the 3D ore model by applying MPS to create probabilistic ore models and analysing the correlation between the models and geophysical data. The method was applied to a metal mine in Korea. Single normal equation simulation (SNESIM) was chosen as the simulation algorithm, and aeromagnetic data were used to support the analysis of simulated models. Through comparison and analysis of the probabilistic ore models and geophysical data, the 3D geological model based on MPS represented the configuration and distribution of the ore body well according to the geological structure. The SNESIM cluster results indicated high reliability for the final interpretation of the 3D models.