Geophysical Prospecting - Volume 63, Issue 2, 2015
Anisotropy signature in reverse‐time migration extended images
Authors: Paul Sava and Tariq Alkhalifah
Abstract: Reverse-time migration can accurately image complex geologic structures in anisotropic media. Extended images at selected locations in the Earth, i.e., at common-image-point gathers, carry rich information to characterize angle-dependent illumination and to provide measurements for migration velocity analysis. Characterizing the influence of anisotropy on such extended images, however, is a challenge. Extended common-image-point gathers are cheap to evaluate since they sample the image at sparse locations indicated by the presence of strong reflectors. Such gathers are also sensitive to velocity error, which manifests itself through moveout as a function of space and time lags. Furthermore, inaccurate anisotropy leaves a distinctive signature in common-image-point gathers, which can be used to evaluate anisotropy through techniques similar to those used in conventional wavefield tomography. Specifically, an anisotropy error admits a V-shaped residual moveout, with the slope of the "V" flanks depending on the anisotropic parameter η regardless of the complexity of the velocity model. This distinct signature in extended images reflects the fourth-order nature of the influence of anisotropy on moveout, provided that the velocity is handled properly in the imaging process. Synthetic and real data observations support this assertion.
Improved edge detection mapping through stacking and integration: a case study in the Bathurst Mining Camp
Authors: Peter Tschirhart and Bill Morris
Abstract: Airborne geophysical surveys provide spatially continuous regional data coverage, which directly reflects subsurface petrophysical differences and thus the underlying geology. A modern geologic mapping exercise requires the fusion of this information to complement what is typically limited regional outcrop. Often, interpretation of the geophysical data in a geological context is done qualitatively using total field and derivative maps. With a qualitative approach, the resulting map product may reflect the interpreter's bias. Source edge detection provides a quantitative means to map lateral physical property changes in potential and non-potential field data. There are a number of source edge detection algorithms, all of which apply a transformation to convert local signal inflections associated with source edges into local maxima. As a consequence of differences in their computation, the various algorithms generate slightly different results for any given source depth, geometry, contrast, and noise level. To enhance the viability of any detected edge, it is recommended to combine the output of several source edge detection algorithms. Here we introduce a simple data compilation method, termed edge stacking, which improves the interpretable product of source edge detection through direct gridding, grid addition, and amplitude thresholding. In two examples, i.e., a synthetic example and a real-world example from the Bathurst Mining Camp, New Brunswick, Canada, a number of transformation algorithms are applied to gridded geophysical data sets, and the resulting source edge detection solutions are combined. Edge stacking combines the benefits and nuances of each source edge detection algorithm: coincident or overlapping and laterally continuous solutions are considered more indicative of a true edge, whereas isolated points are taken as indicative of random noise or false solutions.
When additional data types are available, as in our example, they may also be integrated to create a more complete geologic model. The effectiveness of this method is limited only by the resolution of each survey data set and the necessity of lateral physical property contrasts. The end product is a petrophysical contact map, which, when integrated with known lithological outcrop information, can lead to an improved geological map.
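The gridding, addition, and thresholding steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-grid min-max normalization and the threshold fraction are assumptions introduced here so that differently scaled edge maps can be added.

```python
import numpy as np

def edge_stack(edge_maps, threshold_frac=0.5):
    """Combine several normalized edge-detection grids by addition, then
    suppress weak (isolated) responses by amplitude thresholding."""
    stack = np.zeros_like(edge_maps[0], dtype=float)
    for m in edge_maps:
        rng = np.ptp(m)  # normalize each grid to [0, 1] before stacking
        norm = (m - m.min()) / rng if rng > 0 else np.zeros_like(m, dtype=float)
        stack += norm
    stack /= len(edge_maps)
    # coincident solutions survive the threshold; isolated ones drop to zero
    return np.where(stack >= threshold_frac, stack, 0.0)
```

Edges detected by several algorithms reinforce each other in the stack, while a maximum appearing in only one grid falls below the threshold.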
Appraisal problem in the 3D least squares Fourier seismic data reconstruction
Authors: Fabio Ciabarri, Alfredo Mazzotti, Eusebio Stucchi and Nicola Bienati
Abstract: Least-squares Fourier reconstruction is essentially a solution to a discrete linear inverse problem that attempts to recover the Fourier spectrum of the seismic wavefield from data irregularly sampled along the spatial coordinates. The estimated Fourier coefficients are then used to reconstruct the data on a regular grid via a standard inverse Fourier transform (inverse discrete Fourier transform or inverse fast Fourier transform).
Unfortunately, this kind of inverse problem is usually under-determined and ill-conditioned. For this reason, minimum-norm least-squares Fourier reconstruction adopts a damped least-squares inversion to retrieve a unique and stable solution.
In this work, we show how the damping can introduce artefacts in the reconstructed 3D data. To quantitatively describe this issue, we introduce the concept of an "extended" model resolution matrix, and we formulate the reconstruction problem as an appraisal problem. Through the simultaneous analysis of the extended model resolution matrix and of the noise term, we discuss the limits of minimum-norm Fourier reconstruction, assess the validity of the reconstructed data, and identify the possible bias introduced by the inversion process. We can also guide the parameterization of the forward problem to minimize the occurrence of unwanted artefacts. A simple synthetic example and real data from a 3D marine common shot gather are used to discuss our approach and to show the results of minimum-norm Fourier reconstruction.
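The damped least-squares estimate at the core of the method can be sketched in one spatial dimension (a simplification of the 3D problem; the integer wavenumber range and the damping value eps are illustrative assumptions, not the authors' parameterization):

```python
import numpy as np

def damped_fourier_recon(x_irr, d, n_coef, x_reg, eps=1e-2):
    """Minimum-norm (damped) least-squares estimate of Fourier coefficients
    from irregularly sampled data d(x_irr), evaluated on a regular grid."""
    k = np.arange(-(n_coef // 2), n_coef // 2 + 1)   # integer wavenumbers
    F = np.exp(2j * np.pi * np.outer(x_irr, k))      # forward (DFT-like) operator
    # damped normal equations: (F^H F + eps^2 I) m = F^H d
    A = F.conj().T @ F + (eps ** 2) * np.eye(len(k))
    m = np.linalg.solve(A, F.conj().T @ d)
    # evaluate the recovered spectrum on the regular output grid
    return np.exp(2j * np.pi * np.outer(x_reg, k)) @ m
```

The damping term eps^2 guarantees a unique, stable solution, but it also shrinks the estimated coefficients, which is exactly the bias the appraisal analysis above quantifies.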
A robust approach to time‐to‐depth conversion and interval velocity estimation from time migration in the presence of lateral velocity variations
Authors: Siwei Li and Sergey Fomel
Abstract: The problem of conversion from time-migration velocity to an interval velocity in depth in the presence of lateral velocity variations can be reduced to solving a system of partial differential equations. In this paper, we formulate the problem as a non-linear least-squares optimization for seismic interval velocity and seek its solution iteratively. The input for the inversion is the Dix velocity, which also serves as an initial guess. The inversion gradually updates the interval velocity in order to account for lateral velocity variations that are neglected in the Dix inversion. The algorithm has a moderate cost thanks to regularization that speeds up convergence while ensuring a smooth output. The proposed method should be numerically robust compared with previous approaches, which amount to monotonic extrapolation in depth. For a successful time-to-depth conversion, image-ray caustics should be either nonexistent or excluded from the computational domain. The resulting velocity can be used in subsequent depth-imaging model building. Both synthetic and field data examples demonstrate the applicability of the proposed approach.
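The Dix velocity that serves as the input and initial guess comes from the classical Dix conversion, sketched below under its laterally invariant assumption (a standard textbook formula, not the paper's PDE-based inversion):

```python
import numpy as np

def dix_interval_velocity(t0, v_rms):
    """Dix inversion: interval velocities from RMS (time-migration) velocities
    v_rms picked at zero-offset times t0, assuming a laterally invariant medium:
    v_int^2 = d(v_rms^2 * t0) / d(t0) between consecutive picks."""
    t0, v_rms = np.asarray(t0, float), np.asarray(v_rms, float)
    num = np.diff(v_rms ** 2 * t0)
    den = np.diff(t0)
    return np.sqrt(num / den)
```

Lateral velocity variations violate this assumption, which is precisely what the iterative update described in the abstract corrects for.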
Diffraction imaging by uniform asymptotic theory and double exponential fitting
Authors: Jingtao Zhao, Yanfei Wang and Caixia Yu
Abstract: Seismic diffracted waves carry valuable information for identifying geological discontinuities. Unfortunately, the diffraction energy is generally weak, and standard seismic processing is biased toward imaging reflections. In this paper, we present a dynamic diffraction imaging method aimed at enhancing diffractions and increasing the signal-to-noise ratio. The correlation between diffraction amplitudes and their traveltimes generally exists in two forms: one based on the Kirchhoff integral formulation and the other on the uniform asymptotic theory. However, the former encounters singularities at geometrical shadow boundaries, and the latter requires the computation of a Fresnel integral. Therefore, neither of these methods is appropriate for practical applications. Noting the special form of the Fresnel integral, we propose a least-squares fitting method based on double exponential functions to study the amplitude function of diffracted waves. The simple form of the fitting function has no singularities and can accelerate the calculation of diffraction amplitude weakening coefficients. By considering both the fitted weakening function and the polarity-reversal property of the diffracted waves, we modify the conventional Kirchhoff imaging conditions and formulate a diffraction imaging formula. The mechanism of the proposed diffraction imaging procedure is based on the edge diffractor, instead of the idealized point diffractor. The polarity-reversal property can eliminate the background of strong reflections and enhance the diffractions through same-phase summation. Moreover, the fitted weakening function of diffraction amplitudes behaves like an inherent window whose decaying trend optimizes the diffraction imaging aperture.
Synthetic and field data examples reveal that the proposed diffraction imaging method can meet the requirements of high-resolution imaging, with the edge diffraction fully reinforced and the strong reflection mostly eliminated.
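A minimal illustration of double-exponential least-squares fitting, assuming the form a1·exp(−b1·t) + a2·exp(−b2·t); the grid search over decay rates with linear solves for the amplitudes is a simplification introduced here, not the authors' exact fitting procedure:

```python
import numpy as np

def fit_double_exponential(t, y, b_grid):
    """Least-squares fit of y ≈ a1*exp(-b1*t) + a2*exp(-b2*t).
    Decay rates (b1, b2) are scanned over b_grid; for each pair, the
    amplitudes (a1, a2) follow from a linear least-squares solve."""
    best = (np.inf, None)
    for b1 in b_grid:
        for b2 in b_grid:
            G = np.column_stack([np.exp(-b1 * t), np.exp(-b2 * t)])
            a, *_ = np.linalg.lstsq(G, y, rcond=None)
            misfit = np.sum((G @ a - y) ** 2)
            if misfit < best[0]:
                best = (misfit, (a[0], b1, a[1], b2))
    return best[1]
```

Unlike the Fresnel integral, this fitted form is smooth and cheap to evaluate everywhere, which is the practical advantage the abstract emphasizes.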
Velocity model building from seismic reflection data by full‐waveform inversion
Authors: Romain Brossier, Stéphane Operto and Jean Virieux
Abstract: Full-waveform inversion is re-emerging as a powerful data-fitting procedure for quantitative seismic imaging of the subsurface from wide-azimuth seismic data. This method is suitable for building high-resolution velocity models provided that the targeted area is sampled by both diving waves and reflected waves. However, the conventional formulation of full-waveform inversion prevents the reconstruction of the small-wavenumber components of the velocity model when the subsurface is sampled by reflected waves only. This typically occurs as the depth becomes significant with respect to the length of the receiver array. This study first aims to highlight the limits of the conventional form of full-waveform inversion when applied to seismic reflection data, through a simple canonical example of seismic imaging, and then proposes a new inversion workflow that overcomes these limitations. The governing idea is to decompose the subsurface model into a background part, which we seek to update, and a singular part that corresponds to some prior knowledge of the reflectivity. Forcing this scale uncoupling in the full-waveform inversion formalism brings out, in the sensitivity kernel of full-waveform inversion, the transmitted wavepaths that connect the sources and receivers to the reflectors; this kernel is otherwise dominated by the migration impulse responses formed by the correlation of the downgoing direct wavefields coming from the shot and receiver positions. This transmission regime makes full-waveform inversion amenable to the update of the long-to-intermediate wavelengths of the background model from the wide scattering-angle information.
However, we show that this prior knowledge of the reflectivity does not remove the need for a suitable misfit measurement, based on cross-correlation, to avoid cycle-skipping issues, or for a suitable inversion domain, such as the pseudo-depth domain, that preserves the invariance of the zero-offset time. This latter feature is useful for avoiding an update of the reflectivity information at each non-linear iteration of the full-waveform inversion, hence considerably reducing the computational cost of the entire workflow. Prior information on the reflectivity in the full-waveform inversion formalism, a robust misfit function that prevents cycle-skipping issues, and a suitable inversion domain that preserves the seismic invariant are the three key ingredients that should ensure well-posedness and computational efficiency of full-waveform inversion algorithms for seismic reflection data.
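The role of a cross-correlation-based misfit can be illustrated with a generic sketch (not the authors' functional): the traveltime shift between observed and predicted traces is picked from the cross-correlation maximum, a measure that stays meaningful even for shifts larger than a period, where a plain L2 waveform difference would lock onto the wrong cycle.

```python
import numpy as np

def xcorr_lag(obs, pred, dt):
    """Time shift (in seconds) maximizing the cross-correlation of two traces
    sampled at interval dt; positive lag means obs is delayed relative to pred."""
    c = np.correlate(obs, pred, mode="full")
    lag = np.argmax(c) - (len(pred) - 1)
    return lag * dt
```

Driving the inversion with this lag, rather than with sample-by-sample differences, is one standard way to mitigate cycle skipping.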
A fast algorithm for 3D azimuthally anisotropic velocity scan
Authors: Jingwei Hu, Sergey Fomel and Lexing Ying
Abstract: The conventional velocity scan can be computationally expensive for large-scale seismic data sets, particularly when the presence of anisotropy requires multiparameter scanning. We introduce a fast algorithm for the 3D azimuthally anisotropic velocity scan by generalizing the previously proposed 2D butterfly algorithm for hyperbolic Radon transforms. To compute semblance in a two-parameter residual moveout domain, the numerical complexity of our algorithm grows far more slowly with N than that of the straightforward velocity scan, where N is representative of the number of points in a particular dimension of either the data space or the parameter space. Synthetic and field data examples demonstrate the superior efficiency of the proposed algorithm.
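For reference, the straightforward velocity scan that the butterfly algorithm accelerates can be sketched in its simplest isotropic 2D form: semblance over trial velocities along hyperbolic moveout curves (the azimuthally anisotropic version scans additional moveout parameters the same way; nearest-sample interpolation is a simplification used here):

```python
import numpy as np

def semblance_scan(data, t, offsets, velocities):
    """Brute-force hyperbolic velocity scan: semblance of data[i_t, i_x]
    along t_hyp = sqrt(t0^2 + (h/v)^2) for each trial velocity v."""
    dt = t[1] - t[0]
    S = np.zeros((len(t), len(velocities)))
    for iv, v in enumerate(velocities):
        shifted = np.zeros_like(data)
        for ix, h in enumerate(offsets):
            t_hyp = np.sqrt(t ** 2 + (h / v) ** 2)   # hyperbolic moveout
            idx = np.clip(np.round(t_hyp / dt).astype(int), 0, len(t) - 1)
            shifted[:, ix] = data[idx, ix]           # nearest-sample pick
        num = np.sum(shifted, axis=1) ** 2
        den = data.shape[1] * np.sum(shifted ** 2, axis=1)
        S[:, iv] = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return S
```

The cost of these nested loops over times, offsets, and trial parameters is what makes multiparameter scanning expensive and motivates the fast algorithm.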
Comparison of P‐wave attenuation models of wave‐induced flow
Authors: Weitao Sun, Jing Ba, Tobias M. Müller, José M. Carcione and Hong Cao
Abstract: Wave-induced oscillatory fluid flow in the vicinity of inclusions embedded in porous rocks is one of the main causes of P-wave dispersion and attenuation at seismic frequencies. Hence, the P-wave velocity depends on wave frequency, porosity, saturation, and other rock parameters. Several analytical models quantify this wave-induced flow attenuation and result in characteristic velocity–saturation relations. Here, we compare some of these models by analysing their low- and high-frequency asymptotic behaviours and by applying them to measured velocity–saturation relations. Specifically, the Biot–Rayleigh model, which considers spherical inclusions embedded in an isotropic rock matrix, is compared with White's and Johnson's models of patchy saturation. The modelling of laboratory data for tight sandstone and limestone indicates that, with an appropriate choice of inclusion size, the Biot–Rayleigh predictions are close to the measured values, particularly for intermediate and high water saturations.
Improving the gradient of the image‐domain objective function using quantitative migration for a more robust migration velocity analysis
Authors: Charles-Antoine Lameloise, Hervé Chauris and Mark Noble
Abstract: Migration velocity analysis aims at determining the background velocity model. Classical artefacts, such as migration smiles, are observed on subsurface-offset common image gathers, due to spatial and frequency limitations of the data. We analyse their impact on the differential semblance functional and on its gradient with respect to the model. In particular, the differential semblance functional is not necessarily minimum at the expected value. Tapers are classically applied to common image gathers to partly reduce these artefacts. Here, we first observe that the migrated image can be defined as the first gradient of an objective function formulated in the data domain. For an automatic and more robust formulation, we introduce a weight in the original data-domain objective function. The weight is determined such that the Hessian resembles a Dirac function. In that way, we extend quantitative migration to the subsurface-offset domain, which provides an automatic way to compensate for uneven illumination. We analyse the modified scheme on a very simple 2D case and on a more complex velocity model to show how migration velocity analysis becomes more robust.
Numerical estimation of carbonate rock properties using multiscale images
Authors: Mohamed Soufiane Jouini, Sandra Vega and Ahmed Al-Ratrout
Abstract: Characterizing the pore space of rock samples using three-dimensional (3D) X-ray computed tomography images is a crucial step in digital rock physics. Indeed, the quality of the extracted pore network has a high impact on the prediction of rock properties such as porosity, permeability, and elastic moduli. In carbonate rocks, it is usually very difficult to find a single image resolution that fully captures the sample pore network, because of the heterogeneities existing at different scales. Hence, to overcome this limitation, a multiscale analysis of the pore space may be needed. In this paper, we present a method to estimate the porosity and elastic properties of clean carbonate (without clay content) samples from 3D X-ray microtomography images at multiple resolutions. We perform a three-phase segmentation to separate grains, pores, and an unresolved porous phase using 19 μm resolution images of each core plug. Then, we use higher-resolution images (between 0.3 and 2 μm) of microplugs extracted from the core plug samples. These subsets of images are assumed to be representative of the unresolved phase. We estimate the porosity and elastic properties of each sample by extrapolating the microplug properties to the whole unresolved phase. In addition, we compute the absolute permeability using the lattice Boltzmann method on the microplug images, owing to the low resolution of the core plug images.
In order to validate the numerical simulations, we compare our results with available laboratory measurements at the core plug scale. Simulated porosities for the eight samples agree with the measurements to within 13%. Numerical permeability predictions provide realistic values in the range of the experimental data but with a higher relative error. Finally, the elastic moduli show the largest disagreements, with simulation errors exceeding 150% for three samples.
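For porosity, the extrapolation step described above reduces to a volume-weighted combination of the resolved pore fraction and the unresolved phase, under the stated assumption that the microplug porosity represents the whole unresolved phase (a sketch of the bookkeeping, not the authors' code):

```python
def multiscale_porosity(f_pore, f_unresolved, phi_micro):
    """Total porosity from a three-phase segmentation:
    f_pore        -- volume fraction of resolved pores in the core plug image,
    f_unresolved  -- volume fraction of the unresolved porous phase,
    phi_micro     -- porosity of that phase, measured on the microplug images."""
    return f_pore + f_unresolved * phi_micro
```

For example, 10% resolved porosity plus an unresolved phase occupying 25% of the volume at 40% microscale porosity yields 20% total porosity.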
Effect of oil composition on fluid substitution in heavy oil reservoirs
Authors: Amir Shamsa and Larry Lines
Abstract: Unlike light oils, heavy oils do not have a well-established scheme for modelling elastic moduli from dynamic reservoir properties. One of the main challenges in the fluid substitution of heavy oils is their viscoelastic nature, which is controlled by temperature, pressure, and fluid composition. Here, we develop a framework for fluid substitution modelling, grounded in effective-medium theories for estimating the elastic moduli of an oil–rock system, that is reliable yet practical for a wide range of cold and thermal recovery scenarios in producing heavy oils and that takes the reservoir fluid composition into account. We investigate the effect of fluid composition variations on oil–rock elastic moduli as temperature changes. The fluid compositional behaviour is determined by flash calculations. Elastic moduli are then determined using the double-porosity coherent potential approximation method and the viscosity calculated from the fluid composition. An increase in temperature imposes two opposing mechanisms on the viscosity behaviour of a heavy-oil sample: gas liberation, which tends to increase the viscosity, and melting, which decreases it. We demonstrate that melting dominates gas liberation; as a result, the viscosity and, consequently, the shear modulus of heavy oils always decrease with increasing temperature. Furthermore, it turns out that one can disregard the effect of solution gas when modelling the elastic moduli of heavy oils. Here, we compare oil–rock elastic moduli when the rock is saturated with fluids of different viscosities. The objective is to characterize a unique relation between temperature, frequency, and the elastic moduli of an oil–rock system. We propose an approach that takes advantage of this relation to find the temperature and, consequently, the viscosity in different regions of the reservoir.
Acoustic reflection logging for open and cased holes
Authors: M. Markov, I. Markova and G. Ronquillo Jarillo
Abstract: In the present work, the waveforms of the reflected-wave sonic log for open and cased boreholes are calculated. Calculations are performed for a borehole containing an acoustic multipole source (monopole, dipole, or quadrupole). A reflected wave is excited most efficiently at resonant frequencies; for all source types, these frequencies are close to the oscillation frequencies of a fluid column located in an absolutely rigid hollow cylinder. It is shown that the acoustic reverberation is controlled by the acoustic impedance of the rock, Z = Vp ρs, for fixed parameters of the borehole fluid, where Vp is the compressional wave velocity in the rock and ρs is the rock density. This result holds for all types of acoustic sources (monopole, dipole, or quadrupole). Methods of waveform processing for determining the parameters characterizing the reflected wave are discussed.
Levelling aeromagnetic survey data without the need for tie‐lines
Authors: James C. White and David Beamish
Abstract: A new methodology that levels airborne magnetic data without orthogonal tie-lines is presented in this study. The technique utilizes the low-wavenumber content of the flight-line data to construct a smooth representation of the regional field at a scale appropriate to the line lengths of the survey. Levelling errors are then calculated between the raw flight-line data and the derived regional field through a least-squares approach. Minimizing the magnitude of the error with a first-degree error function results in significant improvements to the unlevelled data. The technique is tested and demonstrated on three recent airborne surveys.
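The per-line levelling step can be sketched as follows, assuming the smooth regional field has already been constructed from the low-wavenumber content, and reading "first-degree error function" as a linear trend per flight line (both are interpretive assumptions, not the authors' exact scheme):

```python
import numpy as np

def level_line(raw, regional):
    """Remove a first-degree (linear) levelling error from one flight line,
    estimated by least squares against the smooth regional field."""
    x = np.arange(len(raw), dtype=float)
    resid = raw - regional                    # levelling error plus signal misfit
    coeffs = np.polyfit(x, resid, deg=1)      # least-squares line through the error
    return raw - np.polyval(coeffs, x)
```

Because only the smooth regional field is used as the reference, no orthogonal tie-lines are required.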
Mineral potential mapping in Central Iran using fuzzy ordered weighted averaging method
Authors: Maysam Abedi, Gholam-Hossain Norouzi and Nader Fathianpour
Abstract: The aim of this work is to introduce the fuzzy ordered weighted averaging method as a straightforward knowledge-driven approach to exploring for porphyry copper deposits in an airborne-surveyed prospect. In this paper, the proposed method is applied to airborne geophysical data (potassium radiometry, magnetometry, and frequency-domain electromagnetics), geological layers (fault and host rock zones), and various alteration layers extracted from remote sensing images. The central Iranian volcanic–sedimentary belt in Kerman province of Iran, located within the Urumieh–Dokhtar (Sahand–Bazman) magmatic arc, is chosen for this study. This region has a high potential for mineral occurrences, especially porphyry copper, and contains active world-class copper mines such as Sarcheshmeh. Two evidential layers, the downward-continued map and the analytic signal of the filtered magnetic data, are generated to serve as plausible geophysical traces of porphyry copper occurrences. The low values of the resistivity layer acquired from the airborne frequency-domain electromagnetic data are also used as an electrical criterion. Four remote sensing evidential layers, comprising the argillic, phyllic, propylitic, and hydroxyl alterations, are extracted from Advanced Spaceborne Thermal Emission and Reflection Radiometer images in order to map the altered areas associated with porphyry copper deposits. Enhanced Thematic Mapper Plus images are used to map the iron oxide layer. Since potassium alteration is the mainstay of copper alteration, the airborne potassium radiometry data are used as well. The fuzzy ordered weighted averaging method employs a wide range of decision strategies in order to generate numerous mineral potential/prospectivity maps.
The final mineral potential map, based upon the chosen geo-data set, shows that the high-potential zones match well with previously worked mines and known copper deposits.
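The ordered weighted averaging operator at the core of the method can be sketched per grid cell as follows: the fuzzy membership values from the evidential layers are sorted in descending order before the order weights are applied, and the choice of weights is the decision-strategy knob, ranging from AND-like (pessimistic) to OR-like (optimistic). This is a generic OWA sketch, not the authors' full workflow:

```python
import numpy as np

def owa(memberships, weights):
    """Ordered weighted average of fuzzy evidential-layer memberships at one
    cell: values are sorted descending, then combined with order weights
    (normalized to sum to 1)."""
    v = np.sort(np.asarray(memberships, float))[::-1]
    w = np.asarray(weights, float)
    return float(np.dot(v, w / w.sum()))
```

Weights [1, 0, ..., 0] reproduce the fuzzy OR (maximum), [0, ..., 0, 1] the fuzzy AND (minimum), and uniform weights the plain average; sweeping between these extremes generates the family of prospectivity maps described above.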
Joint inversion of long‐offset and central‐loop transient electromagnetic data: Application to a mud volcano exploration in Perekishkul, Azerbaijan
Authors: A. Haroon, J. Adrian, R. Bergers, M. Gurk, B. Tezkan, A. L. Mammadov and A. G. Novruzov
Abstract: Mud volcanism is commonly observed in Azerbaijan and the surrounding South Caspian Basin. This natural phenomenon is very similar to magmatic volcanism but differs in one considerable aspect: magmatic volcanoes are generally the result of ascending molten rock within the Earth's crust, whereas mud volcanoes are characterised by expelling mixtures of water, mud, and gas. The majority of mud volcanoes have been observed on ocean floors or in deep sedimentary basins, such as those found in Azerbaijan. Furthermore, their occurrences in Azerbaijan are generally closely associated with hydrocarbon reservoirs and are therefore of immense economic and geological interest. The broadside long-offset transient electromagnetic method and the central-loop transient electromagnetic method were applied to study the inner structure of such mud volcanoes and to determine the depth of a resistive geological formation that is predicted to contain the majority of the hydrocarbon reservoirs in the survey area. One-dimensional joint inversion of central-loop and long-offset transient electromagnetic data was performed using the Occam and Marquardt inversion schemes. From the joint inversion models, a subsurface resistivity structure ranging from the surface to a depth of approximately 7 km was determined. Along a profile running perpendicular to the assumed strike direction, lateral resistivity variations could be determined only in the shallow depth range using the transient electromagnetic data. An attempt to resolve further two-dimensional/three-dimensional resistivity structures, representing possible mud migration paths at large depths, using the long-offset transient electromagnetic data failed.
Moreover, the joint inversion models led to ambiguous results regarding the depth and resistivity of the hydrocarbon target formation, owing to poor resolution at great depths (>5 km). Thus, 1D/2D modelling studies were subsequently performed to investigate the influence of the resistive terminating half-space on the measured long-offset transient electromagnetic data.
The 1D joint inversion models were utilised as starting models for both the 1D and 2D modelling studies. The results tend to show that a resistive terminating half-space, implying the presence of the target formation, is the favourable geological setting. Furthermore, the 2D modelling study aimed to fit all measured long-offset transient electromagnetic Ex transients along the profile simultaneously; consequently, 3125 2D forward calculations were necessary to determine the best-fit resistivity model. The results are consistent with the 1D inversion, indicating that the data are best described by a resistive terminating half-space, although the resistivity and depth cannot be determined unambiguously.
A parallel, scalable and memory efficient inversion code for very large‐scale airborne electromagnetics surveys
Authors: Casper Kirkegaard and Esben Auken
Abstract: Over the past decade, the typical size of airborne electromagnetic data sets has been growing rapidly, along with an emerging need for highly accurate modelling. One-dimensional approximate inversions or data-transform techniques have previously been employed for very large-scale studies of quasi-layered settings, but these techniques fail to provide the consistent accuracy needed by many modern applications, such as aquifer and geological mapping, uranium exploration, oil sands, and integrated modelling. In these cases, the use of more time-consuming 1D forward and inverse modelling provides the only solution that is both acceptable and computationally feasible.
When target structures are known to be quasi-layered and spatially coherent, it is beneficial to incorporate this assumption directly into the inversion. This implies inverting multiple soundings at a time in larger constrained problems, which allows geological layers to be resolved that are undetectable using simple independent inversions. Ideally, entire surveys should be inverted as a single, huge constrained problem, but the poor scaling properties of the underlying algorithms typically make this challenging.
Here, we document how we optimized an inversion code for very large-scale constrained airborne electromagnetic problems. Most importantly, we describe how we solve the linear systems using an iterative method that scales linearly with the size of the data set in terms of both solution time and memory consumption. We also describe how we parallelized the core region of the code in order to obtain almost ideal strong parallel scaling on current 4-socket shared-memory computers. We further show how model parameter uncertainty estimates can be obtained efficiently in linear time, and we demonstrate the capabilities of the full implementation by inverting a 3327 line km SkyTEM survey overnight. Performance and scaling properties are discussed based on the timings of the field example, and we describe the criteria that must be fulfilled in order to adapt our methodology to similar problems.
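The linear scaling claimed above is characteristic of matrix-free iterative solvers: only operator applications and a handful of length-n vectors are needed, so memory stays O(n) and each iteration costs one sparse operator apply. A generic conjugate-gradient sketch for symmetric positive definite systems (not the authors' solver) illustrates the idea:

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=None):
    """Matrix-free conjugate gradients for an SPD system A x = b.
    apply_A(v) returns A @ v; A itself is never stored explicitly."""
    x = np.zeros_like(b)
    r = b - apply_A(x)          # initial residual
    p = r.copy()                # initial search direction
    rs = r @ r
    for _ in range(max_iter or 2 * len(b)):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate update of the direction
        rs = rs_new
    return x
```

For a sparse constrained inversion, each apply touches only the nonzeros, which is why both time and memory grow linearly with the number of soundings.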
Increasing the effectiveness of electrical resistivity tomography using γ11n configurations
Authors: S. Szalai, I. Lemperger, M. Metwaly, Á. Kis, V. Wesztergom, K. Szokoli and A. Novák
Abstract: A new array type, the γ11n array, is introduced in this paper. Its electrode sequence is CPCP, where C denotes a current and P a potential electrode, and the distance between the last two electrodes is n times the distance between the first two electrodes, which in turn equals the distance between the second and third electrodes. These arrays are called quasi-null arrays because, in both their geometry and their behaviour, they lie between the traditional arrays and the null arrays. Numerical modelling shows that, in detecting inhomogeneities with a small effect, these configurations may be more effective than the traditional ones, including the optimized Stummer configuration. Certain γ11n configurations, especially γ112, γ113, and γ114, produced better results in both the horizontal and vertical resolution investigations. Based on the numerical studies, the γ11n configurations seem very promising for problems where the anomalies are similar to those investigated numerically; i.e., they can detect and characterize, e.g., tunnels, caves, cables, tubes, abandoned riverbeds, or discontinuities in a clay layer with greater efficacy than the traditional configurations. γ11n measurements require fewer data than traditional configurations; therefore, the time demand of electrical resistivity tomography measurements can be shortened by their use.
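As an illustration only: taking the spacing description above to imply the hypothetical collinear layout C1 = 0, P1 = a, C2 = 2a, P2 = (2 + n)a (an assumption introduced here, not stated by the authors), the standard four-electrode geometric factor of a γ11n array evaluates in closed form to πan(n + 2):

```python
from math import pi

def gamma11n_geometric_factor(a, n):
    """Geometric factor k = 2*pi / (1/AM - 1/AN - 1/BM + 1/BN) for the CPCP
    gamma-11n array, with assumed positions C1=0, P1=a, C2=2a, P2=(2+n)a
    (A, B: current electrodes C1, C2; M, N: potential electrodes P1, P2)."""
    c1, p1, c2, p2 = 0.0, a, 2 * a, (2 + n) * a
    term = (1 / abs(c1 - p1) - 1 / abs(c1 - p2)
            - 1 / abs(c2 - p1) + 1 / abs(c2 - p2))
    return 2 * pi / term
```

The rapid growth of k with n reflects the small signal levels that make these quasi-null arrays sensitive to weak inhomogeneities.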