ECMOR XI - 11th European Conference on the Mathematics of Oil Recovery
- Conference date: 08 Sep 2008 - 11 Sep 2008
- Location: Bergen, Norway
- ISBN: 978-90-73781-55-9
- Published: 08 September 2008
High Order Adaptive Implicit Method for Conservation Laws
Authors: A. Riaz, R. de Loubens and H. Tchelepi
Numerical solution of conservation laws with large variations in the local CFL number requires impractically small time steps for stable, explicit time integration. Implicit time stepping with large time steps is therefore necessary, but it is computationally expensive and consequently limited to first-order discretization schemes. A numerical scheme that is accurate and at the same time allows large time steps is clearly desirable. We develop a numerical formulation based on the Adaptive Implicit Method (AIM) that achieves both objectives by treating implicitly only the few grid cells that restrict the time step size, and by using high-order spatial discretization in both the implicit and explicit cells. Our novel approach solves the fully coupled transport and flow equations with high-order accuracy within the AIM framework to reduce the computational cost and improve the solution accuracy. We show that high-order discretization is computationally efficient when combined with AIM and leads to significant improvements in accuracy compared to the first-order solution.
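As a rough illustration of the cell-flagging step that any AIM-type scheme relies on, the sketch below (hypothetical names and thresholds, not the authors' actual criterion) marks for implicit treatment only those cells whose local CFL number exceeds the explicit stability limit:

```python
import numpy as np

def flag_implicit_cells(velocity, dx, dt, cfl_limit=1.0):
    """Mark cells whose local CFL number exceeds the explicit stability
    limit; only those cells receive implicit time integration."""
    cfl = np.abs(velocity) * dt / dx           # local CFL number per cell
    return cfl > cfl_limit                     # boolean mask: True -> implicit

# Example: a sharp velocity contrast forces implicitness in a few cells only.
velocity = np.array([0.1, 0.2, 5.0, 8.0, 0.3, 0.1])
mask = flag_implicit_cells(velocity, dx=1.0, dt=0.5)
print(mask)  # [False False  True  True False False]
```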
Effects of Sand Dunes on Full Wavefields in Exploration
Seismic pressure point sources have been simulated near the surface of sand dunes in Oman. Real topographic elevation data was assembled from recorded field data in the region, in-filled with interpolated data where necessary, to form the implemented surface topography. Recent results describing seismic sand curves for the region are employed in the near-surface medium definition, and a constant half space of realistic seismic velocities and densities is used for the simulated sub-sand medium. Topographic effects in the form of scattering and reflections from prominent sand dune topography are confirmed, showing differences between scattering from sand dunes elongated along, and oriented normal to, the excited source orientation. 8th-order finite differences (F-Ds) are used to discretize the full elastic wave equations in the velocity-stress formulation, gradually reducing the F-D order when approaching the numerical grid boundaries. At the free surface, I employed boundary conditions for arbitrary surface topography derived for the particle velocities, discretized by 2nd-order F-Ds; at the side and bottom boundaries, exponential damping is applied to make the boundaries non-reflective. A rectangular grid is used to model the Cartesian wave equations mapped from a curved grid by adding extra terms to the equations that depend on the arbitrary surface topography and its spatial derivatives. The curved grid is adapted to an arbitrary surface topography, with gradually less curvature with depth towards the bottom of the grid, which is planar. The code is parallelized by domain decomposition across processors using MPI (Message Passing Interface). This makes possible the time-domain modelling of higher frequencies (even relevant for exploration) and larger areas than could be handled otherwise. Only in the last 3-5 years has such complex modelling become commercially viable in hydrocarbon exploration.
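For readers unfamiliar with the boundary treatment described above, here is a heavily simplified 1D acoustic analogue (2nd-order rather than 8th-order, acoustic rather than elastic, all sizes illustrative) showing the exponential damping zones applied near the grid edges:

```python
import numpy as np

# 1D acoustic wave, 2nd-order leapfrog in time and space, with exponential
# damping zones of the kind the abstract describes at non-reflective edges.
nx, nt, dx, dt, c = 400, 800, 5.0, 1e-3, 2000.0   # grid, step sizes, velocity
assert c * dt / dx <= 1.0                          # CFL stability check

taper = np.ones(nx)
width = 40                                         # damping zone in cells
ramp = np.exp(-(0.015 * np.arange(width, 0, -1)) ** 2)  # strongest at edge
taper[:width] *= ramp
taper[-width:] *= ramp[::-1]

p_prev, p, src = np.zeros(nx), np.zeros(nx), nx // 2
r2 = (c * dt / dx) ** 2
for it in range(nt):
    t = it * dt
    p[src] += np.exp(-((t - 0.05) / 0.01) ** 2)    # Gaussian source wavelet
    lap = np.roll(p, -1) - 2 * p + np.roll(p, 1)   # 2nd-order spatial stencil
    p_next = 2 * p - p_prev + r2 * lap
    p_prev, p = p * taper, p_next * taper          # damp in the boundary zones
```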
Reservoir Characterization Improvement by Accounting for Well Test Extracted Effective Permeabilities
Authors: A. Skorstad, F. Georgsen, P. Abrahamsen and E. Smørgrav
Geostatistical simulation of permeabilities on a geologically detailed resolution will account for permeabilities in the cells containing wells through kriging. These permeabilities are typically based on porosity logs and core plug measurements of both porosity and permeability. No direct measurement of permeability on the geomodelling grid scale exists. However, well tests give information on the effective permeability in an area close to the well region, covering several grid cells, and are therefore data on an aggregate scale. Given an assumption of radial flow into a vertical well, the effective permeability becomes a convolution of the permeabilities in the well test region. Downscaling of the well test data is possible by co-kriging the aggregate-scale permeability field with that of the geomodelling grid scale. Thereby, the geomodelling grid permeabilities will honour data on both scales. The effective permeability is downscaled through inverse block kriging, which implies a deconvolution procedure. Keeping the computational cost low when introducing a new conditioning parameter is ensured by transforming into the Fourier domain, since the Fast Fourier Transform algorithm is an efficient method for solving the inverse block co-kriging. Initial testing of the implemented algorithm has been made on real data from a StatoilHydro-operated field on the Norwegian continental shelf in a proof-of-concept test. Three wells with well test data were chosen, and their derived effective permeabilities were included in the permeability simulations. Cases both with and without well test conditioning were run and compared in well test simulation software. All three near-well regions showed a significant improvement. These tests indicate that the conditioning method can be a useful contribution in bringing dynamic data into the reservoir characterization.
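A minimal 1D sketch of the Fourier-domain downscaling idea (the averaging kernel, sizes and regularization are illustrative stand-ins; the paper's inverse block co-kriging additionally carries the kriging covariances through the same transform):

```python
import numpy as np

# Treat the well-test effective permeability as a convolution of grid-scale
# permeabilities; in the Fourier domain that convolution becomes a
# multiplication, so downscaling is a regularized division.
n = 64
k = np.random.lognormal(mean=0.0, sigma=0.5, size=n)         # grid-scale field
w = np.zeros(n); w[:8] = 1.0 / 8.0                           # averaging kernel
k_eff = np.real(np.fft.ifft(np.fft.fft(k) * np.fft.fft(w)))  # aggregate scale

W = np.fft.fft(w)
eps = 1e-3                                                   # regularization
k_rec = np.real(np.fft.ifft(np.fft.fft(k_eff) * np.conj(W)
                            / (np.abs(W) ** 2 + eps)))
# k_rec approximates k away from the kernel's spectral nulls.
```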
Anisotropic Permeability in Fractured Reservoirs from Joint Inversion of Seismic AVAZ and Production Data
Authors: M. Jakobsen and A. Shahraini
Anisotropic effective medium theory (for homogenization of fractured/composite porous media under the assumption of complete scale separation) can be used to map the effects of sub-grid (or sub-seismic) fractures onto the grid scale. The effective poroelastic stiffness and hydraulic permeability tensors that determine the seismic amplitude versus angle-azimuth (AVAZ) and production data (provided that one has information about the overburden and the porous matrix, as well as suitable tools for seismic forward modelling and fluid flow simulation) can therefore be viewed as functions of a relatively small number of parameters related to the fractures (e.g., fracture density and azimuthal orientation). In this paper, we develop a rock-physics-based Bayesian method for joint inversion of seismic AVAZ and production data with respect to these fracture parameters. Within this stochastic framework, the expectation value of the effective permeability tensor is given by an integral of an effective medium approximation (derived by using rigorous integral equation or Green's tensor function methods) weighted by the posterior probability density function over the allowed values of the fracture parameters. The present work complements those of Will et al. (2005), Jakobsen et al. (2007) and Jakobsen and Shahraini (2008) in the sense that we are using different inversion methods, seismic attributes and/or (rock physics) models of the fractured reservoir. Our interdisciplinary method for characterization of sub-grid fractures can in principle be applied to rather complicated models of fractured reservoirs (e.g., involving multiple sets of fractures characterized by different fracture densities and orientations). However, the initial (synthetic) modelling and inversion results presented here are associated with a relatively simple model of a fractured reservoir in which a single set of vertical fractures is located within a sub-set of the (piecewise constant) reservoir model (that can in principle be identified using complementary methods). We have analysed synthetic AVAZ and production data contaminated with different amounts of normally distributed noise, and managed to recover the unknown fracture parameters, and hence the effective permeability tensors, with good accuracy. The results show clearly that the incorporation of seismic anisotropy attributes into the history matching process helps to reduce the uncertainties of the estimated permeability tensors.
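The posterior-weighted integral described above can be sketched in one random dimension as follows; both the effective-medium relation and the likelihood are hypothetical stand-ins for the paper's rigorous operators:

```python
import numpy as np

# Posterior expectation of an effective property over a grid of candidate
# fracture densities: weight a (toy) effective-medium relation by the
# posterior pdf and integrate.
e = np.linspace(0.0, 0.2, 201)                 # candidate fracture densities
de = e[1] - e[0]
k_eff = 1.0 + 50.0 * e                         # toy effective-medium relation
prior = np.ones_like(e)                        # flat prior over densities

d_obs, sigma = 6.0, 0.5                        # observed response, noise std
likelihood = np.exp(-0.5 * ((k_eff - d_obs) / sigma) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum() * de              # normalize the pdf

k_expected = np.sum(k_eff * posterior) * de    # posterior-weighted integral
print(k_expected)                              # close to d_obs in this toy case
```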
A Novel Kriging Approach for Incorporating Nonlinear Constraints
Authors: F.P. Campozana and L.W. Lake
This work presents new Kriging-based algorithms that allow incorporating nonlinear constraints, such as a given average and variance, into the kriged field. The first of these constraints allows one to obtain, for example, permeability fields whose average matches well-test-derived permeability; the second enables one to generate fields that have a desired variability. Therefore, a well-known drawback of Kriging, namely excessive smoothness, is overcome. As these constraints are applied, the Kriging estimates of all blocks become mutually dependent and must be solved simultaneously. This fact led to the development of a new concept, that of Simultaneous Kriging. Furthermore, because permeability is not an additive variable, the problem becomes nonlinear; an optimization procedure based on Newton's method is used to solve it. Both average and variance constraints are applied through the Lagrange multiplier technique. Power averaging with an exponent ω that varies from -1 to 1 is used to estimate field-generated permeability within the well-test drainage area. The optimization algorithm searches for an optimum ω value as well as a set of gridblock values such that the desired field average and variance are met. Starting from an initial guess and tolerance, the algorithm will reach the closest minimum within the solution space. Unlike other Kriging methods, the solution is not unique; in fact, infinitely many equiprobable solutions can be found. From this point of view, the proposed algorithms become more like conditional simulation. A number of permeability fields were generated and used as input to a numerical simulator to calculate their actual well-test permeability. The simulation results proved that the fields generated by the algorithm actually matched well-test permeability and field data variance within a reasonable tolerance. The same procedure can be used to incorporate other nonlinear constraints, such as facies proportions.
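For reference, the power average in question interpolates between the harmonic, geometric and arithmetic means as ω sweeps from -1 through 0 to 1; a minimal sketch:

```python
import numpy as np

def power_average(k, w):
    """Power mean of permeabilities k with exponent w in [-1, 1]:
    w = -1 harmonic, w -> 0 geometric, w = 1 arithmetic."""
    k = np.asarray(k, dtype=float)
    if abs(w) < 1e-12:                        # limiting case: geometric mean
        return np.exp(np.mean(np.log(k)))
    return np.mean(k ** w) ** (1.0 / w)

k = np.array([10.0, 100.0, 1000.0])
print(power_average(k, -1.0))   # harmonic mean  ~  27.0
print(power_average(k,  0.0))   # geometric mean = 100.0
print(power_average(k,  1.0))   # arithmetic mean = 370.0
```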
History Matching of Truncated Gaussian Models by Parallel Interacting Markov Chains on a Reduced Dimensional Space
In the oil industry and subsurface hydrology, geostatistical models are often used to represent the spatial distribution of different lithofacies in the reservoir. Two main model families exist: multipoint and truncated Gaussian models. We focus here on the latter. In history matching of a lithofacies reservoir model, we attempt to find multiple realizations of the lithofacies configuration that are conditional to dynamic data and representative of the model uncertainty space. This problem can be formalized in the Bayesian framework. Given a truncated Gaussian model as a prior and the dynamic data with their associated measurement error, we want to sample from the conditional distribution of the facies given the data. A relevant way to generate conditioned realizations is to use Markov chain Monte Carlo (MCMC). However, the dimension of the model and the computational cost of each iteration are two important pitfalls for the use of MCMC. In practice, we have to stop the chain far before it has scanned the whole support of the posterior. Furthermore, as the relationship between the data and the random field is non-linear, the posterior can be multimodal. Hence, the chain may stay stuck in one of the modes. In this work, we first show how to reduce drastically the dimension of the problem by using a truncated Karhunen-Loève expansion of the Gaussian random field underlying the lithofacies realization. Then we show how we can improve the mixing properties of a classical single MCMC chain, without increasing the global computational cost, by using parallel interacting Markov chains at different temperatures. Applying the dimension reduction and this innovative sampling method drastically lowers the number of iterations needed to sample efficiently from the posterior. We show the encouraging results obtained when applying the methodology to a synthetic history matching case.
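A minimal sketch of the truncated Karhunen-Loève dimension reduction (covariance model, sizes and truncation rule are illustrative):

```python
import numpy as np

# Truncated Karhunen-Loeve expansion of a Gaussian random field: keep only
# the m leading covariance eigenmodes, so the field is parameterized by m
# coordinates instead of n grid values, then truncate into facies.
n = 100
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # exponential covariance

vals, vecs = np.linalg.eigh(C)                        # ascending eigenvalues
idx = np.argsort(vals)[::-1]
vals, vecs = vals[idx], vecs[:, idx]

m = 10                                                # keep m << n modes
xi = np.random.randn(m)                               # low-dimensional coords
field = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)        # KL-truncated field
facies = (field > 0.0).astype(int)                    # simple truncation rule
```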
Using the EnKF with Kernel Methods for Estimation of Non-Gaussian Variables
Authors: J.G. Vabø, G. Evensen, J. Hove and J.A. Skjervheim
The Ensemble Kalman Filter (EnKF) is derived under the assumption of Gaussian probability distributions. Thus, the successful application of the EnKF when conditioning to dynamic data depends on how well the stochastic reservoir state can be approximated by a Gaussian distribution. For facies models, the distribution becomes non-Gaussian and multi-modal, and the EnKF fails to preserve the qualitative geological structure. To apply the EnKF with models where the Gaussian approximation fails, we propose to map the ensemble to a higher-dimensional feature space before the analysis step. By careful selection of the mapping, moments of arbitrary order in the canonical reservoir parameterization can be embedded in the 1st and 2nd order moments of the feature space parameterization. As a result, the parameterization in the feature space might be better approximated by a Gaussian distribution. However, the mapping from the canonical parameterization to the feature space parameterization does not have an inverse. Thus, finding the analyzed ensemble of canonical states from the analyzed ensemble represented in the feature space is an inverse problem. By using the kernel trick, the mapping to the feature space is never explicitly computed, and we solve the inverse problem efficiently by minimizing a cost function based on the feature space distance. As a result, the computational efficiency of the EnKF is retained, and the methodology is applicable to large-scale reservoir models. The proposed methodology is evaluated as an alternative to other approaches for estimation of facies with the EnKF, such as the truncated pluri-Gaussian approach. Results from a field case are shown.
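The kernel trick mentioned above can be sketched as follows: a polynomial kernel evaluates feature-space inner products containing higher-order moments without forming the feature space, and the pre-image cost function is expressible through kernels alone (all settings illustrative):

```python
import numpy as np

def kernel(a, b, order=3):
    """Polynomial kernel: inner product in a feature space of monomials up
    to the given order, computed without forming that space."""
    return (1.0 + a @ b) ** order

ensemble = np.random.randn(50, 4)             # 50 members, 4 state variables
K = np.array([[kernel(x, y) for y in ensemble] for x in ensemble])

def feature_dist2(z, weights):
    """Squared feature-space distance from phi(z) to the combination
    sum_j weights_j phi(x_j), written via kernel evaluations only."""
    kzz = kernel(z, z)
    kzx = np.array([kernel(z, x) for x in ensemble])
    return kzz - 2.0 * weights @ kzx + weights @ K @ weights

# Minimizing feature_dist2 over z is the pre-image (inverse) problem.
weights = np.ones(50) / 50.0
print(feature_dist2(ensemble[0], weights))
```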
History Matching Using a Multiscale Ensemble Kalman Filter
Authors: W. Lawniczak, R.G. Hanea, A.W. Heemink, D. McLaughlin and J.D. Jansen
Since the first version of the Kalman filter was introduced in 1960, it has received a lot of attention in the mathematical and engineering worlds. There are many successful successors, for example the Ensemble Kalman Filter (Evensen 1996), which has also been applied to reservoir engineering problems. The method proposed in [Zhou et al. 2007], the Ensemble Multiscale Filter (EnMSF), draws together the ensemble filtering ideas and an efficient covariance representation, and is expected to perform well in history matching for reservoir engineering. The EnMSF is a different way to represent the covariance of an ensemble. The computations are done on a tree structure and are based on an ensemble of possible realizations of the states and/or parameters of interest. The ensemble consists of replicates that are the values of states per pixel. The pixels in the grid are partitioned between the nodes of the finest scale in the tree. Construction of the tree is guided by an eigenvalue decomposition; the state combinations with the greatest corresponding eigenvalues are kept on the higher scales. The states/parameters updated using the EnMSF are believed to keep geological structure due to a localization property. This comes from the filter's characterization, where the pixels from the grid (e.g. a permeability field) are distributed (in groups) over the finest-scale tree nodes. We present a comparison of covariance matrices obtained with different setups used in the EnMSF. This sensitivity study is necessary since there are many parameters in the algorithm which can be adjusted to the needs of an application; they are connected to the tree construction part. The study gives an idea of how to use the EnMSF efficiently. The localization property is discussed based on an example where the filter is run with a simple simulator (2D, two-phase) and a binary ensemble is used (the pixels in the permeability replicates take only two values). Several possible patterns for ordering the pixels are applied.
Comparing different ensemble Kalman filter approaches
Authors: B.V. Valles and G. Naevdal
Over the last decade, the ensemble Kalman filter (EnKF) method has become an attractive tool for history matching reservoir simulation models and production forecasting. Recently, EnKF has been successfully applied to real field studies (e.g. Bianco et al., 2007). Nonetheless, Lorentzen et al. (2005) observed a consistency problem. They showed, using the Kolmogorov-Smirnov test, that for the PUNQ-S3 model, the posterior cdfs of the total cumulative oil production for 10 ensembles were not coming from the same distribution, as was expected since the 10 initial ensembles were generated using a common distribution. The forecasts from these initial ensembles gave consistent cdfs. We investigate whether this issue is related to inbreeding of the ensemble members when applying EnKF. Houtekamer and Mitchell (1998) proposed to use a paired EnKF with covariance localization to improve atmospheric data assimilation. This method was developed to hinder ensemble collapse due to inbreeding of ensemble members and spurious correlation problems far from the observation points. We show that using a paired EnKF, where the Kalman gain computed from each ensemble is used to update the other, does not in fact prevent inbreeding. A new approach, the coupled EnKF, which does prevent inbreeding, is presented. The present work first reviews the issue of consistency encountered with EnKF and investigates the use of paired and coupled EnKFs without covariance localization as a possible remedy to this problem. The method is tested on a simple nonlinear model for which the posterior distribution can be solved analytically. The obtained results are compared with the traditional EnKF and the analytic solution. Next, a hierarchical ensemble filter approach inspired by Anderson (2007), acting as a covariance localization technique, is proposed and tested on the PUNQ-S3 model against the traditional EnKF. This approach seems better suited than the paired or coupled EnKF to help resolve the observed inconsistencies.
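A scalar toy version of the paired-EnKF cross-update (model, sizes and observation are illustrative; the point is only that neither ensemble is updated with a gain estimated from itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 0.25                                  # ensemble size, obs variance

def gain(ens, h=1.0):
    """Kalman gain from ensemble statistics for a scalar observation y = h*x."""
    p = np.var(ens, ddof=1)
    return p * h / (h * p * h + r)

ens_a = rng.normal(1.0, 1.0, n)                   # two independent priors
ens_b = rng.normal(1.0, 1.0, n)
y = 2.0                                           # observation

# Cross-update: the gain from B is applied to A, and vice versa, with
# perturbed observations as in the stochastic EnKF.
k_a, k_b = gain(ens_b), gain(ens_a)
ens_a = ens_a + k_a * (y + rng.normal(0, np.sqrt(r), n) - ens_a)
ens_b = ens_b + k_b * (y + rng.normal(0, np.sqrt(r), n) - ens_b)
```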
Channel Facies Estimation Based on Gaussian Perturbations in the EnKF
Authors: D. Moreno, S.I. Aanonsen, G. Evensen and J.A. Skjervheim
The ensemble Kalman filter (EnKF) method has proven to be a promising tool for reservoir model updating and history matching. However, because the EnKF requires that the prior distributions for the parameters to be estimated are Gaussian, or approximately Gaussian, the application to facies models, and channel systems in particular, has been a challenge. In this paper we suggest two different approaches for parameterization of the facies models in terms of Gaussian perturbations of an existing "best guess" model. Method 1 is inspired by level set methods, where surfaces (here facies boundaries) are implicitly modelled through a level set function, normally defined as the signed distance from the nearest surface. Model realizations are generated by adding a Gaussian random field to the initial level set function. Method 2 is based on a similar idea, but the realizations are generated by adding a Gaussian random field to a smoothed indicator function. The smoothing is performed with a Shapiro filter, which is fast and simple to use for any number of dimensions. The performance of the methods is illustrated using a 3D model inspired by a real North Sea fluvial reservoir. It is shown that realistic facies model realizations may be generated from realizations of a Gaussian random field. Based on one realization from the prior for each of the methods, two sets of synthetic production data were generated. Then the prior model ensemble generated with each of the methods was conditioned to each of the two data sets using EnKF. Reasonably good matches were obtained in all cases, including those where the true model is not a realization from the same statistical model as was used to generate the prior realizations.
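Method 1 can be sketched in miniature as follows (geometry, smoothing and noise scale are illustrative): perturb a signed-distance level set with a smooth Gaussian field and re-threshold:

```python
import numpy as np

n = 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
phi = 0.3 - np.abs(y)                 # signed distance to a horizontal channel

noise = np.random.randn(n, n)
kernel = np.exp(-np.linspace(-3, 3, 15) ** 2)
kernel /= kernel.sum()
for axis in (0, 1):                   # cheap separable Gaussian smoothing
    noise = np.apply_along_axis(np.convolve, axis, noise, kernel, mode="same")

facies = (phi + 0.5 * noise > 0.0).astype(int)   # perturbed channel geometry
```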
A Distance-based Representation of Reservoir Uncertainty: the Metric EnKF
Uncertainty quantification is probably one of the most important and challenging aspects of reservoir modeling, engineering and management. Traditional approaches have failed to provide manageable solution models. In most cases, uncertainty is represented through either a multi-variate distribution model (posterior) in a Bayesian modeling context or through a set of multiple realizations in a more frequentist view. The traditional Monte-Carlo simulation approach, whereby multiple alternative high-resolution models are generated and used as input to reservoir simulation or optimization codes, has not proven to be practical, mainly due to CPU limitations, nor is it effective or flexible enough in terms of addressing a varying degree of uncertainty assessment issues. In this paper, we propose a reformulation of reservoir uncertainty in terms of distances between any two reservoir model realizations. Instead of considering a reservoir model realization as some point in a high-dimensional space on which a complex posterior distribution is defined, we consider a distance matrix which contains the distances between any two model realizations. The latter matrix is of size NR×NR, NR being the number of realizations, much smaller than, for example, an N×N covariance matrix, N being the number of grid-blocks. Note that, unlike a covariance matrix or probability function, a distance can be tailored to the specific problem at hand, for example water breakthrough or OOIP, even though the realizations remain the same. Next, the classical Karhunen-Loève expansion of a Gaussian random field, based on the eigenvalue decomposition of an N×N covariance table, can now be formulated as a function of the NR×NR distance matrix. To achieve this, we construct an NR×NR kernel matrix using the classical radial basis function, which is a function of the distance, and perform an eigenvalue decomposition of this kernel matrix. In kernel space, new realizations can be generated or adjusted to new data. As an application, we discuss a new technique, termed the metric EnKF, to simultaneously update multiple non-Gaussian realizations with production data. Using this simple distance-based alternative to probability-based uncertainty, I show how many types of reservoir uncertainty (spatial, non-spatial, scenario-based, etc.) can be modeled in this fashion. I show various applications of this framework to the problems of history matching, model updating and optimization.
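A minimal sketch of the distance-to-kernel construction (the distances here are random stand-ins for, e.g., differences in simulated water breakthrough between realizations):

```python
import numpy as np

nr = 30                                          # number of realizations
pts = np.random.randn(nr, 5)                     # proxy realization features
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

sigma = np.median(D)                             # RBF bandwidth heuristic
K = np.exp(-(D ** 2) / (2.0 * sigma ** 2))       # NR x NR kernel matrix

vals, vecs = np.linalg.eigh(K)                   # spectrum of kernel space
order = np.argsort(vals)[::-1]
coords = vecs[:, order[:2]] * np.sqrt(vals[order[:2]])  # 2D kernel-space map
# coords places each realization in a low-dimensional space where new
# realizations can be generated or adjusted to data.
```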
Designing Optimal Data Acquisition Schemes Using Kullback-Leibler Divergence within the Bayesian Framework
Authors: A.A. Alexandre and B.N. Benoit
Oil and gas reservoirs are commonly complex heterogeneous natural structures that are described by means of several direct or indirect field measurements involving different physical processes operating at different spatial and temporal scales. Seismic techniques provide a description of the large-scale geological structures. In some cases, they can help to characterize the spatial fluid distribution, and this knowledge can in turn be used to improve the oil recovery strategy. In practice, these measurements are always expensive and, due to their indirect, local and incomplete nature, an exhaustive representation of the entire reservoir cannot be attained. Several uncertainties always remain, and these must be conveniently propagated through the modeling workflow to deduce the resulting uncertainties in the production forecasts. Those uncertainties are essential when setting up a reservoir development scenario. A typical issue is to choose between several oil recovery scenarios, or between positions and trajectories of a new well. Due to the cost of the associated field operations, it is essential to model the risks due to the remaining uncertainties. It is within this framework that devising strategies for setting up optimal data acquisition schemes can have many applications in oil or gas reservoir engineering, or in the recently considered CO2 geological storage operations involving analogous technologies. We present a method allowing us to quantify the information that is potentially provided by any set of measurements. Applying a Bayesian framework, we quantify the information content of any set of data using the so-called Kullback-Leibler divergence between posterior and prior distributions. In the case of a Gaussian model where the data depend linearly on the parameters, analytic formulae are given that allow us to define the optimal set of acquisition times. The redundancy of information can also be quantified, highlighting the role of the correlation structure of the prior model.
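In the Gaussian case the design criterion is available in closed form; a sketch of the KL divergence between posterior and prior Gaussians (prior and posterior values are illustrative):

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL( N(mu1,S1) || N(mu0,S0) ) for d-dimensional Gaussians; with
    (mu0,S0) the prior and (mu1,S1) the posterior, this is the
    information gained from the data."""
    d = len(mu0)
    S0_inv = np.linalg.inv(S0)
    dmu = mu0 - mu1
    return 0.5 * (np.trace(S0_inv @ S1) + dmu @ S0_inv @ dmu
                  - d + np.log(np.linalg.det(S0) / np.linalg.det(S1)))

prior_mu, prior_S = np.zeros(2), np.eye(2)
post_mu, post_S = np.array([0.3, -0.1]), np.diag([0.2, 0.5])
print(kl_gaussian(prior_mu, prior_S, post_mu, post_S))  # information gain
```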
Non-intrusive Stochastic Approaches for Efficient Quantification of Uncertainty Associated with Reservoir Simulations
This paper presents non-intrusive, efficient stochastic approaches for predicting uncertainties associated with petroleum reservoir simulations. The Monte Carlo simulation method, which is the most common and straightforward approach for uncertainty quantification in the industry, requires performing a large number of reservoir simulations and is thus computationally expensive, especially for large-scale problems. We propose an efficient and accurate alternative through collocation-based stochastic approaches. The reservoirs are considered to exhibit randomly heterogeneous flow properties. The underlying random permeability field can be represented by the Karhunen-Loève expansion (or principal component analysis), which reduces the dimensionality of the random space. Two different collocation-based methods are introduced to propagate uncertainty of the reservoir response. The first is the probabilistic collocation method, which deals with the random reservoir responses by employing orthogonal polynomial functions as the bases of the random space and utilizing the collocation technique in the random space. The second is the sparse grid collocation method, which is based on multi-dimensional interpolation and high-dimensional quadrature techniques. They are non-intrusive in that the resulting equations have exactly the same form as the original equations and can thus be solved with existing reservoir simulators. These methods are efficient since only a small number of simulations are required, and the statistical moments and probability density functions of the quantities of interest in the oil reservoirs can be accurately estimated. The proposed approaches are demonstrated with a 3D reservoir model originating from the 9th SPE comparative solution project. The accuracy, efficiency, and compatibility are compared against Monte Carlo simulations. This study reveals that, compared to traditional Monte Carlo simulations, the collocation-based stochastic approaches can accurately quantify uncertainty in petroleum reservoirs and greatly reduce the computational cost.
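A one-dimensional sketch of the probabilistic collocation method (the "simulator" is a hypothetical scalar stand-in; Gauss-Hermite nodes of a standard normal input serve as collocation points):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def response(xi):
    """Stand-in for a reservoir simulator driven by one KL mode xi."""
    return np.exp(0.5 * xi) + 0.1 * xi ** 2

nodes, weights = hermegauss(5)                  # probabilists' Gauss-Hermite
weights = weights / np.sqrt(2.0 * np.pi)        # normalize to N(0,1) measure

# Project the response onto Hermite polynomials He_0..He_3 (E[He_k^2] = k!).
coeffs = []
for k in range(4):
    basis = hermeval(nodes, [0] * k + [1])      # He_k at the nodes
    coeffs.append(np.sum(weights * response(nodes) * basis) / factorial(k))

mean = coeffs[0]
variance = sum(c * c * factorial(k) for k, c in enumerate(coeffs) if k > 0)
# Statistics of the response obtained from only 5 "simulations".
```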
Model Complexity in Reservoir Simulation
Authors: G.E. Pickup, M. Valjak and M.A. Christie
The goal of history matching is to construct one or more reservoir models that match observed field behaviour and to use those models to forecast field behaviour under different operating conditions. In constructing the model to be matched, there are a number of choices to be made, such as the type of geological model to construct and the number of unknown parameters to use for history matching. We often choose a single geological model and vary a set of parameters to give the best fit to the data (e.g. production history). There are two areas of concern with this approach. The first is that there are usually a number of possible geological models which are all plausible to some degree, and we may be able to make better forecasts by using this information. The second is that increasing the number of unknown parameters may give a better fit to the data, but may not predict so well if the model is over-fitted. The goal of this paper is to examine the application of two techniques to handle these problems. The first technique uses the concept of minimum description length (MDL), which was developed from information theory and quantifies the trade-off between model complexity and goodness of fit. The second technique is called Bayesian Model Averaging (BMA), and was developed to improve the reliability of weather forecasts produced with codes developed at different centres, by constructing a suitable average of the models which takes into account not only the uncertainty forecast by each model but also the between-model variance. Both techniques are illustrated with real reservoir examples. The MDL approach is shown on a simple reservoir model based on a Gulf of Mexico field. The BMA approach is shown on a field with a moderate number of injectors and producers.
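A miniature of the BMA combination step (misfits, weights and forecasts are hypothetical): each model is weighted by its fit to data, and the mixture variance carries both within-model and between-model terms:

```python
import numpy as np

misfit = np.array([12.0, 9.0, 15.0])             # -2 log likelihood per model
prior = np.array([1/3, 1/3, 1/3])
w = prior * np.exp(-0.5 * (misfit - misfit.min()))
w /= w.sum()                                     # posterior model weights

fc_mean = np.array([100.0, 120.0, 90.0])         # per-model forecast means
fc_var = np.array([25.0, 30.0, 20.0])            # per-model forecast variances

bma_mean = w @ fc_mean
bma_var = w @ fc_var + w @ (fc_mean - bma_mean) ** 2   # within + between
```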
Adaptive Multiscale-streamline Simulation and Inversion for High-resolution Geomodels
Authors: K.-A. Lie, V.R. Stenerud and A.F. Rasmussen
First, we present an efficient method for integrating dynamic data in high-resolution subsurface models. The method consists of two key technologies: (i) a very fast multiscale-streamline flow simulator, and (ii) a fast and robust 'generalized travel-time inversion' method. The travel-time inversion is based on sensitivities computed analytically along streamlines using only one forward simulation. The sensitivities are also used to selectively reduce the updating of basis functions in the multiscale mixed finite-element pressure solver. Second, we discuss extensions of the methodology to grids with large differences in cell sizes and unstructured connections. To this end, we suggest using rescaled sensitivities (average cell volume multiplied by local sensitivity density) in the inversion and propose a generalized smoothing operator for the regularization to impose smooth modifications on the reservoir parameters. Two numerical examples demonstrate that this reduces undesired grid effects. Finally, we show a slightly more complex example with two faults and infill drilling.
Structural Identifiability of Grid Block and Geological Parameters in Reservoir Simulation Models
Authors: J. van Doren, J.D. Jansen, P.M.J. Van den Hof and O.H. Bosgra
It is well known that history matching of reservoir models with production measurements is an ill-posed problem; e.g., different choices for the history matching parameters may lead to equally good history matches. We analyze this problem using the system-theoretical concept of structural identifiability. This allows us to analytically calculate a so-called information matrix. From the information matrix we can determine an identifiable parameterization with a significantly reduced number of parameters. We apply structural identifiability analysis to single-phase reservoir simulation models and obtain identifiable parameterizations. Next, we use the parameterization in minimizing an objective function that is defined as the mismatch between pressure measurements and model outputs. We also apply the structural identifiability analysis to an object-based parameterization describing channels and barriers in the reservoir. We use the iterative procedure to determine, for reservoir models with 2025 grid block permeability values, a structurally identifiable parameterization of only 13 parameters. Next, we demonstrate that the parameterization leads to perfect history matches without the use of a prior model in the objective function. We also demonstrate the use of the identifiable object-based parameterization, leading to geologically more realistic history matches.
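A sketch of the identifiability analysis (the Jacobian is a random low-rank stand-in for the sensitivities of pressure outputs to grid-block permeabilities):

```python
import numpy as np

# Form an information matrix from output sensitivities and keep only the
# parameter combinations with significant eigenvalues.
n_par, n_obs, rank = 50, 20, 5
J = np.random.randn(n_obs, rank) @ np.random.randn(rank, n_par)

info = J.T @ J                                   # information matrix
vals, vecs = np.linalg.eigh(info)
order = np.argsort(vals)[::-1]

tol = 1e-8 * vals[order[0]]
n_ident = int(np.sum(vals > tol))                # identifiable directions
basis = vecs[:, order[:n_ident]]                 # reduced parameterization
print(n_ident)                                   # 5 for this rank-5 Jacobian
```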
Model-reduced Variational Data Assimilation for Reservoir Model Updating
Authors: M.P. Kaleta, R.G. Hanea, J.D. Jansen and A.W. Heemink
Variational data assimilation techniques (automatic history matching) can be used to adapt a prior permeability field in a reservoir model using production data. Classical variational data assimilation requires, however, the implementation of an adjoint model, which is an enormous programming effort. Moreover, it requires the results of one complete simulation of the forward and adjoint models to be stored, which is a serious problem in real-life applications. Therefore, we propose a new approach to variational data assimilation that is based on model reduction, where the adjoint of the tangent linear approximation of the original model is replaced by the adjoint of a linear reduced model. The Proper Orthogonal Decomposition approach is used to determine the reduced model. Using the reduced adjoint, the gradient of the objective function is approximated and the minimization problem is solved in the reduced space. If necessary, the procedure is iterated with the updated estimate of the parameters. We evaluated the model-reduced method for a simple 2D reservoir model. We compared the method with variational data assimilation where the gradient is approximated by finite differences, and we found that the reduced-order method is about 50% more efficient. We foresee that the computational benefit will increase significantly for larger model sizes, and our current research is focused on quantifying this benefit.
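The POD step can be sketched as follows (synthetic snapshot data; the paper builds the reduced model from snapshots of an actual forward run):

```python
import numpy as np

# Proper Orthogonal Decomposition: collect state snapshots, take an SVD,
# and keep the leading modes as a reduced basis.
n_state, n_snap = 500, 40
snapshots = np.random.randn(n_state, n_snap)      # columns = states in time

mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

energy = np.cumsum(s ** 2) / np.sum(s ** 2)
m = int(np.searchsorted(energy, 0.99)) + 1        # modes for 99% energy
Phi = U[:, :m]                                    # reduced basis

# A full state x is represented as mean + Phi @ z with z of dimension m.
x = np.random.randn(n_state, 1)
z = Phi.T @ (x - mean)                            # project to reduced space
x_approx = mean + Phi @ z
```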
Simultaneous AVA Stochastic Inversion of Seismic Data
Authors: D.E. Kashcheev, D.G. Kirnos and A.M. Gritsenko
We consider an efficient AVA stochastic inversion algorithm which performs inversion of equal-angle offset volumes to volumes of Vp, Vs and density. The exact solution of the Zoeppritz equations is used to determine reflection coefficients. Using a combination of simulated annealing and a Metropolis-Hastings sampler, we generate a set of equiprobable high-resolution volumes of elastic properties which are consistent with seismic data and well measurements and reproduce detailed geological features. The stochastic realization-generation process is constrained to reproduce seismic data, well data, low-frequency elastic trends, 3D variograms for elastic parameters, estimates of probability distributions of (Vp, Vs, density), estimates of vertical variation of elastic properties, etc. The multi-trace approach and extensive use of a priori geological knowledge greatly reduce the influence of seismic noise on the inversion results for each of the solutions obtained. A multiple-grid approach is used to properly treat large-scale variogram features. For reproduction of vertical variation of elastic properties we use multi-point statistics. The training dataset is formed from log-derived properties, which eliminates the need for building an artificial training model. The application of multi-point statistics has great potential for ensuring a more accurate account of vertical variations of acoustic properties, for obtaining geologically valid inversion solutions and for decreasing the degree of uncertainty. The seismic inversion algorithm has been efficiently parallelized: it uses a shared-memory computer and simultaneously inverts different 1D models on different processors while preserving the spatial correlation of elastic properties. AVA stochastic inversion results are used for cascaded stochastic simulation of lithology and fluid units, for simulation of reservoir properties, and for uncertainty analysis. Based on a Bayesian approach, we can take into account existing uncertainties, in particular uncertainties caused by inaccuracy of the mathematical model that relates elastic parameters and reservoir properties, and uncertainties caused by the non-uniqueness and inaccuracy of seismic inversion results.
Use of Solution Error Models in History Matching
Authors: M.A. Christie, G.E. Pickup, A.E. O'Sullivan and V. Demyanov
Uncertainty in reservoir models can be quantified by generating large numbers of history-matched models, and using those models to forecast ranges of hydrocarbons produced. The need to run large numbers of simulations inevitably drives the engineer to compromises, in either the physics represented in the reservoir model or the resolution of the simulations run. These compromises will often introduce biases in the simulations, and since the unknown reservoir parameters are estimated using the biased simulations, this can lead to biases in the parameter estimates. Solution error models can be used to correct for the effects of these biases. Solution error models work by building a statistical model for the differences between fine and coarse simulations (or between full-physics and reduced-physics simulations) using data from simulations at a limited number of locations in parameter space. The statistical model then produces estimates of the error elsewhere in parameter space; these estimates are used to correct the effects of the coarse model biases. In this work, we apply a solution error model to material balance calculations. Material balance is frequently used in reservoir engineering to estimate the initial oil in place. However, such models are very simple, treating the reservoir as a tank and allowing instantaneous equilibration of fluids within the tank. The results of material balance simulations will therefore not be consistent with multi-cell reservoir simulations. We use a model based on the Teal South reservoir in the Gulf of Mexico to demonstrate how an error model can correct a material balance model to the accuracy of a reservoir simulation.
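A minimal sketch of a solution error model (the "fine" and "coarse" simulators are hypothetical scalar stand-ins): regress the fine-coarse difference on the parameters at a few training points, then correct coarse runs elsewhere:

```python
import numpy as np

def fine(theta):   return np.sin(theta) + 0.05 * theta ** 2   # expensive model
def coarse(theta): return np.sin(theta)                       # biased, cheap

theta_train = np.linspace(0.0, 3.0, 6)             # few expensive fine runs
err_train = fine(theta_train) - coarse(theta_train)

coef = np.polyfit(theta_train, err_train, deg=2)   # simple polynomial model

theta_new = np.array([0.7, 1.9, 2.6])
corrected = coarse(theta_new) + np.polyval(coef, theta_new)
print(np.abs(corrected - fine(theta_new)))         # small residual bias
```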
Joint Quantification of Uncertainty on Spatial and Non-spatial Reservoir Parameters
Authors: C. Scheidt and J.K. Caers
The experimental design methodology is widely used to quantify uncertainty in the oil and gas industry. This technique is well suited to uncertainty quantification for non-spatial parameters, such as bubble point pressure, oil viscosity, and aquifer strength. However, it is not well suited to geostatistical (spatial) uncertainty, due to the discrete nature of many input parameters as well as the potential nonlinear response with respect to those parameters. One way to handle spatial uncertainty is the joint modeling method (JMM). This method, originally proposed in a petroleum context by Zabalza (2000), incorporates both non-spatial and spatial parameters within an experimental design framework. The method consists of the construction of two models: a mean model, which accounts for the non-spatial parameters, and a dispersion model, which accounts for the spatial uncertainty. Classical Monte-Carlo simulation is then applied to obtain the probability density and quantiles of the response of interest (for example, the cumulative oil production). Another method to quantify spatial uncertainty is the distance kernel method (DKM), proposed recently by Scheidt and Caers (2007), which defines a realization-based model of uncertainty. Based on a distance measure between realizations, the methodology uses kernel methods to select a small subset of representative realizations which have the same characteristics as the entire set. Flow simulations are then run on the subset, allowing for an efficient and accurate quantification of uncertainty. In this work, we extend the DKM to address uncertainty in both spatial and non-spatial parameters, and propose it as an alternative to the JMM. Both methods are applied to a synthetic test case which has spatial uncertainty on the channel representation of the facies, and non-spatial uncertainties on the channel permeability, porosity, and connate water saturation. The results show that the DKM provides a more accurate quantification of uncertainty with fewer reservoir simulations. Finally, we propose a third method which combines aspects of the DKM and the JMM; this again shows improved efficiency compared to the JMM alone.
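The selection step of the DKM can be sketched as follows (plain k-means on kernel-space coordinates, illustrative sizes; see also the kernel construction sketched earlier in this listing):

```python
import numpy as np

# Cluster realizations in kernel space and keep one representative per
# cluster for flow simulation.
rng = np.random.default_rng(1)
coords = rng.normal(size=(100, 2))               # kernel-space coordinates

k = 5
centers = coords[rng.choice(len(coords), k, replace=False)]
for _ in range(25):                              # Lloyd iterations
    d = np.linalg.norm(coords[:, None] - centers[None], axis=-1)
    labels = d.argmin(axis=1)
    centers = np.array([coords[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])

# Representative = realization closest to each cluster center.
reps = [int(np.argmin(np.linalg.norm(coords - c, axis=1))) for c in centers]
print(sorted(reps))                              # indices to flow-simulate
```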