ECMOR XV - 15th European Conference on the Mathematics of Oil Recovery
- Conference date: 29 Aug 2016 - 01 Sep 2016
- Location: Amsterdam, Netherlands
- ISBN: 978-94-6282-193-4
- Published: 29 August 2016
Reservoir Simulation with High Volume of Tracers whilst Retaining Performance
Authors H. Mustapha and T. Jonsthovel
Tracking fluid movement in a reservoir using single-well and inter-well tracer tests is an important mechanism for optimizing recovery. In full-field simulations with a large number of tracers, solving the tracer systems can be computationally expensive because each tracer requires the solution of a linear system whose dimension equals the grid size. Hence, as the number of hydrocarbon components, the grid size, and the number of tracers increase, simulator performance degrades. Exploiting the nature of the material balance equations governing tracer flow, we utilize a convenient feature of the underlying partial differential equations (PDEs): the matrix-form equations of all tracers carried by a given fluid phase or component are identical, and only the right-hand sides of the corresponding linear systems change. We provide an overview of the optimal linear solver for tracer linear systems in different scenarios (e.g. a high volume of tracers). Moreover, we show that by overlapping computation on the central processing unit (CPU) and the graphics processing unit (GPU), we can significantly reduce the impact of solving the tracer equations on overall simulator performance, which enables running simulations with a high volume of tracers.
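The shared-matrix structure described in the abstract can be sketched as follows; the tridiagonal operator and sizes below are invented stand-ins, not the simulator's actual systems. The common matrix is factorized once and the factorization is reused for every tracer's right-hand side.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200                       # stand-in for the number of grid cells
rng = np.random.default_rng(0)

# Toy SPD tridiagonal matrix standing in for the common tracer transport
# operator shared by all tracers carried by one phase/component.
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

lu = spla.splu(A)             # factorize once

n_tracers = 50
B = rng.standard_normal((n, n_tracers))   # one right-hand side per tracer

# Cheap triangular solves per tracer; no refactorization needed.
X = lu.solve(B)
```

Only the back-substitutions scale with the number of tracers, which is why a large tracer count need not degrade solver performance.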
Operator-based Linearization for Non-isothermal Multiphase Compositional Flow in Porous Media
Non-isothermal multiphase compositional simulation is based on the solution of governing equations describing mass and energy transfer in the subsurface. The solution strategy requires a linearization of the strongly nonlinear governing equations describing the process. Usually, a Newton-based method is used for the linearization, which demands assembly of a Jacobian matrix and residuals for a fully coupled system of equations. Recently, a new linearization approach was proposed for compositional problems and tested for simulation of binary compositional and low-enthalpy geothermal flow. The key idea of the approach is the transformation of the discretised mass conservation equations to an operator form with separate space-dependent and state-dependent components. This transformation provides an opportunity for an approximate representation of the exact physics (physical properties) of the problem. Specifically, each term of the conservation equations is represented as a product of two different operators. The first operator depends on the current physical state of the system and contains properties such as density, viscosity, and relative permeability. The second operator captures both spatially varying properties, such as permeability, and the remaining state variables, such as pressure, in the discrete approximation of the gradient. At the pre-processing stage, all state-dependent operators are uniformly parametrized within the physical space of the problem (pressure-composition intervals). During the simulation, multi-linear interpolation is applied to approximate the first type of operator, while the second type is processed in the conventional way. In this work, we have extended this approach to general-purpose simulation. We introduce an operator-based parametrization of the mass and energy conservation equations based on pressure, composition, temperature, and porosity.
In addition, the approach has been extended and tested on truly multi-component systems of practical interest. The accuracy and robustness of the new method have been tested against the results of simulations based on the conventional approach.
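A minimal sketch of the parametrization step described above, under invented property functions and intervals: a state-dependent operator is tabulated on a coarse pressure-composition grid at pre-processing, then multi-linearly interpolated during simulation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def alpha_exact(p, z):
    # toy state-dependent operator: slightly compressible density times composition
    rho = 700.0 * np.exp(1e-9 * (p - 1e7))
    return rho * z

# pre-processing: tabulate the operator on a coarse pressure-composition grid
p_nodes = np.linspace(5e6, 3e7, 32)
z_nodes = np.linspace(0.0, 1.0, 32)
P, Z = np.meshgrid(p_nodes, z_nodes, indexing="ij")
alpha_table = RegularGridInterpolator((p_nodes, z_nodes), alpha_exact(P, Z))

# simulation stage: evaluate the operator at an arbitrary state by interpolation
approx = alpha_table(np.array([[1.2e7, 0.37]]))[0]
exact = alpha_exact(1.2e7, 0.37)
```

The table lookup replaces repeated evaluation of the exact property correlations, which is where the approach saves time in a full simulator.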
Physics-based Pre-conditioners for Large-scale Subsurface Flow Simulation
Authors G.B. Diaz Cortes, C. Vuik and J.D. Jansen
We consider deflation-based preconditioning of the pressure equation for large-scale reservoir models with strong spatial variations in the permeabilities. The use of deflation techniques involves the search for good deflation vectors, which are usually problem-dependent. We propose the use of proper orthogonal decomposition (POD) to generate physics-based, problem-specific deflation vectors. The use of POD to construct preconditioners has been attempted before, but in those applications a snapshot-based reduced-order basis was used directly as a preconditioner, whereas we propose using the basis vectors as deflation vectors. We investigate the effectiveness of the method with numerical experiments using the conjugate gradient iterative method in combination with incomplete Cholesky preconditioning (ICCG) and POD-based deflation (DICCG). We consider incompressible and compressible single-phase flow in a layered model with large variations in the permeability coefficients, and in the SPE10 benchmark model. We obtain an important reduction in the number of iterations with our proposed DICCG method in comparison with the ICCG method. In some test problems, we achieve convergence within one DICCG iteration. However, our method requires a number of preparatory reservoir simulations proportional to the number of wells, and the solution of an eigenvalue problem to compute the deflation vectors. This overhead is justified in the case of a large number of subsequent simulations with different control settings, as typically required in numerical optimization or sensitivity studies.
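The deflation idea can be illustrated with a bare-bones deflated conjugate gradient (without the incomplete-Cholesky preconditioner of ICCG/DICCG); here the deflation vectors Z are random stand-ins, whereas in the paper they would be POD basis vectors computed from snapshot solutions.

```python
import numpy as np

def deflated_cg(A, b, Z, tol=1e-10, maxit=500):
    E = Z.T @ A @ Z                       # coarse (Galerkin) matrix
    Einv = np.linalg.inv(E)
    def P(v):                             # deflation projector P = I - A Z E^-1 Z^T
        return v - A @ (Z @ (Einv @ (Z.T @ v)))
    x = np.zeros_like(b)
    r = P(b)
    p = r.copy()
    rr = r @ r
    for it in range(maxit):
        Ap = P(A @ p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    # add back the deflated (coarse) component: x_full = Q b + P^T x
    Qb = Z @ (Einv @ (Z.T @ b))
    x_full = Qb + x - Z @ (Einv @ (Z.T @ (A @ x)))
    return x_full, it + 1

rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)               # small SPD test matrix
b = rng.standard_normal(n)
Z = np.linalg.qr(rng.standard_normal((n, 3)))[0]   # stand-in deflation vectors
x, iters = deflated_cg(A, b, Z)
```

The projected iteration removes the components spanned by Z from the Krylov search, which is where well-chosen (e.g. POD-based) deflation vectors cut the iteration count.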
Comparison of Linear Reconstructions for Second Order Finite Volume Schemes on Polyhedral Grids
Authors R. Klöfkorn, A. Kvashchuk and M. Nolte
Improved and enhanced oil recovery methods require sophisticated simulation tools to predict the paths of the injected fluids together with the chemical reactions within them. One approach is the application of higher-order numerical schemes to avoid the excessive numerical diffusion that is typical for transport processes. In this work we provide a first step towards higher-order schemes applicable on the general polyhedral and corner-point grids typically used in reservoir simulation. We compare two possible approaches to linear reconstruction and slope-limiting techniques on a variety of different meshes in two and three space dimensions and discuss their advantages and disadvantages.
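As a one-dimensional illustration of the building block being compared, here is a linear reconstruction with a minmod slope limiter; the polyhedral-grid generalizations discussed in the paper are not reproduced.

```python
import numpy as np

def minmod(a, b):
    # zero at sign changes, otherwise the argument smaller in magnitude
    s = np.where((a > 0) & (b > 0), np.minimum(a, b), 0.0)
    return np.where((a < 0) & (b < 0), np.maximum(a, b), s)

def reconstruct(u, dx):
    # one-sided slopes (zero-gradient ghost cells at the boundaries)
    du_left = np.diff(u, prepend=u[0]) / dx
    du_right = np.diff(u, append=u[-1]) / dx
    slope = minmod(du_left, du_right)        # limited cell-wise slope
    # left/right face values of the piecewise-linear reconstruction
    return u - 0.5 * dx * slope, u + 0.5 * dx * slope
```

The limiter recovers full second-order slopes in smooth regions while switching the slope off at local extrema, which is what suppresses spurious oscillations at fronts.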
Improving the Computational Efficiency of a Dynamic Pore Network Model - A Hybrid Approach for a Better Performance
Authors M. Regaieg and A. Moncorge
Pore-scale simulation is increasingly used to study various pore-scale phenomena that cannot be reproduced by conventional Darcy-based simulators. Dynamic pore network models are a way to study flow at the pore scale without resorting to very precise but time-consuming direct numerical simulators. However, these models are still very slow when applied to 3D core-scale simulations. To reproduce the competition between the viscous and capillary forces governing immiscible flow in porous media, these models require many expensive pressure-gradient computations. At low rates, however, the displacement becomes dominated by capillary forces, which means that, during drainage, the pores with the lowest capillary entry pressure are filled first. In this case, simple flow rules can be defined, avoiding the pressure calculations. These simplified models are called quasi-static and can only be used when viscous forces do not influence the flow. In the literature, most researchers have used either a dynamic pore network model or a quasi-static model. Since quasi-static algorithms are faster and reproduce results similar to dynamic models at low rates, we propose to combine the two approaches in a hybrid algorithm that takes advantage of the speed of quasi-static algorithms when the flow is governed by capillary forces and that can simulate viscous effects when they are important. We also propose a criterion that localizes the pressure solution to the important areas, enhancing the computational efficiency of the algorithm even in viscous-dominated regimes. In this paper, we first show that the classical definition of the capillary number is insufficient as a switching criterion to characterize the domain where the flow is controlled by capillary forces. We therefore use the macroscopic capillary number as the criterion to switch between the dynamic and quasi-static flow regimes.
Finally, we present several test cases where we show that the hybrid algorithm can considerably improve the computational performance of the pore network simulator without losing the accuracy of the solution. For capillary dominated regimes, the observed speed-up on 3D networks can reach 500 and 16000 for our industrial networks of 43000 and 1 million nodes, respectively. For viscous dominated regimes the speed-up on 3D networks can reach 5 and 30 for 43000 and 1 million nodes, respectively. This approach is compatible with a multiscale method for the pressure computations and will provide an additional speed-up.
Damping of Newton Iterations Using Automatic Error-control Step-length Selection
Authors G. Lutidze and R.M. Younis
Considerable recent interest in improving the robustness of Newton-like methods for implicit simulation has led to the development of a number of successful safeguarding strategies and alternate discretization methods with improved differentiability. To date, however, these reported efforts have focused on specific sets of physics and canonical multiphase flow problems. Motivated by the success of these problem-specific efforts, this work proposes a general safeguarding strategy for Newton-like methods applied to implicit time-dependent simulation. The proposed method combines ideas from ad hoc and classical safeguarding strategies and is applicable to physical problems of any level of complexity. The safeguarding approach is based on treating Newton's method as an explicit Euler integration of the continuous Newton-flow ordinary differential equation (ODE). For well-posed simulation problems, the Newton flow prescribes a continuous and locally differentiable path from the initial guess to the solution. We tailor a local-error-control ODE step-length selection algorithm to determine a diagonal damping matrix, so that the Newton iteration traces a single Newton-flow path more accurately. Two novel aspects are developed to achieve this objective. First, we develop two sets of local discretization error estimates as a function of damping-factor size: the first is an a posteriori estimate, independent of the form of the particular simulation problem; the second is derived specifically for transport equations by analyzing the backward Euler discretization error of Newton's method. Second, we propose an ad hoc strategy that loosens the enforced error tolerance as the iteration progresses in order to improve performance.
Computational results are presented for a series of simulation problems of increasing complexity. The results for two-phase flow simulations demonstrate that the proposed method is competitive with, but not superior to, recently proposed strategies. These comparisons show that, absent tailored ad hoc strategies such as those recently proposed for black-oil simulation, the proposed strategy is more robust and efficient than the current state of the art.
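A hedged sketch of the core idea, not the authors' exact controller: Newton's method is treated as explicit Euler on the Newton-flow ODE dx/dt = -J(x)^-1 F(x), and the damping factor h (the Euler step length) is chosen by a step-doubling local-error test, which is an illustrative substitute for the error estimates derived in the paper.

```python
import numpy as np

def damped_newton(F, J, x0, tol=1e-10, rtol=0.1, maxit=200):
    """Newton viewed as explicit Euler on the Newton-flow ODE, with the
    damping factor h chosen by a step-doubling local-error estimate."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    h = 1.0
    for _ in range(maxit):
        d = -np.linalg.solve(np.atleast_2d(J(x)), np.atleast_1d(F(x)))
        if np.linalg.norm(d) < tol:
            break
        x_full = x + h * d                      # one Euler step of length h
        x_half = x + 0.5 * h * d                # two half steps
        d_half = -np.linalg.solve(np.atleast_2d(J(x_half)), np.atleast_1d(F(x_half)))
        x_two = x_half + 0.5 * h * d_half
        err = np.linalg.norm(x_full - x_two)    # a posteriori local error estimate
        if err > rtol * max(np.linalg.norm(x), 1.0):
            h *= 0.5                            # reject the step: damp harder
            continue
        x = x_two                               # accept the more accurate value
        h = min(1.0, 1.5 * h)                   # relax damping toward full Newton
    return x
```

On F(x) = arctan(x) from x0 = 10, undamped Newton diverges, while the error-controlled damping shrinks the step until the iterate re-enters the basin of attraction.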
Phase Switching Algorithm for Slug Flows in Wellbores
Authors A.B. Starostin, B.I. Krasnopolsky and A.A. Lukyanov
Well test data can be used to update the reservoir model, refining reserves evaluation and the field development plan. We consider gas-liquid flows in a deviated borehole. In order to evaluate the properties of the reservoir, one has to be able to filter out the non-linear influence of the well trajectory; in particular, the wellbore flow may exhibit pulses and phase changes even under constant reservoir inflow. The evaluation of well tests therefore demands fast and robust numerical modelling techniques. This study presents a multi-fluid model and its implementation using the Jacobian-free Newton-Krylov method. The fully implicit formulation described in this work enables efficient solution of the governing fluid flow equations. A reduction of the multi-fluid model in zones of phase disappearance is based on the phase-state distribution over the cells. The numerical method implements a novel phase switching algorithm in which single-phase and multiphase cells are distinguished and a reduced set of governing equations is solved. A transient two-fluid model is used to verify and validate the phase switching algorithm under conditions of terrain-induced slug flow. The algorithm results are in good quantitative agreement with another multi-fluid simulator, experimental data, and well tests.
Percolation-based Effective Permeability Estimation in Real Heterogeneous Porous Media
It has long been understood that flow behavior in heterogeneous porous media is largely controlled by the continuity of permeability contrasts. With this in mind, we investigate new methods for fast estimation of the effective permeability that concentrate on the properties of the percolating cluster. From percolation concepts we use a threshold permeability value (K_th), the cutoff at which the grid blocks with the highest permeability values connect two opposite sides of the system in the direction of flow. These methods can be applied to heterogeneous media with a range of permeability distributions and various underlying structures. We use power-law relations and weighted power averages that can be inferred from the statistics and properties of the percolation sub-networks at the threshold point. This approach does not require fitting to experimental conductivity measurements to estimate the model parameters, as is done in empirical methods. We examined the accuracy of these methods on several layers of the 10th SPE model and found very good agreement with the values determined by commercial flow simulators. The results of this work open up new methods for estimating effective permeability using percolation concepts.
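The threshold permeability K_th can be illustrated on a toy 2D grid: sweep cutoff values from the highest permeability downward and report the first cutoff at which the high-permeability blocks connect the two opposite faces. The field below is invented for illustration.

```python
import numpy as np
from scipy.ndimage import label

def connects(mask):
    # True when one face-connected cluster touches both left and right edges
    labels, _ = label(mask)
    left = set(labels[:, 0][labels[:, 0] > 0])
    right = set(labels[:, -1][labels[:, -1] > 0])
    return bool(left & right)

def threshold_permeability(K):
    # sweep cutoffs from the highest permeability downward; the first cutoff
    # whose high-K blocks percolate across the model is K_th
    for k in np.sort(K, axis=None)[::-1]:
        if connects(K >= k):
            return k
    return K.min()

# toy field: uniform background with one continuous high-permeability streak
K = np.ones((20, 20))
K[10, :] = 100.0
k_th = threshold_permeability(K)   # the streak percolates at cutoff 100
```

A binary search over the sorted permeability values would make this sub-linear in the number of cells; the linear sweep keeps the sketch short.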
How Fracture Capillary Pressure Affects Ensemble Relative Permeability of Naturally Fractured Reservoirs
Authors M. Sedaghat, S. Azizmohammadi and S.K. Matthai
This work presents a significant advance over earlier methods because it employs a surface-roughness-based fracture dilation model to compute aperture distributions. From these, fracture capillary pressure is computed before saturation functions are extracted. This upscaling is performed using an unsteady-state approach to evaluate the impact of fracture capillary pressure on ensemble relative permeability and ultimate recovery. The simulation approach is applied to outcrop-based meter- and kilometre-scale DFM models. For these fracture geometries, aperture attributes are computed for plausible in situ stress regimes, and corresponding capillary pressure values are assigned to individual fractures. The capillary pressure of the rock matrix is parameterized with representative data for siliciclastics and carbonates. The two-phase flow simulations are performed with the Finite Element-Centered Finite Volume Method (FECFVM). Flow-based upscaling establishes ensemble relative permeability between the capillary and viscous limits. The results show that, for a water-wet rock matrix, there is more fracture-matrix transfer and oil recovery is higher. The counter-current imbibition flux diminishes gradually because the small fractures that dominate the fracture-matrix interface area have drastically smaller fracture-matrix pressure differentials. These differences become more pronounced near the capillary limit. As the wettability tends towards oil-wet, two-phase flow occurs within a narrower saturation range.
An Efficient Multiscale Mixed Finite Element Method for Modelling Flow in Discrete Fractured Reservoirs
Fractures can significantly impact the flow patterns of carbonate reservoirs and should be accurately accounted for in a geological model. Accurate modeling of flow in fractured media is usually done with a discrete fracture model (DFM), as it provides a detailed representation of the flow characteristics. However, the DFM poses a particular challenge to traditional numerical methods with regard to computational efficiency and accuracy. In this study, a multiscale mixed finite element method (MsMFEM) is proposed for detailed modeling of two-phase oil-water flow in fractured reservoirs. The MsMFEM uses a standard Darcy model to approximate pressure and fluxes on a coarse grid. Fine-scale effects of the fractured media are captured through basis functions constructed numerically by solving local DFM problems on the fine-scale grid. In our approach, we consider arbitrary fracture orientations and use a triangular fine grid. Through the multiscale basis functions, we maintain the efficiency of an upscaling technique while generating a more accurate and conservative velocity field on the full fine-scale grid. Comparisons of the multiscale solutions with fine-scale discrete fracture model solutions indicate that the fine-scale flow in fracture networks can be represented within a coarse-scale Darcy flow model. The results demonstrate that the MsMFE method is a promising route toward fine-scale flow simulation of high-resolution geological models of fractured reservoirs.
Bayesian Experimental Design for the Influence Identification of Uncertain Geological Parameters on the CO2-GAGD Process
Authors W.J.M. Al-Mudhafar, D.N. Rao and J. Tang
Determining the most influential reservoir parameters on the GAGD process is an essential step in understanding the efficiency of this EOR process. In this paper, we introduce Bayesian Model Averaging (BMA) as a stochastic linear modelling approach to select the parameters that most affect Gas Assisted Gravity Drainage (GAGD) performance in a multilayer heterogeneous sandstone oil reservoir. A lithofacies and petrophysical property model was constructed using multiple-point geostatistics for the 3D property distribution. CO2 is injected through vertical injectors in the top two layers. The next three layers were left as a transition zone to allow a vertical depth interval for gas gravity drainage. Horizontal producers were placed in the sixth, seventh, and eighth layers, where the oil saturation is highest. The last four layers were left with no injection or production activity. The studied reservoir factors are horizontal permeability, anisotropy ratio (Kv/Kh), and porosity. A Latin hypercube design generated the simulation runs, and factor elimination was conducted with the BMA stochastic approach, which uses posterior probability to choose the best model among a set of candidates. The accurate determination of influential factors through BMA has led to a better understanding of the effect of heterogeneity and anisotropy on the GAGD process.
Study the Effect of Connectivity between Two Wells on Secondary Recovery Efficiency Using Percolation Approach
Authors S. Sadeghnejad, M. Masihi, P.R. King and P.A. Gago
Estimating the hydrocarbon available to be produced during secondary oil recovery is an ongoing activity in field development. The primary plan is normally scheduled during the early stage of a field's life through master development plan studies. During this period, due to the lack of reliable data, estimation of field efficiency is usually based on rules of thumb rather than detailed field characterization. Hence, there is great motivation to produce simpler, physically-based methodologies. The minimal input requirements of the percolation approach make it a useful tool for formation performance prediction, enabling a better assessment of the efficiency of secondary recovery methods at early production time. The main contribution of this study is to establish a continuum percolation model, based on Monte Carlo simulation, that can estimate the connectivity of good sands between two wells. In classical percolation, connectivity is considered between two lines or two faces of the system in 2D and 3D respectively, whereas hydrocarbon production is achieved through wells with the shape of lines (e.g., vertical, horizontal, or deviated wells). The results show that not implementing the correct well geometry can alter the estimates obtained from the percolation approach.
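A hypothetical Monte Carlo sketch of the well-to-well connectivity idea: square sandbodies placed at random in a 2D domain, with two fully penetrating vertical wells modelled as line segments; the wells are connected when a chain of overlapping sandbodies touches both. The body shape, sizes, and well positions are invented for illustration.

```python
import numpy as np

def find(parent, i):
    # path-halving union-find lookup
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def wells_connected(bodies, size, xw1=0.2, xw2=0.8):
    """Continuum-percolation connectivity between two fully penetrating
    vertical wells (lines at xw1, xw2) through square sandbodies."""
    n = len(bodies)
    parent = list(range(n + 2))        # sandbodies plus the two wells
    def union(a, b):
        parent[find(parent, a)] = find(parent, b)
    for i, (xi, yi) in enumerate(bodies):
        if xi <= xw1 <= xi + size:
            union(i, n)                # body i intersects well 1
        if xi <= xw2 <= xi + size:
            union(i, n + 1)            # body i intersects well 2
        for j in range(i):
            xj, yj = bodies[j]
            if abs(xi - xj) <= size and abs(yi - yj) <= size:
                union(i, j)            # overlapping bodies are in one cluster
    return find(parent, n) == find(parent, n + 1)

rng = np.random.default_rng(3)
trials, n_bodies, size = 200, 60, 0.15
hits = sum(wells_connected(rng.random((n_bodies, 2)), size) for _ in range(trials))
p_conn = hits / trials                 # Monte Carlo connection probability
```

Repeating the estimate over a range of sandbody densities would trace out the connection-probability curve that the percolation model predicts.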
Hydrocarbon Formation Evaluation Using an Efficient Genetic Algorithm-based Factor Analysis Method
By N.P. Szabo
A global optimization approach for the factor analysis of wireline logging data sets is presented. Oilfield well logs are processed together to estimate factor logs using an adaptive genetic algorithm. Nonlinear relations between the first factor and essential petrophysical parameters of shaly-sand reservoirs are revealed, and are used to predict shale volume and permeability directly from the factor scores. Independent values of the relevant petrophysical properties are obtained by inverse modeling and well-known deterministic methods. Case studies including the evaluation of hydrocarbon formations demonstrate the feasibility of the improved factor analysis algorithm. A comparative numerical analysis between the genetic-algorithm-based factor analysis procedure and independent well log analysis methods shows consistent results. Factor analysis thus provides an independent in-situ estimate of shale content and permeability, which may improve the reservoir model and refine the results of the reserve calculation.
Advanced Geologically-consistent History Matching and Uncertainty Evaluation
Authors E.S. Zakirov, I.M. Indrupskiy, I.M. Shiryaev, O.V. Lyubimova and D.P. Anikeev
At the ECMOR-14 conference we proposed a method for automated geologically-consistent history matching of a 3D reservoir model and presented the results of its implementation as a numerical algorithm. In that approach, the control parameters to be evaluated through dynamic data assimilation were the geostatistical parameters of a 3D reservoir model. It was assumed that porosity in the inter-well space was calculated by kriging from measured values at wells, and that the permeability distribution was calculated through its correlation with porosity. Through the inverse problem solution, the most uncertain parameters of the geostatistical model were estimated, namely, the parameters of an anisotropic semivariogram and the porosity-to-permeability dependence for each facies. The inverse problem solution algorithm was based on efficient methods of optimal control theory (adjoint methods). In the present study, the approach is further advanced by the transition from the deterministic kriging procedure to stochastic geostatistical methods such as sequential Gaussian simulation. The control parameters evaluated through the inverse problem solution (the parameters of the anisotropic semivariogram and the porosity-to-permeability dependence) are supplemented by the values at pilot points chosen with a special algorithm. For some common depositional environments, an original algorithm has also been developed to adjust the facies distribution within the inverse problem. Thus, firstly, it becomes possible to effectively identify heterogeneities of parameter distributions in the inter-well space; secondly, efficient assessment of the uncertainty in the parameters obtained by the inverse problem solution can be carried out.
As an alternative to the computationally intensive approach based on group analysis of an ensemble of history-matched models, a simplified procedure has been implemented based on linearization of the objective function at the optimal point. To obtain the covariance matrix, the sensitivity matrix is calculated by means of a special, computationally efficient procedure using the state-variable variation problem, one of the subproblems of the adjoint-based algorithm for the inverse problem solution. The paper provides a number of examples demonstrating the performance of the proposed approaches and algorithms. Main contributions:
- Geologically-consistent history matching based on adjoint methods has been extended to stochastic geostatistical formulations for evaluation of the anisotropic semivariogram and porosity-to-permeability relation parameters for each facies, and of the values at pilot points chosen with a special algorithm.
- An original algorithm has been developed to constrain the facies distribution to dynamic data.
- An uncertainty assessment procedure for the inverse problem solution has been implemented based on a subproblem of the adjoint algorithm for sensitivity and covariance matrix computation.
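The linearized uncertainty assessment can be sketched in its simplest uncorrelated-noise form: near the optimum, the parameter covariance is approximated from the sensitivity (Jacobian) matrix via the Gauss-Newton Hessian. The sensitivity matrix below is a random stand-in, not one computed from the adjoint subproblem.

```python
import numpy as np

def linearized_covariance(J, s2):
    # Gauss-Newton approximation of the Hessian for uncorrelated data noise of
    # variance s2; its inverse approximates the posterior parameter covariance
    H = J.T @ J / s2
    return np.linalg.inv(H)

# hypothetical sensitivity matrix: 20 data points, 3 control parameters
rng = np.random.default_rng(4)
J = rng.standard_normal((20, 3))
C = linearized_covariance(J, s2=0.01)
std = np.sqrt(np.diag(C))   # approximate standard deviations of the parameters
```

The appeal of this route is that only one sensitivity-matrix computation is needed, instead of an ensemble of full history-matching runs.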
An Ensemble 4D Seismic History Matching Framework with Wavelet Multiresolution Analysis - A 3D Benchmark Case Study
Authors X. Luo, T. Bhakta, M. Jakobsen and G. Nævdal
In a previous work (Luo et al., 2016), we proposed an ensemble 4D seismic history matching framework with some relatively new ingredients, in terms of the type of seismic data chosen, the way big seismic data and the associated noise estimation are handled, and the use of a recently developed iterative ensemble history matching algorithm. In seismic history matching, it is customary to use inverted seismic parameters as the observations. In doing so, extra uncertainties may arise during the inversion processes. We avoid such intermediate inversion processes by adopting amplitude versus angle (AVA) data. To handle the big-data problem in seismic history matching, we adopt a wavelet-based sparse representation procedure. Concretely, we apply a discrete wavelet transform to the seismic data and estimate the noise in the resulting wavelet coefficients. We then use an iterative ensemble smoother to history-match the leading wavelet coefficients above a certain threshold value. In the previous work (Luo et al., 2016), we applied the proposed framework to a 2D synthetic case. In the current study, we extend our investigation to the 3D Brugge benchmark case. Numerical results indicate that the proposed framework is very efficient in handling big seismic data while achieving reasonably good history matching performance.
Assisted History Matching for Multi-facies Channelized Reservoir Using ES-MDA with Common Basis DCT
Authors Y. Zhao, F. Forouzanfar and A.C. Reynolds
History matching a reservoir with multiple facies has always posed a great challenge to researchers. Most traditional history-matching techniques were designed to work with Gaussian-distributed continuous variables rather than non-Gaussian-distributed categorical variables such as facies. Building on previous research, we develop a workflow that combines the Ensemble Smoother with Multiple Data Assimilation (ES-MDA) algorithm with a common-basis Discrete Cosine Transform (DCT) to conduct assisted history matching for multi-facies channelized reservoirs, especially 3D problems, which have rarely been investigated before. In this work, an ensemble of geological realizations is first generated using multi-point statistics (for the 2D case) or object-based modeling (for the 3D case). The DCT is then applied to each realization (facies field) to obtain its basis functions and the corresponding coefficients. To extract the general geological features shared by different realizations, we retain a series of common basis functions that are identical and fixed for all realizations. The coefficients of each realization are recomputed with respect to the common basis set so as to reconstruct the original facies field by minimizing the least-squares residual. Through history matching of the observed data with ES-MDA, the DCT coefficients are updated and the facies field is renewed with the updated coefficients and the common basis set. The discrete facies field is obtained by applying an optimization algorithm that truncates the continuous values at the end of each ES-MDA iteration. We apply this procedure to both 2D and 3D synthetic problems with a complex three-facies (shale, levee, and sand) channelized reservoir. The results show that the proposed algorithm provides good data matches and significantly reduces the uncertainty in the prior ensemble.
Moreover, the posterior estimate of the model parameters properly reflects the main geological features of the true model. Compared to previous studies, this work not only applies the ensemble-based method with common-basis DCT to history matching and uncertainty quantification for 2D and 3D multi-facies reservoirs, but also provides a robust and relatively easy approach to handling 3D cases, which have received very limited attention in the literature.
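The DCT parameterization step can be illustrated on a toy 2D facies field: keep only low-frequency coefficients (a fixed retained set, echoing the common-basis idea), reconstruct, and truncate back to discrete facies codes. The channel geometry and retained-coefficient count are invented.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_roundtrip(field, keep):
    coeffs = dctn(field, norm="ortho")
    mask = np.zeros_like(coeffs, dtype=bool)
    mask[:keep, :keep] = True                  # retained low-frequency basis
    smooth = idctn(np.where(mask, coeffs, 0.0), norm="ortho")
    return (smooth > 0.5).astype(int)          # truncate back to facies {0, 1}

# toy channel: a horizontal sand body (facies 1) in a shale background (0)
facies = np.zeros((32, 32))
facies[12:18, :] = 1.0
recon = dct_roundtrip(facies, keep=12)
match = (recon == facies).mean()               # fraction of cells recovered
```

During assimilation only the retained coefficients are updated, so the dimensionality of the history-matching problem drops from the grid size to the size of the retained set.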
Ensemble-based Seismic History Matching with Distance Parameterization for Complex Grids
Authors Y. Zhang and O. Leeuwenburgh
Recently, a distance parameterization of flood fronts derived from seismic anomalies was proposed for use with the ensemble Kalman filter (EnKF), an efficient method for conditioning multiple reservoir models to observed data. Although the distance parameterization in terms of front positions is effective in reducing both the nonlinearity and the effective number of seismic data, which improves the performance of the EnKF, the distance computation method adopted there is only applicable to reservoir models with regular Cartesian grids; large errors are introduced otherwise. In this paper, we improve the applicability of the distance parameterization by extending the fast marching method for the solution of the Eikonal equation to complex simulation grids. This is realized by taking advantage of a diagonal stencil in the fast-marching implementation, which allows more accurate calculation of distances between observed and simulated fronts, and by an isoparametric mapping which provides a transformation from Cartesian to curvilinear coordinates. The improvements of the proposed methods are demonstrated through a number of numerical experiments on corner-point grids, including a 3D synthetic case on the Norne full-field model.
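As a simple stand-in for the fast marching method, the sketch below computes grid distances with a 4-neighbour Dijkstra search (equivalent to solving the Eikonal equation under a Manhattan metric); the diagonal stencil and isoparametric mapping of the paper are not reproduced.

```python
import heapq
import numpy as np

def grid_distance(speed, src):
    """First-arrival 'distance' from src on a Cartesian grid with unit
    spacing; edge cost into a cell is 1/speed of that cell."""
    ny, nx = speed.shape
    dist = np.full((ny, nx), np.inf)
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[i, j]:
            continue                      # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                nd = d + 1.0 / speed[ni, nj]
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return dist
```

The mismatch between simulated and observed front positions can then be scored from such a distance field; fast marching with a richer stencil reduces the metric error that motivates the paper's extension.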
Application of Ensemble Smoother and Multiple-data Assimilation for Estimating Relative Permeability from Coreflood Experiments
Authors A. Jahanbakhsh, A. ElSheikh and M. Sohrabi
Relative permeability curves (kr) are flow functions governing multiphase flow in porous media. These functions are an essential component of any large-scale simulator of multiphase porous media flow (oil, water, and gas), with applications in environmental and petroleum engineering. Unsteady-state methods are commonly performed on core samples taken from subsurface reservoirs to obtain relative permeability curves experimentally. The measurements are then used to calibrate analytical functions (to be embedded in the flow simulator) through automatic history matching. In this study, we evaluate iterative ensemble-based history matching techniques based on the Ensemble Smoother (ES) formulation. Mainly, the Ensemble Smoother with Multiple Data Assimilation (ES-MDA) is used for calibrating parametric relative permeability models using data from unsteady-state coreflood experiments. An ensemble smoother updates the model parameters globally by assimilating all the time-dependent data at once, from the start to the end of the experiment; this contrasts with the online updating scheme adopted in ensemble Kalman filtering methods. ES-MDA was recently developed to improve on the ES and to provide reliable uncertainty quantification of the unknown parameters at low computational cost. In the current work, ES-MDA is compared with global optimization methods for calibrating the relative permeability curves. Results are presented for estimating two- and three-phase relative permeability curves from three-phase coreflood experiments. The experiments were performed on a 65 mD mixed-wet Clashach sandstone core; cumulative production and the pressure drop across the core were measured during the experiments. ES-MDA was able to find the globally optimal parameters at much faster convergence rates than the genetic algorithm (GA), a widely used global search method.
This was evident for the history matching of three-phase unsteady state experiments where optimal solutions were obtained efficiently while preserving uncertainties in the estimated parameters.
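For readers unfamiliar with the method, the ES-MDA update can be sketched in a few lines. This is a minimal generic sketch, not the study's implementation: the forward model, ensemble size, and inflation schedule in the usage below are illustrative placeholders, and the scheme simply perturbs the observations with inflated noise and applies a Kalman-type update several times.

```python
import numpy as np

def es_mda_update(M, D, d_obs, Ce, alpha, rng):
    """One ES-MDA iteration with inflation coefficient alpha.
    M: (Nm, Ne) parameter ensemble; D: (Nd, Ne) predicted data;
    d_obs: (Nd,) observations; Ce: (Nd, Nd) observation-error covariance."""
    Ne = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)   # parameter anomalies
    dD = D - D.mean(axis=1, keepdims=True)   # predicted-data anomalies
    Cmd = dM @ dD.T / (Ne - 1)               # cross-covariance
    Cdd = dD @ dD.T / (Ne - 1)               # data auto-covariance
    # perturb the observations with the inflated noise alpha * Ce
    noise = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * Ce, size=Ne).T
    K = Cmd @ np.linalg.inv(Cdd + alpha * Ce)
    return M + K @ (d_obs[:, None] + noise - D)
```

Running this update once per inflation coefficient α (chosen so that Σ 1/α = 1, e.g. α = 4 four times) gives the multiple-data-assimilation iteration; each pass assimilates the full time series at once, matching the global update described in the abstract.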
-
-
-
Different Parameterizations of the Initial Ensemble for a Channelized Reservoir in an Assisted History Matching Context
Authors B. Sebacher, A.S. Stordal and R.G. Hanea
In this paper we present a comparison of three parameterizations of channelized reservoirs generated using multipoint geostatistics (MPS) in combination with a training image. In a previous study, we suggested estimating the facies probability fields from an ensemble generated with MPS, and linked, marginally, the facies probability fields to standard Gaussian variables by means of the normal score transform. We parameterized the facies fields with marginally Gaussian random fields, using the conditional mean of the Gaussian variables. This parameterization keeps a possible dependence structure inherited from the training image, but marginally the sampling from the Gaussian distribution is discrete and bi-modal. Here, we extend this parameterization in two directions. First, we disregard the dependence structure and parameterize by random sampling from the conditional distribution. Second, we draw samples from the conditional distribution using the same random seed for each grid cell within an ensemble member, but different random seeds across the ensemble members. This preserves the dependence structure within each ensemble member while increasing the variability between the ensemble members. Both parameterizations have the property that, marginally, they sample correctly from the standard Gaussian distribution. We compare the behavior of the parameterizations within a history matching process assimilating the production data. The comparison has two main directions: to assess the impact of the stochastic forcing on the history matching of the geological properties, and its impact on the predictive power of the models.
We use the iterative adaptive Gaussian mixture filter (IAGM) for history matching because IAGM is suited to highly nonlinear problems and has a re-sampling step that allows us to reuse the existing technique of re-sampling from the training image using updated probability fields. The re-sampling step is necessary to re-position the facies geometry, which is lost after a cycle of data assimilation.
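The difference between the two proposed sampling schemes can be illustrated with a toy sketch. This is our own illustrative reading, not the authors' code: `p_channel` stands in for the MPS-estimated facies probability field, and truncated-Gaussian inverse-CDF sampling is one simple way to realize marginally standard-Gaussian conditional draws. In the "shared" mode one uniform random number is reused at every grid cell of a member, while across members the uniforms differ.

```python
import numpy as np
from statistics import NormalDist

def gaussian_from_facies(p_channel, facies, mode, rng):
    """Draw a marginally standard-Gaussian field consistent with a facies map.

    p_channel[i] -- estimated probability that cell i is channel facies;
    facies[i]    -- 1 for channel, 0 for background;
    mode         -- "independent": fresh uniform per cell;
                    "shared": one uniform reused at every cell of the member.
    """
    inv_cdf = NormalDist().inv_cdf
    n = len(p_channel)
    u = rng.uniform(size=n) if mode == "independent" else np.full(n, rng.uniform())
    x = np.empty(n)
    for i in range(n):
        if facies[i] == 1:
            q = u[i] * p_channel[i]                         # below Phi^-1(p)
        else:
            q = p_channel[i] + u[i] * (1.0 - p_channel[i])  # above Phi^-1(p)
        x[i] = inv_cdf(min(max(q, 1e-12), 1.0 - 1e-12))     # clip for safety
    return x
```

If the facies indicators are themselves drawn with probability `p_channel`, the resulting field is marginally standard Gaussian, which is the property both parameterizations share; the "shared" mode additionally keeps a dependence structure within each member.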
-
-
-
Large-ensemble Data Assimilation Using an Upscaled Model
Authors K. Fossum and T. Mannseth
When performing ensemble-based data assimilation (DA), the high computational cost of running a complex reservoir simulator means that only a moderate number of ensemble members can be afforded. It is well known that, without modification, the DA procedure will fail when assimilating data in high-dimensional geophysical systems. Distance-based localization mitigates the effects of having few ensemble members, but in many cases it is difficult to define localization in a suitable manner; this is especially true for problems with a nonlocal relationship between data and parameters. An alternative to localization is to increase the ensemble size. With fixed computational resources, an increase in the ensemble size must be compensated for by a decrease in the cost of each reservoir simulation. This can be achieved by replacing the reservoir simulator with a proxy model. In this work, we investigate a proxy model constructed by discretizing the reservoir model equations on a coarser grid than the original reservoir simulation model. A modest reduction in the number of grid cells should be sufficient to compensate for the increase in computation from using a large ensemble. This reduction is achieved by a flexible and adaptive upscaling procedure, capable of handling arbitrary grids, constructed from a second-generation wavelet transform. Since the update step of the DA algorithm is much less computationally demanding than the forecast step, we consider a DA method where the forecast is performed on the coarse model while the update is performed on the fine grid. This is formulated without the need for error-structure correction. The large-ensemble proxy DA method is compared with localization methods on several numerical examples where localization is traditionally required.
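The coarse-forecast/fine-update split can be sketched generically. This is not the authors' wavelet-based scheme: the sketch uses plain 2x2 Haar averaging as the upscaling operator and a single EnKF-type update with a placeholder forward model, only to show how a cheap forecast on the coarse fields can still drive an update of the fine-grid parameters.

```python
import numpy as np

def haar_upscale(field):
    """One level of Haar averaging: every 2x2 block of the fine grid
    becomes the mean value of one coarse cell."""
    return 0.25 * (field[0::2, 0::2] + field[1::2, 0::2]
                   + field[0::2, 1::2] + field[1::2, 1::2])

def coarse_forecast_fine_update(M_fine, shape, forward_coarse, d_obs, Ce, rng):
    """EnKF-style step: forecast on upscaled fields, update on the fine grid.

    M_fine: (n_fine, Ne) ensemble of flattened fine-grid parameter fields;
    forward_coarse: cheap simulator acting on a coarse 2-D field."""
    Ne = M_fine.shape[1]
    # forecast: run the (cheap) coarse-grid simulator for every member
    D = np.column_stack([forward_coarse(haar_upscale(M_fine[:, j].reshape(shape)))
                         for j in range(Ne)])
    dM = M_fine - M_fine.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    Cmd = dM @ dD.T / (Ne - 1)   # fine-parameter / coarse-data cross-covariance
    Cdd = dD @ dD.T / (Ne - 1)
    noise = rng.multivariate_normal(np.zeros(len(d_obs)), Ce, size=Ne).T
    # Kalman-type update applied directly to the fine-grid parameters
    return M_fine + Cmd @ np.linalg.inv(Cdd + Ce) @ (d_obs[:, None] + noise - D)
```

Because the cross-covariance is taken between the fine-grid parameters and the coarse-model data, the inexpensive forecast still updates the original fine-grid ensemble, mirroring the forecast-coarse/update-fine division described in the abstract.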
-