ECMOR XIII - 13th European Conference on the Mathematics of Oil Recovery
- Conference date: 10 Sep 2012 - 13 Sep 2012
- Location: Biarritz, France
- ISBN: 978-90-73834-30-9
- Published: 10 September 2012
A Mortar Method Based on NURBS for Curve Interfaces
Authors A. Rodriguez, H. Florez and M.F. Wheeler
The Mortar Finite Element Method (MFEM) has proven to be a powerful technique for formulating a weak continuity condition at the interface of sub-domains in which different meshes (i.e. non-conforming or hybrid) and/or variational approximations are used. This is particularly suitable when coupling different physics on different domains, such as elasticity and poro-elasticity in the context of coupled flow and geomechanics. Precisely in this area, geometrical aspects also play a role. Using the same mesh for flow and mechanics is computationally expensive, and tensor-product meshes are usually propagated from the reservoir in a conforming way into its surroundings, which makes non-conforming discretizations a highly attractive option for these cases. To tackle these general sub-domain problems, this paper presents an MFEM scheme on curve interfaces based on Non-Uniform Rational B-Splines (NURBS) curves and surfaces. The goal is a more robust geometrical representation for mortar spaces that allows gluing non-conforming interfaces on realistic three-dimensional geometries. The resulting mortar saddle-point problem is decoupled by means of standard Domain Decomposition techniques, such as Dirichlet-Neumann and Neumann-Neumann, in order to exploit current parallel machine architectures. Three-dimensional examples ranging from near-wellbore applications to field-level subsidence computations show that the proposed scheme can handle problems of practical interest. To facilitate the implementation of complex workflows, an advanced Python wrapper interface that provides programming capabilities has been implemented. Extensions coupling elasticity and plasticity, which seem very promising for speeding up computations involving poroplasticity, will also be discussed.
Errors in the Upstream Mobility Scheme for Counter-Current Two-Phase Flow With Discontinuous Permeabilities
Authors T.S. Mykkeltvedt, I. Aavatsmark and S. Tveit
The upstream mobility scheme (UM) is widely used to solve hyperbolic conservation laws numerically. When applied to a homogeneous porous medium, this scheme has been shown to be convergent. When heterogeneities are introduced through the permeability, the flux function attains a spatial discontinuity. Earlier work has shown that UM performs badly for some examples of counter-current flow. We have examined the performance of UM for the counter-current flow of CO2 and brine in a 1D vertical column. The solutions computed with UM are compared to the physically relevant solution found with the modified Godunov flux approximation. Through four examples we show that UM may not converge to the physically correct solution. The scheme is ill-conditioned, since a small perturbation in the permeability may give a large difference in the solution. Without knowledge of the physically correct solution, it is impossible to rule out the solution produced by UM. Even if UM performs well in most cases, we emphasize that there exist systems where the scheme approximates a completely different solution than the physically relevant one. Since this scheme is widely used in reservoir simulation, it is important to be aware that the scheme can perform this badly.
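The upstream mobility rule the abstract examines can be sketched for a single vertical interface. This is a generic textbook version with user-supplied mobility functions and a fixed-point search for the upwind sides; the function names, quadratic mobilities in the example, and the sign convention (z positive upward, water the denser phase) are assumptions for illustration, not the authors' implementation.

```python
def um_flux(s_below, s_above, u_t, drho_gK, lam_w, lam_n, max_iter=20):
    """Upstream-mobility (UM) water flux at a vertical cell interface.

    s_below, s_above: saturations in the cells below/above the interface.
    u_t: total flux (positive upward); drho_gK = (rho_w - rho_n) * g * K > 0.
    lam_w, lam_n: wetting/non-wetting mobility functions of saturation.
    Each phase mobility is taken from the side the phase flows FROM; since
    the flow direction itself depends on the mobilities, we iterate.
    """
    # counter-current initial guess: water falls (upstream above), gas rises
    up_w, up_n = s_above, s_below
    for _ in range(max_iter):
        lw, ln = lam_w(up_w), lam_n(up_n)
        tot = lw + ln
        if tot == 0.0:
            return 0.0
        u_w = lw / tot * (u_t - ln * drho_gK)  # fractional-flow form
        u_n = u_t - u_w
        new_up_w = s_above if u_w < 0.0 else s_below
        new_up_n = s_above if u_n < 0.0 else s_below
        if (new_up_w, new_up_n) == (up_w, up_n):
            break
        up_w, up_n = new_up_w, new_up_n
    return u_w
```

For a purely gravity-driven case (u_t = 0) with quadratic mobilities, the scheme settles on water upwinded from above and gas from below, as expected for counter-current flow.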
A Rigid Element Method for Building Structural Reservoir Models
Authors G. Laurent, G. Caumon, M. Jessell and J.J. Royer
Most current approaches for building structural reservoir models focus on geometrical aspects and consistency with seismic and well data. Few approaches account for the validity of 3D geological models regarding structural compatibility. This may be checked using restoration, either kinematic or mechanical. Restoration is generally performed a posteriori, which also provides critical insights into the basin/reservoir history, but requires significant modeling effort. This paper presents an approach that introduces first-order kinematic and mechanical consistency at the early stages of structural modeling. Because the full deformation path is generally poorly constrained, we suggest using simplified approaches to generate plausible structures and assess first-order deformations, making efficiency and robustness more important than physical accuracy. A mechanical deformable model based on rigid elements linked by a non-linear energy has been adapted to geological problems. The optimal deformation is obtained by minimizing the total energy with appropriate boundary conditions. Finally, the displacement field is transferred to the geological objects embedded in the rigid elements. With this approach, 3D structural models can be obtained by successively modeling the tectonic events. The underlying tectonic history of the resulting models is explicitly controlled by the interpreter and can be used to study structural uncertainties.
Predicting Faults from Curvatures of Deformed Geological Layers Viewed as Thin Plates
By J.J. Royer
Continuum mechanics uses the von Kármán theory to describe the shape, strains and stresses of thin plates and non-Euclidean thin shells or surfaces. Given a set of boundary conditions, it relates geometrical shape parameters, such as the Gaussian and mean curvatures, and the physical properties of the material, such as Young's modulus and Poisson's ratio, to the bending (or flexural-slip) and stretching (or pure-shearing) energy terms. Layered geological structures, especially reservoir-bearing structures, typically have large lateral extents compared to their thickness and can, to a first approximation, be regarded as thin plates with respect to their mechanical behavior. Moreover, during sedimentation the top of the sedimentary pile can generally be considered a smooth developable surface in the depositional space, which is then deformed during its burial history under tectonic events. This idea is used to suggest a method for identifying the probability of finding sub-seismic faults in thin geological structures or reservoirs. This paper presents theoretical results relating the curvatures of the top or bottom surfaces of geological structures and reservoirs to these energy terms. Bending and stretching energy terms are used as structural attributes to predict fracturing or the deformation style.
A Gabriel-Delaunay Triangulation of Complex Fractured Media for Multiphase Flow Simulations
By H. Mustapha
Fractured reservoirs are complex domains where discrete fractures act as constraining boundaries. The discrete fractures are discretized into intersecting edges during grid generation. Delaunay triangulations are often used to represent complex structures. However, a Delaunay triangulation of a fractured medium generally does not conform to the fracture boundaries, and recovering the fracture elements may violate the Delaunay empty-circle (2D) criterion. Refining the triangulation is not a practical solution in complex fractured media. This paper presents a new approach that combines Gabriel and Delaunay triangulations. The Gabriel edge-empty-circle condition is employed locally to quantify the quality of the fracture edges in 2D. Fracture edges violating the Gabriel criterion are released in a first stage; a quality Delaunay triangulation is then generated that respects the remaining fracture constraints. The released fracture edges are subsequently approximated by edges of the Delaunay triangles. The final representation of the fractures may differ slightly, but a very accurate solution is always maintained. The method is near optimal and is able to generate fine, accurate, good-quality grids. Numerical examples are presented to assess the performance and efficiency of the proposed method.
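The Gabriel edge-empty-circle condition the paper employs is easy to state in code: an edge pq satisfies it when no other point lies strictly inside the circle having pq as its diameter. A minimal brute-force 2D check (function and variable names are illustrative, not from the paper):

```python
def is_gabriel_edge(p, q, points, eps=1e-12):
    """True if no point of `points` other than p and q lies strictly inside
    the circle with segment pq as its diameter (2D Gabriel condition)."""
    cx, cy = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0      # circle centre
    r2 = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / 4.0   # radius squared
    for x, y in points:
        if (x, y) in (tuple(p), tuple(q)):
            continue
        if (x - cx) ** 2 + (y - cy) ** 2 < r2 - eps:
            return False
    return True
```

A fracture edge failing this test is exactly the kind of edge the first stage of the proposed method would release before triangulating.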
Numerical Prediction of Relative Permeability in Water-Wet Naturally Fractured Reservoir Rocks
Authors S.K. Matthai, S. Bazrafkan, P. Lang and C. Milliotte
The grid-block scale ensemble relative permeability, kri, of fractured porous rock with appreciable matrix permeability is of decisive interest for reservoir simulation and the prediction of production, injector-producer water breakthrough, and ultimate recovery. While the dynamic behaviour of naturally fractured reservoirs (NFR) already provides many clues about (pseudo) kri on the inter-well length scale, such data are difficult to interpret because the exact fracture geometry in the subsurface is unknown. Here we present numerical simulation results from discrete fracture and matrix (DFM) unstructured-grid hybrid FEM-FVM simulation models, predicting the shape of fracture-matrix kri curves. In contrast to earlier work (Matthai et al. 2007; Nick and Matthai, 2011), we also simulate capillary fracture-matrix transfer (CFMT), without relying on the frequently made simplifying assumption that fracture saturation reflects fracture-matrix capillary pressure equilibrium. We also use a novel discretization of saturation which permits jump discontinuities to develop across the fracture-matrix interface. This increased physical realism permits, for the first time, a test of the Matthai and Nick (2009) semi-analytical model of the flow-rate dependence of relative permeability ensuing from CFMT. The sensitivity analysis presented here constrains the CFMT-related flow-rate dependence of kri and illustrates how it manifests itself in two geometries of layer-restricted, well-developed fracture patterns mapped in the field. In a companion paper (Lang et al.), we also investigate the dependence of kri on fracture aperture as computed using discrete element analysis for plausible states of in-situ stress. Our results indicate that fracture-matrix ensemble relative permeability is matched, for fast flow rates, by the semi-analytic model of Matthai and Nick (2009). For slow rates, the strong impact of CFMT leads to significantly different behaviour requiring a more elaborate treatment.
Flows in Discrete Fracture Networks: from Fine Scale Explicit Simulations to Network Models and Reservoir Simulators
Authors B. Noetinger, M.D. Delorme, A.F. Fourno and N.K. Khvoenkova
Modelling flows in fractured reservoirs is becoming essential, due to the increasing number of fractured reservoirs to be exploited. Building fluid-flow simulations on explicit Discrete Fracture Network (DFN) models that capture the highly localized nature of flow in fractured reservoirs is a challenging issue, and a successful solution would be of considerable help for setting up EOR schemes. A rigorous workflow from 3D DFN simulations to standard large-scale simulations must be set up. We show that it is possible to build an exact approximation scheme using an original Galerkin projection technique and a quasi-steady-state approximation (simulation time greater than a typical diffusion time over one fracture). At the lowest order, the resulting set of equations has the structure of a resistor/capacitor network. The associated mass and transmissibility matrices can be computed explicitly by solving steady-state boundary-value Laplace equations over each fracture. Treating models with millions of fractures remains impossible in this context, so, using geometrical considerations, we have developed accelerated algorithms that handle such cases in an acceptable time on a standard computer. Validations were done against high-resolution reference calculations. The theoretical aspects and validation tests will be addressed during the presentation.
Single Porosity Model for Fractured Formations
Authors P.Yu. Tomin and A.K. Pergament
A single porosity model for fractured formations is developed. Analogous to the work of G. Dagan and P. Indelman, an energy criterion is used for upscaling the absolute permeability: for cells containing fractures, the fine-scale energy is required to equal the approximated value corresponding to the tensor coefficients. The resulting effective tensor is symmetric and physically consistent, since the flux approximation is assured. Two classes of methods are applied to determine the pseudo relative permeability tensor. The first is the stationary capillary-equilibrium method, applicable in capillary trapping zones far from wells. Using this method, the relations between the phase and absolute permeability tensors are analysed. Sample relative permeability curves are obtained for media with orthotropic and monoclinic symmetries. The influence of connectivity on these functions is shown, and the saturation dependence of the direction of the principal axes of the phase permeability tensor is investigated, demonstrating the misalignment of the phase and absolute permeability tensors. The second class is a dynamic pseudo-function approach which uses a multiscale method for water-flooding simulation. The method combines the Fedorenko finite superelement method and the Samarskii support operator method and belongs to the class of high-resolution methods. The technique developed allows incorporating fractures of complex geometry and accurately accounts for anisotropy in two-phase flows; as opposed to dual-parameter models, it does not require connectivity of the fracture system and avoids doubling the number of unknowns. The method has been successfully applied to simulation of fractured reservoirs in China and West Siberia.
Diagnosis and Quantification of Stress-Sensitive Reservoir Behaviour from Pressure and Rate Transient Data
By R.A. Archer
Classical analytical and numerical techniques for simulating fluid flow in petroleum reservoirs typically assume that permeability is independent of pressure. In naturally fractured and low-permeability systems, the reservoir permeability may depend on the stress state of the reservoir, which makes the diffusivity equation governing single-phase flow nonlinear. Stress-sensitive behaviour is particularly relevant to the development of tight gas and other unconventional resources. This work develops a set of tools to diagnose and quantify stress sensitivity through analysis of transient pressure or flow-rate data. It builds on analytical solutions for radial flow in a stress-sensitive medium presented by Friedel and Voigt (SPE 122768, 2009), and for the linear-flow case presented by Archer (AFMC 17, 2010). The radial-flow solution uses the Boltzmann transform, whereas the linear-flow solution is based on the Cole-Hopf transform. High-resolution numerical solutions complement these analytical solutions. Where appropriate, pseudo-pressures are used to account for the pressure dependence of gas properties. This paper considers both transient pressure and rate solutions and develops a range of type-curve formats to demonstrate how production from stress-sensitive reservoirs differs from conventional reservoirs when plotted in the traditional well-test format (log-log plot of pressure and pressure derivative), as a p/z plot (for the gas case), as a rate versus cumulative plot, and as "Blasingame" type curves, including the normalised rate, rate-integral, and rate-integral-derivative formats. This suite of tools can be used in a diagnostic manner to identify whether stress-sensitive behaviour is occurring, to quantify the errors that may be made in permeability estimates if stress-sensitive behaviour is ignored, and to estimate the impact of stress sensitivity on ultimate recovery from a well.
A Spectral Approach to Conditional Simulation
Authors I.R. Minniakhmetov and A.H. Pergament
Conditional simulation classically requires the Cholesky factorization of the covariance matrix representing the grid points' correlations. For large fields the Cholesky factorization can be computationally expensive. In this work we present an alternative approach, based on the spectral representation of a conditional process. It is shown that the covariance of two arbitrary spectral components can be factorized into functions of the corresponding harmonics. In this case the Cholesky decomposition can be considerably simplified. The advantages of the presented approach are its accuracy and computational simplicity.
Quantitative Use of Different Seismic Attributes in Reservoir Modeling
Authors T. Feng, J. Skjervheim and G. Evensen
Accurate reservoir models are essential for reservoir management, and optimal use of all available data is crucial. Traditionally, reservoir properties have been conditioned to the dynamic production data from the wells, while seismic data are only used in a qualitative manner; quantitative use of seismic data is sparse and research based. To use seismic data quantitatively in the reservoir-modeling process, an integrated workflow needs to be established such that the forward modeling of synthetic seismic data and, preferably, the measured seismic data can be incorporated in the conditioning process. Different modeling regimes, such as reservoir flow simulation, rock physics, and seismic wave propagation, are involved in getting from reservoir flow properties to seismic signals. Hence, seismic attributes from different levels can be used in the conditioning process. In this work, our focus is to test and demonstrate an integrated workflow for quantitative use of different seismic attributes in history matching. The history-matching concept is formulated in a Bayesian setting through ensemble-based algorithms, with the model uncertainty represented by an ensemble of realizations. A field case study is used to demonstrate the importance of different seismic attributes in the conditioning process.
Using Two-point Geo-statistics for Reservoir Model Parameter Reduction
By J. Leguijt
An algorithm has been developed to constrain gridded reservoir models, as used with assisted history matching, with geo-statistical information, and at the same time to reduce the number of variables needed to describe the model. Gridded models, as used within most reservoir-modelling packages, may consist of 10^5 to 10^6 grid blocks. A covariance matrix that constrains the model with a variogram (two-point statistics) would then contain 10^10 to 10^12 coefficients, and a direct principal component decomposition of it is beyond the capability of current computer systems. A common way to reduce the number of variables is to use the members of an ensemble of models from a geo-statistical simulation as basis vectors for a subspace. When a history match is obtained with a model constrained to this subspace, the model will have reasonable-looking continuity behaviour. There is, however, no guarantee that this subspace contains the directions that correspond to the eigenvectors of the covariance matrix with the largest eigenvalues. This can be demonstrated with a simple simulation and is described theoretically by the Wishart distribution. It is possible to construct a set of orthonormal basis vectors that contains the directions corresponding to the eigenvectors of the covariance matrix with significantly large eigenvalues. The number of basis vectors may still be rather large, but it is mainly determined by the size of the model and the range of the variogram. From an eigenvector decomposition of this covariance matrix, a very good approximation of the eigenvectors with significantly large eigenvalues can be obtained. As the small eigenvalues can be neglected, the number of eigenvectors needed to describe the model is approximately 10^2, which results in a significant parameter reduction.
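The truncated-eigenbasis idea can be sketched on a small 1D grid with an exponential covariance model. The variogram model, range, and energy threshold below are assumptions chosen for illustration, not the paper's settings; on realistic 3D grids the decomposition would exploit structure rather than a dense `eigh`.

```python
import numpy as np

def reduced_basis(n, corr_range, energy=0.9):
    """Truncated eigenbasis of a 1D exponential-covariance matrix.

    Builds C_ij = exp(-|i-j| / corr_range) on an n-point grid, then keeps
    the leading eigenvectors capturing a fraction `energy` of the total
    variance (the trace). Returns (eigenvalues, eigenvectors), descending.
    """
    idx = np.arange(n)
    C = np.exp(-np.abs(idx[:, None] - idx[None, :]) / corr_range)
    w, V = np.linalg.eigh(C)              # ascending eigenvalues
    w, V = w[::-1], V[:, ::-1]            # reorder to descending
    cum = np.cumsum(w) / w.sum()
    k = int(np.searchsorted(cum, energy) + 1)
    return w[:k], V[:, :k]
```

A model realization is then described by its coefficients in the k retained eigenvectors instead of by n grid-block values, which is the parameter reduction the abstract describes.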
Numerical Comparison of Ensemble Kalman Filter and Randomized Maximum Likelihood
Authors K. Fossum, T. Mannseth, D. Oliver and H.J. Skaug
In recent years, traditional history-matching methods have been increasingly challenged by sequential data-assimilation techniques such as the ensemble Kalman filter (EnKF). There are strong similarities between EnKF and the non-sequential method randomized maximum likelihood (RML). For a linear forward model the two methods are identical; for a nonlinear forward model some differences arise (in addition to sequential versus batch data assimilation): RML can be iterative, while EnKF is not, and RML uses realization-specific gradients/sensitivities to change a model realization while EnKF uses the same covariance for all realizations. We assess the sampling capabilities of RML and EnKF for a weakly nonlinear forward model. Results are compared to a Markov chain Monte Carlo (McMC) method, which samples correctly from the posterior. Our aim is to clarify which of the above-mentioned differences between RML and EnKF has the biggest impact on the sampling capabilities. We apply the methods to a two-phase reservoir model small enough to be suitable for McMC. The assessment of RML and EnKF is performed by comparing their history-matching capabilities, and the properties of their posterior distributions, to those of the posterior distributions obtained with McMC.
Smooth Multi-scale Parameterization for Integration of Seismic and Production Data Using Second-generation Wavelets
Authors T. Gentilhomme, T. Mannseth, D. Oliver, G. Caumon and R. Moyen
In this paper, we use the second-generation wavelet transform (SGW) as a smooth multi-scale parameterization technique for history matching of seismic-derived models using an ensemble-based optimization method (batch-enRML). The construction of second-generation wavelets is presented and their advantages compared to first-generation wavelets are discussed. These wavelets are then applied to a realistic 3D faulted reservoir model, and their ability to represent this model correctly at a large compression ratio is demonstrated. Finally, using the SGW re-parameterization, we set the basis for a new adaptive multi-scale inversion method, which aims at limiting the increase of the seismic-data mismatch of the seismic-derived realizations by selecting relevant parameters. The efficiency of the method is discussed through a 2D synthetic example.
Distance Parameterization for Efficient Seismic History Matching with the Ensemble Kalman Filter
Authors O. Leeuwenburgh and R. Arts
The Ensemble Kalman Filter (EnKF), in combination with travel-time parameterization, provides a robust and flexible method for quantitative multi-model history matching to time-lapse seismic data. A disadvantage of the parameterization in terms of travel times is that it requires simulation of the models beyond the update time. A new distance parameterization is proposed for fronts, or more generally for isolines of arbitrary seismic attributes, that circumvents the need for additional simulation time. An accurate Fast Marching Method for the solution of the Eikonal equation on Cartesian grids is used to calculate distances between observed and simulated fronts, which are subsequently used as innovations in the EnKF. Experiments demonstrate the functioning of the method in synthetic 1D and 2D cases that include uncertain model properties and merging or multiple secondary fronts. Results are compared with those obtained from direct use of saturation data. The proposed algorithm significantly reduces the amount of data while still capturing the essential information, removes the need for seismic inversion when only the oil-water front is identified, and produces a more favorable distribution of simulated data, leading to improved functioning of the EnKF.
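As a rough stand-in for the Fast Marching distances used as innovations, a brute-force nearest-neighbour distance between two front point sets conveys the idea. The real method solves the Eikonal equation on the grid; this simplification, and all names in it, are ours.

```python
import math

def front_distance(obs_front, sim_front):
    """Mean nearest-neighbour distance from observed to simulated front
    points (lists of (x, y) tuples). A brute-force illustration of the
    distance-type innovation; not the paper's Fast Marching solver."""
    total = 0.0
    for p in obs_front:
        total += min(math.dist(p, q) for q in sim_front)
    return total / len(obs_front)
```

Feeding one such scalar (or a few, one per front segment) into the EnKF replaces a large volume of gridded saturation data, which is the data-reduction point the abstract makes.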
Preventing Ensemble Collapse and Preserving Geostatistical Variability Across the Ensemble with the Subspace EnKF
One of the key issues of the EnKF is the well-known problem of ensemble collapse, which is particularly evident for small ensembles and results in an artificial reduction of variability across the ensemble. A second, more important problem is that the EnKF is theoretically appropriate only if all ensemble members belong to the same multi-Gaussian random field (geological/geostatistical model). This matters because for most real fields we have more than one geological scenario, and ideally we would like to obtain one or more history-matched models for each scenario. In this work, we propose the subspace EnKF to alleviate both problems. The basic idea is to constrain the different ensemble members to different subspaces of the same or different random fields. This is accomplished by parameterizing the random fields and modifying the EnKF formulation with the gradients of the parameterizations. The subspace EnKF prevents ensemble collapse, providing a better quantification of uncertainty, and, more importantly, retains key geological characteristics of the initial ensemble even when each ensemble member belongs to a different geological model. The approach is demonstrated on a synthetic example with a multi-Gaussian permeability field.
Multi-objective Scheme of Estimation of Distribution Algorithm for History-Matching
Authors A. Abdollahzadeh, A. Reynolds, M. Christie, D. Corne, G. Williams and B. Davies
History matching is one of the key challenges of efficient reservoir management. In history matching, evolutionary algorithms are used to explore the global parameter search space for multiple well-fitting models. General critiques of these algorithms include high computational demands as well as low diversity among the multiple models. Estimation of distribution algorithms are a class of evolutionary algorithms in which new candidate solutions are obtained by sampling a probability distribution built from the population. In previous work, we studied estimation of distribution algorithms for history matching and showed that good results can be obtained using a single misfit function. Multi-objective optimisation algorithms use the concepts of dominance and the Pareto front to find a set of optimal trade-offs between the competing objectives of minimising misfit. In this paper, we apply a multi-objective estimation of distribution algorithm to the history matching of, first, a well-known synthetic reservoir simulation model and, second, a real North Sea reservoir. We show that higher solution diversity, and in some cases better-quality solutions, can be achieved by using multiple objectives. In addition, multi-objective optimisation algorithms are less sensitive to parameter tuning and provide trade-offs between objectives that give more insight into the history-matching problem.
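The dominance and Pareto-front concepts the paper relies on can be made concrete in a few lines: when every objective is a misfit to be minimised, a solution is non-dominated if no other solution is at least as good in every objective and strictly better in at least one. A minimal brute-force sketch (names are illustrative):

```python
def pareto_front(solutions):
    """Return the indices of non-dominated solutions.

    `solutions` is a list of equal-length tuples of misfit values,
    all objectives to be minimised.
    """
    def dominates(a, b):
        # a dominates b: no worse everywhere, strictly better somewhere
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    return [i for i, a in enumerate(solutions)
            if not any(dominates(b, a)
                       for j, b in enumerate(solutions) if j != i)]
```

The returned set is the trade-off surface between the competing misfit objectives; keeping this whole set, rather than a single best model, is what preserves solution diversity.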
Data Assimilation Using the EnKF for 2-D Markov Chain Models
Authors Y. Zhang, D.S. Oliver, Y. Chen and H.J. Skaug
The ensemble Kalman filter (EnKF) is well suited to updating Gaussian variables and can be used for updating continuous non-Gaussian variables either directly or after transformation. Categorical variables such as facies type are much more difficult to history match, especially when the variables have complex transitional dependencies. In a previous paper we described a method for updating third-order Markov chain models in one dimension using the EnKF, whose efficiency partially depends on the Viterbi algorithm, which is not directly applicable in higher dimensions. In this paper, we develop a data-assimilation method for updating categorical models using an approximation to the joint probability of facies types (Allard et al. 2011) that can be used in a sequential algorithm without iteration. The ensemble of realizations after updating can be used to efficiently approximate the likelihood of the variables, while the categorical model provides an approximation to the transition probabilities. We demonstrate the approach by conditioning two synthetic channel models with two facies types to both linear and nonlinear observations. Our results show that the distribution of facies after data assimilation honors the data much better than before assimilation, and the transitions among facies are consistent with the prior model.
An Iterative Version of the Adaptive Gaussian Mixture Filter
Authors A.S. Stordal and R. Lorentzen
The adaptive Gaussian mixture filter (AGM) was introduced as a robust filter technique for large-scale applications and an alternative to the well-known ensemble Kalman filter (EnKF). The bias of AGM is determined by two parameters: an adaptive weight parameter and a predetermined bandwidth parameter that decides the size of the linear update. The bandwidth parameter must often be chosen significantly different from zero in order to make linear updates large enough to match the data, at the expense of bias in the estimates. The iterative AGM introduced here takes advantage of the fact that the history-matching problem is usually a parameter-estimation problem. If the prior distribution of the parameters is close to the posterior distribution, it is possible to match the observations with a small bandwidth parameter, and hence the bias of the filter solution is small. To obtain this scenario, we iteratively run the AGM through the data history with a very small bandwidth, creating a new prior distribution from the updated samples after each iteration. After a few iterations, nearly all samples from the previous iteration match the data and the above scenario is achieved.
Ensemble Kalman Filter Data Assimilation to Condition a Real Reservoir Model to Well Test Observations
By A. Abadpour
Recently, significant effort has been made to characterize reservoir models using the Ensemble Kalman filter (EnKF) as a data-assimilation technique. EnKF has proved to be a powerful tool: it can deal with almost any sort of measurement, handle different types of uncertainty in the simulation models, and remain affordable from the computational point of view. Lately the technique has been deployed to assimilate pressure-transient and production-logging data in order to update permeabilities and estimate layer skin factors. In the present paper, the EnKF methodology is used to characterize an offshore reservoir model against well-test pressure data as well as the pressure derivative, adjusting cell-by-cell petrophysical properties and the skin factor in each well perforation. The results show that using the derivative observations to calibrate the uncertain parameters improves the quality of the match, not only in the predicted derivative but also in forecasting the pressure measurements. The importance of assimilating on skin, as well as of recalculating well connection factors, is demonstrated. Moreover, a new distance-based localisation scheme based on the well drainage zone is introduced to help reduce unnecessary changes in the model.
New Formulation of the Objective Function for Better Incorporation of 4D Seismic Data into Reservoir Models
Authors R. Derfoul, S. Da Veiga, C. Gout and C. Le Guyader
To build consistent reservoir models, 4D seismic data are an invaluable source of information on fluid displacements and geology over extensive areas of the reservoir. In this paper, we focus on integrating such data to improve the optimal model obtained in a history-matching process. This is a challenging task involving a proper definition of the objective function, which computes the discrepancy between observed data and responses computed by the reservoir model. Classical formulations based on the least-squares mismatch are not adapted to complex, noisy and voluminous data such as 4D inverted seismic data. The main focus of this paper is to define an experimental methodology to compare and classify seismic matching methods. In particular, we propose an efficient algorithm which focuses on the main trends in a seismic cube. This new algorithm is investigated in the context of seismic data, and its potential is demonstrated on several history-matching reservoir examples.
-
Time Lapse Inversion Workflow Constrained by Reservoir Grid Parameterization
By P. Thore
Time lapse seismic provides key information for assisted history matching (AHM). Qualitatively, geo-bodies extracted from 4D data reflect the front for a given flow event (e.g. the water flood due to an injector). But 4D data can also provide quantitative information on dynamic parameters. We propose a novel workflow for the quantitative use of geophysical data in AHM, which consists of three steps: 1. Model-based inversion that keeps the layer parameterization of the reservoir grid; this parameterization introduces high and low frequencies missing from the seismic bandwidth. 2. Pressure and saturation inversion constrained by dynamic information and handling uncertainty in both data and model. 3. Direct mapping of seismically derived information into the reservoir grid (without using any time-to-depth conversion), which solves the main problem inherent in the vertical change of support from the (regular) seismic grid to the (irregular, layer-based) reservoir grid. Our paper is illustrated with real data examples.
-
Optimal Choice of a Surveillance Operation Using Information Theory
Authors A.C. Reynolds and D.H. Le
We consider the problem of choosing among a suite of potential reservoir surveillance operations. We frame the problem in terms of two questions: (1) Which surveillance operation is the most useful? (2) What is the expected value of the reduction in uncertainty in a reservoir variable J (e.g. cumulative oil production) that would be achieved if we were to conduct each surveillance operation to collect and history-match the data obtained? Note that the objective is to answer these questions with an uncertain reservoir description and without any actual measurements. We propose a procedure based on information theory to answer these questions. Question 1 is answered by calculating the mutual information between J and the vector of observed data. Question 2 is answered by estimating the expected value of the standard deviation (or P90-P10) of J in the posterior model from the conditional entropy of J. We apply the proposed method to two simple problems, a nonlinear toy problem and a simple water flooding problem. The results are verified by an exhaustive history matching procedure, which is reasonably rigorous but computationally very demanding. We find that the mutual information approach is a fast and reliable alternative to the history matching approach.
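For jointly Gaussian variables, the mutual information used to answer Question 1 has a closed form in terms of the correlation coefficient. The sketch below uses that Gaussian shortcut on Monte Carlo samples; the paper's method is more general, so treat this as an illustrative simplification:

```python
import numpy as np

def gaussian_mutual_information(j_samples, d_samples):
    """I(J; d) = -0.5 * ln(1 - rho^2) for jointly Gaussian scalars."""
    rho = np.corrcoef(j_samples, d_samples)[0, 1]
    return -0.5 * np.log(1.0 - rho**2)
```

Ranking candidate surveillance operations then amounts to computing this quantity for each operation's predicted data and picking the largest.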
-
Application of the Adaptive Gaussian Mixture Filter to History Match a Real Field Case
Authors R. Valestrand, G. Nævdal and A.S. Stordal
Over the last decade the ensemble Kalman filter (EnKF) has attracted attention as a promising method for solving the reservoir history matching problem: updating model parameters so that the model output matches the measured production data. The method possesses unique qualities: it provides real-time updates and uncertainty quantification of the estimate, and it can estimate any physical property at hand. The method does, however, have its limitations; in particular, it is derived under an assumption of Gaussianity. A recent method proposed to improve upon the original EnKF is the adaptive Gaussian mixture filter (AGM). The AGM relaxes the requirements of a linear and Gaussian model by making smaller linear updates and including importance weights associated with each ensemble member, at computational costs as low as those of the EnKF. In this paper we present results where the AGM algorithm is combined with localization. To validate the performance of the AGM, the results are compared with the EnKF, with and without localization. From the results, we are able to distinguish the performance of the different filters. In particular, all the methods provide a good history match, but the AGM stands out by better honoring the original geostatistics.
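The importance-weighting idea behind the AGM can be sketched as follows: each member's weight is updated by its data likelihood, then damped toward uniform weights, which is what keeps the cost close to the EnKF's and avoids weight collapse. Function and parameter names are illustrative, not the authors' implementation:

```python
import numpy as np

def adaptive_mixture_weights(prev_w, innovations, obs_var, alpha):
    """Update importance weights from each member's data mismatch,
    then damp toward uniform weights to limit degeneracy."""
    like = np.exp(-0.5 * np.asarray(innovations, float)**2 / obs_var)
    w = prev_w * like
    w = w / w.sum()
    n = len(w)
    # alpha = 1 recovers pure importance weighting; alpha = 0 recovers EnKF-like uniform weights
    return alpha * w + (1.0 - alpha) / n
```

Members that better match the data keep larger weights, so the posterior better honors the prior geostatistics than a purely Gaussian update.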
-
Neural Networks and their Derivatives for History Matching and other Seismic, Basin and Reservoir Optimization Problems
Authors J. Bruyelle and D.R. Guérillot
Description: In geosciences, the complex forward problems met in geophysics, petroleum system analysis and reservoir engineering often require replacing the forward problem by a proxy, and these proxies are then used in optimization problems. For instance, history matching of observed field data requires such a large number of reservoir simulation runs (especially when using geostatistical geological models) that it is often impossible to use the full reservoir simulator. Therefore, several techniques have been proposed to mimic the reservoir simulations using proxies. Because of the experimental design approach commonly used, most authors propose second order polynomials. In this paper we demonstrate that: (1) neural networks can reproduce second order polynomials, so a neural network proxy is much more flexible and adaptable to the nonlinearity of the problem to be solved; (2) first and second order derivatives of the neural network can be obtained, providing gradients and Hessians for optimizers. For the first point, a complete description of a neural network equivalent to a second order polynomial will be given. For the inverse problems met in seismic inversion, well-by-well production data, optimal well locations, source rock generation, etc., gradient methods are most often used to find an optimal solution. The paper describes how to calculate these gradients from a neural network built as a proxy. When needed, the Hessian can also be obtained from the neural network approach. Application: On a real case study, the ability of neural networks to reproduce complex phenomena (water-cuts, production rates, etc.) is shown. Comparisons with second order polynomials (and kriging methods) demonstrate the superiority of the neural network approach as soon as nonlinear behavior is present in the responses of the simulator.
The gradients and the Hessian of the neural network are compared to those of the real response function. Results and conclusions: (1) neural networks can advantageously replace polynomial and kriging approaches as proxies for inverse problems and uncertainty analysis; (2) a neural network reproducing a bilinear polynomial is given explicitly; (3) gradients and Hessians of the neural network can be calculated and used by optimizers. Keywords: Proxies, History Matching, Gradient Methods, Optimizers, Basin Modelling, Seismic Inversion, Uncertainty Analysis, Hessian
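The key claim that the gradient of a neural proxy is available analytically can be illustrated with a one-hidden-layer tanh network; this is a generic sketch, not the authors' architecture:

```python
import numpy as np

class TinyProxy:
    """y = w2 . tanh(W1 x + b1) + b2, with an analytic input gradient."""

    def __init__(self, W1, b1, w2, b2):
        self.W1, self.b1, self.w2, self.b2 = W1, b1, w2, b2

    def value(self, x):
        h = np.tanh(self.W1 @ x + self.b1)
        return float(self.w2 @ h + self.b2)

    def gradient(self, x):
        # Chain rule: dy/dx = W1^T [ (1 - h^2) * w2 ], since d tanh(z)/dz = 1 - tanh(z)^2
        h = np.tanh(self.W1 @ x + self.b1)
        return self.W1.T @ ((1.0 - h**2) * self.w2)
```

Differentiating once more in the same way yields the Hessian, which is what makes such proxies directly usable by Newton-type optimizers.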
-
North Sea Chalk Reservoir – Seismic History Matching Workflow
Authors H. Sudan, E. Tolstukhin and A. Janssen
This presentation outlines an integrated workflow that incorporates 4D seismic data into the North Sea Chalk Reservoir history matching process. Successful application and associated benefits of the workflow are also presented. A number of 4D seismic surveys have been acquired over this field between 1989 and 2008, and these data are becoming a quantitative tool for describing the spatial distribution of reservoir properties and compaction. The seismic monitoring data are used to optimize the waterflood by providing insights into water movement and subsequently improve infill well placement. Reservoir depletion and water injection in this field lead to rock compaction and fluid substitution. These changes are revealed in space and time through 4D seismic differences. Inconsistencies between predicted 4D differences (calculated from reservoir model output) and actual 4D differences are therefore used to identify reservoir model shortcomings. This process is captured using the following workflow: prepare and upscale a geologic model; simulate fluid flow and associated rock-physics using a reservoir model; generate a synthetic 4D seismic response from fluid and rock-physics forecasts; and update the reservoir model to better match actual production data and 4D seismic observations. The above-mentioned Seismic History Matching (SHM) workflow employs rock-physics modeling to quantitatively constrain the reservoir model and develop a simulated 4D seismic response. Different parameterization techniques and seismic misfit formulations were validated and used to calibrate and update the reservoir model. This workflow updates the parameters in the closed loop system through minimization of a misfit function using a customized Particle Swarm Optimization algorithm.
In summary, the Seismic History Matching workflow is a multi-disciplinary process that requires strong collaboration between geological, geomechanical, geophysical and reservoir engineering disciplines to optimize reservoir management.
-
Efficient Solution of the Optimization Problem in Model-reduced Gradient-based History Matching
Authors S. Szklarz, M. Rojas and M. Kaleta
Adjusting parameters in reservoir models by minimizing the discrepancy between the model's predictions and actual measurements is a popular approach known as history matching. One of the most effective techniques is gradient-based history matching. For reservoir models, the number of grid blocks, and therefore the size of the problem, can become very large. In recent years, model-order reduction techniques aiming to replace large, complex dynamic systems with lower-dimension models have been incorporated into history matching. In both gradient-based history matching and model-reduced gradient-based history matching, first-order optimization methods are used in order to minimize the mismatch between simulated well-production data and observed production. In this work, we investigate the performance of some optimization methods on the minimization problem in model-reduced gradient-based history matching. The methods were tested on the history matching of a small reservoir model with synthetic measurements. Our results show that fast first-order techniques such as the spectral projected gradient method can compete with the popular quasi-Newton BFGS approach.
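The spectral projected gradient (SPG) idea tested by the authors can be sketched for box constraints; the Barzilai-Borwein step length is the "spectral" ingredient. This simplified version omits the nonmonotone line search that production-quality SPG uses, so treat it as an illustration, not the paper's implementation:

```python
import numpy as np

def spg_minimize(grad, x0, lower, upper, iters=100):
    """Projected gradient with Barzilai-Borwein (spectral) step lengths."""
    proj = lambda x: np.clip(x, lower, upper)
    x = proj(np.asarray(x0, float))
    g = np.asarray(grad(x), float)
    step = 1.0
    for _ in range(iters):
        x_new = proj(x - step * g)          # gradient step, projected onto the box
        g_new = np.asarray(grad(x_new), float)
        s, y = x_new - x, g_new - g
        sy = float(s @ y)
        # Barzilai-Borwein step: mimics curvature information without a Hessian
        step = float(s @ s) / sy if sy > 1e-12 else 1.0
        x, g = x_new, g_new
    return x
```

Because each iteration needs only one gradient (one adjoint solve in the history matching setting), SPG can be competitive with BFGS despite its simplicity.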
-
Deterministic Linear Bayesian Updating of State and Model Parameters
Authors O. Pajonk, B.V. Rosić and H.G. Matthies
Bayesian estimation has become an important topic for inverse problems in the context of hydrocarbon recovery. The conceptual and computational advantages due to direct integration with uncertainty quantification workflows are appealing. In particular, linear Bayesian techniques like the ensemble Kalman filter (EnKF) have been successfully used in numerous cases. However, such techniques have difficulties in some applications, often caused by sampling errors, a limited ensemble size, or the sometimes large number of required samples. In this work we present and discuss a closely related linear Bayesian technique which is based on orthogonal expansions of the stochastic spectrum of the involved random variables and random fields. Basically a family of fully deterministic implementations of the well-known projection theorem of Hilbert spaces, the technique is conceptually simple, yet powerful. Being fully deterministic, these methods avoid all sampling errors. First combined parameter and state estimation results with a low-dimensional chaotic model are presented, using a specific choice of orthogonal expansion. These are compared to results obtained with the EnSRF, since it is a close relative of these spectral estimation methods. Challenges and opportunities for applications to the inverse problem of identification for hydrocarbon reservoirs are discussed.
-
A New Global Upscaling Technique for 3D Unstructured Grids
Authors M. Karimi-Fard and L.J. Durlofsky
New procedures for unstructured coarse-model generation are presented and applied. The underlying fine-grid model is considered to be unstructured, and the coarse-model cells are defined as groupings of fine-grid cells. The key flow quantity that must be computed for the coarse model is the upscaled transmissibility for each cell-to-cell connection. We introduce a global upscaling procedure for this computation. The method first requires several (a minimum of three) global single-phase flow solutions. Appropriately defined linear combinations of these solutions are used to compute each upscaled transmissibility. This approach circumvents some of the limitations of existing (local and global) upscaling procedures. It also enables transmissibility to be quickly computed for a number of different coarse grids without performing any additional pressure solutions. Results are presented for an idealized two-phase flow problem. The fine grid contains nearly 200,000 cells, and coarse models of varying resolution are considered. Accurate results for total injector-producer flow rate are observed for all grid-resolution levels for the three different well configurations considered. Oil rate as a function of time is shown to improve in accuracy with increasing resolution, and is quite accurate for a model of about 10,000 cells.
-
Grid Adaption for Upscaling and Multiscale Method
Authors K.-A. Lie, J.R. Natvig, S. Krogstad, Y. Yang and X.H. Wu
A Dirichlet-Neumann representation (DNR) method was recently proposed for upscaling. The method expresses coarse fluxes as linear functions of multiple discrete pressure values along the boundary and at the center of each coarse block. The number of pressure values can be adjusted to improve the accuracy of simulation results, and in particular to resolve important fine-scale details. Improvement over existing approaches is substantial, especially for reservoirs that contain high-permeability streaks or channels. Multiscale methods obtain fine-scale fluxes or pressures at the cost of solving a coarsened problem, but can also be utilized for flexible upscaling. We compare the DNR method and a multiscale mixed finite-element (MsMFE) method. Both can be expressed in mixed form, with local stiffness matrices obtained as inner products of basis functions with fine-scale subresolution determined from local flow problems. Piecewise linear Dirichlet boundary conditions are used for DNR and piecewise constant Neumann conditions for MsMFE. Adding discrete pressure points in the DNR method corresponds to subdividing coarse faces and hence increasing the number of basis functions in the MsMFE method. The methods show similar accuracy for 2D Cartesian cases, but the MsMFE method is more straightforward to formulate in 3D and implement for general grids.
-
Reduced-order Modeling for Thermal Recovery Processes
Authors M.A.H. Rousset, C.K. Huang, H. Klie and L.J. Durlofsky
Thermal recovery typically entails higher costs than conventional oil recovery, so the application of computational optimization techniques may be beneficial. Optimization, however, requires many simulations, which incurs substantial computational cost. Here we apply a model-order reduction technique, which aims at large reductions in computational requirements. The technique considered, trajectory piecewise linearization (TPWL), entails the representation of new solutions in terms of linearizations around previously simulated (and saved) training solutions. The linearized representation is projected into a low-dimensional space, with the projection matrix constructed through proper orthogonal decomposition of solution `snapshots' generated in a training step. We consider two idealized problems, specifically primary production of oil driven by downhole heaters, and a simplified model for steam assisted gravity drainage, where water and steam are treated as a single `effective' phase. The strong temperature dependence of oil viscosity is included in both cases. TPWL test-case results for these systems demonstrate that the method can provide accurate predictions relative to full-order reference solutions. The overhead associated with TPWL model construction is equivalent to the computation time for several full-order simulations (the precise overhead depends on the number of training runs). Observed runtime speedups are very substantial -- over two orders of magnitude.
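The proper orthogonal decomposition step that builds the TPWL projection matrix can be sketched as follows: a thin SVD of the snapshot matrix, truncated at a chosen energy fraction. This is a generic implementation; the energy threshold is an illustrative choice, not the authors' setting:

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Columns of `snapshots` are saved solution states.
    Returns Phi such that a reduced state is z = Phi.T @ x."""
    U, s, _ = np.linalg.svd(np.asarray(snapshots, float), full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)        # cumulative 'energy' captured
    k = int(np.searchsorted(frac, energy)) + 1   # smallest k reaching the threshold
    return U[:, :k]
```

Projecting the linearized equations with `Phi` is what reduces each TPWL step to a small dense system, which is the source of the reported speedups.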
-
Enabling Optimal Production Strategies under Uncertainties via Non-Intrusive Model Reduction Methods
Authors H. Klie, H. Chen, Q. Wang and K. Willcox
The present work proposes an alternative approach to generating nonlinear reduced-order models for optimization and control under uncertainty without explicit knowledge of all the equations governing the physics of the simulation. Hence, the proposed method is amenable to legacy simulation codes. In order to cope with the lack of physical information, in conjunction with the inherent curse of dimensionality associated with the number of parameter coefficients, control variables and state variables of the problem, we combine the projection operators obtained from the Proper Orthogonal Decomposition with neural net interpolation. In this way, the proposed Black-Box Stencil Interpolation Method (BSIM) is capable of exploiting both spatial and temporal variable locality. The method can be seen as a competitive but non-intrusive alternative to the Trajectory Piece-Wise Linear method and the Discrete Empirical Interpolation Method (DEIM), both recently proposed in the literature. We illustrate the capabilities of BSIM on a suite of different black-oil and compositional field models subject to multiple well controls under geological uncertainty. We show that the results are comparable in accuracy to DEIM despite the non-intrusive character of BSIM.
-
Reservoir Management Using Two-stage Optimization with Streamline Simulation
Authors T. Wen, M.R. Thiele, D. Echeverría Ciaurri, K. Aziz and Y. Ye
Waterflooding is a common secondary oil recovery process. Performance of waterfloods in mature fields with a significant number of wells can be improved with minimal infrastructure investment by optimizing injection/production rates of individual wells. However, a major bottleneck in the optimization framework is the large number of reservoir flow simulations often required. In this work we propose a new method based on streamline-derived information that significantly reduces these computational costs, in addition to making use of the computational efficiency of streamline simulation itself. We seek to maximize the long-term net present value of a waterflood by determining optimal individual well rates, given an expected albeit uncertain oil price and a total fluid injection volume. We approach the optimization problem by decomposing it into two stages which can be implemented in a computationally efficient manner. The two-stage streamline-based optimization approach can be an effective technique when applied to reservoirs with a large number of wells in need of an efficient waterflooding strategy over a 5 to 15 year period.
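The long-term objective referred to, net present value, is the discounted sum of periodic cash flows. A minimal sketch, assuming yearly periods and end-of-year discounting (the paper's exact discounting convention is not stated here):

```python
def net_present_value(cash_flows, discount_rate):
    """Discount yearly cash flows (year 1 onward) back to time zero."""
    return sum(cf / (1.0 + discount_rate) ** (t + 1)
               for t, cf in enumerate(cash_flows))
```

In the waterflood setting, each yearly cash flow would combine oil revenue with injection and water-handling costs, so the optimizer trades off early production against long-term sweep.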
-
Response Surface Approaches for Large Decision Trees: Decision Making Under Uncertainty
By H. Gross
Traditionally, the connection between simulation and decision analysis is made by using simulation outputs as inputs to decision algorithms. We propose to use simulation input uncertainties directly in decision algorithms by extending existing probabilistic reservoir simulation tools (experimental design, proxy models) and existing decision analysis tools (decision trees, Pareto fronts). This approach addresses questions on field development options under uncertainty (facility sizing, completion decisions or data collection campaigns). When linking probabilistic simulation with decision analysis, three practical problems arose. First, the number of reservoir uncertainties creates huge decision trees. We solve this problem by creating composite solutions, with some branches evaluated exhaustively and others evaluated with calibrated response surfaces. Second, the often-encountered assumption of independence between uncertainties was too restrictive for practical use. We therefore specify probabilities on all uncertainty branches. Last, we must handle multiple decision drivers and understand the consequences of decisions on several metrics. We have therefore implemented multi-objective optimization capabilities. The technique developed here extends beyond the capability of existing decision analysis and uncertainty quantification tools. Its practical value is demonstrated on two field problems, where it proves useful for identifying optimal decision paths.
-
A Workflow for Decision Making Under Uncertainty
Authors D. Busby, S. Da Veiga and S. Touzani
We propose a workflow for decision making under uncertainty aimed at comparing different development plan scenarios. The approach applies to mature fields where the residual uncertainty is estimated using a probabilistic inversion approach. Moreover, a robust optimization method is discussed to optimize controllable parameters in the presence of uncertainty. A key element of this approach is the use of response surface models to reduce the very high number of simulator evaluations needed. To build efficient and reliable response surfaces for this application, we discuss an experimental design method for correlated input variables, where the correlation is induced by the probabilistic inversion process. For the problem of optimization under uncertainty, an iterative approach is proposed that refines the response surface iteratively so as to effectively reduce approximation errors and converge faster to the true solution. The workflow is illustrated on a realistic test case of a mature field, where the approach is used to compare two new development plan scenarios, both in terms of expectation and of risk mitigation, and to optimize well position parameters in the presence of uncertainty.
-
Estimation of Production Rates Using Transient Well Flow Modeling and the Auxiliary Particle Filter
Authors R. Lorentzen, A.S. Stordal, G. Nævdal, H.A. Karlsen and H.J. Skaug
Improved recovery of oil from existing petroleum fields is increasingly important. A better representation of production zone information leads to better flowrate control and reservoir management. To achieve this, one can exploit the fact that smart wells with multiple zones and laterals are becoming more common and may be equipped with permanent instrumentation and control. Today, accurate flowrate measurements or estimates for each zone are lacking, and existing tools are often limited to steady-state models with no uncertainty analysis. Here we combine a transient well flow model with estimation techniques into a tool for interpreting wellbore measurements. The estimation technique applied here is the auxiliary sequential importance resampling (ASIR) filter, which has the advantage of being more robust than the traditional particle filter (PF). The ASIR filter is used to tune the output of specific stochastic models of the flowrates. For this tuning we have chosen a regime-type model in which the flowrate process changes structure governed by an underlying Markov jump process. Using this type of model allows us to capture both smooth transitions and more abrupt changes in the flowrates.
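The resampling step shared by the PF and ASIR filters can be sketched with systematic resampling, one common low-variance choice; the ASIR filter additionally pre-selects particles through an auxiliary likelihood stage, which is omitted here:

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Return particle indices drawn by systematic resampling:
    one uniform draw stratified across n equally spaced positions."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, float)
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(w), positions)
```

High-weight particles are duplicated and low-weight ones dropped, which keeps the filter focused on flowrate regimes consistent with the wellbore measurements.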
-
Generalized Field Development Optimization: Coupled Well-Placement and Control under Geologic Uncertainty
Authors B. Jafarpour and L. Li
Well placement optimization is often formulated as an integer-programming problem and is typically carried out assuming known well control settings. Similarly, finding optimal well controls is usually formulated and solved as a control problem in which the well locations are fixed. Solving each problem independently, without accounting for the coupling between them, leads to suboptimal solutions. We propose to solve the coupled well placement and control optimization problem for improved production performance. We present two alternative methods: (i) sequential solution of the decoupled well placement and control subproblems, where each subproblem is resolved after updating the decision variables of the other subproblem from the previous step; (ii) simultaneous solution, in which well locations and controls are changed concurrently during the iterations using a generalized stochastic approximation simultaneous perturbation algorithm. The first approach allows the application of well-established methods from the literature to solve each subproblem individually, while the second approach requires the development of new methods to solve mixed-integer optimization problems. We consider field development optimization under geologic uncertainty and discuss computationally efficient approximate solution techniques for robust optimization under ensemble model representations. Several numerical experiments with the PUNQ model and a layer of the SPE10 benchmark model demonstrate the applicability of these methods.
-
A Derivative-Free Methodology with Local and Global Search for the Joint Optimization of Well Location and Control
Authors O.J. Isebor, L.J. Durlofsky and D. Echeverría Ciaurri
In oil field development, the optimal location for a new well depends on how it is to be operated. Thus, it is generally suboptimal to treat the well location and well control optimization problems sequentially. Rather, they should be considered as a joint problem. In this work, we present noninvasive, derivative-free, easily-parallelizable procedures to solve this joint optimization problem. Specifically, we consider Particle Swarm Optimization (PSO), a heuristic global stochastic search algorithm; Mesh Adaptive Direct Search (MADS), a local search procedure; and a hybrid PSO-MADS technique that combines the advantages of both methods. Nonlinear constraints are handled through use of filter-based treatments that seek to minimize both the objective function and constraint violation. We also introduce a formulation to determine the optimal number of wells, in addition to their locations and controls, by associating a binary variable (drill/do not drill) with each well. Example cases of varying complexity, which include bound constraints, nonlinear constraints, and the determination of the number of wells, are presented. The PSO-MADS hybrid procedure is shown to consistently outperform both standalone PSO and MADS when solving the joint problem. The joint approach is also observed to provide superior performance relative to a sequential procedure.
-
Well Placement Optimization under Uncertainty with CMA-ES Using the Neighborhood
Authors Z. Bouzarkouna, D.Y. Ding and A. Auger
In the well placement problem, as in other field development optimization problems, geological uncertainty is a key source of risk affecting the viability of field development projects. Well placement problems under geological uncertainty are formulated as optimization problems in which the objective function is evaluated using a reservoir simulator on a number of possible geological realizations. In this paper, we present a new approach to handle geological uncertainty in the well placement problem with a reduced number of reservoir simulations. The proposed approach uses already-simulated well configurations in the neighborhood of each well configuration when evaluating the objective function. We thus use only a single reservoir simulation, performed on a randomly chosen realization, together with the neighborhood to estimate the objective function, instead of running multiple simulations on multiple realizations. This approach is combined with the stochastic optimizer CMA-ES. On the benchmark reservoir case PUNQ-S3, the proposed approach is shown to capture the geological uncertainty using a smaller number of reservoir simulations. Compared to the reference approach that uses all possible realizations for each well configuration, it reduces the number of reservoir simulations significantly (by around 80%).
-
Optimization of Well Trajectory under Uncertainty for Proactive Geosteering
Authors Y. Chen, R.J. Lorentzen and E.H. Vefring
Various logging-while-drilling (LWD) and seismic-while-drilling (SWD) tools offer opportunities to obtain geological information near the bottom-hole assembly during the drilling process. These real-time in-situ data provide relatively high-resolution information around, and possibly ahead of, the drilling path compared to the data from a surface seismic survey. The use of these in-situ data offers substantial potential for improved recovery through continuous optimization of the remaining well path while drilling. We show an automated workflow for proactive geosteering through continuous updating of the estimates of the earth model and robust optimization of the remaining well path under uncertainty. A synthetic example is shown to illustrate the proposed workflow. The estimates of the reservoir surfaces, reservoir thickness, and the depth of the initial oil-water contact, and their associated uncertainty, are obtained through the ensemble Kalman filter (EnKF) using directional resistivity measurements. A robust optimization is used to compute the well position that minimizes the average cost function evaluated on the ensemble of geological models estimated from the EnKF.
-
Adjoint-Based Optimization of a Foam EOR Process
Authors J.F.B.M. Kraaijevanger, M. Namdar Zanganeh, H.W. Buurman, J.D. Jansen and W.R. Rossen
We apply adjoint-based optimization to a Surfactant-Alternating-Gas foam process using a linear foam model, which introduces gradual changes in gas mobility, and a nonlinear foam model, which gives abrupt changes in gas mobility as a function of oil and water saturations and surfactant concentration. For the linear foam model, the objective function is a relatively smooth function of the switching time. For the nonlinear foam model, the objective function exhibits many small-scale fluctuations. As a result, a gradient-based optimization routine could have difficulty finding the optimal switching time. For the nonlinear foam model, extremely small time steps were required in the forward integration to converge to an accurate solution of the semi-discrete (discretized in space, continuous in time) problem. The semi-discrete solution still had strong oscillations in gridblock properties associated with the steep front moving through the reservoir. In addition, an extraordinarily tight tolerance was required in the backward integration to obtain accurate adjoints. We believe the small-scale oscillations in the objective function result from the large oscillations in gridblock properties associated with the front moving through the reservoir. Other EOR processes, including surfactant EOR and near-miscible flooding, have similar sharp changes, and may present similar challenges to gradient-based optimization.
-
High Order Adjoint Derivatives using ESDIRK Methods for Oil Reservoir Production Optimization
Authors A. Capolei, E.H. Stenby and J.B. Jørgensen
In production optimization, computation of the gradients is the computationally expensive step. We improve the computational efficiency of such algorithms by computing the gradients with high-order ESDIRK (Explicit Singly Diagonally Implicit Runge-Kutta) temporal integration methods and continuous adjoints. The high-order integration scheme allows larger time steps and therefore faster solution times. We compare gradient computation by the continuous adjoint method with the discrete adjoint method and the finite-difference method. The methods are implemented for a two-phase flow reservoir simulator. Computational experiments demonstrate that the accuracy of the sensitivities obtained by the adjoint methods is comparable to the accuracy obtained by the finite-difference method. The continuous adjoint method can use a different time grid than the forward integration; therefore, it can compute these sensitivities much faster than the discrete adjoint method and the finite-difference method. On the other hand, the discrete adjoint method produces the gradients of the numerical schemes, which is beneficial for the numerical optimization algorithm. Computational experiments show that when the time steps are controlled within a certain range, the continuous adjoint method produces gradients sufficiently accurate for the optimization algorithm, and somewhat faster than the discrete adjoint method.
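The discrete adjoint idea, one forward sweep plus one backward sweep yielding the sensitivities with respect to all controls at once, can be illustrated on a linear discrete-time system. This is a toy stand-in for the reservoir simulator; the dynamics and quadratic cost are illustrative assumptions:

```python
import numpy as np

def adjoint_gradient(A, B, x0, controls, Q):
    """Gradient of J = sum_k x_k^T Q x_k w.r.t. each control u_k,
    for the linear dynamics x_{k+1} = A x_k + B u_k."""
    xs = [np.asarray(x0, float)]
    for uk in controls:                      # forward sweep: store states
        xs.append(A @ xs[-1] + B @ uk)
    N = len(controls)
    lam = 2.0 * Q @ xs[N]                    # lambda_N = dJ/dx_N
    grads = [None] * N
    for k in range(N - 1, -1, -1):           # backward (adjoint) sweep
        grads[k] = B.T @ lam                 # dJ/du_k = B^T lambda_{k+1}
        lam = A.T @ lam + 2.0 * Q @ xs[k]    # lambda_k
    return grads
```

The cost is independent of the number of controls, which is why adjoints beat finite differences in production optimization; the paper's contribution is doing the two sweeps with high-order ESDIRK integration and, for the continuous adjoint, on a coarser backward time grid.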
Simultaneous Optimization of Well Placement and Control Using a Hybrid Global-local Strategy
Authors T.D. Humphries, R.D. Haynes and L.A. James
Optimal placement and control of wells is essential to ensuring maximal net present value (NPV) or total oil recovery when developing an oil field. The majority of the academic literature treats optimal placement and control as two separate problems; however, treating them simultaneously may yield better results. The objective function (i.e. NPV) in this joint problem tends to vary nonsmoothly as the positional parameters are varied, but smoothly in the control parameters. This suggests an approach that utilizes both global and local optimization techniques. In this paper we address the placement and control optimization problem simultaneously with two approaches that combine a global search strategy (particle swarm optimization, or PSO), which operates over all variables, with a local generalized pattern search (GPS) strategy, which operates primarily on the control parameters. The first approach is a hybrid PSO/GPS algorithm that optimizes over all positional and control variables simultaneously, while the second decouples the problem into separate placement and control problems and attempts to solve them sequentially. Simulation experiments show that both approaches tend to outperform PSO alone on simple problems, while the decoupled approach may be the most suitable for more complicated cases.
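A minimal sketch of the two-stage idea, with a quadratic stand-in for the reservoir objective (the real NPV requires a simulator): PSO explores globally, then a GPS poll with shrinking step polishes the result locally.

```python
import random

random.seed(0)

def f(x, y):
    # stand-in objective to minimize (hypothetical, not an NPV)
    return (x - 1) ** 2 + (y + 2) ** 2

def pso(n=20, iters=60):
    # basic particle swarm over both variables
    pts = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pts]
    gbest = min(pbest, key=lambda p: f(*p))[:]
    for _ in range(iters):
        for i, p in enumerate(pts):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - p[d])
                             + 1.5 * random.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if f(*p) < f(*pbest[i]):
                pbest[i] = p[:]
            if f(*p) < f(*gbest):
                gbest = p[:]
    return gbest

def pattern_search(x, step=0.5, tol=1e-6):
    # generalized pattern search: poll coordinate directions, shrink on failure
    while step > tol:
        improved = False
        for d in (0, 1):
            for s in (+step, -step):
                trial = x[:]
                trial[d] += s
                if f(*trial) < f(*x):
                    x, improved = trial, True
        if not improved:
            step *= 0.5
    return x

best = pattern_search(pso())
```

In the paper's setting the GPS poll would be restricted mainly to the (smooth) control variables, with PSO handling the nonsmooth positional ones.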
Ensemble Based Multi-Objective Production Optimization of Smart Wells
Authors R.M. Fonseca, O. Leeuwenburgh and J.D. Jansen
In a recent study, two hierarchical multi-objective methods were suggested to include short-term targets in life-cycle production optimization. However, this previous study has two limitations: 1) the adjoint formulation is used to obtain gradient information, requiring simulator source-code access and an extensive implementation effort, and 2) one of the two proposed methods relies on the Hessian matrix, which is obtained by a computationally expensive method. To overcome the first limitation, we used ensemble-based optimization (EnOpt), which does not require source-code access and is relatively easy to implement. To address the second limitation, we used the BFGS algorithm to obtain an approximation of the Hessian matrix. We performed experiments in which a water flood was optimized in a geologically realistic multi-layer sector model. The controls were inflow-control-valve settings at pre-defined time intervals. Undiscounted Net Present Value (NPV) and highly discounted NPV were used as the long-term and short-term objective functions, respectively. We obtained an increase of approximately 14% in the secondary objective for a decrease of only 0.2-0.5% in the primary objective. The study demonstrates that ensemble-based multi-objective optimization can achieve results of practical value in a computationally efficient manner.
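The reason EnOpt needs no source-code access is that it estimates the gradient purely from objective evaluations: perturb the controls, correlate the perturbations with the objective changes. One simple variant of that estimator (toy quadratic objective, not the sector model) is:

```python
import random

random.seed(1)

def J(u):
    # stand-in objective over the control vector u (hypothetical, not an NPV)
    return -sum((ui - 0.3) ** 2 for ui in u)

def enopt_gradient(u, n=500, sigma=0.1):
    # ensemble gradient estimate: cross-covariance of Gaussian control
    # perturbations with the resulting objective changes, scaled by sigma^2
    base = J(u)
    du, dJ = [], []
    for _ in range(n):
        pert = [ui + random.gauss(0.0, sigma) for ui in u]
        du.append([p - ui for p, ui in zip(pert, u)])
        dJ.append(J(pert) - base)
    m = len(u)
    return [sum(du[k][i] * dJ[k] for k in range(n)) / (n * sigma ** 2)
            for i in range(m)]

g = enopt_gradient([0.8, 0.1])   # analytic gradient is [-1.0, 0.4]
```

The estimate is noisy (it converges at a Monte Carlo rate in the ensemble size), which is the price paid for treating the simulator as a black box; the paper feeds such gradients into BFGS to build the Hessian approximation.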
Mathematical Modeling of Microbial Processes for Oil Recovery
Authors J. Monteagudo and C. Huang
Microbial recovery processes use microorganisms, either indigenous or injected into the reservoir, to produce metabolic reactions that trigger a variety of mechanisms leading to the production of hydrocarbons and/or enhanced oil recovery. In this work we have developed a mathematical model that accounts for several mechanisms involved in both the Microbial Gas Generation (MGG) and Microbial Enhanced Oil Recovery (MEOR) processes. It comprises a kinetic model that predicts cell growth and the production of the metabolites: gas, bio-surfactants and bio-polymers. Additionally, the model considers the reduction of residual oil saturation due to the bio-surfactant and the change of water viscosity caused by the bio-polymer. An adsorption model describes the retention of solutes from the aqueous phase, which alters the porosity and permeability. The model was implemented in a full-field 3-D compositional and black-oil reservoir simulator. We performed validations against experimental data available in the literature and then used the model to simulate MGG and MEOR processes on synthetic field cases. Sensitivity studies were conducted to assess the influence of the microbial kinetic model parameters on the predictions.
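A common building block for such kinetic models is Monod growth with yield-limited substrate consumption and growth-associated metabolite production. A minimal forward-Euler sketch with hypothetical constants (not the paper's calibrated parameters):

```python
# Monod kinetics: mu = mu_max * S / (Ks + S)
mu_max, Ks, Y, alpha = 0.5, 2.0, 0.4, 0.1  # hypothetical rate/yield constants
X, S, P = 0.05, 10.0, 0.0                  # biomass, substrate, metabolite
dt = 0.01
for _ in range(5000):                      # 50 time units of growth
    mu = mu_max * S / (Ks + S)             # specific growth rate
    dX = mu * X                            # biomass growth rate
    X += dt * dX
    S = max(S - dt * dX / Y, 0.0)          # substrate consumed per unit biomass
    P += dt * alpha * dX                   # growth-associated product (e.g. gas)
```

By construction the scheme conserves the substrate-to-biomass balance X = X0 + Y·(S0 - S), a useful sanity check when coupling such kinetics into a reservoir simulator.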
A 2D Model for the Effect of Gas Diffusion on Mobility of Foam for EOR
Authors L.E. Nonnekes, S.J. Cox and W.R. Rossen
Transport of gas across the liquid films between bubbles is cited as one reason why CO2 foams for enhanced oil recovery (EOR) are usually weaker than N2 foams, and why steam foams are weaker than foams of steam mixed with N2. We examine here the effect of inter-bubble gas diffusion on flowing bubbles in a simplified model of a porous medium (a periodically constricted tube in 2D), and in particular its effect on the bubble-size distribution and the capillary resistance to flow. Bubbles somewhat smaller than a pore disappear by diffusion as the bubbles move. For bubbles larger than a pore, as expected in EOR, diffusion does not affect bubble size. Instead, diffusion actually increases the capillary resistance to flow (i.e. makes foam stronger): lamellae spend more time in positions where lamella curvature resists movement. When fitted to pressures and to diffusion and convection rates representative of field application of foams, diffusion is not expected to alter the bubble-size distribution in a foam, but instead modestly increases the resistance to flow. The reason for the apparent weakness of CO2 foam therefore evidently lies in factors other than CO2's large diffusion rate through foam.
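The unconstrained mechanism the paper tests against can be sketched in its generic Ostwald-ripening form (hypothetical rate constant, not the paper's pore-constrained 2D model): gas leaves whichever bubble has the higher Laplace pressure 2γ/r, so a bubble smaller than its neighbour shrinks away while a larger one survives and grows.

```python
K = 1e-11            # lumped diffusion rate constant, m^2/s (hypothetical)
R_NEIGHBOUR = 1e-4   # radius of the neighbouring bubble, m

def evolve(r, dt=1e-2, steps=100_000):
    # dr/dt proportional to the capillary-pressure difference (1/r - 1/R)
    for _ in range(steps):
        r += dt * (-K * (1.0 / r - 1.0 / R_NEIGHBOUR))
        if r <= 1e-6:
            return 0.0   # bubble has effectively vanished
    return r

r_small = evolve(5e-5)   # smaller than its neighbour: shrinks away
r_large = evolve(2e-4)   # larger than its neighbour: grows
```

The paper's point is that pore walls suppress this mechanism for bubbles larger than a pore, which is the regime expected in EOR.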
Using Dimensionless Numbers to Assess EOR in Heterogeneous Reservoirs
Authors B. Rashid, O. Fagbowore and A.H. Muggeridge
Dimensionless numbers such as the mobility ratio and the viscous-to-gravity ratio provide a convenient way of assessing the flow regime, and thus ranking performance, when designing secondary and tertiary oil recovery processes. Until recently, however, their application has been limited to homogeneous reservoirs due to a) the lack of a robust heterogeneity index and b) the fact that the viscous-to-gravity ratio depends upon reservoir permeability and thus heterogeneity. In this paper we present 3D phase diagrams showing how recovery and breakthrough time depend upon the mobility ratio, the viscous-to-gravity ratio and heterogeneity. We review the literature on the application of dimensionless numbers to identifying flow regimes in oil recovery processes and select a recently developed heterogeneity index based upon vorticity to characterize heterogeneity. The index has been previously verified using heterogeneous reservoir descriptions taken from SPE10 model 2. We use the phase diagrams to identify dominant flow regimes and provide criteria, based on the dimensionless numbers, for identifying those flow regimes when assessing alternative EOR processes.
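For reference, the two classical numbers can be computed directly; the forms below are common textbook definitions (the illustrative SI values are not from the paper).

```python
def mobility_ratio(krw_end, mu_w, kro_end, mu_o):
    # end-point mobility ratio M = (krw/mu_w) / (kro/mu_o);
    # M > 1 promotes unstable (fingering-prone) displacement
    return (krw_end / mu_w) / (kro_end / mu_o)

def viscous_gravity_ratio(u, mu_o, L, k, drho, g, H):
    # one common form: viscous pressure drop over reservoir length L
    # divided by the gravity head over thickness H
    return (u * mu_o * L) / (k * drho * g * H)

M = mobility_ratio(krw_end=0.3, mu_w=1e-3, kro_end=0.8, mu_o=5e-3)
Rvg = viscous_gravity_ratio(u=1e-5, mu_o=5e-3, L=100.0, k=1e-13,
                            drho=200.0, g=9.81, H=10.0)
```

Large Rvg indicates viscous-dominated flow; small Rvg indicates gravity-dominated flow with risk of gravity override or underride. The paper's contribution is adding a vorticity-based heterogeneity index as a third axis alongside these two.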
Molecular Dynamics as a Tool to Deal with Thermogravitation
Authors G. Galliero and F.M. Montel
A precise description of the initial state of a petroleum reservoir is crucial to optimizing its development plan. This relies on accurate modeling of the spatial distribution of the fluid components within the reservoir, which is mainly influenced by gravitational segregation and thermo-diffusion phenomena (thermogravitation). An alternative to classical thermodynamic modeling, which provides further information on thermogravitation without the need for any EoS or correlations to describe transport properties, is to use Non-Equilibrium Molecular Dynamics (NEMD) simulations of systems representing an idealized 1D reservoir fluid column. We will show how such a molecular-based approach can shed light on some of the underlying physical mechanisms (evolution/stability) of the thermo-gravitational process in idealized situations. In particular, it will be shown, for an n-alkane mixture and an acid-gas mixture, that the thermodiffusion effect can affect the vertical distribution of the different compounds as much as segregation does, with the same characteristic time, and can even lead to an unstable (i.e. convective) situation in a CO2-rich reservoir.
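The steady-state effect of thermodiffusion on a non-convecting 1-D column follows the textbook zero-flux balance dx/dz = -S_T·x·(1-x)·dT/dz, where S_T is the Soret coefficient that NEMD can supply. A sketch with illustrative coefficients (not the paper's mixtures):

```python
S_T = 5e-3     # Soret coefficient, 1/K (hypothetical)
dT_dz = 0.03   # thermal gradient along the column, K/m
x = 0.5        # mole fraction of the tracked component at the top
dz = 1.0       # spatial step, m
for _ in range(2000):   # integrate down a 2 km column
    x += dz * (-S_T * x * (1.0 - x) * dT_dz)
```

Even a modest Soret coefficient produces a composition shift of several mole percent over a kilometre-scale column, which is why thermodiffusion can rival gravitational segregation in setting the initial fluid distribution.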
Modeling Compositional Compressible Two-phase Flow in Porous Media by the Concept of the Global Pressure
Authors B. Amaziane, M. Jurak and A. Zgaljic-Keko
The modeling of multiphase flow in porous formations is important both for the management of petroleum reservoirs and for environmental remediation. More recently, modeling multiphase flow has received increasing attention in connection with the disposal of radioactive waste and the sequestration of CO2. In this talk we will discuss a new formulation for modeling compositional compressible two-phase flow in porous media, such as immiscible gas injection in oil reservoirs or gas migration through engineered and geological barriers in a deep repository for radioactive waste. The focus is on the problems arising from Newton-Raphson flash calculations and from phase appearance and disappearance. Compositional compressible two-phase flows in porous media are usually modeled by the mass balance law written for each component, the Darcy-Muskat law, and the thermodynamic equilibrium between the phases. The resulting equations represent a set of highly coupled nonlinear partial differential equations. In order to model both saturated and unsaturated zones, one has to change the main unknowns of the system. In the saturated zones, the pressure and the saturation of one of the phases are commonly chosen as the main unknowns, whereas in the unsaturated zones the saturation may be replaced by the mass density of one of the components in its phase. To avoid changing the main unknowns, and to weaken the coupling of the system, we derive a new formulation of the compositional compressible liquid and gas flow. The formulation considers gravity, capillary effects and the diffusivity of each component. Its main feature is the introduction of a new variable called the global pressure. The derived system is written in terms of the global pressure and the total gas mass density; it partially decouples the equations and models the flow in both the saturated and unsaturated zones with no change of the primary unknowns.
The mathematical structure is well defined: the system consists of two nonlinear degenerate parabolic equations. The derived formulation is fully equivalent to the original equations and is more suitable for mathematical and numerical analysis. The accuracy and effectiveness of the new formulation are demonstrated through numerical results.
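For orientation, the classical incompressible immiscible version of the global pressure (in the sense of Chavent and Jaffré) shows the intended decoupling; the compositional compressible construction of this talk generalizes it:

```latex
% The total Darcy velocity obeys a single-phase-like law in the
% global pressure P (gravity omitted for brevity):
u_t = u_w + u_n = -\mathbf{K}\,\lambda(S)\,\nabla P, \qquad
P = p_n - \int_{0}^{S} f_w(s)\, p_c'(s)\, ds,
```

with total mobility $\lambda = \lambda_w + \lambda_n$, fractional flow $f_w = \lambda_w/\lambda$, and capillary pressure $p_c = p_n - p_w$, so that $\nabla P = \nabla p_n - f_w(S)\,p_c'(S)\,\nabla S$ absorbs the capillary coupling into a single pressure-like unknown.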
Thermal Adaptive Implicit Method: Temperature Stability Criteria
Authors J. Maes and A. Moncorgé
We present new linear-stability criteria for the Thermal Adaptive Implicit Method (TAIM) for thermal multiphase compositional displacement. The analysis is applied to the mass and energy equations. The work of Moncorgé and Tchelepi (2009) is based on the assumption of a divergence-free total velocity and accounts for compressibility effects. Our analysis shows that the criteria proposed there do not guarantee oscillation-free numerical solutions for displacements that involve steep temperature and saturation fronts. We derive new criteria from the analysis of a simplified coupled pressure-temperature linearized system, obtained by decoupling it from the saturation and composition unknowns. The new criteria explain instabilities that were undetected by the previous analysis. Moreover, we demonstrate through scaling analysis and numerical examples that for most problems of practical interest a simple temperature stability criterion, obtained by assuming incompressible multiphase flow, is quite robust. The relationship between the full and simplified stability criteria is analyzed in detail. The methodology is demonstrated on several thermal-compositional examples, including Steam-Assisted Gravity Drainage.
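The flavour of an incompressible temperature stability criterion can be sketched as a CFL-type bound (a back-of-envelope form with illustrative SI values, not the paper's coupled criteria): the thermal front travels slower than the fluid by the heat-capacity ratio of fluid to bulk rock, and an explicit temperature update must not let that front cross more than one cell per step.

```python
def thermal_front_velocity(u, phi, rhoc_w, rhoc_r):
    # u: Darcy velocity, m/s; phi: porosity;
    # rhoc_w, rhoc_r: volumetric heat capacities of water and rock, J/(m^3 K);
    # the front is retarded by the bulk-to-fluid heat-capacity ratio
    return u * rhoc_w / (phi * rhoc_w + (1.0 - phi) * rhoc_r)

def max_explicit_dt(u, dx, phi, rhoc_w, rhoc_r):
    # CFL-type bound: thermal front advances at most one cell per step
    return dx / thermal_front_velocity(u, phi, rhoc_w, rhoc_r)

dt_max = max_explicit_dt(u=1e-5, dx=1.0, phi=0.2,
                         rhoc_w=4.2e6, rhoc_r=2.0e6)
```

An adaptive implicit scheme would treat temperature implicitly only in the cells where the actual time step exceeds such a local bound, which is the trade-off TAIM is designed to manage.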