ECMOR XIII - 13th European Conference on the Mathematics of Oil Recovery
- Conference date: 10 Sep 2012 - 13 Sep 2012
- Location: Biarritz, France
- ISBN: 978-90-73834-30-9
- Published: 10 September 2012
1 - 20 of 114 results
A Mortar Method Based on NURBS for Curve Interfaces
Authors: A. Rodriguez, H. Florez and M.F. Wheeler
The Mortar Finite Element Method (MFEM) has proven to be a powerful technique for formulating a weak continuity condition at the interface of sub-domains in which different meshes (non-conforming or hybrid) and/or variational approximations are used. This is particularly suitable when coupling different physics on different domains, such as elasticity and poro-elasticity in the context of coupled flow and geomechanics. Geometrical aspects also play a role in this area: using the same mesh for flow and mechanics is computationally very expensive. Tensor-product meshes are usually propagated from the reservoir into its surroundings in a conforming way, which makes non-conforming discretizations a highly attractive alternative for these cases. To tackle such general sub-domain problems, this paper presents an MFEM scheme on curved interfaces based on Non-Uniform Rational B-Splines (NURBS) curves and surfaces. The goal is a more robust geometrical representation for mortar spaces that allows gluing non-conforming interfaces on realistic three-dimensional geometries. The resulting mortar saddle-point problem is decoupled by means of standard domain decomposition techniques, such as Dirichlet-Neumann and Neumann-Neumann, in order to exploit current parallel machine architectures. Three-dimensional examples ranging from near-wellbore applications to field-level subsidence computations show that the proposed scheme can handle problems of practical interest. To facilitate the implementation of complex workflows, an advanced Python wrapper interface that provides programming capabilities has been implemented. Extensions coupling elasticity and plasticity, which seem very promising for speeding up computations involving poroplasticity, are also discussed.
Errors in the Upstream Mobility Scheme for Counter-Current Two-Phase Flow With Discontinuous Permeabilities
Authors: T.S. Mykkeltvedt, I. Aavatsmark and S. Tveit
The upstream mobility scheme (UM) is widely used to solve hyperbolic conservation laws numerically. When applied to a homogeneous porous medium, the scheme has been shown to be convergent. When heterogeneities are introduced through the permeability, the flux function acquires a spatial discontinuity, and in earlier work UM has been shown to perform badly for some examples of counter-current flow. We examine the performance of UM for the counter-current flow of CO2 and brine in a 1D vertical column. The solutions computed with UM are compared to the physically relevant solution found with a modified Godunov flux approximation. Through four examples we show that UM may not converge to the physically correct solution. The scheme is ill-conditioned in the sense that a small perturbation of the permeability may give a large difference in the solution. Without knowledge of the physically correct solution, it is impossible to rule out the solution produced by UM. Even though UM performs well in most cases, we emphasize that there exist systems for which the scheme approximates a completely different solution than the physically relevant one. Since this scheme is widely used in reservoir simulation, it is important to be aware that it can perform this badly.
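The upstream-mobility selection rule this abstract critiques can be illustrated with a minimal single-interface sketch for a buoyancy-driven counter-current setting. The mobility functions, unit conventions and upstream-side choices below are illustrative assumptions, not the authors' scheme:

```python
def upstream_mobility_flux(s_lower, s_upper, K_lower, K_upper,
                           lam_w, lam_n, drho_g=1.0):
    """Upstream-mobility (UM) flux sketch for buoyancy-driven
    counter-current two-phase flow across one vertical interface.

    Each phase mobility is evaluated in the cell upstream of that
    phase's own flux: brine (denser) flows downward, so its upstream
    cell is the upper one; CO2 flows upward, so its upstream cell is
    the lower one.  Permeability is harmonically averaged at the
    interface.  (Illustrative conventions only.)
    """
    K = 2.0 * K_lower * K_upper / (K_lower + K_upper)
    mob_w = lam_w(s_upper)   # brine mobility, upstream = upper cell
    mob_n = lam_n(s_lower)   # CO2 mobility, upstream = lower cell
    # fractional-flow form of the buoyancy-driven counter-current flux
    return K * drho_g * mob_w * mob_n / (mob_w + mob_n)
```

When the permeability jumps across the interface (K_lower ≠ K_upper), the harmonic average and the per-phase upstream choices interact, which is precisely the regime in which the abstract reports non-convergence to the physically relevant solution.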
A Rigid Element Method for Building Structural Reservoir Models
Authors: G. Laurent, G. Caumon, M. Jessell and J.J. Royer
Most current approaches for building structural reservoir models focus on geometrical aspects and on consistency with seismic and well data. Few approaches account for the validity of 3D geological models in terms of structural compatibility. This can be checked kinematically or mechanically using restoration, generally performed a posteriori, which also provides critical insights into the basin/reservoir history but requires significant modeling effort. This paper presents an approach that introduces first-order kinematic and mechanical consistency at the early stages of structural modeling. Because the full deformation path is generally poorly constrained, we suggest using simplified approaches to generate plausible structures and assess first-order deformations, making efficiency and robustness more important than physical accuracy. A mechanical deformable model based on rigid elements linked by a non-linear energy has been adapted to geological problems. The optimal deformation is obtained by minimizing the total energy with appropriate boundary conditions. Finally, the displacement field is transferred to the geological objects embedded in the rigid elements. With this approach, 3D structural models can be obtained by successively modeling the tectonic events. The underlying tectonic history of the resulting models is explicitly controlled by the interpreter and can be used to study structural uncertainties.
Predicting Faults from Curvatures of Deformed Geological Layers Viewed as Thin Plates
By J.J. Royer
Continuum mechanics uses von Kármán theory to describe the shape, strains and stresses of thin plates, non-Euclidean thin shells and surfaces. Given a set of boundary conditions, it relates geometrical shape parameters, such as the Gaussian and mean curvatures, and the physical properties of the material, such as Young's modulus and Poisson's ratio, to the bending (or flexural slip) and stretching (or pure shearing) energy terms. Layered geological structures, and reservoir-bearing structures in particular, typically have lateral extents much larger than their thickness and can, to a first approximation, be treated as thin plates with regard to their mechanical behavior. Moreover, during sedimentation the top of the sedimentary pile can generally be considered a smooth developable surface in the depositional space, which is then deformed during burial under tectonic events. This idea is used to propose a method for estimating the probability of finding sub-seismic faults in thin geological structures or reservoirs. This paper presents theoretical results relating the curvatures of the top and bottom surfaces of geological structures and reservoirs to these energy terms. Bending and stretching energies are used as structural attributes to predict fracturing and the deformation style.
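The Gaussian and mean curvatures used here as structural attributes can be computed from a gridded horizon via the standard Monge-patch formulas. This finite-difference sketch is a generic illustration, not the paper's method; the grid layout (z indexed as z[y, x] with uniform spacing) is an assumption:

```python
import numpy as np

def curvatures(z, dx):
    """Gaussian (K) and mean (H) curvature of a gridded surface z(x, y)
    via central finite differences and the Monge-patch formulas.
    z is indexed [y, x] on a uniform grid of spacing dx."""
    zy, zx = np.gradient(z, dx)          # first derivatives (axis 0 = y)
    zxy, zxx = np.gradient(zx, dx)       # derivatives of zx
    zyy, _ = np.gradient(zy, dx)         # derivatives of zy
    g = 1.0 + zx**2 + zy**2              # metric term
    K = (zxx * zyy - zxy**2) / g**2
    H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy
         + (1 + zx**2) * zyy) / (2 * g**1.5)
    return K, H
```

On a paraboloid z = (x² + y²)/2 both curvatures equal 1 at the apex, which makes a convenient sanity check before applying the attributes to a real horizon.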
A Gabriel-Delaunay Triangulation of Complex Fractured Media for Multiphase Flow Simulations
By H. Mustapha
Fractured reservoirs are complex domains in which discrete fractures act as constraining boundaries. The discrete fractures are discretized into intersecting edges during grid generation. Delaunay triangulations are often used to represent complex structures; however, a Delaunay triangulation of a fractured medium generally does not conform to the fracture boundaries, and recovering the fracture elements may violate the Delaunay empty-circle criterion (in 2D). Refining the triangulation is not a practical solution in complex fractured media. This paper presents a new approach that combines Gabriel and Delaunay triangulations. The Gabriel edge-empty-circle condition is employed locally to quantify the quality of the fracture edges in 2D. Fracture edges violating the Gabriel criterion are released in a first stage; a quality Delaunay triangulation is then generated subject to the remaining fracture constraints, and the released fracture edges are approximated by edges of the Delaunay triangles. The final representation of the fractures may be slightly different, but a very accurate solution is always maintained. The method is near optimal and is able to generate fine, accurate, good-quality grids. Numerical examples are presented to assess the performance and efficiency of the proposed method.
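The Gabriel edge-empty-circle condition referred to above is easy to state in code. A brute-force 2D check (quadratic in the number of points; the tolerance is an assumption) might look like:

```python
def gabriel_ok(p, q, points, eps=1e-12):
    """Gabriel edge-empty-circle test in 2D: the edge (p, q) passes if
    the disk whose diameter is the segment pq contains no other point."""
    cx, cy = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0   # circle centre
    r2 = ((p[0] - q[0])**2 + (p[1] - q[1])**2) / 4.0    # radius squared
    for x, y in points:
        if (x, y) in (tuple(p), tuple(q)):
            continue                                     # skip endpoints
        if (x - cx)**2 + (y - cy)**2 < r2 - eps:
            return False                                 # point inside disk
    return True
```

In the workflow the abstract describes, fracture edges failing this test would be the ones released before the Delaunay pass.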
Numerical Prediction of Relative Permeability in Water-Wet Naturally Fractured Reservoir Rocks
Authors: S.K. Matthai, S. Bazrafkan, P. Lang and C. Milliotte
The grid-block scale ensemble relative permeability, kri, of fractured porous rock with appreciable matrix permeability is of decisive interest for reservoir simulation and the prediction of production, injector-producer water breakthrough, and ultimate recovery. While the dynamic behaviour of naturally fractured reservoirs (NFR) already provides many clues about (pseudo) kri on the inter-well length scale, such data are difficult to interpret because the exact fracture geometry in the subsurface is unknown. Here we present numerical results from discrete fracture and matrix (DFM) unstructured-grid hybrid FEM-FVM simulation models, predicting the shape of fracture-matrix kri curves. In contrast to earlier work (Matthai et al. 2007; Nick and Matthai, 2011), we also simulate capillary fracture-matrix transfer (CFMT), without relying on the frequently made simplifying assumption that fracture saturation reflects fracture-matrix capillary pressure equilibrium. We also use a novel discretization of saturation which permits jump discontinuities to develop across the fracture-matrix interface. This increased physical realism permits, for the first time, testing the Matthai and Nick (2009) semi-analytical model of the flow-rate dependence of relative permeability ensuing from CFMT. The sensitivity analysis presented here constrains the CFMT-related flow-rate dependence of kri and illustrates how it manifests itself in two geometries of layer-restricted, well-developed fracture patterns mapped in the field. In a companion paper (Lang et al.), we also investigate the dependence of kri on fracture aperture as computed using discrete element analysis for plausible states of in-situ stress. Our results indicate that the fracture-matrix ensemble relative permeability is matched, for fast flow rates, by the semi-analytic model of Matthai and Nick (2009). For slow rates, the strong impact of CFMT leads to significantly different behaviour requiring a more elaborate treatment.
Flows in Discrete Fracture Networks: from Fine Scale Explicit Simulations to Network Models and Reservoir Simulators
Authors: B. Noetinger, M.D. Delorme, A.F. Fourno and N.K. Khvoenkova
Modelling flows in fractured reservoirs is becoming essential, owing to the increasing number of fractured reservoirs to be exploited. Building fluid-flow simulations that keep an explicit Discrete Fracture Network (DFN) model, and thus capture the highly localized nature of flow in fractured reservoirs, is a challenging issue; a successful solution would be of considerable help when setting up EOR schemes. A rigorous workflow from 3D DFN simulations to standard large-scale simulations must be set up. We show that it is possible to build an exact approximation scheme using an original Galerkin projection technique and a quasi-steady-state approximation (simulation times greater than the typical diffusion time over one fracture). At the lowest order, the resulting set of equations has the structure of a resistor/capacitor network. The associated mass and transmissibility matrices can be computed explicitly by solving steady-state boundary-value Laplace problems over each fracture. Treating models with millions of fractures remains impossible in this direct form, so, using geometrical considerations, we have developed accelerated algorithms that handle such cases in an acceptable time on a standard computer. Validations were performed against high-resolution reference calculations. The theoretical aspects and validation tests will be addressed during the presentation.
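The lowest-order reduction to a resistor network can be sketched as a graph Laplacian assembled from edge transmissibilities and solved with Dirichlet pressures. The node numbering and transmissibility values below are hypothetical stand-ins for the per-fracture Laplace solves the abstract describes:

```python
import numpy as np

# Toy "resistor network" reduction of a DFN: nodes stand for fracture
# intersections, and T_ij are edge transmissibilities (assumed
# precomputed from steady-state Laplace solves on each fracture).
edges = {(0, 1): 2.0, (1, 2): 1.0, (1, 3): 1.0}   # hypothetical values
n = 4
A = np.zeros((n, n))
for (i, j), T in edges.items():                    # assemble graph Laplacian
    A[i, i] += T; A[j, j] += T
    A[i, j] -= T; A[j, i] -= T

# Dirichlet BCs: fixed pressure at nodes 0 and 3, solve for the rest.
p = np.zeros(n); p[0], p[3] = 1.0, 0.0
free = [1, 2]
rhs = -A[np.ix_(free, [0, 3])] @ p[[0, 3]]
p[free] = np.linalg.solve(A[np.ix_(free, free)], rhs)
```

Here node 2 is a dead end, so its pressure equals that of node 1; at field scale the same assembly pattern applies, only with far larger sparse matrices.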
Single Porosity Model for Fractured Formations
Authors: P.Yu. Tomin and A.K. Pergament
[…] developed. Analogous to the work of G. Dagan and P. Indelman, an energy criterion is used for the upscaling of absolute permeability: equality of the fine-scale energy to the approximated value corresponding to the tensor coefficients is required for cells containing fractures. The resulting effective tensor is symmetric and physically consistent, since flux approximation is assured. Two classes of methods are applied to determine the pseudo relative permeability tensor. The first is the stationary capillary-equilibrium method, applicable in capillary trapping zones far from wells. Using this method, the relations between the phase and absolute permeability tensors are analysed: sample relative permeability curves are obtained for media with orthotropic and monoclinic symmetries, the influence of connectivity on these functions is shown, and the saturation dependence of the principal-axis directions of the phase permeability tensor is investigated, thereby demonstrating the misalignment of the phase and absolute permeability tensors. The second class is a dynamic pseudo-function approach that uses a multiscale method for water-flooding simulation. The method combines the Fedorenko finite superelement method with the Samarskii support-operator method and belongs to the class of high-resolution methods. The technique allows incorporating fractures of complex geometry and accurately accounts for anisotropy in two-phase flows; unlike dual-parameter models, it does not require connectivity of the fracture system and avoids doubling the number of unknowns. The method has been successfully applied to the simulation of fractured reservoirs in China and West Siberia.
Diagnosis and Quantification of Stress-Sensitive Reservoir Behaviour from Pressure and Rate Transient Data
By R.A. Archer
Classical analytical and numerical techniques for simulating fluid flow in petroleum reservoirs typically assume that permeability is independent of pressure. In naturally fractured and low-permeability systems, the reservoir permeability may depend on the stress state of the reservoir, which makes the diffusivity equation governing single-phase flow nonlinear. Stress-sensitive behaviour is particularly relevant to the development of tight gas and other unconventional resources. This work develops a set of tools to diagnose and quantify stress sensitivity through analysis of transient pressure or flow-rate data. It builds on analytical solutions for radial flow in a stress-sensitive medium presented by Friedel and Voigt (SPE 122768, 2009) and for the linear-flow case presented by Archer (AFMC 17, 2010). The radial-flow solution uses the Boltzmann transform, whereas the linear-flow solution is based on the Cole-Hopf transform; high-resolution numerical solutions complement these analytical solutions. Where appropriate, pseudo-pressures are used to account for the pressure dependence of gas properties. The paper considers both transient pressure and rate solutions and develops a range of type-curve formats to demonstrate how production from stress-sensitive reservoirs differs from conventional reservoirs when plotted in traditional well-test format (log-log plot of pressure and pressure derivative), as a p/z plot (for the gas case), as a rate-versus-cumulative plot, and as "Blasingame" type curves, including the normalised rate, rate-integral, and rate-integral-derivative formats. This suite of tools can be used diagnostically to identify whether stress-sensitive behaviour is occurring, to quantify the errors that may be made in permeability estimates if stress-sensitive behaviour is ignored, and to estimate the impact of stress sensitivity on ultimate recovery from a well.
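The pseudo-pressure mentioned above, m(p) = 2 ∫ p/(μZ) dp, can be tabulated by simple quadrature. This sketch is generic; the property tables passed in are placeholders, not data from the paper:

```python
import numpy as np

def pseudo_pressure(p_grid, mu, Z):
    """Real-gas pseudo-pressure m(p) = 2 * integral of p/(mu*Z) dp,
    evaluated by cumulative trapezoidal quadrature on tabulated
    viscosity mu(p) and compressibility factor Z(p)."""
    integrand = 2.0 * p_grid / (mu * Z)
    # trapezoid rule on each interval, accumulated from p_grid[0]
    dm = np.diff(p_grid) * (integrand[1:] + integrand[:-1]) / 2.0
    return np.concatenate(([0.0], np.cumsum(dm)))
```

For constant μ = Z = 1 this reduces to m(p) = p², which is a convenient check that the quadrature is wired correctly before feeding in real gas-property tables.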
A Spectral Approach to Conditional Simulation
Authors: I.R. Minniakhmetov and A.H. Pergament
[…] covariance matrix representing the correlation between grid points. For large fields, the Cholesky factorization can be computationally expensive. In this work we present an alternative approach based on the spectral representation of a conditional process. It is shown that the covariance of two arbitrary spectral components can be factorized into functions of the corresponding harmonics, in which case the Cholesky decomposition is considerably simplified. The advantages of the presented approach are its accuracy and computational simplicity.
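For context, the baseline this abstract improves on, dense Cholesky simulation with conditioning by kriging of residuals, can be sketched as follows. The covariance model, grid, and observation values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D grid and a Gaussian covariance model (illustrative choice);
# a small nugget keeps the matrix numerically positive definite.
x = np.linspace(0.0, 1.0, 50)
C = np.exp(-(x[:, None] - x[None, :])**2 / 0.1**2) + 1e-6 * np.eye(50)

L = np.linalg.cholesky(C)            # dense factorization: O(n^3)
z = L @ rng.standard_normal(50)      # unconditional realization

# condition on observed values at two grid points by kriging residuals
obs_idx, obs_val = np.array([5, 40]), np.array([1.0, -0.5])
W = np.linalg.solve(C[np.ix_(obs_idx, obs_idx)], C[obs_idx, :])
z_cond = z + W.T @ (obs_val - z[obs_idx])
```

The O(n³) cost of the dense factorization is exactly the bottleneck that motivates the spectral factorization the authors propose.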
Quantitative Use of Different Seismic Attributes in Reservoir Modeling
Authors: T. Feng, J. Skjervheim and G. Evensen
Accurate reservoir models are essential for reservoir management, and optimal use of all available data is crucial. Traditionally, reservoir properties have been conditioned to the dynamic production data from the wells, while seismic data is used only in a qualitative manner; quantitative use of seismic data is sparse and research based. To use seismic data quantitatively in the reservoir-modeling process, an integrated workflow needs to be established such that forward modeling of synthetic seismic data, and preferably the measured seismic data themselves, can be incorporated in the conditioning process. Different modeling regimes, such as reservoir flow simulation, rock physics, and seismic wave propagation, are involved in getting from reservoir flow properties to seismic signals; hence, seismic attributes from different levels can be used in the conditioning. In this work, our focus is to test and demonstrate an integrated workflow for the quantitative use of different seismic attributes in history matching. The history-matching concept is formulated in a Bayesian setting through ensemble-based algorithms, with model uncertainty represented by an ensemble of realizations. A field case study is used to demonstrate the importance of the different seismic attributes in the conditioning process.
Using Two-point Geo-statistics for Reservoir Model Parameter Reduction
By J. Leguijt
An algorithm has been developed to constrain the gridded reservoir models used in assisted history matching with geo-statistical information, and at the same time to reduce the number of variables needed to describe the model. Gridded models, as used in most reservoir-modelling packages, may consist of 10^5 to 10^6 grid blocks. A covariance matrix used to constrain such a model with a variogram (two-point statistics) would consist of 10^10 to 10^12 coefficients, and a direct principal component decomposition of it is beyond the capability of current computer systems. A common way to reduce the number of variables is to use the members of an ensemble of models from a geo-statistical simulation as basis vectors for a subspace. When a history match is obtained with a model constrained to this subspace, the model will have decent-looking continuity behaviour; there is, however, no guarantee that this subspace contains the directions corresponding to the eigenvectors of the covariance matrix with the largest eigenvalues. This can be demonstrated with a simple simulation and is described theoretically by the Wishart distribution. It is possible to construct a set of orthonormal basis vectors that contains the directions corresponding to the eigenvectors of the covariance matrix with significantly large eigenvalues. The number of basis vectors may still be rather large, but it is mainly determined by the size of the model and the range of the variogram. From an eigenvector decomposition of this covariance matrix, a very good approximation of the eigenvectors with significantly large eigenvalues can be obtained. As the small eigenvalues can be neglected, the number of eigenvectors needed to describe the model is approximately 10^2, a significant parameter reduction.
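The truncated eigen-decomposition idea can be sketched on a small 1D grid. The variogram model, its range, and the truncation criterion (99% of total variance) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# exponential variogram/covariance on a small 1D "reservoir" grid
n = 200
x = np.arange(n, dtype=float)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 20.0)   # range ~ 20 cells

# eigendecomposition, sorted by decreasing eigenvalue
vals, vecs = np.linalg.eigh(C)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

# keep the leading modes that capture 99% of the total variance
k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), 0.99)) + 1

# a model realization is now described by k coefficients instead of n
basis = vecs[:, :k] * np.sqrt(vals[:k])
m = basis @ np.random.default_rng(1).standard_normal(k)
```

As the abstract argues, the retained mode count k is governed by the model size and the variogram range rather than by the size of any simulated ensemble.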
Numerical Comparison of Ensemble Kalman Filter and Randomized Maximum Likelihood
Authors: K. Fossum, T. Mannseth, D. Oliver and H.J. Skaug
In recent years, traditional history-matching methods have been increasingly challenged by sequential data-assimilation techniques such as the ensemble Kalman filter (EnKF). There are strong similarities between EnKF and the non-sequential method of randomized maximum likelihood (RML). For a linear forward model the two methods are equivalent; for a nonlinear forward model some differences arise (in addition to sequential versus batch data assimilation): RML can be iterative while EnKF is not, and RML uses realization-specific gradients/sensitivities to change a model realization whereas EnKF uses the same covariance for all realizations. We assess the sampling capabilities of RML and EnKF for a weakly nonlinear forward model. Results are compared to a Markov chain Monte Carlo (McMC) method, which samples correctly from the posterior. Our aim is to clarify which of the above-mentioned differences between RML and EnKF has the biggest impact on the sampling capabilities. We apply the methods to a two-phase reservoir model small enough to be suitable for McMC. RML and EnKF are assessed by comparing their history-matching capabilities and the properties of their posterior distributions to those obtained with McMC.
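The EnKF side of the comparison, a single stochastic analysis step in which every member shares one Kalman gain, can be sketched as follows (a linear observation operator and uncorrelated observation errors are assumed; this is not the authors' code):

```python
import numpy as np

def enkf_update(E, d_obs, H, R_std, rng):
    """Stochastic EnKF analysis step.

    E      : (n_state, n_ens) forecast ensemble
    d_obs  : (n_obs,) observed data
    H      : (n_obs, n_state) linear observation operator
    R_std  : observation error std (scalar, uncorrelated errors)

    Every member is updated with the SAME Kalman gain built from the
    ensemble covariance, in contrast to RML, which uses
    realization-specific sensitivities.
    """
    n = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)          # ensemble anomalies
    Y = H @ A                                      # anomalies in data space
    K = (A @ Y.T) @ np.linalg.inv(
        Y @ Y.T + (n - 1) * R_std**2 * np.eye(len(d_obs)))
    D = d_obs[:, None] + R_std * rng.standard_normal((len(d_obs), n))
    return E + K @ (D - H @ E)                     # perturbed-obs update
```

In the linear-Gaussian limit this update samples the correct posterior, which is precisely why the abstract uses weakly nonlinear models to expose where EnKF and RML diverge.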
Smooth Multi-scale Parameterization for Integration of Seismic and Production Data Using Second-generation Wavelets
Authors: T. Gentilhomme, T. Mannseth, D. Oliver, G. Caumon and R. Moyen
In this paper, we use the second-generation wavelet transform (SGW) as a smooth multi-scale parameterization technique for history matching of seismic-derived models using an ensemble-based optimization method (batch-EnRML). The construction of second-generation wavelets is presented and their advantages over first-generation wavelets are discussed. These wavelets are then applied to a realistic 3D faulted reservoir model, and their ability to represent the model correctly at a large compression ratio is demonstrated. Finally, using the SGW re-parameterization, we lay the basis for a new adaptive multi-scale inversion method which aims at limiting the increase of the seismic-data mismatch of the seismic-derived realizations by selecting relevant parameters. The efficiency of the method is discussed through a 2D synthetic example.
Distance Parameterization for Efficient Seismic History Matching with the Ensemble Kalman Filter
Authors: O. Leeuwenburgh and R. Arts
The Ensemble Kalman Filter (EnKF), in combination with travel-time parameterization, provides a robust and flexible method for quantitative multi-model history matching to time-lapse seismic data. A disadvantage of the travel-time parameterization is that it requires simulating the models beyond the update time. A new distance parameterization is proposed for fronts or, more generally, for isolines of arbitrary seismic attributes, which circumvents the need for additional simulation time. An accurate Fast Marching Method for the solution of the Eikonal equation on Cartesian grids is used to calculate distances between observed and simulated fronts, which are subsequently used as innovations in the EnKF. Experiments demonstrate the functioning of the method in synthetic 1D and 2D cases that include uncertain model properties and merging or multiple secondary fronts; results are compared with those obtained from direct use of saturation data. The proposed algorithm significantly reduces the amount of data while still capturing the essential information, removes the need for seismic inversion when only the oil-water front is identified, and produces a more favorable distribution of simulated data, leading to improved functioning of the EnKF.
Preventing Ensemble Collapse and Preserving Geostatistical Variability Across the Ensemble with the Subspace EnKF
One of the key issues with the EnKF is the well-known problem of ensemble collapse, which is particularly evident for small ensembles and results in an artificial reduction of variability across the ensemble. A second, more important problem is that the EnKF is theoretically appropriate only if all ensemble members belong to the same multi-Gaussian random field (geological/geostatistical model). This matters because for most real fields we have more than one geological scenario, and ideally we would like to obtain one or more history-matched models for each scenario. In this work, we propose the subspace EnKF to alleviate both problems. The basic idea is to constrain the different ensemble members to different subspaces of the same or different random fields. This is accomplished by parameterizing the random fields and modifying the EnKF formulation with the gradients of the parameterizations. The subspace EnKF prevents ensemble collapse, providing a better quantification of uncertainty, and, more importantly, retains key geological characteristics of the initial ensemble even when each ensemble member belongs to a different geological model. The approach is demonstrated on a synthetic example with a multi-Gaussian permeability field.
Multi-objective Scheme of Estimation of Distribution Algorithm for History-Matching
Authors: A. Abdollahzadeh, A. Reynolds, M. Christie, D. Corne, G. Williams and B. Davies
History matching is one of the key challenges of efficient reservoir management. In history matching, evolutionary algorithms are used to explore the global parameter search space for multiple well-fitting models; general critiques of these algorithms include high computational demands and low diversity of the resulting models. Estimation of distribution algorithms are a class of evolutionary algorithms in which new candidate solutions are obtained by sampling a probability distribution built from the population. In previous work, we studied estimation of distribution algorithms for history matching and showed that good results can be obtained with a single misfit function. Multi-objective optimisation algorithms use the concepts of dominance and the Pareto front to find a set of optimal trade-offs between the competing objectives of minimising misfit. In this paper, we apply a multi-objective estimation of distribution algorithm to history matching of, first, a well-known synthetic reservoir simulation model and, second, a real North Sea reservoir. We show that higher solution diversity, and in some cases better-quality solutions, can be achieved by using multiple objectives. In addition, multi-objective optimisation algorithms are less sensitive to parameter tuning and provide trade-offs between objectives that give more insight into the history-matching problem.
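The dominance and Pareto-front concepts the abstract relies on reduce to a few lines (minimization convention assumed; a brute-force filter, fine for the small objective sets typical of history matching):

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no
    worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(pop):
    """Non-dominated subset of a list of objective (misfit) vectors."""
    return [p for p in pop
            if not any(dominates(q, p) for q in pop if q != p)]
```

In the multi-objective scheme described above, each candidate model is scored on several misfit components and selection favours the non-dominated set rather than a single aggregated misfit.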
Data Assimilation Using the EnKF for 2-D Markov Chain Models
Authors: Y. Zhang, D.S. Oliver, Y. Chen and H.J. Skaug
The ensemble Kalman filter (EnKF) is well suited to updating Gaussian variables and can be used to update continuous non-Gaussian variables either directly or after transformation. Categorical variables such as facies type are much more difficult to history match, especially when the variables have complex transitional dependencies. In a previous paper we described a method for updating third-order Markov chain models in one dimension using the EnKF, whose efficiency partially depends on the Viterbi algorithm, which is not directly applicable in higher dimensions. In this paper, we develop a data-assimilation method for updating categorical models using an approximation to the joint probability of facies types (Allard et al. 2011) that can be used in a sequential algorithm without iteration. The ensemble of realizations after updating can be used to efficiently approximate the likelihood of the variables, while the categorical model provides an approximation to the transition probabilities. We demonstrate the approach by conditioning two synthetic channel models with two facies types to both linear and nonlinear observations. Our results show that the distribution of facies after data assimilation honors the data much better than before assimilation, and that the transitions among facies are consistent with the prior model.
An Iterative Version of the Adaptive Gaussian Mixture Filter
Authors: A.S. Stordal and R. Lorentzen
The adaptive Gaussian mixture filter (AGM) was introduced as a robust filter technique for large-scale applications and as an alternative to the well-known ensemble Kalman filter (EnKF). The bias of AGM is determined by two parameters: an adaptive weight parameter and a predetermined bandwidth parameter that decides the size of the linear update. The bandwidth parameter must often be chosen significantly different from zero in order to make linear updates large enough to match the data, at the expense of bias in the estimates. The iterative AGM introduced here takes advantage of the fact that history matching is usually a parameter-estimation problem: if the prior distribution of the parameters is close to the posterior distribution, it is possible to match the observations with a small bandwidth parameter, and hence the bias of the filter solution is small. To reach this scenario, we iteratively run the AGM through the data history with a very small bandwidth, creating a new prior distribution from the updated samples after each iteration. After a few iterations, nearly all samples from the previous iteration match the data and the above scenario is achieved.
Ensemble Kalman Filter Data Assimilation to Condition a Real Reservoir Model to Well Test Observations
By A. Abadpour
Recently, significant effort has been devoted to characterizing reservoir models using the ensemble Kalman filter (EnKF) as a data-assimilation technique. EnKF has proved to be a powerful tool that can deal with almost any sort of measurement, handle different types of uncertainty in the simulation models, and remain affordable from a computational point of view. Lately the technique has been deployed to assimilate pressure-transient and production-logging data in order to update permeabilities and estimate layer skin factors. In the present paper, the EnKF methodology is used to characterize an offshore reservoir model against well-test pressure data, as well as the pressure derivative, adjusting cell-by-cell petrophysical properties and the skin factor in each well perforation. The results show that using the derivative observations to calibrate the uncertain parameters improves the quality of the match, not only for the predicted derivative but also for forecasting the pressure measurements. The importance of assimilating skin, as well as recalculating well connection factors, is also revealed. Moreover, a new distance-based localisation scheme based on the well drainage zone is introduced to help reduce unnecessary changes in the model.