ECMOR XI - 11th European Conference on the Mathematics of Oil Recovery
- Conference date: 08 Sep 2008 - 11 Sep 2008
- Location: Bergen, Norway
- ISBN: 978-90-73781-55-9
- Published: 08 September 2008
Continuous Darcy-flux Approximations with Full Pressure Support for Structured and Unstructured Grids in 3D
Authors: H. Zheng and M.G. Edwards
This paper presents the development of families of control-volume distributed flux-continuous schemes with full control-volume surface pressure continuity for the general three-dimensional pressure equation. Most effort to date has focused on schemes with pointwise continuity, where the local continuity conditions are relatively compact. While full surface pressure continuity requires an increase in the number of local continuity conditions, the new schemes prove to be relatively robust. Families of full pressure continuity schemes are developed here for structured and unstructured grids in three dimensions. The resulting Darcy flux approximations are applied to a range of three-dimensional test cases that verify consistency of the schemes. Convergence tests of the three-dimensional families of schemes are presented for a range of quadrature points. M-matrix conditions are presented and the schemes are tested for monotonicity. The full pressure continuity schemes are shown to be beneficial compared with earlier pointwise continuity schemes [1], both in terms of improved monotonicity and convergence. The schemes are applied to challenging three-dimensional test cases, including heterogeneity, for both structured and unstructured grids.
TECHNICAL CONTRIBUTIONS: Extension of full control-volume surface pressure continuity flux-continuous schemes to structured and unstructured grids in three dimensions. M-matrix and monotonicity analysis of the three-dimensional methods. Benefits are demonstrated for challenging three-dimensional test cases including heterogeneity on structured and unstructured grids.
A General Framework for Non Conforming Approximations of the Single Phase Darcy Equation
Authors: D. Di Pietro, L. Agelas, R. Eymard and R. Masson
In this paper we present an abstract analysis framework for nonconforming approximations of heterogeneous and anisotropic elliptic operators on general 2D and 3D meshes. The analysis applies to discontinuous Galerkin methods, to the popular MPFA O-method, as well as to hybrid finite volume methods. A number of examples are provided in the paper. The guidelines of the analysis can be summarized as follows: (i) each method is re-written in weak formulation by introducing two discrete gradient operators; penalty terms may be required for non-$H^1$-conformal spaces to ensure coercivity; (ii) a discrete $H^1$-norm is introduced such that a discrete version of the Rellich theorem holds; (iii) an a priori estimate on the discrete solution in the discrete $H^1$-norm is derived using the coercivity of the bilinear form; (iv) the convergence of the discrete problem to the continuous one is eventually deduced, thus proving the convergence of the approximation. In order for the above analysis to hold, the gradient operator applied to the discrete solution should be consistent, whereas the one applied to the test function should be weakly convergent in $L^2$. Depending on the method, further assumptions on the mesh may be required. This analysis framework easily extends to non-linear problems like the ones encountered in the study of multi-phase Darcy flows, and it allows one to weaken regularity assumptions on the coefficients of the problem as well as on the exact solution. From a practical viewpoint, its main interest lies in the possibility of analyzing and designing new methods as well as improving existing ones.
On Efficient Implicit Upwind Schemes
Authors: J.R. Natvig and K.A. Lie
Although many advanced methods have been devised for the hyperbolic transport problems that arise in reservoir simulation, the most widely used method in commercial simulators is still the implicit upwind scheme. The robustness, simplicity and stability of this scheme make it preferable to more sophisticated schemes for real reservoir models with large variations in flow speed and porosity. However, the efficiency of the implicit upwind scheme depends on the ability to solve large systems of nonlinear equations effectively. To advance the solution one time step, a large nonlinear system needs to be solved. This is a highly non-trivial matter and convergence is not guaranteed for any starting guess. This effectively imposes limitations on the practical magnitude of time steps as well as on the number of grid blocks that can be handled. In this paper, we present an idea that allows the implicit upwind scheme to become highly efficient. Under mild assumptions, it is possible to compute a reordering of the equations that renders the system of nonlinear equations (block) lower triangular. Thus, the nonlinear system may be solved one (or a few) equations at a time, increasing the efficiency of the implicit upwind scheme by orders of magnitude. Similar ideas can also be used for high-order discontinuous Galerkin discretizations. To demonstrate the power of these ideas we show results and timings for incompressible and weakly compressible transport in real reservoir models.
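To illustrate the reordering idea, the sketch below orders the cells of a discrete flow field topologically (each cell after its upstream neighbours) and then solves the implicit upwind saturation update one scalar Newton problem at a time. All names and the flux data structure are illustrative, not from the paper, and co-current flow (no cycles) is assumed.

```python
import numpy as np

def reorder_cells(flux, n):
    """Topological order so every cell follows its upstream neighbours.
    `flux[(i, j)] > 0` means flow from cell i to cell j."""
    downstream = {i: [] for i in range(n)}
    indeg = np.zeros(n, dtype=int)
    for (i, j), q in flux.items():
        if q > 0:
            downstream[i].append(j)
            indeg[j] += 1
    stack = [i for i in range(n) if indeg[i] == 0]
    order = []
    while stack:
        i = stack.pop()
        order.append(i)
        for j in downstream[i]:
            indeg[j] -= 1
            if indeg[j] == 0:
                stack.append(j)
    return order  # assumes no cycles (co-current flow)

def implicit_upwind_step(s_old, flux, pv, dt, f, df, order):
    """March through the reordered cells, solving one scalar Newton
    problem per cell: pv/dt*(s - s_old) + q_out*f(s) - inflow = 0."""
    s = s_old.astype(float)
    for i in order:
        inflow = sum(q * f(s[j]) for (j, k), q in flux.items()
                     if k == i and q > 0)   # upstream cells already solved
        q_out = sum(q for (j, k), q in flux.items() if j == i and q > 0)
        si = s[i]
        for _ in range(25):
            r = pv[i] / dt * (si - s_old[i]) + q_out * f(si) - inflow
            if abs(r) < 1e-10:
                break
            si -= r / (pv[i] / dt + q_out * df(si))
        s[i] = min(max(si, 0.0), 1.0)
    return s
```

Because each unknown depends only on already-computed upstream values, the global nonlinear system collapses into a sequence of scalar problems, which is the source of the orders-of-magnitude speedup claimed in the abstract.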
Spatially-varying Compact Multi-point Flux Approximations for 3-D Adapted Grids with Guaranteed Monotonicity
Authors: J.V. Lambers and M.G. Gerritsen
We propose a new single-phase local transmissibility upscaling method for adapted grids in 3-D domains that uses spatially varying and compact multi-point flux approximations (MPFA), based on the VCMP method previously introduced for 2-D Cartesian grids. For each cell face in the coarse upscaled grid, we create a local fine grid region surrounding the face on which we solve three generic local flow problems. The multi-point stencils used to calculate the fluxes across coarse grid cell faces involve up to ten neighboring pressure values. They are required to honor the three generic flow problems as closely as possible while maximizing compactness and ensuring that the flux approximation is as close as possible to being two-point. The resulting MPFA stencil is spatially varying and reduces to two-point expressions in cases without full-tensor anisotropy. Numerical tests show that the method significantly improves upscaling accuracy as compared to commonly used local methods and also compares favorably with a local-global upscaling method. We also present a corrector method that adapts the stencils locally to guarantee that the resulting pressure matrix is an M-matrix. This corrector method is needed primarily for cases where strong permeability contrasts are misaligned with the grid locally. The corrector method has negligible impact on the flow accuracy. Finally, we show how the computed MPFA can be used to guide adaptivity of the grid, thus allowing rapid, automatic generation of grids that can account for difficult geologic features such as faults and fractures and efficiently resolve fine-scale features such as narrow channels.
Analytical Estimation of Advanced IOR Performance
Authors: R.A. Berenblyum, A.A. Shchipanov, E.M. Reich, L. Surguchev and K. Potsch
In recent years of high oil prices, many companies have turned their attention to advanced IOR/EOR methods, including cyclic water flooding, miscible gas and steam injection. This paper summarizes the experience accumulated at the International Research Institute of Stavanger over several years of using an in-house analytical pre-screening tool for the evaluation of possible IOR/EOR strategies. Analytical models of cyclic water, miscible gas and steam injection are presented. Strong and weak points of these methods are analyzed. The results of field case studies both on the Norwegian Continental Shelf and outside Norway are presented and discussed. We conclude that the analytical models may be successfully used for pre-screening of IOR methods. The analytical methods can, in most cases, correctly handle the behavior of the reservoir system as a function of the critical parameters of the IOR process. However, the final decision to apply a certain IOR procedure, and its optimization, should be made using full-scale reservoir models. The analytical methods can significantly decrease the decision-making time by either a) decreasing the number of computationally expensive full-scale simulations for mature developed reservoirs, or b) quickly screening new prospects where reservoir knowledge is limited.
Shortest Path Algorithm for Pore-scale Simulation of Water-alternating-gas Injection
Authors: M.I.J. van Dijke, J.E. Juri and K.S. Sorbie
For simulation of three-phase flow processes, such as water-alternating-gas (WAG) injection, accurate descriptions of the three-phase capillary pressures and relative permeabilities as functions of the phase saturations are essential. Instead of deriving these functions from experiments, which is very difficult, they may be obtained from pore-scale network simulations. During multiple injection processes, especially under mixed-wet conditions, isolated clusters of the three phases occur. These clusters may still be mobilized, even under the assumption of capillary-dominated flow, as part of so-called multiple displacement chains of adjacent clusters stretching from inlet to outlet of the model. Determining the most favourable displacement chain requires an efficient shortest path algorithm. For the present problem the distances, or costs, are the capillary entry pressures between the various phase clusters. Because these entry pressures can also be negative, so-called negative cost cycles arise, which invalidate traditional algorithms. Instead, we have implemented an efficient shortest path algorithm with negative cycle detection. If negative cycles arise, the corresponding cyclic displacement chains are carried out before any linear displacement from inlet to outlet. Negative cost cycles correspond to spontaneous displacements, for which the prevailing invading phase pressure is higher than required. Additionally, and probably even more significantly, the shortest path algorithm determines in principle the pressures of all phase clusters in the network. These pressures are in turn used in the accurate calculation of the three-phase entry pressures, as well as the film and layer volumes and conductances in pore corners. Some WAG network simulations in mixed-wet media have been carried out using the new algorithm to demonstrate the occurrence of long displacement chains and the much improved efficiency, also for increasing network sizes. A significant number of negative cycles occurred, but the corresponding displacement chains were short. Additionally, we have analysed the obtained values of the cluster pressures.
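A generic sketch of the negative-cycle-aware search referred to above is the Bellman-Ford algorithm with an extra detection pass. Here edge costs stand in for capillary entry pressures between clusters; all names are illustrative rather than the authors' implementation.

```python
def bellman_ford(n, edges, source):
    """Shortest paths allowing negative costs. `edges` holds (u, v, cost)
    tuples. Returns ((dist, pred), None) on success, or (None, cycle)
    with the nodes of a negative cost cycle."""
    INF = float("inf")
    dist, pred = [INF] * n, [None] * n
    dist[source] = 0.0
    for _ in range(n - 1):
        changed = False
        for u, v, c in edges:
            if dist[u] + c < dist[v]:
                dist[v], pred[v] = dist[u] + c, u
                changed = True
        if not changed:
            break
    # Detection pass: any edge that still relaxes leads back to a
    # negative cycle (a spontaneous, cyclic displacement chain).
    for u, v, c in edges:
        if dist[u] + c < dist[v]:
            pred[v] = u
            x = v
            for _ in range(n):   # walk back far enough to land inside the cycle
                x = pred[x]
            cycle, y = [x], pred[x]
            while y != x:
                cycle.append(y)
                y = pred[y]
            return None, cycle
    return (dist, pred), None
```

In the displacement setting, a returned cycle would be executed as a cyclic displacement before any inlet-to-outlet chain, matching the order of operations described in the abstract.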
Fractional-flow Theory Applied to Non-Newtonian IOR Processes
Authors: W.R. Rossen, R.T. Johns, K.R. Kibodeaux, H. Lai and N. Moradi Tehrani
The method of characteristics, or fractional-flow theory, is extremely useful in understanding complex IOR processes and in calibrating simulators. One limitation has been its restriction to Newtonian rheology except in rectilinear flow. Its inability to deal with non-Newtonian rheology in polymer and foam IOR has been a serious limitation. We extend fractional-flow methods for two-phase flow to non-Newtonian fluids in one-dimensional cylindrical flow, where rheology changes with radial position r. The fractional-flow curve is then a function of r. At each position r the "injection" condition is the result of the displacement upstream; one can plot the movement of characteristics and shocks downstream as r increases. We show examples for IOR with non-Newtonian polymer and foam. For continuous injection of foam, one can map the 1D displacement in time and space exactly for the non-Newtonian foam bank and approximately, but with great accuracy, for the gas bank ahead of the foam. The fractional-flow solutions are more accurate than finite-difference simulations on a comparable grid and can be used to calibrate simulators. Fractional-flow methods also allow one to calculate changing mobility near the injection well to greater accuracy than simulation. For polymer and SAG (alternating-slug) foam injection, characteristics and shocks collide, making the fractional-flow solution complex. Nonetheless, one can solve exactly for changing mobility near the well, again to greater accuracy than with conventional simulation. For polymer solutions that are Newtonian at high and low shear rates but non-Newtonian in between, the non-Newtonian nature of the displacement depends on injection rate. The fractional-flow method extended to non-Newtonian flow can be useful both for its insights for scale-up of laboratory experiments and to calibrate computer simulators involving non-Newtonian IOR. It can also be an input to streamline simulations.
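As a sketch of where the radial dependence enters, consider the water balance for incompressible two-phase radial flow at total rate q through a layer of thickness h (standard fractional-flow notation; the particular non-Newtonian mobility model used in the paper is not reproduced here):

```latex
\[
2\pi r h\,\phi\,\frac{\partial S_w}{\partial t}
  + q\,\frac{\partial f_w}{\partial r} = 0 ,
\qquad
\left.\frac{dr}{dt}\right|_{S_w}
  = \frac{q}{2\pi r h\,\phi}\,
    \left.\frac{\partial f_w}{\partial S_w}\right|_{r} .
\]
```

For a non-Newtonian phase the apparent viscosity depends on the local shear rate, which scales roughly with the Darcy velocity $q/(2\pi r h)$, so $f_w = f_w(S_w, r)$: the fractional-flow curve must be re-evaluated at each radius, which is precisely the r-dependence exploited above.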
Fractional-flow Theory of Foam Displacements with Oil
Authors: M. Namdar-Zanganeh, T. La Force, S.I. Kam, T.L.M. van der Heijden and W.R. Rossen
Fractional-flow theory has proven useful for understanding the factors that control foam displacements and as a benchmark against which to test foam simulators. Most applications of fractional-flow methods to foam have excluded oil. Recently, Mayberry and Kam (SPE 100964) presented fractional-flow solutions for foam injection with a constant effect of oil on the foam. We extend fractional-flow methods to foam displacements with oil, using the effects of oil and water saturations on foam as represented in the STARS simulator. There can be abrupt shifts in the composition paths at the limiting water and oil saturations for foam stability. In the immiscible three-phase SAG displacements examined, if foam collapses at the initial oil saturation in the reservoir, there is a very-small-velocity shock from the injected condition to complete foam collapse. The displacement is nearly as inefficient as if no foam were present at all. It does not matter in these cases whether foam is weakened by low water saturation. The displacement is efficient, however, if foam is unaffected by oil but weakened at low water saturation. These results may reflect our foam model, where foam is only partially destroyed at low water saturations but is completely destroyed by high oil saturation. Two idealized models for three-phase displacements can be represented as two-phase displacements with chemical or miscible shocks: a model for a first-contact miscible gas flood with foam suggests an optimal water fraction in foam that puts the gas front just slightly ahead of the foam (surfactant) front; an idealized model of a surfactant flood with foam for mobility control suggests it is important to inject a sufficiently high water fraction in the foam that the gas front stays behind the surfactant front as the flood proceeds. We present simulations to verify the solutions obtained with fractional-flow methods.
Double-families of Quasi-Positive Flux-continuous Finite-volume Schemes on Structured and Unstructured Grids
Authors: M.G. Edwards and H. Zheng
This paper focuses on flux-continuous pressure equation approximation for strongly anisotropic media. Previous work on families of flux-continuous schemes for solving the general geometry-permeability tensor pressure equation has focused on single-parameter families, e.g. [1]. These schemes have been shown to remove the O(1) errors introduced by standard two-point flux reservoir simulation schemes when applied to full-tensor flow approximation. Improved convergence of the schemes has also been established for specific quadrature points [2]. However, these schemes have conditional M-matrices depending on the strength of the cross-terms [1]. When applied to cases involving full tensors arising from strongly anisotropic media, these schemes can fail to satisfy the maximum principle. Loss of solution monotonicity then occurs at high anisotropy ratios, causing spurious oscillations in the numerical pressure solution. New double-family flux-continuous locally conservative schemes are presented for the general geometry-permeability tensor pressure equation. The new double-family formulation is shown to expand on the current single-parameter range of existing schemes that have an M-matrix. While it is shown that a double-family formulation does not lead to an unconditional M-matrix scheme, an analysis is presented that classifies the sensitivity of the new schemes with respect to monotonicity and double-family quadrature. Convergence performance with respect to a range of double-family quadrature points is presented for some well-known test cases.
Adaptively-localized-continuation-Newton; Reservoir Simulation Nonlinear Solvers that Converge All the Time
Authors R.M. Younis, H. Tchelepi and K. AzizGrowing interest in understanding, predicting, and controlling advanced oil recovery methods emphasizes the importance of numerical methods that exploit the nature of the underlying physics. The Fully Implicit Method offers unconditional stability in the sense of discrete approximations. This stability comes at the expense of transferring the inherent physical stiffness onto the coupled nonlinear residual equations which need to be solved at each time-step. Current reservoir simulators apply safe-guarded variants of Newton’s method, and often can neither guarantee convergence, nor provide estimates of the relation between convergence rate and time-step size. In practice, time-step chops become necessary, and are guided heuristically. With growing complexity, such as in thermally reactive compositional models, this can lead to substantial losses in computational effort, and prohibitively small time-steps. We establish an alternate class of nonlinear iteration that both converges, and associates a time-step to each iteration. Moreover, the linear solution process within each iteration is performed locally. By casting the nonlinear residual for a given time-step as an initial-value-problem, we formulate a solution process that associates a time-step size with each iteration. Subsequently, no iterations are wasted, and a solution is always attainable. Moreover, we show that the rate of progression is as rapid as a standard Newton counterpart whenever it does converge. Finally, by exploiting the local nature of nonlinear waves that is typical to all multiphase problems, we establish a linear solution process that performs computation only where necessary. That is, given a linear convergence tolerance, we identify the minimal subset of solution components that will change by more than the specified tolerance. Using this a priori criterion, each linear step solves a reduced system of equations. Several challenging large-scale simulation examples are presented, and the results demonstrate the robustness of the proposed method as well as its performance.
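The flavour of the continuation idea, associating a time-step size with every iterate so that no iteration is wasted, can be sketched as a time-step ramp in which each intermediate state is itself a converged solution for a smaller step. This is a loose caricature under that single assumption, not the paper's adaptively localized algorithm; all names are illustrative.

```python
import numpy as np

def continuation_newton(residual, jacobian, x0, dt_target,
                        dt0=1e-3, grow=2.0, tol=1e-8, max_newton=20):
    """Solve R(x; dt_target) = 0 by marching through intermediate
    pseudo time-steps; every outer pass leaves a usable solution for
    *some* time-step, so effort is never thrown away."""
    x, dt = np.asarray(x0, dtype=float).copy(), dt0
    while True:
        dt = min(grow * dt, dt_target)      # ramp toward the target step
        for _ in range(max_newton):
            r = residual(x, dt)
            if np.linalg.norm(r) < tol:
                break
            x -= np.linalg.solve(jacobian(x, dt), r)
        if dt == dt_target:
            return x
```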
Study of a New Refinement Criterion for Adaptive Mesh Refinement in SAGD Simulation
Authors: M. Mamaghani, C. Chainais and G. Enchery
SAGD (Steam-Assisted Gravity Drainage) is an enhanced oil recovery process for heavy oils and bitumens. To simulate this thermal process and obtain precise forecasts of oil production, very small cells have to be used because the flow interface is thin. But the use of fine cells throughout the reservoir is expensive in terms of CPU time. To reduce the computation time one can use an adaptive mesh refinement technique, which uses a refined grid at the flow interface and coarser cells elsewhere. The first numerical tests of SAGD modelling, using a criterion based on two threshold temperatures, showed that this adaptive-mesh-refinement technique could significantly reduce the number of cells [LLR03]. Nevertheless, the refined zone remains too wide. In this work, we introduce a new criterion for the use of adaptive mesh refinement in SAGD simulation. This criterion is based on the work in [KO00] on a posteriori error estimators for finite volume schemes for hyperbolic equations. It makes it possible to follow the flow interface more precisely. Through numerical experiments we show that the new criterion further decreases the number of cells (and thus CPU time) while maintaining good accuracy in the results.
Mimetic MPFA
Authors: A.F. Stephansen and R.A. Klausen
The analysis of the multi-point flux approximation (MPFA) method has so far relied on the possibility of seeing it as a mixed finite element method, for which convergence is then established. This type of analysis has been successfully applied to triangles and quadrilaterals, lately also in the case of rough meshes. Another well-known conservative method, the mimetic finite difference method, has also traditionally relied on the analogy with a mixed finite element method to establish convergence. Recently, however, a new type of analysis proposed by Brezzi, Lipnikov and Shashkov, cf. [3], makes it possible to show convergence of the mimetic method on a general polyhedral mesh. We propose to formulate the MPFA O-method in a mimetic finite difference framework, in order to extend the proof of convergence to polyhedral meshes. The general nonsymmetry of the MPFA method and its local explicit flux, found through a splitting of the flux, are challenges that require special attention. We discuss various qualities needed for a structured analysis of MPFA, and the effect of the lost symmetry.
Streamline Tracing on General Tetrahedral and Hexahedral Grids in the Presence of Tensor Permeability Coefficients
Authors: S.F. Matringe, R. Juanes and H.A. Tchelepi
Modern reservoir simulation models often involve geometrically complex (distorted or unstructured) three-dimensional grids populated with full-tensor permeability coefficients. The solution of the flow problem on such grids demands the use of accurate spatial discretizations such as the multipoint flux approximation (MPFA) finite volume schemes or mixed finite element methods (MFEM). The extension of the streamline method to modern grids therefore requires a streamline tracing algorithm adapted to these advanced discretizations. In this paper, we present a new algorithm to trace streamlines from MPFA or MFEM on general tetrahedral or hexahedral grids and in the presence of full-tensor permeabilities. Our approach was already used successfully in 2D, and this paper presents the extension to 3D of the previous work by the authors. The method is based on the mathematical framework of MFEM. Since MPFA schemes have recently been interpreted as MFEM, our streamline tracing algorithm is also applicable to the MPFA finite volume methods. Using the mixed finite element velocity shape functions, the velocity field is reconstructed in each grid cell by direct interpolation of the MFEM velocity unknowns or MPFA subfluxes. An integration of the velocity field to arbitrary accuracy yields the streamlines. The method is the natural extension of Pollock's (1988) tracing method to general tetrahedral or hexahedral grids. The new algorithm is more accurate than the methods developed by Prévost (2003) and Haegland et al. (2007) to trace streamlines from MPFA solutions, and avoids the expensive flux postprocessing techniques used by both methods. After a description of the theory and implementation of our streamline tracing method, we test its performance on a full-field reservoir model discretized by an unstructured grid and populated with heterogeneous tensor coefficients.
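For reference, Pollock's original step on a single axis-aligned cell, which the algorithm above generalizes to distorted hexahedra and tetrahedra through MFEM velocity shape functions, can be sketched as follows. Names are illustrative; velocities are interstitial, vary linearly per direction, and the cell is assumed not to be fully stagnant.

```python
import numpy as np

def pollock_step(p, lo, hi, v_lo, v_hi):
    """Trace a particle from point `p` across one cell [lo, hi], given
    face velocities v_lo/v_hi per direction. Returns exit point, time."""
    p, lo, hi = (np.asarray(a, dtype=float) for a in (p, lo, hi))
    v_lo, v_hi = np.asarray(v_lo, dtype=float), np.asarray(v_hi, dtype=float)
    g = (v_hi - v_lo) / (hi - lo)          # per-direction velocity gradient
    v = v_lo + g * (p - lo)                # velocity at the particle
    dt = np.inf
    for d in range(p.size):
        if v[d] == 0.0:
            continue                        # stagnant along this axis
        ve = v_hi[d] if v[d] > 0 else v_lo[d]   # candidate exit-face velocity
        if abs(g[d]) < 1e-14:
            t = ((hi[d] if v[d] > 0 else lo[d]) - p[d]) / v[d]
        elif ve / v[d] > 0:
            t = np.log(ve / v[d]) / g[d]    # Pollock's logarithmic exit time
        else:
            continue                        # flow reverses before the face
        dt = min(dt, t)
    q = np.empty_like(p)
    for d in range(p.size):                 # analytic update to the exit time
        if abs(g[d]) > 1e-14:
            q[d] = lo[d] + (v[d] * np.exp(g[d] * dt) - v_lo[d]) / g[d]
        else:
            q[d] = p[d] + v[d] * dt
    return q, dt
```

Chaining such steps cell by cell integrates the streamline exactly for the piecewise-linear velocity; the MFEM-based method above plays the same game with more general shape functions.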
On Front Tracking for Compressible Flow
Authors: H.M. Nilsen and K.A. Lie
Streamline simulation is particularly efficient for incompressible two-phase flow, for which one can use front tracking to solve the 1-D transport problems along streamlines. Here we investigate the extension of this method to compressible (and immiscible) flow and discuss some of the difficulties involved, in particular the choices one has in writing the 1-D transport equation(s). Our study is motivated by the simulation of CO2 injection, and we therefore also develop methods that are particularly suited for solving compressible flow where one phase is incompressible. Altogether, we present four front-tracking methods that are based on a combination of solving ordinary Riemann problems and Riemann problems with discontinuous flux.
Adaptive Control for Solver Performance Optimization in Reservoir Simulation
Authors: I.D. Mishev, N. Fedorova, S. Terekhov, B.L. Beckner, A.K. Usadi, M.B. Ray and O. Diyankov
Run-time performance of a reservoir simulator is significantly impacted by the selection of the linear solver preconditioner, the iterative method and their adjustable parameters. Choosing the best solver algorithm and its optimal parameters is a difficult problem that even experienced simulator users cannot adequately solve by themselves. The typical user action is to use the default solver settings, or a small perturbation of them, which are frequently far from optimal; consequently the performance may deteriorate. There has been extensive research into automatic performance tuning and self-adaptive solver selection systems. For example, the Self-Adapting Large-scale Solver Architecture (SALSA) developed at the University of Tennessee requires running a large number of problems to initialize the system before using it. In contrast, we propose an adaptive on-line control system that optimizes simulator performance by dynamically adjusting the solver parameters during the simulation. We start with a large set of parameters and quickly choose the best combinations, which are continuously adapted during the simulation using solver runtime performance measurements (e.g. solver CPU time) to guide the search. This software system, called the Intelligent Performance Assistant (IPA), has been successfully integrated into ExxonMobil's proprietary reservoir simulator and deployed with it worldwide. The system can handle a large number of combinations of solver parameters, currently on the order of 10^8, and consistently improves the run-time performance of real simulation models, frequently by 30% or more, compared with the performance of the default solver settings. Moreover, IPA includes a persistent memory of solver performance statistics. The runtime statistics from individual runs can be gathered, processed using data-mining techniques and integrated into the IPA system, thus allowing its continuous improvement.
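The adaptation loop described above can be caricatured as a bandit-style search over solver configurations rewarded by measured solve time. This is a generic sketch of the idea, not IPA itself; the epsilon-greedy rule and every name are illustrative.

```python
import random

def adaptive_solver_loop(configs, run_solver, n_steps, explore=0.1):
    """Pick solver settings per time-step, favouring the configuration
    with the lowest running mean CPU time while still exploring."""
    stats = {c: (0, 0.0) for c in configs}   # config -> (count, mean time)
    for step in range(n_steps):
        tried = {c: s for c, s in stats.items() if s[0] > 0}
        if not tried or random.random() < explore:
            cfg = random.choice(configs)     # explore a random configuration
        else:
            cfg = min(tried, key=lambda c: tried[c][1])  # exploit best so far
        cpu = run_solver(step, cfg)          # e.g. preconditioner + tolerances
        n, mean = stats[cfg]
        stats[cfg] = (n + 1, mean + (cpu - mean) / (n + 1))
    return stats
```

A persistent store of `stats` across runs would play the role of the performance memory the abstract mentions.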
A Novel Percolative Aggregation Approach for Solving Highly Ill-Conditioned Systems
Authors: H. Klie and M.F. Wheeler
A key aspect of any algebraic multilevel procedure is the ability to reliably capture the physical behavior behind the system coefficients. However, the modeling of complex reservoir scenarios generally involves the computation of extremely discontinuous and nonlinear coefficients that, in turn, compromise the robustness and efficiency of smoothing and coarsening strategies. In addition, the need to deal with large discretized domains leads to highly ill-conditioned systems that represent a true challenge to any linear solver technology known today. In this work, we exploit the fact that flow trend information can be used as a basis for developing a robust percolative aggregation (PA) two-stage preconditioning method. To this end, we identify coefficient aggregates by means of an efficient version of the Hoshen-Kopelman percolation algorithm suitable for any flow network structure. By partitioning and reordering unknowns according to these aggregates, we obtain a set of high-conductive blocks followed by a low-conductive block. Diagonal scaling weakens the interaction between these high- and low-conductive blocks and improves the overall conditioning of the system. The solution of the high-conductive blocks approximates well the solution of the original problem. The remaining sources of error in this approximation are mainly due to small eigenvalues, which are eliminated with a deflation step. The combination of the algebraic solution of the high-conductive blocks with deflation is realized as a two-stage preconditioner strategy. The deflation stage is carried out by further isolating the aggregate blocks with a matrix compensation procedure. We validate the performance of the PA approach against ILU preconditioning. Preliminary numerical results indicate that PA two-stage preconditioning is a promising way to employ existing algebraic multilevel methods more effectively.
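The cluster identification in the first stage can be illustrated with a minimal Hoshen-Kopelman-style labeling on a 2D grid, implemented here with union-find. The threshold and all names are illustrative; the algorithm in the paper is generalized to arbitrary flow networks.

```python
import numpy as np

def label_clusters(perm, threshold):
    """Hoshen-Kopelman-style labeling: connected components of cells
    with perm > threshold, found in one scan with union-find."""
    nx, ny = perm.shape
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    labels = -np.ones((nx, ny), dtype=int)
    for i in range(nx):
        for j in range(ny):
            if perm[i, j] <= threshold:
                continue                    # low-conductive cell: unlabeled
            labels[i, j] = i * ny + j
            parent[labels[i, j]] = labels[i, j]
            if i > 0 and labels[i - 1, j] >= 0:
                union(labels[i - 1, j], labels[i, j])
            if j > 0 and labels[i, j - 1] >= 0:
                union(labels[i, j - 1], labels[i, j])
    for i in range(nx):                     # flatten to root labels
        for j in range(ny):
            if labels[i, j] >= 0:
                labels[i, j] = find(labels[i, j])
    return labels
```

Reordering the unknowns by these labels yields the high-conductive blocks followed by the low-conductive remainder described above.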
Mortar Coupling of Discontinuous Galerkin and Mixed Finite Element Methods
Authors: G.V. Pencheva, S.G. Thomas and M.F. Wheeler
Geological media exhibit permeability fields and porosities that differ by several orders of magnitude across highly varying length scales. Computational methods used to model flow through such media should be capable of treating rough coefficients and grids. Further, the adherence of these methods to basic physical properties such as local mass balance and continuity of fluxes is of great importance. Both discontinuous Galerkin (DG) and mixed finite element (MFE) methods satisfy local mass balance and can accurately treat rough coefficients and grids. The appropriate choice of physical models and numerical methods can substantially reduce computational cost with no loss of accuracy. MFE is popular due to its accurate approximation of both pressure and flux but is limited to relatively structured grids. On the other hand, DG supports higher-order local approximations, is robust and handles unstructured grids, but is very expensive because of the number of unknowns. To this end, we present DG-DG and DG-MFE domain decomposition couplings for slightly compressible single-phase flow in porous media. Mortar finite elements are used to impose weak continuity of fluxes and pressures on the interfaces. The sub-domain grids can be non-matching and the mortar grid can be much coarser, making this a multiscale method. The resulting nonlinear algebraic system is solved via a non-overlapping domain decomposition algorithm, which reduces the global problem to an interface problem for the pressures. Solutions of numerical experiments performed on simple test cases are first presented to validate the method. Then, additional results on some challenging problems in reservoir simulation are shown to motivate the future application of the theory.
Multipoint Flux Approximations via Upscaling
Authors: C.L. Farmer, A.J. Fitzpatrick and R. Potsepaev
Control-volume discretizations for fluid flow in anisotropic porous media are investigated. The method uses numerical solutions of the simplest model equation, $-\nabla\cdot[K(x)\nabla p(x)] = f(x)$. The permeability tensor, K(x), is allowed to have discontinuities. Multipoint flux approximations (MPFA) are used, and transmissibility coefficients are obtained, in the usual way, from local numerical flow experiments (transmissibility upscaling) for each cell face. For regular K-orthogonal grids with a uniform permeability tensor, the scheme reduces to the standard two-point flux approximation. Monotonicity of the solution matrix is discussed and a version of the method that provides an M-matrix is described. This discretization scheme is applied to reservoir simulation on 3D structured grids with distorted geometry, highly anisotropic media, and discontinuities in the permeability tensor. Simulation results are presented and compared with results from other two-point and multipoint flux approximations.
The Competing Effects of Discretization and Upscaling – A Study Using the q-Family of CVD-MPFA
Authors: M. Pal and M.G. Edwards
Families of flux-continuous, locally conservative, finite-volume schemes have been developed for solving the general tensor pressure equation on structured and unstructured grids [1,2]. A family of flux-continuous schemes is quantified by a quadrature parameterization [3]. Improved convergence has been observed for certain quadrature points [4]. In this paper a q-family of flux-continuous (CVD-MPFA) schemes is used as part of a numerical upscaling procedure for upscaling fine-scale permeability fields onto coarse grid scales. A series of data sets [5] is tested where the upscaled permeability tensor is computed on a sequence of grid levels. The upscaling sequence is repeated for three distinct quadrature points (q=0.1, q=0.5, q=1) belonging to the family of schemes. Three types of refinement study are presented:
1. Refinement study with permeability distribution invariant with respect to grid level, involving a classical mathematical convergence study. The same coarse-scale underlying permeability map (and therefore the same problem) is preserved on all grid levels, including the fine-scale reference solution.
2. Refinement study with renormalized permeability. In this study, the local permeability is upscaled to the next grid level hierarchically, so that permeability values are renormalized at each coarser level, showing the effect of locally upscaled permeability compared with that obtained directly from the fine-scale solution.
3. Reservoir field refinement study. In this study the permeability distribution for each grid level is obtained by upscaling directly from the fine-scale permeability field, as in standard simulation practice.
The study is carried out for the discretization of the scheme in physical space. The benefit of using specific quadrature points is demonstrated for upscaling, and superconvergence is observed.
REFERENCES
1. M.G. Edwards and C.F. Rogers. Finite volume discretization with imposed flux continuity for the general tensor pressure equation. Comput. Geosci. (1998).
2. M.G. Edwards. Unstructured, control-volume distributed, full-tensor finite volume schemes with flow based grids. Comput. Geosci. (2002).
3. M. Pal, M.G. Edwards and A.R. Lamb. Convergence study of a family of flux-continuous, finite-volume schemes for the general tensor pressure equation. IJNME (2005).
4. M. Pal. Families of Control-Volume Distributed CVD (MPFA) Finite Volume Schemes for the Porous Medium Pressure Equation on Structured and Unstructured Grids. PhD Thesis, Swansea University (2007).
5. M.A. Christie and M.J. Blunt. Tenth SPE Comparative Solution Project: A Comparison of Upscaling Techniques. SPE (2001).
Ensemble Level Upscaling of 3D Well-driven Flow for Efficient Uncertainty Quantification
Authors: Y. Chen, K. Park and L.J. Durlofsky
Upscaling is commonly applied to account for the effects of fine-scale permeability heterogeneity in coarse-scale simulation models. Ensemble level upscaling (EnLU) is a recently developed approach that aims to efficiently generate coarse-scale flow models capable of reproducing the ensemble statistics (e.g., P50, P10 and P90) of fine-scale flow predictions for multiple reservoir models. Often the most expensive part of standard coarsening procedures is the generation of upscaled two-phase flow functions (e.g., relative permeabilities). EnLU provides a means for efficiently generating these upscaled functions using stochastic simulation. This involves the use of coarse-block attributes that are both fast to compute and representative of the effects of fine-scale permeability on upscaled functions. In this paper, we establish improved attributes for use in EnLU, namely the coefficient of variation of the fine-scale single-phase velocity field (computed during the computation of upscaled absolute permeability) and the integral range of the fine-scale permeability variogram. Geostatistical simulation methods, which account for spatial correlations of the statistically generated upscaled functions, are also applied. The overall methodology thus enables the efficient generation of coarse-scale flow models. The procedure is tested on 3D well-driven flow problems with different (Gaussian) permeability distributions and high fluid mobility ratios. EnLU is shown to capture the ensemble statistics of fine-scale flow results (flow rate and oil cut as a function of time) with similar accuracy to full flow-based methods but at a small fraction of the computational cost.
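The first of the two attributes proposed above is cheap to evaluate because the fine-scale single-phase velocity field is already available from the absolute-permeability upscaling step. A minimal sketch (illustrative names; cell-centred 3D velocity components for one coarse block):

```python
import numpy as np

def velocity_cov(vx, vy, vz):
    """Coefficient of variation of the fine-scale speed within one
    coarse block: std/mean of |v| over the block's fine cells."""
    speed = np.sqrt(vx**2 + vy**2 + vz**2).ravel()
    return float(speed.std() / speed.mean())
```

A high value flags blocks where heterogeneity channels the flow, which is where the stochastically generated two-phase functions have the most work to do.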
Indirect-error-based, Dynamic Upscaling of Multi-phase Flow in Porous Media
Authors: S.H. Lee, H. Zhou and H. Tchelepi
We propose an upscaling method that is based on dynamic simulation of a given model, in which the accuracy of the upscaled model is continuously monitored via indirect measures of the error. If the indirect error measures are larger than a specified tolerance, the upscaled model is dynamically updated with approximate fine-scale information that is reconstructed by a multi-scale finite volume method (Jenny et al., JCP 217: 627--641, 2006). Upscaling of multi-phase flow entails detailed flow information from the underlying fine scale. We apply adaptive prolongation and restriction operators for the flow and transport equations in constructing an approximate fine-scale solution. This new method reduces the inaccuracy associated with traditional upscaling methods, which rely on prescribed boundary conditions in computing the upscaled variables. This dynamic upscaling algorithm is validated for incompressible two-phase flow in two-dimensional heterogeneous domains. We demonstrate that the dynamically upscaled model achieves high numerical efficiency compared with fine-scale computations and also provides excellent agreement with reference fine-scale solutions.
Vertical Aggregation of Reservoir Simulation Models for Numerical Well Testing
Authors: R.A. Archer and J. Koh
Upscaling reservoir simulation models in an appropriate manner is important for any well test interpretation workflow which involves numerical reservoir simulation. Approaches based on steady-state flow behaviour are not necessarily optimal when trying to capture the details of pressure transient phenomena. This study proposes a new layer aggregation strategy which forms a weighted sum of the permeability differences between cells in neighbouring layers. The weighting uses a kernel function (G) presented by Oliver (1990). This approach was successfully applied in four upscaling experiments using a permeability distribution based on the Tarbert formation. These cases included a standard well test, an interference test and a test from a partially penetrating well. The results showed that coarsened models with as few as five layers can reproduce the pressure transient behaviour of a twenty-layer fine grid model.
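One way to realize such an aggregation strategy is greedily: merge the pair of adjacent layers separated by the smallest kernel-weighted permeability difference, repeat, and cut the merge sequence at the desired layer count. Oliver's (1990) kernel G is not reproduced here; the weights, the simple arithmetic averaging and all names are illustrative.

```python
import numpy as np

def layer_merge_order(perm, weights):
    """Greedy vertical aggregation. perm: (n_layers, n_cells) layer
    permeabilities; weights: (n_cells,) kernel values. Returns the
    sequence of merges (pairs of layer-index groups), coarsest last."""
    layers = [[k] for k in range(perm.shape[0])]
    profile = perm.astype(float).copy()
    merges = []
    while len(layers) > 1:
        # weighted sum of cell-wise permeability differences per interface
        scores = [np.sum(weights * np.abs(profile[i + 1] - profile[i]))
                  for i in range(len(layers) - 1)]
        i = int(np.argmin(scores))
        merges.append((layers[i], layers[i + 1]))
        merged = profile[i:i + 2].mean(axis=0)   # placeholder upscaling
        layers[i] = layers[i] + layers.pop(i + 1)
        profile = np.delete(profile, i + 1, axis=0)
        profile[i] = merged
    return merges
```

Stopping when five layer groups remain gives the kind of coarsened model whose pressure-transient fidelity is tested above.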
Upscaling of Large Reservoir Systems by Using a Control-relevant Approach
Authors: S.A. Vakili-Ghahani, R. Markovinovic and J.D. Jansen
In reservoir simulation, we use ensembles of fine-scale geological models that are upscaled to coarser representations for flow simulation. The primary objective of the upscaling process is to meet the computational limits of the simulator. However, we argue that, from a system-theoretic point of view, a more fundamental reason for upscaling is that the complexity level of a model should be adjusted to the amount of information available from measurements and the extent of control (input) exercised by adjusting the well parameters. This is because, for a given configuration of wells, a large number of combinations of state variables (pressure and saturation values in the grid blocks) are not actually controllable and observable, and accordingly do not affect the input-output behavior of the model. Therefore, we propose a "control-relevant upscaling" (CRU) approach that determines equivalent coarse-scale permeabilities based on the actual system's input-output behavior. The coarse-scale parameters are obtained as the solution of an optimization problem that minimizes the distance between the input-output behaviors of the fine- and coarse-scale models. This distance is measured using Hankel or energy norms, in which Hankel singular values and Markov parameters serve as measures of the combined controllability and observability, and of the response of the system, respectively. This work focuses on single-phase flow upscaling, for which we develop a CRU algorithm for reservoir systems. Moreover, we address the potential benefits of using proper orthogonal decomposition (POD) in combination with our CRU method to obtain a reduced-order CRU algorithm that accelerates the upscaling procedure. In the cases considered, the CRU algorithm shows superior input-output behavior compared with upscaling algorithms commonly used in reservoir simulators.
The MPFA G Scheme for Heterogeneous Anisotropic Diffusion Problems on General Meshes
Authors: L. Agelas, D. Di Pietro, I. Kapyrin and R. Masson
Finite volume cell-centered discretizations of multiphase porous media flow are very popular in the oil industry for their low computational cost and their ability to incorporate complex nonlinear closure laws and physics. However, the trend towards a more accurate description of the geometry and of the porous medium requires flux approximations that can handle general polyhedral meshes and full diffusion tensors. Convergence on general meshes, as well as robustness with respect to the anisotropy and heterogeneity of the diffusion tensor, should come at a reasonable computational cost. An important property on which the analysis relies is coercivity, which ensures the stability of the scheme and allows one to prove convergence of consistent discretizations. Meeting all these requirements is still a challenge, and this is an active field of research. The L-method, recently introduced by Ivar Aavatsmark and co-workers, displays enhanced monotonicity properties on distorted meshes and for anisotropic diffusion tensors. In this work, we present a generalization of the L-method based on a discrete variational framework and on constant gradient reconstructions. The coercivity of the discretization requires a local condition, depending both on the mesh and on the diffusion tensor, to be satisfied. The monotonicity and convergence properties of the scheme are assessed on challenging single-phase problems with distorted meshes and anisotropic, heterogeneous diffusion tensors. The proposed method is compared with the standard O- and L-methods as well as with an unconditionally symmetric, coercive cell-centered finite volume scheme using a larger stencil.
Monotonicity for Control Volume Methods on Unstructured Grids
Authors: E. Keilegavlen and I. Aavatsmark
In reservoir simulation the pressure is the solution of an elliptic equation. It follows from the maximum principle that this equation satisfies a monotonicity property. This property should be preserved when discretising the pressure equation. If it is not, the pressure solution may have false internal oscillations and extrema on no-flow boundaries. Thus, there is a need for sufficient conditions for the discretisation methods to be monotone. These conditions will depend on the permeability tensor and the grid. Previously, monotonicity for control volume methods has been studied on quadrilateral grids and on hexagonal grids. The former was done for general methods which reproduce linear potential fields, while the latter was done in a Control Volume Finite Element setting. These analyses have given sharp sufficient conditions for the discretisation methods to be monotone. However, for methods whose cell stencils include cells that do not share any edges with the central cell of the discretisation scheme, no work has been done on unstructured grids. In this work, we study monotonicity on triangular grids for control volume methods which are exact for linear potential fields. We derive sufficient conditions for monotonicity of the MPFA O- and L-methods. The monotonicity regions found for the MPFA methods are also tested numerically. The tests are done both on uniform grids in homogeneous media and on perturbed grids, which correspond to heterogeneous media. The investigations are done for single-phase flow only; however, the results are relevant for multiphase simulations. The results obtained in this work may be utilised in grid generation. In this way we can construct grids on which the discretisation of the pressure equation is guaranteed to be monotone.
Higher Dimensional Upwind Schemes for Flow in Porous Media on Unstructured Grids
Authors: M.S. Lamine and M.G. Edwards
Standard reservoir simulation schemes employ single-point upstream weighting for approximation of the convective fluxes when multiple phases or components are present. These schemes rely upon upwind information that is determined according to the grid geometry. As a consequence, directional diffusion is introduced into the solution that is grid dependent. The effect can be particularly important for cases where the flow is across grid coordinate lines, and is known as cross-wind diffusion. Novel higher-dimensional upwind schemes that minimize cross-wind diffusion are presented for convective flow approximation on structured and unstructured grids. The schemes are free of spurious oscillations and remain locally conservative. The higher-dimensional schemes are coupled with full-tensor Darcy flux approximations. Benefits of the resulting schemes are demonstrated for classical convective test cases in reservoir simulation, including cases with full-tensor permeability fields, where the methods prove to be particularly effective. The test cases involve a range of unstructured grids with variations in orientation and permeability that lead to flow fields that are poorly resolved by standard simulation methods. The higher-dimensional formulations are shown to effectively reduce numerical cross-wind diffusion, leading to improved resolution of concentration and saturation fronts.
TECHNICAL CONTRIBUTIONS: Locally conservative unstructured multi-dimensional schemes coupled with full-tensor Darcy flux approximations are presented for reservoir simulation. The multi-dimensional schemes are developed for general unstructured grids. The schemes are proven to be positive subject to conditions on the tracing direction and permit higher CFL conditions than standard schemes. Comparisons with the single-point upstream weighting scheme are made on a range of unstructured grids for different grid orientations and aspect ratios, in cases involving full-tensor coefficient velocity fields. The numerical results demonstrate the benefits of the higher-dimensional schemes both in terms of improved front resolution and a significant reduction in cross-wind diffusion.
Grid Optimization to Improve Orthogonality of Two-point Flux Approximation for Unstructured 3D Fractured Reservoirs
Today's geological models are increasingly complex. The use of unstructured grids is an efficient way to account for this complexity, especially when dealing with large networks of fractures and faults. Flux-based approximations are commonly used in the reservoir simulation community to discretize the flow equations. The two-point flux approximation (TPFA) is the simplest and the most robust discretization technique. Application of TPFA requires an orthogonal grid. When applied to an unstructured model, perpendicular bisector (PEBI) grids are usually used, as they are orthogonal by construction. But the construction of PEBI grids can become challenging for fully 3D models containing a large number of fractures and faults. In this work we propose to use an unstructured grid which matches all discrete features (fractures and faults) exactly but may not be orthogonal. The objective is to relax the orthogonality criterion to simplify the grid generation step, and to deal with orthogonality at the flux approximation step. Typical discretization techniques for nonorthogonal grids are based on multi-point flux approximations (MPFA), where more than two pressure points are used to evaluate the flux. Although substantial progress has been made in improving MPFA, these methods remain more complex and less robust than TPFA. In this work we present a simple methodology for applying a TPFA to unstructured nonorthogonal grids. The idea is to determine the location of the pressure node inside each control volume so as to make the connections with the surrounding cells as orthogonal as possible. Although it is not always possible to make all connections perfectly orthogonal, this optimization procedure systematically improves the grid quality. In addition, this purely geometrical optimization is done at the preprocessing level, does not require any flow information, and can be used with any connectivity-based flow solver.
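The quantity the optimization targets can be made concrete with the standard TPFA half-transmissibility and a per-connection orthogonality measure. The sketch below assumes unit outward face normals and a symmetric cell permeability K; all names are illustrative.

```python
import numpy as np

def half_transmissibility(node, face_centroid, face_normal, area, K):
    """Standard TPFA half-transmissibility for one cell/face pair.
    `node` is the cell's pressure-node location, i.e. the degree of
    freedom the optimization above is free to move inside the cell."""
    d = face_centroid - node                 # node-to-face vector
    return area * (d @ K @ face_normal) / (d @ d)

def orthogonality_error(node_a, node_b, face_normal):
    """Angle (radians) between the a-b connection and the face normal;
    zero means the connection is perfectly TPFA-orthogonal."""
    d = node_b - node_a
    c = abs(d @ face_normal) / (np.linalg.norm(d) * np.linalg.norm(face_normal))
    return np.arccos(np.clip(c, -1.0, 1.0))
```

Moving the pressure nodes to drive this angle toward zero on every connection is exactly the purely geometrical preprocessing step described above.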
High Performance Dual Mesh Method to Simulate Two-phase Gravity Dominated Flows in Porous Media
Authors: M.A. Ashjari and B. Firoozabadi
This paper presents a new combined method for accurate upscaling of two-phase displacements in highly heterogeneous reservoirs. The method retains its high performance for various flow regimes, from viscous- to gravity-dominated displacements, without the need for further modifications or computational steps. Two different grids are incorporated in the simulation. The fine-scale grid is used to capture the complicated physics of the flow, which depends on the dominant driving forces and their interaction with heterogeneity. However, to achieve a fast simulation, the global flow calculation is performed on the coarse-scale grid using upscaled equivalent properties. The communication between the two grid scales is achieved by the dual mesh method (DMM) procedure. Since DMM performance still depends on the accuracy of the coarse-scale simulation, a vorticity-based coarse grid generation technique is also incorporated to limit the upscaling errors. The technique optimizes the coarse grid distribution based on the vorticity preservation concept, whereby the single-phase vorticity of the fine grid model is preserved as far as possible in the coarse grid model. To demonstrate accuracy and efficiency, the combined DMM-vorticity method is applied to highly heterogeneous systems in two dimensions with and without gravity. The results reveal that the flow regime has only a minor impact on the performance of the combined method.
Discretisation Schemes for Anisotropic Heterogeneous Problems on Near-well Grids
Authors: S.S. Mundal, E. Keilegavlen and I. Aavatsmark
In reservoir simulation, proper treatment of wells is crucial for the performance of fluid flow models. Important well parameters such as the well flow rate and the wellbore pressure are highly sensitive to the computational accuracy of near-well flow models. Near-well regions are characterised as high flow density regions, and the dominating flow pattern exhibits a radial-like nature with large pressure gradients and, for multiphase flow, large saturation gradients. Skew or horizontal wells will also, in general, imply a strong effect of anisotropy and heterogeneities due to the geological layering of the reservoir. Due to the difference between the reservoir scale and the wellbore radius, reservoir flow models do not capture the true flow behavior in the well vicinity. Thus, more flexible models with local grid refinement in near-well regions are needed. The models should also be adapted to handle complex geological near-well structures and multiphase flow simulations. Existing near-well models are in general based on homogeneous media in the well vicinity; anisotropies and strong heterogeneities are less accounted for in near-well flow simulations. In this work, we construct analytical solutions for near-well flow which is not aligned with a radial inflow pattern. These solutions resemble strongly heterogeneous, possibly anisotropic media. We compare different control volume discretisation schemes and radial-type grids for such cases and give their convergence behavior. The objective is to obtain a clearer view of the accuracy of near-well grids and discretisation schemes for large contrasts in permeability and, hence, to determine which grid is preferable and which numerical scheme is suitable under given near-well conditions. The simulations are performed for single-phase flow; however, the results are also relevant for multiphase flow simulations.
A Stochastic Trapping Model for Non-wetting Phase Ganglia in Two-phase Flow through Porous Media
Authors: M. Tyagi, H.A. Tchelepi and P. Jenny
Residual trapping is one of the CO2 retention mechanisms during CO2 storage in saline aquifers. In a region where the CO2 saturation falls below the residual saturation, isolated ganglia of CO2 are formed, which can become immobile (trapped). In this paper, we propose a stochastic model for the trapping and release of non-wetting phase ganglia. In contrast to traditional trapping models, which only take mean quantities into account, the proposed stochastic model contains additional statistical information, e.g., a correlation time. By selecting appropriate model parameters, both drainage and imbibition can be described without a priori knowledge of the displacement direction.
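A minimal sketch of a trapping/release process with a finite correlation time is a two-state (mobile/trapped) telegraph process; the rates and all names below are illustrative stand-ins, not the paper's parameterization.

```python
import numpy as np

def simulate_ganglion(n_steps, dt, k_trap, k_release, seed=0):
    """Mobile (1) / trapped (0) Markov chain. Stationary trapped
    fraction: k_trap/(k_trap + k_release); correlation time:
    1/(k_trap + k_release)."""
    rng = np.random.default_rng(seed)
    state = np.empty(n_steps, dtype=int)
    state[0] = 1
    for n in range(1, n_steps):
        p = k_trap * dt if state[n - 1] == 1 else k_release * dt
        state[n] = 1 - state[n - 1] if rng.random() < p else state[n - 1]
    return state
```

The correlation time is the extra statistical ingredient, beyond mean trapped fractions, that the abstract highlights.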
A Two-scale Operator Splitting for Flow Problems in Porous Media
Authors: Y. Cao, A. Weiss and B. Wohlmuth
Operator splitting time discretization techniques are becoming increasingly popular for the numerical simulation of non-linear problems in porous media. The main idea is to split complex operators in evolution equations into simpler ones which are solved successively in each time step. For porous media flow a natural splitting is given by the diffusive and the advective parts of the flux. For each part, optimal solvers for the particular evolution equation can then be used, i.e., a parabolic solver for the diffusion equation and a hyperbolic solver (e.g., a characteristics method) for the advection equation. In this talk, we first present an operator splitting discretization using an implicit finite volume scheme for the diffusive part and a semi-Lagrangian method for the advective part. The computational cost of the semi-Lagrangian method is negligible compared with that of the implicit finite volume scheme. In the second part, we introduce a two-scale operator splitting method where the diffusion part is solved on a coarse grid while the advection part is considered on a fine grid. This is motivated by the fact that diffusion and advection act on different scales. The resulting operator splitting discretization is then equivalent in performance to a fast hyperbolic solver. In numerical examples, our proposed methods are compared with standard implicit finite volume methods. We demonstrate that by using the operator splitting time discretization, the number of iterations in the implicit finite volume scheme may be significantly reduced. Moreover, for the two-scale method, it turns out that despite the coarse approximation of the diffusive part, the solution is still of equal quality compared with that of standard methods on fine grids. To summarize, operator splitting techniques provide a very flexible and powerful framework, which makes them quite attractive as an efficient solution strategy for flow problems in porous media.
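A sketch of the basic splitting for a 1D scalar: the advective part is handled semi-Lagrangianly (trace back along characteristics and interpolate), the diffusive part implicitly. The constant advection speed, uniform grid, boundary treatment and all names are illustrative simplifications.

```python
import numpy as np

def split_step(u, dx, dt, D, vel):
    """One operator-splitting step: semi-Lagrangian advection followed
    by an implicit (backward Euler) diffusion solve."""
    n = u.size
    # advection: u_adv(x) = u(x - vel*dt), linear interpolation
    x = np.arange(n) * dx
    u_adv = np.interp(x - vel * dt, x, u)    # clamped at the boundaries
    # diffusion: (I - dt*D*L) u_new = u_adv, L = 3-point Laplacian
    r = dt * D / dx**2
    A = np.diag((1 + 2 * r) * np.ones(n))
    A += np.diag(-r * np.ones(n - 1), 1) + np.diag(-r * np.ones(n - 1), -1)
    A[0, 0] = A[-1, -1] = 1 + r              # homogeneous Neumann ends
    return np.linalg.solve(A, u_adv)
```

In the two-scale variant described above, the implicit diffusion system would be assembled on a coarser grid while only the semi-Lagrangian trace-back sees the fine grid.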
A General Multiscale Finite-Volume Method for Compressible Multiphase Flow in Porous Media
Authors: H. Hajibeygi and P. Jenny
The Multiscale Finite-Volume (MSFV) method was originally proposed to solve elliptic problems arising from incompressible multiphase flow in highly heterogeneous porous media at reduced computational cost. However, when phases are compressible, especially when the reservoir contains gas, mathematical formulations lead to a parabolic equation to be solved for pressure. In this paper we introduce a general MSFV method to deal with such parabolic problems. In this scheme, the basis and correction functions are numerical solutions of the full parabolic problems in localized domains. Hence compressibility effects are represented by a stencil in the coarse-system matrix, i.e. not only by a diagonal entry. Furthermore, to enhance the computational efficiency of the scheme, the basis functions are kept independent of the pressure field. As a result, only the correction functions (one per dual-coarse cell) have to be updated during the iterative procedure. It is an important property of this approach that it requires no additional simplifications. Finally, its good efficiency is demonstrated for a number of challenging test cases.
Multiscale Asynchronous Algorithms Based on the Superelements Method for Multiphase Flow
Authors: A.K. Pergament, V.A. Semiletov and P.Y. Tomin
The typical scales of variation of the pore pressure and the fluid saturations are different in both space and time. The multiscale method is based on using a coarse grid for the pore pressure equation and a fine grid for the saturation equations. The time steps for these equations may be different too. The essential feature of the method is the construction of basis functions by solving the one-phase stationary equations. These basis functions carry the same peculiarities determined by the fine grid structures. The solution of the pore pressure equation may then be constructed as a linear span of these basis functions for the Buckley-Leverett equations. In the general case it is possible to construct tensor total permeability coefficients and to upscale the relative permeabilities; these quantities are calculated by approximating the dissipative energy integral. As a result we obtain a non-linear parabolic equation for the pore pressure with tensor coefficients. Implicit schemes on the fine grid are considered for the saturation equation. It is essential that the method is a high-resolution one. Results of modelling several problems are presented.
-
-
-
Multiscale/Mimetic Pressure Solvers with Near-well Grid Adaptation
Authors B. Skaflestad and S. Krogstad
Multiscale methods have proved to be an efficient means of achieving fast simulations on large reservoir models. The main characteristic of these methods is that high-resolution flow fields are obtained at relatively low computational cost. We are thus able to resolve large-scale flow patterns as well as fine-scale details that would be impossible to obtain for models where direct simulation using traditional methods would be prohibitive. However, there are still a number of open problems in applying these methods to reservoir simulation. In particular, we observe some discrepancy in the performance of wells when compared to direct simulation on the fine-scale grid. To improve the multiscale method's predictive power for individual wells, we consider two opposite strategies: first, resolving the near-well flow in the coarse grid by adaptive grid refinement in regions near wells; second, ensuring that near-well flow is sufficiently captured in the corresponding multiscale basis functions. For the latter strategy, we consider both adaptive alignment of the coarse pressure grid to well trajectories and an oversampling method for the computation of the multiscale basis functions corresponding to wells. In this paper we study the effects of such near-well grid adaptations and state pros and cons for the approaches considered.
-
-
-
Spontaneous Ignition in Porous Media at Long Times
Authors J. Bruining and D. Marchesin
If oxygen is in intimate contact with fuel, the Arrhenius law says that reaction will occur, even at low temperatures. Heat losses can become equal to the small amount of reaction heat generated, so that the system remains trapped in a slow reaction mode. Such a mode is indistinguishable from extinction. On the other hand, if heat losses remain smaller than the heat generated by the reaction, the temperature increases and spontaneous ignition occurs. Heat losses depend strongly on the geometry of the heat generating region. We therefore distinguish three idealized geometries, viz. linear, cylindrical and spherical infinite domains. We analyze the long-time behavior of a basic heat diffusion problem that incorporates an Arrhenius heat generation term; the resulting model is temperature controlled. Only in spherical geometry does the model recover the results from the conventional heat transfer concept. Indeed, we come to the following results. Because reactant depletion is ignored, spontaneous ignition always occurs in linear and cylindrical coordinates, even if it takes a very long time. For the spherical case, the occurrence of ignition or extinction depends on the process conditions.
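A prototypical model of the type analyzed is a heat equation with an Arrhenius source term; the exact non-dimensionalization used by the authors is not given here, so the form below is indicative only:

$$\frac{\partial T}{\partial t} \;=\; \frac{1}{r^{n}}\frac{\partial}{\partial r}\!\left(r^{n}\,\frac{\partial T}{\partial r}\right) \;+\; A\,e^{-E/(RT)}, \qquad n = 0,\,1,\,2,$$

with $n = 0$, $1$, $2$ corresponding to the linear, cylindrical and spherical geometries, respectively.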
-
-
-
Study of Heavy Oil Recovery from a Fractured Carbonate Reservoir Using in Situ Combustion
Authors H. Fadaei, M. Quintard, G. Debenest, G. Renard and A. Kamp
In spite of its strategic importance, the recovery of heavy crude oils from fractured carbonate reservoirs has not been extensively addressed. Thermal methods seem well suited for this kind of problem; in particular, in-situ combustion has shown promising results in laboratory experiments. Extensive work has been done on the development of thermal process simulators, but for in-situ combustion applied to fractured reservoirs, where one is dealing with a multi-scale, multi-process problem, many unknowns still exist. The recovery mechanisms and the reservoir and operational conditions under which combustion can propagate in fractured systems are not sufficiently clear. Also, due to safety issues, air injection requires careful assessment of the reservoir displacement mechanisms, in particular the magnitude and kinetics of matrix-fracture transfers. To understand the mechanism of heavy oil recovery from a fractured reservoir we propose the development of a numerical simulation strategy, starting from existing simulation tools that are adapted to this particular problem. This allows us, firstly, to understand the role of each driving mechanism and of the physical and operational parameters in the recovery process and, secondly, to choose a suitable up-scaling method. The study is based on a fine-grid, single-porosity, multi-phase and multi-component simulation of a core surrounded by two parallel air-invaded fractures using a thermal simulator. First the simulator is validated for different processes: one- and two-dimensional diffusion and one-dimensional combustion are compared with the corresponding analytical solutions, and the two-dimensional combustion is validated using experimental results available in the literature. The simulation results predict the conditions under which combustion is sustained in the fractured reservoir as a function of the oxygen diffusion coefficient, the injection rate, and the permeability of the matrix. Oil production via natural drainage, hot fluid injection and in-situ combustion are compared to address the importance of the different driving mechanisms. At the block scale, the effects of fracture spacing, heterogeneity in the matrix and grid size on the propagation of combustion and on oil production are studied, and a suitable up-scaling procedure is then proposed.
-
-
-
Modeling CO2 Sequestration Using a Sequentially Coupled 'Iterative-IMPEC-time-split-thermal' Compositional Simulator
Authors M. Delshad, S.G. Thomas and M.F. Wheeler
This paper presents an efficient numerical scheme for multiphase compositional flow coupled with subsurface heat transfer. The flow equations are first presented, followed by a brief discussion of the equation of state (EOS) and a description of the two-phase flash algorithm. An implicit-pressure, explicit-concentrations (IMPEC) sequential algorithm is then applied iteratively to enforce the non-linear volume balance (saturation) constraint. The pressure equation is solved using mixed FEM, while the concentrations are updated consistently with the requirement of local mass balance for every component. Thermal effects also play an important role in such problems since they affect the phase properties; hence, stable and accurate locally conservative methods are desirable to model the thermal energy balance equation. To this end, we also present a time-split scheme for modeling the energy balance equation that is sequentially coupled to the flow solution, rendering it cheap yet accurate for the complex problems being modeled. Results of benchmark problems in compositional flow modeling are presented and validated where possible. Some large-scale parallel tests are performed on more challenging applications, such as those with highly heterogeneous permeability fields on very fine grids. Efficient parallel scalability of the code on up to 512 processors is also demonstrated. Finally, some test cases simulating "real world" problems of CO2 sequestration in deep saline aquifers are presented. Initial results show reasonably good agreement of CO2 plume shapes, arrival times, leakage rates and production curves with semi-analytic solutions, wherever available. All computations were carried out within the IPARS-CO2 code base framework.
-
-
-
Density-driven Natural Convection in Dual Layered and Anisotropic Porous Media with Application for CO2 Injection Project
Authors R. Farajzadeh, F. Farshbaf Zinati, P.L.J. Zitha and J. Bruining
In this paper we investigate the mass transfer of CO2 injected into a layered and anisotropic (sub)surface porous formation saturated with water. Solutions of carbon dioxide in water and oil are denser than pure water or oil. We perform our analysis on a rectangular part of the porous medium that is impermeable at the sides, except at the top, which is exposed to CO2. For this configuration, density-driven natural convection enhances the mass transfer rate of CO2 into the initially stagnant liquid. The analysis is done numerically using the mass and momentum conservation laws and the diffusion of CO2 into the liquid. This configuration leads to an unstable flow process. Numerical computations do not show natural convection effects for homogeneous initial conditions; therefore a sinusoidal perturbation is added to the initial top boundary condition. It is found that the development of fingers, and hence the mass transfer enhancement by natural convection, is fastest for large anisotropy ratios and slower for small ratios. It is also found that the mass transfer increases and the concentration front moves faster with increasing Rayleigh number if the high-permeability layer is on top. Of particular interest is the case when the Rayleigh number for the high-permeability layer is above the critical Rayleigh number (Racr = 40) while that of the low-permeability layer is below Racr. The results of this paper have implications for enhanced oil recovery and CO2 sequestration in aquifers.
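For reference, one common definition of the Rayleigh number for this kind of configuration (symbols assumed, not taken from the paper: permeability $k$, density increase $\Delta\rho$, gravity $g$, layer height $H$, porosity $\phi$, viscosity $\mu$, diffusion coefficient $D$) is

$$\mathrm{Ra} \;=\; \frac{k\,\Delta\rho\,g\,H}{\phi\,\mu\,D},$$

with convection setting in once $\mathrm{Ra}$ exceeds the critical value ($\mathrm{Ra}_{cr} = 40$ cited above).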
-
-
-
Operator Splitting of Advection and Diffusion on Non-uniformly Coarsened Grids
Authors V.L. Hauge, J.E. Aarnes and K.A. Lie
High-resolution geological models of subsurface reservoirs typically contain far more detail than can be used in conventional reservoir simulators. Geomodels are therefore upscaled before flow simulation is performed. Here, we present a non-uniform coarsening strategy to upscale geomodels, where the coarse grid is generated by selectively grouping cells in the fine-scale geomodel into connected coarse blocks, with some minimum volume and with the total flow through each coarse block bounded above. Transport is then modeled on this simulation grid. For purely advective flow, the coarsening strategy has been shown to be robust, allowing both accurate and fast transport simulations of highly heterogeneous and fractured reservoirs. Here, we consider advection-dominated two-phase flow, where the capillary diffusion is discretized separately from the advective and gravity terms using operator splitting. In particular, we investigate different damping strategies for the diffusive term to counteract overestimation of the diffusion operator on the coarse grid, in order to ensure accurate diffusion modelling in the transport solver.
-
-
-
Asynchronous Time Integration of Flux-conservative Transport
Authors B.T. Mallison, M.G. Gerritsen and G. Kreiss
We investigate the potential of a flux-conservative, asynchronous method for the explicit time integration of subsurface transport equations. This method updates each discrete unknown using a local time step, chosen either in accordance with local stability conditions or based on the predicted change of the solution, as in Omelchenko and Karimabadi (Self-adaptive time integration of flux-conservative equations with sources. J. Comput. Phys. 216 (2006)). We show that the scheme offers the advantage of avoiding overly restrictive global CFL conditions. This makes it attractive for transport problems with localized time scales, such as those encountered in reservoir simulation, where localized time scales can arise due to well singularities, spatial heterogeneities, moving fronts, or localized kinetics, amongst others. We conduct an analysis of the accuracy properties of the method for one-dimensional linear transport and first-order discretization in space. The method is found to be locally inconsistent when the temporal step sizes in two adjacent cells differ significantly. We show numerically, however, that these errors do not destroy the order of accuracy when localized. The asynchronous time stepping compares favorably with a traditional first-order explicit method for both linear and nonlinear problems, giving similar accuracy at much reduced computational cost. The computational advantage is even more striking in two dimensions, where we combine our integration strategy with an IMPES treatment of multiphase flow and transport. Global time steps between pressure updates are determined using a strategy often used for adaptive implicit methods. Numerical results are given for immiscible and miscible displacements using a structured grid with nested local refinements around wells.
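The benefit of local time stepping can be quantified with a simple back-of-the-envelope computation. The sketch below, using hypothetical grid data, contrasts the per-cell stable step against the single global CFL-limited step that a synchronous explicit scheme must take:

```python
# Local vs. global CFL-limited time steps on a hypothetical 1D grid.
# Asynchronous schemes let each cell advance near its own stable step
# instead of the global minimum; the ratio below indicates the saving.
import numpy as np

rng = np.random.default_rng(0)
dx = rng.uniform(1.0, 10.0, 1000)     # heterogeneous cell sizes
v = rng.uniform(0.1, 5.0, 1000)       # local velocities
cfl = 0.9

dt_local = cfl * dx / v               # per-cell stable step
dt_global = dt_local.min()            # step a synchronous scheme must take
print("potential speedup ~", dt_local.mean() / dt_global)
```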
-
-
-
High Order Adaptive Implicit Method for Conservation Laws
Authors A. Riaz, R. de Loubens and H. Tchelepi
Numerical solution of conservation laws with large variations in the local CFL number requires impractically small time steps for stable, explicit time integration. Implicit time stepping with large time steps is therefore necessary, but it is computationally expensive and consequently limited to first-order discretization schemes. A numerical scheme that is accurate and at the same time allows large time steps is clearly desirable. We develop a numerical formulation based on the Adaptive Implicit Method (AIM) that achieves both objectives by treating implicitly only the few grid cells that restrict the time step size, and by using high-order spatial discretization in both the implicit and explicit cells. Our novel approach solves the fully coupled transport and flow equations with high-order accuracy within the AIM framework to reduce the computational cost and improve the solution accuracy. We show that high-order discretization is computationally efficient when combined with AIM and leads to significant improvements in accuracy compared to the first-order solution.
-
-
-
Effects of Sand Dunes on Full Wavefields in Exploration
Seismic pressure point sources have been simulated near the surface of sand dunes in Oman. Real topographic elevation data were assembled from recorded field data in the region, in-filled where necessary with interpolated data, to form the implemented surface topography. Recent results describing seismic sand curves for the region are employed in the near-surface medium definition, and a constant half-space of realistic seismic velocities and densities is used for the simulated sub-sand medium. Topographic effects in the form of scattering and reflections from prominent sand dune topography are confirmed, showing differences between scattering from sand dunes elongated along, and oriented normal to, the orientation of the excited source. 8th-order finite differences (F-Ds) are used to discretize the full elastic wave equations in the velocity-stress formulation, with the F-D order gradually reduced when approaching the numerical grid boundaries. At the free surface, I employ boundary conditions for arbitrary surface topography derived for the particle velocities, discretized by 2nd-order F-Ds; at the side and bottom boundaries exponential damping is applied to make the boundaries non-reflective. A rectangular grid is used to model the Cartesian wave equations from a curved grid by adding extra terms to the equations that depend on the arbitrary surface topography and its spatial derivatives. The curved grid is adapted to an arbitrary surface topography, with gradually less curvature with depth towards the bottom of the grid, which is planar. The code is parallelized by domain decomposition using MPI (Message Passing Interface). This makes possible the time-domain modelling of higher frequencies (relevant even for exploration) and larger areas than could be handled otherwise. Only in the last 3-5 years has such complex modelling been commercially viable in hydrocarbon exploration.
-
-
-
Reservoir Characterization Improvement by Accounting for Well Test Extracted Effective Permeabilities
Authors A. Skorstad, F. Georgsen, P. Abrahamsen and E. Smørgrav
Geostatistical simulation of permeabilities on a geologically detailed resolution will account for permeabilities in the cells containing wells through kriging. These permeabilities are typically based on porosity logs and core plug measurements of both porosity and permeability. No direct measurement of permeability on the geomodelling grid scale exists. However, well tests give information on the effective permeability in an area close to the well region, covering several grid cells, and are therefore data on an aggregate scale. Given an assumption of radial flow into a vertical well, the effective permeability becomes a convolution of the permeabilities in the well-test region. Downscaling of the well-test data is possible by co-kriging the aggregate-scale permeability field with that of the geomodelling grid scale. Thereby, the geomodelling grid permeabilities will honour data on both scales. The effective permeability is downscaled through inverse block kriging, which implies a deconvolution procedure. Computation costs are kept low when introducing the new conditioning parameter by transforming to the Fourier domain, since the Fast Fourier Transform algorithm is an efficient method for solving the inverse block co-kriging. The implemented algorithm has been tested in a proof-of-concept study on real data from a StatoilHydro-operated field on the Norwegian continental shelf. Three wells with well-test data were chosen, and their derived effective permeabilities were included in the permeability simulations. Cases both with and without well-test conditioning were run and compared in well-test simulation software. All three near-well regions showed a significant improvement. These tests indicate that the conditioning method can be a useful contribution in bringing dynamic data into reservoir characterization.
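The computational advantage of working in the Fourier domain comes from convolution becoming pointwise multiplication. The sketch below is not the authors' inverse block co-kriging; it only illustrates a regularized (Wiener-style) FFT deconvolution of an aggregate-scale observation on a periodic grid, with synthetic data standing in for permeabilities:

```python
# Illustrative FFT-based deconvolution on a periodic 2D grid. In the
# Fourier domain convolution is a pointwise product, so its inverse is
# a (regularized) pointwise division. Kernel and data are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
fine = rng.lognormal(mean=3.0, sigma=1.0, size=(64, 64))  # synthetic field

# Averaging kernel standing in for the well-test weighting function.
kern = np.zeros((64, 64))
kern[:5, :5] = 1.0 / 25.0
K = np.fft.fft2(kern)

coarse = np.real(np.fft.ifft2(K * np.fft.fft2(fine)))     # aggregate data

eps = 1e-3                                                 # regularization
deconv = np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(coarse) /
                              (np.abs(K) ** 2 + eps)))     # Wiener-style inverse
```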
-
-
-
Anisotropic Permeability in Fractured Reservoirs from Joint Inversion of Seismic AVAZ and Production Data
Authors M. Jakobsen and A. Shahraini
Anisotropic effective medium theory (for homogenization of fractured/composite porous media under the assumption of complete scale separation) can be used to map the effects of sub-grid (or sub-seismic) fractures onto the grid scale. The effective poroelastic stiffness and hydraulic permeability tensors that determine the seismic amplitude versus angle-azimuth (AVAZ) and production data (provided that one has information about the overburden and the porous matrix, as well as suitable tools for seismic forward modelling and fluid flow simulation) can therefore be viewed as functions of a relatively small number of parameters related to the fractures (e.g., fracture density and azimuthal orientation). In this paper, we develop a rock-physics-based Bayesian method for joint inversion of seismic AVAZ and production data with respect to these fracture parameters. Within this stochastic framework, the expectation value of the effective permeability tensor is given by an integral of an effective medium approximation (derived using rigorous integral equation or Green's tensor function methods) weighted by the posterior probability density function over the allowed values of the fracture parameters. The present work complements those of Will et al. (2005), Jakobsen et al. (2007) and Jakobsen and Shahraini (2008) in the sense that we use different inversion methods, seismic attributes and/or (rock physics) models of the fractured reservoir. Our interdisciplinary method for characterization of sub-grid fractures can in principle be applied to rather complicated models of fractured reservoirs (e.g., involving multiple sets of fractures characterized by different fracture densities and orientations). However, the initial (synthetic) modelling and inversion results presented here are associated with a relatively simple model of a fractured reservoir, in which a single set of vertical fractures is located within a sub-set of the (piecewise constant) reservoir model (that can in principle be identified using complementary methods). We have analysed synthetic AVAZ and production data contaminated with different amounts of normally distributed noise, and managed to recover the unknown fracture parameters, and hence the effective permeability tensors, with good accuracy. The results clearly show that the incorporation of seismic anisotropy attributes into the history matching process helps to reduce the uncertainties of the estimated permeability tensors.
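In the notation assumed here (not quoted from the paper), the posterior expectation described above takes the form

$$\langle \mathbf{K}^{\ast} \rangle \;=\; \int \mathbf{K}^{\ast}(\boldsymbol{\theta})\; p(\boldsymbol{\theta} \mid \mathbf{d})\; \mathrm{d}\boldsymbol{\theta},$$

where $\mathbf{K}^{\ast}(\boldsymbol{\theta})$ is the effective permeability tensor given by the effective medium approximation, $\boldsymbol{\theta}$ collects the fracture parameters (e.g., density and azimuthal orientation), and $p(\boldsymbol{\theta} \mid \mathbf{d})$ is the posterior density given the seismic AVAZ and production data $\mathbf{d}$.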
-
-
-
A Novel Kriging Approach for Incorporating Nonlinear Constraints
Authors F.P. Campozana and L.W. Lake
This work presents new Kriging-based algorithms that allow incorporating nonlinear constraints, such as a given average and variance, into the kriged field. The first of these constraints allows one to obtain, for example, permeability fields whose average matches well-test-derived permeability; the second enables one to generate fields that have a desired variability. Therefore, a well-known drawback of Kriging, namely excessive smoothness, is overcome. As these constraints are applied, the Kriging estimates of all blocks become mutually dependent and must be solved simultaneously. This fact led to the development of a new concept, that of Simultaneous Kriging. Furthermore, because permeability is not an additive variable, the problem becomes nonlinear; an optimization procedure based on Newton's method is used to solve it. Both average and variance constraints are applied through the Lagrange multiplier technique. Power averaging with an exponent ω that varies from -1 to 1 is used to estimate field-generated permeability within the well-test drainage area. The optimization algorithm searches for an optimum ω value as well as a set of gridblock values such that the desired field average and variance are met. Starting from an initial guess and tolerance, the algorithm will reach the closest minimum within the solution space. Unlike other Kriging methods, the solution is not unique; in fact, infinitely many equiprobable solutions can be found. From this point of view, the proposed algorithms are more like conditional simulation. A number of permeability fields were generated and used as input to a numerical simulator to calculate their actual well-test permeability. The simulation results showed that the fields generated by the algorithm actually matched the well-test permeability and the field data variance within a reasonable tolerance. The same procedure can be used to incorporate other nonlinear constraints, such as facies proportions.
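The power average referred to above is the standard one; with gridblock permeabilities $k_i$, $i = 1, \dots, N$, inside the drainage area,

$$\bar{k}(\omega) \;=\; \left(\frac{1}{N}\sum_{i=1}^{N} k_i^{\,\omega}\right)^{1/\omega}, \qquad -1 \le \omega \le 1,$$

which recovers the harmonic mean at $\omega = -1$, the arithmetic mean at $\omega = 1$, and the geometric mean in the limit $\omega \to 0$.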
-
-
-
History Matching of Truncated Gaussian Models by Parallel Interacting Markov Chains on a Reduced Dimensional Space
In the oil industry and subsurface hydrology, geostatistical models are often used to represent the spatial distribution of different lithofacies in the reservoir. Two main model families exist: multipoint and truncated Gaussian models. We focus here on the latter. In history matching of a lithofacies reservoir model, we attempt to find multiple realizations of the lithofacies configuration that are conditional to dynamic data and representative of the model uncertainty space. This problem can be formalized in the Bayesian framework. Given a truncated Gaussian model as a prior and the dynamic data with their associated measurement errors, we want to sample from the conditional distribution of the facies given the data. A relevant way to generate conditioned realizations is to use Markov chain Monte Carlo (MCMC). However, the dimension of the model and the computational cost of each iteration are two important pitfalls for the use of MCMC. In practice, we have to stop the chain long before it has scanned the whole support of the posterior. Furthermore, as the relationship between the data and the random field is non-linear, the posterior can be multimodal. Hence, the chain may get stuck in one of the modes. In this work, we first show how to reduce drastically the dimension of the problem by using a truncated Karhunen-Loève expansion of the Gaussian random field underlying the lithofacies realization. Then we show how we can improve the mixing properties of a classical single MCMC chain, without increasing the global computational cost, by the use of parallel interacting Markov chains at different temperatures. Applying the dimension reduction and this innovative sampling method drastically lowers the number of iterations needed to sample efficiently from the posterior. We show the encouraging results obtained when applying the methodology to a synthetic history matching case.
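The truncated Karhunen-Loève expansion used for the dimension reduction has the standard form (notation assumed): with $(\lambda_i, \phi_i)$ the leading eigenpairs of the covariance operator of the underlying Gaussian field $Y$,

$$Y(\mathbf{x}) \;\approx\; \mu(\mathbf{x}) + \sum_{i=1}^{m} \sqrt{\lambda_i}\,\xi_i\,\phi_i(\mathbf{x}), \qquad \xi_i \sim \mathcal{N}(0,1)\ \text{i.i.d.},$$

so that the Markov chains operate on the low-dimensional coefficient vector $(\xi_1, \dots, \xi_m)$ rather than on the full grid.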
-
-
-
Using the EnKF with Kernel Methods for Estimation of Non-Gaussian Variables
Authors J.G. Vabø, G. Evensen, J. Hove and J.A. Skjervheim
The Ensemble Kalman Filter (EnKF) is derived under the assumption of Gaussian probability distributions. Thus, the successful application of the EnKF when conditioning to dynamic data depends on how well the stochastic reservoir state can be approximated by a Gaussian distribution. For facies models, the distribution becomes non-Gaussian and multi-modal, and the EnKF fails to preserve the qualitative geological structure. To apply the EnKF with models where the Gaussian approximation fails, we propose to map the ensemble to a higher-dimensional feature space before the analysis step. By careful selection of the mapping, moments of arbitrary order in the canonical reservoir parameterization can be embedded in the 1st and 2nd order moments of the feature space parameterization. As a result, the parameterization in the feature space might be better approximated by a Gaussian distribution. However, the mapping from the canonical parameterization to the feature space parameterization does not have an inverse. Thus, finding the analyzed ensemble of canonical states from the analyzed ensemble represented in the feature space is an inverse problem. By using the kernel trick, the mapping to the feature space is never explicitly computed, and we solve the inverse problem efficiently by minimizing a cost function based on the feature space distance. As a result, the computational efficiency of the EnKF is retained, and the methodology is applicable to large-scale reservoir models. The proposed methodology is evaluated as an alternative to other approaches for estimation of facies with the EnKF, such as the truncated pluri-Gaussian approach. Results from a field case are shown.
-
-
-
History Matching Using a Multiscale Ensemble Kalman Filter
Authors W. Lawniczak, R.G. Hanea, A.W. Heemink, D. McLaughlin and J.D. Jansen
Since the first version of the Kalman filter was introduced in 1960, it has received a lot of attention in the mathematical and engineering worlds. There are many successful successors, for example the Ensemble Kalman Filter (Evensen 1996), which has also been applied to reservoir engineering problems. The method proposed in [Zhou et al. 2007], the Ensemble Multiscale Filter (EnMSF), draws together the ensemble filtering ideas and an efficient covariance representation, and is expected to perform well in history matching for reservoir engineering. The EnMSF is a different way to represent the covariance of an ensemble. The computations are done on a tree structure and are based on an ensemble of possible realizations of the states and/or parameters of interest. The ensemble consists of replicates that are the values of the states per pixel. The pixels in the grid are partitioned between the nodes of the finest scale in the tree. The construction of the tree is guided by eigenvalue decomposition, and the state combinations with the greatest corresponding eigenvalues are kept on the higher scales. The states/parameters updated using the EnMSF are believed to preserve geological structure due to the localization property. This property stems from the filter's characterization, where the pixels from the grid (e.g. a permeability field) are distributed, in groups, over the finest-scale tree nodes. We present a comparison of covariance matrices obtained with different setups of the EnMSF. This sensitivity study is necessary since there are many parameters in the algorithm, connected to the tree construction, which can be adjusted to the needs of an application. The study gives an idea of how to use the EnMSF efficiently. The localization property is discussed based on an example where the filter is run with a simple simulator (2D, two-phase) and a binary ensemble is used (the pixels in the permeability replicates take only two values). Several possible patterns for ordering the pixels are applied.
-
-
-
Comparing Different Ensemble Kalman Filter Approaches
Authors B.V. Valles and G. Naevdal
Over the last decade, the ensemble Kalman filter (EnKF) method has become an attractive tool for history matching reservoir simulation models and production forecasting. Recently, the EnKF has been successfully applied in real field studies (e.g. Bianco et al., 2007). Nonetheless, Lorentzen et al. (2005) observed a consistency problem. They showed, using the Kolmogorov-Smirnov test, that for the PUNQ-S3 model the posterior cdfs of the total cumulative oil production for 10 ensembles did not come from the same distribution, contrary to what was expected, since the 10 initial ensembles were generated using a common distribution. The forecasts from these initial ensembles gave consistent cdfs. We investigate whether this issue is related to inbreeding of the ensemble members when applying the EnKF. Houtekamer and Mitchell (1998) proposed using a paired EnKF with covariance localization to improve atmospheric data assimilation. This method was developed to hinder ensemble collapse due to inbreeding of ensemble members and spurious correlation problems far from the observation points. We show that using a paired EnKF, where the Kalman gain computed from each ensemble is used to update the other, does not in fact prevent inbreeding. A new approach, the coupled EnKF, which does prevent inbreeding, is presented. The present work first reviews the consistency issue encountered with the EnKF and investigates the use of paired and coupled EnKFs without covariance localization as a possible remedy to this problem. The method is tested on a simple nonlinear model for which the posterior distribution can be solved analytically. The results obtained are compared with the traditional EnKF and the analytic solution. Next, a hierarchical ensemble filter approach inspired by Anderson (2007), acting as a covariance localization technique, is proposed and tested on the PUNQ-S3 model against the traditional EnKF. This approach seems better suited than the paired or coupled EnKF to help resolve the observed inconsistencies.
-
-
-
Channel Facies Estimation Based on Gaussian Perturbations in the EnKF
Authors D. Moreno, S.I. Aanonsen, G. Evensen and J.A. Skjervheim
The ensemble Kalman filter (EnKF) method has proven to be a promising tool for reservoir model updating and history matching. However, because the EnKF requires that the prior distributions for the parameters to be estimated are Gaussian, or approximately Gaussian, the application to facies models, and channel systems in particular, has been a challenge. In this paper we suggest two different approaches for parameterization of the facies models in terms of Gaussian perturbations of an existing "best guess" model. Method 1 is inspired by level set methods, where surfaces (here facies boundaries) are implicitly modelled through a level set function, normally defined as the signed distance from the nearest surface. Model realizations are generated by adding a Gaussian random field to the initial level set function. Method 2 is based on a similar idea, but the realizations are generated by adding a Gaussian random field to a smoothed indicator function. The smoothing is performed with a Shapiro filter, which is fast and simple to use for any number of dimensions. The performance of the methods is illustrated using a 3D model inspired by a real North Sea fluvial reservoir. It is shown that realistic facies model realizations may be generated from realizations of a Gaussian random field. Based on one realization from the prior for each of the methods, two sets of synthetic production data were generated. Then the prior model ensembles generated with each of the methods were conditioned to each of the two data sets using the EnKF. Reasonably good matches were obtained in all cases, including those where the true model is not a realization from the same statistical model as that used to generate the prior realizations.
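Method 1 can be pictured with a short sketch. The fragment below is a toy 2D illustration under assumed settings (the channel geometry, correlation length and perturbation amplitude are all hypothetical): a signed-distance level set of a "best guess" channel is perturbed by a smooth Gaussian random field and re-thresholded at zero.

```python
# Level-set style facies perturbation: perturb a signed-distance
# function with a smooth Gaussian random field, re-threshold at zero.
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

rng = np.random.default_rng(2)

# "Best guess" facies: a horizontal channel on a 2D grid (hypothetical).
facies = np.zeros((100, 100), dtype=bool)
facies[40:60, :] = True

# Signed distance: positive inside the channel, negative outside.
phi = distance_transform_edt(facies) - distance_transform_edt(~facies)

# Smooth Gaussian random field from filtered white noise.
noise = gaussian_filter(rng.standard_normal(phi.shape), sigma=8.0)
noise *= 5.0 / noise.std()          # hypothetical perturbation amplitude

new_facies = (phi + noise) > 0.0    # perturbed facies realization
```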
-
-
-
A Distance-based Representation of Reservoir Uncertainty: the Metric EnKF
Uncertainty quantification is probably one of the most important and challenging aspects of reservoir modeling, engineering and management. Traditional approaches have failed to provide manageable solution models. In most cases, uncertainty is represented either through a multi-variate distribution model (posterior) in a Bayesian modeling context or through a set of multiple realizations in a more frequentist view. The traditional Monte Carlo simulation approach, whereby multiple alternative high-resolution models are generated and used as input to reservoir simulation or optimization codes, has not proven to be practical, mainly due to CPU limitations, nor is it effective or flexible enough in terms of addressing a varying degree of uncertainty assessment issues. In this paper, we propose a reformulation of reservoir uncertainty in terms of distances between any two reservoir model realizations. Instead of considering a reservoir model realization as some point in a high-dimensional space on which a complex posterior distribution is defined, we consider a distance matrix which contains the distances between any two model realizations. The latter matrix is of size NR×NR, NR being the number of realizations, which is much smaller than, for example, an N×N covariance matrix, N being the number of grid blocks. Note that, unlike a covariance matrix or probability function, a distance can be tailored to the specific problem at hand, for example water breakthrough or OOIP, even though the realizations remain the same. Next, the classical Karhunen-Loeve expansion of a Gaussian random field, based on the eigenvalue decomposition of an N×N covariance table, can be reformulated as a function of the NR×NR distance matrix. To achieve this, we construct an NR×NR kernel matrix using the classical radial basis function, which is a function of the distance, and perform an eigenvalue decomposition of this kernel matrix. In kernel space, new realizations can be generated or adjusted to new data. As an application we discuss a new technique, termed metric EnKF, to simultaneously update multiple non-Gaussian realizations with production data. Using this simple dual framework to probabilistic uncertainty, we show how many types of reservoir uncertainty (spatial, non-spatial, scenario-based, etc.) can be modeled in this fashion. We show various applications of this framework to the problems of history matching, model updating and optimization.
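A minimal sketch of the kernel construction described above, with synthetic stand-in realizations and an assumed Gaussian radial basis function and bandwidth, could look as follows; the leading eigenpairs of the $N_R \times N_R$ kernel matrix play the role that the Karhunen-Loeve modes play in the covariance-based formulation:

```python
# Build an NR x NR RBF kernel from a distance matrix between
# realizations and take its leading eigenpairs (kernel-PCA flavor).
import numpy as np

rng = np.random.default_rng(3)
NR = 50
models = rng.standard_normal((NR, 400))        # stand-in realizations

# Distance matrix; in practice any problem-tailored distance
# (water breakthrough, OOIP, ...) could be used instead.
D = np.linalg.norm(models[:, None, :] - models[None, :, :], axis=-1)

sigma = np.median(D)                           # hypothetical bandwidth
K = np.exp(-(D / sigma) ** 2)                  # radial basis function kernel

eigval, eigvec = np.linalg.eigh(K)             # eigen-decomposition
eigval = np.clip(eigval, 0.0, None)            # guard tiny negative roundoff
lead = eigvec[:, ::-1][:, :5] * np.sqrt(eigval[::-1][:5])  # leading features
```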
-
-
-
Designing Optimal Data Acquisition Schemes Using Kullback-Leibler Divergence within the Bayesian Framework
Authors A.A. Alexandre and B.N. Benoit
Oil and gas reservoirs are commonly complex heterogeneous natural structures that are described by means of several direct or indirect field measurements involving different physical processes operating at different spatial and temporal scales. Seismic techniques provide a description of the large-scale geological structures. In some cases, they can help to characterize the spatial fluid distribution, and this knowledge can in turn be used to improve the oil recovery strategy. In practice, these measurements are always expensive, and due to their indirect, local and incomplete nature, an exhaustive representation of the entire reservoir cannot be attained. Several uncertainties always remain, and they must be properly propagated through the modeling workflow to deduce in turn the uncertainties of the production forecasts. Those uncertainties are essential when setting up a reservoir development scenario. A typical issue is to choose between several oil recovery scenarios, or between positions and trajectories of a new well. Due to the cost of the associated field operations, it is essential to model the risks due to the remaining uncertainties. It is within this framework that devising strategies for setting up optimal data acquisition schemes can have many applications in oil or gas reservoir engineering, or in the recently considered CO2 geological storage operations involving analogous technologies. We present a method that allows us to quantify the information potentially provided by any set of measurements. Applying a Bayesian framework, we quantify the information content of any set of data using the so-called Kullback-Leibler divergence between the posterior and prior distributions. In the case of a Gaussian model where the data depend linearly on the parameters, analytic formulae are given and allow us to define the optimal set of acquisition times. The redundancy of information can also be quantified, highlighting the role of the correlation structure of the prior model.
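The information measure referred to above is the standard Kullback-Leibler divergence between the posterior and the prior (notation assumed):

$$D_{\mathrm{KL}}\!\left(p(\boldsymbol{\theta} \mid \mathbf{d}) \,\big\|\, p(\boldsymbol{\theta})\right) \;=\; \int p(\boldsymbol{\theta} \mid \mathbf{d})\,\ln\frac{p(\boldsymbol{\theta} \mid \mathbf{d})}{p(\boldsymbol{\theta})}\;\mathrm{d}\boldsymbol{\theta},$$

which is zero when the data carry no information and grows as the posterior concentrates relative to the prior.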
-
-
-
Non-intrusive Stochastic Approaches for Efficient Quantification of Uncertainty Associated with Reservoir Simulations
This paper presents non-intrusive, efficient stochastic approaches for predicting uncertainties associated with petroleum reservoir simulations. The Monte Carlo simulation method, which is the most common and straightforward approach for uncertainty quantification in the industry, requires a large number of reservoir simulations and is thus computationally expensive, especially for large-scale problems. We propose an efficient and accurate alternative through collocation-based stochastic approaches. The reservoirs are considered to exhibit randomly heterogeneous flow properties. The underlying random permeability field can be represented by the Karhunen-Loeve expansion (or principal component analysis), which reduces the dimensionality of the random space. Two different collocation-based methods are introduced to propagate uncertainty of the reservoir response. The first is the probabilistic collocation method, which deals with the random reservoir responses by employing orthogonal polynomial functions as the bases of the random space and utilizing the collocation technique in the random space. The second is the sparse grid collocation method, which is based on multi-dimensional interpolation and high-dimensional quadrature techniques. They are non-intrusive in that the resulting equations have exactly the same form as the original equations and can thus be solved with existing reservoir simulators. These methods are efficient since only a small number of simulations are required, and the statistical moments and probability density functions of the quantities of interest in the oil reservoirs can be accurately estimated. The proposed approaches are demonstrated with a 3D reservoir model originating from the 9th SPE comparative solution project. The accuracy, efficiency, and compatibility are compared against Monte Carlo simulations. This study reveals that, compared to traditional Monte Carlo simulations, the collocation-based stochastic approaches can accurately quantify uncertainty in petroleum reservoirs and greatly reduce the computational cost.
-
-
-
Model Complexity in Reservoir Simulation
Authors G.E. Pickup, M. Valjak and M.A. Christie
The goal of history matching is to construct one or more reservoir models that match observed field behaviour and to use those models to forecast field behaviour under different operating conditions. In constructing the model to be matched, there are a number of choices to be made, such as the type of geological model to construct and the number of unknown parameters to use for history matching. We often choose a single geological model and vary a set of parameters to give the best fit to the data (e.g. production history). There are two areas of concern with this approach. The first is that there are usually a number of possible geological models which are all plausible to some degree, and we may be able to make better forecasts by using this information. The second is that increasing the number of unknown parameters may give a better fit to the data, but may not predict so well if the model is over-fitted. The goal of this paper is to examine the application of two techniques to handle these problems. The first technique uses the concept of minimum description length (MDL), which was developed from information theory, and quantifies the trade-off between model complexity and goodness of fit. The second technique is called Bayesian Model Averaging (BMA), and was developed to improve the reliability of weather forecasts using codes developed at different centres; it constructs a suitable average of the models which takes into account not only the uncertainty forecast by each model but also the between-model variance. Both techniques are illustrated with real reservoir examples. The MDL approach is shown on a simple reservoir model based on a Gulf of Mexico field. The BMA approach is shown on a field with a moderate number of injectors and producers.
-
-
-
Adaptive Multiscale-streamline Simulation and Inversion for High-resolution Geomodels
Authors K.-A. Lie, V.R. Stenerud and A.F. Rasmussen
First, we present an efficient method for integrating dynamic data in high-resolution subsurface models. The method consists of two key technologies: (i) a very fast multiscale-streamline flow simulator, and (ii) a fast and robust 'generalized travel-time inversion' method. The travel-time inversion is based on sensitivities computed analytically along streamlines using only one forward simulation. The sensitivities are also used to selectively reduce the updating of basis functions in the multiscale mixed finite-element pressure solver. Second, we discuss extensions of the methodology to grids with large differences in cell sizes and unstructured connections. To this end, we suggest using rescaled sensitivities (average cell volume multiplied by local sensitivity density) in the inversion and propose a generalized smoothing operator for the regularization to impose smooth modifications on the reservoir parameters. Two numerical examples demonstrate that this reduces undesired grid effects. Finally, we show a slightly more complex example with two faults and infill drilling.
-
-
-
Structural Identifiability of Grid Block and Geological Parameters in Reservoir Simulation Models
Authors J. van Doren, J.D. Jansen, P.M.J. Van den Hof and O.H. Bosgra
It is well known that history matching of reservoir models with production measurements is an ill-posed problem; e.g., different choices for the history matching parameters may lead to equally good history matches. We analyze this problem using the system-theoretical concept of structural identifiability. This allows us to analytically calculate a so-called information matrix. From the information matrix we can determine an identifiable parameterization with a significantly reduced number of parameters. We apply the structural identifiability analysis to single-phase reservoir simulation models and obtain identifiable parameterizations. Next, we use the parameterization in minimizing an objective function that is defined as the mismatch between pressure measurements and model outputs. We also apply the structural identifiability analysis to an object-based parameterization describing channels and barriers in the reservoir. Using an iterative procedure, we determine for reservoir models with 2025 grid block permeability values a structurally identifiable parameterization of only 13 parameters. Next, we demonstrate that the parameterization leads to perfect history matches without the use of a prior model in the objective function. We also demonstrate the use of the identifiable object-based parameterization, leading to geologically more realistic history matches.
-
-
-
Model-reduced Variational Data Assimilation for Reservoir Model Updating
Authors M.P. Kaleta, R.G. Hanea, J.D. Jansen and A.W. Heemink
Variational data assimilation techniques (automatic history matching) can be used to adapt a prior permeability field in a reservoir model using production data. Classical variational data assimilation requires, however, the implementation of an adjoint model, which is an enormous programming effort. Moreover, it requires the results of one complete simulation of the forward and adjoint models to be stored, which is a serious problem in real-life applications. Therefore, we propose a new approach to variational data assimilation that is based on model reduction, where the adjoint of the tangent linear approximation of the original model is replaced by the adjoint of a linear reduced model. The Proper Orthogonal Decomposition approach is used to determine the reduced model. Using the reduced adjoint, the gradient of the objective function is approximated and the minimization problem is solved in the reduced space. If necessary, the procedure is iterated with the updated estimate of the parameters. We evaluated the model-reduced method for a simple 2D reservoir model. We compared the method with variational data assimilation where the gradient is approximated by finite differences, and we found that the reduced-order method is about 50% more efficient. We foresee that the computational benefit will increase significantly for larger model sizes, and our current research is focused on quantifying this benefit.
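The POD step can be summarized in a few lines of linear algebra. The sketch below builds a reduced basis from synthetic snapshots via the SVD; the snapshot data, sizes, and energy threshold are all hypothetical:

```python
# Proper Orthogonal Decomposition (POD) of a snapshot matrix via SVD.
import numpy as np

rng = np.random.default_rng(4)
n_state, n_snap = 500, 40
snapshots = rng.standard_normal((n_state, n_snap))   # columns = model states

mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

# Keep enough modes to capture, say, 99% of the snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :r]          # reduced basis; states ~ mean + basis @ z
```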
-
-
-
Simultaneous AVA Stochastic Inversion of Seismic Data
Authors D.E. Kashcheev, D.G. Kirnos and A.M. Gritsenko
We consider an efficient AVA stochastic inversion algorithm which performs inversion of equal-angle offset volumes to volumes of Vp, Vs and density. The exact solution of the Zoeppritz equations is used to determine reflection coefficients. Using a combination of simulated annealing and a Metropolis-Hastings sampler, we generate a set of equiprobable high-resolution volumes of elastic properties which are consistent with the seismic data and well measurements and reproduce detailed geological features. The process generating the stochastic realizations is constrained to reproduce seismic data, well data, low-frequency elastic trends, 3D variograms for elastic parameters, estimates of probability distributions of (Vp, Vs, Den), estimates of vertical alteration of elastic properties, etc. The multi-trace approach and the extensive use of a priori geological knowledge greatly reduce the influence of seismic noise on the inversion results for each of the solutions obtained. A multiple-grid approach is used to treat large-scale variogram features properly. For the reproduction of vertical alteration of elastic properties we use multi-point statistics. The training dataset is formed from log-derived properties, which eliminates the need to build an artificial training model. The application of multi-point statistics has great potential for ensuring a more accurate account of vertical variations of acoustic properties, for obtaining geologically valid inversion solutions, and for decreasing the degree of uncertainty. The seismic inversion algorithm has been efficiently parallelized. It uses a shared-memory computer and inverts different 1D models simultaneously on different processors while preserving the spatial correlation of elastic properties. AVA stochastic inversion results are used for cascaded stochastic simulation of lithology and fluid units, for simulation of reservoir properties, and for uncertainty analysis. Based on a Bayesian approach, we can take into account existing uncertainties, in particular uncertainties caused by inaccuracy of the mathematical model that relates elastic parameters and reservoir properties, and uncertainties caused by the non-uniqueness and inaccuracy of seismic inversion results.
-
-
-
Use of Solution Error Models in History Matching
Authors M.A. Christie, G.E. Pickup, A.E. O'Sullivan and V. Demyanov
Uncertainty in reservoir models can be quantified by generating large numbers of history-matched models and using those models to forecast ranges of hydrocarbons produced. The need to run large numbers of simulations inevitably drives the engineer to compromises, either in the physics represented in the reservoir model or in the resolution of the simulations run. These compromises will often introduce biases in the simulations, and the unknown reservoir parameters are estimated using the biased simulations, which can lead to biases in the parameter estimates. Solution error models can be used to correct for the effects of these biases. Solution error models work by building a statistical model for the differences between fine and coarse simulations (or between full-physics and reduced-physics simulations) using data from simulations at a limited number of locations in parameter space. The statistical model then produces estimates of the error elsewhere in parameter space; these estimates are used to correct the effects of the coarse-model biases. In this work, we apply a solution error model to material balance calculations. Material balance is frequently used in reservoir engineering to estimate the initial oil in place. However, such models are very simple, treating the reservoir as a tank and allowing instantaneous equilibration of fluids within the tank. The results of material balance simulations will therefore not be consistent with multi-cell reservoir simulations. We use a model based on the Teal South reservoir in the Gulf of Mexico to demonstrate how an error model can correct a material balance model to the accuracy of a reservoir simulation.
-
-
-
Joint Quantification of Uncertainty on Spatial and Non-spatial Reservoir Parameters
Authors C. Scheidt and J.K. Caers
The experimental design methodology is widely used to quantify uncertainty in the oil and gas industry. This technique is well suited for uncertainty quantification on non-spatial parameters, such as bubble point pressure, oil viscosity, and aquifer strength. However, it is not well suited to the case of geostatistical (spatial) uncertainty, due to the discrete nature of many input parameters as well as the potentially nonlinear response with respect to those parameters. One way to handle spatial uncertainty is the joint modeling method (JMM). This method, originally proposed in a petroleum context by Zabalza (2000), incorporates both non-spatial and spatial parameters within an experimental design framework. The method consists of the construction of two models: a mean model which accounts for the non-spatial parameters and a dispersion model which accounts for the spatial uncertainty. Classical Monte Carlo simulation is then applied to obtain the probability density and quantiles of the response of interest (for example the cumulative oil production). Another method to quantify spatial uncertainty is the distance kernel method (DKM) proposed recently by Scheidt and Caers (2007), which defines a realization-based model of uncertainty. Based on a distance measure between realizations, the methodology uses kernel methods to select a small subset of representative realizations which have the same characteristics as the entire set. Flow simulations are then run on the subset, allowing for an efficient and accurate quantification of uncertainty. In this work, we extend the DKM to address uncertainty in both spatial and non-spatial parameters, and propose it as an alternative to the JMM. Both methods are applied to a synthetic test case which has spatial uncertainty on the channel representation of the facies, and non-spatial uncertainties on the channel permeability, porosity, and connate water saturation. The results show that the DKM provides a more accurate quantification of uncertainty with fewer reservoir simulations. Finally, we propose a third method which combines aspects of the DKM and the JMM. This third method again shows improvement in efficiency compared to the JMM alone.
-
-
-
An Example of Flow Based Covariance Localisation in the EnKF
Authors D. Kuznetsov and D. Kachuma
The popularity of applying the ensemble Kalman filter (EnKF) and other ensemble methods to history matching of reservoir models has been growing for a number of years. Recent applications have demonstrated that ensemble-based methods are efficient at producing models with a good match to the history. However, they do not necessarily keep the resulting models consistent with the initial geostatistical description. One of the causes of this problem is imperfect estimation of the model state error covariance from ensembles of finite size. First we illustrate the spurious stochastic behaviour of the covariance when applying the EnKF to a simple "well-test like" synthetic example. The experimental covariance estimated from an ensemble of practical size, about a hundred members, is clearly quite different from that expected from an analytical solution. When the size of the ensemble is increased by an order of magnitude, the covariance function becomes significantly smoother. Nevertheless, in all cases it is not difficult to see that the main disturbance of the covariance occurs beyond the area of pressure front propagation. Thus, the results of the EnKF can potentially be improved by applying a flow-based covariance localisation. We then show an application of the EnKF with a streamline-based covariance localisation (Devegowda et al. 2007) to a real field problem. The forward model is solved by a conventional finite-difference reservoir simulator, and at every update step streamlines are traced using a separate routine (Jimenez et al. 2007). The traced streamlines are used to determine an influence zone for each well, and the covariance between the production data from each well and the model parameters is localised accordingly. We compare the results for localisation based on zones determined by the full length of the streamlines and for localisation zones limited by the propagation of the flow fronts. In both cases there is an improvement of the production data match and less disturbance of the initial geostatistical realisations compared to the EnKF without localisation.
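Covariance localisation is commonly implemented as a Schur (element-wise) product of the raw sample covariance with a taper matrix. The sketch below uses a simple distance-based taper on a 1D grid for illustration only; the paper instead derives the localisation zones from traced streamlines:

```python
# Covariance localisation by a Schur (element-wise) product with a
# distance-based taper; grid, ensemble and cutoff are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
n_grid, n_ens = 100, 30
X = rng.standard_normal((n_grid, n_ens))          # parameter ensemble
A = X - X.mean(axis=1, keepdims=True)
C = A @ A.T / (n_ens - 1)                          # raw sample covariance

# Taper: correlation allowed only within a cutoff distance on a 1D grid.
pos = np.arange(n_grid)
dist = np.abs(pos[:, None] - pos[None, :])
cutoff = 15.0
taper = np.clip(1.0 - dist / cutoff, 0.0, 1.0)     # linear taper to zero

C_loc = taper * C                                  # localised covariance
```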
-
-
-
Quantifying Monte Carlo Uncertainty in the Ensemble Kalman Filter
Authors K. Thulin, G. Nævdal, H.J. Skaug and S.I. Aanonsen
We have suggested running multiple EnKF runs with a smaller ensemble size, and have presented a methodology for determining the optimal ensemble batch size for a synthetic 2D model. The optimal combination of ensemble size and number of EnKF runs is clearly case dependent. However, our results suggest that for a given number of forward model runs (n*m), it is better to perform several EnKF runs with a smaller ensemble size than one run with a larger number of ensemble members. Technical contributions: 1) improvement of the EnKF methodology for characterization of the posterior pdf by performing multiple runs with smaller ensemble sizes instead of one large run; 2) a methodology for the optimal choice of ensemble batch size (n) and number of EnKF runs (m) for a fixed total number of ensemble members (m*n); 3) a methodology for uncertainty estimation of the posterior CDF using the EnKF.
-
-
-
Dynamic Data Assimilation by MCMC and Sequential Design of Experiments
Authors D. Busby and M. Feraille
The context of this work is a dynamic data assimilation workflow in which a relatively small number of simulator inputs (usually around 10) has been selected in order to calibrate the reservoir simulation model. The probabilistic approach is used to solve this complex ill-posed inverse problem, where the objective is to obtain a posterior distribution of the selected parameters. The remaining uncertainty on these simulator inputs is then propagated to the outputs of interest, such as oil, water and gas production, to obtain confidence intervals for future forecasts. To reduce the number of required simulations, the objective function is approximated using a non-parametric response surface method based on Gaussian process regression (kriging). To obtain a predictive response surface, a sequential design of experiments is adopted that aims at discovering the minima of the objective function and at accurately reproducing the basins associated with these minima. At each step of the design an MCMC method is used to explore the minima and to select the new simulations to perform. Also, differently from previous works, the estimated response surface variance is included in the posterior computation. As a result, the response surface accuracy improvement obtained by the sequential design also produces a reduction of uncertainty in the estimates of the input posterior distribution. This uncertainty reduction effect, together with the accuracy improvement of the response surface, is monitored at run time at each iteration of the sequential design. An application is presented on the PUNQS field case with seven uncertain parameters. Fewer than two hundred fluid flow simulations were used to produce a reliable sample of the posterior distribution and to propagate the remaining uncertainty onto future forecasts.
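One common way of folding the response surface variance into the posterior, consistent with the description above but with notation assumed rather than taken from the paper, is

$$\pi(\boldsymbol{\theta} \mid \mathbf{d}) \;\propto\; \exp\!\left(-\frac{1}{2}\sum_{i}\frac{\bigl(d_i - \hat{f}_i(\boldsymbol{\theta})\bigr)^2}{\sigma_{d,i}^{2} + \hat{\sigma}_i^{2}(\boldsymbol{\theta})}\right) p(\boldsymbol{\theta}),$$

where $\hat{f}_i$ is the kriging mean, $\hat{\sigma}_i^{2}$ the kriging variance at $\boldsymbol{\theta}$, and $\sigma_{d,i}^{2}$ the measurement error variance; as the sequential design refines the surrogate, $\hat{\sigma}_i^{2}$ shrinks and the posterior estimate tightens.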
-
-
-
Effect of Large Number of Measurements on the Performance of EnKF Model Updating
Authors A. Fahimuddin, S.I. Aanonsen and T. Mannseth
The ensemble Kalman filter (EnKF) is a Monte Carlo method for data assimilation and assessment of uncertainties during reservoir characterization and performance forecasting. The method is based on a low-rank approximation to the system covariance matrix calculated from an ensemble which may be orders of magnitude smaller than the number of state variables. In practical applications, the ensemble size has to be kept relatively small. This may lead to a poor approximation of the cross-covariance matrix, and sampling errors can result in spurious correlations and incorrect changes in the state variables. Also, since the rank of the covariance matrix cannot be larger than the number of ensemble members, the number of degrees of freedom may be too low when a large number of measurements are assimilated, such as with 4D seismic data. In this work, we have investigated the shortcomings of a straightforward EnKF implementation for small ensemble sizes relative to a large number of measurements. This is done by considering a single update of a simple linear model and comparing the EnKF update to the traditional Kalman filter (or kriging) solution, which in this case is exact. The quality of the EnKF update is assessed by considering the mean and variance of the updated state variable, as well as various error norms and the eigen-spectrum of the covariance matrix. Even for this simple model, spurious long-range correlations, ensemble collapse, etc. are clearly seen as the number of measurements increases for a given ensemble size. For a traditional implementation of the EnKF, the ensemble size has to be much larger than the number of measurements to obtain an accurate solution, and the solution gets worse when the measurement uncertainty is reduced.
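The rank limitation mentioned above follows directly from the ensemble covariance estimator (standard notation assumed): with ensemble members $\mathbf{x}_j$, $j = 1, \dots, N_e$, and ensemble mean $\bar{\mathbf{x}}$,

$$P^{e} \;=\; \frac{1}{N_e - 1}\sum_{j=1}^{N_e}\left(\mathbf{x}_j - \bar{\mathbf{x}}\right)\left(\mathbf{x}_j - \bar{\mathbf{x}}\right)^{\!\top}, \qquad \operatorname{rank}\!\left(P^{e}\right) \le N_e - 1,$$

so the update can act in at most $N_e - 1$ directions regardless of how many measurements are assimilated.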
-
-
-
Generalized Approach for Modeling the Nonlinear Hysteretic Response of Reservoir Media
Authors A. Lukyanov, A. Fadili, M. Wakefield and D. BrowningSignificant progress has been made recently in the numerical simulation of heterogeneous reservoir media. One of the fundamental reasons for the hysteretic nonlinear behavior of porous reservoir media is that heterogeneous or damaged materials contain an enormous number of mesoscopic features such as microcracks and macrocracks, joints, and grain-to-grain contacts containing multiple phases. Each of these mesoscopic units exhibits a hysteretic behavior which dominates the macroscopic reservoir response. Based on the work of Preisach and Mayergoyz (the P-M space model) and of Coleman and Hodgdon, a generalized phenomenological model has been developed to describe the hysteretic nonlinear response of capillary pressure and relative permeability. The proposed approach enables the description of active hysteresis by the solution of either a differential or an integral equation. The model focuses on the correct representation of the primary drainage (forced), imbibition (spontaneous and forced), and secondary drainage (spontaneous and forced) curves, as well as scanning curves. The functions and parameters used in the model can be fine-tuned to match different experimental data or can be used as history-matching parameters. It is shown that the proposed model incorporates Killough-type hysteresis as one analytical solution. The differential form of the proposed model allows a smooth transition of both relative permeability and capillary pressure from drainage-to-imbibition or imbibition-to-drainage states and requires minimal storage of parameters during the simulation. Validation of the model indicates that the proposed hysteresis model is stable and robust. The results are presented and discussed, and future studies are outlined.
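The differential form mentioned above is not spelled out in the abstract; by analogy with the Coleman-Hodgdon rate equation it cites, a scanning-curve evolution law might look like (our notation and an assumption, not the authors' equation):

$$\frac{dk_r}{dS} = \alpha\,\mathrm{sign}\!\left(\frac{dS}{dt}\right)\big[F(S) - k_r\big] + G'(S),$$

where F and G are material functions tuned to the bounding drainage and imbibition curves, so that reversing the sign of dS/dt produces a smooth drainage-to-imbibition transition.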
-
-
-
Consistent Capillary Pressure and Relative Permeability for Mixed-wet Systems in Macroscopic Three-phase Flow Simulation
Authors R. Holm, M.I.J. van Dijke, S. Geiger and M. EspedalWAG flooding of an oil reservoir can give rise to large regions of three-phase flow, where the flow parameters, i.e. capillary pressure and relative permeability, are history dependent. This means that three-phase capillary pressure and relative permeability data have to be updated during the flow to account accurately for hysteresis. The idea of this work is to connect a pore-scale model that calculates capillary pressure and relative permeability for given saturations to a three-phase reservoir simulator. This allows us to calculate the actual saturation paths based on pore-scale physics. The pore-scale model comprises a bundle of cylindrical capillary tubes of different radii and wettability, which are randomly distributed according to given density functions. Within the bundle the capillary pressure controls the displacement sequence, and for given capillary pressures it is therefore possible to find the corresponding phase saturations in the bundle. However, to use the pore-scale model in the reservoir simulator, capillary pressure and relative permeability must be obtained from saturation data, rather than the other way around. We hence invert the capillary bundle model iteratively to find the capillary pressures for given saturations. Depending on the required accuracy, these calculations can be time consuming, especially when the behaviour changes between two-phase and three-phase flow. A capillary bundle is completely accessible, so there will not be any trapped or residual saturations. In principle a more complex network model including residual saturations could be used. Incorporation of the bundle model into the simulator demonstrates the effects of consistent pore-scale-based three-phase capillary pressure and relative permeability for different wettabilities on the continuum, i.e. reservoir, scale. This also shows under which conditions pore-scale displacement paths can be reproduced by the macro-scale model.
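A hedged two-phase sketch of the bundle inversion idea: given a target wetting saturation, bisect on the capillary pressure whose drainage state in a tube bundle reproduces it (the paper's model is three-phase and mixed-wet; everything below is simplified and illustrative):

```python
# Illustrative two-phase inversion of a capillary bundle: find the capillary
# pressure whose displacement state reproduces a target wetting saturation.
import numpy as np

rng = np.random.default_rng(1)
r = rng.lognormal(mean=-11.0, sigma=0.5, size=5000)  # tube radii [m]
sigma_ow, theta = 0.03, 0.0                          # IFT [N/m], contact angle
vol = r ** 2                                         # volume ~ r^2 per unit length
p_entry = 2.0 * sigma_ow * np.cos(theta) / r         # Young-Laplace entry pressure

def sw_of_pc(pc):
    # in drainage, tubes whose entry pressure exceeds pc stay water-filled
    return vol[p_entry > pc].sum() / vol.sum()

def pc_of_sw(target, lo=1e2, hi=1e7, tol=1e-4):
    # bisection: sw_of_pc is monotone decreasing in pc
    mid = 0.5 * (lo + hi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if sw_of_pc(mid) > target:
            lo = mid        # need higher pc to drain more water
        else:
            hi = mid
        if hi - lo < tol * mid:
            break
    return mid

print(pc_of_sw(0.5))        # capillary pressure at Sw = 0.5
```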
-
-
-
Characterization of Pore Shapes for Pore Network Models
Authors J.O. Helland, A.V. Ryazanov and M.I.J. van DijkeSimulation of multi-phase flow processes for enhanced oil recovery requires accurate descriptions of the capillary pressures and relative permeabilities as functions of the phase saturations. Pore-scale network modelling is a useful tool to estimate these flow parameters. Some of the main network model characteristics are the pore and throat conductances and capillary entry pressures. Both parameters strongly depend on the pore and throat geometries. Because the shapes of the real pore cross-sections are generally highly irregular, it is important to use idealized shapes that lead to accurate approximations of the above parameters. The most common approach has been to choose a circle / (irregular) triangle / square (C-T-S) pore geometry with a shape factor that matches that of the real pore shape. For these shapes, simple correlations between the flow parameters and the shape factor are available. However, it is well known that the parameters for these very regular convex shapes are often inaccurate compared to the real pore shapes. Here, we propose to represent the shapes by regular, but generally non-convex, n-cornered star shapes. A new n-corner star shape characterization technique has been developed, which takes the shape factor and the dimensionless hydraulic radius as input parameters. A novel numerical technique has been used to derive the real pore entry pressures, whereas analytical expressions are used for the star shapes. A set of 70 individual pores has been extracted from high-resolution 2D images of a Bentheim rock sample. The real shapes have been approximated by C-T-S, as well as by n-corner star shapes. Comparison of predicted and real shape parameters over the entire set of pores shows that the entry radius prediction is approximately equally accurate, and quite good, for both approaches. The single-phase conductance estimation is much better for the n-corner star approximation than for C-T-S. Finally, a capillary bundle model has been constructed from the 70 pores to test the predictive capabilities of the characterization approaches in terms of relative permeabilities and capillary pressures. It has been shown that the n-corner star provides a better approximation of the "real shape" capillary pressure curve than C-T-S. The real-shape relative permeabilities are in good agreement with the curves predicted by both shape approximations.
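The shape factor G = A/P² that both characterizations match can be computed directly from polygon vertices; a small illustrative helper for a regular n-corner star (our construction, not the paper's code):

```python
# Sketch: shape factor G = A / P^2 of an n-corner star polygon, the quantity
# matched during the characterization described above (illustrative only).
import numpy as np

def star_vertices(n, r_out, r_in):
    """Alternating outer/inner vertices of a regular n-corner star."""
    ang = np.arange(2 * n) * np.pi / n
    rad = np.where(np.arange(2 * n) % 2 == 0, r_out, r_in)
    return np.c_[rad * np.cos(ang), rad * np.sin(ang)]

def shape_factor(xy):
    x, y = xy[:, 0], xy[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perim = np.sum(np.hypot(*(xy - np.roll(xy, -1, axis=0)).T))
    return area / perim ** 2

print(shape_factor(star_vertices(4, 1.0, 0.4)))  # non-convex 4-corner star
```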
-
-
-
Optimal Subsurface Flow Management Using Model Predictive Control Techniques
Authors E. Gildin and M.F. WheelerRecent technological advancements in reservoir management, namely the use of smart wells, i.e., controlled valves deployed downhole, have made a significant impact on the improvement of oil recovery for depleted as well as new oil fields. Inspired by classical feedback control techniques, the oil industry has started developing closed-loop optimal reservoir management schemes using optimal control and model updating. Optimization techniques such as model predictive control (MPC) have been successfully applied at the downstream end of the production line, and in the process industry in general. This is due to the fact that MPC is a model-based controller design procedure which can handle processes with stability issues and time delays, together with a framework to incorporate constraints into the design. However, on the upstream side of production, MPC did not gain attention until recently, mainly due to the large-scale nature of the optimization problem. In this paper, we apply real-time optimal control techniques to reservoir management, and in particular to reservoir production. Based on the success of model-based optimization in the process industry, we aim to use MPC schemes to increase the potential for greater oil recovery, and therefore enhanced reservoir management and profitability. MPC offers a robust control implementation together with constraint-handling capabilities. At first, a short survey on MPC is presented and the dynamics of an oil reservoir are introduced together with the basic equations for flow in porous media. Then, linear MPC is applied and the focus is directed to the generation of low-order reservoir models using subspace identification methods. Lastly, due to the highly nonlinear behavior of the reservoir models, nonlinear MPC (NMPC) schemes are suggested. Comparisons are provided through a set of realistic simulations using an in-house simulator.
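A toy receding-horizon linear MPC loop on an identified low-order state-space surrogate, unconstrained so the quadratic program collapses to least squares (all matrices below are placeholders, not a reservoir model):

```python
# Minimal receding-horizon (MPC) sketch for a linear surrogate
# x_{k+1} = A x_k + B u_k, y = C x; unconstrained for brevity.
import numpy as np

A = np.array([[0.95, 0.05], [0.0, 0.9]])   # identified low-order dynamics
B = np.array([[0.1], [0.05]])              # valve-setting input
C = np.array([[1.0, 0.0]])                 # measured output (e.g. rate)
H, ref, rho = 10, 1.0, 0.1                 # horizon, target, input penalty

def mpc_step(x):
    # Prediction matrices: y = Phi x + Gamma u over the horizon
    Phi = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(H)])
    Gamma = np.zeros((H, H))
    for i in range(H):
        for j in range(i + 1):
            Gamma[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()
    # least squares: minimize ||y - ref||^2 + rho ||u||^2
    u = np.linalg.solve(Gamma.T @ Gamma + rho * np.eye(H),
                        Gamma.T @ (ref - Phi @ x))
    return u[0]                            # apply only the first move

x = np.zeros(2)
for k in range(30):                        # receding-horizon loop
    u = mpc_step(x)
    x = A @ x + B.flatten() * u            # "plant" update (same model here)
```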
-
-
-
Conditioning of Geologically Realistic Fault Properties to Production Data Using the Ensemble Kalman Filter
Authors A.D. Irving and D. KuznetsovFaults in reservoir simulation models are typically represented as discontinuities between grid cells, with homogeneous transmissibility multipliers used to represent their effect on fluid flow. Transmissibility multipliers may be manually altered in a trial-and-error fashion to condition the model to production data, often at the expense of geological and physical realism. Methods developed in the past decade incorporate prior geological constraints on fault permeability and thickness to produce more realistic heterogeneous transmissibility multipliers. As a typical simulation grid may have several thousand faulted cell connections, conditioning of these properties to production data is not routine. We present a method to condition geologically derived estimates of fault transmissibility to production data. Appropriate probability density functions are estimated from databases of fault permeability and thickness. Together with parameters for grid properties, these are used as input to a geostatistical property simulation workflow in a 3D geomodelling tool. Multiple property realisations are generated, with fault permeability being averaged into grid permeability and exported for use in numerical fluid flow simulations. Conditioning to production data is achieved using the Ensemble Kalman Filter (EnKF), which has proved to be an efficient approach for such problems. Since the EnKF can handle a large number of uncertain parameters and requires the forward model only as a ‘black box’, it allows for consideration of geological features, such as faults, as objects with spatially continuous heterogeneous properties that can be conditioned to production data. The method is illustrated using a producing North Sea reservoir. Grid and fault properties can be conditioned to production data while honouring the input probability distributions and spatial continuity model. Case-specific and more general conclusions can be drawn about the advantages of the method; potential improvements and extensions are discussed.
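For orientation, a common way to convert a fault of thickness t_f and permeability k_f into a transmissibility multiplier (a harmonic-average sketch under our assumptions, not necessarily the workflow used in the paper) treats the fault as a thin layer in series with matrix of permeability k_m over an inter-node distance L:

$$T_{mult} = \left[1 + \frac{t_f}{L}\left(\frac{k_m}{k_f} - 1\right)\right]^{-1},$$

which reduces to 1 when k_f = k_m and decays as the fault becomes thicker or less permeable.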
-
-
-
Estimating Coarse-scale Relative Permeability on a North Sea Field Case Using the Ensemble Kalman Filter
Authors J.-A. Skjervheim, J.H. Hove, A.S. Seiler and G.E. EvensenFrom a reservoir characterization perspective it is important to introduce a consistent parameterization to improve the reservoir model and provide more reliable predictions of future production. In this paper we present a methodology for history matching and uncertainty quantification of reservoir simulation models using the Ensemble Kalman filter (EnKF). In the updating sequence, we propose a method for estimating coarse-scale relative permeability curves based on a Corey function representation. In addition, the traditional static (permeability, porosity) and dynamic (pressure, saturation) variables are adjusted. During the assimilation we used the oil production rate, gas-oil ratio and water-cut as history data. The EnKF is applied to a StatoilHydro-operated reservoir field in the North Sea, where the relative permeability is identified as highly uncertain. In this work a Corey function parameterization is used to estimate the relative permeability, and in the history matching process the Corey exponent, the end-point saturation and the end point of the relative permeability curve are treated as poorly known parameters. The influence of the relative permeability has been investigated in a real field application, and the results of the field study show that a significant improvement in the history match can be achieved by additionally updating the coarse-scale relative permeability properties. The final estimated ensemble shows a reduction in uncertainty for the relative permeability curves, which demonstrates the capability of the EnKF to quantify the uncertainty in the reservoir model. Furthermore, the final estimated ensemble is used to predict the future production performance of the reservoir. Technical contributions: The paper shows that the EnKF is capable of adjusting relative permeability curves based on the information contained in the assimilated production measurements, and it is shown that estimating coarse-scale relative permeability may be crucial to obtaining satisfactory history matching results in real field applications.
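A sketch of the Corey-type water curve assumed in the parameterization above; each ensemble member would carry its own (exponent, end-point) triple for the EnKF to update (parameter names are ours):

```python
# Illustrative Corey-type water relative permeability curve; the Corey
# exponent, end-point saturations and end-point value are the poorly known
# parameters the EnKF adjusts in the abstract above.
import numpy as np

def corey_krw(sw, swc, sorw, nw, krw_end):
    """Water relative permeability on a normalized saturation axis."""
    s = np.clip((sw - swc) / (1.0 - swc - sorw), 0.0, 1.0)
    return krw_end * s ** nw

sw = np.linspace(0.1, 0.8, 50)
# one "ensemble member": Corey exponent nw, end points swc/sorw/krw_end
krw = corey_krw(sw, swc=0.15, sorw=0.2, nw=2.5, krw_end=0.6)
```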
-
-
-
Permeability Samples for Uncertainty Assessment by a Predictor-corrector Technique, with Application to RML
Authors T. Feng, T. Mannseth and S. AanonsenPermeability samples for performance forecasting should be consistent with all available information. Markov chain Monte Carlo (McMC) samples correctly from the posterior distribution, but is computationally extremely intensive. RML is a good approximation to McMC. RML involves history matching of each permeability sample, and although RML is computationally less intensive than McMC, the computational effort to generate a sufficiently large number of samples is huge for all but very small reservoir models. In this work, we have investigated whether the sampling procedure can be made more efficient by using a predictor-corrector approach in the history matching step of RML. The predictor applies sequential parameter estimation to obtain an estimate with few degrees of freedom, utilizing only part of the available information. The corrector downscales the predictor estimate in a two-step procedure involving all available information, including the estimate obtained with the predictor. The first corrector step is a variant of kriging. The second corrector step is parameter estimation, again involving few degrees of freedom, with basis functions derived from the results of the predictor.
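For reference, the history-matching step of RML minimizes, for each sample, the standard objective (our notation):

$$O(m) = \tfrac12\,(m - m_{uc})^{\mathsf T} C_M^{-1} (m - m_{uc}) + \tfrac12\,\big(g(m) - d_{uc}\big)^{\mathsf T} C_D^{-1} \big(g(m) - d_{uc}\big),$$

where m_uc is an unconditional realization from the prior, d_uc a perturbed data vector, g the flow simulator, and C_M, C_D the model and data covariances; the predictor-corrector construction above is aimed at cheapening precisely this minimization.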
-
-
-
A Local Parameterization Method to Improve History Matching
Authors D.-Y. Ding and F. RoggeroReservoir characterization needs the integration of various data, especially dynamic data, which requires history matching of the measured production and 4D seismic data. However, reservoir facies and heterogeneities are generated with a geostatistical model, and random realizations cannot generally match dynamic data. To constrain the realizations with measured dynamic data, it is necessary to parameterize the reservoir model, especially the geostatistical realizations, and to apply an optimization procedure by minimizing an objective function. However, only a few methods are available to parameterize geostatistical realizations, and they are not always efficient. In this paper, we propose a local parameterization method which improves history matching for better reservoir characterization. The method of gradual deformation, which allows a geostatistical realization to be changed continuously into another, has been increasingly used in history matching. The domain of gradual deformation is generally delimited by gridblocks. In this paper, we first present a technique of spatial combination, which can combine geostatistical realizations on arbitrary domains to form a new one while preserving geostatistical consistency. Then, we present a technique of local parameterization to continuously change the geometrical forms or sizes of the domains. This parameterization of domains leads to a continuous change of geostatistical realizations through the technique of spatial combination. By analyzing the values of the objective function at each well or 4D seismic region, we can define local domains to be considered in the optimization. By changing the forms and sizes of these domains, the proposed local parameterization method can automatically select the best realizations in appropriate domains in order to improve assisted history matching. Several examples of single- or two-phase flow are presented, with parameterization for radial or elliptical domains. The efficiency of the local parameterization approach for history matching is illustrated through these examples.
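In its basic global form, gradual deformation combines two independent Gaussian realizations as (standard formula; the paper's contribution is the local, domain-based version):

$$m(\theta) = m_1 \cos\theta + m_2 \sin\theta,$$

which is again a realization of the same Gaussian model for every θ, so the objective function can be minimized over a single continuous parameter per domain.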
-
-
-
Transient Pressure Well Test Analysis Based on Compressible Discrete Fracture Network
Authors A.A. Shchipanov and S.V. RusakovIn recent years, fractured reservoir characterization has attracted special interest, since dual-medium simulation is widely applied in field studies. In practice, the main difficulties that impede the simulation are the evaluation of fracture properties and the inclusion of their continuous analog in the reservoir model. The research focuses on characterization of deformable fractured reservoirs. A new approach to evaluate fracture properties from well tests has been developed. The theoretical basis of the approach is the compressible discrete fracture network (CDFN) model, an extension of the DFN concept that takes into account the compressibility of fractures in terms of both porosity and permeability. The fracture compressibility is related to the matrix (rock) compressibility, which generates changes in matrix block size and fracture aperture as pressure evolves. The new approach consists of two components. The first is numerical analysis of pressure drawdown and build-up to discover behaviour characteristic of the compressible fracture network. The second is well test interpretation to characterize fracture density, aperture, permeability and porosity, and their dependence on pressure. The interpretation consists in the solution of an inverse problem using analytical procedures and can be applied to identify the properties of purely fractured and fractured porous reservoirs. In parallel, numerical simulation of flow in the CDFN is considered. Simulation of fluid flow in the near-wellbore area allows the CDFN properties to be calibrated. The approach has been tested by analysis of an actual transient pressure well test. The flow simulation allowed comparison of actual and synthetic transient pressure responses and hence an assessment of the correctness and accuracy of the approach. The developed model, approach and flow simulation allow fracture properties to be identified and calibrated using well test simulation. The obtained fracture permeability and porosity, their dependence on pressure, and the matrix block size are used in dual- (or single-) medium reservoir simulation.
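The abstract does not state the pressure closure; one illustrative possibility, assuming a fracture aperture w that responds exponentially to pressure with a fracture compressibility c_f, is

$$w(p) = w_0\, e^{\,c_f (p - p_0)}, \qquad k_f(p) \propto w(p)^3 = k_{f0}\, e^{\,3 c_f (p - p_0)},$$

using the parallel-plate (cubic-law) scaling of bulk fracture permeability at fixed fracture spacing; the actual CDFN closure may differ.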
-
-
-
Matrix-fracture Transfer Function in Dual-medium Flow Simulation – Improved Model of Capillary Imbibition
Authors A.S.A. Abushaikha and O.R. GosselinCapillary imbibition is one of the main recovery mechanisms in naturally fractured reservoirs, where fracture fluid imbibes into the matrix by capillary forces and the matrix fluid is transferred to the fracture. Simulating counter-current imbibition in dual-medium models is a challenging task. The semi-steady-state approach has been used in Warren-and-Root-based transfer functions for the past forty years. However, it misses the fast early-time recovery and assigns average property values to the matrix and fracture. In this paper, we eliminate the semi-steady-state approach in matrix capillary imbibition by making the transfer function depend on time, space and two recovery periods (early and late time). We make it depend on space by dividing the invaded face into two equal sub-faces, each with its own capillary pressure, relative permeability and location. Then, the two contributions are summed into a single mass conservation equation for each matrix cell. In early-time recovery, the saturation front moves laterally in the matrix until it reaches the no-flux boundary. The distance of invasion is calculated using an integral of the inverse capillary pressure curve, the saturation values of the previous time step, and the distance between the invaded face and the no-flux boundary. Then, new capillary pressure, relative permeability and location values are assigned to each sub-face, where the transfer of fluid is calculated. When the saturation front reaches the no-flux boundary, at the start of late time, it moves vertically until the Pc at the no-flux boundary equals the Pc at the invaded face. The capillary pressure and relative permeability of the sub-faces are calculated using the integral of the inverse capillary pressure curve and the saturation value of the previous time step. Our approach matched the results of fine-grid single-porosity models under various parameters of capillary pressure, matrix shape and mobility. It also outperformed three transfer functions: Gilman & Kazemi, Quandalle & Sabathier, and the General Transfer Function proposed by Hu & Blunt.
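For context, the semi-steady-state transfer rate that the proposed model replaces has the generic Warren-Root/Kazemi form (our notation):

$$q_{mf} = \sigma\, V\, \frac{k_m\, k_r}{\mu}\,\big(\Phi_f - \Phi_m\big),$$

with shape factor σ, matrix cell volume V and phase potentials Φ in fracture and matrix; the new model instead evaluates the driving terms per sub-face and per recovery period, making the transfer time- and space-dependent.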
-
-
-
Reserve Estimation for Naturally Fractured Reservoirs Using Numerically Derived Recovery Curves
Authors B. Pirker, A.W. Harrer and Z.E. HeinemannA new approach for early-stage reserves estimation for dual-porosity fractured reservoirs is presented. When assessing the volumetric reservoir content by Monte Carlo simulation, rock volume, porosity and saturations are treated as stochastic variables. When estimating the reserves, the recovery factor must also be handled as a stochastic variable. The recovery factor for a given depletion mechanism depends on the parameters already mentioned and additionally on the shape factor, the matrix permeability and the wettability. This dependency is assessed by means of fine-scale numerical simulation, resulting in so-called recovery curves. This paper discusses how the recovery curves are created and how they are used for reserves estimation. The recovery curves are generated using single-porosity small-scale models and are expressed as recovery versus time or dimensionless time. The block size of these small-scale models corresponds to a given shape factor in the shape factor distribution. The matrix depletion processes are basically dependent on time and matrix parameters. Therefore, matrix parameters that have an influence on the depletion are incorporated in the dimensionless time. These are the shape factor, the matrix permeability, the oil viscosity and a characteristic pressure in the system. The paper also presents an application of the method to a field case.
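An illustrative Monte Carlo reserves loop in which the recovery factor is looked up from a (placeholder) recovery curve indexed by shape factor, in the spirit of the abstract; every distribution and the curve below are invented for the example:

```python
# Illustrative Monte Carlo reserves loop with the recovery factor drawn from
# a numerically derived recovery curve; all inputs are placeholders.
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
grv  = rng.triangular(4e8, 6e8, 9e8, N)      # gross rock volume [m3]
phi  = rng.normal(0.18, 0.02, N)             # porosity
soil = rng.normal(0.75, 0.05, N)             # oil saturation
sig  = rng.lognormal(-3.0, 0.4, N)           # shape factor [1/m2]

def recovery_curve(shape_factor):
    # stand-in for the simulated recovery-vs-dimensionless-time tables
    return 0.35 * shape_factor / (shape_factor + np.median(sig))

reserves = grv * phi * soil * recovery_curve(sig)   # reservoir-volume units
print(np.percentile(reserves, [10, 50, 90]))        # P10 / P50 / P90
```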
-
-
-
Implementation of a Fully Coupled Wellbore Stability Model for Well Drilling Design
By V. RoccaDuring wellbore drilling operations and subsequent oil/gas production activities, both the natural stresses and the original thermodynamic equilibrium of virgin rock formations are altered, and, as a result, borehole instability phenomena can arise with time. Instability phenomena can be so severe as to force abandonment of the wellbore because of its complete collapse. During the design phase of drilling operations, the adoption of a wellbore instability modeling approach is fundamental for systematically taking into consideration time-dependent alterations of the initial natural equilibrium. In this paper the possibility of investigating both the stress-strain and the thermodynamic formation behavior through a fully coupled thermo-poro-elasto-plastic approach is discussed. The numerical solution adopted for the model implementation is presented. In the fully coupled approach, porous flow and stress-strain calculations are performed together: the whole system is discretised on one grid domain and solved simultaneously for both the thermodynamic variables (such as temperature and pressure) and the geomechanical response (such as displacements). This approach has the advantage of internal consistency and stability; furthermore, it preserves second-order convergence of the nonlinear iterations. In order to implement the plastic analysis, an iteratively coupled approach was adopted within the fully coupled routine. According to the iterative coupling technique, the model's basic equations (porous flow and rock deformation) and the plastic behaviour equations are solved separately and sequentially at each nonlinear iteration. This is achieved by the solution of two nested Newton-Raphson cycles at each time step. The iterative coupling approach corresponds to an implicit treatment of the plastic variables, essential to preserve the stability of the elasto-plastic solution. Wellbore stability analyses are also presented to prove the effectiveness of the proposed model in investigating the potential impact of time-dependent phenomena on well drilling design.
-
-
-
Integration of 4D Seismic Data in a History Matching Process Using Local Parameterization Techniques
Authors V. Gervais and F. RoggeroLocal parameterization techniques, such as the gradual deformation method and the perturbation method, have already proved to be efficient for the integration of production data in a history matching process, considering regions of the reservoir associated with wells and defined according to physical or geometrical criteria. The aim of this study is to apply local parameterization techniques to the integration of 4D seismic data, using a new definition of regions based on the error in the seismic data. Each time seismic data are available, streamlines arriving in the parts of the reservoir with the highest error in these data are computed. The aim is to identify “influence areas” which should contribute to the behavior in the badly matched regions. The successive partitions can be used sequentially. Several methods are also proposed to combine them into a single partition. In this study we intend to improve the matching of the saturation distribution in the reservoir obtained at different times from seismic data interpretation. Considering, for instance, a constant porosity, an increase (decrease) of the permeability in the influence areas associated with a delay (advance) of the saturation front may improve the match. This can be achieved by locally modifying the mean of the permeability, using for instance a non-stationary mean to simulate the realization. We also propose to modify linear volume averages of the simulated petrophysical property distributions, in one or several regions, using a kriging-based methodology. Using this methodology, history matching of production and saturation data has been performed, optimizing local averages of the permeability and porosity realizations in the influence areas. The results show a strong improvement of the saturation match. They are also compared to those obtained with local gradual-deformation-based optimizations considering the same partitions of the reservoir.
-
-
-
4D Seismic Modeling Integrated with the Ensemble Kalman Filter Method for History Matching of Reservoir Simulation Model
Authors M.C. Haverl, J.A. Skjervheim and M. LandrøFor a real Norwegian North Sea field case we show simulation results where 4D seismic data are incorporated in the history matching process of reservoir simulation models. It has been shown that the Ensemble Kalman Filter technique, a Monte Carlo type Bayesian sequential inversion method, is capable of performing this task. In addition to predicted model states, data uncertainties are provided. In fact, seismic data may relate not only to one instance in time (e.g. 3D), but to several (4D or time-lapse seismic data, where the differences are of concern). Furthermore, they may be investigated in different domains, depending on their sensitivity to production-related changes in the reservoir. In the given case study we observe an emerging fluid contact due to gas injection and corresponding travel-time shifts of seismic events in the real data. We model these effects by generating synthetic seismic sections, but in an environment which allows Eclipse-simulated variables, namely fluid saturations, densities and pressures, to be incorporated. The Compound model builder, an interface shared by geophysical and reservoir engineering data, represents our environment for integrated seismic and geologic modeling. The basic reservoir variables to be updated in this study are the static variables porosity and permeability and the dynamic variables pressure and saturation. These variables are optimized by minimizing a joint misfit function consisting of production data, such as oil production rate, water-cut and gas-oil ratio, and seismic data, which is stacked amplitude data. The benefit of including seismic data lies in a better overall reservoir description, especially in areas not sampled by production data observations.
-
-
-
Joint Assimilation of Production Data and Time-Lapse Density Changes from Seismics Using the Representer Method
Authors J.K. Przybysz-Jarnut, J.D. Jansen and A. GisolfWe used the representer method to perform data assimilation (automatic history matching) of production data and 4D seismic data to estimate the permeability field in numerical reservoir models. The 4D seismic data were incorporated in the form of interpreted time-lapse density changes. The representer method requires two numerical simulations per measurement, which becomes infeasible if seismic data in all grid blocks were taken into account. Therefore, only the most informative data, at the saturation front moving over time, were used in the assimilation. The method was tested in a twin experiment using synthetic data. The results were assessed in terms of the quality of the history match (mismatch between ‘true’ and simulated measurements) and in terms of predictions (mismatch between ‘true’ and simulated predictions after the history matching period). We found a clear improvement of the joint assimilation over individual assimilation of production and seismic data.
-
-
-
The Role of Non-equilibrium Thermodynamics in Compositional Modeling
Authors W. Lambert, D. Marchesin and J. BruiningWe show how to derive compositional models from balance models that include source terms representing mass transfer between phases. The mass transfer rate is taken to be proportional to the deviation from thermodynamic equilibrium. In the balance models, the mass transfer is very fast and local thermodynamic equilibrium is quickly attained. The derivation is done by means of an asymptotic expansion in which the small parameter is the time scale of mass transfer relative to the hydrodynamic time scale. The new theory is illustrated by an example of thermal flow of steam, nitrogen and water in a porous medium, which can be useful for soil remediation.
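In our notation, the construction can be summarized as writing each interphase mass-transfer source as a relaxation toward equilibrium,

$$q_\alpha = \frac{1}{\varepsilon}\big(c_\alpha^{eq} - c_\alpha\big),$$

where ε is the ratio of the mass-transfer time scale to the hydrodynamic time scale; expanding the balance equations in powers of ε and letting ε → 0 recovers the compositional (local-equilibrium) model at leading order.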
-
-
-
Tie-simplex Based Framework for General Multi-phase Thermodynamic Equilibrium Computations
Authors D.V. Voskov and H.A. TchelepiWe present a general framework for thermodynamic equilibrium calculations of multi-phase, multi-component mixtures. We use the fact that the compositional space can be represented as a high-dimensional simplex. For given values of pressure and temperature, the phase behavior of a particular system can be described using a low-dimensional tie-simplex, for example, tie-lines and tie-triangles for two- and three-phase systems, respectively. In general, a high-dimensional compositional space can be parameterized using a lower-dimensional tie-simplex subspace. Tie-simplex computation and interpolation procedures complement the parameterization to complete the mathematical framework. The robustness and efficiency of the method are demonstrated using several multi-phase equilibrium problems of practical interest. One type of problem is the equilibrium flash calculation of systems with a large number of phases. The complexity and strong nonlinear behaviors associated with such systems pose serious difficulties for standard techniques. The tie-simplex representation of the equilibrium data, which may be obtained using a particular equation of state, for example, can be pre-processed. The parameterized space can be used to obtain the phase compositions (i.e., flash results) or used as an initial guess to accelerate convergence of standard equation-of-state (EoS) based procedures. The second type of application is multi-phase multi-component displacement problems. In the standard compositional simulation approach, an EoS is used to describe the phase behavior. For each gridblock, given the temperature, pressure and overall composition, the EoS is used to detect the phase state (e.g., one, two, or more phases) and to calculate the phase compositions if multiple phases are present. These EoS computations can dominate the overall simulation cost. Adaptive computation of the tie-simplex space can be used to speed up the EoS computations of large-scale problems of practical interest by an order of magnitude or more.
-
-
-
Use of a Reduced Method in Compositional Simulation
Authors R. Okuno, R.T. Johns and K. SepehrnooriSimulating gas injection processes requires a compositional model to predict the fluid properties resulting from mass transfer between the reservoir fluid and the injection gas. A drawback of compositional simulation is the limited efficiency and robustness of phase equilibrium calculations. Reduced methods for phase equilibrium calculations have been studied as a potential solution to improve the efficiency of compositional simulation. However, most of those studies have been performed only in stand-alone calculations, and the robustness and efficiency of a reduced method have not been confirmed in compositional simulation. In this research we develop a robust and efficient algorithm for a reduced method and validate it in compositional simulation. We examine the efficiency and convergence properties of the conventional algorithm for a reduced method, and solve several implementation problems in a compositional simulator. The reduced method is implemented in UTCOMP, a compositional IMPEC simulator, to demonstrate its performance for various numbers of components and degrees of miscibility. The results show that the reduced method enables significant savings in the execution time of compositional simulation without loss of accuracy, compared to standard methods. We also observe that the reduced method exhibits improved robustness, especially for miscible processes where composition paths go near critical regions.
-
-
-
Adapted Nonlinear Optimization Method for Production Data and 4D Seismic Inversion
Authors D. Sinoquet and F. DelbosIntegrated inversion of production history data and 4D seismic data for reservoir model characterization leads to a nonlinear inverse problem that is usually cumbersome to solve: the associated forward problem, based on fluid-flow simulation in the reservoir for production data modeling on the one hand, and on a petro-elastic model for 4D time-lapse seismic data modeling on the other, is usually computationally time consuming; the number of measurements to be inverted is large (up to 500 000); and the number of model parameters to be determined is up to 100. Moreover, the derivatives of the modeled data with respect to those parameters are usually not available. We propose an optimization method based on a Sequential Quadratic Programming algorithm which uses gradient approximation coupled with a BFGS approximation of the Hessian. In addition, the proposed method can handle nonlinear equality and inequality constraints. Realistic applications are presented to illustrate the efficiency of the method.
-
-
-
The Hamiltonian Monte Carlo Algorithm in Parameter Estimation and Uncertainty Quantification
Authors S. Subbey, M. Alfaki and D. HauglandThe Hamiltonian Monte Carlo (HMC) algorithm is a Markov Chain Monte Carlo (MCMC) technique which combines the advantages of Hamiltonian dynamics methods and the Metropolis Monte Carlo approach to sample from complex distributions. The HMC algorithm incorporates gradient information in the dynamic trajectories and thus suppresses the random-walk nature of traditional Markov chain simulation methods. This ensures rapid mixing, faster convergence, and improved efficiency of the Markov chain. The leapfrog method is generally used in discrete simulation of the dynamic transitions. In this paper, we refer to this as the leapfrog-HMC. The primary goal of this paper is to present the HMC algorithm as a tool for rapid sampling of high-dimensional and complex distributions, and to demonstrate its advantages over the classical Metropolis Monte Carlo technique. We demonstrate that the use of an adaptive-step discretization scheme in simulating the dynamic transitions results in an algorithm which significantly outperforms the leapfrog-HMC algorithm. Relevance to reservoir parameter estimation and uncertainty quantification is discussed.
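The leapfrog-HMC transition that the adaptive-step scheme is benchmarked against, in its textbook form (U is the negative log-posterior; step size and trajectory length below are arbitrary choices of ours):

```python
# Textbook leapfrog-HMC transition (the baseline the adaptive-step variant
# is compared against); U is the negative log-posterior, grad_U its gradient.
import numpy as np

rng = np.random.default_rng(0)

def hmc_step(q, U, grad_U, eps=0.05, L=20):
    p = rng.standard_normal(q.shape)        # resample Gaussian momenta
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(q_new)      # initial half step in momentum
    for _ in range(L):
        q_new += eps * p_new                # full step in position
        p_new -= eps * grad_U(q_new)        # full step in momentum
    p_new += 0.5 * eps * grad_U(q_new)      # correct final step back to half
    dH = U(q_new) - U(q) + 0.5 * (p_new @ p_new - p @ p)
    return q_new if np.log(rng.uniform()) < -dH else q   # Metropolis test

# e.g. sampling a standard Gaussian:
# U = lambda q: 0.5 * q @ q; grad_U = lambda q: q
```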
-
-
-
Stochastic Optimization Using EA and EnKF – A Comparison
Authors O. Pajonk, M. Krosche, R. Schulze-Riegert, R. Niekamp and H.G. MatthiesOptimization methods for model validation in reservoir simulation are increasingly included in best-practice workflows in the Oil & Gas industry. Most optimization methods applied to model validation in reservoir simulation, including so-called Evolutionary Algorithms like Genetic Algorithms (GA) and Evolution Strategies (ES), use an objective function definition based on the overall simulation period. The integration of a sequential data assimilation process is conceptually not embedded in those optimization methods. The Ensemble Kalman Filter (EnKF) has entered this field for its appealing features. Sequential data assimilation allows the implementation of real-time model updates, whereas classical optimization techniques require simulating the complete history period, which may have negative effects on efficiency and use of computing time. In contrast, the EnKF sequentially assimilates information streams into a set of numerical models. While being a special case of a fully fledged particle filter, the EnKF method, as applied to reservoir simulation, has proven to generate results with a reasonable number of ensemble members. The similarities between a particle filter (Monte Carlo filter) and an Evolutionary Algorithm (Genetic Algorithm) have been pointed out previously from a rather theoretical point of view (1, 2). In this work we present a concise overview of Evolutionary Algorithms and the Ensemble Kalman Filter in such a way that the cross-relations become apparent. Similarities are highlighted and the potential for hybrid couplings is discussed. Practical implications for the implementation of these methods are derived. 1. Higuchi, Tomoyuki. Monte Carlo filter using the genetic algorithm operators. Journal of Statistical Computation and Simulation, 1997, Vol. 59, No. 1, pp. 1-23. 2. Higuchi, Tomoyuki. Self-organizing Time Series Model. In: A. Doucet, J.F.G. de Freitas and N.J. Gordon (eds.), Sequential Monte Carlo Methods in Practice. Springer, 2001, pp. 429-444.
-
-
-
Waterflooding Efficiency of the Oil Rim Development
We consider oil rim development with waterflooding. We discuss a system in which horizontal production and injection wells are oriented in parallel and drilled in the oil rim. The research is based on mathematical modeling that allows us to describe and simulate reservoir behavior. The model is quite simplified and takes into account only the most important factors. The problem is represented as a three-phase 2D model. The considered area is a vertical section of an element of the development system and consists of the space between the production and injection wells. The system of differential equations describing the filtration process consists of one parabolic and two hyperbolic equations, which are solved for certain initial and boundary conditions using finite-difference methods. Implicit schemes are used for both the parabolic and the hyperbolic equations. We consider a simplified optimization problem and study an objective function describing the profit of oil rim development. The objective function depends on both natural and technological factors: rock and fluid properties, oil rim thickness, water-pressure system activity, the distance between wells, their positions and operating regimes, etc. In this work the author explores some of the technological factors which influence the oil recovery factor and the objective function, and analyses waterflooding efficiency for oil rim development. One significant effect has been found: oil recovery increases from the moment the injected water has covered the oil rim and begins to displace the gas cone. This is a characteristic feature of the development of thin oil rims. The applied method, based on mathematical modeling, permits the solution of design and control problems.
-
-
-
Applicability of Newton-based Optimization Method Merged with the Monte Carlo Approach to Log Interpretation
Authors D. Viberti, E. Salina Borello and V. RoccaThe determination of the main petrophysical parameters is normally based on log data. During the log interpretation process, suitable mathematical algorithms taking into account all the available data and information are applied in an attempt to reliably relate log measurements to mineralogy, porosity and water saturation. Uncertainty is always associated with the results of the interpretation due to possible errors in the measurements, in the selected petrophysical models, and/or in the input parameters required by the models. The possibility of estimating the reliability of the petrophysical characterization of the reservoir rocks has a strong impact on the evaluation of the hydrocarbon originally in place (HOIP) and, thus, on the technical and economical exploitation strategies. The evaluation of the uncertainties associated with the results of the well log interpretation process can be performed only by applying a methodology that couples a robust optimization process to a representative statistical approach. On the basis of previous studies and applications to real cases, a methodology for log inversion and uncertainty estimation was formulated. According to this methodology, log interpretation was performed using the iterative solution of the Lagrangian relaxed problem with the Gauss-Newton algorithm, in which the constraints were managed with the active set method; the Monte Carlo statistical approach was applied to the log interpretation routine in order to assess the associated uncertainty. The use of a fast iterative inversion method proved fully compatible with the use of the Monte Carlo approach to estimate the range of uncertainty associated with the reservoir characterization. The rigorous formulation of the methodology and a discussion of the applicability limits and convergence requirements of the inversion method are presented in this paper. Results of the analyses carried out in the study showed that the validity limits are fully consistent with the domain of the petrophysical interpretation. The results obtained by applying the methodology to a real case, data from a deep-water exploration well complicated by poor characterization of the reservoir fluids, are also presented in the paper.
-
-
-
Reservoir Simulation Quality Assessment Based on History Matching in Well Bore Zone
Authors S. Stepanov and V. VasilyevNumerical reservoir simulation is often accompanied by difficulties with history matching, especially as regards history matching of individual wells. On the one hand, the problems are explained by the amount and quality of the initial information on a reservoir. On the other hand, these problems pertain to the numerical realization of the simulator applied, in particular to wellbore zone modeling. Experience with reservoir simulation often shows calculated watercut values exceeding the actual ones. A similar situation occurs in the analytical solution of the wellbore zone displacement problem. It therefore appears necessary to assess the quality of reservoir simulation in terms of well-cell fluid distribution. In order to solve this problem, a special inverse wellbore zone modeling problem has been formulated, which accounts for actual development data and the numerical results of reservoir simulation obtained using the development-target model created. The uncertainty analysis is based on the Monte Carlo method; the objective function is minimized using the Nelder-Mead method. The key adjusted parameters are porosity, absolute and relative permeability, and capillary pressure. In solving the inverse problems, filtration features related to the combined action of capillary, gravity and elastic forces become apparent. It is noted how important it is to obtain a proper description of the wellbore zone filtration process. The variance in results obtained with and without capillary pressures is shown. Also, it is demonstrated that reservoir simulation models with a relatively simple filtration pattern, but accounting for capillary, gravity and elastic forces, can reproduce the complex watercut dynamics of the wells under study. The suggested method has been tested on one of the Orenburg oilfields. Application of this approach allows the numerical models to be analyzed and adjusted at a new level, avoiding multiple reservoir simulation runs that distort the physical essence of the actual displacement.
-
-
-
Comprehensive Model of Heat Transfer between ESP Motor and Multiphase Fluid Included Solid Phase
By A.V. YazkovThis work is devoted to the development of a comprehensive mechanistic model of heat transfer between an ESP motor and a multiphase fluid. The model takes into account not only such parameters as flow pattern, gas/oil ratio and flow regime, but also the solid phase. The solid phase was included based on recent experimental investigations of heat transfer in multiphase systems containing a solid phase and flowing in annuli. The developed model was verified by means of temperature data acquired with a telemetry system during steady-state production from wells with ESPs. Based on the model, a sensitivity analysis for such parameters as solid size, solid-phase concentration and motor shroud size was performed. The impact of each parameter on motor cooling is discussed. It is shown that optimization of the motor shroud size can lead to significant heat transfer enhancement between the ESP motor and a solid-liquid mixture as compared to heat transfer between the ESP motor and a liquid. This effect is achieved due to solid particles, which have sufficient freedom to bombard the motor wall and thereby thin the viscous boundary layer. However, if the motor shroud size selection is inconsistent with the given solid-phase concentration and solid particle size, the thermodynamic conditions for motor cooling can deteriorate.
-
-
-
Effect of Kinetics of Permeability Alteration on Development of Clayey Oil Formation
By M. ChirkovFormation damage, a common problem associated with field operations, is often a major factor in reducing the productivity and injectivity of a well in a petroleum reservoir. Numerous laboratory and field studies indicate that formation damage occurs during many phases of reservoir development: drilling, completion, workover, production, stimulation, waterflooding, or improved oil recovery. Formation damage is a process of reduction of the initial permeability. At present, many mathematical formation damage models have been developed. These models contain a number of parameters: the rate constant for re-entrainment, the rate constant for liquid absorption, the phenomenological constants for swelling potential, the release constant for swellable fines, the rate constant for the release rate, the rate constant for the change in pore size with depositional morphology, the critical pressure force required to mobilize fines, the plugging efficiency, the critical total pore volume flux for the onset of swelling, and the coefficient of volumetric thermal expansion. In order to describe formation damage, these parameters must be known, which is very problematic. We analyzed a number of laboratory experiments and formation damage studies. Comparing the results, we found that permeability exhibits similar alteration kinetics for different damage mechanisms. Next we determined the principal damage mechanisms of a real oil formation and associated them with the laboratory relations of permeability alteration. As a result, integrated relations for permeability reduction were obtained. This method can be used as a predictive tool for quantitative prediction of field development indicators for different rock types and operation strategies. The equations of permeability kinetics were incorporated into a computational hydrodynamic model of an oil formation. The effect of permeability kinetics on the formation development process was shown for a clayey oil formation. Based on laboratory and field observations, methods of enhanced oil recovery and formation damage prevention were proposed.
-
-
-
Modeling of Borehole Zone Influence on Stimulation Efficiency of Gas Production
Authors L. Gaidukov and N.N. MikhailovDuring gas field development, one of the most serious problems is the decline in well production due to the deterioration of borehole zone properties. Many methods exist for restoring well production, and hydrochloric acid treatment has found wide application. However, in practice the actual increase in well production is commonly lower than expected. This negatively affects the economic indicators and may lead to loss of profit. Estimating treatment efficiency demands knowledge of such an important quantity as the relative increase in gas production. It was previously believed that after borehole zone treatment the borehole zone permeability can change only through cleaning and dissolution of the seam. However, dedicated research has shown that after borehole zone treatment the compressibility of the seam may change too. In this connection, we have built a mathematical model of gas filtration toward a well with a complex borehole zone which takes into account both blockage and compressibility. When only borehole zone blockage is considered, the filtration equation has an analytical solution in terms of an integral function; when the compressibility of the seam is also accounted for, numerical methods are needed. We consider the relative permeability of gas as the product of two functions, k(r,p)=A(r)*f(p), where the function A(r) models borehole zone blockage and f(p) models the compressibility of the seam. Previously, the function A(r) was treated as purely empirical; various forms have been proposed, such as linear, quadratic, square-root and constant functions, but none of these formulations has a physical validation. We suggest using a probabilistic convolution method with a Gaussian kernel for well-logging data interpretation and for determining the radial filtration properties of the borehole zone. Our previous papers showed that the form of the A(r) function greatly influences well production, which is why an adequate definition of this function is very important. Using this model, we studied the dependence of hydrochloric acid treatment efficiency on such parameters as the radius of acid penetration, the permeability at the well, and the compressibility coefficient. We also propose a technological scheme for predicting treatment efficiency.
-
-
-
Geomechanical Constraints for Assisted Wellbore Trajectory Design – Stability Navigation Algorithm
In wellbore stability studies, traditional methods to predict optimum drilling directions for an established stress regime involve pointwise contour plots tied to a corresponding depth, which are treated as a discontinuous set of solutions for one continuous problem: the trajectory definition. In other words, desired well positions are isolated estimations that take time to refine into one integrated stability proposal. Due to this obstacle, these studies focus primarily on delivering optimum mud plans that only quote reference drilling directions, instead of integrated mud-trajectory optimized solutions. The Stability Navigation Algorithm (SNA) developed in this study represents a new philosophy for addressing drilling direction effects in wellbore stability. Managed stress redistribution around a borehole constitutes the basis of the algorithm's navigation (search) criteria, resulting in interesting exposures of path volumes where complete "stable" trajectories are viable, based on geomechanical constraints for breakout widths and fracture potentials, trajectory delimitations, DLS and mud weights. The main idea is to upgrade unbound well positions into complete trajectories with automated recursive wellbore stability analysis performed by the SNA. The algorithm runs on a 3D geomechanical model, so the distribution of rock mechanical properties is a major factor affecting possible solutions. This study analyses the development of the algorithm, its computation strategy and a proposed workflow or application method, based on the interpretation of the resulting behaviors of the outputs. Application scenarios are discussed, as well as the input parameters and 3D visualization (interpretation) techniques.
-
-
-
Mathematical Modeling of Multiphase Flow in Fracture Media
Authors A.K. Pergament, P.Y. Tomin and V.A. SemiletovMultiphase flow in fractured media with different fractal structures is considered. The homogenized model of the processes depends on the structure pattern. One structure leads to the known double-porosity model; the other may be described by tensor total and pseudo phase permeabilities. The influence of capillary effects is taken into consideration. Algorithms for the tensor permeability calculation are developed using the self-similar properties of the fractal structure. We have used the support operator method for the construction of the difference scheme. This method allows approximation of both the dissipative energy and the phase fluxes. The results of modeling several problems are presented.
-
-
-
Optimisation of Scale Squeeze Treatment Designed for Heterogeneous Reservoir
By R. PaswanScale deposition in the near-wellbore region is an extensive problem in hydrocarbon production. When seawater breakthrough occurs, scale formed by the mixing of seawater with formation water flows into the wellbore and up to the surface. It deposits throughout its path and restricts the path for hydrocarbon flow. Severe deposition of scale inside the tubing or in the near-wellbore region greatly diminishes the productivity of the well. Scale inhibitor squeeze treatment is one of the remediation processes used to avoid scale deposition in the near-wellbore region. High water-producing zones bring more scale into the tubing, and these zones are therefore the primary treatment targets. In heterogeneous reservoirs, squeezed chemicals are placed into the zones favourable to fluid flow rather than the zones targeted for treatment. To maximise the effectiveness of the treatment, optimisation of the treatment is necessary. To aid in the optimisation of the squeeze treatment, a near-wellbore simulation approach is applied. Using this approach, the sensitivity of the different parameters involved in the squeeze design (e.g., main slug volume, main slug concentration, overflush volume, etc.) is determined in order to find an optimum concentration and optimum slug volume which can enhance the placement of scale inhibitor in unfavourable zones. It is found that the main slug concentration and the overflush volume are the significant factors affecting chemical placement. This presentation describes the severity of the scale problem and follows the process of optimising the squeeze treatment using near-wellbore simulation.
-
-
-
Reducing Well Test Duration at the Design Stage by Means of Successive Deconvolution Application on the D. Field Example
By A.V. YazkovThe need for a systematic approach to well test design is beyond doubt. It is particularly relevant for test design in an exploration area with complex geology, where we usually possess a minimum of data about the reservoir and are required to derive them with sufficient confidence in the shortest time. The deconvolution technique allows the duration of well testing to be reduced to a minimum, although it demands linearity of the system investigated. In this work a thorough well test design is performed in an attempt to take into account all the issues concerning the application of the deconvolution technique and conventional test data interpretation. For design purposes, simulations in Eclipse and Saphir were integrated in order to build an unsophisticated but representative model in Saphir. The influence of the aquifer and of well interference was also considered. The interference was taken into account to linearize the system for the application of the deconvolution technique. A method for distributing time between drawdown and build-up in the test sequence is suggested. A sensitivity analysis with respect to reservoir and well parameters is carried out. Finally, the conventional test and the test designed for deconvolution are compared, and economic issues are addressed.
-
-
-
Reservoir Characterization Using the Continuous Wavelet Transform (CWT) Combined with the Self Organizing Map (SOM)
By S. OuadfeulThe main goal of this paper is to identify lithologies from well-log data using the continuous wavelet transform (CWT) combined with the self-organizing map (SOM). Based on the fractional-Brownian-motion character of well-log data, we estimate the Hölder exponent at each depth. The set of Hölder exponents calculated for all the well logs forms the input of the neural machine, which assigns a lithology to each input. We applied this technique to synthetic and real well-log data; the results show that the proposed technique is a powerful tool for reservoir characterization, a crucial problem in geophysics, because it attributes a particular lithology to each set of roughness coefficients (which are directly related to rock type). Keywords: lithologies, well-log data, CWT, self-organizing map (SOM), Hölder exponent, roughness coefficients.
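A hedged sketch of such a pipeline, assuming PyWavelets for the CWT, a Mexican-hat wavelet, and a minimal hand-rolled 1-D SOM; the scales, map size, and learning schedule are illustrative choices, not the paper's settings.

```python
import numpy as np
import pywt  # PyWavelets

def holder_exponents(log, scales=np.arange(1, 33)):
    """Per-depth slope of log|W(a, z)| versus log(a); for an fBm-like
    signal this slope relates linearly to the local Hölder exponent."""
    W, _ = pywt.cwt(np.asarray(log, float), scales, "mexh")
    y = np.log(np.abs(W) + 1e-12)             # (n_scales, n_depth)
    x = np.log(scales)[:, None] - np.log(scales).mean()
    return (x * y).sum(0) / (x ** 2).sum()    # least-squares slope per depth

def train_som(X, n_units=8, iters=2000, lr0=0.5, seed=0):
    """Minimal 1-D SOM over feature vectors X; returns the codebook."""
    rng = np.random.default_rng(seed)
    X = X[:, None] if X.ndim == 1 else X
    w = X[rng.choice(len(X), n_units)]
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((w - x) ** 2).sum(1))        # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = max(n_units / 2 * np.exp(-t / iters), 0.5)
        h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)                # neighbourhood update
    return w

# usage: exponents = holder_exponents(gamma_ray_log); som = train_som(exponents)
```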
-
-
-
Structure and Mobility of Irreducible Oil in Anthropogenically Changed Formations
Irreducible oil (IO) is studied as a hydrodynamic system consisting of separate types: IO at the micro- and macro-level, and connate and conventionally mobile IO, are defined. We found that technological factors such as pressure gradient, wettability, and surface tension affect IO significantly. Irreducible oil saturation (IOS) is one of the most important parameters characterizing development efficiency. Traditionally, IOS was associated with the formation microstructure and considered through displacement efficiency, using its relationship with the filtration-capacity properties of the formation; IOS was treated as a property of the producing formation, and the influence of technology on final oil recovery used to be determined through volumetric efficiency. It was determined that conventionally mobile irreducible oil becomes mobile above a critical ratio of pressure gradient to capillary pressure. Once this mobility threshold is reached, the value of IOS depends on the technological displacement conditions; after a second threshold value is reached, almost all conventionally mobile oil is withdrawn and only connate oil remains in the formation. Present geo-technological models do not take the influence of these factors on IOS into consideration. Therefore, dynamic models of IOS were developed, with attention paid to the threshold character of mobility, operating practices, and the mechanisms of IOS formation. The distribution of IOS in the cross-well area was studied using special hydrodynamic models. Results of investigating the effect of technology on IOS (e.g., well spacing system, pattern arrangement, and disturbances and modernization of the designed system) are presented. As shown, a compound anthropogenic heterogeneity of IOS forms in the cross-well area even for homogeneous formations. Depending on the well spacing system, IOS can be heterogeneous with respect to displacement efficiency and phase permeability. We consider the possibility of additional IO recovery using intensive technologies (flow control, sidetracks, horizontal wells). For different well spacing systems, an analysis of the relative influence of the formation microstructure and of technological factors on IOS is given.
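A minimal sketch of the two-threshold mobility behaviour described above; the threshold values, the saturations, and the linear withdrawal between thresholds are illustrative assumptions, not the authors' dynamic IOS model.

```python
import numpy as np

def ios(r, s_connate=0.12, s_mobile=0.18, r1=1.0, r2=3.0):
    """IOS versus the pressure-gradient/capillary-pressure ratio r:
    all conventionally mobile IO retained below r1, withdrawn linearly
    between r1 and r2, only connate oil remaining above r2."""
    frac_withdrawn = np.clip((np.asarray(r, float) - r1) / (r2 - r1), 0.0, 1.0)
    return s_connate + s_mobile * (1.0 - frac_withdrawn)

print(ios([0.5, 2.0, 4.0]))   # [0.30, 0.21, 0.12]
```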
-
-
-
Asymptotic Dykstra-Parsons Estimates and Confidence Intervals
Authors S. Pintos, C. Bohorquez and N.V. QueipoThe Dykstra-Parsons (DP) coefficient, the most popular static heterogeneity measure among petroleum engineers, may be subject to significant error, in particular when assumptions are made about the permeability distribution (parametric approaches); this can lead to unrealistic reservoir performance predictions and unsuccessful development plans. This paper presents the development of an asymptotic distribution of the Dykstra-Parsons coefficient that is independent of the probability distribution of the permeability variable. The effectiveness (bias and confidence intervals) of the proposed approach is demonstrated by comparing its results with those obtained using the classical method and well-known parametric methods, under different scenarios of reservoir maturity (i.e., number of wells) and degrees of violation of the log-normal probability density function assumption. The results show that in the vast majority of the case studies the proposed approach outperformed previously reported methods; in particular, it significantly reduced the bias, with confidence intervals always including the estimated DP coefficient. In addition, excellent agreement was observed between the asymptotic cumulative distribution of the DP coefficient and the corresponding empirical distribution for sample sizes as low as 100, which allows reservoirs to be classified according to their DP coefficient with high success rates.
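For orientation, the sketch below computes the nonparametric DP coefficient from empirical permeability quantiles and attaches a bootstrap confidence interval as a generic stand-in; the paper's contribution is an asymptotic, distribution-free interval, which is not reproduced here.

```python
import numpy as np

def dykstra_parsons(k):
    """V_DP = (k50 - k16) / k50 from empirical quantiles: k16 is the
    permeability exceeded by 84.1% of the sample (the 15.9th percentile),
    i.e. one standard deviation below the median on a log-normal plot."""
    k50, k16 = np.percentile(k, [50.0, 15.9])
    return (k50 - k16) / k50

def bootstrap_ci(k, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap interval for the DP coefficient."""
    rng = np.random.default_rng(seed)
    v = [dykstra_parsons(rng.choice(k, size=len(k), replace=True))
         for _ in range(n_boot)]
    return np.percentile(v, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# synthetic log-normal permeabilities; true V_DP = 1 - exp(-sigma) ~ 0.63
k = np.exp(np.random.default_rng(1).normal(3.0, 1.0, size=200))
print(dykstra_parsons(k), bootstrap_ci(k))
```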
-
-
-
Pressure Transient Analysis and Simulation of Nonconventional Wells
Authors V.A. Iktissanov and R.R. IbatullinNowadays nonconventional wells (horizontal, multilateral, Rad Tech and others) are being extensively drilled throughout the world for the development of marginal fields. These wells reduce filtration resistance, increasing the productivity index and reducing costs. To select the optimal well design with regard to reservoir characteristics, to operate wells effectively, and to determine filtration characteristics, one needs calculation methods for steady and unsteady liquid flow in the reservoir. Few related papers have been published so far; the available analytical methods for steady flow are suitable only for homogeneous beds with simple geometry and equal-length laterals, while the approaches available for describing pressure build-up can account for various lateral trajectories but rely on finite-element or semi-analytical methods that are too labour-intensive for practical application. Commercial software for interpreting pressure build-up in such wells is lacking. We therefore suggest simple methods of productivity-index determination and pressure-transient test interpretation for nonconventional wells, suitable for thin beds. Their basis is the superposition of filtration resistances for two plane problems; the trajectory of the laterals is simulated as a number of closely spaced vertical wells, or nodes. The suggested method allows the field of application and the governing regularities for nonconventional wells to be determined. A dimensionless fluid-movement profile calculated from steady flow, together with a superposition method for the pressure build-ups in the nodes, is used to determine the overall pressure build-up. To describe the build-up in a node, we recommend a diffusion equation in Laplace space inverted with the Stehfest numerical algorithm. The problem is solved for porous and porous-fractured reservoirs. Numerical calculations show that cross-flows occur after a horizontal or multilateral well is shut in. A maximum in the pressure derivative indicates a low effective borehole length or a positive skin effect; knowledge of the effective interval length is therefore critical to pressure-curve interpretation.
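The Stehfest (Gaver-Stehfest) inversion named in the abstract is a standard algorithm; a minimal sketch follows, checked against a Laplace pair with a known inverse. The diffusion-equation solution itself is not reproduced; `F` here is a placeholder transform.

```python
import numpy as np
from math import factorial

def stehfest_weights(N=12):
    """Stehfest coefficients V_i for even N."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k)
                  / (factorial(N // 2 - k) * factorial(k) * factorial(k - 1)
                     * factorial(i - k) * factorial(2 * k - i)))
        V.append((-1) ** (i + N // 2) * s)
    return np.array(V)

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_i V_i * F(i ln 2 / t)."""
    V = stehfest_weights(N)
    s = np.log(2.0) * np.arange(1, N + 1) / t
    return np.log(2.0) / t * np.sum(V * np.array([F(si) for si in s]))

# check: F(s) = 1/(s+1) inverts to exp(-t); at t = 1 expect ~0.3679
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0))
```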
-
-
-
Simulation of Hydrocarbon Filtration Processes in Porous Media Using the Decomposition of the Simulation Zone
Authors A.V. Akhmetzyanov and A.I. ErmolaevThe formulation and solution of the discussed problem are motivated by the following circumstances: (1) the capabilities of hydrodynamic simulation software aimed at finding optimal designs and management of field development are severely limited; (2) filtration parameters in porous media cannot be obtained by interpolation, so they must be defined separately for every block; (3) the filtration equations are nonlinear, so their parameters change as the field is developed, which makes continual history matching necessary. A new approach is suggested to address these problems, based on hierarchical simulation and decomposition of the simulation zone. The field is notionally divided into homogeneous blocks (geological objects); each block is in turn divided into elements, and an element contains at most one well. The number of hierarchy levels and the character of the links between them are defined by the number and arrangement of homogeneous blocks in the field, by the method of decomposing blocks into elements, and by the methods used to approximate the initial multiphase filtration equations. Special attention is paid to calculating the distribution of filtration flows and the pressure values inside every element. Thanks to the perturbation method, solving the system of equations with quasilinear and nonlinear operators reduces to solving a chain of linear systems. The suggested approach to simulating filtration processes may be used for all types of hydrocarbon fields except gas-hydrate deposits.
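As a generic illustration of solving a decomposed zone block by block (not the authors' hierarchical perturbation scheme), the sketch below applies block-Jacobi iterations to a 1-D pressure system, freezing inter-block couplings at the previous iterate.

```python
import numpy as np

def block_jacobi(A, b, blocks, iters=1000):
    """Non-overlapping block decomposition: each block of unknowns is
    solved exactly while its couplings to the other blocks are frozen
    at the previous iterate (block-Jacobi / non-overlapping Schwarz)."""
    x = np.zeros_like(b, dtype=float)
    all_idx = np.arange(len(b))
    for _ in range(iters):
        x_new = x.copy()
        for idx in blocks:
            other = np.setdiff1d(all_idx, idx)
            r = b[idx] - A[np.ix_(idx, other)] @ x[other]
            x_new[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r)
        x = x_new
    return x

# tiny example: 1-D pressure (Laplacian) system on 12 cells, 3 blocks of 4
n = 12
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
blocks = [np.arange(i, i + 4) for i in range(0, n, 4)]
x = block_jacobi(A, b, blocks)
print(np.linalg.norm(A @ x - b))   # residual shrinks with more iterations
```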
-