ECMOR XII - 12th European Conference on the Mathematics of Oil Recovery
- Conference date: 06 Sep 2010 - 09 Sep 2010
- Location: Oxford, UK
- ISBN: 978-90-73781-89-4
- Published: 06 September 2010
Upscaling of Fractured Oil Reservoirs Using Homogenization Including Non-equilibrium Capillary Pressure
Authors: H. Salimi and J. Bruining
Recovery in incompletely water-wet fractured reservoirs can be extremely low. In the laboratory, these systems are often mistaken for oil-wet reservoirs, because imbibition only starts after removal of the oil layer that originally covers the grains. The long time required to remove this oil film will be referred to as the delay time. There are two theories that describe the delay time necessary for removal of an oil film, leading to a capillary pressure that depends on time. One theory, developed by Barenblatt et al., modifies both the capillary pressure and the relative permeabilities. The other, developed by Hassanizadeh et al., deals only with the non-equilibrium effect in the capillary pressure. No attempt has yet been made to model non-equilibrium effects in fractured reservoirs for a field-scale problem, and this is an innovative aspect of this paper. To examine whether the non-equilibrium effect matters for larger-scale problems, we apply homogenization to derive an upscaled model for fractured reservoirs in which the non-equilibrium effects are included. We formulate a fully implicit three-dimensional upscaled numerical model. Furthermore, we develop a computationally efficient numerical approach to solve the upscaled model. We use the simulation to determine the range of delay times for which discernible effects occur in terms of oil recovery. It is shown that at low Peclet numbers, i.e., when the residence time of the fluids in the fracture is long with respect to the imbibition time, incorporation of delay times of the order of a few months has no significant effect on the oil recovery. However, when the Peclet number is large, the delay times reduce the rate of oil recovery. We will discuss for which values of the delay time (Barenblatt) and the capillary-damping coefficient (Hassanizadeh) equivalent results are obtained.
This is the first time that such a comparison has been made for a field-scale project, and it shows that both approaches demonstrate the importance of taking into account delay effects in the capillary-pressure behavior.
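The delay-time idea can be illustrated with a toy single-cell imbibition model. This is only an illustrative sketch, not the paper's upscaled model: the shape of the equilibrium capillary-pressure curve `pc_eq`, the lumped `rate_coeff`, and the explicit time stepping are all assumptions; the sketch only borrows the Barenblatt-style effective saturation, which runs ahead of the true saturation by the delay time `tau`.

```python
def pc_eq(s):
    """Equilibrium capillary pressure (illustrative shape, assumed)."""
    return 1.0 / max(s, 1e-6) ** 0.5 - 1.0

def imbibe(tau, dt=0.01, t_end=50.0, s0=0.05, rate_coeff=0.02):
    """Single-cell imbibition with a Barenblatt-style delay time tau.
    The driving capillary pressure is evaluated at an effective
    saturation eta = s + tau * ds/dt that runs ahead of the true
    saturation during imbibition; tau = 0 recovers the equilibrium model."""
    s, ds_dt = s0, 0.0
    for _ in range(int(t_end / dt)):
        eta = s + tau * ds_dt                    # Barenblatt effective saturation
        ds_dt = rate_coeff * max(pc_eq(eta), 0.0)  # imbibition driven by Pc(eta)
        s = min(s + ds_dt * dt, 1.0)
    return s
```

With these assumed parameters, a nonzero delay time slows recovery at any fixed elapsed time, mirroring the reduced recovery rate the abstract reports at high Peclet number.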
Incorporation of Global Effects in Two-phase Upscaling for Modeling Flow and Transport with Full-tensor Anisotropy
Upscaling is often applied to coarsen highly detailed geological descriptions. The upscaling of multiphase flow functions is challenging due to their strong dependency on global flow effects. In this work, we present two approaches to generate upscaled two-phase flow functions with global flow effects incorporated. In global two-phase upscaling, the upscaled two-phase functions are computed directly from global fine-scale two-phase solutions. In a new adaptive local-global two-phase upscaling approach, local boundary conditions (for both pressure and saturation) are determined from time-dependent global coarse-scale solutions, which avoids solving the global fine-scale two-phase flow problem. In both approaches, the upscaled two-phase flow functions are adapted to a specific flow scenario, thus providing more accurate coarse models than the use of generic flows. The methods are applied to heterogeneous permeability fields with full-tensor anisotropy. We demonstrate the impact of full-tensor effects on the global flow dependency in two-phase upscaling. It is also shown that the full-tensor effects in two-phase flow are appropriately accounted for by the upscaled two-phase functions with the global flow effects incorporated. The use of those upscaled functions in conjunction with a two-point flux approximation can effectively capture the two-phase crossflow due to the full-tensor anisotropy on the coarse scale.
Meshless Upscaling Method and Its Application to a Fluid Flow in Porous Media
By A. Lukyanov
It is known that the two-point flux approximation, a numerical scheme used in reservoir simulators, has O(1) error when grids are not K-orthogonal. Multi-point flux approximations have therefore attracted significant interest in the research community. However, non-physical oscillations can appear in the developed multi-point flux approximations when the anisotropy is strong. In this paper, a meshless multi-point flux approximation (MMPFA) for general fluid flow in porous media is proposed. The MMPFA is based on a gradient approximation commonly used in meshless methods and can be extended to include higher-order terms in the appropriate Taylor series. The MMPFA is combined with mixed corrections which ensure linear completeness. The mixed correction utilizes Shepard functions in combination with a correction to the derivative approximations. Incompleteness of the kernel support, combined with the lack of consistency of the kernel interpolation in conventional meshless methods, results in fuzzy boundaries. In the corrected meshless method, the domain boundaries and the field variables at the boundaries are approximated with the default accuracy of the method. The resulting normalized and corrected MMPFA scheme not only ensures first-order consistency O(h) but also alleviates the particle-deficiency (kernel-support incompleteness) problem.
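A minimal 1D sketch of the kind of correction described: Shepard-normalized kernel weights plus a moment renormalization make the meshless derivative approximation exact for linear fields (linear completeness). The hat kernel, the point set, and the smoothing length `h` are assumptions for illustration; the MMPFA itself is more general.

```python
def shepard_weights(x0, pts, h):
    """Kernel weights normalized to sum to one (Shepard partition of
    unity) over the neighbours of x0; a compact 'hat' kernel is assumed."""
    raw = [max(1.0 - abs(p - x0) / h, 0.0) ** 2 for p in pts]
    s = sum(raw)
    return [r / s for r in raw]

def corrected_gradient(x0, f0, pts, fvals, h):
    """Linearly complete meshless derivative at x0: dividing by the
    kernel moment M = sum w (x - x0)^2 renormalizes the raw estimate so
    gradients of linear fields are reproduced exactly (1D sketch)."""
    w = shepard_weights(x0, pts, h)
    M = sum(wi * (p - x0) ** 2 for wi, p in zip(w, pts))
    b = sum(wi * (fi - f0) * (p - x0) for wi, fi, p in zip(w, fvals, pts))
    return b / M
```

On an irregular point set this recovers the slope of f(x) = 3x + 2 to machine precision, which is exactly what the uncorrected kernel sum fails to do near incomplete supports.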
The Self-flattening Nature of Trailing Shocks in Augmented Waterflooding – Segregation-in-flow Reestablishes Self-sharpness
Authors: A.M. AlSofi and M.J. Blunt
One of the challenges for reservoir simulation is numerical dispersion. For waterflooding applications the effect is controlled by the self-sharpening nature of the Buckley-Leverett shock. In this paper, we show that for augmented waterflooding – due to the coupling of compositional dispersion with fractional flow – the trailing shock is no longer self-sharpening. Thus, the simulation of such processes suffers from stronger numerical dispersion than pure waterflooding. Rather than implementing a higher-order discretization method, we propose a simple scheme based on segregation within a grid block. Compared to current mixing schemes, it differs in that segregation affects not only the fluid properties but also the transport. The scheme is shown to reestablish self-sharpness across the trailing shock. Simulations are performed that demonstrate the effectiveness of the scheme. The simulations also illustrate the possible effects of numerical dispersion on the predictions of augmented waterflooding. Finally, we discuss the extension of this technique to compositional simulation through a coupled limited-flash/segregation-in-flow assumption. Preliminary results demonstrate the potential of the approach as a heuristic method to control numerical dispersion for the simulation of miscible and near-miscible gas injection processes.
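The self-sharpening argument rests on the classical Buckley-Leverett fractional-flow construction. A short sketch locates the shock (front) saturation with the Welge tangent by maximizing the chord slope from connate water; the quadratic relative permeabilities and the viscosity ratio are illustrative assumptions, not values from the paper.

```python
def frac_flow(sw, mu_w=1.0, mu_o=2.0):
    """Water fractional flow with quadratic relative permeabilities (assumed)."""
    krw, kro = sw ** 2, (1.0 - sw) ** 2
    return (krw / mu_w) / (krw / mu_w + kro / mu_o)

def welge_shock(swc=0.0, n=20000):
    """Shock saturation from the Welge tangent: the tangent from the
    connate-water point touches f(Sw) where the chord slope
    (f(s) - f(swc)) / (s - swc) is maximal."""
    best_s, best_slope = None, -1.0
    for i in range(1, n):
        s = swc + (1.0 - swc) * i / n
        slope = (frac_flow(s) - frac_flow(swc)) / (s - swc)  # chord slope
        if slope > best_slope:                               # tangent = max chord
            best_slope, best_s = slope, s
    return best_s, best_slope
```

For this assumed mobility ratio the tangent gives a front saturation of 1/sqrt(3) ≈ 0.577; saturations above the front travel slower the higher they are, which is precisely the self-sharpening property the trailing shock loses under compositional dispersion.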
A Multiscale Framework for Simulating Multi-phase Flow on Flexible Grids
Authors: A. Sandvin, J.O. Skogestad, E. Keilegavlen and J.M. Nordbotten
The geological structure of porous rocks includes irregular geometries and heterogeneities on a multitude of scales. Reservoir simulations (e.g. oil recovery, CO2 storage, groundwater flow) often involve large spatial scales, where local fine-scale heterogeneities might have an important impact on the global flow. Discretisations of the governing equations of multi-phase flow in general render large systems of non-linear equations. The multiscale nature of the geology is inherited by the discrete mathematical problem, which can be ill-conditioned and thereby hard to solve. In this work, we apply domain decomposition (DD) as a solution strategy for these equations. The splitting of the global system into local subproblems can be done either prior to, or after, linearisation of the non-linear system. A comparison between DD as a non-linear and as a linear solver strategy indicates that the splitting into subproblems should be considered for use in preconditioners, also for non-linear systems of equations. When applied after linearisation, DD is best suited as a preconditioner in an iterative solution procedure. In this work, we consider a two-level DD framework for linear systems. The framework is flexible and can be used both as an upscaling procedure and as a preconditioner.
On the Stokes-Brinkman Equations for Modeling Flow in Carbonate Reservoirs
Authors: I. Ligaarden, M. Krotkiewski, K.A. Lie, M. Pal and D. Schmid
Cavities and fractures can significantly affect the flow paths of carbonate reservoirs and should be accurately accounted for during flow simulation. Herein, our goal is to compute the effective permeability of rock samples based on high-resolution 3D CT-scans containing millions of voxels. Hence, we need a flow model that properly accounts for the effects of Darcy flow in the porous material and Stokes flow in the void volumes on the relevant scales. The presence of different length scales and large contrasts in the petrophysical parameters leads to highly ill-conditioned linear systems that make such a flow model very difficult to solve, even on large-scale parallel computers. To identify simplifications that render the problem computationally tractable, we analyze the relative importance of the Stokes and Darcy terms for a wide variety of parameter ranges on an idealized 2D model. We find that a system with a through-going free-flow region surrounded by a low-permeability matrix can be accurately modeled by ignoring the Darcy matrix and simulating only the Stokes flow. Using this insight, we are able to compute the effective permeability of a specific model from a CT-scan that contains more than eight million voxels.
Effective Local and Extended Local Single-phase Upscaling
Authors: S. Du, S.A. Hosseini, Y. Zhou and M.J. King
Effective strategies for upscaling of flow in porous media rely upon (1) the characterization of flow in the reservoir (intercell transmissibility), (2) flow near wells (effective permeability) and (3) an a priori analysis of error. Transmissibility upscaling represents approximately constant-gradient Darcy flow and, compared to permeability upscaling, maximizes the spatial resolution of intercell fluid flow available within a simulator. Near-well permeability upscaling relies upon the line-source approximation and the calculation of well productivity for radial flow near wells. It provides a characterization of the local reservoir quality and can be performed without knowledge of the location or rates of physical wells. The error analysis has three components: it relies upon a connectivity analysis, an analysis of variance of velocity or slowness, and an estimate of the off-diagonal flow terms in the permeability tensor. All three of these error components can be estimated using local flow information. This a priori error analysis allows a practicing engineer, or a spatially adaptive algorithm, to adjust the resolution of the upscaled model in order to minimize the most egregious upscaling errors. All three of these calculations can be performed using local or extended-local boundary conditions. This is in contrast to global upscaling approaches, which have been used previously in the literature but which rely upon knowledge of a global flow field. Although global upscaling approaches can be quite accurate, their utility is limited because specific well locations and flow rates are not available at the point in the life of a project when the geologic model is typically upscaled in preparation for reservoir flow simulation.
The utility of this combined upscaling and error-analysis approach is demonstrated using the SPE10 test model, which is structurally simple but has a fairly complex and heterogeneous permeability field.
Upscaling of Two-phase Immiscible Flows in Communicating Stratified Reservoirs
Authors: X.Z. Zhang, A.S. Shapiro and E.H.S. Stenby
A new analytical upscaling method is described. It is applicable to stratified reservoirs, or more generally to reservoirs with vertically distributed properties. The inter-layer communication is assumed to be perfect, corresponding to viscous- or gravity-dominant regimes. We apply asymptotic analysis to the 2D flow equations of two incompressible immiscible fluids and reduce the 2D Buckley-Leverett system to a 1D flow problem. The flow velocities in the layers depend on the saturations of the other layers, reflecting the interaction between layers; the exchange terms between the layers may be expressed explicitly. The resulting quasilinear hyperbolic equations allow a self-similar solution, which makes it possible to express the pseudo-fractional-flow function as a function of the average saturation. The system is solved by the finite difference method. Both discontinuously and continuously (lognormally) distributed layer permeabilities are studied. For comparison, the complete 2D waterflooding problems with very good vertical communication are solved in COMSOL. Generally, the saturation profiles of the pseudo-1D model differ only slightly from the 2D simulation results obtained with COMSOL, and the difference between the recovery curves is marginal. This confirms the validity of the proposed method. Our method offers clear advantages over the classical Hearn-Kurbanov procedure: it is better at predicting displacement fronts and oil recovery.
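For context, the classical Hearn construction that the authors compare against can be sketched in a few lines. In the viscous-dominated limit, perfectly communicating layers flood in order of decreasing permeability, and pseudo relative permeabilities follow from thickness-permeability averages. The endpoint values and the layer data below are illustrative assumptions.

```python
def hearn_pseudo(layers, krw_end=0.3, kro_end=0.8, swc=0.2, sor=0.3):
    """Classic Hearn pseudo-relative-permeability construction for a
    perfectly communicating layered reservoir (viscous-dominated).
    `layers` is a list of (thickness h, porosity phi, permeability k).
    Returns one point (Sw_bar, krw_bar, kro_bar) per number of fully
    flooded layers."""
    layers = sorted(layers, key=lambda l: -l[2])      # fastest layer floods first
    H_phi = sum(h * phi for h, phi, _ in layers)
    H_k = sum(h * k for h, _, k in layers)
    points = []
    for n in range(len(layers) + 1):                  # n layers fully flooded
        flooded, rest = layers[:n], layers[n:]
        sw_bar = (sum(h * phi for h, phi, _ in flooded) * (1 - sor)
                  + sum(h * phi for h, phi, _ in rest) * swc) / H_phi
        krw_bar = krw_end * sum(h * k for h, _, k in flooded) / H_k
        kro_bar = kro_end * sum(h * k for h, _, k in rest) / H_k
        points.append((sw_bar, krw_bar, kro_bar))
    return points
```

The pseudo-curves run from (Swc, 0, kro_end) to (1 - Sor, krw_end, 0); the paper's method improves on this by letting layer velocities depend on the saturations of the other layers rather than assuming a fixed flooding order.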
Analytical Solutions for Co- and Countercurrent Imbibition of Sorbing – Dispersive Solutes in Immiscible Two-phase Flow
Authors: K.S. Schmid, S. Geiger and K.S. Sorbie
We derive a set of analytical solutions for the transport of adsorbing solutes in an immiscible, incompressible two-phase system. Our analytical solutions are new in two ways. First, we fully account for the effects of both capillary and viscous forces on the transport for arbitrary capillary-hydraulic properties. Second, we fully take into account hydrodynamic dispersion for the variable two-phase flow field and the adsorption effects of the solutes. All previously obtained results for component transport in immiscible two-phase systems account for changes in the flow field due to the components' presence but ignore dispersive effects. This is surprising given the enormous practical importance of diffusive effects both in hydrological settings and in situations that arise in the context of oil production, such as chemical flooding, polymer flooding and the transport of wettability-altering agents. In both contexts, one often faces fractured media or thin-layer structures where the diffusive transport of components dominates over viscous forces. We consider a situation where the components do not affect the flow field and focus on dispersive effects. For the purely advective transport, we combine a known exact solution for the description of flow with the method of characteristics for the advective transport equations to obtain solutions that describe both co- and countercurrent flow and advective transport in one dimension. We show that in both cases the solute front can be located graphically by a modified Welge tangent and that the mathematically obtained solutions correspond to the physical notion that the solute concentrations are functions of saturation only. For the advection-dispersion case, we derive approximate analytical solutions by the method of singular perturbation expansion. We give some illustrative examples and compare the analytical solutions with numerical results.
Classic and Hybrid MCMC Methods to Approximate Pore Size Distributions of Carbonate Reservoirs Using Pore-network Models
Authors: J.E. Juri, M.I.J. van Dijke, K.S. Sorbie and S.K. Masalmeh
Carbonate reservoirs contain a large fraction of the remaining oil reserves, but their highly heterogeneous structures at the pore scale make them difficult to characterize, and this is one of the main reasons for their low recovery factors. Pore-space imaging and reconstruction techniques are often used to characterize the pore-space topology and geometry. However, the wide range of pore sizes encountered in carbonates, including submicron microporosity, usually renders these techniques unsuitable. A classic technique to approximate the pore size distribution (Ritter and Drake) is based on inversion of the mercury intrusion capillary pressure (MICP) curve. However, this technique is inaccurate due to the lack of accessibility of the large pores before the percolation threshold is reached. In this paper, we use a simple lattice network, characterized by an average co-ordination number, a pore volume-vs-radius power law and a pore (radius) size distribution (PSD) with an arbitrarily large range of radii, to match the MICP curve. Efficient optimization methods have been implemented to estimate these principal parameters, as outlined below. The overall aim of this work is to predict, based on the PSD estimates, rock flow functions for different floods or saturation paths that are difficult and expensive to obtain from laboratory experiments. In the present approach the PSD has a given number of non-constant radius classes (bins), for which the probabilities need to be determined. As an initial guess, we take the Ritter and Drake PSD, based on which the non-fixed bin widths are determined using principal theorems derived in information theory and data compression.
Then, the pore-network parameters, including the PSD class probabilities, are optimized to match the experimental MICP curve using two methods that are based on the link between the Maxwell-Boltzmann distribution describing molecular state probabilities in statistical physics and the Bayesian posterior distribution. The first method is the classic Metropolis Markov chain Monte Carlo (MCMC) method; the second, hybrid, method adds Hamiltonian dynamics to avoid the random-walk nature of the classic Metropolis algorithm, thus enhancing the convergence of the classical approach. The methods have been applied successfully to two synthetic cases and to three carbonate samples from a Middle East reservoir representing three different carbonate facies. The hybrid method improves the MCMC efficiency, in terms of accepted parameter states, by a factor of 4 to 5.
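A stripped-down version of the classic Metropolis step for this kind of PSD matching might look as follows. The toy `micp_curve` deliberately ignores accessibility (the very effect that makes the Ritter and Drake inversion inaccurate, and which the authors' lattice network captures), and the noise level `sigma2`, the proposal step size, and the bin set are all assumptions.

```python
import math
import random

def micp_curve(probs, radii, pressures, gamma=1.0):
    """Toy drainage curve: fraction of pore volume invaded at each
    pressure, assuming a throat of radius r is entered when P >= gamma/r
    (pore accessibility deliberately ignored in this sketch)."""
    return [sum(p for p, r in zip(probs, radii) if P >= gamma / r)
            for P in pressures]

def metropolis_fit(target, radii, pressures, steps=20000, sigma2=1e-4, seed=1):
    """Classic Metropolis MCMC over PSD bin probabilities: propose a small
    reallocation of probability mass between two bins and accept with
    probability min(1, exp(-(chi2_new - chi2_old) / (2 * sigma2)))."""
    rng = random.Random(seed)
    nb = len(radii)
    probs = [1.0 / nb] * nb                      # uniform initial guess

    def chi2(p):
        model = micp_curve(p, radii, pressures)
        return sum((m - t) ** 2 for m, t in zip(model, target))

    cur = chi2(probs)
    for _ in range(steps):
        i, j = rng.randrange(nb), rng.randrange(nb)
        d = rng.uniform(0.0, 0.02)
        if i == j or probs[i] < d:               # keep probabilities non-negative
            continue
        cand = probs[:]
        cand[i] -= d
        cand[j] += d                             # mass-conserving proposal
        new = chi2(cand)
        if new < cur or rng.random() < math.exp(-(new - cur) / (2.0 * sigma2)):
            probs, cur = cand, new
    return probs, cur
```

The hybrid variant in the paper replaces the blind random proposal with a Hamiltonian-dynamics trajectory, which is what raises the acceptance rate by the reported factor of 4 to 5.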
Pore-scale Modelling of Chemically Induced Effects on Two-phase Flow
Authors: Y. Zaretskiy, S. Geiger, K. Sorbie and M. Foerster
We present a finite-element/finite-volume simulation method for modelling fluid flow and solute transport accompanied by chemical reactions in experimentally obtained 3D pore geometries. We solve the stationary Stokes equation on the computational domain with the FE method, using the same set of nodes and the same order of basis functions for both velocity and pressure. The resulting linear system is solved by employing the algebraic multigrid library SAMG. To simulate large 3D samples, we partition them into subdomains and treat each separately on a different computing node. This approach allows us to use meshes with millions of elements as input geometries without facing limitations in computer resources. We apply this method in a proof-of-concept study of a digitized Fontainebleau sandstone sample. We use the calculated velocity profile with the finite-volume procedure to simulate pore-scale solute transport and diffusion. This allows us to demonstrate the correct emerging behaviour of the sample's hydrodynamic dispersivity. Finally, we model the transport of an adsorbing solute and demonstrate the resulting surface-coverage dynamics. This information can be used to estimate the local change of a sample's wettability state and the ensuing changes in the two-phase flow characteristics.
Determination of Connectivity in Vuggy Carbonate Rock Using Image Segmentation Techniques
Authors: T.P.G. Gurholt, B.F.V. Vik, I.A. Aavatsmark and S.I.A. Aanonsen
It is commonly understood that carbonate reservoirs hold much of the world's oil and gas reserves. In this study we examine a heterogeneous vuggy pore class. The vuggy pore space is imaged using X-ray computed tomography (CT). Micro-CT (µCT) is used to better image parts of the interparticle (matrix) porosity. Image segmentation is applied to obtain a three-dimensional (3D) map of the vuggy pore space, which is used to determine its connectivity. We evaluate the performance of selected image segmentation techniques applied to segmenting CT-scans of vuggy carbonate rocks at varying resolutions. Computations are performed for both two-dimensional (2D) and three-dimensional (3D) image segmentation. The computations show that the methods perform well on high-resolution µCT scans. For a CT scan with low and non-uniform resolution, however, they wrongly classify a large portion of the voxels, which makes it challenging to determine the connectivity. Due to limited image resolution, the computed porosity is underestimated compared to the porosity measured in laboratory experiments.
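A common first step in such segmentation is a global histogram threshold; Otsu's method is a standard choice, shown here as a self-contained sketch (the paper evaluates several techniques, not necessarily this one, and a global threshold is exactly what struggles on the low, non-uniform-resolution scans mentioned above).

```python
def otsu_threshold(values, nbins=256):
    """Otsu's method: pick the grey level that maximizes the
    between-class variance of the histogram, separating (e.g.) pore
    voxels from matrix voxels."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return lo
    hist = [0] * nbins
    for v in values:
        b = min(int((v - lo) / (hi - lo) * nbins), nbins - 1)
        hist[b] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_var, best_bin = -1.0, 0
    for i in range(nbins):
        w0 += hist[i]                       # class 0: bins <= i
        if w0 == 0:
            continue
        w1 = total - w0                     # class 1: bins > i
        if w1 == 0:
            break
        sum0 += i * hist[i]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_bin = var_between, i
    return lo + (best_bin + 0.5) / nbins * (hi - lo)
```

On a cleanly bimodal histogram the threshold lands between the two modes; with the overlapping grey-level distributions of a low-resolution scan, no single global threshold separates the classes, which is the failure mode the abstract reports.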
Roles of Transient and Local Equilibrium Foam Behavior in Porous Media – Traveling Wave
Authors: E. Ashoori, D. Marchesin and W.R. Rossen
In foam EOR, the complex dynamics of bubble creation and destruction control foam properties. We assume that local equilibrium applies throughout a foam displacement on the field scale, with the exception of an entrance region and of shock fronts, where saturations and bubble size change abruptly. We find a range of conditions in which the local-equilibrium condition applies even within the shock front. In a waterflood, the width of a shock transition zone is determined by capillary-pressure gradients; for foam, this equation is joined by one for the evolving foam texture. If there is no gas ahead of the foam, we prove that the foam texture is everywhere at local equilibrium within the shock, regardless of the foam model. If there is gas initially in the formation, slow foam generation and coalescence processes can narrow the shock relative to that predicted assuming local equilibrium. In other cases, the dynamics of the traveling wave leads to oscillations near the shock; these are not numerical artifacts, but reflections of the models. Multiple steady states seen in experiments for some injection rates can be predicted by certain foam models. The approach of solving for the traveling wave can rule out some of these states.
Extended K-value Method for Multi-contact Miscible Displacements
Authors: G. Rannou, D.V. Voskov and H.A. Tchelepi
Accurate predictions of gas-injection processes usually require a compositional formulation based on an Equation of State (EoS). Because the thermodynamic behavior of multi-component multiphase systems is highly nonlinear and the coupling to the flow equations is quite complex, EoS-based simulations can become computationally prohibitive. For immiscible displacements, an efficient approximation of the general compositional problem can be used if the phase-equilibrium ratios (K-values) are assumed to be weak functions of composition; however, this assumption is questionable for near-miscible displacements. The standard K-value approach suffers from significant difficulties, both in terms of robustness and accuracy, in the critical region. Here, we describe an extended K-value method that takes advantage of the invariance of the solution path in compositional space with respect to the hydrodynamic properties. Specifically, we propose an additional degree of freedom, which captures the composition dependence of the phase behavior, for use in the tabulation and interpolation of the K-values. An important aspect of the method is the use of the so-called Minimal Critical Pressure (MCP) criterion, which indicates when a given composition becomes super-critical. We compare results obtained with EoS-based and K-value-based simulations for several (isothermal) compositional problems, and we demonstrate the efficiency and accuracy of the proposed method.
A New Variable-set for Nonlinear Flow Simulation Based on Compositional Space Parameterization
By D.V. Voskov
We consider numerical modeling of compositional two-phase flow in porous media, and we propose a nonlinear formulation that employs a variable set based on compositional space parameterization. In the formulation, the phase fraction and saturation change 'continuously' in the immiscible region of the compositional space. Inside the two-phase region, these variables are identical to the saturation and phase fraction of the standard approach. In the single-phase regions, however, these variables can become negative, or larger than unity. We demonstrate that when this variable set is used, the EoS computations are resolved completely within the general Newton loop; that is, no separate phase-stability or flash computations are necessary. The number of general Newton iterations grows only slightly, and the overall savings lead to more efficient simulations. We discuss using this variable set, which can be thought of as an extension of the natural variables, in two ways. The first scheme honors the nonlinear dependence of the overall density on the phase fractions and saturation, and the second employs a linearized relation for the overall density. Both schemes are compared with the standard natural-variables formulation using several challenging compositional problems. We also describe how the proposed approach can be used for modeling multi-contact miscible displacement processes.
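The negative-flash idea behind this variable set can be sketched for the simplest two-phase case: the Rachford-Rice equation is solved for the vapour fraction V over the full interval between its asymptotes, so V < 0 or V > 1 signals a single-phase state instead of requiring a separate stability test. Bisection and the two-component examples are illustrative choices, not the paper's solver.

```python
def rachford_rice(z, K, iters=200):
    """Negative-flash Rachford-Rice: solve
    sum_i z_i (K_i - 1) / (1 + V (K_i - 1)) = 0 for the vapour fraction V
    by bisection between the asymptotes 1/(1 - K_max) and 1/(1 - K_min),
    allowing V < 0 or V > 1 (assumes K_max > 1 and K_min < 1)."""
    def g(V):
        return sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0))
                   for zi, Ki in zip(z, K))
    lo = 1.0 / (1.0 - max(K)) + 1e-10   # left asymptote (negative)
    hi = 1.0 / (1.0 - min(K)) - 1e-10   # right asymptote (> 1)
    for _ in range(iters):              # g is strictly decreasing on (lo, hi)
        V = 0.5 * (lo + hi)
        if g(V) > 0.0:
            lo = V
        else:
            hi = V
    return V
```

For z = (0.9, 0.1) with K = (2, 0.5) the root lies at V = 1.7 > 1, i.e. a single-phase (vapour-like) state resolved without a phase-stability test, which is the mechanism the formulation exploits inside the Newton loop.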
Continuation of the Tie-simplex Space for Three-phase Thermal-compositional Systems
Authors: A. Iranshahr, D. Voskov and H. Tchelepi
Modeling the phase behavior associated with compositional flow simulation of systems that form more than two phases (e.g., steam or CO2 injection) is a challenging problem. In addition to the coupling of thermodynamics with the nonlinear equations of flow and transport, accurate phase-state identification for mixtures that form three (or more) phases, as a function of pressure, temperature and composition, is the subject of active research. We describe a general negative-flash method for multi-component thermal systems that can form three, or more, fluid phases, including a convergence proof. This extended negative-flash approach integrates naturally with flow simulation based on adaptive tie-simplex compositional-space parameterization. We prove that tie-simplexes change continuously with pressure, temperature, or along a continuous trajectory in compositional space. Continuation of the tie-simplex space provides theoretical justification for tie-simplex-based flow modeling, in which interpolation in pressure and temperature for a limited number of tie-simplexes is performed during a simulation run. We also show how a tie-simplex associated with a given composition can degenerate as a function of pressure and temperature. We focus on the complex behaviors of the tie-triangles and tie-lines associated with three-phase, steam-injection problems. The algorithms that capture the degeneration of tie-triangles into tie-lines are described in detail. Interpolation in the parameterized compositional space is used to identify the phase state and proves to be as reliable as a 'conventional' phase-stability test for three-phase mixtures. This extended negative-flash algorithm has been integrated into our tie-simplex-based flow simulation framework. We demonstrate the effectiveness of the framework using several compositional steam-injection problems with complex behaviors.
New Models for Heater Wells in Reservoir Simulation
Authors: G. Aouizerate, L.J. Durlofsky and P. Samier
Downhole electrical heating can be used to achieve the high temperatures required for in-situ upgrading of oil shale. Heater-well models are needed if this process is to be simulated accurately. The traditional Peaceman approach used for fluid injection and production wells may not be applicable because it does not capture transient effects, which can be important in downhole heater models. Here we develop two new models for representing heater wells in reservoir simulators. The first model is applicable to homogeneous systems with properties that are not temperature dependent. For such cases we develop a semi-analytical procedure based on Green's functions to construct time-dependent heater-well indices and heater-block thermal transmissibilities. For the general case, which can include both fine-scale heterogeneity and nonlinearity due to the temperature dependence of rock properties, we present a numerical procedure for constructing the heater-well model. This technique is essentially a near-well upscaling method and requires a local fine-scale solution in the well region. Local boundary conditions are determined using a local-global treatment. The accuracy of the new heater-well models is demonstrated through comparison to reference solutions for two example problems.
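For the homogeneous, temperature-independent case, the classical Green's-function building block is the constant-rate line heat source, whose temperature rise involves the exponential integral E1. The sketch below uses illustrative (assumed) heating rate and thermal properties, not values from the paper, and a series evaluation of E1 adequate for the argument sizes of interest.

```python
import math

def expint_e1(x, terms=60):
    """E1(x) via the convergent series
    E1(x) = -gamma - ln(x) + sum_{n>=1} (-1)^{n+1} x^n / (n * n!)."""
    gamma = 0.5772156649015329
    s = -gamma - math.log(x)
    term = 1.0
    for n in range(1, terms):
        term *= -x / n          # term = (-x)^n / n!
        s -= term / n
    return s

def heater_temperature(r, t, q=100.0, k_th=2.0, alpha=1e-6, T0=50.0):
    """Transient temperature around a constant-rate line heat source:
    T(r, t) = T0 + q / (4 pi k_th) * E1(r^2 / (4 alpha t)).
    q [W/m], k_th [W/(m K)], alpha [m^2/s] are illustrative values."""
    x = r * r / (4.0 * alpha * t)
    return T0 + q / (4.0 * math.pi * k_th) * expint_e1(x)
```

The logarithmic growth of E1 at small argument is the transient effect that a steady-state Peaceman-type well index cannot capture, motivating the time-dependent heater-well indices described above.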
Towards Temperature Transient Analysis in Horizontal Wells
Authors: K.M. Muradov and D.R. Davies
Temperature modeling and analysis of horizontal wells have become increasingly important due to the extensive deployment of downhole temperature-monitoring devices. Such sensors allow continuous monitoring of zonal production with a sufficiently high resolution. Published descriptions of the necessary analysis tools have concentrated on well temperature models and convective heat transfer in porous media. They rarely describe transient reservoir temperature changes during conventional production. Here, fluid-expansion effects are superimposed on heat conduction to the surrounding layers, coupled to a transient inflow regime. This paper will discuss why the already available solutions to related problems cannot be employed directly. It will propose a comprehensive model of the reservoir temperature around a horizontal, liquid-producing well. The above task is treated as a multidimensional heat-transfer problem. Both an asymptotic, early-time solution in real space and an accurate, integral solution can be derived. The asymptotic, analytical solution has been tested against scientific simulation software. The applicability and practical importance of the solution will be discussed. It forms the basis of a Temperature Transient Analysis tool for horizontal, oil-producing wells, where both the flow properties of the reservoir fluid and the inflow distribution are to be evaluated.
Analysis of Coupled and Fully Integrated Models for Low-frequency Electrical Heating Assisted Heavy Oil Recovery
Authors: J.A. Torres, I.I. Bogdanov, V. Dabir and A.M. Kamp
We have studied low-frequency heating for heavy-oil recovery accompanied by salt-water recirculation around electrode wells. The multiphysics nature of this problem naturally calls for a code-coupling solution. The main objective of this work is to develop an efficient methodology for such coupling. We address the coupling paradigm, which includes questions about the degree of coupling, the grid and solver type used in the simulation of each separate problem, problem stability and accuracy, the interpolation strategy, parallelization, etc. The implemented solution was validated and applied. We coupled a finite-volume (FV) reservoir simulator (Stars) to a general-purpose finite-element (FE) simulator (COMSOL Multiphysics) used to compute an approximation to Maxwell's equations. The two simulators are coupled by an in-house coupler written in Matlab, whose critical step is the bi-directional mapping (interpolation/integration) of data between the FE and FV meshes. The selection of solver parameters proved critical for limiting the effect of numerical dispersion. Coupling solutions such as the one described in this paper allow the development of simulation tools that reuse existing specialized tools as much as possible. Their application to practical problems proves useful.
Low-temperature In-situ Combustion of Light Oil
Authors: A.A. Mailybaev, D. Marchesin and J. Bruining
We consider flows possessing a combustion front when a gaseous oxidizer (air) is injected into a porous medium: a thermally insulated rock cylinder, preventing lateral heat losses, filled with oil of light or medium viscosity. When oxygen reacts with hydrocarbons at low temperatures, a series of reactions occurs that converts part of the hydrocarbons to oxygenated hydrocarbons and gaseous products (water, carbon dioxide, etc.). The oxygenated hydrocarbons are compounds such as ketones, alcohols, aldehydes and acids. This process is termed low-temperature oxidation (LTO). Indeed, LTO involves only some 25% of the possible sites that can react with oxygen. Therefore, the reaction heat per volume of fuel can be at most 25% of that of full hydrocarbon combustion, and consequently the temperature in the LTO reaction zone is very low. Upstream of the LTO reaction zone, evaporation occurs. We formulate conservation laws for the liquid oil, gaseous oil, oxygen and inert gas components (combustion products and nitrogen) that include the reaction terms. Moreover, we give the energy conservation equation. We give an analytical solution to the equations. It turns out that the solution consists, from upstream to downstream, of a thermal wave, an LTO wave that combines oxidation and evaporation, and a Buckley-Leverett saturation wave. It is shown that a major role in the solution is played by the existence of a resonance line, at which the derivatives of the oxygen and evaporated-oil fluxes with respect to the oil saturation vanish: the derivative of the oil flux with respect to saturation is positive upstream of the resonance line and negative downstream of it. The complete solution is described for typical parameters of LTO oil combustion. Computations show that only a small part of the oil vaporizes or reacts. The oil velocity is close to the LTO wave speed.
Thus the LTO wave represents a mechanism of almost complete oil displacement by means of the temperature increase in the LTO wave, which leads to a decrease of oil viscosity and increase of the gas flux in the wave.
Analysis of Grid Resolution for Simulations of CO2 Storage in Deep Saline Aquifers
Authors G.E. Pickup, M. Kiatsakulphan and J.R. Mills
The simulation of CO2 injection into deep saline aquifers provides challenges to reservoir modellers. Estimates of storage and trapping of CO2 must be made for large-scale aquifers for planning purposes, and these simulations are generally undertaken on coarse models with grid cells several hundred metres in length. However, some of the physical processes which arise during CO2 storage can only be represented on finer grids. In this work, we used a variety of 2D models to study the effect of grid resolution on a range of processes which take place when CO2 is injected into a saline aquifer. A reasonably fine grid is required to model the buoyant rise of the CO2 plume and the migration under the caprock: cell sizes of several metres in the horizontal and approximately 1 m in the vertical are adequate. However, an even finer grid is required to limit the effects of numerical dispersion and correctly simulate the dissolution of CO2 in brine. When CO2 dissolves in brine, the brine becomes denser and convection may take place, enhancing the dissolution. Estimates of the time to onset of convection and the critical wavelength have been determined by a number of authors with variable results, but for a typical aquifer a grid resolution of less than 1 m may be required. If the cell size is too large, the amount of convection, and therefore the amount of dissolution, will be underestimated. However, our results showed that the error in the amount of CO2 dissolved was greater at the end of injection than at the end of the simulation (100 years after injection ceased), due to numerical dispersion. On the other hand, our simulations showed that the grid resolution had little effect on the build-up of pressure, which indicates that coarse grids may be sufficient for initial assessments of storage potential.
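The way cell size controls numerical dispersion, which drives the grid-resolution effects studied here, can be illustrated with a minimal 1D sketch: a sharp front advected with first-order upwind (the scheme behind most coarse-grid simulators), with hypothetical velocity, domain and times. Refining the grid visibly sharpens the smeared front:

```python
import numpy as np

def upwind_advect(n_cells, cfl=0.5, t_end=0.25, velocity=1.0):
    """Advect a sharp front on [0, 1] with first-order upwind; return x, u."""
    dx = 1.0 / n_cells
    dt = cfl * dx / velocity
    x = (np.arange(n_cells) + 0.5) * dx
    u = np.where(x < 0.1, 1.0, 0.0)       # sharp injected front
    t = 0.0
    while t < t_end - 1e-12:
        u[1:] -= velocity * dt / dx * (u[1:] - u[:-1])
        u[0] = 1.0                         # inflow boundary
        t += dt
    return x, u

def front_width(x, u):
    """Distance over which u falls from 0.9 to 0.1 (smeared-front width)."""
    return x[u > 0.1].max() - x[u > 0.9].max()

w_coarse = front_width(*upwind_advect(50))
w_fine = front_width(*upwind_advect(400))
# Refining the grid shrinks the numerically dispersed front
print(w_coarse, w_fine)
```

The numerical diffusion coefficient of first-order upwind scales with the cell size, which is why the front width drops as the grid is refined; the same mechanism over-smears the CO2 dissolution front on coarse aquifer models.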
Accurate Discretization of Vertically-averaged Models of CO2 Plume Migration
Authors K.A. Lie, I.S. Ligaarden and H.M. Nilsen
When CO2 is injected into a deep formation, it will migrate as a plume that moves progressively higher in the formation, displacing the resident brine. The invasion front is driven by gravity, and the upward movement of the plume is limited by a low-permeable caprock. Several authors have recently proposed to make a sharp-interface assumption and only describe the plume migration in a vertically-averaged sense. For inhomogeneous permeability, the plume migration is then described by a system of conservation laws with spatially discontinuous flux. If one disregards dissolution and residual trapping, the system reduces to a scalar conservation law with a spatially dependent flux function, which may exhibit different solutions depending on the entropy condition that is enforced to pick a unique solution. We propose a certain set of assumptions that lead to the so-called minimum-jump condition and derive the corresponding solutions to the Riemann problem. Solutions to this problem are fundamental when developing accurate Godunov-type schemes. Here, we take a slightly different approach and present an unconditionally stable front-tracking method, which is optimal for this type of problem. Moreover, we verify the well-known observation that a standard upstream mobility discretization can give wrong solutions in certain cases.
Effects of Non-Darcy Flow in CO2 Injection into Saline Aquifers
Authors A. Mijic and T.C. LaForce
One of the most important aspects of flow in porous media is the flow velocity. It defines the convective part of the transport and is usually assumed to be given by Darcy's law. However, for the analysis of near-wellbore phenomena, such as salt precipitation during CO2 injection, it is necessary to include nonlinear flow behaviour in the zone of interest. This paper presents a modification of existing solutions for immiscible non-Darcy displacement that accounts for the mutual solubility of the fluids and for salt precipitation. The flow is governed by a Forchheimer-type equation for two-phase flow. The analytical solution is implemented in CO2 injection modelling to investigate the effects of a nonlinear flow regime on phase saturation profiles. The results enable the determination of a drying front and hence the zone where salt precipitation will occur. By implementing the approximate solution for the solid salt saturation, it is shown that non-Darcy displacement has a more significant influence on the obtained saturation values and the skin factor than previously estimated. Finally, the dependency of non-Darcy displacement on injection rates further contributes to the modification of the salt precipitation pattern.
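A Forchheimer-type velocity follows in closed form from the quadratic momentum balance, which adds an inertial term to Darcy's law. A small sketch, with purely illustrative parameter values (not taken from the paper):

```python
import numpy as np

def darcy_velocity(dpdx, mu, k):
    """Darcy: v = k * dp/dx / mu."""
    return k * dpdx / mu

def forchheimer_velocity(dpdx, mu, k, beta, rho):
    """Solve (mu/k)*v + beta*rho*v**2 = dp/dx for v >= 0 (quadratic formula)."""
    a, b, c = beta * rho, mu / k, -dpdx
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# Hypothetical near-wellbore numbers in SI units
mu, k, beta, rho = 5e-5, 1e-13, 1e9, 700.0
dpdx = 1e7  # pressure gradient, Pa/m

v_d = darcy_velocity(dpdx, mu, k)
v_f = forchheimer_velocity(dpdx, mu, k, beta, rho)
# Inertial losses make the non-Darcy velocity lower for the same gradient
print(v_d, v_f)
```

The gap between the two velocities widens with the pressure gradient, which is why non-Darcy effects matter most at high injection rates near the well.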
Multicomponent Multiphase Transport Solutions for Application to CO2 Storage
Authors A.L. Goater and T. LaForce
Reliable prediction of subsurface flow will be vital as Carbon Dioxide (CO2) storage grows into a global-scale industry. Therefore, it is essential to achieve fundamental objectives such as identifying and numerically modelling the physical solutions to three-phase transport problems. In this paper we present a one-dimensional (1D) conservation law for a three-phase, three-component transport problem of interest for CO2 storage in an oil reservoir. An example analytic solution for the purely advective problem is proposed. The componentwise Essentially Non-Oscillatory (ENO) method is used to minimise numerical error as compared with single-point upstream weighting (SPU). The simulated solutions both converge to the proposed dispersion-free analytic solution. However, the ENO approach exhibits only first order convergence, which is the same as SPU. Fine-grid ENO simulations of the advection/dispersion equation in three-phase fluid flow with non-negligible capillary pressures are also considered. It is demonstrated that the physically-realistic dispersion caused by capillary pressure alters the simulated solution in a fundamentally different way than numerical effects, dispersing some shocks while having little impact on others.
Multidimensional and Higher Order Edge Based Upwind Schemes for Hyperbolic Systems Including Gravity in Porous Media on Structured and Unstructured Grids
Authors S. Lamine and M.G. Edwards
Standard reservoir simulation schemes employ single-point upstream weighting for approximation of the convective fluxes when multiple phases or components are present. These schemes introduce both coordinate-line numerical diffusion and crosswind diffusion into the solution that is grid and geometry dependent. Families of locally conservative higher-order multidimensional upwind schemes are presented for convective flow approximation in porous media that reduce both directional and crosswind diffusion. The schemes are coupled with full-tensor Darcy flux approximations and handle general flow conditions including counter current gravity driven flows and systems of hyperbolic equations. Characteristic vector upwind approximations are proposed and compared with the simulation standard single-point upstream weighting schemes. When dealing with systems of hyperbolic equations, upwind characteristic wave decomposition is used in combination with different limiting strategies involving conservative, primitive and characteristic variables. Alternate wave vector tracing approximations are proposed based on phase velocities and characteristic velocities and comparisons are presented. The higher order multidimensional schemes are designed for general grids. Conditions for a local discrete maximum principle are presented that ensure solutions are free of spurious oscillations. Benefits of the resulting schemes are demonstrated for gravity segregated flow and polymer flood systems. The test cases involve a range of unstructured grid types with variations in orientation and permeability that lead to flow fields that are poorly resolved by standard simulation methods. The higher order multidimensional formulations are shown to effectively reduce numerical diffusion, leading to improved resolution of concentration and saturation fronts.
Monotone Multi-dimensional Upstream Weighting on General Grids
Authors E. Keilegavlen, J. Kozdon and B. Mallison
The governing equations for multi-phase flow in porous media often have a mixed elliptic and (nearly) hyperbolic character. The total flux for each phase consists of two parts: a geometry and rock dependent term that resembles a single-phase flux and a mobility term representing fluid properties and rock-fluid interactions. The geometric term is commonly discretized by two-point or multipoint flux approximations (TPFA and MPFA, respectively). The mobility is usually treated with single point upstream weighting (SPU), also known as dimensional or donor cell upstream weighting. It is well known that when simulating processes with adverse mobility ratios, e.g. gas injection, SPU yields grid orientation effects. For these physical processes, the governing equations are unstable on the scale at which they are discretized, rendering a challenging numerical problem. These challenges must be addressed in order to avoid systematic biasing of simulation results and to improve the overall performance prediction of enhanced oil recovery processes. In this work, we present a framework for multi-dimensional upstream weighting for multi-phase flow with gravity on general two-dimensional grids. The methodology is based on a dual grid, and the resulting transport methods are provably monotone. The multi-dimensional transport methods are coupled with MPFA methods to solve the pressure equation. Both explicit and fully implicit approaches are considered for treatment of the transport equations. The results show considerable reduction of grid orientation effects compared to SPU, and the explicit multi-dimensional approach allows larger time steps. For the implicit method, the total number of non-linear iterations is also reduced when multi-dimensional upstream weighting is used.
On the Upstream Mobility Finite Difference Scheme
Authors A. Adimurthi, J. Jaffré, S. Mishra and G.D. Veerappa Gowda
Incompressible two-phase flow in porous media, when capillarity effects are neglected, can be modeled as a nonlinear scalar conservation law whose unknown is one of the phase saturations. The standard scheme used in reservoir simulation or in hydrogeology uses the upstream mobility flux. The solution calculated with such a scheme was proven to converge to the entropy solution. This scheme is used in the homogeneous case as well as in the case of a change in rock type, where the mobility functions change from one rock type to the other. However, in this latter case, while the scheme gives the right solution most of the time, we exhibit some examples of mobility functions for which the scheme does not give the right solution. An alternative is to use an extension of the Godunov flux, for which we give a simple formula that can be used when a change of rock type occurs. The solution given by this scheme can be proven to be correct even in cases where the upstream mobility scheme fails.
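The Godunov flux proposed as the alternative has the classical min/max characterization for scalar conservation laws. A sketch for a Buckley-Leverett-type fractional flow, evaluated numerically with an illustrative mobility ratio (the rock-type-change extension in the paper is not reproduced here):

```python
import numpy as np

def frac_flow(s, mobility_ratio=2.0):
    """Buckley-Leverett fractional flow with quadratic relative permeabilities."""
    return s**2 / (s**2 + mobility_ratio * (1.0 - s) ** 2)

def godunov_flux(s_left, s_right, f=frac_flow, n=400):
    """Godunov flux for a scalar law: min of f over [sl, sr] if sl <= sr,
    otherwise max of f over [sr, sl] (evaluated on a fine sample)."""
    s = np.linspace(min(s_left, s_right), max(s_left, s_right), n)
    return f(s).min() if s_left <= s_right else f(s).max()

f_lr = godunov_flux(0.0, 1.0)   # left state below right state: take the min
f_rl = godunov_flux(1.0, 0.0)   # reversed states: take the max
print(f_lr, f_rl)
```

Because this fractional flow is monotone increasing, the min/max reduces to an upwind evaluation here; the cases where upstream mobility weighting fails involve non-monotone effective fluxes at a rock-type interface, where the extremum over the whole interval matters.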
Finite Volume Schemes for Multiphase Flow Simulation on Near Well Grids
Authors C. Guichard, J. Brac, R. Eymard and R. Masson
In reservoir engineering, proper well modeling requires accurate simulation of multiphase flow, taking into account the singular pressure distribution in the well vicinity and the large difference in scale between the wellbore radius and the reservoir dimension. This paper investigates the numerical simulation of a 3D near-well model using a radial mesh exponentially refined down to the well boundary. The radial and reservoir CPG grids are matched using either hexahedra or a combination of tetrahedra and pyramids. Various multipoint finite volume methods are compared, such as the O and L schemes, and the newer G and GradCell schemes, which are briefly described. Our first objective is to introduce a new scheme which combines the advantages of a compact stencil, symmetry and convergence. Our second objective is to study the behavior of these schemes on the above two types of meshes. First, their convergence behavior is compared for the deviated-well single-phase flow analytical solution described in the literature, with a homogeneous anisotropic permeability tensor. Second, the influence of the type of mesh is studied on a CO2 injection test case in an aquifer, with solubility of CO2 in the water phase and anisotropic permeability fields.
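An exponentially refined radial mesh of the kind used here can be generated with a constant geometric ratio between successive radii, so cells are smallest at the wellbore where the pressure is most singular. A minimal sketch with hypothetical well and reservoir radii:

```python
import numpy as np

def radial_grid(r_well, r_ext, n_cells):
    """Radii growing geometrically from the wellbore to the outer boundary."""
    ratio = (r_ext / r_well) ** (1.0 / n_cells)
    return r_well * ratio ** np.arange(n_cells + 1)

# Hypothetical: 10 cm wellbore radius, 100 m outer radius, 20 cells
r = radial_grid(r_well=0.1, r_ext=100.0, n_cells=20)
dr = np.diff(r)
# Cell sizes grow by a constant factor; the first cell hugs the well
print(r[0], r[-1], dr[1] / dr[0])
```

The constant ratio means the logarithmic pressure profile around the well is resolved with roughly equal accuracy per cell, which is the point of exponential refinement.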
Horizontal Simulation Grids as Alternative to Structure-based Grids for Thin Oil-zone Problems
By O. Pettersen
As a general rule, the layering in reservoir simulation grids is based on the geology, e.g. structure tops. In this paper we investigate the alternative of using horizontal layers, where the link to the geology model is by the representation of the petrophysics alone. The obvious drawback is the failure to honor the structure in the grid geometry. On the other hand a horizontal grid will honor the initial fluid contacts perfectly, and horizontal wells can also be accurately represented. Both these issues are vital in thin oil-zone problems, where horizontal grids may hence be a viable alternative. To investigate this question, a number of equivalent simulation models were built for a segment of the Troll Field, both geology-based and horizontal, and various combinations of these. In the paper it is demonstrated that the horizontal grid is able to capture the essentials of fluid flow with the same degree of accuracy as the geology-based grid, and near-well flow is considerably more accurate. For grids of comparable resolution, more reliable results were obtained by a horizontal grid than a geo-grid. A geo-grid with local grid refinement and a horizontal grid produced almost identical results, but the ratio of computing times was more than 20 in favor of the horizontal grid. In the one-phase regions of the reservoir, relatively coarse cells can be used without significant loss of accuracy.
A Model for Conductive Faults in Heterogeneous Media for Non-matching Grids
Authors X. Tunc, I. Faille, T. Gallouët, M.C. Cacas and P. Havé
In the standard Corner Point Grid approach, a fault zone is supposed to be sufficiently thin to be represented as a surface with non-matching interfaces, due to the throw, across the fault. Following this point of view, we study an approach where fluid flow along the fault is modelled by a lower-dimensional model, the fault thickness becoming a model parameter. A fault zone is represented by two sets of faces, corresponding to the fault surface, each set conforming with its neighbouring matrix grid-blocks. Fault-zone properties are assigned to each face. Finite volume discretization of the fluid flow equations leads to an additional mass balance equation in each fault face, with three types of fluxes. The first represents mass exchange between fault faces along each set. The second is between neighbouring non-conforming fault faces across the fault, and the last is between fault faces and their neighbouring matrix grid-blocks. In this paper, we mainly focus on how to handle the non-matching grids and therefore consider only one-phase fluid flow. However, the extension of this approach to complex multiphase flow is straightforward, as it fits into the Finite Volume framework.
Modification of a Reservoir Grid to Achieve Monotonicity of the Control Volume Method
Authors R.A. Yorgova and I. Aavatsmark
Spurious oscillations may occur for nonmonotone discrete methods. Therefore, monotonicity is a desired property for control-volume methods. Control-volume MPFA methods are only conditionally monotone. Sufficient criteria for monotonicity of the MPFA O-method on quadrilateral grids in 2D for the pressure equation have previously been developed. Tests on rough quadrilateral grids show that the monotonicity conditions are often violated. In this work we present a modification of rough quadrilateral grids, in homogeneous and in heterogeneous media, that yields grids on which the MPFA O-method is monotone. The reconstruction is based on short moves of the grid nodes. The moves are defined by weighted gradients of functions based on the monotonicity criteria. The main steps of the reconstruction algorithm are described. Examples of rough quadrilateral grids and their modified grids are included in order to show the efficiency of the presented algorithm.
Practical Gridding Algorithms for Discrete Fracture Modeling Workflows
Authors B.T. Mallison, M.H. Hui and W. Narr
This paper describes grid generation algorithms for simulating flow and transport in fractured porous media. The methods are designed to only capture details of the fracture network geometry larger than the specified grid resolution. The final grids honor fractures approximately while maintaining good cell quality. Improved representations of the input geometry can be obtained by generating a new grid with finer resolution. Several numerical examples are presented in two and three dimensions that demonstrate that the algorithms are robust and practical for industrial applications.
Accurate Locally Conservative Discretizations for Modeling Multiphase Flow in Porous Media on General Hexahedra Grids
Authors M.F. Wheeler and G. Xue
For many years there have been formulations considered for modeling single-phase flow on general hexahedra grids. These include the extended mixed finite element method and families of mimetic finite difference methods. In most of these schemes, either no rate of convergence of the algorithm has been demonstrated both theoretically and computationally, or a more complicated saddle-point system needs to be solved for an accurate solution. Here we describe a multipoint flux mixed finite element (MFMFE) method [5, 2, 3]. This method is motivated by the multipoint flux approximation (MPFA) method [1]. The MFMFE method is locally conservative with continuous flux approximations and is a cell-centered scheme for the pressure. Compared to the MPFA method, the MFMFE method has a variational formulation, since it can be viewed as a mixed finite element method with special approximating spaces and quadrature rules. The framework allows handling of hexahedral grids with non-planar faces by applying trilinear mappings from physical elements to reference cubic elements. In addition, there are several multiscale and multiphysics extensions, such as the mortar mixed finite element method, which allows the treatment of non-matching grids [4]. Extensions to two-phase oil-water flow are considered. We reformulate the two-phase model in terms of total velocity, capillary velocity, water pressure, and water saturation. We choose water pressure and water saturation as primary variables. The total velocity is driven by the gradient of the water pressure and the total mobility. An iterative coupling scheme is employed for the coupled system. This scheme allows treatment of different time scales for the water pressure and water saturation. In each time step, we first solve the pressure equation using the MFMFE method; we then solve the saturation using a discontinuous Galerkin (DG) method, taking multiple small time steps within the large time step. In addition, the MFMFE method allows efficient computation of the total and capillary velocities, since the method gives the local velocity approximation in terms of surrounding pressure degrees of freedom. Both theoretical and computational results are discussed and presented. Extensions to advection-diffusion equations and non-Newtonian polymer flooding [6] are also discussed.
Quasi-positive Continuous Darcy-flux Approximation on Cell-centred Triangular Grids
Authors M.G. Edwards and H.A. Friis
A new family of cell-centered finite-volume schemes is presented for solving the general full-tensor pressure equation on arbitrary unstructured triangulations. The new schemes are flux continuous and have full pressure support (FPS) over each subcell, with continuous pressure imposed across each control-volume sub-interface, in contrast to earlier formulations. The earlier methods are point-wise continuous in pressure and flux with triangle pressure support (TPS), leading to a limited quadrature range. An M-matrix analysis identifies bounding limits for the schemes to possess a local discrete maximum principle. Positive-definite conditions are also given. Comparisons show that the earlier pointwise TPS methods can induce strong spurious oscillations in solutions for problems involving strong full-tensor anisotropy where the M-matrix conditions are violated, leading to unstable solutions in such cases. The earlier unstructured cell-centered TPS schemes are investigated in terms of stability and are shown to be decoupled at high anisotropy ratios. In contrast to TPS, the new FPS formulation leads to well-resolved, stable solutions that are essentially free of spurious oscillations. A substantial improvement in convergence behavior, for both pressure and velocity, is observed in all convergence tests.
Convergence of the MPFA L-method – Strengths and Difficulties
The convergence of the Multi Point Flux Approximation (MPFA) O-method on general grids has recently been proved by rewriting the method in a weak form using quadratures. We will examine in detail the quadrature of the MPFA L-method in 2D. To obtain the correct quadrature one must introduce auxiliary variables that impose the continuity of the pressure over an interface – a characteristic of the L-method. The auxiliary variables can successively be eliminated from the quadrature. Compared with the O-method, the L-method obtains a stronger diagonalization of the quadrature matrix. This is important as the convergence proof of the method relies on the assumption that the symmetric part of the quadrature matrix has positive eigenvalues. On the other hand, the diagonalization also means that the L-method has a stronger inter-element coupling, as pressure continuity is imposed along element interfaces and not within the partial elements themselves. We will examine and compare the assumption for convergence for the O-method and the L-method in 2D aiming for a better understanding of the merits of the two methods and how anisotropy and grid deformation influence their behaviour.
On Rough Grids – Convergence and Reproduction of Uniform Flow
Authors R.A. Klausen and A.F. Stephansen
In reservoir simulation, the flow driven by pressure differences and gravity is usually approximated by means of the empirically derived Darcy's law. The elliptic pressure equation that results is preferably solved with a mass-conservative method, for which several candidates are available. We distinguish between full-field methods and raw-field methods. On the one hand, we have continuous full-field methods, which approximate the velocity with a field that is defined inside the element itself. On the other hand, we have discrete methods that approximate the flux, and for which no velocity field is defined inside the elements; we denote these raw-field methods. We examine some mass-conservative methods on the basis of two different characteristics: uniform flow reproduction and convergence on rough grids. Analysis apart, we propose an interpolation on polygons that can reproduce uniform flow by using edge basis functions constructed by means of barycentric functions. This permits us to obtain an H(div) field starting from discrete fluxes evaluated at the interfaces of the elements. We also show that a similar interpolation can be constructed to obtain an H(curl) field on polygons in 2D.
Locally Adaptive Timestepping in Reservoir Simulators
Authors H. Mc Namara, G. Bowen and P.J. Dellar
An algorithm is suggested for locally adapting the step size in large-scale finite volume simulations of multiphase flow in petroleum reservoirs, allowing an "all-in-one" implicit calculation of behaviour over a very large time scale. Some numerical results for simple two-phase flow in one space dimension illustrate the promise of the algorithm, which has also been applied to very simple 3D cases. A description of the algorithm is presented here along with early results. It is hoped that further development of the technique will yield useful scaling properties.
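The bookkeeping that motivates local time stepping can be sketched from a per-cell CFL condition: a global explicit scheme is forced to the smallest stable step everywhere, while local stepping lets slow cells take far fewer substeps. This is only the motivating arithmetic, not the authors' algorithm, and the 1D velocity field is hypothetical (powers of two, so the arithmetic is exact):

```python
import numpy as np

def local_cfl_steps(velocity, dx, cfl=0.5, dt_max=1e3):
    """Per-cell stable time steps from a local CFL condition (a sketch)."""
    with np.errstate(divide="ignore"):
        dt = cfl * dx / np.abs(velocity)
    return np.minimum(dt, dt_max)

# Hypothetical 1D field: fast flow near a well, very slow flow far away
dx = 10.0
velocity = np.array([5.0, 2.0, 0.5, 0.0625, 0.015625])
dt_local = local_cfl_steps(velocity, dx)

# Global explicit stepping must use the smallest dt in every cell;
# local stepping gives each cell only the substeps it actually needs
dt_global = dt_local.min()
substeps = np.ceil(dt_local.max() / dt_local).astype(int)
print(dt_global, substeps)
```

Here the fastest cell needs 320 substeps per slow-cell step while the slowest needs one, which is the disparity a locally adaptive scheme exploits.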
Improved Gravity Splitting for Streamline and Reordering Methods
Authors H.M. Nilsen and J.R. Natvig
For heterogeneous reservoirs, fast, stable and accurate methods are hard to obtain. Large changes in the velocity field lead to severe time-step restrictions in explicit schemes or expensive time steps in implicit schemes. In the absence of gravity, the exact velocity field will be loop-free in the sense that there are no closed integral curves. For streamline methods, the absence of closed integral curves ensures that all streamlines have endpoints in wells. Likewise, this property implies that the Jacobian matrix of an implicit scheme with an upwind flux approximation can be reduced to a triangular matrix by a permutation, or reordering, of the unknowns. For the above methods, the effect of gravity is usually handled by operator splitting in the transport equation. Gravity introduces rotation in the total velocity, which may yield closed integral curves. Even when the effect of gravity is small, it can limit the efficiency and robustness of streamline and reordering methods. To overcome this problem, we split the total velocity field into a loop-free part without the effect of gravity and a part driven by gravity only. This is achieved by solving the pressure equation twice with different right-hand sides.
A Robust Streamline Tracing Method for Systems with Non–neighbor Connections
By D. Kachuma
The use of complex corner point systems in field flow reservoir simulators allows the modelling of a wider variety of cell geometries and hence a better representation of faulted reservoirs. If faults are involved, then a cell face may be next to two or more cell faces. This phenomenon can also be observed in situations where the cells in a particular region have been refined to improve accuracy. This creates complex connections between cell faces which require special attention and representation within the simulator. Conventionally, this is handled by the use of the so-called non-neighbour connections (NNC) whereby the simulator includes, on top of the regular neighbour fluxes, the fluxes between the extra cell connections. We present a robust technique which is applicable regardless of the flow configuration. Our technique involves subdividing each cell with non-neighbour fluxes into a local regular subgrid. Using this grid the local streamfunction is constructed numerically. By reducing the size of the local subgrid, the accuracy of the traced streamlines improves. We demonstrate this technique first on a detailed 2D synthetic example that is designed to show the robustness of the technique. The practical utility of our algorithm is then demonstrated in a structurally complex and heavily faulted full model of a North Sea field which includes cells with several non-neighbour configurations in different faces. This treatment is contrasted with the usual approach and also other techniques designed to trace streamlines in heavily faulted systems.
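Once a cell is subdivided into a regular subgrid, tracing through each subcell can be done with the standard Pollock construction: the velocity varies linearly between opposing faces, so the exit time has a closed-form exponential expression. A 2D sketch (not the authors' code; the flux values are illustrative, and the streamline is assumed to exit the cell):

```python
import numpy as np

def pollock_trace(entry, fluxes, dx=1.0, dy=1.0):
    """Exit point of a streamline in one rectangular cell (Pollock's method).

    entry  : (x, y) inside the cell, in [0, dx] x [0, dy]
    fluxes : (vx0, vx1, vy0, vy1) face velocities; v varies linearly inside.
    """
    vx0, vx1, vy0, vy1 = fluxes
    gx, gy = (vx1 - vx0) / dx, (vy1 - vy0) / dy

    def exit_time(p, v0, g, length):
        v = v0 + g * p                     # velocity at the entry position
        if abs(g) > 1e-14:
            v_exit = v0 + g * (length if v > 0 else 0.0)
            if v * v_exit <= 0:            # velocity changes sign: no exit here
                return np.inf
            return np.log(v_exit / v) / g  # Pollock's exponential exit time
        if abs(v) < 1e-14:
            return np.inf
        return ((length if v > 0 else 0.0) - p) / v

    tx = exit_time(entry[0], vx0, gx, dx)
    ty = exit_time(entry[1], vy0, gy, dy)
    t = min(tx, ty)                        # first face reached wins

    def position(p, v0, g, t):
        if abs(g) > 1e-14:
            return p + (v0 + g * p) * np.expm1(g * t) / g
        return p + v0 * t

    return position(entry[0], vx0, gx, t), position(entry[1], vy0, gy, t)

# Uniform diagonal flow: a particle entering at the corner exits at (1, 1)
x_out, y_out = pollock_trace((0.0, 0.0), (1.0, 1.0, 1.0, 1.0))
print(x_out, y_out)
```

Stitching such per-subcell exits together across the local subgrid is what yields a continuous streamline through a cell with non-neighbour connections.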
Streamline Tracing on Irregular Geometries
The accurate and efficient tracing of streamlines is fundamental to any streamline-based simulation method. For grids with irregular cell geometries, such as corner-point grids with faults or Voronoi-diagram (pebi) grids, most efforts to trace streamlines have been focused on subdividing irregular cells into sets of simpler subcells, typically hexahedra or simplices. Then one proceeds by reconstructing the velocity field on, and tracing through, the sets by a more basic algorithm. One such basic algorithm applies to incompressible flow on simplices. In that case there is a cell-wise constant velocity that is consistent with given face fluxes, as long as those face fluxes sum to zero for the cell. We give an efficient and simple formulation of this algorithm using barycentric coordinates. Another approach to irregular cell tracing is computing the streamline directly on the complex cell geometry. We give a new method based on generalized barycentric coordinates for direct tracing on arbitrary convex polygons, which generalizes the corner velocity interpolation method of Hægland et al. (2007). The method generalizes to convex polytopes in 3D, with a restriction on the polytope topology near corners that is shown to be satisfied by several popular grid types.
A Stochastic Sharpening Method for the Propagation of Phase Boundaries in Multiphase Lattice Boltzmann Simulations
Authors T. Reis and P.J. Dellar
Existing lattice Boltzmann models that have been designed to recover a macroscopic description of immiscible liquids are only able to make predictions that are quantitatively correct when the interface that exists between the fluids is smeared over several nodal points. Attempts to minimise the thickness of this interface generally lead to a phenomenon known as lattice pinning, the precise cause of which is not well understood. This spurious behaviour is remarkably similar to that associated with the numerical simulation of hyperbolic partial differential equations coupled with a stiff source term. Inspired by the seminal work in this field, we derive a lattice Boltzmann implementation of a model equation used to investigate such peculiarities. This implementation is extended to different spatial discretisations in one and two dimensions. We show that the inclusion of a quasi-random threshold dramatically delays the onset of pinning and facetting.
Parallel Sparsified Solvers for Reservoir Simulation
By H.M. Klie
The premise of the present work lies in the fact that a large number of system coefficients may be disregarded without compromising the robustness of the overall solution process. That is, given a linear system, it is possible to construct a preconditioner by dropping a large number of off-diagonal nonzeros and to use it as a suitable proxy for the original matrix. This proxy system can in turn be factored to generate a new class of block ILU preconditioners, approximate inverses and algebraic multigrid implementations. We propose two parallel algorithms to sparsify a given linear system: (a) random sampling sparsification (RSS), and (b) percolation-based sparsification (PBS). The former relies on the idea that coefficients are included in the sparsified system with a probability proportional to their effective transmissibility. The latter relies on capturing highly connected flow paths described by the whole set of transmissibility coefficients. Depending on the case, the RSS and PBS algorithms have the potential to reduce by orders of magnitude the number of floating-point operations associated with the preconditioner. Results confirming the benefits of sparsified solvers are illustrated on a wide set of field cases arising in black-oil and compositional simulations.
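The proxy idea can be sketched by thresholding off-diagonal coefficients of a diagonally dominant test matrix and factoring the sparsified system. Plain magnitude thresholding is only a crude stand-in for the RSS/PBS selection rules, and the matrix is synthetic, but it shows the workflow: drop, factor, then solve with the cheaper proxy:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
rng = np.random.default_rng(1)

# Synthetic, strongly diagonally dominant matrix standing in for a
# reservoir pressure system (off-diagonals play the role of transmissibilities)
S = sp.random(n, n, density=0.05, random_state=1, format="csr")
S = S + S.T
A = (S + sp.diags(10.0 + abs(S).sum(axis=1).A1)).tocsr()

# Sparsify: keep the diagonal, drop small off-diagonal coefficients
B = A.tocoo()
keep = (np.abs(B.data) > 0.2) | (B.row == B.col)
P = sp.csr_matrix((B.data[keep], (B.row[keep], B.col[keep])), shape=A.shape)

# Factor the sparsified proxy; one proxy solve already approximates A^{-1} b,
# so P would serve as a preconditioner for an iterative method on A
lu = spla.splu(P.tocsc())
b = rng.standard_normal(n)
z = lu.solve(b)
res = np.linalg.norm(A @ z - b) / np.linalg.norm(b)
print(P.nnz, A.nnz, res)
```

The proxy has fewer nonzeros than the original yet its single solve leaves only a small relative residual, which is exactly the trade-off a sparsified preconditioner exploits.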
High Performance Manycore Solvers for Reservoir Simulation
The forthcoming generation of many-core architectures compels a paradigm shift in algorithmic design to unlock their full potential. In this paper, we discuss a novel approach for solving the large sparse linear systems arising in realistic black-oil and compositional flow simulations. A flexible variant of GMRES (FGMRES) is implemented with the CUDA programming model on the GPU platform, using the Single Instruction Multiple Threads (SIMT) paradigm to take advantage of thousands of threads simultaneously executing instructions. The implementation on the GPU is optimized to reduce memory overhead per floating-point operation, given the sparsity of the linear system. FGMRES relies on a suite of different preconditioners, such as BILU, BILUT and multicoloring SSOR. Additionally, the solver strategy relies on reordering/partitioning algorithms to extract further performance. Computational experiments on a wide range of realistic reservoir cases show a competitive edge over conventional CPU implementations. The encouraging results demonstrate the potential of many-core solvers to improve the performance of near-future reservoir simulations.
-
-
-
Multiple Kernel Learning Approach for Reservoir Modelling
Authors V. Demyanov, L. Foresti, M. Kanevski and M. Christie
The paper proposes a novel data-driven approach for modelling petrophysical properties in oil reservoirs. We aim to improve the realism of reservoir models with a more intelligent way of integrating raw data and geological knowledge. Multiple Kernel Learning (MKL) provides enhanced interpretability by using a separate kernel for each input variable, and performs kernel/feature selection to solve a regression problem in a high-dimensional feature space. MKL has the advantage of rigorous control over model complexity, achieving the right balance between data fit and prediction accuracy. The MKL reservoir model was designed to integrate data and prior knowledge describing geological structure at multiple scales. Geological structures can be detected by applying convolution filters to noisy seismic data to capture changes in the gradients, orientations and sizes of meandering channels. Such "geo-features" are added as input variables to the MKL model, which optimises the weighted combination of kernels to fit the available data. An MKL application to a synthetic meandering-channel reservoir has demonstrated its capacity for selecting the relevant input information for detecting the channel structure. Experiments with noisy seismic inputs highlighted the feature-selection skills of MKL, which was able to filter them out.
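The per-variable kernel weighting at the core of MKL can be illustrated on a toy regression. This is a deliberately crude sketch (grid search over a single simplex weight, kernel ridge as the learner, made-up bandwidths), not the authors' optimiser: the point is that the weight migrates to the kernel on the informative input.

```python
import numpy as np

def rbf(X1, X2, s):
    # Gaussian (RBF) kernel between two sample sets
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * s * s))

# Toy data: the response depends only on the first input variable,
# so a per-variable kernel weighting should favour kernel 1.
rng = np.random.default_rng(0)
Xt, Xv = rng.uniform(-1, 1, (60, 2)), rng.uniform(-1, 1, (40, 2))
yt, yv = np.sin(3 * Xt[:, 0]), np.sin(3 * Xv[:, 0])

# one kernel per input variable (the interpretability device of MKL)
K1t, K2t = rbf(Xt[:, :1], Xt[:, :1], 0.3), rbf(Xt[:, 1:], Xt[:, 1:], 0.3)
K1v, K2v = rbf(Xv[:, :1], Xt[:, :1], 0.3), rbf(Xv[:, 1:], Xt[:, 1:], 0.3)

best_w, best_err = None, np.inf
for w in np.linspace(0.0, 1.0, 21):       # crude search on the 1-simplex
    alpha = np.linalg.solve(w * K1t + (1 - w) * K2t + 1e-3 * np.eye(60), yt)
    pred = (w * K1v + (1 - w) * K2v) @ alpha
    err = np.mean((pred - yv) ** 2)       # validation error drives the weight
    if err < best_err:
        best_w, best_err = w, err
```

In the paper's setting the inputs would be the seismic "geo-features", and a kernel receiving weight near zero is effectively filtered out.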
-
-
-
MP Simulations Without Computing MP Statistics
Authors G. Mariethoz, P. Renard, J. Straubhaar and J. Caers
In recent years, multiple-point simulation has become an invaluable tool for integrating geological concepts in subsurface models. However, due to its high CPU and RAM demands, its use is restricted to relatively small problems with limited structural complexity. Moreover, it only allows the simulation of univariate fields. We present an alternative method that produces conditional realizations honoring the high-order statistics of uni- or multivariate training images. It is based on a sampling method introduced by Shannon (1948), strictly equivalent to the original method of Guardiano and Srivastava (1993), but that does not need to compute or store conditional probabilities. In the sampling process, we use a distance between data configurations that allows the simulation of both discrete and continuous variables. As a result, the simulation algorithm is drastically simplified and offers more possibilities. Since nothing is stored, neighborhoods can have virtually any size. Moreover, the neighborhoods are not restricted to a template, making multiple grids unnecessary. Multivariate data configurations can be considered, allowing the generation of realizations presenting a given multivariate multiple-point dependence. In addition to having virtually no RAM requirement, the method is straightforward to parallelize. Hence it can produce very large and complex realizations.
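The scheme can be sketched for a small binary 2-D case. This is a bare-bones reconstruction under simplifying assumptions (Manhattan-nearest neighbors, a mismatch-fraction distance, hypothetical `max_scan`/`thr` parameters), not the authors' code, but it shows the defining trait: the training image is scanned at random and the first acceptable data event is pasted, with no probabilities ever computed or stored.

```python
import numpy as np

def direct_sampling(ti, shape, n_neigh=8, max_scan=200, thr=0.0, seed=0):
    # Simplified 2-D "simulation without statistics": visit cells in random
    # order; for each cell, randomly scan the training image and accept the
    # first location whose data event is within `thr` mismatch of the
    # conditioning event.
    rng = np.random.default_rng(seed)
    H, W = shape; h, w = ti.shape
    sim = np.full(shape, np.nan)
    for idx in rng.permutation(H * W):
        i, j = divmod(idx, W)
        known = np.argwhere(~np.isnan(sim))
        if len(known):
            near = known[np.argsort(np.abs(known - [i, j]).sum(1))[:n_neigh]]
            offs, vals = near - [i, j], sim[near[:, 0], near[:, 1]]
        else:
            offs, vals = np.empty((0, 2), int), np.empty(0)
        best_v, best_d = ti[rng.integers(h), rng.integers(w)], np.inf
        for _ in range(max_scan):
            ci, cj = rng.integers(h), rng.integers(w)
            pi, pj = ci + offs[:, 0], cj + offs[:, 1]
            if len(vals) and not ((pi >= 0) & (pi < h) & (pj >= 0) & (pj < w)).all():
                continue                      # data event falls outside the TI
            d = (ti[pi, pj] != vals).mean() if len(vals) else 0.0
            if d < best_d:
                best_v, best_d = ti[ci, cj], d
            if d <= thr:
                break                         # accept first match under threshold
        sim[i, j] = best_v
    return sim

# binary channel-like training image: horizontal bands of width 5
ti = np.tile((np.arange(50)[:, None] // 5) % 2, (1, 50)).astype(float)
sim = direct_sampling(ti, (20, 20), seed=2)
```

For continuous variables the mismatch fraction would be replaced by, e.g., a mean absolute difference, which is the multivariate-distance flexibility the abstract refers to.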
-
-
-
Application of Stochastic Partial Differential Equations to Reservoir Property Modelling
Authors R. Potsepaev and C.L. Farmer
Existing geostatistical algorithms for stochastic modelling of reservoir parameters require a mapping (the 'uvt-transform') into parametric space and the reconstruction of a stratigraphic co-ordinate system. The parametric space can be considered to represent a pre-deformed and pre-faulted depositional environment. Existing approximations of this mapping in many cases cause significant distortions of the correlation distances. In this work we propose a coordinate-free approach to modelling stochastic textures through the application of stochastic partial differential equations. By avoiding the construction of a uvt-transform and stratigraphic coordinates, one can generate realizations directly in physical space in the presence of deformations and faults. In particular, the solution of the modified Helmholtz equation driven by Gaussian white noise is a zero-mean Gaussian stationary random field with exponential correlation function (in 3-D). This equation can be used to generate realizations in parametric space. In order to sample in physical space we introduce a stochastic elliptic PDE with tensor coefficients, where the tensor is related to the correlation anisotropy and its variation in physical space.
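The modified-Helmholtz construction is easy to demonstrate on a grid. Below is a minimal 2-D finite-difference sketch (unit spacing, Dirichlet-like truncation at the boundary); note the abstract's exponential-correlation statement is for the 3-D case, while this 2-D analogue produces a Matérn-type field by the same mechanics.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

def spde_field(n, kappa, seed=0):
    # Sample a stationary Gaussian field by solving the modified Helmholtz
    # equation (kappa^2 - Laplacian) z = white noise on an n x n grid
    # (5-point finite differences, unit spacing).
    rng = np.random.default_rng(seed)
    e = np.ones(n)
    L1 = diags([e[1:], -2 * e, e[1:]], [-1, 0, 1])       # 1-D Laplacian
    L = kron(identity(n), L1) + kron(L1, identity(n))    # 2-D Laplacian
    A = (kappa ** 2) * identity(n * n) - L
    z = spsolve(A.tocsc(), rng.standard_normal(n * n))   # driven by white noise
    return z.reshape(n, n)

field = spde_field(64, kappa=0.5)
```

Replacing the isotropic Laplacian stencil with a divergence-form operator carrying a spatially varying tensor is what lets the same machinery impose anisotropy that follows deformed and faulted geometry, as the abstract proposes.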
-
-
-
Spartan Random Fields and Applications in Spatial Interpolation and Conditional Simulation
Geostatistics plays an important role in the estimation and simulation of reservoir parameters, and in forecasting reservoir production. This presentation focuses on Spartan spatial random fields (SSRFs), which provide a new framework for geostatistical applications. The idea motivating SSRF development is that spatial correlations can be represented by local interactions. A brief overview of SSRF mathematical properties will be given. SSRF variogram models, which in the isotropic case contain 3-4 parameters and thus offer more flexibility than standard models, will be presented. It will be argued that empirical variogram estimation is not necessary to infer SSRF parameters. SSRF spatial interpolation of scattered data will be discussed. It will be shown that approximate but closed-form, efficiently calculable expressions become possible. Hence, numerical solution of kriging linear systems is avoided, leading to improved numerical complexity. Connections will be drawn to Markov random fields, statistical physics, minimum curvature estimation, and local random fields. An SSRF application to 2D conditional simulation will be presented. Finally, directions for the future development of SSRFs will be discussed, including extensions to non-Gaussian data distributions using discretized random field models with "spin" interactions.
-
-
-
Isometric Unfolding of Stratigraphic Grid Units for Accurate Property Populating – Mathematical Concepts
Authors S. Horna, C. Bennis, H. Borouchaki, C. Delage and J.F. Rainaud
In traditional methods used to populate stratigraphic units, the distortions can be significant and affect the static and dynamic parameters required for reservoir simulation, and hence the simulation results. These distortions result from the mapping between the original curvilinear stratigraphic grid and the intermediate Cartesian grid in which the property populating is performed. To minimize the deformation and improve the populating process, we propose a new isometric unfolding process based on minimization of the elastic deformation tensor. This method can be applied to every type of deposit: horizontal, parallel to top, parallel to bottom, or proportional. Starting from a structural model defined on a coordinate-line grid, the user chooses a reference iso-chronological level represented by a triangulated surface. The contacts between this surface and the fault surfaces are explicitly extracted as coincident edges. These coincident edges are used to constrain an unfolding process that, while respecting these constraints, joins the fault lips opened by tectonic events. In this paper we focus on the mathematical concepts of the algorithms used in the whole unfolding process and present some results through actual case studies.
-
-
-
History Matching of a Stochastic Multifractal Subseismic Fault Model
Authors M. Verscheure, A. Fourno and J.P. Chilès
Many geostatistical methods have been developed to generate realistic models of subseismic fault networks. The problem is that these models are difficult to history match. In this work, we present an original methodology in which history matching is performed through a modification of fault positions. We first propose a multifractal methodology to generate the faults; the method has been specifically developed to allow history matching. The model parameters are derived from the analysis of the seismic faults. A stochastic algorithm is used to generate 3D subseismic fault realizations that are constrained to the seismic faults. The fracture network is then discretized on a dual-porosity simulation grid. Equivalent flow parameters are computed using an analytical method. Finally, full-field simulations are performed using a multiphase flow simulator. We then introduce a method to gradually change the locations of faults while preserving multifractal properties. Changes in locations are driven by a reduced number of parameters. These parameters are gradually modified to optimize the geometry of the realization and compel the initial fault model to reproduce the hydrodynamic behaviour observed in the field. The potential of the methodology is demonstrated on a 3D case study.
-
-
-
Fault Displacement Modelling Using 3D Vector Fields
Authors P. Røe, F. Georgsen, A.R. Syversveen and O. Lia
In history matching and sensitivity analysis it is useful to study the effect of changing the position of horizons near faults, and the resulting facies juxtaposition due to changes in fault displacement. A new algorithm for calculating a 3D displacement field is developed. Thanks to a flexible representation it is applicable to a wide range of faults, and it makes it possible to apply this field to change the displacement and thereby move horizons. The fault is represented as a surface in a local coordinate system defined by a centre point and dip and strike angles. The displacement of points associated with the fault outside the fault surface is described by a 3D vector field. The displacement on the fault surface can be found from the intersection lines between horizons and the fault surface (fault lines). Away from the fault surface the displacement field is defined by a monotonically decreasing function. The displacement of the fault can be changed by a user-defined factor; the whole displacement field is then changed, and points on horizons around the fault are moved by applying the modified displacement field. The interaction between faults influencing the same points is handled by truncation rules.
-
-
-
Quantifying the Impact of Reservoir Heterogeneity on Recovery Using Shear and Vorticity
Authors B. Rashid, A.L. Bal, G.J.J. Williams and A.H. Muggeridge
We have used the vorticity of the displacement velocity, as defined by Heller (1966), to derive dimensionless numbers that quantify the relative impact of viscosity ratio, gravity, diffusion and dispersion, and permeability heterogeneity on reservoir flow behaviour. We have used this approach to introduce a new, objective measure of the impact of permeability and porosity heterogeneity on reservoir flow behaviour. Buoyancy forces are quantified using a gravity-to-viscous ratio (G) and diffusion/dispersion using the transverse dispersion number. Detailed simulation of first-contact-miscible gas/solvent displacements through a range of geologically realistic reservoir models is used to show that the new heterogeneity number, in conjunction with the other dimensionless numbers, provides meaningful results for real nonlinear reservoir flows. This study goes some way towards developing a unified mathematical framework to determine under which flow conditions reservoir heterogeneity becomes more important than other physical processes.
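The underlying quantity is standard: in 2-D, the vorticity is the scalar curl of the velocity field, omega = dv/dx - du/dy. A minimal finite-difference sketch (the grid and the rigid-rotation test field are illustrative, not from the paper):

```python
import numpy as np

def vorticity_2d(u, v, dx=1.0, dy=1.0):
    # z-component of the curl of a 2-D velocity field,
    # omega = dv/dx - du/dy, via central differences
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    return dvdx - dudy

# rigid-body rotation u = -y, v = x has constant vorticity omega = 2
y, x = np.mgrid[0:32, 0:32].astype(float)
omega = vorticity_2d(-y, x)
```

Applied to a simulated displacement-velocity field, maps of omega highlight where heterogeneity (or viscous fingering) is shearing the front, which is what the derived dimensionless numbers summarize.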
-
-
-
A New Parameterization Technique for the Calibration of Facies Proportions in History–matching Processes
Authors G. Enchery, F. Roggero, M. Le Ravalec, E. Tillier and V. Gervais
In this paper, we propose a parameterization technique to automatically adjust facies proportions in a history-matching process. Facies proportions are usually non-stationary and thus involve a number of parameters proportional to the number of grid blocks, which cannot be handled in practice. Our technique allows us to vary facies proportions from a reduced number of parameters. The method operates on the ratio of average proportions between facies groups whose proportions are a priori poorly known. Changes in the ratio values are driven by the optimization process and induce variations in facies proportions over the target area. Two different algorithms are introduced depending on the geological environment. The simplest and most efficient one generates a discontinuity between the target area and the embedding environment, while the second one ensures continuity. One advantage of both techniques is that they do not depend on the stochastic method used to simulate the facies realizations. To demonstrate the potential of this approach, we recap the results obtained for a faulted turbidite field located offshore Angola and show how this technique helped to improve the calibration to 4D seismic data.
-
-
-
A New Method for Updating and Assessing Validity of Prior Reservoir Models
Authors G.M. Mittermeir, M.T. Amiry and Z.E. Heinemann
The paper presents a new approach for updating reservoir models that have previously been history matched. The main concern regarding any reservoir model is to maintain its predictive capability. Practical experience shows that this capability is lost very soon, and consequently updating of prior simulation models goes along with significant changes to the model itself. Often this requires not only tuning of the aquifers but also modifications of the reservoir properties. Based on the results it is decided whether the model is still valid or an entire update is necessary. A newly developed method called the Target Pressure and Phase Method (TPPM) provides a means to significantly improve this process. TPPM not only speeds up the update procedure but also reliably assesses the validity of the model. In the conventional history-matching workflow, pressures, water cut and GOR are calculated and the reservoir model is changed until it fits the history. TPPM takes the pressure measurements and the oil/gas/water production rates as input and determines the aquifer parameters and the well conditions needed to match the given historical measurements. The paper presents the basic idea as well as its applicability to a field case.
-
-
-
Mathematical Reformulation of Highly Nonlinear Large-scale Inverse Problem in Metric Space
Authors K. Park, C. Scheidt and J. Caers
Solving a highly nonlinear and often time-dependent large-scale inverse problem remains challenging due to the computing costs, the various types and scales of data, and the nonlinearity between the model and the data. Based on the observation that in most flow inverse problems the (geological) model is very complex while the response under investigation (the data) is of much lower dimension, we propose to reformulate the inverse problem in metric space. Once the distance between any two (geological) model realizations is known, any such model (whether a structure or a property) can be mapped into a metric space, which is non-dimensional but can be represented by its projection onto a low-dimensional (typically 2D-5D) space through multi-dimensional scaling. Knowing the differences between the responses of the models and the data (the response from the true Earth), the location of this truth can be identified. The inverse problem is therefore reformulated as finding the ensemble of models that map onto the location of the truth. We propose a methodology to solve this reformulated inverse problem by a series of mathematical techniques. We apply the proposed method to a realistic reservoir history-matching problem, which contains various types of constraints and requires large computational costs.
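The projection step the abstract describes is classical multi-dimensional scaling: given only a distance matrix between realizations, recover low-dimensional coordinates. A self-contained sketch (the "responses" here are hypothetical random vectors standing in for flow responses):

```python
import numpy as np

def classical_mds(D, k=2):
    # Classical multi-dimensional scaling: double-centre the squared
    # distance matrix and keep the top-k eigenpairs as coordinates.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# stand-in "model distances": Euclidean distances between 30 random
# 5-D response vectors (in practice, a distance between flow responses)
rng = np.random.default_rng(0)
P = rng.standard_normal((30, 5))
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
X = classical_mds(D, k=5)
D_embed = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
```

Once models and the observed data are co-located in this space, "find the ensemble of models mapping onto the truth's location" becomes an ordinary search in a few dimensions.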
-
-
-
Monitoring 3D Reservoirs from CSEM Data Using a Level Set Technique
Authors O. Dorn and R. Villegas
A new shape reconstruction technique is presented for Controlled Source ElectroMagnetic (CSEM) imaging of subsurface structures such as reservoirs. The technique can in principle be used both for exploration and for monitoring of active reservoirs. The main characteristic of our algorithm is to incorporate certain types of a priori information into the estimation, which has the potential to increase resolution. In particular, we assume that only the shape and topology of the structures need to be determined from the data. The background is assumed to be known from alternative tools such as seismics, well logs or geological information. The full system of Maxwell's equations in 3D is employed for the inversion and combined with a level set technique for representing shapes. Numerical examples are presented which show that very good reconstructions can be obtained with this novel reconstruction technique from CSEM data for realistic simulated situations.
-
-
-
Structural Uncertainty Modelling and Updating Using the Ensemble Kalman Filter – Parameters and State Consistency
Authors A. Seiler, S.I. Aanonsen and G. Evensen
In previous work, the authors presented an elastic grid approach to handle horizon and fault geometric uncertainties in the reservoir model and established an assisted history-matching workflow for updating the structural model using the EnKF. In this paper we consider a gas-flooding experiment in which the flow path is controlled by the reservoir top structure. Uncertainties in the top horizon are accounted for and updated by sequential assimilation of production data using the Ensemble Kalman Filter (EnKF). The result is an ensemble of history-matched models, with reduced and quantified uncertainty in the top horizon. The updated estimate of the top horizon captures the main features of the reference structure. We focus on the consistency between the reservoir top horizon and the gas saturation, as it has been shown that when the assumption of Gaussian priors in the EnKF is violated, the EnKF update scheme may lead to inconsistencies between the state and model parameters. We study in detail the sequential updating of the gas saturation and investigate if the updated state is a better description of the reservoir condition given the uncertainties in the reservoir description and modelling errors, or if a reinitialization with the updated parameters is needed to solve the inconsistency issues before predictions.
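The analysis step at the heart of this workflow is the standard stochastic-EnKF update. A generic sketch on a toy linear problem (the state, observation operator and noise level are stand-ins, not the paper's structural parameters):

```python
import numpy as np

def enkf_update(X, H, d_obs, R_std, seed=0):
    # Stochastic-EnKF analysis step: X is (n_state, n_ens), H a linear
    # observation operator, d_obs the data, R_std the obs-error std dev.
    rng = np.random.default_rng(seed)
    n_obs, N = len(d_obs), X.shape[1]
    D = d_obs[:, None] + R_std * rng.standard_normal((n_obs, N))  # perturbed obs
    A = X - X.mean(1, keepdims=True)                              # anomalies
    HA = H @ A
    C = HA @ HA.T / (N - 1) + R_std ** 2 * np.eye(n_obs)
    K = (A @ HA.T / (N - 1)) @ np.linalg.inv(C)                   # Kalman gain
    return X + K @ (D - H @ X)

# toy example: estimate a 10-cell state from 3 noisy point observations
rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0, np.pi, 10))
H = np.zeros((3, 10)); H[0, 1] = H[1, 5] = H[2, 8] = 1.0
X = rng.standard_normal((10, 100))       # prior ensemble, poorly centred
d = H @ truth
Xa = enkf_update(X, H, d, R_std=0.05)
```

In the paper's setting the state vector augments structural parameters (horizon depths) with dynamic variables (gas saturation); the consistency question is precisely whether this joint linear update keeps those two parts physically compatible.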
-
-
-
Assessing the Impact of Different Types of Time-lapse Seismic Data on Permeability Estimation
Authors T. Feng and T. Mannseth
We consider the impact of using time-lapse seismic data in addition to production data for permeability estimation in a porous medium with multi-phase fluid flow, such as a petroleum reservoir under water-assisted production. Since modeling seismic wave propagation in addition to modeling fluid flow in the reservoir is quite involved, it is assumed that the time-lapse seismic data have already been inverted into differences in elastic properties, or even into fluid-saturation and pressure differences (pseudo-seismic data). Because an inversion process often leads to considerable error growth, we consider pseudo-seismic data with large uncertainties. The impact of pseudo-seismic data is assessed through permeability estimation with and without such data, and through application of uncertainty measures for the estimated parameters. A predictor-corrector technique is used for the parameter estimation. The predictor produces a coarse-scale permeability estimate using only dynamic data. The corrector downscales the predictor estimate into a more smoothly varying field, also using the prior model. A successful final corrector result will thus be consistent with the prior model and reconcile the dynamic data. The predictor-corrector approach can reveal what parameter resolution can be achieved in a stable manner with different data types, since it can be applied without an explicit regularization term in the objective function. In this work, the impact of pseudo-seismic data is investigated based on both coarse-scale predictor solutions and fine-scale predictor-corrector solutions. The numerical examples clearly indicate that the permeability estimation problem is stabilized at a higher level of resolution when pseudo-seismic data are applied in addition to production data, even if the pseudo-seismic data have large associated uncertainties. Several types of pseudo-seismic data, such as acoustic-impedance differences, fluid-saturation differences, and pressure differences, are tested, and the resulting estimates are compared.
-
-
-
Comparing the Adaptive Gaussian Mixture Filter with the Ensemble Kalman Filter
Authors A.S. Stordal, H.A. Karlsen, G. Nævdal, H.J. Skaug and B. Vallès
Over the last years the ensemble Kalman filter (EnKF) and related versions have become a very popular tool for reservoir characterization. The EnKF presents the history-matching result and uncertainty in terms of an ensemble of models generated from a prior model and updated sequentially in time to account for the measurements. From a statistical point of view, the optimal solution to the history-matching problem is the posterior distribution of the reservoir parameters given all the measurements. However, since the EnKF update is linear, it has severe limitations when the posterior distribution to be estimated is multimodal and/or strongly skewed due to nonlinearity of the system. As standard sequential Monte Carlo (SMC) techniques are too expensive for large models, several methods have been proposed to combine the EnKF with SMC methods. In this paper we apply, for the first time, the recently proposed adaptive Gaussian mixture filter (AGM), introduced by Stordal et al. (2009), to a reservoir model and compare the results with the traditional EnKF. The AGM tries to loosen the requirement of a nearly linear/Gaussian model by combining a relaxed EnKF update with importance-weighted resampling, thereby exploiting some of the higher-order moment information, as in standard SMC methods such as the particle filter, whilst keeping the robustness of the EnKF. The reservoir is a 2D synthetic model with 4 producers and 1 injector. The permeability and porosity fields are estimated. Although both methods produce good history matches, the results show that the AGM better preserves the geology of the prior model. Moreover, the final updated fields from the AGM are closer to the truth than the corresponding EnKF results.
-
-
-
Population MCMC Methods for History Matching and Uncertainty Quantification
Authors L. Mohamed, B. Calderhead, M. Filippone, M. Christie and M. Girolami
This paper presents the application of a population MCMC technique to generate history-matched models. The technique has been developed and successfully adopted in challenging domains such as computational biology, but has not yet seen application in reservoir modelling. In population MCMC, multiple Markov chains are run on a set of response surfaces that form a bridge from the prior to the posterior. These response surfaces are formed from the product of the prior with the likelihood raised to a varying power less than one. The chains exchange positions, with the probability of a swap governed by a standard Metropolis accept/reject step, which allows large steps to be taken with high probability. We show results of population MCMC on the IC Fault Model, a simple three-parameter model that is known to have a highly irregular misfit surface and hence to be difficult to match. Our results show that population MCMC is able to generate samples from the complex, multi-modal posterior probability surface of the IC Fault Model very effectively. By comparison, previous results from stochastic sampling algorithms often focus on only part of the region of high posterior probability, depending on algorithm settings and starting points.
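The mechanics can be sketched on a 1-D bimodal target. This toy version tempers the whole (unnormalized) posterior rather than only the likelihood, and uses made-up temperatures and step sizes, but it shows the two defining moves: per-chain Metropolis updates on each tempered surface, plus swap proposals between adjacent chains that let mode-hopping propagate down to the cold chain.

```python
import numpy as np

def population_mcmc(logpost, x0, temps, n_iter=2000, step=1.0, seed=0):
    # Population MCMC / parallel tempering: one Metropolis chain per
    # tempered target pi(x)^t, with swap moves between neighbouring
    # temperatures; temps must end with 1.0 (the chain we keep).
    rng = np.random.default_rng(seed)
    K = len(temps)
    X = np.array([x0 + rng.standard_normal(x0.size) for _ in range(K)])
    lp = np.array([logpost(x) for x in X])
    samples = []
    for _ in range(n_iter):
        for k in range(K):                          # within-chain updates
            prop = X[k] + step * rng.standard_normal(X[k].size)
            lpp = logpost(prop)
            if np.log(rng.random()) < temps[k] * (lpp - lp[k]):
                X[k], lp[k] = prop, lpp
        k = rng.integers(0, K - 1)                  # swap move between k, k+1
        if np.log(rng.random()) < (temps[k] - temps[k + 1]) * (lp[k + 1] - lp[k]):
            X[[k, k + 1]] = X[[k + 1, k]]
            lp[[k, k + 1]] = lp[[k + 1, k]]
        samples.append(X[-1].copy())                # keep the t = 1 chain
    return np.array(samples)

# bimodal 1-D target: mixture of N(-3, 0.5) and N(3, 0.5)
def logpost(x):
    return np.logaddexp(-0.5 * ((x[0] + 3) / 0.5) ** 2,
                        -0.5 * ((x[0] - 3) / 0.5) ** 2)

s = population_mcmc(logpost, np.zeros(1), temps=[0.1, 0.4, 1.0], n_iter=4000)
```

A single Metropolis chain at t = 1 would typically stay trapped in one mode here; the hot chain crosses the valley freely and the swaps carry both modes to the cold chain.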
-
-
-
Adaptive Pilot-point Strategy for History Matching of 3D Seismic Data
Authors S. Da Veiga and V. Gervais
Matching seismic data in assisted history matching can be a challenging task. Local parameterization techniques such as pilot-point or gradual-deformation methods can be introduced. While information related to the seismic data is sometimes used to initialize such local methods, it is no longer used in further steps, resulting in non-adaptive procedures. We propose to use the residual maps related to the 3D seismic data to drive the generation of pilot-point locations in an adaptive way. Residual maps are treated as probability density functions for the pilot-point locations. Once generated, these locations are changed according to a gradual deformation method applied to the random numbers associated with the locations. Finally, the optimal locations and values provide a reservoir model which is used to build a new residual map, and the procedure is repeated until convergence. A straightforward application of this technique leads to successive optimizations of a discontinuous objective function, since locations jump from one high-probability region to another. Consequently, we develop an innovative pre-treatment of the residual maps based on Gaussian mixtures. The combination of these steps leads to a highly flexible perturbation technique for 3D seismic matching.
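The first step, drawing pilot-point locations from a residual map treated as a density, is easy to sketch (the map and patch geometry below are hypothetical):

```python
import numpy as np

def sample_pilot_points(residual_map, n_points, seed=0):
    # Draw pilot-point cell locations with probability proportional to the
    # local absolute seismic residual, so points concentrate where the
    # mismatch is largest.
    rng = np.random.default_rng(seed)
    p = np.abs(residual_map).ravel()
    p = p / p.sum()
    idx = rng.choice(p.size, size=n_points, replace=False, p=p)
    return np.column_stack(np.unravel_index(idx, residual_map.shape))

# hypothetical residual map with one high-mismatch patch
res = np.full((50, 50), 0.01)
res[10:20, 30:40] = 1.0
pts = sample_pilot_points(res, 20)
```

Points therefore cluster in the patch where the seismic mismatch is concentrated; the gradual-deformation step of the paper then perturbs the underlying random numbers rather than the locations directly.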
-
-
-
Combining Probabilistic Inversion and Multi-objective Optimization for Production Development under Uncertainty
Authors D. Busby and E. Sergienko
Probabilistic forecasts can be obtained by taking into account different sources of uncertainty in the history-matching process. When a non-unique model is used for prediction, the problem of finding the best development plan to maximize the net present value can be extremely time consuming. Nevertheless, taking uncertainty into account can be critical to making better decisions and reducing investment risks. The proposed approach uses a response-surface approximation based on Gaussian processes to find the solution of the probabilistic inverse problem, thus considerably reducing the number of required simulations. Adaptive sampling strategies are used to obtain predictive response-surface models in both the probabilistic history matching and the forecasting problem. The method is illustrated on a realistic case study derived from a real field. The objective of the case study was to optimize a new development plan after six years of production history. We compare the results of the deterministic approach, based on a single history-matched model, with those of the stochastic approach. For the probabilistic approach a multi-objective optimization method is used to select the solution providing the highest profit with an acceptable level of risk.
-
-
-
A Probability Conditioning Method (PCM) for Integration of Production Data into Pattern-based Facies Simulation
Authors B. Jafarpour and M. Khodabakhshi
The ensemble Kalman filter (EnKF) has recently been proposed as a promising history-matching approach. While the EnKF has enjoyed favorable evaluations for history-matching applications, in certain depositional environments where the spatial continuity is characterized by discrete geological objects, applying the EnKF to update a prior ensemble of property fields can result in the loss of facies connectivity away from observation points. We present a new probability conditioning method (PCM) for updating pattern-based multipoint geostatistical facies realizations in a way consistent with the training image. We use the EnKF to infer a probability map that indicates the probability of a particular facies occurring at a given grid block. By incorporating this updated probability map into the snesim multipoint facies simulation algorithm, we generate a new set of facies realizations that are conditioned on the production history and honor the prior structural continuity. This implementation effectively integrates the production data and prior geologic patterns by using the former mainly to resolve the facies distribution around the wells and the latter to ensure consistent facies connectivity away from the wells. We illustrate the suitability of this approach using multiple history-matching examples with two and three facies representing fluvial formations.
-
-
-
Uncertainty Assessment of Saturation Modeling in Geocellular Models Using a Well Log Inversion Technique
Authors T. Friedel, A. Carnegie, J. Moreno and A. Carrillat
Static or dynamic modeling of hydrocarbon reservoirs requires a detailed description of the initial capillary-gravity equilibrium saturation of each individual phase at reservoir conditions. Typically, this is a function of local rock quality and the presence of capillary forces. Without understanding these, both initial volumetric estimates and subsequent predictions are potentially meaningless. The main idea of the novel approach is to treat this challenge as a full-scale inversion problem over all suitable well and core data, similar to history matching. First, a physical model is developed that describes the saturation anywhere in the reservoir and honors different rock types and reservoir regions. The resulting large set of parameters is initially based on core observations where available. An objective function then describes the mismatch between the simulated and observed saturations based on maximum-likelihood theory. The model calibration is fully automatic, using nonlinear solvers, and leads to the set of parameters that best fits the core- and log-observed saturations; this parameter set can then be populated into 3D geocellular models. A comprehensive statistical analysis of the results helps define the role of the individual physical processes, such as capillary pressure, as well as confidence intervals and correlations between coefficients.
-
-
-
Determination of Lower and Upper Bounds of Predicted Production from History-matched Models
Authors G.M. van Essen, J.D. Jansen and P.M.J. Van den Hof
We present a method to determine lower and upper bounds on the predicted production, or any other economic objective, from history-matched reservoir models. This is accomplished through a hierarchical optimization procedure, which limits the solution space of a secondary optimization problem to the null space of the primary optimization problem. We applied this procedure to a model of a channelized reservoir with a life-cycle of 6 years after 1.5 years of production. We performed a history match based on synthetic data, starting from a uniform prior and using a gradient-based minimization procedure. For the remaining 4.5 years, minimization and maximization of net present value (NPV), using a fixed control strategy, were executed as secondary optimization problems by changing the model parameters while staying in the null space. That is, we optimized the secondary objective functions while requiring that optimality of the primary objective (a good history match) was preserved. For the remaining 4.5 years of production, the method gives lower and upper bounds on the predicted NPV of 63% below and above the average, respectively. The method therefore provides a way to quantify the economic consequences of the well-known fact that history matching is a strongly ill-posed problem.
-
-
-
Evolutionary Algorithms with Pairwise Conditional Sampling for History Matching Optimisation
Authors I. Petrovska and J.N. Carter
When modelling oil and gas reservoirs, one considers a wide range of reservoir model parameters, some of which may depend on others. Examples include the correlation between porosity and permeability values, and the dependency between structural model parameters such as fault relay-ramp geometry and its transmissibility. A reservoir engineer is always aware of the possibility of such interactions within the model studied. The logical conclusion is to use this information when performing reservoir history-matching and prediction studies. The core of an efficient history-matching optimisation technique is its sampling quality. Any extra information capable of guiding the search within the solution space, based on assumptions of dependence or independence between the model parameters, should be welcomed into the optimisation process. This paper concentrates on a class of evolution-inspired stochastic optimization techniques capable of sampling conditional probability distributions of model parameters: multivariate Estimation of Distribution Algorithms. Using a synthetic reservoir model we show that even when one considers only pairwise, chain-like interactions between optimization parameters, this not only impacts the convergence speed of the optimization process but also significantly influences the diversity and quality of the achieved solutions.
An Iteratively Reweighted Algorithm for History Matching of Oil Reservoirs in Sparse Domains
Authors L. Li, M.R. Khaninezhad and B. Jafarpour
Identification of spatially variable hydraulic rock properties such as permeability and porosity is essential for accurate prediction of reservoir performance and for planning future development activities. Estimation of these properties from production data usually involves solving a highly underdetermined nonlinear inverse problem. The overwhelming number of unknowns, relative to the available data, leads to many parameter combinations that explain the data equally well but provide different future predictions. To mitigate non-uniqueness and numerical instability, additional information is typically incorporated into the solution procedure. Reservoir properties often have large-scale, spatially correlated features that are amenable to sparse (or compact) representations in compressive bases such as the Fourier or wavelet domains. In this paper, we exploit the inherently sparse representation of correlated reservoir properties to formulate an effective history matching algorithm using sparsity regularization. We show that by minimizing a data-misfit cost function augmented with an additive or multiplicative sparsity-promoting regularization term in a sparse domain, the reconstruction results are significantly improved. The effectiveness of the proposed implementation is related to adaptive identification of the sparsity pattern through iterative reweighting of the sparse basis components, which we illustrate through several history matching examples in oil reservoirs.
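The iterative-reweighting idea can be illustrated on a small linear toy problem: an underdetermined system whose solution is sparse, solved by reweighted ridge steps that approximate an l1 penalty. The operator, sparsity pattern and weighting scheme below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy underdetermined problem: d = A x_true + noise, with x_true sparse
# (a stand-in for production data vs. coefficients in a compressive basis).
n, m = 60, 100
A = rng.standard_normal((n, m))
x_true = np.zeros(m)
x_true[[3, 17, 42, 77]] = [2.0, -1.5, 1.0, 3.0]
d = A @ x_true + 0.01 * rng.standard_normal(n)

def irls_l1(A, d, lam=0.1, iters=50, eps=1e-6):
    """Approximate min ||Ax - d||^2 + lam*||x||_1 by iteratively reweighted
    ridge solves: |x_i| is majorized by x_i^2 / (|x_i^(k)| + eps)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        w = 1.0 / (np.abs(x) + eps)   # large weight -> coefficient pushed to 0
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ d)
    return x

x_hat = irls_l1(A, d)
support = sorted(np.argsort(np.abs(x_hat))[-4:])
print(support)                        # recovered sparsity pattern
```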
Grid-based Inversion of Pressure Transient Test Data
Authors R.J.S. Booth, K.L. Morton, M. Onur and F.J. Kuchuk
In any subsurface exploration and development, indirect measurements, such as detailed geological descriptions and outcrop data, and direct measurements, such as seismic, cores, logs, and fluid samples, provide useful information for static and dynamic reservoir description, simulation, and forecasting. However, core and log data delineate rock properties only in the vicinity of the wellbore, while geological and seismic data usually are not directly related to formation permeability. Pressure transient tests provide dynamic information about reservoir pressure which can be used to estimate rock property fields, fluid distributions, fluid samples for well productivity, and dynamic reservoir description. Such tests are therefore very useful and hence commonly used in the industry, both in exploration environments and for the general purposes of production and reservoir engineering. Conventional pressure transient tests have traditionally been used to estimate spatial distributions of the formation permeability based on automatic history matching of the pressure data measured at the wells to an analytical or a simple numerical model selected to best represent the flow regimes observed from diagnostic plots. With the need for improved spatial resolution of the reservoir parameters, “pixel (grid)” based approaches have been developed. In these approaches, the reservoir properties are discretized over an (often coarse) regular grid, and the prior modeling applies dense geological information. Our approach is similar; however, we discretize the reservoir parameters using the same grid as that used for the numerical simulation of the well test. This grid is non-uniform, with the greatest spatial resolution near the wells, where we may expect the well test to provide more information.
In addition, since dense geological information may be unavailable in the early characterization of the reservoir, we propose that the prior be modeled by a local random field determined by the variances and correlation lengths of the reservoir parameters. To manage the large number of variables that this approach leads to, we employ an adjoint scheme to determine the gradient of the objective function. Hence, our approach enables one to find the most likely set of reservoir parameters, along with our confidence in this solution and a means of producing further likely solutions. Crucially, we separate the influence of the prior from the influence of the pressure transient test measurements, which greatly improves the performance of the inversion procedure and also ensures that the information provided by the prior is preserved.
Error Estimate for the Ensemble Kalman Filter Update Step
Authors A. Kovalenko, T. Mannseth and G. Nævdal
The ensemble Kalman filter is a data assimilation technique based on a low-rank approximation of the covariance matrix from a moderately sized ensemble. Sampling errors lead to artificial effects, such as spurious correlations, which deteriorate the estimates and the forecasts of the system states. Using random matrix theory, we derive the distribution of the norm of the ensemble Kalman filter sampling error, assuming noiseless data. The distribution depends explicitly on the ensemble size, the model dimension and the observation locations. We demonstrate the use of the distribution on several examples.
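A quick Monte-Carlo check of how the sampling-error norm depends on ensemble size (the paper derives the distribution analytically; here the true covariance is the identity and all dimensions are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def sampling_error_norm(dim, n_ens, n_rep=200):
    """Monte-Carlo samples of ||C_hat - C|| for an identity true covariance."""
    errs = np.empty(n_rep)
    for r in range(n_rep):
        X = rng.standard_normal((dim, n_ens))     # ensemble drawn from N(0, I)
        Xc = X - X.mean(axis=1, keepdims=True)    # ensemble anomalies
        C_hat = Xc @ Xc.T / (n_ens - 1)           # low-rank sample covariance
        errs[r] = np.linalg.norm(C_hat - np.eye(dim), 2)  # spectral-norm error
    return errs

e50 = sampling_error_norm(dim=20, n_ens=50)
e500 = sampling_error_norm(dim=20, n_ens=500)
print(e50.mean(), e500.mean())   # error shrinks roughly like 1/sqrt(n_ens)
```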
Ensemble Kalman Filter for Nonlinear Likelihoods
Authors I. Myrseth, J. Sætrom and H. Omre
This paper defines the update equation (sometimes called the analysis step) in the ensemble Kalman filter in a conceptually different way. For linear likelihood models the new approach coincides with the traditional version; when the likelihood is nonlinear, the new approach is still applicable. Another feature of the new approach is that it allows for Monte Carlo sampling of the likelihood, which can be used to improve predictions. A synthetic reservoir example with a nonlinear, black-box, seismic likelihood function shows that the approach can be applied to reservoir models.
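For reference, the traditional (stochastic, perturbed-observation) analysis step that the paper reinterprets can be sketched as follows. Applying the observation operator column-wise already lets the update run with a nonlinear black-box operator, although the gain itself remains a linear-Gaussian construction; the toy numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(E, y, h, R):
    """Stochastic EnKF analysis step with perturbed observations.
    E: (n_state, n_ens) forecast ensemble; h: observation operator applied
    column-wise (may be a nonlinear black box); R: observation error covariance."""
    n_ens = E.shape[1]
    HE = np.column_stack([h(E[:, j]) for j in range(n_ens)])
    A = E - E.mean(axis=1, keepdims=True)          # state anomalies
    HA = HE - HE.mean(axis=1, keepdims=True)       # predicted-data anomalies
    # Kalman gain from ensemble covariances: K = C_xy (C_yy + R)^-1
    C_xy = A @ HA.T / (n_ens - 1)
    C_yy = HA @ HA.T / (n_ens - 1)
    K = C_xy @ np.linalg.inv(C_yy + R)
    # perturb observations so the analysis ensemble keeps the right spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return E + K @ (Y - HE)

# Toy example: estimate a 2-vector from a noisy observation of its first entry.
E = rng.standard_normal((2, 200)) + np.array([[5.0], [1.0]])
Ea = enkf_update(E, y=np.array([3.0]), h=lambda x: x[:1], R=np.eye(1) * 0.1)
print(Ea.mean(axis=1))
```

With prior mean 5 and unit variance on the observed component, the exact posterior mean of that component is (30 + 5)/11, about 3.18, which the ensemble mean should approach.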
Improved Initial Ensemble Generation Coupled with Ensemble Square Root Filters and Boosting to Estimate Uncertainty
Authors L. Dovera and E. Della Rossa
The accuracy of ensemble Kalman filter (EnKF) methods depends on the sample size compared to the dimension of the parameter space. In real applications, sampling error often results in spurious correlations which produce a bias in the mean and a strong underestimation of the uncertainty. The ensemble square root filter (ESRF) offers an advantage in uncertainty estimation with respect to the traditional EnKF, and covariance inflation and localization are common solutions to these problems. In this work we propose a method that reduces the bias of ensemble techniques by means of a convenient generation of the initial ensemble. This generation is based on a Stationary Orthogonal-Base Representation (SOBR), obtained via a singular value decomposition of a stationary covariance matrix estimated from the ensemble. The technique is tested on a 2D, slightly compressible, single-phase model and compared with the ESRF. The comparison is based on a reference solution obtained with a very large ensemble (one million members). The example gives evidence that the SOBR reduces the effect of sampling error in the mean, but covariance inflation remains essential to avoid ensemble collapse.
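The ensemble-generation step can be sketched in one dimension: prescribe a stationary covariance (in the paper it is estimated from the ensemble), take its SVD, and draw members in the resulting orthogonal base. Grid size, correlation length and ensemble size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stationary exponential covariance on a 1-D grid.
n = 64
x = np.arange(n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)

# Orthogonal base via SVD of the covariance; samples are U * sqrt(S) * z.
U, S, _ = np.linalg.svd(C)

def draw_ensemble(n_ens):
    Z = rng.standard_normal((n, n_ens))
    return U @ (np.sqrt(S)[:, None] * Z)

X = draw_ensemble(2000)
C_emp = X @ X.T / X.shape[1]
print(np.abs(C_emp - C).max())   # empirical covariance approaches C
```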
Rapid Construction of Ensembles of High-resolution Reservoir Models Constrained to Production Data
Authors C. Scheidt, J. Caers, Y. Chen and L. Durlofsky
Distance-based stochastic modeling techniques have recently emerged in the context of ensemble-level reservoir modeling, in particular for history matching, model selection and uncertainty quantification. Starting with an initial ensemble of model realizations, a distance between any two realizations is defined. This distance is introduced to incorporate specific modeling purposes into geological modeling, thereby potentially enhancing the efficiency of modeling techniques such as history matching. If one wants to create new models that are constrained to dynamic data (i.e., history matching), the calculation of distances requires a flow simulation for each model in the initial ensemble. This can be very time consuming, especially for high-resolution reservoir models. In this paper, we present a multi-scale framework for ensemble-level reservoir modeling. We employ a distance-based procedure, with emphasis on the rapid construction of multiple models with improved dynamic data conditioning. We propose to construct new fine-scale models constrained to dynamic data, while performing flow simulations on the associated coarse-scale models obtained with appropriate upscaling. The availability of multiple high-resolution models is crucial for proper uncertainty quantification, compared to retaining only a few models that perfectly match the data but do not necessarily capture the uncertainty in the desired predictions. An error modeling procedure is also introduced in the distance calculations to account for potential errors in upscaling. Based on a few fine-scale flow simulations, the upscaling error is estimated for each model using a clustering technique. We demonstrate the efficacy of the method on an example with significant upscaling errors.
Results show that the error modeling procedure can reproduce the fine-scale flow behavior from coarse-scale simulations with sufficient accuracy (in terms of uncertainty predictions). As a consequence, an ensemble of high-resolution models, which are constrained to dynamic data, can be obtained, but with a minimum of flow simulations at the fine scale.
New Approaches for Generally Constrained Production Optimization with an Emphasis on Derivative-free Techniques
Authors D. Echeverria Ciaurri, O.J. Isebor and L.J. Durlofsky
Production optimization involves the determination of optimum well controls to maximize an objective function such as cumulative oil production or net present value. In practice, the satisfaction of general physical and economic constraints is also required, which typically results in optimization problems that are nonlinearly constrained. Examples of nonlinear constraints include maximum water cut and minimum oil rate for wells operating under bottomhole pressure control. In this paper we present and apply optimization strategies that are able to incorporate a large variety of general constraints. We have identified a promising approach in the filter method. This recently introduced methodology borrows concepts from multi-objective optimization and avoids many of the issues that arise when objective function and constraints are lumped together by a penalty function. In terms of the underlying optimization procedure, our focus here is on techniques that are not simulator invasive; i.e., they view the flow model as a black box. Along these lines, we study derivative-free methodologies such as generalized pattern search and Hooke-Jeeves direct search, in combination with nonlinear constraint handling techniques. The performance of the algorithms is demonstrated on two challenging generally-constrained production optimization problems where up to 25 wells are considered.
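Of the derivative-free searches mentioned, Hooke-Jeeves is simple enough to sketch in a few lines. This minimal version omits the constraint-handling filter, and the quadratic test function is only a stand-in for negative NPV:

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
    """Minimal Hooke-Jeeves direct search: exploratory coordinate moves
    followed by a pattern move; no derivatives of f are ever needed."""
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        # exploratory phase: probe +/- step along each coordinate
        xe, fe = x.copy(), fx
        for i in range(len(x)):
            for d in (step, -step):
                trial = xe.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fe:
                    xe, fe = trial, ft
                    break
        if fe < fx:
            x_pattern = 2 * xe - x        # pattern move through the new point
            x, fx = xe, fe
            f_pattern = f(x_pattern)
            if f_pattern < fx:
                x, fx = x_pattern, f_pattern
        else:
            step *= shrink                # no improvement: refine the mesh
            if step < tol:
                break
    return x, fx

# Minimize a shifted quadratic (optimum at (1, -2)).
x_opt, f_opt = hooke_jeeves(lambda x: (x[0] - 1)**2 + 4 * (x[1] + 2)**2,
                            [0.0, 0.0])
print(np.round(x_opt, 3))
```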
Using Evolution Strategy with Meta-models for Well Placement Optimization
Authors Z. Bouzarkouna, D.Y. Ding and A. Auger
Optimal placement of non-conventional wells can considerably increase hydrocarbon recovery. Given the high drilling cost and the potential improvement in well productivity, well placement is an important decision in field development. Considering complex reservoir geology and strong reservoir heterogeneity, stochastic optimization methods are the most suitable approaches for optimum well placement. This paper proposes an optimization methodology to determine optimal well locations and trajectories based upon the Covariance Matrix Adaptation - Evolution Strategy (CMA-ES), a variant of Evolution Strategies recognized as one of the most powerful derivative-free optimizers for continuous optimization. To improve the optimization procedure, two new techniques are investigated: (1) adaptive penalization with rejection, developed to handle well placement constraints; (2) a meta-model, based on locally weighted regression, incorporated into CMA-ES using an approximate ranking procedure. We can thereby reduce the number of reservoir simulations, which are computationally expensive. Several examples are presented. Our new approach is compared with a genetic algorithm incorporating the Genocop III technique. It is shown that our approach outperforms the genetic algorithm: it leads in general to both a higher NPV and a significant reduction in the number of reservoir simulations.
A Derivative-free Optimization Method for the Reservoir Characterization Inverse Problem
Authors H. Langouët, F. Delbos, D. Sinoquet and S. Da Veiga
The reservoir characterization inverse problem aims at building reservoir models consistent with available production and seismic data for better forecasting of the production of a field. These observed data (pressures, oil/water/gas rates at the wells, and 4D seismic data) are compared with simulated data to determine unknown petrophysical properties of the reservoir. The underlying optimization problem is usually formulated as the minimization of a least-squares objective function. In practice, this problem is often solved by nonlinear gradient-based optimization methods with derivatives approximated by finite differences. In applications involving large 4D seismic datasets, using the classical Gauss-Newton algorithm is often infeasible because of the storage of the huge Jacobian matrix. Consequently, this optimization problem requires dedicated techniques: derivatives are not available, the associated forward problems are computationally expensive, and constraints may be introduced to handle a priori information. We therefore propose a constrained derivative-free optimization method based on a trust-region approach coupled with local quadratic interpolating models of the cost function and of the nonlinear constraints. Results obtained with this method on a synthetic reservoir application, with joint inversion of production data and 4D seismic data, are presented.
Nonlinear Output Constraints Handling for Production Optimization of Oil Reservoirs
Authors E. Suwartadi, S. Krogstad and B. Foss
This paper presents a gradient-based optimization method to handle problems with nonlinear output constraints in large-scale production optimization of oil reservoirs. The method is based on a barrier function approach, where the output constraints are added as a barrier term to the objective function. The gradient is obtained using the adjoint method. Two case examples are discussed. The results show that the proposed optimization method preserves the efficiency of adjoint methods.
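The barrier construction can be illustrated on a one-dimensional toy problem, with hand-coded gradients and plain gradient descent kept strictly feasible. In the paper the gradient of the barrier-augmented objective comes from an adjoint simulation and the control is high-dimensional; everything below is an illustrative assumption:

```python
def barrier_opt(J_grad, g, g_grad, u0, mu0=1.0, n_outer=8, lr=0.01, n_inner=2000):
    """Log-barrier treatment of an output constraint g(u) <= 0:
    minimize -J(u) - mu * log(-g(u)) while driving mu -> 0.
    Gradient of the barrier term: -mu * g'(u) / g(u)."""
    u, mu = float(u0), mu0
    for _ in range(n_outer):
        for _ in range(n_inner):
            grad = -J_grad(u) - mu * g_grad(u) / g(u)
            u_new = u - lr * grad
            if g(u_new) < 0:          # accept only strictly feasible steps
                u = u_new
        mu *= 0.2                      # tighten the barrier
    return u

# Toy: maximize J(u) = -(u - 3)^2 subject to u <= 2; the optimum is u = 2.
u_star = barrier_opt(J_grad=lambda u: -2 * (u - 3),
                     g=lambda u: u - 2.0,
                     g_grad=lambda u: 1.0,
                     u0=0.0)
print(round(u_star, 3))
```

The iterate approaches the active constraint from the feasible side as the barrier parameter shrinks.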
Optimal Well Placement
Authors C.L. Farmer, J.M. Fowkes and N.I.M. Gould
One is often faced with the problem of finding the optimal location and trajectory for an oil well. Increasingly this includes the additional complication of optimising the design of a multilateral well. We present a new approach based on the theory of expensive function optimisation. The key idea is to replace the underlying expensive function (i.e. the simulator response) by a cheap approximation (i.e. an emulator). This enables one to apply existing optimisation techniques to the emulator. Our approach uses a radial basis function interpolant to the simulator response as the emulator. Note that the case of a Gaussian radial basis function is equivalent to the geostatistical method of Kriging, and radial basis functions can be interpreted as a single-layer neural network. We use a stochastic model of the simulator response to adaptively refine the emulator, and optimise it using a branch-and-bound global optimisation algorithm. To illustrate our approach we apply it numerically to finding the optimal location and trajectory of a multilateral well in a reservoir simulation model using the industry-standard ECLIPSE simulator. We compare our results to existing approaches and show that our technique is comparable, if not superior, in performance.
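A minimal Gaussian RBF emulator can be sketched as follows (interpolation plus a tiny nugget for numerical stability). The test function stands in for the simulator response, and the shape parameter and sample sizes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf_fit(X, y, eps=1.0, nugget=1e-10):
    """Fit a Gaussian RBF interpolant; the tiny nugget guards conditioning."""
    r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.linalg.solve(np.exp(-eps * r2) + nugget * np.eye(len(y)), y)

def rbf_eval(X, w, Xq, eps=1.0):
    r2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * r2) @ w

# "Simulator response": a smooth 2-D stand-in for NPV as a function of (x, y).
f = lambda X: np.sin(X[:, 0]) * np.cos(X[:, 1])
X = rng.uniform(0, 3, (60, 2))            # sampled well positions
w = rbf_fit(X, f(X))
Xq = rng.uniform(0.5, 2.5, (200, 2))      # unseen query points
err = np.abs(rbf_eval(X, w, Xq) - f(Xq)).max()
print(err)                                 # emulator error away from samples
```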
Improving Perturbation Designs for Gradient-based Optimization Methods in History Matching
By D.Y. Ding
Assisted history matching is now widely used to constrain reservoir models by integrating well production data and/or 4D seismic data. Among the optimization methods for performing history matching, gradient-based approaches are often applied. However, history matching is a complex inverse problem, and the computational effort (in terms of the number of reservoir simulations, which are very expensive in CPU time) increases with the number of matching parameters. For a problem with N parameters, we generally need N perturbations (or N+1 reservoir simulations) to calculate all the gradients needed to find an optimized solution direction. History matching large fields with a large number of parameters is therefore always a big challenge. In this paper, we present a new technique based on approximate derivative computations, which can considerably reduce the number of simulations for gradient-based optimization. In this new approach, the objective function is first split into local components, and the dependence of each local component on the principal parameters is analyzed to minimize the number of influential parameters. Interaction between parameters and local components is allowed in this approach. We then define a perturbation design based on the minimisation of errors on a test function for the derivative calculation and on the technique of graph colouring. The proposed perturbation design can compute the derivatives of the objective function with only a few simulations. This method is particularly interesting for regional and well-level history matching, and it is also suitable for matching geostatistical models by introducing numerous local parameters. This new technique makes history matching with large numbers of parameters (large fields) tractable. Some numerical examples, including a large field with 400 parameters, are presented to illustrate the efficiency of the new method.
The commonly used gradient-based optimization method is impractical and even infeasible for such large fields, which would require 400 perturbations to compute the derivatives. Using the new technique proposed in this paper, however, only 5 perturbations are needed to obtain all the required gradients. The CPU time on this large field can therefore be reduced by a factor of 80, making the history match feasible.
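The flavour of the perturbation design can be sketched with a toy objective split into local components, a greedy colouring of the parameter-interference graph, and one perturbed run per colour. The component structure and the colouring heuristic are illustrative; the paper's design also controls derivative error on a test function:

```python
import numpy as np

# Local objective components and the parameters each depends on
# (e.g. mismatch terms per region of the field).
comps = [lambda x: x[0] * x[1],
         lambda x: (x[1] + x[2]) ** 2,
         lambda x: (x[3] - x[4]) ** 2,
         lambda x: x[4] * x[5]]
deps = [{0, 1}, {1, 2}, {3, 4}, {4, 5}]
n_par = 6

# Parameters sharing a component must get different colours; parameters that
# never interact can share one simulation, so runs = number of colours.
adj = {i: set() for i in range(n_par)}
for dset in deps:
    for i in dset:
        adj[i] |= dset - {i}
color = {}
for i in range(n_par):
    used = {color[j] for j in adj[i] if j in color}
    color[i] = min(c for c in range(n_par) if c not in used)
n_colors = max(color.values()) + 1

x0 = np.array([1.0, 2.0, 0.5, 1.5, 1.0, 2.0])
h = 1e-6
base = [f(x0) for f in comps]
runs = []
for c in range(n_colors):                 # one perturbed "simulation" per colour
    dx = np.array([h if color[i] == c else 0.0 for i in range(n_par)])
    runs.append([f(x0 + dx) for f in comps])

# d(comp k)/d(x_i): in run colour[i], parameter i is the only perturbed
# parameter that component k depends on, so a plain forward difference applies.
grad = np.zeros(n_par)
for k, dset in enumerate(deps):
    for i in dset:
        grad[i] += (runs[color[i]][k] - base[k]) / h

print(n_colors, np.round(grad, 4))
```

Here two runs replace the six that one-perturbation-per-parameter finite differences would need, mirroring the 400-to-5 reduction reported above.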
Adjoint Methods for Multicomponent Flow Simulation
Authors D. Kourounis, D. Voskov and K. Aziz
The focus of the present work is on efficient computation of gradients using adjoint methods. In contrast to finite-difference methods, where the number of forward simulations required to estimate the desired derivatives grows linearly with the number of control variables, adjoint techniques provide all the required derivatives of the objective function in a fraction of the computational time of one forward simulation run. However, from an implementation viewpoint they are significantly more involved than, for example, finite-difference methods, because the computation of gradients through adjoints requires a deep understanding of the simulation code. While the discrete adjoint formulation is most commonly employed in the reservoir simulation community, little is known about the continuous adjoint formulation, which is usually preferred in aerodynamics. Both continuous and discrete adjoint formulations are discussed in this work and implemented for a compositional reservoir simulator. They are applied to several optimization problems of practical interest and compared with respect to their efficiency and the quality of the gradients they provide. The computed gradients are then forwarded to standard optimization software packages to determine optimal well settings maximizing a specified objective function, such as the net present value.
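The economics of the adjoint approach show up already in a linear toy model J(u) = c^T x with A(u) x = b: one forward and one adjoint solve yield the whole gradient, while a finite-difference check costs one extra solve per control variable. The random matrices below are illustrative stand-ins for a simulator Jacobian:

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_ctrl = 8, 5
A0 = np.eye(n) * 5 + rng.standard_normal((n, n)) * 0.1
D = rng.standard_normal((n_ctrl, n, n)) * 0.1     # dA/du_i for each control
b = rng.standard_normal(n)
c = rng.standard_normal(n)

def J(u):
    A = A0 + np.tensordot(u, D, axes=1)
    return c @ np.linalg.solve(A, b)

u = rng.standard_normal(n_ctrl)
A = A0 + np.tensordot(u, D, axes=1)
x = np.linalg.solve(A, b)           # one forward solve
lam = np.linalg.solve(A.T, c)       # one adjoint solve
# dJ/du_i = -lam^T (dA/du_i) x  (from J = c^T A^{-1} b)
grad_adj = np.array([-lam @ (D[i] @ x) for i in range(n_ctrl)])

# Finite differences: n_ctrl additional forward solves.
h = 1e-6
grad_fd = np.array([(J(u + h * np.eye(n_ctrl)[i]) - J(u)) / h
                    for i in range(n_ctrl)])
print(np.abs(grad_adj - grad_fd).max())   # the two gradients agree
```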
Optimizing Well Placement Planning in the Presence of Subsurface Uncertainty and Operational Risk Tolerance
Authors P.G. Tilke, R. Banerjee, V.B. Halabe, B. Balci and R.K.M. Thambynayagam
This paper presents an automated workflow to accelerate the well placement planning process in the presence of subsurface uncertainty and operational risk. The system allows the user to screen and rank development options in minutes. This automated field development planning system is an optimization application integrated with a larger seismic-to-simulation workflow. A key piece of technology in this system is a high-speed semi-analytical reservoir simulator, which enables an optimal strategy to be computed very rapidly. The system also embeds key technologies for optimization in the presence of uncertainty and risk, which leverage an advanced uncertainty framework. The workflow starts with a reservoir model, along with existing wells and other operational constraints. Oil or gas production is computed using the high-speed reservoir simulator. Proposed well trajectories honor operational constraints such as facility processing, water injection capacity, borehole dogleg severity, anti-collision with existing wells, and hazard avoidance on the surface and in the reservoir. The optimal strategy proposes well surface locations, trajectories, and completion locations, and is calculated by optimizing a value measure, e.g., net present value (NPV) or production. This new methodology has many applications in the field development planning context. We are able to rapidly screen multiple field development planning scenarios and produce an optimal new/infill drilling strategy, with primary production or waterflooding, consisting of both newly proposed and existing wells. The final result includes performance predictions based on an optimized field development plan (FDP), risk, and subsurface uncertainties. The most promising scenarios can, if necessary, be used subsequently for detailed numerical simulation in order to validate the results.
Steam-assisted Gravity Drainage Optimization for Extra Heavy Oil
Authors J. Gossuin, W.J. Bailey, B. Couet and P. Naccache
Exploitation of extra heavy oil assets involves complex and costly production processes, one of which is steam-assisted gravity drainage (SAGD). This method requires the generation of substantial quantities of steam, which is injected into a horizontal injection well parallel to and above a paired horizontal producer. We focus on maximization of the net present value (NPV) of a SAGD production process by combining optimization and simulation. The use of a neural network algorithm improves on the numerous limitations of manual sensitivities while needing a limited number of iterations. We demonstrate this optimization process using an example reservoir containing an extra heavy Canadian crude. A 2D proxy with rapid solution times is used to address the practical issues of the very long run-times associated with thermal simulation. This proxy is used to identify only those control parameters that impact the objective function (NPV), thereby reducing the solution search space, and also to suggest better starting points for the optimizer. Both of these facets may accelerate finding the optimum for full 3D optimizations. Tangible benefits forthcoming from this investigation are new operational strategies for maximizing NPV, recognition of the impact and optimal duration of preheating, and efficient comparisons of different well patterns.
An Iterative Scheme to Construct Robust Proxy Models
Authors A. Castellini, H. Gross, Y. Zhou, J. He and W. Chen
Quantifying and understanding uncertainty in reservoir production forecasts is the key to robust reservoir management decisions. Monte-Carlo simulation is generally impractical because of the large number of subsurface realizations and the computationally intensive flow simulations. Response surface models have been introduced to improve the efficiency of the traditional asset development workflows: uncertainty assessment, history matching and optimization of the development plan. Linear regression techniques are the most popular methods to create analytical response surfaces. However, the resulting proxies can be poor predictors of reservoir performance when strong non-linear effects are significant. We propose an iterative sampling strategy that is able to capture the non-linear behavior of the response and efficiently refine the proxy model. Thin-plate spline non-linear regression techniques have been selected to construct the proxy models as they present a number of attractive properties. At each iteration, new combinations of parameters are rapidly evaluated with a score function and the best ones are selected for flow simulation. The benefit of the iterative scheme is demonstrated on a new field development and on a mature asset with historical data. For a fixed computational cost, iterative response surfaces consistently provide better accuracy than traditional designs.
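A thin-plate spline proxy in two parameters can be sketched directly from its definition: a radial part r^2 log r plus an affine term, with the usual polynomial side conditions. The test response and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def tps_fit(X, y, ridge=1e-8):
    """2-D thin-plate spline: f(x) = sum w_i phi(|x - x_i|) + a0 + a . x,
    phi(r) = r^2 log r, with orthogonality side conditions on w."""
    n = len(X)
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    K = np.where(r > 0, r**2 * np.log(np.where(r > 0, r, 1.0)), 0.0)
    P = np.hstack([np.ones((n, 1)), X])
    M = np.zeros((n + 3, n + 3))
    M[:n, :n] = K + ridge * np.eye(n)
    M[:n, n:] = P
    M[n:, :n] = P.T
    sol = np.linalg.solve(M, np.concatenate([y, np.zeros(3)]))
    return sol[:n], sol[n:]

def tps_eval(X, w, a, Xq):
    r = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
    K = np.where(r > 0, r**2 * np.log(np.where(r > 0, r, 1.0)), 0.0)
    return K @ w + a[0] + Xq @ a[1:]

# Proxy for a non-linear simulator response over two uncertain parameters.
f = lambda X: X[:, 0]**2 + np.sin(2 * X[:, 1])
X = rng.uniform(0, 2, (150, 2))
w, a = tps_fit(X, f(X))
Xq = rng.uniform(0.3, 1.7, (300, 2))
err = np.abs(tps_eval(X, w, a, Xq) - f(Xq)).max()
print(err)                         # proxy error on unseen parameter combinations
```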
A Comparison of Screening Methods for High-dimensional Input Problems
Authors S. Touzani, D. Busby and A. Antoniadis
History matching studies can involve adjusting a large number of uncertain reservoir parameters. Sensitivity analysis can help reservoir engineers focus their effort on the most influential parameters, thus appreciably reducing the history matching time. However, due to nonlinear and interaction effects, and depending on the method used, this exercise can be misleading or, in some cases, very time consuming. In this work, several statistical nonparametric methods, from the most recent to the most classical, have been implemented and tested on some typical oil reservoir applications. Different tests are made using problems of different dimensions. Most of the nonparametric methods investigated are based on response surfaces, such as kriging, polynomial or smoothing-spline responses with different types of penalization (LASSO, COSSO, ...), and some others come from the recent statistical literature. The response surface methods were tested using space-filling experimental designs such as maximin Latin hypercubes. In the comparisons, the computational CPU time for each method is also reported because, even if these times are generally negligible with respect to large simulation times, they can be considerable for high-dimensional input problems. To assess the quality of the methods, several criteria have been used, such as the prediction accuracy of the response surfaces, but also the number of times a method fails to detect an influential parameter (type II errors) or indicates a non-influential parameter as influential (type I errors). Numerical tests were made on different outputs (objective functions) of realistic synthetic reservoir models.
Parametrized Model Order Reduction Applied to Reservoir Simulation
By E. Gildin
The development of efficient numerical reservoir simulation is an essential step in devising advanced production optimization strategies and uncertainty quantification methods applied to porous media flow. A highly accurate and detailed description of the underlying models leads to a set of partial differential equations which, after discretization, induce dynamical systems of very large dimension. In order to overcome the computational costs associated with these large-scale models, several forms of model-order reduction have been proposed in the literature. In porous media flow, two different approaches are used: (1) a "coarsening" of the discretization grid, in a process called upscaling; and (2) a direct reduction in the number of state variables (i.e., pressure and saturation), in a process called approximation of dynamical systems. Recently, the idea of combining both approaches has been proposed using the control-relevant upscaling (CRU) methodology. In this paper, we investigate the use of so-called parametric model order reduction (PMOR) techniques applied to porous media flow simulation in a system-theoretical framework. PMOR entails the generation of reduced-order models which retain the functional dependency on specific parameters of the original large-scale system. In particular, this work focuses on the application of PMOR to the case of single-phase flow, in which the dependencies on porous media properties, such as permeability, and on discretization parameters, such as grid size, are investigated. The main ideas behind model order reduction are reviewed, including the general framework of interpolatory projection techniques, and applications to single-phase flow test cases are developed.
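The state-reduction branch, approach (2) above, can be sketched with a POD/Galerkin reduction of a 1-D single-phase pressure equation (a simpler cousin of the interpolatory projection methods the paper reviews). The grid, time step and number of retained modes are arbitrary illustrative choices:

```python
import numpy as np

# 1-D single-phase pressure diffusion, semi-discretized: dp/dt = A p + b q.
n = 100
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * (n + 1)**2 * 1e-3
b = np.zeros(n)
b[0] = 1.0                             # source at one end (a "well")

def simulate(A, b, dt=0.01, steps=400):
    """Implicit-Euler time stepping; returns the snapshot matrix."""
    p = np.zeros(A.shape[0])
    I = np.eye(A.shape[0])
    snaps = []
    for _ in range(steps):
        p = np.linalg.solve(I - dt * A, p + dt * b)
        snaps.append(p.copy())
    return np.array(snaps).T

# Snapshots of the full model -> POD basis via SVD -> Galerkin projection.
S = simulate(A, b)
U, s, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :5]                           # keep 5 POD modes
Ar, br = V.T @ A @ V, V.T @ b          # reduced 5x5 system

Sr = V @ simulate(Ar, br)              # lift reduced states back to full size
print(np.abs(Sr - S).max())            # reduction error
```

The 5-state reduced model reproduces the 100-state trajectory to a small fraction of its amplitude.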
Application of Inflow Control Devices to Heterogeneous Reservoirs
Authors V.M. Birchenko, A.I. Bejan, A.V. Usnich and D.R. Davies
The rate of inflow to a long well can vary along its completion length, e.g. due to frictional pressure losses or reservoir heterogeneity. These variations often negatively affect the oil sweep efficiency and the ultimate oil recovery. Inflow Control Devices (ICDs) represent a mature well completion technology which provides uniformity of the inflow profile by restricting high specific inflow segments while increasing inflow from low productivity segments. This paper introduces a mathematical model for effective reduction of the inflow imbalance caused by reservoir heterogeneity. The model addresses one of the key aspects of the ICD technology application - the trade-off between well productivity and inflow equalisation. Our analytical model relates the specific inflow rate and specific productivity index to well characteristics taking into account the intrinsically stochastic nature of reservoir properties along the well completion interval. A general solution to our model is available in a non-closed, analytical form. We have derived a closed form solution for some particular cases. The practical utility of the model is illustrated by considering a case study with prolific and medium productivity reservoirs. Finally, we identify limitations in using our model.
An Expanded Well Model for Accurate Simulation of Reservoir-well Interactions
Authors M. Karimi-Fard and L.J. Durlofsky
We present a new framework for modeling wells in reservoir simulation. This approach is based on an “expanded well model” in which the well region is expanded geometrically to include portions of the reservoir. The well region is then represented as an “equivalent” multi-segmented well defined by a list of connections for all completed segments. This generalizes and simplifies well-reservoir flow modeling by shifting the interface between the well region and the reservoir region. As is the case with standard well models, the well region is linked to the reservoir region through use of well indices. In the case of the expanded well model, these well indices, along with the connections between segments in the equivalent multi-segmented well, are computed by upscaling an underlying fine-scale description of the reservoir and well. The method is applied to model a hydraulically-fractured well and production in a tight-gas reservoir. In both cases, direct application of standard approaches is shown to lead to inaccurate coarse-scale predictions. Use of the expanded well model, by contrast, provides high degrees of accuracy in both cases.
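Standard well models of the kind this paper generalizes couple well and reservoir through a well index; for a vertical well in a Cartesian cell this is usually Peaceman's formula, sketched below (consistent SI units assumed; in the expanded well model these indices would instead come from upscaling the fine-scale description):

```python
import numpy as np

def peaceman_wi(kx, ky, dx, dy, h, rw, skin=0.0):
    """Peaceman well index for a vertical well in an anisotropic Cartesian
    cell: WI = 2*pi*sqrt(kx*ky)*h / (ln(r_eq/rw) + skin), with the standard
    anisotropic equivalent radius r_eq."""
    a = np.sqrt(ky / kx)
    r_eq = 0.28 * np.sqrt(a * dx**2 + dy**2 / a) / (a**0.5 + a**-0.5)
    return 2 * np.pi * np.sqrt(kx * ky) * h / (np.log(r_eq / rw) + skin)

# Isotropic 50 m x 50 m cell, 10 m thick, 0.1 m wellbore radius, k = 100 mD.
wi = peaceman_wi(kx=1e-13, ky=1e-13, dx=50.0, dy=50.0, h=10.0, rw=0.1)
print(wi)
```

In the isotropic square-cell case the equivalent radius reduces to the familiar 0.198 * dx, here 9.9 m.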
Modeling of the Heel–toe Effect in a Horizontal Well with Inflow Control Devices
Authors V.M. Birchenko, K.M. Muradov and D.R. Davies
The specific inflow rate of a horizontal well normally varies along the well's completion length due to either frictional pressure losses in the well (the heel-toe effect) or variation in the well's specific productivity index and pressure support. Downhole Inflow Control Devices (ICDs), capable of exerting an additional, rate-dependent restriction along the completion, can reduce such inflow imbalances. This paper discusses a new method to quantify the pressure losses and inflow distribution along a horizontal well equipped with ICDs producing oil from a homogeneous formation. A new equation, in the form of a second-order, non-linear ordinary differential equation, is derived to describe such a well. Our analysis of the equation provides both a precise numerical solution and an asymptotic solution. Practical engineering requires a compromise between the severity of the heel-toe effect and the reduction in overall well performance. Hence, we propose a dimensionless criterion for estimating the optimal ICD type. Its application is illustrated by a real oil-field-based example. Our new model is one of only a few attempts to describe the performance and pressure distribution of an advanced (ICD-equipped) well. It brings physical understanding of the well's performance instead of treating it as a black-box simulator. The asymptotic analytical solution gives a set of simple equations for the optimal ICD design of horizontal wells with strong heel-toe effects, which can be implemented as a simple design sheet for the well completion engineer. The mathematical approach can be further extended to wells with advanced completions, such as those equipped with groups of inflow devices or Interval Control Valves.
-
Decoupled Overlapping Grids for Numerical Modelling of Transient Wellbore Pressure
Authors N. Ogbonna and D.B. Duncan
Accurate modelling of transient wellbore pressure is important in well test analysis, where measured pressure responses are interpreted by comparing them to model results. Traditionally, analytical models are used, but these models are limited to homogeneous reservoirs of regular shapes. Alternatively, the wellbore pressure can be computed from a numerical simulation, allowing more complex reservoir geometry and permeability heterogeneity to be included. The conventional point-source or line-source well approximation implemented in most commercial reservoir simulators does not provide good information about transient behaviour at the well. Another approach is to implement the well as an internal boundary with local refinement in the well vicinity. However, this would significantly increase the computational cost of the iterative parameter-fitting process in well testing, especially for field-scale models with a large number of wells. This paper explores a new method for accurately computing wellbore and near-wellbore pressure. The method addresses the problem in two stages solved on different grids that overlap. In the first stage a global problem is solved in the entire domain with the conventional point/line-source well approximation, and in the second stage a local problem is solved in a smaller near-well region with the well implemented as an internal boundary. The two solutions are linked via the external boundary condition for the local problem, which is interpolated from the global solution. This method has the advantage of capturing both global reservoir properties, which can be accurately modelled using existing reservoir simulators, and the details of pressure transient phenomena associated with near-well refinement. The proposed method is validated against exact analytic solutions for a homogeneous 2D case study, and numerical results for some heterogeneous case studies are presented.
-
A Genuinely One-dimensional Upwind Scheme with Accuracy Enhancement for Multidimensional Advection Problems
Authors A. Michel, Q.H. Tran and G. Favennec
We propose a nonlinear finite-volume scheme for the numerical approximation of the linear advection equation on multidimensional arbitrary grids. This scheme is inspired by the two-step reconstruction philosophy of the anti-dissipative VoFiRe scheme proposed by Despres, Labourasse and Lagoutiere, but relies on a different procedure in the transversal step and a more appropriate min-max principle in the longitudinal step. As a consequence, not only is the stability analysis simpler, but better accuracy is also achieved in numerical results for any type of initial data. To illustrate the behaviour of the method, we show some results concerning transport of tracers and two-phase flow in porous media.
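As a baseline for the longitudinal/transversal steps described, the plain first-order upwind scheme for linear advection on a periodic 1D grid looks as follows (the paper's method adds a nonlinear two-step reconstruction on top of such a building block):

```python
def upwind_step(u, a, dt, dx):
    """One explicit first-order upwind step for u_t + a u_x = 0 on a
    periodic 1D grid; monotone and conservative for |a dt / dx| <= 1."""
    c = a * dt / dx                      # CFL number
    n = len(u)
    if a >= 0:
        # u[i-1] with i = 0 wraps to u[-1], giving periodicity for free
        return [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]
    return [u[i] - c * (u[(i + 1) % n] - u[i]) for i in range(n)]

# advect a square pulse one full period; upwind smears it but is stable
n, a = 100, 1.0
dx = 1.0 / n
u = [1.0 if 25 <= i < 50 else 0.0 for i in range(n)]
dt = 0.5 * dx / a                        # CFL number 0.5
for _ in range(2 * n):                   # 2n steps of c = 0.5 -> one period
    u = upwind_step(u, a, dt, dx)
```

The run illustrates exactly the dissipation that anti-dissipative reconstructions are designed to cure: mass is conserved, the solution stays in [0, 1], but the pulse's peak is smeared below 1.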
-
Discretisation on Complex and Polyhedral Grids – Open Source MATLAB Implementation
Authors K.A. Lie, S. Krogstad, I.S. Ligaarden, J.R. Natvig, H.M. Nilsen and B. Skaflestad
Accurate geological modelling of features such as faults, fractures or erosion requires grids that are flexible with respect to geometry. Such grids generally contain polyhedral cells and complex grid cell connectivities. The grid representation for polyhedral grids in turn affects the efficient implementation of numerical methods for sub-surface flow simulations. Methods based on two-point flux approximations are known not to converge on grids which are not $K$-orthogonal. In recent years, there has been significant research into mixed, multipoint, and mimetic discretisation methods that are all consistent and convergent. Furthermore, so-called multiscale methods have lately received a lot of attention. In this paper we consider a MATLAB implementation of consistent and convergent methods on unstructured, polyhedral grids. The main emphasis is put on flexibility and efficiency with respect to different grid formats, and in particular hierarchical grids used in multiscale methods. Moreover, we discuss how generic implementations of various popular methods for pressure and transport ease the study and development of advanced techniques such as multiscale methods, flow-based gridding, adjoint sensitivity analysis, and applications such as optimal control or well placement.
-
Improving Stochastic Inversion Methods in History Matching Using Proxy Models
Authors K.D. Stephen and S. Arwini
Assisted history-matching methods have been developed to automatically search the parameter space to find optimal models. Stochastic approaches increase the breadth of the search but can be very costly. Proxy models improve convergence by producing parameter sensitivities, so that better distributions may be used in the stochastic generation of new models. In this paper we present modifications to the neighbourhood algorithm and a genetic algorithm. Quadratic proxy models are derived for the misfit surface so that the search process can be made more efficient. We test the approaches on several analytical test functions. We also apply them to the Schiehallion field, west of Shetland, in seismic history matching. A quadratic regression equation was derived from a representative sample of models and used to generate gradients of the misfit with respect to the parameters. These are used to direct the choice of new models during a random search, increasing the chance of finding improved models. The proxy-based method generally improves convergence by a factor of three. Complex surfaces see a smaller improvement, though the regression equation can be updated as better models are found to maintain the benefits.
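The gradient-from-proxy idea can be sketched in one dimension: fit a quadratic through three misfit evaluations and step to its minimiser (a toy stand-in for the multi-parameter quadratic regression described above; the misfit function here is made up):

```python
def quadratic_proxy_minimiser(x0, x1, x2, f):
    """Build a quadratic proxy of a 1-D misfit from three samples
    (Newton divided differences) and return the proxy's minimiser
    -b / (2a) for the proxy a x^2 + b x + c."""
    y0, y1, y2 = f(x0), f(x1), f(x2)
    d1 = (y1 - y0) / (x1 - x0)
    d2 = (y2 - y1) / (x2 - x1)
    a = (d2 - d1) / (x2 - x0)       # leading coefficient of the proxy
    b = d1 - a * (x0 + x1)          # linear coefficient
    return -b / (2.0 * a)

# hypothetical misfit with minimum at x = 3; three evaluations suffice
# to recover it exactly because the misfit itself happens to be quadratic
xstar = quadratic_proxy_minimiser(0.0, 1.0, 5.0,
                                  lambda x: (x - 3.0) ** 2 + 2.0)
```

In the stochastic setting described above, such a proxy would not be trusted as an optimiser on its own; its gradients merely bias the random search towards promising regions.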
-
Explicit Singly Diagonally Implicit Runge-Kutta Methods and Adaptive Stepsize Control for Reservoir Simulation
Authors C. Völcker, J.B. Jørgensen, P.G. Thomsen and E.H. Stenby
Simulation of fluid flow in petroleum reservoirs is an essential tool in understanding, predicting and controlling advanced oil recovery methods. The major computational effort in reservoir simulation comes from solving a very large system of differential equations describing the fluid flow and the complex behaviour of advanced oil recovery methods. Choosing an appropriate method for the numerical solution of a large system of differential equations involves deciding on factors such as the order of the integration scheme, stability properties and concern for computational efficiency. Current simulators normally use first-order integration schemes applied with heuristically guided strategies for controlling time-step sizes. In the solution of complex recovery processes this can lead to unnecessary computations and inappropriately small time-steps. We establish a fully implicit integrator of high order applied with adaptive time-step selection supported by error estimates. We describe the explicit singly diagonally implicit Runge-Kutta (ESDIRK) methods with an embedded scheme for error estimation. This class of methods is computationally efficient, A- and L-stable as well as stiffly accurate. The embedded method providing the error estimate is of a different order than the method used for advancing the solution. Based on this error estimate, the time-step is computed by a predictive control law, designed from a model of the numerical method (ESDIRK) itself. Implicit integration involves the solution of a system of coupled nonlinear residual equations which must be solved iteratively. Fast convergence of the iterative solver is crucial and may be controlled by the time-step size. We present a strategy for adaptive stepsize selection that mitigates the trade-off between the convergence rate and time-step size. Consequently, the stepsize selection rule keeps the error estimate bounded (i.e., close to a user-specified tolerance) and at the same time maintains a good convergence rate of the equation solver. In addition, the controller has the ability to combine the above stepsize selection rule with classical time-step control that limits the maximum change in key variables.
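The embedded-error step-control logic can be sketched with a simple explicit pair (Heun for advancing, embedded Euler for the error estimate) standing in for the implicit ESDIRK machinery; the elementary controller h_new = h · 0.9 · (tol/err)^(1/(p+1)) is the basic version of the predictive law discussed above (all details here are a simplified illustration, not the authors' controller):

```python
def adaptive_integrate(f, y0, t0, t1, tol=1e-6, h0=0.1):
    """Adaptive step control with an embedded pair: Heun (order 2)
    advances the solution, embedded Euler (order 1) supplies the local
    error estimate. Steps with err > tol are rejected and retried."""
    t, y, h = t0, y0, h0
    while t < t1:
        h = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + 0.5 * h * (k1 + k2)   # order-2 advancing solution
        y_low = y + h * k1                 # order-1 embedded solution
        err = abs(y_high - y_low)          # local error estimate
        if err <= tol:                     # accept the step
            t, y = t + h, y_high
        # elementary controller, exponent 1/(p+1) with p = 1,
        # with safety factor 0.9 and growth/shrink limits
        h *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

# dy/dt = -y, y(0) = 1  ->  y(1) = exp(-1)
y = adaptive_integrate(lambda t, y: -y, 1.0, 0.0, 1.0)
```

The predictive controller described in the abstract refines this rule using a model of the integrator itself and additionally monitors the nonlinear solver's convergence rate.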
-
An Application of Green’s Function Technique for Well Testing Horizontal and Partially Penetrating Wells
Authors D.V. Posvyanskii, E.S. Makarova, S.V. Milytin and V.S. Posvyanskii
The purpose of well test analysis is to estimate reservoir properties from measurements of wellbore pressure and production rate over time. Generally this is achieved by solving an inverse problem: the mathematical model of the reservoir generates a pressure response to compare with the actual one. The efficiency of solving the inverse problem depends both on the regression scheme and on the number of evaluations of the direct task. The direct task is based on the solution of the diffusivity equation, which describes fluid flow in porous media. This paper focuses on the solution of the direct problem only. The diffusivity equation can be solved through the Green's function (GF) method, where the solution is presented as a series over eigenvalues of the Laplace differential operator. However, these series converge conditionally and the summation is time consuming, which restricts the application of this technique to inverse problems in well test analysis. The same problems are well known in quantum theory for solid-state systems, and an algorithm for fast summation of such series was proposed by Ewald. In [1], Ewald's algorithm was successfully applied to well testing of vertical wells. In the present work we apply it to well test analysis of horizontal and partially penetrating wells. The application of Ewald's algorithm allows for a fast and highly accurate solution. We have successfully applied this procedure to the interpretation of real pressure-buildup data, taking sandface flow into account. Unfortunately, the GF can be written out analytically only for simple cases. However, using time measurements of pressure drop and production rate, the Green's function can be recovered. This deconvolution technique is well known in well test analysis. In this study the reservoir response function was represented as a decomposition on a basis of orthogonal Daubechies wavelets. This basis reduces the number of terms in the decomposition and so avoids problems connected with the regularization procedure [2]. For simple cases, the recovered Green's function is in good agreement with the corresponding analytical expressions. [1] E.S. Makarova, D.V. Posvyanskii, V.S. Posvyanskii, A.B. Starostin, ECMOR XI, P26, 2008. [2] T. Schroeter, F. Hollaender, A.C. Gringarten, SPE 71574, 2001.
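The idea of swapping between two representations of the same series — one converging fast where the other converges slowly — can be illustrated with the 1D heat kernel on an interval, a textbook analogue of the Ewald splitting (not the authors' 3D implementation):

```python
import math

def gf_spectral(x, x0, t, L, nmax=200):
    """Dirichlet heat kernel on [0, L] as an eigenfunction series;
    converges fast for *large* t (terms decay as exp(-n^2 pi^2 t / L^2))."""
    s = 0.0
    for n in range(1, nmax + 1):
        s += (math.exp(-(n * math.pi / L) ** 2 * t)
              * math.sin(n * math.pi * x / L)
              * math.sin(n * math.pi * x0 / L))
    return 2.0 / L * s

def gf_images(x, x0, t, L, mmax=50):
    """The same kernel as a sum over mirror images of the free-space
    Gaussian; converges fast for *small* t -- the complementary
    representation that Ewald-type summation exploits."""
    s = 0.0
    for m in range(-mmax, mmax + 1):
        s += (math.exp(-((x - x0 - 2 * m * L) ** 2) / (4 * t))
              - math.exp(-((x + x0 - 2 * m * L) ** 2) / (4 * t)))
    return s / math.sqrt(4 * math.pi * t)

# the two series are the same Green's function, summed from opposite ends
g1 = gf_spectral(0.3, 0.6, 0.05, 1.0)
g2 = gf_images(0.3, 0.6, 0.05, 1.0)
```

Ewald's algorithm goes further by splitting a single sum into a short-range image-like part and a long-range spectral part so that both pieces converge rapidly at every t.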
-
Numerical Aspects of Using Vertical Equilibrium Models for Simulating CO2 Sequestration
Authors I. Ligaarden and H.M. Nilsen
Storage of CO2 in deep saline aquifers is considered an important means to reduce anthropogenic CO2 in the atmosphere. Assessing the risk of storage operations requires accurate modeling of the migration of injected CO2. However, since potential injection sites are typically very large and the time scales long, flow simulation with traditional methods from the petroleum industry is often not feasible. Also, CO2 is very mobile and the flow is usually confined to thin layers, which places severe requirements on the vertical grid resolution. Using a vertical equilibrium assumption, the flow of a layer of CO2 can be approximated in terms of its thickness to obtain a 2D simulation model. Although this approach reduces the dimension of the model, important information about the heterogeneities in the underlying 3D medium is preserved. In this paper, we consider the Johansen formation, a candidate for CO2 sequestration, to compare 3D simulations with simulations using a vertical equilibrium 2D model. We discuss numerical aspects of the different methods, and demonstrate that the vertical equilibrium model provides more accurate results when the vertical grid resolution is low. Moreover, we investigate how averaging of parameters influences the accuracy of the vertical-equilibrium solution.
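The dimension reduction can be illustrated by collapsing one vertical stack of fine cells into a single 2D value — here a thickness-weighted arithmetic average of horizontal permeability (a minimal sketch with made-up layer data, not the paper's full VE model, whose unknown is the CO2 layer thickness):

```python
def vertically_average(perm_column, dz_column):
    """Collapse one vertical stack of cells to a single 2D value:
    thickness-weighted arithmetic average of horizontal permeability,
    as used when building a vertical-equilibrium sheet model."""
    H = sum(dz_column)                          # total column height
    kavg = sum(k * dz for k, dz in zip(perm_column, dz_column)) / H
    return kavg, H

# hypothetical 3-layer column: high-perm layer between two tight layers
kavg, H = vertically_average([10.0, 500.0, 5.0], [4.0, 2.0, 4.0])
```

Applying this to every column of a 3D grid yields the 2D VE model's static properties while discarding vertical resolution — which is exactly why the paper's comparison against full 3D simulation is informative.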
-
Impact of Geological Heterogeneity on Early-stage CO2 Plume Migration
Authors M. Ashraf, K.A. Lie, H.M. Nilsen and A. Skorstad
Injection of CO2 into saline aquifers can be divided into two phases: injection and plume migration over long time scales. Large-scale injection operations should be preceded by simulation studies to determine how to maximize the injection volume/rate and minimize the leakage risk. Simulation of CO2 storage differs from oil recovery prediction not only in the objectives of the study, but also in the characteristic length and time scales of the process. Working with long temporal and spatial scales and large uncertainties poses the question of how detailed the geological description should be. The impact of geological uncertainty on the oil production forecast has been thoroughly investigated in the SAIGUP study, where an extensive set of synthetic but realistic models of shallow-marine reservoirs was made. Tens of thousands of simulations were run for different production scenarios, demonstrating that variations in the structural and sedimentological description had a strong impact on production responses. Herein, we use the structural and sedimentological models from SAIGUP to study two questions related to CO2 storage:
• How sensitive are the injection and early-stage migration to uncertainty and variability in the geological description?
• What simplifying assumptions are allowed in averaging the geological attributes over scales?
-
On Mathematical Simulation of Heavy Oil Recovery Problem
Authors A.K. Pergament and D.A. Mitrushkin
The paper is dedicated to reservoir simulation of heavy oil and bitumen recovery using injection of heat-transfer agents (e.g. hot water or steam). A non-isothermal three-phase flow model, including oil, water and steam, is considered. A difference scheme based on the finite volume method is proposed. The main feature of this problem is the significant dependence of fluid properties on temperature: the crucial point is the considerable decrease of oil viscosity with increasing temperature. Consequently, it is necessary to accurately determine the temperature field close to the heat sources. An important result is that self-similar solutions are constructed for the problems of steam injection and stationary gravity-driven flow of a three-phase fluid. Results of testing the derived difference scheme are presented for the obtained self-similar solutions. The model problem of heavy oil field development using the SAGD technology is discussed, and results of comparing numerical simulations of temperature fronts on different computational grids are presented. We gratefully acknowledge the financial support of the Russian Foundation for Basic Research (grant 09-01-00823).
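The viscosity-temperature dependence driving the process can be sketched with an Andrade/Arrhenius-type law (the functional form is a common choice; the coefficients below are hypothetical, not the authors' fluid model):

```python
import math

def oil_viscosity(T, mu_ref=1.0e5, T_ref=300.0, b=5000.0):
    """Andrade/Arrhenius-type viscosity-temperature law for heavy oil:
    mu = mu_ref * exp(b * (1/T - 1/T_ref)), with mu in cP and T in K.
    All coefficients are hypothetical."""
    return mu_ref * math.exp(b * (1.0 / T - 1.0 / T_ref))

# heating from reservoir temperature to steam-chamber temperature
mu_cold = oil_viscosity(300.0)   # ~1e5 cP: essentially immobile
mu_hot = oil_viscosity(500.0)    # orders of magnitude lower
```

This steep drop is why resolving the temperature field accurately near heat sources, as the abstract stresses, dominates the quality of a SAGD simulation.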
-
Can Formation Relative Permeabilities Rule Out a Foam EOR Process?
Authors E. Ashoori and W.R. Rossen
Foam is a promising means of increasing sweep in miscible- and immiscible-gas enhanced oil recovery. SAG (surfactant-alternating-gas) is a preferred method of injection. Numerous studies verify that the water relative-permeability function krw(Sw) is unaffected by foam. This paper shows a connection between the krw(Sw) function and SAG foam effectiveness that is independent of the details of how foam reduces gas mobility. The success of SAG depends on total mobility at a point of tangency to the fractional-flow curve, which defines the shock front at the leading edge of the foam bank. Geometric constraints limit the region in the fractional-flow diagram in which this point of tangency can occur. For a given krw(Sw) function, this limits the mobility reduction achievable for any possible SAG process. The implications of this work include the following: Increasing nonlinearity of the krw function is advantageous for SAG processes, regardless of how foam reduces gas mobility. SAG is inappropriate for naturally fractured reservoirs if straight-line relative permeabilities apply, even if extremely strong foam can be stabilized in fractures. It is important to measure krw(Sw) separately for any formation for which a SAG process is envisioned.
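The tangent construction underlying the argument can be sketched generically (Corey-type relative permeabilities and a constant foam mobility-reduction factor are our hypothetical stand-ins, not the paper's foam model):

```python
def frac_flow(sw, muw=1.0, mug=0.02, nw=4.0, ng=2.0, fmr=100.0):
    """Water fractional flow with Corey-type relperms; gas mobility is
    divided by a constant foam mobility-reduction factor `fmr`
    (all parameters hypothetical)."""
    lw = sw ** nw / muw                     # water mobility
    lg = (1.0 - sw) ** ng / (fmr * mug)    # foam-reduced gas mobility
    return lw / (lw + lg), lw + lg          # (f_w, total mobility)

def tangency_point(swc=0.0, n=10000):
    """Welge-style tangent construction: the shock saturation maximises
    the chord slope f_w(S) / (S - swc) drawn from the initial state."""
    best_slope, s_best, mob_best = -1.0, None, None
    for i in range(1, n):
        s = swc + (1.0 - swc) * i / n
        f, mob = frac_flow(s)
        slope = f / (s - swc)
        if slope > best_slope:
            best_slope, s_best, mob_best = slope, s, mob
    return s_best, mob_best

s_star, mob_star = tangency_point()
# the total mobility at the tangency point bounds SAG effectiveness
```

The paper's geometric point is visible here: wherever the tangency lands on the krw(Sw) curve, the total mobility at that saturation caps the mobility reduction a SAG process can deliver, however strong the foam.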
-
Improved Near-well Approximation for Prediction of the Gas/Oil Production Ratio from Oil-rim Reservoirs
Authors S.A. Halvorsen, A. Mjaavatten and R. Aasheim
Statoil has developed a model with short computational time to predict the rate-dependent gas/oil ratio (GOR) from a horizontal well. The oil flow towards the wellbore is based on a one-dimensional model by Konieczek. The model performs remarkably well for medium-term production optimization (weeks, months), while the predictions during the first days after a large change in the production can be poor. An improved one-dimensional model for the flow towards the wellbore is proposed, where the oil flow is treated as a superposition of three terms: 1) radial flow towards the wellbore and towards a mirror well; 2) flow to correct for modified boundary conditions due to the radial flows; 3) flow due to height variations of the gas/oil contact (GOC). The new model addresses the current short-term and near-well deficiencies: the effect of 2D flow close to the wellbore, gas breakthrough due to viscous gas fingering, and horizontal/vertical anisotropy. Based on analysis and preliminary testing, the new model should have equally good medium- and long-term capabilities and considerably improved short-term and near-well behaviour compared to the present implementation.
-
Flow-based Grid Coarsening for Transport Simulations
Authors V.L. Hauge, K.A. Lie and J.R. Natvig
Geological models are becoming increasingly large and detailed to account for heterogeneous structures on different spatial scales. To obtain computationally tractable simulation models, it is common to remove spatial detail by upscaling. Pressure and transport equations are different in nature and generally require different strategies for optimal upgridding. To optimize the accuracy of a transport calculation, the coarsened grid should be constructed based on a posteriori error estimates and adapt to the flow patterns predicted by the pressure equation. Sharp and rigorous estimates are generally hard to obtain; herein we consider various ad hoc methods for generating flow-adapted grids. Common to all is that they start by solving a single-phase flow problem once and continue by agglomerating cells from an underlying fine-scale grid. We present several variations of the original method. First, we discuss how to include a priori information in the coarsening process, e.g., to adapt to special geological features or to obtain less irregular grids where flow-adaption is not crucial. Second, we show how different algorithmic choices can simplify the matrix structure of the discretized system and lead to reduced computational complexity. Finally, we demonstrate how to improve simulation accuracy by dynamically adding local resolution near strong saturation fronts.
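One ad hoc flow-adapted coarsening of the kind discussed can be sketched as follows: bin cells by the logarithm of a precomputed single-phase velocity, then agglomerate connected cells that share a bin (illustrative only; the cell graph and velocities below are made up, and real variants add merging/splitting rules to control block sizes):

```python
import math

def flow_adapted_partition(velocity, neighbors, nbins=4):
    """Ad hoc flow-based coarsening sketch: bin cells by the logarithm of
    a precomputed single-phase velocity magnitude, then agglomerate
    connected cells that share a bin (one coarse block per component)."""
    logs = [math.log10(v) for v in velocity]
    lo, hi = min(logs), max(logs)
    def bin_of(lv):
        if hi == lo:
            return 0
        return min(int(nbins * (lv - lo) / (hi - lo)), nbins - 1)
    bins = [bin_of(lv) for lv in logs]
    block = [-1] * len(velocity)
    nblocks = 0
    for seed in range(len(velocity)):
        if block[seed] >= 0:
            continue
        block[seed] = nblocks
        stack = [seed]                 # flood fill within one velocity bin
        while stack:
            c = stack.pop()
            for nb in neighbors[c]:
                if block[nb] < 0 and bins[nb] == bins[c]:
                    block[nb] = nblocks
                    stack.append(nb)
        nblocks += 1
    return block, nblocks

# 1D chain of 8 cells with a high-flow channel in the middle
vel = [1e-3, 1e-3, 1.0, 1.0, 1.0, 1e-3, 1e-3, 1e-3]
nbr = [[j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)]
block, nblocks = flow_adapted_partition(vel, nbr)
# the channel and the two low-flow flanks each become one coarse block
```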
-
Multiscale Method for Numerical Simulation of Multiphase Flows in Giant Production Fields
Authors A.K. Pergament, V.A. Semiletov and P.Y. Tomin
The problem of multiphase flow modeling for giant oil and gas fields partitioned into several areas is considered. Each cell of the coarse grid is formed by aggregating a large number of fine-grid cells. Following the ideas of I. Babuska, one can show that the pore pressure in each coarse-grid cell may be approximated by a linear combination of special basis functions. These functions are solutions of a single-phase flow problem in the coarse-grid cell, with special piecewise-multilinear functions used as boundary conditions. The support operator method (SOM) of A.A. Samarsky is used to calculate the basis functions. Using R.P. Fedorenko's idea of the superelement method (SEM), the calculated basis functions are then used for approximation in SOM instead of ordinary linear functions. Compared to SEM, the number of basis functions for the method developed is substantially smaller: not 8 but 3 for a hexahedral grid. Finally, the pressure distribution and saturation evolution are calculated with Neumann boundary conditions for the governing system. The method developed is a high-resolution one and allows efficient simulation of giant production fields. We gratefully acknowledge the financial support of the RFBR (grant 09-01-00823).
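In one dimension the basis-function construction reduces to a local single-phase solve with boundary values 0 and 1; a minimal sketch (not the SOM discretisation used by the authors — in 1D the local problem has the closed-form harmonic-weighting solution):

```python
def multiscale_basis_1d(k, dx=1.0):
    """One multiscale basis function on a 1D coarse cell: solve the local
    single-phase problem (k u')' = 0 with boundary values 0 and 1.
    The flux is constant, so nodal values follow from harmonic weighting
    of the fine-cell permeabilities k."""
    resist = [dx / ki for ki in k]      # flow resistance of each fine cell
    q = 1.0 / sum(resist)               # constant flux giving u(L) = 1
    u, vals = 0.0, [0.0]
    for r in resist:
        u += q * r
        vals.append(u)
    return vals                          # nodal values from 0 to 1

# heterogeneous coarse cell: the basis function jumps across the
# low-permeability middle layer instead of varying linearly
phi = multiscale_basis_1d([10.0, 0.1, 10.0])
```

This is what distinguishes such bases from the ordinary linear functions mentioned above: they carry the fine-scale permeability variation into the coarse approximation space.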
-
Multiscale Wavelet Analysis Applied to Localization of Covariance Estimates in EnKF
Authors O. Pajonk, R. Schulze-Riegert, M. Krosche and H.G. Matthies
The ensemble Kalman filter (EnKF) has become very popular in the field of assisted history matching for its appealing features. Nonetheless, problems can result from so-called spurious correlations due to the finite ensemble size [e.g. Evensen, 2009], which are considered unphysical. The result of these correlations is a reduction of ensemble spread at model locations where no related data is available. This may cause an underestimation of the uncertainty and can result in a collapsed ensemble [Hamill, 2001]. Two methods are commonly used to address the unwanted reduction of variance: covariance inflation and localization. This contribution presents a new covariance localization approach based on multiscale (or multiresolution) wavelet analysis [Daubechies, 1992]: the model state vector is transformed to a multiscale wavelet space. Correlations are computed in this space, not in the model space. This procedure allows the application of a new localization scheme, i.e., a different covariance localization function can be applied for each of the scale levels using a standard Schur-product approach. In particular, it allows us to filter unphysical long-range correlations from fine scales while retaining longer correlations on coarser scales. Afterwards, EnKF updates are computed and the transformation back to model space is applied. This contribution explains our wavelet-based localization approach and presents numerical results for the application to a synthetic model. The results are compared to standard localization approaches. The application to a real field simulation model is discussed.
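The scale-dependent localization idea can be sketched with a one-level Haar transform standing in for the Daubechies wavelets (the taper and data below are hypothetical; the real scheme applies a different localization function per scale level of a multilevel transform):

```python
def haar_fwd(x):
    """One level of the orthonormal Haar transform: returns coarse
    (approximation) and fine (detail) coefficients; len(x) must be even."""
    s = 2 ** -0.5
    a = [s * (x[2 * i] + x[2 * i + 1]) for i in range(len(x) // 2)]
    d = [s * (x[2 * i] - x[2 * i + 1]) for i in range(len(x) // 2)]
    return a, d

def haar_inv(a, d):
    """Inverse of haar_fwd."""
    s = 2 ** -0.5
    x = []
    for ai, di in zip(a, d):
        x += [s * (ai + di), s * (ai - di)]
    return x

def localize(corr, fine_taper=0.0):
    """Sketch of scale-dependent localization: transform a correlation
    vector, damp the *fine-scale* coefficients by a Schur-type factor,
    keep the coarse scale untouched, and transform back."""
    a, d = haar_fwd(corr)
    d = [fine_taper * di for di in d]   # suppress fine-scale noise
    return haar_inv(a, d)

# a smooth correlation profile plus alternating "spurious" fine-scale
# noise of the kind caused by a finite ensemble
smooth = [0.8, 0.6, 0.5, 0.3]
noise = [0.1, -0.1, 0.1, -0.1]
loc = localize([c + n for c, n in zip(smooth, noise)])
# full damping of the detail scale leaves the pairwise means of the input
```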
-