ECMOR XVII
- Conference date: September 14-17, 2020
- Location: Online Event
- Published: 14 September 2020
Numerical Effects of Fluid Flow Modelling in Surfactant Chemical Flooding
Authors: O. Akinyele and K. Stephen
Summary: Numerical simulation of surfactant flooding using conventional reservoir simulation models can lead to unreliable forecasts and poor decisions owing to the appearance of numerical effects. The simulations solve systems of nonlinear partial differential equations describing the physical behavior of surfactant flooding by combining multiphase flow in porous media with surfactant transport. They approximate the solutions by discretizing time and space, which can lead to spurious oscillations, instabilities or deviations in the model outcome.
In this work, the black-oil decoupled implicit method was used to carry out simulations under various conditions (with dimensions at the reservoir scale) to investigate the model behavior in comparison with the analytical solution obtained from fractional flow theory. The conditions examined included changes to cell size and time step, as well as the properties of the surfactant and how they affect miscibility and flow. The main aim of this study was to identify whether oscillations occur and, if so, why and when.
The results show that spurious oscillations occur at the surfactant-flood water bank and are removed once the adsorption rate is increased by 25% from its initial value of 0.0002 kg/kg, while the oscillations become negligible after grid refinement to a 5000-grid-block set-up along the x-axis. The results also show that aqueous-phase velocity and pressure drop contribute significantly to the appearance of oscillations. The oscillations were not totally removed by implementing a sudden transition in the relative permeabilities around the surfactant front. They induced earlier solution miscibility, which caused a misleading prediction of improved oil recovery compared with the solution free of numerical effects. It is therefore important to improve existing models and follow appropriate guidelines to suppress oscillations and remove errors.
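As background on the mechanism, spurious oscillations of the kind described above can be reproduced with a minimal 1D advection experiment: a monotone first-order upwind scheme keeps a sharp front within physical bounds, while a higher-order dispersive scheme such as Lax-Wendroff overshoots near the front. This sketch is purely illustrative and is not the authors' black-oil simulator; all parameters are arbitrary.

```python
import numpy as np

def advect(u0, c, steps, scheme):
    """Advance a 1D periodic advection profile; c is the Courant number (0 < c <= 1)."""
    u = u0.copy()
    for _ in range(steps):
        um1 = np.roll(u, 1)   # u[i-1] (periodic)
        up1 = np.roll(u, -1)  # u[i+1]
        if scheme == "upwind":          # first-order, monotone
            u = u - c * (u - um1)
        elif scheme == "lax-wendroff":  # second-order, dispersive
            u = u - 0.5 * c * (up1 - um1) + 0.5 * c**2 * (up1 - 2 * u + um1)
        else:
            raise ValueError(scheme)
    return u

n = 200
u0 = np.where(np.arange(n) < n // 4, 1.0, 0.0)  # sharp front, like a flood bank
u_up = advect(u0, c=0.5, steps=100, scheme="upwind")
u_lw = advect(u0, c=0.5, steps=100, scheme="lax-wendroff")
# The monotone scheme stays within the physical bounds [0, 1];
# the dispersive scheme overshoots near the front (spurious oscillation).
```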
A Multi-Timestep Domain Decomposition Method Applied to Polymer Flooding
Authors: R.S. Tavares, R.B.D. Santos, S.A.D. Lima, A. Dos Santos and J.H.D.S. Mariano
Summary: Waterflooding has commonly been used for secondary oil recovery. However, it is well known that the efficiency of oil recovery decreases when the mobility ratio is large or the reservoir is highly heterogeneous. In these scenarios, the polymer flooding technique arises as an efficient alternative to increase the production curves. The injection of a high-viscosity polymer solution reduces the mobility ratio, improving the displacement and sweep efficiency. On the other hand, mechanical retention and adsorption phenomena give rise to formation damage close to the injection wells, resulting in injectivity loss. In this context, our main goal is to construct a new computational model, based on domain decomposition methods, capable of coupling the phenomena on different spatial and time scales during polymer flooding. From the mathematical point of view, we model the polymer solution as a pseudo-plastic fluid, with the hydrodynamic model given by a nonlinear Darcy's law in which the injected fluid viscosity depends on the shear rate as suggested by the Carreau law. Furthermore, the polymer movement is quantified with a convection-diffusion-reaction transport equation whose nonlinear reactive part is due to mechanical retention and adsorption. The studied model takes formation damage into account by considering that porosity and permeability depend on the concentrations of polymer retained mechanically or by adsorption. From the computational point of view, the nonlinear mathematical model is discretized using the finite element method together with a staggered algorithm and the Newton-Raphson method. The kinetic law for mechanical retention is post-processed by the Runge-Kutta method. It is important to highlight that polymer may accumulate in the neighborhood of the injection well on a fast time scale, causing injectivity loss.
Unlike the rest of the reservoir, where large time steps and a coarse spatial mesh can be used, the neighborhood of the injection wells sometimes requires small time steps and a fine spatial mesh. In this context, we propose the application of domain decomposition techniques to couple the near-well and reservoir domains accurately and at lower computational cost. To this end, we apply a multi-time-step domain decomposition method to couple near-well retention and adsorption phenomena with polymer transport in the reservoir. Finally, we present numerical simulations to show the efficiency of the domain decomposition as well as to quantify injectivity during polymer flooding.
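For reference, the Carreau law mentioned in this abstract is a standard shear-thinning viscosity model with Newtonian plateaus at low and high shear. A minimal sketch (all parameter values here are hypothetical, not taken from the paper):

```python
def carreau_viscosity(shear_rate, mu0, mu_inf, lam, n):
    """Carreau model: viscosity plateaus at mu0 for low shear and mu_inf for
    high shear; lam is the relaxation time, n the power-law index (n < 1
    gives shear-thinning, pseudo-plastic behavior)."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# Hypothetical polymer-solution parameters (Pa*s, s):
mu0, mu_inf, lam, n = 0.1, 0.001, 1.0, 0.5
low  = carreau_viscosity(1e-4, mu0, mu_inf, lam, n)  # near the low-shear plateau mu0
high = carreau_viscosity(1e4,  mu0, mu_inf, lam, n)  # approaching mu_inf
```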
Modeling Compressible Gas Flow in Anisotropic Reservoirs Using a Nonlinear Finite Volume Method
Authors: W. Zhang and M. Al Kobaisi
Summary: A nonlinear two-point flux approximation (NTPFA) finite volume method is applied to the modeling of compressible gas flow in anisotropic reservoirs. The gas compressibility factor and gas density are calculated with the Peng-Robinson equation of state. The governing equations are discretized by NTPFA in space and the first-order backward Euler method in time. Newton-Raphson iteration is used as the nonlinear solver during each time step. The NTPFA method employs harmonic averaging points as auxiliary points during the construction of one-sided fluxes. A unique nonlinear flux approximation is obtained by a convex combination of the one-sided fluxes. Since a Newton-Raphson nonlinear solver is used, NTPFA has a denser discretized coefficient matrix than the widely used two-point flux approximation (TPFA) method on grids that are not K-orthogonal. However, its coefficient matrix is still much sparser than that of the classical multi-point flux approximation O (MPFA-O) method. Results of numerical examples demonstrate that the pressure profile and gas production rate of NTPFA are in close agreement with those of MPFA-O for most cases, whereas TPFA is inconsistent when the grid is not K-orthogonal. The MPFA-O method is well known to suffer from monotonicity issues for highly anisotropic reservoirs, and our numerical experiments show that MPFA-O can fail to converge during the Newton-Raphson iterations when the permeability anisotropy is very high, while NTPFA still performs well.
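The convex combination of one-sided fluxes that makes the scheme nonlinear can be sketched as follows. This is a generic illustration of the NTPFA weighting idea (weights built from the magnitudes of the opposing one-sided fluxes), not the authors' implementation:

```python
def ntpfa_flux(F1, F2, eps=1e-12):
    """Combine the two one-sided face fluxes F1 (from cell i toward j) and
    F2 (from cell j toward i) into a unique flux by a convex combination.
    The weights depend on the fluxes themselves, which is what makes the
    resulting two-point scheme nonlinear in the pressures."""
    a1, a2 = abs(F2), abs(F1)
    s = a1 + a2
    if s < eps:                  # degenerate case: plain average
        mu1 = mu2 = 0.5
    else:
        mu1, mu2 = a1 / s, a2 / s
    return mu1 * F1 - mu2 * F2   # combined face flux, antisymmetric in (i, j)
```

Swapping the two arguments flips the sign of the result, so the scheme is locally conservative across the face.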
Optimization of WAG in Real Geological Field Using Machine Learning and Nature-Inspired Algorithms
Authors: M. Nait Amar and A. Jahanbani Ghahfarokhi
Summary: Maximizing oil recovery is a challenging task for the oil industry worldwide, mainly in the presence of dynamic technical and economic constraints. To achieve this target, a number of enhanced oil recovery technologies are applied, and one of the most successful and widely used methods is water-alternating-gas injection (WAG). The estimation of the optimal operating parameters of the WAG process is a complex problem which requires a considerable number of time-consuming runs. Therefore, developing a faster alternative tool without sacrificing the precision of the numerical simulators becomes essential. Proxy models, which are user-friendly mathematical models based on machine learning and pattern recognition, have a noticeable ability to deal with highly complex problems, such as reproducing the outcomes of numerical simulators in reasonable time.
The present work aims at establishing various dynamic proxy models for optimizing a constrained WAG project applied to real field data from “Gullfaks” in the North Sea. Two types of artificial neural network (ANN), namely the multi-layer perceptron (MLP) and the radial basis function neural network (RBFNN), were trained to predict all the parameters needed for the formulated optimization problem. The Levenberg–Marquardt (LM) algorithm was applied to optimize the MLP model, while a genetic algorithm (GA) and ant colony optimization (ACO) were applied for the proper selection of the RBFNN control parameters. Furthermore, the best proxy model found was coupled with GA and ACO to solve the WAG optimization problems.
The results showed that the established proxies are robust, practical and effective in mimicking the performance of the numerical reservoir model. In addition, the results demonstrated the effectiveness of GA and ACO in optimizing the parameters of the WAG process for the real field data used in this study. The findings of this investigation contribute to the knowledge of the mathematics of oil recovery from several perspectives: the establishment of cheap and accurate time-dependent proxy models for real cases, the optimization of the WAG process in the presence of various types of constraints, and the robustness of nature-inspired algorithms for solving optimization problems related to enhanced oil recovery.
Discrete Fracture-Matrix Simulations Using Cell-Centered Nonlinear Finite Volume Methods
Authors: W. Zhang and M. Al Kobaisi
Summary: Control-volume-based discrete fracture-matrix (DFM) models have been increasingly used to simulate flow and transport in fractured porous media. The star-delta transformation is often used to eliminate the intermediate control volumes at fracture intersections. The star-delta transformation, however, assumes that the permeability at fracture intersections is very high. Therefore, it cannot accurately model the blocking effect at fracture intersections, for example when a blocking fracture intersects a permeable one. In this work, we improve the star-delta transformation by modifying the calculation of transmissibility at fracture intersections so that the blocking effect can be captured. To account for permeability anisotropy in the matrix and the grid non-orthogonality resulting from unstructured meshing, nonlinear finite volume methods are used to compute transmissibility for matrix-matrix connections. The linear two-point flux approximation (TPFA) is then used to couple the fracture and matrix together. Results of numerical experiments demonstrate that the improved star-delta transformation performs very well compared to the reference solution. When the permeability of the matrix is anisotropic, the linear TPFA is not consistent in general and significant errors can be incurred. The nonlinear methods, on the other hand, capture the tensorial effect in the matrix domain more accurately in all simulations.
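For context, the classical star-delta transformation that this paper sets out to improve replaces the intersection control volume by direct connections between the intersecting fracture branches. A minimal sketch of that standard rule follows (the paper's blocking-aware modification is not reproduced here):

```python
def star_delta(T):
    """Eliminate the intersection control volume: convert the 'star' of
    half-transmissibilities T[k] (one per intersecting fracture branch)
    into direct 'delta' transmissibilities between each pair of branches,
    T_ij = T_i * T_j / sum_k(T_k)."""
    total = sum(T)
    return {(i, j): T[i] * T[j] / total
            for i in range(len(T)) for j in range(i + 1, len(T))}
```

Because every pairwise transmissibility is scaled by the sum over all branches, a nearly impermeable branch barely affects the connection between two permeable ones, which is why the standard rule misses the blocking effect the abstract describes.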
Two-Phase Darcy Flows in Fractured and Deformable Porous Media: Convergence Analysis and Iterative Coupling
Authors: F. Bonaldi, K. Brenner, J. Droniou and R. Masson
Summary: We consider a two-phase Darcy flow in a fractured porous medium consisting of a matrix flow coupled with a tangential flow in the fractures, described as a network of planar surfaces. This flow model is coupled with the mechanical deformation of the matrix, assuming that the fractures are open and filled by the fluids, as well as small deformations and a linear elastic constitutive law. In this work, the model is derived and discretized using the gradient discretization method, which covers a large class of conforming and non-conforming discretizations. This framework allows a generic convergence analysis of the coupled model using a combination of discrete functional tools. The convergence of the discrete solution to a weak solution of the model is proved using a priori and compactness estimates. This is, to our knowledge, the first convergence result for this type of model taking into account two-phase flows and the nonlinear poro-mechanical coupling, including the cubic nonlinear dependence of the fracture conductivity on the fracture aperture. Previous related works consider a linear approximation obtained for a single-phase flow by freezing the fracture conductivity. Numerical experiments are presented to illustrate this result using a two-point flux approximation cell-centered finite volume scheme for the flow and a P2 finite element method for the mechanics. Iterative coupling algorithms are investigated to solve the coupled discrete nonlinear systems at each time step of the simulation.
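The cubic dependence of fracture conductivity on aperture mentioned above comes from the parallel-plate (Poiseuille) cubic law. A one-line sketch, with an arbitrary illustrative fluid viscosity:

```python
def fracture_conductivity(aperture, mu=1.0e-3):
    """Cubic law: tangential conductivity of an open fracture of the given
    aperture (parallel-plate Poiseuille flow), per unit fracture width;
    mu is the fluid viscosity (Pa*s, water-like default for illustration)."""
    return aperture ** 3 / (12.0 * mu)

# Doubling the aperture multiplies the conductivity by 8 -- the cubic
# nonlinearity that the convergence analysis has to handle.
ratio = fracture_conductivity(2e-4) / fracture_conductivity(1e-4)
```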
Numerical Modelling of CO2 Migration through Faulted Storage Strata with a New Asynchronous FE-FV Compositional Simulator
Authors: Q. Shao and S. Matthai
Summary: Simulation of unstable subsurface CO2 migration is challenging not only because of the accompanying thermal-hydraulic-mechanical-chemical processes, but also because the interaction of the plume with geometrically complex geologic structures (e.g., faults and fractures) has to be resolved across a broad range of spatiotemporal scales. To address these challenges, we present a new hybrid finite element – finite volume simulator (ACGSS) for fully unstructured finite element meshes, including discrete representations of wells and intersecting faults. This compositional multi-phase multi-component transport scheme makes it possible to model reactive miscible flow and transport, phase transitions (e.g., CO2 dissolution, H2O evaporation and salt precipitation) and inter-phase mass transfer during CO2 geo-sequestration. Critical for its performance is an asynchronous evolution scheme, following the idea of discrete event simulation (DES). This method restricts diagnostics, phase equilibria and transport computations to those small subregions of the model where changes are occurring, resolving these accurately across temporal and spatial scales. In conjunction with parallelisation, this accelerates computation significantly, also making it more robust. Accurate compositional simulation required us to apply the asynchronous method to both the pressure and the saturation equations. This led to a genuinely new simulator. The ACGSS is applied to a complex 3D fault model, which consists of a sequence of sandstone and shale layers intersected by multiple faults. This model was produced from a 3D medical scan of a sand-box experiment, which was converted into a finite element mesh using GoCAD and the RINGMesh software and populated with plausible properties. The adaptively refined mesh represents every detail of the intricate model geometry.
In the example simulation (CO2 injected at 0.2 Mt/yr through a vertical 15-m-long completion in the lowest siltstone layer of the graben structure), the CO2 rises through the faults from block to block until it reaches the unfaulted topmost sandstone unit. This occurs in less than 3 years, although the faults are modelled as thin (0.5-m-wide) and only moderately permeable (k = 5 × 10⁻¹⁴ m²) structures. Thanks to the asynchronous time-marching, the 3-year simulation on the >9-million-cell grid completes within several hours on a 20-core desktop PC. A sensitivity analysis with respect to burial depth and geologic parameters is included in the paper and presentation.
UNISIM-III: Benchmark Case Proposal Based on a Fractured Karst Reservoir
Authors: M. Correia, V. Botechia, L. Pires, V. Rios, S. Santos, J. Hohendorff, M. Chaves and D. Schiozer
Summary: The significant world oil reserves related to fractured karst reservoirs in Brazilian pre-salt fields add new frontiers to (1) the development of numerical methods for upscaling giant fields with multiscale heterogeneities, (2) history matching and production strategy optimization under critical uncertainties and (3) forecasting of future reservoir performance. However, there is a lack of benchmark models with the heterogeneous dynamic behavior typical of fractured karst reservoirs with which to develop and validate novel numerical methods. This work presents a simulation benchmark model, available as public-domain data, which represents a fractured carbonate karst reservoir and provides a great opportunity to test new methodologies for reservoir development and management using numerical simulation.
The work is divided into three steps: (1) development of a reference model, a fine-grid model with a high level of geologic detail, treated as the real field; (2) development of a simulation model under uncertainties considering an initial stage of the field development phase; and (3) elaboration of a benchmark proposal for studies related to oil field development and production strategy selection. Based on the available information from well logs, several uncertainty attributes were considered in the structural framework, facies and petrophysical properties. Dynamic, economic and technical uncertainties were also considered. The reference model is a giant field divided into two stratigraphic zones - the upper zone characterized by stromatolites and the lower one by coquinas. Moreover, the model is characterized by two regions with karst features near the horizon surfaces and a cluster of fractures near faults. Volcanic rocks and highly permeable trends near faults are included as non-mapped uncertainties in the simulation model, as the well logs available at the initial stage of field development do not intercept these geologic attributes. This approach will pose several challenges in reservoir development and management.
As this benchmark is representative of a giant field, it is divided into four sectors. Sector 1 already has a defined production strategy, aimed at studies of field management. The strategy considers WAG (water-alternating-gas/CO2) as the recovery mechanism and includes 13 wells in a first wave (6 producers and 7 injectors); 4 more wells can be added in a second wave. Field development studies can be applied in the other sectors.
This benchmark provides a great opportunity to develop and test novel numerical methods in giant reservoirs with geologic and dynamic pre-salt trends.
Upscaling of Nanoparticle Retention Rate for Single-Well Applications From Pore-Scale Simulations
Authors: N. Bueno, M. Icardi, F. Municchi, H. Solano and J. Mejía
Summary: One of the main difficulties when simulating nanoparticle transport in porous media is the lack of accurate field-scale parameters to properly estimate particle retention across large distances. Furthermore, current field models are, in general, not based on mathematically rigorous upscaling techniques, and empirical models are fed by experimental data. This study proposes a rigorous and practical way to connect pore-scale phenomena with Darcy-scale models, providing accurate macro-scale results. In order to carefully resolve nanoparticle transport at the pore scale, we develop a numerical solver based on the open-source C++ library OpenFOAM, able to account for shear-induced detachment of nanoparticles from the walls in addition to the usual isotherm attachment/detachment processes. We employ an integrated approach to generate random, user-oriented, and periodic porous structures with tunable porosity and connectivity. A periodic face-centered cubic geometry is employed for simulations over a broad range of Péclet and Damköhler numbers, and effective parameters valid at the macro scale are obtained by means of volume averaging in periodic cells, as well as breakthrough approximations to the asymptotic behaviour. Coupling these techniques leads to a comprehensive estimation of a first-order kinetic rate for nanoparticle retention and of the maximum retention capacity based on breakthrough and asymptotic curves. We apply this upscaling process to real cases found in the literature to estimate the penetration radius of a typical stimulation operation setting. The profiles are compared across different spatial discretizations in the radial direction and different dimensionless numbers to study their impact on travel distances.
The present workflow gives new insight into some aspects of pore-scale boundary conditions that are usually glossed over, such as the validity of some common mathematical expressions or the correctness of pore-scale results as representations of larger scales. Finally, this study proposes a mathematical relationship between pore-scale parameters and some important macro-scale dimensionless numbers that can be used to estimate field-scale effective parameters for nanoparticle retention in well stimulation and Oil & Gas industry applications.
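A first-order kinetic retention law with a maximum retention capacity, of the kind whose rate constants the study estimates, can be written as a simple ODE. The sketch below integrates it with explicit Euler; the rate constants and capacity are arbitrary illustrative values, not the paper's upscaled parameters:

```python
def retention_profile(c, s_max, ka, kd, dt, steps):
    """Explicit-Euler integration of a first-order kinetic retention law
    with a maximum retention capacity s_max (Langmuir-type blocking):
        ds/dt = ka * c * (1 - s / s_max) - kd * s
    c is the suspended nanoparticle concentration, assumed constant here;
    s is the retained concentration, starting from a clean medium."""
    s = 0.0
    for _ in range(steps):
        s += dt * (ka * c * (1.0 - s / s_max) - kd * s)
    return s

# Illustrative parameters; the steady state solves ka*c*(1 - s/s_max) = kd*s,
# i.e. s* = ka*c / (ka*c/s_max + kd).
s_final = retention_profile(c=1.0, s_max=0.5, ka=0.2, kd=0.05, dt=0.01, steps=20000)
```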
A Novel Nanoparticle Retention Model in Porous Media for IOR & EOR Applications
Summary: Recent developments based on nanotechnology have shown immense potential for application in EOR & IOR operations, which is supported by successful results at the lab and field scales. However, the poor understanding and the shortage of a robust framework for modelling nanoparticle transport and retention in porous media is a downside for its proper spread in the O&G industry. In this work, we propose a novel modelling framework that jointly represents mechanical and chemical mechanisms for nanoparticle retention and remobilisation in porous media. This model is formulated under a phenomenological approach that gives these processes a strong physical basis on the macroscale. Retention and remobilisation dynamics are modelled under a non-equilibrium approximation using an α-order kinetic law that depends on the equilibrium condition. The mathematical formulation was programmed using the open-source package Chebfun as a function of dimensionless variables, to make upscaling to higher scales more feasible. The impact of the dimensionless variables on nanoparticle transport and retention was studied by a sensitivity analysis, which allowed us to identify their effects; on this basis, some simplifications of the model are proposed according to the dimensionless variables. In order to validate this framework and its implementation, a set of lab tests was designed and carried out using a silica-nanoparticle-based nanofluid in sand packs. Concentration jumps were used to capture their effect on nanoparticle retention and remobilisation. The experimental data show good agreement with the simulation data under each operating condition and parameter fitting. Additionally, the model is capable of predicting the profile of nanoparticle concentration and its evolution in time. Changes in that profile can be predicted if operating conditions change, allowing their optimisation.
Finally, this modelling framework is implemented in the multi-physics, multi-component tool DFTmp Simulator to simulate specific EOR & IOR applications at the field scale. Using the fitting parameters obtained previously, an IOR application is simulated considering a multiphase system and other phenomena simultaneously.
Consistent Formulation and Error Statistics for Reservoir History Matching
By G. Evensen
Summary: It is common to formulate the history-matching problem using Bayes' theorem. From Bayes' theorem, the posterior probability density function of the uncertain static model parameters is proportional to the prior probability density of the parameters multiplied by the likelihood of the measurements. The static model parameters are random variables characterizing the reservoir model, while the data include, e.g., produced rates of oil, gas, and water from the wells. The reservoir prediction model is assumed to be perfect, and there are no errors besides those in the static parameters. The Bayesian formulation of this problem is given, e.g., in the recent paper by Evensen et al. (2019), and serves as the fundamental description of the history-matching problem.
However, this formulation is flawed. The historical rate data come from the real production of the reservoir, and they contain errors. The conditioning methods usually take these errors into account, but we neglect them when we force the simulation model with the observed rates during the historical integration. Thus, in the history-matching problem, the model prediction depends on the same data that we condition on, which prevents the direct use of Bayes' theorem.
Here, we formulate Bayes' theorem while taking into account the data dependency of the simulation model. In the new formulation, one must update both the poorly known model parameters and the errors in the rate data used to force the reservoir simulation model. Also, we specify time-correlated rate errors that are consistent with the use of allocation tables to generate the rate measurements. The “red” errors lead to a stronger uncertainty increase for the simulation model and also reduce the impact of the rate measurements in the conditioning process (where the measurement error-covariance matrix becomes non-diagonal).
We present results where the new subspace EnRML by Raanes et al. (2019) and Evensen et al. (2019) is used with a simple reservoir case. The result is a more consistent prediction model and a more realistic uncertainty estimate from the updated ensemble.
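Time-correlated ("red") rate errors of the kind described above are often represented by an AR(1) model, whose covariance is non-diagonal with entries decaying geometrically away from the diagonal. A minimal sketch (sigma and rho are illustrative values, not from the paper):

```python
import numpy as np

def red_error_covariance(n, sigma, rho):
    """Covariance of an AR(1) ('red') error sequence at n report times:
    C[i, j] = sigma**2 * rho**|i - j|.  rho = 0 recovers the usual diagonal
    (white) measurement-error covariance; 0 < rho < 1 gives the non-diagonal
    matrix that down-weights redundant, correlated rate measurements."""
    idx = np.arange(n)
    return sigma ** 2 * rho ** np.abs(idx[:, None] - idx[None, :])

C = red_error_covariance(5, sigma=10.0, rho=0.8)
```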
Free-Space Well Connection Method for Efficient Coupling of Wells and Grid Cells of Arbitrary Geometry
By R. Pecher
Summary: In reservoir simulation studies, one of the crucial factors affecting the accuracy and hence reliability of the results is the representation of well connections in the numerical reservoir grid. Although there have been numerous attempts to redefine the relationship between wellbore pressure, grid cell pressure and the corresponding fluid flowrate, the original Peaceman formulae are still by far the most prevalent simulation software option. The simplicity of their implementation overshadows their limited applicability to symmetric 2D scenarios of purely cylindrical radial flow, also built into the "3D projected Peaceman" formula.
One of the attempts to improve the inflow model was the Multi-Point Well Connection (MPWC) method (SPE 173302) which solves the local flow problem using the Boundary Element Method (BEM). In terms of its boundary conditions, pressures of the next-neighbour cells surrounding the well-connection cell appear in the final coupling formula, which makes the method difficult to implement and computationally less efficient.
A new method has been formulated to overcome the drawbacks of MPWC and still utilise the benefits of BEM. The proposed Free-Space Well Connection (FSWC) method converts the next-neighbour cells into infinitesimal layers of equivalent transmissibilities and applies free-space boundary conditions to their outer surfaces. All cell faces are adaptively refined into a required number of boundary elements and their pressures and fluxes are expressed by means of free-space Green’s functions representing well perforation sources/sinks. The method is applicable to cells and perforations of arbitrary geometry, including perforations outside the cell of interest, and to general cases of heterogeneous anisotropic rock permeability. Balancing all boundary pressures and fluxes yields the resulting well-connection transmissibility (or well index) and inter-cell transmissibility multipliers that emulate the flow asymmetry outside the well-connection cell.
Accuracy of the FSWC method has been verified against various analytical and numerical models. Even for the ideal case of a fully penetrating vertical well in the centre of a square reservoir, the FSWC-computed well index is closer to the analytical solution than that of Peaceman. Despite its broad applicability, superior accuracy and robustness, the method is fast and requires just a few CPU seconds to reach the desired precision. This is demonstrated by various examples with realistic well trajectories from full-field reservoir simulation runs.
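For readers who want the baseline that FSWC is compared against, the classical Peaceman well index for a vertical well in an isotropic rectangular cell has a simple closed form. This sketch is the textbook formula, not the FSWC method:

```python
import math

def peaceman_well_index(k, h, dx, dy, rw, skin=0.0):
    """Peaceman well index for a vertical well centered in an isotropic
    cell of dimensions dx * dy with perforated thickness h:
        r_eq = 0.14 * sqrt(dx**2 + dy**2)   (= ~0.198*dx for a square cell)
        WI   = 2*pi*k*h / (ln(r_eq / rw) + skin)
    k is permeability, rw the wellbore radius."""
    r_eq = 0.14 * math.sqrt(dx ** 2 + dy ** 2)
    return 2.0 * math.pi * k * h / (math.log(r_eq / rw) + skin)

# Illustrative use: shrinking the wellbore radius or adding positive skin
# both reduce the well index, as expected from the formula.
wi = peaceman_well_index(k=1.0, h=1.0, dx=100.0, dy=100.0, rw=0.1)
```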
Large-Scale Field Development Optimization Using a Two-Stage Strategy
Authors: Y. Nasir, O. Volkov and L.J. Durlofsky
Summary: The optimization of the locations of a large number of wells represents a challenging computational problem. This is because the number of optimization variables scales with the maximum number of wells considered, and some of these variables may be categorical if the determination of the number and types of wells is part of the optimization problem. In this work, we develop and test a two-stage strategy for large-scale field development optimization problems. In the first stage, wells are constrained to lie in repeated patterns, and the optimization variables define the pattern type and geometry (e.g., well spacing, orientation). This component of the optimization follows a previous procedure (Onwunalu and Durlofsky, 2011), though several important modifications, including optimization of the drilling sequence, are introduced. The solution obtained in the first stage is used as an initial guess for the second stage. In this stage we apply comprehensive field development optimization, where the well location, type, drill/do-not-drill decision, completion interval (for 3D models), and drilling time variables are determined for each well. Pattern geometry is no longer enforced in this stage. Specialized treatments (consistent with actual drilling practice) are introduced for cases where multiple geomodels, used to capture geological uncertainty, are considered.
The two-stage procedure is applied to 2D and 3D models corresponding to different geological scenarios. Both deterministic and geologically uncertain settings are considered. All optimizations are performed using a derivative-free particle swarm optimization – mesh adaptive direct search hybrid algorithm. Our most challenging example involves optimization over multiple realizations of the Olympus model, which we simulate using a GPU-based commercial flow simulator. In all cases, results using the two-stage procedure are compared to those from a standard single-stage approach. We achieve consistently better optimizer performance using the two-stage approach. For example, in one case, the optimum achieved after 17,500 flow simulations using the standard approach is found after only 4400 flow simulations using the two-stage approach. In another case, for the same computational effort, the NPV achieved using the two-stage approach exceeds that of the standard approach by 4.7%. These results suggest that this optimization strategy may indeed lead to improved results in practical problems.
Kogen: Combined Koval/Gentil Fractional Flow Model
Authors: D. Santos Oliveira, B. Horowitz and J.A.R. Tueros
Summary: We propose a proxy model to separate the total predicted liquid rate into oil and water production, which is essential for optimal waterflooding management. The proxy models studied here are widely used for parameter estimation in petroleum engineering because of their low computational cost and because they do not require prior knowledge of reservoir properties. The approach uses production history and the producer-based capacitance-resistance (CRMP) model, together with a combination of two fractional flow models, Koval (Cao, 2014) and Gentil (Gentil, 2005). We will henceforth call this combined model Kogen.
The combined fractional flow model can be formulated as a constrained nonlinear curve fit. The objective function to be minimized is a measure of the difference between calculated and observed water cut values (Wcut) or net present values (NPV). The constraint limits the difference between the water cuts of the Koval and Gentil models at the time of transition between the two. The problem can be solved using a gradient-based method, the sequential quadratic programming (SQP) algorithm. In this study, the gradient is computed by finite differences. The parameters of the CRMP model are the connectivity between wells, the time constant, and the productivity index. These parameters can be found using a nonlinear least squares (NLS) algorithm. With these parameters, it is possible to predict the liquid rate of the wells. The Koval and Gentil models are used to calculate the Wcut in each producer well over the concession period, which in turn allows the accumulated oil and water productions to be determined.
Two synthetic models, the Brush Canyon Outcrop and the Brugge model, are used to validate the proposed strategy. We then compare the solutions obtained with the three fractional flow models (Koval, Gentil, and Kogen) with results obtained directly from the simulator.
The proposed combined model, Kogen, consistently generated more accurate results. In addition, the CRMP/Kogen proxy model demonstrated its applicability, especially when the data available for model construction are limited, always producing satisfactory production forecasts at low computational cost.
History Matching of Time-Lapse Deep Electromagnetic Tomography with A Feature Oriented Ensemble-Based Approach
Authors K. Katterbauer, A. Marsala, M. Maucec, Y. Zhang and I. Hoteit. Summary: Carbonate reservoirs represent highly complex geological structures characterized by flow dynamics dominated by natural fractures. The complexity of the network of fractures as well as their interconnectedness may lead to unexpected flow patterns and uneven sweep efficiency. Determining the fracture distribution and reservoir properties of both matrix and fracture channels is quintessential for accurately tracking the fluid front movement in the reservoir, optimizing sweep efficiency, and maximizing hydrocarbon production.
A feature-oriented ensemble-based history matching workflow was introduced previously to enhance the characterization of petroleum reservoirs through the assimilation of time-lapse electromagnetic (EM) data in combination with other available measurements. Compared with seismic measurements, which provide effective information related to reservoir structure, deep EM measurements in the interwell volumes are more sensitive for distinguishing between hydrocarbon fluids and water, owing to the difference in electrical conductivity. The developed workflow calibrates model variables of interest utilizing the information on formation resistivity that is usually inferred through geophysical inversion of raw EM data. Archie’s law is typically used to describe the relation between formation porosity, fluid properties (e.g., water saturation and salt concentration) and formation resistivity. Instead of directly integrating the inverted EM resistivity data, which are usually high-dimensional and noisy in amplitude, the boundary or contour information extracted from the EM resistivity field is utilized through an image-oriented distance parameterization combined with an iterative ensemble smoother.
We showcase this framework on a realistic carbonate reservoir box model with a complex fracture channel network. Time-lapse cross-well EM data were assimilated to update fracture and matrix reservoir properties while preserving the heterogeneity in the properties. The framework exhibited strong performance in the history matching of the complex carbonate reservoir structure. In comparison with conventional ensemble-based history matching techniques, the developed approach led to significantly more accurate sweep efficiency maps while maintaining the heterogeneity in the parameters between the fractures and the matrix. Finally, uncertainty in the saturation maps could be significantly reduced with the assistance of deep EM reservoir tomography.
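Archie's law, used above to relate resistivity to porosity and saturation, can be inverted for water saturation. A minimal sketch with illustrative constants (the cementation and saturation exponents a, m, n and the resistivity values below are not from the paper):

```python
def archie_sw(rt, rw, phi, a=1.0, m=2.0, n=2.0):
    """Invert Archie's law Rt = a*Rw / (phi^m * Sw^n) for water saturation Sw."""
    return (a * rw / (phi ** m * rt)) ** (1.0 / n)

# illustrative values: brine resistivity 0.05 ohm-m, porosity 20%,
# inverted formation resistivity 10 ohm-m
sw = archie_sw(rt=10.0, rw=0.05, phi=0.20)
```

In the workflow above, this kind of relation links the inverted EM resistivity field to the saturation field being history matched.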
Optimizing Low Salinity Waterflooding with Controlled Numerical Influence of Physical Mixing Considering Uncertainty
Summary: Controlled/Low Salinity Waterflooding (LSWF) is an augmented waterflood with well-reported improved displacement efficiency compared with conventional waterfloods. Physical mixing, or dispersion, of the injected low-salinity (LS) brine with the formation high-salinity (HS) brine substantially reduces the low-salinity effect. Numerical dispersion often misrepresents this mixing in conventional LSWF simulations, causing errors in the results. Uncertainty in the reservoir description further calls the evaluated performance into question. Existing studies have suggested optimal amounts of injected LS brine to sustain its displacement stability during inter-well flows with physical mixing, but with poor or no consideration of uncertainty. This work focuses on optimizing the injected LS-brine amount considering reported flow uncertainties while ensuring adequate correction of the erroneous influence of numerical dispersion on physical mixing. We investigate the impacts of flow uncertainties on the optimal LS slug size. The sensitivity of the optimal slug size to heterogeneity is examined under uncertainty. We evaluate how the interaction between physical mixing and geological heterogeneity influences slug integrity and performance.
We propose an improved ‘effective salinities’ concept that evaluates appropriate effective salinities to characterize the desired representative physical mixing while suppressing the large numerical dispersion effects usually encountered in coarse-grid LSWF simulations. This ensures reliable representation of physical dispersion in such grids. We consider different models with characterized levels of heterogeneity and the essential variables that control the impact of mixing on LSWF performance, based mainly on reported data. New indicators are defined to evaluate the displacement stability and performance of the injected LS brine, thereby relating its technical and economic performance. Slug performance is evaluated at different injection times to examine the sensitivity of recovery to the LS injection start time. Performance uncertainty is assessed through a four-stage, computationally effective approach: parameter-space sampling to design representative experiments; proxy modelling; proxy validation and verification; and Monte Carlo simulation to provide a wider representative sample of the parameter space.
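The four-stage uncertainty workflow (sampling, proxy modelling, validation, Monte Carlo) can be illustrated on a toy two-parameter response. The "simulator" below is a hypothetical quadratic stand-in, not the LSWF model, and the parameter names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# stage 1: sample the parameter space of a toy two-parameter "simulator"
def simulator(x):                         # hypothetical recovery-factor response
    slug, heterogeneity = x
    return 0.4 + 0.2 * slug - 0.1 * slug**2 - 0.05 * heterogeneity

X = rng.uniform(0.0, 1.0, (50, 2))
y = np.array([simulator(x) for x in X])

# stage 2: fit a quadratic polynomial proxy by least squares
def features(X):
    return np.column_stack([np.ones(len(X)), X, X**2, X[:, :1] * X[:, 1:]])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# stage 3: validate the proxy on held-out samples
Xv = rng.uniform(0.0, 1.0, (20, 2))
err = np.abs(features(Xv) @ coef - np.array([simulator(x) for x in Xv])).max()

# stage 4: Monte Carlo over the cheap proxy for performance percentiles
Xmc = rng.uniform(0.0, 1.0, (10_000, 2))
p10, p50, p90 = np.percentile(features(Xmc) @ coef, [10, 50, 90])
```

Because the toy response is itself quadratic, the proxy here is exact; with a real simulator, the validation stage is what establishes whether the proxy is trustworthy for the Monte Carlo stage.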
We can now reliably represent physical dispersion in LSWF simulations with current commercial reservoir simulators. Recovery is observed to be relatively insensitive to the LS injection start time until breakthrough of the pre-injected HS brine. This is important for LS injection designs, as injection need not commence immediately in secondary mode. A potentially favourable influence of the spatial distribution of heterogeneity is seen, with links to transverse dispersion. The optimal sizes evaluated in existing studies are observed to be, at best, only suitable as displacement stability thresholds for slug injection under uncertainty. We find an optimal slug size of at least 1.0 HCPV to reduce risk under uncertainty.
Fast Robust Optimization Using Mean Field Bias Correction
Authors L. Wang and D.S. Oliver. Summary: Ensemble methods are remarkably powerful for quantifying geological uncertainty. However, robust optimization of a cost function for a problem in which uncertainty is characterized by a large ensemble can be computationally demanding. In a straightforward approach, the computation of the expected net present value (NPV) requires many expensive simulations. Several techniques (e.g., model selection, coarsening) have been proposed to reduce the cost but generally lead to a less accurate optimization. To reduce the amount of computation without sacrificing accuracy, we developed a fast and effective approach for computing the expected NPV using only the reservoir mean model with a bias correction factor. At each iteration of the optimization procedure, we require only one additional simulation of the mean model with a different set of controls to obtain an initial approximate value, whose bias is then corrected with a multiplicative correction factor. Information from individual simulations with distinct controls and model realizations can be used to estimate the correction factor for different controls. The effectiveness of various bias-corrected methods is illustrated by application to the drilling-order problem in the synthetic REEK Field model. Compared with the average NPV, the results show that the average error of the expected NPV estimated from the mean model is reduced from -9% to 0.56% by estimating the bias correction factor. Distance-based localization with an appropriate taper length can further improve the accuracy of the estimate. By adding a regularization term with a tuning parameter associated with the variance of the correction factor, the sensitivity of the estimates to the taper length is reduced, such that the regularized estimate is potentially more accurate over a wider range of taper lengths.
In previous work, we proposed a nonparametric online-learning methodology (learned heuristic search) to efficiently compute a sequence of drilling wells that is optimal or near-optimal. In this work, we apply the learned heuristic search (LHS) to the reservoir mean model with bias correction to optimize the drilling sequence and show that it leads to the same solution as the LHS with the average NPV. Moreover, we investigate the possibility of optimizing the first few wells without finding an entire drilling sequence. Our results show that LHS can optimize complete drilling sequences or only the first few wells at a reduced cost.
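The multiplicative bias correction can be illustrated on a toy objective. The quadratic "NPV" below is a hypothetical stand-in for the flow simulator; the point is only that one mean-model run plus a correction factor approximates the ensemble average well.

```python
import numpy as np

rng = np.random.default_rng(2)

def npv(u, m):
    """Toy NPV of control u for model realization m (stand-in for a flow simulation)."""
    return 100.0 - (u - m) ** 2

ens = rng.normal(1.0, 0.4, 200)            # ensemble of model realizations
m_bar = ens.mean()                         # mean model

u0 = 0.5                                   # controls at the current iterate
r = npv(u0, ens).mean() / npv(u0, m_bar)   # multiplicative bias correction factor

# cheap surrogate for the expected NPV at a new set of controls:
# one mean-model "simulation" scaled by the correction factor
u1 = 0.8
est = r * npv(u1, m_bar)
true = npv(u1, ens).mean()                 # the expensive ensemble average
```

For this quadratic toy, the bias is essentially the ensemble variance, so a factor estimated at one control point transfers well to nearby controls, which is the behavior the paper exploits.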
Fast Time-Stepping Scheme for Streamline-Based Transport Simulations
Summary: In this work, we propose a new time-stepping method for the simulation of transport in two-phase flows. Our method relies on constant initial saturation conditions and builds on a streamline-based discretization. In sampling methods such as multilevel Monte Carlo, many probable scenarios of an uncertain permeability field have to be simulated with inexpensive models in order to quantify the uncertainty of phase saturations. However, since the statistical error converges slowly, large ensembles are needed and the computational cost per sample must therefore be small. We illustrate the performance of our new inexpensive, yet accurate time-stepping scheme on Buckley-Leverett type problems involving multi-Gaussian as well as more realistic channelized permeability fields.
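A baseline for such Buckley-Leverett transport along a single streamline is a first-order explicit upwind solve; the authors' faster scheme is not reproduced here. This sketch uses quadratic relative permeabilities and illustrative grid and mobility parameters.

```python
import numpy as np

def frac_flow(s, mobility_ratio=2.0):
    """Buckley-Leverett fractional flow with quadratic relative permeabilities."""
    return s**2 / (s**2 + mobility_ratio * (1.0 - s) ** 2)

def transport_1d(n=200, steps=400, cfl=0.4):
    """Explicit first-order upwind solve of s_t + f(s)_x = 0 on a unit streamline."""
    s = np.zeros(n)            # initially no injected phase
    dx = 1.0 / n
    dt = cfl * dx              # max |df/ds| is about 2 here, so cfl=0.4 is stable
    for _ in range(steps):
        f = frac_flow(s)
        s[1:] -= dt / dx * (f[1:] - f[:-1])   # upwind flux difference
        s[0] = 1.0             # constant injection condition at the inlet
    return s

s = transport_1d()             # rarefaction behind a sharp front, zeros ahead of it
```

Each Monte Carlo sample would repeat such a solve on a different permeability realization, which is why cheap per-sample time stepping matters.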
Refined Ensemble-Based Method for Waterflooding Problem with State Constraints
Authors J. Tueros and B. Horowitz. Summary: In reservoir management, optimization techniques are used to improve production and support new field development decisions. The waterflooding problem consists of determining optimal well control trajectories: rate, bottom hole pressure (BHP), valve openings, or a combination of them. The problem can be expressed as a typical nonlinear optimization problem, with net present value (NPV) or cumulative oil production as the objective function.
Linear constraints involve the controls themselves, but nonlinear constraints involving state variables may also be imposed. For example, producer and injector wells controlled by BHP may be subject to flow constraints, or vice versa. In optimization, constraints are imposed and respected at each control cycle, but not necessarily within a control cycle, owing to rate discontinuities caused by control changes. The alternative of imposing constraints at each time step of the simulation incurs a high computational cost, making the optimization process time-consuming. We propose correction points, based on a time series within the control cycle, at which state constraints are imposed, thus reducing the computational effort.
The algorithm of choice to solve the optimization problem is sequential quadratic programming (SQP). A refined ensemble-based method is used to approximate the gradients of the objective function and constraints. The sensitivity matrix is obtained as the product of the pseudo-inverse of the covariance matrix and the cross-covariance matrix, and the sum of its columns is the approximate gradient vector. The proposed refinements are based on the connectivity between injector/producer wells and competitiveness coefficients between producers. The strategy aims to reduce spurious correlations in the sensitivity matrix when using small ensembles. Two synthetic models, Egg and Brugge, are used to validate the proposed strategy. Results are shown in box plots generated by performing ten optimization runs. We observe that the strategy of imposing correction points helps to enforce state constraints at the different steps of the simulation while reducing the computational cost of the optimization process.
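The ensemble gradient approximation described above (a pseudo-inverse of the control covariance applied to the control/objective cross-covariance) can be sketched on a toy objective. The quadratic NPV stand-in, the dimensions and the perturbation size are all hypothetical; the paper's connectivity-based refinements are not included.

```python
import numpy as np

rng = np.random.default_rng(3)

def npv(u):
    """Toy objective standing in for the reservoir simulator's NPV."""
    return float(-np.sum((u - 0.6) ** 2))

u0 = np.zeros(5)                               # current well controls
U = u0 + 0.1 * rng.standard_normal((50, 5))    # perturbed control ensemble
J = np.array([npv(u) for u in U])              # one "simulation" per member

du = U - U.mean(axis=0)
dj = J - J.mean()
C_uu = du.T @ du / (len(U) - 1)                # control covariance matrix
C_uj = du.T @ dj / (len(U) - 1)                # control/objective cross-covariance

g = np.linalg.pinv(C_uu) @ C_uj                # ensemble-approximated gradient
```

A small step along the approximate gradient should then improve the objective, which is what the SQP iterations rely on.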
Selecting Representative Models for Ensemble-Based Production Optimization in Carbonate Reservoirs with Intelligent Wells and WAG Injection
Authors S.M.G. Santos, A.A.S. Santos and D.J. Schiozer. Summary: Production optimization under uncertainty is complex and computationally demanding, a particularly challenging process for carbonate reservoirs subject to WAG injection, represented by large ensembles with high simulation runtimes. Optimization search spaces are often large, as reservoir models are complex and the number of decision variables is high. The computational cost of ensemble-based production optimization can be decreased by reducing the size of the ensemble with representative models (RM). The validity of this method requires that the RM maintain representativeness throughout the optimization process, in which the production strategy changes at each evaluation. Many RM-selection techniques use production forecasts of the ensemble for an initial production strategy, which raises questions about the robustness of the RM. This work investigates approaches to ensure the consistency of RM in ensemble-based long-term optimization. We use a metaheuristic optimization algorithm that finds sets of RM that represent the ensemble in the probability distribution of uncertain attributes and in the variability of production, injection, and economic indicators (Meira et al., 2020). Our case study is a benchmark light-oil fractured carbonate with features of Brazilian pre-salt reservoirs and many reservoir and operational uncertainties. We obtained production, injection and economic indicators using different approaches to provide valuable insight for RM selection. We inferred RM fitness for production optimization from their adequacy for uncertainty quantification under varying production strategies. Despite the effects of changing decision variables on RM representativeness, our results suggest that RM can be used for ensemble-based production optimization, with limitations related to the estimation of the probabilistic objective function due to mismatches in the probabilities of occurrence.
Using production indicators obtained from a base production strategy decreased RM representativeness when compared to RM selection based on a more robust evaluation of reservoir performance using a wide-covering well pattern and no restrictions from production facilities. Finally, our results suggest valid RM selection using production forecasts for intermediate dates of the simulation period, an important contribution for ensembles with very high simulation runtimes. We also provide a broad theoretical background on the uncertain reservoir system and on approaches to obtain reduced ensembles and their applications.
Novel Ensemble Data Assimilation Algorithms Derived from A Class of Generalized Cost Functions
By X. Luo. Summary: Ensemble data assimilation algorithms are among the state-of-the-art history matching methods. From an optimization-theoretic point of view, these algorithms can be derived by solving certain stochastic nonlinear least-squares problems.
In a broader picture, history matching is essentially an inverse problem, which is often nonlinear and ill-posed and may not possess a unique solution. To mitigate these issues, domain knowledge and prior experience are often incorporated into a suitable cost function within the corresponding optimization problem. This helps to constrain the solution path and promote certain desired properties (e.g., sparsity, smoothness) in the solution. Whereas inverse problem theory offers a rich class of inversion algorithms arising from various choices of cost function, few ensemble data assimilation algorithms are implemented in practice in a form beyond nonlinear least-squares.
This work aims to narrow this gap. Specifically, we consider a class of generalized cost functions and derive a unified formula for constructing a corresponding class of novel ensemble data assimilation algorithms, which promote certain user-chosen properties that may not be achievable with conventional ensemble-based algorithms.
As an example, we consider a channelized reservoir characterization problem and formulate history matching as minimum-average-cost problems with two new cost functions. In one of them, our objective is to restrict the changes in total variation of the reservoir models during model updates; in the other, our goal is to curb modifications of the histograms of the reservoir models. While these two cost functions may appear unconventional in the context of ensemble data assimilation, the corresponding assimilation algorithms derived from our proposed formula are very similar to the conventional iterative ensemble smoother (IES). As such, our previous experience with the IES transfers smoothly to the implementation and application of these new algorithms. In addition, the experimental results indicate that using either of these two new algorithms leads to better history matching performance than the original IES.
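The first of the two cost terms, restricting changes in total variation between model updates, can be sketched directly. This is only the cost evaluation, not the assimilation algorithm; the channel model below is a hypothetical illustration.

```python
import numpy as np

def total_variation(m):
    """Anisotropic total variation of a 2D model field: sum of absolute jumps."""
    return np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum()

def tv_change_cost(m_new, m_old, weight=1.0):
    """Penalize changes in total variation during a model update."""
    return weight * abs(total_variation(m_new) - total_variation(m_old))

channel = np.zeros((10, 10)); channel[4:6, :] = 1.0   # a simple channel model
smooth = np.full((10, 10), channel.mean())            # an over-smoothed update
cost = tv_change_cost(smooth, channel)                # large: sharp edges were lost
```

An over-smoothed update wipes out the channel edges, so its total variation collapses and the cost is large, which is exactly the behavior such a term is meant to discourage.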
Application of Dynamic Parametrization Algorithm for Non-Intrusive History Matching Approaches
Authors A. Mukhin, M. Elizarev, N. Voskresenskiy and A. Khlyupin. Summary: History matching generates a detailed reservoir description that matches production data and can be used for forecasting and uncertainty estimation. Due to the ill-posedness of the history matching problem, parametrization of the high-dimensional fields in the model (such as permeability and porosity) is widely applied. The common approach of existing parametrization algorithms is to generate a dataset of possible field realizations (prior models) and then convert this dataset to an orthogonal basis using PCA-based techniques. Model reduction is achieved by truncating the majority of the basis components according to an energy criterion.
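The classic PCA parameterization and energy-criterion truncation described above can be sketched as follows. A random Gaussian prior stands in for geological realizations, and the 90% energy threshold is an illustrative choice; the paper's AS-PCA basis adaptation is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

# prior dataset: 100 realizations of a 400-cell property field (stand-in data)
prior = rng.standard_normal((100, 400))

mean = prior.mean(axis=0)
_, s, vt = np.linalg.svd(prior - mean, full_matrices=False)

# truncate by an energy criterion: keep components capturing 90% of the variance
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.90)) + 1
basis = vt[:k]                                # reduced orthogonal basis

# a low-dimensional latent vector xi parameterizes a full field
xi = rng.standard_normal(k)
field = mean + xi @ (s[:k, None] / np.sqrt(len(prior) - 1) * basis)
```

History matching then adjusts the short latent vector instead of every grid cell, which is the model reduction the abstract refers to.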
Due to high uncertainty and the low quality of real data, important patterns may be under-represented in the prior dataset, and basis components carrying such structures may be truncated. We present a novel method in which the omitted components are determined not only by the energy criterion but also by objective function sensitivity. In our Adaptive Strategies PCA (AS-PCA) technique, we developed an advanced definition of the optimal basis and derived an efficient algorithm for basis recalculation using computational approaches from quantum mechanics. The algorithm requires the gradient of the objective function with respect to the latent variables (only at the point of convergence). The new basis is then obtained by a few linear transformations with negligible computational cost, and the optimization continues. The method was tested on history matching of 2D reservoirs and demonstrated improvements in misfit value and field consistency in comparison with classic PCA parametrization.
However, the applicability of gradient-based methods is constrained by local convergence and high implementation effort (i.e., the adjoint technique). To overcome these constraints, we extend the adaptive strategies to non-intrusive history matching approaches such as stochastic optimization and ensemble-based algorithms. Numerical gradient approximation is not well suited for AS-PCA since it is inexact and takes additional simulation time. We therefore developed a regression-based algorithm for gradient estimation using a set of field realizations, represented by an ensemble in ensemble-based methods or a population in evolution-based algorithms. In this study, we present the theory and examples of applying adaptive strategies to history matching using PSO and EnKF. Results of history matching with inconsistent prior datasets for 2D Gaussian fields and applications to uncertainty quantification are provided.
Algebraic Wavefront Parallelization for ILU(0) Smoothing in Reservoir Simulation
By S. Gries. Summary: Incomplete factorization methods are an important part of the linear solver strategy in reservoir simulation, from black-oil to coupled geomechanics models. It has been shown earlier that the inherited pressure-decoupling effect of (block-)ILU(0) plays an important role in the convergence of efficient linear solvers like System-AMG or CPR.
With these specific linear systems, this decoupling is a by-product of the row-wise ILU-elimination. However, this also makes ILU sequential in nature, which is a problem on parallel compute hardware.
The parallelization of ILU methods has been, and remains, a field of active research. Various approaches are reported in the literature. All exploit inherent parallelism in the sparse systems to be solved, either by reordering the system accordingly or by setting synchronization points induced by the underlying structure (so-called wavefronts). All of these approaches have advantages and disadvantages regarding parallel efficiency and numerical robustness; which approach is best suited depends on the application.
Re-ordering approaches affect the elimination order. Hence, they can have significant robustness impacts for AMG in reservoir simulation.
Wavefront parallelizations guarantee equivalence to the sequential method. However, they either require the parallelization structure to be induced by the geometry, which may be challenging in unstructured cases and voids a main advantage of AMG, or they perform a row-wise data-dependency scan, with a resulting amount of blocking communication.
In this paper, we present a wavefront parallelization for (block-)ILU(0) that performs its dependency scan on groups of rows rather than individual rows. The resulting wavefront setup works analogously to aggregative AMG setups, just with additional constraints. The outcome is a data-dependency graph in which one can control the compromise between the frequency of data exchange and wait time. Equivalence to the sequential ILU(0) algorithm is still guaranteed.
While this approach cannot compete with the parallelizability of methods like Jacobi relaxation, it can exploit the inherent parallelism of ILU(0) for both OpenMP and MPI while maintaining the numerical properties of the original algorithm. We demonstrate both on test problems as well as on problems from industrial reservoir simulations.
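The wavefront (level-scheduling) idea underlying such parallelizations can be sketched for the forward sweep. This is the classic row-wise construction, not the grouped variant proposed in the paper; the example dependency pattern is a hypothetical 2x3 structured grid.

```python
def wavefront_levels(n, lower_deps):
    """Level scheduling for the forward sweep of ILU(0).

    lower_deps[i] lists the rows j < i with a nonzero L(i, j); all rows in the
    same level depend only on earlier levels and can be eliminated in parallel."""
    level = [0] * n
    for i in range(n):
        level[i] = 1 + max((level[j] for j in lower_deps[i]), default=0)
    levels = {}
    for i, l in enumerate(level):
        levels.setdefault(l, []).append(i)
    return [levels[l] for l in sorted(levels)]

# rows of a 2x3 structured grid in lexicographic order: each row depends on its
# west and south neighbours, giving the classic diagonal wavefronts
deps = {0: [], 1: [0], 2: [1], 3: [0], 4: [1, 3], 5: [2, 4]}
fronts = wavefront_levels(6, deps)   # diagonal fronts: [0], [1, 3], [2, 4], [5]
```

Grouping rows (as the paper proposes) trades some of this fine-grained parallelism for fewer, larger synchronization points, reducing blocking communication.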
Extended Finite Volume Method (XFVM) for Flow Induced Tensile Failure in Fractured Reservoirs
Authors A.A. Habibabadi, R. Deb and P. Jenny. Summary: Tensile opening of pre-existing fractures and tensile failure around the fracture tips triggered by fluid injection can lead to a permeability increase in a reservoir. Such hydraulically driven fracturing technologies are used in petroleum engineering to achieve enhanced extraction of oil and gas. However, such processes can also lead to increased seismic activity around the reservoir (Ellsworth, 2013). Numerical modelling of tensile opening and crack propagation, along with shear slip modelling of pre-existing fractures, is important to assess the advantages and risks of hydro-fracturing.
The main criteria for numerical models of coupled flow and mechanics in fractured reservoirs are accuracy and computational efficiency. For flow, descriptions based on embedded discrete fractures in matrix domains have proved successful in this regard (Hajibeygi et al., 2011; Lee et al., 2001). In this context, flow-induced shear failure and tensile opening can be modelled using an extended finite element method (XFEM) (Borja, 2008) or the recently introduced extended finite volume method (XFVM) (Deb and Jenny, 2017, 2020). The advantage of XFVM lies in the choice of only one degree of freedom per fracture segment for the displacement (Deb and Jenny, 2017) and in the use of the same conservative method for both flow and mechanics.
The current paper deals with an extension of this XFVM framework such that crack tip propagation can also be simulated. The cohesive stress approach of Wells and Sluys (2001) for crack tip propagation was modified and integrated into XFVM. Using the coarse-scale stress field obtained by XFVM at the fracture tips, a fine-scale interpolation is generated. This fine-scale solution is used to obtain the stress intensity factors (SIF) by an overdeterministic method. The SIF calculation is used to evaluate the crack growth criterion and the propagation direction. An example test case of tensile failure at the crack tips of a single fracture is studied.
Additive Schwarz Preconditioned Exact Newton Method as a Nonlinear Preconditioner for Multiphase Porous Media Flow
Authors Ø. Klemetsdal, A. Moncorgé, O. Moyner and K. Lie. Summary: Domain decomposition methods are widely used as preconditioners for Krylov methods applied to linear problems. Recently, there has been growing interest in nonlinear preconditioning methods for Newton’s method applied to porous media flow. In this work, we study a spatial Additive Schwarz Preconditioned Exact Newton (ASPEN) method as a nonlinear preconditioner for Newton’s method with a fully implicit scheme in the context of immiscible and compositional multiphase flow. We first describe the method and how it can be implemented in a reservoir simulation package. We then study the nonlinearities addressed by the different components of the method. We observe that the local fully implicit updates handle the local nonlinearities well and that the global ASPEN updates handle the long-range interactions well. The combination of the two updates leads to a very competitive algorithm. We illustrate the behavior of the algorithm on conceptual one- and two-dimensional cases, as well as realistic three-dimensional models. We perform a complexity analysis and demonstrate that Newton’s method with a fully implicit scheme preconditioned by ASPEN is a very robust and scalable alternative to the well-established Newton’s method for fully implicit schemes.
Analytical Pore Network Approach (APNA) for Rapid Estimation of Capillary Pressure Behaviour in Rock Samples
Authors H. Rabbani, D. Guerillot and T. Seers. Summary: Capillary pressure measurements are an integral part of special core analysis (SCAL), on which the oil and gas industry greatly relies. Reservoir engineers use these macroscopic properties in simulators to determine the amount of hydrocarbons as well as the flowing capacity of fluids in the reservoir. Despite their importance, conventional laboratory techniques for measuring capillary pressure curves of core samples are expensive, tedious, time-consuming and prone to error. Motivated by the importance of capillary pressure measurements in the oil and gas industry, we propose a novel methodology called the Analytical Pore Network Approach (APNA) that can provide a reliable forecast of capillary pressure using pore-scale 3D images of reservoir rocks. The proposed approach provides oil and gas companies with inexpensive, fast and accurate estimation of capillary pressure data, reduces the number of required laboratory experiments, and facilitates the estimation of such properties from uncored sections of the reservoir (i.e., using drill cuttings).
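At the core of any pore-network estimate of capillary pressure is the Young-Laplace relation between throat radius and entry pressure. A minimal drainage-curve sketch with hypothetical throat radii follows; it illustrates the physical relation only, not the APNA workflow itself.

```python
import numpy as np

def drainage_curve(radii_m, ift=0.072, theta_deg=0.0):
    """Primary drainage Pc curve from a pore-throat radius distribution.

    Young-Laplace: Pc = 2*sigma*cos(theta)/r. The non-wetting phase invades the
    largest throats first, so sort radii in descending order; assuming equal
    throat volumes, each invaded throat lowers wetting saturation by 1/N."""
    r = np.sort(np.asarray(radii_m))[::-1]
    pc = 2.0 * ift * np.cos(np.radians(theta_deg)) / r   # entry pressure, Pa
    sw = 1.0 - np.arange(1, len(r) + 1) / len(r)         # wetting saturation
    return pc, sw

# hypothetical throat radii between 0.1 and 10 micrometres (water/air IFT, theta = 0)
radii = np.random.default_rng(5).uniform(1e-7, 1e-5, 500)
pc, sw = drainage_curve(radii)
```

In an image-based workflow, the radius distribution would come from the segmented 3D pore space rather than from a random draw as here.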
Analytical Production Optimization with Modified NPV: Application to 2D Gas-Cone Reservoirs
Authors A. Bizzi, E. Fortaleza and F.P. Munerato. Summary: This article investigates the analytical and computational optimization of reservoirs in alternate cost and coordinate spaces by means of a modified NPV function (MNPV). We show that, for reduced systems, undertaking the analysis of reservoirs in these abstract spaces may lead to exact analytical expressions unattainable under traditional analysis. This may then be used to speed up the optimization of large-scale reservoirs.
We demonstrate the concept under a restricted scope, focusing on a simplified case: maximizing the transient yield of an idealized reservoir consisting of a single production section of a horizontal well in the presence of a gas cone. A set of further simplifying assumptions is then applied: the only depletion mechanism present is coning, and the very long 3D reservoir can be considered a composition of 2D models.
Under these restrictions, we present an analytical proof that this modified NPV represents a convex function, for which the local optimization in abstract space generates the optimal global production strategy.
This, coupled with an analysis of the monotonicity of the reservoir dynamics, may be used to demonstrate algebraically the existence of diminishing returns from increases in production rate, finally arriving at the most cost-effective production strategy for the given system. This is then validated by a series of numerical simulations of the proposed reservoir.
Finally, we discuss similar concepts that may be used for the optimization of more realistic systems, enabling the use of analytical tools in the speeding-up of full-scale reservoir analysis.
The paper’s contributions can be stated in three points:
First, it presents new information on an emerging approach to the optimization of reservoirs. While most tools focus on optimizing the computation of reservoir-related processes, we show that a new approach to the NPV metric itself may lead to promising new results.
Second, it presents a novel, closed-form analytical solution for the optimal production rate of a reservoir with a simplified 2D gas cone.
Finally, it presents an alternate perspective on the role of analytical results in the era of computational reservoir optimization, by proposing a hybrid approach.
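The diminishing-returns behavior can be illustrated with a toy discounted NPV for producing fixed reserves at a constant rate. This is not the paper's MNPV or its gas-cone model; all values and the closed form below are hypothetical illustrations of why marginal NPV shrinks as rate grows.

```python
import numpy as np

def npv(rate, reserves=10.0, discount=0.10):
    """Toy discounted NPV of producing fixed reserves at a constant rate:
    field life = reserves/rate, so NPV = rate * (1 - exp(-d * reserves / rate)) / d.
    As rate grows, NPV approaches the undiscounted reserves, so gains flatten."""
    life = reserves / rate                    # higher rate shortens field life
    return rate * (1.0 - np.exp(-discount * life)) / discount

rates = np.array([1.0, 2.0, 4.0, 8.0])
values = [npv(q) for q in rates]
gains = np.diff(values)                       # marginal NPV of each rate doubling
```

Each doubling of rate buys a smaller NPV increment, and NPV is bounded by the undiscounted reserves, the qualitative pattern the paper derives rigorously for its gas-cone system.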
-
-
-
Albite-Anorthite Synergistic Effect on the Performance of Nanofluid Enhanced Oil Recovery
Authors R. Nguele, E.O. Ansah, K. Nchimi Nono and K. Sasaki
Summary: Large volumes of oil sit within our reach primarily because of strong capillary forces, which arise from the attraction between the polar ends of the oil and the surface charges of the bearing matrix. Altering these interactions, which occur within tiny pore throats, or better still, unveiling the extent to which geochemistry impacts them, can invariably improve production. Therefore, we evaluated the performance of a water-based nanofluid for oil production with respect to the geochemistry.
An alumina-silica nanocomposite (Al/Si-NP), synthesized by a plasma method, was used as the primary material. It was functionalized by dispersing 0.25 wt.% of lyophilized NP into formation water (TDS = 4301 ppm) under carbon dioxide bubbling. The nanofluid (NF) obtained therefrom was then used for coreflooding tests, which aimed at displacing a dead heavy oil (ρ = 0.854 g/cm3) from a waterflooded Berea sandstone. The ionic composition of the effluent fluids was tracked and further used for modeling the geochemical interactions. The model considered mineral precipitation and dissolution as well as ion adsorption and desorption. Model calculations were performed using the transport algorithm in PHREEQC.
The experimental results from the coreflood tests showed that Al/Si-NP, injected into a waterflooded sandstone, could displace up to 11% of the trapped oil, which was 10 times more than if no nanofluid was injected. Ionic tracking further revealed dissolution of albite along with anorthite weathering; both mechanisms contributed to the log-jamming of Al/Si-NP. Furthermore, the geochemical modeling revealed a weak and reversible cation exchange between sodium (Na+) and calcium (Ca2+). We also found that the pH of the preflush should be mildly basic for controllable anorthite and albite precipitation and silica cementation, from which Al/Si-NP aggregation derives. These points were further verified experimentally when the ionic composition was altered according to the geochemical modeling, leading to the conclusion that albite, anorthite and silicate precipitation promotes high recovery, due to high Na+ and K+ ion concentrations. Silica cementation was shown to increase the wettability of the formation rock.
-
-
-
Multiscale Matrix-Fracture Transfer Functions for Naturally Fractured Reservoirs Using an Analytical Discrete Fracture Model
Authors R. Hazlett and R. Younis
Summary: Fracture-matrix transfer functions have long been recognized as tools in modeling naturally fractured reservoirs. If a significant degree of fracturing is present, models involving isolated matrix blocks and matrix block distributions become relevant. However, this methodology captures only the largest fracture sets and treats the matrix blocks as homogeneous, though possibly anisotropic. Herein, we produce the semi-analytic transient baseline solution for depletion for such models.
More realistic multiscale numerical models try to capture below-grid-scale information and pass it to the larger-scale system at some numerical cost. Instead, for below-block-scale information, we take the semi-analytic solution to the diffusivity equation of Hazlett and Babu (2014, 2018) for transient inflow performance of wells of arbitrary trajectory, originally developed for Neumann boundary conditions, and recast it for Dirichlet boundaries. As such, it represents the analytical solution for a matrix block with an arbitrarily complex gathering system surrounded by a constant-pressure sink, which we take to be the primary fracture system. Instead of using a constant-rate internal boundary condition for the gathering system, we segment the well or fracture and force the internal complex fracture feature to be a constant-pressure element with net zero flux. In doing so, we create a representative matrix block with any degree of infinite-conductivity subscale fractures that impact the overall drainage into the surrounding fracture system.
We quantify drainage from each face, capturing the anisotropic effect of internal fractures. We vary the internal fracture structure and delineate sensitivity to fracture spacing and extent of fracturing. This approach also generates the complete transient solution, enabling new well test interpretation for such systems in the characterization of block size distributions or the extent of below-block-scale fracturing.
The initial model for fully-penetrating fractures can be further generalized with the 2D distributed source model of Bao et al. (2017) for partially penetrating fractures of arbitrary inclination, as represented by floating, intersecting parallelograms embedded in the matrix block with either infinite or finite conductivity.
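The baseline transient depletion behaviour described above can be illustrated with the classical one-dimensional analogue: a homogeneous slab drained by constant-pressure (Dirichlet) fracture faces. The sketch below is an illustrative simplification under our own normalization, not the paper's 3D semi-analytic solution, and the function name is hypothetical.

```python
import math

def fraction_remaining(tau, n_terms=200):
    """Fraction of the initial pressure disturbance left in a 1-D slab with
    Dirichlet (constant-pressure fracture) faces at dimensionless time tau.
    Classical Fourier-series solution of the diffusivity equation."""
    total = 0.0
    for n in range(n_terms):
        k = 2 * n + 1                       # only odd modes survive
        total += (8.0 / (k * k * math.pi * math.pi)) * \
                 math.exp(-k * k * math.pi * math.pi * tau)
    return total
```

Plotting this against tau gives the transfer-function baseline to which the effect of internal infinite-conductivity fractures (faster drainage) can be compared.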
-
-
-
Experimental Evaluation of Sealing Effect of Nano Calcium Carbonate Blocking Agent on Shale Microfracture
Summary: Improving the plugging ability of drilling fluid is an effective way to address wellbore instability in complex formations. Shale formations exhibit low porosity, low permeability, and well-developed micro- to nano-scale fractures. Traditional large-diameter plugging materials cannot effectively block micro- and nano-scale pores, so drilling fluid filtrate easily enters the formation, leading to wellbore instability. With the help of GCTS equipment, we carried out plugging evaluation experiments of nano-CaCO3 plugging agent drilling fluid on shale cores of the Longmaxi Formation in the Sichuan–Chongqing area. We propose to evaluate the plugging effect using shale permeability and longitudinal and transverse wave velocity characteristics before and after plugging. The results show that, at the same concentration, the permeability of the core decreases and the acoustic velocity increases with the use of the nano-CaCO3 plugging agent, which is much better than plugging with base slurry alone. For the same nanoparticle material, with increasing content of the nano-CaCO3 plugging agent, the permeability of the shale is reduced and the acoustic velocity increases. When the content of nano-CaCO3 is 3%, the sealing effect of the nanoscale drilling fluid is best. Through this experimental evaluation, we provide basic experimental and methodological support for the optimization of plugging agents for preventing wellbore instability.
-
-
-
Cube2Vec: Self-Supervised Representation Learning for Sub-Surface Models
Authors P. Lang, T. Adeyemi and R. Schulze-Riegert
Summary: Meaningful representations of subsurface structures are essential to downstream machine learning tasks such as classification and regression. While unlabelled data are often abundant, labelling is expensive and for some use cases ill-defined. The ensuing lack of large, labelled datasets makes purely supervised training of models difficult for many tasks.
A self-supervised deep learning approach is developed which extends Tile2Vec (Jean et al., 2019), a representation learning method for spatially distributed data, to three dimensions. A metric-learning-based loss function uses the overlap between cubes of the subsurface as a proxy for their similarity. This reflects the notion that regions which are close to each other in physical space are, on average, semantically more similar than regions which are far apart. A three-dimensional convolutional neural network has been trained accordingly on about 100,000 cubes extracted from reservoir simulation models. The resulting model maps each cube to an embedding, and the distance between embeddings is a direct measure of the cubes' similarity in terms of structure and grid property distribution.
The quality of the learned representation model is demonstrated quantitatively for labelled test datasets and empirically for two applications – visual search for similar cubes and the classification of formation sections according to their production potential.
Cube2Vec offers a way to leverage the large quantity of available unlabelled subsurface data to create powerful base models for visual analysis tasks in machine learning.
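The metric-learning loss described above can be sketched as a standard triplet loss over cube embeddings, where a cube and an overlapping neighbour form the positive pair and a far-away cube is the negative. This is a minimal numpy illustration of the idea, not the authors' network or training code.

```python
import numpy as np

def triplet_loss(anchor, neighbor, distant, margin=1.0):
    """Tile2Vec-style triplet loss on embedding vectors: pull overlapping
    (spatially close) cubes together and push far-apart cubes at least a
    margin further away than the positive pair."""
    d_pos = np.linalg.norm(anchor - neighbor)   # distance to similar cube
    d_neg = np.linalg.norm(anchor - distant)    # distance to dissimilar cube
    return max(0.0, d_pos - d_neg + margin)
```

In training, the gradient of this loss with respect to the CNN weights shapes the embedding space so that Euclidean distance becomes the similarity measure used for search and classification.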
-
-
-
On the Robust Value Quantification of Polymer EOR Injection Strategies for Better Decision Making
Authors M. Oguntola and R. Lorentzen
Summary: Over the last decades, several EOR methods have emerged, and corresponding models have been developed and implemented in increasingly complex simulation tools. In this paper we present methodology and mathematical tools for optimizing and quantifying the value of EOR strategies, such as polymer, smart water or CO2 injection. The developed methodology is demonstrated for polymer injection on medium to highly heterogeneous synthetic reservoir models of different complexity. The purpose of the work is to improve the understanding of the actual benefit of EOR methods, and to provide methodology that quickly allows users to find optimal production strategies that maximize the net present value (NPV).
In this work, the control variables for the optimization problem are the polymer concentration and water injection rate for each injecting well, and the oil production rate or bottom-hole pressure for each producing well, over the exploration period. Each control variable is constrained by given production limitations. To account for uncertainty in the reservoir model, an ensemble of geological realizations is considered, and a robust ensemble-based approximate-gradient method (EnOpt) is utilized. The gradient is approximated using a sample of control vectors drawn from a multivariate Gaussian distribution with known mean and covariance. The covariance matrix is defined so that the control variables of the same well are correlated in time. The mean is updated using a preconditioned gradient ascent method with backtracking until an optimum is found.
The presented method is tested on three different synthetic reservoirs: a 2D five-spot pattern with grid dimensions 50×50×1, a 3D field provided by Equinor (the Reek field, with dimensions 40×64×14), and a 3D field provided by TNO (the OLYMPUS field, with dimensions 118×118×16). The first two fields have three phases (water, gas, and oil) and the third has two phases (water and oil). For each case we find the optimal well controls for polymer flooding and then compare them with conventionally optimized continuous water flooding. The reservoir fluid flow is simulated using the Open Porous Media (OPM) simulator; it is worth noting, however, that the optimization method is independent of the reservoir simulator used. Important findings of this study are the feasible control strategies for polymer EOR methods leading to an increased NPV, and the comparison of the economic values of optimized polymer flooding and traditional water flooding for the examples considered.
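The ensemble-based gradient approximation at the heart of EnOpt can be sketched as follows: the cross-covariance between sampled control vectors and objective values acts as a smoothed, covariance-preconditioned gradient. This is a generic illustration with hypothetical names, not the authors' implementation.

```python
import numpy as np

def enopt_gradient(objective, mean, cov, n_samples=500, seed=0):
    """Smoothed (covariance-preconditioned) gradient estimate of the
    objective at `mean`, from an ensemble of perturbed control vectors."""
    rng = np.random.default_rng(seed)
    X = rng.multivariate_normal(mean, cov, size=n_samples)
    J = np.array([objective(x) for x in X])
    dX = X - X.mean(axis=0)
    dJ = J - J.mean()
    # cross-covariance between controls and objective values
    return dX.T @ dJ / (n_samples - 1)
```

In a robust setting, `objective` would average the NPV over the ensemble of geological realizations, and the mean control vector would then be advanced by backtracking gradient ascent, as the abstract describes.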
-
-
-
Improved Extended Blackoil Formulation for CO2EOR Simulations
Authors T.H. Sandve, O. Sævareid and I. Aavatsmark
Summary: A well-planned CO2 EOR operation can help meet an ever-increasing need for energy and at the same time reduce the total CO2 footprint of energy production. Good simulation studies are crucial for investment decisions in which increased oil recovery is optimized and balanced against permanent CO2 storage. It is common to use a compositional simulator for CO2 injection to accurately calculate the PVT properties of the mixture of oil and CO2. However, compositional simulations have significantly longer simulation times than blackoil simulations, which makes large simulation studies, where many runs are needed for uncertainty representation and optimization, impractical. Existing extended blackoil formulations often represent the PVT properties of oil-CO2 mixtures poorly. We therefore present an improved extended blackoil formulation with new process-dependent blackoil properties that depend on the fraction of CO2 in the cell. These properties represent the density and viscosity of the oil-CO2 mixture more accurately and thus give results closer to the compositional simulator. A fourth component, in addition to water, oil and formation gas, is used to track the injected gas. The process-dependent blackoil functions are calculated from numerical slim-tube experiments based on one-dimensional compositional EOS simulations. The same simulations also give estimates of the MMP (minimum miscibility pressure).
The new extended blackoil model gives results that are closer to compositional simulations compared to existing blackoil formulations. We present examples based on data from the Fifth Comparative Solution Project: Evaluation of Miscible Flood Simulators as well as from CO2 injection on relevant field models.
The model has been implemented in the Flow simulator, which is developed as part of the Open Porous Media (OPM) project. Flow is an openly developed, free reservoir simulator capable of simulating industry-relevant reservoir models with serial and parallel performance similar to that of commercial simulators.
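In practice, the process-dependent property idea amounts to tabulating mixture properties against the CO2 fraction, as obtained from the slim-tube EOS runs, and interpolating at simulation time. The numbers below are purely illustrative placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical tables from numerical slim-tube (1-D EOS) experiments:
# oil viscosity and density as functions of the CO2 fraction in the cell.
co2_frac = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
oil_visc = np.array([2.5, 1.8, 1.2, 0.8, 0.5, 0.3])    # cP (illustrative)
oil_dens = np.array([850, 830, 805, 775, 740, 700])    # kg/m3 (illustrative)

def mixture_properties(x_co2):
    """Look up process-dependent oil properties for a given CO2 fraction
    by piecewise-linear interpolation of the tabulated values."""
    mu = np.interp(x_co2, co2_frac, oil_visc)
    rho = np.interp(x_co2, co2_frac, oil_dens)
    return mu, rho
```

The fourth (injected-gas) component provides the CO2 fraction per cell, so the table lookup replaces a full EOS flash at each timestep.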
-
-
-
Well Location Optimisation by using Surface-Based Modelling and Dynamic Mesh Optimisation
Authors P. Salinas, C. Jacquemyn, C. Heaney, C. Pain and M. Jackson
Summary: Predictions of production obtained by numerical simulation often depend on grid resolution, as fine resolution is required to resolve key aspects of flow. Moreover, the controls on flow can depend on well location in a model. In some cases, it may be key to capture coning or cusping; in others, it might be the location of specific high-permeability thief zones or low-permeability flow barriers. Thus, models with a suitable grid resolution for one particular set of well locations may fail to properly capture key aspects of flow if the wells are moved. During well optimisation, it is impossible to predict a priori which well locations will be tested in a given model. Thus, one is unlikely to know a priori whether the grid resolution is suitable for all locations tested during a well optimisation procedure on a single model, and the problem is even more profound if well optimisation is performed over a range of different models.
Here, we report an optimisation methodology based on Dynamic Mesh Optimisation (DMO). DMO produces optimised meshes for a given model, set of well locations, pressure (and other key field) distribution, and time level. Grid-free Surface-Based Modelling (SBM) models are automatically generated, in which well trajectories (also not constrained by a mesh) are introduced and respected by DMO. For the optimisation of the well location, a Genetic Algorithm (GA) approach is used, more specifically the open-source software package DEAP. DMO ensures that all models automatically generated and simulated in the optimisation process are modelled with an equivalent mesh resolution without user interaction; in this way, the local pressure drawdown and associated physical effects (such as coning or cusping) can be properly captured if they appear in any of the many scenarios studied. We demonstrate that the method has wide application in reservoir-scale models of oil and gas fields and regional models of groundwater resources.
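The GA-based placement loop can be caricatured in a few lines. The authors use DEAP with DMO-meshed simulations as the objective; this standalone sketch instead uses a plain elitist GA and a stand-in fitness function, so all names and parameters are our own.

```python
import random

def ga_well_search(fitness, nx, ny, pop_size=20, generations=40, seed=0):
    """Elitist GA sketch for locating one vertical well on an nx-by-ny grid.
    `fitness(i, j)` stands in for NPV from a DMO-meshed flow simulation."""
    rnd = random.Random(seed)

    def clamp(v, hi):
        return max(0, min(hi - 1, v))

    population = [(rnd.randrange(nx), rnd.randrange(ny)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda w: fitness(*w), reverse=True)
        parents = population[: pop_size // 2]         # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rnd.sample(parents, 2)
            i, j = a[0], b[1]                         # crossover: mix coordinates
            if rnd.random() < 0.5:                    # mutation: nudge the location
                i = clamp(i + rnd.choice((-1, 1)), nx)
                j = clamp(j + rnd.choice((-1, 1)), ny)
            children.append((i, j))
        population = parents + children
    return max(population, key=lambda w: fitness(*w))
```

Because each fitness call triggers a full (DMO-remeshed) simulation in the real workflow, population size and generation count are the main cost levers.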
-
-
-
Geoengineering Tool for Field Development: A Decision-Making Tool for Deviated Well Placement
Authors S. Bouquet and A. Fornel
Summary: The developed geoengineering tool aims at improving the decision-making for deviated well positions to increase mature field production. It is based on statistical and visual analysis of oil field features. The main advantage of this method is its reservoir-engineering focus and that no additional flow simulations are needed, unlike most iterative optimization algorithms, which require thousands of simulations. Moreover, this methodology is not constrained by a well geometry, but proposes the well placements and trajectories that are most interesting given the studied oil field features. For deviated wells, the drilling is not constrained to a fixed direction (horizontal or vertical); its direction is a function of the available resources (non-communicating oil-rich layers or disconnected oil-rich areas). In practice, such wells are difficult to position manually by a reservoir engineer. Here, we use information from field features and their classification to define a profitable well trajectory that maximizes oil production.
The field features are either static (e.g. anisotropy) or dynamic reservoir characteristics, e.g. mobile oil thickness, time-of-flight… To facilitate their analysis, an automatic statistical analysis is performed on these features by unsupervised classification of the grid cells. A 3D grid of class indices, depending on the combination of features, is obtained. This grid allows identification of the areas of interest for production. A specific visualization of potential field production capacities is proposed by defining and calculating geobodies. They are defined as groups of connected cells with the most interesting features. While these connections are hard to visualize directly in 3D, the geobody calculation allows the areas of interest and their compartmentalization to be displayed.
The geobody with the highest quality index should be the first area to be drained. The proposed trajectory starts at the cell with the highest quality index in this geobody. The quality indices are calculated using a moving-average method. The trajectory is computed with a Dijkstra algorithm, weighted by the quality indices of cells and geobodies and constrained by a maximum well length.
This methodology was first applied to a synthetic case and then to a real field case in North Africa, for which a standard reservoir engineering study had already been performed. The geoengineering tool results were compared to the reservoir engineering study results. The tool identified the high-potential areas and proposed a well trajectory and placement with the most promising features according to the field constraints, improving oil production while limiting the computational cost.
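The trajectory step can be illustrated with a standard Dijkstra search in which the cost of entering a cell decreases with its quality index. This 2D sketch with a hypothetical quality grid ignores the geobody weighting and the maximum-length constraint described above.

```python
import heapq

def well_path(quality, start, target):
    """Dijkstra sketch: cheapest path on a 2-D grid of quality indices,
    where entering a high-quality cell is cheap (cost = 1/quality)."""
    nx, ny = len(quality), len(quality[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue                           # stale heap entry
        i, j = node
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < nx and 0 <= nj < ny:
                nd = d + 1.0 / quality[ni][nj]
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    prev[(ni, nj)] = node
                    heapq.heappush(heap, (nd, (ni, nj)))
    path = [target]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

In the tool itself, the start would be the highest-quality cell of the best geobody and the edge weights would also reflect geobody membership.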
-
-
-
Comparison Between Algebraic Multigrid and Multilevel Multiscale Methods for Reservoir Simulation
Authors H. Nilsen, A. Moncorge, K. Bao, O. Møyner, K. Lie and A. Brodtkorb
Summary: Multiscale methods for solving strongly heterogeneous systems in reservoirs have a long history, from the early ideas applied to incompressible flow to the recently released version in a commercial simulator. Much effort has been put into making the MsFV method work for fully unstructured multiphase problems. The MsRSB method is a newly developed variant that tackles most "real-world" problems. It is, to our knowledge, the only multiscale method that has been released in a commercial simulator. Alternatively, the method can be seen as a variant of smoothed aggregation or as an iterative approach to AMG with energy-minimizing basis functions. This will be discussed in detail.
So far, most work comparing MsRSB with AMG methods has used qualitative performance measures like iteration counts rather than pure runtime on fair code implementations. We discuss the theoretical performance and show the practical performance of our implementation. Here, we compare the performance of pure AMG, standard two-level MsRSB with pure AMG as the coarse solver, and a new, truly multilevel MsRSB scheme. Our implementation uses the DUNE-ISTL framework. To limit the scope of the discussion, we restrict our assessment to AMG with aggregation and smoothed aggregation and the MsRSB method. These three methods are closely related and, in a preconditioner setting, are primarily distinguished by the coarsening factors used and the degree of smoothing applied to the basis. We also compare with other state-of-the-art AMG implementations, but do not investigate combinations of them with the MsRSB method. For the MsRSB method, we also discuss practical considerations in different parallelization regimes, including domain decomposition using MPI, shared memory using OpenMP, and GPU acceleration with CUDA.
All comparisons will focus on the setting in which many similar systems should be solved, e.g. during a large-scale, multiphase flow simulation. That is, our emphasis is on the performance of updating a preconditioner and on the apply time for the preconditioner relative to the convergence rate. Performance of the solvers will be tested for pure parabolic/elliptic problems that either arise as part of a sequential splitting procedure or as a pseudo-elliptic preconditioner/solver as a part of a CPR preconditioner for a multiphase system, for which block ILU0 is used as the outer smoother.
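The restricted-smoothing construction behind MsRSB basis functions can be sketched with dense linear algebra. This 1D toy, with hypothetical block and support choices, only illustrates the core idea of smoothing a partition of unity under support constraints; it is not the DUNE-ISTL implementation discussed above.

```python
import numpy as np

def msrsb_basis(A, blocks, supports, omega=0.6, iters=100):
    """Prolongation P whose columns are multiscale basis functions: start
    from a piecewise-constant partition of unity, apply damped Jacobi,
    truncate outside each support, and renormalize to preserve unity."""
    n = A.shape[0]
    P = np.zeros((n, len(blocks)))
    for b, cells in enumerate(blocks):
        P[cells, b] = 1.0                      # partition of unity to start
    D = np.diag(A)
    for _ in range(iters):
        P = P - omega * (A @ P) / D[:, None]   # damped-Jacobi smoothing
        for b, sup in enumerate(supports):
            outside = np.ones(n, dtype=bool)
            outside[sup] = False
            P[outside, b] = 0.0                # restrict to support region
        P /= P.sum(axis=1, keepdims=True)      # restore the partition of unity
    return P

# 1-D Laplacian with two coarse blocks and overlapping supports
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P = msrsb_basis(A, [list(range(4)), list(range(4, 8))],
                   [list(range(6)), list(range(2, 8))])
```

The coarse operator is then the Galerkin product Rᵀ A P; in a preconditioner setting, the number of smoothing iterations plays the role of the "degree of smoothing applied to the basis" mentioned above.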
-
-
-
Modeling of Water-Induced Fracture Growth Pressure Using Poroelastic Approach
Authors P. Kabanova and E. Shel
Summary: One of the main factors affecting the efficiency of hydrocarbon production during field development is the waterflooding pattern used for formation pressure maintenance. It is common practice to convert production wells that have been operated under depletion to injection. However, since hydraulic fracturing was previously performed on the majority of production wells, injection at high pressure carries risks associated with spontaneous fracture growth. This can lead to water breakthrough and decreased production efficiency. The purpose of this work is to model the fracture growth pressure at an injection well using a poroelastic approach.
Thus, a physico-mathematical model is built for determining the pressure at which a fracture will grow at the injection well. Solving the problem involves sequentially finding the pressure field in a development element using the Laplace equation, and then the stress field using an equilibrium equation. The solutions were obtained using analytical and numerical approaches, including the Fourier transform and a finite-difference scheme. The obtained solution was verified by validating the model against a finite-element solution. A criterion of fracture growth was also derived, according to which fracture propagation occurs when the minimum horizontal stress at the tip of the fracture is exceeded.
The influence of reservoir and development parameters on the value of the critical pressure was evaluated: it was shown that an increase in the Biot coefficient leads to an increase in the fracture growth pressure, while an increase in Poisson's ratio decreases the critical pressure.
It was also found that an increase in the distance between the wells in a line decreases the pressure at which a water-induced fracture starts to grow, while an increase in the row spacing increases this pressure.
It should be pointed out that the most common way to control the growth of water-induced fractures is combined hydrodynamic and geomechanical modeling, but this method is very time consuming and computationally expensive. Therefore, a quick method for estimating the fracture initiation pressure was proposed. The presented model can be used to control the growth of water-induced fractures, namely, to determine the regimes of fracture growth, to regulate the waterflood regimes (pressure and flow control), and to optimize the field development system without resorting to combined hydrodynamic and full geomechanical modeling.
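The first stage of the sequential solve (the pressure field from the Laplace equation on a development element) can be sketched with a simple finite-difference iteration. The boundary layout and values below are illustrative, not the paper's configuration.

```python
import numpy as np

def solve_laplace(p0, fixed, tol=1e-7, max_iter=50000):
    """Jacobi iteration for the 2-D Laplace equation; cells flagged in the
    boolean mask `fixed` (wells, outer boundary) keep their value in p0."""
    p = p0.astype(float).copy()
    for _ in range(max_iter):
        p_new = p.copy()
        # five-point stencil average on interior cells
        p_new[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                                    p[1:-1, :-2] + p[1:-1, 2:])
        p_new[fixed] = p[fixed]
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p_new
```

The resulting pressure field would then feed the equilibrium equation for the stress field, from which the fracture-growth criterion is evaluated at the tip.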
-
-
-
Analysis of Low Salinity and Polymer Synergies in a Dynamic Pore-Scale Network Simulator
Authors E. David, S. McDougall and A. Boujelben
Summary: It has been postulated that combining different EOR techniques might yield a synergistic behaviour that could result in additional oil recovery beyond that obtained from each EOR technique applied separately. This has been investigated in recent experimental work (Alagic et al., 2010; Mohammadi and Jerauld, 2012; Shiran and Skauge, 2013; Pettersen and Skauge, 2016), where both polymer and surfactant solutions have been reported to be more efficient in a low salinity environment. We have investigated a number of different injection protocols using a pore-scale dynamic simulator that combines both low salinity brine (LS) and polymer injection.
Four synergistic combinations have been considered: (i) LS brine and polymer injected simultaneously at the start of the simulation (secondary mode), (ii) LS brine and polymer injected simultaneously following high salinity (HS) water breakthrough, (iii) LS brine injected initially, followed by simultaneous LS brine/polymer injection after LS breakthrough, and (iv) LS brine injected initially followed by polymer injection after LS water breakthrough.
A positive synergy was observed when LS brine and polymer were injected simultaneously in both secondary and tertiary modes, with the combined effect yielding significant increases in oil recovery. The mixture of polymer and LS brine was found to cause capillary fingers to thicken and swell, allowing the LS brine to access more of the pore space as a consequence of the higher viscous forces induced by the polymer. In secondary mode, the mixture of polymer and LS brine was observed to stabilise the water fingers and shifted the flow regime from viscous/capillary fingering to stable displacement. Moreover, results suggest that this synergistic LS/polymer effect is sensitive to a range of rock/fluid parameters, such as wettability, viscosity ratio, and capillary number.
-
-
-
Conditioning Surface-Based Geological Models to Well Data Using Neural Networks
Authors Z. Titus, C. Pain, C. Jacquemyn, P. Salinas, C. Heaney and M. Jackson
Summary: Generating representative reservoir models that accurately describe the spatial distribution of geological heterogeneities is crucial for reliable predictions of historic and future reservoir performance. Surface-based geological models (SBGMs) have been shown to better capture complex reservoir architecture than grid-based methods; however, conditioning such models to well data can be challenging because it is an ill-posed inverse problem with spatially distributed parameters.
Here, we propose the use of deep Convolutional Neural Networks (CNNs) to generate geologically plausible SBGMs that honour well data. Deep CNNs have previously demonstrated capability in learning representative features of spatially correlated data for large scale and highly non-linear geophysical systems similar to those encountered in subsurface reservoirs.
In the work reported here, a CNN is trained to learn the relationship between parameterised inputs to SBGM, the resulting geometry and heterogeneity distribution, and the mis-match between model surfaces and well data. We show that the trained CNN can generate a range of geologically plausible models that honour well data. The method is demonstrated for a 2D example model, representing a shallow marine reservoir and a 3D extension of the model that captures typical heterogeneities encountered in the subsurface such as parasequences, clinoforms and facies boundaries. These test cases highlight the improvement in reservoir characterisation for realistic geological cases.
We present here a method of generating geologically consistent reservoir models that match well data. The developed method will allow the generation of new high-fidelity realizations of subsurface geology conditioned to information at wells, which is the most direct observational data that can be acquired.
Technical Contributions
- The use of surface-based modelling to describe even complex geological features, compared to grid-based modelling, significantly decreases the computational expense of training the network, as there are fewer parameters to optimize.
- Conditioning geological models to well data is a challenging ill-posed inverse problem in reservoir characterisation. The use of neural networks presents another approach for generating geologically plausible models that are calibrated with observed well data and can be extended to object-based modelling.
-
-
-
Modified Peaceman Correction for Improved Calculation of Polymer Injectivity in Coarse Grid Numerical Simulations
Authors I. Tai, A. Muggeridge and M.A. Giddins
Summary: An improved method for calculating the injectivity of non-Newtonian polymers in finite-volume numerical simulation is presented. Non-Newtonian rheology can significantly impact the performance of a polymer flood. This is especially important in the near-wellbore region and at the start of injection. In the near-wellbore region, velocities and shear rates are at their maximum and change rapidly with distance from the well. These effects are expected to be largest at the beginning of a polymer flood because the near-wellbore region is still saturated with more viscous oil.
An analytical method for calculating the modified Peaceman pressure equivalent radius when the well block contains only polymer solution is derived and then extended to the case when the well block contains both oil and polymer solution (as occurs at early time). This is done using fractional flow theory to derive well pseudo relative permeability functions. The approach is validated by comparing the results from fine grid radial and coarse grid Cartesian simulation models. The importance of the correction is demonstrated by simulating polymer injection into a realistic field scale model of a viscous oil field.
The modified Peaceman radius, combined with well pseudo relative permeabilities, significantly reduces the error when calculating the bottomhole flowing pressure in wells injecting a shear-thinning polymer solution. In the field scale simulation, with injection pressure constrained by the fracture pressure of the rock, our results show that polymer injection can be a viable technique for enhanced oil recovery in this reservoir. The new method leads to higher well injectivity and more optimistic prediction of polymer flood performance, compared to the standard Peaceman calculation used by most reservoir simulators, where non-Newtonian behaviour in the well block is unaccounted for.
This paper provides a simple and accurate method to capture the impact of shear thinning behaviour on polymer injectivity. The method will improve estimations of injectivity in reservoir simulations of shear thinning polymer solutions.
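For reference, the standard Peaceman pressure-equivalent radius and well index that the paper modifies can be written as below. The shear-thinning mobility correction itself is not reproduced here, and the variable names are ours.

```python
import math

def peaceman_well_index(kx, ky, dx, dy, h, rw, skin=0.0):
    """Anisotropic Peaceman equivalent radius r_eq and well index for a
    vertical well in a Cartesian grid block (consistent units assumed)."""
    a = math.sqrt(ky / kx)
    r_eq = 0.28 * math.sqrt(a * dx**2 + dy**2 / a) / (math.sqrt(a) + 1.0 / math.sqrt(a))
    wi = 2.0 * math.pi * math.sqrt(kx * ky) * h / (math.log(r_eq / rw) + skin)
    return r_eq, wi
```

In the isotropic case this reduces to the familiar r_eq = 0.14·sqrt(dx² + dy²); the paper's contribution is, in effect, to replace the Newtonian mobility implicit in this well index with one consistent with the shear-thinning polymer rheology in the well block.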
-
-
-
A Novel Method for Quickly Obtaining SRV in Multi-Stage Fracturing Reservoirs with Different Fracturing Radii
SummaryMulti-stage fracturing is an effective reservoir stimulation technology for multilayer reservoirs, and the stimulated reservoir volume (SRV) is an important index of stimulation quality. Aiming at the multi-stage fractured vertical commingled production well, and allowing each layer to have a different fracturing radius, an extended model of an n-layer vertical commingled production well with an arbitrary longitudinal distribution of fracturing radii was established. The Laplace-domain bottom-hole pressure solution was obtained by Laplace transformation and solution of the n-th order Bessel-function sparse matrix, and the real-time-domain bottom-hole pressure solution was obtained by the Stehfest numerical inversion method. Based on the characteristics of the bottom-hole pressure and its derivative on the double-logarithmic coordinate system, the new flow regimes are identified. A sensitivity analysis over several vertical distributions of fracturing radii shows that, under vertically uneven fracturing radii, the multi-stage fractured reservoir behaves like a three-zone compound reservoir. On the other hand, identifying the fracturing radii of a multi-stage fractured reservoir is an inverse problem: the fracturing radius of each layer cannot be effectively identified from the bottom-hole pressure response, but the SRV of the multi-stage fractured reservoir can be obtained. We call these two phenomena the "equivalent compound effect" and the "equivalent seepage volume effect", respectively. These effects provide a new method for quickly obtaining the SRV, instead of getting entangled in the fracturing radius of each individual layer, and open a new direction for evaluating the overall stimulation effect of multi-stage fractured vertical commingled production wells.
In particular, this work provides a novel perspective for understanding the complex seepage flow of multi-stage fractured vertical commingled production reservoirs.
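The real-time-domain pressure is recovered from the Laplace-domain solution by the Stehfest algorithm, f(t) ≈ (ln 2 / t) Σᵢ Vᵢ F(i ln 2 / t). A minimal generic sketch (not the authors' implementation; the transform F below is a stand-in for the Laplace-domain bottom-hole pressure solution):

```python
from math import factorial, log

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) by the Stehfest
    method with an even number N of weights (N=12 is a common choice)."""
    N2 = N // 2
    total = 0.0
    for i in range(1, N + 1):
        # Standard Stehfest weight V_i
        V = 0.0
        for k in range((i + 1) // 2, min(i, N2) + 1):
            V += (k ** N2 * factorial(2 * k)
                  / (factorial(N2 - k) * factorial(k) * factorial(k - 1)
                     * factorial(i - k) * factorial(2 * k - i)))
        V *= (-1) ** (N2 + i)
        total += V * F(i * log(2.0) / t)
    return total * log(2.0) / t
```

For instance, with F(s) = 1/(s+1) the method recovers f(t) = e^{-t} to several digits; in the paper F would be evaluated from the Bessel-function matrix solution instead of a closed form.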
Nonlinear State Constraints Handling in Waterflooding Optimization Through Reduced Order Models
Authors A. Souza, A. Castro, M. Dall'Aqua, J. Tueros, B. Horowitz and E. GildinSummaryThis study addresses strategies to efficiently impose nonlinear state constraints using reduced order models. Nonlinear constraints imposed on state variables are of practical interest in optimizing reservoir production performance (NPV or oil production), but they are difficult to handle numerically. Constraints include bounds on the controls themselves (e.g. rates, BHPs or valve openings) and linear functions of the design variables, but oftentimes nonlinear constraints involving state variables must also be imposed. Examples are minimum (maximum) BHPs at producer (injector) wells subject to rate controls, or vice versa. Enforcement of these constraints involves repeated computation of state variables, and possibly their derivatives, not only at the ends of control steps but at numerous intermediate times. Both computations are time consuming and, thus, we propose to use reduced order methods to decrease the numerical effort. The contributions of this paper are twofold: (1) we propose correction points based on a time series within the control cycle to impose state constraints, thus reducing the computational effort; (2) we couple the optimizer with physics-based and data-driven reduced-order models to enforce state constraints at reduced computational complexity.
Here, two strategies are compared: Proper Orthogonal Decomposition / Trajectory Piecewise Linearization (POD/TPWL) and Dynamic Mode Decomposition (DMD). Both methods are snapshot-based linearizations but are implemented differently. The POD/TPWL technique reduces the complexity of the problem by linearizing the governing equations around converged states stored during a training simulation, and reduction is obtained by projecting states onto smaller subspaces by POD. This method requires access to the simulator code and, thus, is an intrusive method. DMD also relies on state snapshots, which are used to generate a small set of optimal basis vectors called modes. The snapshot data also permit extraction of a coherent dynamic structure of the problem through the assumption that there exists a linear mapping connecting the temporal evolution of the state system. This evolution can be computed without further simulation runs. DMD does not require access to the simulator code and therefore is nonintrusive. The reduced-order techniques are compared in the optimization of a BHP-controlled synthetic reservoir where the objective function is maximization of oil production subject to field water production rate constraints. We demonstrate the handling of nonlinear constraints and the resulting computational savings using the MATLAB Reservoir Simulation Toolbox (MRST). We modified some of its routines to store Jacobian matrices and snapshots, both used by POD/TPWL and DMD.
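The linear mapping assumed by DMD can be sketched with the standard "exact DMD" construction in NumPy (a generic sketch under the stated snapshot assumption, not the authors' implementation):

```python
import numpy as np

def dmd_modes(snapshots, r=None):
    """Exact DMD: from a snapshot matrix whose columns are successive
    states, fit the linear map x_{k+1} ~ A x_k and return its
    eigenvalues and modes. Optional rank r truncates the POD basis."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if r is not None:  # POD-like truncation to the leading r directions
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    # Reduced operator: projection of A onto the POD subspace
    Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes
```

On snapshots generated by a known linear system, the eigenvalues of the fitted reduced operator recover the system's dynamics, which is the property the surrogate exploits to advance states without further simulation runs.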
Effects of Lumping on the Numerical Simulation of Thermal-Compositional-Reactive Flow in Porous Media
Authors M. Cremon and M. GerritsenSummaryIn this work, we study the influence of different lumping strategies on the thermal recovery of an extra-heavy oil. Numerical simulation of thermal recovery processes typically requires advanced thermodynamic equilibrium computations to model the phase behavior and displacement. Those models rely on compositional descriptions of the oil using up to tens of components. Lumping a large number of components into a smaller number of pseudo-components in order to reduce the computational cost is standard practice for thermal simulations. In the context of reactive transport, most reaction schemes use at most four hydrocarbon components. However, the impact the lumping process has on the displacement processes can be hard to estimate a priori. We focus on 1D, 3-phase combustion tube-like numerical simulations of In-Situ Combustion (ISC) displacement processes. These thermal-compositional-reactive simulations exhibit a tight coupling between mass and energy conservation, through phase behavior, heat transport and reactions. We observe that depending on the number and type of lumped pseudo-components retained in the simulation, the results can exhibit modeling artefacts and/or fail to capture the relevant displacement processes. ISC cases involve multiple fronts moving downstream, including a steam front, a reaction/temperature front and multiple saturation fronts. First, we show that using a small number of components does not allow for an accurate estimation of the phase behavior of an extra-heavy oil. Using the typical reaction-based descriptions of a few hydrocarbon components (1-4) leads to inaccurate phase envelopes for multiple compositions encountered in the displacement process. Then, we illustrate that under hot air injection without reactions, the displacement results do not capture the physical phenomena.
Lumping heavy components together overestimates the size of the oil banks and gives inaccurate speeds for multiple fronts. Finally, in the presence of exothermic oxidation reactions, more components are needed to accurately capture the evaporation of medium and heavy components due to the tighter coupling and higher temperatures.
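The basic lumping operation the study varies can be illustrated by mole-fraction-weighted grouping of components; the component data below are hypothetical, chosen only to show the mechanics:

```python
def lump_components(z, props, groups):
    """Lump detailed components into pseudo-components.

    z:      mole fractions of the detailed components
    props:  a per-component property (e.g. molecular weight)
    groups: lists of detailed-component indices forming each pseudo-component

    Pseudo-component mole fractions are sums; the property is
    mole-fraction-weighted within each group.
    """
    z_lumped, props_lumped = [], []
    for group in groups:
        zg = sum(z[i] for i in group)
        z_lumped.append(zg)
        props_lumped.append(sum(z[i] * props[i] for i in group) / zg)
    return z_lumped, props_lumped
```

For example, lumping a 0.3/0.2 split of two heavy components into one pseudo-component yields a 0.5 mole fraction with an averaged property; the paper's point is that such averaging, harmless for some properties, distorts phase envelopes and front speeds when too few pseudo-components are kept.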
A Novel and Efficient Preconditioner for Solving Lagrange Multipliers-Based Discretization Schemes for Reservoir Simulations
Authors S. Nardean, M. Ferronato and A.S. AbushaikhaSummaryWe present a novel and efficient preconditioning technique to solve the non-symmetric system of equations associated with Lagrange multipliers-based discretization schemes, such as the Mixed Hybrid Finite Element method (MHFE) and the Mimetic Finite Difference method (MFD). These types of discretization have been gaining popularity lately, and here we develop a fully dedicated preconditioner for them. Preconditioners are key to improving the efficiency of Krylov subspace methods, which solve the sequences of large, often ill-conditioned systems of equations arising in reservoir numerical simulations.
The mathematical model of flow in porous media is governed by a set of two coupled nonlinear equations: the momentum and mass balance equations, discretized using either the MHFE or the MFD, and the Finite Volume method (FV), respectively. Unknowns are located on elements (element pressure and saturation) and faces (face pressure and phase capillary pressure), the latter behaving as Lagrange multipliers. The problem is solved by adopting a fully implicit approach and linearization is provided by a Newton-Raphson method, which leads to a block-structured Jacobian matrix. An original numerical formulation of the mass balance equation, where the continuity of fluxes is strongly imposed with the aim of increasing the efficiency of the nonlinear iteration, has been investigated. The resulting block Jacobian is not symmetric, thus requiring special preconditioning tools for its efficient solution. The preconditioning approach exploits the Jacobian block structure to develop a multi-stage strategy that addresses separately the problem unknowns. A crucial point is the approximation of the resulting Schur complements, which is carried out at an algebraic level by applying proper restriction operators to the full matrix blocks. The selection of such restrictors is carried out with the aid of a domain decomposition technique algebraically enhanced by a dynamic minimal residual strategy. The proposed block preconditioner has been tested through an extensive experimentation on unstructured and highly heterogeneous reservoir systems, pointing out its robustness and computational efficiency.
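The role of the Schur complement in such a block strategy can be sketched on a dense 2x2 block system; here the Schur complement is formed exactly for clarity, whereas the paper approximates it algebraically with restriction operators:

```python
import numpy as np

def schur_block_solve(A, B, C, D, r1, r2):
    """Solve the block system [[A, B], [C, D]] [z1; z2] = [r1; r2]
    by block elimination through the Schur complement S = D - C A^-1 B.
    In a practical preconditioner, solves with A and S would themselves
    be approximate (e.g. multigrid or domain decomposition)."""
    Ainv_r1 = np.linalg.solve(A, r1)
    S = D - C @ np.linalg.solve(A, B)        # exact Schur complement
    z2 = np.linalg.solve(S, r2 - C @ Ainv_r1)  # second-block unknowns
    z1 = np.linalg.solve(A, r1 - B @ z2)       # back-substitute first block
    return z1, z2
```

With exact blocks this reproduces the direct solution; the preconditioning question studied in the paper is how cheaply S can be approximated while keeping Krylov iteration counts low.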
Huff-n-Puff (HNP) Pilot Design in Shale Reservoirs Using Dual-Porosity, Dual-Permeability Compositional Simulations
Authors H. Hamdi, C.R. Clarkson, A. Esmail and M. Costa SousaSummaryBefore implementing an HNP pilot in the field, reservoir studies are usually conducted, and compositional numerical simulations performed, to assess the impact of uncertainty on HNP design parameters. In the previous work conducted by the authors, the impact of parametric uncertainty on designing a single-well HNP was demonstrated using single-porosity models. However, recent studies show that a limited region of shattered rock is likely to be created during the hydraulic fracturing process. This region is closely represented by regional dual-porosity dual-permeability (DP-DK) models. In this study, we expand on the early work and address the impact of model uncertainty on designing an optimal HNP for a Duvernay shale example. In addition, a multi-well HNP design is exemplified to assess the impact of fracture communication during the cyclic gas injection scenarios. A unified framework is required to conduct Bayesian history matching and perform HNP optimizations using the Markov chain Monte Carlo process. This task is achieved by implementing new adaptive sampling designs and employing some surrogate modelling techniques (random forests and Gaussian processes) to obtain the distributions for probabilistic HNP forecasts.
The results show that for an equivalent calibrated DP-DK model, the efficiency of HNP, for both lean and rich gas injection scenarios, can be substantially higher than that predicted with the calibrated single-porosity model. In particular, lean gas injection, predicted to have a low efficiency using single-porosity models, is predicted to result in substantial incremental recovery in DP-DK models. The history matching and optimization results show that DP-DK models yield the highest recoveries during early cycles and a reduced efficiency for later cycles, whereas with single-porosity models, the efficiency is fairly constant across cycles. The high efficiency of the DP-DK models is related to an enhanced swelling and mixing process due to pervasive communication (contact area) between the fracture network and the matrix. Moreover, the compositional simulations demonstrate that for multi-well HNP scenarios, communication through hydraulic fractures is far more important than communication through the enhanced fracture region (EFR). This communication is shown to substantially reduce HNP performance, as inferred by comparing the probabilistic forecast simulations.
This study provides a novel workflow to accurately assess the impact of model uncertainty on the HNP designs for unconventional shale and tight light oil reservoirs.
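The Bayesian sampling step mentioned above can be illustrated with a minimal random-walk Metropolis sampler; this is a generic one-dimensional sketch, not the authors' adaptive design, and the target density stands in for the surrogate posterior over history-matching/HNP parameters:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis: propose Gaussian perturbations and accept
    with probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    samples, x, lp = [], x0, log_post(x0)
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp  # accept the proposal
        samples.append(x)
    return samples
```

In the paper's workflow, each posterior evaluation would query a random-forest or Gaussian-process surrogate rather than a full compositional simulation, which is what makes the chain affordable.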
A Surrogate-Based Approach to Waterflood Optimisation under Uncertainty
Authors P. Ogbeiwi, K. Stephen and A. ArinkoolaSummaryThe Markowitz classical theory has been applied to the robust optimisation of petroleum engineering operations by many researchers. It involves computing the means and standard deviations of specified reservoir performance measures, and constructing an efficient frontier that quantifies the relationship between the optimised mean and standard deviation. However, the optimisation routine is computationally expensive, as numerous simulations are required to calculate the means and standard deviations. Also, to simplify the optimisation problem, many significant uncertainties are not considered in the optimisation routine. Furthermore, previous studies have used a limited number of reservoir-model sample points of the uncertain variable(s) to calculate the mean and standard deviation values. For example, if the uncertain parameter is uniformly distributed, three equiprobable values (the low, median and high values) are used to represent the uncertainty. However, this approach leads to erroneous calculations of the means and standard deviations because the actual distribution of the uncertainty is ignored.
In this study, we apply the Markowitz classical robust optimisation routine to a validated approximation model of the cumulative oil production of a case study reservoir to optimise oil recovery after waterflooding. Using this approach, we can reduce computational costs and for the first time, consider up to four geological uncertain variables in reservoir optimisation under uncertainty. We show that at least 100 sample points (realisations) of the uncertain geological parameters are required to obtain accurate computations of the means (reward) and standard deviations (risk). This allows for adequate sampling of the distribution of the uncertain parameters. We then construct an efficient frontier of the optimal solutions for various risk-aversion factors and compare the results to that obtained from a deterministic optimisation routine.
This approach was applied for the first time to optimisation under uncertainty. The results indicate that considering geological uncertainties while solving the optimisation problem yields more realistic optimal solutions than the deterministic optimisation case. This is because the engineering control variables obtained lead to a risk-quantified strategy for the waterflooding operation.
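The Markowitz-style objective traded off along the efficient frontier can be sketched as a mean penalised by a risk-aversion-weighted standard deviation, evaluated over realisations of the uncertain parameters (a generic sketch; the NPV samples below are placeholders for surrogate-model evaluations):

```python
import statistics

def robust_objective(npv_samples, risk_aversion):
    """Markowitz-style robust objective: reward (mean over realisations)
    minus risk (standard deviation) scaled by a risk-aversion factor.
    Sweeping risk_aversion and re-optimising traces the efficient frontier."""
    mean = statistics.fmean(npv_samples)
    std = statistics.pstdev(npv_samples)
    return mean - risk_aversion * std
```

The study's point that around 100 realisations are needed corresponds to requiring the sample mean and standard deviation above to converge over the actual parameter distribution rather than three hand-picked quantiles.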
Statistical Model and Experimental Study of Oil Viscosity Reduction and Rock Wettability Alteration Induced by Nanoparticles
Authors M. Bagheri Vanani, S.A. Tabatabaei-Nezhad and E. KhodapanahSummaryRecently, nanoparticles (NPs) have been introduced as a useful solution to enhanced oil recovery (EOR) challenges. One such challenge is the precipitation of asphaltene in oil reservoirs, which affects rock and fluid properties including oil viscosity and rock wettability. This paper first investigates the potential of silica NPs for oil viscosity reduction, which increases the mobility of the oleic phase and thereby enhances recovery. Next, the effect of silica NPs on asphaltene precipitation on sandstone rocks, which affects rock wettability, is explored. Finally, a statistical modeling study is performed using MINITAB software to investigate the effect of temperature, nanofluid concentration and oil composition on rock and oil properties. To this end, oil viscometry and contact angle measurements were conducted. The results showed that silica NPs inhibited or delayed asphaltene precipitation in sandstone rock and, consequently, reduced the potential of asphaltene to shift rock wettability toward oil-wet conditions. In addition, the results demonstrated that dispersing silica NPs in the oleic phase could decrease oil viscosity by as much as 98% by cracking carbon-oxygen and carbon-carbon bonds in the hydrocarbon chains. A multiple linear regression model was also developed by statistical analysis to predict the percentage of oil viscosity reduction by NPs. The R-squared value was 98.9% and the p-values were smaller than 0.05, indicating the significant effects of the oil sample, silica NP concentration and temperature on oil viscosity reduction. F-values of 152.86, 845.4 and 91.78 were obtained for these parameters, respectively. No interaction between any pair of parameters was observed for the viscosity reduction.
The results of the modeling section were found to be applicable to forecasting oil field data. This study supports the EOR potential of NPs in oil and gas fields.
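The multiple linear regression with an R-squared diagnostic can be sketched with ordinary least squares in NumPy (a generic sketch, not the MINITAB workflow; the predictors stand in for oil sample, NP concentration and temperature):

```python
import numpy as np

def fit_linear_model(X, y):
    """Ordinary least squares fit y ~ X b + c.

    Returns the fitted coefficients (last entry is the intercept) and the
    coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    A = np.column_stack([X, np.ones(len(y))])  # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return coef, 1.0 - ss_res / ss_tot
```

An R-squared near 1, as reported in the paper, means the residual sum of squares is a small fraction of the total variance of the measured viscosity reductions.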
How Does the Definition of the Objective Function Influence the Outcome of History Matching?
Authors G. Eremyan, I. Matveev, G. Shishaev, V. Rukavishnikov and V. DemyanovSummaryIn this work we investigate how the form of the objective function can influence the results and the speed of history matching (HM). The objective function definition depends on the production variables included in the objective and their weighting factors. These choices may impact, for instance, the speed of assisted history matching. We demonstrate how the choice of the suitable form for the objective function used in HM should depend on the particular reservoir development problem at stake.
The work presents a comparative study between different objective function formulations used in history matching a synthetic reservoir example. An industry-standard stochastic optimization algorithm, evolution strategy, was chosen for the comparative benchmarking of the impact of the objective function choice on history matching. The synthetic model represents a waterflooding case with 3 production and 3 injection wells, 7 years of simulated history and 8 uncertain reservoir parameters. The findings from the comparative study are not limited to the particular assisted HM algorithm applied.
Processing and analysis of the experimental results confirmed that the formulation of the objective function is important, since it determines how quickly the algorithm converges towards better HM solutions. The study demonstrates how different objective function formulations lead to different computational costs to reach the history matched solution. This means an optimal objective function formulation for each particular problem should provide the fastest convergence.
The novelty of the work is in demonstrating how different objective function formulations can help to history match a reservoir model at minimal computational cost when solving different production problems. We show that the objective function should not be defined in the same way for every history matching process, but rather adjusted to the particular application, allowing the required history match to be reached at minimum computational cost. This gives a better chance of history matching real, complex hydrocarbon field models within a reasonable time.
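The dependence of the objective on the included production variables and their weighting factors can be made concrete with a minimal weighted misfit (a generic sketch, not the authors' formulation; the variable names are illustrative):

```python
def hm_objective(observed, simulated, weights):
    """Weighted least-squares history-matching misfit.

    observed/simulated: dicts mapping a production variable name
    (e.g. oil rate, water cut) to its time series; weights: the
    per-variable weighting factors whose choice the study examines."""
    total = 0.0
    for var, w in weights.items():
        total += w * sum((o - s) ** 2
                         for o, s in zip(observed[var], simulated[var]))
    return total
```

Changing which variables appear in `weights`, or their relative magnitudes, reshapes the misfit surface the optimizer descends, which is exactly the effect on convergence speed the study quantifies.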
A Coupled Geomechanics and Flow Model for Enhanced Gas Recovery and CO2 Storage in Shale Reservoirs
SummaryA fully coupled multicomponent flow and geomechanics model, which incorporates viscous flow, Knudsen diffusion, molecular diffusion, multi-component adsorption/desorption and geomechanical effects, is developed to study enhanced gas recovery and CO2 storage in fractured shale reservoirs. Specifically, an efficient hybrid model, which consists of a single porosity model, a multiple porosity model and the Embedded Discrete Fracture Model (EDFM), is adopted to model multiscale fractures. In the flow equations, the Peng-Robinson EOS, the extended Langmuir isotherm and Fick's Law are adopted. In the geomechanical portion, nonlinear proppant deformation is considered. Then, a mixed space discretization (i.e., finite volume method for flow and stabilized XFEM for geomechanics) and a modified fixed-stress sequential implicit method are applied to solve the proposed model. The robustness of the proposed method is demonstrated through several numerical examples, and a comprehensive analysis of the mechanisms for enhanced gas recovery and CO2 storage in fractured shale gas reservoirs is carried out, which takes into account Knudsen diffusion, molecular diffusion, multi-component adsorption/desorption, nonlinear proppant deformation, and different injection strategies including a huff-n-puff scenario. Results show that CO2 injection is an effective approach for enhancing shale gas recovery, and the injected CO2 can be stored in free, adsorbed, and dissolved states. We also find that stimulated reservoir volume, natural/induced fractures, hydraulic fractures, various transport/storage mechanisms and injection strategies have significant effects on enhanced gas recovery and CO2 storage in fractured shale reservoirs.
Deep-Learning-Based 3D Geological Parameterization and Flow Prediction for History Matching
Authors M. Tang, Y. Liu and L. DurlofskySummaryIn recent work we have developed deep-learning-based procedures for parameterizing complex 2D geomodels (Liu et al., 2019) and for predicting the detailed flow responses of such systems (Tang et al., 2019). The parameterization method, referred to as CNN-PCA, entails the use of principal component analysis in combination with convolutional neural networks, while the flow surrogate model involves the application of a recurrent residual U-Net procedure. The combination of these two capabilities enables efficient history matching to be performed. This is because the variables that must be determined during data assimilation correspond to the relatively small set of parameters associated with the CNN-PCA description, and the requisite flow simulations can all be accomplished using the deep-learning-based surrogate model. The overall methodology has been successfully applied to 2D channelized systems (as shown in Tang et al., 2019).
In this work, we extend these capabilities to 3D systems. The 3D CNN-PCA procedure differs from the 2D method in that we no longer use a style loss term (as we did in 2D), but instead apply a supervised learning approach. With this method we train the network using PCA realizations along with their corresponding (desired) channelized representations. This treatment, in common with our 2D procedure, leads to faster training than some other approaches since the underlying PCA representation already captures aspects of the spatial statistics (covariance). The 3D recurrent R-U-Net consists of 3D convolutional and recurrent (convLSTM) neural networks, which are designed to capture the spatial and temporal information associated with dynamic systems. This approach shows advantages over autoregressive procedures. The recurrent R-U-Net is trained on O(3000) randomly generated 3D geomodels and their corresponding (simulated) dynamic 3D state maps; e.g., saturation and pressure at a set of time steps.
Results are first presented for each method individually. Specifically, we validate the geological parameterization procedure by demonstrating that the prior flow statistics, for a 3D channelized system, generated using CNN-PCA agree closely with those from (reference) geostatistical models. The recurrent R-U-Net surrogate flow model is validated through detailed comparisons of oil-water flow results for particular (new) realizations and through error statistics for an ensemble of new models. Finally, a 3D history matching example, in which the two procedures are used in combination, will be presented.
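The PCA backbone underlying CNN-PCA can be sketched as follows; this shows only the linear PCA parameterization m = m_mean + U_l Σ_l ξ / √(n-1) that the CNN then post-processes into channelized models (a generic sketch, not the authors' network):

```python
import numpy as np

def pca_realization(ensemble, xi):
    """Generate a new geomodel from PCA of an ensemble.

    ensemble: array whose columns are flattened prior geomodels
    xi:       low-dimensional latent vector (length = retained components)

    Returns m_mean + U_l Sigma_l xi / sqrt(n - 1); in CNN-PCA, a trained
    CNN maps this smooth PCA model to a realistic channelized one."""
    mean = ensemble.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(ensemble - mean, full_matrices=False)
    l = len(xi)
    return (mean.ravel()
            + U[:, :l] @ (s[:l] * np.asarray(xi, dtype=float))
            / np.sqrt(ensemble.shape[1] - 1))
```

During history matching, only the small vector ξ is updated, which is what makes data assimilation over the reduced parameterization efficient.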
A Derivative-Free Trust-Region Algorithm for Well Control Optimization
Authors T. Silva, M. Bellout, C. Giuliani, E. Camponogara and A. PavlovSummaryA Derivative-Free Trust-Region (DFTR) algorithm is proposed to solve the well control optimization problem. Derivative-Free (DF) methods are often a practical alternative because gradients may not be available and/or are unreliable due to cost function discontinuities, e.g., caused by enforcement of simulation-based constraints. However, the effectiveness of DF methods for solving realistic cases is heavily dependent on an efficient sampling strategy, since cost function calculations often involve time-consuming reservoir simulations. The DFTR algorithm samples the cost function space around an incumbent solution and builds a quadratic approximation model, valid within a bounded region (the trust region). A minimization of the quadratic model guides the method in its search for descent. Crucially, because of the curvature information provided by the model-based routine, the trust-region approach is able to conduct a more efficient search than other sampling methods, e.g., direct-search approaches.
DFTR is implemented within FieldOpt, an open-source framework for field development optimization that provides flexibility with respect to problem parameterization and parallelization capabilities. DFTR is tested on the synthetic Olympus case against two other types of methods commonly applied to production optimization: a direct-search method (Asynchronous Parallel Pattern Search) and a population-based method (Particle Swarm Optimization). Current results show DFTR has promising convergence properties. In particular, the method is seen to reach fairly good solutions using only a few iterations. This feature can be particularly attractive for practitioners who seek ways to improve production strategies while using full-fledged models. Future work will focus on wider application of the algorithm to more complex field development problems such as joint problems and ICD optimization, and on extensions to the algorithm to deal with multiple geological realizations and output constraints.
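The model-building step of such a trust-region method can be sketched as a least-squares fit of a quadratic m(x) = c + gᵀx + ½ xᵀHx to sampled cost values; this is a generic two-dimensional sketch of the idea, not the FieldOpt implementation:

```python
import numpy as np

def fit_quadratic_model(points, values):
    """Fit m(x, y) = c + g.x + 0.5 x^T H x to sampled cost-function values.

    points: list of (x, y) sample locations around the incumbent
    values: cost function value at each sample
    Returns (c, g, H); minimizing m within the trust region then
    supplies the curvature-informed trial step."""
    rows = [[1.0, x, y, 0.5 * x * x, x * y, 0.5 * y * y] for x, y in points]
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(values), rcond=None)
    c, gx, gy, hxx, hxy, hyy = coef
    return c, np.array([gx, gy]), np.array([[hxx, hxy], [hxy, hyy]])
```

With at least six well-placed samples in 2-D the model is determined, which is why sample placement, not gradient availability, governs the efficiency of the method on simulation-based costs.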
Optimizing Sealing of CO2 Leakage Paths with Microbially Induced Calcite Precipitation Under Uncertainty
Authors S. Tveit, P. Pettersson and D. Landa MarbanSummaryIn large-scale CO2 sequestration, critical pressure build-up can occur due to the high injection rates, which in the worst case can lead to leakage paths for the CO2 through caprock fractures and/or reactivated faults. A novel leakage mitigation technology is microbially induced calcite precipitation (MICP), where microorganisms are injected to accelerate production of the sealing agent, calcite, from calcium and urea. The MICP technology has been validated on multiple scales, from laboratory to meter-scale experiments. On the field scale, the situation can be challenging since the leakage path(s) may be tens of meters from the injection well, and the subsurface parameters controlling the flow, chemical reactions, and microbial processes can be uncertain.
In this work, we consider the optimization problem of maximising sealing of leakage paths in the presence of uncertainty. The control variables can, e.g., be injection rates and periods, or concentrations of chemical and biological species, while the uncertain parameters can, e.g., be permeability and porosity. To quantify the effect of parameter uncertainty on control variables, an accelerated Monte Carlo (aMC) method is used, which aims to accelerate the slow convergence of the standard MC method. Even with aMC methods, a significant number of samples of the objective function is needed, that is, multiple runs of the simulator are required.
The MICP process at field scale is described by coupled advection-diffusion-reaction, microbial, and rock-altering equations that are associated with a high computational cost to solve. To alleviate the high computational cost, we generate a surrogate (or proxy) model of the original objective function that can be evaluated at negligible cost. The surrogate model is based on the sparse hierarchical multi-linear interpolation (SI) method, where the objective function is approximated to a desired accuracy using significantly fewer function evaluations than traditional interpolation methods. Hence, the computational cost of generating and sampling with the surrogate model is typically lower than the cost of sampling with the original objective function. The novel SI-aMC method is applied to different test cases, showing the computational efficiency and accuracy of uncertainty estimates for field-scale MICP optimization problems.
A Mathematical Model for Scaling and Wettability Alteration in ASP Flooding
SummaryDaqing Oilfield has carried out large-scale ASP (Alkali-Surfactant-Polymer) flooding applications to further increase oil recovery; annual oil production from ASP flooding has exceeded 4 million tons since 2016. Production data and theoretical studies have demonstrated that the chemicals in the ASP system react with the mineral components in the reservoir rock, resulting in scaling and wettability alteration that influence the oil recovery process of ASP flooding. These chemical reaction processes are complicated, making it difficult for numerical simulation to capture the chemical mechanisms of ASP flooding accurately. Developing a mathematical model that simulates the mechanisms of scaling and wettability alteration is a major challenge.
This paper presents a series of laboratory experiments studying the reactions between the chemicals in the ASP system and the mineral components in the reservoir rock, showing that the ASP chemicals corrode the reservoir rock, generating scaling materials and causing scaling reactions. Based on a comprehensive analysis of the factors affecting scaling, a kinetic model for the reaction between the ASP chemicals and the rock minerals has been constructed. We have also measured the relative permeability curves before and after wettability alteration to model the mechanism of wettability alteration. All these mathematical models have been incorporated into a chemical flooding simulator.
We have conducted history matching simulations for several ASP flooding projects in Daqing Oilfield. The history matching results from the simulator with the scaling and wettability alteration models agree with the observation data much better than those from the simulator without them, showing that scaling and wettability alteration play an important role in ASP flooding, and that ASP flooding numerical simulation should take these mechanisms into account.
Machine-Learning Informed Prediction of Linear Solver Tolerance for Non-Linear Solution Methods in Numerical Simulation
Authors E. Oladokun, S. Sheth, T. Jönsthövel and K. NeylonSummaryNumerical simulators model evolution of state variables over time and space. The governing equations are often highly non-linear and exhibit significant complexity. Due to the lack of closed-form analytical solutions, nonlinear fixed-point iterative methods – most commonly the Newton method - are required to solve these problems. The key component is constructing the Newton update by solving the Jacobian, a large sparse system of linear equations.
Most state-of-the-art linear solvers for large sparse systems are based on Krylov methods, e.g. the GMRES method [Saad and Schultz, 1986]. Solving the linear system is often a significant part of the overall computational effort; any efficiency improvement can substantially speed up the simulation. A major challenge with iterative linear solvers is how to determine the stopping criteria. A tight linear convergence tolerance (η) ensures a good linear solution but is more computationally expensive and might not necessarily affect the quality of the corresponding non-linear solution update, a phenomenon called oversolving of the Newton equation. Eisenstat and Walker [1994] proposed an approach that dynamically selects η based on the non-linear state of the system. This method is very successful in reducing the number of linear iterations and thus reducing oversolving, but can come at the expense of more non-linear iterations, such that the overall effect is detrimental to performance.
In this work, we propose a new algorithm to predict η such that the total number of linear and non-linear iterations is minimized, leading to a more robust performance improvement and reduced simulation time. We derive an estimate for η using non-linear system state variables such as residual norms and Newton iteration number, coupled with linear system information such as an approximation of the condition number based on Ritz values. All the information used is readily available in a standard reservoir simulator. We augment this estimate with a selection strategy based on machine learning analysis and algorithms. Furthermore, we compare with results using an alternative heuristic developed from insights gained through the machine learning analysis.
We apply our methods to a variety of problems in reservoir simulation ranging from heterogeneous 2D two-phase models to 3D thermal compositional models. We observe 30–50% reduction in linear iterations without increasing the non-linear iteration count - which means faster simulations without compromising accuracy.
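For reference, the Eisenstat-Walker baseline the new predictor is compared against can be sketched as the classical "Choice 2" forcing term with its usual safeguards; the default constants below are the commonly used ones, not values from this paper:

```python
def eisenstat_walker_tolerance(res_norm, prev_res_norm, prev_eta,
                               gamma=0.9, alpha=2.0,
                               eta_min=1e-6, eta_max=0.9):
    """Eisenstat-Walker 'Choice 2' forcing term for inexact Newton:
    eta_k = gamma * (||F_k|| / ||F_{k-1}||)^alpha, safeguarded so the
    tolerance cannot drop too abruptly from the previous one, then
    clamped to [eta_min, eta_max]."""
    eta = gamma * (res_norm / prev_res_norm) ** alpha
    threshold = gamma * prev_eta ** alpha
    if threshold > 0.1:  # safeguard is active only when it is not tiny
        eta = max(eta, threshold)
    return min(max(eta, eta_min), eta_max)
```

When the non-linear residual is shrinking fast, the formula loosens the linear tolerance and avoids oversolving; the paper's contribution is to replace and augment this heuristic with a learned predictor that also accounts for linear-system conditioning.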
Glimm and Finite Volume Schemes for Polymer Flooding Model with and Without Inaccessible Pore Volume Law
Authors G. Dongmo, B. Braconnier, C. Preux, Q. Tran and C. BerthonSummaryWe investigate the numerical simulation of the polymer flooding model without an IPV law [1] and with the IPV percolation law [2]. The two mathematical models (with and without the percolation law) are weakly hyperbolic. They include a resonance region where strict hyperbolicity is lost.
Providing exact solution to Riemann problems and devising accurate numerical schemes is a challenging task.
Without the IPV law, the mathematical model coincides with the Keyfitz-Kranzer model [3]. For all initial data, a unique solution to the Riemann problem can be defined thanks to Isaacson and Temple's entropy condition [1], which is imposed in addition to Lax's condition. Our theoretical contribution is to prove that the two models (with and without the IPV percolation law) are equivalent for both smooth and discontinuous solutions, up to a change of variables. Finally, we are able to provide a unique solution of the Riemann problem for both models.
Regarding our numerical simulation contributions, we propose second-order finite volume schemes based on the Godunov scheme and a new Suliciu-type [4] relaxation scheme which can be applied to any IPV law. For the two mathematical models, we perform a mesh convergence study, compute the errors of the approximate solutions relative to the exact solutions, and then determine the effective order of those schemes. Because of the system's resonance and non-linearity, the so-called first- and second-order schemes have effective orders of about 0.24 and 0.33 at contact discontinuities, both about 0.5 at shocks, and 0.66 and 0.86 at rarefaction waves.
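The effective orders quoted here are, in practice, slopes of log-error against log-mesh-size over a refinement study; a minimal sketch with synthetic, purely illustrative error values (not the paper's data):

```python
import math

def effective_order(h, err):
    """Least-squares slope of log(err) vs log(h) in a mesh-refinement study."""
    lx = [math.log(v) for v in h]
    ly = [math.log(v) for v in err]
    n = len(h)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((x - mx) * (y - my) for x, y in zip(lx, ly))
    den = sum((x - mx) ** 2 for x in lx)
    return num / den

# synthetic first-order data: err proportional to h
h = [0.1, 0.05, 0.025]
err = [0.02, 0.01, 0.005]
```

A degraded slope such as 0.24 on a contact discontinuity is how the loss of formal order at resonance shows up in such a study.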
Because of this lack of accuracy, we implement the Glimm scheme for the two mathematical models. The obtained results are in good agreement with the exact solutions. Shocks and contact discontinuities are resolved with at most three points.
[1] E. L. Isaacson, J. B. Temple (1986), “Analysis of a singular hyperbolic system of conservation laws,” J. Diff. Eqs., vol 65, no 2, pp 250–268.
[2] G. A. Bartelds, J. Bruining, J. Molenaar (1997), “The modeling of velocity enhancement in polymer flooding,” Transp. Porous Media, vol 26, no 1, pp 75–88.
[3] B. L. Keyfitz and H. C. Kranzer (1980), “A system of non-strictly hyperbolic conservation laws arising in elasticity theory,” Archive for Rational Mechanics and Analysis, vol 72, no 3, pp 219–241.
[4] I. Suliciu (1988), “On the thermodynamics of fluids with relaxation and phase transitions,” Int. J. Engin. Sci., vol 36, pp 921–947.
A Novel Approach to Multilevel Data Assimilation
Authors M. Nezhadali, T. Bhakta, K. Fossum and T. MannsethSummaryThere has been increasing interest in multi-fidelity modelling within computational statistics research in recent years. Multilevel ensemble-based data assimilation (MLDA), which takes advantage of multi-fidelity modelling, is a novel approach for reservoir history matching. This method has been proposed to overcome the sampling errors encountered in conventional ensemble-based data assimilation techniques. Ensemble-based methods have been successful in history matching of large cases, but limited computational resources normally confine the ensemble size to about 100, which can lead to sampling error. To address this problem, localization has been proposed; it handles the problem of non-local spurious correlations but does not allow for true non-local correlations. The basic concept of MLDA revolves around the allocation of resources for computing models on a hierarchy of accuracy and computational cost. Using models with a lower computational cost enables a significant increase in the ensemble size, which brings the opportunity to trade an appropriate amount of computational accuracy for better statistical accuracy. In this research, the hierarchy of computational cost is established using a variation of spatial resolutions in the simulation models, and a new scheme called Simultaneous Spatial Multilevel Data Assimilation is investigated on a reservoir model. This method is designed to assimilate inverted seismic data in a multilevel manner. Accordingly, a set of different spatial resolutions of the model is created, and an ensemble of models and their corresponding inverted seismic data are considered for each resolution. The simulations are run for all levels, and an independent update is performed on each level using the Ensemble Smoother (ES).
The reduction of computational cost at coarser resolutions entails a multilevel error, which can be quantified and accounted for by comparison with the simulations at the finest level. Finally, a cumulative statistical analysis over all ensembles is performed to assess the data assimilation performance. Results obtained from two variants of the new scheme are evaluated and compared to ES with localization and a standard ensemble size.
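The per-level update described here is the standard Ensemble Smoother with perturbed observations; a minimal sketch (array names and shapes are illustrative, not the authors' implementation):

```python
import numpy as np

def es_update(M, D, d_obs, C_e, seed=0):
    """One Ensemble Smoother update with perturbed observations.
    M : (Nm, Ne) parameter ensemble, D : (Nd, Ne) predicted data,
    d_obs : (Nd,) observations, C_e : (Nd, Nd) observation-error covariance."""
    Ne = M.shape[1]
    Mc = M - M.mean(axis=1, keepdims=True)
    Dc = D - D.mean(axis=1, keepdims=True)
    Cmd = Mc @ Dc.T / (Ne - 1)                  # parameter-data cross-covariance
    Cdd = Dc @ Dc.T / (Ne - 1)                  # predicted-data covariance
    E = np.random.default_rng(seed).multivariate_normal(
        np.zeros(len(d_obs)), C_e, Ne).T        # perturbed observations
    K = Cmd @ np.linalg.inv(Cdd + C_e)          # Kalman-type gain
    return M + K @ (d_obs[:, None] + E - D)
```

In an MLDA setting this update would be run independently per resolution level with that level's ensemble, which is exactly where the larger ensembles afforded by cheaper models reduce the sampling error in Cmd and Cdd.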
Bayesian Inference of Covariance Parameters in Spectral Approach to Geostatistical Simulation
Authors N. Ismagilov, I. Azangulov, V. Borovitskiy, M. Lifshits and P. MostowskySummaryThe spectral simulation approach (described in Ismagilov and Lifshits (ECMOR XVI)) is a relatively new geostatistical method of stochastic reservoir property simulation. It is based on Fourier analysis of well log data and simulation of the Fourier expansion coefficients in the interwell space. The key advantage of this method is its ability to automatically recognize and reproduce vertical non-stationarities observed in well data (Ismagilov et al. (ATCE 2019)). This comes at the price of having many parameters: while usual geostatistical approaches like kriging or sequential Gaussian simulation require estimating one covariance function or variogram (in practice, the estimated parameters are the variogram model type and ranges in three directions), the spectral approach requires estimating many of them (typically, 100–200 covariance functions). Obviously, automatic covariance estimation becomes crucial in this setting.
While assuming parametric models for the aforementioned covariance functions and estimating their parameters by maximizing the likelihood works reasonably well in practice, this strategy has some drawbacks. First, when the likelihood surface turns out to be multi-modal or flat, point estimation of the parameters may lead to problems such as incorrect uncertainty estimation or even the choice of a wrong model. Second, maximum likelihood estimation usually does not provide a way to incorporate prior knowledge about the parameters; a typical example is constraining the resulting variogram range to lie within geologically reasonable limits.
We argue that Bayesian inference of parameters is a way to overcome both these challenges. Treating covariance parameters as random variables avoids limitations of deterministic point estimations while introducing prior distributions for parameters is the most natural way of incorporating the prior knowledge.
We develop and implement in software a version of the spectral approach where covariance parameters are treated in a Bayesian way. We show, via computations on practical examples, that Bayesian inference makes it possible to build better models in cases with complex likelihood surfaces and to account for prior knowledge about covariance parameters.
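As a one-dimensional illustration of this Bayesian treatment, the sketch below samples the range of an exponential covariance with a random-walk Metropolis chain, using a flat prior to encode the "geologically reasonable limits" mentioned above. The covariance model, locations and all parameter values are hypothetical, not the authors' software:

```python
import numpy as np

def log_post(r, z, x, lo=5.0, hi=50.0):
    """Log-posterior of the range r of an exponential covariance
    C(h) = exp(-h/r) for data z at 1-D locations x, with a flat prior
    on [lo, hi] encoding geologically reasonable limits."""
    if not (lo < r < hi):
        return -np.inf
    h = np.abs(x[:, None] - x[None, :])
    C = np.exp(-h / r) + 1e-9 * np.eye(len(x))   # jitter for stability
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + z @ np.linalg.solve(C, z))

def metropolis(z, x, n=2000, step=2.0, start=20.0, seed=0):
    """Random-walk Metropolis sampling of the covariance range."""
    rng = np.random.default_rng(seed)
    cur, lp = start, log_post(start, z, x)
    out = []
    for _ in range(n):
        prop = cur + step * rng.standard_normal()
        lp_prop = log_post(prop, z, x)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            cur, lp = prop, lp_prop
        out.append(cur)
    return np.array(out)
```

Unlike a maximum-likelihood point estimate, the chain returns a full posterior over the range, so a flat or multi-modal likelihood shows up as posterior spread rather than an arbitrary single value.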
Adaptive Nonlinear Solver for a Discrete Fracture Model in Operator-Based Linearization Framework
Authors K. Mansour Pour and D. VoskovSummarySimulation of compositional problems in hydrocarbon reservoirs with complex heterogeneous structure requires adopting stable numerical methods that rely on an implicit treatment of the flux term in the conservation equations. The discrete approximation of the convection term in the governing equations is highly nonlinear due to the complex properties complemented with a multiphase flash solution. Consequently, robust and efficient techniques are needed to solve the resulting nonlinear system of algebraic equations. The solution of the compositional problem often requires the propagation of the displacement front through multiple control volumes within a single simulation timestep. Coping with this issue is particularly challenging in complex subsurface formations such as fractured reservoirs. In this study, we present a robust nonlinear solver based on a generalization of the trust-region technique to compositional multiphase flows. The approach is designed to embed the newly introduced Operator-Based Linearization technique and is grounded on the analysis of multi-dimensional tables related to parameterized convection operators. We segment the parameter space of the nonlinear problem into a set of trust regions where the convection operators maintain second-order behaviour (i.e., they remain positive or negative definite). We approximate these trust regions in the solution process by detecting the boundaries of convex regions via analysis of the directional derivative. This analysis is performed adaptively while tracking the nonlinear update trajectory in the parameter space. The proposed nonlinear solver locally constrains the update of the overall compositions across the boundaries of convex regions. In addition, we enhance the performance of the nonlinear solver by exploring diverse preconditioning strategies for compositional problems.
The proposed nonlinear solution strategies have been validated for both miscible and immiscible gas injection problems of practical interest.
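The core trust-region idea, constraining a Newton update so that it does not cross a boundary where the flux operator changes curvature in a single step, can be illustrated for one scalar unknown; this is a strong simplification of the multi-dimensional parameter-space analysis described above:

```python
def chopped_update(s, ds, inflections):
    """Limit a Newton update ds on a saturation-like unknown s so that it
    never crosses an inflection point of the S-shaped flux operator in a
    single step (the 1-D analogue of a convex trust region)."""
    s_new = s + ds
    # check trust-region boundaries nearest to the current state first
    for si in sorted(inflections, key=lambda v: abs(v - s)):
        if (s - si) * (s_new - si) < 0.0:   # update would cross this boundary
            s_new = si                      # land on the trust-region edge
            break
    return min(1.0, max(0.0, s_new))
```

Within each region the operator is convex or concave, so Newton converges reliably; the chop only activates when an update would jump across a curvature change.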
Accounting for Model Discrepancy in Uncertainty Analysis by Combining Numerical Simulation and Bayesian Emulation Techniques
Authors H. Nandi Formentin, I. Vernon, M. Goldstein, C. Caiado, G. Avansi and D. SchiozerSummaryModel discrepancy quantifies the unavoidable differences between a physical system and its corresponding computer model. Incomplete information, simplifications and lack of knowledge about the physical state give rise to model discrepancy. Misevaluation of model discrepancy exposes decision-makers to overconfident and biased forecasts, a risky situation. We describe a methodology to account for one type of model discrepancy in Bayesian History Matching for Uncertainty Reduction (BHMUR), an approach that combines reservoir simulation and emulation techniques to find all reservoir scenarios consistent with observed data and the uncertainties in the problem.
Our methodology is an alternative and more rigorous tool to account for the model discrepancy caused by errors in target data while performing uncertainty analysis. Target data used in the historical period contain observational errors that propagate through the simulator, causing one type of model discrepancy. We follow a systematic procedure for uncertainty reduction previously presented by the authors, expanding the step dedicated to model discrepancy. Our methodology: (1) obtains a training set by evaluating model discrepancy in multiple scenarios of the search space, an expensive simulation-based process; (2) characterises the model discrepancy across the entire search space via Bayesian emulators; and (3) integrates the model discrepancy into the BHMUR via bias and covariance structures.
The methodology is demonstrated in a case study: 27 valid emulators for model discrepancy were constructed and integrated into the implausibility analysis and uncertainty reduction process. Two perspectives showed the impact of this type of model discrepancy. Firstly, neglecting model discrepancy resulted in the entire search space being implausible, an indicator that the problem characterisation and uncertainties should be reviewed; by contrast, when considering the model discrepancy, the non-implausible region comprises 8% of the search space. Secondly, we demonstrated the uncertainty reduction in the historical and forecasting periods. A key finding is that the error in target data results in a substantial model discrepancy over many other simulation outputs, being both time and location dependent.
We advance the applicability of BHMUR by proposing a statistically consistent tool to account for one type of model discrepancy in the uncertainty quantification process. We showed that errors in target data cause model discrepancy with a complex structure. Appropriate consideration of model discrepancy is vital to (a) identify the whole class of solutions consistent with historical data and the uncertainties in the problem; (b) appropriately represent the physical system; and (c) avoid making decisions based on over-confident and biased information, while enabling more reliable production forecasts.
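In Bayesian history matching, model discrepancy enters through the implausibility measure; a minimal sketch of the standard univariate form, in which neglecting the discrepancy variance shrinks the denominator and can wrongly rule out the whole search space:

```python
import math

def implausibility(z, mu_f, var_em, var_md, var_obs):
    """Univariate implausibility I(x) = |z - E[f(x)]| / sqrt(total variance),
    where z is the observed datum, mu_f/var_em the emulator mean/variance,
    var_md the model-discrepancy variance and var_obs the observation-error
    variance. Points with I(x) > 3 are usually deemed implausible
    (the three-sigma rule)."""
    return abs(z - mu_f) / math.sqrt(var_em + var_md + var_obs)
```

Setting var_md to zero for the same mismatch strictly increases I(x), which is the mechanism behind the "all of the search space implausible" result reported above.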
The Express Method of Well-Control Optimization for the Associated Gas Recycling Process
Authors V. Babin, N. Glavnov and E. ShelSummaryThis work describes a semi-analytical technique to maximize the additional oil production and economic efficiency obtained from recycling associated petroleum gas, by selecting the optimal sequence of injection wells and the distribution of the injected gas volume between them. The technique is applicable to fields with external gas supplies from other layers of the field or from other fields. Here, the volume of external supply is considered limited, which makes the most effective design of the recycling strategy a requirement and thereby increases the value of solving the optimization problem.
The mathematical model of the recycling process is derived using the main characteristics of the recycling process for each injection pattern, which can be obtained, for example, from reservoir simulation. Additionally, it is assumed that the interaction between patterns can be neglected. The problem of maximizing the additional oil production is then reduced to an application of the method of Lagrange multipliers, which determines the volume of gas injected into each pattern. In the next step, discounted additional cumulative oil production, as the main driver of economic efficiency, is maximized. The problem is reformulated as a search for the extremum of a functional and solved by dynamic programming.
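The equal-marginal-productivity condition behind the Lagrange step can be sketched as follows; the exponential response curves and their coefficients are purely illustrative stand-ins for the pattern characteristics obtained from reservoir simulation:

```python
import math

def allocate(G, a, b, tol=1e-10):
    """Split a limited gas volume G between patterns to maximize total
    additional oil sum_i a_i*(1 - exp(-b_i*g_i)) (illustrative concave
    response curves). The Lagrange multiplier lam equalizes marginal
    productivity: g_i = max(0, ln(a_i*b_i/lam)/b_i); lam is found by
    bisection so that the allocations sum to G."""
    def total(lam):
        return sum(max(0.0, math.log(ai * bi / lam) / bi)
                   for ai, bi in zip(a, b))
    lo, hi = 1e-12, max(ai * bi for ai, bi in zip(a, b))
    while hi - lo > tol:                 # total(lam) decreases in lam
        mid = 0.5 * (lo + hi)
        if total(mid) > G:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return [max(0.0, math.log(ai * bi / lam) / bi) for ai, bi in zip(a, b)]
```

At the optimum every active pattern has the same marginal oil gain per unit of injected gas, which is why only the cumulative external supply matters for the production-maximization variant of the problem.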
As a result, for maximization of oil production, only the cumulative volumes of external supplies injected into each pattern determine the solution. In contrast, when optimizing economic efficiency, the optimal sequence of patterns and injected volumes strongly depends on the dynamics of external supplies. Additionally, the optimal strategy implies that patterns have to be involved in a sequential injection mode. The use of the optimal strategy increases cumulative oil production by 20–30% in comparison with the expert one. The method does not require significant computational cost compared with black-box optimization based on a reservoir model, which allows the technique to be used for fast optimization of operational control.
The topic of gas injection is widely represented in the literature. However, the optimization is commonly based on expert estimates or on multivariate calculations using a full-scale compositional reservoir model. The method presented in this work requires only a relatively small set of reservoir model calculations; the basic effect is achieved by analytical and semi-analytical operations. Also, the current literature does not pay enough attention to cases with limited external gas supplies, where the recycling strategy has a key impact on profitability.
Evaluation of A Data-Driven Flow Network Model (FlowNet) for Reservoir Prediction and Optimization
Authors A. Kiærr, O.P. Lødøen, W. De Bruin, E. Barros and O. LeeuwenburghSummaryWe describe and evaluate a physics-based proxy model approach for reservoir prediction and optimization. It builds on the recent development of so-called flow-network models, which represent flow paths between wells by discrete 1D grids with permeability and pore volume properties. These types of models represent an alternative to capacitance-resistance and correlation-based models and have the benefit of allowing for all the physics supported by regular 3D grid-based commercial simulators. The new model differs from a previously proposed model in that we include additional nodes in the network that allow for more, and indirect, flow paths between wells, as well as extra nodes to represent an aquifer.
We describe the structure of our flow network and investigate the impact of design and training parameters on the performance of the network, both in history matching and in prediction mode. Examples include the number and placement of network nodes, the treatment of aquifers, and the size and sampling of prior model property values. We distinguish between the accuracy of the history match and the generalizability of the model by cross-validating the flow network performance on future well control strategies that differ from those encountered during the history period. Using this procedure, we aim to prevent overfitting of the model while ensuring sufficient predictive power. Results are presented for experiments based on phase rate and bottom hole pressure measurements and predictions generated with the Brugge benchmark model, which is used as a synthetic truth.
We subsequently present a first application of flow network models for well control optimization under uncertainty. To this end we employ a stochastic simplex gradient-based optimization approach and demonstrate that strategies that are expected to deliver improved NPV can be identified at much lower computational cost and within a much shorter time frame than would be required otherwise.
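A stochastic simplex gradient of the kind used here estimates the search direction by regressing objective changes on random control perturbations; a minimal sketch on a toy concave "NPV" surrogate (the objective function, step sizes and perturbation counts are all illustrative):

```python
import numpy as np

def simplex_gradient(J, u, n_pert=20, sigma=0.1, seed=0):
    """Stochastic simplex gradient: regress objective differences on random
    control perturbations (least squares), as in EnOpt/StoSAG-type schemes."""
    rng = np.random.default_rng(seed)
    dU = sigma * rng.standard_normal((n_pert, u.size))   # control perturbations
    dJ = np.array([J(u + d) - J(u) for d in dU])         # objective changes
    g, *_ = np.linalg.lstsq(dU, dJ, rcond=None)          # regression gradient
    return g

# illustrative 'NPV' surrogate: concave quadratic with optimum at (1, 2)
J = lambda u: -((u[0] - 1.0) ** 2 + (u[1] - 2.0) ** 2)
u = np.zeros(2)
for k in range(100):
    u = u + 0.1 * simplex_gradient(J, u, seed=k)         # steepest-ascent step
```

Each gradient estimate costs n_pert objective evaluations, which is why a cheap flow-network proxy makes this kind of optimization loop affordable.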
Simulation of Foam-Assisted CO2 Storage in Saline Aquifers
SummaryGeological storage of CO2 is a crucial emerging technology to reduce anthropogenic greenhouse gas emissions. Due to the buoyancy of the injected gas and the complex geology of subsurface reservoirs, most injected CO2 either rapidly migrates to the top of the reservoir or fingers through high-permeability layers due to instability in the convection-dominated displacement. Both of these phenomena reduce the storage capacity of subsurface media. CO2-foam injection is a promising technology for reducing gas mobility and increasing trapping within the swept region in deep brine aquifers. A consistent thermodynamic model, based on a combination of a classic cubic equation of state (EOS) for gas components with an activity model for the aqueous phase, has been implemented to describe the phase behavior of the CO2-brine system with impurities. This phase-behavior module is combined with a representation of foam by an implicit-texture (IT) model with two flow regimes. This combination can accurately capture the complicated dynamics of miscible CO2 foam at various stages of the sequestration process. The Operator-Based Linearization (OBL) approach is applied to reduce the nonlinearity of the CO2-foam problem by transforming the discretized conservation equations into space-dependent and state-dependent operators. Surfactant-alternating-gas (SAG) injection is applied to overcome injectivity problems related to pressure build-up in the near-well region. In this study, a 3D large-scale heterogeneous reservoir is used to examine CO2-foam behaviour and its effects on CO2 storage. Simulation studies show that foams can effectively reduce gas mobility by trapping gas bubbles and inhibit CO2 from migrating upward in the presence of gravity, which in turn remarkably improves the sweep efficiency and opens the unswept region for CO2 storage.
We also study how surfactant injection and forming of foam affect enhanced dissolution of CO2 at various thermodynamic conditions. This work provides a possible strategy to develop robust and efficient CO2 storage technology.
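Implicit-texture foam models of this kind scale the gas mobility by a factor FM built from algebraic functions of the local state; a minimal sketch keeping only the water-saturation (dry-out) function of a STARS-type formulation, with purely illustrative parameter values:

```python
import math

def foam_mobility_factor(Sw, fmmob=1e4, fmdry=0.3, epdry=1000.0):
    """Gas mobility reduction factor FM of a STARS-type implicit-texture
    foam model, keeping only the dry-out function F_dry:
        FM = 1 / (1 + fmmob * F_dry)
        F_dry = 0.5 + atan(epdry * (Sw - fmdry)) / pi
    Above fmdry (wet, low-quality regime) foam is strong, FM ~ 1/fmmob;
    as Sw drops below fmdry the foam collapses and FM tends toward 1."""
    F_dry = 0.5 + math.atan(epdry * (Sw - fmdry)) / math.pi
    return 1.0 / (1.0 + fmmob * F_dry)
```

The abrupt change of FM around fmdry is what produces the two flow regimes mentioned above, and is also the reason SAG injection relieves near-well pressure build-up: the near-well region dries out, the foam weakens, and injectivity recovers.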
Application of Sector Modeling Approach in a Probabilistic Study of a Giant Reservoir
Authors L.O. Pires, V.E. Botechia and D. SchiozerSummaryComputational requirements may be one of the most relevant parameters in the model-based decision analysis process for giant and complex reservoirs, and may make probabilistic studies very time consuming. One proposal to work around this problem is to divide the reservoir model into sectors and use them as isolated models (the Sector Modeling approach) during the decision analysis processes, assuming that the representativeness of the isolated sectors is acceptable. The study case is a benchmark of a giant offshore carbonate reservoir, analogous to pre-salt reservoirs in Brazil, which was divided into four sectors representing four production regions with separate production systems (platforms), each one starting in a different period.
A probabilistic study is performed to evaluate whether the behavior of the combination of the Isolated Sector models (ΣSisolated) is representative of the Full Field (FF) models. The behavior of Sector 1 is also compared between its Isolated Sector models (S1) and the FF models. This study considers the use of 100 geological scenarios of the UNISIM-III model, combined with scalar uncertainties (relative permeability curves, fault transmissibility, PVT, well productivity/injectivity).
In this paper, we propose a methodology to evaluate differences between the two sets of models. Results show a good correlation between the behavior of Sector 1 in the S1 and FF models. The ΣSisolated models are representative of the overall behavior of the FF models, presenting strong correlations between the two model sets. However, there is a bias toward conservative scenarios, since cumulative oil production and Net Present Value (NPV) are lower compared to the FF models. The average NPV relative difference is 13%, and thirteen models present considerable relative differences between the two sets of models (higher than 20%). A deeper study is performed using the models where the highest and lowest NPV relative differences are observed, to identify the main reasons for those differences. We also evaluate whether the behavior of the ΣSisolated models is representative of the FF models when performing risk analysis quantification and selection of representative models.
To apply the Sector Modeling approach in the study case, it is necessary to weigh the considerable computational gain of using the Isolated Sector models against the existence of models with considerable relative differences. Thus, if one chooses to adopt this methodology in the decision-making process, isolated sector models can be used during optimization processes that require a high number of simulations, while the final decision should be based on the results observed for the FF models.
Modified RAND Algorithms for Multiphase Geochemical Reactions
Authors F. De Azevedo Medeiros, W. Yan and E.H. StenbySummaryUnderground geological storage (UGS) of CO2 in saline aquifers or oil reservoirs is an effective means to reduce CO2 emission at scale. To evaluate these UGS processes and understand the long-term fate of the injected CO2, we need a simulator that can account for multiphase equilibrium involving CO2, speciation reactions in brine, and the reactions with minerals. The calculation algorithms for multiphase geochemical reactions are essential to the robustness and efficiency of such a simulator. We applied the modified RAND method (SPE 182706-PA) to electrolyte systems to calculate phase equilibrium together with speciation reactions and mineral dissolution/precipitation. Modified RAND is a non-stoichiometric approach for simultaneous chemical and phase equilibrium calculation. The method linearizes the species chemical potentials and eventually uses the elemental chemical potentials as the main independent variables. This greatly reduces the size of the equation system for geochemical problems with many species and reactions. Modified RAND is more structured than the classical methods, for which we need to reselect the independent variables during the calculation to reduce round-off errors, and is thus more suitable for UGS in oil reservoirs, where both hydrocarbon phase equilibrium and brine-mineral reactions are important. It is 2nd-order convergent and its solution can be guided by minimizing the Gibbs energy. Modified RAND can be applied directly to geochemical systems at a fixed overall composition. Some geochemical applications, however, require analysis at constant chemical potential of a neutral species (e.g., CO2) or a charged species (e.g., H+), the latter case usually expressed as constant pH. We also extended modified RAND to those open systems. For the former, a new state function can be constructed through the Legendre transform and the obtained algorithm is an energy minimization.
For the latter, the problem is no longer minimization but we can still formulate a 2nd-order convergent algorithm. We tested the modified RAND algorithms with phase equilibrium cases relevant to UGS in closed systems, open systems with specified CO2 fugacity, and open systems with specified pH. Modified RAND provides a more efficient solution than the classical equation solving approach used in PHREEQC. The algorithms for closed and open systems exhibit 2nd-order convergence in all the tested cases. We then integrated modified RAND into a 1-D simulator and included the kinetic reactions, and compared the simulator with PHREEQC for 1-D geochemical simulations. The study provides the foundation for a future reactive transport simulator using modified RAND for the core multiphase reaction calculation.
Consistent Update of Well Path, Grid Structure and Grid Model Parameters Using an Iterative Ensemble Smoother
Authors J. Saetrom and L. GourcSummaryFor horizontally drilled wells offshore, the uncertainty in well path position can be substantial (10 meters or more in true vertical depth). In traditional modelling workflows, this uncertainty is often ignored, despite its potential impact on reservoir management decisions. Fortunately, state-of-the-art tools for seismic depth conversion allow us to incorporate and generate multiple realizations in which uncertainty in the well path trajectory and structural horizons can be accounted for. However, inconsistencies are easily introduced when these tools are used as part of an integrated modelling and data conditioning workflow that includes both static (seismic, logs, etc.) and dynamic (production, 4D seismic, etc.) data conditioning. Typical inconsistencies include:
• When a well trajectory is changed during data conditioning, how do we preserve consistency with the updated well log distributions and resulting grid properties?
• When well trajectories and structural horizons are updated simultaneously, how do we prevent artificial well tops from being introduced, so that the conditioned model parameters remain physical?
• When changing the facies property in a single grid cell, how do we preserve the consistency of the petrophysical properties at large?
Although numerous papers have been published on the topic of conditioning grid structure and facies distributions to static and dynamic data, an algorithm accounting for all three cruxes outlined above has not been published.
In this paper, we describe a complete workflow, from well path uncertainty to flow simulation, that prevents introducing these model inconsistencies during data conditioning using an iterative ensemble smoother, looking in particular at the three cruxes outlined above. The workflow can be divided into two steps:
1) Prior modelling, where we define the limits of the sample space for each model parameter and use Monte Carlo sampling to generate an ensemble of realizations. In the initial step, static data is used to guide the local and global variability for the generated realizations, without hard data conditioning. The goal of this initial step is to establish a graphical network model defining physical connections between the model parameters in an integrated workflow, and constrain the sample space of the resulting model parameters.
2) Training: Using an iterative ensemble Kalman smoother, we condition the model parameters to observed data simultaneously using all available data (both static and dynamic).
An anonymous field on the Norwegian continental shelf will be used to demonstrate the practical use of this workflow.
Two-Stage Scenario Reduction Process for An Efficient Robust Optimization
Authors S.K. Mahjour, A.A.D.S. Dos Santos, M.G. Correia and D.J. SchiozerSummaryProbabilistic approaches for optimization objectives need a large ensemble size to consider uncertainties, which is often computationally expensive. Our proposed method includes two scenario reduction (SR) techniques applied to geostatistical realizations and reservoir simulation models to handle geological and dynamic uncertainties. The goal is to select a subset of simulation models to be used in an efficient robust optimization (RO).
The proposed workflow is summarized in the following sections.
- Generate total geostatistical (TG) realizations representing grid properties using Latin Hypercube (LH) sampling;
- Select representative geostatistical (RG) realizations from the TG realizations using an integrated statistical technique named Distance-based Clustering with Simple Matching Coefficient (DCSMC). This section is the first stage of SR;
- Integrate other uncertainties with the RG scenarios to generate total simulation (TS) models using Discrete Latin Hypercube with Geostatistical models (DLHG);
- Apply a data assimilation process to reduce uncertainty and generate total history-matched simulation (THS) models, using a filtering indicator named Normalized Quadratic Deviation with Sign (NQDS);
- Select representative history-matched simulation (RHS) models from the THS models set using a tool based on a metaheuristic optimization algorithm named RMFinder. This section is the second stage of SR;
- Perform an RO to maximize NPV as the objective function using the selected RHS models;
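The NQDS filtering indicator used in the data assimilation step above normalizes the squared simulation-observation mismatch by an acceptable tolerance and keeps the sign of the total deviation; a minimal sketch of one common definition (variants exist in the literature, and the tolerance values here are illustrative):

```python
def nqds(sim, obs, tol=0.1, c=0.0):
    """NQDS indicator for one data series (one common definition; variants
    exist). tol is the acceptable relative error and c a small constant
    protecting against near-zero observations. A model is typically kept
    when |NQDS| falls below a chosen cutoff for every data series."""
    dev = [s - o for s, o in zip(sim, obs)]
    sign = 1.0 if sum(dev) >= 0 else -1.0          # over- vs under-prediction
    num = sum(d * d for d in dev)                  # quadratic deviation
    den = sum((tol * o + c) ** 2 for o in obs)     # tolerance normalization
    return sign * num / den
```

A value of +1 or -1 means the series deviates, on average, by exactly the tolerated amount; the sign distinguishes systematic over- from under-prediction, which a plain quadratic misfit would hide.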
The novel SR workflow selects the representative scenarios (RG realizations and RHS models) in two steps: (1) RG selection based on static features before the simulation process and (2) RHS selection based on simulation-based (dynamic) features after the simulation process. The workflow is applied to a fractured synthetic reservoir model, the flow unit-based UNISIM-II-D.
To check the computational time and efficiency of the methodology, we compare two candidate production strategies based on (1) five RHS models obtained from the two-stage SR process considering the DCSMC and RMFinder techniques (workflow A), and (2) five RHS models obtained from a one-stage SR process using the RMFinder method (workflow B). In workflow A, the SR process is performed gradually in two steps, while in workflow B, the SR process is applied all at once in a single step.
The results show that the distributions of simulation outcomes after RO for the representative scenarios and the total scenarios are more similar in workflow A than in workflow B. In addition, the robust production strategy obtained from workflow A is preferred to that of workflow B because it presents higher chances of high NPV values and lower chances of low NPV values.
A Simplified Mechanistic Population Balance Model for Foam Enhanced Oil Recovery (EOR)
Authors L. Ding and D. GuerillotSummaryThe mechanistic foam population balance (PB) model has clear physics, but it is generally challenging to apply due to its high computational cost and the difficulty of determining a number of kinetic foam parameters. In this presentation, a simplified mechanistic foam PB model was developed and applied to simulating an enhanced oil recovery (EOR) process in the laboratory.
An improved foam coalescence function for the oil destabilizing effect and the dry-out effect on foam was incorporated into the mechanistic foam PB model, and a simplified mechanistic foam PB model was obtained after a local equilibrium approximation. The simplified mechanistic foam PB model was first validated against fractional flow theory. Then, it was applied to history matching an efficient foam EOR process performed in the laboratory. These experiments involve foam flooding tests (co-injection of surfactant and nitrogen) in the absence of crude oil, foam tests in the presence of residual oil after water flooding, and a series of foam quality scan tests in the presence of residual oil after foam flooding. The parameters for the oil saturation dependent function were estimated from numerical simulation of foam transport in the presence of water-flooded residual oil, while the parameters for the foam dry-out function were estimated by history matching the steady-state foam quality scan data at residual oil saturation after foam flooding. The simulation results were also compared with those obtained from the foam PB model and the foam local equilibrium (LE) model of a commercial simulator in terms of history matching quality and computational cost.
It is found that the numerically calculated pressure gradient, cumulative oil recovery and effluent surfactant concentration reproduce the experimental results notably well. Both the steady-state and transient foam flows can be reproduced reasonably well by the simplified mechanistic foam PB model. Moreover, the simplified mechanistic PB model is more efficient in terms of computational cost in comparison to the full physics PB model, thereby appearing to be a potentially effective tool for modeling at field scale.
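The abstract does not give the model equations, but a local-equilibrium (implicit-texture) foam model with dry-out and oil-destabilization functions can be sketched as follows; the functional forms and parameter names (`fmmob`, `fmdry`, `epdry`, `fmoil`, `epoil`) follow STARS-type conventions, and the default values are purely illustrative, not taken from this work:

```python
import math

def foam_mobility_factor(Sw, So, fmmob=1000.0, fmdry=0.25, epdry=100.0,
                         fmoil=0.4, epoil=2.0):
    """Local-equilibrium (implicit-texture) foam model, STARS-type.

    Gas relative permeability is scaled by FM = 1/(1 + fmmob*Fw*Fo),
    where Fw captures foam collapse below a limiting water saturation
    (dry-out) and Fo captures destabilization of foam by oil.
    """
    # Dry-out function: smooth step around the limiting water saturation fmdry
    Fw = 0.5 + math.atan(epdry * (Sw - fmdry)) / math.pi
    # Oil destabilization: foam weakens as oil saturation approaches fmoil
    Fo = ((fmoil - So) / fmoil) ** epoil if So < fmoil else 0.0
    return 1.0 / (1.0 + fmmob * Fw * Fo)

# Wet, oil-free conditions -> strong foam (small FM)
fm_strong = foam_mobility_factor(Sw=0.4, So=0.0)
# Drier conditions -> weaker foam (larger FM)
fm_dry = foam_mobility_factor(Sw=0.1, So=0.0)
```

In such models the gas relative permeability is multiplied by FM, so a small FM corresponds to strong foam.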
A Bayesian Statistical Approach to Decision Support for TNO OLYMPUS Well Control Optimisation under Uncertainty
Authors: J. Owen, I. Vernon and R. Hammersley. Summary: Well control and field development optimisation are tasks of increasing importance within the petroleum industry, as evidenced by the development of, and large participation in, the 2018 TNO OLYMPUS Field Development Optimisation Challenge. Complex mathematical computer models, in the form of reservoir simulators, are used in the TNO Challenge, as throughout the petroleum industry, both to improve the understanding of the behaviour of oil fields and to guide future decisions on well control strategies and field development.
Major limitations of reservoir simulators include their complex structure, high-dimensional parameter spaces and large numbers of unknown model parameters, further compounded by their long evaluation times. The process of making decisions is commonly misrepresented as an optimisation task that frequently requires a large number of simulator evaluations, rendering many traditional optimisation methods intractable. Further complications arise from the many sources of uncertainty inherent in the modelling process, such as model discrepancy. This makes it unwise to focus on a single best decision strategy that is potentially non-robust to such uncertainties.
We develop a novel iterative decision support strategy, imitating the Bayesian history matching procedure, that identifies a robust class of well control strategies. This incorporates Bayes linear emulators, which provide fast and efficient statistical approximations to the computer model, permitting full exploration of the vast array of potential well control or field development strategies. The framework also includes additional sources of uncertainty, such as model discrepancy, which are accurately quantified to link the sophisticated computer model to the actual system and hence obtain robust and realistic decisions for the real oil field.
The developed iterative approach to decision support is demonstrated via an application to the well control problem of the TNO OLYMPUS Challenge. Accurate emulators are constructed using limited information from a relatively small number of simulations. Moreover, a variety of sources of uncertainty including many not considered by the TNO dataset are incorporated, their importance highlighted and their effects on the sensitivity of potential decisions demonstrated. Greater emulator accuracy is achieved at later waves due to iterative refocusing. This approach yields a collection of decisions which are robust to uncertainty for a greatly reduced computational cost compared to methods using the simulator only.
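For intuition, a minimal Bayes linear emulator adjustment can be sketched as below; the squared-exponential prior covariance, the zero prior mean, the toy one-dimensional "simulator" and all parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sq_exp_cov(X1, X2, sigma2=1.0, ell=0.5):
    """Squared-exponential prior covariance between 1D input points."""
    d = X1[:, None] - X2[None, :]
    return sigma2 * np.exp(-0.5 * (d / ell) ** 2)

def bayes_linear_adjust(x_star, X_train, D, sigma2=1.0, ell=0.5, nugget=1e-8):
    """Bayes linear adjusted expectation and variance of f(x_star) given runs D:
    E_D(f) = E(f) + Cov(f, D) Var(D)^-1 (D - E(D)), with prior mean zero here."""
    V_D = sq_exp_cov(X_train, X_train, sigma2, ell) + nugget * np.eye(len(D))
    cov = sq_exp_cov(x_star, X_train, sigma2, ell)
    mean = cov @ np.linalg.solve(V_D, D)
    var = sigma2 - np.sum(cov * np.linalg.solve(V_D, cov.T).T, axis=1)
    return mean, var

# Toy "simulator": an NPV-like response of a single control variable
f = lambda x: np.sin(3 * x)
X_train = np.linspace(0.0, 1.0, 8)   # a small number of simulator runs
D = f(X_train)
x_star = np.array([0.37])            # untried strategy to be emulated
mean, var = bayes_linear_adjust(x_star, X_train, D)
```

The adjusted variance is what sequential design exploits: it shrinks near existing runs and stays large where the strategy space is unexplored.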
Comparing Three DFN Simplification Strategies for Two-Phase Flow Applications
Summary: Numerical flow models based on Discrete Fracture Methods (DFM) represent a fractured porous rock using an unstructured mesh in which fractures are a subset of the element faces. This allows for a high degree of geometric accuracy, but it also raises numerical challenges: the mesh must honor both small- and large-scale geometric features while keeping computations tractable and stable. For these reasons, we previously proposed a new geometric approximation method, which can be applied before meshing.
The aim of this paper is to compare the flow impact of different geometric approximations of irregular and complex two-dimensional fracture networks. We present and validate a Control-Volume Finite-Element DFM-based water flooding model and three fracture approximation strategies. The first strategy (A) projects fractures onto the edges of an initial background mesh. The other two (B and C) rely on graph theory to analyze and modify a boundary representation of the fracture network according to minimal-angle and mesh-size criteria. Strategy B modifies the boundary representation using a contraction approach in which flagged fracture elements (lines, extremities of intersections) are merged. Strategy C uses an expansion approach that moves the problematic fracture elements away from one another, hence preserving the model connectivity (we also present some adjustments with respect to the already published method). The approximation strategies A, B and C are applied to three reference data sets with, respectively, two crossing fractures, highly connected fractures, and anisotropic disconnected fractures. For each model, we compare the oil production and the saturation maps to the reference model. These tests show that the connectivity changes implied by strategies A and B have only a small impact on the flow solution. Nonetheless, the expansion strategy C, which preserves the fracture network topology, provides the most accurate solution in all test cases.
A Robust, Multi-Solution Framework for Well Location and Control Optimization
Authors: M. Salehian, M. Haghighat Sefat and K. Muradov. Summary: Optimal field development and control aim to maximize the economic profit of oil and gas production while honoring various constraints. This results in a high-dimensional optimization problem with a computationally demanding and uncertain objective function based on the simulated reservoir models. The limitations of many current robust optimization methods are: 1) they optimize only a single level of control variables (e.g. well locations only, or well production/injection scheduling only), ignoring the interference between control variables from different levels; and 2) they provide a single optimal solution, whereas operational problems often add unexpected constraints that force adjustments to this optimal solution, degrading its value.
This paper presents a robust, multi-solution framework based on sequential iterative optimization of control variables at multiple levels. The Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm is used as the optimizer, with the estimated gradients calculated by mapping, in a 1:1 ratio, the ensemble of control-variable perturbations at each iteration onto the ensemble of selected reservoir model realizations. An ensemble of close-to-optimum solutions is then chosen from each level (e.g. from the well placement optimization level) and transferred to the next level of optimization (e.g. where the control settings are optimized), and this loop continues until no significant improvement is observed in the expected objective value. Fit-for-purpose clustering techniques are developed to systematically select an ensemble of realizations that captures the underlying model uncertainties, as well as an ensemble of solutions with sufficient differences in control variables but close-to-optimum objective values, at each optimization level.
The proposed framework has been tested on the Brugge benchmark field case study. Multiple solutions are obtained with different well locations and control settings but close-to-optimum objective values, providing much-needed operational flexibility to field operators. We also show that suboptimal solutions from an early optimization level can approach, and even outdo, the optimal one at the next level(s), demonstrating the advantage of the developed framework in a more efficient exploration of the search space.
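A bare-bones SPSA iteration (not the authors' ensemble-mapped, robust variant) might look like this; the gain sequences use the standard exponents 0.602 and 0.101, and the toy quadratic objective stands in for an expected-NPV evaluation:

```python
import random

def spsa_minimize(J, theta0, iters=300, a=0.2, c=0.1, seed=0):
    """Simultaneous Perturbation Stochastic Approximation (SPSA).

    Each iteration estimates the full gradient from only two objective
    evaluations, regardless of the number of control variables: all
    components are perturbed at once with a random +/-1 (Bernoulli) vector.
    """
    rng = random.Random(seed)
    theta = list(theta0)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602          # standard SPSA gain sequences
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = J(plus) - J(minus)    # two simulator calls per iteration
        theta = [t - ak * diff / (2 * ck * d) for t, d in zip(theta, delta)]
    return theta

# Stand-in for an expected-NPV objective (distance to a known optimum)
optimum = [0.3, -0.7, 1.2]
J = lambda th: sum((t - o) ** 2 for t, o in zip(th, optimum))
theta_opt = spsa_minimize(J, [0.0, 0.0, 0.0])
```

The two-evaluation gradient estimate is what makes SPSA attractive for high-dimensional well placement and control problems, where each evaluation is a reservoir simulation.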
Calculation of Well Productivity Index in Stochastic Porous Media
Authors: D. Posvyanskii and A. Novikov. Summary: The productivity index (PI) is an important characteristic of a well, which indicates its production potential. Analytical solutions of the well inflow equation are frequently used to calculate PI; however, these solutions are obtained under the assumption of reservoir homogeneity. In a heterogeneous reservoir with spatially variable permeability, the use of these analytical solutions leads to errors in PI calculation.
Upscaling is commonly used to calculate an effective permeability of a heterogeneous medium, and this technique is now applied to many reservoir simulation problems. In reservoirs with stochastic permeability, the effective permeability is a random variable characterized by its mean value and variance. These statistics can be calculated directly from the solution of the well inflow equation, which is a partial differential equation with a random coefficient. In turn, its solution is treated as the pressure averaged over the ensemble of permeability realizations. The averaged pressure can be represented as an infinite perturbation series in the permeability fluctuations. In [1] we used the Feynman diagrammatic approach to sum this series and to obtain the effective reservoir permeability. The calculation of the effective permeability of a stochastically heterogeneous porous medium has been the subject of numerous studies.
In this study we focus on the calculation of the variance of the effective permeability, which represents the error introduced by replacing the heterogeneous medium with a homogeneous one. We use the approach from [1] to calculate this variance. Knowledge of the statistical characteristics of the effective permeability allows us to calculate the PI.
It is shown that in contrast to the mean effective permeability, its variance depends on the correlation length of permeability field. Semi-analytical expressions for mean effective permeability and for its variance are obtained for lateral and vertical stochastic heterogeneity. These expressions allow PI, well rate and corresponding uncertainties to be easily estimated. The influence of anisotropy, permeability variance and correlation length on the uncertainty in PI is investigated and compared to the results of Monte-Carlo numerical simulation.
[1] Novikov, A.V., Posvyanskii, D.V. The use of Feynman diagrammatic approach for well test analysis in stochastic porous media. Comput Geosci (2019). https://doi.org/10.1007/s10596-019-09880-1
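As a simple illustration of how permeability uncertainty propagates into PI, one can Monte Carlo the classical steady-state radial-inflow PI over a lognormal permeability; the formula is the standard homogeneous analytical solution the abstract refers to, and all numerical values below are hypothetical:

```python
import math, random, statistics

def productivity_index(k, h=10.0, mu=1e-3, re=200.0, rw=0.1, skin=0.0):
    """Steady-state radial inflow: PI = q/(p_e - p_wf) = 2*pi*k*h/(mu*(ln(re/rw)+skin)),
    with k in m^2, h in m, mu in Pa.s, giving PI in m^3/(s.Pa)."""
    return 2 * math.pi * k * h / (mu * (math.log(re / rw) + skin))

# Monte Carlo over a lognormal permeability (geometric mean 100 mD, sigma=0.5)
random.seed(42)
MD_TO_M2 = 9.869e-16
samples = [productivity_index(random.lognormvariate(math.log(100.0), 0.5) * MD_TO_M2)
           for _ in range(20000)]
pi_mean = statistics.fmean(samples)
pi_cov = statistics.pstdev(samples) / pi_mean   # coefficient of variation of PI
```

Because PI is linear in k here, the relative PI uncertainty equals the relative permeability uncertainty; the semi-analytical results of the paper additionally capture the dependence on correlation length, which a pointwise sketch like this cannot.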
Impacts of Gas Trapping and Capillarity on Oil Recovery by Near-Miscible CO2-WAG
Summary: CO2 Water-Alternating-Gas injection (CO2-WAG) under near-miscible conditions is a multifaceted process due to the complex interaction of thermodynamic phase behaviour, multiphase flow behaviour and the heterogeneity of the porous medium. The central objective of this study is to improve the fundamental understanding of fluid behaviour in the process of near-miscible CO2-WAG. This work presents a detailed simulation study of CO2-WAG displacements with unfavourable mobility ratios in a 2D areal heterogeneous system designed to trigger the fingering flow regime. In our previous work (Wang et al., 2019a; 2019b; 2020), we successfully developed a new mechanistic synthesis of near-miscible WAG, incorporating compositional effects (the MCE mechanism) and interfacial tension effects (the MIFT mechanism). Here, we extend our study to include additional key multiphase flow mechanisms, such as gas trapping and capillarity, to better reflect the flow physics in a three-phase system.
We identify that the effect of gas trapping reduces the oil recovery due to the degraded displacement performance in the “non-preferential” flow routes (areas between gas fingers). This is because the trapping mechanism greatly hampers the MIFT mechanism acting during the secondary water injection cycle. The viscous crossflow between the non-preferential routes and preferential routes (gas fingers) is restricted, which leads to a lowered sweep efficiency. On the other hand, the effect of the capillary force is more complex. In a water-wet system, the oil production increases at the early stage of displacement but approaches the plateau more quickly. In this case, capillary pressure creates entry barriers for gas flowing into low-permeability zones, which gives rise to more severe gas fingers and a larger amount of bypassed oil. The oil recovery drops by over 7% compared to the zero capillary pressure case. For the oil-wet system with capillarity, the production life is much extended by the capillary forces compared to the water-wet case. Although the production rate is reduced at the early stage of the displacement, the oil-wet capillary pressure function enables gas to imbibe into low-permeability zones (under near-miscible conditions), which mitigates the effect of the dominant gas fingers. The improved sweep efficiency maximizes the benefits of the combined MCE and MIFT mechanisms, particularly at the late stage of the displacement. The oil recovery in the oil-wet case can be almost as good as in the base case provided the final water cycle is long enough.
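The abstract does not specify the trapping model, but gas trapping in WAG simulation is often represented with a Land-type relation; a sketch with illustrative parameter values:

```python
def land_trapped_gas(Sg_max, Sgt_max=0.3, Sg_hist=0.7):
    """Land (1968) trapping model: trapped gas saturation after water re-invades.

    Sg_max:  maximum gas saturation reached locally during the gas cycle
    The Land coefficient C is fixed by the maximum trapped saturation Sgt_max
    observed at the maximum historical gas saturation Sg_hist.
    """
    C = 1.0 / Sgt_max - 1.0 / Sg_hist
    return Sg_max / (1.0 + C * Sg_max)

# More gas contacted during the gas cycle -> more gas trapped in the water cycle
sgt_low = land_trapped_gas(0.2)
sgt_high = land_trapped_gas(0.6)
```

Trapped gas is immobile during the subsequent water cycle, which is the mechanism the abstract identifies as hampering MIFT-driven crossflow between preferential and non-preferential flow routes.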
Data-Driven, Physics-Driven and Analytic Models for Waterflooding Optimisation Under Uncertainty
Authors: D.L. Moreno Bedoya and G. Garcia. Summary: Proper optimisation of fields under waterflooding in the presence of uncertainty might require the evaluation of multiple scenarios over a set of reservoir models designed to incorporate geological, structural and stratigraphic uncertainties. Nowadays, reservoir models may have several million grid cells, and a large computing infrastructure is needed to achieve a near-optimal solution for the net present value objective function given the large uncertainties.
In this work a methodology is presented in which data-driven models, in the form of capacitance resistance models, together with analytical fractional flow theory and machine learning techniques, are used to optimise a set of reservoir models under uncertainty.
The fractional flow parameters of the Buckley-Leverett function are calculated on a well-by-well basis using iterative ensemble smoothers after a connectivity analysis is performed. The connectivity analysis is initially conditioned to flow-diagnostics averages and to time-of-flight averages.
The objective is to maximise the net present value by using proxy models that better match the reservoir, and to provide insights into drainage areas and possible infill drilling locations for better field development plans, in a fraction of the time otherwise required.
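For reference, the fractional-flow ingredients mentioned above can be sketched as follows; the Corey exponents, viscosities and endpoint saturations are illustrative, and the Welge tangent construction picks the shock-front saturation:

```python
def fractional_flow(Sw, mu_w=1e-3, mu_o=5e-3, Swc=0.2, Sor=0.2, nw=2.0, no=2.0):
    """Water fractional flow fw(Sw) with simple Corey relative permeabilities:
    fw = 1 / (1 + (kro/krw) * (mu_w/mu_o))."""
    S = min(max((Sw - Swc) / (1 - Swc - Sor), 0.0), 1.0)  # normalized saturation
    krw, kro = S ** nw, (1 - S) ** no
    if krw == 0.0:
        return 0.0
    return 1.0 / (1.0 + (kro / krw) * (mu_w / mu_o))

# Welge construction: the shock-front saturation maximizes fw(Sw)/(Sw - Swc),
# i.e. the tangent to fw drawn from the connate-water point.
candidates = [0.2 + 0.6 * i / 2000 for i in range(1, 2000)]
Swf = max(candidates, key=lambda s: fractional_flow(s) / (s - 0.2))
```

These are the per-well-pair parameters that the iterative ensemble smoother would be estimating from production data in the proposed workflow.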
Scaling Foam Flow Models in Heterogeneous Reservoirs for A Better Improvement of Sweep Efficiency
Authors: F. Demarche, B. Braconnier and B. Bourbiaux. Summary: In heterogeneous formations, foam is expected to reduce mobility more in high-permeability layers and hence to divert flow towards low-permeability regions. This has been shown experimentally by several authors, by comparing core-scale foam displacements on core plugs of contrasting average permeability and by using a two-dimensional laboratory pilot consisting of two layers with different properties. More recently, it has been shown experimentally and theoretically that the foam mobility reduction scales approximately as the square root of permeability within the framework of Darcy-type semi-empirical foam flow models. This scaling law for the effect of permeability on foam properties was inferred from an analogy between foam flow in porous media and foam flow in capillary tubes, and was found consistent with the modelling of available experimental data.
This foam selectivity effect should improve sweep efficiency and is of primary interest for liquid or gas diversion in improved oil recovery and environmental remediation. However, it is not yet accounted for in physical-modelling and rock-typing best practices for reservoir simulation, nor used routinely in the design of foam pilots. The use of a physical foam mobility-reduction scaling law is therefore highly recommended for foam process evaluation and is the purpose of the present communication.
This work assesses the impact of such effects through comprehensive large-scale Darcy-type foam modelling for the design of pilot tests. A model implemented in the IFP Energies nouvelles reservoir simulator PumaFlow is considered herein for the sole purpose of demonstrating the impact of foam selectivity. We work with two-dimensional cross-sectional inter-well porous media with various permeability distributions and, finally, a three-dimensional synthetic reservoir. By varying the permeability contrast, we demonstrate how far off-target conventional foam flow modelling can be when this permeability selectivity effect, which drives fluid diversion and sweep efficiency, is not properly accounted for. Finally, we show how selective foam injections can be designed to make the best joint use of the considered foam and the permeability heterogeneity of the porous medium.
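The square-root scaling law can be illustrated with a two-layer flow-split calculation; the reference mobility reduction and permeabilities below are hypothetical, not taken from this work:

```python
import math

def foam_mobility_reduction(k, k_ref=100e-15, M_ref=500.0):
    """Square-root permeability scaling of the foam gas-mobility reduction:
    M(k) = M_ref * sqrt(k / k_ref)."""
    return M_ref * math.sqrt(k / k_ref)

def layer_flux_fraction(k_hi, k_lo, with_foam):
    """Fraction of flow entering the high-permeability layer of a two-layer
    system under a common pressure drop (effective mobility ~ k / M)."""
    m_hi = k_hi / (foam_mobility_reduction(k_hi) if with_foam else 1.0)
    m_lo = k_lo / (foam_mobility_reduction(k_lo) if with_foam else 1.0)
    return m_hi / (m_hi + m_lo)

k_hi, k_lo = 1000e-15, 100e-15      # 1000 mD and 100 mD layers
f_nofoam = layer_flux_fraction(k_hi, k_lo, with_foam=False)
f_foam = layer_flux_fraction(k_hi, k_lo, with_foam=True)
```

Because the mobility reduction grows with sqrt(k), the effective layer mobilities with foam scale as sqrt(k) rather than k, which is precisely the diversion of flow toward the low-permeability layer that the scaling law predicts.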
An Efficient Implementation of the Discontinuous Galerkin Method for Multiphase Flows through Heterogeneous Porous Media
Authors: N. Dashtbesh, B. Noetinger and G. Enchéry. Summary: One of the main challenges in immiscible multiphase flows lies in getting an accurate representation of the strong coupling between the unavoidable heterogeneity of the porous medium and the instabilities of immiscible multiphase flows appearing near the interface of the fluids. We propose an approach to improve the accuracy of the simulation of immiscible flows in heterogeneous porous media using a Discontinuous Galerkin (DG) method. The main objective of this work is to achieve both accuracy and computational efficiency by dynamically decomposing the domain and implementing different solution strategies in different flow regions. An important advantage of DG methods is the ability to approximate the solution by discontinuous polynomials of various degrees in various elements. Thanks to this feature, local flow details near the front may be taken into account by increasing the order of the polynomial approximation in the elements of this flow region. To overcome the increased computational cost associated with high-order DG methods, a finite volume scheme is used far from the front.
To this aim, we have also developed a front-tracking method to model the position of the fluid interface. This method solves a simplified two-phase flow problem to identify the grid blocks in which the front is present. Knowing the position of the front from this fast computation allows us to identify the different flow regions, which are treated separately. Far from the front, the flow is mainly single-phase and the finite volume scheme proves satisfactory. In the vicinity of the front, high-order DG is used to capture the instabilities and complexities of the immiscible flow. In this work, the accuracy and computational efficiency of the results are presented in comparison to flow simulations where a high-order DG scheme is used over the whole domain.
A Bayesian Optimisation Workflow for Field Development Planning Under Geological Uncertainty
Authors: R. Bordas, J.R. Heritage, M.A. Javed, G. Peacock, T. Taha, P. Ward, I. Vernon and R.P. Hammersley. Summary: Field development planning using reservoir models is a key step in the field development process. Numerical optimisation of specific field development strategies is often used to aid planning. Bayesian Optimisation is a popular optimisation method that has previously been applied to this problem. However, reservoir models can have a high degree of geological uncertainty associated with them, even after history matching. It is important to be able to perform optimisation that accounts for this uncertainty. To date, limited attention has been given to Bayesian Optimisation of field development strategies under geological uncertainty.
Much of the recent work in this area has focused on Ensemble Optimisation methods. These naturally handle geological uncertainty using ensembles of geological realisations. This can result in a high computational cost, as large ensembles are required to capture the geological uncertainty. Bayesian Optimisation offers an alternative solution using probabilistic surrogate or proxy models that can capture the geological uncertainty. However, incorporating geological uncertainty into proxy models and using those models in a Bayesian Optimisation loop remains a challenging task. Further, the effect of the additional proxy model uncertainty on optimisation results has not been well studied.
We propose a Bayesian Optimisation workflow comprising a Stochastic Bayes Linear proxy model and a combination of experimental and sequential design techniques. The workflow is designed to include a combination of static and dynamic uncertainties, with a new geological realisation generated and used to simulate fluid flow during each run of the model. The workflow is demonstrated by optimising several field development strategies in a synthetic North Sea reservoir model. The ability of the workflow to locate optima and correctly account for the geological uncertainty is studied and the computational cost is quantified.
The performance and practical implications of the proposed approach are discussed. These are important in designing an accurate and computationally efficient optimisation workflow under geological uncertainty and, ultimately, are factors in developing decision support tools for field development.
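Although the abstract does not state the acquisition function, sequential design in Bayesian Optimisation is often driven by expected improvement, sketched here from a proxy model's predictive mean and standard deviation (all numbers illustrative):

```python
import math

def expected_improvement(mu, sigma, best, minimize=True):
    """Expected improvement acquisition for sequential design.

    mu, sigma: proxy-model prediction and standard deviation at a candidate
    strategy; best: best objective value simulated so far."""
    if sigma <= 0.0:
        return 0.0
    z = (best - mu) / sigma if minimize else (mu - best) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return sigma * (z * cdf + pdf)

# A candidate predicted slightly worse than 'best' but very uncertain can
# still score higher than one predicted equal to 'best' with low uncertainty.
ei_uncertain = expected_improvement(mu=1.1, sigma=0.5, best=1.0)
ei_confident = expected_improvement(mu=1.0, sigma=0.05, best=1.0)
```

In the workflow described above, sigma would also carry the geological (realisation-to-realisation) uncertainty captured by the Stochastic Bayes Linear proxy, so the acquisition naturally trades off exploration against robustness.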
Data-Driven Models Based on Flow Diagnostics
Authors: M. Borregales, O. Møyner, S. Krogstad and K. Lie. Summary: Data-driven models are an attractive alternative to reservoir simulation in workflows where full field-scale simulations may be computationally prohibitive [3,4]. One example is the forecasting and schedule optimization of waterflooding scenarios, where numerous function evaluations, each corresponding to a time-consuming simulation, may be required. Data-driven models must be calibrated to produce a satisfactory forecast, similar to the history matching of conventional simulation models. However, a lot of data is needed to produce a model capable of giving accurate forecasts for the flow distribution between the injectors and producers. Mature fields may have sufficient data to calibrate a purely data-driven model, but fields with limited historical data require a different approach that can compensate for the lack of data.
Herein, under the assumption that a detailed reservoir simulation model exists, we use flow diagnostics [1] to obtain volumetric information about reservoir partitioning and inter-well communication between injectors and producers. This enables us to quickly set up a data-driven model composed of a network of 1D inter-well communication models. This network of models is organized in a 2D Cartesian model, in which each row corresponds to one of the 1D flow paths that represent the part of the corresponding 3D volume that is intersected by a certain well pair [3].
The initial data-driven model, before calibration, produces a good forecast for production data. The calibration process of the model is based on adjoint formulations, and the implementation is based on the automatic differentiation framework in MRST [2]. Several numerical examples will be presented, pointing out the advantages and limitations of this new methodology. To summarize, the main contributions of this methodology are:
A good forecast is obtained from the initial data-driven model (before calibration).
A simpler and very efficient calibration process is obtained by using gradient information obtained by solving the adjoint system.
A combination of flow diagnostic, adjoint methods, and automatic differentiation is used to build data-driven models for optimizing waterflooding.
[1] Olav Møyner, Stein Krogstad, and Knut-Andreas Lie. The application of flow diagnostics for reservoir management. SPE Journal, April 2015.
[2] Knut-Andreas Lie. An Introduction to Reservoir Simulation Using MATLAB/GNU Octave: User Guide for the MATLAB Reservoir Simulation Toolbox (MRST). Cambridge University Press, 2019.
[3] Zhenyu Guo and Albert C. Reynolds. INSIM-FT in three-dimensions with gravity. Journal of Computational Physics, 2019.
[4] Guotong Ren, Jincong He, Zhenzhen Wang, Rami M. Younis, and Xian-Huan Wen. Implementation of physics-based data-driven models with a commercial simulator. SPE Reservoir Simulation Conference, 2019.
Deep-CRM: A New Deep Learning Approach for Capacitance Resistive Models
Authors: A. Yewgat, D. Busby, M. Chevalier, C. Lapeyre and O. Teste. Summary: Classical reservoir engineering studies require building geological models and solving complex fluid-flow transport equations, which demand high-quality data, numerous computational resources, time and workflows.
For large and mature fields, data-driven models can be used to get faster answers and to perform production analysis more efficiently.
Capacitance Resistive Models (CRM) are a class of methods based on material balance that can be used to estimate production wells' liquid rates as a function of injected water and Bottom Hole Pressure (BHP) variations. CRM methods quantify the connectivity between producers and injectors using only dynamic data. An important drawback of CRM is that it can suffer from parameter identification problems. Moreover, the analytical solution can only be obtained under specific conditions: linear variation of BHP and fixed injection rate between two consecutive time steps.
In this work we present a new approach combining CRM material balance equations with neural networks in order to obtain more robust and reliable estimates of the CRM parameters (i.e. well connectivities, productivity indices and time constants). This approach is also attractive because it makes no assumptions about BHP and injection rates.
To this end, we use a recent approach called Physics-Informed Neural Networks (PINNs), in which neural networks are trained on observed data with additional physics constraints translated into appropriate loss functions. The parameters of the physical equation are estimated at the same time as the neural network weights.
The introduction of PINNs into our approach arose after testing classical machine learning (ML) models (SVMs, random forests, …) and deep learning models (MLPs, LSTMs, RNNs, …). Such models can perform well in some specific cases but usually struggle to produce robust long-term forecasts, as they do not natively integrate physics constraints.
Our aim is to impose physics constraints in neural networks and thus obtain more stable and reliable results. At the same time, we should be able to account for behaviors that are not explained by simplified physics equations such as material balance.
We performed a full comparison between our approach using PINNs, other standard ML and DL approaches, and a given framework of CRMs on two data sets: a simple but realistic model built using a commercial reservoir simulator, and a real data set. We show that our approach gives more robust results (in terms of MSE) while not suffering from parameter identification issues.
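For context, the single-tank CRM material-balance recursion that such approaches build on can be sketched as follows (the BHP term is dropped, i.e. constant bottom-hole pressure, and the parameter values are illustrative):

```python
import math

def crm_producer_rate(q0, inj, dt, tau=10.0, f=0.6):
    """Single-tank CRM at constant BHP:
    q_{n+1} = q_n * exp(-dt/tau) + (1 - exp(-dt/tau)) * f * I_{n+1},
    where f is the injector-producer connectivity and tau the time constant."""
    e = math.exp(-dt / tau)
    q, rates = q0, []
    for I in inj:
        q = q * e + (1.0 - e) * f * I
        rates.append(q)
    return rates

# Constant injection: the producer rate relaxes to the allocated fraction f*I
rates = crm_producer_rate(q0=0.0, inj=[500.0] * 200, dt=1.0)
```

The parameters (f, tau, and the productivity index multiplying the BHP term omitted here) are exactly the quantities the PINN estimates jointly with the network weights.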
Deep-DCA: A New Approach for Well Hydrocarbon Production Forecasting
By D. Busby. Summary: Oil & gas reservoir production forecasting is an essential task for reservoir engineers. Forecasts are made in order to take financial decisions and for reserves calculation. For mature fields where a high number of wells and large historical data sets are available, physical models may not be precise enough or may take very long to build. The decline curve analysis (DCA) technique is a well-established alternative for obtaining rapid and reliable forecasts, and it is used in many fields for reserves evaluation.
Due to the high level of noise in the data, changes in production mechanisms, workovers and changes in reservoir pressure, DCA curves are usually adjusted manually by reservoir engineers; moreover, for non-declining wells or new wells, type-curve approaches are adopted.
In this work we present a new workflow to automate the DCA calculation in a more robust way and to predict non-declining wells and new wells using state-of-the-art machine learning solutions.
To perform automatic DCA we use a recent physics-informed neural network (PINN) approach in which we combine neural networks with Arps' empirical equations to obtain more robust forecasts. The neural network proxy helps regularize the data; moreover, all the different field constraints can easily be integrated by defining appropriate loss functions that are minimized during the training phase. To balance these different losses, we use an automatic approach based on uncertainty quantification.
Uncertainty quantification is also a byproduct of the PINN approach, allowing us to estimate a probabilistic set of curves that can be used to estimate the P10-P50-P90 in a more robust way than a simplistic Bayesian parameter estimation, which will usually underestimate the uncertainty.
To achieve a more robust estimation, we use as a constraint an Arps equation with piecewise-constant parameters, allowing us to consider transient regimes. The algorithm is then able to automatically find the transition zones and to assign different parameter values to the different regimes.
The last improvement concerns the approach for non-declining and new wells: to address this problem we build a larger machine learning model that learns the spatio-temporal behavior of the different wells and combines static and dynamic data.
The method is applied to two real data sets, an unconventional gas field and a large heavy-oil field, each containing several hundred wells. Comparisons with existing automatic DCA solutions are presented.
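The Arps decline family used as the physics constraint can be written compactly; the rates and decline parameters below are illustrative only:

```python
import math

def arps_rate(t, qi, Di, b):
    """Arps decline curve: q(t) = qi / (1 + b*Di*t)^(1/b).

    b = 0 is the exponential limit q(t) = qi*exp(-Di*t); 0 < b < 1 is
    hyperbolic decline; b = 1 is harmonic decline."""
    if b == 0.0:
        return qi * math.exp(-Di * t)
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

# At late time, hyperbolic decline (b=0.5) sits above exponential decline (b=0)
q_exp = arps_rate(t=24.0, qi=1000.0, Di=0.1, b=0.0)
q_hyp = arps_rate(t=24.0, qi=1000.0, Di=0.1, b=0.5)
```

In the piecewise-constant variant described above, (qi, Di, b) would be allowed to change between automatically detected regime-transition times.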
Two-Stage Ensemble Kalman Filter Approach for Data Assimilation Applied to Flow in Fractured Media
Summary: The permeability field in a reservoir simulation greatly influences the resulting flow field, and therefore a thorough knowledge of it is crucial. However, the permeability field is usually associated with a high degree of uncertainty since only a few measurements of reservoir properties are available. Fractures can form highly conductive shortcuts through the matrix domain. Therefore, it is important to estimate fracture parameters such as location, orientation and size as precisely as possible. Ensemble Kalman filters (EnKF) are widely used for history matching (or data assimilation) in the context of subsurface flows in order to estimate parameters, reduce uncertainty and improve simulation results.
This work studies the evolution of a reservoir as it might occur e.g. during reservoir stimulation of a geothermal system. During the first stage, large isolated fractures with a preferred orientation arise one after the other. During the second stage, these fractures get connected by others, which have a different preferred orientation. We assume that location, orientation and length of all fractures are known a priori. The only uncertainty therefore lies in the hydraulic aperture of each fracture segment. Further we assume that prior probabilistic knowledge of the hydraulic aperture is available, e.g. from seismic measurements. We upscale the fractures and simulate the flow in the reservoir with a single-continuum model.
We reduce the uncertainty of the hydraulic apertures with an iterative EnKF using empirical measurement data; here from a reference simulation. During the formation of the fractures, we use pressure and flow at in- and outlet boundaries as measurements. Once the whole reservoir is developed, a tracer is injected at the inlet and its concentration at the outlet boundary is used as measurement. In this context also the effect of different fracture-matrix permeability ratios is studied.
Non-Linear Solver Optimisation for Multiphase Porous Media Flow Based on Machine Learning
Authors V.L.S. Silva, P. Salinas, C.C. Pain and M.D. Jackson
Summary
Numerical simulation of multiphase flow in porous media is of paramount importance to understand, predict and manage subsurface reservoirs, with applications to hydrocarbon recovery, geothermal energy resources, CO2 geological sequestration, groundwater sources and magma reservoirs. However, the numerical solution of the governing equations is very challenging due to the non-linear nature of the problem and the strong coupling between the different equations. Newton methods have traditionally been used to solve the non-linear system of equations, although the Picard iterative method has been gaining ground in recent years. The Picard method is attractive because the multiphysics problem can be subdivided and each subproblem solved separately, which gives wide flexibility and extensibility.
Rapid convergence of the non-linear solver is of vital importance as it strongly affects the overall computational time. Therefore, a great deal of effort has been put into obtaining robust and stable convergence rates. At the same time, machine learning (ML) is gaining more and more attention with revolutionary results in areas such as computer vision, self-driving cars and natural language processing. The success of ML in different fields has inspired recent applications in reservoir engineering and geosciences. Here, we present a Picard non-linear solver with convergence parameters dynamically controlled by ML. The ML is trained based on the parameters of the reservoir model scaled to a dimensionless space. In the approach reported here, data for the ML training is generated using simulation results obtained for multiphase flow in a two-layered reservoir model which captures many of the flow features observed in models of natural reservoirs. The presented method significantly reduces the computational effort required by the non-linear solver as it can adjust itself to the complexity/physics of the system. We demonstrate its efficiency under a variety of numerical test cases, including gravity, capillary pressure and extremely heterogeneous models.
Technical contributions:
- Significantly reduces the computational cost of the non-linear solver.
- The ML model is trained very efficiently based on a two-layered reservoir model and dimensionless numbers.
- Enables us to carry out large-scale and/or physically demanding numerical simulations.
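A minimal sketch of the Picard (successive-substitution) iteration the solver builds on. The toy fixed-point system, the relaxation factor and the tolerance are placeholders for the convergence parameters that the trained ML model would set dynamically from the dimensionless numbers of the reservoir model:

```python
import numpy as np

def picard_solve(g, x0, relax=0.5, tol=1e-10, max_iter=200):
    """Picard iteration x <- (1 - w) x + w g(x).

    In the paper's approach, parameters such as `relax` and `tol` would be
    chosen dynamically by the ML model; here they are plain arguments.
    """
    x = np.asarray(x0, dtype=float)
    for it in range(1, max_iter + 1):
        x_new = (1.0 - relax) * x + relax * g(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, it
        x = x_new
    return x, max_iter

# Toy coupled subproblems: solve x = cos(y), y = 0.5*sin(x) by successive
# substitution, mimicking the subdivision of a multiphysics problem into
# separately solved pieces.
g = lambda v: np.array([np.cos(v[1]), 0.5 * np.sin(v[0])])
sol, iters = picard_solve(g, [0.0, 0.0])
print(sol, iters)
```

Tuning `relax` per time-step is exactly the kind of decision the ML controller automates: too small wastes iterations, too large risks divergence.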
-
-
-
Distributed Quasi-Newton Derivative-Free Optimization Method for Optimization Problems with Multiple Local Optima
Summary
For highly nonlinear problems, the objective function f(x) may have multiple local optima and it is desired to locate all of them. Analytical or adjoint-based derivatives may not be available for most real optimization problems, especially when responses of a system are predicted by numerical simulations. The distributed-Gauss-Newton (DGN) optimization method performs quite efficiently and robustly for history-matching problems with multiple best matches. However, this method is not applicable to generic optimization problems, e.g., life-cycle production optimization or well location optimization.
In this paper, we generalized the distribution techniques of the DGN optimization method and developed a new distributed quasi-Newton (DQN) optimization method that is applicable to generic optimization problems. It can handle generalized objective functions F(x,y(x))=f(x) with both explicit variables x and implicit variables, i.e., simulated responses, y(x). The partial derivatives of F(x,y) with respect to both x and y can be computed analytically, whereas the partial derivatives of y(x) with respect to x (the sensitivity matrix) are estimated by applying the same efficient information-sharing mechanism implemented in the DGN optimization method. An ensemble of quasi-Newton optimization tasks is distributed among multiple high-performance-computing (HPC) cluster nodes. The simulation results generated from one optimization task are shared with others by updating a common set of training data points, which records the simulated responses of all simulation jobs. The sensitivity matrix at the current best solution of each optimization task is approximated by either the linear-interpolation (LI) method or the support-vector-regression (SVR) method, using some or all training data points. The gradient of the objective function is then analytically computed using its partial derivatives with respect to x and y and the estimated sensitivities of y with respect to x. The Hessian is updated using the quasi-Newton formulation. A new search point for each distributed optimization task is generated by solving a quasi-Newton trust-region subproblem for the next iteration.
The proposed DQN method is first validated on a synthetic history-matching problem and its performance is found to be comparable with that of the DGN optimizer. Then, the DQN method is tested on different optimization problems. For all test problems, the DQN method can find multiple optima of the objective function with reasonably small numbers of iterations (30 to 50). Compared to sequential model-based derivative-free optimization methods, the DQN method can reduce the computational cost, in terms of the number of simulations required for convergence, by a factor of 3 to 10.
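The core loop — estimate the sensitivity matrix from shared training points, then take a damped Gauss-Newton step — can be sketched as below. The toy `simulate` function, the local sampling of training points and the damping constant are illustrative assumptions; a real DQN run would solve a proper trust-region subproblem and distribute the tasks over HPC nodes:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(x):
    # Hypothetical simulator response y(x); stands in for one simulation job.
    return np.array([x[0] ** 2, x[0] + x[1]])

def sensitivity_from_points(X, Y):
    # Fit an affine model y ~ a + J x to shared training points (the
    # linear-interpolation option; SVR is the paper's alternative regressor).
    A = np.hstack([np.ones((len(X), 1)), X])
    coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return coef[1:].T                          # J, shape (n_y, n_x)

d_obs = simulate(np.array([1.5, -0.5]))        # data from a 'true' model
x = np.array([1.0, 0.0])                       # current best of one task

for _ in range(10):
    # Training points near x, as if contributed by all distributed tasks.
    X = x + 0.1 * rng.standard_normal((20, 2))
    Y = np.array([simulate(p) for p in X])
    J = sensitivity_from_points(X, Y)
    r = simulate(x) - d_obs
    g = J.T @ r                                # gradient of 0.5*||y(x)-d||^2
    H = J.T @ J + 1e-6 * np.eye(2)             # GN Hessian, lightly damped
    x = x - np.linalg.solve(H, g)              # quasi-Newton step

print(x)  # should approach the 'true' model [1.5, -0.5]
```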
-
-
-
Dynamic Saturation Reconstruction for Multiphase Flow by Time-Of-Flight Fill Functions
By O. Moyner
Summary
The hyperbolic nature of transport equations makes multiphase simulations sensitive to numerical diffusion or smearing due to insufficient grid resolution or long time-steps, in particular for cases with linear or weakly nonlinear displacement fronts. The number of grid cells is often limited by the available computational resources, and is tightly coupled to the geological description.
Apart from increasing the grid resolution, several approaches have been taken to remedy the problem. The first is to use a more accurate scheme for the transport equations, e.g., in the form of a high-resolution finite-volume scheme, or by adding more degrees of freedom in the form of higher-order finite elements. Such schemes are well developed on rectilinear and curvilinear grids, but more challenging to formulate on general polytopal grids. A second approach is to use some form of upscaling to generate new pseudo-relative permeability/mobility functions, since the simulation grid in many cases is formed by upscaling an underlying finer geocellular grid.
Herein, we present a novel approach to two-phase flow, based on dynamic reconstruction of saturations, that combines the two approaches. The key idea is to solve the transport on a coarser grid, but use a set of numerically computed fill functions to reconstruct fine-scale saturation variations. These fill functions are computed by solving local flow and time-of-flight problems before the simulation. Each fill function accounts for the local velocity field by a simple superposition of solutions, and ensures that any 1D solution can be mapped onto the underlying fine-scale cells while preserving the average saturation within the containing coarse block. By assuming that the local solution is a self-similar solution of a Riemann problem, we can approximate the fine-scale saturation distribution at any point in the coarse block. We demonstrate that this can give highly accurate results for both linear and Buckley-Leverett type flux functions for a range of heterogeneous test cases. A comparison is made with different levels of implicitness and a WENO scheme at both coarse and fine scales.
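For reference, the self-similar Riemann solution that such a fill function maps onto fine cells can be computed directly for the Buckley-Leverett case. The quadratic relative permeabilities, the oil/water viscosity ratio of 2 and the zero initial water saturation below are illustrative assumptions:

```python
import numpy as np

def frac_flow(s, mu_w=1.0, mu_o=2.0):
    # Buckley-Leverett fractional flow, quadratic relative permeabilities.
    krw, kro = s ** 2, (1.0 - s) ** 2
    return (krw / mu_w) / (krw / mu_w + kro / mu_o)

# Welge tangent construction: the shock saturation s* maximizes f(s)/s
# (water injected into oil at zero initial water saturation).
s = np.linspace(1e-6, 1.0, 100001)
f = frac_flow(s)
s_shock = s[np.argmax(f / s)]
v_shock = frac_flow(s_shock) / s_shock        # shock speed, xi = x/t units

def saturation_profile(xi):
    """Self-similar solution s(xi): rarefaction where xi = f'(s) on
    [s_shock, 1], initial state ahead of the shock. This is the 1D profile
    a fill function would distribute over fine cells while preserving the
    coarse-block average saturation."""
    ds = 1e-6
    fp = (frac_flow(s + ds) - frac_flow(s - ds)) / (2 * ds)   # f'(s)
    rare = s >= s_shock                       # f' is decreasing on this branch
    out = np.zeros_like(xi)
    for i, x in enumerate(xi):
        if x < v_shock:                       # behind the shock: invert xi=f'(s)
            out[i] = s[rare][np.argmin(np.abs(fp[rare] - x))]
    return out

prof = saturation_profile(np.linspace(0.0, 2.0, 50))
print(s_shock, v_shock)
```

The analytic shock saturation for this flux is 1/sqrt(3) ≈ 0.577, which the tangent search recovers.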
-
-
-
Flow Diagnostics for Model Ensembles
Authors F. Watson, S. Krogstad and K. Lie
Summary
Ensembles of geomodels provide an opportunity to investigate the range of parameters and possible operational outcomes for a reservoir of interest. Full-featured dynamic modelling of all ensemble members is often computationally unfeasible; however, some form of dynamic modelling that allows us to discriminate between ensemble members based on their flow characteristics is required.
Flow diagnostics involve simplified analysis of steady flow scenarios, single-phase or multiphase, and can be run in a much shorter time than a full dynamic multiphase simulation.
Fundamental quantities calculated for flow diagnostics include travel times, volumetric partitions, inter-well communications, and measures of dynamic heterogeneity. Heterogeneity measures like the dynamic Lorenz coefficient and sweep efficiency can be used as proxies for oil recovery in order to rank models. More advanced flow diagnostic techniques can also be used to estimate recovery.
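A sketch of how a dynamic Lorenz coefficient can be obtained from a flow field: order cells from fastest to slowest, accumulate storage capacity (Phi) and flow capacity (F), and measure the departure of F(Phi) from the diagonal. The cell pore volumes and fluxes below are synthetic placeholders:

```python
import numpy as np

def lorenz_coefficient(pore_vol, flux):
    """Dynamic Lorenz coefficient Lc = 2*(area under F(Phi) - 1/2), with
    cells ordered by flux per pore volume, fastest first. Lc = 0 for
    homogeneous displacement, Lc -> 1 for extreme heterogeneity."""
    order = np.argsort(flux / pore_vol)[::-1]
    Phi = np.concatenate([[0.0], np.cumsum(pore_vol[order]) / pore_vol.sum()])
    F = np.concatenate([[0.0], np.cumsum(flux[order]) / flux.sum()])
    area = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(Phi))   # trapezoid rule
    return 2.0 * (area - 0.5)

rng = np.random.default_rng(2)
pv = np.ones(500)                            # equal cell pore volumes
q = np.exp(1.5 * rng.standard_normal(500))   # lognormal fluxes ~ heterogeneity
lc = lorenz_coefficient(pv, q)
print(lc)
```

Ranking ensemble members by `lc` is the kind of inexpensive proxy-for-recovery comparison the paper evaluates against full simulation.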
We present two different forms of flow diagnostics metrics and investigate how well they perform in an ensemble setting. The first is based on volume-averaged travel times, which are calculated on a cell-by-cell basis from a given flow field. These measures are inexpensive to calculate and yield good results for relative rankings of models in the ensemble. The second uses residence time distributions, which lead to more accurate results, allowing for better estimation of recovery volumes. In addition, we have developed new metrics for better correlation between diagnostics and simulations when models have non-uniform initial saturations.
Three different ensembles of models are analysed: Egg, Norne, and Brugge. Very good correlation, in terms of model ranking and recovery estimates, is found between flow diagnostics and full simulations for all three ensembles. In the Egg and Norne examples, we consider uniform initial saturation and evenly spread well locations. Simulation results in terms of model ranking are well characterised by flow diagnostics based on volume-averaged travel times and residence time distributions, which are calculated using average initial saturations.
For the Brugge example, we consider producers placed in an oil cap, and demonstrate how the diagnostics results can be localized to the region of interest. We observe good correlations between simulations and simple flow diagnostic proxies for oil sweep. In addition, we also obtain good approximations for recovery when mapping saturation to the backward time-of-flight variable and solving 1D transport equations with the inter-well residence-time distributions as source terms.
-
-
-
Particle Transport Scheme for Embedded Discrete Fracture Models
Authors R. Monga, R. Deb, D.W. Meyer and P. Jenny
Summary
Embedded Discrete Fracture Models (EDFMs) for fractured porous media are preferable over Discrete Fracture Models, which require complex fracture geometries to be fully resolved and the fracture and matrix discretizations to be conformal. Lagrangian particle-tracking schemes offer convenient means for solute transport modeling because in EDFM frameworks, an orthogonal grid can be used irrespective of the fracture geometries. However, the absence of resolved fracture-matrix interfaces and the different dimensionalities of the matrix and fracture continua motivate the use of a stochastic framework for particle-tracking. In this work, we developed a stochastic, time-adaptive particle-tracking scheme for EDFM models of fractured media with a permeable matrix. We formulated the probabilities of inter-continuum particle transfer, which depend on the particle travel time through the matrix/fracture control volumes. We showcase the conservative nature of the proposed particle-tracking scheme and additionally illustrate the estimation of an averaged solute concentration field. Such an illustration hints at potential extensions of the tracking scheme, e.g., modeling of solute transport with kinetic reactions, and its incorporation into random walk models for dispersion in fractured media.
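A generic illustration of travel-time-dependent inter-continuum transfer — not the authors' exact formulation — using a Poisson-type exchange probability in a two-state (matrix/fracture) random walk; the characteristic exchange times are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def transfer_probability(dt, tau):
    """Probability that a particle switches continuum during its travel
    time dt through a control volume, modeled as a Poisson-type exchange
    with characteristic time tau (a stand-in for the paper's expression)."""
    return 1.0 - np.exp(-dt / tau)

def track(n_particles, n_steps, dt, tau_mf, tau_fm):
    in_fracture = np.zeros(n_particles, dtype=bool)   # all start in matrix
    for _ in range(n_steps):
        u = rng.random(n_particles)
        p = np.where(in_fracture,
                     transfer_probability(dt, tau_fm),   # fracture -> matrix
                     transfer_probability(dt, tau_mf))   # matrix -> fracture
        in_fracture ^= u < p                             # stochastic transfer
    return in_fracture.mean()

# Long-time fraction in the fracture approaches the two-state Markov
# equilibrium p_mf / (p_mf + p_fm); particle count is trivially conserved.
frac = track(20000, 400, dt=0.1, tau_mf=1.0, tau_fm=3.0)
p_mf = transfer_probability(0.1, 1.0)
p_fm = transfer_probability(0.1, 3.0)
print(frac, p_mf / (p_mf + p_fm))
```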
-
-
-
Gauss-Newton Trust Region Search Optimization Method for Least Squares Problems with Singular Hessian
Authors G. Gao, F. Saaf, J. Vink, M. Krymskaya and T. Wells
Summary
Although the Gauss-Newton trust-region sub-problem (GNTRS) solver using the inverse-quadratic model (GNTRS-IQ) performs more efficiently than other direct solvers using matrix factorization and more robustly than available iterative solvers, it cannot compute the desired GN search step for many least-squares problems in which the Hessian is (near) singular. A popular approach to handle a singular matrix is to apply singular-value decomposition (SVD). However, it is quite expensive to compute the SVD of a large matrix.
In this paper, we developed an integrated GNTRS solver by combining different methods together. For problems with a positive-definite Hessian, the GN search step is computed by solving a linear equation with N = min(Nd, Nm) unknowns, where Nd is the number of data and Nm the number of unknown parameters. Otherwise, we apply a linear transformation to reduce the dimension from Nm to r (the rank of the Hessian) and then compute the GN step by solving a linear equation with r unknowns. When r = Nd < Nm, we use the sensitivity matrix J as the transformation matrix. When r is smaller than min(Nd, Nm), we first compute the compact SVD of J and then use the compact form of the right-singular matrix as the transformation matrix. For performance comparison, we also developed two GNTRS solvers using the traditional-SVD and compact-SVD formulations.
The three GNTRS solvers are validated and their performances are benchmarked on three sets of synthetic test problems. Each set contains 500 problems with different numbers of parameters and observed data. For small-scale and intermediate-scale problems, the solutions obtained by the three solvers have comparable accuracy. However, for large-scale problems, the solutions obtained by the solver using the traditional SVD deviate from the solutions obtained by the other solvers with unacceptably large errors. The integrated GNTRS solver performs most efficiently, and it can reduce the CPU time by a factor ranging from 1 to 173 and from 1 to 14585 when compared to the two solvers using the compact and traditional SVD. Our numerical tests confirm that the integrated GNTRS solver is efficient, robust, and applicable to least-squares problems with a singular Hessian. Finally, we applied the newly developed GN trust-region search optimization method using the integrated GNTRS solver to a Gaussian-Mixture-Model (GMM) fitting problem. Compared with an optimizer using the older GNTRS-IQ solver, the new optimizer can reduce the CPU time used to construct an acceptable GMM by a factor of 8.
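The minimum-norm Gauss-Newton step for a rank-deficient problem via the compact SVD can be sketched as follows; the rank tolerance and the crude step scaling (in place of a full trust-region solve) are simplifying assumptions:

```python
import numpy as np

def gn_step_compact_svd(J, r, delta=None):
    """Minimum-norm GN step dx minimizing ||r + J dx|| when J^T J is
    (near) singular, via the compact SVD of the sensitivity matrix J.
    A real GNTRS solver would enforce the trust-region bound properly;
    here we just rescale if ||dx|| exceeds delta."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    rank = int(np.sum(s > s[0] * 1e-12))           # numerical rank
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]    # compact form
    dx = -Vt.T @ ((U.T @ r) / s)                   # minimum-norm solution
    if delta is not None and np.linalg.norm(dx) > delta:
        dx *= delta / np.linalg.norm(dx)
    return dx

# Rank-deficient example: Nd = 3 data, Nm = 4 parameters, rank 2.
J = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0, 1.0]])
r = np.array([1.0, -1.0, 0.0])
dx = gn_step_compact_svd(J, r)
print(dx)
```

Because only the leading `rank` singular triplets are kept, the linear solve involves r unknowns rather than Nm, which is the dimension-reduction idea described above.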
-
-
-
Upscaling Low Salinity Water Flooding in Heterogenous Reservoirs
Authors H. Al-Ibadi, K. Stephen and E. Mackay
Summary
Modelling the dynamic fluid behaviour of Low Salinity Water Flooding (LSWF) at the reservoir scale is a challenge which requires a coarse grid to enable prediction in a feasible timescale. However, evidence shows that using low-resolution models will result in a considerable mismatch compared with an equivalent fine-scale model, with the potential of strong numerically induced oscillations. This work examines two new upscaling methods in a heterogenous reservoir where viscous crossflow takes place to improve the precision of predictions.
We apply two approaches to upscaling of the flow to improve precision. In the first upscaling method, we shift the effective salinity range for the coarse model based on algorithms that we have developed to correct for numerical dispersion. The second upscaling method uses appropriate pseudo relative permeability curves that we derive. The shape of this new set of relative permeability is designed based on a modified fractional flow analysis of LSWF that we have developed and captures the relationship between dispersion and the waterfront velocities. This approach removes the need for explicit simulation of salinity transport. We applied these approaches in layered models and for permeability distributed as a correlated random field.
Upscaling by shifting the effective salinity range of the coarse model gave a good match to the fine-scale scenario, while a considerable mismatch was observed for traditional upscaling of the absolute permeability only using averaging methods. For highly coarsened models, this method of upscaling reduces the oscillations that appear, although they can still be apparent. On the other hand, upscaling by using a single (pseudo) relative permeability produced more robust results with a very promising match to the fine-scale scenario. These methods of upscaling showed promising results where they were used to upscale fully communicating and non-communicating layers as well as models with randomly correlated permeability.
Unlike methods documented in the literature, these newly derived methods take into account the crucial effect of numerical dispersion and effective concentration on fluid dynamics using mathematical tools. These methods could be applied to other models where the phase mobilities change as a result of an injected solute, such as surfactant flooding and alkaline flooding. Usually these models use two sets of relative permeability and switch from one to another as a function of the concentration of the solute.
-
-
-
Novel Stabilizations for A Piecewise Constant Lagrangian Formulation of Frictional Contact Mechanics with Hydraulically Active Fractures
Authors A. Franceschini, N. Castelletto, J. White, R. Settgast and H. Tchelepi
Summary
Many reservoir engineering applications involve tight coupling between fluid flow processes and poromechanical deformation. In particular, accurate simulation of phenomena like fault reactivation and fracture propagation strongly depends on the two-way coupled fluid-structure interaction. In this work, we focus on modeling frictional contact mechanics coupled with hydraulically active fractures. Specifically, fluid flow occurs inside the fracture, with the fluid pressure acting as an external load for the continuous body, and the conductivity of the fracture is a strong function of the bulk rock deformation.
In our numerical model we adopt a single conforming computational grid for both mechanical and flow processes. A cell-centered finite-volume scheme is used to solve the pressure field inside the fracture while the displacement field in the surrounding rock is approximated through first-order continuous finite elements. Contact conditions on the fracture are imposed through Lagrange multipliers, which represent the contact tractions. For the Lagrange multipliers we employ the same piecewise-constant interpolation (component-wise) used for the pressure approximation.
While this approximation space is convenient from a modeling perspective, the combination of linear displacement and piecewise constant traction/pressure variables is not uniformly inf-sup stable and requires a suitable stabilization. Hence, starting from a macroelement analysis, we develop three novel techniques, one local and two global, which aim at stabilizing the traction jumps across the elements discretizing the fracture surface. Effectiveness and robustness of proposed stabilization strategies are demonstrated and compared against complex analytical two- and three-dimensional benchmarks from the literature.
-
-
-
Lattice Boltzmann Method Assisting WAG Hysteresis and Trapped Non-Wetting Phase Simulations
Authors L.G. Rodrigues, F. Munarin, H. Vasquez and S. Lucena
Summary
The numerical formulation of an oil reservoir is a formidable task that requires the contribution of several areas of expertise, often unrelated, at different scales. Since this is a hierarchical problem, errors introduced in one step will interfere with the next step, increasing inaccuracies. Our objective is to use predictive numerical methods at different simulation scales to study the oil-trapping relationship, which is influenced by wettability, flow rate, interfacial tension, and saturation history during the WAG (Water Alternating Gas) process. This complex behavior requires rigorous models that consider the simultaneous flow of all three phases and, in addition, remove the reversibility of drainage and imbibition scanning curves. The three-phase hysteresis model implemented during numerical reservoir simulation is based on the work of Larsen and Skauge, and comparative scenarios were run for parameters that came from the Lattice Boltzmann method and typical values from the literature. The Lattice Boltzmann method is used to simulate two-phase flow of water-oil and oil-gas through a porous medium in order to determine capillary pressure and relative permeability curves in a pore network. Molecular simulations of fluid properties (PVT, viscosity and interfacial tension) are performed to ensure the accuracy of the equations of state used in the model. At this scale, density vs. pressure and viscosity vs. pressure curves similar to those obtained in experimental differential liberation tests (1 to 400 bar) of reservoir fluids were obtained through molecular simulation models. Besides, the effect of the density ratio between the fluids and the contact angle on the shape of the capillary pressure and relative permeability curves is investigated at the pore scale. Hysteresis is observed in all studied cases, becoming more apparent with large density differences.
The density ratio is found to influence the pressure required to remove fluids from porous media and the volume of residual fluids trapped in it. The results are important for the study of these curves for a reservoir and confirm that the multi-component Lattice Boltzmann method can supply mesoscale information to be used in macroscale studies with reservoir simulation software.
-
-
-
Fractured Reservoir Characterization in Brazilian Pre-Salt Using Pressure Transient Analysis with a Probabilistic Approach
Authors C.K. Quispe Cerna, D.J. Schiozer, G. Soares Oliveira, A. De Lima and R. B. Z. L. Moreno
Summary
The integration of dynamic data in the characterization of a fractured carbonate reservoir contributes to uncertainty reduction and the construction of more reliable simulation models. This paper proposes the inclusion of pressure transient analysis, with a probabilistic approach, in the characterization of a fractured carbonate reservoir for the generation and calibration of stochastic discrete fracture network (DFN) models. The process aims to reduce uncertainty through the calibration of a set of realizations considering the well testing interpretation.
This work is supported by the pressure transient analysis performed in a reservoir located in the Santos basin in Brazil's pre-salt. The proposed methodology integrates the well testing interpretation, considering its uncertainties, in the calibration and generation of stochastic sub-seismic fault models based on a fractal hypothesis. We chose realizations whose fault density crossing the wellbore is consistent with the borehole image logs and calibrated these realizations with the well testing. Later, we upscaled these models, imported the properties into numerical simulation models, and compared their results with those obtained by the simulation models generated before the proposed calibration.
Well test interpretation results showed characteristics of a fractured reservoir, presence of heterogeneities and boundaries. The analytical model used in the well test interpretation is supported by the borehole image logs, petrophysical data and seismic information. The inclusion of these results in the generation and calibration of DFN models allows us to obtain simulation results consistent with the well test history, improving simulation models' reliability. Likewise, this procedure reduced the high variability of the generated simulation models compared to simulation models corresponding to uncalibrated DFN models. Additionally, the interpretation results enable us to estimate parameters of the reservoir and the well used in the numerical simulation model and also improve the characterization of the reservoir.
The main contribution of this work lies in the integration of pressure transient analysis, considering uncertainties in its interpretation, into the calibration of stochastic DFN models. This methodology provides an alternative for DFN model calibration that tries to reduce the variability and generate simulation models consistent with production data. Besides, we compare the DFN models calibrated by the proposed methodology with uncalibrated DFN models, revealing positive and negative aspects of this methodology.
-
-
-
Estimation of the Chance of Success of A Four-Dimensional Seismic Project for A Developed Oil Field
Authors A.T.F.S. Gaspar, S.M.G. Santos, C.J. Ferreira, A. Davolio and D.J. Schiozer
Summary
Developed oil fields often present challenges for further exploitation owing to existing production facilities. Frequently with a long production history, new wells cannot be drilled as freely as in earlier phases. As more knowledge is acquired along the course of field development, there is less room for changes, with a potential end point for data acquisition. This paper is based on a workflow originally conceived for quantifying the chance of success (CoS) of a four-dimensional (4D) seismic project for an oil field at the beginning of the development phase, when a complete infrastructure must be defined. Here, we apply this workflow to a developed oil field combined with an assisted production strategy optimization process proposed to optimize large-scale problems using a multilevel approach, allowing us to estimate the CoS within a global and integrated decision analysis framework. The optimization process is composed of steps to define and optimize the decision variables of an oil production strategy, involving a given set of uncertain reservoir models, within a viable number of simulation runs through the use of automatic methods and reservoir engineering analyses. The information provided by 4D seismic data can be used to identify the most-likely reservoir model and, combined with numerical reservoir simulation, to optimize the control and field revitalization variables of the production strategy. This work compares the chance of success and the expected value of information (EVoI) methodologies. We use representative models selected from an ensemble of reservoir models based on cross plots of technical and economic objective functions, the associated risk curves and the probability distribution function of the uncertain attributes.
The use of representative models makes the production strategy optimization and the CoS and EVoI quantification processes practical, considering all the uncertainties and decision variables involved in the same run, where limiting computational costs is essential. We also analyze the influence of the number of representative models on these estimates. The results of this study showed that, although this oil field presented limited room for changes because of the late stage of development, 4D seismic information effectively impacted decisions regarding production strategy. Besides, our methodology showed that the expected economic gains from improved decisions are higher than the acquisition and processing costs related to information acquisition.
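The EVoI comparison reduces to a small decision-theoretic calculation: compare the best expected NPV attainable when committing to one strategy now against the expectation of choosing the best strategy per scenario identified by the 4D seismic. The strategies, scenarios, probabilities and NPV values below are purely illustrative:

```python
import numpy as np

# Rows: candidate production strategies; columns: uncertain reservoir
# scenarios (e.g. representative models). Entries: NPV, illustrative units.
npv = np.array([[120.0, 80.0, 60.0],     # strategy A
                [100.0, 95.0, 70.0],     # strategy B
                [ 90.0, 90.0, 85.0]])    # strategy C
prob = np.array([0.3, 0.4, 0.3])         # prior scenario probabilities

value_without = np.max(npv @ prob)         # commit to one strategy today
value_with = prob @ np.max(npv, axis=0)    # decide after the scenario is known
evoi = value_with - value_without
print(value_without, value_with, evoi)
```

Acquiring the 4D seismic is economically justified when `evoi` exceeds the acquisition and processing costs, which is the comparison the study performs.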
-
-
-
Physics Based Deep Learning for Nonlinear Two-Phase Flow in Porous Media
Authors O. Fuks and H. Tchelepi
Summary
There is growing interest in employing Machine Learning (ML) strategies to solve forward and inverse computational physics problems. The physics-informed machine learning (PIML) frameworks developed by Raissi et al. [1] and Zhu et al. [2] are prominent examples. The basic idea is to encode the partial differential equations (PDE) that govern the flow physics into the neural network. This encoding is achieved by enriching the loss function with the governing conservation equation. Using the initial and boundary conditions, the network is then able to learn the solution of the forward problem without any labeled data. The scarcity of site-specific "labeled" data presents serious challenges to the modeling of Enhanced Oil Recovery (EOR) processes. Thus, if PIML approaches can be used to model the nonlinear flow and transport that govern EOR processes, then they could change the practice of reservoir simulation.
In this work, we explore the application of a particular PIML approach to solve the nonlinear hyperbolic equation that describes nonlinear immiscible two-phase flow in porous media. Specifically, we are concerned with the forward solution of a Riemann problem: a nonlinear conservation law together with piecewise constant data having a single discontinuity. It is well known that this nonlinear transport problem is hard to solve, especially with a non-convex flux function, due to the emergence of saturation shocks in the domain. The focus is on the pure forward problem, i.e., the absence of previously simulated (so-called labeled) data in the interior of the domain. The PIML framework breaks down for this nonlinear hyperbolic problem with a non-convex flux function. We have found that it is essential to add a diffusion term to the underlying nonlinear PDE. That is, we used the parabolic form of the equation with a finite Peclet number. When the loss function includes a finite amount of diffusion, the neural network can actually produce reasonable approximations of the forward solution when shocks and mixed waves (shocks and rarefactions) are present.
For the trained neural networks, we also analyze the training process, provide 2D visualizations of the loss landscape, and discuss possible reasons for the observed behavior.
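To make the loss construction concrete: the physics-informed loss penalizes the residual of the parabolic equation s_t + f(s)_x = eps * s_xx. The sketch below evaluates that residual for a hand-made smooth front, using finite differences in place of the automatic differentiation a neural network would use; the flux, front shape and the diffusion coefficient eps are illustrative assumptions:

```python
import numpy as np

def frac_flow(s):
    # Non-convex (Buckley-Leverett type) flux: the hard case for PIML.
    return s ** 2 / (s ** 2 + 0.5 * (1.0 - s) ** 2)

def pde_residual(s, dx, dt, eps):
    """Residual of s_t + f(s)_x - eps * s_xx on a space-time grid; this is
    the quantity the physics-informed loss drives toward zero. s has
    shape (nt, nx); the result covers interior points only."""
    s_t = (s[2:, 1:-1] - s[:-2, 1:-1]) / (2 * dt)
    f = frac_flow(s)
    f_x = (f[1:-1, 2:] - f[1:-1, :-2]) / (2 * dx)
    s_xx = (s[1:-1, 2:] - 2 * s[1:-1, 1:-1] + s[1:-1, :-2]) / dx ** 2
    return s_t + f_x - eps * s_xx

# Candidate 'network output': a smooth front travelling at the shock speed.
nx, nt, eps = 201, 101, 5e-3
x, t = np.linspace(0, 2, nx), np.linspace(0, 1, nt)
dx, dt = x[1] - x[0], t[1] - t[0]
v = 1.366   # Buckley-Leverett shock speed for this flux (illustrative)
s = 0.5 * 0.577 * (1 - np.tanh((x[None, :] - v * t[:, None] - 0.3) / 0.1))
res = pde_residual(s, dx, dt, eps)
loss = np.mean(res ** 2)
print(loss)   # a training loop would minimize this, plus IC/BC terms
```

With eps = 0 the loss targets the hyperbolic form, which is the regime where the abstract reports that training fails; a finite eps regularizes the shock.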
-
-
-
Investigation of the Accuracy and Efficiency of the Operator-based Linearization through an Advanced Reservoir Simulation Framework
Authors A. Al-Jundi, L. Li and A. Abushaikha
Summary
Complex phase behavior computations are a main challenge in reservoir simulation since they introduce high nonlinearities. To overcome this, operator-based linearization (OBL) has recently been introduced. In OBL, an operator format is applied to represent the mass-based formulations. By computing the values of the operators related to rock and fluid properties at pre-defined states, the values of the operators and their derivatives at any state that emerges during a simulation run can be determined by interpolation. Obviously, the accuracy of the results is mainly controlled by the pre-defined states. In this work, we present a detailed investigation of the accuracy and efficiency of the OBL. To guarantee an objective evaluation, a novel advanced parallel framework is applied for reservoir simulation. In this framework, we implement a multipoint linearization method that is capable of providing accurate, robust, and convergent solutions for reservoir simulation. The number of points in the parametric space of each nonlinear unknown is defined as the resolution. By running simulations at different resolutions, we compare the numerical solutions with analytical solutions. This shows that the resolution has a large effect on the accuracy of the numerical solutions. We also investigate the robustness of the OBL by running simulations on several models with different complexity of the phase behavior. Besides, by looking into the convergence behavior, we also study the efficiency of the OBL method. Finally, we test several field cases to show the performance of the OBL method for general-purpose reservoir simulations.
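The tabulate-then-interpolate mechanism, and the effect of the resolution on accuracy, can be sketched in one dimension; the operator alpha(p) and the pressure range below are hypothetical stand-ins for a state-dependent accumulation operator:

```python
import numpy as np

def interp_operator(table, nodes, x):
    """Piecewise-linear interpolation of a tabulated operator: returns the
    operator value and its derivative at any state x reached during a run,
    without re-evaluating the underlying property model."""
    i = np.clip(np.searchsorted(nodes, x) - 1, 0, len(nodes) - 2)
    w = (x - nodes[i]) / (nodes[i + 1] - nodes[i])
    val = (1 - w) * table[i] + w * table[i + 1]
    der = (table[i + 1] - table[i]) / (nodes[i + 1] - nodes[i])
    return val, der

# Hypothetical mildly nonlinear operator alpha(p) on a pressure axis.
alpha = lambda p: p * (1.0 + 1e-4 * p)

errors = []
for res in (8, 64, 512):                       # 'resolution' = node count
    nodes = np.linspace(0.0, 500.0, res)
    table = alpha(nodes)                       # precomputed nodal tabulation
    p = np.linspace(1.0, 499.0, 1000)          # states visited during a run
    val, _ = interp_operator(table, nodes, p)
    errors.append(np.max(np.abs(val - alpha(p))))
print(errors)   # interpolation error shrinks as the resolution grows
```

In a real OBL simulator the parametric space is multi-dimensional (pressure and compositions) and the interpolation is multi-linear, but the accuracy-vs-resolution trade-off studied in the paper is the same one visible here.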
-
-
-
An Advanced Parallel Framework for Reservoir Simulation with Mimetic Finite Difference Discretization and Operator-based Linearization
Authors L. Li and A. Abushaikha
Summary
Reservoir simulation is the only way to reproduce the flow response in subsurface reservoirs, and it drastically assists in reducing the uncertainties in the geological characterization and in optimizing field development strategies. However, it is always challenging to provide efficient and accurate solutions for field cases, which in turn further constrains the utilization of reservoir simulation. In this work, we develop a novel reservoir simulation framework based on an advanced spatial discretization and linearization scheme, the mimetic finite difference (MFD) method and operator-based linearization (OBL), for fully implicit temporal discretization. The MFD has gained some popularity lately since it was developed to solve for unstructured grids and full-tensor properties while mimicking the fundamental properties of the system (i.e. conservation laws, solution symmetries, and the fundamental identities and theorems of vector and tensor calculus). On the other hand, in the OBL the mass-based formulations are written in an operator form where the parametric space of the nonlinear unknowns is treated piecewise for the linearization process. Moreover, the values of these operators are usually precomputed into a nodal tabulation and, with the implementation of multi-linear interpolation, the values of these operators and their derivatives during a simulation run can be determined in an efficient way for the Jacobian assembly at any time-step. This saves computational time during complex phase behavior computations. By coupling these two novel schemes within a parallel framework, we can solve large and complex reservoir simulation problems in an efficient manner. Finally, we benchmark these methods with analytical solutions to assure their robustness, accuracy, and convergence. We also test several field cases to demonstrate the performance and scalability of the advanced parallel framework for reservoir simulation.
-
Discontinuous Control Volume Finite Element Method for Multiphase Flow in Porous Media on Challenging Meshes
Authors J. Al Kubaisy, H. Osman, P. Salinas, C. Pain and M. Jackson
Summary: Control volume finite element methods (CVFEM) are gaining popularity for modeling multiphase flow in porous media due to their inherent geometric flexibility for modeling complex shapes. Nonetheless, classical CVFEM suffer from two key problems. First, mass conservation is enforced through control volumes that span element boundaries; consequently, when modeling flow in regions with discontinuous material properties, control volumes that span geologic domain boundaries produce non-physical leakage that degrades the accuracy of the numerical solution. Second, it is difficult to obtain an accurate solution on distorted elements, i.e. high-aspect-ratio elements within the discretized heterogeneous domain; in fact, most numerical methods struggle to provide a converged pressure solution for such elements.
Here, we introduce a numerical scheme that removes non-physical leakage across geologic domains and addresses the accuracy of the classical CVFEM on high-aspect-ratio elements. The scheme builds on the double-CVFEM (DCVFEM) framework, in which pressure is discretized CV-wise rather than element-wise, and introduces discontinuous control volumes by allowing pressure to be discontinuous between elements. The resulting finite element pair uses equal-order, discontinuous linear elements for both the pressure and velocity fields, P1DG–P1DG. This element pair is LBB unstable; the instability is circumvented by global enrichment of the finite element velocity interpolation space with an interior bubble function, giving the new element pair P1(BL)DG–P1DG, which resolves both of the issues raised above.
We demonstrate that the developed numerical method is mass conservative, and it accurately preserves sharp saturation changes across different material properties or discontinuous permeability fields as well as improves convergence to the pressure solution for distorted mesh, i.e. elements with high aspect ratio. We show the effectiveness of the presented formulation on realistic highly heterogeneous models.
-
A Modeling Workflow for Geological Carbon Storage Integrated with Coupled Flow and Geomechanics Simulations
Authors J. Torres, I. Bogdanov and M. Boisson
Summary: Geological Carbon Storage (GCS) is expected to play a critical role in accelerating the transition toward a low-carbon economy during the remainder of the century. To make a significant contribution, however, GCS must be scaled up from megaton (per year) to gigaton levels. Among other challenges, the ability to perform reliable reservoir modeling and simulation is essential for the optimal management of potential risks. The injection of large volumes of fluid has raised public awareness of potentially induced seismicity. The subsurface mechanisms that may trigger seismicity are numerous and complex, and they strain the prediction capabilities of standard simulation tools. Understanding and simulating the physics of these complex phenomena is an important part of TOTAL's CCUS R&D program, as it is the company's ambition to become a major actor in CO2 storage activity while ensuring risk management and mitigation. To improve this understanding, we studied fundamental aspects of reservoir modeling and simulation in the GCS context.
This study is a first step towards enhancing our understanding of these complex phenomena. We revisited the role that the presence of fractures and faults plays in reservoir containment, and developed a coupled flow and geomechanics simulation to investigate the influence of different parameters on the reactivation of critically stressed faults. We evaluated different experimental scenarios, which resulted in axial displacements ranging from 15 m to 170 m within a lab model with a characteristic length of 20 m. The worst-case scenario corresponds to a situation with pre-existing vertical fractures. The results indicate that the dynamics of the fault permeability is a critical factor among those that must be taken into account for CO2 injection scenarios in complex geological formations. In our view, these results confirm that uncertainty in the fault characterization needs to be accounted for in improved risk assessments associated with CO2 injection.
-
Optimization of Reservoir Surveillance Strategies Under Uncertainty: An Application to the Design of Sparse Monitoring Surveys
Authors E. Barros and O. Leeuwenburgh
Summary: Reservoir monitoring, or surveillance, is crucial for the responsible and efficient use of subsurface reservoirs. In both production and storage systems, operators need to demonstrate that their assets can be managed safely. Effective monitoring practices also help operators unlock additional value from their assets by revising expectations, gaining confidence in their projected potential, and allowing development strategies to be adjusted.
The design of monitoring systems can be a challenge for both subsurface operators and regulators. The main difficulties range from technical uncertainties in the subsurface characterization and the unavailability of direct measurements to the lack of tools to support monitoring design decisions.
As a step to bridge this gap, we have developed a quantitative value-of-information (VOI) methodology within the context of conformance management in CO2 storage operations. It is a practical model-based approach that uses a Bayesian framework to derive a measure of the expected contribution of (future) measurements from candidate monitoring strategies for discrimination of conformance and non-conformance situations.
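The Bayesian discrimination idea can be sketched in a toy setting. The numbers and Gaussian likelihoods below are illustrative assumptions, not the authors' model: a single noisy measurement updates the prior probability that the storage site is in a conformance situation.

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of a normal distribution N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def posterior_conformance(measurement, prior_conf=0.5,
                          mu_conf=0.0, mu_nonconf=1.0, sigma=0.3):
    """Bayes update: P(conformance | measurement) for two competing hypotheses.

    mu_conf / mu_nonconf are the measurement values predicted by the
    conformance and non-conformance models (hypothetical numbers)."""
    like_c = gauss_pdf(measurement, mu_conf, sigma)
    like_n = gauss_pdf(measurement, mu_nonconf, sigma)
    num = like_c * prior_conf
    return num / (num + like_n * (1.0 - prior_conf))

# A measurement close to the conformance prediction should raise that probability.
p = posterior_conformance(0.1)
```

Averaging such posterior discrimination power over an ensemble of plausible measurement outcomes is the basic ingredient of a model-based VOI estimate.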
In this work we integrate our practical VOI approach into an optimization workflow. This novel workflow is applied to determine the optimal design of time-lapse seismic surveys in a realistic CO2 storage case study. Obviously, the larger the spatial coverage of the survey, the more informative it will be. Therefore, the search for cheaper sparse survey designs is by nature a bi-objective problem which needs to consider both the accuracy requirements and the costs of the survey in the same optimization. Our optimization workflow also accounts for the uncertainties associated with the reservoir system by using ensembles of plausible measurement outcomes and model realizations.
Our results show that sparse survey designs can be optimized to reduce costs while keeping accuracy levels comparable to denser designs. Our results also suggest that optimal survey configurations lie on a Pareto front of the two objectives considered, corroborating the idea that the design of cost-efficient monitoring strategies has a multi-objective character.
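The bi-objective nature of the design search can be illustrated with a minimal non-dominated-set filter. The candidate designs below are invented for illustration; each is a (cost, expected discrimination error) pair, with lower being better in both objectives.

```python
def dominates(a, b):
    """a dominates b when it is no worse in both objectives and better in one."""
    return (a[0] <= b[0] and a[1] <= b[1]) and (a[0] < b[0] or a[1] < b[1])

def pareto_front(designs):
    """Filter (cost, error) survey designs down to the non-dominated set."""
    return [d for d in designs if not any(dominates(o, d) for o in designs)]

# Hypothetical sparse-survey candidates: (survey cost, expected error).
candidates = [(10, 0.30), (20, 0.20), (30, 0.19), (25, 0.25), (40, 0.10)]
front = pareto_front(candidates)
```

Design (25, 0.25) is dominated by (20, 0.20) and drops out; the surviving designs trade coverage cost against accuracy, which is the Pareto front the abstract refers to.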
-
Application of Diffuse Source Basis Functions for Improved Near Well Upscaling
Summary: Near-well flow can have a significant impact on the accuracy of the upscaling of geologic models. A recent benchmark study has shown that these errors may dominate other aspects of upscaling in commercial reservoir simulators. The same study showed the advantage of "Diffuse Source" (DS) upscaling over previous approaches. We now demonstrate the application of the DS basis functions to the calculation of the upscaled well index and the well-cell intercell transmissibility.
DS upscaling is an extension of pseudo-steady-state (PSS) flow-based upscaling that utilizes the diffusive time of flight to distinguish well-connected and weakly connected sub-volumes. DS upscaling retains the localization advantage of a PSS calculation: unlike steady-state flow, the local upscaling problem does not couple to adjacent regions, and local-global iterations are not required. DS upscaling has previously been developed and utilized for the calculation of the intercell transmissibility; we now apply it to the calculation of the upscaled well index. Consistent with other researchers, we adjust the intercell transmissibility in the near-well region.
We also consider the upscaling of the well index for a reservoir model in which the well trajectory and the high resolution geologic model are not simultaneously available. For many practitioners, this remains the most common reservoir modelling workflow. The result is an algebraic well index upscaling calculation, which also improves upon commercial applications.
The industry standard for the well index follows Peaceman. We show that PSS/DS upscaling reduces to Peaceman's well index on a coarse grid and is consistent with Peaceman's numerical convergence analysis. (In contrast, steady-state upscaling for the well index reduces to the Dietz well index.) The current approach is a generalization of Peaceman's well index, extended to represent near-well reservoir heterogeneity and arbitrary placement of a well perforation within a simulation well cell.
Consistent with steady state upscaling, we find an advantage in adjusting the intercell transmissibility in the near well region. However, we have found that it is only necessary to do so for the well cell itself, which may be a consequence of the improved localization of the current calculation.
The new methodology performs very well. It is tested for several models, including the SPE10 reference model, the Amellago carbonate outcrop model, and the Equinor Volve full-field model. We compare the results to steady state flow-based upscaling, the algebraic well index upscaling described above, and to algorithms found in commercial applications.
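For reference, the textbook Peaceman well index that the DS/PSS approach generalizes can be sketched as follows. This is the standard isotropic, vertical-well formula with a cell-centered well, not the paper's upscaled calculation; units are assumed consistent (permeability in m², lengths in m), and the skin term is optional.

```python
import math

def peaceman_wi(k, h, dx, dy, rw, skin=0.0):
    """Peaceman well index (transmissibility factor) for an isotropic well cell.

    ro is the pressure-equivalent radius of the well cell; for a square cell
    0.14 * sqrt(dx^2 + dy^2) reduces to the familiar ro ~ 0.2 * dx."""
    ro = 0.14 * math.hypot(dx, dy)
    return 2.0 * math.pi * k * h / (math.log(ro / rw) + skin)

# Hypothetical cell: k = 100 mD (~1e-13 m^2), 10 m thick, 100 m x 100 m grid.
wi = peaceman_wi(k=1e-13, h=10.0, dx=100.0, dy=100.0, rw=0.1)
```

Multiplying this geometric factor by the phase mobility gives the usual well inflow term; the anisotropic case replaces `ro` with Peaceman's permeability-weighted radius.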
-
Deep-Learning Inversion to Efficiently Handle Big-Data Assimilation: Application to Seismic History Matching
Authors C. Xiao, A. Heemink, H. Lin and O. Leeuwenburgh
Summary: Seismic history matching can play a key role in geological characterization and uncertainty quantification, but its intensive computational demands and complexity restrict its application in many practical cases. This paper presents a deep-learning-based framework, fully deployed in the popular PyTorch architecture, to accelerate seismic history matching. We introduce a surrogate model based on a deep convolutional neural network with a stack of dense blocks, specifically a conditional deep convolutional autoencoder-decoder architecture (cDCAE). The surrogate model allows us to deploy the data assimilation algorithms entirely within PyTorch and hence to make full use of efficient computing units, in particular GPUs, for the matrix-matrix and matrix-vector multiplications. PyTorch's built-in automatic differentiation (AD) also makes it possible to evaluate gradient information efficiently in parallel. Furthermore, the framework benefits from the deep-learning practice of using stochastic gradient (SG) optimizers, e.g. Adam, instead of the full-gradient optimizers, e.g. quasi-Newton, that are most common in conventional big-data assimilation. The proposed framework is tested on a benchmark 3D model in a petroleum engineering context. The surrogate model is shown to accurately predict the quantities of interest, e.g. dynamic saturation maps, for new geological realizations, and assessments demonstrating high surrogate-model accuracy are presented for an ensemble of test models. The robustness and dramatic speedup provided by the surrogate model suggest that it can facilitate the application of large-scale seismic history matching.
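The stochastic-gradient optimizer mentioned above, Adam, fits in a few lines of plain Python. This is a generic one-dimensional sketch of the standard update rule, not the paper's PyTorch implementation, run here on a toy deterministic objective.

```python
import math

def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    """Plain-Python Adam on a scalar variable (toy version of the SG optimizer)."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1.0 - beta1) * g        # first-moment (mean) estimate
        v = beta2 * v + (1.0 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1.0 - beta1 ** t)           # bias-corrected moments
        v_hat = v / (1.0 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Toy objective f(x) = (x - 3)^2 with gradient 2(x - 3); minimum at x = 3.
x_star = adam_minimize(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

The per-coordinate scaling by the second-moment estimate is what lets Adam cope with noisy minibatch gradients, where a quasi-Newton method would need careful line searches.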
-
Inclusion of Variable Characteristic Length in Microemulsion Flash Calculations
Authors D. Magzymov and R.T. Johns
Summary: Recent developments in predicting microemulsion phase behavior for use in chemical flooding are based on the hydrophilic-lipophilic deviation (HLD) and net-average curvature (NAC) equation of state (EoS). The most advanced version of the HLD-NAC EoS assumes that the three-phase micelle characteristic length is constant as parameters like salinity and temperature vary. In this paper, we relax this assumption to improve the accuracy and thermodynamic consistency of these flash calculations.
We introduce a variable characteristic length in the three-phase region based on experimental data that is monotonic with salinity or other formulation variables, such as temperature and pressure. The characteristic length at the boundary of the three-phase region is then used for flash calculations in the two-phase lobes for Winsor type II-/II+. The functional form of the characteristic length is made consistent with the Gibbs phase rule.
The improved EoS can capture asymmetric phase behavior data around the optimum, whereas current HLD-NAC based models cannot. The variable characteristic length formulation also resolves the thermodynamic inconsistency of existing phase behavior models that give multiple solutions for the optimum. We show from experimental data and theory that the inverse of the characteristic length varies linearly with formulation variables. This important result means that it is easy to predict the characteristic length in the three-phase region, which also improves the estimation of surrounding two-phase lobes. This improved physical understanding of microemulsion phase behavior should greatly aid in the design of surfactant blends and improve recovery predictions in a chemical flooding simulator.
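The reported linear relation between the inverse characteristic length and a formulation variable can be illustrated with a toy least-squares fit. The salinity values and 1/L data below are invented for illustration only; the point is that a line fitted to 1/L makes the characteristic length easy to predict anywhere in the three-phase region.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical data: inverse characteristic length vs salinity (wt%),
# generated from 1/L = 0.001 + 0.02 * S for illustration.
sal = [0.5, 1.0, 1.5, 2.0]
inv_len = [0.011, 0.021, 0.031, 0.041]
a, b = fit_line(sal, inv_len)

# Predict the characteristic length at an intermediate salinity S = 1.25.
L_at = 1.0 / (a + b * 1.25)
```

Because the fit is done on 1/L rather than L itself, a single intercept and slope characterize the whole three-phase region, consistent with the linearity claim in the abstract.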
-