ECMOR XVII
- Conference date: September 14-17, 2020
- Location: Online Event
- Published: 14 September 2020
41 - 60 of 145 results
A Novel Method for Quickly Obtaining SRV in Multi-Stage Fracturing Reservoirs with Different Fracturing Radii
Summary: Multi-stage fracturing is an effective reservoir stimulation technology for multilayer reservoirs, and the stimulated reservoir volume (SRV) is an important index of stimulation quality. For a multi-stage fractured vertical commingled production well in which each layer may have a different fracturing radius, an extended model of an n-layer vertical commingled production well with an arbitrary longitudinal distribution of fracturing radii was established. The bottom-hole pressure solution was obtained in the Laplace domain by Laplace transformation and solution of a sparse matrix of n-th order Bessel functions, and in the time domain by the Stehfest numerical inversion method. New flow regimes are identified from the characteristics of the bottom-hole pressure and its derivative on double logarithmic coordinates. Sensitivity analysis of several vertical distributions of fracturing radii shows that, when the radii are vertically uneven, the multi-stage fractured reservoir behaves like a three-zone composite reservoir. On the other hand, identifying the fracturing radii of a multi-stage fractured reservoir is an inverse problem: the fracturing radius of each layer cannot be effectively identified from the bottom-hole pressure response, but the SRV of the reservoir can still be obtained. We call these two phenomena the “equivalent compound effect” and the “equivalent seepage volume effect”, respectively. Together they provide a new method for quickly obtaining the SRV without becoming entangled in the fracturing radius of each individual layer, opening a new direction for evaluating the overall stimulation effect of multi-stage fractured vertical commingled production wells. In particular, this work offers a novel perspective for understanding the complex seepage flow of such reservoirs.
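The Stehfest step of the workflow, converting a Laplace-domain pressure solution back to the time domain, can be sketched numerically. A minimal sketch in Python (the Gaver-Stehfest weight formula is the standard one; the Laplace-domain function F below is a placeholder, not the paper's bottom-hole pressure solution):

```python
from math import factorial, log

def stehfest_coefficients(n):
    """Standard Gaver-Stehfest weights V_i (n must be even)."""
    assert n % 2 == 0
    half = n // 2
    v = []
    for i in range(1, n + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            s += (k ** half * factorial(2 * k)
                  / (factorial(half - k) * factorial(k)
                     * factorial(k - 1) * factorial(i - k)
                     * factorial(2 * k - i)))
        v.append((-1) ** (half + i) * s)
    return v

def stehfest_invert(F, t, n=12):
    """Approximate f(t) from the Laplace image F(s), sampled at real s."""
    ln2 = log(2.0)
    v = stehfest_coefficients(n)
    return ln2 / t * sum(v[i] * F((i + 1) * ln2 / t) for i in range(n))
```

The method only needs real-valued samples of F(s), which is why it pairs naturally with Laplace-domain well-test solutions; n around 10-14 is a common compromise between accuracy and round-off.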
Nonlinear State Constraints Handling in Waterflooding Optimization Through Reduced Order Models
Authors: A. Souza, A. Castro, M. Dall’Aqua, J. Tueros, B. Horowitz and E. Gildin

Summary: This study addresses strategies to efficiently impose nonlinear state constraints using reduced order models. Nonlinear constraints on state variables are of practical interest when optimizing reservoir production performance (NPV or oil production), but they are difficult to handle numerically. Constraints may involve bounds on the controls themselves (e.g. rates, BHPs or valve openings) or linear functions of the design variables, but oftentimes nonlinear constraints involving state variables must also be imposed. Examples are minimum (maximum) BHPs at producer (injector) wells subject to rate controls, or vice versa. Enforcing these constraints requires repeated computation of state variables, and possibly their derivatives, not only at the ends of control steps but at numerous intermediate times. Both computations are time consuming, and we therefore propose to use reduced order methods to decrease the numerical effort. The contributions of this paper are twofold: (1) we propose correction points, based on a time series within the control cycle, at which to impose state constraints, thus reducing the computational effort; (2) we couple the optimizer with physics-based and data-driven reduced-order models to enforce state complexity reduction.
Two strategies are compared: Proper Orthogonal Decomposition / Trajectory Piecewise Linearization (POD/TPWL) and Dynamic Mode Decomposition (DMD). Both methods are snapshot-based linearizations but are implemented differently. POD/TPWL reduces the complexity of the problem by linearizing the governing equations around converged states stored during a training simulation, with reduction obtained by projecting states onto smaller subspaces via POD. This method requires access to the simulator code and is therefore intrusive. DMD also relies on state snapshots, which are used to generate a small set of optimal basis vectors called modes. The snapshot data also permit extraction of a coherent dynamic structure of the problem, under the assumption that a linear mapping connects the temporal evolution of the system states. This evolution can then be computed without further simulation runs. DMD does not require access to the simulator code and is therefore nonintrusive. The reduced-order techniques are compared in the optimization of a BHP-controlled synthetic reservoir where the objective is maximization of oil production subject to field water production rate constraints. We demonstrate the handling of nonlinear constraints and the resulting computational savings using the MATLAB Reservoir Simulation Toolbox (MRST), with modifications to some of its routines to store Jacobian matrices and snapshots, both used by POD/TPWL and DMD.
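The DMD step described above, fitting a linear operator to successive snapshot pairs, can be sketched with plain numpy (a minimal exact-DMD sketch; the snapshot generation and variable names are illustrative, not MRST code):

```python
import numpy as np

def dmd(snapshots, rank):
    """Exact DMD on a snapshot matrix (states in columns, successive
    times). Fits X2 ~ A X1 in a rank-r POD subspace and returns the
    reduced operator, its eigenvalues, the POD basis and the DMD modes."""
    X1, X2 = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_red = U.T @ X2 @ Vh.T @ np.diag(1.0 / s)   # reduced linear operator
    lam, W = np.linalg.eig(A_red)                # DMD eigenvalues
    modes = X2 @ Vh.T @ np.diag(1.0 / s) @ W     # exact DMD modes
    return A_red, lam, U, modes
```

Because only the snapshot matrix is needed, this is the nonintrusive route described in the abstract: future states can be advanced with `A_red` in the reduced coordinates and lifted back with `U`, without calling the simulator again.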
Effects of Lumping on the Numerical Simulation of Thermal-Compositional-Reactive Flow in Porous Media
Authors: M. Cremon and M. Gerritsen

Summary: In this work, we study the influence of different lumping strategies on the simulated thermal recovery of an extra-heavy oil. Numerical simulation of thermal recovery processes typically requires advanced thermodynamic equilibrium computations to model phase behavior and displacement. Those models rely on compositional descriptions of the oil using up to tens of components. Lumping a large number of components into a smaller number of pseudo-components to reduce the computational cost is standard practice for thermal simulations; in the context of reactive transport, most reaction schemes use at most four hydrocarbon components. However, the impact the lumping process has on the displacement processes can be hard to estimate a priori. We focus on 1D, 3-phase, combustion-tube-like numerical simulations of In-Situ Combustion (ISC) displacement processes. These thermal-compositional-reactive simulations exhibit a tight coupling between mass and energy conservation through phase behavior, heat transport and reactions. We observe that, depending on the number and type of lumped pseudo-components retained in the simulation, the results can exhibit modeling artefacts and/or fail to capture the relevant displacement processes. ISC cases involve multiple fronts moving downstream, including a steam front, a reaction/temperature front and multiple saturation fronts. First, we show that using a small number of components does not allow an accurate estimation of the phase behavior of an extra-heavy oil: the typical reaction-based descriptions with a few hydrocarbon components (1-4) lead to inaccurate phase envelopes for multiple compositions encountered in the displacement process. Then, we illustrate that under hot air injection without reactions, the displacement results do not capture the physical phenomena: lumping heavy components together overestimates the size of the oil banks and gives inaccurate speeds for multiple fronts. Finally, in the presence of exothermic oxidation reactions, more components are needed to accurately capture the evaporation of medium and heavy components, due to the tighter coupling and higher temperatures.
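The lumping operation itself, collapsing components into pseudo-components, can be illustrated with a mole-fraction-weighted mixing rule (a common simple choice for a sketch; the grouping and the property values below are illustrative, not the paper's lumping scheme):

```python
def lump(z, props, groups):
    """Lump components into pseudo-components.
    z: mole fractions; props: dict of property name -> per-component
    values; groups: list of index lists, one per pseudo-component.
    Pseudo-component properties are mole-fraction-weighted averages."""
    z_lumped, p_lumped = [], {k: [] for k in props}
    for g in groups:
        zg = sum(z[i] for i in g)            # pseudo-component mole fraction
        z_lumped.append(zg)
        for name, vals in props.items():
            p_lumped[name].append(sum(z[i] * vals[i] for i in g) / zg)
    return z_lumped, p_lumped
```

Real lumping schemes use more careful mixing rules (e.g. Kay's rule variants or weight-based averaging for critical properties), but the structure, partitioning indices and averaging within each group, is the same.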
A Novel and Efficient Preconditioner for Solving Lagrange Multipliers-Based Discretization Schemes for Reservoir Simulations
Authors: S. Nardean, M. Ferronato and A.S. Abushaikha

Summary: We present a novel and efficient preconditioning technique for the non-symmetric systems of equations arising from Lagrange multipliers-based discretization schemes, such as the Mixed Hybrid Finite Element method (MHFE) and the Mimetic Finite Difference method (MFD). These discretizations have been gaining popularity lately, and here we develop a fully dedicated preconditioner for them. Preconditioners are key to the efficiency of Krylov subspace methods, which solve the sequence of large, often ill-conditioned systems of equations originating from reservoir numerical simulations.
The mathematical model of flow in porous media is governed by two coupled nonlinear equations, the momentum and mass balance equations, discretized with MHFE or MFD for the former and the Finite Volume (FV) method for the latter. Unknowns are located on elements (element pressure and saturation) and faces (face pressure and phase capillary pressure), the latter behaving as Lagrange multipliers. The problem is solved with a fully implicit approach, and linearization by the Newton-Raphson method leads to a block-structured Jacobian matrix. An original numerical formulation of the mass balance equation, in which flux continuity is strongly imposed to improve the efficiency of the nonlinear iteration, has been investigated. The resulting block Jacobian is not symmetric and thus requires special preconditioning tools for its efficient solution. The preconditioning approach exploits the Jacobian block structure to build a multi-stage strategy that addresses the problem unknowns separately. A crucial point is the approximation of the resulting Schur complements, carried out at an algebraic level by applying suitable restriction operators to the full matrix blocks. These restrictors are selected with the aid of a domain decomposition technique, algebraically enhanced by a dynamic minimal residual strategy. The proposed block preconditioner has been tested in extensive experiments on unstructured and highly heterogeneous reservoir systems, demonstrating its robustness and computational efficiency.
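The block solve at the heart of such a preconditioner can be sketched for a 2x2 block system via a block LDU-style sweep (here with an exact, dense Schur complement for clarity; in the paper the Schur complement is approximated algebraically via restriction operators):

```python
import numpy as np

def block_precond_solve(A, B, C, D, r1, r2):
    """Solve [[A, B], [C, D]] [x1; x2] = [r1; r2] by a block
    forward/backward sweep using the Schur complement
    S = D - C A^{-1} B (exact here; approximated in practice)."""
    y1 = np.linalg.solve(A, r1)                    # forward sweep on A-block
    S = D - C @ np.linalg.solve(A, B)              # Schur complement
    x2 = np.linalg.solve(S, r2 - C @ y1)           # multiplier-type unknowns
    x1 = y1 - np.linalg.solve(A, B @ x2)           # back-substitute
    return x1, x2
```

With exact blocks this reproduces the direct solution; a practical preconditioner replaces the inner solves with cheap approximations (e.g. AMG or incomplete factorizations), which is where the multi-stage strategy enters.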
Huff-n-Puff (HNP) Pilot Design in Shale Reservoirs Using Dual-Porosity, Dual-Permeability Compositional Simulations
Authors: H. Hamdi, C.R. Clarkson, A. Esmail and M. Costa Sousa

Summary: Before implementing an HNP pilot in the field, reservoir studies are usually conducted, and compositional numerical simulations performed, to assess the impact of uncertainty on HNP design parameters. In previous work by the authors, the impact of parametric uncertainty on the design of a single-well HNP was demonstrated using single-porosity models. However, recent studies show that a limited region of shattered rock is likely created during hydraulic fracturing, and this region is more closely represented by regional dual-porosity dual-permeability (DP-DK) models. In this study, we expand on the earlier work and address the impact of model uncertainty on designing an optimal HNP for a Duvernay shale example. In addition, a multi-well HNP design is exemplified to assess the impact of fracture communication during cyclic gas injection. A unified framework is used to conduct Bayesian history matching and perform HNP optimizations via the Markov chain Monte Carlo process. This is achieved by implementing new adaptive sampling designs and employing surrogate modelling techniques (random forests and Gaussian processes) to obtain the distributions for probabilistic HNP forecasts.
The results show that, for an equivalently calibrated DP-DK model, the efficiency of HNP for both lean and rich gas injection scenarios can be substantially higher than that predicted with the calibrated single-porosity model. In particular, lean gas injection, predicted to have low efficiency with single-porosity models, is predicted to deliver substantial incremental recovery in DP-DK models. The history matching and optimization results show that DP-DK models yield the highest recoveries during early cycles and reduced efficiency in later cycles, whereas with single-porosity models the efficiency is fairly constant across cycles. The high efficiency of the DP-DK models is related to enhanced swelling and mixing due to the pervasive communication (contact area) between the fracture network and the matrix. Moreover, the compositional simulations demonstrate that, for multi-well HNP scenarios, communication through hydraulic fractures is far more important than communication through the enhanced fracture region (EFR). This communication is shown to substantially reduce HNP performance, as inferred by comparing the probabilistic forecast simulations.
This study provides a novel workflow to accurately assess the impact of model uncertainty on HNP designs for unconventional shale and tight light-oil reservoirs.
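The Markov chain Monte Carlo step, run on a cheap surrogate instead of the full compositional simulator, can be sketched as a random-walk Metropolis sampler (a generic sketch; the log-posterior here is a placeholder standing in for the surrogate-based misfit, not the paper's model):

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis sampler. log_post evaluates the
    (surrogate) log-posterior of the uncertain parameters; the
    chain approximates the posterior distribution."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = [x.copy()]
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:          # accept/reject
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)
```

In the workflow above, each `log_post` call would hit a trained random forest or Gaussian process rather than a simulator run, which is what makes thousands of MCMC steps affordable.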
A Surrogate-Based Approach to Waterflood Optimisation under Uncertainty
Authors: P. Ogbeiwi, K. Stephen and A. Arinkoola

Summary: The Markowitz classical theory has been applied by many researchers to the robust optimisation of petroleum engineering operations. It involves computing the mean and standard deviation of specified reservoir performance measures and constructing an efficient frontier that quantifies the relationship between the optimised mean and standard deviation. However, the optimisation routine is computationally expensive, as numerous simulations are required to calculate the means and standard deviations. Moreover, to simplify the optimisation problem, many significant uncertainties are typically not considered, and previous studies have used a limited number of reservoir-model sample points of the uncertain variables to calculate the means and standard deviations. For example, if the uncertain parameter is uniformly distributed, three equiprobable values (low, median and high) are used to represent the uncertainty. This approach leads to erroneous estimates of the means and standard deviations because the actual distribution of the uncertainty is ignored.
In this study, we apply the Markowitz classical robust optimisation routine to a validated approximation model of the cumulative oil production of a case-study reservoir to optimise oil recovery after waterflooding. Using this approach, we reduce computational costs and, for the first time, consider up to four uncertain geological variables in reservoir optimisation under uncertainty. We show that at least 100 sample points (realisations) of the uncertain geological parameters are required to obtain accurate computations of the means (reward) and standard deviations (risk), allowing adequate sampling of the distributions of the uncertain parameters. We then construct an efficient frontier of the optimal solutions for various risk-aversion factors and compare the results to those obtained from a deterministic optimisation routine.
The results indicate that considering geological uncertainties while solving the optimisation problem yields more realistic optimal solutions than the deterministic case, because the resulting engineering control variables define a risk-quantified strategy for the waterflooding operation.
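The mean-standard-deviation trade-off can be sketched directly: evaluate the performance measure over many realisations, then for each risk-aversion factor pick the control that maximises mean minus lambda times standard deviation (a schematic of the Markowitz-style objective, with made-up numbers; the paper evaluates a proxy model rather than a table):

```python
import numpy as np

def efficient_frontier(npv, lambdas):
    """npv: (n_controls, n_realisations) array of a performance
    measure evaluated on the geological realisations. For each
    risk-aversion factor lam, select the control maximising
    mean - lam * std, and record the (lam, reward, risk) point."""
    mean = npv.mean(axis=1)                 # reward per control
    std = npv.std(axis=1)                   # risk per control
    frontier = []
    for lam in lambdas:
        i = int(np.argmax(mean - lam * std))
        frontier.append((lam, mean[i], std[i]))
    return frontier
```

Sweeping `lambdas` from risk-neutral (0) to strongly risk-averse traces the efficient frontier described in the abstract; the expensive part in practice is filling `npv`, which is why a surrogate of cumulative oil production is used.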
Statistical Model and Experimental Study of Oil Viscosity Reduction and Rock Wettability Alteration Induced by Nanoparticles
Authors: M. Bagheri Vanani, S.A. Tabatabaei-Nezhad and E. Khodapanah

Summary: Recently, nanoparticles (NPs) have been introduced as a useful solution to enhanced oil recovery (EOR) challenges. One such challenge is the precipitation of the asphaltene content of reservoir oils, which affects rock and fluid properties including oil viscosity and rock wettability. This paper first investigates the potential of silica NPs for oil viscosity reduction, which increases the mobility of the oleic phase and leads to EOR. Next, the effect of silica NPs on asphaltene precipitation on sandstone rock, which in turn affects rock wettability, is explored. Finally, a statistical modeling study is performed using the MINITAB software to investigate the effects of temperature, nanofluid concentration and oil composition on rock and oil properties. To this end, oil viscometry and contact angle measurements were conducted. The results showed that silica NPs inhibited or delayed asphaltene precipitation in the sandstone rock and, consequently, suppressed the ability of asphaltene to shift rock wettability toward a more oil-wet condition. In addition, the results demonstrated that dispersing silica NPs in the oleic phase could decrease oil viscosity by as much as 98% by cracking carbon-oxygen and carbon-carbon bonds in the hydrocarbon chains. From the statistical analysis, a multiple linear regression model was developed to predict the percentage of oil viscosity reduction by NPs. An R-squared value of 98.9% was obtained, and the p-values were smaller than 0.05, indicating the effective roles of the oil sample, silica NP concentration and temperature on the oil viscosity reduction; F-values of 152.86, 845.4 and 91.78 were obtained for these parameters, respectively. No interaction between any pair of parameters was observed for the viscosity reduction.
The results of the modeling section were found to be applicable to forecasting oilfield data. This study supports the EOR potential of NPs in oil and gas fields.
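The regression analysis can be reproduced in outline with ordinary least squares and an R-squared diagnostic (a generic sketch of multiple linear regression; the data below are synthetic, not the paper's measurements):

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares fit of y ~ b0 + X @ b.
    X: (n_samples, n_predictors), e.g. columns for NP concentration
    and temperature. Returns coefficients [b0, b...] and R-squared."""
    A = np.column_stack([np.ones(len(y)), X])        # add intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return beta, 1.0 - ss_res / ss_tot
```

An R-squared near 1, as reported above (98.9%), means the predictors explain almost all the variance in the measured viscosity reduction; significance of each predictor would be judged from the F- and p-values.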
How Does the Definition of the Objective Function Influence the Outcome of History Matching?
Authors: G. Eremyan, I. Matveev, G. Shishaev, V. Rukavishnikov and V. Demyanov

Summary: In this work we investigate how the form of the objective function influences the results and the speed of history matching (HM). The objective function definition depends on the production variables included in the objective and their weighting factors, and these choices may impact, for instance, the speed of assisted history matching. We demonstrate how the choice of a suitable objective function for HM should depend on the particular reservoir development problem at stake.
The work presents a comparative study of different objective function formulations used in history matching a synthetic reservoir example. An industry-standard stochastic optimization algorithm, evolution strategy, was chosen for benchmarking the impact of the objective function choice on history matching. The synthetic model represents a waterflooding case with 3 production wells, 3 injection wells, 7 years of simulated history and 8 uncertain reservoir parameters. The findings of the comparative study are not limited to the particular assisted HM algorithm applied.
Processing and analysis of the experimental results confirmed that the formulation of the objective function matters, since it can allow the algorithm to accelerate towards better HM solutions. The study demonstrates how different objective function formulations lead to different computational costs to reach a history matched solution; an optimal formulation for each particular problem should provide the fastest convergence.
The novelty of the work lies in demonstrating how different objective function formulations can help history match a reservoir model at minimized computational cost when solving different production problems. We show that the objective function should not be defined in the same way for every history matching exercise, but rather adjusted to the particular application, allowing the required history match to be reached at minimum computational cost. This gives more chance of history matching real, complex hydrocarbon field models within a reasonable time.
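An objective function of the kind studied here is typically a weighted sum of normalised squared misfits over the selected production variables. A minimal sketch (the variable names, weights and scalings are illustrative, not the paper's formulation):

```python
import numpy as np

def hm_objective(sim, obs, weights, sigma):
    """Weighted least-squares history-matching misfit.
    sim/obs: dicts mapping a production variable (e.g. oil rate,
    water cut, BHP) to time-series arrays; weights: per-variable
    weighting factors; sigma: per-variable measurement scales."""
    total = 0.0
    for key, w in weights.items():
        total += w * np.sum(((sim[key] - obs[key]) / sigma[key]) ** 2)
    return total
```

Changing which keys appear in `weights`, and their values, is exactly the design choice the study benchmarks: different choices reshape the misfit surface and hence the convergence speed of the optimizer.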
A Coupled Geomechanics and Flow Model for Enhanced Gas Recovery and CO2 Storage in Shale Reservoirs
Summary: A fully coupled multicomponent flow and geomechanics model, incorporating viscous flow, Knudsen diffusion, molecular diffusion, multi-component adsorption/desorption and geomechanical effects, is developed to study enhanced gas recovery and CO2 storage in fractured shale reservoirs. Specifically, an efficient hybrid model, consisting of a single-porosity model, a multiple-porosity model and the Embedded Discrete Fracture Model (EDFM), is adopted to represent multiscale fractures. In the flow equations, the Peng-Robinson EOS, the extended Langmuir isotherm and Fick's law are adopted; in the geomechanical portion, nonlinear proppant deformation is considered. A mixed space discretization (finite volume method for flow and stabilized XFEM for geomechanics) and a modified fixed-stress sequential implicit method are applied to solve the proposed model. The robustness of the method is demonstrated through several numerical examples, and a comprehensive analysis of the mechanisms of enhanced gas recovery and CO2 storage in fractured shale gas reservoirs is carried out, accounting for Knudsen diffusion, molecular diffusion, multi-component adsorption/desorption, nonlinear proppant deformation, and different injection strategies including a huff-n-puff scenario. Results show that CO2 injection is an effective approach for enhancing shale gas recovery, and the injected CO2 can be stored in free, adsorbed, and dissolved states. We also find that the stimulated reservoir volume, natural/induced fractures, hydraulic fractures, the various transport/storage mechanisms and the injection strategy have significant effects on enhanced gas recovery and CO2 storage in fractured shale reservoirs.
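The extended Langmuir isotherm used for multicomponent adsorption/desorption can be written down directly: each component's adsorbed amount is its pure-component Langmuir capacity scaled by its partial-pressure ratio, with all components competing in the denominator. A minimal sketch (parameter values in the test are illustrative, not shale-specific data):

```python
def extended_langmuir(VL, pL, p, y):
    """Extended Langmuir isotherm for a gas mixture.
    VL[i]: pure-component Langmuir volume; pL[i]: Langmuir pressure;
    p: total pressure; y[i]: gas-phase mole fraction. Returns the
    adsorbed amount q[i] = VL_i (p y_i / pL_i) / (1 + sum_j p y_j / pL_j)."""
    denom = 1.0 + sum(y[j] * p / pL[j] for j in range(len(y)))
    return [VL[i] * (y[i] * p / pL[i]) / denom for i in range(len(y))]
```

The competitive denominator is what lets injected CO2, which typically has a stronger affinity than methane, displace adsorbed CH4 while itself being stored in the adsorbed state.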
Deep-Learning-Based 3D Geological Parameterization and Flow Prediction for History Matching
Authors: M. Tang, Y. Liu and L. Durlofsky

Summary: In recent work we have developed deep-learning-based procedures for parameterizing complex 2D geomodels (Liu et al., 2019) and for predicting the detailed flow responses of such systems (Tang et al., 2019). The parameterization method, referred to as CNN-PCA, entails the use of principal component analysis in combination with convolutional neural networks, while the flow surrogate model involves the application of a recurrent residual U-Net procedure. The combination of these two capabilities enables efficient history matching, because the variables to be determined during data assimilation correspond to the relatively small set of parameters in the CNN-PCA description, and the requisite flow simulations can all be performed with the deep-learning-based surrogate model. The overall methodology has been successfully applied to 2D channelized systems (as shown in Tang et al., 2019).
In this work, we extend these capabilities to 3D systems. The 3D CNN-PCA procedure differs from the 2D method in that we no longer use a style loss term (as we did in 2D), but instead apply a supervised learning approach. With this method we train the network using PCA realizations along with their corresponding (desired) channelized representations. This treatment, in common with our 2D procedure, leads to faster training than some other approaches since the underlying PCA representation already captures aspects of the spatial statistics (covariance). The 3D recurrent R-U-Net consists of 3D convolutional and recurrent (convLSTM) neural networks, which are designed to capture the spatial and temporal information associated with dynamic systems. This approach shows advantages over autoregressive procedures. The recurrent R-U-Net is trained on O(3000) randomly generated 3D geomodels and their corresponding (simulated) dynamic 3D state maps; e.g., saturation and pressure at a set of time steps.
Results are first presented for each method individually. Specifically, we validate the geological parameterization procedure by demonstrating that the prior flow statistics, for a 3D channelized system, generated using CNN-PCA agree closely with those from (reference) geostatistical models. The recurrent R-U-Net surrogate flow model is validated through detailed comparisons of oil-water flow results for particular (new) realizations and through error statistics for an ensemble of new models. Finally, a 3D history matching example, in which the two procedures are used in combination, will be presented.
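The PCA backbone of CNN-PCA can be sketched in a few lines: build a truncated basis from an ensemble of flattened geomodels, then map a small vector of low-dimensional variables to a new realisation (a PCA-only sketch; the CNN post-processing that restores channelized features, and the 3D supervised training, are omitted):

```python
import numpy as np

def pca_parameterization(models, energy=0.95):
    """models: (n_realisations, n_cells) flattened geomodels.
    Returns the ensemble mean, a truncated basis U and singular
    values s retaining the requested fraction of variance."""
    mean = models.mean(axis=0)
    X = (models - mean).T / np.sqrt(models.shape[0] - 1)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(frac, energy)) + 1     # components kept
    return mean, U[:, :r], s[:r]

def pca_realization(mean, U, s, xi):
    """Generate a realisation from low-dimensional variables xi
    (standard-normal xi reproduce the ensemble covariance)."""
    return mean + U @ (s * xi)
```

During history matching, the data-assimilation variables are just `xi`, typically tens of numbers instead of millions of cell values, which is what makes the inversion tractable.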
A Derivative-Free Trust-Region Algorithm for Well Control Optimization
Authors: T. Silva, M. Bellout, C. Giuliani, E. Camponogara and A. Pavlov

Summary: A Derivative-Free Trust-Region (DFTR) algorithm is proposed to solve the well control optimization problem. Derivative-Free (DF) methods are often a practical alternative because gradients may not be available and/or may be unreliable due to cost function discontinuities, e.g., caused by enforcement of simulation-based constraints. However, the effectiveness of DF methods for realistic cases depends heavily on an efficient sampling strategy, since cost function evaluations often involve time-consuming reservoir simulations. The DFTR algorithm samples the cost function space around an incumbent solution and builds a quadratic approximation model valid within a bounded region (the trust region). Minimization of the quadratic model guides the search for descent. Crucially, because of the curvature information provided by the model-based routine, the trust-region approach conducts a more efficient search than other sampling methods, e.g., direct-search approaches.
DFTR is implemented within FieldOpt, an open-source framework for field development optimization that provides flexibility in problem parameterization and parallelization. DFTR is tested on the Olympus synthetic case against two other types of methods commonly applied to production optimization: a direct-search method (Asynchronous Parallel Pattern Search) and a population-based method (Particle Swarm Optimization). Current results show that DFTR has promising convergence properties; in particular, it reaches fairly good solutions in only a few iterations. This feature can be particularly attractive for practitioners who seek to improve production strategies while using full-fledged models. Future work will focus on wider application of the algorithm to more complex field development problems, such as joint problems and ICD optimization, and on extending the algorithm to handle multiple geological realizations and output constraints.
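The inner step of such a method, minimising the quadratic model inside the trust region, can be sketched as a Newton step clipped to the region boundary (a simplified stand-in; real DFTR implementations solve the subproblem with a dogleg or More-Sorensen routine, and build g and H by interpolation of sampled cost values):

```python
import numpy as np

def tr_step(g, H, radius):
    """Approximately minimise the quadratic model
    m(p) = g.p + 0.5 p.H.p subject to ||p|| <= radius.
    Takes the unconstrained Newton step when it fits,
    otherwise scales it back to the trust-region boundary."""
    try:
        p = np.linalg.solve(H, -g)        # Newton step on the model
    except np.linalg.LinAlgError:
        p = -g                            # fall back to steepest descent
    n = np.linalg.norm(p)
    return p if n <= radius else p * (radius / n)
```

The outer loop then evaluates the true (simulated) cost at the new point, compares actual versus predicted reduction, and grows or shrinks `radius` accordingly; the curvature in `H` is what gives the method its edge over pattern search.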
Optimizing Sealing of CO2 Leakage Paths with Microbially Induced Calcite Precipitation Under Uncertainty
Authors: S. Tveit, P. Pettersson and D. Landa Marban

Summary: In large-scale CO₂ sequestration, critical pressure build-up can occur due to the high injection rates, which in the worst case can open leakage paths for the CO₂ through caprock fractures and/or reactivated faults. A novel leakage mitigation technology is microbially induced calcite precipitation (MICP), where microorganisms are injected to accelerate production of the sealing agent, calcite, from calcium and urea. The MICP technology has been validated on multiple scales, from laboratory to meter-scale experiments. On the field scale, the situation is more challenging, since leakage paths may lie tens of meters from the injection well, and the subsurface parameters controlling the flow, chemical reactions and microbial processes can be uncertain.
In this work, we consider the optimization problem of maximising the sealing of leakage paths in the presence of uncertainty. The control variables can be, e.g., injection rates and periods, or concentrations of chemical and biological species, while the uncertain parameters can be, e.g., permeability and porosity. To quantify the effect of parameter uncertainty on the control variables, an accelerated Monte Carlo (aMC) method is used, which aims to overcome the slow convergence of the standard MC method. Even with aMC methods, a significant number of samples of the objective function are needed; that is, multiple runs of the simulator are required.
The MICP process at field scale is described by coupled advection-diffusion-reaction, microbial, and rock-altering equations that carry a high computational cost to solve. To alleviate this cost, we generate a surrogate (or proxy) model of the original objective function that can be evaluated at negligible cost. The surrogate model is based on the sparse hierarchical multi-linear interpolation (SI) method, where the objective function is approximated to a desired accuracy using significantly fewer function evaluations than traditional interpolation methods require. Hence, the computational cost of generating and sampling the surrogate model is typically lower than that of sampling the original objective function. The novel SI-aMC method is applied to different test cases, demonstrating the computational efficiency and accuracy of uncertainty estimates for field-scale MICP optimization problems.
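Once a cheap surrogate is available, the Monte Carlo estimate and its standard error reduce to a few lines (a plain, unaccelerated MC sketch for orientation; the surrogate, the sampler and the test integrand below are placeholders, not the SI-aMC method itself):

```python
import numpy as np

def mc_estimate(surrogate, sampler, n, seed=0):
    """Plain Monte Carlo estimate of E[f(X)].
    surrogate: cheap approximation f of the objective;
    sampler: draws one realisation of the uncertain parameters
    from a numpy Generator. Returns (mean, standard error)."""
    rng = np.random.default_rng(seed)
    vals = np.array([surrogate(sampler(rng)) for _ in range(n)])
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n)
```

The standard error shrinks only as 1/sqrt(n), which is the slow convergence the aMC acceleration targets; the surrogate attacks the other factor, the cost per sample.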
A Mathematical Model for Scaling and Wettability Alteration in ASP Flooding
Summary: Daqing Oilfield has carried out large-scale application of ASP (Alkali-Surfactant-Polymer) flooding to further increase oil recovery; annual oil production from ASP flooding has exceeded 4 million tons since 2016. Production data and theoretical studies have demonstrated that the chemicals in the ASP system react with the mineral components of the reservoir rock, resulting in scaling and wettability alteration that influence the recovery processes of ASP flooding. These chemical reaction processes are complicated, making it difficult for numerical simulation to capture the chemical mechanisms of ASP flooding accurately. Developing a mathematical model that simulates the mechanisms of scaling and wettability alteration is a major challenge.
We conducted a series of laboratory experiments to study the reactions between the chemicals in the ASP system and the mineral components of the reservoir rock, showing that the ASP chemicals corrode the reservoir rock, generating scaling materials and causing scaling reactions. Based on a comprehensive analysis of the factors affecting scaling, a kinetic model for the reaction between the ASP chemicals and the rock minerals was constructed. We also measured relative permeability curves before and after wettability alteration to model the mechanism of wettability alteration. All these mathematical models were incorporated into a chemical flooding simulator.
We conducted history matching simulations for several ASP flooding projects in Daqing Oilfield. The results from the simulator with the scaling and wettability alteration models agree with the observed data much better than those from the simulator without them, showing that scaling and wettability alteration play an important role in ASP flooding and that ASP flooding simulation should take these mechanisms into account.
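A kinetic model of this general kind can be sketched as a first-order rate law with an Arrhenius temperature dependence, integrated in time (a generic sketch only; the first-order form, the rate constants and the explicit Euler integration are illustrative, not Daqing's fitted kinetic model):

```python
import math

def scale_precipitated(k0, Ea, T, conc, dt, steps):
    """Integrate dc/dt = -k(T) c with k = k0 * exp(-Ea / (R T)),
    a simple first-order Arrhenius-type scaling kinetics, by
    explicit Euler. Returns the remaining reactant concentration."""
    R = 8.314                              # J/(mol K)
    k = k0 * math.exp(-Ea / (R * T))       # temperature-dependent rate
    c = conc
    for _ in range(steps):
        c -= k * c * dt                    # explicit Euler step
    return c
```

In a simulator, a rate law like this would be evaluated per grid cell and coupled to porosity/permeability reduction as scale accumulates, which is how the scaling model feeds back into the flow solution.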
Machine-Learning Informed Prediction of Linear Solver Tolerance for Non-Linear Solution Methods in Numerical Simulation
Authors: E. Oladokun, S. Sheth, T. Jönsthövel and K. Neylon

Summary: Numerical simulators model the evolution of state variables over time and space. The governing equations are often highly non-linear and exhibit significant complexity. Due to the lack of closed-form analytical solutions, non-linear fixed-point iterative methods, most commonly the Newton method, are required to solve these problems. The key component is constructing the Newton update by solving the Jacobian system, a large sparse system of linear equations.
Most state-of-the-art linear solvers for large sparse systems are based on Krylov methods, e.g. GMRES [Saad and Schultz, 1986]. Solving the linear system is often a significant part of the overall computational effort, so any efficiency improvement can substantially speed up the simulation. A major challenge with iterative linear solvers is determining the stopping criterion. A tight linear convergence tolerance (η) ensures a good linear solution but is more computationally expensive and may not improve the quality of the corresponding non-linear solution update – a phenomenon called oversolving of the Newton equation. Eisenstat and Walker [1994] proposed an approach that dynamically selects η based on the non-linear state of the system. This method is very successful in reducing the number of linear iterations and thus reducing oversolving, but can come at the expense of more non-linear iterations, such that the overall effect is detrimental to performance.
In this work, we propose a new algorithm to predict η such that the total number of linear and non-linear iterations is minimized, leading to a more robust performance improvement and reduced simulation time. We derive an estimate for η using non-linear system state variables, such as residual norms and the Newton iteration number, coupled with linear system information, such as an approximation of the condition number based on Ritz values. All the information used is readily available in a standard reservoir simulator. We augment this estimate with a selection strategy based on machine learning analysis and algorithms. Furthermore, we compare with results using an alternative heuristic developed from insights gained through the machine learning analysis.
We apply our methods to a variety of problems in reservoir simulation, ranging from heterogeneous 2D two-phase models to 3D thermal compositional models. We observe a 30–50% reduction in linear iterations without increasing the non-linear iteration count, which means faster simulations without compromising accuracy.
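The Eisenstat–Walker forcing term referenced above can be sketched as follows. This follows the published "Choice 2" formula with its standard safeguard; it is an illustration of the baseline being improved upon, not the authors' machine-learning-informed algorithm:

```python
def eisenstat_walker_eta(res_norm, prev_res_norm, prev_eta,
                         gamma=0.9, alpha=2.0, eta_max=0.9):
    """Eisenstat-Walker 'Choice 2' forcing term:
    eta_k = gamma * (||F_k|| / ||F_{k-1}||)**alpha,
    safeguarded so eta cannot drop too abruptly between Newton iterations."""
    eta = gamma * (res_norm / prev_res_norm) ** alpha
    safeguard = gamma * prev_eta ** alpha
    if safeguard > 0.1:   # apply safeguard only when the previous eta was large
        eta = max(eta, safeguard)
    return min(eta, eta_max)
```

The linear solver would then iterate until the relative linear residual falls below the returned η, rather than a fixed tight tolerance.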
-
Glimm and Finite Volume Schemes for Polymer Flooding Model With and Without Inaccessible Pore Volume Law
Authors: G. Dongmo, B. Braconnier, C. Preux, Q. Tran and C. Berthon
Summary: We investigate the numerical simulation of the polymer flooding model without an IPV (inaccessible pore volume) law [1] and with the IPV percolation law [2]. The two mathematical models (with and without the percolation law) are weakly hyperbolic. They include a resonance region where strict hyperbolicity is lost.
Providing exact solution to Riemann problems and devising accurate numerical schemes is a challenging task.
Without an IPV law, the mathematical model coincides with the Keyfitz-Kranzer model [3]. For all initial data, a unique solution to the Riemann problem can be defined thanks to Isaacson and Temple's entropy condition [1], which is imposed in addition to Lax's condition. Our theoretical contribution is to prove that the two models (with and without the IPV percolation law) are equivalent, up to a change of variables, for both smooth and discontinuous solutions. We are thus able to provide a unique solution of the Riemann problem for both models.
Regarding our numerical contributions, we propose second-order finite volume schemes based on the Godunov scheme and a new Suliciu-type [4] relaxation scheme which can be applied to any IPV law. For the two mathematical models, we perform a mesh convergence study, compute the errors of the approximate solutions relative to the exact solutions, and determine the effective order of the schemes. Because of the system's resonance and non-linearity, the so-called first- and second-order schemes have effective orders of about 0.24 and 0.33 at contact discontinuities, both about 0.5 at shocks, and 0.66 and 0.86 in rarefaction waves.
Because of this lack of accuracy, we implement the Glimm scheme for the two mathematical models. The obtained results are in good agreement with the exact solutions: shocks and contact discontinuities are resolved with at most three points.
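The effective orders reported above are conventionally measured by comparing errors on successively refined meshes; a minimal sketch (the function name is an assumption):

```python
import math

def effective_order(err_coarse, err_fine, refinement=2.0):
    """Observed convergence order p from errors on two meshes whose spacing
    differs by `refinement`: err ~ C*h**p implies
    p = log(err_coarse/err_fine) / log(refinement)."""
    return math.log(err_coarse / err_fine) / math.log(refinement)
```

For example, halving the mesh size while the error drops by a factor of 4 gives an effective order of 2; the sub-unit orders quoted in the abstract correspond to much weaker error reduction near resonant waves.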
[1] E. L. Isaacson and J. B. Temple (1986), “Analysis of a singular hyperbolic system of conservation laws,” J. Diff. Eqs., vol. 65, no. 2, pp. 250–268.
[2] G. A. Bartelds, J. Bruining and J. Molenaar (1997), “The modeling of velocity enhancement in polymer flooding,” Transp. Porous Media, vol. 26, no. 1, pp. 75–88.
[3] B. L. Keyfitz and H. C. Kranzer (1980), “A system of non-strictly hyperbolic conservation laws arising in elasticity theory,” Archive for Rational Mechanics and Analysis, vol 72, no 3, pp 219–241.
[4] I. Suliciu (1988), “On the thermodynamics of fluids with relaxation and phase transitions. Fluids with relaxation,” Int. J. Engin. Sci., vol. 36, pp. 921–947.
-
A Novel Approach to Multilevel Data Assimilation
Authors: M. Nezhadali, T. Bhakta, K. Fossum and T. Mannseth
Summary: Interest in multi-fidelity modeling within computational statistics has grown in recent years. Multilevel ensemble-based data assimilation (MLDA), which takes advantage of multi-fidelity modeling, is a novel approach to reservoir history matching. It has been proposed to overcome the sampling errors encountered in conventional ensemble-based data assimilation techniques. Ensemble-based methods have been successful in history matching of large cases, but limited computational resources normally confine the ensemble size to about 100, which can lead to sampling error. To address this problem, localization has been proposed; it handles non-local spurious correlations but does not allow for true non-local correlations. The basic concept of MLDA revolves around allocating resources for the computation of models on a hierarchy of accuracy and computational cost. Using models with a lower computational cost enables a significant increase in ensemble size, providing an opportunity to trade an appropriate amount of computational accuracy for better statistical accuracy. In this research, the hierarchy of computational cost is established by varying the spatial resolution of the simulation models, and a new scheme, called Simultaneous Spatial Multilevel Data Assimilation, is investigated on a reservoir model. The method is designed to assimilate inverted seismic data in a multilevel manner. Accordingly, a set of different spatial resolutions of the model is created, and an ensemble of models with their corresponding inverted seismic data is considered for each resolution. The simulations are run on all levels, and an independent update is performed on each level using the Ensemble Smoother (ES).
The reduced computational cost at coarser resolutions entails a multilevel error, which can be quantified and accounted for by comparison with simulations at the finest level. Finally, a cumulative statistical analysis over all ensembles is performed to assess the data assimilation performance. Results from two variants of the new scheme are evaluated and compared to ES with localization and a standard ensemble size.
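The per-level Ensemble Smoother update mentioned above can be sketched as follows. This is a standard ES with perturbed observations; array shapes and names are assumptions, and localization and multilevel error treatment are omitted:

```python
import numpy as np

def ensemble_smoother_update(X, D, d_obs, C_dd_err):
    """Plain Ensemble Smoother (ES) update, applied independently on one level.
    X: (n_param, n_ens) parameter ensemble; D: (n_data, n_ens) predicted data;
    d_obs: (n_data,) observations; C_dd_err: (n_data, n_data) obs-error covariance."""
    n = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)          # parameter anomalies
    Da = D - D.mean(axis=1, keepdims=True)          # predicted-data anomalies
    C_md = Xa @ Da.T / (n - 1)                      # param/data cross-covariance
    C_dd = Da @ Da.T / (n - 1)                      # predicted-data covariance
    rng = np.random.default_rng(0)                  # fixed seed for repeatability
    perturbed = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), C_dd_err, size=n).T   # perturbed observations
    K = C_md @ np.linalg.inv(C_dd + C_dd_err)       # Kalman-type gain
    return X + K @ (perturbed - D)
```

Running this update with a larger, cheaper ensemble on coarse levels and a small ensemble on the finest level is the resource trade-off the abstract describes.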
-
Bayesian Inference of Covariance Parameters in Spectral Approach to Geostatistical Simulation
Authors: N. Ismagilov, I. Azangulov, V. Borovitskiy, M. Lifshits and P. Mostowsky
Summary: The spectral simulation approach (described in Ismagilov and Lifshits (ECMOR XVI)) is a relatively new geostatistical method for stochastic reservoir property simulation. It is based on Fourier analysis of well log data and simulation of the Fourier expansion coefficients in the interwell space. The key advantage of this method is its ability to automatically recognize and reproduce vertical non-stationarities observed in well data (Ismagilov et al. (ATCE 2019)). This comes at the price of having many parameters: while usual geostatistical approaches like kriging or sequential Gaussian simulation require estimating one covariance function or variogram (in practice, the estimated parameters are the variogram model type and the ranges in three directions), the spectral approach requires estimating many of them (typically 100–200 covariance functions). Automatic covariance estimation therefore becomes crucial in this setting.
While assuming parametric models for the aforementioned covariance functions and estimating their parameters by maximizing the likelihood works reasonably well in practice, this strategy has some drawbacks. First, when the likelihood surface turns out to be multi-modal or flat, point estimation of the parameters may lead to problems such as incorrect uncertainty estimation or even choosing a wrong model. Second, maximum likelihood estimation usually does not provide a way to incorporate prior knowledge about the parameters – a typical example is constraining the resulting variogram range to geologically reasonable limits.
We argue that Bayesian inference of the parameters is a way to overcome both challenges. Treating covariance parameters as random variables avoids the limitations of deterministic point estimation, while introducing prior distributions for the parameters is the most natural way of incorporating prior knowledge.
We develop and implement in software a version of the spectral approach in which covariance parameters are treated in a Bayesian way. We show through computations on practical examples that Bayesian inference makes it possible to build better models in cases with complex likelihood surfaces and to account for prior knowledge about covariance parameters.
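A minimal sketch of the Bayesian treatment for a single covariance function, assuming an exponential covariance with a uniform prior on its range (encoding geologically reasonable limits) and a random-walk Metropolis sampler. The paper's actual parameterization and inference scheme may differ:

```python
import numpy as np

def log_posterior(r, z, h, lo, hi):
    """Log posterior of an exponential-covariance range r given data z at
    positions h, with a uniform prior on [lo, hi]."""
    if not (lo < r < hi):
        return -np.inf  # prior excludes geologically unreasonable ranges
    C = np.exp(-np.abs(h[:, None] - h[None, :]) / r) + 1e-8 * np.eye(len(h))
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + z @ np.linalg.solve(C, z))

def metropolis(z, h, lo=0.1, hi=10.0, n_iter=2000, step=0.3, seed=0):
    """Random-walk Metropolis sampling of the range parameter."""
    rng = np.random.default_rng(seed)
    x = 0.5 * (lo + hi)
    lp = log_posterior(x, z, h, lo, hi)
    samples = []
    for _ in range(n_iter):
        xp = x + step * rng.standard_normal()
        lpp = log_posterior(xp, z, h, lo, hi)
        if np.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        samples.append(x)
    return np.array(samples)
```

With 100–200 covariance functions, such a sampler (or a cheaper variational scheme) would be run per function, which is why automation matters.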
-
Adaptive Nonlinear Solver for a Discrete Fracture Model in Operator-Based Linearization Framework
Authors: K. Mansour Pour and D. Voskov
Summary: Simulation of compositional problems in hydrocarbon reservoirs with complex heterogeneous structure requires stable numerical methods that rely on an implicit treatment of the flux term in the conservation equations. The discrete approximation of the convection term in the governing equations is highly non-linear due to the complex properties complemented with a multiphase flash solution. Consequently, robust and efficient techniques are needed to solve the resulting non-linear system of algebraic equations. The solution of the compositional problem often requires propagating the displacement front across multiple control volumes within a single simulation timestep. Coping with this issue is particularly challenging in complex subsurface formations such as fractured reservoirs. In this study, we present a robust non-linear solver based on a generalization of the trust-region technique to compositional multiphase flows. The approach embeds the recently introduced Operator-Based Linearization (OBL) technique and is grounded in the analysis of multi-dimensional tables related to the parameterized convection operators. We segment the parameter space of the non-linear problem into a set of trust regions where the convection operators maintain second-order behaviour (i.e., they remain positive or negative definite). We approximate these trust regions in the solution process by detecting the boundaries of convex regions via analysis of the directional derivative. This analysis is performed adaptively while tracking the non-linear update trajectory in the parameter space. The proposed non-linear solver locally constrains the update of the overall compositions across the boundaries of the convex regions. In addition, we enhance the performance of the non-linear solver by exploring diverse preconditioning strategies for compositional problems.
The proposed nonlinear solution strategies have been validated for both miscible and immiscible gas injection problems of practical interest.
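The local constraint on Newton updates at convex-region boundaries can be illustrated for a single scalar unknown. This is a hedged sketch of the chopping idea only; the detection of boundaries via directional derivatives and the OBL table machinery are omitted, and all names are assumptions:

```python
def chopped_update(z, dz, boundaries):
    """Limit a Newton update of a parameterized unknown z so it cannot cross
    a trust-region (convexity) boundary in one step; if the full step would
    cross one, land on the nearest crossed boundary instead."""
    crossed = [b for b in boundaries
               if (z < b <= z + dz) or (z + dz <= b < z)]
    if not crossed:
        return z + dz                         # full Newton step is safe
    return min(crossed, key=lambda b: abs(b - z))  # stop at nearest boundary
```

Restricting each update to one convex region at a time is what keeps the Newton iteration from overshooting across inflection points of the convection operators.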
-
Accounting for Model Discrepancy in Uncertainty Analysis by Combining Numerical Simulation and Bayesian Emulation Techniques
Authors: H. Nandi Formentin, I. Vernon, M. Goldstein, C. Caiado, G. Avansi and D. Schiozer
Summary: Model discrepancy specifies the unavoidable differences between a physical system and its corresponding computer model. It originates from incomplete information, simplifications and lack of knowledge about the physical state. Misevaluating model discrepancy exposes decision-makers to overconfident and biased forecasts – a risky situation. We describe a methodology to account for one type of model discrepancy in Bayesian History Matching for Uncertainty Reduction (BHMUR), an approach that combines reservoir simulation and emulation techniques to find all reservoir scenarios consistent with the observed data and the uncertainties in the problem.
Our methodology is an alternative and more rigorous tool for accounting for the model discrepancy caused by errors in target data while performing uncertainty analysis. Target data used in the historical period contain observational errors that propagate through the simulator, causing one type of model discrepancy. We follow a systematic procedure for uncertainty reduction previously presented by the authors, expanding the step dedicated to model discrepancy. Our methodology: (1) obtains a training set by evaluating model discrepancy in multiple scenarios of the search space, an expensive simulation-based process; (2) characterises the model discrepancy across the entire search space via Bayesian emulators; and (3) integrates the model discrepancy into the BHMUR via bias and covariance structures.
The methodology is demonstrated in a case study: 27 valid emulators for model discrepancy were constructed and integrated into the implausibility analysis and uncertainty reduction process. Two perspectives showed the impact of this type of model discrepancy. Firstly, neglecting model discrepancy resulted in the entire search space being implausible – an indicator that the problem characterisation and uncertainties should be reviewed; by contrast, when model discrepancy is considered, the non-implausible region comprises 8% of the search space. Secondly, we demonstrated uncertainty reduction in the historical and forecasting periods. A key finding is that the error in target data results in a substantial model discrepancy over many other simulation outputs, which is both time and location dependent.
We advance the applicability of BHMUR by proposing a statistically consistent tool to account for one type of model discrepancy in the uncertainty quantification process. We showed that errors in target data cause model discrepancy with a complex structure. Appropriate consideration of model discrepancy is vital to (a) identify the whole class of solutions consistent with the historical data and the uncertainties in the problem; (b) appropriately represent the physical system; and (c) avoid making decisions based on overconfident and biased information, while enabling more reliable production forecasts.
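The way model discrepancy enters the implausibility analysis can be sketched with the standard univariate implausibility measure, where the discrepancy variance is added to the denominator (names and the cutoff value here are illustrative, not taken from the paper):

```python
import math

def implausibility(simulated, observed, var_obs, var_discrepancy, var_emulator=0.0):
    """Standardized distance between a simulated/emulated output and an
    observation; model discrepancy and emulator variances inflate the
    denominator, so neglecting them makes scenarios look too implausible."""
    return abs(simulated - observed) / math.sqrt(
        var_obs + var_discrepancy + var_emulator)

def non_implausible(scenario_outputs, observed, var_obs, var_disc, cutoff=3.0):
    """Keep scenarios whose maximum implausibility over all outputs is below
    the cutoff (3 is the conventional Pukelsheim-style choice)."""
    return [s for s in scenario_outputs
            if max(implausibility(y, d, vo, vd)
                   for y, d, vo, vd in zip(s, observed, var_obs, var_disc)) < cutoff]
```

This makes the abstract's first finding concrete: with the discrepancy variance set to zero the denominator shrinks, implausibilities grow, and the whole search space can be rejected.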
-
The Express Method of Well-Control Optimization for the Associated Gas Recycling Process
Authors: V. Babin, N. Glavnov and E. Shel
Summary: This work describes a semi-analytical technique for maximizing additional oil production and economic efficiency in the recycling of associated petroleum gas by selecting the optimal sequence of injection wells and the distribution of the injected gas volume between them. The technique is applicable to fields with an external gas supply from other layers of the field or from other fields. The volume of external supply is considered limited, which makes the most effective design of the recycling strategy a requirement and thereby increases the value of solving the optimization problem.
The mathematical model of recycling is derived using the main characteristics of the recycling process for each injection pattern, which can be obtained, for example, from reservoir simulation. It is additionally assumed that the interaction between patterns can be neglected. The problem of maximizing the additional oil production then reduces to an application of the method of Lagrange multipliers, which determines the volume of gas injected into each pattern. In the next step, discounted additional cumulative oil production, as the main driver of economic efficiency, is maximized. This problem is reformulated as a search for the extremum of a functional and solved by dynamic programming.
As a result, for maximization of oil production, only the cumulative volumes of external supply injected into each pattern determine the solution. By contrast, when optimizing economic efficiency, the optimal sequence of patterns and injected volumes strongly depends on the dynamics of the external supply. Additionally, the optimal strategy implies that patterns should be involved in a sequential injection mode. Using the optimal strategy increases cumulative oil production by 20–30% compared with the expert one. The method does not require significant computational cost compared with black-box optimization based on a reservoir model, which allows the technique to be used for fast optimization of operational control.
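The Lagrange-multiplier allocation step can be sketched for a hypothetical concave per-pattern response q_i(v) = a_i(1 - exp(-b_i v)). This response form and all names are assumptions; the authors derive their per-pattern characteristics from reservoir simulation:

```python
import math

def allocate_gas(a, b, total_gas, tol=1e-10):
    """Water-filling style allocation: maximize sum_i a_i*(1 - exp(-b_i*v_i))
    subject to sum_i v_i = total_gas and v_i >= 0. The Lagrange condition
    a_i*b_i*exp(-b_i*v_i) = lam equalizes marginal oil gain per unit of gas;
    lam is found by bisection on the gas balance."""
    def volumes(lam):
        # invert the Lagrange condition for each pattern, clipping at v_i = 0
        return [max(0.0, math.log(a_i * b_i / lam) / b_i)
                for a_i, b_i in zip(a, b)]
    lo, hi = 1e-12, max(a_i * b_i for a_i, b_i in zip(a, b))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(volumes(mid)) > total_gas:
            lo = mid   # injecting too much -> raise the multiplier
        else:
            hi = mid
    return volumes(0.5 * (lo + hi))
```

For identical patterns the allocation splits the external supply evenly, as expected; heterogeneous a_i, b_i shift gas toward the patterns with the highest marginal response.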
The topic of gas injection is widely represented in the literature. However, the optimization is commonly based on expert estimates or on multivariate calculations using a full-scale compositional reservoir model. The method presented here requires only a relatively small set of reservoir model calculations; the basic effect is achieved by analytical and semi-analytical operations. Moreover, the existing literature does not pay enough attention to cases with limited external gas supply, where the recycling strategy has a key impact on profitability.
-