ECMOR XVII
- Conference date: September 14-17, 2020
- Location: Online Event
- Published: 14 September 2020
1 - 50 of 145 results
Numerical Effects of Fluid Flow Modelling in Surfactant Chemical Flooding
Authors: O. Akinyele and K. Stephen
Summary: Numerical simulation of surfactant flooding using conventional reservoir simulation models can lead to unreliable forecasts and poor decisions due to the appearance of numerical effects. The simulations solve systems of nonlinear partial differential equations describing the physical behavior of surfactant flooding by combining multiphase flow in porous media with surfactant transport. The simulations approximate the solutions by discretization of time and space, which can lead to spurious oscillations, instabilities or deviations in the model outcome.
In this work, the black-oil decoupled implicit method was used to carry out simulations at various altered conditions (with dimensions at the reservoir scale) in order to investigate the model behavior in comparison with the analytical solution obtained from fractional flow theory. Various conditions were examined, including changes to cell size and time step as well as the properties of the surfactant and how they affect miscibility and flow. The main aim of this study was to identify whether oscillations occur and, if so, why and when.
The results show that spurious oscillations occur at the surfactant-flood water bank and are removed once the adsorption rate is increased by 25% from its initial value of 0.0002 kg/kg, while the oscillation becomes negligible after grid refinement to a 5000-grid-block set-up along the x-axis. The results also show that aqueous-phase velocity and pressure drop contribute significantly to the appearance of oscillations. The oscillation was not entirely removed by imposing a sudden transition in the relative permeabilities around the surfactant front. The oscillations induced earlier solution miscibility, which caused a misleading prediction of improved oil recovery in comparison with the solution without numerical effects. Thus, it is important to improve existing models and use appropriate guidelines to prevent oscillations and remove errors.
A Multi-Timestep Domain Decomposition Method Applied to Polymer Flooding
Authors: R.S. Tavares, R.B.D. Santos, S.A.D. Lima, A. Dos Santos and J.H.D.S. Mariano
Summary: Waterflooding has been commonly used for secondary oil recovery. However, it is well known that the efficiency of oil recovery decreases when the mobility ratio is large or the reservoir is highly heterogeneous. In these scenarios, the polymer flooding technique arises as an efficient alternative to increase the production curves. The injection of a high-viscosity polymer solution reduces the mobility ratio, improving the displacement and sweep efficiency. On the other hand, mechanical retention and adsorption phenomena give rise to formation damage close to the injection wells, resulting in injectivity loss. In this context, our main goal is to construct a new computational model, based on domain decomposition methods, capable of coupling the phenomena at different spatial and time scales during polymer flooding. From the mathematical point of view, we consider the polymer solution to be a pseudo-plastic fluid, with the hydrodynamic model given by a nonlinear Darcy's law in which the injected fluid viscosity depends on the shear rate as suggested by Carreau's law. Furthermore, the polymer movement is quantified using a convection-diffusion-reaction transport equation whose nonlinear reactive part is due to mechanical retention and adsorption. The studied model takes formation damage into account by considering that porosity and permeability depend on the concentrations of polymer that is mechanically retained or adsorbed. From the computational point of view, the nonlinear mathematical model is discretized using the finite element method together with a staggered algorithm and the Newton-Raphson method. The kinetic law for mechanical retention is post-processed by the Runge-Kutta method. It is important to highlight that polymer may accumulate in the neighborhood of the injection well on a fast time scale, causing injectivity loss.
Contrary to the rest of the reservoir, where large time steps and a coarse spatial mesh can be used, in the neighborhood of the injection wells small time steps and a fine spatial mesh are sometimes required. In this context, we propose the application of domain decomposition techniques to couple the near-well and reservoir domains with accuracy and lower computational cost. To this end, we apply a multi-time-step domain decomposition method to couple near-well retention and adsorption phenomena with polymer transport in the reservoir. Finally, we present numerical simulations to show the efficiency of the domain decomposition as well as to quantify injectivity during polymer flooding.
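The Carreau viscosity law invoked above has a simple closed form: viscosity falls from the zero-shear plateau to the infinite-shear plateau as shear rate grows. A minimal sketch in Python (parameter names are ours, not the authors'):

```python
def carreau_viscosity(shear_rate, mu0, mu_inf, lam, n):
    """Carreau model: mu(g) = mu_inf + (mu0 - mu_inf) * (1 + (lam*g)^2)^((n-1)/2).
    mu0: zero-shear viscosity, mu_inf: infinite-shear viscosity,
    lam: relaxation time, n < 1: shear-thinning (pseudo-plastic) exponent."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)
```

At zero shear rate the expression returns `mu0` exactly, and it decreases monotonically toward `mu_inf` for `n < 1`, which is the pseudo-plastic behavior the abstract assumes for the injected polymer solution.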
Modeling Compressible Gas Flow in Anisotropic Reservoirs Using a Nonlinear Finite Volume Method
Authors: W. Zhang and M. Al Kobaisi
Summary: A nonlinear two-point flux approximation (NTPFA) finite volume method is applied to the modeling of compressible gas flow in anisotropic reservoirs. The gas compressibility factor and gas density are calculated from the Peng-Robinson equation of state. The governing equations are discretized by NTPFA in space and the first-order backward Euler method in time. Newton-Raphson iteration is used as the nonlinear solver in each time step. The NTPFA method employs harmonic averaging points as auxiliary points during the construction of one-sided fluxes. A unique nonlinear flux approximation is obtained by a convex combination of the one-sided fluxes. Since a Newton-Raphson nonlinear solver is used, NTPFA has a denser discretized coefficient matrix than the widely used two-point flux approximation (TPFA) method on grids that are not K-orthogonal. However, its coefficient matrix is still much sparser than that of the classical multi-point flux approximation O (MPFA-O) method. Results of numerical examples demonstrate that the pressure profile and gas production rate of NTPFA are in close agreement with those of MPFA-O in most cases, while TPFA is inconsistent when the grid is not K-orthogonal. The MPFA-O method is well known to suffer from monotonicity issues in highly anisotropic reservoirs, and our numerical experiments show that MPFA-O can fail to converge during the Newton-Raphson iterations when the permeability anisotropy is very high, while NTPFA still enjoys good performance.
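The Peng-Robinson compressibility factor mentioned in the abstract is the largest real root of a cubic in Z. A minimal sketch (standard PR constants; the function and argument names are ours, not the paper's):

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def pr_z_factor(P, T, Tc, Pc, omega):
    """Gas compressibility factor Z from the Peng-Robinson EOS.
    P, T: pressure (Pa) and temperature (K); Tc, Pc, omega: critical
    properties and acentric factor. Returns the largest real root of
    the cubic (the gas branch)."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    return roots[np.abs(roots.imag) < 1e-10].real.max()
```

Gas density then follows as rho = P M / (Z R T) for molar mass M. Near atmospheric conditions the returned Z should be close to 1, the ideal-gas limit.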
Optimization of WAG in Real Geological Field Using Machine Learning and Nature-Inspired Algorithms
Authors: M. Nait Amar and A. Jahanbani Ghahfarokhi
Summary: Maximizing oil recovery is a challenging task for the oil industry worldwide, mainly in the presence of dynamic technical and economic constraints. To achieve this target, a number of enhanced oil recovery technologies are being applied, and one of the most successful and widely used methods is water-alternating-gas injection (WAG). The estimation of the optimal operating parameters of the WAG process is a complex problem which requires a considerable number of time-consuming runs. Therefore, developing a faster alternative tool without sacrificing the precision of the numerical simulators becomes essential. Proxy models, which are user-friendly mathematical models based on machine learning and pattern recognition, have a noticeable ability to deal with highly complex problems, such as reproducing the outcomes of numerical simulators, in reasonable time.
The present work aims at establishing various dynamic proxy models for optimizing a constrained WAG project applied to real field data from “Gullfaks” in the North Sea. Two types of artificial neural network (ANN), namely the multi-layer perceptron (MLP) and the radial basis function neural network (RBFNN), were trained to predict all the parameters needed for the formulated optimization problem. The Levenberg-Marquardt (LM) algorithm was applied for optimizing the MLP model, while the genetic algorithm (GA) and ant colony optimization (ACO) were applied for the proper selection of the RBFNN control parameters. Furthermore, the best proxy model found was coupled with GA and ACO for solving the WAG optimization problems.
The results showed that the established proxies are robust, practical and effective in mimicking the performance of the numerical reservoir model. In addition, the results demonstrated the effectiveness of GA and ACO in optimizing the parameters of the WAG process for the real field data used in this study. The findings of this investigation contribute to the knowledge of the mathematics of oil recovery from various perspectives, namely the establishment of cheap and accurate time-dependent proxy models for real cases, the optimization of the WAG process in the presence of various types of constraints, and the robustness of nature-inspired algorithms for solving optimization problems related to enhanced oil recovery.
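The coupling of a cheap proxy with a nature-inspired optimizer can be illustrated with a toy example. The sketch below substitutes a hypothetical smooth function `toy_proxy` for the trained ANN and uses a bare-bones real-coded GA; it is an illustration of the idea only, not the authors' implementation:

```python
import random

def toy_proxy(x):
    """Stand-in for a trained proxy mapping a WAG control variable to NPV
    (hypothetical response with an optimum at x = 0.3)."""
    return -(x - 0.3) ** 2

def ga_maximize(f, lo, hi, pop=20, gens=50, seed=0):
    """Bare-bones real-coded GA: tournament selection (size 3),
    blend crossover, Gaussian mutation, clipped to [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a = max(rng.sample(xs, 3), key=f)   # tournament winner 1
            b = max(rng.sample(xs, 3), key=f)   # tournament winner 2
            child = 0.5 * (a + b) + rng.gauss(0.0, 0.05 * (hi - lo))
            nxt.append(min(hi, max(lo, child)))
        xs = nxt
    return max(xs, key=f)
```

The point of the proxy is precisely that `f` is cheap enough to be evaluated thousands of times inside such a loop, which a full reservoir simulation would not permit.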
Discrete Fracture-Matrix Simulations Using Cell-Centered Nonlinear Finite Volume Methods
Authors: W. Zhang and M. Al Kobaisi
Summary: Control-volume based Discrete Fracture-Matrix (DFM) models have been increasingly used to simulate flow and transport in fractured porous media. The star-delta transformation is often used to eliminate the intermediate control volumes at fracture intersections. The star-delta transformation, however, assumes that the permeability at fracture intersections is very high. Therefore, it cannot accurately model the blocking effect at fracture intersections, for example when a blocking fracture intersects a permeable one. In this work, we improve the star-delta transformation by modifying the calculation of transmissibility at fracture intersections so that the blocking effect can be captured. To account for permeability anisotropy in the matrix and the grid non-orthogonality resulting from unstructured meshing, nonlinear finite volume methods are used to compute transmissibility for matrix-matrix connections. The linear two-point flux approximation (TPFA) is then used to couple the fracture and matrix together. Results of numerical experiments demonstrate that the improved star-delta transformation performs very well compared to the reference solution. When the permeability of the matrix is anisotropic, the linear TPFA is not consistent in general and significant errors can be incurred. The nonlinear methods, on the other hand, capture the tensorial effect in the matrix domain more accurately in all simulations.
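The classical star-delta transformation that this work modifies has a compact form for a single intersection node: the one-sided transmissibilities into the intersection are replaced by pairwise (delta) transmissibilities. An illustrative sketch of the standard rule, without the authors' blocking-effect modification:

```python
def star_delta(T):
    """Eliminate the intermediate control volume at a fracture intersection.
    T: list of one-sided transmissibilities from each intersecting fracture
    branch into the intersection node. Returns pairwise transmissibilities
    T_ij = T_i * T_j / sum(T), valid when the intersection itself is highly
    permeable (the assumption criticized in the abstract)."""
    s = sum(T)
    return {(i, j): T[i] * T[j] / s
            for i in range(len(T)) for j in range(i + 1, len(T))}
```

For two branches this reduces to the usual harmonic (series) combination T1*T2/(T1+T2), which is why the rule breaks down when a low-permeability blocking fracture should dominate the intersection.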
Two-Phase Darcy Flows in Fractured and Deformable Porous Media, Convergence Analysis and Iterative Coupling
Authors: F. Bonaldi, K. Brenner, J. Droniou and R. Masson
Summary: We consider a two-phase Darcy flow in a fractured porous medium consisting of a matrix flow coupled with a tangential flow in the fractures, described as a network of planar surfaces. This flow model is coupled with the mechanical deformation of the matrix, assuming that the fractures are open and filled by the fluids, as well as small deformations and a linear elastic constitutive law. In this work, the model is derived and discretized using the gradient discretization method, which covers a large class of conforming and non-conforming discretizations. This framework allows a generic convergence analysis of the coupled model using a combination of discrete functional tools. The convergence of the discrete solution to a weak solution of the model is proved using a priori and compactness estimates. This is, to our knowledge, the first convergence result for this type of model taking into account two-phase flows and the nonlinear poro-mechanical coupling, including the cubic nonlinear dependence of the fracture conductivity on the fracture aperture. Previous related works consider a linear approximation obtained for a single-phase flow by freezing the fracture conductivity. Numerical experiments are presented to illustrate this result, using a two-point flux approximation cell-centered finite volume scheme for the flow and a P2 finite element method for the mechanics. Iterative coupling algorithms are investigated to solve the coupled discrete nonlinear systems at each time step of the simulation.
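The cubic dependence of fracture conductivity on aperture referred to above comes from Poiseuille flow between parallel plates. A one-line illustration of this "cubic law" (a standard idealization, not a formula quoted from the paper):

```python
def fracture_conductivity(aperture):
    """Cubic law: for laminar flow between parallel plates separated by
    the fracture aperture a, the transmissivity per unit width scales
    as a**3 / 12. Doubling the aperture multiplies conductivity by 8."""
    return aperture ** 3 / 12.0
```

This strong nonlinearity is what makes the poro-mechanical coupling analyzed in the paper hard: small mechanically induced aperture changes produce large conductivity changes, which is why earlier convergence results froze the conductivity instead.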
Numerical Modelling of CO2 Migration through Faulted Storage Strata with a New Asynchronous FE-FV Compositional Simulator
Authors: Q. Shao and S. Matthai
Summary: Simulation of unstable subsurface CO2 migration is challenging not only because of the accompanying thermal-hydraulic-mechanical-chemical processes, but also because the interaction of the plume with geometrically complex geologic structures (e.g., faults and fractures) has to be resolved across a broad range of spatiotemporal scales. To address these challenges, we present a new hybrid finite element - finite volume simulator (ACGSS) for fully unstructured finite element meshes, including discrete representations of wells and intersecting faults. This compositional multi-phase, multi-component transport scheme makes it possible to model reactive miscible flow and transport, phase transitions (e.g., CO2 dissolution, H2O evaporation and salt precipitation) and inter-phase mass transfer during CO2 geo-sequestration. Critical for its performance is an asynchronous evolution scheme, following the idea of discrete event simulation (DES). This method restricts diagnostics, phase equilibria and transport computations to those small subregions of the model where changes are occurring, resolving these accurately across temporal and spatial scales. In conjunction with parallelisation, this accelerates computation significantly, also making it more robust. Accurate compositional simulation required us to apply the asynchronous method to both the pressure and the saturation equations. This led to a genuinely new simulator. The ACGSS is applied to a complex 3D fault model, which consists of a sequence of sandstone and shale layers intersected by multiple faults. This model was produced from a 3D medical scan of a sand-box experiment, which was converted into a finite element mesh using GoCAD and the RINGMesh software and populated with plausible properties. The adaptively refined mesh represents every detail of the intricate model geometry.
In the example simulation (CO2 injected at 0.2 Mt/yr through a vertical, 15-m-long completion in the lowest siltstone layer of the graben structure), the CO2 rises through the faults from block to block until it reaches the unfaulted topmost sandstone unit. This occurs in less than 3 years, although the faults are modelled as thin (0.5-m wide) and only moderately permeable (k = 5 × 10⁻¹⁴ m²) structures. Thanks to the asynchronous time-marching, the 3-year simulation on the >9-million-cell grid completes within several hours on a 20-core desktop PC. A sensitivity analysis with respect to burial depth and geologic parameters is included in the paper and presentation.
UNISIM-III: Benchmark Case Proposal Based on a Fractured Karst Reservoir
Authors: M. Correia, V. Botechia, L. Pires, V. Rios, S. Santos, J. Hohendorff, M. Chaves and D. Schiozer
Summary: The significant world oil reserves in fractured karst reservoirs in Brazilian pre-salt fields add new frontiers to (1) the development of numerical methods for upscaling giant fields with multiscale heterogeneities, (2) history matching and production strategy optimization under critical uncertainties, and (3) forecasting of future reservoir performance. However, there is a lack of benchmark models with the heterogeneous dynamic behavior typical of fractured karst reservoirs with which to develop and validate novel numerical methods. This work presents a simulation benchmark model, available as public-domain data, which represents a fractured carbonate karst reservoir and offers a great opportunity to test new methodologies for reservoir development and management using numerical simulation.
The work is divided into three steps: (1) development of a reference model, a fine-grid model with a high level of geologic detail, treated as the real field; (2) development of a simulation model under uncertainties considering an initial stage of the field development phase; and (3) elaboration of a benchmark proposal for studies related to oil field development and production strategy selection. Based on the available information from well logs, several uncertainty attributes were considered in the structural framework, facies and petrophysical properties. Dynamic, economic and technical uncertainties were also considered. The reference model is a giant field divided into two stratigraphic zones: the upper zone characterized by stromatolites and the lower one by coquinas. Moreover, the model is characterized by two regions with karst features near the horizon surfaces and a cluster of fractures near faults. Volcanic rocks and highly permeable trends near faults are included as non-mapped uncertainties in the simulation model, as the information from well logs at the initial stage of field development does not intercept these geologic attributes. This approach will lead to several challenges in reservoir development and management.
As this benchmark is representative of a giant field, it is divided into four sectors. Sector 1 already has a defined production strategy, aimed at studies of field management. The strategy considers WAG (water-alternating-gas/CO2) as the recovery mechanism and includes 13 wells in a first wave (6 producers and 7 injectors); another 4 wells can be added in a second wave. Field development studies can be applied in the other sectors.
This benchmark provides a great opportunity to develop and test novel numerical methods in giant reservoirs with geologic and dynamic pre-salt trends.
Upscaling of Nanoparticle Retention Rate for Single-Well Applications From Pore-Scale Simulations
Authors: N. Bueno, M. Icardi, F. Municchi, H. Solano and J. Mejía
Summary: One of the main difficulties when simulating nanoparticle transport in porous media is the lack of accurate field-scale parameters to properly estimate particle retention across large distances. Furthermore, current field models are, in general, not based on mathematically rigorous upscaling techniques, and empirical models are fed by experimental data. This study proposes a rigorous and practical way to connect pore-scale phenomena with Darcy-scale models, providing accurate macro-scale results. In order to carefully resolve nanoparticle transport at the pore scale, we develop a numerical solver, based on the open-source C++ library OpenFOAM, able to account for shear-induced detachment of nanoparticles from the walls in addition to the usual isotherm attachment/detachment processes. We employ an integrated approach to generate random, user-oriented, periodic porous structures with tunable porosity and connectivity. A periodic face-centered cubic geometry is employed for simulations over a broad range of Péclet and Damköhler numbers, and effective parameters valid at the macro scale are obtained by means of volume averaging in periodic cells, as well as breakthrough approximations to the asymptotic behaviour. Coupling these techniques leads to a comprehensive estimation of a first-order kinetic rate for nanoparticle retention and of the maximum retention capacity, based on breakthrough and asymptotic curves. We apply this upscaling process to real cases found in the literature to estimate the penetration radius in a typical stimulation operation setting. The profiles are compared for different spatial discretizations in the radial direction and different dimensionless numbers to study their impact on travel distances.
The present workflow gives new insight into some aspects of pore-scale boundary conditions that are usually glossed over, such as the validity of some common mathematical expressions or the extent to which pore-scale results represent larger scales. Finally, this study proposes a mathematical relationship between pore-scale parameters and some important macro-scale dimensionless numbers that can be used to estimate field-scale effective parameters for nanoparticle retention in well stimulation and oil and gas industry applications.
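The Péclet and Damköhler numbers that parameterize the simulation campaign are simple ratios of transport rates. For reference (the symbol choices are the conventional ones, not taken from the paper):

```python
def peclet(u, L, D):
    """Péclet number Pe = u*L/D: advective over diffusive transport rate,
    for velocity u, characteristic length L and diffusivity D."""
    return u * L / D

def damkohler(k, L, u):
    """Damköhler number Da = k*L/u: reaction (here, attachment) rate
    over advective rate, for first-order rate constant k."""
    return k * L / u
```

Sweeping these two numbers, as the abstract describes, amounts to sweeping the relative strengths of advection, diffusion and attachment at the pore scale.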
A Novel Nanoparticle Retention Model in Porous Media for IOR & EOR Applications
Summary: Recent developments based on nanotechnology have shown immense potential for application in EOR and IOR operations, supported by successful results at the lab and field scales. However, poor understanding and the lack of a robust framework for modelling nanoparticle transport and retention in porous media hinder its proper spread in the O&G industry. In this work, we propose a novel modelling framework that jointly represents mechanical and chemical mechanisms of nanoparticle retention and remobilisation in porous media. This model is formulated under a phenomenological approach that gives these processes a strong physical basis at the macroscale. Retention and remobilisation dynamics are modelled under a non-equilibrium approximation using an α-order kinetic law which depends on the equilibrium condition. The mathematical formulation was programmed using the open-source package Chebfun, as a function of dimensionless variables to make upscaling to higher scales more feasible. The impact of the dimensionless variables on nanoparticle transport and retention was studied by a sensitivity analysis, which allowed their effects to be identified. On this basis, some simplifications of the model are proposed according to the dimensionless variables. In order to validate this framework and its implementation, a set of lab tests was designed and carried out using a silica-nanoparticle-based nanofluid in sand packs. Concentration jumps were used to capture their effect on nanoparticle retention and remobilisation. Experimental data show good agreement with simulation data under each operating condition and parameter fitting. Additionally, the model is capable of predicting the profile of nanoparticle concentration and its evolution in time. Changes in that profile can be predicted if operating conditions change, allowing their optimisation.
Finally, this modelling framework is implemented in the multi-physics, multi-component tool DFTmp Simulator to simulate specific EOR and IOR applications at the field scale. Using the fitting parameters obtained previously, an IOR application is simulated considering a multiphase system and other phenomena simultaneously.
Consistent Formulation and Error Statistics for Reservoir History Matching
By G. Evensen
Summary: It is common to formulate the history-matching problem using Bayes’ theorem. From Bayes’ theorem, the posterior probability density function of the uncertain static model parameters is proportional to the prior probability density of the parameters multiplied by the likelihood of the measurements. The static model parameters are random variables characterizing the reservoir model, while the data include, e.g., produced rates of oil, gas, and water from the wells. The reservoir prediction model is assumed to be perfect, and there are no errors besides those in the static parameters. The Bayesian formulation of this problem is given, e.g., in the recent paper by Evensen et al. (2019), and serves as the fundamental description of the history-matching problem.
However, this formulation is flawed. The historical rate data come from the real production of the reservoir, and they contain errors. The conditioning methods usually take these errors into account, but we neglect them when we force the simulation model with the observed rates during the historical integration. Thus, in the history-matching problem, the model prediction depends on the same data that we condition on, which prevents the direct use of Bayes’ theorem.
Here, we formulate Bayes’ theorem while taking into account the data dependency of the simulation model. In the new formulation, one must update both the poorly known model parameters and the errors in the rate data used to force the reservoir simulation model. Also, we specify time-correlated rate errors that are consistent with the use of allocation tables to generate the rate measurements. The “red” errors lead to a stronger uncertainty increase in the simulation model and also reduce the impact of the rate measurements in the conditioning process (where the measurement error-covariance matrix becomes non-diagonal).
We present results where the new subspace EnRML by Raanes et al. (2019) and Evensen et al. (2019) is used with a simple reservoir case. The result is a more consistent prediction model and a more realistic uncertainty estimate from the updated ensemble.
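Time-correlated ("red") rate errors of the kind discussed above are commonly realised as a first-order autoregressive (AR(1)) process. A minimal sketch of one standard construction (not necessarily the exact model used in the paper):

```python
import math
import random

def red_noise(n, rho, sigma=1.0, seed=1):
    """AR(1) 'red' error sequence: e[k] = rho*e[k-1] + sqrt(1-rho^2)*w[k],
    with white noise w ~ N(0, sigma^2). The sqrt(1-rho^2) factor keeps the
    stationary standard deviation equal to sigma; lag-one correlation is rho."""
    rng = random.Random(seed)
    e = [rng.gauss(0.0, sigma)]
    for _ in range(n - 1):
        e.append(rho * e[-1] + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, sigma))
    return e
```

With rho close to 1 the errors persist over many report steps, which is exactly why the measurement error-covariance matrix in the conditioning step becomes non-diagonal.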
Free-Space Well Connection Method for Efficient Coupling of Wells and Grid Cells of Arbitrary Geometry
By R. Pecher
Summary: In reservoir simulation studies, one of the crucial factors affecting the accuracy, and hence the reliability, of the results is the representation of well connections in the numerical reservoir grid. Although there have been numerous attempts to redefine the relationship between wellbore pressure, grid-cell pressure and the corresponding fluid flow rate, the original Peaceman formulae remain by far the most prevalent option in simulation software. The simplicity of their implementation overshadows their limited applicability to symmetric 2D scenarios of purely cylindrical radial flow, an assumption also built into the "3D projected Peaceman" formula.
One of the attempts to improve the inflow model was the Multi-Point Well Connection (MPWC) method (SPE 173302) which solves the local flow problem using the Boundary Element Method (BEM). In terms of its boundary conditions, pressures of the next-neighbour cells surrounding the well-connection cell appear in the final coupling formula, which makes the method difficult to implement and computationally less efficient.
A new method has been formulated to overcome the drawbacks of MPWC and still utilise the benefits of BEM. The proposed Free-Space Well Connection (FSWC) method converts the next-neighbour cells into infinitesimal layers of equivalent transmissibilities and applies free-space boundary conditions to their outer surfaces. All cell faces are adaptively refined into a required number of boundary elements and their pressures and fluxes are expressed by means of free-space Green’s functions representing well perforation sources/sinks. The method is applicable to cells and perforations of arbitrary geometry, including perforations outside the cell of interest, and to general cases of heterogeneous anisotropic rock permeability. Balancing all boundary pressures and fluxes yields the resulting well-connection transmissibility (or well index) and inter-cell transmissibility multipliers that emulate the flow asymmetry outside the well-connection cell.
Accuracy of the FSWC method has been verified against various analytical and numerical models. Even for the ideal case of a fully penetrating vertical well in the centre of a square reservoir, the FSWC-computed well index is closer to the analytical solution than that of Peaceman. Despite its broad applicability, superior accuracy and robustness, the method is fast and requires just a few CPU seconds to reach the desired precision. This is demonstrated by various examples with realistic well trajectories from full-field reservoir simulation runs.
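For comparison, the Peaceman well index that FSWC improves upon has a compact closed form in the simplest case of a vertical well in an isotropic square cell, with equivalent radius r_o ≈ 0.198 Δx. A sketch of that baseline (variable names are ours):

```python
import math

def peaceman_well_index(k, dz, dx, rw, skin=0.0):
    """Peaceman well index for a vertical well centred in a square cell:
    k: isotropic permeability, dz: cell thickness (perforated length),
    dx: horizontal cell size, rw: wellbore radius, skin: skin factor.
    Uses the equivalent (pressure-matching) radius r_o ~= 0.198 * dx."""
    ro = 0.198 * dx
    return 2.0 * math.pi * k * dz / (math.log(ro / rw) + skin)
```

The flow rate then follows as q = WI * (p_cell - p_well) / mu. The limitation criticized in the abstract is visible here: the formula knows nothing about perforation geometry, anisotropy, or flow asymmetry outside the cell, all of which FSWC resolves with boundary elements.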
Large-Scale Field Development Optimization Using a Two-Stage Strategy
Authors: Y. Nasir, O. Volkov and L.J. Durlofsky
Summary: The optimization of the locations of a large number of wells represents a challenging computational problem. This is because the number of optimization variables scales with the maximum number of wells considered, and some of these variables may be categorical if the determination of the number and types of wells is part of the optimization problem. In this work, we develop and test a two-stage strategy for large-scale field development optimization problems. In the first stage, wells are constrained to lie in repeated patterns, and the optimization variables define the pattern type and geometry (e.g., well spacing, orientation). This component of the optimization follows a previous procedure (Onwunalu and Durlofsky, 2011), though several important modifications, including optimization of the drilling sequence, are introduced. The solution obtained in the first stage is used as an initial guess for the second stage. In this stage we apply comprehensive field development optimization, where the well location, type, drill/do-not-drill decision, completion interval (for 3D models), and drilling time variables are determined for each well. Pattern geometry is no longer enforced in this stage. Specialized treatments (consistent with actual drilling practice) are introduced for cases where multiple geomodels, used to capture geological uncertainty, are considered.
The two-stage procedure is applied to 2D and 3D models corresponding to different geological scenarios. Both deterministic and geologically uncertain settings are considered. All optimizations are performed using a derivative-free hybrid of particle swarm optimization and mesh adaptive direct search. Our most challenging example involves optimization over multiple realizations of the Olympus model, which we simulate using a GPU-based commercial flow simulator. In all cases, results using the two-stage procedure are compared to those from a standard single-stage approach. We achieve consistently better optimizer performance using the two-stage approach. For example, in one case, the optimum achieved after 17,500 flow simulations using the standard approach is found after only 4,400 flow simulations using the two-stage approach. In another case, for the same computational effort, the NPV achieved using the two-stage approach exceeds that of the standard approach by 4.7%. These results suggest that this optimization strategy may indeed lead to improved results in practical problems.
Kogen: Combined Koval/Gentil Fractional Flow Model
Authors D. Santos Oliveira, B. Horowitz and J.A.R. TuerosSummaryWe propose a proxy model to separate oil and water production total predicted liquid rate. This is essential to optimal waterflooding management. The proxy models studied here are widely used to estimate parameters in the field of petroleum engineering due to their low computational cost and do not require prior knowledge of reservoir properties. The approach uses production history and the producer-based capacitance and resistance (CRMP) model, together with the combination of two fractional flow models, Koval ( Cao, 2014 ) and Gentil ( Gentil, 2005 ). We will henceforth call Kogen this combined model.
The combined fractional flow model can be formulated as a constrained nonlinear curve fitting. The objective function to be minimized is a measure of the difference between calculated and observed water cut values (Wcut) or net present values (NPV). The constraint limits the difference in water cuts of the Koval and Gentil models at the time of transition between the two. The problem can be solved using gradient-based method the sequential quadratic programming (SQP) algorithm. In this study, the gradient is computation by finite differences. The parameters of the CRMP model are the connectivity between wells, time constant, and productivity index. These parameters can be found using a Nonlinear Least Squares (NLS) algorithm. With these parameters, it is possible to predict the liquid rate of the wells. The Koval and Gentil models are used to calculate the Wcut in each producer well over the concession period which in turn allows to determine the accumulated oil and water productions.
Two synthetic models, Brush Canyon Outcrop and Brugge model are used to validate the proposed strategy. Then we compare the solutions obtained with the three fractional flow models (Koval, Gentil, and Kogen) with results obtained directly from the simulator.
It has been observed that the proposed combined model, Kogen, consistently generated more accurate results. In addition, CRMP/Kogen proxy model has demonstrated its applicability, especially when the available data for model construction is limited, always producing satisfactory results for production forecasting with a low computational cost.
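To make the curve-fitting idea above concrete, here is a minimal sketch of fitting the Gentil part alone, assuming the common power-law form WOR = αW^β (W being cumulative water injected). The starting point, bounds, and parameter values are illustrative assumptions; the actual Kogen model additionally couples Koval and Gentil through a transition constraint, which this sketch does not reproduce:

```python
import numpy as np
from scipy.optimize import least_squares

def gentil_wcut(W, alpha, beta):
    """Gentil-type water cut from a power-law water-oil ratio:
    WOR = alpha * W**beta, fw = WOR / (1 + WOR)."""
    wor = alpha * W ** beta
    return wor / (1.0 + wor)

def fit_gentil(W, fw_obs):
    """Least-squares fit of (alpha, beta) to observed water-cut data."""
    res = least_squares(lambda p: gentil_wcut(W, *p) - fw_obs,
                        x0=[1e-3, 1.0],                    # illustrative start
                        bounds=([1e-8, 0.0], [np.inf, 5.0]))
    return res.x
```

In the paper's workflow, the fitted water-cut curve is combined with the CRMP liquid-rate prediction to split liquid into oil and water.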
-
-
-
History Matching of Time-Lapse Deep Electromagnetic Tomography with A Feature Oriented Ensemble-Based Approach
Authors K. Katterbauer, A. Marsala, M. Maucec, Y. Zhang and I. HoteitSummaryCarbonate reservoirs represent highly complex geological structures whose main feature is that the flow dynamics occur primarily in fractures. The complexity of the network of fractures as well as their interconnectedness may lead to unexpected flow patterns and uneven sweep efficiency. Determining the fracture distribution and the reservoir properties of both matrix and fracture channels is quintessential for accurately tracking the fluid front movement in the reservoir, optimizing sweep efficiency, and maximizing hydrocarbon production.
A feature-oriented ensemble-based history matching workflow was introduced previously to enhance the characterization of petroleum reservoirs through the assimilation of time-lapse electromagnetic (EM) data in combination with other available measurements. Compared with seismic measurements, which provide effective information related to reservoir structure, deep EM measurements in the interwell volumes are more sensitive for distinguishing between hydrocarbon fluids and water due to the difference in electrical conductivity. The developed workflow calibrates model variables of interest utilizing the information on formation resistivity that is usually made available through geophysical inversion of raw EM data. Archie’s law is typically used to relate formation porosity, fluid properties (e.g., water saturation and salt concentration) and formation resistivity. Instead of directly integrating the inverted EM resistivity data, which are usually high-dimensional and noisy in amplitude, the boundary or contour information extracted from the EM resistivity field is utilized through an image-oriented distance parameterization combined with an iterative ensemble smoother.
We showcase this framework on a realistic carbonate reservoir box model with a complex fracture channel network. Time-lapse crosswell EM data were assimilated to update fracture and matrix reservoir properties, ensuring that the heterogeneity in the properties is maintained. The framework exhibited strong performance in the history matching of the complex carbonate reservoir structure. In comparison with conventional ensemble-based history matching techniques, this newly developed approach led to significantly more accurate sweep efficiency maps while maintaining the heterogeneity in the parameters between the fractures and the matrix. Finally, uncertainty in the saturation maps could be significantly reduced with the assistance of deep EM reservoir tomography.
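Archie's law, mentioned above as the link between resistivity and saturation, is simple enough to sketch directly. The default exponents (a = 1, m = 2, n = 2) are common textbook values, not the ones used in the paper:

```python
def archie_rt(sw, phi, rw, a=1.0, m=2.0, n=2.0):
    """Archie's law (forward): Rt = a * Rw / (phi**m * Sw**n),
    i.e. formation resistivity from porosity and water saturation."""
    return a * rw / (phi ** m * sw ** n)

def archie_sw(rt, phi, rw, a=1.0, m=2.0, n=2.0):
    """Archie's law (inverse): water saturation from formation resistivity."""
    return (a * rw / (phi ** m * rt)) ** (1.0 / n)
```

The workflow in the abstract does not invert resistivity cell by cell like this; it extracts contour information from the resistivity field instead, but the forward relation is the physical basis in both cases.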
-
-
-
Optimizing Low Salinity Waterflooding with Controlled Numerical Influence of Physical Mixing Considering Uncertainty
SummaryControlled/Low Salinity Waterflooding (LSWF) is an augmented waterflood with well-reported improved displacement efficiency compared with conventional waterfloods. Physical mixing or dispersion of the injected low-salinity (LS) brine with the formation high-salinity (HS) brine substantially reduces the low-salinity effect. Numerical dispersion often misrepresents this mixing in conventional LSWF simulations, causing errors in the results. Uncertainty in the reservoir description further makes the evaluated performance questionable. Existing studies have suggested optimal amounts for the injected LS brine to sustain its displacement stability during inter-well flows with physical mixing, but with poor or no consideration of uncertainty. This work focuses on optimizing the injected LS-brine amount considering reported flow uncertainties while ensuring adequate correction of the erroneous influence of numerical dispersion on physical mixing. We investigate the impacts of flow uncertainties on the optimal LS slug size. The sensitivity of the optimal slug size to heterogeneity is examined under uncertainty. We evaluate how the interaction between physical mixing and geological heterogeneity influences slug integrity and performance.
We propose an improved ‘effective salinities’ concept to evaluate appropriate effective salinities that characterize the desired representative physical mixing while suppressing the large numerical dispersion effects usually encountered in coarse-grid LSWF simulations. This ensures reliable representation of physical dispersion in such grids. We consider different models with characterized levels of heterogeneity and essential variables that control the impact of mixing on LSWF performance, based mainly on reported data. New indicators are defined to evaluate the displacement stability and performance of the injected LS brine, thereby relating its technical and economic performance. Slug performance is evaluated at different injection times to examine the sensitivity of recovery to the LS injection start time. Performance uncertainty is assessed through a designed four-stage, computationally efficient approach: parameter-space sampling to design representative experiments; proxy modelling; proxy validation and verification; and Monte Carlo simulation to provide a wider representative sample of the parameter space.
We can now reliably represent physical dispersion in LSWF-simulations of current commercial reservoir simulators. Recovery is observed to be relatively insensitive to LS injection start-time until breakthrough of preinjected HS-brine. This is important for LS injection designs as they need not commence immediately for secondary-mode. The potential favourable influence of the spatial distribution of heterogeneity is seen, with links to transverse dispersion. The evaluated optimal sizes from existing studies are observed to be, at best, only suitable as displacement stability thresholds for slug injection considering uncertainty. We find an optimal slug-size of at least 1.0 HCPV to reduce risk under uncertainty.
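The four-stage uncertainty workflow above (sample, build a proxy, validate, then Monte Carlo on the proxy) can be sketched with a simple quadratic-response proxy. The feature set and the idea of replacing the simulator by a polynomial surrogate are our own illustrative assumptions, not the proxy form used in the paper:

```python
import numpy as np

def quad_features(X):
    """Design matrix with constant, linear, and second-order terms."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def fit_proxy(X, y):
    """Stage 2 (proxy modelling): least-squares quadratic fit of the
    responses sampled in stage 1."""
    coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    return coef

def proxy_predict(coef, X):
    """Evaluate the cheap proxy; stage 4 runs this on a large Monte Carlo
    sample of the parameter space instead of the expensive simulator."""
    return quad_features(X) @ coef
```

Stage 3 (validation) would compare `proxy_predict` against held-out simulator runs before trusting the Monte Carlo estimates.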
-
-
-
Fast Robust Optimization Using Mean Field Bias Correction
Authors L. Wang and D.S. OliverSummaryEnsemble methods are remarkably powerful for quantifying geological uncertainty. However, robust optimization of a cost function for a problem in which uncertainty is characterized by a large ensemble can be computationally demanding. In a straightforward approach, the computation of the expected net present value (NPV) requires many expensive simulations. Several techniques (e.g., model selection, coarsening) have been proposed to reduce the cost but generally lead to a less accurate optimization. To reduce the amount of computation without sacrificing accuracy, we developed a fast and effective approach for computing the expected NPV using only the reservoir mean model with a bias correction factor. At each iteration of the optimization procedure, we only require one additional simulation on the mean model with a different set of controls to obtain an initial approximate value, whose bias is then corrected with a multiplicative correction factor. Information from individual simulations with distinct controls and model realizations can be used to estimate the correction factor for different controls. The effectiveness of various bias-corrected methods is illustrated by application to the drilling-order problem in the synthetic REEK Field model. Compared with the average NPV, the results show that the average error of the expected NPV estimated from the mean model is reduced from -9% to 0.56% by estimating the bias correction factor. Distance-based localization with an appropriate taper length can further improve the accuracy of the estimation. By adding a regularization term with a tuning parameter associated with the variance of the correction factor, the sensitivity of the estimates to the taper length is reduced such that the regularized estimate is potentially more accurate for a wider range of taper lengths.
In previous work, we proposed a nonparametric online-learning methodology (learned heuristic search) to efficiently compute a sequence of drilling wells that is optimal or near-optimal. In this work, we apply the learned heuristic search (LHS) to the reservoir mean model with bias correction to optimize the drilling sequence and show that it leads to the same solution as the LHS with the average NPV. Moreover, we investigate the possibility of optimizing the first few wells without finding an entire drilling sequence. Our results show that LHS can optimize complete drilling sequences or only the first few wells at a reduced cost.
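The core idea of the mean-model bias correction can be illustrated with a toy stand-in for the simulator. Everything here (the quadratic NPV surrogate, the calibration scheme) is an illustrative assumption; the paper estimates the correction factor from past iterations with localization and regularization, which this sketch omits:

```python
import numpy as np

def npv(model, u):
    """Toy stand-in for a reservoir simulation: NPV is concave in the
    controls u and shifted by the (hypothetical) model realization."""
    return float(20.0 - np.sum((u - model) ** 2))

def bias_factor(ensemble, mean_model, calib_controls):
    """Multiplicative bias-correction factor: average ratio of the
    ensemble-mean NPV to the mean-model NPV over calibration controls."""
    ratios = [np.mean([npv(m, u) for m in ensemble]) / npv(mean_model, u)
              for u in calib_controls]
    return float(np.mean(ratios))
```

Once calibrated, evaluating a new control set costs one mean-model run (`npv(mean_model, u) * factor`) instead of one run per realization.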
-
-
-
Fast Time-Stepping Scheme for Streamline-Based Transport Simulations
SummaryIn this work, we propose a new time-stepping method for the simulation of transport in two-phase flows. Our method relies on constant initial saturation conditions and builds on the streamline-based discretization. In sampling methods such as multi-level Monte Carlo, many probable scenarios of an uncertain permeability field have to be simulated with inexpensive models in order to quantify the uncertainty of phase saturations. However, since the statistical error converges slowly, large ensembles are needed and, therefore, the computational cost per sample has to be small. We illustrate the performance of our new inexpensive, yet accurate time-stepping scheme on Buckley-Leverett type problems involving multi-Gaussian as well as more realistic channelized permeability fields.
-
-
-
Refined Ensemble-Based Method for Waterflooding Problem with State Constraints
Authors J. Tueros and B. HorowitzSummaryIn reservoir management, optimization techniques are used to improve production and support new field development decisions. The waterflooding problem consists of determining optimal well control trajectories: rate, bottom hole pressure (BHP), valve openings, or a combination of them. The problem can be expressed as a typical nonlinear optimization problem. The objective function can be net present value (NPV) or cumulative oil production.
Linear constraints involve the controls themselves, but nonlinear constraints involving state variables may also be imposed. For example, producer and injector wells controlled by BHP may be subject to flow control, or vice versa. In optimization, constraints are imposed and respected at each control cycle, but not necessarily within the control cycle, because rates are discontinuous when controls change. The alternative of imposing constraints at each time step of the simulation results in a high computational cost, making the optimization process time-consuming. We propose correction points, based on a time series within the control cycle, to impose state constraints and thus reduce the computational effort.
The algorithm of choice to solve the optimization problem is sequential quadratic programming (SQP). The refined ensemble-based method is used to approximate the gradients of the objective function and constraints. The sensitivity matrix is obtained as the product of the pseudo-inverse of the covariance matrix and the cross-covariance matrix. The sum of the columns of the sensitivity matrix is the approximate gradient vector. The proposed refinements are based on the connectivity between injector and producer wells and on competitiveness coefficients between producers. The strategy aims to reduce spurious correlations in the sensitivity matrix when using small ensembles. Two synthetic models, Egg and Brugge, are used to validate the proposed strategy. Results are shown in different box plots, generated by performing ten optimization processes. We observe that the strategy of imposing correction points helps to impose state constraints at the different steps of the simulation, reducing the computational cost of the optimization process.
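The ensemble gradient construction described above (pseudo-inverse of the control covariance times the cross-covariance with the objective) can be sketched for a scalar objective. The perturbation size, ensemble size, and toy objective are our illustrative assumptions, and the sketch omits the paper's connectivity-based refinements:

```python
import numpy as np

def ensemble_gradient(objective, u, n_ens=500, sigma=0.05, seed=0):
    """Ensemble approximation of the objective gradient at controls `u`:
    g = pinv(C_uu) @ C_uJ, built from perturbed control samples."""
    rng = np.random.default_rng(seed)
    U = u + sigma * rng.standard_normal((n_ens, u.size))  # perturbed controls
    J = np.array([objective(ui) for ui in U])             # objective samples
    dU = U - U.mean(axis=0)                               # control anomalies
    dJ = J - J.mean()                                     # objective anomalies
    C_uu = dU.T @ dU / (n_ens - 1)                        # control covariance
    C_uj = dU.T @ dJ / (n_ens - 1)                        # cross-covariance
    return np.linalg.pinv(C_uu) @ C_uj
```

With small ensembles, spurious long-range correlations appear in `C_uj`, which is exactly what the connectivity- and competitiveness-based refinements in the abstract aim to suppress.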
-
-
-
Selecting Representative Models for Ensemble-Based Production Optimization in Carbonate Reservoirs with Intelligent Wells and WAG Injection
Authors S.M.G. Santos, A.A.S. Santos and D.J. SchiozerSummaryProduction optimization under uncertainty is complex and computationally demanding, and particularly challenging for carbonate reservoirs subject to WAG injection, represented in large ensembles with high simulation runtimes. Optimization search spaces are often large, reservoir models are complex, and the number of decision variables is high. The computational cost of ensemble-based production optimization can be decreased by reducing the size of the ensemble to a set of representative models (RM). The validity of this method requires that the RM maintain representativeness throughout the optimization process, in which the production strategy changes at each evaluation. Many RM selection techniques use production forecasts of the ensemble for an initial production strategy, which raises questions about the robustness of the RM. This work investigates approaches to ensure the consistency of RM in ensemble-based long-term optimization. We use a metaheuristic optimization algorithm that finds sets of RM that represent the ensemble in the probability distribution of uncertain attributes and in the variability of production, injection, and economic indicators ( Meira et al., 2020 ). Our case study is a benchmark light-oil fractured carbonate with features of Brazilian pre-salt reservoirs and many reservoir and operational uncertainties. We obtained production, injection and economic indicators using different approaches to provide valuable insight for RM selection. We assessed RM fitness for production optimization based on their adequacy for uncertainty quantification under varying production strategies. Despite the effects of changing decision variables on RM representativity, our results suggest that RM can be used for ensemble-based production optimization, with limitations related to the estimation of the probabilistic objective function due to mismatches in the probabilities of occurrence.
Using production indicators obtained from a base production strategy decreased RM representativeness when compared to RM selection based on a more robust evaluation of reservoir performance using a wide-covering well pattern and no restrictions from production facilities. Finally, our results suggest valid RM selection using production forecasts for intermediate dates of the simulation period, an important contribution for ensembles with very high simulation runtimes. We also provide a broad theoretical background on the uncertain reservoir system and on approaches to obtain reduced ensembles and their applications.
-
-
-
Novel Ensemble Data Assimilation Algorithms Derived from A Class of Generalized Cost Functions
By X. LuoSummaryEnsemble data assimilation algorithms are among the state-of-the-art history matching methods. From an optimization-theoretic point of view, these algorithms can be derived by solving certain stochastic nonlinear least-squares problems.
In a broader picture, history matching is essentially an inverse problem, which is often nonlinear and ill-posed and may not possess a unique solution. To mitigate these issues, in the course of solving an inverse problem, domain knowledge and prior experience are often incorporated into a suitable cost function within a respective optimization problem. This helps to constrain the solution path and promote certain desired properties (e.g., sparsity, smoothness) in the solution. Whereas in inverse problem theory there is a rich class of inversion algorithms resulting from various choices of cost functions, there are few ensemble data assimilation algorithms that, in practical use, are implemented in a form beyond nonlinear least-squares.
This work aims to narrow this gap. Specifically, we consider a class of generalized cost functions and derive a unified formula to construct a corresponding class of novel ensemble data assimilation algorithms, which aim to promote certain desired properties that are chosen by the users but may not be achieved by using the conventional ensemble-based algorithms.
As an example, we consider a channelized reservoir characterization problem and formulate history matching as minimum-average-cost problems with two new cost functions. With the first, our objective is to restrict the changes in the total variation of the reservoir models during model updates; with the second, our goal is instead to curb modifications of the histograms of the reservoir models. While these two cost functions may appear unconventional in the context of ensemble data assimilation, the corresponding assimilation algorithms derived from our proposed formula are very similar to the conventional iterative ensemble smoother (IES). As such, our previous experience with the IES can be smoothly transferred to the implementations and applications of these new algorithms. In addition, the experimental results indicate that using either of these two new algorithms leads to better history matching performance in comparison with the original IES.
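The first of the two cost functions above penalizes changes in total variation across a model update. A minimal sketch of that quantity for a 2D field (using the anisotropic TV definition, which is our assumption; the paper does not specify the discrete form here):

```python
import numpy as np

def total_variation(field):
    """Anisotropic total variation of a 2D field: sum of absolute
    differences between horizontally and vertically adjacent cells."""
    dx = np.abs(np.diff(field, axis=1)).sum()
    dy = np.abs(np.diff(field, axis=0)).sum()
    return float(dx + dy)

def tv_change_penalty(prior, update, weight=1.0):
    """Cost term penalizing the change in total variation across an update,
    as a stand-in for the TV-restricting cost function described above."""
    return weight * abs(total_variation(update) - total_variation(prior))
```

For channelized fields, keeping TV roughly constant during updates helps preserve sharp channel boundaries that plain least-squares updates tend to smear.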
-
-
-
Application of Dynamic Parametrization Algorithm for Non-Intrusive History Matching Approaches
Authors A. Mukhin, M. Elizarev, N. Voskresenskiy and A. KhlyupinSummaryHistory matching generates a detailed reservoir description that matches production data and can be used for forecasting and uncertainty estimation. Due to the ill-posedness of the history matching problem, parametrization of the high-dimensional fields in the model (such as permeability and porosity) is widely applied. The common approach of existing parametrization algorithms is to generate a dataset of possible field realizations (prior models) and then convert this dataset to an orthogonal basis using PCA-based techniques. Model reduction is achieved by truncating the majority of the basis components based on an energy criterion.
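The classic PCA parametrization described above (SVD of the prior dataset, truncation by energy) can be sketched as follows; the energy threshold and helper names are illustrative, and this is the baseline the paper's AS-PCA then improves on:

```python
import numpy as np

def pca_basis(priors, energy=0.95):
    """Truncated PCA basis of prior realizations (one flattened field per
    row), keeping enough components to capture `energy` of the variance."""
    mean = priors.mean(axis=0)
    _, s, Vt = np.linalg.svd(priors - mean, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)             # cumulative energy
    k = int(np.searchsorted(frac, energy)) + 1            # components to keep
    return mean, Vt[:k]

def to_latent(field, mean, basis):
    """Project a field onto the reduced (latent) space."""
    return basis @ (field - mean)

def from_latent(z, mean, basis):
    """Reconstruct a field from latent variables."""
    return mean + basis.T @ z
```

History matching then optimizes over the low-dimensional `z` instead of the full field; AS-PCA additionally recalculates the basis using objective-function sensitivity so that truncated-but-relevant components can be recovered.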
Due to high uncertainty and the low quality of real data, important patterns may be under-represented in the prior dataset, and basis components containing such structures may be truncated. We present a novel method in which the omitted components are identified not only by the energy criterion but also by the sensitivity of the objective function. In our Adaptive Strategies PCA (AS-PCA) technique, we developed an advanced definition of the optimal basis and derived an efficient algorithm for basis recalculation using computational approaches from quantum mechanics. The algorithm requires the gradient of an objective function w.r.t. the latent variables (only at the point of convergence). The new basis is then obtained by a few linear transformations with negligible computational cost, and the optimization continues. The method was tested on history matching of 2D reservoirs and demonstrated improvements in terms of misfit value and field consistency in comparison with classic PCA parametrization.
However, the applicability of gradient-based methods is constrained by local convergence and high implementation effort (i.e., the adjoint technique). To overcome these constraints, we extend the adaptive strategies to non-intrusive history matching approaches such as stochastic optimization and ensemble-based algorithms. Numerical gradient approximation is not well-suited for AS-PCA since it is inexact and takes additional simulation time. We developed a regression-based algorithm for gradient estimation using a set of field realizations, represented by an ensemble in ensemble-based methods or a population in evolution-based algorithms. In this study, we demonstrate the theory and examples of the application of adaptive strategies to history matching using PSO and EnKF. Results of history matching with inconsistent prior datasets for 2D Gaussian fields and applications to uncertainty quantification will be provided.
-
-
-
Algebraic Wavefront Parallelization for ILU(0) Smoothing in Reservoir Simulation
By S. GriesSummaryIncomplete factorization methods are an important part of the linear solver strategy in reservoir simulation. It has been shown earlier that the inherited pressure-decoupling effect of (block-)ILU(0) plays an important role in the convergence of efficient linear solvers like System-AMG or CPR, from black-oil to coupled geomechanics.
With these specific linear systems, this decoupling is a by-product of the row-wise ILU-elimination. However, this also makes ILU sequential in nature, which is a problem on parallel compute hardware.
The parallelization of ILU methods has been, and still is, a field of active research. Various approaches are reported in the literature. All exploit inherited parallelism in the sparse systems to be solved, either by reordering the system accordingly or by setting synchronization points induced by the underlying structure (so-called wavefronts). All of these approaches have certain advantages and disadvantages regarding parallel efficiency and numerical robustness, and which approach is best suited depends on the application.
Re-ordering approaches affect the elimination order. Hence, they can have significant robustness impacts for AMG in reservoir simulation.
Wavefront parallelizations guarantee equivalence to the sequential method. However, they either require the parallelization structure to be induced by the geometry, which may be challenging in unstructured cases and voids a main advantage of AMG, or they perform a row-wise data-dependency scan, with a resulting amount of blocking communication.
In this paper, we present a wavefront parallelization for (block-)ILU(0) that performs its dependency scan on groups of rows rather than individual rows. The resulting wavefront setup works analogously to aggregative AMG setups, just with additional constraints. The outcome is a data-dependency graph in which one can control the compromise between the frequency of data exchange and wait time. Equivalence to the sequential ILU(0) algorithm is still guaranteed.
While this approach cannot compete with the parallelizability of methods like Jacobi relaxation, it can exploit the inherited parallelism of ILU(0) for both OpenMP and MPI, and it maintains the numerical properties of the original algorithm. We demonstrate both with test problems as well as with cases from industrial reservoir simulations.
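The classic row-wise wavefront construction that this paper generalizes to groups of rows can be sketched for a dense-stored lower-triangular factor. This illustrates the baseline level-scheduling idea only, not the paper's aggregative group-wise setup:

```python
import numpy as np

def levels_from_lower(L):
    """Level scheduling ('wavefronts') for a lower-triangular solve:
    row i depends on every row j < i with L[i, j] != 0; rows within one
    level are mutually independent and can be processed in parallel."""
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        deps = [j for j in range(i) if L[i, j] != 0]      # rows i waits on
        level[i] = 1 + max((level[j] for j in deps), default=0)
    groups = {}
    for i, lv in enumerate(level):
        groups.setdefault(int(lv), []).append(i)
    return [groups[lv] for lv in sorted(groups)]          # one list per level
```

A tridiagonal pattern yields one row per level (no parallelism), which is exactly the pathological case that motivates coarser, group-based dependency graphs.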
-
-
-
Extended Finite Volume Method (XFVM) for Flow Induced Tensile Failure in Fractured Reservoirs
Authors A.A. Habibabadi, R. Deb and P. JennySummaryTensile opening of pre-existing fractures and tensile failure around the fracture tips triggered by fluid injection can lead to permeability increase in a reservoir. Such hydraulically driven fracturing technologies are used in petroleum engineering to achieve enhanced extraction of oil and gas. However, such processes can also lead to increased seismic activity around the reservoir ( Ellsworth 2013 ). Numerical modelling of tensile opening and crack propagation along with shear slip modelling of pre-existing fractures is important to assess advantages and risks of hydro-fracturing.
The main criteria for numerical models of coupled flow and mechanics in fractured reservoirs are accuracy and computational efficiency. For flow, descriptions based on embedded discrete fractures in matrix domains have proved successful in this regard ( Hajibeygi et al., 2011 ; Lee et al., 2001 ). In this context, flow-induced shear failure and tensile opening can be modelled using an extended finite element method (XFEM) ( Borja, 2008 ) or the recently introduced extended finite volume method (XFVM) ( Deb and Jenny, 2017 , 2020 ). The advantage of XFVM lies in the choice of only one degree of freedom per fracture segment for the displacement ( Deb and Jenny, 2017 ) and in the fact that the same conservative method is used for both flow and mechanics.
The current paper deals with an extension of this XFVM framework such that crack-tip propagation can also be simulated. The cohesive stress approach of Wells and Sluys (2001) for crack-tip propagation was modified and integrated into XFVM. Using the coarse-scale solution of the stress field obtained by XFVM at the fracture tips, a fine-scale interpolation is generated. This fine-scale solution is used to obtain the stress intensity factors (SIF) by an over-deterministic method. The SIF calculation is used to evaluate the crack growth criterion and direction. An example test case of a tensile failure solution at the crack tips of a single fracture is studied.
-
-
-
Additive Schwarz Preconditioned Exact Newton Method as a Nonlinear Preconditioner for Multiphase Porous Media Flow
Authors Ø. Klemetsdal, A. Moncorgé, O. Moyner and K. LieSummaryDomain decomposition methods are widely used as preconditioners for Krylov methods applied to linear problems. Recently, there has been growing interest in nonlinear preconditioning methods for Newton’s method applied to porous media flow. In this work, we study a spatial Additive Schwarz Preconditioned Exact Newton (ASPEN) method as a nonlinear preconditioner for Newton’s method with a fully implicit scheme in the context of immiscible and compositional multiphase flow. We first describe the method and how it can be implemented in a reservoir simulation package. We then study the nonlinearities addressed by the different components of the method. We observe that the local fully implicit updates handle all the local nonlinearities well and that the global ASPEN updates handle the long-range interactions well. The combination of the two updates leads to a very competitive algorithm. We illustrate the behavior of the algorithm for conceptual one- and two-dimensional cases, as well as realistic three-dimensional models. We perform a complexity analysis and demonstrate that Newton’s method with a fully implicit scheme preconditioned by ASPEN is a very robust and scalable alternative to the well-established Newton’s method for fully implicit schemes.
-
-
-
Analytical Pore Network Approach (APNA) for Rapid Estimation of Capillary Pressure Behaviour in Rock Samples
Authors H. Rabbani, D. Guerillot and T. SeersSummaryCapillary pressure measurements are an integral part of special core analysis (SCAL), on which the oil and gas industry greatly relies. Reservoir engineers implement these macroscopic properties in simulators to determine the amount of hydrocarbons as well as the flowing capacity of fluids in the reservoir. Despite their importance, conventional laboratory techniques used to measure capillary pressure curves of core samples are expensive, tedious, time-consuming and prone to error. Motivated by the importance of capillary pressure measurements in the oil and gas industry, we propose a novel methodology called the Analytical Pore Network Approach (APNA) that can provide a reliable forecast of capillary pressure using pore-scale 3D images of reservoir rocks. The proposed approach provides oil and gas companies with an inexpensive, fast and accurate estimation of capillary pressure data, reduces the number of required laboratory experiments, and facilitates the estimation of such properties from uncored sections of the reservoir (i.e., using drill cuttings).
-
-
-
Analytical Production Optimization with Modified NPV: Application to 2D Gas-Cone Reservoirs
Authors A. Bizzi, E. Fortaleza and F.P. MuneratoSummaryThis article investigates the analytical and computational optimization of reservoirs in alternate cost and coordinate spaces, by means of a modified NPV function (MNPV). We show that, for reduced systems, undertaking the analysis of reservoirs in these abstract spaces may lead to exact analytical expressions, unattainable under traditional analysis. This, then, may be used to speed up the optimization of large-scale reservoirs.
We demonstrate the concept under a restricted scope, focusing on a simplified case: maximizing the transient yield of an idealized reservoir. It consists of a single production section of a horizontal well in the presence of a gas cone. A set of further simplifying assumptions is then applied: for our analysis, the only depletion mechanism present is coning, and the very long 3D reservoir can be considered a composition of a group of 2D models.
Under these restrictions, we present an analytical proof that this modified NPV represents a convex function, for which the local optimization in abstract space generates the optimal global production strategy.
This, coupled with an analysis of the monotonicity of the reservoir dynamics, may be used to algebraically demonstrate the existence of diminishing returns from increases in production rate, finally arriving at the most cost-effective production strategy for the given system. This is then validated by a series of numerical simulations of the proposed reservoir.
Finally, we discuss similar concepts that may be used for the optimization of more realistic systems, enabling the use of analytical tools in the speeding-up of full-scale reservoir analysis.
The paper’s contributions can be stated in three points:
First, it presents new information on an emerging approach to the optimization of reservoirs. While most tools focus on optimizing the computation of reservoir-related processes, we show that a new approach to the NPV metric itself may lead to promising new results.
Second, it presents a novel, closed-form analytical solution for the optimal production rate of a reservoir with a simplified 2D gas cone.
Finally, it presents an alternate perspective on the role of analytical results in the era of computational reservoir optimization, by proposing a hybrid approach.
Albite-Anorthite Synergistic Effect on the Performance of Nanofluid Enhanced Oil Recovery
Authors R. Nguele, E.O. Ansah, K. Nchimi Nono and K. SasakiSummaryLarge volumes of oil sit within our reach but remain trapped, primarily because of strong capillary forces, which arise from the attraction between the polar ends of the oil and the surface charges of the bearing matrix. Altering these interactions within tiny pore throats, or better still, unveiling the extent to which the geochemistry governs them, can invariably improve production. We therefore evaluated the performance of a water-based nanofluid for oil production with respect to the geochemistry.
Alumina-silica nanocomposite (Al/Si-NP), synthesized by a plasma method, was used as the primary material. It was functionalized by dispersing 0.25 wt.% of lyophilized NP into formation water (TDS = 4301 ppm) under carbon dioxide bubbling. The resulting nanofluid (NF) was then used for coreflooding tests aimed at displacing a dead heavy oil (ρ = 0.854 g/cm3) from a waterflooded Berea sandstone. The ionic composition of the effluent fluids was tracked and further used to model the geochemical interactions. The model considered mineral precipitation and dissolution as well as ion adsorption and desorption. Calculations were performed using the transport algorithm in PHREEQC.
The experimental results from coreflood tests showed that Al/Si-NP, injected into a waterflooded sandstone, could displace up to 11% of the trapped oil, ten times more than when no nanofluid was injected. Ionic tracking further revealed dissolution of albite along with anorthite weathering; both mechanisms contributed to the log-jamming of Al/Si-NF. Furthermore, the geochemical modeling revealed a weak and reversible cation exchange between sodium (Na+) and calcium (Ca2+). We also found that the pH of the preflush should be mildly basic for controllable anorthite and albite precipitation and silica cementation, from which Al/Si-NF aggregation derives. These points were verified experimentally when the ionic composition was altered according to the geochemical modeling, leading to the conclusion that albite, anorthite and silicate precipitation promotes high recovery, owing to high Na+ and K+ content. Silica cementation was shown to increase the wettability of the formation rock.
Multiscale Matrix-Fracture Transfer Functions for Naturally Fractured Reservoirs Using an Analytical Discrete Fracture Model
Authors R. Hazlett and R. YounisSummaryFracture-matrix transfer functions have long been recognized as tools for modeling naturally fractured reservoirs. If a significant degree of fracturing is present, models involving isolated matrix blocks and matrix block distributions become relevant. However, this methodology captures only the largest fracture sets and treats the matrix blocks as homogeneous, though possibly anisotropic. Herein, we produce the semi-analytic transient baseline solution for depletion for such models. More realistic multiscale numerical models try to capture below-grid-scale information and pass it to the larger-scale system at some numerical cost. Instead, for below-block-scale information, we take the semi-analytic solution of the Diffusivity Equation of Hazlett and Babu (2014, 2018) for transient inflow performance of wells of arbitrary trajectory, originally developed for Neumann boundary conditions, and recast it for Dirichlet boundaries. As such, it represents the analytical solution for a matrix block with an arbitrarily complex gathering system surrounded by a constant pressure sink, which we take to be the primary fracture system. Instead of using a constant-rate internal boundary condition for the gathering system, we segment the well or fracture and force the internal complex fracture feature to be a constant-pressure element with net zero flux. In doing so, we create a representative matrix block with any degree of infinite-conductivity subscale fractures that impact the overall drainage into the surrounding fracture system. We quantify drainage from each face, capturing the anisotropic effect of internal fractures. We vary the internal fracture structure and delineate sensitivity to fracture spacing and extent of fracturing. This approach also generates the complete transient solution, enabling new well test interpretation for such systems in characterization of block size distributions or extent of below-block-scale fracturing.
The initial model for fully-penetrating fractures can be further generalized with the 2D distributed source model of Bao et al. (2017) for partially penetrating fractures of arbitrary inclination, as represented by floating, intersecting parallelograms embedded in the matrix block with either infinite or finite conductivity.
Experimental Evaluation of Sealing Effect of Nano Calcium Carbonate Blocking Agent on Shale Microfracture
More LessSummaryImproving the plugging ability of drilling fluid is an effective way to address wellbore instability in complex formations. Shale formations develop low porosity, low permeability, and micro- to nano-scale fractures. Traditional large-diameter plugging materials cannot effectively block micro- and nano-scale pores, so drilling fluid filtrate easily enters the formation, leading to wellbore instability. Using GCTS equipment, we carried out plugging evaluation experiments of drilling fluid containing a nano-CaCO3 plugging agent on shale cores from the Longmaxi Formation in the Sichuan-Chongqing area. We propose to evaluate the plugging effect using shale permeability and compressional and shear wave velocities measured before and after plugging. The results show that, at the same concentration, core permeability decreases and acoustic velocity increases with the use of the nano-CaCO3 plugging agent, which performs much better than base-slurry plugging. For the same nanoparticle material, as the content of the nano-CaCO3 plugging agent increases, shale permeability is reduced and acoustic velocity increases. When the nano-CaCO3 content is 3%, the sealing effect of the nanoscale drilling fluid is best. This experimental evaluation provides basic experimental and methodological support for optimizing plugging agents to prevent wellbore instability.
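The permeability comparison rests on Darcy's law applied to steady-state core-flood measurements before and after plugging. A minimal sketch, with made-up flow-test numbers rather than the paper's data:

```python
def darcy_permeability(q, mu, length, area, dp):
    """Absolute permeability k = q*mu*L/(A*dp) in SI units (m^2), from a
    steady-state core flood: rate q, viscosity mu, core length/area, dp."""
    return q * mu * length / (area * dp)

MD_PER_M2 = 1.0 / 9.869233e-16  # millidarcy per m^2 conversion factor

# Hypothetical steady-state tests on the same core before/after plugging
k_before = darcy_permeability(q=1e-9, mu=1e-3, length=0.05,
                              area=1.134e-3, dp=2e5)
k_after = darcy_permeability(q=2e-10, mu=1e-3, length=0.05,
                             area=1.134e-3, dp=2e5)
reduction = 1.0 - k_after / k_before  # fractional permeability reduction
```

With the (hypothetical) five-fold drop in stabilised flow rate at the same pressure drop, the inferred permeability reduction is 80%, the kind of before/after contrast used to rank plugging agents.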
Cube2Vec: Self-Supervised Representation Learning for Sub-Surface Models
Authors P. Lang, T. Adeyemi and R. Schulze-RiegertSummaryMeaningful representations of subsurface structures are essential to downstream machine learning tasks such as classification and regression. While unlabelled data are often abundant, labelling is expensive and for some use cases ill-defined. The ensuing lack of large, labelled datasets makes purely supervised training of models difficult for many tasks.
A self-supervised deep learning approach is developed which extends a representation learning method for spatially distributed data, referred to as Tile2Vec (Jean et al., 2019), to three dimensions. A metric learning-based loss function uses the overlap between cubes of the subsurface as a proxy for their similarity. This reflects the notion that regions which are close to each other in physical space are on average semantically more similar than regions which are far apart from each other. A three-dimensional convolutional neural network has been trained accordingly on about 100,000 cubes extracted from reservoir simulation models. The resulting model is used to evaluate cubes for their embedding, and the distance to the embedding of other cubes is a direct measure of their similarity in a structural and grid property distribution sense.
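The metric-learning objective described above can be sketched as a standard triplet margin loss on cube embeddings. The NumPy function below is a stand-in for the actual 3D CNN loss, and the embeddings are random placeholders, not model outputs:

```python
import numpy as np

def triplet_loss(z_a, z_n, z_d, margin=1.0):
    """Tile2Vec-style metric-learning loss on embeddings: pull the anchor
    z_a toward a spatially overlapping cube z_n and push it away from a
    distant cube z_d by at least `margin`."""
    d_pos = np.linalg.norm(z_a - z_n)
    d_neg = np.linalg.norm(z_a - z_d)
    return max(0.0, d_pos - d_neg + margin)

# Hypothetical 8-d embeddings standing in for the 3D CNN encoder output
rng = np.random.default_rng(0)
z_a = rng.normal(size=8)
z_n = z_a + 0.1 * rng.normal(size=8)   # nearby, overlapping cube
z_d = rng.normal(size=8)               # far-away cube

loss = triplet_loss(z_a, z_n, z_d)
```

Minimising this loss over many (anchor, neighbour, distant) triples is what makes embedding distance a proxy for structural similarity.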
The quality of the learned representation model is demonstrated quantitatively for labelled test datasets and empirically for two applications – visual search for similar cubes and the classification of formation sections according to their production potential.
Cube2Vec offers a way to leverage the large quantity of available unlabelled subsurface data to create powerful base models for visual analysis tasks in machine learning.
On the Robust Value Quantification of Polymer EOR Injection Strategies for Better Decision Making
Authors M. Oguntola and R. LorentzenSummaryOver the last decades several EOR methods have emerged, and corresponding models have been developed and implemented in increasingly more complex simulation tools. In this paper we present methodology and mathematical tools for optimizing and quantifying the value of EOR strategies, such as polymer, smart water or CO2. The developed methodology is demonstrated for polymer injection on medium to highly heterogeneous synthetic reservoir models with different complexity. The purpose of the work is to improve the understanding of the actual benefit of EOR methods, and to provide methodology that quickly allows users to find optimal production strategies that maximize the net present value (NPV).
In this work, the control variables for the optimization problem are polymer concentration and water injection rates for each injecting well, and oil production rates or bottom hole pressures for the producing wells, over the exploration period. Each control variable is constrained with given production limitations. To account for the uncertainty in the reservoir model, an ensemble of geological realizations is considered, and a robust ensemble-based approximate gradient method (EnOpt) is utilized. The gradient is approximated using a sample of control vectors, drawn from a Gaussian multivariate distribution with known mean and covariance. The covariance matrix is defined so that the control variables of the same well are correlated in time. The mean is updated using a preconditioned gradient ascent method with backtracking until an optimum is found.
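The core of the ensemble-based gradient can be sketched in a few lines: sample controls around the current mean, then correlate the control perturbations with the objective values. The quadratic "NPV" and step sizes below are hypothetical stand-ins for the ensemble-averaged simulated NPV:

```python
import numpy as np

def enopt_gradient(obj, u_mean, cov, n_samples=200, rng=None):
    """EnOpt-style smoothed gradient estimate: sample controls around
    u_mean from a Gaussian with covariance cov and form the sample
    cross-covariance between perturbations and objective values."""
    if rng is None:
        rng = np.random.default_rng(0)
    U = rng.multivariate_normal(u_mean, cov, size=n_samples)
    J = np.array([obj(u) for u in U])
    dU = U - U.mean(axis=0)
    dJ = J - J.mean()
    return dU.T @ dJ / (n_samples - 1)

# Toy quadratic stand-in for the robust NPV, with optimum at u = (1, 2)
npv = lambda u: -((u[0] - 1.0) ** 2 + (u[1] - 2.0) ** 2)
rng = np.random.default_rng(1)
u, cov = np.zeros(2), 0.01 * np.eye(2)
for _ in range(200):
    u = u + 5.0 * enopt_gradient(npv, u, cov, rng=rng)  # gradient ascent
```

Note the estimate is the true gradient pre-multiplied by the covariance, which is precisely how the chosen temporal correlation structure smooths the control updates.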
The presented method is tested on three different synthetic reservoirs: a 2D five-spot pattern with grid dimensions 50×50×1, a 3D field provided by Equinor (the Reek field, with dimensions 40×64×14), and a 3D field provided by TNO (the OLYMPUS field, with dimensions 118×118×16). The first two fields have three phases (water, gas, and oil) and the third has two phases (water and oil). For each case we find the optimal well controls for polymer flooding and then compare them with conventionally optimized continuous water flooding. The reservoir fluid flow is simulated using the Open Porous Media (OPM) simulator; it is worth noting, however, that the optimization method is independent of the reservoir simulator used. Important findings of this study are the feasible control strategies for polymer EOR methods leading to an increased NPV, and the comparison of economic values for optimized polymer and traditional water flooding for the examples considered.
Improved Extended Blackoil Formulation for CO2EOR Simulations
Authors T.H. Sandve, O. Sævareid and I. AavatsmarkSummaryA well-planned CO2 EOR operation can help meet an ever-increasing need for energy and at the same time reduce the total CO2 footprint of energy production. Good simulation studies are crucial for investment decisions in which increased oil recovery is optimized and balanced against permanent CO2 storage. It is common to use a compositional simulator for CO2 injection to accurately calculate the PVT properties of the mixture of oil and CO2. Compositional simulations, however, take significantly longer than blackoil simulations, making large studies that require many runs, as in the representation of uncertainty and in optimization, impractical. Existing extended blackoil formulations often poorly represent the PVT properties of oil-CO2 mixtures. We therefore present an improved extended blackoil formulation with new process-dependent blackoil properties that depend on the fraction of CO2 in the cell. These properties represent the density and viscosity of the oil-CO2 mixture more accurately and thus give results closer to the compositional simulator. A fourth component, in addition to water, oil and formation gas, is used to track the injected gas. The process-dependent blackoil functions are calculated from numerical slim-tube experiments based on one-dimensional compositional EOS simulations. The same simulations also give estimates of the MMP (minimum miscibility pressure).
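The idea of process-dependent properties amounts to a table lookup in the local CO2 fraction. A sketch with made-up slim-tube-derived tables (the real tables would come from the 1D compositional EOS simulations):

```python
import numpy as np

# Hypothetical slim-tube-derived tables: oil-CO2 mixture viscosity (cP)
# and density (kg/m3) as functions of the CO2 fraction x in the cell.
x_tab = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
mu_tab = np.array([2.0, 1.2, 0.7, 0.4, 0.06])     # thins as CO2 dissolves
rho_tab = np.array([850.0, 820.0, 790.0, 750.0, 700.0])

def mixture_props(x_co2):
    """Process-dependent blackoil properties: interpolate viscosity and
    density in the local CO2 fraction (illustrative numbers only)."""
    mu = np.interp(x_co2, x_tab, mu_tab)
    rho = np.interp(x_co2, x_tab, rho_tab)
    return mu, rho

mu, rho = mixture_props(0.375)  # properties at 37.5% CO2 in the cell
```

Evaluating such tables per cell is cheap, which is why the extended blackoil model retains near-blackoil runtimes while tracking mixture behaviour.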
The new extended blackoil model gives results that are closer to compositional simulations compared to existing blackoil formulations. We present examples based on data from the Fifth Comparative Solution Project: Evaluation of Miscible Flood Simulators as well as from CO2 injection on relevant field models.
The model has been implemented in the Flow simulator, developed as part of the Open Porous Media (OPM) project. Flow is an openly developed, free reservoir simulator capable of simulating industry-relevant reservoir models with serial and parallel performance similar to that of commercial simulators.
Well Location Optimisation by using Surface-Based Modelling and Dynamic Mesh Optimisation
Authors P. Salinas, C. Jacquemyn, C. Heaney, C. Pain and M. JacksonSummaryPredictions of production obtained by numerical simulation often depend on grid resolution, as fine resolution is required to resolve key aspects of flow. Moreover, the controls on flow can depend on well location in a model. In some cases, it may be key to capture coning or cusping; in others, it might be the location of specific high-permeability thief zones or low-permeability flow barriers. Thus, models with a suitable grid resolution for one particular set of well locations may fail to properly capture key aspects of flow if the wells are moved. During well optimisation, it is impossible to predict a priori which well locations will be tested in a given model. Thus, it is unlikely to be known a priori whether the grid resolution is suitable for all possible locations tested during a well optimisation procedure on a single model, and the problem is even more profound if well optimisation is performed over a range of different models.
Here, we report an optimisation methodology based on Dynamic Mesh Optimisation (DMO). DMO produces optimised meshes for a given model, set of well locations, pressure (and other key field) distribution and time level. Grid-free Surface-Based Modelling (SBM) models are automatically generated, in which well trajectories are introduced (also not constrained by a mesh) and respected by DMO. For the optimisation of the well location a Genetic Algorithm (GA) approach is used, specifically the open-source software package DEAP. DMO ensures that all the models automatically generated and simulated in the optimisation process are modelled with an equivalent mesh resolution without user interaction; in this way, the local pressure drawdown and associated physical effects (such as coning or cusping) can be properly captured if they appear in any of the many scenarios studied. We demonstrate that the method has wide application in reservoir-scale models of oil and gas fields, and regional models of groundwater resources.
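The GA layer of such a workflow can be sketched in a few lines of plain Python (the authors use the DEAP package; the 2D fitness field below is a hypothetical stand-in for a simulated objective, with its best cell at (13, 6)):

```python
import random

# Toy genetic algorithm for well placement on a 2D grid.
random.seed(0)
NX = NY = 20

def quality(i, j):
    """Hypothetical stand-in for a simulated objective at cell (i, j)."""
    return -((i - 13) ** 2 + (j - 6) ** 2)

def evolve(pop_size=30, gens=40, p_mut=0.3):
    pop = [(random.randrange(NX), random.randrange(NY))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: quality(*w), reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            (i1, _), (_, j2) = random.sample(parents, 2)
            i, j = i1, j2                        # one-point crossover
            if random.random() < p_mut:          # mutate: move one cell
                i = min(NX - 1, max(0, i + random.choice((-1, 1))))
                j = min(NY - 1, max(0, j + random.choice((-1, 1))))
            children.append((i, j))
        pop = parents + children                 # elitist replacement
    return max(pop, key=lambda w: quality(*w))

best = evolve()
```

In the paper's setting, evaluating `quality` would trigger an SBM model build and a DMO-meshed simulation for each candidate placement.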
Geoengineering Tool for Field Development: A Decision-Making Tool for Deviated Well Placement
Authors S. Bouquet and A. FornelSummaryThe developed geoengineering tool aims at improving decision-making on deviated well positions to increase mature field production. It is based on statistical and visual analysis of oil field features. The main advantages of this method are its reservoir-engineering focus and that no additional flow simulations are needed, unlike most iterative optimization algorithms, which require thousands of simulations. Moreover, this methodology is not constrained by a well geometry, but proposes the well placements and trajectories that are most interesting given the studied oil field's features. For deviated wells, drilling is not constrained to a fixed direction (horizontal or vertical); the direction is a function of the available resources (non-communicating oil-rich layers or disconnected oil-rich areas). In practice, such wells are difficult for a reservoir engineer to position manually. Here, we use information from field features and their classification to define a profitable well trajectory that maximizes oil production.
The field features are either static (e.g. anisotropy) or dynamic reservoir characteristics, e.g. mobile oil thickness, time-of-flight… To facilitate their analysis, an automatic statistical analysis is performed on these features by unsupervised classification of the grid cells. A 3D grid of class indices, depending on the combination of the features, is obtained. This grid allows identification of the areas of interest for production. A specific visualization of potential field production capacities is proposed by defining and calculating geobodies. They are defined as groups of connected cells with the most interesting features. While these connections are difficult to view in 3D, the geobody calculation makes it possible to display the areas of interest and their compartmentalization.
The geobody with the highest quality index should be the first area to be drained. The proposed trajectory starts at the cell with the highest quality index in this geobody. The quality indexes are calculated using a moving-average method. The trajectory is computed with a Dijkstra algorithm, weighted by the quality indexes of cells and geobodies and constrained by a maximum well length.
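A quality-weighted Dijkstra search of this kind can be sketched as follows. The edge-cost choice (reciprocal of the entered cell's quality) and the 3×3 quality map are illustrative assumptions, not the paper's actual weighting:

```python
import heapq

def best_trajectory(quality, start, max_len):
    """Dijkstra search on a 2D grid of cell quality indexes: the cost of
    entering a cell is 1/quality, so the cheapest path threads
    high-quality cells. Paths longer than max_len cells are not grown."""
    nx, ny = len(quality), len(quality[0])
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start, 1)]                # (cost, cell, cells in path)
    while heap:
        d, (i, j), n = heapq.heappop(heap)
        if d > dist.get((i, j), float("inf")) or n >= max_len:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < nx and 0 <= nj < ny:
                nd = d + 1.0 / quality[ni][nj]
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    prev[(ni, nj)] = (i, j)
                    heapq.heappush(heap, (nd, (ni, nj), n + 1))
    return dist, prev

# Hypothetical 3x3 quality map: a high-quality corridor along the edges
q = [[9.0, 9.0, 9.0],
     [1.0, 1.0, 9.0],
     [1.0, 1.0, 9.0]]
dist, prev = best_trajectory(q, (0, 0), max_len=9)
```

The cheapest route to the far corner follows the high-quality corridor rather than cutting through poor cells, mirroring how the proposed trajectory threads the best geobody.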
This methodology was first applied to a synthetic case and then to a real field case in North Africa, for which a standard reservoir engineering study had already been performed. The geoengineering tool's results were compared to those of the reservoir engineering study. The tool identified the high-potential areas and proposed a well trajectory and placement with the most promising features according to the field constraints, improving oil production while limiting computational cost.
Comparison Between Algebraic Multigrid and Multilevel Multiscale Methods for Reservoir Simulation
Authors H. Nilsen, A. Moncorge, K. Bao, O. Møyner, K. Lie and A. BrodtkorbSummaryMultiscale methods for solving strongly heterogeneous systems in reservoirs have a long history, from the early ideas used on incompressible flow to a newly released version in a commercial simulator. Much effort has been put into making the MsFV method work for fully unstructured multiphase problems. MsRSB is a newly developed variant that tackles most "real-world" problems. It is, to our knowledge, the only multiscale method that has been released in a commercial simulator. Alternatively, the method can be seen as a variant of smoothed aggregation or as an iterative approach to AMG with energy-minimizing basis functions. This will be discussed in detail.
So far, most work comparing MsRSB with AMG methods has focused on qualitative performance measures like iteration counts rather than on pure runtime with comparable code implementations. We discuss the theoretical performance and show the practical performance for our implementation. Here, we compare the performance of pure AMG, standard two-level MsRSB with pure AMG as coarse solver, and a new truly multilevel MsRSB scheme. Our implementation uses the DUNE-ISTL framework. To limit the scope of the discussion we restrict our assessment to AMG with aggregation and smoothed aggregation and the MsRSB method. These three methods are closely related and, in a preconditioner setting, are primarily distinguished by the coarsening factors used and the degree of smoothing applied to the basis. We also compare with other state-of-the-art AMG implementations, but do not investigate combinations of them with the MsRSB method. For the MsRSB method, we also discuss practical considerations in different parallelization regimes, including domain decomposition using MPI, shared memory using OpenMP, and GPU acceleration with CUDA.
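The connection between aggregation-type coarsening and basis smoothing can be illustrated with a small two-level iteration. The 1D Poisson matrix, coarsening factor and damping below are illustrative assumptions, not the paper's DUNE-ISTL setup:

```python
import numpy as np

# Two-level sketch relating MsRSB to smoothed aggregation: a piecewise-
# constant aggregation basis P is smoothed by one weighted-Jacobi step,
# then used for a coarse correction inside a simple two-level iteration.
n, agg = 32, 2                                   # coarsening factor 2
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

P = np.zeros((n, n // agg))
for i in range(n):
    P[i, i // agg] = 1.0                         # aggregation basis
D_inv = np.diag(1.0 / np.diag(A))
P_s = P - (2.0 / 3.0) * D_inv @ A @ P            # smoothed basis
Ac = P_s.T @ A @ P_s                             # coarse operator

def two_level(r):
    """One cycle: coarse correction in the smoothed basis, followed by a
    weighted-Jacobi post-smoothing step."""
    x = P_s @ np.linalg.solve(Ac, P_s.T @ r)
    return x + (2.0 / 3.0) * D_inv @ (r - A @ x)

b = np.random.default_rng(0).normal(size=n)
x = np.zeros(n)
for _ in range(40):
    x = x + two_level(b - A @ x)
res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

The coarsening factor (`agg`) and the amount of smoothing applied to `P` are exactly the knobs the abstract identifies as distinguishing aggregation AMG, smoothed aggregation and MsRSB.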
All comparisons will focus on the setting in which many similar systems should be solved, e.g. during a large-scale, multiphase flow simulation. That is, our emphasis is on the performance of updating a preconditioner and on the apply time for the preconditioner relative to the convergence rate. Performance of the solvers will be tested for pure parabolic/elliptic problems that either arise as part of a sequential splitting procedure or as a pseudo-elliptic preconditioner/solver as a part of a CPR preconditioner for a multiphase system, for which block ILU0 is used as the outer smoother.
Modeling of Water-Induced Fracture Growth Pressure Using Poroelastic Approach
Authors P. Kabanova and E. ShelSummaryOne of the main factors affecting the efficiency of hydrocarbon production during field development is the waterflooding pattern used for formation pressure maintenance. It is common practice for production wells that have been operated under depletion to be converted to injection. However, since hydraulic fracturing was previously performed on the majority of production wells, injection at high pressure can create risks associated with spontaneous fracture growth. This can lead to water breakthrough and decreased production efficiency. The purpose of this work is to model the fracture growth pressure at an injection well using a poroelastic approach.
Thus, a physico-mathematical model for determining the pressure at which the fracture will grow at an injection well is built. Solving the problem involves sequentially finding the pressure field in a development element using the Laplace equation, and then the stress field using an equilibrium equation. The solutions were obtained using analytical and numerical approaches, including a Fourier transform and a finite-difference scheme. The obtained solution was verified by validating the model against a finite-element solution. A criterion for fracture growth was also derived, according to which fracture propagation occurs when the minimum horizontal stress at the tip of the fracture is exceeded.
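The first step of that sequence can be sketched with a finite-difference (Jacobi) solve of the Laplace equation for pressure in a development element. The boundary pressures are made-up values, and the subsequent stress solve is omitted:

```python
import numpy as np

n = 21
p = np.zeros((n, n))
p[0, :] = 30.0    # injection row held at 30 (e.g. MPa)
p[-1, :] = 10.0   # production row held at 10

for _ in range(5000):
    # Jacobi sweep on interior cells (5-point Laplace stencil)
    p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                            + p[1:-1, 2:] + p[1:-1, :-2])
    p[:, 0], p[:, -1] = p[:, 1], p[:, -2]   # no-flow side boundaries
```

With Dirichlet pressures top and bottom and no-flow sides, the converged field is linear between the two rows; in the paper's workflow this pressure field would then feed the equilibrium equation for stress.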
The influence of reservoir and development parameters on the value of the critical pressure was evaluated; namely, it was shown that an increase in the Biot coefficient leads to an increase in the fracture growth pressure, while an increase in Poisson's ratio decreases the critical pressure.
It was found that an increase in the distance between the wells in a line leads to a decrease in the pressure at which the water-induced fracture starts to grow, while an increase in the row spacing along the vertical increases this pressure.
It should be pointed out that the most common way to control the growth of water-induced fractures is coupled hydrodynamic and geomechanical modeling, but this method is very time consuming and computationally expensive. Therefore, a quick method for estimating the fracture initiation pressure is proposed. The presented model can be used to control the growth of water-induced fractures, namely, to determine the regimes of fracture growth, to regulate the waterflood regimes (pressure and flow control), and to optimize the field development system without resorting to coupled hydrodynamic and full geomechanical modeling.
Analysis of Low Salinity and Polymer Synergies in a Dynamic Pore-Scale Network Simulator
Authors E. David, S. McDougall and A. BoujelbenSummaryIt has been postulated that combining different EOR techniques might yield a synergistic behaviour that could result in additional oil recovery beyond that obtained from each EOR technique applied separately. This has been investigated in recent experimental work (Alagic et al., 2010; Mohammadi and Jerauld, 2012; Shiran and Skauge, 2013; Pettersen and Skauge, 2016), where both polymer and surfactant solutions have been reported to be more efficient in a low salinity environment. We have investigated a number of different injection protocols using a pore-scale dynamic simulator that combines both low salinity brine (LS) and polymer injection.
Four synergistic combinations have been considered: (i) LS brine and polymer injected simultaneously at the start of the simulation (secondary mode), (ii) LS brine and polymer injected simultaneously following high salinity (HS) water breakthrough, (iii) LS brine injected initially, followed by simultaneous LS brine/polymer injection after LS breakthrough, and (iv) LS brine injected initially followed by polymer injection after LS water breakthrough.
A positive synergy was observed when LS brine and polymer were injected simultaneously in both secondary and tertiary modes, with the combined effect yielding significant increases in oil recovery. The mixture of polymer and LS brine was found to cause capillary fingers to thicken and swell, allowing the LS brine to access more of the pore space as a consequence of the higher viscous forces induced by the polymer. In secondary mode, the mixture of polymer and LS brine was observed to stabilise the water fingers and shifted the flow regime from viscous/capillary fingering to stable displacement. Moreover, results suggest that this synergistic LS/polymer effect is sensitive to a range of rock/fluid parameters, such as wettability, viscosity ratio, and capillary number.
Conditioning Surface-Based Geological Models to Well Data Using Neural Networks
Authors Z. Titus, C. Pain, C. Jacquemyn, P. Salinas, C. Heaney and M. JacksonSummaryGenerating representative reservoir models that accurately describe the spatial distribution of geological heterogeneities is crucial for reliable predictions of historic and future reservoir performance. Surface-based geological models (SBGMs) have been shown to better capture complex reservoir architecture than grid-based methods; however, conditioning such models to well data can be challenging because it is an ill-posed inverse problem with spatially distributed parameters.
Here, we propose the use of deep Convolutional Neural Networks (CNNs) to generate geologically plausible SBGMs that honour well data. Deep CNNs have previously demonstrated capability in learning representative features of spatially correlated data for large scale and highly non-linear geophysical systems similar to those encountered in subsurface reservoirs.
In the work reported here, a CNN is trained to learn the relationship between the parameterised inputs to SBGMs, the resulting geometry and heterogeneity distribution, and the mismatch between model surfaces and well data. We show that the trained CNN can generate a range of geologically plausible models that honour well data. The method is demonstrated for a 2D example model representing a shallow marine reservoir, and a 3D extension of the model that captures typical heterogeneities encountered in the subsurface such as parasequences, clinoforms and facies boundaries. These test cases highlight the improvement in reservoir characterisation for realistic geological cases.
We present here a method of generating geologically consistent reservoir models that match well data. The developed method will allow the generation of new high-fidelity realizations of subsurface geology conditioned to information at wells, which is the most direct observational data that can be acquired.
Technical Contributions
- The use of surface-based modelling to describe even complex geological features, compared to grid-based modelling, significantly decreases the computational expense of training the network, as there are fewer parameters to optimize.
- Conditioning geological models to well data is a challenging ill-posed inverse problem in reservoir characterisation. The use of neural networks presents another approach for generating geologically plausible models that are calibrated with observed well data, and it can be extended to object-based modelling.
Modified Peaceman Correction for Improved Calculation of Polymer Injectivity in Coarse Grid Numerical Simulations
Authors I. Tai, A. Muggeridge and M.A. GiddinsSummaryAn improved method for calculating the injectivity of non-Newtonian polymers in finite-volume numerical simulation is presented. Non-Newtonian rheologies can significantly impact the performance of a polymer flood. This is especially important in the near-wellbore region and at the start of injection. In the near-wellbore region, velocities and shear rates are at a maximum and change rapidly with distance from the well. These effects are expected to be greatest at the beginning of a polymer flood because the near-wellbore region is still saturated with more viscous oil.
An analytical method for calculating the modified Peaceman pressure equivalent radius when the well block contains only polymer solution is derived and then extended to the case when the well block contains both oil and polymer solution (as occurs at early time). This is done using fractional flow theory to derive well pseudo relative permeability functions. The approach is validated by comparing the results from fine grid radial and coarse grid Cartesian simulation models. The importance of the correction is demonstrated by simulating polymer injection into a realistic field scale model of a viscous oil field.
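For reference, the standard (Newtonian) Peaceman quantities that the paper's correction builds on are straightforward to compute; the sketch below gives the textbook anisotropic equivalent radius and well index, not the paper's modified, polymer-aware version:

```python
import math

def peaceman_r0(dx, dy, kx, ky):
    """Standard Peaceman pressure-equivalent radius for an anisotropic
    Cartesian well block; this is the quantity the paper modifies for
    non-Newtonian polymer rheology."""
    b = math.sqrt(ky / kx)
    return (0.28 * math.sqrt(b * dx ** 2 + dy ** 2 / b)
            / (math.sqrt(b) + 1.0 / math.sqrt(b)))

def well_index(dx, dy, h, kx, ky, rw, skin=0.0):
    """Newtonian well index WI = 2*pi*sqrt(kx*ky)*h / (ln(r0/rw) + s);
    a polymer correction would rescale the effective viscosity/radius."""
    r0 = peaceman_r0(dx, dy, kx, ky)
    return 2.0 * math.pi * math.sqrt(kx * ky) * h / (math.log(r0 / rw) + skin)

# Isotropic square block: r0 reduces to 0.14*sqrt(dx^2 + dy^2) = 0.198*dx
r0 = peaceman_r0(100.0, 100.0, 1e-13, 1e-13)
wi = well_index(100.0, 100.0, 10.0, 1e-13, 1e-13, rw=0.1)
```

Because the shear-thinning viscosity varies steeply across the well block, applying this Newtonian form unmodified is exactly what over-predicts the bottomhole pressure in coarse-grid models.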
The modified Peaceman radius, combined with well pseudo relative permeabilities, significantly reduces the error when calculating the bottomhole flowing pressure in wells injecting a shear-thinning polymer solution. In the field scale simulation, with injection pressure constrained by the fracture pressure of the rock, our results show that polymer injection can be a viable technique for enhanced oil recovery in this reservoir. The new method leads to higher well injectivity and more optimistic prediction of polymer flood performance, compared to the standard Peaceman calculation used by most reservoir simulators, where non-Newtonian behaviour in the well block is unaccounted for.
This paper provides a simple and accurate method to capture the impact of shear thinning behaviour on polymer injectivity. The method will improve estimations of injectivity in reservoir simulations of shear thinning polymer solutions.
A Novel Method for Quickly Obtaining SRV in Multi-Stage Fracturing Reservoirs with Different Fracturing Radii
More LessSummaryMulti-stage fracturing is an effective reservoir stimulation technology for multilayer reservoirs. The stimulated reservoir volume (SRV) is an important quality index for its evaluation. Considering a multi-stage fractured vertical commingled production well with a different fracturing radius in each layer, an extended model of an n-layer vertical commingled production well with an arbitrary longitudinal distribution of fracturing radii was established. The Laplace-domain bottom-hole pressure solution was obtained by Laplace transformation and by solving the sparse matrix of n-th order Bessel functions, and the real-time-domain bottom-hole pressure solution was obtained by the Stehfest numerical inversion method. Based on the characteristics of the bottom-hole pressure and its derivative in double-logarithmic coordinates, new flow regimes are identified. Sensitivity analysis of several vertical distributions of fracturing radii shows that, under vertically uneven fracturing radii, the multi-stage fractured well behaves like a three-zone composite reservoir. On the other hand, identifying the fracturing radii of a multi-stage fractured reservoir is an inverse problem: the fracturing radius of each layer cannot be effectively identified from the bottom-hole pressure response, but the SRV of the multi-stage fractured reservoir can be obtained. We call these two phenomena the "equivalent compound effect" and the "equivalent seepage volume effect", respectively. These effects provide a new method for quickly obtaining the SRV, rather than being mired in the fracturing radius of each individual layer, and open a new direction for evaluating the overall stimulation effect of multi-stage fractured vertical commingled production wells.
In particular, it provides a novel perspective for understanding the complex seepage flow of multi-stage fractured vertical commingled production reservoirs.
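The Stehfest numerical inversion used above is a standard, compact algorithm. The sketch below is a generic illustration (not the authors' code): it approximates f(t) from a Laplace-domain function F(s) as f(t) ≈ (ln 2 / t) Σ V_i F(i ln 2 / t), with the usual Stehfest weights V_i for an even number of terms N.

```python
import math

def stehfest_weights(N):
    """Stehfest weights V_i for an even number of terms N."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * math.factorial(2 * k)) / (
                math.factorial(N // 2 - k) * math.factorial(k)
                * math.factorial(k - 1) * math.factorial(i - k)
                * math.factorial(2 * k - i))
        V.append((-1) ** (N // 2 + i) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_i V_i * F(i * ln 2 / t)."""
    a = math.log(2.0) / t
    V = stehfest_weights(N)
    return a * sum(V[i - 1] * F(i * a) for i in range(1, N + 1))
```

As a sanity check, F(s) = 1/s (the transform of f(t) = 1) is inverted back to 1 to within roundoff; in the paper's setting, F would be the Laplace-domain bottom-hole pressure solution.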
Nonlinear State Constraints Handling in Waterflooding Optimization Through Reduced Order Models
Authors: A. Souza, A. Castro, M. Dall’Aqua, J. Tueros, B. Horowitz and E. Gildin
Summary: This study addresses strategies to efficiently impose nonlinear state constraints using reduced-order models. Nonlinear constraints on state variables are of practical interest in optimizing reservoir production performance (NPV or oil production), but they are difficult to handle numerically. Constraints include bounds on the controls themselves (e.g. rates, BHP or valve openings) and linear functions of the design variables, but oftentimes nonlinear constraints involving state variables must also be imposed. Examples are minimum (maximum) BHPs at producer (injector) wells subject to rate controls, or vice versa. Enforcing these constraints requires repeated computation of state variables, and possibly their derivatives, not only at the ends of control steps but at numerous intermediate times. Both computations are time consuming, so we propose reduced-order methods to decrease the numerical effort. The contributions of this paper are twofold: (1) we propose correction points, based on a time series within the control cycle, at which state constraints are imposed, thus reducing the computational effort; (2) we couple the optimizer with physics-based and data-driven reduced-order models to reduce state complexity.
Here, two strategies are compared: Proper Orthogonal Decomposition / Trajectory Piecewise Linearization (POD/TPWL) and Dynamic Mode Decomposition (DMD). Both are snapshot-based linearizations but are implemented differently. The POD/TPWL technique reduces the complexity of the problem by linearizing the governing equations around converged states stored during a training simulation, with reduction obtained by projecting the states onto smaller subspaces via POD. This method requires access to the simulator code and is thus intrusive. DMD also relies on state snapshots, which are used to generate a small set of optimal basis vectors called modes. The snapshot data also permit extraction of a coherent dynamic structure of the problem, under the assumption that a linear mapping connects the temporal evolution of the state system. This evolution can be computed without further simulation runs. DMD does not require access to the simulator code and is therefore nonintrusive. The reduced-order techniques are compared in the optimization of a BHP-controlled synthetic reservoir where the objective is maximization of oil production subject to field water production rate constraints. We demonstrate the handling of nonlinear constraints and the resulting computational savings using the MATLAB Reservoir Simulation Toolbox (MRST), with modifications to some of its routines to store Jacobian matrices and snapshots, both used in POD/TPWL and DMD.
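The core DMD idea described above, a linear mapping connecting the temporal evolution of snapshots, can be illustrated minimally. The sketch below (a generic, hypothetical example, not the authors' implementation, which works in a reduced mode space) estimates the operator A in x_{k+1} ≈ A x_k from a two-state snapshot sequence via the least-squares solution A = (Y Xᵀ)(X Xᵀ)⁻¹.

```python
def dmd_operator(snapshots):
    """Least-squares estimate of A in x_{k+1} = A x_k from 2-state snapshots,
    via A = (Y X^T)(X X^T)^{-1}, where X stacks snapshots 0..m-1 and Y stacks 1..m."""
    X, Y = snapshots[:-1], snapshots[1:]

    def gram(U, V):
        # 2x2 sum of outer products u v^T
        M = [[0.0, 0.0], [0.0, 0.0]]
        for u, v in zip(U, V):
            for i in range(2):
                for j in range(2):
                    M[i][j] += u[i] * v[j]
        return M

    YXt, XXt = gram(Y, X), gram(X, X)
    det = XXt[0][0] * XXt[1][1] - XXt[0][1] * XXt[1][0]
    inv = [[XXt[1][1] / det, -XXt[0][1] / det],
           [-XXt[1][0] / det, XXt[0][0] / det]]
    return [[sum(YXt[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

Once A is identified from training snapshots, future states follow by repeated multiplication, i.e. without further simulation runs, which is the source of the computational savings claimed above.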
Effects of Lumping on the Numerical Simulation of Thermal-Compositional-Reactive Flow in Porous Media
Authors: M. Cremon and M. Gerritsen
Summary: In this work, we study the influence of different lumping strategies on the thermal recovery of an extra-heavy oil. Numerical simulation of thermal recovery processes typically requires advanced thermodynamic equilibrium computations to model the phase behavior and displacement. Those models rely on compositional descriptions of the oil using up to tens of components. Lumping a large number of components into a smaller number of pseudo-components to reduce the computational cost is standard practice for thermal simulations, and in the context of reactive transport, most reaction schemes use at most four hydrocarbon components. However, the impact of the lumping process on the displacement can be hard to estimate a priori. We focus on 1D, three-phase, combustion-tube-like numerical simulations of In-Situ Combustion (ISC) displacement processes. These thermal-compositional-reactive simulations exhibit a tight coupling between mass and energy conservation through phase behavior, heat transport and reactions. We observe that, depending on the number and type of lumped pseudo-components retained in the simulation, the results can exhibit modeling artefacts and/or fail to capture the relevant displacement processes. ISC cases involve multiple fronts moving downstream, including a steam front, a reaction/temperature front and multiple saturation fronts. First, we show that using a small number of components does not allow an accurate estimation of the phase behavior of an extra-heavy oil. Using the typical reaction-based descriptions with a few hydrocarbon components (1-4) leads to inaccurate phase envelopes for multiple compositions encountered in the displacement process. Then, we illustrate that under hot air injection without reactions, the displacement results do not capture the physical phenomena.
Lumping heavy components together overestimates the size of the oil banks and gives inaccurate speeds for multiple fronts. Finally, in the presence of exothermic oxidation reactions, more components are needed to accurately capture the evaporation of medium and heavy components due to the tighter coupling and higher temperatures.
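As a hedged illustration of the lumping step discussed above, the toy sketch below merges components into pseudo-components by mole-fraction-weighted averaging of a single property (molar mass). The component names and values are hypothetical; real thermal-compositional lumping must also treat critical properties, acentric factors and reaction stoichiometry.

```python
def lump(components, groups):
    """Merge components into pseudo-components.
    components: name -> (mole fraction, molar mass);
    groups: pseudo-component name -> list of member names.
    Returns pseudo-component name -> (total mole fraction,
    mole-fraction-weighted molar mass)."""
    pseudo = {}
    for pname, members in groups.items():
        z = sum(components[m][0] for m in members)
        mw = sum(components[m][0] * components[m][1] for m in members) / z
        pseudo[pname] = (z, mw)
    return pseudo
```

For example, lumping {C1, C2} into "light" and {C7+, C20+} into "heavy" collapses a four-component description into the two-pseudo-component style typical of reaction schemes, which is exactly where the paper shows phase-behavior accuracy starts to suffer.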
A Novel and Efficient Preconditioner for Solving Lagrange Multipliers-Based Discretization Schemes for Reservoir Simulations
Authors: S. Nardean, M. Ferronato and A.S. Abushaikha
Summary: We present a novel and efficient preconditioning technique to solve the non-symmetric systems of equations associated with Lagrange multipliers-based discretization schemes, such as the Mixed Hybrid Finite Element (MHFE) and Mimetic Finite Difference (MFD) methods. These discretizations have been gaining popularity lately, and here we develop a fully dedicated preconditioner for them. Preconditioners are key to improving the efficiency of Krylov subspace methods, which provide the solution to the sequence of large, often ill-conditioned systems of equations arising in reservoir numerical simulations.
The mathematical model of flow in porous media is governed by two coupled nonlinear equations: the momentum balance, discretized using either MHFE or MFD, and the mass balance, discretized using the Finite Volume (FV) method. Unknowns are located on elements (element pressure and saturation) and faces (face pressure and phase capillary pressure), the latter behaving as Lagrange multipliers. The problem is solved with a fully implicit approach, and linearization is provided by a Newton-Raphson method, which leads to a block-structured Jacobian matrix. An original numerical formulation of the mass balance equation, in which the continuity of fluxes is strongly imposed with the aim of increasing the efficiency of the nonlinear iteration, has been investigated. The resulting block Jacobian is not symmetric, thus requiring special preconditioning tools for its efficient solution. The preconditioning approach exploits the Jacobian block structure to develop a multi-stage strategy that addresses the problem unknowns separately. A crucial point is the approximation of the resulting Schur complements, which is carried out at an algebraic level by applying proper restriction operators to the full matrix blocks. These restrictors are selected with the aid of a domain decomposition technique, algebraically enhanced by a dynamic minimal residual strategy. The proposed block preconditioner has been tested through extensive experimentation on unstructured and highly heterogeneous reservoir systems, demonstrating its robustness and computational efficiency.
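The block elimination underlying such multi-stage preconditioners can be sketched generically (this is not the authors' algebraic restriction strategy). The sketch below applies one block-LDU solve to a 2x2 block system [[A, B], [C, D]], approximating A by its diagonal so that the Schur complement S = D - C diag(A)⁻¹ B is cheap to form; when A really is diagonal, the solve is exact.

```python
def block_precond_solve(Ad, B, C, D, r1, r2):
    """One application of a block-LDU preconditioner to
    [[A, B], [C, D]] [x1, x2] = [r1, r2], with A approximated by its
    diagonal Ad. The (here 2x2) Schur complement S = D - C diag(Ad)^-1 B
    is formed explicitly and solved exactly."""
    n = len(Ad)
    # forward step: y1 = diag(Ad)^-1 r1
    y1 = [r1[i] / Ad[i] for i in range(n)]
    # Schur complement and reduced right-hand side
    S = [[D[i][j] - sum(C[i][k] * B[k][j] / Ad[k] for k in range(n))
          for j in range(2)] for i in range(2)]
    rS = [r2[i] - sum(C[i][k] * y1[k] for k in range(n)) for i in range(2)]
    # exact 2x2 solve for the face unknowns x2
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    x2 = [(S[1][1] * rS[0] - S[0][1] * rS[1]) / det,
          (-S[1][0] * rS[0] + S[0][0] * rS[1]) / det]
    # back-substitution for the element unknowns x1
    x1 = [(r1[i] - sum(B[i][j] * x2[j] for j in range(2))) / Ad[i] for i in range(n)]
    return x1, x2
```

In the paper's setting the blocks couple element unknowns (pressure, saturation) to face unknowns (the Lagrange multipliers), and the Schur complement is only approximated via restriction operators rather than formed exactly as here.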
Huff-n-Puff (HNP) Pilot Design in Shale Reservoirs Using Dual-Porosity, Dual-Permeability Compositional Simulations
Authors: H. Hamdi, C.R. Clarkson, A. Esmail and M. Costa Sousa
Summary: Before implementing an HNP pilot in the field, reservoir studies are usually conducted, and compositional numerical simulations performed, to assess the impact of uncertainty on HNP design parameters. In previous work by the authors, the impact of parametric uncertainty on designing a single-well HNP was demonstrated using single-porosity models. However, recent studies show that a limited region of shattered rock is likely to be created during hydraulic fracturing. This region is closely represented by regional dual-porosity dual-permeability (DP-DK) models. In this study, we expand on the earlier work and address the impact of model uncertainty on designing an optimal HNP for a Duvernay shale example. In addition, a multi-well HNP design is exemplified to assess the impact of fracture communication during cyclic gas injection. A unified framework is used to conduct Bayesian history matching and perform HNP optimizations using the Markov chain Monte Carlo process. This is achieved by implementing new adaptive sampling designs and employing surrogate modelling techniques (random forests and Gaussian processes) to obtain the distributions for probabilistic HNP forecasts.
The results show that for an equivalent calibrated DP-DK model, the efficiency of HNP, for both lean and rich gas injection scenarios, can be substantially higher than that predicted with the calibrated single-porosity model. In particular, lean gas injection, predicted to have a low efficiency with single-porosity models, is predicted to yield substantial incremental recovery in DP-DK models. The history matching and optimization results show that DP-DK models yield the highest recoveries during early cycles and a reduced efficiency in later cycles, whereas with single-porosity models the efficiency is fairly constant across cycles. The high efficiency of the DP-DK models is related to enhanced swelling and mixing due to the pervasive communication (contact area) between the fracture network and the matrix. Moreover, the compositional simulations demonstrate that for multi-well HNP scenarios, communication through hydraulic fractures is far more important than communication through the enhanced fracture region (EFR). This communication is shown to substantially reduce HNP performance, as inferred by comparing the probabilistic forecast simulations.
This study provides a novel workflow to accurately assess the impact of model uncertainty on the HNP designs for unconventional shale and tight light oil reservoirs.
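The Markov chain Monte Carlo step in such Bayesian history-matching workflows can be sketched with a plain Metropolis sampler. The code below is a generic, hypothetical stand-in: in practice the log-posterior would be evaluated through the surrogate models mentioned above (random forests or Gaussian processes), not a closed-form expression.

```python
import math
import random

def metropolis(log_post, x0, n, step, seed=0):
    """Plain Metropolis sampler: propose Gaussian steps and accept with
    probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, samples = x0, []
    lp = log_post(x)
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        samples.append(x)
    return samples
```

Run against a toy standard-normal log-posterior, the chain's sample mean and variance converge to 0 and 1; in a history-matching setting the samples would instead be posterior realizations of the uncertain reservoir parameters, later pushed through the simulator for probabilistic HNP forecasts.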
A Surrogate-Based Approach to Waterflood Optimisation under Uncertainty
Authors: P. Ogbeiwi, K. Stephen and A. Arinkoola
Summary: The Markowitz classical theory has been applied by many researchers to the robust optimisation of petroleum engineering operations. It involves computing the mean and standard deviation of specified reservoir performance measure(s) and constructing an efficient frontier that quantifies the relationship between the optimised mean and standard deviation. However, the optimisation routine is computationally expensive, as numerous simulations are required to calculate the means and standard deviations. Moreover, to simplify the optimisation problem, many significant uncertainties are left out of the routine. Previous studies have also used a limited number of reservoir-model sample points of the uncertain variable(s) to calculate the means and standard deviations. For example, if the uncertain parameter is uniformly distributed, three equiprobable values (low, median and high) are used to represent the uncertainty. This approach leads to erroneous estimates of the means and standard deviations because the actual distribution of the uncertainty is ignored.
In this study, we apply the Markowitz classical robust optimisation routine to a validated approximation model of the cumulative oil production of a case-study reservoir to optimise oil recovery after waterflooding. Using this approach, we reduce computational costs and, for the first time, consider up to four uncertain geological variables in reservoir optimisation under uncertainty. We show that at least 100 sample points (realisations) of the uncertain geological parameters are required to obtain accurate computations of the means (reward) and standard deviations (risk), allowing adequate sampling of the distributions of the uncertain parameters. We then construct an efficient frontier of the optimal solutions for various risk-aversion factors and compare the results to those obtained from a deterministic optimisation routine.
The results indicate that considering geological uncertainties while solving the optimisation problem yields more realistic optimal solutions than the deterministic case, because the engineering control variables obtained constitute a risk-quantified strategy for the waterflooding operation.
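The Markowitz-style robust objective described above can be sketched as a mean-minus-risk trade-off evaluated over an ensemble of realisations. In the hedged sketch below, npv_model is a hypothetical toy surrogate standing in for the validated approximation model, and risk_aversion plays the role of the risk-aversion factor used to trace the efficient frontier.

```python
import statistics

def npv_model(rate, perm):
    # hypothetical toy surrogate: recovery rises with rate but is capped by permeability
    return rate * perm - 0.5 * rate ** 2

def robust_objective(control, realisations, risk_aversion):
    """Markowitz-style robust objective: mean(NPV) - lambda * std(NPV),
    evaluated over an ensemble of geological realisations."""
    npvs = [npv_model(control, theta) for theta in realisations]
    return statistics.mean(npvs) - risk_aversion * statistics.stdev(npvs)
```

Sweeping risk_aversion and re-optimising the control for each value traces out the (mean, standard deviation) pairs that form the efficient frontier; setting it to zero recovers the risk-neutral objective.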
Statistical Model and Experimental Study of Oil Viscosity Reduction and Rock Wettability Alteration Induced by Nanoparticles
Authors: M. Bagheri Vanani, S.A. Tabatabaei-Nezhad and E. Khodapanah
Summary: Recently, nanoparticles (NPs) have been introduced as a useful solution to enhanced oil recovery (EOR) challenges. One such challenge is the precipitation of the asphaltene content of oil reservoirs, which affects rock and fluid properties including oil viscosity and rock wettability. This paper first investigates the potential of silica NPs for oil viscosity reduction, which increases the mobility of the oleic phase and thereby enhances oil recovery. Next, the effect of silica NPs on the precipitation of asphaltene on sandstone rocks, which affects rock wettability, is explored. Finally, a statistical modeling study is performed using MINITAB software to investigate the effect of temperature, nanofluid concentration and oil composition on rock and oil properties. To this end, oil viscometry and contact angle measurements were conducted. The results showed that silica NPs inhibited or delayed the precipitation of asphaltene in sandstone rock and, consequently, the potential of asphaltene to shift rock wettability away from oil-wet conditions. In addition, the results demonstrated that dispersing silica NPs in the oleic phase could decrease oil viscosity by as much as 98% by cracking carbon-oxygen and carbon-carbon bonds in the hydrocarbon chains. A multiple linear regression model was also developed by statistical analysis to predict the percentage of oil viscosity reduction by NPs. The R-squared value was 98.9% and the p-value was smaller than 0.05, indicating the effective role of the oil sample, silica NP concentration and temperature on the oil viscosity reduction; F-values of 152.86, 845.4 and 91.78 were obtained for these parameters, respectively. No interaction between any pair of parameters was observed for the viscosity reduction.
The results of the modeling section were found to be applicable to forecasting oil field data. This study supports the EOR potential of NPs in oil and gas fields.
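A least-squares fit with an R-squared diagnostic, of the kind underlying the multiple linear regression reported above, can be sketched for the single-predictor case as follows (the data here are synthetic, not the study's measurements).

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = b0 + b1*x, returning (b0, b1, R-squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope from centered cross- and auto-moments
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    # R-squared = 1 - SS_res / SS_tot
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return b0, b1, 1.0 - ss_res / ss_tot
```

The multiple-predictor model in the paper extends this to three regressors (oil sample, NP concentration, temperature); the reported R-squared of 98.9% is the same 1 - SS_res/SS_tot statistic.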
How Does the Definition of the Objective Function Influence the Outcome of History Matching?
Authors: G. Eremyan, I. Matveev, G. Shishaev, V. Rukavishnikov and V. Demyanov
Summary: In this work we investigate how the form of the objective function influences the results and the speed of history matching (HM). The objective function definition depends on the production variables included in the objective and on their weighting factors. These choices may impact, for instance, the speed of assisted history matching. We demonstrate how the choice of a suitable objective function for HM should depend on the particular reservoir development problem at stake.
The work presents a comparative study of different objective function formulations used in history matching a synthetic reservoir example. An industry-standard stochastic optimization algorithm, evolution strategy, was chosen for the comparative benchmarking of the impact of the objective function choice on history matching. The synthetic model represents a waterflooding case with 3 production and 3 injection wells, 7 years of simulated history and 8 uncertain reservoir parameters. The findings of the comparative study are not limited to the particular assisted HM algorithm applied.
Processing and analysis of the experimental results confirmed that the formulation of the objective function matters, since its value is what allows the algorithm to accelerate towards better HM solutions. The study demonstrates how different objective function formulations lead to different computational costs to reach a history-matched solution; an optimal formulation for each particular problem should therefore provide the fastest convergence.
The novelty of the work lies in demonstrating how different objective function formulations can help history match a reservoir model at minimized computational cost when solving different production problems. We show that the objective function should not be defined the same way for every history matching exercise, but rather adjusted to the particular application so that the required history match is reached at minimum computational cost. This gives a better chance of history matching real, complex hydrocarbon field models within a reasonable time.
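A weighted least-squares history-matching objective of the kind compared in this work can be sketched generically as follows; the variable names, weights and measurement-error scales are hypothetical placeholders.

```python
def hm_misfit(weights, observed, simulated, sigmas):
    """Weighted least-squares history-matching misfit over production variables.
    observed/simulated: variable -> list of time-series values;
    sigmas: variable -> measurement-error scale; weights: variable -> weight."""
    total = 0.0
    for var, w in weights.items():
        total += w * sum(((o - s) / sigmas[var]) ** 2
                         for o, s in zip(observed[var], simulated[var]))
    return total
```

Changing which variables appear in weights, and with what weighting factors, changes the misfit landscape the optimizer (here, an evolution strategy) descends, which is exactly the effect on convergence speed the study quantifies.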
A Coupled Geomechanics and Flow Model for Enhanced Gas Recovery and CO2 Storage in Shale Reservoirs
Summary: A fully coupled multicomponent flow and geomechanics model, which incorporates viscous flow, Knudsen diffusion, molecular diffusion, multi-component adsorption/desorption and geomechanical effects, is developed to study enhanced gas recovery and CO2 storage in fractured shale reservoirs. Specifically, an efficient hybrid model, consisting of a single-porosity model, a multiple-porosity model and an Embedded Discrete Fracture Model (EDFM), is adopted to represent multiscale fractures. In the flow equations, the Peng-Robinson EOS, the extended Langmuir isotherm and Fick’s law are adopted. In the geomechanical portion, nonlinear proppant deformation is considered. A mixed space discretization (finite volume method for flow and stabilized XFEM for geomechanics) and a modified fixed-stress sequential implicit method are then applied to solve the proposed model. The robustness of the method is demonstrated through several numerical examples, and a comprehensive analysis of the mechanisms of enhanced gas recovery and CO2 storage in fractured shale gas reservoirs is carried out, taking into account Knudsen diffusion, molecular diffusion, multi-component adsorption/desorption, nonlinear proppant deformation, and different injection strategies including a huff-n-puff scenario. Results show that CO2 injection is an effective approach to enhancing shale gas recovery, and that the injected CO2 can be stored in free, adsorbed and dissolved states. We also find that the stimulated reservoir volume, natural/induced fractures, hydraulic fractures, the various transport/storage mechanisms and the injection strategies have significant effects on enhanced gas recovery and CO2 storage in fractured shale reservoirs.
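The extended Langmuir isotherm adopted in the flow equations has a simple closed form, q_i = V_{L,i} (p_i/P_{L,i}) / (1 + Σ_j p_j/P_{L,j}), sketched below with hypothetical parameter values.

```python
def extended_langmuir(VL, PL, partial_pressures, comp):
    """Extended Langmuir isotherm: adsorbed amount of component comp,
    q_i = VL_i * (p_i / PL_i) / (1 + sum_j p_j / PL_j).
    VL: Langmuir volumes, PL: Langmuir pressures, both keyed by component."""
    denom = 1.0 + sum(partial_pressures[j] / PL[j] for j in partial_pressures)
    return VL[comp] * (partial_pressures[comp] / PL[comp]) / denom
```

This form captures the competitive-adsorption mechanism central to CO2-enhanced gas recovery: adding CO2 partial pressure increases the denominator and thus displaces adsorbed CH4, while CO2 itself is retained in the adsorbed state.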
Deep-Learning-Based 3D Geological Parameterization and Flow Prediction for History Matching
Authors: M. Tang, Y. Liu and L. Durlofsky
Summary: In recent work we have developed deep-learning-based procedures for parameterizing complex 2D geomodels (Liu et al., 2019) and for predicting the detailed flow responses of such systems (Tang et al., 2019). The parameterization method, referred to as CNN-PCA, entails the use of principal component analysis in combination with convolutional neural networks, while the flow surrogate model involves the application of a recurrent residual U-Net procedure. The combination of these two capabilities enables efficient history matching, because the variables that must be determined during data assimilation correspond to the relatively small set of parameters associated with the CNN-PCA description, and the requisite flow simulations can all be accomplished using the deep-learning-based surrogate model. The overall methodology has been successfully applied to 2D channelized systems (as shown in Tang et al., 2019).
In this work, we extend these capabilities to 3D systems. The 3D CNN-PCA procedure differs from the 2D method in that we no longer use a style loss term (as we did in 2D), but instead apply a supervised learning approach. With this method we train the network using PCA realizations along with their corresponding (desired) channelized representations. This treatment, in common with our 2D procedure, leads to faster training than some other approaches since the underlying PCA representation already captures aspects of the spatial statistics (covariance). The 3D recurrent R-U-Net consists of 3D convolutional and recurrent (convLSTM) neural networks, which are designed to capture the spatial and temporal information associated with dynamic systems. This approach shows advantages over autoregressive procedures. The recurrent R-U-Net is trained on O(3000) randomly generated 3D geomodels and their corresponding (simulated) dynamic 3D state maps; e.g., saturation and pressure at a set of time steps.
Results are first presented for each method individually. Specifically, we validate the geological parameterization procedure by demonstrating that the prior flow statistics, for a 3D channelized system, generated using CNN-PCA agree closely with those from (reference) geostatistical models. The recurrent R-U-Net surrogate flow model is validated through detailed comparisons of oil-water flow results for particular (new) realizations and through error statistics for an ensemble of new models. Finally, a 3D history matching example, in which the two procedures are used in combination, will be presented.
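The PCA building block of CNN-PCA can be illustrated in closed form for two dimensions: diagonalize the sample covariance of an ensemble, then generate new realizations as m = mean + ξ √λ₁ v₁. The sketch below is a generic toy illustration, not the authors' high-dimensional implementation (which couples the PCA representation with a convolutional network).

```python
import math

def pca_2d(points):
    """PCA of 2-D points: returns the mean, the eigenvalues, and the leading
    unit eigenvector of the sample covariance (closed form for the 2x2 case)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    if abs(sxy) > 1e-12:
        v1 = (l1 - syy, sxy)  # eigenvector direction for the larger eigenvalue
    else:
        v1 = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(v1[0], v1[1])
    v1 = (v1[0] / norm, v1[1] / norm)
    return (mx, my), (l1, l2), v1

def new_realization(mean, eigvals, v1, xi):
    """Low-dimensional parameterization: m = mean + xi * sqrt(lambda_1) * v1."""
    s = xi * math.sqrt(eigvals[0])
    return (mean[0] + s * v1[0], mean[1] + s * v1[1])
```

During history matching, only the low-dimensional coefficients (here the single xi) are updated, which is what keeps the data-assimilation parameter set small; CNN-PCA then post-processes such PCA realizations so they honor channelized spatial statistics.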