ECMOR 2022
- Conference date: September 5-7, 2022
- Location: The Hague, Netherlands / Online
- Published: 05 September 2022
Showing results 1-20 of 86
Numerical Investigation of Phase Behaviors and Condensate Blockage in a Fractured Tight Gas Condensate Reservoir
Summary: Almost half of Western Canada's natural-gas production comes from the Triassic-aged Montney formation, which is rich in condensate, so its efficient development is of considerable importance. The objective of this study is to investigate phase behavior and condensate blockage in the liquid-rich Montney formation. The development potential of the formation and the timing of produced-gas reinjection are also discussed in detail.
In this study, a compositional field-scale numerical model is constructed to simulate condensate distribution and phase behavior during long-term production and produced-gas injection in a fractured tight gas condensate reservoir. Two different gas condensate fluids were obtained from the liquid-rich Montney formation: Fluid A is rich in condensate and has a dew point pressure close to the formation pressure, whereas Fluid B is lean in condensate and has a dew point pressure well below the formation pressure. The two fluids are compared in terms of the location and timing of condensate accumulation based on compositional, field-scale modelling.
This study concludes that the gas-oil ratio of a gas well is characterized by two stable stages over production time, and that the rise in gas-oil ratio between them marks the period when condensate appears and accumulates in the formation. Condensate blockage occurs mainly in the matrix near the fracture, and the location and timing of condensate accumulation are governed by the condensate content and the dew point pressure. For Fluid A, condensate blockage develops quickly and extends over a large region (within tens of meters of the fracture), reducing the well's deliverability. Early reinjection of produced gas can significantly relieve the damage from condensate blockage and improve condensate and gas recovery. For Fluid B, in contrast, reinjecting produced gas may not be necessary once the economic cost is considered.
This work provides a better understanding of retrograde condensation and condensate blockage in tight gas condensate reservoirs and offers a reference for evaluating the timing of produced-gas reinjection.
A Novel Strategy for Recovery Efficiency Forecast in Tight Oil by Combining XGBoost and SVR
Summary: With the development of horizontal-well and volume-fracturing technologies, and with conventional oil reserves depleting, the vast tight oil resources discovered worldwide play an important role in energy supply. However, recovery efficiency is challenging to forecast, and traditional knowledge and methodologies are not well suited to tight oil reservoirs.
To better forecast recovery efficiency and exploit tight oil reservoirs at lower expense, this paper proposes a variable-weight combination model, XGBoost-SVR, based on data mining. It is obtained by combining eXtreme Gradient Boosting (XGBoost) and Support Vector Regression (SVR) through residual analysis. The model establishes the relationship between recovery efficiency and essential geological, formation and engineering parameters via machine learning, as sketched below.
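As one plausible (hypothetical) reading of the residual-analysis combination step, the Python sketch below weights the XGBoost and SVR base learners by the inverse variance of their residuals on a validation split; the features, data and weighting details are illustrative, not the paper's exact scheme.

```python
# Hypothetical variable-weight XGBoost + SVR combination on synthetic data;
# each model is weighted by the inverse variance of its validation residuals.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))           # stand-ins for geological/engineering features
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

xgb_model = XGBRegressor(n_estimators=200, max_depth=4).fit(X_tr, y_tr)
svr_model = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)

# Residual analysis: inverse-variance weights, normalized to sum to one.
residuals = [y_val - m.predict(X_val) for m in (xgb_model, svr_model)]
w = np.array([1.0 / np.var(r) for r in residuals])
w /= w.sum()

def combined_predict(X_new):
    """Variable-weight combination of the two base learners."""
    return w[0] * xgb_model.predict(X_new) + w[1] * svr_model.predict(X_new)
```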
The predictive results show that the fracture cluster number, horizontal length, sand intensity and permeability play a significant role in recovery efficiency, whereas well spacing, liquid intensity and reservoir temperature have only a weak effect. The prediction accuracy of the combined XGBoost-SVR model reaches 94.63%, higher than that of the single XGBoost and SVR models. The XGBoost-SVR predictive model can therefore be used as a feasible tool for economic evaluation.
The methodology, predictive model and results demonstrated in this paper are of direct value to tight oil development. The study gives insight into the production mechanisms of tight oil reservoirs from a big-data-mining perspective, together with a feasible and accurate method to forecast recovery efficiency. The methodology and model established here can easily be applied to other unconventional oil reservoirs.
3D Facies Modelling of Tuban Formation, North East Java Basin
Authors: M.R. Luthfan, A. Haris, D. Hernadi and R.M. Zainal
Summary: Based on the BP Statistical Review of World Energy 2021, Indonesia's proven oil reserves at the end of 2020 were still around 2.4 thousand million barrels. Indonesia's average daily oil production is 743 thousand barrels, while its average daily oil demand is 1,449 thousand barrels; the country's oil and gas reserves therefore do not meet its energy needs. It is thus necessary to optimize the utilization of productive marginal oil and gas fields, here by building a 3D facies model as a reference for increasing production in S Field.
A 3D facies model is a computational depiction of the earth's crust based on petrographic analysis, electrofacies analysis, well correlation, facies association analysis, structural modelling and variogram analysis. The aim is to determine the platform type, lithofacies, depositional environment and facies distribution in S Field. Geologically, S Field is an oil field located in the Tuban Formation, North East Java Basin. The Tuban Formation is a carbonate build-up that has grown since the Early Miocene and was deposited as an isolated platform. Based on petrographic analysis, the Tuban Formation comprises three lithofacies (Skeletal Grainstone, Skeletal Packstone and Skeletal Wackestone), which are physiographically associated with the Fore Reef and Inter Reef (Enos and Moore, 1983).
The 3D carbonate facies model was built using Truncated Gaussian Simulation (TGS) with a trend. TGS is a stochastic facies modelling method suited to reservoir units or facies that follow a natural sequence (Matheron et al., 1987); the sequence can be a reservoir-quality or a stratigraphic sequence, vertical or lateral. A minimal illustration of the truncation idea follows.
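For orientation only, this Python sketch obtains an ordered facies sequence by thresholding a correlated Gaussian field; the correlation model (a Gaussian filter) and the thresholds are arbitrary stand-ins for the study's fitted variogram and facies proportions.

```python
# Minimal truncated Gaussian simulation: threshold a spatially
# correlated standard Gaussian field into an ordered facies sequence.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
z = gaussian_filter(rng.normal(size=(128, 128)), sigma=8)  # correlated field
z = (z - z.mean()) / z.std()                               # standardize

# Ordered truncation, e.g. Wackestone < Packstone < Grainstone; thresholds
# would normally be chosen to honor target facies proportions.
thresholds = [-0.4, 0.6]
facies = np.digitize(z, thresholds)                        # codes 0, 1, 2
```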
The resulting 3D facies model shows that the Fore Reef facies association lies in the middle as a shelf elongated in the west-east direction, dominated by the Skeletal Grainstone and Skeletal Packstone that form the carbonate peaks of S Field, while the Inter Reef facies association is the product of debris or detritus shed from the Fore Reef into the slope transition area.
A Machine-Learning-Accelerated Distributed LBFGS Method for Field Development Optimization: Algorithm, Validation, and Applications
Summary: We have developed a support vector regression (SVR) accelerated variant of the distributed derivative-free optimization (DFO) method using the limited-memory BFGS Hessian updating formulation (LBFGS) for subsurface field-development optimization problems. The SVR-enhanced distributed LBFGS (D-LBFGS) optimizer is designed to effectively locate multiple local optima of highly nonlinear optimization problems subject to numerical noise, and it operates on both single- and multi-objective field-development optimization problems. The basic D-LBFGS DFO optimizer runs multiple optimization threads in parallel and uses linear interpolation to approximate the sensitivity matrix of simulated responses with respect to the optimized model parameters; however, this approach is less accurate and slows convergence. In this paper, we implement an effective variant of the SVR method, namely ε-SVR, and integrate it into the D-LBFGS engine in synchronous mode within the framework of a versatile optimization library inside a next-generation reservoir simulation platform. Because ε-SVR has a closed-form predictive formulation, we analytically calculate the approximated objective function and its gradients with respect to the input model variables subject to optimization, as sketched below. We investigate two different methods of proposing a new search point for each optimization thread in each iteration through seamless integration of SVR with the D-LBFGS optimizer. The first method estimates the sensitivity matrix and the gradients directly from the analytical ε-SVR surrogate and then solves an LBFGS trust-region subproblem (TRS). The second method applies a trust-region-search LBFGS method to optimize the approximated objective function given by the analytical ε-SVR surrogate within a box-shaped trust region.
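To illustrate only the closed-form gradient ingredient named above, the sketch below fits scikit-learn's SVR (RBF kernel) to synthetic data and evaluates the analytic gradient of the surrogate f(x) = Σ a_i K(x, x_i) + b; the D-LBFGS integration itself is not reproduced.

```python
# Analytic gradient of an RBF eps-SVR surrogate: with K = exp(-g*||x-xi||^2),
# dK/dx = -2*g*(x - xi)*K, so grad f(x) = sum_i a_i * dK_i/dx.
import numpy as np
from sklearn.svm import SVR

gamma = 0.5
rng = np.random.default_rng(2)
X = rng.uniform(-2.0, 2.0, size=(200, 3))
y = np.sin(X).sum(axis=1) + 0.01 * rng.normal(size=200)

svr = SVR(kernel="rbf", gamma=gamma, epsilon=0.01).fit(X, y)

def surrogate_gradient(x):
    """Closed-form gradient of the eps-SVR predictor at point x."""
    d = x - svr.support_vectors_                  # (n_sv, dim)
    k = np.exp(-gamma * (d ** 2).sum(axis=1))     # RBF kernel values
    a = svr.dual_coef_.ravel()                    # dual coefficients
    return (a * k) @ (-2.0 * gamma * d)           # sum_i a_i * dK_i/dx
```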
We first show that ε-SVR provides accurate estimates of gradient vectors on a set of nonlinear analytical test problems. We then report the results of numerical experiments conducted with the newly proposed SVR-enhanced D-LBFGS algorithms on both synthetic and realistic field-development optimization problems. We demonstrate that these algorithms operate effectively on realistic nonlinear optimization problems subject to numerical noise, and that both SVR-enhanced D-LBFGS variants converge faster, providing a significant acceleration over the basic implementation of D-LBFGS with linear interpolation.
Solving Gauss-Newton Trust Region Subproblem with Bound Constraints
Summary: In realistic field history-matching problems, the uncertain parameters are subject to upper and lower bounds that must be satisfied. Violating the bounds (e.g., using a negative porosity or permeability in a grid block) may produce unphysical solutions or cause simulations to fail. A Gauss-Newton (GN) optimizer using a trust-region (TR) search method performs more efficiently and robustly than one using a line search. The GN trust-region search optimizer requires solving a trust-region subproblem (GNTRS) iteratively: given the gradient and Hessian evaluated at the current best solution, the objective function is approximated by a quadratic model of the search step, and the global minimum of this quadratic model within a ball-shaped trust region, i.e., the solution of the GNTRS, is used as the new search step for the next iteration. However, available methods for solving a GNTRS cannot correctly handle bound constraints.
This paper introduces an iterative dimension-reduction procedure to solve the GNTRS with bound constraints, which involves three steps. First, an unconstrained GNTRS with n variables is solved, and the solution is accepted if no bound is violated. Otherwise, at least one bound is violated, and the dimension of the problem is reduced to m by activating one or more violated bounds according to the Karush-Kuhn-Tucker (KKT) conditions. Second, the gradient, Hessian and trust-region size are updated accordingly in the reduced subspace. Third, an unconstrained GNTRS with m variables is solved in the reduced subspace. The last two steps are repeated until no bound is violated; a compact sketch of this outer loop follows. To achieve better performance, we devised several algorithms to update the gradient, sensitivity matrix and Hessian in the reduced subspace, adapted to the problem type: (1) using the full Hessian expression to solve the GNTRS directly for problems with more observed data; (2) applying the matrix inversion lemma for problems with a regularization term and fewer observed data; and (3) applying a linear transformation approach for problems without a regularization term and with fewer observed data.
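A simplified Python sketch of this outer loop for a small dense problem is given below; the elementary bisection TRS solver ignores the so-called hard case, and the reduced-gradient update is one straightforward choice, not the authors' production algorithms.

```python
# Solve the TRS, activate violated bounds (KKT), re-solve in the reduced
# subspace, and repeat until the step is feasible. Illustrative only.
import numpy as np

def solve_trs(g, H, delta):
    """Minimize g's + 0.5 s'Hs subject to ||s|| <= delta (dense, small n)."""
    w, V = np.linalg.eigh(H)
    if w.min() > 0.0:                              # try the interior Newton step
        s = -np.linalg.solve(H, g)
        if np.linalg.norm(s) <= delta:
            return s
    gt = V.T @ g
    lam_lo = max(0.0, -w.min())
    lam_hi = lam_lo + np.linalg.norm(g) / delta + 1.0
    for _ in range(200):                           # bisection on the multiplier
        lam = 0.5 * (lam_lo + lam_hi)
        s = V @ (-gt / (w + lam))
        lam_lo, lam_hi = (lam, lam_hi) if np.linalg.norm(s) > delta else (lam_lo, lam)
    return s

def gntrs_with_bounds(g, H, delta, x, lb, ub):
    """Iterative dimension reduction for the bound-constrained GNTRS."""
    n = len(g)
    s, free = np.zeros(n), np.ones(n, dtype=bool)
    while free.any():
        fixed = ~free
        radius = np.sqrt(max(delta ** 2 - s[fixed] @ s[fixed], 0.0))
        g_red = g[free] + H[np.ix_(free, fixed)] @ s[fixed]    # reduced gradient
        s_f = solve_trs(g_red, H[np.ix_(free, free)], radius)
        trial = x[free] + s_f
        violated = (trial < lb[free]) | (trial > ub[free])
        idx = np.where(free)[0]
        s[idx] = np.clip(trial, lb[free], ub[free]) - x[free]  # clip onto bounds
        if not violated.any():
            break
        free[idx[violated]] = False                # activate violated bounds
    return s
```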
The proposed GNTRS solver is first validated on synthetic problems with known solutions and then tested on a suite of realistic field history-matching problems. Our numerical tests confirm that the newly proposed GNTRS solver outperforms other methods of handling bound constraints: in our testing, the new solver finds the correct solutions in all cases, with the least CPU time, while the other methods fail on some test problems.
A New Post-Fracture Production Profile Forecasting Model Integrating Physics Into Deep Learning
Summary: In the past decade, machine learning and deep learning have become popular tools for well production forecasting, since these black-box approaches can bypass an incomplete understanding of the physics while obtaining satisfactory predictions given a considerable amount of data. However, due to their large data requirements, their inability to produce physically consistent results, and their lack of generalizability to out-of-sample scenarios, pure machine learning approaches cannot meet the requirements for predictive performance and generalization in complex production forecasting problems.
This is especially true in tight oil and shale gas fields, where the post-fracture production mechanisms of oil and gas wells are challenging to characterize because of the application of horizontal drilling and hydraulic fracturing. Typical deep learning models are scenario-specific; they may fail to capture the relationships behind post-fracture production directly from limited observation data, and consequently fail to generalize to scenarios not encountered in the training data.
In this work, we propose a new workflow for post-fracture production profile forecasting that constrains the predicted production profiles within known physical laws, for more robust results even beyond the training set. To this end, we use a state-of-the-art deep learning method, Physics-Informed Long Short-Term Memory (PI-LSTM) networks, in which models are trained with additional production-physics constraints from decline curve analysis incorporated into the loss function.
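One plausible form of such a loss is sketched below: a soft Arps hyperbolic-decline penalty added to the data misfit, with lam balancing data fit against physics consistency. The paper's exact constraint pattern may differ.

```python
# Physics-constrained loss: data misfit plus an Arps hyperbolic decline
# penalty q(t) = qi / (1 + b*Di*t)^(1/b). An assumed form, for illustration.
import torch

def pi_lstm_loss(q_pred, q_obs, t, qi, Di, b, lam=0.1):
    """Data-misfit term plus a soft decline-curve-analysis penalty."""
    data_term = torch.mean((q_pred - q_obs) ** 2)
    q_dca = qi / (1.0 + b * Di * t) ** (1.0 / b)       # decline-curve prior
    physics_term = torch.mean((q_pred - q_dca) ** 2)   # soft DCA constraint
    return data_term + lam * physics_term
```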
Our goal is to impose appropriate physics constraints on the LSTM networks and to explore the mathematical patterns of such constraints. In this way, we exploit the complementary strengths of machine learning and the corresponding physical equations to obtain more reliable and robust predictions. Furthermore, this grey-box approach may provide additional insight into the production behavior of fractured wells that cannot be described by simplified physics or empirical equations alone.
We conduct comparison experiments on two cases to assess the forecasting ability of the proposed PI-LSTM approach against conventional decline curve analysis methods and deep learning models (i.e., RNN and LSTM). The results show that the PI-LSTM model has the smallest prediction errors in both the simulated case and the actual field case.
System-Theoretic Ensemble Generation in Ensemble-Based History Matching
Authors: T. Diaa-eldeen, C.F. Berg and M. Hovd
Summary: Reservoir model updating is an essential component of closed-loop reservoir management and model-based production optimization. Ensemble-based methods, such as the ensemble Kalman filter (EnKF) and the ensemble smoother (ES), have been widely used as feasible alternatives that extend standard Kalman filtering techniques to such high-dimensional systems with inherent nonlinearities. However, the performance of ensemble-based algorithms depends strongly on the number and distribution of the initial samples. In the case of linear dynamics, for instance, the analysis state vector is sought in the subspace spanned by the initial state vectors; ensemble initiation is therefore essential to the performance of ensemble-based data assimilation approaches.
In this paper, a system-theoretic method based on the observability characteristics of the underlying system is introduced to generate the initial ensemble realizations in ensemble-based history matching. First, a generic approach using algorithmic differentiation is derived to obtain the linearized model of the reservoir with respect to both the dynamic (state) and static (parameter) variables directly from the numerical simulator. The system's observability is then analyzed, and ensemble-based history matching is initiated by perturbing an initial guess in the directions of the highly observable vectors in the reservoir, instead of the traditional random perturbation. This additionally guarantees the orthogonality of the generated perturbations and consequently reduces redundancy within the realizations.
The statistical properties of the generated ensemble are analyzed, and the overall performance of the algorithm is assessed in a history-matching twin experiment on a two-phase synthetic reservoir, compared against the random sampling strategy. The ensemble randomized maximum likelihood (EnRML) algorithm is used as the assimilation algorithm in this study; however, the method is also applicable to other ensemble-based assimilation algorithms. Numerical experiments show promising results for the proposed observability-based sampling strategy over random sampling in terms of the prediction errors when estimating the directional permeability field of subsurface porous media from noisy, sparse production data.
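A toy version of this initialization idea for a linear system x_{k+1} = A x_k, y_k = C x_k is sketched below: the initial guess is perturbed along the leading right singular vectors of the observability matrix, which are orthogonal by construction. The paper obtains the linearization from the simulator by algorithmic differentiation; here A and C are arbitrary inputs.

```python
# Observability-based ensemble initialization for a linear system;
# assumes Ne does not exceed the number of singular values of O.
import numpy as np

def observability_ensemble(A, C, x0, Ne, horizon, scale=0.1):
    n = A.shape[0]
    blocks, Ak = [], np.eye(n)
    for _ in range(horizon):                 # stack C, CA, CA^2, ...
        blocks.append(C @ Ak)
        Ak = A @ Ak
    O = np.vstack(blocks)                    # observability matrix
    _, s, Vt = np.linalg.svd(O)
    V = Vt[:Ne].T                            # most-observable directions
    return x0[:, None] + scale * V * s[:Ne]  # (n, Ne) initial ensemble
```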
Asphaltene Formation Damage Modelling for a Low Permeability HP/HT Oil Reservoir Offshore Denmark
Author: Q. Dang
Summary: The Ravn oilfield is located in licences 5/06 and 2/16 in the Central Graben of the Danish offshore. It is a low-permeability Upper Jurassic HP/HT oil reservoir developed with long horizontal wells completed with multiple hydraulic fractures. The Ravn oil is a light oil, but it contains asphaltene: SARA analysis of an oil sample from one of the exploration wells showed 3.3 wt% asphaltene. Asphaltene inhibitor was injected downhole to mitigate potential asphaltene issues in the well tubing and surface facilities. Aquifer support was deemed very weak due to the low reservoir permeability and reservoir compartmentalization, and pressure maintenance, i.e., water injection, was considered not viable for the same reasons. When reservoir pressure drops below the Asphaltene Onset Pressure (AOP), asphaltene precipitates and is likely to plug the formation, especially in a low-permeability sand with small pore throats. The laboratory AOP measurements were deemed suspect because the oil samples from the production well were unrepresentative; instead, the AOP was estimated by modelling after the field was brought into production, with the help of surface oil-sampling data and the production well's pressure response.
To simulate the asphaltene formation damage that occurred in the reservoir, a characterized fluid was developed in the PVT modelling software PVTsim. The Equation of State (EOS) parameters were calibrated against the PVT experiments performed on the oil sample from one of the exploration wells, and the AOP and asphaltene solubility as a function of pressure were modelled with the characterized fluid model in PVTsim. The characterized fluid model was then exported to the dynamic reservoir modelling software tNavigator, which uses the relevant asphaltene formation-damage keywords together with the exported compositional model to simulate asphaltene precipitation, deposition and reservoir permeability impairment. Single-parameter analysis showed that the flocculation rate and the table of permeability reduction versus asphaltene deposit saturation were the most sensitive parameters for achieving a good history match.
So far there has been insufficient production data to pinpoint the location of the asphaltene formation damage, so two alternative scenarios were simulated: (1) asphaltene formation damage occurs in both the reservoir and the hydraulic fractures; (2) asphaltene formation damage occurs only in the reservoir. Dynamic models based on the two scenarios both match the well production history quite well; however, their production forecasts differ and indicate a range of uncertainty.
Optimization Workflow Using Deep Learning Based Forward Models for Waterflooded Oil Reservoirs
Authors: G. Di Federico, G. Fighera, E. Vignati, A. Shokry, E. Zio and E. Abbate
Summary: In specific reservoir engineering problems, such as medium-term oil production forecasting in waterflooded reservoirs, data-driven models are attractive alternatives to complex numerical simulations, as they can speed up decision making without compromising accuracy.
In this work, an optimization framework using deep neural networks (DNNs) as surrogate models was established to optimize the waterflooding strategy in two synthetic cases of distinct geological complexity. Although DNNs have been tested for production forecasting in the literature, the novelty of this work is the application of DNNs to optimize the injection schedule of brown fields and the comparison against a commercially available solution.
Three families of optimization algorithms were considered: gradient-free, gradient-based and ensemble-based. Their results are compared against a commercial simulator-based package that performs streamline-based optimization with a given water-availability target. The benchmark is run on the "true" geological model.
The first case is a 2D reservoir with 5 injection wells and 4 production wells; it has uniform geological properties, with two high-permeability streaks connecting two injector-producer pairs. The second case is Olympus, a realistic 3D reservoir with many geological heterogeneities and sources of non-linearity, with 7 injection wells and 11 production wells. For each model, a DNN was trained on synthetically generated historical data.
Results are compared in terms of Net Present Value (NPV), considering the oil price, the cost of water produced and injected, and the discount rate, as in the sketch below.
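A minimal version of such an NPV calculation is sketched here; the prices, costs and discount rate are placeholders, not the values used in the study.

```python
# Discounted NPV of an injection/production schedule from per-step rates
# (volume/year) and step lengths (years). Placeholder economics.
import numpy as np

def npv(q_oil, q_wat_prod, q_wat_inj, dt_years,
        oil_price=60.0, water_prod_cost=5.0, water_inj_cost=3.0, rate=0.08):
    t = np.cumsum(dt_years)                   # time at the end of each step
    cash = (oil_price * q_oil
            - water_prod_cost * q_wat_prod
            - water_inj_cost * q_wat_inj)     # cash-flow rate per step
    return float(np.sum(cash * dt_years / (1.0 + rate) ** t))
```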
In the first case, the three algorithms improved the NPV similarly by reducing injection along the high-permeability streaks, thus promoting the drainage of unswept areas; the benchmark performed poorly, instead promoting injection into the high-permeability streaks. In the second case, the three algorithms and the benchmark achieved similar NPVs with slightly different injection strategies.
The ensemble-based optimizer proved to be the best-performing algorithm, compared with the gradient-free optimizer, which required a larger number of objective-function evaluations, and the gradient-based optimizer, which tended to get trapped in local optima.
The presented framework proved successful in optimizing the waterflooding strategy even in a complex geological setting. Compared with simulator-based optimization, the main benefit of the proposed methodology lies in its reduced computational time, in both model calibration and objective-function evaluation. The time saving is especially significant when a tuned 3D model of the reservoir is unavailable or too expensive to build.
Modeling of Non-Newtonian Polymer Flooding with Adsorption and Retention Using Parametrization Approach
Summary: Polymer flooding is an efficient EOR technology that overcomes the non-uniform and unstable displacement caused by water injection. Polymer flooding in reservoirs is a complicated process involving strongly nonlinear physics, e.g., non-Newtonian rheology in porous media with retention and adsorption. In the presence of multi-scale heterogeneity, high-fidelity simulations are usually required to capture such nonlinear behavior, which is time consuming with conventional reservoir modelling.
In this study, we extend an advanced linearization strategy, the Operator-Based Linearization (OBL) approach, to simulate non-Newtonian polymer flooding with retention and adsorption mechanisms using a fully implicit method. A velocity-dependent viscosity multiplier complements the operator form of the governing equations to represent the non-Newtonian rheology of the high-molecular-weight polymer, and polymer retention, which reduces the porosity, is represented by a Langmuir-type adsorption model (see the constitutive sketches below). Several simplified models were used to validate the developed numerical framework; the numerical results agree well with both analytical solutions and coreflood experimental data, with only negligible discrepancies.
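Illustrative forms of these two constitutive pieces are sketched below; the coefficients are placeholders rather than the paper's calibrated values, and only the shear-thinning branch of the rheology is shown.

```python
# Langmuir-type adsorption and a velocity-dependent viscosity multiplier;
# illustrative functional forms with made-up coefficients.
import numpy as np

def langmuir_adsorption(c, a=0.8, b=50.0):
    """Adsorbed polymer per unit rock as a Langmuir isotherm of the
    flowing polymer concentration c."""
    return a * b * c / (1.0 + b * c)

def carreau_multiplier(u, mu0=20.0, mu_inf=1.0, lam=10.0, n=0.5):
    """Carreau-type shear-thinning multiplier in the Darcy velocity
    magnitude u; well defined as u -> 0."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * np.abs(u)) ** 2) ** ((n - 1.0) / 2.0)
```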
A highly resolved near-well model is used to test the performance of polymer flooding under realistic reservoir conditions. Both shear-thinning and shear-thickening regimes, depending on the injection velocity and polymer concentration, are observed in the near-wellbore zone. The injected polymer concentration and the brine salinity significantly affect the shear viscosity and, consequently, the polymer injectivity. Polymer retention and adsorption have a substantial effect on the rate of polymer propagation through the porous media. Overall, polymer flooding shows its advantages in mitigating water fingering in field-scale operations and improving the ultimate sweep of the reservoir; injectivity, however, is an essential factor in its performance. The computational efficiency of the proposed model allows us to optimize the parameters of polymer flooding in realistic reservoir and operational settings.
Implementation of Soreide and Whitson EoS in a GPU-based Reservoir Simulator
Authors: P. Panfili, L. Patacchini, A. Ferrari, T. Garipov, K. Esler and A. Cominelli
Summary: Reservoir simulation is traditionally based on the assumption that water is an inert phase, while the hydrocarbon components split into oil and gas phases. This is usually reasonable when modeling conventional hydrocarbon recovery, but specific applications may require accounting for mass exchange between the water and hydrocarbon phases.
Here we present the extension of our Graphics Processing Unit (GPU) compositional reservoir simulator (Esler et al., 2021) to support gas-water equilibrium. Specifically, the Søreide and Whitson equation of state (EOS) (Søreide and Whitson, 1992) was implemented to compute the mutual solubilities of hydrocarbon/brine mixtures. The impact of salinity on phase equilibrium is accounted for, with salt treated as an active tracer. The simulator uses a mass-variables formulation, so few modifications to the construction of the transport equations and the Jacobian assembly were required; most of the code changes are localized in the EOS module, in the computation of component fugacities and of phase properties such as partial molar fractions and partial molar volumes.
Treating salt as an active tracer rather than as an additional pseudo-component has an important advantage with the Søreide and Whitson EOS. When salinity changes, as in water-vaporization processes, our choice ensures that the flash iterations can still be cast as a Gibbs minimization problem with salt as a constant parameter; otherwise, the salinity would change as the flash iterations progress, jeopardizing the thermodynamic consistency of the phase equilibria. The overall reservoir simulation system of equations remains accurate to first order in time, at the cost of possibly slight volume imbalances at the end of converged timesteps. A toy flash with salinity held fixed is sketched below.
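As an illustration of this point only: with salinity held fixed during the flash, equilibrium reduces to a standard Rachford-Rice solve in which the K-values depend parametrically on salinity. The K-values and their salinity dependence below are made up, not the Søreide and Whitson correlations.

```python
# Two-phase Rachford-Rice flash with salinity as a constant parameter.
import numpy as np
from scipy.optimize import brentq

def rachford_rice(z, K):
    """Solve for the vapor fraction V in (0, 1) given feed z and K-values."""
    f = lambda V: np.sum(z * (K - 1.0) / (1.0 + V * (K - 1.0)))
    V = brentq(f, 1e-12, 1.0 - 1e-12)
    x = z / (1.0 + V * (K - 1.0))          # liquid mole fractions
    return V, x, K * x                     # vapor fraction, liquid, vapor

salinity = 0.05                            # held constant during iterations
K = np.array([8.0, 1.5, 1e-3]) * (1.0 + 2.0 * salinity)  # illustrative only
V, x, y = rachford_rice(np.array([0.7, 0.2, 0.1]), K)
```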
The accuracy of the implementation with respect to conventional CPU-based ones is demonstrated using a wide range of problems in which hydrocarbon-water mass exchange plays an important role in the physics of the recovery/storage process. In particular, we focus on CO2 sequestration in saline aquifers, where solubility trapping is a key mechanism.
A key conclusion of this work is that the high performance of GPU-based reservoir simulation transfers naturally to new fields of study, which is critical when modeling saline aquifers whose extent is an order of magnitude larger than that of typical oil and gas fields.
A Workflow for High-Resolution Reservoir Characterization using Multiple 4D Seismic Datasets
Authors: M. Maleki, D. José Schiozer, A. Davolio and J. Lopez
Summary: The uncertainties and risks related to the complex environments of deepwater oil and gas reservoirs require interdisciplinary workflows and advanced monitoring techniques to track reservoir performance during production. An example of the latter is a Permanent Reservoir Monitoring (PRM) system, which provides 4D seismic data on demand. A challenge of this approach to reservoir monitoring is the rapid assimilation of frequently acquired, high-quality 4D seismic data into fast-track information: delays and limitations in the interpretation and integration of 4D seismic data reduce its ability to influence decision-making, despite the data being available on demand. This work presents an integrated workflow of 4D seismic and reservoir engineering data to harvest hidden dynamic reservoir insights over short periods of time. This information provides novel inputs for updating reservoir models and improving model-based reservoir management, and the workflow illustrates the importance of multiple 4D seismic datasets in the field management strategy. The study was carried out in a Brazilian deepwater turbidite field, where a PRM system recorded a baseline survey at the start of production in 2013 and 5 monitor surveys up to 2020. We obtain an enhanced ability to interpret dynamic reservoir behavior, such as identifying regions of oil possibly pushed by injectors and revealing a hidden channel of aquifer movement.
Analysis of Compositional Models for Laboratory Scale In-Situ Combustion Simulation
Authors: M. Cremon and M. Gerritsen
Summary: We study compositional, thermal and reactive flow in porous media and the numerical simulation of those processes, with a focus on laboratory-scale In-Situ Combustion (ISC) cases. We discuss the governing equations and models, our numerical framework and its implementation, and a two-step verification process using state-of-the-art industrial codes together with a grid-convergence study. First, numerical verification against two industrial codes and internal grid-convergence studies show that our framework is robust and accurate on complex thermal, compositional and reactive cases. We then investigate the appropriateness and performance (in terms of both runtime and nonlinear iterations) of some of the models typically used for compositional, isothermal simulation, and show that in the presence of strong thermal effects one must verify that those models capture the relevant physics. More specifically, we discuss our findings on compositional and thermal models for heat capacity, enthalpy and phase behavior in the context of laboratory-scale in-situ combustion. We show that a free-water flash captures the correct behavior for our low-pressure, high-temperature conditions, even in a K-value form; its computational cost is three times lower than a full flash and two times lower than a regular K-value flash. Using a temperature-independent heat-capacity model, as is often done in the thermal literature, leads to difficulty in capturing ignition because the heat capacity is overestimated at low temperature. Finally, we discuss the different options for computing enthalpy in a compositional, thermal setting and show some possible unphysical behavior.
Rapid Permeability Upscaling using Convolutional Neural Networks
Authors: M. Sayyafzadeh and D. Guérillot
Summary: Calculating effective permeability entails considerable computation, even with local upscaling techniques. This study proposes a convolutional neural network architecture that estimates effective permeability. The network's input is the permeability maps of the layers of the fine-scale model that are to be upscaled into a single layer with horizontally coarsened cells; each layer of the fine-scale permeability map is treated as a channel of a high-resolution image. The output is a 3-channel lower-resolution image in which each channel represents the upscaled permeability map in one principal direction. The proposed architecture is simple and robust, consisting of two 2D convolutional hidden layers with small kernels and a 2D MaxPooling layer; a minimal analogue is sketched below.
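The PyTorch sketch below mirrors the stated layout (two small-kernel 2D convolutions plus a 2D MaxPooling layer); the channel counts, kernel sizes and layer ordering are assumptions beyond what the summary specifies.

```python
# Multi-channel fine-scale permeability maps in, 3-channel coarse maps out.
import torch
import torch.nn as nn

class UpscalingCNN(nn.Module):
    def __init__(self, n_fine_layers: int, coarsen: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_fine_layers, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),  # kx, ky, kz channels
            nn.ReLU(),
            nn.MaxPool2d(coarsen),                       # horizontal coarsening
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_fine_layers, H, W) fine-scale permeability maps
        return self.net(x)

coarse = UpscalingCNN(n_fine_layers=5)(torch.rand(2, 5, 64, 64))  # -> (2, 3, 16, 16)
```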
The network was tested using two different datasets: (1) a binary-facies fluvial model and (2) a continuous Gaussian model. For each dataset, 500 geological realisations were created and upscaled using a pressure solver with periodic boundary conditions; 50% of the realisations were used for training and the remainder for testing. The network captured the nonlinear behaviour very well, with no signs of overfitting. The results were not only visually acceptable, but the mean of the L2 norms was also negligible (below 0.05 mD for the first dataset and below 0.01 mD for the second). Two-phase simulation results further verified the accuracy of the estimated permeabilities. The proposed method can reduce the upscaling computation significantly: the training time was considerably less than the computation time needed for upscaling with the pressure-solver method. The approach can be a step towards more computationally efficient extended-local and quasi-global permeability upscaling methods.
Simulating Unsteady CO2 Flow through Brine-Saturated Cross-Bedded Sandstones: Towards Relative Permeability Curves for Unstable Displacement
Authors: A. Youssef, A. Tertois, Q. Shao and S. Matthai
Summary: At geo-storage conditions, carbon dioxide is a buoyant, supercritical, low-viscosity fluid with a mobility ratio < 1, highly prone to unstable displacement.
The best candidate storage formations consist of highly permeable and porous fluvial to intertidal and deltaic siliciclastics with prominent bedforms and laminations. The impact of these heterogeneities on multiphase flow and trapping has already gained considerable attention (e.g., Pickup & Sorbie 1996; Trevisan et al. 2017; Ringrose and Bentley 2021). The USGS (Rubin & Carter 2005) created a software tool that permits realistic geometric modelling of cross-bedding. Its output quadrilateral surfaces are converted into a boundary representation (BREP) with a water-tight subdivision into distinct rock types contributing to representative elementary volumes. The resulting heterogeneous sandstone models are periodic and are meshed with tetrahedra, providing the flow grids for the present analysis, which aims to determine dynamic relative permeability with a full-physics approach accounting for viscous, gravitational and capillary forces.
To accurately model the complex flow dynamics at coarse-fine interfaces, extra degrees of freedom (discontinuities) are introduced into the flow models, employing the new nonlinear interface transfer algorithm of Tran et al. (2020). This resolves the formation of dynamic capillary-pressure and saturation discontinuities.
Our results reveal that: 1) at the sub-10-cm scale, gravity influences dynamic barrier formation, accelerating capillary sealing; 2) this sealing can choke the flow at small fluid-supply rates; 3) increasing the rate leads to episodic drainage-imbibition cycling or to continuous flow; 4) ensemble relative permeability is flow-direction dependent and a function of flow rate; and 5) saturation distributions are strongly influenced by bedforms. A new method is presented to extract ensemble relative permeability curves for cross-bedded sandstones; it does not rely on uniform inlet fractional flows, raising the physical realism of this multiphase-flow upscaling.
Ensemble Reservoir Data Assimilation with Generic Constraints
Summary: This work investigates an ensemble-based workflow for simultaneously handling generic (possibly nonlinear) equality and inequality constraints in reservoir data assimilation problems. The proposed workflow is built upon a recently proposed umbrella algorithm, the generalized iterative ensemble smoother (GIES), and inherits the benefits of ensemble-based data assimilation algorithms in geoscience applications. Unlike traditional ensemble assimilation algorithms, the proposed workflow admits cost functions beyond the nonlinear least-squares form and has the potential to generate an unlimited number of constrained assimilation algorithms.
In the proposed workflow, we treat data assimilation with constraints as a constrained optimization problem. Instead of relying on a general-purpose numerical optimization algorithm to solve it, we derive an (approximate) closed form for iteratively updating the model variables, without explicitly linearizing the (possibly nonlinear) constraint systems. The resulting model-update formula bears similarities to that of an iterative ensemble smoother (IES). In terms of theoretical analysis, it is therefore relatively easy to move from an ordinary IES to the proposed constrained assimilation algorithms; in terms of practical implementation, the workflow is relatively straightforward for users familiar with the IES or with other conventional ensemble data assimilation algorithms such as the ensemble Kalman filter (EnKF). In addition, we develop efficient methods to handle two issues of practical importance for ensemble-based constrained assimilation: localization in the presence of constraints, and the possibly high dimensionality induced by the constraint systems.
We use one 2D and one 3D case study to demonstrate the performance of the proposed workflow; the 3D example has experimental settings close to those of real field case studies. In both case studies, the proposed workflow achieves better data assimilation performance than an original IES algorithm, and it thus has the potential to further improve the efficacy of ensemble-based data assimilation in practical reservoir data assimilation problems.
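For orientation, the sketch below shows the conventional (non-iterative) ensemble-smoother update that GIES-type algorithms generalize; the constraint-aware closed form derived in the paper is not reproduced here.

```python
# Plain ensemble-smoother update with perturbed observations.
import numpy as np

def es_update(M, D, d_obs, Cd, seed=0):
    """M: (n_m, Ne) parameter ensemble; D: (n_d, Ne) simulated data;
    d_obs: (n_d,) observations; Cd: (n_d, n_d) obs-error covariance."""
    Ne = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    Cmd = dM @ dD.T / (Ne - 1)              # parameter-data cross-covariance
    Cdd = dD @ dD.T / (Ne - 1)              # data auto-covariance
    rng = np.random.default_rng(seed)
    pert_obs = rng.multivariate_normal(d_obs, Cd, size=Ne).T
    return M + Cmd @ np.linalg.solve(Cdd + Cd, pert_obs - D)
```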
The Neural Upscaling Method for Single-Phase Flow in Porous Medium
Authors: M. Pal, P. Makauskas, P. Saxena and P. Patil
Summary: A neural upscaling methodology is introduced for upscaling heterogeneous permeability fields in porous media. Traditionally, permeability upscaling is carried out either with analytical methods, such as arithmetic/harmonic averaging, or with numerical methods, such as local/global flow-based averaging. Analytical upscaling methods are accurate only for layered or homogeneous media; numerical upscaling methods are more accurate on heterogeneous media but involve a large computational cost, and their results depend on the choice of boundary conditions. Neither approach accounts for uncertainty in geological heterogeneity, and the accuracy of these methods has only been demonstrated for engineered or known permeability distributions. The neural upscaling method proposed in this paper is built on deep learning from a large number of geological realizations, which accounts for geological uncertainty, and it is not bound to a choice of boundary conditions. Based on deep learning algorithms involving a multilayer neural network, it could serve as a more effective alternative to standard upscaling methods that is also computationally fast and accurate. A series of 2D test cases, in which upscaling is carried out using neural networks, is presented to demonstrate the accuracy of the method, together with comparisons of the neural upscaling method against numerical and analytical upscaling.
Ensemble Data Space Inversion for Fast CO2 Injection Forecast Evaluation
Authors: E. Abbate, P. Anastasi, D. Di Curzio, G. Facchi and E. Della Rossa
Summary: The geological storage of CO2 in depleted reservoirs is a potential strategy for large-scale greenhouse gas mitigation. Evaluating the storable volume and the behaviour of the injected fluid under uncertainty is essential to mitigate the associated geomechanical risks and potential CO2 leakage.
An accurate estimate of the quantity of CO2 that can be injected into a depleted reservoir is usually obtained after calibration of the reservoir model. This calibration can be addressed via a history matching process, assimilating the production and pressure data collected during the field's production history. The prior uncertainties, represented by an ensemble of reservoir model realizations, are thereby reduced by solving a nonlinear inverse problem with computationally demanding methods such as iterative ensemble data assimilation. The history matching phase is crucial for forecast simulation under realistic carbon capture and storage conditions, but it can be time consuming, especially when the historical period is long.
In the present work, we propose a method to significantly reduce the computational cost of the calibration process by adopting a direct forecasting approach based on ensemble data-space inversion (DSI), introduced by Lima et al. [1]. With this approach, the prior ensemble prediction is updated directly to account for the historical data, without modifying the models themselves. Posterior (history-matched) geological models are not explicitly obtained, as they are in standard model-space inversion methods, and as a consequence the requirements in terms of computational resources and CPU time are strongly reduced; the basic data-space update is sketched below.
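The sketch below shows the underlying single linear data-space update under Gaussian assumptions, operating purely on simulated history and forecast vectors rather than on model parameters; the paper's iterative ensemble-smoother formulation refines this idea.

```python
# Bare-bones data-space inversion: condition the forecast ensemble
# directly on observed history via prior cross-covariances.
import numpy as np

def dsi_update(D_hist, D_fcst, d_obs, Cd, seed=0):
    """D_hist: (n_h, Ne) simulated history; D_fcst: (n_f, Ne) simulated
    forecasts; d_obs: (n_h,) observed history; Cd: obs-error covariance."""
    Ne = D_hist.shape[1]
    dH = D_hist - D_hist.mean(axis=1, keepdims=True)
    dF = D_fcst - D_fcst.mean(axis=1, keepdims=True)
    Cfh = dF @ dH.T / (Ne - 1)              # forecast-history cross-covariance
    Chh = dH @ dH.T / (Ne - 1)              # history auto-covariance
    rng = np.random.default_rng(seed)
    pert_obs = rng.multivariate_normal(d_obs, Cd, size=Ne).T
    return D_fcst + Cfh @ np.linalg.solve(Chh + Cd, pert_obs - D_hist)
```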
The DSI approach is implemented here using an iterative ensemble-smoother formulation tailored to quantify the uncertainty in the total volume of CO2 that can be stored in the depleted reservoir; the method can also be used to assess the uncertainty of the subsurface fluid flow. The application examples show that calibration via the DSI algorithm produces accurate forecasts and consistently reduces uncertainty, with results comparable to those obtained by standard model-space inversion approaches.
Neural Solution to Elliptic PDE with Discontinuous Coefficients for Flow in Porous Media
Authors: M. Pal, P. Makauskas, M. Ragulskis and D. Guerillot
Summary: Locally conservative finite-volume schemes have been developed for solving the general tensor pressure equation of petroleum reservoir simulation on structured and unstructured grids. These schemes apply to the diagonal and full-tensor pressure equation with generally discontinuous coefficients and remove the O(1) errors introduced by standard reservoir simulation schemes when applied to full-tensor flow approximations.
Two-point flux approximation (TPFA) schemes are not applicable to full-tensor permeability, and multi-point flux approximation (MPFA) schemes have a major drawback: when applied to strongly anisotropic heterogeneous media, they fail to satisfy a maximum principle and lose solution monotonicity at high anisotropy ratios, causing spurious oscillations in the numerical pressure solution. Although variations of TPFA and MPFA have been proposed, challenges remain in applying them to highly heterogeneous and anisotropic media.
In this paper, a neural solution method for the general tensor elliptic PDE with discontinuous coefficients is presented. The neural solution utilizes a deep multi-layer neural network, which could serve as a more effective alternative to TPFA- or MPFA-type schemes for fast and accurate results. A series of 2D test cases is presented in which the neural solutions are compared with numerical solutions obtained with TPFA and MPFA schemes over a range of heterogeneities, demonstrating the general applicability and accuracy of the method. The order of accuracy is assessed against the numerical solution using an error measure such as the L2 norm, which shows convergence under grid refinement. The neural solution is also tested on a range of general grid types for specific cases.
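One way such a neural solution can be posed is as a PINN-style collocation residual for -∇·(K∇p) = f; whether the paper uses exactly this formulation is an assumption, and the network, coefficient and source term below are illustrative.

```python
# PINN-style mean-squared PDE residual for -div(K grad p) = f
# with a discontinuous scalar coefficient K.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pde_residual_loss(xy, K_fn, f_fn):
    """Mean-squared residual at collocation points xy of shape (N, 2)."""
    xy = xy.clone().requires_grad_(True)
    p = net(xy)
    grad_p = torch.autograd.grad(p.sum(), xy, create_graph=True)[0]  # (N, 2)
    flux = K_fn(xy) * grad_p                                         # K grad p
    div = sum(
        torch.autograd.grad(flux[:, i].sum(), xy, create_graph=True)[0][:, i]
        for i in range(2)
    )
    return ((-div - f_fn(xy)) ** 2).mean()

loss = pde_residual_loss(
    torch.rand(256, 2),
    K_fn=lambda x: 1.0 + 9.0 * (x[:, :1] > 0.5),   # discontinuous coefficient
    f_fn=lambda x: torch.ones(x.shape[0]),          # unit source term
)
```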
Quantifying Prior Model Complexity for Subsurface Reservoir Models
Authors: T.N. Mioratina and D.S. Oliver
Summary: In Bayesian approaches to subsurface inference, the prior model specifies which model parameters are uncertain and the joint probability of those parameters before production-related data are incorporated. A good prior model is complex enough to capture long-term future reservoir behaviour, realistic enough to be plausible and consistent with geologic knowledge, and simple enough to allow calibration for data matching. Model complexity is often associated with the number of model parameters, hence the focus on finding the number of parameters sufficient for history matching and for quantifying uncertainty in the future. This work explores model selection based on concepts of the complexity and informativeness of models for subsurface reservoirs, focusing on the effect of misspecified prior models on the assimilation of flow data and on predictive accuracy. Using concepts of mutual information, entropy and information criteria, we investigate the suitability of prior models with different levels of complexity, ranging from highly simplified bilinear trends to realistic multipoint statistical models and complex hierarchical Gaussian models, and we explore the effect of model complexity on the robustness of forecasting. We perform experiments with different combinations of data type, prior informativeness, forecast type and model type to compare the effect of different prior models on the robustness of the results. For each model simulation, we analyse the effective number of parameters, the entropy and the time required for calibration, and we evaluate predictive accuracy.
We show that information content and the number of parameters are useful measures for selecting prior models for history matching. We observe that model selection according to the widely applicable information criterion (WAIC) gives the same results as Bayesian leave-one-out cross-validation (LOO-CV) for hierarchical Gaussian priors. The experiments indicate that penalizing model complexity can be useful for models containing parameters without physical meaning, and that hierarchical Bayesian models are useful when the uncertainty in the prior hyper-parameters is reasonable. Finally, we suggest a workflow for model development depending on forecast objectives and data availability.
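For reference, the standard WAIC computation from posterior draws is sketched below; the paper's estimator may differ in detail.

```python
# WAIC from an (S, N) array of pointwise log-likelihoods over
# S posterior draws and N observations.
import numpy as np

def waic(log_lik):
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))  # log pointwise predictive density
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))         # effective number of parameters
    return -2.0 * (lppd - p_waic)                            # deviance scale
```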