ECMOR XVII
- Conference date: September 14-17, 2020
- Location: Online Event
- Published: 14 September 2020
Evaluation of A Data-Driven Flow Network Model (FlowNet) for Reservoir Prediction and Optimization
Authors: A. Kiærr, O.P. Lødøen, W. De Bruin, E. Barros and O. Leeuwenburgh
Summary: We describe and evaluate a physics-based proxy model approach for reservoir prediction and optimization. It builds on the recent development of so-called flow-network models, which represent flow paths between wells by discrete 1D grids with permeability and pore volume properties. These types of models represent an alternative to capacitance-resistance and correlation-based models and have the benefit of allowing for all physics supported by regular 3D grid-based commercial simulators. The new model differs from a previously proposed model in that we include additional nodes in the network that allow for more and indirect flow paths between wells, as well as extra nodes to represent an aquifer.
We describe the structure of our flow network and investigate the impact of design and training parameters on the performance of the network, both in history matching and prediction mode. Examples include the number and placement of network nodes, the treatment of aquifers, and the size and sampling of prior model property values. We distinguish between the accuracy of the history match and the generalizability by cross-validating the flow network performance on future well control strategies that are different from those encountered during the history period. Using this procedure, we aim to prevent overfitting of the model while ensuring sufficient predictive power. Results are presented for experiments based on phase rate and bottom hole pressure measurements and predictions generated with the Brugge benchmark model, which is used as a synthetic truth.
We subsequently present a first application of flow network models for well control optimization under uncertainty. To this end we employ a stochastic simplex gradient-based optimization approach and demonstrate that strategies that are expected to deliver improved NPV can be identified at much lower computational cost and within a much shorter time frame than would be required otherwise.
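A stochastic simplex gradient of the kind used for such well-control optimization can be sketched as follows. This is a minimal illustration on a toy quadratic stand-in for NPV; the function and parameter names are ours, not the paper's:

```python
import numpy as np

def stochastic_simplex_gradient(npv, u, n_perturb=20, sigma=0.05, rng=None):
    """Estimate the gradient of npv at controls u by regressing NPV
    changes on random Gaussian perturbations of the control vector."""
    rng = np.random.default_rng(rng)
    du = sigma * rng.standard_normal((n_perturb, u.size))  # control perturbations
    dj = np.array([npv(u + d) - npv(u) for d in du])       # objective changes
    # Least-squares fit dj ~ du @ g gives the simplex-gradient estimate
    g, *_ = np.linalg.lstsq(du, dj, rcond=None)
    return g

# Toy "NPV" surface with a known optimum at u = (1, 2)
npv = lambda u: -((u[0] - 1.0) ** 2 + (u[1] - 2.0) ** 2)

u = np.array([0.0, 0.0])
for k in range(60):                 # simple gradient ascent on the estimate
    u = u + 0.1 * stochastic_simplex_gradient(npv, u, rng=k)
```

Each gradient estimate costs only `n_perturb + 1` objective evaluations, which is what makes this attractive when every evaluation is a full reservoir simulation.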
Simulation of Foam-Assisted CO2 Storage in Saline Aquifers
Summary: Geological storage of CO2 is a crucial emerging technology to reduce anthropogenic greenhouse gas emissions. Due to the buoyant character of injected gas and the complex geology of subsurface reservoirs, most injected CO2 either rapidly migrates to the top of the reservoir or fingers through high-permeability layers due to instability in the convection-dominated displacement. Both of these phenomena reduce the storage capacity of subsurface media. CO2-foam injection is a promising technology for reducing gas mobility and increasing trapping within the swept region in deep brine aquifers. A consistent thermodynamic model based on a combination of a classic cubic equation of state (EOS) for gas components with an activity model for the aqueous phase has been implemented to describe the phase behavior of the CO2-brine system with impurities. This phase-behavior module is combined with representation of foam by an implicit-texture (IT) model with two flow regimes. This combination can accurately capture the complicated dynamics of miscible CO2 foam at various stages of the sequestration process. The Operator-Based Linearization (OBL) approach is applied to reduce the nonlinearity of the CO2-foam problem by transforming the discretized conservation equations into space-dependent and state-dependent operators. Surfactant-alternating-gas (SAG) injection is applied to overcome injectivity problems related to pressure build-up in the near-well region. In this study, a 3D large-scale heterogeneous reservoir is used to examine CO2-foam behaviour and its effects on CO2 storage. Simulation studies show foams can effectively reduce gas mobility by trapping gas bubbles and inhibit CO2 from migrating upward in the presence of gravity, which in turn remarkably improves the sweep efficiency and opens the unswept region for CO2 storage.
We also study how surfactant injection and foam formation affect the enhanced dissolution of CO2 at various thermodynamic conditions. This work provides a possible strategy for developing robust and efficient CO2 storage technology.
Application of Sector Modeling Approach in a Probabilistic Study of a Giant Reservoir
Authors: L.O. Pires, V.E. Botechia and D. Schiozer
Summary: Computational requirements may be one of the most relevant parameters in the model-based decision analysis process for giant and complex reservoirs, which can make probabilistic studies very time consuming. One proposal to work around this problem is to divide the reservoir model into sectors and use them as isolated models (the Sector Modeling approach) during the decision analysis process, assuming that the representativeness of the isolated sectors is acceptable. The case study is a benchmark giant offshore carbonate reservoir, analogous to the pre-salt reservoirs in Brazil, which was divided into four sectors representing four production regions with separate production systems (platforms), each starting in a different period.
A probabilistic study is performed to evaluate whether the combined behavior of the Isolated Sector models (ΣSisolated) is representative of the Full Field (FF) models. The behavior of Sector 1 is also compared between its Isolated Sector models (S1) and the FF models. This study considers 100 geological scenarios of the UNISIM-III model, combined with scalar uncertainties (relative permeability curves, fault transmissibility, PVT, well productivity/injectivity).
In this paper, we propose a methodology to evaluate differences between the two sets of models. Results show good correlation between the behavior of Sector 1 in the S1 and FF models. The ΣSisolated models are representative of the overall behavior of the FF models, with strong correlations between the two model sets. However, there is a bias toward conservative scenarios, since cumulative oil production and Net Present Value (NPV) are lower than in the FF models. The average relative difference in NPV is 13%, and thirteen models present considerable relative differences between the two sets of models (higher than 20%). A deeper study is performed on the models with the highest and lowest relative NPV differences to identify the main reasons for those differences. We also evaluate whether the behavior of the ΣSisolated models is representative of the FF models when performing risk quantification and selection of representative models.
Applying the Sector Modeling approach to this case study brings a considerable computational gain when using the Isolated Sector models, although some models show considerable relative differences. Thus, if this methodology is adopted in the decision-making process, the isolated sector models can be used during optimization processes that require a large number of simulations, while the final decision should be based on the results observed for the FF models.
Modified RAND Algorithms for Multiphase Geochemical Reactions
Authors: F. De Azevedo Medeiros, W. Yan and E.H. Stenby
Summary: Underground geological storage (UGS) of CO2 in saline aquifers or oil reservoirs is an effective means to reduce CO2 emissions at scale. To evaluate these UGS processes and understand the long-term fate of the injected CO2, we need a simulator that can account for multiphase equilibrium involving CO2, speciation reactions in brine, and the reactions with minerals. The calculation algorithms for multiphase geochemical reactions are essential to the robustness and efficiency of such a simulator. We applied the modified RAND method (SPE 182706-PA) to electrolyte systems to calculate phase equilibrium together with speciation reactions and mineral dissolution/precipitation. Modified RAND is a non-stoichiometric approach for simultaneous chemical and phase equilibrium calculation. The method linearizes the species chemical potentials and eventually uses the elemental chemical potentials as the main independent variables. This greatly reduces the size of the equations for geochemical systems with many species and reactions. Modified RAND is more structured than the classical methods, for which the independent variables must be reselected during the calculation to reduce round-off errors, and is thus more suitable for UGS in oil reservoirs, where both hydrocarbon phase equilibrium and brine-mineral reactions are important. It is 2nd-order and its solution can be guided by minimizing the Gibbs energy. Modified RAND can be applied directly to geochemical systems at a fixed overall composition. Some geochemical applications, however, require analysis at constant chemical potential of a neutral species (e.g., CO2) or a charged species (e.g., H+), the latter case usually expressed as constant pH. We also extended modified RAND to those open systems. For the former, a new state function can be constructed through the Legendre transform, and the resulting algorithm is an energy minimization.
For the latter, the problem is no longer minimization but we can still formulate a 2nd-order convergent algorithm. We tested the modified RAND algorithms with phase equilibrium cases relevant to UGS in closed systems, open systems with specified CO2 fugacity, and open systems with specified pH. Modified RAND provides a more efficient solution than the classical equation solving approach used in PHREEQC. The algorithms for closed and open systems exhibit 2nd-order convergence in all the tested cases. We then integrated modified RAND into a 1-D simulator and included the kinetic reactions, and compared the simulator with PHREEQC for 1-D geochemical simulations. The study provides the foundation for a future reactive transport simulator using modified RAND for the core multiphase reaction calculation.
Consistent Update of Well Path, Grid Structure and Grid Model Parameters Using an Iterative Ensemble Smoother
Authors: J. Saetrom and L. Gourc
Summary: For horizontally drilled wells offshore, uncertainty in well path position can be substantial (10 meters or more in true vertical depth). In traditional modelling workflows, this uncertainty is often ignored, despite its potential impact on reservoir management decisions. Fortunately, state-of-the-art tools for seismic depth conversion allow us to generate multiple realizations in which uncertainty in well path trajectory and structural horizons is accounted for. However, inconsistencies are easily introduced when these tools are used as part of an integrated modelling and data conditioning workflow that includes both static (seismic, logs, etc.) and dynamic (production, 4D seismic, etc.) data conditioning. Typical inconsistencies include:
• When a well trajectory is changed during data conditioning, how do we preserve consistency with the updated well log distributions and resulting grid properties?
• When well trajectories and structural horizons are updated simultaneously, how do we prevent artificial well tops from being introduced, so that the conditioned model parameters remain physical?
• When changing the facies property in a single grid cell, how do we preserve the consistency of the petrophysical properties at large?
Although numerous papers have been published on conditioning grid structure and facies distributions to static and dynamic data, an algorithm accounting for all three of the cruxes outlined above has not been published.
In this paper, we describe a complete workflow, from well path uncertainty to flow simulation, that prevents introducing these model inconsistencies during data conditioning using an iterative ensemble smoother, looking in particular at the three cruxes outlined above. The workflow can be divided into two steps:
1) Prior modelling, where we define the limits of the sample space for each model parameter and use Monte Carlo sampling to generate an ensemble of realizations. In this initial step, static data is used to guide the local and global variability of the generated realizations, without hard data conditioning. The goal of this initial step is to establish a graphical network model defining physical connections between the model parameters in an integrated workflow, and to constrain the sample space of the resulting model parameters.
2) Training: Using an iterative ensemble Kalman smoother, we condition the model parameters to observed data simultaneously using all available data (both static and dynamic).
An anonymous field on the Norwegian continental shelf will be used to demonstrate the practical use of this workflow.
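A single update of the Kalman-type ensemble smoother used in the training step can be sketched as follows. The notation, the identity forward model and all parameter values are ours, chosen purely for illustration:

```python
import numpy as np

def ensemble_smoother_update(M, D, d_obs, sd_err, alpha=1.0, rng=None):
    """One Kalman-type ensemble smoother update.
    M: (n_par, n_ens) parameters; D: (n_obs, n_ens) predicted data;
    d_obs: (n_obs,) observations; sd_err: observation-error std. dev."""
    rng = np.random.default_rng(rng)
    dM = M - M.mean(axis=1, keepdims=True)      # parameter anomalies
    dD = D - D.mean(axis=1, keepdims=True)      # data anomalies
    n = M.shape[1] - 1
    C_md = dM @ dD.T / n                        # parameter-data covariance
    C_dd = dD @ dD.T / n                        # data-data covariance
    C_e = (sd_err ** 2) * np.eye(d_obs.size)    # observation-error covariance
    # Perturb observations per member, then apply the update
    d_pert = d_obs[:, None] + np.sqrt(alpha) * sd_err * rng.standard_normal(D.shape)
    K = C_md @ np.linalg.inv(C_dd + alpha * C_e)
    return M + K @ (d_pert - D)

# Identity forward model: predicted data equal the parameters (for illustration)
rng = np.random.default_rng(0)
M_prior = rng.standard_normal((2, 200))         # prior ensemble, N(0, 1)
d_obs = np.array([1.0, -1.0])
M_post = ensemble_smoother_update(M_prior, M_prior.copy(), d_obs, sd_err=0.1, rng=1)
```

In an iterative (multiple-data-assimilation) scheme this update is repeated with inflated error covariances (`alpha > 1`) such that the inverse inflation factors sum to one.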
Two-Stage Scenario Reduction Process for An Efficient Robust Optimization
Authors: S.K. Mahjour, A.A.D.S. Dos Santos, M.G. Correia and D.J. Schiozer
Summary: Probabilistic approaches to optimization objectives need a large ensemble size to account for uncertainties, which is often computationally expensive. Our proposed method includes two scenario reduction (SR) techniques applied to geostatistical realizations and reservoir simulation models to handle geological and dynamic uncertainties. The goal is to select a subset of simulation models to be used in an efficient robust optimization (RO).
The proposed workflow is summarized in the following steps.
- Generate total geostatistical (TG) realizations representing grid properties using Latin Hypercube (LH) sampling;
- Select representative geostatistical (RG) realizations from the TG realizations using an integrated statistical technique named Distance-based Clustering with Simple Matching Coefficient (DCSMC). This step is the first stage of SR;
- Integrate other uncertainties with the RG scenarios to generate total simulation (TS) models using Discrete Latin Hypercube with Geostatistical models (DLHG);
- Apply a data assimilation process to reduce uncertainty and generate total history-matched simulation (THS) models using a filtering indicator named Normalized Quadratic Deviation with Signal (NQDS);
- Select representative history-matched simulation (RHS) models from the THS model set using a tool based on a metaheuristic optimization algorithm named RMFinder. This step is the second stage of SR;
- Perform an RO to maximize NPV as the objective function using the selected RHS models.
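The Latin Hypercube sampling used in the first step can be sketched as below. This minimal version samples on the unit hypercube; in practice each column would then be mapped to the range of a grid-property parameter (`scipy.stats.qmc.LatinHypercube` offers an equivalent):

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Latin Hypercube sample on [0, 1): each variable's range is split into
    n_samples equal strata and every stratum is sampled exactly once."""
    rng = np.random.default_rng(rng)
    # One random point per stratum, then an independent shuffle per variable
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_vars))) / n_samples
    for j in range(n_vars):
        rng.shuffle(u[:, j])
    return u

samples = latin_hypercube(10, 3, rng=42)
```

Compared with plain Monte Carlo, the stratification guarantees that even a small ensemble covers the full range of every uncertain parameter.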
The novel SR workflow selects the representative scenarios (RG realizations and RHS models) in two steps: (1) RG selection based on static features before the simulation process, and (2) RHS selection based on simulation-based (dynamic) features after the simulation process. The workflow is applied to UNISIM-II-D, a flow-unit-based synthetic fractured reservoir model.
To check the computational time and efficiency of the methodology, we compare two candidate production strategies based on (1) five RHS models obtained from the two-stage SR process using the DCSMC and RMFinder techniques (workflow A), and (2) five RHS models obtained from a one-stage SR process using the RMFinder method (workflow B). In workflow A, the SR process is performed gradually in two steps, while in workflow B it is applied all at once.
The results show that the distributions of simulation outcomes after RO for the representative scenarios and the total scenarios are more similar in workflow A than in workflow B. In addition, the robust production strategy obtained from workflow A is preferred to that of workflow B because it presents higher chances of high NPV values and lower chances of low NPV values.
A Simplified Mechanistic Population Balance Model for Foam Enhanced Oil Recovery (EOR)
Authors: L. Ding and D. Guerillot
Summary: The mechanistic foam population balance (PB) model has clear physics, but it is generally challenging to apply due to its high computational cost and the difficulty of determining a number of kinetic foam parameters. In this presentation, a simplified mechanistic foam PB model was developed and applied to simulate an enhanced oil recovery (EOR) process in the laboratory.
An improved foam coalescence function for the oil destabilizing effect and the dry-out effect on foam was incorporated into the mechanistic foam PB model, and a simplified mechanistic foam PB model was obtained after a local equilibrium approximation. The simplified mechanistic foam PB model was first validated against fractional flow theory. Then, it was applied to history matching an efficient foam EOR process performed in the laboratory. These experiments involve foam flooding tests (co-injection of surfactant and nitrogen) in the absence of crude oil, foam tests in the presence of residual oil after water flooding, and a series of foam quality scan tests in the presence of residual oil after foam flooding. The parameters of the oil-saturation-dependent function were estimated from numerical simulation of foam transport in the presence of water-flooded residual oil, while the parameters of the foam dry-out function were estimated by history matching the steady-state foam quality scan data at residual oil saturation after foam flooding. The simulation results were also compared with those obtained from the foam PB model and the foam local equilibrium (LE) model of a commercial simulator in terms of history matching quality and computational cost.
It is found that the numerically calculated pressure gradient, cumulative oil recovery and effluent surfactant concentration reproduce the experimental results notably well. Both the steady-state and transient foam flows can be reproduced reasonably well by the simplified mechanistic foam PB model. Moreover, the simplified mechanistic PB model is more efficient in terms of computational cost in comparison to the full physics PB model, thereby appearing to be a potentially effective tool for modeling at field scale.
A Bayesian Statistical Approach to Decision Support for TNO OLYMPUS Well Control Optimisation under Uncertainty
Authors: J. Owen, I. Vernon and R. Hammersley
Summary: Well control and field development optimisation are tasks of increasing importance within the petroleum industry, as seen by the development of, and large participation in, the 2018 TNO OLYMPUS Field Development Optimisation Challenge. Complex mathematical computer models, in the form of reservoir simulators, are used in the TNO Challenge, as well as throughout the petroleum industry, both to improve the understanding of the behaviour of oil fields and to guide future decisions for well control strategies and field development.
Major limitations when using reservoir simulators include their complex structure, high-dimensional parameter spaces and large number of unknown model parameters, further compounded by their long evaluation times. The process of making decisions is commonly misrepresented as an optimisation task, which frequently requires a large number of simulator evaluations and thus renders many traditional optimisation methods intractable. Further complications arise from the many sources of uncertainty inherent in the modelling process, such as model discrepancy. This makes it unwise to focus on a single best decision strategy that is potentially non-robust to such uncertainties.
We develop a novel iterative decision support strategy which imitates the Bayesian history matching procedure and identifies a robust class of well control strategies. This incorporates Bayes linear emulators, which provide fast and efficient statistical approximations to the computer model, permitting full exploration of the vast array of potential well control or field development strategies. The framework also quantifies additional sources of uncertainty, such as model discrepancy, to link the sophisticated computer model to the actual system and hence obtain robust and realistic decisions for the real oil field.
The developed iterative approach to decision support is demonstrated via an application to the well control problem of the TNO OLYMPUS Challenge. Accurate emulators are constructed using limited information from a relatively small number of simulations. Moreover, a variety of sources of uncertainty including many not considered by the TNO dataset are incorporated, their importance highlighted and their effects on the sensitivity of potential decisions demonstrated. Greater emulator accuracy is achieved at later waves due to iterative refocusing. This approach yields a collection of decisions which are robust to uncertainty for a greatly reduced computational cost compared to methods using the simulator only.
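At the core of such emulators is the Bayes linear adjustment, E_D[f(x)] = E[f(x)] + Cov[f(x), D] Var[D]^(-1) (D - E[D]), together with the corresponding adjusted variance. A minimal one-dimensional sketch, with a squared-exponential prior covariance and hyperparameters chosen by us purely for illustration:

```python
import numpy as np

def bayes_linear_adjust(x_train, f_train, x_new, mean=0.0, sigma2=1.0, ell=1.0):
    """Bayes linear adjusted expectation and variance of f(x_new), given
    simulator runs D = f_train at x_train (squared-exponential covariance)."""
    def cov(a, b):
        d = a[:, None] - b[None, :]
        return sigma2 * np.exp(-0.5 * (d / ell) ** 2)
    V = cov(x_train, x_train) + 1e-8 * np.eye(x_train.size)  # Var[D] (+ jitter)
    c = cov(x_new, x_train)                                  # Cov[f(x_new), D]
    adj_mean = mean + c @ np.linalg.solve(V, f_train - mean)
    adj_var = sigma2 - np.sum(c * np.linalg.solve(V, c.T).T, axis=1)
    return adj_mean, adj_var

x_train = np.array([0.0, 1.0, 2.0, 3.0])
f_train = np.sin(x_train)              # stand-in for expensive simulator output
m, v = bayes_linear_adjust(x_train, f_train, np.array([0.0, 1.5, 3.0]))
```

The adjusted variance shrinks to zero at the training runs and grows away from them, which is what drives the iterative refocusing: new simulations are placed where the emulator is still uncertain.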
Comparing Three DFN Simplification Strategies for Two-Phase Flow Applications
Summary: Numerical flow models based on Discrete Fracture Methods (DFM) represent a fractured porous rock using an unstructured mesh in which fractures are a subset of the element faces. This allows for a high degree of geometric accuracy, but it also raises numerical challenges: the mesh must honor both small- and large-scale geometric features while keeping computations tractable and stable. For these reasons, we previously proposed a new geometric approximation method which can be applied before meshing.
The aim of this paper is to compare the flow impact of different geometric approximations of irregular and complex two-dimensional fracture networks. We present and validate a Control-Volume Finite-Element DFM-based water flooding model and three fracture approximation strategies. The first strategy (A) projects fractures onto the edges of an initial background mesh. The two others (B and C) rely on graph theory to analyze and modify a boundary representation of the fracture network according to minimal-angle and mesh-size criteria. Strategy B modifies the boundary representation using a contraction approach in which flagged fracture elements (lines, extremities of intersections) are merged. Strategy C uses an expansion approach which moves the problematic fracture elements away from one another, hence preserving the model connectivity (we also present some adjustments with respect to the already published method). The approximation strategies A, B and C are applied to three reference data sets featuring, respectively, two crossing fractures, highly connected fractures, and anisotropic disconnected fractures. For each model, we compare the oil production and the saturation maps to those of the reference model. These tests show that the connectivity changes implied by strategies A and B have only a small impact on the flow solution. Nonetheless, the expansion strategy C, which preserves the fracture network topology, provides the most accurate solution in all test cases.
A Robust, Multi-Solution Framework for Well Location and Control Optimization
Authors: M. Salehian, M. Haghighat Sefat and K. Muradov
Summary: Optimal field development and control aim to maximize the economic profit of oil and gas production subject to various constraints. This results in a high-dimensional optimization problem with a computationally demanding and uncertain objective function based on the simulated reservoir models. Many current robust optimization methods have two limitations: 1) they optimize only a single level of control variables (e.g. well locations only, or well production/injection scheduling only), which ignores the interference between control variables from different levels; and 2) they provide a single optimal solution, whereas operational problems often add unexpected constraints that force adjustments to this optimal solution, degrading its value.
This paper presents a robust, multi-solution framework based on sequential iterative optimization of control variables at multiple levels. The Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm is used as the optimizer, with estimated gradients calculated by mapping an ensemble of control-variable perturbations one-to-one onto the ensemble of selected reservoir model realizations at each iteration. An ensemble of close-to-optimum solutions is then chosen from each level (e.g. from the well placement optimization level) and transferred to the next level of optimization (e.g. where the control settings are optimized), and this loop continues until no significant improvement is observed in the expected objective value. Fit-for-purpose clustering techniques are developed to systematically select an ensemble of realizations that captures the underlying model uncertainties, as well as an ensemble of solutions with sufficient differences in control variables but close-to-optimum objective values, at each optimization level.
The proposed framework has been tested on the Brugge benchmark field case study. Multiple solutions are obtained with different well locations and control settings but close-to-optimum objective values, providing much-needed operational flexibility to field operators. We also show that suboptimal solutions from an early optimization level can approach and even outdo the optimal one at the next level(s), demonstrating the advantage of the developed framework in a more efficient exploration of the search space.
Calculation of Well Productivity Index in Stochastic Porous Media
Authors: D. Posvyanskii and A. Novikov
Summary: The productivity index (PI) is an important characteristic of a well, indicating its production potential. Analytical solutions of the well inflow equation are frequently used to calculate PI; however, these solutions are obtained under the assumption of reservoir homogeneity. In a heterogeneous reservoir with spatially variable permeability, the use of these analytical solutions leads to errors in the PI calculation.
The calculation of the effective permeability of a stochastically heterogeneous porous medium has been the subject of numerous studies, and upscaling is now commonly applied to many reservoir simulation problems. In reservoirs with stochastic permeability, the effective permeability is a random variable characterized by its mean value and variance. These statistics can be calculated directly from the solution of the well inflow equation, a partial differential equation with a random coefficient, whose solution is treated as the pressure averaged over the ensemble of permeability realizations. The averaged pressure can be represented as an infinite perturbation series over the permeability fluctuation. In [1] we used the Feynman diagrammatic approach to sum this series and obtain the effective reservoir permeability.
In this study we focus on calculating the variance of the effective permeability, which represents the error introduced by replacing the heterogeneous medium with a homogeneous one. We use the approach of [1] to calculate this variance. Knowledge of the statistical characteristics of the effective permeability then allows us to calculate the PI.
It is shown that in contrast to the mean effective permeability, its variance depends on the correlation length of permeability field. Semi-analytical expressions for mean effective permeability and for its variance are obtained for lateral and vertical stochastic heterogeneity. These expressions allow PI, well rate and corresponding uncertainties to be easily estimated. The influence of anisotropy, permeability variance and correlation length on the uncertainty in PI is investigated and compared to the results of Monte-Carlo numerical simulation.
[1] Novikov, A.V., Posvyanskii, D.V. The use of Feynman diagrammatic approach for well test analysis in stochastic porous media. Comput Geosci (2019). https://doi.org/10.1007/s10596-019-09880-1
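For reference, the homogeneous-reservoir PI that the paper generalizes is the steady-state radial-inflow expression PI = 2πkh / (μ(ln(r_e/r_w) + s)). A brute-force Monte Carlo over permeability realizations, a crude stand-in for the paper's semi-analytical expressions (the lognormal statistics below are ours, chosen for illustration), looks like:

```python
import numpy as np

def productivity_index(k, h, mu, r_e, r_w, skin=0.0):
    """Steady-state radial-inflow PI = q / (p_e - p_wf) in consistent SI units:
    PI = 2*pi*k*h / (mu * (ln(r_e / r_w) + skin))."""
    return 2.0 * np.pi * k * h / (mu * (np.log(r_e / r_w) + skin))

# Monte Carlo over a lognormal permeability (illustrative parameters)
rng = np.random.default_rng(1)
k = np.exp(rng.normal(np.log(1e-13), 0.5, size=10_000))   # m^2 (~100 mD median)
pi = productivity_index(k, h=10.0, mu=1e-3, r_e=500.0, r_w=0.1)
pi_mean, pi_std = pi.mean(), pi.std()
```

The spread `pi_std` is what the paper obtains semi-analytically, without sampling, from the variance of the effective permeability.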
Impacts of Gas Trapping and Capillarity on Oil Recovery by Near-Miscible CO2-WAG
Summary: CO2 Water-Alternating-Gas injection (CO2-WAG) under near-miscible conditions is a multifaceted process due to the complex interaction of thermodynamic phase behaviour, multiphase flow behaviour and the heterogeneity of the porous medium. The central objective of this study is to improve the fundamental understanding of fluid behaviour in near-miscible CO2-WAG. This work presents a detailed simulation study of CO2-WAG displacements with unfavourable mobility ratios in a 2D areal heterogeneous system chosen to trigger the fingering flow regime. In our previous work (Wang et al., 2019a; 2019b; 2020), we successfully developed a new mechanistic synthesis of near-miscible WAG, incorporating compositional effects (the MCE mechanism) and interfacial tension effects (the MIFT mechanism). Here, we extend our study to include additional key multiphase flow mechanisms, such as gas trapping and capillarity, to better reflect the flow physics of a three-phase system.
We identify that the effect of gas trapping reduces the oil recovery due to the degraded displacement performance in the “non-preferential” flow routes (areas between gas fingers). This is because the trapping mechanism greatly hampers the MIFT mechanism acting during the secondary water injection cycle. The viscous crossflow between the non-preferential routes and preferential routes (gas fingers) is restricted, which leads to a lowered sweep efficiency. On the other hand, the effect of the capillary force is more complex. In a water-wet system, the oil production increases at the early stage of displacement but approaches the plateau more quickly. In this case, capillary pressure creates entry barriers for gas flowing into low-permeability zones, which gives rise to more severe gas fingers and a larger amount of bypassed oil. The oil recovery drops by over 7% compared to the zero capillary pressure case. For the oil-wet system with capillarity, the production life is much extended by the capillary forces compared to the water-wet case. Although the production rate is reduced at the early stage of the displacement, the oil-wet capillary pressure function enables gas to imbibe into low-permeability zones (under near-miscible conditions), which mitigates the effect of the dominant gas fingers. The improved sweep efficiency maximizes the benefits of the combined MCE and MIFT mechanisms, particularly at the late stage of the displacement. The oil recovery in the oil-wet case can be almost as good as in the base case provided the final water cycle is long enough.
Data-Driven, Physics-Driven and Analytic Models for Waterflooding Optimisation Under Uncertainty
Authors: D.L. Moreno Bedoya and G. Garcia
Summary: The proper optimisation of fields under waterflooding in the presence of uncertainty might require the evaluation of multiple scenarios over a set of reservoir models designed to incorporate geological, structural and stratigraphic uncertainties. Nowadays, reservoir models might have several million grid cells, and a large computing infrastructure is needed to achieve a near-optimal solution for the net present value objective function given the large uncertainties.
In this work, a methodology is presented in which data-driven models in the form of capacitance-resistance methods, together with analytical fractional flow theory and machine learning techniques, are used to optimise a set of reservoir models under uncertainty.
The fractional flow parameters of the Buckley-Leverett function are calculated on a well-by-well basis using iterative ensemble smoothers after a connectivity analysis is performed. The connectivity analysis is initially conditioned to flow diagnostics averages and to time-of-flight averages.
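The Buckley-Leverett fractional flow function whose parameters are estimated per well can be sketched with Corey-type relative permeabilities; the endpoint and exponent values below are illustrative defaults, not the paper's:

```python
import numpy as np

def fractional_flow(sw, swc=0.2, sor=0.2, krw0=0.3, kro0=0.8,
                    nw=2.0, no=2.0, mu_w=1e-3, mu_o=5e-3):
    """Water fractional flow f_w(S_w) = 1 / (1 + (k_ro/mu_o) * (mu_w/k_rw)),
    with Corey-type relative permeabilities; gravity and capillarity ignored."""
    s = np.clip((np.asarray(sw, dtype=float) - swc) / (1.0 - swc - sor), 0.0, 1.0)
    krw = krw0 * s ** nw              # water relative permeability
    kro = kro0 * (1.0 - s) ** no      # oil relative permeability
    with np.errstate(divide="ignore"):
        return 1.0 / (1.0 + (kro / mu_o) * (mu_w / krw))
```

The ensemble smoother then calibrates the endpoints and exponents of this curve per well pair so that the proxy reproduces observed water-cut histories.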
The objective is to maximise the net present value by using proxy models that better match the reservoir, and to provide insights on drainage areas and possible infill-drilling locations for better field development plans, in a fraction of the time otherwise required.
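The Buckley-Leverett fractional-flow function at the core of this approach has a standard closed form. A minimal sketch, where the Corey-type relative-permeability curves, endpoints and viscosities are purely illustrative assumptions rather than values from the paper:

```python
def corey_krw(sw, swc=0.2, sor=0.2, krw_max=0.4, nw=2.0):
    # Corey-type water relative permeability (illustrative parameters)
    s = (sw - swc) / (1.0 - swc - sor)
    return krw_max * s**nw

def corey_kro(sw, swc=0.2, sor=0.2, kro_max=0.9, no=2.0):
    # Corey-type oil relative permeability
    s = (1.0 - sw - sor) / (1.0 - swc - sor)
    return kro_max * s**no

def fractional_flow(sw, mu_w=0.5, mu_o=2.0):
    # Buckley-Leverett water fractional flow, ignoring gravity/capillarity:
    #   f_w = (krw/mu_w) / (krw/mu_w + kro/mu_o)
    lam_w = corey_krw(sw) / mu_w
    lam_o = corey_kro(sw) / mu_o
    return lam_w / (lam_w + lam_o)

# f_w rises monotonically from 0 at connate water to 1 at residual oil
print(fractional_flow(0.2), fractional_flow(0.5), fractional_flow(0.8))
```

Per-well calibration then amounts to estimating parameters such as the endpoints and Corey exponents from production data, e.g. with an iterative ensemble smoother as in the paper.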
Scaling Foam Flow Models in Heterogeneous Reservoirs for A Better Improvement of Sweep Efficiency
Authors F. Demarche, B. Braconnier and B. Bourbiaux
Summary: In heterogeneous formations foam is expected to reduce mobility more in high-permeability layers and hence to divert flow towards low-permeability regions. This has been shown experimentally by several authors, both by comparing core-scale foam displacements on core plugs of contrasting average permeability and by using a two-dimensional laboratory pilot consisting of two layers with different properties. More recently, it has been shown experimentally and theoretically that the foam mobility reduction scales approximately as the square root of permeability within the framework of Darcy-type semi-empirical foam flow models. This scaling law for the effect of permeability on foam properties was inferred from an analogy between foam flow in porous media and foam flow in capillary tubes, and was found consistent with the modelling of available experimental data.
This foam selectivity effect should improve the sweep efficiency and is of primary interest for liquid or gas diversion in improved oil recovery and environmental remediation. However, it is not yet accounted for in physical-modelling and reservoir-simulation rock-typing best practices, nor used routinely in the design of foam pilots. As such, the use of a physical foam-mobility-reduction scaling law is highly recommended for foam-process evaluation and is the purpose of the present communication.
This work assesses the impact of such effects through comprehensive large-scale Darcy-type foam modelling for the design of pilot tests. A model implemented in the IFP Energies nouvelles reservoir simulator PumaFlow is used here for the sole purpose of demonstrating the impact of foam selectivity. We consider two-dimensional cross-sectional inter-well porous media with various permeability distributions and, ultimately, a three-dimensional synthetic reservoir. By varying the permeability contrast, we demonstrate how far off target conventional foam-flow modelling can be when this permeability-selectivity effect, which drives fluid diversion and sweep efficiency, is not properly accounted for. Finally, we show how selective foam injections can be designed to make the best joint use of the considered foam and the permeability heterogeneity of the porous medium.
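The square-root permeability scaling of the foam mobility-reduction factor can be illustrated with a toy two-layer flow split under a common pressure gradient; a sketch, with the reference mobility-reduction factor and the layer permeabilities chosen purely for illustration:

```python
import math

def layer_flow_split(k_layers, mrf_ref=20.0, k_ref=100.0, scaled=True):
    # Flow fraction taken by each layer under a common pressure gradient.
    # With foam, effective layer mobility = k / MRF; the scaling law takes
    # MRF proportional to sqrt(k), as in Darcy-type semi-empirical models.
    mobil = []
    for k in k_layers:
        mrf = mrf_ref * math.sqrt(k / k_ref) if scaled else mrf_ref
        mobil.append(k / mrf)
    total = sum(mobil)
    return [m / total for m in mobil]

# 1000 mD vs 10 mD layers: sqrt(k) scaling strengthens diversion
print(layer_flow_split([1000.0, 10.0], scaled=False))
print(layer_flow_split([1000.0, 10.0], scaled=True))
```

With a constant mobility-reduction factor the high-permeability layer still takes almost all of the flow; the sqrt(k) scaling weakens foam mobility more in the high-permeability layer and diverts a markedly larger fraction into the low-permeability one.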
An Efficient Implementation of the Discontinuous Galerkin Method for Multiphase Flows through Heterogeneous Porous Media
Authors N. Dashtbesh, B. Noetinger and G. Enchéry
Summary: One of the main challenges in immiscible multiphase flows lies in getting an accurate representation of the strong coupling between the unavoidable heterogeneity of the porous medium and instabilities of immiscible multiphase flows appearing near the interface of the fluids. We propose an approach to improve the accuracy of the simulation of immiscible flows in heterogeneous porous media using a Discontinuous Galerkin (DG) method. The main objective of this work is to achieve both accuracy and computational efficiency by dynamically decomposing the domain and implementing different solution strategies in different flow regions. An important advantage of DG methods is the ability to approximate the solution by discontinuous polynomials of various degrees in various elements. Thanks to this feature, local flow details near the front may be taken into account by increasing the order of polynomial approximations in the elements of this flow region. To overcome the increased computational cost associated with high-order DG methods, a finite volume scheme is used far from the front.
To this aim, we have also developed a front-tracking method to locate the position of the fluid interface. This method solves a simplified two-phase flow problem to identify the grid blocks in which the front is present. Knowing the position of the front from this fast computation allows us to identify the different flow regions, which are then treated separately. Far from the front, the flow is mainly single-phase and the finite volume scheme proved to be satisfactory. In the vicinity of the front, high-order DG is used to capture the instabilities and complexities of the immiscible flow. In this work, the accuracy and computational efficiency of the results are presented in comparison to flow simulations where a high-order DG scheme is used over the whole domain.
A Bayesian Optimisation Workflow for Field Development Planning Under Geological Uncertainty
Authors R. Bordas, J.R. Heritage, M.A. Javed, G. Peacock, T. Taha, P. Ward, I. Vernon and R.P. Hammersley
Summary: Field development planning using reservoir models is a key step in the field development process. Numerical optimisation of specific field development strategies is often used to aid planning. Bayesian Optimisation is a popular optimisation method that has previously been applied to this problem. However, reservoir models can have a high degree of geological uncertainty associated with them, even after history matching. It is important to be able to perform optimisation that accounts for this uncertainty. To date, limited attention has been given to Bayesian Optimisation of field development strategies under geological uncertainty.
Much of the recent work in this area has focused on Ensemble Optimisation methods. These naturally handle geological uncertainty using ensembles of geological realisations. This can result in a high computational cost, as large ensembles are required to capture the geological uncertainty. Bayesian Optimisation offers an alternative solution using probabilistic surrogate or proxy models that can capture the geological uncertainty. However, incorporating geological uncertainty into proxy models and using those models in a Bayesian Optimisation loop remains a challenging task. Further, the effect of the additional proxy model uncertainty on optimisation results has not been well studied.
We propose a Bayesian Optimisation workflow comprising a Stochastic Bayes Linear proxy model and a combination of experimental and sequential design techniques. The workflow is designed to include a combination of static and dynamic uncertainties, with a new geological realisation generated and used to simulate fluid flow during each run of the model. The workflow is demonstrated by optimising several field development strategies in a synthetic North Sea reservoir model. The ability of the workflow to locate optima and correctly account for the geological uncertainty is studied and the computational cost is quantified.
The performance and practical implications of the proposed approach are discussed. These are important in designing an accurate and computationally efficient optimisation workflow under geological uncertainty and, ultimately, are factors in developing decision support tools for field development.
Data-Driven Models Based on Flow Diagnostics
Authors M. Borregales, O. Møyner, S. Krogstad and K. Lie
Summary: Data-driven models are an attractive alternative to reservoir simulation in workflows where full field-scale simulations may be computationally prohibitive [3,4]. One example is the forecasting and schedule optimization of waterflooding scenarios, where numerous function evaluations, each corresponding to a time-consuming simulation, may be required. Data-driven models must be calibrated to produce a satisfactory forecast, similar to the history matching of conventional simulation models. However, a large amount of data is needed to produce a model capable of giving accurate forecasts of the flow distribution between injectors and producers. Mature fields may have sufficient data to calibrate a purely data-driven model, but fields with limited historical data require a different approach that can compensate for the lack of data.
Herein, under the assumption that a detailed reservoir simulation model exists, we use flow diagnostics [1] to obtain volumetric information about reservoir partitioning and inter-well communication between injectors and producers. This enables us to quickly set up a data-driven model composed of a network of 1D inter-well communication models. This network of models is organized in a 2D Cartesian model, in which each row corresponds to one of the 1D flow paths that represent the part of the corresponding 3D volume intersected by a certain well pair [3].
The initial data-driven model, before calibration, already produces a good forecast of production data. The calibration process relies on adjoint formulations, implemented with the automatic-differentiation framework in MRST [2]. Several numerical examples will be presented, pointing out the advantages and limitations of this new methodology. To summarize, the main contributions of this methodology are:
A good forecast is obtained from the initial data-driven model, even before calibration.
A simple and very efficient calibration process is obtained by using gradient information from the solution of the adjoint system.
A combination of flow diagnostics, adjoint methods, and automatic differentiation is used to build data-driven models for optimizing waterflooding.
[1] Olav Møyner, Stein Krogstad, and Knut-Andreas Lie. The application of flow diagnostics for reservoir management. SPE Journal, April 2015.
[2] Knut-Andreas Lie. An Introduction to Reservoir Simulation Using MATLAB/GNU Octave: User Guide for the MATLAB Reservoir Simulation Toolbox (MRST). Cambridge University Press, 2019.
[3] Zhenyu Guo and Albert C. Reynolds. INSIM-FT in three-dimensions with gravity. Journal of Computational Physics, 2019.
[4] Guotong Ren, Jincong He, Zhenzhen Wang, Rami M. Younis, and Xian-Huan Wen. Implementation of physics-based data-driven models with a commercial simulator. SPE Reservoir Simulation Conference, 2019.
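The inter-well communication information that flow diagnostics provides can be illustrated with a steady-state tracer computation on a toy acyclic flux network; a minimal sketch in which the network topology and rates are hypothetical, not from the paper:

```python
def tracer_allocation(cells_in_order, inflows, injector_conc):
    # Steady-state tracer: the concentration in each cell is the
    # flux-weighted average of the concentrations flowing into it.
    # cells_in_order: topological order of an acyclic flux graph
    # inflows: {cell: [(upstream_cell_or_injector, flux), ...]}
    conc = dict(injector_conc)  # injectors carry tracer concentration 1 or 0
    for c in cells_in_order:
        flux_in = sum(q for _, q in inflows[c])
        conc[c] = sum(conc[u] * q for u, q in inflows[c]) / flux_in
    return conc

# Two injectors (I1 carries tracer, I2 does not) feeding two producers
# through a shared intermediate cell A.
inflows = {
    "A":  [("I1", 60.0), ("I2", 40.0)],
    "P1": [("A", 70.0)],
    "P2": [("A", 30.0)],
}
conc = tracer_allocation(["A", "P1", "P2"], inflows, {"I1": 1.0, "I2": 0.0})
# Fraction of each producer's rate allocated to injector I1:
print(conc["P1"], conc["P2"])  # 0.6 0.6
```

On real models the fluxes come from a pressure solve on the 3D grid; the resulting injector-producer allocation factors and swept volumes seed the 1D inter-well connections of the data-driven network.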
Deep-CRM: A New Deep Learning Approach for Capacitance Resistive Models
Authors A. Yewgat, D. Busby, M. Chevalier, C. Lapeyre and O. Teste
Summary: Classical reservoir-engineering studies require building geological models and solving complex fluid-flow transport equations, which demands high-quality data, substantial computational resources, time and dedicated workflows.
For large and mature fields, data-driven models can be used to get faster answers and to perform production analysis more efficiently.
Capacitance Resistive Models (CRM) are a class of methods based on material balance that can be used to estimate the liquid rates of production wells as a function of injected water and bottom-hole pressure (BHP) variations. CRM methods quantify the connectivity between producers and injectors using only dynamic data. An important drawback of CRMs is that they can suffer from parameter-identification problems. Moreover, the analytical solution can only be obtained under specific conditions: linear variation of BHP and fixed injection rate between two consecutive time steps.
In this work we present a new approach combining CRM material-balance equations with neural networks in order to obtain more robust and reliable estimates of the CRM parameters (i.e. well connectivities, productivity indices and time constants). The approach is also attractive because it makes no assumptions about the BHP and injection-rate signals.
To this end, we use a recent approach called Physics-Informed Neural Networks (PINNs). In this approach, neural networks are trained on observed data with additional physics constraints translated into appropriate loss functions. The parameters of the physical equation are estimated at the same time as the neural-network weights.
The introduction of PINNs into our approach arose after testing classical machine-learning (ML) models (SVMs, random forests, etc.) and deep-learning models (MLPs, LSTMs, RNNs, etc.). Such models can perform well in specific cases but usually struggle to produce robust long-term results (i.e. forecasts), since they do not natively integrate physics constraints.
Our aim is to impose physics constraints on neural networks and thereby obtain more stable and reliable results. At the same time, the networks should be able to account for behaviours that are not explained by simplified physical equations such as material balance.
We performed a full comparison between our PINN-based approach, other standard ML and DL approaches, and an established CRM framework on two datasets: a simple but realistic model built using a commercial reservoir simulator, and a real dataset. We show that our approach gives more robust results (in terms of MSE) while not suffering from the parameter-identification issue.
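For reference, the material-balance recursion behind CRMP-type models, which the PINN constrains the network to honour, can be sketched as follows; the connectivities, time constant and productivity index below are illustrative numbers, not fitted values:

```python
import math

def crmp_rates(q0, injection, bhp, f, tau, J, dt=1.0):
    # Standard CRMP update for one producer:
    #   q(t_k) = q(t_{k-1})*exp(-dt/tau)
    #          + (1 - exp(-dt/tau)) * (sum_i f_i*I_i(t_k) - J*tau*dP_wf/dt)
    # injection: per-time-step list of per-injector rates; bhp: producer BHP
    e = math.exp(-dt / tau)
    q, rates = q0, []
    for k in range(1, len(bhp)):
        dpdt = (bhp[k] - bhp[k - 1]) / dt
        support = sum(fi * Ii for fi, Ii in zip(f, injection[k]))
        q = q * e + (1.0 - e) * (support - J * tau * dpdt)
        rates.append(q)
    return rates

# Constant injection and BHP: production relaxes toward sum(f_i * I_i)
inj = [[100.0, 50.0]] * 50
r = crmp_rates(q0=200.0, injection=inj, bhp=[250.0] * 50,
               f=[0.4, 0.6], tau=5.0, J=2.0)
print(round(r[-1], 2))  # approaches 0.4*100 + 0.6*50 = 70
```

The parameter-identification difficulty mentioned above comes from fitting `f`, `tau` and `J` jointly to noisy rate data; the PINN formulation estimates them alongside the network weights via the physics loss.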
Deep-DCA: A New Approach for Well Hydrocarbon Production Forecasting
By D. Busby
Summary: Oil & gas reservoir production forecasting is an essential task for reservoir engineers. Forecasts are made to support financial decisions and reserves calculation. For mature fields, where many wells and large historical datasets are available, physical models may be insufficiently precise or too time-consuming to build. The decline curve analysis (DCA) technique is a well-established alternative for obtaining rapid and reliable forecasts and is used in many fields for reserves evaluation.
Due to the high level of noise in the data, and to changes in production mechanisms, workovers and reservoir-pressure changes, DCA models are usually adjusted manually by reservoir engineers; moreover, for non-declining or new wells, type-curve approaches are adopted.
In this work we present a new workflow to automate the DCA calculation in a more robust way and to predict non-declining and new wells using state-of-the-art machine-learning solutions.
To perform automatic DCA we use a recent physics-informed neural network (PINN) approach in which we combine neural networks with Arps' empirical equations to obtain more robust forecasts. The neural-network proxy helps regularize the data; moreover, the different field constraints can easily be integrated by defining appropriate loss functions that are minimized during the training phase. To balance these different losses, we use an automatic approach based on uncertainty quantification.
Uncertainty quantification is also a byproduct of the PINN approach, allowing us to estimate a probabilistic set of curves from which P10-P50-P90 forecasts can be derived more robustly than with a simplistic Bayesian parameter estimation, which will usually underestimate the uncertainty.
To achieve a more robust estimation, we use as a constraint an Arps equation with piecewise-constant parameters, allowing us to capture transient regimes. The algorithm is then able to automatically find the transition zones and to assign different parameter values to the different regimes.
The last improvement concerns non-declining and new wells. To address these, we build a larger machine-learning model that learns the spatio-temporal behaviour of the different wells and combines static and dynamic data.
The method is applied to two real datasets, an unconventional gas field and a large heavy-oil field, each containing several hundred wells. Comparisons with existing automatic DCA solutions are presented.
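Arps' decline relations used as the physics constraint have simple closed forms. A sketch of the family together with the piecewise-constant-parameter idea reduced to stitched segments; all rates and decline parameters are illustrative:

```python
import math

def arps_rate(t, qi, Di, b):
    # Arps decline: exponential (b=0), hyperbolic (0<b<1), harmonic (b=1)
    if b == 0.0:
        return qi * math.exp(-Di * t)
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

def piecewise_arps(t, segments):
    # segments: list of (t_start, qi, Di, b); pick the active regime,
    # mirroring the piecewise-constant parameters used for transient regimes
    active = max((s for s in segments if s[0] <= t), key=lambda s: s[0])
    t0, qi, Di, b = active
    return arps_rate(t - t0, qi, Di, b)

segs = [(0.0, 1000.0, 0.10, 0.0),   # early transient: steep exponential
        (12.0, 301.2, 0.02, 0.5)]   # boundary-dominated: mild hyperbolic
print(round(piecewise_arps(6.0, segs), 1))
print(round(piecewise_arps(24.0, segs), 1))
```

In the workflow described above, the transition times and the per-segment parameters are not fixed by hand as here but found automatically during PINN training.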
Two-Stage Ensemble Kalman Filter Approach for Data Assimilation Applied to Flow in Fractured Media
Summary: The permeability field in a reservoir simulation greatly influences the resulting flow field, and a thorough knowledge of it is therefore crucial. However, the permeability field is usually associated with a high degree of uncertainty, since only a few measurements of reservoir properties are available. Fractures can form highly conductive shortcuts through the matrix domain. It is therefore important to estimate fracture parameters such as location, orientation and size as precisely as possible. Ensemble Kalman filters (EnKF) are widely used for history matching (or data assimilation) in the context of subsurface flows to estimate parameters, reduce uncertainty and improve simulation results.
This work studies the evolution of a reservoir as it might occur, e.g., during reservoir stimulation of a geothermal system. During the first stage, large isolated fractures with a preferred orientation arise one after the other. During the second stage, these fractures become connected by others with a different preferred orientation. We assume that the location, orientation and length of all fractures are known a priori; the only uncertainty therefore lies in the hydraulic aperture of each fracture segment. Further, we assume that prior probabilistic knowledge of the hydraulic aperture is available, e.g. from seismic measurements. We upscale the fractures and simulate the flow in the reservoir with a single-continuum model.
We reduce the uncertainty of the hydraulic apertures with an iterative EnKF using empirical measurement data, here taken from a reference simulation. During the formation of the fractures, we use pressure and flow at the inlet and outlet boundaries as measurements. Once the whole reservoir is developed, a tracer is injected at the inlet and its concentration at the outlet boundary is used as a measurement. In this context, the effect of different fracture-matrix permeability ratios is also studied.
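The EnKF analysis step underlying this loop has a standard ensemble form. A minimal sketch for a single uncertain parameter (think of one fracture's hydraulic aperture) observed through a toy cubic forward model; the prior, noise level and number of iterative passes are illustrative assumptions:

```python
import random

def enkf_update(ensemble, forward, d_obs, obs_std, rng):
    # EnKF analysis step for one scalar parameter:
    #   x_a = x_f + C_xy / (C_yy + R) * (d_perturbed - y)
    y = [forward(x) for x in ensemble]
    n = len(ensemble)
    xm, ym = sum(ensemble) / n, sum(y) / n
    c_xy = sum((x - xm) * (yi - ym) for x, yi in zip(ensemble, y)) / (n - 1)
    c_yy = sum((yi - ym) ** 2 for yi in y) / (n - 1)
    gain = c_xy / (c_yy + obs_std ** 2)
    # Perturbed-observation variant: each member sees a noisy copy of d_obs
    return [x + gain * (d_obs + rng.gauss(0.0, obs_std) - yi)
            for x, yi in zip(ensemble, y)]

# Toy forward model: flow measurement grows as aperture cubed ("cubic law")
forward = lambda a: a ** 3
rng = random.Random(0)
truth = 2.0
prior = [rng.gauss(3.0, 1.0) for _ in range(200)]  # biased, wide prior
post = prior
for _ in range(4):  # iterative EnKF passes on the same measurement
    post = enkf_update(post, forward, forward(truth), 0.1, rng)
prior_mean = sum(prior) / len(prior)
post_mean = sum(post) / len(post)
# The posterior mean is pulled toward the true aperture of 2.0
print(round(prior_mean, 2), round(post_mean, 2))
```

In the paper's setting the state vector holds the apertures of all fracture segments and the observations are boundary pressures, flows and tracer concentrations, but the gain computation is the same ensemble covariance ratio.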