ECMOR XVI - 16th European Conference on the Mathematics of Oil Recovery
- Conference date: September 3-6, 2018
- Location: Barcelona, Spain
- Published: 03 September 2018
Well Placement Optimization Under Uncertainty Using Opportunity Indexes Analysis And Probability Maps
Authors: H.M. Mustapha and D.D. Dias
Summary: Improving hydrocarbon recovery from green and mature fields by targeting potential drilling locations requires a computationally complex process of well placement optimization. Because these operational activities are expensive, and particularly critical in periods of low oil prices, a risk quantification analysis is often required to account for uncertainty.
A new automated probabilistic workflow optimizes well placement using probability maps based on reservoir and simulation opportunity indexes. Well types include vertical wells and horizontal wells. The probability map concept aims at unifying the existing model realizations into a single probability map by establishing thresholds for key physical parameters and reservoir characteristics. The opportunity indexes method is a fast way to identify zones with high potential for production from both oil and gas reservoirs.
The workflow is generic and can be applied to oil and gas in both mature and green fields as a fast method of well placement optimization under uncertainty. Starting from a set of reservoir modelling scenarios, a pattern recognition algorithm is first applied to classify and rank the realizations. A representative subset of these realizations is then used in an ensemble-based method and, when needed, calibrated to existing observed data. Reservoir and simulation opportunity indexes are applied to all calibrated realizations. Finally, a single probability map is created to unify the opportunity index maps. Based on the pattern observed in the probability map, areas of interest (AOI) are outlined, and several realizations of well configurations are generated. The designed wells are screened based on engineering criteria and ultimately assessed using numerical simulations on each realization.
The workflow terminates with results analysis and well design selection, which considers not only the improvement in oil recovery but also a measure of risk derived from the uncertainty assessment. In tests on several simulation models, unswept reservoir regions were successfully identified and ranked as drilling targets. Compared to existing methods, the workflow showed superior results.
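As a rough illustration of the unification step described above, the sketch below thresholds an ensemble of opportunity-index maps and averages the resulting indicators into a single probability map. Everything here (ensemble size, the synthetic "sweet spot", all cutoffs) is invented for the example; the paper's actual opportunity indexes come from reservoir and simulation data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 50 realizations of a 2D opportunity-index map.
# In practice these would be reservoir/simulation opportunity indexes
# computed per realization; here we fake a "sweet spot" plus noise.
n_real, nx, ny = 50, 20, 30
x, y = np.meshgrid(np.linspace(0, 1, ny), np.linspace(0, 1, nx))
sweet_spot = np.exp(-((x - 0.7) ** 2 + (y - 0.4) ** 2) / 0.02)
oi_maps = 0.5 * rng.random((n_real, nx, ny)) + sweet_spot

# Threshold each realization: a cell "has opportunity" if its index
# exceeds the chosen cutoff for the key physical parameters.
indicator = oi_maps > 0.7

# Unify the ensemble into a single probability map: the fraction of
# realizations in which each cell exceeds the threshold.
prob_map = indicator.mean(axis=0)

# Outline areas of interest (AOI) where the probability is high.
aoi = prob_map >= 0.5
print(f"{aoi.sum()} of {nx * ny} cells flagged as areas of interest")
```

Candidate well configurations would then be generated inside the flagged AOI cells and screened by simulation, as the abstract describes.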
Integrated Production Strategy Optimization Based On Iterative Discrete Latin Hypercube
Authors: J.C. Hohendorff Filho and D.J. Schiozer
Summary: Most production strategy optimization problems in the oil industry are characterized by a large number of discrete random variables in discontinuous search spaces, with non-necessarily monotonic objective functions (usually net present value or oil recovery) exhibiting many local maxima in a maximization problem. This demands a large number of simulations to adequately evaluate the search space, which becomes even more complex when integrating the reservoir and the production system. This paper evaluates a new iterative discrete Latin Hypercube (IDLHC) sampling-based method to maximize the objective function in integrated production strategy optimization.
Within a decision-making study, we used an optimization process to evaluate the best placement of wells in the reservoir with respect to an objective function. We compared optimization by IDLHC with the genetic algorithm method, using both methodologies to maximize the net present value for the same variable set and search space. We used the benchmark reservoir model UNISIM-II-D (a carbonate field in Brazil) as an application case, and applied our explicit methodology to integrate the reservoir and production-system simulators during the optimization process.
IDLHC adequately treated the posterior frequency distributions of the discrete random variables and maximized the non-necessarily monotonic objective function within the large, discontinuous search space with many local optima posed by the well placement problem.
Population-based optimization using iterative discrete Latin Hypercube sampling suited this problem well, with consistent convergence to the global optimum, few objective function evaluations, and simultaneous runs of multiple numerical reservoir simulations.
The IDLHC method showed the advantage of being a simple methodology to maximize the objective function, gradually reducing the search space with each iteration while addressing the posterior frequency distributions of the discrete variable levels.
The method successfully maximized the net present value in the well placement step of production strategy optimization, and did so faster than a well-established optimization methodology (the genetic algorithm).
This easy-to-use, reliable methodology with lower computational cost is an attractive option for optimization in integrated production strategy design problems in the oil industry.
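The iterate-and-reweight idea behind IDLHC can be sketched as follows. This is a deliberately simplified stand-in (the objective, variable counts, and elite fraction are all invented, and true IDLHC additionally stratifies its draws Latin-Hypercube style), but it shows how posterior frequency distributions of discrete levels shrink the search space each iteration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the NPV objective over discrete well-placement levels;
# the real objective would call an integrated reservoir/production simulator.
def objective(levels):
    return -np.sum((levels - 7) ** 2)  # maximal when every variable is at level 7

n_vars, n_levels = 4, 15
n_samples, n_iters, top_frac = 60, 8, 0.2

# Start from uniform frequency distributions over the discrete levels.
probs = np.full((n_vars, n_levels), 1.0 / n_levels)

for _ in range(n_iters):
    # Draw levels per variable from the current frequency distributions
    # (true IDLHC additionally stratifies the draws, Latin-Hypercube style).
    samples = np.stack(
        [rng.choice(n_levels, size=n_samples, p=probs[v]) for v in range(n_vars)],
        axis=1,
    )
    values = np.array([objective(s) for s in samples])
    # Keep the best fraction and update the posterior frequencies of the
    # discrete levels from the elite set, shrinking the search space.
    elite = samples[np.argsort(values)[-int(top_frac * n_samples):]]
    for v in range(n_vars):
        counts = np.bincount(elite[:, v], minlength=n_levels) + 1e-6
        probs[v] = counts / counts.sum()

best_levels = probs.argmax(axis=1)
print("most frequent levels after iteration:", best_levels)
```

Because each iteration reuses only frequency counts, the simulator runs within an iteration are embarrassingly parallel, matching the "simultaneous multiple simulations" point above.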
Application Of Adjoint-Based Optimal Control To Gas Reservoir With A Memory Effect
Authors: A. Kadyrova and A. Khlyupin
Summary: In recent years, researchers in the oil and gas industry have established that the contribution of memory is significant for modeling fluid flow in unconventional reservoirs. According to recent work, the memory effect arises from the contrast between highly permeable fractures and the nanoporous matrix, which leads to a jump in fluid velocities at the interface between these media. Likewise, in the homogenization procedure from the micro to the macro scale, the nonlocality in time (in other words, the memory) reflects the delay of fluid pressure and density between subdomains with different pore space geometries.
Mathematically, a memory-based fluid flow model can be described by a system of integro-differential equations. Although a large number of journal articles are devoted to numerical methods for the forward solution of such equations, the problems of optimization and optimal control of these systems remain relevant and insufficiently studied.
We consider a one-dimensional model of gas filtration and diffusion as a model with memory. The system includes a partial differential equation for filtration in fractures and a weakly singular Volterra integral equation of the second kind, which describes the diffusion of gas from blocks with closed nanopores. Numerical simulation using a Navot-trapezoidal algorithm shows that the memory effect influences the distribution and time evolution of pressure and density in comparison with the classical double porosity model.
The pressure-constrained maximization of discounted cumulative gas production was chosen as the basic optimization problem. The appearance of memory in the model makes the standard adjoint-based approach inapplicable, since it was developed only for conventional systems of partial differential equations. A novel adjoint model for media with memory was derived from the necessary conditions of optimality using the classical calculus of variations and applied efficiently to the production optimization problem.
In conclusion, we compare optimal control scenarios for the model with memory and for the classical double porosity model. The analysis has shown the importance of accounting for memory in reservoir optimization problems.
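A generic sketch of the structure described above (our notation, not necessarily the authors' exact system) is a fracture filtration equation coupled to a weakly singular Volterra integral equation of the second kind for the matrix exchange term:

```latex
% Sketch only: a filtration PDE in the fractures coupled to a weakly
% singular Volterra equation of the second kind for matrix-block diffusion.
\begin{aligned}
  \phi\,\frac{\partial p}{\partial t}
    &= \frac{\partial}{\partial x}\!\left(\lambda\,\frac{\partial p}{\partial x}\right)
       - q_m(x,t), \\
  q_m(x,t) &= f(x,t)
       + \int_0^t \frac{K(t,s)}{(t-s)^{\alpha}}\, q_m(x,s)\,\mathrm{d}s,
       \qquad 0 < \alpha < 1,
\end{aligned}
```

where $q_m$ is the gas exchange with the nanoporous blocks and the kernel's $(t-s)^{-\alpha}$ factor is what makes the equation weakly singular; the integral over the past history is the "memory" that breaks the standard adjoint derivation.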
Optimization Of Well Rates For Condensate Field Development
Authors: A.I. Ermolaev, A. Nekrassov and I. Trubacheva
Summary: We solve a problem of optimal control of wells on a condensate field, defined as follows: maximize condensate recovery subject to a (selected) well group rate, while individual well rates are unknown. This problem could be solved using industry-standard oilfield and gasfield simulators. However, the use of these simulators and the underlying optimization procedures requires many simulation runs; even for a small number of wells (about 10), this can lead to substantial simulation time.
We propose a method that significantly reduces the total simulation time and finds nearly optimal solutions (within a given tolerance). The reduction is achieved by replacing the criterion of condensate recovery maximization with the criterion of minimizing the maximal well pressure drawdown. Furthermore, the method takes well-to-well interference into account by introducing additional equations. Together, these steps convert the initial problem into one of solving a system of linear equations.
We show an example for a test case with five wells. With a simulator, we obtained the optimal solution in 2.5 days; with our algorithm, we obtained a condensate recovery 0.5% below optimal in 2 hours.
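To make the linear-system reformulation concrete, here is a hypothetical five-well sketch (the influence matrix and group rate are invented, and the authors' actual equations may differ): if drawdowns depend linearly on rates through an interference matrix, requiring all drawdowns to equal a common minimal value under a fixed group rate is a single linear solve.

```python
import numpy as np

# Hypothetical influence matrix for 5 wells (diagonal = self-influence,
# off-diagonal = well-to-well interference); values are illustrative only.
A = np.array([
    [2.0, 0.3, 0.1, 0.0, 0.0],
    [0.3, 2.2, 0.3, 0.1, 0.0],
    [0.1, 0.3, 2.1, 0.3, 0.1],
    [0.0, 0.1, 0.3, 2.3, 0.3],
    [0.0, 0.0, 0.1, 0.3, 2.0],
])
Q_group = 100.0  # total group rate constraint

# Unknowns: the 5 well rates q and the common drawdown s.
# Equations: A @ q - s = 0 (equalized drawdowns), sum(q) = Q_group.
n = A.shape[0]
M = np.zeros((n + 1, n + 1))
M[:n, :n] = A
M[:n, n] = -1.0
M[n, :n] = 1.0
rhs = np.zeros(n + 1)
rhs[n] = Q_group

sol = np.linalg.solve(M, rhs)
q, s = sol[:n], sol[n]
print("well rates:", np.round(q, 2), "common drawdown:", round(s, 2))
```

Equalizing drawdowns minimizes the maximal drawdown here because raising any one well's rate above this balance would raise its drawdown above the common value.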
An Efficient Method To Improve Oil Field Productivity Using Tracers And Dynamic Flow Control Valves
Authors: H.M. Mustapha, D.D. Dias, K.M. Makromallis and T.M. Manai
Summary: Improved recovery methods such as waterflooding, gas injection or tertiary injection fluids are often used to extend production following primary recovery, either by maintaining reservoir pressure or changing the reservoir fluid properties for enhanced oil displacement. A common challenge exists in all these recovery methods: maximizing the sweep efficiency of the injected fluids and predicting flow patterns. This is increasingly difficult in heterogeneous and stacked reservoirs where complex connectivity between wells leads to poor operational choices, often resulting in early breakthrough and diminished ultimate recovery.
To achieve better control of the displacement, advanced completions aim to control flow around the wellbore and strategically allocate production and injection from different parts of the completion. There are currently various types of flow control devices that can be installed to improve overall oilfield productivity. These include static inflow control devices (ICDs) by which inflowing fluids are choked back with nozzles that remain fixed in aperture size during the entire production cycle; devices that can respond to a change in flow rate, density or viscosity of inflowing product that are known as autonomous inflow control devices (AICDs); and devices that can change their flowing area through independent surface control and are called flow control valves (FCVs). Despite recent advances in the optimization of secondary and tertiary recovery performance, optimization methods generally involve elevated computational costs mainly for field-scale development plans. Their strategy is often centered around several optimization variables, complex optimization algorithms, and a substantial number of iterations to succeed.
A new method to optimize the performance of fluid injection schemes uses reservoir simulation techniques. An optimization methodology involving the analysis of numerical tracers attached to wells and FCVs was used. The injected fluids were traced from each device of the injectors towards the production wells, and the breakthrough of tracer, and consequently of injected fluid, was measured in each production well. These data served as input for a feedback control on the device to reduce the injection of fluids accordingly. The operation was performed dynamically to account for changes in fluid distribution over time. The results demonstrate the value of the method as a fast solution for field applications.
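The feedback step can be pictured with a minimal sketch like the one below. The function name, gain, target, and valve limits are all hypothetical; in the workflow above, the tracer concentration would come from the simulator at each report step and the opening would be written back to the FCV.

```python
# Minimal sketch of the tracer-driven feedback idea (all names and
# numbers are hypothetical; a real workflow would read tracer
# concentrations from the reservoir simulator at each report step).

def update_fcv_opening(opening, tracer_conc, target=0.05, gain=2.0,
                       min_open=0.05, max_open=1.0):
    """Choke back a flow control valve in proportion to how far the
    measured tracer concentration at the producer exceeds the target."""
    excess = tracer_conc - target
    new_opening = opening - gain * excess if excess > 0 else opening
    return min(max(new_opening, min_open), max_open)

# Example: a valve at 80% opening sees 15% tracer breakthrough and is
# choked back; one below target is left unchanged.
print(update_fcv_opening(0.8, 0.15))
print(update_fcv_opening(0.8, 0.02))
```

Running this rule at every report step gives the dynamic behavior the abstract describes: devices feeding early-breakthrough producers are progressively choked, redirecting injection toward unswept regions.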
Closed-Loop Reservoir Management Using Nonlinear Model Predictive Control: A Field Case Study
Authors: R. Patel, J.L. Guevara and J. Trivedi
Summary: Closed-loop reservoir management (CLRM) consists of near-continuous data assimilation and real-time optimization to improve oil recovery and reservoir economics. In deep oil sands deposits produced with the steam-assisted gravity drainage (SAGD) recovery process, CLRM involves real-time subcool (the difference between actual and saturation temperature) control to develop a uniform steam chamber along the horizontal injector-producer well pair. Recently, model predictive control (MPC) has been implemented to maintain the optimal subcool; however, the oversimplified models used in MPC are inadequate because reservoir dynamics in SAGD are highly complex, spatially distributed, and nonlinear. This motivates an improved CLRM workflow that incorporates nonlinear physical/empirical models in MPC to represent the flow dynamics accurately over the reservoir life cycle.
In this research, two novel workflows, based on linearization and on nonlinear optimization, are proposed to implement nonlinear model predictive control (NMPC) in CLRM of SAGD reservoirs. Linearization reduces an NMPC problem to linear MPC by estimating an equivalent linear model of a nonlinear black-box model for a given input signal in a mean-square-error sense. Thanks to the linear approximation, the cost function in the MPC can be minimized using quadratic programming (QP) over the specified time horizon. The other approach uses nonlinear dynamic models directly for accurate prediction of the plant states and/or outputs; the resulting nonconvex, nonlinear cost optimization problem is solved using an interior-point algorithm at each control interval. The proposed workflows are tested using a history-matched, field-scale model of a SAGD reservoir located in northern Alberta, Canada. The horizontal well pair with dual-tubing-string completion is segmented, and the subcool in each section is considered an output variable, while the steam injection rates in both tubings and the liquid production rate are the input variables of the NMPC controller. A bi-directional communication link was established between the controller and a thermal reservoir simulator acting as a virtual process plant. Qualitative and quantitative analysis of the results reveals that nonlinear black-box models can successfully capture the nonlinearity of the SAGD process in CLRM. Both workflows can keep the subcool above the desired set-point while ensuring stable well operations. Furthermore, net present value (NPV) is increased by 24% when the proposed NMPC workflows are used in CLRM, compared to the base case with no closed-loop control. Overall, NMPC can be successfully employed in CLRM of SAGD reservoirs for improved real-time subcool control and energy efficiency, and for reduced greenhouse gas emissions, while satisfying the constraints imposed by the surface facilities.
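The linearization route can be illustrated with a one-input toy (the plant function, excitation range, and set-point below are invented stand-ins, not the SAGD model): fit an equivalent linear model to input/output data in a mean-square-error sense, then compute a control move from the fitted model.

```python
import numpy as np

# Hypothetical stand-in for a nonlinear black-box response, e.g. a
# subcool-like output saturating with the input (not the actual SAGD model).
def nonlinear_plant(u):
    return 5.0 * np.tanh(0.4 * u) + 2.0

# Excite the plant over the operating range and record input/output data.
u_data = np.linspace(0.0, 5.0, 50)
y_data = nonlinear_plant(u_data)

# Equivalent linear model y ~ a*u + b, fitted in a mean-square-error
# sense; this is the step that reduces NMPC to linear MPC (where the
# full problem would then be solved by QP over a horizon).
a, b = np.polyfit(u_data, y_data, 1)

setpoint = 5.0
u_cmd = (setpoint - b) / a  # one-step control move from the linear model
print(f"linear model: y ~ {a:.2f}*u + {b:.2f}; commanded input = {u_cmd:.2f}")
```

A real MPC would recompute this at each control interval with constraints and a multi-step horizon; the sketch only shows the model-equivalence idea.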
Well Optimisation With Goal-Based Sensitivity Maps Using Time Windows And Ensemble Perturbations
Authors: C.E. Heaney, P. Salinas, C.C. Pain, F. Fang and I.M. Navon
Summary: Knowledge of the sensitivity of a solution to small changes in the model parameters is exploited in many areas of computational physics: to perform mesh adaptivity, to correct discretisation and sub-grid-scale modelling errors, to assimilate data by adjusting the parameters most sensitive to the model-observation misfit, and, similarly, to form optimised sub-grid-scale models. We present a goal-based approach for forming sensitivity (or importance) maps using ensembles. These maps are defined as regions in space and time of high relevance for a given goal, for example the solution at an observation point within the domain. The presented approach relies solely on ensembles obtained from the forward model and can thus be used with complex models for which calculating an adjoint is not a practical option. This provides a simple approach to sensor placement optimisation, goal-based mesh adaptivity, assessment of goals, and data assimilation. We investigate methods that reduce the number of ensemble members used to construct the maps while retaining reasonable fidelity.
The fidelity comes from an integrated method with a goal-based approach, in which the most up-to-date importance maps are fed back into the perturbations to focus the algorithm on the key variables and domain areas. Within the method, smoothing is applied to the perturbations to obtain a multi-scale, global picture of the sensitivities; the perturbations are orthogonalised to generate a well-posed system that can be inverted; and time windows are applied (for time-dependent problems), working backwards in time to obtain more accurate sensitivity maps.
The approach is demonstrated on a multi-phase flow problem.
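The orthogonalise-and-invert step can be sketched on a toy linear goal functional (everything below is an invented stand-in for the forward model): run the forward model on orthogonalised perturbations, record the goal changes, and invert the well-posed system to recover the sensitivity map without any adjoint.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: the "goal" is the solution at one observation
# point, here a linear functional of the parameters (hidden from the
# method, which only queries the forward model).
true_sensitivity = np.array([3.0, 0.0, -1.5, 0.0, 0.5])
def forward(params):
    return true_sensitivity @ params

n_params, n_ens = 5, 5
base = np.zeros(n_params)
g0 = forward(base)
eps = 0.01

# Orthogonalise random ensemble perturbations (QR) so the resulting
# system is well posed, then invert for the sensitivity map.
P, _ = np.linalg.qr(rng.standard_normal((n_params, n_ens)))
dg = np.array([forward(base + eps * P[:, j]) - g0 for j in range(n_ens)])
sens = np.linalg.solve(eps * P.T, dg)  # dg = eps * P.T @ sens
print("recovered sensitivity map:", np.round(sens, 3))
```

For a genuinely nonlinear model the recovery is only first-order accurate in `eps`, which is why the paper's smoothing, feedback, and time-window machinery matters.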
Compositional Simulation With Capillary Pressure For Oil Production From Tight Formation
Authors: D.R. Sandoval, W. Yan and E.H. Stenby
Summary: The influence of porous media on phase behaviour is a topic of interest driven by the shale gas boom, because many field observations suggest the saturation pressure in tight shale formations may change dramatically. Such a concern has actually existed for decades for other low-permeability tight formations, such as the Lower Cretaceous (LC) formation in the Danish North Sea. However, there is no consensus on the extent of the influence, and little analysis of the issue in the open literature.
The integration of the capillary pressure effect on phase equilibrium into a reservoir simulator is not entirely trivial. The modifications needed will depend on the implicitness level of the numerical model of the simulator, with an increasing complexity as the level increases. In general, the standard thermodynamic routines should be modified to handle the cases where the liquid pressure becomes negative as a result of the high capillary pressures. The flash and stability analysis routines involving capillary pressure need an efficient implementation to maintain the robustness and speed needed during simulation. For the linear solver, the derivatives of the selected pressure models must be obtained and implemented in a consistent way to avoid differences between the capillary pressure model used for phase equilibrium, and the capillary pressure used for the flow equations. A fully implicit compositional simulator was modified by adding the influence of the capillary pressure into the phase behavior. The customized tool served to investigate a natural depletion scenario of a shale reservoir and a tight reservoir from the LC formation in the Danish North Sea using different capillary pressure models.
In general, low to moderate deviations in cumulative oil production, pressure profiles, and saturation profiles were observed for cases with effective pore sizes below 40 nm. For the producing gas-oil ratio, considerable deviations were found even for pore sizes close to 100 nm. Moreover, a pore size distribution was compared to the fixed-pore-size assumption in the capillary pressure model. A variable-pore-size capillary pressure model gives results similar to those obtained at fixed capillary radius; in the long term, the results are closer to the effective pore size calculated at the bubble point, given by the maximum value of the pore size distribution.
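A back-of-the-envelope Young-Laplace estimate shows why pore sizes in this range matter (the interfacial tension and contact angle below are illustrative values, not the paper's fluid data): capillary pressure scales inversely with pore radius, becoming tens of bars at nanometre scales.

```python
import math

# Young-Laplace capillary pressure for a cylindrical pore:
# Pc = 2 * sigma * cos(theta) / r. Fluid properties are illustrative.
sigma = 0.02   # interfacial tension, N/m
theta = 0.0    # contact angle, radians (fully wetting)
for r_nm in (10, 40, 100):
    pc = 2 * sigma * math.cos(theta) / (r_nm * 1e-9)
    print(f"r = {r_nm:>3} nm -> Pc ~ {pc / 1e5:.1f} bar")
```

At such pressures the liquid-phase pressure entering the flash can become strongly shifted (or even negative), which is exactly the regime the modified thermodynamic routines in the paper must handle.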
Scaling Law For Slip Flow Of Gases In Nanoporous Media From Nanofluidics, Rocks, And Pore-Scale Simulations
Summary: In unconventional reservoirs, as the effective pore size approaches the mean free path of the gas molecules, gas transport behavior begins to deviate from Darcy's law. The objective of this study is to explore the similarities of gas flows in nanochannels and core samples, as well as those simulated by direct simulation BGK (DSBGK), a particle-based method that solves the Bhatnagar-Gross-Krook (BGK) equation.
Due to fabrication difficulties, previous experimental work on gas flow in nanochannels is very limited. In this work, steady-state gas flow was measured in reactive-ion-etched nanochannels on a silicon wafer, which have a controlled channel size. A core-flooding apparatus was used to perform steady-state gas flow measurements on carbonate and shale samples. Klinkenberg permeability was obtained under varying pore pressures but constant temperature and effective stress. The same gas was used in the nanofluidic and rock experiments, making them directly comparable. Results from both experiments were then compared to gas flow simulations with the DSBGK method carried out on several independently constructed geometry models. DSBGK uses hundreds of millions of simulated molecules to approximate gas flow inside the pore space. Intermolecular collisions are handled by directly integrating the BGK equation along each molecule's trajectory, rather than through a sampling scheme such as that in the direct simulation Monte Carlo (DSMC) method. Consequently, the stochastic noise is significantly reduced, and simulation of nano-scale gas flows in complex geometries becomes computationally affordable.
The Klinkenberg factors obtained from these independent studies varied across three orders of magnitude, yet they all collapse onto a single scaling relation in which the Klinkenberg factor in the slip flow regime is inversely proportional to the square root of intrinsic permeability over porosity. Our correlation also fits data from the literature, which were often obtained using nitrogen, after correcting for temperature and gas properties. This study contributes to rock characterization and well-test analysis, as well as to the understanding of rarefied gas transport in porous media.
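The slip correction and the scaling relation above can be sketched as follows. Note the prefactor `c` is purely illustrative (the paper's fitted constant is not given here), so the printed ratios indicate the trend only, not the study's quantitative result.

```python
import math

# Apparent gas permeability with the Klinkenberg slip correction:
# k_app = k_inf * (1 + b / p), where b is the Klinkenberg factor.
def apparent_perm(k_inf, b, p):
    return k_inf * (1.0 + b / p)

# Scaling found in the study: b is inversely proportional to
# sqrt(k_inf / phi). The prefactor c below is illustrative only,
# not the fitted value.
def klinkenberg_factor(k_inf, phi, c=0.01):
    return c / math.sqrt(k_inf / phi)

k_inf, phi = 1e-18, 0.05  # ~1 nD rock at 5% porosity (illustrative)
b = klinkenberg_factor(k_inf, phi)
for p in (1e5, 1e6, 1e7):  # pore pressure, Pa
    ratio = apparent_perm(k_inf, b, p) / k_inf
    print(f"p = {p:.0e} Pa -> k_app/k_inf = {ratio:.2f}")
```

The qualitative behavior matches the slip-flow picture: tighter rock (smaller `k_inf/phi`) means a larger `b`, and the apparent permeability enhancement grows as pore pressure drops.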
Systematic Hybrid Modelling Using Fracture Subset Upscaling
Authors: D.L.Y. Wong, F. Doster, A. Kamp and S. Geiger
Summary: Multi-scale fractured reservoirs can be modelled effectively using hybrid methods that partition the fractures into two subsets: one where the fractures are upscaled and another where they are represented explicitly. Existing partitioning methods are qualitative or empirical.
In this paper, we present a novel and quantitative partitioning approach based on a single-porosity hybrid modelling workflow that uses numerical (Embedded Discrete Fracture Methods – EDFM) and semi-analytical (Effective Medium Theory – EMT) methods for fracture subset upscaling. We demonstrate this workflow using synthetic fracture data and realistic data sourced from outcrops of the Jandaira Carbonate Formation in the Potiguar Basin, Brazil.
Fracture subset upscaling with EDFM and EMT using three datasets (two real, one synthetic) shows that the smallest, most numerous fractures are poorly connected. The ability of fracture subset upscaling to identify these fractures is essential to the hybrid modelling workflow. EDFM and EMT methods give nearly identical results, but EMT enables us to greatly accelerate the calculations.
To validate our workflow, hybrid models were created with different partitioning sizes and compared against EDFM simulations in which all fractures are represented explicitly. A single-phase pressure drawdown was used as a test problem. The simulation results show that once the upscaled fractures begin to connect, deviations in the flow response start to grow, because single-porosity representations cannot capture the separation of timescales between flow in a well-connected fracture subset and flow in the matrix. In some cases, the flow regime in the model was observed to change entirely.
Overall, the results justify the proposed workflow as a means for systematic and quantitative construction of hybrid models.
An Efficient Hybrid Grid Cross-Flow Equilibrium Model For General Purpose Field-Scale Fractured Reservoir Simulation
Authors: H.M. Mustapha, K.M. Makromallis and A.C. Cominelli
Summary: Multiphase flow simulation in fractured reservoirs at field scale is a significant challenge. Despite recent advances and a wide range of applications in both hydrology and hydrocarbon reservoir engineering, efficient methods that address computational complexity, accuracy and flexibility remain of paramount importance for a better understanding of these complex media. In this work, we present a new method that handles both the topological and computational complexities of fractures, drawing on the advantages of various existing approaches, including hybrid grid and crossflow equilibrium models. The hybrid grid (HG) model represents fractures as lower-dimensional objects that are still represented as control volumes in a computational grid. The HG model is equivalent to a single-porosity model with a practical solution for the small control volumes at the intersections between fractures; however, the overall simulation run time is still dominated by the remaining small fracture control volumes. To overcome these single-porosity computational challenges, a crossflow equilibrium (CFE) concept between discrete fractures and a small neighborhood of the matrix blocks can be employed. The CFE model combines fractures with a small fraction of the neighboring matrix blocks on either side into larger elements to achieve better computational efficiency than conventional single-porosity models. Implementing a CFE model at field scale is not practical, however, because of the fracture topological challenges associated with constructing an accurate computational grid for the CFE elements.
In this work, we propose a method based on a combination of the HG and CFE models to overcome the challenges associated with the small fracture control volumes of the HG model and with field-scale CFE grid construction. First, we assess the performance of the existing CFE model and propose an improved model. In addition, we suggest an input data handling method sufficient to account for fractional flow inside the CFE elements for flow in homogeneous fractured reservoirs, without the need for any change in the simulator. Second, we describe the uniqueness of the proposed method, and we discuss different numerical examples to assess both accuracy and computational efficiency. The results obtained are very accurate, and speedups of one to two orders of magnitude can be achieved computationally. The improved CFE results are superior to those of the traditional CFE model; combined with the HG model, the results are significantly improved while retaining very good performance.
Quantification Of Coarsening Effect On Response Uncertainty In Reservoir Simulation
Authors: S. de Hoop, D.V. Voskov, F.C. Vossepoel and A. Jung
Summary: In this study, an attempt is made to better understand the effect that coarsening of the parameter space has on the uncertainty representation of the response. First, a high-fidelity (HF) ensemble of channelized reservoir models is constructed using a Multi-Point Statistics (MPS) approach. Several levels of coarsening are generated using a flow-based upscaling algorithm. A water injection strategy is simulated for each scale of the hierarchical ensemble. Dynamic analysis is performed on a reduced representation of the response uncertainty obtained via Multidimensional Scaling (MDS). We introduce an Uncertainty Trajectory (UT), which quantifies the coarsening effect in terms of deviation from the HF ensemble response uncertainty. The UT also captures the temporal behavior of the response uncertainty at each ensemble scale. The mean integrated distance from the HF ensemble UT can be used as a measure of dissimilarity in the flow behavior of consecutively coarser ensemble scales. The proposed methodology reduces the number of HF flow simulations required for uncertainty quantification, thereby greatly reducing the overall computational cost.
Multifidelity Framework For Uncertainty Quantification With Multiple Quantities Of Interest
Authors: F.F. Kostakis, B.T. Mallison and L.J. Durlofsky
Summary: A systematic framework, involving flow simulation and model selection at many fidelity (resolution) levels, is introduced to accurately quantify the impact of geological uncertainty on output quantities of interest (QoIs). The methodology considers large numbers of realizations (O(1000) in the cases presented), though very few (O(10)) simulations are performed at the highest resolutions. We proceed from coarser to finer resolution levels, and at each stage the simulation results are used to select a subset of realizations to simulate at the next (higher) fidelity level. Models are constructed at all resolution levels through upscaling of the underlying fine-scale realizations; a global transmissibility upscaling procedure is applied for this purpose. Approximate cumulative distribution functions (CDFs) are constructed for all QoIs considered. The QoI values themselves are always computed at the finest scale, but the corresponding percentile values are determined using results at a 'rank-preserving' fidelity level. Detailed results are presented for oil-water flow in a channelized system. Simulations at seven different fidelity levels are used, and eight QoIs are evaluated. Results for the example considered demonstrate accurate reconstruction of the fine-scale CDFs for all QoIs, with a speedup factor of 12 relative to performing all simulations at the fine scale.
Stochastic Oilfield Optimization For Hedging Against Uncertain Future Development Plans
Authors: A. Jahandideh and B. Jafarpour
Summary: We develop a new oilfield optimization framework that treats future development plans as uncertain. To handle this uncertainty, we formulate a multi-stage stochastic optimization approach that considers several plausible scenarios for future development. These scenarios are used to predict the net present value (NPV) of the reservoir over its life cycle. At each stage, the current decision variables (including well locations and controls) are identified by optimizing the predicted project NPV, which is computed based on stochastic descriptions of the number, locations, and control settings of wells in future development stages. We compare the performance of the stochastic approach with optimization assuming perfect information about future development plans, and with optimization disregarding uncertainty in future development activities, and draw important conclusions about the behavior of the proposed formulation.
Monte Carlo Simulation For Uncertainty Quantification In Reservoir Simulation: A Convergence Study
Authors: M.A. Cremon, M. Christie and M.G. Gerritsen
Summary: The present work illustrates the convergence properties of Monte Carlo Simulation (MCS) used to quantify geological uncertainty in a 3D, three-phase reservoir simulation test case. The reservoir model, along with fluid and numerical properties, was obtained from a major oil and gas company. We generate 10,000 realizations of a geological model and run black-oil flow simulations using a commercial reservoir simulator with a synchronous parallel implementation. The distributions of the moments and quantiles of the net present value (NPV) are presented in the form of their cumulative distribution functions (CDFs). We also show the distributions of the break-even time (BET) and the probability of breaking even, to assess the effect of considering quantities of a different nature. We use log-plots to assess the convergence of the results, and verify that the convergence of the quantities of interest follows a square-root law in the number of realizations used. We quantify the relative error made on various quantities and illustrate that the use of a small ensemble can yield errors of hundreds of percent, and that lowering the error below a given precision (e.g. ten percent) can require thousands of realizations. For decision making and profitability assessments, using large sets of realizations is now feasible thanks to the availability of fast, distributed architectures and the parallel nature of MCS; our results suggest the improvement in the quality of the results is significant and well worth the extra effort. For optimization and sensitivity studies, running large ensembles is still intractable, but the resulting sets of quantiles can be used as a reduced-order model (ROM). Setting up a test case using this dataset is under consideration and could provide an interesting integrated setup for comparing uncertainty quantification (UQ) methods.
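The square-root law is easy to demonstrate on a toy heavy-tailed quantity (the lognormal population below is an invented stand-in for an NPV distribution): the standard error of the ensemble mean shrinks like 1/sqrt(N), so 100x more realizations buy roughly one extra decimal digit of accuracy.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative check of the square-root convergence law on a synthetic,
# skewed population standing in for per-realization NPV values.
population = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)

errors = {}
for n in (100, 10_000):
    # Standard deviation of the ensemble mean over 200 repeated draws.
    means = [rng.choice(population, size=n).mean() for _ in range(200)]
    errors[n] = float(np.std(means))
    print(f"N = {n:>6}: std error of the mean ~ {errors[n]:.4f}")
```

The 100x larger ensemble shows roughly a 10x smaller standard error, consistent with the 1/sqrt(N) convergence the paper verifies on its reservoir QoIs.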
A Data-Space Approach For Well Control Optimization Under Uncertainty
Authors S. Jiang, W. Sun and L.J. DurlofskySummaryData-space inversion (DSI) methods provide posterior (history-matched) predictions for quantities of interest, along with uncertainty quantification, without constructing posterior models. Rather, predictions are generated directly from a large set of prior-model simulations and observed data. In this work we develop a data-space inversion with variable controls (DSIVC) procedure that enables forecasting with user-specified well controls in the prediction period. In DSIVC, flow simulations on all prior realizations, with randomly sampled well controls, are first performed. User-specified controls are treated as additional observations to be matched in posterior predictions. Posterior data samples are generated using a randomized maximum likelihood procedure, with some algorithmic treatments applied to improve performance. Results are presented for a channelized system. For any well control specification, posterior predictions can be generated in seconds or minutes. Posterior predictions from DSIVC are compared to reference DSI results. DSI requires prior models to be resimulated using the specified controls, while DSIVC requires only one set of prior simulations. Substantial uncertainty reduction is achieved through data-space inversion, and reasonable agreement between DSIVC and DSI results is consistently observed. DSIVC is applied for data assimilation combined with production optimization under uncertainty, and clear improvement in the objective function is attained.
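The randomized maximum likelihood (RML) sampling step mentioned in this abstract can be illustrated in the linear-Gaussian case, where each RML sample is an exact posterior draw. This is a minimal sketch with hypothetical toy values for the forward model and covariances, not the DSIVC implementation.

```python
# Randomized maximum likelihood (RML) in the linear-Gaussian case:
# each sample minimizes a least-squares objective built from perturbed
# observations and a prior draw, and is then an exact posterior sample.
# G, C_M, C_D and d_obs below are hypothetical toy values.
import numpy as np

rng = np.random.default_rng(1)
G = np.array([[1.0, 0.5]])   # linear forward model (toy)
m_pr = np.zeros(2)           # prior mean
C_M = np.eye(2)              # prior covariance
C_D = np.array([[0.1]])      # data-error covariance
d_obs = np.array([1.2])      # observed data (toy)

# Kalman gain: closed-form minimizer of the RML objective (linear case)
K = C_M @ G.T @ np.linalg.inv(G @ C_M @ G.T + C_D)

def rml_sample():
    m_star = rng.multivariate_normal(m_pr, C_M)           # prior draw
    d_star = d_obs + rng.multivariate_normal([0.0], C_D)  # perturbed data
    return m_star + K @ (d_star - G @ m_star)

samples = np.array([rml_sample() for _ in range(2000)])
print(samples.mean(axis=0))  # approaches the posterior mean K @ d_obs
```

In the nonlinear setting of the paper, the closed-form minimizer is replaced by an iterative minimization of the same objective.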
Performance Enhancement Of Gauss-Newton Trust Region Solver For Distributed Gauss-Newton Optimization Method
Authors G. Gao, H. Jiang, J.C. Vink, P.H. van Hagen and T.J. WellsSummaryThe Distributed Gauss-Newton (DGN) method has proved to be very efficient and robust for history matching (HM) and uncertainty quantification (UQ). In each iteration, DGN needs to solve an ensemble of hundreds to thousands of trust-region subproblems (TRS) concurrently, which is extremely computationally expensive, especially when applied to large-scale history matching problems. In this paper, different approaches are developed to reduce the computational cost, and their performance is compared with that of other well-known methods.
The original Gauss-Newton trust-region (GNTR) solver solves a nonlinear equation iteratively using the modified Newton-Raphson method, which involves solving a large-scale symmetric linear system twice. In this paper, we propose to approximate the nonlinear GNTR equation with either an inverse quadratic model or a cubic spline model, fitted to evaluations of the nonlinear equation at points from previous iterations. The computational cost can be cut in half, because the symmetric linear system then needs to be solved only once per iteration.
The proposed approach is validated on two sets of synthetic test problems: small-scale problems with 2,000 to 5,000 parameters and large-scale problems with 10,000 to 100,000 parameters. Each set contains 500 test problems with different numbers of parameters and observed data. The GNTR solver using an inverse quadratic model performs comparably to the GNTR solver using a cubic spline model. Their performance is also compared with that of the well-known direct TRS solver using factorization and the iterative TRS solver using a conjugate-gradient approach, both from the GALAHAD optimization library. In terms of efficiency, robustness, and memory usage, the two newly proposed GNTR solvers outperform the two TRS solvers of the GALAHAD optimization library.
Finally, the proposed GNTR solvers have been implemented in our in-house distributed HM and UQ system and validated on different real-field HM examples. Our numerical experiments indicate that the DGN optimizer using the new GNTR solvers performs stably and effectively when applied to real-field HM problems.
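The inverse quadratic model described in this abstract can be illustrated on a scalar trust-region secular equation with a diagonal toy Hessian, where each evaluation of the equation stands in for one expensive linear solve. This is a minimal sketch with hypothetical values, not the authors' implementation.

```python
# Inverse quadratic interpolation for a scalar trust-region secular
# equation phi(lam) = ||p(lam)|| - Delta with a diagonal Hessian.
# h, g and delta below are hypothetical toy values.
import math

h = [1.0, 2.0, 5.0]   # Hessian eigenvalues (toy)
g = [1.0, 1.0, 1.0]   # gradient components (toy)
delta = 0.5           # trust-region radius

def phi(lam):
    """Secular equation: step norm minus trust-region radius."""
    return math.sqrt(sum((gi / (hi + lam)) ** 2
                         for gi, hi in zip(g, h))) - delta

def inverse_quadratic_root(f, l0, l1, l2, tol=1e-10, max_iter=50):
    """Fit lam as a quadratic in phi through three points; evaluate at phi = 0."""
    pts = [(l, f(l)) for l in (l0, l1, l2)]
    for _ in range(max_iter):
        (a, fa), (b, fb), (c, fc) = pts
        lam = (a * fb * fc / ((fa - fb) * (fa - fc))
               + b * fa * fc / ((fb - fa) * (fb - fc))
               + c * fa * fb / ((fc - fa) * (fc - fb)))
        flam = f(lam)
        if abs(flam) < tol:
            return lam
        pts = [pts[1], pts[2], (lam, flam)]  # keep the three newest points
    return lam

lam_star = inverse_quadratic_root(phi, 0.0, 1.0, 2.0)
print(round(lam_star, 6))
```

Each new iterate reuses function values from previous iterations, which is what allows the solver to avoid the second linear solve per iteration.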
Performance Assessment of Ensemble Kalman Filter and Markov Chain Monte Carlo under Forecast Model Uncertainty
Authors R. Patel, T. Jain, J. Trivedi and J. GuevaraSummaryEnsemble Kalman filter (EnKF) and Markov chain Monte Carlo (MCMC) are popular methods for obtaining the posterior distribution of unknown parameters in a reservoir model. However, millions of simulation runs may be required in MCMC for accurate sampling of the posterior, as subsurface flow problems are highly nonlinear and non-Gaussian. Similarly, EnKF, formulated on the basis of linear and Gaussian assumptions, may also require a large number of realizations to correctly map the solution space of the unknown model parameters, ultimately resulting in high computational cost. Data-driven meta/surrogate/proxy models provide an alternative solution to alleviate the issue of high computational cost. Since these models are not as accurate as numerical solutions of partial differential equations (PDE), their implementation may add uncertainty to the forecast model. In the literature, the effect of forecast-model uncertainty on data assimilation is not well studied, especially with field-scale reservoir models.
In this work, we propose a robust assisted history matching workflow using a polynomial chaos expansion (PCE) based forecast model. The proposed forecast model relies on reducing the parameter space using the Karhunen–Loeve (KL) expansion, which preserves the two-point statistics of the field. Random variables from the KL expansion and orthogonal polynomials corresponding to the prior probability density function (pdf) form the set of input parameters in the PCE. Further, a non-intrusive probabilistic collocation method (PCM) is used to compute the PCE coefficients. The PCE forecast model is then used in EnKF and MCMC to calculate the likelihood of the samples in place of high-fidelity full-physics simulation runs.
A case study is performed using a 3D field-scale model of a reservoir located near Fort McMurray in northern Alberta, Canada. The performance of EnKF and MCMC is assessed under forecast model uncertainty using rigorous qualitative and quantitative analysis and posterior distribution characterization. Results clearly show that, although EnKF provided reliable mean and variance estimates of the model parameters, MCMC outperformed the former even under the uncertainty associated with the PCE metamodel. Inaccurate initial assumptions about the model parameters were successfully handled by MCMC, although with a longer burn-in period. Furthermore, characterization of the posterior demonstrated reduced uncertainty in the estimation of model parameters using MCMC as compared to EnKF.
Practical implications of the proposed approach, and its performance assessment under forecast model uncertainty, will be consequential in designing accurate and computationally efficient reservoir characterization and optimization workflows, and hence in improving decision-making in reservoir management.
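The Karhunen–Loeve reduction step described in this abstract amounts to an eigendecomposition of a covariance matrix, truncated to the leading modes. The following is a minimal 1D sketch with a hypothetical exponential covariance, not the field model from the paper.

```python
# Karhunen-Loeve reduction: eigendecompose a covariance matrix and
# keep the leading modes. The 1D grid and exponential covariance
# (length scale L) are hypothetical toy choices.
import numpy as np

rng = np.random.default_rng(0)
n, L = 100, 15.0
x = np.arange(n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / L)  # exponential covariance

w, V = np.linalg.eigh(C)                  # eigenpairs, ascending order
idx = np.argsort(w)[::-1]                 # reorder to descending energy
w, V = w[idx], V[:, idx]

m = int(np.searchsorted(np.cumsum(w) / w.sum(), 0.95)) + 1  # keep 95% energy
xi = rng.standard_normal(m)               # KL random variables
field = V[:, :m] @ (np.sqrt(w[:m]) * xi)  # one reduced-order realization
print(m, "modes out of", n, "grid cells")
```

The m random variables xi, far fewer than the number of grid cells, become the inputs of the PCE forecast model.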
A Multiscale Method For Data Assimilation
Authors R. Moraes, H. Hajibeygi and J.D. JansenSummaryIn data assimilation problems, various types of data are naturally linked to different spatial resolutions (e.g. seismic and electromagnetic data), and these scales usually do not coincide with the subsurface simulation model scale. Alternatives like down/upscaling of the data and/or the simulation model can be used, but with potential loss of important information. To address this issue, a novel Multiscale (MS) data assimilation method is introduced. The overall idea of the method is to keep uncertain parameters and observed data at their original representation scale, avoiding down/upscaling of any quantity. The method relies on a recently developed mathematical framework to compute adjoint gradients via an MS strategy. The fine-scale uncertain parameters are directly updated, and the MS grid is constructed at a resolution that matches the observed data resolution. The advantages of the technique are demonstrated in the assimilation of data represented at a coarser scale than the simulation model. The misfit objective function is constructed to preserve the MS nature of the problem: the regularization term is represented at the simulation model (fine) scale, whereas the data misfit term is represented at the observed data (coarse) scale. The performance of the method is demonstrated on synthetic models and compared to down/upscaling strategies. The experiments show that the MS strategy provides advantages 1) on the computational side – expensive operations are only performed at the coarse scale; 2) with respect to accuracy – the matched uncertain parameter distribution is closer to the “truth”; and 3) in optimization performance – faster convergence behaviour due to faster gradient computation.
In conclusion, the newly developed method is capable of providing superior results when compared to strategies that rely on the up/downscaling of the response/observed data, addressing the scale dissimilarity via a robust, consistent MS strategy.
History Matching Of Real Production And Seismic Data In The Norne Field
Authors R. Lorentzen, T. Bhakta, D. Grana, X. Luo, R. Valestrand and G. NaevdalSummaryAutomatic history matching using production and seismic data is still challenging due to the size of seismic datasets. The most severe problem when applying ensemble-based methods to assimilate large datasets is that the uncertainty is usually underestimated, because the number of models in the ensemble is limited compared to the dimension of the data, which inevitably leads to ensemble collapse. Localization and data reduction methods are promising approaches for mitigating this problem.
In this paper, we present a new robust and flexible workflow for assimilating seismic attributes and production data. The methodology is based on a sparse representation of the seismic data, using methods developed for image denoising. The approach can be applied to seismic data or to inverted seismic attributes obtained from geophysical inverse methods. The seismic response in the forward model is computed using a petroelastic model that depends on several petrophysical parameters, including lithology, porosity, and saturation.
We propose to assimilate production and seismic data sequentially, which makes scaling of different data types redundant and allows for the use of different localization techniques. We use traditional distance-based localization for production data, and a newly developed correlation-based localization technique for seismic data. The latter is necessary because the image denoising method utilizes discrete wavelet transforms, which render the seismic data without spatial positions.
The workflow is successfully implemented for the Norne field, and an iterative ensemble smoother is used for the sequential assimilation of production data and acoustic impedance. We show that the methodology is robust and ensemble collapse is avoided. Furthermore, the proposed workflow is flexible, as it can be applied to seismic data or inverted seismic properties, and the methodology requires only moderate computer memory. The results show that through this method we can successfully reduce the data mismatch for both production data and seismic data.
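Distance-based localization of the kind used here for production data is typically implemented with the Gaspari-Cohn taper, which damps ensemble covariances smoothly to zero beyond a cutoff distance. This is a minimal sketch of the standard taper function; the abstract does not specify that this particular taper was used.

```python
# Gaspari-Cohn fifth-order taper, a standard choice for distance-based
# covariance localization: equals 1 at zero distance and decays
# smoothly to 0 at twice the localization radius.
import math

def gaspari_cohn(r):
    """Taper value at normalized distance r = distance / localization radius."""
    if r >= 2.0:
        return 0.0
    if r >= 1.0:
        return (((((r / 12.0 - 0.5) * r + 0.625) * r + 5.0 / 3.0) * r - 5.0) * r
                + 4.0 - 2.0 / (3.0 * r))
    return ((((-0.25 * r + 0.5) * r + 0.625) * r - 5.0 / 3.0) * r ** 2 + 1.0)

for r in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(r, round(gaspari_cohn(r), 4))
```

In an ensemble smoother, each element of the Kalman gain is multiplied by the taper value for the distance between the corresponding parameter cell and observation location, which suppresses the spurious long-range correlations that small ensembles produce.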