ECMOR X - 10th European Conference on the Mathematics of Oil Recovery
- Conference date: 04 Sep 2006 - 07 Sep 2006
- Location: Amsterdam, Netherlands
- ISBN: 978-90-73781-47-4
- Published: 04 September 2006
Improved Modeling of 4D Seismic Response Using Flow-Based Downscaling of Coarse Grid Saturations
Authors: S. Castro, J. Caers and L. Durlofsky

4D seismic data is used to monitor the movement of fluids in the reservoir with time and can be incorporated into the history matching process by minimizing the difference between the 4D seismic observed in the field and the 4D seismic computed from the reservoir model. Modeling 4D seismic data involves the computation of a “base” 3D seismic, which is straightforward if the reservoir has a known initial distribution of fluids, and the “time lapse” 3D seismic, which entails use of the simulated fluid distribution. The flow simulations that provide these fluid distributions are typically performed at relatively coarse scales.
It has been observed that fine details in the saturation distribution (although below seismic resolution) can impact the seismic response (Mavko and Mukerji, 1998; Sengupta and Mavko, 1998; Sengupta, 2000). Downscaling saturation outputs from the flow simulator may therefore be required to correctly model the 4D seismic response. In this paper we propose an approximate method for downscaling saturations where local fine scale flows are simulated to reconstruct the fine scale saturation using local boundary conditions determined from the global coarse scale two-phase flow solution. This reconstruction does not require any global fine scale computations, guarantees flux continuity across fine scale cells in neighboring coarse blocks, and accounts for subgrid heterogeneity. Using a 2D synthetic example, we demonstrate how ignoring the fine scale effects can produce erroneous 4D seismic responses. We demonstrate that our flow-based downscaling procedure improves these results significantly.
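The flux-continuity property of the reconstruction can be illustrated in a few lines. The helpers below (hypothetical names, not the authors' implementation) split a coarse-face flux across fine faces by transmissibility and then advance one local explicit transport step with a linear fractional flow; all coefficients are illustrative.

```python
import numpy as np

def allocate_face_flux(q_coarse, trans_fine):
    """Split one coarse-face flux across the fine faces it contains, in
    proportion to fine-scale transmissibility. The fine fluxes sum to the
    coarse flux exactly, which is the flux-continuity property noted above."""
    trans_fine = np.asarray(trans_fine, dtype=float)
    return q_coarse * trans_fine / trans_fine.sum()

def local_saturation_update(s_coarse, q_fine, dt, phi, vol, s_inj=1.0):
    """One explicit upwind step of a unit-mobility (linear fractional flow)
    transport solve inside the coarse block: each fine row advances with its
    own allocated flux, creating the sub-grid saturation detail that a single
    coarse value cannot represent."""
    s = np.full(len(q_fine), s_coarse, dtype=float)
    return np.clip(s + dt * q_fine / (phi * vol) * (s_inj - s), 0.0, 1.0)
```

With heterogeneous fine transmissibilities, the reconstructed saturations differ cell by cell even though they all started from the single coarse value.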
Using Time Domain Seismic Attributes for History Matching
Authors: A. Fornel, F. Roggero and B. Noetinger

The goal of reservoir characterization is to enhance the description of oil and gas reservoirs to get a reliable prediction of their dynamic behaviour using numerical simulations. In optimized management, geological models need to be updated with each new piece of available data (well tests, production data, seismic data, ...). The value of each individual data type does not lie in its isolated use, but rather in the value it adds when integrated with other data.
We present an integrated methodology for reservoir characterization, based on a non-linear optimization loop, which uses both production data (rather local information) and 4D seismic data (spatial information) for improved geological modelling. The main difficulty is to manage the depth/time conversion for the integration of 4D attributes and to keep it consistent with fluid flow simulation results. To do this, we propose a new methodology to match seismic attributes in time by updating the depth/time conversion in the inversion workflow.
The proposed methodology is based on a simulation workflow integrating geological modelling, upscaling, multi-phase fluid flow simulation coupled with rock physics modelling, depth/time conversion and frequency filtering. The reliability of the geological model is improved through the minimisation of a weighted, least-squares objective function, which measures the mismatch between the simulated results and the data (both production and 4D seismic).
The huge amount of data brought by the 4D seismic increases the difficulty of solving the numerical problem during the optimization process. Thus, we propose an innovative approach to compute gradients of the objective function with respect to the optimization parameters.
Our methodology was successfully applied in a multi-scale reservoir characterization process. The 3D model is constrained by compressional/shear impedances and the two-way travel times for compressional waves at the base seismic survey.
Using the Ensemble Kalman Filter with 4D Data to Estimate Properties and Lithology of Reservoir Rocks
Authors: J. A. Skjervheim, B. O. Ruud, S. I. Aanonsen, G. Evensen and T. A. Johansen

Seismic reservoir characterisation is improved by introducing uncertainty in the parameters of the rock physics model. Traditionally, the Ensemble Kalman filter (EnKF) method has been used to estimate permeability and porosity.
In this paper we show that when seismic difference data are available, the lithology, which is coupled to the effective bulk modulus via the rock physics model, can also be estimated. Incorporation of inverted seismic difference data in the EnKF introduces a large amount of data in the assimilation step. Thus, to improve the results, a methodology based on a combination of a global and a local analysis scheme is proposed. The global and local analyses are used to assimilate the production data and the inverted seismic difference data, respectively, where the local scheme assumes that only seismic data within a certain distance of a state variable will impact the analysis of that variable. The technique is applied to synthetic 2D and 3D reservoir models, where the effects of using local versus global analysis schemes on different inverted seismic difference data, such as acoustic impedance and Poisson's ratio, are investigated. The effect of using an incorrect seismic data error model in the analysis schemes is also evaluated.
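A minimal NumPy sketch of the distance-based local analysis idea, assuming a linear observation operator and a hard cutoff radius (the paper's localization scheme is more general than this illustration):

```python
import numpy as np

def enkf_local_update(X, d_obs, H, obs_pos, state_pos, radius, sigma_d, rng):
    """One EnKF analysis step with a hard distance-based local scheme:
    observations farther than `radius` from a state variable do not
    influence it. X: (n_state, n_ens) ensemble; H: (n_obs, n_state)
    linear observation operator; positions are 1-D coordinates."""
    n_state, n_ens = X.shape
    n_obs = len(d_obs)
    Y = H @ X                                   # predicted observations
    A = X - X.mean(axis=1, keepdims=True)       # state anomalies
    Yp = Y - Y.mean(axis=1, keepdims=True)      # predicted-obs anomalies
    D = d_obs[:, None] + sigma_d * rng.standard_normal((n_obs, n_ens))
    Xa = X.copy()
    for i in range(n_state):
        keep = np.abs(obs_pos - state_pos[i]) <= radius
        if not keep.any():
            continue                            # local scheme: variable untouched
        C_xy = (A[i:i + 1] @ Yp[keep].T) / (n_ens - 1)
        C_yy = (Yp[keep] @ Yp[keep].T) / (n_ens - 1) + sigma_d ** 2 * np.eye(keep.sum())
        K = C_xy @ np.linalg.inv(C_yy)          # local Kalman gain (one row)
        Xa[i] += (K @ (D[keep] - Y[keep])).ravel()
    return Xa
```

State variables with no observations inside the radius keep their prior ensemble, which is exactly how the local scheme limits the reach of the large seismic data set.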
A Multiscale and Metamodel Simulation-Based Method for History Matching
Authors: A. A. Rodriguez, H. M. Klie, S. G. Thomas and M. F. Wheeler

This paper presents a novel framework for history matching using the concepts of simulation-based optimization with guided search sampling, multiscale resolution and incremental metamodel (surrogate model) generation, aimed at mitigating the computational burden of large-scale history matching.
The initial stage of the framework consists of a multiscale treatment of the permeability field through successive wavelet transformations. The coarsest grid, which represents a highly constrained parameter space, is sampled with the aid of a derivative-free stochastic optimization algorithm that detects the most promising search regions. Due to the small size of the coarse grid, thousands of simulation runs are possible at a low computational cost. Next, a sequence of intermediate metamodels is built iteratively by gradually increasing the number of sampling points in the decision space and using these temporary models to guide an incremental sampling. This incremental sampling is dictated by the use of an optimization method that finds a local optimum solution in a few iteration steps. The iterative refinement process is terminated when the metamodel solution is capable of reproducing (within a predefined tolerance) the reservoir simulator response. These metamodels are constructed using a support vector machine approach that captures the causal relations embedded in reservoir simulation by discriminating the true signal from the noise without over-fitting the simulation results. Finally, the coarse grid optimal solution is used as an initial point for the next finer grid level with the use of the inverse wavelet transform. The procedure is repeated with a decreasing number of function evaluations as the grid resolution level is increased. The objective function includes well production data and sensor measurements. Numerical experiments on realistic data reveal that the proposed framework improves the history matching process, both in terms of computational savings and in the accuracy of the estimated permeability field.
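The coarsen/refine cycle between grid levels can be sketched with one level of a 2-D Haar wavelet transform, keeping only the approximation part (averaging, i.e. the Haar approximation up to normalization; an illustration, not the paper's transform):

```python
import numpy as np

def haar_coarsen(field):
    """One level of a 2-D Haar-style coarsening: keep only the
    approximation (block averages), halving the resolution of the
    permeability field and shrinking the parameter space."""
    return 0.25 * (field[0::2, 0::2] + field[1::2, 0::2]
                   + field[0::2, 1::2] + field[1::2, 1::2])

def haar_refine(coarse):
    """Inverse step (approximation only): inject a coarse optimum back
    onto the next finer grid as the starting point for that level."""
    return np.kron(coarse, np.ones((2, 2)))
```

Successive calls to `haar_coarsen` give the multiscale hierarchy; `haar_refine` carries the coarse-level optimum up to the next resolution, where sampling resumes with fewer function evaluations.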
Iterative Forms of the Ensemble Kalman Filter
Authors: A. C. Reynolds, M. Zafari and G. Li

Recently, the Ensemble Kalman filter (EnKF) has been proposed as an alternative to traditional history matching. Because of its computational efficiency, ease of implementation into a reservoir simulator and ability to provide an evaluation of uncertainty in the reservoir model and in production forecasts, EnKF appears to be a more attractive method for integrating essentially continuous streams of dynamic data to update reservoir models and characterize uncertainty than automatic history matching based on methods such as randomized maximum likelihood using the adjoint/LBFGS approach for optimization.
Although the standard theoretical underpinnings of EnKF rest on Bayesian updating with Gaussian priors, we show that the EnKF update equations can also be derived as an approximation to the Gauss-Newton method, which uses an "average" sensitivity matrix.
This suggests that for highly nonlinear, non-Gaussian problems, EnKF may not provide an appropriate characterization of uncertainty and that some form of iteration is required. By viewing EnKF through the lens of optimization, instead of Monte Carlo sampling, we derive an iterative EnKF procedure for nonlinear problems. Although the iterative scheme incorporates some of the main features of EnKF, the computational efficiency of the basic EnKF method is not preserved.
We show, however, that for some problems, iteration can provide an improved characterization of uncertainty.
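The "average" sensitivity interpretation can be made concrete: the ensemble implies a least-squares linear map from state anomalies to predicted-data anomalies. The sketch below (an illustration, not the paper's derivation) recovers that matrix with a pseudoinverse:

```python
import numpy as np

def average_sensitivity(X, Y):
    """Least-squares sensitivity matrix implied by an ensemble: G solves
    (Y - mean(Y)) ~= G (X - mean(X)) in the least-squares sense. This is
    the 'average' sensitivity through which the EnKF update can be read
    as an approximate Gauss-Newton step."""
    A = X - X.mean(axis=1, keepdims=True)   # state anomalies
    B = Y - Y.mean(axis=1, keepdims=True)   # predicted-data anomalies
    return B @ np.linalg.pinv(A)
```

For a linear forward model and an ensemble that spans the state space, this recovers the true sensitivity exactly; for nonlinear models it is only an ensemble average, which is why iteration can be needed.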
Decision Making Methodology in a Risk Prone Environment - Application to Production Scheme Management
Authors: I. Zabalza-Mezghani, M. Feraille, B. Guard and E. Manceau

Petroleum field understanding and management are always associated with uncertainties, whose importance varies throughout the production period. However, even though the Monte Carlo technique is well known for investigating uncertainty assessment problems, in the case of reservoir simulation it is no longer practical, due to both the large number of simulations required and the high computation time per simulation.
In recent years, probabilistic forecasting has gained popularity and has become the preferred approach when assessing the value of a project, given the uncertainty of many input variables. Reservoir understanding and production forecasting may involve as uncertain parameters both continuous uncertain parameters, which vary inside a range of possible values, and discrete parameters to model physical or geological possible scenarios, or options to be evaluated and tested to optimize the field management.
Experimental design techniques have proven efficient for assessing the risk on reservoir performance related to continuous uncertain parameters. On the other hand, decision tree analysis is a widely validated technique, used when a problem involves subsequent decisions, to model nested scenarios and configurations as well as decision options. It is extremely useful for quantifying the impact of each scenario and configuration and for setting the value of each possible decision, finally discriminating the optimal decision among all possible options.
We present here an integrated approach which, by taking the best of both experimental design techniques and decision tree analysis, makes it possible to manage decision making in an uncertain framework, taking into account the risk associated with both technical reservoir parameters and economic parameters.
This methodology has been applied to a synthetic reservoir case to highlight the need to integrate all available sources of uncertainty, both technical and economic, to avoid sub-optimal decisions in terms of maximizing the economic value of the field.
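The decision tree roll-back at the heart of such an analysis fits in a few lines. The node structure below is hypothetical (leaf values standing in for NPV outcomes), not the paper's implementation:

```python
def expected_value(node):
    """Recursive roll-back of a small decision tree: chance nodes average
    their children's values by probability, decision nodes take the best
    child, and leaves carry a terminal (e.g. NPV) value."""
    kind, payload = node
    if kind == "leaf":
        return payload
    if kind == "chance":
        return sum(p * expected_value(child) for p, child in payload)
    if kind == "decision":
        return max(expected_value(child) for _, child in payload)
    raise ValueError(f"unknown node kind: {kind}")
```

For example, a decision between a risky production scheme (60% chance of 100, 40% chance of 20) and a safe one (50) rolls back to an expected value of 68, discriminating the risky scheme as optimal under these illustrative numbers.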
Adaptive Experimental Design for Non-Linear Modeling – Application to Quantification of Risk for Real Field Production
Authors: I. Zabalza-Mezghani, C. Scheidt, M. Feraille, B. Guard and D. Collombier

One of the most important aspects of reservoir engineering is quantifying uncertainty in reservoir behavior. Because of the large number of parameters and the physical complexity of the reservoir, fluid flow models are complex and time consuming. In order to control the cost of an uncertainty study, traditional uncertainty management is routinely performed using proxy models of the production, as advocated by experimental design methodology. This problem is complex since the impact of variables on reservoir performance is often non-regular. By optimally selecting the simulations to perform, experimental design techniques allow polynomial models of the response to be fitted, but often ignore non-regularity.

In this paper, we propose an original methodology to construct irregular proxy models of the fluid flow simulator. Contrary to classical experimental designs, which assume a polynomial behavior of the response, we propose to build evolutive experimental designs that gradually fit the potentially irregular shape of the uncertainty. This methodology retains the advantage of experimental design, which controls the number of fluid flow simulations, combined with the flexibility to study non-regular behavior.

We propose here an original way to increase the prior predictivity of the approximation in the unexplored areas of the experimental domain. Based on the pilot point methodology, the search for the most predictive estimator is performed by constraining the approximation to fictitious data. These data, which are not simulated, are calibrated to ensure better robustness and quality of the approximation. The proxy model obtained with the evolutive methodology can be considered a good representation of the fluid flow simulator; its evaluation is inexpensive and therefore allows better risk analysis using Monte Carlo sampling.
This innovative approach has been applied to model production behavior for an offshore Brazilian field, and thus to quantify the risk associated with the main reservoir uncertainties.
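A toy 1-D version of an evolutive design can convey the idea (this is an illustration, not the paper's pilot-point algorithm): start from a small uniform design, then repeatedly simulate the candidate point where two proxies of different complexity disagree most, a cheap surrogate for "where the polynomial assumption is least trustworthy".

```python
import numpy as np

def evolutive_design(simulator, lo, hi, n_init=5, n_add=5, deg=3):
    """Grow a 1-D design adaptively: fit polynomial proxies of degree
    `deg` and `deg - 1`, add the candidate point where they disagree
    most, simulate it, and refit. Returns the design, the responses and
    the final proxy coefficients. All parameter names are illustrative."""
    x = list(np.linspace(lo, hi, n_init))
    y = [simulator(v) for v in x]
    cand = np.linspace(lo, hi, 201)
    for _ in range(n_add):
        p_hi = np.polyfit(x, y, deg)
        p_lo = np.polyfit(x, y, deg - 1)
        gap = np.abs(np.polyval(p_hi, cand) - np.polyval(p_lo, cand))
        x_new = float(cand[np.argmax(gap)])      # least trustworthy region
        x.append(x_new)
        y.append(simulator(x_new))
    return np.array(x), np.array(y), np.polyfit(x, y, deg)
```

The number of expensive simulator calls stays fixed at `n_init + n_add`, while the design concentrates points where the response deviates from polynomial behaviour; the cheap final proxy can then be sampled by Monte Carlo.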
The Impact of Data Errors on Uncertainty Analysis
Authors: G. E. Pickup, M. A. Christie and M. Sambridge

Since there is much uncertainty in reservoir modelling, it makes sense to start with coarse-scale models, so that a wide range of scenarios can be assessed rapidly, before focussing on fewer, more detailed models. The simplest model for reservoir analysis is the material balance equation, and this forms a good starting point for uncertainty appraisal. Although there are drawbacks with this method, such as the assumption of pressure equilibration throughout the reservoir (or compartment), there is the advantage that a minimum number of a priori assumptions are made regarding the reservoir volume and drive mechanism.
As the first stage in a top-down reservoir evaluation procedure, we have applied stochastic history matching and uncertainty analysis to a material balance problem, using a synthetic reservoir model which had aquifer influx and high rock compressibility. A truth case simulation was run and noise was added to the resulting fluid production and pressure values to generate synthetic data sets. The parameters adjusted were the volume of oil (STOIIP), the initial aquifer size and the rock compressibility. A thorough analysis of the errors was performed, including propagation of errors in the pressure data to determine their effect on the modelled production. The Neighbourhood Approximation (NA) method was used to home in on models with low misfit. Then the posterior probability distributions and their correlations were assessed using a Bayesian approach.
Results showed that the shape of the posterior probability distributions (PPDs) depended on the assumed level of the noise. In particular, they indicated that, if the amount of noise is not assessed correctly, the position of the maximum likelihood value may be estimated incorrectly.
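The effect of a wrong noise assessment can be reproduced with a tiny grid search. In the sketch below (hypothetical forward models and data, not the paper's material-balance problem), two conflicting data sets are weighted by their assumed sigmas, so mis-specifying the noise shifts the minimum-misfit model:

```python
import numpy as np

def map_estimate(m_grid, datasets, sigmas):
    """Least-squares misfit over a 1-D parameter grid for several data
    sets under assumed noise levels; returns the minimum-misfit model
    and the misfit curve. The assumed sigmas weight the data sets
    against each other, so a wrong noise level can move the most
    likely model -- the effect reported above."""
    misfit = np.zeros_like(m_grid, dtype=float)
    for (d_obs, g), s in zip(datasets, sigmas):
        pred = np.array([g(m) for m in m_grid])        # (n_grid, n_data)
        misfit += ((pred - d_obs) ** 2).sum(axis=1) / (2.0 * s ** 2)
    return m_grid[np.argmin(misfit)], misfit
```

With equal sigmas the estimate is pulled toward the biased data set; trusting the unbiased data more (smaller relative sigma) moves the maximum-likelihood point back toward the truth.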
Quantification of Uncertainty in Coarse-Scale Relative Permeabilities Due to Sub-Grid Heterogeneity
Authors: H. Okano, G. E. Pickup, M. A. Christie, S. Subbey and M. Sambridge

Reservoir production forecasts are essentially uncertain due to the lack of data. Specifically, it is impossible to estimate detailed heterogeneity in a reservoir. In order to mitigate the ambiguity of a model, production data is incorporated into a history-matching process. However, there is insufficient data to constrain the subsurface properties all over the field.
We investigated the parameterisation of coarse-scale relative permeabilities for history-matching and uncertainty quantification. Coarse-scale models are often employed in history matching, because of computational cost. The results of the investigation provided us with guidelines for history-matching a coarse-scale model to the observed data by adjusting relative permeabilities.
This paper addresses two issues. Firstly, because the coarse-scale model inevitably misses out sub-grid heterogeneity, physical dispersion is ignored in the simulation. Secondly, the small-scale heterogeneity is not explicitly known and can only be inferred by history-matching. To solve these problems, local features in the coarse-scale relative permeability curves were adjusted in history-matching to capture the effect of physical dispersion and to compensate for the effect of numerical dispersion.
The success of history-matching relative permeabilities depends on the flexibility of the saturation function. We applied the flexible B-spline function as well as a conventional power or exponential function, namely the Corey or Chierici functions, respectively. We compared these parameterisations in terms of the resulting relative permeabilities during history-matching and uncertainty appraisal.
The history-matched relative permeabilities and their uncertainty envelopes were examined in comparison with the two-phase upscaling results. We used a synthetic data set for which the true solution is known. The two-phase upscaling was conducted using the truth model to give a reference set of coarse-scale relative permeability curves. We also compared the truth production profiles with the uncertainty envelopes, which were quantified in a Bayesian framework. Our results highlight the fact that the parameterisation affects the width of the uncertainty envelope.
Uncertainty Quantification in Producing Fields using the Neighbourhood Algorithm
Authors: M. Christie, G. Nicotra, M. Rotondi and A. Godi

Nowadays, the majority of the world’s most important oil and gas provinces have reached a mature stage. Proper exploitation and recovery maximisation primarily rely on the ability to foresee the consequences of different reservoir management decisions and production scenarios.
Because of the inherent non-uniqueness of the history match process, using only one data-conditioned reservoir model to forecast hydrocarbon production may lead to erroneous interpretations and discrepancies with reality. A possible solution to this inverse problem and the related uncertainty quantification has recently been introduced in the form of a methodology based on a stochastic search technique called the Neighbourhood Algorithm (NA). This methodology consists of two main steps: an optimization phase and an uncertainty assessment phase carried out in a Bayesian framework (NA-Bayes). After sampling acceptable data-fit regions of the parameter space, the posterior probabilities are calculated and a quantitative inference is performed.
In this paper the NA/NA-Bayes approach was first evaluated by means of four analytical test functions (Branin, Six-Hump Camel Back, Goldstein and Price, Levy) with the aim of assessing its efficiency in searching and sampling the parameter space, verifying its stability and robustness, and examining the effects of the algorithm control parameters.
Two case studies are reported in order to investigate the suitability of the methodology for real fields. The hydrocarbon production, the pressure and the water cut were selected as match variables for the oil and gas reservoirs considered. The approach was promising, and the results obtained were similar to those of the manually history matched model, though achieved with a significant reduction in time.
In addition to the increased speed of history matching, the procedure allowed a proper uncertainty quantification by means of multiple production forecasts that better quantify risk and uncertainty in reservoir performance, which is crucial for economic evaluation and decision making.
Uncertainty Assessment of Transport in Porous Media Based on a Probability Density Function Method
Authors: P. Jenny, H. A. Tchelepi and D. W. Meyer

A new approach for uncertainty assessment in porous media is devised. The goal is to study tracer or phase transport while assuming that the multi-point velocity statistics are known. The method depends on solving a transport equation for the joint probability density function (PDF) of velocity and concentration (or phase saturation). Similar PDF methods have been developed for turbulent flow simulations and have proved extremely successful in providing the joint statistics of species concentration and velocity required for turbulent combustion modeling. As in the case of turbulent flows, the joint PDF equation for uncertainty assessment of transport in porous media is defined over a high-dimensional space (physical, velocity, concentration or phase, and time). This results in severe computational limitations if a finite-volume, finite-difference or finite-element method is employed, which is the motivation for the use of a particle method. The crucial advantage of the PDF modeling approach is its flexibility and high level of closure. For example, no modeling is required for macro-dispersion (the ensemble mean of the product of velocity and concentration fluctuations). Moreover, no assumptions are made regarding correlation length scales, and unlike perturbation-based Statistical Moment Equation (SME) methods, the PDF method is not restricted to small input variance. Here, the PDF methodology is presented for incompressible single-phase tracer transport in heterogeneous porous media, but more general applications involving multi-phase transport are discussed. A detailed description of the PDF method is given and its accuracy is demonstrated by comparison with published Monte Carlo results.
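A particle sketch can convey why the PDF approach needs no macro-dispersion closure (an illustrative stand-in, not the paper's scheme; the lognormal velocity distribution and all parameters are assumptions): each particle draws one velocity sample from the assumed-known velocity statistics and keeps it, mimicking transport along random streamlines.

```python
import numpy as np

def pdf_particle_tracer(n=20000, t=1.0, seed=0):
    """Advect an initial tracer slab with one frozen velocity sample per
    particle. Ensemble statistics of the tracer cloud (mean front
    position, spreading) follow directly from the particles, so the
    velocity-concentration covariance is never modelled explicitly."""
    rng = np.random.default_rng(seed)
    x0 = rng.uniform(-1.0, 0.0, n)       # initial tracer slab
    v = rng.lognormal(0.0, 0.5, n)       # heterogeneous velocity samples
    return x0 + v * t, v                 # pure advection, particle by particle
```

The spreading of the cloud (macro-dispersion) emerges from the velocity variability itself; histogramming the positions gives the concentration statistics at any location.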
Projection-Based Approximation Methods for the Optimal Control of Smart Oil Fields
Authors: E. Gildin, H. Klie, A. Rodriguez, M. F. Wheeler and R. H. Bishop

Computational improvement of instrumented large-scale reservoir simulation is becoming one of the main research topics in the oil industry. In particular, the problem of closed-loop control is capturing a great deal of interest for reliable reservoir management. One of the main difficulties in designing controllers for large-scale reservoir systems has to do with the high-dimensional state space and parameter uncertainties. Hence, lower-dimensional models, linear or nonlinear, that approximate the full-order system are desirable to either mitigate the cost of large-scale reservoir simulation or design efficient closed-loop control systems. This work aims to compare recent advances in model order reduction techniques applied to reservoir simulation. In general, the problem of reducing the order of a large-scale model is known as approximation of dynamical systems. Several techniques have been developed in the linear dynamical systems framework, namely Balanced Truncation and Moment Matching by Krylov techniques, among others, and in the nonlinear setting, namely the Proper Orthogonal Decomposition (POD) and its variants. They all share a common approach: they are based on projection techniques. This work provides a comparative analysis of these techniques, with particular emphasis on Krylov approaches, since they are becoming one of the most active areas of research in large-scale optimal control and yet have not been broadly reported within the reservoir community. Preliminary computational experiments reveal that these methods offer promising opportunities to design closed-loop low-order controllers for the management of large-scale smart fields.
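The projection idea common to all these techniques can be illustrated with POD via an SVD of a snapshot matrix (a minimal sketch; the energy criterion and names are illustrative):

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Proper Orthogonal Decomposition by SVD of a snapshot matrix
    (n_state x n_snapshots): keep the fewest left singular vectors that
    capture the requested fraction of snapshot 'energy'. Projecting the
    full-order state onto this basis gives the reduced-order model."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(frac, energy)) + 1   # smallest rank reaching `energy`
    return U[:, :r]
```

Given a basis `Phi = pod_basis(S)`, the reduced state is `Phi.T @ x` and the lifted approximation is `Phi @ (Phi.T @ x)`; Balanced Truncation and Krylov methods differ only in how they choose the projection subspaces.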
Integrated, Multiple-Component Proxy Model for Optimizing Production
Authors: D. Johnson, A. S. Cullick and G. Shi

This paper addresses integrated asset optimization models that are enabled by multiple-component, multi-layer neural network proxy models. The proxy models are used to represent non-linear production behavior for real-time production optimization.
The paper introduces a methodology for training neural networks that are robust proxy models for nonlinear physics-based simulators. The neural network architecture, the training, and the model combinations will be presented. The training of a proxy starts with a design of experiments. If multiple simulators (e.g. reservoir, well nodal analysis, and gathering network process analysis) represent different components of the physical system, then each can be executed individually. Each results in a trained proxy, and these proxies are then combined mathematically into a single proxy for the integrated system. Two example applications are presented.
The first example develops a nodal analysis model, which is critical for real-time management in fields with hundreds or even thousands of wells. The model represents the system analysis for the Inflow Performance Relationship (IPR) of the reservoir and the Vertical Lift Performance (VLP) for the tubing. The example goes through multi-dimensional table generation for training the neural network models. An optimization solver is used to find the oil rate where the bottom-hole pressures equal tubing-head pressures for a range of conditions. The complete IPR/VLP “nodal” solution is represented by the proxy. We expand this to include not just a single IPR/VLP pair of equations but a series of subsystems.
Another example builds a proxy from flow simulation of a reservoir operating with water injection and wells with downhole control valves. A multi-component proxy is used both to optimize the valve settings pro-actively in response to reservoir performance and to quickly update the reservoir model history match.
Significant contributions:
1. Demonstrate that a properly trained multi-layer neural network can be a robust proxy for complex, nonlinear multiple physics-based simulations of surface, reservoir, and well performance.
2. Demonstrate use of gradient optimization of a proxy model, for pro-active optimization of field and well operations in response to high-frequency data input.
3. Show how multiple neural network proxies that represent different physical components of a field operating system can be combined mathematically rather easily to form a fully integrated model.
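The IPR/VLP nodal balance that the proxy is trained to reproduce can be sketched with hypothetical correlations and coefficients (all parameter names and values below are illustrative, not a real well model):

```python
def nodal_rate(p_res=300.0, J=0.5, thp=50.0, a=2.0, b=1.2):
    """Find the operating rate where a linear IPR (flowing bottom-hole
    pressure p_wf = p_res - q/J) balances a power-law VLP correlation
    (p_wf = thp + a*q**b), using bisection on the pressure balance.
    This is the root the solver computes for each training condition."""
    f = lambda q: (p_res - q / J) - (thp + a * q ** b)
    lo, hi = 1e-6, p_res * J          # bracket: f(lo) > 0 > f(hi)
    for _ in range(80):               # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Sweeping `p_res`, `thp` and the lift coefficients over the design of experiments and solving this balance at each point builds the multi-dimensional training table for the neural network proxy.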
Bang-Bang Control in Reservoir Flooding
Various studies have shown that dynamic optimization of waterflooding using optimal control theory has a significant potential to increase Net Present Value (NPV). In these studies, gradient-based optimization methods are used, where the gradients are usually obtained with an adjoint formulation. However, the shape of the optimal injection and production settings is generally not known beforehand. The main contribution of this paper is to show that a whole variety of reservoir flooding problems can be formulated as optimal control problems that are linear in the control and that, if the only constraints are upper and lower bounds on the control, these problems will sometimes have bang-bang (on-off) optimal solutions. This is supported by a waterflooding example of a 3-dimensional reservoir in a fluvial depositional environment, modeled with 18,553 grid blocks. The valve settings of 8 injection and 4 production wells are optimized over the life of the reservoir, with the objective to maximize NPV. For various situations, the optimal solution is either bang-bang, or a bang-bang solution exists that is only slightly suboptimal. This has obvious practical implications, since bang-bang solutions can be implemented with simple on-off control valves.
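The bang-bang structure follows directly from linearity in the control: with only bound constraints, an optimum sits at the bounds. A minimal sketch (the gradient values are illustrative, not a reservoir model):

```python
import numpy as np

def bang_bang_controls(grad, u_lo, u_hi):
    """For an objective that is linear in the controls and constrained
    only by bounds, an optimal solution sits at the bounds: each control
    goes to its upper bound where the objective gradient is positive and
    to its lower bound where it is negative (the bang-bang structure)."""
    return np.where(np.asarray(grad) > 0.0, u_hi, u_lo)
```

No interior valve setting can improve a linear objective, which is why simple on-off valves suffice when the optimal solution is (near) bang-bang.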
Production Optimization under Constraints Using Adjoint Gradients
Authors: P. de Montleau, A. Cominelli, K. Neylon, D. Rowan, I. Pallister, O. Tesaker and I. Nygard

The introduction of controllable downhole devices has greatly improved the ability of the reservoir engineer to implement complex well control strategies to optimize hydrocarbon recovery. The determination of these optimal control strategies, subject to limitations imposed by production and injection constraints, is an area of much active research and generally involves coupling some form of control logic to a reservoir simulator. Some of these strategies are reactive: interventions are made when conditions are met at particular wells or valves, with no account taken for the effect on the future lifetime of the reservoir. Moreover, it may be too late to prevent unwanted breakthrough when the intervention is applied. Alternative proactive strategies may be applied to the lifetime of the field and fluid flow controlled early enough to delay breakthrough. This paper presents a proactive, gradient-based method to optimize production throughout the field life. This method requires the formulation of a constrained optimization problem, where bottomhole pressure or target flow rates of wells, or flow rates of groups, represent the controllable parameters. To control a large number of wells or groups at a reasonably high frequency, efficient calculation of accurate well sensitivities (gradients) is required. Hence, the adjoint method has been implemented in a commercial reservoir simulator to compute these gradients. Once these have been calculated, the simulator can be run in optimization mode to find a locally optimal objective function (e.g., cumulative production). This optimization procedure usually involves progressively activating constraints, with each new constraint representing a significant improvement in the objective. Proper management of degrees of freedom of the parameters is essential when calculating the constrained optimization search direction.
Adjoint methods have already been used for production optimization within reservoir simulation; however, to our knowledge, an accurate analysis of the optimal management of active and inactive constraints for different types of recovery processes in field-like cases has not been discussed.
Efficient Optimization of Production from Smart Wells Based on the Augmented Lagrangian Method
Authors: D. C. Doublet, R. Martinsen, S. I. Aanonsen and X. C. Tai

A new method for dynamic optimization of water flooding with smart wells is developed. The algorithm finds optimal injection and production well or well-segment rates. In the new method, we solve a constrained optimization problem where the net present value is maximized and the reservoir flow equations are considered as constraints. The problem is formulated as finding the saddle point of the associated augmented Lagrangian functional, and solved efficiently. The method is compared with a more traditional optimal-control method based on solving the adjoint system of equations. In the examples tested, the new method obtains the same maximum profit as the adjoint method using approximately the same number of iterations. An advantage of the new method is that we do not solve the flow equations exactly at each iteration; as the optimization proceeds, the flow equations are fulfilled at convergence. Thus, each iteration of the minimization algorithm is much cheaper than for the adjoint method. The method is tested on a small 2D model, but the results should also be valid for larger, 3D models.
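A generic sketch of the augmented-Lagrangian idea used above, with a scalar constraint standing in for the flow equations (the function names, step sizes and iteration counts are illustrative assumptions, not the paper's solver):

```python
import numpy as np

def augmented_lagrangian(f_grad, c, c_grad, x0, mu=10.0, outer=20, inner=300, lr=0.02):
    """Minimize f subject to c(x) = 0 by inexact minimisation of the
    augmented Lagrangian L_A = f + lam*c + (mu/2)*c**2. The constraint
    (standing in for the flow equations) is only enforced at convergence:
    each outer step does a cheap, inexact inner solve by gradient descent,
    then updates the multiplier lam += mu*c(x)."""
    x, lam = np.array(x0, dtype=float), 0.0
    for _ in range(outer):
        for _ in range(inner):                        # inexact inner solve
            g = f_grad(x) + (lam + mu * c(x)) * c_grad(x)
            x -= lr * g
        lam += mu * c(x)                              # multiplier update
    return x, lam
```

Because the inner solve is inexact, each iteration is cheap relative to exactly satisfying the constraint (here: the flow equations) at every step, which mirrors the efficiency argument of the abstract.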
Reservoir Flow Uncertainty Management in Presence of Stochastic Parameters
By E. Fetel

This paper focuses on the management of uncertainty associated with production variables in the presence of stochastic uncertain input parameters. In particular, it aims at dealing with n-dimensional non-linear response surfaces. A parameter is said to be stochastic when the relationship between its variations and the flow response variations is purely random; a typical example is the seed for geostatistical simulations. Alternatively, if the relationship is not random, the parameter is said to be continuous. Here, the key idea is to model not a single response surface but a probability density function varying in the n-dimensional space of the continuous parameters. In this framework, this paper develops (1) a response surface building approach, (2) a variance-based sensitivity analysis scheme for identifying influential parameters and (3) a Bayesian inversion technique for integrating a given production history. The proposed techniques do not require any prior regression model and are based on Monte Carlo sampling. Thus, the developed approach is suitable for n-dimensional and non-linear problems. Finally, the approach is validated on a fluviatile-like reservoir model.
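A crude Monte Carlo version of a variance-based sensitivity index in the presence of a stochastic parameter can illustrate point (2) (the `response(x, rng)` proxy and sample sizes are hypothetical, not the paper's estimator):

```python
import numpy as np

def first_order_sensitivity(response, n_cond=200, n_outer=100, seed=0):
    """Estimate the first-order, variance-based sensitivity index of a
    continuous parameter x when the response also depends on a stochastic
    parameter (the seed): S_x = Var_x(E[y | x]) / Var(y). The inner loop
    averages out the stochastic parameter at fixed x; the outer loop
    samples x over its range."""
    rng = np.random.default_rng(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        x = rng.uniform(0.0, 1.0)
        y = np.array([response(x, rng) for _ in range(n_cond)])
        cond_means.append(y.mean())         # E[y | x]: seed averaged out
        all_y.append(y)
    return np.var(cond_means) / np.var(np.concatenate(all_y))
```

An index near zero flags a parameter whose influence is drowned by the stochastic component; an index near one flags a parameter worth refining, which is the screening role of the sensitivity analysis in the workflow.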
-
Computational Techniques for Closed-Loop Reservoir Modeling with Application to a Realistic Reservoir
Authors P. Sarma, L. J. Durlofsky and K. Aziz
This paper extends and applies novel computational procedures for the efficient closed-loop optimal control of petroleum reservoirs under uncertainty. It addresses two important issues, present in our earlier implementation [2], that limited the application of the procedure to practical problems.
Specifically, the previous approach encountered difficulties in handling nonlinear path constraints (constraints that must be satisfied at every time step of the forward model) during optimization. Such constraints (e.g., a maximum liquid production rate) are frequently present in practical problems. To address this issue, an approximate feasible direction optimization algorithm was proposed. The algorithm uses the objective function gradient and a combined gradient of the active constraints [3], both of which can be obtained efficiently with adjoint models. The second limitation of the implementation in [2] was the use of the standard Karhunen-Loève (K-L) expansion for parameterizing the input random fields of the simulation model. This parameterization is computationally expensive and preserves only two-point statistics of the random field; it is thus not suitable for large simulation models or for complex geological scenarios, such as channelized systems. In another paper [4], a nonlinear form of the K-L expansion, referred to as kernel PCA, is applied for parameterizing arbitrary random fields. Kernel PCA successfully addresses the limitations of the K-L expansion and is differentiable, meaning that gradient-based methods can be used in conjunction with this parameterization within the closed loop.
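The core geometric step of a feasible direction method, combining the objective gradient with the gradient of an active constraint, can be sketched as follows. This is a simplified stand-in for the algorithm in the abstract (one active constraint, a plain null-space projection; all values are illustrative):

```python
import numpy as np

def feasible_direction(grad_obj, grad_active):
    """Project the objective gradient onto the null space of the
    active-constraint gradient. The result is an ascent direction
    that, to first order, does not violate the active constraint."""
    a = grad_active / np.linalg.norm(grad_active)
    return grad_obj - np.dot(grad_obj, a) * a

g = np.array([3.0, 4.0])   # objective gradient (e.g., dNPV/dcontrols)
a = np.array([1.0, 0.0])   # gradient of the active path constraint
d = feasible_direction(g, a)
# d is orthogonal to a, yet still an ascent direction for the objective
```

In the paper's setting both gradients come from adjoint solves, which is what keeps the cost of each control update low.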
An example based on a Gulf of Mexico reservoir model is considered. For this case it is demonstrated that the proposed algorithms indeed provide a viable real-time closed-loop optimization framework. Application of the closed-loop methodology is shown to result in a 25% increase in NPV over the base case. This is almost the same improvement achieved using an open-loop approach, which is an idealized formulation in which the geological model is assumed to be known.
-
Stochastic Subspace Projection Methods for Efficient Multiphase Flow Uncertainty Assessment
Authors H. M. Klie, M. F. Wheeler, G. Liu and D. Zhang
This work introduces an efficient Krylov subspace strategy for implementing the Karhunen-Loève moment equation (KLME) method. The KLME method has recently emerged as a competitive alternative for subsurface uncertainty assessment, since it involves simulations at a lower resolution level than Monte Carlo simulation. Algebraically, the KLME method reduces to the solution of a sequence of linear systems with multiple right-hand sides. We propose a Krylov subspace projection method to efficiently compute different stochastic orders and moments of the primary-variable response from the zero-order solution. The Krylov basis is recycled to deflate the systems and to improve the initial guess in the block and seed treatment of the right-hand sides. Numerical results encourage extending the capabilities of the proposed stochastic framework to more complex simulation models.
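The recycling idea for a sequence of right-hand sides can be sketched with conjugate gradients, seeding each new solve with a Galerkin projection onto the span of previously computed solutions. This is a minimal illustration of the general principle, not the authors' specific deflation scheme; the matrix and right-hand sides are random stand-ins:

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxit=500):
    """Plain conjugate gradients for a symmetric positive definite A."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    if np.sqrt(rs) < tol:
        return x
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def solve_sequence(A, B):
    """Solve A x_k = b_k for each column b_k of B, seeding each solve
    with a Galerkin projection onto the span of previous solutions."""
    n, m = B.shape
    X = np.zeros((n, m))
    basis = []
    for k in range(m):
        if basis:
            V = np.column_stack(basis)
            # recycled initial guess: x0 = V y with (V^T A V) y = V^T b_k
            y = np.linalg.solve(V.T @ A @ V, V.T @ B[:, k])
            x0 = V @ y
        else:
            x0 = np.zeros(n)
        X[:, k] = cg(A, B[:, k], x0)
        basis.append(X[:, k])
    return X

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)      # SPD stand-in for the KLME system matrix
B = rng.standard_normal((30, 4))   # multiple right-hand sides
X = solve_sequence(A, B)
```

The better the recycled guess, the fewer Krylov iterations each higher-order right-hand side requires, which is the efficiency the abstract targets.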
-
Some Performance Investigations of a Compositional Reservoir Flow Simulator
Authors B. O. Heimsund, E. Øian and G. E. Fladmark
A reservoir fluid-flow simulator for the study of advanced subsurface processes is described. It is based on the volume balance formulation for implicitly determining the fluid pressure. A set of component conservation equations is used to determine the molar masses of each component, and an energy conservation law is used to determine the system temperature. Fluid property calculations are kept clearly separate from the flow computations, as illustrated by a brief discussion of a compositional black-oil formulation. Since well flow is not of primary interest, only very simple source and sink treatments have been implemented. Different degrees of implicitness have been tested. To keep the formulation simple, we prefer a sequential approach in which the different equations are solved in sequence; for each equation, an implicit or explicit solver may then be used. For the pressure, only implicit solvers have been used, while for the molar masses both explicit and several types of implicit discretization have been tried. We were, however, unable to develop an implicit molar-mass solver that outperformed a simple explicit one, apparently because of large decoupling errors and the extra time needed to solve the additional implicit equations. Consequently, the simulator uses an implicit pressure solver followed by a temperature solver and an explicit molar-mass solver. Another aspect of the simulator is its use of general unstructured grids, on which it is often necessary to apply a multi-point flux approximation (MPFA) scheme. As many commercial simulators experience performance degradation with MPFA compared to a two-point flux approximation (TPFA), we provide examples showing that, in our case, MPFA is not considerably slower than TPFA on unstructured meshes.
This is partly due to the choice of a low degree of implicitness, which reduces the size of the linear systems. Finally, the simulator's parallelization strategy is outlined: a domain decomposition preconditioner is used, and its performance is ascertained for a varying number of subdomains.
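The TPFA scheme the abstract compares against reduces each cell-face flux to a single transmissibility coefficient, typically a harmonic average of half-cell transmissibilities. A minimal sketch (geometry and permeability values are illustrative; MPFA would replace this scalar with a multi-point stencil, hence its larger linear systems):

```python
def tpfa_transmissibility(k1, k2, d1, d2, area):
    """Two-point flux approximation transmissibility between two cells:
    harmonic average of the half-cell transmissibilities area*k/d,
    where d is the distance from cell center to the shared face."""
    t1 = area * k1 / d1
    t2 = area * k2 / d2
    return t1 * t2 / (t1 + t2)

# flux across the face for a pressure drop dp is q = T * dp
T = tpfa_transmissibility(k1=100.0, k2=400.0, d1=0.5, d2=0.5, area=1.0)
# harmonic average of 200 and 800 half-cell transmissibilities gives T = 160
```

The harmonic average correctly lets the lower-permeability cell dominate the coupling, but on grids whose faces are not aligned with the permeability tensor it is inconsistent, which is why MPFA is needed on general unstructured meshes.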
-