ECMOR XII - 12th European Conference on the Mathematics of Oil Recovery
- Conference date: 06 Sep 2010 - 09 Sep 2010
- Location: Oxford, UK
- ISBN: 978-90-73781-89-4
- Published: 06 September 2010
61 - 80 of 117 results
Combining Probabilistic Inversion and Multi-objective Optimization for Production Development under Uncertainty
Authors: D. Busby and E. Sergienko

Probabilistic forecasts can be obtained by taking into account different sources of uncertainty in the history matching process. When a non-unique model is used for prediction, the problem of finding the best development plan to maximize the net present value can be extremely time consuming. Nevertheless, taking uncertainty into account can be critical to making better decisions and reducing investment risks. The proposed approach uses a response surface approximation based on Gaussian processes to find the solution of the probabilistic inverse problem, thus considerably reducing the number of required simulations. Adaptive sampling strategies are used to obtain predictive response surface models in both the probabilistic history-matching and forecasting problems. The method is illustrated on a realistic case study derived from a real field. The objective of the case study was to optimize a new development plan after six years of production history. We compare the results of the deterministic approach, based on a single history-matched model, with those of the stochastic approach. For the probabilistic approach, a multi-objective optimization method is used to select the solution providing the highest profit with an acceptable level of risk.
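The response-surface idea above can be sketched with a minimal Gaussian-process (kriging) interpolant. This is an illustrative sketch only: the sine function stands in for an expensive simulator response, and all names and parameters are assumptions, not the authors' implementation.

```python
import numpy as np

def gp_fit_predict(X, y, Xq, length=0.3, noise=1e-8):
    """Gaussian-process regression with a squared-exponential kernel (1D)."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))   # regularised covariance of the training designs
    alpha = np.linalg.solve(K, y)
    return k(Xq, X) @ alpha                # posterior mean at the query points

# Hypothetical stand-in for the expensive reservoir simulator
simulator = lambda x: np.sin(3.0 * x)
X_train = np.linspace(0.0, 2.0, 12)        # a small design of simulator runs
y_train = simulator(X_train)
X_query = np.linspace(0.0, 2.0, 50)
y_hat = gp_fit_predict(X_train, y_train, X_query)
max_err = np.max(np.abs(y_hat - simulator(X_query)))
```

With only 12 "simulator runs" the surrogate reproduces the response closely, which is what makes adaptive-sampling loops affordable.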
A Probability Conditioning Method (PCM) for Integration of Production Data into Pattern-based Facies Simulation
Authors: B. Jafarpour and M. Khodabakhshi

The ensemble Kalman filter (EnKF) has recently been proposed as a promising history matching approach. While the EnKF has enjoyed favorable evaluations for history matching applications, in certain depositional environments where spatial continuity is characterized by discrete geological objects, applying the EnKF to update a prior ensemble of property fields can result in the loss of facies connectivity away from observation points. We present a new probability conditioning method (PCM) for updating pattern-based multipoint geostatistical facies realizations in a manner consistent with the training image. We use the EnKF to infer a probability map that indicates the probability of a particular facies occurring at a given grid block. By incorporating this updated probability map into the snesim multipoint facies simulation algorithm, we generate a new set of facies realizations that are conditioned on production history and honor the prior structural continuity. This implementation effectively integrates the production data and prior geologic patterns by using the former mainly to resolve the facies distribution around the wells and the latter to ensure consistent facies connectivity away from the wells. We illustrate the suitability of this approach using multiple history matching examples with two and three facies representing fluvial formations.
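The EnKF analysis step underlying the probability-map inference can be sketched as below. All dimensions, operators and the thresholding rule are invented for illustration and are not the authors' setup; in the paper the updated quantities feed the snesim algorithm rather than being used directly.

```python
import numpy as np

rng = np.random.default_rng(0)
Ne, Nm, Nd = 50, 20, 5                      # ensemble size, model cells, observations
M = rng.normal(size=(Nm, Ne))               # prior ensemble (e.g. facies indicators)
H = np.eye(Nd, Nm)                          # observe the first Nd cells
R = 0.1 * np.eye(Nd)                        # observation-error covariance
d_obs = rng.normal(size=Nd)                 # synthetic production data

# EnKF analysis: Kalman gain built from the ensemble covariance
A = M - M.mean(axis=1, keepdims=True)
C = A @ A.T / (Ne - 1)
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)
D = d_obs[:, None] + rng.normal(scale=np.sqrt(0.1), size=(Nd, Ne))  # perturbed observations
M_post = M + K @ (D - H @ M)

# Averaging thresholded updated members gives a crude facies-occurrence probability map
prob_map = (M_post > 0.0).mean(axis=1)
```

The update pulls the ensemble toward the data at observed cells, while the probability map (rather than the raw updated fields) is what would condition the geostatistical simulation.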
Uncertainty Assessment of Saturation Modeling in Geocellular Models Using a Well Log Inversion Technique
Authors: T. Friedel, A. Carnegie, J. Moreno and A. Carrillat

Static or dynamic modeling of hydrocarbon reservoirs requires a detailed description of the initial capillary-gravity equilibrium saturations of each individual phase at reservoir conditions. Typically, this is a function of local rock quality and the presence of capillary forces. Without understanding these, both initial volumetric estimates and subsequent predictions are potentially meaningless. The main idea of this novel approach is to treat the challenge as a full-scale inversion problem over all suitable well and core data, similar to history matching. First, a physical model is developed that describes the saturation anywhere in the reservoir and honors different rock types and reservoir regions. The resulting large set of parameters is initially based on core observations, if available. An objective function then describes the mismatch between the simulated and observed saturations based on maximum likelihood theory. The model calibration is fully automatic, using nonlinear solvers, and leads to the set of parameters that best fits core- and log-observed saturations; this parameter set can then be populated into 3D geocellular models. A comprehensive statistical analysis of the results helps define the role of the individual physical processes, such as capillary pressure, as well as confidence intervals and correlations between coefficients.
Determination of Lower and Upper Bounds of Predicted Production from History-matched Models
Authors: G.M. van Essen, J.D. Jansen and P.M.J. Van den Hof

We present a method to determine lower and upper bounds on the predicted production, or any other economic objective, from history-matched reservoir models. This is accomplished through a hierarchical optimization procedure, which limits the solution space of a secondary optimization problem to the null space of the primary optimization problem. We applied this procedure to a model of a channelized reservoir with a life-cycle of 6 years after 1.5 years of production. We performed a history match based on synthetic data, starting from a uniform prior and using a gradient-based minimization procedure. For the remaining 4.5 years, minimization and maximization of net present value (NPV), using a fixed control strategy, were executed as secondary optimization problems by changing the model parameters while staying in the null space. That is, we optimized the secondary objective functions while requiring that optimality of the primary objective (a good history match) was preserved. For the remaining 4.5 years of production, the method gives lower and upper bounds on the predicted NPV of 63% below and above the average, respectively. This method therefore provides a way to quantify the economic consequences of the well-known fact that history matching is a strongly ill-posed problem.
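For a linear toy problem the hierarchical idea reduces to restricting the secondary search to the null space of the history-match operator. The matrices below are random stand-ins for the reservoir model and NPV, chosen only to make the mechanics visible.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 6))        # underdetermined "history match" operator (data << parameters)
b = rng.normal(size=3)             # observed production history
c = rng.normal(size=6)             # linearised secondary (NPV-like) objective

# Primary problem: one history-matched parameter set (minimum-norm least squares)
x0 = np.linalg.lstsq(A, b, rcond=None)[0]

# Null space of A: parameter directions that leave the history match untouched
_, _, Vt = np.linalg.svd(A)
N = Vt[3:].T                       # columns span null(A)

# Secondary problem: improve c.x while staying in the null space
x1 = x0 + N @ (N.T @ c)            # one projected-gradient step
```

Moving along `N @ (N.T @ c)` strictly increases the secondary objective while reproducing the data exactly, which is precisely how upper/lower NPV bounds can coexist with a perfect match.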
Evolutionary Algorithms with Pairwise Conditional Sampling for History Matching Optimisation
Authors: I. Petrovska and J.N. Carter

When modelling oil and gas reservoirs, one considers a wide range of reservoir model parameters, some of which may depend on others. Examples include the correlation between porosity and permeability values, and the dependency between structural model parameters, such as fault relay-ramp geometry and the associated transmissibility. A reservoir engineer is always aware of the possibility of such interactions within the model studied. The logical conclusion is to use this information when performing reservoir history-matching and prediction studies. The core of an efficient history matching optimisation technique is its sampling quality. Any extra information capable of guiding the search within the solution space, based on assumptions of dependence or independence between the model parameters, should be welcomed into the optimisation process. This paper concentrates on a class of evolution-inspired stochastic optimisation techniques capable of sampling conditional probability distributions of model parameters: multivariate Estimation of Distribution Algorithms. Using a synthetic reservoir model, we show that considering even only pairwise, chain-like interactions between optimisation parameters not only impacts the convergence speed of the optimisation process itself but also significantly influences the diversity and quality of the achieved solutions.
An Iteratively Reweighted Algorithm for History Matching of Oil Reservoirs in Sparse Domains
Authors: L. Li, M.R. Khaninezhad and B. Jafarpour

Identification of spatially variable hydraulic rock properties such as permeability and porosity is essential for accurate prediction of reservoir performance and planning of future development activities. Estimation of these properties from production data usually involves solving a highly underdetermined nonlinear inverse problem. The overwhelming number of unknowns, relative to available data, leads to many parameter combinations that explain the data equally well but provide different future predictions. To reduce non-uniqueness and numerical instability, additional information is typically incorporated into the solution procedure. Reservoir properties often have large-scale spatially correlated features that are amenable to sparse (or compact) representations in compressive bases such as the Fourier or wavelet domains. In this paper, we exploit the inherently sparse representation of correlated reservoir properties to formulate an effective history matching algorithm using sparsity regularization. We show that by minimizing a data-misfit cost function augmented with an additive or multiplicative sparsity-promoting regularization term in a sparse domain, the reconstruction results are significantly improved. The effectiveness of the proposed implementation is related to adaptive identification of the sparsity pattern through iterative reweighting of the sparse basis components, which we illustrate through several history matching examples in oil reservoirs.
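The iterative-reweighting idea can be sketched on a linear toy inversion: solving a weighted ridge problem with weights w_i ≈ 1/|v_i| approximates an L1 (sparsity) penalty, and repeating the solve adapts the weights to the emerging sparsity pattern. The operator, sizes and seed are invented for illustration; the paper's forward model is a nonlinear reservoir simulator, not a matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 40, 100, 3               # data points, basis coefficients, true sparsity
G = rng.normal(size=(n, m))        # forward operator composed with the sparse basis
v_true = np.zeros(m)
v_true[rng.choice(m, size=k, replace=False)] = 3.0 * rng.normal(size=k)
d = G @ v_true                     # noiseless synthetic data

# Iteratively reweighted least squares: each pass solves a weighted ridge problem
lam, eps = 1e-3, 1e-6
v = np.zeros(m)
for _ in range(30):
    w = 1.0 / (np.abs(v) + eps)    # large weight -> coefficient pushed toward zero
    v = np.linalg.solve(G.T @ G + lam * np.diag(w), G.T @ d)

misfit = np.linalg.norm(G @ v - d) / np.linalg.norm(d)
```

After a few reweighting passes the small coefficients are driven toward zero while the data misfit stays low, which is the behaviour the abstract attributes to adaptive sparsity-pattern identification.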
Grid-based Inversion of Pressure Transient Test Data
Authors: R.J.S. Booth, K.L. Morton, M. Onur and F.J. Kuchuk

In any subsurface exploration and development, indirect measurements, such as detailed geological descriptions and outcrop data, and direct measurements, such as seismic, cores, logs, and fluid samples, provide useful information for static and dynamic reservoir description, simulation, and forecasting. However, core and log data delineate rock properties only in the vicinity of the wellbore, while geological and seismic data are usually not directly related to formation permeability. Pressure transient tests provide dynamic information about reservoir pressure, which can be used to estimate rock property fields, fluid distributions and samples, well productivity, and dynamic reservoir description. Such tests are therefore very useful and hence commonly used in the industry, both in exploration environments and for the general purposes of production and reservoir engineering. Conventional pressure transient tests have traditionally been used to estimate spatial distributions of formation permeability by automatic history matching of the pressure data measured at the wells to an analytical, or simple numerical, model selected to best represent the flow regimes observed on diagnostic plots. With the need for improved spatial resolution of the reservoir parameters, "pixel (grid)" based approaches have been developed, in which the reservoir properties are discretized over an often coarse, regular grid and the prior model draws on dense geological information. Our approach is similar; however, we discretize the reservoir parameters using the same grid as that used for the numerical simulation of the well test. This grid is non-uniform, with the greatest spatial resolution near the wells, where we may expect the well test to provide more information.
In addition, we propose that, since dense geological information may be unavailable in the early characterization of the reservoir, the prior be modeled by a local random field determined by the variances and correlation lengths of the reservoir parameters. To manage the large number of variables that this approach leads to, we employ an adjoint scheme to determine the gradient of the objective function. Our approach thus enables one to find the most likely set of reservoir parameters, along with our confidence in this solution and a means of producing further likely solutions. Crucially, we separate the influence of the prior from the influence of the pressure transient test measurements, which greatly improves the performance of the inversion procedure and also ensures that the information provided by the prior is preserved.
Error Estimate for the Ensemble Kalman Filter Update Step
Authors: A. Kovalenko, T. Mannseth and G. Nævdal

The ensemble Kalman filter is a data assimilation technique based on a low-rank approximation of the covariance matrix from a moderately sized ensemble. Sampling errors lead to artificial effects, such as spurious correlations, which deteriorate the estimates and the forecasts of the system states. Using random matrix theory, we derive the distribution of the norm of the ensemble Kalman filter sampling error, assuming noiseless data. The distribution depends explicitly on the ensemble size, model dimension and observation locations. We demonstrate the use of the distribution on several examples.
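The dependence of the sampling-error norm on ensemble size is easy to observe empirically. This sketch uses an identity true covariance and invented dimensions; it is a Monte Carlo illustration, not the paper's random-matrix derivation.

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 40
C_true = np.eye(dim)                       # truth: independent, unit-variance components

def cov_error(Ne, reps=20):
    """Average Frobenius-norm error of the ensemble covariance estimate."""
    errs = []
    for _ in range(reps):
        X = rng.normal(size=(dim, Ne))     # ensemble drawn from the true distribution
        A = X - X.mean(axis=1, keepdims=True)
        C_hat = A @ A.T / (Ne - 1)         # sample covariance (spurious off-diagonals)
        errs.append(np.linalg.norm(C_hat - C_true))
    return float(np.mean(errs))

errors = {Ne: cov_error(Ne) for Ne in (10, 100, 1000)}
```

The error shrinks roughly as one over the square root of the ensemble size, so small ensembles carry large spurious correlations — the effect the paper quantifies in distribution.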
Ensemble Kalman Filter for Nonlinear Likelihoods
Authors: I. Myrseth, J. Sætrom and H. Omre

This paper defines the update equation (sometimes called the analysis step) of the Ensemble Kalman filter in a conceptually different way. For linear likelihood models the new approach coincides with the traditional version; when the likelihood is nonlinear, the new approach remains applicable. Another feature of the new approach is that it allows Monte Carlo sampling of the likelihood, which can be used to improve predictions. A synthetic reservoir example with a nonlinear, black-box, seismic likelihood function shows that the approach can be applied to reservoir models.
Improved Initial Ensemble Generation Coupled with Ensemble Square Root Filters and Boosting to Estimate Uncertainty
Authors: L. Dovera and E. Della Rossa

The accuracy of ensemble Kalman filter (EnKF) methods depends on the sample size relative to the dimension of the parameter space. In real applications, sampling error often results in spurious correlations, which produce a bias in the mean and a strong underestimation of the uncertainty. The ensemble square root filter (ESRF) offers an advantage in uncertainty estimation with respect to the traditional EnKF, and covariance inflation and localization are common remedies for these problems. In this work we propose a method that reduces the bias of ensemble techniques by means of a suitable generation of the initial ensemble. This regeneration is based on a Stationary Orthogonal-Base Representation (SOBR), obtained via a singular value decomposition of a stationary covariance matrix estimated from the ensemble. The technique is tested on a 2D slightly compressible single-phase model and compared with the ESRF, using as reference a solution obtained with a very large ensemble (one million members). The example gives evidence that the SOBR reduces the effect of sampling error in the mean, but that covariance inflation remains essential to avoid ensemble collapse.
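The regeneration step can be sketched as drawing fresh coefficients in an orthogonal base obtained from an SVD of the centred ensemble. This is a simplified stand-in for the SOBR construction (which uses a stationary covariance estimate), with invented sizes.

```python
import numpy as np

rng = np.random.default_rng(4)
Nm, Ne, r = 50, 30, 10                     # model dim, ensemble size, retained modes
X = rng.normal(size=(Nm, Ne))              # initial ensemble
mean = X.mean(axis=1, keepdims=True)
A = (X - mean) / np.sqrt(Ne - 1)           # square-root factor of the sample covariance

# Orthogonal base of the estimated covariance via SVD
U, s, _ = np.linalg.svd(A, full_matrices=False)

# Regenerate the ensemble: fresh Gaussian coefficients in the leading modes
Z = rng.normal(size=(r, Ne))
X_new = mean + U[:, :r] @ (s[:r, None] * Z)
```

The regenerated members share the mean and leading covariance structure of the original ensemble but carry independent coefficients, which is what reduces the sampling-error bias in the mean.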
Rapid Construction of Ensembles of High-resolution Reservoir Models Constrained to Production Data
Authors: C. Scheidt, J. Caers, Y. Chen and L. Durlofsky

Distance-based stochastic modeling techniques have recently emerged in the context of ensemble-level reservoir modeling, in particular for history matching, model selection and uncertainty quantification. Starting from an initial ensemble of model realizations, a distance between any two realizations is defined. This distance is introduced to incorporate specific modeling purposes into geological modeling, thereby potentially enhancing the efficiency of modeling techniques such as history matching. If one wants to create new models that are constrained to dynamic data (i.e., history matching), the calculation of distances requires a flow simulation for each model in the initial ensemble. This can be very time consuming, especially for high-resolution reservoir models. In this paper, we present a multi-scale framework for ensemble-level reservoir modeling. We employ a distance-based procedure, with emphasis on the rapid construction of multiple models with improved dynamic data conditioning. We propose to construct new fine-scale models constrained to dynamic data, while performing flow simulations on the associated coarse-scale models obtained by appropriate upscaling. The availability of multiple high-resolution models is crucial for proper uncertainty quantification, compared to retaining only a few models that match the data perfectly but do not necessarily capture the uncertainty in the desired predictions. An error modeling procedure is also introduced into the distance calculations to account for potential upscaling errors. Based on a few fine-scale flow simulations, the upscaling error is estimated for each model using a clustering technique. We demonstrate the efficacy of the method on an example with significant upscaling errors.
Results show that the error modeling procedure can reproduce the fine-scale flow behavior from coarse-scale simulations with sufficient accuracy (in terms of uncertainty predictions). As a consequence, an ensemble of high-resolution models, which are constrained to dynamic data, can be obtained, but with a minimum of flow simulations at the fine scale.
New Approaches for Generally Constrained Production Optimization with an Emphasis on Derivative-free Techniques
Authors: D. Echeverria Ciaurri, O.J. Isebor and L.J. Durlofsky

Production optimization involves the determination of optimum well controls to maximize an objective function such as cumulative oil production or net present value. In practice, the satisfaction of general physical and economic constraints is also required, which typically results in optimization problems that are nonlinearly constrained. Examples of nonlinear constraints include maximum water cut and minimum oil rate for wells operating under bottomhole pressure control. In this paper we present and apply optimization strategies that are able to incorporate a large variety of general constraints. We have identified a promising approach in the filter method. This recently introduced methodology borrows concepts from multi-objective optimization and avoids many of the issues that arise when objective function and constraints are lumped together by a penalty function. In terms of the underlying optimization procedure, our focus here is on techniques that are not simulator invasive; i.e., they view the flow model as a black box. Along these lines, we study derivative-free methodologies such as generalized pattern search and Hooke-Jeeves direct search, in combination with nonlinear constraint handling techniques. The performance of the algorithms is demonstrated on two challenging generally-constrained production optimization problems where up to 25 wells are considered.
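A minimal Hooke-Jeeves-style direct search (derivative-free, black-box objective) can be sketched as below. The quadratic is a hypothetical stand-in for a simulator-based objective, and the constraint-handling filter discussed in the abstract is omitted.

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, tol=1e-6, max_sweeps=500):
    """Minimal Hooke-Jeeves-style direct search: axis moves, halve step on failure."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    sweeps = 0
    while step > tol and sweeps < max_sweeps:
        sweeps += 1
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break              # accept the first improving move on this axis
        if not improved:
            step *= 0.5                # no axis move helped: refine the pattern
    return x, fx

# Hypothetical smooth stand-in for a simulator-based objective (e.g. negative NPV)
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
x_opt, f_opt = hooke_jeeves(f, [0.0, 0.0])
```

Only function values are used, which is exactly what makes such methods non-invasive with respect to the simulator.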
Using Evolution Strategy with Meta-models for Well Placement Optimization
Authors: Z. Bouzarkouna, D.Y. Ding and A. Auger

Optimum placement of non-conventional wells can considerably increase hydrocarbon recovery. Given the high drilling cost and the potential improvement in well productivity, well placement is an important decision in field development, and given complex reservoir geology and strong heterogeneities, stochastic optimization methods are the most suitable approaches for optimal well placement. This paper proposes an optimization methodology to determine optimal well locations and trajectories based upon the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a variant of evolution strategies recognized as one of the most powerful derivative-free optimizers for continuous optimization. To improve the optimization procedure, two new techniques are investigated: (1) adaptive penalization with rejection, developed to handle well placement constraints; and (2) a meta-model based on locally weighted regression, incorporated into CMA-ES using an approximate ranking procedure, which reduces the number of computationally expensive reservoir simulations. Several examples are presented. Our new approach is compared with a Genetic Algorithm incorporating the Genocop III technique, and it is shown to outperform the genetic algorithm: it leads in general to both a higher NPV and a significant reduction in the number of reservoir simulations.
A Derivative Free Optimization Method for Reservoir Characterization Inverse Problem
Authors: H. Langouët, F. Delbos, D. Sinoquet and S. Da Veiga

The reservoir characterization inverse problem aims at building reservoir models consistent with available production and seismic data for better forecasting of the production of a field. The observed data (pressures, oil/water/gas rates at the wells, and 4D seismic data) are compared with simulated data to determine unknown petrophysical properties of the reservoir. The underlying optimization problem is usually formulated as the minimization of a least-squares objective function. In practice, this problem is often solved by nonlinear gradient-based optimization methods with derivatives approximated by finite differences. In applications involving large 4D seismic datasets, the classical Gauss-Newton algorithm is often infeasible because of the storage required for the huge Jacobian matrix. This optimization problem consequently requires dedicated techniques: derivatives are not available, the associated forward problems are computationally expensive, and constraints may be introduced to handle a priori information. We therefore propose a derivative-free optimization method under constraints, based on a trust-region approach coupled with local quadratic interpolating models of the cost function and of the nonlinear constraints. Results obtained with this method on a synthetic reservoir application, with joint inversion of production data and 4D seismic data, are presented.
Nonlinear Output Constraints Handling for Production Optimization of Oil Reservoirs
Authors: E. Suwartadi, S. Krogstad and B. Foss

This paper presents a gradient-based optimization method to handle nonlinear output-constrained problems for large-scale systems in production optimization of oil reservoirs. The method is based on a barrier-function approach, in which the output constraints are added as a barrier term to the objective function. The gradient is obtained using the adjoint method. Two case examples are discussed, and the results show that the proposed optimization method preserves the efficiency of adjoint methods.
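The barrier idea in one dimension: augment the objective with -mu*log(-g(u)) so iterates stay strictly feasible. The objective, constraint, step size and barrier weight below are invented for illustration, and the analytic derivative stands in for the adjoint-computed gradient of the paper.

```python
import math

# Stand-in problem: minimise f(u) = (u - 3)^2 subject to g(u) = u - 2 <= 0
mu, lr = 0.5, 1e-2                 # barrier weight and gradient-descent step
u = 0.0                            # strictly feasible start
for _ in range(5000):
    # gradient of the barrier-augmented objective f(u) - mu * log(-(u - 2))
    grad = 2.0 * (u - 3.0) + mu / (2.0 - u)
    u -= lr * grad

# analytic optimum of the barrier problem: (2 - u)(3 - u) = mu / 2
u_star = 2.0 - (math.sqrt(2.0) - 1.0) / 2.0
```

The unconstrained minimiser u = 3 is infeasible; the barrier term keeps the iterate just inside the constraint, approaching u = 2 from below as mu is reduced.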
Optimal Well Placement
Authors: C.L. Farmer, J.M. Fowkes and N.I.M. Gould

One is often faced with the problem of finding the optimal location and trajectory for an oil well. Increasingly this includes the additional complication of optimising the design of a multilateral well. We present a new approach based on the theory of expensive function optimisation. The key idea is to replace the underlying expensive function (i.e., the simulator response) by a cheap approximation (i.e., an emulator). This enables one to apply existing optimisation techniques to the emulator. Our approach uses a radial basis function interpolant to the simulator response as the emulator. Note that the case of a Gaussian radial basis function is equivalent to the geostatistical method of kriging, and that radial basis functions can be interpreted as a single-layer neural network. We use a stochastic model of the simulator response to adaptively refine the emulator, and optimise it using a branch-and-bound global optimisation algorithm. To illustrate our approach we apply it numerically to finding the optimal location and trajectory of a multilateral well in a reservoir simulation model using the industry-standard ECLIPSE simulator. We compare our results to existing approaches and show that our technique is comparable, if not superior, in performance.
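A minimal Gaussian RBF emulator can be sketched as follows; the quadratic response is a hypothetical stand-in for the simulator (the adaptive refinement and branch-and-bound optimisation of the paper are omitted), and all names and parameters are illustrative.

```python
import numpy as np

def rbf_emulator(X, y, eps=3.0):
    """Gaussian radial-basis-function interpolant to scattered responses."""
    phi = lambda r: np.exp(-(eps * r) ** 2)
    pairwise = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(phi(pairwise), y)   # interpolation weights
    def emulate(xq):
        return phi(np.linalg.norm(xq - X, axis=-1)) @ w
    return emulate

# Hypothetical stand-in for the simulator response (e.g. NPV vs a 2D well location)
resp = lambda p: -((p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2)
rng = np.random.default_rng(5)
X = rng.uniform(size=(40, 2))               # sampled well locations (simulator runs)
y = np.array([resp(x) for x in X])
emulate = rbf_emulator(X, y)
```

The emulator reproduces the sampled responses and predicts well between them, so global optimisation can run on it at negligible cost compared with the simulator.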
Improving Perturbation Designs for Gradient-based Optimization Methods in History Matching
Authors: D.Y. Ding

Assisted history matching is now widely used to constrain reservoir models by integrating well production data and/or 4D seismic data. Among the optimization methods for performing history matching, gradient-based approaches are often applied. However, history matching is a complex inverse problem, and the computational effort (in terms of the number of reservoir simulations, which are very expensive in CPU time) increases with the number of matching parameters. For a problem with N parameters, we generally need N perturbations (i.e., N+1 reservoir simulations) to calculate all the gradients in order to find an optimized solution direction, so history matching large fields with many parameters is always a big challenge. In this paper, we present a new technique based on approximate derivative computations, which can considerably reduce the number of simulations required for gradient-based optimization. In this new approach, the objective function is first split into local components, and the dependence of each local component on the principal parameters is analyzed to minimize the number of influential parameters; interaction between parameters and local components is allowed. We then define a perturbation design for the derivative calculation, based on minimizing the errors on a test function and on the technique of graph colouring. The proposed perturbation design can compute the derivatives of the objective function with only a few simulations. This method is particularly interesting for regional and well-level history matching, and it is also suitable for matching geostatistical models by introducing numerous local parameters. The new technique makes history matching with large numbers of parameters (large fields) tractable. Some numerical examples, including a large field with 400 parameters, are presented to illustrate the efficiency of the new method.
The commonly used gradient-based optimization method is impractical, and even infeasible, for such large fields, which would require 400 perturbations to compute the derivatives. Using the new technique proposed in this paper, however, only 5 perturbations are needed to obtain all the required gradients. The CPU time on this large field can therefore be reduced by a factor of 80, making the history match feasible.
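The perturbation-design idea can be sketched on a toy objective split into local components: parameters that never share a component receive the same colour and are perturbed simultaneously, so three simulations replace seven. The dependency structure below is invented for illustration and is far simpler than a real regional decomposition.

```python
import numpy as np

# Toy objective split into local components; component i depends only on deps[i]
deps = [(0, 1), (2, 3), (4, 5)]
def components(x):
    return np.array([(x[i] - 1.0) ** 2 + (x[j] + 1.0) ** 2 for i, j in deps])

# Graph-colouring result: parameters in the same group share no component,
# so one perturbed "simulation" serves all of them at once
groups = [(0, 2, 4), (1, 3, 5)]

def colored_gradient(x, h=1e-6):
    base = components(x)                   # 1 baseline simulation
    grad = np.zeros_like(x)
    for g in groups:                       # + 1 simulation per colour
        xp = x.copy()
        xp[list(g)] += h
        diff = (components(xp) - base) / h
        for p in g:
            # a component's change is attributable to its unique perturbed parameter
            ci = next(i for i, d in enumerate(deps) if p in d)
            grad[p] = diff[ci]
    return grad

x = np.array([0.5, 0.5, -0.3, 0.2, 1.0, -1.0])
g_approx = colored_gradient(x)             # 3 evaluations instead of 7
g_exact = np.array([2 * (x[0] - 1), 2 * (x[1] + 1), 2 * (x[2] - 1),
                    2 * (x[3] + 1), 2 * (x[4] - 1), 2 * (x[5] + 1)])
```

With 6 parameters and 2 colours, 1 + 2 evaluations recover the full gradient that naive finite differences would obtain with 1 + 6; the paper's 400-parameter field with 5 perturbations is the same mechanism at scale.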
Adjoint Methods for Multicomponent Flow Simulation
Authors: D. Kourounis, D. Voskov and K. Aziz

The focus of the present work is on efficient computation of gradients using adjoint methods. In contrast to finite-difference methods, where the number of forward simulations required to estimate the desired derivatives grows linearly with the number of control variables, adjoint techniques provide all the required derivatives of the objective function in a fraction of the computational time of one forward simulation run. From an implementation viewpoint, however, they are significantly more involved than, for example, finite-difference methods, because the computation of gradients through adjoints requires a deep understanding of the simulation code. While the discrete adjoint formulation is most commonly employed in the reservoir simulation community, little is known there about the continuous adjoint formulation, which is usually preferred in aerodynamics. Both continuous and discrete adjoint formulations are discussed in this work and implemented for a compositional reservoir simulator. They are applied to several optimization problems of practical interest and compared with respect to their efficiency and the quality of the gradients they provide. The computed gradients are then passed to standard optimization software packages to determine optimal well settings for maximizing a specified objective function, such as the net present value.
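For a linear toy dynamical system the contrast is easy to see: finite differences need one extra forward run per control variable, while a single backward adjoint sweep yields every control derivative. The dynamics, objective, and dimensions below are invented stand-ins for the compositional simulator, shown only to make the discrete-adjoint recursion concrete.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, T = 4, 2, 10                          # states, controls per step, time steps
A = 0.5 * rng.normal(size=(n, n))           # linear "reservoir" dynamics
B = rng.normal(size=(n, m))                 # control influence (e.g. well settings)
c = rng.normal(size=n)                      # per-step objective weights (NPV-like)
u = rng.normal(size=(T, m))

def forward(u):
    """One simulation run: J = sum_k c . x_k with x_{k+1} = A x_k + B u_k."""
    x, J = np.zeros(n), 0.0
    for k in range(T):
        x = A @ x + B @ u[k]
        J += c @ x
    return J

def adjoint_gradient(u):
    """All dJ/du_k from a single backward sweep of the adjoint recursion."""
    lam = np.zeros(n)
    grad = np.zeros_like(u)
    for k in reversed(range(T)):
        lam = c + A.T @ lam                 # adjoint state: sensitivity of J to x_{k+1}
        grad[k] = B.T @ lam
    return grad

def fd_gradient(u, h=1e-6):
    """Finite differences: T * m extra forward runs."""
    J0 = forward(u)
    g = np.zeros_like(u)
    for k in range(T):
        for j in range(m):
            up = u.copy()
            up[k, j] += h
            g[k, j] = (forward(up) - J0) / h
    return g
```

The two gradients agree to finite-difference accuracy, but the adjoint version costs one backward pass regardless of how many controls there are.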
Optimizing Well Placement Planning in the Presence of Subsurface Uncertainty and Operational Risk Tolerance
Authors: P.G. Tilke, R. Banerjee, V.B. Halabe, B. Balci and R.K.M. Thambynayagam

This paper presents an automated workflow to accelerate the well placement planning process in the presence of subsurface uncertainty and operational risk. The system allows the user to screen and rank development options in minutes. This automated field development planning system is an optimization application integrated with a larger seismic-to-simulation workflow. A key piece of technology in the system is a high-speed semi-analytical reservoir simulator, which enables an optimal strategy to be computed very rapidly. The system also embeds key technologies for optimization in the presence of uncertainty and risk, leveraging an advanced uncertainty framework. The workflow starts with a reservoir model, along with existing wells and other operational constraints. Oil or gas production is computed using the high-speed reservoir simulator. Proposed well trajectories honor operational constraints such as facility processing, water injection capacity, borehole dogleg severity, anti-collision with existing wells, and hazard avoidance on the surface and in the reservoir. The optimal strategy proposes well surface locations, trajectories, and completion locations, and is calculated by optimizing a value measure, e.g., net present value (NPV) or production. This new methodology has many applications in the field development planning context. We are able to rapidly screen multiple field development planning scenarios and produce an optimal new/infill drilling strategy, with primary production or waterflooding, consisting of both newly proposed and existing wells. The final result includes performance predictions based on an optimized field development plan (FDP), risk, and subsurface uncertainties. The most promising scenarios can, if necessary, subsequently be used for detailed numerical simulation to validate the results.
Steam-assisted Gravity Drainage Optimization for Extra Heavy Oil
Authors: J. Gossuin, W.J. Bailey, B. Couet and P. Naccache

Exploitation of extra heavy oil assets involves complex and costly production processes, one of which is steam-assisted gravity drainage (SAGD). This method requires the generation of substantial quantities of steam, which is injected into a horizontal injection well parallel to, and above, a paired horizontal producer. We focus on maximizing the net present value (NPV) of a SAGD production process by combining optimization and simulation. The use of a neural network algorithm overcomes many of the limitations of manual sensitivity studies while requiring only a limited number of iterations. We demonstrate this optimization process on an example reservoir containing an extra heavy Canadian crude. A 2D proxy with rapid solution times is used to address the practical issue of the very long run times associated with thermal simulation. This proxy is used to identify the control parameters that actually impact the objective function (NPV), thereby reducing the solution search space, and also to suggest better starting points for the optimizer; both of these facets may accelerate finding the optimum in full 3D optimizations. Tangible benefits from this investigation include new operational strategies for maximizing NPV, recognition of the impact and optimal duration of preheating, and efficient comparisons of different well patterns.