ECMOR XVI - 16th European Conference on the Mathematics of Oil Recovery
- Conference date: September 3-6, 2018
- Location: Barcelona, Spain
- Published: 03 September 2018
Hybrid Finite Volume discretization of two-phase Discrete Fracture Matrix models with nonlinear interface solver
Authors: J. Aghili, K. Brenner, J. Hennicker, R. Masson and L. Trenty
Summary: A new Hybrid Finite Volume discretization is proposed in this work for two-phase Darcy flow in Discrete Fracture Matrix (DFM) models accounting for nonlinear transmission conditions at matrix-fracture (mf) interfaces. This type of model is more accurate than alternative hybrid-dimensional two-phase Darcy flow models based either on continuous phase pressures at the mf interfaces, which assume that the fractures act as drains, or on the elimination of the mf interface phase pressures using harmonic transmissibilities. On the other hand, keeping the pressure and saturation unknowns and the nonlinear flux continuity equations at the mf interfaces makes the nonlinear and linear systems harder to solve, owing to the highly contrasted permeabilities, capillary pressures, and scales between the fractures and the matrix. To solve efficiently the nonlinear systems arising at each time step of the fully implicit time integration, a Newton solver with linear elimination and nonlinear update of the mf interface unknowns is derived. Numerical experiments show the efficiency of our approach on several 2D test cases, including an anisotropic matrix permeability and a large fracture network.
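The Newton solver described above eliminates the mf interface unknowns from each linear solve and then updates them. A minimal sketch of that kind of block elimination, using a generic Schur complement on a hypothetical residual/Jacobian interface (illustrative only, not the authors' implementation):

```python
import numpy as np

def newton_with_interface_elimination(residual, jacobian, x, y,
                                      tol=1e-10, max_iter=30):
    """x: matrix (cell) unknowns, y: mf interface unknowns."""
    for _ in range(max_iter):
        f, g = residual(x, y)                 # cell and interface residuals
        if max(np.linalg.norm(f), np.linalg.norm(g)) < tol:
            break
        A, B, C, D = jacobian(x, y)           # block Jacobian [[A, B], [C, D]]
        Dinv = np.linalg.inv(D)
        S = A - B @ Dinv @ C                  # Schur complement: eliminate y
        dx = np.linalg.solve(S, -(f - B @ Dinv @ g))
        dy = -Dinv @ (g + C @ dx)             # recover the interface increment
        x, y = x + dx, y + dy
    return x, y

# Toy usage: f(x, y) = x^2 + y - 2, g(x, y) = x + y^2 - 2, solution x = y = 1.
res = lambda x, y: (np.array([x[0]**2 + y[0] - 2.0]),
                    np.array([x[0] + y[0]**2 - 2.0]))
jac = lambda x, y: (np.array([[2.0 * x[0]]]), np.array([[1.0]]),
                    np.array([[1.0]]), np.array([[2.0 * y[0]]]))
print(newton_with_interface_elimination(res, jac, np.array([2.0]), np.array([0.5])))
```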
Acute Boundary Aligned Unstructured Grid Generation And Consistent Flux Approximations
Authors: S. Manzoor, M.G. Edwards and A. Dogru
Summary: Grids based on Voronoi diagrams comprise the dual of Delaunay triangulations (DTs) and remain predominant in reservoir simulation. Voronoi-based grids are locally orthogonal, i.e., PErpendicular BIsectional (PEBI), and permit a consistent two-point flux approximation if the permeability field is isotropic or if the grid is generated to be K-orthogonal. In addition, Voronoi grids can be made to honor classical key constraints, aligning the associated control volumes with solid walls, well trajectories, and geological features (layers, shale barriers, fractures, faults, and pinch-outs) and thereby minimizing discretization error.
Typically, in reservoirs formed by deposition, the directional trend in the horizontal plane is not very distinct, whereas across the layers rock properties may jump by orders of magnitude. The PEBI property associated with DTs is of major significance and can only be exploited provided the circumcenter is used as the approximation point. This requires that the grids generated be boundary aligned and comprised entirely of acute simplexes. The control-volume centroid is commonly used as the approximation point instead, because an acute triangulation that honors geological features cannot be guaranteed in the general case, especially in the presence of complex geometries and geological constraints. Development of a geological-feature-based acute DT technique is presented. A boundary-aligned grid generation method is augmented with a mesh reconstruction technique that ensures circumcenter containment of a DT. To honor the geological features, the idea of a protection circle is used. In the mesh reconstruction technique, each mesh point is optimized iteratively using a local advancing-front method that incorporates the lengths of the opposite edges of the set of simplexes sharing it. The methods presented generate boundary-aligned acute DTs where previously proposed methods fail to ensure the acute property. Details of the method will be presented, together with results for a number of test cases that verify consistency of the two-point flux on the resulting boundary-aligned acute Voronoi grids.
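The key geometric requirement above is that every simplex be acute, in which case it contains its own circumcenter and the circumcenter can serve as the two-point flux approximation point. A minimal illustrative sketch of those two checks for a 2D triangle (generic geometry, not the authors' grid generator):

```python
import numpy as np

def circumcenter(a, b, c):
    """Circumcenter of the triangle (a, b, c) in 2D."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy])

def is_acute(a, b, c):
    """True iff every angle of the triangle is strictly less than 90 degrees."""
    pts = [np.asarray(p, float) for p in (a, b, c)]
    for i in range(3):
        u = pts[(i + 1) % 3] - pts[i]
        v = pts[(i + 2) % 3] - pts[i]
        if np.dot(u, v) <= 0.0:            # non-positive cosine => angle >= 90
            return False
    return True

# An acute triangle always contains its circumcenter.
tri = ((0.0, 0.0), (1.0, 0.0), (0.5, 0.8))
print(is_acute(*tri), circumcenter(*tri))
```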
Combining Face Based And Nodal Based Discretizations For Multiphase Darcy Flow Problems
Summary: A new methodology is introduced in this work to combine face-based (Hybrid Finite Volume, HFV, or Two Point Flux Approximation, TPFA) and nodal-based (Vertex Approximate Gradient, VAG) discretizations on hybrid meshes, in order to adapt the numerical scheme to the different types of cells and medium properties in different parts of the mesh. The stability and convergence of the combined VAG-HFV schemes are studied in the gradient scheme framework and are shown to hold on arbitrary partitions of the cells for the unstabilised version and on arbitrary partitions of the faces for the stabilised version. The framework preserves at the interface the discrete conservation properties of the VAG and HFV schemes, with fluxes based on transmissibility matrices local to each cell. This discrete conservative form allows the VAG and HFV discretizations of two-phase Darcy flow models to be naturally extended to the combined VAG-HFV schemes. Numerical results on different types of meshes show the accuracy and efficiency of the combined schemes, which are compared with the stand-alone VAG and HFV (or TPFA) discretizations.
Hybrid-mixed Mimetic Method For Reservoir Simulation With Full Tensor Permeability
Authors: A.S. Abushaikha and K. Terekhov
Summary: In this work, we present a fully implicit hybrid mimetic finite difference formulation for general-purpose compositional reservoir simulation. The formulation is locally conservative, and the momentum and mass balance equations are solved simultaneously, including Lagrange multipliers on element interfaces. The mimetic finite difference (MFD) method mimics fundamental properties of mathematical and physical systems, and the mixed finite element (MFE) method ensures the coupling of the mass and momentum balance equations. The method utilizes automatic differentiation for the Jacobian construction. This hybrid approach accommodates unstructured grids, and we apply it to compositional test cases with full permeability tensors. We also discuss the accuracy of the new formulation. For all tests, we compare the performance and accuracy of the proposed approach with the standard two-point flux approximation (TPFA) method.
FV-MHMM: Local Adaptation Driven By An A Posteriori Error Estimator
Authors: J. Franc, G. Debenest, L. Jeannin and R. Masson
Summary: Multiscale methods for simulating groundwater flow and for predicting production of large-scale reservoirs have seen significant breakthroughs over the last decades. These approaches successively solve coarse-scale problems from fine-scale local solutions and map the coarse-scale solution back to the fine scale. They make it possible to include petrophysical information from the fine scale while keeping computation time acceptable. However, the information exchange between the coarse scale and the fine scale has to be improved to deal with highly heterogeneous reservoirs.
Recently, the FV-MHMM method [1] was derived as an adaptation of the Mixed Hybrid Multiscale Method developed in [2] to the finite volume formalism.
The pressure field is obtained by solving a hybrid form of the parabolic system at a coarse scale. The mathematical formulation relies on Lagrange multipliers, viewed as coarse scale fluxes, to ensure pressure continuity between coarse blocks. This paper proposes different strategies for improving the performance of FV-MHMM on heterogeneous media.
On the one hand, the basis functions of the global problem can be adapted to account for heterogeneities. Two approaches are proposed: a transmissivity-weighted (tw) scheme and a multiscale two-point flux approximation (mstpfa) scheme. The latter approach uses local simulations to build a weighting scheme based on estimates of the heterogeneous fluxes across coarse faces.
On the other hand, the number of degrees of freedom (that is, the number of coarse-scale basis functions) can be increased in order to improve the solution. Such an adaptive mechanism, driven by an a posteriori error estimator, has been developed. It locally triggers the division of coarse faces and, hence, the addition of degrees of freedom.
These two approaches may also be combined to further improve the FV-MHMM.
Finally, numerical tests are presented and the different strategies for improving the multiscale FV-MHMM solution are discussed. For example, a compromise has to be found between the gain in accuracy and the number of degrees of freedom added.
[1] Franc, J., Jeannin, L., Debenest, G. and Masson, R., "FV-MHMM method for reservoir modeling", Computational Geosciences, 21(5–6), 895–908, 2017.
[2] Harder, C., Paredes, D. and Valentin, F., "A family of multiscale hybrid-mixed finite element methods for the Darcy equation with rough coefficients", Journal of Computational Physics, 2013.
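A minimal sketch of the estimator-driven adaptation loop described above, where solve_coarse, error_indicator and split_face are hypothetical hooks standing in for the coarse hybrid solver, the a posteriori indicator and the coarse-face division (an illustration, not the FV-MHMM implementation):

```python
def adaptive_fv_mhmm(faces, solve_coarse, error_indicator, split_face,
                     tol=1e-3, max_cycles=5):
    """Refine coarse faces wherever the a posteriori indicator exceeds tol."""
    for _ in range(max_cycles):
        solution = solve_coarse(faces)                    # coarse hybrid solve
        marked = [f for f in faces if error_indicator(f, solution) > tol]
        if not marked:                                    # estimator satisfied
            return solution, faces
        for f in marked:                                  # add DOFs only where needed
            faces.remove(f)
            faces.extend(split_face(f))
    return solve_coarse(faces), faces
```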
History Matching Channelized Facies Models Using Ensemble Smoother With A Deep Learning Parameterization
Authors: S.W.A. Canchumuni, A.A. Emerick and M.A.C. Pacheco
Summary: Ensemble data assimilation methods have been successfully applied in several real-life history-matching problems. However, because these methods rely on Gaussian assumptions, their performance is severely degraded when the prior geology is described in terms of complex facies distributions. This work introduces a novel parameterization based on deep learning for history matching facies models with ensemble methods.
The proposed method consists of a parameterization of geological facies by means of a deep belief network (DBN) used as an autoencoder. The process begins with a large set of facies realizations which are used to train the DBN. The trained network has two parts: an encoder and a decoder function. The encoder is used to construct a continuous parameterization of the facies, which is iteratively updated to account for the observed production data using the ensemble smoother with multiple data assimilation (ES-MDA) method. After each iteration of ES-MDA, the decoder is used to reconstruct the facies realizations.
The proposed method is tested in three synthetic history-matching problems with channelized facies constructed with multiple-point geostatistics. We compare the results of the DBN parameterization against the standard ES-MDA (with no parameterization) and the recently proposed optimization-based principal component analysis (OPCA). Our results show that all procedures are able to match the observed production data. However, standard ES-MDA failed to generate channel facies with well-defined boundaries. The OPCA and DBN parameterizations improved the facies description, resulting in the expected bi-modal distributions of log-permeability. This paper reports our initial results on an ongoing investigation with deep learning. Nevertheless, the results presented here indicate great potential for the use of deep learning technologies in the inverse modeling of petroleum reservoirs.
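A minimal sketch of the encode / assimilate / decode loop described above, using a generic ES-MDA-style update in latent space; encoder, decoder and simulate are hypothetical stand-ins for the trained DBN and the reservoir simulator (illustrative only, not the authors' code):

```python
import numpy as np

def esmda_with_autoencoder(facies_ensemble, encoder, decoder, simulate,
                           d_obs, obs_err_std, n_assimilations=4, rng=None):
    rng = rng or np.random.default_rng()
    z = np.array([encoder(m) for m in facies_ensemble])   # continuous latent variables
    alpha = float(n_assimilations)                         # ES-MDA inflation, sum(1/alpha) = 1
    for _ in range(n_assimilations):
        d = np.array([simulate(decoder(zi)) for zi in z])  # forecast data per member
        dz = z - z.mean(axis=0)
        dd = d - d.mean(axis=0)
        C_zd = dz.T @ dd / (len(z) - 1)                    # latent/data covariance
        C_dd = dd.T @ dd / (len(z) - 1)                    # data covariance
        K = C_zd @ np.linalg.inv(C_dd + alpha * np.diag(obs_err_std**2))
        noise = np.sqrt(alpha) * obs_err_std * rng.standard_normal(d.shape)
        z = z + (d_obs + noise - d) @ K.T                  # update latent variables
    return [decoder(zi) for zi in z]                       # reconstruct facies

# Toy usage with identity encoder/decoder and a linear "simulator".
rng = np.random.default_rng(0)
prior = [rng.standard_normal(3) for _ in range(50)]
G = rng.standard_normal((2, 3))
posterior = esmda_with_autoencoder(prior, lambda m: m, lambda z: z, lambda m: G @ m,
                                   d_obs=np.array([1.0, -0.5]),
                                   obs_err_std=np.array([0.1, 0.1]), rng=rng)
```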
Towards Automatic And Adaptive Localization For Ensemble-Based History Matching
Summary: Ensemble-based history matching methods are among the state-of-the-art approaches to reservoir characterization. In practice, however, they often suffer from ensemble collapse, a phenomenon that deteriorates history matching performance. To prevent ensemble collapse, it is customary to equip an ensemble history matching algorithm with a certain localization scheme.
In a previous study (SPE Journal, SPE-185936-PA), we proposed an adaptive localization scheme that exploits the correlations between model variables and simulated observations for localization. Correlation-based adaptive localization not only overcomes some longstanding issues arising in conventional distance-based localization, but is also more convenient to implement and use in real field case studies (SPE conference paper, SPE-191305-MS).
The aforementioned correlation-based localization is subject to two problems. One is that it requires running a relatively large ensemble in order to achieve decent performance in an automatic manner, which becomes computationally expensive in large-scale problems. As a result, certain empirical tuning factors were introduced in the previous work to reduce the computational costs. The other is that the way the tapering coefficients were computed in the previous work may induce discontinuities and neglect the information of certain still-influential observations in the model updates.
The main objective of this work is to improve the efficiency and accuracy of the correlation-based adaptive localization proposed in the previous work, making it run in an automatic manner without incurring substantial extra computational costs. To this end, we introduce two enhancements to address the aforementioned problems. We apply the resulting automatic and adaptive correlation-based localization with the two enhancements to 2D and 3D case studies, and show that it leads to better history matching performance than is achieved in the previous work.
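A minimal sketch of correlation-based tapering (one assumed form of the idea, not the paper's exact scheme): each model-variable/observation pair is damped according to the ensemble sample correlation between the variable and the simulated observation, with a smooth ramp above a threshold.

```python
import numpy as np

def correlation_taper(model_ens, sim_obs_ens, threshold=0.3):
    """model_ens: (Ne, Nm), sim_obs_ens: (Ne, Nd); returns an (Nm, Nd) taper."""
    m = model_ens - model_ens.mean(axis=0)
    d = sim_obs_ens - sim_obs_ens.mean(axis=0)
    corr = (m.T @ d) / (np.linalg.norm(m, axis=0)[:, None]
                        * np.linalg.norm(d, axis=0)[None, :] + 1e-12)
    # Damp weak correlations to zero, keep strong ones; the taper multiplies the
    # Kalman gain element-wise (Schur product) in the ensemble update.
    return np.clip((np.abs(corr) - threshold) / (1.0 - threshold), 0.0, 1.0)
```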
Gaussian Mixture Model Fitting Method For Uncertainty Quantification By Conditioning To Production Data
Summary: For most real-field history matching problems, the dynamic system of multi-phase flow in the reservoir induces strongly nonlinear behavior in the data responses. Therefore, the posterior probability density function (PPDF) formulated within the Bayesian framework may have multiple local maxima. It is extremely challenging to properly quantify the uncertainty of reservoir simulation forecast results for such real-field problems with this type of complex PPDF.
In this paper, our previously introduced Gaussian mixture model (GMM) method for approximating the PPDF is improved by adding an arbitrarily large number of additional Gaussian components to the superposition, where the relative heights and widths of these components are determined using a suitable fitting procedure. Simulation results of all reservoir models generated during the history matching process, e.g., using the distributed Gauss-Newton (DGN) optimizer, are used as training data points for this GMM fitting. The distance between the GMM approximation and the actual posterior PDF is estimated by summing up the errors calculated at all training data points. The distance is an analytical function of the unknown GMM parameters, such as the covariance matrix and weighting factor of each Gaussian component. These unknown GMM parameters are determined by minimizing the distance function. A GMM is accepted if the distance is reasonably small. Otherwise, new Gaussian components are added iteratively to further reduce the distance until convergence. Finally, high-quality conditional realizations are generated by sampling from each Gaussian component in the mixture with the appropriate relative probability.
The proposed method is first validated using nonlinear toy problems and then applied to real-field cases. The GMM generates better samples with a computational cost comparable to or less than that of other methods, including the well-known expectation-maximization (EM) algorithm for GMMs, the randomized maximum likelihood (RML) method, and ensemble-based methods. More importantly, by adding more Gaussian components, the accuracy of the GMM approximation can always be further increased (at a higher computational cost). Hence, as illustrated in our test cases, the samples yield production forecasts that match production data reasonably well in the history matching period and are consistent with production data in the blind test period.
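A minimal sketch of the final sampling step described above (generic GMM sampling, not the paper's fitting procedure): once a mixture has been accepted, conditional realizations are drawn by picking a component with its relative probability and sampling from that Gaussian.

```python
import numpy as np

def sample_gmm(weights, means, covariances, n_samples, rng=None):
    """Draw samples from a Gaussian mixture with the given component weights."""
    rng = rng or np.random.default_rng()
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()          # normalize relative probabilities
    comps = rng.choice(len(weights), size=n_samples, p=weights)
    return np.array([rng.multivariate_normal(means[k], covariances[k])
                     for k in comps])

# Toy usage: a two-component mixture in 2D.
samples = sample_gmm([0.7, 0.3],
                     [np.zeros(2), np.array([3.0, 3.0])],
                     [np.eye(2), 0.5 * np.eye(2)],
                     n_samples=1000)
```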
Introducing Stochastic Model Errors In Ensemble-Based History Matching
By G. Evensen
Summary: In reservoir history matching, we usually neglect model errors other than those associated with the parametrization of the model. Neglected model errors include the stochastic errors in the model forcing, such as errors in the production-rate data that are used to force the simulation model, although we allow for the rate data to contain errors in the update step. Thus, we assume that the selected uncertain parameters of the model account for all the model errors. If significant errors are unaccounted for, there is a risk of an unphysical update of some uncertain parameters that compensates for other neglected errors.
When using EnKF or ES, it is relatively easy to include stochastic model errors as long as we update both the parameters and the state variables simultaneously. However, when we use ES, and in particular the new iterative smoothers such as IES (Chen and Oliver, 2012, 2013) and ESMDA (Emerick and Reynolds, 2013), it is standard to assume the model to be perfect. Thus, we update only the uncertain parameters and then rerun the ensemble simulation to obtain the final result. In this setting, it is not straightforward to consistently include stochastic model errors in the assimilation scheme.
This paper gives the theoretical foundation for introducing additive stochastic model errors in ensemble methods for history matching. We review recent work on including model errors in IEnKF by Sakov et al. (2017), as well as an approach by Tarantola (1987), who accounts for additive model errors by combining them with the measurement errors. Based on these results, we explain possible procedures for practically including model errors in the iterative smoothers (IES and ESMDA). We also demonstrate the impact of adding (or neglecting) stochastic model errors in applications with reservoir models.
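A minimal sketch of what "additive stochastic model error" means in an ensemble forecast (an illustration under generic assumptions, not Evensen's formulation): each ensemble member receives an independent random perturbation after every deterministic model step, so the smoother can attribute part of the data mismatch to model error rather than to the uncertain parameters alone.

```python
import numpy as np

def run_with_additive_model_error(step, x_ens, n_steps, q_std, rng=None):
    """step(x): one deterministic model step; q_std: model-error std. deviation."""
    rng = rng or np.random.default_rng()
    trajectory = [x_ens.copy()]
    for _ in range(n_steps):
        x_ens = np.array([step(x) for x in x_ens])
        x_ens = x_ens + q_std * rng.standard_normal(x_ens.shape)  # additive model error
        trajectory.append(x_ens.copy())
    return trajectory
```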
Seismic History Matching Uncertainty With Weighted Objective Functions
Authors: Q. Zhang, R. Chassagne and C. MacBeth
Summary: History matching using 4D time-lapse seismic data provides a way to manage the uncertainty of multiple models, but it remains a great challenge, with no clear statements, practical methodology or good practice yet established. What makes seismic history matching (SHM) so difficult is mainly the nature of the seismic data: acquisition, interpretation and processing embed this data with uncertainties due to physics and measurement issues. One of the challenges is to extract and quantify the uncertainties carried by the seismic data and use them to guide decisions.
The way we insert seismic data and its inherent uncertainty into the workflow is key to further progress in 4D seismic history matching. A comprehensive workflow is implemented that estimates these uncertainties using weighting factors. In the proposed history matching workflow, 4D seismic attributes are converted to binary images that are representative of the fluid saturations; the binary maps are then compared using a pattern-matching objective function that captures the main features of the seismic data. The novelty is that weighted binary maps are generated from different estimates of the uncertainty within the seismic attribute, in order to explore and screen their performance in the seismic history matching procedure. The weighted maps are associated with error/uncertainty quantification of the 4D seismic signature, which allows us to identify the specific regions where the seismic reflects fluid properties, and shows the greatest general-usage potential. An adaptive derivative-free optimisation is applied for the history matching process. Global and local properties, including permeability, porosity, fault transmissibility and net-to-gross, are parameterised in the history matching loop.
Numerical experiments with a UKCS field show that this methodology is flexible and efficient, circumventing the large uncertainty in the seismic data and the use of an uncertain petro-elastic model. This study also indicates that the weighted binary seismic history matching method achieves a reasonable production match while constraining the saturation changes derived from the time-lapse data.
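A minimal sketch of one plausible form of the weighted binary-map objective described above (an assumed formulation, not the paper's exact metric): the per-cell disagreement between observed and simulated binary 4D-seismic maps is scaled by a confidence weight derived from the seismic uncertainty estimate.

```python
import numpy as np

def weighted_binary_mismatch(obs_map, sim_map, weights):
    """obs_map, sim_map: 0/1 arrays; weights: per-cell confidence in [0, 1]."""
    disagreement = np.abs(obs_map.astype(float) - sim_map.astype(float))
    return float(np.sum(weights * disagreement) / (np.sum(weights) + 1e-12))

# Toy usage on small binary maps with per-cell confidence weights.
obs = np.array([[1, 1, 0], [0, 0, 0]])
sim = np.array([[1, 0, 0], [0, 1, 0]])
w = np.array([[1.0, 1.0, 1.0], [0.2, 0.2, 0.2]])
print(weighted_binary_mismatch(obs, sim, w))
```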
Seismic Data Assimilation With An Imperfect Model
Authors: D.S. Oliver and M.A. Alfonzo
Summary: Data assimilation methods often assume perfect models and uncorrelated observation errors. The assumption of a perfect model is probably always wrong for real problems, and since model error is known to generally induce correlated effective observation errors, the assumption of uncorrelated observation errors is probably almost always wrong, too. Ignoring the correlation of observation errors leads to suboptimal assimilation of observations. Common methods for dealing with correlated observation errors include thinning of the data, creation of super-observations, and inflation of the error variance. While those methods can reduce the tendency to underestimate uncertainty, they tend to exclude small-scale information in the data.
In this paper, we examine the consequences of model errors on the assimilation of seismic data. To provide a controlled investigation, we investigate two sources of model error: errors in seismic resolution and errors in the petroelastic model. Both errors result in correlated total observation errors, which must be accounted for in the data assimilation scheme. We show how to recognize the existence of correlated error through model diagnostics, how to estimate the correlation in the error, and how to use a model with correlated errors in a perturbed-observation form of an iterative ensemble smoother to improve estimates of uncertainty after assimilation of seismic data. The methodology is applied to synthetic seismic data from the Norne Field model. Parameters of the seismic resolution and the observation noise are estimated from the actual inverted impedance. Using this approach, we are able to assimilate approximately 115,000 observations with correlated total observation error efficiently, without neglecting correlations. The examples show that the iterative estimation of total observation error compensates for the model error and improves forecasts. The method requires the observation error covariance to be non-diagonal, but we show that this is easily handled even for large problems.
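In a perturbed-observation ensemble smoother, accounting for correlated total observation error changes how the observation perturbations are drawn. A minimal sketch of that step under the assumption of a symmetric positive-definite covariance R (illustrative only, not the authors' implementation):

```python
import numpy as np

def correlated_obs_perturbations(R, n_ens, rng=None):
    """Draw n_ens perturbation vectors with covariance R (non-diagonal allowed)."""
    rng = rng or np.random.default_rng()
    L = np.linalg.cholesky(R)                        # R = L @ L.T
    return rng.standard_normal((n_ens, R.shape[0])) @ L.T

# Toy usage: a 3x3 covariance with off-diagonal (correlated) terms.
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.6],
              [0.3, 0.6, 1.0]])
perturbations = correlated_obs_perturbations(R, n_ens=200)
```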
Reservoir Inverse Modeling by Ensemble Smoother with Multiple Data Assimilation for Seismic and Production Data
Summary: Time-lapse seismic data has been widely used for detailed reservoir characterization, and data assimilation algorithms are commonly used for history matching production data in petroleum reservoirs. However, hardly any seismic data has been integrated into the reservoir inverse modeling workflow, owing to the large data size, the finer gridding and, especially, the scarcity in time of the seismic data. Popular ensemble-based reservoir inverse modeling methods such as the ensemble Kalman filter (EnKF) also face the problems of high computational cost, due to the storage of intermediate variables and the restarting of the reservoir simulation process, and of inconsistency between the full-step and step-wise simulations. The intrinsically sequential data assimilation character of the EnKF is an obstacle to assimilating 4D seismic data. An alternative is the ensemble smoother (ES), an ensemble-based method for data assimilation. The ES is based on a Bayesian updating scheme of the reservoir model to match the production history and improve the production forecast. To improve the algorithm's convergence, a multiple data assimilation (MDA) method was proposed by Emerick and Reynolds (2012a).
The algorithm we use is the ensemble smoother with multiple data assimilation (ESMDA). We modified ESMDA to integrate the geophysical data and created a new workflow to history match both the production and the geophysical data. The available data consist of well measurements, including oil production rate, water cut and bottom-hole pressure, as well as time-lapse geophysical data, including P-wave impedance. In this paper, we first formulate the mathematical description of ESMDA; we then downscale the seismic data, modify the data assimilation algorithm, illustrate the history matching workflow for both production and geophysical data, and finally show how it works with a case study of a water flooding operation in a synthetic reservoir. For comparison, we also show history matching on the production data only, which yields inferior results compared with matching both the production and the seismic data.