ECMOR X - 10th European Conference on the Mathematics of Oil Recovery
- Conference date: 04 Sep 2006 - 07 Sep 2006
- Location: Amsterdam, Netherlands
- ISBN: 978-90-73781-47-4
- Published: 04 September 2006
Modeling of Nucleation Fronts during Depletion of Gas-Saturated Porous Media
Authors: S. Zaleski, M. Chraibi and F. Franco
Oil production through cold depletion leads to degassing of the light species and the formation of a bubbly phase, sometimes called the foamy oil effect. This bubbly phase is observed particularly with heavy oils, which combine high viscosity and asphaltenes. We have modeled depletion experiments on laboratory-scale cores using a one-dimensional model at the Darcy scale, describing the multiphase flow of the oil and gas. The oil and gas phases move according to a classical relative permeability model.
Initially, the gas phase is in the form of very small bubbles, which are physically viewed as stabilized in crevices of the solid phase or as small germs surrounded by a coating skin.
When surface tension is taken into account in the phase equilibrium, a Gibbs radius appears, such that bubbles larger than this radius grow rapidly while smaller ones collapse.
As a result, the solutions of the partial differential equations describing mass conservation display nonlinear fronts that connect two regions of almost constant concentration and velocity, in a manner analogous to shock waves. We describe an asymptotic theory that explains the formation of the fronts and connects them to the dynamics of bubble activation at nucleation sites.
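The growth/collapse threshold described above can be written down directly; a minimal sketch of the critical-radius relation (the example values used below are illustrative SI numbers, not data from the paper):

```python
def gibbs_radius(sigma, p_gas, p_liq):
    """Critical (Gibbs) radius at which the Laplace pressure 2*sigma/R
    balances the supersaturation p_gas - p_liq. Bubbles larger than this
    radius grow rapidly; smaller ones collapse."""
    return 2.0 * sigma / (p_gas - p_liq)

def grows(r, sigma, p_gas, p_liq):
    """True if a bubble of radius r is super-critical and will grow."""
    return r > gibbs_radius(sigma, p_gas, p_liq)
```

For a surface tension of 0.03 N/m and a 1 bar supersaturation this gives a sub-micron critical radius, consistent with the very small stabilized germs the abstract describes.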
Numerical Analysis of Foam Motion in Porous Media Using a New Stochastic Bubble Population Model
Authors: P. L. J. Zitha, D.-X. Du and F. J. Vermolen
We present a numerical analysis of the stochastic population balance (SPB) theory for foam motion in porous media. The theory condenses into a set of non-linear partial differential equations in the saturation, pressure and bubble density. We solve the equations using the IMPES method and perform a sensitivity analysis. Finally, we compare the saturation profiles obtained numerically with those obtained from CT-scan foam experiments.
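For readers unfamiliar with IMPES, the structure of one implicit-pressure, explicit-saturation step can be sketched for a generic 1-D incompressible two-phase system. This is a toy illustration of the scheme named in the abstract, not the authors' foam solver; the Corey-type relative permeabilities and all parameters are assumptions:

```python
import numpy as np

def impes_step(sw, k, dx, dt, q, mu_w=1.0, mu_o=5.0):
    """One IMPES step for 1-D incompressible two-phase flow: solve
    pressure implicitly from the total mobility, then advance water
    saturation explicitly with upwind fluxes. Quadratic (Corey-type)
    relative permeabilities are an illustrative choice."""
    n = sw.size
    lam_w = k * sw**2 / mu_w               # water mobility
    lam_o = k * (1.0 - sw)**2 / mu_o       # oil mobility
    lam = lam_w + lam_o
    # interface transmissibilities (harmonic average of total mobility)
    T = 2.0 * lam[:-1] * lam[1:] / (lam[:-1] + lam[1:]) / dx**2
    A = np.zeros((n, n)); b = np.zeros(n)
    for i in range(n - 1):                 # assemble the implicit pressure system
        A[i, i] += T[i];         A[i, i + 1] -= T[i]
        A[i + 1, i + 1] += T[i]; A[i + 1, i] -= T[i]
    b[0] = q / dx                          # water injected into the first cell
    A[-1, -1] += 2.0 * lam[-1] / dx**2     # p = 0 at the outlet face
    p = np.linalg.solve(A, b)
    # total Darcy flux at every cell face (all fluxes point outlet-ward here)
    u = np.empty(n + 1)
    u[0] = q
    u[1:-1] = T * (p[:-1] - p[1:]) * dx
    u[-1] = 2.0 * lam[-1] * p[-1] / dx
    fw = lam_w / lam                       # fractional flow of water per cell
    Fw = np.empty(n + 1)
    Fw[0] = q                              # pure water enters at the inlet
    Fw[1:] = u[1:] * fw                    # explicit upwind: left cell's fw
    sw_new = sw + dt / dx * (Fw[:-1] - Fw[1:])
    return sw_new, p
```

The SPB model adds a bubble-density equation to this loop, but the implicit-pressure/explicit-saturation split is the same.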
Underground Storage of H2 and H2-CO2-CH4 Mixtures
Authors: M. Panfilov, G. Gravier and S. Fillacier
With world-wide oil production in irreversible decline, hydrogen is being examined as one of the main renewable energy carriers able to replace fossil fuels. Underground storage of H2 is considered one of the most promising ways to store large amounts of hydrogen. Industrial production methods usually yield H2 mixtures with a dominant presence of CO2 and CH4, so the objective of the present research was to analyse various hydrodynamic aspects of storing hydrogen mixtures in porous reservoirs. The storage of H2+CO2+CH4 mixtures, such as the manufactured (town) gas produced by industrial coal gasification, is of double industrial interest because it also captures CO2. When stored in an aquifer, this gas mixture is subjected to intense bacterial activity: the bacteria consume H2 and CO2 while generating new gases such as CH4. Such a transformation of the stored gas composition has been observed in European town gas storages. A new model of multicomponent gas injection into a reservoir, accompanied by chemical reactions and bacterial population growth, is developed; it is then analysed numerically and with mathematical methods from the theory of population dynamics. Characteristic parameters such as the Monod and Michaelis constants are obtained by fitting the modelling results to in-situ experimental data. Various scenarios of the system's evolution are revealed. In particular, an irregular oscillating regime was detected, in which individual gases can form their own spatial accumulations that migrate along the reservoir over time. These results are confirmed by in-situ observations. The modelling results yield optimal regimes of gas injection.
The third part deals with using CO2 as a cushion gas for H2 storage, which may be viewed as a new technique for storing large amounts of CO2 in order to mitigate climate change. Within the contact zone between the gases, the same chemical-bacterial interactions are observed, which leads to a natural reduction of CO2 and the possibility of permanent CO2 injection into the reservoir. Using the mathematical model developed, we have estimated the characteristic rate and amount of CO2 reduction over time, which determine the main parameters of CO2 injection in practice.
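The bacterial methanation dynamics described above follow double-Monod kinetics; a minimal explicit-Euler sketch (all rate constants and initial amounts are illustrative placeholders, not the fitted in-situ Monod/Michaelis values):

```python
def simulate_methanation(steps=200, dt=0.1, mu_max=0.5, k_h2=0.2,
                         k_co2=0.1, y=0.2, d=0.05):
    """Explicit-Euler integration of a double-Monod model: biomass n
    grows on H2 and CO2, and the reaction 4 H2 + CO2 -> CH4 converts
    the stored gas. Returns biomass and gas amounts after `steps`."""
    n, h2, co2, ch4 = 0.01, 2.0, 0.5, 0.0
    for _ in range(steps):
        # reaction extent per unit time, limited by both substrates
        rate = mu_max * n * h2 / (k_h2 + h2) * co2 / (k_co2 + co2)
        n += dt * (y * rate - d * n)   # bacterial growth minus decay
        h2 -= dt * 4.0 * rate          # 4 moles of H2 per mole of CH4
        co2 -= dt * rate
        ch4 += dt * rate
    return n, h2, co2, ch4
```

Note that carbon is conserved by construction (CO2 lost equals CH4 gained), which is a useful sanity check on any such scheme.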
Analytical Modeling of CO2 Sequestration and Enhanced Coalbed Methane Recovery (ECBM)
Authors: C. J. Seto, K. Jessen, T. La Force and F. M. Orr Jr.
Injection of CO2 into deep unminable coal seams is an option for geological storage of CO2. In many industrial settings, pure CO2 streams are expensive to obtain and a mixture of CO2 and N2 would be less expensive. New analytical solutions are presented for two-phase, four-component flow with volume change on mixing in adsorbing systems. Analytical solutions have been reported previously for single-phase three-component gas flow (Zhu et al.) and for multicomponent flow of incompressible fluids with adsorption (Johansen and Winther, Dahl et al., Shapiro et al.).
In this paper, we analyze the simultaneous flow of water and gas containing multiple adsorbing components by the method of characteristics. Mixtures of N2, CH4, CO2 and H2O are used to represent the ECBM-flue gas process. The displacement behavior is demonstrated to be strongly dependent on the relative adsorption strength of the gas components.
N2 and CO2 recover CH4 through different mechanisms: CO2 preferentially adsorbs onto the coal surface, resulting in a shock solution; while N2 displaces CH4 by reducing the partial pressure of CH4, resulting in a rarefaction solution. When mixtures of N2 and CO2 are injected, the displacement exhibits both shock and rarefaction features. For CO2-rich flue gas, a path that includes a switch between branches of non-tieline paths is observed, a feature not previously reported for gas-liquid displacements. In these solutions, an additional key tieline, at which the switch between non-tieline paths occurs, is required. In the shock along this tieline, the non-tieline eigenvalues of injection and initial segments and the tieline shock velocity are equal.
Analytical solutions to ECBM processes provide insight into the complex interplay among adsorption, phase behavior and convection. Improved understanding of the physics of these displacements will aid in developing more efficient and physically accurate techniques for predicting the fate of injected CO2 in the subsurface.
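The competing adsorption strengths that drive these displacement mechanisms are commonly described by an extended Langmuir isotherm; a small sketch with assumed coefficients (the ordering b_CO2 > b_CH4 > b_N2 encodes the relative affinities discussed above, and the capacities are illustrative, not measured coal data):

```python
import numpy as np

def extended_langmuir(p, qmax, b):
    """Extended Langmuir isotherm for competitive multicomponent
    adsorption: q_i = qmax_i * b_i * p_i / (1 + sum_j b_j * p_j).
    `p` are partial pressures, `qmax` monolayer capacities, `b`
    affinity coefficients, component order e.g. [CO2, CH4, N2]."""
    p, qmax, b = (np.asarray(a, dtype=float) for a in (p, qmax, b))
    return qmax * b * p / (1.0 + np.sum(b * p))
```

At equal partial pressures, CO2 adsorbs most and N2 least, which is the qualitative asymmetry behind the shock (CO2) versus rarefaction (N2) solutions.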
Measures of Efficiency for Assisted History Matching
Authors: G. J. Walker and S. Pettigrew
Recent advances in computer-assisted history matching have enabled asset teams to investigate multiple alternative reservoir descriptions (SPE 89974, Williams et al.). The systems are being asked to work in a high number of dimensions, and yet we know the problem is tractable because we are able to find models that satisfy the history-match criteria. What we need is a measure of the efficiency, or elegance, with which the alternative solutions are found, to allow optimization of the search and an objective discussion of how different strategies interact with the task. This paper covers two case studies, at different stages in their lifecycle, and shows how different choices of genetic-algorithm parameters modify this efficiency.
Rapid, Stochastic Updating of Reservoir Models to Dynamic Data – An Evaluation of P-Field Approach
Authors: S. T. Reinlie and S. Srinivasan
Continuous dynamic data such as well flowing bottom-hole pressure carry information that characterizes reservoir heterogeneity. A novel approach to analyze continuously monitored pressure data and to update reservoir models based on the incremental information is presented. First, the pressure-transient data are analyzed to identify the size and shape of permeability heterogeneity in the presence of fluctuations in rate and pressure. Unlike the complicated pressure or rate deconvolution algorithms presented in the literature, a simple semi-analytical approach is presented here that attempts to reconstruct the bottom-hole pressure response after removing the effect of rate fluctuations. Using the reconstructed pressure profile, estimates of the radius to the boundary of the heterogeneous region and the effective average permeability are obtained by applying a simple optimization procedure for fitting the pressure and pressure-derivative plots.
Once the configuration of the reservoir heterogeneity in the vicinity of wells has been identified, that information is used to condition high-resolution reservoir models. The conditional probability distribution that characterises the uncertainty in the permeability value at any location is perturbed using the dynamic pressure response as conditioning information. The gradual deformation of the conditional probability distribution is carried out within a p-field simulation framework. In the p-field approach, permeability values are sampled from the conditional distributions using a correlated field of random numbers (or uniform probability values). This approach retains the computational efficiency of the traditional gradual deformation algorithm, while at the same time being amenable to modelling non-Gaussian permeability fields that exhibit severe discontinuities, such as facies/indicator-type distributions. Moreover, since the updated conditional distribution is available in the p-field approach, uncertainty assessment is possible by sampling several realizations from the updated distribution. The application of the proposed method is demonstrated on a realistic 3-D example.
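The p-field idea, sampling local conditional distributions with one spatially correlated field of uniform probabilities, can be sketched on a 1-D two-facies indicator grid. This is a generic illustration, not the authors' implementation; the permeability values and the moving-average correlation model are assumptions:

```python
import math
import numpy as np

def p_field_realization(p_sand, corr_len, k_sand=500.0, k_shale=5.0, seed=0):
    """p-field sketch: one spatially correlated field of uniform
    probability values samples the local conditional distributions
    (here an indicator: sand vs. shale). Re-sampling a perturbed
    p_sand with the SAME correlated uniforms gives the
    gradual-deformation-style update described in the abstract."""
    rng = np.random.default_rng(seed)
    n = len(p_sand)
    # unit-variance correlated Gaussian field via moving-average smoothing
    w = np.ones(corr_len) / math.sqrt(corr_len)
    z = np.convolve(rng.standard_normal(n + corr_len - 1), w, mode="valid")
    # normal CDF maps the correlated Gaussians to correlated uniforms
    u = np.array([0.5 * (1.0 + math.erf(zi / math.sqrt(2.0))) for zi in z])
    sand = u < np.asarray(p_sand)          # draw the facies cell by cell
    return np.where(sand, k_sand, k_shale)
```

Because the uniforms are fixed by the seed, raising the local sand probabilities deterministically adds sand cells, which is what makes the perturbation "gradual".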
Measuring the Value of Time-Lapse (4D) Seismic as Part of History Matching in the Schiehallion UKCS Field
In seismic history matching we use production data from wells and time-lapse (4D) seismic to constrain simulation models so that they better represent reservoir properties and behaviour. Together, these data types reduce the non-uniqueness of the problem, and therefore reduce the uncertainty of both the reservoir description and also the estimation of future behaviour. The more constraints we have, however, the harder it is to find the best models and more simulations may be required to search the parameter space. This leads to increasing computing costs, which must be balanced against the reduction in model uncertainty.
We have developed a method of performing a cost-benefit analysis of including extra data and simulations in the history matching process. We use a Neighbourhood Algorithm to sample the parameter space and work in a Bayesian framework to determine model probabilities. After history matching, we then resample the posterior probability density to estimate parameter uncertainties. In addition, the parameter sampling has a density roughly in proportion to the probability distribution of the models. With this property of our method and with sufficient models, we then determine the most likely model outcome and its uncertainty. This enables calculation of expected saturation and pressure distributions at the time the data were measured and into the future. This is beneficial for reservoir management, particularly for identifying unswept areas.
We apply our method to a UKCS field and analyse how the uncertainty changes in response to adding the seismic data to the history match. We also analyse the change in uncertainty as a function of the number of simulations carried out. We identify an optimum number of models that are required before we enter the domain of diminishing returns. We confirm that seismic is important if we wish to describe the reservoir some distance from production wells. We also find that some parameters may be determined more quickly than others, depending on their location relative to the data being used.
Wavelet Based Regularization of the Well Test Deconvolution Problem
By O. Saevareid
Permanent downhole measurements provide "well test" data in abundance, but their behaviour often reflects the erratic environment of everyday well operations rather than the more sterile conditions typical of a traditional well test. Consequently, processing and interpretation demand an increased level of sophistication.
In reference [1] the authors make a strong case for total least squares (TLS) applied to deconvolution of well test data. Their approach includes regularization based on penalizing the total curvature of the response function.
The present effort combines a TLS approach with regularization based on a discrete wavelet transform. As indicated in reference [2], this allows a systematic approach in which the regularization has a concrete interpretation in terms of resolved details of the response function. Highly contaminated data allow only the most prominent details to be determined with relevant accuracy, while improved data quality allows correspondingly more detail to be revealed.
Preliminary results demonstrating feasibility, accuracy and robustness will be presented.
[1] von Schroeter, T., Hollaender, F. and Gringarten, A. C., "Deconvolution of Well Test Data as a Nonlinear Total Least Squares Problem", SPEJ 9, no. 4, 375-390, 2004.
[2] Nikolaou, M. and Vuthandam, P., "FIR Model Identification: Achieving Parsimony through Kernel Compression with Wavelets", AIChE J. 44, no. 1, 141-150, 1998.
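The role of the wavelet-based regularization can be illustrated with a one-level Haar transform and hard thresholding of the detail coefficients: only details above the threshold survive, mirroring how noisier data support fewer resolved details. A toy sketch of the regularization step only, not the TLS deconvolution itself:

```python
import numpy as np

def haar(x):
    """One-level orthonormal Haar transform (len(x) must be even)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # smooth (average) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (difference) coefficients
    return s, d

def ihaar(s, d):
    """Inverse of the one-level Haar transform."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

def wavelet_regularize(g, thresh):
    """Keep only Haar detail coefficients above `thresh`: prominent
    features of the response function survive, fine detail that the
    data cannot support is discarded."""
    s, d = haar(g)
    d = np.where(np.abs(d) > thresh, d, 0.0)
    return ihaar(s, d)
```

With a zero threshold the transform is lossless; with a large threshold the response collapses to its coarse (pairwise-averaged) shape.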
Ensemble Kalman Filter for Field Estimation – Investigations on the Effect of the Ensemble Size
Authors: G. Naevdal and K. Thulin
The ensemble Kalman filter has gained popularity for field estimation, in particular for updating reservoir simulation models. Here we present a thorough study of the effect of the ensemble size. To be able to run a large number of simulations, we have simplified the modeling by focusing on the heat equation, but we also include a discussion of what to expect for reservoir simulation models. We find that a modest ensemble size is enough to match the measurements, but the ensemble size has to be increased significantly to get a proper picture of the model uncertainty.
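A textbook sketch of the stochastic-EnKF analysis step under study (perturbed observations; not the authors' code) makes the ensemble-size question concrete, since every covariance below is a sample estimate over the ensemble:

```python
import numpy as np

def enkf_update(X, H, y, r, rng):
    """Stochastic-EnKF analysis step. X is the (n_state, n_ens)
    forecast ensemble, H the (n_obs, n_state) observation operator,
    y the observations, r the observation-error variance."""
    n_obs, n_ens = H.shape[0], X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    S = H @ Xp                                        # observed anomalies
    C = S @ S.T / (n_ens - 1) + r * np.eye(n_obs)     # innovation covariance
    K = (Xp @ S.T / (n_ens - 1)) @ np.linalg.inv(C)   # sample Kalman gain
    # perturbed observations keep the analysis spread statistically correct
    Y = y[:, None] + rng.normal(0.0, np.sqrt(r), (n_obs, n_ens))
    return X + K @ (Y - H @ X)
```

In the scalar linear-Gaussian limit with a large ensemble this reproduces the exact posterior; with small ensembles the sampled gain and spread degrade, which is exactly the uncertainty-estimation effect the abstract reports.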
Estimation of Distribution Algorithms for History Matching
Authors: I. Petrovska and J. N. Carter
In previous history matching studies it has been shown that there may be multiple local optima in the response surface, even when the inverse problem is well defined in a mathematical sense. Practical algorithms that allow the identification of these local optima, such as Genetic Algorithms, have been demonstrated to work. Whilst it is useful to know where in parameter space multiple solutions exist, it is not everything we would wish to know. At each local optimum we would like to know the range of uncertainty for each parameter and how the parameters are correlated. This will allow us to make more useful predictions, including better estimates of the uncertainty in those predictions.
In this paper we demonstrate the use of a simple Estimation of Distribution Algorithm, Probability Based Incremental Learning, on a simple reservoir cross sectional model with three parameters which is known to have multiple high quality local optima. The probability distribution function, for each parameter, can be approximated by a histogram which is adjusted using the results of the search. The sampling of the parameter space is guided by the current pdfs. We show that this algorithm can evolve steady-state pdfs which would allow us to sample the parameter space more efficiently when estimating uncertainty.
We have also introduced a modified version of the sum-of-squares objective function. This allows a better treatment of water breakthrough as part of the objective function. The result is that some of the optima are wider, which allows optimisers to find them more easily.
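Probability Based Incremental Learning itself is compact; a generic binary-encoded sketch of the algorithm (the fitness function is supplied by the caller — nothing reservoir-specific is assumed here):

```python
import numpy as np

def pbil(fitness, n_bits, pop=50, iters=100, lr=0.1, seed=0):
    """Probability Based Incremental Learning: maintain a probability
    vector over binary parameters, sample a population from it, and
    nudge the vector toward the best individual. The evolved vector
    is the steady-state pdf the abstract refers to."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                     # uninformative initial pdf
    for _ in range(iters):
        samples = rng.random((pop, n_bits)) < p  # sample the population
        best = samples[np.argmax([fitness(s) for s in samples])]
        p = (1.0 - lr) * p + lr * best           # move the pdf toward the best
    return p
```

On an easy separable objective (e.g. maximising the number of ones) the probability vector drifts toward the optimum, illustrating how the pdfs concentrate as the search proceeds.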
History Matching and Uncertainty Quantification Assisted by Global Optimization Techniques
Authors: A. Castellini, B. Yeten, U. Singh, A. Vahedi and R. Sawiris
Ranges in production forecasts provide critical information for reservoir management decisions. Well-developed methodologies exist for handling subsurface uncertainties in new field developments. The task is more challenging for fields that have been produced for several years, as all models need to be conditioned to the available production data in order to deliver reliable predictions. The computing cost associated with an exhaustive search for models that reproduce the historical data is in general prohibitive. This paper describes an efficient method that combines the strengths of various techniques, including optimization algorithms, experimental design and non-linear response surfaces. It is applied to an off-shore field in the Philippines for which multiple history-matched models are obtained in a reasonable timeframe.
Direct Inversion by De-trending Stochastic Permeability Fields
In this paper a new method for direct inversion of production data to a permeability field is proposed. With this direct inversion methodology, history matching is physics-based, honouring the measurements while retaining the full variability of the geology.
The method involves adjustment of the geological permeability realizations using the measurements. First, each realization is split into a trend and a residual. The trend is a smooth function satisfying the same boundary conditions as the original realization; the residual then contains the stochastic geological information and satisfies homogeneous boundary conditions. The split is made both for the potential and for the stream function. The trends from a realization are in serious conflict with the measurements, so we replace the incorrect trends in the realization with trends calculated from the measurements. The result is a model that obeys all measurements as well as the geology, and the objective function, in the sense of indirect inversion, is zero. Darcy's law is explicitly included, continuity of flow is guaranteed, and the boundary conditions are fulfilled. The price that has to be paid is anisotropy in the resulting permeability field. In general, however, this anisotropy is not serious in most of the model cells. We are still working on ways to suppress it.
A possible extension of our current method is to incorporate the water cut measurements by a multiplication factor applied directly to the stream function. Another extension is from 2D to 3D: the velocity can then replace the stream function and the permeability will be derived from the velocity vectors and the potential gradients.
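A toy version of the trend-replacement idea on a 1-D (log-)permeability profile (the paper splits the potential and stream function, not permeability directly; the polynomial trend model here is an assumption):

```python
import numpy as np

def replace_trend(realization, measured_trend, deg=1):
    """Split a 1-D realization into a smooth polynomial trend and a
    stochastic residual, then attach the residual to a trend derived
    from the measurements. The degree-`deg` polynomial stands in for
    whatever smooth trend model is appropriate."""
    x = np.arange(len(realization))
    trend = np.polyval(np.polyfit(x, realization, deg), x)
    residual = realization - trend          # geological variability
    return measured_trend + residual        # measurement-consistent model
```

The returned field reproduces the measured trend exactly while keeping the realization's small-scale variability, which is the essence of the zero-objective-function claim above.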
Comparing Facies Realizations – Defining Metrics on Realization Space
Authors: H. H. Soleng, A. R. Syversveen and O. Kolbjørnsen
A typical workflow for generating flow simulation grids goes through a facies modelling step. This typically involves setting up a stochastic model that is supposed to capture the important properties of the facies bodies in the reservoir volume in question, and their uncertainties. This may be difficult or impossible within a particular modelling framework: either we end up with a model too simple to reproduce the characteristics of the reservoir, or the model parameters become too many and too difficult to specify. Hence users ask for methods able to reproduce the properties of a training image automatically. In any case one would like objective measures of similarity of facies realizations, so that one can determine whether a set of realizations has the desired properties.
Here we discuss possible components of a metric on a space of facies realizations and present an implementation of a facies realization analyzer program. The algorithm scans a number of realizations and computes the global volume fractions of each facies and the number of facies bodies of each type. It then computes the surface areas, volumes, and extensions in each direction for the bodies, performs a simple statistical analysis of the realizations, and compares the results with the properties of a training image.
We present results of applying this software on facies realizations produced with variogram based methods, multipoint methods, and sequential Markov random fields.
The analyzer algorithm is fast, applicable in 2D and 3D, and the results are in excellent agreement with the subjective impression of similarity or dissimilarity obtained through visual inspection.
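Two of the analyzer's metrics, body counts and body volumes, can be sketched with a simple flood fill over a 2-D facies grid (an illustration of the metric, not the released program):

```python
import numpy as np

def body_stats(facies, code):
    """Return the cell volumes of the connected bodies of facies
    `code` on a 2-D grid, using 4-neighbour connectivity. The number
    of bodies is len() of the result; volume fraction is
    (facies == code).mean()."""
    mask = (facies == code)
    seen = np.zeros_like(mask, dtype=bool)
    volumes = []
    nrow, ncol = mask.shape
    for i in range(nrow):
        for j in range(ncol):
            if mask[i, j] and not seen[i, j]:
                stack, size = [(i, j)], 0
                seen[i, j] = True
                while stack:                      # flood-fill one body
                    r, c = stack.pop()
                    size += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < nrow and 0 <= cc < ncol \
                           and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                volumes.append(size)
    return volumes
```

Comparing these per-facies distributions between realizations and a training image gives one concrete component of the proposed metric.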
Facies Modelling in Fault Zones
Authors: A. R. Syversveen, A. Skorstad, H. H. Soleng, P. Røe and J. Tveranger
Traditionally, fault impact on fluid flow is included by assigning transmissibility multipliers to flow simulation grid cell faces co-located with the fault plane (Manzocchi et al. 1999). A new method, called Fault Facies modelling (Tveranger et al. 2004, 2005), captures fault impact by considering faults as deformed rock volumes rather than simple planes. Architectures and petrophysical properties of these deformed volumes (i.e. fault zones) are linked to a range of factors such as lithology, host rock petrophysical properties, tectonic regime, orientation, magnitude, and distribution of stress, as well as the burial depth at the time of faulting. By understanding these links and identifying bounding values for distributions and parameters, fault zone architectures and properties, as well as uncertainties attached to these, can be forecasted.
The fault facies approach allows 3D features such as anisotropic permeability fields, capillarity effects and tortuosity of flow paths inside the fault zone to be explicitly represented in the reservoir models. Furthermore, on the simulation grid scale, flow between cells on opposite sides of faults, as well as any uncertainty attached to this, can be estimated a priori rather than set deterministically a posteriori using history matching.
The paper compares fluid flow behaviour of conventional transmissibility multiplier-type fault property models and fault facies type models through a series of simple tests. The study demonstrates that the fault facies concept is a technically feasible methodology that represents an alternative or supplement to standard industrial fault modelling methods.
MPG Simulation of Facies Thicknesses Interpreted through Sequence Stratigraphy – Application on a Carbonate Outcrop
Authors L. Dovera, J. Caers and J. BorgomanoOutcrop models can provide important information about reservoir architecture and heterogeneity. However, geological information from outcrop, because of its great variety and complexity, cannot be integrated to its fullest extent using traditional geostatistics. The conventional variogram-based modeling techniques typically fail to capture complex geological structures.
Multiple-point geostatistics comprises a set of innovative modeling techniques that perform simulation starting from a training image, a 3D conceptual visual representation of how heterogeneities are believed to be distributed in the actual reservoir. The training image forms a gateway for geological expertise and interpretation to be used quantitatively in reservoir modelling. Training images can be constructed using object-based simulation or using outcrop data. The latter is the topic of investigation in this paper.
This paper presents an application of simpat, a multiple-point stochastic simulation method for generating reservoir facies models by capturing the complex facies series of a carbonate outcrop and anchoring them to subsurface well data. The outcrop consists of a complex depositional sequence of mound and lobe bodies with complex spatial relationships. The idea is to generate reservoir models that reflect the observed geological complexity, yet at the same time are constrained to any available reservoir data. In order to reproduce this complex architecture in such models and to make explicit use of the sequence stratigraphic depositional information, the depositional bodies were not simulated directly but with a new approach relying on the simulation of facies thicknesses interpreted through sequence stratigraphy.
The main idea of this new approach is to separate thickness and facies information into two different yet coupled 3D training images, and then to rely on the capabilities of simpat to jointly simulate these two properties. This approach can easily integrate complex information regarding body geometry and sequence stratigraphy, and it has the potential to be applied to several different geological depositional systems.
Direct Spatiotemporal Interpolation of Reservoir Flow Responses
Authors: S. Srinivasan and S. Srinivasan
The traditional reservoir modeling workflow consists of first developing a reservoir model, performing flow simulation on that model, validating the model by performing history matching, and finally using the history-matched model to make predictions of future performance. In contrast, this paper presents two approaches that directly analyze the spatio-temporal variations of dynamic responses such as pressure and well flow rates and perform interpolation. Both techniques are anchored to the data at the wells; the resultant spatio-temporal predictions of dynamic response are therefore history matched by construction. Interpolation or extrapolation of dynamic response to locations away from wells is possible with both approaches, so they can be used to quickly determine optimal locations for additional wells and to gauge the influence of reservoir management decisions.
In the first approach, dynamic responses such as pressure transients are treated as time series data. They are analyzed using wavelets that facilitate multiscale decomposition of pressure signals. Using a wavelet-lifting scheme, the transient signal is decomposed into averages and residuals. The corresponding filter coefficients defining the wavelets are treated as spatial random variables and estimated using geostatistics at locations away from wells. Pressure response is reconstructed at unsampled locations by employing the inverse wavelet transform.
In the alternate approach, direct spatiotemporal extrapolation of pressure is performed. The transient pressure data at the wells are first analyzed using correlation measures such as semi-variograms. For simplicity, time is taken as another spatial dimension and variogram values corresponding to the resultant lag-vectors are inferred and subsequently modeled. Spatiotemporal extrapolation is then performed to obtain the response at any location in space and at any instant in time. The robustness of both these approaches is verified on a number of case examples.
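The second approach, treating scaled time as an extra coordinate, can be sketched as an empirical spatiotemporal semi-variogram (the space-time scaling factor is an assumed anisotropy ratio, one of the modelling choices such an approach requires):

```python
import numpy as np

def st_variogram(coords, times, values, h_bins, t_scale):
    """Empirical semi-variogram over space-time lags, with scaled time
    appended as an extra spatial coordinate. `t_scale` converts time
    units into distance units. Returns the mean semi-variance per
    distance bin (NaN for empty bins)."""
    pts = np.column_stack([coords, t_scale * np.asarray(times)])
    n = len(values)
    gamma = np.zeros(len(h_bins) - 1)
    cnt = np.zeros(len(h_bins) - 1)
    for i in range(n):
        for j in range(i + 1, n):
            h = np.linalg.norm(pts[i] - pts[j])      # space-time lag distance
            b = np.searchsorted(h_bins, h) - 1
            if 0 <= b < len(gamma):
                gamma[b] += 0.5 * (values[i] - values[j]) ** 2
                cnt[b] += 1
    return np.divide(gamma, cnt, out=np.full_like(gamma, np.nan),
                     where=cnt > 0)
```

Once modelled, this variogram drives ordinary kriging in the combined space-time coordinates, giving the direct extrapolation the abstract describes.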
A Probabilistic Approach to Integration of Well Log, Geological Information, 3D/4D Seismic and Production Data
The ultimate goal of reservoir modeling is to obtain a model of the reservoir that is able to predict future flow performance. Achieving this challenging goal requires the model to honor all available static (well log, geological information and 3D seismic) and dynamic (4D seismic and production) data. This paper introduces a general methodology and workflow for reservoir modeling that integrates data from multiple and diverse sources, using a probabilistic approach that addresses possible inconsistency and/or redundancy between the various data sources. The goal of the workflow is to model an unknown A (facies/petrophysical property) using data from different sources D1, D2, …, Dn. The workflow requires modeling the information content of each data source as a spatial distribution model termed P(A|Di). Next, all P(A|Di) are combined into a joint conditional P(A|D1, D2, …, Dn) from which reservoir models are drawn using sequential simulation. The procedure followed to model the information content of each data source as a spatial probability distribution depends on the data source. Geological information about A is made quantitative through a training image, a 3D geological analog representation of the subsurface heterogeneity. Multiple-point geostatistics translates training image information into local conditional probabilities of the unknown variable A. 3D seismic data are translated to a spatial probability distribution using a calibration between well logs and the 3D seismic data itself. Production and 4D seismic data are translated using an iterative procedure termed the probability perturbation method. Using Journel's tau model, a model for data redundancy, the spatial probability distributions are combined into a joint probability model used for sequential simulation. This paper shows that the reservoir model obtained using this approach honors both static and dynamic data simultaneously while explicitly accounting for data redundancy.
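Journel's tau model mentioned above has a compact closed form in terms of probability ratios; a minimal sketch (the tau weights are user choices expressing data redundancy):

```python
import numpy as np

def tau_model(p_prior, p_data, taus):
    """Journel's tau model for combining conditional probabilities
    P(A|D_i) into P(A|D_1,...,D_n), using distance ratios
    x = (1 - P) / P. With all tau_i = 1 it reduces to conditional
    independence of the data sources given A."""
    x0 = (1.0 - p_prior) / p_prior                      # prior distance
    x = x0 * np.prod([((1.0 - p) / p / x0) ** t
                      for p, t in zip(p_data, taus)], axis=0)
    return 1.0 / (1.0 + x)                              # back to probability
```

A single data source with tau = 1 is returned unchanged, and two equally informative sources reinforce each other exactly as conditional independence predicts.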
Conditioning Sedimentary Models to Well-Log Data – An Application of Ensemble Kalman Filter
Authors: A. Barrera, R. A. Rmaileh, S. Srinivasan and C. Huh
Geological models consistent with the physics of sediment transport and deposition are increasingly popular for the evaluation and development of water, oil and gas reservoirs. Several statistical methods have been used for geologic modeling, but they are based on very scarce information and do not consider the underlying physical principles that control the process of sediment deposition. Flow-based sedimentary models that apply depositional and physical principles to the distribution of sediments in space can be used to generate geologically realistic models. However, these models have an important limitation in that it is impossible to assimilate reservoir-specific information into them. To overcome this limitation, an approach utilizing the Ensemble Kalman Filter (EnKF) technique for sequentially conditioning the sedimentary process model to chrono-stratigraphic information recorded at wells is presented in this paper. The EnKF technique allows the conditioning of sedimentary models to petrophysical and poro-elastic acoustic measurements from seismic and well logs. Starting from initial probability distributions representing the uncertainty in state variables, an iterative updating process is implemented within the EnKF framework. The ensemble of state variables is adjusted so that the updated model parameters, when used in the sediment deposition model, yield a match to the conditioning data. This method was initially tested with a 1-dimensional sedimentary deposit model for quasi-steady-state turbidity currents. The geologic model was then extended to a 2-dimensional space and the probabilistic method of Lattice Boltzmann simulation was implemented using the three-speed D2Q9 lattice.
Besides presenting a novel application of EnKF for arriving at data-conditioned, sedimentary process models, the paper also sheds light on the capabilities and limitations of the EnKF approach for data conditioning.
Data Assimilation in Reservoir Management Using the Representer Method and the Ensemble Kalman Filter
Authors: J. R. Rommelse, O. Kleptsova, J. D. Jansen and A. W. Heemink
The aim of data assimilation is to improve numerical models by adding measurement information. In petroleum engineering, the model might be the combination of a reservoir simulator, a rock-physics model and a wave propagation package. Measurements can originate from geology, seismics, petrophysics, down-hole sensors and surface facilities. In theory, one tries to maximize the likelihood of the parameters given the measurements, with the numerical model used as a weak constraint. In practice the problem is often reduced to a least-squares problem by assuming Gaussian error statistics, resulting in a variety of related data assimilation algorithms. In this paper the Ensemble Kalman Filter (EnKF) and the Representer Method (RM) are compared. For linear systems they solve the same least-squares problem; for non-linear systems, like multiphase flow in porous media, they have their own peculiarities and uses. A variational method like the RM might get stuck in a local minimum of the squared data-misfit objective function, an objective which does not even have a physical or probabilistic interpretation for non-linear models or non-Gaussian probability distributions. The measurement update of a filter overestimates the jump of the forecasted reservoir states towards the observed reservoir states. These errors accumulate and cannot be corrected in an iterative process, unlike in a variational method. The RM is computationally more demanding than the EnKF, especially when the number of measurements increases. Unlike the EnKF, the RM not only calculates estimates of state variables and model parameters, but also quantifies the isolated effect of every measurement in space and time on the final estimate. It is therefore a promising method for quantifying the "value of information" of a specific measurement.
Characterizing Data Measurement Errors with the EM Algorithm
Authors: A. C. Reynolds, Y. Zhao and G. Li
The characterization of errors in measured data is important if one wishes to condition reservoir models to diverse data sets, because measurement/processing errors determine the proper relative weights of the data. In the literature, the measurement error for each data type is often estimated by some smoothing technique over the whole data domain, which tends to over-smooth the data, particularly around points where the underlying true data change sharply, and results in overestimation of the measurement error. Here, we apply a modified EM (Expectation-Maximization) algorithm to separate measured data into groups based on the value of the measurement and the spatial location. By applying a moving polynomial fit within each group, we generate estimates of the mean and covariance of the measurement errors. The algorithm is applied to both synthetic and field time-lapse seismic data as well as production data sets. For the synthetic cases, the covariance functions estimated with the EM algorithm are superior to those obtained by smoothing with a moving window. The EM-based procedure also appears to give more reasonable results for the field data.
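The core EM iteration for separating data into Gaussian groups can be sketched as a plain 1-D Gaussian-mixture fit (a textbook version of the algorithm, not the authors' modified variant):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=60):
    """Plain EM for a 1-D Gaussian mixture: the E-step computes the
    responsibility of each component for each point; the M-step
    re-estimates weights, means and variances from those
    responsibilities. Returns (weights, means, variances)."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread-out initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: r[i, j] = P(component j | x_i)
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
                 / np.sqrt(2.0 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: refit each component to its responsibility-weighted data
        nk = r.sum(axis=0)
        w = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

In the paper's setting the grouping runs over measurement value and spatial location jointly, with a moving polynomial fit per group; the sketch above only shows the grouping mechanism itself.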