ECMOR XVI - 16th European Conference on the Mathematics of Oil Recovery
- Conference date: September 3-6, 2018
- Location: Barcelona, Spain
- Published: 03 September 2018
61 - 80 of 172 results
Effects Of Control And Revitalization Variables To Improve The Performance Of A Polymer Flooding Strategy
Authors: V.E.B. Botechia and D.J. Schiozer

Summary: Decisions related to production strategy selection are complex tasks involving large investments and high risk. Even when in-depth probabilistic procedures are applied to define the number and location of wells, the strategy is likely to be sub-optimal once field information is collected and the geologic model becomes better known. The objective of this work is to improve the performance of sub-optimal strategies by analyzing the effects of control and revitalization variables. The simulation models used to optimize the strategies showed variable levels different from those predicted, so modifications to the strategy are necessary.
Control variables relate to field management and can be altered daily, without advance planning and without requiring further investment (e.g., well rates). Revitalization variables represent possible future alternatives that are not usually accounted for in the initial production strategy and involve additional investment (such as infill drilling). The proposed methodology changes both control and revitalization variables throughout the lifetime of the field, using numerical simulation and economic analysis, to improve performance as measured by Net Present Value (NPV). We apply the procedure to two simulation models representative of an offshore heavy oil field using polymer flooding as the recovery mechanism. These are low-flexibility cases (the platform already has the maximum number of wells), so it is necessary to shut down some wells before opening others (well replacement). These new wells generate extra expenditures that were not accounted for in the original project.
The results showed that economic performance was greatly increased both by actions that do not generate extra expenditures (adjustment of well rates and of specifics of the recovery mechanism) and by actions that require extra investment (for instance, allocating wells to substitute those with low performance). In the studied case, economic performance increased by up to 39%, even with the extra costs caused by the substitution of wells. This large increase in NPV had two main causes: the higher amount of oil produced due to well replacement (up to 17%) and the reduction in the amount and cost of polymer injection (up to 89%). We also showed that higher oil recovery does not necessarily mean better economic performance, since large investments may be required to produce more oil, and this increased production must pay for the extra expenditures.
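As an illustration of the NPV measure used in this abstract, the sketch below discounts yearly net cash flows to present value. The cash-flow figures, the year-2 well-replacement cost and the 10% discount rate are invented for the example, not taken from the paper.

```python
def npv(cash_flows, discount_rate):
    """Discount a list of yearly net cash flows (year 0 first) to present value."""
    return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical cash flows in million USD: the revised strategy spends 80 in
# year 2 on a replacement well and lifts later production revenue.
base    = [-500.0, 150.0, 150.0, 150.0, 150.0, 150.0]
revised = [-500.0, 150.0,  70.0, 210.0, 210.0, 210.0]
gain = npv(revised, 0.10) - npv(base, 0.10)   # positive despite the extra capex
```

The point of the toy numbers is the one made in the summary: extra investment is justified only when the incremental discounted production revenue pays for it.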
Deep Learning-Driven Pore-Scale Simulation For Permeability Estimation
Authors: M. Araya-Polo, F.O. Alpak, S. Hunter, R. Hofmann and N. Saxena

Summary: Current micro-CT image resolution is limited to ∼1-2 microns. A recent study identified that at least 10 image voxels are needed to resolve pore throats, which limits the applicability of direct simulations using Digital Rock (DR) technology to medium-to-coarse-grained rocks (i.e., rocks with permeability > 100 mD). On the other hand, 2D high-resolution colored images such as those obtained from Scanning Electron Microscopy (SEM) deliver a much higher resolution (∼0.5 microns). However, reliable and efficient workflows to jointly utilize full-size SEM images, measured 3D core-plug permeabilities, and 2D direct pore-scale flow simulations on SEM images within a predictive framework for permeability estimation are lacking. To close this gap, we introduce a Deep Learning (DL) algorithm for the direct prediction of permeability from SEM images. The trained DL model predicts properties accurately within seconds and therefore provides a significant speed-up of simulation workflows. Preliminary results will be presented and discussed.
Automatic Lithofacies Classification From Well-Logs Data Using The Walsh Transform Combined With The Self-Organizing Map
Authors: L. Aliouane and S.A. Ouadfeul

Summary: The main goal of this paper is to implement an automatic lithofacies classification algorithm based on the Walsh transform and Kohonen's Self-Organizing Map (SOM) neural network. The first stage is to apply the Walsh transform to a set of well-log data; the output is a set of segmentations, each based on the type of well log. The second stage is to use the outputs of the Walsh transform applied to the different logs as input to the SOM; the output of the SOM is the set of lithological classes. Application to well-log data recorded in vertical wells located in the Algerian Sahara clearly shows that the proposed combination is more powerful than the Self-Organizing Map with the raw well-log data as input, since the combination is able to attenuate the high-frequency components in the well-log data that can affect the output of neural network machines.
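The first-stage transform can be sketched with a generic fast Walsh-Hadamard transform in pure Python. The paper's exact segmentation recipe is not given in the abstract; the reading illustrated here, that a step change in a log concentrates into a few low-order Walsh coefficients, is our assumption.

```python
def walsh_hadamard(signal):
    """Fast Walsh-Hadamard transform (Hadamard ordering).

    Returns c_k = sum_t x_t * (-1)^popcount(k & t); len(signal) must be a
    power of two."""
    data = list(signal)
    n = len(data)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):  # butterfly on pairs h apart
                x, y = data[j], data[j + h]
                data[j], data[j + h] = x + y, x - y
        h *= 2
    return data

# A step change in a (toy) well log concentrates into two coefficients:
log = [2, 2, 2, 2, 8, 8, 8, 8]
coeffs = walsh_hadamard(log)   # only coeffs[0] and coeffs[4] are nonzero
```

Thresholding the small coefficients and inverting (the inverse is the same transform divided by n) yields the piecewise-constant segmentation of the log that then feeds the SOM.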
Hybrid Technologies For Computation Of Enhanced Oil Recovery Problem Using Mobile Devices
Authors: D.Zh. Akhmed-Zaki, B.S. Daribayev, D.V. Lebedev and T.S. Imankulov

Summary: Heterogeneous computational systems consisting of supercomputers, FPGAs and mobile devices are currently in a state of active evolution. Problems related to enhanced oil recovery are among the most computationally intensive. This paper considers the stages of developing a hybrid parallel algorithm for solving the three-dimensional problem of oil displacement by polymer injection into an oil reservoir, and the stages of creating a system of distributed computations on heterogeneous computational resources using a mobile device. The system is based on using a mobile device for the input of computational parameters and for obtaining data from sensors located directly at the production field; the data are preprocessed using an FPGA and transferred through long-range, energy-efficient wireless communication channels to the mobile device. After determining the computational characteristics, the mobile device allows computations to be performed on remote heterogeneous computational resources, which considerably reduces computation time.
The system is able to connect to computational clusters and Grids as well as to enterprise cloud services consisting of GPU- and FPGA-based computers.
The implemented parallel algorithms allow computations to be conducted on CPUs. Where coprocessors (GPU, KNL) are available, the system automatically determines their computational capabilities and distributes computational tasks among them.
If remote high-performance resources are not available, computations can be conducted on a local mobile device. There are high-performance mobile devices (Xiaomi MiPad, NVIDIA Shield) on which parallel algorithms can be implemented using CUDA technology. Computation results are displayed directly on the mobile device.
The proposed technology for computing enhanced oil recovery models allows more accurate computations to be conducted, and performed directly near the production field, which provides a quicker response to changes in field conditions.
Coherent Linear Noises Attenuation From 3D Seismic Data Using Artificial Neural Network: Application To Algerian Sahara
Authors: S.A. Ouadfeul and L. Aliouane

Summary: Here, we use a Multilayer Perceptron (MLP) neural network for attenuation of the ground roll in 3D raw seismic data recorded in the Algerian Sahara. First, the ground roll of the in-lines of the first swath is attenuated using the F-K filter. Then, a Multilayer Perceptron neural network model with the Hidden Weight Optimization algorithm is trained in supervised mode, using the raw seismic data of these in-lines as input and the filtered data as output, and the connection weights are optimized. Data from the other swaths are then propagated through the neural network; the output of the MLP is the seismic data filtered of coherent linear noise.
Comparison between the calculated output and the data filtered with the F-K filter for the other swaths shows that the neural machine can be used to automate seismic data processing and linear noise filtering with the F-K method.
Automatic Analysis Of Petrographic Thin Section Images Of Sandstone
Authors: A.Y. Bukharev, S.A. Budennyy, A.A. Pachezhertsev, B.V. Belozerov and E.A. Zhuk

Summary: Petrographic data remains one of the fundamental information sources for the characterization of hydrocarbon reservoirs. Thin section analysis of sandstone aims to describe depositional textures, major grain types and granulometric distribution, sedimentary structures, grain sorting, mineral composition, structural features, pore types, porosity, etc. These features are used to interpret sedimentary environments and to predict the distribution of sedimentary bodies and their geometry, which determines the recovery factor and other key reservoir exploitation characteristics. To conduct these studies, one must first segment the thin section images, partitioning them into grains, fractures, cleavages, pores and cement. The segmentation process is time-consuming, as it is carried out manually or with specialized software that requires a proper recipe to be prepared for each image. The segmentation accuracy directly shapes the quality of further petrographic analysis.
The goal of this work is to develop a fully automatic algorithm for the segmentation of thin section images of sandstone and further analysis of the partitioned objects. The developed algorithm combines both image processing (IP) and deep learning (DL) approaches. IP methods exploit color intensity and local textural information to segment key structural elements in the thin section image: voids (pores and fractures). The combination of DL and IP methods exploits primary information from the images to solve semantic and instance segmentation problems for grains and to classify grains, cement and pores. Implementing DL approaches demands a comprehensive training sample, large enough to achieve reasonable segmentation accuracy; accordingly, a dataset of labeled images has been prepared manually.
The developed algorithm has been applied efficiently to thin section analysis of sandstone. It showed not only high agreement with manually processed thin sections and a tremendous reduction in working time, but also more consistent segmentation results. The algorithm plays the role of an auxiliary tool that significantly simplifies the petrographic analysis of sandstone: most routine processes are automated, and each thin section specimen can be processed statistically in a straightforward manner.
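A minimal sketch of the void-segmentation idea in the IP stage, assuming a plain intensity threshold and 4-connected flood fill on a grayscale array; the paper's actual pipeline also uses color and textural information, which this toy deliberately omits.

```python
from collections import deque

def segment_pores(image, threshold):
    """Label connected low-intensity regions (voids) in a 2D grayscale image.

    image: 2D list of intensities; returns (label map, number of regions)."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] < threshold and labels[r][c] == 0:
                count += 1                      # new connected component
                labels[r][c] = count
                queue = deque([(r, c)])
                while queue:                    # 4-connected flood fill
                    i, j = queue.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and image[ni][nj] < threshold
                                and labels[ni][nj] == 0):
                            labels[ni][nj] = count
                            queue.append((ni, nj))
    return labels, count

# Toy image: two dark (low-intensity) pores in a bright grain matrix.
img = [[9, 9, 9, 9],
       [9, 1, 9, 1],
       [9, 1, 9, 1],
       [9, 9, 9, 9]]
label_map, n_pores = segment_pores(img, threshold=5)
```

Counting labeled pixels per region then gives pore areas, from which a 2D porosity estimate and pore-size statistics follow directly.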
Robustness Of Extra Net Thickness Identification Within Vertical And Spatial Scale Using Statistical Learning Methods
Authors: A. Reshitko, M. Golytsma, A. Gruzdev, A. Semenikhin, D. Egorov, N. Bukhanov, O. Osmonalieva and B. Belozerov

Summary: Extra net thickness may have a huge impact on a project's NPV, especially in the case of brownfields with a vast production well stock and maintained surface infrastructure. Reservoir beds containing sand may be misinterpreted by a petrophysicist within a well and miscorrelated spatially. We propose statistical learning methods to identify missed reservoir beds, and therefore extra net thickness, through the predictions of a supervised model. Robustness analysis of such identification is the main purpose of our paper.
The methodology is tested on 3 brownfields in Western Siberia, along with computational experiments on a digital outcrop model representing complex fluvial facies sedimentology. The three brownfields represent different geological environments and have significant production history. The digital outcrop model is used primarily as a benchmark for different statistical learning algorithms.
The main idea behind extra net thickness identification within the vertical scale is to train the model on manual interpretation (reservoir/non-reservoir, binary classification) and perform predictions on validation wells. False-positive errors give potential reservoir intervals that were not identified in the manual interpretation. Such candidates are evaluated by an expert and validated on production data through perforation.
A recurrent neural network (RNN) is chosen as the baseline algorithm for the methodology. The choice was made according to benchmark testing of different approaches (including Bayesian networks, support vector machines and others) and sensitivity analysis of the training error for different sizes of the training set (number of wells). Although the RNN gives high prediction accuracy, the approach still needs improvement in terms of interpretability and generalization for brownfields covering regions with high variation of geological properties. Feature engineering includes augmentation and the creation of synthetic curves when a significant well log is absent. Missing or noisy well logs were reconstructed based on logs not only from a particular well but also from its neighbor wells. Using data from neighbor wells as additional features dramatically improved synthetic log quality. The robustness of the spatial forecast examined in this paper depended on the number of neighbor wells taken as features and the search window size within a particular well. Forecast accuracy was evaluated not only by statistical metrics but also by geological metrics such as compartmentalization and net-to-gross ratio. According to the experiments presented in this paper, the optimal vertical window is around 1 meter thick, collected from 5 neighbor wells.
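The vertical-window-plus-neighbor-wells feature construction can be sketched as below. The well layout, the single `gr` log, the distance metric and the padding rule are all illustrative assumptions; the paper reports an optimal window of about 1 m from 5 neighbor wells, while the toy uses sample indices.

```python
import math

def nearest_neighbors(wells, target, k):
    """Ids of the k wells closest (in map coordinates) to the target well."""
    tx, ty = wells[target]["xy"]
    others = [w for w in wells if w != target]
    others.sort(key=lambda w: math.hypot(wells[w]["xy"][0] - tx,
                                         wells[w]["xy"][1] - ty))
    return others[:k]

def window_features(log, idx, half):
    """Log samples in a vertical window around sample idx, edge-padded."""
    lo, hi = max(0, idx - half), min(len(log), idx + half + 1)
    win = log[lo:hi]
    return win + [win[-1]] * (2 * half + 1 - len(win))

def build_feature_vector(wells, target, idx, half=2, k=2):
    """Window from the target well plus windows from its k nearest neighbors."""
    feats = window_features(wells[target]["gr"], idx, half)
    for n in nearest_neighbors(wells, target, k):
        feats += window_features(wells[n]["gr"], idx, half)
    return feats

# Hypothetical layout: wells "A" and "B" are close, "C" is far away.
wells = {"A": {"xy": (0, 0), "gr": [10, 20, 30, 40, 50]},
         "B": {"xy": (1, 0), "gr": [1, 2, 3, 4, 5]},
         "C": {"xy": (5, 5), "gr": [9, 9, 9, 9, 9]}}
vec = build_feature_vector(wells, "A", idx=2)
```

Each depth sample thus maps to one fixed-length feature vector, which is what a supervised classifier (the RNN baseline in the paper) consumes for the reservoir/non-reservoir label.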
Automated Clustering Based Scenario Reduction To Accelerate Robust Life-Cycle Optimization
Authors: E.G.D. Barros, S. Maciel, R.J. Moraes and R.M. Fonseca

Summary: Incorporating uncertainty in reservoir life-cycle optimization has been shown to achieve results of significant practical value. We introduce an automated scenario-reduction technique that uses clustering to accelerate robust life-cycle optimization. The technique determines, based on a statistical metric, a representative subset of model realizations that correlates with the cumulative distribution function (CDF) of a quantity of interest of the full ensemble. More specifically, the proposed approach addresses the automatic determination of the appropriate number of clusters. A database of clustering results is generated by repeating the inexpensive clustering procedure with different numbers of clusters. This allows the construction of a "distance" curve, which is then used to determine the appropriate number of clusters. We have applied the workflow to waterflooding optimization in two synthetic cases where geological uncertainty is characterized through an ensemble of equiprobable model realizations. The optimization based on the subset of representative model realizations obtained from the newly introduced workflow led to almost the same objective function values as optimization over the full ensemble, using approximately 70% fewer simulations. Our results indicate that the proposed automated workflow provides a computationally efficient scheme for optimization under uncertainty.
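The repeated-clustering "distance" curve can be sketched with a 1D k-means over a scalar quantity of interest per realization. The quantile seeding, the squared-distance metric and the "residual below 5% of the k=1 value" stopping rule are our stand-ins for the paper's statistical metric and curve-reading procedure.

```python
def kmeans_1d(points, k, iters=100):
    """Lloyd's algorithm in 1D with deterministic quantile seeding."""
    pts = sorted(points)
    centers = [pts[int((i + 0.5) * len(pts) / k)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pts:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:
            break
        centers = new_centers
    return centers

def distance_curve(points, k_max):
    """Within-cluster squared distance for k = 1 .. k_max (the "distance" curve)."""
    curve = []
    for k in range(1, k_max + 1):
        centers = kmeans_1d(points, k)
        curve.append(sum(min(abs(p - c) for c in centers) ** 2 for p in points))
    return curve

def pick_n_clusters(points, k_max=5, frac=0.05):
    """Smallest k whose residual falls below a fraction of the k=1 distance."""
    curve = distance_curve(points, k_max)
    for k, d in enumerate(curve, start=1):
        if d <= frac * curve[0]:
            return k
    return k_max

# Hypothetical quantity of interest (e.g., cumulative oil) for 9 realizations
# falling into three natural scenario groups:
qoi = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1, 8.9, 9.0, 9.1]
n_clusters = pick_n_clusters(qoi)
```

One realization nearest each final center would then serve as the representative subset passed to the robust optimization loop.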
Development Of Proxy Models For Reservoir Simulation By Sparsity Promoting Methods And Machine Learning Techniques
Authors: A. Bao, E. Gildin and H. Zalavadia

Summary: Learning from data has been a rich topic of research in many engineering disciplines. In particular, in reservoir engineering, data-driven methodologies have been applied successfully to infer interwell connections and flow patterns in the subsurface and to assist field development plans, including the history matching and performance prediction phases, of conventional and unconventional reservoirs. Although real-time data acquisition and analysis are becoming routine in many workflows, there is still a disconnect with traditional first-principles theory, whereby conservation laws and phenomenological behavior are used to derive the underlying spatio-temporal evolution equations.
In this work, we propose to combine sparsity-promoting methods and machine learning techniques to find the governing equations from spatio-temporal data series produced by a reservoir simulator. The idea is to connect data with the physical interpretation of the dynamical system. We achieve this by identifying the nonlinear ODE system equations of our discretized reservoir system. The solution is assumed sparse because only a few terms are relevant for each governing equation. The sparse structure is invoked by two methods: sparse regression with hard thresholding (SINDy) and sparse regression with soft thresholding (LASSO). For each method to work properly without overfitting, specific procedures have been developed for seeking a balance between accuracy and complexity of the model with either an l1 or l2 norm penalty. In addition, the sparsity structure can be further constrained by the physical fact that the flow term of a cell is only related to its adjacent cells.
We apply the method to a two-dimensional single-phase flow system. First, time series data is generated from the simulator with recording points spread equally in space. Then a large library is built containing possible linear and nonlinear terms of the governing ODE, and finally the combination of terms is identified through a coefficient vector for each equation. Differences between the techniques and detailed modifications to the threshold tolerance and penalty factor will be discussed and compared. Extensions to the two-phase flow case are also underway, and promising initial results will also be shown in this paper. The validation process is carried out by comparing the original single/two-phase simulator results with the results of solving the identified ODE system by Newton iteration.
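The hard-threshold branch (sequentially thresholded least squares, the optimizer behind SINDy) can be sketched in pure Python. The candidate library and data below are a toy polynomial example, not the reservoir library from the paper.

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[col][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lstsq(Theta, y):
    """Least squares via the normal equations Theta^T Theta x = Theta^T y."""
    cols = list(zip(*Theta))
    ATA = [[sum(a * b for a, b in zip(c1, c2)) for c2 in cols] for c1 in cols]
    ATy = [sum(a * b for a, b in zip(c, y)) for c in cols]
    return solve(ATA, ATy)

def stlsq(Theta, y, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: zero out small coefficients,
    refit on the surviving (active) library terms, and repeat."""
    xi = lstsq(Theta, y)
    for _ in range(iters):
        active = [j for j, c in enumerate(xi) if abs(c) >= threshold]
        if not active:
            break
        sub = [[row[j] for j in active] for row in Theta]
        coef = lstsq(sub, y)
        xi = [0.0] * len(xi)
        for j, c in zip(active, coef):
            xi[j] = c
    return xi

# Toy library [1, x, x^2] with data generated by dy/dt = 3x:
xs = [0.0, 1.0, 2.0, 3.0]
Theta = [[1.0, x, x * x] for x in xs]
y = [3.0 * x for x in xs]
xi = stlsq(Theta, y, threshold=0.5)   # recovers the single term 3*x
```

The LASSO branch differs only in replacing the hard threshold with an l1-penalized fit; the library-building and validation steps stay the same.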
Intellectual System For Analyzing Thermal Compositional Modeling With Chemical Reactions
Authors: T.S. Imankulov, D.Zh. Akhmed-Zaki, B.S. Daribayev, D.V. Lebedev and K.A. Aidarov

Summary: The aim of any oil-producing company is effective and safe daily oil production. To maximize the extraction of oil from reservoirs, it is necessary to continuously improve the work of the oil industry, carry out various measures to optimize the work of producing and injection wells, maintain optimal reservoir pressure, and use modern, approved and tested methods of increasing oil recovery. It is also necessary to improve oil production technology by automating the management of the production process on the basis of the "i-fields" concept. For large and complex oil and gas fields, the implementation of such technologies requires qualitative research. Operative decision-making and optimal exploitation of fields imply the need for modeling and monitoring of these fields in real time with the involvement of modern software and hardware.
Managing a field requires real-time collection and processing of information, yet not all fields are provided with advanced infrastructure for wireline data collection. For these, it is suggested to collect and preprocess data in the field automatically with the help of sensors of an embedded (FPGA-based) system.
This work is devoted to the development of an intelligent, distributed, high-performance information system for analyzing different oil production scenarios to determine optimal development parameters of oil fields. The proposed system uses a thermal compositional model that takes chemical reactions into account and supports high-performance computing based on CUDA technology for mobile platforms and MPI for supercomputers in real time. The system allows rapid reading of data sequences from wells using sensors and controllers (FPGA) and, if necessary, preprocessing of the data for use in further calculations.
The principles of the Closed-Loop Reservoir Management (CLRM) methodology will be used as a basis, combining life-cycle optimization with matching of the development history of a field. Its implementation requires large computational resources, with the use of procedures for assessing the Value of Information (VOI). Different data clustering methods (K-means, multidimensional scaling and tensor decomposition) are compared to select a limited number of representative members from the ensemble of field models, with the choice of the optimal set of controls for multiple modeling scenarios.
Multiscale Reconstruction Of Compositional Transport
Authors: C. Ganapathy, Y. Chen and D. Voskov

Summary: A compositional formulation is a reliable option for understanding complex subsurface processes and the associated physical changes. However, this type of model has a high computational cost, since the number of equations that needs to be solved in each grid block increases proportionally with the number of components employed, making it computationally demanding. In an effort to enhance the solution strategy for the hyperbolic problem, we propose here a multiscale reconstruction of the compositional transport problem. Until recently, multiscale techniques have seldom been applied to transport equations. Here, the approach consists of two stages, in which two different sets of restriction and prolongation operators are defined based on the dynamics of compositional transport. In the first stage, an operator restricting the arbitrary number of components to a single transport equation is applied with the objective of reconstructing the leading and trailing shock positions in space. The prediction of front propagation is the most critical aspect of the approach, as it involves a lot of uncertainty. Once the shock positions are identified, the full solution in the regions outside the shocks can be conservatively reconstructed based on the prolongation interpolation operator. Subsequently, the solution for the multicomponent problem (full system) in the two-phase region is reconstructed by solving just two transport equations with the aid of a restriction operator defined on an invariant thermodynamic path (based on the Compositional Space Parameterization technique). We demonstrate the applicability of the approach on idealized 1D test cases involving various gas drives with different numbers of components. Further, the first-stage reconstruction was tested successfully on more realistic problems through an implementation in the recently developed Operator-Based Linearization (OBL) platform.
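The first-stage idea, solving one restricted scalar transport equation just to locate the displacement front, can be illustrated with a first-order upwind scheme. The linear flux, unit velocity, grid and time step are illustrative choices, not the paper's compositional flux.

```python
def upwind_transport(n_cells, n_steps, dx=1.0, dt=0.5, u_inlet=1.0,
                     flux=lambda u: u):
    """Explicit first-order upwind for u_t + f(u)_x = 0 with a fixed inlet state.

    A minimal scalar stand-in for the restricted single transport equation
    used in the first stage to track shock positions."""
    u = [0.0] * n_cells
    for _ in range(n_steps):
        f_in = [flux(u_inlet)] + [flux(v) for v in u[:-1]]  # left-face fluxes
        f_out = [flux(v) for v in u]                        # right-face fluxes
        u = [v + dt / dx * (fi - fo) for v, fi, fo in zip(u, f_in, f_out)]
    return u

def front_position(u, tol=1e-3):
    """Index of the last cell the displacing front has visibly reached."""
    return max((i for i, v in enumerate(u) if v > tol), default=-1)

profile = upwind_transport(n_cells=60, n_steps=40)
lead = front_position(profile)   # leading-front cell index
```

The located front index is the information the prolongation step needs: away from it, the full multicomponent solution can be interpolated rather than solved.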
Tie-Simplex Parametrization For Operator-Based Linearization For Non-Isothermal Multiphase Compositional Flow In Porous Media
Authors: M. Khait, D. Voskov and G. Konidala

Summary: As oil production continues worldwide, more oil fields require complex EOR methods to achieve outlined recovery factors. Reservoir engineers increasingly deal with problems involving thermal multiphase multi-component flow models tightly coupled with complex phase behavior. Such modeling implies the solution of governing laws describing mass and energy transfer in the subsurface, which in turn requires the linearization of strongly nonlinear systems of equations. The recently proposed Operator-Based Linearization (OBL) framework suggests an unconventional strategy using a discrete representation of the physics. The terms of the governing PDEs, discretized in time and space, that depend only on state variables are approximated by piece-wise multilinear operators. Since the current physical state fully defines the operators for a given problem, each operator can be parametrized over the multidimensional space of nonlinear unknowns for a given distribution of supporting points. The values of the operators, along with their derivatives with respect to the nonlinear unknowns, are then obtained from the parametrization using multilinear interpolation and are used for Jacobian assembly in the course of a simulation. Previously, the distribution of supporting points was always uniform, requiring a higher parametrization resolution to provide accurate and consistent interpolation of an operator around its most nonlinear regions in parameter space. In addition, when the resolution is low, the system can lose hyperbolicity, causing convergence issues. In this work, we apply prior knowledge of the underlying physics to distribute the supporting points according to the tie-simplex behavior of the multiphase mixture in parameter space. The approach allows the parametrization resolution to be decreased while keeping the same accuracy. In addition, the OBL framework is extended to describe multisegment wells working under different controls.
We test the accuracy of the developed framework on truly multi-component systems of practical interest.
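The multilinear interpolation at the heart of OBL can be sketched in 2D. Note the non-uniform supporting points along the first axis, mimicking the tie-simplex-informed refinement; the tabulated operator here is a toy linear function, not a physical operator from the paper.

```python
from bisect import bisect_right

def bilinear(xs, ys, table, x, y):
    """Piece-wise multilinear (here bilinear) interpolation of a tabulated
    operator over a 2D parameter space with arbitrarily spaced supporting
    points xs, ys; table[i][j] holds the operator value at (xs[i], ys[j])."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])   # local coordinates in the cell
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    f00, f10 = table[i][j], table[i + 1][j]
    f01, f11 = table[i][j + 1], table[i + 1][j + 1]
    return (f00 * (1 - tx) * (1 - ty) + f10 * tx * (1 - ty)
            + f01 * (1 - tx) * ty + f11 * tx * ty)

# Supporting points refined near x = 0, where the operator is assumed most
# nonlinear; the tabulated operator f(x, y) = x + 2y is purely illustrative.
xs = [0.0, 0.1, 1.0]
ys = [0.0, 1.0]
table = [[x + 2.0 * y for y in ys] for x in xs]
value = bilinear(xs, ys, table, 0.05, 0.5)
```

In a simulation, the same interpolation also supplies the operator derivatives (the slopes per cell), which is what makes Jacobian assembly cheap.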
Rapid Computation Of Permeability From Micro-CT Images On GPGPUs
Authors: F.O. Alpak and M. Araya-Polo

Summary: Digital rock physics (DRP) is a rapidly evolving technology targeting fast turnaround times for repeatable core analysis and multi-physics simulation of rock properties. We develop a rapid and scalable distributed-parallel single-phase pore-scale flow simulator for permeability estimation on real 3D pore-scale micro-CT images using a novel variant of the lattice Boltzmann method (LBM). The LBM code implementation is designed to take maximum advantage of distributed computing on multiple general-purpose graphics processing units (GPGPUs). We describe and extensively test the distributed parallel implementation of an innovative LBM algorithm for simulating flow in pore-scale media based on the multiple-relaxation-time (MRT) model. The novel contributions of this work are (1) integration of mathematical and high-performance computing components together with a highly optimized implementation and (2) quantitative results with the resulting simulator in terms of robustness, accuracy, and computational efficiency for a variety of flow geometries including various types of real rock images. We report on extensive tests with the simulator in terms of accuracy and provide near-ideal distributed parallel scalability results on large pore-scale image volumes that were largely computationally inaccessible prior to our implementation. Permeability estimation results are provided on large 3D binary microstructures including real rocks from various sandstone and carbonate formations. We quantify the scalability behavior of the distributed parallel implementation of MRT-LBM as a function of model type/size and the number of utilized new-generation NVIDIA V100 GPGPUs for a panoply of permeability estimation problems.
Towards Pore-Network Modelling Of Imbibition: Dynamic Barriers And Contact Angle Dependent Invasion Patterns
Authors: S. Pavuluri, J. Maes, J. Yang, M. Regaieg, A. Moncorgé and F. Doster

Summary: Imbibition is a ubiquitous process encountered in many porous media applications. At the pore scale, Pore Network Models (PNM) are computationally efficient and can model drainage accurately. However, using PNM to model imbibition remains a challenge due to the complexities encountered in understanding pore-scale flow phenomena related to Pore Body Filling (PBF) and snap-off events, along with the relative competition between them. In this work we use Direct Numerical Simulations (DNS) to revisit the basic principles of PBF in a two-dimensional synthetic pore geometry. We observe that PBF during spontaneous imbibition depends on several interrelated parameters, such as the shape of the pore and the fluid properties (contact angle, density of the fluids). The interaction between these interdependent parameters is investigated quantitatively. We demonstrate the existence of a critical contact angle that determines the occurrence of a capillary barrier zone in which the capillary forces act against imbibition. The larger the contact angle of the wetting phase relative to the critical contact angle, the wider the capillary barrier zone. It is important to acknowledge the occurrence of the capillary barriers, as they can potentially prevent filling of the pore space and play a vital role in selecting the invasion path. For the synthetic pore geometries considered, we provide analytical and semi-analytical expressions to determine the critical contact angle and the position of the capillary barrier zone, respectively. During spontaneous imbibition, only inertial forces can dynamically help the interface overcome the capillary barrier zone, where interfacial reconfigurations are observed. The inertial contact angle is the contact angle of the wetting phase that can overcome the capillary barrier zone using inertial forces.
The inertial contact angle is computed numerically for several inertial systems and for various shapes of the synthetic pore geometry. The results of this quantitative analysis can be used to improve the existing pore-filling rules and to enhance the predictive capabilities of PNM for two-phase flow dynamics.
Phase Behavior Simulation With Dynamic Multicomponent Adsorption
Authors: O.A. Lobanova, I.M. Indrupskiy and M.V. Scherbyak

Summary: In ultra-low-permeability reservoirs, like shale rocks, adsorption has a strong effect on hydrocarbons in place and production dynamics. Experimental studies show that adsorption can have a significant influence on mixture composition and the phase envelope.
However, modeling the effect of adsorption on dynamic changes in mixture composition was not given much attention previously. The most comprehensive studies considered adsorption impact on effective pore radii through formation of adsorption films, which resulted in minor changes of capillary pressure. Thus, the influence of adsorption on phase behavior was found almost negligible.
In the present study we introduce a method to account for dynamic adsorption/desorption of components while modeling phase behavior of a hydrocarbon mixture. The iterative method uses a multicomponent adsorption model to compute adsorbed amounts of components and correct mixture composition within phase equilibrium calculations. The method deals with actual parameters of adsorbent, thermodynamic conditions and properties of the reservoir.
Examples of phase behavior calculations with multicomponent adsorption/desorption for hydrocarbon reservoirs with different properties are presented. It is shown that neglecting the dynamic desorption impact on mixture composition for (ultra) low-permeability reservoirs may lead to dramatic errors in prediction of phase fractions and compositions, which is important for PVT- and compositional flow simulations.
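The iterative composition correction can be sketched as a fixed-point loop. The paper does not specify its adsorption model here, so the sketch assumes an extended Langmuir isotherm with invented constants for a two-component mixture.

```python
def extended_langmuir(p, y, n_max, b):
    """Adsorbed amount of each component (extended Langmuir isotherm).

    p: total pressure, y: free-phase mole fractions, n_max/b: illustrative
    per-component capacity and affinity constants (not from the paper)."""
    denom = 1.0 + sum(bi * p * yi for bi, yi in zip(b, y))
    return [nmi * bi * p * yi / denom for nmi, bi, yi in zip(n_max, b, y)]

def correct_composition(z, n_tot, p, n_max, b, tol=1e-10, iters=100):
    """Fixed-point loop for a free-phase composition consistent with adsorption.

    z: overall mole fractions, n_tot: total moles.  A flash calculation would
    use the returned free-phase composition y instead of z."""
    y = z[:]
    for _ in range(iters):
        ads = extended_langmuir(p, y, n_max, b)
        free = [zi * n_tot - ai for zi, ai in zip(z, ads)]   # moles not adsorbed
        total_free = sum(free)
        y_new = [max(f, 0.0) / total_free for f in free]
        if max(abs(a - c) for a, c in zip(y, y_new)) < tol:
            return y_new, ads
        y = y_new
    return y, ads

# Two-component example: component 0 adsorbs more strongly, so the free
# phase feeding the phase-equilibrium calculation is enriched in component 1.
y_free, adsorbed = correct_composition(z=[0.5, 0.5], n_tot=10.0, p=100.0,
                                       n_max=[1.0, 0.2], b=[0.05, 0.01])
```

The shift from `z` to `y_free` is exactly the compositional correction whose neglect, the summary argues, causes large errors in predicted phase fractions for (ultra) low-permeability reservoirs.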
Braided River Reservoir Architecture Modelling And Remaining Oil Analysis
By J. Wang

Summary: The braided river is mainly composed of mid-channel bars and channels. Falling-silt seams are developed in the mid-channel bar and can be identified by a higher GR response and a 5%-25% return in the RMN curve compared with the main bar sandbody. The architecture model of the braided river features gentle down-flow progradation and multi-stage vertical accretion, with falling-silt seams developed mostly in the tail of the mid-channel bar. The falling-silt seams are nearly horizontal and parallel with the main flow line, with a dip angle of 0-3 degrees, an average length of 300 m, an average width of 125 m and an average thickness of 1.25 m. According to the statistical analysis, the length ratio of falling-silt seam to mid-channel bar is in the range of 30%-90%, and the width ratio is mainly in the range of 60%-80%, indicating different degrees of erosion by subsequent sedimentation. The remaining oil is distributed mainly around the developed falling-silt seams and is also influenced by the incision between the mid-channel bar and the associated channel. Accordingly, a small-spacing well pattern (close to the scale of the falling-silt seams) is proposed.
The study offers a systematic treatment of braided river reservoir characterization and remaining oil analysis for a reservoir type that accounts for a large portion of proved reserves worldwide, and an integrated method for understanding other similar oilfields in the future.
Two-Phase Displacement In Porous Media Studied By MRI Techniques
Authors J. Fannir, S. Leclerc, I. Panfilova and D. Stemmelen
Summary: To perform the numerical simulations, the phenomenological meniscus model [1] for two-phase flow was used. It takes into consideration the phase distribution in the porous medium, the deformation of the displacing front, and the formation of the residual phase. The closing relations for this model were obtained from the experiments. The simulations qualitatively confirmed the experimental results.
Wax Precipitation Modelling In Low-Temperature Reservoir Under Different Production Regimes
Authors L.A. Gaydukov and A.V. Novikov
Summary: Most oil fields in East Siberia occur in low-temperature (12-20 °C) terrigenous reservoirs at depths of about 1800 m, resting on crystalline basement. These oils have a high wax content (up to 5% by mass), and pressure and temperature conditions are close to the wax appearance point. Thus, small temperature and pressure changes caused by production lead to wax precipitation around the well. Damage to the pore space by wax particles forms a near-wellbore affected area that reduces well productivity.
In this paper, we develop a mathematical model of multiphase well inflow complicated by wax precipitation caused by pressure and temperature changes. The model is based on the results of special PVT analyses of wax precipitation (ultrasonic measurements, high-pressure microscopy, particle size analysis, filtration tests) and an analysis of the oil's thermal properties. It simulates non-isothermal multiphase multicomponent fluxes, considering gas, oil, and solid phases. Wax phase transitions are calculated from laboratory-measured wax concentrations at various pressure and temperature conditions. Clogging is simulated by a permeability change following a porosity kinetic equation. The energy balance includes the Joule-Thomson effect, the adiabatic effect, and heat production and absorption due to phase transitions. The model allows the distributions of wax saturation, porosity, and permeability in the near-well area to be evaluated.
Calculations were performed for a range of initial wax concentrations, wellbore pressures, and gas-oil ratios, and the sensitivity of the affected-area size and skin factor was estimated. It was demonstrated that, at high gas-oil ratios, low wellbore temperature can lead to a significant permeability decrease in this area. The influence of clogging on formation damage and well inflow parameters was also analyzed. A toolkit was developed for estimating the optimal production regime in low-temperature reservoirs for oil with high wax content.
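The clogging and skin-factor chain described in this abstract (porosity kinetics, permeability reduction, near-wellbore skin) can be illustrated with a minimal sketch. The specific rate law, the power-law exponent, and all numbers below are assumptions for illustration; only Hawkins' formula for the skin of a damaged annulus is a standard result, and none of this is the authors' actual model.

```python
# Hedged sketch of the clogging step: a simple porosity kinetic equation for
# wax deposition, a power-law permeability-porosity relation, and a skin
# estimate from Hawkins' formula. Rate law and numbers are illustrative.
import math

def deposit_porosity(phi0, c_wax, rate, dt, steps):
    """Explicit integration of dphi/dt = -rate * c_wax * phi (illustrative)."""
    phi = phi0
    for _ in range(steps):
        phi -= rate * c_wax * phi * dt
    return phi

def permeability(k0, phi, phi0, n=3.0):
    """Power-law permeability reduction with porosity (assumed exponent n)."""
    return k0 * (phi / phi0) ** n

def hawkins_skin(k0, kd, rd, rw):
    """Skin factor of a damaged annulus of radius rd (Hawkins' formula)."""
    return (k0 / kd - 1.0) * math.log(rd / rw)

phi0, k0 = 0.20, 100.0             # initial porosity; permeability in mD
phi = deposit_porosity(phi0, c_wax=0.05, rate=0.02, dt=1.0, steps=365)
kd = permeability(k0, phi, phi0)   # damaged-zone permeability after one year
s = hawkins_skin(k0, kd, rd=1.5, rw=0.1)  # positive skin = formation damage
```

A model like the one in the paper would replace the constant `c_wax` with the laboratory-measured wax concentration at the local pressure and temperature.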
Geological Model Review During Intense Infill Drilling
Authors E. Kharyba, L. Malencic and M. PilipenkoSummaryThe main task is to review and adjust geological model during the process of intense infill drilling in oil field Velebit and to identify zones with residual reserves, define current oil-water (OWC) and oil-gas (OGC) contact and to achieve higher accuracy of production forecast of new wells. Infill drilling started in 2013 as a result of geological and reservoir engineering study and it was carried out in 3 stages: first stage (from 2013 to 2015)-13 wells, second stage (in 2016)-28 wells and third stage (in 2017)-27 wells were drilled.
The object of interest is an oil field with a gas cap Velebit located in norther part of Serbia. The field was discovered in 1963 and in terms of production, it is one of the most important fields in Serbia. The reservoir is located in Lower Pontian and Middle Trias. Lithological deposits of Lower Pontian are represented by sands, sandstones (Pt1-2) and conglomerates and shaly sandstones (Pt1-1). Middle Trias (T1) is represented by dolomitic limestones and dolomites.
As the drilling went on new information kept coming and the geological model underwent changes. Drilling out of the South and West part of the field revealed the deeper current OWC then initially expected. Drilling out of the East part of the field revealed a tilted OWC. Results of analysis showed a link between the current OWC, structural factor and overall formation thickness.
Results of the undertaken research include adjustment of geological model, monitoring current OWC and OGC, identification of zones with residual reserves for future infill drilling, recommendation for new wells location in respect for tilted and current OWC, identify perspective zones near OGC and reducing risks associated with drilling plans.
Geological Modeling Of Reservoir Systems – An Adaptive Concept
Authors S. Ursegov, A. Zakharian and V. Serkova
Summary: The construction of realistic geological models of petroleum reservoir systems is an essential prerequisite to the design and implementation of their development scenarios. Unlike most conventional approaches to geological modeling, the adaptive concept takes into account the uncertainty and incompleteness of the initial data. First, an adaptive model contains no more layers than can be distinguished by detailed correlation. Typically, adaptive models consist of 8 to 16 layers with an average layer thickness of 5 to 10 m. "Mechanically cutting" layer thickness down to the traditional 0.4 m adds nothing to the information content of the model; instead, such tiny layers create a false representation. When a layer is selected, it is implied that it is exactly correlated over the entire model region, but this is often not the case. If a layer cannot be correlated using logs, it has no justification and merely asserts a feature that does not exist in reality.
Some 95% of the quality of any geological model depends on the correctness of the detailed correlation. According to Shannon's seventh theorem, no mathematical transformation can increase the amount of information; only the expenditure of time and energy can. This is the time and effort of the specialist who performs the detailed correlation and thereby creates new information that was not initially in the logs. This information becomes input data for the model and increases its quality. If no time and energy are spent on detailed correlation, the model is reduced to a pure formality.
A layer selected by detailed correlation is a separate geological subsystem with its own evolutionary history, reflected in the distributions of gross and net pay thicknesses and other petrophysical parameters. Therefore, in the adaptive concept, a multilayer geological model is constructed using the superposition principle: the geological models of the separate layers are first built entirely independently and then summed into a multilayer model. Historically, the sedimentary cover formed successively, layer by layer, each in its own paleogeographic environment; it is therefore fundamentally incorrect to apply 3D interpolation across layers. When the multilayer model is constructed layer by layer, the uniqueness of each layer is preserved instead of being averaged away.
In the adaptive concept, traditional interpolation methods are not used. The basis of the model's construction is the seismic data, namely the structural surfaces of the reflecting horizons, of which there should be at least three. These surfaces contain everything that is needed: the distances between them indicate sedimentation rates, the absolute depths the structure of the section, and the degree of curvature the tectonic stresses that affect the properties of permeable formations. The essence is that a vector of seismic attributes exists both at the wells and in the inter-well space, so a multi-parametric fuzzy-logic function can be created that derives any geological parameter from this vector. Such a function cannot be a single one for the entire model region; therefore, a so-called fuzzy grid is constructed: a grid with a step of 200-300 m whose nodes contain local functions connecting the geological parameter with the seismic data.
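As a rough illustration of the fuzzy-grid idea described above, the sketch below places a local function at each coarse grid node and blends the node functions with distance-based membership weights when predicting at an arbitrary point. The linear local models, Gaussian memberships, and the `FuzzyGrid` class are assumptions for illustration, not the authors' actual method.

```python
# Hedged sketch of a "fuzzy grid": each node on a ~250 m grid holds a local
# linear function mapping the seismic-attribute vector to a geological
# parameter; predictions blend nearby node functions with fuzzy weights.
import numpy as np

class FuzzyGrid:
    def __init__(self, nodes, step=250.0):
        self.nodes = np.asarray(nodes, float)  # node coordinates, shape (n, 2)
        self.step = step                        # grid step, used as weight scale
        self.coefs = None                       # local linear coefficients per node

    def fit(self, xy, seismic, target):
        """Fit one distance-weighted linear model per grid node from well data."""
        xy = np.asarray(xy, float)
        A = np.column_stack([np.ones(len(xy)), np.asarray(seismic, float)])
        y = np.asarray(target, float)
        self.coefs = []
        for node in self.nodes:
            # Gaussian membership of each well in this node's neighbourhood
            w = np.exp(-np.sum((xy - node) ** 2, axis=1) / (2.0 * self.step ** 2))
            sw = np.sqrt(w)                    # sqrt-weights for weighted lstsq
            c, *_ = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)
            self.coefs.append(c)
        self.coefs = np.array(self.coefs)

    def predict(self, point, seismic_vec):
        """Blend the local node functions with fuzzy membership weights."""
        a = np.concatenate([[1.0], np.asarray(seismic_vec, float)])
        d2 = np.sum((self.nodes - np.asarray(point, float)) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * self.step ** 2))
        return float(w @ (self.coefs @ a) / w.sum())

# Tiny illustrative use: synthetic wells whose target is exactly 2 + 3*attr.
grid = FuzzyGrid(nodes=[[0, 0], [250, 0], [0, 250], [250, 250]], step=250.0)
xy = [[30, 40], [200, 50], [60, 210], [220, 230], [125, 125]]
attr = [[0.2], [0.8], [0.5], [1.1], [0.65]]
target = [2 + 3 * a[0] for a in attr]
grid.fit(xy, attr, target)
pred = grid.predict([125.0, 125.0], [1.0])  # recovers the linear relation
```

In practice each node's function could be nonlinear and each component of the seismic vector (inter-horizon distances, absolute depths, curvature) would enter as a separate attribute.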