First EAGE Digitalization Conference and Exhibition
- Conference date: November 30 - December 3, 2020
- Location: Vienna, Austria
- Published: 30 November 2020
Modelling Hydraulically Fractured Tight Gas Reservoirs with an Artificial Intelligence (AI)-Based Simulator, Deep Net Simulator (DNS)
Authors: S. Ghassemzadeh, M. Gonzalez Perdomo, E. Abbasnejad and M. Haghighi
Summary: Hydraulic fracturing in tight gas reservoirs increases the connectivity of the well to more distant regions of the reservoir, boosting both production and the net present value of the asset. This type of reservoir typically exhibits considerable uncertainty in rock and fracture properties, which, coupled with significant heterogeneity, makes history matching, uncertainty quantification and optimisation time-consuming tasks. Engineers are therefore always looking for ways to reduce simulation time. Artificial intelligence enables machines to learn from data; with deep learning, the time-consuming fluid flow equations can be formulated explicitly while retaining the accuracy of the implicit approach. In this research, a fully standalone simulator is developed for a range of hydraulically fractured tight gas reservoirs in two-dimensional space. Given the low error metrics (RMSE < 65 psi, MAPE < 0.99% and R² ≈ 1) on the training, validation and test sets, the results confirm that the developed model, the Deep Net Simulator (DNS), is accurate and reliable compared with numerical models. Furthermore, DNS shows remarkable reliability when the results of 140 unseen complete reservoir models over a 4-year period are compared against a numerical simulator: the average MAPE across all 140 cases is 10.55%.
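For orientation only (not from the abstract): a minimal sketch of how such error metrics could be computed when benchmarking a surrogate simulator against a reference numerical simulator; the array names are illustrative assumptions.

```python
import numpy as np

def surrogate_metrics(p_true, p_pred):
    """Compare surrogate pressures (psi) against a reference numerical simulator."""
    p_true, p_pred = np.asarray(p_true, float), np.asarray(p_pred, float)
    rmse = np.sqrt(np.mean((p_pred - p_true) ** 2))             # root-mean-square error, psi
    mape = 100.0 * np.mean(np.abs((p_pred - p_true) / p_true))  # mean absolute percentage error, %
    ss_res = np.sum((p_true - p_pred) ** 2)
    ss_tot = np.sum((p_true - p_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                                  # coefficient of determination
    return rmse, mape, r2
```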
Optimizing the Dynamic Behavior of Wells and Facilities with Machine Learning and Agent Negotiation Techniques
Authors: M. Piantanida, A. Amendola, G. Esposito, P. Iorio, S. Carminati, D. Vanzan, F. Castiglione, D. Vergni, P. Stolfi and C.N. Coria
Summary: The paper proposes an approach for dealing with the day-by-day dynamic behaviour of Oil & Gas assets, providing support for optimised decisions on wells and facilities. The approach is based on:
• A set of software agents, trained with a machine learning approach to understand the health status of the components of the reservoir/well/plant system and capable of proposing optimization actions for the corresponding subsystem;
• An inter-agent negotiation approach, capable of evaluating the optimization actions of the single agents in the wider picture of the overall optimization of the producing asset.
The paper will describe how this approach has been implemented, as well as an example application.
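A minimal, hypothetical sketch (not from the paper) of how per-subsystem agents and a simple negotiation step could be wired together; all class, field and method names, including the agent's `suggest` interface, are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    agent: str         # subsystem proposing the action (e.g. "well_A", "separator_1")
    action: str        # optimisation action, e.g. "increase choke opening by 5%"
    local_gain: float  # benefit estimated by the agent's own ML model
    asset_cost: float  # penalty the action imposes on the rest of the asset

class SubsystemAgent:
    def __init__(self, name, model):
        self.name, self.model = name, model  # model: any trained ML estimator (assumed interface)

    def propose(self, sensor_data) -> Proposal:
        # The agent's ML model scores the subsystem's health and suggests an action.
        gain, cost, action = self.model.suggest(sensor_data)
        return Proposal(self.name, action, gain, cost)

def negotiate(proposals):
    """Keep only actions whose asset-wide net benefit is positive (greedy negotiation sketch)."""
    accepted = [p for p in proposals if p.local_gain > p.asset_cost]
    return sorted(accepted, key=lambda p: p.local_gain - p.asset_cost, reverse=True)
```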
Deep Learning for Seismic Data Reconstruction: Opportunities and Challenges
Authors: O. Ovcharenko and S. Hou
Summary: Natural and instrumental conditions during field seismic surveys lead to noise and irregularities in the acquired seismic data. In this work, we explore challenges and opportunities related to denoising and interpolation of seismic data by deep convolutional neural networks. In particular, we apply three network configurations to field data and match them with suitable applications. We show that the U-Net is beneficial for denoising applications, while generative adversarial networks (GANs) are superior in interpolation tasks. The enhanced interpolation capability of GANs, however, comes at the cost of increased uncertainty in the results, and we raise awareness of this observation. Finally, we consider the pitfalls of conventional metrics and outline the requirements for data-driven approaches to be suitable for production applications.
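Not the authors' network, but for orientation: a deliberately small U-Net-style denoiser in Keras of the kind commonly used for seismic patch denoising; layer widths and patch size are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(input_shape=(128, 128, 1)):
    """A small U-Net for 2-D seismic patch denoising."""
    inp = layers.Input(input_shape)
    # Encoder
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)
    # Bottleneck
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)
    # Decoder with skip connections
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2])
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])
    c4 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)
    out = layers.Conv2D(1, 1, padding="same")(c4)  # denoised patch
    return tf.keras.Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="mse")  # would be trained on (noisy, clean) patch pairs
```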
Novel Digital Rock Simulation Approach in Characterizing Gas Trapping by Modified Morphological Workflow
Authors: F. Zekiri, J. Steckhan, S. Linden, P. Arnold and H. Ott
Summary: The quantification of trapped non-wetting phase saturation and distribution in petroleum reservoirs is essential to understanding hydrocarbon recovery efficiency. Laboratory experiments on core samples are regarded as industry best practice for estimating hydrocarbon trapping. To implement entrapment characteristics in reservoir modeling, empirical correlations between initial saturation and the corresponding residual non-wetting phase saturation (trapping curves) are commonly used.
To overcome the long lead times for setting up reservoir models caused by time-consuming laboratory workflows, pore-scale simulations of fluid flow on digital representations of the pore space (so-called digital twins) imaged by micro-computed tomography have been considered a viable alternative for estimating hydrocarbon entrapment. In this study, we compare simulation results for water/gas capillary-dominated imbibition in a sandstone reservoir. Until now, digital rock simulations could not predict representative trapped-phase saturation levels with the classical morphological approach. This motivated us to adapt the simulation concepts by including sub-resolution wetting-phase layers in the pore structure. As a result, it was possible to simulate a representative spatial distribution of the trapped non-wetting phase in the pore space and to estimate realistic residual saturations. For verification, the simulated results were compared to the trapping model of Land (1968).
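For context (not stated in the abstract): the Land (1968) trapping model commonly used for such verification relates the initial and residual non-wetting phase saturations through a single trapping coefficient $C$:

$$ S_{\mathrm{nw,r}} = \frac{S_{\mathrm{nw,i}}}{1 + C\,S_{\mathrm{nw,i}}} $$

where $S_{\mathrm{nw,i}}$ is the initial non-wetting phase saturation, $S_{\mathrm{nw,r}}$ the residual (trapped) saturation, and $C$ is fitted to experimental or simulated trapping data.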
Using Blockchain and Smart Contracts for Marine Seismic Data Integrity and Contract Control
By: L. Flood
Summary: There are opportunities to use blockchain combined with smart contracts to enhance data integrity and contract control within the marine seismic industry for individual contracts. Furthermore, an implementation of blockchain and smart contracts at the industry level would redefine industry standards and create a payment platform for the industry and its associated subcontractors, making contracts easier to administer and control.
How to Leverage Advanced TensorFlow and Cloud Computing for Efficient Deep Learning on Large Seismic Datasets
Authors: C.E. Birnie and H. Jarraya
Summary: Several seismic applications benefit from using all available receivers and a long time window, allowing a greater representation of signal and noise. Neural networks are able to utilise spatio-temporal data and extract high-level patterns thanks to their non-linear function compositions. However, the training of such networks is memory intensive, often resulting in downsizing of the data and introducing constraints on the number of traces and/or the length of the recording. Through the example of developing a deep learning model for passive seismic event detection on a large array of ∼3500 sensors, we describe an end-to-end workflow from synthetic labelled data creation, through distributed model training, to model deployment. We demonstrate how to overcome the memory challenges of large input data by using TensorFlow's data generators for on-the-fly generation and loading of large seismic recordings during training. Furthermore, we illustrate how training time can be drastically reduced by distributing training across multiple GPU-equipped machines. Kubernetes and cloud resources are leveraged for ease of orchestration of compute resources and horizontal scaling. Finally, we highlight that, whilst training is computationally expensive, the trained model can be deployed on a standard, non-GPU machine for real-time detection of passive seismic events.
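A minimal sketch (not the authors' code) of how on-the-fly loading of large seismic recordings might look with `tf.data.Dataset.from_generator`; the file-reading helper `load_window` and the array dimensions are placeholder assumptions.

```python
import numpy as np
import tensorflow as tf

N_TRACES, N_SAMPLES = 3500, 2000  # illustrative array size and window length

def recording_generator(file_list):
    """Yield (waveform window, label) pairs one at a time instead of loading everything into RAM."""
    for path in file_list:
        data, label = load_window(path)  # placeholder: read one labelled window from disk
        yield data.astype("float32"), np.int32(label)

def make_dataset(file_list, batch_size=4):
    ds = tf.data.Dataset.from_generator(
        lambda: recording_generator(file_list),
        output_signature=(
            tf.TensorSpec(shape=(N_TRACES, N_SAMPLES), dtype=tf.float32),
            tf.TensorSpec(shape=(), dtype=tf.int32),
        ),
    )
    return ds.shuffle(32).batch(batch_size).prefetch(tf.data.AUTOTUNE)
```

For the distributed-training part, one standard option (again an assumption, not necessarily the authors' choice) is to build and compile the model inside a `tf.distribute.MultiWorkerMirroredStrategy` scope so that the same dataset pipeline feeds several GPU machines.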
Digital Analysis of the Whole Core Photos
Authors: V. Abashkin, I. Seleznev, A. Chertova, A. Samokhvalov, S. Istomin and D. Romanov
Summary: In this work, we present a technique for the automatic processing of digital images of whole slabbed core. The technique helps to identify areas of the photo with similar properties, which can correspond to different rock types, facies, etc. These distinguishing features can be used to predict petrophysical rock properties using available laboratory measurements. The obtained data can be used in integrated log interpretation, in the construction and further validation of the reservoir hydrodynamic model, and in the refinement of well geomechanical models. Texture characteristics of whole-core surfaces obtained from medium- and high-resolution images can considerably increase the accuracy of rock classification and give a more reliable distribution of rock properties at larger scales. Image color clustering is carried out using a Gaussian approximation of the image pixel density in the digital color space. The technique complements the available log and core data with specific property descriptors (porosity, permeability, natural gamma-ray emission, etc.), including rock-mass color characteristics, texture, bedding-plane angle and thickness, and the shape and size of clastic inclusions. The possibility of generating high-resolution curves of physical properties measured on core plugs was also demonstrated.
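A rough sketch (assumed, not the authors' implementation) of Gaussian-mixture clustering of core-photo pixels in color space with scikit-learn; the image path and number of classes are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from skimage import io  # assumed input: an RGB photo of slabbed core

def cluster_core_photo(image_path, n_classes=4):
    """Cluster pixels of a core photo into n_classes color populations (Gaussian mixture)."""
    img = io.imread(image_path)                     # (H, W, 3) RGB array
    pixels = img.reshape(-1, 3).astype(np.float64)  # flatten to a list of RGB vectors
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(pixels)
    return labels.reshape(img.shape[:2])            # per-pixel class map, same size as the photo
```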
Generating Custom Word Embeddings for Geoscientific Corpi
Authors: C.E. Birnie and M. Ravasi
Summary: In the field of natural language processing, word embeddings are a set of techniques that transform words from an input corpus into a low-dimensional space with the aim of capturing the relationships between words. It is well known that such relations are highly dependent on the context of the input corpus, which in science varies greatly from field to field. In this work we compare the performance of word embeddings pre-trained on generic text against custom-made word embeddings trained on an extensive corpus of geoscientific papers. Numerous examples highlight the difference in meaning and closeness of words between geoscientific and generic contexts. A prime example is the term ghost, which has a specific definition in geophysics, different to that of its common usage in the English language. Moreover, domain-specific analogies, such as 'Compressional is to P-wave what shear is to… S-wave', are investigated to understand the extent to which the different word embeddings capture the relationships between terms. Finally, we anticipate some use cases of word embeddings aimed at extracting key information from documents and providing better indexing.
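As an illustration only (not the authors' code), a custom embedding could be trained with gensim and queried for the analogy above; the tokenised corpus variable is a placeholder assumption.

```python
from gensim.models import Word2Vec

# geoscience_sentences: placeholder — a list of tokenised sentences from geoscientific papers,
# e.g. [["hydraulic", "fracturing", "in", "tight", "gas", ...], ...]
model = Word2Vec(sentences=geoscience_sentences, vector_size=100, window=5, min_count=5, workers=4)

# Analogy query: "compressional is to p-wave what shear is to ...?"
print(model.wv.most_similar(positive=["p-wave", "shear"], negative=["compressional"], topn=3))

# Nearest neighbours of a domain term such as "ghost" reveal its geophysical, not colloquial, meaning.
print(model.wv.most_similar("ghost", topn=5))
```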
Architecting a Digitalization Platform to Deliver Transformative Business Results
Authors: J. McConnell and P. Quinn
Summary: Designing and implementing a data platform to support both traditional applications and modern analytical techniques is difficult in any industry, and building an oil-company solution for subsurface information brings additional requirements and potential pitfalls. Defining your digitalization strategy may provide direction, but that alone is not enough. How do you ensure the strategy remains valid, and guarantee its adoption and overall success?
Feedback, agility and de-coupling are required in order to build in fundamental flexibility and longer-term openness to change and innovation. To balance this against rock-solid operational concerns, the platform must be underpinned by robust data and architectural principles and by governance at both a high and a low level. Without this cooperative and coordinated effort, investments in digitalization programs are unlikely to see their expected value fully realised.
Tools for Automated Rock Description
Authors: E. Baraboshkin, D. Orlov and D. Koroteev
Summary: Image-classification algorithms have advanced considerably in recent years. They work well when the classes are clearly separated from one another. Geological classes are not like that, because they can be observed at different scales and under different classification paradigms.
To tackle this problem, we compare different feature-extraction techniques and classification algorithms (both semi-supervised and supervised). We present methods that help to increase the accuracy of rock-type classification.
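For orientation (an assumed setup, not the authors'), one common pattern for such comparisons is to fix a feature-extraction step and score several classifiers by cross-validation; the data here are dummy stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X: placeholder array of flattened core-image patches; y: rock-type labels assigned by a geologist.
X, y = np.random.rand(200, 4096), np.random.randint(0, 4, 200)  # dummy stand-ins for real data

# Two feature-extraction + classifier combinations to compare
pipelines = {
    "pca+svm": make_pipeline(PCA(n_components=50), SVC()),
    "pca+rf":  make_pipeline(PCA(n_components=50), RandomForestClassifier(n_estimators=200)),
}
for name, pipe in pipelines.items():
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```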
A Practical Workflow Using Seismic Attributes to Enhance Sub Seismic Geological Structures and Natural Fractures Correlation
Authors: A. Bacetti and M.Z. Doghmane
Summary: Since the 1970s, seismic attributes have been widely used in almost every seismic interpretation and reservoir characterization workflow. This practice has become widespread thanks to the rapid development of computer technology (both hardware and software) and the emergence of 3D seismic surveys. In this paper, we describe a workflow that uses seismic attributes to visualize hidden structures that cannot be seen in the original 3D seismic data. We also studied whether a relationship exists between seismic attributes and natural fracture density. The results are interesting from the geological and reservoir-modelling points of view, as the workflow helped to reveal small hidden faults below seismic resolution, and some attributes correlated well with natural fracture density. This workflow is a useful exploration-cost optimization strategy for national oil and gas companies.
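As a small illustration (not taken from the paper), one classical attribute, the instantaneous amplitude (envelope), can be computed trace by trace from the analytic signal; the synthetic section below is a stand-in for real data.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_amplitude(seismic):
    """Envelope attribute for a 2-D section of shape (n_traces, n_samples)."""
    analytic = hilbert(seismic, axis=-1)  # analytic signal along the time axis
    return np.abs(analytic)               # instantaneous amplitude (reflection strength)

# Example with a synthetic section: 100 traces, 500 time samples
section = np.random.randn(100, 500)
envelope = instantaneous_amplitude(section)
```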
Domain Expertise, Deep Learning and Cloud: How to Build Powerful Workflows for Exploration
By: L. Vynnytska
Summary: Digital transformation in general emphasizes technology and migration to the cloud. However, once the most important technical questions are resolved, the focus should shift to the users, since their adoption and incorporation of new tools into everyday work will be the measure of success of the digital transformation. One of the most time-consuming and important tasks in exploration is the interpretation of seismic data, so E&P companies and software providers have put much effort into solving this problem. Deep learning has received a lot of attention due to its ability to efficiently recognize patterns in large and complex data. However, to create value for oil companies, deep learning solutions should become an integral part of workflows. Interactive training makes it possible to combine the domain expertise of geoscientists with the algorithms themselves to ensure adoption of the deep learning technology, high accuracy and confidence in the results. The cloud architecture should be flexible and extensible, and efficiency and flexibility must be supported by a distributed compute framework that acts on workflows instead of data.
Towards an AI-Based Advisor for Capturing Interpretive Trails and Supporting Geological Exploration Activities
Authors: R. Brandão, L. Azevedo, C. Paz, M. Moreno and R. Cerqueira
Summary: Many activities in the Oil & Gas (O&G) industry rely on expert interpretation over unstructured data and on the interpretation of elaborate geological concepts. Keeping track of the consumption and production of conceptual knowledge and data is crucial for structuring such investigative processes. Nevertheless, capturing and structuring activities of this nature is a complex requirement if an advisor system is to be designed and implemented to support decision making in this domain. We propose a novel representation to track and model experts' interactions with different systems, along with the multimodal data and conceptual knowledge they consume and produce during interpretive activities. The proposed representational approach aims at supporting the design and implementation of intelligent advisor systems for knowledge-intensive processes, such as those observed in the multidisciplinary domain of O&G.
Vendor-Independent Workflow Architecture to Integrate Domain Applications and Accelerate R&D to Production
Authors: P.V. Nunes, V. Furuholt, N. Burns and P. Aursand
Summary: Real data liberation can only be achieved by substantial industrial cooperation to establish robust API standards for data transfer into and out of the data layer. As the E&P industry moves to enable this transformation, more players are entering the software landscape, providing modern and innovative solutions. A vendor-independent workflow architecture that allows users to connect services from different providers and internal products as part of their routine workflows has been established to ensure flexibility and to drive automation.
The solution is built predominantly in Python, with a series of microservices containerized with Docker and running in Kubernetes clusters on Google Cloud Platform (GCP), all of which is managed as infrastructure as code with Terraform. It has several key components: the user interface, the UI backend and logic, a command queue, a data abstraction service and system monitoring. The ambition is to open-source some of these components and to develop the same functionality on other cloud providers, so that the proposed workflow framework could become an industry standard, attract additional services and expand this ecosystem.
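A minimal, hypothetical sketch (not the authors' code) of one such Python microservice: it accepts a workflow step over HTTP and pushes it onto a command queue; the endpoint name, payload fields and in-memory queue are all assumptions standing in for a managed cloud queue.

```python
from queue import Queue
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
command_queue: Queue = Queue()  # stand-in for a managed queue service in the cloud

class WorkflowStep(BaseModel):
    service: str     # e.g. a provider service name or an internal product (hypothetical)
    parameters: dict  # provider-specific arguments for that step

@app.post("/workflow/steps")
def submit_step(step: WorkflowStep):
    """Enqueue a step; a worker service would pop it and call the chosen provider's API."""
    command_queue.put(step.dict())
    return {"queued": True, "position": command_queue.qsize()}
```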
Comparison of Two Different Methods for Estimating Oil Recovery from In-Situ Combustion in Heavy Oil Fields
By: A. Verma
Summary: This is a comparison study on heavy oil reservoir fields to estimate production from the thermal recovery technique called in-situ combustion, in which air is injected along with water and combusted in the reservoir to increase the mobility of the heavy oil and thus enhance production. The study gives an insight into the contrast between theoretical estimates and the actual hydrocarbon production at field scale, and helps in better and more efficient optimization of production from the field. An analysis was carried out by comparing two methods, the Gates and Ramey method and the Nelson and McNeil method, and the correlation was shown in terms of total oil produced and air/oil ratio based on the field data.
Digitalization for Data Liberation
Authors: Z. Manan, A. Hazet, T. Bramono and D. Galih
Summary: The Government of Indonesia regulates the disclosure of upstream oil and gas data through Regulation No. 29 of 2017. Contractors have to apply to the government for permits in order to disclose their data to investors. Although the Government intended to boost investment by issuing the regulation, the process of obtaining a permit can take some time.
The paper is organized as follows. After the introduction, the second section gives a brief overview of the process that has been established for the disclosure of upstream oil and gas data in Indonesia for investment purposes. The third part of the paper proposes a new policy for the Government of Indonesia to improve the efficiency of access to upstream oil and gas data; in other countries, such as the United States of America, Canada and Mexico, oil and gas data can generally be disclosed through a transfer agreement. In the last section, we describe the advantages and disadvantages of the proposed policy compared with Indonesia's existing policy governing the disclosure of upstream oil and gas data.
Machine Learning on Field Data for Hydraulic Fracturing Design Optimization: Digital Database and Production Forecast Model
Authors: A. Morozov, D. Popkov, V. Duplyakov, R. Mutalova, A. Osiptsov, A. Vainshtein, E. Burnaev, E. Shel and G. Paderin
Summary: The increasing number of hydraulic fracturing (HF) jobs over the recent two decades has brought in a significant amount of measured data available for the development of predictive models via machine learning (ML). In multistage fractured completions, post-fracturing production reveals that different stages produce very non-uniformly, and up to 30% may not be producing at all due to a combination of geomechanical and fracturing-design factors. There is therefore significant room for fracture design optimization. We propose a data-driven model for fracturing design optimization, where the workflow is essentially split into two stages: prediction of 12-month cumulative oil production, and maximization of this target by optimizing the HF design parameters. In this work, the first stage is considered, and the ML model's prediction score for the target on the test set is 81.5%.
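A schematic sketch (assumptions, not the authors' model) of such a two-stage workflow: first fit a regressor for 12-month cumulative production, then search over the controllable design parameters for one well with fixed reservoir features; the data and feature layout below are dummies.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Stage 1: fit a production-forecast model.
# X rows: [reservoir features..., HF design parameters...]; y: 12-month cumulative oil.
X = np.random.rand(500, 6)  # dummy stand-in for the field database
y = X @ np.array([2.0, 1.0, 0.5, 3.0, 1.5, 0.2]) + 0.1 * np.random.randn(500)
model = GradientBoostingRegressor().fit(X, y)

# Stage 2: for one well (fixed reservoir features), sweep two design parameters
# (say proppant mass and fluid volume, rescaled to [0, 1]) and keep the best prediction.
reservoir = np.array([0.4, 0.7, 0.3, 0.9])
best = max(
    ((p, v, model.predict(np.r_[reservoir, p, v].reshape(1, -1))[0])
     for p in np.linspace(0, 1, 21) for v in np.linspace(0, 1, 21)),
    key=lambda t: t[2],
)
print(f"best design: proppant={best[0]:.2f}, fluid={best[1]:.2f}, predicted cum. oil={best[2]:.2f}")
```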
Application of Artificial Intelligence Algorithms for Tight Oil Field Development
Authors: A. Povalyaev, A. Fedorov, B. Suleymanov, I. Dilmukhametov, D. Salnikova and A. Sergeychev
Summary: Nowadays, the economical exploitation of unconventional low-permeability reservoirs is a grand challenge for oil and gas producers. In particular, the continued development of mature fields is complicated by the fact that prospective drilling is concentrated in zones with high geological uncertainty and unclear production potential. To provide an effective and sustainable solution for development planning, a new methodology is required that enables high-quality forecasting of production profiles for various development strategies.
In this paper, we present a novel technique for quick-look estimation of the efficiency of different well-placement schemes, based on LWD data from new wells and the production history of existing ones. The data analysis is performed with Artificial Intelligence tools. The feasibility of this method was verified in several pilot projects within the frame of the ongoing drilling campaign.
As a result of this research, a global optimization of the Prioskoe field development was proposed, and the same approach was adopted for the company's other tight oil assets, namely the Achimov and Tumen deposits in the West Siberian region.
Unlocking New Exploration Opportunities with Digital Transformation
Authors: F.T. Amir, A.D. Wibisono and S.E. Saputra
Summary: This paper describes the journey of redefining our business processes as a government institution through digital transformation, creating data-driven decision-making powered by technology, along with some of the challenges and opportunities of its development and implementation. The transformation consists of four stages: data collection, digitization, digitalization and digital transformation. Since the project started in 2016, every aspect has been carried out collaboratively, in parallel and rapidly, reinforced by the Government through the launch of new regulations emphasizing data openness. The results are greater efficiency in business processes and a notable increase in exploration investment in Indonesia.
ANNs Trained on Synthetic and Lab Data for Modeling Steady-State Multiphase Pipe Flow
Authors: E. Baryshnikov, E. Kanin, A. Vainshtein, A. Osiptsov and E. Burnaev
Summary: This work considers the development of a machine learning model, trained on synthetic and lab data, for steady-state multiphase pipe flow. We propose a new method for calculating flow characteristics such as liquid holdup, flow regime and pressure gradient in a pipe segment based on ANNs and a transfer learning technique. In addition, the created tool is implemented within a marching algorithm for calculating flow parameters along the whole pipe. The segment module consists of three sub-models for calculating liquid holdup, defining the flow regime and estimating the pressure gradient. To create the sub-models we use a transfer learning methodology: in the first stage, the ANNs are trained on synthetic data generated with the OLGAS mechanistic model; in the second stage, the meta-models are additionally trained on real data, which in our case are lab measurements. As a result, we create a new multiphase flow correlation that incorporates the physics of the OLGAS model and is tuned to real data, which in the general case can be field measurements. At the final stage, we apply the marching algorithm with the suggested segment model to a field dataset for testing purposes.
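A minimal sketch (assumptions only, not the authors' networks) of the transfer-learning step for one sub-model: pre-train a small ANN on a large synthetic set, then freeze the early layers and fine-tune on a small lab set; all shapes and data are dummy stand-ins.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Dummy stand-ins: inputs = [flow rates, inclination, fluid properties, ...]; target = pressure gradient.
X_syn, y_syn = np.random.rand(5000, 8), np.random.rand(5000)  # large synthetic (mechanistic-model) set
X_lab, y_lab = np.random.rand(200, 8), np.random.rand(200)    # small lab set

# Stage 1: pre-train on synthetic data.
model = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(8,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_syn, y_syn, epochs=5, verbose=0)

# Stage 2: transfer learning — freeze the first layers and fine-tune the last one on lab data.
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
model.fit(X_lab, y_lab, epochs=20, verbose=0)
```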