First EAGE Digitalization Conference and Exhibition
- Conference date: November 30 - December 3, 2020
- Location: Vienna, Austria
- Published: 30 November 2020
Prediction of Hydrocarbon Using 1-D Numerical Modelling of Low Temperature Thermochronology for Various Basin Scenarios
Authors: J. Singh and D. M. Whipp

Summary: A thermochronometer age records the time elapsed since the mineral cooled below its closure temperature. The degree of maturation of organic matter (kerogen) in a rock depends on the maximum temperature reached in its thermal history and on the time spent in that temperature range. Low-temperature thermochronology therefore relates the age of cooling to the oil and gas window and helps identify whether the kerogen has been overcooked or preserved in the rock: low-temperature thermochronometers have effective closure temperatures adjoining the oil and gas window, with apatite fission track (AFT) closure at ∼120 °C while the oil window lies between 50 °C and 150 °C. A 1-D numerical model for predicting AFT ages has been developed in Python for various scenarios of basin exhumation and subsidence. The model requires material parameters and characteristics such as subsidence and exhumation rates, conductivity, thermal diffusivity, thermal gradient, transition time, and specific heat. It provides the AFT age, fission-track length distribution statistics in the mineral at the present time, and the depth history of the rock.
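As a toy illustration of the cooling-age concept (not the authors' transient 1-D model; a steady linear geotherm and all parameter values are assumptions here), the AFT age under constant exhumation is just the closure-isotherm depth divided by the exhumation rate:

```python
# Hypothetical steady-state sketch: a rock exhumed at a constant rate through
# a linear geotherm passed the AFT closure isotherm (~120 degC) at a depth
# set by the surface temperature and thermal gradient.

def aft_age(exhumation_rate_km_myr, surface_temp_c=10.0,
            gradient_c_per_km=25.0, closure_temp_c=120.0):
    """Time (Myr) since the rock cooled below the AFT closure temperature."""
    closure_depth_km = (closure_temp_c - surface_temp_c) / gradient_c_per_km
    return closure_depth_km / exhumation_rate_km_myr

print(aft_age(0.5))  # 4.4 km closure depth / 0.5 km/Myr -> 8.8 Myr
```

The full model described in the abstract additionally tracks transient geotherms, subsidence, and fission-track annealing, all of which this one-liner ignores.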
Geostatistic Recognition of Genetically Distinct Lacustrine Shale Facies Based on Big Data Technology
By S. Lin

Summary: With breakthroughs in shale oil and gas exploration, the strong heterogeneity of shale has attracted considerable attention. Shale classification that relies on traditional geological methods struggles to meet the current needs of exploration and development, and completing a genetic classification of shale that comprehensively considers many parameters, covering both genesis and characteristics, is a very important problem. This paper demonstrates that geostatistical recognition based on big-data analysis can process large volumes of data and identify genetically distinct shale facies, improving understanding of changes within a single shale formation. The approach (1) assigns genetic affinities and (2) provides a confidence level in the classification for any additional shale samples.
Geological Synthesis and Analysis of Potential Petroleum Systems of the Bida and Sokoto Basins in Nigeria
Summary: The Bida and Sokoto Basins are two of Nigeria's inland basins that constitute another set in a series of Cretaceous and later rift basins in Central and West Africa whose origin is related to the opening of the South Atlantic. Aeromagnetic interpretation has assisted in interpreting the geology of the basins. Organic geochemical studies show that the Kudu Shale in the Northern Bida Basin, equivalent to the Ahoko Shale in the Southern Bida Basin, and the Dukamaje Formation (dark shales and limestones) in the Sokoto Basin constitute the source rocks in the potential petroleum system. With averages of 40 m for source rock thickness, 45,000 km² for basin area, 9.0 wt% for TOC, and 220 mg HC/g TOC for HI, charge modeling indicates 623 million barrels of oil equivalent of extractable hydrocarbons in the Bida Basin at the appropriate kitchens.
Deep Learning a Poro-Elastic Rock Physics Model for Pressure and Saturation Discrimination
Authors: W. Weinzierl and B. Wiese

Summary: Utilizing time-lapse seismic to determine pore pressure and saturation effects is relevant for hydrocarbon production as well as for natural gas and CO2 storage. Its quantitative interpretation enables a detailed understanding of the 4D evolution of fluid/gas migration. We focus on the rock physics model to invert for rock physical parameters. A training dataset is generated with a forward modeling operator, with parameters adapted from a 65 m deep unconsolidated high-porosity reservoir at the Svelvik field laboratory, Norway. Two independent rock physical formulations are considered, and multiple deep fully connected neural networks are conditioned and trained to invert for different rock physics parameters. The network can rapidly derive rock physical parameters such as pressure, saturation, and porosity from seismic attributes, thus acting as an inversion tool. Subsets of the input parameters can be preset based on prior knowledge of a site. Utilizing neural networks to discriminate pressure and saturation allows real-time field site conformance verification during seismic campaigns targeting 4D effects in an operations scenario.
IoT and Openness as the Design Language for Changing the Landscape for Oilfield Workflows
Authors: R. Toolsi, A.A. Aqrawi and K. Jansa

Summary: Digital transformation has been a buzzword for several years; however, this transformation is both technological and hierarchical. In the same spirit, if we look at IoT as simply new technology with edge gateways, sensors, and actuators, then we are only driving innovation across technology. We can instead look at IoT as a design language and make changes to the organization, communication, processes, and people.

What I propose is to have IoT as part of the operational roadmap, where it can liberate data, remove silos and bring about new business opportunities. Management can get real-time dashboards of all operations and make better opex and capex decisions; it also becomes possible to automate several functions, giving better control and improving reaction time. IT would no longer be a vertical by itself but rather integrated as necessary to better support business objectives.

There are advantages for employees as well: with better access to quality data, they can improve their decision making and gain visibility across several different aspects of the operations, empowering them to make decisions horizontally where traditionally they were only able to do so vertically.
Lifecycle Management in a Dynamic Open Ecosystem
Authors: K. Jansa, A.A. Aqrawi and R. Toolsi

Summary: The rise and growth of software ecosystems over the last decade has resulted in a significant transition in software engineering practices.

This presents a tremendous opportunity in the E&P sector to deliver tailored, instant solutions to the user. By transitioning to a cloud-based E&P environment, we can effectively manage the lifecycle of software used in a monolithic desktop application from development and validation through deployment and, ultimately, retirement. In this shared environment, the user retains complete control, and the extensibility of a monolithic desktop application can be maintained in a cloud setting without losing the value invested in technology.

Lifecycle management incorporates multiple disciplines: project management, requirements management, software development, software testing, quality assurance and customer support.

The ecosystem allows the platform owner to communicate the platform's release milestones to the software developer community and clearly state the requirements, so that developers may adjust their development cycles to ensure their updates are submitted for validation on time to keep pace.
Machine Algorithm for Predicting Shale and Sand Arrangement Using Seismic Attributes
Authors: S. Gabitova and M. Naugolnov

Summary: This work presents a tool for lithology prediction that combines seismic data inversion with a classifier based on machine learning algorithms. A hidden connection was found among the seismic interpretation inputs (P-impedance, Vp/Vs and NTG) that allows the probability of water- and oil-saturated layers to be predicted with high precision in understudied and unexplored field sectors.

Multiple rapid tests of machine learning classification algorithms trained on labeled examples were carried out: boosting methods (Gradient Boosting, XGBoost, CatBoost), bagging (Random Forest), Support Vector Machines (SVM), and K-Nearest Neighbors (KNN). The best methods, Gradient Boosting, XGBoost and CatBoost, were studied more carefully. As a result, a boosting-based method was found that allows evaluating data, building a model and then forecasting sand and shale arrangement with high precision in undrilled and poorly studied zones. The method can be applied where shales and sands are hard to distinguish because of insufficient contrast in the elastic inversion data. We show that the method exhibits relatively high classification accuracy, classifies lithotypes (sands and shales) and builds maps of sand probability.
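Of the classifier families screened in the abstract, k-nearest neighbors is the simplest to sketch in stdlib Python. This is a toy illustration, not the authors' implementation: the two-feature training samples are hypothetical, and in practice the features should be standardized so one attribute does not dominate the distance.

```python
import math
from collections import Counter

# Hypothetical (P-impedance, Vp/Vs) -> lithology training samples.
train = [((6500, 1.65), "sand"), ((6700, 1.62), "sand"),
         ((8200, 1.90), "shale"), ((8400, 1.95), "shale")]

def knn_predict(features, k=3):
    """Majority vote among the k training samples nearest to `features`."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], features))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((6600, 1.64)))  # -> sand
```

The boosting methods the authors ultimately preferred replace this single vote with an ensemble of sequentially fitted trees, but the fit/predict workflow is the same.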
Super Resolution of Fault Plane Prediction by a Generative Adversarial Network
Authors: F. Jiang and P. Norlund

Summary: Interpreting seismic data enhances understanding of subsurface geological features, particularly for assisted fault interpretation. The results of assisted fault interpretation workflows can provide valuable information to optimize hydrocarbon production during drilling and stimulation treatments. However, given the complexity of seismic data such workflows can generate incorrect or misleading interpretations, such as discontinuous fault segments and mispositioned fault planes, particularly when deep-learning convolutional neural networks are used. Fault extraction results often face difficulties locating the fault plane where low reflectivity or signal-to-noise ratio exists. In this abstract, a novel approach is introduced that implements a super-resolution generative adversarial network to help improve the resolution of fault prediction results. Synthetic fault data were generated to train an adversarial model, which was then applied to different field data sets. This approach could serve as a standard post-processing workflow to decrease the uncertainty as part of an assisted fault interpretation approach and provides an efficient method of helping improve the fidelity of fault prediction results.
Clastic Reservoir Rock Grain Size Estimation from Wireline Logs Using a Random Forest Model: Initial Results
Authors: F. Anifowose, S. Shahrani and M. Mezghani

Summary: Grain size is a key input to various reservoir models. The models require a continuous log of grain size. Core samples are usually not available over the entire reservoir section. The most accurate grain size measurement is obtained from sieve and laser particle size analyses. These methods are expensive. The conventional method, the visual core description, is time-consuming, subjective, and nonreproducible. Alternative methods include the use of empirical equations, nuclear magnetic resonance (NMR) relaxation time, and acoustic velocities. These latter methods require inputs that are not sufficiently available, not applicable to different geological settings, or not available for all wells. This paper proposes a new methodology that estimates reservoir rock grain size for a new well or reservoir section from archival core description data and their corresponding wireline logs using machine learning technology. Nine wells from a clastic reservoir are used. Seven wells are combined to build the training set while the remaining two are used for model validation. Three machine learning methods are implemented and trained with optimized parameters. The results showed that, despite the subjectivity and bias associated with the core description data, the machine learning methods are capable of estimating the grain size for the validation wells.
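The aggregation idea behind a random forest (bagging: averaging many weak learners, each fit to a bootstrap resample) can be illustrated with a stdlib-only toy. The (log value → grain size) pairs below are hypothetical, not from the paper, and the one-split stump stands in for a full decision tree:

```python
import random
import statistics

# Hypothetical pairs: a wireline log reading -> grain size in mm.
data = [(40, 0.50), (55, 0.35), (70, 0.20), (85, 0.12), (100, 0.08)]

def fit_stump(sample):
    """One-split regression stump: mean grain size either side of the
    median log value of the bootstrap sample."""
    split = statistics.median(x for x, _ in sample)
    left = [y for x, y in sample if x <= split]
    right = [y for x, y in sample if x > split] or left
    l, r = statistics.mean(left), statistics.mean(right)
    return lambda x: l if x <= split else r

def bagged_predict(x, n_trees=200, seed=0):
    """Average the predictions of n_trees stumps, each fit to a resample."""
    rng = random.Random(seed)
    preds = [fit_stump([rng.choice(data) for _ in data])(x)
             for _ in range(n_trees)]
    return statistics.mean(preds)
```

A real random forest additionally randomizes the features considered at each split and grows deep trees, but the bootstrap-and-average structure is the same.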
Pathways to Exploration Success – Orchestrating the Steppingstones
Authors: S. Roy, C. Castagnac, T. Levy and S. Nollet

Summary: Across the E&P industry, the definition of corporate exploration success differs from organization to organization. It could vary widely, from investing early in the right piece of acreage for an investment company, to drilling a discovery well for an exploration-only company, to matching the produced volume of hydrocarbons from the subsurface with the discovery volume claimed by the exploration department for a major oil and gas producer. Most organizations define their business processes according to their corporate goals and establish stage-gate processes for decision making.

The exploration process framework provides the guidance to identify and prioritize the work to be done according to the business goal, facilitate collaboration between domains, and make each domain understand its roles and responsibilities in achieving the common goal and how its work will be used in the next step.

The digital solution embeds the exploration process framework into a cloud-native application that, by preserving data and knowledge, empowers the capacity to integrate analysis from multiple exploration domains and increases the understanding of regional geology to find and mature opportunities. Digital technology thus establishes the connection between the business goals and the technical project execution.
Inverting Elastic Model Properties Using ResNet
Summary: We develop a novel seismic data inversion method to estimate the properties of subsurface layered elastic models using a convolutional neural network (CNN). Specifically, we use ResNet (Residual Neural Network), chosen for its unique identity-block architecture, to predict the parameters of layered elastic models, including layer depth, layer density, and P- and S-wave velocities. The dataset consists of 10,000 layered elastic models and their corresponding single-shot records. We use 80% of the pairs for training, after which the trained network makes predictions on the remaining models. The trained network delivers satisfying predictions on both simple (few-layer) and complex (multi-layer) models, suggesting that the proposed approach could be a useful tool for data processing, especially when dealing with near-surface layered models.
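The identity block that gives ResNet its name reduces, conceptually, to a skip connection: each block learns a correction that is added back onto its unchanged input, so gradients can flow through the identity path. A framework-free sketch (the learned function `f` here is a hypothetical stand-in for a trained convolutional sub-network):

```python
# Toy illustration of the residual ("identity shortcut") idea behind ResNet.

def residual_block(x, f):
    """y = x + f(x): the skip connection lets gradients bypass f."""
    return [xi + fi for xi, fi in zip(x, f(x))]

def resnet(x, blocks):
    """Stack residual blocks: each composes a small correction onto x."""
    for f in blocks:
        x = residual_block(x, f)
    return x
```

In the paper's setting, `x` would be a feature map derived from a shot record and the final layers would regress the layer depths, densities, and velocities.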
Robust Evaluation of Fault Prediction Results: Machine Learning Using Synthetic Seismic
Authors: M. Sarajaervi, T. Hellem Bo, B. Goledowski and M. Nickel

Summary: Metrics to assess machine learning methods are necessary for evaluating results and comparing against existing methods. For the segmentation of faults in seismic data, we suggest the use of a robust Jaccard metric that allows for small lateral inaccuracies in fault positioning. This error tolerance is necessary because interpretations are often inaccurate or subjective as a result of low seismic resolution, noise or other image deficiencies. The metric is used to evaluate results during the development of a 3D convolutional neural network. In practice, this is done by applying new versions of the network to field data and using the metric to compare machine learning results to manual interpretations.
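One plausible way to build lateral tolerance into a Jaccard (intersection-over-union) score, shown here on tiny binary masks, is to match predictions against a laterally dilated ground truth. This is a sketch of the idea; the exact formulation in the paper may differ.

```python
def dilate(mask, tol):
    """Set a cell if any cell within `tol` columns in the same row is set."""
    rows, cols = len(mask), len(mask[0])
    return [[1 if any(mask[i][k]
                      for k in range(max(0, j - tol), min(cols, j + tol + 1)))
             else 0 for j in range(cols)] for i in range(rows)]

def tolerant_jaccard(pred, truth, tol=1):
    """IoU where a predicted fault pixel counts as a hit if a true fault
    pixel lies within `tol` pixels laterally."""
    d = dilate(truth, tol)
    inter = sum(p and d[i][j]
                for i, row in enumerate(pred) for j, p in enumerate(row))
    union = sum(map(sum, pred)) + sum(map(sum, truth)) - inter
    return inter / union if union else 1.0
```

With `tol=1`, a fault predicted one pixel to the side of the interpretation scores 1.0 instead of the 0.0 a strict Jaccard would give.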
Harnessing Data Standards and Cloud Computing to Achieve Global Screening and Compare Exploration Potential
Authors: M. Treloar, T. Butt and K. Heyburn

Summary: Exploration for conventional hydrocarbons has become increasingly challenging. Exploration frontiers represent high-risk opportunities, and explorers require time and cost-effective tools to consistently understand and rank these opportunities within an exploration portfolio. This is particularly true in the continuing environment of risk-aversion and budgetary pressure. In addition, despite the ongoing digitalization of the industry providing more data than ever before, exploration teams remain reduced in size and often lack the resources to make full use of the ever-increasing volumes of data at their disposal.
These challenges speak to a need for faster, better-integrated, and more rigorous screening of exploration opportunities. This article examines how the need can be addressed by combining the efficiency gains in data access, integration, and processing capabilities offered by cloud technologies with standardized geological interpretations. As a case study, all Cretaceous clastic exploration potential in offshore basins has been assessed globally. The technology, workflows and inputs used to achieve this are covered, alongside the importance of applying a consistent framework for data integration.
A Transformational Journey from Unstructured Geoscience Data to a Digital Analytical Experience
Authors: D. Slidel and I. Fletcher

Summary: The automated standardization of legacy datasets can facilitate the generation of valuable new insights when consumed within a business intelligence platform. Where digital transformation in the energy services sector has successfully led to the automation of data gathering and interpretation processes, a geoscientist's time can be freed up for the more valuable interpretive tasks that are so vital to understanding subsurface geology. A recent advancement in managing data that has significantly helped with this transition is the adoption of business intelligence and analytics platforms to produce advanced visualizations and automated analysis. This presentation and article focus on a recent example of this process in action, in which play cross sections in an unstructured PDF format are processed for use within an analytical platform.
Real Time Well Engineering for Intelligent Rig State Identification: An Edge Computing Use Case
Authors: V. Kemajou, R. Samuel and M. Yasir

Summary: The internet of things has brought better connectivity among devices in various industries, including oil and gas. With this improved connectivity come improved applications, especially in real-time data processing and analytics. A major requirement for real-time applications is minimized latency, and cloud computing, despite its many benefits, is often limited in that area: a lag develops and worsens as real-time data travels back and forth between field locations and data centers. Edge computing, on the other hand, enables faster processing. Edge computing applied to real-time well engineering is presented and discussed. It is potentially an adequate approach for applications dependent on real-time processing such as geosteering, which requires instantaneous processing to drastically enhance the economic profitability of a well. With edge computing, some sensor data can be cleaned and processed instantaneously to automatically identify the rig state, a key step in the real-time well engineering workflow during drilling.
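Rig-state identification at the edge can be as simple as a rule cascade over cleaned surface channels. The sketch below is a hypothetical illustration, not the paper's method; the thresholds and state names are illustrative assumptions:

```python
# Toy rule-based rig-state classifier (thresholds are illustrative only).

def rig_state(rpm, wob_klbf, block_moving):
    """Return a coarse rig-state label from rotary speed, weight on bit,
    and whether the traveling block is moving."""
    if wob_klbf > 2:  # bit on bottom and taking weight
        return "rotary drilling" if rpm > 5 else "slide drilling"
    return "tripping" if block_moving else "off bottom"
```

A production system would derive these flags from filtered sensor streams at the edge gateway and distinguish many more states, but the low-latency benefit comes from evaluating exactly this kind of logic next to the sensors instead of in a remote data center.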
Machine Learning for Wax Deposition Prediction
Authors: M. Nait Amar and A. Jahanbani Ghahfarokhi

Summary: Accurate prediction of wax deposition is of vital interest in digitalized systems to avoid issues that interrupt the flow assurance during production of hydrocarbon fluids. The present investigation aims at establishing rigorous intelligent schemes for predicting wax deposition under extensive production conditions. To do so, multilayer perceptron (MLP) optimized with Levenberg-Marquardt algorithm (MLP-LMA) and Bayesian Regularization algorithm (MLP-BR) were established using 88 experimental measurements. The obtained results showed that MLP-LMA achieved the best performance with an overall root mean square error of 0.2198 and a coefficient of determination (R²) of 0.9971. The performance comparison revealed that MLP-LMA outperforms the prior approaches in the literature.
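The two scores reported above (RMSE and R²) follow the standard definitions and are easy to reproduce from any set of predictions; a stdlib sketch, with data values left to the reader:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2
                         for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

An RMSE of 0.2198 with R² = 0.9971 means the residual scatter is under 0.3% of the total variance of the 88 measurements.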
Geoscience Workflow Tracking by Means of RESQML Standard Format
Authors: M. Piantanida, M. Dalla Rosa and B. Volpi

Summary: The paper describes Eni's implementation of a RESQML database, capable of tracking the models exchanged across different G&G applications along interpretation and modelling workflows, together with all the metadata that will allow the later identification of the right model to be re-used for future work.
The implementation is targeted at sustaining Eni’s vision of a future ecosystem of small geo-apps, each focused on performing at best a small piece of the workflow, and with the possibility of easily composing the apps into a full workflow by exchanging the RESQML models across each other. This vision must be coupled with a powerful model tracking database, capable of identifying which models were used at each step of the workflow. The paper describes the approach used by Eni for such implementation, including the capability of disaggregating RESQML models into the basic data components to avoid duplication or inconsistencies, as well as some examples of the metadata used to correctly label, store and retrieve the RESQML models within the database.
Principal Component Analysis and Deep Learning along Directional Image Gathers for High-Definition Classification of Subsurface Features
By B. De Ribet

Summary: Diffraction imaging has proven to be an attractive method for delivering high-resolution subsurface images containing different types and scales of continuous and discontinuous geometrical objects. For depth domain 3D subsurface models, Koren and Ravve (2011) described an imaging method which is based on the ability to decompose the full recorded seismic wavefield into continuous full-azimuth directivity components in situ at the subsurface image points. This method follows the concept of imaging and analysis in the "Local Angle Domain" and allows us to generate azimuthal directivity gathers, from which we can separate specular and diffracted energies.
As part of the ongoing effort to automatically enhance procedures for classifying directivity driven image data into N geometrical features such as continuous reflectors, faults, point diffractors, acquisition noise, and ambient noise, Itan et al. (2017) presented a Deep Learning (DL) approach to this challenging task. This work expands on this method, as in addition to vertical section image patches, we also train the net with horizontal patches. This leads to further improvements, particularly in areas masked by ambient and coherency noise for classifying different geometrical features. We demonstrate our method on seismic data from the Eagle Ford and Barnett unconventional shale plays.
The Big Loop: an Innovative Solution for Automating Subsurface Workflows and Transforming Inter-Disciplinary Asset Team Collaboration
Authors: C. Cosson, A. Plougoulen, M. Morin and G. Maisonneuve

Summary: The efficient management of subsurface operations depends on the asset team's technical excellence. Skilled practitioners in various domains interpret, model and predict reservoir performance. This, however, leads to silos between domains, each with its own language, methods and technologies, resulting in: a) an overly long cycle time for passing through domains, analyzing data and delivering key information that helps management decisions; b) the challenge of creating consistency between specialists and building confidence in the results. This presentation demonstrates a workflow that improves collaboration by helping practitioners work synchronously, based on fully automated reservoir modeling technologies allied with agile collaborative methods. The technology driver is a workflow orchestrator, which runs atop optimized reservoir modeling and simulation software packages and scripts. A cognitive approach is used: initial models are built from available data, and then refined iteratively as new information arrives. Updates are automatically propagated through domains and deliver new results. Consistency across domains is preserved and models are evergreen. Continuous alignment is guaranteed, and results reflect the asset's needs. This solution, called Big Loop™, is software-agnostic, customizable to the needs of the individual organization, and has been shown to significantly improve asset development and management efficiency.
Kriging Season 3: From Geosciences to Geo_DATA_Sciences
Authors: L. Sandjivy, S. Valentin and L. Lacaille

Summary: Big geo data and cloud computing are a real E&P game changer, taking us from geosciences to geo data sciences. Probability theory offers a consistent mathematical framework for developing specific kriging-based machine learning algorithms for automating reservoir modelling and updating it in real time. This is season 3 of the kriging algorithm saga in the oil industry, after season 1, kriging as an interpolator, and season 2, kriging as a geophysical workflow optimizer. In season 3, kriging-based software packages give way to kriging-based software apps for operating digital E&P projects.
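At the heart of every "season" is the same estimator. A minimal simple-kriging sketch in 1-D, with two data points, an assumed exponential covariance model, and hypothetical parameter values throughout (real implementations fit the covariance to data and solve much larger systems):

```python
import math

def cov(h, sill=1.0, corr_range=100.0):
    """Assumed exponential covariance model."""
    return sill * math.exp(-abs(h) / corr_range)

def simple_krige(x0, xs, ys, mean=0.0):
    """Simple kriging at x0 from two samples: solve the 2x2 system C w = c
    for the weights, then combine the residuals from the known mean."""
    (x1, x2), (y1, y2) = xs, ys
    c11, c12, c22 = cov(0.0), cov(x1 - x2), cov(0.0)
    c1, c2 = cov(x0 - x1), cov(x0 - x2)
    det = c11 * c22 - c12 * c12
    w1 = (c1 * c22 - c2 * c12) / det
    w2 = (c2 * c11 - c1 * c12) / det
    return mean + w1 * (y1 - mean) + w2 * (y2 - mean)
```

Because the covariance of a point with itself is the sill, the estimator interpolates exactly at the data locations, which is the property "season 1" was built on.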
Automatic Method for Anomaly Detection while Drilling
Authors: M. Golitsyna, A. Semenikhin, I. Chebuniaev, V. Vasilyev, V. Koryabkin, V. Makarov, I. Simon, T. Baybolov and O. Osmonalieva

Summary: Many anomalies can occur during the drilling process and lead to failures. It is crucial to detect these deviations from normal operation as soon as possible, so that engineers can analyse them and decide what action to take to prevent potential NPT.

In this work we propose a new machine-learning-based approach for detecting abnormal drilling behaviour in an online manner. The idea is to cluster drilling data that has been preprocessed in a purpose-built way. Our approach allows all available data to be used for training, as it does not need any labeled data, and incorporates both raw drilling parameters and expert knowledge, thus enhancing prediction results.
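The clustering step can be illustrated with a stdlib-only 1-D k-means toy; real drilling channels are multivariate and need the kind of preprocessing the abstract alludes to, and the values below are hypothetical:

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Toy 1-D k-means: alternate assignment to the nearest centroid and
    centroid recomputation. Windows far from every centroid could then be
    flagged as anomalous."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)
```

An online variant would update the centroids incrementally as each new sample arrives, which is what makes this family of methods usable while drilling.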
Uncover 2% Advanced Production Optimization across Complex Operational Plants through Industry 4.0, AI and Digital Twin
Authors: D. Piotrowski and J. Kalagnnanam

Summary: This is a client case study illustrating an advanced implementation of Industry 4.0, AI, and digital twins to achieve material gains in production optimization across complex, interdependent plant processes. From working with leadership and selecting the right, impactful business case to implementation and garnering support from operational stakeholders, we demonstrate how end-to-end value chain optimization is possible.
Production Optimization Under Constraints: Development and Application of Software Combining Data Science and Petroleum Engineering Knowledge
By G. Joffre

Summary: Gas production from OMV New Zealand's gas/condensate fields is limited by commercial demand, which also constrains production of the associated condensate. No test separators or individual-well multiphase flow meters are installed; only single-phase gas flow meters (V-cone and orifice-plate meters) exist for each individual well. To produce the maximum revenue from the fields, the wells with the highest condensate-gas ratio (CGR) need to be prioritized, while still ensuring that well and facility constraints are managed.

An agile crew of engineers, developers and data scientists was mobilized to design and create reliable, easy-to-use and easy-to-maintain software solutions for three parts of the optimization problem:
- A live, dynamic visualization of the wells' operating envelopes, for monitoring the current status of individual wells against the constraints and for direct comparison with simulation model results.
- A software solution to automatically identify step changes in well gas, water and condensate rates at facility output level, using these changes to improve the CGR and WGR values allocated to each individual well.
- A software application to calculate the best combination of individual well rates to meet gas export demand while maximizing condensate production, within facility limits and well operating envelopes.
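The third of these applications, choosing well rates to meet gas demand while maximizing condensate, can be sketched as a greedy heuristic: fill the export demand from the highest-CGR wells first. The well data are hypothetical, and the real tool must also honor operating envelopes and facility limits rather than a single max rate per well:

```python
def allocate_gas(wells, demand):
    """Greedy sketch: wells is a list of (name, max_gas_rate, cgr) tuples;
    returns {name: allocated gas rate} filling demand by descending CGR."""
    alloc = {}
    remaining = demand
    for name, max_rate, _cgr in sorted(wells, key=lambda w: w[2], reverse=True):
        alloc[name] = min(max_rate, remaining)
        remaining -= alloc[name]
    return alloc

wells = [("A", 10.0, 50.0), ("B", 10.0, 80.0), ("C", 10.0, 30.0)]
print(allocate_gas(wells, 15.0))  # B filled first, then A; C stays shut in
```

When rates interact (shared flowlines, backpressure), the greedy answer is no longer optimal and a constrained optimizer is needed, which is presumably why a dedicated application was built.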
Data-Driven Detection of Well Events in Mature Gas Fields
Authors: J. Poort, P. Shoeibi Omrani and A.L. Vecchia

Summary: The production optimization of mature gas fields is severely complicated by the occurrence of undesired well events such as salt precipitation, liquid loading, or gas/water coning. Learning from production data covering periods in which such events took place could help operators improve process optimization. However, because production data are currently interpreted manually, many well events go unreported. Reanalyzing historic data could retrieve missed events, but this is a time-consuming and costly process. In this study, the dynamic time warping (DTW) algorithm was used in a workflow that automates the detection of well events and can operate both offline and in real time. The workflow supports operators in finding well events within production data based on characteristics of target events that the operators provide. In a case study using field data for a gas well suffering from salt precipitation, the workflow proved accurate and computationally efficient, finding 8 new events that the operator had not detected. Additionally, the algorithm remained robust in detecting well events even after introducing up to 10% added noise.
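The core of such a workflow is the DTW distance itself, which scores how well a stretched or compressed event template matches a window of production data. A minimal stdlib implementation of the classic dynamic program (the paper's event-matching logic around it is not reproduced here):

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between two
    univariate sequences, e.g. an event template and a data window."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # A sample may match one step, repeat, or skip in the other series.
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1],
                                 cost[i - 1][j - 1])
    return cost[n][m]
```

Because the alignment may repeat samples, a template matches events that unfold faster or slower than it does, which is exactly the robustness needed for well events of varying duration.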
Automated Surface Fault Block Delineation
By T. Brenna

Summary: We present a fully automated method for delineating potential compartments in faulted reservoirs based solely on the geometry of a single reservoir horizon interpretation. The technology has potential applications in, for instance, reservoir compartmentalization studies, where it is often advantageous to have an a priori delineation of the reservoir compartments as a credible starting point for the analysis.

Our solution integrates methods from the geometric modeling discipline, for extracting high-quality curvature information, with novel extensions of existing image processing techniques for segmentation. The result is a fully transparent, deterministic and extensible workflow.

Getting automation right creates value in itself by freeing domain experts from laborious manual work to focus on more fulfilling, higher-value activities. Automation can also enable entirely new intelligent, or even transformational, workflows by letting us bypass processes requiring manual user interaction and ultimately leverage alternative applications of the technology stacks. The impact of automation in the emerging digital space will empower us with new capabilities enabling accelerated hydrocarbon discovery.
Deep Bayesian Neural Networks for Fault Identification and Uncertainty Quantification
Authors: L. Mosser, S. Purves and E.Z. Naeini

Summary: The interpretation of faults within a geological basin or reservoir from seismic data is a time-consuming, often manual task associated with high uncertainties. Recently, numerous approaches using machine learning, especially various types of convolutional neural networks, have been presented to automate the process of identifying fault planes within seismic images and have been shown to outperform traditional fault detection techniques. While these methods show good performance, many of them do not allow investigation of the uncertainties that arise in the fault identification process. In this study, we present an application of Bayesian deep convolutional neural networks for identifying faults within seismic datasets. Using an approximate Bayesian inference method, a Bayesian deep neural network was trained on a large dataset of synthetic faulted seismic images. The model is then applied to a benchmark dataset and a real data case from the NW shelf of Australia to identify fault planes and to investigate the associated uncertainty in the predictive distribution.
Processing Thin Section Photos with Neural Networks and Computer Vision
Authors: S. Polushkin, Y. Volokitin, I. Edelman, E. Sadykhov, O. Lokhanova, Y. Murzaev and S. Pastushkov

Summary: A neural network originally designed for diagnosing cardiovascular diseases was trained to identify and analyze grains in thin section photos. Pores and pore throats are identified with computer vision. About 150 thin section photos were processed in about 20 minutes; the output contains grain sizes and mineral composition for more than 10,000 grains, and pore and pore-throat diameters for several thousand pores. A comparison with alternative methods of determining pore size distribution, such as Cap Curves and NMR, is presented.
-
-
-
CESI Is a Numerical Approach for Oil Field Study Optimization
Authors O. Melnikova, B. Belozerov and I. PavelevaSummaryThe main goals of oil field study are reducing risk during the appraisal and exploration stages and decreasing uncertainty during the exploration and development stages.
The software module (patented under the name KOGI; CESI is the English equivalent of this abbreviation), which is based on the methodology of complex exploration state estimation (CESI), is a digital tool for identifying insufficiently studied zones. These are, consequently, zones of high risk and uncertainty in terms of STOIIP calculation and the planning of future investigations.
The methodology is still under development: qualitative features of productive formations (such as complexity, architectural aspects, etc.) are being added, as well as the value of information (VOI) obtained from studies.
-
-
-
Digital Multiscale Flow Modeling for Fractured Carbonates with Hessian-Based Cracks Detection
Authors I. Varfolomeev, N. Evseev, O. Ridzel, V. Abashkin, A. Zozulya, S. Karpukhin and M. MiletskySummaryThe results of a pilot project on studying the petrophysical and transport properties of core from a fractured carbonate gas-condensate reservoir are described. The studied whole core samples are characterized by low absolute matrix permeability and a highly heterogeneous multiscale network of cracks and fractures. Modern full-core 3D X-ray computed tomography was unable to resolve the geometry of the thinner cracks, which made it impossible to create the regular binary solid/void digital rock model typically used for pore-scale hydrodynamic modeling. Thus, a Hessian-based crack detection method, which differentiates voxels with different permeabilities, was employed to construct a model with effective properties. To calibrate the effective properties, smaller sub-plugs were scanned at substantially higher resolution and their images were spatially registered to the whole-core image. The density functional hydrodynamics + chemical potential drive method was used to carry out numerical simulation of three-phase water-gas-condensate flow on the constructed whole core digital rock model with effective properties.
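The Hessian-based idea can be sketched in 2-D: thin dark cracks in a brighter matrix appear as pixels where the largest-magnitude eigenvalue of the image Hessian is strongly positive (an intensity minimum across the crack). The synthetic image and threshold below are illustrative, not the paper's calibration:

```python
import numpy as np

def hessian_ridge_response(img):
    """Largest-magnitude eigenvalue of the 2x2 image Hessian per pixel.
    Dark, thin cracks in a brighter matrix give a strong positive response."""
    gy, gx = np.gradient(img.astype(float))
    hyy, hyx = np.gradient(gy)      # second derivatives along rows
    hxy, hxx = np.gradient(gx)      # second derivatives along columns
    # Eigenvalues of the symmetric Hessian [[hxx, hxy], [hxy, hyy]].
    tr = hxx + hyy
    det = hxx * hyy - hxy * hxy
    disc = np.sqrt(np.maximum((tr / 2.0) ** 2 - det, 0.0))
    lam1 = tr / 2.0 + disc
    lam2 = tr / 2.0 - disc
    return np.where(np.abs(lam1) > np.abs(lam2), lam1, lam2)

# Synthetic CT slice: bright matrix with one dark, one-pixel-wide crack row.
img = np.full((9, 9), 100.0)
img[4, :] = 20.0
resp = hessian_ridge_response(img)
crack_mask = resp > 0.5 * resp.max()
```

3-D crack detectors (Frangi-style vesselness filters) extend exactly this eigen-analysis to the 3x3 Hessian of the CT volume.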
-
-
-
Smarter Well Engineering Concepts Aid in Reducing Planning Time and Increasing ROP
Authors N. Islam, A. Rosener, W. Souza and M. YasirSummaryModern well engineers struggle with digital confusion; they have either too much data or not enough, and the quality is often questionable. Additionally, well engineers are usually operations focused and might not fully appreciate optimization through data-driven insight. This paper illustrates how to optimize the rate of penetration (ROP) in any given field using an automated, time-saving process for designing wells with machine-learning (ML) techniques.
By prescribing optimized ROPs through automated ML of offset well attributes, free from subjective human bias, engineers can push technical limits. Automated analysis, regression, and visualization of high-volume data can reduce planning time significantly and help establish optimized operational parameters to reduce drilling time and costs.
The next step is to build a real-time downhole advisory system to help achieve the predicted ROPs by predicting and prescribing drilling parameters ahead of the bit.
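The predict-then-optimize pattern described above can be sketched as a regression over offset-well records followed by a grid search of the operating envelope. Everything here is invented for illustration: the synthetic drilling parameters (WOB, RPM), their ranges, and the linear model standing in for the ML step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic offset-well records: weight on bit (klbf) and rotary speed (RPM).
wob = rng.uniform(10, 40, 200)
rpm = rng.uniform(60, 180, 200)
# Assumed ground truth for the sketch: ROP rises with both, plus noise.
rop = 5.0 + 0.8 * wob + 0.15 * rpm + rng.normal(0.0, 2.0, 200)

# Fit a linear ROP model on the offset data (stand-in for the ML model).
X = np.column_stack([np.ones_like(wob), wob, rpm])
coef, *_ = np.linalg.lstsq(X, rop, rcond=None)

def predict_rop(wob, rpm):
    return coef[0] + coef[1] * wob + coef[2] * rpm

# Grid-search the operating envelope for the highest predicted ROP.
wg, rg = np.meshgrid(np.linspace(10, 40, 31), np.linspace(60, 180, 61))
pred = predict_rop(wg, rg)
best = np.unravel_index(pred.argmax(), pred.shape)
best_wob, best_rpm = wg[best], rg[best]
```

A real workflow would add more parameters, a nonlinear model, and per-formation segmentation, but the two-stage structure (learn from offsets, then search the design space) is the same.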
-
-
-
Solving Problems with the Discrete Smooth Interpolation Framework, from Geomodelling to Geophysics and Beyond
Authors A. Tertois and Z. KorenSummaryA number of algorithms developed in geomodelling software rely on the Discrete Smooth Interpolation (DSI) method, a mathematical framework which enables interpolation of sparse values with geological and geophysical constraints on any type of discrete model, such as triangulated surfaces or volumetric grids. Leaning perhaps more towards data integration than machine learning, this powerful tool is also evolving as part of our digital transformation. Today’s dynamic environment is favourable to building upon DSI’s principles and ability to add geological or physical concepts as constraints in discrete models.
DSI already offers solutions to many geomodelling problems as part of a successful commercial software suite. The Fourth Industrial Revolution is an opportunity to rejuvenate DSI by lifting it out of the geomodelling toolkit and making it available as a separate entity for any scientist to use, as a seamless and invisible link between linear equations and elegant solutions.
In this paper, we first review the Discrete Smooth Interpolation theory, then show how we currently apply it to various geomodelling problems and finally, we look towards its future in helping us solve our digital challenges in different domains.
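At its core, DSI solves a linear least-squares system that balances a discrete roughness term against data constraints on the nodes of a discrete model. A 1-D sketch on a 21-node grid (the weights, node indices and values are illustrative, not the paper's formulation):

```python
import numpy as np

n = 21
# Roughness operator: discrete second differences (the smoothness term).
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

# Sparse control nodes (e.g. well picks) enforced as soft constraints.
idx = np.array([0, 10, 20])
vals = np.array([0.0, 5.0, 2.0])
C = np.zeros((3, n))
C[np.arange(3), idx] = 1.0

# Stack weighted roughness and data rows, solve in the least-squares sense.
w_rough, w_data = 0.1, 100.0          # heavy weight pins the data nodes
A = np.vstack([w_rough * D, w_data * C])
b = np.concatenate([np.zeros(n - 2), w_data * vals])
u, *_ = np.linalg.lstsq(A, b, rcond=None)
```

On a triangulated surface or volumetric grid the roughness operator is built from mesh connectivity instead of a 1-D stencil, and additional geological constraints enter as extra weighted rows of the same system.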
-
-
-
Multi-Sensor Acoustic Parameter Analysis System for Monitoring and Performance Prediction of Deep Drilling and Stimulation Operations
SummaryAcoustic Emission (AE) based systems have been under development and in use at Fraunhofer IEG to monitor, evaluate, and control conventional and novel drilling processes and their pertinent equipment in geothermal and drilling applications. Moreover, novel jetting and drilling operations in deep geothermal reservoirs rely heavily on such new technologies in order to be controlled properly and thus become a viable technical and economical option.
AE monitoring is based on the detection and conversion of elastic waves into electrical signals, which are associated with a rapid release of localized stress energy propagating within a material. It is a passive testing, logging, and analysis method for evaluating changes in the properties and behavior of machines and mineral-type materials such as rocks. Such changes may be induced by drilling, jetting, or other methods; they are recorded, characterized, and evaluated via an AE system and will ultimately be used for process performance prediction using machine learning methods. This is the core of the novel monitoring system development: the AE-based, so-called multi-sensor acoustic parameter analysis as the primary control and monitoring mechanism during rock breaking, drilling, jetting, and stimulation.
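A classic way to detect transient AE events in a continuous recording is an STA/LTA trigger (short-term over long-term average energy ratio); the sketch below is a generic illustration of that idea, not Fraunhofer IEG's system, and the window lengths and threshold are assumed values.

```python
import numpy as np

def sta_lta(signal, n_sta=20, n_lta=200):
    """Short-term / long-term average energy ratio; both windows are
    aligned to end on the same sample before dividing."""
    energy = signal.astype(float) ** 2
    csum = np.concatenate([[0.0], np.cumsum(energy)])
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta     # mean over last n_sta
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta     # mean over last n_lta
    m = min(len(sta), len(lta))
    return sta[len(sta) - m:] / (lta[len(lta) - m:] + 1e-12)

rng = np.random.default_rng(2)
trace = rng.normal(0.0, 1.0, 5000)
trace[3000:3050] += rng.normal(0.0, 10.0, 50)   # synthetic AE burst
ratio = sta_lta(trace)
onset = int(np.argmax(ratio > 5.0))             # first trigger index
```

In a multi-sensor system the per-channel triggers feed event association and, as the abstract describes, the extracted AE parameters become features for machine-learning-based performance prediction.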
-
-
-
Standardized Direct Data Transfers Between Applications Accelerate Workflows and Improve Operational Adoption of Innovative Technologies
SummaryGeoscience and engineering workflows are applied to increasingly complex reservoirs. Collaborative teams require the use of different vendor solutions to apply the best technologies to solve problems and deliver the most accurate models and predictions. A new direct data transfer protocol based on existing mature industry standards simplifies and speeds up the data connection between applications. It also ensures better data integrity and complete flexibility in assembling and executing workflows. Based on the mature WebSockets protocol, this new standard has the necessary sub-protocols to reliably handle complex data relationships, very large data arrays as well as unique item identifiers. In addition to accelerating workflows and making them more reliable, this new protocol simplifies the addition of new innovative technologies alongside proven ones, for the best outcome within the tight resource and time limits imposed by the upstream industry.
-
-
-
A Digital Methodology for Large Scale Integrated Optimization of Production Planning and Operations
By M. ScottSummaryProducing assets and their gathering networks are multi-faceted, with multiple diverse data sets and modelling and analysis tools. Consolidating these into a single automated, operational environment can greatly streamline surveillance and management of these assets. However, this type of modelling does not always properly represent the asset as a whole, as individual elements have impacts on preceding and subsequent areas. The purpose of this presentation is to show a simple and effective methodology to generate this integrated model, to allow optimization of production planning and operation processes. By leveraging modern data integration, modelling and orchestration tools, up to date insight into all aspects of the operation can be provided across the business, enhancing planning, forecasting and decision making capabilities.
-
-
-
Accelerating Seismic Data Access, QC and Vendor-Independent Automated Workflows with Cloud-Based Seismic Datastore and API
Authors C. Caso, P. Aursand and T. StraySummarySeismic data discovery, quality assessment, and retrieval are often time-consuming and iterative processes between geoscientists and data managers.
In this paper, we describe the implementation of a seismic datastore in the Aker BP cloud environment, as part of the company’s digital program Eureka. The objectives of this implementation have been to get fast, tool-independent access to the Aker BP seismic data through an API allowing queries of the whole survey but also of subsets of the seismic data; overview of the actual data within each survey; preview of a seismic section (inline, crossline or arbitrary line); comparison of the same section from two different seismic cubes; and enabling 3rd party applications to run as services on top of the seismic data. The implementation was carried out in a five-month project, involving software developers and the support of data managers, in an Agile setup with demos every two weeks and continuous feedback from the end-users. The solution has been delivered as a cloud-based API architecture to ingest, store, query, visualize and consume seismic data.
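The query pattern described (whole survey, inline/crossline sections, arbitrary lines) can be sketched with an in-memory stand-in for the datastore; the class below and its nearest-trace sampling are illustrative, not the Aker BP API:

```python
import numpy as np

class SeismicCube:
    """Toy in-memory stand-in for a datastore serving cube subsets."""
    def __init__(self, data):
        self.data = data                      # axes: (inline, xline, sample)

    def inline(self, i):
        return self.data[i, :, :]             # one inline section

    def crossline(self, x):
        return self.data[:, x, :]             # one crossline section

    def arbitrary_line(self, p0, p1, n):
        """Nearest-trace section along the line p0 -> p1 in (inline, xline)."""
        il = np.rint(np.linspace(p0[0], p1[0], n)).astype(int)
        xl = np.rint(np.linspace(p0[1], p1[1], n)).astype(int)
        return self.data[il, xl, :]

cube = SeismicCube(np.arange(4 * 5 * 6).reshape(4, 5, 6))
section = cube.arbitrary_line((0, 0), (3, 4), 4)
```

In the real service these slices would be resolved server-side against chunked cloud storage, so a client requesting one section never downloads the whole cube.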
-
-
-
Facies Classification: Combining Domain Knowledge with Machine Learning Solutions
SummaryAutomated facies identification workflows which use Machine Learning (ML) are publicly available but perform sub-optimally (accuracy in the order of 60%) due to a lack of integration with geological domain knowledge. Existing tools consider well log values mostly on a depth-by-depth basis, using only very basic feature engineering. Our solution aims to integrate ML with well-established geoscience principles (also referred to as geo-rules) such as sequence stratigraphy, proximal-distal trends, and log-trend patterns. Geological knowledge is incorporated into ML to improve the quality and robustness of facies prediction and is captured as additional geologically-inspired features added to the dataset. These features include the mean value and other derived properties of intervals, density-neutron separation, segmentation and wavelet transform. All ML algorithms tested with this augmented set of features show significant improvement in performance metrics as compared to solutions with basic logs only.
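The geologically-inspired features mentioned (interval means, density-neutron separation, log-trend patterns) can be sketched as extra columns derived from standard logs. The matrix and fluid densities and the window length below are assumed values for illustration:

```python
import numpy as np

def geo_features(depth, gr, rhob, nphi, window=5):
    """Augment depth-by-depth logs with geologically-inspired features:
    a rolling interval mean, density-neutron separation, and a trend slope."""
    k = window // 2
    pad = np.pad(gr, k, mode="edge")
    interval_mean = np.convolve(pad, np.ones(window) / window, mode="valid")
    # Density porosity from bulk density (assumed matrix 2.65, fluid 1.0 g/cc).
    dphi = (2.65 - rhob) / (2.65 - 1.0)
    dn_separation = nphi - dphi
    trend = np.gradient(gr, depth)            # log-trend (slope) proxy
    return np.column_stack([gr, interval_mean, dn_separation, trend])

depth = np.arange(2000.0, 2010.0, 0.5)
gr = np.linspace(40.0, 90.0, depth.size)      # synthetic linear GR trend
rhob = np.full(depth.size, 2.40)
nphi = np.full(depth.size, 0.25)
feats = geo_features(depth, gr, rhob, nphi)
```

Interval-based features like these let a depth-by-depth classifier "see" the stratigraphic context that a geologist uses implicitly, which is the integration the abstract argues for.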
-
-
-
Deep Learning Applications to Unstructured Geological Data: From Rock Images Characterization to Scientific Literature Mining
Authors A. Bouziat, S. Desroziers, M. Feraille, J. Lecomte, R. Divies and F. CokelaerSummaryIn the last decade, Deep Learning applied to unstructured data, such as images and texts, has seen significant technical progress and democratization. However, successfully adapting these technologies to geological data and activities is far from straightforward. As a contribution to the digital transformation of the subsurface industries, in this study we present three promising Deep Learning applications to unstructured geological data. The first use case is automated classification of macroscopic rock sample pictures with convolutional neural networks. The second use case is accelerated delineation of foraminifera micro-fossils on thin section scans using segmentation algorithms. The third use case is assisted mining of scientific texts to characterize hydrocarbon source rock formations, based on an entity extraction engine. From these use cases, we highlight the main challenges to expect in similar projects and share some good practices. Notably, we describe innovative methods to embed prior geological knowledge in the algorithms, to handle situations where little training data is available, and to distribute the corresponding codes to geologists in user-friendly ways.
-
-
-
Analysis of Seismic Attributes to Assist in the Classification of Salt by Multi-channel Convolutional Neural Networks
Authors F. Jiang, P. Norlund and Z. WeiSummaryRecently, many deep-learning approaches have been applied to geophysical problems, such as seismic processing and interpretation, to aid in the exploration of hydrocarbon reservoirs. Convolutional neural networks (CNNs) are a popular new method for identifying salt bodies in seismic data through image segmentation and feature extraction. In this study, four ensemble classifiers were trained to analyze the importance of various seismic attributes with respect to the predictability of a salt body. By choosing the seismic attributes with the highest importance as input data to a multi-channel CNN architecture, we successfully improved the accuracy of salt prediction. Both binary and multi-label salt classifications are shown, as well as comparisons of salt classification probability maps generated from models trained on seismic-only data versus models trained on seismic-plus-attributes data. The results demonstrated that the seismic-plus-attributes models significantly improved the continuity of salt boundaries and reduced unwanted artifacts, whilst also converging faster during training.
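Attribute importance of the kind described can be estimated by permutation: shuffle one attribute at a time and measure the drop in a classifier's score. The toy sketch below uses a linear scorer as a stand-in for the ensemble classifiers, and the attribute names and synthetic labels are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy attribute set: only "envelope" truly predicts the salt label here.
n = 500
envelope = rng.normal(size=n)
coherence = rng.normal(size=n)
frequency = rng.normal(size=n)
X = np.column_stack([envelope, coherence, frequency])
y = (envelope > 0).astype(float)              # synthetic salt mask

# Simple linear scorer standing in for the trained ensemble classifiers.
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)

def accuracy(features):
    pred = coef[0] + features @ coef[1:]
    return float(np.mean((pred > 0.5) == y))

base = accuracy(X)
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])      # break one attribute at a time
    importance.append(base - accuracy(Xp))

best_attr = int(np.argmax(importance))        # index 0 -> envelope
```

The attributes whose permutation costs the most accuracy are the ones worth feeding into the multi-channel CNN.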
-
-
-
Preliminary Assessment of Structural Controls in the Sokoto Basin, Northwestern Nigeria Using Non-Invasive Techniques
SummaryA preliminary assessment of the structural controls of the Nigerian sector of the Iullemmeden Basin, northwestern Nigeria has been carried out using non-invasive techniques. The Sokoto Basin deepens towards the Niger Republic. Depth-to-basement interpretations from aeromagnetic data show eight major depressions in the basin, comprising the Yerimawa-SabonBirni-Isah trough, Wurno-Rabah trench, Sokoto-Bodinga-Tambulwa trench, Tureta-Bakura ditches, Lema-Tambo sinks, Koko-Giro sinks, Gada holes and Kiwon Allah-Sokwoi-Illela pits. Structural interpretations show that three major fault lines trending NW-SE modified the sagged basement over geologic time. Integrating the depth-to-basement and structural interpretations shows that the Sokoto-Bodinga-Tambulwa trench, Kiwon Allah-Sokwoi-Illela pits and Lema-Tambo sinks are possibly connected by parallel faults trending NW-SE. Evidence from field studies of surface tectonic structures, as well as the presence of a deep-seated fault below the Wurno hill, leads us to the conclusion that the Wurno hill is possibly tectonically controlled. Furthermore, the presence of a reverse fault and a rollover anticline along the Goronyo-Taloka road indicates a possible convergent plate boundary and regional active faulting, respectively. This may play a significant role in the maturity of the organic-rich sediments of the Taloka and Dukamaje Formations, the flow of fluids, and mineralization in the basin.
-
-
-
Study on Geological Feature Extraction from FMI Logging Data by Using Deep Learning Neural Network
SummaryThis paper first studies the structure and algorithmic principles of deep neural networks, whose training is divided into the two stages of “pre-training” and “fine-tuning”; this scheme helps avoid falling into local minima and improves learning speed. As an efficient feature extraction method, deep learning can capture the most essential description of the data.
-
-
-
Modelling Hydraulically Fractured Tight Gas Reservoirs with an Artificial Intelligence (AI)-Based Simulator, Deep Net Simulator (DNS)
Authors S. Ghassemzadeh, M. Gonzalez Perdomo, E. Abbasnejad and M. HaghighiSummaryHydraulic fracturing in tight gas reservoirs increases the connectivity of the well to further areal regions, thus boosting production as well as the net present value of the asset. This type of reservoir typically exhibits considerable uncertainty in rock and fracture properties, which, coupled with significant heterogeneity, makes history matching, uncertainty quantification, and optimisation time-consuming tasks. Therefore, engineers are always looking for ways to reduce simulation time. Artificial intelligence enables machines to learn from data, allowing time-consuming fluid flow equations to be formulated explicitly while keeping the accuracy of the implicit approach. This is achieved through the use of deep learning. In this research, a fully standalone simulator is developed for a range of hydraulically fractured tight gas reservoirs in a 2-dimensional space. Considering the low values of the metrics (RMSE < 65 psi, MAPE < 0.99%, and R2 ≈ 1) for the training, validation and test sets, the results confirmed that the developed model, Deep Net Simulator (DNS), is accurate and reliable when compared with numerical models. Furthermore, DNS shows remarkable reliability when comparing the results of 140 unseen complete reservoir models over a 4-year period against a numerical simulator. The average value of MAPE for all 140 cases is 10.55%.
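The quoted metrics (RMSE, MAPE, R²) are straightforward to compute when comparing a surrogate against a numerical simulator; a small NumPy sketch with invented pressure values:

```python
import numpy as np

def rmse(y, yhat):
    """Root mean squared error, in the units of y (here psi)."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((y - yhat) / y)) * 100.0)

def r2(y, yhat):
    """Coefficient of determination."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy pressures (psi): numerical simulator vs surrogate predictions.
y = np.array([3000.0, 2900.0, 2750.0, 2600.0])
yhat = np.array([3010.0, 2895.0, 2760.0, 2590.0])
e_rmse, e_mape, e_r2 = rmse(y, yhat), mape(y, yhat), r2(y, yhat)
```

Note that MAPE is scale-relative while RMSE is absolute, which is why the abstract can report a sub-1% MAPE alongside an RMSE bound expressed in psi.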
-
-
-
Optimizing the Dynamic Behavior of Wells and Facilities with Machine Learning and Agent Negotiation Techniques
Authors M. Piantanida, A. Amendola, G. Esposito, P. Iorio, S. Carminati, D. Vanzan, F. Castiglione, D. Vergni, P. Stolfi and C.N. CoriaSummaryThe paper proposes an approach to deal with the day-to-day dynamic behaviour of Oil & Gas assets, providing support for optimized decisions on wells and facilities. The approach is based on:
• A set of software agents, trained with a machine learning approach to understand the health status of the components of the reservoir/well/plant system and capable of proposing optimization actions for the corresponding subsystem;
• An inter-agent negotiation approach, capable of evaluating the optimization actions of the single agents in the wider picture of the overall optimization of the producing asset.
The paper will describe how this approach has been implemented, as well as an example application.
-
-
-
Deep Learning for Seismic Data Reconstruction: Opportunities and Challenges
Authors O. Ovcharenko and S. HouSummaryNatural and instrumental conditions during a field seismic survey lead to noise and irregularities in the acquired seismic data. In this work, we explore challenges and opportunities related to denoising and interpolation of seismic data with deep convolutional neural networks. In particular, we apply three network configurations to field data and match them with suitable applications. We show that U-Net is beneficial for denoising applications while generative adversarial networks (GANs) are superior in interpolation tasks. The enhanced interpolation capability of GANs, however, comes at the cost of increased uncertainty in the results, and we raise awareness of this observation. Finally, we consider the pitfalls of conventional metrics and outline the requirements for data-driven approaches to be suitable in production applications.
-
-
-
Novel Digital Rock Simulation Approach in Characterizing Gas Trapping by Modified Morphological Workflow
Authors F. Zekiri, J. Steckhan, S. Linden, P. Arnold and H. OttSummaryThe quantification of trapped non-wetting phase saturation and distribution in petroleum reservoirs is essential to understanding hydrocarbon recovery efficiency. Laboratory experiments on core samples are regarded as industry best practice for estimating hydrocarbon trapping. To implement entrapment characteristics in reservoir modeling, empirical correlations between initial saturation and the respective residual non-wetting phase saturation (trapping curves) are commonly used.
To overcome long lead times for setting up reservoir models due to time-consuming laboratory workflows, pore-scale simulations of fluid flow on digital representations of the pore space - so-called digital twins - imaged by micro computed tomography have been considered a viable alternative for estimating hydrocarbon entrapment. In this study, we compare simulation results for water/gas capillary-dominated imbibition in a sandstone reservoir. So far, digital rock simulations could not predict representative trapped phase saturation levels with the classical morphological approach. This was the motivation to adapt the simulation concepts by including sub-resolution wetting-phase layers in the pore structure. As a result, it was possible to simulate a representative spatial distribution of the trapped non-wetting phase in the pore space and to estimate realistic residual saturations. For verification purposes, the simulated results have been compared to the trapping model by Land (1968).
-
-
-
Using Blockchain and Smart Contracts for Marine Seismic Data Integrity and Contract Control
By L. FloodSummaryThere are opportunities to use blockchain combined with smart contracts to enhance data integrity and contract control within the marine seismic industry for individual contracts. Further, an implementation of blockchain and smart contracts at the industry level would redefine industry standards and create a payment platform for the industry and its associated subcontractors, making contracts easier to administer and control.
-
-
-
How to Leverage Advanced TensorFlow and Cloud Computing for Efficient Deep Learning on Large Seismic Datasets
Authors C.E. Birnie and H. JarrayaSummarySeveral seismic applications benefit from using all available receivers and a long time-window, allowing greater representation of signal and noise. Neural networks have the ability to utilise spatio-temporal data and extract high level patterns thanks to their non-linear function compositions. However, the training of such networks is memory intensive, often resulting in the downsizing of data introducing constraints on the number of traces and/or the length of the recording. Through the example of developing a deep learning model for passive seismic event detection on a large array of ∼3500 sensors, we describe an end-to-end workflow from synthetic labelled data creation to distributed model training to model deployment. We demonstrate how to overcome the memory challenges of large input data by utilizing TensorFlow’s data generators for on-the-fly generation and loading of large seismic recordings during the training procedure. Furthermore, we illustrate how training time can be drastically reduced by distributing training across multiple machines with GPU capability. Kubernetes and cloud resources are leveraged for ease of orchestration of compute resources and scaling up horizontally. Finally, we highlight that whilst training is computationally expensive, the trained model can be deployed on a standard, non-GPU machine for real-time detection of passive seismic events.
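The on-the-fly windowing that TensorFlow's `tf.data.Dataset.from_generator` wraps can be sketched as a plain Python generator: random spatio-temporal windows are cut from the recording per batch, so the full array never has to fit in GPU memory. The array sizes below are illustrative stand-ins for the ∼3500-sensor recording:

```python
import numpy as np

def window_generator(n_traces=3500, n_samples=1000, win=64, batch=8, seed=0):
    """Yield batches of random (win x win) trace/time windows, cut on the
    fly from a synthetic recording instead of pre-materialized tensors."""
    rng = np.random.default_rng(seed)
    # Stand-in for the (traces x time samples) field recording.
    record = rng.normal(size=(n_traces, n_samples)).astype(np.float32)
    while True:
        t0 = rng.integers(0, n_traces - win, size=batch)
        s0 = rng.integers(0, n_samples - win, size=batch)
        yield np.stack([record[t:t + win, s:s + win]
                        for t, s in zip(t0, s0)])

gen = window_generator()
batch = next(gen)
```

In the TensorFlow workflow this generator would be handed to `tf.data.Dataset.from_generator` (with labels added per window), letting the input pipeline stream windows to the GPUs during distributed training.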
-
-
-
Digital Analysis of Whole Core Photos
Authors V. Abashkin, I. Seleznev, A. Chertova, A. Samokhvalov, S. Istomin and D. RomanovSummaryIn this work, we present a technique for automatic processing of digital images of whole-size slabbed core. The technique helps to identify areas of the photo with similar properties, which can correspond to different rock types, facies, etc. These distinguishing features can be used to predict petrophysical rock properties using available laboratory measurements. The obtained data can be used in complex log interpretation, in the construction and further validation of the reservoir hydrodynamic model, and in the refinement of well geomechanical models. Texture characteristics of whole core surfaces obtained from images of average and high resolution can considerably increase the accuracy of rock classification and support a more reliable distribution of rock properties at larger scales. Image color clustering is carried out using a Gaussian approximation for the image pixels' density in the digital color-coding space. The technique complements available log and core data with specific property descriptors (porosity, permeability, natural gamma-ray emission, etc.), including rock mass color characteristics, texture, bedding plane angle and thickness, and the shape and size of clastic inclusions. The possibility of generating high-resolution curves for physical properties measured on core plugs was demonstrated.
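Colour clustering of core-photo pixels can be sketched with a simple two-class k-means on RGB values; this is a simplified stand-in for the Gaussian density model in the abstract, and the synthetic "core strip" (light sandstone with a dark shale band) is invented for illustration.

```python
import numpy as np

def kmeans_two_colors(pixels, n_iter=20):
    """Cluster RGB pixels into two colour classes; centres are initialized
    from the darkest and brightest pixels to keep the sketch deterministic."""
    bright = pixels.sum(axis=1)
    centers = pixels[[bright.argmin(), bright.argmax()]].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)             # assign to nearest centre
        for j in range(2):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Synthetic core photo strip: 300 light-sandstone and 100 dark-shale pixels.
light = np.tile([200.0, 180.0, 150.0], (300, 1))
dark = np.tile([60.0, 55.0, 50.0], (100, 1))
pixels = np.vstack([light, dark]) \
    + np.random.default_rng(1).normal(0.0, 5.0, (400, 3))
labels, centers = kmeans_two_colors(pixels)
```

A Gaussian mixture would additionally model per-class colour covariance, which matters when rock-type colours overlap; for well-separated classes the hard assignment above gives the same partition.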
-
-
-
Generating Custom Word Embeddings for Geoscientific Corpora
Authors C.E. Birnie and M. RavasiSummaryIn the field of natural language processing, word embeddings are a set of techniques that transform words from an input corpus into a low-dimensional space with the aim of capturing the relationships between words. It is well known that such relations are highly dependent on the context of the input corpus, which in science varies greatly from field to field. In this work we compare the performance of word embeddings pre-trained on generic text versus custom word embeddings trained on an extensive corpus of geoscientific papers. Numerous examples highlight the difference in meaning and closeness of words between geoscientific and generic contexts. A prime example is the term ghost, which has a specific definition in geophysics, different from its common usage in the English language. Moreover, domain-specific analogies, such as ‘Compressional is to P-wave what shear is to… S-wave’, are investigated to understand the extent to which the different word embeddings capture the relationship between terms. Finally, we anticipate some use cases of word embeddings aimed at extracting key information from documents and providing better indexing.
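The analogy test quoted ('Compressional is to P-wave what shear is to… S-wave') reduces to vector arithmetic plus a cosine-similarity ranking over the vocabulary. The toy vectors below are hypothetical hand-made stand-ins, not trained embeddings:

```python
import numpy as np

# Hypothetical toy embeddings; real custom vectors would come from
# training (e.g. word2vec) on the geoscience corpus.
emb = {
    "compressional": np.array([1.0, 0.1, 0.0]),
    "p-wave":        np.array([1.0, 0.1, 1.0]),
    "shear":         np.array([0.1, 1.0, 0.0]),
    "s-wave":        np.array([0.1, 1.0, 1.0]),
    "porosity":      np.array([0.5, 0.5, -1.0]),
}

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' by vector offset + cosine ranking."""
    target = emb[b] - emb[a] + emb[c]
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    # Exclude the three query words, rank the rest by cosine similarity.
    candidates = {w: v for w, v in emb.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

answer = analogy("compressional", "p-wave", "shear")
```

Whether generic or custom embeddings pass such tests is exactly the comparison the abstract describes: a corpus where "ghost" means a deghosting target will place it far from where a generic English corpus does.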
-
-
-
Architecting a Digitalization Platform to Deliver Transformative Business Results
Authors J. McConnell and P. QuinnSummaryDesigning and implementing a data platform to support both traditional applications and modern analytical techniques is difficult in any industry - and building an oil company solution for subsurface information brings additional requirements and potential pitfalls. Defining your digitalization strategy may provide direction, but that alone is not enough. How do you ensure the strategy remains valid, and guarantee its adoption and overall success?
Feedback, agility and de-coupling are required in order to build in fundamental flexibility and longer-term openness to change and innovation. To balance this against rock-solid operational concerns, it must be underpinned by robust data and architectural principles and governance at both high and low levels. Without this cooperative and coordinated effort, investments in digitalization programs are unlikely to see their expected value fully realised.
-
-
-
Tools for Automated Rock Description
Authors E. Baraboshkin, D. Orlov and D. KoroteevSummaryAlgorithms for image classification have advanced considerably in recent years. They work well when the classes are clearly distinguishable from one another. Geological classes are not, because they can be observed at different scales and under different classification paradigms.
To tackle this problem, we compare different feature extraction techniques and classification (semi- and supervised) algorithms. We present methods which help to increase the accuracy of rock type classification.
-
-
-
A Practical Workflow Using Seismic Attributes to Enhance Sub Seismic Geological Structures and Natural Fractures Correlation
Authors A. Bacetti and M.Z. DoghmaneSummarySince the 1970s, seismic attributes have been widely used in seismic interpretation and reservoir characterization workflows. This practice has become widespread thanks to the rapid development of computer technology (both hardware and software) and the emergence of 3D seismic surveys. In this paper, we describe a workflow that uses seismic attributes to visualize hidden structures that cannot be seen in the original 3D seismic. We also studied the existence of a relationship between seismic attributes and natural fracture density. The results are interesting from geological and reservoir modeling perspectives, as the workflow helped to reveal small hidden faults below seismic resolution, and some attributes correlated well with natural fracture density. This workflow is a useful exploration cost optimization strategy for national oil and gas companies.
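A representative attribute computation is instantaneous amplitude (reflection strength) via the analytic signal, one of the classic attributes used to highlight structure below seismic resolution. The FFT-based sketch below is a generic illustration, not the paper's specific attribute set; the Ricker-like synthetic trace is invented:

```python
import numpy as np

def instantaneous_amplitude(trace):
    """Envelope (reflection strength) via the FFT-based analytic signal."""
    n = len(trace)
    spec = np.fft.fft(trace)
    # Frequency-domain weights that zero negative frequencies.
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spec * h)
    return np.abs(analytic)

# Synthetic trace: a 30 Hz wavelet under a Gaussian envelope at t = 0.5 s.
t = np.linspace(0.0, 1.0, 500, endpoint=False)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
env = instantaneous_amplitude(trace)
```

The envelope peaks at the reflector regardless of the wavelet's phase, which is why amplitude-type attributes can reveal features that are hard to see on the raw amplitudes.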
-
-
-
Domain Expertise, Deep Learning and Cloud: How to Build Powerful Workflows for Exploration
By L. VynnytskaSummaryDigital transformation in general emphasizes technology and migration to the cloud. However, once the most important technical questions are resolved, the focus should shift to the users, since their adoption and incorporation of new tools into everyday work is what will measure the success of digital transformation. One of the most time-consuming and important tasks in exploration is the interpretation of seismic data, so E&P companies and software providers have put much effort into solving this problem. Deep learning has received a lot of attention due to its ability to efficiently recognize patterns in large and complex data. However, to create value for oil companies, deep learning solutions must become an integral part of workflows. Interactive training combines the domain expertise of geoscientists with the algorithms themselves to ensure adoption of deep learning technology, high accuracy, and confidence in the results. The cloud architecture should be flexible and extensible, and efficiency and flexibility must be supported by a distributed compute framework that acts on workflows instead of data.
-
-
-
Towards an AI-Based Advisor for Capturing Interpretive Trails and Supporting Geological Exploration Activities
Authors R. Brandão, L. Azevedo, C. Paz, M. Moreno and R. CerqueiraSummaryMany activities in the Oil & Gas (O&G) industry rely on expert interpretation over unstructured data and interpretation of elaborate geological concepts. Keeping track of the consumption and production of conceptual knowledge and data is crucial to structure such investigative processes. Nevertheless, capturing and structuring activities of this nature is a complex requirement if an advisor system is to be designed and implemented to support decision making in such domain. We propose a novel representation to keep track and model experts’ interaction with different systems, along with multimodal data and conceptual knowledge they consume and produce during interpretive activities. The proposed representational approach aims at supporting the design and implementation of intelligent advisor systems for knowledge-intensive processes, such as the ones observed in the multidisciplinary domain of O&G.
-
-
-
Vendor-Independent Workflow Architecture to Integrate Domain Applications and Accelerate R&D to Production
Authors P.V. Nunes, V. Furuholt, N. Burns and P. AursandSummaryReal data liberation can only be achieved by substantial industrial cooperation to establish robust API standards for data transfer in to and out of the data layer. As the E&P industry moves to enable this transformation, more players are entering the software landscape providing modern and innovative solutions. A vendor-independent workflow architecture solution that allows users to connect services from different providers and internal products as part of their routine workflows has been established to ensure flexibility and to drive automation.
The solution is built predominantly in Python, with a series of microservices containerized with Docker and running in Kubernetes clusters on Google Cloud Platform (GCP), all of which is managed as infrastructure as code with Terraform. Its key components are the user interface, UI backend and logic, command queue, data abstraction service, and system monitoring. The ambition is to open source some of these components and to develop the same functionality on other cloud providers, so that the proposed Workflow Framework could become an industry standard, attract additional services, and expand the ecosystem.
-
-
-
Comparison of Two Different Methods for Estimating Oil Recovery from In-Situ Combustion in Heavy Oil Fields
By A. VermaSummaryThis is a comparison study of heavy oil reservoir fields to estimate production using the thermal recovery technique called in-situ combustion, in which air is injected along with water and combusted to increase the mobility of heavy oils, thus enhancing production in heavy oil fields. It gives insight into the contrast between theoretical estimations and actual hydrocarbon production at field scale, and it supports better, more efficient optimization of production from the field. An analysis was carried out by comparing two methods, the Gates and Ramey method and the Nelson and McNeil method, and the correlation was shown in terms of total oil produced and air/oil ratio based on field data.
-
-
-
Digitalization for Data Liberation
Authors Z. Manan, A. Hazet, T. Bramono and D. GalihSummaryThe Government of Indonesia governs the disclosure of upstream oil and gas data through Regulation No. 29 of 2017. Contractors have to apply for permits from the government in order to disclose their data to investors. Although the Government intended to boost investment by issuing the regulation, obtaining the permit can take some time.
The paper is organized as follows: after the introduction, the second section gives a brief overview of the established process for disclosing upstream oil and gas data in Indonesia for investment purposes. The third part of the paper proposes a new policy for the Government of Indonesia to improve the efficiency of access to upstream oil and gas data. In other countries, such as the United States of America, Canada and Mexico, oil and gas data can generally be disclosed through transfer agreements. In the last section, we describe the advantages and disadvantages of our proposed policy compared to Indonesia's existing policy governing the disclosure of upstream oil and gas data.
-
-
-
Machine Learning on Field Data for Hydraulic Fracturing Design Optimization: Digital Database and Production Forecast Model
Authors A. Morozov, D. Popkov, V. Duplyakov, R. Mutalova, A. Osiptsov, A. Vainshtein, E. Burnaev, E. Shel and G. PaderinSummaryThe increasing number of hydraulic fracturing (HF) jobs in the recent two decades has produced a significant amount of measured data available for the development of predictive models via machine learning (ML). In multistage fractured completions, post-fracturing production reveals evidence that different stages produce very non-uniformly, and up to 30% may not be producing at all due to a combination of geomechanics and fracturing design factors. Therefore, there is significant room for fracture design optimization. We propose a data-driven model for fracturing design optimization, where the workflow is essentially split into two stages: prediction of 12-month cumulative oil production and maximization of this target by optimizing HF design parameters. In this work, the first stage is considered, and the ML model's prediction score for the target is 81.5% on the test set.
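The two-stage workflow described above (a forward production model followed by design optimization over its inputs) can be sketched in Python. The feature set, value ranges and gradient-boosting choice below are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical training table: [proppant mass, fluid volume, number of stages];
# target = 12-month cumulative oil (a made-up response for illustration)
X = rng.uniform([50, 300, 3], [500, 3000, 30], size=(400, 3))
y = 0.8 * X[:, 0] + 0.1 * X[:, 1] + 40 * np.log(X[:, 2]) + rng.normal(0, 20, 400)

# Stage 1: forward model -- predict cumulative production from the fracturing design
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Stage 2: optimize the design by searching the predictor over a candidate grid
grid = np.array(np.meshgrid(
    np.linspace(50, 500, 10),    # proppant mass, t
    np.linspace(300, 3000, 10),  # fluid volume, m3
    np.arange(3, 31, 3),         # number of stages
)).reshape(3, -1).T
best = grid[np.argmax(model.predict(grid))]
print(best)
```

In practice the second stage would also respect cost and operational constraints rather than a plain grid search.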
-
-
-
Application of Artificial Intelligence Algorithms for Tight Oil Field Development
Authors A. Povalyaev, A. Fedorov, B. Suleymanov, I. Dilmukhametov, D. Salnikova and A. SergeychevSummaryNowadays economical exploitation of unconventional low-permeable reservoirs is a grand challenge for oil and gas producers. In particular, the continued development of mature fields is complicated by the fact that prospective drilling is concentrated in the zones with high geological uncertainty and unclear production potential. To provide an effective and sustainable solution for development planning, a new methodology that would enable high-quality forecasting of production profiles for various development strategies is required.
In this paper, we present a novel technique for quick-look estimation of the efficiency of different well placement schemes, based on LWD data from the new wells and the production history of existing ones. The data analysis is performed via the incorporation of Artificial Intelligence tools. The feasibility of this method was verified in several pilot projects within the frame of the ongoing drilling campaign.
As a result of this research, a global optimization of the Prioskoe field development was proposed, and the same approach was adopted for other tight oil assets of the company, namely the Achimov and Tumen deposits in the West Siberian region.
-
-
-
Unlocking New Exploration Opportunities with Digital Transformation
Authors F.T. Amir, A.D. Wibisono and S.E. SaputraSummaryThis paper describes the journey of redefining our business process as a government institution through digital transformation, creating data-driven decision-making powered by technology, along with some challenges and opportunities in its development and implementation. The transformation consists of 4 stages: Data Collection, Digitization, Digitalization and Digital Transformation. Since the project started in 2016, every aspect has been carried out collaboratively, in parallel and rapidly, reinforced by the Government's launch of a new regulation that emphasizes data openness. The results are efficiency gains in the business process and a notable increase in exploration investment in Indonesia.
-
-
-
ANNs Trained on Synthetic and Lab Data for Modeling Steady-State Multiphase Pipe Flow
Authors E. Baryshnikov, E. Kanin, A. Vainshtein, A. Osiptsov and E. BurnaevSummaryThe present work considers the development of a machine learning model trained on synthetic and lab data for steady-state multiphase pipe flow. We propose a new method for calculating flow characteristics such as liquid holdup, flow regime, and pressure gradient in a pipe segment based on ANNs and a transfer learning technique. In addition, the created tool is implemented within a marching algorithm for calculating flow parameters along the whole pipe. The segment module consists of three sub-models, namely, for calculating liquid holdup, defining the flow regime, and estimating the pressure gradient. To create the sub-models, we use a transfer learning methodology: in the first stage, the ANNs are trained on synthetic data, which we generate using the OLGAS mechanistic model; in the second stage, we additionally train meta-models on real data, which in our case are lab measurements. As a result, we create a new multiphase flow correlation, which includes the basics of the physics-based OLGAS model and is tuned to real data, which in the general case can be field measurements. At the final stage, we apply the marching algorithm with the suggested segment model to a field dataset for testing purposes.
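The pretrain-then-fine-tune idea can be sketched with scikit-learn's `warm_start` mechanism standing in for the paper's ANN transfer-learning setup. The feature names, synthetic responses and network size below are illustrative assumptions, not the authors' OLGAS-based data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Stage 1: pre-train on abundant "synthetic" data (a stand-in for a
# mechanistic-model dataset; features could be superficial velocities, etc.)
X_syn = rng.uniform(size=(2000, 4))
y_syn = X_syn @ np.array([2.0, -1.0, 0.5, 0.3])  # pressure-gradient proxy
net = MLPRegressor(hidden_layer_sizes=(32, 32), warm_start=True,
                   max_iter=300, random_state=0).fit(X_syn, y_syn)

# Stage 2: fine-tune the same weights on a small "lab" dataset whose
# response is slightly shifted from the synthetic one
X_lab = rng.uniform(size=(100, 4))
y_lab = X_lab @ np.array([2.2, -0.9, 0.5, 0.3]) + 0.1
net.max_iter = 50
net.fit(X_lab, y_lab)  # warm_start=True continues from the pre-trained weights
print(round(net.score(X_lab, y_lab), 3))
```

A real implementation would freeze or reuse selected layers rather than simply continuing training, but the two-stage structure is the same.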
-
-
-
Benefits of Digital for Better Exploration Planning and Execution
Authors C. Le Turdu, A. Pugh, C. Fraser, S. Nollet, C. Castagnac, F. Stabell, D. Palmowski, P.A. Hole, E. Steele and K. TushinghamSummaryDigital technologies and innovations are disrupting our industry by bringing new, cloud-native solutions to enable better planning and execution of exploration workflows. Being able to standardize a portfolio prioritization process globally, and instantly rank and update with key data or market changes, is key to increase planning efficiency and transparency. The examples provided in this paper show that those new digital solutions break down silos and enable closer collaboration between technical teams and decision makers.
The standardization of the portfolio prioritization process is expected to play a key role in dramatically reducing early and costly exploration spend by focusing on the opportunities that matter. In addition, this standardization associated with new ways of working is expected to minimize human bias and reduce competition amongst exploration teams. This should have a clear impact on reducing the gap between pre- and post-drill resource estimations.
Finally, capturing and sharing experience and knowledge is key for a sustainable future, and we hope that these new solutions, powered by the cloud, will help in attracting and retaining key talents to the oil and gas industry.
-
-
-
Expert-guided Machine-learning for Well Location Optimization under Subsurface Uncertainty
Authors R. Schulze-Riegert, P. Lang, W. Pongtepupathum, C. Drew, S. Topdemir, S. Pattie, H. Nasiri and T.M. HegreSummaryThis work investigates the application of an expert-guided machine learning technique for the identification of connected, highly saturated oil volumes for optimal well placement. The technique is designed to work on property maps of a single model as well as on ensembles of reservoir models for robust field development optimization under subsurface uncertainty.
The methodology is embedded in a structured workflow design for improving a baseline well location design of the Olympus reservoir model ensemble, a public benchmark project for field development optimization under uncertainty. This work suggests an iterative improvement of well location designs using probabilistic well ranking to identify low-performing wells, probability maps to understand reservoir performance, and analytics-based optimization steps targeting large connected, highly saturated oil volumes.
The methodology is described, and application results are presented for a full optimization loop. The structured approach highlights the value of novel learning techniques to provide an efficient and manageable solution for optimizing a well location design under subsurface uncertainty.
-
-
-
Managing Data Lineage of O&G Machine Learning Models: The Sweet Spot for Shale Use Case
SummaryMachine Learning has increased its role in several industries, becoming an essential tool and a competitive advantage. However, questions around training data lineage, or provenance (e.g., “where did the data used to train this model come from?”), the introduction of several new data protection laws, and the need to meet data governance requirements have hindered the adoption of machine learning models in the real world.
In this paper, we discuss how data lineage can be leveraged to benefit the Machine Learning (ML) lifecycle to build ML models to discover sweet-spots for shale oil and gas production, a major application for the Oil and Gas (O&G) Industry.
-
-
-
Understanding How a Deep Neural Network Architecture Choice Can Be Related to a Seismic Processing Task
Authors J. Messud and M. ChambefortSummaryOne of the many challenges in the way of the adoption of Deep Learning (DL) for seismic processing is the understanding of deep neural network (DNN) architectures and components, with the associated underlying physics involved in a specific processing task. In this article, we study how some convolutional DNN architectures can be naturally suited to given processing tasks, helping interpretability and opening the door to meaningful QCs. For instance, we show that the Unet architecture (Ronneberger et al., 2015) can naturally learn to “separate” the kinematics of seismic events from their amplitude variations and use both types of information efficiently; this is illustrated on the CIG (common image gathers) skeletonization (or pick probability computation) and muting task. We also illustrate that the Denet architecture (Remez et al., 2017) can naturally learn to decompose a “noise” model into meaningful complementary contributions, illustrated by the example of receiver deghosting of variable-depth streamer data.
-
-
-
Productivity Prediction Integrating Data-Driven Method, Deep Neural Network and Exploratory Data Analysis in Montney Shale Plays
SummaryThis paper presents a novel approach to productivity prediction in the Montney shale formation by integrating a data-driven method, exploratory data analysis (EDA) and a deep neural network (DNN). In this study, a total of approximately 1500 wells was analyzed. First, the EDA revealed the distribution of un-refined data and null data. Based on this analysis, in order to avoid overfitting the proposed DNN model, an outlier analysis of the dataset was performed and 1143 wells were selected as the training data set. Second, in the DNN model, the applicability of categorical variables through one-hot encoding was verified. Hyperparameter optimization of the DNN model resulted in no dropout layer, 3 hidden layers, 200 neurons per layer, a ReLU activation function and a learning rate of 0.002. Comparisons of the optimized DNN model with other supervised learning models, random forest and support vector machine, showed that the DNN model achieved mean absolute percentage error values at least 3.2% lower and root mean squared error values at least 0.025 lower. The proposed DNN model was thus found to have superior predictive performance.
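A minimal sketch of the reported configuration (one-hot encoded categorical input, 3 hidden layers of 200 ReLU neurons, learning rate 0.002, no dropout), using scikit-learn in place of the authors' DNN framework; the well table and response below are synthetic stand-ins, not Montney data:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Hypothetical well table: two completion parameters plus a categorical variable
df = pd.DataFrame({
    "lateral_length": rng.uniform(1000, 3000, 300),
    "proppant": rng.uniform(500, 5000, 300),
    "zone": rng.choice(["Upper", "Middle", "Lower"], 300),
})
y = (0.01 * df["lateral_length"] + 0.005 * df["proppant"]
     + df["zone"].map({"Upper": 5, "Middle": 0, "Lower": -5})
     + rng.normal(0, 1, 300))

# One-hot encode the categorical variable; mirror the reported hyperparameters
pre = ColumnTransformer([
    ("num", StandardScaler(), ["lateral_length", "proppant"]),
    ("cat", OneHotEncoder(), ["zone"]),
])
dnn = make_pipeline(pre, MLPRegressor(hidden_layer_sizes=(200, 200, 200),
                                      activation="relu",
                                      learning_rate_init=0.002,
                                      max_iter=500, random_state=0))
dnn.fit(df, y)
print(round(dnn.score(df, y), 2))
```

scikit-learn's MLP has no dropout option, which coincidentally matches the paper's finding that dropout was not used.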
-
-
-
An Automatic Algorithm for Core-To-Log Matching
Authors A. Kuvaev, E. Stremichev and R. KhudorozhkovSummaryPredicting reservoir properties, such as porosity and permeability, is one of the major tasks of a petrophysicist. But since the most reliable information about these properties can be obtained only by studying core plugs, one must ensure that well logs and core data are properly depth-aligned before fitting any statistical model.
Unfortunately, specifics of the coring process can lead to depth divergences between them of up to several meters, and an auxiliary procedure called core-to-log matching is required to determine the actual depths of coring intervals. Currently, the matching is performed manually, since many popular commercially available software packages still provide no option to do it automatically.
In this paper, we propose a novel algorithm for core-to-log matching, which automatically shifts coring intervals so that the correlation between well and core logs is maximized. Using 12 wells as an example, we show that the algorithm not only outperforms a petrophysicist by a large margin but also works more than two orders of magnitude faster. The algorithm is implemented in the open-source PetroFlow framework available at https://github.com/gazprom-neft/petroflow.
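The core idea, shifting a coring interval to maximize correlation with the well log, can be sketched as follows. This is a simplified single-interval illustration on synthetic data, not the PetroFlow implementation:

```python
import numpy as np

def match_core_to_log(log, core, start, max_shift):
    """Slide a core-derived log within +/- max_shift samples around its
    nominal start index; return the shift maximizing Pearson correlation."""
    best_s, best_r = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        seg = log[start + s : start + s + len(core)]
        r = np.corrcoef(seg, core)[0, 1]
        if r > best_r:
            best_s, best_r = s, r
    return best_s, best_r

# Synthetic example: the "core" is a noisy copy of the log, 7 samples deeper
# than its nominal depth
rng = np.random.default_rng(3)
log = rng.normal(size=500)
core = log[207:257] + 0.05 * rng.normal(size=50)
shift, r = match_core_to_log(log, core, start=200, max_shift=15)
print(shift, round(r, 2))
```

A production version would additionally handle multiple intervals jointly, non-overlap constraints and differing sampling rates.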
-
-
-
Seismic-ZFP: Fast and Efficient Compression and Decompression of Seismic Data
By D. WadeSummaryWe present our open-source seismic data compression library, built on top of a state-of-the-art floating-point compression algorithm, and motivated by the demands of Machine Learning and cloud computing.
Fast arbitrary reading is achieved by exploiting two key observations: that regularity may be preserved by using fixed-rate compression, and that storage hardware may be efficiently utilized by packing disk blocks with data which is frequently accessed together.
We also demonstrate the quality of reproduction of the input data, with the claim that it is suitable for the purposes of Machine Learning.
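The value of fixed-rate compression for arbitrary reads is that every compressed brick occupies the same number of bytes, so a brick's file offset is a closed-form function of its index. A hedged sketch of that addressing arithmetic (the 4x4x4 brick shape, header size and layout below are assumptions for illustration, not the seismic-zfp on-disk format):

```python
# With fixed-rate compression, every 4x4x4 brick of samples compresses to the
# same number of bytes, so the file offset of any brick is a closed-form
# function of its (i, j, k) index -- no index table or scan is needed.
def brick_offset(i, j, k, bricks_per_il, bricks_per_xl, bytes_per_brick,
                 header=512):
    """Byte offset of compressed brick (i, j, k) in inline-major order."""
    index = (i * bricks_per_il + j) * bricks_per_xl + k
    return header + index * bytes_per_brick

# Example: a 4-bit-per-sample rate gives 4*4*4 samples * 0.5 byte = 32 bytes
# per brick
print(brick_offset(2, 1, 3, bricks_per_il=10, bricks_per_xl=8,
                   bytes_per_brick=32))
```

Variable-rate compression would break this closed form, which is why fixed rate preserves the "regularity" the abstract mentions.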
-
-
-
Application of Conditional Random Fields for Seismic Segmentation
Authors T. Karaderi and E. Zabihi NaeiniSummaryConventionally, computer vision tasks such as semantic segmentation are handled by probabilistic models such as Conditional Random Fields (CRFs). The wide usage of CRF models in most modern semantic segmentation pipelines is due to their ability to model structural information. Despite CRFs' successful application to natural and medical data, their application to seismic data remains limited. In this paper, it is shown how CRFs can be incorporated into deep learning pipelines to improve automatic seismic interpretation by acknowledging that we are predicting a structured output, and thus by including our prior knowledge about the spatial image architecture.
-
-
-
The Testing of Powered Drill String and its Operational Impact
Authors R. Kucs, H. Freissmuth and B. CazacuSummaryOMV is convinced that the technology of a powered drill string, with electric power and bidirectional data flow along the entire string down to the bit, will change the dynamics of drilling operations tremendously. The availability of high-resolution MWD, LWD and along-string measurement data in real time will decrease operational risk, time and cost. To prove this technology, OMV ran the PDS powered drill string of the company TDE on the surface section of one of its wells in the Vienna basin. The results are very promising. No connection failed to transmit power and data throughout drilling of the entire surface section. Along-string measurements were constantly received. It is now important to investigate the benefits for the drilling and subsurface teams in detail to get a grip on the overall benefits of this technology.
-
-
-
How Named Entity Recognition and Document Comprehension Unlock Geosciences and Engineering Semantic Search without Big Data
By J. MassotSummaryBy combining a Named Entity Recognition model trained on a tiny labeled dataset with a generalist Reading Comprehension engine, this abstract shows how to implement an efficient Semantic Search engine which can complement, and sometimes replace, a traditional keyword-based search engine. The proposed solution does not require massive amounts of annotated data for training the models involved, taking advantage of the transfer learning and model adaptation allowed by the BERT and BiDAF model architectures. Because no Big Data is needed, such a solution is very easy to implement at an early stage of any knowledge management project related to Geosciences and Petroleum Engineering.
-
-
-
Automatic Detection and Classification of Unconformities on Seismic Data Using Machine Learning
Authors L. Alberts, K. Duffaut and T. RannemSummaryDuring a seismic interpretation exercise, picking an unconformity is one of the most time-consuming and ambiguous tasks. In this paper we present a method to quickly detect areas that are highly likely to be an unconformity, using the principle that at angular unconformities the azimuth and dip of the strata change. We introduce a workflow to classify what kind of unconformity has been detected by feeding the areas with high unconformity probabilities into a convolutional neural network. This adds the benefit that one can quickly discern whether the region was associated with significant uplift or not.
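The detection principle, flagging depths where the local dip of the strata changes abruptly, can be illustrated on a toy 2D section. The gradient-ratio dip estimate and the synthetic two-dip model below are simplifications for illustration, not the authors' workflow:

```python
import numpy as np

# Toy section: upper strata dip one way, lower strata the other, with an
# angular "unconformity" at row 50
nz, nx = 100, 80
z, x = np.mgrid[0:nz, 0:nx].astype(float)
section = np.where(z < 50,
                   np.sin(0.3 * z + 0.10 * x),   # upper strata
                   np.sin(0.3 * z - 0.05 * x))   # lower strata, opposite dip

# Local dip estimate as the ratio of lateral to vertical gradient
gz, gx = np.gradient(section)
safe_gz = np.where(np.abs(gz) > 1e-6, gz, 1.0)
dip = np.where(np.abs(gz) > 1e-6, gx / safe_gz, np.nan)

# Robust per-depth dip, then flag the depth with the largest dip change
row_dip = np.nanmedian(dip, axis=1)
boundary = int(np.argmax(np.abs(np.diff(row_dip))))
print(boundary)
```

On real data the authors estimate azimuth as well as dip in 3D; this 2D sketch only shows why a dip discontinuity is detectable from local gradients.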
-
-
-
Practical Applications of Real-time Surface Monitoring in Mature Offshore Brown Field for Production Optimization
Authors M.F. Haron, K.A. Md Yunos, K.L. Tan, F.L. Bakon, C. Tang Ye Lin and S. M NazriSummaryField B is located 40 km offshore Sarawak in the north-western part of the Baram Delta Province. The field was discovered in July 1967 and started production in 1972. Digitalization efforts in the field were introduced in 2014 to enhance the surveillance scope, as the offshore environment posed numerous challenges in data gathering. Real-time monitoring was implemented and has greatly improved the response time for well issues and elevated efficiency in production improvement efforts such as gaslift optimization. Two case studies were chosen to reflect the practical applications of real-time surface monitoring. In case 1, well B-1 is producing on gaslift and was flowed back after a stimulation job with higher casing pressure. The wellhead pressure was unstable and, upon comparison with the established pressure envelope, the well was found to be injecting gaslift from multiple points, which is inefficient gaslift utilization. In case 2, real-time surface monitoring helped to indicate tubing-casing communication when the casing pressure in well B-2 was observed to be increasing in relation to the wellhead pressure, enabling the issue to be handled before it worsened. Both cases clearly demonstrate that real-time surface monitoring is both practical and advantageous in field applications.
-
-
-
DeepSeismic: a Deep Learning Library for Seismic Interpretation
Authors M. Salvaris, M. Kaznady, V. Paunic, I. Karmanov, A. Bhatia, W.H. Tok and S. ChikkerurSummaryWe introduce DeepSeismic, an open-source GitHub repository (https://github.com/microsoft/seismic-deeplearning) that provides implementations of deep learning algorithms for seismic facies interpretation. The repository provides composable machine learning pipelines that enable data scientists and geophysicists to use state-of-the-art segmentation algorithms for seismic interpretation (e.g. UNet: Ronneberger et al. (2015), SEResNet: Hu et al. (2018), HRNet: Sun et al. (2019)). We provide scripts to reproduce benchmark results from running these algorithms on various public seismic datasets (Dutch F3 and Penobscot). Finally, the repository provides documentation and quick-start Jupyter notebooks and Python scripts to enable the community to get started with seismic interpretation projects quickly. We believe the results in this paper provide a strong baseline on which others can build. To the best of our knowledge, these are state-of-the-art results on the Dutch F3 data set. We have released the code and the models in an open-source GitHub repository with a permissive MIT license.
-
-
-
Graphical Network Based Reservoir Modelling to Quickly Use Data and Physics to Explore the Subsurface
Authors J. Saetrom and A. SkorstadSummaryIn this paper, we demonstrate how we can combine reservoir physics, data and knowledge with fit-for-purpose machine learning algorithms in a graphical network model to utilise reservoir models as part of an efficient discovery process. Contrary to a traditional reservoir modelling approach, where we integrate data in a sequential manner, we train the graphical network model by utilising the information in all available data simultaneously. This helps overcome the common pitfalls in reservoir modelling, which typically limit the value of reservoir modelling efforts in asset teams today. We demonstrate the value of the solution on a study conducted on the Norwegian continental shelf. By having the ability to quickly generate reservoir models that are all plausible given the currently available data, under different prior assumptions regarding the subsurface, we increase not only our subsurface understanding, but also the confidence in our reservoir management decisions.
-
-
-
Improvement of Well Logs Autointerpretation Robustness via Application of Spatial Geological Information
Authors D. Egorov, G. Nugmanov, A. Semenikhin, A. Karavaev, A. Shchepetnov, O. Osmonalieva and B. BelozerovSummaryThese days, machine learning models are used on a regular basis for various complex oil and gas tasks. One of the most popular business problems is well log data autointerpretation. However, these models in most cases require large amounts of data, which can be obtained only from mature oil fields. This implies high variability in the data due to different tools, wells and measurement recording conditions, leading to noisy datasets with numerous inconsistencies that affect the accuracy of model predictions and result in many misclassifications and outliers. Application of spatial geological information from other wells can increase prediction robustness.
The main aim of the presented research is to develop a method for incorporating spatial geological data into a conventional machine learning net pay interval autointerpretation pipeline in order to improve the quality of model predictions. An approach for aggregating spatial geological features was proposed. These features were used to develop a spatial ranking model allowing estimation of the geological consistency of each predicted net pay interval. The capability of the proposed methodology was proven by mathematical metrics and an expert blind test. It was clearly shown that incorporation of spatial data dramatically increases the prediction quality of machine learning models and eliminates inconsistent intervals produced by noisy data.
-
-
-
A Comparative Analysis of Supervised Classification Algorithms for Lithofacies Characterization
Authors S. Sarkar and C. MajumdarSummaryFacies characterization is important to distinctly define rocks of interest and to build a better understanding of the depositional environments encountered at the wellbore. The conventional approach of facies analysis by human interpreters is a time-consuming process. In addition, lack of experience and differences in interpretational approach often lead to inconsistencies that may affect the overall geological modeling. To address these problems, we introduce three commonly used machine learning algorithms for automated lithofacies classification: Decision Trees, Random Forest and the Support Vector Machine. In this study, we apply these supervised classification algorithms to a set of wireline logs from different wells and evaluate the efficiency of each algorithm for facies classification under different constraints.
While machine learning proves to be a more time-efficient and consistent solution, the performance and accuracy vary with the algorithm and the preconditioning of the data. The Support Vector Machine outperforms the Random Forest and Decision Trees when the training dataset is limited. It is also inferred that the models are more efficient with fewer predictive classes and less complexity. Moreover, conditioning the training data to give equal weight to all predictive classes is equally important for creating a robust and unbiased model.
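A comparison of this kind can be reproduced in outline with scikit-learn. The synthetic features below merely stand in for wireline-log inputs (e.g. GR, RHOB, NPHI, DT) and the facies labels are artificial:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Synthetic stand-in for four wireline-log features and three facies classes
X, y = make_classification(n_samples=600, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The three algorithms compared in the study
models = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "SVM": SVC(kernel="rbf"),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
print(scores)
```

To probe the paper's limited-data finding, one would repeat this with progressively smaller training fractions and class-balanced resampling.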
-
-
-
Seismic Interpretation of Partially Labeled 3D Cube with Neural Networks
Authors A. Zhuchkov, D. Prokhorov, V. Gusev and L. MatyushinSummaryWe propose to solve the seismic interpretation problem by manually labelling a very small (0.5%) fraction of the inline and crossline sections of the seismic cube, followed by automatic segmentation of the rest of the cube by a neural network model.
There are several methods to improve the quality of segmentation. First, we use an additional input image, which is essentially an interpolation of the orthogonal labelled images. Second, we describe two types of augmentation which work particularly well for seismic segmentation: grid distortion and the linear-harmonic transform. This workflow results in high-quality segmentation and is a good candidate for use in real-world situations to reduce manual labelling.
-
-
-
REEF: A Framework for Information Extraction and Automated Knowledge Graph Construction
Authors J. Laigle, C. Collantes, A. Cortis, Z. Jin and A. BissetSummaryWe present Reef (Recursive Evidence Extraction Framework), a Python framework for automated information extraction from Petroleum Geoscience databases. Reef enables an end-to-end pipeline from raw documents to a Knowledge Graph. Reef makes two essential operations possible: 1/ discovering entities in documents, characterizing them and connecting them to abstract concepts present in a knowledge graph, and 2/ discovering new knowledge with distant supervision.
Knowledge graphs are key to building better search engines, Question Answering systems, recommendation engines, and feed algorithms for the cross-analysis of multiple datasets. Reef's unique approach leverages a comprehensive stack of open-source, state-of-the-art libraries for document digitalization and parsing, Natural Language Processing, Language Modeling, Logic Reasoning and Graph Analysis. These foundational components are seconded by custom applications for specific tasks.
Documents processed in Reef are digitized and sent through a pipeline where their content is filtered according to a flexible, easily extensible, Petroleum Geoscience specific object model. Information can be extracted from text, tables, figures, diagrams. Reef contains functions to infer information nature, digitize it, disambiguate and reconcile it into a graph database. Reef can be deployed in any cloud and delivers production ready knowledge graphs which can be served to third party applications.
-
-
-
Advances in the Digital Outcrop Modeling Software and Workflow: Examples and Lessons from Outcrops: Saudi Arabia
SummaryPhotorealistic 3D Digital Outcrop Models (DOMs) are increasingly encountered as a cutting-edge topic in geosciences, especially in reservoir analog characterization. A custom viewing and analysis software package has been developed specifically for work with these DOMs. Although 3D outcrop models are an exciting topic, finding software tailored to provide the range of geological analysis tools one would want to use to extract meaningful results from them is still challenging. Utilizing the excellent outcrops in Saudi Arabia, a new 3D outcrop model visualization and analysis software package has been developed, with a focus on being able to load and display large outcrop model datasets in fully georeferenced coordinates. Analysis tools have been implemented to perform interactive analysis and annotation of the outcrop, including both sedimentological and structural analysis tools. In this work, we also suggest a workflow for digital data acquisition (by high-resolution camera and GCPs), processing (by Reality Capture software) and interpretation (by ArcGIS and VRGS software). Following this workflow will provide a practical guide towards gearing data collection to meet the desired outcomes within time and effort constraints.
-
-
-
The Well Productivity Index Determination Based on Machine Learning Approaches
Authors A. Gruzdev, V. Babov, Y. Simonov, A. Kosarev, I. Simon, V. Koryabkin and A. SemenikhinSummaryIn this paper, we present an approach to building a machine learning model for predicting the well productivity index. The proposed approach is based mainly on LWD data and well log interpretation results, built on the petrophysical model of the oilfield and digital signal processing approaches. The approach was tested on historical data from the Novoportovskoye oilfield using a leave-one-out (LOOCV) cross-validation process. As a result, the median relative error over the wells is less than 20%.
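The LOOCV evaluation with a median-relative-error metric can be sketched as follows. The random-forest regressor and the synthetic per-well features below are illustrative assumptions, not the authors' model or data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(4)

# Hypothetical per-well features (e.g. averaged LWD / interpretation
# properties) and a productivity-index proxy as the target
X = rng.uniform(size=(40, 3))
pi = 10 + 20 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 1, 40)

# Leave-one-out: each well is predicted by a model trained on all the others
pred = cross_val_predict(RandomForestRegressor(random_state=0), X, pi,
                         cv=LeaveOneOut())
median_rel_err = np.median(np.abs(pred - pi) / pi)
print(round(100 * median_rel_err, 1), "%")
```

LOOCV is the natural choice when, as here, the number of wells is small enough that holding out a larger validation set would waste data.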
-
-
-
Tight Integration of Decision Forests into Geostatistical Modelling
By C. DalySummaryThis presentation develops a novel approach to geostatistical modelling based on Conditional Random Fields. Instead of starting with a full multigaussian prior, the conditional distributions are estimated at each target location where results are required. This step does not require the specification of a full Random Function. The conditional distributions are estimated with a Decision Forest approach, which is known to converge to the true conditional distribution under quite general conditions. A method for simulating realizations by correlated sampling from the distributions is shown. The advantages of this new approach are that it provides good-quality estimates of uncertainty, allows the use of many secondary variables and does not require strong models of stationarity either for the target variable or for the relationship between the target and secondary variables. The results of the method are compared with those of the classic algorithm.
-
-
-
Leveraging Scalable Cloud Computing to Facilitate Collaborative, Large-Volume Pre-Stack Data Analysis: A Frontier Basin Example
Authors D. Goulding, W. Shea, S. Cutten, S. Perrier and O. MichotSummaryAnalysis and interpretation of pre-stack seismic data has historically been limited by the compute capacity and storage required to visualize, process, and analyze large volumes of pre-stack seismic traces in a typical 3D seismic survey. Over the past decade, Sharp Reflections has developed a solution to this problem by leveraging advancements in parallelized computing and memory sharing capabilities in High Performance Computing (HPC) environments to produce a software package (Pre-Stack Pro) designed to view, process, and interpret full fidelity pre-stack seismic data volumes, in memory, in real-time.
In recent years, Sharp Reflections has adapted Pre-Stack Pro to run in public compute clouds, benefits of which have been multi-fold; access to required software and hardware resources from any location at any time; scalability options allowing customization of cluster sizes for the survey size at hand; and increased opportunity for collaboration across geophysical disciplines and geographically distant locations.
This presentation describes how OMV New Zealand and Sharp Reflections exploited the new cloud digital building block to develop a new full-fidelity workflow for pre-stack data analysis, QC, improvement and interpretation using a remote compute solution. The subject data set was a large frontier exploration survey from the Great South Basin in New Zealand.
-
-
-
Digitalization through Automated Prospect Ranking Evaluation of NFED to Improve Business Decision Process
Authors N.A. Rilian, H. Rasid, K.A. Zamri, E.B. Keong and A. JazimSummaryNear Field Exploration and Development (NFED) is a part of the ongoing effort to ensure the successful achievement of PETRONAS Upstream 2030 Blueprint, where the intent is to provide focus on monetizing prospects and leads (2U: Undiscovered Resources) within 20 km radius from existing hubs across offshore Malaysia.
Currently, there are no established processes that provide a comprehensive review of the viability of monetizing exploration candidates within a 20 km radius, while circa a thousand prospects and leads, including discovered resources (2C), lie across offshore Malaysia, with the relevant data scattered across sources. The challenge for the NFED team is how to manage the more than 600 registered 2U within 20 km and 400 2C, for both oil and gas molecules, when no standardized and integrated screening and evaluation tools currently exist.
To addresses these challenges, new application tool as part of digitalization effort should be developed and established to deal with big information data and standardize screening and evaluation process workflow by accommodating and catering the diversity and uniqueness of each resources. The new application tool shall be functioned as Dashboard, Database and have the ability for data processing capability.
-
-
-
Learning the Matrix-Fractures Transfer Rate Using a Convolutional Neural Network
Authors N. Andrianov and H.M. NickSummaryOne of the key elements in constructing representative dual porosity/dual permeability models is providing the mass transfer rate between the matrix and the fractures. Whereas it is possible to compute this transfer rate numerically for specific geometries, it is challenging to estimate the transfer function without running CPU-intensive computations. In this work, we demonstrate that a convolutional neural network can approximate a transfer function using the encoded fracture geometry and precomputed fine-scale simulation results.
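The abstract describes a CNN that maps an encoded fracture geometry to a scalar transfer rate. A minimal NumPy sketch of that mapping (untrained random weights, hypothetical grid and kernel sizes, not the authors' actual network) could look like:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image with kernel k."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def transfer_rate_cnn(geometry, kernel, w, b):
    """Map an encoded fracture geometry (binary grid) to a scalar
    matrix-fracture transfer rate: conv -> ReLU -> global average -> linear."""
    feat = np.maximum(conv2d(geometry, kernel), 0.0)  # conv layer + ReLU
    pooled = feat.mean()                              # global average pooling
    return w * pooled + b                             # linear readout

rng = np.random.default_rng(0)
geometry = (rng.random((16, 16)) > 0.8).astype(float)  # synthetic fracture mask
kernel = rng.standard_normal((3, 3))
rate = transfer_rate_cnn(geometry, kernel, w=1.0, b=0.0)
```

In practice the weights would be trained by regression against the precomputed fine-scale simulation results mentioned in the abstract.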
-
-
-
Enhancing Fault Interpretation Efficiency and Accuracy with Deep Convolutional Neural Network and Elastic Cloud Compute
By S. ManralSummaryKeywords: seismic interpretation, deep convolutional neural network, cloud compute, digital
-
-
-
Automated Well Portfolio Optimization – Leveraging Digital Technologies to Accelerate Well Intervention
Authors R. Holy, P. Songchitruksa, R. Sinha, K. Vadivel, S. Ramachandran and R. GaskariSummaryWell Portfolio Optimization
-
-
-
Artificial Intelligence in Early Stage Exploration at Wintershall Dea
SummarySuccessful exploration relies on quickly managing uncertainties and opportunities. Examples include bidding rounds or farm-in offers. Explorationists must quickly develop an understanding of the available data and the implications on value of the assets. Under strong time pressure to develop reliable evaluations, any technology to speed up a comprehensive overview of data is beneficial. A cognitive exploration advisor tool that intuitively provides an overview of subsurface analogues to new projects will reduce cycle times and improve effectiveness. By ingesting internal and external sources, the tool enables research with lower uncertainties in a shorter time. In 2019, Wintershall Dea performed a pilot to develop a cognitive tool to support searching unstructured data. The pilot concluded with promising results: in less than 3 months, we created a minimum viable product that could search for key words and concepts and identify these in documents used for training the cognitive engine. The tool also extracted tables and images. Search results were presented in a GIS interface with queries geographically constrained by user-defined polygons. Users testing the system experienced increasingly effective and timely searches and considered this to be quite helpful in their daily work.
-
-
-
Scaling Well Log Interpretation for Faster Results with AI
Authors J. Fowler and J. StrobelSummaryWe present a case study of applying machine learning to well log interpretation. The project started with a pilot phase using a selection of 30 wells, then expanded to 126 wells. The machine-learning-driven workflow provided excellent interpretation results, with higher than expected quality and a significant reduction in turnaround time. The workflow is cheaper, faster, unbiased (being data-driven), and able to capture uncertainty – overall, able to produce a higher quality outcome. We project cost reductions of more than 40% compared to conventional workflows, making a good business case to implement this technology routinely. The technology will now be extended to further field studies, allowing further understanding of the technology and a growing financial benefit from the advantages it delivers today.
-
-
-
Seamless Translation of Modern File Formats to SEG-Y through the File System Interface
Authors L. C. Villa Real and M. De BayserSummarySEG-Y is the de facto exchange and archiving data file format used by the oil and gas industry. Sadly, it has failed to keep up with improvements made in recent decades in storage technologies, including compression algorithms for scientific data, parallel data storage and retrieval, spatial indexing, and portable metadata interfaces. Although replacements for SEG-Y do exist, the majority of geophysical software continues to rely on that file format. This paper presents an alternative self-describing container format for geophysical data and a translation layer based on the file system interface which enables unmodified legacy programs to benefit from this new format.
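The idea of a file-system translation layer is that legacy readers see ordinary SEG-Y bytes which are synthesized on demand from the modern container. A hedged sketch of the per-trace rendering step (only two header fields populated; a real translator would map the full header set from the container's metadata, and the function name is illustrative):

```python
import struct
import numpy as np

def trace_as_segy_bytes(samples, trace_no):
    """Render one trace from a modern array store as SEG-Y rev 1 bytes:
    a 240-byte binary trace header followed by 4-byte IEEE float samples."""
    header = bytearray(240)
    struct.pack_into('>i', header, 0, trace_no)        # bytes 1-4: trace sequence number
    struct.pack_into('>H', header, 114, len(samples))  # bytes 115-116: samples per trace
    body = samples.astype('>f4').tobytes()             # big-endian IEEE floats
    return bytes(header) + body

# A legacy program seeking into a virtual "file.sgy" would, through the
# file-system layer, trigger this translation only for the traces it touches.
trace = trace_as_segy_bytes(np.linspace(0.0, 1.0, 1001), trace_no=1)
```

Serving traces lazily this way means the compressed, indexed container never has to be materialized as a full SEG-Y file on disk.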
-
-
-
Combining Stratigraphic Forward Modeling with Multi-Point Statistics
Authors A. Miller and J. PeiskerSummaryThe combination of stratigraphic forward modeling (SFM) with multi-point statistics enables the user to generate more dynamically diverse models while adhering to a geological concept and its corresponding depositional patterns, and conforming to observed data. In comparison with classical geostatistics, this helps achieve a more robust history match. While classical constraints are always tailor-made for every reservoir, the generated training image (TI) can now be recycled for other reservoirs with similar depositional environments.
Although this workflow showed satisfactory results, a more efficient process needs to be developed. Automating SFM generation by matching the well data with objective-function-based differential evolution is one of the key elements for reducing the most time-consuming step in the process.
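The automation step mentioned above amounts to minimizing a misfit between well observations and SFM output. A self-contained sketch of that loop, with a toy two-parameter forward model standing in for the (far more expensive) stratigraphic simulator and a deliberately minimal DE/rand/1/bin optimizer:

```python
import numpy as np

def differential_evolution(obj, bounds, pop=20, gens=100, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin optimizer -- a sketch, not a production solver."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = lo + rng.random((pop, len(lo))) * (hi - lo)   # initial population
    f = np.array([obj(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            idx = [j for j in range(pop) if j != i]
            a, b, c = X[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            cross = rng.random(len(lo)) < CR            # binomial crossover
            trial = np.where(cross, mutant, X[i])
            ft = obj(trial)
            if ft < f[i]:                               # greedy selection
                X[i], f[i] = trial, ft
    return X[np.argmin(f)], f.min()

# Toy stand-in for the SFM: two parameters predicting a thickness profile
# observed at three wells (purely illustrative data).
observed = np.array([1.0, 2.0, 3.0])
forward = lambda p: p[0] * np.arange(1, 4) + p[1]
objective = lambda p: np.sum((forward(p) - observed) ** 2)
best, misfit = differential_evolution(
    objective, bounds=np.array([[-5.0, 5.0], [-5.0, 5.0]]))
```

With a real SFM each objective evaluation is a full forward simulation, which is exactly why this step dominates the run time the abstract wants to reduce.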
-
-
-
How to Use Automation to Improve Log Quality Control Process for Multiple Wells
Authors A. Fraser, H. Beurdouche, B. Rosvoll Bøklepp and C. FraserSummaryThe first ever well log was recorded by Schlumberger in 1927, 93 years ago, and since then a large number and variety of well logs have been acquired, processed, and stored to characterize various properties of rocks and fluids in a formation. Correct characterization of these properties is essential for making good commercial decisions in developing hydrocarbon reservoirs. Owing to challenges related to complex formations, borehole environments, and data acquired by different service companies, with potentially different technologies, calibrations, or vintages, the first step is to quality control (QC) these logs. A key challenge, often encountered, is working with well-log data acquired over many decades, which can be difficult to use straight away due to differing legacy data formats, missing or inconsistent units and/or properties associated with well logs, and multiple repeat or duplicate logs present across the same interval. These challenges make manual quality analysis and processing tedious, a problem further exacerbated in a high-volume, multi-well setting, where detecting log types, unit issues, and the best data in repeat sections can be cumbersome. A robust workflow to select logs for further interpretation and analysis must be put in place.
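The checks the abstract lists (log-type detection, unit issues, repeats over the same interval) lend themselves to simple automation. A hedged sketch, with hypothetical alias and unit tables (a real workflow would load curated dictionaries covering each vendor's mnemonics and vintages):

```python
# Hypothetical alias and expected-unit tables, for illustration only.
ALIASES = {'GR': 'GR', 'GRC': 'GR', 'SGR': 'GR', 'RHOB': 'RHOB', 'DEN': 'RHOB'}
EXPECTED_UNITS = {'GR': 'gAPI', 'RHOB': 'g/cm3'}

def qc_logs(curves):
    """Flag unit mismatches and repeat curves of the same log type.
    `curves` is a list of dicts with 'mnemonic', 'unit', 'top', 'base'."""
    issues, seen = [], {}
    for c in curves:
        logtype = ALIASES.get(c['mnemonic'].upper())
        if logtype is None:
            issues.append((c['mnemonic'], 'unknown log type'))
            continue
        if c['unit'] != EXPECTED_UNITS[logtype]:
            issues.append((c['mnemonic'],
                           f"unit {c['unit']} != {EXPECTED_UNITS[logtype]}"))
        for prev in seen.get(logtype, []):  # depth-interval overlap check
            if c['top'] < prev['base'] and prev['top'] < c['base']:
                issues.append((c['mnemonic'], f"overlaps {prev['mnemonic']}"))
        seen.setdefault(logtype, []).append(c)
    return issues

curves = [
    {'mnemonic': 'GR',  'unit': 'gAPI',  'top': 1000.0, 'base': 2000.0},
    {'mnemonic': 'GRC', 'unit': 'gAPI',  'top': 1500.0, 'base': 2500.0},  # repeat section
    {'mnemonic': 'DEN', 'unit': 'kg/m3', 'top': 1000.0, 'base': 2000.0},  # unit issue
]
issues = qc_logs(curves)
```

Running such rules across hundreds of wells turns the tedious manual pass the abstract describes into a review of a short exception list.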
-
-
-
Blockchain Applied to the E&P Phase in the Oil & Gas Sector
By A. AbadSummaryIn a global industry where, each year, billions of operations move through a complex supply chain involving tens of thousands of suppliers, blockchain’s potential cannot be overstated. Blockchain is particularly useful in the seismic sector, an area in which Repsol is pioneering the development of one of the first blockchain pilots.
Seismic rights require complicated processes to ensure the traceability of ownership and their limited use over time, and involve many different suppliers and international teams of workers. As part of the Oil&Gas Blockchain Consortium, Repsol has been responsible for developing this pilot to bring the use of Blockchain technology to the management of seismic rights.
-
-
-
Digital Experimentation to Meet the Challenges of Sustainable and Safe Energy
SummaryIn the E&P sector, this is a time of decreasing margins, in which the requirements for meeting HSE (health, safety & environment) standards are becoming increasingly strict and necessary.
Repsol has opted for digital experimentation as a way of responding to the challenges of this new context in E&P. Drones, IoT, and 3D printing are just some of the initiatives we are considering implementing in our facilities.
-