First EAGE Digitalization Conference and Exhibition
- Conference date: November 30 - December 3, 2020
- Location: Vienna, Austria
- Published: 30 November 2020
Benefits of Digital for Better Exploration Planning and Execution
Authors: C. Le Turdu, A. Pugh, C. Fraser, S. Nollet, C. Castagnac, F. Stabell, D. Palmowski, P.A. Hole, E. Steele and K. Tushingham

Summary: Digital technologies and innovations are disrupting our industry by bringing new, cloud-native solutions that enable better planning and execution of exploration workflows. Being able to standardize a portfolio prioritization process globally, and to instantly re-rank and update it as key data or market conditions change, is key to increasing planning efficiency and transparency. The examples provided in this paper show that these new digital solutions break down silos and enable closer collaboration between technical teams and decision makers.
The standardization of the portfolio prioritization process is expected to play a key role in dramatically reducing early and costly exploration spend by focusing on the opportunities that matter. In addition, this standardization, combined with new ways of working, is expected to minimize human bias and reduce competition amongst exploration teams. This should have a clear impact on reducing the gap between pre- and post-drill resource estimations.
Finally, capturing and sharing experience and knowledge is key for a sustainable future, and we hope that these new solutions, powered by the cloud, will help attract and retain key talent in the oil and gas industry.
Expert-guided Machine-learning for Well Location Optimization under Subsurface Uncertainty
Authors: R. Schulze-Riegert, P. Lang, W. Pongtepupathum, C. Drew, S. Topdemir, S. Pattie, H. Nasiri and T.M. Hegre

Summary: This work investigates the application of an expert-guided machine learning technique for identifying connected, highly saturated oil volumes for optimal well placement. The technique is designed to work on property maps of a single model as well as on ensembles of reservoir models for robust field development optimization under subsurface uncertainty.
The methodology is embedded in a structured workflow design for improving a baseline well location design on the Olympus reservoir model ensemble, a public benchmark project for field development optimization under uncertainty. This work suggests iterative improvement of well location designs using probabilistic well ranking to identify low-performing wells, probability maps to understand reservoir performance, and analytics-based optimization steps targeting large connected, highly saturated oil volumes.
The methodology is described, and application results are presented for a full optimization loop. The structured approach highlights the value of novel learning techniques in providing an efficient and manageable solution for optimizing a well location design under subsurface uncertainty.
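The abstract does not spell out how the probabilistic well ranking is computed; a minimal sketch, assuming each well's rate is evaluated across all ensemble realizations and a well is "low-performing" in a realization when its rate falls below a chosen threshold (both the input layout and the threshold are illustrative assumptions, not the authors' method):

```python
def rank_wells(ensemble_rates, threshold):
    """Rank wells by the ensemble fraction of realizations in which
    their production rate falls below `threshold` (highest first).

    `ensemble_rates` maps well name -> list of rates, one entry per
    ensemble realization.
    """
    probs = {well: sum(r < threshold for r in rates) / len(rates)
             for well, rates in ensemble_rates.items()}
    # Wells most likely to underperform come first, as candidates
    # for relocation in the next optimization iteration.
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
```

For example, `rank_wells({"W1": [1.0, 2.0, 3.0], "W2": [5.0, 6.0, 1.0]}, 4.0)` ranks W1 first with probability 1.0.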
Managing Data Lineage of O&G Machine Learning Models: The Sweet Spot for Shale Use Case
Summary: Machine learning has taken on a growing role in several industries, becoming an essential tool and a competitive advantage. However, questions around training data lineage, or provenance (e.g., "where did the data used to train this model come from?"), the introduction of several new data protection laws, and the need for data governance requirements have hindered the adoption of machine learning models in the real world.
In this paper, we discuss how data lineage can be leveraged across the Machine Learning (ML) lifecycle to build ML models that discover sweet spots for shale oil and gas production, a major application for the Oil and Gas (O&G) industry.
Understanding How a Deep Neural Network Architecture Choice Can Be Related to a Seismic Processing Task
Authors: J. Messud and M. Chambefort

Summary: One of the many challenges in the way of the adoption of Deep Learning (DL) for seismic processing is understanding how a deep neural network (DNN) architecture and its components relate to the underlying physics of a specific processing task. In this article, we study how some convolutional DNN architectures can be naturally suited to given processing tasks, aiding interpretability and opening the door to meaningful QCs. For instance, we show that the Unet architecture (Ronneberger et al., 2015) can naturally learn to "separate" the kinematics of seismic events from their amplitude variations and use both pieces of information efficiently; this is illustrated on the task of skeletonizing (i.e., computing pick probabilities for) and muting CIGs (common image gathers). We also illustrate that the Denet architecture (Remez et al., 2017) can naturally learn to decompose a "noise" model into meaningful complementary contributions, illustrated by the example of receiver deghosting from variable-depth streamer data.
Productivity Prediction Integrating Data-Driven Method, Deep Neural Network and Exploratory Data Analysis in Montney Shale Plays
Summary: This paper presents a novel approach to productivity prediction in the Montney shale formation by integrating a data-driven method, exploratory data analysis (EDA) and a deep neural network (DNN). The study used a total of approximately 1500 wells. First, EDA revealed the distribution of unrefined and null data. Based on this analysis, and to avoid overfitting the proposed DNN model, an outlier analysis of the dataset was performed and 1143 wells were selected as the training dataset. Second, in the DNN model, the applicability of categorical variables through one-hot encoding was verified. Hyperparameter optimization of the DNN model resulted in no dropout layer, three hidden layers, 200 neurons per layer, a ReLU activation function and a learning rate of 0.002. Compared with other supervised learning models (random forest and support vector machine), the optimized DNN model achieved a mean absolute percentage error at least 3.2% lower and a root mean squared error at least 0.025 lower. The proposed DNN model was therefore found to have superior predictive performance.
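Two of the building blocks named in the abstract, one-hot encoding of categorical variables and the mean absolute percentage error (MAPE) used for model comparison, can be sketched in a few lines (a minimal illustration, not the authors' implementation):

```python
def one_hot(values, categories=None):
    """Encode categorical values as one-hot vectors; categories are
    taken from the data (sorted) unless given explicitly."""
    cats = sorted(set(values)) if categories is None else list(categories)
    index = {c: i for i, c in enumerate(cats)}
    return [[1 if index[v] == i else 0 for i in range(len(cats))]
            for v in values]

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((t - p) / t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)
```

For example, `one_hot(["B", "A", "B"])` returns `[[0, 1], [1, 0], [0, 1]]`, and `mape([100.0, 200.0], [110.0, 180.0])` returns `10.0`.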
An Automatic Algorithm for Core-To-Log Matching
Authors: A. Kuvaev, E. Stremichev and R. Khudorozhkov

Summary: Predicting reservoir properties, such as porosity and permeability, is one of the major tasks of a petrophysicist. But since the most reliable information about these properties can be obtained only by studying core plugs, one must ensure that well logs and core data are properly depth-aligned before fitting any statistical model.
Unfortunately, specifics of the coring process can lead to a depth divergence of up to several meters between them, and an auxiliary procedure called core-to-log matching is required to determine the actual depths of the coring intervals. Currently, the matching is performed manually, since many popular commercially available software packages still offer no option to do it automatically.
In this paper, we propose a novel algorithm for core-to-log matching, which automatically shifts coring intervals so that the correlation between well and core logs is maximized. Using 12 wells as an example, we show that the algorithm not only outperforms a petrophysicist by a large margin but also works more than two orders of magnitude faster. The algorithm is implemented in the open-source PetroFlow framework available at https://github.com/gazprom-neft/petroflow.
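The core idea, shifting a coring interval to the depth that maximizes correlation with the well log, can be sketched as a brute-force search (a minimal illustration of the principle; the PetroFlow implementation itself is far more elaborate, and the sample-based depth indexing here is an assumption):

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    if va == 0 or vb == 0:
        return 0.0  # constant window: correlation undefined
    return cov / (va * vb) ** 0.5

def best_shift(log, core, core_start, shifts):
    """Return the depth shift (in samples) that maximizes correlation
    between the core plug values and the co-located well-log window."""
    best, best_r = None, float("-inf")
    for s in shifts:
        lo = core_start + s
        if lo < 0:
            continue  # shifted interval falls above the logged section
        window = log[lo:lo + len(core)]
        if len(window) < len(core):
            continue  # shifted interval falls below the logged section
        r = pearson(window, core)
        if r > best_r:
            best, best_r = s, r
    return best, best_r
```

On a toy log where the core values actually sit two samples above their nominal depth, `best_shift` recovers the shift of -2 with a correlation of 1.0.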
Seismic-ZFP: Fast and Efficient Compression and Decompression of Seismic Data
By D. Wade

Summary: We present our open-source seismic data compression library, built on top of a state-of-the-art floating-point compression algorithm and motivated by the demands of machine learning and cloud computing.
Fast arbitrary reading is achieved by using two key observations, namely that regularity may be preserved by using fixed-rate compression, and that storage hardware may be efficiently utilized by packing disk blocks with data which is frequently accessed together.
We also demonstrate the quality of reproduction of the input data, with the claim that it is suitable for the purposes of Machine Learning.
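The first observation above, that fixed-rate compression preserves regularity, is what enables fast arbitrary reads: when every brick of samples compresses to the same number of bytes, the file offset of any brick is pure arithmetic and no index table is needed. A minimal sketch of that addressing scheme (the brick shape, header layout and names are illustrative assumptions, not seismic-zfp's actual on-disk format):

```python
def brick_offset(i, j, k, grid_bricks, bytes_per_brick, header_bytes=0):
    """File offset of the compressed brick (i, j, k).

    With fixed-rate compression every brick occupies exactly
    `bytes_per_brick`, so a reader can seek straight to any
    sub-volume without scanning or consulting an index.
    """
    ni, nj, nk = grid_bricks  # brick counts along each axis
    assert 0 <= i < ni and 0 <= j < nj and 0 <= k < nk
    linear = (i * nj + j) * nk + k  # row-major linear brick index
    return header_bytes + linear * bytes_per_brick
```

For a 4x4x4 grid of bricks at 512 bytes each, brick (1, 2, 3) starts at byte 27 * 512 = 13824.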
Application of Conditional Random Fields for Seismic Segmentation
Authors: T. Karaderi and E. Zabihi Naeini

Summary: Conventionally, computer vision tasks such as semantic segmentation are handled by probabilistic models such as Conditional Random Fields (CRFs). CRF models are widely used in most modern semantic segmentation pipelines because of their ability to model structural information. Despite CRFs' successful application to natural and medical data, their application to seismic data is limited. In this paper, it is shown how CRFs can be incorporated into deep learning pipelines to improve automatic seismic interpretation by acknowledging that we are predicting a structured output and thus by including our prior knowledge about the spatial image architecture.
The Testing of Powered Drill String and its Operational Impact
Authors: R. Kucs, H. Freissmuth and B. Cazacu

Summary: OMV is convinced that the technology of a powered drill string, with electric power and bidirectional data flow along the entire string down to the bit, will change the dynamics of drilling operations tremendously. The availability of high-resolution MWD, LWD and along-string measurement data in real time will decrease operational risk, time and cost. To prove this technology, OMV ran the PDS powered drill string of the company TDE on the surface section of one of its wells in the Vienna basin. The results are very promising: no connection failed to transmit power and data throughout drilling of the entire surface section, and along-string measurements were received continuously. It is now important to investigate the benefits for the drilling and subsurface teams in detail in order to quantify the overall benefits of this technology.
How Named Entity Recognition and Document Comprehension Unlock Geosciences and Engineering Semantic Search without Big Data
By J. Massot

Summary: By combining a Named Entity Recognition model trained on a tiny labeled dataset with a generalist reading comprehension engine, this abstract shows how to implement an efficient semantic search engine that can complement and sometimes replace a traditional keyword-based search engine. The proposed solution does not require a massive amount of annotated data to train the models involved, taking advantage of the transfer learning and model adaptation allowed by the BERT and BiDAF model architectures. Because no big data is needed, such a solution is very easy to implement at an early stage of any geosciences or petroleum engineering knowledge management project.
Automatic Detection and Classification of Unconformities on Seismic Data Using Machine Learning
Authors: L. Alberts, K. Duffaut and T. Rannem

Summary: During a seismic interpretation exercise, picking an unconformity is one of the most time-consuming and ambiguous tasks. In this paper we present a method to quickly detect areas that are highly likely to be an unconformity, using the principle that at angular unconformities the azimuth and dip of the strata change. We introduce a workflow to classify what kind of unconformity has been detected, by feeding the areas with high unconformity probabilities into a convolutional neural network. This adds the benefit that one can quickly discern whether the region was associated with significant uplift or not.
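The detection principle, flagging locations where the orientation of the strata changes abruptly, can be sketched on a 2-D section with finite differences (a crude illustration under my own assumptions about how dip is estimated from amplitude gradients; the authors' attribute computation is not described in the abstract):

```python
import math

def local_dip(section):
    """Local dip angle (radians) on a 2-D section (rows = depth z,
    columns = position x), estimated from central differences of
    amplitude; border cells are left at zero."""
    nz, nx = len(section), len(section[0])
    dip = [[0.0] * nx for _ in range(nz)]
    for z in range(1, nz - 1):
        for x in range(1, nx - 1):
            dz = section[z + 1][x] - section[z - 1][x]
            dx = section[z][x + 1] - section[z][x - 1]
            dip[z][x] = math.atan2(dz, dx)
    return dip

def unconformity_score(dip):
    """Vertical rate of change of dip: high values flag candidate
    angular unconformities, where strata orientation changes abruptly."""
    nz, nx = len(dip), len(dip[0])
    score = [[0.0] * nx for _ in range(nz)]
    for z in range(1, nz - 1):
        for x in range(nx):
            score[z][x] = abs(dip[z + 1][x] - dip[z - 1][x])
    return score
```

On a synthetic section with flat layers above dipping layers, the score is near zero inside each package and peaks at the boundary between them.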
Practical Applications of Real-time Surface Monitoring in Mature Offshore Brown Field for Production Optimization
Authors: M.F. Haron, K.A. Md Yunos, K.L. Tan, F.L. Bakon, C. Tang Ye Lin and S. M Nazri

Summary: Field B is located 40 km offshore Sarawak in the north-western part of the Baram Delta Province. The field was discovered in July 1967 and started production in 1972. Digitalization efforts were introduced to the field in 2014 to enhance the surveillance scope, as the offshore environment posed numerous challenges to data gathering. Real-time monitoring was implemented and has greatly shortened the response time for well issues and raised the efficiency of production improvement efforts such as gaslift optimization. Two case studies were chosen to illustrate the practical applications of real-time surface monitoring. In case 1, well B-1, producing on gaslift, was flowed back after a stimulation job with higher casing pressure. The wellhead pressure was unstable and, upon comparison with the established pressure envelope, the well was found to be injecting gaslift from multiple points, which is inefficient gaslift utilization. In case 2, real-time surface monitoring helped indicate tubing-casing communication when the casing pressure in well B-2 was observed to increase in relation to the wellhead pressure, enabling the issue to be handled early, before it worsened. Both cases clearly prove that real-time surface monitoring is both practical and advantageous in field applications.
DeepSeismic: a Deep Learning Library for Seismic Interpretation
Authors: M. Salvaris, M. Kaznady, V. Paunic, I. Karmanov, A. Bhatia, W.H. Tok and S. Chikkerur

Summary: We introduce DeepSeismic, an open-source GitHub repository (https://github.com/microsoft/seismic-deeplearning) that provides implementations of deep learning algorithms for seismic facies interpretation. The repository provides composable machine learning pipelines that enable data scientists and geophysicists to use state-of-the-art segmentation algorithms for seismic interpretation (e.g. UNet: Ronneberger et al. (2015), SEResNet: Hu et al. (2018), HRNet: Sun et al. (2019)). We provide scripts to reproduce benchmark results from running these algorithms on various public seismic datasets (Dutch F3 and Penobscot). Finally, the repository provides documentation, quick-start Jupyter notebooks and Python scripts to enable the community to get started with seismic interpretation projects quickly. We believe the results in this paper provide a strong baseline on which others can build. To the best of our knowledge, they represent state-of-the-art results on the Dutch F3 dataset. We have released the code and models in an open-source GitHub repository under a permissive MIT license.
Graphical Network Based Reservoir Modelling to Quickly Use Data and Physics to Explore the Subsurface
Authors: J. Saetrom and A. Skorstad

Summary: In this paper, we demonstrate how we can combine reservoir physics, data and knowledge with fit-for-purpose machine learning algorithms in a graphical network model to utilise reservoir models as part of an efficient discovery process. Contrary to a traditional reservoir modelling approach, where we integrate data in a sequential manner, we train the graphical network model by utilising the information in all available data simultaneously. This helps overcome the common pitfalls in reservoir modelling, which typically limit the value of reservoir modelling efforts in asset teams today. We demonstrate the value of the solution on a study conducted on the Norwegian continental shelf. By having the ability to quickly generate reservoir models that are all plausible given the currently available data, under different prior assumptions about the subsurface, we increase both our subsurface understanding and the confidence in our reservoir management decisions.
Improvement of Well Logs Autointerpretation Robustness via Application of Spatial Geological Information
Authors: D. Egorov, G. Nugmanov, A. Semenikhin, A. Karavaev, A. Shchepetnov, O. Osmonalieva and B. Belozerov

Summary: These days, machine learning models are routinely used for various complex oil and gas tasks. One of the most popular business problems is well log data autointerpretation. However, these models in most cases require large amounts of data, which can be obtained only from mature oil fields. This implies high variability in the data due to different tools, wells and measurement recording conditions, leading to noisy datasets with numerous inconsistencies that affect the accuracy of model predictions and result in many misclassifications and outliers. Applying spatial geological information from other wells can increase prediction robustness.
The main aim of the presented research is to develop a method for incorporating spatial geological data into a conventional machine learning pipeline for net pay interval autointerpretation, in order to improve the quality of model predictions. An approach for aggregating spatial geological features was proposed. These features were used to develop a spatial ranking model that estimates the geological consistency of each predicted net pay interval. The capability of the proposed methodology was proven by mathematical metrics and an expert blind test. It was clearly shown that incorporating spatial data dramatically increases the prediction quality of machine learning models and eliminates inconsistent intervals produced by noisy data.
A Comparative Analysis of Supervised Classification Algorithms for Lithofacies Characterization
Authors: S. Sarkar and C. Majumdar

Summary: Facies characterization is important to distinctly define rocks of interest and to build a better understanding of the depositional environments encountered at the wellbore. The conventional approach of facies analysis by human interpreters is time-consuming. In addition, lack of experience and differences in interpretational approach often lead to inconsistencies that may affect the overall geological modeling. To solve these problems, we introduce three commonly used machine learning algorithms for automated lithofacies classification: Decision Trees, Random Forest and the Support Vector Machine. In this study, we apply these supervised classification algorithms to a set of wireline logs from different wells and evaluate the efficiency of each algorithm for facies classification under different constraints.
While machine learning proves to be a more time-efficient and consistent solution, the performance and accuracy vary with the algorithm and the preconditioning of the data. The Support Vector Machine outperforms Random Forest and Decision Trees when the training dataset is limited. It is also inferred that the models are more efficient with fewer predictive classes and less complexity. Conditioning the training data to give equal weight to all predictive classes is equally important for creating a robust and unbiased model.
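The last point, conditioning the training data so that every facies class carries equal weight, is commonly done by oversampling minority classes; a minimal sketch of that step (one of several possible balancing strategies, and not necessarily the one the authors used):

```python
import random

def balance_by_oversampling(samples, labels, seed=0):
    """Equalize class sizes by randomly re-sampling each minority
    class up to the size of the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for y, group in sorted(by_class.items()):
        # Keep every original sample, then pad with random repeats.
        picks = group + [rng.choice(group) for _ in range(target - len(group))]
        out_samples.extend(picks)
        out_labels.extend([y] * target)
    return out_samples, out_labels
```

After balancing, each facies label appears the same number of times, so a classifier's loss no longer favors the dominant lithology.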
Seismic Interpretation of Partially Labeled 3D Cube with Neural Networks
Authors: A. Zhuchkov, D. Prokhorov, V. Gusev and L. Matyushin

Summary: We propose to solve the seismic interpretation problem by manually labelling a very small (0.5%) fraction of the inline and crossline sections of the seismic cube, followed by automatic segmentation of the rest of the cube by a neural network model.
There are several methods to improve the quality of segmentation. First, we use an additional input image, which is essentially an interpolation of the orthogonal labelled images. Second, we describe two types of augmentation that work particularly well for seismic segmentation: grid distortion and the linear-harmonic transform. This workflow results in high-quality segmentation and is a good candidate for use in real-world situations to reduce manual labelling.
REEF: A Framework for Information Extraction and Automated Knowledge Graph Construction
Authors: J. Laigle, C. Collantes, A. Cortis, Z. Jin and A. Bisset

Summary: We present Reef (Recursive Evidence Extraction Framework), a Python framework for automated information extraction from petroleum geoscience databases. Reef enables an end-to-end pipeline from raw documents to a knowledge graph. Reef makes two essential operations possible: (1) discovering entities in documents, characterizing them and connecting them to abstract concepts present in a knowledge graph, and (2) discovering new knowledge with distant supervision.
Knowledge graphs are key to building better search engines, question answering systems, recommendation engines and feed algorithms for the cross-analysis of multiple datasets. Reef's unique approach leverages a comprehensive stack of open-source, state-of-the-art libraries for document digitalization and parsing, natural language processing, language modeling, logic reasoning and graph analysis. These foundational components are seconded by custom applications for specific tasks.
Documents processed in Reef are digitized and sent through a pipeline where their content is filtered according to a flexible, easily extensible, petroleum-geoscience-specific object model. Information can be extracted from text, tables, figures and diagrams. Reef contains functions to infer the nature of information, digitize it, disambiguate it and reconcile it into a graph database. Reef can be deployed in any cloud and delivers production-ready knowledge graphs that can be served to third-party applications.
Advances in the Digital Outcrop Modeling Software and Workflow: Examples and Lessons from Outcrops: Saudi Arabia
Summary: Photorealistic 3D Digital Outcrop Models (DOMs) are increasingly encountered as a cutting-edge topic in geosciences, especially in reservoir analog characterization. Custom viewing and analysis software has been developed specifically for work with these DOMs. Although 3D outcrop models are an exciting topic, finding software tailored to provide the range of geological analysis tools needed to extract meaningful results from them is still challenging. Utilizing the excellent outcrops in Saudi Arabia, new 3D outcrop model visualization and analysis software has been developed, with a focus on the ability to load and display large outcrop model datasets in fully georeferenced coordinates. Analysis tools have been implemented for interactive analysis and annotation of the outcrop, including both sedimentological and structural tools. In this work we also suggest a workflow for digital data acquisition (with a high-resolution camera and GCPs), processing (with Reality Capture software) and interpretation (with ArcGIS and VRGS software). Following this workflow provides a practical guide towards gearing data collection to meet the desired outcomes within time and effort constraints.
The Well Productivity Index Determination Based on Machine Learning Approaches
Authors: A. Gruzdev, V. Babov, Y. Simonov, A. Kosarev, I. Simon, V. Koryabkin and A. Semenikhin

Summary: In this paper, we present an approach to building a machine learning model for predicting the well productivity index. The proposed approach is based mainly on LWD data and well log interpretation results, drawing on the petrophysical model of the oilfield and digital signal processing techniques. The approach was tested on historical data from the Novoportovskoye oilfield, using leave-one-out cross-validation (LOOCV). As a result, the median relative error over wells is less than 20%.
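The validation scheme quoted here, LOOCV scored by median relative error over wells, can be sketched generically (a minimal harness under my own assumptions about the fit/predict interface; it says nothing about the authors' actual model):

```python
def loocv_median_relative_error(X, y, fit, predict):
    """Leave-one-out cross-validation: train on all samples but one,
    predict the held-out sample, and return the median relative error."""
    errors = []
    for i in range(len(X)):
        X_train = X[:i] + X[i + 1:]
        y_train = y[:i] + y[i + 1:]
        model = fit(X_train, y_train)
        pred = predict(model, X[i])
        errors.append(abs(pred - y[i]) / abs(y[i]))
    errors.sort()
    n, mid = len(errors), len(errors) // 2
    return errors[mid] if n % 2 else 0.5 * (errors[mid - 1] + errors[mid])
```

For instance, plugging in a trivial mean-value predictor (`fit = lambda X, y: sum(y) / len(y)`, `predict = lambda m, x: m`) over `y = [10, 10, 10, 20]` gives a median relative error of 1/3.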