First EAGE/PESGB Workshop on Machine Learning
- Conference date: November 29-30, 2018
- Location: London, UK
- Published: 30 November 2018
Functional Estimator For Reservoir Proxy Models Made Scalable Through A Big Data Platform
Authors: M. Piantanida, A. Amendola and G. Formato
Summary: This abstract documents how a Big Data Analytics platform made it possible to implement a complex functional estimator for a reservoir proxy model, involving demanding machine learning operations on dynamic reservoir models, so that it scales to the size of realistic reservoir models.
An Extension For The RA Methodology: Stability Analysis
Authors: E. Vital Brazil, R. Silva and L. Farias
Summary: We present an extension to the methodology proposed by Perez-Valiente et al. (2014), known as Reservoir Analogues (RA). This method finds analogues by using machine learning to complete a dataset. Our concern is that the methodology does not track the error carried from the imputation of missing values through to the ranked lists of analogues. This study aims to analyze the uncertainty inherent in that step and discusses how accounting for it can help obtain accurate information for reservoirs with limited data.
Building A Robust, Company-Wide Data Science Pipeline Using Programming Abstraction And Virtualization
Authors: N. Jones and K. Torbert
Summary: The oil and gas industry presents a challenging and exciting environment for data projects due to the size, complexity, and variability in formatting, type, and quality of the data collected. This environment makes delivering and maintaining a data science pipeline from source systems through to the end user an enormous challenge in many companies (Scully et al., 2014). Many projects fail before any analytics can even be applied to the data, owing to difficulties handling legacy systems, data silos, complex dependencies between data sources, and more. In other cases, data science projects advance in only one area or division of a company because of differences in data handling, despite having broad applicability across the company's assets. This presentation will discuss California Resources Corporation's new company-wide data analytics effort as a case study of how we have used technologies such as data virtualization (Van Der Lans, 2018) and programming architectural principles such as abstraction to tackle difficult data integration and data quality problems, constructing a data science pipeline capable of delivering results company-wide. Many of these problems have frustrated multimillion-dollar attempts to address them in the recent past.
An Automated Information Retrieval Platform For Unstructured Well Data Utilizing Smart Machine Learning Algorithms Within A Hybrid Cloud Container
Authors: N.M. Hernandez, P.J. Lucañas, J.C. Graciosa, C. Mamador, L. Caezar, I. Panganiban, C. Yu, K.G. Maver and M.G. Maver
Summary: A large amount of historic and valuable well information is available in the oil and gas industry, stored either on paper or, more recently, as digital documents and reports, especially by national data management systems and oil companies. These technical documents contain valuable information from disciplines such as geoscience and engineering and are in general stored in an unstructured format. To extract and utilize all this well data, a machine learning-enabled platform, consisting of a carefully selected sequence of algorithms, has been developed as a hybrid cloud container that automatically reads and understands the technical documents with little human supervision. The user can upload raw data to the platform, which is stored on a private local server. The machine learning algorithms are activated and carry out the necessary processing and workflows. Structured data is generated as output and pushed through to a search engine that is accessible to the user in the cloud. The aim of the platform is to ease the identification of important parts of the technical documents, automatically extract relevant information, and visualize it for the user, so that they can easily do further analysis, share it with colleagues, or agnostically port it to other platforms as input.
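The upload-extract-index flow this abstract describes could be sketched, in heavily simplified form, as follows. All function names, regex patterns, and the sample report text are illustrative assumptions; the actual platform applies machine learning models rather than hand-written patterns:

```python
import re

def extract_entities(text):
    """Toy stand-in for the platform's ML extraction step:
    pull well identifiers and depths (in metres) from free text."""
    wells = re.findall(r"\bWell\s+[\w/-]+", text)
    depths = [float(d) for d in re.findall(r"(\d+(?:\.\d+)?)\s*m\b", text)]
    return {"wells": wells, "depths": depths}

def index_documents(docs):
    """Turn raw documents into structured records and push them
    into an in-memory index, standing in for the cloud search engine."""
    return {doc_id: extract_entities(text) for doc_id, text in docs.items()}

# Hypothetical unstructured report text uploaded by a user
docs = {"report_001": "Well 15/9-19 reached TD at 4300 m in the Jurassic."}
print(index_documents(docs))
```

The structured output of the index is what a downstream search engine or visualization layer would consume, mirroring the abstract's raw-data-in, searchable-records-out design.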
Input Data Quality Influence On Lithoclass Predictions In Relation To Supervised Machine Learning
Authors: H.W. Bøe, K.B. Brandsegg, L. Marello and A.E. Črne
Summary: We assess the importance of data availability and consistency prior to applying supervised machine learning for predicting lithoclasses from wireline logs. A dataset is pre-processed and used as training data by three machine learning models in order to investigate the sensitivity of the lithoclass predictions. The first model uses the quality-assured dataset without any modifications. The second model standardizes log signatures, whereas the third model uses the dataset in combination with additional features that dampen extreme outliers. The three models are evaluated against lithofacies interpretations based on CPIs to show the varying predictive power of the models. The method is applied to a quality-controlled Jurassic interval dataset of ~100 exploration wells within a quadrant of the Norwegian part of the North Sea. The results show that the number of wireline logs available has a direct influence on prediction accuracy. For acceptable prediction accuracy, the wells should contain at least the gamma-ray, density, and neutron logs. To distinguish between water-bearing and hydrocarbon-bearing intervals in sandstones, the resistivity logs should also be present. When implementing machine learning on a regional scale, varying burial depth and depositional environment should be considered in order to gain optimal predictive power.
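The second and third preprocessing variants described in this summary (standardized log signatures versus outlier dampening) might be sketched as below. The function names, the clipping rule, and the gamma-ray sample values are illustrative assumptions, not taken from the paper:

```python
import statistics

def standardize(values):
    """Variant 2: rescale a log curve to zero mean and unit variance."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values) or 1.0  # guard against constant curves
    return [(v - mu) / sigma for v in values]

def clip_outliers(values, k=2.0):
    """Variant 3 (one possible reading): dampen extreme readings by
    clipping each sample to the range mean +/- k standard deviations."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    lo, hi = mu - k * sigma, mu + k * sigma
    return [min(max(v, lo), hi) for v in values]

# Hypothetical gamma-ray samples with one washout-induced spike
gr = [45.0, 52.0, 48.0, 50.0, 420.0, 47.0]
print(clip_outliers(gr))
```

Comparing lithoclass predictions trained on the raw, standardized, and clipped versions of the same curves is one straightforward way to reproduce the sensitivity study the authors describe.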