Second EAGE Digitalization Conference and Exhibition
- Conference date: March 23-25, 2022
- Location: Vienna, Austria
- Published: 23 March 2022
The First Digital Oilfield Maturity Model Allowing Fast Value Extraction from Mature, Diverse Assets
Authors: P. Tippel, M. Smith, S. Pöllitzer and D. Kauscheder
Summary: A new approach to scaling digital solutions on top of existing field infrastructure built to varying design and operating philosophies, enabling cross-company and cross-industry collaboration.
Asset Integrity Management: Automatic Crack Detection on Concrete Surfaces for Early Preventive Maintenance
Authors: M. Pal, P. Palevicius, M. Landauskas, U. Orinaite, I. Timofejeva and M. Ragulskis
Summary: In the last decade alone, image-analysis-based methods have gained considerable momentum as an early, non-invasive crack detection technique. More recently, artificial-intelligence-based deep learning methods have also been applied to automatically process images of concrete surfaces for crack identification. Although these methods claim very high accuracy, they often ignore the complexity of the image collection process itself. Real-world images are often affected by illumination conditions, the randomness of crack shapes, irregular crack sizes, and various noise sources such as shadows, shading, blemishes, and concrete spall. In this paper we explore the complexity of image classification for concrete crack detection in the presence of complex shadow shapes. The challenges of applying deep-learning-based methods to detect concrete cracks in the presence of shadows are elaborated in detail, and a new deep learning approach that allows crack detection in the presence of shadow shapes is presented. Significant effort was spent on developing a database of images of concrete structures with cracks and shadows for training and testing. Crack characterization through image augmentation techniques is also presented. To the best of our knowledge, such methods have not been published before.
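As a minimal sketch of shadow-aware image augmentation (the function, its parameters, and the half-plane shadow model below are illustrative assumptions, not the authors' method), a clean crack image can be darkened along a random half-plane to generate shadowed training variants:

```python
import numpy as np

def add_synthetic_shadow(image, strength=0.5, seed=0):
    """Darken a random half-plane of a grayscale image to mimic a cast shadow.

    `image` is a 2-D float array in [0, 1]. The half-plane boundary is a
    random straight line through the image: a crude stand-in for the complex
    shadow shapes discussed in the abstract.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Random line a*x + b*y = c splits the image; one side gets darkened.
    theta = rng.uniform(0, np.pi)
    a, b = np.cos(theta), np.sin(theta)
    c = a * rng.uniform(0, w) + b * rng.uniform(0, h)
    mask = (a * xx + b * yy) > c
    shadowed = image.copy()
    shadowed[mask] *= (1.0 - strength)
    return np.clip(shadowed, 0.0, 1.0)

# Augment: each clean crack image yields several shadowed variants.
clean = np.full((64, 64), 0.8)
augmented = [add_synthetic_shadow(clean, strength=s, seed=i)
             for i, s in enumerate([0.3, 0.5, 0.7])]
```

Real augmentation pipelines would combine such masks with soft edges, rotations, and photometric jitter; this only shows the basic mechanism.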
A Highly Accurate Machine Learning Approach to Detect Salt Bodies in 3D Seismic Data
Authors: S. Amini and W. Gonzalez
Summary: In recent years, machine learning techniques have been widely used to assist seismic interpretation in detecting salt bodies in 3D seismic data. The success of such applications depends significantly on the inputs selected to train the model and the workflow developed to perform the related tasks.
In this work, we present a workflow that uses Principal Component Analysis (PCA) to evaluate 18 seismic attributes and identify those most influential on the first four principal components. The seismic attributes (including spectral, structural, and miscellaneous) are calculated from the SEAM (SEG Advanced Modeling Corporation) Phase I model dataset. During the attribute evaluation, we identify 10 seismic attributes with the highest contribution to the cumulative variance of the seismic dataset. Finally, the selected attributes are used to train three supervised machine learning algorithms (Deep Neural Network, Random Forest, and AdaBoost classifiers). The neural network classifier demonstrates the highest accuracy of the three, with a classification F1 score of 0.99 for both the salt and non-salt classes. In addition, owing to the specific input features used in the training data, the developed model retains very high accuracy when applied to any new cross-section within the seismic volume.
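The attribute-selection step can be sketched with a PCA computed via SVD. The ranking heuristic, attribute names, and synthetic data below are assumptions for illustration, not the SEAM workflow itself:

```python
import numpy as np

def top_attributes_by_pca(X, names, n_components=4, n_select=10):
    """Rank attributes by their loadings on the first principal components.

    X: (n_samples, n_attributes) matrix of seismic attributes.
    Returns the n_select attribute names with the largest summed absolute
    loadings on the first n_components PCs, weighted by explained variance.
    """
    Xc = X - X.mean(axis=0)               # centre each attribute
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / np.sum(s**2)             # explained-variance ratios
    loadings = np.abs(vt[:n_components])  # (n_components, n_attributes)
    score = (var[:n_components, None] * loadings).sum(axis=0)
    order = np.argsort(score)[::-1]
    return [names[i] for i in order[:n_select]]

# Synthetic stand-in for 18 seismic attributes; one is given dominant variance.
rng = np.random.default_rng(42)
names = [f"attr_{i}" for i in range(18)]
X = rng.normal(size=(500, 18))
X[:, 3] += 5 * rng.normal(size=500)
selected = top_attributes_by_pca(X, names, n_select=10)
```

In practice one would standardize attributes first if their units differ; the weighting scheme here is one of several reasonable choices.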
Bridging the gap between Domain and Data Science: Explaining Model Prediction using SHAP in Duvernay Field
Authors: A. Shamsa, M. Paydayesh and M.M. Faskhoodi
Summary: Machine learning (ML) solutions are now used everywhere; however, predicted ML results are often difficult to validate, and domain experts tend to reject them due to the "black box" syndrome. In this paper, we use the Shapley Additive Explanations (SHAP) interpretative model to demonstrate that this method can help in the interpretation of ML results. We applied the method to a large dataset of 800 wells in the Duvernay shale gas field in Canada, with the objective of modelling one year of cumulative gas production.
We applied fast gradient-boosted decision trees (XGBoost) and light gradient boosting machine (LightGBM) models. The SHAP model was then used to identify the contribution of each modelling parameter to the production prediction. The SHAP method effectively revealed whether each parameter impacts the production prediction positively or negatively, and allowed us to quantify that impact.
These findings are key to decision making in the selection of optimum completion parameters and in picking locations for infill drilling.
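SHAP's core idea, Shapley values from cooperative game theory, can be illustrated by brute-force coalition enumeration on a toy model with three hypothetical "completion parameters". The predict function and baseline below are made up; a real application would use the shap library on the trained XGBoost/LightGBM models:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for a model with a handful of features.

    `predict` maps a feature vector to a number; features absent from a
    coalition are replaced by the corresponding `baseline` value. This is
    the quantity SHAP approximates efficiently for large models.
    """
    n = len(x)
    def f(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += w * (f(set(S) | {i}) - f(set(S)))
    return phi

# Toy "production model": linear in three completion parameters.
predict = lambda z: 2.0 * z[0] + 1.0 * z[1] - 3.0 * z[2]
phi = shapley_values(predict, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
```

For a linear model the Shapley value of each feature reduces to coefficient times displacement from baseline, and the values always sum to the prediction minus the baseline prediction (the "efficiency" property SHAP plots rely on).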
A comparison of traditional, supervised, and unsupervised machine learning-based denoising methods for post-stack seismic data
Authors: L. Mosser, T. Papadopoulos, A. Kuha, J. Herredsvela and E.Z. Naeini
Summary: Conditioning of seismic data is a key step in interpretative and quantitative exploration workflows, and a central part of conditioning is the removal of various noise signatures. While denoising operations are applied throughout the seismic processing workflow, residual noise is typically still observed in post-migration seismic images.
We present a comparison of supervised and unsupervised post-stack seismic denoising methods against traditional denoising operators. Our findings show that supervised deep-learning-based denoising operators can provide good results but are challenged by the need for good synthetic datasets and for valid assumptions about the noise characteristics of the seismic dataset they are applied to.
We evaluate the performance of these denoising approaches on downstream tasks such as coloured inversion and 3D geological fault detection. Our results show that direct evaluation of these tasks on the denoised volumes provides a good benchmark for the various denoising operators and directly highlights the added value for structural and quantitative interpretation.
Our preliminary results on unsupervised denoising highlight these methods as an important research avenue: they reformulate the denoising problem for seismic data in a more rigorous manner that does not require supervised models that generalize across datasets.
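A traditional denoising operator of the kind such comparisons use as a baseline can be sketched as a 2-D median filter; the synthetic "post-stack section" and noise level below are illustrative assumptions:

```python
import numpy as np

def median_filter2d(img, k=3):
    """Traditional denoiser: k-by-k median filter with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    windows = np.empty((h, w, k * k))
    idx = 0
    for dy in range(k):
        for dx in range(k):
            windows[:, :, idx] = padded[dy:dy + h, dx:dx + w]
            idx += 1
    return np.median(windows, axis=-1)

# Synthetic section: smooth reflectivity pattern plus seeded Gaussian noise.
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 64))[:, None] * np.ones((1, 64))
noisy = clean + rng.normal(scale=0.3, size=clean.shape)
denoised = median_filter2d(noisy)
mse = lambda a, b: float(np.mean((a - b) ** 2))
```

Comparing `mse(denoised, clean)` against `mse(noisy, clean)` is the crudest possible benchmark; the abstract's point is that downstream tasks such as inversion and fault detection are more informative evaluation targets.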
Improving the subjective labelling of interpretation of geological conditions ahead of the tunnel face
Authors: A. Sapronova, P.J. Unterlas, T. Marcher, J. Hecht-Méndez and T. Dickmann
Summary: Geological prognosis during tunnelling is a fundamental task: it provides knowledge of the rock mass condition ahead of the face, improves the available initial geological model, and supports more efficient and safer tunnel excavation. Tunnel seismic prediction has become established as a reliable methodology for predicting the rock mass condition ahead of the face. The quality of the final seismic model is conditioned by the quality of the recorded seismic data, the data processing, and the interpretation of the output, which depends mainly on the user's expertise.
The goal of this work is to use machine learning methods to create a new way of classifying seismic data that is as unaffected by human interpretation as possible. We propose a model in which a cascading ensemble of machine learning classifiers analyses the seismic data and the geological documentation available at the underground construction site to predict geological conditions.
We show that applying machine learning methods eliminates subjective perceptions from the prediction, and that the proposed ensemble approach improves the accuracy of the geological conditions forecast.
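One possible reading of a cascading ensemble, sketched here with assumed stage functions and thresholds (the paper does not specify its classifiers), is a chain in which each stage decides only when it is sufficiently confident, and uncertain samples fall through to the next stage:

```python
def cascade_predict(stages, x):
    """Cascading ensemble: each non-final stage returns (label, confidence);
    the first stage confident enough decides, otherwise the sample falls
    through, and the final stage always decides."""
    for classify, threshold in stages[:-1]:
        label, conf = classify(x)
        if conf >= threshold:
            return label
    return stages[-1][0](x)[0]

# Hypothetical stages: a cheap rule on a seismic amplitude feature, then a
# fallback that also consults a geological-documentation flag.
fast = lambda x: ("hard_rock", 0.9) if x["amplitude"] > 0.7 else ("unknown", 0.2)
slow = lambda x: ("fault_zone" if x["doc_flag"] else "soft_rock", 1.0)
stages = [(fast, 0.5), (slow, 0.0)]
```

The design choice in a cascade is that cheap, confident decisions short-circuit the chain, reserving the more expensive (or more data-hungry) stages for ambiguous cases.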
The innovative breakthrough: first Use Case on OSDU within TotalEnergies
Authors: T. Akimova and B. Joudet
Summary: TotalEnergies decided to demonstrate the capabilities of the OSDU open-source data platform on Azure and started the very ambitious SURINAME use case, with an eight-month duration.
The main goals of this project were to demonstrate how OSDU could improve collaboration by reducing internal business silos between drilling, geosciences and development, and help accelerate the definition of an exploration or appraisal well by providing all parties with the same updated data throughout the project lifecycle.
The Suriname Golden Block was chosen for this use case because of the transversality between Exploration and Project Development, the multiple stakeholders, sites, and business software involved, and the very limited legacy data for this field. It was then decided to focus on the well data.
Several connectors from applications to the platform were created.
In addition to the standard functionalities provided by the OSDU Open Group, some additional objects, corresponding schemas, and customized APIs were created by TotalEnergies and its partners to match the business needs.
A proper entitlement model was set up according to the company's internal security policies.
An end-to-end testing phase involving key end users completed the use case.
This is a major breakthrough for the management of geosciences data within the company.
Svalbox: digitizing and integrating Svalbard’s geoscientific record
Authors: K. Senger, P. Betlem, T. Birchall, A. Dahlin, R.K. Horota, G. Lord, J. Janocha, L. Kuckero, K. Ogata, S. Olaussen, N. Rodes and A. Smyrak-Sikora
Summary: The Svalbard Archipelago has unique vegetation-free outcrops from nearly all periods of the Phanerozoic. Svalbard has a rich geological heritage based on more than 100 years of coal production, complemented by subsurface data from petroleum exploration, research drilling, and geophysical acquisition. In addition, Svalbard is a superb playground for international geophysical and geological research, with tremendous scientific output since the 1800s. Currently, increasing R&D is targeting palaeo- and present-day climate change. All these activities have generated important datasets that contribute both to understanding Svalbard's geological evolution and as an excellent analogue not only for the Barents Shelf but also for the nearby Arctic basins. The challenge is that these data are often fragmented and inaccessible. Through the Svalbox database, we liberate the data and integrate them in a single geospatial database and subsurface portal.
Advanced Process Control - a Cornerstone of Digitalization and Operational Excellence
Authors: G. Kotsiopoulou, M. Smith, W. Rodriguez and A. Braicu
Summary: Digitalization and automation play a key role in operational excellence by enabling safe and secure process optimization of production facilities. Advanced Process Control (APC), or Model Predictive Control (MPC), is a cornerstone of the digital transformation that is driving the industry. The current global challenges require new and novel ways to deliver projects, and specifically to deliver advanced process optimization solutions remotely. This paper presents how APC was successfully engineered, deployed, tested, commissioned, and put into operation in two of our company's most critical gas production facilities in an almost completely remote manner. A very rigorous project methodology was applied to minimize the risks to the expected benefits, considering the complexity of the project and the pioneering implementation strategy that had to be applied. The controllers, based on dynamic models that capture the processes and their constraints, have been running continuously and autonomously, targeting a self-optimizing system using real-time data. Furthermore, the paper covers the challenges that were encountered and eventually overcome during project execution, as well as the substantial benefits obtained, such as incremental production, plant stability, and energy efficiency.
European Geological Data Infrastructure (EGDI) – making European geological data accessible
Summary: European planning is required to accommodate the increasing need for resources and to broaden knowledge about mitigating the effects of climate change. Consequently, access to pan-European geological data is becoming increasingly important. Data and knowledge from many European geological research projects are not always publicly accessible, and the EU, among others, wishes to harmonise and integrate geological data from these projects to ensure public access to the growing amount of geological data.
One of the central pillars of the EuroGeoSurveys (EGS) strategy is the harmonization and sharing of pan-European geological data and research results. Its concrete result is the European Geological Data Infrastructure (EGDI), whose first version was launched in 2016. The EGDI platform is a collection of applications managed by various partners from the national and regional Geological Survey Organisations (GSO). EGDI gives the wider European research and digital landscape access to pan-European geological datasets and services by connecting to infrastructures and other geodata-related platforms.
At present, the EGDI platform is being expanded with data and results from the GeoERA research projects, and further development is planned as part of the Horizon Europe CSA call to support further harmonisation of, and access to, European geological data.
Rapid Screening for Carbon Sequestration and Hybrid Opportunities on a Digital Platform
Authors: M. Neumaier, B. Kurtenbach and A. Neumaier
Summary: We are currently developing a cloud-based system for efficient early-stage evaluation of broader subsurface energy and resource assets. Horizontal, cross-domain workflows from geoscience to economics enable fast, transparent, and reliable assessments and decisions. A powerful Monte Carlo simulator allows fast probabilistic modelling, complemented by a unique artificial intelligence system for calibration to known subsurface data in a digital environment.
The carbon sequestration screening workflow builds on our commercial solution for oil and gas and on our more than 30 years' experience in the exploration industry. In the near future, we will be able to assess carbon sequestration opportunities independently, or as "hybrid assets", i.e., combined oil/gas and carbon sequestration opportunities, consistently evaluated for economic feasibility and carbon footprint.
Artificial intelligence applied to the geological facies classification of pre-salt carbonates.
Authors: J. Silva, M. Martins, L. Brelaz, L. Medeiros and R. Cunha
Summary: This work deals with the use of artificial intelligence techniques in the classification of pre-salt geological facies using high-resolution images. Hundreds of meters of core samples were used for deep neural network training, generating an automatic facies classifier with very good accuracy.
The CORE-HDI images were produced with the aid of a robot for photographic digitization. Manual segmentation tools from the Rocklab Digital WEB platform were used to generate thousands of facies samples distributed across 21 categories. The results showed the efficiency and speed of using machines to identify textural patterns in HD photographic images of carbonate rocks.
Data Mining For Prediction of Petrophysical Properties From Well Logs
Authors: R. Ruiz, C. Reiser, A. Roubíčková and N. Banglawala
Summary: Petrophysical interpretation of well logs is a complex and lengthy process. It requires skillful and experienced analysts who can take a set of measurements and translate it into reservoir properties such as porosity and hydrocarbon saturation. Deriving these logs can easily take a couple of days, and both the length of the task and the quality of the results depend highly on the geological complexity of the area. Moreover, the experience gained while working on one well doesn't necessarily translate into a reduction in the time spent processing a new well. Therefore, any opportunity to optimize and reduce the turnaround of the standard petrophysical workflow, while integrating regional knowledge, is highly sought after. In this work, a methodology is proposed to predict total porosity and hydrocarbon saturation from well logs using machine learning (ML) and an extensive petrophysical well database in the Norwegian Sea. Blind-test results from this methodology are of a quality comparable to that achieved by a specialist, at a fraction of the time, proving its potential for quick estimation of reservoir properties while a more thorough analysis is performed.
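As a crude stand-in for the (unspecified) ML regressor, a least-squares fit predicting porosity from density and sonic logs illustrates the train-then-blind-test pattern; all data, coefficients, and log relationships below are synthetic assumptions:

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit with intercept via lstsq: a minimal sketch of a
    regressor mapping log measurements to a reservoir property."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Synthetic training "wells": porosity driven by density and sonic logs,
# with a little measurement noise.
rng = np.random.default_rng(1)
density = rng.uniform(2.0, 2.6, 300)          # g/cc
sonic = rng.uniform(60.0, 110.0, 300)         # us/ft
porosity = 0.95 - 0.30 * density + 0.001 * sonic + rng.normal(0, 0.005, 300)
coef = fit_linear(np.column_stack([density, sonic]), porosity)

# "Blind test" on a held-out measurement pair.
pred = coef[0] + coef[1] * 2.3 + coef[2] * 85.0
```

A real workflow would use a nonlinear model, many more input logs, and a regional training database; the point is only the shape of the train/blind-test loop.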
Uncertainty in Chemical Flooding: A new data optimization algorithm for modelling of surfactant flood.
Authors: O. Akinyele and K.D. Stephen
Summary: A data optimization algorithm has been created to solve the inverse modelling problem of fractional-flow theory for chemical enhanced oil recovery processes. The routine is scalable, flexible, and easy to deploy in other programming and numerical computing environments for uncertainty analysis, especially when upscaling methods are not available. Fractional-flow theory provides valuable insights for 1-D calculations and reservoir model validation. We used the MATLAB programming environment to develop an adaptable framework and deployed the code to manipulate the plotting of functions and the data needed to calibrate and derive relative permeability functions for upscaling. The code is published in open format and is accessible online for research purposes.
The approach integrates three computational processes to attain the effective properties: total mobility, oil bank saturation, mass balance, and flow velocity. We used prediction and regularization terms to define the multi-objective function, which was then minimized using an active-region quasi-Newton optimization algorithm. The graphical user interface embeds an adjustable response cost-function manifold used to ensure the solution converges to the observed flow parameters. The method was applied to design relative permeability curves for miscible and immiscible surfactant displacement systems, as may be encountered in experimental or simulation studies.
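The fractional-flow function at the heart of the theory can be sketched with Corey-type relative permeabilities; the functional form is standard, but all parameter values below are illustrative defaults, not the paper's calibrated curves:

```python
import numpy as np

def corey_relperm(sw, swc=0.2, sor=0.2, krw0=0.4, kro0=0.9, nw=2.0, no=2.0):
    """Corey-type relative permeability curves: a common parametric choice
    for the curves the paper derives by calibration."""
    s = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)  # normalized Sw
    return krw0 * s**nw, kro0 * (1.0 - s)**no

def fractional_flow(sw, muw=1.0, muo=5.0, **kw):
    """Water fractional flow f_w = (krw/muw) / (krw/muw + kro/muo),
    neglecting gravity and capillary pressure."""
    krw, kro = corey_relperm(sw, **kw)
    return (krw / muw) / (krw / muw + kro / muo)

sw = np.linspace(0.2, 0.8, 61)   # from connate water to residual oil
fw = fractional_flow(sw)
```

The resulting S-shaped f_w(S_w) curve is the object Buckley-Leverett analysis differentiates to get front saturations, and the quantity an inverse-modelling routine matches against observed flow data.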
Predicting Well Production Performance Using Deep Learning Techniques
Authors: F. Sumarna, J. Ekundayo and A. Saeedi
Summary: Public production data from the decommissioned Volve field in the North Sea, Norway, was used to develop sequential deep learning models, namely the simple recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU), for the prediction of oil, gas, and water production from surface (and limited subsurface) data. Following extensive exploratory data analysis, the dataset was split in the ratio 70:15:15 for training, validation, and testing, respectively, for each of the three deep learning models. For a fair comparison, the same network structures and hyper-parameters were applied for both field- and well-level calculations.
The results show that both LSTM and GRU achieve higher accuracy than the simple RNN. The relative squared errors (RSE) for the simple RNN, LSTM, and GRU, respectively, are: oil rate - 0.0820, 0.0793, and 0.0815; gas rate - 0.2297, 0.0854, and 0.0948; water rate - 0.0213, 0.0202, and 0.0176. The two advanced RNNs perform similarly, but the GRU was slightly better in terms of runtime, reaching training convergence faster. These results indicate that the approach applied in this study can provide reliable data-driven history-match models for both field- and well-level production performance predictions.
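The relative squared error (RSE) metric quoted above is the ratio of the model's squared error to that of a mean-only forecast; the rates below are made-up numbers for illustration:

```python
import numpy as np

def relative_squared_error(y_true, y_pred):
    """RSE = sum((y - yhat)^2) / sum((y - ybar)^2): 0 for a perfect
    forecast, 1 for a forecast no better than the historical mean."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sum((y_true - y_pred) ** 2)
                 / np.sum((y_true - y_true.mean()) ** 2))

rates = [100.0, 95.0, 92.0, 88.0, 85.0]     # illustrative observed oil rates
forecast = [101.0, 94.0, 93.0, 87.0, 85.5]  # illustrative model forecast
rse = relative_squared_error(rates, forecast)
```

An RSE of 0.08, as reported for the oil rate, thus means the model's squared error is 8% of that of simply predicting the mean rate.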
Adaptive production forecast - a key element in petroleum reservoir digital transformation
Authors: S. Ursegov, A. Zakharian and O. Miklina
Summary: First, the existing objective limitations of computer-based forecasting of oil and gas production are discussed. Second, the possibilities of an adaptive system are presented as an alternative to traditional production forecasting options. Predicting future oil and gas production is extremely difficult, especially for each individual well; therefore, during the digital transformation of petroleum reservoirs, one should have a means of protection against false assumptions. One such tool is the adaptive forecasting system. The results presented in this work show that the reliability of the adaptive forecast is primarily due to the fact that the system extrapolates existing trends in petroleum production, combined with assumptions about the unrealized consequences of those trends, which may manifest themselves in the short or long term. The most significant difference between the adaptive system and today's popular machine learning and big data methods is that it uses multidimensional fuzzy-logic matrices containing about a thousand different parameters, some of which are taken from adaptive hydrodynamic models that are created in automated mode for each petroleum reservoir under study.
Decoding 3 Critical Building Blocks of Energy Digitalization
By F. Desloges
Summary: Moving data to the cloud is key to the future, and even the present, of the modern digital workspace. Energy companies especially must be able to access their data in the cloud to generate insights quickly and make critical business decisions in real time. However, this is a constant challenge for the industry, and it can often be a multidimensional problem given the size, complexity, format, and location of the data. In fact, in a recent survey by Bentley Systems, more than 83% of respondents stated that data management is a critical issue for their organization. Migrating to the cloud requires a strong, consistent digital data management strategy with several key components.
The Digital Revolution – Applications Towards Hydrocarbon Exploitation
Authors: I. Kjørsvik, K. Sorbo, A.K. Kvalheim and D. Stoddart
Summary: Advances in computational power, alongside exponentially growing subsurface databases, have prompted Wintershall Dea to start a journey that leverages new applications for hydrocarbon exploitation in exploration and development projects.
From Digital to Decisions - The Required Value Adding Step
Authors: A. Skorstad and D. Gese Jarque
Summary: Oil and gas companies are gaining ever-greater capabilities to generate value-adding data through the ongoing shift to large in-house and cloud-based computation and storage facilities. It is easy to get lost in the digitization journey and be blinded by all the computational power that high-performance computing unveils. The question then becomes: how do we transform the digital footprint of all this data into something that decision makers can easily relate to? In other words, what does sensible, aggregated decision-support data look like that adds value and justifies the production and collection of all the digital data that is and will be available in the future? What is noise, what is merely nice to have, and, most importantly, what are the key findings in the data?
The abstract discusses how the information gathered from ensemble-based methods can be utilized for better decision making, adding value over the traditional focus by treating uncertainties as an integral part of the modelling process.
Case Study: Predicting Missing Well Logs using Classification and Regression Methodology
Authors: R. Thiyagu, M.A. Muhammad Nazmi, A. Amirsaman, T. Shaur and M.Y. Mohamad Yusman
Summary: Well logs have long been key data for interpreting reservoir and rock properties. They are used by petrophysicists, geoscientists, and reservoir engineers for their respective interpretations and due diligence. Often these logs have gaps due to drilling or logging operational challenges. Several other factors influence well-log quality, such as tool type and sensitivity, mud type, or the geological condition of the subsurface. The team worked on a solution during the APGCE Geo Hackathon 2019 for predicting the missing sections in well logs using a classification and regression methodology.
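A minimal sketch of the gap-filling idea, assuming a k-nearest-neighbour regressor on a companion log (the hackathon team's actual models are not specified; all logs below are synthetic):

```python
import numpy as np

def knn_impute_log(target, features, k=3):
    """Fill NaN gaps in one log by k-nearest-neighbour regression on other
    logs recorded at the same depths: a simple stand-in for the
    classification/regression models in the abstract."""
    target = np.asarray(target, dtype=float)
    features = np.asarray(features, dtype=float)
    known = ~np.isnan(target)
    filled = target.copy()
    for i in np.where(~known)[0]:
        # Distance to every depth where the target log is recorded.
        d = np.linalg.norm(features[known] - features[i], axis=1)
        nearest = np.argsort(d)[:k]
        filled[i] = target[known][nearest].mean()
    return filled

gamma = np.linspace(20.0, 110.0, 10)   # companion gamma-ray log
sonic = 50.0 + 0.5 * gamma             # target sonic log, linear in gamma here
sonic_gappy = sonic.copy()
sonic_gappy[4] = np.nan                # a gap from a logging problem
filled = knn_impute_log(sonic_gappy, gamma[:, None], k=2)
```

Because the synthetic sonic log is exactly linear in gamma, averaging the two nearest neighbours reproduces the missing value; on real logs one would use multiple feature logs and a learned model rather than a plain mean.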