Fourth EAGE Digitalization Conference & Exhibition
- Conference date: March 25-27, 2024
- Location: Paris, France
- Published: 25 March 2024
System Based on Generative Adversarial Neural Networks (GAN) for Obtaining Acoustic Seismograms from Elastic Seismograms
Authors: D. Ramírez, S. Abreo, O. Reyes and A. Ramírez
Summary: In a seismic survey, transducers (geophones or hydrophones) record seismic traces that convey the interaction between seismic wavefields and geological layers. While these traces are fundamentally elastic due to the Earth’s natural layer elasticity, acoustic waves are often the focus of interpretation and imaging in seismic processing. This study introduces a method for isolating acoustic wave information from seismic traces by employing a 1D conditional generative adversarial network (cGAN). A GAN comprises two neural networks, a generator (G) and a discriminator (D), engaged in a zero-sum game. In this game, G learns to generate data with the same statistics as the training set, effectively converting elastic input into an acoustic equivalent. D’s role is to distinguish between real and generated data. In this context, the model is conditional, meaning that G’s generation of acoustic events depends on the statistical characteristics of the input elastic events during the training stage. This approach offers a promising solution for obtaining acoustical information in seismic data processing.
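The zero-sum game between G and D corresponds to two standard losses. The following is a minimal numpy sketch of those objectives only; the paper's 1D cGAN architecture and its conditioning on elastic events are not reproduced here:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # D maximizes log D(x) + log(1 - D(G(z))); we minimize the negative.
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def generator_loss(d_fake):
    # Non-saturating form: G maximizes log D(G(z)).
    return -np.log(d_fake).mean()

# At the game's equilibrium, D outputs 0.5 everywhere:
d_eq = np.full(4, 0.5)
print(round(discriminator_loss(d_eq, d_eq), 4))  # → 1.3863 (= 2 ln 2)
```

The generator loss shrinks as D is fooled, which is what drives G to produce acoustic-like traces statistically indistinguishable from the training set.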
Automatic First-Break Picking in Seismic Data with Characteristics of the Middle Magdalena Valley in Colombia
Authors: S. Rincón, O. Reyes, S. Abreo, O. Reyes and A. Ramírez
Summary: The manual detection of the first break (FB) in seismic data relies on visual recognition of amplitude and waveform variations by experts. However, for vast datasets generated during seismic acquisitions, this manual process becomes highly time-consuming and subject to individual interpretation. In this study, we propose an automated FB picking method based on neural networks for detecting the initial arrival of synthetic data resembling the characteristics of the Middle Magdalena Valley in Colombia, a basin historically associated with hydrocarbon exploration. Our approach involves supervised training of a neural network (NN) comprising a 1D convolutional layer and two dense layers. The NN categorizes reprocessed trace samples into two groups: pre-FB and post-FB. Subsequently, through post-processing, we identify the most likely sample corresponding to the FB. Our method successfully detects the first arrival in 77.36%, 91.19%, and 95.86% of cases, allowing for a margin of error of 10 samples when signal-to-noise ratios (SNRs) are 0 dB, 6 dB, and 20 dB, respectively. This demonstrates its effectiveness, particularly for noisy signal conditions.
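The post-processing step that turns per-sample pre-FB/post-FB classifications into a single pick can be sketched as a step-function fit. Both the scores and the picking rule below are hypothetical illustrations; the paper does not publish its exact criterion:

```python
import numpy as np

def pick_first_break(p_post):
    """Pick the first-break index from per-sample probabilities of the
    post-FB class by fitting an ideal step: samples before index k should
    score ~0, samples from k onward ~1. (Illustrative rule only.)"""
    p = np.asarray(p_post, dtype=float)
    n = p.size
    cum = np.concatenate(([0.0], np.cumsum(p)))      # cum[k] = sum(p[:k])
    k = np.arange(n + 1)
    # cost(k) = sum(p[:k]) + sum(1 - p[k:])
    cost = cum + (n - k) - (cum[-1] - cum)
    return int(np.argmin(cost))

scores = [0.1, 0.05, 0.2, 0.9, 0.95, 0.8]  # made-up NN outputs for one trace
print(pick_first_break(scores))            # → 3
```

Because the fit uses all samples rather than the first score above a threshold, isolated noisy classifications do not move the pick, which matches the method's reported robustness at low SNR.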
How Generative AI can Speed up the Preparation of Subsurface Study Reports with Fully Trusted Information
Authors: S. Severino, R. Taula, D. Mezzapesa, M. Piantanida, S. Rubini, M. Pagani and P. Lorenzi
Summary: We present a use case that leverages the power of generative AI to speed up the preparation of subsurface study reports, enabling the geoscientist to easily incorporate into the report fully trusted information collected from Eni proprietary systems, as well as from official Company reports validated through assurance reviews and milestone gates. We describe how to provide trusted paragraphs to a generative AI engine as input, so that the answers provided by the generative AI models are based on certified information that can be reliably incorporated into subsurface study reports. The techniques used to achieve this goal span: vector embeddings applied to validated documents to create a repository from which relevant information is retrieved and accurate responses to user queries are generated using a Large Language Model (LLM); fuzzy techniques to match the objects of the question with the content of the subsurface data platform; Knowledge Graphs (KG) prepopulated with the geological concepts extracted from regional studies and basin analysis reports; and a “Wiki BO” that consolidates the answers of the generative AI as validated descriptions of business objects (BO), such as plays and prospects.
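The retrieval step, finding validated paragraphs relevant to a user query via vector embeddings, reduces to a cosine-similarity ranking. A minimal sketch, where the 2-D vectors are made-up stand-ins for real document embeddings:

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k most similar documents by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(-(d @ q))[:k].tolist()

# Hypothetical embeddings of three validated paragraphs:
docs = np.array([[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]])
print(top_k(np.array([1.0, 0.1]), docs, k=2))  # → [0, 1]
```

The retrieved paragraphs would then be placed in the LLM prompt context, so generated answers are grounded in the certified repository rather than the model's parametric knowledge.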
Mixed Reality in Geology: Supercharging Geological Core Interpretation
Authors: K. Oogorah, M.A. Al Ibrahim and M.M. Mezghani
Summary: Integrating mixed reality (MR) technologies into the core interpretation process offers a promising solution to the challenges faced by traditional methods. The MR system, utilizing a headset with depth sensor technology, bridges the gap between digital data and physical core samples, allowing for the direct overlay of essential digital information onto physical core samples. This approach merges digital data with the physical world, significantly enhancing the accuracy and depth of core analysis. The depth sensor uses structured light range sensing, projecting a structured infrared light pattern onto the environment. As this infrared pattern interacts with objects in its path, the sensor captures the distortions in the pattern to accurately gauge depth and surface information, which is crucial for the device’s immersive mixed-reality experience. Users can view multiple logs overlaid on their corresponding core sample, with the system displaying relevant laboratory results directly on the physical core. The immersive MR experience encompasses wells, thin sections, and core sequence details, transforming the laboratory space into an informative display. Mixed reality technology creates a more interactive core analysis and interpretation environment, equipping geoscientists and engineers with dynamic tools for more precise examinations of geological cores.
Enhancing Well Operation Data Quality Assurance and Stakeholder Collaboration in an Established Corporate Data Hub
Authors: I. Casetto, P. Tempone, C. Piras, A. Fontana, F. Senestraro, D. Mezzapesa and G. Sala
Summary: The abstract highlights the crucial importance of maintaining data reliability and quality within the central corporate hub for real-time well data management. It introduces an approach focused on improving data acquisition and integrity by emphasizing the implementation of data monitoring mechanisms such as dashboards, KPIs, and KQIs for a comprehensive evaluation of data integrity and quality. Our methodology comprises two primary components: Data Monitoring and Quality Assurance & Stakeholder Collaboration. Collaboration with internal stakeholders and contractors plays a pivotal role in defining and improving data quality. This involves utilizing international standards and company-specific technical specifications to assess data quality while fostering a collaborative environment. Additionally, two recent examples have been included to further substantiate the evidence of the results achieved through this applied approach.
Automated Detection of Geological Lineament Using Deep Learning Model with Airborne Gravity Gradiometry (AGG) Dataset
By S. Ishinabe
Summary: Automated lineament detection based on a deep learning model, using gravity data and subsurface lineaments, was verified as a way to avoid the accuracy and consistency problems of manual lineament extraction. For training and detection, the model used both extracted lineaments and gravity data. The gravity data were transformed to a shape index, which was then input to the model. The detection architecture is composed of a “Lineament Detector” and a “Pseudo-Hough Transformer”, employing FPN and U-Net, respectively. The Pseudo-Hough Transformer, which is connected to the Lineament Detector, forces the model to highlight linear geometry. The automated detection closely tracks the extracted lineaments in the validation area, and shows a desirable geometry that is linearly corrected by the Pseudo-Hough Transformer. A subsequent evaluation reveals that the contribution of each lineament to the extraction accuracy is not simply a function of how difficult the lineament is to extract: the contribution tends to be lower for lineaments that can be predicted with relatively high accuracy even without being included in the training data.
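The gravity-to-shape-index transform can be sketched with the standard (Koenderink) curvature formulas for a gridded surface. The paper's exact preprocessing is not published, so the grid, spacing, and sign convention here are assumptions:

```python
import numpy as np

def shape_index(z, spacing=1.0):
    """Shape index of a gridded field z(x, y), in [-1, 1], from principal
    curvatures k1 >= k2 computed via the graph-surface curvature formulas."""
    zy, zx = np.gradient(z, spacing)          # first derivatives
    zyy, _ = np.gradient(zy, spacing)
    zxy, zxx = np.gradient(zx, spacing)       # second derivatives
    g = 1.0 + zx**2 + zy**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * g**1.5)
    K = (zxx * zyy - zxy**2) / g**2
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))
    k1, k2 = H + disc, H - disc
    # arctan2 handles the umbilic case k1 == k2 without dividing by zero
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

x = np.linspace(-1, 1, 21)
X, Y = np.meshgrid(x, x)
si = shape_index(X**2 + Y**2, spacing=x[1] - x[0])
print(round(si[10, 10], 2))  # apex of a paraboloid → 1.0 (|SI| = 1 at an umbilic point)
```

The shape index is scale-free, which is presumably why it is preferred over raw gravity values as model input.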
Correlational Analysis of MWD Data for Rock Mass Characterization and Risk Assessment
Authors: A. Sapronova, A. Hammoud, F. Klein and T. Marcher
Summary: The study demonstrates how Measurement-While-Drilling (MWD) data are used for real-time rock mass characterization, highlighting the importance of preprocessing MWD data to address their complex nature. Correlation analysis is central to understanding the relationships within MWD datasets, aiding in feature selection and reducing dimensionality. The research showcases the ability of correlational analysis to maintain and boost the data’s informational value, making it suitable for advanced applications like machine learning. The methodology integrates Spearman’s correlation coefficient to measure variable associations, emphasizing the exploration of relational dynamics over point-value analysis. The research demonstrates that models trained on data averaged over specific depth or time windows, via a ‘sliding window’ technique, outperform those trained on data averaged per borehole. This indicates that localized averaging captures essential information that enhances model performance. The research advocates for a comprehensive preprocessing regime as a precursor to effective data analysis and robust outputs from machine learning models.
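Spearman's coefficient and the sliding-window averaging can both be sketched in a few lines. This is a simplified illustration that assumes no tied values (real MWD channels would need tie-aware average ranks); the channel names and numbers are made up:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (assumes no tied values)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

def sliding_mean(x, window):
    """Average an MWD channel over a sliding depth/time window."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

rop = np.array([4.0, 5.0, 7.0, 11.0, 16.0])   # hypothetical rate of penetration
torque = np.array([2.0, 2.5, 3.1, 4.0, 5.2])  # hypothetical torque channel
print(spearman(rop, torque))                  # → 1.0 (monotonically related)
print(sliding_mean(rop, 3))                   # windowed averages, one per position
```

Because Spearman works on ranks, it captures the monotone "relational dynamics" the study emphasizes even when the raw channels are nonlinearly related.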
GECO: Integrating Structured and Unstructured Geological Data to De-Risk Any Kind of Surface and Subsurface Geohazard
Authors: P. Sala, C. Caborni, N. Lamonaca, G. Davoli, M. Piantanida, A. Singh, M. Far, V. Nalagatla and T. Jia
Summary: The GECO (Geological ECOsystem) project aims to implement an ecosystem where any data source, structured or unstructured, can be integrated and used to extract key insights and to develop the geological model for the area of interest. The vision behind GECO is first and foremost to provide geoscientists with a digital playground where they can structure relationships and promote know-how exchange among specialists, increasing the efficiency and accuracy of search results and data analytics while reducing realization time. Moreover, GECO aims to support the geoscience community in the development and deployment of new digital workflows.
Optimization of the Design and Operation of Low-Carbon Aggregated Energy Systems
Authors: S. Sala, A. Amendola, G. Gioco, M. Primato, C. Aceto, D. Belverato, E. Martelli, V. Dipierro and F. Alpegiani
Summary: So-called Aggregated Energy Systems (AES) represent an opportunity to satisfy energy demand more efficiently and economically than traditional energy systems. For instance, this may be achieved by integrating variable (renewable) and dispatchable energy sources (e.g. gas turbines), possibly combined with both seasonal and short-duration storage systems. To support the optimal design and operation of AES, optimization tools based on a linear programming (LP) or mixed-integer linear programming (MILP) formulation are commonly employed. In this work, we have developed a MILP optimization model tailored to the design of utility systems for heat and power generation in upstream facilities. The tool is suitable for particularly challenging scenarios (e.g. FPSOs) where traditional generation is integrated with variable renewables.
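A generic MILP formulation of this kind of design-and-dispatch problem has the following textbook-style shape (a sketch with assumed symbols, not the authors' model):

```latex
\begin{aligned}
\min_{P,\,E,\,u,\,y}\; & \sum_{t}\left( c^{\mathrm{fuel}}\,P^{\mathrm{gt}}_{t} + c^{\mathrm{start}}\,y_{t} \right) \\
\text{s.t.}\; & P^{\mathrm{gt}}_{t} + P^{\mathrm{ren}}_{t} + P^{\mathrm{dis}}_{t} - P^{\mathrm{ch}}_{t} = D_{t}
  && \text{(power balance)} \\
& E_{t+1} = E_{t} + \eta_{\mathrm{ch}}\,P^{\mathrm{ch}}_{t} - P^{\mathrm{dis}}_{t}/\eta_{\mathrm{dis}}
  && \text{(storage balance)} \\
& P^{\min} u_{t} \le P^{\mathrm{gt}}_{t} \le P^{\max} u_{t}, \quad u_{t} \in \{0,1\}
  && \text{(gas-turbine commitment)} \\
& y_{t} \ge u_{t} - u_{t-1}
  && \text{(start-up indicator)}
\end{aligned}
```

The binary commitment variables $u_t$ are what push the problem from LP into MILP; seasonal storage simply extends the storage-balance constraint over a yearly horizon.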
Core Description Digitization from Paper to Byte
Authors: G. Chirila, N. Naji, J. Estevez Gonzalez and C. Reid
Summary: Sedimentological core description plays a crucial role in the understanding of subsurface data and is essential for geological research. In the past, vertical cores extracted during drilling were described in a graphical format. In recent years there has been a technological shift towards capturing rock properties directly in digital format, using a range of proprietary hardware and software. The digitization process started with data standardization, whereby templates and agreed-upon symbols were implemented. The next step was to create the appropriate data model to accommodate digitized core descriptions. For incorporation in the database, a list of codes for each geological feature (texture, bedding contacts, sedimentary structures, etc.) was created. Furthermore, to expedite and automate data extraction from the standardized templates prior to upload to the database, a customized export script was created. We have also produced customized scripts to map and extract the digitized core descriptions directly to the various basin modeling and visualization software packages used within the company, through dedicated templates.
Artificial Intelligence Assisted Deviation Survey Extraction from Well Reports
Summary: A well deviation survey, also called a directional survey, provides measurements of a well’s inclination and azimuth at various locations along the well trajectory during drilling, to monitor and control the trajectory. Obtaining a deviation survey has multifold advantages, such as identifying geohazard risks and regions of drilling complications, selecting the best-suited drilling techniques, understanding the subsurface geology, and building a comprehensive knowledge base useful for future well planning.
Well deviation surveys are commonly presented in tabular format in multipage well reports in PDF/TIFF format. Despite the immense value of this information, extracting it from well reports has predominantly been a manual effort. Manual efforts demand increased working hours, are prone to errors, and face challenges in scalability, quality control, and adaptability to new tasks. This paper presents an attempt to alleviate these limitations by introducing an Artificial Intelligence (AI) assisted workflow that brings time efficiency, accuracy, and scalability to extracting well deviation surveys from multipage documents. Digitizing deviation survey data at scale and storing them in a structured data format (e.g. CSV, JSON) makes their analysis and their application in downstream tasks effective.
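The final structuring step is simple once the table rows are extracted. A sketch of turning hypothetical extracted rows into the JSON records mentioned above (field names are illustrative, not the authors' schema):

```python
import json

# Hypothetical rows extracted by the AI workflow from a report table:
# (measured depth [m], inclination [deg], azimuth [deg])
rows = [("0.0", "0.00", "0.0"), ("500.0", "1.25", "212.4"), ("1000.0", "3.80", "215.1")]

records = [
    {"md_m": float(md), "inc_deg": float(inc), "azi_deg": float(azi)}
    for md, inc, azi in rows
]
print(json.dumps(records[1]))  # → {"md_m": 500.0, "inc_deg": 1.25, "azi_deg": 212.4}
```

Coercing the string cells to floats at this stage is also where simple quality control (range checks on inclination and azimuth, monotonic depth) would naturally sit.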
Revolutionizing SCAL Data Digitalization: Maximizing Analog Data and Utilizing Trend Analysis for Full Field Applications
Authors: M.N.A. Akbar and O. Tunik
Summary: This study explores the transformation of Special Core Analysis (SCAL) data utilization as part of subsurface digitalization. The focus is on a cloud-native solution for leveraging SCAL data from the Norwegian Continental Shelf (NCS) and maximizing these resources to generate synthetic capillary pressure and relative permeability curves without the need for new core samples.
This innovative approach involves digitizing, structuring, and integrating SCAL data into a unified cloud platform, employing AI-based similarity ranking and SCAL trend analysis. The method integrates data collection, interpretation, and storage in the native cloud-based system, utilizing statistical techniques to identify analogous SCAL data. It enables the prediction of representative relative permeability and capillary pressure curves, as illustrated through field examples and uncertainty ranges based on the LET method.
Key conclusions highlight three main solutions: Core Data Management, SCAL Analog, and SCAL Trend Analysis. These solutions enhance reservoir characterization, facilitate data-driven decisions, and support full-field applications. This platform is adaptable, transparent, and open to further enhancements like SCAL upscaling and rock typing, presenting promising potential for broader applications in the oil and gas industry.
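The LET method referenced above parameterizes a relative-permeability curve with three shape parameters (L, E, T) plus end points. A sketch of the water-phase correlation; all parameter values here are illustrative, not fitted to NCS data:

```python
def let_krw(sw, swi=0.2, sor=0.2, krw_end=0.6, L=2.0, E=2.0, T=1.5):
    """Water relative permeability from the LET correlation
    (Lomeland, Ebeltoft & Thomas). Parameter values are illustrative."""
    se = min(max((sw - swi) / (1.0 - swi - sor), 0.0), 1.0)  # normalized saturation
    return krw_end * se**L / (se**L + E * (1.0 - se)**T)

print(let_krw(0.2))  # → 0.0  (at irreducible water saturation)
print(let_krw(0.8))  # → 0.6  (end-point value at residual oil saturation)
```

Fitting L, E, and T to analog SCAL measurements is what lets the platform return full synthetic curves with uncertainty ranges instead of a single tabulated dataset.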
Advancements in Seismic Imaging and Earth Model Building Using Digital Transformation
Authors: A. Ali, G. Menzel, I. Pasechnik, D. Fernandez and S. Adekunle
Summary: This paper unveils a shift in the world of seismic data processing through digital transformation, redefining Earth model building and interpretation workflows. Faced with escalating data volumes and tight project timelines, the seismic industry has demanded faster delivery of top-tier, efficiently processed data. This paper showcases how digital transformation addresses these challenges, enhancing subsurface understanding and reducing exploration uncertainties.
The authors highlight advancements in seismic processing, Earth model building, and interpretation workflows across two key areas: user-driven interface transformation and geophysical advancements. They explore a transition from standalone algorithms to cohesive workflows, meta-parameterization paradigms, and a user-centric workspace. On the geophysics front, the authors present a newly developed space-partitioned model representation, tomography algorithm modifications, and the integration of machine learning for seismic interpretation. High-performance cloud computing engines further amplify the computational capabilities.
The results section demonstrates the practical application of digital transformation in seismic tomography, revealing a remarkable 70% reduction in turnaround time and substantial cost savings. The approach yields consistent, stable, geologically plausible models with enhanced interpretability, showcasing the transformative power of digital processing in seismic geophysics.
In conclusion, digital transformation represents a steady step forward, not only reducing turnaround time by 70% but also minimizing errors and exploration uncertainty and improving the consistency of results.
An Efficient Retraining Framework for Raster Segmentation
Authors: O.A. Gune and P. Mangsuli
Summary: Oil and gas raster images are an important source of data for understanding reservoir characteristics, drilling operations, and future well planning. Raster digitization, i.e., converting traditional paper raster logs into digital formats, can help tap the valuable information across diverse sets of rasters for various applications such as well log interpretation and correlation. As the first step towards raster digitization, the raster segmentation task automatically extracts the important raster regions such as the plot segment, depth track, and log headers. While raster segmentation can be performed effectively using modern deep learning (DL)-based methods, such methods perform poorly on raster data that was not used during DL model training. In this work, a novel retraining framework is proposed to improve the performance of the raster segmentation module on out-of-(training)-distribution data. Retraining a DL model is a time-consuming process because of the data labeling requirement. The proposed framework overcomes the challenge of large-scale data labeling by demanding very few data labels from the user while simultaneously improving model performance. Moreover, the retraining framework can also address data residency and privacy issues with respect to different users.
Deep Reinforcement Learning Algorithm for Wellbore Cleaning Across Drilling Operation
Authors: S. Keshavarz, A. Elmgerbi and G. Thonhauser
Summary: We propose a novel framework for real-time drilling operation planning updates using a deep reinforcement learning algorithm, enabling drilling process reactions to be automated. The framework includes a decision tree algorithm to represent the environment's dynamic changes under the imposed actions, in parallel with a Gaussian process algorithm to quantify the safe operating window in real time. Combining these two algorithms yields a Markov Decision Process (MDP) environment for a decision-making system.
We demonstrate the effectiveness of our framework by implementing an off-policy deep reinforcement learning algorithm, using a deep Q-learning network to create experiences, and employing synchronous updates on the agent. Given the essence of reinforcement learning, the framework can be efficiently implemented for on-the-spot decision-making, allowing the driller to receive an effective sequence of actions considering company policies.
Our algorithm achieves state-of-the-art performance on weight-to-slip hole conditioning operation, a wellbore cleaning operation after drilling a stand before connecting to the next pipe. The performance evaluation exhibits its efficiency in real-time operation overhaul, eliminating non-value-added activities. Our framework thus opens the door for automating the process based on the operating parameters obtained in real-time.
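The off-policy update at the heart of deep Q-learning reduces, in its simplest tabular form, to the following rule. This is a didactic sketch: the paper uses a deep Q-network rather than a table, and the states and actions below are invented:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One off-policy Q-learning step: move Q(s, a) toward the bootstrapped
    target r + gamma * max_a' Q(s', a')."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

Q = np.zeros((3, 2))  # 3 hypothetical hole-condition states x 2 actions
q_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q[0, 1])  # → 0.1
```

In the deep variant, the table is replaced by a network and the same target drives a gradient step, with experiences replayed from a buffer, which is what lets the agent learn an action sequence for hole conditioning from logged operations.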
Navigating Data Seas: Introducing Wintershall Dea’s Data Hub
Authors: C. Hermanns, J.P. Major and K. Bollerhei
Summary: Wintershall Dea has created a central Data Hub solution with the help of different disciplines across the company, from geologists to developers. We live in a changing world with ever-growing volumes of data; we have therefore created a solution that suits many different requirements, built on modern, state-of-the-art technologies, to make use of all the existing data in an easy, user-friendly way.
Graph Offset Wells Analysis for Mud Weight Prediction Using Seismic Attributes
Authors: M. Pakhomov, N. Bukhanov and M. Abughaban
Summary: An accurate mud weight choice is an essential part of a new well plan, allowing engineers to optimize construction cost and reduce well control risk. The choice becomes crucial if drilling is to be done through a formation with anomalously high reservoir pressure. We propose a graph-based solution that can computationally combine seismic information from nearby wells to provide a relevant and accurate prediction supporting the drilling engineer's decisions during the well planning stage. Our experiments show that the graph method outperforms baseline machine learning approaches and conventional geostatistics.
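The idea of combining information from offset wells connected in a graph can be illustrated with the simplest possible aggregation, an inverse-distance weighting over neighbors. This is a baseline-style sketch with made-up numbers, not the authors' trained graph model:

```python
import numpy as np

def neighbor_weighted_prediction(values, distances):
    """Predict a target (e.g. mud weight) at a planned well from offset
    wells, weighting each graph neighbor by inverse distance."""
    w = 1.0 / np.asarray(distances, dtype=float)
    return float(w @ np.asarray(values, dtype=float) / w.sum())

mud_weights = [1.20, 1.35, 1.28]  # hypothetical offset-well mud weights (s.g.)
dists_km = [1.0, 2.0, 4.0]        # hypothetical graph-edge distances
print(round(neighbor_weighted_prediction(mud_weights, dists_km), 3))  # → 1.254
```

A learned graph method replaces these fixed inverse-distance weights with weights inferred from seismic attributes along the edges, which is where the reported gain over plain geostatistics comes from.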
Can Language Models Solve Complex Subsurface Data Integrations: Building Subsurface Copilots with Large Language Models (LLMs)
Authors: T.B. Grant, J. Goldwater and E. Knudsen
Summary: Subsurface data is stored across multiple applications, databases, and a wide range of flat file formats. Data may have different modalities (text, image, numerical), be unstructured or structured, or conform to different domain models. This variation makes data interoperability a major challenge for the industry. Finding, querying, and analyzing data is therefore a difficult and time-consuming task for subsurface professionals. We explore how recent advancements in artificial intelligence, specifically large language models, can be used to extract unstructured data from text into a domain model that can then easily be combined with application data. Once the domain model is populated with multiple sources, language model-based agents can be used to ask any natural language question against the data. We find that language models are effective tools for data extraction when combined with domain knowledge and field descriptors in the prompt context. In our design, the domain model and field descriptors are highly customizable depending on the user’s data ecosystem. We also find that language model-based agents are effective at distilling user queries against the data once it is combined in a common data model.
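The hand-off from LLM extraction to the common domain model benefits from a strict validation step, so malformed extractions never reach the shared data layer. A sketch with an invented three-field schema; the field names are hypothetical, not the authors' actual domain model:

```python
# Hypothetical domain-model fields (the "field descriptors" would describe
# each of these in the prompt context):
SCHEMA = {
    "well_name": str,
    "top_depth_m": float,
    "formation": str,
}

def to_domain_model(extracted: dict) -> dict:
    """Coerce an LLM-extracted record into the domain model, failing loudly
    on missing fields."""
    out = {}
    for field, typ in SCHEMA.items():
        if field not in extracted:
            raise KeyError(f"extraction missing required field: {field}")
        out[field] = typ(extracted[field])
    return out

rec = to_domain_model({"well_name": "A-12", "top_depth_m": "2345.5", "formation": "Brent"})
print(rec["top_depth_m"])  # → 2345.5
```

Once records from many sources pass through the same coercion, agents can query one consistent structure rather than reconciling formats at question time.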
A Source of Truth Approach: Integrating Microsoft Fabric and Anomaly Detection for Reliable Well Data Management
By T. Freyd
Summary: The project aims to establish a reliable Source of Truth (SoT) for well data management in the oil and gas industry in Norway. The SoT will automatically compare and rectify discrepancies in well metadata and data from the customer, the Norwegian Petroleum Directorate (NPD), and the Operator. The process begins with data ingestion facilitated by Microsoft Fabric, which allows the ingestion of data from diverse sources and formats. The extraction of data from unstructured PDFs from Diskos is addressed using AI and NLP techniques. The identified discrepancies and the corrected metadata are then presented to the customer in a Power BI dashboard. The project initially focuses on well coordinates before delving into more detailed well data.
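The core of the coordinate comparison can be sketched as a pairwise distance check across sources. A minimal illustration assuming local easting/northing in metres; the source names, tolerance, and coordinates are made up:

```python
import math

def flag_discrepancies(sources, tol_m=50.0):
    """Compare well surface coordinates reported by different sources and
    return the pairs further apart than tol_m metres."""
    names = list(sources)
    flags = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            (x1, y1), (x2, y2) = sources[names[i]], sources[names[j]]
            if math.hypot(x2 - x1, y2 - y1) > tol_m:
                flags.append((names[i], names[j]))
    return flags

coords = {"operator": (456200.0, 6781450.0), "NPD": (456205.0, 6781448.0),
          "legacy_db": (456900.0, 6781450.0)}
print(flag_discrepancies(coords))  # → [('operator', 'legacy_db'), ('NPD', 'legacy_db')]
```

Flagged pairs would then feed the Power BI dashboard for review, with the agreeing majority suggesting which value the SoT should keep.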
Effect of Image Augmentation and Data Quality Control on Lithology Classification of Whole Core Images
Authors: S.Z. Ghavami, S. Sadeghnejad, D. Khoozan and T. Schäfer
Summary: Lithology is a key factor in petroleum exploration and reservoir characterization. The traditional method for lithology classification in reservoir wells is usually performed manually, which makes it expensive and labor-intensive; it can also be affected by user bias. Machine learning algorithms have proved to be a suitable alternative for automatic lithology identification from well log and coring data. While previous studies predominantly focused on predicting lithology from well logs using shallow networks, recent attention has shifted toward whole core images using deep learning approaches. In this study, a convolutional neural network is implemented to classify whole core images into three classes: limestone, sandstone, and shale. The integration of image augmentation techniques and data quality control noticeably enhanced prediction accuracy compared with prior published research. By utilizing different architectures (ResNeXt-50, ResNet-50, and a CNN-based model) and fine-tuning hyperparameters, the ResNeXt-50 model achieved an impressive accuracy of 97.65% on unseen data for the classification of whole core images from 28 wells in southern Australia.
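Image augmentation for core patches can be as simple as geometric flips, which preserve lithological texture while multiplying the training set. A minimal numpy sketch; the paper's full augmentation set is presumably richer (rotations, color jitter, etc.):

```python
import numpy as np

def augment(image):
    """Return the original core-image patch plus horizontal and
    vertical flips (a minimal augmentation set)."""
    return [image, image[:, ::-1], image[::-1, :]]

patch = np.arange(6).reshape(2, 3)  # tiny stand-in for a core-image patch
aug = augment(patch)
print(len(aug))  # → 3
```

Each augmented copy keeps the original label, so the network sees the same lithology under transformations it should be invariant to.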