First EAGE Digitalization Conference and Exhibition
- Conference date: November 30 - December 3, 2020
- Location: Vienna, Austria
- Published: 30 November 2020
Prediction of Hydrocarbon Using 1-D Numerical Modelling of Low Temperature Thermochronology for Various Basin Scenarios
Authors: J. Singh and D. M. Whipp

Summary: A thermochronometer age records the time elapsed since a rock cooled through the thermochronometer's closure temperature. The degree of maturation of organic matter (kerogen) in a rock depends on the maximum temperature reached in its thermal history and the time spent in that temperature range. Low-temperature thermochronology relates cooling ages to the oil and gas window and helps identify whether kerogen has been overcooked or preserved in the rock, because low-temperature thermochronometers have effective closure temperatures that overlap the oil and gas window: apatite fission track (AFT) has a closure temperature of ~120°C, while the oil window lies between 50° and 150°C. A 1-D numerical model for predicting AFT ages under various basin exhumation and subsidence scenarios has been developed in Python. The model requires material parameters and characteristics such as subsidence and exhumation rates, thermal conductivity, thermal diffusivity, thermal gradient, transition time, and specific heat. It provides the AFT age, fission-track length distribution statistics in the mineral at the present time, and the depth history of the rock.
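As a rough illustration of the kind of 1-D thermal-history calculation described above (this is not the authors' model: it replaces fission-track annealing kinetics with a single ~120°C closure temperature and assumes a steady-state geotherm; all parameter values are invented), the time since a rock cooled below closure can be sketched as:

```python
import numpy as np

def aft_age_simple(z0_km, exhum_rate_kmMyr, grad_CperKm, t_total_Myr,
                   surf_T=10.0, Tc=120.0):
    """Crude AFT age estimate: time since the rock cooled below the closure
    temperature Tc during steady exhumation. Illustrative only; real AFT
    modelling tracks annealing kinetics, not a single closure temperature."""
    t = np.linspace(0.0, t_total_Myr, 2001)           # Myr since exhumation began
    depth = np.maximum(z0_km - exhum_rate_kmMyr * t, 0.0)
    temp = surf_T + grad_CperKm * depth               # steady-state geotherm
    below = temp < Tc
    if not below.any():
        return 0.0                                    # never cooled below Tc
    t_cross = t[np.argmax(below)]                     # first time T < Tc
    return t_total_Myr - t_cross                      # age = time since cooling

# A rock starting at 6 km depth, exhumed at 0.5 km/Myr for 20 Myr
# under a 25 degC/km gradient, cools through 120 degC ~3.2 Myr in,
# giving an AFT age of roughly 16.8 Myr.
age = aft_age_simple(6.0, 0.5, 25.0, 20.0)
```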
Geostatistic Recognition of Genetically Distinct Lacustrine Shale Facies Based on Big Data Technology
By S. Lin

Summary: With breakthroughs in shale oil and gas exploration, the strong heterogeneity of shale has received considerable attention. Traditional geological methods of shale classification struggle to meet the current needs of exploration and development. How to complete a genetic classification of shale that comprehensively considers many parameters, including genesis and characteristics, is an important problem. This paper demonstrates that geostatistical recognition based on big data analysis can process large volumes of data and identify genetically distinct shale facies, improving understanding of variation within a single shale formation. The approach (1) assigns genetic affinities and (2) provides a confidence level in the classification for any additional shale samples.
Geological Synthesis and Analysis of Potential Petroleum Systems of the Bida and Sokoto Basins in Nigeria
Summary: The Bida and Sokoto Basins are two of Nigeria's inland basins, part of a series of Cretaceous and later rift basins in Central and West Africa whose origin is related to the opening of the South Atlantic. Aeromagnetic interpretation has assisted in interpreting the geology of the basins. Organic geochemical studies show that the Kudu Shale in the northern Bida Basin (equivalent to the Ahoko Shale in the southern Bida Basin) and the Dukamaje Formation (dark shales and limestones) in the Sokoto Basin constitute the source rocks of the potential petroleum system. With average values of 40 m for source rock thickness, 45,000 km² for basin area, 9.0 wt% for TOC, and 220 mg HC/g TOC for HI, charge modelling indicates 623 million barrels of oil equivalent of extractable hydrocarbons in the Bida Basin at the appropriate kitchens.
Deep Learning a Poro-Elastic Rock Physics Model for Pressure and Saturation Discrimination
Authors: W. Weinzierl and B. Wiese

Summary: Utilizing time-lapse seismic to determine pore pressure and saturation effects is relevant for hydrocarbon production as well as for natural gas and CO2 storage. Its quantitative interpretation enables a detailed understanding of the 4D evolution of fluid/gas migration. We focus on the rock physics model to invert for rock-physical parameters. A training dataset is generated with a forward modelling operator, with parameters adapted from a 65 m deep unconsolidated high-porosity reservoir at the Svelvik field laboratory, Norway. Two independent rock-physical formulations are considered, and multiple deep fully connected neural networks are conditioned and trained to invert for different rock physics parameters. The networks can rapidly derive rock-physical parameters such as pressure, saturation, and porosity from seismic attributes, thus acting as an inversion tool. Subsets of the input parameters can be preset based on prior knowledge of a site. Utilizing neural networks to discriminate pressure and saturation allows real-time field-site conformance verification during seismic campaigns targeting 4D effects in an operational scenario.
IoT and Openness as the Design Language for Changing the Landscape for Oilfield Workflows
Authors: R. Toolsi, A.A. Aqrawi and K. Jansa

Summary: Digital transformation has been a buzzword for several years; however, this transformation is both technological and hierarchical. In that spirit, if we look at IoT as simply new technology with edge gateways, sensors, and actuators, then we are only driving innovation in technology. We can instead treat IoT as a design language and make changes to the organization, communication, processes, and people.
We propose making IoT part of the operational roadmap, which can liberate data, remove silos, and create new business opportunities. Management can get real-time dashboards of all operations and make better Opex and Capex decisions; it is also possible to automate several functions, giving better control and improving reaction time. IT would no longer be a vertical by itself but rather integrated as necessary to better support business objectives.
For employees, better access to quality data improves decision making and gives visibility across several different aspects of operations, empowering them to make decisions horizontally where traditionally they were only able to do so vertically.
Lifecycle Management in a Dynamic Open Ecosystem
Authors: K. Jansa, A.A. Aqrawi and R. Toolsi

Summary: The rise and growth of software ecosystems over the last decade has resulted in a significant transition in software engineering practices.
This presents a tremendous opportunity in the E&P sector to deliver tailored, instant solutions to the user. By transitioning to a cloud-based E&P environment, we can effectively manage the lifecycle of software used in a monolithic desktop application from development, validation, deployment and ultimately to retirement. In this shared environment, the user has complete control and the extensibility of a monolithic desktop application in a cloud setting can be maintained without losing invested value in technology.
Lifecycle Management incorporates multiple disciplines: project management, requirements management, software development, software testing, quality assurance and customer support.
The ecosystem allows the platform owner to communicate the platform's release milestones to the software developer community and clearly state the requirements, so developers can adjust their development cycles to ensure that their updates are submitted for validation on time to keep pace.
Machine Algorithm for Predicting Shale and Sand Arrangement Using Seismic Attributes
Authors: S. Gabitova and M. Naugolnov

Summary: This work presents a tool for lithology prediction using seismic data inversion and a classifier built with machine learning algorithms. A hidden connection was found among the seismic interpretation inputs (P-impedance, Vp/Vs, and NTG) that allows the probability of water- and oil-saturated layers to be predicted with high precision in understudied and unexplored field sectors.
Multiple express tests were run on different machine learning classification algorithms that learn from examples (labelled data): boosting-based methods (Gradient Boost, XGBoost, CatBoost), bagging (Random Forest), Support Vector Machines (SVM), and K-Nearest Neighbors (KNN). The best methods, Gradient Boost, XGBoost, and CatBoost, were studied more carefully. As a result, a boosting-based method was found that allows evaluating data, building a model, and then forecasting sand and shale arrangement with high precision in undrilled and poorly studied zones. The method can be applied where shales and sands are hard to distinguish because of insufficient contrast in the elastic inversion data. We showed that it exhibits relatively high classification accuracy, classifies lithotypes (sands and shales), and builds maps of sand probability.
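A minimal sketch of the kind of boosting-based lithology classifier the abstract describes (not the authors' workflow: the feature names merely echo the inputs mentioned, and the data and decision rule are entirely synthetic):

```python
# Train a gradient-boosting sand/shale classifier on synthetic seismic
# attributes; feature names (p_imp, vp_vs) mimic those in the abstract.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
p_imp = rng.normal(7000, 800, n)            # synthetic P-impedance
vp_vs = rng.normal(1.9, 0.15, n)            # synthetic Vp/Vs ratio
# Invented rule: sands tend to lower Vp/Vs and lower impedance...
is_sand = ((vp_vs < 1.9) & (p_imp < 7200)).astype(int)
flip = rng.random(n) < 0.05                 # ...plus 5% label noise
y = np.where(flip, 1 - is_sand, is_sand)
X = np.column_stack([p_imp, vp_vs])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)                 # sand/shale accuracy on held-out data
```

The same fit/score pattern applies to the other classifiers the authors compared (Random Forest, SVM, KNN), which makes this style of "express test" quick to run.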
Super Resolution of Fault Plane Prediction by a Generative Adversarial Network
Authors: F. Jiang and P. Norlund

Summary: Interpreting seismic data enhances understanding of subsurface geological features, particularly for assisted fault interpretation. The results of assisted fault interpretation workflows can provide valuable information to optimize hydrocarbon production during drilling and stimulation treatments. However, given the complexity of seismic data, such workflows can generate incorrect or misleading interpretations, such as discontinuous fault segments and mispositioned fault planes, particularly when deep-learning convolutional neural networks are used. Fault extraction results often struggle to locate the fault plane where reflectivity or signal-to-noise ratio is low. In this abstract, a novel approach is introduced that uses a super-resolution generative adversarial network to improve the resolution of fault prediction results. Synthetic fault data were generated to train the adversarial model, which was then applied to different field data sets. This approach could serve as a standard post-processing workflow to decrease uncertainty as part of an assisted fault interpretation approach, and provides an efficient way to improve the fidelity of fault prediction results.
Clastic Reservoir Rock Grain Size Estimation from Wireline Logs Using a Random Forest Model: Initial Results
Authors: F. Anifowose, S. Shahrani and M. Mezghani

Summary: Grain size is a key input to various reservoir models, which require a continuous log of grain size; however, core samples are usually not available over the entire reservoir section. The most accurate grain size measurements come from sieve and laser particle size analyses, but these methods are expensive. The conventional method, visual core description, is time-consuming, subjective, and non-reproducible. Alternative methods include empirical equations, nuclear magnetic resonance (NMR) relaxation times, and acoustic velocities, but these require inputs that are not sufficiently available, are not applicable to different geological settings, or are not available for all wells. This paper proposes a new methodology that estimates reservoir rock grain size for a new well or reservoir section from archival core description data and the corresponding wireline logs using machine learning. Nine wells from a clastic reservoir are used: seven are combined to build the training set, while the remaining two are used for model validation. Three machine learning methods are implemented and trained with optimized parameters. The results showed that, despite the subjectivity and bias associated with the core description data, the machine learning methods are capable of estimating grain size for the validation wells.
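The title names a random forest, so a sketch of the idea can be hypothesized as follows (not the paper's model: the log curves GR, RHOB, NPHI and the synthetic grain-size relation are assumptions for illustration; the paper trains on real core descriptions):

```python
# Learn grain size from wireline-log curves with a random forest regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 1500
gr = rng.uniform(20, 120, n)                # gamma ray, API units
rhob = rng.uniform(2.0, 2.7, n)             # bulk density, g/cc
nphi = rng.uniform(0.05, 0.35, n)           # neutron porosity, v/v
# Synthetic "core description" target: coarser grains at low GR, plus noise
grain_mm = 0.5 - 0.003 * gr + 0.1 * (2.65 - rhob) + rng.normal(0, 0.02, n)
X = np.column_stack([gr, rhob, nphi])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:1200], grain_mm[:1200])        # stand-in for the 7 training wells
r2 = model.score(X[1200:], grain_mm[1200:]) # stand-in for the 2 validation wells
```

The well-wise train/validation split in the paper (seven wells vs. two) is emulated here only crudely by an index split.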
Pathways to Exploration Success – Orchestrating the Steppingstones
Authors: S. Roy, C. Castagnac, T. Levy and S. Nollet

Summary: Across the E&P industry, the definition of corporate exploration success differs from organization to organization. It can vary widely: from investing early in the right piece of acreage for an investment company, to drilling a discovery well for an exploration-only company, to matching the produced hydrocarbon volume from the subsurface with the discovery volume claimed by the exploration department for a major oil and gas producer. Most organizations define their business processes according to their corporate goals and establish stage-gate processes for decision making.
The exploration processes framework guides teams to identify and prioritize the work to be done according to the business goal, facilitates collaboration between domains, and helps each domain understand its roles and responsibilities in achieving the common goal and how its work will be used in the next step.
The digital solution embeds the exploration processes framework into a cloud-native application; with the preservation of data and knowledge, it empowers teams to integrate analysis from multiple exploration domains and increases understanding of regional geology to find and mature opportunities. The digital technology successfully establishes the connection between business goals and technical project execution.
Inverting Elastic Model Properties Using ResNet
Summary: We develop a novel seismic data inversion method to estimate the properties of subsurface layered elastic models using a Convolutional Neural Network (CNN). Specifically, we use ResNet (Residual Neural Network), chosen for its distinctive identity-block architecture, to predict the parameters of layered elastic models, including layer depth, layer density, and P- and S-wave velocities. The entire dataset consists of 10,000 layered elastic models and their corresponding single-shot records. We use 80% of the pairs for training, and the trained network then makes predictions on the remaining models. Our trained network delivers satisfying predictions on both simple (few-layer) and complex (multi-layer) models, suggesting that the proposed approach could be a useful tool for data processing, especially when dealing with near-surface layered models.
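To make the "identity block" concrete, here is a minimal numpy sketch of a residual block, the building unit of the ResNet the abstract refers to (shapes and weights are illustrative, not the authors' architecture):

```python
# y = relu(x + F(x)) with F a small two-layer transform; the identity
# shortcut lets gradients bypass F, which eases training of deep networks.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, b1, W2, b2):
    h = relu(x @ W1 + b1)           # inner transform F, first layer
    return relu(x + h @ W2 + b2)    # add the identity shortcut, then activate

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))        # batch of 4 feature vectors, width 16
W1 = rng.normal(scale=0.1, size=(16, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 16)); b2 = np.zeros(16)
y = residual_block(x, W1, b1, W2, b2)   # output has the same shape as input
```

Stacking many such blocks (with convolutions instead of dense layers) yields the ResNet used for the layer-parameter prediction described above.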
Robust Evaluation of Fault Prediction Results: Machine Learning Using Synthetic Seismic
Authors: M. Sarajaervi, T. Hellem Bo, B. Goledowski and M. Nickel

Summary: Metrics to assess machine learning methods are necessary for evaluating results and comparing them to existing methods. For the segmentation of faults in seismic data, we suggest a robust Jaccard metric that allows for small lateral inaccuracies in fault positioning. This error tolerance is necessary because interpretations are often inaccurate or subjective as a result of low seismic resolution, noise, or other image deficiencies. The metric is used to evaluate results during the development of a 3D convolutional neural network. In practice, this is done by applying new versions of the network to field data and using the metric to compare machine learning results to manual interpretations.
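One way such a tolerance-aware Jaccard metric could look (the authors' exact formulation is not given in the abstract; this sketch dilates the reference mask by a small radius so predictions a pixel or two off still count as overlap):

```python
# Jaccard index with lateral tolerance: dilate the truth mask by `tol`
# pixels before intersecting with the prediction.
import numpy as np
from scipy.ndimage import binary_dilation

def robust_jaccard(pred, truth, tol=1):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    truth_tol = binary_dilation(truth, iterations=tol) if tol > 0 else truth
    inter = (pred & truth_tol).sum()    # hits within tolerance of the truth
    union = (pred | truth).sum()
    return inter / union if union else 1.0

# Two vertical fault sticks one pixel apart: plain Jaccard scores 0,
# the tolerant version credits the near-miss.
grid = np.zeros((10, 10), bool)
pred, truth = grid.copy(), grid.copy()
pred[:, 5] = True
truth[:, 6] = True
plain = robust_jaccard(pred, truth, tol=0)
tolerant = robust_jaccard(pred, truth, tol=1)
```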
Harnessing Data Standards and Cloud Computing to Achieve Global Screening and Compare Exploration Potential
Authors: M. Treloar, T. Butt and K. Heyburn

Summary: Exploration for conventional hydrocarbons has become increasingly challenging. Exploration frontiers represent high-risk opportunities, and explorers require time- and cost-effective tools to consistently understand and rank these opportunities within an exploration portfolio. This is particularly true in the continuing environment of risk aversion and budgetary pressure. In addition, despite the ongoing digitalization of the industry providing more data than ever before, exploration teams remain reduced in size and often lack the resources to make full use of the ever-increasing volumes of data at their disposal.
These challenges speak to a need for faster, better-integrated, and more rigorous screening of exploration opportunities. This article examines how the need can be addressed by combining the efficiency gains in data access, integration, and processing capabilities offered by cloud technologies with standardized geological interpretations. As a case study, all Cretaceous clastic exploration potential in offshore basins has been assessed globally. The technology, workflows and inputs used to achieve this are covered, alongside the importance of applying a consistent framework for data integration.
A Transformational Journey from Unstructured Geoscience Data to a Digital Analytical Experience
Authors: D. Slidel and I. Fletcher

Summary: The automated standardization of legacy datasets can facilitate the generation of valuable new insights when consumed within a business intelligence platform. Where digital transformation in the energy services sector has successfully automated data gathering and interpretation processes, a geoscientist's time can be freed up for the more valuable interpretive tasks that are so vital to understanding subsurface geology. A recent advancement in data management that has significantly helped with this transition is the adoption of business intelligence and analytics platforms to produce advanced visualizations and automated analysis. This presentation and article focuses on a recent example of this process in action, in which play cross-sections in unstructured PDF format are processed for use within an analytical platform.
Real Time Well Engineering for Intelligent Rig State Identification: An Edge Computing Use Case
Authors: V. Kemajou, R. Samuel and M. Yasir

Summary: The Internet of Things has brought better connectivity among devices in various industries, including oil and gas. With this improved connectivity come improved applications, especially in real-time data processing and analytics. A major requirement for real-time applications is minimal latency. Cloud computing, despite its many benefits, is often limited in that respect: a lag arises and worsens when real-time data has to travel back and forth between field locations and data centers. Edge computing, on the other hand, enables faster processing. Edge computing applied to real-time well engineering is presented and discussed. It has the potential to be an adequate approach for applications dependent on real-time processing such as geosteering, which requires instantaneous processing to drastically enhance the economic profitability of a well. With edge computing, some sensor data can be cleaned and processed instantaneously to automatically identify the rig state, a key step in the real-time well engineering workflow during drilling.
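A toy sketch of rig-state identification from cleaned sensor samples, of the kind that could run at the edge (the paper's actual method is not specified in the abstract; the channel names and thresholds below are invented for illustration):

```python
# Rule-based classification of one sensor sample into a coarse rig state.
def rig_state(rpm, flow_gpm, bit_on_bottom):
    """Map (rotary speed, mud flow, bit position) to a coarse rig state."""
    if bit_on_bottom and rpm > 10 and flow_gpm > 100:
        return "rotary drilling"        # rotating on bottom with circulation
    if not bit_on_bottom and flow_gpm > 100:
        return "circulating"            # pumping with the bit off bottom
    if not bit_on_bottom and rpm <= 10 and flow_gpm <= 100:
        return "tripping"               # moving pipe, pumps off
    return "other"
```

A learned classifier would replace these hand-set thresholds in practice, but the low-latency appeal is the same: each sample is labelled locally, with no round trip to a data center.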
Machine Learning for Wax Deposition Prediction
Authors: M. Nait Amar and A. Jahanbani Ghahfarokhi

Summary: Accurate prediction of wax deposition is of vital interest in digitalized systems to avoid issues that interrupt flow assurance during the production of hydrocarbon fluids. The present investigation aims to establish rigorous intelligent schemes for predicting wax deposition under a wide range of production conditions. To do so, multilayer perceptrons (MLP) optimized with the Levenberg-Marquardt algorithm (MLP-LMA) and the Bayesian Regularization algorithm (MLP-BR) were built using 88 experimental measurements. The results showed that MLP-LMA achieved the best performance, with an overall root mean square error of 0.2198 and a coefficient of determination (R²) of 0.9971. The performance comparison revealed that MLP-LMA outperforms prior approaches in the literature.
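A rough analogue of the MLP approach on synthetic data. Note the hedges: scikit-learn's MLPRegressor offers 'lbfgs'/'adam' solvers, not the Levenberg-Marquardt or Bayesian-regularization training the authors used, and the feature set and deposition relation below are invented for illustration:

```python
# Fit a small MLP to a synthetic wax-deposition dataset.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300
temp_C = rng.uniform(10, 60, n)         # assumed pipe-wall temperature
shear = rng.uniform(50, 500, n)         # assumed shear-rate proxy
wax_pct = rng.uniform(2, 15, n)         # assumed wax content of the crude
# Invented relation: more wax + colder wall + lower shear -> more deposit
y = (0.05 * wax_pct * (60 - temp_C) / 50
     + 0.001 * (500 - shear)
     + rng.normal(0, 0.03, n))
X = StandardScaler().fit_transform(np.column_stack([temp_C, shear, wax_pct]))

mlp = MLPRegressor(hidden_layer_sizes=(16, 16), solver="lbfgs",
                   max_iter=2000, random_state=0).fit(X[:240], y[:240])
r2 = mlp.score(X[240:], y[240:])        # R^2 on held-out samples
```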
Geoscience Workflow Tracking by Means of RESQML Standard Format
Authors: M. Piantanida, M. Dalla Rosa and B. Volpi

Summary: The paper describes Eni's implementation of a RESQML database capable of tracking the models exchanged across different G&G applications along interpretation and modelling workflows, together with all the metadata that will allow later identification of the right model to re-use for future work.
The implementation is targeted at sustaining Eni's vision of a future ecosystem of small geo-apps, each focused on performing one small piece of the workflow as well as possible, with the possibility of easily composing the apps into a full workflow by exchanging RESQML models between them. This vision must be coupled with a powerful model-tracking database capable of identifying which models were used at each step of the workflow. The paper describes Eni's approach to this implementation, including the capability of disaggregating RESQML models into their basic data components to avoid duplication or inconsistencies, as well as some examples of the metadata used to correctly label, store, and retrieve the RESQML models within the database.
Principal Component Analysis and Deep Learning along Directional Image Gathers for High-Definition Classification of Subsurface Features
By B. De Ribet

Summary: Diffraction imaging has proven to be an attractive method for delivering high-resolution subsurface images containing different types and scales of continuous and discontinuous geometrical objects. For depth-domain 3D subsurface models, Koren and Ravve (2011) described an imaging method based on the ability to decompose the full recorded seismic wavefield into continuous full-azimuth directivity components in situ at the subsurface image points. This method follows the concept of imaging and analysis in the "Local Angle Domain" and allows us to generate azimuthal directivity gathers, from which we can separate specular and diffracted energies.
As part of the ongoing effort to automatically enhance procedures for classifying directivity-driven image data into N geometrical features, such as continuous reflectors, faults, point diffractors, acquisition noise, and ambient noise, Itan et al. (2017) presented a Deep Learning (DL) approach to this challenging task. This work expands on that method: in addition to vertical section image patches, we also train the network with horizontal patches. This leads to further improvements in classifying different geometrical features, particularly in areas masked by ambient and coherency noise. We demonstrate our method on seismic data from the Eagle Ford and Barnett unconventional shale plays.
The Big Loop: an Innovative Solution for Automating Subsurface Workflows and Transforming Inter-Disciplinary Asset Team Collaboration
Authors: C. Cosson, A. Plougoulen, M. Morin and G. Maisonneuve

Summary: The efficient management of subsurface operations depends on the asset team's technical excellence. Skilled practitioners in various domains interpret, model, and predict reservoir performance. This, however, leads to silos between domains, each with its own language, methods, and technologies, resulting in: a) an overly long cycle time for passing through domains, analyzing data, and delivering the key information that supports management decisions; and b) the challenge of creating consistency between specialists and building confidence in the results. This presentation demonstrates a workflow that improves collaboration by helping practitioners work synchronously, based on fully automated reservoir modeling technologies allied with agile collaborative methods. The technology driver is a workflow orchestrator, which runs atop optimized reservoir modeling and simulation software packages and scripts. A cognitive approach is used: initial models are built from available data and then refined iteratively as new information arrives. Updates are automatically propagated through domains and deliver new results. Consistency across domains is preserved, and models are evergreen. Continuous alignment is guaranteed, and results reflect the asset's needs. This solution, called Big Loop™, is software-agnostic, customizable to the needs of the individual organization, and has been shown to significantly improve asset development and management efficiency.
Kriging Season 3: From Geosciences to Geo_DATA_Sciences
Authors: L. Sandjivy, S. Valentin and L. Lacaille

Summary: Big geo data and cloud computing are a real E&P game changer, taking us from geosciences to geo-data sciences. Probability theory offers a consistent mathematical framework for developing specific kriging-based machine learning algorithms that automate reservoir modelling and update it in real time. This is season 3 of the kriging algorithm saga in the oil industry, after season 1 (kriging as an interpolator) and season 2 (kriging as a geophysical workflow optimizer). In season 3, kriging-based software packages give way to kriging-based software apps for operating digital E&P projects.
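For readers new to the saga, here is a minimal 1-D simple-kriging sketch, season 1 style (illustrative only, not the authors' apps; the exponential covariance and its parameters are assumptions):

```python
# Simple kriging in 1-D with covariance C(h) = sill * exp(-h / rng_len):
# estimate the value at x0 as a covariance-weighted combination of samples.
import numpy as np

def simple_krige(xs, zs, x0, sill=1.0, rng_len=10.0, mean=0.0):
    xs = np.asarray(xs, float)
    zs = np.asarray(zs, float)
    C = sill * np.exp(-np.abs(xs[:, None] - xs[None, :]) / rng_len)  # data-data cov
    c0 = sill * np.exp(-np.abs(xs - x0) / rng_len)                   # data-target cov
    w = np.linalg.solve(C, c0)          # kriging weights
    return mean + w @ (zs - mean)

# Kriging reproduces the data exactly at a sample location:
z_hat = simple_krige([0.0, 5.0, 12.0], [1.0, 3.0, 2.0], 5.0)
```

This exact-interpolation property is what makes kriging a natural building block for the automated, continually updated reservoir modelling the abstract envisions.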