First EAGE Workshop on High Performance Computing for Upstream in Latin America
- Conference date: September 21-22, 2018
- Location: Santander, Colombia
- Published: 21 September 2018
Heterogeneous Computational Model for 2D Full-Waveform Inversion on Multicore and Multi-GPU Systems
Authors: D. Barrera, M. Boratto, V. Koehne, E. Sperandio, J. Souza and D. Moreira

Summary: The next generation of the petroleum industry holds the promise of increased performance, along with mass customization, better quality and improved productivity. Its design principles are interoperability, virtualization, decentralization, real-time capability and supercomputing. The increasing need for computing power justifies the continuous search for techniques that reduce the time needed to solve common computational problems. To take advantage of new hybrid parallel architectures composed of multithreading and multiprocessor hardware, our current efforts involve the design and validation of highly parallel algorithms that efficiently exploit the characteristics of such architectures. In this paper, we propose a heterogeneous computational model for seismic imaging with 2D Full-Waveform Inversion (FWI) that readily exploits multicore and multi-GPU systems. We present an optimization of the algorithm and discuss results.
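The summary does not detail the scheduling itself; as a rough illustration of shot-level heterogeneous parallelism, here is a minimal Python sketch in which a hypothetical `shot_gradient` stand-in is farmed out over an assumed pool of GPU and CPU workers:

```python
# Minimal sketch of heterogeneous shot-level parallelism for 2D FWI.
# `shot_gradient` is a hypothetical stand-in for the per-shot
# forward/adjoint computation; here it only records its assignment.
from multiprocessing import Pool

DEVICES = ["gpu:0", "gpu:1", "cpu"]  # assumed device pool

def shot_gradient(args):
    shot_id, device = args
    # A real code would run the forward and adjoint propagators for
    # this shot on `device` and return its gradient contribution.
    return shot_id, device

def fwi_gradient(n_shots):
    # Round-robin shots over the heterogeneous device pool; the global
    # gradient is the sum of the per-shot contributions.
    tasks = [(s, DEVICES[s % len(DEVICES)]) for s in range(n_shots)]
    with Pool(len(DEVICES)) as pool:
        return pool.map(shot_gradient, tasks)

if __name__ == "__main__":
    print(fwi_gradient(8))
```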
Auto-tuning of 3D Acoustic Wave Propagation in Shared Memory Environments
Authors: T. Barros, J. B. Fernandes, I. A. Souza-de-Assis and S. Xavier-de-Souza

Summary: Finite difference methods (FDM) are widely used for modeling seismic data with the acoustic wave equation. These methods are computationally intensive, demanding techniques that allow complex results to be obtained in affordable time. This gives rise to the use of parallelization techniques, which can significantly decrease execution time. In shared memory environments, the 3D acoustic wave equation can be computed in parallel over chunks of data, with the solution evaluated concurrently for the different chunks. The determination of these chunk sizes, also known as the workload distribution, is a crucial aspect of this type of approach. In this work we focus on optimizing the workload distribution of a 3D FDM algorithm, parallelized with OpenMP, in a shared memory environment. We propose an auto-tuning algorithm that uses the global optimization strategy Coupled Simulated Annealing (CSA) to find the chunk size that minimizes the execution time of propagating the acoustic wave equation. Numerical experiments illustrate that the optimal chunk size varies with the architecture, the compiler and the number of threads used. Our tests also illustrate that CSA is quite promising for finding the optimal chunk size across these different computational setups, resulting in significant time savings.
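As a concrete illustration of the tuning loop, here is a minimal sketch assuming a hypothetical `measure_time` hook in place of the real 3D FDM kernel; the paper's CSA couples several annealing chains through a shared acceptance temperature, which we simplify to a single chain:

```python
# Minimal auto-tuning sketch: simulated annealing over the OpenMP-style
# chunk size. `measure_time` is a hypothetical hook that would run the
# 3D FDM kernel with the given chunk size and return its wall time.
import math
import random

def measure_time(chunk):
    # Stand-in cost surface: pretend a chunk size of 64 is optimal.
    return (math.log2(chunk) - 6.0) ** 2 + random.uniform(0, 0.1)

def tune_chunk(lo=1, hi=4096, steps=200, t0=1.0):
    chunk = random.randint(lo, hi)
    cost = measure_time(chunk)
    best = (chunk, cost)
    for k in range(steps):
        temp = t0 / (1 + k)                       # cooling schedule
        step = max(1, chunk // 2)
        cand = min(hi, max(lo, chunk + random.randint(-step, step)))
        c = measure_time(cand)
        # Metropolis rule: take improvements, sometimes accept worse.
        if c < cost or random.random() < math.exp((cost - c) / temp):
            chunk, cost = cand, c
            if c < best[1]:
                best = (cand, c)
    return best

print(tune_chunk())
```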
Tomography: a Deep Learning vs Full-Waveform Inversion Comparison
Authors: S. Farris, M. Araya-Polo, J. Jennings, B. Clapp and B. Biondi

Summary: We explore the feasibility of a deep learning approach to tomography by comparing it with the velocity-prediction techniques currently used in industry. This is accomplished through quantitative and qualitative comparisons of velocity models predicted by a Machine Learning (ML) system with those of two variations of full-waveform inversion (FWI). Additionally, we compare the computational aspects of the two approaches. The results show that the ML-reconstructed models are competitive with the FWI-produced models in terms of the selected metrics, and far less expensive to compute.
Improving 2D FWI Performance by Using Symmetry On Inner Product Spaces
Authors: R. Noriega, S. Abreo and A. Ramirez

Summary: The forward and backward modeling steps (necessary to compute the gradient) of time-domain Full Waveform Inversion imply a high computational cost in terms of memory consumption and execution time. A state-of-the-art computational strategy consists in reconstructing the pressure wavefield, backward in time, from the information stored at the boundaries of the area of interest. With this method, the memory used can be reduced to less than 5% of its initial value, but the execution time increases by approximately 55% because an additional modeling step is necessary. This paper proposes a new way to compute the inversion gradient by taking advantage of an inner product property. If this new gradient calculation is combined with the reconstruction strategy, the extra modeling step is no longer necessary, while the same RAM reduction is kept.
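The inner-product property is not spelled out in the summary; one identity that fits this description, stated here as our reading rather than the paper's own derivation, is the adjoint symmetry of the second time derivative under zero initial and final conditions:

```latex
% Integration by parts twice in time; the boundary terms vanish because
% u(0)=\dot{u}(0)=0 (causal forward field) and
% \lambda(T)=\dot{\lambda}(T)=0 (adjoint field with final conditions).
\int_0^T \partial_t^2 u(t)\,\lambda(t)\,dt
  = \Big[\partial_t u\,\lambda - u\,\partial_t\lambda\Big]_0^T
    + \int_0^T u(t)\,\partial_t^2\lambda(t)\,dt
  = \int_0^T u(t)\,\partial_t^2\lambda(t)\,dt .
```

Under this reading, the usual crosscorrelation gradient, accumulated as the time integral of the twice-differentiated forward field against the adjoint field, can instead be accumulated from the forward field against the twice-differentiated adjoint field, which combines naturally with boundary-based wavefield reconstruction.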
Bearing fault detection through machine learning: time-domain vs time-frequency analysis for feature extraction
Authors: F. Ulloa and G. Barbieri

Summary: Machine learning methods have been used for fault detection in condition-based maintenance through different approaches to feature extraction. Feature extraction has a significant influence on the selection and performance of the machine learning algorithm and, consequently, on the results of the analysis. In this work, time-domain and time-frequency approaches to feature extraction are compared. The binary classification of the state of a bearing (nominal vs. failure) is used as the case study for the comparison. In the time-domain analysis, descriptive statistics of the signal are extracted and a neural network is used for fault classification. In the time-frequency analysis, the 1D time signal is transformed into a 2D image through the Short-Time Fourier Transform, and a convolutional neural network is applied for fault classification. The time-frequency approach showed better results on the fault classification of the selected application, with lower computational costs. Further studies should be performed to validate the result.
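A minimal sketch of the two feature-extraction routes on a synthetic vibration signal; the data, sampling rate and chosen statistics are illustrative assumptions, not the paper's setup:

```python
# Time-domain vs time-frequency feature extraction on a toy signal.
import numpy as np
from scipy.signal import stft

fs = 12_000                                  # assumed sampling rate, Hz
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 160 * t) + 0.3 * np.random.randn(fs)

# Time-domain route: descriptive statistics fed to a plain network.
time_features = np.array([
    signal.mean(), signal.std(),
    np.abs(signal).max(),                    # peak
    ((signal ** 2).mean()) ** 0.5,           # RMS
])

# Time-frequency route: the STFT turns the 1D signal into a 2D image
# that a convolutional network can classify.
f, seg_t, Z = stft(signal, fs=fs, nperseg=256)
image = np.abs(Z)                            # spectrogram "image"
print(time_features.shape, image.shape)
```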
Minimal Coherence Optimization for Coding Blended Sources
Authors: K. Florez, S. Abreo and A. Ramirez

Summary: Blended (simultaneous) sources in marine seismic acquisition improve acquisition efficiency, reducing costs and the amount of acquired data, at the price of data with mixed information. Traditional processing of blended data requires a de-blending (separation) stage before the velocity model is reconstructed with FWI. In this paper, a source-coding optimization in distance and time delay is developed, based on a minimum mutual coherence criterion between blended sources, with the aim of improving the final velocity model while avoiding the de-blending step. FWI is used to contrast the optimally coded blended distribution with a randomly coded blended distribution and with traditional equally spaced sources. It is shown that the optimal blended distribution of sources improves the final velocity model over random coding, decreasing artifacts in the image and yielding a model closer to the one obtained with a traditional acquisition, while decreasing the processing time.
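A minimal sketch of the mutual-coherence criterion evaluated for a set of candidate time delays: the worst-case normalized correlation between any two delayed source signatures. The wavelet and delays are illustrative assumptions:

```python
# Mutual coherence of time-delayed source signatures (lower is better).
import numpy as np

def delayed(wavelet, delay, n):
    out = np.zeros(n)
    out[delay:delay + wavelet.size] = wavelet
    return out

def mutual_coherence(delays, wavelet, n):
    cols = [delayed(wavelet, d, n) for d in delays]
    worst = 0.0
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            c = abs(cols[i] @ cols[j]) / (
                np.linalg.norm(cols[i]) * np.linalg.norm(cols[j]))
            worst = max(worst, c)
    return worst

ricker = np.array([-0.1, 0.3, 1.0, 0.3, -0.1])   # toy wavelet
print(mutual_coherence([0, 3, 9], ricker, 32))
```

An optimizer over candidate delay (and distance) codes would then minimize this worst-case coherence, which is the selection criterion the summary describes.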
Using SPITS to Optimize the Cost of High-Performance Geophysics Processing on the Cloud
Authors: N. Okita, T. Coimbra, C. Rodamilans, M. Tygel and E. Borin

Summary: Public cloud providers such as Microsoft Azure and Amazon Web Services (AWS) offer a wide variety of virtual machines, with different specifications and prices, enabling users to run high-performance programs without buying specialized and expensive hardware. Moreover, some providers, such as AWS, allow the user to bid for lower-cost virtual machines, called Spot instances. These machines, however, may be terminated, with only a few minutes' warning, at the provider's discretion.

In this work, we leveraged the SPITS programming model to implement a high-performance, fault-tolerant seismic processing application suitable for execution on Spot instances, and analyzed how different AWS virtual machines affect the performance and price of the computation. Our experimental results indicate that Spot instances have similar performance to regular instances but are roughly three times less expensive. Finally, we show that AWS groups virtual machines into Availability Zones and that selecting virtual machines from different zones may also affect the total execution cost.
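SPITS (Scalable Partially Idempotent Tasks) has its own runtime and API, which we do not reproduce here; the following minimal Python sketch only illustrates the underlying pattern that makes Spot instances tolerable: tasks are safe to re-execute, and a manager reissues work lost to preemption.

```python
# Generic sketch of the partially idempotent task-farm pattern
# (not the SPITS API): re-runnable tasks plus a reissuing manager.
import random

def run_task(task_id):
    # A worker on a Spot instance may disappear mid-task.
    if random.random() < 0.2:
        return None                       # simulated preemption
    return f"result-{task_id}"            # idempotent: safe to redo

pending = set(range(10))
results = {}
while pending:                            # manager loop: reissue losses
    for task_id in sorted(pending):
        r = run_task(task_id)
        if r is not None:
            results[task_id] = r
            pending.discard(task_id)
print(results)
```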
Improvement of an RTM Algorithm with Convolutional Absorbing Boundaries
Summary: Exploring and exploiting natural resources in Mexico requires high-performance technology and engineering capacity, which together make it possible to create methodologies that deliver better-quality results. Resource exploration is the area of knowledge in which geophysics and geology are practiced, and for these two sciences to be understood in the same space, subsurface images are needed. An efficient way to avoid the noise introduced by the reverse-time migration algorithm itself is the absorbing-boundaries method, which subdivides the seismic image at its borders so as to retain only the propagated and back-propagated signal, free of spurious energy. This implementation is needed in the oil industry to make seismic inversion more effective and thus generate seismic images that allow better interpretation; the industry has worked without it for more than 30 years, which is why carrying it out is innovative for our oil-field projects in Mexico. Imaging in depth here refers to pre-stack depth migration (PSDM), whose elements are the input data, the pre-processing, the migration algorithm and the velocity model. Reverse-time migration (RTM) is a pre-stack migration technique that, unlike other migrations, avoids simplifications and uses the full wave equation (Whitmore, 1983). In past decades a great variety of absorbing boundaries have been developed, especially for seismic wave modeling. Among the various attempts to improve the classical PML, Kuzuoglu (1996) and Roden (2000) developed the convolutional PML, or CPML (Convolutional Perfectly Matched Layer), for Maxwell's equations; it was adapted to the equations of elastodynamics by Komatitsch and Martin (Komatitsch, 2007), and the latter formulation is used to simulate the direct problem in the present work. The development of the reverse-time algorithm consisted in implementing absorbing boundaries in the different domains of the seismic image. This implementation considerably improves the seismic image compared to the reverse algorithm without absorbing boundaries, and even compared to absorbing boundaries less elaborate than those developed in this thesis work. The implementation of convolutional absorbing boundaries changes the way current algorithms are designed, and this thesis work is a clear example of the algorithmic revolution applied to geophysics over the last decade.
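As a pointer to what the CPML machinery looks like in practice, here is a minimal sketch of the 1D coefficient profiles and recursive-convolution update in the style of Komatitsch and Martin (2007); the profile constants are typical textbook choices, not values from this work:

```python
# 1D CPML coefficients and memory-variable update (illustrative).
import numpy as np

def cpml_coeffs(npml, dx, dt, vmax, N=2, R0=1e-3,
                alpha_max=np.pi * 20.0, kappa=1.0):
    # alpha_max ~ pi * f0, assuming a ~20 Hz dominant frequency.
    L = npml * dx
    d0 = -(N + 1) * vmax * np.log(R0) / (2 * L)
    x = (np.arange(npml) + 1) * dx / L            # 0..1 inside the layer
    d = d0 * x ** N                               # damping profile
    alpha = alpha_max * (1 - x)                   # frequency shift
    b = np.exp(-(d / kappa + alpha) * dt)
    a = d * (b - 1) / (kappa * (d + kappa * alpha))
    return a, b

def update_memory(psi, dudx, a, b):
    # psi^{n} = b * psi^{n-1} + a * (du/dx); inside the layer, du/dx is
    # replaced by du/dx / kappa + psi in the wave-equation update.
    return b * psi + a * dudx

a, b = cpml_coeffs(npml=20, dx=10.0, dt=1e-3, vmax=3000.0)
psi = update_memory(np.zeros(20), np.ones(20), a, b)
print(psi[:3])
```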
Large scale full waveform induced seismicity inversion for Groningen
Authors: A. St-Cyr, S. Reker and J. W. Blokland

Summary: The biggest gas field in Western Europe is the Groningen field in the north of the Netherlands. Its production leads to compaction of the Rotliegend reservoir, which results in regional subsidence and earthquake activity. Detailed study of the detected events is required for assessing future induced-seismicity hazards and risks. To locate an event, a moment tensor inversion method based on full elastic waveforms and an exhaustive grid search is employed. This approach necessitates the generation of a Green's function database and, driven by the frequency content, requires a high-resolution velocity model. We describe herein key adaptations of the production workflow to handle both the resolution and the Green's function generation-time requirements. Our solution uses a full wave-equation library employing hybrid parallelism techniques. The results demonstrate an increase in the quality of the seismic event localization hand-in-hand with increased mesh resolution, leading to some of the largest elastic modeling runs performed in-house to date, and the second largest reported in the literature in terms of degrees of freedom.
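A minimal sketch of the grid-search moment tensor inversion the summary describes, with a random stand-in for the Green's function database and illustrative array shapes:

```python
# At each candidate source location, the six moment-tensor components
# are the least-squares fit of precomputed Green's functions to the
# observed data; the best-fitting grid point locates the event.
import numpy as np

rng = np.random.default_rng(0)
nsamples, nlocs = 500, 64                 # traces stacked into one vector
G_db = rng.standard_normal((nlocs, nsamples, 6))   # Green's fn database
data = G_db[17] @ np.array([1.0, -0.5, 0.2, 0.0, 0.3, -0.1])  # synthetic

best = None
for loc in range(nlocs):
    G = G_db[loc]
    m, *_ = np.linalg.lstsq(G, data, rcond=None)   # fit 6 MT components
    misfit = np.linalg.norm(G @ m - data)
    if best is None or misfit < best[0]:
        best = (misfit, loc, m)
print("located at grid point", best[1])
```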
Reservoir Development Planning Using Optimization Methods In An Oil Field Case
Authors: J. Camacho, J. Prada, D. Abreo and Y. Villamil

Summary: This paper deals with the application of optimization techniques to facilitate decision-making in oil-reservoir development planning. To maximize economic profit and incremental oil recovery, restrictions are specified on the number and type of new wells to be drilled and on the control schedule, together with optimal locations, with the objective of performing a waterflooding project. The method exploits information provided by simulations of a numerical reservoir model to obtain the optimal combination of the decision variables. In this study, a modified PSO-MADS algorithm was implemented with the ability to manage constraints related to existing (history) wells: new wells may be placed only within the reservoir, avoiding existing well-heads and inactive cells. The effectiveness of the procedure is illustrated by its ability to optimize a complex heavy-oil reservoir represented by 2,202,702 grid cells (529,014 active cells, 72 layers) and 11 faults, with a total of 420 decision variables. The adapted PSO-MADS methodology, with the operational restrictions associated with existing wells included, increases oil recovery while deciding the optimal placement and control of the new wells. The selected cases showed optimal locations for the new wells, resulting in a significant volume of recovered oil and positive net-present-value figures for the reservoir development plan.
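A minimal sketch of PSO with the feasibility-repair flavor of constraint handling described above; the objective is a toy stand-in for the reservoir simulator's NPV, the forbidden cells stand in for existing well-heads and inactive cells, and the MADS polling stage of the hybrid algorithm is omitted:

```python
# Particle-swarm well placement with repair of infeasible candidates.
import numpy as np

rng = np.random.default_rng(1)
FORBIDDEN = {(3, 3), (4, 4)}                  # e.g. existing well-heads

def repair(p, nx=10, ny=10):
    p = np.clip(np.round(p), 0, [nx - 1, ny - 1])
    while tuple(int(v) for v in p) in FORBIDDEN:
        p = (p + 1) % [nx, ny]                # nudge to a feasible cell
    return p

def objective(p):
    # Toy stand-in for NPV from a reservoir simulation (peak at (7, 2)).
    return -((p[0] - 7) ** 2 + (p[1] - 2) ** 2)

swarm = rng.uniform(0, 9, (12, 2))
vel = np.zeros_like(swarm)
pbest = swarm.copy()
pval = np.array([objective(repair(p)) for p in swarm])
gbest = pbest[pval.argmax()].copy()
for _ in range(50):
    r1, r2 = rng.random((2, 12, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (gbest - swarm)
    swarm = swarm + vel
    for i, p in enumerate(swarm):
        v = objective(repair(p))
        if v > pval[i]:
            pval[i], pbest[i] = v, p
    gbest = pbest[pval.argmax()].copy()
print("best well cell:", repair(gbest))
```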
Local Industry-Academy Synergy for the Development of High Performance Computing Applied to Oil and Gas Exploration Geophysics
Authors: H. Gonzalez-Alvarez, W. Agudelo-Zambrano and A. B. Ramirez-Silva

Summary: Different processing architectures have been used in a joint effort between industry and academia. FPGAs were tested as an alternative for computing seismic images on large datasets. From 2013 to 2017, a collaborative team was established with regional universities of Colombia, Colciencias (the Colombian Agency for Science and Technology) and Ecopetrol to achieve the common goal of generating in-depth, high-definition seismic images of complex areas (FWI and RTM methods). Special focus on GPU implementation was given to the developed codes. Ecopetrol, for its part, uses a CPU cluster for its seismic processing operation, which serves as a benchmark for the other processes. Industry professionals, professors and students from backgrounds as diverse as geology, mathematics, physics, geophysics and engineering converged to build local research groups in computational geophysics.
Improving Performance of 3D Seismic Survey Simulation with a Hybrid MPI/OpenMP Approach
Authors: C. Barbosa, L. Leite and A. Coutinho

Summary: Geophysical imaging today faces new challenges related to 3D data acquisition, which means we need to simulate full 3D volumes. In this work, we present a hybrid MPI/OpenMP approach to modeling the 3D heterogeneous acoustic wave equation in order to boost the performance of a classical extrapolation scheme. Our solution is a standard eighth-order-in-space, second-order-in-time 3D acoustic finite-difference code, parallelized with MPI/OpenMP and running on Intel general-purpose CPUs. Test cases provided by the High-Performance Computing for Energy project (hpc4e.eu) are solved with our optimized numerical code. We use the flat acoustic tests at 20 Hz maximum frequency, codenamed AF-UNIT-20Hz and AF-SURVEY-20Hz: the first models a single shot, and the second simulates an acquisition with 1681 shots in the subsurface. Results show that the hybrid MPI/OpenMP strategy used to solve the UNIT test on standard multi-core machines scales easily to more than one node for the SURVEY experiment. All experiments were run on Lobo Carneiro, a 6000-core machine installed at the High Performance Computing Center of the Federal University of Rio de Janeiro, Brazil.
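A minimal mpi4py sketch of the slab decomposition and halo exchange underlying the hybrid approach; the one-plane halo is illustrative (an eighth-order stencil would exchange four planes), and the in-slab threading, done with OpenMP in the paper, is left to the compiled kernel:

```python
# MPI splits the 3D volume into z-slabs; neighbours exchange halo
# planes before each stencil sweep. Run with: mpiexec -n 2 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

slab = np.full((10, 32, 32), float(rank))   # local slab + 2 halo planes
# Exchange halo planes with neighbours before applying the stencil.
comm.Sendrecv(sendbuf=slab[1].copy(), dest=up,
              recvbuf=slab[-1], source=down)
comm.Sendrecv(sendbuf=slab[-2].copy(), dest=down,
              recvbuf=slab[0], source=up)
print(rank, slab[0, 0, 0], slab[-1, 0, 0])
```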
Salt Segmentation with Fully Convolutional Networks and Transfer Learning
Authors: P. M. Cruz and J. P. Navarro

Summary: In this work we present a new methodology, based on deep learning, for segmenting salt structures in seismic images. Salt segmentation in seismic data is a challenging task for several reasons. Salt structures generally have very complex geometrical shapes, so defining an a-priori geometric model is more complicated than in layered environments, and the high impedance contrast in the vicinity of salt dramatically reduces the seismic energy propagating through the salt bodies, making it difficult to illuminate pre-salt targets and to correctly model salt boundaries. In modern interpretative processing workflows, salt modeling is a key aspect of producing accurate seismic images near the salt. The main objective of this research is to develop an automatic salt-segmentation solution based on Deep Learning (DL), Fully Convolutional Networks (FCN) and Transfer Learning (TL). Results are presented on real seismic data.
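A minimal PyTorch sketch of an FCN built by transfer learning, with a pretrained encoder and a small decoder head producing a per-pixel salt mask; the architecture is our illustration, not the paper's network:

```python
# FCN for binary salt masks: pretrained ResNet-18 encoder (transfer
# learning) plus a lightweight decoder that restores full resolution.
import torch
import torch.nn as nn
import torchvision.models as models

class SaltFCN(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        # Keep everything up to the last conv stage as the encoder.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.decoder = nn.Sequential(
            nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=32, mode="bilinear",
                        align_corners=False),
            nn.Conv2d(64, 1, 1),              # 1 channel: salt / not salt
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

net = SaltFCN()
logits = net(torch.randn(1, 3, 128, 128))     # seismic patch, 3-channel
print(logits.shape)                           # -> (1, 1, 128, 128)
```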
Seismic Interferometric Method: A CPU Parallel Approach
Authors: D. F. Barrera, P. Melo, D. Martins, M. Boratto and A. Furtado

Summary: Seismic interferometry can be defined, from a computational point of view, as the cross-correlation or deconvolution of seismic signals in order to retrieve virtual sources or receivers where only receivers or sources, respectively, are placed. The method is used mainly in passive seismics and oil exploration. Depending on the approach, the receivers can be placed on the Earth's surface or in Vertical Seismic Profiles (VSP). Seismic interferometry becomes expensive on large seismic datasets, because the cross-correlation and deconvolution between sources are performed for each sample in time and space of their receivers; if the data are densely sampled in time and space, the process can become unfeasible with a serial algorithm. In this work we investigate parallel implementations of the classic seismic interferometric methods based on cross-correlation and deconvolution, in order to find an efficient way to compute them. The numerical experiments consider 64-bit Intel Xeon CPUs.
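A minimal sketch of both interferometric kernels computed via the FFT, with receiver pairs farmed out to CPU processes; the traces are synthetic, and the water-level stabilization of the deconvolution is an assumed choice:

```python
# Cross-correlation and (stabilized) deconvolution of trace pairs,
# parallelized over receiver pairs with a process pool.
import numpy as np
from multiprocessing import Pool

def interferogram(pair):
    a, b = pair
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    corr = np.fft.irfft(A * np.conj(B))               # cross-correlation
    eps = 1e-3 * (np.abs(B) ** 2).max()               # water level
    deco = np.fft.irfft(A * np.conj(B) / (np.abs(B) ** 2 + eps))
    return corr, deco

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    traces = rng.standard_normal((8, 1024))           # 8 receivers
    pairs = [(traces[i], traces[j])
             for i in range(8) for j in range(i + 1, 8)]
    with Pool() as pool:
        virtual = pool.map(interferogram, pairs)      # one pair per task
    print(len(virtual), virtual[0][0].shape)
```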
Devito: Fast and Scalable Full-Waveform Inversion Without the Excruciating Pain
Authors: G. Gorman, F. Luporini, N. Kukreja, M. Louboutin, A. St-Cyr, B. Souza and F. Herrmann

Summary: We describe Devito, an open-source embedded domain-specific language (DSL) in Python for developing parallel and highly optimized finite-difference solvers, primarily targeting seismic imaging applications such as reverse-time migration (RTM) and full-waveform inversion (FWI). The two key novel aspects of this technology that distinguish it from existing frameworks are the use of a symbolic mathematics engine, SymPy, which enables geophysicists to quickly and easily implement new methods that are easily verifiable, and the use of compiler techniques to transform the high-level implementation into highly optimized, parallel C code on the fly. A key benefit of this embedded DSL compiler approach is that it also provides a robust strategy for application-code performance portability. We demonstrate the generation and automated execution of highly optimized stencil code for standard operators, such as the forward, adjoint and gradient operators, from only a few lines of Python and SymPy. We also demonstrate how Devito can be integrated with Dask to implement highly scalable RTM on cloud and supercomputing platforms.
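A minimal sketch mirroring Devito's public tutorial API for a constant-velocity acoustic operator; grid size, velocity and time stepping are illustrative:

```python
# A few lines of SymPy-flavoured Python define the wave equation, and
# Operator JIT-compiles it to optimized C on the fly.
from devito import Grid, TimeFunction, Eq, Operator, solve

grid = Grid(shape=(101, 101), extent=(1000.0, 1000.0))
u = TimeFunction(name="u", grid=grid, time_order=2, space_order=8)
c = 1.5                                        # constant velocity

pde = Eq(u.dt2, c ** 2 * u.laplace)            # acoustic wave equation
stencil = Eq(u.forward, solve(pde, u.forward)) # explicit update rule
op = Operator([stencil])                       # compiles to C

u.data[0, 50, 50] = 1.0                        # point disturbance
op(time=100, dt=0.4)                           # run 100 time steps
print(u.data[0].max())
```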
Models to Describe Inhibitor Adsorption/Desorption During a SQUEEZE Program
Authors: L. N. Salcedo, L. F. Carrillo and C. E. Estupiñan

Summary: Scale deposition is a very serious and challenging problem in the oil and gas industry, causing losses of millions of dollars per project every day (Martinez, 2017). To attack this issue, many engineers have focused on preventing scale formation in the reservoir through squeeze treatments. These treatments involve the injection of a chemical product that must possess certain key characteristics in order to effectively prevent scale formation and deposition in the reservoir, and that also needs to interact well with the reservoir to achieve long squeeze lifetimes.

The purpose of this paper is to show how those key characteristics of the chemical product are represented through mathematical models. The methodology includes the tests needed to select the best chemical product for a specific reservoir mineralogy, reservoir conditions and physicochemical water composition. Finally, it is shown how to describe this behavior through mathematical models, and how these are employed to reduce and optimize computational costs.
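The summary does not reproduce the models themselves; a common starting point in squeeze-treatment modeling, stated here purely as an illustrative assumption rather than the authors' actual choice, is a Freundlich-type adsorption isotherm coupled to one-dimensional inhibitor transport:

```latex
% Freundlich-type isotherm and inhibitor transport (illustrative).
% \Gamma: adsorbed inhibitor per unit rock mass, C: concentration,
% k, n: parameters fitted from core-flood tests, \rho_b: bulk rock
% density, \phi: porosity, v: interstitial velocity.
\Gamma(C) = k\,C^{\,n}, \quad 0 < n < 1,
\qquad
\frac{\partial C}{\partial t} + v\,\frac{\partial C}{\partial x}
  = -\,\frac{\rho_b}{\phi}\,\frac{\partial \Gamma}{\partial t}.
```

Fitting k and n from laboratory adsorption/desorption tests is what allows the treatment lifetime to be predicted numerically at modest computational cost.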