Fourth EAGE Workshop on High Performance Computing for Upstream 2019
- Conference date: October 7-9, 2019
- Location: Dubai, UAE
- Published: 07 October 2019
GeoDRIVE, an HPC flexible platform for seismic applications
Authors: G. Sindi, V. Etienne, A. Momin and T. Tonellot
Summary: We present GeoDRIVE, a software framework tailored to seismic applications and embedding high-performance features. We discuss the spatial cache-blocking algorithm for the Finite-Difference Time-Domain (FDTD) method and the protocol used to find its optimal parameters. GeoDRIVE was successfully applied to large-scale 3D seismic surveys on the Shaheen II supercomputer at KAUST. Two applications are presented. In the first, we produced a 3D image of the subsurface geologic layers at a record resolution of 7.5 meters with a maximum frequency of 100 Hz (Dimensions International, 2018). The second application is an elastic modeling run using on average 2434 compute nodes in parallel. This achievement represents a workload of 59.6 ExaFLOP on a current PetaFLOP/s machine. It indicates that the next generation of supercomputers, targeting sustained ExaFLOP/s performance, would reduce the running time of our application to one hour or less. With such performance, it is reasonable to predict that 3D elastic imaging will become a routine algorithm in seismic exploration in the coming years.
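The abstract does not spell out the tuning protocol; as a rough illustration (with a hypothetical timing hook `run_fdtd_kernel` that is not part of the original text), an automated search over cache-block shapes could look like the following sketch.

```python
import itertools
import time

def time_kernel(run_kernel, bx, by, bz, repeats=3):
    """Return the best wall-clock time over a few repeats for one block shape."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        run_kernel(bx, by, bz)          # hypothetical FDTD kernel with spatial blocking
        best = min(best, time.perf_counter() - t0)
    return best

def tune_block_shape(run_kernel, candidates=(16, 32, 64, 128)):
    """Exhaustive search over candidate block shapes; returns the fastest one."""
    results = {}
    for bx, by, bz in itertools.product(candidates, repeat=3):
        results[(bx, by, bz)] = time_kernel(run_kernel, bx, by, bz)
    return min(results, key=results.get), results
```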
One-way wave equation migration of common-offset vector gathers: parallel multi CPU/GPU implementation
Authors: A. Pleshkevich, V. Lisitsa, D. Vishnevsky and V. Levchenko
Summary: We present an original algorithm for seismic imaging based on depth wavefield extrapolation with the one-way wave equation. The parallel implementation relies on several levels of parallelism. Input-data parallelism allows processing full coverage of an area of up to one square kilometer. The mathematical approach allows each frequency to be handled independently and the solution to be advanced layer by layer, so that sets of 2D, rather than the initial 3D, common-offset vector gathers are processed simultaneously. Finally, each common-offset vector image can be stacked and stored independently. As a result, we designed and implemented a parallel algorithm that computes common-offset vector images using one-way wave-equation-based amplitude-preserving migration.
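A minimal sketch of the frequency-level parallelism described above, with hypothetical helper names (`extrapolate_frequency`, `migrate`); the authors' actual multi-CPU/GPU implementation is considerably more involved.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def extrapolate_frequency(freq_slice):
    """Hypothetical stand-in: depth-extrapolate one monochromatic wavefield
    layer by layer and return its contribution to the image."""
    freq, data = freq_slice
    image = np.zeros_like(data, dtype=np.float32)
    # ... one-way extrapolation for this single frequency would go here ...
    return image

def migrate(gather_spectra):
    """Distribute independent frequencies across worker processes and stack
    their partial images; gather_spectra maps frequency -> 2D gather spectrum."""
    with ProcessPoolExecutor() as pool:
        partial_images = pool.map(extrapolate_frequency, gather_spectra.items())
    return sum(partial_images)
```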
A Checkpoint of research on the implementation of geophysical stencils on multicore platforms
Authors: F. Dupros and C. Hillairet
Summary: The production of reliable three-dimensional images of the subsurface remains a major challenge in the oil and gas industry and strongly relies on the efficient exploitation of supercomputers. However, as each vendor works on next-generation technologies, the landscape of architectures that may become available raises increasing concerns regarding real application performance. Whatever the design of these systems turns out to be (heterogeneity, high core counts or depth of the memory hierarchy), it is widely accepted that co-design approaches will play a major role in ensuring that oil and gas applications are in the best position to adopt the next breakthroughs. In this paper, after a review of recent contributions on the optimization of geophysical stencils, we discuss key features of Arm hardware that may influence standard implementations.
Saving FLOPs in Geophysics with optimal p-adaptivity
By V. Etienne
Summary: A Discontinuous Galerkin Spectral Element (DGSE) method is presented for highly accurate seismic modelling, with an emphasis on saving Floating Point Operations (FLOPs). While standard finite-element applications feature unstructured meshes, the scheme developed in this work relies on regular Cartesian meshes. Spatial mesh refinement (h-adaptivity) is avoided and an ad-hoc p-adaptivity is proposed instead. This alleviates the task of mesh building and avoids unnecessary FLOPs related to mesh over-refinement. A protocol is defined to determine the appropriate polynomial order for each element depending on the local wave velocity; it keeps the error of the numerical solution below an arbitrary threshold. In a model representative of sand environments, with extremely low velocities in the shallow region, p-adaptive DGSE yields an effective FLOP saving of an order of magnitude compared to non-adaptive DGSE and of a factor of 4.4 compared to the Spectral Element Method (SEM) for 2D elastic modelling. This makes p-adaptive DGSE an attractive approach to efficiently tackling large velocity contrasts.
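The abstract does not give the selection rule itself; one common way to realize such a protocol is to fix a target number of nodes per minimum wavelength and pick the smallest polynomial order that meets it, as in this illustrative sketch (parameter values are hypothetical, not the paper's).

```python
import numpy as np

def choose_order(v_elem, f_max, h, nodes_per_wavelength=5.0, p_min=1, p_max=9):
    """Pick, per element, the smallest polynomial order p such that the
    (p + 1) nodes spanning an element of size h give at least
    `nodes_per_wavelength` nodes per minimum wavelength lambda = v / f_max."""
    wavelength = v_elem / f_max                     # local minimum wavelength
    required_nodes = nodes_per_wavelength * h / wavelength
    p = np.ceil(required_nodes - 1).astype(int)     # (p + 1) nodes per element edge
    return np.clip(p, p_min, p_max)

# Example: slow shallow sands (600 m/s) vs. faster deep layers, 50 m elements,
# 30 Hz maximum frequency -> high order where the velocity is low.
velocities = np.array([600.0, 1500.0, 3000.0])
print(choose_order(velocities, f_max=30.0, h=50.0))
```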
Alleviating the pressure on memory for seismic modeling
Authors: R. Abdelkhalak, H. Ltaief, V. Etienne, K. Akbudak, T. Tonellot and D. Keyes
Summary: This paper describes two methods to improve the performance of an FDTD solver for the first-order formulation of the 3D acoustic wave equation. Based on spatial and temporal cache-blocking techniques, these methods maximize the bandwidth of the memory subsystem while reducing data traffic across the memory hierarchy. On the one hand, the spatial blocking (SB) approach increases data reuse among cores within each iteration of the time integration. On the other hand, the multicore wavefront diamond temporal blocking (MWD-TB) technique builds on SB by reusing freshly cached solution data across iterations of the time integration. While SB achieves a sixfold speedup over the naive implementation (without cache blocking), MWD-TB outperforms SB by up to 50% on a two-socket 16-core Intel Haswell system.
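As an illustration of the spatial-blocking idea (a simplified second-order, 7-point update in NumPy, not the paper's first-order solver or its optimized implementation), the grid is swept tile by tile so that each tile can stay cache-resident while it is updated.

```python
import numpy as np

def blocked_wave_update(p_next, p, c2_dt2, bx=32, by=32, bz=64):
    """Spatially blocked acoustic update: p_next holds the field at the previous
    time level on entry and receives the new time level.  c2_dt2 = (c * dt / h)**2.
    The interior is visited in (bx, by, bz) tiles rather than full planes."""
    nx, ny, nz = p.shape
    for i0 in range(1, nx - 1, bx):
        for j0 in range(1, ny - 1, by):
            for k0 in range(1, nz - 1, bz):
                i1 = min(i0 + bx, nx - 1)
                j1 = min(j0 + by, ny - 1)
                k1 = min(k0 + bz, nz - 1)
                lap = (p[i0 - 1:i1 - 1, j0:j1, k0:k1] + p[i0 + 1:i1 + 1, j0:j1, k0:k1]
                       + p[i0:i1, j0 - 1:j1 - 1, k0:k1] + p[i0:i1, j0 + 1:j1 + 1, k0:k1]
                       + p[i0:i1, j0:j1, k0 - 1:k1 - 1] + p[i0:i1, j0:j1, k0 + 1:k1 + 1]
                       - 6.0 * p[i0:i1, j0:j1, k0:k1])
                p_next[i0:i1, j0:j1, k0:k1] = (2.0 * p[i0:i1, j0:j1, k0:k1]
                                               - p_next[i0:i1, j0:j1, k0:k1]
                                               + c2_dt2 * lap)
```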
Automated Distributed-memory Parallelism from Symbolic Specification in Devito
Authors: F. Luporini, R. Nelson, T. Burgess, A. St-Cyr and G. Gorman
Summary: Automated distributed-memory parallelism (DMP) has been added to Devito, a rapidly evolving framework adopted by a dynamic, heterogeneous and fast-growing community. The key innovations are the abstractions provided to the user and the compiler-based implementation approach, which we consider invaluable for long-term sustainable software to replace (partly or fully) obsolete, impenetrable, hardly extendable and often inefficient legacy code. The auto-tuner, which determines, among other things, the best block shape for each tiled loop nest in an Operator, has already been adapted to support DMP. Single-node multi-socket (one MPI process per socket) as well as multi-node experiments, covering both weak and strong scaling, are planned for the near future.
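For context, a minimal Devito usage sketch along the lines of Devito's public examples (not code from this paper; API details may differ between versions): the PDE update is stated symbolically and the framework generates the parallel stencil code, with distributed-memory runs enabled by an environment switch rather than user-level MPI calls.

```python
from devito import Grid, TimeFunction, Eq, Operator, solve

# Symbolic specification of a constant-velocity 3D acoustic wave update.
grid = Grid(shape=(201, 201, 201), extent=(2000., 2000., 2000.))
u = TimeFunction(name="u", grid=grid, time_order=2, space_order=8)

pde = (1.0 / 1500.0**2) * u.dt2 - u.laplace      # velocity hard-coded for brevity
update = Eq(u.forward, solve(pde, u.forward))

op = Operator([update])       # Devito compiles this into tiled, parallel stencil code
op.apply(time_M=500, dt=0.4)

# Distributed-memory runs reuse the same script, e.g.:
#   DEVITO_MPI=1 mpirun -n 8 python this_script.py
```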
Total takes the deep dive into GPU for Seismic Imaging
Authors: L. Boillot, D. Klahr, L. Qu, X. Lacoste, M. Bonnasse-Gahot, J. Montel, E. Bergounioux and J. Briche
Summary: Total decided to acquire a new supercomputer based on GPU technology. The selected system consists of IBM Power System AC922 server nodes with Nvidia Tesla V100 GPUs and a Mellanox InfiniBand interconnect. The main point to consider when using GPUs is the data transfers inherent to the parallel algorithms, but the biggest change to take into account is the node density, which significantly alters the ratio of computing capability to memory capacity. The main bottlenecks related to this technology transition have been studied in depth for the different seismic algorithms. Our GPU-ready seismic imaging toolbox is now composed of OpenACC and CUDA implementations, so as to offer a good trade-off between code maintainability, portability and performance. Preliminary results on synthetic and real datasets are very promising. Network and I/O performance are other critical points we are working on, as compute power increased roughly twice as much as the aggregate bandwidth to the scratch storage. We strongly believe that extending the use of compression technology to all seismic imaging workflows will improve the overall performance of the IBM Power9 machine.
CGG: A Journey from Software to Hardware
Authors: V. Arslan, F. Pautre, J. Blanc and T. Barragy
Summary: The paper presents an overview of the methodology that CGG uses to make suitable, and sometimes unconventional, choices for its seismic processing HPC infrastructure.
Weak scalability analysis of GPGPU-based iterative solvers in a two-phase pore-scale flow simulator
Authors: C. Thiele, M. Araya-Polo, F. Alpak, B. Riviere and D. Hohl
Summary: Direct numerical simulation of two-phase flow at the pore scale is computationally challenging due to high requirements on physical fidelity and the spatial resolution necessary to accurately represent pore geometries. In this paper, we explore how GPGPU-accelerated iterative linear solvers can help make these simulations feasible in workflows such as relative permeability estimation. Our target application is a Cahn-Hilliard-Navier-Stokes solver that uses a discontinuous Galerkin discretization in space and an implicit discretization in time. The performance bottleneck of the application is the solution of sparse linear systems in each time step. We evaluate and compare the performance of a CPU-based iterative solver from the Trilinos package and its GPGPU-accelerated counterpart from the AMGX package. In simulations with realistic porous rock geometries, we demonstrate that the weak scalability of the two solvers is comparable. At the same time, the GPGPU-accelerated solvers are approximately forty times faster on our multi-GPGPU compute nodes, resulting in more than a four-fold speedup of the overall simulation. Our results show that GPGPUs can improve parallel efficiency in pore-scale flow simulations and help make larger simulations feasible.
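As a stand-in for the per-time-step solve (the paper uses Trilinos and AMGX; the sketch below uses SciPy on a toy system purely to illustrate the structure of an implicit step with a preconditioned Krylov solver):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def advance_one_step(A, rhs, x_prev):
    """One implicit time step: solve the sparse system A x = rhs arising from
    the spatial discretization, warm-started from the previous solution."""
    ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)            # simple ILU preconditioner
    M = spla.LinearOperator(A.shape, matvec=ilu.solve)
    x, info = spla.gmres(A, rhs, x0=x_prev, M=M)
    if info != 0:
        raise RuntimeError(f"GMRES did not converge (info={info})")
    return x

# Tiny example system standing in for one time step of the flow solver.
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(1000, 1000), format="csr")
x = advance_one_step(A, np.ones(1000), np.zeros(1000))
```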
Scalable High-Resolution Seismic Tomography
Authors: L. Boillot and P. Basini
Summary: Traditional ray-based tomography aims at recovering the long wavelengths of the velocity model. Continuous progress in computing hardware and algorithms now allows for very dense simulation grids, and efficiently targeting finer simulation grids requires strong parallel scalability. High-resolution tomography is a good candidate for this challenge, either on CPU or on GPU. The tomography workflow is composed of three main steps. The ray-based shooting and the Fréchet derivative computation steps are embarrassingly parallel, although highly imbalanced. The minimization step is approximated by solving a linear system with a very sparse and highly rectangular matrix. A suitable parallel implementation on CPU is a client/server paradigm built on dynamically scheduled OpenMP and the modern pipe-l-cg iterative method. Benchmarks performed on the Pangea2 supercomputer demonstrated very good strong-scaling behaviour. Furthermore, a move towards GPU is under investigation, using streams with OpenACC for the shooting and derivative processes and the PETSc GPU version for the solving step. Ray-based tomography is now capable of preserving and improving the high-frequency content of the input velocity model. Moreover, its reasonable computational cost makes it competitive against more computationally intensive methods like Full Waveform Inversion.
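As an illustration of the sparse rectangular solve mentioned above (not the authors' OpenMP/pipe-l-cg or PETSc implementation), such systems are commonly handled with a damped least-squares Krylov method such as LSQR; the matrix below is random and purely for shape.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

# Hypothetical tomographic system: many ray equations, fewer model cells.
n_rays, n_cells = 200_000, 20_000
A = sp.random(n_rays, n_cells, density=1e-4, format="csr", random_state=0)
residual_times = np.random.default_rng(0).normal(size=n_rays)   # fake travel-time residuals

# Damped least-squares solution of the sparse rectangular system A dm = dt.
dm = lsqr(A, residual_times, damp=1e-2, iter_lim=200)[0]
print(dm.shape)
```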
Seismic Processing with Hybrid HPC
Authors: K. Narayanan, P. Souza Filho, A. Sardinha, C. Ávila, A. Azambuja, F. Sierra, D. De Paula, M. Vecino, L. Silva and N. Ji
Summary: Moving to the cloud imposes challenges, such as having to learn each provider's specific API to deploy a cluster and launch jobs. This work presents a seismic HPC application running under the Hybrid HPC paradigm, in which on-premises systems, cloud resources, supercomputing centers and partners' resources are all presented as a unified pool. With Hybrid HPC, the same UI/API is used to launch applications on any resource available to the platform. A complex seismic imaging application was executed using a hybrid HPC platform, accessing resources on each of the three major cloud service providers (CSPs). In a few small benchmark experiments, we achieved 97% of on-premises per-GPU performance in the cloud.
Potential applications of quantum computing in upstream
By M. Dukalski
Summary: Advancements in nano-fabrication, as well as improved control over quantum states, have brought quantum technologies to a technological readiness level at which large businesses have taken increased interest in the technology and started playing an active role in its development. Perhaps as a result, quantum computing has received a lot of media attention in recent years, which makes it very important to understand what quantum computers (once a sufficiently powerful one gets built) could or could not do, and to separate science from the hype that is starting to surround this field. In this talk I will attempt to address these opportunities as well as the challenges, and suggest a few possible areas within the upstream business where quantum computing could make an impact.
3D simulation of active-passive tracer dispersion in polygonal fractured geometries
Authors: S. Khirevich and T. Patzek
Summary: We simulate advection-diffusion transport of tracers in fractured rock geometries. A Voronoi tessellation generates polygonal patterns, which we then use to introduce fractures and obtain the fractured geometry. The rock geometry is discretized using scalable, in-house developed discretization software. Lattice-Boltzmann and random-walk particle-tracking methods are employed to obtain the flow field and recover the tracer behavior, respectively. Tracers are allowed to cross the semi-permeable interface between fractures and matrix, and can have variable partitioning coefficients. The implemented numerical framework allows simulating field-scale tracer experiments designed to estimate residual oil saturation. Use of an HPC platform is necessary to perform such simulations in three dimensions.
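A minimal 1D illustration of random-walk particle tracking with a probabilistic rule at a semi-permeable fracture-matrix interface; the crossing rule, parameter names and values are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_walk_step(x, dt, velocity, diffusion, interface_x=0.0, p_cross=0.3):
    """Advance 1D tracer positions by advection plus a diffusive random step.
    Particles that would cross the interface at `interface_x` do so only with
    probability `p_cross`; otherwise they are reflected back."""
    step = velocity * dt + np.sqrt(2.0 * diffusion * dt) * rng.standard_normal(x.size)
    x_new = x + step
    crossed = (x < interface_x) != (x_new < interface_x)
    reflected = crossed & (rng.random(x.size) > p_cross)
    x_new[reflected] = 2.0 * interface_x - x_new[reflected]   # mirror back across interface
    return x_new

# 10,000 tracers starting on the fracture side of the interface.
positions = np.full(10_000, -0.5)
for _ in range(100):
    positions = random_walk_step(positions, dt=1e-2, velocity=0.1, diffusion=1e-3)
```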
Optimization of Wellbore Placement and Design for Full-Field Development using Computational Mathematical Modeling
Authors: M. Al-Ismael, G. Al-Qahtani, A. Al-Turki, A. Al-Hezam and K. Dai
Summary: Placing infill wells in a reservoir's sweet spots and choosing appropriate well configurations are challenging optimization problems. This work presents an efficient well placement and design optimization approach at full-field scale, capitalizing on computational mathematical modeling to maximize well contact with highly productive hydrocarbon zones. The approach uses mixed-integer programming and a high-performance optimization solver. Results show that the approach can efficiently place a number of wells optimally in a simulation model in reasonable time, without violating any of the imposed geometrical and intersection constraints. The computational run time of the optimization solver was used to compare cases with different constraints and hardware specifications. The approach demonstrated in this work complements sweet-spot identification capabilities to maximize ultimate hydrocarbon recovery, maintain and extend the production plateau, and prolong the fields' lifetime.
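A toy sketch of the mixed-integer formulation idea, using the open-source PuLP modeler in place of the high-performance solver used in the paper; the productivity scores, spacing rule and variable names are hypothetical.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum

# Hypothetical toy instance: candidate well locations on a small grid, each with
# a productivity score; pick at most n_wells, with no two selected wells adjacent.
scores = {(i, j): (i * 7 + j * 3) % 11 for i in range(5) for j in range(5)}
n_wells = 4

prob = LpProblem("well_placement", LpMaximize)
x = {c: LpVariable(f"x_{c[0]}_{c[1]}", cat="Binary") for c in scores}

prob += lpSum(scores[c] * x[c] for c in scores)      # maximize total productivity
prob += lpSum(x.values()) <= n_wells                  # well-count budget
for (i, j) in scores:                                 # simple spacing constraints
    if (i + 1, j) in scores:
        prob += x[(i, j)] + x[(i + 1, j)] <= 1
    if (i, j + 1) in scores:
        prob += x[(i, j)] + x[(i, j + 1)] <= 1

prob.solve()
selected = [c for c in scores if x[c].value() == 1]
print(selected)
```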
Boundary Conditions for Seismic Imaging: Computational and Geophysical Points of View
Authors: E. Algizawy, A. Nasr, F. Ahdy, K. Elamrawi and P. Thierry
Summary: Theoretically, waves propagate to infinity, or continue until they vanish. This does not hold when modelling for seismic imaging, since we truncate the model to a computational grid covering a region of interest. Absorbing all incoming energy at the boundaries of the grid therefore mimics a real-life infinite medium. Many approaches attempt to provide such boundary conditions, e.g. sponge layers, PML, random boundaries, etc. The objective of this study is to find the best compromise between the geophysical and computational standpoints, i.e. the best attenuation quality with a minimum number of additional grid points. We reviewed two RTM implementations: a conventional RTM with sponge and CPML boundaries, and a 3-prop random-velocity RTM. Our findings show that applying IPP ZFP AVX512 compression to the conventional RTM yields a speedup of around 5.3x, run on a 3DNAND P4600x SSD. Although RTM with random boundaries gives a speedup in excess of 7x relative to the conventional RTM, it faces geological limitations due to the reflected noise. CPML is the best from a geophysical standpoint but requires extra computation, and going beyond 16 CPML grid points adds significant computation time. The sponge, based on a simple exponential decay function with low computational overhead, cannot easily reach the average CPML damping.
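The sponge mentioned above damps the wavefield in a boundary strip with an exponentially decaying taper applied every time step; a minimal sketch with illustrative parameter values (not the study's settings):

```python
import numpy as np

def sponge_taper(n_sponge=40, alpha=0.015):
    """Per-layer damping coefficients for a sponge boundary: cells closer to the
    grid edge are multiplied by smaller factors at every time step."""
    d = np.arange(n_sponge, 0, -1)          # distance to inner edge (grid edge = n_sponge)
    return np.exp(-(alpha * d) ** 2)

def apply_sponge_1d_edges(field, taper):
    """Damp both ends of a 1D wavefield; in 3D the same taper is applied
    along each axis within the boundary strips."""
    n = taper.size
    field[:n] *= taper
    field[-n:] *= taper[::-1]
    return field

p = np.random.default_rng(0).standard_normal(500)
p = apply_sponge_1d_edges(p, sponge_taper())
```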
A GPGPU pipeline for fast synthesis of 3D seismic
Summary: Increasing the productivity of seismic imaging workflows through efficient simulation pipelines is a mandatory task for any oil and gas company nowadays. In this work, we propose a GPGPU pipeline for fast synthesis of seismic data that encompasses high-performance geostatistical simulation of rock properties and efficient numerical propagation of acoustic waves, delivering a large data set of 3D seismic cubes with spatially varying properties and enabling the training and assessment of recently proposed neural network architectures for seismic inversion.
Incorporating Lossless Compression in Parallel Reservoir Simulation
Authors: M. Rogowski, S.N. Kayum and F. Mannuss
Summary: Restart files are a simulator's internal files that hold a binary representation of the reservoir state at a chosen time and can be used by the simulator's end users to resume the simulation from an arbitrary point in time. The size of restart files can reach hundreds of gigabytes, requiring high storage capacity. In this paper, the lossless compression of an in-house reservoir simulator's internal restart files is explored and its advantages are shared.
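A minimal sketch of what lossless compression of a restart-style array looks like, and how a compression ratio is measured, using zlib as an example codec (the paper's simulator and codec are not named in the abstract):

```python
import zlib

import numpy as np

# A restart-like field: smoothly varying pressures compress well losslessly.
pressure = np.linspace(200.0, 350.0, 2_000_000).reshape(100, 100, 200)

raw = pressure.tobytes()
compressed = zlib.compress(raw, level=6)

restored = np.frombuffer(zlib.decompress(compressed), dtype=pressure.dtype)
assert np.array_equal(restored, pressure.ravel())     # lossless round trip

print(f"compression ratio: {len(raw) / len(compressed):.1f}x")
```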
Digital Twin of Multiscale Geological Media: Faults, Fracture Corridors, Caves. Seismic simulation and imaging
Authors: V. Cheverda, G. Reshetova, V. Lisitsa and M. Protasov
Summary: The current level of development of numerical methods and high-performance computer systems opens the way to obtaining detailed information about the structure of geological objects from 3D seismic studies. A universally recognized component needed for the successful development of modern high-tech technologies for acquiring, processing and interpreting geophysical data is a complete digital model of the geological objects, their digital counterpart. It is on this basis that a detailed assessment of the resolution and information content of the proposed methods, and their comparison with already known processing and interpretation algorithms on a specific geological object, becomes possible. In this paper the main effort is devoted to the construction of a realistic three-dimensional seismo-geological model containing a family of faults, as well as clusters of cavities and fracture corridors. After constructing such an inhomogeneous multi-scale model, we perform finite-difference numerical simulation of the formation and propagation of three-dimensional seismic wavefields. The data obtained are processed using original procedures for extracting scattered/diffracted waves, with the subsequent construction of images of the corresponding small-scale objects that generate these waves. We present a detailed analysis of the results obtained.
A Scientific Workflow for Reverse Time Migration under Uncertainty
Authors: C.H. Barbosa, B. Silva, C. Alves, R. Silva, L. Kunstmann, H. Costa, J. Alves, M. Mattoso, F. Rochinha, D. Filho and A. Coutinho
Summary: Geophysical imaging faces challenges in seismic interpretation due to multiple sources of uncertainty related to data measurements, pre-processing and velocity analysis procedures. An essential part of the decision-making process is understanding these uncertainties and how they influence the outcomes. For this, we present a new scientific workflow built upon Bayesian tomography, Reverse Time Migration and machine-learning-based image interpretation. Our scientific workflow explores an efficient hybrid computational strategy. In addition, high levels of compression are applied to reduce the network data transfer among the workflow activities and to store the final images. The experiments are run with the well-known Marmousi velocity model benchmark on Lobo Carneiro at the High-Performance Computing Center of the Federal University of Rio de Janeiro, Brazil. The new scientific workflow, together with high-performance computing techniques, allows obtaining seismic images under uncertainty very quickly.
Exposing Fine-Grained Parallelism in Sequential Gaussian Simulation
Summary: The implementation of computationally demanding algorithms on GPU architectures has become inevitable nowadays. In the geoscience domain, reverse time migration based on the wave equation provides the most evident example. Problems related to the modeling of flow and transport of black-oil or compositional fluids in the subsurface have also been successfully addressed. However, less attention has been paid to the population of petrophysical properties, where geostatistical simulation algorithms, such as Sequential Gaussian Simulation (SGS), are also computationally expensive. The path-level parallelism approach in SGS simulates several values simultaneously along a randomly chosen path that traverses the simulation grid. Values simulated further down the path may depend on previously simulated values, so efficient operation requires scheduling the simulation of each cell to maximize parallelism while avoiding race conditions. This can be tackled by relaxing the accuracy of the initial algorithm and ignoring the few dependencies leading to conflicts; however, the solution then diverges from the exact algorithm and the magnitude of the differences is difficult to control. The exact path-level parallelization strategy can be implemented using multi-coloring schemes, as has been shown for a limited number of CPU threads. In our work, we demonstrate that fine-grained parallelism in Sequential Gaussian Simulation can be exposed to a sufficient degree for the efficient use of GPU architectures, even for an exact strategy. We discuss several implementations of multi-coloring algorithms applied to path-level parallelism and benchmark the overall performance of the GPU implementations against CPU standards.
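One simple way to build such a coloring is a level-scheduling pass over the random path, sketched below for a 1D grid with an illustrative neighborhood radius; the authors' multi-coloring schemes and GPU mapping may differ.

```python
import numpy as np

def color_path(path, neighborhood_radius=2):
    """Level-style coloring of a random simulation path: each cell gets a color
    strictly greater than that of every earlier path cell inside its search
    neighborhood, so processing colors in order respects all dependencies and
    all cells of one color can be simulated in parallel."""
    colors = {}
    for idx, cell in enumerate(path):
        conflicting = [colors[c] for c in path[:idx]
                       if abs(c - cell) <= neighborhood_radius]
        colors[cell] = max(conflicting, default=-1) + 1
    return colors

rng = np.random.default_rng(0)
path = list(rng.permutation(100))          # random visiting order of 100 grid cells
coloring = color_path(path)
print("colors used:", max(coloring.values()) + 1)
```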