72nd EAGE Conference and Exhibition - Workshops and Fieldtrips
- Conference date: 14 Jun 2010 - 17 Jun 2010
- Location: Barcelona, Spain
- ISBN: 978-90-73781-87-0
- Published: 13 June 2010
Addressing programmability of Accelerator-based Seismic Applications
By J.-M. Cela
Lately, hardware accelerators have demonstrated their capability to propel algorithms such as RTM into real production codes in the O&G industry. Symmetric multi-core technology currently cannot match accelerator performance: for instance, the soon-to-be-released family of multi-core chips with up to 48 cores will reach a peak performance that, chip for chip, accounts only for the baseline performance of the accelerators. Until last year the accelerator race was a three-horse contest between the IBM Cell/B.E., GPUs and FPGAs; however, IBM's announcement of the Cell/B.E. processor's demise leaves only two accelerators standing. Therefore, in this talk we compare RTM execution using Nvidia GPUs and the Convey FPGA-based solution. Moreover, accelerator programmability is a key point. We introduce the in-house GMAC programming model for GPUs. GMAC hides (host-device) memory management and communication from the user, allowing the development of simple but efficient programs. We also present real RTM execution results that demonstrate the viability of this approach. The figure shows a comparison between GMAC and a native CUDA code solving the same stencil problems: the performance is competitive, but the GMAC user code is simpler (two-domain, single-host example).
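As a rough illustration of what GMAC abstracts away, the sketch below shows the explicit device allocations and host-device copies that a native CUDA stencil code must carry; the kernel, coefficients, and problem size are invented for this example and are not taken from the talk.

```cuda
// Hypothetical native-CUDA baseline: every allocation and transfer is
// explicit, which is the bookkeeping GMAC is said to hide.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void stencil3pt(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        out[i] = 0.25f * in[i - 1] + 0.5f * in[i] + 0.25f * in[i + 1];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h_in  = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    // Separate device buffers plus copies in both directions: the
    // (host-device) traffic the abstract says GMAC manages implicitly.
    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    stencil3pt<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[1] = %f\n", h_out[1]);
    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}
```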
Developing an Exascale Trajectory; Leveraging GPU-based Seismic Imaging Experiences
By T. McKercher
The lessons learned from implementing GPU-based computing solutions in large-scale seismic imaging production environments will set a trajectory for transparent scalability of future-generation many-core architectures. GPUs offer an additional level of parallelism on top of existing distributed-memory parallel environments. As hardware evolves, it is important to consider memory architecture, execution control, control-flow efficiency, and unified address schemes. At the core of NVIDIA's strategy is the requirement to complement existing compute infrastructure with industry-standard COTS components that leverage existing software knowledge. A mandatory requirement is to provide extensible APIs that allow augmentation of modern software tools and practices, thereby offering developers the freedom to choose the best GPU-based tool or method for a given problem. But achieving optimal performance from many-core architectures also requires improving computational-thinking skills. Experience has shown that it is imperative to focus on domain decomposition and on truly understanding data access patterns when optimizing for distributed many-core systems. Academic institutions have adopted CUDA-based techniques to help build the foundation for computational thinking because the API supports the proper parallel constructs. Also, as the costs of seismic acquisition and processing spiral upward, modeling plays an ever-increasing role in oil and gas workflows. Mesh-based modeling approaches that exploit GPU parallelism will continue to demonstrate excellent scalability and help solve some of the inverse-problem challenges. The seismic workflow is converging, and large-scale visual computing will play an expanding role: centralized, secure delivery of seismic applications from HPC data centers will improve cross-discipline collaboration, shortening cycle time, reducing risk, and improving exploration and production success rates. In this presentation we will share customer experiences that relate to the forces shaping the future of seismic imaging. This is an exciting time for innovations in GPU run-time environments, and as programming tools evolve we can leverage our experience to help set a trajectory toward meeting the exascale demands that loom on the horizon.
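To make the point about data access patterns concrete, here is a minimal sketch (not from the presentation) of the standard CUDA idiom for a 1-D 3-point stencil: each block stages its tile plus halo in shared memory so neighbour reads do not go back to global memory. Block size, radius, and the zero-padded boundary are assumptions made for the example, and n is taken to be a multiple of the block size.

```cuda
#include <cuda_runtime.h>

#define RADIUS     1
#define BLOCK_SIZE 256

// 1-D 3-point stencil with shared-memory tiling; assumes n is a
// multiple of BLOCK_SIZE and zero-pads outside the domain.
__global__ void stencil1d(const float *in, float *out, int n)
{
    __shared__ float tile[BLOCK_SIZE + 2 * RADIUS];

    int gidx = blockIdx.x * blockDim.x + threadIdx.x;  // global index
    int lidx = threadIdx.x + RADIUS;                   // index in tile

    // Every thread loads its own element; the first RADIUS threads of
    // the block also load the left and right halo cells.
    tile[lidx] = in[gidx];
    if (threadIdx.x < RADIUS) {
        int lo = gidx - RADIUS;
        int hi = gidx + BLOCK_SIZE;
        tile[lidx - RADIUS]     = (lo >= 0) ? in[lo] : 0.0f;
        tile[lidx + BLOCK_SIZE] = (hi <  n) ? in[hi] : 0.0f;
    }
    __syncthreads();

    // All three reads now hit fast shared memory instead of DRAM.
    out[gidx] = 0.25f * tile[lidx - 1] + 0.5f * tile[lidx]
              + 0.25f * tile[lidx + 1];
}
```

The same decomposition pattern, with halos exchanged between devices rather than between threads, underlies the multi-GPU domain decomposition the abstract refers to.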
AMD technologies alignment for Oil and Gas industry
By J. Mora
AMD provides a wide range of technologies, based on CPUs, GPUs and a complete software ecosystem, that meet the requirements of the Oil and Gas industry. A review of those requirements, with special focus on performance, and of how AMD covers them will be provided. The flexibility of multi-[core, socket, chipset, InfiniBand, GPU] platforms combined with a highly efficient software ecosystem (i.e. ACML, Open64 and OpenCL) will be discussed with respect to several Oil and Gas application scenarios.
A reconfigurable implementation of a Computational Stencil for Accelerated Seismic Imaging
By S. Wallach
For decades, the evolution of computer systems has been driven by the exponential increase in logic density predicted by Moore's Law. Performance has increased exponentially as clock rates rose, and soaring transistor counts have been spent on a wide variety of architectural innovations to increase performance per clock. In recent years, however, performance has begun to stagnate as power density, caused by increasing system complexity and increasing clock frequency, has become the limiting factor in design (Figure 1). In an attempt to circumvent the laws of physics, system architects are turning to heterogeneous computing architectures. Such architectures increase performance by combining industry-standard processors with specialized hardware that focuses on accelerating specific operations. Typically, these operations are the ones that represent a large percentage of an application's runtime.
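The focus on the dominant operations follows directly from Amdahl's law; the toy calculation below (illustrative numbers only, not from the paper) shows why a large kernel speedup matters little unless the accelerated fraction of runtime is large.

```cuda
#include <cstdio>

// Overall speedup when a fraction f of runtime is accelerated by s.
static double amdahl(double f, double s) { return 1.0 / ((1.0 - f) + f / s); }

int main()
{
    // Hypothetical figures: even a 50x kernel speedup yields under 2x
    // overall if only half the runtime is offloaded; offloading 95%
    // changes the picture.
    printf("f=0.50, s=50 -> %.2fx\n", amdahl(0.50, 50.0)); // ~1.96x
    printf("f=0.95, s=50 -> %.2fx\n", amdahl(0.95, 50.0)); // ~14.49x
    return 0;
}
```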
HPC Technologies for Interactive Processing. Basic HPC requirements and tools for Pre-stack Interpretation
Pre-stack interpretation will bring interpreters and processors together, which implies that the processing side needs to reach a level of interactivity that at least approaches what interpreters have been used to at their desks. On the interpretation side we then have to manage data sets that are at least two orders of magnitude larger than in the stacked world. The goal of my talk is to analyse this situation and present high-performance computing tools to attack these problems.
For a few years now we have been in the multi-core era, and as we proceed, more cores and modified cores (especially in-order cores) will appear on CPUs. GPUs and FPGAs can contribute enormous compute capabilities and will eventually merge with the mainstream CPU line. But we will also see new ideas coming from the embedded market, pushing massively many-core CPUs with a low power budget.
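As a back-of-envelope check on the two-orders-of-magnitude claim, the sketch below multiplies out the size of a hypothetical survey; every parameter (grid, fold, sample format) is invented for illustration and not taken from the talk.

```cuda
#include <cstdio>

// Toy pre-stack vs. stacked volume arithmetic; all numbers hypothetical.
int main()
{
    const double inlines = 1000, xlines = 1000, samples = 2000; // trace grid
    const double fold = 100;            // traces per bin before stacking
    const double bytes_per_sample = 4;  // 32-bit float samples

    double stacked  = inlines * xlines * samples * bytes_per_sample;
    double prestack = stacked * fold;   // fold multiplies the data volume

    printf("stacked:   %.1f GB\n", stacked / 1e9);   // ~8 GB
    printf("pre-stack: %.2f TB\n", prestack / 1e12); // ~0.80 TB
    return 0;
}
```

With an assumed fold of 100, the pre-stack volume is exactly two orders of magnitude larger than the stacked one, matching the gap the abstract describes.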