- Title: NVIDIA Tesla, a way to dramatically speedup seismic processing and reservoir simulation applications
- Publisher: European Association of Geoscientists & Engineers
- Source: Conference Proceedings, 70th EAGE Conference and Exhibition - Workshops and Fieldtrips, Jun 2008, cp-41-00046
- ISBN: 978-94-6282-104-0
Abstract
Massive, fine-grained parallel computing capabilities will be needed to help researchers effectively use petascale computing environments. In particular, petascale computing will gain performance speed from the parallel processing capabilities of graphics processing units (GPUs). The concept behind the general-purpose GPU (GPGPU) is simple: use the massively parallel architecture of the graphics processor for general-purpose computing tasks. Because of that parallelism, ordinary calculations can be dramatically sped up.

GPGPU is being used as a high-performance coprocessor for oil and gas exploration and other applications, and it is much cheaper than a supercomputer. Scientists and researchers benefit from the power of the massively parallel computing architecture. This availability of supercomputing will unlock the answers to previously unsolvable problems in systems ranging from a workstation to server clusters.

Using a GPU as a calculation unit may appear complex. It is not about dividing the task into a handful of threads, as with a multicore CPU, but rather into thousands of threads. In other words, trying to use the GPU is pointless if the task is not massively parallel, and for this reason it is better compared to a supercomputer than to a multicore CPU. An application to be run on a supercomputer is necessarily divided into an enormous number of threads, and a GPU can thus be seen as an economical version of a supercomputer, stripped of its complex structure.

NVIDIA CUDA is a software layer intended for stream computing, with an extension to the C programming language that allows certain functions to be marked for processing by the GPU instead of the CPU. These functions are compiled by a CUDA-specific compiler so that they can be executed by the GPU's numerous calculation units.
Thus, the GPU is seen as a massively parallel coprocessor that is well adapted to highly parallel algorithms, such as those used in seismic processing and reservoir simulation.

The NVIDIA Tesla product line is dedicated to HPC. The Tesla Computing System is a slim 1U form factor which easily scales to solve the most complex, data-intensive HPC problems. The Tesla Computing System is equipped with four new-generation NVIDIA GPU boards, IEEE 754 compliant double-precision floating point, and a total of 16 GB of video memory. The rack is used in tandem with multicore CPU systems to create a flexible computing solution that fits seamlessly into existing IT infrastructure.
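The CUDA model described above can be illustrated with a minimal kernel sketch. The function name, array size, and launch configuration below are illustrative assumptions, not taken from the paper; the sketch only shows the mechanism the abstract describes: a C function marked for GPU execution, compiled by the CUDA compiler and run by thousands of threads.

```cuda
#include <cuda_runtime.h>

// A kernel is an ordinary C function marked __global__; the CUDA
// compiler (nvcc) compiles it for the GPU's calculation units
// instead of the CPU.
__global__ void scale(float *data, float factor, int n)
{
    // Each thread computes its own global index and processes
    // exactly one array element.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;               // illustrative: one million elements
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));

    // Launch enough 256-thread blocks to cover all n elements --
    // thousands of threads in flight, not a handful as on a
    // multicore CPU.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_data, 2.0f, n);

    cudaDeviceSynchronize();
    cudaFree(d_data);
    return 0;
}
```

The `<<<blocks, threads>>>` launch syntax is the C-language extension the abstract refers to: it tells the CUDA runtime how to divide the work across the GPU's parallel calculation units.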