With some of the largest supercomputing clusters on the planet, the seismic imaging industry has a tremendous appetite for computing resources. Increasing demand for energy and societal pressure for greener energy will only accelerate this growing need. It is therefore clear that seismic imaging companies will be early adopters of the newest generation of supercomputers that enable petascale computing and beyond, machines that are perhaps only months away. However, as this transition to petascale sweeps through the industry, it will also be among the first groups to come face-to-face with the significant changes required to go beyond petascale. The industry will have to actively plan for how its business models and standard operations must change for practitioners to achieve the performance demanded by their ever-growing technical challenges. The next few years will present important challenges and opportunities for high performance computing in research, industry and business. Petascale computing and, eventually, "exascale" computing promise the capability to deliver full solutions to some of the most challenging and complex issues facing the industry. However, for well-documented technology reasons, these new computing system architectures will be radically different in design from traditional high performance computing platforms. For example, in response to growing technological obstacles, the processor industry is moving down the multicore path. This development is driving a sea change in the computer industry, for which a new "Moore's Law" may be arising, dictating a doubling of the number of cores per unit time.
As more cores are squeezed onto a chip, the old programming approaches will not be adequate to achieve the performance required by this industry, and naive assumptions of linear scaling of performance with the number of cores will be very wrong. Recent experience with multicore has identified key challenges that must be overcome to realize the potential of the next generation of supercomputing. These include fundamental algorithm design; integration of novel architectures with more traditional computational systems; management of the unprecedented amounts of data that are now a key component of all high performance computing activities; and the development, improvement and validation of new application solutions that address the full complexity of the problems these novel architectures will make tractable. This presentation will discuss how the industry will be impacted by these changes and what practitioners can do to achieve the full potential of multicore-based petascale and exascale supercomputers.
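The warning above about naive linear-scaling assumptions can be made concrete with Amdahl's law, a standard model relating achievable speedup to the serial fraction of a workload. The sketch below is an illustration of that general principle, not part of the presentation itself; the 5% serial fraction is an assumed example value.

```python
def amdahl_speedup(cores, serial_fraction):
    """Ideal speedup on `cores` cores when `serial_fraction` of the
    work cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even a modest 5% serial fraction caps speedup far below linear:
# 256 cores yield under 19x, against an asymptotic limit of 20x.
for n in (4, 16, 64, 256):
    print(f"{n:4d} cores -> {amdahl_speedup(n, 0.05):5.1f}x speedup")
```

The gap between core count and realized speedup widens rapidly, which is why exploiting multicore petascale machines requires rethinking algorithms rather than simply adding cores.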

