Volume 21, Issue 8
  • ISSN: 0263-5046
  • E-ISSN: 1365-2397

Abstract

Bill Bartling, senior director of energy and sciences at Silicon Graphics, provides an analysis of the shortcomings of today’s computer systems and discusses where the future lies for compute-intensive operations in the E&P business. Oil and gas computational challenges, especially in seismic processing and reservoir simulation, have consistently outstripped the capabilities of even the most powerful computers, and these same problems are now growing at a rate faster than Moore’s Law. In truth, the very large problems we are trying to solve today have always been around. Adapting to our surroundings, we have scaled them back to match the limited abilities of our machines, hoping that engineering breakthroughs would eventually compensate for those limitations.

Many breakthroughs have indeed come to pass, especially in microprocessor speed, power, pricing and form factor. But these revolutions exposed new limitations that in many cases cancelled out some or all of the benefits. Scientists and engineers have rushed to consume the promise of blazing-fast gigahertz CPUs at highly competitive prices, only to find that I/O, bus speed, storage systems and code design prevented them from realizing their full potential. Idle cycles became the theme of the day. The goal in working with the complexity of seismic and reservoir simulation data is throughput: speed is merely a way to get there. The concept of using speed to deliver breakthrough throughput, and thus significantly faster computational times for much larger data models, continues to drive research in both computer and software design. The answer must lie in balanced systems, with each component optimized to do its part in delivering the revolution.

The past few years have ushered in a new computational paradigm: the cluster. This model is based on one of the most tantalizing promises of the Internet – a globally interconnected computational grid of inexpensive systems that, together, can potentially combine to create the most powerful supercomputer ever built. The SETI grid (http://setiathome.ssl.berkeley.edu/) is an excellent example of such a system, but, with interconnects running at 28.8 kb/sec in too many places, SETI@home also highlights the importance of appropriate and adequate backplane bandwidth.

At the other end of the spectrum are supercomputers, which are, in truth, highly compact computational grids with extraordinarily fast interconnects. In spite of their slower clock speeds, from their introduction they outran the fastest PCs thanks to efficient, stable and proven parallel operating systems and related optimizations that focused on the throughput of many joined processors, not just individual processor speed. Of course, the price point of this sort of machine, while once easily justified, has come under increasing scrutiny from data processors, IT executives and P&L managers. Key to the success of supercomputers, in addition to their interconnect speeds, have been effective I/O and data-delivery systems that keep the processors fed and working – the lack of which has been a striking shortcoming of PC architectures applied to computationally intensive tasks. Direct-attached storage with fibre-channel connections has been the preferred solution for keeping data flowing to the processors. New storage and delivery systems crush that paradigm, offering breakthroughs in data delivery via Storage Area Networks.
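As a rough illustration of the bandwidth argument above, the back-of-envelope sketch below (in Python, not from the article) estimates how long it would take to deliver a seismic dataset to the processors over a 28.8 kb/sec grid link versus a fibre-channel class link. Only the 28.8 kb/sec figure comes from the text; the 100 GB dataset size and the ~2 Gbit/s Fibre Channel rate are illustrative assumptions.

```python
# Back-of-envelope data-delivery times: illustrative only.
# The 28.8 kb/sec interconnect figure is from the article; the dataset
# size and the ~2 Gbit/s Fibre Channel rate are assumptions for scale.

def transfer_time_hours(data_bytes: float, link_bits_per_sec: float) -> float:
    """Hours needed to move data_bytes over a link of the given bit rate."""
    return (data_bytes * 8) / link_bits_per_sec / 3600.0

seismic_volume_bytes = 100e9   # assumed 100 GB seismic volume
dialup_bps = 28.8e3            # 28.8 kb/sec grid interconnect (from the text)
fibre_channel_bps = 2e9        # assumed ~2 Gbit/s Fibre Channel link (circa 2003)

print(f"28.8 kb/sec grid link : {transfer_time_hours(seismic_volume_bytes, dialup_bps):,.0f} hours")
print(f"fibre-channel SAN link: {transfer_time_hours(seismic_volume_bytes, fibre_channel_bps):,.1f} hours")
```

Under these assumptions the slow grid link needs months to deliver what the SAN-class link delivers in minutes, which is the sense in which data-delivery bandwidth, not processor clock speed, sets the throughput ceiling.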

DOI: 10.3997/1365-2397.21.8.25601
Published: 01 August 2003
  • Article Type: Research Article