Volume 3, Issue 8
  • ISSN: 0263-5046
  • E-ISSN: 1365-2397

Abstract

A popular conundrum posed by computational seismologists concerns the performance of various computers on seismic data. Comparing throughputs, i.e. the amount of data that can be processed in some convenient period such as a month or perhaps a British standard business lunchtime, is usually complicated by the use of different seismic software packages, different processing sequences, different data densities, good old-fashioned exaggeration, memory lapses and so on. The only commonly used benchmark, to my knowledge, is the venerable Whetstone benchmark, designed many years ago to test floating-point operations. Unfortunately, this program is very much smaller than typical scientific programs of any kind, let alone seismic programs, and it performs no I/O (input/output). As a result, the Whetstone benchmark is, in my experience, almost useless for testing seismic computer systems.
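To make the objection concrete, below is a minimal sketch of a Whetstone-style kernel (an illustration only, not the published benchmark; the iteration count and the particular mix of operations are assumptions, though the constant 0.499975 echoes the original). It times nothing but floating-point arithmetic and transcendental library calls, with no I/O inside the measured loop, which is precisely why such a figure says little about the throughput of an I/O-heavy seismic processing system.

#include <math.h>
#include <stdio.h>
#include <time.h>

/* A Whetstone-style floating-point kernel: tight loops of arithmetic
   and transcendental calls, no I/O in the timed region.
   Compile with e.g.  cc -O2 whet_sketch.c -lm  */
int main(void)
{
    const long n = 10000000L;            /* iteration count: arbitrary */
    double x = 1.0, y = 1.0, z = 1.0;
    clock_t start = clock();

    for (long i = 0; i < n; i++) {
        /* mixed add/multiply, in the spirit of the Whetstone modules */
        x = (x + y + z) * 0.499975;
        y = (x + y - z) * 0.499975;
        z = (x - y + z) * 0.499975;
        /* transcendental functions, also characteristic of Whetstone */
        x = sqrt(exp(log(fabs(x) + 1.0)));
    }

    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("%ld iterations in %.2f s (x=%g)\n", n, secs, x);
    return 0;
}

A machine can post an excellent time on a loop like this and still be starved by its disks and channels when fed real seismic traces; the measured region never touches storage at all.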

  • DOI: 10.3997/1365-2397.1985016
  • Published: 1985-08-01
  • Article Type: Research Article