Volume 4, Issue 4
  • ISSN: 0263-5046
  • E-ISSN: 1365-2397


I expect you're wondering what happened to Article VIII on graphics, devices, standards, usage and so on. So am I. In the meantime, I got bored waiting for me to start writing it, and so I thought I would follow on from the fairly apocryphal portable benchmark article (Hatton 1985a), which evoked quite a response, and write Article VIII not on graphics, devices, standards, usage and so on, but on seismic software architectures and how benchmarking can be done. The real Article VIII, or Article N where N is a large prime number inversely proportional to my interest at any one time, will have to wait until I can find something vaguely humorous to say about graphics. The problem is that programmers take it all terribly seriously (try communicating with any graphics expert, for example!).

In essence, seismic benchmarking attempts to estimate how much seismic data processing a particular software and hardware system can achieve. It causes more friction between normally rational geophysicists than almost anything else. A commonly used rule of thumb is the two-by-two rule. First, if the salesman gives a figure, divide it by two. This will generally give quite reasonable agreement with the programming staff's benchmark programs. However, in order to simulate the day-to-day catalogue of processing disasters ('where's my plot gone? ... I never said that tape ... what tape? ... oh, just run it again ... I said 400 mill. windows ... the wick in the CPU has gone out again ...'), a further factor of 2 reduction is necessary. This latter reduction is remarkably consistent and leads to an important physical invariant:

The first law of benchmarking: Useful work + testing + bungling = 120% of available machine time, or 30% of the estimated machine time.
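The two-by-two rule amounts to nothing more than repeated halving; a minimal sketch (the function name and the worked figure are illustrative, not from the original):

```python
def realistic_throughput(salesman_figure):
    """Apply the 'two-by-two' rule of thumb to a vendor's quoted
    processing rate (in whatever units the salesman used)."""
    # First halving: brings the quote into line with the
    # programming staff's own benchmark programs.
    benchmark_estimate = salesman_figure / 2
    # Second halving: allows for the day-to-day catalogue of
    # processing disasters (lost plots, wrong tapes, and so on).
    return benchmark_estimate / 2

# A quoted 400 traces/hour thus becomes a realistic 100 traces/hour.
print(realistic_throughput(400))
```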
This paper will attempt: (a) to discuss efficiency from both practical and theoretical points of view; (b) to expand on the first FFT benchmark (Hatton 1985a), and enable a critical appraisal of what the quoted figures mean, if anything; (c) to study host machine floating-point performance compared with attached array processor performance. In the course of this I will bring the original results (Table 1 of Hatton 1985a) up to date with benchmark results for a number of computers, from micro- to supercomputer, which have been sent to me in the interim. Probably the most significant factor determining machine efficiency is software, both operating system and applications. I will discuss the applications category first.
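For readers unfamiliar with how an FFT benchmark of this kind produces a floating-point rate, the following sketch shows the usual shape of such a measurement: time many repeated transforms of a fixed length and convert to MFLOPS using the conventional 5 N log2 N operation count per complex FFT. This is not Hatton's actual benchmark program, merely an assumed minimal modern equivalent:

```python
import time
import numpy as np

def fft_benchmark(n=1024, repeats=100):
    """Time `repeats` complex FFTs of length n and return an
    approximate MFLOPS figure, using the conventional
    5 * n * log2(n) operation count per complex transform."""
    x = np.random.rand(n) + 1j * np.random.rand(n)
    t0 = time.perf_counter()
    for _ in range(repeats):
        np.fft.fft(x)
    elapsed = time.perf_counter() - t0
    flops = 5.0 * n * np.log2(n) * repeats
    return flops / elapsed / 1e6  # millions of floating-point ops/sec

print(f"{fft_benchmark():.1f} MFLOPS")
```

Note that exactly what such a figure means depends heavily on the transform length, the memory system, and whether the library exploits the hardware well, which is precisely the critical appraisal point (b) is after.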


  • Article Type: Research Article