Parallel computing: What we did right

After a decade of research and evaluation, a number of contractors and major oil companies have deployed massively parallel computers for production seismic processing. These investments, in both hardware and software development, have been made despite disagreement among vendors over the best parallel architecture, a sense within the computer science community that automatic parallelization remains an unsolved problem, relative inattention to the parallel I/O and mass storage issues important to seismic processing, and close management scrutiny of the value added by high-end processing algorithms. Most of the issues raised by the supercomputing community have proved irrelevant to seismic processing: processor topology, code portability, ease of use, and scalability do not significantly affect our use of high-performance computers. The turning point came when our industry realized that only a few relevant issues remained: applicability, stability, and the programming paradigm. These have now been addressed by business trends, product maturity, and hard work. The idiosyncrasies of our business (the need to recognize and manage risk, and the tremendous leverage that research provides to successful exploration) have placed us among the earliest adopters of parallel computing technology. These idiosyncrasies are unlikely to change and will therefore continue to distinguish our use of advanced technology from the way in which it is adopted by other industries.