Preferred Citation: Ames, Karyn R., and Alan Brenner, editors. Frontiers of Supercomputing II: A National Reassessment. Berkeley: University of California Press, 1994. http://ark.cdlib.org/ark:/13030/ft0f59n73z/


 
Market Trends in Supercomputing

Neil Davenport

Neil Davenport is the former President and CEO of Cray Computer Corporation. Before the spinoff of the company from Cray Research, Inc. (CRI), in November 1989, Neil served from 1981 to 1988 as the Cray Research Ltd. (UK) Managing Director for Sales, Support, and Service for Northern Europe, the Middle East, India, and Australia; from 1988 to November 1989, he was Vice President of Colorado Operations, with responsibility for the manufacture of the CRAY-3. Before joining CRI, he worked 11 years for ICL in England, the last three managing the Education and Research Region, which had marketing responsibility for the Distributed Array Processor program.

Since 1976 and the introduction of the CRAY-1, which for the purpose of this paper is regarded as the start of the supercomputer era, the market for large-scale scientific computers has been dominated by machines of one architectural type. Today, despite the introduction of a number of new architectures and despite the improvement in performance of machines at all levels in the marketplace, most large-scale scientific processing is carried out on vector pipeline computers with from one to eight processors and a common memory. The dominance of this architecture is equally strong when measured by the number of machines installed or by the amount of money spent on purchase and maintenance.

As with every other level of the computer market, the supply of software follows the dominant hardware. Accordingly, the library of application software for vector pipeline machines has grown significantly. The investment by users of the machines and by third-party software houses in this architecture is considerable.

The development of vector pipeline hardware since 1976 has been significant, with the prospect of machines with 100 times the performance of the CRAY-1 being delivered in the next year or two. The improvement in performance of single processors has not been sufficient to sustain this growth. Multiple processors have become the norm for the highest-performance offerings from most vendors over the past few years. The market leader, Cray Research, Inc., introduced its first multiprocessor system in 1982.

Software development for single processors, whether part of a larger system or not, has been impressive. The proportion of Fortran code that is vectorized automatically by compilers has increased continuously since 1976. Several vendors offer good vectorization capabilities in Fortran and C. For the scientist, vectorization has become transparent. Good code runs very well on vector pipeline machines. The return for vectorization remains high for little or no effort on the part of the programmer. This improvement has taken the industry 15 years to accomplish.
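To make concrete what automatic vectorization buys the programmer, the sketch below is written in C, one of the languages named above; the function names are illustrative and not taken from the text. It contrasts a loop whose iterations are independent, which a vectorizing compiler can pipeline without help, with a recurrence whose loop-carried dependence typically defeats automatic vectorization.

    #include <stddef.h>

    /* Independent iterations: the kind of loop a vectorizing compiler
       pipelines automatically on a vector machine. */
    void axpy(size_t n, double a, const double *x, double *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    /* A recurrence: each iteration reads the previous result, a
       loop-carried dependence that straightforward vectorization
       cannot remove. */
    void prefix_sum(size_t n, double *s)
    {
        for (size_t i = 1; i < n; i++)
            s[i] = s[i - 1] + s[i];
    }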

Software for multiprocessing a single task has proved much more difficult to write. Compiler preprocessors that find and highlight opportunities for parallel processing in codes are available, along with some more refined tools for the same purpose. As yet, the level of multitasking of single programs over multiple processors remains low. There are exceptional classes of problems that lend themselves to multitasking, such as weather models; codes for these problems have been restructured to take advantage of multiple processors, with excellent results. Overall, however, progress in automatic parallelization and in new parallel application programs has been disappointing but not surprising. The potential benefits of parallel processing and massively parallel systems have been apparent for some time. Before 1980, a number of applications well suited to the massively parallel architecture were running successfully on the ICL Distributed Array Processor, including estuary modeling, pattern recognition, and image processing. Other applications that did not map directly onto the machine architecture, such as oil reservoir engineering, did not fare so well despite considerable effort.
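As a rough illustration of why multitasking a single program is harder than vectorizing it, the following sketch uses POSIX threads, a facility chosen here purely for illustration and not discussed in the text; the processor count and names are assumptions. It partitions one loop's iterations across a handful of processors by hand: the programmer, not the compiler, must divide the work and synchronize its completion.

    #include <pthread.h>
    #include <stdio.h>

    #define N      1000000
    #define NPROCS 4            /* illustrative processor count */

    static double x[N], y[N];

    struct slice { long lo, hi; };

    /* Each processor handles one contiguous slice of the iteration space. */
    static void *scale_slice(void *arg)
    {
        struct slice *s = arg;
        for (long i = s->lo; i < s->hi; i++)
            y[i] = 2.0 * x[i];   /* independent iterations */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NPROCS];
        struct slice part[NPROCS];

        for (long i = 0; i < N; i++)
            x[i] = (double)i;

        /* Divide the loop by hand and start one thread per processor. */
        for (int p = 0; p < NPROCS; p++) {
            part[p].lo = (long)p * N / NPROCS;
            part[p].hi = (long)(p + 1) * N / NPROCS;
            pthread_create(&tid[p], NULL, scale_slice, &part[p]);
        }
        /* Wait for all slices to finish before using the result. */
        for (int p = 0; p < NPROCS; p++)
            pthread_join(tid[p], NULL);

        printf("y[N-1] = %f\n", y[N - 1]);
        return 0;
    }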

The recent improvements in performance and the associated lowering in price of microprocessors have greatly increased the already strong attraction of massively parallel systems. A number of vendors have introduced machines to the market, with some success. The hardware issues seem to be manageable, with the possible exception of common memory. The issues for system and application software are still formidable. The level of potential reward and the increase in the number of players will accelerate progress, but how quickly? New languages and new algorithms do not come easily, nor are they easily accepted.

In the meantime, vector pipeline machines are being enhanced. Faster scalar processors with cycle times down to one nanosecond are not far away. Faster, larger common memories with higher bandwidth are being added. The number of processors will continue to increase, but only as fast as the market can absorb them. With most of the market momentum (also called user friendliness or, more accurately, user familiarity) still behind such machines, the tide seems likely to turn slowly.

In summary, it would appear that the increasing investment in massive parallelism will yield returns that in some circumstances could be spectacular, but progress will be slow in the general case. Intermediate advances in parallel processing will benefit machines with 16 or 64 processors as well as those with thousands. If these assumptions are correct, then the market-share position in 1995, by type of machine, will be similar to that of today.

