Lessons Learned
Ben Barker
William B. "Ben" Barker is President of BBN Advanced Computers Inc. and is Senior Vice President of its parent corporation, Bolt Beranek & Newman Inc. (BBN). He joined BBN in 1969 as a design engineer on the Advanced Research Projects Agency ARPANET program and installed the world's first packet switch at the University of California-Los Angeles in October 1969. In 1972 Dr. Barker started work on the architectural design of the Pluribus, the world's first commercial parallel processor, delivered in 1975. In 1979 he started the first of BBN's three product subsidiaries, BBN Communications Corporation, and was its president through 1982. Until April 1990, Dr. Barker served as BBN's Senior Vice President, Business Development, in which role he was responsible for establishing and managing the company's first R&D limited partnerships, formulating initial plans for the company's entry into the Japanese market and for new business ventures, including BBN Advanced Computers.
Dr. Barker holds a B.A. in chemistry and physics and an M.S. and Ph.D. in applied mathematics, all from Harvard University. The subject of his doctoral dissertation was the architectural design of the Pluribus parallel processor.
Abstract
Bolt Beranek & Newman Inc. (BBN) has been involved in parallel computing for nearly 20 years and has developed several parallel-processing systems and used them in a variety of applications. During that time, massively parallel systems built from microprocessors have caught up with conventional supercomputers in performance and are expected to far exceed conventional supercomputers in the coming decade. BBN's experience in building, using, and marketing parallel systems has shown that programmer productivity and delivered, scalable performance are important requirements that must be met before massively parallel systems can achieve broader market acceptance.
Introduction
Parallel processing as a computing technology has been around almost as long as computers. However, it has only been in the last decade that systems based on parallel-processing technology have made it into the mainstream of computing. This paper explores the lessons learned about parallel computing at BBN and, on the basis of these lessons, our view of where parallel processing is headed in the next decade and what will be required to bring massively parallel computing into the mainstream.
BBN has a unique perspective on the trends and history of parallel computation because of its long history in parallel processing, dating back to 1972. BBN has developed several generations of computer systems based on parallel-processing technology and has engaged in advanced research in parallel algorithms and very-large-scale systems (Figure 1). In addition to parallel-processing research, BBN has been shipping commercial parallel processors since 1975 and has installed more than 300 systems. This represents an investment of approximately $100 million in private and government funds in BBN parallel-processing technology and products.
BBN has developed extensive experience with parallel-processing systems during this 18-year period. Large numbers of these systems have been used by BBN in communications and simulation applications, and most of them are still in operation today. BBN has also used its own parallel systems in a wide range of research projects, such as speech recognition and artificial intelligence. This extensive experience using our own parallel computers is unique within the industry and has enabled BBN to better understand the needs of parallel-computer users.
Parallel Processing: 1980 to 2000
In the 1980s, the high-performance computing industry gained experience with parallel processing on a small scale. Vendors such as Sequent Computer Systems and Alliant Computer Systems developed systems with up to tens of processors, and the major vendors, including IBM, Cray Research, and Digital Equipment Corporation, all began marketing systems with four to eight processors. Parallel processing on this scale has now become commonplace in the industry, with even high-end workstations and PC servers employing multiple CPUs.
A key development that helped bring about this acceptance was the symmetric multiprocessing (SMP) operating system. Typically based on UNIX®, but in some cases on proprietary operating systems, SMP operating systems made it much easier to use multiprocessor computers. All of these systems support shared memory, which is needed to develop the parallel operating system kernels used in SMP systems.
However, all of these systems have bus-based or crossbar architectures, limiting the scalability of the systems. The bus in a bus-based architecture has a fixed bandwidth, limited by the technology used and by the physical dimensions of the bus. This fixed bandwidth becomes a bottleneck as more processors are added because of the increase in contention for the bus. Crossbar architectures provide scalable bandwidth, but the cost of crossbars increases as the square of the number of ports, rendering them economically infeasible for more than a few dozen processors.
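To make the scaling argument concrete, the short C sketch below models both effects with placeholder numbers (the total bus bandwidth and the per-crosspoint cost are assumptions, chosen only to show the trends): each processor's share of a fixed bus shrinks as 1/N, while the number of crossbar crosspoints, and hence crossbar cost, grows as N squared.

#include <stdio.h>

int main(void)
{
    const double bus_mb_per_s    = 1000.0;  /* assumed fixed total bus bandwidth, MB/s */
    const double crosspoint_cost = 5.0;     /* assumed cost per crossbar crosspoint, dollars */

    printf("%6s  %18s  %18s\n", "CPUs", "bus MB/s per CPU", "crossbar cost ($)");
    for (int n = 2; n <= 1024; n *= 2) {
        double share = bus_mb_per_s / n;                 /* bus contention: each CPU's share shrinks as 1/n */
        double cost  = crosspoint_cost * (double)n * n;  /* crossbar crosspoints grow as n^2 */
        printf("%6d  %18.1f  %18.0f\n", n, share, cost);
    }
    return 0;
}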
In the 1990s, massively parallel computers based on scalable interconnects will become a mainstream technology, just as small-scale parallel processing did in the 1980s. The driving force is the economics involved in increasing the performance of the most powerful computers. It is becoming increasingly expensive in both dollars and time to develop succeeding generations of traditional ultra-high clock rate supercomputers and mainframes. Massively parallel systems will be the only affordable way to achieve the performance goals of the 1990s. This shift is made possible by three technologies discussed in the following sections:
• high-performance RISC microprocessors,
• advanced software, and
• versatile, scalable system interconnects.
The Attack of the Killer Micros
One of the key drivers in the high-performance computing industry is the disparity in price/performance and overall performance gains between microprocessors and conventional mainframes and vector supercomputers. As Figure 2 illustrates, the gains in microprocessor performance are far more rapid than those for supercomputers, with no end in sight for this trend. Curves such as these make it seem obvious that high-performance microprocessors, and parallel systems built from them, will come to dominate the high-end computing market; this is the "attack of the killer micros."
This changeover is only just now occurring. As Figure 3 illustrates, parallel systems are now capable of higher performance and better price/performance than traditional supercomputers. This transition occurred with the advent of RISC microprocessors, which provided sufficient floating point performance to enable parallel systems to rival supercomputers. This performance and price/performance gap will continue to widen in favor of parallel micro-based systems as microprocessor gains continue to outstrip those of supercomputers.
Programmer Productivity on Massively Parallel Systems
High performance and attractive price/performance are not enough to bring massively parallel systems into the computing mainstream. It is well known that only 10–20 per cent of a computer center's budget goes to computer hardware; the largest portion goes to the people who write software and support the computers. Large gains in price/performance can be quickly erased if the system is difficult to use. To be accepted by a larger number of customers, massively parallel systems must provide ease of use and programmer productivity closer to those of current mainstream high-performance systems.
The conventional wisdom in the 1980s was that parallel systems are difficult to use because it is hard to parallelize code. However, many problems are naturally parallel and readily map to parallel architectures. For these problems, much of the past has been spent developing serial algorithms so that they could be solved on single-CPU systems. Taking this serial code and trying to parallelize it is clearly not the most productive approach; it is far more productive to map the parallel problem directly onto a parallel system.
Also, most computer systems today are parallel systems. Minicomputers, workstations, minisupers, supercomputers, even mainframes all have more than a single CPU. Clearly, parallelism by itself is not the obstacle, since such systems from major computer vendors are now considered mainstream computers.
Yet there is a programmer productivity gap on most massively parallel systems, as illustrated in Figure 4. While productivity on small-scale parallel systems now mirrors the traditionally higher productivity of uniprocessor systems, productivity on massively parallel systems is still very low. Given that there are plenty of parallel problems and that parallel processing has reached the mainstream, what is still holding massively parallel systems back? The answer lies in their software development environment.
Front End/Back End Versus Native UNIX
One key differentiator between most massively parallel systems and the successful mainstream parallel systems is the relationship of the development environment to the computer. In most massively parallel systems, the computer is an attached processor, or back end, to a front-end workstation, minicomputer, or personal computer (Figure 5). All software development and user interaction take place on the front end, whereas programs execute on the back-end parallel system. BBN's early parallel processors, such as the Butterfly® I and Butterfly Plus systems, required such front ends. As we learned, this architecture has several problems, including
• Bottlenecks: the link between the front end and the back end is a potential bottleneck. It is frequently a local area network, such as Ethernet, with a very limited bandwidth compared with the requirements of high-end supercomputer applications.
• Debugging and tuning difficulties: because the software-development tools are separate from the parallel back end, it can be difficult to debug and tune programs. The tools on the front end cannot directly examine the memory of the back end and must rely on the back-end processors for information. If a program crashes some or all of the parallel nodes' kernels, the debugger may not be able to provide sufficient information.
• Slow development cycle: because development is done on a separate computer, the power of the parallel supercomputer is not available to run the development tools, such as the compiler. Also, executable program images must be downloaded into the back end, adding a step to the development cycle and further slowing down productivity.
• Multiple environments: while the front end typically runs UNIX, the back-end processors run a proprietary kernel. This requires the developer to learn two different environments.
• Limited kernel: the proprietary kernel that runs on the back-end processors does not provide all of the facilities that users expect on modern computers. These kernels provide little protection between tasks, no virtual memory, and few operating-system services.
Contrast this with modern supercomputers, mainframes, minicomputers, and workstations. All have native development environments, typically based on UNIX. This greatly simplifies development because the tools run on the same machine and under the same operating system as the executable programs. The full services of UNIX that are available to the programmer are also available to the executable program, including virtual memory, memory protection, and other system services. Because these systems are all shared-memory machines, powerful tools can be built for debugging and analyzing program performance, with limited intrusion into the program's operation.
Recent BBN parallel computers, such as the GP1000™ and TC2000™ systems, are complete, stand-alone UNIX systems and do not require a front end. The Mach 1000™ and nX™ operating systems that run on these computers contain a highly symmetric multiprocessing kernel that provides all of the facilities that users expect, including load balancing, parallel pipes and shell commands, etc. Since these operating systems present a standard UNIX interface and are compliant with the POSIX 1003.1 standard, users familiar with UNIX can begin using the system immediately. In fact, there are typically many users logged into and using a TC2000 system only hours after it is installed. This is in contrast to our earlier front-end/back-end systems, where users spent days or weeks studying manuals before running their first programs.
Single User versus Multiple Users
A second difference between mainstream computers and most massively parallel systems is the number of simultaneous users or applications that can be supported. Front-end/back-end massively parallel systems typically allow only a few users to be using the back end at one time. This style of resource scheduling is characterized by batch operation or "sign-up sheets." This is an adequate model for systems that will be dedicated to a single application but is a step backward in productivity for multiuser environments when compared with mainstream computers that support timesharing operating systems. As has been known for many years, timesharing provides a means to more highly utilize a computer system. Raw peak MFLOPS (i.e., millions of floating-point operations per second) are not as important as the number of actual FLOPS that are used in real programs; unused FLOPS are wasted FLOPS. The real measure of system effectiveness is the number of solutions per year that the user base can achieve.
Early in BBN's use of the Butterfly I computer, we realized that flexible multiuser access was required in order to get the most productivity out of the system. The ability to cluster together an arbitrary number of processors was added to the Chrysalis™ operating system (and later carried forward into Mach 1000 and nX), providing a simple but powerful "space-sharing" mechanism that allows multiple users to share a system. However, to eliminate the front end and move general computing and software development activities onto the system, genuine time-sharing capabilities were needed so that individual processors could be used by multiple users. The Mach 1000 and nX operating systems provide true time sharing.
Interconnect Performance, System Versatility, and Delivered Performance
Related to the number of users that can use a system at the same time is the number of different kinds of problems that a system can solve. The more flexible a system is in terms of the programming paradigms it supports, the more solutions per year can be delivered. As we learned while adapting a wide range of applications to the early Butterfly systems, it is much more productive to program using a paradigm that is natural to the problem at hand than to attempt to force-fit the code to a machine-dependent paradigm. Specialized architectures do have a place in certain applications where the specialized architecture provides very high performance and the same code will be run a large number of times. However, many problems are not well suited to these systems.
Current mainstream systems provide a very flexible programming environment in which to develop algorithms. Based on shared-memory architectures, these systems have demonstrated their applicability in a wide range of applications, from scientific problems to commercial applications. BBN's experience with our parallel systems indicates that shared-memory architectures are the best way to provide a multiparadigm environment comparable to mainstream systems. For example, the TC2000 uses the Butterfly® switch (BBN 1990) to provide a large, globally addressable memory space that is shared by the processors yet is physically distributed: "distributed shared memory." This provides the convenience of the shared-memory model for those applications to which it is suited while delivering the scalable bandwidth of distributed memory. The TC2000's Butterfly switch, with its low message-transfer latencies, also makes the system well suited to programming with the message-passing paradigm.
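As an illustration of the shared-memory paradigm described above, the sketch below uses POSIX threads as a modern stand-in (the TC2000 itself was programmed through BBN facilities such as the Uniform System library and Mach, whose APIs are not shown here). Every worker reads one globally shared array directly; no data is copied or sent as messages.

#include <pthread.h>
#include <stdio.h>

#define N_ELEMS   (1 << 20)
#define N_WORKERS 4

static double data[N_ELEMS];        /* one globally shared array, visible to every worker */
static double partial[N_WORKERS];   /* one result slot per worker, so no locking is needed */

static void *worker(void *arg)
{
    long id    = (long)arg;
    long chunk = N_ELEMS / N_WORKERS;
    long lo    = id * chunk;
    long hi    = (id == N_WORKERS - 1) ? N_ELEMS : lo + chunk;

    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];               /* direct loads from shared memory; nothing is transferred explicitly */
    partial[id] = s;
    return NULL;
}

int main(void)
{
    pthread_t tid[N_WORKERS];

    for (long i = 0; i < N_ELEMS; i++)
        data[i] = 1.0;

    for (long w = 0; w < N_WORKERS; w++)
        pthread_create(&tid[w], NULL, worker, (void *)w);

    double total = 0.0;
    for (long w = 0; w < N_WORKERS; w++) {
        pthread_join(tid[w], NULL);
        total += partial[w];
    }
    printf("sum = %.0f\n", total);  /* expect 1048576 */
    return 0;
}

The same computation under a message-passing paradigm would instead scatter slices of the array to the workers and gather their partial sums explicitly; on a distributed-shared-memory machine both styles can coexist.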
Another key difference between mainstream systems and primitive massively parallel computers is the system's delivered performance on applications that tend to access large amounts of memory randomly. According to John Gustafson of Ames Laboratory, "Supercomputers will be rated by dollars per megabyte per second more than dollars per megaflop . . . by savvy buyers." A system's ability to access memory randomly is called its random-access bandwidth, or RAB. High RAB is needed for such applications as data classification and retrieval, real-time programming, sparse matrix algorithms, adaptive grid problems, and combinatorial optimization (Celmaster 1990).
High-RAB systems can deliver high performance on a wider range of problems than can systems with low RAB and can provide the programmer with more options for developing algorithms. This is one of the strengths of the traditional supercomputers and mainframes and is a key reason why most massively parallel systems do not run certain parallel applications as well as one would expect. The TC2000 is capable of RAB that is comparable to, and indeed higher than, that of mainstream systems. Table 1 compares the TC2000 with several other systems on the portable random-access benchmark (Celmaster undated). For even higher RAB, the Monarch project (Rettberg et al. 1990) at BBN explored advanced switching techniques and designed a very-large-scale MIMD computer with the potential for several orders of magnitude more RAB than modern supercomputers.
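The following C sketch is not the portable random-access benchmark cited above, only a generic illustration of what RAB measures: pseudo-random updates scattered across a table far too large to cache, so delivered performance is limited by the memory system and interconnect rather than by the floating-point units. The table size and update count are arbitrary assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TABLE_WORDS (1UL << 23)     /* 8M words: tens of megabytes, far larger than any cache */
#define N_UPDATES   (1UL << 24)

int main(void)
{
    unsigned long *table = malloc(TABLE_WORDS * sizeof *table);
    if (table == NULL)
        return 1;
    for (unsigned long i = 0; i < TABLE_WORDS; i++)
        table[i] = i;

    unsigned long long x = 88172645463325252ULL;       /* xorshift64 pseudo-random state */
    clock_t t0 = clock();
    for (unsigned long i = 0; i < N_UPDATES; i++) {
        x ^= x << 13; x ^= x >> 7; x ^= x << 17;       /* next pseudo-random value */
        table[x & (TABLE_WORDS - 1)] ^= (unsigned long)x;  /* scattered read-modify-write */
    }
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
    printf("%.1f million random updates per second\n", N_UPDATES / secs / 1e6);

    free(table);
    return 0;
}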
Lastly, BBN's experience using early Butterfly systems in real-time simulation and communications applications indicated that special capabilities were required for these areas. The real-time model places very demanding constraints on system latencies and performance and requires software and hardware beyond what is provided by typical non-real-time systems. These capabilities include a low-overhead real-time executive; low-latency access to shared memory; hardware support such as timers and globally synchronized real-time clocks; and support for the Ada programming language.
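As a rough illustration of the fixed-frame-rate discipline that real-time simulation imposes (and not of BBN's own executive or of pSOS+m), the sketch below shows a portable POSIX C loop that advances an absolute deadline each frame and counts overruns; the 10-ms frame length is an assumption.

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

#define FRAME_NS 10000000L      /* assumed 10-ms frame, i.e., a 100-Hz simulation rate */
#define N_FRAMES 1000

static void do_frame_work(void)
{
    /* model update, device I/O, etc. would run here */
}

int main(void)
{
    struct timespec next, now;
    long overruns = 0;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int frame = 0; frame < N_FRAMES; frame++) {
        do_frame_work();

        /* advance the absolute deadline by exactly one frame */
        next.tv_nsec += FRAME_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_sec  += 1;
            next.tv_nsec -= 1000000000L;
        }

        clock_gettime(CLOCK_MONOTONIC, &now);
        if (now.tv_sec > next.tv_sec ||
            (now.tv_sec == next.tv_sec && now.tv_nsec > next.tv_nsec))
            overruns++;                                   /* missed this frame's deadline */
        else
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    printf("%ld of %d frames overran their deadline\n", overruns, N_FRAMES);
    return 0;
}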
Challenges and Directions for the Future
The challenge facing the vendors of massively parallel processors in the 1990s is to develop systems that provide high levels of performance without sacrificing programmer productivity. When comparing the next generation of parallel systems, it is the interconnect and memory architecture and the software that will distinguish one system from another. All of these vendors will have access to the same microprocessors, the same semiconductor technology, the same memory chips, and comparable packaging technologies. The ability to build scalable, massively parallel systems that are readily programmable will determine the successful systems in the future.
Most vendors have realized this and are working to enhance their products accordingly, as shown in Figure 6. The traditional bus-based and crossbar-architecture systems have always held the lead in application versatility and programmer productivity but do not scale to massively parallel levels. Many of these vendors, such as Cray, have announced plans to develop systems that scale beyond their current tens of processors. At the same time, vendors of data-parallel and private-memory MIMD systems are working to make their systems more versatile by improving interconnect latency, adding global routing or simulated shared memory, and adding more UNIX facilities to their node kernels. All of this development is moving toward a massively parallel UNIX system with low-latency distributed shared memory.
As in previous transitions in the computer industry, the older technology will not disappear but will continue to coexist with the new technology. In particular, massively parallel systems will coexist with conventional supercomputers, as illustrated in Figure 7. In this "model supercomputer center," a variety of resources are interconnected via a high-speed network or switch and are available to users. The traditional vector supercomputer will provide compute services for those problems that are vectorizable and primarily serial and will continue to run some older codes. The special-purpose application accelerators provide very high performance on select problems that are executed with sufficient frequency to justify the development cost of the application and the cost of the hardware. The general-purpose parallel system will off-load the vector supercomputer of nonvector codes and will provide a production environment for most new parallel applications. It will also serve as a testbed for parallel algorithm research and development.
Summary
In the 1980s, parallel processing moved into the mainstream of computing technologies. The rapid increases in "killer micro" performance will enable massively parallel systems to meet the needs of high-performance users in the 1990s. However, in order to become a mainstream technology, massively parallel systems must close the programmer productivity gap that exists between them and small-scale parallel systems. The keys to closing this gap are standard languages with parallel extensions, native operating systems (such as UNIX), a powerful software development tool set, and an architecture that supports multiple programming paradigms.
Appendix: BBN Parallel-Processing Systems
A summary of the BBN parallel-processing systems appears in Table 2. The Pluribus was BBN's first parallel-processing system. Developed in the early 1970s, with initial customer shipments in 1975, it consisted of up to 14 Lockheed Sue minicomputers interconnected via a bus-based distributed crossbar switch and supported shared global memory. It was used primarily in communications applications, many of which are still operational today.
The initial member of the Butterfly family of systems, the Butterfly I, was developed beginning in 1977. An outgrowth of the Voice Funnel program, a packetized-voice satellite communications system funded by the Defense Advanced Research Projects Agency (DARPA), the Butterfly I computer was designed to scale to 256 Motorola, Inc., M68000 processors (a system of this size was built in 1985) without giving up the advantages of shared memory. The key to achieving this scalability was a multistage interconnection network called the Butterfly switch. BBN developed the proprietary Chrysalis operating system, the Gist™ performance analyzer, and the Uniform System™ parallel programming library for this system. Butterfly I machines were used in a wide variety of research projects and are also used as Internet gateways, running communications code developed at BBN.
In the early 1980s, DARPA also funded BBN to explore very-large-scale parallel-processing systems. The Monarch project explored the design of a 65,536-processor shared-memory MIMD system using a multistage interconnection network similar to the Butterfly switch. The high-speed switch was implemented and tested using very-large-scale integration based on complementary metal oxide semiconductor technology, and a system simulator was constructed to explore the
performance of the system on real problems. Some of the concepts and technologies have already been incorporated into Butterfly products, and more will be used in future generations.
The Butterfly Plus system was developed to provide improved processor performance over the Butterfly I system by incorporating Motorola's MC68020 processor and the MC68881 (later, the MC68882) floating-point coprocessor. Since this system used the same Butterfly switch, Butterfly I systems could be easily upgraded to Butterfly Plus performance.
The Butterfly Plus processor boards also included more memory and a memory-management unit, which were key to the development of the Butterfly GP1000 system. The GP1000 used the same processors as the Butterfly Plus but ran the Mach 1000 operating system, the world's first massively parallel implementation of UNIX. Mach 1000 was based on the Mach operating system developed at Carnegie Mellon University but has been extended and enhanced by BBN. The TotalView™ debugger was another significant development that was first released on the GP1000.
The TC2000 system, BBN's newest and most powerful computer, was designed to provide an order of magnitude greater performance than previous Butterfly systems. The world's first massively parallel RISC system, the TC2000 employs the Motorola M88000 microprocessor and a new-generation Butterfly switch that has ten times the capacity of the previous generation. The TC2000 runs the nX operating system, which was derived from the GP1000's Mach 1000 operating system. The TC2000 also runs pSOS+m™, a real-time executive.
The goal of the Coral project is to develop BBN's next-generation parallel system for initial delivery in 1992. The Coral system is targeted at providing up to 200 GFLOPS peak performance using 2000 processors while retaining the shared-memory architecture and advanced software environment of the TC2000 system, with which Coral will be software compatible.
References
BBN, "Inside the TC2000," BBN Advanced Computers Inc. report, Cambridge, Massachusetts (February 1990).
W. Celmaster, "Random-Access Bandwidth and Grid-Based Algorithms on Massively Parallel Computers," BBN Advanced Computers Inc. report, Cambridge, Massachusetts (September 5, 1990).
W. Celmaster, "Random-Access Bandwidth Requirements of Point Parallelism in Grid-Based Problems," submitted to the 5th SIAM Conference on Parallel Processing for Scientific Computing , Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania.
R. D. Rettberg, W. R. Crowther, P. P. Carvey, and R. S. Tomlinson, "The Monarch Parallel Processor Hardware Design," Computer 23(4), 18–30 (1990).