
Goals for Frontiers of Supercomputing II and Review of Events since 1983

Kermith Speierman

At the time of the first Frontiers of Supercomputing conference in 1983, Kermith H. "K." Speierman was the chief scientist at the National Security Agency (NSA), a position he held until 1990. He has been a champion of computing at all levels, especially of supercomputing and parallel processing. He played a major role in the last conference. It was largely through his efforts that NSA developed its parallel processing capabilities and established the Supercomputing Research Center.

I would like to review with you the summary of the last Frontiers of Supercomputing conference in 1983. Then I would like to present a few representative, significant achievements in high-performance computing over the past seven years. I have talked with some of you about these achievements, and I appreciate your help. Last, I'd like to talk about the goals of this conference and share with you some questions that I think are useful for us to consider during our discussions.

1983 Conference Summary

In August of 1983, at the previous conference, we recognized that there is a compelling need for more and faster supercomputers. The Japanese, in fact, have shown that they have a national goal in supercomputation and can achieve effective cooperation between government, industry, and academia in their country. I think the Japanese shocked us a little in 1983, and we were a bit complacent then. However, I believe we are now guided more by our needs, our capabilities, and the idea of having a consistent, balanced program with other sciences and industry. So I think we've reached a level of maturity that is considerably greater than we had in 1983. I think U.S. vendors are now beginning, as a result of events that have gone on during this period, to be very serious about massively parallel systems, or what we now tend to call scalable parallel systems.

The only evident approach to achieving large increases over current supercomputer speeds is through massively parallel systems. There are some exciting ideas in other areas, such as optics, but I think for this next decade we do have to look very hard at the scalable parallel systems.

We don't know how to use parallel architectures very well. The step from a few processors to large numbers is a difficult problem. It is still a challenge, but we now know a great deal more about using parallel processors on real problems. It is still very true that much work is required on algorithms, languages, and software to facilitate the effective use of parallel architectures.

It is also still true that the vendors need a larger market for supercomputers to sustain an accelerated development program. I think that may be a more difficult problem now than it was in 1983 because the cost of developing supercomputers has grown considerably. However, the world market is really not that big; it is approximately a $1 billion-per-year market. In short, the revenue base is still small.

Potential supercomputer applications may be far greater than current usage indicates. In fact, I think that the number of potential applications is enormous and continues to grow.

U.S. computer companies have a serious problem buying fast, bipolar memory chips in the U.S. We have to go out of the country for a lot of that technology. I think our companies have tried to develop U.S. sources more recently, and there has been some success in that. Right now, there is considerable interest in fast bipolar SRAMs. It will be interesting to see if we can meet that need in the U.S.

Packaging is a major part of the design effort. As speed increases, as you all know, packaging becomes a much tougher problem, almost nonlinearly. That is still a very difficult problem.

Supercomputers are systems consisting of algorithms, languages, software, architecture, peripherals, and devices. They should be developed as systems that recognize the critical interaction of all the parts. You have to deal with a whole system if you're going to build something that's usable.



Collaboration among government, industry, and academia on supercomputer matters is essential to meet U.S. needs. The type of collaboration that we have is important. We need to find collaboration that is right for the U.S. and takes advantage of the institutions and the work patterns that we are most comfortable with. As suggested by Senator Jeff Bingaman in his presentation during this session, the U.S. needs national supercomputer goals and a strategic plan to reach those goals.

Events in Supercomputing since 1983

Now I'd like to talk about representative events that I believe have become significant in supercomputing since 1983. After the 1983 conference, the National Security Agency (NSA) went to the Institute for Defense Analyses (IDA) and said that they would like to establish a division of IDA to do research in parallel processing for NSA. We established the Supercomputing Research Center (SRC), and I think this was an important step.

Meanwhile, NSF established supercomputing centers, which provided increased supercomputer access to researchers across the country. There were other centers established in a number of places. For instance, we have a Parallel Processing Science and Technology Center that was set up by NSF at Rice University with Caltech and Argonne National Laboratory. NSF now has computational science and engineering programs that are extremely important in computational math, engineering, biology, and chemistry, and they really do apply this new paradigm in which we use computational science in a very fundamental way on basic problems in those areas.

Another development since 1983 is scientific visualization, which has become a really important element in supercomputing.

The startup of Engineering Technology Associates Systems (ETA) was announced by Bill Norris in his banquet speech at the 1983 conference. Unfortunately, ETA was disbanded as an organization in 1989.

In 1983, Denelcor was a young organization that was pursuing an interesting parallel processing structure. Denelcor went out of business, but their ideas live on at Tera Computer Company, with Burton Smith behind them.

Cray Research, Inc., has trifurcated since 1983. One of the resulting companies, Supercomputer Systems, Inc., is receiving significant technological and financial support from IBM, which is a very positive direction.



At this time, the R&D costs for a new supercomputer chasing very fast clock times are $200 million to $300 million. I'm told that's about 10 times what it was 10 years ago.

Japan is certainly a major producer of supercomputers now, but they haven't run away with the market. We have a federal High Performance Computing Initiative that was published by the Office of Science and Technology Policy in 1989, and it is a result of the excellent interagency cooperation that we have. It is a good plan and has goals that I hope will serve us well.

The Defense Advanced Research Projects Agency's Strategic Computing Program began in 1983. It has continued on and made significant contributions to high-performance computing.

Massively parallel machines are now commercially available. I hope that these machines will soon be a financial success as well.

I believe the U.S. does have a clear lead in parallel processing, and it's our job to take advantage of that and capitalize on it. There are a significant number of applications that have been parallelized, and as that set of applications grows, we can be very encouraged.

We now have compilers that produce parallel code for a number of different machines and from a number of different languages. The researchers tell me that we have a lot more to do, but there is good progress here. In the research community there are some new, exciting ideas in parallel processing and computational models that should be very important to us.

We do have a much better understanding now of interconnection nets and scaling. If you remember back seven years, the problem of interconnecting all these processors was of great concern to all of us.

There has been a dramatic improvement in microprocessor performance, primarily, I think, because of RISC architectures and advances in very-large-scale integration microelectronics. We have high-performance workstations now that are as powerful as CRAY-1s. We have special accelerator boards in these workstations that perform particular functions at very high rates. We have minisupercomputers that are both vector and scalable parallel machines. And UNIX is certainly becoming a standard for high-performance computing.

We are still "living on silicon." As a result, the supercomputers that we are going to see next are going to be very hot. Some of them may require a megawatt of electrical input, which will be a problem.



I think there is a flicker of renewed interest in superconducting electronics, which promises much smaller delay-power products; that would help a lot with the heat problem and give us faster switching speeds.
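As a rough sketch of why the delay-power product matters, assume a machine with $N$ active switching devices, each switching at clock frequency $f$, and let the delay-power product $E_{\mathrm{sw}} = P_{\mathrm{gate}}\, t_d$ be the energy dissipated per switching event (these symbols are illustrative assumptions, not figures from the talk). The total dissipation is then approximately

\[
P_{\mathrm{total}} \;\approx\; N \, f \, E_{\mathrm{sw}} \;=\; N \, f \, P_{\mathrm{gate}} \, t_d .
\]

Under these assumptions, shrinking the delay-power product either cuts the heat at a fixed clock rate or permits a faster clock (a smaller $t_d$) within the same power budget, which is the pair of benefits mentioned above.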

Conference Goals

Underlying our planning for this conference were two primary themes or goals. One was the national reassessment of high-performance computing—that is, how much progress have we made in seven years? The other was to have a better understanding of the limits of high-performance computing. I'd like to preface this portion of the discussion by saying that not all limits are bad. Some limits save our lives. But it is very important to understand limits. By limits, I mean speed of light, switching energy, and so on.

The reassessment process is one, I think, of basically looking at progress and understanding why we had problems, why we did well in some areas, and why we seemed to have more difficulties in others. Systems limits are questions of architectural structures and software. Applications limits are a question of how computer architectures and the organization of the system affect the kinds of algorithms and problems that you can put on those systems. Also, there are financial and business limits, as well as policy limits, that we need to understand.

Questions

Finally, I would like to pose a few questions for us to ponder during this conference. I think we have to address in an analytical way our ability to remain superior in supercomputing. Has our progress been satisfactory? Are we meeting the high-performance computing needs of science, industry, and government? What should be the government's role in high-performance computing?

Do we have a balanced program? Is it consistent? Are there some show-stoppers in it? Is it balanced with other scientific programs that the U.S. has to deal with? Is the program aggressive enough? What benefits will result from this investment in our country?

The Gartner report addresses this last question. What will the benefits be if we implement the federal High Performance Computing Initiative?

Finally, I want to thank all of you for coming to this conference. I know many of you, and I know that you represent the leadership in this business. I hope that we will have a very successful week.

