
14—
WHAT NOW?

Panelists in this session presented a distillation of issues raised during the conference, especially about the government's role in a national high-performance computing initiative and its implementation. David B. Nelson, of the Department of Energy, summarized the conference, and the panel discussed the role of the government as policy maker and leader.

Session Chair

David B. Nelson,
Department of Energy


Conference Summary

David B. Nelson

David B. Nelson is Executive Director of the Office of Energy Research, U.S. Department of Energy (DOE), and concurrently, Director of Scientific Computing. He is also Chairman of the Working Group on High Performance Computing and Communications, an organization of the Federal Coordinating Committee on Science, Engineering, and Technology. His undergraduate studies were completed at Harvard University, where he majored in engineering sciences, and his graduate work was completed at the Courant Institute of Mathematical Sciences at New York University, where he received an M.S. and Ph.D. in mathematics.

Before joining DOE, Dr. Nelson was a research scientist at Oak Ridge National Laboratory, where he worked mainly in theoretical plasma physics as applied to fusion energy and in defense research. He headed the Magneto-Hydrodynamics Theory Group in the Fusion Energy Division.

Introduction

I believe all the discussion at the conference can be organized around the vision of a seamless, heterogeneous, distributed, high-performance computing environment that has emerged during the week and that K. Speierman alluded to in his remarks (see Session 1). The elements in this environment include, first of all, the people—skilled, imaginative users, well trained in a broad spectrum of applications areas. The second ingredient of that environment is low-cost, high-performance, personal workstations and visualization engines. The third element is mass storage and accessible, large knowledge bases. Fourth is heterogeneous high-performance compute engines. Fifth is very fast local, wide-area, and national networks tying all of these elements together. Finally, there is an extensive, friendly, productive, interoperable software environment.

As far as today is concerned, this is clearly a vision. But all of the pieces are present to some extent. In this summary I shall work through each of these elements and summarize those aspects that were raised in the conference, both the pluses and the minuses.

Now, we can't lose sight of what this environment is for. What are we trying to do? The benefits of this environment will be increased economic productivity, improved standard of living, and improved quality of life. This computational environment is an enabling tool that will let us do things that we cannot now do, imagine things that we have not imagined, and create things that have never before existed. This environment will also enable greater national and global security, including better understanding of man's effect on the global environment.

Finally, we should not ignore the intellectual and cultural inspiration that high-performance computing can provide to those striving for enlightenment and understanding. That's a pretty tall order of benefits, but I think it's a realistic one; and during the conference various presenters have discussed aspects of those benefits.

Skilled, Imaginative Users and a Broad Spectrum of Applications

It's estimated that the pool of users trained in high-performance computing has increased a hundredfold since our last meeting in 1983. That's a lot of progress. Also, the use of high-performance computing in government and industry has expanded into many new and important areas since 1983. We were reminded that in 1983, the first high-performance computers for oil-reservoir modeling were just being introduced. We have identified a number of critical grand challenges whose solution will be enabled by near-future advances in high-performance computing.

We see that high-performance computing centers and the educational environment in which they exist are key to user education in computational science and in engineering for industry. You'll notice I've used the word "industry" several times. Unfortunately, the educational pipeline for high-performance computing users is drying up, both through lack of new entrants and through foreign-born people being pulled by advantages and attractions back to their own countries.


One of the points mentioned frequently at this conference, and one point I will be emphasizing, is the importance of broadening the use of this technology into wider industrial applications. That may be one of the most critical challenges ahead of us. Today there are only pockets of high-performance computers in industry.

Finally, the current market for high-performance computing—that user and usage base—is small and increasingly fragmented because of the choices now being made available to potential and actual users of high-performance computing.

Workstations and Visualization Engines

The next element that was discussed was the emergence of low-cost, high-performance personal workstations and visualization engines. This has happened mainly since 1983. Remember that in 1983 most of us were using supercomputers through glass teletypes. There has been quite a change since then.

The rapid growth in microprocessor capability has been a key technology driver for this. Obviously, the rapid fall in microprocessor and memory costs has been a key factor in enabling people to buy these machines.

High-performance workstations allow cooperative computing with compute engines. As was pointed out, they let supercomputers be supercomputers by off-loading smaller jobs. The large and increasing installed base of these workstations, plus the strong productive competition in the industry, is driving hardware and software standards and improvements.

Next, the multiprocessor workstations that are appearing now include several processors and so allow a low-end entry into parallel processing. It was pointed out that there may be a million potential users of these, as compared with perhaps 10,000 to 100,000 users of the very high-end parallel machines. So this is clearly the broader base and therefore the more likely entry point.

Unfortunately, in my opinion, this very attractive, seductive, standalone environment may deflect users away from high-end machines, and it's possible that we will see a repetition on a higher plane of the VAX syndrome of the 1970s. That syndrome caused users to limit their problems to those that could be run on a Digital Equipment Corporation VAX machine; a similar phenomenon could stunt the growth of high-performance computing in the future.


Mass Storage and Accessible Knowledge Bases

Mass storage and accessible, large knowledge bases were largely ignored in this meeting—and I think regrettably so—though they are very important. There was some discussion of this, but not at the terabyte end.

What was pointed out is that mass-storage technology is advancing slowly compared with our data-accumulation and data-processing capabilities. There is also an absence of standards for databases, so interoperability and the human interfaces used to access databases are hit-and-miss.

Finally, because of these varied databases, we need but do not have expert systems and other tools to provide interfaces for us. So this area largely remains a part of the vision and not of the accomplishment.

Heterogeneous High-Performance Computer Engines

What I find amazing, personally, is that it appears that performance on the order of 10^12 floating-point operations per second—a teraflops or a teraops, depending on your culture—is really achievable by 1995 with known technology extrapolation. I can remember when we were first putting together the High Performance Computing Initiative back in the mid-1980s and asking ourselves what a good goal would be. We said that we would paste a really tough one up on the wall and go for a teraflops. Maybe we should have gone for a petaflops. The only way to achieve that goal is by parallel processing. Even today, at the high end, parallel processing is ubiquitous.

There isn't an American-made high-end machine that is not parallel. The emergence of commercially available massively parallel systems based on commodity parts is a key factor in the compute-engine market—another change since 1983. Notice that it is the same commodity parts, roughly, that are driving the workstation evolution as are driving these massively parallel systems.

We are still unsure of the tradeoffs—and there was a lively debate about this at this meeting—between fewer and faster processors versus more and slower processors. Clearly the faster processors are more effective on a per-processor basis. On a system basis, the incentive is less clear. The payoff function is probably application dependent, and we are still searching for it. Fortunately, we have enough commercially available architectures to try out so that this issue is out of the realm of academic discussion and into the realm of practical experiment.
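As a rough, illustrative check on that tradeoff, the sketch below computes how many processors would be needed to reach a peak of 10^12 floating-point operations per second at a few assumed per-processor speeds and an assumed delivered efficiency. The speeds and the efficiency figure are assumptions chosen for illustration, not numbers cited at the conference.

```python
# Illustrative only: how many processors are needed to reach 10^12 flops
# under assumed per-processor speeds and an assumed fraction of peak
# actually delivered on real applications.

TARGET_FLOPS = 1.0e12  # one teraflops

# Assumed per-processor speeds (flops); not vendor figures.
per_processor_speeds = {
    "fast vector processor (1 Gflops)": 1.0e9,
    "mid-range processor (100 Mflops)": 1.0e8,
    "commodity microprocessor (10 Mflops)": 1.0e7,
}

delivered_fraction = 0.5  # assumed efficiency on a real application

for label, peak_speed in per_processor_speeds.items():
    needed = TARGET_FLOPS / (peak_speed * delivered_fraction)
    print(f"{label}: about {needed:,.0f} processors")
```

Even at a gigaflops per processor, thousands of processors are implied, which is why parallelism is the only route to a teraflops under these assumptions.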

Related to that is an uncertain mapping of the various architectures available to us into the applications domain. A part of this meeting was the discussion of those application domains and what the suitable architectures for them might be. Over the next few years I'm sure we'll get a lot more data on this subject.

It was also brought out that it is important to develop balanced systems. You have to have appropriate balancing of processor power, memory size, bandwidth, and I/O rates to have a workable system. By and large, it appears that there was consensus in this conference on what that balance should be. So at least we have some fairly good guidelines.
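One way to make the notion of balance concrete is the classic rule of thumb of roughly one byte of memory and one bit per second of I/O for every operation per second; the sketch below applies those ratios to a teraflops-class machine. The ratios are a commonly quoted heuristic used here only for illustration, not the specific balance agreed on at the conference.

```python
# Back-of-the-envelope balance check using the classic rule of thumb of
# roughly 1 byte of memory and 1 bit/s of I/O per operation/s. These
# ratios are a commonly quoted heuristic, used here only for illustration.

peak_ops_per_s = 1.0e12       # a teraflops-class machine

bytes_of_memory_per_op = 1.0  # assumed ratio
io_bits_per_op = 1.0          # assumed ratio

memory_bytes = peak_ops_per_s * bytes_of_memory_per_op
io_bytes_per_s = peak_ops_per_s * io_bits_per_op / 8

print(f"Memory:        about {memory_bytes / 1e12:.1f} terabytes")
print(f"I/O bandwidth: about {io_bytes_per_s / 1e9:.0f} gigabytes per second")
```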

There was some discussion at this conference of new or emerging technologies—gallium arsenide, Josephson junction, and optical—which may allow further speedups. Unfortunately, as was pointed out, gallium arsenide is struggling, Josephson junction is Japanese, and optical is too new to call.

Fast, Local, Wide-Area, and National Networks

Next, let's turn to networking, which is as important as any other element and ties the other elements together.

Some of the good news is that we are obtaining more standards for I/O channels and networks. We have the ability to build on top of these standards to create things rather quickly. As an example of that, I mention the emergence of the Internet and the future National Research and Education Network (NREN) environment, which is based on standards, notably the Transmission Control Protocol/Internet Protocol (TCP/IP), and on open systems and has already proved its worth.

Unfortunately, as we move up to gigabit speeds, which we know we will require for a balanced overall system, we're going to need new hardware and new protocols. Some of the things that we can do today simply break down logically or electrically when we get up to these speeds. Still, there's going to be a heavy push to achieve gigabit speeds.

Another piece of bad news is that today the network services are almost nonexistent. Such simple things as yellow pages and white pages and so on for national networks just don't exist. This is like the Russian telephone system: if you want to call somebody, you call them to find out what their telephone number is because there are no phone books.

Another issue that was raised is how we can extend the national network into a broader user community. How can we move NREN out quickly from the research and education community into broader industrial usage? Making this transition will require dealing with sticky issues such as intellectual property rights, tariffed services, and interfaces.


Software Environment

An extensive, friendly, productive, interoperable software environment was acknowledged to be the most difficult element to achieve. We do have emerging standards such as UNIX, X Windows, and so on that help us to tie together software products as we develop them.

The large number of workstations, as has previously been the case with personal computers, has been the motivating factor for developing quality, user-friendly interfaces. These are where the new things get tried out. That's partly based on the large, installed base and, therefore, the opportunities for profitable experimentation.

Now, these friendly interfaces can be used and to some extent are used as access to supercomputers, but discussion during this conference showed that we have a way to go on that score. Unfortunately—and this was a main topic during the meeting—we do not have good standards for software portability and interfaces in a heterogeneous computing environment. We're still working with bits and pieces. It was acknowledged here that we have to tie the computing environment together through software if we are to have a productive environment. Finally—this was something that was mentioned over and over again—significant differences exist in architectures, which impede software portability.

Concluding Remarks

First, if we look back to the meeting in 1983, we see that the high-performance computing environment today is much more complex. Before, we could look at individual boxes. Largely because of the reasons that I mentioned, it's now a distributed system. To have an effective environment today, the whole system has to work. All the elements have to be at roughly the same level of performance if there is to be a balanced system. Therefore, the problem has become much more complex and the solution more effective.

To continue high-performance computing advances, it seems clear from the meeting that we need to establish effective mechanisms to coordinate our activities. Any one individual, organization, or company can only work on a piece of the whole environment. To have those pieces come together so that you don't have square plugs for round holes, some coordination is required. How to do that is a sociological, political, and cultural problem. It is at least as difficult, and probably rather more so, than the technical problems.


Next, as I alluded to before, high-performance computing as a business will live or die—and this is a quote from one of the speakers—according to its acceptance by private industry. The market is currently too small, too fragmented, and not growing rapidly enough to remain viable. Increasing the user base is imperative. Each individual and organization should take as a challenge how we can do that. To some extent, we're the apostles. We believe, and we know what can be done, but most of the world does not.

Finally, let me look ahead. In my opinion, there's a clear federal role in high-performance computing. This role includes, but is not limited to, (1) education and training, (2) use of high-performance computing for agency needs, (3) support for R&D, and (4) technology cooperation with industry. This is not transfer; it goes both ways. Let's not be imperialistic.

The federal role in these and other areas will be a strong motivator and enabler to allow us to achieve the vision discussed during this meeting. It was made clear over and over again that the federal presence has been a leader, driver, catalyst, and strong influence on the development of the high-performance computing environment to date. And if this environment is to succeed, the federal presence has to be there in the future.

We cannot forget the challenge from our competitors and the fact that if we do not take on this challenge and succeed with it, there are others who will. The race is won by the fleet, and we need to be fleet.


The High Performance Computing Initiative

Eugene Wong

Eugene Wong is Associate Director for Physical Sciences and Engineering in the Office of Science and Technology Policy. Dr. Wong began his research career in 1955. From 1962 to 1969, he served as Assistant Professor and Professor and, from 1985 to 1989, as Chairman in the Electrical Engineering and Computer Sciences Department of the University of California at Berkeley. When he was confirmed by the U.S. Senate on April 4, 1990, he continued serving in a dual capacity as Professor and Departmental Chairman at the University of California. Dr. Wong received his B.S., M.A., and Ph.D. degrees in electrical engineering from Princeton University. His research interests include stochastic processes and database-management systems.

I would like to devote my presentation to an overview of the High Performance Computing Initiative as I see it. This is a personal view; it's a view that I have acquired over the last six months.

The High Performance Computing Initiative, which many people would like to rename the High Performance Computing and Communications Initiative, is a proposed program of strategic investment in the frontier areas of computing. I think of it as an investment—a long-term investment. If the current proposal is fully funded, we will have doubled the original $500 million appropriation over the next four to five years.

There is a fairly long history behind the proposal. The first time the concept of high-performance computing—high-performance computing as distinct from supercomputing—was mentioned was probably in the White House Science Council Report of 1985. At that time the report recommended that a study be undertaken to initiate a program in this area.

A strategy in research and development for high-performance computing was published in November 1987 under the auspices of the Office of Science and Technology Policy (OSTP) and FCCSET. FCCSET, as some of you know, stands for Federal Coordinating Council on Science, Engineering and Technology, and the actual work of preparing the program and the plan was done by a subcommittee of that council.

In 1989, shortly after Allan Bromley assumed the office of Director of OSTP, a plan was published by OSTP that pretty much spelled out in detail both the budget and the research program. It is still the road map that is being followed by the program today.

I think the goal of the overall program is to preserve U.S. supremacy in this vital area and, over the next few years, to accelerate the program so as to widen the lead we have. The second goal, perhaps equally important, is to effect a timely transfer of both the benefits and the responsibilities of the program to the private sector.

"High performance" in this context really means advanced, cutting edge, or frontiers. It transcends supercomputing. It means pushing the technology to its limits in speed, capacity, reliability, and usability.

There are four major components in the program:

• systems—both hardware and system software;

• application software—development of environment, algorithms, and tools;

• networking—the establishment and improvement of a major high-speed, digital national network for research and education; and

• human resources and basic research.

Now, what is the basic motivation for the program, aside from the obvious one that everybody wants more money? I think the basic motivation is the following. In information technology, we have probably the fastest-growing, most significant, and most influential technology in our economy. It has been estimated that, taken as a whole, electronics, communications, and computers now impact nearly two-thirds of the gross national product.

Advanced computing has always been a major driver of that technology, even though when it comes to dollars, the business may be only a very small part of it. Somebody mentioned the figure of $1 billion. Well, that's $1 billion out of $500 billion.

It is knowledge- and innovation-intensive. So if we want as a nation to add value, this is where we want to do it. It is also leveraging success. This is an area where, clearly, we have the lead, we've always had the lead, and we want to maintain that lead.

In my opinion, a successful strategy has to be based on success and not merely on repair of flaws. Clearly, this is our best chance of success.

Why have a federal role? This is the theme of the panel. You'll hear more about this. But to me, first of all, this is a leading-edge business. I think several presenters at this conference have already mentioned that for such a market, the return is inadequate to support the R&D effort needed to accelerate it and to move ahead. That's always the case for a leading-edge market. If we really want to accelerate it rather than let it take its natural time, there has to be a federal role. The returns may be very great in the long term. The problem is that the return accrues to society at large and not necessarily to the people who do the job. I think the public good justifies a federal role here.

Networking, which is an infrastructure issue and therefore falls within the government's purview, is a prominent part of the program. The decline in university enrollment in computer science needs to be reversed, and that calls for government effort—leadership.

Finally, and most importantly, wider use and access to the technology and its benefits require a federal role. It is the business of the federal government to promote applications to education and other social use of high-performance computing.

As a strategy, I think it's not enough to keep the leading edge moving forward. It's not enough to supply enough money to do the R&D. The market has to follow quickly. In order for that market to follow quickly, we have to ensure that there is access to and widespread use of the technology.

There are eight participating federal agencies in the program. Four are lead agencies: the Defense Advanced Research Projects Agency, the Department of Energy, NSF, and NASA, all of whom are represented here in this session. The National Institute of Standards and Technology, the National Oceanic and Atmospheric Administration, the National Institutes of Health, and the EPA will also be major participants in the program.

So, where are we in terms of the effort to implement the program? The plan was developed under FCCSET, and I think the group that did the plan did a wonderful job. I'm not going to mention names for fear of leaving people out, but I think most of you know who these people are.

That committee under FCCSET, under the new organization, has been working as a subcommittee of the Committee on Physical Sciences, Engineering, and Mathematics, under the leadership of Erich Bloch (see Session 1) and Charlie Herzfeld. Over the last few months, we've undertaken a complete review of the program, and we've gotten OMB's agreement to look at the program as an integral program, which is rare. For example, there are only three such programs that OMB has agreed to view as a whole. The other two are global change and education, which are clearly much more politically visible programs. For a technology program to be treated as a national issue is rare, and I think we have succeeded.

We are targeting fiscal year 1992 as the first year of the five-year budget. This is probably not the easiest of years to try to launch a program that requires money, but no year is good. I think it's a test of the vitality of the program to see how far we get with it.

The President's Council of Advisors on Science and Technology (PCAST) has provided input. I'm in the process of assembling an ad hoc PCAST panel, which has already met once. It will provide important private-sector input to the program. In that regard, someone mentioned that we need to make supercomputing, high-performance computing, a national issue. I think it has succeeded.

The support is really universal. You'll hear later that, in fact, committees in the Senate are vying to support a program. I think the reason why it succeeded is that it's both responsive to short-term national needs and visionary, and I think both are required for a program to succeed as a national issue. It's responsive to national needs on many grounds—national security, economy, education, and global change. It speaks to all of these issues in a deep and natural way.

The grand challenges that have been proposed really have the potential of touching every citizen in the country, and I think that's what gives it the importance that it deserves.

The most visionary part of the program is the goal of a national high-speed network in 20 years, and I think most people in Washington are convinced that it's going to come. It will be important, and it will be beneficial. The question is how it is going to come about. This is the program that we will spearhead.

The program has spectacular and ambitious goals, and in fact the progress is no less spectacular. At the time it was conceived as a goal, a machine with a capacity of 10^12 floating-point operations per second was considered ambitious. But now it doesn't look so ambitious. The program has a universal appeal in its implications for education, global change, social issues, and a wide range of applications. It really has the potential of touching everyone. Thus, it's a program that I'm excited about. I'm fortunate to arrive in Washington just in time to help to get it started, and I'm hopeful that it will get started this year.

Let me move now to the theme of this session: What Now? Given the important positions that the presenters in this session occupy, their thoughts on the theme of the federal role in high-performance computing will be most valuable. In the course of informal exchanges at this conference, I have already heard a large number of suggestions as to what that role might be. Let me list some of these before we hear from our presenters.

What is the appropriate government role? I think the most popular suggestion is that it be a good customer. Above all, it's the customer that pays the bills. It should also be a supporter of innovation in a variety of ways—through purchases, through support of R&D, and through encouragement. It needs to be a promoter of technology, not only in development but also in its application. It should be a wise regulator that regulates with a deft and light touch. The government must be a producer of public good in the area of national security, in education, and in myriad other public sectors. It should also be an investor, a patient investor for the long term, given the current unfriendly economic environment. And last, above all, it should be a leader with vision and with sensitivity to the public need. These are some of the roles that I have heard suggested at this conference.


Government Bodies As Investors

Barry Boehm

Barry W. Boehm is currently serving as Director of the Defense Advanced Research Projects Agency's (DARPA's) Software and Intelligent Systems Technology Office, the U.S. Government's largest software research organization. He was previously Director of DARPA's Information Science and Technology Office, which also included DARPA's research programs in high-performance computing and communications.

I'd like to begin by thanking the participants at this conference for reorienting my thinking about high-performance computing (HPC). Two years ago, to the extent that I thought of the HPC community at all, I tended to think of it as sort of an interesting, exotic, lost tribe of nanosecond worshippers.

Today, thanks to a number of you, I really do feel that it is one of the most critical technology areas that we have for national defense and economic security. Particularly, I'd like to thank Steve Squires (Session 4 Chair) and a lot of the program managers at the Defense Advanced Research Projects Agency (DARPA); Dave Nelson and the people in the Federal Coordinating Committee on Science, Engineering, and Technology (FCCSET); Gene Wong for being the right person at the right time to move the High Performance Computing Initiative along; and finally all of you who have contributed to the success of this meeting. You've really given me a lot better perspective on what the community is like and what its needs and concerns are, and you've been a very stimulating group of people to interact with.


In relation to Gene's list, set forth in the foregoing presentation, I am going to talk about government bodies as investors, give DARPA as an example, and then point to a particular HPC opportunity that I think we have as investors in this initiative in the area of software assets.

If you look at the government as an investor, it doesn't look that much different than Cray Research, Inc., or Thinking Machines Corporation or IBM or Boeing or the like. It has a limited supply of funds, it wants to get long-range benefits, and it tries to come up with a good investment strategy to do that.

Now, the benefits tend to be different. In DARPA's case, it's effective national defense capabilities; for a commercial company, it's total corporate value in the stock market, future profit flows, or the like.

The way we try to do this at DARPA is very interactive: we work a lot with Department of Defense (DoD) users and operators and with the aerospace industry, trying to figure out the most important things that DoD is going to need in the future, playing those off against what technology is likely to supply, and evaluating these in terms of their relative cost-benefit relationships. Out of all of that comes an R&D investment strategy. And I think this is the right way to look at the government as an investor.

The resulting DARPA investment strategy tends to include things like HPC capabilities, not buggy whips and vacuum tubes. But that does not mean we're doing this to create industrial policy. We're doing this to get the best defense capability for the country that we can.

The particular way we do this within DARPA is that we have a set of investment criteria that we've come up with and use for each new proposed program that comes along. The criteria are a little bit different if you're doing basic research than if you're doing technology applications, but these tend to be common to pretty much everything that we do.

First, there needs to be a significant long-range DoD benefit, generally involving a paradigm shift. DoD ownership costs need to be minimal. Particularly with the defense budget going down, it's important that these things not become a millstone around your neck. The incentive to create things that are commercializable, so that the support costs are amortized across a bigger base, is very important.

Zero DARPA ownership costs: we do best when we get in, hand something off to somebody else, and get on to the next opportunity that's there. That doesn't mean that there's no risk in the activity. Also, if Cray is already doing it well, if IBM is doing it, if the aerospace industry is doing it, then there's no reason for DARPA to start up something redundant.


A good many of DARPA's research and development criteria, such as good people, good new ideas, critical mass, and the like, are self-explanatory. And if you look at a lot of the things that DARPA has done in the past, like ARPANET, interactive graphics, and Berkeley UNIX, you see that the projects tend to fit these criteria reasonably well.

So let me talk about one particular investment opportunity that I think we all have, which came up often during the conference here. This is the HPC software problem. I'm exaggerating it a bit for effect here, but we have on the order of 400 live HPC projects at any one time, and I would say there's at least 4000, or maybe 8000 or 12,000 ad hoc debuggers that people build to get their work done. And then Project 4001 comes along and says, "How come there's no debugger? I'd better build one." I think we can do better than that. I think there's a tremendous amount of capability that we can accumulate and capitalize on and invest in.

There are a lot of software libraries, both in terms of technology and experience. NASA has COSMIC, NSF has the supercomputing centers, DoD has the Army Rapid repository, DARPA is building a STARS software repository capability, and so on. There's networking technology, access-control technology, file and database technology, and the like, which could support aggregating these libraries.

The hardware vendors have user communities that can accumulate software assets. The third-party software vendor capabilities are really waiting for somebody to aggregate the market so that it looks big enough that they can enter.

Application communities build a lot of potentially reasonable software. The research community builds a tremendous amount just in the process of creating both machines, like Intel's iWarp, and systems, like Nectar at Carnegie Mellon University; and the applications that the research people do, as Gregory McRae of Carnegie Mellon will attest, create a lot of good software.

So what kind of a capability could we produce? Let's look at it from a user's standpoint.

The users ought to have at their workstations a capability to mouse and window their way around a national distributed set of assets and not have to worry about where the assets are located or where the menus are located. All of that should be transparent so that users can get access to things that the FCCSET HPC process invests in directly, get access to various user groups, and get access to software vendor libraries.

There are certain things that they, and nobody else, can get access to. If they're with the Boeing Company, they can get the Boeing airframe software, and if they're in some DoD group that's working low observables, they can get access to that. But not everybody can get access to that.

As you select one of these categories of software you're interested in, you tier down the menu and decide that you want a debugging tool, and then you go and look at what's available, what it runs on, what kind of capabilities it has, etc.
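As a purely hypothetical sketch of what might sit behind those menus, the fragment below shows one possible shape for asset metadata and a query that filters by category, platform, and access rights. Every name, field, and entry is invented for illustration; no existing repository is being described.

```python
# Hypothetical sketch of a national software-asset catalog entry and a
# query over it. All names, fields, and entries are invented.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    category: str          # e.g., "debugger", "solver", "visualization"
    platforms: tuple       # machines the asset is known to run on
    access_group: str      # e.g., "public", "dod", "boeing-internal"

CATALOG = [
    Asset("pardbg", "debugger", ("hypothetical-mpp", "unix-workstation"), "public"),
    Asset("meshtool", "solver", ("unix-workstation",), "public"),
    Asset("airframe-kit", "solver", ("hypothetical-mpp",), "boeing-internal"),
]

def browse(category, platform, user_groups):
    """Return the assets in a category that run on the given platform and
    that the requesting user is allowed to see."""
    return [a for a in CATALOG
            if a.category == category
            and platform in a.platforms
            and a.access_group in user_groups]

# A public user at a workstation looking for a debugging tool:
for asset in browse("debugger", "unix-workstation", {"public"}):
    print(asset.name, asset.platforms)
```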

Behind that display are a lot of nontrivial but, I believe, workable issues. Stimulating high-quality software asset creation is a nontrivial job, as anybody who has tried to do it knows. I've tried it, and it's a challenge.

An equally hard challenge is screening out low-quality assets—sort of a software-pollution-control problem. Another issue is intellectual property rights and licensing issues. How do people make money by putting their stuff on this network and letting people use it?

Yet another issue is warranties. What if the software crashes right in the middle of some life- or company- or national-critical activity?

Access-control policies are additional challenges. Who is going to access the various valuable assets? What does it mean to be "a member of the DoD community," "an American company," or things like that? How do you devolve control to your graduate students or subcontractors?

Distributed-asset management is, again, a nontrivial job. You can go down a list of additional factors. Dave Nelson mentioned such things as interface standards during his presentation in this session, so I won't cover that ground again except to reinforce his point that these are very important to good software reuse. But I think that all of these issues are workable and that the asset base benefits are really worth the effort. Right now one of the big entry barriers for people using HPC is an insufficient software asset base. If we lower the entry barriers, then we also get into a virtuous circle, rather than a vicious circle, in that we increase the supply of asset producers and pump up the system.

The real user's concern is reducing the calendar time to solution. Having the software asset base available will decrease the calendar time to solution, as well as increase application productivity, quality, and performance.

A downstream research challenge is the analog to spreadsheets and fourth-generation languages for high-performance applications. These systems would allow you to say, "I want to solve this particular structural dynamic problem," and the system goes off and figures out what kind of mesh sizes you need, what kind of integration routine you should use, etc. Then it would proceed to run your application and interactively present you with the results.
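A hypothetical sketch of that kind of declarative interface appears below: the user states the problem and the desired accuracy, and the system chooses the numerical details. The selection rules are toy placeholders meant only to convey the idea, not real solver-selection logic from any existing system.

```python
# Hypothetical sketch of a declarative "state the problem, let the system
# pick the numerics" interface. The rules below are toy placeholders.

def plan_run(problem):
    """Choose discretization and integrator from a declarative problem spec."""
    plan = {}
    # Toy heuristic: a finer mesh when higher accuracy is requested.
    plan["mesh_spacing"] = 0.01 if problem.get("accuracy") == "high" else 0.1
    # Toy heuristic: stiff problems get an implicit time integrator.
    plan["integrator"] = "implicit" if problem.get("stiff") else "explicit"
    plan["resource"] = "whatever machine the scheduler selects"
    return plan

spec = {
    "kind": "structural dynamics",
    "geometry": "bridge-span.cad",   # placeholder file name
    "accuracy": "high",
    "stiff": True,
}

print(plan_run(spec))
```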


We at DARPA have been interacting quite a bit with the various people in the Department of Energy, NASA, and NSF and are going to try to come up with a system like that as part of the High Performance Computing Initiative. I would be interested in hearing about the reactions of other investigators to such a research program.


Realizing the Goals of the HPCC Initiative:
Changes Needed

Charles Brownstein

Charles N. Brownstein is the Acting Assistant Director of NSF and is a member of the Executive Directorate for Computer and Information Science and Engineering (CISE) at NSF. Dr. Brownstein chairs the interagency Federal Networking Council, which oversees federal management of the U.S. Internet for research and education. He participates in the Federal Coordinating Committee on Science, Engineering and Technology, which coordinates federal activities in high-performance computing, research, and educational networking. He has served as a Regent of the National Library of Medicine and recently participated in the National Critical Technologies panel. Before the creation of CISE in 1986, Dr. Brownstein directed the Division of Information Science and Technology. He also directed research programs on information technology and telecommunications policy at NSF from 1975 to 1983. He came to NSF from Lehigh University, Bethlehem, Pennsylvania, where he taught and conducted research on computer and telecommunications policy and information technology applications.

The President's FY 1992 High Performance Computing and Communications Initiative (HPCC Initiative)—also known as the High Performance Computing Initiative (HPCI)—may signal a new era in U.S. science and technology policy. It is the first major technology initiative of the 1990s. It integrates technology research with goals of improving scientific research productivity, expanding educational opportunities, and creating new kinds of national "information infrastructure."

In the post-Cold War and post-Gulf War environment for R&D, we're going to need a precompetitive, growth-producing, high-technology, highly leveraged, educationally intensive, strategic program. The HPCC Initiative will be a prime example. This paper explores the changes that had to occur to get the initiative proposed and the changes which are needed to realize its goals. It is focused on the issue of the human resource base for computing research.

The HPCC program, as proposed, will combine the skills, resources, and missions of key agencies of the government with those of industry and leading research laboratories. The agency roles are spelled out in the report "Grand Challenges: High Performance Computing and Communications," which accompanied the President's budget request to Congress.

On its surface, the HPCC Initiative seeks to create faster scientific computers and a more supple high-performance computer communications network. But the deep goals deal more with creating actual resources for research and national economic competitiveness and security. One essential part is pure investment: education and human resources are an absolutely critical part of both the HPCC Initiative and the future of our national industrial position in computing.

HPCC became a budget proposal for FY 1992 because the new leaders of the President's Office of Science & Technology Policy (OSTP) had the vision to create an innovative federal R&D effort. The President's Science Advisor, Dr. Allan Bromley, came in, signed on, and picked up an activity that had been under consideration for five years. The plan represents the efforts of many, many years of work from people in government, industry, and user communities. Dr. Bromley's action elevated the HPCC program to a matter of national priority. Gene Wong (see his presentation earlier in this session) joined OSTP and helped refine the program, translating between the language of the administration and the language of the scientific community.

Industry participation is a central feature of HPCC. In the period of planning the program, industry acknowledged the fact that there's a national crisis in computing R&D, education, and human resources. Research has traditionally been supported in the U.S. as a partnership among the government, universities, laboratories, and industry. Today, it's possible to get money to do something in education from almost all of the major companies. That's a good sign.


Computing as an R&D field has some obvious human-resource deficiencies now and will have some tremendous deficiencies in the labor pool of the future. There are fewer kids entering college today with the intent of majoring in natural sciences and engineering than was the case at any time in the past 25 years. Entry into college with the goal of majoring in natural sciences and engineering is the best predictor of completing an undergraduate degree in the field. Very few people transfer into these fields; a lot of people transfer out. The net flow is out into the business schools and into the humanities.

The composition of the labor pool is changing. The simple fact is that there will be greater proportions of students whose heritages have traditionally not led them into natural sciences and engineering. There are no compelling reasons to suggest that significant numbers of people from that emerging labor pool can't be trained. A cultural shift is needed, and intervention is needed to promote that cultural shift.

The resource base for education is in local and state values and revenues; the ability for the federal government to intervene effectively is really pretty small. Moreover, the K–12 curriculum in natural sciences and engineering, with respect to computational science, is marginal. The problem gets worse the higher up the educational system that you go. Teaching is too often given second-class status in the reward structure at the country's most prestigious universities. So we have a daunting problem, comparable to any of the grand challenges that have been talked about in the HPCC Initiative.

One place to start is with teachers. There's an absolute hunger for using computers in education. Parts of the NSF supercomputer center budgets are devoted to training undergraduate and graduate students, and the response has been encouraging. One recent high school student participant in the summer program at the San Diego Center placed first in the national Westinghouse Science Talent Search.

The Washington bureaucracy understands that dealing with education and human resources is a critical part of the effort. Efforts undertaken by the Federal Coordinating Committee on Science, Engineering, and Technology have produced an HPCC Initiative with a substantial education component and, separately, an Education and Human Resources Initiative capable of working synergistically in areas like networking.

One group that needs to get much more involved is our leading "computing institutions." We have, it seems, many isolated examples of "single-processor" activities in education and human resources. Motives ranging from altruism to public relations drive places like the NSF and NASA Centers, the Department of Energy (DOE) labs, and other computational centers around the country to involve a broad range of people in educational activities. That includes everything from bringing in a few students to running national programs like Superquest. We need to combine these efforts.

A national Superquest program might be created to educate the country about the importance of high-performance computing. Cornell University and IBM picked up the Superquest idea and used it in 1990. They involved students from all over the country. One of the winners was Thomas Jefferson High School, in Virginia; Governor John Sununu, whose son attends school there, spoke at the ceremonies announcing the victory.

We need to run interagency programs that put together a national Superquest program with the assets of DOE, NASA, NSF, and the state centers and use it to pull a lot of kids into the process of scientific computing. They'll never escape once they get in, and they will pull a lot of people such as their parents and teachers into the process.

We also need to find ways to involve local officials. School systems in the country have daunting problems. They have been asked to do a lot of things apart from education. Science education is just a small part of the job they do. They need a lot of help in learning how to use computing assets. I believe that concerned officials are out there and will become involved. If we're going to do anything significant, ever, in the schools at the precollege level, these officials have to be the people who are convinced. The individual teacher is too much at the whim of the tax base and what happens to residential real-estate taxes. The local officials have to be pulled in and channeled into these programs. The people to do that are the local members of the research community from our universities, laboratories, and businesses.

Getting these people to make an investment in the student base and in the public base of understanding for this technology is very important. I would love to see people like Greg McRae of Carnegie Mellon University spend about six months being an evangelist in the sense of reaching out to the educational community and the public at large.

What will the HPCC Initiative do for education in computational science? The initiative has about 20 per cent of its activity in education and human resources, and to the extent that we can get this program actually supported, there is a commitment, at least within the agencies, to maintain that level. The program will support graduate students on research projects, along with training for postdocs in computational science, curriculum improvement, and teacher training. About 20 per cent is for the National Research and Education Network (NREN). This used to be called NRN. The educational opportunity is absolutely critical, and we are serious about keeping it NREN—with an E.

Over the past few years, the Internet has been extended from about 100 institutions—the elite—to over 600 educational institutions. The number we aim for is 3000. That sounds daunting, except that we've driven the price of an incremental connection to the network, through a structure of regional networks, down from the $60,000 range, to about the $10,000 or $11,000 range. That will come down much further, making it easily available to anyone who wants it as commercial providers enter the market.

That means that small, "have-not" institutions in remote or poor areas of the country will be viable places for people to reach out to national assets of research and education, including the biggest computers at the best-equipped institutions in the country. One finds odd people in odd places using odd machines over the network to win Gordon Bell awards. It's really quite amazing what a transforming effect an accessible net can have.

Much has occurred in networking recently. We've gone from 56 kilobits per second to 45 megabits per second in the NSFnet backbone that is shared with the mission agencies. The regional nets will be upgrading rapidly. The mission agency nets are linked via federal internet exchanges, resources are shared, and the mix is becoming transparent.

A commercial internet exchange (CIX) has been created. Network services are also changing. The resources are there for X.400 and X.500. There are programs under way to get national white pages. Many scientific communities have begun to create their own distributed services, ranging from bulletin boards to software libraries.

Long-time users have experienced a transformation in reliability, compatibility, reach, and speed. We have, in the wings, commercial providers of services, from electronic publishing to educational software, preparing to enter this network. The U.S. research community has been engaged, and usage is growing rapidly in all of the science and engineering disciplines. Like supercomputing, high-performance networking is turning out to be a transforming technology. Achieving the President's HPCC goal is the "grand challenge" that must be met to sustain this kind of progress.


The Importance of the Federal Government's Role in High-Performance Computing

Sig Hecker

Siegfried S. Hecker is the Director of Los Alamos National Laboratory, Los Alamos, New Mexico. For more complete biographical information, please refer to Dr. Hecker's presentation in Session 1.

As I look at the importance of the federal government's role, I think I could sum it all up by saying that Uncle Sam has to be a smart businessman with a long-term outlook because his company's going to be around for a long time. Someone has to take that long-term outlook, and I think clearly that's where the federal government has to come in.

What I want to discuss today is specifically the role of Los Alamos National Laboratory (Los Alamos, or the Laboratory) and how we'd like to respond to the High Performance Computing Initiative. I think most of you know that what we are interested in first and foremost at Los Alamos is solutions. We have applications, and we want the best and the fastest computers in order to be able to do our jobs better, cheaper, and faster and, hopefully, to be able to do things that we had not been able to do before.

Our perspective comes from the fact that we want to do applications; we're users of this computing environment. So we've always considered it imperative to be at the forefront of computing capability.

Let me review briefly the important roles that I think we've played in the past and then tell you what we'd like to do in the future. It's certainly fair to say that we've played the role of the user—a sophisticated, demanding user. In that role we have interfaced and worked very, very closely with the computing vendors over the years, starting early on with IBM and then going to Control Data Corporation, to Cray Research, Inc., to Thinking Machines Corporation, and then of course all along with Sun Microsystems, Inc., Digital Equipment Corporation, and so forth.

Out of necessity, in a number of cases we've also played the role of the inventor—in the development of the MANIAC, for instance, right after development of the ENIAC. Our people felt that we had to actually create the capabilities to be able to solve the problems that we had.

Later on, we invented things such as the common file system. The high-performance parallel interface, better known as HIPPI, is also a Los Alamos invention. That's the sort of product that's come about because we're continually pushed by the users for this sort of capability.

New algorithms to solve problems better and smarter are needed. So things like the lattice gas techniques for computational fluid dynamics were basically invented here by one of our people, along with some French collaborators. Also, we are very proud of the fact that we helped to get three of the four NSF supercomputer centers on line by working very closely with them early on to make certain that they learned from our experiences.

We also introduced companies like General Motors to supercomputing before they bought their computers in 1984. We were working with them and running the Kiva code for combustion modeling. As Norm Morse points out (see Session 10), we have 8000 users. At least half of them are from outside the Laboratory.

The role that we've played has been made possible by our feeling that we have to be the best in the defense business. Particularly in our mainline business, nuclear weapons design, we felt we needed those capabilities because the problems were so computationally intense, so complex, and so difficult to test experimentally. We were fortunate for many, many years that first the Atomic Energy Commission and then the Department of Energy (DOE) had the sort of enlightened management to give us the go-ahead to stay at the forefront and, most importantly, to give us the money to keep buying the Crays and Thinking Machines and all of those good machines.

What we have proposed for the Laboratory is an expanded national charter under the auspices of this High Performance Computing Initiative. First of all, our charter has already significantly expanded beyond nuclear weapons R&D, which represents only about a third of our activities. The remaining two-thirds is a lot of other defense-related activities and many civilian activities.

Today, in terms of applications, we worry about some of the same grand challenges that you worry about, such as environmentally related problems—for instance, the question of global climate change. The Human Genome Initiative is basically an effort that started at Los Alamos and at Lawrence Livermore National Laboratory because we have the computational horsepower to look at how one might map the 3 billion base pairs that exist on your DNA. We also have other very interesting challenges in problems like designing a free-electron laser essentially from scratch with supercomputers.

In response to congressional legislation earlier this year, I outlined a concept called Collaborative R&D Centers that I'd like to see established at Los Alamos or at least supported at places like Los Alamos. There are several aspects of this proposed center I would like to mention. For one thing, we'd like to make certain that we keep the U.S. at the leading edge of computational capabilities. For another, we intend to make the high-performance computing environment available to greater numbers of people in business, industry, and so forth.

But there are five particular things I'd like to see centers like this do. First of all, continue this very close collaboration with vendors. For instance, at Los Alamos we're doing that now, not only with Cray but also with IBM, Thinking Machines, and many others.

Second, continue to work, perhaps even closer, with universities to make sure that we're able to inject the new ideas into the future computing environment. An example of that might be the work we've done with people like Al Despain at the University of Southern California (see Session 4) as to how one takes the lattice gas concepts and constructs a computational architecture to take advantage of that particular algorithm. Despain has thought about how to take one million chips and construct them in such a fashion that you optimize the problem-solving capabilities.

As part of this collaboration with universities, we could provide a mechanism for greater support, through DOE and Los Alamos, of graduate students doing on-campus research, with provisions for work at the Laboratory, itself. We do a lot with graduate students now. In fact, we have about 400 graduate students here during the course of a summer in many, many disciplines. I think in the area of computational sciences, we will really boost student participation.


The third aspect is to have a significant industrial user program to work even more closely with U.S. industry—not only to make supercomputing available to them but also to promote appreciation of what supercomputing and computational modeling can do for their business. So we'd like to have a much greater outreach to U.S. industry. I agree with the comments of other presenters that supercomputing in industry is very much underutilized. I think one can do much better.

The fourth aspect would be to help develop enabling technologies for tomorrow's innovations in computing—technologies such as photolithography (or at least the next generations of photolithography), superconducting microelectronics, optical computers, neural networks, and so forth. In the Strategic Defense Initiative (SDI) program, where we've done a lot of laser development, we'd like to provide essentially the next "light bulb" for photolithography—that "light bulb" being a free-electron laser we've developed for SDI applications.

The benefit of the free-electron laser is that you can do projection lithography, that you can take the power loss because you start with extremely high power, and that you can tune to wavelength. We think that we can probably develop a free-electron laser, tune it down to the order of 10 nanometers, and get feature sizes down to 0.1 micron—perhaps 0.05 microns—with that sort of a light bulb. It would take a significant development to do that, but we think it's certainly possible. We're working right now with a number of industrial companies to see what the interest level is so that we might be able to get beyond what we think can be done with X-ray synchrotron proximity lithography. It's that type of technology development that, again, I think would be a very important feature of what laboratories like ours could do.
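As a rough consistency check on those numbers, the standard Rayleigh resolution criterion relates the smallest printable feature to the exposure wavelength and the numerical aperture of the projection optics; the k1 factor and numerical aperture below are assumed values chosen only for illustration.

```latex
% Rayleigh resolution criterion; k_1 and NA are assumed illustrative values.
R = k_1 \frac{\lambda}{\mathrm{NA}}
  \approx 0.6 \times \frac{10\ \text{nm}}{0.1}
  = 60\ \text{nm} \approx 0.06\ \mu\text{m}
```

With a 10-nanometer source and optics of roughly that sort, feature sizes in the 0.05- to 0.1-micron range quoted above appear plausible.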

The fifth aspect would be a general-user program to make certain that we introduce, as much as is possible and feasible, some of these capabilities to the local communities, schools, businesses, and so forth. This collaborative R&D would have a central organization in a place like Los Alamos, but there would be many arms hooked together with very-high-speed networks. It would also be cost-shared with industry. The way that I see this working, the government invests its money in us to provide this capability. Industry invests its money and shows its interest by providing its own people to interact with us.

These are just a few remarks on what I think a laboratory like Los Alamos can do to make certain that this country stays at the leading edge of computing and that high-power computing is made available to a broader range of users in this country.


579

Legislative and Congressional Actions on High-Performance Computing and Communications

Paul G. Huray

Paul G. Huray is Senior Vice President for Research at the University of South Carolina at Columbia and a consultant for the President's Office of Science and Technology Policy. From 1986 to 1990, he served with the Federal Coordinating Committee on Science, Engineering, and Technology, chairing that body's Committee on Computer Research and Applications, which issued An R&D Strategy for HPC and The Federal HPC Program. He has assisted in the development of Manufacturing Technology Centers for the National Institute of Standards and Technology.

I thought I'd begin this presentation by making a few comments about the participation of the U.S. Congress in the High Performance Computing and Communications (HPCC) Initiative. Table 1 shows a chronology of the legislative actions on HPCC. There is nothing in particular that I want to bring to your attention in this table except the great number of events that have occurred since August 1986. In July 1990, there were several pieces of legislation on the floor of the Senate—S.1067, S.1976, and the Senate Armed Services Authorization—along with a couple of complementary House bills. Let me outline the legislative situation as of this writing (August 1990).

At the recent meeting of the President's Council of Advisors on Science and Technology, I tried to guess where the legislation was headed; Table 2 shows what could happen.


580
 

Table 1. Legislative History of HPCC Initiative

Date      Designation                 Action
Aug 86    PL 99-383                   NSF Authorization: OSTP network report
Nov 87    An R&D Strategy for HPC     FCCSET: systems, software, NRN, human resources
Aug 88    Sen. Sci. (CS&T)            Hearing: "Computer Networks and HPC"
Oct 88    S.2918                      "The National HPC Technology Act": Strategy + AI, Inf. Sci., and budget
May 89    S.1067                      "The HPC Act of 1989"
Jun 89    H. SR&T (SS&T)              Hearing: "U.S. Supercomputer Industry"
Jun 89    Sen. CS&T                   Hearing: "S.1067—NREN"
Jun 89    Gore roundtable             Off-the-record: network carriers
Jul 89    Sen. CS&T                   Hearing: "S.1067—Visualization and Software"
Aug 89    H.R.3131                    "The National HPC Technology Act of 1989"
Sep 89    The Federal HPC Program     Implementation plan: DARPA, DOE, NASA, NSF
Sep 89    Sen. Sci. (CS&T)            Hearing: "S.1067—Advanced Computing and Data Management"
Oct 89    H. SR&T (SS&T)              Hearing: "HPC"
Oct 89    H. Telecom. (E&C)           Hearing: "Networks of the Future"
Nov 89    S.1976                      "The DOE HPC Act"
Mar 90    Sen. En. R&D (E)            Hearing: "S.1976"
Apr 90    S.1067, amended             "The HPC Act of 1990" (same as The Federal HPC Program)
Jun 90    S.1976, amended             Puts NREN under DOE
Jul 90    H.R.5072                    "The American Technology Preeminence Act": DOC Authorization, ATP, S/W amend., OSTP
Jul 90    Sen. Armed Serv.            Authorizes $30 M for DARPA in FY 91
Jul 90    Gore roundtable             Off-the-record: HPC users


581
 

Table 2. Legislative Prognosis on HPCC Initiative (best guess by action)

Conference Compromise:
S.1067, S.1976, Armed Services authorization will pass. Committee report will resolve Senate NREN issue. H.R.5072 or H.R.3131 will pass. House will accept full Senate bill.

Consolidation:
Bill could attract ornaments or become a subset of other S&T legislation.

Appropriations:
Budget committee will single out HPCC for a line item with a "special place in the FY 91 budget," but appropriations will fall short of authorization because of general budget constraints.

Education:
Senate hearings will be held to place HPCC in an educational context. Human interfaces (sound, interactive graphics, multimedia) will be considered in view of previous failure of computer-aided instruction.

Business:
Senate hearings will be held to consider the value of HPCC to the manufacturing environment and to corporate network services.

S.1067 and S.1976 have passed out of committee and are on the floor for compromise. Some of the people attending this conference are participating in the compromise process.

There are some issues to be resolved, especially issues related to the National Research and Education Network (NREN), but it is our belief that these pieces of Senate legislation will pass in some form and that the complementary legislation will also pass in the House, although the language is quite different in some respects. However, we have some assurance that the House will accept the Senate bill when it's eventually compromised.

There are a couple of other dangers, however, in the consolidation process. What might happen to these pieces of legislation? As indicated in Table 2, they could attract ornaments from other activities, or they could become a subset of some other science and technology legislation. When I asked Senator Albert Gore of Tennessee what he thought might happen, he said that apart from issues related to abortion and the Panama Canal, he couldn't imagine what might be added.

The appropriations process, of course, is the key activity. The bills that are on the floor of the House and Senate now are just authorizations. But when it comes budget time—time to actually cut those dollars—we believe that HPCC will have a special status. The point is that there are


582

still plenty of opportunities for this initiative to fall apart in the Congress. In fact, some ideologues would like that to happen. I think, on the other hand, that we will see a few HPCC additions in the FY 1991 budget produced by Congress, and certainly there will be a special place in the FY 1992 budget coming out of the Executive Office.

Table 2 also indicates that more hearings will be held on HPCC. I realize that probably half the people in this audience have participated in these hearings, but I would guess that the next hearings would be associated with education, probably with the network and its usefulness as a tool for education. I would also guess that the following hearing would concentrate on business and how the manufacturing sector will benefit from a national network. Because we were asked to discuss specific topics, I'm going to address the rest of my remarks to the usefulness of the HPCC Initiative to the business sector.

To start, I want to note a very important key word in the title of the program (see Figure 1), the word "federal." The word "federal" has allowed this initiative to go forward without becoming industrial policy. At one point, that word was "national," but we realized we were potentially running into trouble because we wanted to retain an emphasis on economic competitiveness without crossing the political line into industrial policy.

Figure 1.
Cover of the report issued by the Office of Science and Technology Policy.


583

Three goals were stated in the development of the High Performance Computing Initiative, as the HPCC Initiative is also known:

• maintain and extend U.S. leadership in high-performance computing, especially by encouraging U.S. sources of production;

• encourage innovation through diffusion and assimilation into the science and engineering communities; and

• support U.S. economic competitiveness and productivity through greater utilization of networked high-performance computing in analysis, design, and manufacturing.

These goals are getting close to the political line of industrial policy. It is the third of these goals I want to focus on during my presentation.

I think everyone who participated in this process understood the relevance of this initiative to the productivity of the country. In order to examine that potential, I decided to take a look at what might become a more extended timetable than the three phases of the national network that are listed in HPCC Initiative documents.

As you probably know, phases 1, 2, and 3 of NREN are addressed in the document that covers growth in performance of the network and growth in the number of institutions. We're probably already into NREN-2 in terms of some of those parameters. But many of us believe that the big payoffs will occur when business begins to participate in the national network and gains access to high-performance computing. Figure 2 suggests that large numbers of institutions could become part of the network once the business sector participates.

Figure 2.
Extended timetable for a National Research and Education Network (NREN).


584

How will that happen? One way is through a program that the National Institute of Standards and Technology (NIST) runs. It is a program that I'm very familiar with because it takes place partly in South Carolina.

In 1988 Congress passed the Omnibus Trade and Competitiveness Act, as shown in Table 3. That act was very controversial because it dealt with all kinds of issues—unfair trade, antidumping, foreign investments, export control, intellectual property. But one of the activities it established was the development of regional centers for the transfer of manufacturing technology, and this activity is intimately connected with the national network.

As shown in Table 4, there are currently three centers that participate in this manufacturing technology center program. One is at Rensselaer Polytechnic Institute in Troy, New York, one is in Cleveland at the Cleveland Area Manufacturing Program, and one is at the University of South Carolina. The South Carolina-based initiative provides a process for delivering technology to the work force; and this is quite different than the normal Ph.D. education process. We're delivering technology to small and medium-size companies, initially only in South Carolina but later throughout the 14 southeastern states in a political consortium called the Southern Growth Policies Board.

This technology involves fairly straightforward computation, that is, workstation-level activity. But in some cases the technology involves numerically intensive computing. Table 5 shows the kinds of technologies that we're delivering in South Carolina. The small-to-medium-size companies range in capabilities from a blacksmith's shop up to a very sophisticated business. But we're having a terrific impact. In 1989, we trained 10,000 workers in the technologies shown in Table 6 as part of this program. Those are the workers trained in one state by one of three manufacturing technology centers—a number of centers expected to expand to five in 1991. This will be a model for a technology extension program for the whole United States, in which many people in the work force will have access not just to workstations but to high-performance computing, as well.

As an example, Figure 3 shows the network that we're currently using to distribute technology electronically. This is a state-funded network aided by Digital Equipment Corporation, and it is an example of a local initiative that fits into the national program. There are 26 institutions participating in this consortium, mostly technical colleges whose instructors have previously trained many members of the work force in discrete-parts manufacturing activities around the state.


585
 

Table 3. Omnibus Trade and Competitiveness Act of 1988


•  Tools to open foreign markets and help U.S. exporters (examples):

– Equitable Trade Policy

– Antidumping Measures

– Foreign Investment

– Export Control

– Intellectual Property Protection


• Initiative to boost U.S. industry in world markets (NIST responsibilities)

– Assist State Technology Programs

– Implement Advanced Technology Program

– Establish Clearing-House for State Technology Programs

– Establish Regional Centers for Transfer of Manufacturing Technology

 

Table 4. Characteristic Descriptions of the Existing NIST Manufacturing Technology Centers (MTCs)

Northeastern MTC at Rensselaer Polytechnic Institute:
Focuses on hardening of federal software for commercialization.
Features excellent engineering and enjoys support of industry vendors.
"The Shrink-Wrap Center."

Great Lakes MTC at Cleveland Area Manufacturing Program:
Focuses on real-time measurement and metal-cutting machinery.
Promotes a large toolmaking base in the Cleveland area.
"The Manufacturing Resource Facility."

Southeastern MTC at the University of South Carolina:
Focuses on workforce training through existing technical institutions.
Addresses company needs, including training, in 14 southeastern states.
"The Delivery Center."

 

Table 5. SMTC Technical Emphasis

• Computer-Aided Design
• Robotics
• Computer-Aided Manufacturing
• Metrology
• Computer-Aided Engineering
• Integration Cells
• Numerically Controlled Machines
• Inventory Control
• Advanced Machine Tools
• Quality Control


586
 

Table 6. SMTC: General Successes

 

• Training approximately 10,000 workers in

– CAD

– CAM

– CAE

– Piping Design

– Geometric Dimensioning/Tolerancing

– Info Windows (IBM)

– Prog Logic CON

– SPC

– TQC

– ZENIX

• Implementing manufacturing-company-needs assessment

• Establishing technical colleges network

• Establishing center of competence for manufacturing technology

• Transferring numerous technologies from Oak Ridge National Laboratory and Digital Equipment Corporation

We estimate that more than 80% of the commerce in South Carolina is within 25 miles of one of these institutions, so linking them into a corporate network is going to be reasonably straightforward. We've initiated that linking process, and we believe we're going to bring about a cultural change with the help of a manufacturing network.

One of the things we are doing, for example, is bringing up an electronic bulletin board, which is essentially going to be a bidding board on which we have electronic specifications for subcontracts from the Department of Defense or large corporations. We will let those small corporations call the bulletin board and see what's up today for bid, with a deadline perhaps at three o'clock. The small businessman will look at the electronic specifications for a particular subcontract, which might be a fairly large data set displayed on a high-quality workstation, and ask himself whether he can manufacture that item numerically and quickly enough for the subcontract bid. In one scenario, before the three o'clock deadline comes, the businessman bids on the project, the bid is let at five o'clock, the manufacturing takes place that night, and delivery happens the next morning. I think that's not an unrealistic vision of a future manufacturing infrastructure aided by a national network.
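
For concreteness, here is a minimal sketch, in Python, of the kind of bidding board just described. Every detail is a hypothetical illustration: the record layout, the names BidPosting, submit_bid, and award, the lowest-price award rule, and the specific times are assumptions made for this example, not a description of the actual South Carolina system.

    # Hypothetical sketch of an electronic bidding board: post a subcontract
    # with its specification and a 3:00 p.m. deadline, accept bids that arrive
    # in time, and award the job (here, simply to the lowest bidder).

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Dict, Optional

    @dataclass
    class BidPosting:
        item: str                 # what the subcontract asks for
        spec_file: str            # pointer to the electronic specification
        deadline: datetime        # bids must arrive before this time
        bids: Dict[str, float] = field(default_factory=dict)  # bidder -> price

        def submit_bid(self, bidder: str, price: float, now: datetime) -> bool:
            """Record a bid only if it arrives before the deadline."""
            if now >= self.deadline:
                return False
            self.bids[bidder] = price
            return True

        def award(self) -> Optional[str]:
            """Pick a winner at award time; here the lowest price wins."""
            if not self.bids:
                return None
            return min(self.bids, key=self.bids.get)

    # Example: one posting, two bids before the deadline, award at five o'clock.
    posting = BidPosting("machined bracket, 500 units",
                         "bracket-500.spec",            # hypothetical file name
                         datetime(1990, 8, 15, 15, 0))  # 3:00 p.m. deadline
    posting.submit_bid("Shop A", 4800.0, datetime(1990, 8, 15, 13, 30))
    posting.submit_bid("Shop B", 4500.0, datetime(1990, 8, 15, 14, 45))
    print(posting.award())  # -> Shop B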


587

Figure 3.
South Carolina State Technical Colleges' wide-area network plan.

The companies that are currently participating in our program are, for the most part, using 1950s technology, and they are just now coming into a competitive environment. Unfortunately, they don't even know they're competing in most cases. But we can see such a network program extending to other states in the southeastern United States, as shown in Figure 4. The plan for NIST is to clone the manufacturing technology centers throughout other states in the country. We can imagine eventually 20,000 small- to medium-sized companies participating in such a program—20,000 businesses with employees of perhaps, in the case of small businesses, 50 persons or less.

We need to remember that these small corporations produce the majority of the balance of trade for the United States. Seventy-five per cent of our manufactured balance of trade comes from small- or medium-sized companies; these are the people we need to impact. I have described a mechanism to initiate such a program. I believe that this program can address the infrastructure of manufacturing and productivity in the country, as well as accomplish some of the very sophisticated projects we are doing in academia and in our national laboratories.


588

Figure 4.
Southeastern Manufacturing Technology Network.


589

The Federal Role As Early Customer

David B. Nelson

David B. Nelson is Executive Director of the Office of Energy Research, U.S. Department of Energy. For more complete biographical information, please see his presentation earlier in this session.

Federal agencies—and I'll use Department of Energy (DOE) as an example because I'm most familiar with it—have for many years played an important role as an early customer and often as a first customer for emerging high-performance computing systems. This is true not only in federal laboratories but also in universities with federal funding support, so the federal role is diffused fairly widely.

Let's talk about some of the characteristics of the good early customer.

First, the customer must be sophisticated enough to understand his needs and to communicate them to vendors, often at an early stage when products are just being formulated.

Second, the customer must work closely with the vendors, often with cooperative agreements, in order to incorporate the needs of the customers into the design phase of a new product.

Third, the customer is almost always a very early buyer of prototypes and sometimes even makes early performance payments before prototypes are built to provide capital and feedback to the vendor.

Fourth, the early buyer must be willing to accept the problems of an immature product and must invest some of his effort to work with the vendor in order to feed back those problems and to correct them. That has historically been a very important role for the early good buyer.


590

Fifth, the early buyer has to be willing to add value to the product for his own use, especially by developing software. Hopefully, that added value can be incorporated into the vendor's product line, whether it is software or hardware.

And sixth, the early buyer should be able to offer a predictable market for follow-on sales. If that predictable market is zero, the vendor needs to know so he doesn't tailor the product to the early buyer. If the early buyer has a predictable follow-on market that is not zero, the vendor needs to be able to figure that into his plans. For small companies this can be extremely important for attracting capital and maintaining the ability to continue the product development.

The High Performance Computing Initiative authorizes an early-customer role for the federal government. The six characteristics of a good customer can be translated into five requirements for success of the High Performance Computing Initiative.

First, there must be a perceived fair process that spreads early customer business among qualified vendors. As we learn from Larry Tarbell's description of Project THOTH (Session 12) at the National Security Agency, competition may or may not be helpful. In fact, it has been our experience in DOE that when we are in a very early-buy situation, there is usually no opportunity for competition. But that doesn't mean that fairness goes out the window, because various vendors will be vying for this early buyer attention. They ought to. And by some mechanism, the spreading of the business among the agencies and among the user organizations needs to take place so that the vendors can perceive that it's a fair process.

Second, the early buyer agency or using organization must provide a patient and tolerant environment, and the agency sponsors need to understand this. Furthermore, the users of the product in the early-buy environment need to exercise restraint in either bad-mouthing an immature product or, equally bad, in making demands for early production use of this immature product. We have seen instances in agencies in the past where this aspect was not well understood, and mixed expectations were created as to what this early buy was supposed to accomplish.

Third, these early buys must be placed into sophisticated environments. Those agencies and those organizations that are going to participate in this aspect of the initiative need to be able to provide sophisticated environments. The last thing that a vendor needs is naive users. He accommodates those later on when his product matures, but when he is first bringing his product to market, he needs the expertise of the experienced and qualified user.


591

Fourth, early buys should be placed in an open environment, if possible. The information as to how the computer is performing should be fed back at the appropriate time, not only to the vendor but also to the rest of the user community. And the environment should involve outside users so as to spread the message. The organization that is participating in the early buy or the prototype evaluation needs to have both a willingness and an ability to share its experience with other organizations, with other agencies, and with the broader marketplace. Clearly, there's a tension between the avoidance of bad-mouthing and the sharing of the news when it's appropriate.

Finally, we need to have patient capital—the federal agencies must be patient investors of advanced technology and human capital. They must be willing to invest today with an understanding that this product may not return useful work for several years. They must be willing to find users to try out the new computer and feed back information to the vendor without being held to long-term programmatic accomplishments. In many cases, these sophisticated institutions are leading-edge scientific environments, and scientists must make tradeoffs. Is a researcher going to (1) publish a paper using today's technology or (2) risk one, two, or three years of hard work on a promising but immature technology?

As we put the High Performance Computing Initiative together, we kept in mind all of these aspects, and we hope we have identified ways to achieve the goals that I laid out. However, I don't want to leave you with the idea that this is easy. We, all of us, have many masters. And many of these masters do not understand some of the things that I've said. Education here will help, as in many other places.


593

A View from the Quarter-Deck at the National Security Agency

Admiral William Studeman

Admiral William O. Studeman, U.S. Navy, currently serves as Deputy Director of Central Intelligence at the CIA. Until April 1992, he was the Director of the National Security Agency (NSA). His training in the field of military intelligence began subsequent to his graduation in 1962 with a B.A. in history from the University of the South, Sewanee, Tennessee. Thereafter, he accumulated a series of impressive academic credentials in his chosen specialty, taking postgraduate degrees at the Fleet Operational Intelligence Training Center, Pacific, in Pearl Harbor, Hawaii (1963); the Defense Intelligence School in Washington, DC (1967); the Naval War College in Newport, Rhode Island (1973); and the National War College in Washington, DC (1981). He also holds an M.A. in Public and International Affairs from George Washington University in Washington, DC (1973).

The Admiral's early tours of duty were based in the Pacific, where he initially served as an Air Intelligence Officer and, during the Vietnam conflict, as an Operational Intelligence Officer, deploying as the Command Staff of the Amphibious Task Force, U.S. Seventh Fleet. Later assignments posted him to such duty stations as the U.S. Sixth Fleet Antisubmarine Warfare Force, Naples, Italy; the Defense Intelligence Agency station in Iran; the Fleet Ocean Surveillance Information Center, Norfolk, Virginia; and the U.S. Sixth Fleet Command,


594

Gaeta, Italy. His duties immediately preceding his appointment as NSA Director included assignments as Commanding Officer of the Navy Operational Intelligence Center (1984–85) and Director of Naval Intelligence (1985–88), both in Washington.

The National Security Agency (NSA) possesses an enlightened, harmonious, heterogeneous computing environment, probably the most sophisticated such environment anywhere. In a sense, it is the largest general-purpose supercomputing environment in the world. Its flexibility in using microcomputers to supplement specially adapted high-power workstations and state-of-the-art supercomputers goes beyond anything that could have been imagined just a few years ago. The investment represents many billions of taxpayer dollars. A major portion of that investment is in supercomputing research, particularly in massively parallel areas, and in the applications that we use routinely on our networking systems. Because of special applications like these, and because of multilevel security requirements, NSA is obliged to operate its own microchip factory. Even if Congress declared tomorrow that the world had become safe enough to halt all defense work at the microelectronics lab, one could still justify the lab's existence on the basis of a whole range of nondefense applications related to the national interest.

So the computing environment at NSA is a complete computing environment. Of course, the focus is narrowly defined. Raw performance is an overriding concern, especially as it applies to signal processing, code-breaking, code-making, and the operations of a very complicated, time-sensitive dissemination architecture that requires rapid turnaround of collected and processed intelligence. In turn, that intelligence must be sent back to a very demanding set of users, many of whom function day-to-day in critical—sometimes life-threatening—situations.

One must admit to harboring concern about certain aspects of the federal High Performance Computing Initiative. Most especially, for NSA the emphasis must remain on raw performance. Japanese competition is worrisome. If Japanese machines should someday come to outperform American machines, then NSA would have to at least consider acquiring Japanese machines for its operations.

Let me digress for a moment. You know, we operate both in the context of the offense and the defense. Consider, for instance, Stealth technology as it relates to battle space. Stealth benefits your offensive posture by expanding your battle space, and it works to the detriment of your enemy's defensive posture by shrinking his battle space. Translate


595

battle space into what I call elapsed time: Stealth minimizes elapsed time when you are on the offense and maximizes it when you are on the defense.

I raise this rather complicated matter because I want to address the issue of export control. Some in government continue holding the line on exporting sensitive technology—especially to potentially hostile states. And I confess that this position makes some of us very unpopular in certain quarters, particularly among industries that hope to expand markets abroad for American high-tech products. I have heard a great many appeals for easing such controls, and I want to assure you, relief is on the way. Take communications security products: there now remains only a very thin tier of this technology that is not exportable. The same situation will soon prevail for computer technology; only a thin tier will remain controlled.

It most certainly is not NSA's policy to obstruct expansion of overseas markets for U.S. goods. Ninety-five per cent or more of the applications for export are approved. NSA is keenly aware that our vital national interests are intimately bound up with the need to promote robust high-tech industries in this country, and it recognizes that increased exports are essential to the health of those industries. Among the potential customers for high-tech American products are the Third World nations, who, it is clear, require supercomputing capability if they are to realize their very legitimate aspirations for development.

And it is just as clear that the forces of darkness are still abroad on this planet. In dealing with those forces, export controls buy time, and NSA is very much in the business of buying time. When the occasion demands, you must delay, obfuscate, shrink the enemy's battle space, so to speak.

It is no secret that NSA and American business have a community of interest. We write contracts totaling hundreds of millions—even billions—of dollars, which redound to the benefit of your balance sheets. Often, those contracts are written to buy back technology that we have previously transferred to you, something that never gets reflected on NSA's balance sheet. Still, in terms of expertise and customer relations, the human resources that have been built up over the years dealing with American business are a very precious form of capital. And that capital will become even more precious to NSA in the immediate future as the overall defense budget shrinks drastically (disastrously, in my view). If NSA is to continue carrying out its essential functions, it will more and more have to invest in human resources.

This may be a good time to bring up industrial policy. "Government-business cooperation" might be a better way to put it. I'm embargoed from saying "industrial policy," actually. It's not a popular term with the


596

current administration. Not that the administration or anyone else in Washington is insensitive to the needs of business. Quite the opposite. We are more sensitive than ever to the interdependence of national security and a healthy industrial base, a key component of which is the supercomputer sector. If we analyze that interdependence, we get the sense that the dynamic at work in this country is not ideal for promoting a competitive edge. The Japanese have had some success with vertical integration, or maybe they've taken it a bit too far. Regardless, we can study their tactics and those of other foreign competitors and adapt what we learn to our unique business culture without fundamentally shifting away from how we have done things in the past.

What is NSA's role in fostering government-business cooperation, particularly with respect to the supercomputing sector? Clearly, it's the same role already mentioned by other speakers at this conference. That role is to be a good customer in the future, just as it has been in the past. NSA has always been first or second in line as a buyer. Los Alamos National Laboratory took delivery on the first Cray Research, Inc., supercomputer; NSA took delivery on the second and spent a lot of money ensuring that Cray had the support capital needed to go forward in the early days. We at NSA are doing the same thing right now for Steve Chen and the Supercomputer Systems, Inc., machine. NSA feels a strong obligation to continue in the role of early customer for leading-edge hardware, to push the applications of software, and to be imaginative in its efforts to fully integrate classical supercomputing with the massively parallel environment.

One area to which NSA is certain to devote more resources is cryptography problems that prove less tractable when broken down into discrete parallel pieces than when attacked in a massively parallel application. Greater investment and greater momentum in this line of research are a high priority.

Producing computer-literate people: through the years, NSA has nurtured a great pool of expertise that the entire nation draws upon. Many of you in this very audience acquired your credentials in computing at NSA, and you have gone out, like disciples, to become teachers, researchers, and advocates. Of course, NSA maintains a strong internship program, with the object of developing and recruiting new talent. Many people are not aware that we have also mounted a very active education program involving students in the Maryland school system, grades K through 12. Further, NSA brings in teachers from surrounding areas on summer sabbaticals and provides training in math and computer science that will improve classroom instruction. All of these efforts


597

are designed to address directly the concerns for American education voiced by the Secretary of the Department of Energy, Admiral James D. Watkins (retired).

Thank you for providing me the opportunity to get out of Washington for a while and attend this conference here in the high desert. And thank you for bearing with me during this rambling presentation. What I hoped to accomplish in these remarks was to share with you a view from the quarter-deck—a perspective on those topics of mutual interest to the military-intelligence community and the high-performance computing community.


599

Supercomputers and Three-Year-Olds

Al Trivelpiece

Alvin W. Trivelpiece is a Vice President of Martin Marietta Energy Systems, Inc., and the Director of Oak Ridge National Laboratory. He received his B.S. from California Polytechnic State College-San Luis Obispo, and his M.S. and Ph.D. in electrical engineering from Caltech. During his professional career, Dr. Trivelpiece has been a Professor of Electrical Engineering at the University of Maryland-College Park, Vice President of Engineering and Research at Maxwell Laboratories, Corporate Vice President at Scientific Applications Incorporated, Director of the Office of Energy Research for the Department of Energy, and Executive Officer of the American Association for the Advancement of Science.

I always enjoy coming to the Southwest, as many of you do. There is something that people who live here sort of forget, and that is that when you step outside at night and you look up, you see a lot of stars. They're very clear and bright, just as if they are painted on the sky.

Ancient humans must have done the same thing. That is, after they skinned the last sabertooth of the day, they looked up at the sky to see all those stars. Among those people there must have been a few who were as smart as Richard Feynman. Some of them probably had the same IQ that Feynman had. It must have been very frustrating for them because they lacked the tools to answer the questions that must have occurred to them. The things we now recognize as planets, moons, meteors, and comets were phenomena they could only wonder about. But eventually


600

some of them persuaded colleagues to join them and build Stonehenge (early "big science") and other kinds of observatories. These were some of the early tools.

As time has gone by, tools have included ships that permitted exploring what was on the other side of the water, transport that enabled travelers to cross deserts, and eventually, vehicles that allowed humans to go into space and deep into the ocean. Tools are also those things that permit the intellectual exercises that involve going into the interior of the atom, its nucleus, and the subparts thereof.

All of that has been made available by, in a sense, one of the most enduring human characteristics, and that is curiosity. Beyond the need to survive, I think, what drives us more than any other single thing is curiosity. But curiosity can't be satisfied. Ancient man couldn't figure out what was on the inside of an atom, but with the right kind of an accelerator, you can. However, as the tools have come along, they've led to new questions that weren't previously asked.

You, the computer manufacturers, are in the process of developing some truly spectacular tools. Where is all that going to go? One of the things that you are going to need is customers. So I want to just talk for a few minutes about one of the customer bases that you're going to have to pay attention to. I ask you to imagine that it's the year 2000 and that in 1997 a child was born. The child is now three years old. I pick a three-year-old because I think that when you're three years old, that's the last time in your life that you're a true intellectual. The reason I believe this is that at that particular stage a three-year-old asks the question "Why?" for no other reason than the desire to know the answer. Anybody who has been the parent of a three-year-old has put up with a zillion of these questions. I won't try to recite the string; you all know them.

That curiosity, however, is fragile. Something rather unfortunate seems to happen to that curiosity as the child gets a little older. Children reach the third grade, and now you direct them to draw a picture of a fireman, a policeman, a scientist. What do you get? A lady named Shirley Malcolm, who works at the American Association for the Advancement of Science, has a remarkable collection of pictures by third graders who were asked to draw a scientist. What do these pictures look like? Let me just tell you, these are usually people you wouldn't want to be. These are bad-looking, Einstein-like critters who are doing bad things to other people, the environment, or animals. That's your customer base. Incidentally, there's another set of pictures that I've never seen, but they're supposed to be from a similar study done in the Soviet Union, and what


601

those children drew were people being picked up in limos and driven to their dachas.

Most of you have heard about the education demographics in the United States. Eighty-five per cent of the work force between now and the year 2000 is going to be minorities and women; they have not traditionally chosen to pursue careers in science and technology. We have as a result a rather interesting and, I think, a serious problem.

Now, how might you go about fixing that? Well, maybe one way is for every three-year-old to get a terminal and, at government expense, get access to a global network. The whole world's information, literally, would be available. This can be done by the year 2000. If this were to occur, what would the classroom of the 21st century look like? I believe it starts with a three-year-old, a three-year-old who gets access to an information base that permits, in that very peculiar way that a three-year-old goes about things, skipping from one thing to another—language, animals, mathematics, sex, whatever. Three-year-olds have a curiosity that simply doesn't know any particular bounds. Somehow the system that we currently have converts that curiosity into an absolute hostility toward intellectual pursuits. This seems to occur between the third year and the third grade.

So, I believe that the distinction between home and classroom is probably going to be very much blurred. Home and classroom will not look significantly different. The terminals may be at home, the terminals may be in schools, they may be very cheap, and they may be ubiquitous. And the question is, how will they be used?

What about the parents between now and then? I suspect that the parents of these children, born in 1997 and three years old in the year 2000, are going to be very poorly equipped. I don't know what we can do about that. But I have a feeling that if you get the right tools in the hands of these three-year-olds in the year 2000, a lot of the customer base that you are counting on will eventually be available. Remember that you, the computer developers and vendors, have a vision. But, unless you do something to help educate the people needed to take advantage of that vision, the vision simply will not exist. So you have a serious problem in this regard.

Think also of children who are disabled in some way—blind, dyslexic, autistic, or deaf. Through these information bases, you can give these children, in their homes or schools, an ability to overcome whatever disabilities they might have. High-performance computing might provide one means to help in a broad-based campaign to overcome large collections of disabilities.


602

Thus, I think one of the questions is a rhetorical question. Rather than leaving you with an answer, I leave you with some questions. Because high-performance computing is going to have an impact on the classroom of the 21st century, you have to ask, what is that impact going to be? And how are you going to prepare for it?


603

NASA's Use of High-Performance Computers:
Past, Present, and Future

Vice Admiral Richard H. Truly

Richard H. Truly, Vice Admiral, U.S. Navy (retired), was until 1992 the Administrator of the National Aeronautics and Space Administration. He has a bachelor's degree in aeronautical engineering from the Georgia Institute of Technology, Atlanta. He was the first astronaut to head the nation's civilian space agency. In 1977, he was pilot for one of the two-man crews that flew the 747/Space Shuttle Enterprise approach-and-landing test flights. He served as back-up pilot for STS-1, the first orbital test of the Shuttle, and was pilot of STS-2, the first time a spacecraft had been reused. He was Commander of the Space Shuttle Challenger (STS-8) in August-September 1983. As a naval aviator, test pilot, and astronaut, the Vice Admiral has logged over 7500 hours in numerous military and civilian jet aircraft.

I am delighted to be here at Los Alamos, even for a few hours, for three big reasons. First, to demonstrate by my presence that NASA means it when we say that our support for high-performance computing, and particularly the High Performance Computing Initiative (HPCI), is strong, and it's going to stay that way. Second, to tell you that we're proud of NASA's support over the last five years that led to this initiative. Third, to get out of Washington.

NASA needs high-performance computing to do its job. Let me begin by telling you that some of our current missions and most of our future


604

missions absolutely depend upon the power and the capability that supercomputers can and are providing.

Despite what you read, the Hubble Space Telescope is going to make, in the next several weeks, a long series of observations that are going to create great interest among scientists and the public alike. The Hubble will look out to the stars, but the value of the data it brings back can only be understood by programs that can be run on very powerful computers. Within two or three years, when we go back and bring the Hubble up to its originally intended performance, I can assure you that that mission is going to produce everything that we said it would.

There's a space shuttle sitting on the pad that's going to launch in about a week. Its launch will make ten perfect flights since the Challenger accident. This would have been impossible if a supercomputer at Langley Research Center had not been able to analyze the structural performance of the field joint that caused the Challenger accident—a problem we did not understand at the time of the accident.

As I speak, 26 light minutes away, the Magellan spacecraft is moving around the planet Venus. It has a very capable, perfectly operating, synthetic-aperture side-looking radar that we've already demonstrated. Magellan is under our control, and we're bringing data back to understand a problem that we've had with it. However, I must tell you the problem is in an on-board computer, apparently.

The reason that we need a side-looking radar to understand the planet Venus is that today, as we sit enjoying this beautiful weather out here, it is raining sulfuric acid on Venus through an atmosphere that produces a surface temperature of something just under 1000 degrees Fahrenheit. To understand that planetary atmosphere and to see the surface, we need supercomputers to interpret the data Magellan sends us. Supercomputers not only allow us to explore the planets through a robot but also will help us understand our own earth.

NASA is in a leadership business, and I think the HPCI is a leadership initiative. NASA is in a visionary business, and I think the HPCI is a visionary program. NASA very much is in a practical business, a day-to-day practical business, and NASA believes that the HPCI is a practical program where federal agencies can get together, along with cooperation from you, and solve some of these disparate problems. The 1992 budget has not been submitted to the OMB, but I assure you that when it is, NASA's support for the HPCI will be there.

Very briefly, our role in the HPCI is to take on the daunting and difficult task of coordinating the federal agencies on the software side and on algorithm development. Our major applications in the


605

Initiative fall into three areas. First, in computational aeronautical sciences, I'm proud to say, NASA's relationship with the aircraft industry and the aeronautical research establishment over the years is possibly the best example of a cooperative government/private-industry effort. The supercomputers that we use in that effort are only tools, but they are necessary to make sure that the aircraft that sit at airports around the world in future years continue to be from the Boeing Company and McDonnell Douglas and that they continue their record of high returns to our trade balance.

The second major area in applications is in the earth sciences and in the space sciences. For instance, conference participants were given a demonstration of a visualization of the Los Angeles Basin, and it showed us the difficulties in understanding the earth, the land, the ice, the oceans, and the atmosphere. There's no way that our Planet Earth Initiative, or the Earth Observing System that is the centerpiece of it, can be architected without large use of supercomputing power. We recognize that, and that's why the computational planning part of that program is on the front end, and that's also why the largest single item in the budget is no longer the spacecraft, itself, but the analysis and computational systems. And developing those systems will probably turn out to be the most difficult part of the job.

The third applications area is in exploration, both remote today and manned in the future—to the planets and beyond, with robots and people.

Another major area that I ought to mention as our part of the HPCI is that of educating the next generation. In our base program, which I'll speak to in just a minute, we have about seven institution-sited or university-sited centers of excellence, and we intend to double that number with our share of the HPCI funds, which we will propose first to the President and then to Congress.

I should point out that the initiative already sits on a $50 million per year NASA research base in high-performance computing, principally targeted toward scientific modeling, aeronautical research modeling, and networking, both within NASA and outside.

In closing, let me make an observation about what I've seen from interacting with conference participants and from touring Los Alamos National Laboratory. I've noticed that this area is something like the space business. Every single person knows how to run it, but no two people can agree how to run it.

I believe, as Admiral Studeman said earlier in this session, that we absolutely need high performance, but we also need the entire range of


606

work that the companies represented here can provide. We cannot do our job without supercomputing performance.

As far as priorities go, let me just say that in the little over a year that I've been the NASA administrator, I've had a lot of meetings over in the White House, particularly with Dr. Bromley. (I think that you and others in the science and research community ought to thank your lucky stars that Allan Bromley is the Science Advisor to the President.) Among the various topics that I recall—in conversations with small groups, talking about where the federal government and the nation should go—two subjects stand out. The first is high-performance computing. The second is a subject which, frankly, stands higher on my priority list and is my first love, and that is math and science education. NASA's education programs—I can't miss this opportunity, as I never miss one, to tell you about our many great education programs—have three main thrusts. First, we try to capture young kids at the earliest possible age and make them comfortable with mathematics and science so that later they are willing to accept them. Second, we try to take those young people and channel more of them into careers in mathematics and science. Third, we try to enhance the tools—particularly information systems and computers—that we give to the teachers to bring these young people along.

In short, NASA intends to continue to be part of the solution, not part of the problem.


607

A Leadership Role for the Department of Commerce

Robert White

Robert M. White was nominated by President Bush to serve as the first Department of Commerce Under Secretary for Technology. His nomination was confirmed by the U.S. Senate on April 5, 1990.

Dr. White directs the Department of Commerce Technology Administration, which is the focal point in the federal government for assisting U.S. industry in improving its productivity, technology, and innovation to compete more effectively in global markets. In particular, the Administration works with industry to eliminate legislative and regulatory barriers to technology commercialization and to encourage adoption of modern technology management practices in technology-based businesses.

In addition to his role as Under Secretary for Technology, Dr. White serves on the President's National Critical Technologies Panel and the National Academy of Sciences' Roundtable on Government/University/Industry Research. Before joining the Administration, Dr. White was Vice President of the Microelectronics and Computer Technology Corporation (MCC), the computer industry consortium, where he directed the Advanced Computing Technology program. Dr. White served as Control Data Corporation's Chief Technical Officer and Vice President for Research and Engineering before he joined MCC.


608

In 1989, Dr. White was named a member of the National Academy of Engineering for his contributions to the field of magnetic engineering. He is also a Fellow of the American Physical Society and the Institute of Electrical and Electronics Engineers. In 1980 he received the Alexander von Humboldt Prize from the Federal Republic of Germany. Dr. White holds a B.S. in physics from MIT and a Ph.D. in physics from Stanford University.

During Eugene Wong's presentation in this session, he named the eight major players in the High Performance Computing Initiative. You may recall that the Department of Commerce is not among them. Of course, we do have activities at the National Institute of Standards and Technology (NIST) that will relate to some of the standardization efforts. And certainly the National Oceanic and Atmospheric Administration, within the Department of Commerce, is going to be a user of high-performance equipment. But more generally, Commerce has an important role to play, and that is what I'd like to discuss.

Near the close of his remarks, Gene listed the ways in which the federal government might help, and I was intrigued by the one that was at the very bottom. You all probably don't remember, but it was leadership. And so I want to talk a little bit about the role that the government can play in leadership. And to do that, I sort of thought of some of the attributes of leadership that we often think about with regard to actual leaders. I want to try to apply them, if you like, to an organization, particularly the Department of Commerce.

One of the first things I think that leadership involves is vision, the conveying of a vision. I think in the past—in the past few years and even in this past week—we've heard a lot of discouraging information, a lot of discouraging data, a lot of discouraging comparisons to competitors. I think Commerce certainly is guilty of pessimism when it publishes a lot of its data. We're all guilty of focusing too readily on the gains of our competitors.

I think our vision—the vision I want to see Commerce adopt—is one of a resilient, innovative and competitive nation, a positive vision, one that can look to the future.

One of the elements of competitiveness has to do with manufacturing, particularly manufacturing quality products. And one of the ways in which Commerce plays a leadership role in manufacturing is that we manage the Malcolm Baldrige Award, which basically promotes quality improvement. This program is only a few years old. During the first half


609

of 1990, we had requests for over 100,000 guideline booklets. I don't know who's going to handle all those applications when they come in.

I think the exciting thing about those guidelines is that they double as a handbook for total quality management. So whether or not you apply for the Baldrige Award or just read the book, you're bound to benefit in terms of quality.

With regard to manufacturing itself, Paul Huray in this session has already mentioned the regional manufacturing centers. Paul emphasized the important fact that this is really the beginning of a very important network. Within Commerce we are also promoting another manufacturing effort, which is the shared manufacturing centers. These are manufacturing centers that are by and large funded by state and local governments, but they are available for small companies to utilize and try new equipment, try new approaches, and perhaps even do prototype runs on things.

And finally, NIST, as many of you know, actually has a major automation effort under way that involves many collaborations with industry and federal agencies.

One of the other attributes of leadership is the role of catalyzing, coordinating, and generally focusing. One of the most important assets Commerce brings to that role is the power of convening, and in this way we have access to a lot of data that we can make available to you, hopefully in a useful way. We do maintain clearing-house efforts. We have a database now that has all the state and local technology efforts catalogued for easy accessibility.

Another attribute of a leader, or leadership, is that of empowering those who are part of the organization. In the Department of Commerce, the way that we empower industry, hopefully, is by removing barriers. When it became clear a few years ago that antitrust laws were inhibiting our ability to compete on a global scale, we worked with others to pass the Cooperative Research Act, which has so far made possible several hundred consortia throughout the country. And there is now in Congress a bill to allow cooperative production. I often think that if this bill had existed a few years ago, the manufacturing line of the Engineering Technology Associates System could have been formulated as a cooperative effort to share the cost in that very far-thinking effort. And so, if this act passes, companies will certainly be able to work together to benefit from economies of scale.

The federal government also now offers exclusive licenses to technology, particularly that developed within the government laboratories.


610

And in fact, we have within Commerce a large policy organization that welcomes suggestions from you on removing barriers. In some of our discussions, we heard of some things we do that sound dumb. We'd like to identify those things. We'd like you to come and tell us when you think the federal government is doing something foolish. We can try to change that—change the policy, change the laws. That's one of our functions.

And finally, that brings me to technology policy. Many agencies, as you know, have identified their "critical technologies." The Department of Defense has done so. The Department of Commerce published something that they called "Emerging Technologies."

And as a result of legislation initiated by Senator Jeff Bingaman of New Mexico (a Session 1 presenter), we are now assembling a list of national critical technologies. The thought now is that there will be maybe 30 technologies listed and that the top 10 will be identified.

We also have under way through the Federal Coordinating Committee on Science, Engineering, and Technology an across-the-board exercise to actually inventory all the federal laboratories with regard to those national critical technologies.

And what we hope to do as a result of all of that is to bring together the relevant industrial and federal lab players in these critical technologies—much in the way in which you're brought together here to consider high-performance computing—to talk about priorities and establish a strategic plan—a five-year plan or longer.

The Technology Administration has also been given authority by Congress to award grants to industry for technology commercialization. I'm talking about the new Advanced Technology Program. Currently it is very small and very controversial, but it has the potential to become very large.

One of the important elements of this program is that it requires matching funds, so active involvement by industry is assured. You often hear that this program will support precompetitive and generic technologies, which are acceptable words in Washington now. Precompetitive, to me, means something that several companies are willing to work on together. Any time you have a consortium effort, almost by definition what they are working on is precompetitive.

All of these programs within the Technology Administration, and the Technology Administration itself, may be new, but I am highly optimistic that they will have a major impact.



Farewell

Senator Pete Domenici

Pete V. Domenici of New Mexico has served as a U.S. Senator since 1972. A central figure in the federal budget process, he is the ranking Republican on the Senate Budget Committee and was his party's Senate Coordinator at the President's 1990 Budget Summit. As a leader in formulating government science and technology policy, he authored the original technology-transfer bill, enacted in 1989. This ground-breaking piece of legislation strengthens the relationship between the national laboratories and the private sector. He also played a key role in the drafting of the 1990 Clean Air Act. The Senator, a native New Mexican, has garnered numerous awards during his distinguished political career, including the 1988 Outstanding Performance in Congress Award from the National League of Cities, the 1989 Public Service Award from the Society for American Archeology, and the 1990 Energy Leadership Award from Americans for Energy Independence.

I have been proud to consider myself a representative of the science community during my years in the Senate, and I have been proud to count among my constituents such institutions as the national laboratories, the National Science Foundation (NSF), and the National Aeronautics and Space Administration (NASA). So let me express, by way of opening these remarks, my profound admiration for the principles and the promise that the members of this audience embody.



You have all heard enough during these proceedings about jurisdictional disputes in Washington. You have heard how these disputes get bound up with policy decisions concerning the scope and accessibility of the supercomputer environment being developed in America—how they affect whether we discuss supercomputers per se or just increased computing capacity of a high-sensitivity nature. But these substantive issues are too important to be mired in turf battles, not only back in Washington but here at this conference as well. To those of you arguing over NSF versus NASA versus the Department of Energy, let me suggest that we have to pull together if the optimum computing environment is to be established.

But I want to turn from the more immediate questions to a more fundamental one. And I will preface what I am about to say by emphasizing that I have no formal scientific training apart from my 1954 degree from the University of New Mexico, which prepared me to teach junior-high math and chemistry. Everything I have subsequently learned about high-performance computing I have learned by interacting informally with people like you. At least I can say I had the best teachers in the world.

It is obvious that this is the age of scientific breakthrough. There has never been anything quite like this phenomenon. We will see more breakthroughs in the next 20 to 30 years, and you know that. Computers are the reason. When you add computing capabilities to the human mind, you exponentially increase the mind's capacity to solve problems, to tease out the truth, to get to the very bottom of things. That's exciting!

So exciting, in fact, that the realization of what American science is poised to accomplish played a major role in my decision to run for the Senate again in 1990. I wanted to be in on those accomplishments. American science is a key component in the emerging primacy of American ideals of governance in a world where democracy is "breaking out" all over. Oh, what a wonderful challenge!

Of course, while the primacy of our ideals may not be contested, our commercial competitiveness most certainly is. In the years just ahead, the United States is going to be conducting business in what is already a global marketplace—trying to keep its gross national product growing predictably and steadily in the company of other nations that want to do the very same thing. That's nothing new. Since World War II, other nations have been learning from us, and some have even pulled ahead of us in everything except science. In that field, we are still the envy of the world.



I submit to you that few things are more crucial to maintaining our overall competitive edge than exploiting our lead in computing. Some members of Congress are focusing on the importance of computers and computing networks in education and academia. They have my wholehearted support. You all know of my continuing commitment to science education. But I, for one, tend to focus on the bottom line: how American business will realize the potential of computers in general and supercomputers in particular. What is it, after all, that permits us to do all the good things we do if not the strength of our economy? Unless business and industry are in on the ground floor, the dividends of supercomputing will not soon accrue in people's daily lives, where it really counts.

Let me tell you a little bit about doing science in New Mexico. Back in the early 1980s, believe it or not, a group of business leaders from all over the state, together with scientists from Los Alamos National Laboratory and Sandia National Laboratories, recommended that we establish a network in New Mexico to tie our major laboratories, our academic institutions, and our business community together. We now have such an entity, called Technet.

We encouraged Ma Bell to accelerate putting a new cable through the state; we told them that if they didn't do it, we would. And sure enough, they decided they ought to do it. That helped. The cable went in three or four years ahead of time, so we were able to rent a piece of it.

Now, it is not a supercomputer network, but it's linking a broad base of users together, and the potential for growth in the service is enormous.

You know, everyone says that government ought to be compassionate. Yes, I too think government ought to be compassionate, and I think the best way for government to be compassionate is to make sure that the American economy is growing steadily, with low inflation. That is the most compassionate activity and the most compassionate goal of government. When the economy works, between 65 and 80 per cent of the American people are taken care of day by day: they do their own thing, they have jobs, businesses succeed, make money, grow, and invest. That is probably government's most important function—marshaling resources to guarantee sustained productivity.

Every individual in this room and every institution represented here, as players in the proposed supercomputing network, has an opportunity to be compassionate, because you have an opportunity to dramatically improve the lives of the people of this great nation. You can do it by improving access to education, certainly. But mostly, in my opinion, you can do it by keeping your eye on the bottom line and doing whatever is necessary and realistic to help industry benefit from high-performance computing.

You all know that there are people in Congress who are attracted to supercomputers because they're high tech—they're, well, neat. The truth is, we will have succeeded in our goal of improving the lives of the American people when supercomputers are finally seen as mundane, when they're no longer high tech or neat because they have become commonplace in the factory, in the school, and in the laboratory.

It's a great experiment on which we are embarking. To succeed, we'll have to get the federal government to work closely with the private sector, although there are some who will instantly object. I will not be one of them. I think it's good, solid synergism that is apt to produce far better results than if those entities were going it alone. It would be a shame if we, as the world leader in this technology, could not make a marriage of government and business work to improve the lives of our people in a measurable way.

The rest of the world will not wait around to see how our experiment works out before they jump in. Our competitors understand perfectly well the kind of prize that will go to whoever wins the R&D and marketing race. The United States can take credit for inventing this technology, and we are the acknowledged leaders in it. I say the prize is ours—but that we'll lose it if we drop the ball.

We Americans have a lot to be proud of as we survey a world moving steadily toward democracy and capitalism. Our values and our vision have prevailed. Now we must ensure that the economic system the rest of the world wants to emulate remains vibrant, growing, and prosperous. The network contemplated by those of us gathered here, the supercomputing community, would most certainly contribute to the ongoing success of that system. I hope we're equal to the task. I know we are. So let's get on with it.


