
13—
INDUSTRY PERSPECTIVE:
POLICY AND ECONOMICS FOR HIGH-PERFORMANCE COMPUTING

Panelists in this session presented a general discussion of where the U.S. high-performance computing industry stands and how and why it got there. Topics included government helps and hindrances, competitive issues, financing and venture capital problems, and future needs.

Session Chair

Robert White,
Department of Commerce



Why Supercomputing Matters:
An Analysis of the Economic Impact of the Proposed Federal High Performance Computing Initiative

George Lindamood

George E. Lindamood is Vice President and Director of High-Performance Computing at Gartner Group, Inc., Stamford, Connecticut. He received his B.S., magna cum laude, in mathematics and physics from Wittenberg University and his M.A. in mathematics from the University of Maryland. He has more than 30 years of experience in the computer field, spanning academia, government, and industry, in activities ranging from research and development to international trade negotiations.

Introduction

On September 8, 1989, the Office of Science and Technology Policy (OSTP) published a report proposing a five-year, $1.9 billion federal High Performance Computing Initiative (HPCI). The goals of this program are to

• maintain and extend U.S. leadership in high-performance computing and encourage U.S. sources of production;

• encourage innovation in high-performance computing by increasing its diffusion and assimilation into the U.S. science and engineering communities; and

• support U.S. economic competitiveness and productivity through greater utilization of networked high-performance computing in analysis, design, and manufacturing.



In response to a Congressional request, OSTP and the Department of Energy, acting through Los Alamos National Laboratory, engaged Gartner Group, Inc., to develop a quantitative assessment of the likely economic impact of the proposed HPCI program over the coming decade. This study is proceeding in two phases.

In Phase I, which was completed in July 1990, two alternative scenarios (A and B), both depicting supercomputing through the year 2000, were developed. One scenario assumes full funding for the proposed HPCI program that would commence in FY 1992. The other scenario assumes "business as usual," that is, no additional federal funding above what is expected for HPCI-related activities now under way.

Phase II, which is the more important phase, is scheduled for completion in September 1990. In Phase II, the two scenarios are extended to encompass the impact of the HPCI program, first upon selected industrial segments that are the major users of supercomputers and then upon the U.S. economy as a whole.

I will summarize the results of Phase I and describe the methodology employed in Phase II.

Phase I Methodology

During Phase I, two scenarios were developed. Scenario A assumes that current levels of HPCI funding will remain constant. Scenario B assumes full HPCI support, thereby changing the rate and direction of supercomputer development and utilization.

Scenario A

Our projection of the future of supercomputing is rooted in our understanding of the past, not only of supercomputing but also of other elements of the information industry. Over the last three years, we have developed a quantitative model that characterizes the information industry in terms of MIPS (millions of instructions per second), systems, and dollars for various classes of systems—mainframes, minicomputers, personal computers, etc., as well as the components of these systems, such as CPUs, peripherals, and software. Both the methodology and the results of this model have been applied in the development of the two alternative scenarios for the coming decade in supercomputing.

Basically, the model assumes that technology is the driver of demand because it is the principal determinant of both the overall performance and the price/performance of various types of information systems. Hence, future projections are based on anticipated technological advances, interpreted through our understanding of the effects of similar advances in the past and the changing competitive conditions in the industry. Industry revenues are derived from these projections of price/performance, MIPS, and systems shipments, using average system price and average MIPS per system as "reasonability" checks. Historically, the model also reflects macroeconomic cycles that have affected overall demand, but there has been no attempt to incorporate macroeconomic forecasts into the projections.
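
As a rough illustration of that "reasonability" check, the following minimal sketch (in Python; every number in it is a hypothetical placeholder, not a value from the Gartner model) derives one year of revenue for a single system class from projected MFLOPS shipments and price/performance, then compares it with the figure implied by unit shipments and an independently assumed average system price:

    # Minimal sketch of the "reasonability" check described above.
    # All numbers are hypothetical placeholders, not values from the Gartner model.

    mflops_shipped   = 1_500_000   # projected MFLOPS shipped in one year
    price_per_mflops = 1_800       # projected dollars per MFLOPS
    systems_shipped  = 110         # projected systems shipped in the same year
    avg_system_price = 25e6        # independently assumed average system price, dollars

    revenue_primary = mflops_shipped * price_per_mflops   # main derivation
    revenue_check   = systems_shipped * avg_system_price  # cross-check

    gap = abs(revenue_primary - revenue_check) / revenue_primary
    print(f"primary: ${revenue_primary / 1e9:.2f}B, "
          f"check: ${revenue_check / 1e9:.2f}B, gap: {gap:.1%}")

    # A large gap would mean that the assumed price/performance, average system
    # size, and average system price are mutually inconsistent.
    if gap > 0.10:
        print("warning: projections fail the reasonability check")

A large disagreement between the two figures signals that the underlying assumptions need to be revisited before the projection is used.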

Assumption 1. In modeling the supercomputer industry, we have assumed that supercomputer systems can be aggregated into three classes:

• U.S.-made vector supercomputers, such as those from Cray Research, Inc., Cray Computer Corporation, and (in the past) Control Data Corporation and Engineering Technology Associates Systems;

• Japanese-made vector supercomputers, such as those marketed by Nippon Electric Corporation, Hitachi, and Fujitsu; and

• parallel supercomputers, such as those made by Intel and Thinking Machines Corporation.

We assume that the price, performance, and price/performance characteristics of systems in each of these classes should be sufficiently uniform that we do not have to go into the details of individual vendors and models (although these are present in the "supporting layers" of our analysis). We do not anticipate any future European participation in vector supercomputers, but we assume no restrictions on the nationality of future vendors of the third class of systems.

Assumption 2. For our base scenario, we assume that the signs of maturity that have been observed in the market for vector supercomputers since 1988 will become even more evident in the 1990s after the current generation of Japanese supercomputers and the next generation of U.S. vector supercomputers—the C90 from Cray Research and the CRAY-3 (and -4?) from Cray Computer—have had their day.

Assumption 3. For parallel systems, however, we assume that the recent successes in certain applications will expand to other areas once the technical difficulties with programming and algorithms are overcome. When that happens, use of parallel systems will increase significantly, somewhat displacing vector systems—at least as the platform of choice for new applications—because of superior overall performance and price/performance. Compound annual growth rates for installed MFLOPS (millions of floating-point operations per second) through the year 2000 are shown in Table 1.


 

Table 1. Compound Annual Growth Rate for Installed MFLOPS

                                   1980–84   1985–89   1990–94   1995–99
    U.S. Vector Supercomputers       64%       45%       34%       23%
    Japanese Vector Supercomputers    --       86%       63%       39%
    Parallel Supercomputers           --       80%       66%       71%

Assumption 4. We also assume that the price/performance of U.S.-made vector supercomputers will continue to improve (that is, the price per MFLOPS will continue to decline) at historical rates of about 15 per cent per year and that the price/performance improvement of Japanese-made vector systems will gradually moderate from 30+ per cent per year to 15 per cent per year by the year 2000. For parallel systems, we assume an accelerating improvement in price/performance, reaching 20 per cent per year by the year 2000, as a result of increasing R&D in this area.

Assumption 5. Despite the decrease in price per MFLOPS, average prices for supercomputer systems have actually increased a few percentage points per year historically. The reason, of course, is that average system size has grown significantly, especially because of expanded use of multiprocessing. We assume that these trends will continue for vector systems, albeit at a slowed rate of increase after 1995, because of anticipated difficulties in scaling up these systems to ever-higher levels of parallelism. For parallel systems, technological advances should lead to accelerated growth rates in processing power, resulting in systems capable of one-TFLOPS sustained performance by the year 2000. Growth rate percentages for average MFLOPS per system are shown in Table 2.

Assumption 6. Finally, we assume that retirement rates for all classes of supercomputer systems will follow historical patterns exhibited by U.S.-made vector systems.

 

Table 2. Compound Annual Growth Rate for Average MFLOPS/System

                                   1980–84   1985–89   1990–94   1995–99
    U.S. Vector Supercomputers       21%       23%       27%       22%
    Japanese Vector Supercomputers    --       48%       30%       24%
    Parallel Supercomputers           --        --       60%       42%
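
To illustrate how these assumptions combine into a projection, the following minimal sketch compounds the installed-MFLOPS growth rates of Table 1 forward from a 1990 base. The growth rates are taken from Table 1; the split of the roughly 1.4 million installed MFLOPS of 1990 across the three classes is our illustrative guess, not a figure from the study:

    # Minimal sketch: compound the Table 1 growth rates forward from 1990.
    # The class-by-class 1990 base is an illustrative guess; only the growth
    # rates (Table 1) and the approximate 1.4 million MFLOPS 1990 total come
    # from the text.

    CAGR_INSTALLED = {                 # Table 1: (1990-94 rate, 1995-99 rate)
        "U.S. vector":     (0.34, 0.23),
        "Japanese vector": (0.63, 0.39),
        "Parallel":        (0.66, 0.71),
    }
    BASE_1990_MFLOPS = {               # illustrative split of ~1.4 million MFLOPS
        "U.S. vector": 300_000,
        "Japanese vector": 230_000,
        "Parallel": 870_000,
    }

    def compound(base, rates, years=10):
        """Grow `base` at rates[0] for the first five years, rates[1] thereafter."""
        value = base
        for year in range(years):
            value *= 1 + (rates[0] if year < 5 else rates[1])
        return value

    projected = {cls: compound(BASE_1990_MFLOPS[cls], CAGR_INSTALLED[cls])
                 for cls in CAGR_INSTALLED}
    total = sum(projected.values())
    for cls, mflops in projected.items():
        print(f"{cls:16s} {mflops:14,.0f} MFLOPS ({mflops / total:.0%})")
    print(f"{'Total':16s} {total:14,.0f} MFLOPS installed in 2000")

With this illustrative split, the arithmetic lands near the figures quoted below for Scenario A: on the order of 175 million installed MFLOPS in 2000, roughly 90 per cent of it in parallel systems.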



These assumptions are sufficient to generate a projection of supercomputer demand for the next 10 years:

• The number of installed systems will more than triple by the year 2000. Table 3 shows how this installed base will be divided, as compared with today. The number of supercomputers installed in Japan will exceed the number installed in the U.S. after 1996.

• Installed supercomputer power (measured in peak MFLOPS) will increase more than 125-fold over the next decade, from almost 1.4 million MFLOPS in 1990 to over 175 million MFLOPS in 2000. (However, this is substantially less than the growth rate in the 1980s—from about 4000 MFLOPS in 1980 to 340 times that amount in 1990.) Of the MFLOPS installed in 2000, 90 per cent will be parallel supercomputers, two per cent will be U.S.-made systems, and eight per cent will be Japanese-made systems.

• As shown in Table 4, the "average" vector supercomputer will increase about 10 times in processing power, whereas the "average" parallel system will increase about 60 times over the decade. Average supercomputer price/performance will improve by a factor of 25, mostly as a result of increased usage of parallel systems that have more than 10 times better price/performance than vector systems.

 

Table 3. Growth in Supercomputer Demand, 1990–2000

                                         1990          2000
    Source
      U.S. Vector Supercomputers      347 (57%)     640 (34%)
      Japanese Vector Supercomputers  183 (30%)     669 (36%)
      Parallel Supercomputers          81 (13%)     552 (30%)
    User
      Government                      174 (28%)     463 (25%)
      Academia                        130 (21%)     402 (22%)
      Industry                        250 (41%)     833 (45%)
      In-House                         57 (9%)      163 (9%)
    Installation Site
      U.S.                            301 (49%)     683 (37%)
      Europe                          115 (19%)     345 (19%)
      Japan                           174 (28%)     768 (41%)
      Other                            21 (3%)       65 (3%)
    Total Installations                   611          1861


Table 4. Supercomputer Power, Scenario A

                                          U.S. Vector      Japanese Vector       Parallel
                                        Supercomputers     Supercomputers     Supercomputers
    Average System Price (Millions)          $24.8              $16.4              $35.3
    Average System Power (Peak GFLOPS)        12.0               38.5                630
    Price per MFLOPS                         $2000               $425                $56

• Annual revenues for vector supercomputers will peak at just under $3 billion in 1998. Revenues for parallel systems will continue to grow, surpassing those for vector systems by 1999 and exceeding $3.1 billion by 2000.

Scenario B

For this scenario, we assume that the federal HPCI program will change both the rate and the direction of high-performance computing (HPC) development and utilization.

Assumption 1. As in Scenario A, supercomputers are grouped into three classes in Scenario B:

• U.S.-made vector supercomputers,

• Japanese-made vector supercomputers, and

• parallel supercomputers.

Assumptions 2 and 3. We assume that demand for supercomputer systems of both the vector and parallel varieties will be increased by the HPCI program components concerned with the evaluation of early systems and with high-performance computing research centers. All funding for early evaluation ($137 million over five years) will go toward the purchase of parallel supercomputers, whereas funding for research centers ($201 million over five years) will be used for U.S.-made vector and parallel supercomputers, tending more to the latter over time. We also assume that federal funding in these areas will precipitate increased state government expenditures, although at lower levels. Although all of these systems would be installed in academic and government facilities (primarily the former), we also postulate in Scenario B that the technology transfer components of HPCI would succeed in stimulating industrial demand for supercomputer systems. Here, the emphasis will be more on U.S.-made vector systems in the near term, although parallel systems will also gain popularity in the industrial sector in the late 1990s as a result of academic and government laboratory developmental efforts supported by HPCI.

Assumption 4. This increased demand and intensified development will also affect the price/performance of supercomputer systems. For U.S.-made vector systems, we conservatively assume that price/performance will improve one percentage point faster than the rates used in Scenario A. For parallel supercomputers, we assume that price/performance improvement will gradually approach levels typical of microprocessor chips and RISC technology (that is, 30+ per cent per year) by the year 2000.

Assumption 5. The increased R&D stimulated by HPCI should also result in significantly more powerful parallel supercomputers, namely, a TFLOPS system by about 1996. However, we do not assume any change in processing power for vector supercomputers, as compared with Scenario A, because we expect that HPCI will have little effect on hardware development for such systems. (This is distinct, however, from R&D into the use of and algorithms for vector systems, which definitely will be addressed by HPCI.)

Assumption 6. We assume that retirement rates for supercomputer systems of all types will be the same as in Scenario A.

As before, these assumptions are sufficient to generate a projection of supercomputer demand for the next 10 years:

• The number of installed supercomputers will approach 2200 systems by the year 2000. Table 5 shows how this installed base will be divided, as compared with Scenario A.

• Particularly noteworthy is the difference between these two scenarios in terms of U.S. standing relative to Japan. In Scenario A, Japan takes the lead in installed supercomputers, but in Scenario B, the U.S. retains the lead.

• Installed supercomputer power (measured in peak MFLOPS) will be increased by a factor of more than 300, to over 440 million MFLOPS, by the year 2000 (which is slightly less than the rate of growth in the 1980s). Of the MFLOPS installed in 2000, 96 per cent will be parallel supercomputers, one per cent will be U.S.-made vector supercomputers, and three per cent will be Japanese-made vector supercomputers.


 

Table 5. Supercomputer Installations in the Year 2000, by Scenario

                                      Scenario A     Scenario B
    Source
      U.S. Vector Supercomputers      640 (34%)      754 (35%)
      Japanese Vector Supercomputers  669 (36%)      669 (31%)
      Parallel Supercomputers         552 (30%)      750 (34%)
    User
      Government                      463 (25%)      488 (22%)
      Academia                        402 (22%)      518 (24%)
      Industry                        833 (45%)      984 (45%)
      In-House                        163 (9%)       183 (8%)
    Installation Site
      U.S.                            683 (37%)      995 (46%)
      Europe                          345 (19%)      345 (16%)
      Japan                           768 (41%)      768 (35%)
      Other                            65 (3%)        65 (3%)
    Total Installations                   1861           2173

• As shown in Table 6, the "average" vector supercomputer will increase about 10 times in processing power, whereas the "average" parallel system will increase nearly 125-fold over the decade. Average supercomputer price/performance will improve by a factor of 55.

• Annual revenues for vector supercomputers will peak at just over $3 billion in 1998. Revenues for parallel systems will continue to grow, surpassing those for vector systems by 1997 and exceeding $5 billion in 2000.

The differences between Scenarios A and B, as seen by the supercomputer industry, are as follows (a brief arithmetic check appears after the list):

• 17 per cent more systems installed;

• almost three times as many peak MFLOPS shipped and two and one-half times as many MFLOPS installed in 2000;

• 39 per cent greater revenues in the year 2000—an $8 billion industry (Scenario B) as opposed to a $5 billion industry (Scenario A); and

• $10.4 billion more supercomputer revenues for the 1990–2000 decade.
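
The first two headline figures follow directly from the totals in Tables 3 and 5 and from the installed-power projections quoted earlier; the short sketch below simply redoes that arithmetic:

    # Quick arithmetic check of the headline differences between the scenarios,
    # using only figures quoted in the text and in Tables 3 and 5.

    installs_a, installs_b = 1861, 2173        # total installations in 2000
    mflops_a, mflops_b = 175e6, 440e6          # installed peak MFLOPS in 2000

    print(f"additional systems installed: {installs_b / installs_a - 1:.0%}")  # about 17%
    print(f"installed MFLOPS ratio:       {mflops_b / mflops_a:.1f}x")         # about 2.5x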

In addition to these differences for supercomputers, HPCI would cause commensurate increases in revenues and usage for minisupercomputers, high-performance workstations, networks, software, systems integration and management, etc. However, the largest payoff is expected to come from enhanced applications of high-performance computing.


Table 6. Supercomputer Power, Scenario B

                                          U.S. Vector      Japanese Vector       Parallel
                                        Supercomputers     Supercomputers     Supercomputers
    Average System Price (Millions)          $22.1              $16.4              $37.9
    Average System Power (Peak GFLOPS)        12.0               38.5              1,300
    Price per MFLOPS                         $1840               $425                $29

Phase II Methodology

To estimate the overall economic benefit of HPCI, we have sought the counsel of major supercomputer users in five industrial sectors representing a variety of experience and sophistication:

• aerospace,

• chemicals,

• electronics,

• oil and gas exploration and production, and

• pharmaceuticals.

Our assumption is that supercomputers find their primary usage in R&D, as an adjunct to and partial replacement for laboratory or field experimentation and testing—for example, simulating the collision of a vehicle into a barrier instead of actually crashing thousands of new cars into brick walls. Hence, high-performance computing enables companies to bring more and better new products to market and to bring them to market faster.

In other words, high-performance computing improves R&D productivity. Even if there were no other benefit, this gain in R&D productivity affects overall company productivity in direct proportion to the share of total expenditures devoted to R&D, which provides a way to determine a conservative estimate of productivity improvement, as shown by the following steps (a sketch of the arithmetic follows the list):

• Scenarios A and B are presented to company R&D managers, who are then asked to give estimates, based on their expertise and experience, of the change in R&D productivity over the coming decade.



• For both scenarios, these estimates are translated into overall productivity projections by using the ratio of R&D spending to total spending taken from the company's annual report.

• Productivity projections for several companies in the same industrial sector are combined, with weightings based on relative revenues, to obtain overall Scenario A and B projections for the five industrial sectors identified above. At this point, projections for other industrial sectors may be made on the basis of whatever insights and confidence have been gained in this process.
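
A minimal sketch of this calculation is given below. Every company name, productivity estimate, R&D spending share, and revenue figure is a hypothetical placeholder; only the structure of the computation follows the steps just described:

    # Minimal sketch of the conservative productivity estimate described above.
    # All inputs are hypothetical placeholders, not data from the study.

    companies = {
        # name:     R&D productivity gain (A, B), R&D share of spending, revenue in $B
        "AeroCo":   {"rd_gain": (0.10, 0.25), "rd_share": 0.12, "revenue": 12.0},
        "JetWorks": {"rd_gain": (0.08, 0.20), "rd_share": 0.09, "revenue": 7.5},
        "Orbital":  {"rd_gain": (0.12, 0.30), "rd_share": 0.15, "revenue": 3.0},
    }

    def sector_gain(companies, scenario):
        """Revenue-weighted overall productivity gain for one industrial sector."""
        total_revenue = sum(c["revenue"] for c in companies.values())
        gain = 0.0
        for c in companies.values():
            # R&D productivity gain scaled by the R&D share of total spending
            # gives the (conservative) overall company productivity gain.
            overall = c["rd_gain"][scenario] * c["rd_share"]
            gain += overall * c["revenue"] / total_revenue
        return gain

    gain_a = sector_gain(companies, 0)   # Scenario A ("business as usual")
    gain_b = sector_gain(companies, 1)   # Scenario B (full HPCI funding)
    print(f"sector productivity gain: A = {gain_a:.2%}, B = {gain_b:.2%}")
    print(f"incremental gain attributable to HPCI: {gain_b - gain_a:.2%}")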

These productivity projections are interesting in and of themselves, but we do not intend to stop there. Rather, we plan to use them to drive an input/output econometric model that will then predict the 10-year change in gross national product (GNP) under Scenarios A and B. By subtracting the GNP prediction for Scenario A from that for Scenario B, we expect to obtain a single number, or a range, that represents the potential 10-year payoff from investing $1.9 billion of the taxpayers' money in the federal HPCI program.



Government As Buyer and Leader

Neil Davenport

Neil Davenport is the former President and CEO of Cray Computer Corporation. For more complete biographical information, see his presentation in Session 3.

The market for very high-performance supercomputers remains relatively small. Arguably, the market worldwide in 1990 was not much more than $1 billion. In the 1990s, the development of a machine to satisfy this market—certainly to get a viable market share—requires the development of components, as well as of the machine itself. The marketplace presented to component manufacturers by suppliers of supercomputers is simply not large enough to attract investment necessary for the production of very fast next-generation logic and memory parts. High performance means high development costs and high price. This is a far cry from the days of development of the CRAY-1, when standard logic and memory components were put together in innovative packaging to produce the world's fastest computer.

The market for very large machines is small. It would clearly be helpful if there were no inhibitions to market growth. An easier climate for export of such technology would help the manufacturers. This is a small aspect of a general preference for free and open competition, which would give better value to the buyer.

Government remains the largest customer throughout the world, without whom there would probably not be a supercomputer industry. It is very important that government continue to buy and use supercomputers and, in so doing, direct the efforts of the manufacturers of supercomputers. The world market is so small that it clearly cannot sustain a large number of competitors, given the high cost of entry and maintenance. The essential element for success in the supercomputing business is that there be a reasonable size of market that is looking for increased performance and increased value. In this way, the survival of the fittest can be assured, if not the survival of all.



Concerns about Policies and Economics for High-Performance Computing

Steven J. Wallach

Steven J. Wallach is Senior Vice President of Technology of CONVEX Computer Corporation. For more complete biographical information, see his presentation in Session 3.

First, I would like to "take a snapshot" of the state of the supercomputing industry. I think today we certainly hold the leadership role: the U.S. is the world leader in supercomputing. One of the key areas that people do not often talk enough about is application leadership. Even when you go to Japan to use a Japanese supercomputer, more than likely the application was developed in the United States, not in Japan. I do not think that I have heard this mentioned in other presentations, but this is a very, very important point. He who has the applications ultimately wins in this business. Also, we are establishing worldwide standards. If the Japanese or Europeans build a new supercomputer, they tend to follow what we are doing, as opposed to trying to establish new standards.

Those are some positive points. What are some of the negatives? Most of the semiconductor technology of today's supercomputers is based on Japanese technology. That is a problem because it is something that we do not necessarily have under our control.

What scares me the most is that for all U.S. supercomputing companies, other than IBM, supercomputers are their only business. They cannot afford to fund efforts for market share over a three-to-five-year period. For the Japanese companies—Hitachi, Fujitsu, and Nippon Electric Corporation—the supercomputer business is a very small percentage of their overall business, and they are multibillion-dollar-a-year companies. If they chose to sell every machine at cost for the next five years, you would not even see a dent in the profit-and-loss statements of the Japanese companies. Personally, this is what scares me more than anything in competing against the Japanese.

In contrast, how does the U.S. work? We have venture capital. Some people call it "vulture" capital. When a product is very successful and makes a lot of money for its creators and inventors, that one success tends to bring about many, many "clones" that want to cash in on the market. We can go back to the late 1960s, when the minicomputer market started and we had Digital Equipment Corporation and 15 other companies; yet the only companies that really grew out of that boom and are still around are Data General and Prime. Almost everyone else went out of business.

The problem is that five companies cannot each have 40 per cent of the market, so there is a shakeout. This happened in the minicomputer business, the tandem business, and the workstation business; it certainly happened in the midrange supercomputer business.

Now let us take a look at government policy. Typically, the revenue of most companies today is approximately 50 per cent U.S. and 50 per cent international. This is true for almost every major U.S. manufacturer. At CONVEX Computer Corporation, we are actually 45 per cent U.S. (and 35 per cent Europe and five per cent other), but it is always surprising that 15 per cent of our revenue is in Japan. We have not found any barriers to selling our machines in Japan; some of our largest customers are in Japan.

In five years, when you buy a U.S.-made high-definition television (HDTV), it probably will have been simulated on a CONVEX in Japan. We have over 100 installations and literally zero barriers. The only barrier that we have come across was at a prestigious Japanese university that said, "If you want to sell a machine to us, that's great; we'll buy it. But when we have a 20-year relationship with a Japanese company, we typically pay cost. If you want to sell us your machine at cost, we will consider it." Now, if that is a barrier, then so be it. But personally, I say we have had no barriers whatsoever.

U.S. consumption, from CONVEX's viewpoint, is anywhere from 30 per cent to 50 per cent and is affected by the U.S. government directly or indirectly—directly when the Department of Defense (DoD) buys a machine and indirectly when an aerospace contractor buys a machine based on a government grant. From an international viewpoint, our export policy, of course, is controlled. The policy is affected by U.S. export laws like those promulgated in accordance with the Coordinating Committee on Export Controls (COCOM), especially with respect to non-COCOM countries, such as Korea, Taiwan, and Israel. The key is that we now have competition from countries that are not under our control (such as Germany, Britain, and France), where there are new developments in supercomputers. If we were to try to export one of these machines, the export would be precluded. So I think we are losing control because of our export policies.

In the current state of government policy, government spending impacts revenues and growth. For companies like CONVEX, the effect of government money tends to be felt through the early adopters (universities, national laboratories, etc.). These institutions buy the first machines and take the risk because the risk is on government money. Sometimes proving something does not work is as significant a contribution as proving something does work, because if you can prove it does not work, then someone else does not have to go down that path.

The other thing we find that helps us is long-term contracts. That is, buyers will commit to a three- or four-year contract with the government helping, via the Defense Advanced Research Projects Agency (for example, through the Thinking Machines Corporation Connection Machine and the Touchstone project) and the NSF centers. The NSF centers have absolutely helped a company like CONVEX because they educated the world in the use of supercomputers.

One of the reasons we do very well in Japan is because Japanese business managers ask their engineers why they are not using a supercomputer, not the other way around. So we are received with open arms, as opposed to reluctance.

So where are we going? The term I hear today more and more is COTS, commercial off-the-shelf, especially in DoD procurements. Also, I think we have totally underestimated Taiwan, Korea, Hong Kong, and Singapore. Realistically, we have to worry about Korea and Taiwan. I have traveled extensively in these countries, and I would worry more about them than I would about the SUPRENUM and similar efforts.

The thing that worries me is that we Americans compete with each other "to the death" among our companies. Can the U.S. survivors have enough left to survive the foreign competition?

Another concern I have is about the third-party software suppliers. Will these suppliers begin to reduce the number of different platforms they support?

My last concern is whether the U.S. capital investment environment can be changed to be more competitive. In Japan, if a company has money in the bank, it invests some extra money to diversify its base. In Japan, the price of stock goes up when a company explains how it is investing money for long-term benefit; because a lot of the stock is owned by these banking groups, earnings might be depressed for two years until the investments show a profit. By contrast, in U.S. companies, we live quarter to quarter. If you blow one quarter, your epitaph is being written.

So what am I encouraging? I think we have to have changes in the financial infrastructure. Over half the market is outside the U.S., and we have no control—U.S. dumping laws mean nothing if the Japanese want to acquire our market share in Germany. So I think somehow we have to address that issue. My experience with Japanese companies is that in Germany they will bid one deutsche mark if they have to; in Holland, one guilder; but they will never lose a deal based on price. It can be a $20-million machine, but if they want to make that sale, they will not lose it on price. So, we must deal with the fact that U.S. dumping laws affect less than 50 per cent of the market.

One last thing in terms of export control: I personally think we should export our technology as fast as we can and make everyone dependent on us so that the other countries do not have a chance to build it up. One of the reasons we do not have a consumer electronics industry now is that the Japanese put the U.S. consumer electronics industry out of business. Now they are in control of us because we cannot build anything. The same thing, potentially, is true with HDTV. We should export it; let others be totally dependent on us, and then we will actually have more control because other countries will have to come to us.



High-Performance Computing in the 1990s

Sheryl L. Handler

After receiving her Ph.D. from Harvard University, Sheryl L. Handler founded PACE/CRUX, a domestic and international economic development consulting firm. Clients ranged from biotechnology and telecommunications companies to the World Bank, the U.S. State Department, and numerous other agencies and companies. She was President of PACE/CRUX for 12 years.

In June 1983, Dr. Handler founded Thinking Machines Corporation and within three years introduced the first massively parallel high-performance computer system, the Connection Machine supercomputer. The Connection Machine was the pioneer in a new generation of advanced computing and has become the fastest and most cost-effective computer for large, data-intensive problems on the market. Thinking Machines is now the second largest supercomputer manufacturer in America.

Supercomputing has come to symbolize the leading edge of computer technology. With the recognition of its importance, supercomputing has been put on the national agenda in Japan and Europe. Those countries are actively vying to become the best. But a national goal in supercomputing has not yet been articulated for America, which means that all the necessary players—the designers, government laboratories, software developers, students, and corporations—will not be inspired to direct their energies toward a big and common goal. This is potentially dangerous.



Supercomputing represents the ability to dream and to execute with precision. What is a country without dreams? What is a country without the ability to execute its own ideas better than anyone else?

What drew me into the supercomputing industry was an awe, an almost kid-like fascination with it. Supercomputing is a tool that allows you to

contemplate huge and complex topics
or zero in on the smallest details
while adjusting the meter of time
or the dimensions of space.
Or with equal ease, to build up big things
or to take them apart.

To me, there is poetry in our business and a big business in this poetry. In addition, supercomputing is now getting sexy. I recently saw a film generated on the Connection Machine[*] system that was really sensual. The color, shapes, and movement had their own sense of life.

It is very important to this country to take the steps to be the leader in this field, both in the present and in the future. How can we do this? In short, we must be bold: set our sights high and be determined and resourceful in how we get there. Big steps must be taken. But the question is, by whom?

Some look at our industry and economic structure and say that big steps are not possible. Innovation requires new companies, which require venture capitalists, which require quick paybacks, which only leaves time for small, incremental improvements, etc. In fact, big steps can be taken, and taken successfully, if one has the will and the determination.

Sometimes these big steps can be taken by a single organization. At other times, it requires collective action. I would like to look at an example of each.

It is generally agreed that the development of the Connection Machine supercomputer was a big step. Some argued at the time that it was a step in the wrong direction, but all agreed that it was a bold and decisive step. How did such a product come about? It came about because the will to take a big step came before the product itself. We looked around us in the early 1980s and saw a lot of confusion and halfway steps. We didn't know what the answer was, but we were sure it wasn't the temporizing that we saw around us.



So we organized to get back to basics. We gathered the brightest people we could find and gave them only one request: find the right big step that needed to be taken. We needed people whose accomplishments were substantial, so their egos weren't dependent on everything being done their way.

In addition to Danny Hillis, we were fortunate to have two other prominent computer architects who had their own designs. Eventually even their enthusiasm for the Connection Machine became manifest, and then we knew we were onto something good. I thought of this initial phase of the company as building a team of dreamers.

Then we added another dimension to the company—we built a team of doers. And they were as good at "doing" things as the theorists were at dreaming. This team was headed by Dick Clayton, who had vast experience at Digital Equipment Corporation as Vice President of Engineering. His responsibilities ranged from building computers to running product lines. When he arrived at Thinking Machines, we put a sign on his door: "Vice President of Reality." And he was.

So we had the dreamers and the doers. Then there was a third phase—coupling the company to the customer in a fundamental way.

We built a world-class scientific team that was necessary to develop the new technology. But as you know, many companies keep a tight rein on R&D expenses. We viewed R&D as the fuel for growth, not just a necessary expense that had to be controlled.

We had a powerful opportunity here. Our scientific team became a natural bridge to link this new technology to potential customers. These scientists and engineers who had developed the Connection Machine supercomputer were eager to be close to customers to understand this technology from a different perspective and, therefore, more fully. Our early customers had a strong intuition that the Connection Machine system was right for them. But the ability to work hand-in-hand with the very people who had developed the technology enabled our users to get a jump on applications. As a result, our customers were able to buy more than just the product: they were buying the company. The strategy of closely coupling our scientists and our customers has become deeply embedded in the corporate structure. In fact, it is so important to us that we staff our customer support group with applications specialists who have advanced degrees.

So the creation of the Connection Machine supercomputer is an example of a big step that was taken by a single organization. In the years since, massively parallel supercomputers have become part of everyone's supercomputing plans. A heterogeneous environment, with vector supercomputers, massively parallel supercomputers, and workstations, is becoming the norm at the biggest, most aggressive centers.

And now another big step needs to be taken collectively by many of the players in the computer industry. Right now, there is no good way for a scientist to write a program that runs unchanged on all of these platforms. We have not institutionalized truly scalable languages, languages that allow code to move gracefully up and down the computing hierarchy. And the next generation of software is being held up as a result. (If you don't believe that this is a problem, let me ask you the following question: how many of you would be willing to install a meter on your supercomputers that displays the year in which the currently running code was originally written?)

How long can we wait until we give scientists and programmers a stable target environment that takes advantage of the very best hardware that the 1990s have to offer? We already know that such languages are possible.

Fortran 90 is an example. It is known to run efficiently on massively parallel supercomputers such as the Connection Machine computer. It is a close derivative of the Control Data Fortran that is known to run efficiently on vector supercomputers. It is known to run efficiently on coarse-grain parallel architectures, such as Alliant. And while it has no inherent advantages on serial workstations, it has no particular disadvantages, either.

Is Fortran 90 the right scalable language for the 1990s? We don't know that for sure, either. But it is proof that scalable languages are there to be had. Languages that operate efficiently across the range of hardware will be those that will be most used in the 1990s. It is hard for this step to come solely from the vendors. Computer manufacturers don't run mixed shops. My company does not run any Crays, and, to the best of my knowledge, John Rollwagen doesn't run any Connection Machine systems. But many shops run both.

So there is a step to be taken. A big step. It will take will. It will take a clear understanding that things need to be better than they are today—and that they won't get better until the step gets taken. That is where we started with the Connection Machine computer, with the clear conviction that things weren't good enough and the determination to take a big step to make them good enough. And as the computer industry matures and problems emerge that affect a wide segment of the industry, we should come together. It works. I recommend it.



A High-Performance Computing Association to Help the Expanding Supercomputing Industry

Richard Bassin

Richard Bassin has been a pioneer in the development of relational database-management systems throughout Europe, having introduced this important new technology to a wide array of influential and successful international organizations. In 1988, Mr. Bassin joined nCUBE Corporation, a leading supplier of massively parallel computing systems, where he served as Vice President of Sales until April 1991. Starting in 1983, Mr. Bassin spent five years helping build the European Division of Oracle Corporation, where he was General Manager of National Accounts. During this time, Mr. Bassin also developed and was a featured speaker in an extremely successful series of relational database-management seminars, entitled Fourth Generation Environments for Business and Industry. This series is today a vigorously functioning institution and constitutes the worldwide standard as a forum for the exchange of information on innovations in database management. Before working in the database-management field, Mr. Bassin was a Technical Manager for Computer Sciences Corporation.

It is evident to me that there are a lot of people fighting for a very small supercomputer marketplace. It is a growing marketplace, but it is still not big enough. The number of vendors represented among the presenters in this session confirms a relatively small marketplace. If we are talking about a billion dollars, it's a relatively small marketplace.

We need to expand that marketplace if we're going to have strength in high-performance computing. I would say that calling it the High Performance Computing Initiative, as the government does, rather than a supercomputing initiative, is probably the better angle, because a lot of people already have a misconception of what supercomputing is.

But we need to expand because people need higher-performance computing. We need to expand it to a greater degree, especially in industry. Both vendors and users will see advantages from this expansion. Vendors will have the financial security to drive the R&D treadmill from which users benefit.

There has been a lot of discussion over the last few days about the foreign threat, be it Japanese, European, from the Pacific Rim, or otherwise. Again, if we expand the industry, as Steve Wallach suggests in his presentation, we have to go worldwide. We must not only be concerned about the billion dollars the government has made available to the community, but we must also look at the worldwide market and expand it. And we must expand within the national market, getting supercomputing into the hands of people who can benefit from it. There's not enough supercomputing or high-performance computing on Wall Street. Financial analysts, for instance, could use a lot of help. Maybe if it were available, they would not make some of their most disastrous miscalculations on what will go up and what will go down.

How do we strengthen that marketplace? How do we expand it? Well, in my view, there's a need for the vendors to get together and do it in concert—in, for example, a high-performance computing association, where the members are from the vendor community, both in hardware and in software. That organization, based in places like supercomputing centers, should represent the whole high-performance computing community and should work to expand the entire industry rather than address the needs of an individual vendor.

All too often, government is influenced by those most visible at the moment. If we had an association that would address the needs of the industry, that would probably be the best clearing-house that the government could have for getting to know what is going on and how the industry is expanding.

It would also provide an ideal clearing-house for users who are confused as to what's better for them and which area of high-performance computing best suits their needs. Today, they're all on their own and make a lot of independent decisions on types of computing, price of computing, and price/performance of computing relative to their needs. Users could get a lot of initial information through an association.

The last thing I would say is that such an association could also propose industry-wide standards. We have a standard called HIPPI (high-performance parallel interface), but unfortunately we don't have a standard that stipulates the protocol for HIPPI yet. A lot of people are going a lot of different ways. If we had an organization where the industry as a whole could get together, we might be able to devise something from which all the users could benefit because all the users would be using the same interface and the same protocol.

I am a firm believer in open systems. Our company is a firm believer in open systems. Open systems benefit the industry and the user community, not just the user community.

In conclusion I will tell you that we at nCUBE Corporation have discussed the concept of a high-performance computing organization at the executive level, and our view is that we will gladly talk to the other vendors, be they big, small, or new participants in high-performance computing. We have funds to put into an association, and we think we should build such an association for the betterment of the industry.



The New Supercomputer Industry

Justin Rattner

Justin Rattner is founder of and Director of Technology for Intel Supercomputer Systems Division. He is also principal investigator for the Touchstone project, a $27 million research and development program funded jointly by the Defense Advanced Research Projects Agency and Intel to develop a 150-GFLOPS parallel supercomputer.

In 1988, Mr. Rattner was named an Intel Fellow, the company's highest-ranking technical position; he is only the fourth Intel Fellow named in the company's 20-year history. In 1989, he was named Scientist of the Year by R&D Magazine and received the Globe Award from the Oregon Center for Advanced Technology Education for his contributions to educational excellence. Mr. Rattner is often called the "Father of Parallel Valley," the concentration of companies near Portland, Oregon, that design and market parallel computers and parallel programming tools.

Mr. Rattner received his B.S. and M.S. degrees in electrical engineering and computer science from Cornell University, Ithaca, New York, in 1970 and 1972, respectively.

An observation was made by Goldman, Sachs & Co. in about 1988 about the changing structure of the computer industry. They talked about "New World" computing companies versus "Old World" computing companies. In my observation of the changing of the guard in high-performance computing, I group companies such as Cray Research, IBM, Digital Equipment Corporation, Nippon Electric Corporation, and Fujitsu as Old World supercomputing companies, and Intel, Silicon Graphics IRIS, Thinking Machines Corporation, nCUBE Corporation, and Teradata as New World supercomputing companies.

The point of this grouping is that the structure of the industry associated with Old World computing companies is very different from the structure of the industry associated with New World computing companies. I am not saying that the New World companies will put all Old World companies out of business or that there will be an instantaneous transformation of the industry as we forsake all Old World computers for New World computers. What I am trying to emphasize is the fact that the New World industries have fundamentally different characteristics from Old World industries, largely because of the underlying technology.

Of course, the underlying agent of change here is the microprocessor because micro-based machines show more rapid improvement. Intel's forecast for a high-performance microprocessor in the year 2000—in contrast to a high-integration microprocessor in the year 2000—is something on the order of 100 million transistors, about an inch on a side, operating with a four-nanosecond cycle time, averaging 750 million instructions per second, and peaking at a billion floating-point operations per second (GFLOPS), with four processing units per chip.
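
As a rough consistency reading of that forecast (our back-of-the-envelope arithmetic, not Intel's), a four-nanosecond cycle is a 250-MHz clock, and four processing units each producing one result per cycle would give the quoted one-GFLOPS peak:

    # Back-of-the-envelope reading of the year-2000 microprocessor forecast above.
    # The relationships assumed here (one instruction or floating-point result
    # per unit per cycle) are ours; only the quoted figures come from the text.

    cycle_time_ns    = 4      # forecast cycle time
    processing_units = 4      # forecast processing units per chip
    average_mips     = 750    # forecast average instruction rate

    clock_mhz   = 1_000 / cycle_time_ns                  # 250 MHz
    peak_mips   = processing_units * clock_mhz           # 1000 MIPS peak issue rate
    peak_gflops = processing_units * clock_mhz / 1_000   # 1 GFLOPS peak

    print(f"clock {clock_mhz:.0f} MHz, peak {peak_gflops:.0f} GFLOPS, "
          f"average/peak MIPS = {average_mips / peak_mips:.0%}")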

I think that Intel is every bit as aggressive with superprocessor technology as other people have been with more conventional technologies. For instance, see Figure 1, which is a picture of a multichip module substrate for a third-generation member of the Intel i860 family, surrounded by megabit-cache memory chips. It is not that unusual, except that you are looking at optical waveguides. This is not an illustration of an aluminum interconnect. These are electro-optic polymers that are interconnecting these chips with light waves.

We are also working on a cryogenically cooled chip. In fact, there is a 486 microprocessor operating at Intel at about 50 per cent higher than its commercially available operating frequency, and I think it is running at about -30°C.

Figure 2 shows the impact of this technology. I tried to be as generous as I could be to conventional machines, and then I plotted the various Touchstone prototypes and our projections out to the point where I ran off the performance graph. We believe we can reach sustained TFLOPS performance by the middle of the decade.



Figure 1.
A multichip module substrate for a third-generation Intel i860.

Figure 2.
Micro-based machines show more rapid improvement in peak system performance.



The "new" industry technology is what produces a "paradigm shift," if I can borrow from Thomas Kuhn, and leads to tremendous crisis and turmoil as old elements and old paradigms are cast aside and new ones emerge. In short, it has broad industry impact.

Among these impacts are technology bridges—things that would help us get from the old supercomputing world to the new supercomputing world. One of those bridges is some type of unifying model of parallel computation, and I was delighted to see a paper by Les Valiant (1990), of Harvard University, on the subject. He argues that the success of the von Neumann model of computation is attributable to the fact that it is an efficient bridge between software and hardware and that if we are going to have similar success in parallel computing, we need to find models that enable that kind of bridging to occur.

I think new industry machines result in changes in usage models. This is something we have seen. I have talked to several of the supercomputer center directors who are present at this conference. We get somewhat concerned when we are told to expect some 1500 logins for one of these highly parallel machines and all the attendant time-sharing services that are associated with these large user communities. I do not think these architectures, now or for some time to come, will be conducive to those kinds of conventional usage models, and I think we need to consider new models of usage during this period of crisis and turmoil that accompanies the paradigm shift.

Issues associated with product life cycles affect the investment strategies that are made in developing these machines. The underlying technology is changing very rapidly. It puts tremendous pressure on us to match our product life cycles to those of the microprocessors. It is unusual in the Old World of supercomputers to do that.

Similarly, we have cost and pricing effects. I cannot tell you how many times people have said that the logic in one of our nodes is equivalent to what they have in their workstations. They ask, "Why are you charging me two or three times what someone would charge to get that logic in the workstation?" These are some of the issues we face.

The industry infrastructure is going to be changing. We are witnessing the emergence of a whole new industry of software companies that see the advent of highly parallel machines as leveling the playing field—giving them an opportunity to create new software that is highly competitive in terms of compatibility with old software while avoiding the problem of having 20-year-old algorithms at the heart of these codes. Many other industry infrastructure changes can be anticipated, such as the one we see beginning to take place in the software area.



Procurement policies for overall supercomputers are based on the product life cycle, cost, and pricing structures of Old World supercomputers. And we see that as creating a lot of turmoil during the paradigm shift. When we sit down with representatives of the various government agencies, they tend to see these things in three- or four- or five-year cycles, and when we talk about new machines every 18 to 24 months, it's clear that procurement policies just don't exist to deal with machines that are advancing at that rapid rate.

Finally, export policies have to change in response to this. In a New York Times article entitled "Export Restrictions Fail to Halt Spread of Supercomputers," the reporter said that one thing creating this problem with export restrictions was that among the relatively powerful chips that are popular with computer makers abroad is the Intel i860 microprocessor, which is expected to reach 100 million floating-point operations per second sometime in late 1990 or early 1991. This is just an example of the kind of crisis that the new computer industry will continue to create, I think, for the balance of this decade, until the paradigm shift is complete.

Reference

L. Valiant, "A Bridging Model for Parallel Computation," Communications of the ACM 33 (8), 103–111 (1990).



The View from DEC

Sam Fuller

Samuel H. Fuller, Vice President of Research, Digital Equipment Corporation, is responsible for the company's corporate research programs, such as work carried out by Digital's research groups in Maynard and Cambridge, Massachusetts, Palo Alto, California, and Paris, France. He also coordinates joint research with universities and with the Microelectronics and Computer Technology Corporation (MCC). Dr. Fuller joined Digital in 1978 as Engineering Manager for the VAX Architecture Group. He has been instrumental in initiating work in local area networks, high-performance workstations, applications of expert systems, and new computer architectures.

Before coming to Digital, Dr. Fuller was Associate Professor of Computer Science and Electrical Engineering at Carnegie Mellon University, where he was involved in the performance evaluation and design of several experimental multiprocessor computer systems. Dr. Fuller is a member of the boards of directors of MCC, MIPS Corporation, and the National Research Initiatives. He also serves as a member of the advisory councils of Cornell University, Stanford University, and the University of Michigan and is on the advisory board of the National Science Resource Center (Smithsonian Institution-National Academy of Sciences). Dr. Fuller is a Fellow of the Institute of Electrical and Electronics Engineers and a member of the National Academy of Engineering.



I would like to cover several topics. One is that Digital Equipment Corporation (DEC) has been interested for some time in forms of parallel processing and, in fact, in massively parallel processing, usually referred to here as SIMD processing.

There are two things that are going on in parallel processing at DEC that I think worthy of note. First, for a number of years, we have had an internal project involving four or five working machines on which we are putting applications while trying to decide whether to bring them to market. The biggest thing holding us back—as stated by other presenters in this session—is that it is a limited market. When there is a question of how many people can be successful in that market, does it really make sense for one more entrant to jump in? I would be interested in discussing with other researchers and business leaders the size of this market and the extent of the opportunities.

The second thing we are doing in massively parallel processing is the data parallel research initiative that we have formed with Thinking Machines Corporation and MasPar. In this effort, we have focused on the principal problem, which is the development of applications. When we started with Thinking Machines in January 1989, the goal was to more than double the number of engineers and scientists writing applications for massively parallel machines.

An interesting aspect I did not perceive when we went into the program was the large number of universities in this country that are interested in doing some work in parallel processing but do not have the government contracts or the grants to buy some of the larger machines we are talking about at this conference. As the smaller massively parallel machines have come forward, over 18 of the MasPar machines with DEC front ends have gone into various universities.

Some people have spoken to me about having supercomputer centers where people are trained in vector processing and having that concept filter down into the smaller machines. Also, as more schools get small massively parallel machines, those students will begin to learn how to develop applications on parallel machines, and then we will begin to see that trend trickle upward, as well.

A very healthy development over the past 12 months is the availability of low- as well as high-priced massively parallel machines. The goal of the DEC-Thinking Machines-MasPar initiative involving universities is no longer to double the number of engineers and scientists. It is now, really, to more than quadruple the number of engineers and scientists that are working on these types of machines, and I think that is quite possible in the year or two ahead.



Our next goal, now that several of these machines are in place, is to begin having a set of workshops and conferences where we publish the results of those applications that have been developed on these machines at universities around the country.

Another significant initiative at DEC is to look at how far we can push single-chip microprocessors. The goal is a two-nanosecond cycle time on a two-way superscalar machine. Our simulations so far indicate that we can achieve capacities on the order of 300 million instructions per second (MIPS). Looking forward and scaling up to our 1993 and 1994 potential, we expect performance peaks to be in the neighborhood of 1000 MIPS.

I hasten to add that in this research program we are doing some of the work with universities, although the bulk of it is being done internally in our own research labs. The idea is to try and show the feasibility of this—to see whether we can make this the basis of our future work. The methodology is to use the fastest technology and the highest level of integration. Attempting to use the fastest technology means using emitter-coupled logic (ECL). We are continuing to work with U.S. vendors, Motorola and National. We've gone through two other vendors over the course of the past 18 months now, and there's no doubt in our minds that while the U.S. vendors are dropping back in some of their commitments to ECL, the Japanese are not. It would have been a lot easier for us to move forward and work with the Japanese. But we made a decision that we wanted to try and work with the U.S. vendor base to develop a set of CAD tools. We're doing custom design in ECL, and the belief is we can get as high a density with the ECL as we can get today with complementary metal oxide semiconductors (CMOS). It's a somewhat different ECL process. I think some people might even argue that it's closer to bipolar CMOS than ECL. But, in fact, all of the transistors in the current effort are ECL.

Today, packaging techniques can let you dissipate 150 to 175 watts per package. But the other part of the project, in addition to the CAD tools, is to develop the cooling technology so that we can do that on a single part.

Another reason it is not appropriate to call this a supercomputer is its large potential impact on workstations: you can surround this one ECL part with fairly straightforward second-level and third-level caches built from CMOS dynamic random-access memory (DRAM) chips. So I think we can provide a fairly powerful desktop device in the years ahead.

What we are building is something that can get the central processing unit, the floating-point unit, and the translation unit, as well as instruction and data caches, on a single die. By getting the first-level caches on a single die, we hope to go off-chip every tenth to fifteenth cycle, not every cycle, which allows us to run the processor two to ten times faster


538

than the actual speed on the board. So we just use a phase-locked loop on the chip to run it at a clock rate higher than the rest of the system. This also lets us get higher performance from the processor while using less aggressive technology for the boards themselves.
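A minimal sketch of why this works, using assumed round numbers rather than DEC's figures (the 50-MHz board clock and the core-clock multipliers below are hypothetical), shows how rarely a cache-equipped core actually presents a transaction to the slower board:

# Illustrative sketch with assumed numbers (the 50-MHz board clock and the
# core-clock multipliers are hypothetical, not DEC figures): how often a fast
# on-chip core actually goes off chip to the slower board when the first-level
# caches satisfy most references.

def offchip_accesses_per_second(core_clock_mhz, cycles_between_offchip):
    """Millions of off-chip accesses per second the board must service."""
    return core_clock_mhz / cycles_between_offchip

board_clock_mhz = 50.0                       # assumed board/system clock
for multiplier in (2, 5, 10):                # "two to ten times faster" core
    core_mhz = board_clock_mhz * multiplier
    for gap in (10, 15):                     # off chip every 10th to 15th cycle
        rate = offchip_accesses_per_second(core_mhz, gap)
        print(f"core {core_mhz:4.0f} MHz, off-chip every {gap:2d} cycles: "
              f"{rate:5.1f}M accesses/s ({rate / board_clock_mhz:.2f} per board cycle)")

Even at the highest multiplier, the board sees at most about one off-chip transaction per board cycle, which is why a phase-locked-loop-multiplied core clock can coexist with less aggressive board-level technology.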

Because this is a research project, not a product development effort, it seems useful to discuss whether we will meet our goal—whether our U.S. suppliers can supply us with the parts in 1992 and 1993. This is clearly an achievable task. It will require some aggressive development of CAD tools, some new packaging technology, and the scaling of the ECL parts. But all of those will be happening, so in terms of single-chip microprocessors, it is clear that this is coming, whether it happens in 1994, 1995, or a year or two later.

The next main topic I want to address is the question posed for this session by the conference organizers, i.e., where the government might be a help or a hindrance in the years ahead; I have three points. The first is that I think it would be relatively straightforward for the government to ease up on export controls and allow us to move more effectively into new markets, particularly eastern Europe.

DEC has set up subsidiaries in Hungary and Czechoslovakia and would like to go elsewhere, but a number of the rules hamper us. Other speakers have talked about the supercomputing performance rules; because DEC doesn't make supercomputers—we make minicomputers—we've run into other problems. We had actually begun to develop a fairly good market there. Then, after the Soviet invasion of Afghanistan, we followed the direction of the government and stopped all further communication and all delivery of products to eastern Europe.

As things opened up this past year, it has turned out that the largest installed base of computers in Hungary is composed of Digital machines. Yes, they are all clones, not built by us, but it is a wonderful opportunity to provide service and new software. Right now we're precluded from doing that because it would violate various patent and other laws, so we're basically going to hand that market over to the Japanese, who will go in and upgrade the cloned DEC computers and provide service.

The second point is that the government needs to be more effective in helping collaboration among U.S. industry, universities, and the government laboratories. The best model of that over the past couple of decades has been the Defense Advanced Research Projects Agency (DARPA). In the early years, certainly with timesharing and networking, Digital


539

both profited from and contributed to work in those two areas. We didn't do as well with workstations, I think; obviously, Sun Microsystems, Inc., and Silicon Graphics, Inc., got the large benefit of that. We finally woke up, and we're doing better on workstations now.

The point is that DARPA has done well, I think, in fostering the right type of collaboration with universities and industry in years past. We need to do more of that in the years ahead, and that is the first thing I would encourage.

I have a final point on government collaboration that I think I've got to get on the table. People have said that their companies are for open systems and that you've got to have more collaboration. DEC, too, is absolutely committed to open systems, and we need more collaboration. But let me offer a caution. Having helped to set up a number of these collaborations—the Open Software Foundation, the Semiconductor Manufacturing Technology Consortium (SEMATECH), and others—I can tell you that the government needs to play a central role if you want that collaboration to be focused on this country.

Unless you have the government involved in helping to set up that forum and providing some of the funding for the riskiest of the research, you will have an international, rather than a national, forum. High-performance computing is the ideal place, I think, for somebody in the government—whether it's the Department of Energy, DARPA, or the civilian version of DARPA—to convene such a forum and bring the major U.S. players together so we can develop some of the common software that people have talked about at this conference.


541

Industry Perspective:
Remarks on Policy and Economics for High-Performance Computing

David Wehrly

David S. Wehrly is President of ForeFronts Computational Technologies, Inc. He received his Ph.D. and joined IBM in 1968. He has held numerous positions with responsibility for IBM's engineering, scientific, and supercomputing product development. Dr. Wehrly pioneered many aspects of technical computing at IBM in such areas as heterogeneous interconnect and computing systems, vector, parallel, and clustered systems, and computational languages and libraries. He was until August 1992 Director of IBM's High-Performance/Supercomputing Systems and Development Laboratories, with overall, worldwide systems management and development responsibility for supercomputing at IBM.

I would like to share some of my thoughts on high-performance computing. First, I would like to make it clear that my opinions are my own and may or may not be those of IBM in general.

The progress of high-performance computing, in the seven years since the last Frontiers of Supercomputing conference in 1983, has been significant. The preceding sessions in this conference have done an excellent job of establishing where we are in everything from architecture to algorithms and the technologies required to realize them. There are, however, a few obstacles in the road ahead, and perhaps we are coming to some major crossroads in the way we do business and what the respective roles of government, industry, and academia are.


542

There is a lot more consensus on where to go than on a plan of how to get there, and we certainly fantasize about more than we can achieve in any given time. However, in a complex situation with no perfect answers—and without a doubt, no apparent free ride—most would agree that the only action that would be completely incorrect is to contemplate no action at all.

Herb Striner (Session 11) was far more articulate and Alan McAdams (same session) more passionate than I will be, but the message is the same: the leading edge in supercomputing is certainly not for the faint of heart or the light of wallet!

The Office of Science and Technology Policy report on the federal High Performance Computing Initiative of 1989 set forth a framework and identified many of the key issues that are inhibiting the advance of U.S. leadership in high-performance computing. Many observers of the current circumstances contend that the lack of a national industrial policy, combined with extremely complex bilateral relationships with countries such as Japan, with which we are simultaneously allies and competitors, assures our failure.

So, what are the major problems that we face as a nation? We have seen this list before:

• the high cost of capital;

• a focus on short-term revenue optimization;

• inattention to advanced manufacturing, quality, and productivity;

• unfair trade practices; and

• the realization that we are behind in several key and emerging strategic technologies.

Although this is not a total list, it represents some of the major underlying causes of our problems, and many of these problems arise as a result of the uncertain role government has played in the U.S. domestic economy, coupled with sporadic efforts to open the Japanese markets.

When viewed with respect to the U.S. Trade Act of 1988, Super 301, and the Structural Impediments Initiative talks, statements by high-profile leaders such as Michael Boskin, Chairman of the Council of Economic Advisers—who said, "potato chips, semiconductor chips, what is the difference? They are all chips."—or Richard Darman, Director of the Office of Management and Budget—who said, "Why do we want a semiconductor industry? We don't want some kind of industrial policy in this country. If our guys can't hack it, let them go."—must give one pause to wonder whether we are not accelerating the demise of the last pocket of support for American industry and competitiveness and moving one step closer to carrying out what Clyde V. Prestowitz, Jr., a veteran U.S.-Japanese


543

negotiator, characterized as "our own death wish." His observations (see the July 1990 issue of Business Tokyo) were made in the context of Craig Fields's departure from the Defense Advanced Research Projects Agency. Such confusion about our technology policy stands in contrast to the Japanese Ministry of International Trade and Industry (MITI), a single body of the Japanese government that fine-tunes and orchestrates an industrial policy.

I am not advocating that the U.S. go to that extreme. However, some believe the U.S. will become a second- or third-rate industrial power by the year 2000 if we do not change our "ad hoc" approach to technology policy. The competitive status of the U.S. electronics sector is such that Japan is headed toward replacing the U.S. as the world's number one producer and trader of electronic hardware by mid-1990, if not earlier.

So, what is needed? The U.S. needs a focused industrial policy that is committed to rejuvenating and maintaining the nation's high-tech strength—a committed government working in close conjunction with American business and academic institutions. Technical and industrial policy must be both tactical and strategic, and it must be neither isolated from nor confused by the daily dynamics of politics.

I would summarize the key requirements in the following way:

• We need to recognize economic and technological strength as vital to national security. We must understand fully the strategic linkages between trade, investment, technology, and financial power. The Japanese achieve this through their keiretsu (business leagues).

• Institutional reforms are needed to allow greater coordination between government, academia, and business. MITI has both strengths and weaknesses, but first and foremost, it has a vision—the promotion of Japan's national interest.

• The U.S. must strengthen its industrial competitiveness. Measures are necessary to encourage capital formation, increase investment, improve quality, promote exports, enhance education, and stimulate research and development.

• America must adopt a more focused, pragmatic, informed, and sophisticated approach toward Japan on the basis of a clear industrial policy and coherent strategy with well-defined priorities and objectives, without relegating Japan to a position of adversary. We must recognize Japan for what it is—a brilliant negotiator and formidable competitor.

From an economic standpoint, the U.S. has a growing dependence on others for funding our federal budget deficit, providing capital for investment, selling consumer products to the American public, and


544

supplying components for strategic American industrial production. American assets are worth half of their 1985 value, and Americans pay twice as much for Japanese products and property.

Sometimes we are obsessed with hitting home runs rather than concentrating on the fundamentals that score runs and win games. Our focus is on whose machine is the fastest, whose "grand challenges" are grandest, and who gets the most prestigious award. While all of this is important, we must not lose sight of other, perhaps less glamorous but possibly more important, questions: How do we design higher-quality cars? How do we bring medicines to market more quickly? How can we apply information technologies to assist in the education of our children? The answers to these questions will come not by designing high-performance computers for the sake of the technology alone but by focusing on applications.

To achieve these objectives, significant work will be required in the hardware, software, and networking areas. Work that the private sector does best should be left to the private sector, including hardware and software design and manufacture and the operation of the highly complex network required to communicate between systems and to exchange information. Even the Japanese observed in their just-completed supercomputing initiative that industry, through the demands of the market, had far exceeded the goals set by MITI and that the government had, in fact, become an inhibitor in a project that spanned almost a decade. Our government, however, could assist tremendously if it would focus attention on the use of high-performance computing to strengthen both American competitiveness and scientific advances by serving as a catalyst, by using high-performance computing to advance national interests, and by participating with the private sector in funding programs and transferring critical skills and technologies.

We have said a lot about general policy and countering the Japanese competitive challenge, but what about the technology challenge, from architecture to system structure? What are the challenges that we face as an industry?

At the chip level, researchers are seeking new levels of component density, new semiconductor materials, and new cooling technologies and are exploring complex new processing and lithographic techniques, such as soft X-ray lithography, for fabricating new chips. Yet, as density increases and fundamental device structures become smaller, intrinsic parasitics and interconnect complexities bring diminishing returns and force developers to deal with an ever-increasing set of design tradeoffs and parameters.


545

This is demanding ever-higher capital investment for research and development of tooling and manufacturing processes.

Ultimately, as we approach one-nanosecond-cycle-time machines and beyond, thermal density and packaging complexities are slowing the rate of progress in both function and performance for traditional architectures and machine structures. With progress slowing along the traditional path, much research is now focused on higher, or more massive, degrees of parallelism and on new structural approaches to removing existing roadblocks, so as to enable the computational capacity required for the challenges of tomorrow. Japan, too, has discovered this potential leverage.

At the system level, the man-machine interface is commanding a lot of development attention in such areas as voice recognition and visualization and the algorithms needed to enable these new technologies. These areas are opening up some exciting possibilities and creating new sets of challenges and demands for wideband networking and mass storage innovations.

Probably the greatest technology challenge in this arena is parallel software enablement. At this juncture, it is this barrier more than anything else that stands in the way of the 100- to 10,000-fold increase in computing capacity required to begin to address the future scientific computational challenges.

In summary, scientific computing and supercomputers are essential to maintaining our nation's leadership in defense technologies, fundamental research and development, and, ultimately, our industrial and economic position in the world. Supercomputers and supercomputing have become indispensable tools and technology drivers, and the U.S. cannot afford to relinquish its lead in this key industry.

I believe the objectives of the High-Performance Computing Act of 1989 offer us an opportunity to stay in the lead, though I do have some concerns involving the standards and intellectual-property aspects of the bill. I encourage a national plan with an application focus and a comprehensive network, and I believe the time for action is now. I am persuaded that work resulting from implementation of this act will encourage the government and the private sector to build advanced systems and applications faster and more efficiently.


547
