
Government's High Performance Computing Initiative Interface with Industry

Howard E. Simmons

Howard E. Simmons is Vice President and Senior Science Advisor at E. I. du Pont de Nemours and Company, where for the past 12 years he headed corporate research. He has a bachelor's degree from MIT in chemistry and a Ph.D. from the same institution in physical organic chemistry. He was elected to the National Academy of Sciences in 1975 and has been a Visiting Professor at Harvard University and the University of Chicago. His research interests have ranged widely from synthetic and mechanistic chemistry to quantum chemistry. Most recently, he has coauthored a book on mathematical topological methods in chemistry with R. E. Merrifield.

It is a pleasure for me to participate in this conference and share with you my perspectives on supercomputing in the industrial research, development, and engineering environments.

I will talk to you a little bit from the perspective of an industry that has come late not only to supercomputing but also to computing in general from the science standpoint. Computing from the engineering standpoint, I think, came into the chemical industry very early, and E. I. du Pont de Nemours and Company (du Pont) was one of the leaders.

Use of supercomputers at du Pont is somewhat different from the uses we see occurring in the national laboratories and academia. The differences are created to a large extent by the cultures in which we operate and the institutional needs we serve. In that context, there are three topics I will cover briefly. The first is "culture." The second is supercomputer applications software. The third is the need for interfaces to computer applications running on PCs, workstations, minicomputers, and supercomputers of differing types, such as massively parallel machines.

As I mentioned in regard to culture, the industrial research, development, and engineering culture differs from that of the national laboratories and academia. I think this is because our objective is the discovery, development, manufacture, and sale of products that meet customer needs and at the same time make a profit for the company. This business orientation causes us to narrow the scope of our work and focus our efforts on solving problems of real business value in those areas in which we have chosen to operate. Here I am speaking about the bulk of industrial research; work at AT&T's Bell Laboratories, du Pont's Central Research, and many other corporate laboratories, for instance, does follow the academic pattern more closely.

A second cultural difference is that most of the R&D effort, and consequently our staffing, has been directed toward traditional experimental sciences. We have rarely, in the past, been faced with problems that could be analyzed and solved only through computational methods. Hence, our computational science "tradition" is neither as long-standing nor as diverse as that found in the other sectors or in other industries.

A significant limitation and hindrance to broad industrial supercomputer use is the limited vision, among the scientists and engineers who are solving problems, of what is possible. So long as they are satisfied with what they are doing and the way they are doing it, there is not much driving force to solve more fundamental, bigger, or more complex problems. Thus, in our industry, a lot of education is needed, not only in the supercomputer area but in all advanced computing. We believe we are making progress in encouraging a broader world view within our technical and managerial ranks. We have been holding many in-house symposia, particularly on supercomputing, with the help of Cray Research, Inc. We invited not just potential users within the company but also middle managers, who are the key people to convince of the needs their people will have for more advanced computing.

Our company has a policy of paying for resources used. This user-based billing practice causes some difficulty for users in justifying and budgeting for supercomputer use, particularly in the middle of a fiscal cycle. A typical example is that scientists and engineers at a remote plant site—in Texas, for example—may see uses for our Cray back in Wilmington, Delaware. They have a lot of trouble convincing their middle management that this is any more than a new toy or a new gimmick. So we have done everything that we can, including forming SWAT teams that go out and talk to research managers throughout the corporation, giving them a reasonable perspective on the corporation's total advanced computing capabilities.

The cultural differences between the national laboratories, universities, and industry are certainly many, but they should not preclude mutually beneficial interactions. The diversity of backgrounds and differences in research objectives can and should be complementary if we understand each other's needs.

The second major topic I will discuss is software, specifically applications software. We presume for the present time that operating systems, communications software, and the like will be largely provided by vendors, at least certainly in our industry. There are several ways to look at the applications software issue. The simplest is to describe our strategies for acquiring needed analysis capabilities involving large "codes." In priority, the questions we need to ask are as follows:

• Has someone else already developed the software to solve the problem or class of problems of interest to us? If the answer is yes, then we need to take the appropriate steps to acquire the software. In general, acquisition produces results faster and at lower cost than developing our own programs.

• Is there a consortium or partnership that exists or might be put together to develop the needed software tools? If so, we should seriously consider buying in. This type of partnering is not without some pitfalls, but it is one that appeals to us.

• Do we have the basic expertise and tools to develop our own special-purpose programs in advanced computing? The answer here is almost always yes, but rarely is it a better business proposition than the first two options. This alternative is taken only when there is no other viable option.

To illustrate what's been happening, our engineers have used computers for problem solving since the late 1950s. Since we were early starters, we developed our own programs and our own computer expertise. Today, commercial programs are replacing many of our "home-grown" codes. We can no longer economically justify the resources required to develop and maintain in-house versions of generic software products. Our engineers must concentrate on applying the available computational tools faster and at lower life-cycle costs than our competition.



Our applications in the basic sciences came later and continue to undergo strong growth. Many of our scientists write their own code for their original work, but here, too, we face a growing need for purchased software, particularly in the molecular-dynamics area.
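To give a flavor of what such purchased or home-grown molecular-dynamics codes compute, the sketch below integrates the motion of two Lennard-Jones atoms with the velocity-Verlet scheme. It is a toy in reduced units with made-up starting coordinates, not a representation of any du Pont code; production packages add neighbor lists, thermostats, and periodic boundary conditions.

```python
import numpy as np

EPS, SIG = 1.0, 1.0      # Lennard-Jones well depth and diameter (reduced units)
DT, STEPS = 0.002, 1000  # time step and step count (arbitrary choices)

def lj_force(r_vec):
    """Force on atom 0 due to atom 1 for the 12-6 Lennard-Jones potential."""
    r2 = np.dot(r_vec, r_vec)
    sr6 = (SIG * SIG / r2) ** 3
    return 24.0 * EPS * (2.0 * sr6 * sr6 - sr6) / r2 * r_vec

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])  # made-up coordinates
vel = np.zeros_like(pos)                            # start from rest, unit masses
f = lj_force(pos[0] - pos[1])
forces = np.array([f, -f])                          # Newton's third law

for _ in range(STEPS):                              # velocity-Verlet loop
    vel += 0.5 * DT * forces                        # half-step velocity update
    pos += DT * vel                                 # full-step position update
    f = lj_force(pos[0] - pos[1])
    forces = np.array([f, -f])
    vel += 0.5 * DT * forces                        # second half-step update

print("final separation:", np.linalg.norm(pos[0] - pos[1]))
```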

Applications software is an ever-present need for us. It needs to be reasonably priced, reliable, and robust, and it must have good documentation. In addition, high-quality training and support should be readily available. As we look forward to parallel computers, the need for good applications software will only become more severe, since the old and proven software developed for serial machines is becoming increasingly inadequate for us.

Finally, integration of supercomputers into the infrastructure or fabric of our R&D and engineering processes is not easy. I believe this difficulty to be one of the primary causes of the slow rate at which supercomputer use has penetrated industry.

For the sake of argument, assume that we have an organization that believes in and supports computational science, that we have capable scientists and engineers who can use the tools effectively, that the computers are available at a reasonable cost, and that the needed software tools are available. All of these are necessary conditions, but they are not sufficient.

Supercomputers historically have not fit easily into the established computer-based problem-solving environments, which include personal computers, workstations, and minicomputers. In this context, the supercomputer is best viewed as a compute engine that should be almost transparent to the user. Making the supercomputer transparent requires interfaces to these other computing platforms and the applications running on them. Development of such interfaces is imperative if we are going to make substantial inroads into the existing base of scientific and engineering applications. The current trend toward UNIX-based operating systems greatly facilitates this development. However, industry tends to have substantial investments in computer systems running proprietary operating systems, such as IBM MVS and VAX/VMS.
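As a rough illustration of what "transparent" means in practice, here is a minimal sketch of a workstation-side wrapper that ships an input file to a remote compute engine, runs a solver there, and pulls the results back. The host name, paths, and solver command are all hypothetical, and modern ssh/scp stand in for whatever file-transfer and remote-execution facilities a given site actually has.

```python
import subprocess

HOST = "cray.example.com"  # hypothetical supercomputer front end

def run_remote(input_deck: str, output_file: str) -> None:
    """Ship the input deck to the compute engine, run the solver, fetch results."""
    subprocess.run(["scp", input_deck, f"{HOST}:/scratch/job.in"], check=True)
    subprocess.run(["ssh", HOST, "solver < /scratch/job.in > /scratch/job.out"],
                   check=True)
    subprocess.run(["scp", f"{HOST}:/scratch/job.out", output_file], check=True)

# To the user this looks like any other local analysis step:
#     run_remote("frame.in", "frame.out")
```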

Three brief examples of our supercomputer applications might help to illustrate the sort of things we are doing and a little bit about our needs. In the first example, we design steel structures for our manufacturing plants using computer-aided design tools on our Digital Equipment Corporation VAX computers. Central to this design process is analysis of the structure using the NASTRAN finite-element analysis program. This piece of the design process is, of course, very time consuming and compute intensive. To break that bottleneck, we have interfaced the design programs to our CRAY X-MP, where we do the structural analyses. In our hands this is faster by a factor of 20 to 40 and much lower in cost, and it permits us to do a better design job. With greater compute power we can do seismic load analyses even when the structures are not in high-risk areas, something that, for simple economic reasons, we did not always do in the past. This capability and vision lead to new approaches to some old problems.
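For readers unfamiliar with what a finite-element run like the NASTRAN analyses above actually does, the toy sketch below assembles and solves the stiffness equations for an axially loaded bar split into a few two-node elements. The dimensions and load are invented round numbers; a real structural model is three-dimensional and many orders of magnitude larger.

```python
import numpy as np

N, L = 4, 1.0                # elements, bar length (m) -- hypothetical values
E, A, P = 200e9, 1e-4, 1e4   # steel modulus (Pa), area (m^2), tip load (N)
k = E * A / (L / N)          # axial stiffness of one element

K = np.zeros((N + 1, N + 1))
for e in range(N):           # assemble the global stiffness matrix
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

F = np.zeros(N + 1)
F[-1] = P                    # point load at the free end

# Fix node 0 (the built-in end) and solve K u = F for the remaining nodes.
u = np.zeros(N + 1)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

print("tip displacement:", u[-1], "m  (exact PL/EA =", P * L / (E * A), ")")
```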

The second example comes from the discovery process: as we explore new compounds and new products, part of that process requires the determination of bulk physical properties. In selected cases we are computing these in order to expedite the design and development of commercial manufacturing facilities. We find high value in areas ranging from drug design to structure-property relations in polymers. A good example is the computation of basic thermodynamic properties of small halocarbons such as the chlorofluorocarbon (Freon) replacements. This effort is critical to our future and the viability of some of our businesses. It is very interesting to note that these ab initio quantum mechanical calculations are being used directly in the design of both products and plants. In this case we have had no problem convincing upper management in one of our most traditional businesses of the great value of supercomputers, because the calculations are necessary to get some of these jobs done. We gain a substantial competitive advantage by being able to develop such data via computational methodologies and not just experimentally; experimental determination of these properties can take much longer and cost more.
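To hint at how thermodynamic properties fall out of such ab initio results, the sketch below evaluates the standard rigid-rotor/harmonic-oscillator expression for the vibrational contribution to the constant-volume heat capacity. The frequencies are made-up placeholders, not computed values for any actual halocarbon.

```python
import math

R = 8.314462                 # gas constant, J/(mol K)
H = 6.62607e-34              # Planck constant, J s
C = 2.99792458e10            # speed of light, cm/s (frequencies are in cm^-1)
KB = 1.380649e-23            # Boltzmann constant, J/K

def cv_vib(freqs_cm, T):
    """Harmonic-oscillator vibrational Cv (J/mol/K) from frequencies in cm^-1."""
    cv = 0.0
    for nu in freqs_cm:
        x = H * C * nu / (KB * T)              # dimensionless hc*nu / kT
        ex = math.exp(x)
        cv += R * x * x * ex / (ex - 1.0) ** 2 # per-mode Einstein contribution
    return cv

freqs = [350.0, 700.0, 1100.0, 1250.0, 2990.0]  # hypothetical normal modes
print("Cv,vib at 298 K:", cv_vib(freqs, 298.15), "J/mol/K")
```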

A third example is atmospheric chemistry modeling to understand and assess the impact of particular compounds on ozone depletion, and now on global warming. This is another area where we have had a significant supercomputer effort over many years, and it, too, is critical to the future and viability of some of our businesses. As a consequence, this is an area in which we chose to develop our own special-purpose programs, which are recognized as being state of the art.
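A drastically simplified box model can convey the flavor of the chemistry involved, as in the toy sketch below, where ozone is produced at a fixed rate and destroyed both by a background loss and by reaction with chlorine. Every rate constant and concentration is a hypothetical round number; a real stratospheric model couples hundreds of reactions with transport and radiation, which is what drives the supercomputer requirement.

```python
PROD = 1.0e6    # O3 production rate, molecules cm^-3 s^-1 (hypothetical)
K_LOSS = 1.0e-2 # background first-order O3 loss, s^-1 (hypothetical)
K_CL = 1.0e-11  # Cl + O3 rate constant, cm^3 molecule^-1 s^-1 (hypothetical)
CL = 1.0e5      # fixed Cl concentration, molecules cm^-3 (hypothetical)

o3, dt = 0.0, 1.0
for _ in range(200_000):  # forward-Euler integration to steady state
    o3 += dt * (PROD - (K_LOSS + K_CL * CL) * o3)

print("steady-state O3:", o3)
print("analytic check :", PROD / (K_LOSS + K_CL * CL))
```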

In looking forward, what can we do together? What would be of help to us in industry?

One answer is to explore alternatives for integrating supercomputers into a heterogeneous network of computer applications and workstations so they can be easily accessed and utilized to solve problems where high-performance computing is either required or highly desirable.

Second, we could develop hardware and software to solve the grand challenges of science. Although it may not be directly applicable to our problems, the development of new and novel machines and algorithms will benefit us, particularly in the vectorization and parallelization of algorithms.
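The vectorization point can be made concrete with a small example. The sketch below writes the same axpy-style update first as a scalar loop and then as a whole-array operation; it is the second form that vector hardware, and the compilers and libraries targeting it, can exploit. The array length is arbitrary.

```python
import numpy as np

n = 100_000
a = 2.5
x, y = np.random.rand(n), np.random.rand(n)

# Scalar form: one element at a time, opaque to vector hardware.
z_loop = np.empty(n)
for i in range(n):
    z_loop[i] = a * x[i] + y[i]

# Vector form: a single whole-array operation the runtime can vectorize.
z_vec = a * x + y

assert np.allclose(z_loop, z_vec)
```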

Third, we might develop applications software of commercial quality that exploits the capabilities of highly parallel supercomputers.

Fourth, we could develop visualization hardware and software tools that our scientists and engineers could use effectively and simply to enhance their projects. We would be eager to cooperate with others in any of these areas.

The bottom line is that we, in our innocence, believe we are getting real business value from the use of supercomputers in research, development, and engineering work. However, to exploit this technology fully, we need people with a vision of what is possible, we need more high-quality software applications—especially for highly parallel machines—and we need the capability to easily integrate supercomputers into diverse problem-solving environments. Some of those things, like the latter point, are really our job. Yet we really need the help of others and would be very eager, I think, to work with a national laboratory in solving some problems for the chemical industry.


