Ralph C. Merkle received his Ph.D. from Stanford University in 1979 and is best known as a co-inventor of public-key cryptography. Currently, he pursues research in computational nanotechnology at the Xerox Palo Alto Research Center (PARC) in Palo Alto, California.
We are going to discuss configurations of matter and, in particular, arrangements of atoms. Figure 1 is a Venn diagram, and the big circle with a P in it represents all possible arrangements of atoms. The smaller circle with an M in it represents the arrangements of atoms that we know how to manufacture. The circle with a U in it represents the arrangements of atoms that we can understand.
Venn diagrams let you easily look at various unions and intersections of sets, which is exactly what we're going to do. One subset is the arrangements of atoms that are physically possible, but which we can neither manufacture nor understand. There's not a lot to say about this subset, so we won't.
The next subset of interest includes those arrangements of atoms that we can manufacture but can't understand. This is actually a very popular subset and includes more than many people think, but it's not what we're going to talk about.
The subset that we can both manufacture and understand is a good, solid, worthwhile subset. This is where a good part of current research is devoted. By thinking about things that we can both understand and
manufacture, we can make them better. Despite its great popularity, though, we won't be talking about this subset either.
Today, we'll talk about the subset that we can understand but can't yet manufacture. The implication is that the range of things we can manufacture will extend and gradually encroach upon the range of things that we can understand. So at some point in the future, we should be able to make most of these structures, even if we can't make them today.
There is a problem in talking about things that we can't yet manufacture: our statements are not subject to experimental verification, which is bad. This doesn't mean we can't think about them, and if we ever expect to build any of them we must think about them. But we do have to be careful. It would be a great shame if we never built any of them, because some of them are very interesting indeed. And it will be very hard to make them, especially the more complex ones, if we don't think about them first.
One thing we can do to make it easier to think about things that we can't build (and make it less likely that we'll reach the wrong conclusions) is to think about the subset of mechanical devices: machinery. This subset includes things made out of gears and knobs and levers and things. We can make a lot of mechanical machines today, and we can see how they work and how their parts interact. And we can shrink them down to smaller and smaller sizes, and they still work. At some point,
they become so small that we can't make them, so they move from the subset of things that we can make to the subset of things that we can't make. But because the principles of operation are simple, we believe they would work if only we could make them that small. Of course, eventually they'll be so small that the number of atoms in each part starts to get small, and we have to worry about our simple principles of operation breaking down. But because the principles are simple, it's a lot easier to tell whether they still apply or not. And because we know the device works at a larger scale, we only need to worry about exactly how small the device can get and still work. If we make a mistake, it's a mistake in scale rather than a fundamental mistake. We just make the device a little bit bigger, and it should work. (This isn't true of some proposals for molecular devices that depend fundamentally on the fact that small things behave very differently from big things. If we propose a device that depends fundamentally on quantum effects and our analysis is wrong, then we might have a hard time making it slightly bigger to fix the problem!)
The fact remains, though, that we can't make things as small as we'd like to make them. In even the most precise modern manufacturing, we treat matter in bulk. From the viewpoint of an atom, casting involves vast liquid oceans of billions of metal atoms, grinding scrapes off great mountains of atoms, and even the finest lithography involves large numbers of atoms. The basic theme is that atoms are being dealt with in great lumbering statistical herds, not as individuals.
Richard Feynman (1961) said: "The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom." Eigler and Schweizer (1990) recently gave us experimental proof of Feynman's words when they spelled "IBM" by dragging individual xenon atoms around on a nickel surface. We have entered a new age, an age in which we can make things with atomic precision. We no longer have to deal with atoms in great statistical herds—we can deal with them as individuals.
This brings us to the basic idea of this talk, which is nanotechnology. (Different people use the term "nanotechnology" to mean very different things. It's often used to describe anything on a submicron scale, which is clearly not what we're talking about. Here, we use the term "nanotechnology" to refer to "molecular nanotechnology" or "molecular manufacturing," which is a much narrower and more precise meaning than "submicron.") Nanotechnology, basically, is the thorough, inexpensive control of the structure of matter. That means if you want to build something (and it makes chemical and physical sense), you can very likely build it. Furthermore, the
individual atoms in the structure are where you want them to be, so the structure is atomically precise. And you can do this at low cost. This possibility is attracting increasing interest at this point because it looks like we'll actually be able to do it.
For example, IBM's Chief Scientist and Vice President for Science and Technology, J. A. Armstrong, said: "I believe that nanoscience and nanotechnology will be central to the next epoch of the information age, and will be as revolutionary as science and technology at the micron scale have been since the early '70's. . . . Indeed, we will have the ability to make electronic and mechanical devices atom-by-atom when that is appropriate to the job at hand."
To give you a feeling for the scale of what we're talking about, a single cubic nanometer holds about 176 carbon atoms (in a diamond lattice). That makes a cubic nanometer fairly big from the point of view of nanotechnology: it holds well over a hundred atoms, and if we're designing a nanodevice, we have to specify where each of those 176 atoms goes.
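The 176-atom figure is easy to check from the diamond lattice itself. Here is a minimal back-of-the-envelope sketch in Python, using the well-known values of 8 carbon atoms per conventional diamond unit cell and a lattice constant of about 0.3567 nm:

```python
# Atoms per cubic nanometer of diamond, computed from the unit cell.
ATOMS_PER_CELL = 8            # carbon atoms in the conventional cubic cell
LATTICE_CONSTANT_NM = 0.3567  # diamond lattice constant, in nanometers

cell_volume_nm3 = LATTICE_CONSTANT_NM ** 3
atoms_per_nm3 = ATOMS_PER_CELL / cell_volume_nm3
print(f"about {atoms_per_nm3:.0f} carbon atoms per cubic nanometer")
```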
If you look in biological systems, you find some dramatic examples of what can be done. For instance, the storage capacity of DNA is roughly 1 bit per 16 atoms or so. If we can selectively remove individual atoms from a surface (as was demonstrated at IBM), we should be able to beat even that!
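The 16-atoms-per-bit figure can be sanity-checked with similar arithmetic. Each nucleotide residue in the DNA polymer contains roughly 31 to 34 atoms (the counts below are approximate) and encodes one of four bases, which is 2 bits:

```python
# Approximate atom counts per DNA nucleotide residue in the polymer
# (base + deoxyribose + phosphate, minus the water lost on linkage).
atoms_per_residue = {"A": 33, "C": 31, "G": 34, "T": 33}

avg_atoms = sum(atoms_per_residue.values()) / len(atoms_per_residue)
bits_per_residue = 2  # one of four bases carries log2(4) = 2 bits
atoms_per_bit = avg_atoms / bits_per_residue
print(f"roughly {atoms_per_bit:.0f} atoms per bit")
```

With these approximate counts the estimate comes out near 16 atoms per bit, matching the figure quoted in the text.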
An even more dramatic device taken from biology is the ribosome. The ribosome is a programmable machine tool that can make almost any protein. It reads the messenger RNA (the "punched paper tape" of the biological world) and builds the protein, one amino acid at a time. All life on the planet uses this method to make proteins, and proteins are used to build almost everything else, from bacteria to whales to giant redwood trees.
There's been a growing interest in nanotechnology (Dewdney 1988, The Economist 1989, Pollack 1991). Fortune Magazine had an article about where the next major fortunes would come from (Fromson 1988), which included nanotechnology. The Fortune Magazine article said that very large fortunes would be made in the 21st century from nanotechnology and described K. Eric Drexler as the "theoretician of nanotechnology." Drexler (1981, 1986, 1988, 1992) has had a great influence on the development of this field and provided some of the figures used here.
Japan is funding research in this area (Swinbanks 1990). Their interest is understandable. Nanotechnology is a manufacturing technology, and Japan has always had a strong interest in manufacturing technologies. It will let you make incredibly small things, and Japan has always had a strong interest in miniaturization. It will let you make things where
every atom is in the right place: this is the highest possible quality, and Japan has always had a strong interest in high quality. It will let you make things at low cost, and Japan has always been interested in low-cost manufacturing. And finally, the payoff from this kind of technology will come in many years to a few decades, and Japan has a planning horizon that extends to many decades. So it's not surprising that Japan is pursuing nanotechnology.
This technology won't be developed overnight. One kind of development that we might see in the next few years would be an improved scanning tunneling microscope (STM) that would be able to deposit or remove a few atoms on a surface in an atomically precise fashion, making and breaking bonds in the process. The tip would approach a surface and then withdraw from the surface, leaving a cluster of atoms in a specified location (Figure 2). We could model this kind of process today using a computational experiment. Molecular modeling of this kind of interaction is entirely feasible and would allow a fairly rapid analysis of a broad variety of tip structures and tip-surface interactions. This would let us rapidly sort through a wide range of possibilities and pick out the most useful approaches. Now, if in fact you could do something like that, you could build structures using an STM at the molecular and atomic scale.
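As a taste of what such a computational experiment involves, here is a minimal sketch of the simplest possible tip-surface interaction model: a single Lennard-Jones pair potential swept over separation distance to find where a tip atom prefers to sit above a surface atom. The epsilon and sigma values are illustrative, carbon-like magnitudes, not fitted parameters for any real tip or surface; serious work would use full molecular-mechanics force fields or quantum chemistry.

```python
def lj_energy(r_nm, epsilon_kj=0.4, sigma_nm=0.34):
    """Lennard-Jones pair energy U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).

    epsilon (kJ/mol) and sigma (nm) are illustrative, carbon-like values.
    """
    sr6 = (sigma_nm / r_nm) ** 6
    return 4.0 * epsilon_kj * (sr6 * sr6 - sr6)

# Sweep the tip-surface separation from 0.30 to 0.60 nm and locate the
# energy minimum: the equilibrium distance for the tip atom.
separations = [0.30 + 0.005 * i for i in range(61)]
r_min = min(separations, key=lj_energy)
print(f"energy minimum near r = {r_min:.3f} nm")
```

The analytical minimum of a Lennard-Jones pair sits at 2^(1/6) times sigma, about 0.382 nm here, which the sweep recovers. A real tip-deposition study would have to track bond making and breaking, which a simple pair potential cannot describe.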
Figure 3 shows what might be described as a scaled-down version of an STM. It is a device that gives you positional control, and it is roughly 90 nanometers tall, so it is very tiny. It has six degrees of freedom and can position its tip accurately to within something like an angstrom. We can't build it today, but it's a fairly simple design and depends on fairly simple mechanical principles, so we think it should work.
This brings us to the concept of an "assembler." If you can miniaturize an STM and if you can build structures by controlled deposition of small clusters of atoms on surfaces, then you should be able to build small structures with a small version of the STM. Of course, you'd need a small computer to control the small robotic arm. The result is something that looks like an industrial robot that is scaled down by a factor of a million. It has millionfold smaller components and millionfold faster operations.
The assembler would be programmable, like a computer-controlled robot. It would be able to use familiar chemistry: the kind of chemistry that is used in living systems to make proteins and the kind of chemistry that chemists normally use in test tubes. Just as the ribosome can bond together amino acids into a linear polypeptide, so the assembler could bond together a set of chemical building blocks into complex three-dimensional structures by directly putting the compounds in the right places. The major differences between the ribosome and the assembler
are (1) the assembler has a more complex (computerized) control system (the ribosome can only follow the very simple instructions on the messenger RNA), (2) the assembler can directly move the chemical building blocks to the right place in three dimensions, and so could directly form complex three-dimensional structures (the ribosome can only form simple linear sequences and can make three-dimensional structures only by roundabout and indirect means), and (3) the assembler can form several different types of bonds (the ribosome can form just one type of bond, the bond that links adjacent amino acids).
You could also use rather exotic chemistry. Highly reactive compounds are usually of rather limited use in chemistry because they react with almost anything they touch and it's hard to keep them from touching something you don't want them to touch. If you work in a vacuum, though, and can control the positions of everything, then you can work with highly reactive compounds. They won't react with things they're not supposed to react with because they won't touch anything they're not supposed to touch. Specificity is provided by controlling the positions of reacting compounds.
There are a variety of things that assemblers could make. One of the most interesting is other assemblers. That is where you get low
manufacturing cost. (At Xerox, we have a special fondness for machines that make copies of things.) The idea of assemblers making other assemblers leads to self-replicating assemblers. The concept of self-replicating machines has actually been around for some time. It was discussed by von Neumann (1966) back in the 1940s in his work on the theory of self-reproducing automata. Von Neumann's style of a self-replicating device had a Universal Computer coupled to what he called a Universal Constructor. The Universal Computer tells the Universal Constructor what to do. The Universal Constructor, following the instructions of the Universal Computer, builds a copy of both the Universal Computer and the Universal Constructor. It then copies the blueprints into the new machine, and away you go. That style of self-replicating device looks pretty interesting.
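Von Neumann's key trick, using the stored description twice, once interpreted as instructions and once copied verbatim as data, has a compact software echo in the classic quine, a program that prints its own source. A minimal Python example (the two executable lines reproduce themselves exactly; the comment is not part of the replicated text):

```python
# The string s is the "blueprint": formatted once into program text
# (interpreted) and embedded once via %r (copied verbatim).
s = 's = %r\nprint(s %% s)'
print(s % s)
```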
NASA (1982) did a study called "Advanced Automation for Space Missions." A large part of their study was devoted to SRSs, or Self-Replicating Systems. They concluded, among other things, that "the theoretical concept of machine duplication is well developed. There are several alternative strategies by which machine self-replication can be carried out in a practical engineering setting. An engineering demonstration project can be initiated immediately. . . ." They commented on and discussed many of the strategies. Of course, their proposals weren't molecular in scale but were quite macroscopic. NASA's basic objective was to put a 100,000-ton, self-replicating seed module on the lunar surface. Designing it would be hard, but after it was designed, built, and installed on the lunar surface, it would manufacture more of itself. This would be much cheaper than launching the same equipment from the earth.
There are several different self-replicating systems that we can examine. Von Neumann's proposal was about 500,000 bits. The Internet Worm was also about 500,000 bits. The bacterium E. coli, a self-replicating device that operates in nature, has a complexity of about 8,000,000 bits. Drexler's assembler has an estimated complexity of 100 million bits. People have a complexity of roughly 6.4 gigabits. Of course, people do things other than replicate, so it's not really fair to chalk all of this complexity up to self-replication. The proposed NASA lunar manufacturing facility was very complex: 100 to 1,000 gigabits.
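To see how widely these estimates spread, it helps to put them on a common logarithmic scale, as in this small sketch (figures as quoted above; the NASA entry uses the 100-gigabit lower bound, and the human figure of 6.4 gigabits is just the genome, 3.2 billion base pairs at 2 bits each):

```python
import math

# Complexity estimates from the text, in bits.
complexity_bits = {
    "von Neumann's proposal": 5.0e5,
    "Internet Worm": 5.0e5,
    "E. coli": 8.0e6,
    "Drexler's assembler": 1.0e8,
    "human genome": 6.4e9,
    "NASA lunar facility": 1.0e11,  # lower bound of 100-1000 gigabits
}

for name, bits in complexity_bits.items():
    print(f"{name:24s} ~10^{math.log10(bits):4.1f} bits")
```

The spread covers more than five orders of magnitude, from worm to lunar factory.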
To summarize the basic idea: today, manufacturing limits technology. In the future we'll be able to manufacture most structures that make sense. The chief remaining limits will be physical law and design capabilities. We can't make it if it violates physical law, and we can't make it if we can't specify it.
It will take a lot of work to get there, and more than just a lot of work, it will take a lot of planning. It's likely that general-purpose molecular manufacturing systems will be complex, so complex that we won't stumble over them by accident or find that we've made one without realizing it. This is more like going to the moon: a big project with lots of complicated systems and subsystems. Before we can start such a project, though, there will have to be proposals, and analyses of proposals, and a winnowing of the proposals down to the ones that make the most sense, and a debate about which of these few best proposals is actually worth the effort to build. Computers can help a great deal here. For virtually the first time in history, we can use computational models to study structures that we can't build and use computational experiments, which are often cheap and quick, compared with physical experiments, to help us decide which path is worth following and which path isn't.
Boeing builds airplanes in a computer before they build them in the real world. They can make better airplanes, and they can make them more quickly. They can shave years off the development time. In the same way, we can model all the components of an assembler using everything from computational-chemistry software to mechanical-engineering software to system-level simulators. This will take an immense amount of computer power, but it will shave many years off the development schedule.
Of course, everyone wants to know how soon molecular manufacturing will be here. That's hard to say. However, there are some very interesting trends. The progress in computer technology during the past 50 years has been remarkably regular. Almost every parameter of hardware technology can be plotted as a straight line on log paper. If we extrapolate those straight lines, we find they reach interesting values somewhere around 2010 to 2020. The energy dissipation per logic operation reaches thermal noise at room temperature. The number of atoms required to store one bit of information reaches approximately one. The raw computational power of a computer starts to exceed the raw computational power of the human brain. This suggests that somewhere between 2010 and 2020, we'll be able to build computers with atomic precision. It's hard to see how we could achieve such remarkable performance otherwise, and there are no fundamental principles that prevent us from doing it. And if we can build computers with atomic precision, we'll have to have developed some sort of molecular manufacturing capability.
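The thermal-noise endpoint of that extrapolation can be made concrete. Landauer's bound puts the minimum energy to erase one bit at kT ln 2, about 3 zeptojoules at room temperature. The sketch below extrapolates a straight line on log paper toward that floor; the 1990 baseline of 1e-14 joules per operation and the rate of one decade of improvement every four years are assumptions chosen for illustration, not measured data:

```python
import math

BOLTZMANN_J_PER_K = 1.380649e-23  # Boltzmann constant
T_ROOM_K = 300.0

# Landauer's bound: minimum energy to erase one bit at temperature T.
landauer_j = BOLTZMANN_J_PER_K * T_ROOM_K * math.log(2)

# Illustrative straight-line-on-log-paper extrapolation (assumed values).
e0_j = 1e-14                 # assumed energy per logic op around 1990
year0 = 1990
years_per_decade_drop = 4.0  # assumed: tenfold improvement every 4 years

years_to_limit = years_per_decade_drop * math.log10(e0_j / landauer_j)
print(f"kT ln 2 at {T_ROOM_K:.0f} K: {landauer_j:.2e} J")
print(f"limit reached around {year0 + years_to_limit:.0f}")
```

With these assumed numbers the crossing lands in the mid-2010s, consistent with the 2010-2020 window quoted above; steeper or shallower slopes shift it by a few years either way.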
Feynman said: "The problems of chemistry and biology can be greatly helped if our ability to see what we are doing and to do things on an atomic level is ultimately developed, a development which, I think, cannot be avoided."
While it's hard to say exactly how long it will take to develop molecular manufacturing, it's clear that we'll get there faster if we decide that it's a worthwhile goal and deliberately set out to achieve it.
As Alan Kay said: "The best way to predict the future is to create it."
A. K. Dewdney, "Nanotechnology: Wherein Molecular Computers Control Tiny Circulatory Submarines," Scientific American 257, 100-103 (January 1988).
K. E. Drexler, Engines of Creation, Anchor Press, New York (1986).
K. E. Drexler, "Molecular Engineering: An Approach to the Development of General Capabilities for Molecular Manipulation," in Proceedings of the National Academy of Sciences of the United States of America 78, 5275-78 (1981).
K. E. Drexler, Nanosystems: Molecular Machinery, Manufacturing and Computation, John Wiley and Sons, Inc., New York (1992).
K. E. Drexler, "Rod Logic and Thermal Noise in the Mechanical Nanocomputer," in Proceedings of the Third International Symposium on Molecular Electronic Devices, F. Carter, Ed., Elsevier Science Publishing Co., Inc., New York (1988).
The Economist Newspaper Ltd., "The Invisible Factory," The Economist 313 (7632), 91 (December 9, 1989).
D. M. Eigler and E. K. Schweizer, "Positioning Single Atoms with a Scanning Tunnelling Microscope," Nature 344, 524-526 (April 15, 1990).
R. Feynman, There's Plenty of Room at the Bottom, annual meeting of the American Physical Society, December 29, 1959. Reprinted in "Miniaturization," H. D. Gilbert, Ed., Reinhold Co., New York, pp. 282-296 (1961).
B. D. Fromson, "Where the Next Fortunes Will Be Made," Fortune Magazine, Vol. 118, No. 13, pp. 185-196 (December 5, 1988).
NASA, "Advanced Automation for Space Missions," in Proceedings of the 1980 NASA/ASEE Summer Study, Robert A. Freitas, Jr. and William P. Gilbreath, Eds., National Technical Information Service (NTIS) order no. N83-15348, U.S. Department of Commerce, Springfield, Virginia (November 1982).
A. Pollack, "Atom by Atom, Scientists Build 'Invisible' Machines of the Future," The New York Times (science section), p. B7 (November 26, 1991).
D. Swinbanks, "MITI Heads for Inner Space," Nature 346, 688-689 (August 23, 1990).
J. von Neumann, Theory of Self-Reproducing Automata, Arthur W. Burks, Ed., University of Illinois Press, Urbana, Illinois (1966).