At some time in my late forties I realized that I had "made it." My research was internationally acclaimed. I was the chairman of the best biology department in the world. I no longer needed to "prove myself" to myself or to others. I had achieved in my research and career what I had set out to achieve so amorphously many years earlier.
In a sense all of that accomplishment was like the uncoiling of a spring that had been formed very early in my life—the realization of values and goals imprinted as a child and focused at MIT. It was as though the program had come to the end of the tape. But where to now? With the imprinted goals achieved, what came next? Some would call this a mid-life crisis, but I had no sense of crisis, rather one of relief and decompression. Long-repressed interests outside of science reemerged.
The tone and ambience of science were changing—the result of its own success. When I entered science, it was in the popular mind an arcane but benign profession. It was peopled largely by scholars who had a driving curiosity to know how the natural world worked, and who could be nothing else but scientists. As a career, science did not promise much material reward but offered great intellectual satisfaction.
The Second World War had convincingly demonstrated the value of science to national security, health, and industry. Government funding then greatly augmented the resources available to science and, with them, the potential of careers in science. During the 1960s, I observed the entry of a somewhat different type of science student—bright but not driven—who chose science as an acceptable professional career but who might just as well have become a lawyer, physician, or developer. Between 1948 and 1988, the number of Ph.D.s awarded per year in the biosciences increased more than eightfold.
Science could now be performed on a scale commensurate with the problems at hand, with a resultant acceleration of progress. And as science and its offspring, technology, became more and more important in the economy, as society was required again and again to adapt to new technological change, and as some of the impacts of the new technologies (e.g., nuclear weapons, pollution, toxic waste, even overpopulation) appeared less than desirable, so the societal ambience of science changed.
Scientific administration proliferated. The allocation of resources for science inevitably became at least partially politicized, and the societal view of science and technology became increasingly critical, whether as a backlash against unsought and unwanted change, a response to perceived environmental degradation or military emphasis, or an expression of a latent anti-intellectualism. Public support for science in a society largely ignorant of its content is inherently volatile, resting on the public perception of its consequences. Science is still perceived as arcane, but now it is also considered potentially malignant.
These observations led me to an heretical thought: Should there—can there—be limits to inquiry? The very question incites distress and revulsion in a scientist, whose credo must be that knowledge is good and more knowledge better. But the logic of the recombinant DNA controversy forced me to consider the question as dispassionately as possible. Had it ever arisen before—among scientists, that is, not among those committed to religious or ideological dogma? Yes, it had, to nuclear physicists. Frederick Soddy, a colleague of Lord Rutherford's, had written in 1920:
Let us suppose that it became possible to extract the energy which now oozes out, so to speak, from radioactive material over a period of thousands of millions of years, in as short a time as we pleased. From a pound weight of such substance one could get about as much energy as would be obtained by burning 150 tons of coal. How splendid. Or a pound weight could be made to do the work of 150 tons of dynamite. Ah, there's the rub. . . . It is a discovery that conceivably might be made tomorrow in time for its development and perfection, for the use or destruction, let us say, of the next generation, and, which it is pretty certain, will be made by science sooner or later. Surely it will not need this actual demonstration to convince the world that it is doomed if it fools with the achievements of science as it has fooled too long in the past.
And many of the physicists who worked at Los Alamos on the atomic bomb during World War II had deep qualms. As Victor Weisskopf has written: "Many of us hoped that the number of neutrons per fission would be low enough to prevent the making of a bomb. But it wasn't."
To pose the question invites a paradox. How can we know what we would not want to know? The paradox is compounded by the tautology that the most important discoveries are those least expected.
We accept limits on modes of inquiry. We do not experiment on involuntary human subjects; we seek to minimize suffering in experimental animals (some would ban all animal research); we avoid experiments that might produce irreversible environmental effects. Implicitly, we value human dignity and our sense of responsibility for other life forms and for the planetary environment more than the knowledge that might thus be acquired. Most often, we assume or hope that the knowledge sought can be obtained by other, more acceptable means.
But is there any knowledge we would not want to have, regardless of how it was obtained? Knowledge too dangerous, too destabilizing? Our species evolved and survived with the means to cope with the dangers of a world of human scale. At that scale, the species could tolerate the not insignificant level of irrationality that we know to be present in human behavior. But beneath the surface of that human-scale world lie structures and forces of great power, which we have learned to understand and manipulate to human purpose. Could there be elements of these powers that would be mortally dangerous if available to irrational minds? Does nature set traps for unwary species?
If hydrogen bombs could be readily made in a garage workshop, all of human society would be in mortal peril. Happily, it is not that easy. But would we encourage, or even allow, research to make such simple manufacture possible? Does biology, through recombinant DNA, offer a potential analog? And, if so, could inadvertence, let alone malevolence, unleash disaster?
I was, however, as I am today, most reluctant to concede that humanity should forego knowledge of any kind forever. I therefore thought to introduce another dimension into consideration—time. Discoveries that might be of great potential harm in one era might be innocuous in another. In a truly peaceful world, the discovery of atomic energy might never be applied to weapons. Should then more research and intellect have been devoted to the search for routes to lasting peace and less to nuclear physics?
Is there a preferable sequence for discovery, just as there is a necessary sequence for the onset of gene expression during the development of an organism? Are there more opportune times for the exploration of certain areas of knowledge? Could we discern such a program and apply it to the allocation of research resources?
These seemed to be meaningful questions, worthy of sustained exploration. Such emerging concerns needed to be addressed by minds well informed and perceptive as to likely future developments. But the best minds of science were focused elsewhere. Caltech, MIT, and the National Academy were largely ignoring this rising tide, except when it directly affected their specific interests.