SDI And Earlier Military Projects: A Comparison
All major military projects have been "political" in the sense that those who decide to support them have policy objectives in mind. The United States' decision to develop the atomic bomb was a watershed in
the degree to which the political authorities became committed to a program involving research as well as development. Unlike other projects aimed at perfecting existing weapons, this one was designed to conduct a complex series of experiments in an effort to discover whether a wholly new weapon (which was thought capable of introducing a radically new dimension to warfare) could be developed. The decision was made in conditions of secrecy at the highest level of the executive branch and was influenced by the fear that a wartime enemy, Nazi Germany, might be the first to develop such a weapon. Scientists played an important role in the decision—far more important than they had played in the initiation of military-technology programs in the past—because it depended critically, first, on their judgments of technical feasibility and, second, on their commitment to make the concept work.
The success of the Manhattan Project set two fundamental precedents. One was that national security was now closely coupled to progress in science and technology, at least in the physical sciences. Although the team that had developed the atomic bomb was dispersed at the end of the war, the project was revived a few years later because of a growing appreciation in the United States that a continuing effort had to be made to support science and technology for purposes of defense. This decision was not a result of domestic pressures from defense industry or the military services but was largely a reaction to the intensification of the cold war. The U.S. government, spurred on by public concern, was reacting to Soviet success in developing an atomic weapon, the Berlin blockade, creation of the Sino-Soviet bloc, and the outbreak of the Korean War. Even though the United States enjoyed a clear advantage in military technology over the Soviet Union, it seemed imperative that the partnership between the federal government and the military laboratories as well as with the research universities be preserved and strengthened in order to maintain that advantage, especially in view of the U.S.S.R.'s potential manpower and tactical superiority in Europe. This partnership represented a considerable break with a past in which the federal government was often constrained on constitutional and ideological grounds from either becoming an economic actor or subsidizing private industry. Especially in the Progressive era, government had been conceived of as a watchdog over industry, a champion of free enterprise against combinations, trusts, and oligopolies. After World War II, however, concern for national security put the government in the unaccustomed role of underwriting the risk of private contractors and of reinforcing industrial oligopoly in order to support the most essential military contractors.
The United States has so far resisted pressures to go even beyond these arrangements. It has not created a cabinet-level department to coordinate science and technology, nor has it adopted a formula for national R & D expenditures based on the theory that overhead costs should be borne by the government. The United States' approach has been sectoral and pragmatic and has so far not engaged the federal government consistently and openly in the support of civil as well as military research and development. Despite the evidence that other countries have profited from government planning and direction, the United States has adhered to the classical liberal principle that private industry can do the best job of promoting innovation in the civil sector if left free of government interference. In the two principal areas of federal involvement, defense and space exploration, the U.S. pattern has been to emphasize work performed by private industry but subsidized by public R & D contracts and procurement. In supporting this R & D, the federal government relies on the "project system," except in the case of three nuclear-weapons laboratories—Los Alamos, Livermore, and Sandia—where research is supported on a nonprofit, "level-of-effort" basis—much as the Soviets do in their missile and other weapons-research facilities. Although some critics argue that the United States would be better off admitting that it had created a "contract state" or a "managed economy," and nationalizing those defense firms whose work is almost entirely dependent on federal contracts, such steps have been resisted on the ground that they violate traditional American norms and would be counterproductive.
Despite such disputes about methods, it was well recognized, especially in the late 1950s, that national security required a wide range of activities in R & D of all sorts. Various factors influenced decisions on research priorities: the perceived military need, technical feasibility, and the comparative cost vis-à-vis other existing or new systems. In an effort to achieve better control of the process, Secretary of Defense Robert S. McNamara introduced cost-benefit analysis for comparing weapons systems rather than allowing the services to pursue their own needs and to iron out priorities by interservice bargaining. Although the need to reconcile economic and strategic goals with the priorities set by each of the services continues to bedevil defense planning, McNamara's efforts at least compelled the planners to think about the interrelationships of weapons and their roles in general strategic doctrine.
Although it would be too much to say that strategic doctrine alone fueled the quest for new military technology, it is certainly true that the advent of the atomic bomb gradually led to the adoption of the belief in
strategic deterrence, and that this belief influenced the development of offensive forces. The decision to employ intercontinental ballistic missiles as delivery vehicles for atomic weapons was only a more efficient way to implement a strategy already in place. The Soviet Union undoubtedly had something to do with this decision, but it probably would have been made anyhow. The decision to build the thermonuclear bomb was different. It was not obvious in 1949, when the matter was first considered, either that the bomb could be designed or that it would confer some military advantage not supplied by the atomic bomb. Accordingly, the scientists on the General Advisory Committee (GAC) to the Atomic Energy Commission recommended almost unanimously against a crash program to develop the new, more powerful bomb. President Truman overruled the GAC on the advice of a small group of scientists committed to the idea. Truman and his political advisors were uneasy that the Soviets had developed an atomic bomb of their own, fearing that they might develop a thermonuclear bomb ahead of the United States and use their resulting superiority to achieve advantages, much as Secretary of State James F. Byrnes had tried to do with "atomic diplomacy."
The H-bomb decision deviated from the wartime pattern in that the president decided to proceed over the opposition of key scientific advisors, who had warned that a crash program was not only premature but also costly in terms of other military needs, such as continental air defense and tactical atomic weapons. Nevertheless, the president did not reach his decision arbitrarily or in defiance of informed opinion. In fact, the administrators of the Los Alamos Laboratory had recommended that research on the H-bomb be accelerated, along with other projects. Karl T. Compton, head of the country's highest-ranking defense-science advisory body, the Research and Development Board, which was attached to the Department of Defense, had recommended going ahead with the effort to develop the H-bomb. This decision, then, was based on recommendations of those with operative responsibility and on the opinion of senior scientific advisors, even though it had been opposed by the GAC. Congress's role in this decision was almost as limited as it had been in the case of the atomic bomb, except that one senator, Brien McMahon (D., Conn.)—who had a key role in such matters as chairman of the Joint Congressional Committee on Atomic Energy—worked with the air force in promoting the case for a crash program.
Since then, the general pattern in weapons development has been that new ideas have percolated up from the industrial contractors, the DOD laboratories, and the federally funded research centers. The ideas move
up through a hierarchy of decision makers, including those in charge of military R & D and the military planners, and are reviewed—if they reach the top levels of the Pentagon—by special outside advisory committees made up of experts from industry and academia, in addition to senior officials, elected as well as appointed. Ideas that involve new weapons that either parallel new Soviet developments or act to counter them are especially likely to run this entire gauntlet. In 1954, for example, a U.S. ICBM program both paralleled the Soviet ICBM project and seemed to solve the problem of how to penetrate Soviet air defenses. The ICBM's unique speed was also an important consideration. Such proposals reach the White House only after lengthy, thorough, and iterative reviews in the defense establishment. When proposals come to the attention of the president without undergoing this process, they have rarely if ever been acted upon.