Preferred Citation: Clarke, Lee. Acceptable Risk?: Making Decisions in a Toxic Environment. Berkeley: University of California Press, 1989. http://ark.cdlib.org/ark:/13030/ft5t1nb3k6/


 
Eight—
Organizing Risk

Conclusions about the Binghamton case must begin on a positive note that nevertheless has grim implications. Although the accident was not as bad as it could have been, similar accidents are likely to occur again and to have even more dire consequences. In a sense, the PCB solution used in the coolant mixture for the building's transformers was a blessing. Had the transformers used a nontoxic coolant mixture based on mineral oil (the common coolant before PCBs, and still used in many buildings), the State Office Building would probably have burned to the ground. Binghamton's fire chief testified that such a traditional coolant would have created "a towering inferno." In that event, the problems of toxic chemicals would not have plagued the building, the exposed people, or decision makers. But we must remember that the complete contamination of the SOB was an accident, and as such is a relatively rare occurrence. Towering infernos would occur more frequently if there were no PCB-containing transformers. Important advantages obviously attend the use of risky technologies (Wildavsky 1979).

The fortuitous timing of the accident—early in the morning (5:30 A.M.)—also moderated its potential for disaster. Had the accident occurred a mere four hours later, more than seven hundred people working and conducting business in the building would have suffered acute exposure to toxic chemicals far surpassing the levels at Seveso, Love Canal, and Michigan.[1] We cannot know if deaths would have resulted from exposure of that intensity, but it is possible that many would have died in the panic to get out of the darkened building. It is certain that an accident during working hours would have increased the probabilities of chronic ailments. "If this building had gone up at three o'clock in the afternoon," one DOH official remarked, "this would have been a horse of a different color . . . Be grateful" (Fecteau 1981f). Thus, we survive to discover that the accidents our organizations create can always be worse.

[1] See, respectively, Whiteside (1979), Levine (1982), and Egginton (1980).

Another paradox in the story of the State Office Building is that, in some sense, the accident could not have happened in a better place. The scientific and engineering apparatus used by the Office of General Services and the New York Department of Health are regarded, even by their counterparts in the private sector, as among the most advanced in the world. Instruments to measure traces of dioxin, for example, are very sophisticated, and the scientists who use them are highly trained. If the accident had occurred in Montana, the very low measurements taken and used by OGS and DOH (however haphazardly collected) might not have been available for us to examine critically.

Similar accidents have been reported in Chicago; San Francisco; Tulsa; Toronto; Clearwater, Florida; and Syracuse and Albany, New York, but none was as extensive. We can only hope that the timing of future accidents will be as forgiving as this one was.

Throughout this book I have sought answers to two main questions: How do organizations behave under conditions of ambiguity? What can organizational sociology teach us about risk management and risk assessment? In this chapter, I first trace the process through which solutions to the SOB's problems were constructed. Along the way, I evaluate the ability of extant theories to explain that process. Next, I more explicitly mine organization theory for clues to interpret the Binghamton case, arguing for a modification of currently favored models of decision making. In concluding, I return to the twin themes of this book—ambiguity and the organizational context of risk assessment—and propose some directions in which I think the sociology of risk should move.

All important accidents, by definition, produce social disruptions (Erikson 1976; Kreps 1984). Accidents with the potential to affect innocent bystanders (those who cannot be aware of the risks) and future generations pose dilemmas that are peculiar to modern technologies (Perrow 1984). One of the most important dilemmas decision makers must resolve is the proper balance between organizational and public health. Before turning to this topic in the final section on risk assessment, let us examine the SOB accident to see how organizations make sense of the inherent ambiguities that beset such situations.

Disruption of Routine

Imagine the day before the accident at the SOB, February 4, 1981. Federal agencies such as the Environmental Protection Agency are adjusting to the Reagan administration's budget cuts and policy changes. The National Institute for Occupational Safety and Health, which advises the Occupational Safety and Health Administration, is running experiments and conducting epidemiological research. At the state level, the New York Department of Health is engaged in its multifarious programs to oversee the medical profession in New York, certify hospitals, inspect restaurants, issue directives to branch offices, and conduct biological, epidemiological, and chemical research. The two major divisions of the state's Office of General Services—the design and the construction and maintenance divisions—are busy doing the work of engineers and keeping New York's buildings operating. The New York Department of Environmental Conservation is monitoring private corporations for violations of pollution and toxic-substance laws, certifying landfills, and otherwise trying to keep a step ahead of New York's myriad threats to environmental integrity.

At the local level, both city and county governments are preoccupied with the prosaic but important functions of budgeting, legislating, and public relations. Most of the Broome County Health Department's resources are engaged in administering its Women, Infants, and Children program; inspecting hotels; and maintaining its schedule of home health care visits for the indigent, old, and disabled.

On February 5, the accident disrupted these routines, creating, from all accounts, social chaos (at least in terms of the proper response to the SOB and its victims).[2] As we have seen, the event created two general classes of problems. First were the technical uncertainties of how to run medical surveillance and decontamination: Who should be covered? What ailments should be anticipated? Would a medical monitoring program help the exposed? Should a surveillance study even be run? Could the building be rid of the toxins, and, if so, how? When should cleaning stop? How safe is safe? The second class of ambiguities—which might be termed sociopolitical—involved the lack of any semblance of social structure for the distribution of legitimacy, authority, responsibility, and power among organizations.

[2] There are two main differences between disasters such as the SOB, Love Canal, Michigan, Bhopal, and TMI, on the one hand, and natural disasters, on the other. First, for natural disasters experienced agencies exist to mitigate the harshest effects of dam breaks, earthquakes, fires, and floods (treating the sick and injured and burying the dead). Second, the rescue effort for toxic disasters is necessarily more limited than it is for victims of natural disasters. If a flood demolishes a town, the sooner the rescuers and the victims meet, the sooner will follow public (and individual) health. Once a dioxin exposure has occurred, the best rescuers can do is monitor victims for early signs of disease. For these reasons, studies of natural disasters would be of little help here.

It will help to organize the narrative by separating it into two phases. Figure 2 (opposite) graphically represents the progression of events in Binghamton. It also illustrates the relationship between the sociopolitical and the technical ambiguities. As time passed, a division of labor gradually emerged from organizations negotiating over definitions of authority and responsibility. I argued in previous chapters that the organizations needed to settle these negotiations before the more technical solutions to the problems of the contaminated people and the contaminated building could be constructed. As the Binghamton story unfolded, the technologies of medical surveillance and decontamination became increasingly sophisticated, arcane, and elaborate. A series of interorganizational conflicts had to be resolved, however, before these technologies were defined, accepted, and implemented.

Phase One:
An Interorganizational Garbage Can

Figure 2.
Organizing Risk: The Binghamton State Office Building

The early phase of the crisis set into motion organizational behavior as abstruse to outside observers as the accident was to the organizations that had to contend with it. Within two days after the fire in the SOB, the Office of General Services sent a team of untrained janitors into an extremely toxic environment, risking lawsuits as well as workers' health. OGS, as if to mock organization theorists (Goodman and Pennings 1977), acted as the model of bureaucratic efficiency, applying a very effective solution, but to the wrong problem (i.e., the building was contaminated, not just dirty). In the subsequent three weeks, this first cleanup effort served mainly to spread, not contain, the toxins because of doors left unlocked, petty theft, and documents and cars that left the building. The New York Department of Health responded much as it had to Love Canal, trying to minimize the importance of the accident, and therefore aggravating the negative consequences over the long term (Levine 1982). The Broome County Health Department began a program of medical monitoring whose lack of order reflected the department's lack of requisite personnel, expertise, and funding and its relative isolation from agencies such as EPA and NIOSH. Physicians, unable to tell their patients if they were sick or would become sick, or indeed what could be done if they became sick, were concerned about possible lawsuits, or, as they saw it, being put in a position of responsibility for the blunders of others. The local media also reflected the general chaos, reporting contradictory facts from officials who contradicted one another.

The first phase of the story, which lasted about nine months, is best described as an "interorganizational garbage can" (Cohen, March, and Olsen 1972; March and Olsen 1979). This multiorganizational garbage can possessed several characteristics. First, there was no clear definition of the problem. In the chapters on medical surveillance and decontamination I criticized the assumptions that underlay the technologies of the medical surveillance study and that defined acceptable levels of contamination. During the first phase, even the possibility of medical surveillance and cleanup was being negotiated by several organizations, mainly the Broome County Health Department, the New York State Health Department, and the Office of General Services. The first surveillance efforts by the county health department created a normative constraint on DOH to follow through with the study. BCHD framed the question of the value of medical monitoring early on and in such a way that if DOH terminated the program, it would be taken as an admission that public health was not being taken seriously.

Looking back, there seem to be functional reasons why the county health department could not have directed long-term medical surveillance. Pfeffer and Salancik (1978), who represent an important perspective on relations between organizations (known as resource dependence theory), argue that in our world of scarcity, organizations cannot internally generate all the resources necessary for attaining their goals and must therefore exchange resources with others in their environments (see also Aldrich and Pfeffer 1976). Another reason organizations enter into formal and informal arrangements is that problems sometimes arise that are beyond the capabilities of any single organization to solve (Gottfredson and White 1981; Metcalfe 1981). Normally, resource dependence theory maintains, exchanges between organizations lead to interdependencies, from which shared expectations can develop (Levine and White 1961; Aldrich and Whetten 1981). In the Binghamton case, the cost of the PCB blood tests alone was more than $40,000, not including the opportunity costs of labor, machinery, and public relations. Resource dependence theory and common sense suggest that medical surveillance would require a great many resources if it were to have any longevity and validity. Thus, an efficient and effective study would have to be run by the New York State Health Department. Upon closer inspection, however, it is clear that functional necessity was not the reason for these events. Contrary to resource dependence theory, the emergent division of labor in Binghamton did not reflect either the demands of efficiency or an agreement on which values should be pursued. For example, it was not until the PCB test results began to be returned from Chemlab that the state health department took an active interest in medical monitoring.

Neither the demands of the tasks themselves nor the imperatives of trading scarce resources led to the eventual distribution of tasks. Instead, organizations negotiating a definition of acceptable levels of risk played a major role in determining responsibility. These conflicts and disagreements were not simple disputes between technicians over trivial details, but political battles over basic questions of legitimacy and accountability. During phase one, "the problem" came to mean other organizations, not the SOB itself. Until interagency competition abated, the demands of the exposed individuals and the building cleanup were relegated to a secondary status.

In one sense there was a chorus of agreement: The players in the State Office Building drama all professed—sincerely, I believe—a strong commitment to public health. Yet, as I have argued, meta-goals such as "public health" and "acceptable risk" are of little help in understanding how organizations make tragic choices. Given the uncertainties engendered by the SOB fire, these goals could hardly be more than wishes for a remote and vague future. Goals such as these are master metaphors for the processes through which social goods are organized and allocated. Behind the metaphors are organizations with choices to make, struggling among themselves over ways to control one another and their environments.

A second mark of this interorganizational garbage can was that no organization, or set of organizations, possessed an obvious right, obligation, or mandate to deal with the tragedy. OGS owned the building, but the effects of the accident transcended its walls. Ownership was therefore not a necessary and sufficient condition to establish authority over its disposition. In the language of garbage can theory, the situation lacked an "access structure" (March and Olsen 1979, 28–31).

Access structures are patterns of interaction that direct solutions to problems. The term "solution" does not necessarily imply meeting the requirements of a task so that a problem no longer exists. It simply denotes a range of alternatives, each of which might reasonably qualify as organizational policy. I have argued, as have March and Olsen, that organizations can be conceived of as repositories of solutions. Mechanisms (or the lack thereof) that direct organizations toward or away from problems are therefore social technologies used in making choices. Because the organizations in the garbage can phase lacked mandates and norms that clearly demarcated responsibilities, the flow of solutions to problems was not well regulated. As a result, the potential to influence decisions was fluid. We might hypothesize that access to decision making is pluralistic when access structures are not well defined.[3] This is one of the most important reasons why the county health department was able to shape policies in Binghamton, at least during phase one, even though it did not command superior tangible resources.

Another important consequence proceeded from the lack of an access structure among organizations. In chapter 3, I described the ease with which local, state, and federal government organizations were able to buffer themselves from the demands of the SOB. Indeed, of the potential pool of participants, most organizations were able to avoid becoming central players. In almost all those cases, administrators cited their lack of expertise. Closer inspection reveals, however, that lack of expertise explained these demurrals in only a few cases. Instead, a constellation of political demands both constrained and gave discretion to organizations seeking to distance themselves from the accident. When most organizations left the garbage can, what remained were the New York State Health Department and the Office of General Services, on the one hand, and the Broome County Health Department, on the other, to form an uncooperative set of organizations to whom the media, public, and other organizations turned for answers.

Political positions polarized early in phase one, as BCHD, OGS, and DOH tried to define what problems should receive attention and who should be the arbiter of reasonable policy. The organizations disputed control over resources and control over symbols. Throughout phase one, medical surveillance languished as organizations constrained one another into inaction. BCHD controlled the data from the blood tests and was committed to an extended study because, as its director put it, "like asbestos, these chemicals remain in the body and are carcinogens, so chronic surveillance seemed the best first reaction; the study can always be limited later." But DOH sought to wrest the data and unanalyzed samples from BCHD, doubted that the SOB posed any important risk, and thus doubted the very merit of medical surveillance. DOH finally agreed that some type of surveillance was appropriate, but state and county officials could not agree on who should qualify for inclusion in the program. BCHD proposed testing everyone who might have been exposed, "maybe one or two thousand people." State officials, however, wished to confine the definition of exposure to those who could prove they were actually in the building (thus excluding those whose route of exposure was the contaminated garage or the surrounding environment). The two health departments disagreed on how surveillance would be conducted, what body functions should be monitored, who should talk to the media (and what should or should not be said), who should be included in the study, and how the public should be informed.

[3] Following a similar logic, Laumann, Knoke, and Kim (1985) argue that in turbulent environments the degree of interest in a policy domain is a crucial determinant of policy outcomes.

BCHD maintained that DOH control of medical surveillance entailed a conflict of interest, that more body functions should be watched, that warnings should be given to pregnant women, and that a disinterested expert in environmental medicine should run medical monitoring. While under Schecter's direction, the county health department exhibited a policy of openness with the media, proposed the inclusion of four times more subjects in the study than the state health department, and promoted a relatively open decision-making process. After several futile attempts to iron out these differences, and some critical exchanges in the media, DOH threatened not to pay a local hospital for services it authorized and withdrew BCHD's warrant to state funds, seven weeks after the accident. Not until after Schecter was fired, at the end of what I have labeled phase one, was a research protocol developed for medical monitoring. (Indeed, an organized medical monitoring program did not begin until a year after the fire.) The state's study includes only those who can prove they were exposed within the State Office Building.



But BCHD was able to wield another type of influence even after losing control of important tangible resources. Under ambiguous conditions, as March and Olsen (1979) suggest, symbols play a heightened role in social interaction. By maintaining close contact with the local media, acting as one might expect a cautious health department to act, BCHD gained access to an important social stepping-stone between state decision makers and their environments. As long as that relationship continued, BCHD was able to force OGS and DOH (as well as county officials) to respond to criticism and thus to modify important policies. It was not until the end of the first phase that state organizations gained symbolic control of the SOB.

A final characteristic of this interorganizational garbage can was that mechanisms for information exchange were unrestricted. In phase one, the media had direct access to administrators and workers, even though state organizations issued an order, on March 1, 1981, that only official representatives were authorized to speak to the media. Until September 1981 (nine months after the accident), interchanges among officials, scientists, and the media were frequent and published. As one might expect, this relationship was not always cooperative. Officials were not always candid, and there was the occasional inflammatory headline ("State May Leave Some PCBs," "The Tower of Death"). But media access to policymakers and experts facilitated relatively open, if somewhat crude, public debate over important issues. Scientists, bureaucrats, and policymakers could barely mention plans about public health and cleanup without reporters questioning reasons and intentions (and sometimes suggesting alternatives of their own).

At first, reporters were as unfamiliar with the strange, deadly substances as everyone else, so they developed their own networks of experts to explain the complexities and dangers of what they were trying to report. This mining of technical expertise allowed reporters to ask informed questions about public health and policy. In arming themselves with technical minutiae, reporters often served as links between opposing views. Although not frictionless conduits, these links were often more effective than official dialogues. Fortified with the testimony of a chemist, for example, who had discussed the minuscule amounts of dioxin necessary to induce birth defects, reporters would then ask those in policy-making positions what they intended to do to avoid the dread effects. It would stretch the point to argue that the media served as watchdogs, but they did serve as an important mechanism through which the question of acceptable risk could be posed to those with the political responsibility to decide such issues.

These observations about the media are important. Risk assessment literature tends to view the media as conveyors of misleading or inaccurate information regarding risks (Combs and Slovic 1979; Slovic, Fischhoff, and Lichtenstein 1979; Wildavsky 1979; Douglas and Wildavsky 1982), thus causing the public, and others, to hold unreasoned and unreasonable positions concerning hazards. The progression of events in Binghamton suggests that this perspective is not very useful. During phase one, the media did not display the sensationalism one might expect from ill-informed reporters, but, rather, struggled in earnest, and usually with success, to understand what they were trying to report. When given access, reporters interviewed decision makers and experts as frequently as possible. Of course, this meant contradictions were bound to appear, because officials and experts were contradicting one another and themselves. But there is no evidence, from Binghamton at least, that accurate portrayals of these contradictions and the internal conflict in organizations (whose task it is to solve problems) necessarily lead to irrational publics or demands for excessive caution. Indeed, because definitions of acceptable risk, like most important public issues, are fundamentally about political value and moral choice, there is every reason to encourage the media to report conflicting and contradictory positions.

These contours—competing definitions of the problem, lack of centralized authority, and political accountability (through the media) of decision makers—were the major characteristics of the interorganizational garbage can in phase one.



Phase Two:
The Action Set

By the beginning of phase two, all the organizations except for BCHD, DOH, and OGS had extracted themselves (or been pushed away) from the problems in Binghamton. As phase two progressed, BCHD's role was lessened, and authority, information, and power became centralized in state organizations. An amorphous collection of more or less independent organizations emerged, with a structured division of labor—or what Aldrich (1979) calls an "action set." An action set is "a group of organizations that have formed a temporary alliance for a limited purpose" (see also Aldrich and Whetten 1981). Like organization sets (Evan 1966) and, sometimes, networks (Milner 1980; Mintz and Schwartz 1985), action sets tend to develop a normative order for effective decision making and efficient allocation of resources. Implicit in this idea is that the members of an action set hold a domain consensus (Levine and White 1961), or mutual agreement on a division of responsibilities, and that the allocation of tasks among the organizations is based on comparative advantage. Although by the beginning of phase two there was indeed a structure to the organizational field (Warren 1967), the ability to structure the participation of other groups in key decisions was more important to the process of constructing solutions than either expertise or efficiency.

In phase two, state organizations established themselves as the legitimate "owners" of the SOB's risks, thus resolving the issue of symbolic or ideological control. Gusfield (1981) argues that the ability of actors to establish "ownership" of risk centrally shapes the construction of solutions to public problems. In the context of this study, this means that once issues of responsibility and legitimate authority were settled, some solutions to the problem of the toxic SOB had a higher probability of being pursued than others. After state organizations successfully came to own the building's risks, there was little likelihood that the destruction of the building would be publicly considered. Similarly, the scope of medical surveillance became very narrowly defined, excluding potentially important symptoms (reproductive failures, mental health) of concern to the exposed individuals.



After open challenges to the state's authority began to wane, the Office of General Services began a publicity campaign to dispel the widespread distrust of its policies expressed by the media, the public, the Citizens' Committee, and others. In a series of open meetings, the public was granted symbolic opportunities to influence decisions. At these meetings, DOH and OGS officials explained, "in laymen's terms," what they intended to do and how. But it would be a mistake to describe these gatherings as an attempt to involve the public in defining acceptable risk. The agendas of the meetings were arranged so that substantial criticisms did not arise, data were not released until the meeting, and the presentations were arcane and did not answer the questions to which outsiders sought answers. Sometimes the meetings were held during working hours, which prevented most citizens from attending.

By the end of phase two, the only legitimate interpretations of scientific data and solutions were those proposed by the state's department of health and the Office of General Services. Questions of acceptable risk and adequate medical surveillance no longer received the public attention they once had (although Citizens' Committee meetings, letters to the editors of the local papers, and Schecter's speaking engagements tried to rekindle debate). Once state organizations owned the State Office Building's risks, plans for decontamination and medical surveillance began to take shape. In phase two, the state hired a professional toxic-cleanup firm to develop plans for decontamination. That firm, as I argued in chapter 6, was an important part of the solution to the toxic problem.

As the structure of relationships among organizations developed, the production and dissemination of information became centralized. Questions from public representatives were increasingly referred to public relations personnel, who distributed "press-type information." One consequence of this tight coupling of organizations with information was that issues once debated openly (e.g., Acceptable to whom? Healthy according to what standards?) became inaccessible to all but those most closely allied with state organizations. In addition, state officials met with editors of the local papers and convinced them to relax their criticism of state actions. During phase one, reporters spent a large part of their time researching the intricacies of toxic chemicals and interviewing the many people who might influence policy. They interviewed dissidents and tracked down unofficial leads. During phase two, reporters relied far more heavily on quotes from officials. Thus, accompanying the state's assumption of the SOB's risks was the elimination of an important mechanism of political debate—the crucial element, of course, in all questions of acceptable risk.

Interestingly, formal risk assessments (which, according to rationalist decision theory, ought to be mechanisms that structure social action) were not generated until after the key decisions had been reached (see fig. 2, last line).

The Structural Basis of Individual Dissent

The role of Dr. Arnold Schecter in the Binghamton case warrants comment. It is true that some of Schecter's personal characteristics were important. His training in preventive medicine and occupational health, his extraordinary energy and ability to articulate reasonable criticism in a short reaction time, and his willingness to openly question the authority of New York State both endeared him to members of the media and made him a valuable asset to them. During the State Office Building crisis, Schecter and the media developed a mutually beneficial relationship. He provided good copy by criticizing the state; the media, in turn, served as a mechanism through which he could voice his concerns. There are, nevertheless, at least two larger lessons.

First, it should be noted that Schecter did not overstep his mandate as the local health officer. Although state officials were quick to denounce Schecter and his actions as alarmist or self-serving, his actions and decisions were in fact those we might expect of any public health officer. At the same time, it is also true that by openly criticizing state, and sometimes county, policy Schecter was putting his job as county health director in jeopardy, and in this sense his actions were not what we might expect from any public health officer. The solution to this apparent puzzle is not found in Schecter's personal qualities but in his structural position. Although Schecter was director of the Broome County Health Department, he was also a tenured professor at SUNY Binghamton's medical school. He thus had a secure position enjoyed by few county health commissioners. To the extent that Schecter was responsible for generating public debate regarding important policies and decisions, he was able to do so because he was occupationally secure. This observation is disturbing, for it suggests that in instances like Binghamton's, the public may not be able to count on local health officers to protect their interests.

Second, although during phase one the media and the county health director frequently traded information, after Schecter's ouster he no longer enjoyed the status of quoted critic. Why? The local papers were not co-opted until after Schecter was ousted, so co-optation does not account for his subsequent neglect by the media. Nor did Schecter's expertise as a physician and public health official disappear simply because he no longer worked for the county. One lesson to learn from Schecter's role is that access to an organizational base of power is an important intervening variable in disputes over legitimacy. Whatever Schecter's personal qualities and qualifications, it was the office that mattered, not the person. Losing his institutional legitimacy meant losing his status as expert, and being known as an expert increases the probability of attention from the media (Gans 1979; Pearson 1984).

Theories of Choice

Theories about how problems are solved highlight certain aspects of organizational behavior and, by necessity, neglect others. In the simplest model, complete and valid information is passed through the organization to those at the top, who then choose one solution from several or many. In this model (the theory of rational choice), selection of alternatives is governed by demands of efficiency and criteria of organizational effectiveness. The theory requires these criteria, as well as decision makers' preferences, to be well defined; otherwise, enough uncertainty is introduced into the model that its predictive powers are considerably lessened. Obvious uses of this overly rational model of decision making are elusive. The model is slippery because it is more often assumed than argued explicitly. No theorist of decision making (save perhaps the neoclassical economist) openly argues for completely rational choice. Yet any given issue of Administrative Science Quarterly, a major scholarly journal on organizations, will contain an article that covertly adopts the tenets of rational choice theory. Most often, these tenets are found in works that do not directly address decision making, thus obscuring the authors' rationalist assumptions. But this should not lead us to think the theory of rational choice is chimerical.[4]

Garbage can theory was developed to counter theories in which power and rationality drive decision making (Lutz 1982; Tasca 1983). In this view, the components of an organizational system—people, problems, solutions, choices—are only loosely coupled and often vary independently of one another (Weick 1976). In theories of rational choice, the power to command, hierarchically structured offices (each with a regulated amount of authority), and the demands of completing tasks are the keys to understanding how choices are made. March and Olsen (1979) argue, instead, that a choice results from the fortuitous coincidence of the components of an organizational system. James March (1978, 592) explains that the garbage can model emphasizes "the extent to which choice behavior is embedded in a complex of other claims on the attention of actors and other structures of social and cognitive relations." What is important in choice situations is not the interests of elites, but rather the constellation of competing demands on potential participants' time: the opportunity costs of making decisions. People cannot attend to many problems at once, do not have stable and ordered preferences, and often are unable to understand what their organizations are or should be doing. In this model, ends and means bear no obvious connection to each other, nor are action and intention always related. Instead, organizations act and produce goals only when they are challenged to render sensible accounts of their actions (Scott and Lyman 1968; March and Olsen 1979, 71–75).

[4] Even if it were impossible to find a theory of rational choice, there would still be value in creating one, if only to serve as a background against which alternative views could be compared.

The garbage can metaphor is useful because it provides a way of thinking about organizational processes that differs from deterministic models (Mohr 1982). It is especially useful for drawing attention to organizational behavior under ambiguous conditions, i.e., situations in which goals are unclear, technologies are ill defined, and rights to participate in major decisions are in flux. Rather than coordinating their behaviors so that organizations move toward well-understood ends, participants often behave in ways that reflect no plan. Situational rationality reigns in ambiguous contexts. Garbage can theory has caused us to research the conditions under which rationalist theories can explain organizational behavior and the conditions under which they cannot.

The model underestimates the importance of power in organizational life, however, and I doubt that the elements of an organizational system are as "randomly organized" as garbage can theory would have us believe. In universities (supposedly the ideal-typical organized anarchies), for example, power may be more widely dispersed than it is in an oil company, and departments may be more loosely linked than they are in an army, but it is still not usually the case that lower participants enjoy the same probability of influencing important decisions as do those at the top of the hierarchy. Although this is admittedly an oversimplification, a single organization is more accurately described as a division of labor based on legitimated authority, with subunits designed to transform some raw material, than as a conglomeration of disjointed elements randomly colliding in a system in which meaning is forever equivocal.

A group of organizations, however, is more like a garbage can (or a crowd), having neither an institutionalized structure to coordinate its members nor a centralized office that issues orders. Moreover, in an interorganizational garbage can, as in the first phase in Binghamton, entry into and exit from decision opportunities are relatively easy, and there is no hierarchy that clearly delimits authority and power among organizations. The principles of garbage can theory are thus most applicable at the level of interorganizational analysis.



A variant of this nonrational model of decision making, cast specifically at the level of interagency relationships, has been suggested by Norton Long (1958), who analyzed organizations within communities and asserted that a local community should be conceived as "an ecology of games." Games are activities in which social players such as organizations, interest groups, and publics vie for participation. The rules of any single game are fixed and known, but the "social game," or the general creation of social order, is neither well structured nor understood by the players. A community is not a single structure of services and power; rather, organizations in a community pursue their own interests and in the process mesh their behaviors with those of others to produce social equilibrium. Because all these games (read "organizations") are relatively isolated from one another, there is scant formal coordination. Long's metaphor is problematic because it assumes, much as March and Olsen's does within organizations, that the lack of formal interagency coordination means that the distribution of power among a group of gamesters is inconsequential. Although the insights of March and Olsen and of Long are most useful at the level of interorganizational analysis, their theories must be modified to accommodate the role of power in models of decision making.

The nature of power changed during the two phases in Binghamton. In the first phase, nodes of power often shifted from organization to organization in a relatively unpatterned, if not completely random, manner. Later, after most organizations had "exited" the garbage can, power became stabilized in state organizations. The kind of power in Binghamton was not the overwhelming, direct control of one actor by another (although there were significant examples of this). Instead, power was important in Binghamton because it helped to determine what would be considered the legitimate content of policies.

Gusfield (1981) studied how the popular image of the "killer drunk" was created. He found little objective evidence that drunken drivers are a major social problem. Among the most important factors that contributed to the "killer drunk" myth were the institutional interests that defined the terms of the drinking and driving debate. One consequence of the influence of various organizations on defining drinking and driving as a social problem was that individuals came to be viewed as a major threat on our highways. Yet there is nothing inherent in traffic accidents that requires a focus on individual action (Gusfield 1981, 174). Other ways to name the problem might be "an unforgiving automotive design" or "a highway system that induces tragedy." Attributions of responsibility change when tragic choices (or any choices) are placed in a larger context that includes other factors.

Similarly, had control of the State Office Building's risks been more broadly distributed, rather than monopolized by the state, different solutions to cleanup and medical surveillance might have been applied.

Symbols and Organizational Deceit

At several points I have examined how symbols—expert knowledge, scientific studies, risk assessments—are used in situations that entail important choices. Sometimes these symbols were used as devices to conceal certain facts or interests. Yet in only a few of those instances was there a deliberate organizational plan to defraud an environment. Instead, each instance of deception made sense within the immediate context of making trade-offs between organizational interests and public health. It is true that officials setting policy for DOH, for example, often constructed an organizational face that was opaque to the public. Facts, studies, findings, and problems were sometimes presented in ways that did not accurately reflect how they were being used and studied within the organization. There were also several instances in which DOH suppressed "alarming" evidence of the SOB's hazards but readily released "calming" evidence.

In each of these instances there were important differences between what organizational representatives said and what they did. One explanation for this discrepancy, popular in Binghamton, is the "steamroller theory," which holds that officials were lying, incompetent, and callously conspiring to deceive a public unable to resist manipulation. This theory conjures up an image of a New York State juggernaut, its organizations unified in purpose, agreed on means, and politically homogeneous. The steamroller explanation fails, however, because it implies omniscient rationality and extraordinary cunning among elites, and complete integration within and among organizations. Even during the second phase, when conspiracies and cover-ups would have been easiest to carry out, less than perfect coordination and homogeneity prevailed. Instead of full rationality, we witnessed tightly bounded rationality, as elites muddled through ambiguous situations, constructing solutions before goals were known and without any assurance the solutions would solve problems (Lindblom 1959, 1979). Far from being incompetent, policymakers and scientists in the bureaucracies of New York State are widely regarded by their peers as among the best in the world.

A more plausible explanation for the difference between what elites do and what they say is that institutional environments demand rational accounts from organizations (DiMaggio and Powell 1983). One task of organizational leaders is to advance interpretations of their agents' behavior that will make sense outside the organization. For example, when the janitorial cleanup turned into a scandal, OGS officials nevertheless maintained that the effort was successful, although by any objective measure the cleanup succeeded only in increasing the likelihood that New York would incur lawsuits (and in increasing the risk to workers). The use of symbols in events and policies involving organizational deceit—particularly where ambiguous risks are involved—is therefore best understood as an attempt to conform to environmental demands for rationality. Unfortunately, we can take little comfort from the implication that deliberate lying and treachery are probably not very central problems in situations involving hazards to public health. In instances of significant risk, organizations most often take into account the expectations and demands of other organizations when constructing policy and devising action. In these cases publics, as unorganized masses without access to concentrated resources, must find some mechanism that will synchronize their interests with those of organizations if they are to have their interests represented in official policy. In a world where important risks are defined and accepted mainly by organizations, the term "public interest" is a fiction.

The Sociology of Risk

One of the major themes of this book has been the organizational context of risk assessment. Most of the risk analysis literature has a decidedly psychological cast (for reviews, see Fischhoff 1977; Kates 1977; Cole and Withey 1981; Einhorn and Hogarth 1981; Covello, Menkes, and Nehnevajsa 1982). The major concerns of this research are the cognitive processes involved in individual assessment of hazard; experiments and attitude surveys are the major tools of data collection.

Two criticisms of the risk assessment literature are relevant here. First, psychological studies of risk lack a conception of social structure that connects perceptions of risk with the formation of policy. Instead, the literature assumes that individuals are the crucial assessors and acceptors of risk in our society. It therefore rests on the premise that policymakers' decisions are determined by, and will reflect, the views of society as a whole. But the evidence from Binghamton, Love Canal, Three Mile Island, Michigan, Times Beach, and Bhopal suggests—contrary to the psychology of risk—that organizations, not disparate members of the general population, are the final arbiters of hazard. Such studies have neglected the processes by which important decision makers evaluate and accept risks, and so have missed the crucial processes through which risk assessment proceeds.

Second, the field of risk analysis seems to presume that choice among hazards should follow a rational model of decision making. As I indicated, much of this research focuses on individuals' risk perceptions (Cole and Withey 1981; Slovic, Fischhoff, and Lichtenstein 1982). Many argue that individuals' sources of information (mainly television and newspapers) systematically distort the data on which individuals base their assessments (Lichtenstein et al. 1978; Combs and Slovic 1979). Moreover, because people cannot digest the large amounts of information available for popular consumption, they devise ways to order those data—they make sense of hazards in ways that confirm stereotypes about the way the world works. The stereotypes often fit the data well (Slovic, Fischhoff, and Lichtenstein 1980).

To determine when subjects' guesses are accurate, their risk assessments must be compared to some standard. One standard of comparison is a model of rational decision making, where "rational" means a logical and systematic search for information. With this method, subjects' guesses are compared with statistical compilations of probability distributions of risks to human health. For example, Slovic, Fischhoff, and Lichtenstein (1979) found that students and members of the League of Women Voters judged accidental death to be more likely than death caused by disease, even though disease claims fifteen times as many lives per year as accidents. Another standard of comparison is expert knowledge, where the public's risk assessments are matched with those of experts. Slovic, Fischhoff, and Lichtenstein (1979) report that when experts confine their guesses to their specialty areas, they offer better estimates of risks than do nonexperts. One explanation for the differences between these groups is that nonexperts are unaware of actual data on the risks of certain technologies and activities. Nonexperts, for example, consistently underestimate the aggregate hazards of lawn mowers and overestimate the likelihood of a major accident at a nuclear power plant.

Using either of these two standards (probability distributions or experts) yields comparisons that can be interpreted to mean that feared risks receive disproportionate attention, or, to put it simply, that people worry too much. Lester Lave (1984) and Aaron Wildavsky (1979), for example, two well-known authors in the field, respectively argue that low-level risks should be dismissed, and that conflicts between groups over questions of hazard may cripple our economic system.

These findings imply that risk analysis could be used to inform decision makers, providing them a way to rank their preferences and thus facilitating a more rational process of choice. Meyer and Solomon (1984, 246), for example, argue that formal risk assessments "may yield better policy judgments" than other ways of making decisions (such as reacting to the demands of interest groups or complying with federal regulations). Risk assessment, in this view, is a method of purging values and political judgments from choice opportunities. Meyer and Solomon argue, in effect, that policymakers could be more effective if their decisions more closely conformed to the dictates of instrumental or formal rationality (Weber 1978, 24–26). It is possible that under certain relatively clear conditions, risk analysis can indeed serve as a tool to gather and systematize information for rational choice.[5] But under ambiguous conditions, decision makers are confronted with too much information whose meaning is equivocal (Sabatier 1978; Feldman and March 1981), and meaning must be clear for ordered preferences to be of value.

If risk analysis were indeed a tool of rational choice, we would expect a clear, agreed-upon definition of the hazard and an extensive data-collection effort on a wide variety of options. This information would then be used to construct alternative goals. Because, strictly speaking, the acceptability of risk is a political, rather than a scientific, issue (Kantrowitz 1975; Fischhoff 1977; Kates 1977; Calabresi and Bobbitt 1978; MacLean 1982; Otway and Thomas 1982), we would expect to find the values of many groups reflected in the evaluation process.

But sociological research suggests that risks are rarely, if ever, assessed in a rational manner. Allan Mazur (1973), in a careful analysis of debates over water fluoridation in the 1950s and nuclear power in the 1960s, shows that institutional location is strongly associated with how experts (the quintessential risk assessors) pose questions and choose methods to answer them. Mazur's research shows that scientific data are more ambiguous than we usually presume. Consider, for example, a controversy over whether or not to build a nuclear power plant on an earthquake fault that has been inactive for forty thousand years. Proponents of the plant will argue in favor of construction because the fault is obviously stable; opponents will argue that building the plant is foolhardy because the lengthy interval since the last earthquake means another is imminent. Mazur also found evidence that risk assessors tend to choose methods and data that support the position to which they are already committed. "We generally assume," he writes, "that informed scientific advice is valuable to political policy makers. However, in the context of a controversial political case, and when the relevant technical analysis is ambiguous, then the value of scientific advice becomes questionable" (Mazur 1973, 261; see also Mazur 1975). Rather than determining policy, risk assessments in controversial situations are more likely to reflect alternatives already chosen.

[5] The following are conditions under which a rational model of risk assessment might apply: (a) where structures of authority and power are unambiguous and stable, (b) where goals are familiar and easily defined, and (c) where technology is readily available (Clarke 1988).

Had the process of determining acceptable risk followed a rational model in Binghamton, formal assessments would have driven decisions regarding what to do with the building and those exposed. Yet formal risk analyses proliferated only after competing definitions of acceptable risk were winnowed out; instead of preceding actions, risk assessments followed them. In addition, policies to decontaminate the State Office Building and to conduct a medical surveillance program were instituted well before the costs and benefits of those programs were calculated.[6]

The use of formal risk analyses in Binghamton suggests a process of choice that is not captured by decision-making theories that stress goal definition, consideration of alternative solutions, and implementation of an optimal choice. In addition to being tools for rational decision making, risk assessments are also tools that help an organization construct a reality whereby the actions it has already taken will seem reasonable to elements in its environment (Meyer and Rowan 1977; Meyer and Scott 1983). As I argued above, risk assessments are claims to legitimacy that are directed at other organizations.

[6] Formal assessments were constructed by VERSAR, Inc., as well as by the New York State Department of Health. After a Freedom of Information request yielded a copy of the contract between VERSAR and OGS, I discovered that VERSAR was required to file weekly progress reports with OGS. There are extensive references in those reports to cost-benefit analyses and risk assessments, although no conclusions are reported therein. OGS refused to release any of those assessments, claiming they were "under litigation."



Organizations, not individual members of society, are the most important risk assessors in our society. To understand how choices about risk are made, we need to pay more attention to the structural forces that impinge on decision makers. Some of these forces originate in the organizations in which policymakers are embedded, and some originate in the environments of those organizations. Together these structures and processes constrain decision makers to trade organizational resources against public health, and vice versa. The Binghamton case has allowed us to examine some of the processes whereby alternatives are chosen and some of the mechanisms that organizations use to guide their behavior in circumstances that provide few clues about the proper responses.

