Preferred Citation: Shrader-Frechette, K. S. Risk and Rationality: Philosophical Foundations for Populist Reforms. Berkeley: University of California Press, c1991. http://ark.cdlib.org/ark:/13030/ft3n39n8s1/


 

Part Three
New Directions for Risk Evaluation



Chapter Eleven
Risk Evaluation

Methodological Reforms

A naturally occurring decay product of radium 226, radon 222 has an important impact on human health. Because it can leave the soil or rock in which it is present and enter the surrounding air or water, radon gas is ubiquitous. When it decays into a series of radioisotopes collectively referred to as "radon daughters," two of these daughters (polonium 218 and polonium 214) give off alpha particles. These alpha particles, when emitted in the lung, can cause lung cancer.1

Until recently, radon daughter exposure was associated with lung cancer in uranium miners. Now we know that radon daughters are present in the air of buildings, building materials, water, underlying soil, and utility natural gas. Although radon concentrations in American homes have not been systematically surveyed, available data show that some dwellings have concentrations greater than the control levels in underground mines.2 This means, for example, that the lung cancer risk for lifetime exposure from age one is approximately 9.1 × 10⁻³, assuming an average life span of seventy years, or 70 WLM.3 This is a risk of lung cancer of nearly 1 in 100, as a result of lifetime radon exposure. Despite such risk figures, the present state of scientific knowledge does not allow one to specify the best values characterizing the dose in a home or mine; there are fundamental uncertainties surrounding the radon dosimetry factors.4
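The arithmetic behind these figures can be sketched in a few lines. The per-WLM risk coefficient below is simply back-calculated from the numbers just quoted (9.1 × 10⁻³ over 70 WLM); it is an assumption made for illustration, not a value taken from any dosimetry standard.

```python
# Illustrative sketch of the lifetime-risk arithmetic quoted in the text.
# The risk coefficient is back-calculated from those figures (9.1e-3 / 70 WLM);
# it is an assumption, not a dosimetry-standard value.
RISK_PER_WLM = 1.3e-4      # assumed lifetime lung-cancer risk per WLM
CUMULATIVE_WLM = 70        # lifetime exposure from age one, per the text

lifetime_risk = RISK_PER_WLM * CUMULATIVE_WLM
print(f"lifetime risk: {lifetime_risk:.1e}")        # about 9.1e-03
print(f"roughly 1 in {round(1 / lifetime_risk)}")   # roughly 1 in 110
```

The spread between "1 in 110" and the text's "nearly 1 in 100" is rounding; given uncertainties of several orders of magnitude in the dosimetry, nothing turns on the second digit.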

Risk assessment and evaluation are beset with typical uncertainties, similar to those in the radon case. Indeed, as was already mentioned, uncertainties of six orders of magnitude "are not unusual."5 As a result of these uncertainties, hazard assessors often fall victim to a variety of methodological errors, such as the probabilistic strategy and the isolationist strategy. In this chapter and the next, I shall outline several proposals for avoiding some of the worst effects of these uncertainties and erroneous methodological strategies. This chapter will examine several methodological suggestions designed to improve quantitative risk analysis, especially risk evaluation. The last chapter will outline some regulatory and procedural solutions for reforming hazard management. These suggestions should address many of the problems criticized earlier in the volume; they should fill out the new account of rational risk evaluation, scientific proceduralism, begun in Chapter Three.

Policymakers have attempted for some time to improve various methods of risk analysis and evaluation, but their proposed methodological reforms have been fraught with controversy—notably, the debate between the cultural relativists, who overemphasize values in hazard evaluation, and the naive positivists, who underemphasize them.6 Decisionmakers and evaluators likewise disagree over how to resolve some of the more practical problems associated with risk management. At one end of the "solutions spectrum" are environmentalist policymakers, such as Howard Latin. They argue for uniform environmental standards, for prohibitions against agents suspected of causing harm (carcinogens, for example), and against economic incentives and benefit-cost techniques for reducing risk.7 At the other end of the spectrum are industry-oriented policymakers, such as Bernard Cohen and Lester Lave. They argue for situation-specific environmental standards, for negotiation regarding agents suspected of causing harm, and in favor of economic incentives and benefit-cost techniques for reducing risk.8

In this and the next chapter, I shall continue to argue for a middle position, between the industry and the environmentalist solutions to the problems of hazard evaluation and risk management. Agreeing to some extent with the environmentalists, I have argued throughout this book that quantified risk assessment (QRA), risk evaluation, and risk-cost-benefit analysis (RCBA) are seriously flawed, in part because of questionable methodological strategies associated with them.9 However, siding also with the industrial experts, I shall show (in this chapter) that although QRA is in practice deficient, there are in-principle reasons for continuing to use it.

More specifically, this chapter argues for a number of methodological claims regarding risk evaluation: that we need to use QRA and RCBA; that RCBA could be significantly improved by means of ethical weighting techniques; that hazard evaluation ought to be accomplished by means of alternative analyses designed to take account of different methodological, ethical, and social assumptions; that expert opinions on risk estimates ought to be weighted on the basis of their past predictive successes; and that assessors generally ought to give up both the naive positivist and the cultural relativist views, as well as the environmentalist and the industry-oriented accounts of risk. Instead, the chapter argues that experts ought to adopt a middle position, scientific proceduralism, that emphasizes increasing the analytic precision of hazard assessment and the democratic control of environmental risks.

Admittedly, the conclusion of many persons who have followed my repeated criticisms of methodological strategies associated with QRA and RCBA might be that we ought to discontinue use of both. Such a conclusion would follow, however, (1) only if the problems with QRA and RCBA were essential to the use of these quantitative and economic methods and (2) only if there were alternative, superior tools capable of achieving better risk analysis and management. Since I believe that both (1) and (2) are false, more modest and reformist conclusions follow from the criticisms leveled earlier in the book. The case for using QRA can probably be made most clearly by considering RCBA, a method described briefly in Chapter Four.10 Since most of the methodological criticisms of QRA in this volume have been directed at the risk-evaluation stage, any plausible defense of the claim that we ought to continue to use QRA ought to focus on this third stage and on the most prominent tool of risk evaluation, RCBA. Moreover, since most people object to QRA because of its quantitative, reductionistic approach, their criticisms typically focus on RCBA. Let's examine RCBA, in order to see why, although these complaints are partially correct, they fail to show that policymakers ought to abandon either RCBA or QRA.

Why We Need QRA, despite Its Flaws: The Case for RCBA

Currently, nearly all U.S. regulatory agencies (with the sole exception of the Occupational Safety and Health Administration) routinely use RCBA to help determine their policies. Although the National Environmental Policy Act (NEPA) of 1969 requires that RCBA be used to evaluate all proposed environment-related federal projects, opponents of this technique often view its practitioners as dehumanized numerators engaging in a kind of economic Philistinism.11 Amory Lovins compares RCBA to the street lamp under which the proverbial drunkard searched for his wallet, not because he lost it there but because that was the only place he could see.12 The most common objections to RCBA generally focus on four alleged problems: there is no accepted theory of rationality to undergird RCBA; the democratic process, as well as mathematical-economic techniques, ought to determine risk policy; RCBA ignores factors such as the equity of distribution and the incommensurability of various parameters; and its data base is inadequate.13

Rather than focus on each of these objections, I want to ask a more basic question: What arguments ought to be counted as decisive in the case for and against RCBA? Unless proponents and opponents of this technique agree on the criteria for resolving their disputes, no consensus on the methods for making public policy seems possible. Currently, much of the debate is at cross-purposes. Both opponents and proponents of this technique are arguing for theses which those who disagree with them do not regard as decisive on the issue of whether or not to use RCBA.

One argument used by opponents of RCBA appears to be both misguided and central to their objections. I would like to expose its flaws and to point out what alternative reasons might be decisive grounds for arguing for or against use of RCBA. This flawed argument is that, since RCBA is deficient in a number of serious ways, it should not be used routinely for societal decisionmaking regarding environmental projects. More descriptively oriented variants of this argument, such as those by Dreyfus, focus on the claim that RCBA cannot model all instances of "human situational understanding."14 More normative variants, such as those by MacIntyre, maintain that RCBA exhibits some of the same defects as "classical utilitarianism."15

Two Main Attacks on RCBA

Although those who attack RCBA do not formulate their arguments in terms of explicit premises, they all appear to be employing a simple, four-step process. For the more normative philosophers, these steps are as follows:

1.     RCBA has a utilitarian structure.

2.     Utilitarianism exhibits serious defects.

3.     Typical applications of RCBA—for example, in risk assessment and in environmental impact analysis—exhibit the same defects.

4.     Just as utilitarianism should be rejected, so should the use of RCBA techniques.16

The more descriptive attacks on RCBA are similar:

1.     RCBA is unable to model all cases of human situational understanding and decisions.



2.     The inability to model all cases of human situational understanding and decisions is a serious defect.

3.     Typical applications of RCBA—for example, in risk assessment and in environmental impact analysis—exhibit the same defect.

4.     Just as faulty models of human situational understanding and decisions should be rejected, so should RCBA.

This four-step argument is significant, not only because persons such as Dreyfus, Gewirth, MacIntyre, and MacLean appear to be employing it, but also because it seems to reflect the thinking of many (if not most) philosophers who discuss RCBA and applied ethics. After taking a closer look at both variants of the argument, I shall attempt to establish two claims:

1.     Even if all the premises of both versions of the argument were true, because the inference from them to the conclusion is not obviously valid, it is not clear that the conclusion (4) ought to be accepted.

2.     Although the second and third premises of the normative variant of the argument are likely true, because the first premise is not obviously true, it is not clear that the conclusion (4) ought to be accepted.

Let's examine first the descriptive variant of the argument. Although its proponents would probably admit that RCBA is useful for some cases of individual decisionmaking (for example, for determining whether to repair one's old car or buy a new one), they claim that, in most instances, individuals do not use RCBA to do a "point count."17 Dreyfus and Tribe maintain, instead, that they use intuition. Socolow and MacLean claim that they employ open discourse, argument, or debate.18 In any case, all critics agree that any formal model such as RCBA is unable to capture what goes on when someone understands something or makes a decision. They maintain that such formal models fail because they are too narrow and oversimplified.19

As Dreyfus put it, much policymaking is "beyond the pale of scientific decisionmaking." It requires "wisdom and judgment going beyond factual knowledge," just as chess playing and automobile driving require "expertise" and "human skill acquisition" not amenable to RCBA.20 The expert performer, so the objection goes, plays chess or drives the automobile "without any conscious awareness of the process." Except during moments of breakdown, "he understands, acts, and learns from results," without going through any routine such as RCBA. Indeed, he not only does not go through any such routine; he could not. And he could not, according to critics of RCBA, because he is often unable to tell a cost from a benefit; much of the time, he doesn't know either the probability or the consequences of certain events.21 Hence, goes the argument, because RCBA cannot model many cases of individual decisionmaking, it cannot model societal decisionmaking.22

More normative policymakers likewise reject use of RCBA for societal risk decisions, but their emphasis is on the claim that RCBA should not, rather than that it cannot, succeed. The gist of their objections to RCBA is that the technique forces one to assume that "everybody has his price." Lovins and MacLean are quick to point out that some things are priceless and hence not amenable to RCBA calculation.23 Gewirth argues that certain rights cannot be costed and then traded for benefits. Critics of RCBA all argue that moral commitments, rights, and basic goods (such as the sacredness of life) are inviolable and incommensurable and hence cannot be bargained away for any benefits revealed in an RCBA.24 In a nutshell, they allege that RCBA shares the same defects as classical utilitarianism; it cannot account for crucial values such as distributive justice.25 Hence, they conclude, in a society where decisionmaking ought to be based on rights and justice, hazard analysis and management ought not to be based on RCBA.

Problems with the Main Inference of the Argument against RCBA

Even if all three premises of the argument against RCBA were true, its conclusion ought not to be accepted, since the relevant inference is not obviously valid. This inference is that RCBA should be rejected because it exhibits a number of serious deficiencies. The inference is questionable because proponents of RCBA are likely to admit, with their opponents, that RCBA has shortcomings. For its advocates, RCBA is one way of facilitating democratic decisionmaking, not a substitute for it. Hence, proponents of RCBA maintain that, although critics are correct in pointing out deficiencies in the method, these flaws alone do not constitute decisive reasons for abandoning it.

Proponents claim that the problems with RCBA are not the issue; the real issue is whether RCBA represents the least objectionable of all the methods used for policymaking. If so, it is not sufficient for an opponent of RCBA to find fault with the technique and then reject it. He must, in addition, show that there is a viable decisionmaking alternative that has fewer deficiencies than RCBA.

One reason why recognition of RCBA deficiencies is a necessary, but not a sufficient, condition for inferring that it ought not be used in policymaking is that RCBA provides at least some benefits to society. For example, it enables policymakers to cope with externalities. It is essential for providing estimates of social costs, so that prices can be set equal to marginal costs.26 Government then can set standards to regulate output and to determine the optimum size of projects. If RCBA were rejected as a part of decisionmaking, then we would have to abandon not only techniques used for regulating externalities and achieving social benefits such as traffic control but also the means of ensuring competitive pricing.27 Because of these assets of RCBA, those who attack it ought either to defend the claim that there is an alternative that is superior to RCBA or to refrain from the call to reject RCBA. Surprisingly, many of those who criticize RCBA, with the possible exception of Hare and Self, do not discuss what might constitute sufficient conditions for rejecting it. Instead, they merely point out deficiencies in the method.28
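The point about externalities and marginal-cost pricing can be made concrete with a minimal sketch; both cost figures below are invented solely for illustration.

```python
# Minimal sketch of pricing with an externality: an RCBA-style estimate
# of external social cost is added to the producer's private cost, so
# that the regulated price can be set equal to marginal social cost.
# All numbers are invented for illustration.
marginal_private_cost = 40.0    # $/unit borne by the producer
marginal_external_cost = 15.0   # $/unit of estimated pollution damage

marginal_social_cost = marginal_private_cost + marginal_external_cost
print(marginal_social_cost)     # 55.0, the efficient regulated price
```

Without some cost estimate of the externality, a regulator has no nonarbitrary basis for choosing the second number, which is the role the text assigns to RCBA.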

A second reason for denying that RCBA deficiencies are decisive grounds for abandoning the method is that (in many cases) acceptance of this inference would preclude all systematic forms of societal decisionmaking. In other words, if the descriptive variant of the argument proves anything at all, it proves far too much—namely, that the deficiencies rendering RCBA questionable likewise count against all systematic societal decision methods used in similar situations.

Consider first Dreyfus's charge that "the analytic decomposition of a decision problem into component probabilities, utilities, and tradeoffs" is misguided because true understanding of policymaking cannot be made explicit; decisionmakers, so the argument goes, "know how to do things, but not why they should be done that way."29 MacIntyre says much the same thing: "Moral [and policy] arguments are in our culture generally unsettleable."30 Other proponents of this descriptive argument claim, as was already noted,31 that people can't tell what is a cost and what is a benefit of a particular technological proposal. Since we are unable to sort out costs from benefits, they claim, RCBA cannot lead to good policymaking.32

If all such claims were correct—that is, if policy disputes are generally unsettleable, and if costs cannot be distinguished from benefits—then the difficulties noted would undercut even nonquantitative risk methods, as well as any rational analysis of them. If Dreyfus and others are right about these difficulties, then criteria for policy would have to remain implicit and, by their very nature, could not be made explicit. As a result, no one could understand either decisionmaking or the criteria for its success. Moreover, any systematic, rational form of public policymaking—whether quantitative or nonquantitative, scientific, democratic, or legal—would be undercut. This is because any nonarbitrary form of decisionmaking requires specification of policy goals and criteria for its success. Any democratic form of decisionmaking requires, further, rendering these goals and criteria explicit, so that they can be recognized and evaluated by the body politic. Therefore, instead of condemning RCBA for reasons that would indict any systematic decision methodology, policymakers would do better to argue for a particular risk-assessment method, on the grounds that it was superior to all known alternatives. Indeed, any other approach would amount to begging the question of whether the deficiencies of RCBA are sufficient grounds for rejecting it.

In begging this question, policy analysts fail to take account of the fact that any theory of decisionmaking, not just RCBA, leaves some residual unaccounted for by the theory. For Gewirth and MacLean, for example, the main theoretical residue of RCBA is that it allegedly cannot explain the overarching significance given to human rights and to the sacredness of life. However, any opponent of any theory is always able to charge that it cannot account for some important value. This is because any theory must employ simplifying assumptions that have a "residual" not accounted for by the theory of which they are a part. Hence, arguments focusing merely on the deficiencies of RCBA miss the point. The point is which residuals are more or less important, and which decision alternatives have the least deficiencies.

The more normative arguments against RCBA fail, for example, because often they merely rehash the old problems with utilitarianism, instead of recognizing that both utilitarian and deontological policy alternatives have theoretical residues. Coburn, for example, is one of the many who take this approach.33 Yet, as Sen points out, Bentham and Rawls capture two different aspects of interpersonal welfare considerations. Both provide evaluations necessary for ethical judgments, but neither alone is sufficient. Utilitarians are unable to account for the theoretical residue of how to evaluate different levels of welfare, and Rawlsians are unable to account for the theoretical residue of how to evaluate gains and losses of welfare.34 Hence, merely pointing out one or the other of these deficiencies misses the point.

In particular, Dreyfus's claim that one does not normally employ RCBA to play chess or to drive an automobile misses the point.35 Likewise, MacLean's claim that he does not use RCBA to value his antique Russian samovar misses the point. One reason why persons do not use the technique in such situations is that one's own experiences, values, and goals are unified, and hence provide a single, integrated basis from which to make individual decisions about valuing risks and actions. RCBA, however, has been proposed for societal decisionmaking precisely because the disparate members of society have no unifying experiences and goals that would provide an integrated basis from which to make decisions about how to spend funds or assess technological risks. Collective choices, exercised through government, require some analytic "logic" or method to reconcile and unify the diverse experiential bases of all the members of society. Hence, the fact that RCBA is not used by individuals, in playing chess or pricing samovars, does not count against the desirability of using RCBA in making societal decisions about risk. The two cases are fundamentally disanalogous.

Admittedly, there is no way to show in advance that RCBA, when refined and developed, will or will not enable us to account for some of our obviously correct normative or descriptive intuitions about choice. This being so, it seems inappropriate for proponents and opponents of RCBA to provide unqualified support, respectively, for acceptance or rejection of RCBA. Rather, they ought to aim at showing why one ought or ought not try to work toward an allegedly adequate RCBA theory.36 Not to pursue this line of argumentation is merely to beg the question of what might constitute the soundest basis for policy.

Many critics of RCBA do not provide convincing grounds for rejecting it, because they underestimate the benefits of using an explicit, clearly defined decision technique, and they overestimate the possibility for rational policymaking when no analytic procedure such as RCBA is part of the process. A society pleading for policymaking based solely on Dreyfus's expertise, "intuition," and "wisdom," or on MacLean's "open discourse," rather than also based on RCBA or on any other analytic method, is like a starving man pleading that only steak will satisfy him. Since much current policy is arbitrary and based on purely political considerations, at least some use of any analytic method seems desirable.

One reason why some use of RCBA is better than none is that failure to use a well-defined system leaves one open to the charge of both methodological and substantive arbitrariness. At least in the sense that its component techniques and underlying value judgments are capable of being known and debated, use of an analytic decision procedure is rarely arbitrary in a methodological sense, although it may be controversial in a substantive sense. For example, a decision (based on RCBA) to use a particular technology for storing low-level radwastes may be substantively controversial in the sense, for instance, that different persons have opposed views on how much risk is acceptable. If the RCBA were done properly, however, its conclusion would not be methodologically arbitrary, in the sense that someone who examined all the calculated costs and benefits would be unable to tell why a particular policy was said to be more or less beneficial than another. A proper RCBA is not methodologically arbitrary, at least in the sense that its decision criteria, however faulty, are explicit; the bases for its conclusions, including the numbers assigned to particular values, are there for everyone to see and evaluate.

If someone were to use a nonanalytic decision procedure, such as "intuition" or "open discourse" (as suggested by many opponents of RCBA), then the resulting conclusions would likely be both methodologically arbitrary and substantively controversial. There would be no way to analyze and evaluate a particular intuitive decision; that is the whole point of something's being intuitive: it is directly apprehended. In this sense, a systematic technique (such as RCBA), even with known flaws, is less methodologically arbitrary than an intuitive approach, whose assets and liabilities cannot, in principle, be the object of some reasoning process.

The opponent of RCBA, however, is likely to believe that use of a seriously deficient analytic decision procedure, incorporated within the democratic process, is not necessarily preferable to using no decision procedure or to relying on the democratic process alone. He would likely argue that, in the absence of some decisionmaking system, people would be forced to confront the arbitrariness of their social choices. They would be forced to seek "wisdom" and intuitive "expertise" as the only possible bases for policymaking. In fact, this is exactly what Dreyfus has claimed, and what philosopher Holmes Rolston argued when he criticized an earlier version of these remarks. Rolston claims that using RCBA is like weighing hogs in Texas. Those doing the weighing put the hog in one pan of a large set of scales, put rocks in the other pan, one by one, to balance the weight of the hog, and then guess how much the rocks weigh. Rolston says that the Texans ought to guess the weight of the hog, just as policymakers ought to use intuition and discourse, through the democratic process, and not bother with RCBA.

Rolston's criticism of RCBA does not work, however, and for several reasons. He fails to distinguish the method of valuation from the precision with which any factor can be calculated by means of that method. If consistent methods, such as RCBA, yield imprecise results, these results are not arbitrary in a damaging sense, since the assumptions used in quantification and the value judgments underlying them are clear and consistent. Moreover, if one rejects RCBA and opts merely for the intuition and discourse of normal democratic channels, he likely forgets that the necessary conditions for participatory democracy are rarely met. As Care pointed out, these conditions include:

1.     that all the participants be: noncoerced; rational; accepting of the terms of the procedure by which they seek agreement; disinterested; committed to community self-interestedness and to joint agreement; willing to accept only universal solutions; and possessed of equal and full information;

2.     that the policy agreed to prescribe something which is both possible and non-risky, in the sense that parties are assured that it will be followed through; and finally

3.     that the means used to gain agreement be ones in which all participants are able to register their considered opinion and ones in which all are allowed a voice.37

Once one considers these constraints, it becomes obvious that circumstances seldom permit the full satisfaction of these conditions for procedural moral acceptability. Consequently, it is unclear that democratic procedure alone will produce a more ethical policy than that achieved by democratic procedure together with RCBA. If anything, the moral acceptability of the democratic process seems to require use of analytic methods, since they are one way of being rational.38

Without some use of analytic methods of policymaking, back scratching, payoffs, bribes, and ignorance could just as well take over, except that these moves would be harder to detect than within a clear and systematic decisionmaking approach. Given no nonarbitrary decision procedure for identifying either intuitively correct or wise policies, we might do well to employ a policy technique which, however deficient, is clear, explicit, and systematic, and therefore amenable to democratic modification. Given no philosopher-king and no benevolent scientist-dictator, opponents of RCBA need to explain why they believe that society has the luxury of not opting for some analytic procedure such as RCBA as a part of democratic decisionmaking. At a minimum, they ought to explain why they believe RCBA is unlikely to aid policymakers in satisfying the conditions for procedural democracy, just as proponents of the technique ought to argue why they claim RCBA is likely to help decisionmakers meet those conditions. Otherwise, arguments about the adequacy of RCBA merely beg the question.39

In begging the question, such arguments fail to take account of many of the constraints on real-world decisionmaking. It often requires, for example, that one make a decision, even though it is not clear that the factors involved are commensurable or that one can adequately take account of rights. In rejecting RCBA, either because it fails to give a "true" description of all the situations of human choice or to take account of certain truths about ethics, its critics often forget that public policymakers are not like pure scientists; they do not have the luxury of seeking truth alone. There are other values, pragmatic ones, that also have to be considered, and this is where RCBA might play a role.

The pragmatic assets of an analytic scheme like RCBA are clear if one considers an example. Suppose citizens were debating whether to build a newer, larger airport in their city. Suppose, too, that an RCBA was completed for the project and that the benefits were said to outweigh the risks and costs of the project by $1 million per year, when nonmarket, qualitative costs were not taken into account. It would be very difficult for citizens to decide whether the risks and costs were offset by the benefits. But if one hypothetically assumed that fifty thousand families in the vicinity of the airport each suffered qualitative risks and costs of aircraft noise, traffic congestion, and increased loss of life worth $20 per year per family, then the decision would be easier to make. It would be much easier to ask whether it was worth $20 per family per year to be rid of the noise, congestion, and auto fatalities associated with a proposed airport, than it would be to ask whether the airport pros outweighed the cons, or what a wise or expert person would say about the situation.40 Formulating the problem in terms of hypothetical monetary parameters, and using an analytic scheme to make a cardinal estimate of one's preferences, appears to make this particular problem of social choice and risk assessment more tractable. Moreover, one need not believe that the hypothetical dollars assigned to certain costs are "objective" in order to benefit from RCBA. RCBA preference ordering, including the assignment of numbers to these preferences, is merely a useful device for formulating the alternatives posed by problems of social choice.41
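The arithmetic of this hypothetical airport example is easy to make explicit; all of the figures below are the hypothetical ones given in the text.

```python
# Sketch of the hypothetical airport RCBA described in the text. With
# qualitative costs priced at $20 per family per year, the $1 million
# annual surplus is exactly offset, which is what makes the trade-off
# easy to pose and debate.
net_market_benefit = 1_000_000   # $/year, excluding qualitative costs
families = 50_000
cost_per_family = 20             # $/year of noise, congestion, lost life

qualitative_cost = families * cost_per_family
print(qualitative_cost)                        # 1000000
print(net_market_benefit - qualitative_cost)   # 0: benefits just offset
```

The hypothetical $20 figure does the work: it converts an intractable "do the pros outweigh the cons?" question into a concrete one each family can answer.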

Despite all its difficulties related to the freedom of choice of the poor and the diminishing marginal utility of money, RCBA is useful in suggesting what individuals will "trade off" for safety or for amenities. Hence, it appears to yield some insights about preferences and about constraints on choice. It cannot tell us about the processes of valuation that guide these choices. Rather, the goal of RCBA is to provide relevant information about preferences—information that is useful for rational, democratic debate. Hence, any decisive argument against RCBA cannot merely fault the technique for its admitted inability to enlighten us about the values that guide our choices. RCBA is not a substitute for moral philosophy, merely one way to elucidate problems within it.

Problems with the Normative Attack on RCBA

But if the RCBA methods are not meant to prescribe various ethical ends, but only to illuminate their consequences and alternative means for achieving them, then how is it that MacIntyre and other opponents of risk assessment can allege that RCBA is a "normative form of argument" which "reproduces the argumentative forms of utilitarianism"?42 RCBA is indeed utilitarian in that the optimal choice is always determined by some function of the utilities attached to the consequences of all the options considered. It is not obvious, however, that RCBA is solely consequentialist. For one thing, before one can apply RCBA strategies, one must first specify alternative courses of action open to the decisionmaker. Since one can consider only a finite set of options, the decisionmaker must make a value judgment that the eliminated alternatives would not be the best, if they were left in the calculation. But an infinite set of options cannot be reduced by means of a utilitarian value judgment, because it would presuppose knowing the utilities attached to the consequences of an infinity of options. It is impossible to know these utilities, both because they are infinite and because the only utilitarian grounds for reducing the options is to carry out the very calculations that cannot be accomplished until the options are reduced. Thus, any application of RCBA principles presupposes that one makes some value judgments that cannot be justified by utilitarian principles alone.43 One might decide, for example, that any risky technologies likely to result in serious violations of rights ought to be eliminated from the set to be subjected to RCBA calculations. In this case, use of the allegedly utilitarian RCBA techniques would presuppose a deontological value judgment.

RCBA also includes many presuppositions that can be justified by utilitarian principles only if one engages in an infinite regress. Hence, these presuppositions must be justified by some nonutilitarian principles. Some of the presuppositions are that each option has morally relevant consequences; that there is a cardinal or ordinal scale in terms of which the consequences may be assigned some number; that a particular discount rate be used; and that certain values be assigned to given consequences. Moreover, these assignments could be made in such a way that RCBA would be able to account for nonutilitarian considerations. For instance, one could always assign the value of negative infinity to consequences alleged to be the result of an action that violated some deontological principle(s).44 Thus, if Gewirth, for example, wants to proscribe risk policy that results in cancer's being inflicted on people, then he can satisfy this allegedly nonutilitarian principle by assigning a value of negative infinity to this consequence.45
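This negative-infinity device can be sketched in a few lines of code. The option names, benefit figures, and the rights-violation flag below are all invented for illustration; the point is only that a deontological constraint, once encoded as negative infinity, dominates any finite sum of benefits.

```python
# Hypothetical sketch: folding a deontological side constraint into an
# RCBA ranking by assigning negative infinity to proscribed outcomes.
# All option names and utility figures are invented for illustration.

NEG_INF = float("-inf")

options = {
    "technology_A": {"net_benefit": 120.0, "violates_rights": False},
    "technology_B": {"net_benefit": 300.0, "violates_rights": True},
    "technology_C": {"net_benefit": 85.0,  "violates_rights": False},
}

def weighted_value(attrs):
    # A rights violation dominates any finite benefit sum.
    if attrs["violates_rights"]:
        return NEG_INF
    return attrs["net_benefit"]

best = max(options, key=lambda name: weighted_value(options[name]))
print(best)  # technology_A: B's larger benefit cannot offset negative infinity
```

However large technology_B's net benefit becomes, it can never be selected, which is exactly the sense in which the assignment satisfies a nonutilitarian constraint inside a utilitarian-looking calculus.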

Suppes also supports the claim that using RCBA does not necessarily commit one to a utilitarian ethical theory. He argues that "the theory could in principle be adopted without change to a calculus of obligation and a theory of expected obligation." From the standpoint of moral philosophy, says Suppes, this material indifference means that RCBA has an incomplete theory of rationality. It is a formal calculus that can be interpreted in a variety of ways. One could conceivably interpret rights violations and inequities as RCBA costs; Dasgupta and Heal, for example, show that the social welfare function of RCBA can be interpreted according to at least three different moral frameworks: egalitarianism, intuitionism, and utilitarianism.46

If my arguments, as well as those of Suppes, Rosenberg, and others, are correct, then RCBA is nonutilitarian in at least three senses: it presupposes prior, nonutilitarian value judgments; it provides for weighting the consequences of the options considered; and it allows one to count virtually any ethical consideration or consequence as a risk, cost, or benefit. This means that there are at least three grounds for arguing that the first premise of the normative argument against RCBA is untrue.

There also appear to be good reasons for continuing to use RCBA. (1) It responds to the need for a unified, societal (rather than individual) form of risk decisionmaking. (2) It is a way of clarifying and facilitating democratic decisionmaking. (3) It enables policymakers to compare diverse risks on the basis of probabilities and consequences, to cope with externalities, to provide social benefits, and to ensure competitive pricing. (4) RCBA contributes to rational policymaking, whereas rejecting it usually amounts to rejecting any systematic risk decisions, a stance that leaves room for arbitrary, dishonest, purely political, or irrational hazard assessment. Moreover, (5) since RCBA operates by means of explicit decision criteria, the bases for its conclusions are in principle amenable to debate and discussion by the public; they are not purely intuitive. (6) RCBA helps to make many virtually intractable risk decisions more intelligible. (7) It helps to clarify the values that guide our choices by forcing us to order our preferences. (8) It provides a way for us to spend societal resources on risk abatement so as to save the most lives for the fewest dollars. Finally, (9) RCBA provides a formal calculus that is in principle capable of being interpreted in terms of many value systems.

Improvement of RCBA by Ethical Weighting Techniques

Because RCBA is often misinterpreted in purely utilitarian ways, it typically fails to take account of egalitarian values, social obligations, and rights. One way to remedy these deficiencies would be to employ a weighting system for RCBA. Many of the dilemmas of risk evaluation, discussed in Chapter Five, could be ameliorated by a weighting scheme. For example, in cases where free, informed consent to risk imposition was in question (the "consent dilemma"), policymakers could assign a negative weight to those risks. In cases where dangers to hazard victims were likely to combine to some unacceptable level (the "contributors dilemma"), decisionmakers likewise could assign a negative weight to those risks. Ethically weighted RCBA could also be used to counteract some of the risk-analysis strategies criticized in Chapters Six through Ten. For example, if a potentially catastrophic technology imposed a greater risk on consumers, rather than on producers, then its costs could be weighted more negatively.

Admittedly, neither RCBA nor risk assessment can tell us what weights to impose; RCBA is a formal calculus amenable to the weights dictated by any ethical system. Its purpose is to help us clarify our ethical values as a society, not to dictate them. Hence, perhaps the best way to use RCBA would be to have different interest groups prepare alternative RCBAs, each with different ethical weights. The public or its representatives could then decide which weighting scheme best represented its values. I have argued elsewhere that RCBA ought to be weighted by various parameters, but that teams of analysts, including ethicists and members of various public-interest groups, ought to be responsible for devising alternative, ethically weighted RCBAs.47 Once these were completed, normal democratic procedures could be used to choose the desired RCBA. Before I go into that proposal, however, it makes sense to explain why any ethical weighting at all appears desirable.

Because no necessary connection exists between Pareto optimality, the central concept of RCBA, and correct policy,48 it would be helpful if there were some way to avoid the tendency to assume that RCBA alone reveals socially desirable policy. Alternative, ethically weighted RCBAs would enable persons to see that good policy is not only a matter of economic calculations but also a question of ethical analysis. If people follow Gresham's law and thus persist in their tendency to accord primacy to quantitative results, such as those of RCBA, then using ethically weighted RCBAs might keep them both from ignoring ethical parameters, which are not normally represented in a quantitative analysis, and from identifying unweighted RCBA results with a prescription for desirable social policy.

Ethically weighted RCBAs would also provide a more helpful framework for democratic decisionmaking. Policy analysis would be able to show how the chosen measures of social risks, costs, and benefits might respond to changed value assumptions.49 Likewise, ethically weighted RCBAs might bring values into policy considerations at a very early stage of the process, rather than later, after the RCBA conclusions were completed. In this way, citizens might be able to exercise more direct control over the values to which policy gives indirect assent. Acceptance of the willingness-to-pay measure, for example, implies acceptance of the existing scheme of property rights and income distribution, since risks, costs, and benefits are calculated in terms of existing market prices and conditions.50 To employ a system of alternative, ethically weighted RCBAs, among which policymakers and the public can decide, would be to assent to the theses (1) that existing RCBAs already contain ethical weights (which are probably unrecognized) and (2) that proponents of the ethics implicit in current analyses ought to be required to plead their cases, along with advocates of other ethical systems, in the public court of reason. Given that weighting is already done anyway, both because of the implicit presuppositions of economists who practice RCBA and because of the political process by means of which the public responds to RCBA, it makes sense to bring as much clarity, ethical precision, openness, and objectivity to it as possible, by explicitly weighting the RCBA parameters in a variety of ways.51

Another reason for using ethically weighted RCBAs is that doing so appears more desirable than alternatives such as the use of revealed preferences or the current market assignments for income distribution, risks, costs, and benefits.52 Employing revealed preferences requires one to make a number of implausible assumptions, as Chapter Four argued.53 These could be avoided if, instead of employing revealed preferences, one followed the practice of preparing a number of alternative, ethically weighted RCBAs. Following Mill's lead,54 several economists have also proposed that RCBA can take account of the nonutilitarian distinction between right and expediency by using a system of weights.55 Economists Allen Kneese, Shaul Ben-David, and William Schulze, for example, working under National Science Foundation funding, have developed a scheme for weighting risks, costs, and benefits by means of alternative ethical criteria.56 They also suggest that each ethical system be represented by a general criterion rather than by a list of rules, such as the Ten Commandments.57 They claim that the requirement that an ethical system be represented as a transitive criterion for individual or social behavior leaves at least four ethical systems (and probably more) for use in reweighting RCBA parameters: Benthamite utilitarianism, Rawlsian egalitarianism, Nietzschean elitism, and Paretian libertarianism. On their scheme, the Benthamite criterion is that one ought to maximize the sum of the cardinal utilities of all individuals in a society. The Rawlsian criterion is that one ought to try to maximize the utility of the individual with the minimum utility, so long as that individual remains worst off. According to the Nietzschean weighting scheme, one ought to maximize the utility of the individual who can attain the greatest utility. Finally, according to the Paretian criterion, say Kneese, Ben-David, and Schulze, one ought to act in such a way that no one is harmed and, if possible, that the welfare of some persons is improved.58
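The four criteria can be sketched as simple functions over per-person utility outcomes. The utility numbers below are invented, and the sketch deliberately simplifies (e.g., it treats the Paretian criterion as a bare admissibility test); the point is only that the same two options can be ranked differently under different criteria.

```python
# Illustrative sketch of the four weighting criteria described by Kneese,
# Ben-David, and Schulze, applied to invented per-person utility outcomes
# of two hypothetical policy options for a three-person society.

status_quo = [5.0, 5.0, 5.0]   # baseline utilities for three persons
option_X = [12.0, 6.0, 2.0]    # one possible policy outcome
option_Y = [7.0, 7.0, 6.0]     # another

def benthamite(u):             # maximize the sum of cardinal utilities
    return sum(u)

def rawlsian(u):               # maximize the utility of the worst-off person
    return min(u)

def nietzschean(u):            # maximize the greatest attainable utility
    return max(u)

def pareto_admissible(u, baseline):   # no one may be made worse off
    return all(after >= before for after, before in zip(u, baseline))

print(benthamite(option_X), benthamite(option_Y))      # 20.0 20.0: a tie
print(rawlsian(option_X), rawlsian(option_Y))          # 2.0 6.0: Rawls prefers Y
print(nietzschean(option_X), nietzschean(option_Y))    # 12.0 7.0: Nietzsche prefers X
print(pareto_admissible(option_X, status_quo))         # False: person 3 loses
print(pareto_admissible(option_Y, status_quo))         # True
```

Here the Benthamite criterion is indifferent between X and Y, the Rawlsian criterion selects Y, the Nietzschean criterion selects X, and the Paretian criterion rules X out altogether, which mirrors the authors' finding that feasibility verdicts depend on the criterion chosen.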

The merit of the ethical weighting criteria described by Kneese and his associates is that they could theoretically provide a unique sort of nonmarket information. For example, one could ask an individual how much he would be willing to pay for redistributing income to less fortunate members of society. The purpose of asking about such ethical beliefs, of course, would not be to provide a prescription for policy, but to allow the public and policymakers to see how different assumptions about the desirability of given distributions change the overall ratio of costs to benefits. They would be able to examine alternative, ethically weighted options, and not merely marginal costs or benefits across opportunities.

The work of Kneese and his associates illustrates that what is said to be feasible or unfeasible, in terms of RCBA, can change dramatically when different ethical weighting criteria are employed. Using case studies on helium storage, nuclear fission, and automobile-emission standards, they showed how alternative ethical weights can be used to generate contradictory RCBA conclusions. For example, when used with a Benthamite weighting criterion, RCBA reveals that using nuclear fission for generation of electricity is not feasible. When either the Paretian, Rawlsian, or Nietzschean criterion is used, however, nuclear power may be said to be feasible or unfeasible, depending on the value attributed to factors such as compensation for damages and the utility attributed to future generations.59

Admittedly, the weighting schemes outlined by Kneese and his colleagues have a number of limitations. Most notably, because they employ simple criteria, they fail to capture the complexity of ethical systems. They are allegedly unable, for example, to represent a priority ordering of different ethical claims within the same ethical system.60 Since I have argued elsewhere that one can order the ethical claims of individuals and societies, and then use them to weight RCBA parameters, I shall not repeat those arguments here.61 The main objection to them is that any sort of weighting scheme would be "at variance with the allocative principles by which the competitive economy is vindicated."62

However, government often makes policy decisions inconsistent with certain presuppositions underlying the competitive economy. Whenever government takes account of an externality such as pollution and imposes restrictive taxes or outright prohibitions, it is clearly making decisions inconsistent with particular presuppositions underlying a purely competitive economy. Indeed, if it did not, then grave harm, such as pollution-induced deaths, could occur. Also, it is well known that economists "correct" their (competitive) market parameters for risks, costs, and benefits.

Another classic objection to the use of any weighting scheme in RCBA is that weightings ought to be left to politicians and the democratic process and not taken over by economists.63 On the contrary, although economists and ethicists might help to formulate lexicographic rules and to weight RCBA parameters in accordance with these rules,64 they need only be responsible for helping to formulate a number of alternative, ethically weighted analyses. The public and policymakers would decide among the alternative RCBAs. Indeed, one reason why this proposed weighting scheme would not lend itself to control by experts, who have no business dictating policy in a democracy, is that it calls for preparation of alternative RCBAs, each with different ethical weights.65 This amounts to employing an adversary means of policy analysis (see the next chapter).

Use of Alternative Risk Analyses and Evaluations

Weighting QRAs and RCBAs so as to reflect different methodological, ethical, and social preferences, however, is valuable only to the degree that one is able to see how alternative evaluative assumptions generate different risk-assessment conclusions. In other words, weighting schemes are valuable as sensitivity analyses. They make it possible to observe the effects of different assumptions on the same hazard assessment. Therefore, several risk analyses ought to be done for any single societal or environmental threat, and each of these analyses should contain different methodological assumptions.
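A minimal sensitivity analysis along these lines can be sketched by rerunning one and the same net-benefit calculation under several discount rates. The cash-flow figures below are invented; the point is only that the feasibility verdict can flip when a single evaluative assumption changes.

```python
# Minimal sensitivity-analysis sketch: rerun the same net-present-value
# calculation under several discount rates to see whether the conclusion
# ("benefits exceed costs") is robust. All cash-flow figures are invented.

def npv(flows, rate):
    # flows[t] is the net benefit in year t (negative values are costs)
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# An up-front cost of 100 followed by six years of benefits of 20 each:
project = [-100.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0]

for rate in (0.00, 0.03, 0.07, 0.10):
    result = npv(project, rate)
    verdict = "feasible" if result > 0 else "not feasible"
    print(f"discount rate {rate:.0%}: NPV {result:+.1f} -> {verdict}")
```

At low discount rates the project appears feasible; at higher rates it does not. A reader shown only one of these runs would have no way of knowing how assumption-dependent the conclusion is, which is precisely the argument for performing alternative analyses.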

One reason for mandating that several alternative hazard assessments be done is that successful decisionmaking depends in part on knowing all the relevant facts and seeing all sides of a given "story." It is more likely that all sides to a story will be revealed if different groups conduct hazard analyses.66 Moreover, the public deserves a role in the process of determining rational risk choices, if the locus of public decisionmaking ought to be with the people, rather than merely with scientific experts. Public control and consumer consent can hardly be obtained if only one risk evaluation, controlled merely by scientific experts, is performed. Also, because all risks are both perceived and value laden,67 and because all hazard evaluations employ judgmental strategies, some of which are highly questionable,68 there are no value-free risk assessments. Consequently, hazard analysis, especially risk evaluation, is highly politicized and therefore should be accomplished in a political and legal arena, so that all persons affected by a given risk will receive equal consideration.

Only if the naive positivists were correct in their views about risk would it make sense to perform only one hazard evaluation.69 Since they are not correct, assessors ought to perform alternative evaluations, so as to spell out some of the controversial social and political dimensions of risk policy.70 In the account that I have been defending, increasing the degree of analytic sophistication is not sufficient for resolving risk conflicts. Policymakers must rely on procedural and democratic, rather than merely scientific, methods of evaluating and managing risk.71

Purely scientific methods of risk evaluation are inadequate because there are numerous uncertainties in hazard assessment. Scientists generally are unable to evaluate long-term environmental changes, in part because of the complexity of ecological science.72 Likewise, epidemiological aspects of risk analysis are often problematic owing to factors such as lack of exposure data, small sample size, recall bias, chance variation, long latency periods, and control selection. Exposure assessments are often uncertain because of synergistic effects, reliance on average doses, and dependence on particular mathematical models that oversimplify the actual situation. Dose-response assessment is uncertain because of the unreliability of particular experimental conditions, the difficulty of estimating whether there is a threshold, incorporation of background hazard rates, and extrapolation from observed doses.73 Because of these uncertainties, assessors fill gaps in information with inference and judgment. Because they do so, it is important to perform several different analyses of the same risk.74 Moreover, because the various institutions performing assessments often seize upon whatever data and value judgments aid their cause,75 it is not wise to rely only on one study. In fact, a variety of government policy groups has already recognized the importance of doing alternative assessments.76

Use of Weighted Expert Opinions

Another methodological device for improving hazard evaluation is to weight expert opinions. This procedure amounts to giving more credence to experts whose risk estimates (probabilities) have been vindicated by past predictive success. It is a way to exercise probabilistic control over expert opinions and to make them more objective. Weighting such opinions would reveal whether a hazard assessor (who provides a subjective probability for some accident or failure rate) is "well calibrated." (A subjective probability assessor can be said to be well calibrated if, for every probability value r in the class of all events to which the assessor assigns subjective probability r, the relative frequency with which these events occur is equal to r.77) The primary justification for checking the "calibration" of risk assessors is that use of scientific methodology requires testing problem solutions. Not to test them is to reduce science to ideology or metaphysics.78
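The calibration condition just defined can be sketched as a simple frequency check: group an assessor's stated probabilities by value and compare each stated value with the observed relative frequency of the predicted events. The judgment data below are invented.

```python
# Minimal calibration check: for each probability value r an assessor has
# stated, compare r with the relative frequency of the predicted events.
# The (stated probability, event occurred?) pairs below are invented.

from collections import defaultdict

judgments = [
    (0.1, False), (0.1, False), (0.1, True), (0.1, False), (0.1, False),
    (0.1, False), (0.1, False), (0.1, False), (0.1, False), (0.1, True),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False), (0.9, True),
]

by_value = defaultdict(list)
for r, occurred in judgments:
    by_value[r].append(occurred)

for r, outcomes in sorted(by_value.items()):
    freq = sum(outcomes) / len(outcomes)
    print(f"stated {r:.1f} -> observed frequency {freq:.2f}")
# A well-calibrated assessor has observed frequency close to r in each
# group; here the observed frequencies are 0.20 and 0.80, so this
# hypothetical assessor is modestly miscalibrated at both ends.
```

In practice one also needs enough judgments per probability value for the observed frequency to be meaningful, which is one reason calibration is checked on lower-level, frequently tested estimates rather than on rare-event probabilities directly.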

Admittedly, obtaining empirical control of subjective probability assessments is very difficult in the practice of risk analysis. The very reason for resorting to the use of such subjective estimates, based on degrees of belief of experts, is that actual accident frequencies, for example, cannot be determined in any uncontroversially objective way. As was already mentioned, the accidents for which there are no objective probability estimates are typically those involving new technologies. Recognizing this, risk assessors have used subjective probabilities, based on expert opinion, since about 1975. Decision theorists have used them since the 1950s.

It is difficult both to arrive at subjective probabilities and to calibrate them because the numerical values in question are often very small; for example, the per-reactor-year (subjective) probability of a nuclear core melt, as was mentioned earlier in the volume, is 1 in 17,000.79 For such subjective probabilities, calibration is ruled out; it would take too many years to test the frequency of actual core melts. However, there are many subjective probabilities that go into the calculation of the likelihood of a core melt, and weighting expert opinions in risk assessment may be possible if we are able to use these other, lower-level subjective probabilities. It is also extremely important to do so, because there is a wide divergence of opinion among experts as to their actual values. In the famous WASH-1400 study of nuclear reactor safety, for example, thirty experts were asked to estimate failure probabilities for sixty components. These estimates included, for instance, the rupture probability of a high-quality steel pipe of diameter greater than three inches, per section-hour. (The spread of these thirty expert opinions, for a given component, is the ratio of the largest to the smallest estimate.) The average spread over the sixty components was 167,820. In the same study, another disturbing fact was that the probability estimates of the thirty experts were not independent; if an expert was a pessimist with respect to one component, he tended to be a pessimist with respect to other components as well.80
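The spread statistic is easy to compute. The failure-rate estimates below are invented for illustration, not WASH-1400 data; they show how even a modest-looking disagreement among experts produces a spread of several hundred.

```python
# Sketch of the "spread" statistic defined above: for one component, the
# ratio of the largest to the smallest of the experts' failure-rate
# estimates. The five estimates below are invented, not WASH-1400 data.

estimates = [3e-7, 1e-6, 5e-6, 2e-5, 9e-5]   # per section-hour, five experts

spread = max(estimates) / min(estimates)
print(f"spread = {spread:.0f}")  # spread = 300
```

A spread of 300 means the most pessimistic expert's estimate is 300 times the most optimistic one; the WASH-1400 average of 167,820 cited above indicates disagreement several orders of magnitude worse than this.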

Recently, a group of hazard assessors in the Netherlands used empirical frequencies obtained from a study done by Oak Ridge National Laboratories to calibrate some of the more testable subjective probabilities used in WASH-1400, one of the best and most renowned risk assessments ever accomplished.81 Obtained as part of an evaluation of operating experience at nuclear installations, the frequencies were of various types of mishaps involving nuclear reactor subsystems. The Oak Ridge study used operating experience to determine the failure probabilities for seven such subsystems (including loss-of-coolant accidents, auxiliary feedwater-system failures, high-pressure injection failures, long-term core-cooling failures, and automatic depressurization-system failures for both pressurized and boiling water reactors). Amazingly, all the values from operating experience fell outside the 90 percent confidence bands in the WASH-1400 study. However, there is only a subjective probability of 10 percent that the true value should fall outside these bands. Therefore, if the authors' subjective probabilities were well calibrated, we should expect that approximately 10 percent of the true values would lie outside their respective bands. The fact that all the quantities fall outside these bands means that WASH-1400, allegedly the best risk assessment, is very poorly calibrated. It also exhibits a number of flaws, including an overconfidence bias.82
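The coverage check described here can be sketched as follows. The confidence bands and "true" values below are invented to reproduce the qualitative finding (every value outside its band); they are not the Oak Ridge or WASH-1400 figures.

```python
# Sketch of the coverage check reported above: count how many empirically
# determined values fall outside their 90 percent subjective confidence
# bands. For a well-calibrated assessor, roughly 10 percent should.
# The bands and "true" values below are invented, not the Oak Ridge data.

bands = [  # (lower, upper) 90% bands for seven subsystem failure rates
    (1e-5, 1e-3), (2e-6, 4e-4), (5e-7, 8e-5), (1e-4, 3e-2),
    (3e-6, 6e-4), (1e-5, 2e-3), (4e-7, 5e-5),
]
true_values = [3e-3, 9e-4, 2e-4, 8e-2, 1e-3, 7e-3, 2e-4]  # from operating data

outside = sum(1 for (lo, hi), v in zip(bands, true_values)
              if not lo <= v <= hi)
print(f"{outside} of {len(bands)} outside")  # 7 of 7: badly miscalibrated
```

With well-calibrated 90 percent bands, the probability that all seven values fall outside is vanishingly small (on the order of 0.1 to the seventh power), which is why the actual WASH-1400 result is such strong evidence of overconfidence.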

One can calibrate subjective probabilities and thus correct, in part, for these biases in probability estimates.83 There is extensive psychometric literature on calibrating probability estimates,84 as well as long-term meteorological experience with calibration techniques. National Weather Service forecasters have been expressing their predictions in probabilistic form since 1965, and numerous analysts have collected and evaluated weather data for predictive accuracy.85 Nearly all these analyses show excellent calibration. Even more important, calibration has improved weather forecasts since the probabilistic forecasts were introduced.86

In response to these arguments for calibrating probability assessors, at least two objections are likely. One is that calibration does not guarantee reliable risk decisions. Admittedly not, but this is not an insurmountable problem, since no process can ensure that it will lead to the right decisions. We cannot command results (since that would require having a crystal ball), but we can command methods. Hence, our responsibilities in decisionmaking must focus primarily on methods, not on results.87

Another objection to calibrating experts is that there is no firm relationship between assessors' performance on the smaller probability estimates, used for checking calibration, and on the larger ones. In probability estimates of a nuclear core melt in particular, there are no good checks or controls for the larger estimates. This objection fails, however, because policymakers ought to try to calibrate expert opinion, in part because it is apparently so often wrong, in part because meteorologists have been successful in doing so, and in part because the consequences of failure to calibrate could be catastrophic (e.g., 145,000 people killed in a nuclear core melt).88

Given these reasons to calibrate, the key question is how to go about doing it in the best way. The best calibration data we have are the many observed frequencies whose calculated probabilities together comprise larger probabilities (such as those for core melt). Although these lesser figures are not exactly the same as the core-melt probability, they are a necessary part of obtaining the larger probabilities and are hence better than no data. To argue against using a second-best check on expert opinion, merely because it is not as good as the best check (knowing the actual frequency), is unrealistic, since knowledge of frequencies is in principle unavailable. If it were available, one would not be relying on expert opinion in the first place.

How Scientific Proceduralism Guarantees Objectivity

All these methodological suggestions (calibration, alternative assessments, ethical weights) for improving risk evaluation and hazard management are predicated on two principles, both defended earlier in this volume. One principle is that assessors ought to give up both the rigid, naive-positivist assumption that experts' risk estimates are completely value free and the erroneous relativist assumption that risk assessment is not objective in any sense. It is objective, in at least the three senses discussed in Chapter Three. The second principle is that contemporary hazard evaluation needs to become more democratic, more open to control by the public, and more responsive to procedural accounts of rational risk behavior.

We must now show how to safeguard scientific rationality and objectivity, even though risk-evaluation methods need to take account of democratic, ethical, political, and procedural factors, factors allegedly not capable of being handled in purely rational and narrowly objective ways. The procedural account of hazard evaluation (outlined in Chapter Three) presupposes that rationality and objectivity ultimately require an appeal to particular cases as similar to other cases believed to be correct, just as legal reasoning requires. Aristotle recognized that there are no explicit rules for such judgments, but that inexplicit ones guide moral reasoning. These inexplicit rules or judgments rely on the ability of a group of people, similarly brought up, to see certain cases as like others. This recognition of cases is also what Wittgensteinians are disposed to believe about all instances of human learning. At the final level of risk evaluation, after subjecting the assessment to the tests of predictive and explanatory power and to the rigors of debate over alternative assessments, there can be no specific algorithms to safeguard objectivity, no infinite regress of rules. Ultimately, even rules must give way, not to further appeals to specific (risk-evaluation or risk-assessment) rules, as the naive positivists presuppose, but to a shared appreciation of similarities.89

As such, this Popperian and Wittgensteinian account (scientific proceduralism) anchors objectivity to (1) criticisms made by the scientific and lay community likely to be affected by risk judgments and (2) empirical control over expert opinion (obtained by pursuing the goal of explanatory power, tested by prediction; by calibrating probability assessors; and by performing sensitivity analyses). The criticisms would help to protect the procedural and democratic aspects of risk evaluation, and the empirical control would help to safeguard its predictive and scientific components—that is, its rationality and objectivity.

This account of scientific objectivity is premised on the assumption that open, critical, and methodologically pluralistic approaches to hazard analysis and evaluation (via alternative studies, sensitivity analyses, calibration, and ethical and methodological weighting schemes) can in principle reveal the theoretical, linguistic, and cultural invariants of reality, much as a plurality of experimental perspectives reveals the true invariants of quantum mechanical systems.90 In the view that I am suggesting, the relevant variance principles applicable to hazard analysis, and especially evaluation, dictate that risk behavior is rational and objective if it survives scrutiny and criticism by different, well-calibrated communities of theory holders, each with different transformations or ethical assessments of the same hazard.91

In this view, the risk assessments that we ought to call "rational" are those that have been subjected to systematic critical inquiry regarding explanatory and predictive power, as Popper proposed for science. Risk evaluations that we ought to call "rational" are likewise those that have been subjected to systematic, democratic, and procedural constraints.92 But systematic, critical inquiry requires a plurality of techniques: sensitivity analyses, calibration, alternative assessments, and so on. According to the epistemology outlined here, what characterizes science-related activities, such as hazard analysis, is that they are susceptible to empirical control. They are evaluated, in part, on the basis of explanatory power and predictive successes. The salient features of science and related activities such as risk assessment are thus empirical and predictive control, criticism of one's own perspectives, and recognition that there are alternative perspectives and methods. Indeed, if objectivity has anything to do with invariance under different theoretical perspectives,93 then scientists and risk assessors ought to be strongly committed to retaining only those beliefs that survive critical scrutiny. Such a notion of objectivity, however, suggests that the ultimate requirement for improved risk assessment is a whole new theory of rationality, one that is critical, procedural, populist, egalitarian, and democratic, as well as objective, scientific, and controlled in part by prediction and calibration. It requires, in other words, an epistemology in which what we ought to believe about risk analysis is bootstrapped onto how we ought to act.94 For example, we ought to act in ways that recognize due-process rights, give equal consideration to the interests of all persons, and so on.

Objections to Scientific Proceduralism

Scientific proceduralists recognize that the positivists and the cultural relativists are correct in certain respects. The positivists are correct in believing that there are at least general empirical criteria for scientific rationality and objectivity (for example, checking the calibration of risk assessors), and that reason ought to alter scientific practice and hazard analysis. The cultural relativists are correct in believing that, for most useful methodological value judgments, scientific rationality is largely a function of specific situations. Hence, scientific practice and actual risk evaluations ought to alter reason.

In Chapter Three, I argued for a multilevel notion of scientific rationality and distinguished among principles or goals, procedures, and actual scientific practice. The explication begun in that chapter rests on the insight that scientific rationality and objectivity are more universal than the cultural relativists claim and more complex than the naive positivists appear to believe. This very complexity, however, gives scientific proceduralism the ability to answer some of the main charges likely to be directed against either naturalistic accounts of science95 or naive positivism.96

Against the specific position (scientific proceduralism) outlined here and in Chapter Three, there are at least seven main objections, all of which I believe can be answered: (1) Risk evaluation does not need the alleged improvements (ethical weights, calibration, alternative analyses) mandated by scientific proceduralism, because industry and the market are already accomplishing many of these reforms. (2) To say that there are stable rules or goals of science or risk evaluation and assessment (e.g., "Calibrate expert assessors") presupposes a realist view of science. (3) Since this account provides a normative view of scientific rationality and objectivity and is committed to some stable, universal goals of hazard analysis (e.g., using accident frequency to check estimated risk probabilities), it appears to be a positivistic account. (4) Since I define 'scientific objectivity,' in part, in terms of the criticism and debate of the scientific community, there appears to be no great difference between my view and that of Shapere and Feyerabend. (5) This account of scientific objectivity is "too thin." (6) The proposed goal of science, explanatory power tested by prediction, as achieved through sensitivity analyses and calibration, provides a trivial view of norms in science; as Hempel suggests, goals are "imprecise constraints on scientific theory choice."97 (7) Alternatively, the goal of science proposed in this account is impossible to meet, because it would require predictive power for probability estimates. The first objection has been expressed by a number of industry risk assessors; the second and third objections have been formulated by Miami philosopher Harvey Siegel, the fourth by Notre Dame philosopher Gary Gutting, and the fifth and seventh by Notre Dame philosopher Phil Quinn. The sixth objection is suggested by one of Hempel's criticisms of Kuhn.98

Responses to These Objections

Although space limitations prevent a full answer to these seven objections, I shall briefly sketch the arguments that, if presented in full, would respond to them. First, it is false to say both that risk evaluation does not need the improvements (alternative assessments, ethical and methodological weights, calibration of subjective probabilities) I have suggested, and that industry and the market are already accomplishing these reforms. This objection is the classical industry response: industry claims that it can keep its own house in order and that it needs no government regulations or mandated improvements in risk analysis. Current facts, however, indicate that both these claims are doubtful.

Alternative assessments are necessary, for example, because the public is forced either to accept whatever studies are provided by industry or government or to pay for its own assessments. Since we have already argued that industry risk analyses are often biased,99 and since no laws or regulations provide for funding alternative assessments, there is a need for an alternative, nonindustry point of view: a government-funded alternative assessment. As one prominent assessor puts it: Unlike environmental impact assessment, risk assessment "frequently functions as a mere arcane expert process…. [It] often lacks procedures for public involvement in the design and critique of an analysis."100

There is also an unmet need for placing ethical and evaluative weights on the risk evaluations, so that members of the affected public can choose how to evaluate risks they face. Such weighting is not typically performed, as the discussion earlier in the chapter showed. Because it is not, risk evaluations often exhibit only one type of ethical norm, that of utilitarianism. They ignore considerations of equity and the needs of particular individuals, and they define "acceptable risk" in ways that are unacceptable to numerous potential risk victims, such as workers. All potential risk victims have rights to free, informed consent and to help evaluate risks by means other than utilitarianism and economic efficiency.101

Likewise, there is an unmet need to calibrate all probabilistic risk assessments, both because government does not require such calibration and because there is no guarantee that it will be accomplished without such a requirement. If industry and government were eager to calibrate risk assessors and to revise subjective probabilities on the basis of observed frequencies, then existing U.S. risk studies would already have been corrected. Instead, we had to wait for the Dutch to prove the unreliability of the nuclear risk probabilities specified in some U.S. assessments.102 Industry and government risk assessors have not kept their houses in order; to do so, they need a calibration requirement.

As for the second objection, scientific proceduralism entails no commitment for or against realism. My presupposing that there are general rules of risk assessment (e.g., involving calibration) does not commit me one way or the other regarding realism, since many of the entities (probabilities) having explanatory power may have only hypothetical or heuristic status. Moreover, since I am pursuing an externalist position on scientific rationality, it is reasonable to argue that there is some universal, stable goal of science (explanatory and predictive power), without arguing why this stability obtains.103

Third, there are at least two reasons why this account does not fall victim to any of the flaws of naive positivism. For one thing, it is not committed to purely a priori rules of scientific method. The methodological goals (e.g., calibration, explanatory and predictive power, sensitivity analyses) defended here underdetermine (fail to prescribe) all specific methodological rules. This is because the specific rules need to be dictated in part by particular methodological value judgments and the given risk situation.

Fourth, the fact that some rules need to be dictated in part by the particular situation does not mean that my account of scientific objectivity is no different from that of relativists such as Shapere and Feyerabend. Scientific proceduralism is objective in at least the senses outlined in Chapter Three: its ability to withstand criticism, its ability to change on the basis of new facts and probabilities, and its ability to explain and predict both risks and human responses to them.

This notion of scientific objectivity (scientific proceduralism) is not too thin, because it presupposes a notion of scientific rationality dependent in part upon empirical measures, such as calibration and predictive power. Moreover, any stronger definition of scientific objectivity seems likely to fail, either because it might beg the realism question, or because it might presuppose knowledge we do not have. Because every situation in science is different, it is virtually impossible to specify completely, ahead of time, what an objective representation of some particular risk-assessment situation might be. Despite this impossibility, the methodological goals outlined in this chapter (e.g., testing explanatory and predictive power, performing alternative assessments, conducting sensitivity analyses) do not provide merely a trivial account of norms in applied science and risk assessment, as the sixth objection suggests. For one thing, they presuppose a rejection of most common versions of naturalism. They also provide an answer to the basic question mentioned in Chapter Three: "Are there general principles that account for the rationality of science and risk assessment?"

The goals (like calibration) proposed here likewise are not too strong, in requiring predictive power of risk estimates. Without such goals, one could not test a scientific explanation or risk estimate, and one's assessments would be relativistic. Without testing, one could not secure the empirical foundations of science and hazard evaluation or assessment.104

Conclusion

Since this chapter, like the rest of the volume, is primarily philosophical, it is not meant to provide a precise account of the methodological solutions to problems of risk assessment and evaluation. These details need to be given by statisticians, epidemiologists, psychometricians, economists, and risk assessors. Precise methodological techniques (e.g., for calibration) need to be spelled out, and more examples and more developed arguments need to be provided, in order to support responses to these and other objections to the position of scientific proceduralism. The argument sketches given thus far, however, suggest some of the ways that we might attempt to improve our methods of risk assessment and evaluation, so as to address the problems outlined earlier in this volume. The argument sketches also show that the notions of scientific rationality and objectivity outlined here deserve further investigation; indeed, much of the epistemological work for this account of hazard evaluation remains to be done. The next chapter examines some of the ways in which we might improve risk management.



Chapter Twelve
Risk Management

Procedural Reforms

During any period in almost any region of the country, one hears reports of local opposition, either to siting unwanted technological facilities or to bearing the environmental risks associated with already existing developments. Names like "Love Canal," "Three Mile Island," "Baltimore Canyon," "Minamata Bay," and "Seabrook" have been etched in our newspapers, our courtrooms, and our fears.

According to some experts, many cases of public opposition to risk are created by self-serving attitudes and unrealistic public expectations, rather than by catastrophic accidents and accelerating cancer rates. These controversies, they say, represent instances of an "inverse" tragedy of the commons. In Garrett Hardin's "tragedy of the commons," people misuse a "public good," such as clean water, simply because it is free.1 The "inverse tragedy of the commons," however, occurs when people avoid their share of responsibility for "public bads" that fulfill essential public purposes—for example, when people try to avoid having airports, roads, or toxic waste facilities near them.2 One of the main tasks of hazard assessment and management is to know when a risk imposition represents a tragedy of the commons, which ought to be avoided, and when it is an inverse tragedy of the commons, which ought to be accepted responsibly.

One reason why it is difficult for the public to determine whether a given risk ought to be accepted or rejected is that evaluators often do not make an adequate case for accepting the risks that are likely to be imposed. They do not do so, in large part, because their hazard analyses often have two main deficiencies. They typically employ a number of questionable evaluative assumptions3 and epistemological strategies,4 whose effect on policy is to disenfranchise the public, the potential risk victims.



This chapter will provide a brief overview of several procedural ways to improve hazard evaluation and risk management. It will argue that, in order to address some of the problems with equity of risk distribution, compensation, consent, and scientific uncertainty, we need to investigate ways to reform statutes dealing with societal hazards. Moreover, because of the deepening loss of public trust in most institutions, carefully structured citizen participation and negotiation in making risk decisions are the only ways to legitimate such decisions to the public. Policy decisions, especially when they affect health and safety, stand little chance of being accepted, for example, if government and industry continue their current DAD (decide, announce, defend) strategy for siting hazardous facilities and imposing environmental risks. Such a procedure relies on an unstructured, industry-controlled siting process and limited citizen input; the result is that public controversy is virtually guaranteed.5

This chapter will not provide specific solutions to problems of managing environmental risks, since such solutions, and the arguments for them, are best provided by specialists in fields such as environmental law and welfare economics. Nevertheless, the general proposals provided in subsequent sections of this chapter should be enough to establish the prima facie plausibility of several proposals: (1) to achieve statutory reform in areas such as toxic torts; (2) to aim at obtaining free, informed consent from all potential risk victims and to include them in decisionmaking about societal hazards; (3) to provide ex ante and ex post compensation for imposed public risks; (4) to negate all liability limits that protect risky technologies at the expense of the public; and (5) to begin to resolve environmental controversies through negotiation and adversary assessment. Since the distribution of public risks has rarely been fair and has often fallen disproportionately on the powerless and the poor,6 the proposals aim at a more equitable distribution both of societal hazards and of decisionmaking power regarding them.

Statutory Reform and Risk Management

One of the central ways of reforming risk management is to amend the laws governing environmental or technological hazards, such as the laws governing toxic torts. Statutory improvements in this area are difficult to achieve, as Chapters Four, Five, and Eleven outlined, because there are numerous scientific uncertainties in identifying, estimating, and evaluating societal risks. As a result, litigation related to public safety and health is complex, time-consuming, and expensive; toxic tort plaintiffs, for example, must explain how and why a hazard caused a specific condition, even though any of numerous factors could be the real cause. Moreover, the relative contribution of genetic makeup, life-style, and the external environment to human health is difficult to determine. As a consequence, industry and government repeatedly invoke scientific uncertainty as grounds for the failure to reform laws dealing with hazardous substances.7

The reason why these legal mechanisms need to be amended is clear if one considers the costs that environmental hazards force on citizens. Public risks typically impose at least three burdens on society: costs of harm (e.g., medical bills, pain); costs of avoiding harm (e.g., pollution-control mechanisms); and transaction costs incurred in allocating harm (e.g., litigation, negotiation, and regulation). If there were no transaction costs, then society could allocate resources so as to minimize the sum of the costs of harm and avoiding harm. In such a situation, the party imposing risk or damage would have to strike a bargain with the (potential) victims, a bargain leading to an economically efficient outcome.8 Since our society has transaction costs, however, its laws of liability often lead to economically inefficient outcomes. Also, they favor those who impose risks, rather than those who are their victims, because those who cause risk and harm are not penalized until the damage to victims exceeds their (the victims') transaction costs (e.g., exceeds the costs of the victims' initiating litigation). The net effect is that high transaction costs (often caused by scientific uncertainty) beset attempts to establish liability; trying to prove that someone who imposes a risk (e.g., a toxic dump licensee) is at fault may thus encourage inefficient and unethical allocations of resources.9 The purpose of this chapter is to examine some efficient and ethical ways of managing societal risks.
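The argument about transaction costs lends itself to a small numerical sketch. The toy model below (all names and figures hypothetical, chosen only for illustration) shows how a risk imposer escapes paying for every harm whose damages fall below the victim's cost of bringing suit, so that part of the social cost is externalized.

```python
# Toy model of the transaction-cost argument above: victims recover
# only when their damages exceed their costs of initiating litigation,
# so smaller harms go unpenalized. All figures are hypothetical.

def compensated_harm(damages, transaction_cost):
    """Harm the risk imposer actually pays for: only victims whose
    damages exceed their transaction costs find it worthwhile to sue."""
    return sum(d for d in damages if d > transaction_cost)

# Five hypothetical victims, damages in dollars.
damages = [500, 2_000, 8_000, 15_000, 40_000]

total_harm = sum(damages)                                   # harm caused: 65,500
paid = compensated_harm(damages, transaction_cost=10_000)   # harm penalized: 55,000

# The gap is the harm externalized because of transaction costs.
externalized = total_harm - paid                            # 10,500
```

On these numbers, the three smallest injuries are never compensated at all, which is exactly the sense in which the liability rules "favor those who impose risks" until damages exceed the victims' transaction costs.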

To understand the typical difficulties of using the existing legal system to protect victims of environmental risks, we can consider several characteristics of toxic tort litigation. (1) Often victims do not know how or that they have been put at risk or harmed by toxic substances. (2) The time, effort, and expense of bringing suit often exceed the probable damage award, as is evidenced by the small number of compensation awards in cases of occupational cancer. (3) Toxics victims typically have less information than the defendants, usually companies producing hazardous chemicals. These difficulties are illustrated by the fact that, even outside the tort system, workers' compensation boards are typically unable to determine causes of harm. The boards face the same difficulties as plaintiffs within the legal system when they attempt to establish causes of harm despite limited information. In the state of Colorado, for example, fewer than two plaintiffs per year typically receive workers' compensation awards for occupational cancer, even though epidemiologists estimate that many times this number deserve such benefits.10

Private damage actions for victims of environmental hazards are also problematic because many states have statutes of limitations, and most courts require victims to prove that the defendant's behavior unreasonably exposed them to a hazardous substance. Most courts also require victims to quantify costs and benefits in attempting to establish fault, to show causation of harm, and to prove that the harm was "worth" avoiding in a cost-benefit sense. Proving causation is especially difficult, as was noted in Chapter Four, because studies of environmental harm address causality in terms of statistical probabilities, a form of evidence typically not recognized by courts as establishing causation. Even if victims can establish the agent causing their harm, they are still faced with showing which of many exposures or risks are responsible for the actual harm—for example, whether they were exposed to chlorine in the drinking water or in the workplace.11

Regulation by administrative agencies, such as the Environmental Protection Agency or the Nuclear Regulatory Commission, likewise often fails to protect victims of environmental hazards, just as do private damage actions. This is in part because, in most such cases, the risk perpetrator is considered innocent until proved guilty. Hence, even though the government possesses resources enabling it to overcome many scientific uncertainties associated with environmental risks and harms, well-financed business interests often dominate regulatory decisions. The Atomic Energy Commission, for example, was so embroiled in lawsuits, because of its catering to vested interests, that it had to be abolished in 1974.12

Another difficulty with regulatory agencies is that statutory control is often fragmented among several different commissions. For example, five different federal agencies are responsible for the regulation of toxic chemicals. This duplication of effort creates confusion, inefficiency, and inconsistency. Regulatory agencies also typically have insufficient funds to bring lawsuits in more than 1 or 2 percent of the cases that ought to be tried. Since current regulations do not encourage private-sector research on avoiding unknown environmental dangers, only the government is able to determine the responsibilities of those who impose risks on the public. Moreover, rulemaking is time consuming and cumbersome, so that, even if something is known to cause a risk, it takes more than a century to establish regulatory standards for it.13



Statutory Reform, Transaction Costs, and Liability Limits

Given the problems associated with private damage actions and administrative regulations, it is safe to conclude that government and law have not been as effective as they might have been in reducing the societal costs of environmental and technological risks. If we are to manage public hazards more effectively, then we must try to devise ways to reduce the transaction costs associated with allocating risks and harms. One way would be to ensure that litigation is not a prerequisite to recovering damages for injury. Compensation could be financed from a fund administered by a government agency. Risk imposers, such as producers of toxic wastes, could contribute to the fund on the basis of the amount and severity of the risk and harm they generate.

A second way to reduce transaction costs would be to ease the evidentiary burdens placed on victims in cases where litigation is unavoidable. The Food and Drug Administration, for example, requires pharmaceutical manufacturers to demonstrate that a new drug is safe before it allows sale, instead of presupposing that the manufacturer is innocent until proved guilty. Another example of easing the evidentiary burden on victims occurred in a celebrated case involving the drug diethylstilbestrol (DES). The plaintiff could not identify which of several defendants caused her injury (produced the DES responsible for her cancer), yet one of them did so. Hence, the court eased the victim's evidentiary burden and shifted the burden of proof to the defendants, who were required to show that they were not responsible for the harm. Part of the reason for this shift was that the court determined that defendants are better able to bear the cost of injury caused by a defective product, and that holding manufacturers liable will provide an incentive for product safety.14

A third way to reduce transaction costs would be to force people to pay for the risks or harms they impose on society, by internalizing the externalities or social costs of employing hazardous technologies. One way to internalize social costs would be to impose strict liability on those who have caused societal risk or harm. New Jersey's Major Hazardous Waste Facilities Siting Act, for example, contains a provision for strict liability. (I shall argue later for full, but not strict, liability.) The act permits compensation for damage without proof of negligence. By using liability to internalize externalities, the New Jersey law helps the price (paid by consumers) of the goods generated by a risky technology to reflect the overall social costs (such as injuries or pollution) of producing them. Internalizing social costs or externalities also motivates those in charge of a hazardous technology to avoid causing damages whenever the total social costs of such harm exceed the damage-avoidance costs. This deterrence mechanism is weakened whenever the market price of a commodity does not reflect its total social costs, because the expenses of bringing suit deter many potential victims or plaintiffs. The more the market and the regulatory system reflect true damages and expenses, the greater the incentive to lower the social costs of risk. And the greater this incentive, the better mechanism we have for deterring environmental harm and for allocating risks in an equitable and efficient manner.15

A Model Statute for Reform

Deterring harm and distributing risks equitably and efficiently requires that any statutory reform address a fundamental difficulty: inexpensive avoidance of technological and environmental hazards depends largely on liability. Assessing liability individually, however, militates against swift and efficient cost allocations by imposing the transaction costs of litigation. Statutory reform, therefore, must attempt to reduce the transaction costs of liability assignments.

In the model statute he defends, Trauberman provides several ways to avoid some of the high transaction costs. He proposes that anyone suffering from a particular environmentally induced disease, who was exposed to a certain substance (such as asbestos) at a specified level and for a particular duration, should be presumed to have established that the substance caused the disease. Such a presumption, already used in settling black lung cases, would help reduce both the plaintiff's burden of proof and the current problem, for example, of having over 10,000 plaintiffs all filing suit for asbestos-related injuries. It would avoid a situation in which courts, plaintiffs, and defendants spend time and money dealing with identical issues.16

A second way to avoid costly and frequent adjudication of similar issues would be to use more class-action suits in cases of pollution- and environment-related injuries and risks. Admittedly, the courts have not welcomed class actions for such cases, reasoning that the issues of exposure, liability, and damages are too diverse for class-action treatment.17 Nevertheless, it might be possible, with suitable statutory reforms, to certify class actions for environmentally caused risks or harms, and yet to use separate proceedings to determine liability and damages, as happened in the Agent Orange and DES cases.18 Statutory reforms could also require that those persons whose conduct involves the societal or occupational imposition of environmental risk be held liable without fault for the full damages that they cause. Although their liability could be limited to the extent that other factors are judged to have caused the risk or injury in question, requirements of full liability and liability without fault would invalidate several current statutes concerning environmental and technological hazards. One such statute is the Price-Anderson Act, which reduces most of the liability of commercial nuclear licensees.19

The main rationale for negating such liability limits was argued earlier in Chapters Eight and Nine: the necessity to internalize all costs, to ensure the rights of all citizens to compensation for harms or risks imposed on them, to achieve equity, and to provide an incentive for safe management of hazardous technologies. The obvious objection to negating liability limits, however, is that full liability, without fault, might shut down new technologies and discourage inventions and scientific progress.20

The response to this objection has already been spelled out earlier in the volume. Economic or technological efficiency does not justify limiting citizens' rights; it does not promote equality; it does not always increase employment; and it places disproportionate environmental risks on the poor. As Thomson puts it, if A's action will cause B harm, then A must buy this right from B, provided the harm is a compensable one.21 Since the harms associated with liability limits are often incompensable (for example, death from a nuclear core melt), it is even more imperative that the liability not be limited. Apart from these ethical considerations, however, there are practical reasons for believing that full liability, without fault, need not discourage inventions and scientific progress. In cases where inventions and technologies were essential to the public good, government (taxpayers) could provide the requisite insurance. Insurance pools, financed by individual investors, could also cover possible liabilities associated with innovative and risky technologies. Of course, if no private insurers will take the financial risks associated with a dangerous technology, then this might suggest that such uninsured risks should not be borne by taxpayers either.22 Even if no one were willing to insure risky technologies, they still could be tested and developed in situations where potential victims gave informed consent and signed releases, much as is done in cases of medical experimentation. Hence, statutory reform, requiring full liability, need not impede technological progress.23

Statutory reform also needs to address the problem of apportioning damages for environmental or technology-related injury, especially when there are multiple independent causes of harm. One response to cases with several defendants would be to reject the established causation rule and to substitute a system of joint and several liability for the entire amount of damages, as was done in the landmark case of Landers v. East Texas Salt Water Disposal Company.24 Another possible solution to the problem of determining the responsible defendant, in cases of multiple causation of harm, would be to assign liability based on market share. As was mentioned earlier, the California Supreme Court did exactly this in Sindell v. Abbott Laboratories. In this landmark case, the plaintiff was a young woman injured by prenatal exposure to DES. She was unable to identify which of several companies had made the DES taken by her mother, and so she brought suit against the major manufacturers of the drug. Since six or seven companies, out of two to three hundred, were responsible for 90 percent of the market, the court devised a theory of "market share liability." Under this theory, each defendant was held liable for its share of the DES market unless it could show that its product was not responsible for the injuries to the plaintiff.25 In other cases involving multiple defendants, however, any one of a variety of possible rules might be adopted for allocation of liability.26 My point is not to argue for any particular rules, since their applicability is, in part, situation specific. Rather, the point is that there appear to be grounds for new allocation rules.27
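One reading of the market-share rule can be sketched in a few lines: absent exculpatory proof, each remaining defendant bears damages in proportion to its (renormalized) share of the relevant market. The shares, damages, and renormalization choice below are hypothetical illustrations, not the actual figures or holding of Sindell.

```python
def market_share_liability(total_damages, market_shares, exculpated=()):
    """Apportion damages among defendants in proportion to market share.
    A defendant who shows its product could not have caused the injury
    is excused; the remaining shares are renormalized among the rest.
    (Whether to renormalize is itself a contested doctrinal choice.)"""
    liable = {d: s for d, s in market_shares.items() if d not in exculpated}
    total_share = sum(liable.values())
    return {d: total_damages * s / total_share for d, s in liable.items()}

# Hypothetical: three manufacturers jointly covering 90% of a drug market.
shares = {"A": 0.45, "B": 0.30, "C": 0.15}
awards = market_share_liability(100_000, shares)
# Awards stand in the ratio 0.45 : 0.30 : 0.15 of the total damages.
```

The renormalization step reflects the feature the text emphasizes: liability attaches in proportion to the risk each defendant created, not to proof that it caused this plaintiff's harm.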

The Sindell court, for example, in invoking a new rule based on market share, explicitly endorsed apportioning liability based on risk (rather than certain harm) attributable to each defendant when causation is uncertain. Hence, the Sindell case shows that plaintiffs need not prove that the defendants caused the harm.28 The case therefore provides a foundation for significant reform of tort law, a reform applicable to sophisticated harms that cannot be traced to any particular producer. The rationale for forcing manufacturers who impose such sophisticated harms to bear the costs of injury caused by their products is that these costs "can be insured by the manufacturer and distributed among the public as a cost of doing business."29

Assignment of liability on the basis of risk (or probability of causing harm), of course, might cause some innocent defendants to be held responsible. On the other hand, such an assignment would avoid a worse consequence of apportioning full liability to one defendant merely because that defendant was a substantial factor in the cause of harm. To avoid such a consequence, some situations might require a rule that a plaintiff could obtain proportional recovery based on the risk of harm created by each defendant, if the court determines that it is unreasonable to expect the injured individual to ascertain the precise cause of harm among multiple factors.30 Another way to reduce transaction costs borne by victims of environmental harm would be to provide that substances posing a reasonable likelihood of causing certain risks or diseases be administratively designated as "hazardous." Such a designation would diminish the burden of proof for the plaintiff or victim and would allow statistical evidence (of a causal relationship between exposure to a hazard and some disease) to be admissible in private liability actions.31

Statutory reform could also provide for liberal time periods for filing claims and actions for recovery. Nonrestrictive statutes of limitations are especially important in reducing transaction costs, because of the long latency periods for diseases such as cancer (thirty years in some instances) and because of the scientific uncertainty often associated with the early stages of harmful, technology-related actions.32 Likewise, allowing recovery within a liberal geographical jurisdiction would also reduce transaction costs. Countries that export pesticides banned in their own nation, for example, could be held responsible for damages incurred when the chemicals are shipped abroad. Such a law would respond, in part, to some of the criticisms (in Chapter Ten) of the "isolationist strategy." For example, any country accepting international guidelines for production, testing, notification, and labeling of pesticides could be held liable for injuries caused by any product it exports in violation of the guidelines.33

To improve the efficiency and equity of market transactions regarding environmental risks and injuries, a new statute could encourage companies, both here and abroad, to provide the public with information about various hazards. For example, the new statute could reduce the aggregate liability of risk imposers if there has been timely notification of potential victims.34

Even with such statutory reforms, however, persons exposed to environmentally induced risks and injuries would likely still bear substantial transaction costs. In large part, this is because they would not always be able to identify the substance responsible for their risk or injury, or because the responsible person might not have the financial resources to satisfy claims against him. For these reasons, perhaps litigation should not be a prerequisite to every recovery or risk avoidance. Perhaps a new statute should call for the establishment of an environmental-risk fund that would act as a secondary source of compensation.

Such a fund, mentioned earlier in the chapter, could be financed (1) by industries responsible for environmental and technological hazards (just as Superfund, created to compensate for damages posed by toxic wastes, is financed by a tax on the oil and chemical industries), (2) by "degree-of-hazard taxes," and (3) by public contributions. Those who impose societal risks could be required to make payments to the fund that reflect the magnitude of the compensable harm that they cause. A substantial percentage of the hazard fee, for example, could come from a tax on the manufacturers of tobacco products, asbestos, and other substances that are well-recognized causes of risk and harm. The fund could provide injured individuals, or persons put at risk, with the option of either filing a claim against the fund or bringing a direct action against whoever caused the harm or risk.35
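A fee schedule of the kind imagined here, with payments reflecting the magnitude of compensable harm each imposer causes, could be sketched as a charge proportional to expected harm: the annual probability of harm times its severity, plus a loading for the fund's administrative costs. The function, rates, and figures below are all hypothetical assumptions, not a proposal from the text.

```python
def hazard_fee(annual_prob_of_harm, compensable_damages, loading=1.2):
    """Annual contribution to the environmental-risk fund: expected
    compensable harm (probability times severity), scaled by a loading
    factor for administration. All parameters are hypothetical."""
    return annual_prob_of_harm * compensable_damages * loading

# Hypothetical imposers: a toxic-waste site and a small solvent user.
site_fee = hazard_fee(0.01, 5_000_000)   # riskier operations pay more
shop_fee = hazard_fee(0.001, 200_000)
```

A "degree-of-hazard tax" of this shape makes each operation's payment track the harm it can be expected to cause, which is the deterrence rationale the chapter gives for the fund.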

Federal statutory precedents for such a fund already exist—for example, under Superfund, the Deepwater Port Act, the Surface Mining Control and Reclamation Act, the Black Lung Benefits Reform Act, and the Outer Continental Shelf Lands Act.36 However, since the purpose of this chapter is not to provide specific solutions to problems of environmental risk, I shall not discuss further details of this proposed fund. Details about the fund and arguments for it are best presented by specialists in fields such as environmental law and welfare economics. My purpose is merely to establish the prima facie plausibility of several proposals for dealing with the risk problems discussed earlier in this volume. It is also to establish the fact that, if we ignore these problems and fail to modify existing legal mechanisms for dealing with environmental hazards, then society will continue to impose disproportionate costs on those who are most ignorant of, and most vulnerable to, environmental risks.

Informed Consent and Citizen Negotiation

To some extent, protecting those who are most vulnerable to environmental risks (e.g., those who work with toxic chemicals) is a matter of guaranteeing free, informed consent to public and occupational hazards. Statutory reform, as suggested in the previous section, would provide some of these guarantees, since the new statute would include provisions for limiting liability as an incentive for full disclosure of possible hazards.37 To achieve explicit consent, however, we need actual citizen participation in negotiating solutions for problems of risk. Moreover, once one realizes that the process of hazard assessment and management is highly value laden and politicized,38 then negotiation (rather than mere expert decisionmaking) becomes a virtual necessity for ensuring free, informed consent in situations of controversial risk.

If the purpose of negotiation is to defuse highly politicized risk situations, so as to ensure citizens' free, informed consent, such negotiation will need to presuppose that several conditions have been met. First, ideally the bargaining parties ought to be roughly equal in political and economic power, in order to ensure free, procedurally just transactions. This means that both citizens and industry groups need equal funding so as to obtain access to experts, attorneys, and staff assistance. In particular, citizens need to have access to taxpayer monies to fund their negotiations, so that they will not be disadvantaged in any way in bargaining with industry. Second, alternative points of view, different evaluative assumptions, and a variety of risk methodologies ought to be considered. Government would fund the completion of alternative assessments and hazard-management plans,39 ensuring that all sides to a controversy were represented and well informed. Consideration of alternative positions would be a requirement of democratic decisionmaking, rather than a luxury accessible only to those who are financially able to participate in administrative hearings or legal appeals. Third, the negotiation process would be controlled not by a regulatory agency with discretionary powers, but by a group of citizens and experts with no conflict of interest in the matter under consideration. Hence, unlike regulatory and administrative decisionmaking, the negotiation would be less likely to be co-opted by unrealistic environmentalists or by unscrupulous industry representatives.

On the ethical side, negotiation is often able to equalize the costs and benefits associated with hazards, even though risk imposition usually results in dissociation of costs and benefits. For example, one group of citizens might receive the benefits associated with using toxic chemicals, while another set of persons (those who live near the manufacturing plant) might bear the costs associated with the hazard. Those affected by risky technologies or environmental actions realize that, without compensation or consent, it is not fair for them to bear the dangers of some activity that benefits the whole society.40 Economists recognize that such inequities cause a misallocation of resources and a misrepresentation of policy choices.41 Typically, however, the social costs of this inequitable risk distribution are not taken into account in decisionmaking. Economists generally consider these social costs or inequities only as "externalities," external to the market processes with which economics is concerned.

As was argued in Chapter Eleven, however, justice demands that we internalize these social costs, perhaps by using risk-cost-benefit analysis that is weighted to reflect considerations of fairness and distributive equity. One way to assess which of a variety of weighting schemes and alternative hazard analyses to employ is to use negotiation among citizens, the relevant industry officials, and policymakers. Since it enables local citizens and potential victims to have a voice in hazard assessment and management, negotiation can be an important means of avoiding antipopulist biases (Chapter Two), reductionist approaches to risk (Chapter Three), as well as the expert-judgment (Chapter Six), probabilistic (Chapter Seven), and producer (Chapter Nine) strategies criticized earlier.

On the practical side, negotiation is one way of avoiding costly, time-consuming litigation, as well as local opposition to environmental risks. Many observers are convinced that a hazardous facility, for example, cannot be sited over a community's objections. A local group has many tactics that enable it to get its way: delay, litigation, political pressure, legislative exemption, and gubernatorial override.42 Individuals and communities have a keen sense of the inequities that result when one group bears the risks and another reaps the benefits. Consequently, they often subscribe to the NIMBY ("not in my backyard") syndrome and, at least in the siting of toxic waste dumps, have usually succeeded in averting the hazard.43

One practical way of avoiding the NIMBY syndrome is to negotiate with citizens, in order to determine what actions, if any, might be required to mitigate hazards, to promote safety, or to compensate them for accepting a given societal risk.44 Project siting generally fails when the government agency attempts to "sell" its premade decision, when it redefines public concerns, when it sponsors no prior public education efforts, and when it merely holds citizen hearings. It is often successful when the agency seeks data on the public's attitudes and needs, when it holds small-group meetings, when it sponsors prior education efforts, and when it exchanges written information with citizens or environmental groups.45

Admittedly, proponents of a risky technology might try to avoid negotiation with the public on the grounds that it is costly and time consuming. They might prefer instead to have state governments preempt local decisionmaking. Using legal and regulatory devices to avoid negotiation, impact mitigation, and compensation, however, is ultimately not a practical way to move toward siting a hazardous facility or imposing a particular risk on a community. It is impractical because, apart from what the law and the courts say, opponents of an environmental risk (unless they are won over through mitigation and negotiation) can resort to civil disobedience to accomplish their goals. In Michigan, for example, as was already mentioned, local residents put nails and tacks on the highways in order to prevent the state from burying cattle contaminated by polybrominated biphenyls. In other jurisdictions, residents have threatened to dynamite existing risky facilities; they have also taken public officials hostage to vent their anger over policymaking processes. All this suggests that, apart from the ethical and legal questions involved, people will resort to extreme measures to avoid a risk if they believe that their lives or their homes are threatened. In such a situation, only negotiation, not law or force, stands a chance of winning them over.46

Since there are a number of different models of risk participation and negotiation within local communities, I shall attempt neither to defend a particular one here nor to answer various questions that might arise in a given model (e.g., who should negotiate and how they should be chosen).47 Those tasks are better left to policymakers, arbitrators, and sociologists.48 To illustrate the prima facie plausibility of negotiation, I need only sketch several cases in which it has been used successfully. Environmental sociologist Elizabeth Peelle, a member of the research staff of Oak Ridge National Laboratory, has done much of the pioneering work on successful citizen negotiation in the face of risky technologies and undesirable environmental impacts. (Both she and I count citizen participation as "successful" if it leads to risk decisions acceptable to the informed, affected public.)

By analyzing some of the successful instances of public participation, Peelle was able to determine their common characteristics and hence to suggest criteria for successful negotiation regarding hazards. In one recent essay, she examined a local citizens' task force for a monitored retrievable storage (MRS) facility for nuclear waste in Tennessee; a two-state citizen forum for a Hanford, Washington, nuclear waste project; two local citizens' task forces for toxic waste in North Carolina; and local and state participation in a hazardous waste project in New Mexico. Her conclusion was that, when citizens participate in risk assessment and management and are involved in the decision process, they are more likely to accept the ultimate actions taken in such projects. When decisionmaking includes negotiation, say Peelle and numerous other experts, the possibilities increase for legitimated, stable public policy.49

In another essay, Peelle detailed three impact-mitigation plans, two for nuclear power plants in Tennessee and Washington State and one for a coal-burning facility in Wyoming.50 Faced with opposition from the local county (which refused to rezone the site) and from the Sierra Club and the Farm Bureau (which contested the application for a license to generate electricity), the three utilities (one in each case) negotiated with citizens on how to promote safety, how to reduce hazards, and how to compensate them for their potential risks and losses.51 The Wyoming plan, the most comprehensive of the three discussed by Peelle, provided compensation for the risk by guaranteeing the community funds for mental health and social services, recreation, roads, and law enforcement. All three of the risk-abatement plans included some hazard-monitoring provisions.52 Their annual costs, for each utility, are between $125,000 and $900,000. In each case, the utility gives direct payments to the community, either in the form of prepayments of future taxes or by means of loan guarantees and grants.53 Because they internalize (through compensation) many of the social costs associated with the imposition of societal risks, and because they are the result of negotiation with the relevant local communities, these impact plans have had at least three desirable effects. They have lessened uncertainties concerning facility siting; they have reduced delays resulting from unresolved issues, such as safety; and, through monitoring impacts, they have provided an urgently needed data base for improving future impact predictions and risk-mitigation plans. The three cases also illustrate a difference in risk-mitigation approaches: the Tennessee and Wyoming plans offer compensation for undesirable impacts, such as inequitable risk distributions,54 whereas the Washington plan provides fiscal incentives for acceptance of a risky technology.55

Perhaps the best argument for use of incentives and compensation is that offering them to citizens within the context of negotiation is far superior to most existing means of deciding whether to impose environmental risks on communities. Public participation within these existing means of decisionmaking often takes the form of intervention; that is, environmentalists and consumer activists appear, speak, and hence "intervene" on behalf of their cause, before the agency (for instance, the Nuclear Regulatory Commission) that has the power to impose a risk on a community. The process is fatally flawed, however, both because intervenor status is hard to obtain and because, once granted, the status does not guarantee adequate funds for representing the point of view of citizens or environmentalists. Moreover, intervenors are typically limited to raising only those issues which the law already requires the agency to take into account, even though other questions may be far more important. In the siting of nuclear power plants, for example, intervenors are not allowed to challenge the adequacy of current standards for radiological protection, but only to question whether a given utility or plant will meet those standards.56 Consequently, much of the intervenor money that is raised must be used on expensive lawyers who know how to fight highly stylized battles. All this is to little purpose if intervenors cannot challenge the decision-making process itself, a procedure that typically disenfranchises the public. Furthermore, decisions to site risky facilities are often a fait accompli by the time an intervenor has his say. Hence, intervenors are often viewed more as obstructionists than as representatives of any reasonable point of view. Use of incentives within a framework of negotiation, however, provides a possible opportunity for consensus building rather than mere obstructionism.57

For incentives to be successful in increasing public support for risky environmental projects, social scientists have determined that at least three prerequisites must be satisfied: (1) The incentives must guarantee that local criteria for public health and safety will be met; (2) they must provide some measure of local control; and (3) they must legitimate negotiations as a viable mechanism for building consensus and resolving disputes.58

Of all three prerequisites, the first is probably the most important. Some of the most successful negotiations have taken place only when developers and manufacturers of risky technologies adopted a comprehensive, integrated program of risk reduction. For example, the New Jersey Hazardous Waste Facilities Siting Commission has been successful because it has sought not merely to obtain financial compensation for local citizens but also to reduce the pollutant stream, recycle waste, and detoxify it.59

The theory behind such negotiation and risk reduction is that, through the use of neutral or self-interested negotiators, all interested parties can come together, identify areas of common interest, and reach a solution that is fair and mutually acceptable. Besides New Jersey, Massachusetts, Rhode Island, and Wisconsin are also pioneers in negotiating to resolve local risk disputes. All three states preempt local land-use controls, but then throw the owner and operator of the proposed or existing hazardous facility into a procedure that combines methods of mediation, negotiation, and arbitration. (Mediation or conciliation, as defined by many policymakers, is a subset of negotiation; an approach adapted from labor relations, it uses negotiation to identify the real cooperative actions possible for interdependent parties whose interests and objectives differ. Negotiation typically involves two parties, whereas mediation is handled by a third party seeking to resolve a two-party dispute; neither procedure is ordinarily binding. In arbitration, however, two parties sit before a judge, and the outcome is binding.60 ) Some of the elements recognized by social scientists as necessary for successful negotiation include the following: identifying all the relevant parties early in the process, involving affected persons in early information development and evaluation, providing incentives to residents to accept a risk, taking steps to see that all citizens are informed about the proposed action or facility, and ensuring that the benefits exceed the costs for the host communities.61

Objections to Negotiation

Admittedly, there are a number of objections to negotiation, both on the part of the affected communities and on the side of the developers or industries seeking to gain risk acceptance from a local group. The industry objections typically have to do with the political or economic costs likely to be required because of negotiation. These objections can often be answered by some of the practical considerations advanced in the previous section.62 A common citizen objection to negotiation is that it allows situations in which persons trade health and safety for financial rewards. Citizens claim that negotiation either allows developers and risk imposers to "bribe" local groups, or else it condones communities that want to "extort" compensation from developers.63

It is important to put these deficiencies in perspective, however, in order to see why negotiation is needed. Negotiation has all the same flaws that democracy generally has: when citizens are allowed to make choices, they often make mistakes. The only way to guarantee that citizens will never make mistakes is never to allow them to make decisions. Admittedly, neither negotiation nor democracy is a sufficient condition for avoiding mistakes such as trading lives for dollars. Negotiation with potential victims is needed, however, because it is a necessary condition for avoiding error in hazard evaluation and management; only those affected have the right to consent to the imposition of risk.

The argument throughout this chapter and the preceding ones has not been that negotiation has no problems. It has been a much more modest claim: that negotiation (in which citizens have a voice) is preferable to hazard assessment and risk management in which they have no role. The argument here is that negotiation is preferable to the typical current procedure for siting facilities, a strategy that often disenfranchises the public. Negotiation is preferable to preemption of local control, both because citizens have a right to exercise control over activities that threaten their well-being, and because preemption does not silence the risk aversion, civil disobedience, and opposition of persons who believe that their lives are in danger. Only rarely, for example, can states prevent a town from amending the maximum weight limit on a bridge it maintains in order to restrict truck access to a proposed risky facility. Instead, attempted preemption of local control of risks may simply force opponents to turn either to guerrilla tactics of opposition or to costly and time-consuming litigation. Or, as happened in Massachusetts, preemption could force local governments to use their power in state governments to defeat or prevent the operation of a preemption statute.64

Another response to criticisms of negotiation is to admit that many hazardous facilities (such as nuclear power plants) probably ought not to be sited, because there are less dangerous ways of providing the same benefits to society, and that any potential risk victims can be bribed if they are somehow disadvantaged in or by society. That they are vulnerable to "bribes," and that they seek to remedy this disadvantage by trading safety for financial rewards, however, is not the fault of negotiation.65 It is not the fault of negotiation if people in Oregon, for example, vote for a copper smelter because they need the jobs, even though the smelter threatens the lives and safety of all citizens.66 It is not the fault of negotiation that any humans are so desperate that they feel forced into such choices. As Judith Jarvis Thomson puts it, "It is morally indecent that anyone in a moderately well-off society should be faced with such a choice . . . a choice between starving on the one hand, and running a risk of an incompensable harm on the other."67 It is the fault of society, not negotiation, if people face such choices. When societal balances of power are inequitable, risk negotiation will reflect those institutional imbalances. It cannot correct injustices already sanctioned by society. Risk negotiation can merely enfranchise those who heretofore had little voice in what hazards were imposed on them. Hence, negotiation does as much as can be expected of it.

Even though negotiation cannot necessarily prevent "bribing" the consumer, there is some evidence that such bribes are unlikely. Environmental sociologists report that noneconomic incentives—such as independent monitoring of risks and hazards, rights to information, and access to decisionmaking—are just as important to most citizens who are negotiating about local environmental risks as are financial incentives.68 If these findings are correct, then "buying off" citizens, in exchange for lessened safety requirements, may not be very likely.69

Moreover, apart from whether negotiation leads to what some might view as "bribes," consumer sovereignty and democracy require that people be allowed to make their own choices.70 As one philosopher put it, so long as they do not harm others, people ought to be allowed to choose cigarettes and saccharin; they ought to be able to choose nuclear fission and big cars, instead of being forced to accept solar power and small cars.71 As Mill and many other liberals recognized, people must be allowed to run whatever risks they wish, so long as the consequences affect only them.72

To oppose negotiation because it can lead to faulty citizen decisions or to consumers' being co-opted is unrealistic and narrow-minded. It is unrealistic because a technological society cannot avoid all risks. It is narrow-minded because it refuses to recognize that, in a democracy, everyone must make trade-offs. Everyone must have a mutual willingness to share power, make compensatory arrangements, and negotiate regarding risks, whether he is a developer of hazardous technology or a potential citizen victim. Not to negotiate would be to bring society to a technological standstill. The objection that not everything is negotiable, that not everything has its price, is, of course, correct. Not all harms are compensable. That is why the federal government sets minimum environmental standards. Negotiation, however, is not proposed as a way to attain less stringent standards. Nothing in this proposal for negotiation suggests that we should abandon uniform environmental standards or that we should allow some communities to be victimized by less stringent risk requirements simply because a given industry has "bought them off." Rather, negotiation presupposes that a whole system of uniform environmental standards is in place, that such standards define what level of risk is minimally acceptable, and that people cannot "negotiate" their way to less stringent requirements. All that negotiation does is to compensate potential risk victims, give them more control over hazards, and reduce their risks.

Negotiation (with concomitant compensation or incentives) is at least in principle plausible because it is consistent with the legal and philosophical bases of neoclassical economic theory and the compensating wage differential. According to the theory behind the wage differential, persons can accept occupational risks provided that they are compensated and give free, informed consent to the risk imposition.73 Hence, if negotiation is wrong because it presupposes that compensation can sometimes be justifiable, then so are the foundations of much of our law, philosophy, and economics.

Another objection to negotiation is that it involves the unrealistic presupposition that all persons ought to give free, informed consent before risks are imposed on them. If society always adhered to this presupposition, an objector might claim, then many useful programs, such as public vaccination, could not be implemented.74 Arguing for universal vaccination, however, is not a case of attempting to justify denial of free, informed consent on grounds of expediency. The justification instead is the common good. Presumably, people do not have the right to refuse vaccination, because, if many do so, they will put other citizens at risk. Their right to free, informed consent ends where other persons' rights to bodily security and freedom from injury begin. Therefore, consent to technological risk is not analogous to consent to vaccination. Instead, the limits on one's free consent to vaccination are analogous, for example, to the limits on one's free consent to having a firearm taken away when it is likely that one will use it wrongly.75 And if so, then neither the vaccination nor the firearm case presents a counterexample to the requirement for consent.

Yet another objection to the negotiation scheme, with its attendant presuppositions about compensation, is that it would be both impossible and impractical to compensate persons for all the technological and environmental risks that they face.76 But even though compensation may be "impractical," to dismiss it as such when there are ethical grounds for requiring it may be unjust, unless the impracticality is so great as to approach impossibility. If rights to compensation were recognized only when it was convenient to do so, there would be no rights. It would not work, for example, to tell a victim of racial discrimination that compensation is impractical, since presumably she has a right to compensation. Likewise, it will not do to tell a potential victim of environmental risk that compensation and, therefore, her rights are "impractical."

Compensation is also not impractical, as earlier arguments in this chapter have suggested, if it is the one way of negotiating community acceptance of a societal risk. Moreover, although compensation is difficult, because there are many hazards, we can nevertheless begin negotiating about, and compensating for, the worst environmental risks. Finally, the practicality of compensation and negotiation is established by the fact that our courts and our city councils have already employed them effectively. Compensation has been used to achieve successful community projects, as the Wyoming, Tennessee, New York, and Washington cases (cited earlier in this chapter) illustrate. Hence, it is at least prima facie plausible to argue that we can simply extend the record of these successes by devising federal statutes for risk negotiation and compensation.

Adversary Assessment and Risk Management

Perhaps one of the most disturbing objections to the negotiation and compensation scheme outlined earlier is that it presupposes a benign and cooperative regulatory climate. Yet, if those who disagree about environmental risk will not cooperate, then negotiation will not work. As one legal expert puts it,

[F]irms are not likely to agree voluntarily to ambitious technology-forcing measures involving large capital outlays and substantial risks. Technology-forcing is a major aim of current regulatory statutes, and environmentalists would not abandon that objective willingly in consensus-based negotiations. . . . As long as great interests are at stake and the goals of the major actors are incompatible, which are common characteristics of environmental disputes, there is no reason to doubt that participants would manipulate negotiations and would pursue post-negotiation remedies whenever that behavior is privately advantageous.77



If either party to a negotiation sees that more is to be gained from formal legal proceedings or obstructionist tactics than from negotiation, that party will abandon negotiation. The only alternative then would be to use some sort of adversary proceeding. This proceeding would include many of the same guarantees already discussed (earlier in the chapter) in connection with negotiation: (1) It would involve funding citizens' groups, so as to ensure that all parties in the adversary process had equal bargaining power. (2) Consistent with the previous chapter, it would guarantee consideration of different hazard assessments and alternative points of view about risk imposition and management. Finally, (3) the process would be controlled by disinterested parties, not by a regulatory agency.

Since I have written at length elsewhere about adversary proceedings and the "public jury" as vehicles for resolving environmental controversies, I shall not repeat those arguments and objections here.78 Instead, I shall merely sketch some of the main ideas involved in the notion of adversary assessment. Both scientists and laypeople would take part in adversary proceedings. The procedure would involve scientific experts presenting different technical positions, and social scientists and humanists arguing for alternative evaluative points of view. The final risk decision would be left to some democratic determination, probably a representative process, rather than to experts.

The precedent for adversary assessment of societal risks has been set by a number of citizen panels throughout the country. These are composed almost entirely of laypeople, not scientists, and many of them are responsible, for example, for the formulation and enforcement of scientific research guidelines.79 City councils in Cambridge (Massachusetts), San Diego, and Ann Arbor, for instance, have taken a number of initiatives in forming such citizen boards. In Cambridge, the city council authorized its city manager to appoint a citizen review board to evaluate the safety procedures required by the U.S. National Institutes of Health (NIH) for recombinant DNA research. Both the city council and the city commissioner unanimously approved the recommendations of the citizen review board.80

Need for Procedural Reform of Risk Management

Adversary proceedings would provide several benefits over the current system of decisionmaking regarding societal risk. First, an adversary system would require that funding be given to all sides involved in a controversy. Second, the adversary proceedings would make consideration of alternative positions a requirement of democratic decisionmaking, rather than a luxury accessible only to those financially able to participate in administrative hearings or legal appeals. Third, unlike administrative and regulatory hearings, as well as negotiations, adversary procedures would be decisive. They would also be less likely to be co-opted by environmentalists or developers with vested interests, since they would not be dominated by a regulatory agency capable of exercising discretionary powers. Instead, they would be controlled by a group of citizens chosen because they had no apparent conflict of interest.

Admittedly, a few states and communities have had limited experience in using some of the procedural improvements in risk management suggested in this chapter (e.g., statutes requiring negotiation with, and compensation for, potential victims of societal risk). As a result, industry spokespersons have often alleged that these procedural reforms are not needed, or that they are already being accomplished. Both claims are false, as was argued in Chapter Eleven.

There are no federal statutes guaranteeing negotiation with, and compensation for, potential victims of all forms of societal risk. Currently, negotiation and compensation are not rights guaranteed by due process, but privileges accessible only to the moderately wealthy. In practice, these privileges are limited to those who are able to bear the transaction costs associated with adjudication through tort law. The whole point of this volume is that public consent to, and control over, risk evaluation and management are not the prerogatives of the rich, but the rights of all. They must therefore be protected by federal statute and administrative law, not subjected to the vagaries of circumstance.

Conclusion

Despite the alleged benefits of negotiation and adversary assessment, both procedures face many obstacles. For one thing, they are expensive (but perhaps not as expensive as the political unrest and loss of lives possibly resulting from erroneous societal decisions about environmental risk). Moreover, people may not be willing to pay the price, either for greater safety or for citizen negotiation. If they are not, they should not be forced to do so, as long as existing protections are equitably distributed.

Even if people would accept both negotiation and adversary assessment as vehicles for mitigating environmental risks, and even if they were willing to pay for them, however, there might still be objections. For one thing, I have not indicated specifically how negotiation and adversary assessment might work—for instance, who would take part in the process. Although I have addressed some of these particulars elsewhere,81 most of them are better left to policymakers and social scientists. My purpose here has been to sketch some of the reasons why both negotiation and adversary assessment seem to be prima facie plausible.

Another possible objection to mitigating risks via negotiation and adversary assessment might be that reducing hazards is technically unworkable, allegedly because there are no economical, safe, feasible alternatives to existing risky technologies. If we decide to avoid use of commercial nuclear fission, for example, we must develop less risky energy options that are workable. Admittedly, all my remarks are predicated on the supposition that less risky technologies are both possible and economical. This supposition needs to be defended, on a case-by-case basis, for each environmental hazard. Obviously, there is no space to do so here, although such a defense conceivably could be given.82

Even without a defense of the thesis that less risky technologies are feasible, hazard evaluation could be more rational than it is. If the arguments in this volume are correct, we need to reform our methods, our statutes, our procedures, and (most important) our philosophies for making decisions about risks. On the ethical side, we need to recognize that persons have rights to compensation, to informed consent, and to due process; therefore, they have rights to negotiate about, or perhaps even prohibit, the hazards others wish to impose on them.

On the epistemological side, we need to recognize that risk evaluation and management are irreducibly political (normative), in much the same way that quantum mechanics is irreducibly statistical (nondeterministic). In physics, we have come to realize that quantum-mechanical measurements "interfere" with the state of the system being measured. In applied science and environmental policy, we have been slower to realize that the human components of societal risk evaluation cannot be removed, even though they "interfere" with positivistic measures of risk.

This book has been a first step in suggesting how we might adopt a more democratic and procedural account of rationality, so as to reflect the human dimensions of hazard assessment and evaluation. It has argued that the public is frequently rational in its risk evaluations and that, even when laypersons are wrong about risk, they often have the right to be wrong. Even, and especially, victims have choices.

