
Appendix B—A Methodological Accounting

In this appendix I discuss the virtues and faults of my data sources, some justifications for a case study, and a few ethical dilemmas.


The method I used to find people to interview was akin to snowball sampling. As my first contact in Binghamton, Arnold Schecter suggested the first "key players" to talk to. Because key players are always embedded in networks of other key players, I was led to about two hundred telephone interviews and sixty-five face-to-face interviews, approximately twenty of which were return visits.[1] Although I interviewed people across a broad spectrum of status and organizational location, I deliberately skewed my sample to represent those who were in positions to make important decisions. For the most part, organizations were the crucial players in the Binghamton case, and it has been their decisions and accounts on which I have spent the most time.

I conducted three weeks of interviews in Binghamton and one week in Albany.[2] I also conducted interviews in New York City and Washington, D.C. I tried to arrange interviews with everyone who played a role in developing policies concerning the State Office Building and the exposed people, as well as with workers, bureaucrats, scientists, engineers, and technicians. In general, I met little resistance in Binghamton. Respondents in Albany, however, were considerably more hesitant and more suspicious of my motives. One DOH official accused me of being a journalist. State officials' reluctance is understandable: they faced potential legal liability, and the specter of Love Canal was fresh in their memories. Despite this overall reluctance, many state policymakers, and those close to them, granted me useful interviews.

One risk of interview data is the possibility that informants may not tell the truth, may distort facts, may be evasive, and in general may issue only "press-type information," to use the words of one of my respondents. This hazard confronts every field researcher (as it does every survey researcher). We invade people's personal and professional lives, ask them to divulge information they may not admit even to themselves, let alone to some stranger whose purpose is to analyze and publish what they say. The problem is pervasive in research such as that reported here, where lawsuits are on everyone's mind and people (especially elites) are regularly subjected to considerable embarrassment for saying things they must later retract ("The building will be open in two weeks"; "We will not do medical surveillance"). Indeed, this research challenge is endemic to all studies on organizations, because one of the things that bureaucracies do, after all, is keep outsiders from knowing where the skeletons are, by acting as if everything is rational and under control.

This last problem is particularly troublesome when one ventures beyond the boundaries of rational decision making. How is it possible to study anarchy and nonrationality when the basic research tool, the interview, demands that informants have good organizational sense and give rational accounts? Put another way, it is hard to imagine any of the Binghamton respondents saying something such as, "Well, no, we really didn't know what was going on, and, frankly, did not give enough thought to what exposed people might think, so we did the first thing that occurred to us." It is even harder to imagine that any respondent could say, in more academic prose:

Our preferences were not well ordered, and a wide range of alternatives was not embedded in our standard operating procedures and so did not receive much attention. In fact, often we did not seem to have any preferences at all, contrary to what rational theory predicts, because they frequently changed in response to environmental demands. At times we failed to scan our environments properly and so did not anticipate the reactions of other organizations. Our problem-solving processes were further distorted when we began to make trade-offs between organizational interests and public health.

The only effective way to study loosely coupled behavior is to compare actual behavior with what people say about their behavior, and then to compare both to predictions from available social theory. This strategy requires the frank admission that respondents can misrepresent important parts of their organizations and environments, thus raising the problem of deception.

Fortunately, deliberate deceptions are usually about facts ("I did not say that" or "We did not commit that act"), and if I was unable to verify a "fact" with another source, I did not use it. Fading memories and reconstructions of events that differ from what actually happened are two other serious problems. As often as possible I double-checked facts and interpretations. For example, if someone relayed something that was not verified by other witnesses, I usually went back to that respondent for further clarification. Although there is no way to be absolutely sure that bad memories and selective reinterpretations did not contaminate my data, I believe the data I gathered were of the highest quality possible.

A Possible Bias

The study began through my connection with Arnold Schecter. Because I had unrestricted access to him, some of my questions in early interviews were colored in ways that probably revealed our association to some of my respondents. For example, when I asked the county executive about points of contention between Schecter's lawyers and the county over the proposed consultant's contract, he could surmise only that, for me to have commanded so much detail, Schecter had to have been my source. I cannot estimate the extent to which people who knew of this association responded differently because of it. The most effective check on this bias is the skill of conveying impartiality that one develops in the field (see Douglas 1976).[3]

Interview Technology

The tape recorder as a data-gathering instrument is regarded with suspicion, the most common criticisms being that people are less likely to be open and that it dampens spontaneity, making responses less reliable. For reasons of efficiency, I nevertheless decided to try the recorder in face-to-face interviews, and I learned that tape does not seem to compromise the quality of responses. Generally, I found that politicians and those used to speaking in public did not mind the tape machine. Laypersons and scientists, however, were more reluctant to be recorded.

I developed another method that increased interviewing efficiency. After returning from my trips, I immediately transcribed the tapes and typed up notes from the interviews I did not record. I then made a cleaner version of the nonrecorded interview notes, editing them for readability, and sent them to respondents for further comments or additions. My cover letter said I was sending my notes in order to verify their accuracy, explained that there were some things I had missed in the interview or did not understand from my notes, and asked for clarification. If the interview had been recorded, I added a section at the end of the transcript that was more or less my interpretation of what the respondent had said.

This practice served several functions. First, for the unrecorded interviews, it allowed the respondent to correct any factual errors. Second, by including some interpretation in the interview notes, I engaged in some unobtrusive inquiry. There were always questions I had not been able to ask during the interview; sometimes, too, a question was so politically sensitive that to have asked it face-to-face would have jeopardized the interview. Thus, in the notes I sent to respondents for their perusal and correction, I included interpretations of what had been said that my informants had not offered themselves. Most of the time I received answers and clarifications that were clear and thorough; when an informant did not react to one of these implicit questions, I did not use it. Judging from the answers to my notes and questions, most respondents read the interview notes carefully. I always received a reply, usually with copious notes and useful additions. Respondents always reacted to the interpretive sections, sometimes simply with a shorthand indicating agreement (e.g., an "ok" in the margin), and sometimes going on to give their own perception of the problem—which, of course, was part of what I was trying to elicit in the face-to-face interview.


I relied on newspaper articles to reconstruct events and for quotations from officials. Using the mass media as a source is tantamount to using accounts of accounts and raises serious problems of validity.[4] I do not believe my use of this data source has biased this study. I used newspaper articles to reconstruct events mainly for the first few months after the accident. Even then, my reliance on the articles was for gaining leads into the kinds of issues that needed exploring and the kinds of questions I needed to ask.

Politicians often complain about the media. The lament they voice most often is that their words are taken out of context and used in ways they did not intend. This complaint, however, is not really about accuracy but about the meaning a journalist conveys by reporting facts and statements from different sources in the same article, thereby creating a context of meaning with which officials may disagree. Because the media, especially newspapers, were key actors in the Binghamton story, I usually asked my respondents their views of the mass media. Although officials often disagreed with a reporter's "slant" in an article, they usually testified that they had been quoted accurately. That is, they objected to the way their words had been used, but not to the reporting of what they had actually said. Moreover, after several interviews with each of the main reporters assigned to the SOB story (there were three or four from each of Binghamton's two papers), I developed strong confidence in their professional ethics and journalistic skills.

I subscribed to both of Binghamton's newspapers for about two years, which eventually provided a comprehensive collection of articles that I used to construct chronologies and assemble quotations. In addition, because of the scholarly nature of my research, the two newspapers granted me unrestricted access to their morgues.

Freedom of Information Requests

One of my most fruitful sources was state and federal freedom of information (FOI) requests.[5] One of the things bureaucracies do is produce files, and freedom of information requests give one (limited) access to those files. After the project had been under way for several months, one of my respondents referred to what he claimed was a fact. He also said his superiors would not be particularly happy if they knew it had been revealed. He told me that if I asked his superiors about the information, his identity would be apparent, and the revelation would probably cost him his job. He suggested, however, that a freedom of information request—crafted vaguely enough that the particular topic would not be revealed, yet targeted directly toward the area of interest—might confirm his story without implicating him. I thus submitted an FOI request to the public access officer at OGS for a copy of the contract between OGS and its consultant, VERSAR, Inc. I learned from the contract that VERSAR was required to submit "weekly progress reports" to OGS, and that VERSAR would be responsible for performing a series of formal risk assessments. I then submitted further requests for the weekly reports and risk assessments, which themselves suggested other documents of potential use. Another example of how FOI requests were helpful, this time with the state health department, was my use of bibliographies from DOH papers to find the titles of papers that had not been made public; I would then request these papers through FOI. In this way my collection of documents snowballed until, finally, the cost (usually 25 cents per page) prohibited further expansion of my file.

The Utility of Case Studies

Case studies, some hold, are the bane of organizational research (Campbell and Stanley 1966; Miles 1979; cf. Campbell 1975; Yin 1981). Case studies are cursed with a familiar list of faults: They cannot be used to generalize to a larger class of events (cf. Kruskal 1978; Kennedy 1979); they fail to provide enough controls, and therefore the most important factors cannot be isolated; and, worst of all, they lend an unchecked freedom of interpretation to the author. There are important qualifications, however, to this list of evils.

Any attempt to generalize to a class of events from a single case is doomed to fail, it is said, because the laws of probability do not permit extrapolation from an N of 1 (cf. Dukes 1965). This criticism of case studies is basically sound, but it should be qualified in two ways. First, case study researchers do indeed generalize to a larger class of events, but they do so with considerably less confidence than they would if their generalizations were supported by probability theory (Kruskal 1978). In the present case, I believe the processes reported here would probably be found in other cases in which several, or many, organizations interact in unexpected ways within an ambiguous environment. Of course, my confidence in this generalization is based primarily on sociological judgment and comparison with similar cases.

Second, case studies provide an opportunity to study social processes in depth. This advantage is not inherent in case study research, but concentrating one's research efforts on a single case does allow maximum investment of data-gathering resources (e.g., time). As Diesing (1971) and Mohr (1982) point out, two basic sets of epistemological assumptions underlie most social science research. The first and more explicit set of assumptions is found in variance research. In such research, variables are chosen in advance, and the investigator's interest is to specify determinate relationships among them. The specific value of variance research, and also its limitation, is that the researcher focuses on a handful of the aspects of a situation and ignores the rest (Diesing 1971, 269). The scientist's concepts and procedures are, more often than in case study research, clearly defined, usually before the research begins. Case studies usually sacrifice opportunities to examine specific relationships, mainly because they typically lack the kinds of controls one finds in variance research. All research lacks something, and the trade-off is usually between comprehensiveness and detail (see Weick 1979, chap. 2, for a dissection of this dilemma). Basically, there are two ways to handle this problem: (1) study a carefully delimited set of variables, or (2) study a little of everything in the hope of rendering a holistic account (Diesing 1971, 279). One of the greatest benefits offered by case studies is the opportunity to investigate the full context of a social situation rather than being confined to the variables to which one has access.[6]

A potentially more serious charge against case studies is that the absence of statistical or comparative controls bars one from saying anything conclusive about the topic under study. For example, I can quite logically claim that the process of accepting risk is more a process of defeating dissent than of creating consensus. But one could legitimately ask how I know that I have isolated the most important parts of the process of creating acceptable risk. Not by way of defense, but by way of putting this criticism into perspective, we should note that the only method that really solves this problem is the controlled, randomized experiment. Even observational studies based on large databases must confine their controls to the variables at hand rather than claiming that all possible influences have been canvassed; that is why there is always a substantial portion of unaccounted-for variance in statistical studies. Statistical studies resemble qualitative case studies in that both use an interpretive framework, or theory, to provide the mechanism for controlling for significant influences on observed outcomes. In statistical studies, however, theory suggests in advance which variables to measure for later use as controls, whereas in qualitative studies theory building is an inductive exercise, so variables are not conceptualized before data gathering begins. I would add that most case studies do have controls for the myriad factors that could produce an outcome, but the tools of control are alternative ways of explaining observations rather than other observations (Allison 1971; Davis 1974).

The final indictment against case studies—that they grant the researcher too much interpretive freedom—is a criticism against which I have few defenses. The temptation to make the data fit the favored interpretation always exists, and case studies lack the kinds of controls that confer great confidence that an accidental confluence of events has not produced the outcome one is trying to explain. This does not mean, I think, that the researcher is free to impose any interpretation on the data, because standards of logic will render some interpretations more plausible than others. Nevertheless, case studies are still plagued by the problem of too much interpretive freedom, at least until enough cases have been accumulated to provide analytic constraints on that freedom.

Case studies, with their nagging problems, annoying limitations, and egregious but necessary sins against standards of rigor that most researchers endeavor to reach, are counterbalanced by the saving graces of holistic research and by opportunities to suggest hypotheses and new directions for research. Ideally, if not always practically, case studies are useful stepping-stones to questions that can be answered with more explicitly operationalized concepts and specification of relationships among clearly delimited variables.


Ethical Dilemmas

It is hard to imagine a project requiring field research methods that would not present the researcher with ethical dilemmas. Ethical conflicts seem to me inherent in research in which one intrudes into people's lives to ask that they reveal things that may embarrass either themselves or someone else (often their boss). If conflict is endemic in society and the researcher's aim is to investigate the various meanings of conflict among people who occupy different social locations, then differences of opinion are bound to test the integrity of the researcher. Lofty values of respect, fairness, and rights to privacy run headlong into the pragmatic demands of probing, teasing, and cajoling the truth out of people. Indeed, these values are immediately and inevitably compromised as soon as researchers assume they have the right to investigate how other people live their lives. The profession of sociology, moreover, does not give much guidance on how to resolve problems of professional ethics. Searching for how other authors solved ethical dilemmas provides little help, because there is a great deal of variation in what is considered ethical. Another potential source of wisdom is the network of sociologists in which the researcher is located, but consulting colleagues about problems of ethics, although useful in many ways, is inevitably frustrating, because even experienced field workers expound the same admonitions found in the chapters on ethics and values in "Introduction to Methods" textbooks. In the end, one is left with what one began with: one's own judgment about reconciling the demands of thorough research with the rights of those being researched. I made three judgment calls in the course of this study that might have had important effects on either the State Office Building story or the way I have told it.

First, although in most interview research the promise of confidentiality is sacrosanct, many of my informants' names can be discerned from the text. My selective use of confidentiality is tied to the notion of informed consent, so I begin with how I handled that issue. Because no deception was involved in the study, and because the formality of signing a document would have been a bad way to begin interviews, I petitioned my university's committee on human subjects to modify the usual procedure. I took two documents to interviews, one the formal consent form, the other a simple letter introducing myself and my purpose. I gave all respondents the letter, explained what it was, and then told them I had the formal consent form if they wished to use it. In this way a potential boundary to developing good rapport (signing ceremonies) was turned into a mechanism of creating trust. No one asked for the formal consent form, but most smiled in appreciation when I handed them the letter of introduction.

I included the following sentence in both the letter and the consent form: "Any information you may provide will be respected as confidential, according to your wishes, since information and candid responses are more important to the researcher than individuals' names." The phrase "according to your wishes" was constructed so that my informants knew confidentiality would surely follow a direct request for it. The SOB story would have been much duller to tell and to read if I could not directly represent the people who were living with those fascinating problems. Moreover, disguising many of the respondents would have been impossible and absurd. The most important function of confidentiality is to prevent those who know the subject from finding out what the subject says, thinks, or does. But even if I had disguised the players, the people in Binghamton and Albany would still know whom I was writing about. Thus, unless I was asked for confidentiality, or unless a quotation might conceivably place the respondent in jeopardy, I usually tied quotations to the people who uttered them.

The second and third judgment calls resulted from the same source. Soon after Kathleen Gaffney replaced Arnold Schecter as commissioner of the Broome County Health Department, I traveled to Binghamton for some interviews. Gaffney and I were talking about the number of experts to whom Schecter had written about the contaminated parking garage, and she asked, "Would you like to photocopy the health department's 'Binghamton file'?" She then secured the county executive's approval, and I gratefully accepted the offer. The plethora of documents ran the gamut from Gaffney's personal notes on meetings she had attended to correspondence labeled "confidential."

The second ethical issue was that, through this windfall of documents, I had access to a list of most of the people who had been exposed inside and outside the SOB, each identified by his or her organizational affiliation. Several weeks earlier I had interviewed a health officer for the Civil Service Employees Association (CSEA), who explained that CSEA would like to keep records on its exposed members but could not get the state health department to release the names. The CSEA official assured me that if I ever obtained such a list, a copy would be put to good use. I also wanted to interview some of the people on the list, but these people had not agreed that their identity could be revealed to me. I found no professional guidelines for either of these situations, but I neither sent the list to CSEA nor contacted the workers.[7]


Finally, I had to resolve the dilemma of whether or not to quote from the many letters for which I lacked the authors' informed consent. I consulted some historians and historical sociologists about what is considered proper in such situations. Most of them had never faced such a problem, because most historical research deals with dead people. But, they said, the usual rule is that if one cannot get both the writer and the receiver of some letter or memorandum to grant permission to use it, the only thing to do is to wait until they die. For me, this was too stringent a requirement, because I researched individuals who might live for another forty years.

I decided I would not use Gaffney's personal notes but I would use official correspondence. When officials wrote these letters and memoranda, they did so in their capacities as public servants, presumably acting in the public interest. I therefore decided that it was not unreasonable to try to decipher how they interpreted what the public interest was. Moreover, institutionalized controls on professional ethics, such as informed consent and peer review of projects involving human subjects, were instituted to protect those who might otherwise lack the power to protect themselves. Thus, I found myself having to decide which groups would receive the most protection. This is not to suggest that people who occupy positions of power are without rights of their own, but I felt that not using the documents would be stretching the intent behind our attempts to define professional ethics.

