Chapter 5
Points of Departure
Targeting a Retrovirus (1984–1986)
April 24, 1984: A triumphant Margaret Heckler, secretary of health and human services, announced to the expectant reporters assembled at a Washington, D.C., press conference that the cause of AIDS had been found. This report of a virus linked to AIDS ushered in a new wave of debates—about who should receive credit for the discovery of the putative causal agent, which practices were most responsible for its transmission, and whether a retroviral causation was indeed sufficiently proven. But the general acceptance of the retroviral hypothesis of AIDS causation had still other implications that were both immediate and far reaching. Up until that point, medical treatment of people with AIDS had been aimed at controlling, as well as possible, the opportunistic infections and cancers that progressively devastated the bodies of immune-suppressed individuals. These were stopgap measures, at best—not only because many of the opportunistic diseases were difficult to treat, but also because each infection that subsided would generally be replaced by yet another. Lacking an understanding of the fundamental causes of immunosuppression, biomedical science had little hope of reversing the downward course of illness.
The discovery of Luc Montagnier's "LAV," Robert Gallo's "HTLV-III," and Jay Levy's "ARV" instantly changed the scientific agenda for AIDS research. Suddenly it became possible to use a new vocabulary,
one with words like "cure" and "vaccine." Perhaps the most extreme reactions came from politicians with a vested interest in promoting a triumphalist (and nationalist) account of scientific progress. A blood test for the virus would be available in a few months and a vaccine to prevent AIDS would be developed and ready for testing in about two years, announced Secretary Heckler, to the visible discomfort of some of the prominent scientists with her on the podium.[1]
In fact, there were no insurmountable obstacles to the development of the blood test for antibodies to HTLV-III, which was licensed by the Food and Drug Administration (FDA) in just under a year. But those more familiar with the inherent difficulties of vaccine research knew that scientists had succeeded in designing reasonably effective prophylactic vaccines against only a dozen viral illnesses. The most recent such vaccine, against hepatitis B, had taken most of a decade to bring to market. Dr. Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases (NIAID) of the National Institutes of Health (NIH), quickly sought to dispel illusions and dampen inflated expectations. "To be perfectly honest," he told the New York Times a few days after the press conference, "we don't have any idea how long it's going to take to develop a vaccine, if indeed we will be able to develop a vaccine."[2]
A similar degree of uncertainty combined with high hopes surrounded the investigation of treatments for those already suffering from AIDS. The public and the media spoke of "cures," a term which conjured up images of a penicillin-like drug that would quickly and efficiently rid the body of the invading microorganism. But unlike the bacteria and fungi that antibiotics treat, viruses—from the common ones, like the cold virus, to the rare and deadly ones, like Ebola—have seldom proven amenable to medical intervention. Viruses insinuate themselves into cellular DNA—the genetic code in the cell's nucleus—transforming infected body cells into factories for the production of more virus. To rid the body of a virus, therefore, requires eliminating every infected body cell without killing uninfected body cells—and scientists in 1984 had little to no idea about how such a task might be accomplished with HTLV-III. Indeed, at the time that the virus was discovered, only three antiviral agents of any kind were licensed for use in the United States, and none of them was entirely effective: amantadine, a drug used against influenza A; vidarabine, which was used against various viral infections of the eye; and acyclovir, a drug used for treating the herpes simplex virus.[3]
The Logic of Treatment
The search for a treatment against an infectious agent can proceed according to a clear theoretical logic or a hit-and-miss pragmatism. At one extreme, researchers may use their knowledge of the pathogen and the disease process to synthesize a novel compound that will target the pathogen or interrupt the pathogenesis (the development of disease). If the newly synthesized drug acts against the infectious agent in vitro and proves to be not too toxic in animal testing, then it can be tried in humans to see if the theoretically predicted effect is observable in practice. At the other extreme, researchers may simply "see what works" by taking existing drugs whose potential efficacy seems plausible (for example, drugs with known antiviral activity) and adding them to a test tube containing the infectious agent. Such evidence of activity of a drug against a pathogen in vitro is no guarantee, to be sure, that the drug will have any effect on a disease process inside of a living human being or that the human being will be able to tolerate the drug. But it is a good way of screening for promising therapeutic agents.
Since it takes time to synthesize new compounds, and since biomedical researchers knew little about the structure, properties, or life cycle of what would come to be called the human immunodeficiency virus (HIV), hit-and-miss pragmatism was the more likely pathway to quick results. But researchers did possess one crucial fact from the outset that could guide them in the selection of likely agents for testing: they believed they were dealing with a retrovirus, composed not of DNA, like most viruses, but of RNA. An ordinary DNA virus enters the nucleus of an infected cell and causes the cell to carry out the genetic instructions encoded in the virus's DNA; it transcribes its DNA into RNA, which is then assembled into proteins that form a new virus. But before a retrovirus can integrate itself into the nucleus of an infected cell and replicate, it first has to convert its RNA into DNA—to rewrite its own genetic code "backwards" in a process called reverse transcription.[4] To complete that process, the virus relies on an enzyme it produces, called reverse transcriptase; this enzyme, in other words, is absolutely essential to the process of viral replication. Inhibit the reverse transcriptase and you inhibit the viral spread: You don't "cure" the patient in the sense of ridding the body of already infected cells and restoring a functioning immune system, but you do—at least in theory—prevent the virus from going on to infect new cells. This
treatment strategy made particular sense if you assumed, as many researchers did, a straightforward model of how HIV caused immune system damage: if AIDS was the long-term result of HIV's direct cytopathic (cell-killing) effects on helper T cells (also called CD4 cells), then stopping the virus in its tracks should prevent the virus from killing more such cells, thereby keeping the immune system from deteriorating further.[5]
It didn't take long for both National Cancer Institute (NCI) scientists and those connected with the Pasteur Institute in France to pursue this promising lead. In October 1984, a group of NCI researchers including Gallo and Samuel Broder, the head of the NCI's clinical oncology program, published an article in Science describing their in vitro studies with a drug called suramin, which was "known to inhibit the reverse transcriptase of a number of retroviruses."[6] Suramin had been developed by the Bayer Company in Germany more than half a century earlier, and, though never licensed in the United States, it had been used extensively in Africa and South America for the treatment of certain parasitic diseases. The NCI researchers found that when suramin was added to HTLV-III in the test tube, the virus became incapable of infecting and killing helper T cells.
Meanwhile, collaborators of Montagnier in France, having made similar assumptions about the logic of treating AIDS, began giving a compound directly to patients—antimoniotungstate, or HPA-23, which was known to incapacitate the reverse transcriptase of certain retroviruses that infect mice.[7] By February 1985, they could offer a brief report in Lancet on a fifteen-day course of treatment with four patients. Comparing the before-and-after assays showed that the drug regimen appeared successful in curtailing the replication of LAV.
But the French researchers also had some words of caution against assuming any easy successes in the fight against AIDS. Even though "infection with LAV seems to be an essential step in the pathogenesis of AIDS," nonetheless a drug that acted against the virus "may not be able to cure the disease." This might be because antiviral therapy came too late, after the virus had already done irreparable damage to the immune system. Or it might be that the pathways from infection to development of AIDS involved more than just direct cell-killing. Perhaps LAV infection instigated "autoimmune mechanisms"—failures of the immune system to distinguish between body cells and invaders, leading the immune system to turn on itself and target other immune cells. In that case, "AIDS could prove to be self-perpetuating even in
the face of inhibition of LAV multiplication."[8] Even if a compound like HPA-23 were eventually proven to be a safe and effective antiviral agent—even if it could be administered to humans in a clinically effective dose, even if its side effects proved tolerable, and even if the initial, promising results could be confirmed in controlled clinical trials with large numbers of patients—it still might not be sufficient to keep AIDS patients alive. "An antiviral won't be the miracle, but it will be absolutely obligatory," Jean Claude Chermann, one of the study coauthors and one of the discoverers of LAV, told Newsweek in April.[9]
That month, more than two thousand researchers from thirty countries converged on Atlanta to attend the first of what would become an annual milestone: the International Conference on AIDS.[10] Researchers in the United States and Europe reported at the conference that testing had begun, or was about to begin, with six drugs in small numbers of patients. These drugs, all of which had been found to have some inhibitory effect on the virus in the test tube, included suramin and HPA-23, as well as ribavirin, an antiviral drug made by a small Southern California pharmaceutical company.
Such studies were just the initial step on the long, uphill path to the marketing of a new drug in the United States.[11] With the passage of the Pure Food and Drug Act in 1906 and the Food, Drug, and Cosmetic Act in 1938, the FDA had been empowered to require that drug manufacturers submit evidence from "adequate tests" showing that a drug was safe, before it could be licensed for sale. Since safety was a relative term, the FDA was expected to assess risks and benefits, which implied making some additional determination of whether the drug was indeed effective. In 1962, in response to public uproar after the drug thalidomide was found to cause birth defects, Congress passed an amendment to the Food, Drug, and Cosmetic Act called the Kefauver-Harris amendment. (Thalidomide had never been licensed in the United States, but some pregnant women participating in studies had received it.) Although the issue with thalidomide was one of safety, the effect of the Kefauver-Harris amendment was to shift the emphasis of drug regulation more heavily in the direction of requiring formal, scientific proof of efficacy.[12]
As Harry Marks has described, by the early 1970s, "with the growth in influence of the National Institutes of Health and the rise of biostatistics as a distinct discipline …, the nature and methods of drug evaluation had achieved a form of scientific and bureaucratic orthodoxy."[13] Usually, the FDA asked for evidence from at least three
"phases" of randomized clinical trials in human subjects performed sequentially: a small Phase I trial to study the drug's toxicity and determine a good dosage for drug absorption; a larger, longer Phase II trial to test the drug's efficacy; and a still larger Phase III trial to bolster the evidence of efficacy in comparison with other treatments for the condition. Each of these studies required planning, recruiting of subjects, careful monitoring, and interpretation and write-up; and the FDA often took its own good time reaching its conclusions, which it based on recommendations from expert advisory panels. Typically, it might take a drug six or eight years to leap the regulatory hurdles. Critics pointed to the paltry number of drugs that made it to market in the post-Kefauver-Harris environment and argued that U.S. standards were unjustifiably higher than those of other countries. Consumer protectionists responded that U.S. standards were appropriately high, since many countries around the world couldn't afford to perform elaborate drug tests and therefore relied on the FDA to determine what was safe and effective.
Just like medications, any potential vaccine against AIDS would have to pass through extensive testing that included various phases of clinical trials, before the FDA licensed its use. But as reports at the International Conference made clear, even Phase I trials were still a long way off. Researchers had, however, begun identifying "subunits" of the virus that might serve to generate a protective immune response. Using parts of the virus, it was generally assumed, was a safer strategy than using the whole virus: researchers could induce an immune response without having to worry about the risk of accidentally infecting the healthy vaccine recipient. One problem, however, was the recent discovery—Gallo called it "worrisome"—of considerable genetic variation among different strains of the virus.[14] This raised the question of whether any particular subunit could generate protection against every strain. "We have a long way to go before AIDS is preventable or treatable," Dr. Martin Hirsch of Massachusetts General Hospital concluded in reviewing the conference, "but the first steps have been taken, and we are on our way."[15]
The Genesis of Treatment Activism
Observers in gay and lesbian communities had other, more critical perceptions of the International Conference and the depth of the scientific and political commitment to finding treatments for AIDS. Secretary Heckler's statement at the conference regarding
the nation's priorities was widely reported—that AIDS must be stopped "before it spreads to the heterosexual community." Commentators familiar with other scientific conferences observed some distinctive aspects of this one: "The meeting was unusual for the remarkable mixture of participants—doctors and scientists of almost every discipline rubbing elbows with gay activists and media personalities," said the newsletter of the Bay Area Physicians for Human Rights, the gay doctors' group: "The unlikely combinations led to comments about 'strange bedfellows,' but there is no proof of the reality of that phrase."[16]
Moments of levity notwithstanding, this was a threatening time for gay communities. With the availability of the HTLV-III antibody test, many would soon be learning for the first time that they were infected with the virus and faced with an uncertain future. At the same time, as the epidemic became more of a mainstream issue in the United States following reports of actor Rock Hudson's AIDS illness, fears of contagion on the part of the mass public multiplied, leading in many instances to stigmatization of homosexuals, whether healthy or ill. Gay rights and AIDS advocacy organizations feared that those testing positive for viral antibodies would be subject to discrimination, including loss of their jobs, housing, health insurance, and anonymity. In March 1985, the conservative commentator William F. Buckley Jr. proposed, in a notorious New York Times op-ed piece, that "everyone detected with AIDS should be tattooed in the upper forearm to protect common-needle users, and on the buttocks, to prevent the victimization of other homosexuals. …"[17]
The activist response to AIDS by gays and lesbians dated to the earliest days of the epidemic (see chapter 1). It rested on the firm base of gay rights activism constructed in the previous decade, with its sex-positive ethic and its suspicious take on medical claims.[18] Now, in response to the new wave of provocations, many who had kept themselves at arm's length from such activism suddenly found themselves drawn into the fray. For a generation of relatively privileged, middle-class gay men, government had been something to restrict, to keep out of their "private" lives. As the boundary between private illness and public health exploded, these same men sought active governmental involvement to fund emergency AIDS research and to protect people with AIDS against discriminatory treatment.[19] However, such assistance was far from the top of the agenda of the Reagan administration, which consistently requested modest funds for AIDS research only to see Congress boost the amounts on its own initiative. Lesbians,
often radicalized by feminism in general and influenced by the feminist health movement of the 1970s in particular, also mobilized in increasing numbers, frequently assuming leadership roles in AIDS struggles.[20]
While the mainstream national gay rights organizations focused on issues of discrimination and budget appropriations, new voices emerged on the horizon. People with AIDS and their supporters discovered in early 1985 that ribavirin, one of the experimental drugs reported to inhibit reverse transcriptase, was available for two dollars a box in the farmacias of Mexico's border towns. Soon a steady stream of couriers were running shipments of ribavirin, along with an unapproved immune-boosting drug called isoprinosine, past U.S. customs and from there to AIDS patients all over the United States.[21]
Elsewhere, wealthy gay men with connections found other pathways to therapies reported to have potential benefit. "There are some Americans in Paris these days who are not so much interested in abstract art or avant-garde literature as they are in saving their own lives," wrote Newsweek in August, a week after Rock Hudson became the most prominent "AIDS exile" to seek treatment with HPA-23.[22] Embarrassed by stories of the "AIDS exiles," the FDA announced that it would permit the administration of HPA-23, along with the other antiviral AIDS drugs that had entered testing, on a "compassionate use" basis—a long-standing FDA mechanism for releasing experimental drugs on a case-by-case basis when requested by physicians for their terminally ill patients, in situations where no standard therapy is available. But the FDA spokesperson struggled to explain that the decision to permit compassionate use was in no way meant to suggest that HPA-23 actually worked. "There is no proven treatment for AIDS yet," he emphasized. "Everyone is assuming that this is a panacea, and there is none."[23] The French, meanwhile, had been forced to discontinue HPA-23 in some patients because of its toxic effects on the blood and the liver.[24]
The availability of drugs in other countries, however, only inclined the new AIDS activists to press for easier access by U.S. patients to a range of experimental compounds. Martin Delaney, at this time a Bay Area business consultant, former seminary student, and current ribavirin "smuggler," emerged as a key voice in these debates. "We don't know for sure how these drugs will work," Delaney told a community forum in the Castro district, the heart of San Francisco's gay community. "But it makes more sense than the next best thing, which is dying without trying anything." In October, Delaney held a press conference
to announce the opening of a new organization, Project Inform, which would conduct studies to determine the benefits of experimental drugs being used in the community, like ribavirin and isoprinosine. "No matter what the medical authorities say, people are using these drugs," Delaney told reporters skeptical of the idea of community-based research. "What we want to do is provide a safe, monitored environment to learn what effects they are having."[25]
Some years back, Delaney himself had participated in an experimental trial of a drug to treat chronic hepatitis. The drug had cured his hepatitis but left him with permanent damage to the nerves in his feet. Delaney considered it a fair bargain; but the drug was thought too toxic, the trial was terminated, and the treatment never approved. It was an experience that would color Delaney's response to the AIDS epidemic.[26] Who should decide what risks a patient can assume—the doctor or the patient?
Rights, Risks, and Ethics
The extensive literature on the ethics of clinical research[27] reflects considerable emphasis on protection of human subjects in biomedical experimentation. This, however, is a rather recent development that has paralleled the rise in importance of the randomized clinical trial both in biomedical fact-making and in regulatory decision making.[28] As historian David Rothman has described, the pivotal moment occurred in 1966 with the New England Journal of Medicine's publication of a whistle-blowing review article by Henry Beecher, replete with disturbing, recent examples of unethical and potentially harmful experimental research. Beecher catalogued incidents of "investigators who had risked 'the health or the life of their subjects' without informing them of the dangers or obtaining their permission"—for example, withholding penicillin from servicemen with streptococcal infections as part of a study of an alternative therapy.[29] These revelations were followed a few years later by public outrage and congressional hearings in response to news media disclosures about the Tuskegee syphilis study, conducted openly for decades under the auspices of the U.S. Public Health Service, in which hundreds of poor, black sharecroppers were denied existing treatment so that researchers could study the "natural history" of the disease.[30] In 1974 Congress created the National Commission for the Protection of Human Subjects, which issued guidelines on research. In addition, the
NIH began requiring that each research center seeking federal funds for biomedical research on human subjects establish an "institutional review board" to evaluate the ethics of each proposed research "protocol" (the plan for the study).[31]
As Rothman and Harold Edgar have noted, the irony in these protective measures and in the new regulatory regime at the FDA was that they ran counter to the egalitarian and libertarian trends of the 1960s and 1970s in general and to the critique of paternalistic medicine in particular. "Just when patients secured greater autonomy—the right to know a diagnosis, to accept or refuse treatment—the experts at the FDA and review boards controlled the right to regulate new drugs and research protocols."[32] Soon AIDS patients and their advocates began rebelling against what they saw as well-intentioned but deadly paternalism. Activists like Delaney would press the demand for greater patient autonomy by challenging medical authority from two directions at once. On one hand, they would insist that patients interested in trying experimental drugs should have the right to assume risks rather than endure the benevolent protection of the authorities. On the other hand, they would criticize certain approved and accepted research methods, like trials in which some patients received placebos, characterizing them as unethical for subjecting patients to unfair risks that the patients did not want to assume.
The State of the Art, 1985
As virologists and molecular biologists learned more about the life cycle of the virus, researchers began to speculate about other ways of halting its replication, besides interfering with reverse transcription. NCI researchers analyzed the different points of attack in an article published in September in Cancer Research.[33] First, in order to infect a cell and begin replicating, the outer proteins of the virus (called the "envelope") had to bind to the surface of the cell. Perhaps this binding could be blocked through the use of antibodies; but since most AIDS patients produced antibodies to HTLV-III and became ill nonetheless, it might be that such antibodies were insufficiently protective. Second, after binding to the cell surface, the virus "enters the target cell by an as yet unknown mechanism." If this mechanism could be identified, perhaps entry could be blocked. Third, after reverse transcription and integration of the viral DNA into the nucleus of the host cell, the virus proceeded to manufacture new viral proteins. The authors noted that this transcription process appeared to be
boosted by a protein, the product of a recently discovered viral gene called tat (for "transactivation"), a gene not found in other known retroviruses. A drug that interfered with this protein might also be an effective antiviral agent. Finally, the new viral proteins were processed and assembled into a fully formed new virus, which was released from the cell by budding. "Our knowledge of these steps is rudimentary at best," the NCI researchers acknowledged, though "interferons have been shown to inhibit the release of other retroviruses. …"
While this was all very nice in theory, the NCI researchers concluded that the best immediate bet remained the reverse transcriptase inhibitors, like suramin. Unfortunately, the early reports on suramin, based on a small Phase I toxicity study by NCI and NIAID, were proving to be mixed at best. The drug did seem to reduce viral replication in vivo as it had in vitro. But "it did not produce clinical or immunological improvement with the regimen used."[34] A larger, Phase II trial would be needed to find out more about the efficacy of the drug. But the concerns about suramin were quickly confirmed a few months into the Phase II study. The drug was far too toxic: it appeared to have caused adrenal failure in several patients and may have hastened some patients' deaths.[35] Later, some treatment activists would claim that the study had been poorly monitored and had subjected its participants to needless risk.[36] The principal investigators, on the other hand, would offer the suramin study both as a cautionary tale and "as an example of how a clinical trial should be conducted": "Trials such as this one … prevent potentially harmful drugs from being distributed to large numbers of patients in the community."[37]
Those skeptical about the viral hypothesis (see part one) interpreted the ongoing difficulties with treatment research as evidence of the inadequacies of the reigning causal models. "If we have agents that effectively inhibit the replication of this virus," said New York physician Dr. Joseph Sonnabend in a New York Native interview in October 1985, "but [those agents] make no impact on the course of this disease, I think it will make apparent, for some people, the actual role of HTLV-III in causing this disease."[38] But research and media attention continued to focus on antiretroviral agents, and in early 1986, NCI researchers found themselves with a potential success on their hands.
"Waiting for the Right Disease"
Samuel Broder had not been putting all his eggs in the suramin basket. In late 1984 he had put out the
word to the big pharmaceutical companies (the ones he considered capable of quickly bringing a drug to market): Send us anything you have on the shelf that might inhibit a retrovirus, and we'll do the assay to see if it halts replication of HTLV-III.[39] Burroughs Wellcome, the North Carolina-based subsidiary of a large British firm called Wellcome PLC, submitted ten compounds, and in February 1985 one of Broder's researchers, Hiroaki Mitsuya, found that one of the compounds was a reverse transcriptase inhibitor with strong antiviral activity: azidothymidine, called 3′-azido-3′-deoxythymidine in full or just AZT for short.
AZT had a peculiar history. In the early 1960s, a researcher named Jerome Horwitz at the Michigan Cancer Foundation decided to design a drug that would keep cancer cells from duplicating. With funding from the NCI, and working with such unlikely ingredients as herring sperm, Horwitz and his coworkers synthesized a group of compounds called dideoxythymidines that were designed to look like nucleosides, the building blocks of DNA. In theory, these "nucleoside analogues" would substitute themselves for real nucleosides, thereby interfering with formation of DNA molecules. Without more DNA, the cancer cells would simply stop duplicating. In practice, the treatment was a complete failure. Horwitz gave AZT and the other dideoxythymidines to mice with leukemia, but the drugs showed no effect. "My colleagues and I said that we had a very interesting set of compounds that were waiting for the right disease."[40]
Burroughs Wellcome had tested AZT against animal viruses but had dropped this line of inquiry since it was unrewarding. Now, after getting the good news about AZT from Broder, Burroughs Wellcome filed an "IND" (investigational new drug application) with the FDA. Phase I trials began in July 1985 with nineteen U.S. AIDS patients, under the auspices of the NCI and in collaboration with Duke University. Mitsuya announced the results of the six-week study on the last day of an AIDS conference the following January: AZT kept the virus from replicating in fifteen of the nineteen research subjects, boosting their immune systems (as measured by their T-cell counts) and relieving some of their symptoms. "It's not a dream drug," Mitsuya explained in a television interview, stressing the need for additional testing.[41]
The formal publication of the study in Lancet in March 1986 spelled out more of the details.[42] Researchers recently had discovered that AIDS was frequently accompanied by neurological impairments,
which indicated that the virus was also affecting cells in the brain. An effective therapy, therefore, would have to be capable of crossing a circulatory system defense called the "blood-brain barrier," a feat that many drugs could not accomplish. Fortunately, AZT did appear to cross the blood-brain barrier. In addition, though some subjects had experienced headaches or had developed low white cell counts, the drug could be tolerated relatively well. This was a relief, because AZT "might have been expected to produce intolerable side effects" (in the words of Jean Marx, the reporter for Science who described the trial), given the mechanism of drug action.[43] AZT "fooled" the reverse transcriptase enzyme into using it, in place of the nucleoside it imitated, when transcribing the virus's RNA to DNA. Then, once AZT was added to the growing DNA chain, AZT's structure prevented any additional nucleosides from being added on: reverse transcription simply came to a halt at that point, and the virus stopped replicating. But the problem was that since AZT terminated DNA synthesis, one might logically anticipate that it would have harmful effects on the DNA in healthy cells.
Having shown initial evidence of relative drug safety, the researchers had accomplished the formal objectives of a Phase I trial. But there was nothing to prevent them from reporting the apparent good news about efficacy—the news that attracted media attention. "The results also suggest that at least some immunological reconstitution occurred in most of the patients …, and that a clinical response was obtained in some." However, these findings had to be treated cautiously, since simply being in a trial "may have a strong placebo effect in influencing such factors as appetite and sense of well-being, and it is even possible that improved nutrition may then induce changes in immune function." Only the next step—a so-called "placebo-controlled" Phase II study, conducted in "double-blind" fashion so that neither the subjects nor the researchers would know who was receiving AZT and who was receiving a placebo (a look-alike dummy pill)—could determine whether the observed clinical improvements were truly due to the drug. (This study would be funded by Burroughs Wellcome and conducted at a number of academic centers, including the University of Miami, where it was under the direction of Dr. Margaret Fischl, and the University of California at San Diego, under Dr. Douglas Richman.) The NCI researchers concluded with a summary of the questions that remained to be answered: "We cannot say whether AZT can be tolerated over a long time, whether viral drug resistance will
develop, or ultimately whether AZT will affect disease progression or survival in patients with HTLV-III-induced disease. These are issues which can be resolved only by appropriately controlled long-term studies."[44]
Clinical Trials Take Center Stage (1986–1987)
Becoming Experts
"The general public, and even most AIDS organizations and activists, do not yet realize that we already have an effective, inexpensive, and probably safe treatment for AIDS." This was the characterization of AZT offered by John James, editor and publisher of a San Francisco-based newsletter called AIDS Treatment News (ATN), in its third issue, published in May 1986. Yet large-scale studies of AZT, James reported, were still several months away from starting, and "if all goes well, your doctor might be able to get AZT in about two years." This was hardly an acceptable time frame in James's view, and he offered a simple justification for his position: "We should point out that ten thousand people are expected to die of AIDS in the next year. And with deaths doubling every year, a little math shows that a two-year delay between when a treatment is known to work and when it becomes available means that three quarters of the deaths which ever occur from the epidemic will have been preventable."[45]
Of course, James's grim logic hinged on the meaning of the deceptively straightforward phrase "when a treatment is known to work": what do we mean by "known," and what do we mean by "work"? As far as James was concerned, though the effects of long-term use indeed remained unknown, it was clear that AZT did something. But from the standpoint of researchers—at least when speaking in their official capacities—it was precisely the point of the next round of testing to determine whether AZT in fact worked. Furthermore, as James himself fully realized, the sort of evidence that impressed him would not get Burroughs Wellcome past the front door of the FDA.
A former computer programmer with no formal medical or scientific training, James had just launched what would become the most prominent grassroots AIDS treatment publication in the United States.[46] ATN rapidly emerged as a crucial resource for doctors and patients alike; within a year, it had a circulation of thirty-five hundred.[47]
The newsletter provided the latest inside word on the up-and-coming drugs as well as the alternative therapies that didn't make it into formal clinical trials. It would go on to engage as well in a sweeping and detailed critique of the federal drug-testing and regulatory enterprise. As Debbie Indyk and David Rier have noted in a study of grassroots AIDS publications like ATN, such organs of communication effectively "[circumvent] the assessment of gatekeepers such as [medical journal] reviewers and editors. …" The result has been not only that more, and more varied, material has made it "through the net," but that "researchers, clinicians, and patients often confront new data almost simultaneously—sometimes, patients even see it first. …"[48]
In those early days, James had another pressing goal—to convince other activists and AIDS organizations that a new task confronted them. Already they had become experts about prevention strategies, antibody testing, antidiscrimination legislation, and the health care delivery system. Now it was time to learn a new set of tricks. "So far, community-based AIDS organizations have been uninvolved in treatment issues, and have seldom followed what is going on," wrote James in a call to arms in that same May 1986 issue. "With independent information and analysis, we can bring specific pressure to bear to get experimental treatments handled properly. So far, there has been little pressure because we have relied on experts to interpret for us what is going on. They tell us what will not rock the boat. The companies who want their profits, the bureaucrats who want their turf, and the doctors who want to avoid making waves have all been at the table. The persons with AIDS who want their lives must be there, too [emphasis added]." To "rely solely on official institutions for our information," James bluntly advised, "is a form of group suicide."[49]
Over the following months, James elaborated this strategy for engagement with clinical research. It wasn't that the researchers conducting AIDS clinical trials were evil or incompetent, but that they were "too close to their own specialties and overly dependent on the continued good graces of funding sources" to be capable of generating or publicly communicating an objective assessment of treatment research issues. On the other hand, these researchers were crucial sources of data: "Physicians and scientists already have pieces of the information, and they need someone they can talk to who can put the pieces together and let people know what is going on." There was nothing pie-in-the-sky, James insisted, about proposing that lay activists could
become experts themselves: "Non-scientists can fairly easily grasp treatment-research issues; these don't require an extensive background in biology or medicine."[50]
It was the right time to pay attention to the organization of clinical trials. With $100 million in federal funding, the NIH was in the process of setting up a nationwide network of fourteen research centers, dubbed AIDS Treatment Evaluation Units (ATEUs), which sought to enroll an additional one thousand of the nation's ten thousand living AIDS patients in government-sponsored Phase II trials of a select group of drugs, including AZT, foscarnet, HPA-23, and ribavirin.[51] Following the standard NIH procedure for "extramural research" (so called to distinguish it from the NIH's own in-house or "intramural" research), NIH would farm out the work to researchers, mostly at academic centers, who had designed and submitted the study protocols and would serve as the principal investigators for the studies.
But the process of setting up the ATEUs was slow and chaotic. Since HIV was an infectious agent, the National Institute of Allergy and Infectious Diseases, under the leadership of Anthony Fauci, had claimed ownership of AIDS treatment research. NIAID, however, had never organized clinical trials on this scale, and Fauci would soon be subjected to intense criticism by activists for what appeared to be incompetence. If Fauci were less intent on amassing power within the federal health bureaucracy, some suggested, he would have left AIDS treatment research with the NCI, where it began, relying on that institute's proven expertise in organizing large, multisite clinical trials for cancer therapies.[52]
The Gold Standard
Practically unknown before the Second World War, randomized clinical trials had rapidly, recently, and incontrovertibly become established as the "gold standard" in biomedicine. Such trials are presumed capable of establishing the risks and benefits of new drugs or of weeding out ineffective or dangerous drugs that doctors have prescribed on the basis of anecdotal evidence. Of course, sometimes anecdotal evidence is perfectly adequate: When an antibiotic brings about rapid miracle cures of diseases that are otherwise often fatal, doctors can safely trust the "evidence of their own eyes." But drugs with more marginal, or less rapid, effects are often harder to evaluate. If a patient gets somewhat better over the course of a few
months, is it the drug that is responsible or some other factor in the patient's life? Randomized clinical trials claimed to take the guesswork out of medical judgments.
These trials, many commentators have noted, are also crucial to the "scientization" of modern medicine—the legitimation of medicine as a fully scientific practice resting not just on the basic biological sciences but on the knowledge base generated by its own, distinct laboratory method.[53] Studies that proceed through the right steps—beginning with the random assignment of patients to either the treatment arm or the control arm—are presumed to generate true knowledge, while those with procedural failings are not. This reification of method, Harry Marks has said in an analysis of the history of such trials, has tended to bracket the political judgments that the use of the method necessarily entails: "On what basis do we choose a significance level?" "On what basis do we integrate the findings from a given experiment with the relevant body of theoretical or empirical literature?" And insofar as the use of such trials has encouraged researchers to privilege "trustworthy answers to a simply put question" over "a contestable reply to a more complex inquiry," it has remained unclear whether the type of knowledge generated is useful—whether the trials provide meaningful guidance to the doctors, patients, and officials who would use them. In this sense, as Marks has argued, reliance on randomized clinical trials may beg the fundamental policy question: What is the problem to which they are the solution?[54]
By 1986, as many as four hundred thousand to eight hundred thousand U.S. patients were enrolled in such trials every year.[55] The number of clinical trials reported in the scientific literature had grown by nearly 30 percent in the first half of the decade alone, from 3,414 in 1980 to 4,372 in 1985. The practical impact, as Dr. Sidney Wolfe of Ralph Nader's Health Research Group commented in 1986, was that "patients have much greater access to new treatments now than they did a decade ago."[56] But a more subtle consequence of this steady expansion of clinical research was that it had shifted the social meaning of the trials.
To the study investigators and the research establishment, the trials were simply scientific experiments; but in the eyes of those suffering from serious illnesses, controlled clinical trials were an important means of access to otherwise unavailable drugs—drugs endowed with the glimmer of scientific promise by simple virtue of their novelty and the fact that they were being studied. So, for example, in December
1985, when the NCI announced a small study of a new experimental cancer treatment using interleukin-2, two thousand people telephoned within two days to find out how to get into the trial.[57] The differing perceptions of the essential purpose of clinical trials would soon put people with AIDS and their representatives on a collision course with the FDA and medical researchers.
"Great Promise for Prolonging Life"
The announcement came on September 20, 1986, after several days of rumors and speculation, and it made front-page news around the country: Margaret Fischl's Phase II trial of AZT had been ended early, after the NIH's Data and Safety Monitoring Board—whose job it was to take periodic "peeks" at trials in progress[58] —concluded that the drug was so effective that it would be unethical to keep the control group on placebos any longer. AZT "holds great promise for prolonging life for certain patients with AIDS," Dr. Robert Windom, the assistant secretary for health, told reporters, adding that he had asked the FDA to consider AZT for licensing as expeditiously as possible.[59] But AZT, Windom also made clear, "is not a cure for AIDS."[60]
The study had been conducted in double-blind fashion with 145 subjects getting AZT and 137 a sugar pill identical in appearance, according to the formal write-up, which appeared in the New England Journal of Medicine the following July.[61] All the patients had AIDS or AIDS symptoms, and there was a similar range of T-cell counts in the two arms of the study. At the time the study was terminated, 19 patients in the placebo arm had died. Only 1 patient receiving AZT had died, and this was a remarkable difference. (It was also a statistically significant difference: the probability that these results might have occurred by chance was well beneath the accepted statistical threshold of one in twenty.) There had been a total of 45 new opportunistic infections in the placebo arm of the study, versus only 24 in the treatment arm. More problematically, "severe adverse reactions," particularly bone marrow suppression, had been observed in the study: a full 24 percent of the AZT recipients had experienced anemia, and 21 percent required blood transfusions. AZT use also caused nausea, myalgia, insomnia, and severe headaches.[62]
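The significance claim can be checked against the published counts. The sketch below computes a one-sided Fisher exact test from first principles, summing the hypergeometric tail with the standard library's `math.comb`; the counts come from the trial write-up, but the function itself is a textbook construction, not anything the investigators published:

```python
from math import comb

def fisher_one_sided(deaths_placebo, n_placebo, deaths_azt, n_azt):
    """One-sided Fisher exact test on a 2x2 mortality table: the
    probability, under the null of no treatment effect, of at least
    this many deaths landing in the placebo arm given the margins."""
    total = n_placebo + n_azt
    total_deaths = deaths_placebo + deaths_azt
    denom = comb(total, n_placebo)
    # Sum the hypergeometric upper tail over placebo-arm death counts.
    return sum(
        comb(total_deaths, k) * comb(total - total_deaths, n_placebo - k) / denom
        for k in range(deaths_placebo, total_deaths + 1)
    )

# As reported: 19 of 137 placebo recipients died versus 1 of 145 on AZT.
p_value = fisher_one_sided(19, 137, 1, 145)
print(p_value < 0.05)  # well beneath the one-in-twenty threshold
```

Running this on the reported figures yields a probability orders of magnitude below 0.05, which is what licensed the phrase "statistically significant" above.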
With the support of the NIH and FDA, Burroughs Wellcome announced that it would supply AZT free of charge to any AIDS patient
who had suffered an attack of Pneumocystis carinii pneumonia, the most deadly of the opportunistic infections, during the past 120 days, pending the drug's formal approval. It would be distributed on a case-by-case basis only through physicians who submitted requests and agreed to supply data to Burroughs Wellcome; and the NIH set up a toll-free number for doctors to call to request the forms to file. Within weeks, after many doctors and people with AIDS protested that these criteria were arbitrary, Burroughs Wellcome agreed to expand the program to include any of the 7,000 people with AIDS who had suffered pneumocystis at any point.[63]
On March 20, 1987, less than three years after the Heckler press conference announcing the discovery of HTLV-III, the FDA approved AZT for use in the 33,000 Americans diagnosed with AIDS, on the basis of a positive recommendation by the panel of experts on the FDA's Anti-Infective Advisory Committee.[64] The drug had proceeded from in vitro studies to full approval in just two years and had been approved without a Phase III study. The United Kingdom, France, and Norway had also licensed AZT in the preceding weeks.
Burroughs Wellcome announced that it would sell its product under the brand name "Retrovir" at a price that would amount to eight thousand to ten thousand dollars per patient each year. The company refused to disclose its profit margins for AZT—a drug that, after all, had been invented by a federally funded cancer researcher in Detroit a quarter-century earlier, had been sitting on Burroughs Wellcome's shelves for years, and then had been shown effective against HIV in vitro by scientists at NCI and Duke University. One pharmaceutical industry analyst quoted in the New York Times estimated the profit at up to 40 percent and predicted that Retrovir would soon be "the company's largest contributor to revenue and earnings."[65] It was widely assumed that the price reflected the company's assessment—shared by researchers and conveyed to patients by the media—that the life span of the drug was limited since new and better antiviral drugs would be available soon. In practice the price meant that only rich countries could afford to subsidize it. AZT was—and is—essentially unavailable to all those outside of the so-called first world.
The pricing of AZT was not the only point of controversy. "At least half of all AIDS patients that should be eligible to take the drug either cannot take it at all or must take a lower dose to prevent toxicity," reported a news article in Science . (It would later become apparent that the originally prescribed dosage of twelve hundred milligrams per
day was unnecessarily high.) "We found it nearly impossible to keep patients on the drug," said Jerome Groopman, a prominent researcher and clinician who tried giving AZT to fourteen patients on a compassionate use basis.[66] "AZT may be a genie that we are letting out of the bottle," Dr. Itzak Brook of the FDA's Anti-Infective Advisory Committee told Time magazine in February.[67] Brook had been both the chair of the committee and the sole dissenting vote on the recommendation to license the drug.
In a more formal commentary published in the Journal of the American Medical Association (JAMA), Brook explained that the committee "recognized that the benefit in very sick patients outweighed the serious toxic effects, but it was concerned by the fact that the long-term efficacy and toxicity are, to date, unknown and require further studies." Moreover, many committee members, according to Brook, were concerned that once the drug was approved, HIV-infected people with mild symptoms or none at all might gain access to the drug (since once a drug is licensed, any physician can prescribe it to a patient, whether or not the patient fits the official "indications"). For mildly symptomatic and asymptomatic patients, there were as yet no data from which to conclude either that the drug had benefits or that the benefits exceeded the risks.[68]
As Time suggested, halting the test early and offering the drug to patients in the placebo arm "robbed researchers of the chance to judge, under controlled conditions, any long-range effects of AZT, which might be as dangerous as the treated disease."[69] In fact, as the Washington Post had reported, the days before the announcement of the study's outcome had been marked by serious, behind-the-scenes soul searching: some government officials and researchers had been so concerned about the impact that releasing the drug might have on the capacity to conduct future research that they had implored the media not to carry the story about the trial's findings until a policy decision had been reached, lest the publicity itself create irresistible pressure for the release of the drug.[70] But research ethics, political realities, and the prevailing construction of belief precluded any alternative course of action. "I don't see how you can have a placebo group," said Dr. Charles Schable of the Centers for Disease Control (CDC), "because if you're pretty sure it's going to work, why should you not give it to people?"[71]
The Politics of "Indifference"
To use the language preferred by those who are experts on clinical trials, the AZT study was no longer at an "indifference point" (or, it no longer maintained a state of "equipoise"). In order to conduct an ethical experiment on human beings in which one group receives Treatment A and the other receives Treatment B, "the clinical investigator [must] be in a state of genuine uncertainty regarding the comparative merits of treatments A and B."[72] If this precariously balanced state does not hold—if the researcher has good reason to believe that one treatment is superior—then it would be considered unethical to subject either group to the putatively inferior treatment. When the Phase II AZT study began, it was technically at the indifference point: John James's belief that the drug was "known to work" notwithstanding, investigators like Fischl and Richman believed that there were no hard data supporting AZT's efficacy, since the suggestive results from the uncontrolled Phase I trial may simply have been due to a placebo effect and since that trial had lasted for only a few weeks. But once the Data and Safety Monitoring Board had "unblinded" the study to see the results so far, equipoise no longer held: clear statistical evidence showed a treatment difference between the AZT recipients and those given placebos.
The requirement that a trial could be conducted only when a state of equipoise existed was intended to protect the rights of human subjects by imposing an objective standard on the design of medical experimentation. In practice, like many such rules of scientific practice, this one was subject to negotiation and interpretation. After all, any time researchers test a drug, they do so because they have some reason to think it might work; indeed, probably few investigators take upon themselves the arduous task of designing and conducting a clinical trial unless, on some "gut level," they believe the study might succeed. At what point, therefore, do reasoned guesswork and personal belief come to violate a state of "genuine uncertainty"? Clearly, there is no firm, universally apparent, dividing line separating equipoise from its absence. So certain was Jonas Salk of the efficacy of his polio vaccine that he opposed conducting a double-blind, placebo-controlled trial, arguing that such a "fetish of orthodoxy" would unnecessarily doom some of those in the placebo group to contracting polio. Other researchers countered that in the absence of such a study, the vaccine would never achieve broad credibility among doctors and scientists.[73]
Inevitably, the assessment of equipoise becomes a social and often political process, embedded in the complex interactions and negotiations that establish the credibility of treatments in different quarters.[74]
More problematically still, there is no reason to assume that researchers and research subjects will be equally "indifferent" about the potential merits of therapies, or will be indifferent in quite the same way. "It is clear that research subjects may rationally prefer one treatment arm of a randomized clinical trial … rather than another even if there is no medical reason for the choice," commented medical ethicist Robert Veatch, pointing to patients' complex evaluations of side effects of drugs and quality-of-life concerns. "Only in the rarest of circumstances will active subjects really be indifferent to the two treatment options if indeed they really understand what these options are."[75]
Placebos Under Attack
The fact that researchers and research subjects could differ in their understandings of equipoise was unlikely to lead to controversy in comparisons between two active treatments, one old and one new. But comparisons between a potentially active drug and an inert placebo were far more capable of sparking an uproar among patients with a life-threatening disease. The use of placebos in the Phase II AZT trial was one of the first such cases to be criticized by AIDS treatment activists. In blunt terms, in order to be successful the study required that a sufficient number of patients die: only by pointing to deaths in the placebo group could researchers establish that those receiving the active treatment did comparatively better. Furthermore, to avoid introducing confounding variables into the study, the protocol forbade participants to receive other medication during the study. All this made a certain sense from the standpoint of experimental design, but it was difficult to justify to those people occupying the dual social roles of "patient" and "research subject"—people who began with the assumption that the purpose of medicine was to help them. Researchers insisted that clinical trials should not be confused with treatment—that being a research subject is not the same as being a patient. But this was a difficult distinction to put across in the best of circumstances, and it did not resonate with people with AIDS who were fighting to stay alive. In essence, the same practices and procedures that gave biomedicine its credibility as a science were threatening the credibility of medicine as a healing profession.
Mathilde Krim, a New York cancer researcher who had become the co-chair of the American Foundation for AIDS Research (AmFAR), argued at a New York demonstration in summer 1986 that "the double-blind clinical trial on AZT is an insult to morality."[76] But defenders of placebo-controlled trials characterized them as the quickest route to the truth and pointed to their track record in weeding out ineffective drugs that practicing physicians had believed in. Without the science of clinical trials in general, and without double-blind, placebo-controlled trials in particular, physicians were left with nothing but anecdotes and hunches. Douglas Richman, the AZT researcher at the University of California at San Diego, argued in 1988: "In the field of antiviral therapy alone, numerous anecdotal claims were made for the benefits of corticosteroids for chronic hepatitis B, of iododeoxyuridine for herpes simplex encephalitis, and cytosine arabinoside for disseminated herpes zoster. These clinical observations made by concerned physicians were proved to be erroneous in randomized, double-blind, placebo-controlled studies. In fact, the study drug in each case did more harm than the placebo."[77]
The opposite error—erroneously rejecting an effective therapy—was also possible in the absence of placebo-controlled trials. Indeed, said Richman, if there had been no double-blind, placebo-controlled trials, AZT probably would have been discarded. Since AZT showed no impact on the rate of opportunistic infections for the first six weeks of the study and no impact on survival for an even longer period, it would have been easy to conclude from an uncontrolled study that AZT was toxic and ineffective.[78] In response, some, such as Krim, argued that placebo controls weren't the only option for a controlled study. Data obtained from treatment groups could be compared with the medical records of matched cohorts of other AIDS patients who had been followed in the past in studies of the natural history of AIDS (a method called "historical controls"). Or patients in treatment groups could be compared against their own medical records from the weeks prior to their entry into the study. Similar methods had been employed successfully in research with cancer drugs.[79]
Beyond the questions about whether double-blind, placebo-controlled trials were ethical, there began to emerge, in response to the Phase II AZT study, a growing concern about whether such trials were in practical terms possible. The essence of a double-blind trial is that neither the subject nor the investigator knows whether the subject is receiving the drug or the placebo. But how can such information be
disguised in the case of a relatively toxic drug that produces symptoms like nausea and headaches? And how do researchers anticipate the actions of patients understandably anxious about the possibility that they were squandering their remaining days swallowing sugar pills? Even before the trial had ended, rumors began to trickle in from various quarters: some patients were seeking to lessen their risk of getting the placebo by pooling their pills with other research subjects. In Miami, patients had learned to open up the capsules and taste the contents to distinguish the bitter-tasting AZT from the sweet-tasting placebo. Dr. David Barry, the director of research at Burroughs Wellcome, complaining implausibly that never before in the company's history had any research subject ever opened up a capsule in a placebo-controlled trial, quickly instructed his chemists to make the placebo as bitter as AZT. But patients in both Miami and San Francisco were then reported to be bringing their pills in to local chemists for analysis.[80]
Presumably such practices were not invented by AIDS patients. But the prevalence of AIDS within relatively well-defined communities, and the growing sophistication of the emergent treatment underground, made it likely that strategies for "beating the system" diffused more rapidly and more extensively among AIDS patients than among, say, research subjects in cancer trials. (Ironically, such behavior also risked extending the length of the trial, by increasing the time required to show a statistically significant difference between the AZT group and the placebo group—an example of the clash between the individual and social good that makes such trials so vexing.)[81]
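The parenthetical point about trial length can be made concrete with a standard sample-size calculation for comparing two proportions (normal approximation, 80 percent power, two-sided alpha of 0.05). The event rates and the one-third sharing fraction below are hypothetical illustrations, not figures from the AZT trial:

```python
# z-values for two-sided alpha = 0.05 and for 80 percent power.
Z_ALPHA, Z_BETA = 1.96, 0.84

def n_per_arm(p_treat, p_placebo):
    """Approximate subjects needed per arm to detect the difference
    between two event rates with 80% power at two-sided alpha = 0.05."""
    diff = p_placebo - p_treat
    variance = p_treat * (1 - p_treat) + p_placebo * (1 - p_placebo)
    return (Z_ALPHA + Z_BETA) ** 2 * variance / diff ** 2

# Hypothetical event rates: 10% on the drug versus 30% on placebo.
clean = n_per_arm(0.10, 0.30)

# If a third of the placebo arm quietly obtains the active drug, the
# placebo arm's observed event rate drifts toward the drug's, shrinking
# the apparent treatment difference that the trial must detect.
diluted_placebo = (2 / 3) * 0.30 + (1 / 3) * 0.10
diluted = n_per_arm(0.10, diluted_placebo)

print(round(clean), round(diluted))
```

Under these assumptions the required enrollment roughly doubles, which is the sense in which pill-sharing lengthened the very trials the participants hoped to hurry along.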
Researchers insisted that their own monitoring methods revealed little abuse: blood tests identified few patients in the placebo arms of studies who had obtained the active drug. Still, reports of "noncompliance" raised serious questions about just how "objective" the much-vaunted double-blind trials really were. Those seeing only the tidy graphs and reading only the crisp prose in the New England Journal of Medicine might conceive of such trials as the essence of scientific rigor and, hence, the most solid basis for forming clinical and regulatory judgments. Those observing the conduct of a trial "from the inside" might conclude that knowledge was resting on something rather less solid than bedrock, and they might wonder why the research establishment chose to fetishize this mechanism for establishing biomedical truth.[82]
The Repudiation of Victimhood
So-called noncompliance—of patients who don't take their medicine, as well as research subjects who don't follow the protocols—is a long-standing concern among medical professionals. But preoccupation with the issue has skyrocketed in recent years. In one study of the medical literature, Ivan Emke was able to find only 22 articles published in English on the topic of compliance before 1960. But "by 1978, 850 more had been published. Between 1979 and 1985, another 3,200 articles on compliance were published."[83] Noncompliance has become a catchall category for things patients do that health providers find undesirable—a term that casts as much light on doctors' expectations as it does on patients' behavior.[84]
As Emke has noted, doctors tend to discuss what they call the "problem of noncompliance" as if it were purely an individual issue involving specific troublesome patients. But as far back as the Popular Health Movement of the 1830s and 1840s, noncompliance has also appeared in "organized" forms. The feminist health movement of the 1970s and 1980s is, in Emke's words, "the clearest modern example": "It represents more than simply a questioning of the medical orthodoxy, but also involves the setting up of alternative clinics, the support of unique therapies, and the democratization of medical knowledge."[85] The consequences of organized noncompliance for professional authority are suggested indirectly by an observation made by Eliot Freidson, the influential sociologist of medicine and the professions, who, writing in the 1960s, assumed there was a general absence of such organization among medical patients. Professional authority cannot function as such, said Freidson, unless "its clientele is a large, unorganized aggregate of individuals, leaving little possibility for the exertion of lay pressure to compromise occupationally preferred standards."[86]
"Noncompliance" is a vague term, emphasizing what patients don't do, rather than what they do. It also suggests a zero-sum game, as if AIDS patients and their doctors had no interests in common. In practice, the relationship between patients with AIDS or HIV infection and community doctors has often been a close one—particularly in gay communities where the doctors themselves are sometimes gay and, in not a few cases, are also infected with HIV. Rather than speaking of noncompliance, it might be more accurate to describe a series of shifts in the nature of the doctor-patient relationship, accompanied and often
fueled by an unusual medical sophistication on the part of the patients.
As the extensive literature on the "doctor-patient relationship" suggests, there are many different models of such relationships. The doctor might be conceived as omnipotent or as simply an adviser to the patient. The patient might be imagined to be an inert object (as in surgery) or a competent decision maker (as in many chronic illnesses).[87] But as professional ethics have changed in recent years, and as the balance of power in the doctor-patient relationship has shifted, doctors have been increasingly inclined to acknowledge the full subjectivity of their patients.
AIDS patients have encouraged this cultural shift. Like their feminist predecessors, people with AIDS practiced "self-help with a vengeance," as Indyk and Rier have nicely characterized it[88] —an outright rejection of medical paternalism and an insistence that neither the medical establishment nor the government nor any other suspect authority would speak on behalf of people with AIDS or HIV. In 1985, groups of patients issued a "Founding Statement of People with AIDS/ARC" and a "Patient's Bill of Rights," which have been widely reprinted. The "Founding Statement" asserted: "We condemn attempts to label us as 'victims,' which implies defeat, and we are only occasionally 'patients,' which implies passivity, helplessness, and dependence upon others. We are 'people with AIDS.'"[89] People with AIDS insisted not only on their right to self-representation but also on the right to full explanations from health professionals, the right to anonymity and confidentiality, and the right to refuse specific treatments.[90] Decision-making power, ultimately, had to reside with the person whose life was on the line. This was not an assumption to which doctors necessarily were averse, but the ingrained culture of professional practice often tended to militate against it. At a 1988 conference on AIDS held in London, an anthropologist held up two books side by side to illustrate the gap in perceptions: one was called AIDS: A Guide for Survival, the other, The Management of AIDS Patients.[91] (Only two years later, as the balance of power and knowledge between doctors and patients shifted, AIDS Treatment News would publish an article for patients advising them how to go about "Managing Your Doctor."[92] )
In explaining the medically "noncompliant" tendencies of groups like gay men and injection drug users with AIDS, some have emphasized their alienation from society: outcasts can be expected to rebel.[93]
Others have stressed the desperation of those confronted with imminent death. Yet for many people with AIDS, the capacity to challenge their doctors over the terms of their medical treatment may stem less from their oppression or desperation than from their relative social advantages. Barrie R. Cassileth and Helene Brown have made a similar point about cancer patients who pursue alternative therapies: "Contrary to the stereotype, … patients who seek unproven methods include the educated, the middle to upper class, and those who are not necessarily terminal or even beyond hope of cure or remission by conventional treatments." Such patients are overrepresented because "several features of these [alternative] cures require time, financial resources, and an educated, questioning approach to illness.…"[94]
Similarly, many people with AIDS and their friends, lovers, and families are often equipped with the financial and cultural resources that permit them to reverse the unidirectional flow of power in the traditional doctor-patient relationship. Many are highly educated (though very often not in the hard sciences), highly motivated, and willing to work to learn the foreign language of biomedicine. "An offensive strategy began to emerge on the island of [hospital room] 1028," reported Paul Monette, in a memoir describing his lover's death, "especially as I took an increasingly hands-on role, pestering all the doctors: No explanation was too technical for me to follow, even if it took a string of phone calls to every connection I had. In school I'd never scored higher than a C in any science, falling headlong into literature, but now that I was locked in the lab I became as obsessed with A's as a premed student. Day by day the hard knowledge and raw data evolved into a language of discourse."[95]
One New York doctor described the results of such autodidactic strategies as he witnessed them with his patients: "You'd tell some young guy you were going to put a drip in his chest and he'd answer: 'No, Doc, I don't want a perfusion inserted in my subclavian artery,' which is the correct term for what you proposed doing."[96] In the eyes of some doctors, these were "bad" patients—troublesome know-it-alls who presumed to tell the doctor what to do. But others appreciated patients who took such an energetic interest in their own treatment.[97] The emerging partnerships between patients and health practitioners—and more generally, the expanding expertise residing in gay communities—would hold profound consequences for the politics of knowledge-making in the coming years.