Gaining Access (1987–1988)
"It's Not That Easy"
With the steady continuation of basic research on HIV, researchers learned an increasing amount about the life cycle of the virus and its genetic structure. Montagnier's discovery in 1986 of a second, distinct HIV virus—named HIV-2—also believed capable of causing AIDS, produced complications for therapeutic strategies, since there was no reason to believe that a treatment against HIV-1 would necessarily prove efficacious against HIV-2. In practice, treatment strategies focused simply on HIV-1, the virus associated with AIDS around the world; relatively little treatment-oriented research was devoted to HIV-2, found almost exclusively in West African countries.
In contrast to the rapid accumulation of knowledge about the properties and life cycle of HIV-1, researchers lacked a clear understanding of the pathogenesis of AIDS—the steps by which HIV directly or indirectly brought about the decline in the numbers of helper T cells and the destruction of immune functioning. Since the virus could be detected in only a tiny fraction of T cells, it began to seem unlikely that the direct cytopathic effect of HIV could adequately account for the observed T-cell decline. This anomaly was one of the factors that prompted one prominent retrovirologist, Dr. Peter Duesberg of the University of California at Berkeley, to argue in March 1987 that HIV could not be the cause of AIDS (see chapter 3).
A number of research findings in the period from 1986 to 1988
shed some light on the mysteries of pathogenesis, with implications for therapeutic strategies. Gallo and coauthors noted in 1987 that infection with the virus could cause T cells to clump together, forming "multinucleated giant cells" called syncytia. "As these giant cells cannot divide appropriately," wrote Gallo, "cell death results." Another clue to the question of how HIV could be so deadly when so few T cells were infected came with the discovery that HIV also infected the macrophages (from the Greek words for "big eater"), immune system scavenger cells present in the blood and other body tissues that surround and ingest foreign particles such as bacteria and protozoa. Since HIV could infect macrophages without killing them, macrophages could serve as reservoirs of infection within the body—"like beanbags, filled with hundreds of viral particles," in the words of Dr. Monte S. Meltzer of Walter Reed Army Institute of Research in Washington, D.C. An important implication was that a truly effective antiviral would presumably have to function within macrophages as well as in helper T cells.
Vaccine development also continued at a rudimentary stage, since researchers lacked basic information about what type of immune response a vaccine should stimulate and what type of viral preparation could most safely and effectively generate such a response. Was the goal to stimulate the "humoral" (antibody) arm of the immune system to generate an effective "neutralizing antibody" that could defend against HIV? Or was it to stimulate the "cell-mediated" arm (the arm that HIV itself attacks) to produce "killer T cells" that would be programmed to destroy an invading viral particle? Could either or both of these goals best be accomplished with whole virus or protein subunits, and should these be natural or genetically engineered? And once a candidate vaccine existed, how could it be tested to see if it worked? Chimpanzees were the only other species capable of being infected with HIV, but since they didn't develop AIDS, were they really a good "animal model" for AIDS research? Should researchers bypass animal testing and proceed directly to trials in humans? If so, the question of establishing efficacy became peculiarly tricky. One public health official admitted in 1986: "People have been talking vaccine, vaccine, vaccine for public consumption, and I have said it, too. But I always scratch my head and say this is not the kind of situation where it is going to be easy to do the testing."
After all, in order to prove that a vaccine is effective, researchers have to show a difference in infection rates, in a double-blind trial,
between those who received the vaccine and those who received a placebo. But in order to pass an ethics review, such a trial would have to include a "prevention component": each participant would have to be counseled on how to reduce the risk of HIV infection, and each would have to be strongly advised to practice these risk reduction techniques on the logic that he or she might be in the placebo group or might have received an ineffective trial vaccine. The difficulty, however, is that to the extent that the research subjects heed this counseling, there might be less of a difference in infection rates between the placebo group and the vaccine recipients. "The dilemma you might get into is that unless the volunteers continued with the practices that put them at risk, there would be nothing to study," commented Harold Jaffe of the CDC's AIDS program. It was a telling instance of the clash between the "scientific method" and the "real world": once the controlled experiment moved beyond the bounds of the laboratory walls, the iron logic that gave the experiment its scientific credibility proved difficult, if not impossible, to comply with—at least without threatening the moral credibility on which science, as a public institution, depends.
Meanwhile, NIAID's ATEU program for clinical trials of AIDS drugs, announced with some fanfare in 1986, had barely gotten off the ground—"delayed for months by technical, ethical and financial problems, bureaucratic sluggishness and lack of cooperation from [Burroughs Wellcome]," according to the lead of a front-page New York Times article. By April 1987 only 350 patients were enrolled in trials, as compared to the 1,000 that Fauci had promised would be enrolled by the first of the year. Activists chalked the delays up to NIAID's incompetence: unlike, say, the National Cancer Institute, NIAID simply didn't have the experience with running large, multicenter clinical trials. But some researchers insisted that clinical trials necessarily take time to design properly and that "there are no short cuts to the truth."
Fauci, paradoxically, put the blame on scientific progress: The licensing of AZT, in one fell swoop, had invalidated every existing protocol for tests of new antiviral drugs in AIDS patients. Now that AZT had become the standard of care for patients with advanced AIDS, it was no longer ethically acceptable to conduct placebo-controlled trials with such patients. Every protocol had to be rewritten to compare a group receiving the experimental drug with an "active control group" taking AZT. This was no minor substitution, since active-control trials
raised different methodological questions and demanded different statistical interpretation. "Months of work suddenly required complete revision," explained Fauci. The scientist also had sharp words for Burroughs Wellcome, expressing his "frustration" that the company "literally has complete control over what does or does not get done" in NIAID trials involving AZT. Burroughs Wellcome had been quick to supply the AZT when ATEU researchers wanted to try administering the drug in combination with acyclovir, another Wellcome product. But it took six months to get the company's permission to test AZT in combination with alpha interferon, a drug produced by a competing pharmaceutical company.
In 1987, Fauci took steps to put his own house in order. He abolished the ill-fated ATEU program for testing AIDS drugs and set up in its place a new network of researchers and research sites, called the AIDS Clinical Trials Group, or ACTG. And he hired away some of NCI's experts on clinical trials, including Dr. Daniel Hoth, an oncologist who was previously the chief of the investigational drug branch at NCI and who would run the ACTG program, and Dr. Susan Ellenberg, a biostatistician who would give expert advice on how to design the trials. "It was really like trying to build a space shuttle in Bangladesh," recalled Hoth some years later, after his departure from NIAID: "We were trying to do two things at once. One was to build the infrastructure and the second was to actually do the research. It was like being out in the Persian Gulf and you had scaffolding over the aircraft carriers at night and in the day [you were flying] missions."
From Hoth's perspective, part of the problem lay in the particular orientations of infectious-disease researchers and in how they differed from the oncologists with whom Hoth had worked in the past. Cancer researchers were used to running large, cooperative research projects; indeed, the average oncologist had at least a passing familiarity with such research since so many cancer patients were enrolled in trials. By contrast, many of the infectious-disease researchers who would run the government-funded trials at the various ACTG research sites around the country had little of this expertise. "So we were teaching people how to write protocols, how to deal with the FDA, how to think about strategic issues," Hoth recalled. Furthermore, it was obvious to oncologists "that you couldn't answer the most important questions by yourself because most of the important questions require very large trials"; cooperation, therefore, was the name of the game. But Hoth found the infectious-disease researchers to be resistant, at least
initially, to this fundamental truth. "They live in a publish or perish mode," said Hoth. "That drove them towards individual protocols rather than cooperation. So it was very hard for them to 'get' the concept of a cooperative group."
Criticism of the pace of drug testing continued throughout 1987 as patients pressed for studies of drugs ignored by the research establishment. Fauci complained to the press about the "misperception" that "if we're not testing every conceivable drug in a trial, we're falling short of our responsibility." As soon as any compound was reported to act against the virus in vitro, "everybody in New York and San Francisco is saying 'Why aren't you studying this? Thousands of people are dying in the streets, and this at least offers some hope. Why not try it?'" But "it's not that easy," Fauci insisted; most of these compounds proved to be of dubious value. In the words of Frank Young at the FDA, "the real problem is, where do you get the ideas and where do you get the compounds from?"
According to the Nobel Prize-winning molecular biologist David Baltimore, advances in AIDS drug treatment would come not through "random screening" of potential agents but rather through a more directed process of "rational drug development." As an example, many pointed to the biotechnology industry's latest contribution to AIDS research, a genetically engineered substance called soluble CD4, developed by the Genentech corporation in San Francisco. Soluble CD4 was designed to act as a "decoy" by imitating the CD4 molecule, the site on the immune system cells to which the virus binds. In theory, the virus would latch onto the soluble CD4 rather than attach itself to T cells; the effect of the drug on the virus, according to an enthusiastic NCI spokesperson, would be like "putting putty all over a porcupine." Samuel Broder was enthusiastic enough to tell the press: "It is one of the most important steps we have ever been able to take." Unfortunately, a good result in the test tube with a "rationally engineered" drug proved to be just as poor a predictor of in vivo success as the results of many drugs stumbled upon by chance. Soluble CD4 bombed out in clinical trials, proving completely ineffective in controlling HIV infection.
The NIAID-sponsored trials pursued scientifically safer and more predictable strategies. Since AZT had been shown to have efficacy, investigators focused attention on other dideoxynucleosides,
the family of nucleoside analogues to which AZT belongs. Two drugs in particular showed promise: dideoxycytidine (ddC) and dideoxyinosine (ddI). And since AZT's effect had been shown only in advanced cases of AIDS, it made sense to study the drug in less sick patients to see whether prescribing it earlier in the course of illness would be beneficial. Two large trials were begun: one, labeled "Protocol 016," studied AZT in mildly symptomatic HIV-infected patients; the other, "Protocol 019," focused on AZT use in asymptomatic patients. No one knew how many of such patients, if left untreated, would go on to develop AIDS. But whereas earlier in the epidemic authorities had suggested that perhaps 5, 10, or 20 percent of those infected would eventually develop AIDS, the experts increasingly were predicting that nearly every infected person might eventually do so. "Early intervention"—before the immune system had been severely compromised by the course of HIV infection—seemed therefore to make good intuitive sense. In fact, community-based treatment advocacy organizations like Project Inform had begun to stake their very identity on the notion of intervening early.
Ellen Cooper, the head of the FDA's Antiviral Drug Division, recalled that "there were a lot of people who would say to me at the agency, 'Well why are we even bothering to do studies in asymptomatics? … We know it's an antiviral, we know it works in more advanced patients. [Why not just] open up the indications to early patients?'" And in practice, some doctors had already begun prescribing AZT to HIV-infected patients who did not have AIDS, prompting bitter controversy between advocates and critics of the practice. "I know you don't get better by yourself," commented one Los Angeles doctor with a large AIDS practice, in a pithy expression of the practicing physician's interventionist orientation. Itzak Brook, the FDA advisory committee chair who had voted against approving the drug, was quick to say "I told you so": "This is just what I was afraid of," he commented to the New York Times. Samuel Broder of the NCI suggested that doctors and patients should simply sit back and wait: "The best thing to do now is to let the scientific community work this out." But Mathilde Krim, writing in a public policy journal, put the blame back on the NCI for having helped create the predicament in the first place: as far back as late 1985, NCI researchers had been discussing AZT in hopeful terms on national television, thereby enhancing the public's belief in the drug and raising the expectations of the patient community.
With HIV-infected people clamoring for AZT, the 016 (mildly
symptomatic patients) and 019 (asymptomatic patients) trials became more important than ever. They also became ever more difficult to conduct. Since there was no approved treatment for patients in these categories, AZT still had to be measured against a placebo. But compared to the original Phase II AZT trial with AIDS patients, these were larger and longer studies—necessarily so, since otherwise there would be "too few" deaths in the placebo arm to prove anything, given the relative health of the patients. Fischl's AZT trial had involved only 137 patients on placebo, and they were kept on it for twenty-four weeks at most. By contrast, the 019 study, conducted by Dr. Paul Volberding of the University of California at San Francisco, had 428 people in the placebo arm, and it was expected to run for several years.
Soon articles in the gay press were publicizing the plight of the "sacrificial lambs" in the AZT studies, sentenced by the research establishment to "death by placebo." Experts on clinical trials sought to emphasize the difference between the 016 and 019 studies and the earlier Phase II AZT study conducted with much sicker AIDS patients. That the patient community might find placebos difficult to countenance in trials of those facing "imminent death" was "entirely understandable," said Thomas Chalmers of the Harvard School of Public Health. But, he argued, "it is more difficult to understand that philosophy when one is dealing with asymptomatic patients … who may never develop AIDS and face a chance of being [made] sicker by a toxic and ineffective drug."
However, the trial participants—who had tested positive, who had gleaned from numerous newspaper accounts that they had a "time bomb" ticking away inside of them, and who, in their day-to-day lives, could see the presumed end results reflected in the bodies of the friends and lovers they visited in hospitals, reflected in the obituaries they read, and reflected in the funerals they attended—quite simply drew different conclusions. One subject in the 019 trial who had discovered he was in the placebo arm commented, "Fuck them. I didn't agree to donate my body to science, if that is what they are doing, just sitting back doing nothing with me waiting until I get PCP [Pneumocystis carinii pneumonia] or something." He told a reporter for the gay press that he had covertly begun taking dextran sulfate, an unapproved drug available through the treatment underground. Some community physicians expressed their incredulity on learning that participants in these studies were not permitted to take prophylactic medication to ward off pneumocystis pneumonia. One doctor described an experience
with one of his patients: "I said hello, and he handed me this lab slip from UCSF and started crying. He said they won't let me have aerosol pentamidine.… I looked at it, looked at him, and said, 'I don't believe you. Nobody would do that!' It drove me nuts!"
Dual Roles and "Double Agents"
The fundamental problem was that it was becoming more and more difficult for people with AIDS and HIV to occupy the dual roles of "patient" and "research subject." That these distinct roles might overlap without tension was always a convenient fiction. But in the case of other illnesses such as cancer, the problem had been given more extended consideration. Most clinical research in cancer takes place on the "front lines" of patient care: a patient's own oncologist routinely enrolls him or her in research protocols that are integrated into the overall treatment plan. At least in theory, these oncologists are self-reflective about their role as what ethicist Robert Levine calls "double agents": they wear the hats of both "doctor" and "researcher" and must be responsible, simultaneously, to the abstract goal of knowledge and the concrete needs of their patients. Researchers in infectious disease also saw patients, but they were far less likely than oncologists to have extended experience with patients suffering from chronic, life-threatening illnesses. Until AIDS, as David Rothman and Harold Edgar have explained, "most of the research in infectious diseases, although certainly not all, did not involve desperately ill patients willing to take high risks for the slimmest possibility of a gain. Inevitably, in the realm of infectious diseases, the commitment to placebo-based random trials did not have to come up against agonizing questions."
As these "agonizing questions" surfaced in trials like 016 and 019, community physicians not involved directly in clinical research (like the astonished doctor quoted above) found themselves caught smack in the middle between their own patients and the respected academic researchers conducting the trials. In more typical circumstances, these practitioners would likely have deferred to the academics, who enjoy high status within the broader medical community. (As Andrew Abbott has described it, such professionals reside closest to the profession's "pure" knowledge base and bask in its reflected glow.) But the physicians on the front lines of the AIDS epidemic—the ones who saw hundreds of people with AIDS and HIV in their practices, who in some
cases were gay themselves and in some cases were HIV positive—found their loyalties sharply divided. Many of them reacted with sympathy as activists began to propose ways of easing the tension between the roles of "patient" and "subject"—ways of conducting research that might serve the ends of both science and ethics.