Surrogate Markers to the Rescue
But how to get ddC, along with ddI and other new drugs, formally approved by the FDA? That was the true goal, in the eyes of most AIDS activists—not relying on compounds cooked up in somebody's kitchen. No one thought these drugs were magic bullets.
As Mark Harrington wrote in his column in the gay and lesbian magazine Outweek, "At best, [ddI] will be a less toxic alternative to AZT [and] at worst, it will be an alternative with less antiviral activity and unpleasant side effects." Nonetheless, the issue was "vitally important," wrote John James, "because there are tens of thousands of people unable to use AZT, or no longer able to benefit from it."
The obstacle, in James's view, lay "not with any one agency, company, or other institution, but with a professional consensus which crosses organizational boundaries"; this consensus, if not disrupted, would effectively prevent "any decisive treatment advance from being available for years." The entire research enterprise was geared toward what James called "dinosaur trials"—huge, costly, multicenter trials that would take years to complete. Why did the trials take so long and require so many subjects? As James explained in AIDS Treatment News, the chief impediment was that "the FDA has insisted on the slowest measure of clinical improvement," namely death or opportunistic infections in the control group. "This means that the drug being tested is not measured by improvements in the patients who receive it, but [opportunistic infections] or deaths in those who do not."
The alternative measure of drug efficacy that activists proposed was one with a long history in biomedicine but none whatsoever in AIDS: the use of "surrogate markers" to demonstrate the efficacy of a treatment. A drug shown to reduce serum cholesterol or blood pressure, for example, may be approved to treat heart disease on the assumption that such improvements correlate in the long run with an overall clinical benefit. Similarly, reduction in tumor size is sometimes used as a surrogate marker for the effectiveness of a cancer drug. Approving a drug on the basis of a surrogate marker necessarily implied greater uncertainty about the actual effects of the drug against the disease for which the marker was a stand-in. But it was a more or less accepted course of action for life-threatening diseases, since it could speed up the decision-making process considerably.
The difficulty, however, was that no marker had yet been proven to function as a surrogate for the effectiveness of an antiviral AIDS drug. A good marker would be one with "face validity" and "biological relevance"; it would also be easily measurable in some objective and reliable fashion. But a high-profile workshop called "Surrogate End-points in Evaluating the Effectiveness of Drugs against HIV Infection and AIDS," sponsored by the Institute of Medicine of the National Academy of Sciences and held in September 1989, had failed to arrive at a consensus on such a marker or markers. Anthony Fauci, the director of NIAID and the government's point man on AIDS research, backed the most obvious and oft-discussed marker: CD4 counts (the technical name for T-cell counts). Another logical choice was the level of p24 antigen, the core viral protein, but it was found only inconsistently in the blood. Debate also focused on other indicators of disease progression in the blood of HIV-infected people, such as a rising "β2-microglobulin" count or "neopterin" count. But these latter measures were nonspecific, Fauci argued, since they are common in many illnesses. "Nobody dies from elevated levels of β2-microglobulin or neopterin," said Fauci, "but nobody can make it without CD4 cells."
T-cell depletion was the very hallmark of AIDS; to an immunologist like Fauci, AIDS could almost be defined in terms of HIV's direct and indirect effects on T cells. Any drug that staved off T-cell decline had to have some value. To Fauci, and certainly to many activists, this made such intuitive good sense that any opposition seemed almost frivolous. The biostatisticians and the FDA regulators had their doubts, nonetheless. As a measure, CD4 counts were notoriously labile, fluctuating depending on the time of day the blood was drawn, how much sleep the patient had the night before, what the patient ate for breakfast, or which laboratory was doing the analysis. More fundamentally, as NIAID biostatistician Susan Ellenberg pointed out, the problem was that something might be a good prognostic marker of the future course of illness in the natural history of a disease (and no one doubted that CD4 counts filled this role in AIDS), but that didn't prove it could function as the endpoint of a clinical drug trial. That is, researchers can predict the future of an HIV-infected person (speaking in probabilistic terms) if they know his or her CD4 counts, but that doesn't necessarily mean they can predict the effect of a treatment on the person's prognosis simply by knowing the effect of the treatment on his or her CD4 counts. Such an association remained to be demonstrated.
Some of the New York activists, like Mark Harrington, promoted the use of surrogates but also argued that surrogate markers were only part of the answer. He called for careful attention both to quality-of-life indicators and the pathogenetic mechanisms of HIV infection that presumably underlay the surrogate markers. But others, particularly in San Francisco, saw surrogate markers as the critical issue. Martin Delaney, the director of Project Inform, for whom the virtue of CD4 as a surrogate marker was "intuitively correct," blasted what he saw as the head-in-the-sand insistence on definitive proof. "Such a view may be valid from a scientifically conservative, purist perspective," Project Inform's newsletter contended, "but it is hardly a progressive position in the context of a raging epidemic. … How much does one have to know about the scientific nature of combustion when the house is burning down?" Yet researchers and regulators presented examples from other diseases to argue that their concerns were more than mere pedantry. James Bilstad, an FDA official, described to a JAMA reporter in 1991 the recent "very disturbing" finding that certain cardiac arrhythmia drugs improved the commonly accepted surrogate markers for heart disease but tripled the risk of mortality from sudden cardiac arrest.
Books about AIDS drug development have tended to portray the struggle over surrogate markers as one in which stodgy defenders of the status quo were eventually won over by well-informed activists who were in possession of what was indisputably the "right" answer. No doubt this is partly because these books were published before 1993, when the use of CD4 as a surrogate marker was seriously challenged. However, from the start the issue of surrogate markers in AIDS clinical trials had scientific arguments on both sides that were passionately defended. (Indeed, activists themselves were not insensitive to the arguments against surrogate markers, particularly the sole reliance on CD4. James, for instance, was more impressed by a technique called quantitative PCR [polymerase chain reaction] that measured plasma viremia; he and others advocated combining laboratory markers with markers of apparent health, such as a doctor's ranking of the patient's overall state of being.) Here, once again, an activist victory depended on the capacity of activists to intervene in a complex scientific controversy by adding their moral authority—and political muscle—to one particular side in a methodological and epistemological controversy. The existence of competing expert interpretations of how knowledge was to be constituted gave AIDS activists an opening from which to conduct their campaign.
Activist pressure on the surrogate marker issue was destined, in turn, to hold profound consequences for the public negotiation of belief about the efficacy of drugs like ddI and ddC. The debates over whether ddI and ddC "worked" would proceed hand in hand with a debate over the very mechanisms by which efficacy might be established in an AIDS antiviral trial. Given these circumstances, controversy about the licensing and use of these drugs was almost inevitable.
In general, for a clinical trial to "work," its results must be taken to "stand for" the effects of a drug were the drug to be administered widely to patients outside the artificial, experimental setting. A trial resting on surrogate markers, therefore, derives its credibility from a two-stage process of representation: it must first be agreed that the short-term effect of the drug on the marker represents the long-term effect of the drug in reducing mortality—and then the trial results must be understood to reflect what would happen in the everyday world of patients who consumed the drug. When articulation of the linkage between "experiment" and "real world" becomes so complex—and when the stakes are nothing short of life and death—not only is there more space for argument about the meaning of trial results, but the capacity of "outsiders" to intervene and assert claims becomes all the more potent.