
AIDS and the American Health Polity: The History and Prospects of a Crisis of Authority

Daniel M. Fox

In 1981, when AIDS was first recognized, the American health polity was changing more rapidly than it had in a generation. The individuals and institutions that make up the health polity had a growing sense of discontinuity with the past. They were poorly prepared to take aggressive, confident action against an infectious disease that was linked in the majority of cases to individual behavior, was expensive to study and treat, and required a coordinated array of public and personal health services.

The unconventional phrase health polity encompasses more individuals, institutions, and ideas than the words ordinarily used to describe health policies and politics. A polity is broader than a sector or an industry. It includes more people than providers and consumers of health services, more institutions than a health-care delivery system. It is more than an aggregation of policies. The Oxford English Dictionary defines polity as "a particular form of political organization, a form of government . . . an organized society or community." I use the phrase health polity to describe the ways a community, in the broad sense of the OED definition, conceives of and organizes its response to health and illness.

My thesis is that when the AIDS epidemic began, a profound crisis of authority was transforming the American health polity. The roots of this crisis reached back in time, some for decades, others for just a few years. They included changes in the causes of sickness and death and, therefore, concerted efforts to adapt facilities and payment mechanisms
in order to address them; ambivalence about the recent progress of medical research, reflected in slower growth in research budgets and efforts to make scientists more accountable to their financial sponsors and the media; a growing belief that individuals should take more responsibility for their own health and that public health agencies should encourage them to do so; a sense that the cost of health care was rising uncontrollably and should be contained; and an increase in the power of the private sector and of the states within the health polity. Everyone who worked in the health sector knew that a crisis was occurring; so did attentive consumers of print and television news. Uncertainty about priorities, resources, and, most important, leadership pervaded the health polity. The AIDS epidemic is an additional element in an ongoing crisis.

I write first as a historian and then as an advocate. This essay has three parts, the first two of which are analytical, contemporary history. First I describe the origins of the crisis of authority. I then describe how the crisis has influenced the polity's response to AIDS. In the third part, I identify shortcomings in the American health polity's response to illness, flaws that have been revealed more clearly by this epidemic. The original version of this chapter was written in the spring and summer of 1986 and published in December of that year in The Milbank Quarterly supplement, "AIDS: The Social Consequences of an Epidemic." By the time I revised it in January 1988 for publication in this book, the response of the American health polity to AIDS had changed considerably. The centrist coalition that had dominated American social policy from the 1930s to the 1970s was resurgent. The crisis of authority, however, is far from ended.

The Health Polity in 1981

The Declining Importance of Infectious Disease

The most profound change affecting the health polity in the late 1970s and early 1980s was a major shift in patterns of illness, a shift with consequences for every individual and institution within the polity. Chronic disease had become the leading cause of disability and death. For half a century many people in the health polity had advocated changes in the array of institutions for treatment, in professional education, and, most important, in the financing of health care to take account of the growing prevalence of chronic illness. But the institutions of the health polity accommodated slowly to the new epidemiological situation. Most physicians, hospital managers, and, most important, Blue Cross and health insurance executives behaved as if infectious disease, injury, and the acute phases of chronic illness were the major causes of sickness and death. Most of the resources allocated to the health polity were therefore spent to manage acute episodes of illness and their aftermath. Nevertheless, by the late 1970s the burden of chronic, degenerative disease in an aging population was stimulating a profound reallocation of resources, new assumptions about the responsibilities of individuals and institutions, and considerable concern about rising costs.[1]

In the 1970s, moreover, physicians, health officials, and journalists frequently described infectious diseases as problems that had been, or soon would be, solved by scientific progress and an improved standard of living. They usually defined the most pressing health problems as cancer, heart disease, mental illness, and infant mortality among the poor. In contrast, almost everyone knew the history of success in the struggle against infectious diseases during the past century. Smallpox would soon be the first infectious disease to be eradicated; measles would be the next target.[2] Controlling an infectious disease now seemed to be a routine process of discovering its cause and cure. It was no longer necessary, in the United States at least, to crusade for proper sanitation, housing, and diet in order to reduce the incidence of infectious disease. There was considerable evidence that, from the early nineteenth century until at least the 1930s, changes in diet and living conditions had, in fact, been more important than medical intervention in bringing most infectious diseases under control.[3] As a result of rapid scientific advance since the 1940s, moreover, many diseases that had once been leading causes of death had become brief, if unpleasant, episodes of illness. According to leading medical scientists, this success proved that research in basic science should have higher priority than efforts at care and cure.[4] By the early 1980s infectious disease accounted for "less than 5 percent of the costs estimated for all diseases in the United States."[5]

Sexually transmitted diseases (STDs) were now accorded lower priority than ever before as threats to health. Syphilis and gonorrhea were amenable to drug therapy. Public health professionals now considered treatment a method of controlling venereal disease. The availability of treatment, whether in public health clinics or the offices of private physicians, created opportunities for education as well as cure.[6] Although public health agencies still conducted vigilant surveillance, physicians reported a smaller number of their cases than they did in the past, in
large measure because they perceived venereal disease as less of a threat to the community.[7]

Just a few years later, some people would recall with nostalgia the general attitude toward infectious disease in the late 1970s. In 1986, for instance, a third-year resident, who had entered medical school at the end of the 1970s, lamented that "many of today's residents spent their formative years in medical training during an era when the ability of the scientific community to solve health care problems seemed limitless."[8] The chief of the infectious disease bureau of a state health department recalled that, before the AIDS epidemic began, he had been considering a job with the World Health Organization because his work in the United States had become routine. In 1987 the chief of the infectious disease division at a major medical school, talking about AIDS to first-year students, lamented that he had chosen his specialty because he liked the idea of helping patients to recover quickly from their illnesses.

Increasing Priority of Chronic Degenerative Disease

For more than half a century a growing number of experts had urged that more attention and resources be allocated to chronic degenerative disease. In the 1920s and 1930s a few academic physicians had insisted that chronic disease—then often called "incurable illness"—would become more prevalent as the average length of life increased in the United States. They urged their colleagues to accord higher prestige and priority to long-term and home care, but without much success.[9]

Chronic disease attracted increasing attention in the 1950s. The privately organized Commission on Chronic Illness issued what were later regarded as landmark studies (1956-1959), and some medical specialists began to shift their emphasis from infectious to chronic disease. Among the first to do so were specialists in tuberculosis, who broadened their emphasis to diseases of the respiratory system after streptomycin was introduced as a cure for tuberculosis in the late 1940s.[10] The new specialty of rehabilitation medicine gained widespread publicity as a result of its success during and after World War II and the vigorous support throughout the 1950s that it received from the Eisenhower administration and Congress.[11] By the late 1950s the Hill-Burton Act had been amended to encourage the construction of facilities for long-term care and rehabilitation.

Nevertheless, priority within the health polity continued to be accorded to acute rather than long-term care—either for infectious disease or for acute episodes of chronic illness. There were several reasons for this. Physicians' prestige among both their colleagues and the general public continued to rest on their ability to intervene in crises rather than on their effectiveness as long-term managers of difficult cases. Moreover, most of the money to purchase health services was paid by Blue Cross and commercial insurers on behalf of employed workers and their dependents, whose greatest immediate need was for acute care. Organized labor had little incentive to negotiate for fringe benefits for people too old or too sick to work. Since the inception of group prepayment for medical care in the 1930s, Blue Cross and commercial companies had resisted covering care for chronic illness, most likely because they feared that it would lead to adverse selection of risks and undesirably high premiums. Leading spokesmen for voluntary insurance argued that employee groups, even large groups of employees in the geographic areas covered by "community rated" plans, were too small to carry the large financial risks of chronic disease. Nevertheless, a constituency for long-term care of chronic illness was first created in the 1950s by the campaign for Social Security disability insurance and then in the early 1960s by efforts to create what in 1965 became Medicare.[12]

In the 1960s debates about national policy focused attention on unmet needs for health services in general and especially on care for the chronically ill. Some advocates of health insurance for the elderly under Social Security, enacted as Medicare in 1965, emphasized the need for long-term as well as acute care. Nevertheless, Medicare insured more comprehensively against the costs of acute episodes of illness than for outpatient, nursing home, or home health care.[13] Medicaid, however, which had been conceived mainly as a program of acute care for recipients of categorical public assistance, quickly became a major payor for nursing home and home health care for the elderly. By 1967 there was little controversy about the inception of the Regional Medical Program, which dispensed federal grants to diffuse the results of academic research about the major chronic diseases—heart disease, cancer, and stroke.[14]

Federal leadership in shifting priority to chronic degenerative disease continued during the Nixon administration. In 1971 President Richard M. Nixon declared war on cancer.[15] The following year an amendment to the Social Security Act nationalized the cost of treating end-stage renal disease by covering kidney transplants and dialysis under Medicare.

Individual Responsibility for Health

By the 1970s there was considerable evidence that progress in controlling and preventing disease, especially chronic disease, could be achieved by changing personal behavior—"life-styles" was the euphemism—more effectively and economically than through medical research and practice. Accordingly, health professionals and the media admonished individuals to modify their behavior in order to prevent or delay the onset of heart disease, stroke, and some cancers. To the surprise of many cynics, these pleas were effective.[16] Millions of people stopped smoking, drank less, exercised more, and ate less salt and fewer fatty foods. Preventing chronic disease had become a popular cause and, for some entrepreneurs, a lucrative one. For the first time since the nineteenth century, manufacturers of food products advertised that their products improved health, often with the sanction of medical scientists. Manufacturers of healthier bread, cereals, and even stimulants, in turn, promoted exercise. Some of the new emphasis on individual behavior was fueled by the desire to reduce or shift the cost of health services. But much of it was associated with a spreading interest in fitness, and with the belief that individuals should exert more control over their own bodies.

This promotion of individual responsibility occurred at the same time as increasing emphasis on the rights of patients, particularly their right to be treated with dignity and only after giving informed consent. Some health educators urged individuals to take more responsibility for their own health status in part so that they could demand more timely and efficient attention from the individuals and institutions of the health polity.[17] Critics of this point of view described it as another instance of "blaming the victim," of making individuals responsible for the physiological results of inadequate income and education.[18] The new emphasis on individual responsibility for health strengthened existing oversimplifications of cause and effect in the spread of disease. Individuals could be held responsible for behavior they engaged in before it was known to be dangerous. Moreover, individuals could be artificially abstracted from the social groups that formed their values and influenced their behavior.

Reflecting the new emphasis on individual behavior, state and local public health agencies joined campaigns to persuade individuals to reduce smoking and substance abuse. Even vaccination became a matter of individual choice. Public health officials, who in the past had insisted
that children be required by law to be vaccinated, now educated parents to make prudent choices.

Control of environmental pollution and occupational hazards was an important exception to the increasing individualization of public health services. Public officials at the local, state, and federal levels exercised collective responsibility and evoked hostility from industry. Assisted and sometimes provoked by voluntary groups, public health officials called attention to the hazards of lead-based paint, fertilizers, chemical dumps, and atomic wastes. For reasons that may relate to a dichotomy between environmental and personal health services that arose around the turn of the century, the emphasis on collective rights and responsibilities in protecting people from diseases with environmental origins was not translated into other areas of public health practice. Diseases were increasingly categorized as subject either to individual or to collective action.

The Unfulfilled Promise of Science

Another reason for urging individuals to take more responsibility for their own health was widespread frustration, articulated particularly by some members of Congress, at the inability of medical science to keep some of its implied promises of the 1940s and 1950s. The great advances against infectious disease of the 1940s, especially the development of effective antibiotic drugs, had been widely publicized as the beginning of a permanent revolution in medicine. During the 1950s the budget of the National Institutes of Health and the expenditures of voluntary associations that sponsored research grew faster than ever before. Members of Congress, philanthropists, the press, and the general public expected that the causes of and cures for chronic diseases would soon be found because of research on basic biological processes.[19] But medical scientists proved to be better at basic research and at devising new technologies for diagnosis and for keeping very sick patients alive than at finding cures. This technology was disseminated rapidly because third-party payors eagerly reimbursed hospitals for purchasing it—which they did at the request of growing numbers of physicians in each medical specialty. The Regional Medical Program, as it was originally conceived, proved to be redundant. But the vast expenditure for technology had little impact either on mortality from particular diseases or on the growing morbidity from chronic illness. In the absence of new miracle drugs, the responsibility of individuals to reduce their risks was accorded even greater importance.

In addition, by the 1970s scientists sometimes seemed to be losing their privileged status within the health polity. Their success in the struggle against disease was no longer taken for granted, and they were frequently admonished to propose ways to solve practical problems and to be more accessible and forthcoming to representatives of the press and television. Moreover, scientists were no longer assumed to be virtuous as well as effective. Some years earlier, what was called the "bioethics movement," largely a coalition of philosophers, theologians, and some physicians, had begun a strenuous critique of medical scientists, especially clinical investigators. To many participants in this movement, protecting patients and research subjects from harm was the highest ethical goal. For some ethicists, autonomy took precedence over beneficence.[20] Their concern with autonomy was embodied in federal regulations for the protection of human subjects in research. Similarly, the venerable antivivisectionist controversy was reactivated by a new animal rights movement. In part as a response to external criticism of science, but also because of general economic problems, research priorities and budgets were scrutinized more carefully than ever before by federal officials and the Congress.

For a generation the resources allocated to the health polity grew because everyone assumed that the nation's health would improve if more money was spent for research, hospitals, physicians' services, and educating health professionals. Public subsidies helped to create an increasing supply of hospitals, professionals, and research facilities. Blue Cross/Blue Shield and commercial insurers, using the premiums paid by employers and employees, stimulated demand for care. After 1965, when Medicare and Medicaid were established, the federal government became the largest third-party payor. In the early and mid-1970s there was broad agreement that access to basic medical care for the poor and the elderly was a diminishing problem,[21] and that the next problems to solve were improving the quality of care and expanding the coverage of insurance and public-entitlement programs. But the consensus that had unified the health polity since World War II was now eroding.[22]

From Comprehensive Services to Cost Control

The broad coalition that had dominated the health polity since the 1930s broke apart in the 1970s. The labor movement, weakened by declining membership, ceased to lobby forcefully on behalf of broad social policy. Executives of large corporations, who for thirty years had provided their employees with generous health insurance benefits, found it increasingly difficult in the economic conditions of the 1970s to pay the cost of health care by raising the price of goods and services. The comprehensive first-dollar insurance coverage available to workers in the largest industries began to be described as a luxury that must be sacrificed in order to avoid increasing unemployment. Community rating, which had been endorsed by labor and business leaders in the 1940s as a way to increase equitable access to comprehensive health care, had been sacrificed to "experience rating," which shifted costs to the groups that could least afford to pay them. Furthermore, generous health insurance benefits seemed to encourage unnecessary surgery and excessive hospital stays. Evidence that numerous hospitals and physicians inflated their charges because third parties would pay them provided business, labor, and government leaders with additional justification for cost-containment measures. As tax revenues declined in the recessions of the 1970s, the federal government and the states changed the emphasis of health policy from providing access to more comprehensive services to cost control.

Advocates of cost control also argued that generous subsidies and reimbursement policies had created an oversupply of physicians and hospitals. Many of them wanted to reallocate the resources of the health sector to take account of the increasing incidence and prevalence of chronic illness. They contrasted excess capacity to provide acute care with the lack of facilities for long-term care.

The Crisis of Authority

The new emphasis on cost control and reallocating resources was evidence of a profound change in the distribution of authority within the health polity. Since World War II authority in health affairs, as in social policy generally, had been increasingly centralized in the federal government, although considerable power remained with state governments and with employers. Centralized authority was frequently displayed in programs that required local initiative to meet federal standards; for example, the hospital construction program created by the Hill-Burton Act of 1946 and the community mental health and neighborhood health centers of the 1960s. In 1978 a political scientist, surveying health policy since the mid-1960s, wrote that "in no other area of social policy has the federal government been so flexible, responsive, and innovative."[23]

But the federal role in social policy generally, and especially in health,
narrowed after 1978. National health insurance, which many people had believed to be imminent a few years earlier, was politically moribund by the late 1970s.[24] In Congress and federal agencies, active discussion took place about containing health care costs through tax policy and new reimbursement strategies, which would encourage competition and offer incentives for physicians to use fewer resources.[25] Prepaid group practices, which for half a century had been the favorite strategy of liberals for increasing access to medical care, were renamed Health Maintenance Organizations (HMOs) by the federal government and used as a mechanism to control costs.[26] Diagnosis Related Groups (DRGs), a mechanism to control hospital costs by setting prices based on the intensity of resource utilization, were devised by researchers at Yale University in the mid-1970s and were initially implemented in New Jersey.[27]
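
The logic of a DRG payment can be stated as simple arithmetic. The sketch below, with invented diagnosis groups, rates, and case costs (none drawn from the actual New Jersey or Medicare schedules), contrasts cost-based reimbursement with a fixed price per case:

```python
# Hypothetical sketch of DRG-style prospective payment versus
# cost-based reimbursement. All rates and cases are invented for
# illustration; they are not actual DRG weights or prices.

# Fixed payment per case, set in advance by diagnosis group.
DRG_RATES = {
    "pneumonia": 4_200,       # dollars per case
    "hip_replacement": 9_800,
}

cases = [
    {"drg": "pneumonia", "actual_cost": 5_100},         # hospital absorbs overrun
    {"drg": "pneumonia", "actual_cost": 3_600},         # hospital keeps surplus
    {"drg": "hip_replacement", "actual_cost": 10_400},  # hospital absorbs overrun
]

# Under cost-based reimbursement, the payor pays whatever was spent.
cost_based_total = sum(c["actual_cost"] for c in cases)

# Under a DRG system, the payor pays the fixed price per case,
# regardless of the resources actually consumed.
drg_total = sum(DRG_RATES[c["drg"]] for c in cases)

print(f"cost-based reimbursement: ${cost_based_total:,}")  # $19,100
print(f"DRG prospective payment:  ${drg_total:,}")          # $18,200
# The fixed price gives hospitals an incentive to treat each case
# with fewer resources: they keep savings and absorb overruns.
```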

At the same time many state health departments or rate-setting commissions became, for the first time, active managers of the health industry. The goal of state and regional health planning changed from promoting rational growth to encouraging shrinkage or consolidation. Regulation—a word once associated mainly with the states' responsibility to implement health codes and to license professionals—was now used more often to refer to setting reimbursement rates and issuing certificates of need for construction and new equipment. Other states, however, chose to withdraw from active regulation of health affairs. Their leaders adopted the rhetoric of deregulation and competition that was heard with increasing frequency in discussions of national economic and social policy.

Business leaders began to claim new authority in the health polity. They perceived the cost of health benefits as an impediment to competition with foreign firms and a stimulus to dangerously high rates of inflation. In the United States, unlike other industrial nations, health insurance was linked to employment and was, therefore, a cost of production. A growing number of employers were choosing self-insurance in order to reduce costs. Many of them took advantage of a 1978 amendment to the Internal Revenue Code that permitted individual employees to select from a menu of benefits that often included less generous health insurance.[28] Responding to pressure from employers, Blue Cross and commercial insurance companies began to write policies with larger deductibles and copayments, to scrutinize claims more rigorously, to require second opinions and screening before hospital admissions, and to reduce beneficiaries' freedom to choose among physicians.

The health polity was experiencing a crisis of authority. Assumptions
about the balance of power in the health polity that had been accepted since the New Deal (though often grudgingly) were now challenged. In health affairs, as in social policy generally, increasing centralization was no longer regarded as inevitable. Many members of Congress and federal officials were eager to devolve authority over health affairs to the states and the private sector; business leaders were taking more initiative. Devolution would soon be accelerated by the Reagan administration. The health polity in 1981, when AIDS was first recognized, was more fragmented than it had been at any time since the 1930s.

The Health Polity Responds to AIDS

The Modern Response to Epidemic Disease

The health polity had, however, devised a set of responses to epidemics during the twentieth century, and these responses had been increasingly effective in controlling infectious disease.[29] At the beginning of the AIDS epidemic there seemed no reason to doubt that the problems posed by this new infection could be solved promptly and efficiently by applying the well-tested methods of surveillance, research, prevention, and treatment in a coordinated effort involving the federal government and the states. These methods had recently been used, with comforting success, to control Legionnaires' disease and toxic-shock syndrome. Hardly anybody had noticed that during the preparations in 1976 for an epidemic of influenza that did not occur, some senior administration officials raised strong ideological objections to direct federal intervention.[30] Nevertheless, in 1981, despite the crisis of authority in the health polity, AIDS did not seem to be an unusual challenge.

Widely shared assumptions about recent history generated confidence in the standard public health responses to epidemics. For a generation scientists had rapidly identified new infectious agents and devised tests for their presence, vaccines against them, and drugs to treat their victims. Most physicians and hospitals dutifully reported most cases of life-threatening disease, and public health officials held these reports in strict confidence. Although mass screening programs were sometimes controversial and were only partially effective in identifying new cases, there were widely accepted techniques for managing them. Since the early 1970s, moreover, it seemed possible to prevent disease through education and advertising. Finally, despite the problems of high costs
and fragmented authority, more Americans than ever before had access to medical care as a result of insurance or public subsidy.

Seven years after AIDS was first diagnosed, many public health officials remain confident that the syndrome will eventually be controlled by the conventional techniques for responding to epidemic disease. In support of this position they note that there have been no documented breaches of confidentiality in reporting or screening and that scientists have identified the infectious agent, devised a test for antibodies to it, reported progress in the search for a vaccine, and, in AZT (azidothymidine, trade name Retrovir), devised the first effective pharmacological intervention. Furthermore, these officials observe that many gay men have modified their sexual practices in response to education; that no one is known to have been denied treatment for AIDS because of inability to pay for it; and that in several major cities innovative programs of care are being offered to AIDS patients.

Other observers dispute this optimism, claiming that the conventional epidemic-controlling methods are inadequate to address AIDS.[31] They point to events or policies that appear to proceed from hostility or insensitivity to gay men and intravenous drug users. Many gay men, for instance, fear their privacy is threatened by reporting and screening policies that offer confidentiality, which could be breached, instead of guaranteeing anonymity. The Reagan administration was, until 1986, reluctant to request funds from Congress for research on and services for dealing with the epidemic; President Reagan did not even mention AIDS publicly until January 1986. Despite education in "safer" sex, much of it financed by public funds, the percentage of gay men who have positive antibodies to the human immunodeficiency virus (HIV) continues to increase. Also, public agencies in many cities and states have been reluctant to reach out to drug users in illegal "shooting galleries" or to provide them with sterile needles. Many third parties are reluctant to pay the additional costs of treating patients with AIDS. Although programs to create separate hospital units and community facilities for AIDS patients have been presented by their sponsors as positive steps, some critics view them as the beginning of tacit quarantine measures against modern-day lepers.

Without denying the persistence of discrimination against gays, and the blacks and Hispanics who are disproportionately represented among intravenous drug users, I believe the conventional responses to epidemics are now inadequate mainly because of the crisis of authority in the
health polity. A polity that is focused increasingly on chronic degenerative disease, that embraces cost control as the chief goal of health policy, and in which central authority has been diminishing cannot forcefully address this epidemic. In the following paragraphs I describe how this crisis of authority has influenced the actions of the health polity in surveillance, research, paying treatment costs, and organizing services for persons with AIDS.

Surveillance

Disagreements over surveillance policy have highlighted problems of cost and fragmented authority. Until 1987 the definition of a reportable case of AIDS used by the Centers for Disease Control (CDC) excluded many cases of illness related to HIV infection. Because most states have adopted the CDC's definition, the incidence and prevalence of AIDS and AIDS-related complex (ARC) can only be conjectured. The absence of accurate information has impeded study of the onset and duration, as well as the cost, of the AIDS continuum. Surveillance policy, on the surface a straightforward problem in public health practice, has in fact caused the severity and the cost of the epidemic to be understated.

Moreover, legal standards for the confidentiality of case reports vary among the states. By the end of 1987, nine states routinely collected the names of HIV-positive people. Two of these states, Colorado and Idaho, were using the names to trace sexual contacts.[32] Furthermore, because most states classify AIDS as a communicable rather than a sexually transmitted disease, case reports are not protected as strongly by statute as reports of sexually transmitted diseases are. They can, for example, be subpoenaed, although there is no evidence that they have been.

The lack of uniformity among the states in standards of confidentiality is an old problem made worse by the absence of national leadership in health affairs. On the one hand, surveillance policy has always been the responsibility of state governments, except for Indians, immigrants, and the military. On the other, standards of confidentiality affect civil liberties, an area of policy over which all three branches of the federal government had, until recently, been exerting increasing authority for a generation.[33]

The lack of consensus about standards to protect confidentiality increases the fear of many gay men that they will be stigmatized and persecuted. This fear, already intense, grew after the publication of a survey commissioned by the Los Angeles Times, according to which "most Americans favor some sort of legal discrimination against homosexuals as a result of AIDS."[34] Fear became rage when columnist William F. Buckley, Jr., wrote in the New York Times that "everyone detected with AIDS should be tattooed on the upper forearm, to protect common needle users, and on the buttocks, to prevent the victimization of other homosexuals."[35] The fear is so intense it embraces the entire range of public policy: the irrational—Lyndon LaRouche's proposal to screen every American for HIV antibodies; the dubiously effective—bills in several states to quarantine AIDS patients; the debatable—proposals to identify children or school employees with AIDS to school officials; and the traditional—the implementation of such STD-control techniques as the tracing of contacts.

Very little has been written or said to date about the effect AIDS has on the stigmatization of intravenous drug users. Unlike homosexuals, they do not organize to assert their rights, and they do not receive much public sympathy when they claim to do no harm by their private behavior. Drug users are generally stereotyped as pariahs who alternate between preying on innocent victims and receiving treatment and support at public expense. Many of them, furthermore, are also stigmatized because they are black or Hispanic. Addicts who die of AIDS may use fewer public funds than those who survive to receive treatment for their drug problems. Although several landmark civil liberties cases in the past have involved addicts, their rights—unlike those of gay men—have not yet been a subject of litigation during the AIDS epidemic.

Research

The history of research on AIDS was strongly influenced by the disinclination of the Reagan administration to assert central authority in the health polity.[36] In 1985 the Office of Technology Assessment, a congressional agency, reported that "increases in funding specifically for AIDS have come at the initiative of Congress, not the Administration." Moreover, "PHS [Public Health Service] agencies have had difficulties in planning their AIDS related activities because of uncertainties over budget and personnel allocations."[37] In January 1986 President Reagan called AIDS "one of the highest public health priorities" but at the same time proposed to reduce spending for AIDS research by considerably more than the amount mandated by the Gramm-Rudman-Hollings Act.[38] In 1986 and 1987 Congress continued to appropriate more funds for research than the administration requested—in the last days of the 1987 session, for instance, increasing the AIDS budget of the National Institutes of Health (NIH) to $448 million from $253 million the previous year.[39] Throughout the Reagan administration, budget officials had deliberately understated
the NIH budget request, knowing Congress would add to it. But the congressional increases for AIDS were greater than for other NIH programs.

As a result, at least in part, of the administration's reluctance to fund AIDS research during the first several years of the epidemic, voluntary contributions and state appropriations for laboratory and clinical investigation have been more important than in other recent epidemics. Medical research foundations that had been established in New York City and, after Rock Hudson's death from AIDS, in Los Angeles merged to form the American Foundation for AIDS Research. In several cities, community-based organizations raised funds for research within and outside gay communities using techniques similar to those invented many years earlier by the National Tuberculosis Association and the National Foundation for Infantile Paralysis. The states of California and New York appropriated funds for research. These appropriations may be the first significant state expenditures for research related to a particular disease—except, perhaps, mental illness—since the early years of this century.

Similarly, state and local health departments, frequently in collaboration with community-based organizations, took the initiative in programs to prevent AIDS through public education. If the epidemic had occurred in the 1960s or even the early 1970s, the federal government might have established a program of grants for community action against AIDS. Consistent with the social policy of those years, such a program would have included guidelines for citizen participation. In the 1980s, in the absence of federal initiative, the leaders of community-based organizations in each major city combined goals and strategies from the gay rights, handicapped rights, and antipoverty movements of the recent past. Because they do not receive federal funds, some community groups have been free to move beyond educational programs and mobilize political action on behalf of patients with AIDS.[40] However, without a national program, community-based organizations are unlikely to emerge or to be influential in cities with small, politically weak gay populations.

Cost of Treatment

Because the epidemic began when government and private payors were restraining growth in the health sector, responsibility for the costs of treating patients with AIDS became a controversial issue. Many groups within the health polity had incentives to publicize and even to exaggerate high estimates of the costs of treating patients with AIDS. Prominent hospital managers were uncomfortable
with the new price-based prospective reimbursement and under pressure to offer discounts to Health Maintenance and Preferred Provider organizations. They encouraged speculation by journalists that the cost of treating patients with AIDS was 40 to 100 percent higher per day than the average for patients in their institutions. Many insurance executives embraced the highest estimates, perhaps because they wanted the states or the federal government to assume the burden of payment. A few insurance companies tried to obtain permission from state regulatory agencies to deny initial coverage to persons at risk of AIDS.[41] Officials of the federal Health Care Financing Administration (HCFA) avoided discussing the cost of treating AIDS. Both the administration and Congress have ignored suggestions that the two-year waiting period for Medicare eligibility be waived for persons with AIDS who qualify for Social Security Disability Insurance. When persons with AIDS qualify financially for the less generous disability provisions of the Supplemental Security Income program, they are eligible to receive Medicaid: The states have become the payors of last resort.

The actual costs of treating patients with AIDS are difficult to estimate because responses to the initial research on the subject are heavily political. The authors of the first systematic study, conducted by the Centers for Disease Control in 1985, estimated that the cost of hospital care between diagnosis and death averaged $147,000.[42] They derived this figure by using charges as a proxy for cost and multiplying them by an average length of stay, which was unusually long because it was disproportionately weighted with data from New York City municipal hospitals, which treated large numbers of intravenous drug users with multiple secondary infections and few home or community alternatives to hospitalization. The CDC study then compared hospital expenditures for AIDS with those for lung cancer and chronic obstructive pulmonary disease and found they were "similar," despite the obvious differences in the course, duration, and incidence of these diseases. Whatever the authors intended, the exaggerated estimates alarmed insurers (now prohibited by insurance regulators in several states from denying coverage to victims of AIDS), public officials, hospital executives, and the media. Other studies conducted in San Francisco alarmed some hospital executives because their estimates of the cost of hospitalization between diagnosis and death—$27,857—were so low that they undercut their demand for higher reimbursement for AIDS patients.[43]
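
The arithmetic behind such estimates is simple, and its sensitivity to assumptions about length of stay can be shown in a rough sketch. All inputs below are invented for illustration; they are not the figures actually used by the CDC or the San Francisco researchers:

```python
# Rough sketch of how the early lifetime-cost estimates were built:
# (average daily charge) x (days in hospital per admission)
# x (admissions between diagnosis and death). All numbers are
# hypothetical, chosen only to show the method's sensitivity to
# length-of-stay assumptions; they are not the studies' inputs.

def lifetime_hospital_cost(daily_charge, days_per_admission, admissions):
    """Charges serve as a proxy for cost, as in the 1985 CDC study."""
    return daily_charge * days_per_admission * admissions

# A caseload weighted toward long stays (e.g., many patients with
# multiple secondary infections and no alternatives to hospitalization):
high = lifetime_hospital_cost(daily_charge=800, days_per_admission=60, admissions=3)

# A caseload with short stays and strong outpatient and home care:
low = lifetime_hospital_cost(daily_charge=800, days_per_admission=12, admissions=3)

print(f"long-stay assumption:  ${high:,}")   # $144,000
print(f"short-stay assumption: ${low:,}")    # $28,800
# Identical daily charges; a fivefold difference in assumed days per
# admission produces a fivefold difference in the estimate.
```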

By early 1988 AIDS was usually presented as being about as expensive as other fatal illnesses. The major policy problem was now generally perceived as meeting the increasing burden of payment in the public sector—on Medicaid, state indigent-care programs, and public hospitals. Moreover, as the proportion of intravenous drug users with AIDS and HIV infection grew, concern about public burden was augmented by the traditional complaint of liberals and minority group leaders that programs for the poor are poor programs.[44]

Patient Services

In no previous epidemic have variations in lengths of hospital stay and in how services for patients are organized in different cities been so widely discussed. Most of the variation in the utilization of services seems to be a result of the availability of nonhospital services—particularly ambulatory medical care, skilled-nursing facilities, housing, hospices, and home health care. A few city and state health departments have tried to coordinate services. The San Francisco health department, allied with voluntary associations in the gay community, organized a network of inpatient, outpatient, and support services.[45] In order to achieve similar goals in a different political environment—one that is larger and more competitive among institutions and that has no tradition of coordination by consensus—the New York State Health Department created a program of "managed care." In this program, state officials are selecting hospitals that agree to meet specified criteria for managing a continuum of services within and outside the hospital.[46] Each hospital receives a higher reimbursement rate based on its proposal. By early 1988 the state had designated ten hospitals as care-managing centers. Further, every hospital in the state has received a higher rate of reimbursement for each AIDS patient treated since 1984.[47]

The New York State Health Department requires that its AIDS Centers, like the first such center, San Francisco General Hospital, dedicate beds for AIDS patients. The rationale for the requirement, according to a principal author of the New York program, is that patients "will be treated better" if they are clustered. He defined "treated better" to mean that, as in San Francisco, AIDS patients would be served by nursing and social service staff who had volunteered for their roles, and that there would be greater attention to continuity of care. In addition, the dedicated beds in San Francisco, combined with case-management services, seemed to be related to shorter lengths of stay and lower utilization of intensive care.

Many hospital administrators and physicians in New York were enraged by the requirement to dedicate beds. They insisted that segregated patients and their hospitals would be stigmatized, and that dedicated
beds created new burdens for nurses who were already overworked and in short supply. Perhaps most important in their view, the Health Department was intruding on the domain of physicians and hospital staff. In the final regulations, a compromise was arranged that, Health Department officials hoped, would lead most of the designated centers to dedicate beds. In fact, many teaching hospitals in New York already clustered their AIDS patients for convenience in managing them. This dispute, like so many others during the epidemic, was less about AIDS than about the changing distribution of authority in the health polity.

In August 1986 the Robert Wood Johnson Foundation made the first awards in a $17.2 million program to encourage case management for AIDS patients. Funds were granted to applicants from ten of the twenty-one Standard Metropolitan Statistical Areas with the most cases of AIDS. Announcing the program in January 1986, a foundation official described the federal government as if it were another philanthropic organization: "If an anticipated federal-grants initiative for similar purposes materializes, the Foundation and the Department of Health and Human Services are planning to coordinate the two programs as closely as possible."[48] In 1985 Congress had appropriated $16 million for AIDS Health Services Projects in the four cities with the greatest number of cases. But the Reagan administration initially sequestered these funds. For the first time since the 1950s a foundation program served as a surrogate for, rather than as an example to, the federal government. By late 1987 the federal Health Resources and Services Administration had funded service projects in most of the cities in the Robert Wood Johnson Foundation's program.

The absence of national policy to organize and finance treatment for patients with AIDS may be appreciated by state and local officials who prefer to avoid responsibility for treating these patients. After a generation in which access to health services was gradually improved as a result of federal programs, geographic inequities may be increasing more rapidly for persons with AIDS than for victims of other diseases. In other words, AIDS patients in states or cities with relatively unresponsive health departments and no Robert Wood Johnson Foundation money may receive considerably less or lower-quality care than patients in other jurisdictions. The programs funded by the Robert Wood Johnson Foundation may be emulated elsewhere because, according to evidence from San Francisco, coordination reduces the length of hospital stays and the utilization of intensive care. But earlier discharge from hospitals can also be combined with inadequate outpatient, nursing home,
and home care. In many places, that is, superficial or cynical emulation of the policies of San Francisco or New York could produce results similar to what happened when mental patients were deinstitutionalized.

Historical precedents abound for superficial or cynical distortion of strategies to improve health and social welfare in the United States. Since the 1930s officials of many state and local agencies have accepted the policies urged by experts with national visibility only under court order or when adopting them was a precondition for receiving federal funds. The possibility that these officials will resist pleas and even incentives to coordinate services for AIDS patients is enhanced by the unwillingness of the Reagan administration to insist on particular actions by state governments and by the recent retreat of the federal courts from mandating states to improve the care of particular classes of patients.

The public officials and staff members of voluntary associations who coordinate treatment for patients with AIDS have benefited from the gradual reorganization of services to emphasize chronic illness. Like tuberculosis, the most lethal disease of the nineteenth century, AIDS is an infectious disease that requires services outside the hospital. Reimbursement incentives offered by Medicare and private insurance since 1981 have stimulated a substantial increase in the number of home health care agencies and skilled-nursing facilities. Techniques for case management have been elaborated and tested in the past few years under waivers from HCFA and by Blue Cross plans and commercial insurance companies. Moreover, recent interest in substituting palliative for heroic measures in treating patients whose illnesses are terminal has increased reimbursement for, and thus the availability of, hospice services.

Furthermore, AIDS, like tuberculosis a century ago, must increasingly be treated as a chronic illness rather than as a series of discrete acute episodes. Some persons with AIDS are living longer as a result of the introduction of AZT. Other chemotherapeutic measures will no doubt create increasing demand for long-term-care services. AIDS is increasingly a chronic illness among children, requiring a coordinated pattern of services for as-yet-indeterminate lengths of time.

AIDS is, to date, the only disease for which institutions are receiving grants and special reimbursement to coordinate inpatient and out-of-hospital services. The only comparable disease-specific case management is for end-stage renal disease—mainly for the procurement and distribution of organs. It is too soon to know if the interest groups organized around other diseases and conditions—people with brain injuries or multiple handicaps, for example—will demand similar services.

What is certain, however, is that the response of the American health polity to the AIDS epidemic has been shaped by fundamental changes that were occurring simultaneously. The most important of these changes, which I described earlier, were according priority to chronic degenerative disease, emphasizing the responsibility of individuals for their own health, and controlling expenditures for health services. With the narrowed federal role in health care, a crisis of authority was transforming the health polity. The future of the AIDS epidemic will be shaped not only by the number and distribution of cases and by the results of research, but also—and perhaps most important—by how that crisis is resolved. If the polity continues to respond to AIDS as it did until 1986, the epidemic will likely mark yet another instance of the gradual decline in collective responsibility for the human condition in the United States. On the other hand, the resurgence of the political center in the health polity that began in 1986 could transform the country's response to the epidemic.

By early 1987 there were several signs that incremental improvement in access to and coverage for health services was a serious political issue for the first time since the mid-1970s. In 1986 Congress required employers to permit employees to continue their health benefits for eighteen months after termination. More than a dozen states had enacted schemes to pay the health care costs of people who were uninsured or underinsured. In 1987 a bipartisan coalition in Congress expanded the administration's modest proposal to cover the catastrophic costs of illness under Medicare.

This general revitalization of the centrist, or incrementalist, coalition, which had dominated health policy for a generation, quickly influenced policy for paying the costs of AIDS. Congress responded to testimony about the high cost of AZT, the first moderately effective therapy for some AIDS patients, with an appropriation of $30 million to the states to subsidize the costs of the drug. The administration responded promptly and favorably to requests by several states for waivers to pay for noninstitutional services for persons with AIDS that are not usually covered under Medicaid. The Intragovernmental Task Force on AIDS Health Care Delivery, which convened in January 1987, recommended federal loan guarantees for long-term-care facilities (including hospices) for people with AIDS. The Health Care Financing Administration issued a report by the RAND Corporation that predicted that AIDS would place an increasing burden on the Medicaid system. The report was reinforced by an independent study concluding that "public teaching hospitals in states with restrictive Medicaid programs will be most adversely affected" by the burden of AIDS costs. In the fall of 1987 the National Center for Health Services Research and Technology Assessment contracted for the preparation of methods and instruments to measure and project more accurately the national cost of the disease, the cost-effectiveness of alternative ways of organizing services and treating the disease, and the relationship between the costs of AIDS and those of other diseases. In this environment of renewed interest in national policy, researchers who work on AIDS policy have been reporting calls from staff members of leading candidates for the 1988 presidential nominations.[49]

AIDS and the Future of the Health Polity

A Polemical Interpretation of Recent History

I describe next how the American health polity might reconsider its response to AIDS or to any other life-threatening disease. Between the late 1970s and the late 1980s the health polity broke sharply with long-term trends in American social policy. For most of the century there was a gradual shift from assigning responsibility for care of the sick to individuals and families toward collective responsibility and entitlement; individualism was considered a weak basis for social policy in an industrial society. For most of the century authority in the health polity was gradually centralized in national institutions—notably the federal government, large insurance companies, international labor unions, and professional associations; fragmentation was considered inconsistent with a just and efficient society. The centralization of authority in national institutions, however, was never complete in any area of social policy: State and local institutions, both public and private, continued to exert enormous power. A health insurance system based almost entirely on employment and retirement from it created considerable insecurity and inequity. But the trend was clear: Until the late 1970s those who opposed centralization, particularly the ideological right, considered themselves a minority group.

The AIDS epidemic coincided with a concerted effort within the polity to reverse the trends toward centralization in social policy. Authority within the polity was therefore devolving to the states and to private
corporations. The AIDS epidemic provides evidence that this reversal of social policy threatened public security against illness. I summarize that evidence and its implications in my concluding paragraphs.

The Persistence of the Unexpected

AIDS should provide convincing evidence that, despite the achievements of biomedical scientists, epidemics of diseases of mysterious origin and long latency will continue to occur, even in industrial countries. Some of these diseases will be infectious; most will probably be linked in some way to behavior or location or work. Science will continue to comprehend nature incompletely. The individuals and institutions that comprise the health polity should therefore accept the need to study and treat a greater variety of diseases than anyone can now imagine. Pressure to contain costs should be offset by a sense that there are limits to how much health care resources can be reduced in a society concerned about its survival.

The epidemic should also lead to better understanding of some practical implications of the platitude that all diseases are social as well as biological events. In the years before the AIDS epidemic the health polity accorded priority to biological factors in disease because its members were optimistic about the progress of medical science. The social basis of disease was not so much denied, as some critics charged, as it was ignored because of the health polity's enthusiasm about the results of laboratory research. However precisely social factors in disease can be identified, they do not contribute as effectively to diagnosis or therapy as does the study of diseased tissue. The AIDS epidemic, however, makes it difficult to deny that many pathogens only cause disease when people facilitate their transmission. As a result of AIDS there may be increased willingness to speak openly about sexual behavior and to provide more systematic education about it. There is already evidence that, in some schools, teachers are being more explicit about the risks of sexual behavior in response to students' fears about AIDS.[50] The media have been more explicit and accurate in reporting about AIDS than about any disease in the past that was linked to sexual behavior.

The Limits of Individual Responsibility

The epidemic also offers evidence that contradicts the assumption that it is desirable or even possible to substitute individual for collective responsibility for social welfare. For more than a decade it has been fashionable among some politicians and policy intellectuals to assert that, given proper incentives, individuals can provide adequately for their own health and welfare. A plausible extension of this argument is that removing people who test positive for HIV antibodies from insurance pools would, in the short run, save money for other people in those pools. Proponents of individualizing risk do not seem to care that removing such individuals would also prevent those who test positive but do not develop AIDS from subsidizing health care for other people.

Individualizing risk reinforces a shortsighted view of what constitutes rational social policy. Consider a society in which everyone who is considered a poor risk is denied insurance or forced to enroll in a group composed entirely of people with expensive afflictions. In such a society, the premiums for the oldest and sickest people would be prohibitively high, forcing them to seek public assistance or charity. Because most people eventually become very old, very sick, or both, the consequence of creating smaller, more homogeneous risk pools would be widespread pauperization. The political response to such a perverse policy might be broader support for a federally financed program of insurance against the costs of catastrophic illness.

AIDS also challenges the wisdom of offering incentives to apparently healthy young people to choose the least comprehensive health insurance. The beginning of the epidemic coincided with the decision of many employers to offer their employees so-called flexible-benefit plans. Under these plans, employees who considered themselves to be in excellent health could substitute other benefits, or in some instances cash, for the most expensive health insurance. There are no data about how many AIDS patients, most of them in their thirties and forties and with no previous history of serious illness, chose such substitutions.

The epidemic emphasizes the limitations of social policy that links entitlement to health insurance to employment rather than to membership in society and that provides benefits as a result of bargaining rather than entitlement. Since World War II most Americans of working age have obtained health insurance from their employers or their unions. Federal income tax laws encouraged the link between insurance and employment and prohibited firms from discriminating among workers at different levels of pay in awarding benefits. The tax laws cannot, however, remedy disparities in the coverage offered by different firms. Moreover, state governments have been reluctant to mandate coverage, in large measure because, under federal law, mandates encourage firms to shift to unregulated self-insurance; when states have mandated benefits, they have done so mainly in response to pressure from members of new provider groups (e.g., psychologists) who wanted to be reimbursed. As a result, the extent and duration of coverage vary enormously among workers with different employers. AIDS, which at present mainly affects people of working age, including intravenous drug users (many of whom do not work at all), reveals the limits of an insurance system that has not been compelled to offer a set of adequate minimum benefits.

The epidemic has also exposed the fragility of personal-support networks that are frequently promoted as substitutes for services provided, at higher social cost, by insurance, philanthropy, or public policy. People who are at risk of contracting AIDS may be only slightly more isolated than everybody else. Americans increasingly live in small households, or alone; in the future, families and friends may be less frequently available during crises than ever before. Most of us may need sympathetic case management by professionals during our catastrophic illnesses.

The Reassertion of Central Authority

Finally, the AIDS epidemic may demonstrate that the American health polity best serves the public interest when institutions within it struggle to assert central authority rather than accepting fragmentation as the goal as well as the norm of health affairs. The unwillingness of the federal government to exert strong leadership in response to AIDS has been criticized by members of Congress, journalists, and patients since the beginning of the epidemic. In the absence of federal assertiveness, however, the health departments of several cities and states have coordinated the response of the health polity to AIDS. These health departments have tried, in different ways, to counter fragmentation by linking their traditional responsibility for surveillance with their more recent mandate to manage the health system. To the extent that similar linkage of the responsibilities of public health officers occurs elsewhere, it may partially substitute for the abdication of federal leadership and, perhaps, serve as a model for future national administrations.

These lessons about policy and authority could be drawn from the history, through early 1988, of the American health polity's response to AIDS. If they are not, we may recall the 1980s as a time when many Americans became increasingly complacent about the consequences of dread diseases and unwilling to insist that the individuals and institutions of the health polity struggle against them.

Notes

This chapter is an updated version of an article first published in the Milbank Quarterly 64, supplement (1986): 7-33. It is based on published and unpublished sources, interviews, conversation, and observation. I have indicated my obligation to written sources in citations in the text. I have not, however, ascribed particular comments—even quotes—to particular people. Some of my interviews were formal, either on or off the record. On many occasions, however, I benefited from conversations that were not, at the time, regarded by the people I was talking to or by myself as data for an essay in contemporary history and advocacy of social policy. Sometimes the conversations were privileged as a result of my participation in research bearing on the making of policy. I list here, alphabetically, the names of some of the people who have, in conversations, helped to shape my views about the health polity's response to the AIDS epidemic: Dennis Altman, Drew Altman, Stephen Anderman, Peter Arno, David Axelrod, Ronald Bayer, Joseph Blount, Allan M. Brandt, Cyril Brosnan, Susan Brown, Brent Cassens, Ward Cates, James Chin, Mary Cline, Peter Drottman, Ernest Drucker, Reuben Dworsky, Ann Hardy, Russell Havlack, Brian Hendricks, Robert Hummel, Mathilde Krim, Sheldon Landesman, Philip R. Lee, Richard Needle, Alvin Novick, Gerald M. Oppenheimer, Mel Rosen, Charles E. Rosenberg, Barbara G. Rosenkrantz, William Sabella, Stephen Schultz, Ann A. Scitovsky, Roy Steigbigel, and David P. Willis.

1. Daniel M. Fox, "Health Policy and Changing Epidemiology in the United States: Chronic Disease in the Twentieth Century," in Is This the Way We Want to Die? ed. Russell Maulitz (New Brunswick, N.J.: Rutgers University Press, forthcoming).

2. Louise B. Russell, Is Prevention Better Than Cure? (Washington, D.C.: Brookings Institution, 1986).

3. Thomas McKeown, The Modern Rise of Population (New York: Academic Press, 1976).

4. Lewis Thomas, The Lives of a Cell: Notes of a Biology Watcher (New York: Viking, 1974).

5. Dorothy P. Rice, T. A. Hodgson, and A. W. Kopstein, "The Economic Cost of Illness: A Replication and Update," Health Care Financing Review 7 (1985): 61-80.

6. J. M. Last et al., eds., Maxcy-Rosenau Public Health and Preventive Medicine, 12th ed. (Norwalk, Conn.: Appleton-Century-Crofts, 1986).

7. R. L. Cleeve et al., "Physicians' Attitudes toward Venereal Disease Reporting," Journal of the American Medical Association 202 (1967): 941-946.

8. R. M. Wachter, "The Impact of the Acquired Immunodeficiency Syndrome on Medical Residency Training," New England Journal of Medicine 314 (1986): 177-180.

9. Fox, "Health Policy and Changing Epidemiology," in Is This the Way We Want to Die? ed. Maulitz. [BACK]

10. W. Bruce Fye, "The Literature of Internal Medicine," in Grand Rounds , ed. Diana Long and Russell Maulitz (Philadelphia: University of Pennsylvania Press, 1988). [BACK]

11. Edward Berkowitz, "The Federal Government and the Emergence of Rehabilitation Medicine," Historian 43 (1981): 24-33. [BACK]

12. Edward Berkowitz, Disabled Policy (New York: Cambridge University Press, 1987); Sheri I. David, With Dignity: The Search for Medicare and Medicaid (Westport, Conn.: Greenwood Press, 1985).

13. Theodore R. Marmor, The Politics of Medicare (Chicago: Aldine, 1973).

14. Daniel M. Fox, Health Policies, Health Politics: The British and American Experience (Princeton: Princeton University Press, 1986).

15. Richard A. Rettig, Cancer Crusade: The Story of the National Cancer Act of 1971 (Princeton: Princeton University Press, 1977).

16. John H. Knowles, ed., Doing Better and Feeling Worse: Health in the United States (New York: W. W. Norton, 1977).

17. Lowell S. Levin, A. H. Katz, and E. Holst, Self-Care: Lay Initiatives in Health (New York: Prodist, 1976).

18. Robert Crawford, "You Are Dangerous to Your Health: The Ideology and Politics of Victim Blaming," International Journal of Health Services 7 (1977): 663-680.

19. Stephen P. Strickland, Politics, Science and Dread Disease: A Short History of United States Medical Research Policy (Cambridge: Harvard University Press, 1972).

20. Edmund D. Pellegrino, "The Reconciliation of Technology and Humanism: A Flexnerian Task 75 Years Later," in For the Good of the Patient: The Restoration of Beneficence in Medical Ethics, ed. Edmund D. Pellegrino and David C. Thomasma (New York: Oxford University Press, 1988).

21. Ronald Anderson and L. A. Aday, Health Care in the United States: Equitable for Whom? (Beverly Hills, Calif.: Sage, 1977).

22. Daniel M. Fox, "The New Discontinuity in Health Policy," in America in Theory: Humanists Look at Public Life, ed. D. Donoghue, L. Berlowitz, and L. Menand (New York: Oxford University Press, 1988).

23. Lawrence D. Brown, "The Formulation of Federal Health Care Policy," Bulletin of the New York Academy of Medicine 54 (1978): 45-58.

24. Daniel M. Fox, "Chances for Comprehensive NHI Are Slim in the U.S.," Hospitals 52 (1978): 77-80.

25. Jack A. Meyer, ed., Market Reforms in Health Care: Current Issues, New Directions, Strategic Decisions (Washington, D.C.: American Enterprise Institute, 1983).

26. Lawrence D. Brown, Politics and Health Care: HMOs as Federal Policy (Washington, D.C.: Brookings Institution, 1983).

27. John D. Thompson, "Epidemiology and Health Services Administration: Future Relationships in Practice and Education," Milbank Memorial Fund Quarterly 56 (1978): 253-273.

28. Daniel M. Fox and Daniel C. Schaffer, "Tax Policy as Social Policy: Cafeteria Plans, 1978-85," Journal of Health Politics, Policy and Law 12 (1987): 609-664.

29. Harry F. Dowling, Fighting Infection: Conquests of the Twentieth Century (Cambridge: Harvard University Press, 1977).

30. Arthur M. Silverstein, Pure Politics and Impure Science: The Swine Flu Affair (Baltimore: Johns Hopkins University Press, 1981).

31. Dennis Altman, AIDS in the Mind of America (Garden City, N.Y.: Anchor/Doubleday, 1986); Randy Shilts, And the Band Played On: Politics, People, and the AIDS Epidemic (New York: St. Martin's Press, 1987).

32. Intergovernmental Health Policy Project, AIDS: A Public Health Challenge: State Issues, Policies and Programs (Washington, D.C.: U.S. Department of Health and Human Services, 1987).

33. Lawrence Gostin, W. J. Curran, and M. Clark, "The Case against Compulsory Case Finding in Controlling AIDS . . . ," American Journal of Law and Medicine 12 (1987): 7-53.

34. E. R. Shipp, "Physical Suffering Is Not the Only Pain that AIDS Can Inflict," New York Times, 17 February 1986.

35. William F. Buckley, Jr., "Crucial Step in Combating the AIDS Epidemic: Identify All the Carriers," New York Times, 18 March 1986.

36. Peter S. Arno and Philip R. Lee, "The Federal Response to the AIDS Epidemic," in AIDS: Public Policy Dimensions, ed. J. Griggs (New York: United Hospital Fund, 1987).

37. U.S. Congress, Office of Technology Assessment, Review of the Public Health Service Response to AIDS (Washington, D.C.: OTA, 1985).

38. C. Norman, "Congress Likely to Halt Shrinkage in AIDS Funds," Science 231 (1986): 1364-1365; "AIDS Research Funding," Blue Sheet: Health Policy and Biomedical Research News of the Week, 26 February 1986.

39. W.B., "AIDS Funds Increased; Helms Amendment Blunted," Science 239 (1988): 140.

40. Richard F. Needle et al., "The Evolving Role of Health Education in the AIDS Epidemic: The Experience of Nine High-Incidence Cities" (Report prepared for the Centers for Disease Control, 1986).

41. Randy Shilts, "Insurance Denied? Industry May Screen for AIDS Virus," Village Voice, 3 September 1985.

42. AIDS cost studies are referenced and summarized in Daniel M. Fox and Emily Thomas, "AIDS Cost Analysis and Social Policy," Law, Medicine, and Health Care 15 (1987-1988): 186-211.

43. Ibid.

44. Ibid.

45. Peter S. Arno and R. G. Hughes, "Local Policy Response to the AIDS Epidemic" (Unpublished paper, 1985); Peter S. Arno, "The Non-profit Sector's Response to the AIDS Epidemic: Community-based Services in San Francisco," American Journal of Public Health 76 (1986): 1325-1330.

46. State of New York, Department of Health, Request for Applications for Designation of AIDS Centers (Albany, 24 March 1986).

47. Susan Dentzer, "Why AIDS Won't Bankrupt U.S.," U.S. News and World Report, 18 January 1988, 20-22.

48. Robert Wood Johnson Foundation, press release, "AIDS Health Services Program," January 1986.

49. Fox and Thomas, "AIDS Cost Analysis."

50. Sara Rimer, "High School Course Is Shattering Myths about AIDS," New York Times, 5 March 1986.

