PART FOUR—
COMING TO TERMS WITH CHANGE
Seventeen—
Complexities of Conservatism:
How Conservatives Understand the World
Rebecca Klatch
American society has undergone enormous social and political changes in the past three decades. The Civil Rights Movement, anti-war protest, and the student and feminist movements not only challenged the political realm but also provoked re-examination of cultural and social relations. Issues spawned by social movements of the 1960s instigated changed relations between white people and people of color, between women and men, and between children and parents. Old ways of doing things no longer seemed to work as traditional roles and mores came into question.
Yet just as suddenly, the mid-1970s to 1980s was marked by a shift in mood and conservative protest. In fact, the rise of the New Right must be seen in the context of this enormous social upheaval, as arising amidst the breakdown of traditions and the challenge to existing institutions.
All too often discussion of the New Right focuses solely on organizations or on the conservative leadership in Washington, D.C. Yet how are we to understand the grassroots embrace of conservatism? What do people at the local level have to say about their own involvement in these countermovements to the 1960s? When we take seriously the people involved in various aspects of the Right and listen to their concerns, we find people genuinely concerned about the direction of American society, expressing their fears and hopes about the future of the nation.[1]
Although today's conservative movement is a reaction to social, political, and economic changes of the 1960s, the New Right is historically based and continuous with past right-wing movements in the United States.[2] It is also essential to realize that while the New Right speaks a common language in responding to these recent changes in American society, beneath this seeming unity rests a multiplicity of meaning.
The phrase "the New Right" seems to imply a unified entity, homogeneous in its beliefs and goals. Yet the New Right is not a monolithic group. It is not one cohesive movement that shares a single set of beliefs and values. Rather, the New Right is composed of a range of groups and individuals who hold diverse—and even opposing—views about human nature, the role of government, and the political world. There is, in fact, a fundamental division that I characterize as "the social conservative" and "the laissez-faire conservative" world views. These two ideologies are rooted in different theoretical traditions and are constructed around divergent values.
The Worldviews of Social and Laissez-Faire Conservatives
Every ideology has a central lens through which the world is viewed. This lens serves to filter and refract reality. Events and objects come into focus through this lens, forming images of meaning. Social conservatives view the world through the lens of religion, meaning specifically Christianity or the Judeo-Christian ethic. In looking at America, social conservatives see a country founded on religious beliefs and deeply rooted in religious tradition. They speak of the religiosity of the Founding Fathers. The Constitution is seen as a gift from God; even the Declaration of Independence is interpreted as a religious document. In this view, America is favored in God's plan. As Phyllis Schlafly commented, "The atomic bomb was a marvelous gift that was given to our country by a wise God."[3]
The family stands at the center of this world, representing the building block of society. The family's role as moral authority is essential; the family instills children with moral values and restrains the pursuit of self-interest. Implicit in this image of the family is the social conservative conception of human nature. Humans are creatures of unlimited appetites and instincts. Without restraint, the world would become a chaos of seething passions, overrun by narrow self-interest. Only the moral authority of the family or religion tames human passions, transforming self-interest into the larger good.
The ideal society, then, is one in which individuals are integrated into a moral community, bound together by faith, by common moral values, and by obedience to the dictates of the family and religion; it is a world in balance, based on a natural hierarchy of authority.
Laissez-faire conservatives, in contrast, view the world through the lens of liberty, particularly the economic liberty of the free market and the political liberty based on the minimal state. Laissez-faire belief is rooted in the classical liberalism associated with Adam Smith, John Locke, and John Stuart Mill. All of human history is seen as either the inhibition
or progression of liberty. America is lauded as the cradle of liberty, created out of the spirit of self-reliance. The American Revolution is seen as a revolt against despotism, a milestone in the achievement of human liberty. The Declaration of Independence and the Constitution are revered as sacred symbols of limited government and individual rights.
The concept of liberty is inextricably bound to the concept of the individual. As the primary element of society, the individual is seen as an autonomous, rational, self-interested actor. Laissez-faire conservatives view humans as beings endowed with free will, initiative, and self-reliance. Left on their own, individuals will be creative and productive. The aim of the good society is to elevate the potential of humans by bringing this nature to fruition. This is best accomplished by protecting the economic liberty of the free market and the political liberty of the limited state.
In stark contrast, then, to the ideal world of social conservatism, in which moral authority restricts self-interest and thereby integrates individuals into a community, the laissez-faire ideal poses a society in which natural harmony exists through the very pursuit of self-interest. It is the marketplace, rather than God or moral authority, that creates social harmony out of individual interest.
Despite these disparate views of human nature and the ideal society, social and laissez-faire conservatives share a mutual dismay over the current spirit and direction of the nation. For social conservatives, the country has moved away from biblical principles and away from God; this threatens America's standing and particularly threatens the American family. While in the past the family has been stable and strong, "a haven in a heartless world," contemporary America is plagued by an attack on the family.
The pervasive image is of moral decay. References to moral decay permeate social conservative ideology, for the signs of decadence abound: sexual promiscuity, pornography, legalized abortion, and the disparagement of marriage, family, and motherhood. As one woman, active in the pro-family movement, put it:
I think our society is in decay. Everyone is so concerned about themself, the me generation. I see people around here taking aerobics or so concerned about fertilizing their lawn. Meanwhile the whole world is crumbling. It's as if I had one son who was on drugs, one daughter pregnant, another son doing some kind of criminal activity, and I looked around and said, "Well, I think I'll go do some gardening." But this is the time to stay close to home, to protect the family in these times of trouble. There are too many humanists running around. All these people jogging and so concerned about themselves . . . . You have mothers who want to go out and have a career, children who know their parents don't have time for them so
they turn to other kids for advice, and husbands who think it's okay for them to have extracurricular activities. The family is falling apart.
Moral issues are not only the root of America's problems; they take precedence over all other issues. A national pro-family leader told me:
From my perspective worrying only about economics and defense is sort of like building a house on a slippery mountainside with no foundation. Then along comes a rainstorm and it'll just wash the house right down the road. But if you had a foundation, you could get the whole thing anchored. . . . So I would say the biggest problems are moral. If the social order would get cleaned up, the economic problems could be resolved more readily. Because if people were in a more stable society, they wouldn't have the frantic thirst for more material goods that they have. And so if you had a stable social order, a lot of the other things would fall into place.
Thus, social conservatives look toward moral solutions, spiritual renewal, revival of prayer, repentance, and also toward concrete political involvement. Through political action they hope to push America back onto her righteous path. They see themselves as having a special mission: to restore America to health, to regenerate religious belief, and to renew faith, morality, and decency.
While laissez-faire conservatives also see a historical shift in the direction of the country, they view it in contrasting terms. It is the erosion of liberty, rather than moral decay, that is of utmost concern. In particular, America's departure from the traditions of self-reliance and the limited state threatens the individual's economic and political liberty. One local activist put it this way:
The last twenty years have seen an increase in a general, liberal, progressive thought. Everyone is concerned about altruism, promoting those who can't help themselves. Meanwhile look what's happened—the talented, the gifted, are overlooked. . . . People today are no longer proud. We've learned to be ashamed of our talents. We hide what we've achieved. There's nothing wrong with achieving, with wanting to improve yourself. That's what this country was built on. That's how cities and towns were built.
Unlike the social conservative activist who reacts against the obsession with self, this activist expresses the opposite: she speaks against the altruism rampant in society. Her concern, above all, is with the obstacles to the spirit of human liberty, the stifling of individual initiative at the expense of productivity.
Thus, laissez-faire conservatives point to the economic realm, not to the moral realm, as the answer to America's problems. When asked to name the most important problem facing the nation, laissez-faire conservatives unanimously select issues regarding the economy—and secondarily, defense issues—as most essential. One activist responded to this question by saying:
Definitely the economic issues [are most important]. Everything else is connected to the economic issues. Crime, for example—well, teenage unemployment. I think if they would lift the minimum wage for some jobs, they could train teenagers; they could give them jobs so they could earn a little money and meanwhile learn some skills. . . . The pocketbook is the main thing. . . . I think those people for the social or the moral issues are really a minority. That's not what most people are upset about. It's really the economic thing people want to change.
It is not the loosening of moral bonds, but restraints on the free market that are responsible for America's decline. Only economic recovery through a return to laissez-faire policies will assure the stability of society.
In short, while the social conservative activists use their influence to return the nation to its moral foundation, the laissez-faire conservative activists' efforts are directed toward returning America to her economic foundation. Laissez-faire conservatives believe that only protection of the free market, and a strengthening of individual initiative, self-reliance, and hard work will assure the future of the nation. It is not religious faith, but faith in the market that will solve America's problems. It is not the miracle of God, but the miracle of capitalism that will save America.
Conservatives' Motives for Activism
It is important to acknowledge not only the philosophical differences between these two views of the world, but also the different motivational bases of New Right activism. While both groups are reacting to changes in American society, they do so from different vantage points. For example, the expansion of government during the 1960s and 1970s induced a common antipathy on the part of both social and laissez-faire conservatives. Yet while activists in both groups mention "Big Government" as provoking their involvement, their impetus for action is rooted in separate realities. To illustrate, one social conservative woman discusses how government interference instigated her involvement. Describing busing as her "baptism" into conservatism, she explains:
I had been a schoolteacher. And then I quit to raise my family. Then during all the busing time—about 1973—I began to get involved. My husband and I both saw what was happening in Massachusetts with the forced busing. You know, we're not against integration; I live in an integrated neighborhood in Dorchester. We're against the federal government forcing parents—telling them how to raise our children. . . . Now I see it as stopping government intervention in our lives. . . . You know, I was
brought up like everyone else, "mother and apple pie." It took awhile for me to see things [as a conservative]. We see ourselves as pro God, family, and the country. The issue that I see that connects these different causes is government intervention. The government should not have responsibility over our lives. . . . If an earthquake happens, that's God's will. That we can't control. But the government has no right to tell someone what to do with their life.
The issue of control, and the boundaries between governmental authority and the authority of parents and God, are clearly of key concern.
In contrast, listen to the words of another activist who also names government interference as an important spark to her activism. She recounts a particular incident that had a radicalizing effect:
I remember one key event which had an impact on me. I was nineteen and dating a big black fellow from the Bronx. He designed missiles for the Vietnam War. No one thought anything of it then; it wasn't an issue. You have to remember, those were the Kennedy years. But there were some southerners who couldn't understand what a cute blonde thing was doing with a black man. They figured something was up. All we were really doing was sleeping together, the normal thing. One day two young guys came to see me to question me and they'd say things and I'd say, "How'd you know about that?" They told me, "We tapped your phone." Just like that! My jaw dropped down to the floor. That really hit me. Here I was only nineteen and realizing the government was invading my privacy.
While government authority is again of key concern, there is no mention of God, the Church, or parents. Not only is the meaning of government interference posed in a different way, but the two activists' views are fundamentally opposed. The laissez-faire conservative activist is horrified at government intrusion in the personal realm; the government illegitimately meddled with her right to privacy, to sleep with whomever she wants, "the normal thing." Both the life-style of this laissez-faire conservative activist and her perception of what constitutes government interference are at odds with the social conservative view, rooted in religious belief, which argues that there is no such thing as moral neutrality regarding governmental action.
How Conservatives View the Role of Government
The unifying cry, "get government off our backs," uttered by the New Right, in reality rests on different—and even opposing—notions of the legitimate role of government. Social conservative opposition to Big Government is rooted in the belief that government expansion inevitably usurps traditional authority. Thus, responsibilities traditionally designated to the family (for example, care for the aged, the religious upbringing of children) or to the church (for example, relief for the poor) increasingly are monopolized by the state. The public schools are a major battleground for the struggle between governmental and parental authority. Social conservative protest centers on such issues as who determines which schools children attend, who controls the curriculum, and what values are taught in the classroom.
Social conservatives also oppose Big Government because they believe it promotes immorality. For example, government support and funding of organizations such as Planned Parenthood, the National Organization for Women, the National Welfare Rights Organization, the United Farmworkers, and the like, is interpreted as endorsement of secular humanism, a belief system in opposition to traditional values.
Big Government also promotes immorality through its own programs, in particular, through the welfare system and the tax system. George Gilder argues, for example, that welfare encourages immorality by "rewarding" illegitimacy. Equating the increase in government welfare with the rise in black illegitimacy rates, he argues that the government offers so many enticements to black teenagers that they cannot pass up the "good deal." Similarly, he maintains, the tax system promotes immorality by rewarding particular life-styles:
The tax burden has shifted from single people and from groovy people without any children to a couple trying to raise the next generation of Americans. . . . But if you're willing to leave your children in a day care center and go out to work, the government will give you $800 right off the top, a pretty good deal. . . . So the government very much wants you to leave your kids in day care centers. But if you raise your children yourself, you have to pay through the nose. However, that's nothing. If you could really kick out your husband and go on welfare, the government will give you $18,000 in benefits every year.[4]
Such government efforts are seen as undermining the traditional family and endorsing alternative life-styles.
Social conservatives contrast this current role of government with the legitimate role of government set forth by the Founding Fathers. One common refrain is that the United States is meant to be a republic and not a democracy. Democracy is equated with mob rule, in which masses of individuals, unrestrained, each pursue their own desires. For example, Senator Jeremiah Denton, speaking at Phyllis Schlafly's celebration of the defeat of the Equal Rights Amendment, characterized the democracy of Plato's Republic in the following way:
It was a kind that was so democratic that the leaders were appointed by lot. They would rule according to the whims of the mob. And the animals
roamed in the streets because there was a complete feeling of liberty—you know, do your own thing and I'll do mine. There was a good deal of chaos. That form doesn't last very long and you work down into that form called despotism.
Here, liberty represents the unleashing of moral bonds, the anarchy of a society based on the precept "do your own thing," in which Man's primal instinctual nature results in chaos. This dreaded image of liberty unbound clearly parallels the ideal society of laissez-faire conservatism.
In many ways the 1960s epitomize this social conservative vision of a society in chaos. Signs of moral decay pervaded the 1960s—captured in the images of the blue-jeaned, beaded and bearded long-haired youth, Timothy Leary and the LSD cult, Woodstock and communes, skinny-dipping and bra-less women, head shops and rock concerts and, ultimately, in the image of a burning American flag. Part of the fear evoked by this era is connected to the uprooting of traditional forms of authority, evidenced in the generation gap and the slogan "Don't trust anyone over thirty," by the embrace of Eastern religion, yoga, Hare Krishna, and occultism, and perhaps most radically designated by buttons declaring "God is dead" or "Question Authority." These images of the 1960s desecrated the sacred symbols of the social conservative world.
The 1960s are also seen as the era that initiated an attack on the American family. While the pill and the sexual revolution ate away at the traditional moral norms governing the family unit, the New Left, and later the feminist movement, launched an ideological attack on the family. The rising divorce rate and the increased number of working mothers were also perceived as eroding the moral bases of family life.
Further, the 1960s are associated with a new emphasis on self, the ushering in of the Me Decade of the 1970s. Social conservatives attribute this elevation of the self to the predominant ethic of the 1960s, "Do your own thing." The popularity of psychology, with its stress on self-exploration, "I'm Okay, You're Okay," and the multitude of new therapies, also added fuel to the fire of "Me Firstism." The net result of these developments was the breeding of a culture of narcissism.
In contrast to the fears of the social conservatives, what laissez-faire conservatives fear, above all, is the loss of individual freedoms, the imposition of collectivism, and ultimately, the slavery of the totalitarian state. Thus, for laissez-faire conservatives the 1960s and 1970s ushered in an era of governmental growth, increased regulations, higher taxes, and, consequently, the loss of freedom. The image evoked from this era is of government dangerously expanding, becoming a vast bureaucracy, a modern leviathan. The soaring rise in government spending, the expansion of federal regulations, the growth of federal agencies and the
enlargement of Congressional staffs are signs of the government stretching its tentacles, ever extending into new areas.
As with classical liberals, laissez-faire conservatives fear that any extension of government beyond limited functions will interfere with the unrestricted pursuit of self-interest and the functioning of the free market. Hence, the laissez-faire conception of the legitimate role of government is necessarily linked to belief in the limited role of government. External authority is suspect because it conflicts with individual autonomy and decision making.
Specifically, Big Government limits economic freedom through intervention in the free market. By bailing out large corporations in need of assistance, for example, government acts to protect particular actors in the marketplace, thereby upsetting the workings of the "invisible hand." Similarly, government regulations not only create impediments to free trade, but also stifle innovation, the vital spark to the entrepreneurial process.
Laissez-faire conservatives also oppose government expansion of the welfare system. Welfare represents the replacement of self-reliance, the robbing of initiative by giving services and goods away for free. As one local activist put it:
People used to hate charity when I grew up. They'd be embarrassed to take handouts; they'd take pride in being self-sufficient. To take responsibility for oneself and to have opportunity without restrictions—that's what this country is about. Now, instead of relying on their own abilities and opportunity, people today expect the government to help them out. A whole generation has been raised to expect to hold out their hand and the government will put money in it.
Opposition to welfare is closely connected to opposition to high taxes. The tax system punishes savings and investment and thereby is a disincentive to output and production. Again, government interferes with market forces, crushing the incentive to work harder, to produce. Further, taxation of the productive American citizen combined with the plethora of welfare programs offered to the nonworking, nonproductive citizen, not only kills individual initiative in both but also moves government dangerously beyond the limited role intended by the Founding Fathers. This compulsory transfer of wealth places human liberty in jeopardy by fundamentally redefining the role of government. Such programs promote equality at the expense of freedom.
In short, while both social and laissez-faire conservatives oppose the tax and welfare systems, they do so for fundamentally different reasons. Social conservatives view these systems as promoting immoral life-styles that conflict with religious values and undermine traditional authority.
Laissez-faire conservatives, on the other hand, oppose state interference in the economic realm, which strangles individual initiative, thereby depriving the entire nation of greater wealth.
In addition to laissez-faire fears regarding government intrusion on economic liberty, equally abhorrent is intrusion on political liberty. Fear of government expansion, then, is also the fear of government as "morals police," as Big Brother. Laissez-faire conservatives oppose government interference in the private realm, whether it be in terms of abortion, school prayer, or pornography. As one activist put it:
I don't want Big Brother in my house. I think we're headed towards "1984" fast enough without having him come in and say, "Hey, you can't determine what you're doing with your own body." Or "Hey, lady, get back there and do the dishes," or whatever. I don't want him in my home. That's what's important to me. I just don't want government involvement in my personal life. . . . I don't rush right out and get an abortion, but then I don't want the federal government telling me whether or not I can. I'm absolutely opposed to funding them. I don't think that the federal government should be funding anything now in terms of that. Just missiles.
Typically, this laissez-faire activist is pro-choice but against government-sponsored abortion. The government has no right to deny a woman's choice, or to support her decision with funding. In their adherence to maximum free choice for individuals and protection of civil liberties, laissez-faire conservatives consider such government interference illegitimate and dangerous. In fact, the libertarian dictum of total freedom of action, barring the use of force or fraud, parallels the very slogan of the 1960s that social conservatives condemn: "Do your own thing as long as it doesn't hurt anyone else."
Thus, the two worlds are in fundamental disagreement over the role of the state. While both social and laissez-faire conservatives criticize the existing state, essentially they call for its replacement by different kinds of state. Social conservatives attack the present state—government institutions, the schools, and so on—as representative of secular humanism. They wish to replace the values and interests now embodied in public institutions with their own set of beliefs and values. Rejecting any notion of moral neutrality, they are willing to use the state to achieve righteous ends, calling for the insertion of traditional values based on biblical principles. In particular, social conservatives rely on the state to legislate moral issues, for example, abortion and homosexuality.
Laissez-faire conservatives, on the contrary, reject any effort to introduce public authority into the private realm. They do not want to replace the values embedded in public institutions with some other set of values; rather, they wish to cut back or eliminate the public sector altogether. Extension of the state into the private realm violates the sacred tenets of laissez-faire conservative belief—limited government in both public and private spheres. For laissez-faire conservatives, then, the state itself is inherently evil, a threat to individual rights.
In sum, while social and laissez-faire conservatives share a mutual hostility toward federal power, they do so for different reasons. While social conservatives view Big Government as immoral and a threat to traditional authority, laissez-faire conservatives view Big Government as dangerous to the political and economic liberty of the individual. While both agree on the need to oppose Big Government, social conservatives seek to return the country to the authority of God, the Church, and parents. Laissez-faire conservatives, on the other hand, seek to return authority to the very hands of the individual, for each person to act in his or her own best interest in both the public and the private realm.
Thus, while the laissez-faire conservative dreams of a world in which individuals are freed from the external constraints of authority and thereby attain true self-determination, such a vision is anarchic to the social conservative. Social conservatives view such "pure liberty" as a "do your own thing" type of despotism that defies divine laws and results in social disintegration. In essence, the laissez-faire vision of liberty, based on voluntaristic action, clashes with the social conservative world of hierarchical authority, based on moral absolutes.
The Future of the New Right
Finally, what is the likelihood of continued coalition between the two worlds of the New Right? Although social and laissez-faire conservatives have united to "get government off our backs" in three presidential elections, as we have seen they hold different—and even opposing—views of human nature, the ideal society, and the role of the state. If tension between the two worldviews should lead to conflict, what is the likely outcome? In terms of electoral politics, laissez-faire conservatives are currently in the stronger position. Gallup polls indicate that social issues have declined in general import among the public since the early 1970s.[5] Additionally, while there seems to be a conservative trend among the college-age generation,[6] it is not the social issues that they support.[7] As Martin Franks, executive director of the Democratic Congressional Campaign Committee, put it: "If there is anything that puts a shudder into young voters, it's the right wing's social agenda." In fact, Edward Rollins, the director of Reagan's 1984 re-election campaign, reported that most of the young voters Republicans had success with in 1984 were libertarian in orientation.[8] In addition, the baby-boom generation also tends to be economically conservative but liberal on the social issues.[9]
These factors indicate that should conflict erupt within the New Right, laissez-faire conservatives, who have a greater degree of support among the mass population, are likely to be the victors. However, while social conservatives have been relatively unsuccessful in promoting their social agenda through legislative means, the judiciary offers another possible avenue for social conservative victory. Because judicial appointments are made for life, they have more long-term impact than the election of officials. President Reagan made more lower-court appointments than any of his recent predecessors. In October of 1985, 30 percent of all U.S. District Court judges and 27 percent of all U.S. Court of Appeals judges were Reagan appointees.[10] Moreover, the appointment of Judge Antonin Scalia and the promotion of Justice Rehnquist to Chief Justice of the Supreme Court show promise for social conservatives.[11] The recent Supreme Court decision upholding Georgia's law prohibiting homosexual sodomy as well as the erosion of support for Roe v. Wade signify triumphs for the social conservative view of the state's role in legislating morality.
In short, while the two worlds of the New Right will continue to share a common opposition to the size of government and will continue to favor dismantling the welfare state, tax cutbacks, rearming America, and a foreign policy based on anticommunism, disputes over the priorities of the political agenda are bound to escalate in the coming years. Further, any measure of the relative success of social or laissez-faire conservatism must pay attention to shifts both in the legislative and the judicial realms.
But the future of the New Right does not simply depend on whether internal dissension results in divorce between the two worlds. The future of the New Right also depends on the response of those who fall outside its borders. For those who are concerned about America's shift to the right, it is essential to understand that the New Right speaks to real social problems and to see how the hopes and fears implicit in each world view tap into wider currents within American society.
Social conservatism speaks to problems and fears created by the enormous social upheaval of the past few decades. In her study of the right-to-life movement, Rosalind Petchesky warns that the pro-family movement should not be written off as religious fanaticism or as mere opportunism. The pro-family movement expresses fears resulting from teenagers' cultural independence from parents; it expresses parents' concerns about their children getting pregnant, having abortions, abusing drugs, or being sexual without a context of responsibility. Petchesky argues that neither the left nor the women's movement has offered a model for a better way for teenagers to live, and she urges the development of alternative visions that provide a sense of orientation in dealing with this disruption and insecurity.[12] Added to this, the pro-family movement speaks
to fears regarding women's precarious position in society. During a time of increased divorce and the feminization of poverty, women seek to assuage their anxiety, to secure their place in the social structure. Social conservatives also react against the vision of a society steeped in self-interest. They criticize the hedonism and obsession with self of a society that has taken individualism to an extreme, in which self-fulfillment takes priority over responsibility to others.
These concerns over excessive materialism, over the instilling of values beyond self-interest, over the need for a larger community, are shared with those of other political persuasions. In Habits of the Heart: Individualism and Commitment in American Life, Robert Bellah and his colleagues report that while individualism remains a deeply held and precious tradition in America, excessive individualism has led to isolation and to a preoccupation with self and with private interests.[13] Whereas in the past people derived meaning from their connection to others—to parents and to children, to a religious community—and from participation in public life, today freedom is largely defined negatively; people define themselves through breaking from past connections.
During a time in which neither work nor public participation provides meaning, people look for meaning through "expressive individualism" of shared life-styles. "Freed" from all social ties, people choose others like themselves, forming life-style enclaves centered on a private life of consumption and leisure. Echoing Tocqueville, Bellah and his colleagues conclude that the antidote to excessive individualism is participation in public life and a reconnection of individuals to true communities, which, unlike life-style enclaves, are inclusive, celebrate the differences among individuals, and reconnect people to their past and to their future.
In contrast, the social conservative answer to cultural narcissism is moral absolutism—the firm and unquestioning assertion of biblical principles and traditional values. There is a certain irony to this moral absolutism. On the one hand, social conservatives have faith in divinely ordained laws, solid and eternal truths that form the moral code by which to live. On the other hand, they continually fear that, with even the smallest of questioning, the entire foundation of morality is likely to crumble. This fear of collapse results in antagonism toward critical thought.
Further, the moral absolutism that divides the world into two camps of right and wrong, Good and Evil, reduces society to irreconcilable conflict. There are no shades of grey. For, if moral absolutism leaves little room for different interpretations of the Bible, it leaves even less room for the existence of a plurality of cultural and moral codes. In American society, historically rooted in heterogeneous cultures of diverse traditions, such a stance is suicidal.
If the urge toward community is a central concern underlying social conservatism, the urge toward freedom is one of the central concerns of laissez-faire conservatism. Laissez-faire conservatives speak to the reality of living in a world that is increasingly so huge and technological that the individual is lost, submerged in the structure. The laissez-faire conservative response expresses fear that as bureaucracies grow bigger and bigger the individual will be abandoned by the wayside. Horrified by the curtailment of civil liberties, appalled by Big Brother's intrusion into the private realm, ultimately the laissez-faire conservative fears the imposition of the totalitarian state.
Again, the very real issues voiced by laissez-faire conservatives are shared by those outside its borders. Bellah and his colleagues find nostalgia for the small town common in discussions with Americans across the country, regardless of political ideology. They interpret this longing, as well as the opposition to Big Government, as the desire to replace large-scale organizations with face-to-face interaction.
The concern, as well, over protection of civil liberties and opposition to government involvement in the private realm are issues laissez-faire conservatives share with those of other political persuasions. In their study of Americans' attitudes toward civil liberties, Alida Brill and Herbert McClosky find that whether ideology is measured by membership in particular groups, by self-identification, or by use of a scale, the strongest support for civil liberties is found among liberals rather than among conservatives.[14]
But there are crucial differences between laissez-faire conservatives and liberals in response to the curtailment of liberty. While all concerned might agree with the call to protect individual rights and to uphold freedom of action, for laissez-faire conservatives this translates into support for a foreign policy based on anticommunism and military strength. Consequently, while laissez-faire conservatives advocate less government intervention and uphold individual self-determination, many also support paternalistic foreign policies that rely on Big Government and deny self-determination to people in other lands. They fear state expansion and cherish individual liberties in America, yet support authoritarian dictatorships abroad in which the state reigns through terror, curtailing even the most basic liberties.
A consistent state policy would come closer to the isolationist position promoted by a faction of the libertarians. Yet even the libertarian notion of liberty defines liberty negatively as the absence of coercion. As Stephen Newman points out, the sensitivity to external power and control embedded in this notion of liberty disappears at the entrance to the market-place. There is no recognition, for instance, of the private power wielded by large corporations. Arguing that autonomy is limited to the extent
that the alternatives among which we may choose are conceived by others, Newman calls for a return to classical citizenship in which individuals participate in the decisions that determine their lives.[15]
Bellah and his colleagues also conclude that prevention of public despotism must entail a strengthening of citizen influence rather than a knee-jerk reaction to "get government off our back" or to "decentralize our economy." Recognizing that in a large and complex society some degree of centralization is necessary, they recommend individual participation through associations and social movements aimed at humanizing rather than abolishing government. They, too, urge a renewal of citizenship, by which individuals make the state more responsive and responsible to the community.
Ironically, the fears expressed by each of the worlds of the New Right are actually being realized through the actions of the other. The social conservative fear of a world enveloped in self-interest is, in fact, the world promoted by laissez-faire conservatism. The laissez-faire world of autonomous actors is a world of anomic individualism. Driven by self-interest, it is a world without altruism, void of any notion of responsibility and care for others. A world of social atomism is a world without community. In the extreme, the libertarian ideal reduces the entire world to private interest. It rejects the world of public citizens for a world of individuals, each pursuing his or her own ends.
On the other hand, the laissez-faire conservative fear of a world in which the individual is constrained by external authority is the very world promoted by moral absolutism. In a world in which there is one true interpretation of the Bible, one cultural tradition, one correct way to live, the individual cannot act autonomously. There is no freedom in a world reduced to black and white.
Yet the underlying yearnings of the New Right tap into very real issues which speak to main currents in American life. Only to the extent that the issues raised by the right are recognized and addressed is it likely that the rightward tide will be turned. Only by attention to these fundamental issues and only through the renewal of citizenship, public debate, and reasoned critique can avenues of choice remain open and hope truly remain to protect individualism and to sustain community.
Eighteen—
The Return of Rambo:
War and Culture in the Post-Vietnam Era
J. William Gibson
As the 1990s begin, the forty-five-year Cold War between the United States and the Soviet Union and their respective allies appears to have come to an end. The prospect of improved international relations among former foes and of reductions in nuclear and conventional arsenals means that the threat of World War III may be substantially reduced. As the fear of world war recedes, American society has the potential to develop a very different culture and politics by mobilizing the energies formerly directed toward preparation for war. Yet abandonment of war as a central social institution is by no means guaranteed. The United States approaches the end of the Cold War not as a victorious, satisfied world power now ready to lay down its arms, nor as a country that has finally learned that war does not pay. Instead we have spent the last decade trying—and at best, only partially succeeding—to come to terms with the first major military defeat in our history: Vietnam.
The U.S. defeat in the Vietnam War created serious, prolonged political and cultural crises. The fighting had involved tremendous military resources. Over 2.5 million men served in Vietnam; at the height of the war in 1968–69, the United States deployed more than 550,000 troops, a figure that excluded at least 100,000 to 300,000 more people serving in logistic and training capacities outside Vietnam. Roughly 40 percent of all U.S. Army divisions, more than 50 percent of the Marine Corps divisions, one-third of U.S. naval forces, half of the Air Force fighter bombers, and between 25 and 50 percent of the B-52 bombers in the Strategic Air Command fought in the war. From 8 to 15 million tons of bombs were dropped on Southeast Asia (at minimum, four times the amount the United States dropped in World War II), along with another 15 million tons of artillery shells and other munitions.
Given their vast technological superiority over a largely peasant army from a poor agrarian society, U.S. political and military leaders thought they could "produce" victory by killing so many Vietnamese that the enemy could no longer replace casualties.[1] The failure of this approach—what I call "technowar"—raised questions about our method of warfare and the Pentagon's continual demands for vast economic resources to construct its capital-intensive, high-technology apparatus. Moreover, since military intervention had long been the ultimate threat behind much U.S. diplomacy, military defeat meant a loss of international political power as well.
Defeat also created a cultural crisis for America's national identity. According to historian Richard Slotkin, European settlers created a fundamental American myth he calls "regeneration through violence" during their wars against the Indians: American technological and logistic superiority in warfare became encoded as a sign of cultural and moral superiority. Thus, European and American civilization morally "deserved" to defeat Indian "savagery," and in turn, each victory by Anglo warriors "regenerated," or revitalized, the society as a whole.[2] The long history of U.S. victories, from the Indian wars through World War II, reinforced the centrality of wars and warriors as symbols of masculine virility and American virtue.
There was no victorious "regeneration through violence" to redeem the 58,000 Americans killed and the more than 300,000 Americans wounded in Vietnam. If the nation's long tradition of military victories had always been portrayed as divine omens of cultural and moral superiority, then defeat, by definition, cast doubt on the entire American social fabric. German social critic Walter Benjamin once observed that the terms "winning" and "losing" in war refer to more than just the war's outcome. Writing about Germany's defeat in World War I, Benjamin explained that "losing" a war meant losing possession of it:
What the loser loses is not simply war in and of itself, war in general: it is the most minute of its vicissitudes, the subtlest of its chess moves, the most peripheral of its actions. Our linguistic usage itself is a marker of the depth to which the texture of our being is penetrated by winning or losing a war; it makes our whole lives richer or poorer in representations, images, treasures. And we have lost one of the greatest wars in world history, a war intertwined with the whole material and spiritual substance of our people. The significance of that loss is immeasurable.[3]
Although the United States was not conquered, as was Germany in World War I, the dispossession of cultural tradition and institutional powers that accompanied defeat, and the subsequent struggles to "repossess" the war and determine its meaning, have formed a major dimension of American politics and culture. At issue is not just one war, Vietnam, but the entire cultural legacy of war as fundamental to the American experience, the role of becoming a warrior as a male gender ideal, and the political wisdom and morality of fighting new expeditionary wars in other Third World countries. How American society comes to view the Vietnam War thus has major repercussions for how it views its historical past, the kind of future world order it hopes to create, and America's place in that order.
Repression of the Vietnam War, 1972–79
To understand why the Vietnam War became a major topic for cultural and political debate in the 1980s, it is necessary to remember the repression of the war during the 1970s, its strange absence from public life. By the end of 1971, all of the major conventional army and marine ground-combat units, together with much of their helicopter and jet fighter-bomber air support, had been withdrawn from Southeast Asia, as part of Nixon's "Vietnamization" strategy. The U.S. military had no choice but to leave. Discipline in many combat units had vanished as more and more soldiers participated in "search and avoid" missions or, more seriously, committed mutinous "fragging" attacks against officers and sergeants.[4] This breakdown and revolt received some attention from both network television news and the print media.
In June 1971, the New York Times began publication of the Pentagon Papers, a secret study of the history of U.S. involvement in Vietnam. This revelation provided the public with several extraordinary accounts of deception by their leaders. Nineteen seventy-one was also the last year of major anti-war protests. This last round differed from earlier ones in that the Vietnam Veterans Against the War held shocking demonstrations in Washington, D.C. Although the VVAW's two "Winter Soldier" investigations into American war crimes failed to receive media attention, powerful pictures of veterans throwing their medals over the fence in front of the Capitol were shown on network television news.[5] All in all, 1971 was an extremely disturbing time. It looked as if the war was coming to a close and the United States had lost. For the next year, the war virtually disappeared from public view.
But it hadn't ended. U.S. material support, advisory efforts, and periodic intrusions of air power continued. Then in December 1972, the U.S. Air Force Strategic Air Command was ordered to bomb Haiphong and Hanoi. Before the American government would sign the Paris Peace Accords, the world had to be shown that the United States was still very powerful. The bombing allowed President Nixon to declare that America had achieved "peace with honor"; then the treaty
was signed, in January 1973, and American prisoners of war were returned home. The treaty signing and prisoner return created a second "ending" to the war, and the conflict again receded from the news.
When the communist forces launched a major offensive in the spring of 1975, again the American news media returned to Vietnam. New pictures and stories told of escalating chaos as South Vietnamese army divisions dissolved and former soldiers fled southward. Finally, on April 30, Ambassador Graham Martin ordered the final evacuation of U.S. personnel from Saigon. Television news showed footage of U.S. helicopters that had been used to evacuate refugees being pushed off aircraft carriers and sunk in the South China Sea. It was an image of total defeat.
Twice before the war had "ended," only to return in yet a more terrible form, like a recurrent nightmare from which there was no awakening. Consequently, the finality of the April debacle created a perverse kind of relief. President Ford delivered a eulogy on April 23, a week before the fall/liberation of Saigon:
Today, America can regain the sense of pride that existed before Vietnam. But it cannot be achieved by refighting a war that is finished. . . . These events, tragic as they are, portend neither the end of the world nor of America's leadership in the world.[6]
Other U.S. political leaders and the press followed his lead. The Vietnam War had been a "tragedy" and a "disastrous mistake," but it was time to move on to better things. Neither the leadership nor the public was prepared to re-examine the war and determine who was responsible for the debacle, either in terms of decision makers or broader examinations of institutions and culture.
Communications scholar Harry W. Haines calls this repression "strategic forgetting," in that the collective amnesia helped stabilize the American political system.[7] And the system desperately needed stabilizing. President Richard Nixon was forced to resign from office under the threat of impeachment in August 1974. Scores of his administration officials had previously been indicted for crimes during the long Watergate crisis. Many people thought the government was corrupt beyond repair.
President Carter's election in 1976 did not bring a return to normalcy. A second, longer oil shock (the first, an embargo by the Organization of Petroleum Exporting Countries [OPEC], took place while Nixon was still in office) resulted in Americans spending long hours in lines, waiting to buy gasoline at vastly inflated prices. It was a humbling experience, a symbol of American decline. To make matters worse, the crisis came at a time when the erosion of U.S. manufacturing strength was resulting in plant closures and increasing unemployment. Massive trade deficits, high U.S. budget deficits, high inflation, double-digit interest rates, and high unemployment combined to destroy the post–World War II pattern of continual economic growth. The American Dream of high wages, home ownership, abundant consumer goods, and increased educational and economic opportunities for one's children seemed to be fading away.
Foreign policy failures also heightened people's sense of living in an era when America was losing its empire and sliding from world power. The Soviet Union apparently did not fear American retaliation when it ordered its troops to invade Afghanistan in 1979. That same year, Sandinista revolutionaries overthrew Anastasio Somoza in Nicaragua, ending a dynasty that had begun when U.S. Marines appointed his father head of the National Guard in the early 1930s. In neighboring El Salvador, the Farabundo Martí National Liberation Front (formed after leftist political candidates were shot by the army during national elections) began protracted guerrilla war against the government, another U.S. ally. And in early 1979, Islamic revolutionaries overthrew the Shah of Iran, who had been restored to power in a coup engineered by the Central Intelligence Agency in 1953. For 444 days, U.S. embassy employees were held hostage in Iran, a national humiliation witnessed each night in the ABC News "Nightline" logo, "America Held Hostage." In April 1980, Delta Force commandos went to the rescue of those hostages, but their helicopters malfunctioned during the raid. The disaster made the feelings of national powerlessness worse.
With each new national trauma, the Vietnam War became more and more a symbolic harbinger of that decline: defeat in Southeast Asia had been America's first fall from grace, and now no one knew when the falling would stop or what the bottom might look like. All the old national mythologies seemed invalidated by these startling reversals, but to admit that the American system needed more than "tinkering" was deeply threatening. To open the Vietnam War up to serious examination meant that, potentially, everything else was open to the same scrutiny. The political elite, the news media, and certainly the majority of ordinary Americans were exhausted from all these conflicts and disillusionments; they wanted to avoid new national debates that might lead to yet further disruptions.
So, the Vietnam War was ignored. There were no films, no television shows, and no major academic conferences or other sustained public discussions about it. But the men who had fought were not so easily dismissed; their mere physical existence served as a reminder of what had happened in a war that few civilians wanted to remember. The veterans, of course, were human beings, all with different war experiences, who drew different conclusions from those experiences. But their humanity was lost in the nation's early effort to deny the war. Those who mentioned their veteran status frequently found that people backed away. In
consequence, a great many veterans later told of how they had previously lived "in the closet," and either denied or did not tell other people that they were veterans.
When they were acknowledged, veterans found themselves cast as symbols of a war that many suspected had been morally wrong. They came to represent American guilt. People called them "baby killers," meaning they were the living embodiment of evil. U.S. air power, artillery, and ground troops did indeed kill thousands of civilian Vietnamese, as veterans' novels, memoirs, oral histories, and testimonies—like those offered at the Winter Soldier investigations—all indicated. However, most of their stories provided a context for understanding how all those civilians came to be killed and opened a view of the chain of responsibility that went far beyond the man with the gun.
But in the mid- and late 1970s, this kind of crude name-calling was not intended and not used to open up debate about the war. Instead, it rang with archaic echoes from the history of Western culture. During the Middle Ages, Jews had been called "baby killers" and had been tortured and killed for supposedly conducting human sacrifices with Gentile babies (a revision of the older Christian claim that the Jews, not the Romans, had killed Christ).[8] A more modern example was the image of the German "Hun," who bayoneted children, created by British propagandists in World War I.[9] "Baby killer" is a tried and true castigation that divided the "good" people who opposed the war from the "bad" people who went to Vietnam. It was a form of ritual excommunication that cast veterans outside of human society.
Another label used to categorize Vietnam veterans was "crazy." There is no doubt that many combat veterans were disturbed, first by their combat experiences and second by the stigmatization they suffered at home. Newspaper reports of former soldiers undergoing hallucinatory flashbacks, acting violently toward other people, and committing suicide appeared regularly in the 1970s. But rarely was there any accompanying explanation for their psychological problems and violent acts. The accusation of madness created its own suffocating silence: sane people do not have to listen seriously to what crazy people say—because they are crazy. The negative images used to stigmatize Vietnam veterans also silenced them, and that silence was an intrinsic part of the continuing social repression of the war during the 1970s.
The Vietnam War Resurfaces in Literature and Film, 1977–80
The break from the repression of the war began in literature. In 1976, Ron Kovic published Born on the Fourth of July, the story of his life as a marine and his ordeal after the war as a paraplegic. Michael Herr, a writer for Esquire, released his collection of war essays, Dispatches, in 1977. Gloria Emerson, a former New York Times correspondent, saw her memoir and war analysis, Winners and Losers, in print in 1978. The critical and commercial successes of books like these had two important consequences. First, the New York publishing world began looking for more books on Vietnam. Second, Hollywood writers, directors, and producers, who are constantly searching the horizon for cultural trends, noticed the success of the books.
Five major films concerning the war or the predicament of veterans after the war appeared in the 1970s: Taxi Driver (1976), The Deer Hunter (1978), Who'll Stop the Rain (1978), Coming Home (1978), and Apocalypse Now (1979). The 1970s films put the Vietnam War back into American culture. However, the images of war and the stories of war veterans that they gave viewers presented a very particular version of what the war meant. All five films shared four important characteristics.
First, the movies showed veterans returning to war. Travis Bickle, the character played by Robert De Niro in Taxi Driver, becomes an armed warrior again (whether in fact or in fantasy is ambiguous) in a desperate hope to win recognition and somehow change society. In The Deer Hunter, De Niro plays a man who returns to Vietnam to rescue his lost childhood friend, a soldier who never came home. In Who'll Stop the Rain, Nick Nolte plays a former marine and Michael Moriarty plays a war correspondent who fight the bad guys to make a heroin deal. Bruce Dern, playing a returning marine in Coming Home, pulls a Vietcong rifle on a paraplegic veteran, played by Jon Voight, when he realizes that the man has had an affair with his wife. In Apocalypse Now, Martin Sheen plays a veteran who goes back to Vietnam because he can no longer function in the United States.
Second, almost all veteran characters are shown either as mentally ill or walking a psychological tightrope. They perform superbly in battle, but that is obviously about all they are capable of doing. As human beings, they are damaged.
Third, there is no successful "regeneration through violence." The character played by De Niro in The Deer Hunter brings back his friend's body, not a living man. Although taxi driver Travis Bickle becomes a media hero (at least in fantasy), his second war effort does not make him sane. The man played by Nick Nolte saves his friend in Who'll Stop the Rain, but he is killed in the effort, and corrupt intelligence agents (who set up the "buy") get the heroin. In Coming Home, a returned veteran commits suicide by swimming out to sea and drowning after he realizes that he cannot win back his wife's love. And in Apocalypse Now the character played by Martin Sheen kills the renegade Colonel Kurtz (Marlon Brando), not because he is following orders from higher command, but
because Kurtz wants to die. The killing frees the veteran from his last connection to the military.
The fourth aspect these films share concerns their portrayal of the Vietnamese. None of them examines who the Vietnamese were or why they fought for thirty years, first against the French and later against the Americans. Taxi Driver, Coming Home, and Who'll Stop the Rain take place inside the United States. In Apocalypse Now, Coppola presents the First Cavalry Division (Airmobile) attacking a bamboo village while wearing old western cavalry hats—the reference to the Indian wars is obvious. Michael Cimino, director of The Deer Hunter, also cast the Vietnamese as "Indians." He portrays them as sadistic torturers, pure and simple.
Private Healing without Public Peace: Cultural Politics of Post-Traumatic Stress Disorder
In 1980, the common belief that many veterans were "crazy" began to change. After years of lobbying efforts by veterans' groups and affiliated therapists, the American Psychiatric Association listed Post-Traumatic Stress Disorder (PTSD) as a medical diagnosis in the third edition of its official Diagnostic and Statistical Manual (DSM III). The manual defined PTSD as a reaction to a severe trauma "generally outside the range of usual human experience." Reaction to the trauma could appear in a wide range of symptoms:
The characteristic symptoms include autonomic arousal, which is often manifest in panic attacks or startle reactions; a preoccupation with the traumatic event in the form of nightmares, flashbacks, or persistent thoughts about the trauma that intrude into everyday affairs; and a general dysphoria, a numbness that takes the meaning out of life and makes it hard to relate to other people. In [some] cases . . . the symptoms manifest themselves after a latency period of several years or . . . alternate with apparently asymptomatic periods that, on closer inspection, turned out to be periods of denial.[10]
Once PTSD became an official diagnosis, important symbolic and material resources became available to disturbed Vietnam veterans. Describing veterans' symptoms as disease partially eased the stigma that attached to men whose histories included anti-social actions, such as alcohol and drug abuse, wide mood swings, emotional withdrawal, and violence. What much of society had formerly seen as personal moral failures were reinterpreted as signs of PTSD, a kind of psychological suffering that could potentially be cured. Materially, the PTSD diagnosis helped secure continued political support for funding Vietnam Veterans Outreach Centers, which provided psychological and job counseling services, and it allowed some veterans to receive disability payments.
More money also became available for academic studies, and the strongest finding they turned up was that extensive combat experience often led to postwar psychological problems. Prewar differences in family stability, social class, and race proved to be much less important in the development of PTSD and Post Combat Violent Behavior (PCVB) than combat experience. Similarly, combat trauma caused veterans far more problems than hostile civilian attitudes.[11]
Early PTSD researchers also specifically connected the kinds of trauma veterans experienced to the American mode of warfare. Dr. Chaim F. Shatan, who cofounded the Vietnam Veterans Working Group in 1975, quotes one of his former soldier patients who knew something about the war was deeply wrong: "When I saw all the kids missing arms and legs in Saigon, I knew we shouldn't be there."[12]
Psychiatrist Robert Jay Lifton also showed the connection between the psychological problems experienced by veterans and the serious contradictions in the American war effort. He created the concept of the "counterfeit universe" to indicate the sense of fraud and betrayal many soldiers felt when they encountered the lies that permeated the U.S. military. False body counts, burned houses counted as "military structures destroyed," medals given to commanders for flying above the battlefield, and other routine practices embittered many soldiers. Lifton also heard repeated testimonies from guilt-ridden soldiers about how they had killed civilians and armed combatants who were only old men, women, and children. To help soldiers (and others) understand their guilt in context, Lifton conceptualized the American approach to war as creating "atrocity producing situations." When soldiers are sent to fight guerrillas who either live among the people or are the people, then atrocities occur regularly as part of the war's social dynamics. Moral responsibility for creating atrocity producing situations extends beyond the rank and file soldier.[13]
Since, from this perspective, the psychological disturbances experienced by veterans were caused by something larger than the individual, "curing" them would have to take the form of a public process in which collective responsibility for the Vietnam War was examined. Illuminating the contradictions between the goals of stopping what U.S. leaders called totalitarian communism and the reality of what this political stance and military strategy meant in terms of day-to-day warfare would be a crucial part of that process.
Instead of leading to the examination of collective responsibility, PTSD became a way to "repossess" the Vietnam War more comfortably. Chaim F. Shatan, who collaborated extensively with the DSM III task force, contended that the concept of PTSD that was officially adopted minimized the disorder's connection to war:
I would have preferred Postcombat Stress Disorder or, if we include survivors of other disasters, Postcatastrophic Stress Disorders. The words trauma, stress, and stressors are bloodless compared with disaster and catastrophe. While they satisfy nosologists, they also create confusion. The term "psychic trauma" was originally devised by Freud and his followers to describe "ordinary" psychic wounds, such as the loss of a parent or sibling, abandonment, the failure of a relationship, or the loss of one's station in life, not massive communal disaster.[14]
Through the diagnosis of PTSD, the veteran in essence was transformed into a patient for therapy and ritually reincorporated back into American society in exchange for depoliticizing his or her knowledge of the war. The potential political threat for destabilizing major power structures and creating opportunities for social change that is implicit in what I call the "warrior's knowledge" was thus neutralized.[15]
To Heal a Nation: Creation of the Vietnam Veterans Memorial
In this therapeutic approach, the Vietnam War itself became a "wound" upon the nation's body that needed to be healed. In April 1979, Jan C. Scruggs founded the Vietnam Veterans Memorial Fund to raise private money for a monument honoring American war dead. Scruggs had served in the infantry and suffered hallucinatory flashbacks after seeing The Deer Hunter. His effort soon gained support among veterans and families of fallen soldiers. In January 1980, Congress passed a bill authorizing the Memorial, and later that year, President Jimmy Carter signed it into law, adding his own prologue:
A long and painful process has brought us to this moment today. Our nation, as you all know, was divided by this war. For too long, we tried to put that division behind us by forgetting the Vietnam War and, in the process, we ignored those who bravely answered their nation's call, adding to their pain the additional burden of our nation's own inner conflict.[16]
As Harry Haines points out, this prologue was an early indicator that the Memorial was intended to serve as a kind of therapy. Carter "names the Memorial as a sign of national expiation, a sign through which Vietnam veterans are purged of an unidentified 'inner conflict.'" After the Fund's success in getting public appropriations for construction, it then formed an eight-person jury to select a design. The judges issued instructions "that the memorial must include a list of names of the war dead, that it must relate sensitively to the Washington Monument, the Lincoln Memorial, and other major monuments, and that it should be 'reflective and contemplative in nature,' refraining from making any 'political statement regarding the war or its conduct.'"
The jury received 1421 entries and in May 1981 selected Maya Lin's design. At first, the Yale undergraduate's famous black wall received approval from the National Capital Planning Commission, the Fine Arts Commission, and the Department of the Interior. But by the fall, a conservative critique of the Memorial design had become a powerful political force. Tom Carhart, a Vietnam veteran and lawyer at the Pentagon, complained, "Are we to honor our dead and our sacrifices to America with a black hole? . . . Can America truly mean that we should feel honored by that black pit?"[17] James Webb, a Vietnam veteran, author of the novel Fields of Fire (1977), and a well-known Washington official, called the design a "wailing wall" for future anti-war protests. H. Ross Perot (who had financed the design competition) also disliked it, as did several congressmen. They all wanted an above-ground, white monument.
Secretary of the Interior James Watt subsequently refused to permit construction of the wall. Scruggs mobilized his allies among veterans and the armed services and fought back. Watt then ordered Scruggs to compromise. Frederick Hart's proposal for statues of three soldiers approaching the wall was accepted to give the Memorial a more traditional look.
In March 1982, construction began and the first of three dedication ceremonies was held. Virginia Governor Charles S. Robb, a former Marine officer and Lyndon Johnson's son-in-law, recalled that he never knew how to answer "why" his men had died when he wrote to their parents, "and this memorial doesn't attempt to say why, but it does say we cared and we remembered."[18] Attempts to understand why the war happened were thus displaced by the therapeutic project of healing the "wounds" it inflicted.
Later in the fall, the wall was completed and dedicated during a four-day event entitled the "National Salute to Vietnam Veterans." Secretary of Defense Caspar Weinberger used the occasion to generate public support for the Reagan administration's Cold War agenda. He affirmed the conservative analysis of the Vietnam War, namely that the U.S. military suffered from what the Joint Chiefs of Staff called "self-imposed restraints." The Secretary of Defense said that the United States would never again ask its men to "serve in a war that we did not intend to win."[19] National unity would require learning that lesson before fighting another war.
But the meaning of the Memorial was not completely determined by Weinberger's speech. Different cultural forces were set in motion by the way its design framed experiences for visitors. Communications scholar Sandra Foss theorizes that the Memorial's presence within the earth, together with its enveloping walls, creates a "feminine sensibility" resonant with Jung's "Mother archetype." Visitors are warmly embraced while the inscribed names of the war dead communicate tragedy.[20] Both Foss and novelist Bobbie Ann Mason believe that touching the wall creates a bond between the living and the dead. Vietnam War scholar (and veteran) Rick Berg concludes that this process takes Vietnam away from the state and instead locates the war "as an experiential and historical fact in the lives of . . . families."[21]
For Vietnam veterans, the Memorial created opportunities to break through what psychoanalyst Shatan calls "impacted grief" and mourn for their lost comrades. Armies do not encourage grieving; battle demands that soldiers keep fighting. Mourning is first blocked and then transformed into "ceremonial vengeance," the desire to make the enemy suffer.[22] Photographs of former soldiers crying became one of the most common representations of the Memorial, and thus of the Vietnam War.
In November 1984, the Reagan administration orchestrated the third ceremonial dedication to honor Hart's statues. A week-long series of televised events called "Salute II," subtitled "American Veterans—One and All," was held. This dedication's political objective was to consecrate the Vietnam War as a "traditional" American war. President Reagan began his speech by saying that "this memorial is a symbol of both past and current sacrifice." He concluded:
There has been much rethinking by those who did not serve, and those who did. . . . There has been much rethinking by those who had strong opinions on the war, and those who did not know which view was right. There's been rethinking on all sides, and this is good. And it's time we moved on, in unity and with resolve to always stand for freedom, as those who fought did, and to always try to protect and preserve the peace.[23]
By implication, both the civilian and Vietnam veteran anti-war movements were wrong to protest. Healing requires unity. Unity creates the resolve to "stand for freedom" and fight another war. The "price" of freedom is periodic sacrifice of the nation's youth. Consequently, the memorial is a "symbol of both past and current sacrifice." Reagan's attempt to recuperate the Vietnam War makes new wars necessary and desirable—blood sacrifice can only be redeemed by more blood sacrifices. Without new sacrifices, all those who have died before will have died in vain.
This concept of blood sacrifice appeared in many other commentaries during the 1980s. Tom Carhart, the lawyer who called Maya Lin's design a "black hole," had submitted his own design in which an army officer held a dead soldier up to the sky, like an ancient offering to the gods. Christopher Buckley similarly argued in Esquire that those who avoided military service had "forfeited what might have been the ultimate opportunity, in increasingly self-obsessed times, of making the ultimate commitment [self-sacrifice] to something greater than ourselves: the survival of comrades."[24] Francis Ford Coppola's second Vietnam film, Gardens of Stone (1987), focuses on ritual sacrifice. A noble young lieutenant is first trained by two veteran sergeants. He volunteers for Vietnam and is killed. The movie concludes with his burial at Arlington National Cemetery and the mourning of his symbolic fathers. They know that he died for them.
Some veterans foresaw that the Memorial would be used to promote the revitalization of war culture. Poet William D. Ehrhart made the connection in "The Invasion of Grenada":[25]
I didn't want a monument,
not even one as sober as that
vast black wall of broken lives.
I didn't want a postage stamp.
I didn't want a road beside the Delaware
River with a sign proclaiming:
"Vietnam Veterans Memorial Highway."
What I wanted was a simple recognition
of the limits of our power as a nation
to inflict our will on others.
What I wanted was an understanding
that the world is neither black-and-white
nor ours.
What I wanted
was an end to monuments.
Regeneration through Violence
In 1983, a new kind of Vietnam War film reached the American public. These films departed from those of the 1970s in one crucial respect—the warriors won their wars. Uncommon Valor showed a former U.S. commando team leader, backed by private financing, organizing a rescue mission to retrieve members of his reconnaissance unit who had been captured years before and who remained POWs in Laos. He recruits his old crew for the project. Training shows the men becoming mentally and physically fit again. They transcend the egotism of civilian life and become a united team, complete with a purpose in life for which they are willing to die—and most of them do. The trade-off of killed rescuers for saved POWs is about one-to-one.[26]
Two years later, Sylvester Stallone's John Rambo, who had destroyed an Oregon town while experiencing a prolonged flashback in First Blood (1982), receives a presidential pardon to rescue POWs. Before accepting the offer, he asks, "Do we get to win this time?" Having located the POWs, Rambo is then betrayed by his CIA case officer (who in turn takes orders from higher authority). No longer a subordinate member of the bureaucracy, Rambo unleashes his full power. He brings the prisoners back by himself.
Almost immediately, Stallone-Rambo became an object of ridicule in the mainstream press. Everyone laughed at his corny dialogue, massive body-builder physique, and cartoon killing proficiency. But the ridicule obscured the huge success of these films and the important opening they made in the culture.
The ostensible motivation for the heroes of Uncommon Valor and Rambo: First Blood, Part II was getting POWs out of Southeast Asia, and it was true that this was still an issue. When the Paris Peace Accords were signed in 1973, more than 2000 American soldiers, primarily members of air crews shot down over Laos and Vietnam, were listed as missing-in-action (MIA). Their families faced highly secretive Defense and State Department officials. Both the Carter administration and a Special Select Committee of the House of Representatives announced that they had not found credible evidence that Americans were still being held prisoner. Reagan administration officials later said that although they had no proof of live Americans, "the information available to us precludes ruling out that possibility."[27]
Despite these statements, belief in POWs persisted. Many Americans no longer trusted their government. Pictures of desperate Vietnamese boat people fleeing their country after the long (thirty-year) revolutionary war and subsequent economic collapse reinforced the old concept of the barbarian enemy. Moreover, there is no doubt that the American POWs who returned in 1973 had been brutally tortured. This combination of popular distrust and belief in a savage enemy created the idea that living POWs had been abandoned by the U.S. government because they were politically inconvenient.
But these two films (and their clones, like Chuck Norris's Missing in Action series) had a deeper cultural significance, beyond the POW debate. They gave birth to a new incarnation of a fundamental cultural archetype in American mythology. The old American figure of the avenging warrior who saves a society that cannot save itself was resuscitated. Suddenly a vast cultural outpouring created new images and stories of American men (frequently Vietnam veterans) who are "reborn" as powerful warriors fighting either criminals inside the United States or guerrillas, terrorists, drug dealers, or KGB agents abroad, on a worldwide battleground.
This reworking of traditional war culture constituted what I call "paramilitary culture." The new warrior hero was only rarely portrayed as a member of a conventional military or law-enforcement unit. Instead, he fought alone or with a small, elite group of fellow warriors. By being outside the dominant power structure and bureaucracy, the new paramilitary warrior could overcome legal and political restraints supposedly imposed by higher-ups on their subordinates, and thus achieve new mythic victories to replace American defeat in Vietnam. Each victory regenerated society by infusing it with the warrior's strength and virtue.
In the mid- to late 1980s this culture blossomed. Scores of warrior films were released. Twenty to thirty men's "action-adventure" pulp novel series were published; each book featured from 20 to 120 graphically described killings (much like pornographically described sex). The genre sold millions. Soldier of Fortune magazine and half a dozen imitators together fabricated a universe filled with visits to elite commando units; battlefield reports from Nicaragua, Angola, and Afghanistan; old war stories from Vietnam; and extensive testing of every possible small arm, knife, and military accessory. Together they sold well over a million copies a month. And the domestic weapons market changed. In 1989 the Bureau of Alcohol, Tobacco, and Firearms estimated that Americans had bought two to three million military "assault" rifles.
A central component of the paramilitary culture is the old war movie tradition that only war can provide males the key ritual transition from boyhood to manhood. The feminist movement seriously challenged traditional gender roles and declared that customary male privileges and practices were no longer acceptable. Reaffirming the warrior role was a response to that challenge as well as an attempt to recover from Vietnam. Even Oliver Stone's critical Vietnam film, Platoon (1986), shows a conflicted young soldier emerge from battle as an adult. Only Stanley Kubrick's Full Metal Jacket (1987) found no redemption in war; in his film, both the military training experience and the permissive lawlessness of war infantilized men.
Paramilitary culture also stresses that soldiers and police are not the only ones to do battle. The warrior role is an identity for all men, not just an occupation for some. All men, be they bankers, professors, factory workers, or postal clerks, can be warriors, always prepared for battle against the enemies of society.
Most of all, paramilitary culture has provided men with a world view that frames virtually all social conflicts between groups as potential life-and-death confrontations. America became the imaginary war zone. Images of the enemy proliferated. Poor Mexicans who immigrated to the United States became "illegal aliens" and were accused of being disease-ridden, drug-dealing, and communist-infested. New racist and neo-Nazi groups surfaced, each filled with dreams of killing blacks and Jews and federal officials of the "Zionist Occupation Government." Fears of "terrorist" attacks by foreign and domestic groups have prompted massive escalations in police arsenals. Ordinary people across the country have felt that "terrorists," foreign invaders, black gangs, economic collapse, nuclear war, or some other chaos-causing foe would soon appear. And many have hoped for this imaginary enemy to come, because then it could be slain or mastered. With the enemy's death, their fears would vanish and America would be restored.
Cultural regeneration through violence found a political partner in the Reagan administration. As a Hollywood actor, Reagan had starred mostly in secret agent, war, and western movies. As a politician, during his election campaign he promised to reject the "Vietnam syndrome" that had inhibited the Carter administration from military intervention. Once in office, he showed strong interest in war fantasies, such as Tom Clancy's novels about American and NATO victories in World War III battles, The Hunt for Red October (1984) and Red Storm Rising (1986). Extensive covert operations and military interventions by elite units became favored solutions to combat the spreading tentacles of what Reagan called the Soviets' "Evil Empire."
To be sure, not everyone supported Reagan. Tens of millions of people voted for his opponents in 1980 and 1984. Nor did every American deeply believe (or even casually fantasize) that either threatening to fight or actually fighting a new war would solve the country's domestic and international problems. Nor did the old anti-war movement of the 1960s completely disappear. Activists moved on to new issues, such as opposing intervention in Central America.
However, the cultural and political movements that embraced the warrior and war as sacred symbols of America achieved a crucial kind of hegemony in the 1980s. First, the desire for a victory to bring mythical redemption was strong enough that many people accepted the principle that "small" wars, air strikes, commando raids, and other "limited" military engagements were both politically and culturally good for the nation. In accepting this principle, the Vietnam War's major lesson was ignored, namely that war cannot be rationally managed and controlled. It is easy to underestimate enemies, particularly those from less technologically advanced societies. But when the "enemy" fights back as hard as possible, small wars can quickly escalate into big ones.
Second, the public desire for victory was also strong enough to avert massive public outrage and political crises when several of these new covert and overt military exercises failed. For example, after 241 marines were killed in Beirut in October 1983, a new sense of national power was generated days later by the invasion of Grenada. The drama of success in Grenada was far more palatable to most Americans than a sustained investigation and public discussion of what those marines were doing in Lebanon and how they had become such an accessible target.
In 1984, it looked as if this new interventionist foreign policy would be partially restrained. Congress passed the Boland Amendment, prohibiting military aid to the Nicaraguan Contras. In response, the White House's National Security Council and other intelligence operatives initiated covert fund-raising operations (including the sale of 5000 TOW antitank missiles to Iran) to support them. Yet when these covert transactions became public in 1987, public reaction was sharply split, even though Iran was probably America's most hated enemy. While some people were angered and frightened by these illegal acts, many Americans saw Lieutenant Colonel Oliver North as a hero (like Rambo) who righteously overcame Congress's self-imposed restraints to do what was necessary to win. Consequently, "Irangate" did not cause a legitimation crisis for the Reagan administration. That same year the United States sent warships to the Persian Gulf to protect oil tankers. Subsequently, they sank some small Iranian gunboats, replacing the Iran-Contra affair with another victory.
The first significant threat to this mutually reinforcing cycle of political and cultural regeneration through violence came in the summer of 1989, when the reforms initiated by Soviet leader Mikhail Gorbachev signaled that the post–World War II Communist order was undergoing serious changes. During the late summer and fall months, strong popular movements overthrew the Communist regimes of Eastern Europe. Important changes also took place in Central America during this same period. The Nicaraguan Contras began to fragment politically in the face of an impending peace settlement. In El Salvador, the Farabundo Martí National Liberation Front (FMLN) guerrilla offensive in December discredited U.S. claims that the insurgency had been defeated. Thus U.S. foreign policy in Central America was in crisis.
President George Bush, former director of the CIA under Gerald Ford and later Reagan's vice-president, and other seasoned U.S. leaders had spent their entire political careers fighting the Cold War. They were unable to respond quickly to the drastic internal changes in the Soviet Union and Eastern Europe. The U.S. leadership instead reaffirmed the old policy of fighting an enemy to illustrate American power. Drug lords became the new target.
The December 1989 invasion of Panama by U.S. forces achieved high acclaim after Manuel Noriega was forced to turn himself over to U.S. authorities for trial on drug trafficking charges in Florida. Lee Atwater, chairman of the Republican National Committee, said that with the invasion President Bush "knocked the question about being timid and a wimp out of the stadium."[28] Public opinion polls showed from 77 percent to over 80 percent of respondents in favor of the military action. Syndicated columnist David Broder concluded:
The static on the left should not obscure the fact that Panama represents the best evidence yet that, 15 years after the Vietnam War ended, Americans really have come together in recognition of the circumstances in which military intervention makes sense. The elements of agreement have been in place for some time. President Bush's contribution is to demonstrate that the new national consensus will survive when tested.[29]
President Bush certainly agreed with Mr. Atwater's and Mr. Broder's assessments that he had proven himself a warrior-leader and had a political and cultural consensus for more military expeditions abroad. In August 1990, he ordered over 200,000 U.S. soldiers to Saudi Arabia in response to Iraq's seizure that month of oil-rich Kuwait. Later that fall, Bush declared that he had "had enough" of Iraqi dictator Saddam Hussein and did not want to wait a year or more to see if the United Nations economic embargo of Iraq would cause it to give up its conquest. New orders were given to double U.S. forces to over 500,000 troops. Troops and aircraft from Great Britain, France, Italy, Egypt, Syria, and other Middle Eastern countries also moved into positions in Saudi Arabia. In late November, the United Nations Security Council passed a resolution authorizing the use of force against Iraq if that nation did not leave Kuwait by January 15, 1991. In January 1991 the U.S. Congress passed a resolution authorizing President Bush to go to war. On January 16 at 7:00 P.M., American fighter-bombers bombed Baghdad; the Cable News Network broadcast the bombing live. Within six weeks, the campaign was over, as allied troops retook Kuwait. What was striking about the war was not only its short duration but also its resonance with the earlier war—Vietnam.
Two hours after the bombing began, President Bush addressed the American public on television. In that speech, he promised that the new Persian Gulf War would not be like the war in Vietnam. This time, he said, American forces "will not be asked to fight with one hand tied behind their back."[30] Like Secretary of Defense Weinberger before him, President Bush saw U.S. defeat in Vietnam as the result of self-imposed restraints rather than of the efforts of Vietnamese who had been fighting for generations against various foreign occupations of their country. Unleashing the American military, he believed, would assure victory. As he said later in his January 29th State of the Union speech, the "dark chaos of dictators" would be vanquished and a "New World Order" would be established.[31]
Many other key features of the war culture that developed in the 1980s helped frame the ways the Persian Gulf War was described and pictured. Both President Bush and the reporters and experts interviewed on television news frequently talked about the struggle against "Saddam," a practice that transformed a war involving over a million troops into an imaginary duel between two lone paramilitary warriors.
At the same time, the news media showed deep fascination with mod-
ern weapons systems. Most striking of all were the films taken from video-guided air-to-surface missiles and "smart" bombs. Distant bunkers appeared in the middle of the camera's cross-hairs and then got larger and larger as the missile or bomb approached its target. Suddenly, like magic, the bomb detonated and the target was destroyed. The image of America successfully waging high-technology warfare against Iraq created a world that made "sense" at a very deep level to many people. The issue at stake was good versus evil, and the problem evil posed was being solved by technology. In this way Tom Clancy and the many other "techno-thriller" novelists made their presence felt.
Across the land, both American flags and yellow ribbons appeared, tied to homes, cars, and coat lapels. Each ribbon signified "support" for U.S. soldiers in the Persian Gulf, but what this support meant was not certain. Undoubtedly, many Americans learned from the agony of Vietnam veterans, and consequently wanted to make sure the troops knew that people cared about them, regardless of the rightness or wrongness of the war. However, the strong public support for fighting Iraq also indicated that many Americans viewed their displays of patriotic symbols as essential actions needed to win victory.
Taken to the extreme, this view led to the development of racist paraphernalia, such as T-shirts showing an Arab riding a camel with an F-15 bearing down on him, with the caption "I'd fly 10,000 miles to smoke a camel." The fantasy of becoming personally empowered through vicariously killing the enemy was, of course, one of the principal attractions presented in post–Vietnam War movies, pulp novels, and paramilitary magazines. With the camel jockey's death, everything would return to "normal." "It's almost like the whole burden of Vietnam has been lifted off everyone's shoulders," said Clayton Yeutter, Lee Atwater's successor as chairman of the Republican National Committee. "Americans have pride again."[32] Or as George Bush exclaimed after the defeat of Saddam Hussein, "By God, we've kicked the Vietnam syndrome once and for all."[33]
Conclusion
Thus, although the end of the Cold War creates the possibility of a far less dangerous, more peaceful world, the transformations in the Soviet Union and Eastern Europe do not by themselves ensure that peace will come. As Walter Benjamin indicated, since "our whole being is penetrated by winning or losing a war," making war or peace is not simply a matter of flipping a switch and choosing a foreign policy. America lost the Vietnam War, and much of its symbolic life—its heroes and dreams, its movies and novels, its therapies and monuments—has been devoted to coping with this loss.
Defeat in Vietnam challenged the traditional national culture of regeneration through violence. Critical novels, oral histories, and films provided some sense that the conduct of the Vietnam War contradicted the professed American value of fighting for freedom. A significant part of the intelligentsia eventually came to think that the war was wrong, as did sections of the general public. However, the older mythology of war, with its twin promises to society that boys can achieve full manhood through the ordeal of combat and that their victories will renew the entire society, proved to be stronger.
Both the Vietnam Veterans Memorial and the therapy provided to PTSD victims have helped heal some of the pain felt by Vietnam veterans, their families, and the larger public. But these therapeutic forms developed as subordinate cultural constructs. Their limits for healing were determined by the repression of open debate on the war and the refusal to make peace and establish new relationships with the Republic of Vietnam and the Vietnamese people. Because of these limits, much of the healing effort was recuperated by the state as a way of restoring America's martial tradition.
This does not mean that American war culture, with the heroic warrior as its national symbol, can never be overcome. But the strength of that culture and its mythic appeal as a path to personal and social empowerment and adventure must be acknowledged. The "representations, images, treasures" and excitements of a new peaceful world will require much more elaboration before they convince skeptics that peace offers more than more war and that the way of the warrior belongs to the past, not the future.
Nineteen—
Criminals' Passions and the Progressive's Dilemma
Jack Katz
Introduction
Progressives in the United States, defined here as those who are at once critical of business elites and supportive of policies that directly benefit the lower classes, now face street crime as a daunting political problem. Conservative politicians triumph in stirring up fears of street crime; they can maintain a broadly moralistic posture even as they discard particular leaders who have become tainted by scandal. But progressives, who occasionally have their own problems with corrupt leaders, have become virtually speechless on criminal violence. Progressives risk alienating important constituencies—especially supporters of civil liberties, minority groups, and socialists—if they rail against violent predatory crime; and they appear morally weak to many voters when they drop a posture of indignation and speak in positive tones on behalf of remedial programs.
The current quandary created for progressive politicians by criminal violence is not simply the result of right-wing campaign advertising tricks, as apologists too often believe; the current difficulty is deeply rooted in the structure of progressive thought. The makeup of progressive thought is itself a complex matter. Over this century, the American progressive agenda has not had a uniform or constant partisan character. Before World War I, "progressives" were often Republican; and Democrats, since World War II, have been deeply divided between progressive and conservative wings. But despite these partisan shifts, and underlying the complexities of support for specific policies, progressives have adopted several distinctive positions on crime.
Since 1950, three lines of initiative, corresponding to three independent intellectual sources, have shaped progressive policies toward violent criminals. In the 1950s, theories of depth psychology were invoked to depict violent criminals as emotionally disturbed and to reform criminal court procedures and penal institutional policies that had been characterized as insensitive, cruel, and ineffective in reducing crime. In the 1960s, sociological writings that attributed criminality to a contradiction between materialist culture and inadequate economic opportunities were drawn on to develop various governmental programs in an effort called the War on Poverty. In the 1970s, progressive forces in the legal profession succeeded in implementing norms of legality in various American institutions, with significant if unforeseen consequences for the ability of government to implement progressive policies in criminally violent sectors of the population.
In the following pages, I will argue that the progressive's current dilemma with violent crime has been structured by: (1) an emergent awareness of the repressive consequences of reforms inspired by psychological theory; (2) the inadequacies of sociological theories of crime for appreciating the passions in criminal motivations; and (3) the success of efforts to institutionalize legality, which has placed deviant sectors of the population at a historically unprecedented remove from the conventionally respectable centers of American society. The result has been a thoroughgoing failure to develop policies that can grasp the realities of criminal violence, either intellectually or practically.
Depth Psychology and Criminal Justice
Early in the twentieth century, progressive forces operated through social networks that coordinated academic research and private philanthropic efforts. Progressives produced a large body of social research criticizing criminal justice institutions as insensitive to the urban poor, and they mounted recurrent waves of pressure demanding institutional reforms from state legislatures. Their most celebrated permanent success in the domain of criminal justice was the creation of specialized courts and confinement facilities for juveniles. Depicting a process in which the age-indiscriminate housing of offenders led to the socialization of youths into mature criminal life-styles, and calling for institutions responsive to the individual needs of youths, the progressives created a national climate of opinion in which resistance to the differentiation of adolescent from older offenders in criminal justice administration was regarded as barbaric and archaic. Beginning in Chicago in 1898, juvenile courts were soon established throughout the nation.[1]
Progressives originally appreciated juvenile delinquents, in a sociological perspective, as members of vulnerable social groups, namely as residents of impoverished, usually immigrant, urban neighborhoods. By the 1950s, progressive thought about juveniles had become dominated by psychological perspectives. Settlement-house and related local-action philanthropic agencies became self-consciously "professionalized," in
part through their increasing governance by psychological social workers. When publicity about "gang" violence in American cities created a sense of crisis in the 1950s, the psychologically trained incumbents of the old-line social agencies called for further differentiation in the treatment of juvenile delinquents, not only by age category but by type of personality disorder.[2]
During the 1960s, progressive opinion on the juvenile court shifted in a way that came to characterize disenchantment with a series of criminal justice reforms that had been instituted to promote greater sensitivity to individual needs. If the juvenile court system had minimized earlier problems of physical abuse and moral corruption of young by older offenders, critics pointed out that the juvenile justice system had also extended the reach of state control, providing justification for state funding of an increased confinement capacity and for confinement criteria, such as being a "person in need of supervision," that were inherently vague and administratively arbitrary. Juvenile court judges, operating in a special adjudicatory environment, sought to distinguish "good kids" from "hardcore offenders" within proceedings lasting a matter of minutes.[3] This prototypically modern criminal justice institution hardly seemed to exercise power in an enlightened manner.
From the 1930s to the 1960s, the insanity defense had been treated as a vehicle for bringing compassionate, progressive social science into the criminal justice process. Legal academics demonstrated the sophistication of their scholarship by bringing psychological theory into their teaching and treatises. When violent crimes drew extraordinary journalistic interest, psychological arguments were promulgated to the general public by writers intent on opposing capital punishment.[4] But discontent about discretionary power grew rapidly in the sixties, and an early target of criticism was the realm of discretion authorized by psycholegal interpretations which held that impassioned violence reflected insanity.
In the early 1960s, a criticism emerged that pointed to the ironically repressive implications of the insanity defense.[5] In some jurisdictions, the prosecution had been able to raise the issue of insanity, over defense objections, in order to reduce its burden of proof. More generally, a defendant who had been adjudged insane and unfit to stand trial, or one who had escaped conviction on a successful insanity defense, faced a liability to confinement beyond set statutory terms and might be placed interminably under the administration of institutional psychological experts. In the 1970s, Western intellectuals found it increasingly difficult to ignore disturbing resonances from revelations of the repressive use of insanity issues in Soviet criminal processes.[6] Psychological science, previously thought to be a perspective of enlightened compassion working for the defense, came to be seen as a tool of limitless state repression.
Progressives promoted the following debate over the proper content of an insanity defense: should the traditional standard, that exculpation depended on showing that the defendant did not know the difference between right and wrong, be replaced by a deeper psychological understanding that, even knowing the moral meaning of given behaviors, a person might be unable to control his or her actions because of an underlying mental disease or defect? The inclination of progressives to adopt psychological theories alleging that criminal conduct might be caused by "mental disease or defect" rather than by a person shaping his or her behavior according to a coherent, if idiosyncratic, frame of understanding, was intellectually as well as administratively problematic. Throughout the shift among progressives away from trust in psychological expertise and toward a suspicion of repressive state power, there was little effort to appreciate the details of the lived experience of criminal violence. Progressive thought was consistently unable to grasp the non-rational, moral, and sensual logic of impassioned personal violence.
Even though offenders may not formally think out the rationale of their behavior, their violence has a tripartite coherence.[7] First, impassioned attacks typically are righteous efforts to defend some version of the Good. A father who beats a five-week-old child to death may initially seem to have gone berserk, but the details of his mounting violence—the issuance of orders to stop crying, the shaping of violence into forms of discipline rather than into more expeditious methods of killing—commonly describe an effort not so much to kill as to restore respect according to the biblical injunction that parents be honored. Similarly, a spouse, in fatally attacking an adulterous mate, will not simply and temporarily go mad, but may become mad about defending the sanctity of the marital union. A fellow taking up a shotgun and leveling it at a driver who has blocked his driveway will understand himself to be standing up for the rights of property owners in general. Two fellows, enraged and in a violent struggle, will typically be refusing to accept insults that not only they, personally, but that any "real man" should refuse to tolerate. Far from giving way to dark or evil impulses, the attackers will understand themselves as trying to be Good in some sense that they, at the moment, feel is widely embraced by the community.
Second, the violence in righteous attacks is not simply a "release" or a resolution of frustrated emotions, but a positive, creative act. In essence, the attacker attempts to write the truth of the offense he has received in the body of the person attacked. The attack at once reconstructs and attempts to transcend the offense. This is sometimes evident in a struggle
to turn the mouth into a compliant tool for delivering curses. Sometimes it is seen in the homologous relationship between the attacking acts and the background they respond to: he burned my school books in a trash can, I'll burn him in his bed; she insulted my taste in music, I'll smash her record player.
And third, the extremity of the act is felt and specifically desired by the offender, who acts out of a sense that he must take a last moral stand in defense of his respectability. Many of the paradoxes of impassioned homicide—that these serious acts occur so often during leisure times and in casual social settings, that they occur so rarely against superiors at work—make sense if we appreciate that enraged attacks, although emerging from humiliating situations, tend not to emerge when the humiliated person feels there is some other time and place where respect might be taken for granted.
Righteously impassioned violence is typically employed to coordinate an array of activities (insulting gestures and curses, pushes and shoves, attacks on inanimate objects, strategies to scar the victim's body, and so on). Such violence emerges from a logic that is profoundly moral and sensual, if not reflectively generated. These are not simply aberrations or "mental explosions"; righteously enraged violence emerges through universal interpersonal rather than unique psychopathological dynamics. Yet these acts are also products of idiosyncratic systems of brutal meaning, emerging over an extended period of time and through multiple attacks, through an elaborated, private semiotics of violence. And there lies at least a small promise for effective intervention.
Shortly after the attack, offenders often "wake up" to regret their new existence as criminal killers in a world that regards them not as protagonists in immortal dramas written in biblical terms, but as mortals caught in the prosaic workings of the criminal justice system. The most recent progressive policy response to impassioned domestic violence, supported now by some suggestive experimental evidence, encourages early and sustained intervention in domestic disputes by local criminal justice agencies. Such intervention appears promising in part because arrest, confinement, court appearances, and mandated counseling may together significantly undermine the moments in which transcendent myth governs experience.[8]
What lends a perhaps surprisingly progressive character to this new impetus toward police intervention into the most intimate regions of private life is its affiliation with the feminist and anti–child abuse social movements. But the state's social welfare bureaucracies do not operate in a manner indifferent to social class. Through the administration of income and housing assistance, foster care and adoption programs, drug treatment and public medical services, the state massively and thor-
oughly interrogates the private lives of the poor. (The middle class knows nothing comparable. The forms used by mortgage loan officers, for example, have no place to record suspicions of spouse and child abuse.) And so, over the course of a generation, the progressive perspective has moved from a posture of compassionate, deep psychological understanding of the offender toward a moral alliance with the victim in an outraged call to expand the state's punitive and constraining powers into the private lives of the lower classes.
Sociological Materialism and Criminals' Passions
About fifty years ago, Robert K. Merton, in his paper "Social Structure and Anomie,"[9] formalized what became the dominant sociological version of the progressive view of crime causation, at least in journalism and in the public's understanding. Once counted as the most frequently cited professional article in the history of sociology, Merton's argument had both theoretical and political appeal.
Merton developed two now-familiar themes: deviance is the result of a commercial culture that stimulates desires, within a social structure that unequally distributes the opportunities to fulfill material goals. Published when the Depression was well advanced and the Democratic party was well entrenched in national office, the theory cast a chastising eye in familiar directions, appealing to widespread sentiments that structural problems could unjustly drive masses of otherwise good people to desperate choices. The argument also appealed to culture critics who disdain the false gods of mass advertising.
While positing the emergence of an intermediate state of anomie from which various lines of deviant action might emerge, Merton's theory put materialistic motives and opportunities for material gains at the driving foundation of deviance. A brief review of some central patterns in youth crime and in predatory adult crime will indicate the massive irrelevance of this materialist version of progressive thought to the lived experience of deviance.
Delinquency
Merton's theory made its most rhetorically successful appearance in statements on youth crime, or "delinquency." Revised and applied specifically to juvenile delinquency by Richard Cloward and Lloyd Ohlin,[10] "opportunity theory" became part of the intellectual foundation of the Democratic administration's War on Poverty in the 1960s.[11] Three patterns indicate the broad inadequacies of this version of progressive thought in application to youth crime.
First, despite a popular tradition of locating in the lower class a distinctive culture of toughness, a fascination with aleatory risk taking, and the death-defying pursuit of illicit action, mortality statistics on males aged fifteen to nineteen point to a quite different contemporary social reality. In the United States, the mortality rate for white males in late adolescence is not clearly lower than the rate for blacks; in some years it is slightly higher. Teenage white males die from motor vehicle accidents at about three times the rate for their black counterparts. Teenage black males die from "homicide and legal intervention" at almost five times the rate for their white counterparts. The white rate is not simply the product of the greater availability of cars to white adolescents; the adolescent white male rate of death from auto accidents is several times that of white female adolescents, just as the black male adolescent rate of death from "homicide and legal intervention" is several times that of the black female adolescent rate.[12] White male adolescent mortality seems related not simply to access to automobiles but to the way they are driven.
These mortality data indicate criminogenic forces that operate powerfully across the lines that progressive theory traditionally addresses: the fascination with tempting fate is common among young men, and is often combined with a widespread preoccupation with the cultures of alcohol and drug use. As with "gang" violence, youths' reckless driving typically emerges spontaneously and without connection to material, instrumental objectives. In the domain of fatal criminal activity by young males, social inequality appears to structure the form more clearly than the incidence of deviance. C. Wright Mills's old argument that social pathologists are biased to focus on deviance in urban, lower-class, ethnic minority settings is still worth heeding.[13]
The irrelevance of materialist causes and intentions explains the similarly general neglect by sociologists of relatively innocent forms of youthful criminal activity. Vandalism, for example, has rarely been studied. Vandals take material destruction, not acquisition, as their objective, except when they take an item as a souvenir of an adventure. "Joyriding" forms of auto theft are not typically instrumental efforts to remedy problems of access to transportation. Adolescents who shoplift often have the money to buy the stolen goods in their pockets, or they discard the booty soon after it has served its function as trophy. And young burglars often break and enter successfully and then wonder what they might reasonably do once inside. Some act out a version of the Goldilocks story, sitting here and there, messing up this or that room, perhaps taking a beer from the refrigerator. When caught, these amateur offenders frequently experience a metaphysical shock as they simultaneously realize, prospectively, the conventional criminal definition that their activity may be given; and retrospectively, that they had been operating in a world of myths.
None of the powerful attractions of these sneaky thrills has mattered much to social research and social theory. But precisely because these forms of deviance have cross-class appeal to amateurs, they can lead us to recognize motivational dynamics that social theory should grow to appreciate. In part, these forms of sneaky thrills are attractive as games: they have goal lines, tricky maneuvers to fake out the opposition, and a concern to "get away with it" that justifies action, regardless of the value of the "it." In part, they are seductive plays with sexual metaphors of foreplay, penetration, and orgasmic climax. In part they are strategic interactions over social visibility in which the line between what others think and what one knows about oneself is put to the test.
And they all share as well the temptations of desecration. Vandalism pursues a pure fascination with defiling the sacred. Its outrageously senseless character, to victims, points toward its explanation. The common view misunderstands vandals as simply trying to destroy; but destruction appeals because it is a strategy to release positive powers. Across social and ethnic divisions, adolescents live in a world of material goods that, independent of their market value, serve, through orderliness and surface perfections, to provide a sense of cosmological coherence to adults.[14] Wherever things soiled and things out of place stir anxieties not rationally related to instrumental activities, there adolescents can find objects that are treated as possessing totemic power.
But it has been common to see only a negative force—"frustration" or "deprivation"—behind vandalism, just as it is common to see only "acting out" or "frustration" behind a child's knocking down of a pyramid of blocks. Yet the construction has a compelling majesty for the child; as parents celebrate it with mock expressions of awe, the whole obviously attains a spiritual presence that is infinitely greater than the sum of its parts. Drawn to the visible majesty that is obviously before him, the child seeks to touch the untouchable, acting more in the spirit of exploration than of malevolence. Observers who can see only "anger" and a "mess" speak more about the limits of their own theories than about the lived experience of youth.
More clearly than in any other area of criminal behavior, the traditional progressive view has dominated popular and academic understandings of street "gang" violence. Related to the great rhetorical appeal of "opportunity" theory and the call for expanded job opportunities to which opportunity theory predictably leads, has been the extraordinarily tired intellectual texture of the field. For a while in the 1960s, books on delinquency won awards from professional research societies without working their arguments through data of any sort, quantitative or qualitative.[15]
Part of the narrowness of traditional inquiry into group delinquency
has been a neglect of comparative class analysis. At least since World War II, a series of forms of collective deviance have been embraced by middle-class youth: beats, hippies, punks. What makes a comparative examination important is not simply that it makes analysis more deeply sociological; it helps isolate the essential attractions of violence to ghetto youth gangs.
Middle-class and ghetto forms of collective youth deviance have been regularly and radically different along a series of dimensions. While members of ghetto fighting groups profess a proud affiliation with a local neighborhood and with identities as "homeboys," middle-class youths cover up their affluent and suburban origins in tawdry dress and in trips to city locations. While middle-class youth develop "gender blender" styles of deviance (jeans for everyone, makeup for males and females alike), ghetto gangs organize power around males and sustain emphatically dominant relations with female acquaintances and groups. Middle-class collective youth deviance movements identify with underclass and pariah groups, symbolically integrating diverse classes and ethnicities. Meanwhile ghetto gangs have a segregating thrust, dramatizing inherent, irremediable, deadly differences with just those others in society who are most like them in age, sex, geographic location, social class, and usually ethnicity. And middle-class forms of collective deviance typically take leftist overtones, often defining themselves as collective movements through historically decisive battles with police and the "Establishment." Ghetto gangs, on the contrary, are by and large fascistic. Even when they are being hunted by the police, they treat official authority as essentially outside their sovereign reality and employ various symbols of aristocratic, in-born rights of absolute authoritarian rule.
In crucial respects, the traditional progressive understanding of ghetto gang violence has it precisely backwards. Far from being a response to limited opportunities to enter the larger society, the gang encourages its members to look backward and downward; the gang has its attractions as a vehicle for demonstrating elite status by dominating others as aristocrats would dominate a lower caste. Thus Mexican-American barrio warriors do not emulate the advertised models of modern commercial affluence; they mimic the arrogant posture of premodern landed elites. And contemporary black gangs in Los Angeles continue to enact a fascination with the legacy of their southern rural heritage in which one group dominates another group on the bare basis of color (in this case, red- and blue-colored clothes).
The ghetto gang's principles of social organization and culture are those of childhood: stay close to home, relate primarily to members of the same sex, engage the myths and symbols of Knights and Lords and other fantastic premodern rulers. Ghetto violence is, then, not a cause of
gang formation; the fabric of gang life has its own fascinations, and violence is valued, indeed treasured, as a way of transforming what might be seen as a "punk" affiliation with childhood ways into a heroic loyalty to the neighborhood or group.
The social divisions central to violent ghetto gangs are not between the ghetto and the affluent ethnic majority society; they are divisions within ghetto communities. The arrogant style of gang domination shows up vividly in juxtaposition to the humility of others in the gang's generational or neighborhood background, and not at all in anticipation of a dismal occupational fate within a long biographical view or a larger socioeconomic framework. What progressive commentators have not wanted to acknowledge is that America's distinctive, century-old youth gang problem is related neither to a more avidly commercial culture nor to a greater degree of rigidity in stratification in this country than in economically comparable Western societies. It is related to the distinctive external and internal migration patterns in U.S. history that have repeatedly brought in masses of ethnic minority, peasant-origin peoples who maintain traditions of deference toward authority and vivid concerns over respectability. Along with such strange bedfellows as the Ku Klux Klan and Hell's Angels, ghetto gangs are the form that the mass appeal of the fascistic spirit, which revels in lording nativist advantages over humbled ethnic minorities, has taken in our country.
With respect to the ghetto gang phenomenon, progressive theorists have been caught on the horns of an inescapable dilemma. They cannot confront the sensual and moral attractions that inspire gang members and then turn to present these youths as petitioners for leftist policies. And they cannot acknowledge the central contribution of rural-urban migration patterns in establishing the framework for gang formation without turning cruelly against morally compelling masses of optimistic newcomers.
Stickup
The contemporary social reality of robbery provides superficial support for ideas that offenders calculate crimes as instruments for obtaining material goals. Robbers, after all, typically demand money, and in interviews, robbers often say that they do it for the money. But:
• Robbers are often employed when they commit their offenses, and their jobs sometimes provide them with information, and with opportunities for emotional hostilities with superiors, that generate an insider's knowledge and resentments as resources for the offense. The instrumental, materialist perspective rebounds with an appreciation of the low pay and status of the jobs robbers generally hold.
But the progressive's perception of white-collar crime turns the debate around once again. If good jobs don't stifle criminal motives, why assume that bad jobs breed them?
• A look at how little planning goes into the typical street robbery, and the very limited rewards and high risks of capture the offense entails, suggests that the instrumental perspective is applicable only superficially, if at all.
• If there are instrumental, materialist reasons for robbery, it is striking that poor women can virtually never grasp them. And in places where the ethnic comparison can be made clearly, as in Southern California, poor black males commit robbery four to five times more often than do poor Hispanic males. This disproportion is much greater for robbery than it is for nonproperty violent offenses, such as assault and homicide, or for nonviolent property offenses, such as burglary.[16]
• The fact that robbers are typically well aware of alternative opportunities to make more money at less risk through other illegitimate activities, in nonviolent property crime and in vice markets; and the fact that many burglars, check forgers, pimps, and drug dealers abjure robbery as a fool's game should persuade researchers to attend to the special, nonmaterial attractions of robbery.
• If lack of legitimate opportunity motivates entry into and persistence with robbery, the progressive is well aware that, after the many years in prison that virtually all persistent robbers will serve, legitimate job opportunities will not be any better. And yet, when the ages of robbers are noted and compared to the number of people in the population of the same ages, it appears that robbery peaks in the early twenties and declines precipitously a few years later.
• And, what is perhaps most neglected about the social reality of robbery, early experiences with robbery often show that materialist motives are clearly secondary during the formative stages of the robber's criminal career. Robbery as a "professional" line of criminal work typically grows out of adolescent years spent cultivating an awesome presence as a badass. Teenage badasses will often go out of their way to embed robbery in a transcendent immoral project.
A repeat offender's first robberies will often victimize peers and even friends, and will not be limited to situated action, but will make robbery an ongoing feature of his identity. A fearsome fellow may, for example, ask a series of acquaintances for small "loans" that are, it is initially promised, to be paid back at some future date. When those dates come, and when repayment dates are rescheduled again and again, the cynicism behind the debtor's excuses and promises, delivered with casually subtle in-
timidating undertones, will become increasingly transparent, with the result that both parties will realize retrospectively that the "loan" was the first stage in an artfully interminable robbery.
In recent years, the instrumental, materialist perspective has reached into the patterns of robber-victim interaction to find supporting evidence. Asked to generalize about the matter, robbers will typically say that they don't want to use violence and only do when a victim's resistance makes it necessary. A great variety of studies have seemed to show support in the form of high correlations between resistance by victims and their experience of suffering injuries in robberies. The ironically inverse relationship between the means of intimidation used by a robber and the probability of injury to victims seems to provide further support. Considering nonfatal injuries, robbers who use guns are less dangerous than those who use knives, who in turn are less dangerous than those who use clubs or strong-arm force, presumably because the more lethal the means of seeking compliance, the less likely one will be required to use it. Various demographic characteristics of offenders and victims—age, sex, number of co-offenders—significantly distinguish robberies from assaults and nonrobbery homicides, also indicating that a different, presumably more rational, framework governs robbery than governs impassioned violence.
But a more intimate view of the meaning of robbery within its full social and processual context undermines the persuasiveness of a rational, instrumental understanding of the robber's behavior. Correlation, we must always recall, does not establish causation; but more troubling is the distinct possibility that there is a causal relation between victims' resistance and robbers' use of force that runs in a direction opposite to the one that the rationalist view would presume. Resistance by victims may be their response to a robber they perceive as bent on gratuitous violence.
Moreover, even if victims resist and trigger the offender's violence, considering the nature of the resistance victims produce, they often do not provide very compelling reasons for a violent response. What is usually coded by researchers as "resistance" by victims is an array of behaviors, such as a slow pace in complying with the offender's demands, shouts for help, or attempts to escape, that do not threaten the offender physically and that increase the offender's risk of capture less than the offender's own violent response. Given the lack of planning typical of the robbery, the offender would often be much more rational to abandon the current victim and move on to the next criminal opportunity he encounters.
Most importantly, offenders have so many reasons to use violence in robberies that virtually all, and thus ultimately virtually none, of their brutality can be explained instrumentally. After taking all the betting
money at a crap game held in a public housing project, robbers are well advised to fire a few warning shots because their victims are likely to have guns and to use them. When robbing a pimp or drug dealer whom they know, and who knows them, robbers are "rational" to defend themselves from subsequent attack by killing their victims. Because they often rob with others who are as fearsome as or more fearsome than they are, and because they cannot bring their disputes over how to split the take to legitimate forums, robbers send an instrumentally sensible message to their co-offenders when they viciously attack victims. And because they are frequently involved in vice activities that make them well-known carriers of large amounts of cash in their communities, robbers have additional reasons to display a random proclivity to "nonrational" violence.
Given the context of the typical robber's social life, gratuitous violence carries so many instrumental benefits that it may almost always be "rational." Most fundamentally, the uncertainties in robbery are so numerous and so inherent in the offense that offenders cannot rationally anticipate using violence rationally. They know that they cannot know if the victim will have the means and inclination to respond irrationally to their offer to forbear violence in exchange for money. They often suspect that even if they make their use of force precisely contingent on the victim's compliance, their co-offenders, who are often under the influence of intoxicating substances and who often boast badass reputations themselves, may not. At some point in their careers most offenders will realize that they do not know themselves well enough to know whether they will respond rationally to an unexpected occurrence within an offense. In the end, those who would persist in robbery know that to engage in this type of offense, they must steel themselves to it beyond rational calculation. And toward that end, gratuitous violence is an especially valuable self-portraying resource.
In referring to their offense as "stickup," robbers summarize the attraction in the offense that makes an instrumental understanding superficial. At once phallic and fiercely willful in its connotations, "stickup" is attractive almost exclusively to males and to males who are inclined to treat personal relationships as well as robbery scenes as spontaneously subject to violent domination by their will. "Stickup" promises to freeze the will of victims so that the offender can become the only being present with a purpose that must be respected. Within such an egocentric cosmological project, the robber need not have reasons for his behavior because he need not attend to the reasons of anyone else.
White-Collar Crime
Contemporary realities, broadcast so vividly and continuously that they can no longer be ignored, undermine social theories that would attribute deviant motives either to materialist culture or to inequalities in social
structure. White-collar crime generates waves of publicity far out of proportion to the little puddles of criminal cases that are officially filed against political and business elites, and white-collar crime does not fit neatly into the critique of commercial culture. The motivational dynamics behind great political scandals, such as the abuses of power in "Watergate" and "Irangate," are connected less clearly to material self-seeking than to an image of national identity under attack and needing extraordinary measures of defense. When they are committed for economic gain, white-collar crimes are incompatible with notions that deviant motives are bred in disadvantaged sections of society, and they are not neatly limited to any particular national culture. International bribery scandals have demonstrated that political corruption at the highest levels effectively tempts royal families, socialist party officials, and bureaucrats throughout the Third World. And the business forms of America's white-collar crimes—from the price-fixing scandals of the 1950s, with their anti–free market design, to inside traders operating on capitalism's heart in the 1980s—show well-placed executives criminally financing conventional life-styles, and arbitragers taking in money in amounts so fabulous that they far outpace the capacity of commercial culture to stimulate the offenders' desires. The latter talk about money not necessarily as a means to an advertised life-style, but in terms strikingly reminiscent of street criminals, as a way of keeping score in a gamelike pursuit of high risk "action."
In the light of white-collar crime, the progressive view of crime causation has been pushed back from explaining the emergence and incidence of deviant motives to the politically less inspiring claim that social inequalities only shape the form or quality of deviance. But here, on the qualitative turf of common crime, the traditional view of progressives is even less convincing. In its lived details, street crime does not clearly exhibit materialistic motives but, rather, hedonistic pursuits sometimes united with gratuitous patterns of violence that border on the sadistic.
Legality and the Marginalization of Deviance
Several indirectly connected, long-term patterns of social change that have been promoted by American progressives have had the joint, unintended consequence of increasing the social distance between the criminally violent sectors of the population and the conventionally respectable centers of society. One broadly relevant pattern of development has been the long-term expansion of higher education from the large-city-based, private universities that dominated academic research earlier in the century to rural and small-city-based state universities that became major research centers in sociology after World War II.
First-hand, qualitative, individual-case-based research on crime and
deviance has typically been produced in research centers located in large cities, often through private universities that had no local public university competition and that were responsive to community concerns to address urban problems. The development of large public universities, historically justified in part as a way of opening opportunities to those not favored by family wealth, created major research centers that were commonly located at great distances from the high-crime urban areas. As national police case data, first produced in the 1930s, and national victim survey data, first produced in the 1970s, became increasingly available, researchers could participate in the development of empirical knowledge about crime without disadvantage due to their location, provided that their research was statistical, and especially if they pursued demographic and ecological questions.
In the 1960s and 1970s, American sociology received the products of a small number of intensive field studies, many of them on youth gangs, conducted by academically based researchers located in Boston, Chicago, Los Angeles, San Francisco, and Philadelphia. At the same time, American sociology and criminology journals received voluminous contributions in which academics, housed primarily in state university research centers located primarily outside of major cities, used statistical data bases to analyze demographic and ecological issues about crime. Graduate students who today are looking for a dissertation focus can find ready access, at any major research center, to nationally administered, computerized tapes containing police, victim, and census data. Similarly convenient access to sites for first-hand field research is not as widely distributed. Despite some crosscurrents (for example, the more recent growth of public universities in some urban centers; individually negotiated grants that enable the conduct of research away from an academic home base), the dominant trend is for academic social research on crime to be conducted at a greater social remove from the deviants studied, both in geographic terms and in terms of the methodology and types of data employed.[17]
But geographic and technological changes in the social institutional basis of academic research on deviance have played a relatively minor role in moving the lived experience of deviance to the obscured peripheries of American society. Of far greater importance has been the pursuit of legality through a variety of indirectly related social movements, movements that have promoted civil rights in racial relations and in prisons, and that have attacked political corruption and union racketeering. Without any ideological planning directing the pattern, progressive forces have been more or less consistently successful in infusing the rule of law, principally ideals of due process and equal protection, through American society; while progress in reducing substantive social inequalities has been stalled, abandoned, and reversed. One of the results of the uneven implementation of the progressive agenda has been the displacement of criminal violence to the margins of the American social structure.
One dimension of this change, the historical ironies of the Civil Rights Movement in separating an Afro-American underclass from white and black middle-class society, has now been frequently noted. By reducing racial barriers to education, employment, and housing, the Civil Rights Movement created pathways out of racially segregated southern communities, and out of northern urban ghettos, that were especially useful for upwardly mobile African-Americans.[18] As one experienced fieldworker of American ghetto life has put it, the "old heads"—the locally respected, well-educated, professional and small-business elite of the black community—had become far more scarce in ghetto areas by the 1980s than they had been in the 1950s.[19] The American black ghetto, once a multiclass area containing the black bourgeoisie, the black working class, and the black urban poor, became increasingly dominated by a black "underclass." Young men who are tempted by life-styles characterized by criminal violence have always been concentrated in ethnically segregated, poor urban neighborhoods. But over the last twenty-five years, young African-American men attracted to a dangerous life-style of illicit "action" have had decreasing personal contacts with conventionally respectable black men, except in one area—through their frequent contacts with increasingly integrated police forces.
Related to the race relations side of the Civil Rights Movement, a number of ancillary movements brought legality into institutions that traditionally governed the relationship between the criminal population and the centers of societal authority. In prisons, Black Muslims, who combined freedom of expression and freedom of religion claims with complaints about racially motivated oppression, initially spurred the judicial reform of authority relations.[20] Once begun, prisoners' rights movements spread to represent white and Hispanic complainants, stimulated the formation of specialized legal action support groups, and became national. In the Wolff decision in the early 1970s, the U.S. Supreme Court made clear that the courts would broadly place procedural requirements on prison administrators' power to discipline inmates.[21] In several states, federal courts have taken control of prisons away from incumbent officials and have appointed monitors to supervise the reconstruction of everyday authority relations.
Currently, the impact of the prisoners' rights movement on prison social order is the subject of great methodological controversy. Conservative critics of activist judges have charged that judicial intervention has stimulated unprecedented waves of inmate violence directed against inmates and staff. They appear to have been right, but only in the short
run. Judicial intervention typically meant the destruction of "tender" systems in which prison administrators delegated tasks of control, including the perception of deviance and the recommendation of disciplinary action, to inmate leaders. When judges became responsive to complaints about the discretionary exercise of disciplinary power, the ultra vires power of inmate leaders was replaced with formal systems in which correctional officers "write up" alleged infractions by inmates, who then have the opportunity to contest major charges before independent hearing officers. In several major prison systems, judicial intervention appears to have stimulated inmate violence by breaking the old system of prison authority and signalling to inmates that a judicial audience now existed for their protests. But in the longer run, sometimes after a full decade of administrative resistance to judicial control of prisons, the new, constitutionally sanctioned system of prison authority has taken hold and inmate violence has receded.[22]
Yet even with the decline of inmate violence, the "legalized repression" that now characterizes prison administration has two enduring consequences that progressives must find disturbing. One is a dramatic increase in caste-like segregation among inmate groups and between inmates and staff. Initially spurred by Black Muslims, who rode the Civil Rights Movement of the 1960s into federal courts, judicial intervention has authorized inmate associations and has undermined the ability of prison administration to select inmate leaders and shape the social structure of the inmate world,[23] with the result that U.S. prisons today are dominated to an unprecedented extent by fascist-styled gangs organized around a bizarre array of racist mythologies, from Black Muslims to Mexican Familias to Aryan Brotherhoods. We should not flinch from the historical paradox that the most highly rationalized, modern form of enlightened beneficence, as represented by progressive judges battling cruelty and arbitrary power in prisons over the last twenty-five years, has led to the most intense expressions of peer-directed racist hatreds among contemporary inmates, whose vengeful and creatively vicious acts of maiming and mutilation[24] are strangely reminiscent of the ancient and spectacular official brutality that the Enlightenment revolutions aimed to eradicate.
Second, and more subtly, the legalization of prison authority, which was finally achieved on a national scale during the 1980s, has reconstituted the control capacities of prisons, expanding enormously their organizational powers to incarcerate. Before the judicial revolution, correctional officers in many state systems were "good old boys" who, through patterns of violence, corruption, and limited educational achievement, maintained social ties with inmate leaders. Now, correctional officers are better educated, much more often female, preoccupied with the legalities of power (such as the proper form to "write up" inmate infractions), and increasingly professional in their career orientations. More socially distant from inmates in gender, education, and everyday culture, correctional staffs now equip prisons with a greatly enhanced organizational sophistication. It is not a coincidence that as prisons finally became "legalized" in their daily operations during the 1980s, the raw size of the inmate population doubled on a national scale (and in California, which has been a leader in the acceptance of legality in the prisons, the prison population quadrupled), even as crime rates ended the decade at a lower level than existed when it began. Without the professionalization and legalization of prison administration, this increase in incarcerative capacity would have been opposed by judges who, when confronted with resistant "good old boy" administrations, have in extreme instances ordered the release of inmates subject to arbitrary, discriminatory, and "cruel and unusual" treatment.
Legality, in the forms of judicial oversight, the reshaping of the exercise of authority to conform with published rule, and the introduction of intraorganizational procedures for questioning authority, has changed the relationship of deviant populations to the center of American society by dismantling traditional, subterranean social bridges between the conventional center and the deviant periphery. The attack on the corruption of prison authority was paralleled by interrelated attacks on corruption in urban political machines, in police departments, and in local criminal courts. Even before Watergate, an uncoordinated but national-scale legal campaign against local political corruption had begun, as federal law enforcement institutions became gradually more professionalized and, in the nonpartisan style of pre–World War I progressives, became mobilized to "clean" government.
The historical marking point of the modern onslaught against corrupt "machine" politicians was Robert Morgenthau's leadership of the New York U.S. Attorney's office in the 1960s against Carmine De Sapio and the vestiges of Tammany Hall politics. Before Watergate, in sporadic bursts of prosecutions in Los Angeles, New Jersey, and Maryland, and after Watergate in a national wave of political corruption cases affecting jurisdictions as diverse as Chicago and Oklahoma City, federal prosecutors targeted local political organizations whose members had profited financially by selling the powers of their offices to respectable businessmen and "connected" criminals alike.[25] Along with numerous other social trends that undermined political "machines," anticorruption prosecutions dismantled a complex network of ties between inner-city deviants and respectable government officials.
When urban political machines mobilized the vote in ethnic neighborhoods, neighborhood toughs, who traditionally hung out around social
and athletic, especially boxing, clubs, were appreciated as politically valuable symbols of ethnic group pride and were at times literally as well as figuratively publicly embraced by ward bosses.[26] If intervention in the criminal justice system could not always be counted on for an efficacious "fix," many local criminals found that it made sense to keep "connected" lawyers on "retainer" and disposed to exercise "influence." A reader of the life histories of common criminals who operated in U.S. cities from the West through the East coasts will find matter-of-fact descriptions of corrupt influences on local criminal justice institutions continuing through the 1950s,[27] descriptions that have no parallel in the ethnographies and life histories that trace criminal careers within the last twenty years.
By the time in American history that African-Americans were able to be elected municipal leaders, the traditional infrastructure through which ethnic leaders reached and were reachable by residents in ethnic neighborhoods had been severely weakened by force of law. The dismantling of corrupt bridges to deviant inner-city populations is significant for progressive policies in at least two respects. The absence of subterranean connections in contemporary cities presents special difficulties for implementing remedial programs. A new educational, job opportunity, or drug rehabilitation program must now bootstrap its own social mechanisms for reaching its targets. On the other side, the increased social distance of predatory criminals from conventional social organization hardens the boundaries of their deviant life-styles and makes their motivations less ambiguously antisocial.
Consider the transformation in the role of the criminal hardman that was shaped by the legalization of labor relations through the structuring of a national legislative framework for administering them, from the 1930s through the 1950s, and by "labor racketeering" prosecutions that began in the 1960s, scored major successes in the 1970s and 1980s, and continue today. From the 1930s into the 1950s, there were hundreds of labor racketeering murders in New York City alone, and uncounted employment opportunities for strong-arm activities; in many cases the toughs conducting violent criminal activity on behalf of unions had acquired records for robbery and other forms of criminal violence in their youth.[28] For today's criminal hardman, political and labor organizations are as foreign as businessmen's clubs downtown; the life-styles of criminally violent young men now easily take them into illicit drug distribution networks at the street level, and virtually never connect them with the subtleties of the union's "business agent" role.
The moral universe in which the contemporary violent criminal typically operates has been isolated gradually but firmly by the broad institutionalization of legality throughout American society. Consider the implications of the transformation of local neighborhood commercial culture, from the days of the urban American Jewish ghetto, as documented by Louis Wirth early in this century, to the realities of the African-American ghetto today. In Wirth's ghetto, market transactions were routinely moralized; price, quality, and service were constantly subject to bargaining; fraud was an everyday risk; any transaction could provide evidence of one's acumen or prove one a fool.[29] The implementation of mass merchandising and consumer protection legislation has eradicated the routine moral dramas of negotiating consumer transactions. The poor still pay more, but they usually pay it up front; large-scale and legally reviewable marketing has made retail negotiations too costly and bothersome for most merchandisers. Today the predatory criminal, whose objectives are superficially material but whose motives are fundamentally centered on transcending humiliation and "taking" victims by making them fools, stands as a harsh, extreme, archaic representative of a lost moral world.
Conclusion
The challenges that criminals' passions represent for contemporary progressive thought reflect essentially two troublesome empirical patterns. First, while legality, or procedural justice, and substantive equality, or social class justice, have frequently been joined in progressive thought, legality has made a much steadier empirical advance in recent decades. Progressive thought has failed to adjust to this emergent disjuncture in its ideals. The increasing institutional commitment to legality in the United States has made it more difficult to reach populations of the criminally violent; in addition, legality increasingly confuses social class interpretations of crime by enhancing public awareness of white-collar crime. While progressives may take ideological comfort from the expanded prosecutions of conservative political and business elites for corruption and fraud, they cannot in the same breath blame socioeconomic conditions for deviant motives. If social class position shapes the form more clearly than it shapes the incidence of crime, reducing social inequalities cannot as easily be promoted by a promise of improved collective moral character.
Second, and more disturbing, as political advertising effectively parades the details of criminal victimization before the public, the lived experience of contemporary street crime is dominated by moral and sensual dynamics, not material strivings.
If the lived experience of crime poses fundamental difficulties for progressive politicians, there are equally troublesome implications for modern social theory in general. My overall argument, that criminality
is best explained by moral and sensual dynamics, will seem to some antiprogressive, in part because of the nature of its approach to social explanation. Crime is neither the product of extraordinary emotional pathology ("insanity" or "sickness") nor the instrumental execution of materialist plans. Its coherence is deep and detailed, but it is sensually embodied, not reflectively produced. But so is much of our routine, conventional, everyday behavior. When we write, walk, or talk, we do so interactively, anticipating how our lines of action will be seen and responded to by others, but we do not live our interactive awareness rationally; we do not produce our routine action reflectively. Rather we rely on rhythms and holistic senses of the projects we are engaged in, such that our writing is better grasped as a kind of prosaic drawing; our talking, as a usually banal singing; our walking, as a dancing that typically does not parade its aesthetic dimensions even as we rely fundamentally on them. Were we able to see the moral and sensual dimensions of everyday behavior more clearly, an analysis of the moral and sensual dynamics of crime would not stand out as damning. But we still teach a legacy of eighteenth- and nineteenth-century rationalism in our social theory classes; and our version of interactionist analysis overplays the role of reflection ("the looking-glass self") and thought, to the neglect of the embodiment of conduct (we read classic interactionist theory on "mind, self and society," but not on "body, self, and social interaction").
Considering the distinctive anxieties of twentieth-century life and the special horrors of its massive deaths, our clinging to centuries-old theoretical perspectives is a deep and pathetic intellectual failure. Existentialism and phenomenology, the philosophical movements that arose with the twentieth century's unprecedented, vicious chaos, remain outside the mainstream of empirical research and of American social thought. So do the central sticking points in American policy development on crime, where sensual and moral dynamics are as central as they are to crime causation.
If homicides commonly arise out of quickly developing passions, we should reduce the availability of guns so that enraged attackers might turn to less lethal instruments of violence. But guns remain symbols that are embraced with uniquely profound passions in America. The key research question about guns and American violence is not how much or whether removing guns will reduce violence, but why guns have acquired such strong moral and sensual meanings in the United States.[30]
And if robbery grows out of a fascination with dominating social interaction so that it provides signs of respect, we should acknowledge the special challenges of humiliation that are maintained within ethnically bounded, and particularly black, poverty communities. But policies of racial integration that would specifically address the unique roots
of America's criminal violence evoke fears that repeatedly turn progressive political leaders to more broadly defined, and less specifically relevant, institutional solutions. Ironically working in the same direction, the recent rise of African-Americans to local political leadership positions, through electoral successes that are based on the size of segregated voting blocs, has itself contributed to the deterioration of the national commitment to integration.
And if robbery pays so poorly that a commitment to it can only be sustained when intermixed with more economically rewarding opportunities in gambling, prostitution, and illegal drug markets—vice worlds that, when combined with robbery, sustain the moral and sensual attractions to a way of life characterized by illicit action—then we should appreciate that our criminal legal framework, through making vice activities so remunerative, provides crucial support for the robber's career. But the sensualities of vice activities stir such powerfully moralized passions that we are barely able as a community to bring alternative regulatory frameworks into rational political discussion.
Dominated by a materialist, instrumental perspective on crime, progressive thought has misled us because it has directed attention away from those distinctive features of American ethnic relations and moral culture that are most closely related to the exceptionally violent dimensions of the crime problem in the United States. The progressive's failures are mirrored rather than avoided by the Right. Indeed, no voice on the contemporary political scene speaks to the distinctively national character of America's criminal violence. In a notable way, both political sides share a materialist, instrumental perspective on crime causation, the one calling for more "opportunity," the other for higher "costs" to offset the "benefits" that criminals are presumed to calculate. If we would abandon the instrumental, materialist framework that supports tired remedial policies and turn research attentions to the moral and sensual dynamics that animate criminal violence, we might generate an empirically grounded progressive response to crime.