Preferred Citation: Wolfe, Alan, editor. America at Century's End. Berkeley: University of California Press, 1991. http://ark.cdlib.org/ark:/13030/ft158004pr/



America at Century's End

Edited By
Alan Wolfe

UNIVERSITY OF CALIFORNIA PRESS
Berkeley · Los Angeles · Oxford
© 1991 The Regents of the University of California





ACKNOWLEDGMENTS

Although he may not know it, Ulf Himmelstrand is responsible for this book. When I was in Scandinavia doing research for Whose Keeper? Ulf invited me to be a consultant on a project he was conducting at the University of Uppsala on changes in Swedish life. His notion of asking a number of leading Swedish sociologists to write chapters on different aspects of everyday life struck me as a wonderful way to try to come to grips with social change. (The book was eventually published as Sverige—Vardag och Struktur: Sociologer Beskriver det Svenska Samhället, or Sweden—Everyday Life and Structure: Sociologists Describe Swedish Society, edited by Ulf Himmelstrand and Göran Svensson and published by Norstedt's in 1988.) I immediately saw the idea for an American book.

When I described Ulf's project to Naomi Schneider at the University of California Press, her first reaction was that they were unlikely to publish a book on Sweden. "No," I said, "I mean a book on America." Naomi was quickly taken by the idea and has been an enthusiastic supporter ever since. She has also been a first-rate editor. I am delighted to acknowledge my gratitude to her.

It was Katherine Newman who first suggested that Herb Gans would be delighted to know about this book and could be encouraged to write a preface. My thanks to her for the suggestion and to Herb for carrying it out. And, it goes without saying, I am grateful to all those who contributed chapters to the book, for, in nearly all cases, they not only wrote what I had hoped they would but they also read the work of others, provided suggestions, and participated actively in shaping the final form of the book. Even while recognizing how inappropriate it is to single one of them out for special mention, I feel I must acknowledge Judith Stacey's wise help as consultant and sounding-board.



The contributors to the volume met twice while the project was underway, once on the West Coast and once on the East Coast. I was fortunate enough to receive funds from the American Sociological Association/National Science Foundation Fund for the Advancement of the Discipline, funds that were matched by Steven Cahn, Provost of the City University of New York Graduate Center. I am grateful to both for their help in strengthening this volume. Thanks also to Dan Poor, who demonstrated superb administrative and intellectual skills in numerous ways. Jonathan Imber generously helped with the proofreading, as did Autumn Ward, Susan Guglielmetti, and Lisbeth Summ. This book was edited during a year in which it was my wife's turn to write a book. I promised her that I would refrain from writing to give her more time. (I said nothing about editing.) I want to thank her for her encouragement and support during a time when she was wrestling with conundrums of her own. And the kids were terrific.

ALAN WOLFE
NEW YORK, AUGUST 1990



PREFACE

Although I have not contributed a chapter to this book, I have been involved in it nearly from the start, albeit marginally, and am therefore not the stranger who is invited to write the foreword once a book is finished.

I participated in this venture principally because its purpose—to produce an empirically reasonable but also evaluative portrait of some of America toward the end of the twentieth century—is vitally important. No one can picture all of America in a single book, but the editor and the authors have covered an amazingly wide area, from the family, community, and workplace, as well as the political, educational, health, leisure, and other major institutions central to everyday life, all the way to the world economy—without which neither America's institutions nor everyday life can be understood anymore.

Trying to make sense of America is, and ought to be, a never-ending process, which I think can best be done by sociologists and social or cultural anthropologists, or by like-minded members of other disciplines. Journalists are also trying to make sense of America, to be sure, but unlike sociologists, who look for recurring patterns and their contexts, journalists usually have to focus on the atypical and deviant. For example, as I write this, their picture of America stars street-, Wall Street-, and inside-the-Beltway criminals and features the underclass, yuppies, and the allegedly indifferent and confused generation now in its twenties, or what the Washington Post has called the "doofus generation."

This book is also important because many of its chapters are based on data gathered with "ethnographic" methods. Once a word used solely by anthropologists to describe their research method for studying small preindustrial cultures, ethnography is now becoming an umbrella term to cover fieldwork, participant-observation, and informal interviewing. To me, it means being with and talking to people, especially those whose activities are not newsworthy, asking them thoughtful and empathic questions, and analyzing the resulting data without the need to prove prior ideological points.

This method, which I consider the most scientific for understanding social life, has been used in sociology nearly since its beginning, but in the last half century or so the discipline has been overwhelmed by quantitative researchers, who rarely talk to people or do so only to count and correlate them. Quantitative sociologists learn something about America, too, but they are forever limited by their methods and the numbers available to them.

The book is noteworthy as well because it is sociology and social anthropology sans jargon, written for the general reader as well as the specialist. It is an example of what sociology should be most of the time (except when it has to be written solely for the specialist), and one way in which sociology can discharge its public responsibilities.

Finally, I think the book is important because it deals not only with America toward the end of the twentieth century but also with social change. As such, it raises or revives some questions about the study of social change that go beyond the book itself, but that make it particularly stimulating reading.

These questions take off from editor Alan Wolfe's description of the authors as a third generation of post–World War II sociologists, and of me, by implication, as a member of the first generation, since I was trained right after the end of that war. However, I studied sociology at the University of Chicago in the late 1940s, when it was the center and virtually the sole practitioner of ethnographic research—and, incidentally, of a reflexive kind that is now, like almost everything else, called postmodern. In addition, the people I worked with, notably David Riesman, Earl Johnson, and Everett Hughes, emphasized the study of institutions. Consequently, I am not very different in background from many of the contributors to this book, except in age, and my questions are therefore mainly temporal.

Since studying a whole society, especially a continent-sized and highly diverse one like America, is immensely difficult, researchers have had to make a number of limiting assumptions, one of which is usually chronological. Thus, a number of the chapters of this book make explicit or implicit comparisons of the present with the past—for example, the 1950s and/or the 1960s—and for some authors some aspects of these periods are treated as good old days.[1]

This is a perfectly reasonable way of studying social change, but one question it raises is why sociologists are joining journalists in looking at social change by decades, even though decades do not seem to make major differences in how institutions work or people behave. (The exceptions are institutions and organizations which operate on the basis of fiscal years, and some of their activities are affected, among other things, by the frequent need to spend all budgeted money quickly at the end of a fiscal year.) This question is especially relevant now that the end of a millennium is near, when all kinds of people will make all kinds of observations, most of them likely to be proven wrong, about the changes that will take place because an old millennium is ending and a new one is beginning. Still, whether and how perceptions of the new millennium will affect social life is a relevant research topic when the time comes, and therefore a possible chapter topic for a future edition of this book.

Whether the past was better than the present has always been a fascinating issue, because all periods are good old days for some but not for others, which evokes the question of which past periods are good for whom and why. We all remember specific social and economic phenomena that were central to our lives, and while most white Americans may remember the 1950s as what is now described as the era of postwar affluence, poor whites, blacks, and other racial minorities surely feel differently.[2]

Obviously, even nonpoor whites will not be unanimous about the 1950s, if only because different populations have different collective memories of every decade, assuming they can structure their memory by decades. I wonder for whom the 1950s were dominated by dark days, and assume that the victims of the McCarthy witch-hunts and the Korean War were in that category.[3] Many sociologists of my age are also apt to remember the 1950s as a time when there were virtually no jobs in sociology, and when sociology was still regularly being confused with social work and socialism. In any case, what people, even sociologists, remember from the past is a social construction of the present, which must be studied alongside that present.

Another question the book raises for me has to do with the speed of change. While comparisons with the past make the present, and change sui generis, come alive in print, in the so-called real world of the people whom sociologists study, change is often gradual. In fact, most people are less likely to see change than the consequences, good or bad, of change: not when the family is changing but when parental rules of family life are no longer obeyed by the young. Even sociologists do not always see change as easily as they write about it. Many new social phenomena have to grow for a while before they become sufficiently widespread and anchored to become visible—and to be trusted as raw material for generalizations about change.

Furthermore, reading these chapters made me wonder whether the age of the researcher could affect how he or she constructs descriptions of various social changes. Perhaps some of the changes emphasized in this book that look new today to young researchers look more venerable to older ones. For example, the corporate speculation and greed that marked the 1980s may look very different to a young researcher than, say, to a very old one who recalls the speculation and scandals that preceded the Great Depression in the late 1920s, or to a historian specializing in the late nineteenth century, when the original robber barons still stalked the earth. Likewise, the negative political advertising in recent elections appears fairly tame (although still abhorrent) to this writer, who grew up in Chicago in the 1940s, when the Democratic machine of Edward Kelly (and later Richard Daley) used dirty tricks which no politician today could get away with. But those political tricks were tame when compared with some of the negative political advertising in nineteenth-century America.

Curiously enough, in some respects the greatest changes since the 1950s, and earlier, have taken place in America's self-knowledge—thanks to the increase in the number of years of schooling, the expansion of feature and analytic journalism, the growth of interest in history, and even the more modest growth of sociology. (For example, in 1955, the American Sociological Association had 4,450 members, virtually all invisible to the general public, whereas in 1990 it had nearly 13,000.)

As a result, the sociologists who wrote this book know far more about America's institutions and America than my colleagues and I did in the 1950s. Consequently, I wonder whether contemporary sociologists—of any era—see their era's complexity and assume the past to have been simpler and more integrated, without taking into account that what is known about the past—any past—is always simpler than what is known about the present.

This leads also to the questions Alan Wolfe raises in the last chapter about today's lack of integration and consensus, which he calls decentering, as well as the search for various kinds of recentering. These questions are significant in part because they address old issues about the need for community in new ways and from a different angle. So far, however, not much research has been done to determine how much of what kinds of integration or consensus people want and how much they and society need to function. Furthermore, politicians and other organizational leaders often call for integration and consensus but do so largely to create or obtain support for their own policies.

I think, for example, that the White House has worked hard for a number of years and over several administrations at its own kind of recentering project: to become the country's symbolic center and power center, although whether it has succeeded is worth debating. Whether it should succeed is even more important to debate, however.



Indeed, I am nervous about the general notion of society having a center, whether that center is conceptual, symbolic, or instrumental. Even if the sole purpose of the center is to help social integration, a societal center of any kind, and perhaps the metaphor itself, always seems to carry with it inegalitarian consequences for those who are not at, or associated with, the center. Whether the center is the carrier and protector of the society's dominant values, or of its sacred symbols and institutions, or of the economic core, those at the periphery are usually treated as being of a lower status or otherwise inferior. Consequently, I would suggest that, paraphrasing Bertolt Brecht, blessed is the society that needs no center!

HERBERT J. GANS
COLUMBIA UNIVERSITY



INTRODUCTION—
CHANGE FROM THE BOTTOM UP

Conservative Politics, Unstable Society

Someone who visited the United States in the first decade after World War II and then came back in the last decade of the twentieth century would have seen two entirely different countries.[1] From the relations between husbands and wives and parents and children, to patterns of home ownership and living space, to the relative power of the two political parties, to the role of the United States in the international economic and political order, to the racial and ethnic composition of the population, to the ways in which people understand such social institutions as schools, churches, and medical offices—the texture of American life bears little resemblance to the way things once existed, at least in the American imagination. No wonder so many Americans, politicians as well as ordinary citizens, seem bewildered by what is emerging around them.

The transition to a new century marks the culmination of a major generational shift in the American social, economic, and political landscape. To the degree that any order can be imposed on a society as diverse as that of the United States, there did emerge, after the twin traumas of the Great Depression and World War II, patterns of social life that many would, with some mistaken nostalgia, later come to call "normal." It was assumed that economic growth and moderate government intervention led by centrists of both parties would ensure enough stability in the political economy to make possible generalized home ownership, moves to the suburbs, secure futures for offspring, intergenerational upward mobility, and global peace guaranteed by American military strength. Obviously there were dissenters, first on the right during the McCarthy period, and later, in the 1960s, on the left. Ultimately, moreover, the dissenters would make their critiques stick, and both political parties would be reshaped by them. Nonetheless a significant number of Americans came to believe that the America of the late 1950s and early 1960s was the "real" America, a feeling that surely contributed to the election to the presidency of someone whose television and movie career had helped to define the culture of that earlier era.

The election of conservative presidents, however, is a symptom of, not a solution to, radical change in the society. Despite efforts to punish flag-burning, to go back to basics in education, or to reverse gains won by racial minorities, a world in which people "know their place" simply cannot be brought back into existence. The twenty-five-year period between the end of World War II and the end of the 1960s will surely come to be viewed by future historians as the exception, not the rule. An affluent society in which families were supported by the husband's income and in which ever-increasing economic growth seemed to offer the solution to any problems—private or public, domestic or foreign—that might appear, no longer accords with reality, no matter how many people wish it would. Conservatism is helpless in the face of that fact, not only because talk of order at the top is a frustrated reaction to disorder at the bottom, but also because the particular conservatism that came to power in America in the 1980s was also powered by a vision of change. There is no party of order in America.

Although politicians like to give the impression that they are in control of events, it seems clear that events are in control of them. No one planned an outcome in which children born in the 1970s would face a radically different set of life choices than those born in the 1950s. That contemporary family life would take on a completely different coloration from what some had proclaimed as the "natural" nuclear family of the 1950s television sit-com was as much a surprise as the decline of American power in the world, let alone the arrival on these shores of a new generation of immigrants from parts of the world that Americans—never the strongest in geography—had previously known little about. The changes that have affected the United States over the past four decades are taking place behind our backs, appearing with their results already in place before we even have a chance to register that something has been going on. Because the reshaping of the contours of American life is not the product of any particular political agenda, or even the result of any social planner's vision, its consequences are that much more likely to be unsettling.

Caught between expectations formed in an earlier period and the realities of new political and economic forces, Americans are unsure how to respond, sometimes giving vent to populist anger, sometimes retreating into private life, at still other times voting for the most conservative candidates they can find. It is time to begin to take stock of what has been happening in this country since the days when people thought they knew what was normal. This book is an effort to do so, not so much by focusing on the changes at the top, but by trying to understand them as the product of forces unleashed at the bottom. Every contributor to this book save one is a sociologist, and the one who is not is a social anthropologist. The assumption that guides our efforts is that changes of great magnitude and rapidity can only be grasped by understanding how those affected by such changes perceive them. The authors were charged to go out and listen. The chapters that follow are their reports of what they heard.

Seventeen Changes in American Life

Before turning to concrete studies of how people understand their families, communities, jobs, and social institutions in a time of transition, it is worthwhile to try to set the scene by cataloging the changes that have made American society so unsettled. There are, after all, large forces at work, and their impact on people's lives will be significant. Any attempt to catalog such forces is bound to be somewhat selective; here, nonetheless, is mine.

1. Population shifts have produced a new demographic profile of the country. Newer regions of the United States, such as the South and West, have achieved economic and political prominence over the older cities of the Eastern Seaboard and Midwest. (Some Brooklynites know exactly the day when America changed for good—and for the worse.) Moreover, immigration, especially from Latin America and Asia, has also changed the literal image of what it means to be an American.[2] Since Washington's Farewell Address, Americans have always been somewhat reluctant to engage themselves in the affairs of the world. Now the world, perhaps tired of waiting, has decided to engage itself in the affairs of the United States.

2. Concomitant with demographic changes are political ones. The New Deal coalition, linking working-class and ethnic votes in the North with the solid South, can no longer automatically win presidential elections.[3] Yet the fact that the presidency has been dominated by Republicans, while Congress has remained under control of the Democrats, suggests that there is no one political mood in the country at all but rather many moods, often contradictory, and in any case localized and privatized. Indeed, the important point may be that it is not the content of politics that has shifted so rapidly, but more the form, as expensive campaigns, media simplifications, and "sound bite" politics dominate the campaign strategies of both parties.[4]

3. No longer are the fundamental values and culture of the society shaped by a Yankee consciousness inherited from Great Britain. A book like The Lonely Crowd, with its Weber-inspired discussion of inner-directedness, might as well have been written about a foreign country. A Protestant ethic stressing thrift, honesty, hard work, sacrifice, and community service has less currency for a country that, with each passing year, is decreasingly Protestant. While some segments of America have become "more" religious, as witnessed by the rise of fundamentalism in many forms, others have become "less" so, leading preachers, academics, and others to charge that hedonistic utilitarianism has become America's only compelling source of ethical values.[5] Morality in America has more to do with the subcommunities to which one belongs than with the national community to which all belong.

4. Both upward and downward mobility seem to have increased, at least in the consciousness of most Americans. On the one hand, energy crises and inflation have raised the specter of a world without endless growth, transforming middle-classness from a "natural" condition to a matter of positional struggle.[6] Downward mobility, moreover, has even begun to reach into the heights of the upper middle class, and, as it does, basic assumptions about progress and the good life are undergoing significant alteration.[7] Yet even as they face the prospect that their children might not achieve the same level in life as they did, Americans also are treated to stories of exceptionally rapid upward mobility, as new capitalists, many from "marginal" ethnic backgrounds, assume prominent places in the American consciousness. A once-existing link between status and wealth seems broken: there are as many who have the latter without the former as there are who have the former without the latter.

5. Conditions at work have been almost completely transformed. In part this is due to radical changes in the nature of American industrial relations, such as the decline of large manufacturing firms, the reorganization of industries, and the rise of such financing techniques as leveraged buy-outs.[8] For the average American worker, especially the unionized worker, the resulting changes are dramatic. The stereotypical situation twenty-five years ago was one in which trade-union-conscious men left each morning for high-paying factory jobs while their wives stayed home and raised the children. Now the men no longer belong to a union, no longer work in factories, and no longer receive high pay, while their wives, who also work (probably in the service sector), earn enough to bring the family income barely up to what it was, in real dollars, a quarter century ago. (America's "working man" is no longer necessarily a man.) When union density was high, worker solidarity strong (or at least stronger than now), and competition held at bay through monopolies and protectionism, the world of work could to some degree be shielded from the rest of society.[9] Now, for more and more Americans, everything about work is negotiated constantly—between husbands and wives and between employees and employers.



6. The reality of two-career families has changed both family ideology and family practice since the 1950s.[10] What Arlie Hochschild has called "the second shift" alters everyday life, as women come home from work only to go to work.[11] Child care arrangements all but inconceivable a generation ago are being invented from scratch, for not only are mothers working but grandparents have moved away and extended kin networks are harder to maintain given geographic mobility and higher housing prices.[12] Women and men have been forced to negotiate their ways around these changes, discovering for themselves patterns that work rather than following textbook formulas that explain what the family is (or ought to be). For some writers such changes signify family decline, while for others they represent new possibilities for the empowerment of women.[13] It is, however, clear from both that if there is a post-modern family, it is something to which we are moving back, something from a period before the one in which the "natural" nuclear family was constructed.

7. Children have become contested terrain as well. On the one hand, as part of the nostalgia characteristic of the 1980s, we seem to want to reassert the innocence of childhood.[14] On the other hand, as phenomena from illegal drugs to quite legal commercials make clear, we deprive children of the time to just be children. Meanwhile, young people themselves go their own way, as, of course, they always have, developing their own subcultures, markets, institutions, and rituals.[15] Americans have long thought of themselves as a child-oriented culture, but hair-raising stories of latchkey kids, sexual abuse, poor schools, and a failure to recognize the need for adequate child care have forced a reevaluation. Nonetheless, at least at the level of public policy, Americans seem surprised that new generations of children somehow keep making their appearance in the world.

8. Housing for most families has also changed radically. There may have been no more important piece of domestic legislation in postwar America than the Housing Act of 1949, which symbolically linked home ownership with democratic ideology. Now there are more renters, more homeless, more foreclosures, more young people unable to accumulate a down payment, and more speculative profits for some. No one knows what the future implications of these changes will be; markets have their downswings as well as their upswings, and a very recent crisis in real estate has begun to make homes affordable once more. Yet a house in America has never been merely an investment but, instead, the center of a richly textured symbolic world.[16] Even "affordable" houses take so much of the typical young family's income that the home can no longer be viewed as a protection against the market, but has come to symbolize one's largest investment in the market.



9. Housing is just one factor in a transformation of the American economy and its relationship to the world economy. Most Americans now understand that these two economies are no longer synonymous, which forces them to confront unprecedented questions, such as whether local communities should welcome foreign investment, put controls on growth, or attempt to regulate the quality of life in their regions. When American-based corporations are multinational while foreign-based corporations create jobs for Americans, whose economic success should Americans be cheering? The experience of the state of New Jersey—which, in its efforts to supply its troopers with cars made in America, had to reject Fords only to accept Volkswagens—will become increasingly common. Just as Europe is increasingly being integrated economically and politically, the United States may be breaking into two economies: one inward looking and protectionist, the other global and expansionist. It is a sign of the times that neither political party can tell which economy is preferable.

10. Americans are in a postimperial mood, without ever quite having admitted to themselves that they have given up the empire.[17] The collapse of communism, America's ideological enemy in the postwar years, coming alongside a major weakening of the Eastern bloc, America's geopolitical enemy during the same period, was not enough to arouse an American president to eloquence or even the American people to a greater concern with the rest of the world. All this changed, and rather dramatically, with the American victory over Iraq in 1991, yet the consequences over the long run may not be that great. To be sure the victory over Iraq was in part the result of stunning diplomacy, especially the ability to keep the allied forces together despite repeated attempts to split them. And it clearly represented an overcoming of the "Vietnam syndrome," putting to rest the notion that Americans had become reluctant to use military power. In the short run, the war in the Middle East would seem to suggest a turn back toward globalism. Yet the very success of the Bush administration in Iraq may also wind up contributing to an American withdrawal from the world. The war in some ways solidified the uniquely American belief that it is possible to obtain diplomatic objectives through violence without substantial loss of life. The brilliance of the diplomacy was matched by a lack of political objectives, not only for the Middle East but also for America's role in the post–Cold War world. As America approaches century's end, there is a clear sense that a new world order is necessary, but the fact that the first step in the new world order was the deployment of massive firepower reminiscent of the old world order does not suggest new breakthroughs. Whatever else happens in world politics—a topic far too broad to be broached here—there is little question that, despite the victory in Iraq, Americans will be living with foreign policy uncertainty for some time.



11. It is possible to debate whether Americans have or have not withdrawn their attention from the world, but on the question of whether they have withdrawn their attention from social problems at home, there can be no dispute: they have. The notion that a national problem can be identified, that funds can be mobilized to address it, and that a solution for the problem would be available—the atmosphere characteristic of the Kennedy-Johnson years—no longer exists in American domestic life. This change is deeper than a shift from reform to conservatism, from Democrats to Republicans. It represents a retreat from a spirit of can-do optimism that has characterized American life since the early nineteenth century. On the one hand, crack cocaine, AIDS, and homelessness seem to present problems of such depth and social cost as to be beyond anything ever experienced in American memory. On the other hand, the willingness to tackle such problems—indeed, any problems—is hamstrung by a reluctance to raise taxes that would make policies possible. To the degree that these concerns overlap with race, and they do in public perception, if not always in reality, they point to a mood that has emerged among some Americans that raises questions about whether racial harmony is possible in the United States at all, and, consequently, whether it remains possible to speak of one American experience.[18] In such areas as schooling, residence, and opportunity, the realities facing racial minorities are so different from those facing middle-class whites as to make progress toward equality seem all but inconceivable.

12. Social change and technological change often do not reinforce each other. The "traditional" American family and suburban home, for example, were reinforced at a time when quite untraditional new technologies, such as television and modern appliances, were altering how Americans used their time.[19] But in the past decade, social changes have been accompanied by very noticeable technological changes, each strengthening the other.[20] As a result of computer technology, for example, working at home is made possible by the modem and the fax machine. Cottage industries are therefore returning, as highway congestion makes going to work increasingly unthinkable. Flexible working patterns, in turn, will have consequences for families and communities. Between them, new technologies and new patterns of allocating time will combine to change how people work, how they spend their leisure time, and how they travel from one to the other.

13. In the absence of traditional understandings of community, Americans are creating new experiments with subcommunities. The elderly, living longer than ever before, symbolize this development, concentrating, if they have the means, in specific regions and supporting specific industries that cater to their needs.[21] But retirement communities are only one example of the general trend: urban gentrification, increasing segregation by class, and the development of "high-tech" subeconomies—Silicon Valley and all its spin-offs—all represent living patterns that come closer to Durkheim's definition of traditional society based on likeness than to his understanding of modern society based on a complex division of labor. The protection of the local environment against change is the theme of political and social movements often characterized as NIMBY ("not in my backyard") groups. Some new communities regulate the architectural details of the homes within them down to the color and shape of doors. Other communities have become adept at protesting the encroachment of undesirable change. The tremendous diversity of America at the national level, it would seem, is being matched by an emphasis, often futile, on homogeneity at the local level.

14. In part because they live and act in new ways, Americans are no longer sure how to represent reality to themselves, let alone to others. For the first time in our history, the media have become nationalized, creating the possibility of a richer national community. But with the success of chain bookstores, national newspapers, and twenty-four-hour-a-day cable news has also come a "thinning" of the reality that is represented, as if Americans had more and more information and less and less understanding. The general pattern seems to be one characterized by an explosion of the outlets that make communication possible combined with an increasing inability to find much that is original and interesting to communicate. One can now watch the same news program anywhere in the United States—indeed, anywhere in the world—and yet still not have the context and historical understanding to make sense of the events being reported.

15. How Americans understand their relationships to each other has been changing as well. Although they like to think of themselves as neighborly, Americans increasingly resort to ways of resolving their disputes with each other that are more formal than a chat over a fence. The increasing litigiousness of American society, the new role of insurance companies as makers of public policy, the formalization of trust, the increasing use of binding arbitration, the rise (and now fall) of an interventionist judiciary, the increasing privatization of government services—all represent steps away from Gemeinschaft to Gesellschaft. Yet it would be foolish to lament these changes: Even as they wax nostalgic for a world they believe to be lost, Americans take steps to ensure their rights, realize their self-interests, and protect themselves against what they perceive to be the intrusive claims of community. The net result is a change in the texture of everyday life, one bound to be felt at places as diverse as the physician's waiting room, the courtroom, the local prison, and the suburban shopping mall.

16. No one, in a sense, seems to obey the rules anymore. This is not meant as part of the conservative lament that always accompanies social change, bemoaning the fact that people no longer know their place, but rather as a reflection of the fact that new issues have arisen for which traditional rules can no longer guide conduct: surrogate mothering, computer hacking, organ transplants, the prolongation of life, corporate crime, abortion, and AIDS are only some of the examples. It is as if the United States is caught between two moral codes, one of which no longer applies and the other of which has not yet been developed.

17. Although they are not among the most important of American institutions, the social sciences have also been caught up in transformations that question their very existence. There are prominent exceptions—I hope this book demonstrates that—but most work in social science seems increasingly unable to deal with changing economic, social, and political realities. The premise that the social sciences could be modeled on the value-free nature of the physical sciences has been undermined by an epistemological revolution in the "hard" sciences themselves. No longer is it possible to believe in the liberal optimism that led social scientists to accumulate inventories of findings about human behavior, in the belief that this would make for a better society. Yet unsure of how to respond to their own crisis of meaning, the social sciences either retreat into an ever-greater empiricism or develop models of rational choice or refinements of structuralism that explain everything except how real people act. The crisis of meaning in American society exists as well among those who make the study of meaning their business.

We are, it seems, no longer the society we once were, but neither are we the society we had hoped to be. As we approach century's end, something new is emerging, helter-skelter, in our midst that bears little resemblance to any existing political, theological, or sociological model of how the world is supposed to work. These emerging patterns may constitute the prelude to a new order that we will eventually come to view as normal, or they may mark a period of disorder that will usher in ever greater disorder; at this point, no one knows. But if we cannot know where these changes will take us, we can at least take pictures of American life in transition. Sociology, with its focus on real people living real lives, ought to make it possible to do so.

The Rediscovery of Institutions

The assumption that links the chapters of this book is that a sociological approach to American society offers a way to get a grip on a process that is undergoing rapid change. But what kind of sociological approach should it be? When America was understood to be stable, the sociological study of America was also relatively straightforward. The 1950s were not only self-proclaimed "golden years" for such cherished—if both short-lived and somewhat illusory—institutions as the nuclear family and the well-disciplined school; they were also celebratory years for the social sciences.[22] The notion prevailed that one could start out on a five- or ten-year project to study some American institution with the expectation—all the more powerful because simply assumed rather than examined—that the institution under study would still be much the same when the study was published. In textbooks dealing with American society at that time, chapters seemed to write themselves: marriage and the family, religion, voluntary associations, business, the welfare state, and, for the not-so-self-congratulatory, social problems.

At the present time, by contrast, it would be nearly impossible to imagine any one sociologist or journalist capable of analyzing all the many ways in which American society has been transformed. That is why I wanted to edit this book rather than write it myself. To capture the fluidity of American society in recent years, it seemed preferable to have many authors, diverse points of view, multiple methodologies, and tentative conclusions. There was a time when sociologists believed that if we did not know it all, we at least knew most of it. It would hardly do to go to the opposite extreme, as some postmodernists do, and argue that we do not, and cannot, know anything—that there is no reality out there for social science to represent. It is enough to suggest that the realities of American life are far more complex than we once imagined and to be as humble in assuming that we have discovered the truth of those realities as we are aggressive in using our tools to discover what the truth, or truths, may be.

The transformations in American society described in this book, therefore, invoke what could be called a "third generation" of sociologists. The first generation of postwar sociologists, under the influence of Talcott Parsons, tended to stress the stability of American institutions and their contribution to the overall functioning of the society.[23] This relatively complacent view was shaken to pieces during the 1960s when a second generation, the "New Left," made its mark on American sociology. Institutions were then seen as oppressive, their replacement by newer forms necessary as a first step toward greater liberation.[24] The view now emerging demands that we pay attention to and understand the institutions of American society, not because they fulfill some grand design and not because they are inherently oppressive. They are, simply put, understood as interesting, as changing, as constantly running away from the analytic models we develop to understand them.

The third generation of sociologists can be called "the new institutionalists," after the movement in economics of the 1920s and 1930s that wanted to look behind abstractions at the realities of economic activity in the real world. Clearly because of the changes taking place in American society, there seems to be something of a cross-disciplinary zeitgeist emphasizing the role of institutions. In political science, Johan Olsen and James March have called for an explicit return to the focus on institutions that characterized the study of public administration a generation ago.[25] Economists—who tend, with Jon Elster, to believe that "there are no societies, only individuals who interact with each other"[26]—are less likely to focus on institutions, but even among those most committed to rational-choice assumptions, the role of large organizations such as corporations has come under scrutiny.[27] Even literary criticism, including the deconstruction tendency, has begun to ask questions about institutions and the role they play in interpretation.[28]

Not surprisingly, this concern with institutions has also affected sociology: the study of the welfare state, for example, a traditional sociological concern, is inevitably a preoccupation, not only with government, but with a variety of other social institutions as well.[29] Sociologists, indeed, are increasingly concerning themselves with how institutions actually work, despite how received opinion suggests they do; what impact they have on the people whose lives are affected by them; how they are created and recreated by social practices; what relations exist between them; and whether, how, and why they change.[30] But it is not just through a revival of the sociology of organizations that sociology will come to understand the changing nature of American institutions.[31] It is also by a return to the post–World War II roots of contemporary sociology, to a time when sociologists pooled their efforts to write for a general audience about the relevance of their discipline to the problems of the day.[32] Already efforts are emerging to bring back to sociology its concern with real social issues. Besides the contributors to this book, one can point to a broad group of sociological investigators looking, nontechnically, at the institutions of American society with a curious eye.[33]

What these scholars have in common is not age—though most, but not all, of the contributors to this book were born between 1949 and 1955—but three other commitments. First, neo-institutionalism shows respect for the nitty-gritty empirical realities of social life. All these sociologists are in touch with the lived reality of America. They are neither, to use C. Wright Mills's famous terms, abstract empiricists nor grand theorists. Ethnographic approaches, to be sure, predominate, because ethnography—with its emphasis on understanding how people themselves understand the world around them—is the unique contribution that sociology (and anthropology) can give to the world. But there is an important place for other forms of empirical investigation as well. Much of the new institutional scholarship, for example, is inspired by historical methods, which provide a way to get a grounding in the lived realities of institutions and how they change. Others are not averse to statistics; good number crunching is often essential to complete a picture of what is going on in the world, as, I hope, some of the essays in this book demonstrate.

Second, each of these writers shows open-mindedness toward political issues. While each of them has taken strong positions on issues involving class, race, and gender, there is a lack of dogmatism of any form in their work, an appreciation of the unexpected. Again, to be sure, this pluralism has its bias: sociologists generally find themselves to the left of, say, nuclear engineers, and this book certainly reflects that trend. But each author was charged to let the data speak. Not all of them found what their political perspectives suggested they ought to find. We sought a combination of commitment and openness rather than dogmatic certainty, apolitical cynicism, and artfully contrived compromises between conflicting positions.

Finally, all of these contributors were selected because they could write. Some write with genuine literary skill and others with marked social science training, but all of them are committed to writing as clearly as possible. We hope to harken back to the days when a career in social science did not necessarily mean that one wrote as many obscure articles in technical journals as possible, but also involved an obligation to comment for general readers on important trends in society. I asked the contributors to think of David Riesman as they wrote, to try and keep alive the spirit—if not always the politics—of C. Wright Mills.[34] The spirit that motivated this book is the sense that the inward turn taken by the social sciences in the 1970s and 1980s was not a permanent development but a moment of introspection preparing the way for a return to larger themes—and larger audiences to read them. Understanding that sociology is not a discipline that stands outside American society looking in, we recognize ourselves to be part of the transformations for which we are trying to account. We hope this sense of involvement gives our chapters less certainty but, because it is tentative, greater authenticity.

From the Bottom to the Top

There is no going back—all the authors assembled here would agree. The transformations in American practices and institutions being analyzed in this volume have, as all transformations do, both positive and negative sides, but it seems a fair generalization that our experience with the nuclear family is typical: what has emerged in place of the family of the 1950s is beset with difficulties, but it is clearly preferable to an institution that stifled the human potential of so many women. The same could be said for the schools, for the doctor-patient relationship, the community, and all the rest. In all cases, the transition to newly emerging patterns is rocky, unstable, and fraught with contradiction, yet also exciting for the new possibilities that are opened up. Although no one can know what shape emerging America will ultimately take, it is likely to be different, and for those who find in difference rewards as well as problems—as do most of us writing here—that change alone will be one to welcome.

It was not part of the charge to these authors to outline their own political hopes and policy suggestions (although some of them did so). Their collected observations, however, paint a picture of America that may be helpful to those who think more explicitly about public policy. What we offer is, in the jargon of Washington, D.C., an "outside the Beltway" perspective. There are real people out there in America. They are neither the secular-humanist, pornography-loving decadents imagined by the right-wing fundamentalist nor the deeply reactionary, racist, and ignorant know-nothings that the left invokes to explain the right. They are trying to live as best they can, and any public policy ought to begin with that.

When I first began to practice social science, there was a clear link between what social scientists wrote and what policymakers thought. It was not always a beneficial relationship; the war in Vietnam was intellectually inspired by social scientists, to say nothing of some of the more dubious ideas about poverty and how to fight it. Still, what took place then is preferable, in my opinion, to what takes place now, which is that policy decisions are based on tomorrow's public opinion poll, demands made by this or that interest group, or assessments of how to "win" by playing on emotions and fears—as if the best efforts of social scientists trying to understand the world were of no consequence whatsoever for the country and its future.

My hope is that a chastened social science and a chastened policy elite may someday meet again and learn from each other. I cannot speak for the policy elites. But social scientists have learned a good deal over the past few decades: There is less arrogance and more willingness to listen in what many of them do. If policymakers are going to respond adequately to the transformations America has been experiencing, they ought first to consult, not quick and shallow public opinion polls nor the well-financed views of those with an immediate stake in whatever policy is being debated, but instead those whose lives constitute and are constituted by the policies they make. It is a long way from the "bottom up" to the "top down"—longer than from the "top down" to the "bottom up"—but like many longer journeys, the rewards at the end of the trip are more lasting.

ALAN WOLFE



PART ONE—
INTIMACY AND COMMUNITY



One—
Backward toward the Postmodern Family:
Reflections on Gender, Kinship, and Class in the Silicon Valley

Judith Stacey

The extended family is in our lives again. This should make all the people happy who were complaining back in the sixties and seventies that the reason family life was so hard, especially on mothers, was that the nuclear family had replaced the extended family. . . . Your basic extended family today includes your ex-husband or -wife, your ex's new mate, your new mate, possibly your new mate's ex, and any new mate that your new mate's ex has acquired. It consists entirely of people who are not related by blood, many of whom can't stand each other. This return of the extended family reminds me of the favorite saying of my friend's extremely pessimistic mother: Be careful what you wish for, you might get it.
DELIA EPHRON, Funny Sauce


In the summer of 1986 I attended a wedding ceremony in a small pentecostal church in the Silicon Valley. The service celebrated the same "traditional" family patterns and values that two years earlier had inspired a "profamily" movement to assist Ronald Reagan's landslide reelection to the presidency of the United States. At the same time, however, the pastor's rhetoric displayed substantial sympathy with feminist criticisms of patriarchal marriage. "A ring is not a shackle, and marriage is not a relationship of domination," he instructed the groom. Moreover, complex patterns of divorce, remarriage, and stepkinship linked the members of the wedding party and their guests—patterns that resembled the New Age extended family satirized by Delia Ephron far more than the "traditional" family that arouses the nostalgic fantasies so widespread among religious and other social critics of contemporary family practices.

This chapter summarizes and excerpts from my ethnographic book Brave New Families: Stories of Domestic Upheaval in Late Twentieth-Century America (New York: Basic Books, 1990). For constructive responses to an earlier draft, I am grateful to Alan Wolfe, Aihwa Ong, Ruth Rosen, Evelyn Fox Keller, and Naomi Schneider.



In the final decades before the twenty-first century, passionate contests over changing family life in the United States have polarized vast numbers of citizens. Outside the Supreme Court of the United States, righteous, placard-carrying Right-to-Lifers square off against feminists and civil libertarians demonstrating their anguish over the steady dismantling of women's reproductive freedom. On the same day in July 1989, New York's highest court expanded the legal definition of "family" in order to extend rent control protection to gay couples, and a coalition of conservative clergymen in San Francisco blocked implementation of their city's new "domestic partners" ordinance. "It is the totality of the relationship," proclaimed the New York judge, "as evidenced by the dedication, caring, and self-sacrifice of the parties which should, in the final analysis, control" the definition of family.[1] But just this concept of family is anathema to "profamily" activists. Declaring that the attempt by the San Francisco Board of Supervisors to grant legal status to unmarried heterosexual and homosexual couples "arbitrarily redefined the time-honored and hallowed nature of the family," the clergymen's petition was signed by sufficient citizens to force the ordinance into a referendum battle.[2] When the reckoning came in November 1989, the electorate of the city many consider to be the national capital of family change had narrowly defeated the domestic partners law. One year later, a similar referendum won a narrow victory.

Betraying a good deal of conceptual and historical confusion, most popular, as well as many scholarly, assessments of family change anxiously and misguidedly debate whether or not "the family" will survive the twentieth century at all.[3] Anxieties like these are far from new. "For at least 150 years," historian Linda Gordon writes, "there have been periods of fear that 'the family'—meaning a popular image of what families were supposed to be like, by no means a correct recollection of any actual 'traditional' family—was in decline; and these fears have tended to escalate in periods of social stress."[4] The actual subject of this recurring, fretful discourse is a historically specific form and concept of family life, one that most historians identify as the "modern family." No doubt, many of us who write and teach about American family life have not abetted public understanding of family change with our counter-intuitive use of the concept of the modern family. The "modern family" of sociological theory and historical convention designates a family form no longer prevalent in the United States—an intact nuclear household unit composed of a male breadwinner, his full-time homemaker wife, and their dependent children—precisely the form of family life that many mistake for an ancient, essential, and now endangered institution.

The past three decades of postindustrial social transformations in the United States have rung the historic curtain on the "modern family" regime. In 1950 three-fifths of American households contained male breadwinners and full-time female homemakers, whether children were present or not.[5] By 1986, in contrast, more than three-fifths of married women with children under the age of eighteen were in the labor force, and only 7 percent of households conformed to the "modern" pattern of breadwinning father, homemaking mother, and one to four children under the age of eighteen.[6] By the middle of the 1970s, moreover, divorce outstripped death as the source of marital dissolutions, generating in its wake a complex array of family arrangements caricatured by Delia Ephron in the epigraph.[7] The diversity of contemporary gender and kinship relationships undermines Tolstoy's famous contrast between happy and unhappy families: even happy families no longer are all alike![8] No longer is there a single culturally dominant family pattern, like the modern one, to which the majority of Americans conform and most of the rest aspire. Instead, Americans today have crafted a multiplicity of family and household arrangements that we inhabit uneasily and reconstitute frequently in response to changing personal and occupational circumstances.

Recombinant Family Life

We are living, I believe, through a tumultuous and contested period of family history, a period following that of the modern family order but preceding what, we cannot foretell. Precisely because it is not possible to characterize with a coherent descriptive term the competing sets of family cultures that coexist at present, I identify this family regime as postmodern. I do this, despite my reservations about employing such a controversial and elusive cultural concept, to signal the contested, ambivalent, and undecided character of contemporary gender and kinship arrangements. "What is the post-modern?" Clive Dilnot asks rhetorically in the title of a detailed discussion of literature on postmodern culture, and his answers apply readily to the domain of present family conditions in the United States.[9] The postmodern, Dilnot maintains, "is first, an uncertainty, an insecurity, a doubt." Most of the "post-" words provoke uneasiness because they imply simultaneously "both the end, or at least the radical transformation of, a familiar pattern of activity or group of ideas," and the emergence of "new fields of cultural activity whose contours are still unclear and whose meanings and implications . . . cannot yet be fathomed." The postmodern, moreover, is "characterized by the process of the linking up of areas and the crossing of the boundaries of what are conventionally considered to be disparate realms of practice."[10]

Like postmodern culture, contemporary family arrangements in the United States are diverse, fluid, and unresolved. The "postmodern family" is not a new model of family life equivalent to that of the "modern family"; it is not the next stage in an orderly progression of family history, but the stage in that history when the belief in a logical progression of stages breaks down.[11] Rupturing the teleology of modernization narratives that depict an evolutionary history of the family, and incorporating both experimental and nostalgic elements, the postmodern family lurches forward and backward into an uncertain future.

Family Revolutions and Vanguard Classes

Two centuries ago leading white middle-class families in the newly united American states spearheaded a family revolution that gradually replaced the diversity and fluidity of the premodern domestic order with a more uniform and hegemonic modern family system.[12] But "modern family" was an oxymoronic label for this peculiar institution, which dispensed modernity to white middle-class men only by withholding it from women. The former could enter the public sphere as breadwinners and citizens because their wives were confined to the newly privatized family realm. Ruled by an increasingly absent patriarchal landlord, the modern middle-class family, a woman's domain, soon was sentimentalized as "traditional."

It took most of the subsequent two centuries for substantial numbers of white working-class men to achieve the rudimentary economic passbook to "modern" family life—a male breadwinner family wage.[13] By the time they had done so, however, a second family revolution was well underway. Once again, middle-class white families appeared to be in the vanguard. This time women like myself were claiming the benefits and burdens of modernity, a status we could achieve only at the expense of the "modern family" itself. Reviving a long-dormant feminist movement, frustrated middle-class homemakers and their more militant daughters subjected modern domesticity to a sustained critique, at times with little sensitivity to the effects that our anti-modern-family ideology might have on women for whom full-time domesticity had rarely been feasible. Thus, feminist family reform came to be regarded widely as a white middle-class agenda, and white working-class families were thought to be its most resistant adversaries.

I shared these presumptions before I conducted fieldwork among families in Santa Clara County, California. My work in the "Silicon Valley" radically altered my understanding of the class basis of the postmodern family revolution. Once a bucolic agribusiness orchard region, during the 1960s and 1970s this county became the global headquarters of the electronics industry, the world's vanguard postindustrial region. While economic restructuring commanded global attention, most outside observers overlooked concurrent gender and family changes that preoccupied many residents. During the late 1970s, before the conservative shift in the national political climate made "feminism" seem a derogatory term, local public officials proudly described San Jose, the county seat, as a feminist capital. The city elected a feminist mayor and hosted the statewide convention of the National Organization for Women (NOW) in 1974. Santa Clara County soon became one of the few counties in the nation to elect a female majority to its board of supervisors. And in 1981, high levels of feminist activism made San Jose the site of the nation's first successful strike for a comparable worth standard of pay for city employees.[14]

During its postindustrial makeover, the Silicon Valley also became a vanguard region for family change, a region whose family and household data represented an exaggeration of national trends. For example, although the national divorce rate doubled after 1960, in Santa Clara County it nearly tripled; "nonfamily households" and single-parent households grew faster than in the nation as a whole, and abortion rates were one and one-half times the national figures.[15] The high casualty rate for marriages of workaholic engineers was dubbed "the silicon syndrome."[16] Many residents shared an alarmist view of the fate of family life in their locale, captured in the opening lines of an article in a local university magazine: "There is an endangered species in Silicon Valley, one so precious that when it disappears Silicon Valley will die with it. This endangered species is the family. And sometimes it seems as if every institution in this valley—political, corporate, and social—is hellbent on driving it into extinction."[17]

The coincidence of epochal changes in occupational, gender, and family patterns makes the Silicon Valley a propitious site for exploring ways in which "ordinary" working people have been remaking their families in the wake of postindustrial and feminist challenges. The Silicon Valley is by no means a typical or "representative" U.S. location, but precisely because national postindustrial work and family transformations were more condensed, rapid, and exaggerated there than elsewhere, they are easier to perceive. In contrast to the vanguard image of the Silicon Valley, most of the popular and scholarly literature about white working-class people portrays them as the most traditional group—indeed, as the last bastion of the modern family. Relatively privileged members of the white working class, especially, are widely regarded as the bulwark of the Reagan revolution and the constituency least sympathetic to feminism and family reforms. Those whose hold on the accoutrements of the American dream is so recent and tenuous, it is thought, have the strongest incentives to defend it.[18]

For nearly three years, therefore, between the summer of 1984 and the spring of 1987, I conducted a commuter fieldwork study of two extended kin networks composed primarily of white working people who had resided in Santa Clara County throughout the period of its startling transformation. My research among them convinced me that white middle-class families are less the innovators than the propagandists and principal beneficiaries of contemporary family change. To illustrate the innovative and courageous character of family reconstitution among pink- and blue-collar people, I present radically condensed stories from my book-length ethnographic treatment of their lives.[19]

Remaking Family Life in the Silicon Valley

Two challenges to my class and gender prejudices provoked my turn to ethnographic research and my selection of the two kin groups who became its focus. Pamela Gama,[*] an administrator of social services for women at a Silicon Valley antipoverty agency when I met her in July of 1984, provided the first of these when she challenged my secular feminist preconceptions by "coming out" to me as a recent born-again Christian convert. Pamela was the forty-seven-year-old bride at the Christian wedding ceremony I attended two years later. There she exchanged Christian vows with her second husband, Albert Gama, a construction worker to whom she was already legally wed and with whom she had previously cohabited. Pamela's first marriage (in 1960 to Don Franklin, the father of her three children) lasted fifteen years, spanning the headiest days of Silicon Valley development and the period of Don's successful rise from telephone repairman to electronics packaging engineer.

In contrast, Dotty Lewison, my central contact in the second kin network I came to study, secured that status by challenging my class prejudices. The physical appearance and appurtenances of the worn and modest Lewison abode, Dotty's polyester attire and bawdy speech, her husband's heavily tattooed body, and the demographic and occupational details of her family's history that Dotty supplied satisfied all of my stereotypic notions of an authentic "working-class" family. But the history of feminist activism Dotty recounted proudly, as she unpacked a newly purchased Bible, demonstrated the serious limitations of my tacit understandings. When I met Dotty in October of 1984, she was the veteran of an intact and reformed marriage of thirty years' duration to her disabled husband Lou, formerly an electronics maintenance mechanic and supervisor, and also, I would later learn, formerly a wife and child abuser.

Pamela, Dotty, and several of their friends whom I came to know during my study were members of Betty Friedan's "feminine mystique" generation, but were not members of Friedan's social class. Unlike the more affluent members of Friedan's intended audience, Pam and Dotty were "beneficiaries" of the late, ephemeral achievement of a male family wage and home ownership won by privileged sectors of the working class. This was a pyrrhic victory, as it turned out, that had allowed this population a brief period of access to the modern family system just as it was decomposing. Pam and Dotty, like most white women of their generation, were young when they married in the 1950s and early 1960s. They entered their first marriages with conventional "Parsonsian" gender expectations about family and work "roles." For a significant period of time, they and their husbands conformed, as best they could, to the then culturally prescribed patterns of "instrumental" male breadwinners and "expressive" female homemakers. Assuming primary responsibility for rearing the children they had begun to bear immediately after marriage, Pam and Dotty supported their husbands' successful efforts to progress from working-class to middle- and upper-middle-class careers in the electronics industry. Their experiences with the modern family, however, were always more tenuous and less pure than were those of women to whom, and for whom, Betty Friedan spoke.

[*] I employ pseudonyms and change identifying details when describing participants in my study.

Insecurities and inadequacies of their husbands' earnings made intermittent labor force participation by Dotty and Pam necessary, and resented by their husbands, before feminism made female employment a badge of pride. Dotty alternated frequent childbearing with multiple forays into the labor force in a wide array of low-paying jobs. In fact, Dotty assembled semiconductors before her husband Lou entered the electronics industry, but she did not perceive or desire significant opportunities for her own occupational mobility at that point. Pamela's husband began his career ascent earlier than Dotty's, but Pamela still found his earnings insufficient and his spending habits too profligate to balance the household budget. To make ends meet in their beyond-their-means middle-class life-style without undermining her husband's pride, Pam shared child care and a clandestine housecleaning occupation with her African-American neighbor and friend, Lorraine. Thus Pam and Dotty managed not to suffer the full effects of the "problem without a name" until feminism had begun to name it, and in terms both women found compelling.

In the early 1970s, while their workaholic husbands were increasingly absent from their families, Pam and Dotty joined friends taking reentry courses in local community colleges. There they encountered feminism, and their lives and their modern families were never to be the same. Feminism provided an analysis and rhetoric for their discontent, and it helped each woman develop the self-esteem she needed to exit or reform her unhappy modern marriage. Both women left their husbands, became welfare mothers, and experimented with the single life. Pam obtained a divorce, pursued a college degree, and developed a social service career. Dotty, with lesser educational credentials and employment options, took her husband back, but on her own terms, after his disabling heart attack (and after a lover left her). Disabled Lou ceased his physical abuse and performed most of the housework, while Dotty had control over her time, some of which she devoted to feminist activism in antibattering work.

By the time I met Pamela and Dotty a decade later, when my own feminist-inspired joint household of the prior eight years was failing, national and local feminist ardor had cooled. Pam was then a recent convert to born-again Christianity, receiving Christian marriage counseling to buttress and enhance her second marriage to construction worker Al. Certainly this represented a retreat from feminist family ideology, but, as Pamela gradually taught me, and as Susan Gerard and I have elaborated elsewhere, it was a far less dramatic retreat than I at first imagined.[20] Like other women active in the contemporary evangelical Christian revival, Pam was making creative use of its surprisingly flexible patriarchal ideology to reform her husband in her own image. She judged it "not so bad a deal" to cede Al nominal family headship in exchange for substantive improvements in his conjugal behavior. Indeed, few contemporary feminists would find fault with the Christian marital principles that Al identified to me as his goals: "I just hope that we can come closer together and be more honest with each other. Try to use God as a guideline. The goals are more openness, a closer relationship, be more loving both verbally and physically, have more concern for the other person's feelings." Nor did Pamela's conversion return her to a modern family pattern. Instead she collaborated with her first husband's live-in Jewish lover, Shirley Moskowitz, to build a remarkably harmonious and inclusive divorce-extended kin network whose constituent households swapped resources, labor, and lodgers in response to shifting family circumstances and needs.

Dotty Lewison was also no longer a political activist when we met in 1984. Instead she was supplementing Lou's disability pension with part-time paid work in a small insurance office and pursuing spiritual exploration, more overtly postmodern in form than Pam's, in a metaphysical Christian church. During the course of my fieldwork, however, an overwhelming series of tragedies claimed the lives of Dotty's husband and two of the Lewisons' five adult children. Dotty successfully contested her negligent son-in-law for custody of her four motherless grandchildren. Struggling to support them, she formed a matrilocal joint household with her only occupationally successful child, Kristina, an electronics drafter-designer and a single mother of one child. While Dotty and Pamela both had moved "part way back" from feminist fervor, at the same time both had migrated ever further away from the (no-longer) modern family.

Between them, Pamela and Dotty had eight children—five daughters and three sons—children of modern families disrupted by postindustrial developments and feminist challenges. All were in their twenties when I met them in 1984 and 1985, members of the quintessential postfeminist generation. Although all five daughters distanced themselves from feminist identity and ideology, all, too, had semiconsciously incorporated feminist principles into their gender and kin expectations and practices. They took for granted, and at times eschewed, the gains in women's work opportunities, sexual autonomy, and male participation in childrearing and domestic work for which feminists of their mothers' generation had struggled. Ignorant or disdainful of the political efforts feminists expended to secure such gains, they were preoccupied instead with coping with the expanded opportunities and burdens women now encounter. They came of age at a time when a successful woman was expected to combine marriage to a communicative, egalitarian man with motherhood and an engaging, rewarding career. All but one of these daughters of successful white working-class fathers absorbed these postfeminist expectations, the firstborns most fully. Yet none has found such a pattern attainable. Only Pam's younger daughter, Katie, the original source of the evangelical conversions in her own marriage and her mother's, explicitly rejected such a vision. At fourteen, Katie joined the Christian revival, where, I believe, she found an effective refuge from the disruptions of parental divorce and adolescent drug culture that threatened her more rebellious siblings. Ironically, however, Katie's total involvement in a pentecostal ministry led her to practice the most alternative family arrangement of all. Katie, with her husband and young children, has lived "in community" in various joint households (occasionally interracial households) whose accordion structures and shared childrearing, ministry labors, and expenses have enabled her to achieve an exceptional degree of sociospatial integration of her family, work, and spiritual life.

At the outset of my fieldwork, none of Pam's or Dotty's daughters inhabited a modern family. However, over the next few years, discouraging experiences with the work available to them led three to retreat from the world of paid work and to attempt a modified version of the modern family strategy their mothers had practiced earlier. All demanded, and two received, substantially greater male involvement in child care and domestic work than had their mothers (or mine) in the prefeminist past. Only one, however, had reasonable prospects of succeeding in her "modern" gender strategy, and these she secured through unacknowledged benefits feminism helped her to enjoy. Dotty's second daughter, Polly, had left the Silicon Valley when the electronics company she worked for opened a branch in a state with lower labor and housing costs. Legalized abortion and liberalized sexual norms for women allowed Polly to experiment sexually and defer marriage and childbearing until she was able to negotiate a marriage whose domestic labor arrangements represented a distinct improvement over those of the prefeminist modern family.

I have less to say, and less confidence in what I do have to say, about postmodern family strategies among the men in Pam's and Dotty's kin groups. Despite my concerted efforts to study gender relationally by defining my study in gender-inclusive terms, the men in the families I studied remained comparatively marginal to my research. In part, this is an unavoidable outcome for any one individual who attempts to study gender in a gendered world. Being a woman inhibited my access to, and likely my empathy with, as full a range of the men's family experiences as that which I enjoyed among their female kin. Still, the relative marginality of men in my research is not due simply to methodological deficiencies. It also accurately reflects their more marginal participation in contemporary family life. Most of the men in Pam's and Dotty's networks narrated gender and kinship stories that were relatively inarticulate and undeveloped, I believe, because they had less experience, investment, and interest in the work of sustaining kin ties.[21]

While economic pressures have always encouraged expansionary kin work among working-class women, the same pressures have often weakened men's family ties. Men's muted family voices in my study whisper of a masculinity crisis among blue-collar men. As working-class men's access to breadwinner status recedes, so too does confidence in their masculinity.[22] The decline of the family wage and the escalation of women's involvement in paid work seem to generate profound ambivalence about the eroding breadwinner ethic. Pam's and Dotty's male kin appeared uncertain as to whether a man who provides sole support to his family is a hero or a chump. Two of these men avoided domestic commitments entirely, while several embraced them wholeheartedly. Two vacillated between romantic engagements and the unencumbered single life. Too many of the men I met expressed their masculinity in antisocial, self-destructive, and violent forms.

Women strive, meanwhile, as they always have, to buttress and reform their male kin. Responding to the extraordinary diffusion of feminist ideology as well as to sheer overwork, working-class women, like middle-class women, have struggled to transfer some of their domestic burdens to men. My fieldwork leads me to believe that they have achieved more success in the daily trenches than much of the research on the "politics of housework" yet indicates—more success, I suspect, than have most middle-class women.[23] While only a few of the women in my study expected or desired men to perform an equal share of housework and child care, none was willing to exempt men from domestic labor. Almost all of the men I observed or heard about routinely performed domestic tasks that my own blue-collar father and his friends never deigned to contemplate. Some did so with reluctance and resentment, but most did so willingly. Although the division of household labor remains profoundly inequitable, I am convinced that a major gender norm has shifted here.[24]

Farewell to Archie Bunker

If this chapter serves no other purpose, I hope it will shatter the image of the white working class as the last repository of old-fashioned "modern" American family life. The postmodern family arrangements I found among blue-collar people in the Silicon Valley are at least as diverse and innovative as those found within the middle class. Pundits of postmodern family arrangements, like Delia Ephron, satirize the hostility and competition of the contemporary divorce-extended family. But working women like Pamela and Dotty have found ways to transform divorce from a rupture into a kinship resource, and they are not unique. A recent study of middle-class divorced couples and their parents in the suburbs of San Francisco found one-third sustaining kinship ties with former spouses and their relatives.[25] It seems likely that cooperative exfamilial relationships are even more prevalent among lower-income groups, where divorce rates are higher and where women have far greater experience with, and need for, sustaining cooperative kin ties.[26]

Certainly, the dismantling of welfare state protections and the re-privatizing policies of the Reagan-Bush era have given lower-income women renewed incentives to continue their traditions of active, expansionary kin work. The accordion households and kin ties crafted by Dotty Lewison, by Katie's Christian ministry, and by Pam and Shirley draw more on the "domestic network" traditions of poor, urban African-Americans described by Carol Stack and on the matrifocal strategies of poor and working-class whites than they do on family reform innovations by the white middle class.[27] Ironically, sociologists are now identifying as a new middle-class "social problem" those "crowded," rather than empty, nests filled with "incompletely launched young adults," long familiar to the less privileged, like the Lewisons.[28] Postindustrial conditions have reversed the supply-side, "trickle-down" trajectory of family change predicted by modernization theorists. The diversity and complexity of postmodern family patterns rival that characteristic of premodern kinship forms.[29]


28

One glimpses the ironies of class and gender history here. For decades industrial unions struggled heroically for a socially recognized male breadwinner wage that would allow the working class to participate in the modern gender order. These struggles, however, contributed to the cheapening of female labor that helped gradually to undermine the modern family regime.[30] Then escalating consumption standards, the expansion of mass collegiate coeducation, and the persistence of high divorce rates gave more and more women ample cause to invest a portion of their identities in the "instrumental" sphere of paid labor.[31] Thus middle-class women began to abandon their confinement in the modern family just as working-class women were approaching its access ramps. The former did so, however, only after the wives of working-class men had pioneered the twentieth-century revolution in women's paid work. Entering employment in mid-life during the catastrophic 1930s, participating in defense industries in the 1940s, and raising their family incomes to middle-class standards by returning to the labor force soon after childrearing in the 1950s, wives of working-class men quietly modeled and normalized the postmodern family standard of employment for married mothers. Whereas in 1950 the less a man earned, the more likely his wife was to be employed, by 1968 wives of middle-income men were the most likely to be in the labor force.[32]

African-American women and white working-class women have been the genuine postmodern family pioneers, even though they also suffer most from its negative effects. Long denied the mixed benefits that the modern family order offered middle-class women, less privileged women quietly forged models of femininity alternative to full-time domesticity and mother-intensive childrearing. Struggling creatively, often heroically, to sustain oppressed families and to escape the most oppressive ones, they drew on "traditional," premodern kinship resources and crafted untraditional ones, creating in the process the postmodern family.

Rising divorce and cohabitation rates, working mothers, two-earner households, single and unwed parenthood, and matrilineal, extended, and fictive kin support networks appeared earlier and more extensively among poor and working-class people.[33] Economic pressures, more than political principles, governed these departures from domesticity, but working women soon found additional reasons to appreciate paid employment.[34] Eventually white middle-class women, sated and even sickened by our modern family privileges, began to emulate, elaborate, and celebrate many of these alternative family practices.[35] How ironic and unfortunate it seems, therefore, that feminism's anti-modern-family ideology should then offend many women from the social groups whose gender and kinship strategies helped to foster it.


29

If, as my research suggests, postindustrial transformations encouraged modern working-class families to reorganize and diversify themselves even more than middle-class families, it seems time to inter the very concept of "the working-class family." This deeply androcentric and class-biased construct distorts the history and current reality of wage-working people's intimate relationships. Popular images of working-class family life, like the Archie Bunker family, rest upon the iconography of industrial blue-collar male breadwinners and the history of their lengthy struggle for a family wage. But the male family wage was a late and ephemeral achievement of only the most fortunate sections of the modern industrial working class. It is doubtful that most working-class men ever secured its patriarchal domestic privileges.

Postmodern conditions expose the gendered character of this social-class category, and they render it atavistic. As feminists have argued, only by disregarding women's labor and learning was it ever plausible to designate a family unit as working class.[36] In an era when most married mothers are employed, when women perform most "working-class" jobs,[37] when most productive labor is unorganized and fails to pay a family wage, when marriage links are tenuous and transitory, and when more single women than married homemakers are rearing children, conventional notions of a normative working-class family fracture into incoherence. The life circumstances and mobility patterns of the members of Pamela's kin set and of the Lewisons, for example, are so diverse and fluid that no single social-class category can adequately describe any of the family units among them.

If the white working-class family stereotype is inaccurate, it is also consequential. Stereotypes are moral (alas, more often, immoral) stories people tell to organize the complexity of social experience. Narrating working-class people as profamily reactionaries suppresses the diversity and the innovative character of a great proportion of working-class kin relationships. Because it carries socially divisive and conservative political effects, the Archie Bunker stereotype may have helped to contain feminism by estranging middle-class women from working-class women. Barbara Ehrenreich argues that caricatures that portray the working class as racist and reactionary are recent, self-serving inventions of professional, middle-class people eager "to seek legitimation for their own more conservative impulses."[38] In the early 1970s, ignoring rising labor militancy as well as racial, ethnic, and gender diversity among working-class people, the media effectively imaged them as the new conservative bedrock of "middle America." "All in the Family," the early 1970s television sit-com series that immortalized racist, chauvinist, working-class hero-buffoon Archie Bunker, can best be read, Ehrenreich suggests, as "the longest-running Polish joke," a projection of middle-class bad faith.[39] Yet, if this bad faith served professional middle-class interests, it did so at the expense of feminism. The inverse logic of class prejudice construed the constituency of that enormously popular social movement as exclusively middle-class. By convincing middle-class feminists of our isolation, perhaps the last laugh of that "Polish joke" was on us. Even Ehrenreich, who sensitively debunks the Bunker myth, labels "startling" the findings of a 1986 Gallup poll that "56 percent of American women considered themselves to be 'feminists,' and the degree of feminist identification was, if anything, slightly higher as one descended the socio-economic scale."[40] Feminists must be attuned to the polyphony of family stories authored by working-class as well as middle-class people if we are ever to transform poll data like these into effective political alliances.

While my ethnographic research demonstrates the demise of "the working-class family," in no way does it document the emergence of the classless society once anticipated by postindustrial theorists.[41] On the contrary, recent studies of postindustrial occupation and income distribution indicate that the middle classes are shrinking and the economic circumstances of Americans are polarizing.[42] African-Americans have borne the most devastating impact of economic restructuring and the subsequent decline of industrial and unionized occupations.[43] But formerly privileged white working-class men, those like Pam's two husbands and Lou Lewison, who achieved access to the American Dream in the 1960s and 1970s, now find their gains threatened and difficult to pass on to their children.

While high-wage blue-collar jobs decline, the window of postindustrial opportunity that admitted undereducated men and women, like Lou and Kristina Lewison and Don Franklin, to middle-class status is slamming shut. "During the 1980s, the educated got richer and the uneducated got poorer. And it looks like more of the same in the 1990s," declared a recent summary of occupational statistics from the Census Bureau and the Labor Department.[44] Young white families earned 20 percent less in 1986 than did comparable families in 1979, and their prospects for home ownership plummeted.[45] Real earnings for young men between the ages of twenty and twenty-four dropped by 26 percent between 1973 and 1986, while the military route to upward mobility that many of their fathers traveled constricted.[46] In the 1950s men like Lou Lewison, equipped with Veterans Administration loans, could buy homes with token down payments and budget just 14 percent of their monthly wages for housing costs. By 1984, however, those veterans' children, looking for a median-priced home as first-time would-be home owners, could expect their housing costs to be 44 percent of an average male's monthly earnings.[47] Few could manage this, and in 1986 the U.S. government reported "the first sustained drop in home ownership since the modern collection of data began in 1940."[48]


Postindustrial shifts have reduced blue-collar job opportunities for the undereducated sons of working-class fathers I interviewed. And technological developments like Computer-Aided Design have escalated the entry criteria and reduced the numbers of those middle-level occupations that recently employed uncredentialled young people like Kristina Lewison and Pam's oldest child, Lanny.[49] Thus the proportion of American families in the middle-income range fell from 46 percent in 1970 to 39 percent in 1985. Two earners in a household now are necessary just to keep from losing ground.[50] Data like these led social analysts to anxiously track "the disappearing middle class," a phrase which, Barbara Ehrenreich now believes, "in some ways missed the point. It was the blue-collar working class that was 'disappearing,' at least from the middle range of comfort."[51]

Postindustrial restructuring has had contradictory effects on the employment opportunities of former working-class women. Driven by declines in real family income, by desires for social achievement and independence, and by an awareness that committed male breadwinners are in scarce supply, such women have flocked to expanding jobs in service, clerical, and new industrial occupations. These jobs provide the means of family subsidy or self-support, and the self-respect, gained by many women like Pam and Dotty; but few of these women enjoy earnings or prospects equivalent to those of their former husbands or fathers. Recent economic restructuring has replaced white male workers with women and minority men, but in less well paid, more vulnerable jobs.[52]

Whose Family Crisis?

This massive reordering of work, class, and gender relationships during the past several decades is what has turned family life into a contested terrain. It seems ironic, therefore, to observe that at the very same time that women are becoming the new proletariat, the postmodern family, even more than the modern family it is replacing, is proving to be a woman-tended domain. To be sure, as Kathleen Gerson reports in the chapter that follows this one, there is some empirical basis for the enlightened father imagery celebrated by films like Kramer versus Kramer. Indeed my fieldwork corroborates emerging evidence that the determined efforts by many working women and feminists to reintegrate men into family life have not been entirely without effect. There are data, for example, indicating that increasing numbers of men would sacrifice occupational gains in order to have more time with their families, just as there are data documenting actual increases in male involvement in child care.[53] The excessive media attention that the faintest signs of this "new paternity" enjoy, however, may be a symptom of a deeper, far less comforting reality. We are experiencing, as Andrew Cherlin aptly puts it, "the feminization of kinship."[54] Demographers report a drastic decline in the average number of years that men live in households with young children.[55] Few of the women who assume responsibility for their children in 90 percent of divorce cases in the United States today had to wage a custody battle for this privilege.[56] We hear few proposals for a "daddy track" in the workplace. And few of the adults providing care to sick and elderly relatives are male.[57] Yet ironically, most of the alarmist and nostalgic literature about contemporary family decline impugns women's abandonment of domesticity, the flipside of our tardy entry into modernity. Rarely do the anxious public outcries over the destructive effects on families of working mothers, high divorce rates, institutionalized child care, or sexual liberalization scrutinize the family behaviors of men.[58] Anguished voices emanating from all bands on the political spectrum lament state and market interventions that are weakening "the family."[59] But whose family bonds are fraying? Women have amply demonstrated a continuing commitment to sustaining kin ties. If there is a family crisis, it is a male crisis.

The crisis cannot be resolved by reviving the modern family system. While nostalgia for an idealized world of "Ozzie and Harriet" and "Archie Bunker" families abounds, little evidence suggests that most Americans genuinely wish to return to the gender order these symbolize. On the contrary, the vast majority, like the people in my study, are actively remaking family life. Indeed, a 1989 survey conducted by the New York Times found more than two-thirds of women—including a substantial majority even of those living in "traditional," that is to say "modern," households, as well as a majority of men—agree that "the United States continues to need a strong women's movement to push for changes that benefit women."[60] Yet many people seem reluctant to affirm their family preferences. They cling, like Shirley Moskowitz, to images of themselves as "back from the old days," while venturing ambivalently, but courageously, into the new.[61]

Responding to new economic and social insecurities as well as to feminism, higher percentages of families in almost all income groups have adopted a multiple-earner strategy.[62] Thus, the household form that has come closer than any other to replacing the modern family with a new cultural and statistical norm consists of a two-earner, heterosexual married couple with children.[63] It is not likely, however, that any type of household will soon achieve the measure of normalcy that the modern family long enjoyed. Indeed, the postmodern success of the voluntary principle of the modern family system precludes this, assuring a fluid, recombinant familial culture. The routinization of divorce and remarriage generates a diversity of family patterns even greater than was characteristic of the premodern period, when death prevented family stability or household homogeneity. Even cautious demographers judge the new family diversity to be "an intrinsic feature . . . rather than a temporary aberration" of contemporary family life.[64]

"The family" is not "here to stay." Nor should we wish it were. The ideological concept of "the family" imposes mythical homogeneity on the diverse means by which people organize their intimate relationships, and consequently distorts and devalues this rich variety of kinship stories. And, along with the class, racial, and heterosexual prejudices it promulgates, this sentimental, fictional plot authorizes gender hierarchy. Because the postmodern family crisis ruptures this seamless modern family "script," it provides a democratic opportunity. Feminists', gay liberation activists', and many minority rights organizations' efforts to expand and redefine the notion of family are responses to this opportunity. These groups are seeking to extend social legitimacy and institutional support for the diverse patterns of intimacy that Americans have already forged.

If feminism threatens many people and seems out of fashion, struggles to reconstitute gender and kinship on a just and democratic basis are more popular than ever.[65] If only a minority of citizens are willing to grant family legitimacy to gay domestic partners, an overwhelming majority subscribe to the postmodern definition of a family by which New York's highest court validated a gay man's right to retain his deceased lover's apartment. "By a ratio of 3 to 1," people surveyed in a Yale University study defined the family as "a group of people who love and care for each other." And while a majority of those surveyed gave negative ratings to the quality of American family life in general, 71 percent declared themselves "at least very satisfied" with their own family lives.[66]

I find an element of bad faith in the popular lament over the decline of the family. Nostalgia for "the family" deflects criticism from the social sources of most "personal troubles." Supply-side economics, governmental deregulation, and the right-wing assault on social welfare programs have intensified the destabilizing effects of recent occupational upheavals on flagging modern families and emergent postmodern ones alike. Indeed, the ability to provide financial security was the chief family concern of most of the people surveyed in the Yale study. If the postmodern family crisis represents a democratic opportunity, contemporary economic and political conditions enable only a minority to realize its tantalizing potential.

The discrepant data reported in the Yale study indicate how reluctant most Americans are to fully acknowledge the genuine ambivalence we feel about family and social change. Yet ambivalence, as Alan Wolfe suggests, is an underappreciated but responsible moral stance, and one well suited for democratic citizenship: "Given the paradoxes of modernity, there is little wrong, and perhaps a great deal right, with being ambivalent—especially when there is so much to be ambivalent about."[67]

Certainly, as my experiences among Pamela's and Dotty's kin—and my own—have taught me, there are good grounds for ambivalence about contemporary postmodern family conditions. Nor do I imagine that even a successful feminist family revolution could eliminate all family distress. At best, it would foster a social order that could invert Tolstoy's aphorism by granting happy families the freedom to differ, and even to suffer. Truly postfeminist families, however, would suffer only the "common unhappiness" endemic to intimate human relationships; they would be liberated from the "hysterical misery" generated by social injustice.[68] No nostalgic movement to restore the modern family can offer as much. For better and/or worse, the postmodern family revolution is here to stay.


Two—
Coping with Commitment:
Dilemmas and Conflicts of Family Life

Kathleen Gerson

Since 1950, when the breadwinner-homemaker household accounted for almost two-thirds of all American households, widespread changes have occurred in the structure of American family life. Rising rates of divorce, separation, and cohabitation outside of marriage have created a growing percentage of single-parent and single-adult households. The explosion in the percentage of employed women, and especially employed mothers, has produced a rising tide of dual-earner couples whose patterns of child rearing differ substantially from the 1950s' norm of the stay-at-home mother. As Judy Stacey also shows in this volume, the breadwinner-homemaker model of family life has become only one of an array of alternatives that confront men and women as they build (and often change) their lives over the course of an expanded adulthood.[1]

As changes in family structure have become apparent to intellectuals, politicians, and ordinary citizens, a national debate has arisen over their nature and significance. The most widely embraced interpretation of family change is one of alarm and condemnation. Analysts and social critics across the political spectrum routinely blame "the breakdown of the family" for a host of modern social ills, extending from the drug epidemic and increases in violent crime to teenage pregnancy, child abuse and neglect, the decline of educational standards, and even the birth dearth.[2] But a competing and less pessimistic perspective emphasizes the resilience of families, which are adapting rather than disintegrating in the face of social change, and the resourcefulness of individuals, who are able to build meaningful interpersonal bonds amid the uncertainty and fragility of modern relationships.[3] By arguing that changes in family structure represent a necessary adaptation to structural change, this perspective refuses to hold "nontraditional" families responsible for circumstances they did not create or social trends that accompany, but are not caused by, family change.[4] It also calls attention to the positive side of social change by taking into account the benefits of women's increasing opportunities outside the home.

The "stability within change" perspective provides an important rebuttal to the gloomy and accusatory picture presented by the "family breakdown" thesis. It upholds the validity of women's struggle for gender equality and freedom of choice regarding sexuality, marriage, and childbearing. However, its relatively benign view tends to understate some of the costs of social change. In the context of persistent gender inequality, these costs have fallen most heavily on the women and children who can no longer count on a man's economic support and have not gained access to other economic bases. The growing percentage of women and children who live in poverty is, for example, an unfortunate consequence of the loosening of the bonds of permanent marriage and the erosion of male breadwinning.[5]

Posing the situation as one of family breakdown versus family stability and adaptive resilience oversimplifies the nature of the change process. This chapter argues, instead, that both the current debate on the family and the difficulties most families now face result less from the fact of fundamental social change than from the inconsistent and contradictory nature of change.[6] Incomplete and unequal social change has created new personal dilemmas over how to balance parental and employment commitments and new social conflicts between those who have developed "traditional" and "nontraditional" resolutions to the intransigent conflicts between family and workplace demands.[7] These dilemmas and conflicts pose the central challenges to which new generations of women, men, and children must respond.[8]

Personal Dilemmas and Family Diversity: The Consequences of Unequal Social Change

Social change in family structure remains inconsistent in two consequential ways. First, some social arrangements have changed significantly, but others have not. Even though an increasing percentage of families depend on the earnings of wives and mothers, women continue to face discrimination at the workplace and still retain responsibility for the lion's share of household labor.[9] Similarly, despite the growth of dual-earner and single-parent households, the structural conflicts between family and work continue to make it difficult for either women or men to combine child rearing with sustained employment commitment.[10] The combination of dramatic change in some social arrangements (for example, women's influx into the labor force) and relatively little change in others (for example, employers' continuing expectation that job responsibilities should take precedence over family needs) has created new forms of gender inequality and new dilemmas for both women and men who confront the dual demands of employment and parenthood.

Second, social change is inconsistent because social groups differ greatly in how and to what degree they have been exposed to change. Not only are the alternatives that women and men face structured differently, but within each gender group, the alternatives vary significantly. A growing group of women, for example, has gained access to highly rewarded professional and managerial careers, but most women remain segregated in relatively ill-rewarded, female-dominated occupations. Similarly, the stagnation of real wages has eroded many men's ability to support wives and children on their paycheck alone, but most men still enjoy significant economic advantages. This variation in opportunities and constraints has, in turn, promoted contrasting orientations toward family change among differently situated groups of women and men.

This chapter draws on two studies of how differently situated groups of women and men are responding to the dilemmas posed by unequal social change.[11] These studies have examined, first, how women are responding to the structural conflicts between family and employment commitments and, second, how men are responding to the conflict between preserving their historic privileges and confronting their growing need to share breadwinning responsibilities with women. It analyzes the similarities and differences between women's and men's responses, how their family situations affect their personal and political strategies, and, finally, the short-term and long-term implications of men's and women's attempts to resolve these dilemmas and conflicts.

Women and men have developed a range of strategic responses to cope with the contrasting dilemmas they confront. We can compare the "coping strategies" of those who developed a "traditional orientation" with the strategies that grew out of two alternative orientations—an orientation that stresses the avoidance of parental commitments and, finally, an orientation based on seeking a balance between work and family commitments. Since the conflicts and dilemmas inherent in each family pattern vary according to gender, women's and men's strategic responses are analyzed separately. Women and men confront a different set of opportunities and constraints, but each group must respond to the dilemmas posed by unequal and uneven social change. Their contending resolutions to these family dilemmas shape the terms of political conflict as well as the contours of social change.

Choosing between Employment and Motherhood

Although most women, including most mothers, now participate in the paid labor force, this apparent similarity masks important differences in women's responses to the conflicts between employment and motherhood. Not only do some mothers continue to stay home to rear children, but many employed women work part time or intermittently and continue to emphasize family over employment commitments.[12] These "domestically oriented" women stand in contrast to a growing group of "nondomestic" women, who have developed employment ties that rival, and for some surpass, family commitments. Women develop "domestic" or "nondomestic" orientations in response to specific sets of occupational and interpersonal experiences. These contrasting orientations to family life are not only rooted in different social circumstances; they also represent opposing responses to the conflicts between motherhood and employment.[13]

All women face an altered social context, but they differ in how and to what extent they have been exposed to structural change. This uneven exposure to new opportunities and constraints has produced contrasting orientations toward employment and motherhood. In my research on how women make family and work decisions, I found that regardless of class position or early childhood experiences and expectations, those women who were exposed to change in marital and work institutions were more likely to develop nondomestic orientations as adults, whereas those who were sheltered from these changes tended to develop a domestic orientation in adulthood. About two-thirds of the respondents who held domestic orientations as children ultimately became work-committed. Similarly, over 60 percent of those who were ambivalent about childbearing or who held career aspirations as children became committed to domesticity in adulthood.

Unanticipated encounters with changing structures of marriage and employment led some women to veer away from domesticity and others to veer toward it. Those who experienced instability in their relationships with men, who encountered often unanticipated chances for advancement at the workplace, who were disillusioned with the experience of motherhood, and who met severe economic squeezes in their households tended to develop strong work commitments. These women found full-time mothering and homemaking relatively isolating, devalued, and unfulfilling compared to the rewards of paid jobs. Exposure to unanticipated opportunities outside the home combined with unexpected disappointment in domestic pursuits to encourage a nondomestic orientation even among those who had initially planned for a life of domesticity.

In contrast, women who encountered blocked mobility at work and became disillusioned by dead-end jobs decided that motherhood provided a more satisfying alternative to stifling work conditions. They were, furthermore, able to establish stable marital partnerships in which they could depend on economic support from husbands with secure careers. When the experience of blocked mobility at the workplace was combined with unexpected marital commitment to a securely employed spouse, even women who once held career aspirations were encouraged to loosen their employment ties and turn toward domestic pursuits. Over 60 percent of those who initially planned to have a work career ultimately opted for domesticity in response to constraints at the workplace and opportunities for domestic involvement. Amid the currents of social change, exposure to a traditional package of opportunities and constraints led these women to conclude that their best hope for a satisfying life depended on subordinating their employment goals to motherhood and family pursuits.

In sum, exposure to expanded opportunities outside the home (for example, upward employment mobility) and unanticipated insecurities within it (for example, marital instability or economic squeezes in the household) tends to promote a nondomestic orientation, even among women who once planned for full-time motherhood. Exposure to a more traditional package of opportunities and constraints (such as constricted employment options and stable marriage) tends, in contrast, to promote a domestic orientation even among those who felt ambivalent toward motherhood and domesticity as children. Both orientations reflect contextually sensible, if unexpected and largely unconscious, responses to the structural conflicts between employment and motherhood.

Uneven exposure to structural change, like the partial nature of change, promotes contrasting family orientations among women. Some women remain dependent on a traditional family structure that emphasizes sharp social differences between the sexes along with male economic support for women's mothering. Others increasingly depend on social and economic supports outside the home—which can be guaranteed only if women are accorded the same rights, responsibilities, and privileges as men. Rising marital instability and stagnant male wages have eroded the structural supports for female domesticity, but persistent gender inequality at the workplace and in the home also makes domesticity an inviting alternative to those who still face limited options in the paid labor force. In the context of this ambiguous mix of expanded options and new insecurities, the choices of both domestically oriented and work-committed women remain problematic, however personally fulfilling they may be.

Strategies of Domestically Oriented Women

Despite the forces leading other women out of the home, domestically oriented women confront ample reasons to avoid such a fate. Blocked occupational opportunities leave these women poorly positioned to enjoy the benefits of work outside the home. They have concluded that domestic pursuits offer significant advantages over workplace commitment. A homemaker and mother of two declared:



I never plan to go back [to work]. I'm too spoiled now. I'm my own boss. I have independence; I have control; I have as much freedom as anyone is going to have in our society. No [paid] job can offer me those things.

Since their "freedom" depends on someone else's paycheck, domestically oriented women are willing to accept responsibility for the care of home and children in exchange for male economic support. As this disillusioned ex-schoolteacher and full-time mother of two pointed out, they have little desire to change places with their breadwinning husbands:

I have met guys who were housepersons, but I can't see any reason [for it]. It would turn it all crazy for me to come home around five thirty, and he'd have to have things ready for me. I think if I thought that [bringing in a paycheck] was my role for the rest of my life, I would hate it. I don't want to be [my husband]; then I would have to go and fight the world. I don't want the pressures that he has to bear—supporting a family, a mortgage, putting in all those hours at the office. Ugh!

Whether or not they work, domestically oriented women put their family commitments first. When employed, they carefully define their work attachments as a discretionary choice that can be curtailed if necessary and that always comes second to their children's needs. A part-time clerk and mother of two defined paid work as a "job," not a career:

I would never want to get us in a situation where I would have to work, because then I would really hate it. I don't work to have a career. Without a career, I can quit a job whenever I want. To have a career, you have to stick with it, and it takes a lot. I'd have to give up a lot of things my kids need, and it's not worth it to me. A job, I don't have to give up anything.

Although relatively insulated from the pushes and pulls that lead other women toward strong labor force attachment, domestically oriented women are nevertheless affected by the social changes taking place around them. The erosion of structural and ideological supports for a traditional arrangement has made their commitment to a family form based on a strict sexual division of labor problematic. The increased fragility of marriage, for example, poses an abiding, if unspoken, threat to domestic women's security. In the context of high divorce rates, homemaking women cannot assume that the relationships they depend on will last. This ex-clerk and mother of a young daughter complained:

[Having a child] has made me more dependent on my husband. I think he was attracted to me because I was very independent, and now I'm very dependent. I don't know what I would do if things didn't work out between [us] and we had to separate and I had to go to work to support my child. I think I'd be going bananas. It's scary to me.

Even when their marriages are secure, domestically oriented women face other incursions on their social position. The rise of work commitment among other women has not only provided an alternative to domesticity, it has also eroded the ideological hegemony that homemakers once enjoyed. Domestically oriented women feel unfairly devalued by others, as these ex-clerical workers explained:

There are times when I have some trouble with my identity; that has to do with being a mother. Because of society, sometimes the recognition or lack of it bothers me.

People put no value on a housewife. If you have a job, you're interesting. If you don't, you're really not very interesting, and sometimes I think people turn you off.

This ex-nurse added that even when economic pressures are weak, the social pressures to seek employment make domesticity a difficult choice:

I have been feeling a lot of pressure . . . there's a lot of pressure on women now that you should feel like you want to work. Sometimes it's hard to know what you feel, because I really don't feel like I want to [work], but I think I should feel like I want to.

The erosion of the structural and ideological supports for domesticity has left domestically oriented women feeling embattled. They are now forced to defend a personal choice and family arrangement that was once considered sacrosanct.[14] For these reasons, domestically oriented women cannot afford to take a neutral stance toward social change, and many have developed ideologies of opposition to other people's choices. Domestically oriented women tend to view employed mothers as either selfish and dangerous to children or overburdened and miserable, as these two homemakers suggested:

I have a neighbor with young children who works just because she wants to. I get sort of angry . . . I think I resent the unfairness to the child. I don't know how to answer the argument that men can have families and work, but women can't. Maybe it's not fair, but that's the way it is.

Most of the time all I hear from them is griping, and they're tired, and they're frantic to get everything done. It's a shame. I hate hurrying like that.

They viewed career-committed women as selfish, unattractive, and, at least in the case of childless women, unfulfilled:

Women can [take on men's jobs], but it's a blood-and-guts type of thing. Those who make it are witches because they found out what they had to do to get there. [ex-saleswoman planning first child]

I feel like they're missing out on something. If they're going to make a long-term thing of it and never have children, I think they're missing something.



Finally, domestically oriented women support men's right and duty to be primary breadwinners. They frown on men who shirk their duties to support women and children. This homemaker and mother of two could not understand why "undependable" men were considered glamorous:

There's this mystique about the charismatic, not decent and dependable, sort of man. They're movie types. . . . My husband goes to work at eight and comes home at five, and [people] say, "Isn't that boring?" And I say, "No. Not at all," because it gives me time to do what I want.

Although their strategies have unfolded against the tide of social change, domestically oriented women illustrate the forces that not only limit the change but provide a powerful opposition to it. Their personal circumstances give them ample reason to view change as a dangerous threat to their own and their children's well-being, even when it leads in the direction of greater gender equality.

Strategies of Work-Committed Women

Work-committed women lack the option of domesticity, or the desire to opt for it, but they nevertheless face significant obstacles. Persistent wage inequality and occupational sex segregation continue to deny most employed women an equal opportunity to succeed at the workplace. In addition, limited change in the organization of work, especially in male-dominated occupations, combines with the "stalled revolution" in the sexual division of domestic work to make it difficult for employed women to integrate career commitment with motherhood. Work-committed women have responded in several ways to this predicament. A small but significant proportion have decided to forgo childbearing altogether, but the majority of work-committed women are attempting to balance child rearing with strong labor force attachment.[15]

Childless women have concluded that childbearing is an unacceptably dangerous choice in a world where marriage is fragile and motherhood threatens to undermine employment prospects. A strong skepticism regarding the viability of marriage led a divorced executive to reject childbearing:

[Having children] probably would set back my career . . . irretrievably. The real thing that fits in here is my doubts about men and marriage, because if I had real faith that the marriage would go on, and that this would be a family unit and be providing for these children, being set back in my career wouldn't be that big a deal. But I have a tremendous skepticism about the permanency of relationships, which makes me want to say, "Don't give anything up, because you're going to lose something that you're going to need later on, because [the man] won't be there."

Childless women also have considerable skepticism about men's willingness to assume the sacrifices and burdens of parenthood. Since gender equality in parenting seems out of reach, so does motherhood. For this childless physician, even avowedly egalitarian men appeared untrustworthy:

I would never curtail my career goals for a [child] . . . I would not subjugate my career any more than a man would subjugate his career. . . . [And] I don't know anybody who says he wants an egalitarian relationship. Among the married ones with children, everybody says, "Sure, we'll share with children equally." But nobody does.

Given the lack of structural supports for combining career commitment and childbearing, these women are convinced they must choose between the two. They have decided that the continuing obstacles to integrating employment and child rearing leave women facing a curiously "old-fashioned" choice between mutually exclusive alternatives:

I think you either do one or the other. . . . You could have children and work, but you wouldn't really be a very senior sort of involved person. Although men can be presidents of companies and have children, women can't. [single interior designer]

I just think that [children] are a responsibility, and you have to be willing to devote all your time to them. If you can't do that, I don't think you should have them. I know that's really old-fashioned, but I tend to believe it. [high school–educated secretary]

Most work-committed women, however, do eventually have children. Many hold the beliefs this lawyer voiced:

I don't think it's fair that [working] women can't have kids. They make things fuller, more complete. I think it rounds out your life.

Work-committed mothers must create strategies to meet the competing demands of child rearing and employment. However, their strategic choices are severely limited by intransigence in the workplace. This aspiring banker lamented:

[My bosses] figure that I'm to have my career, and what I do at home is my own business, but it better not interrupt the job. I've been pushed as far forward as I have because I was a maniac and I never went home.

Since most employers continue to penalize workers, regardless of gender, for parental involvement that interferes with the job, employed mothers have had to look elsewhere for relief from the competing demands of employment and child rearing. Three strategies, in particular, offer hope of easing their plight. First, employed mothers limit the demands they face by limiting family size. Although the two-child family remains the preferred alternative, the one-child family is gaining acceptance.[16] This upwardly mobile office worker concluded:



I know one child won't drive me crazy, and two might. I know I couldn't work and have two. . . . I don't think [one child] would affect [my work plans] at all. More than one would. That's one of the reasons I only want one.

Employed mothers must also reevaluate and alter the beliefs about child rearing they inherited from earlier generations, who frowned on working mothers. One work-committed office worker rejected the idea that children suffer when mothers work, despite having been raised by a full-time mother:

I liked my mother being home, but I think it's okay for a mother to work. As long as she doesn't make her children give things up, and I don't think I'd make my children give anything up by me working.

Finally, work-committed mothers have engaged in a protracted struggle to bring men into the process of parenting. Their male partners' support of their independence gives them leverage to demand sharing, even if it doesn't guarantee that such sharing will be equal. A professor acknowledged:

[My husband] respects my accomplishments. He wants me to keep doing something I enjoy. He wants me to be fairly independent, and he also wants his own independence . . . as long as he can support himself and half a child.

Some have decided that male parental involvement is a precondition to childbearing, as these upwardly mobile workers explained:

I want [equal] participation, and without it I don't want children. I want it for the children, for myself. Without two people doing it, I think it would be a burden on one person. It's no longer a positive experience. [lawyer engaged to be married, in her early thirties]

I think it's going to come to the point that if we're willing to have children, we work things out pretty much [equally] between ourselves. And I think he would rather help out than to not have [children] at all. It's a two-way thing. [office manager in her late twenties]

In rejecting childlessness, most work-committed women have developed strategies to cope with the dual burdens of employment and motherhood. In addition to having smaller families, developing new ideologies about child rearing and mothering, and pressuring men to become involved fathers, work-committed mothers are challenging traditional work and family arrangements based on the assumption of a male worker with a wife at home or, at most, loosely tied to paid employment. Like their domestically oriented counterparts, they find that the need to defend their choices against other people's disapproval encourages them to denigrate different resolutions to the conflict between employment and motherhood. From this perspective, it becomes tempting to define domestically oriented women as:

. . . kind of mentally underdeveloped and not too interesting. Let's face it, it's kind of boring. I guess I don't consider just having children as doing something.

Inconsistent and unequal social change has promoted differing strategic reactions among women that leave them socially divided and politically opposed. The contours of change, however, also depend on men's reactions to the emerging conflicts and dilemmas of family life.[17]

Choosing between Privilege and Sharing: Men's Responses to Gender and Family Change

While the transformation in women's lives has garnered the most attention, significant changes have also occurred in men's family patterns. The primary breadwinner who emphasizes economic support and constricted participation in child rearing persists, but this model—like its female counterpart, the homemaker—no longer predominates. Alongside this pattern, several alternatives have gained adherents. An increasing proportion of men have moved away from family commitments—among them single and childless men who have chosen to forgo parenthood and divorced fathers who maintain weak ties to their offspring.[18] Another group of men, however, has become more involved in the nurturing activities of family life. Although these "involved fathers" rarely assume equal responsibility for child rearing, they are nevertheless significantly more involved with their children than are primary breadwinners, past or present.[19] Change in men's lives, while limited and contradictory, is nonetheless part of overall family change.

As with women's choices, men's family patterns reflect uneven exposure to structural change in family and work arrangements. In my research on men's changing patterns of parental involvement, I found that men who established employment stability in highly rewarded but demanding jobs, and who experienced unexpected marital stability with a domestically oriented spouse, were pushed and pulled toward primary breadwinning even when they had originally hoped to avoid such a fate. In contrast, men who experienced employment instability and dissatisfaction with the "rat race" of high-pressure, bureaucratically controlled jobs tended to turn away from primary breadwinning. When these experiences were coupled with instability in heterosexual relationships and dissatisfying experiences with children, many rejected parental involvement altogether—opting instead for personal independence and freedom from children. When declining work commitments were coupled with unexpected pleasure in committed, egalitarian heterosexual relationships and unexpected fulfillment through involvement with children, men tended to become oriented toward involved fatherhood. Thus, while about 36 percent of the respondents developed a primary breadwinning orientation, the remaining men did not. Unanticipated encounters in relationships with women and at the workplace encouraged these men either to establish greater distance from family life (about 30 percent of the sample) or to become more involved in parenthood than did primary breadwinners (about 34 percent of the sample).

As with women, these contrasting orientations among men represent reasonable, if often unexpected and typically unconscious, reactions to encounters with contrasting packages of constraints and opportunities in adulthood. They also reflect different responses to the trade-offs men face in their family choices. Although men do not typically have to choose between workplace participation and childbearing, they do confront conflicts and dilemmas. As women's lives have been transformed, men, too, are increasingly caught on the horns of a dilemma between preserving their historic privileges and taking advantage of expanded opportunities to share or reject the economic and social burdens of breadwinning.[20] Among men (as among women), different experiences and orientations promote contrasting strategies to cope with the tensions between maintaining male privilege and easing traditional male burdens.

Strategies of Primary Breadwinning Men

Just as domestically oriented women interpret the meaning of work through the lens of their family commitments, whether or not they are employed, so primary breadwinning men define their parental involvement in terms of income, whether or not their wives work. First, these men emphasize money, not time, in calculating their contributions to the household. For this surveyor, "good fathering" means being a good provider—that is, providing financial support, not participating in child rearing:

What is a good father? It's really hard to say. I always supported my children, fed them, gave them clothes, a certain amount of love when I had time. There was always the time factor. Maybe giving them money doesn't make you a good father, but not giving it probably makes you a bad father. I guess I could have done maybe a little more with them if financially I wasn't working all the time, but I've never hit my kids. I paid my daughter's tuition. I take them on vacation every year. Am I a good father? Yes, I would say so.

Even when their wives are employed, primary breadwinning men devalue the importance of wives' earnings. They define this income as "extra" and nonessential and thus also define a woman's job as secondary to her domestic responsibilities. Even though his wife worked hard as a waitress, this architect did not believe that she shared the duties of breadwinning:

She took care of [our son], and I did all the breadwinning. When we got the house, she started working for extra money. She worked weekends, but her job doesn't affect us at all. Financially, my job takes care of everything plus. Her income is gravy, I guess you'd call it.

By defining fatherhood in terms of financial support and wives' income as supplementary and nonessential, primary breadwinners relieve themselves of the responsibility for domestic chores and of a sense of guilt that such an arrangement might generate. A park worker was proud to announce:

I do nothing with the cooking or cleaning. I do no household, domestic anything. I could, but I won't, because I feel I shouldn't have to. If my wife's not sick, I see no reason why I should do it. I feel my responsibility is to bring home the money and her responsibility is to cook and clean.

And like the domestically oriented women who felt fortunate not to have to work, primary breadwinners see their wives as the fortunate recipients of personal freedom and material largesse. The park worker continued:

My wife's got it made. The cat's got it made, too. I'm very good to my wife. She drives a new car, has great clothes, no responsibilities. She's very happy to be just around the house, do what she wants. She's got her freedom; what more could you want?

Although primary breadwinning men, like domestically oriented women, have been relatively insulated from the social-structural incentives that promote nontraditional choices among others, they, too, are affected by changes in others' lives. Despite stable employment and marriages, these men fear the erosion of the material and ideological supports for male privilege that their fathers could take for granted. As women have fought for equal rights at the workplace and other men have moved away from family patterns that emphasize separate spheres, those who remain committed to the "good provider" ethic feel embattled and threatened. Even the small gains made by women at the workplace are perceived as unfair, as the historic labor market advantages of men undergo reconsideration, if not drastic alteration. A plumber and father of five resented the incursions some women are making into his field of expertise:

Women have it a little easier as far as job-related [matters] is concerned [sic]. The tests are getting easier, classifications are going down [to let women in]. From what I hear in plants with women, they can't do the job. This isn't chauvinistic guys talking; this is guys talking in general. We pick up a 250-pound motor, but there's no way a young lady will pick it up unless she's a gorilla, a brute. But when she's got to do the job, two other guys have got to come along to help. I'm not saying women can't handle the job, [but] a woman comes to work for us, and they [the bosses] have got five men covering for her. Usually it's a hardship, but the guys bend over backwards for her. If she can't handle her job, why should she be there?

Similarly, primary breadwinners make a distinction between their situation and that of other men—especially childless, single men who, presumably, do not share their heavy economic responsibilities. They define their interests not just in terms of being a man but of being a particular type of man, one who sacrifices for the good of his family and therefore needs to protect his interests in a hostile, changing world. The park worker explained:

Being a father is a responsibility. If you don't have a wife, don't have children—you get fired, who cares? When you have someone who's depending on your salary, you protect your interest on the job more. You become more afraid, and you become more practical. You realize you're out in the ocean, and nobody's going to help you. You're on your own, and you grow up real quick and start behaving like an adult.

In response to this perceived need to protect their interests, primary breadwinning men, like domestically oriented women, hold tightly to a set of social and political beliefs that emphasize the natural basis and moral superiority of gender differences and inequalities. Primary breadwinners argue that their own sexual division of labor is both natural and normal, as the plumber maintained:

As far as bread and butter is concerned, the man should have a little more [of] the responsibility than the woman. I'm not chauvinistic or anything, but it's basic, normal [that] it's a man. There aren't many men where the wife works full-time.

Who should take care of the children?

Again, you go back to whoever's home and whoever's working. Primary would be the mother. It's natural; it comes natural. In my house, it's my wife. She's doing it all.

If their own choices are viewed as natural and normal, then other patterns appear abnormal, unhealthy, and dangerous. Men who are unable or unwilling to meet the demands of breadwinning are judged to be moral and social failures, as the park worker pointed out:

The husband may not be able to provide. Just because he's a man, doesn't mean he can provide. There's a lot of losers out there, a lot of guys have a thirty-dollar-an-hour drug habit. How are they going to work? The wife might have to. Considering both people are normal, the breadwinner in my opinion should be the man.

Primary breadwinners hold similar beliefs about nondomestic patterns for women. Along with domestically oriented women, they tend to argue that career and motherhood are mutually exclusive alternatives for women and that responsible mothering requires forgoing a career. With five children and his own wife a homemaker, the plumber had little sympathy for mothers who feel the need for a life outside the home:

A career is a career; a family is a family. When you have a career, you donate your whole self to your career. If you have a family, you donate yourself to your family. [If a woman] has a career and no children, that's different. If she's a mother, her place should be at home. If she doesn't need the money, she has to have an ulterior motive for going to work. She's either tired of the kids or she's tired of being around the kids. If she's trying to keep her sanity, if she's unhappy at home, then she's got nobody to blame but herself, because she created that.

If a strict sexual division of caretaking and breadwinning is morally correct, it follows that current social changes are dangerous. Primary breadwinners tend to view these changes as hazardous for women as well as for men and children. Like their domestically oriented female counterparts, they argue that the decline of inequality threatens the historic protections women have enjoyed, as the park worker put it:

The woman should be protected, have a higher place. A mother is the most cherished thing you could be on this earth, and the woman should be respected and cared for. Equality would reduce that. A woman should be put on a pedestal above the man, and equality would put them on the same level. Why would they want to be equal with men who are dying earlier, under stress, who are really in the firing position? They already got it all. Equality for a woman would be the worst thing, because she already has the advantage. Women would lose from equality. Why would they want it? What is the need for it?

Despite their contrasting commitments, primary breadwinning men and domestically oriented women are interdependent in ways that lead their world views and political ideologies to converge. Their outlooks contrast not only with those of nondomestic women, but with those developed by nontraditional men as well.

Alternatives to Primary Breadwinning among Men

Men who eschew primary breadwinning have concluded that the privileges afforded "good providers" are not worth the price that privilege entails. They view breadwinning responsibilities as burdensome and constricting, but their rejection of breadwinning poses its own dilemmas. On the one hand, the loosening bonds of marriage allow these men greater latitude to avoid parental responsibilities, both economic and social. On the other hand, the increasing number of work-committed women encourages and, indeed, pressures some nontraditional men to become more involved in the noneconomic aspects of family life than was typical of men a generation ago. These two patterns—forgoing parental commitments and becoming involved in caretaking—represent increasingly popular, if quite different, responses to the search for an alternative to traditional masculinity amid a contradictory and ambiguous set of options.

Forgoing Parental Commitments

Like permanently childless women, some men have opted to forgo parental responsibilities. This group includes childless men who do not wish or plan to become fathers and divorced fathers who have significantly curtailed their economic and social ties to their offspring in the wake of marital disruption. These men have come to value autonomy over commitment and to view children as a threat to their freedom of choice. A social service director, for example, was convinced that childlessness opened vocational options he would not have enjoyed as a breadwinning father:

What do you think things would be like if you did have kids?

Vocationally, I would have had to make other choices because the field I'm in just doesn't pay a terrific amount of money, and with children, you have expenses, and you have to look forward to a lot more future planning than I have to do with my current situation. So I've been able to sort of play with my career, and really just have a lot of fun in doing what I do, without having that responsibility.

Permanently childless men have decided that the potential benefits of fatherhood are not worth its risks. A childless psychologist admitted with some discomfort:

Does seeing other men with young children bring out any response in you?

Relief! [laughs] I don't just see the good parts; I see it all. I see the shit they have to wade through literally and figuratively, and very often I say to myself, "There but for the grace of God go I." It's a very ambivalent position I have about it.

This ambivalence toward fatherhood is not necessarily confined to childless men. Some divorced fathers also develop a relatively weak emotional and social attachment to their children. Whether their reaction is a defense against the pain of loss or an extension of their lack of involvement in child rearing prior to divorce, divorced fathers who become distant from their children tend to discount the importance of parenthood. A truck driver and divorced father of two, who sees his children and pays their child support sporadically, explained:

How would you feel if you had never had kids?

I don't think that would bother me. Being that I do have them, it's okay. I enjoy them when I see them. But if I never had them, I don't think I'd really miss them. I don't think it would be that important if they weren't there.

And even though this divorced dentist spent little time with his school-age daughter, who resided with her mother in another city, he envied his childless counterparts:

Men who don't have any children just seem to have more time to do the things they want to do and don't have to deal with the trials and tribulations of raising a child.

In contrast to traditional women and men, these men agree with work-committed women that traditional arrangements are neither inherently superior to nor more natural than other family forms. Instead, they argue that primary breadwinning is oppressive to men and harmful to society. Though more vociferous than most, the psychologist, who at age 43 had never married, painted a vivid picture of the personal and social costs these men attach to male breadwinning:

At this particular period in history, the woman is getting all the sympathy for her desperate position in the home by herself, lonely and isolated, taking care of these kids. I also have sympathy for the guy who has to be out getting his ass kicked by industrial tyrants and corporate assholes and the whole competitive complex. I think it's a tough life, and very often the man is tremendously underestimated, underrated.

Another confirmed bachelor, a childless free-lance writer, equated the woes of materialism with the "trap" of primary breadwinning:

I just see these people, and they seem so closed and so materialistic, and it makes me sad for them. Because all this free spirit seems to go down the drain and they're trapped. I wrote a song called "When You're My Age, You'll Be Selling Insurance." It makes me happy that I managed to avoid it, that I haven't been trapped by that.

Although permanently childless men and uninvolved fathers have rejected the "good provider" ethos, they are less certain about what to put in its place. They reject traditional beliefs about gender, but they are also ambivalent about what gender equality should mean.

In contrast to primary breadwinners and domestically oriented women, these men argue that gender differences are smaller, more malleable, and less desirable than traditional views suggest. A single, childless physician argued that perceived gender differences are socially constructed and reflect social evaluations of behavior rather than essential, sex-related characteristics:

To me, the key to being a man, the same thing as being a woman, is being a good human being. For me, there's nothing that really defines being a man. It doesn't mean you can't cry. I could stay at home and be happy and have a lot of what would be quote feminine characteristics. In some situations, it helps to be macho, but a lot of that just reflects stereotypes. If it's a guy, he's aggressive; if it's a woman, she's a ball buster. One's negative; one's positive. And it can be the same behavior. Sometimes I wish it was a little clearer, but basically you've got to say, "What are you as an individual?"

If gender is socially constructed and thus malleable, it follows that it can be reconstructed in a different way. He believed that a change in the social definitions of gender is desirable:

I look at the grief and anxiety my father had by being the sole provider. So if being a man is being the rock and support of your family, let's change that definition of being a man. Because that doesn't look very good to me.

Although men who have opted for freedom from parental commitments argue that gender differences are neither natural nor necessarily desirable, they remain ambivalent about what gender does or should mean. They believe that gender equality is a desirable goal, but equality defined in a specific and limited way. They emphasize women's equal responsibilities in the context of equal rights. For these men, equality means exactly what domestically oriented women fear it will mean—that women should relinquish the economic and legal "protections" that have accompanied their second-class status.[21] As the divorced truck driver, quoted earlier, argued:

[If] women want equal rights, let them pay child support, alimony. Let them get drafted. You want to be equal, you do everything equal not just certain things. They got girls now in the sanitation department. God bless them if that's what [they] want to do, but when it comes down to it, you've got to pick up that pail and dump it in the truck. I'm not going to go over and help you. You want the job, you do what I do and that's it.

Similarly, these men support women's economic self-sufficiency, for their own ability to remain autonomous is closely linked to women's independence. A systems analyst who had never married declared:

I'm not really big on women who stay at home and just raise kids. I think everybody should be a fully functioning, self-supporting adult, and certainly economically that's necessity now. I believe, in terms of women's issues, that if they prepared themselves for the idea that they have to assume their financial burdens and responsibilities, they won't have to be emotional hostages to toxic relationships. And men won't either.

This vision of economic and social equality does not, however, easily extend to the domestic sphere. Because these men place a high value on their freedom, they resist applying the principle of equality to child rearing. Indeed, the paradox of espousing the equal right to be free while resisting the equal responsibility for parenthood leads them to avoid parental commitments. The systems analyst feared he would be drawn into what he deemed the least attractive aspects of parenting:

[If I had a child,] I could see that I would want to take a role in playing with the child, overseeing its training and schooling, providing that type of thing. I don't really see me wanting to do a lot in the way of getting up in the middle of the night, formulas, changing diapers. I'm not at all into that.

And so, these men are able to resolve the dilemma between their support of some aspects of gender equality and their resistance to its more threatening implications only by forgoing both the burdens and the joys of raising a child.

Caretaking Fathers

A more equal sharing of both earning income and rearing children provides another alternative to primary breadwinning. While complete equality remains rare even among dual-earner couples, male participation in caretaking is nevertheless on the rise.[22] Men who are married to work-committed women and divorced fathers who have retained either joint or sole custody of their children are particularly likely to participate in child rearing.[23] In contrast to childless men, these men have placed family at the center of their lives. In contrast to primary breadwinners, they value spending time with their families as much as contributing money to them. A utility worker with a young daughter and a wife employed as a marketing manager insisted:

For me, being with my family is the major, the ultimate in my life—to be with them and share things with them. Money is secondary, but time with them is the important thing in life. That's why I put up with this job—because I can get home early. To me, spending time with [my daughter] makes up for it. I'm home at 3:40 and spend a lot of time with her, just like the long days when she was young and I was on unemployment.

Some involved fathers view the time spent in child care not as simply helping out, but as an incomparably pleasurable activity and an essential component of good parenting. This thirty-seven-year-old construction worker chose to work the night shift so that he could spend his days with his newborn daughter while his wife pursued a dancing career:

I take care of [my daughter] during the morning and the day. [My wife] takes care [of her] in the evenings. I work from three to eleven P.M. and wake up with the morning ahead of me, and that's important with a little one. Even if I'm pretty tired when I get up, all I have to do is look at that little face, and I feel good. It's not just a case of doing extra things. I'm not doing extra things. This is what has to be done when you have a baby. . . . You learn so much too. It's a thrill to watch the various senses start to come into play. She'll make a gurgling noise that's close to a vowel sound or an actual syllable, and I'll repeat it. I love the communication. The baby smiles more around me than she does around [her mother].

Unlike primary breadwinners or childless men, these "involved fathers" do not draw distinct boundaries between the tasks of mothers and fathers. The time constraints on their wives combine with their own need and preference for economic sharing to promote financial and social interdependence. Neither breadwinning nor nurturing is defined as one person's domain. As the utility worker pointed out:

It's not like, "Give me your money, and I hold it; or you take my money and hold it." We put it in one pot and take care of whatever we need. . . . We pull the same weight. . . . As far as time and being around the house is concerned, I can stay home more than [my wife] can stay home. I come home in the afternoon, and I'm here with [my daughter] after school. [My wife] can come home at night to be with her. She likes her job, and she likes the sharing. She's got both worlds. So it has worked out good.

Like employed mothers, involved fathers must juggle the dual demands of employment and parenthood. While men do not generally jeopardize their chances for workplace success by becoming fathers, those men who wish to spend time with their children must trade off between work and family in much the same ways that employed mothers do. A bank vice-president, married to a woman with a career in public relations, began to relax his obsessive work habits when his daughter was born:

They changed immediately, which is exactly what I expected would happen, and I've never really gone back to my old habits of working all the time. I still work long days in the office, but I get home every night to relieve the babysitter by six. I hardly ever work on weekends, and I don't work at home. So, yes, my habits have changed.

These involved fathers come closest to embracing the "interdependent" vision of gender equality upheld by work-committed mothers. They see moral and practical advantages to shared caretaking. According to the construction worker quoted earlier, domestic as well as workplace equality is not only the most practical response to changed economic conditions but also the best way to avoid the resentment and conflict that too often occur between husbands and wives:

With the baby, we do everything even-steven. What other way can you go nowadays, the whole economy being what it is? But that's also the way it should be. Even if I had the money to take care of things [myself], [my wife] has a calling, a vocation, that she needs to fulfill and I want her to fulfill. We're in this together; we both want to be an influence on the child. The next logical step is for both of us to spend time with her. . . . I feel there won't be any of this women-against-men in our marriage.

If work-committed women face numerous obstacles in their search for ways to combine career and motherhood, then nurturing fathers also face deeply rooted structural barriers to full equality in parenting. Even when the desire to participate in parenting is strong, these men encounter significant constraints on implementing their preferences. Role reversal, for example, is rarely a realistic option, since men's wages remain essential to the survival of the vast majority of households and few couples are comfortable with an arrangement in which a woman supports a man. The utility worker found being a "househusband" unacceptable, despite his preference for not working:

Is there an ideal job for you?

Staying home. But, it's just not possible. I couldn't just quit and say, "You work, and I'll stay home." But if we were put into the situation where we didn't have to work, I could tell her we could both quit. When I hit the Lotto . . . but right now, I'm stuck.

Were it not for the economic and psychological need to earn a steady income through paid employment, these men might be far more involved in child rearing than is currently possible. Yet, as the supports for homemaking mothers erode, supports for homemaking fathers have not arisen to offset the growing imbalance between children's needs and families' resources. Like domestically oriented women, a rehabilitation counselor defined paid work in terms of a "job," not a "career." Unlike such women, however, he lacked the option to trade his paid but tiresome job for the more personally fulfilling work of parenting:

I don't like to work. I work because I need the money, and I want to give my family the best I could [sic], but work's not that important to me. I'm not the type that has career aspirations and [is] very goal-oriented. To tell you the truth, if I won the lottery, and I didn't have to work, I wouldn't. But I would volunteer. I would work in a nursery school. I would do a lot more volunteer work with my daughter's school. I would love to go on trips that the mothers who don't work get a chance to go on. I would like to be more active in the PTA, get my hands into a lot of different volunteer organizations. I would love that. But I can't.

In sum, while women grapple with the choice between motherhood and committed employment, men are generally denied such a choice. Even when a man wishes to be an involved father, rarely is he able to trade full-time employment for parental involvement. The primary breadwinning surveyor noted, with some envy, that although women remain disadvantaged, many still retain the option not to work—an option few men enjoy:

Women can have the best of both worlds, whereas men can only have one choice. A woman has a choice of which way she wants to go. If she wants to be a successful lawyer, she has that choice. If she wants to stay home, she also has that choice most of the time. Women have doors opened for them and their meals paid for. They have the best of both worlds; men are just stuck with one.

Structural and ideological barriers to men's participation in child rearing inhibit the prospects for genuine equality in parental and employment options. Limits on men's options constrain even the most feminist men's ability and willingness to embrace genuine symmetry in gender relations. The truncated range of choices available to men restricts the options open to women as well.

Beyond the Debate on the Family

Social change in family arrangements has expanded the range of options adult women and men encounter, but the inconsistent nature of change has also created new personal dilemmas, more complex forms of gender inequality, and a growing social and ideological cleavage between more traditional family forms and the emerging alternatives. Since people have different exposure to changes in the structure of marriage, the economy, and the workplace, they have developed contrasting responses. Some have developed new patterns of family life that emphasize either greater freedom from family commitments (for example, childless women and men and uninvolved divorced fathers) or more equal sharing of breadwinning responsibilities (for example, work-committed women and involved fathers). Others have endeavored to re-create a more traditional model of gender exchange in spite of the social forces promoting change (for example, women who are domestically oriented and men who are primary breadwinners). The growth of alternative patterns of family life amid the persistence of more traditional forms has not produced a new consensus to replace the old, but rather an increasing competition among a diverse range of family types. This range cuts across gender, as different groups of women and of men find themselves in opposing positions. The complex landscape of emerging family patterns defies generalizations about either the decline or persistence of American families. Instead, men and women are developing multiple "family strategies" and contradictory directions of change to cope with the contrasting dilemmas they confront.

If the uneven and inconsistent nature of change has produced social division and political conflict in the short run, then the long-run fate of American family life depends on finding genuine resolutions to the dilemmas and conflicts that make all family choices problematic.[24] Such an approach would move beyond a "zero-sum" politics of the family to acceptance of and support for diversity in family life; it would reduce the barriers to integrating work and family for employed parents of either sex; and it would promote gender equality in rights, responsibilities, and options regarding parenting and employment. The conflicts and dilemmas spawned by uneven social change can only be resolved by striving to make change itself more equal and consistent.

The structural changes that have produced the diversification of family forms and the emergence of new family dilemmas are deeply rooted, mutually reinforcing, and far beyond the capacities of either individuals or governments to reverse. The loosening of the bonds of permanent marriage, the erosion of the male "family-breadwinner wage," the expansion of workplace opportunities for women, and the decline in the incentives and supports for childbearing and full-time mothering, are not to be reversed. The foreseeable future is unlikely to provide a return to a period of hegemony for the breadwinner-homemaker family (often mislabeled "traditional") or to any one family type, no matter what its form. But clear-cut and fully satisfying resolutions to the dilemmas and conflicts of unequal and uneven change have yet to emerge. In this context, the central political challenge should not be defined as how to halt the so-called decline of the family. Instead, we need to find a way to transcend the conflicts among the emerging array of "family groups." Surely, the first step is to abandon the search for one, and only one, correct family form in favor of addressing the full range of dilemmas and needs spawned by inevitable but unequal change. Only then will citizens and policymakers be able to forge a humane and just set of opportunities for all parents and their children.



Three—
Minor Difficulties:
Changing Children in the Late Twentieth Century

Gary Alan Fine and Jay Mechling

Our children, as Peter Berger once put it, serve as "our hostages to history," by which he meant that the human imperative for continuity—the projecting into the future, for one's own children, of whatever good things one has in one's own life—has an essentially conservative influence on the institutional order. To love one's children is to have "a stake in the continuity of the social order," and to love one's parents is to want to preserve at least something of their world.[1] Children are not merely reproductions of our individual selves; they bear our communities' values and meanings. They are the guardians of the twenty-first century.

The way that we view the future is linked to our images of our offspring and to our hopes and fears for them. Likewise, these images influence how our children respond to us, and our response influences their behaviors. If we look at the history of childhood, as described in Philippe Ariès's Centuries of Childhood, we see that images of children have varied considerably over time.[2] Similarly, children's images of themselves are in dynamic tension because their cultures are based in the adult cultures that surround them and are shaped through material circumstances.

Unlike some apocalyptic writers, we avoid the claim that change in the role of children in the late twentieth century has been particularly radical. The gloomy view seems more moral or ideological than empirical. We may be too close to the changes to assess their dimensions and effects with confidence, or to understand latent effects that are tethered to manifest ones. Still, we do accept that as this century ends, we have witnessed substantial changes in the lives of children. While we have not reached the "end of childhood" in a society that hates children, children's lives and cultures are responsive to shifts in the ways adults lead their lives. Since the lives of adults have changed, the conditions of children have clearly been transformed as well.



Children's dependency poses special problems for adult historians and social scientists wishing to understand childhood.[3] Identifying changes in the lives of children since World War II entails our sorting out the changing images of the child from the changing practices of child rearing, and this is not an easy distinction. Our goal is to understand the interplay between the environment that children inhabit, the forms of control used by adults, and the responses of children to this environment and control.

First, we explore the effects of the choices and control of adults on children—how the physical and material bases of society channel children's responses and how adult beliefs and actions affect both children themselves and adult behaviors toward children. The world as it is given to children is formulated by adults; the social problems that children face are determined by the adult social order. Adults set the material bases of childhood (both the physical and the economic aspects) and they also largely dominate the arrangements of social structure. We explore the way adults channel children into such social institutions as schools, social service agencies, religious establishments, and, of course, the family. The world children inhabit depends on how adults understand their children, how adults think they should interact with their children, and how they understand the ways other adults interact with children.

We also explore the ways children manage to sustain their own culture within the hegemonic structure imposed by adults. Adult practices are not so determining that children are unable to create their own meanings. Children find realms in which they can "be themselves" and share a social structure and folk structure. Whether children's cultures are "oppositional cultures" that eventually fall to the socializing power of the adult cultures is a question worth asking.[4] Our examination of adults' practices, of children's responses, and of the complex negotiations between the two groups moves toward identifying paradoxes in the lives of American children in the last half century. We are not the first to observe the "biformities," "dualisms," or contradictions within patterns of American culture.[5] We shall leave it to historians and other readers to decide whether the paradoxes in children's lives in late twentieth-century America are simply local versions of enduring dualisms or indicate something new.

The Paradox of Adult Choices

The contemporary realities of American life seem remarkably different from the realities of a half century ago. Some of the trappings of today, such as computers and televisions, were barely ideas fifty years ago. Other currently important trends, such as divorce, drug use, the existence of an urban underclass, federal welfare programs, suburbs, automobiles, sexually transmitted diseases, and the rise of consumer culture, existed prior to World War II but did not then have the impact that they have now. Children are born into a world not of their own making. The world they confront is one created and sustained by adults in institutions ranging from families and neighborhoods to schools, youth organizations, churches, and more. This world was built over generations, and with it a physical and social structure was established.

The physical and material reality of American society has been altered in the past forty-five years in both subtle and dramatic ways. The suburbanization of America has been particularly dramatic (from 23 percent of the population in 1950 to 45 percent in 1980).[6] Less dramatic, but still significant, is the decline of the farm population (from 23 percent to 2 percent during the period 1940–87).[7] On the one hand, fewer children now reside outside of hailing distance of other children. On the other hand, in most areas of our cities, dense street culture has diminished. The stoop culture captured in faded images of New York City's Lower East Side at the turn of the century has largely vanished, except for enclaves in the poorest areas of large, urban, Eastern conglomerations, such as New York or Washington, D.C.[8] We have witnessed a "suburbanization of the city," not necessarily in terms of housing types, but in the disappearance of formerly "public" activities into protected spheres indoors.

These effects are filled with paradox. We argue that children have been both incorporated into adult society and set apart from it, to be given their own special, marginalized treatment. Children have been treated, sometimes simultaneously, as adults and as nonadults. We find this paradox both in adult attitudes toward children and in adult behavior.

On the "incorporated" side of the paradox are those critics of culture who argue some version of the "disappearance of childhood" thesis, namely, that television and other adult institutions have erased the boundary between childhood and adulthood, exposing children to every ugly secret of adulthood.[9] They say that childhood is no more, that children have been drawn too fully into the stresses of adult society from which they should be protected for a while longer.

At the same time, children are increasingly isolated from contact with adults and adult worlds. Outside of formal, adult-sanctioned institutions, such as school or organized play, children are often left alone by adults. Brian Sutton-Smith, for example, demonstrates how toys contribute to what he sees as a main function of modern child rearing—"to turn the child into a person capable of functioning in isolation by itself."[10] The modern middle-class parent first buys the child a toy as a symbol of affection and bonding, then sends the child off to his or her own room to play alone with it. Careful analysis of toys and the ways in which children play with them confirms for Sutton-Smith that toys tend to decrease the sociability of play, modeling for children "the solitariness on which modern civilization relies."[11] Adult use of the television as a babysitter further models solitary activity, and the home video game means that the child no longer needs to go to a video game arcade (the 1970s equivalent of the pool hall) to bring at least some sociability into video play.

This is not to say that children only play alone. Children do play in peer groups, but the circumstances of group play have changed considerably. The children's peer groups of an earlier time were more public, operating in public spaces they shared with adults who had the right and moral authority to intervene in the activities of children.[12] The neighborhood was an extended family and, on some level, all adults had the right to intervene in the activities of all children. In a perverse way the drug dealer who hires children to hawk his wares and to warn him of trouble is harkening back to our "romantic" notions of the relationships between adults and children, in which the children were truly a part of the community. The relationship between the child and the adult criminal gang is by no means a "new" problem, even though these connections speak more to economic transactions than to moral education.[13]

Although children are being exposed to the themes of adult life, through television and through contact with adult deviance, they are receiving little guidance as to how they should respond. This leads to the greater sophistication of children—a sophistication that can be both a positive and a negative consequence of their access to adult information and their independence. The rise of computer literacy among some groups of children and their skills in the arts and sciences are deeply satisfying to their adult mentors. Alongside these successes, however, there are children who are driven to obtain material objects at any cost, who become bored and alienated, and who make choices that adult society deems improper (pregnancy, alcohol, drug use, or gang activity). In the connection between informational access and lack of supervision, the suburb and the inner city have more in common with each other than either has with our images of the small town.

Physical and Material Considerations

As we indicated earlier, adult practices have tended to separate the public spaces of children from those of adults, pushing the children to the margins. Urban folklore portrays an adult world filled with danger for children and teaches the lesson that children must behave in that sphere with adult-defined decorum. The children's public spaces (parks, corners, and some businesses) are continually subject to appropriation by adults.

The single area that is "supposed" to be a child's domain is the school,
but adults are rarely content to let children be children within school boundaries. For instance, well-meaning supervisors rigidly prevent children from participating in school-yard fights.[14] What had once been a central feature of childhood jurisprudence, important for maintaining the status system of childhood, has been marginalized and transformed into deviance.[15] Recess has been eliminated in many school districts because educators could not easily define its benefits, and injuries were seen as outweighing the pleasure given to children. Case law holding cities and schools liable for injury, coupled with increasing insurance costs, has led to the closing of school yards and playgrounds outside of school hours, when they cannot be supervised.[16] So those elements that gave children control of their social world have been limited or eliminated entirely. Our concern for the welfare of our children has resulted in denying them the chance to make mistakes, and to learn from those mistakes.

Another locale in which children (especially adolescents) bump into adults—often quite literally—is the shopping mall. Mall managers recognize that junior-high- and high-school-aged kids like to "hang out" at the mall, sometimes creating a "nuisance." Case law pertaining to malls tends to define malls as public spaces, so it is not easy to evict teenagers. Besides, teens do bring money to the malls and are tomorrow's affluent shoppers. Children have the privilege of consumption within the limits of their budgets, and consequently share these adult spaces so long as they behave according to the rules established by adults, rules often enforced differentially upon adults and minors. (Adolescents are often sanctioned for actions that are tolerated when performed by adults.)[17] For adolescents the mall is a place to "hang out," an expressive locale that is supposed to be devoted to adult, instrumental activities (though adult shopping can be just as expressive as the adolescent behavior). Some mall managers have responded by creating special places in the malls, such as video game arcades, and putting those places down the "side streets" of the mall, away from the main thoroughfares. Thus, even "malling" by America's kids is structured both to preserve the social control that adults have over children and to minimize informal social contacts between the groups.

On the other hand, the suburbanization of American neighborhoods, the continuing geographical mobility of nuclear families, and the pervasive view of the private home as a "haven" from the public world combine to give children more control over their unsupervised free play. Parks with their large, unpatrolled expanses, backyards, homes lacking adult presence, and, especially, bedrooms provide arenas for relatively autonomous action. Children often acquire creative control over their
own rooms. It is not uncommon for adults to have to ask permission to enter these spaces; the child's room serves as a haven from the family.

Changes in public technology affect the relationships between adults and children. The availability of transportation, coupled with American affluence since World War II, has increased children's mobility, thereby widening the geography of their experiences. It sometimes has been said that the geographical range of a preadolescent's bicycle represents the limits of his or her community.[18] Bicycles are, of course, a primitive technology, primarily suitable for younger children. Adolescents adore automobiles. The last few decades have witnessed unprecedented numbers of adolescents who can drive and who have access to automobiles. The proportion of fifteen- to nineteen-year-olds holding driver's licenses increased from 46.9 percent in 1963 to 54.4 percent in 1987.[19] With the growth and sophistication of the used car market in the decades since World War II, it has become easier for adolescents to acquire moderately priced automobiles, affordable by them or their parents. The automobile combines the opportunity for an expansion of mobility with a private domain, much like the adolescent's bedroom. The reality that the same sorts of activities occur in both spaces (drinking, smoking, joking, and sexual expression) indicates how connected the two locations are. Further, adolescents' automobiles can invade and interfere with adults' spaces and, as a consequence, need to be controlled. Adults have attempted—with mixed success—to prevent cruising, partly by establishing curfews that eliminate late-night driving and by bearing down on adolescent drinking and driving. All of these actions, while grounded in a concern for the adolescent, are equally based in the belief that driving is a privilege that can be suspended at the whim of adult society.

Social Environment

During the past several decades American social relations have changed enormously, with significant effects on children. It was only in 1954 that the Supreme Court determined that it was unconstitutional for black and white children to attend segregated schools, and it took several decades for this ruling to be implemented even partially. Along with desegregation, the United States significantly liberalized its immigration policy in 1965 with the passage of the Immigration and Nationality Act. Today immigrants from Southeast Asia, Eastern Europe, the Soviet Union, Mexico, Africa, and Latin America bring to school classrooms and to neighborhoods a new multicultural texture. Add to this the mainstreaming of mentally and physically handicapped (differently abled) children, and one finds the range of peers for children in the 1990s dramatically wider than for children in the 1950s. These changing social conditions will diversify children's cultures, since those cultures depend on a transformation of known cultural elements.[20] To the extent that the store of known information increases, the diversity of children's culture will increase, although this depends on the assumption that children from diverse backgrounds will mix informally, as well as in adult-controlled settings.

Consequently one must be careful not to expect a grand melding of the cultures of childhood. To date it has been easier for Asian-American children to be integrated into the culture of "majority" children than for blacks or Hispanics to be so integrated. We see plenty of evidence of the influence of the forms of African-American expressive cultures upon the cultures of white Americans, but white children can appropriate these cultural elements (slang, dress, music, dance, gesture, and so on) without accepting black children as equals and without acknowledging the worth of the values that led to the creation of these expressive elements.

The revolution in gender roles during the past quarter century, although slow, has had some impact on children's cultures. The women's movement and related demographic changes affect the circumstances of children. Girls are no longer fitted with occupational and avocational "girdles," at least in the overt fashion that was so evident a few years ago. Women's employment outside the home has altered children's lives. With many women working at full-time jobs, children are either placed in after-school cultures that permit them to interact with peers, often with only occasional oversight by adults, or, if they are older, permitted to live their lives separate from the oversight of their adult supervisors. Latchkey children are not a new phenomenon, but they take on special importance in today's communities, in which few neighbors, if any, will serve as surrogate parents or feel responsibility for doing so.

The increased number of women working outside the home also affects younger children. Perhaps the most significant change in child rearing has been the dramatic growth in the number of full-time day care providers. Day care providers include corporate entities ("McKids," as they are sometimes called) as well as individuals who care for children in private homes. Much debate has ensued over whether day care has deleterious effects on the health of children who are put into it and whether these children are more aggressive.[21] Whatever the case, these venues permit children to form their own group cultures at an earlier age than had been customary before collective child-rearing systems became common. American child-rearing practices can be conceptualized as a blend of the nuclear family structure (on evenings and weekends) and a kibbutz nursery (during weekdays). The long-term effects of this type of system are uncertain, and, because of the many cross-cutting factors that prevent causal certainty, may never be known definitively. Attitudes toward these changes depend to a great degree on our ideological beliefs about family life.

Family structure also influences the culture of childhood. As the two previous chapters have documented, the number of family forms in the United States has multiplied, perhaps exponentially. In many high schools, fewer than half the members of a class live with both biological parents. Many children pass through several family transformations during their maturation. While none of these family forms (single parents, step-parents, blended families) constitutes "the norm," all are normal. Contacts among children from a variety of family types and life experiences increase the range of family group cultures that children can draw upon in creating their own group cultures. Some family types may broaden the networking of children, particularly when the mother and father reside apart; a modern child may belong to two radically different social networks. This mobility also increases the knitting together of the youth subculture, tightening the network of small groups.[22]

The Professionalization of Parenting

An important feature of American culture in the years since World War II has been the greater willingness of citizens to rely on experts to tell them how to think, feel, and act.[23] Child rearing is an activity adults feel increasingly unprepared for and anxious about performing, so it is little wonder that expert advice so quickly came to dominate this aspect of life. Dr. Benjamin Spock's The Common Sense Book of Baby and Child Care, first published in 1946, is only the most famous of a stream of postwar volumes providing advice to parents—and not merely the nuts and bolts of physical child care but also guidance in forming the emotional life of the child.

As Stearns and Stearns demonstrate, expert advice to parents, to married couples, to school teachers, and to office managers has increasingly stressed the domestication of anger and of other unpleasant emotions.[24] Parents have attempted to control their physical aggression and anger toward their children, and children are expected to do likewise. Temper tantrums are seen as social problems that must be dealt with by emphasizing rational discussion and the value of controlling one's emotions. Experts stress the importance of emotional stability and tranquility for healthy family life. This has a dual effect. Anger has been given a "magical" power, power that separates it from the mundane realm of everyday life. Now, a patina of guilt overlays the expression of anger, previously considered a natural emotion.

Parents and other adults inevitably exert a powerful influence on children's lives. Adults read children stories, sing them songs, give them
books to read, tell them proverbs, teach them games, buy them toys, take them to movies, and control the televised narratives, including commercials, that consume so much time. (The average American preschool child watches twenty-five hours of television every week.) It seems obvious that as parents face new social realities, so will their children. What we have witnessed in the past half century, sped up, no doubt, in the past twenty-five years, is a change in the relationship between American parents and their children so that children are both more and less autonomous, more and less dependent, than ever before.

The Expansion of Children's Rights

To take one important example of the changing relationship between parents and children, consider the issue of "children's rights" as a variant of the incorporation/separation paradox.[25] The notion of "rights" emerged relatively recently in Western history, and debates over competing rights occur in specific times and places in response to social strains.[26] Children's rights movements in America go back at least to nineteenth-century worries about child labor, but the establishment of the modern welfare state in this century has led to children's rights battles, revealing the paradoxical status of minors.[27] During the past few decades adults have both given additional rights to children and taken significant privileges from them. The individual child is given increased rights of access to institutions and state-guaranteed protection from harm, while, simultaneously, the behaviors that children and adolescents are permitted to enact have been limited.

The proper role of children in our society, particularly in relationship to adult society, is a matter of vehement debate. Some advocates for children claim that we are a society that does not much care for our children.[28] Others claim that we need to give children additional freedoms, while still others say that we should restrict those freedoms. Some call for additional protections, and others say that these protections have costs. When we speak of the expansion of children's rights, we tend to speak about "rights" that are institutionally protected. Each child is guaranteed the right to be protected and to have equal access to resources. These guarantees typically come from the state in one of its various manifestations—schools, courts, social welfare agencies, and the like: these are sponsored rights. Children are less likely than they once were to be harmed or discriminated against because of their special conditions or their helplessness vis-à-vis adults. The expansion of children's rights, therefore, comes by virtue of defining them as relatively powerless. Consider two examples.

Most school districts prevent teachers from physically disciplining or
restraining children except under unusual circumstances. This policy, while protecting the child from abuse and pain, reduces the social power of teachers, and consequently reduces the range of informal options that teachers have to deal with unruly children. Through current arrangements, discipline in the classroom is only possible if the children accept it. Disruptive children and their parents are not always willing to be party to the discipline. As a consequence, other, more formal and bureaucratic systems of disciplining children need to be established, and these have the effect of turning the school into a kind of court.[29] The child who resists informal understandings may become enmeshed with psychologists, counselors, and psychiatrists, ostensibly to protect the rights of the child, but equally to protect the system from claims that those rights have been violated.

A second contest between the rights of individual children and the rights of the collectivity or institution arises in the issue of "mainstreaming" difficult children in the schools. For example, in the small, rural community of Cannon Falls, Minnesota, a dispute erupted over the right of a first-grade girl to attend a regular classroom.[30] The girl had a behavior disorder involving a lack of impulse control and low social intuition. She regularly assaulted the other children in her classroom, kicking, punching, and biting them, dozens of times each week. What is to be done for this girl and her classmates? The school district, lacking a special education class, determined that she had a right to remain in the class and that she would benefit from being mainstreamed, despite the advice of doctors that she be institutionalized. The school considered the child handicapped and, consequently, entitled to education in a regular classroom under the Education for All Handicapped Children Act of 1975. The child and her parents wanted her to remain in the regular classroom.

Yet, what about the other children? Our education system once routinely segregated handicapped, difficult, and disruptive children, and poor students were given little incentive to continue schooling. Today a "normal" education is seen as a right, but this right is not without social costs. The children in this girl's class, one is led to believe, cannot fully devote themselves to learning because of her actions. Many are frightened, and perhaps some will even suffer psychological trauma as a consequence of her behavior. They feel that they are being sacrificed on the altar of mainstreaming, a choice adults make that affects the children. As this case demonstrates, the balancing point between the individual rights of the child and the collective good has shifted over the past two decades to provide more access for the individual child at the expense of other children. A similar issue concerns the rights of children with AIDS to attend public schools. While AIDS is not per se a children's issue, the presence of infected children in school raises the question of whether the individual child has rights that transcend the desires of other classmates and their parents. Even if some children are made anxious and parents band together to object, the consensus now seems to be that a child with AIDS should be treated similarly to children who are not infected.

Restriction of Privileges

As suggested above, the expansion of rights in some areas has been met by a restriction of privileges in others. Children's rights to equal access to institutions and protection have been expanded at the same time that their behavioral freedoms have been curtailed. Adolescents seem the most notable targets of efforts to curtail freedoms. During the late 1960s and early 1970s a greater variety of behavioral options was available for teens, options not always legally permitted, but winked at by adults. During the past decade the government has felt increasingly secure in limiting choices for minors. This is evident in adults' attempts to control adolescents' sexual behavior and access to alcohol, tobacco, and other drugs. Consider changes in the drinking laws. Whereas between 1970 and 1975 twenty-nine states lowered the age at which young people could legally purchase alcoholic beverages, between 1975 and 1984 twenty-seven states raised their drinking ages. By 1986 the federal government demanded that the minimum legal drinking age be raised to twenty-one, and threatened to withhold federal highway funds from states that did not comply.[31] High schools were eliminating smoking rooms and declaring their campuses "smoke-free zones." Simultaneously, a drug panic gripped the nation, particularly with regard to teenage usage. With the increase of teenage pregnancy (or "promiscuity") and sexually transmitted diseases, baby-boomer adults demanded that adolescents refrain from behaviors that they had enjoyed as teens.

What sustains these paradoxes about the rights and privileges accorded to children is the late-twentieth-century idea of the innocent child. Children and even adolescents are increasingly seen as being in need of protection from themselves and from others. If we consider the dramatic social problems of the 1980s, we find that a large number deal with threats to children. Those social problems that do not specifically center on children—such as homelessness, AIDS, and cocaine addiction—are often framed in terms of their effects upon children: homeless families, pediatric AIDS, and cocaine babies. This trend shows no sign of abating; indeed, each morning newspaper brings new examples. As Joel Best notes, the image of the "threatened innocent" became common in the 1980s.[32]

One can see these ideas played out early in the decade in parents' efforts to transform Halloween from a child's folk festival of liminal disorder and inversion to an orderly, "safe" event, closely supervised by adults. The Tylenol product tampering incident of October 1982 sparked the adult move to protect the innocent child from the ravages of Halloween mass murderers, even though the two events seemed on the surface to have little in common. For decades, American adults and children alike had dealt casually with cautionary legends about razor blades in apples, poisoned candy, and heated pennies. But adults responded to the Tylenol tampering by virtually eliminating Halloween that year. As Best and Horiuchi observe, Halloween crystallizes the fears that parents have for their children.[33] The world, even the suburban neighborhood, is a dangerous place for children, a place where most neighbors are more or less strangers. Halloween epitomizes this concern because it is the only night when children seek routine contact with adult strangers to demand "treats." Despite the absence of documented cases of children being poisoned by strangers on Halloween, the fear runs deep. Hospitals now routinely x-ray children's treats, but in the 1980s adults preferred to organize community Halloween parties and tried to eliminate trick-or-treating altogether.[34] The taming of Halloween that Gregory Stone noted near mid-century has continued, as the holiday has been taken away from the control of children.[35]

The recent social problem of "missing children" dramatically reflects adult fears for innocent children. The number of children abducted by strangers remains very small, certainly under a hundred each year.[36] Most children who "vanish" are either runaways or are snatched by non-custodial parents. While these may be important problems, our interest is in the cultural implications of the unrealistic, widespread adult panic over missing children. A child's vanishing galvanizes a neighborhood and community. In Minnesota, the apparent abduction of an eleven-year-old from rural Stearns County set off numerous community activities—billboards, rewards, rallies, armbands, vigils, and the like. The case of the missing boy, Jacob Wetterling, brought Minnesota together as a community of care and fear.

The "epidemic" of physical and sexual abuse of children in the 1980s taps the same adult anxieties as do Halloween and missing children. There can be no doubt that children are abused, physically and sexually; the nature of abuse and the circumstances of the victims make child abuse a difficult phenomenon to study for large cultural trends. Is child abuse increasing, or have the conditions for uncovering abuse changed? Again, the cultural response to abuse is the "text" that concerns us here. Stories about alleged "sex rings," pornographic filming in day care centers, and satanic rituals circulate far out of proportion to the actual instances. The


In 1984, for example, Minnesota was shocked by a county prosecutor's claim of a child sex ring operating in a small exurban community of Minneapolis. The prosecutor accused two dozen adults of sexually abusing twenty children. Eventually she dropped all of the charges (one man pleaded guilty), claiming that further prosecutions might jeopardize a case involving child murder and pornography. State investigators found that the sexual abuse case was mismanaged from the start, and, aside from the single conviction, most of the other people who were charged were almost certainly innocent. The McMartin preschool case, in which the directors of a preschool in Manhattan Beach, California, were tried on fifty-two counts of child molestation, ended in January 1990 with an acquittal. The jury was unable to arrive at a guilty verdict in the longest-running (two and a half years) and most expensive criminal trial in the nation's history. The public was outraged; some callers to a San Francisco Bay Area radio talk show were almost sputtering in their anger. The prosecutors, bowing to public pressure, decided to refile the charges on which the jury could not reach a verdict, but the second jury was also unable to reach agreement and the case was finally dropped.

Parents' concerns that their children could be harmed, or even driven to suicide, by fantasy role-playing games or rock music lyrics are part of this fear of the corruption of innocent children. In many cases, adults envision a vast, hidden conspiracy of satanists who are pied pipers to the young. While not all of those who campaign against fantasy games and rock music share these apocalyptic visions, the concern they do share stems from the image of children being corrupted by evil strangers. Such cultural products, in this view, can destroy the child's, or even the adolescent's, free will. Several notorious cases, including a case involving Judas Priest, involve the deaths of teenagers who allegedly listened to heavy metal rock songs advocating suicide.[37] However, these cases have not led to the conviction of the musicians. Some adult groups, such as the Parents' Music Resource Center, cofounded by Tipper Gore and other wives of prominent political figures, advocate warning labels on rock music packaging or, in some cases, bans on certain offensive lyrics.

Certainly a few abductions by strangers occur each year; certainly children are abused by parents, relatives, trusted caretakers, and some strangers; and certainly some adolescents commit suicide. But the scale, intensity, and rhetoric of the public reaction to these cases suggest that even a few instances raise powerful alarms. Children remain a screen upon which adults tend, in many cases, to project their own fears about the larger social world.

The Age of the Innocent Child

Why have the 1980s seemed to adults such a dangerous time for children? What is it about our culture, and about the social location of the people who worry about these matters, that makes them appear such serious social problems at the present time?

The most straightforward explanation is that we are more concerned about these issues because dangers to children actually have increased. Unfortunately, this approach does not get us very far. For one thing, we do not have reliable data for drawing comparisons across time. Changes in reporting practices and changing definitions of behaviors such as child abuse erode our confidence in the data.[38] Even if we could trust the data, figures on injuries, deaths, and kidnappings suggest that the last decade has been no more dangerous for children than past decades; indeed, one could argue that children were much more at risk before World War II than since.

More persuasive, we believe, is an approach stressing the construction of social problems. Those issues that the public turns its attention to are symbolic constructions, and this symbolization mediates reality. We do not claim that parents are misguided or wrong in their choice of social concerns, but rather that the circumstances defined as major public concerns are selected out of social and psychological pressures. As Kessen has argued, the very notion of the child is a social construction.[39] What becomes a salient issue depends on factors that transcend statistics. Statistics can be massaged in numerous, rhetorically sophisticated ways.

Children are subject to a particular form of symbolic demography. Symbolic demography refers to cultural beliefs that result from the intersection of demographic trends with the ideologies of the populations experiencing the trends. Zelizer, for example, argues that over the past century, American children have increasingly been defined as "priceless."[40] With the decline in family size, each child is less replaceable; yet, paradoxically, because of restrictions on child labor and increased affluence, the child has no intrinsic economic value. Since World War II, parents have decided to raise small families, to give extensive attention and resources to each child. Children, it is said, should be works of art. The modern child becomes priceless not only because of his or her replacement value but also because of the child's investment value, in both material and emotional terms. Our point is that many demographic trends are currently filtered through popular perception with cultural consequences.

Consider the symbolic demography of the American baby-boom cohort. During the 1970s and 1980s the baby-boom generation became parents, producing the "echo boom." If parents in the 1950s and 1960s
had anxieties peculiar to their social arrangements, so did the baby-boomers when they became parents.[41] Postwar families established patterns of suburban life relatively isolated from the extended family and the world of work. Children growing up in suburban families in the 1960s experienced a dramatic, gender-based division of labor—the existence of relatively clear, if not always fully functional, roles. Cultural images of the family reflected and reinforced these demographic patterns. The baby-boom generation followed the television lives of the Anderson family ("Father Knows Best"), Ozzie and Harriet Nelson, and "Beaver" Cleaver.

The demographic realities of family life changed in the 1970s and 1980s, though the public images of the family were slow to catch up. Economic trends, divorce rates, the women's movement, and other forces undermined the gender-based organization of the family. Memories and images of growing up in the 1950s could no longer serve as useful guides for behaving in the 1970s and 1980s. Eventually, popular culture images began to catch up with the demographic realities; "The Brady Bunch" depicted a blended family, and several popular television series (for example, "One Day at a Time" and "Kate and Allie") featured single mothers struggling to make a living and to be good parents—and succeeding.

Still, television sit-coms provide little real guidance for a generation of parents experiencing a radical discontinuity between the family lives they knew as children and the family lives they were attempting to create for their own children. These discontinuities help to create the fears and anxieties motivating the adult choices we described earlier. People project personal and social uncertainties onto external dangers.[42] Uncertainty and stress are transformed into external threats, making them psychologically tolerable. Fears of kidnappers and Halloween sadists reflect fears of being an inadequate parent.

Perhaps a more dramatic fear is the physical abuse of children. All parents get angry with their children and the vast majority use physical force or psychological pressure.[43] At one time, these behaviors were considered normal for child rearing.[44] Today this "normal" parenting is fraught with guilt in the face of the ideological domestication of anger. Worse, wrongly or publicly labeled actions can lead to an official investigation and the possibility of having one's own child—that most valuable of possessions—snatched away by agents of the state. Every yell, each slap, has a nascent terror for the parent, as well as for the child. The conflation of missing children with unhappy runaways simply adds to parental fears and mistrust. The public narratives of missing children embroider real events with the power of real anxieties.

Parents of the 1970s and 1980s suffer complex reactions to their own insecurities about raising children. They project outward their own fears and concerns about their potential inadequacies. The threats must not come from inside the family, but from outside, because imagining threats from inside the family is too painful and too psychologically possible. Guilt feelings about latchkey children and about putting children in day care centers—both necessities, given the demographic and economic realities of the 1970s and 1980s—find expression in the outrage over "missing" children. In a sense, it is the parents who are "missing," both in the sense that they are absent from their children's worlds and in the sense that they are "missing" their children's growing up.

Similarly, the meanings of paradoxical messages to 1980s children about drugs, alcohol, and sex may lie as much in the life experience of the baby-boomer parents as in the objective conditions of the lives of children in the 1980s and 1990s. There is strong evidence of alcohol abuse and drug abuse in children, rates of teenage pregnancies and abortions are startling, and sexually transmitted diseases increase the risks of sexual behavior; thus there is a real basis for the fears that parents have about their children. At the same time, however, one cannot deny that parents' experiences also condition their responses to what they perceive as the world in which their children are coming of age.

Baby-boomers grew up in the relatively repressed 1950s and early 1960s. The "cult of domesticity" of the 1950s demanded control of sexuality. Elaborate dating rituals delayed sexuality, marriage, and childbirth. Alcohol continued to be the intoxicant of choice for the middle class, and recreational drugs were out on the dark edges of society, supposedly used by people of color and other marginal folk, such as artists and musicians. In general, public discourse of the 1950s favored the clean, cool, and controlled, over the dirty, hot, and wild. As a result, many baby-boomers coming of age in the 1960s began experimenting with sexuality and drugs, often simultaneously. The availability of the birth control pill changed things in ways historians and social scientists are still sorting out. The women's movement, the Civil Rights Movement, and the anti-war movement changed the conditions of childhood and adolescence in many ways. Marginal cultures of the 1950s moved into the center of attention in the 1960s, as African-American, Far Eastern, and other cultures produced the multivocal music, dress, dance, and philosophies of the 1960s. In contrast with the cool, controlled 1950s, the 1960s and early 1970s reveled in ecstasy, abandon, and sometimes reckless experimentation.

We can read the late 1970s and 1980s as a cultural commentary on the 1960s. Baby-boomers who smoked marijuana, ingested drugs, and experimented sexually in their youth now scorn cigarette smokers and white sugar.[45] Middle age blunts the edges of experimentation. If this interpretation is correct, then parents of the 1980s come to "read" the alcohol use, drug abuse, and sexual behavior of their children through the lens of their own youth. The child's body again has become a potent symbol, and the treatment of that body—including abduction, physical abuse, sexual abuse, sexual behavior, drug use, alcohol use, suicide, and abortion—has become a powerful cultural text dominating public discourse. The centrality of these images, grounded in the experiences of the baby-boom generation, is a function of that generation's real and symbolic importance in the nation's demographic profile. In a crucial sense, the concerns of the baby-boom generation are the concerns of public discourse.[46] Because so many in the baby-boom cohort are parents, that group's agenda is the agenda for the nation.

Of course, we must not overextend our argument for the centrality of symbolic demography for understanding the effects of adult decisions on children over the past four decades. Class and race differences in this period are dramatic, and attitudes about female children probably have changed a great deal more than attitudes about male children. In all instances, however, the social scientist must look at how the demographic conditions of a social group interact with the symbolic framing of their lives. Parents attempt to socialize their children according to their own subjective and symbolic understandings of the conditions of their lives.

The Responses of Children

We have thus far painted a one-sided portrait of the lives of American children. As we have aimed to show, adults have been particularly confused and ambivalent in their treatment of children over the past few decades, in part because adults have been facing a bewildering array of social transformations. Amidst certain kinds of neglect of children, and perhaps in response to this neglect, adults have moved to exert increasing control over children, from the domestication of anger to the domestication of play.

Yet, children have their own resources for resisting adult control. Children have always maintained their cultures—both authentic and derivative—separate from the cultures of adults. This chapter will conclude by looking at changes in the ways children create their social worlds.

Developmental and sociohistorical forces collide in the creation of the child's world. Our common sense tells us that biological and psychological development must be significant elements in children's cultures; yet we also know that theories about the biology and psychology of child development in any historical period are not immune from ideological pressures. It is therefore impossible to present a true and complete picture of children's culture. The social psychoanalytic approach of Harry Stack Sullivan, for example, led him to emphasize the separateness of
the preadolescent's chumship, wherein "one finds oneself more and more able to talk about things which one had learned, during the juvenile era, not to talk about."[47] Others downplay the importance of the chum for social development. These disagreements should not stymie us, but we do need to recognize that our models of the child's development bear social meanings that affect the child we "find" in our inquiries and, further, we need to recognize that there are numerous children's cultures grounded in groupings that may be quite different from each other.

The first thing we can say about children's cultures is that despite differences in content, most research has indicated that children maintain folk traditions that are remarkably resilient. Although children's cultures draw on a wide range of adult cultural materials, they remain indigenous social forms. Folklorists of childhood recognize that many elements in the cultural life of children are nearly unchanging, lasting for centuries, transmitted from older to younger cohorts.[48] This transmission is remarkable in that it is grounded in oral communication. Unlike adults, whose culture is in many respects written, material, or electronic, children rely on their memories. Children remember and share what is important to them. These materials are malleable and alter according to children's changing interests and needs. Here we find what one of us has called "Newell's Paradox": children's folk cultures are distinctive in being both conservative and innovative.[49] On the thematic level, children's cultures are remarkably stable, but on the level of a local group, traditions are continually being developed.

A distinctive feature of children's expressive cultures is their antithetical stance toward official (adult) cultures.[50] The 1988 presidential election provided an instructive example of the antithetical nature of children's cultures. George Bush had raised as a campaign issue the fact that Michael Dukakis had vetoed, as governor of Massachusetts, a bill requiring teachers to lead students in the Pledge of Allegiance. Bush made it clear that children would pledge allegiance to the flag in the America he meant to lead. The scholar of children's expressive cultures, however, knows the foolishness of this debate. Art Linkletter built a 1950s television career around the fact that when young children learn such things by rote, they garble the text into nonsense words. When children finally are old enough to understand the words, they then invent and pass on parodies of the adults' sacred texts. Thus, the hallowed Pledge of Allegiance becomes, in the mouth of an eleven-year-old, "I pledge allegiance to the flag / Michael Jackson is a fag / Pepsi Cola burned him up / Now he's drinking 7 Up."

The antithetical stance of children's cultures sometimes requires secrecy or esoteric encoding in order for the lore to exist alongside adult
cultures. Many antithetical strategies (for example, parody and nonsense) are built into children's lore, but one of the most prominent is "dirty play."[51] Children may be personally wonderful, kind, and good and still engage in play deemed highly undesirable by adult moral standards. Given the ideological suppression of disagreeable emotions, children may have good reasons to keep traditions that involve aggression, vandalism, obscenity, and racism hidden from sensitive adult guardians. Although children, if asked, would admit to behaving in ways adults find disagreeable because such "misbehaving" is fun, we can see such playful subversion as involving children's needs for control, status, social differentiation, and socialization to perceived adult norms.

Control

Dirty play constitutes a claim-making behavior. It proposes that children have the right to engage in activities and have opinions that contradict adult pressures. Children demand for themselves the right to make judgments about race, sex, and authority—precisely those areas of social structure that adults wish to preserve for themselves. Although the content of this play is troubling to adults, it is equally troubling that children should feel competent to make these judgments. Perhaps we can speak of children's culture that undermines the adult authority structures as "playful terrorism," a kind of mock guerrilla warfare. Such terrorism is politically impotent because of the disorganization of the "terrorist group," their lack of commitment and uniformity of beliefs, the tight control that adults have over them, and the rewards that can be offered to those who conform. Still, there are potential threats in the testing of boundaries and legitimacy, and so such actions, when they become known, may provoke harsh retribution.

Status

Children's cultures shape relationships within the group as well as outside. Behavior that is unacceptable to adults may gain a child status with peers. Children are engaged in a continual and consequential contest for status. Within any particular children's community, resources are spread relatively equally, so status becomes crucial for distinguishing individuals. A premium is placed on being willing to do things that other boys and girls want to do but are afraid to do. If consensus exists that a prank is desirable, the boy or girl who performs it or leads the group gains status for breaking the barrier of fear. The social rewards of "deviant" play suggest why it is so rare for children to engage in these behaviors while alone. It is not so much that children have destructive impulses as that they want to show off in the presence of friends.

Social Differentiation

One important task for children is to define themselves in contrast to other groups that share characteristics with them. Whites are not blacks, boys are not girls; race and gender make a difference, and society reinforces these differences. Public norms of tolerance and civility find such judgments heretical and morally repugnant. Yet, from the standpoint of the child, these beliefs, like so much ethnocentrism, seem natural. Casting racial or gender insults on others provides a group with some measure of collective self-worth, admittedly at the expense of another group. While this need to put down another to gain self-esteem is unfortunate, the process is common at all ages. In childhood, when questions of identity-formation are crucial, it has a particular weight.

Socialization to Perceived Adult Norms

Hidden culture is not created de novo; rather, it is a transformation of what children see enacted by older peers and adults and in the various media to which they are exposed. It is transformed to meet their developmental imperatives and their level of understanding. The content of adult discourse and media to which children are exposed has impact, though often it is content that many adults sincerely wish they had not communicated because of its sexual, aggressive, and anti-social themes. To the extent that children are exposed to large segments of adult life, their cultures will represent transformations of these adult themes. Adults cannot shield preadolescents from what they do not wish them to learn. Aggression, sexism, and racism exist in adult activities and discourse, even when the adults are trying to discourage those behaviors. The exasperated parent's warning, "Listen to what I mean, not what I say," acknowledges the adult's impotence in the face of children's interpretation of messages.

Conclusion

Children reside in a world that they do not create. Dealing with this reality is complicated by the fact that it is constantly changing. It is not the world that children—and adults—faced in the past. The world changes, we change the world, and our responses to that new world change. One should be skeptical of the view that society today is especially at risk. Writers earn their keep by convincing gullible parents, educators, and care providers that some new, unique threat to children exists, but the truth is that the state of the world can always be pictured as a crisis by those with a mind and a motivation to do so.

Children and adults reside in a world that is delicately balanced; it always has been so. Yet, it is a moving equilibrium. The social drama, despite the obvious power difference between adults and children, is not entirely one-sided. As we approach the end of the century and embark on the inevitable self-reflection that such milestones always provoke, it is well to recognize that adults and children do not necessarily have the same cultural or social agendas. Children can, in some measure, resist the control of adults. While we, as adults, have some responsibility to help shape the worlds of children, we should also come to respect their natural response to our aid—a mix of gratitude and an understandable human desire to be left at peace.

If, as Berger suggests, children are hostages to history, socialization inevitably will be proactive. Through our children, we attempt to shape a future vision of society. We plant seeds that we hope others will harvest. Our collective concern about child rearing suggests that we care about more than ourselves: our children do matter, the communities in which they will reside count, and both can be shaped from a distance. How the twenty-first century will unfold is being determined in homes and schools in the twentieth.

Four—
Ambivalent Communities:
How Americans Understand Their Localities

Claude S. Fischer

Americans of the Left and of the Right esteem the local community. It rests in the pantheon of American civil religion paradoxically close to that supreme value, individualism. In our ideology, the locality is, following the family, the premier locus for "community," in the fullest sense of solidarity, commitment, and intimacy. Thus, activists of all political hues seek to restore, empower, and mobilize the locality.[1]

This chapter reviews, in broad strokes, the complex changes that have shaped the American locality and Americans' attachments to it in this century. Over the years, Americans have become more committed, in practical ways, to their localities, even while enjoying access to ever-widening social horizons. This localism has served most individual American families well, but the political role of the locality exacts severe costs from the national community.

Contrasting Visions of Community

Americans' affections for "community" are ironic, for much of American history and ideology undercut traditional local solidarity. Unlike Europe, the United States lacks the feudal experience of closed, corporate communities; its founders resisted hierarchy; marketplace liberalism undergirds its economics and politics; its settlers were linguistically, religiously, and culturally diverse; its people have always been mobile; its once-dominant farmers usually lived in isolated homesteads; and, in all, Americans have been, consensus has it, intensely individualistic.[2]

Paul Burstein and David Hummon, as well as Alan Wolfe, provided comments that helped improve this chapter, but I remain responsible for any errors it may contain.

In spite—or perhaps because—of these conditions, Americans have glorified and sought the local community.[3] From before Tocqueville to beyond Riesman, observers have described us as inveterate joiners, people in quest of fellowship. The quest has been for the locally based association as much as or more than for any other. Although American culture esteems the wilderness as an escape from society, as for Thoreau, it simultaneously values the small, rural community as the locus of intimate society, as in Brook Farm. Most Americans believe that small communities preserve morality.[4] Politicians' rhetoric celebrates the virtues of the small, local community. (Recall Geraldine Ferraro's claim in 1984 that her corner of Queens, New York City, was really just a small town—like Mondale's Elmore, Minnesota, and Reagan's Dixon, Illinois—and that this entitled her to the same halo of grassroots innocence that the others claimed.) And local political autonomy has long been entrenched in strong home rule, dispersed authority, and checks against central government. Americans continue to subscribe to "community ideologies," beliefs about the inherent connection between place and persona, theories that where we live partly determines who we are, and most often that the best people are to be found in the smallest, most localized places.[5]

This contradiction between individualism and the pursuit of fellowship has yielded paradoxical forms of "voluntary community" in the United States. The classic old-world village, nowadays viewed through pastel prisms, was a place of constraint. Confined together by barriers of geography, poverty, illness, ignorance, law, prejudice, and custom, most old-world people lived out their lives in a small group, shared a common fate, and knew one another intimately.[6] This familiarity, by the way, did not necessarily mean affection.[7] In contrast, Americans have more typically found their fellowship in voluntary associations, be it clubs, churches, or neighborhoods. They have also joined or left those associations as each individual deemed appropriate.[8] We can see this voluntarism in the American approach to caring for the unfortunate, well expressed in George Bush's "thousand points of light" rhetoric. And so with our neighborhoods. They are, as Morris Janowitz termed them, "communities of limited liability," associations in which we invest our families, wealth, and concern—but we guiltlessly leave them for larger houses, more rewarding jobs, or finer amenities.[9]

With minor exceptions, Americans founded their towns as business ventures.[10] Developers platted the land and advertised its bountiful future. Settlers came and then left in search of a higher standard of living.[11] Indeed, they left in vast numbers, making for a great churning of population in nineteenth-century America, through big cities and small towns alike. Despite sentiment, then, we have for the most part
long treated our residential communities as "easy come, easy go," rather than as social worlds that envelop us.[12]

Is Ours a "Rootless" Society?

How has the connection of Americans to their localities changed over the years? Many believe that ours has become an ever more "rootless" society; sage commentators diagnose "placelessness" as the source of modern America's ills.[13] The facts are more complex. In several ways, Americans have become more "rooted" to their localities, and in several ways, less rooted. To simplify these complexities, I will argue that, in net, several historical changes have increased Americans' commitments to their localities, decreased their dependence on the locality for sociability, but increased the locality's political—and thus social—significance.

We cannot directly judge how people of earlier periods felt about their localities and compare them to people of today, but we can examine several changes that, logically, should have affected Americans' attachments to place.[14] Several historical changes probably increased how much Americans care about and invest themselves in their localities.

Reduced residential mobility is one such change. Americans are more mobile than other Western peoples, and they have always been highly mobile. But this mobility has been declining. Historians, by comparing lists of town residents from one year to another, have found that Americans in the nineteenth century were at least as geographically mobile as contemporary Americans, and perhaps twice as mobile.[15] Since World War II, Census Bureau evidence shows, the total rate of moving from one house to another generally dropped (see figure 4.1). Among those who moved, proportionately more crossed county lines recently, a change attributable to suburbanization and thus implying that these movers remained in the same urban area. The year-to-year fluctuations can be tied to oscillations in the job and housing markets. But the general picture is one of modestly increasing residential stability.[16]

In cross-national perspective, however, Americans remain notably more footloose than Europeans, although only a little more so than residents of the other continental Anglophone countries, Canada and Australia.[17] The reasons are probably structural (our many dispersed metropolises), historical (our open-door immigration until 1924), and cultural (our famed individualism). What has probably changed over the years is a modest shift from "push" to "pull" mobility. Some pushes on nineteenth-century Americans to move—such as land shortages, job losses, disasters, and poverty—weakened in the twentieth century, while pulls—such as retirement communities, climate, college, and job opportunities—expanded.


[Figure 4.1: Percentage of U.S. Population Changing Residence in Previous Year. SOURCES: Larry H. Long, Migration and Residential Mobility in the United States (New York: Russell Sage Foundation, 1988), 51; U.S. Bureau of the Census, Geographic Mobility: March 1986 to March 1987, Current Population Reports, Series P-20, No. 430 (Washington, D.C.: Government Printing Office, 1989), 2. NOTE: Year refers to the twelve months prior to the spring of the indicated year.]

Americans' greater residential stability has probably increased their attachment to their localities. Studies have repeatedly shown that the longer people live in a place the stronger their emotional and social commitments to it.[18]

Another secular change that, in net, probably increased local commitment is the dispersal of the urban population. Despite the popular image of the ever more crowded city, over the last century, American metropolises have been spreading and thinning out. As a result, proportionally more Americans live in suburban single-family houses, located in small, autonomous, suburban municipalities. For about a generation now, more Americans have lived in suburbs than in either center cities or non-metropolitan areas. These conditions—single-family houses, low-density housing, and suburban governments—in turn tend to encourage local commitments.[19]

(What about the great migration from farm to city in this century? One of rural Americans' chronic problems was their difficulty in forming communities—in organizing associations, mobilizing politically, or seeing one another socially.[20] For former homesteaders, the move to town probably increased local involvement.)

A third change, one connected to the growth of urban sprawl, has been the evolution of class-homogeneous neighborhoods. At least until the early streetcar era in the 1880s, all but the affluent lived close to their jobs. The elite had their suburban enclaves, but different classes mixed in city neighborhoods, although residents were sometimes well separated by ethnicity. Today, neighborhoods are less segregated by ethnicity—greatly excepting black ghettos—but more finely differentiated by income level.[21] Greater local homogeneity also reinforces neighboring and attachment to the neighborhood.[22]

The great exception of the black ghettos in fact underscores the general increase in local homogeneity. During the twentieth century, blacks, at least those in the North, became more segregated from whites, even as white ethnic groups, and for that matter Asians and Hispanics, became less segregated from one another. This racial divide has provided whites with neighborhoods devoid of what many find to be the unsettling presence of blacks. It has largely confined blacks, including many in the middle class, to districts with other blacks, including the very poor. Analyses by Douglas Massey and his colleagues suggest that there may have been some small breaches in racial walls recently, but for poor blacks, geographic isolation increased through the 1970s.[23]

A fourth trend is increasing home ownership. Over the century, most American families came to own their homes, with the fastest increase occurring between 1940 and 1960, as figure 4.2 illustrates. The most dramatic change was among the young. In the 1940s the median age of male homeowners was forty-one, but in 1970 it was twenty-eight.[24] Home ownership has stagnated in the last fifteen to twenty years of housing inflation and economic doldrums, but it has remained historically high. (These data do not consider any increase in homelessness.)

Although Americans have long vested their dwellings with important moral qualities—a proper house both reflects and nurtures noble values[25] —in the nineteenth century, Americans did not esteem ownership as they do now. Many middle-class families were content to be renters. The connection between property and propriety apparently arose around the turn of the century, when increasing affordability, suburbanization, and ideologies of domesticity combined to make ownership easier and socially correct. Then, in the twentieth century, rising affluence, new mortgage instruments, government subsidies, tax breaks, and in the 1950s the family boom spurred home ownership to its current levels.[26]

[Figure 4.2. Percentage of Housing Units That Are Owner-Occupied. SOURCES: U.S. Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970 (Washington, D.C.: Government Printing Office, 1975), 646; U.S. Bureau of the Census, Statistical Abstract of the United States 1988 (Washington, D.C.: Government Printing Office, 1987), 688; U.S. Bureau of the Census, Census and You 25 (December 1990), 5.]

Today, home ownership, preferably of a single, detached house, is the American ideal, despite the financial hurdles involved. In a 1985 poll, for example, 76 percent of respondents agreed that people who do not own their homes are "missing out on an important part of the American dream."[27] Being a renter is stigmatizing unless the person is in a transitional stage, a young single, or elderly.[28]

Growth in home ownership slowed, and the ownership rate even declined slightly, in the late 1980s.[29] A sense of crisis about middle-class housing arose, one that Michael Dukakis tried to exploit in his 1988 presidential campaign. In historical perspective, though, the decline has been mild. Demographic changes in the last thirty years—the aging of the baby-boomers, more divorce, delayed marriage and child rearing—should have led home ownership to sag much more than it did. The big drop in ownership during the 1980s was precisely among Americans under thirty, who were increasingly putting off marriage and childbearing. Still, income losses, housing speculation, and financing changes strained many families, forcing some to rely on two incomes when they would have preferred one, and pushing some home-seekers out of the market.[30] Other would-be owners turned to condominiums or, in rural areas, mobile homes.[31] The proportion of available housing that is single detached units has dropped since the 1960s.[32] This shift to condos or trailers also contributes to a sense of crisis, since the American dream is so closely tied to the single-family house. Altogether, much of the concern arises from a comparison to the late 1960s, when, with boom times, owning a detached house was easier than now and seemed so normal.

Despite fluctuations owing to changes in demographics and economics, the great increase in home ownership during the twentieth century is unlikely to be soon reversed.

These conditions—urban sprawl, segregation, and home ownership—distinguish America from most European societies. David Popenoe credits them with creating a higher level of neighborhood involvement in the United States than he observed in either Sweden or the United Kingdom.[33] Changes in these conditions over the last few generations, along with declining mobility, would all seem to have helped Americans further attach themselves to their neighborhoods and towns. Besides, most Americans have enjoyed increasing freedom of choice in where they live. Freedom can mean lack of commitment and transiency, but here it seems to have made it easier for most people to find and stay in places they most prefer.[34]

Yet, other changes in the twentieth century may have reduced commitment to the locality. One such change has been the increasing separation of home and workplace. Although some commentators have exaggerated the extent to which home and work were entwined in the past—most people in days gone by were not independent craftsmen working in their homes—the distance between where people live and where they work expanded, particularly with the coming of streetcars in the 1880s.[35] Working outside one's home area probably detracts not only from the time people spend in the neighborhood but also from their subjective feeling of commitment to it.

A second such change is the increasing participation of married women in the labor force. In 1900, 6 percent of married women worked for pay; by 1987, 56 percent did. (The rates for divorced women, a growing fraction of all women, were much higher.)[36] Though married women's employment has typically been part-time, it does mean that fewer American households have a "traditional" homemaker at home all day, the same homemaker who critically connected the family to the neighborhood.[37]

Third, households shrank. With the virtual disappearance of servants, boarders, and lodgers, with later marriage, more divorce, and fewer children, the size of the median American household shrank from 4.8 people in 1900 to 2.7 in 1987.[38] We can assume that, generally, the fewer people at home, the less attached the household is to the locality.

Thus, in the complex weave of twentieth-century social changes, some changes drew Americans closer to their neighborhoods and towns and some pulled them away. Could we assess past people's identifications, sentiments, and actions more directly, we would not need to estimate the change in local attachment so indirectly. As it stands, the changes that more tightly bound people to places probably outweighed those that weakened the bonds, and the best estimate is that, contrary to convention, Americans are more "rooted," practically and sentimentally, to their communities than ever before.

The Fate of Local Ties

On another dimension, however, Americans have probably become less rooted to their residential communities: social ties. Although this evidence is also indirect, probably fewer of Americans' relatives, friends, and associates live near them than was true in earlier generations. (I am not referring here to "neighboring," defined as casual interaction with people living nearby. Americans are often "neighborly" but rarely socially close to their neighbors.) In one study, fewer than a third of respondents' important relations were with people living within a five-minute drive. This dispersion was even greater for the middle class. The neighborhood provides proportionately few of middle-class Americans' important ties.[39]

How much recent generations differ from earlier ones in this regard is uncertain. On the one hand, Avery Guest and his co-workers found that neighborhood associations in Seattle in 1979 hosted fewer social activities than they did in 1929.[40] On the other hand, a few researchers have asked whether marrying couples are coming from increasingly distant homes—an index of dispersing social contacts—and the answers are mixed.[41] So far, the evidence for a historical dispersion of social ties is largely indirect: Those people who seem most "modern"—the educated, affluent, young, and urban—tend to have more spread-out networks than those who seem less so. By (a perhaps unwarranted) historical translation, then, we should have seen an increase in the dispersal of social relations.

We can also infer a decline in local ties from other social changes. Those changes that presumably uprooted Americans from their communities also should have scattered their networks: separation of home and work, mothers working, and smaller families. On the other hand, the changes that seemingly rooted Americans also should have contained their social ties: residential stability, suburbanization, neighborhood homogenization, more home ownership. Yet an additional consideration is changing communications and transportation. As early as 1891, an observer claimed that the newly developed telephone had introduced an "epoch of neighborship without propinquity." With the addition of cheap automobiles, analysts often claimed, space was "annihilated" and relations transcended distance.[42] It stands to reason (although reason is sometimes wrong) that with affordable telephones and automobiles, not to mention airplane tickets, people can sustain social ties across greater distances than their great-grandparents could have. We can enjoy an evening with friends who live twenty miles away or celebrate Thanksgiving with kin in another state. Whether, or to what extent, Americans' ties are in fact more dispersed today than previously is still unproven.[43]

The best guess is that there has been a historical change, that Americans' social lives are today less localized than they were a century ago. The more striking conclusion, however, is that the change may not have been as great as we imagine.

The Persistence of Local Autonomy

Localities are more than where we live and the people with whom we dwell. They are also polities. It is especially in "home rule" that the American affection for the locality is problematic. Although tested through the twentieth century, local autonomy continues to shape crucial aspects of daily life, perhaps satisfying most Americans, but undermining the collective good.

Spurred by economic growth and economic crises, technology, and war, state and federal governments undertook vast new responsibilities in the twentieth century, dwarfing the localities in scale and public attention.[44] Also, local governments increasingly depended on cash infusions from the outside, the major shift occurring during the Depression.[45] Higher levels of government usurped some authority from the localities. Early on, state governments took over, for example, the regulation of utilities and road management. In later years, federal authorities intervened in voting, schools, and zoning to protect civil rights and the environment. These changes probably also shifted media and citizen attention toward higher levels of government.[46]

But the fundamental principle of local autonomy, long distinctive of the American system, has not been breached. Although dwarfed by the growth of state and federal authorities, local governments also increased their financial role in this century.[47] Other changes also strengthened the autonomy of small—especially suburban—municipalities. States granted small cities greater financial independence, including the right to incur debts. Town control over land use, notably through zoning, expanded.[48] In recent years, some authorities, especially the courts, have been able to intervene in local decision making,[49] but the basic independence of the locality remains. One sign is that since World War II the number of municipal employees has grown twice as fast as the number of federal employees.[50]

Most urban Americans now live in the small towns surrounding the center city,[51] and these are the better-educated, more affluent, and whiter urban Americans. They live in distinct, albeit neighboring, communities that differ from one another and surely from the center city in population profile, finances, and land use.[52] Suburban residents, although usually content to leave politics to caretaker governments, do mobilize to protect the legal, fiscal, and social boundaries of their towns.[53] By moving across a municipal line, usually by doing little more than crossing a street, some Americans can obtain better civic services at lower tax rates; choose among different housing styles, prices, and taxes; enroll their children in schools unburdened by poor students; and otherwise "purchase" by their relocation a better "basket" of social goods.[54]

Even within large cities, localities' clout seems to have grown through neighborhood movements. Neighborhood ideology arose at the end of the nineteenth century, was encouraged by Progressive reformers and planners, and then was renewed by "community power" militancy in the 1960s and 1970s.[55] Neighborhood movements have defended local communities from intrusions, resisted growth, and contested with downtown business interests. Critics, however, charge them with creating urban paralysis by NIMBY (not in my backyard) vetoes of citywide endeavors. Although new politics and new laws, such as required "impact" assessments, empowered many low-income neighborhoods—it is hard to imagine that Robert Moses could bulldoze the Bronx today—neighborhood power is still more easily and more often exercised by the same sorts of advantaged people who protect their exclusive suburbs, some of whom now live in gentrified city quarters.

The Place of Place

Peter Rossi has pointed out that "the world has become increasingly cosmopolitan, but the daily lives of most people are contained within local communities."[56] Place still matters. The variations in house prices between and within regions, for example, mock economists' models and futurists' projections that the nation is leveling out into a uniform, placeless realm.[57] How important place will be in the future, we can only speculate. Will "cocooning," a media buzzword of the 1980s, typify the next decades, or will there be increasing cosmopolitanism? Much will depend on economic changes and demographic shifts. Unless the economy fails, American wealth should help sustain residential stability and home ownership. As baby-boomers move beyond child rearing and then retire, they will increase geographical mobility, but they will also release more single-family housing for their grandchildren. Spots of inner-city gentrification notwithstanding, the sprawling of the metropolises continues, augmenting suburbanization and "exurbanization" beyond the suburbs. That trend suggests yet more homogeneity, low-density housing, and autonomous political localities.

Most Americans would, in all likelihood, applaud those trends. Raising a family in a detached house, in a homogeneously middle-class, suburban locality, governed by people much like oneself, seems almost ideal. As with other equity issues, even Americans who lack this privilege would preserve it. Experts may criticize localism for its "collective irrationalities" costly to residents themselves—traffic congestion, governmental paralysis, unbalanced growth, domination by business interests, and so on—and for its "externalities" costly to the wider community—ghettoization of the poor, abandonment of the great cities, unjust tax burdens, and so on. No matter. In America, the free pursuit of the private good is the public good. Localism is, as much as ever, an instrument to that good.

Herein lies a seeming contradiction: an inconsistency between the locality's communal role and its role as a vehicle for individual interest.[58] American ideologies of community paint the locality, especially the small one, as a site for fellowship, in contrast to the atomism of the wider, especially the urban, world. Many Americans value and enjoy the congeniality of a local community. Yet they often resist that same local community when it constrains their interests, be the constraint in taxes, behavioral codes, or infringements on private property. Neighborhood organizations, for example, typically awaken when outsiders threaten residents' safety or wealth. Otherwise, the energy that drives them usually rests dormant. Neighborhood groups rarely act as local governments. Other evidence of the priority of the individual comes in negotiations within condominium complexes, where collective needs and rules run up against assertions of home owners' rights.[59] While Americans value the locality as a source of solidarity, it takes second place to individual freedom.

Another seeming contradiction appears between the persistence of home-rule politics and the dramatic growth of the national government.[60] How can locally oriented Americans tolerate the Washington behemoth? One answer is that the national government has not grown as much as we think.[61] More important, growth in the federal government's role and in its income was a response to seemingly unavoidable crises. The Depression justified social engineering and costly programs. The world wars and the Cold War justified other national initiatives. Officials invoked the Cold War, for example, to rationalize the interstate highway system and subsidies to higher education. And still, the United States is, by Western standards, an incomplete welfare state. The reality is that Americans generally resist government at all levels, but grudgingly prefer local rule by like-minded neighbors as the lesser evil.

National action, piecemeal as it is, also occurs in response to translocal coalitions. That was one lesson, for example, of the Civil Rights struggle, which as movement and as legislation ran roughshod over local autonomy. The environmental movement is a more complex example. In some ways, it too imposed national concerns over local ones, for example, threatening local jobs for old trees or peculiar fish. (In other ways, though, it reinforced the NIMBY pattern of localism, legitimating a "draw up the drawbridge" style of conservatism.) Although local events—Love Canal, for one—dramatized the environmental agenda, the movement's power still appears to rest on coalitions of interests that are translocal.

A strategy to move the nation in a progressive direction would in a similar way involve rethinking the ideology of locality, an ideology really more attuned to privilege than to reform. Thomas Bender has pointed out the dangers of confusing values attached to "community" with the needs of the public, political sphere. To insist, for example, on personal knowledge of political candidates may mean selecting the lesser rather than the better candidate. Or, to cry for "local control" for a community wealthier in needs than in resources may end by perpetuating disadvantage.[62] It is important to look clear-eyed at the consequences of America's localism, not with romanticized nostalgia.



PART TWO—
ECONOMICS AND POLITICS:
GLOBAL AND NATIONAL



Five—
Mirrors and Metaphors:
The United States and Its Trade Rivals

Fred Block

The Decline of American Competitiveness

In the winter of 1990, the Chrysler Corporation ran a television commercial that featured its chairman, Lee Iacocca, complaining about an American inferiority complex toward the Japanese. He was referring to the perception that Japanese manufactured goods, including automobiles, were generally of higher quality than those made in the United States. This unusual advertising strategy was symptomatic of a radical reversal that occurred over less than forty years. In the 1950s, the label "Made in Japan" was an object of derision; it was synonymous with cheap goods of poor quality. By the 1980s, Japan had established itself as the world's most successful exporter of highly sophisticated manufactured goods.

While Japan's shift is the most dramatic instance, it is symptomatic of a broader transformation of the United States' position in international trade. Immediately after World War II, the United States was the only industrialized country whose manufacturing base had actually been strengthened during the war. U.S. industrial capacity had expanded significantly, while the economies of England, France, Germany, and Japan were all severely damaged. The enormous international appetite for U.S. manufactured goods in the post–World War II years made it possible for the United States to export far more than it imported. The only constraint on this appetite was the difficulty that other nations had in obtaining the dollars with which to purchase U.S. goods. The United States tried to overcome this "dollar gap" through aid programs that were designed to hasten the reconstruction of the economies of Western Europe and Japan. By every possible indicator, the United States dominated the world economy from 1945 through 1965.[1]

By the end of the 1960s, though, it was apparent that U.S. efforts to bolster the economies of its industrialized trading partners had been too successful. The U.S. trade position went from surplus to deficit as Western Europe and Japan sold increasing volumes of manufactured goods to the United States (see table 5.1). During the 1970s, however, the inflows were largely of consumer goods; the United States still enjoyed a healthy surplus in the export of capital goods, such as computers, machine tools, and airplanes. But in the 1980s, this last remaining advantage weakened as the U.S. economy was overwhelmed both with high-tech manufactured imports from Japan and Western Europe and low-tech imports from Newly Industrializing Countries such as Taiwan and South Korea.[2]

TABLE 5.1 U.S. Foreign Trade Surplus or Deficit for Various Years
(in millions of current dollars)

    1947      $10,124
    1950        1,122
    1960        4,892
    1970        2,603
    1980      –25,480
    1988     –127,215

SOURCE: Economic Report of the President (Washington, D.C.: U.S. Government Printing Office, 1990), table C-9, 410–11.

NOTE: These figures are for merchandise trade, exclusive of military shipments.

These dramatic shifts in the U.S. trade balance are linked to changes in national self-confidence. The trade surplus after 1945, combined with U.S. military superiority, encouraged talk of the "American Century"—a period of U.S. international dominance comparable to the Pax Britannica of the nineteenth century. However, it was to be a very short century; by the 1980s, the growing trade deficit catapulted Paul Kennedy's The Rise and Fall of the Great Powers onto the national best-seller list. Kennedy argued that the growing U.S. trade deficit meant that the United States was following a long-established pattern of imperial decline.

The United States' competitive decline has become a central issue in the country's politics. Debate focuses on the question of what can be done to improve our international trade position. The AFL-CIO and some of its allies in the Democratic party have consistently argued that the major problem is the unfair trading practices of some of our competitors, but this has been a minority position. Thus far, no clear majority position has emerged, but politicians in both parties increasingly argue that their pet proposals—from cuts in the capital gains tax to educational reform—are necessary to solve the trade problem.

Since 1980, it has been increasingly common for domestic commentators to compare the United States with its leading trade rivals to gain perspective on what should be done. This use of other countries as a kind of mirror—to better assess one's own society—has been common in the history of many nations. Russian history, for example, has been marked by episodes in which invidious comparisons with foreign nations have been used to stimulate domestic reform. The Gorbachev era is only the most recent example. Yet this type of comparative national introspection has been rare in modern U.S. history; for most of this century, national confidence has been so great that the only comparative question was why other nations had been so slow to adopt American institutions and practices.

But faced with trade deficits and a perception of competitive decline, U.S. analysts have increasingly looked to Japan and West Germany for insight into what is wrong in the United States. The intention, of course, is to spark national renewal by recognizing and eliminating those national characteristics that are holding the United States back. Unfortunately, the perceptions from these comparisons that have entered the public debate have been like the images in fun house mirrors. Some of the features of those societies that are most important in explaining their economic successes have been almost completely ignored, while others of marginal or questionable importance have loomed far too large as explanations for economic success. Most sadly, the comparisons—like distorted reflections—have served to obscure rather than to enlighten; they have made it more difficult for this society to understand how to handle its economic and social problems.

Comparing the United States, Japan, and West Germany

In pursuing comparisons among the United States, Japan, and West Germany, it is important to distinguish between the scholarly literature—books and articles that are very rarely read outside of university settings—and the popular literature of newspapers and magazines read by millions of people. In the scholarly literature, there are five important areas of contrast between Japan and West Germany, on the one hand, and the United States, on the other; in the popular arena, only one—or possibly two—of these factors is emphasized.

Before beginning the comparison, it is important to emphasize that neither Japan nor West Germany is an unequivocal economic success story. West Germany has gone through the 1980s with unemployment rates higher than those in the United States. Large parts of the Japanese economy, particularly the service sector, remain relatively underdeveloped. And both Japan and West Germany have provided far fewer economic opportunities for women than has the United States. Different economies have succeeded with certain parts of the puzzle of how to organize an advanced, postindustrial economy, but no single nation has been able to put the whole puzzle together. Hence, the main economic achievement in both Japan and West Germany has been quite specific—to reorganize manufacturing to produce high-quality goods that are particularly attractive in international trade. In a period in which a number of Newly Industrializing Countries have greatly increased their international market share for such simpler manufactured goods as apparel and steel, Japan and West Germany have run large surpluses in manufacturing trade by specializing in more complex goods, such as automobiles, machine tools, and consumer electronics.

Also, the West German and Japanese economies are very different from each other in their specific institutional arrangements. It is not a simple matter to create a single composite "successful competitor country" out of these quite different national experiences. Nevertheless, there are a number of dimensions on which these two countries are both similar to each other and different from the United States that might account for the variation in the three countries' recent experiences with sophisticated manufacturing. On some of these dimensions, the specific institutional arrangements through which a given set of ends is achieved might be quite different, but the ultimate outcome appears similar. All of these dimensions have been discussed in the scholarly literature, but only a few of them have played a part in more popular discussions.

Marginality of Military Production

One obvious point of comparison between West Germany and Japan is that both were defeated in World War II. As a consequence of that defeat, both nations were constrained to limit their military expenditures. The result has been that defense spending and military production have played far more marginal roles in their economies than in that of the United States.[3] This has contributed substantially to Japan's and West Germany's successes in civilian manufacturing.[4]

In the United States, a large percentage of scientists and engineers have been employed in defense and defense-related industries.[5] Moreover, the proportion of "the best and the brightest" from these technical fields who end up working in the military rather than the civilian side of the economy is even greater. Firms doing military research and development are able to pass their costs along to the government, so they are able to pay higher wages than civilian firms. Also, the needs of the arms industry have profoundly shaped engineering education in the United States, so that the definition of what is exciting and interesting work has been shaped by military demands. The consequence is that the use of scientific and engineering talent in civilian manufacturing in America has been far more limited and far less effective than in Japan and West Germany.



The different use of technical labor is only part of a larger contrast. High levels of U.S. defense spending have fostered a business style that is particularly unsuited to success in highly competitive civilian markets. It is a style that involves mastery of the bureaucratic complexities of the procurement process, in which cost of production considerations are relatively unimportant, and where there are few rewards for high levels of flexibility in the production process. This style contrasts sharply with the sensitivity to consumer preferences, the sustained effort to reduce production costs, and the emphasis on flexibility that are characteristic of the firms that have been most successful in competitive civilian industries.[6]

Cooperative Work Arrangements

In both Japan and West Germany, a relatively high level of trust exists between employees and managers in manufacturing. While there are significant differences in the industrial relations patterns of the two countries, with West German unions being far stronger than unions in Japan, both countries have been able to mobilize high levels of employee motivation and initiative. In particular, both countries have evolved practices that protect core employees from displacement as a result of technological change. The consequence has been greater employee receptivity to technological innovation and, thus, quicker and more effective utilization of new productive technologies.[7]

Similar practices have evolved in some of the most important U.S. firms in the computer and electronic industries where no-layoff policies and commitments to retraining have created an openness to continual technological innovation.[8] However, the industrial relations in most U.S. manufacturing firms continue to be characterized by low trust and continued worker fears of displacement resulting from technological innovation. While many Fortune 500 firms have experimented with quality-of-work-life and employee involvement programs in the hope of emulating foreign competitors' high-trust manufacturing environments, the results have been uneven.[9] In many cases, U.S. firms have been unable or unwilling to provide the increased employee job security that is an indispensable part of a more cooperative system of industrial relations.

Supportive Financial Institutions

In both Japan and West Germany, banks have historically played a central role in providing finance for manufacturing firms; the sale of corporate stock to nonbank purchasers—the chief mechanism by which firms raise money in the United States—has played a distinctly secondary role. This greater role of banks in the manufacturing sector has several positive consequences. First, banks tend to have a longer-term time horizon than stock markets. When bankers invest heavily in a firm, the advice that they give and the pressures they exert tend to be oriented to the long term. In contrast, corporate stock prices are heavily influenced by quarterly earnings reports, and concern about the stock price forces firms to emphasize profits in the next quarter over longer-term considerations. At the extreme, the emphasis on next quarter's bottom line can lead firms to sacrifice spending for preventive maintenance, research and development, and good employee relations—all factors that play a large role in the firm's long-term prospects.[10]

Similarly, banks with substantial stakes in manufacturing firms can play an active role in coordinating relations across firms. They can facilitate joint ventures between firms that might have complementary strengths, and they can use their influence to dampen destructive competition in a particular industry. Perhaps most significantly, neither Japanese nor West German manufacturing has seen anything like the takeover wars that the United States experienced in the 1980s. In those countries, the banks can use their influence to get rid of ineffective management teams without the huge costs that have been incurred in U.S. corporate takeovers.

Social Inclusion

Both West Germany and Japan have dramatically reduced poverty in their societies, although they have accomplished this through different means. In Japan, there has been a very strong political commitment to maintaining high levels of employment, so there are relatively few adult males who are marginal to the economy. Full employment combined with a reasonable minimum wage and a low divorce rate has made it possible to pull most people above the poverty level with comparatively low levels of social welfare spending. In West Germany, where unemployment has been relatively high, the elimination of poverty has required—in addition to a high minimum wage—fairly extensive state welfare spending in support of the unemployed and single-parent families. The results are that in West Germany only 4.9 percent of children live in poverty; in Japan, 8.1 percent of children aged ten to fourteen live in poverty; while in the United States, the comparable figure is 22.4 percent.[11]

The contrast between a large population of poor children in the United States and much smaller populations in Japan and West Germany has direct implications for education. The reduction of poverty goes along with substantially higher levels of educational achievement by young people. There is considerable evidence that the average high school graduate in Japan has substantially higher levels of mathematics and science skills than the average American high school graduate, but the most striking contrast is in the percentage of students who complete high school.[12] In the United States only 71.5 percent of students graduate, in contrast to 88 percent in Japan.[13] In West Germany, rates of high school completion are lower, but most of those who leave school at age sixteen enter highly structured three-year apprenticeship programs that combine on-the-job training with formal learning.[14]

The proportion of eighteen-year-olds in the United States who are unqualified for skilled employment is probably as high as 40 percent if one includes both dropouts and students who graduate from high school with only minimal skills. This puts U.S. firms at a distinct disadvantage compared to Japanese and West German firms, which have a much deeper pool of young people who can easily be trained for skilled employment. In some sectors of the economy, the United States can partially make up for this disadvantage by making greater use of female employees than do Japan and West Germany, but this compensating mechanism does not work for skilled manufacturing jobs, where women still make up only about 6 percent of the labor force. Hence policies of social inclusion that result in the general reduction of poverty contribute to Japanese and West German industrial competitiveness by raising the level of educational attainment of the bottom half of the population. This advantage over the United States in the quality of the human input into the production process makes it easier for Japan and West Germany to develop more cooperative employment relations and to place more emphasis on the improvements in worker skill that facilitate the use of advanced production technologies.

Higher Rates of Personal Savings

It is widely believed that in Japan and West Germany households save a much higher proportion of their income than do those in the United States. Official data show that Japanese and West German household savings rates were at least twice as high as those for the United States in the 1980s.[15] This greater frugality means that there is a relatively larger pool of savings available for productive investment by firms at a lower interest rate. The lower interest rate means that firms can justify productive investments that could not be pursued if the cost of capital were higher.[16]

It follows, in turn, that Japan and West Germany use this savings advantage to invest more heavily in manufacturing, with the consequence that their manufacturing productivity has grown substantially faster than that of the United States. The faster rate of productivity growth makes it possible for them to control costs and compete successfully against the United States in manufacturing markets.

Of these five possible explanations for Japanese and West German economic success, it is clear that the fifth explanation—the difference in household savings rates—has completely overshadowed all of the others in popular discussions. By 1989 concern about the low rate of personal savings in the United States had become such a national preoccupation that both major political parties advanced proposals designed to stimulate higher rates of savings. The popular press was filled with laments about the decline of personal savings. Peter Peterson, a former secretary of commerce, wrote a typical column in the New York Times (July 16, 1989), in which he reminisced lovingly about the frugality of his immigrant parents before he made this argument:

Up until about two decades ago, Americans would have considered it unthinkable that they could not save enough as a nation to afford a better future for their children, and that each generation would not "do better" and that the resources we invest into the beginning of life might be dwarfed by the resources we consume at the end of life. Yet, today the unthinkable is happening.

Our net national savings rate is now the lowest in the industrial world, forcing us to borrow abroad massively just to keep our economy functioning.

Later in this chapter I will show how Peterson's argument is based on problematic data and mistaken assumptions about how the economy works. The point to be emphasized here, however, is that of all of the important institutional contrasts between the United States and its major competitors, the difference in the savings rates of households has received disproportionate attention.

Some of the other contrasts have also been part of broader public discussions, but in each case, one element has been emphasized in a very telling fashion. For example, there has been considerable public concern about the shortcomings of U.S. public education, and a number of prominent corporate executives have argued that the failings of our schools have put their firms at a disadvantage relative to our major international competitors. George Bush promised to be the "Education President" precisely to address these problems. However, the problem of education in the United States is almost never related to the larger issue of social inclusion; it is rarely argued that the best way to improve our schools is to eliminate poverty. On the contrary, discussions of school failure tend to emphasize the personal shortcomings of those who drop out. This constant emphasis on individual characteristics helps give plausibility to the otherwise implausible arguments of those educational reformers who want to "get back to basics" and place renewed emphasis on discipline.

There has also been some broader discussion of the more cooperative employment relations that Japan and West Germany enjoy. Here again, the discussion moves quickly away from the specific institutional arrangements, such as strong unions or employment guarantees, that undergird that cooperation. Instead, the focus shifts to the cultural values of individual workers. Japanese and West German workers are seen as embodying the values of the work ethic: they are disciplined and they take pride in their work, and they contrast sharply with American workers, who are depicted as selfish, lazy, or both.

In short, in the mirror that the United States has held up to itself, only differences in the characteristics of individuals are revealed; the Japanese and West Germans are seen to do better because they are more frugal, more hardworking, and their children are more disciplined. Differences in institutional arrangements disappear from view completely. This kind of selective reflection has important political implications. The focus on individual qualities assures that blame will always be distributed according to Pogo's famous phrase, "We have met the enemy, and he is us." Failures of the U.S. economy thus appear to result from the personal failings of ordinary Americans, above all the failure to save.

Economics and Metaphor

Why is it that in public debate and discussion about declining U.S. competitiveness, comparisons of the United States to its trading partners have focused almost exclusively on differences in personal savings practices? The other institutional contrasts certainly raise all kinds of interesting questions about what the United States is doing wrong as a nation and how it could do better, but these issues are never explored. In my view, the explanation for this strange selectivity lies in the importance of metaphors in economic thinking.

While economists make great claims about the scientific nature of their discipline, economic discourse is dominated by metaphors.[17] From Adam Smith's "invisible hand" to recent discussions of economic "soft landings," economic activity is frequently understood in reference to something else. Even some of the most basic economic concepts, such as the ideas of inflation and deflation, rest on analogies to physical processes.

This is hardly surprising; metaphors are powerful and indispensable tools for understanding complex and abstract processes. Difficulties arise only when we forget that we are thinking metaphorically. A particular metaphor can be taken so much for granted in our intellectual framework that it structures our perception of reality in subtle and hidden ways. Such hidden metaphors can make our theories totally impervious to any kind of disconfirmation. No matter how much evidence a critic might amass, there is simply no way to persuade someone who has organized his or her thinking around one of these taken-for-granted metaphors.



There are three metaphors that loom particularly large in contemporary understandings of the economy in the United States. The first of these is so familiar that it is not worth discussing at length; it is the metaphor of government as spendthrift. The idea is simply that the public sector will invariably use its resources in ways that are inferior to their use by the private sector.[18] The other two metaphors are more hidden, but they have a profound impact on both the thinking of economists and the more popular economics of journalists and politicians.

Capital as Blood

In one metaphor, the economy is seen as a hospital patient and money for capital investment is likened to the blood that runs through the veins of the endangered individual. When the supply of money capital diminishes, the patient's heartbeat slows and the vital signs deteriorate. But when the patient's supply of blood is replenished by an intravenous transfusion, there is virtually an instantaneous improvement. Not only does the patient look better, but he or she is suddenly able to move about and do things that were previously unthinkable.

This metaphor establishes money capital as the indispensable element for economic health. Nothing else—not the cooperation of labor nor the ways in which economic institutions are structured—can compare in importance to the availability of money capital. Moreover, virtually any economic problem can ultimately be traced back to an insufficient supply of money capital.

For relatively underdeveloped economies, this metaphor holds an indisputable element of truth; such economies suffer a chronic shortage of resources available for productive investment. However, for economies like those of the United States and its major trading partners, the metaphor is deeply misleading. For one thing, the relationship between the dollar amount of new investment and economic outcomes such as the rate of economic growth is unclear. Frequent attempts have been made to prove that lagging rates of U.S. productivity growth were caused by insufficient rates of new investment, but these attempts have failed. Even the White House Conference on Productivity, convened by Ronald Reagan in 1983, was unable to provide unequivocal evidence of inadequate rates of investment in the United States.[19] The difficulty, of course, is that throwing money at any problem—whether it is lagging productivity or widespread drug abuse—never guarantees success. There are too many other variables that intervene to determine the effectiveness or ineffectiveness of particular expenditures. The picture has become even more clouded recently because computerization has created a pervasive process of capital savings in the economy; a million dollars of capital investment in 1990 bought capital goods that were far more powerful and effective than what the equivalent dollars would have bought five or ten years before. Capital savings is most obvious with computers themselves; the costs of computing power have been falling by 15 to 20 percent a year. But a parallel albeit slower change is occurring with a whole range of other capital goods. This pattern of capital savings means that each year less money capital is necessary to buy the same amount of new plant and equipment.[20]

There are also a number of other contenders for the most indispensable element for an advanced economy. First, it is increasingly obvious that even when there is enough money capital, it cannot be taken for granted that it will be used productively or effectively. When financiers and firms engage in "paper entrepreneurialism," they can spend vast sums of money in corporate raids and leveraged buy-outs that do nothing to enhance the society's productive capacity. Unlike the infusion of blood, there is nothing automatic about the effect of money capital on the economy. One could argue instead that institutional arrangements that effectively channel money capital to productive use are the most indispensable element for a modern economy.

Another crucial element is the flow of new ideas that results from research and development. Without the capacity to innovate effectively in both products and production processes, a modern economy will quickly fall behind competitors who are better at anticipating consumer needs or reducing the costs of production. Still another element to consider is the flow of educated employees who are capable of developing and implementing these innovations. This is not just a question of scientists and engineers, since there is mounting evidence that advanced production processes in both manufacturing and services require workers with significant intellectual skills to use computer-based technologies effectively.[21]

It is, of course, a silly exercise to argue over which is the most indispensable element for a modern economy; one would expect a number of different factors to be extremely important. The point, however, is that the capital-as-blood metaphor is simply wrong in its insistence that one element of economic life can be elevated in importance over all of the others.

Redemption through Sacrifice

The second metaphor, redemption through sacrifice, is Christian rather than medical, but it also rests on the comparison of the economy to an individual. In this case, however, the economy is an individual who has succumbed to temptation. Instead of following the path of righteousness, hard work, and self-discipline, the individual has become either lazy or preoccupied with the pursuit of sensual pleasures. If the individual remains on this path, the future will bring complete moral decay and probable impoverishment. The alternative is to seek redemption through sacrifice; this means not only rejecting all temptations, but even forgoing some of the innocent pleasures that the person previously enjoyed. Only a sustained period of asceticism will atone for past sin and allow the person to return to the path of righteousness.

Economic maladies such as inflation or deflation are seen as evidence that the economy has veered from the correct path, either as a result of insufficient effort or of excessive emphasis on consumption. The remedy is always a sustained period of austerity—of collective belt-tightening. Austerity simultaneously demonstrates that people have remembered the correct priorities and frees resources for new investment to make the economy more productive. If sustained for an adequate length of time, the pursuit of austerity is almost guaranteed to restore the strength of the economy, no matter how serious the original transgression.

The two metaphors clearly intersect in that austerity is seen as a means to guarantee that the flow of money capital will once again be swift enough to restore the health of the economic patient. Health in one framework is the same as righteousness in the other. Moreover, it is also important that both metaphors equate the economy to an individual. The classic justification of the free market was that the pursuit of greed by individuals was transformed by the invisible hand into a benevolent outcome. However, the disjunction in that argument between individual and collective morality is troubling for those who see the world in purely individualistic terms. They experience some discomfort with the idea that individual greed should produce a positive outcome. These metaphors eliminate that discomfort by restoring the notion that individual virtue is necessary for the collective good and that collective failures can be traced to individual weaknesses. With these metaphors as guides, the path to a more prosperous economy is seen as being reached by persuading individuals to act virtuously.

In actual economies, however, the relationship between individual orientations and collective outcomes is far more uncertain. Nice guys often finish last, while those who lack all virtue might well live happily and prosperously ever after. Virtuous farmers can work diligently to produce a bumper crop that results in a disastrous fall in prices for their products. Similarly, an abstemious nation can find itself in the midst of severe depression when consumption fails to keep pace with production.

For this reason, austerity is often an imperfect route to economic improvement. The Great Depression of the 1930s was a classic illustration; individuals were promised that a period of belt-tightening would inevitably generate a spontaneous recovery. But what actually happened was that the restricted purchasing power of consumers meant that there was insufficient demand to justify new investments and the economy remained stagnant until the government intervened to bolster demand. More recently, the theorists of supply-side economics promised that if people accepted a period of austerity as income was shifted to the rich, there would be a dramatic economic expansion that would raise everyone's standard of living. While the economy did expand in the Reagan years, the consequences were far more uneven than the supply-siders had promised. The rich prospered on an unprecedented scale, but the promised acceleration of productive investment did not occur, and large sectors of the population found themselves worse off than they had been before. Many of the defects of the expansion can be directly traced to the consequences of austerity, such as the cutbacks in nondefense federal spending and the weakness of consumer demand among households whose incomes are below the median.

Nevertheless, the belief in redemption through sacrifice taps deep cultural themes. Even beyond the obvious parallel with Christian notions of individual salvation, there is a close fit with the cultural anxieties of the middle class. Barbara Ehrenreich has written persuasively of the profound fear of affluence that haunts the American middle class.[22] Those who have achieved a comfortable existence through their own efforts as doctors, lawyers, or corporate managers cannot usually guarantee their children a comparable existence unless the children enter a middle-class occupation. While the truly wealthy can usually find sinecures for untalented children or even provide for shiftless children through trust funds, those options are not available to the middle class. The danger for the middle class is that children who grow up in economic comfort will lack the drive and discipline to surmount the hurdles that block entry to middle-class occupations for most children of the poor and working classes. Hence, a periodic invocation of the virtues of austerity fits well with the middle class's own efforts to persuade their children of the necessity of self-discipline and hard work.

These two powerful metaphors act as filters through which the United States' perceptions of its major economic competitors have been refracted. While there are many significant differences between the U.S. economy and those of Japan and West Germany, the preoccupation with differences in personal savings can now be understood. The idea that people in the United States do not save enough fits perfectly with both of these hidden metaphors.

The Savings Mythology

Is it really true that people in the United States are far less frugal than people in Japan and West Germany? Discovering the answer requires examining the way in which Commerce Department economists measure


106

personal savings. The problem is that personal savings is not an item that government statisticians measure directly; there is no question on the IRS form that asks "how much have you put aside this year for savings?" Some of the most important economic measures are derived by asking people; for example, the monthly unemployment figure is based on a survey in which thousands of people are questioned about their work experience in the previous month. However, there is no regular large-scale survey in which people are asked about their savings behavior. The government economists are forced to calculate personal savings indirectly; the frequently cited figures on personal savings are derived by subtracting all consumer purchases from the total disposable income that individuals have. In short, personal savings is simply what is left over from income after individuals have paid taxes and purchased all of their consumption items. Here are the formulas:

1. Personal income – Taxes = Disposable personal income

2. Disposable personal income – Personal consumption expenditures = Personal savings

3. Personal savings rate = Personal savings divided by Disposable personal income

This makes sense because individuals can only save income that they have not spent on other items. The accuracy of the personal savings figure, however, rests entirely on the accuracy of the estimates of personal income and personal consumption expenditures, and there are three problems here. First, since the personal savings figure is derived by subtracting one very large number from another very large number, it is extremely sensitive to small changes in those large numbers. For example, if the personal income figure for 1987 were 5 percent higher than the official data indicated, the personal savings figure would increase by 54 percent. Second, there are items for which the data are highly problematic. In calculating personal income, for example, the government economists make use of a fairly solid source—reports by firms of how much they have paid their employees. But this has to be supplemented with data on the income of self-employed individuals, which is based on their own self-reports to the Internal Revenue Service. Such reports are obviously problematic because individuals have an interest in understating their income to save on taxes.[23]
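To make the residual arithmetic concrete, here is a minimal sketch in Python of the three formulas above and of the sensitivity problem just described. The dollar figures are round, invented numbers chosen only to illustrate the mechanism; they are not official 1987 estimates, and the exact size of the swing depends on the actual levels involved.

```python
def personal_savings(personal_income, taxes, consumption):
    """Residual method sketched in formulas 1-3 above (amounts in billions)."""
    disposable = personal_income - taxes      # formula 1
    savings = disposable - consumption        # formula 2
    return savings, savings / disposable      # savings level and formula 3

# Hypothetical, illustrative levels (billions of dollars).
income, taxes, consumption = 3800.0, 600.0, 3050.0

base_savings, base_rate = personal_savings(income, taxes, consumption)

# Revise measured personal income upward by 5 percent and recompute.
# Because savings is the small difference between two very large numbers,
# a small revision in income produces a proportionally huge swing in savings.
rev_savings, rev_rate = personal_savings(income * 1.05, taxes, consumption)

print(f"baseline savings: ${base_savings:.0f} billion (rate {base_rate:.1%})")
print(f"after a 5% income revision: ${rev_savings:.0f} billion (rate {rev_rate:.1%})")
print(f"change in measured savings: {rev_savings / base_savings - 1:+.0%}")
```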

The third problem is that these estimates of personal income and personal consumption expenditures are made within an elaborate accounting framework that was structured to provide a coherent picture of the economy as a whole. This accounting framework involves a series of detailed decisions about how certain kinds of income flows or expenditures


107

will be handled, and quite often, these decisions are not made to improve the accuracy of the personal savings figure but for the sake of consistency or to improve some other part of the accounts. However, these detailed accounting conventions can have a very significant impact on the estimates of personal income and personal consumption expenditures and indirectly on personal savings.

One of these accounting conventions concerns the treatment of public pension funds. There are public pension funds that work in exactly the same way as private pension funds. Both employers and employees put money aside in a trust fund whose earnings are used to pay pension benefits. However, in the national income accounts, it is assumed that all public pension funds pay benefits directly out of state revenues. One recent study showed that when funded public pension funds are treated in the same way as private pension funds, the personal savings figure for 1985 increased by 37.3 percent.[24]

Another convention that is important concerns the treatment of owner-occupied housing. In figuring out personal consumption expenditures, government statisticians use a strange procedure. They treat people who own their own housing as though they are renters paying rent to themselves. Hence, one of the largest items in personal consumption expenditure is the estimate of the total amount of rent that owner-occupiers pay. While this procedure makes sense for other parts of the accounts, it wreaks havoc on the personal savings figure since the estimate of owner-occupied rent might be quite different from the actual current expenditures that home owners incur. In fact, one consequence of this convention is that the personal savings figure largely excludes one of the main forms of household savings in the United States—the accumulation of equity in homes.
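As a hedged illustration of how such conventions move the measured rate, the sketch below recomputes a hypothetical savings rate under two alternative treatments roughly corresponding to the pension and housing conventions just described. All figures are invented for illustration, and the housing adjustment ignores, for simplicity, the matching imputation that the accounts make on the income side.

```python
def savings_rate(disposable_income, consumption):
    """Savings rate as a residual: (income - consumption) / income."""
    return (disposable_income - consumption) / disposable_income

# Hypothetical levels, in billions of dollars.
disposable, consumption = 3200.0, 3050.0
official = savings_rate(disposable, consumption)

# Alternative 1: treat the accumulation in funded public pension plans as
# household saving, as the study cited above does for 1985. Here it is
# modeled crudely as extra disposable income that is not consumed.
funded_public_pension_accumulation = 40.0    # invented figure
pension_adjusted = savings_rate(disposable + funded_public_pension_accumulation,
                                consumption)

# Alternative 2: strip the imputed rent charged to owner-occupiers out of
# consumption and substitute their actual cash housing outlays. If imputed
# rent exceeds cash outlays, measured consumption falls and savings rises.
imputed_owner_rent = 330.0                   # invented figure
actual_housing_outlays = 260.0               # invented figure
housing_adjusted = savings_rate(disposable,
                                consumption - imputed_owner_rent + actual_housing_outlays)

for label, rate in [("official conventions", official),
                    ("public pensions reclassified", pension_adjusted),
                    ("housing convention changed", housing_adjusted)]:
    print(f"{label}: {rate:.1%}")
```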

These detailed conventions are particularly important in international comparisons of savings rates. While the basic accounting framework used in Japan and West Germany is quite similar to the American system, there are numerous differences in the detailed conventions and the way that specific estimates are constructed. For example, one recent study of the Japanese savings rate noted differences in the ways capital transfers and depreciation are treated in the two countries. When adjustments are made for these differences for 1984, the Japanese savings rate declines from 16.2 percent to 13.7 percent.[25]

Another important part of the discrepancy between Japanese and U.S. savings rates is related not to accounting conventions, but to geography. The high population density in Japan makes land extremely valuable in that country; in 1987, land constituted two-thirds of all Japanese wealth, but only 25 percent of U.S. wealth.[26] This means that the acquisition of land is a much larger component of total personal savings in


108

Japan than in the United States. However, the money that is being put aside for acquiring land for owner-occupied homes is not money that is available for investment by the business sector.[27] Hence, a significant part of the discrepancy between U.S. and Japanese savings rates is irrelevant to the question of international competitiveness.

In short, it is necessary to be extremely skeptical of cross-national comparisons of savings rates because the accounting conventions and the economic institutions differ. Moreover, the differences in institutions can magnify the importance of relatively minor differences in accounting conventions. Instead of pursuing these international comparisons of savings rates further, it is more useful to look at another data source that provides information on personal savings in the United States. The statistical offices of the Federal Reserve Board have developed a number of measures of savings as part of their effort to construct a comprehensive accounting of financial flows in the economy. This data source includes estimates of the annual changes in holdings of financial assets and liabilities (debts) of households. These estimates are based in part on very solid data, such as official reports by pension funds and insurance companies of their holdings, and in part on less solid data that depend on indirect calculations of household holdings. (See figure 5.1.)

According to the Federal Reserve data, personal savings was quite strong in the United States in the 1980s, and the savings rate actually increased. Since the mid-1970s, the two different government series on personal savings have moved in opposite directions. While the Commerce Department's figures have slid down, the Federal Reserve figures have gone up. Commerce Department analysts have argued that their data are more accurate because the Federal Reserve figures have been thrown off by unrecorded flows of foreign capital into the United States. However, the Federal Reserve figures are actually more reliable because they are based on an analysis of actual financial flows rather than the indirect methodology of the Commerce Department.

The Federal Reserve data include the annual increase in the assets of pension funds and life insurance reserves. This is a figure that is reported directly and involves a minimum of guesswork. It also represents a form of personal savings that is extremely important because it is directly available for productive investment in other parts of the economy. In 1988, the increase in pension fund and insurance reserves (exclusive of capital gains) was $224.4 billion. This is an enormous sum; it was more than 50 percent higher than the Commerce Department estimate of all personal savings—$144.7 billion. It was also enough to finance by itself 94.8 percent of all net private domestic investment—in capital goods, plants, and housing—in that year. Of course, increases in pension and insurance re-


109

Figure 5.1. Measures of Personal Savings.
SOURCES: Economic Report of the President (Washington, D.C.: U.S. Government Printing Office, 1990), table C-29, 327. The Alternative Personal Savings is calculated from the table "Savings by Individuals." Net increases in debt, exclusive of mortgage debt, are subtracted from increases in financial assets. Some additional adjustments are made for 1986–90 to compensate for the substitution of home equity loans for other forms of consumer credit. For a fuller description of data and methods, see Fred Block, "Bad Data Drive Out Good: The Decline of Personal Savings Reexamined," Journal of Post Keynesian Economics 13 (1) (Fall 1990): 3–19.

serves do not exhaust the supply of personal savings; there are also substantial accumulations of assets in bank accounts, stocks, and bonds.
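Since several magnitudes are cited here in quick succession, a short back-of-the-envelope check may be useful. The sketch below uses only the 1988 figures quoted in the text, plus the "alternative personal savings" definition given in the source note to figure 5.1; the function at the end is a simplified reading of that note, not the Federal Reserve's actual methodology.

```python
# Figures quoted in the text for 1988, in billions of dollars.
pension_insurance_increase = 224.4   # increase in pension fund and insurance reserves
commerce_personal_savings = 144.7    # Commerce Department estimate of personal savings
share_of_net_investment = 0.948      # share of net private domestic investment financed

print(f"reserves vs. Commerce savings: "
      f"{pension_insurance_increase / commerce_personal_savings - 1:.0%} higher")

implied_net_investment = pension_insurance_increase / share_of_net_investment
print(f"implied net private domestic investment: about ${implied_net_investment:.0f} billion")

def alternative_personal_savings(d_financial_assets, d_total_debt, d_mortgage_debt):
    """Simplified reading of the figure 5.1 source note: net increases in
    non-mortgage debt are subtracted from increases in financial assets."""
    return d_financial_assets - (d_total_debt - d_mortgage_debt)
```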

The Federal Reserve data also make intuitive sense. It is well known that rich people are responsible for the bulk of household savings because they have far more discretionary income than everybody else. It is also known that the Reagan administration's policies significantly increased the percentage of income going to the richest families. For the Commerce Department figures to be true, the rich would have had to consume their increased income on a scale even more lavish than Leona Helmsley's home remodeling and the late Malcolm Forbes's famous Moroccan birthday party.[28]

Furthermore, personal savings as measured with the Federal Reserve data exceeded net private investment in the economy in every year of the 1980s, sometimes by more than $100 billion. When one adds undistributed corporate profits that are also available to finance invest-


110

ment, the surfeit is even greater. Michael Milken—the convicted junk bond king—was fond of saying, "The common perception is that capital is scarce . . . but in fact capital is abundant; it is vision that is scarce."[29] An examination of the data on personal savings indicates that Milken is correct; the United States does not suffer from a chronic inability to save.

Finally, it is important to emphasize that all of this preoccupation with personal frugality ignores the single most important way in which individuals contribute to economic prosperity—through what can be called "productive consumption."[30] When individuals or the society spends to educate young people or to retrain or deepen the skills of adults, that is productive consumption because it enhances the capacity of people to produce efficiently. Similarly, spending to rehabilitate drug addicts or to improve the physical and mental health of the population is also productive consumption. It is now widely recognized that the development of the capacities of the labor force is an extremely important determinant of a society's wealth.

Yet all of the standard calculations of savings ignore spending on productive consumption. The results are bizarre; a family that deprives its child of a college education in order to put more money in the stock market is seen as contributing to national savings, while the family that does the opposite could appear recklessly spendthrift. This backwards logic makes it harder to identify types of spending and social policies—such as the policies of social inclusion in Japan and West Germany—that could have an important impact on U.S. competitiveness in manufacturing.

Conclusion

The metaphors of "capital as blood" and "redemption through sacrifice" have dominated economic thinking in the United States. The international trade successes of Japan and West Germany have been refracted through these metaphors with the result that people in the United States have learned nothing from the comparisons. On the contrary, the comparisons combined with problematic data have served only to reinforce traditional—but now largely irrelevant—concerns with the quantity of available capital for investment. In the process, institutional issues have been totally forgotten, so that few serious reform proposals have emerged.

And yet, if we return to the neglected institutional dimensions on which Japan and West Germany are similar to each other and different from the United States—the marginality of military production, cooperative work arrangements, supportive financial institutions, and social inclusion—we already have the main elements of a serious program


111

of national economic renewal. Moreover, the passing of the Cold War creates a unique historical opportunity for such a program, since for the first time in forty years a significant reduction in defense spending and a shift of resources to civilian purposes are imaginable.

But this is not the place to flesh out such a program of reform.[31] The point is rather that comparisons with other countries can be the source of real insight into the weaknesses of our own institutions, provided that people are not blinded by obsolete and irrelevant metaphors. As I write this, it is far too early to tell whether the United States will have its own experience of perestroika—the restructuring of economic institutions—in the 1990s. However, several points seem clear. Without an American perestroika, the U.S. economy will continue to weaken and our domestic social problems will only deepen. Furthermore, the most important precondition for a period of domestic reform is what Gorbachev has termed "new thinking"—a willingness to discard outdated metaphors and ideological preconceptions and to examine the world as it actually is.


112

Six—
Uncertain Seas:
Cultural Turmoil and the Domestic Economy

Katherine S. Newman

One kid . . . I don't even know if we can afford having one child. . . . There were two in my family, three in Jane's, and that would be the range [we'd like]. I wouldn't want to have any more than that, but certainly let's say two. But two is going to be a tremendous, tremendous financial burden and drain. Not that you want to think of it in those terms, but right now I can't afford one child, no less two children. Especially when you think about the expenses, maternity expenses and child rearing expenses and all those expenses . . . and then you combine that with the loss of the second income, right? Because you can't have Jane working. Well, you're gonna lose, I figure, a year or two [of her income]. And it's just a double whammy that cannot be overcome.


Dan and Jane Edelman live in a small two-bedroom townhouse on an estate crammed with identical dwellings in northern New Jersey. The houses are stacked cheek by jowl, there is no yard to speak of, and the commute from home to work consumes two hours of their time every day. But the Edelmans count themselves lucky to own a home at all, since many of their friends have found themselves priced out of the market by skyrocketing real estate costs. Thus far they have been able to hold on to their corner of the American dream, but the issue of children looms large in their lives, and as Dan explained it, they cannot easily see past the "double whammy."

Their dilemma is symptomatic of a widespread disease generated by long-term structural changes in the domestic economy. After decades of postwar prosperity and seemingly unlimited opportunity, the American job machine seems to be running down. Wages have stagnated, income inequality is growing, unemployment—though down from its catastrophic levels in the early 1980s—remains troublesome, especially in the Rust Belt cities, and the cost of living continues to rise. Where the Edelmans' parents were able to raise a family on the strength of a single

This research was supported by the Social/Cultural Anthropology Program of the National Science Foundation (grant number BNS 89-11266).


113

income and the assistance of a GI Bill mortgage, Dan and Jane are having to make tough, unpleasant choices between a standard of living they consider barely acceptable and the pleasures of family life.

The end of the postwar boom has spelled a slowdown, and in many cases a reversal, in the life chances of young families for career advancement, economic stability, and secure membership in the middle class. For many, downward mobility has become a reality: they will never see the occupational trajectory or lifestyle that their parents took for granted. Baby-boomers will not be able to raise their own children in the fashion they themselves grew up with. Remaining in the middle class means that husbands and wives both have to work, coping as best they can with the task of raising children (and the scramble to find day care).

How did this situation come to pass? What happened to the American economy such that college graduates like Dan and Jane Edelman must struggle to provide a middle-class standard of living for their children-to-be? Our domestic economy has undergone profound changes since the end of World War II, changes that have seen American manufacturing industries yield to foreign competition and then disappear at an alarming rate, and our labor force shift into service jobs that do not pay as well as the unionized blue-collar jobs of the past. Variously termed the "deindustrialization of America"[1] or the emergence of a postindustrial economy,[2] this economic transformation has brought with it profound rearrangements in the way Americans earn their keep, in the way wealth is distributed within the country, and in the prospects for racial, gender, and generational groups to claim a "fair share" of the economic pie.

Evidence of long-term structural change in the economy of the United States abounds, and one purpose of this chapter will be to examine it briefly. Inspecting the facts of industrial decline or income inequality is, however, only a starting point for a sociological analysis that must dig beneath the language of labor economics to the lived reality these changes impose upon American families. The domestic economy organizes how and where we spend our time, whether we can afford to marry and raise families, the consequences of divorce for an adult's life-style and a child's well-being, the quality of an individual's life in retirement, and one's access to child care or health care. Virtually all aspects of our everyday lives and our long-term dreams are shaped by the economic constraints that have emerged during the latter half of the century.

Beyond the practical concerns of organizing work and family life, the undercurrents of economic transformation also reach deep into our cultural universe. Expectations for individual prosperity and upward mobility are deeply engrained in the generations descended from the survivors of the Great Depression. The post–World War II economic boom fueled this tremendous optimism, creating a baby-boom generation


114

steeped in the belief that home ownership was a birthright and a good white-collar job as normal as a "chicken in every pot."

Economic stagnation and rising inequality brought on by deindustrialization have produced frustration and confusion as people discover that the "normal" future they envisioned, and feel entitled to by virtue of being American, may not materialize. Rates of home ownership among young people (twenty-five to thirty-four) have dropped dramatically, with little prospect for reversal. Men and women raised in suburban comfort now find that they cannot provide the same kind of security for their children. In the 1950s and early 1960s, when most of the baby-boomers were born and raised, Ozzie could expect to support Harriet and the kids in a middle-class fashion solely on the strength of his income. Today that life-style has become ever more difficult to sustain, even though the vast majority of Harriets work full time.[3]

If life is proving less affluent than expected among the white middle class, the picture has become far more grim for America's poor, rural and urban. In the past twenty years, family farms collapsed at the highest rate since the Great Depression. Rural poverty, a phenomenon Americans associate with the dark days of the 1930s, has re-emerged as a major social problem in the midwestern states. Some 17 percent of rural dwellers—nearly ten million people—live in poverty, a figure comparable to the poverty rates in inner cities.[4] Inner cities are plagued by abandoned buildings, larger numbers of school dropouts than ever before, the spectre of homelessness amidst the splendor of gentrification, and rising crime as the underground economy (primarily the crack cocaine trade) engulfs neighborhoods in which prospects for legitimate employment have dried up. The problems of the poor spill out of ghetto enclaves and onto middle-class byways in the form of homeless beggars.

How are these social facts connected to the macroeconomic phenomenon of deindustrialization? Even more important, how has the social experience of economic stagnation and increasing inequality shaped a new view, however confused and ambiguous, of the American experience? The meaning of "being American" has been inextricably embedded in expectations for upward mobility and domination of international trade. The 1970s and 1980s have reshaped this self-perception in ways that we have yet to fully articulate. The change is evident in our fears for the country's economic future, our frustrations over the impact of change on our standard of living, a resurgent conservatism over the responsibilities of the fortunate toward the fate of the poor, a heightened sense of competition between and within generations for the resources needed to raise a family or retire in comfort, and increasing worries over the long-term impact of inner-city decay and minority poverty.

The dimensions of change are best understood by looking first at the macroeconomic facts of industrial decline. Thereafter, I explore the im-


115

pact of this transformation in the realm that matters most to American families: income and employment. Finally, I will consider how deindustrialization has influenced the expectations and experiences of the different generations of Americans who must find their way through the new economy. My quest is to consider the cultural meaning of the country's economic decline.

The Parameters of Deindustrialization

The unprecedented wave of industrial plant shutdowns in the 1970s and 1980s attracted the attention of a wide variety of labor economists and industrial sociologists. Conservatives among them argued that the downturn was simply another "swing" in the business cycle, a term used to describe the episodic ups and downs considered natural, normal features of capitalist systems. If anything beyond the business cycle was to blame for America's economic doldrums, conservatives suggested, it was unproductive and "overpriced" labor. Union demands were understood to be the root cause of the flight of manufacturing overseas, where wages are lower.

Liberal economists took issue with this view and began to look for new paradigms to describe the postwar development of the U.S. economy. Two well-known scholars on the political left, Barry Bluestone and Bennett Harrison, argued that a fundamental change in the country's economic structure was underway. Their much-debated book, The Deindustrialization of America , united the demise of the country's manufacturing sector with the movement of industry overseas and the spectacular increase in corporate mergers, and in so doing articulated a new and darker vision of the country's economic predicament:

Underlying the high rates of unemployment, the sluggish growth in the domestic economy, and the failure to successfully compete in the international market is the deindustrialization of America. By deindustrialization is meant a widespread, systematic disinvestment in the nation's basic productive capacity. . . . Capital . . . has been diverted from productive investment in our basic national industries into unproductive speculation, mergers and acquisitions, and foreign investment. Left behind are shuttered factories, displaced workers, and a newly emerging group of ghost towns.[5]

Bluestone and Harrison accused American corporations of dismantling even profitable plants to provide revenue for diversified investment, and of relocating manufacturing facilities to low-wage, nonunionized communities, often at the taxpayers' expense, since these shutdowns could be written off against corporate tax bills.

Deindustrialization has been most pronounced in the Rust Belt zones


116

of the Northeast and Midwest, yet Bluestone and Harrison showed that nearly half the jobs lost to plant shutdowns during the 1970s were located in the Sun Belt states of the South and West. Hence the trend cannot be dismissed as a regional problem; it is a nationwide migraine headache. Overall, the 1970s saw the loss of nearly thirty-eight million jobs to runaway shops, plant shutdowns, and cutbacks.[6]

The vulnerability of labor in the face of rising unemployment quickly led to declining average wages, even for those who were still on the job. Downward pressure on wages was exerted through "freezes and cuts in wages, the introduction of two-tiered wage systems, the proliferation of part-time and 'home' work, and the shifting of work previously performed by regular (often unionized) employees to independent, typically nonunion subcontractors."[7] Estimates of overall wage losses in the durable goods sector—which includes automobiles, steel, machinery, and electrical equipment—amount to nearly 18 percent between 1973 and 1986. This translates into a loss of more than $16 million per hour of work, deducted from American paychecks.[8]

Communities suffer collective punishment when faced with local economic contraction. Towns plagued by plant shutdowns usually see sharp declines in the health of industries that supplied parts or raw materials to the now-vacant factory. Taverns and grocery stores feel the pinch not long thereafter, as workers laid off from major employers cut back on their spending. Unemployment benefits cushion the impact for a time, but eventually long-term joblessness translates into mortgage defaults, higher welfare expenditures, and outmigration. When workers are no longer on the payroll, their home towns must weather the loss of income and sales tax revenues. This in turn forces unwelcome cuts in the quality and quantity of public services (schools, hospitals, roads, etc.), which makes an economically depressed area even less attractive for new investment. As if this weren't enough, the gloom and doom of deindustrialization generates rising demand for social and medical services that can address stress disorders: psychological problems, alcoholism, and high blood pressure, among them.[9]

Enterprising officials, hoping to find ways to reverse the downhill slide, search high and low for new industries to fill the gap in the local economy. With their backs against the wall, communities compete against each other to attract new corporations by providing tax breaks or promises to construct new sewer lines in the hopes of beating out others offering less. Apart from the fiscal burden this places on local residents, the very vulnerability of deindustrializing communities provides them little leverage in bargaining with new companies. They dare not ask much in return for the tax breaks lest they risk the loss of a new business to another town that has proven to be less demanding. Hence, despite the


117

public investment involved they cannot insist that a company return the favor and stay put (or even necessarily exact the promise of a warning if a plant shutdown happens again).

The spectre of industrial decline does not tell the whole story of deindustrialization. There is a growth side to the saga as well, represented by an employment boom in the service sector. Conservatives often point to the remarkable record the United States enjoys in job creation when compared to its relatively stagnant European counterparts. What is often missed in this laudatory portrait is the low-wage character of the American "job machine." Services ranging from fast food to banking, from child care to nursing home attendants, have burgeoned. In most of these growth areas, however, the wage structure has been unfavorable. A small number of professional jobs that pay well has been swamped by minimum wage positions. About 85 percent of the new jobs created in the 1980s were in the lowest paying industries—retail trade and personal, business, and health services.[10] More than half of the eight million (net) new jobs created in the United States between 1979 and 1984 paid less than $7,000 per year (in 1984 dollars). While many of these were part-time jobs (another growth area of dubious value), more than 20 percent of the year-round, full-time jobs created during this period paid no more than $7,000.[11] The economic expansion of the 1980s, much heralded by Presidents Reagan and Bush, failed to improve the standard of living of many Americans because the jobs it generated were disproportionately to be found at the low-wage end of the spectrum.[12]

Hence workers displaced by deindustrialization, new entrants to the labor market (young people and women), and the increasing number of elderly returning to the employment scene to supplement retirement, find their options limited. Moreover, the improvement experienced by black, Hispanic, and Asian workers in the 1960s and early 1970s was all but wiped out in the 1980s as they flooded into low-wage jobs. Younger workers were also disadvantaged: one-fifth of the net new year-round, full-time jobs held by workers under thirty-five years old paid under $11,000.[13] Workers unlucky enough to find themselves in the industrial heartland faced the most hostile climate of all since the region exceeds all other areas of the country in the "ability" to generate bad jobs: 96 percent of the new employment in the Rust Belt Midwest is in the low-income category.

These "replacement" jobs are even more problematic because they generally fail to offer the benefits routinely attached to "good" jobs. As of 1987, roughly 17 percent of American employees had no health insurance and 40 percent were not covered by a pension plan.[14] This is partially attributable to the low levels of unionization in the growing service sector industries: workers who are not organized have no collective


118

bargaining power and hence suffer from relatively low wages and poor benefits.

Employers' increasing reliance on temporary workers hardly helps matters. These "marginal" workers—for example, "Kelly Girls" and "Accountemps"—are often employed full-time, but lack yearly contracts and can be let go with virtually no notice. Temporary jobs are notorious for denying workers insurance and pension coverage as well as prospects for advancement. When compared to the growth rates of permanent employment, temporary work has skyrocketed, growing nine times faster than total employment since 1979. By 1987, Kelly Girls and organizations like it could claim nearly 1.2 million workers.[15]

Moonlighting is also on the increase, with large numbers of men and women working two jobs, either in order to make ends meet or to squirrel away some savings. The practice was not unknown in the past for men, particularly those attempting to support families without the assistance of working wives. Now, however, with divorce increasing (and the prospects for supporting a family on a single income growing ever more problematic), women are moonlighting in record numbers. In 1970 only 636,000 women held down two jobs; by 1989 the number had jumped to 3.1 million.[16] If low-wage job growth persists and divorce remains a fixture of the social landscape, we can look forward to more of the same.

The imagery of deindustrialization—ghost towns and empty parking lots—can easily lead one to imagine that the old single-industry cities have gone the way of the dinosaurs. Although it is true that many a company town has disappeared and that urban economies appear to be more diverse in their industrial base than they were in the days of the robber barons, narrowly based local economies are not entirely a thing of the past. The growth of white-collar industries has introduced a new form of dependence into the domestic economy. Stripped of their manufacturing giants, cities like New York, Boston, Los Angeles, Houston, and Chicago have increasingly come to rely on white-collar businesses—particularly in financial services and information technology—as the engine of their economic development.

The consequences of such a dependence are twofold. On the one hand, we see an increasing divide among city dwellers between those who have high wages, fancy apartments, and affluent life-styles, and those who were turned out of the old manufacturing industries that once dominated city life.[17] Fur-clad brokers are confronted by homeless men, women, and children in the subways and on the streets. Poor people's housing (for example, single-room occupancy hotels, flophouses, and the like) has evaporated in the face of demand for luxury buildings, and the results of this wholesale eviction of the dispossessed are visible to everyone.[18] In the cities and the suburbs, Americans are relentlessly exposed to the growing gap between the haves and the have-nots.


119

But those in the fur coats are not so secure either. In February of 1990, the pages of the Wall Street Journal—the self-proclaimed "daily diary of the American dream"—were filled with stunned accounts of the bankruptcy of Drexel Burnham Lambert, one of the country's premier brokerage firms. After a decade of astronomical profits, Drexel filed for Chapter 11 and stranded 5,000 fast-track traders. Two weeks later, Shearson/Lehman announced a 4 percent reduction in its workforce—another 2,000 well-paid workers were let go, with more to follow. Nineteen ninety was not a particularly opportune year to be an unemployed stockbroker, for Wall Street was still reeling from the impact of the massive downturn of October 1987; thirty thousand employees received pink slips in the aftermath of Black Monday, when the worst stock market crash since 1929 sent billions of dollars in investment capital up in smoke. Wall Street salaries have plummeted as overqualified movers and shakers flood the market. The volatility of financial services, an industry sensitive to fluctuating interest rates and the whims of foreign and domestic investors, combined with the feverish takeover activity of the past decade, has made life a bit precarious at the top. Once filled with unstoppable optimism and a degree of arrogance over their successes, the denizens of these high-level firms have joined the ranks of fellow white-collar workers who have learned to watch their backs and duck—if possible—when the pink slips cascade out of the boardroom.[19]

The consequences of this volatility for a city's employment and tax base are considerable. Ray Brady, the CBS News reporter for economic affairs, reported the downsizing at Shearson/Lehman with an ominous tone in his voice.[20] Brady pointed out that every job on Wall Street generated two "support" positions elsewhere in the Big Apple. The corollary seems obvious: the loss of those big salaries translates into higher unemployment for the "little guys." Indeed, Brady noted, the impact of the 1987 Black Monday crash has already translated into a local downturn of no small proportions: in the two years after the Wall Street disaster, retail sales in New York were down 6 percent, restaurant business fell by 10 percent, and the real estate market dropped by about 9 percent, with sales sluggish and prices falling. A variety of factors may have influenced these "secondary" losses, but it is fairly clear that cities like the Big Apple have developed an unhealthy dependency on financial and information service industries. In the postindustrial city, when the brokerage business contracts pneumonia, the rest of the town may be in for a bad bout of the flu, at the very least.

City Hall in the postindustrial urban center is no less vulnerable to the fluctuating health of white-collar industries than the political leadership of the older Rust Belt centers was to the fortunes of heavy manufacturing. When service industries fire their workers, or transfer their operations out of expensive city centers to remote "back room" facilities in faraway suburbs,


120

or threaten to leave altogether unless they are given tax breaks for staying, tax coffers begin to empty. Caught between the twin pressures of declining revenues and rising demands for services in the wake of human displacement (homelessness, unemployment, ghetto deterioration), the Gotham cities of the United States are in trouble. Politicians hint at the inevitable need for new taxes to balance the books and refurbish urban infrastructures, only to find strong resistance from industries already straining to compete with overseas counterparts and urban families trying to keep their heads above water.

Who Owns the American Dream?

As the concentration of the work force shifts from manufacturing cars to flipping hamburgers or processing insurance claims, communities are thrown into upheaval. The industries that once provided continuity for generation after generation of blue-collar workers disappear, leaving behind empty parking lots and empty souls. People who have spent their entire working lives in one factory find they must accept premature, and comparatively meagre, retirement, bereft of all the entitlements they expected: health insurance, pension funds, and the peace of mind that comes with knowing that their efforts were part of a larger enterprise that will go on after them.[21]

Young men, particularly minority men, experience rising unemployment as the industries that traditionally provided jobs for unskilled newcomers to the labor market (urban manufacturing) dry up.[22] Meanwhile, job growth in service industries is most pronounced in suburban areas, far from the inner-city ghettos most in need of entry-level employment. The "mismatch" between those in need of jobs and employers in need of employees has become a major logistical and social problem.[23] At all levels of the social structure, economic upheaval leads to social disorganization.

The chaos of deindustrialization brings with it a particularly unfortunate departure from post–World War II trends toward greater equality in the distribution of resources. During the twenty-five years that followed the war, average income in the United States grew at a healthy pace. But even more important (at least from the standpoint of fairness), the distribution of these gains benefited Americans who fell into middle and lower income groups. The country still had its rich and its poor, to be sure, but the gap between them closed to a greater degree than had been the case before 1945. But beginning in 1973, economic growth came sputtering to a halt.[24] Family incomes stopped growing, even though a record number of families had multiple earners. Workers lucky enough to be in high-wage industries fared comparatively well during the post-1973 period, but those in low-wage sectors took the brunt of the slow-


121

down. The real income of the bottom 40 percent of the population fell by about 11 percent between 1979 and 1986. At the same time, income growth for the richest segments of the country grew at rates far exceeding the average. The top 1 percent gained by 20 percent.[25] It will surprise no one to learn that these differential growth rates led to a stunning 18 percent jump in the inequality of income distribution. Virtually all the progress made toward equality in America during the 1950s and 1960s was wiped out by the rising income inequality of the fifteen years that followed.[26]

Some scholars argue that the erosion of equality threatens to put the American middle class on the endangered species list.[27] For as the fortunate few ascend from the middle income level to the upper middle class, and the unfortunate many experience downward mobility and land in lower income groups, it is the middle that seems to be disappearing. Definitions of the middle class are notoriously slippery since they sometimes refer to income, while at other times revolve around occupational prestige. But if we examine the income measure for a start, there is evidence to suggest that the percentage of American families who earn what might be termed a middle income ($20,000 to $50,000 per year) is declining. Katherine Bradbury, a senior economist at the Federal Reserve Bank of Boston, calculated that the size of the middle class shrank by about 5 percent between 1973 and 1984, with the lion's share of these ex-middles dropping down the income charts and less than 1 percent moving up.[28] These kinds of findings have caused Harrison and Bluestone to dub our time the epoch of the "Great U-Turn," since the evidence points to a historic watershed, a reversal of the trends we had come to see as quintessentially true of the American economic experience.

Downward mobility in terms of income is bad enough, but when one considers the difficulty of using what remains to secure a middle-class standard of living, the real social significance of postindustrial wage structures becomes even clearer. Frank Levy, professor of economics at the University of Maryland and author of the influential volume Dollars and Dreams, has shown that up until the 1970s being in the middle income range virtually guaranteed home ownership and most of the other perquisites of the American Dream. After 1973 even remaining in the middle (much less dropping down into the low end of the income spectrum) no longer did the trick. Housing prices rose faster in the 1970s than the prices of other goods, owing in part to the unprecedented demand created by the baby-boom generation's desires for real estate. This, coupled with wage stagnation, placed home ownership out of bounds for a growing number of American families—even though more and more of those families were dual-income households. Owning a house is an indispensable benchmark of middle-class status.[29] Men and women who discover


122

that this goal is out of their reach have effectively been written out of the American Dream.

When we look at aggregate statistics on income or housing, we often miss what is sociologically most significant about the changes that postindustrialism creates. The impact of declining average wages on life-style, for example, was experienced most profoundly by younger families. When the slowdown in income growth started in 1973, families that were already secure in their homes, with fixed-rate mortgages, savings accounts, and the like, were "over the hump" and had relatively little to fear. They saw the value of their assets skyrocket and were able to trade up the real estate market, exchanging a two-bedroom starter house for a larger, more elegant one, using the exploding value of their original home to finance the move. But young families, particularly those in the baby-boom generation, were caught on the other side of the divide. They came of age in a sick economy and, owing in part to the pressure of their sheer numbers, never fully recovered.

Ever since the country climbed out of the Great Depression, each succeeding generation has expected to do better than its parents. The gospel of upward mobility received tremendous reinforcement in the two decades after World War II because economic expansion, coupled with generous government intervention in the form of the GI Bill and other middle-class entitlements, did make it possible for adults of the 1950s to fulfill their material ambitions. But after 1973, this great "American assumption" ran into the wall of economic stagnation and high inflation. The generation gap is no longer simply a matter of musical tastes or the length of one's hair: it now describes a material chasm.[30] Baby-boomers who grew up in suburbia, with Mom at home and Dad at the office, are finding the gates to the suburbs locked and the pressure to keep Mom and Dad in the workplace unrelenting.

The Cultural Costs of Downward Mobility

American culture has always celebrated forward motion, progress, upward mobility. We are true optimists, always assuming that the world—or at least our corner of it—will continue to provide more for us than it did for our parents, and more for our children than we have today. This central expectation dies hard. When reality fails to provide what we think we are owed, we seldom readjust our expectations. Instead, we stew in frustration or search for a target for our anger, pointing fingers at more fortunate generations, incompetent presidents, disloyal corporations. When this fails to satisfy, Americans are often inclined to look within, to personalize wide-scale economic disasters in the form of individual moral failings.


123

Downward mobility, both within and between generations, is an experience particularly ripe for this kind of morality play. Managers who lost their jobs in the last decade's merger mania often find that they cannot hold on to a systemic, structural vision of their loss. Even when they know, at some level, that forces larger than any individual have left them pounding the pavement in search of new jobs that will pay less, be less secure, and symbolize their descent down the class ladder, they cannot hold on to the notion that they are not to blame. Instead, managerial culture in its American form leads them to internalize their occupational troubles and pushes them to comb through their personalities for the hidden flaws that justify their fate.

The culture of meritocracy they embrace teaches that a person's occupational standing is an accurate barometer of his or her intrinsic moral worth. When that barometer fails, it can only mean that the person is a less than fully respectable human being. Meritocratic individualism is so potent a theme in American culture that it can thoroughly undermine decades of evidence to the contrary. John Kowalski, a denizen of the Forty Plus Club, an organization for unemployed executives in Manhattan, devoted thirty years of his working life to a trade association representing the chemical industries. He rose steadily up the ladder of responsibility, graduating over time from assistant secretary to vice-president. John was proud of his work, and had every reason to think he had done a good job, when the board of directors suddenly announced he was to be passed over for the vacant presidency. They let it be known that John was no longer really welcome in his job and that "for his own sake" he ought to be looking elsewhere.

One might think that someone like John, who has dedicated virtually his entire adult life to this organization, would be furious, indeed, filled with righteous indignation. Yet his belief in the truth of meritocracy leads him instead to point the finger back at himself: "I'm beginning to wonder about my abilities to run an association, to manage and motivate people. . . . Having been demoted . . . has to make you think. I have to accept my firing. I have to learn that that's the way it is. The people who were involved in it are people I respect for the most part. . . . They are successful executives. . . . So I can't blame them for doing what they think is right. I have to say where have I gone wrong."[31] I interviewed dozens of men and women cast out from the heartland of corporate America, and rarely did anyone fail to reach the same conclusion John expresses here: "There must be something wrong with me." The cost of intragenerational (within an adult's lifetime) downward mobility has been a massive loss of confidence among some of America's most experienced white-collar managers. As the economic disruptions described earlier in this chapter spread, so too does this culturally defined uneasi-


124

ness and depression. It engulfs workers and surrounds their children, who look at their parents and think, "If this could happen to them, it could happen to me."

The psychic pain caused by unemployment is an enduring problem for those on the receiving end. During depressions and recessions, the number of people who must survive this relentless destruction of their self-esteem grows. But even in good times there are always thousands of American men and women who find themselves falling out of the social structure, struggling to regain a place and an identity they can live with. Some manage to succeed, but many do not: they live for years with their identities in limbo. This is particularly true when the only jobs they can find pay a fraction of what they earned before they found themselves on the unemployment line. For American culture accepts the meritocratic argument that your job defines your worth as a person and subjects those who have moved down the ladder to a devastating critique of their value.

Even after they have recovered, the experience of downward mobility leaves most people insecure and shaken. They never quite trust their new employers or themselves. They cannot leave the past behind, but worry instead that they may plunge down again and join the legions of the lost for a second time. And many do have just that experience, for in their new jobs they are last hired, and when shake-ups occur, as they routinely do, they are often first fired once again.

When downward mobility occurs in one person's adult lifetime, the tragedy sticks in the craw and afflicts the generations to come in the form of nagging insecurity and self-doubt: will this awful descent down the occupational ladder happen to me, the son or daughter of the dispossessed? Am I carrying a gene for disaster? Children of the dispossessed and downwardly mobile can never be entirely sure that the security they once considered a middle-class birthright will be theirs to claim in adulthood.

In fact, there are reasons to suspect that downward mobility of another kind will describe the fate of many in the future. This "other kind" involves a comparison between the standard of living enjoyed by the baby-boom generation (and younger groups coming behind it) and the good fortunes of the generation that graduated to adulthood in the immediate aftermath of World War II. For as I noted earlier, the economic slowdown that began in 1973 caught different generations at different points in the life cycle and bifurcated their experience vis-à-vis the American dream. Where the older generation could expect to own their own homes, the younger group is finding this increasingly beyond reach. Where postwar adults were party to the creation of a "new middle class" of engineers, doctors, psychologists, corporate managers, and the like, their children found the professions crowded and competitive.[32] If all went well, the adult generation of the 1950s could expect to see their ca-


125

reers rocket upward, only to "plateau" (in terms of advancement up the corporate hierarchy) some time in their mid-fifties. Today's corporate managers are finding that the pressure of their numbers, combined with a slowing economy, will force the "plateau" to come earlier in their lives: in their forties. They will have to contend with salaries that are slower to increase, and the psychic consequences of an artificially shortened horizon for professional development. They will "top out" and be unable to go any higher in the organizational structure in which they work at a much younger age than was true for their fathers (or mothers).

Intergenerational downward mobility is causing broad-based cultural confusion. It is a byproduct of economic stagnation and demographic pressure, but these sociological facts are of little comfort to average people who cannot understand why they cannot fulfill the promise of bettering their parents' standard of living. It has been part of the American belief system to assume that each generation outdoes the last, and that the parents' sacrifices (taking a sweatshop job at the turn of the century) will be repaid by children's successes.[33] Increasingly, it would appear that with or without parental sacrifice, the baby-boom generation and those coming behind it are likely to experience a significant drop in their standard of living compared to that of their parents.

If human beings were able to adjust their expectations every time the consumer price index came out, this would be of little concern. But our sense of what is normal, of what the average person is entitled to have in life, does not change so easily. Men and women raised in suburban comfort do not simply say to themselves, "This is beyond my reach now; my children will have to settle for less; so be it." Instead, their expectations remain and their frustration grows to epidemic proportions.

For the past two years, I have been collecting life histories from two generations of Americans who graduated from one ordinary high school in a small town near New York City. The community they grew up in was a typical middle-income suburb of Manhattan. It is a bucolic, quiet enclave of commuter homes for people who earn a living in the Big Apple or in the larger cities of northern New Jersey. Developed in the 1950s, "Doeville"[34] attracted growing families out of the congested city. Mothers stayed home in those days, and the fathers of this town went out to work as skilled blue-collar labor, midlevel managers, and young professionals at the beginning of their careers in medicine or law. Many fathers established their own businesses, as contractors or freight haulers, and made a good living off the booming housing industry of the 1950s and 1960s.

There are homes in Doeville that are genuine mansions, with white pillars and circular driveways. But most of the houses are modest three-bedroom New England–style places or fake colonials, with comfortable


126

yards and two-car garages. Their first owners, those who moved into Doeville in the early 1950s, could purchase a home fairly easily on a single income, financed by the GI Bill. Doeville's children (of the 1950s and 1960s), who are now in their late twenties and late thirties, remember their early years building treehouses in the backyard, playing in the woods and the creeks near by, going swimming in the local pool, and gradually moving through the normal ups and downs of adolescence. They had a "perfectly average," not particularly privileged, way of life, as they see it now.[35]

Today Doeville homes cost a fortune. Modest houses that were easily within reach when my "informants" were kids now routinely sell for a third of a million dollars. The houses haven't changed, and the people who grew up in the town haven't changed their view that living in Doeville is an entitlement of middle-class life. But almost none of the people who graduated from high school in this town could possibly afford to live there now. They have been evicted from their own little corner of the world—or anywhere similar to it—by the declining value of their paychecks and the exponential increase in the cost of those ordinary houses.

Fred Bollard is verging on thirty, a 1980 graduate of Doeville High School, who lived at home while he finished a night school accounting degree at a local private college. Fred's parents still live in Doeville, but this is out of the question for Fred or anyone else he grew up with:

People who grew up—myself, my friends, my brothers and sisters and their friends—they don't stay in the area. Probably first and foremost, they can't afford it. The housing is literally ridiculous. My parents purchased their house for $25,000. Now the house on that little piece of property is appraised at $280,000. So now you have to make $70,000 a year to afford it. On two incomes you could do it, but one person? So that's why I think the major change is that the people who grew up there can't stay there. They have to leave and live elsewhere.

The progeny of Doeville who are now in their late twenties and thirties are finding that even when they give up the hope of living in a community like the one they grew up in, they cannot really satisfy their desires for a comfortable standard of living and the pleasures of family life. Jane and Dan Edelman, whom we encountered at the beginning of this chapter, would like to live in a place like Doeville, but can see that this is impossible, even on their combined incomes. Now that they would like to start a family, even their ability to support their modest home may be compromised.

The Edelmans are caught in a squeeze that makes them squirm. Raising a family is supposed to be a personal decision, an expression of love between parents, and a dramatic confirmation of the solidarity that
binds their own relationship together. It is meant to be the antithesis of the calculating, rational decision that, for example, buying a car might represent. American culture separates emotional and pragmatic domains. But the purity of this distinction cannot always be maintained, and wasn't during the Great Depression, when sheer necessity forced men and women to calculate carefully over the most personal of decisions. But absent catastrophic conditions, American culture regards pragmatism as a separate orientation from the affairs of the heart. Adhering to this cultural blueprint now seems something of a luxury. Young adults who grew up in Doeville feel compelled to choose between maintaining a standard of living they feel is essential, though hardly equivalent to what they grew up with, and establishing a family. It is a dilemma not easily resolved, for owning a home and having a family are fundamentally intertwined. Men and women who grew up in private homes feel that they must provide the same for their own children and that it would be irresponsible of them to plan a family absent that critical resource.

One might argue that previous generations managed on the strength of rented apartments and a much-reduced standard of living. This is beside the point. The expectations fueled by the postwar boom period of the 1950s and 1960s have become benchmarks against which descendants measure what is reasonable to expect in life. The comparison they naturally make between generations that are chronologically contiguous makes the wound run even deeper. For Dan and Jane ask, "Why are we in this predicament when it seemed so easy only twenty years before our time?"

One disturbing consequence of this intergenerational squeeze is the need baby-boomers and their younger siblings feel to calculate every move they make. Spontaneity appears to be a luxury; planning a necessity. Hypercalculation rears its head where family planning is concerned. It is also omnipresent when career decisions are at stake. For one cannot afford to make a mistake, or to be too much of a risk taker. The consequences could be disastrous: you could fall off the fast track and never recover. The workplace becomes an arena of relentless competition, as Anthony Sandsome (another Doeville graduate of the class of 1980) puts it:

Brokerage is the kind of thing where I get up in the morning and I'm in a boxing ring—not even in a boxing ring—I'm in a jungle. I'm armed with guns, knives, fists, you know, I am fighting for my money each day. I'm knocking the hell out of someone and someone is knocking the hell out of me.

Anthony can't get out of the ring, even though it sometimes exhausts him. He might never get back in. One might be inclined to expect this
attitude from a broker, since the field is well known for its cutthroat tendencies. But sentiments of this sort are commonly expressed by Anthony's classmates who are accountants, teachers, secretaries, and the like. Work is not the place where one finds personal fulfillment or fellowship; it is the place where survival of the fittest is the goal and the consequence of being less than the best is likely to be a serious drop in one's standard of living.

To a degree this has always been an aspect of the American workplace. It is viewed by many as an arena for the Darwinian struggle. But for this generation, making a mistake may have draconian consequences. For the treadmill begins when a young woman or man must choose a college degree course that will lead to practical payoffs in the workplace and a job that has the advancement potential needed to purchase a lifestyle consistent with middle-class expectations. That this has become increasingly difficult to pull off is met not with abject resignation, but by winching up the demands an individual places on him- or herself to calculate life decisions more carefully, and by building frustration over the knowledge that despite this increased self-surveillance, life may not turn out to be what was expected.

Who is to blame if this happens? Doeville residents are not entirely sure. But when they reflect on the apparent permanence of their economic exile, the dismay of both generations in Doeville is layers deep. Doeville parents believe their children are entitled to live in their hometown or somewhere just like it. That is what they worked for, to ensure that their children would be as well off, if not better off, than they have been. What they are witnessing is the opposite trend: their children are falling farther and farther behind. Doeville's refugee youth couldn't agree more. Anyone who has been able to escape this pressure is perceived as having benefitted from some unfair advantage. There are such people moving into Doeville now, and most of them are of Asian origin. New York City is a magnet for overseas placement of Asian executives, posted stateside by Japanese and Korean firms with American subsidiaries. Doeville is an attractive place for these newcomers to live since it is close to the City, yet is cloistered from the perceived dangers of urban living. The strength of Asian currencies against the American dollar puts Doeville homes well within reach of overseas executives, even as it recedes from the grasp of "native" Americans. As Maureen Oberlin, a life-long Doeville resident who cannot afford to buy into the community as an adult, sees it, this is a cause for alarm:

This area particularly has had a heavy Asian [influx]. If you go into some of the schools, Doeville is a perfect example, the high school . . . is getting to the point that it's almost 50-50, the percentage of Asians as opposed to Caucasians. The elementary schools are even higher [in percentages of
Asian students]. It's frustrating that they can afford it and we can't. We've lived here all our lives. We're working for it and they can just come up with the cash.

We are accustomed to the idea that blue-collar auto workers in Detroit will take a sledge hammer to a stray Toyota parked in the factory parking lot. The expression of frustration in the face of growing Japanese dominance of the American automobile market is understandable. Blue-collar labor faces a direct economic threat in the form of a competition we are losing. This is hardly news.

That the displacement has reached the quiet streets of America's middle and upper middle classes may come as something of a surprise. Nativism, a xenophobic reaction to the threat of "invasion" by alien peoples, is rearing its head behind the white picket fences of suburbia. Residents of Doeville, parents and exiled grown children, question whether the American melting pot is big enough for these newcomers, who seem to be starting at the top rather than working their way up through the ranks.

Conclusion

Deindustrialization is a macroeconomic phenomenon with profound consequences for our daily lives and our long-term ideals. The great American assumption of prosperity dies hard. When our experience falls short of expectations, as it does when downward mobility strikes a business executive, we are inclined to blame ourselves. Here the system appears to function perfectly well; we simply see ourselves as defective parts that need to be cast out or repositioned at a lower rank, more in keeping with our "natural" abilities.

When downward mobility distances the experience of one generation from that of another (adjacent) generation, the blame may also fall on the shoulders of the hapless individual who failed to calculate properly, who allowed an interest in music to overwhelm his or her better judgment (to pursue accounting). Or it may surface in scapegoating. Doeville families look at their new Asian neighbors and ask: why are they able to waltz in here and buy up homes in the neighborhoods we cultivated when we can no longer do so? The sentiment is not a pretty one, for it reflects an underlying sense of entitlement: only certain kinds of people—real Americans who speak English and want to assimilate—should be allowed the fruits of Doeville life. But it is an understandable reaction to the frustration of an intergenerational trajectory that is headed downhill in a culture that only has room for the good news of ever increasing prosperity.

Nativism is but one potential response. One hears as well the faint
beat of intergenerational warfare: why should a thirty-year-old woman pay hefty Social Security taxes to pay for the retirement of elder Doevillians, when they no longer pass school bond issues to support the education of young children in a neighboring part of the county? Should the generation that saw the postwar boom and reaped the benefits be entitled to a comfortable retirement, when the baby-boomers pushing up from below may see neither? America's social contract is fraying at the edges. We are no longer certain what we owe each other in the form of mutual support, or how open we can "afford" to be in enfolding immigrants into our society. We first calculate the costs and often fail to see any benefits.

Awareness of the fragility of the bonds holding us together is dim at best. Doeville residents look upon the country as an anthropomorphic being that once had a secure identity and is now adrift. They are confused by the apparent weakness of the economy and by the sense of directionless motion we encounter at every turn. We seize upon high technology as the solution, only to find that we have lost our markets to foreign competition. We indulge in a frenzy of hostile takeovers and mergers, only to find that unemployment and burdensome debt follow in its wake. We send our sons and daughters to Wall Street in search of a financial holy grail, and discover instead that they are nearly as vulnerable to downward mobility as the steel mill worker on Chicago's South Side.

The turmoil we have seen in the domestic economy since the postwar period has brought us tremendous prosperity at times, and a roller coaster of insecurity at others. Most of all, it has created a "postmodern" sense of unpredictability: we no longer have a firm grip on where the domestic economy is headed, on where the end point of change is to be found. This is not a particularly easy moment for Americans, who look toward the twenty-first century with clouded vision. We have not given up on our identity as a dominant force in the international world, but we see the limits of our power in the faltering economy. There are times when reality is at dramatic odds with our cultural expectations, and this is one of those times.

Seven—
Labor and Management in Uncertain Times:
Renegotiating the Social Contract

Ruth Milkman

The U.A.W. . . . is the largest labor union on earth. Its membership of 1,300,000 embraces most of the production workers in three major American industries. . . . The U.A.W. itself is diverse and discordant, both in its leaders and its members, among whom are represented every race and shape of political opinion. . . . The union's sharp insistence on democratic expression permits bloc to battle bloc and both to rebel at higher-ups' orders. They often do. But U.A.W. is a smart, aggressive, ambitious outfit with young, skillful leaders. . . . It has improved the working conditions in the sometimes frantically paced production lines. And it has firmly established the union shop in an industry which was once firmly open shop. . . . It is not a rich union. Its dues are one dollar a month, which is low. . . . U.A.W. makes its money go a long way. It sets up social, medical, and educational benefits. . . . In its high ranks are men like Reuther, who believes labor must more and more be given a voice in long-range economic planning of the country.[1]


Thanks to Miriam Golden, Naomi Schneider, Judith Stacey, and Alan Wolfe for their helpful comments on an earlier version of this chapter.

Curious as it may seem to a late-twentieth-century sensibility, this homage to the United Auto Workers is not from a union publication or some obscure left-wing tract. It appeared in Life magazine in 1945, a month after V-J Day and not long before the century's largest wave of industrial strikes, led by the auto workers, rocked the nation. The cover photo featured a 1940s Everyman: an unnamed auto worker in his work clothes, with factory smokestacks in the background. Blue-collar men in heavy industry, with powerful democratic unions and, at least implicitly, a strong class consciousness—only forty-five years ago this was standard iconography in the mass media and in the popular thinking that it both reflected and helped shape. Organized labor, then embracing over a third of the nation's nonfarm workers and 67 percent of those in manufacturing, was a central force in the Democratic party and a vital influence in public debate on a wide range of social questions. The industrial unions founded in the New Deal era were leaders in opposing race discrimination (and to some extent even sex discrimination) in this period, and their political agenda went far beyond the narrow, sectional interests of their members. Indeed, as historian Nelson Lichtenstein has written, in the 1940s "the union movement defined the left wing of what was possible in the political affairs of the day."[2]

Today, this history is all but forgotten. Blue-collar workers and labor unions are conspicuous by their absence from the mainstream of public discourse. Across the political spectrum, the conventional wisdom is that both industrial work and the forms of unionism it generated are fading relics of a bygone age, obsolete and irrelevant in today's postindustrial society. As everybody knows, while the unionized male factory worker was prototypical in 1945, today the labor force includes nearly as many women as men, and workers of both genders are more likely to sit behind a desk or perform a service than to toil on an assembly line. Union density has fallen dramatically, and organized labor is so isolated from the larger society that the right-wing characterization of it as a "special interest" prevails unchallenged. Public approval ratings of unions are at a postwar low, and such new social movements as environmentalism and feminism are as likely to define themselves in opposition to as in alliance with organized labor (if they take any notice of it at all).[3]

What has happened in the postwar decades to produce this change? Part of the story involves structural economic shifts. Most obviously, the manufacturing sector has decreased drastically in importance, accounting for only 20 percent of civilian wage and salary employment in the United States in 1987, compared to 34 percent in 1948.[4] And for complex political as well as economic reasons, unionization has declined even more sharply, especially in manufacturing, its historical stronghold. Although numbers fail to capture the qualitative aspects of this decline, they do indicate its massive scale: in 1989, only 16 percent of all U.S. workers, and 22 percent of those in manufacturing, were union members—half and one-third, respectively, of the 1945 density levels.[5] Alongside these massive processes of deindustrialization and deunionization, the widespread introduction of new technologies and the growing diffusion of the "new" industrial relations, with its emphasis on worker participation, have in recent years dramatically transformed both work and unionism in the manufacturing sector itself.

Few workplaces have been affected by these changes as dramatically as those in the automobile industry, the historical prototype of mass production manufacturing and the core of the U.S. economy for most of this century. Since the mid-1970s, hundreds of thousands of auto workers have been thrown out of work as some factories have closed and others have been modernized.[6] And although the U.A.W. still represents the vast bulk of workers employed by the "Big Three" auto firms (General Motors, Ford, and Chrysler), in recent years the non-union sector of the industry has grown dramatically. Union coverage in the auto parts industry has fallen sharply since the mid-1970s, and the establishment of new Japanese-owned "transplants" in the 1980s has created a non-union beachhead in the otherwise solidly organized assembly sector.[7] Profoundly weakened by these developments, the U.A.W. has gingerly entered a new era of "cooperation" with management, jettisoning many of its time-honored traditions in hopes of securing a place for itself in the future configuration of the industry. Meanwhile, the Big Three have invested vast sums of money in such new technologies as robotics and programmable automation. They have also experimented extensively with worker participation schemes and other organizational changes.

The current situation of auto workers graphically illustrates both the historical legacy of the glory days of American industrial unionism and the consequences of the recent unravelling of the social contract between labor and management that crystallized in the aftermath of World War II. This chapter explores current changes in the nature of work and unionism in the auto industry, drawing on historical evidence and on fieldwork in a recently modernized General Motors (GM) assembly plant in Linden, New Jersey. The analysis focuses particularly on the effects of new technology and the new, participatory forms of management. While it is always hazardous to generalize from any one industry to "the" workplace, the recent history of labor relations in the auto industry is nonetheless suggestive of broader patterns. The auto industry case is also of special interest because it figures so prominently in current theoretical debates about workplace change, which are briefly considered in the concluding section.

The story I will recount here is largely a story of failure—on the part of both management and labor—to respond effectively to rapidly changing circumstances. On the management side, the Big Three auto firms (and especially GM) have experienced enormous difficulty in overcoming bureaucratic inertia, particularly in regard to changing the behavior of middle management and first-line supervisors. As a result, their internal organizational structures and traditional corporate cultures have remained largely intact, despite strenuous efforts to institute changes. The auto firms have been unable to reap the potential advantages of the new technologies or to make a successful transition to a more participatory system of workplace management, even though they have invested considerable resources in both areas. Management's own inertia has been reinforced, tragically, by the weakening of the U.A.W. in this critical
period. Long habituated to a reactive stance toward management initiatives, in recent years the union has concentrated its energies on the crisis of job security, leaving the challenge of reorganizing the workplace itself largely to management while warily embracing "cooperation" in hopes of slowing the hemorrhaging of jobs in the industry. The net result has been an increasingly uncompetitive domestic auto industry, which in turn has further weakened the union, creating a vicious circle of decline.

Because so much of the recent behavior of automobile manufacturing managers and of the U.A.W. and its members is rooted in the past, the first step in understanding the current situation is to look back to the early days of the auto industry, when the system of mass production and the accompanying pattern of labor-management relations that is now unravelling first took shape.

Fordism and the History of Labor Relations in the U.S. Auto Industry

The earliest car manufacturers depended heavily on skilled craftsmen to make small production runs of luxury vehicles for the rich. But the industry's transformation into a model of mass production efficiency, led by the Ford Motor Company in the 1910s, was predicated on the systematic removal of skill from the industry's labor process through scientific management, or Taylorism (named for its premier theorist, Frederick Winslow Taylor). Ford perfected a system involving not only deskilling but also product standardization, the use of interchangeable parts, mechanization, a moving assembly line, and high wages. These were the elements of what has since come to be known as "Fordism," and they defined not only the organization of the automobile industry but that of modern mass production generally.[8]

As rationalization and deskilling proceeded through the auto industry in the 1910s and 1920s, the proportion of highly skilled jobs fell dramatically. The introduction of Ford's famous Five Dollar Day in 1914 (then twice the going rate for factory workers) both secured labor's consent to the horrendous working conditions these innovations produced and helped promote the mass consumption that mass production required for its success. Managerial paternalism, symbolized by Ford's "Sociological Department," supplemented high wages in this regime of labor control. Early Ford management also developed job classification systems, ranking jobs by skill levels and so establishing an internal labor market within which workers could hope to advance.[9]

Deskilling was never complete, and some skill differentials persisted among production workers. Even in the 1980s, auto body painters and
welders had more skill than workers who simply assembled parts, for example. But these were insignificant gradations compared to the gap between production workers and the privileged stratum of craft workers known in the auto industry as the "skilled trades"—tool and die makers, machinists, electricians, and various other maintenance workers. Nevertheless, the mass of the industry's semiskilled operatives united with the skilled trades elite in the great industrial union drives of the 1930s, and in the U.A.W. both groups were integrated into the same local unions.

The triumph of unionism left the industry's internal division of jobs and skills intact, but the U.A.W. did succeed in narrowing wage differentials among production workers and in institutionalizing seniority (a principle originally introduced by management but enforced erratically in the pre-union era) as the basic criterion for layoffs and job transfers for production workers. For the first decade of the union era, much labor-management conflict focused on the definition of seniority groups. Workers wanted plantwide or departmentwide seniority to maximize employment security, while management sought the narrowest possible seniority classifications to minimize the disruptions associated with workers' movement from job to job. But once the U.A.W. won plantwide seniority for layoffs, it welcomed management's efforts to increase the number of job classifications for transfers, since this maximized opportunities for workers with high seniority to choose the jobs they preferred. By the 1950s, this system of narrowly defined jobs, supported by union and management alike, was firmly entrenched.[10]

Management and labor reached an accommodation on many other issues as well in the immediate aftermath of World War II. But at the same time, the U.A.W. began to retreat from the broad, progressive agenda it had championed in the 1930s and during the war. The failure of the 1945–46 "open the books" strike, in which the union demanded that GM raise workers' wages without increasing car prices, and the national resurgence of conservatism in the late 1940s and 1950s led the U.A.W. into its famous postwar "accord" with management. Under its terms, the union increasingly restricted its goals to improving wages and working conditions for its members, while ceding to management all the prerogatives involved in the production process and in economic planning. The shop steward system in the plants was weakened in the postwar period as well, and in the decades that followed, the U.A.W. was gradually transformed from the highly democratic social movement that Life magazine had profiled in 1945 into a more staid, bureaucratic institution that concentrated its energies on the increasingly complex technical issues involved in enforcing its contracts and improving wages, fringe benefits, and job security for its members.[11]

The grueling nature of production work in the auto industry changed
relatively little over the postwar decades, even as the U.A.W. continued to extract improvements in the economic terms under which workers agreed to perform it. High wages and excellent benefits made auto workers into the blue-collar aristocrats of the age. It was an overwhelmingly male aristocracy, since women had been largely excluded from auto assembly jobs after World War II; blacks, on the other hand, made up a more substantial part of the auto production work force than of the nation's population. In 1987, at the Linden GM assembly plant where I did my fieldwork, for example, women were 12 percent of the production work force and less than 1 percent of the skilled trades. Linden production workers were a racially diverse group: 61 percent were white, 28 percent were black, and 12 percent were Hispanic; the skilled trades work force, however, was 90 percent white.[12]

While the union did little to ameliorate the actual experience of work in the postwar period, with the job classification system solidified, those committed to a long-term career in the industry could build up enough seniority to bid on the better jobs within their plants. Although the early, management-imposed job classification systems had been based on skill and wage differentials, the union eliminated most of the variation along these dimensions. Indeed, the payment system the U.A.W. won, which persists to this day, is extremely egalitarian. Regardless of seniority or individual merit, assembly workers are paid a fixed hourly rate negotiated for their job classification, and the rate spread across classifications is very narrow. Formal education, which is in any case relatively low (both production workers and skilled trades at Linden GM averaged twelve years of schooling), is virtually irrelevant to earnings. At Linden GM, production workers' rates in 1987 ranged from a low of $13.51 per hour for sweepers and janitors to a high of $14.69 for metal repair work in the body shop. Skilled trades workers' hourly rates were only slightly higher, ranging from $15.90 to $16.80 (with a twenty-cent-an-hour "merit spread"), although their annual earnings are much higher than those of production workers because of their extensive overtime.[13]

Since wage differentials are so small, the informal de facto hierarchy among production jobs is based instead on what workers themselves perceive as desirable job characteristics. While individual preferences always vary somewhat, the consensus is reflected in the seniority required to secure any given position. One testament to the intensely alienating nature of work on the assembly line is that among the jobs auto workers prefer most are those of sweeper and janitor, even though these jobs have the lowest hourly wage rates. Subassembly, inspection, and other jobs where workers could pace themselves rather than be governed by the assembly line are also much sought after. At Linden in 1987, the median seniority of unskilled workers in the material and maintenance departments,
which include all the sweepers and janitors and where all jobs are "off the line," was 24 years—twice the median seniority of workers in the assembly departments![14] By contrast, jobs in particularly hot or dirty parts of the plant, or those in areas where supervision is especially hostile, are shunned by workers whose seniority gives them any choice. Such concerns are far more important to production workers than what have become marginal skill or wage differentials, although there is a group that longs to cross the almost insurmountable barrier between production work and the skilled trades.[15]

Such was the system that emerged from the post—World War II accord between the U.A.W. and management. It functioned reasonably well for the first three postwar decades. The auto companies generated huge profits in these years, and for auto workers, too, the period was one of unprecedented prosperity. Even recessions in this cyclically sensitive industry were cushioned by the supplementary unemployment benefits the union won in 1955. However, in the 1970s, fundamental shifts in the international economy began to undermine the domestic auto makers. As skyrocketing oil prices sent shock waves through the U.S. economy, more and more cars were imported from the economically resurgent nations of Western Europe and, most significantly, Japan. For the first time in their history, the domestic producers faced a serious challenge in their home market.[16]

After initially ignoring these developments, in the 1980s the Big Three began to confront their international competition seriously. They invested heavily in computerization and robotization, building a few new high-tech plants and modernizing most of their existing facilities. GM alone spent more than $40 billion during the 1980s on renovating old plants and building new ones.[17] At the same time, inspired by their Japanese competitors, the auto firms sought to change the terms of their postwar accord with labor, seeking wage concessions from the union, reducing the number of job classifications and related work rules in many plants, and experimenting with new forms of "employee involvement" and worker participation, from quality circles to flexible work teams.[18]

The U.A.W., faced with unprecedented job losses and the threat of more to come, accepted most of these changes in the name of labor-management cooperation. To the union's national leadership, this appeared to be the only viable alternative. They justified it to an often skeptical rank and file membership by arguing that resistance to change would only serve to prevent the domestic industry from becoming internationally competitive, which in turn would mean further job losses. Once it won job security provisions protecting those members affected by technological change, the union welcomed management's investments in technological modernization, which both parties saw as a means
of meeting the challenge of foreign competition. Classification mergers and worker participation schemes were more controversial within the union, but the leadership accepted these, too, in the name of enhancing the domestic industry's competitiveness.

Most popular and academic commentators view the innovations in technology and industrial relations that the auto industry (among others) undertook in the 1980s in very positive terms. Some go so far as to suggest that they constitute a fundamental break with the old Fordist system. New production technologies in particular, it is widely argued, hold forth the promise of eliminating the most boring and dangerous jobs while upgrading the skill levels of those that remain. In this view, new technology potentially offers workers something the U.A.W. was never able to provide, namely, an end to the deadening monotony of repetitive, deskilled work. Similarly, many commentators applaud the introduction of Japanese-style quality circles and other forms of participative management, which they see as a form of work humanization complementing the new technology. By building on workers' own knowledge of the production process, it is argued, participation enhances both efficiency and the quality of work experience. The realities of work in the auto industry, however, have changed far less than this optimistic scenario suggests.

New Technology and the Skill Question

Computer-based technologies are fundamentally different from earlier waves of industrial innovation. Whereas in the past automation involved the use of special-purpose, or "dedicated," machinery to perform specific functions previously done manually, the new information-based technologies are flexible, allowing a single machine to be adapted to a variety of specific tasks. As Shoshana Zuboff points out, these new technologies often require workers to use "intellective" skills. Workers no longer simply manipulate tools and other tangible objects, but also must respond to abstract, electronically presented information. For this reason, Zuboff suggests, computer technology offers the possibility of a radical break with the Taylorist tradition of work organization that industries like auto manufacturing long ago perfected, moving instead toward more skilled and rewarding jobs, and toward workplaces where learning is encouraged and rewarded. "Learning is the new form of labor," she declares.[19] Larry Hirschhorn, another influential commentator on computer technology, makes a similar argument. As he puts it, in the computerized factory "the deskilling process is reversed. Machines extend workers' skill rather than replace it."[20]

As computer technology has transformed more and more workplaces,
claims like these have won widespread public acceptance. They are, in fact, the basis for labor market projections that suggest a declining need for unskilled labor and the need for educational upgrading to produce future generations of workers capable of working in the factory and office of the computer age. Yet it is far from certain that workplaces are actually changing in the ways that Zuboff and Hirschhorn suggest.

The Linden GM plant is a useful case for examining this issue, since it recently underwent dramatic technological change. In 1985–86, GM spent $300 million modernizing the plant, which emerged from this process as one of the nation's most technologically advanced auto assembly facilities and as the most efficient GM plant in the United States. There are now 219 robots in the plant, and 113 automated guided vehicles (AGVs), which carry the car bodies from station to station as they are assembled. Other new technology includes 186 programmable logic controllers (PLCs), used to program the robots. (Before the plant modernization there was only one robot, no AGVs, and eight PLCs.)[21]

Despite this radical technological overhaul, the long-standing division of labor between skilled trades and production workers has been preserved intact. Today, as they did when the plant used traditional technology, Linden's skilled trades workers maintain the plant's machinery and equipment, while production workers perform the unskilled and semiskilled manual work involved in assembling the cars. However, the number of production workers has been drastically reduced (by over 1,100 people, or 26 percent), while the much smaller population of skilled trades workers has risen sharply (by 190 people, or 81 percent). Thus the overall proportion of skilled workers increased—from 5 percent to 11.5 percent—with the introduction of robotics and other computer-based production technologies. In this sense, the plant's modernization did lead to an overall upgrading in skill levels.[22]

However, a closer look at the impact of the technological change on GM-Linden reveals that pre-existing skill differentials among workers have been magnified, leading to skill polarization within the plant rather than across-the-board upgrading.[23] After the plant modernization, the skilled trades workers enjoyed massive skill upgrading and gained higher levels of responsibility, just as Zuboff and Hirschhorn would predict. In contrast, however, the much larger group of production workers, whose jobs were already extremely routinized, typically experienced still further deskilling and found themselves subordinated to and controlled by the new technology to an even greater extent than before.

The skilled trades workers had to learn how to maintain and repair the robots, AGVs, and other new equipment, and since the new technology is far more complex than what it replaced, they acquired many new skills. Most skilled trades workers received extensive retraining, especially in robotics and in the use of computers. Linden's skilled trades workers reported an average (median) of forty-eight full days of technical training in connection with the plant modernization, and some received much more.[24] Most of them were enthusiastic about the situation. "They were anxiously awaiting the new technology," one electrician recalled. "It was like a kid with a new toy. Everyone wanted to know what was going to happen."[25] After the "changeover" (the term Linden workers used for the plant modernization), the skilled trades workers described their work as challenging and intellectually demanding:

We're responsible for programming the robots, troubleshooting the robots, wiping their noses, cleaning them, whatever. . . . It's interesting work. We're doing something that very few people in the world are doing, troubleshooting and repairing robots. It's terrific! I don't think this can be boring because there are so many things involved. There are things happening right now that we haven't ever seen before. Every day there's something different. We're always learning about the program, always changing things to make them better—every single day. [an electrician]

With high technology, skilled trades people are being forced to learn other people's trades in order to do their trade better. Like with me, I have to understand that controller and how it works in order to make sure the robot will work the way it's supposed to. You have to know the whole system. You can't just say, "I work on that one little gear box, I don't give a damn about what the rest of the machine does." You have to have a knowledge of everything you work with and everything that is related to it, whether you want to or not. You got to know pneumatics, hydraulics—all the trades. Everything is so interrelated and connected. You can't be narrow-minded anymore. [a machine repairman]

However, the situation was quite different for production workers. Their jobs, as had always been the case in the auto industry, continued to involve extremely repetitive, machine-paced, unskilled or semiskilled work. Far from being required to learn new skills, many found their jobs were simplified or further deskilled by the new technology:

It does make it easier to an extent, but also at the same time they figure, "Well, I'm giving you a computer and it's going to make your job faster, so instead of you doing this, this, and this, I'm going to have you do this and eight other things, because the time I'm saving you on the first three you're going to make it up on the last." Right now I'm doing more work in less time, the company's benefiting, and I am bored to death—more bored than before! [a trim department worker with nineteen years seniority]

I'm working in assembly. I'm feeding the line, the right side panel, the whole right side of the car. Myself and a fellow worker, in the same spot. Now all we do, actually, is put pieces in, push the buttons, and what they
call a shuttle picks up whatever we put on and takes it down the line to be welded. Before the changeover my job was completely different. I was a torch solderer. And I had to solder the roof, you know, the joint of the roof with the side panel. I could use my head more. I liked it more. Because, you know, when you have your mind in it also, it's more interesting. And not too many fellow workers could do the job. You had to be precise, because you had to put only so much material, lead, on the job. [a body shop worker with sixteen years seniority]

Not only were some of the more demanding and relatively skilled traditional production jobs—like soldering, welding, and painting car bodies—automated out of existence, but also many of the relatively desirable off-the-line jobs were eliminated. "Before there were more people working subassembly, assembling parts," one worker recalled. "You have some of the old-timers working on the line right now. Before, if you had more seniority, you were, let's say, off the line, in subassembly."

Even when they operate computers—a rarity for production workers—they typically do so in a highly routinized way. "There is nothing that really takes any skill to operate a computer," one production worker in the final inspection area said. "You just punch in the numbers, the screen will tell you what to do, it will tell you when to race the engine and when to turn the air conditioner off, when to do everything. Everything comes right up on the screen. It's very simple."

The pattern of skill polarization between the skilled trades and production workers that these comments suggest is verified by the findings of an in-plant survey. Skilled trades workers at Linden, asked about the importance of twelve specific on-the-job skills (including "problem solving," "accuracy/precision," "memory," and "reading/spelling") to their jobs before and after the plant was modernized, reported that all but one ("physical strength") increased in importance. In contrast, a survey of the plant's production workers asking about the importance of a similar list of skills found that all twelve declined in importance after the introduction of the new technology.[26] The survey also suggested that boredom levels had increased for production workers; 45 percent stated that their work after the changeover was boring and monotonous "often" or "all the time," compared to 35 percent who had found it boring and monotonous before the changeover. Similarly, 96 percent of production workers said that they now do the same task over and over again "often" or "all the time," up from 79 percent who did so before the changeover.

In the Linden case, the plant modernization had opposite effects on skilled trades and production workers, primarily because no significant job redesign was attempted. The boundary between the two groups and the kinds of work each had traditionally done was maintained, despite the radical technological change. While management might have chosen
(and the union might have agreed) to try to transfer some tasks from the skilled trades to production workers, such as minor machine maintenance, or to redesign jobs more extensively in keeping with the potential of the new technology, this was not seriously attempted. Engineers limited their efforts to conventional "line balancing," which simply involves packaging tasks among individual production jobs so as to minimize the idle time of any given worker. In this respect they treated the new technology very much like older forms of machinery. The fundamental division of labor between production workers and the skilled trades persisted despite the massive infusion of new technology, and this organizational continuity led to the intensification of the already existing skill polarization within the plant.

GM-Linden appears to be typical of U.S. auto assembly plants in that new technology has been introduced without jobs having been fundamentally redesigned or the basic division of labor altered between production workers and the skilled trades. Even where significant changes in the division of labor—such as flexible teams—have been introduced, as in the new Japanese transplants, they typically involve rotating workers over a series of conventionally deskilled production jobs, rather than changing the basic nature of the work. While being able to perform eight or ten unskilled jobs rather than only one might be considered skill upgrading in some narrow technical sense, it hardly fits the glowing accounts of commentators who claim that with new technology "the deskilling process is reversed." Rather, it might be characterized best as "flexible Taylorism" or "Toyotism."[27]

Perhaps work in the auto industry could be reorganized along the lines Zuboff and Hirschhorn suggest, now that new technology has been introduced so widely. However, a major obstacle to this is bureaucratic inertia on the management side, for which GM in particular is legendary. As many auto industry analysts have pointed out, the firm's investments in new technology were typically seen by management as a "quick fix," throwing vast sums of money at the accelerating crisis of international competitiveness without seriously revamping the firm's organizational structure or its management strategies to make the most efficient possible use of the new equipment. As Mary Ann Keller put it, for GM "the goal of all the technology push has been to get rid of hourly workers. GM thought in terms of automation rather than replacing the current system with a better system."[28] The technology was meant to replace workers, not to transform work.

Reinforcing management's inertia, ironically, was the weakness of the U.A.W. The union has an old, deeply ingrained habit of ceding to management all prerogatives on such matters as job design. And in the 1980s, faced with unprecedented job losses, union concerns about employment
security were in the forefront. The U.A.W. concentrated its efforts on minimizing the pain of "downsizing," generally accepting the notion that new technology and other strategies adopted by management were the best way to meet the challenge of increased competition in the industry. After all, if the domestic firms failed to become competitive, U.A.W. members would have no jobs at all. This kind of reasoning, most prominently associated with the U.A.W.'s GM Department director Donald Ephlin, until his retirement in 1989, also smoothed the path for management's efforts to transform the industrial relations system in the direction of increased "employee involvement" and teamwork, to which we now turn.

Worker Participation and the "New Industrial Relations"

Inspired by both the non-union manufacturing sector in the U.S. and by the Japanese system of work organization, the Big Three began to experiment with various worker participation schemes in the 1970s. By the end of the 1980s, virtually every auto assembly plant in the United States had institutionalized some form of participation. Like the new technologies that were introduced in the same period, these organizational innovations—the "new industrial relations"—were a response to the pressure of international competition. And even more than the new technologies, they signaled a historic break with previous industrial practices. For both the Taylorist organization of work in the auto industry and the system of labor relations that developed around it had presumed that the interests of management and those of workers were fundamentally in conflict. In embracing worker participation, however, management abandoned this worldview and redefined its interests as best served by cooperation with labor, its old adversary.[29]

For management, the goal of worker participation is to increase productivity and quality by drawing on workers' own knowledge of the labor process and by increasing their motivation and thus their commitment to the firm. Participation takes many different forms, ranging from suggestion programs, quality circles, and quality-of-work-life (QWL) programs, which actively solicit workers' ideas about how to improve production processes, to "team concept" systems, which organize workers into small groups that rotate jobs and work together to improve productivity and quality on an ongoing basis. All these initiatives promote communication and trust between management and labor, in the name of efficiency and enhanced international competitiveness. Like the new technologies with which they are often associated, the various forms of worker participation have been widely applauded by many commentators who see them
as potentially opening up a new era of work humanization and industrial democracy.[30]

In the early 1970s, some U.A.W. officials (most notably Irving Bluestone, then head of the union's GM department) actively supported experimental QWL programs, which they saw as a means for improving the actual experience of work in the auto industry, a long-neglected part of the union's original agenda. But many unionists were more skeptical about participation in the 1980s, when QWL programs and the team concept became increasingly associated with union "give-backs," or concessions. In a dramatic reversal of the logic of the postwar labor-management accord, under which economic benefits were exchanged for unilateral management control over the production process, now economic concessions went hand-in-hand with the promise of worker participation in decision making. However, QWL and the team concept were introduced largely on management's terms in the 1980s, for in sharp contrast to the period immediately after World War II, now the U.A.W. was in a position of unprecedented weakness. In many Big Three plants, participation schemes were forced on workers (often in the face of organized opposition) through what auto industry analysts call "whipsawing," a process whereby management pits local unions against one another by threatening to close the least "cooperative" plants. Partly for this reason, QWL and the team concept have precipitated serious divisions within the union, with Ephlin and other national union leaders who endorse participation facing opposition from a new generation of union dissidents who view it as a betrayal of the union's membership.[31]

The New United Motor Manufacturing, Inc., plant (NUMMI) in Fremont, California, a joint venture of Toyota and GM, is the focus of much of the recent controversy over worker participation. The plant is run by Toyota, using the team concept and various Japanese management techniques. (GM's responsibility is limited to the marketing side of the operation.) But unlike Toyota's Kentucky plant and the other wholly Japanese-owned transplants, at NUMMI the workers are U.A.W. members. Most of them worked for GM in the same plant before it was closed in 1982. Under GM, the Fremont plant had a reputation for low productivity and frequent wildcat strikes, but when it reopened as NUMMI two years later, with the same work force and even the same local union officers, it became an overnight success story. NUMMI's productivity and quality ratings are comparable to those of Toyota plants in Japan, and higher than any other U.S. auto plant.[32] Efforts to emulate its success further accelerated the push to establish teams in auto plants around the nation.

Many commentators have praised the NUMMI system of work organization as a model of worker participation; yet others have severely criticized it. The system's detractors argue that despite the rhetoric of worker control, the team concept and other participatory schemes are basically
strategies to enhance management control. Thus Mike Parker and Jane Slaughter suggest that, far from offering a humane alternative to Taylorism, at NUMMI, and at plants that imitate it, workers mainly "participate" in the intensification of their own exploitation, mobilizing their detailed firsthand knowledge of the labor process to help management speed up production and eliminate wasteful work practices. More generally, "whether through team meetings, quality circles, or suggestion plans," Parker and Slaughter argue, "the little influence workers do have over their jobs is that in effect they are organized to time-study themselves in a kind of super-Taylorism."[33] They see the team concept as extremely treacherous, undermining unionism in the name of a dubious form of participation in management decisions.

Workers themselves, however, seem to find intrinsically appealing the idea of participating in what historically have been exclusively managerial decision-making processes, especially in comparison to traditional American managerial methods. This is the case even though participation typically is limited to an extremely restricted arena, such as helping to streamline the production process or otherwise raise productivity. Even Parker and Slaughter acknowledge that at NUMMI, "nobody says they want to return to the days when GM ran the plant."[34] Unless one wants to believe that auto workers are simply dupes of managerial manipulation, NUMMI's enormous popularity with the work force suggests that the new industrial relations have some positive features and cannot simply be dismissed as the latest form of labor control.

Evidence from the GM-Linden case confirms the appeal of participation to workers, although reforms in labor relations there were much more limited than at NUMMI. Linden still has over eighty populated job classifications, and although 72 percent of the production workers are concentrated in only eight of them, this is quite different from NUMMI, where there is only one job classification for production workers and seniority plays a very limited role. Nor has Linden adopted the team system. However, when the plant reopened after its 1985–86 modernization, among its official goals was to improve communications between labor and management, and both parties embraced "jointness" as a principle of decision making. At the same time, "employee involvement groups" (EIGs) were established. Production workers were welcomed back to the plant after the changeover with a jointly (union-management) developed two-week (eighty-hour) training program, in the course of which they were promised that the "new Linden" would be totally different from the plant they had known before. In particular, workers were led to expect an improved relationship with management, and a larger role in decision making and problem solving on the shop floor.[35]

Most workers were extremely enthusiastic about these ideas—at least initially. The problem was that after the eighty-hour training program
was over, when everyone was back at work, the daily reality of plant life failed to live up to the promises about the "new Linden." "It's sort of like going to college," one worker commented about the training program. "You learn one thing, and then you go into the real world. . . . " Another agreed:

It sounded good at the time, but it turned out to be a big joke. Management's attitude is still the same. It hasn't changed at all. Foremen who treated you like a fellow human being are still the same—no problems with them. The ones who were arrogant bastards are still the same, with the exception of a few who are a little bit scared, a little bit afraid that it might go to the top man, and, you know, make some trouble. Everyone has pretty much the same attitude.

Indeed, the biggest problem was at the level of first-line supervision. While upper management may have been convinced that workers should have more input into decision making, middle and lower management (who also went through a training program) did not always share this view. Indeed, after the training raised workers' expectations, foremen in the plant, faced with the usual pressures to get production out, seemed to quickly fall back into their old habits. The much-touted "new Linden" thus turned out to be all too familiar. As the workers pointed out:

You still have the management that has the mentality of the top-down, like they're right, they don't listen to the exchange from the workers, like the old school. So that's why when you ask about the "new Linden," people say it's a farce, because you still . . . do not feel mutual respect, you feel the big thing is to get the jobs out. This is a manufacturing plant; they do have to produce. But you can't just tell this worker, you know, take me upstairs [where the training classes were held], give me this big hype, and then bring me downstairs and then have the same kind of attitude.

With management, they don't have the security that we have. Because if a foreman doesn't do his job, he can be replaced tomorrow, and he's got nobody to back him up. So everybody's a little afraid of their jobs. So if you have a problem, you complain to your foreman, he tries to take care of it without bringing it to his general foreman; or the general foreman, he don't want to bring it to his superintendent, because neither of them can control it. So they all try to keep it down, low level, and under the rug, and "Don't bother me about it—just fix it and let it slide." And that is not the teachings that we went through in that eighty-hour [training] course!

Many Linden workers expressed similar cynicism about the EIGs. "A lot of people feel very little comes out of the meetings. It's just to pacify you so you don't write up grievances," one paint department worker said, articulating a widespread sentiment. "It's a half-hour's pay for sitting there and eating your lunch," he added.


147

Research on other U.S. auto assembly plants suggests that Linden, where the rhetoric of participation was introduced without much substantive change in the quality of the labor-management relationship, is a more representative case than NUMMI, where participation (whatever its limits) is by all accounts more genuine. Reports from Big Three plants around the nation suggest that typical complaints concern not the concept of participation—which workers generally endorse—but management's failure to live up to its own stated principles. Gerald Horton, a worker at GM's Wentzville, Missouri, plant "thinks the team concept is a good idea if only management would abide by it." Similarly, Dan Maurin of GM's Shreveport, Louisiana, plant observes, "it makes people resentful when they preach participative management and then come in and say, 'this is how we do it.'"[36] Betty Foote, who works at a Ford truck plant outside Detroit, expressed the sentiments of many auto workers about Employee Involvement (EI): "The supposed concern for workers' happiness now with the EI program is a real joke. It looks good on paper, but it is not effective. . . . Relations between workers and management haven't changed."[37]

At NUMMI, workers view participation far more positively. Critics of the team concept suggest that this is because workers there experienced a "significant emotional event" and suffered economically after GM closed the plant, so that when they were recalled to NUMMI a few years later they gratefully accepted the new system without complaint. But, given the uncertainty of employment and the history of chronic layoffs throughout the auto industry, that this would sharply distinguish NUMMI's workers from those in other plants seems unlikely. Such an explanation for the positive reception of the team concept by NUMMI workers is also dubious in light of the fact that even the opposition caucus in the local union, which criticizes the local U.A.W. officials for being insufficiently militant in representing the rank and file, explicitly supports the team concept.[38]

Instead, the key difference between NUMMI and the Big Three assembly plants may be that workers have more job security at NUMMI, where the Japanese management has evidently succeeded in building a high-trust relationship with workers. When the plant reopened, NUMMI workers were guaranteed no layoffs unless management first took a pay cut; this promise and many others have (so far) been kept, despite slow sales. In contrast, the Big Three (and especially GM) routinely enrage workers by announcing layoffs and then announcing executive pay raises a few days later; while at the plant level, as we have seen, management frequently fails to live up to its rhetorical commitments to participation.[39] On the one hand, this explains why NUMMI workers are so much more enthusiastic about participation than their counterparts in


148

other plants. On the other hand, where teamwork and other participatory schemes have been forced on workers through "whipsawing," the result has been a dismal failure on its own terms. Indeed, one study found a negative correlation between the existence of participation programs and productivity.[40]

Insofar as the U.A.W. has associated itself with such arrangements, it loses legitimacy with the rank and file when management's promises are not fulfilled. Successful participation systems, however, can help strengthen unionism. It is striking that at NUMMI, with its sterling productivity and quality record, high management credibility, and relatively strong job security provisions, the U.A.W. is stronger than in most Big Three plants. For that matter, the local union at NUMMI has more influence than do enterprise unions in Japanese auto plants, where teamwork systems are long-standing.[41] But here, as in so many other ways, NUMMI is the exceptional case. In most U.S. auto plants, the weakness of the U.A.W.—in the face of industry overcapacity and capital's enhanced ability to shift production around the globe—has combined with management's inability to transform its own ranks to undermine the promise of participation.

Conclusion

In recent literature, the introduction of new technologies and worker participation in industries like auto manufacturing are often cited as evidence of a radical break from the traditional Fordist logic of mass production.[42] Owing to changed economic conditions, the argument goes, Fordism is becoming less and less viable, so that advanced capitalist countries are now moving toward a more flexible "post-Fordist" production regime. In most such accounts, including the influential "flexible specialization" model of Michael Piore and Charles Sabel, this transformation is driven primarily by the growth of increasingly specialized markets and by the new information-based technologies. Theorists of post-Fordism generally agree with analysts like Zuboff and Hirschhorn that new technologies should lead to skill upgrading and thus reverse the logic of Taylorism. Thus Piore and Sabel write that the computer is "a machine that meets Marx's definition of an artisan's tool; it is an instrument that responds to and extends the productive capacities of the user," and that, together with changes in product markets, computer technology is contributing to "the resurgence of craft principles." Post-Fordist theorists also view QWL programs, the team concept, and other forms of worker participation as changes that will help to humanize and democratize the workplace. Thus Piore and Sabel urge organized labor to "shake its attachment to increasingly indefensible forms of shop floor


149

control" so as not to impede progress toward flexible specialization, and they explicitly applaud the U.A.W. for its willingness to experiment with classification reductions and worker participation.[43]

Opposing this type of interpretation of recent events is another perspective, inspired by the labor process theory of Harry Braverman and associated with such writers as Harley Shaiken and David Noble.[44] While the post-Fordists emphasize the contrast between the historical logic of deskilling in mass production industries and contemporary developments, the labor process theorists instead stress the continuities. The key force shaping work experience in both the past and present, in this alternative view, is the systematic removal of skill from the labor process by Taylorist managers. While accepting the idea that new technologies can potentially increase skill levels, labor process theorists argue that this potential is often impossible to realize in the organizational context of the capitalist firm. In their view, management uses computerization in the same way that it used earlier forms of technology: to appropriate knowledge from workers and tighten control over labor—even when skill upgrading might be a more efficient strategy. As Shaiken puts it, "Unfortunately, the possibility exists for introducing authoritarian principles in flexible as well as more traditional mass-production technologies. Under these circumstances, the extraordinary economic potential of these systems is not realized."[45] Commentators in this tradition are also skeptical about worker participation schemes, which they view, not as authentic efforts to enhance the experience of work, but rather as new tools of managerial control. They are especially troubled by the fact that management-initiated participation schemes are often coupled with antiunionism and concession bargaining.[46]

Although they have contributed many valuable insights into recent workplace changes, both these perspectives are one-sided. The post-Fordists fail to take seriously the firm-level organizational obstacles to the kind of macroeconomic transition they envision. They tend to romanticize the emergent new order, and especially its implications for workers, often ignoring the persistent determination of employers to maintain control over labor—as if this fundamental feature of capitalism were disappearing along with the Fordist system of mass production. At the other extreme, labor process theorists tend to reduce all the recent innovations in work organization to new forms of managerial manipulation, and to see capital's desire for control over labor as an insuperable obstacle to any meaningful improvements in the workplace.

Both schools of thought claim to reject technological determinism, but they consistently posit opposite outcomes of the introduction of new technology. The post-Fordists have devoted a great deal of energy to highlighting instances of skill upgrading associated with new technology.


150

When confronted with examples of deskilling, they retreat to the argument that the post-Fordist perspective is merely an account of emergent tendencies whose full realization is contingent and contested. For their part, labor process theorists also disclaim technological determinism, arguing that the abstract potential of new technology to increase skill levels cannot be realized within the concrete social structure of the capitalist firm. Numerous case studies (including several from the auto industry) have appeared supporting each side of the skill debate.[47] The contradictory evidence suggests that attempts to generalize about overall deskilling or upgrading trends are fruitless. As Kenneth Spenner has persuasively argued on the basis of an extensive literature review, the effects of new technology on skill "are not simple, not necessarily direct, not constant across settings and firms, and cannot be considered in isolation."[48] Rather, as the evidence from GM-Linden also indicates, skill effects are conditioned by a variety of social factors, among them organizational culture and managerial discretion—factors that both the post-Fordists and the labor process theorists ultimately ignore or trivialize.

Like the effect of new technology on skills, the impact of worker participation is impossible to analyze in general terms. Here again, much depends on the organizational context in which participation is introduced, and especially on the relationship between labor and management. The specific characteristics of a firm's management and the relative strength and influence of unions (where they exist) can be crucial determinants of the outcome of workplace reform efforts. Yet neither labor process theory nor post-Fordism takes adequate account of these factors. This is not especially surprising for labor process theory, since as critics of Braverman have frequently complained, he neglected workers' resistance entirely. But the result is that commentators in this tradition tend to view all changes in labor relations as management schemes to enhance its control over the work force, and they can barely even contemplate the possibility that worker participation programs or other innovations could benefit workers in any way. The post-Fordist school, in sharp contrast, tends to romanticize the recent experiments in labor-management cooperation, despite the fact that, in the United States at least, most of them have been forced on unions decimated by increased capital mobility and economic globalization.

Although it is impossible to generalize from any one case, the automobile industry is an especially important test of these competing theoretical perspectives on workplace change if only because it figures so prominently in both of them. Indeed, the very concept of Fordism derives from the history of this industry. Yet organizational inertia seems more compelling an explanation for the recent history of automobile manufacturing than either theory. In recent years, automotive manage-


151

ment's reluctance or inability to abandon its longstanding system of work organization or its tradition of authoritarianism vis-à-vis labor has meant that both technological change and experimentation with participation have produced only a superficial transformation in the workplace. Both were introduced in response to a crisis of international competition, and in both cases management undertook the changes with a limited understanding of their potential impact. Consequently, they have neither resolved the continuing problem of foreign competition nor produced the kinds of benefits for workers—skill upgrading and increased job satisfaction—that the optimistic projections of post-Fordist theorists promised. Yet the existence of more positive examples, such as the NUMMI plant, suggests that the alternative view of the labor process theorists, which constructs capitalist control imperatives as inherently incompatible with the possibility of changes that can offer real benefits to workers, is also problematic.

If management's ineptitude and bureaucratic inertia are the main reasons for the limited impact of new technology and the new industrial relations in the U.S. auto industry, the weakness of the U.A.W. has also played a role. Following the habits it developed in the decades after World War II, the U.A.W. continued to cede to management all decisions about technology and its applications. It has yet to demand that jobs be redesigned in tandem with new technology so as to maximize the benefits to its members. The union has been more engaged in the issue of worker participation, but because in most cases QWL and teams were introduced in the context of massive job losses and industry overcapacity, the terms of labor-management "cooperation" were largely dictated by management. The U.A.W. necessarily shares the domestic industry's concern about restoring international competitiveness, but rather than serving as a basis for genuine cooperation, this concern has all too often become a whip for management to use in extracting concessions on wages and work rules, sending the union into a spiral of declining power and legitimacy and severely weakening the plant seniority system that had been its historical hallmark. In exchange, the U.A.W. has sought enhanced job security provisions, but it has won only modest improvements in this area, while plant closings and job losses continue. Contrary to the popular belief that union strength is an obstacle to restoring American industry to an internationally competitive position, the weakness of unionism in this period of potentially momentous changes in the workplace may be the real problem, along with the organizational ineptitude of management. The sad truth is that both labor and management in this critical industry are ill-prepared to face the future.


152

Eight—
The Blue-Collar Working Class:
Continuity and Change

David Halle and Frank Romo

The blue-collar working class in America (and elsewhere) has always evoked extreme pronouncements about its political and social attitudes. Observers have long been drawn to one of two polar positions: either the working class is a conservative force that is integrated into the class structure or the working class is a radical force at odds with the middle class and with capitalists.[1]

In the depression years of the 1930s and in the context of the burgeoning of radical new labor unions affiliated with the CIO, many observers saw a radical and even revolutionary American working class. After World War II, by contrast, in the context of a sustained period of economic growth in the West, the model of a working class integrated into the mainstream of society (and often dubbed "affluent") gained ground. The American working class was, at that time, usually seen as the extreme case among the Western working classes (just as America was the economically and politically dominant capitalist society), and the phrase "the American worker" became, for some, a shorthand term for a working class that was politically quiescent and socially integrated.[2] In the 1960s and 1970s, a model of the radical working class regained popularity, as a series of studies disputed the idea that the working class was integrated into society or especially content with its position.[3] Now the pendulum has swung again, and the model of the quiescent (if not content) American working class has returned to dominance.[4]

The oscillation between these extreme models has to do in part with actual changes in the position and attitudes of the working class itself. Blue-collar Americans were, for example, surely more dis-

The names of the coauthors of this chapter appear in alphabetical order. The authors wish to thank James Bardwell for his help with computer programming.


153

content and more inclined to political radicalism in the 1930s than in the 1950s. But in part the pendulum swings between the two models because neither is fully adequate to capture the situation of blue-collar workers in advanced capitalism in the United States (and elsewhere). A convincing model has to take account of three separate, though related, spheres that influence blue-collar lives and beliefs. There is, first of all, life at the workplace—in the mode of production. It is on this crucial sphere that many classic studies of blue-collar workers have concentrated. Second, there is life outside the workplace—the neighborhood of residence, family, and leisure life. With suburbanization and the widespread possession of automobiles, life outside the workplace is often located at a considerable geographic distance from the plant or other work site. Finally, there is life vis-à-vis the government, especially the federal government. This involves the critical act of voting—above all in presidential elections—as well as basic attitudes toward the federal government and the political system, and attitudes toward a whole range of national policy issues. These three areas are somewhat distinct. What is often done, though it should not be, is to focus on just one aspect of workers' lives and from it infer the character of behavior and attitudes in either of the other two spheres.

Here we will use a combination of case studies and national survey data to demonstrate the inadequacies of extreme models of the blue-collar working class that do not take account of each sphere of blue-collar life or of changes that have taken place in those spheres over time. Most of the survey data have been drawn from the National Election Study (NES) carried out by the Survey Research Center of the University of Michigan, which represents the best continuous data series on political attitudes in America. This multidimensional account of working-class attitudes also sheds light on some of the main transformations in American life that have occurred over the past twenty-five or thirty years.

The first question to be addressed is the actual size of the blue-collar working class today, and its relative size compared with the other main occupational groups. In view of prevailing notions of the demise of blue-collar labor in America, it is important to note that the number of blue-collar workers reached its highest level ever in 1989—31.8 million (see figure 8.1).[5] (The blue-collar working class is here defined as consisting of skilled workers, such as electricians and plumbers; factory workers; transportation workers, such as truck and bus drivers; and nonfarm laborers. Men constitute about three-quarters of all blue-collar workers and over 90 percent of skilled blue-collar workers.[6] )

Moreover, blue-collar workers were still a larger proportion of the labor force than either of the two main white-collar groups (see figure 8.2). Thus in 1989 blue-collar workers constituted 27.1 percent of the labor


154

Figure 8.1. Composition of the Civilian Labor Force, Major Occupational Groups, 1900–1989: Number of Workers by Year.

Figure 8.2. Composition of the Civilian Labor Force, Major Occupational Groups, 1900–1989: Percent Composition by Year.


155

force. This compares with the upper-white-collar sector, defined as managers and professionals, who composed 25.9 percent of the labor force, and with the lower-white-collar sector, defined as clerical, secretarial, and sales workers, who composed 24.2 percent.

What is true is that the proportion of blue-collar workers in the labor force has declined, from a peak of 34.5 percent in 1950, and is now declining faster than before. Still, it should be noted, especially given the talk about "postindustrial" or "deindustrial" society, that the proportion of blue-collar workers in the labor force is now either higher than or about the same as it was in the period 1900–1940, when America was unarguably an "industrial society."[7]

Blue-Collar Workers and the Federal Government

Presidential Elections and Political Party Identification

Blue-collar workers were a crucial part of the electoral coalition that Franklin Delano Roosevelt put together for the Democratic party. The current disaffection of blue-collar workers, especially of the skilled and better-paid blue-collar workers, from the Democratic party represents one of the major changes in American politics.

Skilled blue-collar workers voted, by a clear majority, for the Democratic candidate in five of the seven presidential elections that took place between 1952 and 1976 (1952, 1960, 1964, 1968, and 1976); they voted by a clear majority for the Republican candidate only once, in 1972 (see figure 8.3). Less-skilled blue-collar workers (defined here as all blue-collar workers except the skilled ones) also voted, by a clear majority, for the Democratic candidate in five of these seven elections, as shown in figure 8.4.[8] They too voted, by a clear majority, for the Republican candidate only once, in 1960. However, in the three presidential elections since 1976, the picture is far less clear-cut. Skilled blue-collar workers voted more heavily Republican than Democratic in 1988, while splitting their vote about evenly between Democrats and Republicans in 1980 and 1984. Less-skilled blue-collar workers split their vote about evenly between Republicans and Democrats in 1988, voted more heavily Democratic in 1984, and more heavily Republican in 1980. By contrast, upper-white-collar workers have voted, by large majorities, for the Republican candidate in every election from 1952 to 1988, except for 1964, when they were clearly presented with an intolerable candidate in Barry Goldwater (see figure 8.5).

Figures 8.6 through 8.14 give a detailed analysis of the determinants of the blue-collar vote in the 1988 presidential election, showing that


156

Figure 8.3. Presidential Vote, 1952–88: Skilled Blue-Collar Workers.

Figure 8.4. Presidential Vote, 1952–88: Less-Skilled Blue-Collar Workers.


157

Figure 8.5. Presidential Vote, 1952–88: Upper-White-Collar Workers.

Figure 8.6. The 1988 Election: Effects of Union Membership on the Blue-Collar Vote. SOURCE: Based on figures from the logistic regression model presented in the appendix to this chapter.


158

Figure 8.7. The 1988 Election: Effects of Religion on the Blue-Collar Vote. SOURCE: Based on figures from the logistic regression model presented in the appendix to this chapter.

several of the traditional factors associated with voting Democratic still hold for blue-collar Americans. (These figures and figure 8.15 are based on a multivariate logistic analysis of the vote; see the appendix to this chapter for details.) Union members were more likely than non-union members to vote Democratic (figure 8.6). Blue-collar Catholics were more likely to vote Democratic than were blue-collar Protestants (figure 8.7).[9] Blue-collar blacks were more likely to vote Democratic than whites (figure 8.8). And as income rises, the proportion of blue-collar workers voting Republican increases (figure 8.13). Notice, however, that the effect of region is now complex. Ironically, the voting profile of blue-collar workers in the East (controlling for such factors as religious differences) is now rather similar to that of blue-collar workers in the South (figure 8.10). Notice also that gender—not one of the factors traditionally associated with voting Democratic or Republican—still makes no difference. Male and female blue-collar workers are alike in their voting preferences (figure 8.9).
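To make the form of this analysis concrete, the equation below is a purely illustrative sketch of the kind of binary logistic model that underlies such estimates; the actual specification, variable codings, and coefficients are those reported in the appendix to this chapter, and the predictor labels here simply stand in for the variables discussed above (union membership, religion, race, region, gender, age, family income, and education):

\log\left[\frac{P(\text{votes Democratic})}{1 - P(\text{votes Democratic})}\right] = \beta_0 + \beta_1(\text{union member}) + \beta_2(\text{Catholic}) + \beta_3(\text{black}) + \beta_4(\text{South}) + \beta_5(\text{female}) + \beta_6(\text{age}) + \beta_7(\text{family income}) + \beta_8(\text{education})

Each coefficient gives the change in the log-odds of a Democratic vote associated with that characteristic, holding the other predictors constant; the probabilities plotted in the figures are obtained by evaluating the fitted model as one predictor varies while the others are held at representative values. Because the figures distinguish Democratic votes, Republican votes, and not voting, the model in the appendix presumably treats the vote as a choice among three outcomes rather than the simple two-way choice shown in this sketch.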

The movement of blue-collar workers away from Democratic presidential candidates in recent elections is paralleled by, and partly the result of, a tendency that is at least as striking—that of blue-collar workers not to vote at all in presidential elections.[10] Thus in 1980, 1984, and 1988, a larger percentage of skilled blue-collar workers did not vote than voted for either the Republican or Democratic candidate; and among


159

Figure 8.8. The 1988 Election: Effects of Race on the Blue-Collar Vote. SOURCE: Based on figures from the logistic regression model presented in the appendix to this chapter.

Figure 8.9. The 1988 Election: Effects of Gender on the Blue-Collar Vote. SOURCE: Based on figures from the logistic regression model presented in the appendix to this chapter.


160

Figure 8.10. The 1988 Election: Effects of Region on the Blue-Collar Vote. SOURCE: Based on figures from the logistic regression model presented in the appendix to this chapter.

Figure 8.11. The 1988 Election: Effects of Party Identification on the Blue-Collar Vote. SOURCE: Based on figures from the logistic regression model presented in the appendix to this chapter.


161

Figure 8.12. The 1988 Election: Blue-Collar Vote by Age in Years. SOURCE: Based on figures from the logistic regression model presented in the appendix to this chapter.

less-skilled blue-collar workers in four of the five elections from 1972 to 1988, a larger number did not vote than voted for either the Republican or Democratic candidate (see figures 8.3 and 8.4). Further, the proportion of blue-collar workers not voting in the 1988 presidential election was especially high (51 percent of less-skilled and 45 percent of skilled workers). More detailed analysis shows that age, income, and education are the most important determinants of whether blue-collar workers vote (see figures 8.12, 8.13, and 8.14). The younger they are, and the lower their income and level of education, the less likely they are to vote.

Figure 8.15 sums up this tendency of blue-collar workers not to vote. It shows the effect of occupation on the 1988 vote, controlling for race, union membership, religion, age, family income, region, and gender. Blue-collar workers were about as likely as either of the white-collar groups to vote Republican, but much less likely than the upper-white-collar sector to vote Democratic, mostly because they were less likely than the upper-white-collar sector to vote at all.

The upper-white-collar sector is in sharp contrast to the blue-collar sector in the matter of voting. The percentage of upper-white-collar workers who did not vote was low in 1988 (only 14.1 percent)[11] and at no time in the period 1952–1988 was it higher than 15.3 percent (see figure 8.5).[12]


162

Figure 8.13. The 1988 Election: Blue-Collar Vote by Family Income. SOURCE: Based on figures from the logistic regression model presented in the appendix to this chapter.

Figure 8.14. The 1988 Election: Effects of Education on the Blue-Collar Vote. SOURCE: Based on figures from the logistic regression model presented in the appendix to this chapter.


163

Figure 8.15. The 1988 Election: Effects of Occupational Status on the Presidential Vote. SOURCE: Based on figures from the logistic regression model presented in the appendix to this chapter.

Figure 8.16. Party Identification, 1952–88: Skilled Blue-Collar Workers.


164

Figure 8.17. Party Identification, 1952–88: Less-Skilled Blue-Collar Workers.

Figure 8.18. Party Identification, 1952–88: Upper-White-Collar Workers.

Changes in the political party identification of blue-collar workers since 1952 are also central. Blue-collar workers once identified in large numbers with the Democratic party. For most of the time from 1952 to 1968, 60 percent or more of less-skilled blue-collar workers saw themselves as Democrats (the exception is 1960 for less-skilled blue-collar


165

workers), while only 20 percent or less saw themselves as Republicans (see figure 8.17). During most of the same period, 50 percent or more of skilled blue-collar workers saw themselves as Democrats, while only 25 percent or less saw themselves as Republicans (see figure 8.16). There have been two clear changes since 1972. First, there has been a decline in the proportion of blue-collar workers identifying as Democrats (among the less-skilled, the proportion has hovered around 35 percent since 1972; among the skilled, it stabilized in the mid-forties until 1988, when it dropped sharply to 23 percent). Second, there has been a large increase in the proportion of blue-collar workers reporting no party identification (among less-skilled workers it is now about 40 percent; among skilled workers, it rose to about 38 percent in the period 1972 to 1984, and then climbed sharply in 1988). Interestingly, there has been no major shift of party identification toward the Republicans.

Attitude toward the Political System and Power Structure

The belief that government, including the federal government, is in the hands of a small number of organized groups who have unofficially usurped power is widespread and striking. This belief is common among blue-collar workers, though also among other occupational groups. When asked whether they thought the government was run for the benefit of everybody or for the benefit of a few big interests, 59 percent of blue-collar workers in 1984 answered that the government was run for the benefit of a few big interests. About the same percentage of Americans in upper-white-collar, lower-white-collar, and service-sector occupations agreed, as did 51 percent of housewives.[13]

Survey data going back to 1964 (when the question was first asked) suggest that this belief has been a fairly stable part of the political outlook of most Americans, including blue-collar workers. Thus in every election year except 1964, at least 40 percent of the entire population has believed that the government is run for a few big interests, and in five of the seven election years in this period more than 50 percent of the population has believed this. As in 1988, variation by occupation is not especially pronounced; blue-collar and upper-white-collar Americans both followed this trend from 1964 onward (see figures 8.19 and 8.20). The survey studies do not explore these beliefs further. For example, they do not ask the obvious follow-up question, namely, which are the "few big interests" for whose benefit so many blue-collar workers (and other Americans) believe the government is run. However, data from detailed case studies give an indication of an answer. A study of employees (almost all truck drivers) of a California company that delivers packages, a study of blue-collar and lower-white-collar Italians in Brooklyn, and a study of blue-collar chemical workers in New Jersey, all came to similar conclusions.[14] The vast majority of blue-collar workers believe that Big


166

Figure 8.19. Who Benefits from Government, 1964–88: Blue-Collar Workers.

Figure 8.20. Who Benefits from Government, 1964–88: Upper-White-Collar Workers.

Business really runs America. The dominant view is that corrupt politicians are a venal facade behind which major corporations, "Big Business," prevail in politics and economics. Remarks like "it's business that runs the country," "big corporations are behind everything," "the [political] power is in the hands of the people with money," and "oil, steel,


167

insurance, and the banks run this country" are commonplace. These were typical comments: "Politics? It's all money! Big Business pays out money to get what it wants." "Who runs the country? Well, I suppose the president does. He makes the decisions. Of course, business is behind him. They make the real decisions. Politicians are all on the take."

That this attitude toward Big Business is widespread is also suggested by Erik Olin Wright's survey data, which found that over 74 percent of blue-collar workers believe that "big corporations have too much power" in America. It is noteworthy that, in terms of their beliefs about the power of corporations in society, American blue-collar workers are just as class conscious as workers in Sweden (presented in Wright's analysis as far more class conscious than American workers). For in both societies, between 75 percent and 82 percent of blue-collar workers believe that "big corporations have too much power" in their respective countries.[15] This underlines the importance of considering separately the three spheres: attitude toward the political regime, attitude toward the work setting, and attitude toward life outside the workplace.

Despite these critical and skeptical beliefs that American blue-collar workers have about who runs the country, the lack of approval for alternatives to the current political system is notable. The general acceptance of the American Constitution ranges from enthusiasm ("it's the best in the world") to lukewarm ("I complain a lot, but it isn't any better anywhere else"). This phenomenon needs explaining. In part it is based on a distinction between the system and those who operate it, between politicians and the Constitution: the political system is sound, but it is in the hands of scoundrels. In part, lack of support for alternative political systems results from a perception that radical change in the United States is impractical: the country is too large, and potential leaders are too prone to sell out. But in part the widespread acceptance of the Constitution and the political system is based on a key distinction most workers make, either explicitly or implicitly, between freedom and democracy. The United States does offer freedom and liberties, which are very valuable. Consider these typical comments, all made by workers who believe venal politicians subvert the electoral process: "In America you have freedom. That's important. I can say Reagan is a jerk and no one is going to put me in jail." Another worker: "You know what I like about America? You're free. No one bothers you. If I want to take a piss over there [points to a corner of the tavern], I can." Socialism and communism are ruled out in almost everyone's eyes, for they are seen as synonymous with dictatorship. They are political systems that permit neither popular control of government (democracy) nor individual freedom and liberties.[16] Survey data suggest that, like mistrust of government, this attitude toward freedom is widespread among blue- and white-collar Americans.


168

The vast majority both value freedom and consider it an important feature of contemporary America.[17]

Life Outside the Workplace

Home Ownership and Suburbia

The combination of home ownership and suburbia is of considerable importance for understanding the American working class. Together they provide the material context for American blue-collar workers to live, or to hope to live, a residential, leisure, and social life in which the barrier between blue-collar and upper-white-collar workers is considerably muted (at least, as compared with the typical workplace situation of blue-collar workers).

The high rate of home ownership among Americans, including blue-collar Americans, has for a long time been striking. Back in 1906, Werner Sombart contrasted the United States with his native Germany: "A well-known fact . . . is the way in which the American worker in large cities and industrial areas meets his housing requirements: this has essential differences from that found among continental-European workers, particularly German ones. The German worker in such places usually lives in rented tenements, while his American peer lives correspondingly frequently in single-family or two-family dwellings."[18] By 1975, three-quarters of all AFL-CIO members owned houses.[19] Home ownership not only offers blue-collar workers the possibility of economic gain but also provides a site where they can control their physical and social surroundings—not, of course, completely, but far more than in the work setting where they are typically subordinate to the authority of a direct supervisor, as well as of management and the owners.[20]

Suburbanization, in combination with home ownership, has played a crucial role in undermining working-class residential communities, especially after World War II. Suburbanization can be defined as a process involving two crucial factors. First, there is the systematic growth of fringe areas at a pace more rapid than that of the core cities; second, there is a life-style involving a daily commute to jobs in the urban center.[21] The regular commute to a workplace a considerable distance from the place of residence is an important factor in the fading of working-class residential communities. Many classic labor movements established their strongholds in the nineteenth century in towns and urban areas that were not especially large (by later standards), or especially spread out. Paterson, New Jersey, for example, had only 33,000 inhabitants in 1870. These places were typically urban villages, where, as Eric Hobsbawm put it, "people could walk to and fro from work, and sometimes go home in the dinner-hour . . . places where work, home, leisure, industrial relations, local government and home-town consciousness were inextricably mixed together."[22]


169

In fact, suburbanization involving the commute to work by public transport started before many of these working-class communities were formed. It began in 1814, with the first steam ferry, and continued with newer modes of public transport (the omnibus in 1829, the steam railroad in the 1830s and 1840s, the electric streetcar in the late 1880s).[23] Each of these developments doubtless somewhat undermined working-class occupational communities. But so long as workers were dependent on public transport to get to the workplace, there were limits to where they could live (nowhere too far from public transport).[24] After World War II, as automobiles became widely owned by blue-collar workers, a qualitative change occurred. Workers could live anywhere they could afford that was within commuting range. And since the incomes of better-paid blue-collar workers often approached, equalled, or exceeded those of several upper-white-collar groups (such as teachers and social workers), there developed many occupationally mixed suburbs, where the proportion of blue-collar workers ranged from about 20 percent to about 45 percent, as did the proportion of upper-white-collar workers.[25] For example, when the vast new suburb Levittown, New Jersey, opened in 1958, these two groups bought houses there in roughly equal proportions. By 1960, 26 percent of the employed males there were in blue-collar occupations, while 31 percent were in upper-white-collar occupations.[26]

This residential context provides the framework for the marital and leisure lives of many blue-collar Americans. Several other factors that are also important influences on the leisure lives of blue-collar workers cut across occupational or educational lines. These include gender, age, position in the marital cycle, and income level. For example, many blue-collar workers are enormously interested in sports, as participants and spectators. Among the sports in which they participate are hunting, fishing, and softball; golf, traditionally an upper-white-collar activity, has grown in popularity among blue-collar workers. And, like other American males, many blue-collar workers spend considerable time watching sports on television. Clearly, this interest in sports, shared in many ways by upper-white-collar males and other Americans, has as much to do with gender as with class.

It is true that certain factors give the lives of blue-collar workers a distinctive flavor. In particular, they typically have modest levels of education (an average of twelve years) as compared with upper-white-collar workers (an average of fifteen years of education).[27] Partly as a result, blue-collar workers are less likely than upper-white-collar workers to be interested in high culture (opera, ballet, classical music, serious theater). However, these differences should not be exaggerated, for the level of interest in high culture among upper-white-collar workers is not great. For example, a survey conducted in the early 1970s on exposure to the arts in twelve major American cities showed that no more than 18 percent of


170

managers and professionals had been to a symphony concert in the past year, no more than 9 percent had been to the ballet, and no more than 6 percent had attended the opera.[28]

Finally, there is the issue of marital life. There are certain features of working-class life that may add a distinct flavor to the marriages of blue-collar workers. For example, blue-collar jobs can carry somewhat low status as compared with upper-white-collar jobs and even as compared with some lower-white-collar jobs. Some couples' comments suggest that wives of blue-collar men sometimes resent their husbands' low status occupations. And the modest level of education that blue-collar workers typically possess may affect the character of their marriages; for example, some studies suggest that the level and quality of "communication" between spouses increase with their amount of education.

Still, as with leisure life, there are a variety of forces that affect the marital lives of blue-collar workers but that are by no means confined to them. These include the conflicting demands of home life and work life, the difficulties (and benefits) that arise when both spouses work, and the host of questions associated with raising children (all of which are discussed in other chapters of this book).[29] The best studies of the marital lives of blue-collar Americans suggest that there are as many similarities as differences between their marital lives and those of upper-white-collar people.[30] One explanation is that, as with leisure lives, gender differences are at least as important as class differences. For example, whatever their class, many American wives face the likelihood of being able to find jobs only in poorly paid, lower-white-collar occupations and, at home, of having the major responsibility for child care and housework.[31]

Life at the Workplace

It is in the workplace that differences between blue-collar and white-collar workers, especially upper-white-collar workers, are most pronounced. Blue-collar jobs are often dirty and sometimes dangerous, and usually require some degree of physical labor (hence the need to wear special protective clothes—the "blue collar").[32] In addition, such jobs usually involve the following features: (1) work that is repetitive and therefore dull; (2) work that is clearly connected to the creation of a tangible product; (3) work that offers little chance of upward mobility (workers may rise to first-line supervision, but above that level, lack of educational qualifications poses a serious barrier); and (4) work that is supervised, in an obtrusive or unobtrusive manner (there is human supervision, and there is the mechanical supervision of a time clock). These features provide enough real basis for distinguishing blue-collar from upper-white-collar jobs and, to a lesser extent, from lower-white-collar jobs.[33] In occupational settings with a va-


171

riety of work levels, management usually has little difficulty deciding which workers should be classified as blue-collar and therefore be assigned to distinct work areas and required to wear special work clothes, though some groups on the margin may be hard to classify.

Class Consciousness

Last, but definitely not least, there is the question of class consciousness. How do blue-collar workers see their position in the class structure, with whom do they identify, and whom do they oppose? These questions have always been, and remain, central in the debates over the blue-collar working class. In a recent article dramatically titled "Farewell to the Labor Movement?" Eric Hobsbawm, one of the foremost socialist historians, stressed the question of class consciousness:

It is class consciousness, the condition on which our parties [mass socialist or workers parties] were originally built, that is facing the most serious crisis. The problem is not so much objective de-proletarianization, but is rather the subjective decline of class solidarity. . . . What we find today is not that there is no longer any working class consciousness, but that class consciousness no longer has the power to unite.[34]

Hobsbawm cites the fact that in 1987 almost 60 percent of British trade union members voted for parties other than the Labour party. Clearly this is comparable to the tendency for blue-collar Americans nowadays to be at least as likely to vote Republican as Democratic in presidential elections.

Much of the debate over class consciousness has revolved around, or at least begun with, the issue of whether blue-collar workers tend to see themselves as "working class" (and therefore more class conscious) or "middle class" (and therefore less class conscious). It is, then, surprising to discover that in 1988, asked if they saw themselves as "working class" or "middle class," 75 percent of American blue-collar workers said working class. Further, this is only a little less than in 1952, when 80 percent of blue-collar workers categorized themselves as working class in response to the same question (see figure 8.21). Indeed, the proportion of blue-collar workers categorizing themselves as working class has never fallen below 64 percent in the period between 1952 and 1988. Clearly a certain kind of working-class identity can coexist with a declining tendency for blue-collar workers to vote for Democratic presidential candidates and to identify with the Democratic party. This suggests a problem with the debate over class consciousness, which, as we have pointed out, has long pervaded the general debate over the blue-collar working class, namely, the tendency to infer from one area of blue-collar life the nature of behaviors and beliefs that prevail in other areas of those lives. In the


172

Figure 8.21. Social Class Identification by Major Occupational Group, 1952–88: Blue-Collar Workers.

Figure 8.22. Social Class Identification by Major Occupational Group, 1952–88: Upper-White-Collar Workers.


173

case of class consciousness and class identity, this amounts to assuming that blue-collar workers have a single image of their position in the class structure.

A central theme of this chapter has been that the lives of blue-collar workers revolve around three separate, though related, spheres—life at the workplace (in the mode of production), life outside the workplace (residential, marital, and leisure), and life vis-à-vis the federal government. Indeed, there is reason to think that many American blue-collar workers have three social identities, each relating to one of these spheres. These identities are that of the "working man" (or "working woman" for female blue-collar workers), with reference to life at the workplace; that of being "middle class" or "lower middle class" or "poor," with reference to life outside the workplace; and that of being part of "the people" or "the American people," with reference to the notion of the individual citizen vis-à-vis the federal government and the related power structure. If these spheres have not emerged clearly in much previous research, it is because the main methods used to study class consciousness have tended to encourage, explicitly or implicitly, only one of these identities.

The analysis that follows is based on David Halle's study of class identity among blue-collar chemical workers in New Jersey. These workers were, in several ways, among the better-off blue-collar workers. They were comparatively well paid and unionized; about one-quarter of them were skilled; 69 percent were homeowners. They were all men, reflecting the dominance of men in more desirable blue-collar jobs.

Consider, first, the concept of "the working man." A close reading of formal and informal interviews reported by a variety of researchers suggests that male blue-collar workers in America commonly refer to themselves as "working men," but rarely as "working class." This can be seen in interviews with voters during the 1968 and 1972 presidential election campaigns; in the views working-class residents of a new suburban township expressed about their preferred political candidate; from the comments of a group of skilled workers in Providence, Rhode Island; from comments of a group of white working-class males in an East Coast city; from comments of workers in Milwaukee, Chicago, and Pennsylvania; from comments of auto workers in Detroit; and from comments of Italian construction workers in Brooklyn.[35] The concept of the "working man" has also been central in the history of the American labor movement. For example, when trade and craft workers before the Civil War founded political parties, they called them "Workingmen's Political Parties," and the Workingman's Advocate was the name of one of the most important newspapers of the nineteenth century.

The concept of the working man, among the chemical workers studied by Halle, has as its central idea the notion that blue-collar work takes


174

a distinctive form and is productive in a way that the work of other classes is not. This notion has two central components. One involves the features of the job. Being a working man involves one or more of the following clusters of related ideas: (a) physical work ("It's hard physical work," "It's working with your hands"); (b) dangerous or dirty work ("We get our hands dirty"); (c) boring and routine work ("We do the same thing over and again"); (d) factory work (as opposed to office work); (e) closely supervised work ("We have to punch in and out," "We're told what to do").

The other central component of the concept of the "working" man links it to a moral and empirical theory about who really works in America. It implies, in one or more of the following ways, that those who are not working men are not really productive, do not really work. Those who are not "working" (a) literally do not work ("Big business don't work, they just hire people who do," "People on welfare aren't working men, they don't want to work"); (b) perform no productive work ("Teachers aren't teaching the kids anything," White-collar office workers "just sit on their butts all day"); (c) are overpaid ("Doctors earn huge fees," "Lawyers charge whatever they want").

The combination of the "job features" and the "productive labor" aspects of the concept logically entails the idea that only those whose labor involves such job features are productive. As a result, blue-collar work is generally seen as productive. But those whose work lacks many or all such job features, definitely big business and the white-collar sectors in general, are not.

A central point about the concept of the working man is that the term expresses both class and gender consciousness. It expresses class consciousness in implying that blue-collar work is especially productive. But it also implies that blue-collar work is for men (working man ) rather than women, which is a form of gender consciousness. This reflects the history of American labor. In the early stages of industrial growth, women (and children) were the first factory workers, for at that time such jobs were seen as less desirable than agricultural work. As the status and pay of factory and other blue-collar work rose, women were pushed out of almost all except the least desirable jobs. The blue-collar working class is now composed primarily of men, and this is especially true for the better paid and more highly skilled blue-collar jobs.

Among the chemical workers Halle interviewed, the idea that blue-collar work was for men was a form of sexism that most workers were prepared to explicitly support in discussing their own jobs. For example, they would maintain, sometimes in arguments with those of their wives who are feminists, that women cannot be chemical workers because they are too weak to move heavy chemical drums. But such sex stereotyping


175

of occupations is under increasing attack in the United States. As a result, few workers were prepared to explicitly defend this sort of view for the entire spectrum of blue-collar jobs.

This discussion also raises the question of how female blue-collar workers see their position in the class structure at work. Naturally, they see themselves as working women rather than working men. How they use the concept of the "working woman," and how its meaning compares with the concept of the working man, is a question that has scarcely been investigated.[36]

The blue-collar workers that Halle interviewed also place themselves in the class structure in part according to their life away from work rather than their life on the job. In this second image, they assume a class structure composed of a hierarchy of groups that are distinguished, above all, by income level but also by standard of living and residential situation. Income level, life-style, consumer goods, and neighborhood constitute the material framework of their lives outside work. (It is true that income originates from their employment, but its effect on their lives is felt outside the workplace, where almost all income is spent.) These criteria for determining position in the class structure increase the range of persons with whom workers consider they have common interests (as compared with the concept of the working man). Thus, though most see clear gaps between their situation and those of the upper and lower extremes (for instance, "the rich" and "the poor"), the categories in between are almost all ones to which they consider they do or could belong. As a result, according to this perspective, the class structure has a sizeable middle range that displays some fluidity, permits individual movement, and takes no account of a person's occupation. This reflects the actual ability of workers, in their life outside the factory, to enjoy a certain mobility through their choice of house, neighborhood, possessions, and life-style.

Income level is the most important of the factors underlying this second image of class. Almost everyone has at least a rough idea of the income distribution in America and his place within it. Workers read government statistics in newspapers and magazines on the average income of an American family, and they are aware of estimates of the income level needed to maintain a minimum, a comfortable, and an affluent standard of living. The federal and state income tax systems both entail a picture of the class structure based on income, and most workers follow with keen interest the relation between their weekly earnings and the taxes deducted from their paychecks. Income level is not the only criterion underlying class distinction based on the setting outside work. Life-style, material possessions, and the quality of residence and neighborhood are other criteria that people often use.

Most, but not all, workers place themselves in the middle of the hier-


176

archy (below the "rich" and above the "poor"). But some identify with a category between the poor and the middle class. This view is most common among younger workers. They may have a mortgage, young children, and a spouse who stays at home to look after the children. But for these workers, being middle class implies being able to maintain that life-style without economic pressure. They deem their own situation below that of the middle class because they cannot live such a life-style without a strain—perhaps a serious strain—on their resources. Their income level, material possessions, and life-style make them better off than the poor, but not comfortable or free from major economic worries (as they believe the middle class to be).

The chemical workers studied by Halle were comparatively well paid for blue-collar workers, so it is likely that numerous less well paid blue-collar workers, in thinking of themselves outside the workplace, would classify themselves as below middle class.[37] The coexistence of these two identities—that of being a "working" man, with reference to life at work, and that of being middle class or less, with reference to life outside the workplace—would explain the large number of blue-collar workers who categorize themselves as "working class" rather than "middle class" in response to a survey question on that topic (see figure 8.21). Some workers categorize themselves as working class because they think of themselves as "working" men. Others place themselves in the working class because they are thinking of their position in the class structure outside work and believe their income level or life-style is not high enough to place them in the middle class. Either way, the forced choice of "working class" or "middle class" conceals the coexistence of two images of position in the class structure.

Almost all the blue-collar chemical workers have what amounts to a third image of their position in the class structure. They routinely use the concepts of "the American people" and "the people" in a populist sense. This concept involves the idea of a clear opposition between the power structure, especially big business and politicians, and the rest of the population. According to this view, "the American people" means all those excluded from the heights of political and economic power. Consider this worker, discussing corruption in politics: "Take Johnson for example. When he entered the White House he had $20,000 and then he bought all those estates with the American people's money."

This populist current is the third major aspect of the class consciousness of these workers. The concept of the working man refers to a position in the system of production. The concept of being middle class or lower middle class refers to a position outside work—to a life-style and standard of living. The concept of the people, or the American people, in the populist sense, refers to the division between all ordinary citizens and those with political and economic power.


177

Conclusion

The situation of blue-collar workers is complex and cannot be summed up by approaches that assume that the three main areas of blue-collar life are changing in concert. On the federal level, there is a movement away from voting for Democratic presidential candidates and away from voting at all, which is especially pronounced among younger workers. This has been accompanied by a diminished identification with the Democratic party (though identification with the Republican party has not taken its place). It is this fading of party loyalty and, perhaps, the declining tendency of blue-collar workers to vote at all, that is probably the most distinctive feature of the later decades of the twentieth century. If class solidarity for blue-collar Americans means voting for Democratic presidential candidates and identifying with the Democratic party, then class solidarity is definitely on the wane.

However, a majority of blue-collar workers (and other Americans) believes that the country is "run by a few big interests," particularly by large corporations. And there is reason to think that many blue-collar workers, like many other Americans, will at times subscribe to a version of populism that contrasts "the people" (as those excluded from the heights of political and economic power) with the power structure (above all, big business and politicians). This entire perspective has probably long been a central component of the belief system of many ordinary Americans. (It was, for example, surely prominent during the "trust-busting" movement of the early 1900s.) It is likely to remain so as long as large corporations (American or foreign) play a central role in American life.

Further, the vast majority of blue-collar Americans appear to see themselves, in the workplace, as "working men" (or "working women"), with an implicit solidarity at least with other blue-collar Americans (and probably, in varying degrees, with lower-white-collar Americans, too). This reflects a kind of class consciousness and identity that has long been important and is unlikely to fade, so long as the distinctions in the workplace between blue-collar workers on the one hand and white-collar workers (especially upper-white-collar workers) on the other hand, are pronounced. The current weakness of the union movement is significant in its own right, but may not diminish this class identity. Indeed, to the extent that blue- and lower-white-collar workers are less protected by unions than they once were, their feelings of vulnerability in the face of, and hostility toward, the corporations that employ them are as likely to increase as to wane.

Outside of the workplace, class identity is somewhat more fluid, reflecting the greater degree of penetration and intermingling of blue- and white-collar people outside the workplace—in places of residence, in leisure, and in marital lives.


178

Some of these trends in the attitudes and behavior of blue-collar workers have been present for a long time. Others are more recent. Examining a number of arenas of working-class experience at once, and allowing each to express its own internal dynamics, shows the inadequacies of the two prevailing models of the working class—the radical working class and the integrated working class—each of which focuses on one or two areas of experience to the exclusion of the others. Social life is complex, and the fact that blue-collar workers have several bases for their attitudes and behavior reflects this complexity, which must be incorporated into any model of the American working class.


179

Appendix to Chapter Eight

Several data sets were used to construct the figures presented in this chapter. The source of the employment data in figures 8.1 and 8.2 is explained in note 5. Figures 8.3, 8.4, 8.16, 8.17, and 8.18, which chart presidential vote and political party identification by year and major occupational groupings, are based on the National Election Study (NES) combined file, 1952 to 1986, produced by the Survey Research Center at the University of Michigan. Figures 8.19, 8.20, 8.21, and 8.22, which chart beliefs about who benefits from government and social class identification, for selected years from 1952 to 1988, are based on the specific NES studies for the years reported. Figures 8.6–8.15, which take a detailed look at the blue-collar vote in 1988, are based on a multinomial logistic regression equation calculated on the NES data for 1988.

The 1988 logit model can be described as follows. A multinomial logistic regression model was calculated (using maximum likelihood estimation) on the 1988 National Election Study data to assess the impact of demographic variables on the presidential vote.[38] The model is linear when the dependent variable is converted to the log of the odds ratios. The dependent variable in this analysis comprises three categories: Voted Republican; Voted Democrat; and Did Not Vote. The odds ratios are (Voted Republican)/(Did Not Vote) and (Voted Democrat)/(Did Not Vote). These two odds ratios (resulting in the estimation of two simultaneous equations) are sufficient to calculate every odds comparison implied by a three-category dependent variable. Independent variables include family income in thousands of dollars (direct effect); age in years (direct effect); education in years (direct effect); region (categorical effect: East, Midwest, South, West); union membership (categorical effect: yes, no); religion (categorical effect: Protestant, Catholic, Jewish); race (categorical effect: white, black); gender (categorical effect: male, female); occupation (categorical effect: homemaker, upper-white-collar, lower-white-collar, service, blue-collar); and party identification (Republican, Democrat, other). Variables identified as "direct effects" are quantitative insofar as their interval values are entered directly into the design matrix. Categorical effects are qualitative, and each category forms a variable in the model, with the exception of the last category, which is estimated by the intercepts. In this model, categorical variables are estimated using an "effect coded" design matrix.[39]


180

TABLE 8.1a  The 1988 Presidential Election: Analysis of Variance

Effect                    DF    Chi-Square     Alpha
Intercept                  2         68.72    0.0001
Family Income              2         22.32    0.0001
Age in Years               2         51.19    0.0001
Education in Years         2         30.25    0.0001
Region                     6         17.76    0.0069
Union Membership           2          6.86    0.0324
Religion                   4         16.39    0.0025
Race                       2         17.68    0.0001
Gender                     2          0.04    0.9815
Occupational Group         8         15.39    0.0520
Party Identification       4        190.73    0.0001
Likelihood Ratio        1970       1553.69    1.0000
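
In notation added here for clarity (x stands for a respondent's row of the design matrix and β1 and β2 for the two coefficient vectors; none of these symbols appear in the original appendix), the two simultaneous equations and the probabilities they imply are:

```latex
\begin{align*}
\ln(P_1/P_3) &= \mathbf{x}\boldsymbol{\beta}_1, \qquad \ln(P_2/P_3) = \mathbf{x}\boldsymbol{\beta}_2, \\
P_1 &= \frac{e^{\mathbf{x}\boldsymbol{\beta}_1}}{1 + e^{\mathbf{x}\boldsymbol{\beta}_1} + e^{\mathbf{x}\boldsymbol{\beta}_2}}, \qquad
P_2 = \frac{e^{\mathbf{x}\boldsymbol{\beta}_2}}{1 + e^{\mathbf{x}\boldsymbol{\beta}_1} + e^{\mathbf{x}\boldsymbol{\beta}_2}}, \qquad
P_3 = \frac{1}{1 + e^{\mathbf{x}\boldsymbol{\beta}_1} + e^{\mathbf{x}\boldsymbol{\beta}_2}}.
\end{align*}
```

Any other odds comparison, such as Ln(P1/P2), follows by subtracting one equation from the other, which is why the two estimated equations are sufficient.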

The results of the logistic regression model are given in tables 8.1a and 8.1b. Table 8.1a (analysis of variance) assesses the fit of the overall model and the significance of each set of independent estimators. It reveals that, with the exception of gender, all estimators have obtained a chi-square large enough to be significant at an alpha-level of approximately 0.05 or less. At the bottom of table 8.1a is the "likelihood ratio," which permits an assessment of the fit of the model to the underlying data. This statistic is distributed as chi-square with degrees of freedom equal to the number listed at the bottom of table 8.1a. If the chi-square is large relative to the degrees of freedom, the model demonstrates a poor fit, but if it is small relative to the degrees of freedom, then the model exhibits a close fit to the original data. Traditionally, a chi-square that cannot obtain an alpha-level greater than 0.05 is considered a strong indicator that the model does not fit the data. In the case of the model assessed in table 8.1a, the chi-square is such that the alpha-level is at its maximum of 1.0, indicating a very good fit between the model and the data.
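
As a quick check (not part of the original analysis), the alpha-level reported for the likelihood ratio can be reproduced from the two numbers at the bottom of table 8.1a; the sketch below assumes only that SciPy is available:

```python
from scipy.stats import chi2

# Likelihood-ratio statistic and its degrees of freedom, as reported in table 8.1a
lr_statistic = 1553.69
df = 1970

# Upper-tail probability: values near 1.0 indicate no evidence of lack of fit
p_value = chi2.sf(lr_statistic, df)
print(round(p_value, 4))  # approximately 1.0, matching the alpha reported in the table
```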


181
 

TABLE 8.1b  The 1988 Presidential Election Vote: Analysis of Individual Parameters

Effect                                Equation(a)   Estimate   Standard Error   Chi-Square    Alpha
Intercept(b)                          Ln(P1/P3)        -7.04             0.87        64.94    0.0001
                                      Ln(P2/P3)        -4.60             0.82        31.22    0.0001
Family Income        Direct Effect    Ln(P1/P3)         0.03             0.01        20.52    0.0001
                                      Ln(P2/P3)         0.01             0.01         2.88    0.0896
Age in Years         Direct Effect    Ln(P1/P3)         0.05             0.01        44.56    0.0001
                                      Ln(P2/P3)         0.05             0.01        32.85    0.0001
Education in Years   Direct Effect    Ln(P1/P3)         0.25             0.05        25.32    0.0001
                                      Ln(P2/P3)         0.22             0.05        19.61    0.0001
Region               East             Ln(P1/P3)        -0.25             0.19         1.74    0.1875
                                      Ln(P2/P3)        -0.55             0.20         7.49    0.0062
                     Midwest          Ln(P1/P3)         0.31             0.17         3.40    0.0652
                                      Ln(P2/P3)         0.48             0.17         7.93    0.0049
                     South            Ln(P1/P3)        -0.12             0.16         0.60    0.4379
                                      Ln(P2/P3)        -0.29             0.16         3.24    0.0720
Union Membership     Member           Ln(P1/P3)        -0.14             0.12         1.36    0.2435
                                      Ln(P2/P3)         0.16             0.12         1.91    0.1668
Religion             Protestant       Ln(P1/P3)         0.41             0.28         2.16    0.1413
                                      Ln(P2/P3)        -0.29             0.24         1.50    0.2209
                     Catholic         Ln(P1/P3)         0.77             0.29         7.01    0.0081
                                      Ln(P2/P3)         0.47             0.25         3.56    0.0591

182
 

TABLE 8.1b  (continued)

Effect                                Equation(a)   Estimate   Standard Error   Chi-Square    Alpha
Race                 White            Ln(P1/P3)         0.78             0.26         9.43    0.0021
                                      Ln(P2/P3)        -0.26             0.14         3.66    0.0557
Gender               Male             Ln(P1/P3)         0.02             0.12         0.03    0.8611
                                      Ln(P2/P3)         0.00             0.12         0.00    0.9815
Occupational Group   Homemaker        Ln(P1/P3)        -0.08             0.25         0.11    0.7358
                                      Ln(P2/P3)         0.11             0.23         0.21    0.6505
                     Upper White      Ln(P1/P3)         0.23             0.21         1.20    0.2734
                                      Ln(P2/P3)         0.63             0.22         8.17    0.0042
                     Lower White      Ln(P1/P3)         0.08             0.18         0.19    0.6636
                                      Ln(P2/P3)         0.18             0.18         1.02    0.3133
                     Service          Ln(P1/P3)        -0.24             0.25         0.96    0.3280
                                      Ln(P2/P3)        -0.48             0.24         3.98    0.0462
Party Identification Republican       Ln(P1/P3)         1.14             0.15        56.26    0.0001
                                      Ln(P2/P3)        -0.74             0.20        13.22    0.0003
                     Democrat         Ln(P1/P3)        -0.77             0.16        23.63    0.0001
                                      Ln(P2/P3)         0.98             0.15        41.95    0.0001

(a) P1 indicates the probability of voting Republican, P2 indicates the probability of voting Democrat, and P3 indicates the probability of not voting at all.

(b) The Intercept estimates the following omitted categories: Region = South; Union = No; Religion = Jewish; Race = Black; Gender = Female; Occupation = Blue-Collar; and Party = Other.


183
 

TABLE 8.2  1988 Sample Means for Major Occupational Groups

Variable                               Blue-Collar       Service   Lower-White-Collar   Upper-White-Collar
Family Income        Direct Effect      $28,252.00    $20,936.17           $34,271.21           $42,639.45
Age in Years         Direct Effect           37.44         39.24                37.62                40.16
Education in Years   Direct Effect           11.66         11.84                13.26                14.99
Region               East                    16.1%         22.4%                18.5%                21.3%
                     Midwest                 23.0%         32.2%                31.3%                26.2%
                     South                   44.0%         30.9%                31.3%                28.7%
                     West                    17.0%         14.5%                18.8%                23.8%
Union Membership     Member                  30.0%         20.0%                20.9%                18.9%
                     Nonmember               70.0%         80.0%                79.1%                81.1%
Religion             Protestant              76.3%         71.8%                68.9%                65.8%
                     Catholic                22.7%         28.2%                28.2%                30.1%
                     Jew                      1.0%          0.0%                 3.0%                 4.1%
Race                 White                   86.4%         77.0%                82.1%                52.8%
                     Black                   13.6%         23.0%                17.9%                47.2%
Gender               Male                    75.6%         18.4%                31.3%                90.8%
                     Female                  24.4%         81.6%                68.7%                 9.2%
Party Identification Republican              20.2%         21.7%                29.9%                34.5%
                     Democrat                34.4%         40.8%                31.0%                27.7%
                     Other                   45.4%         37.5%                39.1%                37.8%


184

It should be noted that the linear design matrix used in this model is an extreme simplification of the possible interactions among categories of the independent variables and the possible nonlinear direct effects implied by such a complex set of variables. Hence, the fit of this very simple logit model is indeed a significant finding.

Table 8.1b (analysis of individual parameters) gives the logit estimates, their individual standard errors, and the associated chi-squares and alpha-levels. The estimates are linear with respect to the log of the odds ratios, which makes direct interpretation of the estimates nonintuitive. As a result, we have interpreted the estimates in figures 8.5 through 8.15 for the blue-collar vote. That is, we held the effect of occupation constant at "blue-collar" and calculated the ceteris paribus effects of each independent variable on the probability of voting in one of the three ways (Republican, Democrat, No Vote). For each calculation, the effects of all other independent variables included in the model were held at their "blue-collar" mean effects. These means are presented in table 8.2.
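
The sketch below illustrates the conversion step only; it is not the authors' code. It uses the intercepts and the three direct effects from table 8.1b together with the blue-collar means from table 8.2, and it omits the effect-coded categorical terms, so its output will not match the probabilities plotted in the chapter's figures.

```python
import math

def vote_probabilities(logit_rep, logit_dem):
    """Turn the two log-odds, Ln(P1/P3) and Ln(P2/P3), into (P1, P2, P3)."""
    odds_rep = math.exp(logit_rep)   # P1/P3: odds of voting Republican vs. not voting
    odds_dem = math.exp(logit_dem)   # P2/P3: odds of voting Democrat vs. not voting
    p_no_vote = 1.0 / (1.0 + odds_rep + odds_dem)
    return odds_rep * p_no_vote, odds_dem * p_no_vote, p_no_vote

# Blue-collar means from table 8.2 (income in thousands of dollars)
income, age, education = 28.252, 37.44, 11.66

# Intercepts and direct effects from table 8.1b (categorical effects omitted here)
log_odds_rep = -7.04 + 0.03 * income + 0.05 * age + 0.25 * education
log_odds_dem = -4.60 + 0.01 * income + 0.05 * age + 0.22 * education

print(vote_probabilities(log_odds_rep, log_odds_dem))
```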


185

Nine—
The Enduring Dilemma of Race in America

Bart Landry

When future historians look back at the late 1960s, the period will appear in many respects as the golden age of American history. Prosperity was at an unprecedented high, while the economy offered the promise of unlimited growth. A cultural revolution, furthermore, was in the making, and Americans had committed themselves, for the first time in their history, to eliminating what Gunnar Myrdal had called the "American dilemma"—the racism and discrimination that had kept millions of Americans in the position of second class citizens. With the passage of the Civil Rights Act in 1964, discrimination was redefined as racism. No longer was the discriminator a "good old boy" and the fair-minded white person a "nigger lover." Lester Maddox, standing in the doorway of his store, ax handle in hand, was not seen any more as a folk hero, but as a national shame. The first steps in the path toward racial equality had been taken.

How long ago it all seems now. Rather than a fruition of the dreams of those golden days, the past two decades have brought confusion and even retrogression. The cultural revolution of the 1960s and early 1970s was overwhelmed by the economic reality of recessions and a declining economy. Though American society would never be quite the same again, it stopped far short of the goals of the reforms of the 1960s. In less than ten years, the nation was tiring of the effort to extend full opportunity to blacks. A new term entered the lexicon of race relations, "reverse discrimination"—elbowing for room with "equal employment opportunity," "discrimination," and "racism."

The turning point, it seems in retrospect, was a suit by Allan Bakke in 1974 accusing the University of California Medical School at Davis of "reverse discrimination." The decision handed down by the Supreme Court was ambiguous, a victory neither for Bakke nor for those opposed


186

to his position. At the heart of the issue was the nation's commitment not only to provide equal opportunities to all its citizens today, regardless of color, but also to redress the injustices of the past—injustices that have placed blacks at a considerable disadvantage in the competition for desirable jobs. Since Bakke , the courts have been called upon again and again to decide whether the nation can legally redress the market effects of past injustices of slavery and discrimination against blacks. Quotas, timetables, and set-asides have all been challenged. For the time being, the tide has shifted against the struggle of blacks for equality, as a conservative judiciary, including the Supreme Court, has returned numerous decisions that have chipped away at the very foundations of the fight against persistent racial discrimination.

The recent study by the National Academy of Sciences, A Common Destiny: Blacks and American Society, concludes that "race still matters greatly in the United States."[1] Reminders of the truth of this conclusion are numerous in the United States as we approach a new century. They range from racially motivated incidents and attacks on blacks at predominantly white college campuses to racial attacks in several northeastern communities. While surveys have found that the commitment of whites to the principle of equality for blacks has grown steadily over the decades, the authors of the National Academy of Sciences report conclude:

Principles of equality are endorsed less when they would result in close, frequent, or prolonged social contact, and whites are much less prone to endorse policies meant to implement equal participation of blacks in important social institutions. In practice, many whites refuse or are reluctant to participate in social settings (e.g., neighborhoods and schools) in which significant numbers of blacks are present.[2]

Today, whites are more likely to say that "blacks have gone far enough" than that there remains an "unfinished agenda" to be completed. This sentiment exists in spite of the studies of black progress by the National Academy of Sciences, as well as others, that have provided ample evidence of the negative effects of discrimination among blacks.[3] These negative effects, moreover, have continued well after the Civil Rights era and the emergence of a black middle class.[4] A review of the record since 1940 prompted the authors of the National Academy of Sciences study to comment: "The status of black Americans today can be characterized as a glass that is half full—if measured by progress since 1939—or as a glass that is half empty—if measured by the persisting disparities between black and white Americans since the early 1970s."[5] Among the signs of the "half empty" glass is a large economic disparity between blacks and whites that has been traced directly to the discrimination blacks encounter in the employment and housing markets.[6] Caught in this economic disparity are the almost one-third of all blacks who live in


187

poverty, compared to only 11 percent of whites; a growing black underclass incorporating about 13.2 percent of employable black adults in the late 1980s, compared to 3.7 percent of whites; an unemployment rate twice that of whites; continued lower life expectancy than whites; and a serious lag in the proportions of high school graduates who attend college. Though many observers point to the negative impact of a changing economy characterized by a shrinking manufacturing sector and an expanding service sector, the authors of the National Academy of Sciences study unambiguously conclude that "a considerable amount of remaining black/white inequality is due to continuing discriminatory treatment against blacks."[7]

How is it that more than 100 years after emancipation, race is still a salient issue in the United States and blacks continue to lag significantly behind whites on every meaningful economic indicator? Most studies addressing this issue provide descriptions of the remaining black/white gap in indices of economic progress and social well-being. While these studies often offer detailed and invaluable documentation needed by policy makers, they generally fail to offer explanations that might help us understand the persistence of racial inequality. If we are to understand why a movement that began with such promise thirty years ago has, toward the end of the twentieth century, stalled and even gone backward, we need to dig deep below the surface.

In this chapter, therefore, I will not add to already ample descriptions of racial inequality in contemporary America. Because the roots of racism and discrimination are so deep, it is best to rely on a historical approach in analyzing the dynamics by which the present state of black/white relations came into being.

Theories of Racial Inequality

When one sifts through the books and articles on race relations that have appeared over the past fifty years, one finds that the overwhelming majority of scholars have focused in some fashion on the role of individual attitudes. Two of the best examples of this approach can be found in the writings of Gordon Allport and Lloyd Warner. To Allport we owe the emphasis on prejudice as the motivator of discriminatory behavior. Lloyd Warner, for his part, argued that the negative evaluation of all blacks by whites in the South had produced a southern society characterized by a caste division between blacks and whites.[8]

188

These studies led to a preoccupation among social scientists with racial attitudes and an interest in measuring changes in white attitudes toward blacks over time. The best known of these studies were surveys conducted by the National Opinion Research Center (NORC) and published in a series of articles in Scientific American over many years, beginning in 1956. Subsequently, both Gallup and the Institute for Social Research at the University of Michigan took periodic pulses of the racial attitudes of whites. At the heart of these studies was an attempt to measure the extent and depth of prejudicial attitudes held by whites against blacks, and the degree to which these attitudes might be changing over time. From this perspective, white attitudes were the key to black progress. If whites abandoned, or at least softened, their racist attitudes toward blacks, social scientists reasoned, the "race problem" would be solved. At the same time, a kind of social Darwinism informed their thinking, suggesting that white attitudes had to change before discriminatory behavior would cease.[9]

Within this framework, the social distance scale—a measure of the extent to which whites were willing to associate with blacks in various settings characterized by ever greater closeness, from the workplace to interracial marriages—became a major tool. Any sign of a decline in racist attitudes was greeted with enthusiasm, as an indicator of racial progress. While these studies have documented a liberalization of white attitudes toward blacks over the decades, other researchers have continued to discover extensive discriminatory behavior in schools, housing, and the workplace.[10]

Recently, several scholars have turned away from the "individual prejudice" approach in favor of some type of "structural" explanation for the limited progress of blacks, as compared to whites, in American society. One version, advanced by Nathan Glazer, attributes the difference to the allegedly more recent arrival of blacks to urban America.[11] Another, proposed by Thomas Sowell, argues that, coming from a rural background, blacks have been hampered by the absence of a work ethic.[12] Both of these approaches fall under what has been called a "blacks-as-the-last-of-the-immigrants" theory, a theory suggesting that blacks lag behind white ethnics primarily because the latter settled in the urban Northeast and Midwest earlier than southern blacks. Their greater progress, therefore, is simply a matter of opportunities that come with time. Two other explanations differing from the prejudice approach are offered by Bonacich and Wilson. Bonacich blames inequality on the manipulation of black workers by capitalists in their struggle with the white working class.[13] Wilson argues that an increasingly impoverished underclass today is the result of structural shifts in the economy that have resulted in the relocation of jobs from the inner city to the suburbs.[14]

189

The individual prejudice approach attributes the continuing inequality of blacks to racist attitudes held by whites; the structural approach more or less blames impersonal market forces. The first sees a black/white polarization in America. The second tends to focus on the varied experiences of numerous ethnic and ethnic-minority groups and to minimize a black/white polarization. Taken alone, each of these two explanations has serious shortcomings.

Though Gordon Allport argued for a universal tendency among all societies toward prejudice and stereotyping, it is one thing to hold negative attitudes toward individuals and quite another to dehumanize them. It is an even greater leap to predict behaviors such as lynching from negative attitudes or stereotyping. Some scholars even challenge the one-to-one correspondence between prejudice and discrimination that is generally presumed. Earl Raab and Seymour Martin Lipset, for instance, have argued that black stereotypes, such as the Sambo image, are neither direct outcomes of negative attitudes toward a group nor predictors of the actions that might result from stereotypically held beliefs.[15] Other scholars have shown that discrimination can occur in the absence of prejudicial attitudes when the practices of institutions are inherently biased.[16]

In spite of its limitations, however, the structural approach broadens the search for an understanding of racial inequality by requiring an explanation for the variability in economic progress among white ethnics as well as between whites and blacks. In his book Ethnic America , Sowell presents a rank ordering of ethnics using a family income index that shows a variation from 103 for Irish-Americans to 172 for Jewish-Americans; and from 60 for Native Americans to 99 for Filipino-Americans (table 9.1). Such data force us to examine more closely the factors relevant to upward mobility and the degree to which these factors have been available to various groups—including blacks. They also prompt questions about the environment and circumstances encountered by immigrants upon their arrival. The structural approach encourages an analysis of the factors relevant to upward mobility in American society, while the individualistic approach emphasizes a black/white polarization that overshadows the variability among white ethnics and among ethnic minorities. The tendency to view structural forces as the impersonal workings of the market, however, has been called into serious question.

Both Stanley Lieberson and Stephan Thernstrom present carefully analyzed historical data on the experiences of blacks and white ethnics that discredit the theory of "blacks-as-the-last-of-the-immigrants," and point instead to persistent discrimination against blacks by whites.[17] Even in data assembled to demonstrate ethnic variability, such as Sowell's family income index, an economic polarization along white/nonwhite lines is apparent. For although Sowell's index makes it clear that ethnic groups have experienced different degrees of success in scaling the economic ladder, it is also evident that, with the exception of Japanese- and Chinese-Americans, all groups with a family income index above the mean are white.


190
 

TABLE 9.1  Family Income Average
(U.S. average = 100)

Jewish            172
Japanese          132
Polish            115
Chinese           112
Italian           112
German            107
Anglo-Saxon       107
Irish             103
TOTAL U.S.        100
Filipino           99
West Indian        94
Mexican            76
Puerto Rican       63
Black              62
Indian             60

SOURCE: Thomas Sowell, Ethnic America: A History (New York: Basic Books, 1981), 5.

Bonacich's theory of a "split labor market" is similarly limited. While there is no doubt that white employers have at times used blacks as strikebreakers in their struggle with white labor, Bonacich's theory does not account for discrimination in the North that occurred before the influx of black migrants or in periods when the struggle between capital and labor was not intense. Nor does it help us understand why blacks, rather than some other group, were used as strikebreakers, or why all white ethnics united in their opposition to black workers.

This is not to deny that both individual prejudice and structural conditions have had an impact on black progress. However, either explanation taken alone is inadequate. Missing, therefore, is a link between individual prejudice and structural impediments to black achievement. Rather than view prejudice and structural conditions as factors operating independently of each other, it may be more accurate to see them as connected in some systematic fashion. In the remainder of this chapter I will argue that racism and prejudice are not simply the attitudes of malevolent individuals, but are cultural norms into which whites have been socialized and that have found expression in both systemic institutional and individual discriminatory behavior. From this point of view, structural conditions can no longer be viewed as the impersonal forces some have suggested, and racism is raised from the level of individual "quirks" to that of a societal phenomenon requiring analysis and solutions on the societal level.


191

The following discussion will therefore focus on the societal level and will be placed within the general framework of economic progress through upward mobility. From this point of view, there are three issues to investigate: (1) the factors that promote upward mobility in American society and the process through which mobility occurs, (2) the reasons for different degrees of ethnic success, and (3) the reasons for the more limited success of most ethnic minorities. The first two issues will be discussed briefly, while primary attention is directed toward answering the third question.

Getting a Piece of the Pie

Since the publication of the classic work by Peter Blau and Otis Dudley Duncan, scholars have tried to identify the factors that affect an individual's movement up the class ladder. According to the Blau-Duncan model, an individual's movement up the class ladder involves three stages, beginning with family background, moving on through a period during which education and training are attained, and ending in a particular occupation upon entry into the job market.[18] From the family, the individual receives economic support, encouragement, and the social skills needed to negotiate the next two stages—acquiring an education and entering the work force. An individual's educational achievement is greatly affected by the family's economic resources. As one moves up the class ladder, a family's ability to control the environment within which its children will grow and develop increases. A neighborhood in which all or most families belong to the middle class will not only provide more resources for the local school system but also will place children in schools with other students who arrive equally well prepared. Much of their preparation results from the enriching experiences middle-class children are routinely exposed to in families with college-educated parents, experiences such as visits to museums and puppet shows, and possession of children's books and a variety of educational toys.
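
For readers unfamiliar with it, the core of the Blau-Duncan model can be written as a pair of regression equations; the notation below is supplied for illustration and is not taken from this chapter, and the full Blau-Duncan path model also includes a first-job stage:

```latex
\begin{align*}
\text{Education} &= a_1 + b_{11}\,(\text{father's education}) + b_{12}\,(\text{father's occupation}) + e_1 \\
\text{Occupational status} &= a_2 + b_{21}\,(\text{education}) + b_{22}\,(\text{father's occupation}) + e_2
\end{align*}
```

In this simplified form, family background shapes educational attainment, and education and background together shape the occupation an individual enters in the job market.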

Working-class parents above and below the poverty line live in progressively less affluent neighborhoods, depending on income. Their children attend schools with children who often are poorly prepared to begin the educational journey. Because the tax bases upon which the schools' economic structures rely are smaller than in middle-class communities, the resources of such schools are typically inadequate. Differences in the education of working- and middle-class children intensify beyond the elementary level. Children from working-class families—particularly minority children—are more likely to be placed in lower tracks that do not provide preparation for college.[19] Teachers hold lower expectations for them and give them less encouragement to excel.


192

Middle-class children, by virtue of both their educational experiences and their families' greater financial resources, are more likely than working-class children to continue on to college, a must for entry into an upper-middle-class occupation and a chance at the American dream. That the economic success of college-educated individuals far surpasses that of those with only a high school degree has been documented time and time again. In 1989, the median net worth of black college graduates was four times greater than that of college dropouts and six times that of those with only a high school diploma.[20]

As Jencks has pointed out, there is, of course, an element of chance involved in an individual's progress up the class ladder.[21] It is also true that personal or family contacts—"who you know"—can affect the outcome. Nevertheless, the model makes clear that the class position of the family into which we are born greatly affects our future success. Although this country prides itself on being a "land of opportunity," opportunity is not uniformly distributed throughout all classes. Different degrees of individual economic success are not accidental, therefore. They are built into our society's structure by variations in the economic resources of the families upon which we all depend to get started along the road to success. Thus some people begin with high-tech running shoes, others with yesterday's models, and some without any shoes at all.

Governmental programs to "equalize" starting opportunities, or at least minimize differential advantages, have had mixed results. Project Head Start has been a real success, but suffers from underfunding and lack of follow-through at the elementary school level. Efforts to eliminate other educational disadvantages through school desegregation have met with massive resistance. At bottom is the failure of political leaders and white citizens to fully commit the nation to institutionalizing equality of opportunity. The goal seemed like a good idea during the late 1960s, when prosperity held the promise of eliminating the black/white economic gap without sacrifices by whites. In the economically insecure decades of the 1970s and 1980s, however, whites have been prone to argue that blacks have gone far enough, or that they lag behind economically through their own fault.

Differences in the Degree of Ethnic Success

But if the Blau-Duncan attainment model identifies the factors and process through which individuals climb the class ladder, why are there differences in economic success on the aggregate level between ethnic groups? Why have more members of some ethnic groups moved into the middle class than others? Stephen Steinberg disputes the traditional view that different rates of ethnic success are due to differences in "their


193

value systems" and that therefore the causes are "to be found within the groups themselves."[22] Rather, he argues, external factors to which the entire group was exposed —such as patterns of settlement, time of arrival, external obstacles, and opportunities in the immediate environment, as well as resources possessed, such as skills and education—have been far more important than internal values.

Some ethnic groups, for instance, tended to settle in rural areas, others in industrial cities. Arriving early in our history and coming from rural backgrounds in Europe, Germans and Swedes sought out the opportunities provided by rich inexpensive land in the Midwest. Other groups, like Jews, Italians, and Irish, seemed to find urban areas more suited to their previous experiences, or they arrived at times when land was no longer plentiful and cheap. On the whole, however, the time of their arrival alone seems to explain little of their eventual success. Poles and Jews who immigrated around the turn of the century have higher family income indices than Germans and Anglos who came decades earlier in the nineteenth century.

The early occupational experiences of ethnics sometimes had a serendipitous explanation. The far greater number of Irish girls in domestic work, compared with Jewish or Italian girls, is a case in point. While many analysts have attributed this pattern to different cultural values with respect to domestic work, Steinberg notes that Irish girls often immigrated alone, while Jewish and Italian women accompanied their families. Since domestic work provided lodging as well as income, it was well suited to single girls in cities. Jewish and Italian girls had no such need and therefore concentrated more in the garment industry or factory work. As a result of these immigration patterns, 54 percent of employed Irish women were classified by the U.S. Census as engaged in "domestic and personal" work in 1900, compared to only 9 percent of Italian and 14 percent of Jewish female workers. In contrast, only 8 percent of Irish women worked in the needle trades, compared to 41 percent of Jewish and 38 percent of Italian women.[23]

It is along these lines that Steinberg also explains the rapid upward mobility of Jews to their position above all other ethnic groups in the United States. When Eastern European Jews arrived in the United States they found a particularly good fit between their urban background and skills and the employment needs of a burgeoning garment industry in New York City. Forbidden to own land in Russia, they nevertheless "worked in occupations that prepared them for roles in a modern industrial economy."[24] According to the 1897 Russian census, 38 percent of Jews worked as artisans or were employed in manufacturing, primarily in the production of clothing. Another 32 percent were in commerce. In commerce they were often middlemen, linking the urban and rural


194

economies, a role that has been successfully played earlier by Korean immigrants on the West Coast of the United States and by Koreans today.[25]

Comparing Jews with six other ethnic groups that arrived in the United States between 1899 and 1910 (English, Germans, Scandinavians, Italians, Irish, and Poles), Steinberg found that the highest percentage of skilled workers was found among Jews: 67 percent.[26] The next highest percentage of skilled workers, 49 percent, was found among the English, a group no longer heavily represented among immigrants at that time. Among Germans, only 30 percent of immigrants were skilled; while the lowest percentage was found among Polish immigrants, only 6 percent of whom were skilled.

Skills are useless, however, without a demand for those skills in the area of settlement. By chance, Russian Jews found a demand for their extensive array of skills at the port of arrival, New York City, particularly in the clothing industry, which was primarily concentrated there. As a result of the fit between their extensive skills and the city's economy, Jews "ranked first in 26 of the 47 trades" tabulated by the U.S. Immigration Commission in 1911. The rapid upward mobility of Eastern European Jews, therefore, can be traced to the occupational fit they encountered at their point of entry into the United States. Their occupational success was then translated into sponsorship of their children in similar occupations and in educational attainment. Although, like every other immigrant group, Jews encountered discrimination, it was not sufficient to prevent them from entering skilled occupations or educating their children. They thus followed the classic pattern of each generation doing a little better than the previous one. While other ethnic groups also followed this same pattern, it is clear that the external factors encountered and the skills possessed differed from group to group. Thus some were more successful than others in moving up the class ladder.

The Penalty for Being Black

In spite of their slave experience, blacks in many respects occupied an advantageous position in 1865 relative to most European immigrants who arrived after this date. They were, first of all, experienced farmers in the southern agricultural economy. They knew the land; they understood the crops and the means of cultivating them. Secondly, blacks constituted most of the skilled workers in the urban South, having learned and practiced numerous skills in the slave economy. Thirdly, though their illiteracy rate was high, they knew the language of the country and understood its customs. They were not strangers in a strange land. Finally, blacks lived in proximity to the growing number of industrial jobs in the North. As slaves, many had worked in tobacco factories and in the towns of the South.


195

Had blacks received the promised forty acres and a mule, or at least been allowed to acquire land, thousands would have become small independent farmers at a time when land was still the backbone of the southern economy. Thousands more—if given the chance—would have moved into the industrial economy of the North to work in the factories of Chicago, Pittsburgh, and Detroit. However, blacks were not viewed or treated as another ethnic group in a plural society. Rather, the issue became polarized in both the South and North along black/white lines. Instead of allowing freedmen to acquire land after emancipation, southern planters moved quickly to preserve their cheap labor pool of agricultural workers by denying freedmen access to land throughout the South. At a time when land provided millions of whites a means of achieving self-sufficiency and the possibility of capital accumulation, freedmen did not receive the forty acres and a mule promised them. As W. E. B. Du Bois points out, this was unique in the experience of western societies. When the serfs were freed in Europe, from Russia to England, they were given parcels of land for their livelihood. In the postbellum period, freedmen were not only denied the promised forty acres and a mule, but were effectively prevented from purchasing land throughout the South.[27] In New Orleans, a program by wealthy blacks to lease plantations seized from former owners of slaves and to rent this land to freed blacks was effectively thwarted by the return of plantations to their former owners by a Republican party tiring of reconstruction and eager to ensure a continued flow of cotton to northern textile mills. Faced with these obstacles, only 5 percent to 8 percent of blacks managed to either become or remain landowners.

After some experimentation, a system of sharecropping emerged that ensured plantation owners a cheap labor pool if not always an entirely tractable one. There were, first of all, conflicts over the definition of the labor pool itself. Both southern planters and northern reconstructionists defined black women as part of the new southern plantation labor force, while blacks attempted to redefine the role of their wives in conformity with the "cult of domesticity," newly emerged within the white middle class. "The women," one planter complained, "say that they never mean to do any more outdoor work, that white men support their wives, and they mean that their husbands shall support them."[28]

To enforce their own interpretation of the black work force, planters often used armed riders to go from cabin to cabin, forcing black women into the fields. In the end, the sharecropping system effectively kept most southern blacks in virtual peonage. On "settlement day" (the annual calculation of debits and credits between planter and sharecroppers), the typical black family found itself in debt to the planter, who used their dependency on the plantation store for provisions as a means of cheating them. Those black families "in debt" at the end of the year


196

had to remain another year to work off their debt. Nor could blacks move into the newly founded textile mills of the South. This was work reserved for poor whites by a southern planter class determined to prevent a political alliance between poor blacks and whites. The few blacks admitted to the mills were confined to the most menial tasks.[29]

Those blacks who moved into the urban economy of the South found their labor as exploited there as in rural areas. The dominant position held by black males among the urban craft workers at the end of the Civil War was lost over the next three decades, through unfair competition and the growing reluctance of whites to employ their skills. By 1900, the class of black artisans had been decimated, reduced from about five out of every six urban craft workers to only 5 percent in the urban South. There remained only menial, often sporadic, work for them, making it necessary for other family members to supplement their income. Summarizing the experience of black families in the urban South during the decades of the late nineteenth and early twentieth centuries, historian Jacqueline Jones writes: "Husbands were deprived of the satisfaction of providing their families with a reliable source of income, while wives found their duties enlarged as they added paid employment to their already considerable domestic responsibilities."[30]

The only work to which black women could aspire was domestic or laundry work, work of such low status that even poor white women avoided these jobs at all costs. Like white immigrant women in the North, black women in the South sought factory work in preference to domestic work whenever possible. Here, too, they found themselves relegated to the most menial of the available factory work. This included sorting, stripping, and stemming leaves in tobacco factories, work so difficult and unhealthy that white women could not be found to take these jobs. Those white women who did work in tobacco factories were given the more skilled tasks and worked in healthier surroundings, segregated from blacks.[31] When one employer hired a black woman to work in their area, white women walked off the job in protest, forcing the factory owner to fire the black worker. As a result of these kinds of restrictions, 90 percent of the servants in southern cities were black by 1900, as were the majority of laundresses.

In the North, employers showed a decided preference for white immigrant labor over the readily available pool of southern or northern blacks. Because of discrimination, there were no blacks in the brass and ship industries of Detroit in 1890, and only 21 blacks among the 5,839 male workers in the tobacco, stove, iron, machine, and shoe industries. By 1910, only 25 blacks could be found among the 10,000 primarily foreign workers in the burgeoning auto factories.[32] The discrimination black workers faced in northern cities could be felt in the lament of a


197

Detroit whitewasher in 1891: "First it was de Irish, den it was de Dutch, and now it's de Polacks as grinds us down. I s'pose when dey [the Poles] gets like de Irish and stands up for a fair price, some odder strangers'll come over de sea 'nd jine de faimily and cut us down again."[33]

Not until World War I cut off the flow of white immigrants did northern industrialists begin recruiting blacks from the South to work in the factories of Chicago, Philadelphia, Detroit, and other northern cities. Even then, black workers found a labor market in which white ethnics were firmly united in their opposition to competition from black labor, while employers reserved the better jobs for native whites and immigrants alike. When white workers turned to unionization, blacks were excluded.

Black women, forced to work in large numbers to supplement their husbands' and fathers' incomes, did not fare much better in the northern economy than they had in the South. As immigrant women abandoned domestic work for less odious factory jobs and native white women entered the new clerical occupations, black women alone found themselves overwhelmingly confined to domestic and laundry work. In Pittsburgh after World War I, 90 percent of all domestics were black women; in Philadelphia, over half. Those who found work in factories—only 5.5 percent in 1930, compared to 27.1 percent of the foreign born—were confined to the most dangerous and menial tasks. Nevertheless, domestic work represented such an undesirable alternative, the absolutely lowest status work in the economy, that black women sought factory work whenever possible. Their sentiments were unambiguously expressed by a black woman in a Chicago box factory in 1920: "I'll never work in nobody's kitchen but my own any more. No indeed! That's the one thing that makes me stick to this job."[34] These same black women had to watch helplessly as their own daughters were passed over by white employers who filled clerical and sales positions with native white and even immigrant girls, who had no more education than their children.

By 1940, only 5.7 percent of black males and 6.6 percent of black females had been able to enter middle-class occupations, the majority in predominantly black institutions, while 35.7 percent of whites held middle-class jobs.[35] In the ensuing decades, few blacks managed to climb the class ladder into the middle class. On the eve of the Civil Rights Movement, 1960, the effectiveness of discriminatory practices in the job market was apparent in the sizes of the black and white middle classes, which were now 13.4 and 44.1 percent of employed workers, respectively. The job ceiling remained almost as low for blacks in 1960 as it had been at the turn of the century.

How is it that of all ethnic groups, African-Americans still rank near the bottom on all economic indicators? Why have all European ethnics had more success than blacks in moving up the class ladder? As I pointed


198

out earlier, if one considered only the objective characteristics and circumstances of blacks and of white immigrant groups from southeastern Europe, one would have predicted a far different outcome. Blacks' skills were at least equal to those of the new immigrants, and their motivation to succeed was as high. While the selectivity of immigrants is well known, African-Americans emerged from slavery with a tremendous motivation to begin new lives for themselves. Denied the right of an education during slavery, they possessed a strong desire for this forbidden fruit. Adults eagerly flocked to the schools established by northern philanthropists and missionaries, to learn to read, and they were eager to send their children to school. Those who eventually migrated to the North were more likely than many of the new ethnics to send their children to school rather than out into the labor force. Yet these same families had to stand by helplessly while their children were passed over for the better jobs the economy had to offer in factories and offices.

The answer is not simply to be found in the economic competition among ethnic groups. For that would not explain why white ethnics, who competed among themselves, united in competing against blacks. Nor is Bonacich's theory of a split labor market a sufficient explanation. To be sure, employers sometimes profited by using blacks as strikebreakers and as a reserve army of cheap labor, but not on a scale sufficient to suppress the aspirations of white labor. Blacks were never given sufficient access to semiskilled or skilled jobs to play that role. Rather, I would argue that both white employers and ethnics united in their opposition to blacks competing in the market on an equal footing with white workers. But why? It is at this point that we are forced to return to the black/white polarization in American society. But rather than view racism as simply operating on the individual level as prejudice, it has to be interpreted in structural terms—as part of the culture.

Black/White Polarization

My argument is that primarily because of their racial attitudes , whites of all classes have historically reserved the worst jobs in the economy for black workers. It is true that, with the exception of work in the slave economy, whites have at times performed these same tasks in the urban economy. Yet these menial jobs were always viewed as temporary positions in the class structure, stepping stones toward a better life for themselves or their children higher up the class ladder. These white workers, immigrant and native, did in fact move up to better jobs, or at least were able to see their sons and daughters securing better jobs than their own in the next generation. Each new generation of European immigrants competed for positions in the economy and moved a little further up the class ladder.


199

This competition for desirable work, however, was open to whites alone. The norms of the market dictated that throughout the economy black workers be denied opportunities to compete equally with whites for desirable positions. Rather, black workers—both male and female—were reserved for the most menial labor at the very bottom of the class structure: unskilled labor and domestic work. This was not a "reserve labor pool" to be drawn upon by employers to undercut the price of white labor in semiskilled and skilled work. It was a system that defined some jobs as "colored jobs" and others as "white jobs."

Unlike the situation for whites, progress for blacks was not a matter of working harder or acquiring more skills and education. Since blacks were denied opportunities to compete in the market on an equal footing with whites for the same jobs, their upward mobility was stymied at its very source: the opportunity for husbands and wives to gain good and secure employment to improve their own living standard and thus be able to sponsor their sons and daughters in the next two stages of their movement up the occupational ladder. For although black parents placed greater emphasis on educating their children than many immigrant groups, they were nevertheless forced to stand by helplessly as their sons and daughters remained shut out of the growing number of skilled and clerical jobs becoming available. While the children of immigrants only recently arrived in this country could aspire to move further up the class ladder than their parents, generation after generation of black youth could aspire to little more than the unskilled labor and domestic work at which their parents toiled. A survey by the Bureau of Jewish Employment Problems of Chicago in 1961 found this to be true in the North as well as the South. Its report concluded that "98 percent of the white-collar job orders received from over 5,000 companies were not available to qualified Negroes" in that year.[36] "No blacks need apply" was the common experience of blacks seeking to move up the class ladder. Those blacks who managed to escape these restrictions to some extent by acquiring an education in a black college found themselves confined to serving the black community, rather than being able to contribute their talents to the development of the entire society.[37] The brain power, creativity, and talents of millions of blacks were lost both to the black community and to the larger society.

Earlier I noted that an individual's movement up the class ladder has been modeled as a three-stage process, involving family background, educational attainment, and entry into the job market. At each of these points, blacks found themselves handicapped. Black families were denied opportunities to increase their economic resources, which then could be used to sponsor their children at the next stage. The education of black children was separate and unequal. When moving into the job market, blacks encountered a ceiling above which they could not aspire


200

to climb. Though immigrants from southeastern Europe frequently encountered discrimination, it was never as severe or prolonged as that faced by blacks.[38] Furthermore, these same immigrants, who were themselves discriminated against, united in their opposition to blacks. Thus, their own upward mobility was facilitated at the expense of blacks, who were kept at the very bottom of the occupational structure—all in spite of the initially more favorable position of blacks.

White workers did gain economically from the subjugation of black workers, just as they had profited from the elimination of Chinese workers in California during the late nineteenth century, and just as Anglos profited from the seizure of Mexican-American land after the Mexican-American War. White ethnics could have gained equally from the subjugation of another ethnic group, such as Poles, Jews, or Italians. These latter groups did, in fact, experience discrimination from the older ethnics from northern and western Europe. But none of the new European ethnic groups was confined to unskilled labor and domestic work. Each quickly moved from unskilled labor and domestic work into factories, the first springboard up the class ladder. There they secured the best jobs, while blacks either could not gain access or found only the most menial and dangerous work open to them.

Though discrimination has been part of American history from colonial times and has affected all groups other than Anglos to some degree, it is ethnic minorities, those with darker skins , who have experienced the severest discrimination and faced the most obstacles in their movement up the class ladder. From the very beginning, the class system in America has been a color-conscious class system.[39] Within this color-conscious class system, African-Americans have experienced earlier and more persistent discrimination than any other group except Native Americans.

Cultural Racism and the Role of Blacks in the U.S. Class System

The role of blacks in the U.S. class system was first established with the importation of Africans to labor as slaves on the plantations of the South. This development followed the failure of southerners' attempts to use the quasi-free labor of Indians and white indentured servants on plantations. When these attempts failed, planters turned to the use of African slaves. Unlike Native Americans, Africans were accustomed to agricultural work, and, unlike white indentured servants, they could not blend into the population if they escaped, making them an ideal and inexpensive work force from the planters' viewpoint.

To justify the total subjugation of Africans through the slave system, whites resorted to negative imaging and stereotyping of African-Americans.


201

In time, blacks were portrayed as somewhat less than human, without a Christian soul and devoid of refined, civilized sentiments. During the abolitionist movement in the early nineteenth century, the propaganda of slaveholders intensified. According to Frederickson, a pamphlet published in New York in 1833, entitled Evidence Against the Views of the Abolitionists, Consisting of Physical and Moral Proofs of the Natural Inferiority of the Negroes, presented "the basic racist case against the abolitionist assertion of equality."[40] In this pamphlet the author, Richard Colfax, argued for the innate intellectual inferiority of blacks, based on their alleged physical differences. This theme would later be taken up again and given pseudoscientific support by racist white scholars.

Southern apologists for the system of slavery went beyond the inferiority thesis to argue the benefits slavery held for an inferior race. Such a position was taken in the United States Senate in 1858 by Hammond, a planter-intellectual and senator from South Carolina:

In all social systems there must be a class to do the menial duties, to perform the drudgery of life. That is a class requiring but a low order of intellect and but little skill. Its requisites are vigor, docility, fidelity. Such a class you must have. . . . It constitutes the very mud-sill of society. . . . Fortunately for the South we have found a race adapted to that purpose to her hand. . . . We do not think that whites should be slaves either by law or necessity. Our slaves are black, of another, inferior race. The status in which we have placed them is an elevation. They are elevated from the condition in which God first created them by being made our slaves.[41]

Slavery, then, was portrayed as beneficial to blacks, so much so, as one writer asserted, that under slavery they became "the most cheerful and merry people we have among us."[42] Sambo, the grinning, happy-go-lucky, singing and dancing, simple-minded black was a natural product of this thinking and became an image of all blacks—free as well as slave. Nor did these negative images and stereotypes end with slavery. Rather, as Frederickson notes, they "engendered a cultural and psychosocial racism that after a certain point took on a life of [its] own and created a powerful irrational basis for white supremacist attitudes and actions."[43] These attitudes became part of white culture and belief systems well into the twentieth century.

Thus, in a serious dissertation written for his doctoral degree at Columbia University in 1910, Howard Odum wrote:

The Negro has little home conscience or love of home, no local attachments of the better sort. . . . He has no pride of ancestry, and he is not influenced by the lives of great men. . . . He has little conception of the meaning of virtue, truth, honor, manhood, integrity. . . . He does not know the value of his word or the meaning of words in general. . . . They sneer at the idea of work. . . . Their moral natures are miserably perverted.[44]


202

Odum's dissertation later became an influential book under the title Social and Mental Traits of the Negro. Odum's conception of blacks was no different in 1910 from that expressed almost immediately after emancipation, in 1866, by George Fitzhugh, who wrote: "They [Negro orphans] lost nothing in losing their parents, but lost everything in losing their masters. Negroes possess much amiableness of feeling, but not the least steady, permanent affection. 'Out of sight, out of mind' is true for them all. They never grieve twenty-four hours for the death of parents, wives, husbands, or children."[45]

Because of the racist ideas about African-Americans before and after the Civil War, debates over their fate "never contemplated an integration of black workers into the nation's industrial labor force."[46] Rather than simply being allowed to take their place in American society as another ethnic group struggling up the class ladder, blacks were viewed collectively as a "problem." One southerner expressed the view of many in 1867 when he wrote: "No permanent lodgment, no enduring part nor lot, must the black and baneful Negroes be permitted to acquire in our country. Already have they outlived their usefulness—if, indeed, they were ever useful at all."[47]

The idea of expelling African-Americans from the society altogether was, in fact, entertained by Lincoln himself, who persuaded Congress to pass legislation subsidizing the voluntary emigration of ex-slaves to the Caribbean.[48] A number of northern states, including Pennsylvania, Ohio, and Illinois, went so far as to pass laws to prevent migration of free blacks into their states. Everywhere, the issue was expressed as a competition between black and white labor, rather than as competition among workers, even though Irish and German workers, and later Polish, Italian, and Slavic workers, would be in competition with one another. Thus in 1862, as blacks who had been freed by the Union Army drifted northward, the Boston Pilot, an Irish-Catholic newspaper, remarked that "we have already upon us bloody contention between white and black labor. . . . The North is becoming black with refugee Negroes from the South. These wretches crowd our cities, and by overstocking the market of labor, do incalculable injury to white hands."[49] In a similar vein, the Democratic party of Pennsylvania inveighed against the Republican party in 1862, calling it "the party of fanaticism, or crime . . . that seeks to turn the slaves of the Southern states loose to overrun the North and enter into competition with the white laboring masses, thus degrading and insulting their manhood by placing them on an equality with Negroes in their occupations is insulting to our race, and merits our most emphatic and unqualified condemnation."[50]

In the late nineteenth century, whites in California and other far western states would become alarmed at the "yellow peril," and western


203

states would pass discriminatory laws against the Chinese. Similarly, Nativists were instrumental in the passage of the first immigration quota system in 1921, which severely restricted access of eastern and southern Europeans to the United States. In both cases, however, the discrimination was not rooted so far in the past or so deep in the cultural psyche as that aimed against blacks. Eventually, fear of the "yellow peril" subsided, and Asians were able to develop businesses that were patronized by the general white public. And immigrants from southeastern Europe became just so many immigrant groups on the American landscape. Yet, the discriminatory structures and laws enacted against blacks in the South, and the discriminatory practices of the North, persisted for 100 years following the end of slavery. Only when confronted with a major threat to societal order posed by the upheaval of the Civil Rights Movement and the "long hot summers" of ghetto rebellion was American society persuaded to commit itself—for the first time in its history—to equal status for African-Americans. Coming after a century of oppression that left a disproportionate number of black citizens on the last rung of the class ladder, with little wealth, property, or educational resources, the Civil Rights Laws of 1964 could be nothing more than a beginning, a ticket to run in a race for which millions of blacks were ill prepared. The task of redressing the cumulative consequences of past discrimination remained, as well as that of providing truly equal opportunities to those blacks now entering the educational systems and job market.

The Present and Future of Racial Polarization

To some extent, the disadvantaged position in which blacks found themselves in 1964 was recognized. Lyndon Johnson launched the War on Poverty during the euphoria of the prosperous 1960s, a period in which all things seemed possible. Educational disadvantages were addressed through attempts to desegregate schools at all levels. The Head Start program was launched to help disadvantaged children overcome the educational deprivation associated with poverty. The Office of Civil Rights and the Equal Employment Opportunity Commission (EEOC) were established, the latter to oversee the implementation of Title VII, which outlawed discrimination in employment. An open-housing bill was passed, and the government moved to grant blacks access to the ballot box.

In hindsight, we now see that the federal government and the nation did not fully appreciate the magnitude of the task: to eradicate 100 years of deprivation and oppression, and to remove from the hearts and minds of whites the cultural baggage of racism. In time, discouragement over the slow pace of progress set in. Rather than fine-tuning the efforts begun


204

during the War on Poverty with more sophisticated approaches and additional resources, the nation eventually abandoned the "War" altogether.

In the atmosphere of economic insecurity that slowly gripped the nation during the downturn of the 1970s and 1980s, even the white middle class became preoccupied with bread-and-butter issues. A white society still imbued with a racist culture turned once more to a familiar tool in a new guise: discrimination. Concern about "equal opportunity" for blacks was replaced with concern over "reverse discrimination," a term symbolizing the growing unwillingness of present generations of whites to accept, individually and collectively, the challenge (and burden) of rectifying the evils created by past generations of whites. Well-meaning but naive policymakers had not anticipated the depth of white resistance to the full incorporation of blacks into American society or whites' unwillingness to pay the societal cost of achieving that task. Even today, as the National Academy of Sciences study notes, while whites are increasingly supportive of the "principles" of racial equality, they offer "substantially less support for policies intended to implement principles of racial equality" and continue to shun sustained and close contacts with blacks.[51] Granted, the unfinished agenda is challenging in the best of times and painful in periods of economic sluggishness such as those of the 1970s and 1980s; but the continued subjugation of blacks has been more painful still. Their continued economic inequality is not only debilitating to them but costly to the nation, in terms of both the expense of maintaining the dependent poor and the pool of productive talent lost. The financial loss to the nation associated with the lower earnings of blacks has been estimated by Billy J. Tidwell of the Urban League to equal almost 2 percent of our gross national product, or about $104 billion in 1989.[52]

Fully incorporating blacks into the American mainstream is a societal problem that requires compensatory measures to rectify the disadvantages created by racism. Since this is a societal, rather than merely an individual, problem, it is the task of government to mobilize resources and persuade white society to support this undertaking. Such a mobilization of resources and sentiment was begun under John F. Kennedy and continued by Lyndon Johnson. It included the passage of the Civil Rights Act and the launching of the War on Poverty. Many successes can be counted.

There are just as many failures as successes, however. The comprehensive study by the National Academy of Sciences makes this painfully clear. Efforts to desegregate schools have faltered at all levels, and schools continue to fail to provide quality education to blacks generally. Head Start remains far below its full potential because of inadequate funding. Because schools have failed to address the educational disadvantages many blacks face, given socioeconomic backgrounds


205

lower than those of whites, the National Academy's study concludes: "American students leave the schools with black/white achievement gaps not having been appreciably diminished."[53] A college education, the key to "an estimated 50 percent of new jobs created between [1989] and the year 2000," is becoming less and less accessible to blacks.[54] The proportion of black high school graduates entering college is now lower than in 1976, a victim of declining federal aid to education and the virtual abandonment of desegregation in higher education by the federal government during the Reagan administration.[55] Discrimination in housing remains little changed from the past, so much so that blacks remain today the most residentially segregated of all ethnic minorities. According to the findings of Douglas Massey, "a black person who makes more than $50,000 a year will be virtually as segregated as a black person who makes only $2,500 a year."[56]

While blacks have made tremendous strides in employment since 1964, they still lagged far behind whites in both occupational achievement and income by 1990. The optimism of the late 1960s has given way to caution or even pessimism, leading the authors of the National Academy of Sciences' study to conclude that "since the early 1970s, the economic status of blacks relative to whites has, on average, stagnated or deteriorated."[57]

Signs of this stagnation are evident in both income and occupational statistics. In 1989, the median income of two-earner black families was $36,709, only $8,751 higher than the figure for white families with one earner.[58] The wealth gap was even larger, with whites having a net worth more than three times that of blacks. Black upward mobility into the middle class has also slowed. My overall projections for the black middle class for the years 1990 and 2000, which were based on statistics for the years 1973 to 1981, have proven to be overly optimistic.[59] Rather than the projected 48.6 percent of employed blacks in 1990, the actual proportion was closer to 45 percent. At the same time, the projection of a white middle class of 59.5 percent in 1990 was just about on target, evidence that whites have not suffered economically during this period as much as blacks. Blacks have had an especially difficult time penetrating the seats of power in the workplace. Although constituting 10.1 percent of all employed workers, they held only 6.1 percent of the managerial and professional jobs in 1989.[60] Dispelling the idea that discrimination is a thing of the past, the National Academy of Sciences study concludes that "a considerable amount of remaining black/white inequality is due to continuing discriminatory treatment of blacks."[61] The task of fully incorporating African-Americans into American society remains "unfinished business."

Much of this failure can be laid at the feet of the federal government,


206

especially the two-term Reagan administration, which not only failed to provide the leadership needed to complete the task but was actively hostile to the only truly successful tools in this struggle: desegregation of the educational system and implementation of equal employment opportunity laws. The Reagan administration's hostility to quotas and timetables, the only meaningful means of forcing reluctant employers to implement affirmative action, has been especially devastating. It is at best naive to believe that employers who have discriminated against blacks in the past will suddenly have a change of heart and voluntarily afford the same opportunities to blacks as they do to whites. The study by the National Academy of Sciences should erase all doubts, even among the most skeptical. Far from declining in significance, as William Wilson suggested it would in 1978, race clearly remains a deep, pervasive, and intractable characteristic of white society in the 1990s. The most recent indication of persistent racism comes from the finding of a 1990 national NORC survey that over half of all whites still hold the negative stereotype that blacks are lazy and less intelligent than whites.[62]

Conclusion

Simple justice and a commitment to equality demand that we free ourselves of racism and discrimination. Yet, as this historical review of race relations in the United States indicates, white resistance to black progress has been so deep, and has gone on for so long, that racism seems intractably built into the American experience. Despite such resistance, however, the problem must be addressed. Today it can be said that the future not only of blacks but of the nation itself depends on the full incorporation of minorities into the American mainstream. Because of changing demographics, the economy will depend more on minority workers in the future.[63] By the year 2000, about one-third of all workers entering the labor force will be minorities. Combined with the patterns of immigration described by Rubén Rumbaut in his chapter in this book, these trends make clear that perhaps the single biggest change facing the United States is the increasing racial and ethnic diversity of its population. By the year 2080, all minorities taken together may well constitute slightly over half of the U.S. population. Even the selfish self-interest of whites demands their strong support for affirmative action and the elimination of all forms of racism and discrimination. Yet millions of whites (perhaps the majority) do not understand this. The business community—always preoccupied with the present—is just beginning to glimpse this truth.

It should be clear that the problem of racism and discrimination will not resolve itself. Almost every day the news media report some new sign of racial tension in America. Blacks continue to have a difficult time buy-


207

ing homes because of bias by lenders and incomes lower than those of whites.[64] Discriminatory election laws in many states continue to hinder the election of blacks at the state level, leading the Justice Department to file suit against the state of Georgia.[65] The upper levels of management continue to elude blacks today as they did ten years ago. Even as the economic position of blacks declined through the 1980s, as measured by falling incomes and rising unemployment, employment agencies continued to assign job interviews to whites more than to blacks, as CBS's "60 Minutes" documented in the summer of 1990.

I have argued throughout this chapter that racism and discrimination should be seen as societal problems, not simply the aberrations of malevolent individuals. Just as the most positive historical changes, such as emancipation and the Civil Rights Act, resulted from leadership at the highest levels of government, so today only initiative and leadership from the states and, especially, the federal government are up to the task. Without a massive effort, similar to that of the Civil Rights era of the 1960s, the racial problems of today will only become worse in the twenty-first century.


208

Ten—
Passages to America:
Perspectives on the New Immigration

Rubén G. Rumbaut

Once I thought to write a history of the immigrants in America. Then I discovered that the immigrants were American history.
OSCAR HANDLIN, The Uprooted (1951)


Ironically, those opening lines of Handlin's famous portrait of immigrant America ring truer today than they did when he penned them at mid-century. As Handlin would add in a postscript to the second edition of The Uprooted two decades later, immigration was already "a dimly remote memory, generations away, which had influenced the past but appeared unlikely to count for much in the present or future"; and ethnicity, not a common word in 1950, seemed then "a fading phenomenon, a quaint part of the national heritage, but one likely to diminish steadily in practical importance."[1] After all, the passage of restrictive national-origins laws in the 1920s, the Great Depression and World War II had combined to reduce the flow of immigrants to America to its lowest point since the 1820s. But history is forever ambushed by the unexpected. Handlin might have been surprised, if not astonished, to find that in at least one sense the "American Century" seems to be ending much as it had begun: the United States has again become a nation of immigrants, and it is again being transformed in the process. To be sure, while the old may be a prologue to the new, history does not repeat itself, whether as tragedy or as farce. America is not the same society that processed the "huddled masses" through Castle Garden and Ellis Island, and the vast majority of today's immigrants and refugees hail not from Europe, but from the developing countries of the Third World, especially from Asia and Latin America. Not since the peak years of immigration before World War I have so many millions of strangers sought to make their way in America. They make their passages legally and illegally, aboard jumbo jets and in the trunks of cars, by boat and on foot; incredibly, in 1990 a Cuban refugee came across the Straits of Florida riding a windsurfer. Never before has the United States received such diverse


209

groups—immigrants who mirror in their motives and social class origins the forces that have forged a new world order in the second half of this century and who are, unevenly, engaged in the process of becoming the newest members of American society.[2]

The American ethnic mosaic is being fundamentally altered; ethnicity itself is being redefined, its new images reified in the popular media and reflected in myriad and often surprising ways. Immigrants from a score of nationalities are told that they are all "Hispanics," while far more diverse groups—from India and Laos, China and the Philippines—are lumped together as "Asians." There are foreign-born mayors of large American cities, first-generation millionaires who speak broken English, a proliferation of sweatshops exploiting immigrant labor in an expanding informal economy, and new myths that purport to "explain" the success or failure of different ethnic groups. Along "Calle Ocho" in Miami's Little Havana, shops post signs to reassure potential customers that they'll find "English spoken here," while Koreatown retailers in Los Angeles display "Se habla español" signs next to their own Hangul script, a businesslike acknowledgment that the largest Mexican and Salvadoran communities in the world outside of Mexico and El Salvador are located there. In Brooklyn, along Brighton Beach Avenue ("Little Odessa"), signs written in Cyrillic letters by new Soviet immigrants have replaced old English and Yiddish signs. In Houston, the auxiliary bishop is a Cuban-born Jesuit who speaks fluent Vietnamese—an overflow of 6,000 faithful attended his recent ordination, and he addressed them in three languages—and the best Cuban café is run by Koreans. In a Farsi-language Iranian immigrant monthly in Los Angeles, Rah-E-Zendegi , next to announcements for "Business English" classes, a classified ad offers for sale a $20 million square block on Boston's Commonwealth Avenue, and other ads deal with tax shelters, mergers, and acquisitions. In Santa Barbara, a preliterate Hmong woman from the Laotian highlands, recently converted to Christianity, asked her pastor if she could enter heaven without knowing how to read; while in Chattanooga, Tennessee, a twelve-year-old Cambodian refugee, Linn Yann, placed second in a regional spelling bee (she missed on "enchilada"). At the Massachusetts Institute of Technology, Tue Nguyen, a twenty-six-year-old Vietnamese boat refugee, set an MIT record in 1988 by earning his seventh advanced degree, a doctorate in nuclear engineering, just nine years after arriving in the United States—and landed a job at IBM designing technology for the manufacture of semiconductors. In the San Jose telephone directory, the Nguyens outnumber the Joneses fourteen columns to eight, while in Los Angeles, a Korean restaurant serves kosher burritos in a largely black neighborhood. And then there was this in the New York Times: "At the annual Lower East Side Jewish Festival yesterday, a


210

Chinese woman ate a pizza slice in front of Ty Thuan Duc's Vietnamese grocery store. Beside her a Spanish-speaking family patronized a cart with two signs: 'Italian Ices' and 'Kosher by Rabbi Alper.' And after the pastrami ran out, everybody ate knishes."[3]

Immigration to the United States is a social process, patterned within particular structural and historical contexts. The contemporary world has shrunk even as the populations of developing countries have expanded. Societies have become increasingly linked in numerous ways—economically, politically, culturally—as states and markets have become global forms of social organization, and modern consumption standards (especially American life-styles) are diffused worldwide. Over time, social networks are created that serve as bridges of passage to America, linking places of origin with places of destination. Indeed, transnational population movements of workers, refugees, and their families are but one among many exchanges of capital, commodities, and information across state borders, all facilitated by a postwar revolution in transportation and communication technologies. In general, the patterns reflect the nature of contemporary global inequality: a flow of capital from more developed countries (MDCs) to less developed countries (LDCs), a flow of labor from LDCs to MDCs, and—in an era of Cold War and global superpower confrontation, decolonization and the formation of new states, revolutions and counterrevolutions—continuing flows of refugees, primarily from one Third World country to another.[4]

Still, moving to a foreign country is not easy, even under the most propitious circumstances. In a world of 5 billion people, only a fraction—perhaps 2 percent—are immigrants or refugees residing permanently outside their country of birth. In absolute numbers, the United States remains by far the principal receiving country: by the late 1950s the United States had admitted half of all legal immigrants worldwide, and that proportion had grown to two-thirds by the 1980s. In relative terms, the picture is different: only 6.2 percent of the 1980 U.S. population was foreign-born, a percentage exceeded by many other countries. For example, recent censuses showed a foreign-born population of 20.9 percent in Australia, 16.1 percent in Canada, 8.2 percent in France, 7.6 percent in West Germany, 7.2 percent in Venezuela, 6.8 percent in Argentina, and 6.6 percent in Great Britain. Some smaller countries have much higher proportions, such as Israel (42 percent) and Saudi Arabia (36 percent). But the 14.1 million foreigners counted in the 1980 U.S. census formed the largest immigrant population in the world.[5]

The public image of today's new American immigration clashes with its complex realities. Because the sending countries are generally poor, many Americans believe that the immigrants themselves are poor and uneducated. Because the size of the new immigration is substantial and


211

concentrated in a few states and metropolitan areas, concerns are raised that the newcomers are taking jobs away from the native-born and unfairly burdening taxpayers and public services. Because of the non-European origins of most new immigrants and the undocumented status of many, their prospects for assimilation are sometimes perceived as worse than those of previous flows. And as in the past—if without much of the vitriol and blatant racism of yesterday's nativists—alarms are sounded about the "Balkanization" of America, the feared loss of English as the national language and even of entire regions to potential secessionist movements. As this chapter will attempt to show, such concerns are fundamentally misplaced, even though immigration again plays a central role in an American society in transition. Within its limits, the essay has three objectives: (1) to sketch a portrait of the contours and diversity of recent immigration to the United States, (2) to examine the modes of incorporation of main types of immigrant groups, and (3) to consider some of the determinants of the new immigration and its consequences for the American economy and society.

Immigration to the United States: Historical Trends and Changing Policies

Decennial trends in immigration to the United States are summarized in table 10.1 for the century from 1890 to 1989. Authorized immigration reached its highest levels (8.8 million) during 1901–10, more than doubling the number of immigrants admitted in the preceding decade. Much of this flow was initiated by active recruitment on the part of employers, and many immigrants (over one-third) returned home after a few years in the United States—"birds of passage," often young single men, whose movements tended to follow the ups and downs of the American business cycle.[6] In the post–World War II period, legal immigration flows have been much less clearly a function of economic cycles and deliberate recruitment, and much more apt to be sustained by social networks of kin and friends developed over time. Since 1930, moreover, some two-thirds of all legal immigrants to the United States have been women and children.[7] After the peak decade of 1901–10, immigration began a steady decline until the trend reversed itself immediately after World War II. Only 23,000—the smallest annual flow recorded since the early nineteenth century—entered in 1933 and again in 1943, in the midst of the Depression and then the world war. The number of legal immigrants doubled from the 1930s to the 1940s, more than doubled again in the 1950s (to 2.5 million), and more than doubled yet again by the 1980s. Indeed, if the 3 million people who recently qualified for legalization of their status under the amnesty provisions of the Immigration Reform


212
 

TABLE 10.1  Historical Trends in the U.S. Foreign-Born Population and Legal Immigration, 1890–1989, by Region of Origin, and Net Immigration as a Proportion of Total U.S. Population Growth

                    Foreign-Born Population          Immigration by Intercensal Decade and Region of Last Residence
Census Year/        N           % of Total           N           N/W Europe &   S/E Europe   Latin         Asia      Population Growth Due to
Decade Ending       (1000s)     Population           (1000s)     Canada (%)     (%)          America (%)   (%)       Net Immigration (%)

1900                10,445      13.6                 3,688       44.7           51.8          1.0           2.0      20.3
1910                13,360      14.7                 8,795       23.8           69.9          2.1           3.7      39.6
1920                14,020      13.2                 5,736       30.3           58.0          7.0           4.3      17.7
1930                14,283      11.6                 4,107       53.8           28.7         14.4           2.7      15.0
1940                11,657       8.8                   528       58.0           28.3          9.7           3.1       1.6
1950                10,431       6.9                 1,035       63.8           12.8         14.9           3.6       8.8
1960                 9,738       5.5                 2,515       51.8           16.0         22.2           6.1      10.6
1970                 9,619       4.7                 3,322       30.0           16.3         38.6          12.9      16.1
1980                14,080       6.2                 4,493       10.2           11.4         40.3          35.3      17.9
1981–89a            NA          NA                   5,323        8.0            5.9         37.6          45.1      29.2

SOURCES: U.S. Bureau of the Census, Statistical Abstract of the United States, 109th ed. (Washington, D.C.: Government Printing Office, 1989), tables 1, 5–6, 46; Leon F. Bouvier and Robert W. Gardner, "Immigration to the U.S.: The Unfinished Story," Population Bulletin 41 (November 1986), tables 1, 3, 6; U.S. Immigration and Naturalization Service, Statistical Yearbooks (Washington, D.C.: Government Printing Office, 1980–89); U.S. Bureau of the Census, Current Population Reports, Series P-25, no. 1018 (Washington, D.C.: Government Printing Office, 1989).

a Data do not include 478,814 immigrants who had resided in the United States since 1982 and whose status was legalized in fiscal year 1989 under the provisions of the Immigration Reform and Control Act (IRCA) of 1986. Beginning in 1990 an additional 2.6 million legalization applicants, including over one million special agricultural workers (SAW), became eligible to adjust their status to permanent resident.


213

and Control Act (IRCA) of 1986 were added to the regular admission totals for the 1980s, the decade ending in 1990 would exceed 8 million immigrants and rival the record numbers registered during the first decade of this century.[8] At that time, however, foreign-born persons constituted 14.7 percent of the total U.S. population, more than twice the relatively small 6.2 percent counted in the 1980 census. As table 10.1 also shows, net immigration accounted for nearly 40 percent of total population growth in the United States by 1910—a level not since approached, though net immigration today (adjusting for both emigration and illegal immigration) makes up an increasing proportion of total U.S. population growth. Given a declining national fertility rate, the demographic impact of immigration will continue to grow in importance.[9]

Until 1890 the overwhelming majority of immigrants had come from northwest Europe—particularly from Ireland, Great Britain, Germany, and Scandinavia. From Asia, Chinese laborers were recruited, especially to California, after 1850, until their exclusion by federal law in 1882 (rescinded in 1943, when the United States and China were allies in World War II); their place was taken by Japanese immigrants, who were themselves restricted (though not entirely excluded) by the "Gentlemen's Agreement" of 1907 between the U.S. and Japanese governments. After 1890, however, a much larger "new" immigration from southern and eastern Europe—particularly from Italy and the Russian and Austro-Hungarian empires—significantly changed the composition of the transatlantic flow. From 1890 to 1920, as shown in table 10.1, well over half of all immigrants to America arrived from these regions. In response, the most restrictive immigration laws in the nation's history were passed in 1921 and 1924 (fully implemented in 1929), limiting the annual flow to 150,000 for Eastern Hemisphere countries and setting national-origins quotas that barred Asians and allocated 82 percent of all visas to northwestern Europeans, 16 percent to southeastern Europeans, and 2 percent to all others. Largely at the urging of American growers, no limits were set on Western Hemisphere countries; it was understood that Mexican labor could be recruited when needed (as happened during World War I and the 1920s, and again during the Bracero Program of contract-labor importation, begun in 1942 to meet labor shortages during World War II but maintained until 1964), and that those laborers could be deported en masse when they were no longer needed (as happened during the 1930s and again during "Operation Wetback" in the mid-1950s).

The McCarran-Walter Act of 1952 retained the national-origins quota system, slightly increasing the annual ceilings for the Eastern Hemisphere to 159,000 and the allocation of visas to northwestern Europeans to 85 percent. It included—again at the urging of growers—a "Texas Proviso" that exempted employers from sanctions for hiring illegal


214

aliens (a loophole, formally closed by IRCA in 1986, that in fact encouraged undocumented immigration, all the more after the Bracero Program was ended in 1964). And it set up a preference system to meet specified labor needs and family reunification priorities. Among numerically restricted immigrants, half of the visas were granted to highly skilled professional and technical workers, and half to immediate relatives of permanent residents and to the parents, siblings, and married children of U.S. citizens. Exempted from the numerical quotas were spouses and unmarried minor children of U.S. citizens. Many British, German, and other European scientists and professionals journeyed to America in the aftermath of the war to pursue opportunities not available in their countries, and the first "refugees" recognized as such by the U.S. government—European "displaced persons" in the late 1940s, Hungarian escapees after the 1956 revolt—were admitted under special legal provisions. In any case, as table 10.1 shows, from 1920 to 1960 the majority of all immigrants to the United States again came from northwest Europe and Canada. After 1960, however, the national composition of the flow changed dramatically, and by the close of the 1980s more than 80 percent of total legal immigration originated in Asia and Latin America.

The Hart-Celler Act of 1965 (fully implemented in 1969), which eliminated the national-origins quota system and basically remained in effect until 1990, has been frequently cited as the main reason for these changes. For a variety of reasons, however, this explanation is insufficient; entry policies do influence but do not determine immigrant flows. As in the past, rules governing immigration are ultimately defeasible and are accompanied by consequences never intended by policymakers. The 1965 Act—amended in 1976, again by the Refugee Act of 1980 and IRCA in 1986—is a case in point. Emanuel Celler, the Brooklyn congressman who cosponsored the 1965 law, had long sought to repeal the discriminatory quota system, but noted that "my efforts were about as useless as trying to make a tiger eat grass or a cow eat meat." He lobbied for the new law—in a political climate changed by the Civil Rights Movement at home and by the geopolitical interests of U.S. foreign policy abroad—by offering opponents family preferences as an alternative to national-origins quotas, confidently predicting that "there will not be, comparatively, many Asians or Africans entering this country . . . since [they] have very few relatives here."[10] Similar pronouncements were made by the Attorney General and other officials in testimony before the Congress; they expected instead that the number of southern and eastern European immigrants would grow. Historically, after all, Asian immigration to the United States had averaged only 2 percent to 4 percent of total admissions—until the 1950s, when 6 percent of legal immigrants came from Asian countries, most of them as brides of U.S. servicemen


215

overseas—and (uncoerced) African immigration had never been a factor. But by the 1980s, Asian immigration accounted for 45 percent of total admissions, and African immigration—though still small in relative numbers—increased eightfold from the early 1960s to the late 1980s. European immigration, in turn, decreased significantly over the same period—precisely the opposite of what had been anticipated.

Immigrants who are legally admitted to the United States fall into two broad categories: those subject to a worldwide limitation and those who are exempt from it. With minor modifications until it was overhauled in late 1990, the 1965 law set a worldwide annual ceiling of 270,000 immigrants, with a maximum of 20,000 per country, under a preference system that greatly emphasized family reunification. The number of immigrants subject to this worldwide limitation remained relatively constant from year to year, since the demand for visas far exceeded the annual limit of 270,000. For example, as of January 1989, there were 2.3 million active registrants awaiting immigrant visas at consular offices abroad.[11] Among these numerically restricted immigrants, 20 percent of the visas were granted to persons certified by the Department of Labor to possess needed job skills (half of them professional, managerial, and technical workers) and their immediate families, and 80 percent to immediate relatives of permanent residents and to siblings and married children of U.S. citizens. But parents as well as spouses and unmarried minor children of American citizens are numerically unrestricted—opening "chain migration" channels for those with family connections—and in addition, refugees and asylees are admitted outside the worldwide limitation under separate ceilings determined each year by the Administration and the Congress (the 1990 refugee ceiling was raised to 125,000). The flow of immigrants thus exempt from numerical limits increased significantly over the past two decades, underscoring the progressive nature of network building processes: for example, 27 percent of the 1.9 million immigrants admitted during 1970–74 came outside the regular quota, as did 36 percent of the 2.4 million admitted during 1975–79, 50 percent of the 2.8 million admitted during 1980–84, and 56 percent of the 3 million admitted during 1985–89.[12] Of all nonquota immigrants legally admitted into the United States in recent years, two-thirds have been immediate relatives of American citizens, and one-third have been admitted as refugees.

Since 1960, the overwhelming majority of refugees have come from Cuba and, since the end of the Indochina War in 1975, primarily from Vietnam, Laos, and Cambodia. Indeed, the consolidation of communist revolutions in Cuba and Vietnam represents by far the worst defeats of American foreign policy in modern history. U.S. refugee policy, a product of the Cold War era, has always been guided by fundamentally politi-


216

cal and not purely "humanitarian" objectives, and refugees fleeing from communist-controlled states to the "free world" have served as potent symbols of the legitimacy of American power and foreign policy. Even after the 1980 Refugee Act accepted the United Nations' ideologically neutral definition of a refugee, more than 90 percent of entrants granted refugee or asylee status by the United States during the 1980s continued to be from communist countries; most escapees from noncommunist regimes, such as Salvadorans and Guatemalans fleeing death squads and civil wars in their countries, have instead been generally labeled as "economic migrants"—and deported or driven underground along with other undocumented immigrants.[13] The conferral or denial of asylum or refugee status has significant consequences for immigrants' incorporation in the American economy and society, since persons so classified have the right to work (which illegal immigrants and temporary visitors do not) and to access public assistance programs on the same basis as U.S. citizens (which legal immigrants do not, at least during their first three years in the country).

The undocumented immigrant population has not only grown but diversified during the 1980s. As noted previously, over 3 million immigrants qualified for legalization of their status under IRCA's amnesty provisions by 1989—including residents who had entered the United States illegally prior to 1982, and Special Agricultural Workers (SAWs) who had been employed in seasonal work during the mid-1980s. Immigrants who entered illegally after 1981 (other than SAWs) were not eligible to qualify for legalization under IRCA, and thus reliable data on the size and composition of that population are unavailable. However, a majority of Central Americans in the country today are probably included—themselves in some measure an unintended consequence of U.S. policy and intervention in their home region—as well as an estimated 100,000 Irish immigrants who have, since 1982, overstayed their temporary visitor visas and clustered in historical areas of Irish settlement in Boston and New York.[14] Furthermore, again contrary to official predictions, IRCA has not stopped the flow of unauthorized migrants; in fact, the number of apprehensions along the Mexican border increased abruptly after 1989 and may again reach historically high levels.[15] In addition, the growing backlog and waiting periods faced by persons applying legally for numerically restricted immigrant visas—above all in Mexico and the Philippines—are likely to encourage further extralegal immigration. Former Immigration and Naturalization Service (INS) Commissioner Leonel Castillo estimated in 1990 that the waiting period for Mexicans applying under the second preference (spouses and children of permanent U.S. residents) could jump to 22 years, and to 10 to 17 years for Filipinos under various family preference categories.[16]


217

Immigration to the United States: Contemporary Trends and the Changing Ethnic Mosaic

National Origins of the New Immigration

Quinquennial trends in U.S. immigration from 1960 to 1989 are summarized in table 10.2, broken down by the major sending countries. While today's immigrants come from over 100 different nation-states, some countries send many more than others, despite the egalitarian numerical quotas provided by U.S. law. The 21 countries listed in table 10.2 accounted for nearly three-fourths of all legal immigration since 1960. One pattern, a continuation of trends already under way in the 1950s, is quite clear: immigration from the more developed countries has declined over time and that from less developed countries has grown steadily. Among the MDCs, this pattern is clearest for Canada, Great Britain, Italy, and Germany, with the sharpest reductions occurring during the 1960s. Although these were traditional sources of immigration in the past, their prosperous postwar economies dampened the relative attraction of America, while many Italian "guest-workers" sought instead newly opened opportunities in Germany and Switzerland. The smaller flows of Polish and Soviet refugees have oscillated over time, reflecting changes in exit policies in those countries and in their bilateral relations with the United States. The flow from Japan, which as of the early 1960s was still the largest source of immigrants from Asia, has remained small and stable at about 4,000 per year, nearly half entering as spouses of U.S. citizens—in part reflecting labor shortages and exit restrictions at home. Among the LDCs, the major sending countries either lie in the Caribbean Basin—in the immediate periphery of the United States—or are Asian nations characterized by significant historical, economic, political, and military ties to the United States. These historical relationships, and the particular social networks to which they give rise, are crucial to an understanding of the new immigration, both legal and illegal—and help explain why most LDCs are not similarly represented in contemporary flows, as might be predicted by orthodox "push-pull" or "supply-demand" theories of transnational labor movements.

In fact, just eight countries have accounted for more than half of all legal immigration since 1975: Mexico, the Philippines, Vietnam, South Korea, China, India, Cuba, and the Dominican Republic. Of these, Mexico and the Philippines alone have sent 20 percent of all legal immigrants to the United States over the past three decades, and Mexico also remains by far the source of most unauthorized immigration. Of the 3 million immigrants who qualified for legalization of their status under IRCA by 1989, about 2 million were Mexican nationals; and while most


218
 

TABLE 10.2  Trends in Legal Immigration to the United States, 1960–89, by Region and Principal Sending Countries

                              Period of Immigrant Admission to U.S. Permanent Resident Status
Region/Country of Birth       1960–64      1965–69      1970–74      1975–79      1980–84      1985–89a        Total

Worldwide:                    1,419,013    1,794,736    1,923,413    2,412,588    2,825,036    3,028,368    13,403,154
  Latin America                 485,016      737,781      768,199      992,719      995,307    1,201,108     5,180,130
  Asia                          114,571      258,229      574,222      918,362    1,347,705    1,336,056     4,909,145
  Europe and Canada             803,596      766,347      530,925      429,353      388,700      297,609     3,216,530
  Africa                         11,756       21,710       34,336       51,291       73,948       89,636       282,677

More Developed Countries:
  Canada                        167,482      136,371       54,313       60,727       57,767       56,701       533,361
  United Kingdom                123,573      117,364       56,371       65,848       73,800       66,682       503,638
  Italy                          86,860      109,750      106,572       43,066       20,128       14,672       381,048
  Germany                       138,530       83,534       36,971       32,110       33,086       34,464       358,695
  Poland*                        43,758       33,892       20,252       22,194       31,506       44,581       196,183
  Japan                          23,327       26,802       20,649       21,993       20,159       21,177       132,107
  U.S.S.R.*                      10,948        6,292        4,941       28,640       46,530       22,451       119,802

Less Developed Countries:
  Mexico                        217,827      213,689      300,341      324,611      330,690      361,445     1,749,603
  Philippines                    15,753       57,563      152,706      196,397      215,504      251,042       888,965
  Cuba*                          65,219      183,499      101,070      176,998       53,698      109,885       690,369
  Chinab                         20,578       65,712       81,202      107,762      168,754      194,330       638,338
  Korea                           9,521       18,469       93,445      155,505      163,088      173,799       613,827
  Vietnam*                          603        2,564       14,661      122,987      246,463      149,480       536,758
  Dominican Republic             26,624       57,441       63,792       77,786       98,121      127,631       451,395
  India                           3,164       18,327       67,283       96,982      116,282      134,841       436,879
  Jamaica                         7,838       49,480       65,402       72,656      100,607      104,623       400,606
  Colombia                       27,118       39,474       29,404       43,587       50,910       55,990       246,483
  Haiti                           7,211       24,325       28,917       30,180       40,265       82,156       213,054
  Laos*                              NA           NA          166        8,430      102,244       46,937       157,777
  El Salvador                     6,766        7,615        9,795       20,169       38,801       57,408       140,554
  Cambodia*                          NA           NA          166        5,459       58,964       54,918       119,507

SOURCES: U.S. Immigration and Naturalization Service, Annual Reports (Washington, D.C.: Government Printing Office, 1960–77); and U.S. Immigration and Naturalization Service, Statistical Yearbooks (Washington, D.C.: Government Printing Office, 1978–89).

a Data do not include 478,814 persons whose status was legalized in fiscal year 1989 under the Immigration Reform and Control Act (IRCA).

b Includes Mainland China and Taiwan.

*Denotes country from which the majority of immigrants to the United States have been admitted as refugees.


220

of the remaining amnesty applicants came from nearby Caribbean Basin countries, Filipinos ranked sixth (behind Salvadorans, Guatemalans, Haitians, and Colombians, but ahead of Dominicans, Jamaicans, and Nicaraguans).[17] Indeed, Mexicans and Filipinos comprise, respectively, the largest "Hispanic" and "Asian" populations in the United States today.[18]

Not surprisingly, Mexico and the Philippines share the deepest structural linkages with the United States, including a long history of dependency relationships, external intervention, and (in the case of the Philippines) colonization. In both countries, decades of active agricultural labor recruitment by the United States—of Mexicans to the Southwest, Filipinos to plantations in Hawaii and California—preceded the establishment of self-sustaining migratory social networks. In the case of Mexico, the process has evolved over several generations. From California to Texas, the largest Mexican-origin communities in the United States are still located in former Mexican territories that were annexed in the last century, and they are today linked to entire communities on the other side of the border.[19] The Philippines—unlike Puerto Rico, which also came under U.S. hegemony as a result of the 1898 Spanish-American War—gained formal independence from the United States after World War II, which has since led to different patterns of immigration. During the half-century of U.S. colonization, the Americanization of Filipino culture was pervasive, especially in the development of a U.S.-styled educational system and the adoption of English as an official language, and the United States today is not only the Philippines' major trading partner but also accounts for more than half of total foreign investment there.[20] Since the 1960s, as will be detailed below, the Philippines has sent the largest number of immigrant professionals to the United States, as well as a high proportion of the many international students enrolled in American colleges and universities. Moreover, the extensive U.S. military presence in the Philippines—including the largest American bases in the Asian-Pacific region—has fueled immigration through marriages with U.S. citizens stationed there, through unique arrangements granting U.S. citizenship to Filipinos who served in the armed forces during World War II, and through direct recruitment of Filipinos into the U.S. Navy. Remarkably, by 1970 there were more Filipinos in the U.S. Navy (14,000) than in the entire Filipino navy.[21] During 1978–85, more than 51 percent of the 12,500 Filipino babies born in the San Diego metropolitan area—site of the largest naval station in the United States and the third largest destination of Filipino immigrants—were delivered at just one hospital: the U.S. Naval Hospital.[22]

Among the other six leading countries of recent immigration, linkages unwittingly structured by American foreign policy and military in-


221

tervention since the 1950s are most salient in the exodus of the Koreans and Vietnamese. Indeed, an ironic consequence of the wars that took tens of thousands of Americans to Korea and Vietnam is that tens of thousands of Koreans and Vietnamese—including many Amerasians—have since come to America, albeit through quite different routes. Emigration connections variously shaped by U.S. intervention, foreign policies, and immigration policies are also a common denominator in the exodus of the Chinese after the 1949 revolution, the Cubans after the 1959 revolution, and the Dominicans after the U.S.-backed coup in 1965. In the case of India, South Korea, and Taiwan, large-scale U.S. foreign aid, technical assistance, trade, and direct investment (which in India surpassed that of the United Kingdom soon after decolonization) helped to forge the channels for many professionals and exchange students to come to America.[23] It has been estimated that since the early 1950s fewer than 10 percent of the many thousands of students from South Korea, Taiwan, China, and Hong Kong who have come to the United States for training on nonimmigrant visas ever returned home; instead, many adjusted their status and gained U.S. citizenship through occupational connections with American industry and business, thus becoming eligible to send for family members later on.[24] None of this is to suggest, of course, that the complex macrostructural determinants that shape migration flows—above all global market forces, which will be considered further on, and internal dynamics and crises in the sending countries—can be reduced to politico-military factors or state policies, but rather to focus attention on the importance of particular historical patterns of U.S. influence in the creation and consolidation of social networks that over time give the process of immigration its cumulative and seemingly spontaneous character.[25]

Social Class Origins of the New Immigration

There is no doubt that wage differentials between the United States and the LDCs act as a magnet to attract immigrants to America. This is especially the case along the 2,000-mile-long border between the United States and Mexico—the largest point of "North-South" contact in the world. During the 1980s, the minimum wage in the United States ($3.35 per hour) was six times the prevailing rate in Mexico, and the gap was wider still relative to most rates in Central America. But wage differentials alone do not explain why even in neighboring Mexico only a small fraction of the population ever undertakes the journey to "El Norte." What is more, 10 of the 15 poorest nations of the world (with sizable populations and national per capita incomes below U.S. $200)—Chad, Zaire, Mozambique, Mali, Burkina Faso, Nepal, Malawi, Bangladesh, Uganda, and Burma—are scarcely represented among immigrants to America, if at all. Signifi-


222

cantly, the only sizable groups of recent immigrants who do hail from the world's 15 poorest countries—from Cambodia, Laos, and Vietnam, and (though to a much lesser extent) Ethiopia and Afghanistan—have been admitted as political refugees.[26]

Moreover, the fact that most newcomers to America come from comparatively poorer nations—such as the 14 LDCs listed above in table 10.2—does not mean that the immigrants themselves are drawn from the uneducated, unskilled, or unemployed sectors of their countries of origin. Available evidence from the INS, summarized in table 10.3, indicates just the opposite. Over the past two decades, an average of more than 60,000 immigrant engineers, scientists, university professors, physicians, nurses, and other professionals and executives have been admitted each year into the United States. From the 1960s through the early 1980s, about one-third of all legal immigrants to the United States (excluding dependents) were high-status professionals, executives, or managers in their countries of origin. The proportion of these so-called brain drain elites declined somewhat to 26.5 percent by the late 1980s—still a higher percentage than that of the native-born American population—despite the overwhelming majority of immigrants having been admitted under family preferences over the past two decades. In part, these data suggest that while many "pioneer" immigrants have entered with formal credentials under the occupational preferences of U.S. law, their close kin who join them later are drawn from the same social classes—accounting for both the relative stability and similarity of their flows over time, if with a gradually diminishing upper-crust occupational profile as family "chain migration" processes evolve and expand. But the dynamics of particular types of flows are much more complex than might seem at first glance.

Take, for example, the case of so-called foreign medical graduates (FMGs). Worldwide, about 5 percent of physicians have immigrated to foreign countries in recent decades, of whom about half have come to the United States—75,000 entered in the 1965–74 decade alone.[27] During the 1950s and 1960s, enrollments in U.S. medical schools remained virtually stationary, while the American health care system expanded greatly (all the more after the passage of Medicaid and Medicare in the mid-1960s), creating many vacancies in internship and residency positions in U.S. hospitals (especially in underserved areas such as inner cities, which did not attract U.S. medical graduates). The demand, reinforced by the new channels opened up by U.S. immigration law and the higher salaries offered by U.S. hospitals, enabled FMGs and nurses to flock to America, particularly from developing countries such as India and the Philippines, where English-language textbooks are used and where many more professionals were graduating than the economies could absorb. Few of these people were directly recruited by


223
 

TABLE 10.3 Trends in Occupational Backgrounds of Legal Immigrants, 1967–87, by Region and Main Sending Countries: Percentage of Immigrant Professionals, Executives, and Managers, in Regional Rank Order

Reported occupation of immigrants prior to admission to permanent resident status (a): percentage professional specialty, executives, and managers.

Region/Country of Birth      1967    1972    1977    1982    1987
Worldwide:                   32.4    36.0    33.0    31.9    26.5
Asia                         59.3    67.3    53.2    39.9    39.5
Africa                       53.7    67.3    60.7    45.8    39.4
Europe and Canada            29.7    26.6    41.5    44.4    40.7
Latin America                22.3    13.8    15.4    15.8    11.3

More Developed Countries:
Japan                        57.6    50.1    44.6    48.5    42.2
Canada                       48.7    51.6    61.3    57.9    55.0
United Kingdom               43.3    51.5    58.3    60.8    52.9
U.S.S.R.*                    40.9    41.0    42.0    39.1    47.0
Poland*                      32.3    27.1    30.6    32.1    26.9
Germany                      30.5    43.5    37.2    40.9    35.7
Italy                         8.4     8.5    21.0    30.8    33.6

Less Developed Countries:
India                        90.6    91.6    79.1    73.7    61.7
Korea                        80.5    72.9    49.6    42.9    44.0
Philippines                  60.2    71.6    46.8    44.9    45.9
China (b)                    48.6    52.5    53.9    47.3    34.3
Vietnam*                     71.6    56.9    36.6    11.4     7.7
Cuba*                        33.1    13.9    14.4    22.3     5.1
Colombia                     32.5    27.5    17.4    20.2    20.6
Haiti                        23.3    26.8    14.1    17.4     8.1
Jamaica                      19.1    15.9    33.4    21.6    18.6
Dominican Republic           14.5    15.3    13.1    13.8    12.2
El Salvador                  15.2    16.0    10.0    13.0     7.1
Mexico                        8.5     5.1     6.6     7.0     5.9
Cambodia*                      NA      NA      NA     7.1     2.0
Laos*                          NA      NA      NA     4.7     2.1

SOURCES : U.S. Immigration and Naturalization Service, Annual Reports (Washington, D.C.: Government Printing Office, 1967, 1972, 1977); and U.S. Immigration and Naturalization Service, Statistical Yearbooks (Washington, D.C.: Government Printing Office, 1982, 1987).

a About two-thirds of immigrants admitted as permanent residents report no prior occupation to the INS; they are mainly homemakers, children, retired persons, and other dependents. Data above are based on 152,925 immigrants who reported an occupation in 1967; 157,241 in 1972; 189,378 in 1977; 203,440 in 1982; and 242,072 in 1987.

b Includes Mainland China and Taiwan.

*Denotes country from which the majority of immigrants to the United States have been admitted as refugees.


224

American hospitals; most made their own arrangements through professional networks of friends who were or had been in the United States, or by writing blind letters to hospitals listed in American Medical Association or state directories. By the mid-1970s there were about 9,500 Filipino and 7,000 Indian FMGs in the United States—more than the number of American black physicians—as well as some 3,000 FMGs each from Cuba and South Korea, and 2,000 each from Mexico and Iran. Perhaps the most extraordinary instance occurred in 1972, when practically the entire graduating class of the new medical school in Chiangmai, Thailand, chartered a plane to America. The effect of this kind of emigration on the sending countries' domestic stock of physicians has varied greatly: in 1972 the number of Mexican and Indian FMGs in the United States represented only 4 percent of Mexico's stock and 5 percent of India's, but the proportion was 18 percent of South Korea's, 22 percent of Iran's, 27 percent of Thailand's, 32 percent of the Dominican Republic's, 35 percent of Taiwan's, 43 percent of Cuba's, 63 percent of the Philippines', and—incredibly—95 percent of Haiti's. Since the late 1970s the flow of FMGs has declined, due to a constricting job market (as the supply of U.S.-trained physicians has increased) and the passage of more restrictive U.S. visa and medical licensing requirements, but by the late 1980s, FMGs still comprised 20 percent of the nation's physicians.[28]

The worldwide trends presented in table 10.3 conceal a wide range in the class character of contemporary immigration to the United States; among the principal sending countries there are considerable differences in the occupational backgrounds of immigrants. "Brain drain" immigrants have dominated the flows of Indians, Koreans, Filipinos, and Chinese (including Taiwanese) since the 1960s. High proportions are also in evidence among the Japanese, Canadian, and British groups—although their immigration flows are smaller, as seen earlier—as well as among some refugee groups, particularly Soviet Jews and Armenians and the more sizable first waves of refugees from Vietnam and Cuba. By contrast, immigration from Mexico, El Salvador, the Dominican Republic, and (until very recently) Italy has consisted predominantly of manual laborers and low-wage service workers, as has also been the case among refugees from Laos and Cambodia, and the more recent waves of Vietnamese, Cubans, and Haitians. Between these extremes in occupational profiles are Colombians, Jamaicans, Germans, and Poles.

Over time, the drop in the proportion of highly skilled immigrants within particular national groups is most apparent among non-European refugees, consistent with a general pattern that characterizes refugee flows: initial waves tend to come from the higher socioeconomic strata, followed later by heterogeneous working-class waves more representative of the society of origin. As table 10.3 shows, rapid declines are seen among refugees who come from poor countries, such as Vietnam, where only a small proportion of the population is well educated.

The information provided in table 10.3, while useful as a first step to sort out the diverse class origins of the new immigration, is limited in several ways. The INS does not collect data on the educational backgrounds of legal immigrants, nor on the occupations they enter once in the United States, nor, for that matter, on the characteristics of undocumented immigrants or of emigrants (those who leave the United States after a period of time, estimated at about 160,000 annually). A more precise picture can be drawn from the last available census, which counted a foreign-born population of 14.1 million persons in 1980 (including an estimated 2.1 million undocumented immigrants). Census data on several relevant indicators for the largest foreign-born groups in the United States as of 1980 are presented in table 10.4, rank-ordered by their proportions of college graduates. The picture that emerges shows clearly that the foreign-born are not a homogeneous population; instead, to borrow a term from Milton Gordon, the formation of different "eth-classes" is apparent. Less apparent is the fact that within particular nationalities there is often also considerable socioeconomic diversity.

An upper stratum is composed of foreign-born groups whose educational and occupational attainments significantly exceed the average for the native-born American population. Without exception, all of them are of Asian origin—Indians, Chinese (especially Taiwanese), Filipinos, Koreans, and Japanese—with the most recently immigrated groups reflecting the highest levels of attainment. It is precisely this stratum that accounts for the popularization of the recent myth of Asian-Americans as "model minorities," whose children are overrepresented among the nation's high school valedictorians and in admissions to elite universities from Berkeley to Harvard. For instance, foreign-born students collected 55 percent of all doctoral degrees in engineering awarded by American universities in 1985, with one-fifth of all engineering doctorates going to students from Taiwan, India, and South Korea alone. In 1988 the top two winners of the Westinghouse Science Talent Search, the nation's most prestigious high school competition, were immigrant students from India and Taiwan in New York City public schools; indeed, 22 of the top 40 finalists were children of immigrants. Moreover, the stories of competitive success are not limited to science and math-based fields (where Asian immigrant students tend to concentrate to reduce their English-language handicaps): the 1985 U.S. National Spelling Bee champ was Chicago schoolboy Balu Natarajan, who speaks Tamil at home, and the 1988 winner was a thirteen-year-old girl from a California public school, Indian-born Rageshree Ramachandran, who correctly spelled "elegiacal" to beat out runner-up Victor Wang, a Chinese-American.[29]


226
 

TABLE 10.4 Characteristics of the Largest Foreign-Born Groups in the United States in 1980, Ranked by Their Proportion of College Graduates, Compared to the Native-Born Groups

Column groups: Education (a) = College Graduate (%), High School Graduate (%); Occupation (b) = Professional Specialty (%), Service Occup. (%); Year of Immigration = 1970–80 (%), 1960–69 (%), Pre-1960 (%); and Not a Citizen (%).

Country of Birth          Persons (N)   College   H.S.     Prof.    Service   1970–80   1960–69   Pre-1960   Not a
                                        Grad. %   Grad. %  Spec. %  Occup. %     %         %          %      Citizen %

Above U.S. Average:
India                         206,087     66.2     88.9     42.8      5.3      76.8      19.3       3.9       76.0
China (Taiwan)                 75,353     59.8     89.1     30.4     13.7      81.1      17.0       1.9       71.1
Philippines                   501,440     41.8     74.0     20.1     16.2      63.6      22.6      13.8       55.3
Korea                         289,885     34.2     77.8     14.7     17.0      83.9      13.0       3.1       65.4
China (Mainland)              286,120     29.5     60.0     16.8     24.4      47.5      27.3      25.2       49.7
Japan                         221,794     24.4     78.0     13.6     20.8      45.2      22.7      32.1       56.7

Close to U.S. Average:
England                       442,499     16.4     74.6     17.4     12.2      21.9      22.0      56.1       42.0
Cuba                          607,814     16.1     54.9      9.2     12.2      26.9      60.4      12.8       54.9
U.S.S.R.                      406,022     15.7     47.2     15.9     13.2      24.3       5.3      70.4       27.4
Germany                       849,384     14.9     67.3     13.4     14.1      10.6      20.6      68.8       21.4
Colombia                      143,508     14.6     62.8      8.1     15.8      55.0      37.1       7.9       75.1
Canada                        842,859     14.3     61.8     16.2     11.4      15.2      20.1      64.7       39.0
Vietnam                       231,120     12.9     62.1      8.6     16.4      97.6       2.1       0.2       88.9
Jamaica                       196,811     11.0     63.5     10.2     29.9      58.7      29.8      11.6       63.7

Below U.S. Average:
Poland                        418,128     10.0     40.5     10.8     13.5      11.0      14.5      74.5       22.2
Greece                        210,998      9.5     40.4      8.0     25.0      32.0      27.7      40.3       35.0
Ireland                       197,817      8.8     52.1     14.5     21.7       7.3      14.5      78.1       18.8
Italy                         831,992      5.3     28.6      6.1     16.3      12.1      18.2      69.8       22.6
Dominican Republic            169,147      4.3     30.1      3.1     18.5      56.8      37.2       6.1       74.5
Portugal                      211,614      3.3     22.3      2.3     10.0      45.0      34.0      21.0       61.6
Mexico                      2,199,221      3.0     21.3      2.5     16.6      57.8      21.9      20.3       76.4

Total Foreign-Born         14,079,906     15.8     53.1     12.0     16.1      39.5      22.3      38.2       49.5
Total Native-Born         212,465,899     16.3     67.3     12.3     12.7

SOURCES : U.S. Bureau of the Census, Statistical Abstracts of the United States , 109th ed. (Washington, D.C.: Government Printing Office, 1989); table 47; and U.S. Bureau of the Census, 1980 Census of Population: Detailed Population Characteristics , PC80-1-D1-A (Washington, D.C.: Government Printing Office, 1984), table 254.

a Years of school completed by persons aged twenty-five years or older.

b Present occupation of employed persons aged sixteen years or older.


228

Yet also during the 1980s, the highest rates of poverty and welfare dependency in the United States have been recorded among Asian-origin groups, particularly refugees from Indochina. One study found poverty rates ranging from over 50 percent for the Vietnamese to 75 percent for the Chinese-Vietnamese and the Lao, 80 percent for Cambodians, and nearly 90 percent for the Hmong. And Southeast Asian and, to a lesser extent, Korean workers are much in evidence, along with undocumented Mexican and Salvadoran immigrants, in a vast underground sweatshop economy that has expanded during the 1980s and into the 1990s in Southern California. Those findings debunk genetic and cultural stereotypes that have been propounded in the mass media as explanations of "Asian" success, and point instead to the diversity of recent Asian immigration and to the class advantages of particular Asian-origin groups.[30]

A middle stratum evident in table 10.4, composed of groups whose educational and occupational characteristics are close to the U.S. average, is more heterogeneous in terms of national origins. It includes older immigrants from England, the U.S.S.R., Germany, and Canada (the majority entering the United States prior to 1960), and more recent immigrants from Cuba, Colombia, Vietnam, and Jamaica. The post-1980 waves of Mariel refugees from Cuba and Vietnamese "boat people" from more modest social class backgrounds are not reflected in the data in table 10.4, since they arrived after the census was taken; the 1990 census will probably reflect much wider differences in the characteristics of these two refugee populations, underscoring the internal diversification of particular national groups over time.

Finally, as table 10.4 shows, a lower stratum is composed of working-class groups who fall substantially below native-born norms. It includes recent immigrants from Mexico and the Dominican Republic—of whom a substantial number entered without documents—but also includes less visible, older European immigrants from Poland, Greece, Ireland, Italy, and Portugal. The 1990 census most probably will add to this stratum several groups who have arrived in sizable numbers during the past decade, including Salvadorans, Guatemalans, Nicaraguans, Haitians, and Cambodian and Laotian refugees. Not included in this bottom stratum are Puerto Ricans, since they are not "foreign-born" but are U.S. citizens by birth; but their aggregate socioeconomic characteristics would place them here, and their large-scale post–World War II migration to the mainland resembles in many respects that of Mexican labor immigration. Mexicans and Puerto Ricans make up the overwhelming majority of the supranational "Hispanic" population of the United States, and their particular characteristics and circumstances have colored the construction of negative ethnic typifications.[31] In any case, these findings, too, debunk cultural stereotypes that have been propounded in the mass media as explanations for the lack of "Hispanic" success in contrast to that of "Asians" and white European ethnics, and point instead to the diversity of recent Latin American immigration and to the class disadvantages of particular groups.

Significantly, there is an imperfect correlation between educational and occupational attainment among these groups. For example, as table 10.4 shows, the percentage of longer-established Canadian and certain European immigrants employed in professional specialties actually exceeds the respective proportion of their groups who are college graduates. By contrast, the percentage of more recently arrived Asian and Latin American immigrants who are employed in the professions is generally far below their respective proportions of college graduates—and, for that matter, far below their respective proportions of those who held professional positions in their countries of origin prior to admission into the United States (as documented previously in table 10.3). These discrepancies offer a clue about barriers such as English proficiency and strict licensing requirements that regulate entry into the professions and that recent immigrants—most of them nonwhite, non-European, and non–English speakers—must confront as they seek to make their way in America. In response, some immigrants shift instead to entrepreneurship as an avenue of economic advancement—and as an alternative to employment in segmented labor markets. Indeed, the process of occupational and economic adaptation is complex and not simply a function of the "human capital" brought by the immigrants. Their varying social-class resources at the time of entry interact with other differences in the contexts of reception experienced by particular groups—such as government policies and programs, local labor markets, cultural prejudices and racial discrimination, and existing ethnic communities and networks—to mold their diverse modes of incorporation in the American economy and society.

In general, however, immigrants who come to the United States are positively selected groups, not only in terms of their above-average urban backgrounds and socioeconomic resources compared to homeland norms, but also in terms of their ambition, determination, and willingness to work and to take risks. Legally or illegally, most make their passages to America not so much to escape perennial unemployment or destitution, but to seek opportunities for advancement that are unavailable in their own countries. They are "innovators," in Robert Merton's sense of the term, who choose immigration as a feasible solution to a widening gap between life goals and actual means, between their own rising aspirations and the dim possibilities for fulfilling them at home. The lure of America is greatest for those who experience this gap at its widest and who have the requisite resources and connections to meet the costs of immigration to a foreign world—such as well-educated cosmopolitans in the less developed countries—and those groups have taken full advantage of the preferences available under U.S. law. Immigration requires both restlessness and resourcefulness, and on the whole, the main reason the richest of the rich and the poorest of the poor do not immigrate is that they are, respectively, unmoved or unable to move.

Even undocumented migrants must be able to cover the often considerable costs of transportation and surreptitious entry into the United States, as must refugees such as "boat people" be willing to take extraordinary risks and pay the costs of surreptitious exit from their countries. Although the socioeconomic origins of unauthorized immigrants are modest by U.S. standards, they consistently meet or surpass the average for their countries of origin. Recent studies report that "coyotes" (smugglers) charge U.S. $700 to get border-crossers from Mexico to Los Angeles, $500 to Houston, $250 to $450 to San Antonio—in large groups the fee may be lowered to $200—and that undocumented Mexican immigrants are on average more urban and literate than the general Mexican population. In the Dominican Republic, it may cost $1,000 to $2,000 to obtain papers and be smuggled out of the country, and undocumented Dominicans actually tend to be more educated than those who immigrate legally. Haitian "boat people" reportedly pay $500 to $1,000 per person to buy passage aboard barely seaworthy craft to South Florida. A decade ago in Vietnam, ethnic Chinese and Vietnamese refugees were paying five to ten gold pieces ($2,000 to $4,000) per adult to cross the South China Sea in flimsy fishing boats—a price well beyond the means of the average Vietnamese. To afford this often required ingenious exchange schemes through kinship networks. For example, a family in Vietnam planning to escape by boat contacted another that had decided to stay in order to obtain the necessary gold for the passage; the two families then arranged, through relatives of both already in the United States (usually "first wave" refugees), for the escaping family's relatives to repay an equivalent amount in dollars to the relatives of the family that stayed.[32] Those who surmount such obstacles and succeed in reaching America are far from being representative of the population of their societies of origin. They, too, add to the vitality, energy, and innovativeness that immigrants contribute to American society.

The New Immigrants in America: Impacts on Economic and Cultural Institutions

Patterns of Settlement and Incorporation

Although fewer than one in ten persons in the United States today is an immigrant, the impact of the new immigration on American communities is much more significant than might appear at first glance. The main reason is that immigrants tend to concentrate in urban areas where coethnic communities have been established by past immigration. Such spatial concentrations serve to provide newcomers with manifold sources of moral, social, cultural, and economic support that are unavailable to immigrants who are more dispersed. In general, patterns of concentration or dispersal vary for different classes of immigrants (professionals, entrepreneurs, manual laborers) with different types of legal status (regular immigrants, refugees, the undocumented). The likelihood of dispersal is greatest among immigrant professionals—who tend to rely more on their qualifications and job offers than on pre-existing ethnic communities—and, at least initially, among recent refugees who are sponsored and resettled through official government programs that have sought deliberately to minimize their numbers in particular localities. However, refugee groups, too, have shown a tendency to gravitate as "secondary migrants" to areas where their compatriots have clustered (for example, Cubans to South Florida, Southeast Asians to California). The likelihood of concentration is greatest among working-class immigrants—who tend to rely more on the assistance offered by pre-existing kinship networks—and among business-oriented groups, who tend to settle in large cities. Dense ethnic enclaves provide immigrant entrepreneurs with access to sources of cheap labor, working capital and credit, and dependable markets. Over time, as the immigrants become naturalized U.S. citizens, local strength in numbers also provides opportunities for political advancement and representation of ethnic minority group interests at the ballot box.[33] Social networks are thus crucial for an understanding not only of migration processes, as noted earlier, but also of adaptation processes and settlement patterns in areas of final immigrant destination.

Table 10.5 lists the states and metropolitan areas of principal immigrant settlement (SMSAs) in the United States as of 1980. In addition, table 10.5 provides comparative data on the places of settlement of recent legal immigrants (those admitted during 1987–89) as well as of the 3 million illegal immigrants who qualified for legalization of their status under IRCA in 1989. While there are immigrants today in every one of the fifty states, just six states (California, New York, Florida, Texas, Illinois, and New Jersey) accounted for two-thirds of the total 1980 foreign-born population, for nearly three-fourths of 1987–89 legal immigrants, and for almost nine-tenths of all IRCA applicants. A pattern of increasing spatial concentration is clear for the four states of greatest immigrant settlement (California, New York, Florida, and Texas). California alone, which in 1980 already accounted for 25 percent of all the foreign-born, drew 29 percent of 1987–89 immigrants and a whopping 54 percent of IRCA applicants. New York and Florida combined for another quarter of the foreign-born in 1980 and also of 1987–89 immigrants, but only 11 percent of IRCA applicants. Texas, whose share of immigrants increased


232
 

TABLE 10.5 States and Metropolitan Areas of Principal Immigrant Settlement in the United States: Location of the 1980 Foreign-Born Population, 1987–89 Immigrants, and 1989 Legalization Applicants

                                  Foreign-Born Population, 1980              Immigrants, 1987–89 (a)      IRCA Applicants, 1989 (b)
                                  N            % of Total    % of U.S.       N            % of Total      N             % of Total
                                               Population    Foreign-Born                 Immigrants                    Legalization
                                                             Population                   Admitted                      Applicants
States:
California                        3,580,033       15.1          25.4           530,795       28.6          1,636,325       53.9
New York                          2,388,938       13.6          17.0           336,845       18.1            170,601        5.6
Florida                           1,058,732       10.9           7.5           155,108        8.4            160,262        5.3
Texas                               856,213        6.0           6.1           123,446        6.6            440,989       14.5
Illinois                            823,696        7.2           5.9            81,011        4.4            158,979        5.2
New Jersey                          757,822       10.3           5.4           100,697        5.4             44,184        1.5

Metropolitan Areas:
New York, N.Y.-N.J.               1,946,800       21.3          13.8           285,840       15.4            153,072        5.0
Los Angeles-Long Beach, Calif.    1,664,793       22.3          11.8           231,096       12.4            809,248       26.6
Chicago, Ill.                       744,930       10.5           5.3            64,821        3.5            136,081        4.5
Miami-Hialeah, Fla.                 578,055       35.6           4.1            93,776        5.1             66,792        2.2
San Francisco-Oakland, Calif.       551,769       15.4           3.9            81,780        4.4             64,111        2.1
Boston, Mass.                       280,080       10.1           2.0            38,218        2.0             12,512        0.4
Anaheim-Santa Ana, Calif.           257,194       13.3           1.8            42,835        2.3            144,521        4.8
Washington, D.C.                    249,994        8.2           1.8            56,676        3.1             31,182        1.0
San Diego, Calif.                   235,593       12.7           1.7            38,332        2.1             98,875        3.3
Houston, Tex.                       220,861        7.6           1.6            33,296        1.8            131,186        4.3
San Jose, Calif.                    175,833       13.6           1.2            35,176        1.9             41,857        1.4

U.S. Totals                      14,079,906        6.2         100.0         1,856,651      100.0          3,038,825      100.0

SOURCES : U.S. Bureau of the Census, 1980 Census of Population: General Social and Economic Characteristics , PC80-1-C1, State and SMSA Summaries (Washington, D.C.: Government Printing Office, 1983); U.S. Bureau of the Census, Detailed Population Characteristics , PC80-1-D1-A (Washington, D.C.: Government Printing Office, 1984), table 253; U.S. Immigration and Naturalization Service Statistical Yearbooks (Washington, D.C.: Government Printing Office, 1987–89).

a Data indicate the "intended destination" of regular immigrants admitted to permanent resident status during 1987–89, as reported to the INS; data do not include the 478,814 immigrants whose status was legalized in fiscal year 1989 under the Immigration Reform and Control Act (IRCA).

b Persons who formally applied for legalization of their status by May 1990 under IRCA.


234

from 6.1 percent in 1980 to 6.6 percent in 1987–89, also accounted for 14.5 percent of IRCA applicants. In fact, over two-thirds of IRCA applicants resided in California and Texas alone—both states situated along the Mexican border. In Illinois, the proportion of immigrants decreased from 5.9 percent in 1980 to 4.4 percent in 1987–89—partly because Chicago has ceased to be a preferred destination for Mexican immigrants—while in New Jersey the levels for the two time periods remained unchanged at 5.4 percent.

Patterns of immigrant concentration are even more pronounced within particular metropolitan areas. As table 10.5 shows, just eleven SMSAs accounted for more than half of all legal and illegal immigrants in the United States during the 1980s, and five of these were California cities. As in the past, the New York metropolitan area remains the preferred destination of immigrants, accounting for 13.8 percent of the 1980 U.S. foreign-born population and another 15.4 percent of 1987–89 immigrants, though only 5 percent of IRCA applicants resided in New York. Los Angeles is not far behind, with 11.8 percent and 12.4 percent of 1980 and 1987–89 immigrants, respectively—but a huge 26.6 percent of all IRCA applicants nationally (more than 800,000 persons) were concentrated in Los Angeles, more than five times the number in any other urban area. Adjacent areas in Southern California (Santa Ana and San Diego) also show significant increases in both legal and especially illegal immigrant settlement. Of the leading SMSAs, only Chicago showed a drop in its relative proportion of immigrants, from 5.3 percent in 1980 to 3.5 percent in 1987–89 (although more IRCA applicants were recorded in Chicago than in Houston), while Boston's share remained at 2.0 percent during the decade (although only a tiny fraction of IRCA applicants lived in the Boston area). All other cities in table 10.5—Miami; San Francisco; Washington, D.C.; Houston; and San Jose—showed significant increases over time.

Moreover, different immigrant groups concentrate in different metropolitan areas and create distinct communities within each of these cities. For example, among the largest contingents of recent immigrants, Miami remains the premier destination of Cubans (they are already a majority of the city's total population), as is New York for Dominicans, Jamaicans, and Soviet Jews. Colombians and Haitians are also most concentrated in Miami and New York. The Los Angeles area is the main destination for Mexicans, Salvadorans, Filipinos, Koreans, Vietnamese, and Cambodians—their communities there are already the largest in the world outside their respective countries—and it is the third choice of Chinese and Indians. After Los Angeles, recent Mexican immigrants have settled in largest numbers in San Diego and El Paso; Filipinos in San Diego and San Francisco; Koreans in New York and Washington, D.C.; and Vietnamese in Santa Ana and San Jose. More Chinese immigrants settle in New York than in any other city, followed by San Francisco; more Indians also settle in New York, followed by Chicago (although among all major immigrant groups Indians tend to be the most dispersed, reflecting their significantly greater proportion of professionals).[34]

Notwithstanding the relative dispersal of immigrant professionals, they have significant impacts in the sectors within which they are employed. Rather than compete with or take jobs away from the native-born, these groups fill significant national needs for skilled talent and in some respects also serve as a strategic reserve of scarce expertise. For example, we have already mentioned the disproportionate impact of immigrant engineers in U.S. universities. Given the continuing decline of enrollments in advanced engineering training among the native-born, the proportion of the foreign-born in these fields has grown rapidly. By 1987 over half of all assistant professors of engineering under thirty-five years of age in U.S. universities were foreign-born, and it is estimated that by 1992 over 75 percent of all engineering professors in the United States will be foreign-born. Already one out of every three engineers with a doctorate working in U.S. industry today is an immigrant.[35]

The impact of foreign medical graduates (FMGs) is almost as great: over the past two decades they have constituted about 20 percent of the nation's physicians and from about 33 percent (in the 1970s) to 18 percent (by the late 1980s) of its interns and residents. They are not, however, randomly dispersed throughout the country: in New York City in the mid-1970s, for instance, more than half of the interns in municipal hospitals and four-fifths of those at voluntary hospitals were Asian immigrant doctors. Their mode of incorporation into the American health care system is largely determined by the U.S. market for interns and residents. By the mid-1970s, for example, 35 percent of available internships and residency positions could not be filled by U.S. and Canadian medical graduates, and the geographical clustering of immigrant doctors in some northeastern and midwestern states is largely a function of job availability in certain types of hospitals that draw heavily on FMGs. In general, FMGs are concentrated in the less prestigious, non-university-affiliated hospitals in underserved areas that do not attract native-born physicians, and they are relatively few in hospitals with the greatest scientific emphasis and degrees of specialization located in the most desirable areas (such as California). Among FMGs, a further process of socio-cultural stratification is evident: FMGs from countries like Great Britain have exhibited patterns of entry most similar to those of U.S. and Canadian medical graduates; followed by a second stratum of FMGs from countries like Argentina, Colombia, and India; then a third stratum from countries like Taiwan, South Korea, Iran, and the Philippines; and lastly by Cuban refugee physicians (who entered the least prestigious and least scientifically oriented training hospitals). Despite substantial increases in the pool of U.S. medical graduates during the 1980s, many hospitals have been unable to attract even native-born nurse practitioners or physician assistants to replace FMGs who are willing to accept resident salaries and put in the typical 80-to-100-hour resident work week. A recent survey found that FMG-dependent teaching hospitals would each lose $2 to $5 million a year in Medicare training funds were they required to replace FMG residents, forcing cutbacks and affecting patient care. FMGs thus not only perform key functions in American medical care—especially in rural and inner-city hospitals serving Medicaid patients and the uninsured working poor—but they also give U.S. medical graduates more options in choosing jobs.[36]

Concerns about the economic impact of working-class immigrants more often focus on claims that they take jobs away from or depress the wages of native-born workers. Such claims, however, are made in the absence of any evidence that unemployment is caused by immigrants either in the United States as a whole or in areas of high immigrant concentration, or that immigration adversely affects the earnings of either domestic majority or minority groups. To the contrary, recent research studies of both legal and undocumented immigration point to significant net economic benefits accruing to U.S. natives. As a rule, the entry of immigrants into the labor market helps to increase native wages as well as productivity and investment, sustain the pace of economic growth, and revive declining sectors, such as light manufacturing, construction, and apparel (New York City, Los Angeles, and Miami offer recent examples). An influx of new immigrant labor also has the effect of pushing up domestic workers to better supervisory or administrative jobs that may otherwise disappear or go abroad in the absence of a supply of immigrant manual labor. Less-skilled immigrants, paralleling the pattern noted above for FMG professionals, typically move into manual labor markets deserted by native-born workers, who shift into preferred non-manual jobs.[37] In addition, immigrants, on average, actually pay more taxes than natives, but use much smaller amounts of transfer payments and welfare services (such as aid to families with dependent children [AFDC], supplemental security income, state unemployment compensation, food stamps, Medicare, and Medicaid). It has been estimated that immigrants "catch up" with natives in their use of welfare services only after 16 to 25 years in the United States. Because of their vulnerable legal status, undocumented immigrants, in particular, are much less likely to use welfare services, and they receive no Social Security income, yet about three-fourths of them pay Social Security and federal income taxes. And because newly arrived immigrants are primarily younger workers rather than elderly persons, by the time they retire and are eligible to collect Social Security (the costliest government program of transfer payments), they have usually already raised children who are contributing to Social Security taxes and thus balancing their parents' receipts.[38]

Rather than take jobs away, entrepreneurial immigrants often create them. For example, among Koreans in Los Angeles in 1980, a recent study found that 22.5 percent were self-employed (compared to 8.5 percent of the local labor force), and they in turn employed another 40 percent of Korean workers in their businesses. The 4,266 Korean-owned firms thus accounted for two-thirds of all employed Koreans in the Los Angeles metropolitan area.[39] In Miami, Cuban-owned enterprises increased from about 900 to 25,000 between the late 1960s and the late 1980s; by 1985 the $2.2 billion in sales reported by Hispanic-owned firms in Dade County ranked that area first in gross receipts among all such firms in the country. A longitudinal survey of Cuban refugees who arrived in Miami in 1973 showed that by 1979, 21.2 percent were self-employed and another 36.3 percent were employed in businesses owned by Cubans. A subsequent survey of Mariel Cubans who arrived in Miami in 1980 found that by 1986 28.2 percent were self-employed and another 44.9 percent were employed by their co-nationals.[40] In Monterey Park ("Little Taipei"), east of Los Angeles, Chinese immigrants from Taiwan and Hong Kong—who in 1988 already comprised over half of its 61,000 residents—owned two-thirds of the property and businesses in the city. During 1985 an estimated $1.5 billion was deposited in Monterey Park financial institutions (equivalent to about $25,000 for each city resident), much of it the capital of Hong Kong investors nervous about the impending return of Hong Kong to Mainland China.[41] And, although not yet rivaling the scale of these ethnic enclaves, a burgeoning center of Vietnamese-owned enterprises has been developed over the past decade in the city of Westminster ("Little Saigon") in Orange County. In all of these cases, immigrants have built "institutionally complete" ethnic communities offering opportunities for advancement unavailable in the general economy. Already Miami and Monterey Park have mayors who are Cuban and Chinese immigrants, respectively.

To be sure, other newcomers in areas of immigrant concentration—especially the undocumented and unskilled immigrant women—are exploited as sources of cheap labor in a growing informal sector that is fueled by foreign competition and the demand for low-cost goods and services in the larger economy. They find employment in the garment industry (in Los Angeles, perhaps 90 percent of garment workers are undocumented immigrants), as well as in electronics assembly, construction, restaurants, domestic service, and a wide range of other informal activities—often at subminimum wages and under conditions that violate federal and state labor laws. In this context the presence of a large supply of cheap labor does keep wages down: the low wages paid to the immigrants themselves, who under their precarious circumstances are willing to accept whatever work is offered. In regions like Southern California there is the added irony that undocumented immigrants are attracted by an economic boom that their own labor has helped to create. IRCA did provide 3 million immigrants with an opportunity to emerge from the shadows of illegality, but at a cost: the new law has had the effect of driving those ineligible for legalization (virtually all post-1981 arrivals) further underground, but without stopping the flow of illegal immigration; it has also led—according to a 1990 report by the General Accounting Office—to increasing ethnic discrimination by employers against legal residents. The new post-IRCA underclass of undocumented (and sometimes homeless) Mexican and Central American workers is increasingly visible, not only in traditional agricultural and horticultural enterprises but especially on dozens of street corners of California cities, from Encinitas to North Hollywood, where groups huddle during the day waiting for job offers from homeowners and small contractors. The situation has bred a new upsurge of nativist intolerance in heavily impacted areas.[42]

Refugees differ from other categories of immigrants in that they are eligible to receive public assistance on the same means-tested basis as U.S. citizens, and the federal government has invested considerable resources since the early 1960s to facilitate the resettlement of selected refugee groups. Prior to that time, refugee assistance depended entirely on the private sector, particularly religious charities and voluntary agencies. The expansion of the state's role in refugee resettlement roughly parallels the expansion of the American welfare state in the 1960s and early 1970s. In the twelve years from 1963 (when federal outlays officially began) to 1974, domestic assistance to mostly Cuban refugees totaled $2.3 billion; and in the twelve years from 1975 to 1986, aid to mostly Indochinese refugees totaled $5.7 billion, peaking in 1982, when $1.5 billion were expended, and declining sharply thereafter (all figures are in constant 1985 dollars). The lion's share of those federal funds goes to reimburse states and localities for cash and medical assistance to refugees during their first three years in the United States. Public assistance to eligible refugees is conditioned upon their attendance in assigned English-as-a-second-language (ESL) or job training classes and acceptance of employment; it also formally allows these groups (at least during a transition period after arrival) an alternative mode of subsistence outside existing labor markets and ethnic enclaves. However, states have different "safety nets"—levels of benefits and eligibility rules vary widely from state to state—forming a segmented state welfare system in the United States. For example, AFDC benefits for a family of four in California in the early 1980s were $591 a month (second highest in the country), compared to only $141 in Texas (second lowest); intact families (two unemployed parents with dependent children) were eligible for AFDC and Medicaid in California, but ineligible in Texas; and indigent adults without dependent children were eligible for general assistance in California localities, but not in Texas. Hence, the initial decision to resettle refugees in one state or another affects not only their destinations but their destinies as well. Welfare dependency rates vary widely among different refugee nationalities, and from state to state among refugees of the same nationality. Not surprisingly, the highest rates have been observed among recently arrived, less-skilled, "second-wave" Southeast Asian families with many dependent children in California; still, all research studies of Cambodian, Laotian, and Vietnamese refugees throughout the country have found that welfare dependency (which even in California keeps families below the federal poverty line) declines steadily over time in the United States.[43]

Language and the Second Generation

A more salient issue concerns the impact of the new immigration on public school systems and their rapidly changing ethnic composition. The issue itself is not new: at the turn of the century, the majority of pupils in many big-city schools from New York to Chicago were children of immigrants. Today, nowhere are immigrant students more visible—or more diverse—than in the public schools of California. By the end of the 1980s, almost a third of California's 4.6 million students in kindergarten through twelfth grade (K–12) in the public schools spoke a language other than English at home; while 70 percent of them spoke Spanish as their mother tongue, the rest spoke over 100 different languages. Yet of California's scarce pool of bilingual teachers, 94 percent spoke only Spanish as a second language, a few spoke various East Asian languages, and there was not a single certified bilingual teacher statewide for the tens of thousands of students who spoke scores of other mother tongues. Table 10.6 summarizes the trend over the past decade in the annual enrollments of language-minority students, who are classified by the schools as either fluent English proficient (FEP) or limited English proficient (LEP). In 1973 there were 168,000 students classified as LEP in the state, and that number doubled by 1980; from 1981 to 1989, as table 10.6 shows, the number of LEP students doubled again to about 743,000, and the number of FEP students increased by over 40 percent to 615,000. The FEP classification marks an arbitrary threshold of English proficiency, which schools use to "mainstream" students from bilingual or ESL classrooms to regular classes. Indeed, bilingual education in Cali-


240
 

TABLE 10.6 Trends in California Public School Enrollments (K–12) of LEP and FEP Students Who Speak a Primary Language Other than English at Home, 1981–89

          Total          Total LEP (a) Students     Total FEP (a) Students     Total LEP/FEP (a)
Year      Students           N            %             N            %             N           %
1981     3,941,997        376,794        9.6          434,063       11.0         810,857      20.6
1982     3,976,676        431,443       10.8          437,578       11.0         869,021      21.9
1983     3,984,735        457,542       11.5          460,313       11.6         917,855      23.0
1984     4,014,003        487,835       12.2          475,203       11.8         963,038      24.0
1985     4,078,743        524,082       12.8          503,695       12.3       1,027,777      25.2
1986     4,255,554        567,564       13.3          542,362       12.7       1,109,926      26.1
1987     4,377,989        613,222       14.0          568,928       13.0       1,182,150      27.0
1988     4,488,398        652,439       14.6          598,302       13.3       1,250,741      27.9
1989     4,618,120        742,559       16.1          614,670       13.3       1,357,229      29.4

SOURCE : California State Department of Education, Bilingual Education Office, DATA BICAL series, 1981–89 (Sacramento, Calif.).

a LEP means Limited English Proficient; FEP means Fluent English Proficient. The overwhelming majority of LEP/FEP students are immigrants or children of immigrants. These students speak over 100 different primary languages, although Spanish is the language spoken by about 70 percent of total 1989 LEP/FEP enrollments in California public schools. The largest of the other ethnolinguistic groups, in rank order, include speakers of Vietnamese, Filipino (Tagalog, Ilocano, and other dialects), Chinese (Cantonese, Mandarin, and other dialects), Korean, Cambodian, Hmong, Lao, Japanese, Farsi, Portuguese, Indian (Hindi, Punjabi, and others), Armenian, Arabic, Hebrew, Mien, Thai, Samoan, Guamanian, and a wide range of European and other languages.

fornia largely consists of "transitional" programs whose aim is to place LEP students in the English-language curriculum as quickly as possible. While immigrant children gain proficiency in English at different rates—depending on such extracurricular factors as age at arrival, their parents' social class of origin, community contexts, and other characteristics—very few remain designated as LEP beyond five years, and most are reclassified as FEP within three years.[44]

In some smaller elementary school districts near the Mexican border, such as San Ysidro and Calexico, LEP students alone account for four-fifths of total enrollments. In large school districts in cities of high immigrant concentration, language minorities comprise the great majority of K–12 students. In 1989, LEP students accounted for 56 percent of total enrollments in Santa Ana schools, 31 percent in Los Angeles, 28 percent in San Francisco and Stockton, 25 percent in Long Beach, and close to 20 percent in Oakland, Fresno, San Diego, and San Jose; the number of FEP students nearly doubled those proportions, so that in districts like Santa Ana's over 90 percent of the students were of recent immigrant origin. These shifts, in turn, have generally been accompanied by so-called white flight from the public schools most affected, producing an extraordinary mix of new immigrants and native-born ethnic minorities. In the Los Angeles Unified School District, the nation's second largest, the proportion of native white students declined sharply from about 65 percent in 1980 to only 15 percent in 1990. To varying degrees, the creation of ethnic "minority majorities" is also visible in the school systems of large cities, including all of the SMSAs listed earlier in table 10.5. While a substantial body of research has accumulated recently on the experience of new first-generation immigrants, relatively little is yet known about the U.S.-born or U.S.-reared second generation of their children, although they will represent an even larger proportion of the American school-age population in years to come.

Until the 1960s, bilingualism in immigrant children had been seen as a cognitive handicap associated with "feeblemindedness" and inferior academic achievement. This popular nostrum was based in part on older studies that compared middle-class native-born English monolinguals with lower-class foreign-born bilinguals. Once social class and demographic variables are controlled, however, recent research has reached an opposite conclusion: bilingual groups perform consistently better than monolinguals on a wide range of verbal and nonverbal IQ tests.[45] Along these lines, a 1988 study of 38,820 high school students in San Diego—of whom a quarter were FEP or LEP immigrant children who spoke a diversity of languages other than English at home—found that FEP (or "true") bilinguals outperformed both LEP (or "limited") bilinguals and all native English monolinguals, including white Anglos, in various indicators of educational attainment: they had higher GPAs and standardized math test scores, and lower dropout rates. The pattern was most evident for Chinese, Filipino, German, Indian, Iranian, Israeli, Korean, Japanese, and Vietnamese students: in each of these groups of immigrant children, both FEPs and LEPs exhibited significantly higher GPAs and math (but not English) test scores than did white Anglos. These findings parallel the patterns of educational stratification noted earlier in table 10.4 among foreign-born and native-born adults in the United States. Remarkably, two groups of lower-class LEP refugees—the Cambodians and the Hmong—had higher GPAs than native whites, blacks, and Chicanos. White Anglos (but not blacks and Chicanos) did better than some other language minorities, whether they were classified as FEP or LEP—Italians, Portuguese, Guamanians, Samoans, and "Hispanics" (predominantly of Mexican origin)—almost certainly reflecting intergroup social class differences. And among students whose ethnicity was classified by the schools as black or Hispanic—with the lowest achievement profiles overall in the district—FEP bilinguals outperformed their co-ethnic English monolinguals.[46] Research elsewhere has reported similar findings among Central American, Southeast Asian, and Punjabi Sikh immigrant students, and separate studies have found that Mexican-born immigrant students do better in school and are less likely to drop out than U.S.-born students of Mexican descent.[47]

The idea that bilingualism in children is a "hardship" bound to cause emotional and educational maladjustment has been not only refuted but contradicted by all available evidence; and in a shrinking global village where there are thirty times as many languages spoken as there are nation-states, the use of two languages is common to the experience of much of the world's people. But pressures against bilingualism in America—as reflected today by the "U.S. English" nativist movement and the passage of "English Only" measures in several states—are rooted in more fundamental social and political concerns that date back to the origins of the nation. As early as 1751, Benjamin Franklin had put the matter plainly: "Why should Pennsylvania, founded by the English, become a colony of aliens, who will shortly be so numerous as to Germanize us, instead of our Anglifying them?" The point was underscored by Theodore Roosevelt during the peak years of immigration at the turn of the century: "We have room but for one language here, and that is the English language; for we intend to see that the crucible turns our people out as Americans, and not as dwellers in a polyglot boardinghouse." It is ironic that, while the United States has probably incorporated more bilingual people than any other nation since the time of Franklin, American history is notable for its near mass-extinction of non-English languages. A generational pattern of progressive anglicization is clear: immigrants (the first generation) learned survival English but spoke their mother tongue to their children at home; the second generation, in turn, spoke accentless English at school and then at work, where its use was required and its social advantages were unmistakable; and with very few exceptions their children (the third generation) grew up as English monolinguals.

For all the alarm about Quebec-like linguistic separatism in the United States, the 1980 census suggests that this generational pattern remains as strong as in the past. It counted well over 200 million Americans speaking English only, including substantial proportions of the foreign-born. Among new immigrants who had arrived in the United States during 1970–80, 84 percent spoke a language other than English at home, but over half of them (adults as well as children) reported already being able to speak English well. Among pre-1970 immigrants, 62 percent still spoke a language other than English at home, but the overwhelming majority of them spoke English well: 77 percent of the adults and 95 percent of the children. Among the native-born, less than 7 percent spoke a language other than English at home, and over 90 percent of them (adults as well as children) spoke English well. More detailed studies have confirmed that for all American ethnic groups, without exception, children consistently prefer English to their mother tongue, and the shift toward English increases as a function of the proportion of the ethnic group that is U.S.-born. To be sure, immigrant groups vary significantly in their rates of English language ability, reflecting differences in their levels of education and occupation. But even among Spanish speakers, who are considered the most resistant to language shift, the trend toward anglicization is present; the appearance of language loyalty among them (especially Mexicans) is due largely to the effect of continuing high immigration to the United States. For example, a recent study of a large representative sample of Mexican-origin couples in Los Angeles found that among first-generation women, 84 percent used Spanish only at home, 14 percent used both languages, and 2 percent used English only; by the third generation there was a complete reversal, with 4 percent speaking Spanish only at home, 12 percent using both, and 84 percent shifting to English only. Among the men, the pattern was similar except that by the second generation their shift to English was even more marked.[48]

English proficiency has always been a key to socioeconomic mobility for immigrants, and to their full participation in their adoptive society. It is worth noting that in the same year that Proposition 63 (the initiative declaring English as the state's official language) passed in California, more than 40,000 immigrants were turned away from ESL classes in the Los Angeles Unified School District alone: the supply of services could not meet the vigorous demand for English training. Indeed, English language dominance is not threatened in the United States today—or for that matter in the world, where it has already become firmly established as the premier international language of commerce, diplomacy, education, journalism, aviation, technology, and mass culture. What is threatened instead is a more scarce resource: the survival of the foreign languages brought by immigrants themselves, which in the absence of social structural supports are, as in the past, destined to disappear.

Given the immense pressure for linguistic conformity on immigrant children from peers, schools, and the media, the preservation of fluent bilingualism in America beyond the first generation is an exceptional outcome. It is dependent on both the intellectual and economic resources of parents (such as immigrant professionals) and their efforts to transmit the mother tongue to their children, and on the presence of institutionally complete communities where a second language is taught in schools and valued in the labor market (such as those found in large ethnic enclaves). The combination of these factors is rare, since most immigrants do not belong to a privileged stratum, and immigrant professionals are most likely to be dispersed rather than concentrated in dense ethnic communities. Miami may provide the closest approximation in the United States, but even there the gradual anglicization of the Cuban second generation is evident. Still, the existence of pockets where foreign languages are fluently spoken enriches American culture and the lives of natives and immigrants alike.[49]

The United States has aptly been called a "permanently unfinished society," a global sponge remarkable in its capacity to absorb tens of millions of people from all over the world. Immigrants have made their passages to America a central theme of the country's history. In the process, America has been engaged in an endless passage of its own, and through immigration the country has been revitalized, diversified, strengthened, and transformed. Immigrant America today, however, is not the same as it was at the turn of the century; and while the stories of human drama remain as riveting, the cast of characters and their circumstances have changed in complex ways. In this chapter, I have touched on a few of the ways in which the "new" immigration differs from the "old." But a new phase in the history of American immigration is about to begin. New bills have been introduced in Congress once again to change immigration policies—to reduce or eliminate some of the legal channels for family reunification, to increase quotas for "brain drain" and "new seed" immigrants, to allocate special visas for immigrant millionaires who will invest in job-producing businesses, to rescind the "employer sanctions" provisions of the last law, to grapple with the sustained flow of undocumented immigrants, to consider whether persons from newly noncommunist states in Eastern Europe and Nicaragua are eligible for refugee status—and the debate remains surrounded by characteristic ambivalence. The "new" immigration of the post–World War II period was never simply a matter of individual cost-benefit calculations or of the exit and entry policies of particular states, but is also a consequence of historically established social networks and U.S. economic and political hegemony in a world system. The world as the century ends is changing profoundly—from Yalta to Malta, from the Soviet Union to South Africa, from the European Economic Community to East Asia and the Arab world, from the East-West Cold War to perhaps new North-South economic realignments and Third World refugee movements—and new bridges of immigration will likely be formed in the process. For the future of immigration to America, as in the past, the unexpected lies waiting.[50]


Eleven—
The Hollow Center:
U.S. Cities in the Global Era

Sharon Zukin

Cities graphically represent the disappearing center of American society. Over the past twenty years, they have become both more visible and less important symbols of the economy. Paradoxically, despite enormous efforts at rebuilding, they are less different from each other than they were before. The problems of big cities—crime, drugs, high housing prices, unemployment—are just as familiar in Spokane or Tulsa as in New York City. Meanwhile, the provincial decay of smaller cities has been negated by the spread of television, computers, and imported consumer goods. We ordinarily describe America as an urban society, but most Americans no longer live in cities. They are as likely to find their "center" in the suburban shopping mall or office park as in the downtown financial district. To some degree, Americans have always had a love-hate relationship with cities. Throughout American history the major thinkers and many ordinary men and women have loved the countryside because it offers an escape from social pressures. Cities had their own compensation because they brought a varied population into a common public life. Today, however, the public middle ground that was previously identified with cities is dissolving into a collage of racial, ethnic, and other private communities. At the same time, even cities as commanding as Los Angeles and New York are being "globalized"; that is, they are becoming more dependent on political and economic decisions that are made at the global level.

In 1986, a list of urban trends drawn up for the U.S. Conference of Mayors described a sorry situation: population drain, increased poverty, an income gap between city and suburban residents, gaps among racial groups, long-term unemployment in places where manufacturing has declined and services grow slowly, homelessness, hunger, low education levels, high crime rates, and very high taxes.[1] Such conditions cannot
be described as anything but structural. Disinvestment by industry and the middle class feeds—and in turn responds to—concentrations of the poor, the ill-educated, and the unemployable. Nonetheless, neither the federal government nor private markets give cities much encouragement. Since the early 1970s, no president of the United States has drawn up an explicit urban policy. Under the Reagan administration, the Department of Housing and Urban Development was used as a patronage arm of the Republican party. The conservative thrust of federalism over the past twenty years has consistently reduced both programs and grants. And during the 1980s, the cities' biggest demands—for social services, public housing, and jobs—were sacrificed to the rhetoric of fiscal purity.

Between the Gramm-Rudman Act of 1985 and the attacks on Big Government by two Reagan administrations, federal aid to state and local governments was squeezed to only 10 percent of the federal budget. In New York City, the federal government contributed the same amount—$2.5 billion—to an $11 billion municipal budget in 1981 and to one that had grown to $27 billion by 1989.
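
In other words, with the federal contribution flat at $2.5 billion while the municipal budget more than doubled, the federal share of the city's budget fell by more than half over the decade:

\[
\frac{\$2.5\ \text{billion}}{\$11\ \text{billion}} \approx 23\% \ \text{(1981)},
\qquad
\frac{\$2.5\ \text{billion}}{\$27\ \text{billion}} \approx 9\% \ \text{(1989)}.
\]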

Businesses and households that can afford to move have been leaving cities for many years. Industrial decentralization to the suburbs began a century ago, closely followed by middle-class households seeking "bourgeois utopias." Land is both cheaper and more attractive outside cities. Labor is generally cheaper, too, more docile, less likely to be nonwhite. Restrictions on uses of suburban property also tend to benefit the "haves." Large companies can influence weak suburban governments for preferential zoning and tax laws, and wealthy home owners provide a pressure group for socially exclusive development. In recent years, however, cities have lost jobs and residents to areas farther away. Among households, suburbanization has grown less rapidly since the 1970s than moves to "exurban" locales. Businesses, for their part, have decentralized operations. Many have moved to, or set up branches in, low-wage regions of the country and overseas. To some degree this "footloose capital," as Bluestone and Harrison and others call it, is related to a desire to lower costs and escape the limits imposed by unionization. In part it also reflects a shift from local to nonlocal ownership of firms (as in Buffalo, New York, or Youngstown, Ohio), and an intensification of outsourcing strategies (especially devastating to Detroit). More important, footloose capital also applies to new business start-ups in growth sectors, such as electronics and telecommunications, where manufacturing is likely to be exurban. Once limited only to industrial plants, the outflow of economic activity from cities now includes a significant number of offices and corporate headquarters. The resulting "counter-urbanization" has further reduced most cities' claim to functional pre-eminence in American society.[2]

Not surprisingly, Americans have been attracted by alternatives to traditional cities. On the one hand, they increasingly live, work, and shop in exurbs, especially in the Sun Belt, in regions not previously known as centers of urban life. On the other hand, a small but growing middle-class population inhabits the gentrified centers of older cities. Like exurban residents, gentrifiers enjoy the amenities of personal consumption that are typical of a geographically mobile population. But they are tied to the city by a desire for access to its cultural markets as well as its historic symbols of power. In terms of numbers, gentrification has had a much smaller impact on cities than either suburbanization or exurban migration. It has great appeal, however, because like the exurbs, gentrified areas become great spaces of consumption.

Exurbs and gentrified downtowns are important not only because of visible spatial shifts. They are also significant "fictive spaces" in America's social geography. They convey a powerful image of the way many Americans want to live, an image of escape from the constraints of cities and a confirmation of the free movement of both people and investment capital. A simultaneous decentering to the exurbs and recentering of downtowns tear apart the old image of cities as engines of production. A more subtle picture, instead, differentiates among cities according to their position in both the service economy and a new organization of consumption. This new order alters the relation between urban space and economic and cultural power.

Cities and Economic Power

The post-postwar economy has sharpened the effects on cities of global organization. Since the 1970s, the major area of growth—business services—has depended on linking local to multinational firms in expanding markets. While some services have been bought by or have merged with international companies, others seek clients and contracts overseas. This course of development imposes a dual dependence on American cities. The cities rely on the services to fuel further growth, employ residents, and expand the tax base; but the largest employers among local service institutions, as in mass-production manufacturing, are increasingly responsive to global rather than local trends.

These conditions are especially acute in cities whose financial institutions are major players in global markets. New York and Los Angeles, with their large concentrations of international bankers, stock market traders, and foreign investors, owe their growth since the 1970s to globalization. Just as these two cities have the largest number of corporate financial headquarters and other institutional resources, so they also have the tallest office buildings, the highest land values, and the most business expansion in their downtowns. In large part the economic value
of doing business downtown reflects an infusion of foreign property investment. Foreign financial institutions, especially Japanese and other Asian banks, occupy a major portion of downtown office buildings. Not surprisingly, New York and Los Angeles, as major concentrations of the power that moves capital around the world, are considered "world cities." Whether this refers only to their pre-eminent position in global financial markets, or to some index of greater cultural sophistication as well, is unclear.

In some aggregate terms—new employment, for example, or business revenues—the financial, insurance, and real estate industries compensate for cities' losses in traditional manufacturing employment.[3] Yet aspects of the new economy suggest reasons for alarm. Most of the highly paid, prestigious downtown jobs are held by suburban rather than city residents. Men and women of color, who represent a growing portion of all cities' populations, have not made such inroads into the financial services area as they have into the public sector. Because of the layoffs that follow stock market downturns, all employment in this area is risky. The threat of global financial crisis also imposes risk on many property investments, from office construction to the ownership of "signature" or "trophy" buildings that are designed by famous architects and located in high-rent districts.

The technological revolution in computers and telecommunications that made office decentralization possible also creates the means for local financial institutions to move away. "Back offices" that house computer and routine clerical operations have easily been detached from money-center banks, while headquarters and other "front offices" remain in more central locations. The importance of face-to-face contact and the symbolic legitimacy of place may enhance the city's viability as the site of a world financial market. Yet even in New York, high land prices and high wages for clerical personnel create a potential for the city's being abandoned by financial institutions.[4]

In cases where banks, stock brokerages, and insurance companies have not moved away, they have destabilized the labor force by shifting from permanent to temporary employment. These arrangements are not limited to cities, of course. Since 1980, temporary employment of all kinds has been the largest growth sector in jobs around the country (as well as overseas). Some temporary positions may pay as well as permanent jobs and may also offer health insurance and other benefits. But by establishing a large number of temporary positions that are outside the normal career stream, financial services organizations create a tenuous base for urban economic development.

Neither do financial services firms recruit widely among the cities' populations. Jobs at the top are often filled through networks established in college and business school; these job holders live in gentrified areas downtown or in the suburbs. For the most part, high-level positions are also still restricted by race and gender. When it comes to entry-level jobs requiring lesser skills, urban residents confront another type of barrier. Financial and other business services firms do not find adequate personnel among the city's high school graduates. Lacking training in math, competence in standard interpersonal communication, and skills in dress, deference, and punctuality, young men and women from the city are passed over in favor of suburban youth. Growing opportunities for employment outside cities, however, as well as a shrinking labor pool, cause urban employers much concern. In some cities, notably Boston, the financial community has developed a training-and-recruitment partnership with local high schools. In others, such as New York, this degree of institutional interdependence has not yet grown.[5]

Some demographically minded researchers speak of these employment problems in terms of a job-skills mismatch, and the structural roots of this analysis also appeal to those who think in terms of a postindustrial economy. They consider that the decline in traditional manufacturing industries drastically reduces the number of entry-level jobs that are available to high school graduates of modest academic achievements. Further, if job requirements in business services emphasize math, interpersonal, and other job skills that urban high school graduates (and dropouts) lack, then the growth of such jobs takes place without benefiting the urban population. The concentration of ethnic and racial minorities in cities, however, introduces a disquieting series of bias questions. According to the job-skills mismatch analysis, urban minority residents are unemployed in the city's growth sector because they are intellectually and culturally unemployable. Their soaring unemployment rates first of all reflect the loss of a base in blue-collar jobs in plants that have moved out of the city or shut their doors. Second, this unemployment reflects the diminishing educational achievements of the urban minority population.[6]

But the job-skills mismatch explanation of urban unemployment ignores several important factors. At least since the 1950s, many men and women of color have been employed in the service industries. They have generally been steered toward certain areas—notably, personal rather than business services, and the public rather than the private sector—and discouraged from entering others. In recent years, as racial and ethnic minority students have made up greater proportions of urban high school and college graduates, these students have, presumably, gained the qualifications to get financial jobs. At graduation, however, they confront a decreasing number of entry-level jobs, many of which have been shifted overseas or eliminated by automation (for example, insurance claims processors and bank tellers in financial services, telephone operators in other fields). Further, the hiring process in the financial services area is socially exclusive. It still segregates men from women and people of color from the jobs traditionally held by whites.[7]

This exclusion of part of the urban population is heightened by their absence, by and large, from another growth area in most cities, the sector of individually owned small businesses that are often identified with ethnic or immigrant entrepreneurs. The ethnic concentrations in most large cities enable businesses that cater to their special needs (such as food and travel services) to succeed in an "enclave economy." Alternatively, the capital that many immigrants have access to by means of self-help or mutual-aid associations often provides a base for those groups to enter various niches in the urban economy (as manicure-parlor owners, greengrocers, restaurateurs, and newsstand proprietors). Many of these businesses rely on family capitalism. Family members work long hours at low wages, and defer their individual advancement in favor of the family as a whole or the younger generation. But a preference for recruitment among their own group reinforces other hiring practices in the larger society. The garment industry has had a resurgence in the last ten years, especially in the Chinatowns of New York and Los Angeles, but Asian owners and foremen do not recruit Latinos and blacks.

Immigrants' entrepreneurialism has, at any rate, made a broader, though not necessarily cheaper, array of goods and services available in many urban areas. Child care day workers, street peddlers, and housekeepers represent new or reborn segments of the ethnic division of labor, while their better-educated compatriots staff health care facilities in both the public and private sectors. Despite the success of many immigrant groups—Chinese, Koreans, Indians, Filipinos, Cubans, West Indians, and others—poverty still bears a racial edge. Many of the Latinos and U.S.-born blacks who live in cities are among the poorest urban residents. Although statistical indices of racial segregation have steadily declined, these men and women are more concentrated by race than other groups. Race counts again in the tendency for middle- and low-income African-Americans to live in the same neighborhood. More so than among other ethnic and racial minorities, social class fails to separate urban blacks who have steady work from those who do not.[8]

Opportunities for entrepreneurialism and employment do not compensate for the low-wage jobs many urban immigrants hold. Some researchers describe these jobs as "sweatshop" labor, pointing to conditions in such growth areas as the garment and computer industries in New York and Los Angeles. Child labor, piece rates, long hours, and other types of exploitation have not been documented for these industries, but to the degree that they hire only non-union labor, perhaps paid off the books and informally contracted, employers contribute to a paradoxically cash-rich, mobility-poor urban population. The simultaneous proliferation of these jobs and high-level jobs in business services, as well as the absolute difference in incomes between them, has shaped a polarized social structure. Because the polarization of incomes in the city so clearly refers to the ability, or lack of ability, to consume, the urban class system is seen as even more divided between rich and poor than in the country as a whole.[9]

In New York City, where the average income of the poorest 10 percent of the population (including welfare payments) was $3,698 in 1986, there were 53,000 taxpayers with adjusted gross incomes of $100,000 or more; 2,840 with at least $500,000; and 1,764 with more than $1 million. Eighty-two people in New York are believed to have assets worth more than $275 million. The second-place city, Los Angeles, has only 32.[10]

Polarization also refers to divided spaces. Although a "dual city" image is much used by urban critics, the segmentation of incomes and separation of classes and races really require a more specific mapping. Peter Marcuse heuristically outlines a "quartered city," made up of the luxury city of the rich; the gentrified city of managers, professionals, and intellectuals; the "suburban" city of the lower middle class and well-paid blue-collar workers; the tenement city of the working poor; and the ghetto of outcasts, the unemployed, the homeless.[11] Significantly, the occupants of each quarter have more in common with their counterparts in other cities—in terms of jobs, mobility, and choices about what to consume—than they have either contact or common interests with residents of the other quarters. This is especially true for the luxury and gentrified areas, whose residents are likely to be foreign investors or at least consumers in an upscale global culture. The rich and upper middle class also tend to set themselves apart from other city residents by using private facilities (car phones, taxis, prep schools) instead of relying on public institutions.

Such images break the myth of the city as a middle ground between social groups. Both visually and metaphorically, the spaces occupied by more affluent groups are "islands of renewal in seas of decay."[12] Yet the area that attracts reinvestment has become larger and more visibly coherent in recent years. Like new office buildings, new upper-income housing in most older cities is mainly centered downtown. Downtown's expansion feeds on relatively undervalued property markets, the growth of business services, and investors' desire for centrally located projects that minimize risk. But in visual terms, it represents a new and broader landscape of power that grows by incorporating, eliminating, or drastically reducing the "vernacular" inner city inhabited by the city's powerless. These men and women are pushed toward less central areas and nearby suburbs that are relatively cheap and may be racially mixed. No
longer geographically bound to the inner city, the less affluent and the poor carry the inner city with them as both a racial stigma and an inability to attract investment.

Public officials are not oblivious to the difficulty of trying to govern "the city of the poor masquerading as the city of the rich."[13] Neither luxury investment nor gentrification raises a city population's median income, which makes the city government that much more dependent on those who pay high taxes. The problem, however, is that city budget authorities are chasing mobile investors. Not only industrial firms but also real estate developers who used to operate only in local markets are now national and even international in scope. To compel them to stay in cities and build the business centers that seem to attract more growth, municipal authorities make concessions. Business influence has always been an important factor in local government, but the new element since 1980 is the formalization of these arrangements in public-private partnerships.

Private-sector organizations like the Chamber of Commerce or the local real estate trade association now initiate redevelopment projects. Their financing depends in part on city government's ability to float municipal bonds and take out short-term loans, as well as its willingness to offer tax reductions, zoning incentives, and aid in acquiring land. "Public" goals tend to converge with those of private developers. The common program is worked out in meetings among business leaders and public officials, and managed by public authorities dominated by business institutions. A focus on high-rent downtown land and new construction is supported by the city's commitment to block off streets, enhance cultural amenities, and, in general, facilitate the "privatization" of development. Under these conditions, urban planners in public employ have no creative work.

Pressure to counter new downtown development with housing that is "affordable," that is, slightly below market rate, reflects the strength of "neighborhoods" where middle-income voters live. The linkage mechanism that was developed (in Boston and San Francisco) in response to such pressure permits developers to have their downtown development—but requires more affordable housing as a quid pro quo. Developers are assessed a percentage of development costs for building such apartments, or agree to allocate a portion of their project to less affluent groups. In some cases, as in Battery Park City in New York, the below-market-rent housing is built elsewhere, outside the most expensive areas. This gains new low- or, more often, middle-income housing at the cost of strengthening social class segregation in the heart of the city. At any rate, such linkages are viable only where developers have a lot to gain by agreeing to them; in other words, in cities like Boston, San Francisco,
and New York in the mid-1980s, when "market forces" buoyed the economy. Chicago suggests a more rigorous pressure to make developers respond to public goals (that is, racial integration, increasing the affordable housing stock, and letting neighborhoods share in downtown's prosperity). There, however, the opportunity has depended in large part on a new African-American mayor, the late Harold Washington. He attracted strong black support, dedicated staff members in city agencies, and white coalition partners—all at a time when the city attracted a new round of corporate investment downtown by nationally oriented business services.[14]

Public-private partnerships institutionalize the acknowledgment of dependence on the financial sector that followed the mid-1970s outbreak of fiscal crisis. At that time, commercial banks and other financial institutions threatened New York City, Yonkers, and Cleveland with bankruptcy, supposedly for the city government's profligate use of public finances. Calling in municipal debt served to discipline city agencies and remind them of the need to balance budgets. But fiscal crisis also fulfilled another end. It dramatized the death of the War on Poverty and ended the long New Deal era of social welfare at city—and federal government—expense. Most cities survived the fiscal crisis of the 1970s by concentrating layoffs and reductions on such "nonessential" services as schools and libraries, leaving police, fire, and sanitation agencies wounded but not completely cut down. In more drastic cases, such as New York, Yonkers, and Cleveland, bankers imposed a nonelected supercommission made up of leaders from the financial community and the state to oversee the spending of elected city officials. These supercommissions were given the right of approval on city budgets. Both formally and informally, they exercised control over mayors who were inclined toward populism.[15]

In the United States, linkages are usually limited to the developers' impact on the city's built environment. Provision of low-income housing units is only one possibility; developers may also provide "public areas," such as plazas or indoor galleries; they may preserve a landmark structure on the building site; or they may contribute funds to renovate publicly owned infrastructure, especially transportation facilities. The entire situation, however, is dominated by the private sector. A city's leverage depends on how marketable the project is and how much profit it can bring the developer. No linkage requires developers to extend their efforts to the sore area of employment. Often the indoor public spaces that developers provide are designed to be inhospitable to strangers, and after they are built, they are policed by private security guards. Most of them are entries or backdrops to shops. Even the outdoor spaces that are
most praised for their use of public art and open landscaping (New York's Battery Park City being a prominent example) serve to advertise an image of the city as clean, safe, and almost classically cultural.[16]

Dependence on the private sector for creating new public spaces is only a visible means of privatization. Many cities have also tried to save money by privatizing essential public services, that is, by contracting out work, letting private, for-profit firms build and operate facilities, and selling publicly owned assets.[17] While hiring a privately owned towing or waste removal company may seem a reasonable way to reduce the public payroll, shifting other services strikes at government's reason for existence. Courtrooms and prisons may be leased, hospitals may be run by private chains, and forms may be processed outside the public sector. But the efficiency of private managers is based on skewing service to the ability to pay, not equity or universal service. Turning city services over to private firms also means losing control. It suggests that the last vestige of citizenship in the city is gone, that the bureaucracy of city government is just a functional arrangement with no pretense to mediating a moral order.

Though hardly new, the dominance of private organizations in redevelopment and the divorce between downtown and the neighborhoods have been accentuated during the recent growth in service economies. Cities face renewed problems of allocating scarce public resources among needy populations while attracting successful businesses that could easily move away. The irony of a city's success in enhancing its "business climate" is that the occupants of high-income jobs go elsewhere to live. Moreover, the expansion of corporate facilities displaces poor residents farther from the core. And the wide array of private consumption opportunities in the city is monopolized by a narrow band of the most literate, affluent, cosmopolitan men and women. Under these conditions, most public institutions are degraded. They are either ceded to the poor, like public schools, or harnessed to the private sector, like public building authorities. Economically, the sense of public life in the city is eroded.[18]

Cities and Cultural Power

The shift from an industrial to a service economy is paralleled by visual as well as social changes. Just as the use of space shifts from "dirty" to "clean" work, so the visible legend of the city changes to reflect a new landscape of cultural power. To some degree this change is based on the consumption patterns of more affluent, highly educated residents—gentrifiers who graduated from college during the 1960s and 1970s. But it
also represents change in the ideological meaning of the city, and as such it shapes the conscious production of city space.

Architecture and design are the intimate partners of redevelopment in this process. Downtown becomes a competitive arena of style, the real estate market's cutting edge. Whether they are in Pittsburgh's Golden Triangle, Renaissance Center in Detroit, or New York's Battery Park City, the buildings are both monumental and commercial. Indeed, they are monumental because they are commercial. They are meant to provide a new skyline for the city, a vertical perspective on the city's financial power. Not coincidentally, they are all important waterfront developments. Reusing this land wrenches it from the docks, the dives, the wholesale markets that for many years enclosed the commercial district and limited its expansion. The waterfront's reuse grows out of both the desire to capture a scarce amenity and a reconsideration of the cultural value of centrality.[19]

Cities never lose the moral aura of central places. This is the secret of their uniqueness that, in turn, explains the endless fascination with rebuilding and the deep nostalgia for structures that have been torn down. What common history there is in American cities is located in the center. This is the marketplace of ideas and commerce, the site of oldest buildings, the area of public ceremony and desire. Theaters coexist with peep shows, corporate headquarters with wholesalers and jobbers, city halls with video arcades. Despite its heterogeneous uses, this is the most attractive place for real estate investment. The irony is that more investment tends to destroy the center by eating away at its diversity.

The recent redevelopment of the center is partly a reaction by institutional investors to risk in alternative investments such as Third World loans, suburban shopping malls, and office buildings in economically troubled Houston or Denver. But it also reflects a quest by certain parts of the middle classes for access to the city's historic cultural power. Beginning in the 1960s, a reaction against publicly funded urban renewal among more culturally sophisticated middle-class men and women inspired them to advocate the preservation rather than the tearing down of old buildings with historic value. They were mainly attracted to buildings in the center—the public halls and private houses that once belonged to, or were designed by, a patrician elite. These were among the first structures to attract the aesthetic eye of gentrification.

During the 1970s, the number of gentrifiers who put down roots in center-city neighborhoods rose. Mostly single men and women or childless couples, they bought nineteenth-century houses that had become run down and restored their old-style beauty. The way they used these houses differed from that of previous residents. They preferred architectural
restoration to modernization (except for creature comforts like bathrooms, kitchens, and air-conditioning). If the houses had been converted to rentals or single-room-occupancy hotels, they returned them to single-family use, usually the owner's, or converted them into pricey condominiums. Gentrifiers also tended to empty the streets. They didn't congregate on corners or in front of their homes, and they didn't mingle with neighbors. Neither did they patronize some of the old neighborhood stores, which were soon replaced by the restaurants, bookshops, and clothing stores that catered to gentrifiers. From one point of view, gentrification created a middle-class neighborhood on the basis of cultural consumption. From another, considering the relative costs of housing downtown and in the suburbs, it represented a rational form of middle-class housing investment.[20]

By the 1980s, a significant movement of investors into some downtowns created pressure on government to generalize the benefits of incremental, private-sector urban renewal. While local governments created historic landmark districts and enacted legislation to encourage reuse of old buildings in the center, the federal government changed the tax laws to make historic preservation and commercial reuse more deductible. Every U.S. city now glories in its historic downtown as a magnet for further private-market investment. Gentrification thus provided a stepping-stone from the federally funded urban renewal that tore down so many old buildings during the 1950s and 1960s to the speculative new construction that augmented the central city during the 1980s. Today, no downtown is considered complete without office towers, ethnic quarters, cultural complexes, and gentrification.

As a cultural ensemble, downtown's selling point is that it contributes to urban economic growth by attracting tourists. But the major tourists are the city's own residents. Those at higher income levels seek out new restaurants, shop for imports of finely wrought or singular goods, and go to look at the places where art is produced, exhibited, and sold. These spaces for cultural consumption are generally located in the center, or in adjacent derelict districts, where rents are cheap, buildings are old enough to provide an atmosphere, and a dense pattern of support services emerges. New York's SoHo provided an unplanned model for this sort of urban revitalization. But during the 1970s, Boston's Faneuil Hall Marketplace and Baltimore's Inner Harbor turned it into a planning model. Faneuil Hall is particularly interesting because its developers took a strong design concept from the existing use of the building and used it to displace the fruit and vegetable vendors who rented stalls there. They were replaced by stands selling arts and crafts products, imported foods, and other gift items that can be found elsewhere. The essence of the transformation, however, is that it opens Faneuil Hall to
middle-class use and signals to white residents and tourists that this is a place for them. By making a permanent commercial "festival" out of a grubby daily market, the developers of Faneuil Hall eliminated both the "periodic" use of the space and authentic, even functional, popular culture.[21]

In large part, redeveloping the downtown depends on the commercial re-creation of an urban middle-class culture. More sophisticated than suburbia, the newly interesting downtown is a realm of the senses. Its spatial organization and visual cues "open" the center to a highly selective consumption. In its conversion from small shops, industrial lofts, and working-class homes, downtown is caught up in—and spearheads—an "artistic mode of production." Artists are the primary consumers in this image of the city, and everyone in the more cerebral, or more pretentious, part of the middle class is interested in bridging art and life.[22]

The new downtown also bridges public and private spheres. Large mixed-use projects typically blend shops on the lower floors, offices in the middle, and apartments above. They allude to the density and vitality of older city streets without the hint of chaos, the expectation of the unexpected, that is part of an old city's fabric. New urban spaces give a clear sense of keeping the unruliness of the city out. To enter them, people come inside from the street: they are neither purely public nor private spaces. The State of Illinois Center in Chicago is perhaps the most perverse example of this "liminality." Built for government offices, the project has the atrium design of modern hotels, and the first few floors comprise a shopping center. Projects like these usually enclose an extremely large volume of space. They often include glass-sided elevators or high escalators, which are likely to be filled with moving crowds. But the grandeur of their scale conflicts with the triviality of their function. While shopping may have become a social experience that men and women value in itself, the stores in these mixed-use projects are usually branches of national chains that sell mass-produced goods.[23]

To some extent the quest for distinction in mixed-use spaces has come to rest on the notion of the city as festival. This suits the reorganization of the city as a consumption space, where shoppers are provided with a built environment that contextualizes the ephemeral while the buildings themselves are decontextualized from the city's past. The festival aspect of urban space fits a postmodern susceptibility to eccentricity and invention. Its "free-market populism" benefits the eclectic consumer while segregating those who can pay from those who live on the street. Much of the festival use of the city center relates to the "society of spectacle" that is described in the work of contemporary cultural critics. Born of the late-nineteenth-century burst of commercialism and urbanization, a city of spectacle features passive crowds floating among commercial distractions. But the city's adoption of a festival theme also reflects the influence of theme parks in the culture of contemporary spaces. Theme parks, or their urban equivalents in either red-brick or atrium shopping centers, organize varied bundles of consumption. Equally important, they also organize how people experience the space of consumption: the city becomes an imaginary stage-set for dream fulfillment.[24]

While the qualities of place can be abstracted in both historic preservation and new construction, the real downtown is formed by joining circuits of economic and cultural capital. Old buildings provide an object of aesthetic interest; a site for relatively low-cost cultural production and consumption, especially among more adventurous cultural consumers; and a magnet for real estate investment. The physical infrastructure generates markets for architectural restorations as well as avant-garde art; together, they create a downtown "scene" that—with enough consumers—sparks a booming local service economy. This local economy, however, is highly skewed toward high-class and international uses. It has more art galleries than dry cleaners, more clothing boutiques than supermarkets. The local real estate market grows in tandem with the sale of historical replicas, from Victorian furniture to "French country antiques." Recognizing these areas of the city as historic landmark districts legitimizes property investment there and gives a certain cachet to local business establishments. The areas become well known by means of articles in the daily newspapers and magazines. Target of an ever more mediated middle-class consumption, the historic and cultural downtown attracts more new investment to the central business district. In part because of the arrival of foreign investors, the old financial district sprouts new office towers. What these buildings represent—their cultural power in the world economy—contradicts the local or avant-garde spirit of most initial gentrifiers.

If downtown spells fun for the more sophisticated middle class, it is not so hospitable to the unemployed, the homeless, and lower-income groups. Over the past twenty years revitalization has eliminated low-rent housing from the center, especially the skid row flophouses and single-room-occupancy hotels that catered to a transient, older, jobless group of men that used to be labeled homeless. Revitalization has also displaced the stores such people patronized—food and liquor shops, used clothing stores, pawnshops. New shops and the firms in new office buildings displace the labor market. Unlike the old docks, railroad yards, and warehouses that used to abut the center, they do not recruit the homeless as casual labor. The new downtown provides so much less living space for a poor population that these men and women are literally homeless. High property values and low vacancy rates decrease their chances of finding even a temporary place to settle down, while the density of activity and
transportation downtown continually lure them to the center. In recent years the homeless population has been swelled by more women and by families with children that cannot make enough money to pay the rent. Ironically, they are driven out of most private-sector public spaces, especially in front of the tonier shops, and so they try to find shelter in the bus and subway stations, railroad terminals, and city streets.[25]

Just as middle-class consumers of the city demand more meaningful public space, so do homeless men and women seek public space as the last remaining shelter. Whether cities can provide public space for either group—in which proportion, and where—has become an index of public and private social power.

Cities and Social Power

As the largest cities have begun to elect mayors from African-American and Latin communities, the cities themselves have become less prized. Public institutions are required to expand their functions to cover more human needs—adjudicating court cases, tending children all day, providing temporary shelter—while funding lags. Crime and drug sales plague many residents who cannot insulate themselves behind private security guards. From banker to mayor to drug gang, in the city there are many kinds of social power.

When we talk about cities in America today, we should differentiate among three "orders" of cities that create vastly different claims to social power. Within the global social order, the most power is concentrated in New York and Los Angeles, America's largest cities and largest financial and communications capitals. These cities are not fatally threatened by recent downturns in jobs or by housing prices so inflated that they forestall mobility. But their prosperity has left a hollow ring of outer boroughs or inner suburbs between downtown's expansion and more affluent suburban counties. In both cases, the "city" will only continue to grow as a result of regional growth; most older areas of the city house new immigrants who are saving to move out and an underemployed native population. Other cities may look like smaller versions of New York and Los Angeles, but they lack the base in transnational enterprises that gives these two cities global scope and scale as well as a fearsome glamor.

Aside from the two world cities, a more purely national order differentiates power according to cities' age and region of the country. Newer cities are mostly southwestern and southeastern. They have a "suburban" style of life, which is automobile-dependent, home-owning, private. They also have a base in newer manufacturing industries—mainly as a result of extensive military contracts—as well as regional and national
services. Lacking a claim to the social power of global capitals, they nonetheless provide the sort of middle-class life that people identify with the American Dream. And they may be the only cities in the country to do so.

Within cities, another order differentiates between the populist power of the neighborhoods and the financial power of the business center. Neighborhood residents hold the city's remaining manufacturing jobs, work in the civil service, and provide the major part of the work force in the private service economy. But because they cannot or will not move out of the city—for reasons of income and race—they bear the burden of the moral problems that no city government can solve. In the neighborhoods are the homeless shelters, the drug wars, the violence that rips through public schools. And in the neighborhoods we also find the fierce sense of territory that inspires racial terror. From these contradictions arises that which is known in American society as community, the city's only form of legitimate social power.

Since the urban reforms that began in the late 1960s, "community" has been a universal rallying cry for improving public services. The concept of community has also been a focus for organizing low-income men and women to demand access to political power. While community movements have made social power in the city more competitive than before, they have also provided a way to integrate unorganized groups into political life.

Twenty years of experience indicate that the vehicles of community empowerment are flawed. Administrative decentralization, for example, has often suffered from too little funding controlled by too few people. Central bureaucracies, both federal and citywide, have been reluctant to give up control over hiring and budgets. Many civil servants, moreover, such as police and fire fighters, do not live in the cities where they are employed, either because they cannot afford high housing prices or because they want better living conditions. Neither are coalitions that elect minority-group mayors effective tools for community empowerment. On the one hand, urban minorities are often divided along racial and ethnic, as well as political, lines. Terms like the "black community" and "Latin community" encompass a wide variety of competing local groups. On the other hand, the public goods and social conditions toward which they strive are not necessarily allocated by public command. Quality of life in the city is so dependent on income that it is essentially controlled by private decisions.

Despite its real limitations, the concept of community suggests how little even the poorest neighborhoods of a city conform to the stereotype of "social disorganization."[26] Non-nuclear families and the working poor make up a large portion of the urban population, but the areas where
they live generate their own, fairly continuous structure of community organizations. Linked by individual activists, these organizations respond to both community issues and external conditions. The encouragement of City Hall (and formerly, the federal government) enables them to develop a fairly stable base that may remain outside the control of traditional urban institutions, especially political parties. At best, community organizations goad the city government into giving poor residents of the city a little more access to public goods—longer library hours, a drug treatment program, a slightly more responsive police department. At worst, they have no effect on housing, jobs, and income—the basic parameters of living conditions.

The structure of the whole society affects the issues that are considered urban problems. But while poverty, drug addiction, and decaying public infrastructure are national in scope, no national institution has the moral authority to compel their solution. Moreover, as long as cities have little autonomy in the face of global markets, their problems are defined in terms set by the private sector. Americans still visualize cities as the public center of their society. Yet it is a hollow center, more an image of power than a means of empowerment.


PART THREE—
INSTITUTIONAL ADAPTATIONS


Twelve—
National News Culture and the Rise of the Informational Citizen

Michael Schudson

If being well informed means having at hand reliable information about the community and nation, the international world as it impinges on national interests, the natural world, and the world of the arts, then Americans have never before been so well informed nor so abundantly served by broadcasting, the press, and publishing.

If, however, being well informed means having a world view coherent enough to order the buzz of information around us, and having enough personal involvement with people, ideas, and issues beyond our private worlds to absorb and use information, then there is little reason for self-congratulation. This second sense is more nearly what we mean, or should mean, by "the well-informed citizen." The well-informed citizen is defined not by a consumer's familiarity with the contemporary catalog of available information but by a citizen's formed set of interests that make using the catalog something other than a random effort. The news media increasingly help provide the materials for the informational citizen, but they do not and cannot create the informed citizen. The informed citizen appears in a society in which being informed makes good sense, and that is a function not of individual character or news media performance, but of political culture, broadly defined.

Well-reported news, free from censorship, does not a democracy make. Full and accurate reporting of candidates' records and policy positions, even if we had that, would not a well-prepared voter create. What, then, is the impact of all the information around us? What sort of person does it help establish or, at least, set the conditions for? Who are we, these informational people, who daily digest political scandal here and earthquake there, a crime wave in our home town and a guerrilla movement in El Salvador, a ban on alcohol at the local beach and a surgeon general's report on passive smoking, a protest against local developers and a worried report on Third World debt? Are we disabled by media saturation? Distracted or deadened or at least thrown off stride by the avalanche of information?

I don't think that's what happens. People probably muddle through their lives as well today as people ever did (although that may not apply to the poorest residents of urban ghettos). Indeed, they may muddle along a little better, armed with the view that the world is subject to their control. Fundamental matters of fertility, contraception, sexual satisfaction, pain relief, and contact with other human beings across a distance are all more within human capacity and even individual control than ever before. At the same time, different groups in society feel newly entitled to control over their lives—notably blacks and women—and they have found broad political support for their sense of entitlement.

The growth of the media, the explosion of information, and the pounding headache of hype have not prevented this; on the contrary, they have helped it along. Most anxious and apocalyptic commentators on the media forget these and other fundamental realities. Without firmly planting their feet in sociological soil, they examine the media out of social context, picturing the media as self-contained technologies rather than porous social practices, and they often ask unreasonably that "art" or "truth" flow from the media spigot.

Often critics find major cause for alarm in a trend or development of the past year or five years or decade, although, in a somewhat larger compass, our media environment has not changed. The media in the United States, in 1990, as in 1960, are more completely controlled by private corporations than are the media in any other industrialized country in the world. The range of political opinion available in mainstream media in the United States is narrower than in much of Western Europe; this has been true not only for the last generation but for much of American history. At the same time, the freedom of the American media to investigate and to publish is more supported by institutional resources and more protected by constitutional safeguards than in any other country, both today and thirty years ago.

That said about fundamental continuities in the American media, changes in the past thirty years have significantly altered the ordinary person's experience of popular and public culture and have surely enlarged the role of the media, especially the news media. One sign of change is that the concept of "the media" itself has become inescapable. The term the media (meaning, especially, the news media), was not much used before the 1970s. It came into play, I think, in part because the term the press began to seem limited as a descriptor of both print and broadcast journalism (although the term survives in this usage). It gained wide exposure thanks to the Nixon administration's vendetta against "the
media." Nixon inherited both the Vietnam War and Lyndon Johnson's "credibility gap." He and Vice-President Spiro Agnew declared war on the news media, arguing publicly and privately that the nation's leading news institutions were an independent source of political power managed by bleeding-heart liberals and dyed-in-the-wool Nixon-haters. This forceful attack helped create the beast it sought to describe. It certainly helped give the beast a name.

So, too, did Vietnam and Watergate and the events leading up to what Hedrick Smith has described as the "political earthquake" of 1974.[1] As Vietnam tore apart consensus between the executive and the Congress in the conduct of foreign policy, between hawks and doves within the Congress, between parents and children even in families where fathers were Cabinet officers or New York Times editors, the media scrambled to represent these divisions. When governing leaders spoke in a single voice, so did the press; when dissent sparked on the floor of the Senate, it fired the media. If the media followed rather than led the breakdown of consensus, they learned from the experience a new style of journalism. In covering Vietnam and Watergate, journalists did not abandon "objectivity" so much as recognize what a poor shadow of objective reporting they had been allegiant to for a generation. Journalism sought, sometimes awkwardly, sometimes irresponsibly, sometimes bravely and brilliantly, to invent the independence it had long claimed to exercise. This, too, helped establish "the media" as a distinct institution.

The present uneasy feeling of media omnipresence and information overload comes in part from this sharp, visible presence of the media as an institution. It comes also from the increasing nationalization of the news media and the identification of the nation itself as an "imagined community," to use Benedict Anderson's phrase, with the national news. It comes as well from other transformations in the character of mass-mediated information—the blurring of the line between news and entertainment, the melding of public and private, and the politicization of once private affairs, and the increasing efficiency of organizations that "target" messages to specialized audiences. All of this makes American citizens informational cousins, even if we are not a particularly close-knit family. These are the features of cultural transformation I want to discuss in the sections that follow.

Nationalization of the News Media, 1960–90

"Nationalization" did not happen all at once. In fact, Godfrey Hodgson has argued that the "nationalizing of the American consciousness" was the primary trend in the media in the late 1950s and early 1960s.[2] But in 1960 or even 1963, the machinery of nationalization was only partially
in place, the political and social consequences of cultural nationalization still on the horizon of consciousness, and the sense that citizens know too much about things they can do too little about not so keenly experienced.

A national television news system, present in the 1950s, took on new importance in the 1960s and later. The 30-minute format (instead of 15) became standard in 1963. In that same year, the Roper poll found, for the first time, more Americans claiming to rely on television than on newspapers as their primary source of news. To the three network evening news shows were added "60 Minutes" (in 1968) and its imitators, "20/20" (1978) and others. ABC began a late-night news program in 1979 called "America Held Hostage," a daily update of the Iranian hostage crisis. In 1980, its name changed to "Nightline" and it became a regular part of the broadcast news diet.

Only in the 1970s did news on television take on a central role in the thinking of the broadcasting corporations themselves as "60 Minutes" became the most highly rated program in the country and local news began to turn big profits. "60 Minutes" made television news interesting and profitable. This economic maturing of television news coincided with its political coming of age. When President Kennedy began holding live televised press conferences, television as a regular news source gained official imprimatur (the famous Murrow-on-McCarthy programs of the early 1950s were exceptional; television news was superficial, unvisual, and short). It was not until the Vietnam War that television news coverage took on a centrality, both for Washington elites and the public at large. Then the evening news became the symbolic center of the national agenda and the national consciousness. Political campaigners measured their success as much by seconds on the evening news as by polls; presidents—notably Johnson and Nixon—became obsessed with the television screen.

Within two decades of the time television network news became ringmaster of the American circus, network dominance was challenged. The networks in 1970 had no competition; only 10 percent of American homes had cable systems. By 1989 the figure was 53 percent. The networks' share of total television viewership has steadily declined, so much so that in 1990 the networks formed their own public relations firm to promote themselves collectively against cable (and other) competition.[3] In news gathering in recent years, new technologies have enabled local television stations to steal a march on the networks. The availability to local stations of vans equipped with satellite dishes, combined with the growing costs of syndicated news programs, has led to several satellite-connected consortiums of local stations that cover national news events on their own. The combined audience for the three evening network news programs has declined by nearly 25 percent since 1980.[4]



In 1979, the cable industry began C-SPAN as a public service gesture. C-SPAN's audience is tiny but important, and the presence of C-SPAN in the Congress has affected the conduct of public affairs. The House of Representatives was on C-SPAN from the beginning; the Senate joined in 1986.[5] Cable News Network (CNN) began operations in 1980; it provided news around the clock and quickly established a reputation for responsible reporting. Also unknown in President Kennedy's day was television news on public television; in 1975, the Public Broadcasting Service brought the MacNeil-Lehrer program to most communities.

As television news has expanded, radio news has had something of a renaissance. In 1970, noncommercial and educational radio licensees formed National Public Radio and a year later launched their first network news program, "All Things Considered." The NPR audience is relatively small (seven million listeners) but devoted;[6] among academics a reference to a recent "All Things Considered" interview is as likely to be common coin as reference to a current Hollywood hit.

News means money on radio as well as television. There had been some experiments with an all-news format in the 1950s, but only in 1964, when WINS in New York became an all-news station, did the phenomenon attract general attention. WCBS joined as a second all-news station in New York in 1967. Soon dozens of cities had all-news stations. "When you want water," said one station manager, "you turn on the faucet. When you want news, you turn us on."[7]

Equally important, "talk radio" became lively and popular. Larry King's nationally syndicated show made its debut on twenty-eight stations in 1978 but eventually served more than 350. CNN adapted it to television in 1985, and by 1990 the program was running nightly on both radio and television. There were interview programs before Larry King's, but his innovation (begun in Miami in 1960) was to add live call-ins to the interview format.[8] News, or "reality programming," has become a pervasive cultural experience.

In the 1990s, if I want to get a copy of the New York Times or the Wall Street Journal in my home town of San Diego, I need only open my front door and pick up my home-delivery copy on the driveway. As late as 1971, when Anthony Russo had strong incentive to read the New York Times because it was publishing the Pentagon Papers, which he had helped Daniel Ellsberg photocopy, there were only a few locations in Los Angeles where the paper could be found.[9] Overcoming the vast size of the country, satellite communication technology and computerized printing systems have made the regional and national newspaper a reality.

The Wall Street Journal has had a national presence for some time, but in the past generation it sharply increased its coverage as a general newspaper rather than an exclusively business newsletter. It expanded to a two-section format in 1980. The Los Angeles Times, when Otis Chandler became publisher in 1960, had one foreign bureau and two reporters in its Washington bureau. It was a provincial, conservative paper that, within a decade, developed into a distinguished, professional newspaper. The same thing happened at the Washington Post. When the late Howard Simons joined the Post in the early 1960s, the paper had a single foreign correspondent and a single business reporter. Not until it published the Pentagon Papers in 1971, according to editor Ben Bradlee, did the Post make "some kind of ultimate commitment to go super first class."[10]

Another indicator of nationalization is that elite newspaper news services began to compete with the standard Associated Press (AP) and United Press International (UPI) services. The Los Angeles Times–Washington Post news service began in 1961 and grew to more than 350 clients by 1980. The New York Times news service began during World War I, but as late as 1960 had only 50 clients; by 1980 there were 500 clients. These news services are not so much high-fiber substitutes for the traditional wire services as they are dietary supplements, adding more detailed and analytic news for the local subscriber. So while metropolitan daily newspapers have continued to die, new sources of national news have become available. Time and Newsweek developed into professional publications, and other magazines provided new sources of public affairs news and comment, too, including the Washington Post national weekly edition (1983) and several magazines that reached a mass market—notably Rolling Stone and Mother Jones.

The largest and most prosperous newspapers exert regional influence well beyond city limits. The Los Angeles Times challenges the Orange County Register in Orange County, and still further south, its San Diego County edition competes with the San Diego Union and San Diego Tribune. In Santa Cruz, California, the local daily, the Sentinel (owned by Ottaway Newspapers, a Dow-Jones company), is only one of several newspapers available by home delivery: there are also the New York Times, the San Francisco Chronicle, the San Jose Mercury (a Knight-Ridder paper), and the Wall Street Journal.

All this needs emphasis when the most visible national newspaper is the flashy and widely disparaged USA Today. Begun in 1982, it had a circulation of more than 1.5 million by 1987. Printed at thirty-two different sites and produced by satellite transmission of copy, it is a technical achievement of considerable proportions. There is some question about its journalistic achievement, though little dispute about its influence on the look and style of other newspapers in the country. It initiated widespread use of color, for example. Its almost pathological focus on the weather has encouraged more comprehensive weather reporting elsewhere.[11]



While an average citizen has access to more, better, more critical, and more diverse sources of national news today than a generation ago, control over news is paradoxically in the hands of fewer and fewer institutions, run more and more by accountants. The chain newspapers are not so much politically conservative as economically risk-averse, which generally comes to the same thing. (In 1986 there were 1,657 dailies, down only slightly from 1,763 in 1960—and the number of cities with a daily newspaper actually increased. But a mere fourteen corporations account for over half of daily newspaper circulation.)[12] While the quality of journalism may more often improve than decline when a community's independent newspaper is bought out by a chain, the chance for an independent-minded publisher or individual eccentric to run his or her own show is dying fast. There is a legitimate concern that chain ownership inevitably precludes diversity. The op-ed page, a development that became standard practice in the 1970s as a way to increase the diversity of opinion, increasingly seems the same from one newspaper to the next. In leading news institutions, the reliance on official government sources is overwhelming, the absence of left-wing critics or commentators consistent, and the inside-the-northeast-corridor orientation hard for a midwesterner, southerner, or westerner to ignore.

National News Culture

Accompanying the nationalization of news institutions is the nationalization of newsroom culture. The managers of small papers or television news shows around the country are aware, as never before, of what goes on on the networks, in USA Today, in the New York Times. So are their employees. One result has been the ability of blacks and women, and in some instances other minorities (Chicanos in Los Angeles, for example), to press their institutions for better treatment in the newsroom and more appropriate play in the news pages for the groups they identify with.[13] It may be hard to recall how recent these changes are. Before the 1960s, women journalists wrote about fashion and society—and rarely anything else. The National Press Club admitted women only in 1971. In 1966, the Chicago bureau chief for Newsweek could turn down a woman reporter from UPI for a job, explaining that "I need someone I can send anywhere, like to riots. And besides, what would you do if someone you were covering ducked into the men's room?"[14] That would be hard to get away with today.

The diversification of the newsroom may be less than it appears. In television news, women and minorities are most often seen on weekends, in what has been known in the business as the "weekend ghetto." By the end of 1990, there were no women or minority anchors regularly assigned to any network evening news program. Still, anecdotal evidence suggests that women (and to a lesser extent minorities) in the newsroom have made a real difference in what gets covered and what emphasis coverage receives. Women staff members at the Los Angeles Times spearheaded a major ten-part series on women in the work force in 1984; a woman editorial writer at the Seattle Times asserts that almost all the editorials done on subjects concerning children are done "because I'm here."[15]

Journalists at national news institutions are better educated than ever before, more likely than in the past to have come from relatively privileged backgrounds, and more likely to be paid relatively privileged wages. They are more and more likely to get their views from other journalists, not their own editors or publishers. They are likely to share in what Herb Gans calls a "Progressive" outlook—a belief in a two-party system, responsible capitalism, the virtues of small-town life, individualism, moderate measures under all circumstances, and some vague notion of the public interest.[16] The solidity of these values grows as more and more news is reported out of a single location—Washington, D.C. In Washington there is more of a social arena for journalistic culture than ever before. In 1961, there were some 1,500 journalists working in Washington—but more than 5,300 by 1987.[17] Journalists there can, and apparently do, talk mostly to one another.[18]

This is not to say that ours is now a seamless, coherent national journalistic culture. Look, for instance, at the growth of the Spanish-language media. In 1974, there were fifty-five Spanish-language radio stations; today there are 237. Twenty years ago, there were only a handful of television stations that broadcast Spanish-language programming. Today Univision, the largest Spanish-language television network, claims over 400 broadcast and cable affiliates. A new Spanish-language cable company, Galavision, began in 1989. Thirty-one television stations now broadcast entirely in Spanish. These stations are concentrated in the "Hispanic Top Ten": Los Angeles, New York, San Francisco, Houston, Dallas, El Paso, Brownsville-McAllen, San Antonio, Miami, and Chicago.

Ethnic and linguistic diversity is well represented in the American media, both print and broadcast. So, too, are media flourishing that appeal to different religious groups, most visibly with the rise of the "televangelists." The technological capabilities that have made possible a dominant national news culture have also been a key resource for the growing power of more parochial but nonetheless nationally based "consumption communities." The nationalization of the news media has not meant the homogenization of media experience, but the creation of a new set of national arenas for a variety of distinctive subcultural tastes. For instance, thanks in part to computers and desktop publishing, there are (by rough estimate) some 100,000 newsletters in the country that circulate for free or as part of an organization, association, or business. There is a newsletter industry, with its own trade associations; Newsletters in Print catalogs over 10,000 newsletters.[19] Pluralism is not without problems. Such cases as Ku Klux Klan use of easily accessible computer bulletin boards or unscrupulous evangelists on their own cable programs raise difficult issues. The new national media increase the visibility of pluralism more than they insist on homogenization.

In the world of information, the poor grow richer but the rich grow richer more rapidly (the "knowledge gap" hypothesis, as communication researchers call it). The rich have more information and more incentive to get and use information efficiently. Take, for instance, the Republican National Committee's 1984 opposition research group that started collecting data and quotations on leading Democratic contenders early in the primaries. By the time the Democratic convention opened, the Republican computer had 75,000 items on Walter Mondale, including 45,000 quotes from all through his career. The data base was updated daily during the campaign. The materials were accessible through a computer reference dictionary, and computer links were made to 50 state party headquarters, 50 state campaign headquarters, and Republican spokesmen in all 208 broadcast rating markets. This was impressive. More impressive still was that similar systems were available to all Republican candidates for the House in 1986 through the Republican Information Network. If a candidate was running against a Democratic incumbent, he could instantly learn the incumbent's voting record back to 1974 on any issue. Now, this did not change the outcome of the 1984 election, and Republicans remained a minority in the House, even after 1986. But it gives a sense of the sophistication of the new information technologies for matters very close to the heart of the democratic process.[20]

Thanks to the campaign reform acts of the early 1970s, parties have taken to new forms of campaign fund raising, especially direct mail advertising. With computerized mailing lists and sophisticated targeting of zip code locations most likely to provide names of wealthy contributors, direct mail experts have transformed political fund raising. One senator (Tim Wirth, Democrat from Colorado) has 150,000 names in his computerized lists, divided into a thousand categories according to topic of interest, field, or occupation (117 names appear on the list of people interested in women's issues, 8 on the list of those interested in women in mining).[21] These lists are used for fund raising and self-promotion so that the right message can be addressed to the right people.

While the disproportionate weight of media and publicity lies, of course, with established powers, guerrilla media use has its brilliant practitioners. The first "Earth Day" (in 1970), a scheme of Wisconsin Senator Gaylord Nelson's to draw attention, especially through college teach-ins (an invention of the anti-war movement), to environmental issues, was a "patchwork of demonstrations and community activities," though a patchwork that attracted significant media attention and public interest. Planning for Earth Day 1990 was run by two different groups, one with 38 employees, and received backing from labor, business, and the media.[22] Weeks before Earth Day 1990, the Los Angeles Times was already covering not only Earth Day but the coverage of Earth Day. As in the 1988 election campaign, the media were as attracted to the story of media coverage as to the stories themselves.[23] News culture tends to consume itself.

The Newsification of Popular Culture

An important corollary to the nationalization of news has been the nationalization of public problems and the nationalization of an audience for them. Most observers of the media have complained that serious news institutions have been turning news into entertainment, but the larger trend is that entertainment has turned into news. If "60 Minutes" exemplifies a trend to make news entertaining, the "Donahue" program is the model for making entertainment that feeds on the news.

"Donahue" was first syndicated out of Chicago in 1979; the "Oprah Winfrey Show" was syndicated in 1984, and in the last few years several other competitors have entered the fray. These programs are sometimes televised sideshows, parading the American psyche before us with an exaggerated, freakish self-consciousness. At the same time, cheaper than psychotherapy and more readily available than a close friend, they inform people about a wide range of social, psychological, medical, and occasionally (at least on "Donahue") political problems. The producers of "Donahue" conceive of their topics as "serious issues" or more precisely, "serious issues that are in the news." News culture becomes the central storehouse for the various national conversations in American society.[24]

Television unashamedly, in fact proudly, runs dramatic programs, sit-coms, and soaps that borrow from contemporary controversies for plot material. This is not like the "spy" shows of the early 1960s that reflected a general Cold War ideology; these are programs whose makers frequently engage in careful research to model a plot episode after a recent news event or to mimic in a sit-com the arguments that rage around a contemporary social problem. This began with "All in the Family" in 1971. Over the course of just a few months, that new sit-com dealt with homosexuality, cohabitation, race and racism, women's rights, and miscarriage. By the end of the 1971–72 season, it was the top show on television, and producer Norman Lear developed enough clout to retain some independence from network censors. When a group called the Population Institute, a lobbying organization for promoting population control, set up a meeting in 1971 with television executives to encourage them to deal with population issues, Lear became personally interested in doing an episode that would deal with the population issue.[25] The "Maude has an abortion" episodes on "Maude" (a spin-off from "All in the Family") the next year were the highly controversial result. "M.A.S.H." dealt with the war in Vietnam (through displacement to Korea), and a whole array of made-for-television "problem" movies dealt with issues from child abuse to chemical pollution of the environment to wives murdering abusive husbands. "Lou Grant," a popular drama in the 1970s and early 1980s, was set in a metropolitan newsroom, and it borrowed directly from recent news events. Mother Jones took pride in telling its readers that the January 19, 1981, "Lou Grant" show on the United States dumping hazardous products in the Third World drew on a 1979 special issue of the magazine, thereby using television fiction to legitimate its own journalism.[26] In 1989, after a jury found for the defendants in a medical malpractice suit in Florida, the plaintiff's lawyer asked for a new trial because, he claimed, a recently aired "L.A. Law" episode in which the doctors won a malpractice case was "propaganda" that probably influenced the jury.[27]

This newsification of popular culture is no doubt rooted in a longstanding Puritan temper that distrusts entertainment unless it is instructional. But the leakage of news into comedy and drama in the past decades has a more contemporary ideological source, too. Critics of popular culture argued convincingly from the 1960s on that entertainment is a form of instruction, whether it is meant to be or not. It was an old complaint that mass media portrayal of crime and violence encourages crime and violence, renewed in the 1960s with television as the target. It was more novel and more challenging to complain that the subordinate status of women and minorities in contemporary society was encouraged by mass media stereotyping. This criticism was effectively turned into politics by Action for Children's Television in its persistent attacks on children's television programming and by the Ford Foundation's support for Children's Television Workshop and "Sesame Street." "Sesame Street" may look almost painfully self-conscious about racial and sexual stereotyping and, critics charge, not nearly self-conscious enough in its submission to the rapid-fire pace and gleam of commercial broadcasting. But it provides parents a televisual haven of safety from the persistent violence, sexism, and racism of commercials and programming emanating from the commercial stations.

The cultural, rather than political, consequences of newsification may be the more important. We live with more vivid, dramatic knowledge of events around the world than ever before. We live our "real" lives bodily, in our homes and work places and on our streets. But at the same time we live alongside the hyper-reality on our television screens and radios and in our newspapers. Contemporary life becomes some kind of science fiction, two parallel worlds moving along in tandem, usually disconnected, only occasionally, and then perhaps jarringly, in touch.

Television in the Media System

To tell the story of dominant developments in the news media in the past generation as a story of nationalization, newsification, and the rising symbolic centrality of something called "news" differs substantially from some other popular accounts. Perhaps the most common story is that television is the simple one-word answer to the question, What has happened to the media in the past thirty years? In this view, television has overwhelmed society, propelling the decline of literacy, the decline of seriousness, and the decline of political participation.

But consider what should be a simple instance: as television news has expanded and as the public's professed reliance on television news has increased, newspaper "penetration" has declined. Newspaper readership among young people is particularly low. What could be a simpler cause-and-effect relationship? Vulgar television does in virtuous newsprint. This has often been cited as incontrovertible evidence of the dangers of television. But in a comparison of 20 Western countries (and Japan) from 1964 to 1984, Leo Bogart found no overall relationship between the spread of television and newspaper penetration.[28] While the number of television receivers per capita and the total time spent viewing television are pretty much the same from one Western country to the next, newspaper circulation per 1,000 population differs dramatically from Japan (562 newspapers per 1,000) to Sweden (521) to the United Kingdom (414) to the United States (268) to Canada (220) to Italy (96). During this period, when television penetration increased everywhere, newspaper circulation per capita also increased in Japan and Sweden, declined imperceptibly in Canada (1 percent) and Italy (5 percent), and declined dramatically in the United States (16 percent) and Britain (21 percent). Television certainly is vital in American news today, yet its centrality can be (and usually is) exaggerated. American journalists underestimate how much time people spend reading newspapers and overestimate how much time they spend watching television news. They mistakenly believe that making print more like television, with shorter news items and more feature stories, will bring in more readers. In fact, in recent years, papers gaining circulation showed no markedly different editorial practices from those losing circulation. Distribution, not content, is the cause of a loss of readership.[29] That is, the main decline in newspaper circulation is in single-copy sales, rather than home delivery, in large metropolitan areas, and the problem seems to be not that people find television more satisfying, but that the suburbanization of American life, the decay of urban neighborhoods, and the unemployment, poor health, poor education, and disaffection of the urban poor make engagement in a community through the newspaper an irrelevancy. The other side of that coin, as Ben Bagdikian has observed, is also important: the economics of newspaper production has led competing papers in a city to fight for the same upscale consumers in order to attract the same advertisers. This process, which has created more and more monopoly newspaper cities, leads to news content less and less relevant to the blue-collar citizens who were once reliable newspaper subscribers. The newspaper, in short, in moving upscale, has significantly authored its own irrelevance.

Television is a centrally important medium in American culture, but it is not in and of itself either the sum of or an explanation for the changing informational environment of American citizens.

Are the News Media Moving Right or Left?

Another story is that the main development in the news media has been a sharp move of news content to the right (a favorite theory on the left) or, alternatively, that the national news media have been captured by a corps of too well paid, too comfortable, too Eastern, too Ivy League, and too liberal journalists (a favorite, naturally, on the right).

In 1969 an economist with the Federal Reserve, Reed Irvine, created Accuracy in Media, an organization devoted to pointing out every actual, and imagined, left-wing bias in what Irvine calls "Big Media," meaning the networks, the few newspapers of national influence, the news magazines, and the wire services.[30] Today AIM reports a membership of more than 25,000, an annual budget of $1.5 million, a speakers bureau, a newsletter with a circulation over 30,000, a daily 3-minute radio program that appears on 200 stations nationally, and a weekly column that appears in some 100 newspapers. A variety of other right-wing critics of the media arose in the wake of AIM. For instance, Robert and Linda Lichter, conservative media scholars, founded the Center for Media and Public Affairs in 1986, which surveys media performance and analyzes media impact on public opinion. In 1986, Fairness and Accuracy in Reporting (FAIR), a left-wing counterpart to AIM, was established.

While the right-wing institutes pushed a view of the left-wing media, their very existence, coupled with the general rightward tilt of elite political thinking in the 1980s, helped promote the idea of a shift to the right in the press. FAIR went over the "Nightline" guest lists for 1985–88 and found an overwhelming preponderance of government officials, almost all of them white and male. What else is new? FAIR observes the flourishing of political talk shows hosted by conservatives—William Buckley's "Firing Line" being the granddaddy, followed by John McLaughlin, Patrick Buchanan, Rowland Evans, and Robert Novak. No show at the time of FAIR's study was hosted by a liberal.[31]

If there was a shift to the right in the media in the 1980s, it may have had something to do with consternation in the business world in the 1970s over the media's seeming tilt against business. Mobil began taking out ads in the New York Times in the 1970s (and the new terms advocacy advertising and advertorial were coined). In 1975 corporations spent $100 million in advocacy advertising, aiming as much as a third of their total advertising expenses toward people as "citizens" rather than as "consumers." Business groups began to seek ways to influence the news media by giving prizes for economic reporting, establishing business reporting training programs at universities, sponsoring arts and cultural programs on television, creating or supporting new neo-conservative think tanks, and holding roundtables with journalists, all the while complaining loudly that they were being maligned by a "liberal" press.[32]

Perhaps the desire to influence the media had something to do with a loss of direct control over them, notably over television. In the 1950s, sponsors of television programs had significant influence over the content of programming, to the point of reviewing scripts before broadcast. But from the time of the "quiz show scandal" in 1959 (when it was revealed that quiz shows were "fixed") on, the networks took tighter control of the reins themselves. Moreover, as advertising time grew more and more expensive and competition for television time increased, the program with but a single sponsor disappeared from the screen. Between 1967 and 1981 the number of commercials on the networks per week increased from 1,856 to 4,079, while "spot" commercials increased from 2,413 to 5,300 as the standard length of the commercial declined from 60 seconds to 30 seconds.[33] It's no wonder that recently General Motors, among others, has asked for something new in advertising lingo—"pod protection." That is, GM wants to be the only automobile ad within a group of commercials (or "pod") aired consecutively within a single commercial break in a program.[34]

The decline of direct advertiser control over television was minor compared to the loss of business control over the political agenda. The sixties created the climate for a set of issues and institutions that cast a cold eye on business in the 1970s. The Congress, especially the Senate, was influenced beginning in the early 1960s by northern Democrats, who successfully challenged what had been a domain of conservative southern Democrats. This helped the passage of liberal policies in the late 1960s and 1970s, including the creation of new government agencies to monitor business activity—the Environmental Protection Agency (1970), the Equal Employment Opportunity Commission (1965), the Occupational Safety and Health Review Commission (1970), and the Consumer Product Safety Commission (1973), not to mention a newly militant Federal Trade Commission. The press, devoted as always to covering government, covered the new agencies and so shone a light on business that was necessarily more critical and concentrated than in the past.

Business antipathy to the media was also a response to the success of Ralph Nader, who helped invent a new public opposition to business. Nader used some old-fashioned media methods in his rise to prominence. He first published an article on automobile safety in The Nation in 1959. His book Unsafe at Any Speed propelled the 1966 legislation that made the federal government a guarantor of highway and automobile safety and led to the "recall" of automobiles with safety defects. In the following years, Nader established a fleet of public interest lobbying and research organizations both in Washington and around the country. The federal government, by establishing new agencies to protect occupational health and consumer safety, and private industry, by getting itself into and mishandling near-disasters (Three Mile Island) and major disasters (Bhopal), did the rest. The Congress, while still the center of Washington legislative activity, was increasingly a consumer of policy initiatives, not only from the White House but from a mushrooming assortment of lobbyists.[35] While citizens groups and public interest groups remain a small fraction of the total lobbying effort in Washington, they nonetheless proliferated between 1960 and 1980.

So the media, following Washington, moved left in the 1970s; again following Washington and the coming to power of the Reagan administration, they moved right in the 1980s. Too many media critics, left and right, have overestimated the independence of the media and underplayed the power of media routines, repeatedly documented in studies by Edward Epstein, Herbert Gans, Todd Gitlin, Daniel Hallin, Stephen Hess, Michael Robinson, Leon Sigal, Gaye Tuchman, and others.[36] What changed from the 1960s to the 1970s to the 1980s was the political climate that gave differential legitimacy to different sources. The media, in the middle when a polyphony of voices is raining in, have few intellectual resources for independent judgment and no political portfolio for independent polemic.

The 1960s did change the internal culture of working journalists. Television news coverage of election campaigns is more negative than it used to be for both Republican and Democratic candidates.[37] Reporters, like patients seeking medical counsel, are more likely than they used to be to seek second opinions. Institutions well versed in giving second opinions have multiplied rapidly in and around Washington. There is a "social movement industry" now, as Mayer Zald and John McCarthy write, with more resources than ever before.[38] The result, in the national media, is a picture of the world not more left or more right but more muddled and multidimensional (and, if your tastes run to such terminology, more postmodern).

The Survival—and Flourishing—of Print

It remains to say a word about the not negligible medium in which this chapter appears—the book. Little is more important in characterizing the changing contemporary culture than the fact that in 1960 only 41 percent of the adult population (aged 25 and over) had graduated from high school, while in 1988 it was 76 percent. In 1960, 7.7 percent of the adult population had four years or more of college; by 1988 this had jumped to 20.3 percent.[39] While most college education is technical or preprofessional, many institutions stress a "liberal education," and pockets of "liberal education" exist even in technically oriented schools, providing an opening for critical inquiry that high schools rarely afford. Literacy is not on its last legs. In fact, there are more books published by a greater variety of publishers and distributed through more bookstores today than ever before. Despite major mergers and acquisitions in the publishing business, the total number of publishers has increased—to say nothing of the "desktop publishing" that the personal computer has made possible. Where some 15,000 new books and new editions were published in 1960, there were 36,000 in 1970 and 56,000 in 1987.[40] In 1963, there were 993 book publishing establishments; 2,264 in 1987.[41]

Still, books reach the public through an increasingly concentrated distribution network. B. Dalton had more than 500 stores by 1980 and nearly 1,000 when it was bought by Barnes & Noble in 1987; Waldenbooks had more than 700 stores in 1980 and 1,100 by 1987.[42] The ten largest bookstore chains account for 57 percent of all book sales, and B. Dalton and Waldenbooks exert a significant influence on the industry as a whole.

Books as a category are up against heavy entertainment and leisure competition—not just television, but the new adaptations of the home television set. There are twice as many video rental outlets as bookstores.[43] But the uses of print literacy are still growing. Reports that ours is now a television culture are vastly exaggerated.

Conclusions

It is tempting to suggest that with the present flood of information and the hype that carries the informational load, our eyes glaze over more and more readily, that increasingly we surrender our critical powers or never assume them, accepting that "all politicians are crooks" or that "everything causes cancer." But people keep making sense of their own lives, despite all. People still get irritated, bored, incensed, and mobilized, despite all. We make a mistake if we judge the public mind by the menu for public consumption. There is a tendency to believe that if the television news sound bite has shortened from a minute to 10 seconds (and it has in the space of 20 years),[44] the public capacity for sustained attention has shrunk accordingly. But this does not square with the intensity of careerism in business, the growth of the two-income family, the vitality of the pro-life and pro-choice movements, the return of religious revivals, and even the upturn in S.A.T. scores.[45]

Then what does media saturation mean? Consider a fast food analogy. Most people I know eat more Big Macs than salmon dinners at fine restaurants. McDonald's is faster, cheaper, more predictable, easier to squeeze into the rest of life. This does not mean people prefer Big Macs to salmon. It does not mean, aside from economistic tautology, that they greatly "value" Big Macs. It does not mean that their palates are jaded. It means they have made some decisions about their priorities and, then and there, eating a good meal is lower on the list than quickly reducing hunger. I do not think the growing success of USA Today necessarily indicates anything different: it does not mean people judge McPaper the "best" meal or the only meal they seek; simply that they find in it what they need from a newspaper at a given moment, given the constraints of daily life. With world enough and time, or with an important local issue, or with a hot presidential race, their choices might be different. Their choices, in any event, at any given moment, include an array of other sources of information.

If we cannot infer individual tastes from public menus, can we nonetheless observe something about how available cultural repertoires limit or shape opportunities for consciousness? Yes, but carefully. The flourishing of McDonald's forces other restaurants to change and still others to close up. The prevalence of McDonald's tutors citizens, particularly the young, in what food is good, what food is, what a meal is supposed to feel like. This may not be the tutor we would most like to have for our children. At the same time, Americans eat less beef today than they did when McDonald's was a gleam in Ray Kroc's eye; McDonald's is not the only tutor in the culture. Nor is McDonald's itself untutored by larger social and cultural change; witness the availability of salads and the declaration that french fries will no longer be fried in animal fat. Again, judging American habits or structures from the most visible elements of public consumption is something to undertake only with great care.

American citizens have more information today than they had a generation ago. More credible information. More national sources of information. More authenticated conflicts of information and opinion, thanks to the proliferation of expert lobbying groups and the media's changing habit of seeking out a variety of sources. More information coming to the laity through the media rather than through expert intermediaries. If the New England Journal of Medicine publishes research of possible interest to the laity, it does not percolate down through family physicians, but goes straight to the newspaper, magazine, and broadcast science reporters, and gets picked up soon thereafter in women's and consumer magazines, too. At least for middle-class citizens who read the women's magazines or Jane Brody's column in the New York Times and are empowered by their education and social standing to instruct their friends and families and talk back to their doctors, this kind of information is useful and gets used.

I do not conclude from this that we have the right information at the right time or that available information is distributed equitably or that the informational citizen is well informed. Our increasingly dazzling library of information provides only an illusion of knowledge and a false promise of citizenly competence if the social order does not equip people to use it, if young people are cynical, if the poor have no hope, if the middle class is self-absorbed, and if forays into public life are discouraging and private pursuits altogether more rewarding than public enterprise.



Thirteen—
Schools under Pressure

Caroline Hodges Persell

It is fashionable today to attack education in the United States. Conservatives and liberals alike agree that education is in trouble. They disagree about why and what should be done, but they agree that the educational system needs to be improved. I suggest that systemic shifts and demographic changes make the situation facing education more serious than in recent decades and exacerbate the challenges education faces. Chief among these is the challenge of social inequality. But there are other challenges facing education as well, namely those of pedagogy, personnel, national testing, and a shift in the purposes of education.

Changes in the Social Context of Education

Education is a broader concept than schooling, and the social institution of education includes more than just what happens in schools. Education refers to both formal and informal ways that the older members of a society or group try to teach newer members the attitudes, behaviors, skills, beliefs, and roles considered necessary to become participating members in that group or society. Education occurs both informally and formally. Informal education occurs through child rearing by parents and other members of the family, through peer-group interactions, and through observation and imitation of behaviors seen in the neighborhood, on television, or elsewhere. Some of what is observed and imitated may not be intended to be learned. Formal education occurs in schools, where trained personnel try to transmit information, teach skills, and guide inquiry and learning. Formal education in the United States reaches increasing numbers of pupils for growing numbers of years.



Systemic Shifts in Society

Two systemic shifts in society are creating a new crisis in education. The first is that our society is becoming a postindustrial one, as numerous others have noted already. As physical labor and manufacturing become less important, interpersonal services and symbol manipulation become more important. Dropping out of, or being excluded from, education increasingly means being shut out of the economic and cultural core of society. Education, both formal and informal, is particularly important for integrating people into a society where symbolic distinctions are increasingly prevalent. The second shift is the shrinking of the informal sphere of education, and the growing burden being placed on the formal system of education, without concomitant increases in time, staff, money, or innovation. The informal sphere is shrinking because of changes in other social institutions, namely the family and the economy. Children are spending much less time with their families than they did in the past. Many more mothers of even very young children are working full time. A projected 59 percent of the children born in 1983 (who are now in school) will live with only one parent before the age of eighteen; there are at least 4 million school-age latchkey children in the United States; and 20 percent of all children in the United States are being reared in poverty.[1]

As a result of changes in other institutions described in other chapters in this volume, parents can bring less time to the informal education of their children. There is no systemic acknowledgment of the need for, or support for, child care in the United States, unlike in Canada and most Western European countries. This is in no way to blame single parents or working mothers, but to recognize the fact that the family's capacity to provide informal education has eroded.

The formal sector might be able to compensate for limitations in the informal sector if it were given more resources. Instead, what we see is ever-growing demands being placed on formal education, with no significant increase in the resources needed to meet those demands. Schools are asked to offer all their usual instruction in literacy, numeracy, civic education, science, languages, and reasoning skills, and they are charged with meeting each new challenge facing society, whether for driver's education, the teaching of moral values, technological "literacy," the provision of adequate nutrition, or the avoidance of drug and alcohol abuse, teen pregnancy, and AIDS. Not only is education called upon to solve these social problems, but problems such as these make the task of education more difficult. In addition, schools in the United States educate a larger percentage of youth for a longer period than any other society in the world. Thus the size and the expense of the system have continued to increase, simply because it is touching more lives.

Demographic Changes

Demographic changes intensify the crisis in education. The population to be educated is changing dramatically. No longer are the majority of school children from white middle-class families who live in suburban homes with white picket fences. At least ten states face the prospect of "minority majorities" in their public schools by 1995.[2] In 1987 the Los Angeles public schools were teaching children who spoke eighty-one languages other than English at home.[3] Such children pose major challenges for schools.

A second demographic change is the increase in disabled and handicapped youngsters. There are increasing numbers of teen parents and babies born to addicted parents. Twenty-five percent of babies are born to mothers who received no prenatal care. All of these factors are related to increased numbers of disabled children, who may need different kinds of education. In short, massive systemic and demographic changes increase the demands and expectations placed on education, and they heighten the challenges facing education.

The Challenge of Social Inequality

Demographic diversity makes the realization of equal educational opportunity all the more important if society is to be perceived as just, legitimate, and reasonably cohesive. How can education mitigate inequality based on class or ethnicity when, in practice, education reinforces inequality by virtue of vast differences between public and private schools and extensive tracking in public schools? The United States, perhaps more than any other society, holds as a cherished ideology the concept of a fresh start for each new generation. Young people, the creed goes, should be given a fair chance to be all that they can be. For a nation of immigrants, two opportunities are essential—the opportunity to learn the language, the culture, and skills, and the opportunity to work. Other chapters in this book explore the availability of opportunities to work. Here we consider opportunities to learn.

Despite imagery to the contrary, American education is not a uniform system. Therefore, it is very important to understand the broad contours of American education and to discern how variations are related to social class and ethnicity and to educational consequences. The configurations of American education are remarkably related to class and ethnicity. In most schools, students' social class and backgrounds are likely to be similar because most people in the United States live in relatively homogeneous neighborhoods. Children who grow up in large cities or mixed suburbs are less likely to attend a local school with neighborhood children. Private schools flourish in such areas. If by chance students of different backgrounds do attend the same school, they are very likely to have different classmates and to experience different programs of study because of tracking. Distinctions between public and private schools and the practice of tracking have important educational as well as social consequences.

Public, Parochial, and Private Schools

A private school is one controlled and funded by nonstate sources. While 25 percent of all elementary and secondary schools are private schools, they educate only 12 percent of the student population.[4] This is because they are generally quite small; their average enrollment is 234, and 75 percent enroll fewer than 300 students. Only 7 percent enroll 600 students or more. They are thus much smaller than most public schools, which average 482 students. Many urban secondary schools are much larger, often enrolling several thousand students.

The most elite private schools are attended by children of the upper and upper middle classes. In the early 1980s, 90 percent of the fathers of elite boarding school students were executives or professionals, and nearly half had family incomes above $100,000 per year. Fewer than 20 percent of the parents were divorced. The ethnic composition of elite boarding schools has become more diverse in recent decades, although it is still considerably less diverse than that of the public school population. Four percent of students are black, 5 percent are Asian, 11 percent are Jewish, and 27 percent are Catholic.[5]

Despite their relatively small size, elite private schools have spacious and well-kept grounds, and extensive computer, laboratory, language, arts, and athletic facilities. The teachers have been educated in the liberal arts at selective colleges and are responsive to students and parents. Teachers generally do not have tenure or belong to unions, so they can be fired by the school head if they are considered unresponsive or incompetent. Although three-quarters of private school teachers nationally are women, at elite private boarding schools 60 percent are male, as are 61 percent of the students.[6] Nationally, 92 percent of private school teachers are white.[7] Classes are small, often having no more than fifteen students, and sometimes considerably fewer. Students are required to be prepared for class and to participate in class discussions, and they write a great deal. Virtually all students study a college preparatory curriculum, and considerable homework is assigned. Numerous advanced placement courses offer the possibility of college credit. There are also many opportunities for extracurricular activities, such as debate and drama clubs, publications, and music, and the chance to learn unusual sports that colleges value, such as crew, squash, and ice hockey. Students have both academic and personal advisors who monitor their progress, help them resolve problems, and try to see that they have a successful school experience.

In terms of the family backgrounds of children who attend them, other private schools and parochial schools are quite similar to each other, although the philosophy and organization of the schools may vary considerably. Parental education, especially the mother's education, is highly related to student aspirations, and the mothers of parochial school students are comparable in education to the mothers of public school students. In parochial schools, 6 percent of students are black and 10 percent are Hispanic. In other private schools, 2 percent are black and 8 percent are Hispanic.[8] Hence, the ethnic composition of these schools is less diverse than that of the public schools.

Like elite boarding schools, parochial and other private schools are almost exclusively academic, and their students take more credits in mathematics, English, foreign language, history, and science than do public school students. There is also little grade inflation. In levels of student participation in extracurricular activities, parochial schools more closely resemble public schools than they do other private or elite schools, where more students participate.[9]

The costs at parochial schools are relatively low, especially compared to other private schools, because the schools are subsidized by religious groups. These schools have relatively low teacher salaries and usually have no teachers' unions. Currently there are more lay teachers and fewer nuns, sisters, priests, and brothers as teachers than in the past.

Parents are generally involved in private schools, first by paying for them directly, and also by attending parent conferences and school meetings and by doing volunteer work. They are often involved in fund raising and promotion for the schools.[10]

Although there is wide variation among public schools in the United States, they are usually quite large, and part of an even larger school system that is highly bureaucratic. They are usually comprehensive schools, which means that they offer varied courses of study, including academic, vocational, and general curricular tracks. James Coleman and Thomas Hoffer, authors of a recent book on public and private high schools, note that "two-thirds of the public schools, enrolling three-quarters of all public school students, are organized as comprehensive schools."[11]

In the early 1980s, the median income of parents of public school students was $18,700,[12] four to six thousand dollars less per year than that of parochial and other private school parents. The parents are much more likely to be working or lower class, with lower average levels of education, and they are more likely to be divorced. Among public school students in 1984, 16.2 percent were black, 9.1 percent were Hispanic, 2.5 percent were Asian or Pacific Islander, and 0.9 percent were Native American or Alaskan Natives.[13]

In 1986, 90 percent of public school teachers were white and 69 percent were female. The authority of professional educators is often buttressed by bureaucratic procedures and by unionization of teachers and administrators. In general, there is a higher ratio of administrators to teachers and students in public than in private schools, perhaps partly because of the numerous governmental requirements public schools have to meet.

Clearly there are differences in terms of who goes to different types of schools, and what they experience there. The question is, are there different consequences, and can any of those differences be attributed to school effects rather than to selectivity bias? There are a number of differences in outcomes, which we will consider briefly, before turning to the question of their causes.

A perfectly reasonable question is what proportion of the students drop out. Among public high school students, 24 percent drop out, compared to 12 percent of parochial and 13 percent of other private school students. In terms of achievement test scores, private and parochial school students score higher than public school students in every subject. When Coleman, Hoffer, and Kilgore introduced statistical controls for various relevant family background factors, they found that achievement differences between public and private sectors were reduced (more for private schools than for parochial schools), but that differences remain.[14] In other analyses, Catsambis found that most of the "school effect" of Catholic schools was due to curricular track placement and types of courses taken.[15]

A third differential outcome is college attendance. Private and parochial school graduates are much more likely to attend college than are public school seniors; 45 percent of public school students, compared to 76 percent of Catholic school students and 76 percent of other private school students, enrolled in college.[16] Part of this difference is due to differences in individual abilities and social backgrounds, part is due to curricular placement, and part is related to type of school attended (whether public, private, or parochial). Among elite boarding school graduates, virtually all (99 percent) attend college, a result that is related to parental and peer expectations, curricula, and highly organized efforts by college advisors in those schools.[17] Such graduates are also much more likely to attend high-status selective private colleges than are public high school graduates.[18]

A fourth consequence is seldom considered in discussions of American education. As Cookson notes:

Schools not only impart to students skills, but they also confer social status. Status competition is an ever-present fact of social life, and the effects of having high-status educational credentials ripple through graduates' lives like waves emanating from a central source; in time, they touch every social and economic boundary. Much of what is currently being written about public and private schools shapes the issue in terms of "choice," a value-free term that implies that private schools are educational alternatives more or less available to all families. This is not true. Private schools, especially socially elite private schools, are similar to private clubs; admission is contingent not only on the ability of the client to pay but [on] his or her personal and social attributes. Educational choice is not a neutral, self-regulating mechanism that acts as a kind of invisible educational hand, sorting and selecting students according to their preferences. To make a meaningful choice one must have the resources to act.[19]

These resources are both financial and social. If private schools were to be supported by public funds, as Chubb and Moe urge,[20] Cookson suggests that such a policy would be likely to increase educational opportunities for already advantaged members of society, result in greater stratification of school children, promote the founding of more and weaker private schools, limit the autonomy of existing private schools, and promote further racial and ethnic segregation.[21]

Finally, graduates of private schools earn more than public school graduates, even when appropriate background characteristics are controlled.[22] Graduates of thirteen elite boarding schools make up 10 percent of the members of the boards of directors of large American business organizations, and 17 percent of those who sit on multiple boards of such corporations.[23]

Whether or not differences between types of schools are the decisive factors in these unequal social outcomes, there is clearly a pattern of cumulative advantage at work here. More privileged families—families that have ethnic, economic, occupational, and structural advantages—gravitate toward certain types of schools, where their children experience different educational programs. The combined effects of initial advantage and educational experience contribute to the more advantaged positions and incomes such children attain in their adult lives. In these ways, family, educational experiences, and resources combine to reproduce social inequalities from one generation to the next. One of the major challenges facing education is how to provide equal educational opportunities to children, regardless of their family backgrounds. The educational practice of curricular tracking in the public schools is one that needs to be reconsidered if educational inequalities are to be reduced.

Curricular Tracking

As we have seen, in private and parochial schools most students study an academic curriculum. This is not the case in public schools, however, where most students are tracked into different curricula. Tracking consists of two elements: sorting by ability and by curriculum. Ability grouping assigns students to learning groups based on their background and achievement in a subject area at any given moment. Their skills and knowledge are evaluated at relatively frequent intervals, and students showing gains can be shifted readily into another group. Students might be in different ability groups in different subjects. Ability grouping can occur while students share a common curriculum, with only the mix of student abilities being varied. All students are taught the same material, although they may be taught in different ways or at different speeds. Quite often, however, different ability groups are assigned to different courses of study, resulting in simultaneous grouping by curriculum and ability. Such placements tend to become self-perpetuating.[24]

One major result of tracking is the differential respect students receive from peers and teachers, with implications for both instruction and esteem. Curricular track placement has long-term consequences with regard to whether people go to college or not, and what type of college they attend.[25] Among public school graduates, 73 percent of academic track students attended college, compared to 30 percent of nonacademic track students.[26] Because where people attend college is related to their chances of graduating, and because college attendance and graduation are related to occupational prestige and income,[27] the issue of educational tracking has profound social implications. Many researchers recommend that tracking should be used much more carefully,[28] or abandoned completely.[29]

Desegregation, Bilingual Education, and Special Education

In addition to struggling with the issue of offering equal opportunity, education is confronted with meeting a number of social and political goals including desegregation, bilingualism, and special education. By putting responsibility for these issues onto education, our society may try to absolve other institutions from worrying about them. Desegregation, in particular, has been the subject of systematic research. In the last decade "many education policy makers seem to have decided that high-quality education rather than equal educational opportunity should be the primary goal of public education."[30] However, desegregation and quality education need not be mutually exclusive goals. In fact, they may be mutually reinforcing. This is particularly likely if we accept Willis Hawley's definition of quality or effectiveness in education in terms of "(1) academic achievement in mathematics and language arts and (2) tolerance and understanding of people of different races and social backgrounds."[31] He concludes that desegregation improves the achievement of ethnic minorities and does not undermine the achievement of whites.[32]

Bilingual education has been less extensively researched than has desegregation. Bilingual education often results from court cases on behalf of non-English-speaking children who are alleged to be receiving unequal educational opportunities.[33] One short-term study found that students enrolled in bilingual programs did not achieve any better than their counterparts who were not enrolled in such programs.[34] Other research, however, which followed students enrolled in bilingual programs for at least four years, found that they made positive academic gains.[35] The issues surrounding bilingual education involve more than education; they include social and political issues that affect the rights of non-English-speaking persons in a predominantly English-speaking country. On the one hand, in an ever-shrinking international world, multiple linguistic traditions can enrich a society. On the other hand, lack of proficiency in the dominant language of a country can lead to linguistic ghettos, fragmentation, and distrust within a society. These issues need to be resolved at a societal level before the mission of bilingual education can be clarified.

About 10 to 12 percent of all students are currently classified as in need of special educational services.[36] As Public Law 94-142 was implemented, a new category of handicap, namely, "learning disability," was introduced. The term refers to students who display inadequate achievement in speech, language, spelling, writing, or arithmetic, as a result of cerebral dysfunction. By 1986–87, the number of students identified as learning disabled was 1,914,000, or 44 percent of all handicapped students.[37] Although the concept is couched as a psychological model to explain why children fail, many children who are classified as learning disabled do not match the theoretical model. "Learning disabled" is the latest in a long list of labels applied to children who are having trouble in school. Like all deficit theories, this one places the blame for failure squarely on the child's limitations and effectively diverts blame from curriculum or pedagogy. Both of these, however, should be considered candidates for explaining pupil failure. The concept of learning disability also has self-fulfilling potency; that is, if teachers believe that children cannot learn because of their disability, they will expect less of them, teach them less, and it is likely that the children will learn less.[38] Learning disability theory diverts attention from the issue of how children learn and what kinds of cognitive skills they have. Such diversions make it less likely that effective forms of teaching and learning will occur.

Other Challenges Facing Education

Four other challenges facing education also warrant consideration: pedagogy, personnel, national testing and standards, and the purposes of education.

Pedagogy

The need for improved methods of teaching has already been mentioned. If tracking is going to be discontinued, if desegregation is to work, if children with special needs are to be effectively educated, new and better forms of pedagogy are required. At least three possibilities exist, and others might be found with further experimentation and research. The three are peer teaching and counseling, cooperative learning, and new technologies, specifically microcomputers.

In peer teaching and peer counseling programs, students who have been trained by teachers and counselors help and tutor other children. Already widely used in a variety of public and private schools, peer teaching has three major strengths. First, those doing the teaching learn more and consolidate their own knowledge and skills. Second, peer tutors are often able to communicate very effectively with students slightly younger than they are. Third, enlisting students in pursuit of the school's goals expands the resources of the schools at no additional cost. Peer tutors can help teach academic skills, mediate conflicts, and provide drug education and AIDS awareness. A study of 143 adolescent drug prevention programs, for example, found that "peer programs are dramatically more effective than all other 'interventions.'"[39]

Cooperative learning is an instructional method whereby students of different abilities and achievements work together in small groups to achieve a group goal. It is a more effective pedagogy than traditional classroom methods in terms of learning subject matter, increasing self-esteem, and improving race relations.[40] The learning groups usually have four members: one who is a high achiever, two who are average, and one who is below average. Each student is responsible not only for learning the material taught in class but also for helping the other members of the group learn it.[41] The benefit is that each individual's achievement brings credit to the group rather than depressing the standing of classmates, as happens in conventionally structured systems of classroom rewards. Such an arrangement motivates students to help each other to learn.

Robert Slavin, who has studied cooperative learning extensively, cautions that several conditions must be present if it is to be effective. As noted, there must be a group goal, but it must be of a particular type. It is not enough to have the group complete a single project, because then the temptation is great simply to have the strongest students do the work. Rather, the group goal needs to be to prepare each member for success on an individual test or other appraisal. Therefore, group success needs to be measured in terms of the individual achievement of each member. When these conditions are met, student achievement improves.[42]

Cooperative learning strategies have been used in both elementary and secondary schools, in urban, rural, and suburban schools in the United States, Canada, Israel, West Germany, and Nigeria, from grades 2 to 12, and in such subjects as mathematics, language arts, writing, reading, social studies, and science.[43] They have led to social growth and improved academic achievement.[44]

If education sometimes resists new ways of socially structuring learning, it has often looked to the latest in technological wizardry for quick cures. The dust still gathers on teaching machines, and audiovisual equipment lies broken or unused in many schools. The big question is whether personal computers (PCs) will go the same route as other promised technological fixes or whether they can improve teaching efficiency and effectiveness. If they are to do the latter, several major stumbling blocks must first be overcome. These include shortages of computers in many schools, the need for suitable instructional software, and the need to provide teachers with time and training to use computers effectively in courses. These are issues that schools and communities need to address.

Personnel

The teaching force is aging and many experienced people are leaving the field. In the next decade, an estimated two million new teachers will be needed. This need will pose tremendous recruitment problems for school districts across the country. As many local and state governments face deficits and resistance to higher taxes, districts will need to marshal public support for higher teacher salaries and devise creative new strategies for attracting academically talented people to teaching. Only when teachers have more influence over curriculum, choice of books, and pedagogy is the occupation likely to attract highly able people.[45]

If teaching is to become an attractive occupation, it needs to be a well-paid career, with professional training, responsibility, accountability, and respect. If it were to change in those ways, it is quite likely that it could appeal to intelligent young men and women who want to work with young people. It is a career that provides flexibility for young adults who want to help raise their own children while pursuing a professional occupation. If certain features of teaching could be changed, namely pay and working conditions, it is possible that education's drawing power could increase.

Testing and Standards

One problem that efforts to mobilize support will face is a perception that public education is not doing a very good job. The bombshell report by the National Commission on Excellence in Education, A Nation at Risk (1983), declared that "international comparisons of student achievement, completed a decade ago, reveal that on nineteen academic tests American students were never first or second and, in comparison with other industrialized nations, were last seven times."[46] However, the data used to support this statement were gathered between 1964 and 1971. Furthermore, the comparisons were based on averages for different countries as reported by the International Association for the Evaluation of Educational Achievement (IEA). The problem with using country averages is that the student bodies in different countries are not comparable.[47] For example, only 9 percent of eighteen- and nineteen-year-olds in other countries reached the last year of high school in the period studied, compared to about 75 percent of those in the United States.[48] Not surprisingly, the more academically select students in other countries did better than the average U.S. student. There is no adequate international study of the academic performance of comparable populations of students. As a result, we do not really know whether U.S. education is suffering internationally.
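A simple numerical sketch shows why such comparisons mislead; the figures below are purely hypothetical and are not drawn from the IEA data. Suppose achievement in two countries follows the same normal distribution with mean 500 and standard deviation 100, but country A keeps only its top 9 percent of students in school to the final year while country B keeps 75 percent. The mean score of the tested group in each country is the mean of a truncated normal distribution,

\[
E[X \mid X > x_p] \;=\; \mu + \sigma\,\frac{\phi(z_p)}{p}, \qquad z_p = \Phi^{-1}(1-p),
\]

where \(\phi\) and \(\Phi\) are the standard normal density and cumulative distribution function. Then

\[
\text{Country A } (p = 0.09):\; 500 + 100 \times \frac{0.163}{0.09} \approx 681, \qquad
\text{Country B } (p = 0.75):\; 500 + 100 \times \frac{0.318}{0.75} \approx 542 .
\]

A gap of roughly 140 points appears even though the two underlying populations are, by construction, identical; unadjusted country averages therefore say little about the relative quality of the schools.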

A Nation at Risk also argued that educational content has been watered down over time, suggesting that the growth of electives has diminished the academic focus of secondary schools, resulting in a "curricular smorgasbord."[49] This assertion depends on one analysis of high school transcripts from a 1964–69 sample and a second based on a 1975–81 sample. But these two samples are not comparable.[50] Even if the two samples could be compared, the evidence does not fully support the Commission's conclusions. While the second sample showed more students in the general track and big increases in the numbers of students taking such courses as driver's education and marriage training, students in both samples spent nearly the same amount of total time on academics (the first sample had 69 percent of their credits in academic subjects, while the second had 62 percent).[51] Other research based on representative samples indicates a modest decline in academic emphasis overall.[52]

Additional school practices were also cited as declining: amounts of homework required, disciplinary standards, and the standards required for high grades (so-called grade inflation).[53] It is hard to prove that such changes are the causes of lower academic performance. One often-cited indicator of lower performance is declining Scholastic Aptitude Test (SAT) scores. In its own analysis, the College Board (which makes the test) found that the decline was largely due to changes in the students taking the test. The College Board attributed the remainder of the decline to changing social conditions, such as increased television watching, student unrest, and other factors. Curricular differences among schools accounted for little of the difference in scores between schools.[54] While there appear to be some small declines in curricular content over time, and some small performance declines over time, no one has pinpointed the causes of the performance declines in terms of educational practices.

In more recent years there has been some evidence of educational improvement. For example, there were fewer dropouts in 1985–88 than in 1978, and reading scores have risen.[55] However, these seemingly mixed and noncomparable results over time lead some to call for national educational testing.

Recent Gallup polls reveal that the majority of Americans favor national education standards, national tests, and a national curriculum, the strong tradition of local control over education notwithstanding.[56] The National Assessment of Educational Progress (NAEP) has done periodic studies to assess what American children know. A national test would set absolute standards of what children should know and would ascertain their mastery of that prescribed content. In this sense it is different from tests, such as the SAT, which grade students in relation to each other, that is, essentially on a curve. But the possibility of such a test raises a series of important questions. Would such a test be a politically attractive substitute for improving teacher pay and teaching methods? Who would decide what all children should know? How would that knowledge be tested? Standardized multiple-choice tests are not usually the best way to measure higher-order thinking and reasoning, which many educators consider an important educational goal. If such tests lead to tighter requirements for high school graduation, they could increase the number of students having difficulties, and the number who drop out earlier.[57] Debates over national testing are invariably linked with discussions of the goals of education.

The Goals of Education

Over time, American education appears to have shifted its primary purpose. In colonial times, education was designed to produce literacy so people could read the Bible. Reading and simple arithmetic were also helpful to people conducting their daily lives. Most people did not go beyond the eighth grade. With the giant waves of immigration in the nineteenth century, education took on the additional goal of preparing people for citizenship. There were literacy tests for voting in some states, and English tests for immigrants seeking naturalization. The importance of education for citizenship should not be forgotten. Every totalitarian regime that takes power tries immediately to seize control of the educational system. Education is considered a nerve center of political control. An educated citizenry is an essential ingredient for a healthy democratic society.

In recent years, education has added the goal of preparation for a vocation or career. Education and the economy are increasingly closely coupled, at least for people who want to work in corporate or nonprofit bureaucracies.[58] Education is an increasingly important passkey to reasonably well paid and secure employment. The continuing importance of education for citizenship should not be overlooked, however, particularly in an era when the labor market is becoming increasingly polarized and pressures are on education to become increasingly tightly linked to the labor market. If education succumbs to this trend, it will relegate large numbers of lower-class and minority children to second-class citizenship and membership in a permanent underclass. As a recent report from the New World Foundation notes:

Education for citizenship means that schools should provide children with the social and intellectual skills to function well as members of families and communities, as political participants, as adult learners, as self-directed individuals. It means educating children about the way the world works, and arming them to influence how it works. Citizenship requires basic skills, but it requires other forms of learning as well: critical thinking, social awareness, connection to community, shared values. The call is for educational values which recognize student needs as legitimate and which prepare students for multiple roles as adults, regardless of their labor-market destinies or economic status. The bottom line for democratic education is empowerment, not employment.[59]

Directions for the Future

To address the current crisis in education, two major directions need to be pursued. First, the United States needs to change its political and social orientations. It needs to become willing to invest now for future benefits, and it needs to move from a "scrap heap" mentality to a reclamation stance toward difficult cases. Second, we need to structure innovation into education.

Investment now could pay big dividends for the future. One area of educational investment that has a proven track record is Head Start, the preschool program for four-year-olds.[60] Every dollar invested in quality preschool education yields $4.75 because of lower costs later on for special education, public assistance, and crime.[61] However, only 16 percent of the low-income children eligible for Head Start are now in it.[62] Liberals and conservatives both are coming to the view that Head Start is an essential but incomplete program.
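The arithmetic behind the cited ratio can be made concrete with a hypothetical figure; the per-child cost below is illustrative and is not a number reported in the studies cited. If a year of quality preschool cost about $3,000 per child, a return of $4.75 per dollar invested would imply

\[
\text{projected later savings} \approx 4.75 \times \$3{,}000 = \$14{,}250, \qquad
\text{net benefit} \approx \$14{,}250 - \$3{,}000 = \$11{,}250
\]

per child, realized over time as reduced spending on special education, public assistance, and the criminal justice system.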

As the United States struggles to change its stance toward natural resources and the environment, it might similarly change its attitude toward human resources. The country could move away from scrapping undeveloped human talent and toward developing and utilizing it. Much could be learned from the nation of Israel in this regard. There every child is needed and valued, partly because the country is in a virtual state of military siege. In the United States, the "birth dearth," which is following on the heels of the "baby boom," may produce labor shortages and may help to mobilize public support for salvaging human talent and developing it from a young age.

Part of developing youthful talent involves ameliorating the worst features of poverty. As long as hundreds of thousands, perhaps millions, of people are homeless, reaching the children and youth living under such conditions with the fact and the promise of education will be difficult, if not impossible. Twenty-five percent of pregnant women in the United States receive no prenatal care. One result is an infant mortality rate that places the United States nineteenth in the world, ahead of Cuba and Bulgaria but behind Singapore, Hong Kong, and Spain.[63] A second result is a greater number of handicapped children. Of the 11 percent of school children who are classified as handicapped, one-third would have had a lesser handicap or none if their mothers had received prenatal care.[64] Handicapped children can cost the state as much as $100,000 per year to educate.[65] Prevention is cheaper for society in the long run than treatment after the fact.

Schools need to consider how to structure innovation into a bureaucratic system. Schools and teachers need to be given incentives and support to experiment with new pedagogies, such as peer teaching, cooperative learning, and new educational technologies.

Because so many children are being reared in poverty, so many are being raised by single parents, and the informal sphere of education is shrinking, the formal sphere of public education must take up the slack or our future will founder. Social justice is a noble reason for changing the level of investment and innovation we pour into education. The threat of international competition is a catchy, if somewhat irrelevant, rallying cry. But if those reasons fail to convince in this age of rational self-interest, then good sense, social pragmatism, and even self-interest can be summoned to elicit support. Do we want to see our society become increasingly polarized, to the point where there is a permanently excluded underclass and an increasingly besieged dominant class? Given this prospect, Americans might try to think more rationally about several interrelated questions. What do they need and want from their schools? Can tracked public schools and socially segregated private schools accomplish the goals Americans have for education? Are the form and level of funding for education adequate to the tasks that education faces?


298

Fourteen—
Doctor No Longer Knows Best:
Changing American Attitudes toward Medicine and Health

Jonathan B. Imber

The physician, who killed me,
Neither bled, purged, nor pilled me,
Nor counted my pulse; but it comes to the same:
In the height of my fever, I died of his name.[1]
CALLICTER


The wounded surgeon plies the steel
That questions the distempered part;
Beneath the bleeding hands we feel
The sharp compassion of the healer's art
Resolving the enigma of the fever chart.
T. S. ELIOT


In the 1970s, a debate was launched by numerous observers of medicine and medical practice about the fate of one of the most powerful institutions in modernity. Some referred to it as "The End of Medicine" debate because nothing less was at stake than the autonomy of physicians to determine how to practice medicine as they saw fit.[2] The challenges to this autonomy contrasted sharply with the image of doctors in the 1950s. During that time, with the medical profession at the seeming height of its scientific and professional powers, the comparison of doctors with gods may have been overstated in reality, but not in perception. The authority of medicine was secure.

Doctors were guardians of hope and progress that derived from medical innovations and "discoveries" (antisepsis, penicillin, insulin, X rays, etc.) and that vastly increased their power to diagnose, treat, and prevent disease. The search for new discoveries and cures, following upon a more systematic understanding of the etiology of disease, was uniquely medical and scientific in its social organization. The growth of the modern hospital and the increased resources devoted to laboratory research were hailed as evidence of true progress in the conquering of human disease and disability. Only recently, in the past quarter century, has the optimism of medical progress been challenged by forces within and beyond medicine.

The internal dynamics of medical authority are obviously tied to the external dynamics of medical power. Sociologists and social historians have argued, for example, that consistent with their aim to dominate professionally, doctors deliberately allied themselves with the state during the nineteenth century in order ostensibly to protect the public from the malpractices of quacks and legally to assure that the only medical practitioners would be those certified by university and governmental authorities.[3] These studies have confirmed that the maintenance of the physician's authority was deeply tied to the creation and preservation of medical organization. Further, the sociological understanding of medicine has placed a decided, and perhaps fateful, emphasis on the external power of doctors to map out their jurisdiction, to eliminate competition, and to use science and the state to consolidate their professional powers.[4]

Talcott Parsons' pioneering discussion of themes in the sociology of medicine in The Social System (1951) provides a classic example of the difference between the internal dynamics of medical authority and the external dynamics of medical power.[5] Parsons ably drew the line between these two dynamics by focusing on how the physician-patient relation was possible. What did this relation presuppose? What was expected of those who entered into it?

Parsons' formulation of the "sick role" captured all of the human tensions that were naturally a part of being ill and of treating illness. These tensions were diffused in a series of institutional expectations that enabled the patient and the practitioner to negotiate their way through the uncertainties that each had about the outcome of those negotiations. Parsons articulated a normative faith for medical practice whose tenets included specificity of function (specialization), affective neutrality (scientific objectivity), and suppression of the profit motive in the provision of treatment. Each of these tenets was a theoretical guide to how the doctor-patient relation was possible and why that relation was sociologically determined by the growth of scientific knowledge and technical expertise and by the dedication to a vocation. Both expertise and dedication reassured patients that the doctor acted first and foremost in the patient's best interest, based upon long training and a commitment that transcended monetary reward. The "sick role" was also a description of the expectations incumbent upon the patient, who, by entering into this inevitably asymmetrical relation, understood that caring and curing were not always synonymous. But "care" symbolized a fiduciary responsibility and communicative trust on the part of the physician against which any failure to "cure" had to be measured.


300

What may be most remarkable to someone arriving on the scene of contemporary medicine and the delivery of medical services in the last decade of the twentieth century in the United States is how far Parsons' normative description of medical practice is from the popular perception of physicians, who are seen as uncaring, uncommunicative, self-interested, and ambitious. One genre of medical autobiography in which this popular perception is also confirmed has been written by young physicians who passionately contend that the rites of passage to physicianhood, especially in the resident years, bring out the worst in doctors, forcing them into a structured indifference toward patients and a near-hostile contempt for attending physicians who oversee the residency programs. Of course, patients, and especially those with the least financial means, are said to suffer most.[6]

Critics of Parsons' "sick role" have argued that the ideal typical physician-patient relation cannot be judged apart from the diverse social contexts in which it is played out. Patients with acute symptoms amenable to specific therapies are significantly different from those whose symptoms are either nonspecific or chronic and less amenable to standard therapies. Doctors practicing alone with long-term, stable patient populations are not the same as doctors moving in and out of medical groups whose patient populations are constantly changing. In other words, Parsons' depiction was accurate insofar as an illness was assumed to be either temporary or controllable (for example, diabetes) and the doctor-patient relation was believed to be a personal, mutually informed encounter between people known to one another.

Parsons has also been chided for uncritically depicting a "middle-class" portrait of medical work. His depiction was more hopeful than class-determined. The "class" critique does not explain why the sovereign and charismatic image of doctoring has come under such fire during the past quarter century. More than a century ago, Marx and Engels recognized the historical transformation of the professions under the class domination of the bourgeoisie. The stripping of the halo from medicine, like the disenchantment of the world, is not so much a function of class conflict as of a cultural struggle to determine the nature of individual and group authority at any time and place.[7]

Like Charles Horton Cooley's hopeful and resilient image of the "primary group"—a bulwark against the inevitable incursion of modern mass society—Parsons' physician-patient relation is no longer a bulwark against the omnipresence of illness and illness-causing agents that are said to pervade the workplace, food, and the environment in all societies. Something more substantial than middle-class values is at stake in understanding health and the expectations about it. Medical malpractice and the part played by the law in raising the ante on the decisions made by doctors are symptoms of a cultural transformation that goes well beyond the long-standing concerns about unequal access to medical services. Parsons spoke unwittingly and authoritatively for cultural assumptions that are no longer fully our own. New assumptions, to which the remainder of this chapter will be devoted, have taken hold, and they have led to a revaluing and devaluing of doctoring.

Three types of transformative cultural forces and movements in and around medicine have altered, perhaps permanently, the popular perception of doctoring and have led to new expectations about human health. The first type, and by far the oldest, consists of the forces of epidemiological thought and research that have largely shaped medical and public health understanding in their modern forms. The second type is the influence upon medicine of self-help movements, including the most challenging, feminism, and the most sublime, bioethics. The third is made up of the environmental movements and other cultural/political movements, including parts of the public health movement, that have sought to redefine the relation between human beings and nature.

In the face of these forces and movements, the insulated character of medical "progress," the inner dynamics of which are celebrated in histories of scientific discovery, can no longer be maintained. On the contrary, the modern doctor as a figure of authority born of that progress has been challenged not only because the power granted by science is said to be insufficient or abused but also because that power is said to be impossible to control completely. The physician is the personification of a cultural struggle to define the limits of both scientific knowledge and professional power. A double meaning applies to all contemporary critiques of professional power. The jurisdictional meaning of that power is the operative one fully explored by sociology.[8] The cultural meaning of that power is less operative and more illuminative precisely in the sense that it reveals similar features of legitimation for all roles deemed authoritative, including those of parent, teacher, priest, and doctor. As that authority wanes—that is, as it is challenged as illegitimate power—something of the lasting meaning of physicianhood may still be visible to the naked eye insofar as it can see in superordinate/subordinate relations the moral nature and moral limits of human life.

The Forces of Epidemiology

Epidemiology, the study of the incidence, distribution, and determinants of disease in a population, has been the dominant scientific influence in shaping the early and present character of public health policy. It has also deeply affected the way in which medical innovations and treatments are ascertained, administered, and withdrawn. Harm to individuals can be calculated and predicted statistically, given certain controls. The role that epidemiology has played in defining how harm is understood, both scientifically and commonsensically, has yet to be fully assessed. Probabilistic reasoning is not new.[9] But how it relates to an understanding of individual health and to the formulation of health policy is neither obvious nor settled, and it represents an intellectual force whose influence is systemic in all advanced societies.

Because it costs money to maintain health, actuarial studies of morbidity and mortality have been central guides in enabling the insurance industry to develop profitably in alliance with those delivering medical services during this century. Just as legal hegemony was sought by professionally minded practitioners in the nineteenth century, economic security has been assured in this century for both patients and physicians through a system of insurance that depends upon epidemiologically derived analyses of risk. Medical services and insurance coverage for them are interactive, and their escalating costs have been driven by the cultural and political belief that good health should be one of the essential benefits of living in modern society.
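In schematic terms, an actuarially derived rate translates morbidity data into a price by weighting the cost of a claim by its estimated probability for a given class of insured persons; the figures here are purely illustrative and do not come from the chapter's sources. If, say, 10 percent of the people in a rating class are expected to generate claims averaging $5,000 in a year, then

\[
\text{expected cost per insured} \;=\; \Pr(\text{claim}) \times E[\text{cost} \mid \text{claim}]
\;=\; 0.10 \times \$5{,}000 \;=\; \$500 ,
\]

to which administrative loading and profit are added. The politically charged questions concern not this arithmetic but which characteristics may legitimately be used to define the rating classes.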

In the United States, the poor, the unemployed, and their children do not stand on equal footing with those able to obtain medical services by qualifying for private insurance. No other belief about social equality and equity raises more fundamental questions about justice for all than the belief that a right to medical services should be guaranteed to all. Inequity in services intensifies the conflict between aspiring rights and actual privileges. The economic privilege of health insurance determines not only the quality of medical services but their availability as well.[10] In this sense, Parsons articulated a norm of equity for the medical encounter that remains implicitly intact. No "sick role" can exist without a corresponding availability of medical services. The number of persons presently without health insurance is estimated to be close to forty million, or over 15 percent of the American population.

In light of the tremendous impact that the insurance industry has had on establishing the economic vitality of the profession and the financial security of the patient, it is important to consider how the idea of "risk" is used to determine what kinds of behavior are encouraged and discouraged in the name of good health and good medical practice. Obviously, age, gender, class, race, and ethnicity can be used to define categories of insurance risk in broad, probabilistic ways. The same is true for practitioners, except that one substitutes for class, race, and ethnicity, the categories of medical specialty, medical procedure, and region of the country. When viewed in terms of how to calculate the costs of doing business, the principles of epidemiology can be applied in ways that can selectively correlate certain behaviors with illness and certain medical practices with law suits. Cigaret smoking illustrates the first case and is taken up in this section; obstetrical care illustrates the second case and is taken up in the next section.

Thomas McKeown has argued that, in the advanced societies, diseases of affluence have replaced diseases of poverty as the primary focus for public health study and action.[11] Most diseases of poverty have been prevented by immunization, proper housing, sufficient food, clean water, sanitation, and other measures that for nearly three centuries have represented the minimal conditions for maintaining the public health. Most diseases of affluence, however, have been studied in terms of the behavior of individuals. The epidemiology of the past proved how diseases of poverty could be reduced or prevented by use of law and regulation, in the name of public welfare. The epidemiology of the future has set its sights on the control and transformation of individual behavior by use of similar strategies, in the name of public health.

As recently as forty years ago, doctors endorsed cigaret smoking as a benign, even stress-reducing, behavior. With the notable exception of the United States, the rest of the world continues to consume cigarets with an ardor that shows little sign of weakening in response to the new American—indeed, federal—moral stance toward smoking. The rise of modern epidemiological thinking about diseases of affluence owes much to the struggle over social policy toward smoking. Doctors, insurance companies, and consumers have been generally persuaded that statistical analyses of future risk can and should serve as the basis for the formulation of policy in matters pertaining to health and welfare.

American liberalism, as distinct from libertarianism, has long embraced the principle that the state should support efforts to protect citizens from undue risks, including those risks assumed by individuals voluntarily. In previous centuries, the public health was protected in the belief that risk—for example, of contagion—could be reduced by legally mandating who could leave or enter a home. The quarantine was a community- or state-enforced action that took for granted the legal and moral sufficiency of acting on the basis of probabilities to prevent the spread of disease. Harm averted was welfare maintained. The logical and historical extension of this policy has led to the largely noncontroversial requirements that children should be immunized against measles and other childhood diseases before being permitted to enter school.[12]

Far greater controversy remains over smoking because the "public" health is now defined by probability studies that identify individuals whose actions pose no immediate threat to their own or anyone else's life or health. Rather, these studies establish statistical links between certain behaviors and the future onset of disease. Epidemiological studies of smoking and disease have shown, within the canons of statistical reasoning, that the use of cigarets and any tobacco product will inevitably cause disease in a significant percentage of the people who use them. These studies confirm that hundreds of thousands of people suffer disease and death each year from smoking, and that nothing short of reducing the number of people who smoke will reduce the incidence of morbidity and mortality associated with the behavior. Yet probability theory allows that the diseases may or may not appear at some undetermined time in the future, and that they will certainly not appear in all persons regardless of how they behave.
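This tension can be expressed in the standard epidemiological measures; the incidence figures below are hypothetical and chosen only for illustration. If the annual incidence of a smoking-related disease were 1.5 per 1,000 among smokers and 0.1 per 1,000 among nonsmokers, the relative risk, and the fraction of smokers' cases attributable to smoking, would be

\[
RR = \frac{I_{\text{smokers}}}{I_{\text{nonsmokers}}} = \frac{1.5}{0.1} = 15,
\qquad
AF_{\text{exposed}} = \frac{RR - 1}{RR} = \frac{14}{15} \approx 0.93 .
\]

Aggregated over tens of millions of smokers, such figures imply tens of thousands of excess cases each year from this one disease alone; yet in any single year 998.5 of every 1,000 smokers in this example develop no disease at all. The certainty is at the level of the population, not the individual.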

The medical profession can play only a minor part in reducing probabilities of the onset of future disease, given the specialized, and thus more limited, skills at its command. Its power in regard to smoking is now mostly rhetorical—often called "preventive" medicine. Patients are persuaded that giving up smoking is the best bet against a whole host of afflictions that are associated with it. Beyond the capacities of trained doctors, a dogma of sorts is in the social process of creation. Because the more magical and immediate skills of doctors cannot fend off the ghosts of new probabilities, an ancient school of thought has reasserted itself, claiming that the best cure is self-discovered. But this new self-discovery is neither theocratic nor Socratic but rather sociocratic: it concerns less the meaning of individual mortality and more the fate of a society, indeed, of an entire planet.

The doctor is necessary, but no longer sufficient, for saving lives. A more ominous hope rises out of the epidemiological project, requiring the full backing of the state in all of its agencies of influence—legal, regulatory, and rhetorical. Contemporary society stands on the verge of a full-scale institutionalization of the belief that no perfection of the public welfare can be achieved without the elimination of most, if not all, of the diseases of affluence. A significant question, taken up later in this chapter, is whether affluence, in all its supposedly costly and wasteful forms, must be eliminated along with the diseases it causes. The new dogma of health as self-discovery encompasses the agony of lung cancer as well as the horror of nuclear war.

The Office of the Surgeon General has about it totalitarian designs of an entirely new sort, more cultural than political, and perhaps not fully recognizable for what they augur. The liberal impulse in its postmodern guise is a profoundly satisfying one and is undeniably rooted in a concept of health that has served to protect the public in positive and substantial ways. A new kind of cultural politics is emerging as insurance companies have recognized the financial implications of actuarial calculations based on studies of behaviors deemed risky by evidence gathered epidemiologically. Automobile insurers have operated on a similar principle for many years, charging higher rates on automobiles classified as "sports cars" and driven by persons under a certain age. The principle has found its way into newer, more controversial applications. Acquired immune deficiency syndrome (AIDS), for example, has opened a new chapter in the history of discriminatory insurance practices.[13]

Beyond the politics of epidemiology, a cultural transformation in the meaning of health has made epidemiology the true policy science of modernity, for, in effect, no behavior is immune from being designated as health promoting or health defeating, in a personal or social sense. The modern dogma of health also contains its own lessons in new styles of conformity, the most obvious one of which is exemplified in the learned revulsion toward smoke, especially among born-again, reformed, ex-smokers.

The argument about the cultural impact of epidemiology can now be summarized. In an effort to map out the causal nature of disease, the epidemiological study of behavior has superseded the epidemiological study of disease organisms. The new vectors of disease are defined in terms of specific human behaviors, for which individuals can be held accountable. The logic of this accountability invites the call for ever more refined correlations that amount to a cultural recycling of older styles of moral conformity that dictated, for example, what one should eat and drink. The return of sumptuary norms in a culture of radical individualism is not likely to be recognized as totalitarian, but the establishment of such norms is one of the most significant consequences of the institutionalization of an ethic of public health based on correlation studies. All types of behavior and consumption are subject to this ethic, though its imposition will not succeed solely at the level of the individual. Instead, by means of a dedicated force of governmental and voluntary regulators of behavior and consumption, the institutional producers of bad health will be monitored, transformed, and, if necessary, eliminated.

The quest for better health promises a kind of certainty about well-being that parodies the old-time religion of certainty about the next life. The jargon of the authenticity of health owes an outstanding debt to science and epidemiology, which have brought probability and health policy together in ways that may render older styles of medical authority obsolete. As the focus on health maintenance has changed, chronic disease has been identified as the central concern of future health policy.[14] In the United States, as the majority of citizens continues to grow older, the links between behavior and chronic illness are bound to be made more publicly explicit and culturally compelling.

The place of the physician and of the physician-patient relation in the new discourse on health is neither obvious nor secure. Asymmetry remains but is now mediated more than ever by a class-bound system of medical insurance and by the sociocratically based mandates of epidemiology. Health may not be the exclusive domain of medicine, but medicine is still held accountable for diagnosing and treating disease. The popular image of doctoring has been undermined not least of all by the overpowering nature of diagnosis. The demand for greater knowledge about how illness comes into being has led to the paradoxical result of greater uncertainty about what to do with this new knowledge. The paradox works against preserving any residues of charismatic authority in the physician. This authority was first revitalized by scientific understanding in the last century and is now being displaced by diagnostic and prognostic technologies that cannot possibly give reassurance in the same forms once entrusted to the deeply personal authority of the medical practitioner. The modern uncertainties about health have invited the present demands for reassurance that can be heard in various social and intellectual movements beyond medicine.

The Self-Help Movements

During the last twenty-five years, a variety of voluntary associations have appeared in American society in direct response to the idea that practitioners of conventional medicine should not be left to decide alone what is in the best interests of patients. Although "alternative" forms of medicine are sometimes heralded as successors to the institutionally dominant and conventional form, these alternatives have remained a luxury for people seeking some release from the scientific and cultural presuppositions of Western medicine. Homeopathy, chiropractic, and other competitors in the Western tradition have re-emerged in the late twentieth century with some measure of respectability because the conventional, allopathic market is secure.

Laetrile and faith healing can easily be tested within the scientific paradigm of medical practice, can be shown to be ineffective epidemiologically, and can be tolerated only insofar as the illnesses they are putatively treating are not amenable to standard therapies. Acupuncture and other imported medical practices from the East are also tolerated as first or last resorts, while all other resorts are defined and controlled by scientific medicine. Whether for cancer or backache, the modern patient seeks cure in an environment that offers numerous alternatives to the standard therapies. The meaning of "cure" is in flux and always has been, from the religious cure of souls to Freud's psychoanalysis. Conventional medicine still has social legitimacy because it has demonstrably succeeded where others have failed. The American expectation that no disease should be beyond cure has been reinforced by those successes even as new challenges appear on the horizon.

The movements that have been most influential in supporting and challenging professional dominance are indigenous to American society and the American present. Around each medical specialty, for example, various types of groups have arisen whose expressed aim is to represent the interests of those under the care of that specialty. The American Heart Association is an example of an early support movement for cardiology. Without undermining the authority of the cardiologist, the Association has utilized epidemiological data to demonstrate that the health of the human heart is linked to exercise, diet, and other behavioral factors. Controversy is growing about the ways in which the Association publicizes its claims about healthful behavior. Epidemiological studies are sometimes conflicting: how much exercise is "necessary" and what kinds of foods are "healthful" are questions not yet fully settled. In contrast, smoking is an old enemy, as is fatty food. The devils are clear, even if the angels are not, and cardiology owes a great deal to the publicity and research machines that have made heart disease the pre-eminent disease of affluence.[15]

Other medical specialties have their allied associations. The point here is that, working together, specialists and voluntary associations have been able to justify their mutual recognition and livelihood to a larger public, which has then been asked to donate money and effort to the cause of fighting disease. Marathons to raise money for particular diseases are part of the same social process. In some cases, where medicine has failed to provide satisfactory therapy, for whatever reason, an extraordinary number of self-help groups have responded, continuing the long-standing American tradition, noted by Tocqueville, of voluntary effort and association. Support groups for the deaf and blind have long histories, but an aging population produces degrees of loss of hearing and eyesight that are not the immediate concerns of practitioners who are trained to diagnose and treat disease rather than to respond to the consequences of aging on health and the provision of medical services. The demographic realities of an aging American population suggest that resources will naturally begin to shift away from specialized diagnoses and therapies of "cure" toward new responses of "care" if the medical profession intends to retain control over the largest and most affluent market of health consumers.

That shift has already begun, as is evident in the reluctance of pharmaceutical companies to devote extensive venture capital to the discovery of cures for "orphan" diseases, that is, diseases that afflict a statistically and politically insignificant number of individuals. Diagnosis, as already noted, has exceeded the ability to treat. Medical power creates new inequities and injustices because knowledge exceeds the resources to apply it. Organ transplantation is an especially poignant example in this regard. Scarcity of medical resources is paralleled by a "scarcity" of donors. In the search for new forms of medical treatment, new demands are being made on Americans to broaden their sensibilities about what altruism itself means. Bone marrow, kidneys, hearts, and livers have become elements of a moral calculus not yet fully developed. A lurching progress has ensued, leaving many people uncertain how to respond. Advice about how to establish priorities for the use of all these resources has come most clearly and often from two politically unrelated but sociologically influential movements, feminism and bioethics.

The impact of feminism on health issues has been most strongly felt in the medical specialty of obstetrics and gynecology. The abortion issue and the broad range of health issues related to women's reproductive lives have been central subjects of feminist concern and political action. At the same time, bioethics has sought to clarify the medical responsibilities over life and death in the hope of improving both medical practice and health policy. One of the earliest studies of bioethical significance by a leading figure in this movement was, not coincidentally, on abortion.[16] Feminism has been, in its most recent wave, a politically powerful "self-help" movement that has challenged the autonomy of medical judgment. Bioethics has also attempted to introduce its insights into medical practice in the form of "clinical ethics," though, unlike the feminist challenge, its intellectual perspective has been institutionalized in medical schools across the country. Coming from different sides of the physician-patient relation, these two movements reflect the diminishing importance of professional self-determination.

Epidemiologically, childbirth is safer for mother and child in the United States today than it was a century ago. Yet the medical profession, in particular obstetrics and gynecology, has been under sustained pressure to redefine the roles of physician and patient in this particular encounter. The professionalization of obstetrics during the last two hundred years marginalized other forms of birthing (for example, midwifery) in such a way that contemporary countermovements to obstetrical, professional dominance have sought a reinstatement of the values, if not of all the practices, that preceded professionalization.

Demarginalization and accommodation are two very different social processes, and "natural" childbirth belies a disingenuous affirmation of older values that would hardly be so popular if it were not for the fact that medicine waits in the wings on the chance (indeed, likelihood) that emergency measures have to be taken. The medical environment in which birth occurs has been transformed, but not because the physician has been somehow genuinely marginalized, and the midwife made central, to the event. The process might be thought of in bureaucratic terms as a decentralization of responsibility, in which patient, family, nurse, nurse-midwife, and physician communicate with one another, even though the
physician hovers just out of sight, being on call to respond to any complications. Emergency imposes asymmetry with startling speed. Here is an example of too much caution and control giving way to a reinstatement of the nonmedical meanings of the event. If improvement has occurred, it has little or nothing to do with what counts as scientific or technical knowledge. On the contrary, the birth event can be temporarily demedicalized only because the medical ability to respond to complications is so sophisticated. The impact of feminism in this regard has been socially, rather than scientifically, progressive.

Medical innovation in the treatment of women's health has resulted in a number of classic instances of iatrogenically induced harm. The oral anovulant contraceptive "pill" is a case study in the hidden history of medical experimentation on women's bodies.[17] Despite the fact that clinical studies for this drug were conducted, mass distribution produced a scientific awareness of its effects that could not have been obtained in any other way, or so it seems. The use of Diethylstilbestrol (DES) to prevent miscarriages represents another tragic instance of discovering the untoward effects of treatment long after mass distribution has occurred, in this case across generations.[18] The use of both these drugs implicates physicians, but not in any direct malpractice sense. Pharmaceutical firms have been held liable for the distribution of certain prescribed contraceptives, such as the Dalkon Shield, but these legal settlements cannot begin to address the underwriting process of clinical and epidemiological investigation that in the attempt to "control" human reproduction instead has sometimes harmed it.

Just as research on orphan drugs has been slowed, so, too, has research on and development of new forms of mass-distributed birth control. Epidemiology is a double-edged sword, revealing both how few may be helped by expensive treatments and how many may be harmed by inexpensive ones. Yet, primarily because of feminist activism, litigation (against manufacturers and distributors implicated in the mass-produced harm to women's reproduction) has slowed the search for the perfect, that is, harmless, form of birth control. Courts might someday limit the kinds of suits permitted for "statistical" victims of drugs that do not harm the vast majority of users. In such a world, all kinds of birth control innovations might appear in the medical marketplace. The acceptance of inevitable harms when weighed against disproportionately larger benefits would signify a cultural hypocrisy that is plausible if the consumers of these therapies believe, too, that the benefits outweigh the risks. "Informed consent" is the professional safeguard designed to protect everyone but the patient from what cannot be known until the risk is taken.

The reduction of risk can probably not be accomplished without
harm to someone. William Ray Arney has described the negotiations between women and doctors over the birth event as a series of historical transformations in how the health of mother and child has been guaranteed.[19] Beyond the problems of professional management at the time of birth, a more fundamental transformation in knowledge about birth, from conception to "bonding," has provided obstetrical science with an exceedingly broad mandate to propose and demonstrate its guarantees. Like the epidemiological project, the obstetrical project has enormous implications for the cultural understanding of disease, well-being, and, most importantly, normalcy.

The evolution of the obstetrical project can be sketched as a series of attempts to monitor the outcome of birth. At the level of bodily function, women's bodies are personally and professionally monitored in order to maximize conception. The range of artificial maximizations includes a variety of drug therapies and in vitro fertilization. Once conception is achieved, professional monitoring assumes an even greater role in identifying, analyzing, and judging the progress of fetal growth through ultrasound scanning, prenatal diagnosis (chorionic villus sampling and amniocentesis), a standard range of tests to determine the physical health of the fetus, and fetal surgery. These technological forms of professional monitoring have been given a substantial boost in their sophisticated uses by the science of genetics, which offers the ability to predict the lifelong fate of the fetus. The diagnostic strategy used to foretell the fate of the individual before birth coincides with the epidemiological strategy used to foretell the future of the individual after birth. Taken together, these two strategies further the social processes of rationalization, disenchantment, and attenuation of the meaning of individuality; leaving nothing to chance, the individual enters into the world and acts in the world entirely at the design of specific others.[20]

Once again, as in the case of health problems defined by epidemiology, the role of doctors in the obstetrical project is subordinated to the predictive role of knowledge. Against the backdrop of change in the asymmetrical character of the doctor-patient relation, a more substantial change in what is produced out of that relation has occurred. The liberal-feminist focus on interaction and communication has concealed what modern obstetrical science has sought to guarantee for some time, in addition to the preservation of the lives and health of women. Professional dominance has shifted its focus to life before birth and, in the course of doing so, has made birthing more "natural" while offering, because of genetics, more ways to guarantee that what is born will be "normal," according to whatever sociocratic standard is imposed. The achievement of normalcy will emerge in the twenty-first century as the central goal of the obstetrical project, but notice will hardly be given to what appears to
be little more than the end result of a series of tests to determine the "health" of the unborn, perhaps eventually in the form of a blood test administered to women who are no more than a few weeks pregnant.

If the state could be persuaded, the principles used to institutionalize a federal moral stance toward smoking could be, and no doubt will be, applied to bring the new science of the unborn to the conscious attention of health policymakers. Carrier screening, genetic screening, and postnatal screening are becoming effective agents of a new social control that discourages the birth or nurturing of any children whose "life chances" are sufficiently dim or who require extraordinary amounts of money to maintain. Advocates of this control endorse the strategy that "abnormal" individuality should be consciously and deliberately "prevented."

The politics of choice in the abortion issue have remained unconnected to the science of prevention that has focused a considerable amount of its attention on the status and health of the unborn.[21] After natural childbirth comes the normal child. The first client class of the obstetrical project has been wealthy, white, educated, and willing to comply with the mandates of the Surgeon General. Here is a client class, fully insured, who embody the sociocratic ideology of normalcy—a demand for long life, if not immortality. This ideology has implications for the public understanding of disability and the private and public responsibilities that are encouraged to accommodate all manner of human differences.

A few feminist intellectuals have recognized the dilemmas raised by the institutionalization of the obstetrical project.[22] The reshaping of expectations about how human reproduction is accomplished (that is, in terms of perfectibility) has given medicine extraordinary power to shape future generations. The feminist challenge has been to question how that power is used and by whom. But it is probably already too late in the history of cultural individualism to disestablish the eugenics of choice as it has developed out of modern obstetrics. Some residual revulsion is still expressed about aborting females when males are desired, but the statistical significance of such desire, at least in the United States, is so small that the principle of choice is unlikely to be compromised. In other words, the confidential relation between doctor and patient is making over the world in an unprecedented way—quietly, effectively, and with consequences that are destined to exacerbate tensions between those who have bad luck and those who have medicine.

The doctor is no longer a univocal figure. Malpractice represents, of course, incompetence in doctors, but that is only its operative meaning. Its illuminative or cultural meaning points to an increased readiness to make accusations about who is responsible for bad luck when bad luck cannot be clearly distinguished from medical practice. The obstetrical project in its scientific turn toward "guaranteeing" the outcome of each
pregnancy has invited deep suspicions on both sides of the physician-patient relation about how certain any guarantee can ever be. Genetic counseling, for example, is the vanguard therapy of twenty-first century obstetrical medicine. The doctor, in conjunction with genetic counselors, administers information about the probabilities of producing genetically ascribed illnesses in one's offspring. The choices thus far, given such information, have been limited. Such "information therapy" will eventually give way to the direct treatment of genetic disorders. Already "gene therapy" trials are being conducted on patients with disorders that have not responded to other forms of medical treatment. In this progress, probabilities give way to new certainties in the approaches to disease.

Yet uncertainty remains. The receding presence of the doctor who reassures in the old belief that no one can know anything for certain, at least not without fail, has left in its wake a kind of resignation that a doctor's competence is the only thing worth valuing. The pastoral role of the physician is undermined by the demand that the "right" doctor should always be the one whose competence results in the outcome one desires. "Defensive" medicine is thus the legal strategy to dissuade patients from doubting the competence, if not giving up the demand.

If feminism has challenged more than resolved the meaning of progress in late-twentieth-century medicine, bioethics has aimed to achieve in the name of the whole society the role of mediator and advisor to the medical profession and the government about possible resolutions. Bioethics has gained prestige and influence over the years in large part because it maps a terrain left virtually unexplored and thus undisturbed by earlier generations of philosophers and theologians. Throughout history, the religious and philosophical observers of medicine were not unaware of the misuses of doctoring. But they were confident that a "profession" of medicine retained its links to a culture of character and conduct that instructed, among other things, what could not be done in the name of medicine and remain medicine. Progress thus conformed to moral as well as technical limits of what constituted proper medical practice. The medical profession now regularly tests the limits of its inherited moral understandings of the uses of scientific and technical advance, and bioethics has captured the divided imagination of a culture that no longer easily recognizes how these limits function to protect both physician and patient. As the contemporary voice of reason, bioethics speaks ever more loudly and often about what those limits are and what the tests of them should be.[23]

Of course, a limit so recognized is bound to be tested, and this is why bioethics cannot address its complicity in the very challenge to those limits it supposedly sets out to establish. Not since the advent of psychoanalysis has an intellectual movement been so transformative in character and
yet so decidedly in search of the truth. Unlike Freud's psychoanalytic movement, bioethics has deliberately sought to insinuate itself into forums whose goals are to clarify the relation between state and medicine in those cases where both face such dilemmas as abortion, euthanasia, or any other one that blurs the line between the technically possible and the morally dubious. In its didactic mode, bioethics is highly influential. Governmental commissions are the most well known and the most visible of these didactic forums, but hundreds of "seminars" and "workshops" contribute to the perception that bioethics has set its sights well beyond the sphere of intellectual and reflective debate. The pragmatism of bioethics makes every theoretical exercise an opportunity to settle, so it seems, once and for all, matters that have yet to yield fully to bureaucratic control.

Bioethics has not required highly trained experts to conduct its investigations. As an "applied" discipline, it has attracted numerous academics who have sought some greater say in what the public responses should be toward medical innovation. Like epidemiology, bioethics is dedicated to a kind of impartiality that lifts it above the fray of politics. It has superseded sociology which, for a time, offered its own empirically derived wisdom about the conduct of medicine and other major institutions. The only part of sociological investigation that remains influential is expressed methodologically in the framework of epidemiology.

Unlike the cultural impact of feminism, the bioethical challenge to medicine suggests a further inclination toward specialization in which the intellectual division of labor requires a kind of metadoctoring. The bioethicist is doctor to the public, "on call" much as the physician used to be, ready to answer questions from those who are curious and concerned and who are just as likely to be news reporters as patients. The persona of bioethicist is as much therapist as "philosopher king," offering to an anxious public a means to formulate ways to think about medicine and its moral dilemmas, but not necessarily what to think about them. Occasionally, bioethicists do call for global, rather than incremental, assessments of the progress of medicine.[24] Although bioethics has succeeded in dominating the present conversation about medical morality, feminism shows more what is at stake in that morality. Both movements will continue to influence how doctors themselves think about what they do.

Environmental Movements

Affluence in the advanced societies has brought with it an increased recognition and understanding of the effects of production and consumption on health. At times, depending on what aspect of environmentalism one is addressing, affluence and its effects on health and the environment are said to be tied to the forces of capitalist production and consumption. Environmental damage in socialist countries has alerted most observers to the possibility that industrialization and the demands of population size have contributed to problems of health and welfare in ways that cannot be ideologically assigned to one or another economic system.

The versions of apocalypse that environmentalism brings to conference tables and heads of state are not illusory. The idea of the greenhouse effect has recently developed into a kind of unifying theory that explains how the depletion of ozone in the atmosphere, the burning of fossil fuels, and the explosion of nuclear weapons each contributes to a scenario that would at best substantially reduce the standard of living or possibly destroy life as we know it. Ecosystems, food chains, and the politics of development are said to be intimately linked. If sufficient preoccupation with the kinds of food consumed did not already exist, environmentalists have added the sometimes terrifying claims that the ways in which foods are produced promote disease, in particular, cancer. The public scares about Alar on apples and the dozens of other less well publicized "contaminations" have affected consumer confidence in the name of health.[25]

The cataloguing of disease-causing substances diverts attention from the epidemiology of the behavioral causes of disease. Unlike those health movements more closely allied with medicine and devoted to reforming the individual, the environmental movements have consciously set their sights on regulating industry and government. The growing interest in regulating industry for public health reasons finds parallels in research conducted to determine the healthfulness of work environments themselves. Occupational health movements target the workplace as another environment that under certain conditions produces stress, reduces productivity, and eventually causes disease, including and especially heart disease.[26]

Similarly, environmental movements also prevail upon industry and government to preserve the natural environment and all that is in it besides what is created by human beings. Greenpeace opposes certain types of fishing methods and nuclear submarines; voluntary activists and state authorities together oppose nuclear power; celebrity opponents of all kinds of harmful things (including harm to animals) have popularized the environmental agenda. Just as medicine has responded to the demands of its consumers, so, too, have many manufacturers of foods and goods. The result is a kind of cultural conspiracy to reduce to zero the costs—in terms of disease and harm—of living well. Among those in search of health and active longevity are some who design to eliminate certain forms of manufacture and consumption altogether.

Since the creation of nuclear weapons, scientists have maintained an ambivalent relation toward those in political and military power who determine how many such weapons are produced and where they are located. The clock with hands close to midnight has come to symbolize both ambivalence and a distinct incapacity to do anything about resolving it.[27] The symbol of time, or not enough of it, shapes an ambivalence about mortality that is felt individually and collectively. The power of science is so great that no single person or state can be trusted to determine its ultimate and fatal uses. And so, given the increased and fearful recognition of what the world and planet would be like after a nuclear war, some physicians have led a small but noticeable movement to oppose the production of all nuclear weapons. Physicians for Social Responsibility devotes its resources to informing the public about the medical consequences of nuclear war. Their strategy, supported and endorsed by schools of public health across the country, has been to demonstrate why medicine in its presently organized form would be useless against the injuries, illnesses, and hopelessness resulting from wide-scale nuclear conflagration.

The everyday practice of medicine is, indeed, some distance from the strategies and goals of physicians who actively oppose nuclear war. The idea that the calling of medicine includes such strategies and goals is not entirely fantastic. Great physicians of the past have voiced their concerns about human suffering and war. What is different is that these strategies are construed as defining the very possibility for medicine and its practice. As in the case of bioethics, the preaching function is a powerful source of self-legitimation in the lives of those who seek ultimate cures for ultimate diseases. The more modest endeavors of practicing physicians may not be the subject of modern heroism, but they are fraught with the same anxieties and uncertainties that characterize the larger politics of environmentalism. The end of medicine, in this final twist, is not unlike its purported demise in the face of self-help movements. That those trained in medicine would actively lobby, however rhetorically, for their own irrelevance speaks to an extensive division of labor in the medical-industrial-public health complex.

With the rise of environmental movements, consumption has become a form of political behavior; the foods we eat, the products we use, the work we do, have all come under the increased scrutiny of the environmental agenda. Health is no longer the principal responsibility of medicine, and physicians actively working to eliminate nuclear weapons symbolize how encompassing the health agenda has become. Environmentalism, in its elective affinities with the rich and famous, has linked individual health with cosmic destiny. The famous voice, as distinct from the voice of institutional authority, speaks in a language that effortlessly
inspires fear and trembling. Under different circumstances, for example in the confidential relation of physician and patient, such inspirational language might sound entirely inappropriate or unreasonable. The movie star reciting the latest update of politically correct habits of consumption, condemning this or that form of manufacture, and "speaking for" a safer world, is the "carrier" of the new power of environmentalism to make health a moral ideal and a political project. The greening of the world will, no doubt, list health as a high priority. Whether governments, industries, and individuals will actively conform to the future agendas of health and safety remains to be seen.

The End of Medicine?

This chapter has proposed to review those forces within and beyond medicine that have deeply altered the public image of doctoring in American society. If a severe shortage of physician authority exists in matters pertaining to the health and welfare of the nation, the causes are multifarious. First, modern attitudes toward health are impossible to understand without assessing the influence of epidemiology on the progress of medicine. Second, the same epidemiological methods applied to diagnosis and treatment are applicable to an understanding of health behavior and of the harmful effects of environmental contamination. The various movements that have taken shape in response to the voluminous knowledge about risk are central to the politics of all advanced societies. Their influence has yet to be fully gauged politically, but can be appraised culturally.

The class interests of the worried well will determine the future direction of the medical profession, but whether toward more care or more cure is far from settled. As the economic interests of the medical profession are subordinated to corporate, insurance, and governmental interests, a devaluation in the cultural meaning of doctoring will likely continue. The health maintenance organization will devote its resources to "preventive" rather than "defensive" medicine. Yet the worried well will not be fully appeased by either strategy so long as the public anxiety about individual and collective health is so strong. Unless the medical profession abandons its historic responsibility to heal, this anxiety will never disappear.

Since competing strategies for reducing such anxiety cannot be exclusively assigned to either the preventive or defensive side, the figure of the physician will not give up its authority entirely. It cannot. Even physicians who advocate euthanasia cannot abolish the authority invested in that figure of the healer for whom death is both enigmatic and inevitable. Death, as it can be understood in the vocation of medicine, can no
more be actively sought than it can be denied by the use of technology to achieve immortality.[28] There are many kinds of death in life, and no government commission will eliminate the perfectly frightening fact that the pronouncement of death is not and cannot be fully in the hands of human beings. An acknowledgment of the limits of medicine is one key to its preservation and is found in the physician's ethic of responsibility to offer those calming hopes and finalities that make our lives together possible.

Fifteen—
Unlikely Alliances:
The Changing Contours of American Religious Faith

James Davison Hunter and John Steadman Rice

• On March 9, 1986, a coalition of members of the Presbyterian Church, the United Church of Christ, the Unitarian Universalist Association, the Reform and Conservative branches of Judaism, the Episcopal Church, the Methodist Church, and numerous other denominations, all committed to pro-choice policy, led a line of 100,000 marchers in Washington, D.C., in the National March for Women's Lives, the largest women's rights demonstration in American history.

• Four months later, on July 25, 1986, approximately thirty prominent leaders, representing a broad-based coalition of Mormons, conservative Protestants, Catholics, Jews, and Greek Orthodox, met at the residence of John Cardinal O'Connor, Archbishop of New York, to address the problem of child and adult pornography.

• At the Citicorp annual shareholders meeting in New York in April 1987, a group of ministers, priests, and rabbis, fully bedecked in clerical garments, stood outside the building singing songs and praying for the dead and detained in South Africa, in protest against the company's investment policies.

• In early May 1988, Operation Rescue brought orthodox activists of every religious stripe together at an abortion clinic in midtown Manhattan to protest the practice of abortion through civil disobedience. Eight hundred people were arrested that day, including four Orthodox rabbis, eleven Evangelical pastors, one Catholic bishop, two monsignors, and four nuns. Similar "rescues" were performed in Long Island, Philadelphia, and Atlanta.

This chapter draws on the book Culture Wars: The Struggle to Define America , by James Davison Hunter (forthcoming).

In a previous century and, indeed, earlier in the present century, events such as these would have been unthinkable. What, exactly, is going on when liberal Protestants, liberal Catholics, and Reform Jews speak out in one voice for progressive abortion legislation? Or when Evangelical Protestants, traditionalist Catholics, Orthodox Jews, and even Moonies become allies in the war against pornography?

Although these alliances are historically "unnatural," they have become increasingly commonplace in this last decade of the twentieth century. Indeed, on a broad spectrum of social issues, religious opinion now reflects not only divisions within each faith community but alliances across denominational lines. These vignettes point to a fundamental shift in American religious pluralism. Namely, the politically significant divisions in American religion are no longer those that divide Protestants and Catholics or Christians and Jews, but those that divide the "orthodox" and "progressive" within each religious tradition.

The divisions between orthodox and progressive within the various religious traditions in America did not just spontaneously emerge in the 1980s, but developed out of a series of events and circumstances that span the course of a century. The question is, then, how precisely did this realignment come to be?

Early Faultlines

The origins of the present realignment can be traced back at least to the late nineteenth century in a society at the threshold of world economic and political dominance. All three of the major religious traditions in America were struggling to cope with the intellectual and social dilemmas of contemporary life: labor and public health problems, increased crime, poverty and indigence, deep ethnic distrust, political instability, the weakening of the credibility of religious faith, and so on. Each denomination, of course, forged its own responses to these changes, but at a broader level, the breaks within each community were between those who longed to preserve the ancient truths and traditional way of life and those who aspired to forge new moral ideals appropriate to novel social circumstances. First, consider the emergence of the progressivist appeal.

In Protestantism, for example, the Social Gospel movement espoused active institutional measures of redress for these institutionally based social ills. Rejecting the individualistic view that sin and personal moral failure were to blame for human hardship, the Social Gospel traced many of the problems of modernity to the brutal power of contemporary social and economic institutions; and it was here, its advocates contended, that the modern church could most effectively serve the cause of Christianity.

By the 1890s an enormous literature advocating the tenets of the Social Gospel was being published and distributed. Prominent in this work was the manifesto, published in 1908, "The Social Creed of the Churches." Translating these tenets into a programmatic agenda were new organizations, such as the Brotherhood of the Kingdom, the Department of Church and Labor of the Presbyterian Church's Board of Home Missions, the Methodist Federation for Social Service, and the Commission on the Church and Social Service.

Within Catholicism, liberal or progressivist initiatives came in the 1890s, primarily in the form of new attitudes and policies articulated by particular bishops in the American hierarchy. In part they were associated with the rights of labor, particularly in the support for the "Knights of Labor," a Catholic precursor to the labor union. In part, they were associated with the desire to cooperate with Protestants in the realm of education. But the movement that came to embody these progressive ideas more prominently than any other was the Americanist movement.

At the heart of the Americanist movement in the Catholic hierarchy was the desire to integrate the American Catholic Church into the mainstream of modern American society. To do this the Americanists sought to phase out what they considered inessential Romanist traditions and to present the Catholic faith positively to a Protestant society. To this end, they endeavored to eliminate the foreign cast of the church by Americanizing the immigrant population (through language and custom) as quickly as possible, to celebrate and promote the American principles of religious liberty and the separation of Church and State, and to participate in fostering American-style democracy globally. By the mid-1890s the Americanist movement acquired a more universal appeal through its espousal of progressive biblical, theological, and historical scholarship emanating from Europe. The coupling of these two was based on mutual affinities: the Americanists' praise of religious liberty complemented the European modernists' advocacy of subjectivity in theology; the former's praise of democracy and scientific progress fit well with the latter's program to reconcile the Church with the modern age.[1] The modernist movement within American Catholic scholarship was fairly small at the beginning of the twentieth century. Yet whether European or American, the progressive theology of modernism was associated with, and found support in, the Americanist movement, and the movement foreshadowed contemporary developments.

As with the Catholics, accommodation to American life and purpose was perhaps the dominant inspiration behind progressivist Jewish thought. To that end, the worship service was shortened, the vernacular was introduced, the use of the organ was sanctioned, and the segregation of men and women in all aspects of the worship service was ended. More important than these modifications, though, were the theological accommodations. In this there was a decisive move away from traditional belief and ritual observance toward ethical idealism.

These theological alterations first became crystallized in a series of resolutions drawn up in Philadelphia in 1869 and then, more formally, in the Pittsburgh Platform of 1885. In these documents, progressives maintained that a Rabbinical Judaism based on the Law and Tradition had forever lost its grip on the modern Jew. The only viable course, therefore, was to reinterpret the meaning of Judaism in light of new historical developments. As such the entire range of traditional rabbinic beliefs and practices were abandoned. The first to be rejected was the traditional conviction that the Torah, or Jewish law, was unalterable—that it was somehow sufficient for the religious needs of the Jewish people at all times and places. From this the doctrine of the bodily resurrection was declared to have "no religious foundation," as were the concepts of Gehenna and Eden (hell and paradise). Repudiated as well were the laws regulating dress, diet, and purification and the excessive ritualism of traditional worship. And not least, the messianic hope of a restored Jewish state under a son of David was also completely disavowed.

In its stead was the affirmation of the universalism of Hebraic ethical principles—that Judaism was the highest conception of the "God-idea." Having abandoned any conception of Jewish nationalism, the Reform movement now saw the mission of Israel as bringing the ethical ideals of the Jewish tradition to the rest of the world. Remarkably, given the historical context in which it was made, the document even extended the hand of ecumenical cooperation to Christianity and Islam. As "daughter religions of Judaism" they were welcome as partners in Judaism's mission of spreading "monotheistic and moral truth." In large measure, the ethical truths they desired to proclaim could be translated into a language that harmonized with the Protestant Social Gospel. As stated in Principle I of the Pittsburgh manifesto, Reform Jews would commit themselves "to regulate the relations between rich and poor" and to help solve the "problem presented by the contrasts and evils of the present organization of society."

As if to leave absolutely no doubt about the rightness of their cause, the authors of the Pittsburgh Platform threw down the ultimate challenge to their nonprogressive rabbinical counterparts:

We can see no good reason why we should ogle you, allow you to act as a brake to the wheel of progress, and confirm you in your pretensions. You do not represent the ideas and sentiments of the American Jews, [in] this phase upon which Judaism entered in this country, you are an anachronism, strangers in this country, and to your own brethren. You represent
yourselves, together with a past age and a foreign land. We must proceed without you to perform our duties to God, and our country, and our religion, for WE are the orthodox Jews in America.[2]

The boldness and enthusiasm (even if not the audacity) expressed by these Reform rabbis for their campaign of change in Judaism were remarkable, but they were not isolated. They were in large measure shared by progressives in Protestantism and Catholicism as well.

The orthodox responses to these progressivist challenges varied within the main religious traditions, but the common thread among them was the conviction that such revisionism posed a serious threat to the sacred authority upon which each faith rested. Thus, the orthodox response within Protestantism called for a return to Scripture as the source of moral authority. By demonstrating that the Bible was the Word of God, inerrant in all of its teachings, they felt confident that they would have an adequate foundation to reject heresy and to prevent the ordinary churchgoer from straying into impiety and irreligion. Reflecting this spirit, throughout the late nineteenth and early twentieth century the defenders of Protestant tradition established a wealth of Bible colleges, such as the Moody Bible Institute (1886), the Bible Institute of Los Angeles (1913), St. Paul Bible College (1916), Faith Baptist Bible College (1921), Columbia Bible College (1923). (The Moody Bible Institute was originally founded for the purpose of urban ministry, but within ten to fifteen years it became caught up in the fundamentalist reaction.) Also created were a variety of Bible conferences—such as the Niagara Bible Conferences, the American Bible and Prophetic Conference, the Northfield Conferences, the Old Point Comfort Bible Conference, and the Seaside Bible Conference—devoted to the careful study and affirmation of scriptural principles.

In a similar vein, in Catholicism, the Americanist movement was seen as an unconscionable challenge to the authority of the Holy See. By the end of January 1899, Pope Leo XIII voiced his opinion in the form of an apostolic letter, Testem Benevolentiae , and though he was not totally condemnatory, his censure was still broad and effective. Through the eyes of the Vatican, the Americanist idea of presenting the truths of the Catholic Church "positively" in a Protestant context was seen as the watering down of doctrine, their praise of religious liberty was perceived as the praise of religious subjectivism, and their desire to accommodate the Church to American democratic institutions (the separation of Church and State) was viewed as a desire to surrender the temporal powers of the papacy—to introduce democracy into the Church.

The orthodox response in Judaism also asserted the inviolability of the tradition to which the present generation were heirs. All traditional
Jews interpreted the Pittsburgh statement as an insult and immediately proceeded to sever their relations with the Union of American Hebrew Congregations. Likewise, Hebrew Union College was declared unfit to educate the next generation of rabbis. However, the most orthodox and observant Jews found themselves a beleaguered and ghettoized minority, with few adherents and scant resources. Of approximately two hundred major Jewish congregations in existence in the 1880s, only a dozen, representing between three and four thousand people, remained strictly Orthodox.[3] The larger portion of traditionalists pursued compromise. These traditionalists remained committed to traditional practices and teachings—to the foundation provided by biblical and Talmudic authority—but they were also committed to the political emancipation and westernization (and therefore, deghettoization) of Jewish experience. They recognized that this would entail modifications to orthodoxy, but they were persuaded that these changes should only be made according to Talmudic precedent and with the consent of the whole community of believers.[4] In 1886, one year after the publication of the Pittsburgh Platform, the Jewish Theological Seminary in New York was founded, and with it, the Conservative movement in American Judaism was formally launched. By 1901, with the founding of the Rabbinical Assembly of America (the national association of Conservative Rabbis), and 1913, with the establishment of the United Synagogue of America (a national union of the Conservative synagogues), the Conservative movement had become a more fully distinct and powerful force in American Judaism.

As these brief glimpses illustrate, pluralism has long been characteristic of both intra- and interdenominational religious life in American history. What is surprising about current developments, however, is that the divisions between orthodox and progressive within each tradition have, in large measure, come to outweigh or take precedence over those separating the major faiths. This historical realignment has taken place in conjunction with, and is evident in, other structural changes.

The Waning of Denominational Loyalties

However deep the internal disagreements were within each faith community between the 1880s and the 1960s, opposing factions always implicitly understood the limitations of their quarrel. As such, Protestants, Catholics, and Jews retained their theological and ideological distinctiveness.

A number of empirical studies of the post–World War II period confirmed the seemingly unchangeable nature of these denominational lines. Perhaps the most famous of these was the 1958 public opinion survey of the residents of the Detroit metropolitan area. The study, revealingly entitled The Religious Factor , found that vast differences still existed among Protestants, Catholics, and Jews, not only in terms of their relative socioeconomic positions but in terms of their broader view of the world. Religious tradition was the source of significant differences in their general political orientation and commitment to civil liberties (for example, freedom of speech and desegregation), not to mention the differences in voting behavior and in attitudes toward the exercise of governmental power (for example, in setting price controls, establishing national health insurance and medical care, lessening unemployment, and strengthening educational programs). The religious factor also had a marked effect in shaping the public's views of morality (for example, gambling, drinking, birth control, divorce, and Sunday business), and the public's views on the role of the family. Finally, religious differences had consequences for economic aspirations and attitudes toward work (as seen in various views on installment buying, saving, the American Dream, and the like).[5]

Yet within two decades, new evidence was showing a certain reversal in these trends: people were becoming less and less concerned about denominational identity and loyalty.[6] Surveys of the period showed that the majority of people of all faiths (up to 90 percent) favored increased cooperation among local churches in community projects, in promoting racial tolerance, in sharing facilities, and even in worship.[7] The weakening of denominational boundaries extended to the relations among denominations within the Protestant community as well. According to Gallup surveys conducted from the mid-1970s to the mid-1980s, the overwhelming majority of Protestants carried equally positive feelings toward Protestants belonging to denominations other than their own.[8]

The waning of denominational loyalty was reflected in people's attitudes, but it was confirmed increasingly in their behavior. Since mid-century, Americans of every faith community have become far more prone to change denominational membership in the course of their lives.[9] The evidence on inter-religious marriages also suggests this pattern. For example, the proportion of Jews marrying non-Jews increased from 3 percent in 1965 to 17 percent in 1983. The proportions of inter-religious marriage between Catholics and Protestants and of different denominations within Protestantism are considerably higher.

Ideological Realignment

As denominational affiliation has weakened, so too have the effects of denominational identity on the way people actually view the world. The 1987 General Social Survey showed no significant differences among
Protestants, Catholics, and Jews on most issues, including capital punishment, the tolerance of communists, gun control, interracial marriage, welfare, and defense spending. And there was no significant difference between Protestants and Catholics on the abortion issue.[10] What is more, the only significant differences among Protestant denominations exist according to their general location on the ideological continuum between orthodoxy and progressivism.[11]

These ideological affinities across denominational lines are reflected time and again. (It is important to recall, however, that public culture is largely constituted by the activities and pronouncements of elites. The key players, then, are not so much the "rank and file," the ordinary passive supporters of a cause, but the activists and leadership. The ideological constructions of elites are most consequential and, importantly, it is here that ideological affinities are most clearly crystallized.) The 1987 Religion and Power Survey, for example, documented just this—that two fairly distinct cultural orientations take shape across religious tradition on the basis of theological commitment.[12] The theologically orthodox of each faith and the theologically progressive of each faith divided predictably on the issues of sexual morality, family life and family policy, political party preference and ideology, political economy, and international affairs. (In the appendix to this chapter, the exact distribution of opinions on these issues and their statistical significance are analyzed.)

Moreover, this same survey found that Protestants, Catholics, and Jews on both ends of the new cultural axis generally agreed that America bore tremendous responsibility in world affairs. Virtually all were prone to agree that the United States is not "pretty much like other countries," but "has a special role to play in the world today."[13] So, too, leaders of all faiths were strongly disposed to affirm that "the United States should aspire to remain a world power" and not "a neutral country, like Switzerland or Sweden."[14] But orthodox and progressive factions sharply disagreed as to how the United States should actually carry out that responsibility. When asked "How much confidence do you have in the ability of the United States to deal wisely with present world problems?" progressives in all three faiths were at least twice as likely as their more orthodox counterparts to say "not very much" or "none at all."[15]

The same kind of division was exhibited between the orthodox and the progressives when they were asked to make moral assessments of America's place in the world order. The overwhelming majority of the orthodox in Protestant (78 percent), Catholic (73 percent), and Jewish (92 percent) leadership circles said, for example, that the United States was, in general, "a force for good in the world." By contrast, the majority of the progressives in Protestantism and Catholicism (51 percent and 56 percent respectively) said that the United States was either "neutral" or
"a force for ill."[16] The contrast was even more stark when respondents were asked to assess how America treats people in the Third World. Progressives, particularly in Protestantism (71 percent) and Catholicism (87 percent), were much more likely to agree that America "treats people in the Third World unfairly." The majority of the orthodox in each tradition claimed just the opposite.[17]

Opposing perspectives of America's moral status in world affairs became apparent when respondents were asked to compare the United States and the Soviet Union. A plurality of all religious leaders characterized the competition between the United States and the Soviet Union as a struggle in power politics, as opposed to a moral struggle, yet the more orthodox Catholics and Protestants were three times more likely (and orthodox Jews more than twice as likely) to say that it was a moral struggle.[18] Ideological disparities between orthodox and progressive respondents were even more dramatic, however, when they were asked which was the greater problem in the world today, repressive regimes aligned with the United States or Soviet expansion? The majority of progressives within Protestantism (61 percent), Catholicism (71 percent), and Judaism (57 percent) claimed it was the repressive regimes aligned with the United States; the majority of the orthodox in these three faiths (Protestants, 84 percent; Catholics, 64 percent; and Jews, 87 percent) identified Soviet expansion as the greater problem.

The results of a survey of the political opinion of Christian theologians conducted in 1982 reveal similar divisions in perspectives on domestic spending.[19] Nearly two-thirds (63 percent) of the progressives, compared to less than one-fifth (19 percent) of the orthodox, claimed that the government was spending too little on welfare. Eighty percent of the progressives said that the government was spending too little on national health, compared to just 52 percent of the Evangelicals. Likewise, nearly nine out of ten (89 percent) of the progressives agreed that the government was spending too little on protecting the environment; just half (50 percent) of the orthodox Protestants felt the same way. Almost nine out of ten (87 percent) of the progressives complained that the government spent too little money on urban problems, compared to 56 percent of the orthodox. And roughly six out of every ten of the progressives (59 percent) claimed that too little was spent on foreign aid; just one out of every four (24 percent) of the orthodox agreed.

In short, not only in surveys but in other recent empirical studies as well it is clear that the relative embrace of orthodoxy is the single most important factor in explaining variation in political values.[20] Indeed, it accounts for more variation within and across religious tradition than any other single factor, including social class background, race, ethnicity, gender, the size of the organization the person works in, and the degree
of pietism each one individually lives by. Obviously, some words of caution are in order. The attempt to dichotomize these religious leaders according to either an orthodox or progressive theological inclination is admittedly forced. Dichotomies may be more prone to show up in organizations, but among individuals the distinctions would seem artificial and perhaps unfair; intuition suggests instead a continuum, with orthodoxy and progressivism as the two extreme poles. Undoubtedly this is true. Even so, at least in the present situation there appears to be an increasing polarization among denominational and paradenominational organizations. What is more, there may be a tendency for the leadership to align themselves dichotomously as well. Would the differences between orthodox and progressive camps in each religious tradition have been as prominent if this were not the case? Though a dichotomy may not adequately reflect reality, as an analytical exercise it has still proven to be extremely instructive. The evidence pointing to a restructuring of ideological affinities within America's religious leadership would seem overwhelming.

The importance of these changes is nothing short of world-historical. For the full length of American history, from colonial times to the middle of the twentieth century, pluralism in American public culture existed primarily within the limits or boundaries of a biblical culture. As such, cultural diversity revolved principally around the cultural axes of doctrine and ecclesia. With the erosion of those boundaries, the primary axis defining religious and cultural pluralism in American life shifted and is continuing to shift.

A New Ecumenism

Increasingly, these cosmological positions have come to be expressed institutionally and manifested in the form of alliances that reach across denominational lines. Because of the commonalities of vision and concern, the orthodox wings of Protestantism, Catholicism, and Judaism are forming associations with each other, as are the progressive wings of each faith, and each side does so in opposition to the influence the other seeks to exert in public culture. The vignettes recounted at the beginning of this chapter merely illustrate this development. At the heart of the new cultural realignment, then, are the pragmatic alliances being formed across faith traditions, alliances that constitute an altogether new form of ecumenism.

The clearest way in which this new ecumenism takes tangible expression is within the newly expanded structure of para-church organizations. Most obviously, it is seen in the way these organizations relate to each other. In some instances groups will, as a matter of long-standing
policy, join together with other groups in pursuit of a particular policy objective. The Catholic League for Religious and Civil Rights provides a telling illustration of this dynamic on the side of orthodoxy.[21] The Catholic League was established in 1973 by a Jesuit priest as a Catholic counterpart to the Jewish Anti-Defamation League and the secular American Civil Liberties Union, "to protect the religious rights and advance the just interests of Catholics in secular society."[22] While it claims to be a nonpartisan organization, working to serve the needs of the whole Catholic community, it tilts decisively toward the orthodox community in Catholicism. In this, it openly supports the work of like-minded Protestants and Jews. Indeed, the League's first major legal case was the defense of Dr. Frank Bolles, a Protestant physician and right-to-life activist. (Bolles had been charged by a Colorado district attorney with "harassing and causing alarm" by mailing out antiabortion literature.) In its first fifteen years of existence, the Catholic League has also publicly defended the rights of a Jew to wear his yarmulke while in uniform; it supported Reverend Sun Myung Moon, the leader of the Unification Church, in his tax-evasion case; it has publicly "defended the right of parents [Protestant, Catholic, and Jewish] to give their children a God-centered education"; and so on.

A similar dynamic operates on the progressive side of the cultural divide. The Religious Action Center of Reform Judaism, for example, officially serves as a government liaison between the Union of American Hebrew Congregations and the Central Conference of American Rabbis by representing the positions of these groups to the federal government. Beyond this, however, it cooperates with a wide variety of liberal Protestant and Catholic denominations and organizations on progressive policy concerns, issuing statements against the nuclear arms race, America's involvement in Central America, and Supreme Court nominee Robert Bork. In both cases, the alliances formed are built upon a perceived self-interest. Both organizations tend to support groups and individuals of other religious faiths when such support also advances their own particular objectives.

The pattern here is frequently repeated. The activists in these organizations communicate with each other and even draw direct support from each other. For example, in a survey of forty-seven of these public affairs organizations, the leadership of all of these groups claimed to be in communication with individuals or groups outside their own religious or philosophical tradition, and most of these had engaged in active cooperation.[23] The public affairs office of the Orthodox Jewish organization, Agudath Israel, for example, regularly allies with Catholics on concerns over private education and with conservative Protestants on moral issues. The overwhelming majority of these organizations were supported by grass-roots contributions and of these, all but one or two claimed to receive contributions from Protestants, Catholics, the Eastern Orthodox, and Jews. In the early 1980s, for example, 30 percent of the membership of the Moral Majority was Catholic. Finally, roughly half of these groups sought to make explicit and public their commitment to coalition formation (that is, the larger ecumenism) by deliberately including representation from the range of traditions on their organization's board of advisors or board of trustees. For example, the (orthodox) American Family Association—which is located in Tupelo, Mississippi, and led by Donald Wildmon as its executive director—advertises an advisory board that includes four Catholic Bishops and one Cardinal, three Eastern Orthodox Bishops, including the Primate of the Greek Orthodox Church, and dozens of Evangelical and Pentecostal leaders.

Legitimating the Alliances

Although these coalitional organizations on both sides of the divide vary considerably in their size, scope of activity, and ability to actually unify member groups, their very presence on the political landscape aptly symbolizes the nature and direction of a major realignment in American public culture. And the degree to which the activists themselves recognize the historically unique positions that they have come to occupy with regard to this realignment is evident in the legitimations offered by both sides to account for their newly forged alliances. On both sides of the divide, the accounts are framed in terms of a pragmatism necessary to the survival of their respectively besieged ways of life. Speaking from the orthodox point of view, for example, Tim LaHaye has asserted the following:

Protestants, Catholics and Jews do share two very basic beliefs: we all believe in God to Whom we must give account some day for the way we live our lives; we share a basic concern for the moral values that are found in the Old Testament. . . . I really believe that we are in a fierce battle for the very survival of our culture. . . . Obviously I am not suggesting joint evangelistic crusades with these religions; that would reflect an unacceptable theological compromise for all of us. [Nevertheless] . . . we can respect the people and realize that we have more in common with each other than we ever will with the secularizers of this country. It is time for all religiously committed citizens to unite against our common enemy.[24]

Of a very different generation, but from a like-minded perspective, Evangelical activist Franky Schaeffer observed that "the time has come for those who remain to band together in an ecumenism of orthodoxy. Unlike liberal ecumenicism which is bound together by unbelief, this ecumenicism is based upon what we agree to be the essence of the Christian faith, including an orthodoxy of belief in social concerns and priorities."[25]

Nor are these solely the sentiments of the Protestant fundamentalists.[26] As the director of the public affairs office of Agudath Israel argued, "Joint efforts with Catholics and Protestants do not mean that we Jews are endorsing their theology. We can overlook our religious differences because politically it makes sense."[27] So, too, a spokesman for the Catholic League maintained, "the issues are too important to have a denominational focus."[28]

The moral reasoning employed by both sides of the cultural divide to legitimate these alliances, then, is very similar. Although the alliances being formed are, as suggested above, historically "unnatural," they have become pragmatically necessary. In the end, they are justified by the simple dictum "the enemy of my enemy is my friend."

Religious Realignment and Social Science

It is important to recognize that the lines separating orthodox and progressive are not, in reality, always sharp. There are some notable ideological cross-currents that flow against the larger cultural tendencies—the pro-life organization Feminists for Life, for example, or the left-wing Evangelicals for Social Action. Yet recognizing the existence of these counterintuitive phenomena does not negate the broader tendencies taking place within the realm of American public culture. The dominant impulse at the present time is toward the polarization of a religiously informed public culture into two relatively distinct moral and ideological camps.

Curiously, these developments have been almost completely ignored for nearly two decades. If anything, the social scientific establishment has seemingly documented what it has long (and incorrectly) assumed—that religious and cultural phenomena really are "epiphenomenal" to the course and conduct of contemporary affairs. Attitudinal surveys and organizational studies, for example, have consistently shown that religious affiliation is insignificant (statistically and substantively) in explaining social and political reality. Social scientists will concede that the distinct traditions of creed, religious observance, and ecclesiastical politics are important sources of personal meaning and communal identity. Even so, the conceptual apparatus they employ and the evidence they marshal support the view that there is no longer a distinct Protestant position or Catholic position or Jewish position (or, for that matter, Mormon or Buddhist position) with regard to American public culture. The guiding assumption about the secularization of modern life, in short, appears to have been substantiated.


331

The cultural realignments discussed here, however, suggest that the social scientific reports of the effective death of religion are, as were those of Mark Twain's demise, "greatly exaggerated." Indeed, events of the past two decades have clearly caught social scientists looking in the wrong places. The relationship between religion and public life has, if anything, become more significant, but that is because the contours of religious organization and expression in America have fundamentally changed.

These changes, in turn, have had, and will continue to have, fundamental significance for American culture. The reason for this is clear: this realignment is not based on a facile division between "liberals" and "conservatives" in different faith communities. Political ideology is merely an artifact of a much deeper disagreement: what unites the orthodox and the progressive across traditions, and what divides them within each tradition, are different formulations of moral authority. Whereas the orthodox side of the cultural divide is guided by conceptions of a transcendent source of moral authority, the progressive formulation grants that authority to what could be called "self-grounded rational discourse." These opposing conceptions of moral authority are at the heart of most of the political and ideological disagreements in American public discourse—including the debates over abortion, legitimate sexuality, the nature of the family, the moral content of education, Church/State law, the meaning of First Amendment free speech liberties, and on and on.

We are dealing with more than "religion," strictly defined. The politically consequential divisions in American culture are no longer ecclesiastical, as they once were; they are "cosmological." They no longer revolve around specific doctrinal issues or styles of religious practice and organization; they revolve around fundamental assumptions about value, purpose, truth, freedom, and collective identity. This realignment in American "religion" and the conflicts that are born from it, then, are neither narrow nor trivial, but are central to the restructuring of America itself.


332

Appendix to Chapter Fifteen

The survey part of the Religion and Power Project, funded by the Lilly Endowment, was conducted under the direction of the author by the Opinion Research Corporation of Princeton, New Jersey. A sample of roughly 1,300 religious leaders was drawn from the 1985 edition of Who's Who in Religion. After deaths and nonforwarded mail were discounted, a total of 791 individuals responded, representing a 61 percent response rate. Protestantism, Catholicism, and Judaism were each dichotomized into theologically liberal and conservative camps, in line with the present argument. The divisions took the following form: conservative Protestants were operationalized as those who identified themselves as either an Evangelical or a Fundamentalist; liberal Protestants comprised the remainder. Conservative Catholics were defined as those who placed their theological inclinations on the conservative side (values 4, 5, 6, and 7) of a 7-point liberal/conservative continuum, while liberal Catholics placed their theology on the liberal side of the continuum (values 1, 2, and 3). Orthodox Jews were those who identified themselves as such in the survey, just as Conservative and Reform Jews identified themselves accordingly.
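(A note on the statistics reported below: the phrase "chi-square significant at the .000 level," which recurs throughout this appendix, presumably refers to the standard chi-square test of independence applied to the cross-tabulation of theological camp by response category. As a general reminder only, and not a reproduction of the original computations, the test statistic takes the familiar form

$$\chi^2 \;=\; \sum_{i=1}^{r}\sum_{j=1}^{c} \frac{(O_{ij}-E_{ij})^2}{E_{ij}}, \qquad E_{ij} \;=\; \frac{(\text{row } i\ \text{total})\,(\text{column } j\ \text{total})}{N},$$

where $O_{ij}$ and $E_{ij}$ are the observed and expected cell counts, $N$ is the number of respondents in the table, and the statistic is referred to a chi-square distribution with $(r-1)(c-1)$ degrees of freedom. "Significant at the .000 level" indicates only that the reported probability rounds to .000, that is, p < .0005.)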

The results of this survey with respect to the issues discussed in this chapter are as follows:

Sexual Morality

The orthodox wings of Protestantism, Catholicism, and Judaism were significantly more likely than their progressive counterparts to condemn premarital sexual relations and premarital cohabitation as "morally wrong." The question for this series of behaviors reads as follows: "Please indicate how you personally feel about each of the following. Do you believe each is morally wrong, morally acceptable, or not a moral issue." On premarital sexuality, the actual figures were as follows: Protestants—orthodox, 97 percent, and progressive, 59 percent; Catholics—orthodox, 97 percent, and progressive, 82 percent; Jews—orthodox, 72 percent, and progressive, 31 percent. Chi-square significant at the .000 level. On premarital cohabitation: Protestants—orthodox, 95 percent, and progressive, 58 percent; Catholics—orthodox, 93 percent, and progressive, 82 percent; Jews—orthodox, 74 percent, and progressive, 33 percent. Chi-square significant at the .000 level.

The orthodox were also far more likely than progressives to condemn the viewing of pornographic films as morally wrong: twice as likely among Protestants and more than four times as likely among Jews. The pattern holds for Catholics as well, but much less dramatically; Catholics were more uniform in their opinion here. The actual figures were as follows: Protestants—orthodox, 94 percent, and progressive, 47 percent; Catholics—orthodox, 87 percent, and progressive, 75 percent; Jews—orthodox, 64 percent, and progressive, 15 percent. Chi-square significant at the .000 level.

Family Life

When presented with the statement "It is much better for everyone involved if the man is the achiever outside the home and the woman takes care of the home and family," Evangelical Protestants were three times as likely to agree, conservative Catholics were nearly twice as likely to agree, and orthodox Jews were nearly six times as likely to agree as their progressive counterparts. On this question the actual figures were as follows: Protestants—orthodox, 68 percent, and progressive, 23 percent; Catholics—orthodox, 57 percent, and progressive, 32 percent; Jews—orthodox, 45 percent, and progressive, 8 percent. Chi-square significant at the .000 level.

The pattern of response was similar when subjects were asked about authority in the home. The theologically orthodox of each faith were more apt to agree that "the husband should have the 'final say' in the family's decision making." The actual figures were as follows: Protestants—orthodox, 53 percent, and progressive, 10 percent; Catholics—orthodox, 27 percent, and progressive, 8 percent; Jews—orthodox, 13 percent, and progressive, 4 percent. Chi-square significant at the .000 level.

One of the more important tests of this authority concerns the decision to bear children: "Is it all right for a woman to refuse to have children, even against the desires of her husband to have children?" The majority of progressive leaders in Protestantism and Judaism agreed that it was all right, compared to minorities on the orthodox sides of these faiths. The figures for this item were as follows: Protestants—orthodox, 49 percent, and progressive, 70 percent; Jews—orthodox, 23 percent, and progressive, 63 percent. Chi-square significant at the .000 level. Progressive Catholic leaders (18 percent) were more likely to agree than orthodox Catholic leaders (11 percent), and yet the majority of both camps disagreed with the statement.

Few in either the orthodox or progressive camps in Protestantism, Catholicism, and Judaism maintained an unqualified traditionalism in family affairs. For example, only a very small number held that a married woman should not work if she has a husband who can support her, and just as few in either camp agreed that "women should take care of running their home and leave the running of the country up to men." (The first statement read "It is all right for a married woman to earn money in business or industry, even if she has a husband capable of supporting her." Among all groups the number disagreeing with this statement was under 5 percent. The same held for the second statement, where the number agreeing was also under 5 percent, with the exception of conservative Protestant leaders, 18 percent of whom agreed that women should take care of running their home and leave the running of the country up to men.)

Yet they disagreed sharply when responding to the question of priorities. More than eight out of ten of the orthodox leaders in these faiths agreed that "a woman should put her husband and children ahead of her career," compared to roughly four out of ten of the progressive Protestant and Jewish leaders and six out of ten of the liberal Catholic leaders. (The actual figures were as follows: Protestants—orthodox, 86 percent, and progressive, 40 percent; Catholics—orthodox, 83 percent, and progressive, 63 percent; Jews—orthodox, 80 percent, and progressive, 46 percent. Chi-square significant at the .000 level.)

This general disposition extended to attitudes about the mother's relations with her children. Leaders on the progressive side of the theological continuum in all faiths were more inclined than their theologically conservative counterparts to agree that "a working mother can establish just as warm and secure a relationship with her children as a mother who does not work." Accordingly, they were disproportionately more likely (twice as likely if they were Protestants) to disagree that "a preschool child is likely to suffer if his or her mother works." (For the first question, about mother-child relationships, the figures were: Protestants—orthodox, 57 percent, and progressive, 81 percent; Catholics—orthodox, 65 percent, and progressive, 77 percent; Jews—orthodox, 56 percent, and progressive, 82 percent. Chi-square significant at the .000 level. For the second question, the figures [disagreeing] were: Protestants—orthodox, 32 percent, and progressive, 65 percent; Catholics—orthodox, 41 percent, and progressive, 48 percent; Jews—orthodox, 46 percent, and progressive, 76 percent. Chi-square significant at the .000 level.)


335

Family Policy

Not surprisingly, this pattern was generally reflected in the opinion of these leaders when asked about three divisive public policy issues: support for the Equal Rights Amendment, the morality of abortion, and the morality of homosexuality. Roughly eight out of ten of the progressives in Protestantism (80 percent), Catholicism (78 percent), and Judaism (88 percent) favored the passage of the ERA, compared to much smaller numbers on the orthodox side (Protestant, 31 percent; Catholic, 42 percent; Jewish, 54 percent). On abortion, progressives of all three faiths were significantly less likely to condemn abortion as morally wrong, particularly within Protestantism and Judaism. (The actual figures were as follows: Protestants—orthodox, 93 percent, and progressive, 41 percent; Catholics—orthodox, 100 percent, and progressive, 93 percent; Jews—orthodox, 40 percent, and progressive, 8 percent. Chi-square significant at the .000 level.) So, too, the orthodox and progressive wings of these faiths were deeply split over the issue of homosexuality and lesbianism; the former were substantially more likely than the latter to denounce the practice of homosexuality and lesbianism as morally wrong. Nine of ten Evangelicals and eight of every ten orthodox Catholic and Jewish leaders condemned homosexuality as morally wrong, compared to fewer than five of every ten mainline Protestant and liberal Catholic leaders and fewer than three of every ten of the liberal Jewish leaders. The actual figures on the question about homosexuality were as follows: Protestants—orthodox, 96 percent, and progressive, 45 percent; Catholics—orthodox, 81 percent, and progressive, 49 percent; Jews—orthodox, 80 percent, and progressive, 25 percent. Chi-square significant at the .000 level. The responses to the question on lesbianism were, within a percentage point, identical.

Political Party Preference

Once again, for reasons relating to the political and ethnic history of the Jewish community in America (for example, its longstanding political liberalism), the pattern is generally less distinct among Jewish elites than among Protestant or Catholic elites, but the divisions there are still quite remarkable. For example, the survey showed that progressives were more likely than the orthodox to identify themselves as Democrats, by a margin of about 2 to 1 among the Protestant and Catholic leadership and 1.5 to 1 among the Jewish leadership. (On political party preference, the percentages of those who identified themselves as Democrats were: Protestants—orthodox, 25 percent, and progressive, 53 percent; Catholics—orthodox, 46 percent, and progressive, 77 percent; Jews—orthodox, 38 percent, and progressive, 57 percent. Chi-square significant at the .000 level.)


336

Ideology

Progressives in Protestantism and Catholicism were roughly six times as likely, and progressives in Judaism nearly twice as likely, as their more orthodox counterparts to describe their political ideology as liberal or left-wing. (Those describing themselves as somewhat liberal, very liberal, or far left were as follows: Protestants—orthodox, 11 percent, and progressive, 60 percent; Catholics—orthodox, 12 percent, and progressive, 77 percent; Jews—orthodox, 36 percent, and progressive, 67 percent. Chi-square significant at the .000 level.)

Political Economy

There was broad agreement among all parties on the basic functions of the welfare state: that "the government has the responsibility to meet the basic needs of its citizens, even in the case of sickness, poverty, unemployment, and old age," and that "the government should have a high commitment to curbing the economic and environmental abuses of Big Business." At least eight out of ten of all religious leaders, regardless of theological orientation, agreed with these statements. (The only exception was the opinion of Evangelical leaders on the issue of governmental responsibility; 54 percent agreed.) While there was general agreement all the way around, there were still differences in the intensity with which the various factions agreed: Catholic and Protestant leaders on the progressive side were significantly more likely to "strongly agree" with these statements.

There was also considerable, though uneven, agreement that "the government should work to substantially reduce the income gap between the rich and the poor." The difference between liberal (76 percent) and conservative (43 percent) Protestants was 33 percentage points, and between liberal (78 percent) and conservative (59 percent) Jews it was 19 percentage points. Among Catholics, however, there was only a 2 percentage point difference (92 percent to 90 percent).

Beyond this, however, the agreement came to an end. As one might predict, the more progressively oriented leaders in Catholicism and Protestantism were up to twice as likely as the orthodox to agree that "big business in America is generally unfair to working people." Though not as striking, the same general pattern held for Jews as well. (The actual figures were as follows: Protestants—orthodox, 27 percent, and progressive, 48 percent; Catholics—orthodox, 39 percent, and progressive, 69 percent; Jews—orthodox, 36 percent, and progressive, 42 percent. Chi-square significant at the .000 level.)

Similarly, progressives in each tradition were up to three times as inclined as their theologically orthodox counterparts to disagree with the statement "economic growth is a better way to improve the lot of the poor than the redistribution of existing wealth." (The actual figures of those disagreeing with that statement were: Protestants—orthodox, 14 percent, and progressive, 44 percent; Catholics—orthodox, 23 percent, and progressive, 50 percent; Jews—orthodox, 24 percent, and progressive, 33 percent. Chi-square significant at the .000 level.) A similar statement was presented concerning the application of this principle to the Third World: "Capitalist development is more likely than socialist development to improve the material standard of living of people in the contemporary Third World." The ideological gap between the orthodox and progressive ranged between 24 percentage points (Catholic) and 34 percentage points (Protestant), with Jews in between, at 28 percentage points of difference.

When presented with the statement "The United States would be better off if it moved toward socialism," fewer than half of the leaders in every religio-cultural faction agreed, yet the pattern once again held true to form: progressives of all traditions were between three and five times as likely to agree as their orthodox rivals. (The figures of those agreeing with that statement about socialism were: Protestants—orthodox, 7 percent, and progressive, 33 percent; Catholics—orthodox, 13 percent, and progressive, 46 percent; Jews—orthodox, 8 percent, and progressive, 25 percent. Chi-square significant at the .000 level.)

International Affairs

When asked whether they thought "U.S.-based multinational corporations help or hurt poor countries in the Third World," the orthodox were substantially more prone to believe that they helped—at a ratio of 2 to 1 in Protestantism and 3 to 1 in Catholicism. (The percentages of those responding "helped" were: Protestants—orthodox, 76 percent, and progressive, 38 percent; Catholics—orthodox, 53 percent, and progressive, 16 percent; Jews—orthodox, 76 percent, and progressive, 53 percent. Chi-square significant at the .000 level.)

On the political rather than the economic side of this concern, the pattern again held true. When asked whether they favored or opposed the U.S. policy of "selling arms and giving military aid to countries that are against the Soviet Union," the orthodox of these three faiths were more inclined to favor this action by dramatic margins. The figures for the orthodox and the progressive, respectively, were: in Protestantism, 73 percent and 35 percent; in Catholicism, 52 percent and 22 percent; and in Judaism, 92 percent and 61 percent. Chi-square significant at the .000 level.

This was also the case when these leaders were pushed further on this issue, with the special case of the anti-Sandinista Contras of Nicaragua. Only in the case of Evangelicals did a decisive majority actually favor the policy, yet the ratio of those favoring to opposing the policy (according to theological disposition) within the other traditions was equally strong. (Those favoring the policy were as follows: Protestants—orthodox, 62 percent, and progressive, 14 percent; Catholics—orthodox, 39 percent, and progressive, 5 percent; Jews—orthodox,