Preferred Citation: Wolfe, Alan, editor. America at Century's End. Berkeley: University of California Press, c1991. http://ark.cdlib.org/ark:/13030/ft158004pr/


 

PART TWO—
ECONOMICS AND POLITICS:
GLOBAL AND NATIONAL



Five—
Mirrors and Metaphors:
The United States and Its Trade Rivals

Fred Block

The Decline of American Competitiveness

In the winter of 1990, the Chrysler Corporation ran a television commercial that featured its chairman, Lee Iacocca, complaining about an American inferiority complex toward the Japanese. He was referring to the perception that Japanese manufactured goods, including automobiles, were generally of higher quality than those made in the United States. This unusual advertising strategy was symptomatic of a radical reversal that occurred over less than forty years. In the 1950s, the label "Made in Japan" was an object of derision; it was synonymous with cheap goods of poor quality. By the 1980s, Japan had established itself as the world's most successful exporter of highly sophisticated manufactured goods.

While Japan's shift is the most dramatic instance, it is symptomatic of a broader transformation of the United States' position in international trade. Immediately after World War II, the United States was the only industrialized country whose manufacturing base had actually been strengthened during the war. U.S. industrial capacity had expanded significantly, while the economies of England, France, Germany, and Japan were all severely damaged. The enormous international appetite for U.S. manufactured goods in the post–World War II years made it possible for the United States to export far more than it imported. The only constraint on this appetite was the difficulty that other nations had in obtaining the dollars with which to purchase U.S. goods. The United States tried to overcome this "dollar gap" through aid programs that were designed to hasten the reconstruction of the economies of Western Europe and Japan. By every possible indicator, the United States dominated the world economy from 1945 through 1965.[1]

By the end of the 1960s, though, it was apparent that U.S. efforts to bolster the economies of its industrialized trading partners had been too successful. The U.S. trade position went from surplus to deficit as Western Europe and Japan sold increasing volumes of manufactured goods to the United States (see table 5.1). During the 1970s, however, the inflows were largely of consumer goods; the United States still enjoyed a healthy surplus in the export of capital goods, such as computers, machine tools, and airplanes. But in the 1980s, this last remaining advantage weakened as the U.S. economy was overwhelmed both with high-tech manufactured imports from Japan and Western Europe and low-tech imports from Newly Industrializing Countries such as Taiwan and South Korea.[2]

TABLE 5.1 U.S. Foreign Trade Surplus or Deficit for Various Years
(in millions of current dollars)

    1947      $10,124
    1950        1,122
    1960        4,892
    1970        2,603
    1980      –25,480
    1988     –127,215

SOURCE: Economic Report of the President (Washington, D.C.: U.S. Government Printing Office, 1990), table C-9, 410–11.

NOTE: These figures are for merchandise trade, exclusive of military shipments.

These dramatic shifts in the U.S. trade balance are linked to changes in national self-confidence. The trade surplus after 1945, combined with U.S. military superiority, encouraged talk of the "American Century"—a period of U.S. international dominance comparable to the Pax Britannica of the nineteenth century. However, it was to be a very short century; by the 1980s, the growing trade deficit catapulted Paul Kennedy's The Rise and Fall of the Great Powers onto the national best-seller list. Kennedy argued that the growing U.S. trade deficit meant that the United States was following a long-established pattern of imperial decline.

The United States' competitive decline has become a central issue in the country's politics. Debate focuses on the question of what can be done to improve our international trade position. The AFL-CIO and some of its allies in the Democratic party have consistently argued that the major problem is the unfair trading practices of some of our competitors, but this has been a minority position. Thus far, no clear majority position has emerged, but politicians in both parties increasingly argue that their pet proposals—from cuts in the capital gains tax to educational reform—are necessary to solve the trade problem.

Since 1980, it has been increasingly common for domestic commentators to compare the United States with its leading trade rivals to gain perspective on what should be done. This use of other countries as a kind of mirror—to better assess one's own society—has been a common practice in the history of other countries. Russian history, for example, has been marked by episodes in which invidious comparisons with foreign nations have been used to stimulate domestic reform. The Gorbachev era is only the most recent example. Yet this type of comparative national introspection has been rare in modern U.S. history; for most of this century, national confidence has been so great that the only comparative question was why other nations had been so slow to adopt American institutions and practices.

But faced with trade deficits and a perception of competitive decline, U.S. analysts have increasingly looked to Japan and West Germany for insight into what is wrong in the United States. The intention, of course, is to spark national renewal by recognizing and eliminating those national characteristics that are holding the United States back. Unfortunately, the perceptions from these comparisons that have entered the public debate have been like the images in fun house mirrors. Some of the features of those societies that are most important in explaining their economic successes have been almost completely ignored, while others of marginal or questionable importance have loomed far too large as explanations for economic success. Most sadly, the comparisons—like distorted reflections—have served to obscure rather than to enlighten; they have made it more difficult for this society to understand how to handle its economic and social problems.

Comparing the United States, Japan, and West Germany

In pursuing comparisons among the United States, Japan, and West Germany, it is important to distinguish between the scholarly literature—books and articles that are very rarely read outside of university settings—and the popular literature of newspapers and magazines read by millions of people. In the scholarly literature, there are five important areas of contrast between Japan and West Germany, on the one hand, and the United States, on the other; in the popular arena, only one—or possibly two—of these factors are emphasized.

Before beginning the comparison, it is important to emphasize that neither Japan nor West Germany is an unequivocal economic success story. West Germany has gone through the 1980s with unemployment rates higher than those in the United States. Large parts of the Japanese economy, particularly the service sector, remain relatively underdeveloped. And both Japan and West Germany have provided far fewer economic opportunities for women than has the United States. Different economies have succeeded with certain parts of the puzzle of how to organize an advanced, postindustrial economy, but no single nation has been able to put the whole puzzle together. Hence, the main economic achievement in both Japan and West Germany has been quite specific—to reorganize manufacturing to produce high-quality goods that are particularly attractive in international trade. In a period in which a number of Newly Industrializing Countries have greatly increased their international market share for such simpler manufactured goods as apparel and steel, Japan and West Germany have run large surpluses in manufacturing trade by specializing in more complex goods, such as automobiles, machine tools, and consumer electronics.

Also, the West German and Japanese economies are very different from each other in their specific institutional arrangements. It is not a simple matter to create a single composite "successful competitor country" out of these quite different national experiences. Nevertheless, there are a number of dimensions on which these two countries are both similar to each other and different from the United States that might account for the variation in the three countries' recent experiences with sophisticated manufacturing. On some of these dimensions, the specific institutional arrangements through which a given set of ends is achieved might be quite different, but the ultimate outcome appears similar. All of these dimensions have been discussed in the scholarly literature, but only a few of them have played a part in more popular discussions.

Marginality of Military Production

One obvious point of comparison between West Germany and Japan is that both were defeated in World War II. As a consequence of that defeat, both nations were constrained to limit their military expenditures. The result has been that defense spending and military production have played far more marginal roles in their economies than in that of the United States.[3] This has contributed substantially to Japan's and West Germany's successes in civilian manufacturing.[4]

In the United States, a large percentage of scientists and engineers have been employed in defense and defense-related industries.[5] Moreover, the proportion of "the best and the brightest" from these technical fields who end up working in the military rather than the civilian side of the economy is even greater. Firms doing military research and development are able to pass their costs along to the government, so they are able to pay higher wages than civilian firms. Also, the needs of the arms industry have profoundly shaped engineering education in the United States, so that the definition of what is exciting and interesting work has been shaped by military demands. The consequence is that the use of scientific and engineering talent in civilian manufacturing in America has been far more limited and far less effective than in Japan and West Germany.



The different use of technical labor is only part of a larger contrast. High levels of U.S. defense spending have fostered a business style that is particularly unsuited to success in highly competitive civilian markets. It is a style that involves mastery of the bureaucratic complexities of the procurement process, in which cost of production considerations are relatively unimportant, and where there are few rewards for high levels of flexibility in the production process. This style contrasts sharply with the sensitivity to consumer preferences, the sustained effort to reduce production costs, and the emphasis on flexibility that are characteristic of the firms that have been most successful in competitive civilian industries.[6]

Cooperative Work Arrangements

In both Japan and West Germany, a relatively high level of trust exists between employees and managers in manufacturing. While there are significant differences in the industrial relations patterns of the two countries, with West German unions being far stronger than unions in Japan, both countries have been able to mobilize high levels of employee motivation and initiative. In particular, both countries have evolved practices that protect core employees from displacement as a result of technological change. The consequence has been greater employee receptivity to technological innovation and, thus, quicker and more effective utilization of new productive technologies.[7]

Similar practices have evolved in some of the most important U.S. firms in the computer and electronic industries where no-layoff policies and commitments to retraining have created an openness to continual technological innovation.[8] However, the industrial relations in most U.S. manufacturing firms continue to be characterized by low trust and continued worker fears of displacement resulting from technological innovation. While many Fortune 500 firms have experimented with quality-of-work-life and employee involvement programs in the hope of emulating foreign competitors' high-trust manufacturing environments, the results have been uneven.[9] In many cases, U.S. firms have been unable or unwilling to provide the increased employee job security that is an indispensable part of a more cooperative system of industrial relations.

Supportive Financial Institutions

In both Japan and West Germany, banks have historically played a central role in providing finance for manufacturing firms; the sale of corporate stock to nonbank purchasers—the chief mechanism by which firms raise money in the United States—has played a distinctly secondary role. This greater role of banks in the manufacturing sector has several positive consequences. First, banks tend to have a longer-term time horizon than stock markets. When bankers invest heavily in a firm, the advice that they give and the pressures they exert tend to be oriented to the long term. In contrast, corporate stock prices are heavily influenced by quarterly earnings reports, and concern about the stock price forces firms to emphasize profits in the next quarter over longer-term considerations. At the extreme, the emphasis on next quarter's bottom line can lead firms to sacrifice spending for preventive maintenance, research and development, and good employee relations—all factors that play a large role in the firm's long-term prospects.[10]

Similarly, banks with substantial stakes in manufacturing firms can play an active role in coordinating relations across firms. They can facilitate joint ventures between firms that might have complementary strengths, and they can use their influence to dampen destructive competition in a particular industry. Perhaps most significantly, neither Japanese nor West German manufacturing has seen anything like the takeover wars that the United States experienced in the 1980s. In those countries, the banks can use their influence to get rid of ineffective management teams without the huge costs that have been incurred in U.S. corporate takeovers.

Social Inclusion

Both West Germany and Japan have dramatically reduced poverty in their societies, although they have accomplished this through different means. In Japan, there has been a very strong political commitment to maintaining high levels of employment, so there are relatively few adult males who are marginal to the economy. Full employment combined with a reasonable minimum wage and a low divorce rate has made it possible to pull most people above the poverty level with comparatively low levels of social welfare spending. In West Germany, where unemployment has been relatively high, the elimination of poverty has required—in addition to a high minimum wage—fairly extensive state welfare spending in support of the unemployed and single-parent families. The results are that in West Germany only 4.9 percent of children live in poverty; in Japan, 8.1 percent of children aged ten to fourteen live in poverty; while in the United States, the comparable figure is 22.4 percent.[11]

The contrast between a large population of poor children in the United States and much smaller populations in Japan and West Germany has direct implications for education. The reduction of poverty goes along with substantially higher levels of educational achievement by young people. There is considerable evidence that the average high school graduate in Japan has substantially higher levels of mathematics and science skills than the average American high school graduate, but the most striking contrast is in the percentage of students who complete high school.[12] In the United States only 71.5 percent of students graduate in contrast to 88 percent in Japan.[13] In West Germany, rates of high school completion are lower, but most of those who leave school at age sixteen enter highly structured three-year apprenticeship programs that combine on-the-job training with formal learning.[14]

The proportion of eighteen-year-olds in the United States who are unqualified for skilled employment is probably as high as 40 percent if one includes both dropouts and students who graduate from high school with only minimal skills. This puts U.S. firms at a distinct disadvantage compared to Japanese and West German firms, which have a much deeper pool of young people who can easily be trained for skilled employment. In some sectors of the economy, the United States can partially make up for this disadvantage by making greater use of female employees than do Japan and West Germany, but this compensating mechanism does not work for skilled manufacturing jobs, where women still make up only about 6 percent of the labor force. Hence policies of social inclusion that result in the general reduction of poverty contribute to Japanese and West German industrial competitiveness by raising the level of educational attainment of the bottom half of the population. This advantage over the United States in the quality of the human input into the production process makes it easier for Japan and West Germany to develop more cooperative employment relations and to place more emphasis on the improvements in worker skill that facilitate the use of advanced production technologies.

Higher Rates of Personal Savings

It is widely believed that in Japan and West Germany households save a much higher proportion of their income than do those in the United States. Official data show that Japanese and West German household savings rates were at least twice as high as those for the United States in the 1980s.[15] This greater frugality means that there is a relatively larger pool of savings available for productive investment by firms at a lower interest rate. The lower interest rate means that firms can justify productive investments that could not be pursued if the cost of capital were higher.[16]
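The logic of that last step can be made concrete with a standard present-value calculation. The sketch below is purely illustrative—the project's cash flows and the two interest rates are hypothetical round numbers, not figures drawn from this chapter's sources—but it shows how a lower cost of capital can turn the same investment from unjustifiable to justifiable.

```python
# Illustrative only: how a lower cost of capital can make the same
# investment project worth pursuing. All figures are hypothetical.

def net_present_value(outlay, annual_return, years, rate):
    """Discount a stream of equal annual returns and subtract the outlay."""
    pv = sum(annual_return / (1 + rate) ** t for t in range(1, years + 1))
    return pv - outlay

# A factory retooling: $10 million up front, $1.5 million a year for 10 years.
for rate in (0.05, 0.10):
    npv = net_present_value(10.0, 1.5, 10, rate)
    print(f"cost of capital {rate:.0%}: NPV = {npv:+.2f} million")

# At 5% the project clears the hurdle (NPV ~ +1.58 million); at 10% it
# does not (NPV ~ -0.78 million). A deeper savings pool that lowers
# interest rates thus expands the set of investments firms can justify.
```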

It follows, in turn, that Japan and West Germany use this savings advantage to invest more heavily in manufacturing, with the consequence that their manufacturing productivity has grown substantially faster than that of the United States. The faster rate of productivity growth makes it possible for them to control costs and compete successfully against the United States in manufacturing markets.

Of these five possible explanations for Japanese and West German economic success, it is clear that the fifth explanation—the difference in household savings rates—has completely overshadowed all of the others in popular discussions. By 1989 concern about the low rate of personal savings in the United States had become such a national preoccupation that both major political parties advanced proposals designed to stimulate higher rates of savings. The popular press was filled with laments about the decline of personal savings. Peter Peterson, a former secretary of commerce, wrote a typical column in the New York Times (July 16, 1989), in which he reminisced lovingly about the frugality of his immigrant parents before he made this argument:

Up until about two decades ago, Americans would have considered it unthinkable that they could not save enough as a nation to afford a better future for their children, and that each generation would not "do better" and that the resources we invest into the beginning of life might be dwarfed by the resources we consume at the end of life. Yet, today the unthinkable is happening.

Our net national savings rate is now the lowest in the industrial world, forcing us to borrow abroad massively just to keep our economy functioning.

Later in this chapter I will show how Peterson's argument is based on problematic data and mistaken assumptions about how the economy works. The point to be emphasized here, however, is that of all of the important institutional contrasts between the United States and its major competitors, the difference in the savings rates of households has received disproportionate attention.

Some of the other contrasts have also been part of broader public discussions, but in each case, one element has been emphasized in a very telling fashion. For example, there has been considerable public concern about the shortcomings of U.S. public education, and a number of prominent corporate executives have argued that the failings of our schools have put their companies at a disadvantage relative to our major international competitors. George Bush promised to be the "Education President" precisely to address these problems. However, the problem of education in the United States is almost never related to the larger issue of social inclusion; it is rarely argued that the best way to improve our schools is to eliminate poverty. On the contrary, discussions of school failure tend to emphasize the personal shortcomings of those who drop out. This constant emphasis on individual characteristics helps give plausibility to the otherwise implausible arguments of those educational reformers who want to "get back to basics" and place renewed emphasis on discipline.

There has also been some broader discussion of the more cooperative employment relations that Japan and West Germany enjoy. Here again, the discussion moves quickly away from the specific institutional arrangements, such as strong unions or employment guarantees, that undergird that cooperation. Instead, the focus shifts to the cultural values of individual workers. Japanese and West German workers are seen as embodying the values of the work ethic: they are disciplined and they take pride in their work, and they contrast sharply with American workers, who are depicted as selfish, lazy, or both.

In short, in the mirror that the United States has held up to itself, only differences in the characteristics of individuals are revealed; the Japanese and West Germans are seen to do better because they are more frugal, more hardworking, and their children are more disciplined. Differences in institutional arrangements disappear from view completely. This kind of selective reflection has important political implications. The focus on individual qualities assures that blame will always be distributed according to Pogo's famous phrase, "We have met the enemy, and he is us." Failures of the U.S. economy thus appear to result from the personal failings of ordinary Americans, above all the failure to save.

Economics and Metaphor

Why is it that in public debate and discussion about declining U.S. competitiveness, comparisons of the United States to its trading partners have focused almost exclusively on differences in personal savings practices? The other institutional contrasts certainly raise all kinds of interesting questions about what the United States is doing wrong as a nation and how it could do better, but these issues are never explored. In my view, the explanation for this strange selectivity lies in the importance of metaphors in economic thinking.

While economists make great claims about the scientific nature of their discipline, economic discourse is dominated by metaphors.[17] From Adam Smith's "invisible hand" to recent discussions of economic "soft landings," economic activity is frequently understood in reference to something else. Even some of the most basic economic concepts, such as the ideas of inflation and deflation, rest on analogies to physical processes.

This is hardly surprising; metaphors are powerful and indispensable tools for understanding complex and abstract processes. Difficulties arise only when we forget that we are thinking metaphorically. A particular metaphor can be taken so much for granted in our intellectual framework that it structures our perception of reality in subtle and hidden ways. Such hidden metaphors can make our theories totally impervious to any kind of disconfirmation. No matter how much evidence a critic might amass, there is simply no way to persuade someone who has organized his or her thinking around one of these taken-for-granted metaphors.



There are three metaphors that loom particularly large in contemporary understandings of the economy in the United States. The first of these is so familiar that it is not worth discussing at length; it is the metaphor of government as spendthrift. The idea is simply that the public sector will invariably use its resources in ways that are inferior to their use by the private sector.[18] The other two metaphors are more hidden, but they have a profound impact on both the thinking of economists and the more popular economics of journalists and politicians.

Capital as Blood

In one metaphor, the economy is seen as a hospital patient and money for capital investment is likened to the blood that runs through the veins of the endangered individual. When the supply of money capital diminishes, the patient's heartbeat slows and the vital signs deteriorate. But when the patient's supply of blood is replenished by an intravenous transfusion, there is virtually an instantaneous improvement. Not only does the patient look better, but he or she is suddenly able to move about and do things that were previously unthinkable.

This metaphor establishes money capital as the indispensable element for economic health. Nothing else—not the cooperation of labor nor the ways in which economic institutions are structured—can compare in importance to the availability of money capital. Moreover, virtually any economic problem can ultimately be traced back to an insufficient supply of money capital.

For relatively underdeveloped economies, this metaphor holds an indisputable element of truth; such economies suffer a chronic shortage of resources available for productive investment. However, for economies like those of the United States and its major trading partners, the metaphor is deeply misleading. For one thing, the relationship between the dollar amount of new investment and economic outcomes such as the rate of economic growth is unclear. Frequent attempts have been made to prove that lagging rates of U.S. productivity growth were caused by insufficient rates of new investment, but these attempts have failed. Even the White House Conference on Productivity, convened by Ronald Reagan in 1983, was unable to provide unequivocal evidence of inadequate rates of investment in the United States.[19] The difficulty, of course, is that throwing money at any problem—whether it is lagging productivity or widespread drug abuse—never guarantees success. There are too many other variables that intervene to determine the effectiveness or ineffectiveness of particular expenditures. The picture has become even more clouded recently because computerization has created a pervasive process of capital savings in the economy; a million dollars of capital investment in 1990 bought capital goods that were far more powerful and effective than what the equivalent dollars would have bought five or ten years before. Capital savings is most obvious with computers themselves; the costs of computing power have been falling by 15 to 20 percent a year. But a parallel albeit slower change is occurring with a whole range of other capital goods. This pattern of capital savings means that each year less money capital is necessary to buy the same amount of new plant and equipment.[20]
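The cumulative force of that rate of decline is easy to underestimate. A back-of-the-envelope compounding of the 15 to 20 percent annual figure quoted above (the five- and ten-year horizons are simply illustrative) gives a sense of the magnitudes involved:

```python
# Rough sketch: compound effect of computing costs falling 15-20% a year.
# Only the decline rates come from the text; the horizons are illustrative.

for annual_decline in (0.15, 0.20):
    for years in (5, 10):
        relative_cost = (1 - annual_decline) ** -years
        print(f"at {annual_decline:.0%}/yr decline, the computing power a "
              f"1990 dollar buys cost {relative_cost:.1f}x as much "
              f"{years} years earlier")

# At 15% a year, the same computing power cost roughly 2.3x as much five
# years earlier and 5.1x ten years earlier; at 20%, roughly 3.1x and 9.3x.
```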

There are also a number of other contenders for the most indispensable element for an advanced economy. First, it is increasingly obvious that even when there is enough money capital, it cannot be taken for granted that it will be used productively or effectively. When financiers and firms engage in "paper entrepreneurialism," they can spend vast sums of money in corporate raids and leveraged buy-outs that do nothing to enhance the society's productive capacity. Unlike the infusion of blood, there is nothing automatic about the effect of money capital on the economy. One could argue instead that institutional arrangements that effectively channel money capital to productive use are the most indispensable element for a modern economy.

Another crucial element is the flow of new ideas that results from research and development. Without the capacity to innovate effectively in both products and production processes, a modern economy will quickly fall behind competitors who are better at anticipating consumer needs or reducing the costs of production. Still another element to consider is the flow of educated employees who are capable of developing and implementing these innovations. This is not just a question of scientists and engineers, since there is mounting evidence that advanced production processes in both manufacturing and services require workers with significant intellectual skills to use computer-based technologies effectively.[21]

It is, of course, a silly exercise to argue over which is the most indispensable element for a modern economy; one would expect a number of different factors to be extremely important. The point, however, is that the capital-as-blood metaphor is simply wrong in its insistence that one element of economic life can be elevated in importance over all of the others.

Redemption through Sacrifice

The second metaphor, redemption through sacrifice, is Christian rather than medical, but it also rests on the comparison of the economy to an individual. In this case, however, the economy is an individual who has succumbed to temptation. Instead of following the path of righteousness, hard work, and self-discipline, the individual has become either lazy or preoccupied with the pursuit of sensual pleasures. If the individual remains on this path, the future will bring complete moral decay and probable impoverishment. The alternative is to seek redemption through sacrifice; this means not only rejecting all temptations, but even forgoing some of the innocent pleasures that the person previously enjoyed. Only a sustained period of asceticism will atone for past sin and allow the person to return to the path of righteousness.

Economic maladies such as inflation or deflation are seen as evidence that the economy has veered from the correct path, either as a result of insufficient effort or of excessive emphasis on consumption. The remedy is always a sustained period of austerity—of collective belt-tightening. Austerity simultaneously demonstrates that people have remembered the correct priorities and frees resources for new investment to make the economy more productive. If sustained for an adequate length of time, the pursuit of austerity is almost guaranteed to restore the strength of the economy, no matter how serious the original transgression.

The two metaphors clearly intersect in that austerity is seen as a means to guarantee that the flow of money capital will once again be swift enough to restore the health of the economic patient. Health in one framework is the same as righteousness in the other. Moreover, it is also important that both metaphors equate the economy to an individual. The classic justification of the free market was that the pursuit of greed by individuals was transformed by the invisible hand into a benevolent outcome. However, the disjunction in that argument between individual and collective morality is troubling for those who see the world in purely individualistic terms. They experience some discomfort with the idea that individual greed should produce a positive outcome. These metaphors eliminate that discomfort by restoring the notion that individual virtue is necessary for the collective good and that collective failures can be traced to individual weaknesses. With these metaphors as guides, the path to a more prosperous economy is seen as being reached by persuading individuals to act virtuously.

In actual economies, however, the relationship between individual orientations and collective outcomes is far more uncertain. Nice guys often finish last, while those who lack all virtue might well live happily and prosperously ever after. Virtuous farmers can work diligently to produce a bumper crop that results in a disastrous fall in prices for their products. Similarly, an abstemious nation can find itself in the midst of severe depression when consumption fails to keep pace with production.

For this reason, austerity is often an imperfect route to economic improvement. The Great Depression of the 1930s was a classic illustration; individuals were promised that a period of belt-tightening would inevitably generate a spontaneous recovery. But what actually happened was that the restricted purchasing power of consumers meant that there was insufficient demand to justify new investments, and the economy remained stagnant until the government intervened to bolster demand. More recently, the theorists of supply-side economics promised that if people accepted a period of austerity as income was shifted to the rich, there would be a dramatic economic expansion that would raise everyone's standard of living. While the economy did expand in the Reagan years, the consequences were far more uneven than the supply-siders had promised. The rich prospered on an unprecedented scale, but the promised acceleration of productive investment did not occur, and large sectors of the population found themselves worse off than they had been before. Many of the defects of the expansion can be directly traced to the consequences of austerity, such as the cutbacks in nondefense federal spending and the weakness of consumer demand among households whose incomes are below the median.

Nevertheless, the belief in redemption through sacrifice taps deep cultural themes. Even beyond the obvious parallel with Christian notions of individual salvation, there is a close fit with the cultural anxieties of the middle class. Barbara Ehrenreich has written persuasively of the profound fear of affluence that haunts the American middle class.[22] Those who have achieved a comfortable existence through their own efforts as doctors, lawyers, or corporate managers cannot usually guarantee their children a comparable existence unless the children enter a middle-class occupation. While the truly wealthy can usually find sinecures for untalented children or even provide for shiftless children through trust funds, those options are not available to the middle class. The danger for the middle class is that children who grow up in economic comfort will lack the drive and discipline to surmount the hurdles that block entry to middle-class occupations for most children of the poor and working classes. Hence, a periodic invocation of the virtues of austerity fits well with the middle class's own efforts to persuade their children of the necessity of self-discipline and hard work.

These two powerful metaphors act as filters through which the United States' perceptions of its major economic competitors have been refracted. While there are many significant differences between the U.S. economy and those of Japan and West Germany, the preoccupation with differences in personal savings can now be understood. The idea that people in the United States do not save enough fits perfectly with both of these hidden metaphors.

The Savings Mythology

Is it really true that people in the United States are far less frugal than people in Japan and West Germany? Discovering the answer requires examining the way in which Commerce Department economists measure personal savings. The problem is that personal savings is not an item that government statisticians find out directly; there is no question on the IRS form that asks "how much have you put aside this year for savings?" Some of the most important economic measures are derived by asking people; for example, the monthly unemployment figure is based on a survey in which thousands of people are questioned about their work experience in the previous month. However, there is no regular large-scale survey in which people are asked about their savings behavior. The government economists are forced to calculate personal savings indirectly; the frequently cited figures on personal savings are derived by subtracting all consumer purchases from the total disposable income that individuals have. In short, personal savings is simply what is left over from income after individuals have paid taxes and purchased all of their consumption items. Here are the formulas:

1. Personal income – Taxes = Disposable personal income

2. Disposable personal income – Personal consumption expenditures = Personal savings

3. Personal savings rate = Personal savings divided by Disposable personal income

This makes sense because individuals can only save income that they have not spent on other items. However, the accuracy of the personal savings figure rests entirely on the accuracy of the estimates of personal income and personal consumption expenditures. But there are three problems here. First, since the personal savings figure is derived by subtracting one very large number from another very large number, it is extremely sensitive to small changes in those large numbers. For example, if the personal income figure for 1987 were 5 percent higher than the official data indicated, the personal savings figure would increase by 54 percent. Second, there are items where data are highly problematic. In calculating personal income, for example, the government economists make use of a fairly solid source—reports by firms of how much they have paid their employees. But this has to be supplemented with data on the income of self-employed individuals, which is based on their own self-reports to the Internal Revenue Service. Such reports are obviously problematic because individuals have an interest in understating their income to save on taxes.[23]
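The first problem—the leverage that small revisions in the large numbers exert on the residual—can be illustrated with round hypothetical figures (these are not the actual 1987 accounts, which the text's 54 percent example is based on):

```python
# Sketch of the leverage problem in a residual measure.
# Figures are round hypothetical numbers, not the official accounts.

def savings_rate(disposable_income, consumption):
    savings = disposable_income - consumption        # formula 2 in the text
    return savings, savings / disposable_income      # formula 3 in the text

base_income, consumption = 3000.0, 2850.0            # billions, hypothetical
savings, rate = savings_rate(base_income, consumption)
print(f"baseline: savings {savings:.0f}, rate {rate:.1%}")

# Revise measured disposable income upward by just 2%, consumption fixed:
revised_income = base_income * 1.02
savings2, rate2 = savings_rate(revised_income, consumption)
print(f"revised:  savings {savings2:.0f}, rate {rate2:.1%}")
print(f"a 2% revision to income moves measured savings by "
      f"{(savings2 - savings) / savings:.0%}")
# Here a 2% change in the large number moves the residual by 40%.
```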

The third problem is that these estimates of personal income and personal consumption expenditures are made within an elaborate accounting framework that was structured to provide a coherent picture of the economy as a whole. This accounting framework involves a series of detailed decisions about how certain kinds of income flows or expenditures will be handled, and quite often, these decisions are not made to improve the accuracy of the personal savings figure but for the sake of consistency or to improve some other part of the accounts. However, these detailed accounting conventions can have a very significant impact on the estimates of personal income and personal consumption expenditures and indirectly on personal savings.

One of these accounting conventions concerns the treatment of public pension funds. There are public pension funds that work in exactly the same way as private pension funds. Both employers and employees put money aside in a trust fund whose earnings are used to pay pension benefits. However, in the national income accounts, it is assumed that all public pension funds pay benefits directly out of state revenues. One recent study showed that when funded public pension funds are treated in the same way as private pension funds, the personal savings figure for 1985 increased by 37.3 percent.[24]

Another convention that is important concerns the treatment of owner-occupied housing. In figuring out personal consumption expenditures, government statisticians use a strange procedure. They treat people who own their own housing as though they are renters paying rent to themselves. Hence, one of the largest items in personal consumption expenditure is the estimate of the total amount of rent that owner-occupiers pay. While this procedure makes sense for other parts of the accounts, it wreaks havoc on the personal savings figure since the estimate of owner-occupied rent might be quite different from the actual current expenditures that home owners incur. In fact, one consequence of this convention is that the personal savings figure largely excludes one of the main forms of household savings in the United States—the accumulation of equity in homes.

These detailed conventions are particularly important in international comparisons of savings rates. While the basic accounting framework used in Japan and West Germany is quite similar to the American system, there are numerous differences in the detailed conventions and the way that specific estimates are constructed. For example, one recent study of the Japanese savings rate noted differences in the ways capital transfers and depreciation are treated in the two countries. When adjustments are made for these differences for 1984, the Japanese savings rate declines from 16.2 percent to 13.7 percent.[25]
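To see how much such conventions matter, one can combine the two adjustments just cited. The sketch below uses the 37.3 percent pension correction (1985) and the Japanese figures (1984) as reported in the studies cited above; the 5 percent U.S. baseline rate is a hypothetical round number, and the calculation assumes the reclassification leaves disposable income essentially unchanged:

```python
# Sketch: how accounting-convention adjustments narrow the measured gap.
# The 37.3% pension correction and Japan's 16.2% -> 13.7% revision come
# from the studies cited in the text; the 5.0% U.S. baseline rate is a
# hypothetical round number, and disposable income is assumed unchanged
# by the reclassification.

us_rate_measured = 5.0                       # percent, hypothetical
us_rate_adjusted = us_rate_measured * 1.373  # treat funded public pensions
                                             # like private ones (+37.3%)
japan_measured, japan_adjusted = 16.2, 13.7  # percent, as cited

print(f"gap before adjustments: {japan_measured - us_rate_measured:.1f} pts")
print(f"gap after adjustments:  {japan_adjusted - us_rate_adjusted:.1f} pts")
# An apparent gap of ~11.2 percentage points shrinks to ~6.8 before any
# of the land and housing differences discussed below are considered.
```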

Another important part of the discrepancy between Japanese and U.S. savings rates is related not to accounting conventions, but to geography. The high population density in Japan makes land extremely valuable in that country; in 1987, land constituted two-thirds of all Japanese wealth, but only 25 percent of U.S. wealth.[26] This means that the acquisition of land is a much larger component of total personal savings in Japan than in the United States. However, the money that is being put aside for acquiring land for owner-occupied homes is not money that is available for investment by the business sector.[27] Hence, a significant part of the discrepancy between U.S. and Japanese savings rates is irrelevant to the question of international competitiveness.

In short, it is necessary to be extremely skeptical of cross-national comparisons of savings rates because the accounting conventions and the economic institutions differ. Moreover, the differences in institutions can magnify the importance of relatively minor differences in accounting conventions. Instead of pursuing these international comparisons of savings rates further, it is more useful to look at another data source that provides information on personal savings in the United States. The statistical offices of the Federal Reserve Board have developed a number of measures of savings as part of their effort to develop a comprehensive accounting of financial flows in the economy. This data source includes estimates of the annual changes in holdings of financial assets and liabilities (debts) of households. These estimates are based in part on very solid data, such as official reports by pension funds and insurance companies of their holdings, and some less solid data that depend on the indirect calculations of the holdings of households. (See figure 5.1.)

According to the Federal Reserve data, personal savings was quite strong in the United States in the 1980s, and the savings rate actually increased. Since the mid-1970s, the two different government series on personal savings have moved in opposite directions. While the Commerce Department's figures have slid down, the Federal Reserve figures have gone up. Commerce Department analysts have argued that their data are more accurate because the Federal Reserve figures have been thrown off by unrecorded flows of foreign capital into the United States. However, the Federal Reserve figures are actually more reliable because they are based on an analysis of actual financial flows rather than the indirect methodology of the Commerce Department.

[Figure 5.1. Measures of Personal Savings. SOURCES: Economic Report of the President (Washington, D.C.: U.S. Government Printing Office, 1990), table C-29, 327. The Alternative Personal Savings is calculated from the table "Savings by Individuals." Net increases in debt, exclusive of mortgage debt, are subtracted from increases in financial assets. Some additional adjustments are made for 1986–90 to compensate for the substitution of home equity loans for other forms of consumer credit. For a fuller description of data and methods, see Fred Block, "Bad Data Drive Out Good: The Decline of Personal Savings Reexamined," Journal of Post Keynesian Economics 13 (1) (Fall 1990): 3–19.]

The Federal Reserve data include the annual increase in the assets of pension funds and life insurance reserves. This is a figure that is reported directly and involves a minimum of guesswork. It also represents a form of personal savings that is extremely important because it is directly available for productive investment in other parts of the economy. In 1988, the increase in pension fund and insurance reserves (exclusive of capital gains) was $224.4 billion. This is an enormous sum; it was 50 percent higher than the Commerce Department estimate of all personal savings—$144.7 billion. It was also enough to finance by itself 94.8 percent of all net private domestic investment—in capital goods, plants, and housing—in that year. Of course, increases in pension and insurance reserves do not exhaust the supply of personal savings; there are also substantial accumulations of assets in bank accounts, stocks, and bonds.
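The magnitudes just cited can be cross-checked directly; the implied figure for net private domestic investment below is derived from the text's numbers rather than reported in it:

```python
# Cross-check of the 1988 magnitudes reported above (billions of dollars).
pension_insurance_increase = 224.4   # Fed flow data, excl. capital gains
commerce_personal_savings = 144.7    # Commerce Department estimate

print(f"ratio: {pension_insurance_increase / commerce_personal_savings:.2f}")
# -> 1.55, i.e., about 50 percent higher, as stated.

# If that single flow financed 94.8% of net private domestic investment,
# the implied investment total is:
implied_investment = pension_insurance_increase / 0.948
print(f"implied net private domestic investment: {implied_investment:.0f}")
# -> roughly $237 billion (a derived figure, not one given in the text).
```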

The Federal Reserve data also make intuitive sense. It is well known that rich people are responsible for the bulk of household savings because they have far more discretionary income than everybody else. It is also known that the Reagan administration's policies significantly increased the percentage of income going to the richest families. For the Commerce Department figures to be true, the rich would have had to consume their increased income on a scale even more lavish than Leona Helmsley's home remodeling and the late Malcolm Forbes's famous Moroccan birthday party.[28]

Furthermore, personal savings as measured with the Federal Reserve data exceeded net private investment in the economy in every year of the 1980s, sometimes by more than $100 billion. When one adds undistributed corporate profits that are also available to finance investment, the surfeit is even greater. Michael Milken—the convicted junk bond king—was fond of saying, "The common perception is that capital is scarce . . . but in fact capital is abundant; it is vision that is scarce."[29] An examination of the data on personal savings indicates that Milken is correct; the United States does not suffer from a chronic inability to save.

Finally, it is important to emphasize that all of this preoccupation with personal frugality ignores the single most important way in which individuals contribute to economic prosperity—through what can be called "productive consumption."[30] When individuals or the society spends to educate young people or to retrain or deepen the skills of adults, that is productive consumption because it enhances the capacity of people to produce efficiently. Similarly, spending to rehabilitate drug addicts or to improve the physical and mental health of the population is also productive consumption. It is now widely recognized that the development of the capacities of the labor force is an extremely important determinant of a society's wealth.

Yet all of the standard calculations of savings ignore spending on productive consumption. The results are bizarre; a family that deprives its child of a college education in order to put more money in the stock market is seen as contributing to national savings, while the family that does the opposite could appear recklessly spendthrift. This backwards logic makes it harder to identify types of spending and social policies—such as the policies of social inclusion in Japan and West Germany—that could have an important impact on U.S. competitiveness in manufacturing.

Conclusion

The metaphors of "capital as blood" and "redemption through sacrifice" have dominated economic thinking in the United States. The international trade successes of Japan and West Germany have been refracted through these metaphors with the result that people in the United States have learned nothing from the comparisons. On the contrary, the comparisons combined with problematic data have served only to reinforce traditional—but now largely irrelevant—concerns with the quantity of available capital for investment. In the process, institutional issues have been totally forgotten, so that few serious reform proposals have emerged.

And yet, if we return to the neglected institutional dimensions on which Japan and West Germany are similar to each other and different from the United States—the marginality of military production, cooperative work arrangements, supportive financial institutions, and social inclusion—we already have the main elements of a serious program of national economic renewal. Moreover, the passing of the Cold War creates a unique historical opportunity for such a program, since for the first time in forty years a significant reduction in defense spending and a shift of resources to civilian purposes are imaginable.

But this is not the place to flesh out such a program of reform.[31] The point is rather that comparisons with other countries can be the source of real insight into the weaknesses of our own institutions, provided that people are not blinded by obsolete and irrelevant metaphors. As I write this, it is far too early to tell whether the United States will have its own experience of perestroika—the restructuring of economic institutions—in the 1990s. However, several points seem clear. Without an American perestroika, the U.S. economy will continue to weaken and our domestic social problems will only deepen. Furthermore, the most important precondition for a period of domestic reform is what Gorbachev has termed "new thinking"—a willingness to discard outdated metaphors and ideological preconceptions and to examine the world as it actually is.



Six—
Uncertain Seas:
Cultural Turmoil and the Domestic Economy

Katherine S. Newman

This research was supported by the Social/Cultural Anthropology Program of the National Science Foundation (grant number BNS 89-11266).

One kid . . . I don't even know if we can afford having one child. . . . There were two in my family, three in Jane's, and that would be the range [we'd like]. I wouldn't want to have any more than that, but certainly let's say two. But two is going to be a tremendous, tremendous financial burden and drain. Not that you want to think of it in those terms, but right now I can't afford one child, no less two children. Especially when you think about the expenses, maternity expenses and child rearing expenses and all those expenses . . . and then you combine that with the loss of the second income, right? Because you can't have Jane working. Well, you're gonna lose, I figure, a year or two [of her income]. And it's just a double whammy that cannot be overcome.


Dan and Jane Edelman live in a small two-bedroom townhouse on an estate crammed with identical dwellings in northern New Jersey. The houses are stacked cheek by jowl, there is no yard to speak of, and the commute from home to work consumes two hours of their time every day. But the Edelmans count themselves lucky to own a home at all, since many of their friends have found themselves priced out of the market by skyrocketing real estate costs. Thus far they have been able to hold on to their corner of the American dream, but the issue of children looms large in their lives, and as Dan explained it, they cannot easily see past the "double whammy."

Their dilemma is symptomatic of a widespread disease generated by long-term structural changes in the domestic economy. After decades of postwar prosperity and seemingly unlimited opportunity, the American job machine seems to be running down. Wages have stagnated, income inequality is growing, unemployment—though down from its catastrophic levels in the early 1980s—remains troublesome, especially in the Rust Belt cities, and the cost of living continues to rise. Where the Edelmans' parents were able to raise a family on the strength of a single income and the assistance of a GI Bill mortgage, Dan and Jane are having to make tough, unpleasant choices between a standard of living they consider barely acceptable and the pleasures of family life.

The end of the postwar boom has spelled a slowdown, and in many cases a reversal, in the life chances of young families for career advancement, economic stability, and secure membership in the middle class. For many, downward mobility has become a reality: they will never see the occupational trajectory or lifestyle that their parents took for granted. Baby-boomers will not be able to raise their own children in the fashion they themselves took for granted. Remaining in the middle class mandates that husbands and wives will both have to work, coping as best they can with the task of raising children (and the scramble to find day care).

How did this situation come to pass? What happened to the American economy such that college graduates like Dan and Jane Edelman must struggle to provide a middle-class standard of living for their children-to-be? Our domestic economy has undergone profound changes since the end of World War II, changes that have seen American manufacturing industries yield to foreign competition and then disappear at an alarming rate, and our labor force shift into service jobs that do not pay as well as the unionized blue-collar jobs of the past. Variously termed the "deindustrialization of America"[1] or the emergence of a postindustrial economy,[2] this economic transformation has brought with it profound rearrangements in the way Americans earn their keep, in the way wealth is distributed within the country, and in the prospects for racial, gender, and generational groups to claim a "fair share" of the economic pie.

Evidence of long-term structural change in the economy of the United States abounds, and one purpose of this chapter will be to examine it briefly. Inspecting the facts of industrial decline or income inequality is, however, a starting point for a sociological analysis that must dig beneath the language of labor economics to the lived reality these changes impose upon American families. The domestic economy organizes how and where we spend our time, whether we can afford to marry and raise families, the consequences of divorce for an adult's life-style and a child's well-being, the quality of an individual's life in retirement, one's access to child care or health care. Virtually all aspects of our everyday lives and our long-term dreams are shaped by the economic constraints that have emerged during the latter half of the century.

Beyond the practical concerns of organizing work and family life, the undercurrents of economic transformation also reach deep into our cultural universe. Expectations for individual prosperity and upward mobility are deeply engrained in the generations descended from the survivors of the Great Depression. The post–World War II economic boom fueled this tremendous optimism, creating a baby-boom generation steeped in the belief that home ownership was a birthright and a good white-collar job as normal as a "chicken in every pot."

Economic stagnation and rising inequality brought on by deindustrialization have produced frustration and confusion as people discover that the "normal" future they envisioned, and feel entitled to by virtue of being American, may not materialize. Rates of home ownership among young people (twenty-five to thirty-four) have dropped dramatically, with little prospect for reversal. Men and women raised in suburban comfort now find that they cannot provide the same kind of security for their children. In the 1950s and early 1960s, when most of the baby-boomers were born and raised, Ozzie could expect to support Harriet and the kids in a middle-class fashion solely on the strength of his income. Today that life-style has become ever more difficult to sustain, even though the vast majority of Harriets work full time.[3]

If life is proving less affluent than expected among the white middle class, the picture has become far more grim for America's poor, rural and urban. In the past twenty years, family farms collapsed at the highest rate since the Great Depression. Rural poverty, a phenomenon Americans associate with the dark days of the 1930s, has re-emerged as a major social problem in the midwestern states. Some 17 percent of rural dwellers—nearly ten million people—live in poverty, a figure comparable to the poverty rates in inner cities.[4] Inner cities are plagued by abandoned buildings, larger numbers of school dropouts than ever before, the spectre of homelessness amidst the splendor of gentrification, and rising crime as the underground economy (primarily the crack cocaine trade) engulfs neighborhoods in which prospects for legitimate employment have dried up. The problems of the poor spill out of ghetto enclaves and onto middle-class byways in the form of homeless beggars.

How are these social facts connected to the macroeconomic phenomenon of deindustrialization? Even more important, how has the social experience of economic stagnation and increasing inequality shaped a new view, however confused and ambiguous, of the American experience? The meaning of "being American" has been inextricably embedded in expectations for upward mobility and domination of international trade. The 1970s and 1980s have reshaped this self-perception in ways that we have yet to fully articulate. The change is evident in our fears for the country's economic future, in our frustrations over the impact of change on our standard of living, in a resurgent conservatism over the responsibilities of the fortunate toward the fate of the poor, in a heightened sense of competition between and within generations for the resources needed to raise a family or retire in comfort, and in increasing worries over the long-term impact of inner-city decay and minority poverty.

The dimensions of change are best understood by looking first at the macroeconomic facts of industrial decline. Thereafter, I explore the impact of this transformation in the realm that matters most to American families: income and employment. Finally, I will consider how deindustrialization has influenced the expectations and experiences of the different generations of Americans who must find their way through the new economy. My quest is to consider the cultural meaning of the country's economic decline.

The Parameters of Deindustrialization

The unprecedented wave of industrial plant shutdowns in the 1970s and 1980s attracted the attention of a wide variety of labor economists and industrial sociologists. Conservatives among them argued that the downturn was simply another "swing" in the business cycle, a term used to describe the episodic ups and downs considered natural, normal features of capitalist systems. To the extent that anything else was to blame for America's economic doldrums, conservatives pointed to unproductive and "overpriced" labor. Union demands were understood to be the root cause of the flight of manufacturing overseas, where wages are lower.

Liberal economists took issue with this view and began to look for new paradigms to describe the postwar development of the U.S. economy. Two well-known scholars on the political left, Barry Bluestone and Bennett Harrison, argued that a fundamental change in the country's economic structure was underway. Their much-debated book, The Deindustrialization of America, linked the demise of the country's manufacturing sector to the movement of industry overseas and to the spectacular increase in corporate mergers, and in so doing articulated a new and darker vision of the country's economic predicament:

Underlying the high rates of unemployment, the sluggish growth in the domestic economy, and the failure to successfully compete in the international market is the deindustrialization of America. By deindustrialization is meant a widespread, systematic disinvestment in the nation's basic productive capacity. . . . Capital . . . has been diverted from productive investment in our basic national industries into unproductive speculation, mergers and acquisitions, and foreign investment. Left behind are shuttered factories, displaced workers, and a newly emerging group of ghost towns.[5]

Bluestone and Harrison accused American corporations of dismantling even profitable plants to provide revenue for diversified investment, and of relocating manufacturing facilities to low-wage, nonunionized communities, often at the taxpayers' expense, since these shutdowns could be written off on corporate tax bills.

Deindustrialization has been most pronounced in the Rust Belt zones of the Northeast and Midwest, yet Bluestone and Harrison showed that nearly half the jobs lost to plant shutdowns during the 1970s were located in the Sun Belt states of the South and West. Hence the trend cannot be dismissed as a regional problem; it is a nationwide migraine headache. Overall, the 1970s saw the loss of nearly thirty-eight million jobs to runaway shops, plant shutdowns, and cutbacks.[6]

The vulnerability of labor in the face of rising unemployment quickly led to declining average wages, even for those who were still on the job. Downward pressure on wages was exerted through "freezes and cuts in wages, the introduction of two-tiered wage systems, the proliferation of part-time and 'home' work, and the shifting of work previously performed by regular (often unionized) employees to independent, typically nonunion subcontractors."[7] Estimates of overall wage losses in the durable goods sector—which includes automobiles, steel, machinery, and electrical equipment—amount to nearly 18 percent between 1973 and 1986. This translates into a loss of more than $16 million per hour of work, deducted from American paychecks.[8]

Communities suffer collective punishment when faced with local economic contraction. Towns plagued by plant shutdowns usually see sharp declines in the health of industries that supplied parts or raw materials to the now-vacant factory. Taverns and grocery stores feel the pinch not long thereafter, as workers laid off from major employers cut back on their spending. Unemployment benefits cushion the impact for a time, but eventually long-term joblessness translates into mortgage defaults, higher welfare expenditures, and outmigration. When workers are no longer on the payroll, their home towns must weather the loss of income and sales tax revenues. This in turn forces unwelcome cuts in the quality and quantity of public services (schools, hospitals, roads, etc.), which makes an economically depressed area even less attractive for new investment. As if this weren't enough, the gloom and doom of deindustrialization generates rising demand for social and medical services that can address stress-related disorders, among them psychological problems, alcoholism, and high blood pressure.[9]

Enterprising officials, hoping to find ways to reverse the downhill slide, search high and low for new industries to fill the gap in the local economy. With their backs against the wall, communities compete against each other to attract new corporations, providing tax breaks or promises to construct new sewer lines in the hope of outbidding other towns. Apart from the fiscal burden this places on local residents, the very vulnerability of deindustrializing communities provides them little leverage in bargaining with new companies. They dare not ask much in return for the tax breaks lest they risk the loss of a new business to another town that has proven to be less demanding. Hence, despite the public investment involved, they cannot insist that a company return the favor and stay put (or even necessarily exact the promise of advance warning should a plant shutdown happen again).

The spectre of industrial decline does not tell the whole story of deindustrialization. There is a growth side to the saga as well, represented by an employment boom in the service sector. Conservatives often point to the remarkable record the United States enjoys in job creation when compared to its relatively stagnant European counterparts. What is often missed in this laudatory portrait is the low-wage character of the American "job machine." Services ranging from fast food to banking, from child care to nursing home care, have burgeoned. In most of these growth areas, however, the wage structure has been unfavorable. A small number of well-paying professional jobs has been swamped by minimum-wage positions. About 85 percent of the new jobs created in the 1980s were in the lowest-paying industries—retail trade and personal, business, and health services.[10] More than half of the eight million (net) new jobs created in the United States between 1979 and 1984 paid less than $7,000 per year (in 1984 dollars). While many of these were part-time jobs (another growth area of dubious value), more than 20 percent of the year-round, full-time jobs created during this period paid no more than $7,000.[11] The economic expansion of the 1980s, much heralded by Presidents Reagan and Bush, failed to improve the standard of living of many Americans because the jobs it generated were disproportionately to be found at the low-wage end of the spectrum.[12]

Hence workers displaced by deindustrialization, new entrants to the labor market (young people and women), and the increasing number of elderly returning to work to supplement retirement income find their options limited. Moreover, the improvement experienced by black, Hispanic, and Asian workers in the 1960s and early 1970s was all but wiped out in the 1980s as they flooded into low-wage jobs. Younger workers were also disadvantaged: one-fifth of the net new year-round, full-time jobs held by workers under thirty-five years old paid under $11,000.[13] Workers unlucky enough to find themselves in the industrial heartland faced the most hostile climate of all, since the region exceeds all other areas of the country in the "ability" to generate bad jobs: 96 percent of the new employment in the Rust Belt Midwest is in the low-income category.

These "replacement" jobs are even more problematic because they generally fail to offer the benefits routinely attached to "good" jobs. As of 1987, roughly 17 percent of American employees had no health insurance and 40 percent were not covered by a pension plan.[14] This is partially attributable to the low levels of unionization in the growing service sector industries: workers who are not organized have no collective bargaining power and hence suffer from relatively low wages and poor benefits.

Employers' increasing reliance on temporary workers hardly helps matters. These "marginal" workers—for example, "Kelly Girls" and "Accountemps"—are often employed full-time, but lack yearly contracts and can be let go with virtually no notice. Temporary jobs are notorious for denying workers insurance and pension coverage as well as prospects for advancement. Compared to permanent employment, temporary work has skyrocketed, growing nine times faster than total employment since 1979. By 1987, Kelly Girls and agencies like it could claim nearly 1.2 million workers.[15]

Moonlighting is also on the increase, with large numbers of men and women working two jobs, either to make ends meet or to squirrel away some savings. The practice was not unknown in the past for men, particularly those attempting to support families without the assistance of working wives. Now, however, with divorce increasing (and the prospects for supporting a family on a single income growing ever more problematic), women are moonlighting in record numbers. In 1970 only 636,000 women held down two jobs; by 1989 the number had jumped to 3.1 million.[16] If low-wage job growth persists and divorce remains a fixture of the social landscape, we can look forward to more of the same.

The imagery of deindustrialization—ghost towns and empty parking lots—can easily lead one to imagine that the old single-industry cities have gone the way of the dinosaurs. Although it is true that many a company town has disappeared and that urban economies appear to be more diverse in their industrial base than they were in the days of the robber barons, narrowly based local economies are not entirely a thing of the past. The growth of white-collar industries has introduced a new form of dependence into the domestic economy. Stripped of their manufacturing giants, cities like New York, Boston, Los Angeles, Houston, and Chicago have increasingly come to rely on white-collar businesses—particularly in financial services and information technology—as the engine of their economic development.

The consequences of such dependence are twofold. On the one hand, we see an increasing divide among city dwellers between those who have high wages, fancy apartments, and affluent life-styles, and those who were turned out of the old manufacturing industries that once dominated city life.[17] Fur-clad brokers are confronted by homeless men, women, and children in the subways and on the streets. Poor people's housing (single-room occupancy hotels, flophouses, and the like) has evaporated in the face of demand for luxury buildings, and the results of this wholesale eviction of the dispossessed are visible to everyone.[18] In the cities and the suburbs, Americans are relentlessly exposed to the growing gap between the haves and the have-nots.



But those in the fur coats are not so secure either. In February of 1990, the pages of the Wall Street Journal—the self-proclaimed "daily diary of the American dream"—were filled with stunned accounts of the bankruptcy of Drexel Burnham Lambert, one of the country's premier brokerage firms. After a decade of astronomical profits, Drexel filed for Chapter 11 and stranded 5,000 fast-track traders. Two weeks later, Shearson/Lehman announced a 4 percent reduction in its workforce—another 2,000 well-paid workers were let go, with more to follow. Nineteen ninety was not a particularly opportune year to be an unemployed stockbroker, for Wall Street was still reeling from the impact of the massive downturn of October 1987; thirty thousand employees received pink slips in the aftermath of Black Monday, when the worst stock market crash since 1929 sent billions of dollars in investment capital up in smoke. Wall Street salaries have plummeted as overqualified movers and shakers flood the market. The volatile nature of financial services, which are sensitive to fluctuating interest rates and to the whims of foreign and domestic investors, has combined with the feverish takeover activity of the past decade to make life a bit precarious at the top. Once filled with unstoppable optimism and a degree of arrogance over their successes, the denizens of these high-level firms have joined the ranks of fellow white-collar workers who have learned to watch their backs and duck—if possible—when the pink slips cascade out of the boardroom.[19]

The consequences of this volatility for a city's employment and tax base are considerable. Ray Brady, the CBS News reporter for economic affairs, reported the downsizing at Shearson/Lehman with an ominous tone in his voice.[20] Brady pointed out that every job on Wall Street generated two "support" positions elsewhere in the Big Apple. The corollary seems obvious: the loss of those big salaries translates into higher unemployment for the "little guys." Indeed, Brady noted, the impact of the 1987 Black Monday crash had already translated into a local downturn of no small proportions: in the two years after the Wall Street disaster, retail sales in New York were down 6 percent, restaurant business fell by 10 percent, and the real estate market dropped by about 9 percent, with sales sluggish and prices falling. A variety of factors may have influenced these "secondary" losses, but it is fairly clear that cities like the Big Apple have developed an unhealthy dependency on financial and information service industries. In the postindustrial city, when the brokerage business contracts pneumonia, the rest of the town may be in for a bad bout of the flu, at the very least.

City Hall in the postindustrial urban center is no less vulnerable to the fluctuating health of white-collar industries than the political leadership of the older Rust Belt centers was to that of heavy manufacturing. When service industries fire their workers, or transfer their operations out of expensive city centers to remote "back room" facilities in faraway suburbs, or threaten to leave altogether unless they are given tax breaks for staying, tax coffers begin to empty. Caught between declining revenues and rising demands for services in the wake of human displacement (homelessness, unemployment, ghetto deterioration), the Gotham cities of the United States are in trouble. Politicians hint at the inevitable need for new taxes to balance the books and refurbish urban infrastructures, only to find strong resistance from industries already straining to compete with overseas counterparts and from urban families trying to keep their heads above water.

Who Owns the American Dream?

As the concentration of the work force shifts from manufacturing cars to flipping hamburgers or processing insurance claims, communities are thrown into upheaval. The industries that once provided continuity for generation after generation of blue-collar workers disappear, leaving behind empty parking lots and empty souls. People who have spent their entire working lives in one factory find they must accept premature, and comparatively meagre, retirement, bereft of all the entitlements they expected: health insurance, pension funds, and the peace of mind that comes with knowing that your efforts were part of a larger enterprise that will go on after you.[21]

Young men, particularly minority men, experience rising unemployment as the industries that traditionally provided jobs for unskilled newcomers to the labor market (urban manufacturing) dry up.[22] Meanwhile, job growth in service industries is most pronounced in suburban areas, far from the inner-city ghettos most in need of entry-level employment. The "mismatch" between those in need of jobs and employers in need of employees has become a major logistical and social problem.[23] At all levels of the social structure, economic upheaval leads to social disorganization.

The chaos of deindustrialization brings with it a particularly unfortunate departure from post–World War II trends toward greater equality in the distribution of resources. During the twenty-five years that followed the war, average income in the United States grew at a healthy pace. Even more important (at least from the standpoint of fairness), the distribution of these gains benefited Americans who fell into middle and lower income groups. The country still had its rich and its poor, to be sure, but the gap between them closed to a greater degree than had been the case before 1945. But beginning in 1973, economic growth came sputtering to a halt.[24] Family incomes stopped growing, even though a record number of families had multiple earners. Workers lucky enough to be in high-wage industries fared comparatively well during the post-1973 period, but those in low-wage sectors took the brunt of the slowdown. The real income of the bottom 40 percent of the population fell by about 11 percent between 1979 and 1986. At the same time, the incomes of the richest segments of the country grew at rates far exceeding the average; the top 1 percent gained 20 percent.[25] It will surprise no one to learn that these differential growth rates led to a stunning 18 percent jump in the inequality of income distribution. Virtually all the progress made toward equality in America during the 1950s and 1960s was wiped out by the rising income inequality of the fifteen years that followed.[26]

Some scholars argue that the erosion of equality threatens to put the American middle class on the endangered species list.[27] For as the fortunate few ascend from the middle income level to the upper middle class, and the unfortunate many experience downward mobility and land in lower income groups, it is the middle that seems to be disappearing. Definitions of the middle class are notoriously slippery, since they sometimes refer to income, while at other times they revolve around occupational prestige. But if we examine the income measure for a start, there is evidence to suggest that the percentage of American families who earn what might be termed a middle income ($20,000 to $50,000 per year) is declining. Katherine Bradbury, a senior economist at the Federal Reserve Bank of Boston, calculated that the size of the middle class shrank by about 5 percent between 1973 and 1984, with the lion's share of these ex-middles dropping down the income charts and less than 1 percent moving up.[28] These kinds of findings have caused Harrison and Bluestone to dub our time the epoch of the "Great U-Turn," since the evidence points to a historic watershed, a reversal of the trends we had come to see as quintessentially true of the American economic experience.

Downward mobility in terms of income is bad enough, but when one considers the difficulty of using what remains to secure a middle-class standard of living, the real social significance of postindustrial wage structures becomes even clearer. Frank Levy, professor of economics at the University of Maryland and author of the influential volume Dollars and Dreams, has shown that up until the 1970s being in the middle income range virtually guaranteed home ownership and most of the other perquisites of the American Dream. After 1973 even remaining in the middle (much less dropping down into the low end of the income spectrum) no longer did the trick. Housing prices rose faster in the 1970s than the prices of other goods, owing in part to the unprecedented demand created by the baby-boom generation's desire for real estate. This, coupled with wage stagnation, placed home ownership out of reach for a growing number of American families—even though more and more of those families were dual-income households. Owning a house is an indispensable benchmark of middle-class status.[29] Men and women who discover that this goal is out of their reach have effectively been written out of the American Dream.

When we look at aggregate statistics on income or housing, we often miss what is sociologically most significant about the changes that postindustrialism creates. The impact of declining average wages on life-style, for example, was experienced most profoundly by younger families. When the slowdown in income growth started in 1973, families that were already secure in their homes, with fixed-rate mortgages, savings accounts, and the like, were "over the hump" and had relatively little to fear. They saw the value of their assets skyrocket and were able to trade up the real estate market, exchanging a two-bedroom starter house for a larger, more elegant one, using the exploding value of their original home to finance the move. But young families, particularly those in the baby-boom generation, were caught on the other side of the divide. They came of age in a sick economy and, owing in part to the pressure of their sheer numbers, never fully recovered.

Ever since the climb out of the Great Depression, each succeeding generation has expected to do better than its parents. The gospel of upward mobility received tremendous reinforcement in the two decades after World War II because economic expansion, coupled with generous government intervention in the form of the GI Bill and other middle-class entitlements, did make it possible for adults of the 1950s to fulfill their material ambitions. But after 1973, this great "American assumption" ran into the wall of economic stagnation and high inflation. The generation gap is no longer simply a matter of musical tastes or the length of one's hair: it now describes a material chasm.[30] Baby-boomers who grew up in suburbia, with Mom at home and Dad at the office, are finding the gates to the suburbs locked and the pressure to keep both Mom and Dad in the workplace unrelenting.

The Cultural Costs of Downward Mobility

American culture has always celebrated forward motion, progress, upward mobility. We are true optimists, always assuming that the world—or at least our corner of it—will continue to provide more for us than it did for our parents, and more for our children than we have today. This central expectation dies hard. When reality fails to provide what we think we are owed, we seldom readjust our expectations. Instead, we stew in frustration or search for a target for our anger, pointing fingers at more fortunate generations, incompetent presidents, disloyal corporations. When this fails to satisfy, Americans are often inclined to look within, to personalize wide-scale economic disasters in the form of individual moral failings.



Downward mobility, both within and between generations, is an experience particularly ripe for this kind of morality play. Managers who lost their jobs in the last decade's merger mania often find that they cannot hold on to a systemic, structural vision of their loss. Even when they know, at some level, that forces larger than any individual have left them pounding the pavement in search of new jobs that will pay less, be less secure, and symbolize their descent down the class ladder, they cannot cling to the notion that they are not to blame. Instead, managerial culture in its American form leads them to internalize their occupational troubles and pushes them to comb through their personalities for the hidden flaws that justify their fate.

The culture of meritocracy they embrace teaches that a person's occupational standing is an accurate barometer of his or her intrinsic moral worth. When that barometer fails, it can only mean that the person is a less than fully respectable human being. Meritocratic individualism is so potent a theme in American culture that it can thoroughly undermine decades of evidence to the contrary. John Kowalski, a denizen of the Forty Plus Club, an organization for unemployed executives in Manhattan, devoted thirty years of his working life to a trade association representing the chemical industries. He rose steadily up the ladder of responsibility, advancing over time from assistant secretary to vice-president. John was proud of his work, and had every reason to think he had done a good job, when the board of directors suddenly announced he was to be passed over for the vacant presidency. They let it be known that John was no longer really welcome in his job and that "for his own sake" he ought to be looking elsewhere.

One might think that someone like John, who has dedicated virtually his entire adult life to this organization, would be furious, indeed, filled with righteous indignation. Yet his belief in the truth of meritocracy leads him instead to point the finger back at himself: "I'm beginning to wonder about my abilities to run an association, to manage and motivate people. . . . Having been demoted . . . has to make you think. I have to accept my firing. I have to learn that that's the way it is. The people who were involved in it are people I respect for the most part. . . . They are successful executives. . . . So I can't blame them for doing what they think is right. I have to say where have I gone wrong."[31] I interviewed dozens of men and women cast out from the heartland of corporate America, and rarely did anyone fail to reach the same conclusion John expresses here: "There must be something wrong with me." The cost of intragenerational (within an adult's lifetime) downward mobility has been a massive loss of confidence among some of America's most experienced white-collar managers. As the economic disruptions described earlier in this chapter spread, so too does this culturally defined uneasiness and depression. It engulfs workers and surrounds their children, who look at their parents and think, "If this could happen to them, it could happen to me."

The psychic pain caused by unemployment is an enduring problem for those on the receiving end. During depressions and recessions, the number of people who must survive this relentless destruction of their self-esteem grows. But even in good times there are always thousands of American men and women who find themselves falling out of the social structure, struggling to regain a place and an identity they can live with. Some manage to succeed, but many do not: they live for years with their identities in limbo. This is particularly true when the only jobs they can find pay a fraction of what they earned before they found themselves on the unemployment line. For American culture accepts the meritocratic argument that your job defines your worth as a person and subjects those who have moved down the ladder to a devastating critique of their value.

Even after they have recovered, the experience of downward mobility leaves most people insecure and shaken. They never quite trust their new employers or themselves. They cannot leave the past behind, but worry instead that they may plunge down again and join the legions of the lost for a second time. And many do have just that experience, for in their new jobs they are last hired, and when shake-ups occur, as they routinely do, they are often first fired once again.

When downward mobility occurs in one person's adult lifetime, the tragedy sticks in the craw and afflicts the generations to come in the form of nagging insecurity and self-doubt: will this awful descent down the occupational ladder happen to me, the son or daughter of the dispossessed? Am I carrying a gene for disaster? Children of the dispossessed and downwardly mobile can never be entirely sure that the security they once considered a middle-class birthright will be theirs to claim in adulthood.

In fact, there are reasons to suspect that downward mobility of another kind will describe the fate of many in the future. This "other kind" involves a comparison of the standard of living enjoyed by the baby-boom generation (and younger groups coming behind it) with the good fortunes of the generation that graduated to adulthood in the immediate aftermath of World War II. For as I noted earlier, the economic slowdown that began in 1973 caught different generations at different points in the life cycle and bifurcated their experience vis-à-vis the American dream. Where the older generation could expect to own their own homes, the younger group is finding this increasingly beyond reach. Where postwar adults were party to the creation of a "new middle class" of engineers, doctors, psychologists, corporate managers, and the like, their children found the professions crowded and competitive.[32] If all went well, the adult generation of the 1950s could expect to see their careers rocket upward, only to "plateau" (in terms of advancement up the corporate hierarchy) sometime in their mid-fifties. Today's corporate managers are finding that the pressure of their numbers, combined with a slowing economy, will force the "plateau" to come earlier in their lives: in their forties. They will have to contend with salaries that are slower to increase, and with the psychic consequences of an artificially shortened horizon for professional development. They will "top out," unable to rise any higher in the organizational structure in which they work, at a much younger age than was true for their fathers (or mothers).

Intergenerational downward mobility is causing broad-based cultural confusion. It is a byproduct of economic stagnation and demographic pressure, but these sociological facts are of little comfort to average people who cannot understand why they cannot fulfill the promise of bettering their parents' standard of living. It has been part of the American belief system to assume that each generation outdoes the last, and that the parents' sacrifices (taking a sweatshop job at the turn of the century) will be repaid by their children's successes.[33] Increasingly, it would appear that with or without parental sacrifice, the baby-boom generation and those coming behind it are likely to experience a significant drop in their standard of living compared to that of their parents.

If human beings were able to adjust their expectations every time the consumer price index came out, this would be of little concern. But our sense of what is normal, of what the average person is entitled to have in life, does not change so easily. Men and women raised in suburban comfort do not simply say to themselves, "This is beyond my reach now; my children will have to settle for less; so be it." Instead, their expectations remain and their frustration grows to epidemic proportions.

For the past two years, I have been collecting life histories from two generations of Americans who graduated from one ordinary high school in a small town near New York City. The community they grew up in was a typical middle-income suburb of Manhattan. It is a bucolic, quiet enclave of commuter homes for people who earn a living in the Big Apple or in the larger cities of northern New Jersey. Developed in the 1950s, "Doeville"[34] attracted growing families out of the congested city. Mothers stayed home in those days, and the fathers of this town went out to work as skilled blue-collar workers, midlevel managers, and young professionals at the beginning of careers in medicine or law. Many fathers established their own businesses, as contractors or freight haulers, and made a good living off the booming housing industry of the 1950s and 1960s.

There are homes in Doeville that are genuine mansions, with white pillars and circular driveways. But most of the houses are modest three-bedroom New England–style places or fake colonials, with comfortable yards and two-car garages. Their first owners, those who moved into Doeville in the early 1950s, could purchase a home fairly easily on a single income, financed by the GI Bill. Doeville's children (of the 1950s and 1960s), who are now in their late twenties and late thirties, remember their early years building treehouses in the backyard, playing in the woods and the creeks nearby, going swimming in the local pool, and gradually moving through the normal ups and downs of adolescence. They had a "perfectly average," not particularly privileged, way of life, as they see it now.[35]

Today Doeville homes cost a fortune. Modest houses that were easily within reach when my "informants" were kids now routinely sell for a third of a million dollars. The houses haven't changed, and the people who grew up in the town haven't changed their view that living in Doeville is an entitlement of middle-class life. But almost none of the people who graduated from high school in this town could possibly afford to live there now. They have been evicted from their own little corner of the world—or anywhere similar to it—by the declining value of their paychecks and the exponential increase in the cost of those ordinary houses.

Fred Bollard, a 1980 graduate of Doeville High School now verging on thirty, lived at home while he finished a night school accounting degree at a local private college. Fred's parents still live in Doeville, but living there is out of the question for Fred or anyone else he grew up with:

People who grew up—myself, my friends, my brothers and sisters and their friends—they don't stay in the area. Probably first and foremost, they can't afford it. The housing is literally ridiculous. My parents purchased their house for $25,000. Now the house on that little piece of property is appraised at $280,000. So now you have to make $70,000 a year to afford it. On two incomes you could do it, but one person? So that's why I think the major change is that the people who grew up there can't stay there. They have to leave and live elsewhere.

The progeny of Doeville who are now in their late twenties and thirties are finding that even when they give up the hope of living in a community like the one they grew up in, they cannot really satisfy their desires for a comfortable standard of living and the pleasures of family life. Jane and Dan Edelman, whom we encountered at the beginning of this chapter, would like to live in a place like Doeville, but can see that this is impossible, even on their combined incomes. Now that they would like to start a family, even their ability to support their modest home may be compromised.

The Edelmans are caught in a squeeze that makes them squirm. Raising a family is supposed to be a personal decision, an expression of love between parents, and a dramatic confirmation of the solidarity that binds their own relationship together. It is meant to be the antithesis of the calculating, rational decision that, for example, buying a car might represent. American culture separates emotional and pragmatic domains. But the purity of this distinction cannot always be maintained, and it wasn't during the Great Depression, when sheer necessity forced men and women to calculate carefully over the most personal of decisions. Absent catastrophic conditions, however, American culture regards pragmatism as a separate orientation from the affairs of the heart. Adhering to this cultural blueprint now seems something of a luxury. Young adults who grew up in Doeville feel compelled to choose between maintaining a standard of living they feel is essential, though hardly equivalent to what they grew up with, and establishing a family. It is a dilemma not easily resolved, for owning a home and having a family are fundamentally intertwined. Men and women who grew up in private homes feel that they must provide the same for their own children and that it would be irresponsible of them to plan a family absent that critical resource.

One might argue that previous generations managed on the strength of rented apartments and a much-reduced standard of living. This is beside the point. The expectations fueled by the postwar boom period of the 1950s and 1960s have become benchmarks against which descendants measure what is reasonable to expect in life. The comparison they naturally make between generations that are chronologically contiguous makes the wound run even deeper. For Dan and Jane ask, "Why are we in this predicament when it seemed so easy only twenty years before our time?"

One disturbing consequence of this intergenerational squeeze is the need baby-boomers and their younger siblings feel to calculate every move they make. Spontaneity appears to be a luxury; planning a necessity. Hypercalculation rears its head where family planning is concerned. It is also omnipresent when career decisions are at stake. For one cannot afford to make a mistake, or to be too much of a risk taker. The consequences could be disastrous: you could fall off the fast track and never recover. The workplace becomes an arena of relentless competition, as Anthony Sandsome (another Doeville graduate of the class of 1980) puts it:

Brokerage is the kind of thing where I get up in the morning and I'm in a boxing ring—not even in a boxing ring—I'm in a jungle. I'm armed with guns, knives, fists, you know, I am fighting for my money each day. I'm knocking the hell out of someone and someone is knocking the hell out of me.

Anthony can't get out of the ring, even though it sometimes exhausts him. He might never get back in. One might be inclined to expect this attitude from a broker, since the field is well known for its cutthroat tendencies. But sentiments of this sort are commonly expressed by Anthony's classmates who are accountants, teachers, secretaries, and the like. Work is not the place where one finds personal fulfillment or fellowship; it is the place where survival of the fittest is the goal and the consequence of being less than the best is likely to be a serious drop in one's standard of living.

To a degree this has always been an aspect of the American workplace, which is viewed by many as an arena for the Darwinian struggle. But for this generation, making a mistake may have draconian consequences. For the treadmill begins when a young woman or man must choose a college degree course that will lead to practical payoffs in the workplace and a job that has the advancement potential needed to purchase a life-style consistent with middle-class expectations. That this has become increasingly difficult to pull off is met not with abject resignation, but with a winching up of the demands individuals place on themselves to calculate life decisions more carefully, and with mounting frustration over the knowledge that, despite this increased self-surveillance, life may not turn out to be what was expected.

Who is to blame if this happens? Doeville residents are not entirely sure. But when they reflect on the apparent permanence of their economic exile, the dismay of both generations in Doeville is layers deep. Doeville parents believe their children are entitled to live in their hometown or somewhere just like it. That is what they worked for: to ensure that their children would be as well off as, if not better off than, they have been. What they are witnessing is the opposite trend: their children are falling farther and farther behind. Doeville's refugee youth couldn't agree more. Anyone who has been able to escape this pressure is perceived as having benefitted from some unfair advantage. There are such people moving into Doeville now, and most of them are of Asian origin. New York City is a magnet for overseas placement of Asian executives, posted stateside by Japanese and Korean firms with American subsidiaries. Doeville is an attractive place for these newcomers to live since it is close to the City, yet cloistered from the perceived dangers of urban living. The strength of Asian currencies against the American dollar puts Doeville homes well within reach of overseas executives, even as those homes recede from the grasp of "native" Americans. As Maureen Oberlin, a life-long Doeville resident who cannot afford to buy into the community as an adult, sees it, this is a cause for alarm:

This area particularly has had a heavy Asian [influx]. If you go into some of the schools, Doeville is a perfect example, the high school . . . is getting to the point that it's almost 50-50, the percentage of Asians as opposed to Caucasians. The elementary schools are even higher [in percentages of Asian students]. It's frustrating that they can afford it and we can't. We've lived here all our lives. We're working for it and they can just come up with the cash.

We are accustomed to the idea that blue-collar auto workers in Detroit will take a sledgehammer to a stray Toyota parked in the factory parking lot. The expression of frustration in the face of growing Japanese dominance of the American automobile market is understandable. Blue-collar labor faces a direct economic threat in the form of a competition we are losing. This is hardly news.

That the displacement has reached the quiet streets of America's middle and upper middle classes may come as something of a surprise. Nativism, a xenophobic reaction to the threat of "invasion" by alien peoples, is rearing its head behind the white picket fences of suburbia. Residents of Doeville, parents and exiled grown children, question whether the American melting pot is big enough for these newcomers, who seem to be starting at the top rather than working their way up through the ranks.

Conclusion

Deindustrialization is a macroeconomic phenomenon with profound consequences for our daily lives and our long-term ideals. The great American assumption of prosperity dies hard. When our experience falls short of expectations, as it does when downward mobility strikes a business executive, we are inclined to blame ourselves. Here the system appears to function perfectly well; we simply see ourselves as defective parts that need to be cast out or repositioned at a lower rank, more in keeping with our "natural" abilities.

When downward mobility distances the experience of one generation from that of another (adjacent) generation, the blame may also fall on the shoulders of the hapless individual who failed to calculate properly, who allowed an interest in music to overwhelm the better judgment that counseled accounting instead. Or it may surface in scapegoating. Doeville families look at their new Asian neighbors and ask: why are they able to waltz in here and buy up homes in the neighborhoods we cultivated when we can no longer do so? The sentiment is not a pretty one, for it reflects an underlying sense of entitlement: only certain kinds of people—real Americans who speak English and want to assimilate—should be allowed the fruits of Doeville life. But it is an understandable reaction to the frustration of an intergenerational trajectory that is headed downhill in a culture that only has room for the good news of ever-increasing prosperity.

Nativism is but one potential response. One hears as well the faint beat of intergenerational warfare: why should a thirty-year-old woman pay hefty Social Security taxes to fund the retirement of elder Doevillians, when they no longer pass school bond issues to support the education of young children in a neighboring part of the county? Should the generation that saw the postwar boom and reaped its benefits be entitled to a comfortable retirement, when the baby-boomers pushing up from below may see neither boom nor comfortable retirement? America's social contract is fraying at the edges. We are no longer certain what we owe each other in the form of mutual support, or how open we can "afford" to be in enfolding immigrants into our society. We first calculate the costs and often fail to see any benefits.

Awareness of the fragility of the bonds holding us together is dim at best. Doeville residents look upon the country as an anthropomorphic being that once had a secure identity and is now adrift. They are confused by the apparent weakness of the economy and by the sense of directionless motion we encounter at every turn. We seize upon high technology as the solution, only to find that we have lost our markets to foreign competition. We indulge in a frenzy of hostile takeovers and mergers, only to find that unemployment and burdensome debt follow in their wake. We send our sons and daughters to Wall Street in search of a financial holy grail, and discover instead that they are nearly as vulnerable to downward mobility as the steel mill worker on Chicago's South Side.

The turmoil we have seen in the domestic economy since the postwar period has brought us tremendous prosperity at times, and a roller coaster of insecurity at others. Most of all, it has created a "postmodern" sense of unpredictability: we no longer have a firm grip on where the domestic economy is headed, on where the end point of change is to be found. This is not a particularly easy moment for Americans, who look toward the twenty-first century with clouded vision. We have not given up on our identity as a dominant force in the international world, but we see the limits of our power in the faltering economy. There are times when reality is at dramatic odds with our cultural expectations, and this is one of those times.



Seven—
Labor and Management in Uncertain Times:
Renegotiating the Social Contract

Ruth Milkman

Thanks to Miriam Golden, Naomi Schneider, Judith Stacey, and Alan Wolfe for their helpful comments on an earlier version of this chapter.

The U.A.W. . . . is the largest labor union on earth. Its membership of 1,300,000 embraces most of the production workers in three major American industries. . . . The U.A.W. itself is diverse and discordant, both in its leaders and its members, among whom are represented every race and shape of political opinion. . . . The union's sharp insistence on democratic expression permits bloc to battle bloc and both to rebel at higher-ups' orders. They often do. But U.A.W. is a smart, aggressive, ambitious outfit with young, skillful leaders. . . . It has improved the working conditions in the sometimes frantically paced production lines. And it has firmly established the union shop in an industry which was once firmly open shop. . . . It is not a rich union. Its dues are one dollar a month, which is low. . . . U.A.W. makes its money go a long way. It sets up social, medical, and educational benefits. . . . In its high ranks are men like Reuther, who believes labor must more and more be given a voice in long-range economic planning of the country.[1]


Curious as it may seem to a late-twentieth-century sensibility, this homage to the United Auto Workers is not from a union publication or some obscure left-wing tract. It appeared in Life magazine in 1945, a month after V-J Day and not long before the century's largest wave of industrial strikes, led by the auto workers, rocked the nation. The cover photo featured a 1940s Everyman: an unnamed auto worker in his work clothes, with factory smokestacks in the background. Blue-collar men in heavy industry, with powerful democratic unions and, at least implicitly, a strong class consciousness—only forty-five years ago this was standard iconography in the mass media and in the popular thinking that it both reflected and helped shape. Organized labor, then embracing over a third of the nation's nonfarm workers and 67 percent of those in manufacturing, was a central force in the Democratic party and a vital influence in public debate on a wide range of social questions. The industrial unions founded in the New Deal era were leaders in opposing race discrimination (and to some extent even sex discrimination) in this period, and their political agenda went far beyond the narrow, sectional interests of their members. Indeed, as historian Nelson Lichtenstein has written, in the 1940s "the union movement defined the left wing of what was possible in the political affairs of the day."[2]

Today, this history is all but forgotten. Blue-collar workers and labor unions are conspicuous by their absence from the mainstream of public discourse. Across the political spectrum, the conventional wisdom is that both industrial work and the forms of unionism it generated are fading relics of a bygone age, obsolete and irrelevant in today's postindustrial society. As everybody knows, while the unionized male factory worker was prototypical in 1945, today the labor force includes nearly as many women as men, and workers of both genders are more likely to sit behind a desk or perform a service than to toil on an assembly line. Union density has fallen dramatically, and organized labor is so isolated from the larger society that the right-wing characterization of it as a "special interest" prevails unchallenged. Public approval ratings of unions are at a postwar low, and such new social movements as environmentalism and feminism are as likely to define themselves in opposition to as in alliance with organized labor (if they take any notice of it at all).[3]

What has happened in the postwar decades to produce this change? Part of the story involves structural economic shifts. Most obviously, the manufacturing sector has decreased drastically in importance, accounting for only 20 percent of civilian wage and salary employment in the United States in 1987, compared to 34 percent in 1948.[4] And for complex political as well as economic reasons, unionization has declined even more sharply, especially in manufacturing, its historical stronghold. Although numbers fail to capture the qualitative aspects of this decline, they do indicate its massive scale: in 1989, only 16 percent of all U.S. workers, and 22 percent of those in manufacturing, were union members—half and one-third, respectively, of the 1945 density levels.[5] Alongside these massive processes of deindustrialization and deunionization, the widespread introduction of new technologies and the growing diffusion of the "new" industrial relations, with its emphasis on worker participation, have in recent years dramatically transformed both work and unionism in the manufacturing sector itself.

Few workplaces have been affected by these changes as dramatically as those in the automobile industry, the historical prototype of mass production manufacturing and the core of the U.S. economy for most of this century. Since the mid-1970s, hundreds of thousands of auto workers have been thrown out of work as some factories have closed and others have been modernized.[6] And although the U.A.W. still represents the vast bulk of workers employed by the "Big Three" auto firms (General Motors, Ford, and Chrysler), in recent years the non-union sector of the industry has grown dramatically. Union coverage in the auto parts industry has fallen sharply since the mid-1970s, and the establishment of new Japanese-owned "transplants" in the 1980s has created a non-union beachhead in the otherwise solidly organized assembly sector.[7] Profoundly weakened by these developments, the U.A.W. has gingerly entered a new era of "cooperation" with management, jettisoning many of its time-honored traditions in hopes of securing a place for itself in the future configuration of the industry. Meanwhile, the Big Three have invested vast sums of money in such new technologies as robotics and programmable automation. They have also experimented extensively with worker participation schemes and other organizational changes.

The current situation of auto workers graphically illustrates both the historical legacy of the glory days of American industrial unionism and the consequences of the recent unravelling of the social contract between labor and management that crystallized in the aftermath of World War II. This chapter explores current changes in the nature of work and unionism in the auto industry, drawing on historical evidence and on fieldwork in a recently modernized General Motors (GM) assembly plant in Linden, New Jersey. The analysis focuses particularly on the effects of new technology and the new, participatory forms of management. While it is always hazardous to generalize from any one industry to "the" workplace, the recent history of labor relations in the auto industry is nonetheless suggestive of broader patterns. The auto industry case is also of special interest because it figures so prominently in current theoretical debates about workplace change, which are briefly considered in the concluding section.

The story I will recount here is largely a story of failure—on the part of both management and labor—to respond effectively to rapidly changing circumstances. On the management side, the Big Three auto firms (and especially GM) have experienced enormous difficulty in overcoming bureaucratic inertia, particularly in regard to changing the behavior of middle management and first-line supervisors. As a result, their internal organizational structures and traditional corporate cultures have remained largely intact, despite strenuous efforts to institute changes. The auto firms have been unable to reap the potential advantages of the new technologies or to make a successful transition to a more participatory system of workplace management, even though they have invested considerable resources in both areas. Management's own inertia has been reinforced, tragically, by the weakening of the U.A.W. in this critical period. Long habituated to a reactive stance toward management initiatives, in recent years the union has concentrated its energies on the crisis of job security, leaving the challenge of reorganizing the workplace itself largely to management while warily embracing "cooperation" in hopes of slowing the hemorrhaging of jobs in the industry. The net result has been an increasingly uncompetitive domestic auto industry, which in turn has further weakened the union, creating a vicious circle of decline.

Because so much of the recent behavior of automobile manufacturing managers and of the U.A.W. and its members is rooted in the past, the first step in understanding the current situation is to look back to the early days of the auto industry, when the system of mass production and the accompanying pattern of labor-management relations that is now unravelling first took shape.

Fordism and the History of Labor Relations in the U.S. Auto Industry

The earliest car manufacturers depended heavily on skilled craftsmen to make small production runs of luxury vehicles for the rich. But the industry's transformation into a model of mass production efficiency, led by the Ford Motor Company in the 1910s, was predicated on the systematic removal of skill from the industry's labor process through scientific management, or Taylorism (named for its premier theorist, Frederick Winslow Taylor). Ford perfected a system involving not only deskilling but also product standardization, the use of interchangeable parts, mechanization, a moving assembly line, and high wages. These were the elements of what has since come to be known as "Fordism," and they defined not only the organization of the automobile industry but that of modern mass production generally.[8]

As rationalization and deskilling proceeded through the auto industry in the 1910s and 1920s, the proportion of highly skilled jobs fell dramatically. The introduction of Ford's famous Five Dollar Day in 1914 (then twice the going rate for factory workers) both secured labor's consent to the horrendous working conditions these innovations produced and helped promote the mass consumption that mass production required for its success. Managerial paternalism, symbolized by Ford's "Sociological Department," supplemented high wages in this regime of labor control. Early Ford management also developed job classification systems, ranking jobs by skill levels and so establishing an internal labor market within which workers could hope to advance.[9]

Deskilling was never complete, and some skill differentials persisted among production workers. Even in the 1980s, auto body painters and welders had more skill than workers who simply assembled parts, for example. But these were insignificant gradations compared to the gap between production workers and the privileged stratum of craft workers known in the auto industry as the "skilled trades"—tool and die makers, machinists, electricians, and various other maintenance workers. Nevertheless, the mass of the industry's semiskilled operatives united with the skilled trades elite in the great industrial union drives of the 1930s, and in the U.A.W. both groups were integrated into the same local unions.

The triumph of unionism left the industry's internal division of jobs and skills intact, but the U.A.W. did succeed in narrowing wage differentials among production workers and in institutionalizing seniority (a principle originally introduced by management but enforced erratically in the pre-union era) as the basic criterion for layoffs and job transfers for production workers. For the first decade of the union era, much labor-management conflict focused on the definition of seniority groups. Workers wanted plantwide or departmentwide seniority to maximize employment security, while management sought the narrowest possible seniority classifications to minimize the disruptions associated with workers' movement from job to job. But once the U.A.W. won plantwide seniority for layoffs, it welcomed management's efforts to increase the number of job classifications for transfers, since this maximized opportunities for workers with high seniority to choose the jobs they preferred. By the 1950s, this system of narrowly defined jobs, supported by union and management alike, was firmly entrenched.[10]
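
For readers who want a concrete sense of how such a seniority-based transfer system allocates jobs, the toy sketch below awards each posted vacancy to its most senior bidder. It is only a schematic illustration: the names, jobs, and seniority figures are invented, and the actual contractual bidding rules were far more elaborate than this.

```python
# Toy model of seniority-based job bidding: each posted vacancy goes to the
# most senior worker who bid on it. All names and figures are invented.

def award_vacancies(vacancies, bids):
    """bids maps worker -> (seniority_years, desired_job)."""
    awards = {}
    for job in vacancies:
        bidders = [(sen, w) for w, (sen, want) in bids.items()
                   if want == job and w not in awards.values()]
        if bidders:
            seniority, winner = max(bidders)  # highest seniority wins the job
            awards[job] = winner
    return awards

bids = {
    "Ada":   (24, "sweeper"),      # high-seniority workers bid "off the line"
    "Ben":   (12, "sweeper"),
    "Carla": (19, "subassembly"),
    "Dev":   (3,  "subassembly"),
}
print(award_vacancies(["sweeper", "subassembly"], bids))
# {'sweeper': 'Ada', 'subassembly': 'Carla'}
```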

Management and labor reached an accommodation on many other issues as well in the immediate aftermath of World War II. But at the same time, the U.A.W. began to retreat from the broad, progressive agenda it had championed in the 1930s and during the war. The failure of the 1945–46 "open the books" strike, in which the union demanded that GM raise workers' wages without increasing car prices, and the national resurgence of conservatism in the late 1940s and 1950s led the U.A.W. into its famous postwar "accord" with management. Under its terms, the union increasingly restricted its goals to improving wages and working conditions for its members, while ceding to management all the prerogatives involved in the production process and in economic planning. The shop steward system in the plants was weakened in the postwar period as well, and in the decades that followed, the U.A.W. was gradually transformed from the highly democratic social movement that Life magazine had profiled in 1945 into a more staid, bureaucratic institution that concentrated its energies on the increasingly complex technical issues involved in enforcing its contracts and improving wages, fringe benefits, and job security for its members.[11]

The grueling nature of production work in the auto industry changed relatively little over the postwar decades, even as the U.A.W. continued to extract improvements in the economic terms under which workers agreed to perform it. High wages and excellent benefits made auto workers into the blue-collar aristocrats of the age. It was an overwhelmingly male aristocracy, since women had been largely excluded from auto assembly jobs after World War II; blacks, on the other hand, made up a more substantial part of the auto production work force than of the nation's population. In 1987, at the Linden GM assembly plant where I did my fieldwork, for example, women were 12 percent of the production work force and less than 1 percent of the skilled trades. Linden production workers were a racially diverse group: 61 percent were white, 28 percent were black, and 12 percent were Hispanic; the skilled trades work force, however, was 90 percent white.[12]

While the union did little to ameliorate the actual experience of work in the postwar period, once the job classification system solidified, workers committed to a long-term career in the industry could build up enough seniority to bid on the better jobs within their plants. Although the early, management-imposed job classification systems had been based on skill and wage differentials, the union eliminated most of the variation along these dimensions. Indeed, the payment system the U.A.W. won, which persists to this day, is extremely egalitarian. Regardless of seniority or individual merit, assembly workers are paid a fixed hourly rate negotiated for their job classification, and the rate spread across classifications is very narrow. Formal education, which is in any case relatively low (both production workers and skilled trades at Linden GM averaged twelve years of schooling), is virtually irrelevant to earnings. At Linden GM, production workers' rates in 1987 ranged from a low of $13.51 per hour for sweepers and janitors to a high of $14.69 for metal repair work in the body shop. Skilled trades workers' hourly rates were only slightly higher, ranging from $15.90 to $16.80 (with a twenty-cent-an-hour "merit spread"), although their annual earnings are much higher than those of production workers because of their extensive overtime.[13]

Since wage differentials are so small, the informal de facto hierarchy among production jobs is based instead on what workers themselves perceive as desirable job characteristics. While individual preferences always vary somewhat, the consensus is reflected in the seniority required to secure any given position. One testament to the intensely alienating nature of work on the assembly line is that among the jobs auto workers prefer most are those of sweeper and janitor, even though these jobs have the lowest hourly wage rates. Subassembly, inspection, and other jobs where workers can pace themselves rather than be governed by the assembly line are also much sought after. At Linden in 1987, the median seniority of unskilled workers in the material and maintenance departments, which include all the sweepers and janitors and where all jobs are "off the line," was 24 years—twice the median seniority of workers in the assembly departments![14] By contrast, jobs in particularly hot or dirty parts of the plant, or those in areas where supervision is especially hostile, are shunned by workers whose seniority gives them any choice. Such concerns are far more important to production workers than what have become marginal skill or wage differentials, although there is a group that longs to cross the almost insurmountable barrier between production work and the skilled trades.[15]

Such was the system that emerged from the post–World War II accord between the U.A.W. and management. It functioned reasonably well for the first three postwar decades. The auto companies generated huge profits in these years, and for auto workers, too, the period was one of unprecedented prosperity. Even recessions in this cyclically sensitive industry were cushioned by the supplementary unemployment benefits the union won in 1955. However, in the 1970s, fundamental shifts in the international economy began to undermine the domestic auto makers. As skyrocketing oil prices sent shock waves through the U.S. economy, more and more cars were imported from the economically resurgent nations of Western Europe and, most significantly, Japan. For the first time in their history, the domestic producers faced a serious challenge in their home market.[16]

After initially ignoring these developments, in the 1980s the Big Three began to confront their international competition seriously. They invested heavily in computerization and robotization, building a few new high-tech plants and modernizing most of their existing facilities. GM alone spent more than $40 billion during the 1980s on renovating old plants and building new ones.[17] At the same time, inspired by their Japanese competitors, the auto firms sought to change the terms of their postwar accord with labor, seeking wage concessions from the union, reducing the number of job classifications and related work rules in many plants, and experimenting with new forms of "employee involvement" and worker participation, from quality circles to flexible work teams.[18]

The U.A.W., faced with unprecedented job losses and the threat of more to come, accepted most of these changes in the name of labor-management cooperation. To the union's national leadership, this appeared to be the only viable alternative. They justified it to an often skeptical rank and file membership by arguing that resistance to change would only serve to prevent the domestic industry from becoming internationally competitive, which in turn would mean further job losses. Once it won job security provisions protecting those members affected by technological change, the union welcomed management's investments in technological modernization, which both parties saw as a means of meeting the challenge of foreign competition. Classification mergers and worker participation schemes were more controversial within the union, but the leadership accepted these, too, in the name of enhancing the domestic industry's competitiveness.

Most popular and academic commentators view the innovations in technology and industrial relations that the auto industry (among others) undertook in the 1980s in very positive terms. Some go so far as to suggest that they constitute a fundamental break with the old Fordist system. New production technologies in particular, it is widely argued, hold forth the promise of eliminating the most boring and dangerous jobs while upgrading the skill levels of those that remain. In this view, new technology potentially offers workers something the U.A.W. was never able to provide, namely, an end to the deadening monotony of repetitive, deskilled work. Similarly, many commentators applaud the introduction of Japanese-style quality circles and other forms of participative management, which they see as a form of work humanization complementing the new technology. By building on workers' own knowledge of the production process, it is argued, participation enhances both efficiency and the quality of work experience. The realities of work in the auto industry, however, have changed far less than this optimistic scenario suggests.

New Technology and the Skill Question

Computer-based technologies are fundamentally different from earlier waves of industrial innovation. Whereas in the past automation involved the use of special-purpose, or "dedicated," machinery to perform specific functions previously done manually, the new information-based technologies are flexible, allowing a single machine to be adapted to a variety of specific tasks. As Shoshana Zuboff points out, these new technologies often require workers to use "intellective" skills. Workers no longer simply manipulate tools and other tangible objects, but also must respond to abstract, electronically presented information. For this reason, Zuboff suggests, computer technology offers the possibility of a radical break with the Taylorist tradition of work organization that industries like auto manufacturing long ago perfected, moving instead toward more skilled and rewarding jobs, and toward workplaces where learning is encouraged and rewarded. "Learning is the new form of labor," she declares.[19] Larry Hirschhorn, another influential commentator on computer technology, makes a similar argument. As he puts it, in the computerized factory "the deskilling process is reversed. Machines extend workers' skill rather than replace it."[20]

As computer technology has transformed more and more workplaces, claims like these have won widespread public acceptance. They are, in fact, the basis for labor market projections that suggest a declining need for unskilled labor and the need for educational upgrading to produce future generations of workers capable of working in the factory and office of the computer age. Yet it is far from certain that workplaces are actually changing in the ways that Zuboff and Hirschhorn suggest.

The Linden GM plant is a useful case for examining this issue, since it recently underwent dramatic technological change. In 1985–86, GM spent $300 million modernizing the plant, which emerged from this process as one of the nation's most technologically advanced auto assembly facilities and as the most efficient GM plant in the United States. There are now 219 robots in the plant, and 113 automated guided vehicles (AGVs), which carry the car bodies from station to station as they are assembled. Other new technology includes 186 programmable logic controllers (PLCs), used to program the robots. (Before the plant modernization there was only one robot, no AGVs, and eight PLCs.)[21]

Despite this radical technological overhaul, the long-standing division of labor between skilled trades and production workers has been preserved intact. Today, as they did when the plant used traditional technology, Linden's skilled trades workers maintain the plant's machinery and equipment, while production workers perform the unskilled and semiskilled manual work involved in assembling the cars. However, the number of production workers has been drastically reduced (by over 1,100 people, or 26 percent), while the much smaller population of skilled trades workers has risen sharply (by 190 people, or 81 percent). Thus the overall proportion of skilled workers increased—from 5 percent to 11.5 percent—with the introduction of robotics and other computer-based production technologies. In this sense, the plant's modernization did lead to an overall upgrading in skill levels.[22]

However, a closer look at the impact of the technological change on GM-Linden reveals that pre-existing skill differentials among workers have been magnified, leading to skill polarization within the plant rather than across-the-board upgrading.[23] After the plant modernization, the skilled trades workers enjoyed massive skill upgrading and gained higher levels of responsibility, just as Zuboff and Hirschhorn would predict. In contrast, however, the much larger group of production workers, whose jobs were already extremely routinized, typically experienced still further deskilling and found themselves subordinated to and controlled by the new technology to an even greater extent than before.

The skilled trades workers had to learn how to maintain and repair the robots, AGVs, and other new equipment, and since the new technology is far more complex than what it replaced, they acquired many new skills. Most skilled trades workers received extensive retraining, especially in robotics and in the use of computers. Linden's skilled trades workers reported an average (median) of forty-eight full days of technical training in connection with the plant modernization, and some received much more.[24] Most of them were enthusiastic about the situation. "They were anxiously awaiting the new technology," one electrician recalled. "It was like a kid with a new toy. Everyone wanted to know what was going to happen."[25] After the "changeover" (the term Linden workers used for the plant modernization), the skilled trades workers described their work as challenging and intellectually demanding:

We're responsible for programming the robots, troubleshooting the robots, wiping their noses, cleaning them, whatever. . . . It's interesting work. We're doing something that very few people in the world are doing, troubleshooting and repairing robots. It's terrific! I don't think this can be boring because there are so many things involved. There are things happening right now that we haven't ever seen before. Every day there's something different. We're always learning about the program, always changing things to make them better—every single day. [an electrician]

With high technology, skilled trades people are being forced to learn other people's trades in order to do their trade better. Like with me, I have to understand that controller and how it works in order to make sure the robot will work the way it's supposed to. You have to know the whole system. You can't just say, "I work on that one little gear box, I don't give a damn about what the rest of the machine does." You have to have a knowledge of everything you work with and everything that is related to it, whether you want to or not. You got to know pneumatics, hydraulics—all the trades. Everything is so interrelated and connected. You can't be narrow-minded anymore. [a machine repairman]

However, the situation was quite different for production workers. Their jobs, as had always been the case in the auto industry, continued to involve extremely repetitive, machine-paced, unskilled or semiskilled work. Far from being required to learn new skills, many found their jobs were simplified or further deskilled by the new technology:

It does make it easier to an extent, but also at the same time they figure, "Well, I'm giving you a computer and it's going to make your job faster, so instead of you doing this, this, and this, I'm going to have you do this and eight other things, because the time I'm saving you on the first three you're going to make it up on the last." Right now I'm doing more work in less time, the company's benefiting, and I am bored to death—more bored than before! [a trim department worker with nineteen years seniority]

I'm working in assembly. I'm feeding the line, the right side panel, the whole right side of the car. Myself and a fellow worker, in the same spot. Now all we do, actually, is put pieces in, push the buttons, and what they call a shuttle picks up whatever we put on and takes it down the line to be welded. Before the changeover my job was completely different. I was a torch solderer. And I had to solder the roof, you know, the joint of the roof with the side panel. I could use my head more. I liked it more. Because, you know, when you have your mind in it also, it's more interesting. And not too many fellow workers could do the job. You had to be precise, because you had to put only so much material, lead, on the job. [a body shop worker with sixteen years seniority]

Not only were some of the more demanding and relatively skilled traditional production jobs—like soldering, welding, and painting car bodies—automated out of existence, but also many of the relatively desirable off-the-line jobs were eliminated. "Before there were more people working subassembly, assembling parts," one worker recalled. "You have some of the old-timers working on the line right now. Before, if you had more seniority, you were, let's say, off the line, in subassembly."

Even when they operate computers—a rarity for production workers—they typically do so in a highly routinized way. "There is nothing that really takes any skill to operate a computer," one production worker in the final inspection area said. "You just punch in the numbers, the screen will tell you what to do, it will tell you when to race the engine and when to turn the air conditioner off, when to do everything. Everything comes right up on the screen. It's very simple."

The pattern of skill polarization between the skilled trades and production workers that these comments suggest is verified by the findings of an in-plant survey. Skilled trades workers at Linden, asked about the importance of twelve specific on-the-job skills (including "problem solving," "accuracy/precision," "memory," and "reading/spelling") to their jobs before and after the plant was modernized, reported that all but one ("physical strength") increased in importance. In contrast, a survey of the plant's production workers asking about the importance of a similar list of skills found that all twelve declined in importance after the introduction of the new technology.[26] The survey also suggested that boredom levels had increased for production workers; 45 percent stated that their work after the changeover was boring and monotonous "often" or "all the time," compared to 35 percent who had found it boring and monotonous before the changeover. Similarly, 96 percent of production workers said that they now do the same task over and over again "often" or "all the time," up from 79 percent who did so before the changeover.

In the Linden case, the plant modernization had opposite effects on skilled trades and production workers, primarily because no significant job redesign was attempted. The boundary between the two groups and the kinds of work each had traditionally done was maintained, despite the radical technological change. While management might have chosen (and the union might have agreed) to try to transfer some tasks from the skilled trades to production workers, such as minor machine maintenance, or to redesign jobs more extensively in keeping with the potential of the new technology, this was not seriously attempted. Engineers limited their efforts to conventional "line balancing," which simply involves packaging tasks among individual production jobs so as to minimize the idle time of any given worker. In this respect they treated the new technology very much like older forms of machinery. The fundamental division of labor between production workers and the skilled trades persisted despite the massive infusion of new technology, and this organizational continuity led to the intensification of the already existing skill polarization within the plant.
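
To make the notion of "line balancing" concrete, the sketch below implements a toy version of a common greedy heuristic (the largest-candidate rule): tasks are packed into successive jobs so that no job exceeds the line's cycle time, minimizing each worker's idle time. The task durations and cycle time are invented for illustration; nothing here reproduces GM's actual engineering methods.

```python
# Toy illustration of assembly-line balancing: pack tasks into jobs so that
# no job exceeds the line's cycle time. All durations are invented.

def balance_line(task_times, cycle_time):
    """Greedy largest-candidate rule: assign the longest task that still fits."""
    assert all(t <= cycle_time for t in task_times), "task longer than cycle"
    remaining = sorted(task_times, reverse=True)
    jobs = []
    while remaining:
        job, load = [], 0.0
        for t in remaining[:]:          # iterate over a copy while removing
            if load + t <= cycle_time:
                job.append(t)
                load += t
                remaining.remove(t)
        jobs.append((job, cycle_time - load))  # tasks and idle time per job
    return jobs

# Hypothetical task durations (seconds) on a line with a 60-second cycle time.
tasks = [45, 30, 28, 22, 18, 15, 12, 9, 7, 5]
for i, (job, idle) in enumerate(balance_line(tasks, 60), start=1):
    print(f"Job {i}: tasks {job}, idle {idle:.0f}s")
```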

GM-Linden appears to be typical of U.S. auto assembly plants in that new technology has been introduced without jobs having been fundamentally redesigned or the basic division of labor altered between production workers and the skilled trades. Even where significant changes in the division of labor—such as flexible teams—have been introduced, as in the new Japanese transplants, they typically involve rotating workers over a series of conventionally deskilled production jobs, rather than changing the basic nature of the work. While being able to perform eight or ten unskilled jobs rather than only one might be considered skill upgrading in some narrow technical sense, it hardly fits the glowing accounts of commentators who claim that with new technology "the deskilling process is reversed." Rather, it might be characterized best as "flexible Taylorism" or "Toyotism."[27]

Perhaps work in the auto industry could be reorganized along the lines Zuboff and Hirschhorn suggest, now that new technology has been introduced so widely. However, a major obstacle to this is bureaucratic inertia on the management side, for which GM in particular is legendary. As many auto industry analysts have pointed out, the firm's investments in new technology were typically seen by management as a "quick fix," throwing vast sums of money at the accelerating crisis of international competitiveness without seriously revamping the firm's organizational structure or its management strategies to make the most efficient possible use of the new equipment. As Mary Ann Keller put it, for GM "the goal of all the technology push has been to get rid of hourly workers. GM thought in terms of automation rather than replacing the current system with a better system."[28] The technology was meant to replace workers, not to transform work.

Reinforcing management's inertia, ironically, was the weakness of the U.A.W. The union has an old, deeply ingrained habit of ceding to management all prerogatives on such matters as job design. And in the 1980s, faced with unprecedented job losses, union concerns about employment security were in the forefront. The U.A.W. concentrated its efforts on minimizing the pain of "downsizing," generally accepting the notion that new technology and other strategies adopted by management were the best way to meet the challenge of increased competition in the industry. After all, if the domestic firms failed to become competitive, U.A.W. members would have no jobs at all. This kind of reasoning, most prominently associated with Donald Ephlin, director of the U.A.W.'s GM Department until his retirement in 1989, also smoothed the path for management's efforts to transform the industrial relations system in the direction of increased "employee involvement" and teamwork, to which we now turn.

Worker Participation and the "New Industrial Relations"

Inspired both by the non-union manufacturing sector in the U.S. and by the Japanese system of work organization, the Big Three began to experiment with various worker participation schemes in the 1970s. By the end of the 1980s, virtually every auto assembly plant in the United States had institutionalized some form of participation. Like the new technologies that were introduced in the same period, these organizational innovations—the "new industrial relations"—were a response to the pressure of international competition. And even more than the new technologies, they signaled a historic break with previous industrial practices. For both the Taylorist organization of work in the auto industry and the system of labor relations that developed around it had presumed that the interests of management and those of workers were fundamentally in conflict. In embracing worker participation, however, management abandoned this worldview and redefined its interests as best served by cooperation with labor, its old adversary.[29]

For management, the goal of worker participation is to increase productivity and quality by drawing on workers' own knowledge of the labor process and by increasing their motivation and thus their commitment to the firm. Participation takes many different forms, ranging from suggestion programs, quality circles, and quality-of-work-life (QWL) programs, which actively solicit workers' ideas about how to improve production processes, to "team concept" systems, which organize workers into small groups that rotate jobs and work together to improve productivity and quality on an ongoing basis. All these initiatives promote communication and trust between management and labor, in the name of efficiency and enhanced international competitiveness. Like the new technologies with which they are often associated, the various forms of worker participation have been widely applauded by many commentators who see them as potentially opening up a new era of work humanization and industrial democracy.[30]

In the early 1970s, some U.A.W. officials (most notably Irving Bluestone, then head of the union's GM department) actively supported experimental QWL programs, which they saw as a means for improving the actual experience of work in the auto industry, a long-neglected part of the union's original agenda. But many unionists were more skeptical about participation in the 1980s, when QWL programs and the team concept became increasingly associated with union "give-backs," or concessions. In a dramatic reversal of the logic of the postwar labor-management accord, under which economic benefits were exchanged for unilateral management control over the production process, now economic concessions went hand-in-hand with the promise of worker participation in decision making. However, QWL and the team concept were introduced largely on management's terms in the 1980s, for in sharp contrast to the period immediately after World War II, now the U.A.W. was in a position of unprecedented weakness. In many Big Three plants, participation schemes were forced on workers (often in the face of organized opposition) through what auto industry analysts call "whipsawing," a process whereby management pits local unions against one another by threatening to close the least "cooperative" plants. Partly for this reason, QWL and the team concept have precipitated serious divisions within the union, with Ephlin and other national union leaders who endorse participation facing opposition from a new generation of union dissidents who view it as a betrayal of the union's membership.[31]

The New United Motor Manufacturing, Inc., plant (NUMMI) in Fremont, California, a joint venture of Toyota and GM, is the focus of much of the recent controversy over worker participation. The plant is run by Toyota, using the team concept and various Japanese management techniques. (GM's responsibility is limited to the marketing side of the operation.) But unlike Toyota's Kentucky plant and the other wholly Japanese-owned transplants, at NUMMI the workers are U.A.W. members. Most of them worked for GM in the same plant before it was closed in 1982. Under GM, the Fremont plant had a reputation for low productivity and frequent wildcat strikes, but when it reopened as NUMMI two years later, with the same work force and even the same local union officers, it became an overnight success story. NUMMI's productivity and quality ratings are comparable to those of Toyota plants in Japan, and higher than those of any other U.S. auto plant.[32] Efforts to emulate its success further accelerated the push to establish teams in auto plants around the nation.

Many commentators have praised the NUMMI system of work organization as a model of worker participation; yet others have severely criticized it. The system's detractors argue that despite the rhetoric of worker control, the team concept and other participatory schemes are basically strategies to enhance management control. Thus Mike Parker and Jane Slaughter suggest that, far from offering a humane alternative to Taylorism, at NUMMI, and at plants that imitate it, workers mainly "participate" in the intensification of their own exploitation, mobilizing their detailed firsthand knowledge of the labor process to help management speed up production and eliminate wasteful work practices. More generally, "whether through team meetings, quality circles, or suggestion plans," Parker and Slaughter argue, "the little influence workers do have over their jobs is that in effect they are organized to time-study themselves in a kind of super-Taylorism."[33] They see the team concept as extremely treacherous, undermining unionism in the name of a dubious form of participation in management decisions.

Workers themselves, however, seem to find intrinsically appealing the idea of participating in what historically have been exclusively managerial decision-making processes, especially in comparison to traditional American managerial methods. This is the case even though participation typically is limited to an extremely restricted arena, such as helping to streamline the production process or otherwise raise productivity. Even Parker and Slaughter acknowledge that at NUMMI, "nobody says they want to return to the days when GM ran the plant."[34] Unless one wants to believe that auto workers are simply dupes of managerial manipulation, NUMMI's enormous popularity with the work force suggests that the new industrial relations have some positive features and cannot simply be dismissed as the latest form of labor control.

Evidence from the GM-Linden case confirms the appeal of participation to workers, although reforms in labor relations there were much more limited than at NUMMI. Linden still has over eighty populated job classifications, and although 72 percent of the production workers are concentrated in only eight of them, this is quite different from NUMMI, where there is only one job classification for production workers and seniority plays a very limited role. Nor has Linden adopted the team system. However, when the plant reopened after its 1985–86 modernization, among its official goals was to improve communications between labor and management, and both parties embraced "jointness" as a principle of decision making. At the same time, "employee involvement groups" (EIGs) were established. Production workers were welcomed back to the plant after the changeover with a jointly (union-management) developed two-week (eighty-hour) training program, in the course of which they were promised that the "new Linden" would be totally different from the plant they had known before. In particular, workers were led to expect an improved relationship with management, and a larger role in decision making and problem solving on the shop floor.[35]

Most workers were extremely enthusiastic about these ideas—at least initially. The problem was that after the eighty-hour training program was over, when everyone was back at work, the daily reality of plant life failed to live up to the promises about the "new Linden." "It's sort of like going to college," one worker commented about the training program. "You learn one thing, and then you go into the real world. . . . " Another agreed:

It sounded good at the time, but it turned out to be a big joke. Management's attitude is still the same. It hasn't changed at all. Foremen who treated you like a fellow human being are still the same—no problems with them. The ones who were arrogant bastards are still the same, with the exception of a few who are a little bit scared, a little bit afraid that it might go to the top man, and, you know, make some trouble. Everyone has pretty much the same attitude.

Indeed, the biggest problem was at the level of first-line supervision. While upper management may have been convinced that workers should have more input into decision making, middle and lower management (who also went through a training program) did not always share this view. After the training raised workers' expectations, foremen in the plant, faced with the usual pressures to get production out, seemed to fall back quickly into their old habits. The much-touted "new Linden" thus turned out to be all too familiar. As the workers pointed out:

You still have the management that has the mentality of the top-down, like they're right, they don't listen to the exchange from the workers, like the old school. So that's why when you ask about the "new Linden," people say it's a farce, because you still . . . do not feel mutual respect, you feel the big thing is to get the jobs out. This is a manufacturing plant; they do have to produce. But you can't just tell this worker, you know, take me upstairs [where the training classes were held], give me this big hype, and then bring me downstairs and then have the same kind of attitude.

With management, they don't have the security that we have. Because if a foreman doesn't do his job, he can be replaced tomorrow, and he's got nobody to back him up. So everybody's a little afraid of their jobs. So if you have a problem, you complain to your foreman, he tries to take care of it without bringing it to his general foreman; or the general foreman, he don't want to bring it to his superintendent, because neither of them can control it. So they all try to keep it down, low level, and under the rug, and "Don't bother me about it—just fix it and let it slide." And that is not the teachings that we went through in that eighty-hour [training] course!

Many Linden workers expressed similar cynicism about the EIGs. "A lot of people feel very little comes out of the meetings. It's just to pacify you so you don't write up grievances," one paint department worker said, articulating a widespread sentiment. "It's a half-hour's pay for sitting there and eating your lunch," he added.

Research on other U.S. auto assembly plants suggests that Linden, where the rhetoric of participation was introduced without much substantive change in the quality of the labor-management relationship, is a more representative case than NUMMI, where participation (whatever its limits) is by all accounts more genuine. Reports from Big Three plants around the nation suggest that typical complaints concern not the concept of participation—which workers generally endorse—but management's failure to live up to its own stated principles. Gerald Horton, a worker at GM's Wentzville, Missouri, plant "thinks the team concept is a good idea if only management would abide by it." Similarly, Dan Maurin of GM's Shreveport, Louisiana, plant observes, "it makes people resentful when they preach participative management and then come in and say, 'this is how we do it.'"[36] Betty Foote, who works at a Ford truck plant outside Detroit, expressed the sentiments of many auto workers about Employee Involvement (EI): "The supposed concern for workers' happiness now with the EI program is a real joke. It looks good on paper, but it is not effective. . . . Relations between workers and management haven't changed."[37]

At NUMMI, workers view participation far more positively. Critics of the team concept suggest that this is because workers there experienced a "significant emotional event" and suffered economically after GM closed the plant, so that when they were recalled to NUMMI a few years later they gratefully accepted the new system without complaint. But given the uncertainty of employment and the history of chronic layoffs throughout the auto industry, it seems unlikely that this experience sharply distinguishes NUMMI's workers from those in other plants. Such an explanation for the positive reception of the team concept by NUMMI workers is also dubious in light of the fact that even the opposition caucus in the local union, which criticizes the local U.A.W. officials for being insufficiently militant in representing the rank and file, explicitly supports the team concept.[38]

Instead, the key difference between NUMMI and the Big Three assembly plants may be that workers have more job security at NUMMI, where the Japanese management has evidently succeeded in building a high-trust relationship with workers. When the plant reopened, NUMMI workers were guaranteed no layoffs unless management first took a pay cut; this promise and many others have (so far) been kept, despite slow sales. In contrast, the Big Three (and especially GM) routinely enrage workers by announcing layoffs and then announcing executive pay raises a few days later; while at the plant level, as we have seen, management frequently fails to live up to its rhetorical commitments to participation.[39] On the one hand, this explains why NUMMI workers are so much more enthusiastic about participation than their counterparts in other plants. On the other hand, where teamwork and other participatory schemes have been forced on workers through "whipsawing," the result has been a dismal failure on its own terms. Indeed, one study found a negative correlation between the existence of participation programs and productivity.[40]

Insofar as the U.A.W. has associated itself with such arrangements, it loses legitimacy with the rank and file when management's promises are not fulfilled. Successful participation systems, however, can help strengthen unionism. It is striking that at NUMMI, with its sterling productivity and quality record, high management credibility, and relatively strong job security provisions, the U.A.W. is stronger than in most Big Three plants. For that matter, the local union at NUMMI has more influence than do enterprise unions in Japanese auto plants, where teamwork systems are long-standing.[41] But here, as in so many other ways, NUMMI is the exceptional case. In most U.S. auto plants, the weakness of the U.A.W.—in the face of industry overcapacity and capital's enhanced ability to shift production around the globe—has combined with management's inability to transform its own ranks to undermine the promise of participation.

Conclusion

In recent literature, the introduction of new technologies and worker participation in industries like auto manufacturing are often cited as evidence of a radical break from the traditional Fordist logic of mass production.[42] Owing to changed economic conditions, the argument goes, Fordism is becoming less and less viable, so that advanced capitalist countries are now moving toward a more flexible "post-Fordist" production regime. In most such accounts, including the influential "flexible specialization" model of Michael Piore and Charles Sabel, this transformation is driven primarily by the growth of increasingly specialized markets and by the new information-based technologies. Theorists of post-Fordism generally agree with analysts like Zuboff and Hirschhorn that new technologies should lead to skill upgrading and thus reverse the logic of Taylorism. Thus Piore and Sabel write that the computer is "a machine that meets Marx's definition of an artisan's tool; it is an instrument that responds to and extends the productive capacities of the user," and that, together with changes in product markets, computer technology is contributing to "the resurgence of craft principles." Post-Fordist theorists also view QWL programs, the team concept, and other forms of worker participation as changes that will help to humanize and democratize the workplace. Thus Piore and Sabel urge organized labor to "shake its attachment to increasingly indefensible forms of shop floor control" so as not to impede progress toward flexible specialization, and they explicitly applaud the U.A.W. for its willingness to experiment with classification reductions and worker participation.[43]

Opposing this type of interpretation of recent events is another perspective, inspired by the labor process theory of Harry Braverman and associated with such writers as Harley Shaiken and David Noble.[44] While the post-Fordists emphasize the contrast between the historical logic of deskilling in mass production industries and contemporary developments, the labor process theorists instead stress the continuities. The key force shaping work experience in both the past and present, in this alternative view, is the systematic removal of skill from the labor process by Taylorist managers. While accepting the idea that new technologies can potentially increase skill levels, labor process theorists argue that this potential is often impossible to realize in the organizational context of the capitalist firm. In their view, management uses computerization in the same way that it used earlier forms of technology: to appropriate knowledge from workers and tighten control over labor—even when skill upgrading might be a more efficient strategy. As Shaiken puts it, "Unfortunately, the possibility exists for introducing authoritarian principles in flexible as well as more traditional mass-production technologies. Under these circumstances, the extraordinary economic potential of these systems is not realized."[45] Commentators in this tradition are also skeptical about worker participation schemes, which they view, not as authentic efforts to enhance the experience of work, but rather as new tools of managerial control. They are especially troubled by the fact that management-initiated participation schemes are often coupled with antiunionism and concession bargaining.[46]

Although they have contributed many valuable insights into recent workplace changes, both these perspectives are one-sided. The post-Fordists fail to take seriously the firm-level organizational obstacles to the kind of macroeconomic transition they envision. They tend to romanticize the emergent new order, and especially its implications for workers, often ignoring the persistent determination of employers to maintain control over labor—as if this fundamental feature of capitalism were disappearing along with the Fordist system of mass production. At the other extreme, labor process theorists tend to reduce all the recent innovations in work organization to new forms of managerial manipulation, and to see capital's desire for control over labor as an insuperable obstacle to any meaningful improvements in the workplace.

Both schools of thought claim to reject technological determinism, but they consistently posit opposite outcomes of the introduction of new technology. The post-Fordists have devoted a great deal of energy to highlighting instances of skill upgrading associated with new technology. When confronted with examples of deskilling, they retreat to the argument that the post-Fordist perspective is merely an account of emergent tendencies whose full realization is contingent and contested. For their part, labor process theorists also disclaim technological determinism, arguing that the abstract potential of new technology to increase skill levels cannot be realized within the concrete social structure of the capitalist firm. Numerous case studies (including several from the auto industry) have appeared supporting each side of the skill debate.[47] The contradictory evidence suggests that attempts to generalize about overall deskilling or upgrading trends are fruitless. As Kenneth Spenner has persuasively argued on the basis of an extensive literature review, the effects of new technology on skill "are not simple, not necessarily direct, not constant across settings and firms, and cannot be considered in isolation."[48] Rather, as the evidence from GM-Linden also indicates, skill effects are conditioned by a variety of social factors, among them organizational culture and managerial discretion—factors that both the post-Fordists and the labor process theorists ultimately ignore or trivialize.

Like the effect of new technology on skills, the impact of worker participation is impossible to analyze in general terms. Here again, much depends on the organizational context in which participation is introduced, and especially on the relationship between labor and management. The specific characteristics of a firm's management and the relative strength and influence of unions (where they exist) can be crucial determinants of the outcome of workplace reform efforts. Yet neither labor process theory nor post-Fordism takes adequate account of these factors. This is not especially surprising for labor process theory, since, as critics of Braverman have frequently complained, he neglected workers' resistance entirely. But the result is that commentators in this tradition tend to view all changes in labor relations as management schemes to enhance its control over the work force, and they can barely even contemplate the possibility that worker participation programs or other innovations could benefit workers in any way. The post-Fordist school, in sharp contrast, tends to romanticize the recent experiments in labor-management cooperation, despite the fact that, in the United States at least, most of them have been forced on unions decimated by increased capital mobility and economic globalization.

Although it is impossible to generalize from any one case, the automobile industry is an especially important test of these competing theoretical perspectives on workplace change if only because it figures so prominently in both of them. Indeed, the very concept of Fordism derives from the history of this industry. Yet organizational inertia seems a more compelling explanation for the recent history of automobile manufacturing than either theory. In recent years, automotive management's reluctance or inability to abandon its longstanding system of work organization or its tradition of authoritarianism vis-à-vis labor has meant that both technological change and experimentation with participation have produced only a superficial transformation in the workplace. Both were introduced in response to a crisis of international competition, and in both cases management undertook the changes with a limited understanding of their potential impact. Consequently, they have neither resolved the continuing problem of foreign competition, nor have they produced the kinds of benefits for workers—skill upgrading and increased job satisfaction—that the optimistic projections of post-Fordist theorists promised. Yet the existence of more positive examples, such as the NUMMI plant, suggests that the alternative view of the labor process theorists, which constructs capitalist control imperatives as inherently incompatible with the possibility of changes that can offer real benefits to workers, is also problematic.

If management's ineptitude and bureaucratic inertia are the main reasons for the limited impact of new technology and the new industrial relations in the U.S. auto industry, the weakness of the U.A.W. has also played a role. Keeping to the habits it developed in the decades after World War II, the U.A.W. continued to cede to management all decisions about technology and its applications. It has yet to demand that jobs be redesigned in tandem with new technology so as to maximize the benefits to its members. The union has been more engaged in the issue of worker participation, but because in most cases QWL and teams were introduced in the context of massive job losses and industry overcapacity, the terms of labor-management "cooperation" were largely dictated by management. The U.A.W. necessarily shares the domestic industry's concern about restoring international competitiveness, but rather than serve as a basis for genuine cooperation, this has all too often become a whip for management to use in extracting concessions on wages and work rules, sending the union into a spiral of declining power and legitimacy and severely weakening the plant seniority system that had been its historical hallmark. In exchange, the U.A.W. has sought enhanced job security provisions, but it has won only modest improvements in this area, while plant closings and job losses continue. Contrary to the popular belief that union strength is an obstacle to restoring American industry to an internationally competitive position, the weakness of unionism in this period of potentially momentous changes in the workplace may be the real problem, along with the organizational ineptitude of management. The sad truth is that both labor and management in this critical industry are ill-prepared to face the future.

Eight—
The Blue-Collar Working Class:
Continuity and Change

David Halle and Frank Romo

The blue-collar working class in America (and elsewhere) has always evoked extreme pronouncements about its political and social attitudes. Observers have long been drawn to one of two polar positions: either the working class is a conservative force that is integrated into the class structure or the working class is a radical force at odds with the middle class and with capitalists.[1]

In the depression years of the 1930s and in the context of the burgeoning of radical new labor unions affiliated with the CIO, many observers saw a radical and even revolutionary American working class. After World War II, by contrast, in the context of a sustained period of economic growth in the West, the model of a working class integrated into the mainstream of society (and often dubbed "affluent") gained ground. The American working class was, at that time, usually seen as the extreme case among the Western working classes (just as America was the economically and politically dominant capitalist society), and the phrase "the American worker" became, for some, a shorthand term for a working class that was politically quiescent and socially integrated.[2] In the 1960s and 1970s, a model of the radical working class regained popularity, as a series of studies disputed the idea that the working class was integrated into society or especially content with its position.[3] Now the pendulum has swung again, and the model of the quiescent (if not content) American working class has returned to dominance.[4]

The reason for the oscillation between these extreme models in part has to do with actual changes in the position and attitudes of the working class itself. Blue-collar Americans were, for example, surely more discontent and more inclined to political radicalism in the 1930s than in the 1950s. But in part the pendulum swings between the two models because neither is fully adequate to capture the situation of blue-collar workers in advanced capitalism in the United States (and elsewhere). A convincing model has to take account of three separate, though related, spheres that influence blue-collar lives and beliefs. There is, first of all, life at the workplace—in the mode of production. It is on this crucial sphere that many classic studies of blue-collar workers have concentrated. Second, there is life outside the workplace—the neighborhood of residence, family, and leisure life. With suburbanization and the widespread possession of automobiles, life outside the workplace is often located at a considerable geographic distance from the plant or other work site. Finally, there is life vis-à-vis the government, especially the federal government. This involves the critical act of voting—above all in presidential elections—as well as basic attitudes toward the federal government and the political system, and attitudes toward a whole range of national policy issues. These three areas are somewhat distinct. What is often done, though it should not be, is to focus on just one aspect of workers' lives and from it infer the character of behavior and attitudes in either of the other two spheres.

The names of the coauthors of this chapter appear in alphabetical order. The authors wish to thank James Bardwell for his help with computer programming.

Here we will use a combination of case studies and national survey data to demonstrate the inadequacies of extreme models of the blue-collar working class that do not take account of each sphere of blue-collar life or of changes that have taken place in those spheres over time. Most of the survey data have been drawn from the National Election Study (NES) carried out by the Survey Research Center of the University of Michigan, which represents the best continuous data series on political attitudes in America. This multidimensional account of working-class attitudes also sheds light on some of the main transformations in American life that have occurred over the past twenty-five or thirty years.

The first question to be addressed is the actual size of the blue-collar working class today, and its relative size compared with the other main occupational groups. In view of prevailing notions of the demise of blue-collar labor in America, it is important to note that the number of blue-collar workers reached its highest level ever in 1989—31.8 million (see figure 8.1).[5] (The blue-collar working class is here defined as consisting of skilled workers, such as electricians and plumbers; factory workers; transportation workers, such as truck and bus drivers; and nonfarm laborers. Men constitute about three-quarters of all blue-collar workers and over 90 percent of skilled blue-collar workers.[6] )

Moreover, blue-collar workers were still a larger proportion of the labor force than either of the two main white-collar groups (see figure 8.2). Thus in 1989 blue-collar workers constituted 27.1 percent of the labor force. Compare this with the upper-white-collar sector, defined as managers and professionals, who composed 25.9 percent of the labor force; and compare this with the lower-white-collar sector—defined as clerical, secretarial, and sales workers—who composed 24.2 percent of the labor force.

[Figure 8.1: Composition of the Civilian Labor Force, Major Occupational Groups, 1900–1989: Number of Workers by Year.]

[Figure 8.2: Composition of the Civilian Labor Force, Major Occupational Groups, 1900–1989: Percent Composition by Year.]

What is true is that the proportion of blue-collar workers in the labor force has declined, from a peak of 34.5 percent in 1950, and is now declining faster than before. Still, it should be noted, especially given the talk about "postindustrial" or "deindustrial" society, that the proportion of blue-collar workers in the labor force is now either higher than or about the same as it was in the period 1900–1940, when America was unarguably an "industrial society."[7]

Blue-Collar Workers and the Federal Government

Presidential Elections and Political Party Identification

Blue-collar workers were a crucial part of the electoral coalition that Franklin Delano Roosevelt put together for the Democratic party. The current disaffection of blue-collar workers, especially of the skilled and better-paid blue-collar workers, from the Democratic party represents one of the major changes in American politics.

Skilled blue-collar workers voted, by a clear majority, for the Democratic candidate in five of the seven presidential elections that took place between 1952 and 1976 (1952, 1960, 1964, 1968, and 1976); they voted by a clear majority for the Republican candidate only once, in 1972 (see figure 8.3). Less-skilled blue-collar workers (defined here as all blue-collar workers except the skilled ones) also voted, by a clear majority, for the Democratic candidate in five of these seven elections, as shown in figure 8.4.[8] They too voted, by a clear majority, for the Republican candidate only once, in 1960. However, in the three presidential elections since 1976, the picture is far less clear-cut. Skilled blue-collar workers voted Republican more heavily than Democrat in 1988, while splitting their vote about evenly between Democrats and Republicans in 1980 and 1984. Less-skilled blue-collar workers split their vote about evenly between Republicans and Democrats in 1988, voted more heavily Democrat in 1984, and more heavily Republican in 1980. By contrast, upper-white-collar workers have voted, by large majorities, for the Republican candidate in every election from 1952 to 1988, except for 1964, when they were clearly presented with an intolerable candidate in Barry Goldwater (see figure 8.5).

[Figure 8.3. Presidential Vote, 1952–88: Skilled Blue-Collar Workers.]

[Figure 8.4. Presidential Vote, 1952–88: Less-Skilled Blue-Collar Workers.]

[Figure 8.5. Presidential Vote, 1952–88: Upper-White-Collar Workers.]

Figures 8.6 through 8.14 give a detailed analysis of the determinants of the blue-collar vote in the 1988 presidential election, showing that several of the traditional factors associated with voting Democratic still hold for blue-collar Americans. (These figures and figure 8.15 are based on a multivariate logistic analysis of the vote; see the appendix to this chapter for details.) Union members were more likely than non-union members to vote Democratic (figure 8.6). Blue-collar Catholics were more likely to vote Democratic than were blue-collar Protestants (figure 8.7).[9] Blue-collar blacks were more likely to vote Democratic than whites (figure 8.8). And as their income rises, the proportion of blue-collar workers voting Republican increases (figure 8.13). Notice, however, that the effect of region is now complex. Ironically, the voting profile of blue-collar workers in the East (controlling for such factors as religious differences) is now rather similar to that of blue-collar workers in the South (figure 8.10). Notice also that gender, which is not one of the factors traditionally associated with voting Democratic or Republican, still makes no difference: male and female blue-collar workers are alike in their voting preferences (figure 8.9).

[Figure 8.6. The 1988 Election: Effects of Union Membership on the Blue-Collar Vote. Source: based on the logistic regression model presented in the appendix to this chapter.]

[Figure 8.7. The 1988 Election: Effects of Religion on the Blue-Collar Vote. Source: based on the logistic regression model presented in the appendix to this chapter.]

The movement of blue-collar workers away from Democratic presidential candidates in recent elections is paralleled by, and partly the result of, a tendency that is at least as striking—that of blue-collar workers not to vote at all in presidential elections.[10] Thus in 1980, 1984, and 1988, a larger percentage of skilled blue-collar workers did not vote than voted for either the Republican or Democratic candidate; and among less-skilled blue-collar workers in four of the five elections from 1972 to 1988, a larger number did not vote than voted for either the Republican or Democratic candidate (see figures 8.3 and 8.4). Further, the proportion of blue-collar workers not voting in the 1988 presidential election was especially high (51 percent of less-skilled and 45 percent of skilled workers). More detailed analysis shows that age, income, and education are the most important determinants of whether blue-collar workers vote (see figures 8.12, 8.13, and 8.14). The younger they are, and the lower their income and level of education, the less likely they are to vote.

[Figure 8.8. The 1988 Election: Effects of Race on the Blue-Collar Vote. Source: based on the logistic regression model presented in the appendix to this chapter.]

[Figure 8.9. The 1988 Election: Effects of Gender on the Blue-Collar Vote. Source: based on the logistic regression model presented in the appendix to this chapter.]

[Figure 8.10. The 1988 Election: Effects of Region on the Blue-Collar Vote. Source: based on the logistic regression model presented in the appendix to this chapter.]

[Figure 8.11. The 1988 Election: Effects of Party Identification on the Blue-Collar Vote. Source: based on the logistic regression model presented in the appendix to this chapter.]

[Figure 8.12. The 1988 Election: Blue-Collar Vote by Age in Years. Source: based on the logistic regression model presented in the appendix to this chapter.]

Figure 8.15 sums up this tendency of blue-collar workers not to vote. It shows the effect of occupation on the 1988 vote, controlling for race, union membership, religion, age, family income, region, and gender. Blue-collar workers were about as likely as either of the white-collar groups to vote Republican, but much less likely than the upper-white-collar sector to vote Democratic, mostly because they were less likely than the upper-white-collar sector to vote at all.

The upper-white-collar sector is in sharp contrast to the blue-collar sector in the matter of voting. The percentage of upper-white-collar workers who did not vote was low in 1988 (only 14.1 percent)[11] and at no time in the period 1952–1988 was it higher than 15.3 percent (see figure 8.5).[12]


[Figure 8.13. The 1988 Election: Blue-Collar Vote by Family Income. Source: based on the logistic regression model presented in the appendix to this chapter.]

[Figure 8.14. The 1988 Election: Effects of Education on the Blue-Collar Vote. Source: based on the logistic regression model presented in the appendix to this chapter.]

[Figure 8.15. The 1988 Election: Effects of Occupational Status on the Presidential Vote. Source: based on the logistic regression model presented in the appendix to this chapter.]

[Figure 8.16. Party Identification, 1952–88: Skilled Blue-Collar Workers.]

[Figure 8.17. Party Identification, 1952–88: Less-Skilled Blue-Collar Workers.]

[Figure 8.18. Party Identification, 1952–88: Upper-White-Collar Workers.]

Changes in the political party identification of blue-collar workers since 1952 are also central. Blue-collar workers once identified in large numbers with the Democratic party. For most of the time from 1952 to 1968, 60 percent or more of less-skilled blue-collar workers saw themselves as Democrats (the exception is 1960), while only 20 percent or less saw themselves as Republicans (see figure 8.17). During most of the same period, 50 percent or more of skilled blue-collar workers saw themselves as Democrats, while only 25 percent or less saw themselves as Republicans (see figure 8.16). There have been two clear changes since 1972. The first is a decline in the proportion of blue-collar workers identifying as Democrats (among the less-skilled, the proportion has hovered around 35 percent since 1972; among the skilled, it stabilized in the mid-forties until 1988, when it dropped sharply to 23 percent). The second is a large increase in the proportion of blue-collar workers reporting no party identification (among less-skilled workers it is now about 40 percent; among skilled workers, it rose to about 38 percent in the period 1972 to 1984, and then climbed sharply in 1988). Interestingly, there has been no major shift of party identification toward the Republicans.

Attitude toward the Political System and Power Structure

The belief that government, including the federal government, is in the hands of a small number of organized groups who have unofficially usurped power is widespread and striking. This belief is common among blue-collar workers, as it is among other occupational groups. When asked whether they thought the government was run for the benefit of everybody or for the benefit of a few big interests, 59 percent of blue-collar workers in 1984 answered that the government was run for the benefit of a few big interests. About the same percentage of Americans in upper-white-collar, lower-white-collar, and service-sector occupations agreed, as did 51 percent of housewives.[13]

Survey data going back to 1964 (when the question was first asked) suggest that this belief has been a fairly stable part of the political outlook of most Americans, including blue-collar workers. Thus in every election year except 1964, at least 40 percent of the entire population has believed that the government is run for a few big interests, and in five of the seven election years in this period more than 50 percent of the population has believed this. As in 1988, variation by occupation is not especially pronounced; blue-collar and upper-white-collar Americans both followed this trend from 1964 onward (see figures 8.19 and 8.20). The survey studies do not explore these beliefs further. For example, they do not ask the obvious follow-up question, namely, which are the "few big interests" for whose benefit so many blue-collar workers (and other Americans) believe the government is run. However, data from detailed case studies give an indication of an answer. A study of employees (almost all truck drivers) of a California company that delivers packages, a study of blue-collar and lower-white-collar Italians in Brooklyn, and a study of blue-collar chemical workers in New Jersey, all came to similar conclusions.[14]

[Figure 8.19. Who Benefits from Government, 1964–88: Blue-Collar Workers.]

[Figure 8.20. Who Benefits from Government, 1964–88: Upper-White-Collar Workers.]

The vast majority of blue-collar workers believe that Big Business really runs America. The dominant view is that corrupt politicians are a venal facade behind which major corporations, "Big Business," prevail in politics and economics. Remarks like "it's business that runs the country," "big corporations are behind everything," "the [political] power is in the hands of the people with money," and "oil, steel, insurance, and the banks run this country" are commonplace. These were typical comments: "Politics? It's all money! Big Business pays out money to get what it wants." "Who runs the country? Well, I suppose the president does. He makes the decisions. Of course, business is behind him. They make the real decisions. Politicians are all on the take."

That this attitude toward Big Business is widespread is also suggested by Erik Olin Wright's survey data, which found that over 74 percent of blue-collar workers believe that "big corporations have too much power" in America. It is noteworthy that, in terms of their beliefs about the power of corporations in society, American blue-collar workers are just as class conscious as workers in Sweden (presented in Wright's analysis as far more class conscious than American workers). For in both societies, between 75 percent and 82 percent of blue-collar workers believe that "big corporations have too much power" in their respective countries.[15] This underlines the importance of considering separately the three spheres: attitude toward the political regime, attitude toward the work setting, and attitude toward life outside the workplace.

Despite these critical and skeptical beliefs that American blue-collar workers have about who runs the country, the lack of approval for alternatives to the current political system is notable. The general acceptance of the American Constitution ranges from enthusiasm ("it's the best in the world") to lukewarm ("I complain a lot, but it isn't any better anywhere else"). This phenomenon needs explaining. In part it is based on a distinction between the system and those who operate it, between politicians and the Constitution: the political system is sound, but it is in the hands of scoundrels. In part, lack of support for alternative political systems results from a perception that radical change in the United States is impractical: the country is too large, and potential leaders are too prone to sell out. But in part the widespread acceptance of the Constitution and the political system is based on a key distinction most workers make, either explicitly or implicitly, between freedom and democracy. The United States does offer freedom and liberties, which are very valuable. Consider these typical comments, all made by workers who believe venal politicians subvert the electoral process: "In America you have freedom. That's important. I can say Reagan is a jerk and no one is going to put me in jail." Another worker: "You know what I like about America? You're free. No one bothers you. If I want to take a piss over there [points to a corner of the tavern], I can." Socialism and communism are ruled out in almost everyone's eyes, for they are seen as synonymous with dictatorship. They are political systems that permit neither popular control of government (democracy) nor individual freedom and liberties.[16] Survey data suggest that, like mistrust of government, this attitude toward freedom is widespread among blue- and white-collar Americans.
The vast majority both value freedom and consider it an important feature of contemporary America.[17]

Life Outside the Workplace

Home Ownership and Suburbia

The combination of home ownership and suburbia is of considerable importance for understanding the American working class. Together they provide the material context in which American blue-collar workers live, or hope to live, a residential, leisure, and social life in which the barrier between blue-collar and upper-white-collar workers is considerably muted (at least as compared with the typical workplace situation of blue-collar workers).

The high rate of home ownership among Americans, including blue-collar Americans, has for a long time been striking. Back in 1906, Werner Sombart contrasted the United States with his native Germany: "A well-known fact . . . is the way in which the American worker in large cities and industrial areas meets his housing requirements: this has essential differences from that found among continental-European workers, particularly German ones. The German worker in such places usually lives in rented tenements, while his American peer lives correspondingly frequently in single-family or two-family dwellings."[18] By 1975, three-quarters of all AFL-CIO members owned houses.[19] Home ownership not only offers blue-collar workers the possibility of economic gain but also provides a site where they can control their physical and social surroundings—not, of course, completely, but far more than in the work setting where they are typically subordinate to the authority of a direct supervisor, as well as of management and the owners.[20]

Suburbanization, in combination with home ownership, has played a crucial role in undermining working-class residential communities, especially after World War II. Suburbanization can be defined as a process involving two crucial factors: first, the systematic growth of fringe areas at a pace more rapid than that of the core cities; second, a life-style involving a daily commute to jobs in the urban center.[21] The regular commute to a workplace a considerable distance from home is an important factor in the fading of working-class residential communities. Many classic labor movements established their strongholds in the nineteenth century in towns and urban areas that were not especially large (by later standards) or especially spread out. Paterson, New Jersey, for example, had only 33,000 inhabitants in 1870. These places were typically urban villages, where, as Eric Hobsbawm put it, "people could walk to and fro from work, and sometimes go home in the dinner-hour . . . places where work, home, leisure, industrial relations, local government and home-town consciousness were inextricably mixed together."[22]



In fact, suburbanization involving the commute to work by public transport started before many of these working-class communities were formed. It began in 1814, with the first steam ferry, and continued with newer modes of public transport (the omnibus in 1829, the steam railroad in the 1830s and 1840s, the electric streetcar in the late 1880s).[23] Each of these developments doubtless somewhat undermined working-class occupational communities. But so long as workers were dependent on public transport to get to the workplace, there were limits to where they could live (nowhere too far from public transport).[24] After World War II, as automobiles became widely owned by blue-collar workers, a qualitative change occurred. Workers could live anywhere they could afford that was within commuting range. And since the incomes of better-paid blue-collar workers often approached, equaled, or exceeded those of several upper-white-collar groups (such as teachers and social workers), there developed many occupationally mixed suburbs, where the proportion of blue-collar workers ranged from about 20 percent to about 45 percent, as did the proportion of upper-white-collar workers.[25] For example, when the vast new suburb Levittown, New Jersey, opened in 1958, these two groups bought houses there in roughly equal proportions. By 1960, 26 percent of the employed males there were in blue-collar occupations, while 31 percent were in upper-white-collar occupations.[26]

This residential context provides the framework for the marital and leisure lives of many blue-collar Americans. Several other factors that are also important influences on the leisure lives of blue-collar workers cut across occupational or educational lines. These include gender, age, position in the marital cycle, and income level. For example, many blue-collar workers are enormously interested in sports, as participants and spectators. Among the sports in which they participate are hunting, fishing, and softball; golf, traditionally an upper-white-collar activity, has grown in popularity among blue-collar workers. And, like other American males, many blue-collar workers spend considerable time watching sports on television. Clearly, this interest in sports, shared in many ways by upper-white-collar males and other Americans, has as much to do with gender as with class.

It is true that certain factors add a flavor to the lives of blue-collar workers. In particular, they typically have modest levels of education (an average of twelve years) as compared with upper-white-collar workers (an average of fifteen years of education).[27] Partly as a result, blue-collar workers are less likely than upper-white-collar workers to be interested in high culture (opera, ballet, classical music, serious theater). However, these differences should not be exaggerated, for the level of interest in high culture among upper-white-collar workers is not great. For example, a survey conducted in the early 1970s on exposure to the arts in twelve major American cities showed that no more than 18 percent of managers and professionals had been to a symphony concert in the past year, no more than 9 percent had been to the ballet, and no more than 6 percent had attended the opera.[28]

Finally, there is the issue of marital life. There are certain features of working-class life that may add a distinct flavor to the marriages of blue-collar workers. For example, blue-collar jobs can carry somewhat low status as compared with upper-white-collar jobs and even as compared with some lower-white-collar jobs. Some couples' comments suggest that wives of blue-collar men sometimes resent their husbands' low-status occupations. And the modest level of education that blue-collar workers typically possess may affect the character of their marriages; for example, some studies suggest that the level and quality of "communication" between spouses increase with their amount of education.

Still, as with leisure life, there are a variety of forces that affect the marital lives of blue-collar workers but that are by no means confined to them. These include the conflicting demands of home life and work life, the difficulties (and benefits) that arise when both spouses work, and the host of questions associated with raising children (all of which are discussed in other chapters of this book).[29] The best studies of the marital lives of blue-collar Americans suggest that there are as many similarities as differences between their marital lives and those of upper-white-collar people.[30] One explanation is that, as with leisure lives, gender differences are at least as important as class differences. For example, whatever their class, many American wives face the likelihood of being able to find jobs only in poorly paid, lower-white-collar occupations and, at home, of having the major responsibility for child care and housework.[31]

Life at the Workplace

It is in the workplace that differences between blue-collar and white-collar workers, especially upper-white-collar workers, are most pronounced. Blue-collar jobs are often dirty and sometimes dangerous, and usually require some degree of physical labor (hence the need to wear special protective clothes—the "blue collar").[32] In addition, such jobs usually involve the following features: (1) work that is repetitive and therefore dull; (2) work that is clearly connected to the creation of a tangible product; (3) work that offers little chance of upward mobility (workers may rise to first-line supervision, but above that level, lack of educational qualifications poses a serious barrier); and (4) work that is supervised, in an obtrusive or unobtrusive manner (there is human supervision, and there is the mechanical supervision of a time clock). These features provide enough real basis for distinguishing blue-collar from upper-white-collar jobs and, to a lesser extent, from lower-white-collar jobs.[33] In occupational settings with a variety of work levels, management usually has little difficulty deciding which workers should be classified as blue-collar and therefore be assigned to distinct work areas and required to wear special work clothes, though some groups on the margin may be hard to classify.

Class Consciousness

Last, but definitely not least, there is the question of class consciousness. How do blue-collar workers see their position in the class structure, with whom do they identify, and whom do they oppose? These questions have always been, and remain, central in the debates over the blue-collar working class. In a recent article dramatically titled "Farewell to the Labor Movement?" Eric Hobsbawm, one of the foremost socialist historians, stressed the question of class consciousness:

It is class consciousness, the condition on which our parties [mass socialist or workers parties] were originally built, that is facing the most serious crisis. The problem is not so much objective de-proletarianization, but is rather the subjective decline of class solidarity. . . . What we find today is not that there is no longer any working class consciousness, but that class consciousness no longer has the power to unite.[34]

Hobsbawm cites the fact that in 1987 almost 60 percent of British trade union members voted for parties other than the Labor party. Clearly this is comparable to the tendency for blue-collar Americans nowadays to be at least as likely to vote Republican as Democratic in presidential elections.

Much of the debate over class consciousness has revolved around, or at least begun with, the issue of whether blue-collar workers tend to see themselves as "working class" (and therefore more class conscious) or "middle class" (and therefore less class conscious). It is, then, surprising to discover that in 1988, asked if they saw themselves as "working class" or "middle class," 75 percent of American blue-collar workers said working class. Further, this is only a little less than in 1952, when 80 percent of blue-collar workers categorized themselves as working class in response to the same question (see figure 8.21). Indeed, the proportion of blue-collar workers categorizing themselves as working class has never fallen below 64 percent in the period between 1952 and 1988. Clearly a certain kind of working-class identity can coexist with a declining tendency for blue-collar workers to vote for Democratic presidential candidates and to identify with the Democratic party. This suggests a problem with the debate over class consciousness, which has long pervaded the general debate over the blue-collar working class: the tendency to infer from one area of blue-collar life the nature of the behaviors and beliefs that prevail in other areas. In the case of class consciousness and class identity, this amounts to assuming that blue-collar workers have a single image of their position in the class structure.

[Figure 8.21. Social Class Identification by Major Occupational Group, 1952–88: Blue-Collar Workers.]

[Figure 8.22. Social Class Identification by Major Occupational Group, 1952–88: Upper-White-Collar Workers.]

A central theme of this chapter has been that the lives of blue-collar workers revolve around three separate, though related, spheres—life at the workplace (in the mode of production), life outside the workplace (residential, marital, and leisure), and life vis-à-vis the federal government. Indeed, there is reason to think that many American blue-collar workers have three social identities, each relating to one of these spheres. These identities are that of the "working man" (or "working woman" for female blue-collar workers), with reference to life at the workplace; that of being "middle class" or "lower middle class" or "poor," with reference to life outside the workplace; and that of being part of "the people" or "the American people," with reference to the individual citizen vis-à-vis the federal government and the related power structure. If these spheres have not emerged clearly in much previous research, it is because the main methods used to study class consciousness have tended to encourage, explicitly or implicitly, only one of these identities.

The analysis that follows is based on David Halle's study of class identity among blue-collar chemical workers in New Jersey. These workers were, in several ways, among the better-off blue-collar workers: they were comparatively well paid and unionized; about one-quarter of them were skilled; and 69 percent were homeowners. They were all men, reflecting the dominance of men in the more desirable blue-collar jobs.

Consider, first, the concept of "the working man." A close reading of formal and informal interviews reported by a variety of researchers suggests that male blue-collar workers in America commonly refer to themselves as "working men," but rarely as "working class." This can be seen in interviews with voters during the 1968 and 1972 presidential election campaigns; in the views working-class residents of a new suburban township expressed about their preferred political candidate; from the comments of a group of skilled workers in Providence, Rhode Island; from comments of a group of white working-class males in an East Coast city; from comments of workers in Milwaukee, Chicago, and Pennsylvania; from comments of auto workers in Detroit; and from comments of Italian construction workers in Brooklyn.[35] The concept of the "working man" has also been central in the history of the American labor movement. For example, when trade and craft workers before the Civil War founded political parties, they called them "Workingmen's Political Parties," and the Workingman's Advocate was the name of one of the most important newspapers of the nineteenth century.

The concept of the working man, among the chemical workers studied by Halle, has as its central idea the notion that blue-collar work takes
a distinctive form and is productive in a way that the work of other classes is not. This notion has two central components. One involves the features of the job. Being a working man involves one or more of the following clusters of related ideas: (a) physical work ("It's hard physical work," "It's working with your hands"); (b) dangerous or dirty work ("We get our hands dirty"); (c) boring and routine work ("We do the same thing over and again"); (d) factory work (as opposed to office work); (e) closely supervised work ("We have to punch in and out," "We're told what to do").

The other central component of the concept of the "working" man links it to a moral and empirical theory about who really works in America. It implies, in one or more of the following ways, that those who are not working men are not really productive, do not really work. Those who are not "working" (a) literally do not work ("Big business don't work, they just hire people who do," "People on welfare aren't working men, they don't want to work"); (b) perform no productive work ("Teachers aren't teaching the kids anything"; white-collar office workers "just sit on their butts all day"); or (c) are overpaid ("Doctors earn huge fees," "Lawyers charge whatever they want").

The combination of the "job features" and the "productive labor" aspects of the concept logically entails the idea that only those whose labor involves such job features are productive. As a result, blue-collar work is generally seen as productive. But those whose work lacks many or all of these job features, above all big business and the white-collar sectors in general, are not.

A central point about the concept of the working man is that the term expresses both class and gender consciousness. It expresses class consciousness in implying that blue-collar work is especially productive. But it also implies that blue-collar work is for men (working man) rather than women, which is a form of gender consciousness. This reflects the history of American labor. In the early stages of industrial growth, women (and children) were the first factory workers, for at that time such jobs were seen as less desirable than agricultural work. As the status and pay of factory and other blue-collar work rose, women were pushed out of almost all except the least desirable jobs. The blue-collar working class is now composed primarily of men, and this is especially true for the better paid and more highly skilled blue-collar jobs.

Among the chemical workers Halle interviewed, the idea that blue-collar work was for men was a form of sexism that most workers were prepared to explicitly support in discussing their own jobs. For example, they would maintain, sometimes in arguments with those of their wives who are feminists, that women cannot be chemical workers because they are too weak to move heavy chemical drums. But such sex stereotyping
of occupations is under increasing attack in the United States. As a result, few workers were prepared to explicitly defend this sort of view for the entire spectrum of blue-collar jobs.

This discussion also raises the question of how female blue-collar workers see their position in the class structure at work. Naturally, they see themselves as working women rather than working men. How they use the concept of the "working woman," and how its meaning compares with that of the working man, is a question that has scarcely been investigated.[36]

The blue-collar workers that Halle interviewed also place themselves in the class structure, in part according to their life away from work rather than on the job. In this second image, they assume a class structure composed of a hierarchy of groups that are distinguished, above all, by income level but also by standard of living and residential situation. Income level, life-style, consumer goods, and neighborhood constitute the material framework of their lives outside work. (It is true that income originates from their employment, but its effect on their lives is outside, where almost all income is spent.) These criteria for determining position in the class structure increase the range of persons with whom workers consider they have common interests (as compared with the concept of the working man). Thus, though most see clear gaps between their situation and those of the upper and lower extremes (for instance, "the rich" and "the poor"), the categories in between are almost all ones to which they consider they do or could belong. As a result, according to this perspective, the class structure has a sizeable middle range that displays some fluidity, permits individual movement, and takes no account of a person's occupation. This reflects the actual ability of workers, in their life outside the factory, to enjoy a certain mobility through their choice of house, neighborhood, possessions, and life-style.

Income level is the most important of the factors underlying this second image of class. Almost everyone has at least a rough idea of the income distribution in America and his place within it. Workers read government statistics in newspapers and magazines on the average income of an American family, and they are aware of estimates of the income level needed to maintain a minimum, a comfortable, and an affluent standard of living. The federal and state income tax systems both entail a picture of the class structure based on income, and most workers follow with keen interest the relation between their weekly earnings and the taxes deducted from their paychecks. Income level is not the only criterion underlying class distinction based on the setting outside work. Life-style, material possessions, and the quality of residence and neighborhood are other criteria that people often use.

Most, but not all, workers place themselves in the middle of the hierarchy (below the "rich" and above the "poor"). But some identify with a category between the poor and the middle class. This view is most common among younger workers. They may have a mortgage, young children, and a spouse who stays at home to look after the children. But for these workers, being middle class implies being able to maintain that life-style without economic pressure. They deem their own situation below that of the middle class because they cannot live such a life-style without a strain—perhaps a serious strain—on their resources. Their income level, material possessions, and life-style make them better off than the poor, but not comfortable or free from major economic worries (as they believe the middle class to be).

The chemical workers studied by Halle were comparatively well paid for blue-collar workers, so it is likely that numerous less well paid blue-collar workers, in thinking of themselves outside the workplace, would classify themselves as below middle class.[37] The coexistence of these two identities—that of being a "working" man, with reference to life at work, and that of being middle class or less, with reference to life outside the workplace—would explain the large number of blue-collar workers who categorize themselves as "working class" rather than "middle class" in response to a survey question on that topic (see figure 8.21). Some workers categorize themselves as working class because they think of themselves as "working" men. Others place themselves in the working class because they are thinking of their position in the class structure outside work and believe their income level or life-style is not high enough to place them in the middle class. Either way, the forced choice of "working class" or "middle class" conceals the coexistence of two images of position in the class structure.

Almost all the blue-collar chemical workers have what amounts to a third image of their position in the class structure. They routinely use the concepts of "the American people" and "the people" in a populist sense. This concept involves the idea of a clear opposition between the power structure, especially big business and politicians, and the rest of the population. According to this view, "the American people" means all those excluded from the heights of political and economic power. Consider this worker, discussing corruption in politics: "Take Johnson for example. When he entered the White House he had $20,000 and then he bought all those estates with the American people's money."

This populist current is the third major aspect of the class consciousness of these workers. The concept of the working man refers to a position in the system of production. The concept of being middle class or lower middle class refers to a position outside work—to a life-style and standard of living. The concept of the people, or the American people, in the populist sense, refers to the division between all ordinary citizens and those with political and economic power.



Conclusion

The situation of blue-collar workers is complex and cannot be summed up by approaches that assume that the three main areas of blue-collar life are changing in concert. On the federal level, there is a movement away from voting for Democratic presidential candidates and away from voting at all, which is especially pronounced among younger workers. This has been accompanied by a diminished identification with the Democratic party (though identification with the Republican party has not taken its place). It is this fading of party loyalty, and perhaps the declining tendency of blue-collar workers to vote at all, that is probably the most distinctive feature of the later decades of the twentieth century. If class solidarity for blue-collar Americans means voting for Democratic presidential candidates and identifying with the Democratic party, then class solidarity is definitely on the wane.

However, a majority of blue-collar workers (and other Americans) believes that the country is "run by a few big interests," particularly by large corporations. And there is reason to think that many blue-collar workers, like many other Americans, will at times subscribe to a version of populism that contrasts "the people" (as those excluded from the heights of political and economic power) with the power structure (above all, big business and politicians). This entire perspective has probably long been a central component of the belief system of many ordinary Americans. (It was, for example, surely prominent during the "trust-busting" movement of the early 1900s.) It is likely to remain so as long as large corporations (American or foreign) play a central role in American life.

Further, the vast majority of blue-collar Americans appear to see themselves, in the workplace, as "working men" (or "working women"), with an implicit solidarity at least with other blue-collar Americans (and probably, in varying degrees, with lower-white-collar Americans, too). This reflects a kind of class consciousness and identity that has long been important and is unlikely to fade, so long as the distinctions in the workplace between blue-collar workers on the one hand and white-collar workers (especially upper-white-collar workers) on the other hand, are pronounced. The current weakness of the union movement is significant in its own right, but may not diminish this class identity. Indeed, to the extent that blue- and lower-white-collar workers are less protected by unions than they once were, their feelings of vulnerability in the face of, and hostility toward, the corporations that employ them are as likely to increase as to wane.

Outside the workplace, class identity is somewhat more fluid, reflecting the greater degree to which blue- and white-collar people intermingle in places of residence, in leisure, and in marital lives.



Some of these trends in the attitudes and behavior of blue-collar workers have been present for a long time. Others are more recent. Examining a number of arenas of working-class experience at once, and allowing each to express its own internal dynamics, shows the inadequacies of the two prevailing models of the working class—the radical working class and the integrated working class—each of which focuses on one or two areas of experience to the exclusion of the others. Social life is complex, and the fact that blue-collar workers have several bases for their attitudes and behavior reflects this complexity, which must be incorporated into any model of the American working class.



Appendix to Chapter Eight

Several data sets were used to construct the figures presented in this chapter. The source of the employment data in figures 8.1 and 8.2 is explained in note 5. Figures 8.3, 8.4, 8.5, 8.16, 8.17, and 8.18, which chart presidential vote and political party identification by year and major occupational groupings, are based on the National Election Study (NES) combined file, 1952 to 1986, produced by the Survey Research Center at the University of Michigan. Figures 8.19, 8.20, 8.21, and 8.22, which chart beliefs about who benefits from government and social class identification, for selected years from 1952 to 1988, are based on the specific NES studies for the years reported. Figures 8.6–8.15, which take a detailed look at the blue-collar vote in 1988, are based on a multinomial logistic regression equation calculated on the NES data for 1988.

The 1988 logit model can be specified as follows. A multinomial logistic regression model was calculated (using maximum likelihood estimation) on the 1988 National Election Study data to assess the impact of demographic variables on the presidential vote.[38] The model is a simple linear regression when the dependent variable is converted to the log of the odds ratios. The dependent variable in this analysis comprises three categories: Voted Republican, Voted Democrat, and Did Not Vote. The odds ratios are (Voted Republican)/(Did Not Vote) and (Voted Democrat)/(Did Not Vote). These two odds ratios (resulting in the estimation of two simultaneous equations) are sufficient to calculate every odds comparison implied by a three-category dependent variable. Independent variables include family income in thousands of dollars (direct effect); age in years (direct effect); education in years (direct effect); region (categorical effect: East, Midwest, South, West); union membership (categorical effect: yes, no); religion (categorical effect: Protestant, Catholic, Jewish); race (categorical effect: white, black); gender (categorical effect: male, female); occupation (categorical effect: homemaker, upper-white-collar, lower-white-collar, service, blue-collar); and party identification (Republican, Democrat, other). Variables identified as "direct effects" are quantitative insofar as their interval values are entered directly into the design matrix. Categorical effects are qualitative, and each category forms a variable in the model, with the exception of the last category, which is estimated by the intercepts. In this model, categorical variables are estimated using an "effect coded" design matrix.[39]

TABLE 8.1a The 1988 Presidential Election: Analysis of Variance

Effect                   df     Chi-Square    Alpha
Intercept                 2        68.72     0.0001
Family Income             2        22.32     0.0001
Age in Years              2        51.19     0.0001
Education in Years        2        30.25     0.0001
Region                    6        17.76     0.0069
Union Membership          2         6.86     0.0324
Religion                  4        16.39     0.0025
Race                      2        17.68     0.0001
Gender                    2         0.04     0.9815
Occupational Group        8        15.39     0.0520
Party Identification      4       190.73     0.0001
Likelihood Ratio       1970      1553.69     1.0000

NOTE: "df" denotes degrees of freedom.
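
For readers who wish to see what a specification of this form looks like in practice, the following sketch shows how a three-category multinomial logit with effect-coded categorical variables can be estimated with standard modern software (Python's statsmodels and patsy). It is not the code used for this chapter: the variable names and synthetic data are illustrative assumptions, and only a subset of the chapter's predictors is included.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the 1988 NES extract (illustrative only).
rng = np.random.default_rng(0)
n = 2000
data = pd.DataFrame({
    # 0 = did not vote, 1 = voted Republican, 2 = voted Democrat;
    # category 0 is the reference, so the two fitted equations are
    # Ln(P1/P3) and Ln(P2/P3) in the notation of table 8.1b.
    "vote": rng.integers(0, 3, n),
    "income": rng.normal(28.0, 10.0, n),  # family income, thousands of dollars
    "age": rng.integers(18, 80, n),       # age in years
    "educ": rng.integers(8, 18, n),       # education in years
    "region": rng.choice(["east", "midwest", "south", "west"], n),
    "union": rng.choice(["yes", "no"], n),
})

# C(x, Sum) requests effect ("sum-to-zero") coding: each listed category
# is contrasted with the grand mean, and the omitted category's effect
# is recovered from the intercepts, as in the chapter's design matrix.
model = smf.mnlogit(
    "vote ~ income + age + educ + C(region, Sum) + C(union, Sum)",
    data=data,
)
result = model.fit(disp=False)  # maximum likelihood estimation
print(result.summary())         # estimates, standard errors, and tests

With the full NES extract one would add religion, race, gender, occupation, and party identification in the same way; the fitted coefficients would then correspond to the entries of table 8.1b.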

The results of the logistic regression model are given in tables 8.1a and 8.1b. Table 8.1a (analysis of variance) assesses the fit of the overall model and the significance of each set of independent estimators. It reveals that, with the exception of gender, all estimators have obtained a chi-square large enough to be significant at an alpha-level less than 0.05. At the bottom of table 8.1a is the "likelihood ratio," which permits an assessment of the fit of the model to the underlying data. This statistic is distributed as chi-square, with degrees of freedom equal to the figure listed at the bottom of table 8.1a. If the chi-square is large relative to the degrees of freedom, the model demonstrates a poor fit, but if it is small relative to the degrees of freedom, then the model exhibits a close fit to the original data. Traditionally, a chi-square that cannot obtain an alpha-level greater than 0.05 is considered a strong indicator that the model does not fit the data. In the case of the model assessed in table 8.1a, the chi-square is such that the alpha-level is at its maximum of 1.0, indicating a very good fit between the model and the data. It should be noted that the linear design matrix used in this model is an extreme simplification of the possible interactions among categories of the independent variables and the possible nonlinear direct effects implied by such a complex set of variables. Hence, the fit of this very simple logit model is indeed a significant finding.
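
As a concrete check of the fit criterion just described, the alpha-level for the likelihood ratio in table 8.1a can be recomputed from its chi-square and degrees of freedom; a minimal sketch, assuming scipy is available:

from scipy.stats import chi2

# Likelihood-ratio statistic and degrees of freedom from table 8.1a.
alpha = chi2.sf(1553.69, df=1970)  # upper-tail probability of the chi-square
print(round(alpha, 4))             # prints 1.0: no evidence of lack of fit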


TABLE 8.1b The 1988 Presidential Election Vote: Analysis of Individual Parameters

Effect                               Equation(a)   Estimate   Std. Error   Chi-Square    Alpha
Intercept(b)                         Ln(P1/P3)      -7.04       0.87         64.94      0.0001
                                     Ln(P2/P3)      -4.60       0.82         31.22      0.0001
Family Income (direct effect)        Ln(P1/P3)       0.03       0.01         20.52      0.0001
                                     Ln(P2/P3)       0.01       0.01          2.88      0.0896
Age in Years (direct effect)         Ln(P1/P3)       0.05       0.01         44.56      0.0001
                                     Ln(P2/P3)       0.05       0.01         32.85      0.0001
Education in Years (direct effect)   Ln(P1/P3)       0.25       0.05         25.32      0.0001
                                     Ln(P2/P3)       0.22       0.05         19.61      0.0001
Region: East                         Ln(P1/P3)      -0.25       0.19          1.74      0.1875
                                     Ln(P2/P3)      -0.55       0.20          7.49      0.0062
Region: Midwest                      Ln(P1/P3)       0.31       0.17          3.40      0.0652
                                     Ln(P2/P3)       0.48       0.17          7.93      0.0049
Region: South                        Ln(P1/P3)      -0.12       0.16          0.60      0.4379
                                     Ln(P2/P3)      -0.29       0.16          3.24      0.0720
Union Membership: Member             Ln(P1/P3)      -0.14       0.12          1.36      0.2435
                                     Ln(P2/P3)       0.16       0.12          1.91      0.1668
Religion: Protestant                 Ln(P1/P3)       0.41       0.28          2.16      0.1413
                                     Ln(P2/P3)      -0.29       0.24          1.50      0.2209
Religion: Catholic                   Ln(P1/P3)       0.77       0.29          7.01      0.0081
                                     Ln(P2/P3)       0.47       0.25          3.56      0.0591
Race: White                          Ln(P1/P3)       0.78       0.26          9.43      0.0021
                                     Ln(P2/P3)      -0.26       0.14          3.66      0.0557
Gender: Male                         Ln(P1/P3)       0.02       0.12          0.03      0.8611
                                     Ln(P2/P3)       0.00       0.12          0.00      0.9815
Occupation: Homemaker                Ln(P1/P3)      -0.08       0.25          0.11      0.7358
                                     Ln(P2/P3)       0.11       0.23          0.21      0.6505
Occupation: Upper-White-Collar       Ln(P1/P3)       0.23       0.21          1.20      0.2734
                                     Ln(P2/P3)       0.63       0.22          8.17      0.0042
Occupation: Lower-White-Collar       Ln(P1/P3)       0.08       0.18          0.19      0.6636
                                     Ln(P2/P3)       0.18       0.18          1.02      0.3133
Occupation: Service                  Ln(P1/P3)      -0.24       0.25          0.96      0.3280
                                     Ln(P2/P3)      -0.48       0.24          3.98      0.0462
Party Identification: Republican     Ln(P1/P3)       1.14       0.15         56.26      0.0001
                                     Ln(P2/P3)      -0.74       0.20         13.22      0.0003
Party Identification: Democrat       Ln(P1/P3)      -0.77       0.16         23.63      0.0001
                                     Ln(P2/P3)       0.98       0.15         41.95      0.0001

a. P1 indicates the probability of voting Republican, P2 indicates the probability of voting Democrat, and P3 indicates the probability of not voting at all.

b. The Intercept estimates the following omitted categories: Region = West; Union = No; Religion = Jewish; Race = Black; Gender = Female; Occupation = Blue-Collar; and Party = Other.
TABLE 8.2 1988 Sample Means for Major Occupational Groups

Variable                              Blue-Collar      Service    Lower-White-Collar    Upper-White-Collar
Family Income (direct effect)          $28,252.00   $20,936.17        $34,271.21            $42,639.45
Age in Years (direct effect)                37.44        39.24             37.62                 40.16
Education in Years (direct effect)          11.66        11.84             13.26                 14.99
Region: East                                16.1%        22.4%             18.5%                 21.3%
Region: Midwest                             23.0%        32.2%             31.3%                 26.2%
Region: South                               44.0%        30.9%             31.3%                 28.7%
Region: West                                17.0%        14.5%             18.8%                 23.8%
Union: Member                               30.0%        20.0%             20.9%                 18.9%
Union: Nonmember                            70.0%        80.0%             79.1%                 81.1%
Religion: Protestant                        76.3%        71.8%             68.9%                 65.8%
Religion: Catholic                          22.7%        28.2%             28.2%                 30.1%
Religion: Jewish                             1.0%         0.0%              3.0%                  4.1%
Race: White                                 86.4%        77.0%             82.1%                 52.8%
Race: Black                                 13.6%        23.0%             17.9%                 47.2%
Gender: Male                                75.6%        18.4%             31.3%                 90.8%
Gender: Female                              24.4%        81.6%             68.7%                  9.2%
Party: Republican                           20.2%        21.7%             29.9%                 34.5%
Party: Democrat                             34.4%        40.8%             31.0%                 27.7%
Party: Other                                45.4%        37.5%             39.1%                 37.8%

Table 8.1b (analysis of individual parameters) gives the logit estimates, their individual standard errors, and the associated chi-squares and alpha-levels. The estimates are linear with respect to the log of the odds ratios, which makes direct interpretation of the estimates nonintuitive. As a result, we have interpreted the estimates in figures 8.6 through 8.15 for the blue-collar vote. That is, we held the effect of occupation constant at "blue-collar" and calculated the ceteris paribus effects of each independent variable on the probability of voting in one of the three ways (Republican, Democrat, No Vote). For each calculation, the effects of all other independent variables included in the model were held at their "blue-collar" mean effects. These means are presented in table 8.2.
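
To make that transformation concrete, here is a minimal sketch, not the authors' code, of the conversion from the two fitted log-odds to the three vote probabilities. The intercepts from table 8.1b serve as illustrative inputs; setting every other effect to zero is not a meaningful covariate profile, but it shows the arithmetic.

import numpy as np

def vote_probabilities(xb_rep, xb_dem):
    # Convert Ln(P1/P3) and Ln(P2/P3) into (P1, P2, P3): the probabilities
    # of voting Republican, voting Democrat, and not voting at all.
    denom = 1.0 + np.exp(xb_rep) + np.exp(xb_dem)
    return np.exp(xb_rep) / denom, np.exp(xb_dem) / denom, 1.0 / denom

# Intercepts from table 8.1b, with all other effects set to zero.
p_rep, p_dem, p_none = vote_probabilities(-7.04, -4.60)
print(p_rep, p_dem, p_none)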



Nine—
The Enduring Dilemma of Race in America

Bart Landry

When future historians look back at the late 1960s, the period will appear in many respects as the golden age of American history. Prosperity was at an unprecedented high, while the economy offered the promise of unlimited growth. A cultural revolution, furthermore, was in the making, and Americans had committed themselves, for the first time in their history, to eliminating what Gunnar Myrdal had called the "American dilemma"—the racism and discrimination that had kept millions of Americans in the position of second class citizens. With the passage of the Civil Rights Act in 1964, discrimination was redefined as racism. No longer was the discriminator a "good old boy" and the fair-minded white person a "nigger lover." Lester Maddox, standing in the doorway of his store, ax handle in hand, was not seen any more as a folk hero, but as a national shame. The first steps in the path toward racial equality had been taken.

How long ago it all seems now. Rather than a fruition of the dreams of those golden days, the past two decades have brought confusion and even retrogression. The cultural revolution of the 1960s and early 1970s was overwhelmed by the economic reality of recessions and a declining economy. Though American society would never be quite the same again, it stopped far short of the goals of the reforms of the 1960s. In less than ten years, the nation was tiring of the effort to extend full opportunity to blacks. A new term entered the lexicon of race relations, "reverse discrimination"—elbowing for room with "equal employment opportunity," "discrimination," and "racism."

The turning point, it seems in retrospect, was a suit by Allan Bakke in 1974 accusing the University of California Medical School at Davis of "reverse discrimination." The decision handed down by the Supreme Court was ambiguous, a victory neither for Bakke nor for those opposed to his position. At the heart of the issue was the nation's commitment not only to provide equal opportunities to all its citizens today, regardless of color, but also to redress the injustices of the past—injustices that have placed blacks at a considerable disadvantage in the competition for desirable jobs. Since Bakke, the courts have been called upon again and again to decide whether the nation can legally redress the market effects of past injustices of slavery and discrimination against blacks. Quotas, timetables, and set-asides have all been challenged. For the time being, the tide has shifted against the struggle of blacks for equality, as a conservative judiciary, including the Supreme Court, has returned numerous decisions that have chipped away at the very foundations of the fight against persistent racial discrimination.

The recent study by the National Academy of Sciences, A Common Destiny: Blacks and American Society, concludes that "race still matters greatly in the United States."[1] Reminders of the truth of this conclusion are numerous in the United States as we approach a new century. They range from racially motivated incidents and attacks on blacks at predominantly white college campuses to racial attacks in several northeastern communities. While surveys have found that the commitment of whites to the principle of equality for blacks has grown steadily over the decades, the authors of the National Academy of Sciences report conclude:

Principles of equality are endorsed less when they would result in close, frequent, or prolonged social contact, and whites are much less prone to endorse policies meant to implement equal participation of blacks in important social institutions. In practice, many whites refuse or are reluctant to participate in social settings (e.g., neighborhoods and schools) in which significant numbers of blacks are present.[2]

Today, whites are more likely to say that "blacks have gone far enough" than that there remains an "unfinished agenda" to be completed. This sentiment exists in spite of the studies of black progress by the National Academy of Sciences, as well as others, that have provided ample evidence of the negative effects of discrimination on blacks.[3] These negative effects, moreover, have continued well after the Civil Rights era and the emergence of a black middle class.[4] A review of the record since 1940 prompted the authors of the National Academy of Sciences study to comment: "The status of black Americans today can be characterized as a glass that is half full—if measured by progress since 1939—or as a glass that is half empty—if measured by the persisting disparities between black and white Americans since the early 1970s."[5] Among the signs of the "half empty" glass is a large economic disparity between blacks and whites that has been traced directly to the discrimination blacks encounter in the employment and housing markets.[6] Caught in this economic disparity are the almost one-third of all blacks who live in poverty, compared to only 11 percent of whites; a growing black underclass incorporating about 13.2 percent of employable black adults in the late 1980s, compared to 3.7 percent of whites; an unemployment rate twice that of whites; continued lower life expectancy than whites; and a serious lag in the proportions of high school graduates who attend college. Though many observers point to the negative impact of a changing economy characterized by a shrinking manufacturing sector and an expanding service sector, the authors of the National Academy of Sciences study unambiguously conclude that "a considerable amount of remaining black/white inequality is due to continuing discriminatory treatment against blacks."[7]

How is it that more than 100 years after emancipation, race is still a salient issue in the United States and blacks continue to lag significantly behind whites on every meaningful economic indicator? Most studies addressing this issue provide descriptions of the remaining black/white gap in indices of economic progress and social well-being. While these studies often offer detailed and invaluable documentation needed by policy makers, they generally fail to offer explanations that might help us understand the persistence of racial inequality. If we are to understand why a movement that began with such promise thirty years ago has, toward the end of the twentieth century, stalled and even gone backward, we need to dig deep below the surface.

In this chapter, therefore, I will not add to already ample descriptions of racial inequality in contemporary America. Because the roots of racism and discrimination are so deep, it is best to rely on a historical approach in analyzing the dynamics by which the present state of black/white relations came into being.

Theories of Racial Inequality

When one sifts through the books and articles on race relations that have appeared over the past fifty years, one finds that the overwhelming majority of scholars have focused in some fashion on the role of individual attitudes. Two of the best examples of this approach can be found in the writings of Gordon Allport and Lloyd Warner. To Allport we owe the emphasis on prejudice as the motivator of discriminatory behavior. Lloyd Warner, for his part, argued that the negative evaluation of all blacks by whites in the South had produced a southern society characterized by a caste division between blacks and whites.[8]

These studies led to a preoccupation among social scientists with racial attitudes and an interest in measuring changes in white attitudes toward blacks over time. The best known of these studies were surveys conducted by the National Opinion Research Center (NORC) and published in a series of articles in Scientific American over many years, beginning in 1956. Subsequently, both Gallup and the Institute for Social Research at the University of Michigan took periodic pulses of the racial attitudes of whites. At the heart of these studies was an attempt to measure the extent and depth of prejudicial attitudes held by whites against blacks, and the degree to which these attitudes might be changing over time. From this perspective, white attitudes were the key to black progress. If whites abandoned, or at least softened, their racist attitudes toward blacks, social scientists reasoned, the "race problem" would be solved. At the same time, a kind of social Darwinism informed their thinking, suggesting that white attitudes had to change before discriminatory behavior would cease.[9]

Within this framework, the social distance scale—a measure of the extent to which whites were willing to associate with blacks in various settings characterized by ever greater closeness, from the workplace to interracial marriages—became a major tool. Any sign of a decline in racist attitudes was greeted with enthusiasm as an indicator of racial progress. While these studies have documented a liberalization of white attitudes toward blacks over the decades, other researchers have continued to discover extensive discriminatory behavior in schools, housing, and the workplace.[10]

Recently, several scholars have turned away from the "individual prejudice" approach in favor of some type of "structural" explanation for the limited progress of blacks, as compared to whites, in American society. One version, advanced by Nathan Glazer, attributes the difference to the allegedly more recent arrival of blacks in urban America.[11] Another, proposed by Thomas Sowell, argues that blacks, coming from a rural background, have been hampered by the absence of a work ethic.[12] Both approaches fall under what has been called the "blacks-as-the-last-of-the-immigrants" theory, which suggests that blacks lag behind white ethnics primarily because the latter settled in the urban Northeast and Midwest earlier than southern blacks; the greater progress of white ethnics, on this view, is simply a matter of the opportunities that come with time. Two other explanations that differ from the prejudice approach are offered by Bonacich and Wilson. Bonacich blames inequality on the manipulation of black workers by capitalists in their struggle with the white working class.[13] Wilson argues that today's increasingly impoverished underclass is the result of structural shifts in the economy that have relocated jobs from the inner city to the suburbs.[14]

The individual prejudice approach attributes the continuing inequality of blacks to racist attitudes held by whites; the structural approach more or less blames impersonal market forces. The first sees a black/white polarization in America. The second tends to focus on the varied experiences of numerous ethnic groups and ethnic minorities and to minimize a black/white polarization. Taken alone, each of these two explanations has serious shortcomings.

Though Gordon Allport argued for a universal tendency among all societies toward prejudice and stereotyping, it is one thing to hold negative attitudes toward individuals and quite another to dehumanize them. It is an even greater leap to predict behaviors such as lynching from negative attitudes or stereotyping. Some scholars even challenge the one-to-one correspondence between prejudice and discrimination that is generally presumed. Earl Raab and Seymour Martin Lipset, for instance, have argued that black stereotypes, such as the Sambo image, are neither direct outcomes of negative attitudes toward a group nor predictors of the actions that might result from stereotypically held beliefs.[15] Other scholars have shown that discrimination can occur in the absence of prejudicial attitudes when the practices of institutions are inherently biased.[16]

In spite of its limitations, however, the structural approach broadens the search for an understanding of racial inequality by requiring an explanation for the variability in economic progress among white ethnics as well as between whites and blacks. In his book Ethnic America, Sowell presents a rank ordering of ethnic groups using a family income index that varies from 103 for Irish-Americans to 172 for Jewish-Americans, and from 60 for Native Americans to 99 for Filipino-Americans (table 9.1). Such data force us to examine more closely the factors relevant to upward mobility and the degree to which these factors have been available to various groups—including blacks. They also prompt questions about the environment and circumstances encountered by immigrants upon their arrival. The structural approach encourages an analysis of the factors relevant to upward mobility in American society, while the individualistic approach emphasizes a black/white polarization that overshadows the variability among white ethnics and among ethnic minorities. The tendency to view structural forces as the impersonal workings of the market, however, has been called into serious question.

Both Stanley Lieberson and Stephan Thernstrom present carefully analyzed historical data on the experiences of blacks and white ethnics that discredit the theory of "blacks-as-the-last-of-the-immigrants" and point instead to persistent discrimination against blacks by whites.[17] Even in data assembled to demonstrate ethnic variability, such as Sowell's family income index, an economic polarization along white/nonwhite lines is apparent. For although Sowell's index makes it clear that ethnic groups have experienced different degrees of success in scaling the economic ladder, it is also evident that, with the exception of Japanese- and Chinese-Americans, all groups with a family income index above the mean are white.


 

TABLE 9.1 Family Income Average
(U.S. average = 100)

Jewish          172
Japanese        132
Polish          115
Chinese         112
Italian         112
German          107
Anglo-Saxon     107
Irish           103
TOTAL U.S.      100
Filipino         99
West Indian      94
Mexican          76
Puerto Rican     63
Black            62
Indian           60

SOURCE: Thomas Sowell, Ethnic America: A History (New York: Basic Books, 1981), 5.

Bonacich's theory of a "split labor market" is similarly limited. While there is no doubt that white employers have at times used blacks as strikebreakers in their struggle with white labor, Bonacich's theory does not account for discrimination in the North that occurred before the influx of black migrants or in periods when the struggle between capital and labor was not intense. Nor does it help us understand why blacks, rather than some other group, were used as strikebreakers, or why all white ethnics united in their opposition to black workers.

This is not to deny that both individual prejudice and structural conditions have had an impact on black progress. However, either explanation taken alone is inadequate. Missing, therefore, is a link between individual prejudice and structural impediments to black achievement. Rather than view prejudice and structural conditions as factors operating independently of each other, it may be more accurate to see them as connected in some systematic fashion. In the remainder of this chapter I will argue that racism and prejudice are not simply the attitudes of malevolent individuals, but are cultural norms into which whites have been socialized and that have found expression in both systemic institutional and individual discriminatory behavior. From this point of view, structural conditions can no longer be viewed as the impersonal forces some have suggested, and racism is raised from the level of individual "quirks" to that of a societal phenomenon requiring analysis and solutions on the societal level.



The following discussion will therefore focus on the societal level and will be placed within the general framework of economic progress through upward mobility. From this point of view, there are three issues to investigate: (1) the factors that promote upward mobility in American society and the process through which mobility occurs, (2) the reasons for different degrees of ethnic success, and (3) the reasons for the more limited success of most ethnic minorities. The first two issues will be discussed briefly, while primary attention is directed toward answering the third question.

Getting a Piece of the Pie

Since the publication of the classic work by Peter Blau and Otis Dudley Duncan, scholars have tried to identify the factors that affect an individual's movement up the class ladder. According to the Blau-Duncan model, this movement involves three stages, beginning with family background, moving through a period during which education and training are attained, and ending in a particular occupation upon entry into the job market.[18] From the family, the individual receives economic support, encouragement, and the social skills needed to negotiate the next two stages—acquiring an education and entering the work force. An individual's educational achievement is greatly affected by the family's economic resources. As one moves up the class ladder, a family's ability to control the environment within which its children grow and develop increases. A neighborhood in which all or most families belong to the middle class will not only provide more resources for the local school system but will also place children in schools with other students who arrive equally well prepared. Much of that preparation results from the enriching experiences to which middle-class children in families with college-educated parents are routinely exposed—visits to museums and puppet shows, children's books, and a variety of educational toys.
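
In compact form, the Blau-Duncan model is usually estimated as a pair of recursive path equations. The following is a minimal sketch, using illustrative notation rather than Blau and Duncan's own (their full model also treats the status of the first job as an intervening variable):

E = b1·FE + b2·FO + u
O = b3·E + b4·FO + v

Here FE is father's education, FO is father's occupational status, E is the respondent's education, O is the respondent's occupational status (scored, for example, on a socioeconomic index), and u and v are unexplained residuals. The coefficients b1 through b4 trace how advantage at one stage carries over to the next.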

Working-class parents above and below the poverty line live in progressively less affluent neighborhoods, depending on income. Their children attend schools with children who often are poorly prepared to begin the educational journey. Because the tax bases upon which schools rely are smaller than in middle-class communities, such schools typically have inadequate resources. Differences in the education of working- and middle-class children intensify beyond the elementary level. Children from working-class families—particularly minority children—are more likely to be placed in lower tracks that do not provide preparation for college.[19] Teachers hold lower expectations for them and give them less encouragement to excel.



Middle-class children, by virtue of both their educational experiences and their families' greater financial resources, are more likely than working-class children to continue on to college, a must for entry into an upper-middle-class occupation and a chance at the American dream. That the economic success of college-educated individuals far surpasses that of those with only a high school degree has been documented time and again. In 1989, the median net worth of black college graduates was four times that of college dropouts and six times that of those with only a high school diploma.[20]

As Jencks has pointed out, there is, of course, an element of chance involved in an individual's progress up the class ladder.[21] Personal or family contacts—"who you know"—can also affect the outcome. Nevertheless, the model demonstrates that the class position of the family into which we are born greatly affects our future success. Although this country prides itself on being a "land of opportunity," opportunity is not uniformly distributed across classes. Different degrees of individual economic success are not accidental, therefore. They are built into our society's structure by variations in the economic resources of the families upon which we all depend to get started along the road to success. Thus some people begin with high-tech running shoes, others with yesterday's models, and some without any shoes at all.

Governmental programs to "equalize" starting opportunities, or at least minimize differential advantages, have had mixed results. Project Head Start has been a real success, but suffers from underfunding and lack of follow-through at the elementary school level. Efforts to eliminate other educational disadvantages through school desegregation have met with massive resistance. At bottom is the failure of political leaders and white citizens to fully commit the nation to institutionalizing equality of opportunity. The goal seemed like a good idea during the late 1960s, when prosperity held the promise of eliminating the black/white economic gap without sacrifices by whites. In the economically insecure decades of the 1970s and 1980s, however, whites have been prone to argue that blacks have gone far enough, or that they lag behind economically through their own fault.

Differences in the Degree of Ethnic Success

But if the Blau-Duncan attainment model identifies the factors and process through which individuals climb the class ladder, why are there differences in economic success on the aggregate level between ethnic groups? Why have more members of some ethnic groups moved into the middle class than others? Stephen Steinberg disputes the traditional view that different rates of ethnic success are due to differences in "their value systems" and that therefore the causes are "to be found within the groups themselves."[22] Rather, he argues, external factors to which the entire group was exposed—such as patterns of settlement, time of arrival, external obstacles, and opportunities in the immediate environment, as well as resources possessed, such as skills and education—have been far more important than internal values.

Some ethnic groups, for instance, tended to settle in rural areas, others in industrial cities. Arriving early in our history and coming from rural backgrounds in Europe, Germans and Swedes sought out the opportunities provided by rich, inexpensive land in the Midwest. Other groups, like Jews, Italians, and the Irish, seemed to find urban areas more suited to their previous experiences, or they arrived at times when land was no longer plentiful and cheap. On the whole, however, the time of arrival alone seems to explain little of a group's eventual success. Poles and Jews who immigrated around the turn of the century have higher family income indices than Germans and Anglos who came decades earlier in the nineteenth century.

The early occupational experiences of ethnic groups sometimes had serendipitous origins. The far greater number of Irish girls—compared with either Jewish or Italian girls—in domestic work is a case in point. While many analysts have attributed this pattern to different cultural values with respect to domestic work, Steinberg notes that Irish girls often immigrated alone, while Jewish and Italian women accompanied their families. Since domestic work provided lodging as well as income, it was well suited to single girls in cities. Jewish and Italian girls had no such need and therefore concentrated in the garment industry or factory work. As a result of these immigration patterns, 54 percent of employed Irish women were classified by the U.S. Census as engaged in "domestic and personal" work in 1900, compared to only 9 percent of Italian and 14 percent of Jewish female workers. In contrast, only 8 percent of Irish women worked in the needle trades, compared to 41 percent of Jewish and 38 percent of Italian women.[23]

It is along these lines that Steinberg also explains the rapid upward mobility of Jews to their position above all other ethnic groups in the United States. When Eastern European Jews arrived in the United States, they found a particularly good fit between their urban background and skills and the employment needs of a burgeoning garment industry in New York City. Forbidden to own land in Russia, they nevertheless "worked in occupations that prepared them for roles in a modern industrial economy."[24] According to the 1897 Russian census, 38 percent of Jews worked as artisans or were employed in manufacturing, primarily in the production of clothing. Another 32 percent were in commerce, often as middlemen linking the urban and rural economies, a role successfully played earlier by Korean immigrants on the West Coast of the United States and again by Koreans today.[25]

Comparing Jews with six other ethnic groups that arrived in the United States between 1899 and 1910 (English, Germans, Scandinavians, Italians, Irish, and Poles), Steinberg found the highest percentage of skilled workers among Jews: 67 percent.[26] The next highest percentage, 49 percent, was found among the English, a group no longer heavily represented among immigrants at that time. Among Germans, only 30 percent of immigrants were skilled, while the lowest percentage was found among Polish immigrants, only 6 percent of whom were skilled.

Skills are useless, however, without a demand for those skills in the area of settlement. By chance, Russian Jews found a demand for their extensive array of skills at the port of arrival, New York City, particularly in the clothing industry, which was primarily concentrated there. As a result of the fit between their extensive skills and the city's economy, Jews "ranked first in 26 of the 47 trades" tabulated by the U.S. Immigration Commission in 1911. The rapid upward mobility of Eastern European Jews, therefore, can be traced to the occupational fit they encountered at their point of entry into the United States. Their occupational success was then translated into sponsorship of their children in similar occupations and in educational attainment. Although, like every other immigrant group, Jews encountered discrimination, it was not sufficient to prevent them from entering skilled occupations or educating their children. They thus followed the classic pattern of each generation doing a little better than the previous one. While other ethnic groups also followed this same pattern, it is clear that the external factors encountered and the skills possessed differed from group to group. Thus some were more successful than others in moving up the class ladder.

The Penalty for Being Black

In spite of their slave experience, blacks in many respects occupied an advantageous position in 1865 relative to most European immigrants who arrived after this date. They were, first of all, experienced farmers in the southern agricultural economy. They knew the land; they understood the crops and the means of cultivating them. Secondly, blacks constituted most of the skilled workers in the urban South, having learned and practiced numerous skills in the slave economy. Thirdly, though their illiteracy rate was high, they knew the language of the country and understood its customs. They were not strangers in a strange land. Finally, blacks lived in proximity to the growing number of industrial jobs in the North. As slaves, many had worked in tobacco factories and in the towns of the South.



Had blacks received the promised forty acres and a mule, or at least been allowed to acquire land, thousands would have become small independent farmers at a time when land was still the backbone of the southern economy. Thousands more—if given the chance—would have moved into the industrial economy of the North to work in the factories of Chicago, Pittsburgh, and Detroit. However, blacks were not viewed or treated as another ethnic group in a plural society. Rather, the issue became polarized in both the South and the North along black/white lines. Instead of allowing freedmen to acquire land after emancipation, southern planters moved quickly to preserve their cheap pool of agricultural labor by denying freedmen access to land throughout the South. At a time when land provided millions of whites a means of achieving self-sufficiency and the possibility of capital accumulation, freedmen did not receive the forty acres and a mule promised them. As W. E. B. Du Bois points out, this was unique in the experience of western societies: when the serfs were freed in Europe, from Russia to England, they were given parcels of land for their livelihood. In the postbellum period, freedmen were not only denied the promised forty acres and a mule but were effectively prevented from purchasing land throughout the South.[27] In New Orleans, a program by wealthy blacks to lease plantations seized from former slaveholders and to rent this land to freed blacks was thwarted when a Republican party tiring of Reconstruction, and eager to ensure a continued flow of cotton to northern textile mills, returned the plantations to their former owners. Faced with these obstacles, only 5 to 8 percent of blacks managed either to become or to remain landowners.

After some experimentation, a system of sharecropping emerged that ensured plantation owners a cheap labor pool, if not always an entirely tractable one. There were, first of all, conflicts over the definition of the labor pool itself. Both southern planters and northern reconstructionists defined black women as part of the new southern plantation labor force, while blacks attempted to redefine the role of their wives in conformity with the "cult of domesticity" that had newly emerged within the white middle class. "The women," one planter complained, "say that they never mean to do any more outdoor work, that white men support their wives, and they mean that their husbands shall support them."[28]

To enforce their own interpretation of the black work force, planters often used armed riders to go from cabin to cabin, forcing black women into the fields. In the end, the sharecropping system effectively kept most southern blacks in virtual peonage. On "settlement day" (the annual calculation of debits and credits between planter and sharecroppers), the typical black family found itself in debt to the planter, who used the family's dependency on the plantation store for provisions as a means of cheating them. Those black families "in debt" at the end of the year had to remain another year to work off their debt. Nor could blacks move into the newly founded textile mills of the South. This was work reserved for poor whites by a southern planter class determined to prevent a political alliance between poor blacks and whites. The few blacks admitted to the mills were confined to the most menial tasks.[29]

Those blacks who moved into the urban economy of the South found their labor as exploited there as in rural areas. The dominant position held by black males among urban craft workers at the end of the Civil War was lost over the next three decades, through unfair competition and the growing reluctance of whites to employ their skills. By 1900, the class of black artisans had been decimated, reduced from five out of six urban southern artisans to only 5 percent. There remained only menial, often sporadic, work for them, making it necessary for other family members to supplement their income. Summarizing the experience of black families in the urban South during the decades of the late nineteenth and early twentieth centuries, historian Jacqueline Jones writes: "Husbands were deprived of the satisfaction of providing their families with a reliable source of income, while wives found their duties enlarged as they added paid employment to their already considerable domestic responsibilities."[30]

The only work to which black women could aspire was domestic or laundry work, work of such low status that even poor white women avoided these jobs at all costs. Like white immigrant women in the North, black women in the South sought factory work in preference to domestic work whenever possible. Here, too, they found themselves relegated to the most menial of the available factory jobs, including sorting, stripping, and stemming leaves in tobacco factories, work so difficult and unhealthy that white women could not be found to take it. Those white women who did work in tobacco factories were given the more skilled tasks and worked in healthier surroundings, segregated from blacks.[31] When one employer hired a black woman to work in their section, white women walked off the job in protest, forcing the factory owner to fire the black worker. As a result of these kinds of restrictions, 90 percent of the servants in southern cities were black by 1900, as were the majority of laundresses.

In the North, employers showed a decided preference for white immigrant labor over the readily available pool of southern or northern blacks. Because of discrimination, there were no blacks in the brass and ship industries of Detroit in 1890, and only 21 blacks among the 5,839 male workers in the tobacco, stove, iron, machine, and shoe industries. By 1910, only 25 blacks could be found among the 10,000 primarily foreign workers in the burgeoning auto factories.[32] The discrimination black workers faced in northern cities could be felt in the lament of a Detroit whitewasher in 1891: "First it was de Irish, den it was de Dutch, and now it's de Polacks as grinds us down. I s'pose when dey [the Poles] gets like de Irish and stands up for a fair price, some odder strangers'll come over de sea 'nd jine de faimily and cut us down again."[33]

Not until World War I cut off the flow of white immigrants did northern industrialists begin recruiting blacks from the South to work in the factories of Chicago, Philadelphia, Detroit, and other northern cities. Even then, black workers found a labor market in which white ethnics were firmly united in their opposition to competition from black labor, while employers reserved the better jobs for native whites and immigrants alike. When white workers turned to unionization, blacks were excluded.

Black women, forced to work in large numbers to supplement their husbands' and fathers' incomes, did not fare much better in the northern economy than they had in the South. As immigrant women abandoned domestic work for less odious factory jobs and native white women entered the new clerical occupations, black women alone found themselves overwhelmingly confined to domestic and laundry work. In Pittsburgh after World War I, 90 percent of all domestics were black women; in Philadelphia, over half. Those who found work in factories—only 5.5 percent in 1930, compared to 27.1 percent of the foreign-born—were confined to the most dangerous and menial tasks. Nevertheless, domestic work represented such an undesirable alternative, the lowest-status work in the economy, that black women sought factory work whenever possible. Their sentiments were unambiguously expressed by a black woman in a Chicago box factory in 1920: "I'll never work in nobody's kitchen but my own any more. No indeed! That's the one thing that makes me stick to this job."[34] These same black women had to watch helplessly as their own daughters were passed over by white employers, who filled clerical and sales positions with native white and even immigrant girls who had no more education than those daughters.

By 1940, only 5.7 percent of black males and 6.6 percent of black females had been able to enter middle-class occupations, the majority in predominantly black institutions, while 35.7 percent of whites held middle-class jobs.[35] In the ensuing decades, few blacks managed to climb the class ladder into the middle class. On the eve of the Civil Rights Movement, in 1960, the effectiveness of discriminatory practices in the job market was apparent in the sizes of the black and white middle classes, which stood at 13.4 and 44.1 percent of employed workers, respectively. The job ceiling remained almost as low for blacks in 1960 as it had been at the turn of the century.

How is it that, of all ethnic groups, African-Americans still rank near the bottom on all economic indicators? Why have all European ethnics had more success than blacks in moving up the class ladder? As I pointed out earlier, if one considered only the objective characteristics and circumstances of blacks and of white immigrant groups from southeastern Europe, one would have predicted a far different outcome. Blacks' skills were at least equal to those of the new immigrants, and their motivation to succeed was as high. While the selectivity of immigrants is well known, African-Americans emerged from slavery with a tremendous motivation to begin new lives for themselves. Denied the right to an education during slavery, they possessed a strong desire for this forbidden fruit. Adults eagerly flocked to the schools established by northern philanthropists and missionaries to learn to read, and they were eager to send their children to school. Those who eventually migrated to the North were more likely than many of the new ethnics to send their children to school rather than out into the labor force. Yet these same families had to stand by helplessly while their children were passed over for the better jobs the economy had to offer in factories and offices.

The answer is not simply to be found in the economic competition among ethnic groups. For that would not explain why white ethnics, who competed among themselves, united in competing against blacks. Nor is Bonacich's theory of a split labor market a sufficient explanation. To be sure, employers sometimes profited by using blacks as strikebreakers and as a reserve army of cheap labor, but not on a scale sufficient to suppress the aspirations of white labor. Blacks were never given sufficient access to semiskilled or skilled jobs to play that role. Rather, I would argue that both white employers and ethnics united in their opposition to blacks competing in the market on an equal footing with white workers. But why? It is at this point that we are forced to return to the black/white polarization in American society. But rather than view racism as simply operating on the individual level as prejudice, it has to be interpreted in structural terms—as part of the culture.

Black/White Polarization

My argument is that primarily because of their racial attitudes, whites of all classes have historically reserved the worst jobs in the economy for black workers. It is true that, with the exception of work in the slave economy, whites have at times performed these same tasks in the urban economy. Yet these menial jobs were always viewed as temporary positions in the class structure, stepping stones toward a better life for themselves or their children higher up the class ladder. These white workers, immigrant and native, did in fact move up to better jobs, or at least were able to see their sons and daughters securing better jobs than their own in the next generation. Each new generation of European immigrants competed for positions in the economy and moved a little further up the class ladder.



This competition for desirable work, however, was open to whites alone. The norms of the market dictated that throughout the economy black workers be denied opportunities to compete equally with whites for desirable positions. Rather, black workers—both male and female—were reserved for the most menial labor at the very bottom of the class structure: unskilled labor and domestic work. This was not a "reserve labor pool" to be drawn upon by employers to undercut the price of white labor in semiskilled and skilled work. It was a system that defined some jobs as "colored jobs" and others as "white jobs."

Unlike the situation for whites, progress for blacks was not a matter of working harder or acquiring more skills and education. Since blacks were denied opportunities to compete in the market on an equal footing with whites for the same jobs, their upward mobility was stymied at its very source: the opportunity for husbands and wives to gain good and secure employment, improve their own living standard, and thus be able to sponsor their sons and daughters in the next two stages of their movement up the occupational ladder. For although black parents placed greater emphasis on educating their children than many immigrant groups did, they were nevertheless forced to stand by helplessly as their sons and daughters remained shut out of the growing number of skilled and clerical jobs becoming available. While the children of immigrants only recently arrived in this country could aspire to move further up the class ladder than their parents, generation after generation of black youth could aspire to little more than the unskilled labor and domestic work at which their parents toiled. A survey by the Bureau of Jewish Employment Problems of Chicago in 1961 found this to be true in the North as well as the South: its report concluded that "98 percent of the white-collar job orders received from over 5,000 companies were not available to qualified Negroes" in that year.[36] "No blacks need apply" was the common experience of blacks seeking to move up the class ladder. Those blacks who managed to escape these restrictions to some extent by acquiring an education in a black college found themselves confined to serving the black community, rather than being able to contribute their talents to the development of the entire society.[37] The brain power, creativity, and talents of millions of blacks were lost to both the black community and the larger society.

Earlier I noted that an individual's movement up the class ladder has been modeled as a three-stage process, involving family background, educational attainment, and entry into the job market. At each of these points, blacks found themselves handicapped. Black families were denied opportunities to increase their economic resources, which could then have been used to sponsor their children at the next stage. The education of black children was separate and unequal. When moving into the job market, blacks encountered a ceiling above which they could not climb. Though immigrants from southeastern Europe frequently encountered discrimination, it was never as severe or prolonged as that faced by blacks.[38] Furthermore, these same immigrants, who were themselves discriminated against, united in their opposition to blacks. Thus their own upward mobility was facilitated at the expense of blacks, who were kept at the very bottom of the occupational structure—all in spite of the initially more favorable position of blacks.

White workers did gain economically from the subjugation of black workers, just as they had profited from the elimination of Chinese workers in California during the late nineteenth century, and just as Anglos profited from the seizure of Mexican-American land after the Mexican-American War. White ethnics could have gained equally from the subjugation of another ethnic group, such as Poles, Jews, or Italians. These latter groups did, in fact, experience discrimination from the older ethnics from northern and western Europe. But none of the new European ethnic groups was confined to unskilled labor and domestic work. Each quickly moved from unskilled labor and domestic work into factories, the first springboard up the class ladder. There they secured the best jobs, while blacks either could not gain access or found only the most menial and dangerous work open to them.

Though discrimination has been part of American history from colonial times and has affected all groups other than Anglos to some degree, it is ethnic minorities, those with darker skins, who have experienced the severest discrimination and faced the most obstacles in their movement up the class ladder. From the very beginning, the class system in America has been a color-conscious class system.[39] Within this color-conscious class system, African-Americans have experienced earlier and more persistent discrimination than any other group except Native Americans.

Cultural Racism and the Role of Blacks in the U.S. Class System

The role of blacks in the U.S. class system was first established with the importation of Africans to labor as slaves on the plantations of the South. This development followed southerners' failed attempts to use the quasi-free labor of Indians and white indentured servants on plantations. When those attempts collapsed, planters turned to African slaves. Unlike Native Americans, Africans were accustomed to agricultural work, and unlike escaped white indentured servants, they could not blend into the population, making them an ideal inexpensive work force from the planters' viewpoint.

To justify the total subjugation of Africans under the slave system, whites resorted to negative imaging and stereotyping of African-Americans. In time, blacks were portrayed as somewhat less than human, without a Christian soul and devoid of refined, civilized sentiments. During the abolitionist movement in the early nineteenth century, the propaganda of slaveholders intensified. According to Fredrickson, a pamphlet published in New York in 1833, entitled Evidence Against the Views of the Abolitionists, Consisting of Physical and Moral Proofs of the Natural Inferiority of the Negroes, presented "the basic racist case against the abolitionist assertion of equality."[40] In this pamphlet the author, Richard Colfax, argued for the innate intellectual inferiority of blacks, based on their alleged physical differences. This theme would later be taken up again and given pseudoscientific support by racist white scholars.

Southern apologists for the system of slavery went beyond the inferiority thesis to argue the benefits slavery held for an inferior race. Such an argument was made in the United States Senate in 1858 by James Henry Hammond, a planter-intellectual and senator from South Carolina:

In all social systems there must be a class to do the menial duties, to perform the drudgery of life. That is a class requiring but a low order of intellect and but little skill. Its requisites are vigor, docility, fidelity. Such a class you must have. . . . It constitutes the very mud-sill of society. . . . Fortunately for the South we have found a race adapted to that purpose to her hand. . . . We do not think that whites should be slaves either by law or necessity. Our slaves are black, of another, inferior race. The status in which we have placed them is an elevation. They are elevated from the condition in which God first created them by being made our slaves.[41]

Slavery, then, was portrayed as beneficial to blacks, so much so, as one writer asserted, that under slavery they became "the most cheerful and merry people we have among us."[42] Sambo, the grinning, happy-go-lucky, singing and dancing, simple-minded black, was a natural product of this thinking and became an image of all blacks, free as well as slave. Nor did these negative images and stereotypes end with slavery. Rather, as Fredrickson notes, they "engendered a cultural and psychosocial racism that after a certain point took on a life of [its] own and created a powerful irrational basis for white supremacist attitudes and actions."[43] These attitudes became part of white culture and belief systems well into the twentieth century.

Thus, in a serious dissertation written for his doctoral degree at Columbia University in 1910, Howard Odum could write:

The Negro has little home conscience or love of home, no local attachments of the better sort. . . . He has no pride of ancestry, and he is not influenced by the lives of great men. . . . He has little conception of the meaning of virtue, truth, honor, manhood, integrity. . . . He does not know the value of his word or the meaning of words in general. . . . They sneer at the idea of work. . . . Their moral natures are miserably perverted.[44]



Odum's dissertation later became an influential book under the title Social and Mental Traits of the Negro. His conception of blacks in 1910 was no different from that expressed almost immediately after emancipation, in 1866, by George Fitzhugh, who wrote: "They [Negro orphans] lost nothing in losing their parents, but lost everything in losing their masters. Negroes possess much amiableness of feeling, but not the least steady, permanent affection. 'Out of sight, out of mind' is true for them all. They never grieve twenty-four hours for the death of parents, wives, husbands, or children."[45]

Because of the racist ideas about African-Americans circulating before and after the Civil War, debates over their fate "never contemplated an integration of black workers into the nation's industrial labor force."[46] Rather than simply being allowed to take their place in American society as another ethnic group struggling up the class ladder, blacks were viewed collectively as a "problem." One southerner expressed the view of many in 1867 when he wrote: "No permanent lodgment, no enduring part nor lot, must the black and baneful Negroes be permitted to acquire in our country. Already have they outlived their usefulness—if, indeed, they were ever useful at all."[47]

The idea of expelling African-Americans from the society altogether was, in fact, entertained by Lincoln himself, who persuaded Congress to pass legislation subsidizing the voluntary emigration of ex-slaves to the Caribbean.[48] A number of northern states, including Pennsylvania, Ohio, and Illinois, went so far as to pass laws to prevent the migration of free blacks into their territory. Everywhere, the issue was expressed as a competition between black and white labor, rather than as competition between workers, even though Irish and German workers, and later Polish, Italian, and Slavic workers, would also be in competition with one another. Thus in 1862, as blacks freed by the Union Army drifted northward, the Boston Pilot, an Irish-Catholic newspaper, remarked that "we have already upon us bloody contention between white and black labor. . . . The North is becoming black with refugee Negroes from the South. These wretches crowd our cities, and by overstocking the market of labor, do incalculable injury to white hands."[49] In a similar vein, the Democratic party of Pennsylvania inveighed in 1862 against the Republican party, calling it "the party of fanaticism, or crime . . . that seeks to turn the slaves of the Southern states loose to overrun the North and enter into competition with the white laboring masses, thus degrading and insulting their manhood by placing them on an equality with Negroes in their occupations is insulting to our race, and merits our most emphatic and unqualified condemnation."[50]

In the late nineteenth century, whites in California and other far western states became alarmed at the "yellow peril," and western states passed discriminatory laws against the Chinese. Similarly, nativists were instrumental in the passage of the first immigration quota system in 1921, which severely restricted the access of eastern and southern Europeans to the United States. In both cases, however, the discrimination was not rooted so far in the past or so deep in the cultural psyche as that aimed against blacks. Eventually, fear of the "yellow peril" subsided, and Asians were able to develop businesses patronized by the general white public; immigrants from southeastern Europe became just so many immigrant groups on the American landscape. Yet the discriminatory structures and laws enacted against blacks in the South, and the discriminatory practices of the North, persisted for 100 years after the end of slavery. Only when confronted with a major threat to societal order, posed by the upheaval of the Civil Rights Movement and the "long hot summers" of ghetto rebellion, was American society persuaded to commit itself—for the first time in its history—to equal status for African-Americans. Coming after a century of oppression that had left a disproportionate number of black citizens on the bottom rung of the class ladder, with little wealth, property, or educational resources, the civil rights laws of 1964 could be nothing more than a beginning, a ticket to run in a race for which millions of blacks were ill prepared. The task of redressing the cumulative consequences of past discrimination remained, as did that of providing truly equal opportunities to those blacks now entering the educational system and job market.

The Present and Future of Racial Polarization

To some extent, the disadvantaged position in which blacks found themselves in 1964 was recognized. Lyndon Johnson launched the War on Poverty during the euphoria of the prosperous era of the late 1960s, a period in which all things seemed possible. Educational disadvantages were addressed through attempts to desegregate schools at all levels. The Head Start program was launched to help disadvantaged children overcome the educational deprivation associated with poverty. The Office of Civil Rights and the Equal Employment Opportunity Commission (EEOC) were established, the latter to oversee the implementation of Title VII, which outlawed discrimination in employment. An open-housing bill was passed, and the government moved to grant blacks access to the ballot box.

In hindsight, we now see that the federal government and the nation did not fully appreciate the magnitude of the task: to eradicate 100 years of deprivation and oppression, and to remove from the hearts and minds of whites the cultural baggage of racism. In time, discouragement over the slow pace of progress set in. Rather than fine-tuning the efforts begun during the War on Poverty with more sophisticated approaches and additional resources, the nation eventually abandoned the "War" altogether.

In the atmosphere of economic insecurity that gripped the nation during the downturns of the 1970s and 1980s, even the white middle class became preoccupied with bread-and-butter issues. A white society still imbued with a racist culture turned once more to a familiar tool in a new guise: discrimination. Concern about "equal opportunity" for blacks was replaced with concern over "reverse discrimination," a term symbolizing the growing unwillingness of present generations of whites, individually and collectively, to accept the challenge (and burden) of rectifying the evils created by past generations of whites. Well-meaning but naive policymakers had not anticipated the depth of white resistance to the full incorporation of blacks into American society, or whites' unwillingness to pay the societal cost of achieving that task. Even today, as the National Academy of Sciences study notes, while whites are increasingly supportive of the "principles" of racial equality, they offer "substantially less support for policies intended to implement principles of racial equality" and continue to shun sustained and close contacts with blacks.[51] Granted, the unfinished agenda is challenging in the best of times, and even painful in periods of economic sluggishness such as we faced in the 1970s and 1980s; but the subjugation of blacks has been more painful still. Their continued economic inequality is not only debilitating to them but costly to the nation, in terms of both the expense of maintaining the dependent poor and the pool of productive talent lost. The financial loss to the nation associated with the lower earnings of blacks has been estimated by Billy J. Tidwell of the Urban League at almost 2 percent of our gross national product, or about $104 billion in 1989.[52]

The problem of fully incorporating blacks into the American mainstream is a societal problem, one that requires compensatory measures to rectify the disadvantages created by racism. Since this is a societal rather than merely an individual problem, it is the task of government to mobilize resources and persuade white society to support the undertaking. Such a mobilization of resources and sentiment was begun under John F. Kennedy and continued by Lyndon Johnson; it included the passage of the Civil Rights Act and the launching of the War on Poverty. Many successes can be counted.

There are just as many failures as successes, however. The comprehensive study by the National Academy of Sciences makes this painfully clear. Efforts to desegregate schools have faltered at all levels, and schools continue to fail to provide quality education to blacks generally. Head Start remains far below its full potential because of inadequate funding. Because the schools have failed to address the educational disadvantages many blacks face as a result of their lower socioeconomic backgrounds, the National Academy's study concludes: "American students leave the schools with black/white achievement gaps not having been appreciably diminished."[53] A college education, the key to "an estimated 50 percent of new jobs created between [1989] and the year 2000," is becoming less and less accessible to blacks.[54] The proportion of black high school graduates entering college is now lower than in 1976, a victim of declining federal aid to education and the virtual abandonment of desegregation in higher education by the federal government during the Reagan administration.[55] Discrimination in housing remains little changed from the past, so much so that blacks remain today the most residentially segregated of all ethnic minorities. According to the findings of Douglas Massey, "a black person who makes more than $50,000 a year will be virtually as segregated as a black person who makes only $2,500 a year."[56]

While blacks have made tremendous strides in employment since 1964, by 1990 they still lagged far behind whites in both occupational achievement and income. The optimism of the late 1960s has given way to caution or even pessimism, leading the authors of the National Academy of Sciences study to conclude that "since the early 1970s, the economic status of blacks relative to whites has, on average, stagnated or deteriorated."[57]

Signs of this stagnation are evident in both income and occupational statistics. In 1989, the median income of two-earner black families was $36,709, only $8,751 higher than the figure for white families with a single earner.[58] The wealth gap was even larger, with whites having a net worth more than three times that of blacks. Black upward mobility into the middle class has also slowed. My overall projections for the black middle class for the years 1990 and 2000, which were based on statistics for the years 1973 to 1981, have proven to be overly optimistic.[59] Rather than a black middle class comprising 48.6 percent of employed blacks in 1990, the proportion was closer to 45 percent. At the same time, the projection of a white middle class of 59.5 percent in 1990 was just about on target, evidence that whites have not suffered economically during this period as much as blacks. Blacks have had an especially difficult time penetrating the seats of power in the workplace: although they constituted 10.1 percent of all employed workers, they held only 6.1 percent of the managerial and professional jobs in 1989.[60] Dispelling the idea that discrimination is a thing of the past, the National Academy of Sciences study concludes that "a considerable amount of remaining black/white inequality is due to continuing discriminatory treatment of blacks."[61] The task of fully incorporating African-Americans into American society remains "unfinished business."

Much of this failure can be laid at the feet of the federal government, especially the two-term Reagan administration, which not only failed to provide the leadership needed to complete the task but was actively hostile to the only truly successful tools in this struggle: desegregation of the educational system and implementation of equal employment opportunity laws. The Reagan administration's hostility to quotas and timetables, the only meaningful means of forcing reluctant employers to implement affirmative action, has been especially devastating. It is at best naive to believe that employers who have discriminated against blacks in the past will suddenly have a change of heart and voluntarily afford the same opportunities to blacks as they do to whites. The study by the National Academy of Sciences should erase all doubts, even among the most skeptical. Rather than race declining in significance, as William Wilson suggested in 1978, it is now clear that race remains a deep, pervasive, and intractable characteristic of white society in the 1990s. The most recent indication of persistent racism comes from a 1990 national NORC survey, which found that over half of all whites still hold the negative stereotype that blacks are lazy and less intelligent than whites.[62]

Conclusion

Simple justice and a commitment to equality demand that we free ourselves of racism and discrimination. Yet, as this historical review of race relations in the United States indicates, white resistance to black progress has been so deep, and has gone on for so long, that racism seems intractably built into the American experience. Despite such resistance, however, the problem must be addressed. Today it can be said that the future not only of blacks but of the nation itself depends on the full incorporation of minorities into the American mainstream. Because of changing demographics, the economy will depend more heavily on minority workers in the future.[63] By the year 2000, about one-third of all workers entering the labor force will be minorities, and when this projection is combined with the patterns of immigration described by Rubén Rumbaut in his chapter in this book, it is clear that perhaps the single biggest change facing the United States is the increasing racial and ethnic diversity of its population. By the year 2080, all minorities taken together may well constitute slightly over half of the U.S. population. Even the narrow self-interest of whites thus demands their strong support for affirmative action and the elimination of all forms of racism and discrimination. Yet millions of whites (perhaps the majority) do not understand this, and the business community—always preoccupied with the present—is just beginning to glimpse this truth.

It should be clear that the problem of racism and discrimination will not resolve itself. Almost every day the news media report some new sign of racial tension in America. Blacks continue to have a difficult time buying homes because of bias by lenders and because of their lower incomes.[64] Discriminatory election laws in many states continue to hinder the election of blacks at the state level, leading the Justice Department to file suit against the state of Georgia.[65] The upper levels of management continue to elude blacks today as they did ten years ago. And even as the economic position of blacks declined through the 1980s, as measured by falling incomes and rising unemployment, employment agencies continued to refer whites for job interviews more often than blacks, as CBS's "60 Minutes" documented in the summer of 1990.

I have argued throughout this chapter that racism and discrimination should be seen as societal problems, not simply the aberrations of malevolent individuals. Just as the most positive historical changes, such as emancipation and the Civil Rights Act, resulted from leadership at the highest levels of government, so today only initiative and leadership from state and, especially, federal government are equal to the task. Without a massive effort, similar to that of the Civil Rights era of the 1960s, the racial problems of today will only grow worse in the twenty-first century.



Ten—
Passages to America:
Perspectives on the New Immigration

Rubén G. Rumbaut

Once I thought to write a history of the immigrants in America. Then I discovered that the immigrants were American history.
OSCAR HANDLIN, The Uprooted (1951)


Ironically, those opening lines of Handlin's famous portrait of immigrant America ring truer today than they did when he penned them at mid-century. As Handlin would add in a postscript to the second edition of The Uprooted two decades later, immigration was already "a dimly remote memory, generations away, which had influenced the past but appeared unlikely to count for much in the present or future"; and ethnicity, not a common word in 1950, seemed then "a fading phenomenon, a quaint part of the national heritage, but one likely to diminish steadily in practical importance."[1] After all, the passage of restrictive national-origins laws in the 1920s, the Great Depression, and World War II had combined to reduce the flow of immigrants to America to its lowest point since the 1820s. But history is forever ambushed by the unexpected. Handlin might have been surprised, if not astonished, to find that in at least one sense the "American Century" seems to be ending much as it had begun: the United States has again become a nation of immigrants, and it is again being transformed in the process. To be sure, while the old may be a prologue to the new, history does not repeat itself, whether as tragedy or as farce. America is not the same society that processed the "huddled masses" through Castle Garden and Ellis Island, and the vast majority of today's immigrants and refugees hail not from Europe but from the developing countries of the Third World, especially from Asia and Latin America. Not since the peak years of immigration before World War I have so many millions of strangers sought to make their way in America. They make their passages legally and illegally, aboard jumbo jets and in the trunks of cars, by boat and on foot; incredibly, in 1990 a Cuban refugee came across the Straits of Florida riding a windsurfer. Never before has the United States received such diverse groups—immigrants who mirror in their motives and social-class origins the forces that have forged a new world order in the second half of this century and who are, unevenly, engaged in the process of becoming the newest members of American society.[2]

The American ethnic mosaic is being fundamentally altered; ethnicity itself is being redefined, its new images reified in the popular media and reflected in myriad and often surprising ways. Immigrants from a score of nationalities are told that they are all "Hispanics," while far more diverse groups—from India and Laos, China and the Philippines—are lumped together as "Asians." There are foreign-born mayors of large American cities, first-generation millionaires who speak broken English, a proliferation of sweatshops exploiting immigrant labor in an expanding informal economy, and new myths that purport to "explain" the success or failure of different ethnic groups. Along "Calle Ocho" in Miami's Little Havana, shops post signs to reassure potential customers that they'll find "English spoken here," while Koreatown retailers in Los Angeles display "Se habla español" signs next to their own Hangul script, a businesslike acknowledgment that the largest Mexican and Salvadoran communities in the world outside of Mexico and El Salvador are located there. In Brooklyn, along Brighton Beach Avenue ("Little Odessa"), signs written in Cyrillic letters by new Soviet immigrants have replaced old English and Yiddish signs. In Houston, the auxiliary bishop is a Cuban-born Jesuit who speaks fluent Vietnamese—an overflow crowd of 6,000 faithful attended his recent ordination, and he addressed them in three languages—and the best Cuban café is run by Koreans. In a Farsi-language Iranian immigrant monthly in Los Angeles, Rah-E-Zendegi, next to announcements for "Business English" classes, a classified ad offers for sale a $20 million square block on Boston's Commonwealth Avenue, and other ads deal with tax shelters, mergers, and acquisitions. In Santa Barbara, a preliterate Hmong woman from the Laotian highlands, recently converted to Christianity, asked her pastor if she could enter heaven without knowing how to read; while in Chattanooga, Tennessee, a twelve-year-old Cambodian refugee, Linn Yann, placed second in a regional spelling bee (she missed on "enchilada"). At the Massachusetts Institute of Technology, Tue Nguyen, a twenty-six-year-old Vietnamese boat refugee, set an MIT record in 1988 by earning his seventh advanced degree, a doctorate in nuclear engineering, just nine years after arriving in the United States—and landed a job at IBM designing technology for the manufacture of semiconductors. In the San Jose telephone directory, the Nguyens outnumber the Joneses fourteen columns to eight, while in Los Angeles a Korean restaurant serves kosher burritos in a largely black neighborhood. And then there was this in the New York Times: "At the annual Lower East Side Jewish Festival yesterday, a Chinese woman ate a pizza slice in front of Ty Thuan Duc's Vietnamese grocery store. Beside her a Spanish-speaking family patronized a cart with two signs: 'Italian Ices' and 'Kosher by Rabbi Alper.' And after the pastrami ran out, everybody ate knishes."[3]

Immigration to the United States is a social process, patterned within particular structural and historical contexts. The contemporary world has shrunk even as the populations of developing countries have expanded. Societies have become increasingly linked in numerous ways—economically, politically, culturally—as states and markets have become global forms of social organization, and modern consumption standards (especially American life-styles) are diffused worldwide. Over time, social networks are created that serve as bridges of passage to America, linking places of origin with places of destination. Indeed, transnational population movements of workers, refugees, and their families are but one among many exchanges of capital, commodities, and information across state borders, all facilitated by a postwar revolution in transportation and communication technologies. In general, the patterns reflect the nature of contemporary global inequality: a flow of capital from more developed countries (MDCs) to less developed countries (LDCs), a flow of labor from LDCs to MDCs, and—in an era of Cold War and global superpower confrontation, decolonization and the formation of new states, revolutions and counterrevolutions—continuing flows of refugees, primarily from one Third World country to another.[4]

Still, moving to a foreign country is not easy, even under the most propitious circumstances. In a world of 5 billion people, only a fraction—perhaps 2 percent—are immigrants or refugees residing permanently outside their country of birth. In absolute numbers, the United States remains by far the principal receiving country: by the late 1950s the United States had admitted half of all legal immigrants worldwide, and that proportion had grown to two-thirds by the 1980s. In relative terms, the picture is different: only 6.2 percent of the 1980 U.S. population was foreign-born, a percentage exceeded by many other countries. For example, recent censuses showed a foreign-born population of 20.9 percent in Australia, 16.1 percent in Canada, 8.2 percent in France, 7.6 percent in West Germany, 7.2 percent in Venezuela, 6.8 percent in Argentina, and 6.6 percent in Great Britain. Some smaller countries have much higher proportions, such as Israel (42 percent) and Saudi Arabia (36 percent). But the 14.1 million foreigners counted in the 1980 U.S. census formed the largest immigrant population in the world.[5]

The public image of today's new American immigration clashes with its complex realities. Because the sending countries are generally poor, many Americans believe that the immigrants themselves are poor and uneducated. Because the size of the new immigration is substantial and concentrated in a few states and metropolitan areas, concerns are raised that the newcomers are taking jobs away from the native-born and unfairly burdening taxpayers and public services. Because of the non-European origins of most new immigrants and the undocumented status of many, their prospects for assimilation are sometimes perceived as worse than those of previous flows. And as in the past—if without much of the vitriol and blatant racism of yesterday's nativists—alarms are sounded about the "Balkanization" of America, the feared loss of English as the national language and even of entire regions to potential secessionist movements. As this chapter will attempt to show, such concerns are fundamentally misplaced, even though immigration again plays a central role in an American society in transition. Within its limits, the essay has three objectives: (1) to sketch a portrait of the contours and diversity of recent immigration to the United States, (2) to examine the modes of incorporation of the main types of immigrant groups, and (3) to consider some of the determinants of the new immigration and its consequences for the American economy and society.

Immigration to the United States: Historical Trends and Changing Policies

Decennial trends in immigration to the United States are summarized in table 10.1 for the century from 1890 to 1989. Authorized immigration reached its highest levels (8.8 million) during 1901–10, more than doubling the number of immigrants admitted in the preceding decade. Much of this flow was initiated by active recruitment on the part of employers, and many immigrants (over one-third) returned home after a few years in the United States—"birds of passage," often young single men, whose movements tended to follow the ups and downs of the American business cycle.[6] In the post–World War II period, legal immigration flows have been much less clearly a function of economic cycles and deliberate recruitment, and much more apt to be sustained by social networks of kin and friends developed over time. Since 1930, moreover, some two-thirds of all legal immigrants to the United States have been women and children.[7] After the peak decade of 1901–10, immigration began a steady decline until the trend reversed itself immediately after World War II. Only 23,000—the smallest annual flow recorded since the early nineteenth century—entered in 1933 and again in 1943, in the midst of the Depression and then the world war. The number of legal immigrants doubled from the 1930s to the 1940s, more than doubled again in the 1950s (to 2.5 million), and more than doubled yet again by the 1980s. Indeed, if the 3 million people who recently qualified for legalization of their status under the amnesty provisions of the Immigration Reform and Control Act (IRCA) of 1986 were added to the regular admission totals for the 1980s, the decade ending in 1990 would exceed 8 million immigrants and rival the record numbers registered during the first decade of this century.[8] At that time, however, foreign-born persons constituted 14.7 percent of the total U.S. population, more than twice the relatively small 6.2 percent counted in the 1980 census. As table 10.1 also shows, net immigration accounted for nearly 40 percent of total population growth in the United States by 1910—a level not since approached, though net immigration today (adjusting for both emigration and illegal immigration) makes up an increasing proportion of total U.S. population growth. Given a declining national fertility rate, the demographic impact of immigration will continue to grow in importance.[9]

TABLE 10.1 Historical Trends in the U.S. Foreign-Born Population and Legal Immigration, 1890–1989, by Region of Origin, and Net Immigration Proportion of Total U.S. Population Growth

                   Foreign-Born Population     Immigration by Intercensal Decade and Region of Last Residence      Population Growth
Census Year/       N          % of Total       N          NW Europe/    S/E Europe    Latin          Asia          Due to Net
Decade Ending      (1000s)    Population       (1000s)    Canada (%)    (%)           America (%)    (%)           Immigration (%)

1900               10,445     13.6             3,688      44.7          51.8           1.0            2.0          20.3
1910               13,360     14.7             8,795      23.8          69.9           2.1            3.7          39.6
1920               14,020     13.2             5,736      30.3          58.0           7.0            4.3          17.7
1930               14,283     11.6             4,107      53.8          28.7          14.4            2.7          15.0
1940               11,657      8.8               528      58.0          28.3           9.7            3.1           1.6
1950               10,431      6.9             1,035      63.8          12.8          14.9            3.6           8.8
1960                9,738      5.5             2,515      51.8          16.0          22.2            6.1          10.6
1970                9,619      4.7             3,322      30.0          16.3          38.6           12.9          16.1
1980               14,080      6.2             4,493      10.2          11.4          40.3           35.3          17.9
1981–89a               NA        NA            5,323       8.0           5.9          37.6           45.1          29.2

SOURCES: U.S. Bureau of the Census, Statistical Abstracts of the United States, 109th ed. (Washington, D.C.: Government Printing Office, 1989), tables 1, 5–6, 46; Leon F. Bouvier and Robert W. Gardner, "Immigration to the U.S.: The Unfinished Story," Population Bulletin 41 (November 1986), tables 1, 3, 6; U.S. Immigration and Naturalization Service, Statistical Yearbooks (Washington, D.C.: Government Printing Office, 1980–89); U.S. Bureau of the Census, Current Population Reports, Series P-25, no. 1018 (Washington, D.C.: Government Printing Office, 1989).

a Data do not include 478,814 immigrants who had resided in the United States since 1982 and whose status was legalized in fiscal year 1989 under the provisions of the Immigration Reform and Control Act (IRCA) of 1986. Beginning in 1990 an additional 2.6 million legalization applicants, including over one million special agricultural workers (SAW), became eligible to adjust their status to permanent resident.
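The decade comparison above can be retraced in a few lines of Python; the figures are taken from table 10.1 (in thousands), and the 3 million IRCA legalizations are the rounded figure cited in the text. A minimal illustrative sketch, not part of the original analysis:

```python
# Recomputing the 1980s decade total with IRCA legalizations added,
# using figures from table 10.1 (in thousands of immigrants).
admissions_1981_89 = 5_323   # legal admissions, 1981-89 (1000s)
irca_legalizations = 3_000   # IRCA amnesty applicants (1000s, rounded)
record_1901_10 = 8_795       # record decade of 1901-10 (1000s)

decade_total = admissions_1981_89 + irca_legalizations
print(f"1980s admissions plus IRCA: {decade_total:,} thousand")  # 8,323 -> over 8 million
print(f"1901-10 record:             {record_1901_10:,} thousand")
```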

Until 1890 the overwhelming number of immigrants had come from northwest Europe—particularly from Ireland, Great Britain, Germany, and Scandinavia. From Asia, Chinese laborers were recruited, especially to California, after 1850, until their exclusion by federal law in 1882 (rescinded in 1943, when the United States and China were allies in World War II); their place was taken by Japanese immigrants, who were themselves restricted (though not entirely excluded) by the "Gentlemen's Agreement" of 1907 between the U.S. and Japanese governments. After 1890, however, a much larger "new" immigration from southern and eastern Europe—particularly from Italy and the Russian and Austro-Hungarian empires—significantly changed the composition of the transatlantic flow. From 1890 to 1920, as shown in table 10.1, well over half of all immigrants to America arrived from these regions. In response, the most restrictive immigration laws in the nation's history were passed in 1921 and 1924 (fully implemented in 1929), limiting the annual flow to 150,000 for Eastern Hemisphere countries and setting national-origins quotas that barred Asians and allocated 82 percent of all visas to northwestern Europeans, 16 percent to southeastern Europeans, and 2 percent to all others. Largely at the urging of American growers, no limits were set on Western Hemisphere countries; it was understood that Mexican labor could be recruited when needed (as happened during World War I and the 1920s, and again during the Bracero Program of contract-labor importation, begun in 1942 to meet labor shortages during World War II but maintained until 1964), and that those laborers could be deported en masse when they were no longer needed (as happened during the 1930s and again during "Operation Wetback" in the mid-1950s).

The McCarran-Walter Act of 1952 retained the national-origins quota system, slightly increasing the annual ceilings for the Eastern Hemisphere to 159,000 and the allocation of visas to northwestern Europeans to 85 percent. It included—again at the urging of growers—a "Texas Proviso" that exempted employers from sanctions for hiring illegal aliens (a loophole, formally closed by IRCA in 1986, that in fact encouraged undocumented immigration, all the more after the Bracero Program was ended in 1964). And it set up a preference system to meet specified labor needs and family reunification priorities. Among numerically restricted immigrants, half of the visas were granted to highly skilled professional and technical workers, and half to immediate relatives of permanent residents and to the parents, siblings, and married children of U.S. citizens. Exempted from the numerical quotas were spouses and unmarried minor children of U.S. citizens. Many British, German, and other European scientists and professionals journeyed to America in the aftermath of the war to pursue opportunities not available in their countries, and the first "refugees" recognized as such by the U.S. government—European "displaced persons" in the late 1940s, Hungarian escapees after the 1956 revolt—were admitted under special legal provisions. In any case, as table 10.1 shows, from 1920 to 1960 the majority of all immigrants to the United States again came from northwest Europe and Canada. After 1960, however, the national composition of the flow changed dramatically, and by the close of the 1980s more than 80 percent of total legal immigration originated in Asia and Latin America.

The Hart-Celler Act of 1965 (fully implemented in 1969), which eliminated the national-origins quota system and basically remained in effect until 1990, has been frequently cited as the main reason for these changes. For a variety of reasons, however, this explanation is insufficient; entry policies do influence but do not determine immigrant flows. As in the past, rules governing immigration are ultimately defeasible and are accompanied by consequences never intended by policymakers. The 1965 Act—amended in 1976, again by the Refugee Act of 1980 and IRCA in 1986—is a case in point. Emanuel Celler, the Brooklyn congressman who cosponsored the 1965 law, had long sought to repeal the discriminatory quota system, but noted that "my efforts were about as useless as trying to make a tiger eat grass or a cow eat meat." He lobbied for the new law—in a political climate changed by the Civil Rights Movement at home and by the geopolitical interests of U.S. foreign policy abroad—by offering opponents family preferences as an alternative to national-origins quotas, confidently predicting that "there will not be, comparatively, many Asians or Africans entering this country . . . since [they] have very few relatives here."[10] Similar pronouncements were made by the Attorney General and other officials in testimony before the Congress; they expected instead that the number of southern and eastern European immigrants would grow. Historically, after all, Asian immigration to the United States had averaged only 2 percent to 4 percent of total admissions—until the 1950s, when 6 percent of legal immigrants came from Asian countries, most of them as brides of U.S. servicemen overseas—and (uncoerced) African immigration had never been a factor. But by the 1980s, Asian immigration accounted for 45 percent of total admissions, and African immigration—though still small in relative numbers—increased eightfold from the early 1960s to the late 1980s. European immigration, in turn, decreased significantly over the same period—precisely the opposite of what had been anticipated.

Immigrants who are legally admitted to the United States fall into two broad categories: those subject to a worldwide limitation and those who are exempt from it. With minor modifications until it was overhauled in late 1990, the 1965 law set a worldwide annual ceiling of 270,000 immigrants, with a maximum of 20,000 per country, under a preference system that greatly emphasized family reunification. The number of immigrants subject to this worldwide limitation remained relatively constant from year to year, since the demand for visas far exceeded the annual limit of 270,000. For example, as of January 1989, there were 2.3 million active registrants awaiting immigrant visas at consular offices abroad.[11] Among these numerically restricted immigrants, 20 percent of the visas were granted to persons certified by the Department of Labor to possess needed job skills (half of them professional, managerial, and technical workers) and their immediate families, and 80 percent to immediate relatives of permanent residents and to siblings and married children of U.S. citizens. But parents as well as spouses and unmarried minor children of American citizens are numerically unrestricted—opening "chain migration" channels for those with family connections—and in addition, refugees and asylees are admitted outside the worldwide limitation under separate ceilings determined each year by the Administration and the Congress (the 1990 refugee ceiling was raised to 125,000). The flow of immigrants thus exempt from numerical limits increased significantly over the past two decades, underscoring the progressive nature of network building processes: for example, 27 percent of the 1.9 million immigrants admitted during 1970–74 came outside the regular quota, as did 36 percent of the 2.4 million admitted during 1975–79, 50 percent of the 2.8 million admitted during 1980–84, and 56 percent of the 3 million admitted during 1985–89.[12] Of all nonquota immigrants legally admitted into the United States in recent years, two-thirds have been immediate relatives of American citizens, and one-third have been admitted as refugees.
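The growing share of admissions exempt from the worldwide limit can be roughly verified in Python. A minimal sketch, assuming (hypothetically, though the 2.3 million registrants awaiting visas make it plausible) that the full ceiling of 270,000 numerically restricted visas was used in every year of the 1980s; the period totals are the worldwide figures from table 10.2 below:

```python
# A rough check of the nonquota share of legal admissions in the 1980s,
# under the assumption that the 270,000-visa worldwide ceiling was
# fully used each year. (Before 1980 the ceiling differed, so earlier
# periods are not recomputed here.)
CEILING = 270_000  # annual worldwide limit on numerically restricted visas

period_totals = {
    "1980-84": 2_825_036,  # total legal admissions, 1980-84
    "1985-89": 3_028_368,  # total legal admissions, 1985-89
}
for period, total in period_totals.items():
    exempt = total - 5 * CEILING  # admissions outside the worldwide limit
    print(f"{period}: {exempt / total:.0%} admitted outside the ceiling")
# Prints roughly 52% and 55%, close to the 50 and 56 percent reported above.
```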

Since 1960, the overwhelming majority of refugees have come from Cuba and, since the end of the Indochina War in 1975, primarily from Vietnam, Laos, and Cambodia. Indeed, the consolidations of communist revolutions in Cuba and Vietnam represent by far the worst defeats of American foreign policy in modern history. U.S. refugee policy, a product of the Cold War era, has always been guided by fundamentally political and not purely "humanitarian" objectives, and refugees fleeing from communist-controlled states to the "free world" have served as potent symbols of the legitimacy of American power and foreign policy. Even after the 1980 Refugee Act accepted the United Nations' ideologically neutral definition of a refugee, more than 90 percent of entrants granted refugee or asylee status by the United States during the 1980s continued to be from communist countries; most escapees from noncommunist regimes, such as Salvadorans and Guatemalans fleeing death squads and civil wars in their countries, have instead been generally labeled as "economic migrants"—and deported or driven underground along with other undocumented immigrants.[13] The conferral or denial of asylum or refugee status has significant consequences for immigrants' incorporation in the American economy and society, since persons so classified have the right to work (which illegal immigrants and temporary visitors do not) and to access public assistance programs on the same basis as U.S. citizens (which legal immigrants do not, at least during their first three years in the country).

The undocumented immigrant population has not only grown but diversified during the 1980s. As noted previously, over 3 million immigrants qualified for legalization of their status under IRCA's amnesty provisions by 1989—including residents who had entered the United States illegally prior to 1982, and Special Agricultural Workers (SAWs) who had been employed in seasonal work during the mid-1980s. Immigrants who entered illegally after 1981 (other than SAWs) were not eligible to qualify for legalization under IRCA, and thus reliable data on the size and composition of that population are unavailable. However, it probably includes a majority of the Central Americans in the country today—themselves in some measure an unintended consequence of U.S. policy and intervention in their home region—as well as an estimated 100,000 Irish immigrants who have, since 1982, overstayed their temporary visitor visas and clustered in historical areas of Irish settlement in Boston and New York.[14] Furthermore, again contrary to official predictions, IRCA has not stopped the flow of unauthorized migrants; in fact, the number of apprehensions along the Mexican border increased abruptly after 1989 and may again reach historically high levels.[15] In addition, the growing backlog and waiting periods faced by persons applying legally for numerically restricted immigrant visas—above all in Mexico and the Philippines—are likely to encourage further extralegal immigration. Former Immigration and Naturalization Service (INS) Commissioner Leonel Castillo estimated in 1990 that the waiting period for Mexicans applying under the second preference (spouses and children of permanent U.S. residents) could jump to 22 years, and to 10 to 17 years for Filipinos under various family preference categories.[16]



Immigration to the United States: Contemporary Trends and the Changing Ethnic Mosaic

National Origins of the New Immigration

Quinquennial trends in U.S. immigration from 1960 to 1989 are summarized in table 10.2, broken down by the major sending countries. While today's immigrants come from over 100 different nation-states, some countries send many more than others, despite the egalitarian numerical quotas provided by U.S. law. The 21 countries listed in table 10.2 accounted for nearly three-fourths of all legal immigration since 1960. One pattern, a continuation of trends already under way in the 1950s, is quite clear: immigration from the more developed countries has declined over time and that from less developed countries has grown steadily. Among the MDCs, this pattern is clearest for Canada, Great Britain, Italy, and Germany, with the sharpest reductions occurring during the 1960s. Although these were traditional countries of immigration in the past, their prosperous postwar economies dampened the relative attraction of America, while many Italian "guest-workers" sought instead newly opened opportunities in Germany and Switzerland. The smaller flows of Polish and Soviet refugees have oscillated over time, reflecting changes in exit policies in those countries and in their bilateral relations with the United States. The flow from Japan, which as of the early 1960s was still the largest source of immigrants from Asia, has remained small and stable at about 4,000 per year, nearly half entering as spouses of U.S. citizens—in part reflecting labor shortages and exit restrictions at home. Among the LDCs, the major countries of immigration are located either in the Caribbean Basin, in the immediate periphery of the United States, or in Asia, among nations also characterized by significant historical, economic, political, and military ties to the United States. These historical relationships, and the particular social networks to which they give rise, are crucial to an understanding of the new immigration, both legal and illegal—and help explain why most LDCs are not similarly represented in contemporary flows, as might be predicted by orthodox "push-pull" or "supply-demand" theories of transnational labor movements.

In fact, just eight countries have accounted for more than half of all legal immigration since 1975: Mexico, the Philippines, Vietnam, South Korea, China, India, Cuba, and the Dominican Republic. Of these, Mexico and the Philippines alone have sent 20 percent of all legal immigrants to the United States over the past three decades, and Mexico also remains by far the source of most unauthorized immigration. Of the 3 million immigrants who qualified for legalization of their status under IRCA by 1989, about 2 million were Mexican nationals; and while most of the remaining amnesty applicants came from nearby Caribbean Basin countries, Filipinos ranked sixth (behind Salvadorans, Guatemalans, Haitians, and Colombians, but ahead of Dominicans, Jamaicans, and Nicaraguans).[17] Indeed, Mexicans and Filipinos comprise, respectively, the largest "Hispanic" and "Asian" populations in the United States today.[18]

TABLE 10.2 Trends in Legal Immigration to the United States, 1960–89, by Region and Principal Sending Countries

Region/Country               Period of Immigrant Admission to U.S. Permanent Resident Status
of Birth                  1960–64     1965–69     1970–74     1975–79     1980–84    1985–89a        Total

Worldwide:              1,419,013   1,794,736   1,923,413   2,412,588   2,825,036   3,028,368   13,403,154
  Latin America           485,016     737,781     768,199     992,719     995,307   1,201,108    5,180,130
  Asia                    114,571     258,229     574,222     918,362   1,347,705   1,336,056    4,909,145
  Europe and Canada       803,596     766,347     530,925     429,353     388,700     297,609    3,216,530
  Africa                   11,756      21,710      34,336      51,291      73,948      89,636      282,677

More Developed Countries:
  Canada                  167,482     136,371      54,313      60,727      57,767      56,701      533,361
  United Kingdom          123,573     117,364      56,371      65,848      73,800      66,682      503,638
  Italy                    86,860     109,750     106,572      43,066      20,128      14,672      381,048
  Germany                 138,530      83,534      36,971      32,110      33,086      34,464      358,695
  Poland*                  43,758      33,892      20,252      22,194      31,506      44,581      196,183
  Japan                    23,327      26,802      20,649      21,993      20,159      21,177      132,107
  U.S.S.R.*                10,948       6,292       4,941      28,640      46,530      22,451      119,802

Less Developed Countries:
  Mexico                  217,827     213,689     300,341     324,611     330,690     361,445    1,749,603
  Philippines              15,753      57,563     152,706     196,397     215,504     251,042      888,965
  Cuba*                    65,219     183,499     101,070     176,998      53,698     109,885      690,369
  Chinab                   20,578      65,712      81,202     107,762     168,754     194,330      638,338
  Korea                     9,521      18,469      93,445     155,505     163,088     173,799      613,827
  Vietnam*                    603       2,564      14,661     122,987     246,463     149,480      536,758
  Dominican Republic       26,624      57,441      63,792      77,786      98,121     127,631      451,395
  India                     3,164      18,327      67,283      96,982     116,282     134,841      436,879
  Jamaica                   7,838      49,480      65,402      72,656     100,607     104,623      400,606
  Colombia                 27,118      39,474      29,404      43,587      50,910      55,990      246,483
  Haiti                     7,211      24,325      28,917      30,180      40,265      82,156      213,054
  Laos*                        NA          NA         166       8,430     102,244      46,937      157,777
  El Salvador               6,766       7,615       9,795      20,169      38,801      57,408      140,554
  Cambodia*                    NA          NA         166       5,459      58,964      54,918      119,507

SOURCES: U.S. Immigration and Naturalization Service, Annual Reports (Washington, D.C.: Government Printing Office, 1960–77); and U.S. Immigration and Naturalization Service, Statistical Yearbooks (Washington, D.C.: Government Printing Office, 1978–89).

a Data do not include 478,814 persons whose status was legalized in fiscal year 1989 under the Immigration Reform and Control Act (IRCA).
b Includes Mainland China and Taiwan.
* Denotes country from which the majority of immigrants to the United States have been admitted as refugees.
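The "more than half since 1975" figure can be recomputed directly from the 1975–79, 1980–84, and 1985–89 columns of table 10.2 above; a minimal illustrative sketch:

```python
# Summing the eight leading countries' 1975-89 admissions (table 10.2)
# and comparing them to worldwide admissions over the same period.
worldwide_1975_89 = 2_412_588 + 2_825_036 + 3_028_368

top_eight = {
    "Mexico":             324_611 + 330_690 + 361_445,
    "Philippines":        196_397 + 215_504 + 251_042,
    "Vietnam":            122_987 + 246_463 + 149_480,
    "South Korea":        155_505 + 163_088 + 173_799,
    "China":              107_762 + 168_754 + 194_330,
    "India":               96_982 + 116_282 + 134_841,
    "Cuba":               176_998 +  53_698 + 109_885,
    "Dominican Republic":  77_786 +  98_121 + 127_631,
}
share = sum(top_eight.values()) / worldwide_1975_89
print(f"Top eight countries: {share:.1%} of 1975-89 legal admissions")  # ~50.3%
```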

Not surprisingly, Mexico and the Philippines share the deepest structural linkages with the United States, including a long history of dependency relationships, external intervention, and (in the case of the Philippines) colonization. In both countries, decades of active agricultural labor recruitment by the United States—of Mexicans to the Southwest, Filipinos to plantations in Hawaii and California—preceded the establishment of self-sustaining migratory social networks. In the case of Mexico, the process has evolved over several generations. From California to Texas, the largest Mexican-origin communities in the United States are still located in former Mexican territories that were annexed in the last century, and they are today linked to entire communities on the other side of the border.[19] The Philippines—unlike Puerto Rico, which also came under U.S. hegemony as a result of the 1898 Spanish-American War—gained formal independence from the United States after World War II, which has since led to different patterns of immigration. During the half-century of U.S. colonization, the Americanization of Filipino culture was pervasive, especially in the development of a U.S.-styled educational system and the adoption of English as an official language, and the United States today is not only the Philippines' major trading partner but also accounts for more than half of total foreign investment there.[20] Since the 1960s, as will be detailed below, the Philippines has sent the largest number of immigrant professionals to the United States, as well as a high proportion of the many international students enrolled in American colleges and universities. Moreover, the extensive U.S. military presence in the Philippines—including the largest American bases in the Asian-Pacific region—has fueled immigration through marriages with U.S. citizens stationed there, through unique arrangements granting U.S. citizenship to Filipinos who served in the armed forces during World War II, and through direct recruitment of Filipinos into the U.S. Navy. Remarkably, by 1970 there were more Filipinos in the U.S. Navy (14,000) than in the entire Filipino navy.[21] During 1978–85, more than 51 percent of the 12,500 Filipino babies born in the San Diego metropolitan area—site of the largest naval station in the United States and the third largest destination of Filipino immigrants—were delivered at just one hospital: the U.S. Naval Hospital.[22]

Among the other six leading countries of recent immigration, linkages unwittingly structured by American foreign policy and military intervention since the 1950s are most salient in the exodus of the Koreans and Vietnamese. Indeed, an ironic consequence of the wars that took tens of thousands of Americans to Korea and Vietnam is that tens of thousands of Koreans and Vietnamese—including many Amerasians—have since come to America, albeit through quite different routes. Emigration connections variously shaped by U.S. intervention, foreign policies, and immigration policies are also a common denominator in the exodus of the Chinese after the 1949 revolution, the Cubans after the 1959 revolution, and the Dominicans after the U.S.-backed coup in 1965. In the case of India, South Korea, and Taiwan, large-scale U.S. foreign aid, technical assistance, trade, and direct investment (which in India surpassed that of the United Kingdom soon after decolonization) helped to forge the channels for many professionals and exchange students to come to America.[23] It has been estimated that since the early 1950s fewer than 10 percent of the many thousands of students from South Korea, Taiwan, China, and Hong Kong who have come to the United States for training on nonimmigrant visas ever returned home; instead, many adjusted their status and gained U.S. citizenship through occupational connections with American industry and business, thus becoming eligible to send for family members later on.[24] None of this is to suggest, of course, that the complex macrostructural determinants that shape migration flows—above all global market forces, which will be considered further on, and internal dynamics and crises in the sending countries—can be reduced to politico-military factors or state policies; the point, rather, is to focus attention on the importance of particular historical patterns of U.S. influence in the creation and consolidation of social networks that over time give the process of immigration its cumulative and seemingly spontaneous character.[25]

Social Class Origins of the New Immigration

There is no doubt that wage differentials between the United States and the LDCs act as a magnet to attract immigrants to America. This is especially the case along the 2,000-mile-long border between the United States and Mexico—the largest point of "North-South" contact in the world. During the 1980s, the minimum wage in the United States ($3.35 per hour) was six times the prevailing rate in Mexico, and the gap was wider still relative to most rates in Central America. But wage differentials alone do not explain why even in neighboring Mexico only a small fraction of the population ever undertakes the journey to "El Norte." What is more, 10 of the 15 poorest nations of the world (with sizable populations and national per capita incomes below U.S. $200)—Chad, Zaire, Mozambique, Mali, Burkina Faso, Nepal, Malawi, Bangladesh, Uganda, and Burma—are scarcely represented among immigrants to America, if at all. Significantly, the only sizable groups of recent immigrants who do hail from the world's 15 poorest countries—from Cambodia, Laos, and Vietnam, and (though to a much lesser extent) Ethiopia and Afghanistan—have been admitted as political refugees.[26]

Moreover, the fact that most newcomers to America come from comparatively poorer nations—such as the 14 LDCs listed above in table 10.2—does not mean that the immigrants themselves are drawn from the uneducated, unskilled, or unemployed sectors of their countries of origin. Available evidence from the INS, summarized in table 10.3, indicates just the opposite. Over the past two decades, an average of more than 60,000 immigrant engineers, scientists, university professors, physicians, nurses, and other professionals and executives have been admitted each year into the United States. From the 1960s through the early 1980s, about one-third of all legal immigrants to the United States (excluding dependents) were high-status professionals, executives, or managers in their countries of origin. The proportion of these so-called brain drain elites declined somewhat to 26.5 percent by the late 1980s—still a higher percentage than that of the native-born American population—despite the overwhelming majority of immigrants having been admitted under family preferences over the past two decades. In part, these data suggest that while many "pioneer" immigrants have entered with formal credentials under the occupational preferences of U.S. law, their close kin who join them later are drawn from the same social classes—accounting for both the relative stability and similarity of their flows over time, if with a gradually diminishing upper-crust occupational profile as family "chain migration" processes evolve and expand. But the dynamics of particular types of flows are much more complex than might seem at first glance.

Take, for example, the case of so-called foreign medical graduates (FMGs). Worldwide, about 5 percent of physicians have immigrated to foreign countries in recent decades, of whom about half have come to the United States—75,000 entered in the 1965–74 decade alone.[27] During the 1950s and 1960s, enrollments in U.S. medical schools remained virtually stationary, while the American health care system expanded greatly (all the more after the passage of Medicaid and Medicare in the early 1960s), creating many vacancies in internship and residency positions in U.S. hospitals (especially in underserved areas such as inner cities, which did not attract U.S. medical graduates). The demand, reinforced by the new channels opened up by U.S. immigration law and the higher salaries offered by U.S. hospitals, enabled FMGs and nurses to flock to America, particularly from developing countries such as India and the Philippines, where English-language textbooks are used and where many more professionals were graduating than the economies could absorb. Few of these people were directly recruited by American hospitals; most made their own arrangements through professional networks of friends who were or had been in the United States, or by writing blind letters to hospitals listed in American Medical Association or state directories. By the mid-1970s there were about 9,500 Filipino and 7,000 Indian FMGs in the United States—more than the number of American black physicians—as well as some 3,000 FMGs each from Cuba and South Korea, and 2,000 each from Mexico and Iran. Perhaps the most extraordinary instance occurred in 1972, when practically the entire graduating class of the new medical school in Chiangmai, Thailand, chartered a plane to America. The effect of this kind of emigration on the sending countries' domestic stock of physicians has varied greatly: in 1972 the number of Mexican and Indian FMGs in the United States represented only 4 percent of Mexico's stock and 5 percent of India's, but the proportion was 18 percent of South Korea's, 22 percent of Iran's, 27 percent of Thailand's, 32 percent of the Dominican Republic's, 35 percent of Taiwan's, 43 percent of Cuba's, 63 percent of the Philippines', and—incredibly—95 percent of Haiti's. Since the late 1970s the flow of FMGs has declined, due to a constricting job market (as the supply of U.S.-trained physicians has increased) and the passage of more restrictive U.S. visa and medical licensing requirements, but by the late 1980s, FMGs still comprised 20 percent of the nation's physicians.[28]

TABLE 10.3 Trends in Occupational Backgrounds of Legal Immigrants, 1967–87, by Region and Main Sending Countries: Percentage of Immigrant Professionals, Executives, and Managers, in Regional Rank Order

Region/Country            Reported Occupation of Immigrants Prior to Admission to Permanent Resident Statusa
of Birth                  (Percentage Professional Specialty, Executives, and Managers)
                          1967      1972      1977      1982      1987

Worldwide:                32.4      36.0      33.0      31.9      26.5
  Asia                    59.3      67.3      53.2      39.9      39.5
  Africa                  53.7      67.3      60.7      45.8      39.4
  Europe and Canada       29.7      26.6      41.5      44.4      40.7
  Latin America           22.3      13.8      15.4      15.8      11.3

More Developed Countries:
  Japan                   57.6      50.1      44.6      48.5      42.2
  Canada                  48.7      51.6      61.3      57.9      55.0
  United Kingdom          43.3      51.5      58.3      60.8      52.9
  U.S.S.R.*               40.9      41.0      42.0      39.1      47.0
  Poland*                 32.3      27.1      30.6      32.1      26.9
  Germany                 30.5      43.5      37.2      40.9      35.7
  Italy                    8.4       8.5      21.0      30.8      33.6

Less Developed Countries:
  India                   90.6      91.6      79.1      73.7      61.7
  Korea                   80.5      72.9      49.6      42.9      44.0
  Philippines             60.2      71.6      46.8      44.9      45.9
  Chinab                  48.6      52.5      53.9      47.3      34.3
  Vietnam*                71.6      56.9      36.6      11.4       7.7
  Cuba*                   33.1      13.9      14.4      22.3       5.1
  Colombia                32.5      27.5      17.4      20.2      20.6
  Haiti                   23.3      26.8      14.1      17.4       8.1
  Jamaica                 19.1      15.9      33.4      21.6      18.6
  Dominican Republic      14.5      15.3      13.1      13.8      12.2
  El Salvador             15.2      16.0      10.0      13.0       7.1
  Mexico                   8.5       5.1       6.6       7.0       5.9
  Cambodia*                 NA        NA        NA       7.1       2.0
  Laos*                     NA        NA        NA       4.7       2.1

SOURCES: U.S. Immigration and Naturalization Service, Annual Reports (Washington, D.C.: Government Printing Office, 1967, 1972, 1977); and U.S. Immigration and Naturalization Service, Statistical Yearbooks (Washington, D.C.: Government Printing Office, 1982, 1987).

a About two-thirds of immigrants admitted as permanent residents report no prior occupation to the INS; they are mainly homemakers, children, retired persons, and other dependents. Data above are based on 152,925 immigrants who reported an occupation in 1967; 157,241 in 1972; 189,378 in 1977; 203,440 in 1982; and 242,072 in 1987.
b Includes Mainland China and Taiwan.
* Denotes country from which the majority of immigrants to the United States have been admitted as refugees.

The worldwide trends presented in table 10.3 conceal a wide range in the class character of contemporary immigration to the United States; among the principal sending countries there are considerable differences in the occupational backgrounds of immigrants. "Brain drain" immigrants have dominated the flows of Indians, Koreans, Filipinos, and Chinese (including Taiwanese) since the 1960s. High proportions are also in evidence among the Japanese, Canadian, and British groups—although their immigration flows are smaller, as seen earlier—as well as among some refugee groups, particularly Soviet Jews and Armenians and the more sizable first waves of refugees from Vietnam and Cuba. By contrast, immigration from Mexico, El Salvador, the Dominican Republic, and (until very recently) Italy has consisted predominantly of manual laborers and low-wage service workers, as has also been the case among refugees from Laos and Cambodia, and the more recent waves of Vietnamese, Cubans, and Haitians. Between these extremes in occupational profiles are Colombians, Jamaicans, Germans, and Poles.

Over time, the drop in the proportion of highly skilled immigrants within particular national groups is most apparent among non-European refugees, consistent with a general pattern that characterizes refugee flows: initial waves tend to come from the higher socioeconomic strata, followed later by heterogeneous working-class waves more representative of the society of origin. As table 10.3 shows, rapid declines are seen among refugees who come from poor countries, such as Vietnam, where only a small proportion of the population is well educated.

The information provided in table 10.3, while useful as a first step to sort out the diverse class origins of the new immigration, is limited in several ways. The INS does not collect data on the educational backgrounds of legal immigrants, nor on the occupations they enter once in the United States, nor, for that matter, on the characteristics of undocumented immigrants or of emigrants (those who leave the United States after a period of time, estimated at about 160,000 annually). A more precise picture can be drawn from the last available census, which counted a foreign-born population of 14.1 million persons in 1980 (including an estimated 2.1 million undocumented immigrants). Census data on several relevant indicators for the largest foreign-born groups in the United States as of 1980 are presented in table 10.4, rank-ordered by their proportions of college graduates. The picture that emerges shows clearly that the foreign-born are not a homogeneous population; instead, to borrow a term from Milton Gordon, the formation of different "eth-classes" is apparent. Less apparent is the fact that within particular nationalities there is often also considerable socioeconomic diversity.

An upper stratum is composed of foreign-born groups whose educational and occupational attainments significantly exceed the average for the native-born American population. Without exception, all of them are of Asian origin—Indians, Chinese (especially Taiwanese), Filipinos, Koreans, and Japanese—with the most recently immigrated groups reflecting the highest levels of attainment. It is precisely this stratum that accounts for the popularization of the recent myth of Asian-Americans as "model minorities," whose children are overrepresented among the nation's high school valedictorians and in admissions to elite universities from Berkeley to Harvard. For instance, foreign-born students collected 55 percent of all doctoral degrees in engineering awarded by American universities in 1985, with one-fifth of all engineering doctorates going to students from Taiwan, India, and South Korea alone. In 1988 the top two winners of the Westinghouse Science Talent Search, the nation's most prestigious high school competition, were immigrant students from India and Taiwan in New York City public schools; indeed, 22 of the top 40 finalists were children of immigrants. Moreover, the stories of competitive success are not limited to science and math-based fields (where Asian immigrant students tend to concentrate to reduce their English-language handicaps): the 1985 U.S. National Spelling Bee champ was Chicago schoolboy Balu Natarajan, who speaks Tamil at home, and the 1988 winner was a thirteen-year-old girl from a California public school, Indian-born Rageshree Ramachandran, who correctly spelled "elegiacal" to beat out runner-up Victor Wang, a Chinese-American.[29]


 

TABLE 10.4 Characteristics of the Largest Foreign-Born Groups in the United States in 1980, Ranked by Their Proportion of College Graduates, Compared to the Native-Born Groups

                                        Educationa              Occupationb                 Year of Immigration
Country of Birth        Persons         College    High School  Professional  Service      1970–80  1960–69  Pre-1960   Not a
                        (N)             Grad (%)   Grad (%)     Specialty (%) Occup. (%)   (%)      (%)      (%)        Citizen (%)

Above U.S. Average:
  India                     206,087     66.2       88.9         42.8           5.3         76.8     19.3      3.9       76.0
  China (Taiwan)             75,353     59.8       89.1         30.4          13.7         81.1     17.0      1.9       71.1
  Philippines               501,440     41.8       74.0         20.1          16.2         63.6     22.6     13.8       55.3
  Korea                     289,885     34.2       77.8         14.7          17.0         83.9     13.0      3.1       65.4
  China (Mainland)          286,120     29.5       60.0         16.8          24.4         47.5     27.3     25.2       49.7
  Japan                     221,794     24.4       78.0         13.6          20.8         45.2     22.7     32.1       56.7

Close to U.S. Average:
  England                   442,499     16.4       74.6         17.4          12.2         21.9     22.0     56.1       42.0
  Cuba                      607,814     16.1       54.9          9.2          12.2         26.9     60.4     12.8       54.9
  U.S.S.R.                  406,022     15.7       47.2         15.9          13.2         24.3      5.3     70.4       27.4
  Germany                   849,384     14.9       67.3         13.4          14.1         10.6     20.6     68.8       21.4
  Colombia                  143,508     14.6       62.8          8.1          15.8         55.0     37.1      7.9       75.1
  Canada                    842,859     14.3       61.8         16.2          11.4         15.2     20.1     64.7       39.0
  Vietnam                   231,120     12.9       62.1          8.6          16.4         97.6      2.1      0.2       88.9
  Jamaica                   196,811     11.0       63.5         10.2          29.9         58.7     29.8     11.6       63.7

Below U.S. Average:
  Poland                    418,128     10.0       40.5         10.8          13.5         11.0     14.5     74.5       22.2
  Greece                    210,998      9.5       40.4          8.0          25.0         32.0     27.7     40.3       35.0
  Ireland                   197,817      8.8       52.1         14.5          21.7          7.3     14.5     78.1       18.8
  Italy                     831,992      5.3       28.6          6.1          16.3         12.1     18.2     69.8       22.6
  Dominican Republic        169,147      4.3       30.1          3.1          18.5         56.8     37.2      6.1       74.5
  Portugal                  211,614      3.3       22.3          2.3          10.0         45.0     34.0     21.0       61.6
  Mexico                  2,199,221      3.0       21.3          2.5          16.6         57.8     21.9     20.3       76.4

Total Foreign-Born       14,079,906     15.8       53.1         12.0          16.1         39.5     22.3     38.2       49.5
Total Native-Born       212,465,899     16.3       67.3         12.3          12.7

SOURCES: U.S. Bureau of the Census, Statistical Abstracts of the United States, 109th ed. (Washington, D.C.: Government Printing Office, 1989), table 47; and U.S. Bureau of the Census, 1980 Census of Population: Detailed Population Characteristics, PC80-1-D1-A (Washington, D.C.: Government Printing Office, 1984), table 254.

a Years of school completed by persons aged twenty-five years or older.
b Present occupation of employed persons aged sixteen years or older.



Yet also during the 1980s, the highest rates of poverty and welfare dependency in the United States have been recorded among Asian-origin groups, particularly refugees from Indochina. One study found poverty rates ranging from over 50 percent for the Vietnamese to 75 percent for the Chinese-Vietnamese and the Lao, 80 percent for Cambodians, and nearly 90 percent for the Hmong. And Southeast Asian and, to a lesser extent, Korean workers are much in evidence, along with undocumented Mexican and Salvadoran immigrants, in a vast underground sweatshop economy that has expanded during the 1980s and into the 1990s in Southern California. Those findings debunk genetic and cultural stereotypes that have been propounded in the mass media as explanations of "Asian" success, and point instead to the diversity of recent Asian immigration and to the class advantages of particular Asian-origin groups.[30]

A middle stratum evident in table 10.4, composed of groups whose educational and occupational characteristics are close to the U.S. average, is more heterogeneous in terms of national origins. It includes older immigrants from England, the U.S.S.R., Germany, and Canada (the majority entering the United States prior to 1960), and more recent immigrants from Cuba, Colombia, Vietnam, and Jamaica. The post-1980 waves of Mariel refugees from Cuba and Vietnamese "boat people" from more modest social class backgrounds are not reflected in the data in table 10.4, since they arrived after the census was taken; the 1990 census will probably reflect much wider differences in the characteristics of these two refugee populations, underscoring the internal diversification of particular national groups over time.

Finally, as table 10.4 shows, a lower stratum is composed of working-class groups who fall substantially below native-born norms. It includes recent immigrants from Mexico and the Dominican Republic—of whom a substantial number entered without documents—but also includes less visible, older European immigrants from Poland, Greece, Ireland, Italy, and Portugal. The 1990 census most probably will add to this stratum several groups who have arrived in sizable numbers during the past decade, including Salvadorans, Guatemalans, Nicaraguans, Haitians, and Cambodian and Laotian refugees. Not included in this bottom stratum are Puerto Ricans, since they are not "foreign-born" but are U.S. citizens by birth; but their aggregate socioeconomic characteristics would place them here, and their large-scale post–World War II migration to the mainland resembles in many respects that of Mexican labor immigration. Mexicans and Puerto Ricans make up the overwhelming majority of the supranational "Hispanic" population of the United States, and their particular characteristics and circumstances have colored the construction of negative ethnic typifications.[31] In any case, these findings, too, debunk cultural stereotypes that have been propounded in the mass media as explanations for the lack of "Hispanic" success in contrast to that of "Asians" and white European ethnics, and point instead to the diversity of recent Latin American immigration and to the class disadvantages of particular groups.

Significantly, there is an imperfect correlation between educational and occupational attainment among these groups. For example, as table 10.4 shows, the percentage of longer-established Canadian and certain European immigrants employed in professional specialties actually exceeds the respective proportion of their groups who are college graduates. By contrast, the percentage of more recently arrived Asian and Latin American immigrants who are employed in the professions is generally far below their respective proportions of college graduates—and, for that matter, far below their respective proportions of those who held professional positions in their countries of origin prior to admission into the United States (as documented previously in table 10.3). These discrepancies offer a clue about barriers such as English proficiency and strict licensing requirements that regulate entry into the professions and that recent immigrants—most of them nonwhite, non-European, and non–English speakers—must confront as they seek to make their way in America. In response, some immigrants shift instead to entrepreneurship as an avenue of economic advancement—and as an alternative to employment in segmented labor markets. Indeed, the process of occupational and economic adaptation is complex and not simply a function of the "human capital" brought by the immigrants. Their varying social-class resources at the time of entry interact with other differences in the contexts of reception experienced by particular groups—such as government policies and programs, local labor markets, cultural prejudices and racial discrimination, and existing ethnic communities and networks—to mold their diverse modes of incorporation in the American economy and society.
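The imperfect correlation can be made concrete by dividing each group's share employed in professional specialties by its share of college graduates, using the figures in table 10.4. A brief illustrative sketch (the ratio and the selection of groups are ours for illustration, not a measure used in the source):

```python
# Ratio of professional-specialty employment to college graduation,
# from table 10.4: (college grad %, professional specialty %).
groups = {
    "Canada":      (14.3, 16.2),
    "England":     (16.4, 17.4),
    "India":       (66.2, 42.8),
    "Korea":       (34.2, 14.7),
    "Philippines": (41.8, 20.1),
}
for name, (college, professional) in groups.items():
    print(f"{name:12s} {professional / college:.2f} professionals per college graduate")
# Ratios above 1.0 (Canada, England) versus well below 1.0 (India,
# Korea, Philippines) reflect the entry barriers discussed above.
```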

In general, however, immigrants who come to the United States are positively selected groups, not only in terms of their above-average urban backgrounds and socioeconomic resources compared to homeland norms, but also in terms of their ambition, determination, and willingness to work and to take risks. Legally or illegally, most make their passages to America not so much to escape perennial unemployment or destitution, but to seek opportunities for advancement that are unavailable in their own countries. They are "innovators," in Robert Merton's sense of the term, who choose immigration as a feasible solution to a widening gap between life goals and actual means, between their own rising aspirations and the dim possibilities for fulfilling them at home. The lure of America is greatest for those who experience this gap at its widest and who have the requisite resources and connections to meet the costs of immigration to a foreign world—such as well-educated cosmopolitans in the less developed countries—and those groups have taken full advantage of the preferences available under U.S. law. Immigration requires both restlessness and resourcefulness, and on the whole, the main reason the richest of the rich and the poorest of the poor do not immigrate is that they are, respectively, unmoved or unable to move.

Even undocumented migrants must be able to cover the often considerable costs of transportation and surreptitious entry into the United States, just as refugees such as "boat people" must be willing to take extraordinary risks and pay the costs of surreptitious exit from their countries. Although the socioeconomic origins of unauthorized immigrants are modest by U.S. standards, they consistently meet or surpass the average for their countries of origin. Recent studies report that "coyotes" (smugglers) charge U.S. $700 to get border-crossers from Mexico to Los Angeles, $500 to Houston, $250 to $450 to San Antonio—in large groups the fee may be lowered to $200—and that undocumented Mexican immigrants are on average more urban and literate than the general Mexican population. In the Dominican Republic, it may cost $1,000 to $2,000 to obtain papers and be smuggled out of the country, and undocumented Dominicans actually tend to be more educated than those who immigrate legally. Haitian "boat people" reportedly pay $500 to $1,000 per person to buy passage aboard barely seaworthy craft to South Florida. A decade ago in Vietnam, ethnic Chinese and Vietnamese refugees were paying five to ten gold pieces ($2,000 to $4,000) per adult to cross the South China Sea in flimsy fishing boats—a price well beyond the means of the average Vietnamese. Affording it often required ingenious exchange schemes through kinship networks. For example, a family in Vietnam planning to escape by boat contacted another that had decided to stay, in order to obtain the necessary gold for the passage; the two families then arranged, through relatives of both already in the United States (usually "first wave" refugees), for the escaping family's relatives to repay an equivalent amount in dollars to the relatives of the family that stayed behind.[32] Those who surmount such obstacles and succeed in reaching America are far from being representative of the population of their societies of origin. They, too, add to the vitality, energy, and innovativeness that immigrants contribute to American society.

The New Immigrants in America: Impacts on Economic and Cultural Institutions

Patterns of Settlement and Incorporation

Although fewer than one in ten persons in the United States today is an immigrant, the impact of the new immigration on American communities is much more significant than might appear at first glance. The main reason is that immigrants tend to concentrate in urban areas where coethnic communities have been established by past immigration. Such spatial concentrations serve to provide newcomers with manifold sources of moral, social, cultural, and economic support that are unavailable to immigrants who are more dispersed. In general, patterns of concentration or dispersal vary for different classes of immigrants (professionals, entrepreneurs, manual laborers) with different types of legal status (regular immigrants, refugees, the undocumented). The likelihood of dispersal is greatest among immigrant professionals—who tend to rely more on their qualifications and job offers than on pre-existing ethnic communities—and, at least initially, among recent refugees who are sponsored and resettled through official government programs that have sought deliberately to minimize their numbers in particular localities. However, refugee groups, too, have shown a tendency to gravitate as "secondary migrants" to areas where their compatriots have clustered (for example, Cubans to South Florida, Southeast Asians to California). The likelihood of concentration is greatest among working-class immigrants—who tend to rely more on the assistance offered by pre-existing kinship networks—and among business-oriented groups, who tend to settle in large cities. Dense ethnic enclaves provide immigrant entrepreneurs with access to sources of cheap labor, working capital and credit, and dependable markets. Over time, as the immigrants become naturalized U.S. citizens, local strength in numbers also provides opportunities for political advancement and representation of ethnic minority group interests at the ballot box.[33] Social networks are thus crucial for an understanding not only of migration processes, as noted earlier, but also of adaptation processes and settlement patterns in areas of final immigrant destination.

Table 10.5 lists the states and metropolitan areas (SMSAs) of principal immigrant settlement in the United States as of 1980. In addition, table 10.5 provides comparative data on the places of settlement of recent legal immigrants (those admitted during 1987–89) as well as of the 3 million illegal immigrants who qualified for legalization of their status under IRCA in 1989. While there are immigrants today in every one of the fifty states, just six states (California, New York, Florida, Texas, Illinois, and New Jersey) accounted for two-thirds of the total 1980 foreign-born population, for nearly three-fourths of 1987–89 legal immigrants, and for almost nine-tenths of all IRCA applicants. A pattern of increasing spatial concentration is clear for the four states of greatest immigrant settlement (California, New York, Florida, and Texas). California alone, which in 1980 already accounted for 25 percent of all the foreign-born, drew 29 percent of 1987–89 immigrants and a whopping 54 percent of IRCA applicants. New York and Florida combined for another quarter of the foreign-born in 1980 and also of 1987–89 immigrants, but only 11 percent of IRCA applicants. Texas, whose share of immigrants increased from 6.1 percent in 1980 to 6.6 percent in 1987–89, also accounted for 14.5 percent of IRCA applicants. In fact, over two-thirds of IRCA applicants resided in California and Texas alone—both states situated along the Mexican border. In Illinois, the proportion of immigrants decreased from 5.9 percent in 1980 to 4.4 percent in 1987–89—partly because Chicago has ceased to be a preferred destination for Mexican immigrants—while in New Jersey the levels for the two time periods remained unchanged at 5.4 percent.

TABLE 10.5  States and Metropolitan Areas of Principal Immigrant Settlement in the United States:
Location of the 1980 Foreign-Born Population, 1987–89 Immigrants, and 1989 Legalization Applicants

                                    Foreign-Born Population, 1980            Immigrants, 1987–89 (a)    IRCA Applicants, 1989 (b)
                                                % of Total   % of U.S.                   % of Total                   % of Total
                                    N           Population   Foreign-Born   N            Immigrants     N             Legalization
                                                             Population                  Admitted                     Applicants

States:
  California                         3,580,033     15.1         25.4           530,795      28.6         1,636,325       53.9
  New York                           2,388,938     13.6         17.0           336,845      18.1           170,601        5.6
  Florida                            1,058,732     10.9          7.5           155,108       8.4           160,262        5.3
  Texas                                856,213      6.0          6.1           123,446       6.6           440,989       14.5
  Illinois                             823,696      7.2          5.9            81,011       4.4           158,979        5.2
  New Jersey                           757,822     10.3          5.4           100,697       5.4            44,184        1.5

Metropolitan Areas:
  New York, N.Y.-N.J.                1,946,800     21.3         13.8           285,840      15.4           153,072        5.0
  Los Angeles-Long Beach, Calif.     1,664,793     22.3         11.8           231,096      12.4           809,248       26.6
  Chicago, Ill.                        744,930     10.5          5.3            64,821       3.5           136,081        4.5
  Miami-Hialeah, Fla.                  578,055     35.6          4.1            93,776       5.1            66,792        2.2
  San Francisco-Oakland, Calif.        551,769     15.4          3.9            81,780       4.4            64,111        2.1
  Boston, Mass.                        280,080     10.1          2.0            38,218       2.0            12,512        0.4
  Anaheim-Santa Ana, Calif.            257,194     13.3          1.8            42,835       2.3           144,521        4.8
  Washington, D.C.                     249,994      8.2          1.8            56,676       3.1            31,182        1.0
  San Diego, Calif.                    235,593     12.7          1.7            38,332       2.1            98,875        3.3
  Houston, Tex.                        220,861      7.6          1.6            33,296       1.8           131,186        4.3
  San Jose, Calif.                     175,833     13.6          1.2            35,176       1.9            41,857        1.4

U.S. Totals                         14,079,906      6.2        100.0         1,856,651     100.0         3,038,825      100.0

SOURCES: U.S. Bureau of the Census, 1980 Census of Population: General Social and Economic Characteristics, PC80-1-C1, State and SMSA Summaries (Washington, D.C.: Government Printing Office, 1983); U.S. Bureau of the Census, Detailed Population Characteristics, PC80-1-D1-A (Washington, D.C.: Government Printing Office, 1984), table 253; U.S. Immigration and Naturalization Service Statistical Yearbooks (Washington, D.C.: Government Printing Office, 1987–89).

(a) Data indicate the "intended destination" of regular immigrants admitted to permanent resident status during 1987–89, as reported to the INS; data do not include the 478,814 immigrants whose status was legalized in fiscal year 1989 under the Immigration Reform and Control Act (IRCA).

(b) Persons who formally applied for legalization of their status by May 1990 under IRCA.
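To make the derivation of these percentages explicit, here is a quick arithmetic check using the raw counts from table 10.5 (an illustrative calculation added here, not part of the original tabulations):

\[
\frac{1{,}636{,}325}{3{,}038{,}825} \approx 53.9\% \qquad\text{and}\qquad \frac{1{,}636{,}325 + 440{,}989}{3{,}038{,}825} \approx 68.4\%,
\]

confirming that California alone accounts for roughly 54 percent of IRCA applicants, and that California and Texas together account for over two-thirds.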

Patterns of immigrant concentration are even more pronounced within particular metropolitan areas. As table 10.5 shows, just eleven SMSAs accounted for more than half of all legal and illegal immigrants in the United States during the 1980s, and five of these were California cities. As in the past, the New York metropolitan area remains the preferred destination of immigrants, accounting for 13.8 percent of the 1980 U.S. foreign-born population and another 15.4 percent of 1987–89 immigrants, though only 5 percent of IRCA applicants resided in New York. Los Angeles is not far behind, with 11.8 percent and 12.4 percent of 1980 and 1987–89 immigrants, respectively—but a huge 26.6 percent of all IRCA applicants nationally (more than 800,000 persons) were concentrated in Los Angeles, more than five times the number in any other urban area. Adjacent areas in Southern California (Santa Ana and San Diego) also show significant increases in both legal and especially illegal immigrant settlement. Of the leading SMSAs, only Chicago showed a drop in its relative proportion of immigrants, from 5.3 percent in 1980 to 3.5 percent in 1987–89 (although more IRCA applicants were recorded in Chicago than in Houston), while Boston's share remained at 2.0 percent during the decade (although only a tiny fraction of IRCA applicants lived in the Boston area). All other cities in table 10.5—Miami; San Francisco; Washington, D.C.; Houston; and San Jose—showed significant increases over time.

Moreover, different immigrant groups concentrate in different metropolitan areas and create distinct communities within each of these cities. For example, among the largest contingents of recent immigrants, Miami remains the premier destination of Cubans (they are already a majority of the city's total population), as is New York for Dominicans, Jamaicans, and Soviet Jews. Colombians and Haitians are also most concentrated in Miami and New York. The Los Angeles area is the main destination for Mexicans, Salvadorans, Filipinos, Koreans, Vietnamese, and Cambodians—their communities there are already the largest in the world outside their respective countries—and it is the third choice of Chinese and Indians. After Los Angeles, recent Mexican immigrants have settled in largest numbers in San Diego and El Paso; Filipinos in San Diego and San Francisco; Koreans in New York and Washington, D.C.; and Vietnamese in Santa Ana and San Jose. More Chinese immigrants settle in New York than in any other city, followed by San Francisco; more Indians also settle in New York, followed by Chicago (although among all major immigrant groups Indians tend to be the most dispersed, reflecting their significantly greater proportion of professionals).[34]

Notwithstanding the relative dispersal of immigrant professionals, they have significant impacts in the sectors within which they are employed. Rather than compete with or take jobs away from the native-born, these groups fill significant national needs for skilled talent and in some respects also serve as a strategic reserve of scarce expertise. For example, we have already mentioned the disproportionate impact of immigrant engineers in U.S. universities. Given the continuing decline of enrollments in advanced engineering training among the native-born, the proportion of the foreign-born in these fields has grown rapidly. By 1987 over half of all assistant professors of engineering under thirty-five years of age in U.S. universities were foreign-born, and it is estimated that by 1992 over 75 percent of all engineering professors in the United States will be foreign-born. Already one out of every three engineers with a doctorate working in U.S. industry today is an immigrant.[35]

The impact of foreign medical graduates (FMGs) is almost as great: over the past two decades they have constituted about 20 percent of the nation's physicians and from about 33 percent (in the 1970s) to 18 percent (by the late 1980s) of its interns and residents. They are not, however, randomly dispersed throughout the country: in New York City in the mid-1970s, for instance, more than half of the interns in municipal hospitals and four-fifths of those at voluntary hospitals were Asian immigrant doctors. Their mode of incorporation into the American health care system is largely determined by the U.S. market for interns and residents. By the mid-1970s, for example, 35 percent of available internships and residency positions could not be filled by U.S. and Canadian medical graduates, and the geographical clustering of immigrant doctors in some northeastern and midwestern states is largely a function of job availability in certain types of hospitals that draw heavily on FMGs. In general, FMGs are concentrated in the less prestigious, non-university-affiliated hospitals in underserved areas that do not attract native-born physicians, and they are relatively few in hospitals with the greatest scientific emphasis and degrees of specialization, located in the most desirable areas (such as California). Among FMGs, a further process of sociocultural stratification is evident: FMGs from countries like Great Britain have exhibited patterns of entry most similar to those of U.S. and Canadian medical graduates; followed by a second stratum of FMGs from countries like Argentina, Colombia, and India; then a third stratum from countries like Taiwan, South Korea, Iran, and the Philippines; and lastly by Cuban refugee physicians (who entered the least prestigious and least scientifically oriented training hospitals). Despite substantial increases in the pool of U.S. medical graduates during the 1980s, many hospitals have been unable to attract even native-born nurse practitioners or physician assistants to replace FMGs, who are willing to accept resident salaries and put in the typical 80-to-100-hour resident work week. A recent survey found that FMG-dependent teaching hospitals would each lose $2 to $5 million a year in Medicare training funds were they required to replace FMG residents, forcing cutbacks and affecting patient care. FMGs thus not only perform key functions in American medical care—especially in rural and inner-city hospitals serving Medicaid patients and the uninsured working poor—but they also give U.S. medical graduates more options in choosing jobs.[36]

Concerns about the economic impact of working-class immigrants more often focus on claims that they take jobs away from or depress the wages of native-born workers. Such claims, however, are made in the absence of any evidence that unemployment is caused by immigrants either in the United States as a whole or in areas of high immigrant concentration, or that immigration adversely affects the earnings of either domestic majority or minority groups. To the contrary, recent research studies of both legal and undocumented immigration point to significant net economic benefits accruing to U.S. natives. As a rule, the entry of immigrants into the labor market helps to increase native wages as well as productivity and investment, sustain the pace of economic growth, and revive declining sectors, such as light manufacturing, construction, and apparel (New York City, Los Angeles, and Miami offer recent examples). An influx of new immigrant labor also has the effect of pushing domestic workers up into better supervisory or administrative jobs that might otherwise disappear or go abroad in the absence of a supply of immigrant manual labor. Less-skilled immigrants, paralleling the pattern noted above for FMG professionals, typically move into manual labor markets deserted by native-born workers, who shift into preferred non-manual jobs.[37] In addition, immigrants, on average, actually pay more taxes than natives, but use much smaller amounts of transfer payments and welfare services (such as Aid to Families with Dependent Children [AFDC], Supplemental Security Income, state unemployment compensation, food stamps, Medicare, and Medicaid). It has been estimated that immigrants "catch up" with natives in their use of welfare services only after 16 to 25 years in the United States. Because of their vulnerable legal status, undocumented immigrants, in particular, are much less likely to use welfare services, and they receive no Social Security income, yet about three-fourths of them pay Social Security and federal income taxes. And because newly arrived immigrants are primarily younger workers rather than elderly persons, by the time they retire and are eligible to collect Social Security (the costliest government program of transfer payments), they have usually already raised children who are paying Social Security taxes and thus balancing their parents' receipts.[38]

Rather than take jobs away, entrepreneurial immigrants often create them. For example, among Koreans in Los Angeles in 1980, a recent study found that 22.5 percent were self-employed (compared to 8.5 percent of the local labor force), and they in turn employed another 40 percent of Korean workers in their businesses. The 4,266 Korean-owned firms thus accounted for two-thirds of all employed Koreans in the Los Angeles metropolitan area.[39] In Miami, Cuban-owned enterprises increased from about 900 to 25,000 between the late 1960s and the late 1980s; by 1985 the $2.2 billion in sales reported by Hispanic-owned firms in Dade County ranked that area first in gross receipts among all such firms in the country. A longitudinal survey of Cuban refugees who arrived in Miami in 1973 showed that by 1979, 21.2 percent were self-employed and another 36.3 percent were employed in businesses owned by Cubans. A subsequent survey of Mariel Cubans who arrived in Miami in 1980 found that by 1986 28.2 percent were self-employed and another 44.9 percent were employed by their co-nationals.[40] In Monterey Park ("Little Taipei"), east of Los Angeles, Chinese immigrants from Taiwan and Hong Kong—who in 1988 already comprised over half of its 61,000 residents—owned two-thirds of the property and businesses in the city. During 1985 an estimated $1.5 billion was deposited in Monterey Park financial institutions (equivalent to about $25,000 for each city resident), much of it the capital of Hong Kong investors nervous about the impending return of Hong Kong to Mainland China.[41] And, although not yet rivaling the scale of these ethnic enclaves, a burgeoning center of Vietnamese-owned enterprises has been developed over the past decade in the city of Westminster ("Little Saigon") in Orange County. In all of these cases, immigrants have built "institutionally complete" ethnic communities offering opportunities for advancement unavailable in the general economy. Already Miami and Monterey Park have mayors who are Cuban and Chinese immigrants, respectively.
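The arithmetic behind two of these claims can be made explicit (an illustrative check based on the figures cited above, not a calculation from the original study):

\[
22.5\% + 40\% = 62.5\% \approx \tfrac{2}{3} \qquad\text{and}\qquad \frac{\$1.5\ \text{billion}}{61{,}000\ \text{residents}} \approx \$24{,}600\ \text{per resident},
\]

that is, self-employed Koreans plus the co-ethnic workers they employ come to about two-thirds of all employed Koreans in Los Angeles, and Monterey Park's 1985 deposits indeed work out to roughly $25,000 per city resident.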

To be sure, other newcomers in areas of immigrant concentration—especially the undocumented and unskilled immigrant women—are exploited as sources of cheap labor in a growing informal sector that is fueled by foreign competition and the demand for low-cost goods and services in the larger economy. They find employment in the garment industry (in Los Angeles, perhaps 90 percent of garment workers are undocumented immigrants), as well as in electronics assembly, construction, restaurants, domestic service, and a wide range of other informal activities—often at subminimum wages and under conditions that violate federal and state labor laws. In this context the presence of a large supply of cheap labor does keep wages down: namely, the low wages paid to the immigrants themselves, who under their precarious circumstances are willing to accept whatever work is offered. In regions like Southern California there is the added irony that undocumented immigrants are attracted by an economic boom that their own labor has helped to create. IRCA did provide 3 million immigrants with an opportunity to emerge from the shadows of illegality, but at a cost: the new law has had the effect of driving those ineligible for legalization (virtually all post-1981 arrivals) further underground, without stopping the flow of illegal immigration; it has also led—according to a 1990 report by the General Accounting Office—to increasing ethnic discrimination by employers against legal residents. The new post-IRCA underclass of undocumented (and sometimes homeless) Mexican and Central American workers is increasingly visible, not only in traditional agricultural and horticultural enterprises but especially on dozens of street corners of California cities, from Encinitas to North Hollywood, where groups huddle during the day waiting for job offers from homeowners and small contractors. The situation has bred a new upsurge of nativist intolerance in heavily impacted areas.[42]

Refugees differ from other categories of immigrants in that they are eligible to receive public assistance on the same means-tested basis as U.S. citizens, and the federal government has invested considerable resources since the early 1960s to facilitate the resettlement of selected refugee groups. Prior to that time, refugee assistance depended entirely on the private sector, particularly religious charities and voluntary agencies. The expansion of the state's role in refugee resettlement roughly parallels the expansion of the American welfare state in the 1960s and early 1970s. In the twelve years from 1963 (when federal outlays officially began) to 1974, domestic assistance to mostly Cuban refugees totaled $2.3 billion; and in the twelve years from 1975 to 1986, aid to mostly Indochinese refugees totaled $5.7 billion, peaking in 1982, when $1.5 billion was expended, and declining sharply thereafter (all figures are in constant 1985 dollars). The lion's share of those federal funds goes to reimburse states and localities for cash and medical assistance to refugees during their first three years in the United States. Public assistance to eligible refugees is conditioned upon their attendance in assigned English-as-a-second-language (ESL) or job training classes and acceptance of employment; it also formally allows these groups (at least during a transition period after arrival) an alternative mode of subsistence outside existing labor markets and ethnic enclaves. However, states have different "safety nets"—levels of benefits and eligibility rules vary widely from state to state—forming a segmented state welfare system in the United States. For example, AFDC benefits for a family of four in California in the early 1980s were $591 a month (second highest in the country), compared to only $141 in Texas (second lowest); intact families (two unemployed parents with dependent children) were eligible for AFDC and Medicaid in California, but ineligible in Texas; and indigent adults without dependent children were eligible for general assistance in California localities, but not in Texas. Hence, the initial decision to resettle refugees in one state or another affects not only their destinations but their destinies as well. Welfare dependency rates vary widely among different refugee nationalities, and from state to state among refugees of the same nationality. Not surprisingly, the highest rates have been observed among recently arrived, less-skilled, "second-wave" Southeast Asian families with many dependent children in California; still, all research studies of Cambodian, Laotian, and Vietnamese refugees throughout the country have found that welfare dependency (which even in California keeps families below the federal poverty line) declines steadily over time in the United States.[43]

Language and the Second Generation

A more salient issue concerns the impact of the new immigration on public school systems and their rapidly changing ethnic composition. The issue itself is not new: at the turn of the century, the majority of pupils in many big-city schools from New York to Chicago were children of immigrants. Today, nowhere are immigrant students more visible—or more diverse—than in the public schools of California. By the end of the 1980s, almost a third of California's 4.6 million students in kindergarten through twelfth grade (K–12) in the public schools spoke a language other than English at home; while 70 percent of them spoke Spanish as their mother tongue, the rest spoke over 100 different languages. Yet of California's scarce pool of bilingual teachers, 94 percent spoke only Spanish as a second language, a few spoke various East Asian languages, and there was not a single certified bilingual teacher statewide for the tens of thousands of students who spoke scores of other mother tongues. Table 10.6 summarizes the trend over the past decade in the annual enrollments of language-minority students, who are classified by the schools as either fluent English proficient (FEP) or limited English proficient (LEP). In 1973 there were 168,000 students classified as LEP in the state, and that number doubled by 1980; from 1981 to 1989, as table 10.6 shows, the number of LEP students doubled again to about 743,000, and the number of FEP students increased by over 40 percent to 615,000. The FEP classification marks an arbitrary threshold of English proficiency, which schools use to "mainstream" students from bilingual or ESL classrooms to regular classes. Indeed, bilingual education in California largely consists of "transitional" programs whose aim is to place LEP students in the English-language curriculum as quickly as possible. While immigrant children gain proficiency in English at different rates—depending on such extracurricular factors as age at arrival, their parents' social class of origin, community contexts, and other characteristics—very few remain designated as LEP beyond five years, and most are reclassified as FEP within three years.[44]

TABLE 10.6  Trends in California Public School Enrollments (K–12) of LEP and FEP Students
Who Speak a Primary Language Other than English at Home, 1981–89

                          Total LEP (a)        Total FEP (a)        Total LEP/FEP (a)
Year    Total Students    N          %         N          %         N            %

1981    3,941,997         376,794     9.6      434,063    11.0        810,857    20.6
1982    3,976,676         431,443    10.8      437,578    11.0        869,021    21.9
1983    3,984,735         457,542    11.5      460,313    11.6        917,855    23.0
1984    4,014,003         487,835    12.2      475,203    11.8        963,038    24.0
1985    4,078,743         524,082    12.8      503,695    12.3      1,027,777    25.2
1986    4,255,554         567,564    13.3      542,362    12.7      1,109,926    26.1
1987    4,377,989         613,222    14.0      568,928    13.0      1,182,150    27.0
1988    4,488,398         652,439    14.6      598,302    13.3      1,250,741    27.9
1989    4,618,120         742,559    16.1      614,670    13.3      1,357,229    29.4

SOURCE: California State Department of Education, Bilingual Education Office, DATA BICAL series, 1981–89 (Sacramento, Calif.).

(a) LEP means Limited English Proficient; FEP means Fluent English Proficient. The overwhelming majority of LEP/FEP students are immigrants or children of immigrants. These students speak over 100 different primary languages, although Spanish is the language spoken by about 70 percent of total 1989 LEP/FEP enrollments in California public schools. The largest of the other ethnolinguistic groups, in rank order, include speakers of Vietnamese, Filipino (Tagalog, Ilocano, and other dialects), Chinese (Cantonese, Mandarin, and other dialects), Korean, Cambodian, Hmong, Lao, Japanese, Farsi, Portuguese, Indian (Hindi, Punjabi, and others), Armenian, Arabic, Hebrew, Mien, Thai, Samoan, Guamanian, and a wide range of European and other languages.

In some smaller elementary school districts near the Mexican border, such as San Ysidro and Calexico, LEP students alone account for four-fifths of total enrollments. In large school districts in cities of high immigrant concentration, language minorities comprise the great majority of K–12 students. In 1989, LEP students accounted for 56 percent of total enrollments in Santa Ana schools, 31 percent in Los Angeles, 28 percent in San Francisco and Stockton, 25 percent in Long Beach, and close to 20 percent in Oakland, Fresno, San Diego, and San Jose; counting FEP students nearly doubles those proportions, so that in districts like Santa Ana's over 90 percent of the students were of recent immigrant origin. These shifts, in turn, have generally been accompanied by so-called white flight from the public schools most affected, producing an extraordinary mix of new immigrants and native-born ethnic minorities. In the Los Angeles Unified School District, the nation's second largest, the proportion of native white students declined sharply from about 65 percent in 1980 to only 15 percent in 1990. To varying degrees, the creation of ethnic "minority majorities" is also visible in the school systems of large cities, including all of the SMSAs listed earlier in table 10.5. While a substantial body of research has accumulated recently on the experience of new first-generation immigrants, relatively little is yet known about the U.S.-born or U.S.-reared second generation of their children, although they will represent an even larger proportion of the American school-age population in years to come.

Until the 1960s, bilingualism in immigrant children had been seen as a cognitive handicap associated with "feeblemindedness" and inferior academic achievement. This popular notion was based in part on older studies that compared middle-class native-born English monolinguals with lower-class foreign-born bilinguals. Once social class and demographic variables are controlled, however, recent research has reached an opposite conclusion: bilingual groups perform consistently better than monolinguals on a wide range of verbal and nonverbal IQ tests.[45] Along these lines, a 1988 study of 38,820 high school students in San Diego—of whom a quarter were FEP or LEP immigrant children who spoke a diversity of languages other than English at home—found that FEP (or "true") bilinguals outperformed both LEP (or "limited") bilinguals and all native English monolinguals, including white Anglos, on various indicators of educational attainment: they had higher GPAs and standardized math test scores, and lower dropout rates. The pattern was most evident for Chinese, Filipino, German, Indian, Iranian, Israeli, Korean, Japanese, and Vietnamese students: in each of these groups of immigrant children, both FEPs and LEPs exhibited significantly higher GPAs and math (but not English) test scores than did white Anglos. These findings parallel the patterns of educational stratification noted earlier in table 10.4 among foreign-born and native-born adults in the United States. Remarkably, two groups of lower-class LEP refugees—the Cambodians and the Hmong—had higher GPAs than native whites, blacks, and Chicanos. White Anglos (but not blacks and Chicanos) did better than some other language minorities, whether they were classified as FEP or LEP—Italians, Portuguese, Guamanians, Samoans, and "Hispanics" (predominantly of Mexican origin)—almost certainly reflecting intergroup social class differences. And among students whose ethnicity was classified by the schools as black or Hispanic—with the lowest achievement profiles overall in the district—FEP bilinguals outperformed their co-ethnic English monolinguals.[46] Research elsewhere has reported similar findings among Central American, Southeast Asian, and Punjabi Sikh immigrant students, and separate studies have found that Mexican-born immigrant students do better in school and are less likely to drop out than U.S.-born students of Mexican descent.[47]

The idea that bilingualism in children is a "hardship" bound to cause emotional and educational maladjustment has not only been refuted but is contradicted by all available evidence; and in a shrinking global village where thirty times more languages are spoken than there are nation-states, the use of two languages is common to the experience of much of the world's people. But pressures against bilingualism in America—as reflected today by the "U.S. English" nativist movement and the passage of "English Only" measures in several states—are rooted in more fundamental social and political concerns that date back to the origins of the nation. As early as 1751, Benjamin Franklin had put the matter plainly: "Why should Pennsylvania, founded by the English, become a colony of aliens, who will shortly be so numerous as to Germanize us, instead of our Anglifying them?" The point was underscored by Theodore Roosevelt during the peak years of immigration at the turn of the century: "We have room but for one language here, and that is the English language; for we intend to see that the crucible turns our people out as Americans, and not as dwellers in a polyglot boardinghouse." It is ironic that, while the United States has probably incorporated more bilingual people than any other nation since the time of Franklin, American history is notable for its near mass extinction of non-English languages. A generational pattern of progressive anglicization is clear: immigrants (the first generation) learned survival English but spoke their mother tongue to their children at home; the second generation, in turn, spoke accentless English at school and then at work, where its use was required and its social advantages were unmistakable; and with very few exceptions their children (the third generation) grew up as English monolinguals.

For all the alarm about Quebec-like linguistic separatism in the United States, the 1980 census suggests that this generational pattern remains as strong as in the past. It counted well over 200 million Americans speaking English only, including substantial proportions of the foreign-born. Among new immigrants who had arrived in the United States during 1970–80, 84 percent spoke a language other than English at home, but over half of them (adults as well as children) reported already being able to speak English well. Among pre-1970 immigrants, 62 percent still spoke a language other than English at home, but the overwhelming majority of them spoke English well: 77 percent of the adults and 95 percent of the children. Among the native-born, less than 7 percent spoke a language other than English at home, and over 90 percent of them (adults as well as children) spoke English well. More detailed studies have confirmed that for all American ethnic groups, without exception, children consistently prefer English to their mother tongue, and the shift toward English increases as a function of the proportion of the ethnic group that is U.S.-born. To be sure, immigrant groups vary significantly in their rates of English language ability, reflecting differences in their levels of education and occupation. But even among Spanish speakers, who are considered the most resistant to language shift, the trend toward anglicization is present; the appearance of language loyalty among them (especially Mexicans) is due largely to the effect of continuing high immigration to the United States. For example, a recent study of a large representative sample of Mexican-origin couples in Los Angeles found that among first-generation women, 84 percent used Spanish only at home, 14 percent used both languages, and 2 percent used English only; by the third generation there was a complete reversal, with 4 percent speaking Spanish only at home, 12 percent using both, and 84 percent shifting to English only. Among the men, the pattern was similar except that by the second generation their shift to English was even more marked.[48]

English proficiency has always been a key to socioeconomic mobility for immigrants, and to their full participation in their adoptive society. It is worth noting that in the same year that Proposition 63 (the initiative declaring English the state's official language) passed in California, more than 40,000 immigrants were turned away from ESL classes in the Los Angeles Unified School District alone: the supply of services could not meet the vigorous demand for English training. Indeed, English language dominance is not threatened in the United States today—or for that matter in the world, where English has already become firmly established as the premier international language of commerce, diplomacy, education, journalism, aviation, technology, and mass culture. What is threatened instead is a scarcer resource: the survival of the foreign languages brought by the immigrants themselves, which in the absence of social structural supports are, as in the past, destined to disappear.

Given the immense pressure for linguistic conformity on immigrant children from peers, schools, and the media, the preservation of fluent bilingualism in America beyond the first generation is an exceptional outcome. It is dependent on both the intellectual and economic resources of parents (such as immigrant professionals) and their efforts to transmit the mother tongue to their children, and on the presence of institutionally complete communities where a second language is taught in schools and valued in the labor market (such as those found in large ethnic enclaves). The combination of these factors is rare, since most immigrants do not belong to a privileged stratum, and immigrant professionals are most likely to be dispersed rather than concentrated in dense ethnic communities. Miami may provide the closest approximation in the United States, but even there the gradual anglicization of the Cuban second generation is evident. Still, the existence of pockets where foreign languages are fluently spoken enriches American culture and the lives of natives and immigrants alike.[49]

The United States has aptly been called a "permanently unfinished society," a global sponge remarkable in its capacity to absorb tens of millions of people from all over the world. Immigrants have made their passages to America a central theme of the country's history. In the process, America has been engaged in an endless passage of its own, and through immigration the country has been revitalized, diversified, strengthened, and transformed. Immigrant America today, however, is not the same as it was at the turn of the century; and while the stories of human drama remain as riveting, the cast of characters and their circumstances have changed in complex ways. In this chapter, I have touched on a few of the ways in which the "new" immigration differs from the "old." But a new phase in the history of American immigration is about to begin. New bills have been introduced in Congress once again to change immigration policies—to reduce or eliminate some of the legal channels for family reunification, to increase quotas for "brain drain" and "new seed" immigrants, to allocate special visas for immigrant millionaires who will invest in job-producing businesses, to rescind the "employer sanctions" provisions of the last law, to grapple with the sustained flow of undocumented immigrants, to consider whether persons from newly noncommunist states in Eastern Europe and Nicaragua are eligible for refugee status—and the debate remains surrounded by characteristic ambivalence. The "new" immigration of the post–World War II period was never simply a matter of individual cost-benefit calculations or of the exit and entry policies of particular states, but also a consequence of historically established social networks and U.S. economic and political hegemony in a world system. The world as the century ends is changing profoundly—from Yalta to Malta, from the Soviet Union to South Africa, from the European Economic Community to East Asia and the Arab world, from the East-West Cold War to perhaps new North-South economic realignments and Third World refugee movements—and new bridges of immigration will likely be formed in the process. For the future of immigration to America, as in the past, the unexpected lies waiting.[50]



Eleven—
The Hollow Center:
U.S. Cities in the Global Era

Sharon Zukin

Cities graphically represent the disappearing center of American society. Over the past twenty years, they have become both more visible and less important symbols of the economy. Paradoxically, despite enormous efforts at rebuilding, they are less different from each other than they were before. The problems of big cities—crime, drugs, high housing prices, unemployment—are just as familiar in Spokane or Tulsa as in New York City. Meanwhile, the provincial decay of smaller cities has been negated by the spread of television, computers, and imported consumer goods. We ordinarily describe America as an urban society, but most Americans no longer live in cities. They are as likely to find their "center" in the suburban shopping mall or office park as in the downtown financial district. To some degree, Americans have always had a love-hate relationship with cities. Throughout American history the major thinkers and many ordinary men and women have loved the countryside because it offers an escape from social pressures. Cities had their own compensation because they brought a varied population into a common public life. Today, however, the public middle ground that was previously identified with cities is dissolving into a collage of racial, ethnic, and other private communities. At the same time, even cities as commanding as Los Angeles and New York are being "globalized"; that is, they are becoming more dependent on political and economic decisions that are made at the global level.

In 1986, a list of urban trends drawn up for the U.S. Conference of Mayors described a sorry situation: population drain, increased poverty, an income gap between city and suburban residents, gaps among racial groups, long-term unemployment in places where manufacturing has declined and services grow slowly, homelessness, hunger, low education levels, high crime rates, and very high taxes.[1] Such conditions cannot be described as anything but structural. Disinvestment by industry and the middle class feeds—and in turn responds to—concentrations of the poor, the ill-educated, and the unemployable. Nonetheless, neither the federal government nor private markets give cities much encouragement. Since the early 1970s, no president of the United States has drawn up an explicit urban policy. Under the Reagan administration, the Department of Housing and Urban Development was used as a patronage arm of the Republican party. The conservative thrust of federalism over the past twenty years has consistently reduced both programs and grants. And during the 1980s, the cities' biggest demands—for social services, public housing, and jobs—were sacrificed to the rhetoric of fiscal purity.

Between the Gramm-Rudman Act of 1985 and the attacks on Big Government by two Reagan administrations, federal grants to state and local governments were squeezed to only 10 percent of the federal budget. In New York City, the federal government contributed the same amount—$2.5 billion—to an $11 billion municipal budget in 1981 and to one that had grown to $27 billion in 1989.
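In proportional terms (a back-of-the-envelope calculation from the figures just cited, not in the original):

\[
\frac{2.5}{11} \approx 23\% \ \text{(1981)} \qquad\text{versus}\qquad \frac{2.5}{27} \approx 9\% \ \text{(1989)},
\]

so the federal share of New York City's budget fell from nearly a quarter to under a tenth over the decade.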

Businesses and households that can afford to move have been leaving cities for many years. Industrial decentralization to the suburbs began a century ago, closely followed by middle-class households seeking "bourgeois utopias." Land is both cheaper and more attractive outside cities. Labor is generally cheaper, too, more docile, less likely to be nonwhite. Restrictions on uses of suburban property also tend to benefit the "haves." Large companies can influence weak suburban governments for preferential zoning and tax laws, and wealthy home owners provide a pressure group for socially exclusive development. In recent years, however, cities have lost jobs and residents to areas farther away. Among households, suburbanization has grown less rapidly since the 1970s than moves to "exurban" locales. Businesses, for their part, have decentralized operations. Many have moved to, or set up branches in, low-wage regions of the country and overseas. To some degree this "footloose capital," as Bluestone and Harrison and others call it, is related to a desire to lower costs and escape the limits imposed by unionization. In part it also reflects a shift from local to nonlocal ownership of firms (as in Buffalo, New York, or Youngstown, Ohio), and an intensification of outsourcing strategies (especially devastating to Detroit). More important, footloose capital also applies to new business start-ups in growth sectors, such as electronics and telecommunications, where manufacturing is likely to be exurban. Once limited only to industrial plants, the outflow of economic activity from cities now includes a significant number of offices and corporate headquarters. The resulting "counter-urbanization" has further reduced most cities' claim to functional pre-eminence in American society.[2]

Not surprisingly, Americans have been attracted by alternatives to traditional cities. On the one hand, they increasingly live, work, and shop in exurbs, especially in the Sun Belt, in regions not previously known as centers of urban life. On the other hand, a small but growing middle-class population inhabits the gentrified centers of older cities. Like exurban residents, gentrifiers enjoy the amenities of personal consumption that are typical of a geographically mobile population. But they are tied to the city by a desire for access to its cultural markets as well as its historic symbols of power. In terms of numbers, gentrification has had a much smaller impact on cities than either suburbanization or exurban migration. It has great appeal, however, because like the exurbs, gentrified areas become great spaces of consumption.

Exurbs and gentrified downtowns are important not only because of visible spatial shifts. They are also significant "fictive spaces" in America's social geography. They convey a powerful image of the way many Americans want to live, an image of escape from the constraints of cities and a confirmation of the free movement of both people and investment capital. A simultaneous decentering to the exurbs and recentering of downtowns tear apart the old image of cities as engines of production. A more subtle picture, instead, differentiates among cities according to their position in both the service economy and a new organization of consumption. This new order alters the relation between urban space and economic and cultural power.

Cities and Economic Power

The post-postwar economy has sharpened the effects of global organization on cities. Since the 1970s, the major area of growth—business services—has depended on linking local to multinational firms in expanding markets. While some services have been bought by or have merged with international companies, others seek clients and contracts overseas. This course of development imposes a dual dependence on American cities. The cities rely on the services to fuel further growth, employ residents, and expand the tax base; but the largest employers among local service institutions, as in mass-production manufacturing, are increasingly responsive to global rather than local trends.

These conditions are especially acute in cities whose financial institutions are major players in global markets. New York and Los Angeles, with their large concentrations of international bankers, stock market traders, and foreign investors, owe their growth since the 1970s to globalization. Just as these two cities have the largest number of corporate financial headquarters and other institutional resources, so they also have the tallest office buildings, the highest land values, and the most business expansion in their downtowns. In large part the economic value of doing business downtown reflects an infusion of foreign property investment. Foreign financial institutions, especially Japanese and other Asian banks, occupy a major portion of downtown office buildings. Not surprisingly, New York and Los Angeles, as major concentrations of the power that moves capital around the world, are considered "world cities." Whether this refers only to their pre-eminent position in global financial markets, or to some index of greater cultural sophistication as well, is unclear.

In some aggregate terms—new employment, for example, or business revenues—the financial, insurance, and real estate industries compensate for cities' losses in traditional manufacturing employment.[3] Yet aspects of the new economy suggest reasons for alarm. Most of the highly paid, prestigious downtown jobs are held by suburban rather than city residents. Men and women of color, who represent a growing portion of all cities' populations, have not made such inroads into the financial services area as they have into the public sector. Because of the layoffs that follow stock market downturns, all employment in this area is risky. The threat of global financial crisis also imposes risk on many property investments, from office construction to the ownership of "signature" or "trophy" buildings that are designed by famous architects and located in high-rent districts.

The technological revolution in computers and telecommunications that made office decentralization possible also creates the means for local financial institutions to move away. "Back offices" that house computer and routine clerical operations have easily been detached from money-center banks, while headquarters and other "front offices" remain in more central locations. The importance of face-to-face contact and the symbolic legitimacy of place may enhance the city's viability as the site of a world financial market. Yet even in New York, high land prices and high wages for clerical personnel create a potential for the city's being abandoned by financial institutions.[4]

In cases where banks, stock brokerages, and insurance companies have not moved away, they have destabilized the labor force by shifting from permanent to temporary employment. These arrangements are not limited to cities, of course. Since 1980, temporary employment of all kinds has been the largest growth sector in jobs around the country (as well as overseas). Some temporary positions may pay as well as permanent jobs and may also offer health insurance and other benefits. But by establishing a large number of temporary positions that are outside the normal career stream, financial services organizations create a tenuous base for urban economic development.

Neither do financial services firms recruit widely among the cities' populations. Jobs at the top are often filled through networks established in college and business school; these job holders live in gentrified areas downtown or in the suburbs. For the most part, high-level positions are also still restricted by race and gender. When it comes to entry-level jobs requiring lesser skills, urban residents confront another type of barrier. Financial and other business services firms do not find adequate personnel among the city's high school graduates. Lacking training in math, competence in standard interpersonal communication, and skills in dress, deference, and punctuality, young men and women from the city are passed over in favor of suburban youth. Growing opportunities for employment outside cities, however, as well as a shrinking labor pool, cause urban employers much concern. In some cities, notably Boston, the financial community has developed a training-and-recruitment partnership with local high schools. In others, such as New York, this degree of institutional interdependence has not yet grown.[5]

Some demographically minded researchers speak of these employment problems in terms of a job-skills mismatch, and the structural roots of this analysis also appeal to those who think in terms of a postindustrial economy. They consider that the decline in traditional manufacturing industries drastically reduces the number of entry-level jobs that are available to high school graduates of modest academic achievements. Further, if job requirements in business services emphasize math, interpersonal, and other job skills that urban high school graduates (and dropouts) lack, then the growth of such jobs takes place without benefiting the urban population. The concentration of ethnic and racial minorities in cities, however, introduces a disquieting series of bias questions. According to the job-skills mismatch analysis, urban minority residents are unemployed in the city's growth sector because they are intellectually and culturally unemployable. Their soaring unemployment rates first of all reflect the loss of a base in blue-collar jobs in plants that have moved out of the city or shut their doors. Second, this unemployment reflects the diminishing educational achievements of the urban minority population.[6]

But the job-skills mismatch explanation of urban unemployment ignores several important factors. At least since the 1950s, many men and women of color have been employed in the service industries. They have generally been steered toward certain areas—notably, personal rather than business services, and the public rather than the private sector—and discouraged from entering others. In recent years, as racial and ethnic minority students have made up greater proportions of urban high school and college graduates, these students have, presumably, gained the qualifications to get financial jobs. At graduation, however, they confront a decreasing number of entry-level jobs, many of which have been shifted overseas or eliminated by automation (for example, insurance claims processors and bank tellers in financial services, telephone operators in other fields). Further, the hiring process in the financial services area is socially exclusive. It still segregates men from women and people of color from the jobs traditionally held by whites.[7]

This exclusion of part of the urban population is heightened by their absence, by and large, from another growth area in most cities, the sector of individually owned small businesses that are often identified with ethnic or immigrant entrepreneurs. The ethnic concentrations in most large cities enable businesses that cater to their special needs (such as food and travel services) to succeed in an "enclave economy." Alternatively, the capital that many immigrants have access to by means of self-help or mutual-aid associations often provides a base for those groups to enter various niches in the urban economy (as owners of manicure parlors, greengrocers, restaurateurs, and newsstand proprietors). Many of these businesses rely on family capitalism. Family members work long hours at low wages, and defer their individual advancement in favor of the family as a whole or the younger generation. But a preference for recruitment among their own group reinforces other hiring practices in the larger society. The garment industry has had a resurgence in the last ten years, especially in the Chinatowns of New York and Los Angeles, but Asian owners and foremen do not recruit Latinos and blacks.

Immigrants' entrepreneurialism has, at any rate, made a broader, though not necessarily cheaper, array of goods and services available in many urban areas. Child care day workers, street peddlers, and housekeepers represent new or reborn segments of the ethnic division of labor, while their better-educated compatriots staff health care facilities in both the public and private sectors. Despite the success of many immigrant groups—Chinese, Koreans, Indians, Filipinos, Cubans, West Indians, and others—poverty still bears a racial edge. Many of the Latinos and U.S.-born blacks who live in cities are among the poorest urban residents. Although statistical indices of racial segregation have steadily declined, these men and women are more concentrated by race than other groups. Race counts again in the tendency for middle- and low-income African-Americans to live in the same neighborhood. More so than in other ethnic and racial minorities, social class fails to separate urban blacks who have steady work from those who do not.[8]

Opportunities for entrepreneurialism and employment do not compensate for the low-wage jobs many urban immigrants hold. Some researchers describe these jobs as "sweatshop" labor, pointing to conditions in such growth areas as the garment and computer industries in New York and Los Angeles. Child labor, piece rates, long hours, and other types of exploitation have not been documented for these industries, but to the degree that they hire only non-union labor, perhaps paid off the books and informally contracted, employers contribute to a paradoxically cash-rich, mobility-poor urban population. The simultaneous proliferation of these jobs and high-level jobs in business services, as well as the absolute difference in incomes between them, has shaped a polarized social structure. Because the polarization of incomes in the city so clearly refers to the ability, or lack of ability, to consume, the urban class system is seen as even more divided between rich and poor than in the country as a whole.[9]

In New York City, where the average income of the poorest 10 percent of the population (including welfare payments) was $3,698 in 1986, there were 53,000 taxpayers with adjusted gross incomes of $100,000 or more; 2,840 with at least $500,000; and 1,764 with more than $1 million. Eighty-two people in New York are believed to have assets worth more than $275 million. The second-place city, Los Angeles, has only 32.[10]

Polarization also refers to divided spaces. Although a "dual city" image is much used by urban critics, the segmentation of incomes and separation of classes and races really require a more specific mapping. Peter Marcuse heuristically outlines a "quartered city," made up of the luxury city of the rich; the gentrified city of managers, professionals, and intellectuals; the "suburban" city of the lower middle class and well-paid blue-collar workers; the tenement city of the working poor; and the ghetto of outcasts, the unemployed, the homeless.[11] Significantly, the occupants of each quarter have more in common with their counterparts in other cities—in terms of jobs, mobility, and choices about what to consume—than they have either contact or common interests with residents of the other quarters. This is especially true for the luxury and gentrified areas, whose residents are likely to be foreign investors or at least consumers in an upscale global culture. The rich and upper middle class also tend to set themselves apart from other city residents by using private facilities (car phones, taxis, prep schools) instead of relying on public institutions.

Such images break the myth of the city as a middle ground between social groups. Both visually and metaphorically, the spaces occupied by more affluent groups are "islands of renewal in seas of decay."[12] Yet the area that attracts reinvestment has become larger and more visibly coherent in recent years. Like new office buildings, new upper-income housing in most older cities is mainly centered downtown. Downtown's expansion feeds on relatively undervalued property markets, the growth of business services, and investors' desire for centrally located projects that minimize risk. But in visual terms, it represents a new and broader landscape of power that grows by incorporating, eliminating, or drastically reducing the "vernacular" inner city inhabited by the city's powerless. These men and women are pushed toward less central areas and nearby suburbs that are relatively cheap and may be racially mixed. No longer geographically bound to the inner city, the less affluent and the poor carry the inner city with them as both a racial stigma and an inability to attract investment.

Public officials are hardly oblivious to the difficulty of trying to govern "the city of the poor masquerading as the city of the rich."[13] Neither luxury investment nor gentrification raises a city population's median income, which makes the city government that much more dependent on those who pay high taxes. The problem, however, is that city budget authorities are chasing mobile investors. Not only industrial firms but also real estate developers who used to operate only in local markets are now national and even international in scope. To compel them to stay in cities and build the business centers that seem to attract more growth, municipal authorities make concessions. Business influence has always been an important factor in local government, but the new element since 1980 is the formalization of these arrangements in public-private partnerships.

Private-sector organizations like the Chamber of Commerce or the local real estate trade association now initiate redevelopment projects. Their financing depends in part on city government's ability to float municipal bonds and take out short-term loans, as well as its willingness to offer tax reductions, zoning incentives, and aid in acquiring land. "Public" goals tend to converge with those of private developers. The common program is worked out in meetings among business leaders and public officials, and managed by public authorities dominated by business institutions. A focus on high-rent downtown land and new construction is supported by the city's commitment to block off streets, enhance cultural amenities, and, in general, facilitate the "privatization" of development. Under these conditions, urban planners in public employ have no creative work.

Pressure to counter new downtown development with housing that is "affordable," that is, slightly below market rate, reflects the strength of "neighborhoods" where middle-income voters live. The linkage mechanism developed (in Boston and San Francisco) in response to such pressure permits developers to have their downtown development—but requires more affordable housing as a quid pro quo. Developers are assessed a percentage of development costs for building such apartments, or agree to allocate a portion of their project to less affluent groups. In some cases, as in Battery Park City in New York, the below-market-rent housing is built elsewhere, outside the most expensive areas. This gains new low- or, more often, middle-income housing at the cost of strengthening social class segregation in the heart of the city. At any rate, such linkages are viable only where developers have a lot to gain by agreeing to them; in other words, in cities like Boston, San Francisco, and New York in the mid-1980s, when "market forces" buoyed the economy. Chicago suggests a more rigorous pressure to make developers respond to public goals (that is, racial integration, increasing the affordable housing stock, and letting neighborhoods share in downtown's prosperity). There, however, the opportunity has depended in large part on a new African-American mayor, the late Harold Washington. He attracted strong black support, dedicated staff members in city agencies, and white coalition partners—all at a time when the city drew a new round of corporate investment downtown by nationally oriented business services.[14]

Public-private partnerships institutionalize the acknowledgment of dependence on the financial sector that followed the mid-1970s outbreak of fiscal crisis. At that time, commercial banks and other financial institutions threatened New York City, Yonkers, and Cleveland with bankruptcy, ostensibly because of those city governments' profligate use of public finances. Calling in municipal debt served to discipline city agencies and remind them of the need to balance budgets. But fiscal crisis also served another end. It dramatized the death of the War on Poverty and ended the long New Deal era of social welfare at city—and federal government—expense. Most cities survived the fiscal crisis of the 1970s by concentrating layoffs and reductions on such "nonessential" services as schools and libraries, leaving police, fire, and sanitation agencies wounded but not completely cut down. In more drastic cases, such as New York, Yonkers, and Cleveland, bankers imposed nonelected supercommissions made up of leaders from the financial community and the state to oversee the spending of elected city officials. These supercommissions were given the right of approval over city budgets. Both formally and informally, they exercised control over mayors who were inclined toward populism.[15]

In the United States, linkages are usually limited to the developers' impact on the city's built environment. Provision of low-income housing units is only one possibility; developers may also provide "public areas," such as plazas or indoor galleries; they may preserve a landmark structure on the building site; or they may contribute funds to renovate publicly owned infrastructure, especially transportation facilities. The entire situation, however, is dominated by the private sector. A city's leverage depends on how marketable the project is and how much profit it can bring the developer. No linkage requires developers to extend their efforts to the sore area of employment. Often the indoor public spaces that developers provide are designed to be inhospitable to strangers, and after they are built, they are policed by private security guards. Most of them are entries or backdrops to shops. Even the outdoor spaces that are most praised for their use of public art and open landscaping (New York's Battery Park City being a prominent example) serve to advertise an image of the city as clean, safe, and almost classically cultural.[16]

Dependence on the private sector for creating new public spaces is only the most visible form of privatization. Many cities have also tried to save money by privatizing essential public services, that is, by contracting out work, letting private, for-profit firms build and operate facilities, and selling publicly owned assets.[17] While hiring a privately owned towing or waste removal company may seem a reasonable way to reduce the public payroll, shifting other services strikes at government's reason for existence. Courtrooms and prisons may be leased, hospitals may be run by private chains, and forms may be processed outside the public sector. But the efficiency of private managers is based on skewing service to the ability to pay, not on equity or universal service. Turning city services over to private firms also means losing public control. It suggests that the last vestige of citizenship in the city is gone, that the bureaucracy of city government is just a functional arrangement with no pretense to mediating a moral order.

Though hardly new, the dominance of private organizations in redevelopment and the divorce between downtown and the neighborhoods have been accentuated during the recent growth in service economies. Cities face renewed problems of allocating scarce public resources among needy populations while attracting successful businesses that could easily move away. The irony of a city's success in enhancing its "business climate" is that the occupants of high-income jobs go elsewhere to live. Moreover, the expansion of corporate facilities displaces poor residents farther from the core. And the wide array of private consumption opportunities in the city is monopolized by a narrow band of the most literate, affluent, cosmopolitan men and women. Under these conditions, most public institutions are degraded. They are either ceded to the poor, like public schools, or harnessed to the private sector, like public building authorities. In the process, the sense of public life in the city is eroded.[18]

Cities and Cultural Power

The shift from an industrial to a service economy is paralleled by visual as well as social changes. Just as the use of space shifts from "dirty" to "clean" work, so the visible legend of the city changes to reflect a new landscape of cultural power. To some degree this change is based on the consumption patterns of more affluent, highly educated residents—gentrifiers who graduated from college during the 1960s and 1970s. But it also represents a change in the ideological meaning of the city, and as such it shapes the conscious production of city space.

Architecture and design are the intimate partners of redevelopment in this process. Downtown becomes a competitive arena of style, the real estate market's cutting edge. Whether they are in Pittsburgh's Golden Triangle, Renaissance Center in Detroit, or New York's Battery Park City, the buildings are both monumental and commercial. Indeed, they are monumental because they are commercial. They are meant to provide a new skyline for the city, a vertical perspective on the city's financial power. Not coincidentally, they are all important waterfront developments. Reusing this land wrenches it from the docks, the dives, the wholesale markets that for many years enclosed the commercial district and limited its expansion. The waterfront's reuse grows out of both the desire to capture a scarce amenity and a reconsideration of the cultural value of centrality.[19]

Cities never lose the moral aura of central places. This is the secret of their uniqueness, which in turn explains the endless fascination with rebuilding and the deep nostalgia for structures that have been torn down. What common history there is in American cities is located in the center. This is the marketplace of ideas and commerce, the site of the oldest buildings, the area of public ceremony and desire. Theaters coexist with peep shows, corporate headquarters with wholesalers and jobbers, city halls with video arcades. Despite its heterogeneous uses, this is the most attractive place for real estate investment. The irony is that more investment tends to destroy the center by eating away at its diversity.

The recent redevelopment of the center is partly a reaction by institutional investors to risk in alternative investments such as Third World loans, suburban shopping malls, and office buildings in economically troubled Houston or Denver. But it also reflects a quest by certain parts of the middle classes for access to the city's historic cultural power. Beginning in the 1960s, a reaction against publicly funded urban renewal inspired more culturally sophisticated middle-class men and women to advocate preserving, rather than tearing down, old buildings of historic value. They were mainly attracted to buildings in the center—the public halls and private houses that once belonged to, or were designed by, a patrician elite. These were among the first structures to attract the aesthetic eye of gentrification.

During the 1970s, the number of gentrifiers who put down roots in center-city neighborhoods rose. Mostly single men and women or childless couples, they bought nineteenth-century houses that had become run down and restored their old-style beauty. The way they used these houses differed from that of previous residents. They preferred architectural restoration to modernization (except for creature comforts like bathrooms, kitchens, and air-conditioning). If the houses had been converted to rentals or single-room-occupancy hotels, they returned them to single-family use, usually the owner's, or converted them into pricey condominiums. Gentrifiers also tended to empty the streets. They didn't congregate on corners or in front of their homes, and they didn't mingle with neighbors. Nor did they patronize many of the old neighborhood stores, which were soon replaced by the restaurants, bookshops, and clothing stores that catered to gentrifiers. From one point of view, gentrification created a middle-class neighborhood on the basis of cultural consumption. From another, considering the relative costs of housing downtown and in the suburbs, it represented a rational form of middle-class housing investment.[20]

By the 1980s, a significant movement of investors into some downtowns created pressure on government to generalize the benefits of incremental, private-sector urban renewal. While local governments created historic landmark districts and enacted legislation to encourage reuse of old buildings in the center, the federal government changed the tax laws to make historic preservation and commercial reuse more attractive through deductions. Every U.S. city now glories in its historic downtown as a magnet for further private-market investment. Gentrification thus provided a stepping-stone from the federally funded urban renewal that tore down so many old buildings during the 1950s and 1960s to the speculative new construction that augmented the central city during the 1980s. Today, no downtown is considered complete without office towers, ethnic quarters, cultural complexes, and gentrification.

As a cultural ensemble, downtown's selling point is that it contributes to urban economic growth by attracting tourists. But the major tourists are the city's own residents. Those at higher income levels seek out new restaurants, shop for imports of finely wrought or singular goods, and go to look at the places where art is produced, exhibited, and sold. These spaces for cultural consumption are generally located in the center, or in adjacent derelict districts, where rents are cheap, buildings are old enough to provide an atmosphere, and a dense pattern of support services emerges. New York's SoHo provided an unplanned model for this sort of urban revitalization. But during the 1970s, Boston's Faneuil Hall Marketplace and Baltimore's Inner Harbor turned it into a planning model. Faneuil Hall is particularly interesting because its developers took a strong design concept from the existing use of the building and used it to displace the fruit and vegetable vendors who rented stalls there. They were replaced by stands selling arts and crafts products, imported foods, and other gift items that can be found elsewhere. The essence of the transformation, however, is that it opens Faneuil Hall to middle-class use and signals to white residents and tourists that this is a place for them. By making a permanent commercial "festival" out of a grubby daily market, the developers of Faneuil Hall eliminated both the "periodic" use of the space and authentic, even functional, popular culture.[21]

In large part, redeveloping the downtown depends on the commercial re-creation of an urban middle-class culture. More sophisticated than suburbia, the newly interesting downtown is a realm of the senses. Its spatial organization and visual cues "open" the center to a highly selective consumption. In its conversion from small shops, industrial lofts, and working-class homes, downtown is caught up in—and spearheads—an "artistic mode of production." Artists are the primary consumers in this image of the city, and everyone in the more cerebral, or more pretentious, part of the middle class is interested in bridging art and life.[22]

The new downtown also bridges public and private spheres. Large mixed-use projects typically blend shops on the lower floors, offices in the middle, and apartments above. They allude to the density and vitality of older city streets without the hint of chaos, the expectation of the unexpected, that is part of an old city's fabric. New urban spaces give a clear sense of keeping the unruliness of the city out. To enter them, people come inside from the street: they are neither purely public nor purely private spaces. The State of Illinois Center in Chicago is perhaps the most perverse example of this "liminality." Built for government offices, the project has the atrium design of modern hotels, and its first few floors house a shopping center. Projects like these usually enclose an extremely large volume of space. They often include glass-sided elevators or high escalators, which are likely to be filled with moving crowds. But the grandeur of their scale conflicts with the triviality of their function. While shopping may have become a social experience that men and women value in itself, the stores in these mixed-use projects are usually branches of national chains that sell mass-produced goods.[23]

To some extent the quest for distinction in mixed-use spaces has come to rest on the notion of the city as festival. This suits the reorganization of the city as a consumption space, where shoppers are provided with a built environment that contextualizes the ephemeral while the buildings themselves are decontextualized from the city's past. The festival aspect of urban space fits a postmodern susceptibility to eccentricity and invention. Its "free-market populism" benefits the eclectic consumer while segregating those who can pay from those who live on the street. Much of the festival use of the city center relates to the "society of spectacle" that is described in the work of contemporary cultural critics. Born of the late-nineteenth-century burst of commercialism and urbanization, a city of spectacle features passive crowds floating among commercial distractions. But the city's adoption of a festival theme also reflects the influence of theme parks in the culture of contemporary spaces. Theme parks, or their urban equivalents in either red-brick or atrium shopping centers, organize varied bundles of consumption. Equally important, they also organize how people experience the space of consumption: the city becomes an imaginary stage-set for dream fulfillment.[24]

While the qualities of place can be abstracted in both historic preservation and new construction, the real downtown is formed by joining circuits of economic and cultural capital. Old buildings provide an object of aesthetic interest; a site for relatively low-cost cultural production and consumption, especially among more adventurous cultural consumers; and a magnet for real estate investment. The physical infrastructure generates markets for architectural restorations as well as avant-garde art; together, they create a downtown "scene" that—with enough consumers—sparks a booming local service economy. This local economy, however, is highly skewed toward high-class and international uses. It has more art galleries than dry cleaners, more clothing boutiques than supermarkets. The local real estate market grows in tandem with the sale of historical replicas, from Victorian furniture to "French country antiques." Recognizing these areas of the city as historic landmark districts legitimizes property investment there and gives a certain cachet to local business establishments. The areas become well known by means of articles in the daily newspapers and magazines. Target of an ever more mediated middle-class consumption, the historic and cultural downtown attracts more new investment to the central business district. In part because of the arrival of foreign investors, the old financial district sprouts new office towers. What these buildings represent—their cultural power in the world economy—contradicts the local or avant-garde spirit of most initial gentrifiers.

If downtown spells fun for the more sophisticated middle class, it is not so hospitable to the unemployed, the homeless, and lower-income groups. Over the past twenty years revitalization has eliminated low-rent housing from the center, especially the skid row flophouses and single-room-occupancy hotels that catered to a transient, older, jobless group of men that used to be labeled homeless. Revitalization has also displaced the stores such people patronized—food and liquor shops, used clothing stores, pawnshops. New shops and the firms in new office buildings displace the casual labor market as well. Unlike the old docks, railroad yards, and warehouses that used to abut the center, they do not recruit the homeless as casual labor. The new downtown provides so much less living space for a poor population that these men and women are literally homeless. High property values and low vacancy rates decrease their chances of finding even a temporary place to settle down, while the density of activity and transportation downtown continually lures them to the center. In recent years the homeless population has been swelled by more women and by families with children who cannot earn enough to pay the rent. Ironically, they are driven out of most private-sector public spaces, especially in front of the tonier shops, and so they try to find shelter in the bus and subway stations, railroad terminals, and city streets.[25]

Just as middle-class consumers of the city demand more meaningful public space, so do homeless men and women seek public space as the last remaining shelter. Whether cities can provide public space for either group—in what proportion, and where—has become an index of public and private social power.

Cities and Social Power

As the largest cities have begun to elect mayors from African-American and Latin communities, the cities themselves have become less prized. Public institutions are required to expand their functions to cover more human needs—adjudicating court cases, tending children all day, providing temporary shelter—while funding lags. Crime and drug sales plague many residents who cannot insulate themselves behind private security guards. From banker to mayor to drug gang, in the city there are many kinds of social power.

When we talk about cities in America today, we should differentiate among three "orders" of cities that create vastly different claims to social power. Within the global social order, the most power is concentrated in New York and Los Angeles, America's largest cities and its principal financial and communications capitals. These cities are not fatally threatened by recent downturns in jobs or by housing prices so inflated that they forestall mobility. But their prosperity has left a hollow ring of outer boroughs or inner suburbs between downtown's expansion and more affluent suburban counties. In both cases, the "city" will only continue to grow as a result of regional growth; most older areas of the city house new immigrants who are saving to move out and an underemployed native population. Other cities may look like smaller versions of New York and Los Angeles, but they lack the base in transnational enterprises that gives these two cities global scope and scale as well as a fearsome glamor.

Aside from the two world cities, a more purely national order differentiates power according to cities' age and region of the country. Newer cities are mostly southwestern and southeastern. They have a "suburban" style of life, which is automobile-dependent, home-owning, private. They also have a base in newer manufacturing industries—mainly as a result of extensive military contracts—as well as regional and national services. Lacking a claim to the social power of global capitals, they nonetheless provide the sort of middle-class life that people identify with the American Dream. And they may be the only cities in the country to do so.

Within cities, another order differentiates between the populist power of the neighborhoods and the financial power of the business center. Neighborhood residents hold the city's remaining manufacturing jobs, work in the civil service, and provide the major part of the work force in the private service economy. But because they cannot or will not move out of the city—for reasons of income and race—they bear the burden of the moral problems that no city government can solve. In the neighborhoods are the homeless shelters, the drug wars, the violence that rips through public schools. And in the neighborhoods we also find the fierce sense of territory that inspires racial terror. From these contradictions arises that which is known in American society as community, the city's only form of legitimate social power.

Since the urban reforms that began in the late 1960s, "community" has been a universal rallying cry for improving public services. The concept of community has also been a focus for organizing low-income men and women to demand access to political power. While community movements have made social power in the city more competitive than before, they have also provided a way to integrate unorganized groups into political life.

Twenty years of experience indicate that the vehicles of community empowerment are flawed. Administrative decentralization, for example, has often suffered from too little funding controlled by too few people. Central bureaucracies, both federal and citywide, have been reluctant to give up control over hiring and budgets. Many civil servants, moreover, such as police and fire fighters, do not live in the cities where they are employed, either because they cannot afford high housing prices or because they want better living conditions. Nor are coalitions that elect minority-group mayors effective tools for community empowerment. On the one hand, urban minorities are often divided along racial and ethnic, as well as political, lines. Terms like the "black community" and "Latin community" encompass a wide variety of competing local groups. On the other hand, the public goods and social conditions toward which they strive are not necessarily allocated by public command. Quality of life in the city is so dependent on income that it is essentially controlled by private decisions.

Despite its real limitations, the concept of community suggests how little even the poorest neighborhoods of a city conform to the stereotype of "social disorganization."[26] Non-nuclear families and the working poor make up a large portion of the urban population, but the areas where they live generate their own, fairly continuous structure of community organizations. Linked by individual activists, these organizations respond to both community issues and external conditions. The encouragement of City Hall (and formerly, the federal government) enables them to develop a fairly stable base that may remain outside the control of traditional urban institutions, especially political parties. At best, community organizations goad the city government into giving poor residents of the city a little more access to public goods—longer library hours, a drug treatment program, a slightly more responsive police department. At worst, they have no effect on housing, jobs, and income—the basic parameters of living conditions.

The structure of the whole society affects the issues that are considered urban problems. But while poverty, drug addiction, and decaying public infrastructure are national in scope, no national institution has the moral authority to compel their solution. Moreover, as long as cities have little autonomy in the face of global markets, their problems are defined in terms set by the private sector. Americans still visualize cities as the public center of their society. Yet the city is a hollow center, more an image of power than a means of empowerment.

