Preferred Citation: Dimock, Wai Chee. Residues of Justice: Literature, Law, Philosophy. Berkeley: University of California Press, c1996.

4— Pain and Compensation

"Although the traditional subject of economics is indeed the behavior of individuals and organizations in markets, a moment's reflection," Richard Posner explains, "will suggest the possibility of using economics more broadly."[1] These words, offered at the outset of The Economics of Justice (1981), might be taken as a manifesto for one of the most powerful movements in contemporary legal thought, the "Law and Economics" movement.[2] Posner wants to apply economics to law, or, more accurately, he wants to absorb law into economics, as its subset. Law is a subset of economics because the latter discipline is not only more encompassing but also more foundational—more controlling of our behavior and more constitutive of our thought—so much so that all rational choice must issue from its premise and look to it for justification.

This cognitive primacy of the economic, for Posner, is not at all occasional but is absolute. Like Gary Becker[3] (his acknowledged intellectual forebear and winner of the 1992 Nobel Prize in Economics), Posner thinks of human reason itself as fundamentally economic in nature. Human beings are "rational maximizers of their satisfactions," and our thoughts are always calculations, always a form of cost-benefit analysis, guided always by efficiency as their legitimizing end. In short, for Posner as for Becker, economics is coextensive with (and indeed equatable with) human rationality; it makes up the "reason" in any reasoning process. And since "rationality is not confined to explicit market transactions but is a general and dominant characteristic of social behavior," economics too must reign as the governing principle in all areas of life, "not limited to the market." Indeed, given its universal currency, its status as a language common to all, a language that informs every decision-making, "the economics of nonmarket behavior" must encompass the sum total of human experience. It is nothing less than a language of "rational choice," nothing less than a "moral system," nothing less, in fact, than "a concept of justice."[4]


A Just Measure

Like Marx, Posner can figure in this book only as an agent provocateur, only as a style of rationality and a style of explanation the limits of which I want to test. This chapter is, in many ways, an extended response to Posner. I take issue not only with the economic reasoning he exemplifies but also with the theory of justice he expounds. That theory, broadly speaking a theory about the quantifiability of justice (using cost-benefit analysis as the all-purpose yardstick), can claim a precursor as remote as Aristotle, but it is Law and Economics that has secured its current authority and vitality. As is characteristic of the method throughout this book, my response to Posner is threaded through a historical argument in which Posner himself will actually appear as a less than central figure, since the adequating rationality of which he is so forceful an exponent is also a phenomenon with a genealogy of its own, one whose ambitions and limits might be historically investigated. In the nineteenth century, for example, such a rationality would inspire not only a new penal philosophy, and not only a new tort law (the precursor of Law and Economics), but also, especially in the contexts of slavery and urban poverty, a new ambition to quantify sentience as the instrumental ground of humanitarian reform, an ambition to come up with something like a calculus of pain.

These (and other) rationalizing projects might be seen as so many contextual associates—and so many quarreling neighbors—for the realist novel, a genre driven, perhaps as much as any literature can be, by a longing for objective adequation, but also haunted, again as much as any literature can be, by the futility of such an ideal. In its very search for commensurability, in its very desire for a just measure of things, the realist novel is darkened, fleetingly but also quite routinely, by the specter of the incalculable, the noncorresponding, the unrationalizable. Given such specters, such misgivings of a self-inflected (not to say self-afflicted) character, I want to make a plea for a critical practice responsive to what we might call the cognitive residues of a text, responsive to what remains not exhausted, not encompassed by its supposed resolution.

The novels of William Dean Howells readily come to mind—one thinks of A Modern Instance (1882), The Minister's Charge (1886), A Hazard of New Fortunes (1890)—novels that, like so many other works by Howells and so many other works in the realist genre, seem to owe their very existence to a certain adjudicatory crisis. This crisis they dwell upon, fret over, and preserve in memory—not in spite of but because of their endings, endings often so meagre in their proposed satisfaction as to seem a virtual parody of the term. Even The Rise of Silas Lapham (1885), a novel that, at first glance, might seem less anguished than the others, manages all the same to have an adjudicatory crisis of its own, which it tries (and fails) to handle as a Posner-like problem, a problem in the economics of justice.

On that fateful occasion, the Laphams, feeling confused and wretched, find themselves seated in front of the Reverend Sewell, desperate for advice. They have just been hit by a terrible disaster, a bizarre new development in their daughters' marital fortunes. The presumptive suitor of one daughter, they discover, is actually courting and indeed has proposed to the other one. What is one to do? Should one opt for an across-the-board suffering for all concerned, or should one settle for damage control? The Laphams have no idea. But the Reverend Sewell knows exactly what to think. The answer seems clear to him, as clear as an arithmetic equation, for what is at stake here is simply a question of numbers:

"One suffer instead of three, if none is to blame?" suggested Sewell.

"That's sense, and that's justice. It's the economy of pain which naturally suggests itself, and which would insist upon itself, if we were not all perverted by traditions which are the figment of the shallowest sentimentality."[5]

Like Richard Posner, the Reverend Sewell is impressed by the rationality of economics: by its ability to quantify and clarify, to provide a just measure of things. "Justice," then, for Sewell as for Posner, is a matter of efficiency, achieved in this case by the minimization of cost. "One suffer instead of three," Sewell says, as he urges upon the Laphams what he calls an "economy of pain." Behind this specific recommendation is a more general proposition, one that locates the cognitive ground of ethics in economics and locates it, furthermore, in something like a quantification of sentience, a calculus of pleasure and pain. Sewell's advice is eminently rational, but, we might add, not altogether new, for his "economy of pain" had a different name and a wider currency long before he proposed it, being immortalized by the phrase "the greatest happiness of the greatest number."


That phrase is, of course, most famously (or infamously) associated with Jeremy Bentham. In the preface to A Fragment on Government (1776), an anonymous attack on Blackstone, Bentham had offered up (and emphasized with italics) what he called a "fundamental axiom," namely, that "it is the greatest happiness of the greatest number that is the measure of right and wrong."[6] This "Greatest Happiness Principle," as John Stuart Mill glosses it in Utilitarianism (1861), is one that "holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to promote the reverse of happiness. By happiness is intended pleasure, and the absence of pain; by unhappiness, pain, and the privation of pleasure."[7] Pleasure and pain are not just physical sensations to the utilitarians. They are important, above all, because they are computable units, because they can be weighed, measured, aggregated, and translated into a commensurate ratio. As such, they make up the very numerical ground upon which ethics itself can become quantified, upon which every act of judgment can become an act of calculation. "Sum up all the values of all the pleasures on the one side, and those of all the pains on the other," Bentham urges, and "the balance" will yield the measure of right and wrong for any individual action. For communal actions, Bentham says,

take an account of the number of persons whose interests appear to be concerned; and repeat the above process with respect to each. . . . Sum up the numbers. . . . Take the balance; which, if on the side of pleasure, will give the general good tendency of the act, with respect to the total number or community of individuals concerned; if on the side of pain, the general evil tendency, with respect to the same community.[8]
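Bentham's two-step procedure (sum for each person, then sum over persons) can be restated schematically; the notation below is mine, not Bentham's:

```latex
% A schematic rendering of Bentham's balance (notation mine, not Bentham's).
% For each person i affected by an act, sum the values of the pleasures and
% the pains it occasions, and take the difference:
\[
  B_i \;=\; \sum_j p_{ij} \;-\; \sum_k q_{ik}
\]
% where the p_{ij} are the values of the pleasures and the q_{ik} the values
% of the pains for person i. For the community, repeat the process for each
% of the n persons concerned and sum the balances:
\[
  B \;=\; \sum_{i=1}^{n} B_i
\]
% with B > 0 indicating the act's "general good tendency" and B < 0 its
% "general evil tendency," in Bentham's terms.
```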

This hedonistic calculus—this emphasis on the ethical primacy of pleasure and pain, and on their numerical computability—is usually taken to be the hallmark of utilitarianism. Other eighteenth-century thinkers, notably Locke, Hutcheson, and Hume, had also tried to develop an ethical system from a sensationalist epistemology, but it was Bentham who tried, most indefatigably, to ground that epistemology in arithmetic, claiming for it the quantifiability of a simple equation. Of course, Bentham is a man whose company the Reverend Sewell might not relish. Sewell's successor, Richard Posner, certainly does not relish it. Mindful that the critics of Law and Economics are most likely to "attack it as a version of utilitarianism,"[9] Posner sets out to exorcise the "spongy, nonoperational" ghost of Bentham and to demonstrate, once and for all, how infinitely superior "wealth maximization" is to the "greatest happiness" principle. Even so, as he reluctantly admits, "Bentham plays a prominent, if somewhat sinister, role" in his book.[10]

Yet Posner might have set his mind entirely at ease on this score, for his intellectual genealogy is both longer and more honorable than his attacks on Bentham would suggest. It was Aristotle, after all, in the Nicomachean Ethics, who first tried out something like a mathematization of ethics, analyzing distributive justice as a geometrical progression and rectificatory justice as an arithmetical progression.[11] Closer to home, the search for a formalizable ethics—a uniform measure for all human affairs—could also claim its descent from Bacon and Newton, Condorcet and Leibniz, Hutcheson and Hume. In short, an idealized principle of commensurability had dominated Western thought long before Bentham gave it his distinctive expression. It is this principle that Adorno and Horkheimer would single out for critique in Dialectic of Enlightenment, their fierce attack on the rule of "equivalence" which they see as the origin as well as the burden of Western thought. Enlightenment rationality, they argue, is nothing less than a "principle of dissolvent rationality." It believes in "universal interchangeability," believes in the "calculability of the world," and so equates everything, "liquidates" everything, and subjects everything to the rule of the "fungible." And, as damning evidence, Adorno and Horkheimer cite a remark by Bacon: "Is not the rule, 'Si inaequalibus aequalia addas, omnia erunt inaequalia,' an axiom of justice as well as of the mathematics? And is there not a true coincidence between commutative and distributive justice, and arithmetical and geometrical proportion?"[12]

Bacon was, of course, doing no more than echoing Aristotle and refurbishing an ancient dream of adequation, a dream of a rational order at once immanent and objective, at once numerically computable and humanly edifying. In its full flowering in the eighteenth century, this rationality would produce, among other things, William Petty's "political arithmetic," Condorcet's "mathématique sociale," and Chastellux's "indices du bonheur." To these ambitious efforts, we might also add the precedent of Descartes, with his esprit de géométrie,[13] as well as that of Hobbes, who, writing at almost exactly the same time as Descartes—in 1642—had looked to "the Geometricians" for moral guidance. ("If the moral philosophers had as happily discharged their duty," Hobbes said, "the nature of human actions [would have been] as distinctly known as the nature of quantity in geometrical figures.")[14] And we might add Spinoza as well, who, as if in response to the very challenge issued by Hobbes, would soon take it upon himself to apply Euclidean geometry to moral philosophy; his major work, Ethica more geometrico demonstrata,[15] was published after his death in 1677. Leibniz, meanwhile, announced a project which would include not only geometry and mechanics but also a scheme for settling all political, legal, and moral disputes:

If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice for them to take their pencils in their hands, to sit down to their slates, and to say to each other (with a friend to witness, if they liked), "Let us calculate."[16]

Such supreme faith in "calculations" suggests that at the onset of the Enlightenment, the reign of the numerical was already customary rather than revolutionary. Still, there was something unusual about the computing fervor of the eighteenth century: unusual not only in its many obsessions but also in its many innovations. Through the influence of Locke, for example, psychology was to emerge as a new discipline, indeed as the preeminent science of man, predicated in part on a speculative—but nonetheless enumerable—inventory of the mind. The idea of "Number," Locke wrote, "is the most intimate to our Thoughts, as well as it is, in its Agreement to all other things, the most universal Idea we have. For Number applies it self to Men, Angels, Actions, Thoughts, every thing that either doth exist, or can be imagined."[17]

The new philosophy of mind, taking its measure from this "most universal Idea," was therefore also to be a science of numbers. And "upon this ground," Locke said, "I am bold to think, that Morality is capable of Demonstration, as well as Mathematicks."[18] This was the ambition of Locke's Essay Concerning Human Understanding (1689), and it was the ambition as well of a long line of distinguished successors, from Hutcheson's Inquiry into the Original of our Ideas of Beauty and Virtue (1725) to Hume's Treatise of Human Nature (1740). In this context, it is not surprising that "happiness" should emerge as one of the key words of the Enlightenment, the pursuit of which would animate not just Jefferson's Declaration of Independence but numerous other declarations similarly inspired by dreams of a rational order. As Garry Wills points out, "happiness was not only a constant preoccupation of the eighteenth century; it was one inextricably linked with the effort to create a science of man based on numerical gauges for all his activity."[19] Happiness had a place in ethics precisely because it was quantifiable, because it could be itemized and distributed, on the one hand, and aggregated, on the other hand, in terms of its sum total both within one individual and within any group of individuals. It was in this quantifying spirit that Beccaria would write, in On Crimes and Punishments, of "la massima felicità divisa nel maggior numero," a phrase which would in turn inspire Bentham's English adaptation and from which he was to derive "the principle by which the precision and clearness and incontestableness of mathematical calculations are introduced for the first time into the field of morals."[20]

Quantifying Morality

And so, for all his disreputableness, then and now, Bentham would seem to be writing out of a broad intellectual tradition.[21] Even his painstaking (and, to us, seemingly demented) efforts, in An Introduction to the Principles of Morals and Legislation (1789), to work out the exact degrees and ratios of pleasures and pains, had a genealogy of sorts.[22] Indeed, even the momentous phrase itself, "the greatest happiness of the greatest number," turns out to have a previous user. As Robert Shackleton has shown in his valuable bit of detective work,[23] it was Francis Hutcheson who first used the phrase, in his Inquiry into the Original of our Ideas of Beauty and Virtue (1725). That treatise, an attempt to measure morality by algebraic equations, offered advice that, like Bentham's more celebrated formula, might have been of interest to the nineteenth-century Laphams, stuck in their impasse:

In comparing the moral qualities of actions, in order to regulate our election among various actions proposed . . . we are led by our moral sense of virtue thus to judge: that in equal degrees of happiness expected to proceed from the action, the virtue is in proportion to the number of persons to whom the happiness shall extend . . . and in equal numbers, the virtue is as the quantity of the happiness or natural good; or that virtue is in a compound ratio of the quantity of good and number of enjoyers. . . . So that, that action is best which accomplishes the greatest happiness for the greatest numbers.[24]
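Hutcheson's "compound ratio" amounts to a simple proportionality; the symbols below are mine, not Hutcheson's:

```latex
% A schematic rendering of Hutcheson's rule (symbols mine, not Hutcheson's):
% with Q the quantity of happiness an action produces and N the number of
% persons to whom it extends, the virtue V of the action stands in a
% "compound ratio" of the two:
\[
  V \;\propto\; Q \times N
\]
% so that for equal degrees of happiness (Q fixed) virtue varies with the
% number of enjoyers, for equal numbers (N fixed) it varies with the
% quantity of good, and the best action maximizes the product.
```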

Except for the memorable phrase at the end, the statement is perhaps not overly striking. Still, it is worth pointing out (and not merely as a matter of antiquarian interest) that it was Hutcheson rather than Bentham who introduced the phrase, and who did so within the framework of a moral philosophy in which the numerical would figure centrally as the arbitrating ground. Unlike Bentham (whose influence in America was relatively negligible),[25] Hutcheson, as a major figure of the Scottish Enlightenment, had an intellectual legacy that was both extensive and well documented. That legacy became increasingly transatlantic in the course of the eighteenth century. Both in his own right and through his influence on David Hume, Adam Ferguson, Lord Kames, Adam Smith, Thomas Reid, and others, Hutcheson had every claim to being the founder of a tradition of moral philosophy in America. This moral philosophy, always priding itself on its numerical clarity, would soon gravitate (especially in the hands of Adam Smith) toward an intellectual as well as institutional alliance with the emerging discipline of economics.[26] Beginning with Hutcheson, then, we can map out a line of descent for a style of rationality characteristic of the Scottish Enlightenment, a rationality in which the moral and the economic would soon become commensurate: commensurate, because they were articulated out of a shared cognitive foundation, a shared assumption about the quantifiability of the world.

Terence Martin has alerted us to this Scottish tradition in American thought.[27] Henry May has characterized its powerful influence as the "conquest" of America.[28] More recently, as revisionist historians try to unseat Locke from his putative centrality in America, the philosophy of the Scottish Enlightenment has emerged (along with classical republicanism) as a major contending force within the volatile intellectual climate that was the eighteenth century.[29] By the late eighteenth century, the Scots had scored a decisive victory in at least one area. Sober, pragmatic, and lucid, their moral philosophy was eminently readable and eminently teachable. The Common Sense popularizers—Thomas Reid, Lord Kames, James Beattie, and Dugald Stewart—all showed up in great numbers on booksellers' lists. To American colleges, charged with the education of a free and virtuous people, this enlightened doctrine must have seemed a gift from heaven. Championed by Francis Alison and William Smith of the College of Philadelphia and by John Witherspoon of Princeton (all transplanted Scots), the Scottish moral philosophy quickly became the backbone of the American college curriculum. Madison studied it at Princeton, Jefferson studied it at William and Mary, and five future signers of the Declaration of Independence applied themselves to it at the College of Philadelphia. Even at the Anglican King's College (later Columbia), the Presbyterian Hutcheson would still claim the pride of place, his text taking up the last two years of study.[30]

The intellectual accessibility of moral philosophy was further enhanced by its disciplinary centrality within the Scottish organization of knowledge. In Scottish universities, moral philosophy had always been defined in the broadest of terms, as an umbrella discipline. Hutcheson, Ferguson, and Reid were all professors of moral philosophy. So was Adam Smith—who wrote not only The Wealth of Nations (1776) but also The Theory of Moral Sentiments (1759) and who held the Chair of Moral Philosophy at Glasgow for twelve years, from 1752 to 1764, lecturing on political economy only as one of the fourfold divisions of moral philosophy.[31] Inspired by that tradition, American colleges during the first half of the nineteenth century also featured moral philosophy as the centerpiece of the curriculum. The entire senior year was devoted to it. Often taught by the college presidents themselves and using textbooks written by them, this core course provided the standard educational experience for generations of college graduates. Needless to say, moral philosophy so broadly defined was also broadly various in its subject matter. Like its Scottish kindred, it became at many points indistinguishable from political economy, the two being seen as adjacent (or overlapping) disciplines. Thus, the Reverend John McVickar at Columbia, appointed in 1818 to the first chair of political economy in the United States, had actually been appointed the year before as the professor of moral philosophy, rhetoric, and belles lettres. Similarly, Francis Wayland, the energetic president of Brown, not only began as a minister and not only wrote a "phenomenally successful" textbook, The Elements of Moral Science (1835), but also felt qualified, in the space of two years, to write another textbook, The Elements of Political Economy (1837).[32] As late as 1860, political economy at Harvard was taught by Francis Bowen, the Alford Professor of Natural Religion, Moral Philosophy, and Civil Polity.[33] This arrangement was not as outrageous as it might seem, for Bowen, like McVickar and Wayland, and like Richard Posner in the twentieth century, turned out to be an expert on both morality and economics. Just one year after he collected his Lowell Lectures into the popular Principles of Metaphysics and Ethical Science Applied to the Evidences of Religion (1855), he would publish an equally magisterial volume, The Principles of Political Economy Applied to the Conditions, Resources, and the Institutions of the American People (1856).

If figures like McVickar, Wayland, and Bowen seem to us unduly interdisciplinary, it is helpful to remind ourselves that what we now take to be clear-cut "disciplines" had not always been so perceived. As Francis Wayland said, "The Principles of Political Economy are so closely analogous to those of Moral Philosophy, that almost every question in one, may be argued on ground belonging to the other."[34] And there was every reason why the "analogy" between moral philosophy and political economy should be regularly enforced, for not only did they share a common foundation, a quantifying foundation, but as Albert O. Hirschman has shown, political economy had always been held up by its early advocates as a much-needed complement and corrective to the unstable field of morality. In David Hume and Adam Smith, and even in Montesquieu, economic "interests" were understood to have a sobering effect, useful in restraining and counteracting the dangerous "passions" afoot in the moral universe.[35] Far from being an enemy to morality (a reputation it later acquired), economics at the outset of its career was looked upon very much as the guardian of morality, as the expression of morality in its strictest and most rational form. It was this commensurability that prompted John McVickar to remark, in his Outlines of Political Economy (1825), that "the high principles which this science teaches entitle it to be regarded as the moral instructor of nations."[36]

The Reverend Sewell is in good company, then, when he urges upon the Laphams his moral instructor, his "economy of pain." Such equation of the moral and the economic echoes not only the teachings of Sewell's ecclesiastical forebears, the Reverend Wayland and the Reverend McVickar, it echoes as well the teachings of someone closer to home. Simon Nelson Patten, the Wharton School economist and a contemporary of Howells, was writing about pleasure and pain at almost exactly this time and, like the Reverend Sewell, was also doing a kind of double bookkeeping on these sensory phenomena, counting them up, that is, on two registers at once commensurate and interchangeable: ethics on the one side, economics on the other. A popularizer of "marginal utility analysis"[37] and a pioneering advocate of modern consumerism, Patten gave the utilitarian calculus what amounted to a late-nineteenth-century facelift by turning it into a dual-purpose index, unifying and indeed equalizing both the economics and the morality of consumption. In a series of books published in the 1880s and 1890s, Patten made a case for something like an ethics of spending, arguing that consumer demand was directly translatable into the general well-being of the nation. The private splurges of the consumer led to the common good for all.[38] The effect of that, Patten went on to say (in language strikingly similar to the Reverend Sewell's), was that "we are now in the transition stage from this pain economy to a pleasure economy." "The development of human society has gradually eliminated from the environment the sources of pain," he said. "These changes make a pleasure economy possible and destroy the conditions which made the subjective environment of the old pain economy a necessity."[39]

The open advocacy of a "pleasure economy" by such an economist as Simon Nelson Patten and the confident counsel of an "economy of pain" in such a novel as The Rise of Silas Lapham suggest that the intellectual landscape of the late nineteenth century might be more complicated than we have hitherto assumed. Morton White, who has given us the most influential account of that landscape, has characterized it as a revolt against utilitarianism, a revolt against its "abstraction, deduction, mathematics, and mechanics." As White describes it:

When Dewey first published books on ethics, it was hedonism, and utilitarianism, which he most severely attacked; when Veblen criticized the foundations of classical economics, it was Bentham's calculus of pains and pleasures that he was undermining; when Holmes was advancing his own view of the law, it was the tradition of Bentham that he was fighting against.[40]

We might disagree with White about the three figures he specifically mentions.[41] We might further disagree with his account of a unified revolt—a revolt on one front against one enemy—which, to my mind, does not quite square with the rich brew of philosophical contradictions during this period. White's discretely periodizing model—marked by the clear banishment of utilitarianism—should perhaps be qualified, then, by analytic concepts more fluid and more subtly evolving. My own candidates here are "uneven development" and "imperfect rationalization"—concepts that, especially in this case, would acknowledge both the persistence of certain cognitive categories and their semantic mutations and permutations over time. The evolution of the utilitarian calculus is especially interesting in this context, for the late-nineteenth-century language of pleasure and pain, as articulated by Patten and Howells, was as much a transformation as it was an inheritance from the language of their eighteenth-century precursors. After all, for all his faith in a future of abundance and for all his impatience with "sufferers who have made an art of wretchedness," Patten was deeply troubled by the psychological implications of a world without pain.[42] Noting that "individuals as well as nations show the deteriorating influence of pleasure as soon as they are freed from the restraints of a pain economy," he was almost consoled by the thought that even in a pleasure economy, pain would still remain—not as physical deprivation but as psychic anguish, brought about by the "defective relations which exist between men or between man and nature." Pain of this sort would always be with us, he predicted, for "no change in the ultimate forms of the universe or of man can alter" this salutary affliction.[43] The Reverend Sewell is not so adamant on this point, but even he, for all his sensible optimism, is strangely unconcerned with the greatest happiness of the greatest number. He speaks only of the economy of pain.

If happiness was the key word of the eighteenth century, pain might well be the key word of the nineteenth. Ideas about pain—about its provenance and consequence, its reality and calculability—have of course evolved throughout the course of history. In the nineteenth century, this "changing ethos of injury" was especially striking, as G. Edward White has noted.[44] That changing ethos, I would argue, had much to do with various emerging forms of nineteenth-century rationality, as they achieved various degrees of institutionalization. Earlier that century, a concern with objective adequation had led to an international venture in penal reform, which, rejecting torture as a corrective instrument, had insisted on applying no more than a "just measure of pain" to the criminal, proportional to the crime committed.[45] Further into the century, the same adequating rationality would manifest itself in a range of practices inspired by a newborn utilitarianism, a utilitarianism modernized and reinvigorated.[46] These practices, revolving around the quantification, supervision, and utilization of pain, are most readily observable in three fields institutionalized in the nineteenth century: humanitarianism, modern tort law, and anesthetics. To their company, I would add a fourth field, adjacent to and in dialogue with the others, but, I would also argue, complexly at odds with them, complexly irreducible to them. I am thinking of the realist novel of the nineteenth century, a genre in which pain is documented both with judicious precision and with involuntary obsession, a genre committed to rational solutions but also haunted by their unsuccess.

Rational Benevolence

The new visibility of pain (and the rational solutions it called forth) might be studied in conjunction with a host of historical developments. Here, I begin with the rise of cities, which, especially in the nineteenth century, would confer on human suffering not only the status of a "problem" but, happily for all concerned, the status of a solvable problem.[47] "Why is it, my friends, that we are brought so near to one another in cities?" William Ellery Channing asked in an 1841 sermon. "It is, that nearness should awake sympathy; that multiplying wants should knit us more closely together; that we should understand one another's perils and sufferings; that we should act perpetually on one another for good."[48]

Nineteenth-century cities were scenes of moral action, in which nearness translated into sympathy, perils and sufferings into the desire to do good. Between 1800 and 1880, New York grew from a city of just 60,000 inhabitants to a metropolis of 1,100,000, with 600,000 more living across the East River in Brooklyn.[49] Other cities across the nation experienced similar rates of growth. By 1900, 60 percent of the inhabitants of the nation's twelve largest cities were either foreign-born or of foreign parentage, with the figure approaching and even exceeding 80 percent in some cities, including St. Louis, Cleveland, Detroit, Milwaukee, Chicago, and New York.[50] With their miseries and mysteries of indigence, these new metropolises showcased human suffering as a sign, a symptom, a challenge to the explanatory powers of the new social sciences, and a challenge to the investigatory zeal of the new philanthropists.


Humanitarianism was thus very much an institutional presence in the nineteenth century, an organized experiment in rational benevolence. And most rational of all was the "Charity Organization Movement," which, first launched by S. Humphreys Gurteen in Buffalo in 1877, quickly spread across the country in the 1880s. By the 1890s over 100 cities had charity organization societies, equipped with their own journals (Lend-a-Hand in Boston, Charities Review in New York, Charities Record in Baltimore) and convening once a year in the National Conference of Charities and Corrections.[51] Rejecting the traditional practice of public outdoor relief (which they saw as sentimental and haphazard), these new charities emphasized information gathering, the compilation of dossiers, and moral supervision by middle-class "visitors." In short, for these new-style philanthropists, the point was not simply to do good but to do so efficiently, scientifically, wasting no sentiment and no expense.[52] Josephine Shaw Lowell, founder of the New York City Charity Organization Society in 1882 and its guiding spirit for the next twenty-five years, personified this new development. Lowell distinguished between what she called mindless "benevolence" and vigilant "beneficence." Clearly an advocate of the latter, she argued that the proper goal of philanthropy was not to relieve suffering but rather to economize it, which is to say, to dispense it in such a way as to get the maximum result.[53] What was wrong with traditional almsgiving, she said, was that it was "indiscriminate," that it operated without "the intimate knowledge of the suffering people . . . necessary to all efficient help,"[54] and that, far from minimizing suffering, it actually perpetuated it:

The argument which always has the most weight in favor of continuing public out-door relief is that many deserving poor persons may suffer should it be cut off. It has already been proved by experience, however, that not only many suffer, but all suffer, by the continuance of a system which undermines the character of those it pretends to relieve, and at the same time drags down to their level many who never, but for its false allurements, would have been sufferers at all.[55]

With a kind of accusatory compulsion, then, Lowell called attention, over and over again, to the pain caused by inefficient philanthropy. "Almsgiving and dolegiving are hurtful," she insisted; they are "injurious act[s]"; they not only "injure" the recipients by destroying their character but "are hurtful even to those who do not receive them," because the "moral harm" they propagate is so "infectious" as to afflict the entire community.

Against such multiplication of injuries, the Charity Organization Society offered itself, by contrast, as a veritable economy of pain: it would "bring less suffering to the innocent and less injury to the community."[56] It would do so, however, not by eliminating pain but by instrumentalizing it, which is to say, by inflicting it and utilizing it in the short run in order to minimize it in the long run. On this point, of course, Lowell was simply echoing Herbert Spencer, her guiding spirit in every respect. Spencer had argued that "the well-being of existing humanity" could only be "secured by that same beneficent, though severe discipline[,] . . . a discipline which is pitiless in the working out of good: a felicity-pursuing law which never swerves for the avoidance of partial and temporary suffering." There was such a thing as "salutary suffering," he insisted, and the challenge was to put it to good use.[57] Following his lead, Lowell too never doubted for a moment that humanity "must suffer"; the "only question" was "as to the kind of suffering." On this point she had no doubts either: salutary suffering must be purgative suffering, directed at its cause, for "the process of cure . . . will be painful to all alike." And so, "finding fellow beings in want and suffering, the cause of the want and suffering are [sic] to be removed if possible even if the process be as painful as plucking out an eye or cutting off a limb."[58]

Lowell's emphasis on the cause of suffering—and her determination to eradicate that cause, even when it means plucking out an eye or cutting off a limb—casts an interesting light on a recent debate among historians about causation and humanitarianism, a debate that has ignited the pages of the American Historical Review. What occasioned the debate was an important theoretical essay by Thomas Haskell, "Capitalism and the Origins of the Humanitarian Sensibility." Haskell argues that a particular form of moral sensibility, in this case a capacity for humanitarian action, is most likely to flourish within the causal universe of a particular form of economic life, in this case capitalism. Capitalism rewards those who can think in terms of distant events, who can connect things across space and time, and in doing so it helps to enlarge not only "the range of causal perceptions" but also the range of assumed responsibilities. What capitalism accomplishes is not just an economic revolution but, even more crucially, a cognitive revolution, a drastic broadening of our causal horizons. Out of this revised causality, Haskell suggests, a "new moral universe" is born, where "failing to go to the aid of a suffering stranger might become an unconscionable act." Against the usual assumption that capitalism encourages greed and selfishness, then, Haskell argues that the opposite is just as true. Indeed, according to him, "the emergence of a market-oriented form of life gave rise to new habits of causal attribution that set the stage for humanitarianism."[59]

Haskell's argument is partly designed to outrage his colleagues, but, polemics aside, his remains an intriguing hypothesis. Indeed, we only have to think of the Ford, Rockefeller, and Carnegie Foundations to see that there is in fact a vital link between capitalism and humanitarianism. Haskell's paradigm is invaluable in foregrounding a historical conjunction between a form of economic life and a form of moral action, and in theorizing it as a cognitive conjunction, a shared set of assumptions about time and space, about distance and connectedness, about causation and responsibility. And yet, readers of Sophocles must wonder whether remote causation really did begin with capitalism, whether its prospect and terror might not already have cast a large shadow over, say, Oedipus the King.[60]

Rather than seeing capitalism as the "origin" of humanitarianism, then, it might be more helpful to think about the relation between the two as one of cognitive affinity (not one of inaugural entailment). Both might be seen, that is, as expressions of a rationality with a longer and more complex genealogy: a rationality predicated on the commensurability between the moral and the economic, a rationality that, in this instance, would underwrite not only the moral claims of capitalism but also the economic claims of humanitarianism. In short, what seems most striking to me in the historical conjunction of capitalism and humanitarianism is the extent to which morality in the nineteenth century was always understood to be a cognate of the economic, compatible with and translatable into the most exacting standards of bookkeeping. S. Humphreys Gurteen and Josephine Shaw Lowell, vigilant philanthropists that they were, were no less emphatic about the money-saving virtues of their charities. "To cure paupers and make them self-supporting, however costly the process," Lowell said, "must always be economical as compared with a smaller but constantly increasing and continual outlay for their maintenance."[61] Gurteen, meanwhile, even had statistics at his fingertips. "The saving to the city, in out-door relief alone during the first year of the Society's work," he proudly reported, "amounted, in round numbers, to $48,000, and the average saving, during the past three years, has been somewhat over $50,000 per annum."[62]

This supposed commensurability between the moral and the economic—the supposed ease with which one translated into the other—led not only to the peculiar institutional landscape of the nineteenth century, its many projects of rational benevolence, but also to some of its deepest anxieties and bewilderments. What was one to do when the moral and the economic turned out to be less than equatable, when the translation between them turned out to be less than complete, less than recuperative? These were the questions that would come to haunt the realist novel in a rather conspicuous way, just as, less conspicuously but perhaps no less persistently, they would also come to haunt the precincts of tort law and of social theory. It was not altogether fortuitous, perhaps, that the word "responsibility"—a word with resonances in all three fields, and with both moral and economic implications—would emerge as one of the most deeply conflictual words in the nineteenth century, its scope subject to contrary definition, its boundaries and limits being matters of dispute.

In an 1887 essay called "The Shifting of Responsibility," for example, William Graham Sumner complained bitterly about a new concept of responsibility, which he found "immoral to the very last degree." According to this immoral concept,

the employer becomes responsible for the welfare of the employees. . . . The employee is not held to any new responsibility for the welfare of the employer; the duties are all held to lie on the other side. The employer must assure the employed against the risks of his calling, and even against his own negligence; the employee is not held to assure himself. . . . [H]e is released from responsibility for himself.[63]

Sumner's semantic usage was less than consistent, no doubt because he was choking over the very sound of "responsibility." But his point, at least, was consistent enough. For him, responsibility as a relational concept—responsibility as an obligation to others—was clearly a travesty of the term. The word has only one legitimate usage, a reflexive usage, as in a man's "responsibility for himself."

Such vitriolic outbursts over the definition of responsibility were clearly fueled by more than just a semantic interest. Sumner, as always, did not mince words about what was at stake. He condemned responsibility of the "immoral" sort, he said, because it imposed obligations not only between individual human beings but, more specifically, between different classes of human beings: human beings differently endowed, differently situated, and differently entitled. In a short work with a self-explanatory title, What Social Classes Owe to Each Other (1883), Sumner took it upon himself "to find out whether there is any class in society which lies under the duty and burden of fighting the battles of life for any other class."[64] The answer, to no one's surprise, was a resounding no. No obligations exist, Sumner concluded, and no obligations ought ever to exist, for, as he explained in "The Forgotten Man," his well-known essay published the same year, society works by "the balance of the account," and the "advantage of some is won by an equivalent loss of others." It follows, then, that "if you give a loaf to a pauper," you are in effect "trampling on the Forgotten Man," that "clean, quiet, virtuous, domestic citizen" who is the "victim" of the "idle, the intemperate, the extravagant," who is "weighted down with the support of all the loafers," who is made to pay "the penalty while our minds were full of the drunkards, spendthrifts, gamblers."[65] In short, all social obligations must be seen as unfair impositions, for, according to Sumner, each of us has only "one big duty," namely, "to take care of his or her own self."[66]

Sumner's recommendations here are only to be expected, given his reputation. It would be a mistake, however, to see his teachings solely as an expression of Social Darwinism. The scope of responsibility was a matter of concern to a much more diverse group of commentators and in a much more diverse set of contexts. As early as 1838, Francis Wayland was already publishing a popular text in moral philosophy entitled The Limitations of Human Responsibility. Wayland argued that whereas "our responsibility for the temper of mind is unlimited and universal, our responsibility for the outward act is limited and special."[67] In other words, as far as our intentions are concerned, unlimited responsibility applies; as far as our actions are concerned, however, that responsibility must be set within judicious boundaries.

Wayland was writing in the context of slavery, which for the North made the question of moral responsibility especially thorny. If slavery was indeed an abomination, as most Northerners believed, wasn't one morally obligated to put an end to it? And wouldn't that obligation commit one to abolition, civil war, perhaps even the dissolution of the union? Wayland's answer was exquisitely tempered. "Granting all that may be said of the moral evil of this institution," he reasoned, "the question still remains to be decided, what is our duty in respect to it; and, what are the limitations." It was those "limitations" that he proposed to dwell on. In the end, the only "practical duty" he counseled was that of moral suasion, which is to say, the duty of imparting to our Southern "friends and acquaintances . . . the truth which we believe to be conducive to their happiness."[68] Wayland clearly had a genius for moderation. Even so, that did not prevent his book or its predecessor, The Elements of Moral Science (1835), from stirring up a storm of controversy. The sales figures for both books were "phenomenal," according to Joseph Blau.[69] Moral responsibility was a best-selling topic in the nineteenth century.

Tort Law

The topical interest of moral responsibility did not end with the Civil War, for human suffering was to remain a visible problem all through the nineteenth century. Indeed, beginning in the 1840s, a whole new arena of suffering would open up, fueled by the pace of industrial expansion and the rate of industrial accidents. Injury was an accepted hazard of early industrialization, its liability cost being computed as a standard operational cost.[70] Not surprisingly, it was during this period that a newly consolidated legal field should come into being, drastically expanding its domain to cope with the drastically escalating cases of civil responsibility for injury. Modern tort law (popularly known as "injury law") was very much a creation of the nineteenth century. And among the many injurers brought before it, none was guiltier than the railroad. Still primitive in their safety features, operating without the benefit of the air brake, the early railroads, according to Lawrence Friedman, behaved like "wild beasts; they roared through the countryside, killing livestock, setting fires to houses and crops, smashing wagons at grade crossings, mangling passengers and freight. Boilers exploded; trains hurtled off tracks; bridges collapsed; locomotives collided in a grinding scream of steel."[71]

The nineteenth-century railroad might be said to carry an economy of pain of its own—quite literally so, since judges were increasingly called upon to award money damages for bodily injuries, and thus to work out a numerical equivalent not only for the experience of physical harm but also for the scope of moral responsibility. Here then was a unique juncture of the moral and the economic,[72] one whose resolution would have profound consequences for both legal and nonlegal thinking. What the court adopted was the doctrine of "negligence" (as opposed to the doctrine of strict liability), which made the relation between injury and compensation a mediate relation, contingent upon demonstrable fault on the part of the injurer.[73] In Farwell v. Boston & Worcester Railroad Corporation (1842), a landmark case decided by Chief Justice Lemuel Shaw of Massachusetts (better known to literary critics as Melville's father-in-law), a railroad engineer, who had lost his right hand in an accident caused by a switchman, sued the railroad for damages. His claim was rejected. The court ruled that since the engineer had voluntarily taken on a dangerous job, he must also be held to have assumed the ordinary risks of that job. His higher than usual wages had already adjusted for the higher than usual hazard, and any injuries sustained must be considered already compensated for.

The legal reasoning exhibited in this case (and in the tort law that came after it) was a style of reasoning very much predicated on a notion of reflexive compensation and reflexive equilibrium. Even though the employee was hurt, his wages had already rectified that hurt, and so everything ended up being balanced out. This compensatory equilibrium quickly became a standard premise of legal reasoning, and, by the beginning of the Gilded Age, the general drift of the new tort law was unmistakable. Its central features—the fault principle, assumption of risk, contributory negligence, the fellow-servant rule—all helped to limit the grounds for redress and hence the scope of entrepreneurial responsibility. Liability was unquestionably a critical issue in late-nineteenth-century legal thinking, so critical, in fact, that when Oliver Wendell Holmes delivered his famous lectures at the Lowell Institute—collected into his equally famous The Common Law (1881)—he was inspired to devote his first lecture to "Early Forms of Liability," followed by others on criminal liability, tort liability, and contractual liability.[74] For Holmes, too, the notion of "universal and unlimited responsibility" was infinitely troubling, because it would "make a defendant responsible for all damage, however remote, of which his act might be called the cause." If such a concept were to prevail, the state would have to act like "a mutual insurance company against accidents, and distribute the burden of its citizens' mishaps among all its members." This was clearly unacceptable to Holmes; it would "offend the sense of justice." To guard against this eventuality, it was the business of tort law "to fix the dividing line" between what was actionable negligence and what was not, and to ensure that the "loss from accident must lie where it falls."[75]

Austerely just, and austerely economical, the modern tort law is now seen by some of its theorists as a "mirror of American society, held up to some of its most difficult moments of private conflict."[76] If so, what that mirror revealed in the late nineteenth century was not only the frequency of injury but also the seeming ability of legal reasoning to localize its effects, to restrict its claims, and to define legal responsibility without moral ambiguities. The emergence of the modern tort law might be seen, in this context, as an especially important moment in the history of American law, consonant with that broad process whose stated goal, in the words of Holmes, was to shape the "law as a business with well understood limits," to "emphasize the difference between law and morals," and to keep "the boundary constantly before our minds."[77] What is further clear is that, in that proposed parting of ways between the legal and the moral, it was the economic that was consistently enlisted as the instrument of separation. The economic, that is to say, was now appealed to as something more basic than morality, subsuming it and replacing it as the cognitive foundation for the law. Economics, and economics alone, would now furnish the rational ground for legal action.

It is logical, then, that in our own time, exponents of Law and Economics—notably Richard Posner—would be quick to commend Holmes on just this point and quick to endorse nineteenth-century tort principles on the ground of efficiency.[78] And yet, to read Law and Economics against Holmes (or against contemporaries of Holmes who also wrote on liability, including Charles Peirce and Nicholas St. John Green)[79] is to be struck by the enormous distance, intellectual as well as stylistic, between the nineteenth-century theorists and their twentieth-century successors. For Holmes, Peirce, and Green, liability was primarily a philosophical (or perhaps even metaphysical) problem. The technicalities of tort law were anchored and broadened, always, within a discourse as wide-ranging as it was conceptually intricate. For them, what had to be circumscribed through a focus on tort liability was the idea of causation itself, for here as elsewhere, this vexed concept, with its radically enlarged operative radius, proved to be both intriguing and intolerable. Indeed, the nineteenth-century debate on tort liability quickly became a debate over what Morton Horwitz has called the "politics of causation."[80]

In a series of important cases, including Stone v. Boston and Albany Railroad Co. (1898) and Central of Georgia Railway Co. v. Price (1898), the court decided that "the proximate, and not the remote, cause is to be regarded" as responsible for damage and that "a proximate cause must be that which immediately precedes and produces the effect, as distinguished from the remote, mediate, or predisposing cause."[81] Proximate cause—the narrowest range of causal attribution—must be upheld, for it alone could provide an adequate safeguard against the specter of unlimited claims, not only in specific cases of industrial accidents but more generally in any distributive situation: any situation, that is, where a dispute might arise about the proper allocation of pleasures and pains, burdens and benefits. Theories of causation are, at heart, social theories on a grand scale, with broad implications for distributive justice and broad conclusions about the legitimacy of a particular social order. Nineteenth-century theorists of causation were very much aware of this. Indeed, for Francis Wharton, the influential treatise writer, the seemingly arcane question of "whether a railroad company is to be liable for all fires of which its locomotives are the occasion" turned out to be the central question for "the industrial interests of the land," so central that what hinged upon it was nothing other than the life of capitalism itself.[82]

Wharton reprimanded those who acted on the assumption that "when we are seeking for a responsible cause, we are allowed to go back until we hit, in the line of antecedents, upon wealth that is without immediate friends." Such a mistake is all too common, he said, because we "are accustomed to look with apathy at the ruin of great corporations, and to say, 'Well enough, they have no souls, they can bear it without pain, for they have nothing in them by which pain can be felt.'" For Wharton, this was not at all the right way to think about causation, or about corporations, or about pain. It would lead to "communism," which "makes wealth the basis of liability."[83] It would encourage us to blame our sufferings on an unlimited range of causal antecedents and to say:

"Here is a capitalist among these antecedents; he shall be forced to pay." The capitalist, therefore, becomes liable for all disasters of which he is in any sense the condition, and the fact that he thus is held liable, multiplies these disasters. Men become prudent and diligent by the consciousness that they will be made to suffer if they are not prudent and diligent. If they know they will not be made to suffer for their neglects; if they know that though the true cause of a disaster, they will be passed over in order to reach the capitalist who is a remote condition, then they will cease to be prudent.[84]

For Wharton, pain, in all its undesirability, was nonetheless an economic resource and as such must be instrumentally distributed. It must be distributed, that is, as a corrective mechanism, a hard lesson to those who needed it. In the case of industrial accidents, this lesson must fall on those who were "mediately or immediately employed,"[85] those who were injured by the accidents, and who, for that reason, must also be designated the cause of those accidents. Wharton thus insisted that "one of the chief offices of society was to discriminate between the antecedents by which an event is conditioned," to "single out one only of the antecedents under the denomination of cause, calling the others merely conditions."[86] That being done, the employer, as a "remote condition," would be excused from the scene, leaving behind only the workers, the real causes of the disaster and therefore the rightful sufferers—which, for Wharton, was all to the good. Pain would teach the lesson of prudence.

In localizing the distribution of suffering, Wharton offered one way to think about time and space, about distance and nearness, about causation and responsibility. In converting suffering into a resource, a usable resource, he offered as well an example (perhaps the most striking we have seen thus far) of the rationalization of pain: a rationalization openly economic, grounded in the trade-off between the brute fact of suffering and the moral it could be counted on to deliver. Nineteenth-century tort law thus stood as one of the boldest experiments in commensurability, one of the boldest attempts to create a symmetrical order out of its designated problem and solution. It did so, we might add, primarily by instrumentalizing pain, turning it into what Spencer would call "salutary suffering." Chastened by it, workers would become so prudent as to make any further suffering unlikely. The cause of pain and the effect of pain were understood, then, both to emanate from and to descend upon the same party. Mutually entailed and inversely corresponding, they would work to neutralize and cancel out each other. Tort law thus brought about an adaptative


equilibrium in the workforce, even as it achieved an operative equilibrium within itself, by a method of damage control that contained the damage within the narrowest possible compass. The problem of pain was not at all a problem here: not a problem, because it could be counted on to take care of itself, to work toward its own cure and its own end.

Functional Adaptation

For Wharton specifically, as for tort law more generally, suffering thus carried with it a rationality of its own, a rationality that made it necessary, useful, and, in the end, happily self-regulating. This was the hope, I have argued, of the philanthropists, and it was also the hope, I would further argue, of an entire age fascinated by the phenomena of pleasure and pain and predisposed to see an instrumental reason behind their occurrence. Pleasure and pain were understood, that is, to be purposive phenomena: they came into the world for a certain reason, they functioned in a certain way, and they produced a certain result. For Herbert Spencer, for example, it was through the agency of pleasure and pain that human evolution could take place at all. If it were not for our sentience, our ability to register those hard lessons taught by the environment, we would not have been made to adapt functionally to that environment.

Since pleasure and pain were instruments of adaptation, since there were evolutionary reasons for them to be felt in a particular way, Spencer also argued that human sentience must vary—vary in degrees of acuity as well as thresholds of susceptibility—in different environments, which is to say, it must vary from one human population to another. "There is no kind of activity which will not become a source of pleasure if continued," he said, for, by the doctrine of evolution, "there will be evolved, in adaptation to any new sets of conditions that may be established, appropriate structures of which the functions will yield their respective gratifications." And so "the common assumption that equal bodily injuries excite equal pains" could not be more of "a mistake." Indeed, "after contemplating the wide divergences of sentiency accompanying the wide divergences of organization which evolution in general has brought about," one cannot doubt "the divergences of sentiency to be expected from the further evolution of humanity."[87]


The medical profession, staking its faith on this "divergence of sentiency," was quick to develop a procedure rejecting "the common assumption that equal bodily injuries excite equal pains." Between 1840 and 1880, it became common for doctors to administer anesthetics to some patients and not to others, not only because they believed that "different types of people differed in their sensitivity to pain" but also because they believed there was a reason behind this difference, a reason medical science ought to respect. Doctors seemed to be guided by "a calculus of suffering," the historian Martin Pernick has observed, a calculus which, in making pain a measure of the functional disparities among human beings, and therefore a measure of the functional rationality of the world, obviously had "implications reach[ing] far beyond anesthesia."[88]

And indeed, Spencer was by no means the first to see in pain the differential effects of adaptation. Throughout much of the nineteenth century, medical doctors experimented with the idea that pain was a variable phenomenon, that those who were refined and civilized were also more susceptible. As early as 1806, Thomas Trotter, surgeon to the British fleet, began to worry that the march of civilization would result in a "general effeminacy," since it "never fail[s] to induce a delicacy of feeling, that disposes alike to more acute pain, as to more exquisite pleasure."[89] Other physicians, sharing his concern, lamented that "civilized life" had sharpened the sensitivity to pain, and that childbirth had become "exceedingly painful[,] . . . especially in the upper walks of life."[90] As late as 1892, Dr. S. Weir Mitchell, the founder of American neurology, would make the same argument, in an essay entitled "Civilization and Pain." "In our process of being civilized we have won an intensified capacity to suffer," he wrote. "The savage does not feel pain as we do."[91]

The "savage [who] does not feel pain" included Indians, who, according to Benjamin Rush (the most prominent physician of the late eighteenth century), could "inure themselves to burning part of their bodies with fire."[92] It also included blacks, who, having a "greater insensibility to pain," could "submit to and bear the infliction of the rod with a surprising degree of resignation, and even cheerfulness."[93] But cheerfulness in the face of bodily affliction was by no means limited to these two groups of savages, hardened by experience into insentience. "Savagery" was a remarkably elastic category in the nineteenth century—it was understood to exist, for instance, also in urban slums, whose population, according to Horace Mann, was rapidly "falling back into the conditions of half-barbarous or of savage life."[94] These born-again savages, adapting functionally to their new environment, were also now becoming immunized to pain, much in the manner of Indians and blacks—a fact attested to by John William De Forest, a wealthy Connecticut citizen and an occasional social commentator. "We waste unnecessary sympathy on poor people," he said. "A man is not necessarily wretched because he is cold & hungry and unsheltered; provided these circumstances usually attend him, he gets along very well with them."[95]

By a feat of adaptation, insensitivity to pain turned out to be proportional to the incidence of pain. Those who had the most to suffer were least hurt by it. The functional correlation between pain and insentience thus became for De Forest the ultimate proof of the rationality of the world, for here too injury was dispensed with minimal damage, dispensed, that is, literally as an economy, the frequency of pain being compensated by a corresponding immunity. Such faith in a compensatory structure might sound like the sentiment of some arch-antihumanitarian, but (and the point is worth emphasizing) that was precisely what De Forest was not. A Civil War veteran and an agent of the Freedmen's Bureau in Greenville, South Carolina, from 1866 to 1867, he had credentials that revealed quite a different profile, not at all what one would expect from his seemingly callous statement about the painlessness of pain.

The paradox deepens when we turn from John William De Forest to Lydia Maria Child. A staunch abolitionist, loyal friend of Harriet Jacobs, and the author not only of the popular Appeal in Favor of That Class of Americans Called Africans (1833) but also of the controversial Hobomok (1824, with its daring depiction of an interracial marriage between an Indian man and a white woman), Child would seem to personify the very spirit of nineteenth-century humanitarianism. Yet she too believed in a differential scale in pain, seeing a functional correlation between the necessary hardships of blacks and their necessary insentience. For her, it was a "merciful arrangement of Divine Providence, by which the acuteness of sensibility is lessened when it becomes merely a source of suffering."[96] Like De Forest, then, Child also envisioned a principle of compensatory equilibrium in every living organism. Indeed, her abolitionism rested on just this point. For her, the slaves' insentience was evidence in itself, proving that slavery was atrocious and that abolition was imperative.

It might seem odd that those who made pain a political issue should also argue for its endurability among the habitually afflicted. But the oddity here is perhaps no more than a dramatic example of the adequating rationality we have been examining, a rationality that imaged forth the world as a commensurate order, so that problem and solution were not only reflexively generated but also instrumentally corresponding. Adaptation to pain turned out to be nature's solution to the problem of pain. It was Herbert Spencer, of course, who gave this functional rationality its grandest expression, making it the centerpiece of his "Doctrine of Evolution," which saw adaptation itself as a cosmic drive toward compensatory equilibrium, toward a "conciliation of individual natures with social requirements," so that at the end of the process "pleasure will eventually accompany every mode of action demanded by social conditions."[97] Spencer found such an adaptive equilibrium "in the balanced functions of organic bodies that have reached their adult forms, and in the acting and reacting processes of fully-developed societies," both of which were "characterized by compensating oscillations." Indeed, according to him, "the evolution of every aggregate must go on until" all imbalances are eliminated, for "an excess of force which the aggregate possesses in any direction, must eventually be expended in overcoming resistances to change in that direction: leaving behind only those movements which compensate each other, and so form a moving equilibrium."[98] It was this compensatory structure that made it possible for Spencer to speak of a "rational ethics" in which "men themselves are answerable" to themselves, justice here being simply a reflexive equilibrium, simply "a definite balance, achieved by measure."[99]

In making "compensation" and "equilibrium" the overarching terms under which pain is both instrumentalized and neutralized, Spencer dramatizes the functionalist logic of the nineteenth century, a logic at work, as we have seen, not only in tort law, perhaps its most salient expression, but also in the practice of selective anesthesia, and in the rational beneficence of the new-style philanthropists. It is against that logic, against its claim to being a universal form of reason, that I want to bring to the foreground a different cognitive style, a different way of thinking about pain. Here, over and against any proposed solution, any proposed equilibrium, there remains the untidy fact of residues: residues unutilized, uncompensated, unspoken for. The phenomenon I have in mind is something like "incomplete rationalization,"[100] a phenomenon I associate most especially with the realist novel of the nineteenth century. Committed as it is to a dream of commensurate order, the novel is also haunted, fleetingly but also quite routinely, by the obverse of such a dream: by the failure of the world to conduct itself symmetrically, its failure to resolve itself into a perfect fit, a self-regulating circuit of pain and compensation.

A Cognitive History of the Novel

Incomplete rationalization—understood as an analytic postulate about structural nontotality, about what is not integrated, not instrumentalized—thus seems to me to be one of the most fruitful ways to think about the form of the novel, about its messiness, its ample proportions, and sometimes its lack of proportions. In chapter 1, I examined the signifying latitude in the novel, generated by its figurations, a latitude I see as residual especially in relation to the increasingly strict constructions in criminal law. In this chapter, I examine the novel's narrative latitude, generated by its complex plot, a latitude residual perhaps along a different axis: residual in relation to the instrumental reason of the nineteenth century. Against the latter's compensatory equilibrium, it is hard not to be struck by the imbalanced form of the novel. There is no symmetry of resolution here, and no symmetry of compensation. That lack of symmetry is most striking when the novel interweaves, as it so often does, a narrative epistemology about the bounds of time and space with a moral epistemology about the bounds of causation and responsibility. This interlocking epistemology—with its ethical imperatives and embarrassments—calls for a form of criticism attentive to the cognitive mapping[101] of the fictional domain, attentive, that is, to the landscape generated by the narrative sequence and associative radius, the length of antecedence and breadth of concurrence. The designation of a length of time understood as meaningful duration and the designation of a width of space understood as meaningful vicinity—these are matters not only of the novel's form but also of its vexed relation to the prevailing rationality of the nineteenth century. I call this a cognitive history of the novel.

From The Pioneers (1823, in which the revelation of a past secret restores Oliver Effingham to his rightful estate), to The Blithedale Romance (1852, in which a similar revelation disinherits Zenobia), to Pierre (1852, in which yet another revelation literally destroys everyone), the American novel of the mid-nineteenth century might be called the novel of remote causation. In its richly involved (and sometimes richly improbable) plots, in its far-flung attribution of cause and consequence, it gives voice to a deep fascination, and perhaps a deep discomfort, with the bounds of pertinent time and pertinent space, with the range of human connectedness, and with the scope of assumable responsibility.

Howells himself would write explicitly about this problem in The Minister's Charge (1886), published just one year after The Rise of Silas Lapham. In this book, it is once again the Reverend Sewell who is made to deliver its central statement. "Everybody's mixed up with everybody else," he observes with admirable succinctness, in a sermon entitled "Complicity."[102] Complicity, the condition of being all mixed up, is indeed an inescapable fact in Howells and in virtually all realist fiction. The ethical entanglements it creates—and the imperfect disentanglements that follow—dramatize not only the need for rational order in the world but also a sharp and sharply unsettling sense of where such an order might not suffice. A cognitive history of the novel, then, might want to focus on those very lapses in its instrumental logic, those very qualities of dissatisfaction and inefficacy which afflict its reasoning. And to the extent that these afflictions are seen to be especially endemic in the novel, this cognitive history will imagine human reason itself not as a unified principle but as a field of uneven development, giving rise to different domains of thought, different shapes of causation and compensation, different shapes of pertinence and answerability.

Paradoxically, then, to approach the novel as a cognitive phenomenon is to destabilize the very idea of cognition itself. It is to acknowledge, within the seemingly integral idea of "reason," something like a constitutive ground for incommensurability.[103] Nineteenth-century humanitarianism[104] and tort law are "cognitive associates" for the novel, then, not only in the sense that they jointly inhabit a universe of thought but also in the sense that they jointly attest to the differentiations within that universe. Even the most ordinary cognitive coordinates—for example, the length and breadth of pertinent connections, longer and broader in some domains than in others, and longest and broadest of all in the novel—will suggest to us some intriguing lines of inquiry both about the contrary claims of reason and about its contrary institutionalizations over time. "Incomplete rationalization" thus seems to me one of the most helpful concepts, both to think about the imperfect integration of a literary text and to think about the unconcluded dynamics of historical process. Confronting us with what is imperfectly aligned, imperfectly adapted, imperfectly utilized, such a concept restores every naturalized given to a state of underdeterminacy, in which the nontotality of effect also marks the limits of instrumental reason itself.[105]

It is the limits of instrumental reason—its inability to resolve the world in its own image, its inability to translate the world into a functional blueprint—which suggest that human history is perhaps also not a story of functional integration but a story considerably less streamlined, a story of losses unrecovered and residues unassimilated. By the same token, the literary text, too, is not a perfectly working unit, not a feat of engineering, but something less efficient, less goal oriented, less instrumentally assignable, and, because of that, perhaps also less exhausted by its rational purpose, its strategic end.[106] Proceeding from this premise of "incomplete rationalization," what I hope to explore, then, is not a "logic" of the novel,[107] but something like its obverse, an illogic, which is to say, a lapse in its ability to instrumentalize its narrative universe, to make that universe serve one particular end. From the standpoint of practical criticism, what this suggests is a retreat from the functionalist premise which has long dominated our thinking about the novel and which, in its current emphasis on the novel as "cultural work," would seem to align it unproblematically with the reign of instrumental reason. Qualifying that premise, we might want to think of the novel instead as something less seamlessly at work, less seamlessly integrated, something not necessarily unifiable under the category of "function," something that might suggest a limit to that concept.

Radius of Pertinence

That lapse in seamlessness is perhaps most noticeable in the novel's radius of pertinence, which, linking each episode to yet another antecedent or adjacent episode, making it contingent upon yet another eventuality, must attest at every turn to the infinitude of causal horizons and the infinitude of perceptual limits. More than any other American author, William Dean Howells champions the realist novel, and champions it for just that radius of pertinence. The novel has the ability to give the world its broadest representation, to "widen the bounds of sympathy." Nothing is extraneous for the realist: "In life he finds nothing insignificant. . . . He cannot look upon human life and declare this thing or that thing unworthy of notice."[108] But the realist novel is not only inclusive, it also dwells on the connectedness among all that it sees fit to include. Giving primacy to those threads "which unite rather than sever humanity," it everywhere proclaims the "equality of things and the unity of men."[109] Howells himself was so taken with the idea that he even suggested that, as a writer, he was "merely a working-man," "allied to the great mass of wage-workers." He urged other writers to forge the same alliance, "to feel the tie that binds us to all the toilers of the shop and field, not as a galling chain, but as a mystic bond."[110]

It was these mystic bonds that made for the abiding sympathies (and the abiding sense of guilt) in a novel like A Hazard of New Fortunes (1890). And it was the same mystic bonds that, just three years earlier, had prompted Howells (virtually alone among his generation of writers) to come to the defense of the Haymarket anarchists, whose trial and conviction he protested not as an abstract problem of justice but as a matter deeply affecting to himself. "The justice or injustice of their sentence was not before the highest tribunal of our law, and unhappily could not be got there. That question must remain for history, which judges the judgment of courts," Howells warned.[111] Meanwhile, "for many weeks, for months," he had been living with a "heavy heart," for the "impending tragedy" of the anarchists had not "been for one hour out of my waking thoughts; it is the last thing when I lie down, and the first thing when I rise up; it blackens my life."[112]

Howells's sense of human connectedness has been accepted as one of the central attributes of the realist novel.[113] But it is important, too, I think, to treat that attribute not simply as an isolated phenomenon but as one cognitive style, one among others, fashioned in a cultural environment in which the question of pertinence is almost always linked to the question of responsibility. Within that context, the novel must be seen as an exceptionally intriguing document, in that its temporal and spatial boundaries are so amorphous, its connections so thick and intricate, its threshold of extraneousness so nearly nonexisting. The boundaries of the novel are everywhere expandable, and, in placing the scope of responsibility within those uncertain boundaries, it offers a striking counterpoint to the "rational ethics" so endlessly celebrated in the nineteenth century, upsetting its forms of resolution and grounds for redress.

In The Rise of Silas Lapham, the radius of pertinence is so broad as to appear at times to be outside the bounds of plausibility. The book is what Henry James would definitely call a "loose baggy monster." And as loose baggy monsters go, this one is worse than most—primarily because of two subplots, one only marginally related to the main story and the other apparently not related at all.[114] The first has to do with Milton K. Rogers, a former partner of Lapham's, squeezed out by him when the business began to prosper, who returns to feed on his guilt and to borrow money from him. This borrowing helps to bring about Lapham's downfall, and Rogers in that regard has something to contribute to the plot. His contribution is not strictly necessary, however, since bad investments alone could have ruined Lapham, and the plot hardly depends on this extra help. Even so, Rogers is more integral to the story than Miss Dewey, a typist in Lapham's office and the center of the other subplot. Her only contribution to the story is to provoke Mrs. Lapham into a fit of unfounded jealousy, but otherwise she seems completely superfluous.

In their semidetached state, Rogers and Miss Dewey would seem to confirm our usual view of the novel as an unruly concoction of plot and subplot, intrigues and entanglements. Here, I want to propose a somewhat different account of this phenomenon, beginning with a conception of the plot and subplot as competing lengths of pertinent time, competing widths of pertinent space. The thickly multiplying subplots, in their unwieldy, unwarranted extension, mark the furthest reach of the novel's causal radius, its most thoroughgoing adventure in connectedness. And since that adventure is ultimately not only far-flung but also far-fetched, each subplot represents as well something of an epistemological crisis, which the main plot must try to rectify, contain, counteract. On this view, plot and subplot would seem to be related, not in thematic collaboration but in cognitive contestation. The novel is thus internally divided, propelled by conflicting narrative coordinates, conflicting grounds of intelligibility, conflicting senses of the extraneous. If it ever achieves a structural equilibrium, that is not so much an effortless given as a precarious effect, a generic crisis confronted and averted. And where that equilibrium falls short, that too is not so much an aberration as a constitutive failing, a return to what appears to be a generic disposition, a return to the claims of the incommensurate.

The tenuous ties linking Rogers and Miss Dewey are important, then, precisely because they are tenuous, because they define a radius of pertinence so wide as to be virtually untenable. In such a world of causal infinitude, human responsibility becomes infinitely problematic. Is Lapham still responsible for the fate of Rogers, after all these years? How long should he keep on making amends, and how far must he go? That is the very question Mrs. Lapham asks, and her answer is unequivocal. "I want you should ask yourself," she urges her husband, "whether Rogers would ever have gone wrong, or got into these ways of his, if it hadn't been for your forcing him out of the business when you did. I want you should think whether you're not responsible for everything he's done since" (262).

Mrs. Lapham has "a woman's passion for fixing responsibility" (277), Howells tells us, and she certainly seems to be indulging it on this occasion. Still, her passion turns out to be not uniform but strategic and sporadic. She has no desire to "fix responsibility," for instance, when the responsibility involves taking care of the widow and child of a dead army buddy. In fact, she is as vehemently opposed, on this point, as she is vehemently insistent on the other. "One of the things she had to fight [Lapham] about was that idea of his that he was bound to take care of Jim Millon's worthless wife and her child because Millon had got the bullet that was meant for him" (340). As far as she is concerned, this is just "willful, wrong-headed kind-heartedness" (341) on Lapham's part, for he has no moral responsibility to speak of in this case and no reason to "look after a couple of worthless women who had no earthly claim on him" (362). Fight as she does, however, she cannot "beat [the idea] out of" his stubborn head, because, on this occasion at least, Lapham is committed to a wider causal circumference than her own. Seeing himself as the cause of Jim Millon's death, he voluntarily puts himself under what seems to be a lifelong obligation toward the dead man's family. That is why Miss Dewey is in his office to begin with: she is Jim Millon's daughter, and Lapham feels "bound to take care of" her and her mother. The subplot revolving around the typist, then, turns out to be exactly analogous to the one revolving around Rogers. In both cases, a distant event generates a network of complications and entanglements, giving rise to a universe of ever-receding and ever-expanding causation: a universe of unlimited pertinence and unlimited responsibility.

It is the unlimited responsibility, of course, that precipitates Lapham's downfall. The causal universe he inhabits is not only fatally expansive but also fatally expensive. Moral obligations have a way of becoming financial liabilities here, because both Rogers and Miss Dewey (as well as her mother) use their moral claims to exact money: a fact disturbing not only in its own right, but also in the havoc that it wreaks on the very principle of commensurability which elsewhere had seemed so reassuring to the jurists and philanthropists of the nineteenth century. Lapham is speaking both too prophetically and too soon, then, when he says, "I don't think I ever did Rogers any wrong . . . but if I did do it—if I did—I'm willing to call it square, if I never see a cent of my money back again" (132). The money that he will "never see a cent of back again" turns out to be the sum total of his fortune, for Milton K. Rogers, Lapham later discovers, has a way of "let[ting] me in for this thing, and that thing, and [has] bled me every time" (274).

What is striking here, in Howells's narrative universe, a universe of infinite antecedence, is the degree to which the moral and the economic are not commensurate—or rather, are commensurate only in the most ironic sense, only through the cruellest inversion of their joint agency. Structured as it is by an almost boundless radius of pertinence, Lapham's moral universe is economically disastrous for just that reason. Far from being self-regulating, self-compensating, it has no rational checks and balances to speak of, nothing to save it from utter collapse, utter disaster. Lapham himself suggests as much. All his trouble began with "Rogers in the first place," he says. "It was just like starting a row of bricks. I tried to catch up, and stop 'em from going, but they all tumbled, one after another. It wasn't in the nature of things that they could be stopped till the last brick went" (364).

A morality given over to excess—a radius of pertinence extended beyond prudential limits—can lead only to the worst case predicted by domino theory. But if so, the very nature of the problem already suggests a solution of sorts. For if the trouble here is a morality gone awry, a morality at odds with its economic foundation, the logical thing to do would simply be to repair that breach, to reconstitute the moral domain once again as a balanced proposition, a self-regulating circuit of gains and losses. It is here, I think, that the category of moral "character" is especially important—important to the novel's attempt to harness its erring morality, to restore it to the fold of the economic—for the "rise" of Silas Lapham, the moral ascent which the novel advertises, is of course purchased by the corresponding financial downfall he is made to endure. The very category of "character," in other words, is based on a kind of internal bookkeeping, which in effect transforms the radius of pertinence into a circumscribed radius, turning Lapham's life itself into a compensatory structure, a trade-off between suffering and edification.

From this perspective, Lapham's beginnings—the assets that initially grace his person—are especially worthy of notice. And "assets" is the right word because, in the first part of the book, Lapham is noticeably well endowed: endowed with bodily parts that are not only conspicuous but downright obtrusive. Over and over again, we hear about his "bulk" (4), his "huge foot" (3), his "No. 10 boots" (6). He is in the habit of "pound[ing] with his great hairy fist" (3), and, instead of closing the door with his hands, he uses "his huge foot" (3). When he talks to Bartley Hubbard, he puts "his huge foot close to Bartley's thigh" (14). Lapham's body is prominently on display in the opening scene, and in the succeeding chapters we continue to hear about his "hairy paws" (84), his "ponderous fore-arms" (202), and his "large fists hang[ing] down . . . like canvased hams" (188).

In short, Lapham comes with a body, a body grossly physical and grossly animal, and that is the sum and measure of who he is. Such a body, not surprisingly, is often linked with his failures to "rise"—failures first literal and then not so literal. When Bartley Hubbard shows up at the office, for instance, Lapham "did not rise from the desk at which he was writing, but he gave Bartley his left hand for welcome, and he rolled his large head in the direction of a vacant chair" (3). Similarly, when he needs to close the door, he does not rise, but "put[s] out his huge foot" to push it shut (4). So far, Lapham's failure to rise is literally just that: he does not get up from his chair, his body stays put. Things become more worrisome, however, when this bodily inertia becomes metaphorical: Lapham's head, we are told, rests on "a short neck, which does not trouble itself to rise far" (4). Some tyranny of physique seems to be keeping him down, and perhaps it is only to be expected that he should have failed to rise on another occasion as well, when it would have behooved his moral character to do so. Years ago, when he had to decide whether to keep Rogers on or to force him out, Lapham found that he could not "choose the ideal, the unselfish part in such an exigency," he "could not rise to it" (50). This is a fatal mistake, of course, although with a body like his, it is all but a foregone conclusion.

But Lapham does eventually rise and indeed is destined to do so, as the title promises. Between the unrisen Lapham at the beginning of the book and the risen Lapham at the end, some momentous change has taken place. Or perhaps we should say momentous exchange, for Lapham is able to rise only insofar as he is destined to fall in equal measure, only insofar as his fictive career narrativizes a principle of economy into an edifying trajectory. What he possesses at the end is no longer the bodily vitality he once flaunted, but rather "a sort of pensive dignity that . . . sometimes comes to such natures after long sickness, when the animal strength has been taxed and lowered" (349). In short, a gain and a loss seem to have occurred somewhere, "taxing" Lapham's animal strength to raise his moral capital. Suffering ennobles then precisely because, by virtue of the loss that it entails, it is able to bring about a new ratio in one's composition of character, a new balance of attributes, and a gain commensurate with the loss.

Here, then, is Howells's attempt at a compensatory equilibrium, one that puts the realist novel directly in the company of Herbert Spencer and directly in the company of nineteenth-century humanitarianism and tort law. Like these advocates of a functionalist ethics, the novel too tries to imagine a morality within bounds: a morality commensurate with economic reason, a morality premised on internal regulation and internal adequacy. Such a morality is, of course, the sine qua non of a twentieth-century theorist such as Richard Posner. In The Rise of Silas Lapham, it is the Reverend Sewell who is its chief advocate and who, in proposing an "economy of pain," would seem to be gesturing toward just such a rationalized universe, in which every misfortune carries its organic benefit and every suffering its organic anodyne.

And yet—such is the radius of pertinence in the novel, and such its messy complications—it is not Lapham, after all, who would furnish the human illustration to this economy, nor is he even the occasion for its pronouncement. Sewell, in his recommendation, actually has in mind an entirely different problem: not the economy of a pain that compensates for itself, but the economy of a pain that must be rationed out, a pain that, because it must fall on some particular person, must turn every act of distribution into a crisis of allocation. What Sewell is doing, after all, is to try to single out one recipient of pain, when there are three equally eligible candidates: Tom, Penelope, and Irene. It is this sense of economy—economy as differential distribution rather than adequate compensation—that would emerge as the obsessive concern not only of The Rise of Silas Lapham but of the realist novel as a genre. Since there is no possibility (and indeed no pretense) that this distribution would ever be fully self-justifying, fully equitable to all concerned, "satisfaction" as a novelistic category is not so much realized as it is ironized, not so much affirmed as renounced. And to the extent that there remains a residue—a character uncompensated, an injury unaccounted for—the novel would seem to have smuggled, into the very heart of its crisis of allocation, something like a generic question mark, a generic ground for disagreement. For all its deference toward a rational order that settles everything and amends everything, the novel's narrative medium is not quite a neutralizing agency, not quite an all-purpose solvent. And so, even though the Reverend Sewell is emphatic about the "economy of pain," even though he is emphatic that "one suffer instead of three," the novel nonetheless finds itself perversely asking a question that it is not supposed to ask, namely, "Why this particular one?"

Incomplete Rationalization

In its less than convincing answer to that question, its less than convincing attempts to justify its distributive effects, the novel is less of an "economy" than we may think, less fortified by that long tradition of presumptive and indeed prescriptive commensurability. To honor the novel in its dissent from that tradition, to honor the persistent sense of injury that it manages to keep alive, we need a theory, I think, about its structures of failed resolution, about the range of satisfactions it refuses to claim, let alone to grant. We need a theory, that is, about the "incomplete rationalization" of the novel, about the narrative form itself as an imperfect form of closure, an imperfect form of justification. Along those lines the novel would appear to be less summarizable by any single line of reasoning and less exhausted by any single adducible logic. By the same token, its distributive relations too might turn out to fall only partially within the domain of rational explanation, which is to say, they might turn out simply to obtain, but might not necessarily be justified as such, and might not even claim justification as a premise.

Some such distributive relation, in any case, is what Tom Corey notices as The Rise of Silas Lapham draws to a close. And, whether justified or not, it certainly gives him hope, as he watches the synchronized gain and loss being meted out to himself and to the man who, he hopes, will one day be his father-in-law. "Lapham's potential ruin," for Tom, is nothing short of a windfall, because this is a case where "another's disaster would befriend him, and give him the opportunity to prove the unselfishness of his constancy" (272). This is not just wishful thinking either. It actually comes to pass: Lapham's trouble does indeed "befriend" Tom, and his marriage to Penelope does indeed take place, to the tune of his father-in-law's financial downfall. In some mysterious fashion, the two events seem to have compensated for each other in a kind of cosmic trade-off, a cosmic exchange of fortunes. Here then, once again, are the familiar motions of a compensatory equilibrium. And yet the terms of the equilibrium are such as to provoke questions in turn. In what sense, and by what calculus, is the marital bliss of Tom and Penelope a fit compensation for Lapham's financial disaster (not to mention the elder Coreys' afflicted sensibility)? What rate of exchange—to put the question most crudely—measures these occurrences and certifies their commensurability?

That such questions can be asked, that some of them can even be answered, is a tribute to the economized ethics of the novel, its dream of a rational order resolvable into matching terms. This is its hoped-for foundation, its hoped-for justificatory ground, from which it derives a sequence, a circumference, a principle of economy doubling as a principle of narration. And yet the shakiness of such a foundation—its lapses not only in coverage but also in guarantee—suggests that the novel is, after all, both more complex and more vulnerable than the concept of an "economy" would make of it. There can be no full adequation here. Instead, what the novel registers, over and over again, is something like the traces of an insoluble residue: a residue offered up to us, unhappily but also quite unsparingly, as the limits of commensurability and the limits of any justice founded upon its image.

In The Rise of Silas Lapham, the limits of justice begin with a troubling mismatch, a troubling lack of commensurability, among the characters. Unexplained but also quite unmistakable, it manifests itself not only in a disparity of intelligence but also in a nonreciprocity of affections. This creates a headache for all concerned, and none is more aware of it than Mrs. Lapham. "She isn't really equal to him" (109), she announces with palpable misgivings, when she first toys with the idea of a romance between Tom and Irene. Many pages later, when that romance has become no more than a monstrous illusion, this is the theme she comes back to: "But she never was equal to him. I saw that from the start" (226). Against that brute fact of inequality, Penelope's marriage to Tom, an equal match in all respects, would seem to stand both in contrast and in remedy. But if so, that remedial equality turns out to be a kind of differential effect, generated out of inequalities and rendered intelligible only by that contrary phenomenon. It is this mind-boggling logic—and the mind-boggling "justice" predicated on its terms—that Howells would wrestle with, not only in The Rise of Silas Lapham but also in an essay entitled "Equality as the Basis of Good Society" (1895), published ten years later.

"Humanity is always seeking equality," Howells writes. "The patrician wishes to be with his equals because his inferiors make him uneasy; the plebeian wishes to be with his equals because his superiors make him unhappy. This fact accounts for inequality itself, for classes." The desire for equality turns out, in short, to be the basis for inequality. This is certainly a discouraging prospect, although (if it is any consolation) the inverse turns out to be true as well. Equality, as it happens, can also be born out of a desire for its antithesis. "People often wish to get into good society because they hope to be the superiors of those who remain out of it; but when they are once in it, the ideal of their behavior is equality." This is so because "if you are asked to a house, the theory is that you are the equal of every person you meet there, and if you behave otherwise, you are vulgar." Such behavior can be kept up "only on a very partial and restricted scale, and of course the result is an effect of equality, and not equality itself, or equality merely for the moment." Still, this is better than nothing, and Howells offers the dubious advice that "good society," "though it is the stronghold of the prejudices which foster inequality, yet it is the very home of equality."[115]

In its self-confounding logic, "Equality as the Basis of Good Society" stands as a kind of remote coda to The Rise of Silas Lapham. In the dubiousness of its advice (not to say the dubiousness of its consolation), it also echoes the novel's curious reluctance to claim for itself anything like full satisfaction. "The marriage came after so much sorrow and trouble," Howells tells us, "and the fact was received with so much misgiving for the past and future, that it brought Lapham none of the triumph" (358–359). As for the Coreys, "the differences remained uneffaced, if not uneffaceable, between [them] and Tom Corey's wife" (359). "That was the end of their son and brother for them; they felt that"; and, with so much "blank misgiving," such a "recurring sense of disappointment," all Mrs. Corey could do was to "say bravely that she was sure they all got on very pleasantly as it was, and that she was perfectly satisfied if Tom was" (360–361).

By the force of convention, the marriage between Penelope and Tom counts as a happy ending; and yet what Howells chooses to dwell on is the manifest unhappiness it occasions, to which the supposedly happy ending attaches itself almost as an afterthought. The intrusive shades of disappointment certainly seem striking in The Rise of Silas Lapham. But the same might also be said of numerous other nineteenth-century novels as well. To cite only the most obvious examples, the best-known (and most-lamented) marriages in the novels of George Eliot—between Dinah Morris and Adam Bede, between Dorothea Brooke and Will Ladislaw, between Daniel Deronda and Mirah Lapidoth—are "happy endings" only in name; in every other respect they bring a disconcerting inflection to that label. Within this context, the famous last line of The Bostonians (1886) might serve as an epigraph for the entire genre. When Verena goes off with Basil Ransom, she is discovered, James tells us, to be "in tears." And he goes on, "It is to be feared that with the union, so far from brilliant, into which she was about to enter, these were not the last she was destined to shed."[116]

Howells does not say that about Penelope, of course, and we have every hope that she will fare better than Verena Tarrant. Still, even in The Rise of Silas Lapham, a book that otherwise has little in common with The Bostonians, the happily married heroine is not allowed to go off without shedding some tears of her own.[117] When Penelope finally departs with Tom, she too is seen "cry[ing] on his shoulder" (361). That activity is perhaps more appropriate to Verena, but it is not entirely out of place even here, for Penelope's marriage too carries with it something like a generic signature of the novel, a generic sense of the unappeased. The ending, then, is not so much a full resolution as a problematization of that very concept. Inadequate to all that has gone on before and inadequate, above all, to the phenomenon of pain which the novel foregrounds as its subject, the ending marks not the passing of its crisis of allocation, but the rewriting of that crisis into a generic condition for residue.

It is Irene, of course, who stands out, as the most unyielding and most inconvenient of residues, in the crisis of allocation which animates and confounds The Rise of Silas Lapham. Significantly, Howells does not choose to supply Irene with a suitor, a figure of commensurability, someone who would have rectified her mismatch with Tom, even though there are certainly available candidates, including her cousin Will and the young West Virginian who has taken over her father's business. Irene is uncompensated in her marital fortunes, and she is uncompensated, as well, in her moral bookkeeping. For even though she is indeed educated by her suffering—"toughened and hardened" by it, as a host of nineteenth-century scientists and philanthropists would have predicted—it is not at all clear that her account is truly balanced, that her pain is truly its own reward. If anything, the emphasis here is on the discrepancy between suffering and edification, between the injury sustained and the recompense received. Irene has "necessarily lost much," Howells writes. "Perhaps what she had lost was not worth keeping; but at any rate she had lost it" (347). At the end of the book, we see her treating "both Corey and Penelope with the justice which their innocence of voluntary offense deserved. It was a difficult part, and she kept away from them as much as she could" (347).

The transformation of sister and lover into recipients of "justice"—recipients of some generalized "desert"—marks the logical outcome as well as the logical limit of an economized ethics, an ethics respectfully invoked in The Rise of Silas Lapham but, I would argue, also respectfully contested, if not quite rejected out of hand. For Irene as for Howells, justice is defensible (and indeed practicable) only when it is recognized for what it is, which is to say, an attempt to map our reason onto the world, an attempt which in the end can be no more than that, an attempt. Irene is just to Tom and Penelope; she cannot stay far enough away from them. In the necessary proximity—and necessary dissonance—of those two attitudes, we see the shadows haunting the cognitive domain of the novel, a domain that, while informed by the dream of a commensurate order, is nonetheless not fully integrated into it. In all those moments (and they are numerous) when things refuse to tally, when injuries go uncompensated, when resolutions fall short, the novel offers itself as the most eloquent of failures: a failure in the economics of justice.

