ELECTRONIC PUBLISHING, DIGITAL LIBRARIES, AND THE SCHOLARLY ENVIRONMENT
Richard E. Quandt and Richard Ekman
By now it is commonplace to observe that the economic position of research libraries has been deteriorating for at least 20 years and that derivative pressures have been experienced by publishers of scholarly monographs. The basic facts have been discussed in Cummings et al. (1992), and the facts and their interpretations have been analyzed in countless articles, ground that we do not need to cover from the beginning. Contemporaneously with these unfavorable changes, an explosive growth has occurred in information technology: processing speeds of computers have doubled perhaps every 18 months, hard-disk storage capacities have grown from 10 Mbytes for the first IBM-XTs to 6 to 8 Gbytes, and networks have grown in speed, capacity, and pervasiveness in equal measure. In chapter 21 of this volume, Michael Lesk shows that the number of Internet hosts has grown 10-fold in a four-year period. Parallel with the hardware changes has come the extraordinary development of software. The Web is now a nearly seamless environment about which the principal complaint may be that we are being inundated with too much information, scholarly and otherwise.
Some five years ago, more or less isolated and enthusiastic scholars started to make scholarly information available on the growing electronic networks, and it appeared that they were doing so at very low cost (per item of information). A few electronic journals started to appear; the hope was voiced in some quarters that modern information technology would supplant the traditional print-based forms of scholarly communication and do so at a substantially lower cost. The Association of Research Libraries, compiler of the Directory of Electronic Scholarly Journals, Newsletters, and Academic Discussion Lists since 1991, reports that there are currently approximately 4,000 refereed electronic journals.
The Andrew W. Mellon Foundation, which has had a long-standing commitment to support research libraries in their mission, announced a new initiative in 1994 with the dual objective of (1) supporting electronic and digital publishing and library projects that would make significant contributions to assisting scholarly communication, and (2) supporting them in a manner that would, at the same time, permit detailed and searching studies of the economics of these projects. The objective of the projects was not so much the creation of new hardware or software, but the thoughtful application of existing hardware and software to problems of scholarly communication-in Joseph Schumpeter's terms, the emphasis was to be more on "innovation" than on "invention." The Foundation also planned to diversify its portfolio of projects along functional lines-that is, to work with publishers as well as libraries; to deal with journals as well as monographs, reference works, and multimedia approaches; and to support liberal arts colleges as well as research universities. All grantees were required to include in their proposals a section that outlined the methodology to be used in the project to track the evolution of developmental and capital costs as well as continuing costs (the supply side of the equation) and to measure the usage of any new product created, preferably under varying pricing scenarios (the demand side of the equation). Out of these efforts, it was hoped, the outlines of a "business plan" would emerge from which one could ultimately judge the long-term viability of the product created by the project and examine whether it did, indeed, save libraries money in comparison with the conventional print-based delivery mechanism for an analogous product (Ekman and Quandt 1994).
The papers in the present volume represent, for the most part, the findings and analyses that have emerged from the first phase of the Foundation's grant making in this area. They were all presented and discussed at a conference held under the auspices of the Foundation at the Emory University Conference Center in Atlanta, Georgia, on April 24-25, 1997. They fall roughly into five categories: (1) papers that deal with important technical or methodological issues, such as techniques of digitizing, markup languages, or copyright; (2) papers that attempt to analyze what has, in fact, happened in particular experiments to use electronic publishing of various materials; (3) papers that deal specifically with the patterns of use and questions of productivity and long-term viability of electronic journals or books; (4) papers that consider models of how electronic publishing could be organized in the future; and (5) papers that deal with broader or more speculative approaches.
The purpose of this introductory essay is not to summarize each paper. Although we will refer to individual papers in the course of discussion, we would like to raise questions or comment on issues emerging from the papers in the hope of stimulating others to seek answers in the coming months and years.
Information Technology and the Productivity Puzzle
The argument in favor of the wholesale adoption of the new information technology (IT) in universities, publishing houses, libraries, and scholarly communication
rests on the hope-indeed the dogma-that IT will substantially raise productivity. It behooves us to take a step back to discuss briefly the general relationship between IT, on the one hand, and economic growth and productivity increases on the other.
There seems to be solid agreement among experts that the trend growth rate of real GDP in the United States has been between 2.0 and 2.5% per annum during the past 20 years, with 2.2% being perhaps the best point estimate. About 1.1% of this increase is accounted for by the growth in the labor force, leaving 1.1% for annual productivity growth. This figure is very unremarkable in light of the miracles that are supposed to have occurred in the past 20 years in IT. Communications and information-gathering technology have advanced tremendously: for example, some steel factories now have hardly any workers in them, and banking is done by computers. Yet the productivity figures do not seem to reflect these savings. Admittedly, there are measurement problems: to the extent that we overestimate the rate of inflation (and there is some evidence that we do), we also underestimate the rate of growth of GDP and productivity. It may also be true that our measurement of inflation and hence productivity does not correctly capture the quality improvements caused by IT: perhaps an argument in support of that view is that the worst productivity performance is seen in industries in which measurement of output is chronically difficult (such as financial intermediation). But it is difficult to escape the conclusion that IT has not delivered what the hype surrounding it has claimed.
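The arithmetic behind these figures can be written out explicitly. The following sketch simply restates the growth-accounting decomposition with the numbers cited above; the 0.5-point inflation overstatement is an assumed figure for illustration, not an estimate.

```python
# Back-of-the-envelope growth accounting with the figures cited in the text.
# The inflation-overstatement number is an assumption for illustration only.

gdp_growth = 0.022          # trend real GDP growth, ~2.2% per annum
labor_force_growth = 0.011  # growth in the labor force, ~1.1% per annum

# To first order, output growth is the sum of labor-input growth and
# labor-productivity growth, so productivity growth is the residual.
productivity_growth = gdp_growth - labor_force_growth
print(f"implied labor productivity growth: {productivity_growth:.1%}")

# If measured inflation overstates true inflation by, say, 0.5 points,
# real growth (and hence productivity growth) is understated by roughly
# the same amount.
inflation_overstatement = 0.005  # assumed
corrected = productivity_growth + inflation_overstatement
print(f"corrected productivity growth:     {corrected:.1%}")
```

Even under this generous correction, the resulting productivity growth remains modest relative to the scale of the IT revolution, which is the puzzle the text describes.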
What can we say about the effects of IT in universities, libraries, and publishing houses, or in teaching, research, and administration? Productivity increases are clearly a sine qua non for improvement in the economic situation of universities and libraries, but labor productivity increases are not enough. If every worker in a university produces a greater output than before and the university chooses to produce the greater output without diminishing the number of workers employed (who now need more and more expensive equipment to do their work), its economic situation will not improve. As a minimum, we must secure increases in "total factor productivity"; that is, real output per real composite input ought to rise. But will this improvement be forthcoming? And how do we measure labor or total factor productivity in an institution with highly varied products and inputs, most of which cannot be measured routinely by the piece or by weight or volume? What is the "output contribution" of students being able to write papers with left and right margins aligned and without-thanks to spell checkers-too many spelling errors? What is the output contribution (that is, the contribution to producing "truth" in particle physics) of Ginsparg's preprint server in Los Alamos?
Scott Bennett's important contribution to this volume (chapter 4) gives a specific example of how one might tackle the question of the effect of IT on productivity in teaching and shows that the answer depends very much on the time horizon one has in mind. This conclusion is important (and extremely reasonable). The investments that are necessary to introduce IT in teaching need to be amortized, and that takes time. One need look only as far as the eighteenth and nineteenth centuries to recognize that the inventions that fueled the Industrial Revolution did not achieve their full effects overnight; on the contrary, it took many decades for the steam engine, railroads, and later, electricity to diffuse throughout the economy. Hence, even the most productive IT breakthroughs in an isolated course will not show up in overall university productivity figures: the total investment in IT is too small a fraction of aggregate capital to make much difference.
In the papers by Malcolm Getz (chapter 6) and Robert Shirrell (chapter 10), we have examples of the potential impact on research productivity. Getz illustrates why libraries will prefer to buy large packages of electronic journals, and Shirrell stresses that productivity is likely to be higher (and costs lower) over longer horizons. By coincidence, both authors happened to choose as their specific illustration the American Economic Review.
Bennett's, Getz's, and Shirrell's papers, among others, suggest that much could be learned by studying academic productivity more systematically and in more detail. We need to study particular examples of innovation in teaching and to analyze the productivity changes that occur over suitably long horizons and with full awareness that a complete understanding of productivity in teaching must cope with the problem of how to measure whether students have learned faster or better or more. We also need to pay more explicit attention to research productivity, mindful of the possibility that research productivity may mean different things in different disciplines.
But these considerations raise particularly murky questions. When journals are electronic and access to information is much faster and requires less effort, do scholars in the sciences and social sciences write more papers and do humanists write more books? Or do they write the same number of articles and books, but these writings are better than they would have been without the IT aids? What measures do we have for scholarly productivity? Obviously, every self-respecting tenure-and-promotion committee will shudder at the thought that productivity is appropriately measured by the quantity of publications; but how do we measure quality? And what is the relationship between the quality of access to information and the quality of ideas? While we agree with Hal Varian's view (chapter 25) that journals tend to have an agreed-upon pecking order and we find his suggestions for a new model of electronic publishing fascinating and promising, we still believe that in the short run, much could be learned by studying the impact of particular IT advances on scholarly productivity in specific fields.
It is possible that in the short run our views about productivity enhancements from IT in universities, libraries, and publishing houses must be expressions of faith. But unlike previous eras, when inventions and innovations did not always lead to self-conscious and subsequently documented examinations of the productivity effects, we have remarkable opportunities to measure the productivity effects in the discrete applications of IT by universities, libraries, and scholarly presses, and thus provide earlier feedback on the innovation process than would otherwise occur.
Measuring Demand and Supply: The Foundations for Pricing Strategies and Survival
It may be too facile a generalization to say that the early, "heroic" period of electronic library products was characterized by enormous enthusiasm on the part of their creators and not much concern about costs, usage, and business plans. But the early history of electronic publishing is filled with examples of devoted academics giving freely of their time and pursuing their dreams in "borrowed" physical space and with purloined machine cycles on computers that were originally obtained for other purposes. An instructive (and amusing) example of this phenomenon can be found in the papers by Richard Hamilton (chapter 12) and James J. O'Donnell (chapter 24), which describe the early days of the Bryn Mawr Reviews and how the editors improvised to provide space, hardware, and labor for this effort. Creating electronic library products seemed to be incredibly easy, and it was.
But as we gradually learned what was technically possible, started to learn what users might like or demand, and realized the scope of the efforts that might be involved in, say, digitizing large bodies of materials, it was unavoidable that sooner or later even not-for-profit efforts would be informed by the realities of the marketplace. For example, if we create an electronic counterpart to an existing print-based journal, should the electronic counterpart look identical to the original? What search capabilities should there be? Should the corpus of the electronic material be added to in the future? What are the staffing requirements of creating and maintaining an electronic publication? (See, for example, Willis G. Regier [chapter 9].) Marketplace realities were further compounded by the recognition that commercial publishers were looking to enter the field of electronic publication. In their case the question of pricing had to be explicitly considered, as is amply illustrated in the paper by Karen Hunter (chapter 8).
Of course, pricing cannot be considered in the abstract, and the "proper" pricing strategy will generally depend on (1) the objectives to be accomplished by a pricing policy, (2) costs, and (3) demand for the product. While it is much too early in the development of electronic information products to propose anything beyond casual answers, it is not too early to consider the dimensions of these problems. We shall briefly discuss each of three key elements on which pricing has to depend.
Objectives to Be Accomplished
The important fact is that there are numerous agents in the chain from the original creator of intellectual property to the ultimate user. And the creator-the author-may himself have divided interests. On the one hand, he may want to have the largest conceivable circulation of the work in question in order to spread his academic reputation. On the other hand-and this point is characteristically relevant only for books-he may want to maximize royalty income. Or, indeed, the author may have a compromise solution in mind in which both royalty income and circulation get some weight.
Next in line comes the publisher who is well aware that he is selling a differentiated product that confers upon him some monopoly power: the demand curve for such products is downward sloping and raising the price will diminish the quantity demanded. The market motivations of commercial and not-for-profit publishers may not be very different, but in practice, commercial publishers appear to charge higher prices. Since print-based materials are difficult to resell (or copy in their entirety), publishers are able to practice price discrimination-that is, sell the identical product at different prices to different customers, as in the case of journal subscriptions, which are frequently priced at a higher level for libraries than for individuals. The naive view might be that a commercial publisher would charge a price to maximize short-term profit. But the example of Elsevier, particularly in its TULIP Project, suggests that the picture is much more complicated than that. While Elsevier's design of TULIP may not be compatible with long-run profit maximization, the correct interpretation of that project is still open to question.
Scholars and students want access to scholarly materials that is broad and inexpensive to them, although they do not much care whether their universities bear a large cost in acquiring these materials. On the other hand, academic administrators want to contain costs, perhaps even at the risk of reducing the flow of scholarly information, but also have a stake in preserving certain aspects of the journal and book production process (such as refereeing) as a way of maintaining their ability to judge academic excellence, even if this approach adds to the cost of library materials. The libraries, on the other hand, would like to provide as large a flow of information to their clients as possible and might seek the best combination of different library materials to accomplish this objective.
While none of us can clearly foresee how the actual prices for various types of electronic library products will evolve, there are two general ways in which electronic library products can be defined and two general ways in which they can be priced. Either the product itself can be an individual product (for example, a given journal, such as The Chicago Journal of Theoretical Computer Science, or a particular monograph, or even an individual paper or chapter), or it can be a bundle of journals or monographs with the understanding that the purchaser in this latter case buys the entire bundle or nothing. If the product is a bundle, a further question is whether the items bundled are essentially similar (that is, good substitutes for one another), as would be the case if one bundled into a single product 20 economics journals; whether the items bundled are sufficiently dissimilar so that they would not be good substitutes for one another, such as Project MUSE; or whether the bundle is a "cluster of clusters" in the sense that it contains several subsets that have the characteristic of high substitutability within but low substitutability across subsets, as is the case in JSTOR.
With regard to pricing, the vendor may offer site licenses for the product (however defined in light of the discussion above), which provide the purchaser with very substantial rights of downloading, printing, and so on, or charge the user each time the user accesses the product (known as "charging by the drink"). The principal difference here is not in who ultimately bears the cost, since even in the latter case universities may cover the tabs run up by their members. Two contrasting approaches, JSTOR and Project MUSE, are described in the papers by Kevin M. Guthrie (chapter 7) and Regier, respectively. In the case of JSTOR, both initial fees and annual maintenance charges vary by the size of the subscribing institution. Project MUSE, in addition, offers-not unproblematic-discounts for groups of institutions joined in consortia.
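The break-even logic between site licensing and charging by the drink can be sketched with a stylized calculation. All of the fees and usage figures below are invented for illustration; none come from JSTOR, Project MUSE, or any actual vendor.

```python
# A stylized comparison of the two pricing models: a flat site license
# versus per-access ("by the drink") charges. All numbers are hypothetical.

def site_license_cost(initial_fee: float, annual_fee: float, years: int) -> float:
    """Total cost of a site license over a planning horizon."""
    return initial_fee + annual_fee * years

def per_use_cost(price_per_access: float, accesses_per_year: int, years: int) -> float:
    """Total cost of paying per access over the same horizon."""
    return price_per_access * accesses_per_year * years

years = 5
license_total = site_license_cost(initial_fee=20_000, annual_fee=4_000, years=years)
drink_total = per_use_cost(price_per_access=2.50, accesses_per_year=3_000, years=years)

# Which model is cheaper for the institution depends entirely on expected
# usage: heavy use favors the site license, light use favors per-access fees.
better = "site license" if license_total < drink_total else "per-use"
print(better, license_total, drink_total)
```

The comparison also shows why the question of who bears the cost matters less than the text's point suggests: under either model the institution can end up paying, but its exposure to usage risk differs.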
Several papers in this volume discuss the issue of costs. The complexity of the cost issues is staggering. Everybody agrees that journals as well as monographs have first-copy costs, which much resemble what the economist calls fixed costs, and variable costs. Printing, binding, and mailing are fairly sizable portions of total costs (23% for the American Economic Review if we ignore the fixed component and more like 36% if we include it), and it is tempting to hope that electronic publications will completely avoid these costs. (It is particularly bothersome that the marginal cost of producing an additional unit of an electronic product is [nearly] zero; hence a competitive pricing strategy would prescribe an optimal price of zero, at which, however, the vendor cannot make ends meet.) While it is true that publishers may avoid these particular costs, they clearly incur others, such as hardware, which periodically needs to be replaced, and digitizing or markup costs. Thus, Project MUSE estimates, for example, that providing both the print-based and the electronic copies of a journal costs 130% of the print-based publication alone, whereas for Immunology Today, the combined price is set at 125% of the print version (see chapter 8). But these figures just underscore how much in the process is truly variable or adjustable: one could, presumably, save on editorial costs by requiring authors to submit papers ready to be placed on the Web (to be sure, with some risk of deteriorating visual quality).
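The pricing dilemma created by near-zero marginal cost can be made concrete with a minimal first-copy-cost model. The dollar figures below are invented; only the structure, a fixed first-copy cost plus a small or zero per-copy cost, comes from the discussion above.

```python
# A minimal first-copy-cost model of journal publishing. Figures are
# illustrative assumptions, not estimates for any actual journal.

def average_cost(fixed: float, marginal: float, copies: int) -> float:
    """Average cost per copy: the fixed (first-copy) cost spread over the
    run, plus the per-copy (printing/binding/mailing) cost."""
    return fixed / copies + marginal

first_copy = 100_000.0      # editing, refereeing overhead, typesetting, ...
print_marginal = 8.0        # printing, binding, and mailing per copy (assumed)
electronic_marginal = 0.0   # serving one more electronic copy is ~free

for copies in (1_000, 10_000):
    print(copies,
          round(average_cost(first_copy, print_marginal, copies), 2),
          round(average_cost(first_copy, electronic_marginal, copies), 2))

# Pricing at marginal cost (the competitive benchmark) yields zero revenue
# for the electronic product, so the first-copy cost can never be recovered:
# any viable price must sit above marginal cost.
```

The model shows why eliminating printing, binding, and mailing shrinks but does not eliminate the cost problem: average cost still falls with circulation, and the fixed component still has to be paid for by someone.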
Most important, the cost implications of electronic publication are not only those costs from actually producing the product. Suddenly, the costs incurred by other entities are also affected. First is the library. Traditionally the library has borne costs as a result of providing access to scholarly information: the book or journal has to be ordered, it has to be cataloged, sometimes bound (and even rebound), shelved and reshelved, circulated, and so on. But electronic products, while they may offer some savings, also bring new costs. Libraries, for example, now have to provide workstations at which users can access the relevant materials; they must devote resources to archiving electronic materials or to providing help desks for the uninitiated. The university's computer center may also get involved in the process. But equally important, the costs to a user may also depend on the specific type of electronic product. Meanwhile, to the extent that a professor no longer has to walk to the library to consult a book, a benefit is conferred that has the effect of a de facto cost reduction. But let us agree that university administrators may not care much about costs that do not get translated into actual dollars and cents and that have to be actually disbursed (as Bennett points out). Nevertheless, there may well be actual costs that can be reduced. For example, a digital library of rare materials may obviate the need for a professor to undertake expensive research trips to distant libraries (which we may therefore call avoided costs). This factor may represent a saving to the university if it normally finances such trips or may be a saving to the National Science Foundation or the National Endowment for the Humanities if they were to end up paying the tab. The main point is that certain costs that used to be deemed external to the library now become internal to a broader system, and the costs of the provision of information resources must be regarded, as a minimum, on a university-wide basis. Hence these costs belong not only in the librarian's office (the librarian would not normally care about the costs of professors' research trips) but in the provost's office as well.
Finally, we should note that many types of electronic products have up-front development costs that, given the current state of the market for such products, may not be recouped in the short run. (See, for example, Janet H. Fisher [chapter 5].) But to the extent that electronic library products will be more competitive at some future time, investing in current development efforts without the expectation of a payback may be analogous to the infant industry argument for tariff protection and may well have a lot of justification for it.
Usage and Demand
One area that we know even less about than costs is usage and demand. The traditional view has been that scientists will adapt rapidly to electronic publications, whatever they may be, and the humanists will adapt rather slowly, if at all. The picture is probably more complicated than that.
Some kinds of usage-for example, hits on the Web-may be easy to measure but tell us correspondingly little. Because hits may include aimless browsing or be only a few seconds in duration, the mere occurrence of a hit may not tell us a great deal. Nor are we able to generate in the short run the type of information from which the econometrician can easily estimate a demand function, because we do not have alternative prices at which alternative quantities demanded can be observed. But we can learn much from detailed surveys of users in which they describe what they like and what they do not like in the product and how the product makes their lives as researchers or students easier or harder (see the surveying described by Mary Summerfield and Carol A. Mandel in chapter 17). Thus, for example, it appears that critical mass is an important characteristic of certain types of electronic products, and the TULIP project may have been less than fully successful because it failed to reach the critical mass.
Electronic library products make access to information easier in some respects and certainly faster; but these benefits do not mean that the electronic information is always more convenient (reading the screen can be a nuisance in contrast to reading the printed page), nor is it clear that the more convenient access makes students learn better or faster. In fact, the acceptance of electronic products has been slower than anticipated in a number of instances. (See the papers about the Chicago Journal of Theoretical Computer Science [chapter 5], Project MUSE [chapters 9 and 15], JSTOR [chapters 7 and 11], and the Columbia On-line Books project [chapter 17].) But all the temporary setbacks and the numerous dimensions that the usage questions entail make it imperative that we track our experiences when we create an electronic or digital product; only in the light of such information will we be able to design products that are readily acceptable and marketable at prices that ensure the vendor's long-term survival.
A special aspect of usage is highlighted by the possibility that institutions may join forces for the common consortial exploitation of library resources, as in the case of the Associated Colleges of the South (Richard W. Meyer [chapter 14]) and Case Western Reserve/Akron Universities (Raymond K. Neff [chapter 16]). These approaches offer potentially large economies but may face new problems in technology, relations with vendors, and consortial governance (Andrew Lass [chapter 13]). When the consortium is concerned not only with shared usage, but also with publishing or compilation of research resources (as in the cases of Project MUSE and the Case Western/Akron project), the issues of consortial governance are even more complex.
A Look into the Future: Questions But No Answers (Yet)
The key questions are not whether electronic publishing will grow in the future at the expense of print-based publishing, nor whether electronic access to scholarly materials in universities will account for an increasing share of all access to such materials. The answers to both of these broad questions are clearly "yes." But some of the more important and interrelated questions are the following: (1) How will the costs of electronic and conventional publishing evolve over time? (2) How will products be priced? (3) What kind of use will be made of electronic information products in teaching and in research? (4) How will that use affect the productivity of all types of academic activities? (5) What will be the bottom line for academic institutions as a result of the changes that are and will be occurring?
At present, the cost comparison between electronic and conventional publications may be ambiguous, and the ambiguity is due, in part, to our inability to reduce first-copy costs substantially: electronic publications save on fulfillment costs but require high initial investments and continued substantial editorial involvement. Andrew Odlyzko, in chapter 23, argues that electronic journals can be published at a much lower per-page cost than conventional journals, but this reduction does not appear to be happening yet. Even if we were able to reduce costs substantially by turning to electronic journals wholesale, the question for the future of library and university budgets is how the costs of electronic journals will increase over time relative to other university costs.
Curiously, most studies of the determinants of journal prices have focused on what makes one journal more expensive than another and not on what makes journal prices increase faster than, say, the prices of classroom seats. If electronic journals were half the price of comparable print-based journals, universities could realize a one-time saving by substituting electronic journals for print-based ones; but if these journals increased in price over time as rapidly as paper journals, eventually universities would observe the same budget squeeze that has occurred in the last decade.
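The point can be illustrated with a stylized projection. The growth rates below are assumptions chosen for illustration, not estimates of actual journal inflation or budget growth.

```python
# Why a one-time price cut does not end the budget squeeze: if electronic
# journals start at half the print price but their prices grow at the same
# rate, the level gap persists while the growth burden is unchanged.
# Both growth rates are assumed for illustration.

journal_inflation = 0.08   # assumed annual growth of journal prices
budget_growth = 0.03       # assumed annual growth of the acquisitions budget

print_price = 100.0        # index the print journal's price at 100
electronic_price = 50.0    # the electronic version starts at half price
budget = 100.0             # index the budget at 100

for year in range(10):
    print_price *= 1 + journal_inflation
    electronic_price *= 1 + journal_inflation
    budget *= 1 + budget_growth

# After a decade the electronic journal is still exactly half the print
# price, but its claim on the budget has grown from 0.50 of the index to
# about 0.80, so the same squeeze is reappearing.
print(round(electronic_price / print_price, 2),
      round(electronic_price / budget, 2))
```

The sketch makes the text's conclusion precise: what matters for the long run is the differential between journal-price growth and budget growth, not the initial price level.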
In speculating about the future evolution of costs, we can paint both optimistic and pessimistic scenarios. Hardware capabilities have increased enormously. PC processors have gone from 8-bit chips running at just over 4 MHz to 32-bit chips running at 450 MHz. Any given piece of software will run blindingly fast on a PC of the latter type in comparison with one of the former. But there is a continuing escalation of software: as soon as faster PCs appear on the market, more demanding software is created that will not perform adequately on an older PC. Will software developments continually make our hardware obsolete? If so, we may be able to carry out more elaborate functions, but some may not serve directly the objective of efficient access to scholarly information, and all will come at the cost of an unending stream of equipment upgrades or replacements. On the other hand, some software improvements may reduce first-copy costs directly. It is not difficult to imagine "learned-paper-writing" software that has the feel of a Windows 95 application, with a drop-down menu that allows the user to select the journal in whose style the paper is to be written, similarly to select mathematical notation, and so on. Perhaps under such circumstances editing and copyediting might consist of little more than finding errors of logic or substance. Furthermore, as the complexity of hardware and software grows, will the need for technical support staff continue to grow and perhaps represent an increasing share of the budget? It would take a crystal ball to answer all these questions. The questions provide, perhaps, some justification for O'Donnell's skeptical paper in this volume about the possibilities of measurement at this early stage in the history of electronic libraries.
Other questions asked at the beginning of this section cannot be answered in isolation from one another. The usage made of electronic products-which, one imagines, will be paid for mostly by universities and other academic institutions-will depend on the price, and the price will clearly depend on the usage: as in the standard economic model, quantity and price are jointly determined, neither being the independent cause of the other. But it is certain that usage will lag behind the hype about usage. Miller (1997) cites a state legislator who believes that the entire holdings of the Harvard University library system have (already) been digitized and are available to the public free of charge, and at least one East European librarian has stated that conventional library acquisitions are no longer relevant, since all important material will be available electronically.
In returning to the productivity puzzle, it is important to be clear about what productivity means. One may be awed by the fact that some 30 years ago the number of shares traded daily on the New York Stock Exchange was measured in the millions or perhaps tens of millions, whereas today a day with 700 million shares traded is commonplace. However, the productivity of the brokerage industry is not measured by the number of shares traded, but by the value added per worker in that industry, a figure that exhibits substantially lower rates of growth. Likewise, in instruction or research, productivity is not measured by the number of accesses to information but by the learning imparted (in instruction) or by the number of ideas or even papers generated (in research). If information gathering is a relatively small portion of total instructional activity (that is, if explanation of the underlying logic of an argument or the weaving of an intellectual web represent a much larger fraction), the productivity impact in teaching may end up being small. If information gathering is a small portion of research (that is, if performing laboratory experiments or working out solutions to mathematical models are much larger fractions), then the productivity impact in research may end up being low. And in these fundamental instructional and research activities there may be no breakthroughs resulting from the information technology revolution, just as you still need exactly four people to perform a string quartet and cannot increase productivity by playing it, say, twice as fast (see Baumol and Bowen 1966).
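This bounding argument can be stated precisely: it is the same arithmetic as Amdahl's law for parallel speedups. The shares and speedup factors below are assumptions chosen to illustrate the logic, not measurements of any discipline.

```python
# The bound sketched above, stated precisely: if a fraction s of an
# activity (here, information gathering) is sped up by a factor k, the
# overall speedup is 1 / ((1 - s) + s / k) -- Amdahl's-law arithmetic.
# The fractions below are illustrative assumptions.

def overall_speedup(s: float, k: float) -> float:
    """Overall speedup when only a share s of the work is accelerated k-fold."""
    return 1.0 / ((1.0 - s) + s / k)

# Even if IT makes information gathering ten times faster, a field in
# which gathering is only 10% of the work gains roughly 10% overall:
print(round(overall_speedup(s=0.10, k=10.0), 2))

# whereas a field in which gathering is 60% of the work more than doubles:
print(round(overall_speedup(s=0.60, k=10.0), 2))
```

The formula captures why the productivity impact of IT in teaching and research is capped by the share of the activity that IT actually touches, no matter how dramatic the acceleration of that share.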
Teaching and research methods will change, but less rapidly than some may expect. The change will be more rapid if searching for the relevant information can be accomplished effectively. Effective search techniques are less relevant for instructional units (courses) in which the professor has canned the access procedures to information (such as the art course materials described in Bennett's paper in this volume). But individualized electronic products are not likely to sweep the broad ranges of academia: the specific art course at Yale, discussed by Bennett, is likely to be taught only at Yale despite the fact that it is similar to courses taught at hundreds of universities. For scholars who are truly searching for new information, Web searches that report 38,732 hits are not useful and suggest that neither students nor faculty members are well trained in effective search techniques. It is fortunate that important new research into better search algorithms is being carried out. Librarians will play a vital role in this process by helping to guide scholars toward the best electronic sources, just as they have helped generations of scholars to find their way among print-based sources. This new role may, of course, require that librarians themselves redefine their functions to some extent and acquire new expertise; but these changes are happening anyway.
And what about the bottom line? This question is the most difficult one of all, and only the wildest guesses can be hazarded at this time. Taking a short-run or intermediate-run perspective (that is, a period of time up to, say, seven years from now), we do not expect that the revolution in information technology is going to reduce university costs; in fact it may increase them. Beyond the intermediate horizon, things may well change. Hardware costs may decline even more precipitously than heretofore, software (including search engines) may become ever more effective, and the cost savings due to information technology that are not centered on library activities may become properly attributed to the electronic revolution. In the long run, the budgetary implications are probably much more favorable than the short- or intermediate-term implications. But what we need to emphasize is that the proper way of assessing the longer-term evolution of costs is not by way of one part of an institution-say, the library-or even by way of an individual institution viewed as a whole system, but ideally by way of an interdependent multi-institutional system. Just as electronic technology may provide a university with savings that do not fall within the traditional library budget, thus implying that savings are spread university-wide, so too will the savings be spread over the entire higher educational system. Even though some costs at individual institutions may rise, we are confident that system costs will eventually fall.