The Future of Electronic Journals
Hal R. Varian
It is widely expected that a great deal of scholarly communication will move to an electronic format. The Internet offers much lower cost of reproduction and distribution than print, the scholarly community has excellent connectivity, and the current system of journal pricing seems to be too expensive. Each of these factors is helping push journals from paper to electronic media.
In this paper I want to speculate about the impact this movement will have on the form of scholarly communication. How will electronic journals evolve?
Each new medium has started by emulating the medium it replaced. Eventually the capabilities added by the new medium allow it to evolve in innovative and often surprising ways. Alexander Graham Bell thought that the telephone would be used to broadcast music into homes. Thomas Edison thought that recordings would be mostly of speech rather than music. Marconi thought that radio's most common use would be two-way communication rather than broadcast.
The first use of the Internet for academic communication has been as a replacement for the printed page. But there are obviously many more possibilities.
Demand and Supply
In order to understand how journals might evolve, it is helpful to start with an understanding of the demand and supply for scholarly communication today.
Supply of Scholarly Communication
The academic reward system is structured to encourage the production of ideas. It does this by rewarding the production and dissemination of "good" ideas: ideas that are widely read and acknowledged.
Scholarly publications are produced by researchers as part of their jobs. At most universities and research organizations, publication counts significantly toward salary and job security (e.g., tenure). Not all publications are created equal: competition for space in top-ranked journals is intense.
The demand for space in those journals is intense because they are highly visible and widely read. Publication in a topflight journal is an important measure of visibility. In some fields, citation data have become an important observable proxy for "impact." Citations are a way of proving that the articles that you publish are, in fact, read.
Demand for Scholarly Communication
Scholarly communication also serves as an input to academic research. It is important to know what other researchers in your area are doing so as to improve your own work and to avoid duplicating their work. Hence, scholars generally want access to a broad range of academic journals.
The ability of universities to attract topflight researchers depends in part on the size of the library collection. Threats to cancel journal subscriptions are met with cries of outrage from faculty.
The Production of Academic Journals
Tenopir and King [1996] have provided a comprehensive overview of the economics of journal production. According to their estimates, the first-copy costs of an academic article are between $2,000 and $4,000. The bulk of these costs are labor costs, mostly clerical costs for managing submission, review, editing, typesetting, and setup.
The marginal cost of printing and mailing an issue of a journal is on the order of $6. A special-purpose, nontechnical academic journal that publishes four issues per year with 10 articles each issue would have fixed costs of about $120,000 (40 articles at roughly $3,000 each). The variable costs of printing and mailing would be about $24 per subscriber per year. Such a journal might have a subscriber list of about 600, which leads to a break-even price of $224.
Of course, many journals of this size are sold by for-profit firms and the actual prices may be much higher: subscription prices of $600 per year or more are not uncommon for journals of this nature.
If the variable costs of printing and shipping were eliminated, the break-even price would fall to $200. This simple calculation illustrates the following point: fixed costs dominate the production of academic journals; reduction in printing and distribution costs because of electronic distribution will have negligible effect on break-even prices.
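The back-of-the-envelope arithmetic above can be checked with a short sketch. All figures are the illustrative ones from the text; the function name is mine:

```python
# Break-even subscription price for the hypothetical journal described above:
# 4 issues/year x 10 articles, ~$3,000 first-copy cost per article (midpoint
# of the Tenopir-King range), $6 to print and mail each issue, 600 subscribers.
def break_even_price(articles_per_year, first_copy_cost, issues_per_year,
                     cost_per_issue, subscribers):
    fixed = articles_per_year * first_copy_cost
    variable = issues_per_year * cost_per_issue   # per subscriber, per year
    return fixed / subscribers + variable

print(break_even_price(40, 3000, 4, 6, 600))   # print journal  -> 224.0
print(break_even_price(40, 3000, 4, 0, 600))   # electronic only -> 200.0
```

The calculation makes the point directly: dropping the $6-per-issue printing and mailing cost to zero moves the break-even price only from $224 to $200, because fixed first-copy costs dominate.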
Of course, if many new journals are produced and distributed electronically, the resulting competition may chip away at the $600 monopoly prices. But if these new journals use the same manuscript-handling processes, the $200 cost per subscription will remain the effective floor to journal prices.
Two other costs should be mentioned. First is the cost of archiving. Cooper [1989] estimates that the present value of the storage cost of a single issue of a journal to a typical library is between $25 and $40.
Another interesting figure is yearly cost per article read. This figure varies widely by field, but I can offer a few order-of-magnitude guesses. According to a chart in Lesk [1997, p. 218], 22% of scientific papers published in 1984 were not cited in the ensuing 10-year period. The figure rises to 48% for social science papers and a remarkable 93% for humanities papers!
Odlyzko [1997] estimates that the cost per reader of a mathematical article may be on the order of $200. By comparison, the director of a major medical library has told me that his policy is to cancel journals for which the cost per article read appears to be over $50.
It is not commonly appreciated that one of the major impacts of on-line publication is that use can be easily and precisely monitored. Will academic administrators really pay subscription rates implying costs per reading of several hundred dollars?
Reengineering Journal Production
It seems clear that reduction in the costs of academic communication can only be achieved by reengineering the manuscript-handling process. Here I use "reengineering" in both its original sense (rethinking the process) and its popular sense (reducing labor costs).
The current process of manuscript handling is not particularly mysterious. The American Economic Review works something like this. The author sends three paper copies of an article to the main office in Princeton. The editor assigns each manuscript to a coeditor based on the topic of the manuscript and the expertise of the coeditor. (The editor also reviews manuscripts in his own area of expertise.) The editor is assisted in these tasks by a staff of two to three FTE clerical workers.
The manuscripts arrive in the office of the coeditor, who assigns them to two or more reviewers. The coeditor is assisted in this task by a half-time clerical worker. After some nudging, the referees usually report back and the coeditor makes a decision about whether the article merits publication. At the AER, about 12% of the submitted articles are accepted.
Typically the author revises accepted articles for both content and form, and the article is again sent to the referees for further review. In most cases, the article is then accepted and sent to the main office for further processing. At the main office, the article is copyedited and further prepared for publication. It is then sent to be typeset. The proof sheets are sent to the author for checking. After corrections are made, the article is sent to the production facilities where it is printed, bound, and mailed.
Much of the cost in this process is in coordinating the communication: the author sends the paper to the editor, the editor sends it to the coeditor, the coeditor sends it to referees, and so on. These costs require postage and time, but most important, they require coordination. This role is played by the clerical assistants.
Universal use of electronic mail could undoubtedly save significant costs in this component of the publication process. The major enabling technologies are standards for document representation (e.g., Microsoft Word, PostScript, SGML) and multimedia e-mail.
Revelt [1996] sampled Internet working paper sites to determine what formats were being used. According to his survey, PostScript and PDF are the most popular formats for e-prints, with TeX common in technical areas and HTML in nontechnical areas. It is likely that standardization on two to three formats would be adequate for most authors and readers. My personal recommendation would be to standardize on Adobe PDF since it is readily available, flexible, and inexpensive.
With respect to e-mail, the market seems to be rapidly converging to MIME as a standard for including documents in e-mail; I expect this convergence to be complete within a year or two.
These developments mean that the standards are essentially in place to move to electronic document management during the editorial and refereeing process. Obviously, new practices would have to be developed to ensure security and document integrity. Systems for time-stamping documents, such as Electronic Postmarks, are readily available; the main barrier to their adoption is the training necessary for their use.
Impact of Reengineering
If all articles were submitted and distributed electronically, I would guess that the costs of the editorial process would drop by about 50% due to the reduction in clerical labor costs, postage, photocopying, and so on. Such costs comprise about half the first-copy costs, so this savings would be noteworthy for small journals. (See Appendix A for the cost breakdown of a small mathematics journal.)
Once the manuscript was accepted for publication, it would still have to be copyedited and converted to a uniform style. In most academic publishing, copy editing is rather light, but there are exceptions. Conversion to a uniform style is still rather expensive because of the idiosyncrasies of authors' word processing systems and writing habits.
It is possible that journals could distribute electronic style sheets that would help authors achieve a uniform style, but experience thus far has not given great reason for optimism on this front. Journals that accept electronic submissions report significant costs in conversion to a uniform style.
One question that should be taken seriously is whether these conversion costs for uniform style are worth it. Typesetting costs are about $15 to $25 per page for moderately technical material. Markup probably requires two to three hours of a copyeditor's time. These figures mean that preparation costs for a 20-page article are on the order of $500. If a hundred people read the article, is the uniform style worth $5 apiece to them? Or, more to the point, if 10 people read the article, is the uniform style worth $50 apiece?
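The per-reader arithmetic can be made explicit. The per-page and hours figures are from the text; the $40/hour copyeditor rate is my own assumption, inserted only to make the total come out to the $500 order of magnitude:

```python
# Illustrative per-reader cost of imposing a uniform style on a 20-page
# article. The $40/hour copyeditor rate is an assumption, not a measurement.
pages = 20
typesetting_per_page = 20            # midpoint of the $15-$25 range
copyedit_hours, copyedit_rate = 2.5, 40

prep_cost = pages * typesetting_per_page + copyedit_hours * copyedit_rate
print(prep_cost)                     # -> 500.0

for readers in (100, 10):
    print(readers, prep_cost / readers)   # $5 apiece, then $50 apiece
```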
The advent of desktop publishing dramatically reduced the cost of small-scale publication. But it is not obvious that the average quality of published documents went up. The earlier movement from hot type to digital typography had the same impact. As Knuth [1979] observes, digitally typeset documents cost less but had lower quality than documents set manually.
My own guess about this benefit-cost trade-off is that the quality of professionally formatted documents isn't worth the cost for material that is read by only small numbers of individuals. The larger the audience, the more beneficial and cost-effective formatting becomes. I suggest a two-tiered approach: articles that are formatted by authors are published very inexpensively. Of these, the "classics" can be "reprinted" in professionally designed formats.
A further issue arises in some subjects. Author-formatted documents may be adequate for reading, but they are not adequate for archiving. It is very useful to be able to search and manipulate subcomponents of an article, such as abstracts and references. This archiving capability means that the article must be formatted in such a way that these subcomponents can be identified. Standard Generalized Markup Language (SGML) allows for such formatting, but it is rather unlikely that it could be implemented by most authors, at least using tools available today.
The benefits from structured markup are significant, but markup is also quite costly, so the benefit-cost trade-off is far from clear. I return to this point below.
In summary, reengineering the manuscript-handling process by moving to electronic submission and review may save close to half of the first-copy costs of journal production. If we take the $2,000 first-copy costs per article as representative, we can move the first-copy costs to about $1,000. Shifting the formatting responsibility to authors would reduce quality, but would also save even more on first-copy costs. For journals with small readership, this trade-off may be worth it. Indeed, many humanities journals have moved to on-line publication for reasons of reduced cost.
Odlyzko [1997] estimates that the cost of Ginsparg's [1996] electronic preprint server is between $5 and $75 per paper. These papers are formatted entirely by the authors (mostly using TeX) and are not refereed. Creation and electronic distribution of scholarly work can be very inexpensive; you have to wonder whether the value added by traditional publishing practices is really worth it.
Up until now we have only considered the costs of preparing the manuscript for publication. If the material were subsequently distributed electronically, there would be further savings. We can classify these as follows:
• Shelf space savings to libraries. As we've seen, these savings could be on the order of $35 per volume in present value. However, electronic archiving is not free. Running a Web server or creating a CD is costly. Even more costly is updating the media. Books that are hundreds of years old can easily be read today. Floppy disks that are 10 years old may be unreadable because of obsolete storage media or formatting. Electronic archives will need to be backed up, transported to new media, and translated. All these activities are costly. (Of course, traditional libraries are also costly; the ARL estimates this cost to be on the order of $12,000 per faculty member per year. Electronic document archives will undoubtedly reduce many of the traditional library costs once they are fully implemented.)
• Monitoring. As mentioned above, it is much easier to monitor the use of electronic media. Since the primary point of the editorial and refereeing process is to economize on readers' attention, it should be very useful to have some feedback on whether articles are actually read. This feedback would help university administrators make more rational decisions about journal acquisition, faculty retention, and other critical resource allocation issues.
• Search. It is much easier to search electronic media. References can be immediately displayed using hyperlinks. Both forward and reverse bibliographic searches can be done using on-line materials, which should greatly aid literature analysis.
• Supporting materials. The incremental costs of storing longer documents are very small, so it is easy to include data sets, images, detailed analyses, simulations, and so on that can improve scientific communication.
Chickens and Eggs
The big issue facing those who want to publish an electronic journal is how to get the ball rolling. People will publish in electronic journals that have lots of readers; people will read electronic journals that contain lots of high-quality material.
This kind of chicken-and-egg problem is known in economics as a "network externalities" problem. We say that a good (such as an electronic journal) exhibits network externalities if an individual's value for the product depends on how many other people use it. Telephones, faxes, and e-mail all exhibit network externalities. Electronic journals exhibit a kind of indirect form of network externalities since the readers' value depends on how many authors publish in the journal and the number of authors who publish depends on how many readers the journal has.
There are several ways around this problem, most of which involve discounts for initial purchasers. You can give the journal away for a while, and eventually charge for it, as the Wall Street Journal has done. You can pay authors to publish, as the Bell Journal of Economics did when it started. It is important to realize that the
payment doesn't have to be a monetary one. A very attractive form of payment is to offer prizes for the best articles published each year in the journal. The prizes can offer a nominal amount of money, but the real value is being able to list such a prize on your curriculum vitae. In order to be credible, such prizes should be juried and promoted widely. This reward system may be an attractive way to overcome young authors' reluctance to publish in electronic journals.
When Everything is Electronic
Let us now speculate a bit about what will happen when all academic publication is electronic. I suggest that (1) publications will have more general form; (2) new filtering and refereeing mechanisms will be used; (3) archiving and standardization will remain a problem.
The fundamental problem with specialized academic communication is that it is specialized. Many academic publications have fewer than 100 readers. Despite these small numbers, the academic undertaking may still be worthwhile. Progress in academic research comes by dividing problems up into small pieces and investigating these pieces in depth. Painstaking examination of minute topics provides the building blocks for grand theories.
However, much can be said for the viewpoint that academic research may be excessively narrow. Rumor has it that a ghost named Pedro haunts the bell tower at Berkeley. The undergrads make offerings to Pedro at the Campanile on the evening before an exam. Pedro, it is said, was a graduate student in linguistics who wanted to write his thesis on Sanskrit. In fact, it was a thesis about one word in Sanskrit. And, it was not just one word, but in fact was on one of this word's forms in one of the particularly obscure declensions of Sanskrit. Alas, his thesis committee rejected Pedro's topic as "too broad."
The narrowness of academic publication, however, is not entirely due to the process of research, but is also due to the costs of publication. Editors encourage short articles, partly to save on publication costs but mostly to save on the attention costs of the readers. Physics Letters is widely read because the articles are required to be short. But one way that authors achieve the required brevity is to remove all "unnecessary" words, such as conjunctions, prepositions, and articles.
Electronic publication eliminates the physical costs of length, but not the attention costs. Brevity will still be a virtue for some readers; depth will be a virtue for others. Electronic publication allows for mass customization of articles, much like the famous inverted pyramid in journalism: there can be a one-paragraph abstract, a one-page executive summary, a four-page overview, a 20-page article, and a 50-page appendix. User interfaces can be devised to read this "stretchtext."
Some of these textual components can be targeted toward generalists in a field,
some toward specialists. It is even possible that some components could be directed toward readers who are outside the academic specialty represented. Reaching a large audience would, presumably, provide some incentive for the time and trouble necessary to create such stretchtext documents.
This possibility for variable-depth documents that can have multiple representations is very exciting. Well-written articles could appeal both to specialists and to those outside the specialty. The curse of the small audience could be overcome if the full flexibility of electronic publication were exploited.
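One way to picture such a variable-depth document is as an ordered list of progressively deeper layers, each aimed at a different audience. The structure below is purely illustrative; the names and page counts are the examples from the text:

```python
# A purely illustrative "stretchtext" article: ordered layers of increasing
# depth, so a reader can stop at whatever level suits their interest.
from dataclasses import dataclass

@dataclass
class Layer:
    title: str
    pages: int
    audience: str

article = [
    Layer("Abstract", 1, "everyone"),
    Layer("Executive summary", 1, "readers outside the specialty"),
    Layer("Overview", 4, "generalists in the field"),
    Layer("Full article", 20, "specialists"),
    Layer("Appendix: proofs, data, simulations", 50, "specialists"),
]

def expand_to(article, max_pages):
    """Return the deepest layer a reader with a given page budget would open."""
    shown = [layer for layer in article if layer.pages <= max_pages]
    return shown[-1].title if shown else article[0].title

print(expand_to(article, 5))    # -> 'Overview'
```

A reading interface built on such a structure lets the same document serve the casual browser and the specialist, which is the point of the argument above.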
As I noted earlier, one of the critical functions of the academic publishing system is to filter. Work cannot be cumulative unless authors have some faith that prior literature is accurate. Peer review helps ensure that work meets appropriate standards for publication.
There is a recognized pecking order among journals, with high-quality journals in each discipline having a reputation for being more selective than others. This pecking order helps researchers focus their attention on areas that are thought by their profession to be particularly important.
In the last 25 years many new journals have been introduced, with the majority coming from the private sector. Nowadays almost anything can be published somewhere; the only issue is where. Publication itself conveys little information about quality.
Many new journals are published by for-profit publishers. They make money by selling journal subscriptions, which generally means publishing more articles. But the value of peer review comes in being selective, a value almost diametrically opposed to increasing the output of published articles.
I mentioned above that one of the significant implications of electronic publication was that monitoring costs are much lower. It will be possible to tell with some certainty what is being read. This monitoring will allow for more accurate benefit-cost comparisons with respect to purchase decisions. But perhaps even more significantly, it will allow for better evaluation of the significance of academic research.
Citation counts are often used as a measure of the impact of articles and journals. Studies in economics [Laband and Piette 1994] indicate that most of the citations are to articles published in a few journals. More articles are being published, a smaller fraction of which are read [de Sola Pool 1983]. It is not clear that the filtering function of peer review is working appropriately in the current environment.
Academic hiring and promotion policies contribute an additional complication. Researchers choose narrower specialties, making it more difficult to judge achievement locally. Outside letters of evaluation have become worthless because of the lack of guarantees of privacy. All that is left is the publication record, and the quantity of publication is easier to convey to nonexperts than the quality.
The result is that young academics are encouraged to publish as much as possible in their first five to six years. Accurate measures of the impact of young researchers' work, such as citation counts, cannot be accumulated in this short a time period. One reform that would probably help matters significantly would be to put an upper limit on the number of papers submitted as part of tenure review. Rather than submitting everything published in the last six years, assistant professors could submit only their five best articles. This reform would, I suggest, lead to higher quality work and higher quality decisions on the part of review boards.
Dimensions of Filtering
If we currently suffer from a glut of information, electronic publication will only make matters worse. Reduced cost of publication and dissemination is likely to make more and more material available. This proliferation isn't necessarily bad; it simply means that the filtering tools will have to be improved.
I would argue that journals filter papers on two dimensions: interest and correctness. The first thing a referee should ask is, "Is this interesting?" If the paper is interesting, the next question should be, "Is this correct?" Interest is relatively easy to judge; correctness is substantially more difficult. But there isn't much value in determining correctness if interest is lacking.
When publication was a costly activity, it was appropriate to evaluate papers prior to publication. Ideally, only interesting and correct manuscripts would undergo the expensive transformation of publication. Furthermore, publication is a binary signal: either a manuscript is published or it is not.
Electronic publication is cheap. Essentially everything should be published, in the sense of being made available for downloading. The filtering process will take place ex post, so as to help users determine which articles are worth downloading and reading. As indicated above, the existing peer review system could simply be translated to this new medium. But the electronic media offer possibilities not easily accomplished in print media. Other models of filtering may be more effective and efficient.
A Model for Electronic Publication
Allow me to sketch one such model for electronic publishing that is based on some of the considerations above. Obviously it is only one model; many models should and will be tried. However, I think that the model I suggest has some interesting features.
First, the journal assembles a board of editors. The function of the board is not just to provide a list of luminaries to grace the front cover of the journal; they will actually have to do some work.
Authors submit (electronic) papers to the journal. These papers have three parts: a one-paragraph abstract, a five-page summary, and a 20- to 30-page conventional paper. The abstract is a standard part of academic papers and needs no further discussion. The summary is modeled after the Papers and Proceedings issue of the American Economic Review: it should describe what question the author addresses, what methods were used to answer the question, and what the author found. The summary should be aimed at as broad an audience as possible. This summary would then be linked to the supporting evidence: mathematical proofs, econometric analysis, data sets, simulations, and so on. The supporting evidence could be quite technical and would probably end up being similar in structure to current published papers.
Initially, I imagine that authors would write a traditional paper and pull out parts of the introduction and conclusion to construct the summary section. This method would be fine to get started, although I hope that the structure would evolve beyond this.
The submitted materials will be read by two to three members of the editorial board who will rate them with respect to how interesting they are. The editors will be required only to evaluate the five-page summary and will not necessarily be responsible for evaluating the correctness of the entire article. The editors will use a common curve; e.g., no more than 10% of the articles get the highest score. The editorial score will be attached to the paper and be made available on the server. Editors will be anonymous; only the score will be made public.
Note that all papers will be accepted; the current rating system of "publish or not" is replaced by a scale of (say) 1-5. Authors will be notified of the rating they received from the editors, and they can withdraw the paper at this point if they choose to do so. However, once they agree that their paper be posted, it cannot be withdrawn (unless it is published elsewhere), although new versions of it can be posted and linked to the old one.
Subscribers to the journal can search all parts of the on-line papers. They can also ask to be notified by e-mail of all papers that receive scores higher than some threshold or that contain certain keywords. When subscribers read a paper, they also score it with respect to its interest, and summary statistics of these scores are also (anonymously) attached to the paper.
Since all evaluations are available on-line, it would be possible to use them in quite creative ways. For example, I might be interested in seeing the ratings of all readers with whom my own judgments are closely correlated (see Konstan et al. [1997] for an elaboration of this scheme). Or I might be interested in seeing all papers that were highly rated by Fellows of the Econometric Society or the Economic History Society.
This sort of "social recommender" system will help people focus their attention on research that their peers, whoever they may be, find interesting. Papers that are deemed interesting can then be evaluated with respect to their correctness.
Authors can submit papers that comment on or extend previous work. When
they do so, they submit a paper in the ordinary way with links to the paper in question as well as to other papers in this general area. This discussion of a topic forms a thread that can be traversed using standard software tools. See Harnad [1995, 1997] for more on this topic.
Papers that are widely read and commented on will certainly be evaluated carefully for their correctness. Papers that aren't read may not be correct, but that presumably has low social cost. The length of the thread attached to a paper indicates how many people have (carefully) read it. If many people have read the paper and found it correct, a researcher may have some faith that the results satisfy conventional standards for scientific accuracy.
This model is unlike the conventional publishing model, but it addresses many of the same design considerations. The primary components are as follows:
• Articles have varying depths, which allows them to appeal to a broad audience as well as satisfy specialists.
• Articles are rated first with respect to interest by a board of editors. Articles that are deemed highly interesting are then evaluated with respect to correctness.
• Readers can contribute to the evaluation process.
• The unit of academic discourse becomes a thread of discussion. Interesting articles that are closely read and evaluated can be assumed to be correct and therefore serve as bases for future work.
Appendix A Cost of a Small Math Journal
The production costs of the Pacific Journal of Mathematics have been examined by Kirby [1997]. This journal publishes 10 issues of about 200 pages each per year. A summary of its yearly costs is given in Table 25.1.
The PJM charges $275 per subscription and has about 1,000 subscribers. The journal also prints about 500 additional copies per year, which go to the sponsoring institutions in lieu of rent, secretarial support, office equipment, and so on.
The first-copy costs per page are about $64, while the variable cost per page printed and distributed is about 3.5 cents. The average article in this journal is about 20 pages long, which makes the first-copy cost per article about $1,280, somewhat smaller than the $2,000 figure in Tenopir and King [1996]. However, the PJM does not pay for space and for part of its secretarial support; adding in these costs would reduce the difference. The cost of printing and distributing a 200-page issue is about $7 per copy, roughly consistent with the figure used in this paper.
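The appendix arithmetic can be verified directly from the figures as quoted above:

```python
# Checking the Pacific Journal of Mathematics figures quoted above.
pages_per_issue, issues_per_year = 200, 10
first_copy_per_page = 64.0           # dollars
variable_cents_per_page = 3.5        # cents, printed and distributed
article_pages = 20

first_copy_per_article = first_copy_per_page * article_pages
issue_print_cost = variable_cents_per_page * pages_per_issue / 100
yearly_first_copy = first_copy_per_page * pages_per_issue * issues_per_year

print(first_copy_per_article)   # -> 1280.0  (vs. the Tenopir-King $2,000)
print(issue_print_cost)         # -> 7.0     (per copy of one issue)
print(yearly_first_copy)        # -> 128000.0 total first-copy cost per year
```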
Research support from NSF grant SBR-9320481 is gratefully acknowledged.
Michael Cooper. A cost comparison of alternative book storage strategies. Library Quarterly, 59(3), 1989.
Ithiel de Sola Pool. Tracking the flow of information. Science, 221(4611):609-613, 1983.
Paul Ginsparg. Winners and losers in the global research village. Technical report, Los Alamos, 1996. http://xxx.lanl.gov/blurb/pg96unesco.html.
Stevan Harnad. The paper house of cards (and why it's taking so long to collapse). Ariadne, 8, 1997. http://www.ariadne.ac.uk/issue8/harnad/.
Stevan Harnad. The post-Gutenberg galaxy: How to get there from here. Times Higher Education Supplement, 1995. http://cogsci.ecs.soton.ac.uk:80/~harnad/THES/thes.html.
Rob Kirby. Comparative prices of math journals. Technical report, UC Berkeley, 1997. http://math.berkeley.edu/~kirby/journals.html.
Donald Knuth. TeX and METAFONT: New Directions in Typesetting. American Mathematical Society, Providence, R.I., 1979.
Joseph A. Konstan, Bradley N. Miller, David Maltz, Jonathan L. Herlocker, Lee R. Gordon, and John Riedl. GroupLens: Applying collaborative filtering to Usenet news. Communications of the ACM, 40(3):77-87, 1997.
David N. Laband and Michael J. Piette. The relative impact of economics journals: 1970-1990. Journal of Economic Literature, 32(2):640-66, 1994.
Michael Lesk. Books, Bytes, and Bucks: Practical Digital Libraries. Morgan Kaufmann, San Francisco, 1997.
Andrew Odlyzko. The economics of electronic journals. Technical report, AT&T Labs, 1997.
David Revelt. Electronic working paper standards. Technical report, UC Berkeley, 1996. http://alfred.sims.berkeley.edu/working-paper-standards.html.
Carol Tenopir and Donald W. King. Trends in scientific scholarly journal publishing. Technical report, School of Information Sciences, University of Tennessee, Knoxville, 1996.