TULIP (The University Licensing Program)
Elsevier Science has been working toward the electronic delivery of its journals for nearly two decades. Its early discussions with other publishers about what became ADONIS started in 1979. Throughout the 1990s there have been a number of large and small programs, some experimental, some commercial. Each has given us some knowledge of user behavior in response to price, although in some cases the "user" is the institution rather than the end user. The largest experimental program was TULIP (The University LIcensing Program).
TULIP was a five-year experimental program (1991-1995) in which Elsevier partnered with nine leading U.S. universities (including all the universities within the University of California system) to test desktop delivery of electronic journals. The core of the experiment was the delivery of an initial set of 43 journals in materials science, with a further 40 optional titles added later. The files were in bitmapped (TIFF) format, with searchable ASCII headers and unedited, OCR-generated ASCII full text. The universities received the files and mounted them locally, using a variety of hardware and software configurations. The intent was to integrate the journals with, or at least present them consistently alongside, the other information offered on campus networks. No two institutions used the same approach, and the extensive learning gained has been summarized in a final report (available at http://www.elsevier.com/locate/TULIP).
A few relevant observations from that report follow. First, the libraries (through which the experiment was managed) generally chose a conservative approach in a number of discretionary areas. For example, although there was a document delivery option for titles not subscribed to (each library received the electronic counterparts of its paper subscriptions), no one exercised it. Similarly, the full electronic versions of nonsubscribed titles were offered at a highly discounted rate (30% of list) but found essentially no takers. The most frequently expressed view was that a decision had already been made not to subscribe to the title, so acquiring it even at a reduced rate was not a good purchasing decision.
Second, one of the initial goals of the experiment was to explore economic issues. Whereas the other goals (technology testing and evaluating user behavior) were well explored, the economic goal was less developed. That shortfall stemmed in part from the initial expectations and in part from the experimental design. From our side as publisher, we were eager to try out different distribution models on campus, including models in which there would be at least some charge for access. However, charging a fee was never set as a requirement, nor were individual institutions assigned to different economic tests, and in the end all opted to make no charge for access. This decision was entirely understandable, given both the local campus cultures and the other issues involved in simply getting the service up and running and promoting it to users. It did mean, however, that we never gathered any data in this area.
From the universities' side, there was a hope that more progress would be made toward developing new subscription models. We did have a number of serious discussions, but again less was achieved than might have been hoped for a test of a radical change in the paradigm. I think everyone is now more experienced and realizes that these issues are complex and take time to evolve.
Finally, the other relevant finding from the TULIP experiment is that use was very heavily related to the (lack of) perceived critical mass. Offering journals to the desktop is valuable only if they are the right journals and if they are supplied on a timely basis. Timeliness was compromised because the electronic files were produced after the paper edition, a necessity at the time but not how we (or other publishers) currently proceed. Critical mass was also compromised because, although a great deal of material was delivered (11 GB per year), materials science is a very broad discipline and the number of journals relevant to any one researcher was still limited. If the set included "the" journal, or one of the key journals that a researcher (or, more likely, a graduate student) needed, use was high. Otherwise, users did not return regularly to the system. And use was infrequent even though there was no charge for it.