Six
When Champions Aren't Enough
Europe by 1980 was a land of giants as far as telematics was concerned. The hopes of governments for domestic telematics industries rested on champions whose ordination I described in Chapter 4. But the local favorites were not the only titans doing battle for European markets. American and Japanese multinationals exported to the Old Continent and also established their own factories or joint ventures on European soil. By 1980 the Japanese and Americans dominated both the European and world markets for semiconductors and computers, and were cutting deeply into Europe's traditional trade surplus in telecommunications. Furthermore, the scale and pace of U.S. and Japanese R&D threatened to increase the technical advantages of Europe's rivals. These factors added up to a crisis for Europe.
The continued (and increasing) dominance of American and Japanese telematics firms signaled to many Europeans the failure of traditional policies. A generalized sense of crisis arose, with old approaches discredited and no fresh strategy to take their place. The early 1980s thus saw a technology-gap scare similar to the crisis of the mid-1960s. The feeling of falling steadily and perhaps irretrievably behind the United States and Japan recalled the panic and even much of the rhetoric of the earlier scare.[1] The difference the second
time around was that European leaders added a collaborative element to their policy response.
[1] See, for example, Margaret Sharp and Claire Shearman, European Technological Collaboration, chap. 1; and Andrew J. Pierre, ed., A High Technology Gap?
Japan's emergence constituted the major structural change at the world level in telematics. Europeans had been accustomed to American dominance of electronics markets. Japan's successful challenge to U.S. preeminence provided an additional catalyst to the emerging telematics crisis in Europe. The crisis was aggravated by American and Japanese programs that threatened to shove Europe even further to the rear.
But relative international weakness cannot explain why the European countries collaborated. The Europeans had always been weak in telematics relative to their international competitors yet had never collaborated before. The international setting defined the problem facing European governments but did not (and could not) determine the nature of the European response. That collaboration emerged was due to entirely different factors.
This chapter has two major parts. In the first I briefly describe Europe's international position in the telematics sectors circa 1980, as well as American and Japanese telematics initiatives. I then analyze the technological changes that were responsible for the emergence of a powerful consensus in favor of collaboration among Europe's largest telematics firms. The telematics industry played a crucial role in convincing governments of the importance of ESPRIT and RACE.
The International Setting
Painting a picture of Europe's troubles and Japan's good fortune will be a matter of numbers and anecdotes. The data show how market shares shifted; the anecdotes illustrate the fate of specific firms.
L'Europe Couchante
As pointed out in Chapter 4, European components makers (and governments) failed to appreciate the importance of digital, large-scale ICs. During the 1970s, digital ICs, both memory chips and microprocessors, became the essential raw material for data-processing, telecommunications, industrial automation, and military and
consumer electronics. European producers, formerly strong in discrete and analog IC devices, steadily lost market shares. The share of West European firms in world semiconductor markets fell from about 16 percent in 1978 to about 12 percent in 1983; by 1988 it was down to 10 percent, though it rose slightly in 1990 to almost 11 percent.[2] Europe's performance in ICs, the crucial category of semiconductors, was even more dismal. By 1978 the share of Western Europe in world production of ICs was only 6.7 percent; it declined to 5.8 percent in 1980 and rose slightly to 5.9 percent in 1982.[3] Even in the European market American companies dominated, as shown in Table 6.1.
In addition growth rates for demand and production of ICs in Europe were well below the rates for the United States and Japan. While demand expanded at 20 percent per year in the United States and 19 percent per year in Japan over the period 1978–82, it grew by only 13 percent per annum in Europe. Production also increased more rapidly in the United States (17 percent annually) and Japan (25 percent annually) than in Europe (12 percent per year) during the same stretch.[4] Thus, Europe was not benefiting from the microelectronics revolution in the same way that its trade rivals were. One revealing indicator of this discrepancy is the per capita consumption of semiconductors. In 1984 the consumption of semiconductors in the United States had a value of $52 per capita, and in
Japan $61 per capita. The average for all Europe was $14 per capita, with Germany marking the high end at $22 per capita.[5] Not only was Europe behind, but its competitors were accelerating faster.
[2] Jonathan Weber, "U.S. Gains Ground in World Chip Market," Los Angeles Times, 3 January 1991, p. D1.
[3] OECD, Semiconductor Industry, 102.
[4] Ibid., 103.
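As a rough illustration of how quickly the gap compounded, the sketch below applies the cited average annual growth rates for IC demand uniformly over 1978–82, normalizing each region's 1978 demand to 1.0; the base values are illustrative assumptions, not figures from the sources cited here.

```python
# Rough illustration: compound the cited average annual growth rates for IC
# demand over 1978-82, with each region's 1978 demand normalized to 1.0.
rates = {"United States": 0.20, "Japan": 0.19, "Western Europe": 0.13}
years = 4  # 1978 -> 1982

for region, r in rates.items():
    print(f"{region}: 1982 demand index = {(1 + r) ** years:.2f}")
# United States: 2.07, Japan: 2.01, Western Europe: 1.63 -- U.S. and Japanese
# demand roughly doubled while European demand grew by about 63 percent.
```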
The situation was so bad that even by the mid-1970s there was not a single European producer of standard ICs.[6] Some firms were successful in niche markets, like Ferranti in uncommitted gate arrays. Most European producers built custom or semicustom chips largely for use in their own final products (computers, telecommunications systems, consumer electronics). The IC divisions of the electronics champions were all losing money: In 1980 Siemens had not shown a profit in ICs since 1965 and SGS-Ates never had.[7] Thomson was a consistent loser through 1984.[8] Philips showed a profit on ICs only in 1979, with help from the American firm Signetics, which it purchased in 1975. About half of Philips's IC production went to in-house uses. Philips has been the only European IC maker big enough to make the world top ten, and it slipped from fourth place in 1979 to sixth place in 1983.[9]
The situation in computers was no better. Table 6.2 shows the world's top twenty-five firms in the data-processing industry for 1978, 1983, and 1986. The European computer makers hover near the middle of the top twenty-five, with Siemens and Olivetti moving up and finally cracking the top ten. Broken down by category, the picture is no better. Only one European maker cracked the 1983 top ten in mainframes (Siemens in eighth place), and its two top-of-the-line models had been manufactured under license from Fujitsu since 1978.[10] ICL sold Fujitsu's Atlas 10 (IBM-compatible) mainframe until 1984, and after that its advanced Series 39 contained forty-three chips developed by Fujitsu under a technology agreement. ICL's microcomputers have been Sun workstations.[11] In minicomputers only Olivetti cracked the top ten from Europe (in
seventh place), and in microcomputers Olivetti was again the only European member of the top ten, in ninth place.[12]
[5] Michael G. Borrus, Competing for Control, 199.
[6] See Malerba, Semiconductor Business, 119. Standard ICs are commodity ICs sold on the world market as opposed to custom or semicustom chips (sometimes called application-specific integrated circuits, or ASICs).
[7] Ibid., 166, 171.
[8] Guy de Jonquieres and Paul Betts, "The Euphoria Is Over," Financial Times, 7 February 1985, p. 14; Paul Betts, "Thomson Has Another Try," Financial Times, 25 October 1985, p. 18.
[9] Malerba, Semiconductor Business, 164; OECD, Semiconductor Industry, 116.
[10] Pamela Archbold and John Verity, "A Global Industry: The Datamation 100," 38; Laurence P. Solomon, "The Top Foreign Contenders," 81.
[11] Kelly, British Computer Industry, 47; Guy de Jonquieres, "Electronics in Europe," Financial Times, 28 March 1984, Survey, p. 1.
European firms had not been able to challenge IBM's dominance. IBM as of 1975 held over half the computer market in France, Germany, and Italy, and 40 percent of the market in the United Kingdom.[13] Even in Britain by 1985 IBM mainframes installed outnumbered those of the national champion, ICL, and were selling faster.[14] Indeed, in 1983 IBM Europe had data-processing revenues more than seven times greater than those of its nearest rival, Bull. And Bull was losing money: It showed losses of FFr 1.35 billion in 1982, FFr 625 million in 1983, and FFr 489 million in 1984.[15] In 1983, nine of the top fifteen computer makers in Europe were American firms. IBM alone had data-processing revenues greater than those of the next nine largest manufacturers combined. IBM's share was 42 percent, up from 38 percent in 1981. Of the top twenty-five firms operating in Europe in 1983, thirteen were American.[16] Their combined share of the European market was 81 percent.[17] All this American dominance had happened despite government subsidies and protected markets for the national champions.
In contrast to the situation in semiconductors and computers, in telecommunications in the early 1980s Europe was not suffering from obvious and longstanding failings. In fact, European telecommunications technology was among the most advanced; the French developed the first fully digital switching system in the mid-1970s. Furthermore, none of the European countries with indigenous telecoms-equipment production showed a trade deficit in the sector, as shown in Table 6.3. The EC countries as a bloc managed a telecommunications-equipment trade surplus with the rest of the world of about $1.7 billion in 1982. In other words, the telecoms sector in the early 1980s did not appear to be crying out for help.
But prying into the statistics shows that the rosy overall picture was deceptive. The strong EC trade surplus was built on exports to Third World countries, especially to members of the Organization
of Petroleum Exporting Countries (OPEC). In fact, about 80 percent of equipment exports in 1980 (not counting trade within the EC) went to the Third World, as seen in Table 6.4. The EEC trade surplus in telecoms equipment began declining in the early 1980s; 1985 was the third straight year of contraction, with the surplus dropping to 1,247 MECU from 1,533 MECU the year before. In addition, the EC registered a growing trade deficit in the sector vis-à-vis the United States and Japan. Its deficit with the United States grew 25 percent in 1985 to 657 MECU, and that with Japan rose by 61 percent to 582 MECU.[18] These trade deficits with the United States and Japan suggested that Europe was weak in the most advanced sectors of the telecommunications market.[19]
[12] Archbold and Verity, "A Global Industry," 38.
[13] Malerba, Semiconductor Business, 181.
[14] Kelly, British Computer Industry, 15.
[15] Guy de Jonquieres and Paul Betts, "The Euphoria Is Over," Financial Times, 7 February 1985, p. 14; Guy de Jonquieres, "The Harsh Imperatives of Survival," Financial Times, 24 June 1985, Survey, p. 16.
[16] Guy de Jonquieres, "Bull Emerges as Biggest European Computer Maker," Financial Times, 17 August 1984, p. 6.
[17] Malerba, Semiconductor Business, 181.
Certainly Europe has been slower than the United States and Japan in the diffusion of new products and services. This lag is due in part to the traditional role of the PTTs, which have monopolized the provision of networks (except in the United Kingdom), limited the provision of new services (like VANs), and until recently controlled
the kinds of terminal equipment that could be attached to the network. Thus, for example, on-line data services in Europe were worth about $200 million in 1982 as compared with $800 million in the United States. In videotex-type services (interactive on-line data banks), there were more subscribers in the United States than in all the EC. Europe lagged behind the United States in commercial satellites by about ten years; the marketing and promotional budget of just one American business satellite system, Satellite Business Systems, at $200 million, surpassed the entire EC investment in such services.[20] Facsimile, an advanced document-transmission technology, spread in Europe more slowly than in the United States: While the United States had over 225,000 terminals installed in 1980, Europe could count only 47,400.[21] All these signs led European telecommunications administrators, suppliers, and their governments to sense a dangerous weakening in Europe's traditional electronics stronghold.
[18] CEC, Towards a Dynamic European Economy, 158.
[19] Borrus et al., Telecommunications Development, 38.
Le Japon Levant
Some of the tables in the preceding section depict the broad lines of Japan's accelerated rise in advanced electronics. But additional details will show more clearly why Japan's surge was so frightening
to the Europeans. Remember that even in the 1960s Europe may have been slightly ahead of the Japanese in telematics because European scientists and enterprises pioneered many of the early advances in solid-state physics and semiconductors. Japan, following the guidance of MITI, first built its way to world leadership in steel and shipbuilding. Its entrée into electronics was consumer products—transistor radios, calculators, and televisions. By the 1980s Japanese producers dominated world markets for videotape recorders and compact-disc players (the high end of the consumer-electronics spectrum). Consumer products provided the initial demand for Japanese semiconductor companies, but computers and industrial applications rapidly became the chief sources of demand pull for advanced ICs in Japan.[22]
[20] As estimated by A. D. Little, European Telecommunications: Strategic Issues and Opportunities for the Decade Ahead, Executive Report, 25, 29.
[21] van Tulder and Junne, European Multinationals, 33.
Japan overtook the United States first in standard memory chips. Although a technology follower in the first generations of RAM circuits, Japan developed and produced 64K DRAMs (dynamic RAMs) ahead of American companies.[23] Whereas Japanese firms captured only 12 percent of the world market for 4K DRAMs in the mid-1970s, they took 70 percent of the world market for 64K chips in 1981 and 90 percent for 256K DRAMs in 1984.[24] Japan's trade balance in ICs converted from a $142 million deficit in 1976 to a $239 million surplus in 1981.[25] Although the Japanese share of the European IC market remained below 10 percent, it was growing. Japanese firms exported ICs worth only $12 million in 1976; that value rose to $165 million in 1980.[26]
Japan eventually surpassed the United States in the RAM market. In fact, by 1985 all but two American merchant producers of DRAMs had been driven from the market (Texas Instruments and tiny Micron Technology remained). Japanese firms were the first to produce one-megabit memory chips (with four times as much memory capacity as a 256K chip) and four-megabit circuits. In 1985, for the first time, a Japanese company, Nippon Electric Company (NEC), was the world's largest producer of semiconductors.[27] A year later
Japan's total world market share for ICs surpassed that of the United States for the first time, at just over 45 percent.[28] Furthermore, by 1986 Japanese companies were beginning to challenge U.S. dominance of the microprocessor realm.[29]
[22] Malerba, Semiconductor Business, 176.
[23] Borrus, Competing for Control, 143.
[24] Malerba, Semiconductor Business, 155.
[25] Ibid., 156.
[26] Ibid., 139.
[27] Michael Feibus, "Chip Companies Look for a Second Good Year," San Jose Mercury News, 4 January 1988, p. C2.
The story is not quite as dramatic in computers and telecommunications equipment, but Japanese advantages in IC production were increasingly conferring advantages on Japanese systems producers. In fact, the major Japanese semiconductor makers (NEC, Toshiba, Hitachi, Fujitsu, Mitsubishi, and Matsushita) all sold computers and telecommunications systems as well. As seen in Table 6.2, the number of Japanese computer firms in the world's top twenty-five rose from four in 1978 to six in 1986. Furthermore, Japanese companies like Fujitsu and Hitachi were providing computer technology (and sometimes the machines themselves) to European producers like Siemens, Badische Anilin-und Sodafabrik (BASF) and ICL. In telecommunications equipment the share of European manufacturers in total world exports in 1983 had been contracting at a rate of 1 percent per year for ten years, while the Japanese share had been increasing at a similar rate.[30] EEC imports of Japanese telecommunications equipment did not constitute a large share of the market, but they had been growing steadily. The Commission lamented in its Green Paper on telecoms that imports from Japan in 1985 totaled 616 MECU, while exports to Japan totaled only 34 MECU.[31] Of course, this discrepancy is as much a trade problem as a technology problem, but the Europeans had much reason to be nervous about the technological side also, as we shall see later in this chapter.
Striking as they are, the figures on Japan's telematics surge tell only half the story. Europeans were concerned not solely with the data but with how Japan achieved its breakthroughs. A set of government institutions and policies were behind the Japanese miracle. Europeans feared that the Japanese state would continue to force the technology pace, propelling the country ever further ahead. Although scholars have written shelves of books on Japanese industrial
policies, it is possible to summarize succinctly the principal features of the Japanese system that have encouraged accelerated growth in the telematics sectors.
[28] John Burgess, "U.S. Seeks Silicon Island Beachhead," International Herald Tribune, 1 April 1987.
[29] Borrus, Competing for Control, 176–77.
[30] A. D. Little, European Telecommunications, 39.
[31] CEC, Towards a Dynamic European Economy, 167.
Japan has confounded conventional economic wisdom by channeling economic and technological resources into sectors chosen by government agencies as being conducive to long-term growth.[32] There have been three principal elements of the Japanese approach:
First, targeting technologies is the province of MITI, which selects those industries that are likely to experience rapid technological change and world market growth over the long run. It accomplishes this task through constant consultations with industry and university scientists. As early as the late 1960s, MITI had selected semiconductors and computers as the sectors through which to improve Japan's competitiveness.[33]
Second, MITI and other government agencies became gatekeepers, protecting the home market for key sectors. Tariffs and quotas limited foreign access to markets where Japanese firms were trying to establish themselves, like semiconductors and computers. The government also reviewed all applications for foreign investment in Japan; foreign companies could establish themselves within the Japanese market only as junior partners in joint ventures with domestic firms and only by granting Japanese firms access to advanced technologies. Thus, in the period when American firms dominated world semiconductor and computer markets, only Texas Instruments and IBM could establish subsidiaries in Japan.[34]
A third and crucial part of the Japanese story has been government-sponsored, state-of-the-art R&D programs, which diffuse results among the principal firms. Two essential features of Japanese technology-development programs have been their cooperative character and government funding. For instance, the first major telematics R&D program aimed at producing the technologies to compete with IBM's 360 series. All the major electronics firms participated in the research and all had equal access to resulting patents
(which the government owned). The New Series program, launched in 1971, aimed at developing large-scale integration components and the production technologies needed to compete with IBM's 370 series. MITI financed joint R&D (to the tune of hundreds of millions of dollars) and oversaw the formation of industry groupings. NTT (the government telecommunications authority) supervised much of the research.
[32] In the account that follows, for general Japanese practices I rely on Freeman, Technology Policy, especially 33–49; and Dosi, Tyson, and Zysman, "Trade, Technologies, and Development." For specifics about Japanese promotion of the semiconductor and computer industries I rely on Borrus, Competing for Control; and Marie Anchordoguy, "Mastering the Market: Japanese Government Targeting of the Computer Industry," 509–43.
[33] Borrus, Competing for Control, 119–27.
[34] Ibid., 119–21; Anchordoguy, "Mastering the Market," 513–17.
In 1976 MITI, NTT, and the five largest semiconductor/computer companies joined in setting up the VLSI program. Again, the companies collaborated on R&D, with government funding of about $121 million. The VLSI program produced the breakthroughs that sent Japanese firms into the lead in ICs and allowed them to produce IBM-compatible computers that outperformed IBM. Japanese firms derived advantages from economizing on R&D by cooperating on generic technologies, then competing among themselves in the protected Japanese market.[35]
In short, the problem for Europe was not just that Japan had grown but how it had grown. The combination of government policies and practices encouraged a formidable system of innovation that promised to propel Japan a generation ahead of the competition, leaving Europe to glean the kernels left behind by Japan's fast-moving reaper.
Un Monde Menaçant
A final element in the telematics world that threatened further erosion of Europe's position was that both the United States and Japan had in place fresh programs designed to benefit their enterprises. I will describe first U.S. and Japanese initiatives in the semiconductor and computer fields, then those in telecommunications.
In the United States the assault on advanced microelectronics and computing came from the Department of Defense. Given the U.S. military's increasing reliance on high-tech weaponry, possessing leading-edge technologies had become a matter of national security. This was the logic behind the very-high-speed integrated-circuit (VHSIC) program and the Strategic Computing Initiative (SCI). The VHSIC program, launched in 1980, aimed at developing components comparable in level of integration (the number of electronic
elements packed onto the chip) to next-generation commercial ICs. But VHSIC chips also had to meet military requirements for low power consumption, low maintenance demands, built-in testing, durability, and radiation hardening. The R&D would be carried out by American companies selected by competitive bidding. For the first two phases firms on the VHSIC roster included such American telematics stars as IBM, TRW, Motorola, Sperry, Signetics, Burroughs, Texas Instruments, National Semiconductor, CDC, Honeywell, Fairchild, Westinghouse, and Hewlett-Packard. The total program received a budget of $680 million over eight years, on top of regular Department of Defense spending on electronics R&D.[36]
[35] Borrus, Competing for Control, 126–27; Anchordoguy, "Mastering the Market," 526–30.
In October 1983 the Pentagon's Defense Advanced Research Projects Agency announced a new program designed to make breakthroughs in advanced computing and artificial intelligence (AI). SCI R&D covers microelectronics, hardware and software architectures, intelligent functions (reasoning, vision, speech recognition), and military applications (like an autonomous land vehicle for the Army and a "pilot's associate" for the Air Force). Initial funding for SCI was $600 million for the first five years.[37]
Of course, the granddaddy of all American high-tech defense projects is SDI, or Star Wars. With an initial projected budget of $26 billion, SDI promised to advance the state of the art in numerous fields: lasers, new materials, communications, AI, particle beams, and others. As with the other major U.S. technology programs, SDI's impacts on civilian industry were ambiguous. Within the United States intense debate arose over whether SDI (and the VHSIC program and SCI) would help or handicap American industry in commercial competition.[38] In Europe the virtually universal perception was that the vast sums of money involved and the ambitious research agenda would push U.S. contractors to breakthroughs that would entail commercial advantages. American defense R&D programs in the 1980s reminded Europeans of the earlier NASA and Air Force programs that made possible the early American blastoff in ICs and computers. Indeed, as Chapter 9 will
show, EUREKA was triggered by SDI and the fears it inspired among European decision-makers.
[36] This description of the VHSIC program comes from Leslie Brueckner, with Michael G. Borrus, Assessing the Commercial Impact of the VHSIC Program, 8–14.
[37] Dwight B. Davis, "Assessing the Strategic Computing Initiative," 41–49.
[38] See Brueckner, Assessing the Commercial Impact; Jay Stowsky, Beating Our Plowshares into Double-Edged Swords; and Davis, "Assessing the Strategic Computing Initiative."
The Japanese deployed an impressive array of programs designed to push the technology frontier. Among these were the programs in optoelectronic components, budgeted at $77.5 million for the 1980s, and in New Function Elements, with $100 million for the period 1982–89.[39] The New Function Elements program was aimed at next-generation components, including super-lattice and three-dimensional elements and hardened (radiation-proof) circuits. A further project targeted supercomputers, with $92.3 million over 1982–90. But the most impressive, and to the Europeans the most frightening, program was the so-called Fifth Generation Computer (5G) project.
The 5G project, begun in 1982, aims at developing the software and hardware technologies needed for the next generation of computers. With 5G Japan hopes to leapfrog into the lead in AI. This goal will require technologies enabling computers to "think"—that is, to manipulate information via rules of logic (as opposed to simply carrying out instructions to perform mathematical operations). Like the VLSI program, 5G is run by MITI at a laboratory dedicated to that project with participation by NTT and the six principal Japanese computer firms (non-Japanese researchers were invited to participate but in practice 5G is virtually all Japanese). The original budget for 5G was about $400 million over ten years, though actual spending will fall short of that level. Spending for the first three-year phase totaled $33.7 million, somewhat below the $40 million originally foreseen.[40] Still, the 5G program alone provoked a series of imitator programs in Europe, as we shall see in the next chapter.
In the telecommunications realm developments in the United States and Japan caused serious consternation in Europe. The American telecommunications system coalesced in 1934 with the creation of the Federal Communications Commission (FCC) and the recognition of a natural monopoly in the provision of telecommunications, namely, AT&T's. Having bought up most of the small private telephone companies across the nation, AT&T would thereafter offer universal service, under the regulatory eye of the FCC. The monopoly
[39] OECD, Semiconductor Industry, 79.
[40] Tom Manuel, "Cautiously Optimistic Tone Set for 5th Generation," 57–58.
extended beyond networks, as AT&T owned its own R&D facilities in Bell Labs, as well as a manufacturing arm, Western Electric, to supply the system. However, AT&T was prohibited from competing overseas; its foreign subsidiaries had mostly been bought by a company that called itself ITT. This was the arrangement that would, over several decades, be deregulated.
The government brought an antitrust suit against AT&T in 1949. The consent decree agreed to by the parties in 1956 meant that AT&T kept its U.S. monopoly, but Bell Labs was required to make available its patents, and Western Electric could not sell equipment abroad. AT&T could not enter computer markets, though it possessed the technologies and the resources to do so. In 1959 the FCC allowed the creation of microwave links for data communications within companies. The FCC began in 1966 a far-ranging inquiry (now called Computer I) into data communications and the network and service monopolies of AT&T generally. The Carterfone decision in 1968 permitted the attachment of non-AT&T terminal equipment to the network. An FCC decision in 1971 allowed other companies to offer data-transmission services. In 1976 the FCC made it possible for specialized carriers to resell network capacity leased from AT&T; this decision in effect meant that competitors could offer both basic telephone and data services fully connected to the AT&T system. Companies like MCI and Sprint began to compete for the provision of basic long-distance services. Yet AT&T could offer only basic telephony, even though its competitors could provide all services.
Finally, in 1982 a second antitrust case and the Computer II inquiry concluded with AT&T and the Justice Department agreeing to a modified final judgment (it modified the 1956 consent decree). Under its terms AT&T could compete in enhanced services after divesting itself of the local Bell operating companies (BOCs). The twenty-two local BOCs were hived off to seven regional holding companies, which could offer both basic and enhanced services under FCC regulation. AT&T retained only its interexchange (long-distance) lines. More important for world competition, AT&T could enter the computer market and sell equipment overseas. The breakup of AT&T constituted the deregulatory volley that was heard around the world.
Simultaneously, the Department of Justice settled its longstanding antitrust suit against IBM, permitting the computer colossus to enter telecommunications markets, from which it had been blocked.
Thus, as of 1984 the two giants AT&T and IBM would each attack the other's base markets—AT&T entering computers and IBM, telecoms.[41] Furthermore, the world would be the battleground, and because Japan constituted a fairly closed market, that meant Europe. The fears aroused in Europe by that prospect were reaching their peak precisely during the years RACE was getting underway, namely, 1984–85. The American threat therefore figured prominently in the selling of RACE, as I will show in Chapter 8.
Events seemed to confirm many European apprehensions. AT&T moved quickly into Europe by forging alliances with European companies. First came a joint venture with Philips, AT&T-Philips Telecommunications, to develop and market digital exchanges. Then AT&T bought a 25 percent stake in Olivetti. Since 1984, however, AT&T has had little success selling its 5ESS digital switch in Europe, winning orders only in the Netherlands. But in the initial postdivestiture period, the threat was real. No one could have known then that AT&T would fail to execute a coherent strategy.
IBM also opened its offensive in Europe; it was interested not in public switches but in other network equipment (like PABXs) and especially in hardware and software for VANs.[42] IBM vigorously sought a contract with the Bundespost to develop and supply its videotex system, Bildschirmtext. IBM won the contract, though
reportedly by bidding so low that it has not been profitable.[43] A similar arrangement with British Telecom was nixed at the last minute by the British agency regulating mergers. IBM created a joint venture with Fiat-Telettra (1986) to develop data networks but only after IBM had failed to land an agreement for establishing a data network with Societa Italiana per l'Esercizio delle Telecomunicazioni (SIP), the concession operating 80 percent of Italy's networks.
[41] The IBM ventures into telecommunications have not panned out. In fact, by 1989 IBM had washed its hands of both Satellite Business Systems (satellite communications networks) and Rolm (PABXs). Nevertheless, at the time, the IBM moves appeared as genuine threats in Europe and were constantly referred to as such.
[42] A note on terminology is appropriate here. VANs stands for value-added networks, also called value-added services. The term implies a difference from basic services. Basic services are those involving the transmission of voice or nonvoice information, without changing or manipulating the information. Voice telephone, telex, and facsimile services fall in the basic-services category. Telex and teletext services constitute a gray area; if no additional services besides transmission (such as storing or forwarding) are performed, they are basic services. With storing or forwarding, telex and teletext could be considered value-added services. PTTs wishing to protect their monopolies prefer to call them basic services, even with the additional operations. Enhanced services are those in which something in addition to mere transmission is provided, "when the information provided by the sender is changed, stored, manipulated or otherwise acted upon in the network." Within enhanced services Aronson and Cowhey distinguish information services from VANs. Information services include databases, data-processing services, and on-line services. VANs include protocol conversion for linking terminals, packet assembly and switching, and storage and forwarding of messages. For simplicity, I will lump information services and value-added services together under the label VANs. These definitions and quotes come from Jonathan David Aronson and Peter F. Cowhey, When Countries Talk: International Trade in Telecommunications Services, 85–99.
The American approach to deregulation has exerted pressure on traditional telecoms institutions abroad in other important ways. One result of deregulation has been that American users, especially the large corporations, have had access to new equipment, advanced services, and competing networks. In fact, the largest corporate users today, like Hewlett-Packard and McKesson, build and run private telecoms networks, complete with their own transmission facilities and switches. Most businesses employ a combination of private and public telecommunications networks. Thus, U.S. businesses have benefited from better telecoms facilities at lower prices than their European counterparts. Lack of access to advanced services and higher telecommunications costs overall led European businesses to favor Commission plans for coordinated, Europe-wide modernization and liberalization.
Japan also posed challenges to Europe's fragmented telecommunications systems. The Japanese had ambitious plans for the transition to broadband networks and had also altered the traditional PTT arrangement. A new telecommunications law in April 1985 provided for the gradual privatization of up to 49 percent of the shares in NTT, the government remaining the chief shareholder. Under the new regulation NTT had to compete with other companies for the provision of both basic and enhanced services. The former monopoly provider of international telecommunications services, Kokusai Denshin Denwa (KDD), also had to face competition from a rival company authorized by the ministry of posts and telecommunications.
Beyond these liberalization measures Japan planned an aggressive drive into future broadband communications. NTT was to be the engine for the development of the Information Network System, aimed at providing the hardware, software, and services to carry simultaneously voice, data, audio, and visual signals to all subscribers,
[43] Hart, "The Politics of Global Competition," 186.
not just the major companies in dense urban areas. The technologies needed were to be developed by NTT with its family of traditional suppliers (NEC, Fujitsu, Hitachi, and Oki), following the pattern of successful Japanese technology programs of the past, like VLSI. In addition NTT was to place large procurement orders. As a result Europeans believed that Japanese firms might very well develop the technologies first (they already led the world in optoelectronic components, an essential part of fiber-based broadband systems[44] ) and achieve early economies of scale in production by supplying NTT. The Japanese threat therefore figured prominently in European discussions of collaboration.
In short, in every branch of telematics new government programs and regulatory developments in Japan and the United States threatened to worsen Europe's already shaky position. It became obvious to European policy-makers, both in Brussels and in the national capitals, that the old policies had fallen short and that new approaches would be needed for the future.
Technological Change and Collaboration
Patterns of Interfirm Alliances in High-Technology Sectors
Firms operating in high-technology sectors have always formed links with other firms, frequently to obtain (or exchange) patents, manufacture a product under license, or start a joint venture. Beginning in the late 1970s, however, the formation of these interfirm alliances accelerated. The result was an increasingly dense network of alliances among firms, especially among those companies in the information technologies. Although there are multiple reasons for which enterprises seek out partners, the upsurge in alliances was due in large part to technological changes and uncertainties. As James Thompson demonstrated, organizations attempt to stabilize sources of uncertainty in their environment and in their core technology (the set of techniques used to accomplish their basic productive task).[45]
When new products and processes emerge torrentlike in a stream of innovations (as they did in the telematics industries in the early
1980s), both the task environment and the technological core of many companies are in constant flux. It is nearly impossible to foresee what markets will develop for what products or what the competition might introduce. The dilemma is sharpened by the increasingly common phenomenon of cross-sectoral technological links. Many products now embody innovations from fields that have heretofore been unrelated. For instance, fiber optics (which is at the heart of next-generation broadband telecoms systems) has brought together companies traditionally based in electronics (like Siemens) with firms from the glass and ceramics industry (like Corning). Thus a company that thinks it has a stable technological core will find itself left behind by the new combinations of technologies that are today's hallmark. In other words, alliances are a way of dealing with uncertainties arising from technological change.[46]
[44] See Jonathan Joseph, "How the Japanese Became a Power in Optoelectronics," 50–51.
[45] James Thompson, Organizations in Action.
Alliances can also be a response to the needs of enterprises for outside know-how or resources that are necessary to bring an innovation to market successfully. David Teece calls these resources "complementary assets." Complementary assets can include technologies needed as components or inputs (technological assets) as well as capabilities in nontechnological areas like marketing, manufacturing, and after-sales support (market-oriented assets). Gary Pisano and Teece argue that firms can acquire both kinds of complementary assets via a range of approaches. At one extreme is the purely market approach: purchasing the resources through "arm's-length" contracts. At the other extreme is the possibility of building up the needed capability in-house. The purely market option runs the risk of exploitation by the outside contractor; the in-house option is virtually certain to be impossibly expensive. Thus the middle option: interfirm alliances.[47]
A note on interfirm cooperation is in order here. In summarizing the growing body of research on the phenomenon, the OECD concludes that the definition of an "interfirm technical cooperation
agreement" must be broad and inclusive.[48] Such agreements take myriad forms, beyond the well-known joint-venture structure, such as one-way and two-way patent transfers, research agreements, joint product development, licensing, and equity purchases. Mergers and acquisitions that result in the disappearance of a corporate entity do not count as cooperative agreements. At the other extreme, one-time purchases do not count. In the studies cited below, anything in between the two extremes counts. The key is that the arrangement be long-term and involve some form of collaboration.
[46] Researchers at the Centre d'Etudes et de Recherches sur l'Entreprise Multinationale have constructed a large database on interfirm alliances involving European firms in a variety of sectors. They also conclude that technological uncertainties are at the heart of the recent surge in cooperative agreements. See LAREA/CEREM, Les stratégies d'accord des groupes de la CEE: Intégration ou éclatement de l'espace industriel européen, 8–15.
[47] David J. Teece, "Profiting from Technological Innovation"; Gary Pisano and David J. Teece, Collaborative Arrangements and Global Technology Strategy: Some Evidence from the Telecommunications Equipment Industry, 20–30.
Reasons for Interfirm Alliances in Telematics
A primary conclusion of the research on interfirm alliances in telematics is that the number of agreements involving European firms was rising dramatically in the early 1980s. The total number of such agreements per year was as follows:[49]
[Table: number of cooperative agreements involving European firms, by year, 1980–85]
The data from the Centre d'Etudes et de Recherches sur l'Entreprise Multinationale at the Laboratoire de Recherche en Economie Appliquée (LAREA/CEREM) also show that out of a total of 587 cooperative agreements identified for the period 1980–85, 302 (or 51 percent) involved telematics.[50] As other studies prove, the phenomenon was worldwide, with American and Japanese firms joining European firms in globe-spanning, interlocking networks of alliances.[51]
[48] For a discussion of definitional problems, see OECD, Technical Co-operation Agreements between Firms: Some Initial Data and Analysis, 6–14.
[49] LAREA/CEREM, in OECD, Technical Co-operation Agreements, 21.
[50] The LAREA/CEREM data cover four major sectors: information technologies (including telecommunications, computers, ICs, computer-aided design and manufacturing, software and services), new materials, biotechnologies, and aerospace and civil aviation. LAREA/CEREM, Les stratégies d'accord .
[51] See Herbert I. Fusfeld and Carmela S. Haklisch, "Cooperative R&D for Competitors," 60–76; OECD, Technical Co-operation Agreements .
Why did telematics enterprises become so deeply involved in cooperative agreements? Researchers who have investigated the question generally agree on a set of related factors driving the phenomenon. In the discussion that follows, I distill the smallest number of separate factors possible from the broad (and frequently overlapping) lists extant.[52] The principal technological changes transforming the strategies of firms were (1) vertical technology links; (2) horizontal convergence across sectors; (3) rapid innovation; (4) escalating costs of R&D; and (5) globalization of markets.
Vertical Technology Links
Complex telematics products are increasingly designed from the components up—that is, as the number of functions that can be packed onto a chip soars because of VLSI technology, more and more of the final system can be built into the chip. Thus, in order to produce a state-of-the-art public exchange (or private branch exchange or workstation or personal computer, and so on), the manufacturer of the end product must work closely with chip designers and software experts so as to achieve the optimal balance between hardware (what is built into the circuitry) and software (what is programmed in the final system). As Borrus explains:
Semiconductor firms now find themselves in a position where their VLSI device technology, which permits the design of logic systems in silicon, is so powerful that it forces a reconceptualizing of the design and production of final systems products. The potential impact of VLSI, in short, is to upset established design parameters in final systems, as well as in components.[53]
There were two major consequences of this trend in the early 1980s. First, the market for custom and semicustom (or ASIC) chips was booming, drawing in droves of new competitors. Over forty start-up companies entered the ASIC field in 1982–83 alone. Furthermore, the established companies in standard ICs (like Texas Instruments, Motorola, Intel, National Semiconductor, Signetics, Mostek, Harris) began to offer custom and semicustom devices.[54] Second, companies were expanding up and down the vertical chain
from components to final systems. Builders of computers, telecoms equipment, industrial systems, and consumer products were all acquiring semiconductor capabilities. Conversely, a number of semiconductor companies were moving into the markets for final systems—for example, Texas Instruments and National Semiconductor into computer systems and Motorola into telecommunications.[55] Frequently, the forward and backward linkages were forged through interfirm alliances. Thus, Carmela Haklisch, in her study of technical agreements involving the world's forty-one largest semiconductor companies, found an increase from two such agreements in 1978 to twenty-two in 1981 and forty-two in 1984.[56]
[52] My classification of the factors behind increasing interfirm technical agreements parallels those in Sharp and Shearman, European Technological Collaboration, chap. 1; and Rob van Tulder and Gerd Junne, European Multinationals in Core Technologies, chap. 7.
[53] Borrus, Competing for Control, 149.
[54] Ibid., 152–58.
The integrated European houses (Philips and Siemens) had always produced everything from chips to computer and telecommunications systems. But their weakness in ICs led even these giants to create links with semiconductor companies. Philips bought the American company Signetics in 1975 and struck agreements with RCA, CDC, Intel, and Siemens.[57] Siemens established ties with a plethora of firms (Table 6.5). Other linkups between systems and semiconductor companies involving European firms include Olivetti (with Zilog), GEC (Mitel), Ferranti (GTE, Nixdorf), CII-HB (Trilogy), and ICL (Fujitsu).
Horizontal Convergence across Sectors
The convergence of computers and telecommunications systems has been commonplace since the late 1970s. But the phenomenon of horizontal technology links is much broader. Microelectronics is creating overlaps among computers, telecoms, industrial automation, office equipment, and, in the near future, consumer electronics.
The convergence of data-processing and telecommunications proceeded from both ends. With the advent of digital switches, telephone exchanges increasingly resembled large computers: They processed digitized electronic impulses according to programmed instructions. Telecoms-equipment makers began to rely on technologies developed for the computer industry. On the other side, the latest development in computing, networking (or distributed processing), involved long-distance communication of data, as well as networks of computers able to share data files and programs.
[55] Ibid., 165–68.
[56] Carmela S. Haklisch, "Technical Alliances in the Semiconductor Industry."
[57] OECD, Semiconductor Industry , pp. 125–37.
Computer networking could take the form of local area networks (linking computers directly one to another via cable) or could occur through private branch exchanges (small switches that can connect computers and other equipment like facsimile machines). In other words, data-processing increasingly relied on communications technologies.
The result of these trends was the formation of alliances between computer and telecommunications firms. For instance, the computer-industry giant, IBM, established ties for telecoms equipment (Rolm), networks (MCI and SIP of Italy), and enhanced services (with the Bundespost and NTT). The U.S. telecoms giant, AT&T, acquired a 25 percent stake in Olivetti and began marketing computers from the Italian firm.
In addition, new transmission techniques spawned other kinds of alliances for telecommunications firms. Microwave transmission brought in manufacturers of radio equipment. Satellite communications drew in aerospace companies (many with long Department of Defense experience) like Hughes, Ford Aerospace, TRW, and Lockheed. The optical-fiber transition is currently producing links
between telecoms companies and makers of glass fibers and lasers. Corning, for example, has licensed its fiber technology to a number of traditional cable producers and has a joint venture with Siemens (Siecor).
The advent of programmable machine tools and automated production created opportunities for partnerships of computer companies and the makers of industrial equipment. The blending of production equipment and computers resulted in computer-integrated manufacturing (CIM): a factory in which computers control the operations of the machinery and the flow of materials through them. Agreements between automation and electronics companies include those linking Siemens to Fujitsu Fanuc, Thorn-EMI to Yaskawa, Selenia-Elsag to IBM, and Comau (a Fiat subsidiary) to DEC.[58] The next stage will likely be the transmission of high-fidelity stereo and HDTV into homes via the future broadband networks. At that point consumer-electronics companies will be working with telecoms firms and broadcasting and production interests.
The point of these examples is that as final-product markets converged and overlapped, companies were forced to expand out of their traditional activities and enter markets where they had no technical expertise. One way to acquire quickly the necessary know-how was to ally with companies already established in the field. Thus, the need to integrate technologies from a broad spectrum of sectors pushed the formation of interfirm alliances.
Rapid Innovation
Not only was technology breaking down traditional barriers between sectors, but innovation in each sector was occurring at an increasingly rapid rate. No company could predict how and when technology would change, much less cover all the necessary R&D bases. Because the basic raw material of electronics systems was the IC, rapid innovation in semiconductors meant rapid product change in all the final-use sectors. Thus, the rate of change in ICs illustrates the problem for all of telematics. In 1983 the Commission estimated that information-technology products had a life expectancy of just three years; in some subsectors, it was less than that.[59] Memory chips (specifically, DRAMs) require the greatest
density of integration and thus have driven innovation in VLSI. The 16K DRAM (16,000 bits of information) entered the market in 1976. The 64K DRAM was introduced in 1980, a quadrupling of memory capacity in four years. The 256K DRAM became available in 1982, the one-megabit chip (over one million bits) in 1984, and the four-megabit chip in 1987. Thus, chip producers were quadrupling memory capacity roughly every two to three years, which amounts to a doubling about every year and a half.[60]
[58] van Tulder and Junne, European Multinationals in Core Technologies, 240–41.
[59] CEC, Proposal for a Council Decision Adopting the First European Strategic Programme for Research and Development in Information Technologies (ESPRIT), 51.
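The pace implied by these introduction dates can be worked out directly; the short sketch below simply restates the capacities and dates given above and computes the average doubling interval.

```python
import math

# DRAM generations cited above: year of introduction -> capacity in bits.
generations = {1976: 16 * 1024, 1980: 64 * 1024, 1982: 256 * 1024,
               1984: 2 ** 20, 1987: 4 * 2 ** 20}

span = max(generations) - min(generations)  # 11 years, 1976 -> 1987
doublings = math.log2(max(generations.values()) / min(generations.values()))  # 8

print(f"{doublings:.0f} doublings over {span} years")
print(f"average doubling time: {span / doublings:.1f} years")                    # ~1.4
print(f"average interval per 4x generation: {2 * span / doublings:.1f} years")   # ~2.8
```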
Illustrations of rapid technological innovation abound in the telecommunications field. Public data networks (like the Minitel system in France or Bildschirmtext in Germany) did not exist in 1980, nor did services like teleconferencing and videophones. The life expectancy of products is shrinking. For instance, in the early 1980s a digital switch lasted about ten years before it was economical to replace it; the electromechanical switches replaced by the digital exchanges had a life span of thirty years.[61] The replacement rate for PABXs increased from 5 percent of total installed equipment per year to between 10 and 20 percent in the early 1980s.[62] And now progress in optoelectronics is leading to a whole new generation of switches and terminals based on photons instead of electrons. Makers are under severe pressure to provide their customers with upgrades of technical quality comparable to those of competing producers and to provide them as rapidly.
In this environment in the early 1980s no firm could hope to stay abreast of all the market-making developments by itself. Partnering was a way of reducing risks, as one partner might pick up on trends that the other partner missed. Or, in other words, in an era of rapid technological innovation, interfirm alliances reduced the risk of missing out on a crucial development.
Escalating Costs of R&D
Given the spectrum of technologies involved and the rapid pace of change, it became increasingly difficult for any one firm to muster the financial and human resources needed to develop new generations of products. For instance, as chip complexity went up and the line width on ICs went down (to below one micron), the development cost of a new chip soared. The one-kilobit memory chip cost about $2 million to develop; the one-megabit
RAM cost around $100 million. Whereas development costs for the four-bit microprocessor ran about $15 million, the most recent models cost about $150 million. Furthermore, increasingly complex logic chips with one million or more elements are impossible to design without specialized computer programs (computer-aided design, or CAD).[63]
[60] See Borrus, Competing for Control, 176.
[61] A. D. Little, European Telecommunications, 39.
[62] OECD, Telecommunications, 77.
The need for such programs raises the problem of software. Software (or the programmed instructions needed to run all telematics systems) was becoming the costliest part of R&D and final products by 1983. While hardware costs (on a per-function basis) had declined steadily, software costs (on a per-line basis) had remained constant or even increased. Discounting for inflation, a line of programming in the early 1980s cost about $10 to $50, about the same as it had in 1955. The problem was that current systems employed more lines than before, sometimes hundreds of thousands of lines. Thus, computer firms devoted more than half their R&D resources (money and manpower) to software, and software accounted for well over half the cost to the user of a final computer system.[64] The Commission estimated that although software accounted for 20 percent of the cost of R&D for a public exchange in 1970, it would reach 80 percent of the R&D cost by 1990.[65] In the early 1980s software already accounted for over 60 percent of R&D on public exchanges.[66]
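A rough calculation shows how quickly per-line costs come to dominate a development budget; the system size and hardware budget in the sketch below are hypothetical round numbers chosen for illustration, while the $10-to-$50-per-line range is the one quoted above.

```python
# Hypothetical example of how per-line software costs swamp a hardware budget.
# The system size and hardware figure are illustrative assumptions only;
# the $10-$50 per line range is the one quoted in the text.
lines_of_code = 500_000
hardware_rd = 10_000_000  # assumed hardware development budget, in dollars

for cost_per_line in (10, 50):
    software = lines_of_code * cost_per_line
    share = software / (software + hardware_rd)
    print(f"${cost_per_line}/line: software = ${software / 1e6:.0f}M, "
          f"{share:.0%} of total development cost")
# $10/line: software = $5M, 33% of total development cost
# $50/line: software = $25M, 71% of total development cost
```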
Finally, telecommunications equipment illustrates the soaring costs of R&D in final systems. ITT spent $30–$40 million to develop its Pentaconta switching system in the early 1960s; it spent $300–$500 million on its 1240 system in the late 1970s.[67] By 1986 the R&D expense associated with developing a current-generation digital public exchange ranged from $500 million (Ericsson's AXE) to $1.4 billion (GEC/Plessey's System X).[68]
Rising R&D costs motivated companies to seek partners that could share the burden. The constraints on R&D resources were not always financial either. In Europe the primary bottleneck might well have been in the supply of qualified scientists, engineers, and technicians.
[63] See OTA, International Competitiveness in Electronics , 77.
[64] Ibid., 86–87.
[65] CEC, Towards a Dynamic European Economy , 90.
[66] OECD, Telecommunications , 54.
[67] Ibid., 54.
[68] Godefroy Dang Nguyen, "Telecommunications: A Challenge to the Old Order," 108.
As one executive of a German computer company told me, in Europe the scarcest resource was trained personnel, and that scarcity motivated many of the alliance strategies of European firms.[69]
Globalization of Markets
Because of high R&D costs, companies had to have vast sales to amortize the investment in R&D. Thus, telematics markets were becoming increasingly global. Alliances were the means of getting inside a market protected by official and unofficial barriers to trade. A partner established in a national market could sell the goods of an outside company either directly or as a second source. Some firms sought allies for access to their distribution networks. In their study of the telecommunications-equipment sector, Teece, Pisano, and Michael Russo divided the motivations for seeking interfirm alliances into two main categories: technology access and market access. Their data (from Futuro Organizzazione Risorse in Rome) showed that out of 117 total agreements, distribution/marketing was the primary motive in 35 (29.9 percent). Distribution/marketing was joined with other objectives (R&D, production) in 15 more agreements, meaning that the market-access motive applied in 42.7 percent of the agreements. Technology access was the sole motive in 36 agreements (30.8 percent) and figured in 47.0 percent of the agreements overall.[70]
LAREA/CEREM data showed an even greater percentage of agreements in the overall telematics sector having marketing considerations as their primary motive. Marketing was a factor in 39 percent of 316 agreements, while knowledge generation (technology) figured in 19 percent and production in 16 percent.[71] It should be noted that marketing motives frequently tie directly to technology concerns, as in the marketing of another firm's product in order to fill out a company's product range. Again, it is increasingly difficult for a single enterprise to cover all the technology bases it needs.
[69] Interview 46.
[70] David J. Teece, Gary Pisano, and Michael Russo, Joint Ventures and Collaborative Arrangements in the Telecommunications Equipment Industry , 27, 63.
[71] LAREA/CEREM, Les stratégies d'accord , 47.
The early agreements linked European firms above all with American partners, though the gap was beginning to close for intra-European alliances (see Table 6.6). Explanations for the evident early preference on the part of European firms for American rather than other European partners abound.[72] Some of them are sociocultural: European firms were too accustomed to seeing each other as competitors to think about cooperating. Others are more technological: American firms frequently possessed the most advanced technologies. Still others stress market access: Links with American firms provided a way to enter the huge American market. What is important for this study is that around 1980 few interfirm accords linked European companies. Whatever potential existed for such ties had not been explored, much less exhausted.
Summary
This chapter has detailed the origins of the shift in Europe from national-champion strategies toward collaboration. Two principal factors drove the movement toward collaboration. First, intense international competition in telematics fueled a crisis in European telematics policy-making. European enterprises continued to fare poorly in competition with United States and Japanese firms. After a decade
of national-champion policies Europe's share of its own markets was declining in semiconductors and computers. Even the traditional European stronghold, telecommunications, was slipping. Contrasting with Europe's failures, Japan had risen from behind Europe to technological prominence. Finally, new government-led R&D programs in the United States and Japan and the freeing of IBM and AT&T to compete in new markets threatened to sink Europe even deeper in its hole.
[72] What is not so clear is why there was not a boom in alliances between European and Japanese firms, especially given the technological strengths of the Japanese. Answering that question would require research into the perceptions and decision-making of European corporate leaders, a task beyond the scope of this study. It may be that Japanese firms were not as eager to ally or were less willing on average to grant access to their technologies.
Second, technological changes also disposed telematics companies to seek interfirm alliances. European enterprises around 1980 were becoming heavily involved in strategic alliances, especially with American companies. Technological change assumes critical importance in the argument I develop here: Because technological changes motivated European companies to seek interfirm alliances, the major European telematics enterprises were receptive to Commission initiatives in support of collaboration. In an important sense, therefore, technological change paved the way for the formation of the transnational industrial coalition that was the heart of ESPRIT and RACE.