Preferred Citation: Ames, Karyn R., and Alan Brenner, editors. Frontiers of Supercomputing II: A National Reassessment. Berkeley: University of California Press, 1994. http://ark.cdlib.org/ark:/13030/ft0f59n73z/


 
A Vision of the Future at Sun Microsystems

Bill Joy

Bill Joy is well known as a founder of Sun Microsystems, Inc., a designer of the Network File System (NFS), a codesigner of Scalable Processor ARChitecture (SPARC), and a key contributor in Sun's creation of the open-systems movement. Before coming to Sun, Bill created the Berkeley version of the UNIX operating system, which became the standard for academic and scientific research in the late 1970s and early 1980s. At Berkeley, he founded the Berkeley Software Distribution (BSD), which first distributed applications software for the PDP-11 and, later, complete systems for the VAX. He is still known as the creator of the "vi" text editor, which he wrote more than 10 years ago.

In recent years, Bill has traveled widely to speak on the future of computer technology—hardware, software, and social impacts. In the early 1980s, Bill framed what has become known as "Joy's law," which states that the performance of personal microprocessor-based systems can be calculated as MIPS = 2^(year − 1984). This prediction, made in 1984, is still widely held to be the goal for future system designs.

About 15 years ago I was at the University of Michigan working on large sparse matrix codes. Our idea was to try to decompose and back-solve a 20,000-by-20,000 sparse matrix on an IBM 370, where the computer center's charging policy billed us for virtual memory. So we, in fact, did real I/O to avoid using virtual memory. We used these same codes on early supercomputers. I think that set for me, 15 years ago, an expectation of what a powerful computer was.

In 1975 I went to the University of California-Berkeley, where everyone was getting excited about Apple computers and the notion of one person using one computer. That was an incredibly great vision. I was fortunate to participate in putting UNIX on the Digital Equipment Corporation VAX, which turned out to be a very popular machine, a very powerful machine, and also to define the unit of performance for a lot of computing simply because it didn't get any faster. Although I was exposed to the kinds of things you could do with more powerful computers, I never believed that all I needed was a VAX to do all of my computing.

Around 1982, the hottest things in Silicon Valley were the games companies. Atari had a huge R&D budget to do things that all came to nothing. In any case, if they had been successful, then kids at home would have had far better computers than scientists would, and clearly, that would have been completely unacceptable.

As a result, several other designers and I wanted to try to get on the microprocessor curve, so we started talking about the performance of a desktop machine, expressed in millions of instructions per second (MIPS), that ought to equal the quantity 2 raised to the power of the current year minus 1984. Now in fact, the whole industry has signed up for that goal. It is not exactly important whether we are on the curve. Everyone believes we should be on the curve, and it is very hard to stay on the curve. So this causes a massive investment in science, not in computer games, which is the whole goal here.
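As a rough illustration (this sketch is mine, not part of the talk), that target reads as a one-line function; the function name and the sample years are illustrative only and simply show the scale of the numbers it implies.

    # A minimal sketch of the desktop-performance target stated above:
    # MIPS = 2 ** (year - 1984). Function name and sample years are illustrative.
    def target_mips(year: int) -> int:
        return 2 ** (year - 1984)

    for year in (1984, 1987, 1991, 1995):
        print(year, target_mips(year), "MIPS")   # 1, 8, 128, 2048 MIPS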

The frustrating thing in 1983 was to talk to people who thought that was enough, although it clearly was not anywhere near enough. In fact, hundreds of thousands of bytes did not seem to me to be very much because I could not load a data set for a large scientific problem in less than 100 or 1000 megabytes. Without that much memory, I had to do I/O. I had already experienced striping sparse matrices and paging them in and out by hand, and that was not very much fun.

I think we are on target now. Enough investments have been made in the world to really get us to what I would call a 300-MIPS machine in 1991 and, in 1995, a 3000-megaflops machine, i.e., a machine capable of 3000 million floating-point operations per second (FLOPS). Economics will affect the price, and different things may skew the schedule plus or minus one year, but it will not really make that much difference.

You will notice that I switched from saying MIPS to megaflops, and that is because with RISC architectures and superscalar implementations, you have the same number of MFLOPS as MIPS, if not more, in the next generation of all the RISC microprocessors. The big change in the next decade will be that we will not be tied to the desktop machine.

In the computer market now, I see an enormous installed base of single-CPU, single-threaded software on Macintoshes, UNIX, and DOS converging, so that we can port the applications back and forth. This new class of machines will be shipped in volume with eight to 16 CPUs because that is how many I can get on a small card. In a few years, on a sheet-of-paper-size computer, I can get an eight- to 16-CPU machine with several hundred megabytes or a gigabyte of memory, which is a substantial computer, quite a bit faster than the early supercomputers I benchmarked.

That creates a real problem in that I don't think we have much interesting software to run on those computers. In fact, we have a very, very small number of people on the planet who have ever had access to those kinds of computers and who really know how to write software, and they've been in a very limited set of application domains. So the question is, how do we get new software? This is the big challenge.

I should have bought as much Microsoft Corporation stock as I could when it went public in 1986, because Microsoft understood the power of what you might call the software flywheel, which is basically this: once you get to 100,000 units of a compatible machine a year, the thing starts going into positive feedback and goes crazy. The reason is, as soon as you have 100,000 units a year, software companies become possible, because most interesting software companies are going to be small software companies clustered around some great idea. In addition, we have a continuing flow of new ideas, but you have got to have at least 10 people to cater to the market—five people in technical fields and five in business. They cost about $100,000 apiece per year, which means you need $1 million just to pay them, which means you need about $2 million of revenue.

People want to pay about a couple hundred dollars for software, net, which means you need to ship 10,000 copies; and since you can really only expect about 10 percent penetration, you have got to have a platform shipping 100,000 units a year. You can vary the numbers, but it comes out to about that order of magnitude. So the only thing you can do, if you've got a kind of computer that's shipping less than 100,000 units a year, is to run university-, research-, or government-subsidized software. That implies, in the end, sort of centralized planning as opposed to distributed innovation, and it loses.
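To make the flywheel arithmetic explicit, here is a back-of-the-envelope version using the round numbers just quoted; the variable names are my own, and this is only a restatement of the talk's figures, not anything more precise.

    # Back-of-the-envelope version of the "software flywheel" arithmetic above,
    # using the talk's round numbers.
    people = 10                                   # five technical, five business
    cost_per_person = 100_000                     # dollars per person per year
    payroll = people * cost_per_person            # $1 million just to pay them
    revenue_needed = 2 * payroll                  # about $2 million of revenue
    price_per_copy = 200                          # a couple hundred dollars, net
    copies_needed = revenue_needed // price_per_copy    # 10,000 copies a year
    penetration = 0.10                            # about 10 percent penetration
    platform_units = copies_needed / penetration  # 100,000 platform units a year
    print(payroll, revenue_needed, copies_needed, int(platform_units))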

This is why the PC has been so successful. And this is, in some sense, the big constraint. It is the real thing that prevents a new architecture, a new kind of computing platform, from taking off, if you believe that innovation will occur. I think, especially in high technology, you would be a fool not to believe that new ideas, especially for software, will come around. No matter how many bright people you have, most of them don't work for you. In addition, they're on different continents, for instance, in eastern Europe. They're well educated. They haven't had any computers there. They have lots of time to develop algorithms like the elliptical sorts of algorithms. Because there are lots of bright people out there, they are going to develop new software. They can make small companies. If they can hit a platform—that is, 100,000 units a year—they can write a business model, and they can get someone to give them money.

There are only four computing platforms today in the industry that have 100,000 units a year: DOS with Windows, Macs, UNIX on the 386, and UNIX on Scalable Processor ARChitecture (SPARC). That's it. What this tells you is that anybody who isn't on that list has got to find some way to keep new software being written for their platform. There is no mechanism to really go out and capture innovation as it occurs around the world. This includes all the supercomputers, because they ship in way too low a volume; they're off by orders of magnitude.

Some platforms can survive for a small amount of time by saying they're software-compatible with another one. For instance, I can have UNIX on a low-volume microprocessor, and I can port the apps from, say, SPARC or the 386 to it. But there's really no point in that because, if you do the economics, you're better off putting an incremental dollar in the platform that's shipping in volume than taking on all the support costs of something that didn't make it into orbit. So this is why there's a race against time: for everyone, getting to 100,000 units per year is like escaping the gravity field and not burning up on reentry.

Now, here's the goal for Sun Microsystems, Inc. We want to be the first company to ship 100,000 multiprocessors per year. This will clearly make an enormous difference because it will make it possible for people to write software that depends on having a multiprocessor to be effective. I can imagine hundreds or thousands of small software companies becoming possible.

Today we ship $5000 diskless, monochrome workstations and $10,000 standalone, color workstations; both of these are shipping at 100,000 a year. So I've got a really simple algorithm for shipping 100,000 color and 100,000 monochrome multiprocessors a year: I simply make those workstations multiprocessors. And by sticking in one extra chip to have two instead of one and putting the software in, people can start taking advantage of it. As you stick in more and more chips, it just gets better and better. But without this sort of a technique, and without shipping 100,000 multis a year, I don't see how you're going to get the kind of interesting new software that you need. Some people say we will just have to keep using the same 15-year-old software because we don't have time to write any new software. Well, I don't share that belief in the past; I believe that bright new people with new languages will write new software.

The difficulty is, of course, you've got all these small companies. How are they going to get the software to the users? A 10-person company is not a Lotus or a Microsoft; they can't evangelize it as much. We have a problem in the computer industry in that the retail industry is dying. Basically, we don't have any inventory. The way you buy software these days, you call an 800 number, and you get it by the next morning. In fact, you can call until almost midnight, New York time, use your American Express card, and it will be at your door before you get up in the morning. The reason is that the people typically put the inventory at the crosspoint for, say, Federal Express, which is in Memphis, so that it only flies on one plane. They have one centralized inventory, and they cut their costs way down.

But I think there's even a cheaper way. In other words, when you want to have software, what if you already have it? This is the approach we're taking. We're giving all of our users compact discs (CD-ROMs). If you're a small company that writes an application for a Sun and signs up for our software program, we'll put your first application on one of our monthly CD-ROMs for free, and we'll mail it to every Sun installation.

So if you get a Sun magazine that has an ad for your software, you can pull a CD-ROM you already have off the shelf, boot up the demo copy of the software you like, dial an 800 number, and turn the software on with a password. Suppose there are 10 machines per site and a million potential users. That means I need 100,000 CDs, which cost about $3 apiece to manufacture. That's about $300,000. So if I put 100 applications on a CD, each company can ship its application to a million users for $3000. I could almost charge for the space in Creative Computer Application Magazine. The thing can fund itself because a lot of people will pay $10 for a disk that contains 100 applications that they can try, especially if it's segmented, like the magazine industry is segmented.
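The distribution arithmetic works out as follows; this is only a restatement of the figures just given, with variable names of my own choosing.

    # The CD-ROM distribution arithmetic described above, in the talk's round numbers.
    potential_users = 1_000_000
    machines_per_site = 10
    sites = potential_users // machines_per_site   # 100,000 sites, so 100,000 CDs
    cost_per_cd = 3                                # dollars to manufacture one disc
    pressing_cost = sites * cost_per_cd            # about $300,000 per monthly disc
    apps_per_cd = 100
    cost_per_app = pressing_cost / apps_per_cd     # about $3,000 to reach a million users
    print(sites, pressing_cost, cost_per_app)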

This is a whole new way of getting people software applications that really empowers small companies in a way that they haven't been empowered before. In fact, you can imagine that if these applications were cheap enough, you could order them by dialing a 900 number, where there wouldn't even have to be a human; the phone company would do the billing, and you'd just type in on your touch-tone phone the serial number of your machine, and it would read you the code back. In that case, I think you could probably support a one-person company—maybe a student in a dorm who simply pays $3000 to put an app on the disc and arranges with some BBS-like company to do the accounting and the billing. These new ways of distributing software become possible once you spin up the flywheel, and I think they will all happen.

The workstation space, as I said, I think will bifurcate. The machines that run the existing uniprocessor software should be shipping at about a million units a year, at about 100 MIPS per machine, because that's not going to cost any more than zero MIPS does; in fact, that's what you get roughly for free in that time frame. That's about $6 billion for the base hardware—maybe a $12 billion industry. I may be off by a factor of two here, but it's just a rough idea.

Then you're going to have a new space made possible by this new way of letting small software companies write software: machines with eight to 16 CPUs. That's what I can do with sort of a crossbar, or some sort of simple bus, that I can put in a sheet-of-paper-sized, single-board computer, shipping at least 100,000 a year, probably at an average price of $30,000, and doing most of the graphics in software. There would not be much special-purpose hardware, because that's going to depend on whether all those creative people figure out how to do all that stuff in software. And that's another, perhaps, $3 billion market.
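Treating the two segments purely as round-number arithmetic (the implied per-unit price of the uniprocessor machines is my inference, not a figure from the talk):

    # Rough sizing of the two workstation segments described above.
    uniprocessor_units = 1_000_000                 # existing-software machines per year
    uniprocessor_base = 6_000_000_000              # about $6 billion of base hardware
    implied_uni_price = uniprocessor_base / uniprocessor_units   # about $6,000 per unit
    multiprocessor_units = 100_000                 # 8- to 16-CPU single-board machines per year
    multiprocessor_price = 30_000                  # average price in dollars
    multi_market = multiprocessor_units * multiprocessor_price   # about $3 billion
    print(implied_uni_price, multi_market)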

I think what you see, though, is that the bigger machines have to run the same software that the small multis do, because that's what makes the business model possible. If you try to do the big machine without having the volume machine to draft behind, you simply won't get the applications, which is why some of the early superworkstation companies have had so much trouble. It's the same reason why NeXT will ultimately fail—they don't have enough volume.

So across this section of the industry, if I had my way, it looks like we're going to ship roughly 200 TFLOPS in 1995, with lots and lots of interesting new, small software applications. The difference is that we're going to ship the 200 TFLOPS mostly as 100,000 1000-MIPS machines instead of as a few TFLOPS machines. I just have a belief that that's going to change our future, and that's going to be where most of the difference is made—in giving 100,000 machines of that scale to 100,000 different people, which will have more impact than having 100 TFLOPS on 100 computers.
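One way to read the 200-TFLOPS aggregate, using the earlier assumption that MFLOPS roughly track MIPS; the even split between the two machine classes below is my reconstruction, not something the talk states.

    # Hypothetical reconciliation of the roughly 200-TFLOPS aggregate for 1995,
    # assuming MFLOPS are roughly equal to MIPS, as argued earlier in the talk.
    MFLOPS_PER_TFLOPS = 1_000_000
    uniprocessor_mflops = 1_000_000 * 100        # a million machines at about 100 MFLOPS each
    multiprocessor_mflops = 100_000 * 1_000      # 100,000 machines at about 1000 MFLOPS each
    total_tflops = (uniprocessor_mflops + multiprocessor_mflops) / MFLOPS_PER_TFLOPS
    print(total_tflops)                          # about 200 TFLOPS in aggregate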

The economics are all with us. This is free-market economics and doesn't require the government to help. It will happen as soon as we can spin up the software industry.

