Future of the Internet Symposium: The Roles of Technology and Economics

I’m delighted to have the opportunity to participate in this symposium.  I’m a computer scientist, not a law professor; most of my comments will be at the intersection of technology and public policy.

When reading Jonathan Zittrain’s book — and I agree with his overall thesis about generativity — it’s important to take into account what was technically and economically possible at various times.  Things that are obvious in retrospect may have been obvious way back when, too, but the technology didn’t exist to do them in any affordable fashion.  While I feel that there are a number of sometimes-serious historical errors in the early part of the book — for example, AT&T, even as a monopoly, not only leased modems but also modified its core network to support them; data networking was not solely a post-Carterfone phenomenon — the more serious problems stem from ignoring this perspective.  I’ll focus on one case in point: the alleged IBM control of mainframes.

Yes, IBM did have a lot of control.  Many of the biggest machines were leased, not sold, but for the same sort of reason that people lease cars from the manufacturers: purchasing them required a large capital outlay for an item — an automobile or a computer — that you would likely want to upgrade in a very few years.  Mainframes cost millions; for many companies, it was either lease from IBM or borrow money from a bank.  Yes, there was a Borg-like attitude: I vividly recall one IBM hardware engineer, circa 1971, refusing to repair the 360/50 I was doing systems programming for until we disconnected the “customer-owned equipment” — he refused to identify it even by function, despite the fact that IBM didn’t even make a competing product.  But IBM did not control the system or supply all of the software.  It did supply the operating system, some important utilities such as file-copy programs, and the compilers.  Compilers — programs that translate human-readable programming languages into the zeros and ones that computers actually understand — are necessary if and only if you’re going to write your own programs.  IBM gave them away with its computers because the computers weren’t very useful without compilers: everyone did in fact have to write their own programs.
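For readers who have never programmed, here is a small, purely hypothetical illustration of that point.  It is written in C rather than the Fortran, COBOL, or PL/I a mainframe shop of that era would actually have used, but the principle is identical: the text below is what a programmer writes, and a compiler translates it into the numeric machine instructions a particular computer executes.  Without a compiler, the machine can do nothing with it.

```c
#include <stdio.h>

/* A toy "application program" of the kind a 1970s shop would have written
 * for itself: total a few payroll figures and print the result.  This
 * human-readable text is useless to the hardware until a compiler turns
 * it into the machine's own numeric instruction codes. */
int main(void) {
    double salaries[] = { 1200.00, 950.50, 1430.25 };
    double total = 0.0;

    for (int i = 0; i < 3; i++) {
        total += salaries[i];   /* add each salary to the running total */
    }

    printf("Total payroll: %.2f\n", total);
    return 0;
}
```

Compiling and running it (for example, `cc payroll.c && ./a.out`, with a hypothetical file name) prints the total; the point is simply that having a compiler is what lets you create programs rather than merely consume them.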

This situation — mostly locally-written applications — persisted from the beginning of the computer era through at least the 1970s.  IBM did not supply most application software.  There was not much in the way of third-party software, either, for a very simple reason: there wasn’t a big enough market to support such an industry; everyone’s needs were just different enough.  Only generic software — operating systems, compilers, and a few utilities, notably programs to sort files into alphabetic or numeric order — was useful enough to enough different companies that a market could even conceivably develop.  The 1969 antitrust suit was partly about these programs, but also about the way that the operating system was bundled with the hardware, making it very difficult for hardware competitors — RCA, Amdahl, Fujitsu, and Hitachi, if memory serves — to build compatible machines that still ran IBM operating systems and compilers.  While IBM did sell some application software packages, these were a minor part of its business and of minor importance to most customers.  After all, the cost of hiring your own application programmers was quite small compared to the cost of buying or leasing the hardware and hiring the operators and system programmers you needed just to make the machine usable.

From this perspective, today’s computers are more generative not because of the demise of IBM’s grasp but because there are enough of them out there that a vibrant, mass-market, third-party software industry has developed.  For the most part, users do not and cannot program their own computers.  It is indeed ironic that virtually no personally-owned machines even have compilers installed, in contrast to the mainframes of 30+ years ago.  The difference is the cost of the machines and hence the number of them available.  I played an ancestor of Asteroids in 1971.  The software was free to anyone who had $250,000 to buy the necessary hardware — but no one was going to build a business selling single-user games that required that sort of cash outlay!

The interesting questions arise when more than one choice is technically and economically possible, or when they’re close enough that regulation or tax policy can tip the balance.  That can cut both ways.  Suppose, for example, that malware targeting online banking becomes pervasive, to the point of causing massive losses for banks and their customers.  Banking appliances will become a necessity, not a choice; nothing else will be affordable.  (Whether or not such appliances, especially if layered on top of a generic PC, will indeed be secure is a separate question; I don’t think so, but that’s a subject for another post.)  Conversely, if general-purpose devices become more secure and more usable (again, I’m skeptical), the scales will tip the other way.

What direction will things move?  I don’t know, and I have a lousy track record as a prophet.  I do suspect that competing technologies, often designed without any awareness of the generativity paradigm, will be influential.  To quote Donald Cotter, former director of Sandia National Labs: “Hardware makes policy.”
