Has the Future of the Internet happened?

I wrote The Future of the Internet — And How to Stop It, and its precursor law review article, The Generative Internet, between 2004 and 2007. I wanted to capture a sense of just how bizarre the Internet — and the PC environment — were.  How much the values and assumptions of, metaphorically, dot-org and dot-edu, rather than just dot-com, were built into the protocols of the Internet and the architecture of the PC.  The amateur, hobbyist, backwater origins of the Internet and the PC were crucial to their success against more traditional counterparts, but also set the stage for a host of new problems as they became more popular.

The designers and makers of the Internet and PC platforms did not expect to come up with the applications for each — they figured unknown others would do that.  So, unlike CompuServe, AOL, or Prodigy, the Internet didn’t have a main menu.  And once for-profit ISPs started rolling the Internet out to anyone willing to subscribe, there came to be a critical mass of eyeballs ready to experience varieties of content and services — the providers of which didn’t have to negotiate a business deal with some Internet Overseer the way they did for CompuServe et al.  Some content and services could be paid for, at least as soon as credit cards could function cheaply online, and others could be free — either because of a separate business model like advertising, or because the provider didn’t feel inclined to monetize visiting eyeballs.  Tim Berners-Lee could invent the World Wide Web and have it run as just another application, seeking neither a patent on its workings nor an architecture for it that placed him in a position of control.  Today, of course, the Web is so ubiquitous that people often confuse it with the Internet itself.

When bad apples emerge on an unmediated platform — and they do as soon as there are enough people using it to make it worth subverting — it can be difficult to deal with them.  If someone spams you on Facebook, the first step is to make it a customer service issue — complain to Facebook, and they can discipline the account.  If someone spams you on email, it’s much trickier, because there’s no Email Manager — just lots of email servers, some big, some little, and many of them with accounts hacked by others.  That’s one reason why a newer generation of Internet users prefers Facebook or Twitter messaging to old-fashioned email.  Same for the PC itself: with no PC Manager, there’s no easy way to get help or exact justice when exposed to malware.  I worried that malware in particular, and cybersecurity in general, would be a fulcrum point in pushing “regular” people away from the happenstance of generative platforms designed by nerds who figured they could worry about security later.  Hence a migration to less generative platforms managed like services rather than products.
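To make that contrast concrete, here is a minimal sketch in Python of what the absence of an Email Manager looks like in practice. The addresses and mail server below are hypothetical; the point is only that any host willing to speak SMTP can accept a message without asking anyone's permission.

```python
# Minimal sketch, hypothetical addresses and server: SMTP has no central
# gatekeeper to ask. Any host running a mail server can accept this message,
# which is exactly what makes spammers hard to discipline.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"      # hypothetical sender
msg["To"] = "bob@example.net"          # hypothetical recipient
msg["Subject"] = "No gatekeeper required"
msg.set_content("Delivered by whichever servers agree to relay it.")

with smtplib.SMTP("mail.example.net") as server:  # hypothetical mail server
    server.send_message(msg)
```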

I understand and sympathize with that migration.  But it’s important to recognize its downsides — particularly if one is among the libertarian set, which has included some of the most vocal critics of the Future of the Internet.  Whether software developer or user, volunteering control over one’s digital environment to a Manager means that the Manager can change one’s experience at any time — or worse, be compelled to by outside pressures.  I write about this prospect at length here.  The famously ungovernable Internet suddenly becomes much more governable, an outcome most libertarian types would be concerned about.  Many Internet freedom proponents aren’t willing to argue for or trust those freedoms to a “mere” political process; they prefer to see them de facto guaranteed by a computing environment largely immune to regulation.

Lessig now seems to disagree with that; his view in Code 2.0 is that:

citizens of any democracy should have the freedom to choose what speech they consume.  But I would prefer they earn that freedom by demanding it through democratic means than that a technological trick give it to them for free.

It’s an interesting bookend to a small gem of an article he wrote in 1999, where he said:

The architecture of cyberspace embeds a set of values, as it embeds or constitutes the possible. But beyond the values built into this architecture, there are values that are implicated by the ownership of code. Its ownership can enable a kind of check on government’s power — a separation of powers that checks the extent that government can reach. Just as our Constitution embeds the values of the Bill of Rights while also embedding the protections of separation of powers, so too should we think about the values that cyberspace embeds, as well as its structure.

In a terrific article revisiting the famed Sony case, which upheld manufacturers’ right to make and sell VCRs even though many people were surely using them to infringe copyright by recording shows for their personal libraries, Randal Picker outright welcomes new forms of regulation made possible by software becoming a service.  My brief response to (and disagreement with) his article is here, but both of us agree that new kinds of regulation lie in our future.

So, has the future happened?  Certainly young coders today are writing for the Facebook and iPhone app platforms more than they are for Windows, OS X, or GNU/Linux.  Those platforms haven’t been “sterile” — i.e., resistant to all outside development, as the book’s introduction feared.  Rather, they’re what I called “contingently generative” and what Sarah Rotman Epps more pithily calls “curated computing.”  The idea is the same: to be generative enough to welcome outside coders — indeed, if wildly successful, to turn other platforms into ghost towns — but to be able to modify what they do at any time, before or after the fact.  Not only does that set the stage for monopolistic behavior — developers, many coding for fun, build empires that are then hard to move to a new platform when the rules change — but also for new regulation.  Android is an interesting development here — a sort of canary in the coal mine, as the Android platform contemplates more “off-roading” by users, running unapproved apps, than the iPhone does.  It’s too early to say which model will prevail, especially as either one, being contingent, can evolve towards the other.  Steve Jobs could announce freedom to run outside code on iPhones tomorrow, and Google could revise Android so that only apps from the official Android store can persist.  Either vendor can kill an app, or the entire phone, at a distance, if it detects jailbreaking, or for any other reason.

In 2004, the Web was going strong, but much of our time was spent outside a browser: email was Outlook or Eudora, word processing was Word, spreadsheets were Excel, etc.  If you had been given only a browser, there was a lot of work you’d have had a hard time doing.  Today that’s simply not true.  Google Docs and Spreadsheets are spreading, and Microsoft is hastening to catch up with Windows Live.  Yet some have trumpeted the end of the open Web, and cited the Future of the Internet to buttress their claims.  They have a point.  Just because something can be accessed by a Web browser doesn’t make it part of the Web.  (You can even just open a file on your hard drive using your browser, most easily if it ends in .html.)
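To make that parenthetical concrete, here is a minimal sketch in Python (the filename local_note.html is hypothetical): it writes an HTML file to disk and hands the browser a file:// URL, so the page is rendered by a browser yet served by no one, and is no more part of the Web than a document sitting in a desk drawer.

```python
# Minimal sketch, hypothetical filename: a page your browser renders straight
# from disk via a file:// URL. No server, no HTTP request, no Web involved.
import pathlib
import webbrowser

page = pathlib.Path("local_note.html")  # hypothetical local file
page.write_text(
    "<html><body><h1>Rendered by a browser, served by no one</h1></body></html>"
)
webbrowser.open(page.resolve().as_uri())  # e.g. file:///home/you/local_note.html
```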

If the services we migrate to online are still controlled and curated by only a handful of gatekeepers, we run all the risks, and stand to lose many of the benefits, of the generative Internet.  I’m not ready, as others may be, to say that essentially every new technology has its infancy and adolescence, where it’s chaotic and there are lots of players and lots of innovation, to be followed by boring adulthood as the losers lose and the few winners win and consolidate.  My hope was, and is, to be able to take on the “bad apples” problem in a way that doesn’t terribly compromise generativity — the way that Wikipedia, so far, has managed to stop spammers and vandals without wholesale abandoning the precept that anyone can edit a page, whether registered or not.  I wrote some thoughts on how to do that in the book, and have since followed up with a piece called “The Fourth Quadrant.”  It seems all the more pressing to me as concerns about cybersecurity, and now cyberwarfare, are very much on the minds of governments around the world.

I’m not exactly a pessimist.  I recognize, and celebrate, the fact that the digital environment of 2010 is the coolest, most interesting, most option-filled it’s ever been.  In that sense, mirroring the situation with Internet access despite censorship around the world, the slope of the generative curve is positive.  But, also mirroring the situation with censorship and filtering, I see the pieces further moving into place for a step change in how the Internet works.  In where new innovations come from.  And in how readily regulators can pull the plug on services and content they don’t like.  At its core, the Future of the Internet is an argument against complacency, and against the simplicity of thinking that if only market forces are allowed to work their magic, everything else we care about will more or less fall into place.

I look forward to the week’s discussions.  …JZ

1 Response

  1. nommh says:

    Thanks for that article. I’m not exactly internet savvy, but I do know that the comparison between the internet and other technologies is surprising, to say the least. Very few of the society-changing technologies of the past were within reach of the individual. With the internet it is ‘just’ software. No railroad tracks to lay, no telephone cables to bury, no costly equipment to broadcast your news.
    I’m a little confused about the Lessig quote from Code 2.0. Why should we claim democratic rights in an area where politics notoriously struggles to keep up with developments? An area that started out with this freedom more or less intact. Does Lessig wish us to see these freedoms erode first and then ask for them back?
    So Lessig’s idea is problematic, even if our democracies were perfect – which they are not.