Debate on the Future of the Internet

I think Deven’s advice is great, but for those who want to sample Zittrain’s new book (The Future of the Internet and How to Stop It) before buying it, it’s excerpted in the Boston Review this month. (There’s also an outline of its ideas in this Harvard Law Review article.) It’s a very thoughtful analysis of some of the most difficult issues affecting internet policy. I will have more to say in future posts, but I just wanted to highlight this work, and the all-star respondents who comment on it in the same issue.

One of the biggest problems that Zittrain spots is that “bad code is now a business”:

So long as spam remains profitable, [many crimes] will persist. . . [including] viruses that compromise PCs to create large zombie “botnets” open to later instructions. Such instructions have included directing PCs to become their own e-mail servers, sending spam by the thousands or millions to e-mail addresses harvested from the hard disk of the machines themselves or gleaned from Internet searches, with the entire process typically proceeding behind the back of the PCs’ owners.

Botnets can also be used to launch coordinated attacks on a particular Internet endpoint. For example, a criminal can attack an Internet gambling Web site and then extort payment to make the attacks stop. The going rate for a botnet to launch such an attack is reputed to be about $50,000 per day.

What to do? I’ll just append a few brief excerpts from the Boston Review piece below the fold.


After considering the shortcomings of “appliancization” of personal computers (i.e., making them as tamper-proof as TiVos and cellphones), Zittrain states:

We need a strategy that addresses the emerging security troubles of today’s Internet and PCs without killing their openness to innovation. This is easier said than done, because our familiar legal tools are not particularly attuned to maintaining generativity. A simple regulatory intervention—say, banning the creation or distribution of deceptive or harmful code—will not work because it is hard to track the identities of sophisticated wrongdoers, and, even if found, many may not be in cooperative jurisdictions. Moreover, such intervention may have a badly chilling effect: much of the good code we have seen has come from unaccredited people sharing what they have made for fun, collaborating in ways that would make businesslike regulation of their activities burdensome for them. They might be dissuaded from sharing at all. . . .

We can find a balance between needed change and undue restriction if we think about how to move generative approaches and solutions that work at one “layer” of the Internet—content, code, or technical—to another. Consider Wikipedia, the free encyclopedia whose content—the entries and their modifications—is fully generated by the Web community. . . .

The effectiveness of the social layer in Web successes points to two approaches that might save the generative spirit of the Net, or at least keep it alive for another interval. The first is to reconfigure and strengthen the Net’s experimentalist architecture to make it fit better with the vast expansion in the number and types of users. The second is to develop new tools and practices that will enable relevant people and institutions to help secure the Net themselves instead of waiting for someone else to do it.

Generative PCs with Easy Reversion. Wikis are designed so that anyone can edit them. This creates a genuine and ongoing risk of bad edits, through either incompetence or malice. The damage that can be done, however, is minimized by the wiki technology, because it allows bad changes to be quickly reverted. All previous versions of a page are kept, and a few clicks by another user can restore a page to the way it was before later changes were made. So long as there are more users (and automated tools they create) detecting and reverting vandalism than there are users vandalizing, the community wins. (Truly, the price of freedom is eternal vigilance.) Our PCs can be similarly equipped.
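
To make the reversion idea concrete (this sketch is mine, not Zittrain’s, and the class and names are invented), a page that keeps every prior version can undo a bad edit in a single step:

    # A minimal sketch of wiki-style "easy reversion": every edit is kept,
    # and any earlier state can be restored with one call.

    class RevisionedPage:
        """A page that records every version so bad edits can be quickly undone."""

        def __init__(self, initial_text: str = ""):
            self._history = [initial_text]   # all versions kept, oldest first

        @property
        def current(self) -> str:
            return self._history[-1]

        def edit(self, new_text: str) -> int:
            """Apply an edit and return its revision number."""
            self._history.append(new_text)
            return len(self._history) - 1

        def revert_to(self, revision: int) -> None:
            """Restore an earlier version by appending it as the newest revision,
            so the vandalism itself also stays in the record."""
            self._history.append(self._history[revision])

    # Example: vandalism is undone in one step.
    page = RevisionedPage("The Internet is a network of networks.")
    good = page.edit("The Internet is a global network of networks.")
    page.edit("BUY CHEAP PILLS HERE")   # a bad edit
    page.revert_to(good)                # one "click" restores the page
    assert page.current == "The Internet is a global network of networks."

The point is not this particular data structure but the asymmetry it creates: so long as full history is kept, recovering from an attack is cheaper than mounting one.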

****

Building on . . . ideas about measurement and code assessment, Harvard University’s Berkman Center and the Oxford Internet Institute—multidisciplinary academic enterprises dedicated to charting the future of the Net and improving it—have begun a project called StopBadware, designed to assist rank-and-file Internet users in identifying and avoiding bad code. The idea is not to replicate the work of security vendors like Symantec and McAfee, which for a fee seek to bail new viruses out of our PCs faster than they pour in. Rather, these academic groups are developing a common technical and institutional framework that enables users to devote some bandwidth and processing power for better measurement of the effect of new code.
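
The framework Zittrain describes is institutional as much as technical, but purely as an illustration (mine, not StopBadware’s; every name and field below is hypothetical), one can imagine many ordinary PCs volunteering small before-and-after observations about newly installed code, with a shared service aggregating them:

    # Hypothetical illustration only: the names, fields, and aggregation below
    # are invented, not part of the actual StopBadware project. The idea is that
    # ordinary machines report simple observations about a new program, and a
    # shared service pools them into a rough community profile of its behavior.

    import json
    from dataclasses import dataclass
    from statistics import mean


    @dataclass
    class InstallObservation:
        """One user's measurement of what a new piece of code did to their PC."""
        program_hash: str           # identifies the code being measured
        new_processes: int          # processes running after install minus before
        new_autostart_entries: int  # startup hooks added by the installer
        outbound_hosts: int         # distinct hosts contacted in the first hour


    def summarize(observations: list[InstallObservation]) -> dict:
        """Aggregate many users' reports into a rough community profile."""
        return {
            "reports": len(observations),
            "avg_new_processes": mean(o.new_processes for o in observations),
            "avg_autostart_entries": mean(o.new_autostart_entries for o in observations),
            "avg_outbound_hosts": mean(o.outbound_hosts for o in observations),
        }


    # Example: three volunteers report on the same download.
    reports = [
        InstallObservation("sha256:abc...", 1, 0, 2),
        InstallObservation("sha256:abc...", 1, 0, 3),
        InstallObservation("sha256:abc...", 2, 1, 40),  # one machine saw far more activity
    ]
    print(json.dumps(summarize(reports), indent=2))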

I’ll try to summarize and respond to Zittrain’s arguments later this week. But for now, these excerpts offer a taste of the challenging ideas in his new book.
