On Defining Generativity, Openness, and Code Failure

I’ve really enjoyed the back-and-forth in this symposium about the many issues raised in Jonathan Zittrain’s Future of the Net, and I appreciate that several of the contributors have been willing to address some of my concerns and criticisms in a serious way. I recognize I’m a bit of a skunk at the garden party here, so I really do appreciate being invited by the folks at Concurring Opinions to play a part in this. I don’t have much more to add beyond my previous essay, but I wanted to stress a few points and offer a challenge to those scholars and students who are currently researching these interesting issues.

As I noted in my earlier contribution, I’m very much hung up on this whole “open vs. closed” and “generative vs. sterile/tethered” definitional question. Much of the discussion about these concepts takes place at such a high level of abstraction that I get very frustrated and want to instead shift the discussion to real-world applications of these concepts.  Because when we do, I believe we find that things are not so clear-cut.  Again, “open” devices and platforms rarely are perfectly so; and “closed” systems aren’t usually completely clamped down.  Same goes for the “generative vs. sterile/tethered” dichotomy.

That’s one reason I’ve given Jonathan such grief for making Steve Jobs and his iPhone the villain of his book; the iPhone is held up in the very first and last lines of Future of the Net as the model of what we should hope to avoid. But is it really?  Ignore the fact that there are plenty of more “open” or “generative” phones and operating systems on the market.  The more interesting question here is how “closed” the iPhone really is.  And how does it stack up next to, say, Android, Windows Mobile, BlackBerry, Palm, etc.?  More importantly, how and when do we take the snapshot and measure such things?

I’ve argued that Zittrain’s major failing in FoTN—and Lessig’s in Code—comes down to a lack of appreciation for just how rapid and unpredictable the pace of change in this arena has been and will continue to be. The relentlessness and intensity of technological disruption in the digital economy is truly unprecedented.  We’ve had multiple mini-industrial revolutions within the digital ecosystem over the past 15 years. I’ve referred to this optimistic counter-perspective in terms of “evolutionary dynamism” but it’s really more like revolutionary dynamism.  Nothing—absolutely nothing—that was sitting on our desks in 1995 is still there today (in terms of digital hardware / software, I mean).  Heck, I doubt that much of what was on our desks in 2005 is still there either—with the possible exception of some crusty desktop computers running Windows XP.

Just think about that for a moment.  Compare the half-life of Windows PC operating systems—which Jonathan indirectly glorifies in his book as generativity nirvana—to the half-life of Android operating systems.  Since I picked up my Droid last fall, I’ve already had three Android operating system upgrades.  Since 2007, Apple has rolled out several iOS upgrades, too. This is an astonishing thing, no?

Some application developers actually complain about this frantic pace of mobile OS “revolutions,” especially with Android, since they are dealing with multiple devices and OS versions instead of just one Apple iPhone.  They’d rather see more OS consistency among the Android devices they’re developing for to facilitate quicker and more stable rollouts. It creates serious complications when you are trying to push an app to market while developing for multiple phones running multiple versions of an OS.  And then they have to consider whether and how to develop the same app for several other competing platforms.

Meanwhile, jumping back to the PC operating system realm, software developers in that context have had a more “stable” development platform to play with because Microsoft rolls out OS upgrades at a much slower pace. So, the interesting question here is whether we should consider an OS with a slower upgrade trajectory more “generative” than an OS that experiences constant upgrading if, in practice, the former allows for more “open” (and potentially rapid) independent innovation.  Of course, I understand that other factors play into the “generativity” equation, but I still find it a bit ironic that we’d place the PC OS model on a higher pedestal of generativity than the rapidly-evolving mobile OS ecosystem.

So, this leads to my challenge. I’d like to see Jonathan and other scholars place real-world devices, services, operating systems, etc., on a continuum and justify what goes where in their opinion.  For example, on the generativity spectrum, does Apple’s iPhone really go on the far end (“sterile & tethered”) while an Android-based platform is on the other? Or are they both more toward the middle, with truly closed systems like set-top boxes at one extreme and the Linux OS at the other?  I realize this isn’t an exact science, but if scholars started attempting to plot things along this continuum, it would help move us away from the level of abstraction that continues to cloud the discussion.
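For what it’s worth, the exercise I’m proposing could be sketched in a few lines of code. Everything below is hypothetical: the criteria are loosely borrowed from Zittrain’s own generativity factors (leverage, adaptability, ease of mastery, accessibility), and every score is an illustrative placeholder I made up, not a measurement.

```python
# A toy sketch of the "generativity continuum" exercise proposed above.
# The platforms, criteria, and ALL scores are hypothetical illustrations.

# Each platform is rated 0-10 on a few of Zittrain's generativity
# criteria: leverage, adaptability, ease of mastery, accessibility.
RATINGS = {
    "cable set-top box": {"leverage": 1, "adaptability": 0, "mastery": 2, "accessibility": 1},
    "iPhone":            {"leverage": 7, "adaptability": 4, "mastery": 8, "accessibility": 6},
    "Android":           {"leverage": 7, "adaptability": 7, "mastery": 6, "accessibility": 8},
    "Linux PC":          {"leverage": 9, "adaptability": 9, "mastery": 4, "accessibility": 8},
}

def generativity_score(ratings: dict) -> float:
    """Collapse per-criterion ratings into one 0-10 position on the
    continuum (a plain average; how to weight criteria is its own debate)."""
    return sum(ratings.values()) / len(ratings)

def plot_continuum(platforms: dict) -> list:
    """Return platform names ordered from most 'sterile' to most 'generative'."""
    return sorted(platforms, key=lambda name: generativity_score(platforms[name]))

if __name__ == "__main__":
    for name in plot_continuum(RATINGS):
        print(f"{generativity_score(RATINGS[name]):4.1f}  {name}")
```

Even a toy model like this forces the debate down from abstraction: anyone who disagrees with where the iPhone lands has to say which score is wrong and why, rather than arguing about “open” and “closed” in the air.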

Of course, once they do start plotting these things, I’ll probably have plenty to say in opposition to some of the specific plottings!  Heck, I can’t even get past the “old tech” plottings that Jonathan offered on page 75 of his book when he threw chess, checkers, and dice into the “generativity” camp for games.  We squabbled over this at a New America Foundation event in 2008 (video here) where I protested such assignments on the grounds that there are only so many squares of operational movement on chess and checker boards, and there are only so many sides to dice.  Moreover, player movements are restricted in chess and checkers to pre-assigned patterns.  And then there’s all that funky en passant and castling nonsense in chess. How can all that be generative?!  Yes, yes… I am being a bit petty here, but the point I was trying to make is that “generativity” is a far more relative concept than Jonathan cares to admit.  There are gradations of generativity that need to be explored.

Here’s my second big definitional beef with Zittrain, Lessig and others: In light of the radical revolutions constantly unfolding in this space and upending existing models, I think it’s important we avoid “defining down” market failure.  This is one reason I’ve been so critical of Lessig and his laments about “code failure,” which is really just digerati-speak for what economists have for decades called market failure.

To be clear (if not a bit repetitive), my view of things is shaped not by a blind faith in free markets, but rather a profound appreciation for the fact that when markets are built upon code, the pace and nature of change becomes unrelenting and utterly unpredictable.  Indeed, I believe—contra Lessig’s lament in Code that “Left to itself, cyberspace will become a perfect tool of control”—cyberspace has proven far more difficult to “control” or regulate than any of us ever imagined. Again, the volume and pace of technological innovation we have witnessed over the past decade has been nothing short of stunning.

This is why I keep coming back to my mantra: “Give evolutionary dynamism a chance!”  Sometimes it’s during what appears to be a given sector’s darkest hour that the most exciting things are happening within it.  In my earlier essay, I mentioned all the hand-wringing over AOL and its “market power” circa 1999-2002 and the scary stories told around the cyber-campfire about how this nefarious behemoth would become something akin to a digital black hole from which nothing in (cyber)space would escape. The fever pitch reached its zenith just after AOL announced its merger with Time Warner, which was followed by predictions of the rise of “new totalitarianisms” (Norman Solomon) and the birth of corporate “Big Brother” (Robert Scheer). Again, as I document in my previous essay and this white paper, the result of that merger was indeed cataclysmic – but only for AOL-Time Warner shareholders.  In the larger scheme of things, it’s already become an afterthought in our chaotic cyber-history. But I’m not so willing to let those old critics forget about their lugubrious lamentations. They said the sky would fall, and it most certainly did not. (Also, all the hand-wringing about AOL’s looming monopolization of instant messaging seems particularly silly now, since anyone can download a free chat client like Digsby or Adium to manage IM services from AOL, Yahoo!, Google, Facebook and just about anyone else, all within a single interface—essentially making it irrelevant which chat service your friends use.)

Here’s another example which others may remember and appreciate even more. In the late 1990s, many – including governments both here and in the EU – claimed Microsoft had a dominant lock on the browser market.  Dour predictions of perpetual Internet Explorer lock-in followed.  For a short time, there was some truth to this.  But innovators weren’t just sitting still; exciting things were happening precisely because of this.  In particular, the seeds were being planted for the rise of Firefox and Chrome as robust challengers to IE’s dominance—not to mention mobile browsers.

Of course, it’s true that over half of all websurfers still use a version of IE today.  But so what? We have viable, impressive alternatives now.  That’s all that counts. The world changed, and for the better, despite all the hand-wringing less than a decade ago about Microsoft’s supposed dominance.

So, here’s that other challenge: I would love to see scholars plot case studies or incidents along a “code failure” continuum.  I realize this is even less scientific than the “generativity continuum” above, but I would hope it could serve some didactic purpose or help us to frame these debates.

In both the case of the “code failure” and “generativity continuums,” however, I would argue that the continuum itself is constantly evolving and that this evolution is taking place at a much faster clip in this arena than it does in other markets. Coders don’t sit still. People innovate around “failure.” Indeed, “market failure” is really just the glass-is-half-empty view of a golden opportunity for innovation. Markets evolve. New ideas, innovations, and companies are born. And things generally change for the better—and do so rapidly.

That’s why I’m more bullish on our cyber-future than Zittrain, Lessig and other cyber-worrywarts.  Have a little faith, people!

Adam Thierer

Adam is a senior research fellow at the Mercatus Center at George Mason University. He previously served as President of the Progress & Freedom Foundation, Director of Telecommunications Studies at the Cato Institute, and Fellow in Economic Policy at the Heritage Foundation.

5 Responses

  1. Orin Kerr says:

    Nice post, Adam. I tend to agree.

  2. Anon says:

    Great post. I also noticed that, after calling you out, Pasquale has yet to reply to your response here: http://concurringopinions.com/archives/2010/09/future-of-the-internet-symposium-a-challenge-to-thierer-and-zittrain.html#comments

  3. Nick B says:

    I guess what I’m missing from both the AOL narrative and the IE narrative above is a sense of what role the government played in promoting competitive responses in each of these cases. Could AOL itself – and eventually the thousands of competitors to AOL (see http://bit.ly/asUuCb [pdf]) – have arisen in the absence of the Title II requirements placed upon the transmission/access services of phone companies? If the DOJ had never pushed back on Microsoft’s ability to bolt IE and related services into its dominant operating system, would we be using Safari/Firefox/Chrome to the degree we are today?

    Do you see a role for the FCC, DOJ, or others in enabling the conditions (or planting the seeds) for “innovation around failure”? More broadly, on what kind of scale do you balance the benefits and positive spillovers of governmental intervention with the costs and excesses in different innovation contexts?

    These are honest questions — I’m not sure of the answer to them! It’d just be useful to have a more specific account of how markets become free and escape from uncompetitive dynamics — and an account that includes the actors and rules determining the possibilities for exchange. Without this account, I’m afraid that “things generally change for the better” is an incomplete history. Not overly optimistic, just incomplete.

  4. The Internet is like the NFL. In the “Nirvana Days”, the spectators stood at the edge of the field. As fan interest grew they had to be pushed back into grand-stands. Thus began the Thick-Client vs. Thin-Client debates.

    Some insiders expect to always be standing at the edge of the field. They seem to think it is their birth-right. If one hails from one of the dozen “right places” (Harvard, Yale, Stanford…) they get to stand at the edge of the field.

    But, the people at the edge of the field, start to see that SkyBoxes have been built. They want to see what goes on in those boxes. Their position at the edge of the field may not be so ideal. They see people sending messages directly from one box to another.

    In a more important evolution, a new spectator base emerges in the parking lot. They are the people in the tail-gate parties. They don’t even care about the game. They like to party. The Big.Lie.Society starts to realize that their prime position at the edge of the field may not be the most desirable. Is the NFL morphing into more than football? How can that be?

    An even more significant evolution occurs, PEOPLE ARE WATCHING ON TV!!! Oh my God, The.Big.Lie.Society insiders standing at the edge of the field are now in the way of the camera shots. Nope, they ain’t moving. They really believe that everyone can be standing at the edge of the field. They could not care less that the laws of physics prevent that. They take care of Number ONE first.

    It Seeks Overall Control

  5. Ionut Pop says:

    There is a difference between aspirational guideposts that say “go no further past this point” and heavy-handed regulation from the top down.

    I can kind of see Zittrain, et al.’s point that certainly lots of bad things *could* occur without intervention. But is it really top-down regulation they seek, or simply some signals from the powers that be that “hey, we’re watching you [insert name of dominant market player here] so don’t get too cute.” I can’t say for certain; some weeks it’s one, some weeks the other.

    I think the specter of government intervention by itself is enough to keep most companies honest in this economy. That, and the specter of turning your customers’ goodwill toward your product against you by restricting their access to and enjoyment of the web. Anything more simply risks regulatory capture of one sort or another.