Future of the Internet Symposium: How can we create even better incentives?
Disclaimer: The views expressed here are mine alone and do not in any way represent those of my employer.
I appreciated Orin Kerr’s suggestion to take Adam Thierer’s seven objections to the Zittrain thesis as a starting point for further discussion. I’m particularly interested in exploring objection #2: that incentives already exist to check closed systems that negatively impact consumer welfare. In general, I agree with Adam’s assertion that these incentives exist, particularly in market economies. But I think the core value of Jonathan’s thesis is not so much an assertion that these incentives do not exist today as a question: could we create even more powerful ones through generative design?
The “perfect enforcement” consequences of tethered design that Jonathan explores seem to be very real in some countries, if you believe recent news about efforts to shut down services entirely unless surveillance and censorship mechanisms are put in place. As an American who has lived in the US most of her life, I can’t comment extensively on the extent to which incentives exist globally the way they do in the US. Here, I have faith that our right to free speech, enshrined in the Constitution, would enable a whistleblower to identify behavior of that nature. I also have faith that our competitive marketplace would lead to alternative services springing up quickly. It’s not clear to me that these and other incentives exist globally, so I’d like to broaden Adam’s point and ask how we might design generative systems that create the social and economic incentives required to check bad behavior by powerful actors.
In 2009 Google launched a little project called Measurement Lab, an open platform of servers on which developers and researchers can deploy Internet measurement tools. It’s one example of an attempt to decentralize the power that comes with measurement and to open up access to data that has otherwise been available only to a handful of backbone and last-mile providers. M-Lab, as it’s called, couldn’t have been launched by Google alone; it required collaboration among a diverse group of academic researchers, non-profit organizations, and companies, few of whom (if any!) had a direct financial interest in the project, not entirely unlike the Internet itself, if at a much smaller scale. The outcome is that policymakers have access to independent, objective data and research about Internet speed, latency, and accessibility. I think it’s fair to say that this is a generative approach to solving the Internet accessibility problem.
When I think about the generativity thesis, these are the types of solutions to hard problems that come to mind. As Adam observes, Jonathan doesn’t lay out concrete proposed solutions for tackling the vast array of policy problems he identifies, but in my view the primary contribution of the work is not a proposed solution. It is a framework for thinking about the possible solution space.
I’ve often thought, for example, about applying this framework to identity. Jonathan wrote yesterday about his argument for reputation bankruptcy, which touches loosely on the same topic, but I’d like to address a slightly different aspect of identity than reputation: embedded identities. As we move toward a more social web, our identities are increasingly embedded in a system, much more like our real-world experience than the early Internet experience of identity. Ten years ago on the Internet, identity was a username and password, and maybe a few additional characteristics tagged on for color. Increasingly today, identity is that username and password embedded in a network of social connections and behaviors. It’s not just the graph; it’s the graph overlaid with data about behavior over time. For example, a Twitter identity is embedded in a graph, but also in a timeline of comments, many of which may be geo-tagged or may include links to other individuals. That entire system of social, geographical, and longitudinal information has become relevant to thinking about identity, privacy, and anonymity.
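To make the “graph overlaid with behavior over time” idea concrete, here is a minimal sketch of an embedded identity as a data structure. All the names and fields are purely illustrative, my own invention for this post; no real service stores identity this way:

```python
from dataclasses import dataclass, field
from time import time
from typing import Optional

@dataclass
class Post:
    text: str
    timestamp: float                                  # when the behavior occurred
    geo: Optional[tuple] = None                       # optional (lat, lon) tag
    mentions: list = field(default_factory=list)      # links to other identities

@dataclass
class EmbeddedIdentity:
    username: str                                     # the old, "thin" identity
    connections: set = field(default_factory=set)     # edges in the social graph
    timeline: list = field(default_factory=list)      # longitudinal behavior data

    def post(self, text, geo=None, mentions=None):
        self.timeline.append(Post(text, time(), geo, mentions or []))

# The identity is more than credentials: it accumulates a graph plus a history,
# which is exactly what is lost if a provider deletes the account.
alice = EmbeddedIdentity("alice")
alice.connections.update({"bob", "carol"})
alice.post("hello from Cambridge", geo=(42.37, -71.11), mentions=["bob"])
```

The point of the sketch is that deleting `alice` destroys three distinct things at once: the credentials, the graph edges, and the behavioral record, which is why a false-positive deletion is so much more destructive than it was in the username-and-password era.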
There are all sorts of theoretical policy challenges we could imagine. To take just one, imagine a user whose embedded identity is “turned off” or “deleted” by a service provider because the user is incorrectly deemed to be engaged in fraudulent, spammy, or malicious behavior. Of course service providers need to secure their services and keep them free of spam, and they ought to have every right to do so. The ease of tackling problems like spam and fraud in a more “closed” (as Thierer would call it) or “tethered” system may be a real benefit of those systems. But the unintended consequences for this one user experiencing a false positive are potentially destructive. What sorts of generative solutions might we imagine to preserve the security and other benefits of closed/tethered systems while enabling some form of recourse and repair for the small group of users experiencing negative, unintended consequences?
Twitter has tackled this challenge by issuing “verified identities.” I might consider this borderline generative: it provides a signal that could, in theory, enable the collective network of users to be more suspicious of non-verified identities, adding value to the attainment of a verified identity. I’m not sure this has worked in practice (I, at least, still seem to get spammy followers), but it seems to me an attempt worth evaluating within the generativity framework alongside other solutions, perhaps including the web-of-trust model from cryptography.
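For contrast with a centrally issued verification badge, the web-of-trust idea can be sketched in a few lines: an identity is trusted not because one provider vouches for it, but because a chain of endorsements connects it to someone you already trust. This is a deliberately simplified illustration, not PGP’s actual trust algorithm, and all the names in it are hypothetical:

```python
def is_trusted(identity, anchors, endorsements, max_depth=3):
    """Return True if a trust anchor reaches `identity` via a short
    chain of endorsements. `endorsements` maps each signer to the set
    of identities that signer vouches for."""
    frontier, seen = set(anchors), set(anchors)
    for _ in range(max_depth):
        if identity in frontier:
            return True
        # Expand one hop outward through the endorsement graph.
        nxt = set()
        for signer in frontier:
            nxt |= endorsements.get(signer, set())
        frontier = nxt - seen
        seen |= frontier
    return identity in frontier

endorsements = {"anchor": {"alice"}, "alice": {"bob"}}
is_trusted("bob", {"anchor"}, endorsements)      # True: anchor -> alice -> bob
is_trusted("mallory", {"anchor"}, endorsements)  # False: no endorsement chain
```

The design difference matters for the false-positive problem above: in a web of trust, no single actor can “turn off” an identity, because trust is an emergent property of many independent endorsements rather than one provider’s database flag.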
I began this post by revisiting Thierer’s seven objections to the generativity thesis, and in general I do agree with his observations. But I think the important insights to be gained here are in the questions raised, rather than solutions (or lack thereof) proposed. I’d love to see as an outcome of this discussion a curated list of difficult policy and design questions we will face as tethered and generative systems continue their mutual march toward the future. Could we come up with a list of “Hilbert’s problems” for network design? I’ll get that list started by asking: How can we preserve the ability to remain anonymous online while reaping all the benefits that an embedded identity system can provide?