Mind the Gap (Symposium on Configuring the Networked Self)

Julie Cohen has written a great book, perhaps the most important Cyberlaw book since Code. I say this even though I recognize the many virtues of Cyberlaw books written by Jonathan Zittrain, Tim Wu, Yochai Benkler, and Barbara van Schewick, privacy books written by Dan Solove, Lior Strahilevitz, Viktor Mayer-Schönberger, and many other books published recently. But not since Code has one book challenged the way we conceptualize and try to solve technology problems as much or as well as this book does.

In this post, I want to focus on “semantic discontinuity,” the label Cohen gives to the most novel and interesting construct in the book. Semantic discontinuity is one of three “principles that should inform the design of legal and technical architectures,” along with “access to knowledge” and “operational transparency.” In her words, “semantic discontinuity is the opposite of seamlessness. . . . It is a function of interstitial complexity within . . . institutional and technical frameworks.” It serves a “vital” function, “creat[ing] space for the semantic indeterminacy that is a vital and indispensable enabler of the play of everyday practice.” (Kindle location 4288)

In other words, semantic discontinuity valorizes noise, inefficiency, constraints, and imperfections. As this list illustrates, the most striking thing about this book is the size of the herd of sacred cows it leads to the slaughter.

But, to repeat the question Cohen asked during this symposium, how do you operationalize semantic discontinuity? Focusing on privacy law, semantic discontinuity leads to what she calls a principle of “just aggregation,” which will animate “interventions aimed at preserving the commercial, technical, and spatial disconnects that separate contexts from one another.” (Kindle 4843) To put it more metaphorically and concretely, “privacy law and policy should reinforce and widen gaps within the semantic web so that situated subjects can thrive.” (Kindle 4765)

This is heady stuff, and I really love it. I am attracted to this metaphor, that privacy law (and copyright law, and unauthorized access law) must protect, create, or widen “gaps” in enforcement, coverage, and definition. It provides a goal with an easy-to-understand label, built upon a deep theoretical base, for defending aggressive regulatory interventions that are likely to improve privacy.

But I think Cohen has not yet done enough to explain how we get from the abstraction to concrete, defensible solutions. Somehow, the principle of just aggregation leads her to something like the Fair Information Practice Principles (FIPPs) on steroids. We should treat companies that hold information about us as “data fiduciaries,” and force them to obey restrictions on how they can use data and with whom they can share data. And when they are done using the data, they must destroy it. (Kindle 4836) Then, we should temper these obligations in certain contextual situations, giving more freedom for data retention and sharing when the data are being used to advance individual well-being or medical research, or shared on social networking platforms. (Kindle 4852) Law enforcement access will be subject to its own set of rules, ones developed by those who understand the perils of an obsession with risk management and a tendency to engage in security theater. (Kindle 4900)

This is a fine list. I think a society that embraces and enforces rules like these would enjoy significantly more privacy than we do today, albeit at some significant cost. It is hard to justify the cost for now, however, because the book skips too many steps from the abstract idea of “gaps” to this fine-grained set of prescriptions. And I agree with Anita Allen that many theorists have used liberal political theory and rights-talk to end up with very similar lists.

I think we need to do more work to better explain what the “gaps” of information privacy law should look like. As a modest start, I would like to make a claim about the nature of these gaps, one I see woven throughout the book but never stated plainly enough: we will be forced to carve out these gaps using machetes, not scalpels. It seems hard, almost by definition, to design legal or technological architectures finely tuned to bolster “the play of everyday practice” and foster “evolving subjectivity.” The very ideas of play and subjectivity seem tied in important ways to the unexpected. As Cohen puts it:

[A]n important function of play is the opening of spaces or gaps into which evolving subjectivity (and also evolving collectivity) might move. Evolving subjectivity, or the everyday practice of self, responds to the play-of-circumstances in unanticipated and fundamentally unpredictable ways. . . . [T]he play-of-circumstances operates as a potent engine of cultural dynamism, mediating both evolving subjectivity and evolving collectivity, and channeling them in unexpected ways.

(Kindle 2591) (emphases added). By calling for machetes, I understand that I might be confusing the thing we are trying to produce with the tool we need to produce it. It may be that a precisely defined, narrowly tailored, and rigidly constructed set of “gaps” in law or technology might somehow best foster “fundamentally unpredictable” results. But I doubt it. It seems to me that the architectures best suited to channeling responses “in unexpected ways” will themselves be unpredictably lumpy, misshapen, and even somewhat illogical. As Cohen explains in the most bumper-sticker-worthy passage in the book, “privacy consists in setting of limits precisely where logic would object to drawing lines.” (Kindle 4846)

This nicely highlights both the exciting possibilities and the potential Achilles’ heel of semantic discontinuity. By freeing us from the hyper-rational cause-and-effect modes of privacy policy we tend to embrace today, Cohen gives us license to dream up creative new solutions we never dared try before. (Mandated use of decaying variables like those implemented in the Entropy programming language, anyone?) But by unmooring us, at least a little, from strict rationality, predictability, and precision, Cohen invites solutions that are sure to be attacked as anathema to the First Amendment by the noisy libertarian wing of tech law and policy, if not by the liberal political theorists themselves.
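For readers unfamiliar with the Entropy reference: Entropy is an esoteric programming language in which all data decays slightly each time it is accessed. Here is a minimal Python sketch of that idea as applied to a stored personal datum; the class name, noise model, and parameters are my own invention for illustration, not Entropy’s actual semantics or anyone’s proposed regulation:

```python
import random

class DecayingValue:
    """Toy model of an Entropy-style decaying variable: every read
    perturbs the stored value, so the more a datum is used, the less
    precise (and less privacy-invasive) it becomes over time."""

    def __init__(self, value, noise=0.01, seed=None):
        self._value = float(value)
        self._noise = noise            # relative perturbation per read
        self._rng = random.Random(seed)

    def read(self):
        # Each observation nudges the stored value by a small random
        # fraction of its magnitude, then returns the degraded result.
        self._value += self._rng.uniform(-self._noise, self._noise) * abs(self._value)
        return self._value

# A "fact" about a person drifts the more often it is consulted.
record = DecayingValue(98.6, noise=0.05, seed=1)
readings = [record.read() for _ in range(100)]
```

The design point, in the spirit of the post: the gap here is built into the architecture itself, since no amount of downstream policy compliance can recover the original value once it has been read away.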

I’m still not sure whether Cohen will manage to pull us out of the flightpath to zero privacy in which we find ourselves, but for now, wherever she’s going, I’m happy to follow.


3 Responses

  1. Julie Cohen says:

    Paul, I’m so glad you liked the book. To your point that “Cohen has not yet done enough to explain how we get from the abstraction to concrete, defensible solutions,” you are absolutely right. That wasn’t the point of this particular book, which tried to do its heavy lifting at the theory stage and which certainly doesn’t need to be any longer! But it definitely needs to be the point of subsequent projects … and those projects also need to smooth out some of the terminology so more of it is, if not bumper-sticker worthy, at least amenable to shorter sentences!

    I love your insight about the machete and think it is absolutely right. We get into trouble when we try to craft “precisely defined, narrowly tailored, and rigidly constructed” rules. We just aren’t as foresighted as we like to think we are.

    And you’re right that someone needs to tackle the project of understanding and explaining exactly where and how information privacy rules do/should intersect with freedom of expression rules, another task the book did not take on. (I’m being deliberately clunky there – information privacy concerns are global so “the First Amendment” can’t be the be-all and end-all; the discussion has to be situated within global information privacy and free expression frameworks.)

  2. Frank Pasquale says:

    Great post, Paul. I really like the valorization of semantic discontinuity in the context of internet marketing and tracking. I think it gets trickier when we add in concerns about public health and security.

    For example, imagine that, a decade from now, a complete digital copy of your medical record exists. Doctor and hospital visits, prescription records, genetic tests, even records of gym visits and food purchased at the grocery store are part of one, unified computer file. If you’re a member of the “quantified self” movement, you can also insert verified tracking of your average pulse, sleep time, weight, and meters walked per day. Consider the following possible uses of the file:

    1) A physician wants to determine how people with profiles similar to yours have responded to a drug she wants to prescribe for you.
    2) A pharmaceutical firm wants to market certain products to you based on your medical profile.
    3) A reputational intermediary offers to build a “personal prospectus” that demonstrates how healthy you are. Once it certifies the document, you can include it in job applications to self-insured employers to try to demonstrate how little you will cost them.
    4) A domestic intelligence “fusion center” at the Department of Homeland Security is trying to determine the priority for vaccine distribution if an avian flu strikes in late 2023.
    5) A search engine wants to use the data to optimize ads for you.
    6) An employer wants the record to determine if you should be promoted.

    Each of these scenarios reflects current legal controversies about access to information that is increasingly easy to store and analyze. I think the practical question for proponents of semantic discontinuity is: how do we configure architectures of health information storage so that certain of these uses can be barred, or reported to the data subject, while others happen as smoothly and swiftly as possible? I think the final chapters of this report are a good starting point, though it has not been greeted with enthusiasm by corporate stakeholders in the HIT field.

  3. Paul Ohm says:

    Frank, thanks for the comment. You raise a key problem. Julie’s solution seems to be: broad baseline limitations on all uses of data, with narrow exceptions for “advancing individual well-being.” But as I begin to say in the post, I can’t imagine how we’d support this solution (or any solution) starting from semantic discontinuity until we develop a more sophisticated model for operationalizing it.

    You end your comment focusing on the debate itself, the fact that stakeholders will object to any change along these lines. This is where I see real power in Julie’s book. The way she fearlessly takes on sacred cows like “information equals knowledge” and “we need to weed out structural inefficiencies” gives me hope that we might finally break out of the tired old rhetorical patterns we always find ourselves replaying.