Unraveling Privacy as Corporate Strategy

The biometric technologies firm Hoyos (previously Global Rainmakers Inc.) recently announced plans to test a massive deployment of iris scanners in León, Mexico, a city of over a million people. The company expects to install thousands of the devices, some capable of identifying fifty people per minute even at regular walking speeds. At first the project will focus on law enforcement and improving security checkpoints, but within three years the plan calls for integrating iris scanning into most commercial locations. Entry to stores or malls, access to an ATM, use of public transportation, paying with credit, and many other identity-related transactions will occur through iris scanning and recognition. (For more details, see Singularity’s post with videos.) Hoyos has the backing to make this happen: on October 12th the company also announced new investment of over $40M to fund its growth.

There are obviously many interesting privacy- and tech-related issues here. I’ll focus on one: the company’s roll-out strategy is explicitly premised on the unraveling of privacy created by the negative inferences and stigma that will attach to those who choose not to participate. Criminals will automatically be scanned and entered into the database upon conviction. Jeff Carter, Chief Development Officer at Hoyos, expects law-abiding citizens to participate as well, however. Some will do so for convenience, he says, and then he expects everyone else to follow: “When you get masses of people opting-in, opting out does not help. Opting out actually puts more of a flag on you than just being part of the system. We believe everyone will opt-in.” (For the full interview, see Fast Company’s post on the project.)

In a forthcoming article, I’ve written at length about the unraveling effect and why it now poses a serious threat to privacy. This biometric deployment is one of many examples, but it most explicitly illustrates that unraveling has moved beyond unexpected consequence to become corporate strategy.

The unraveling effect holds that under certain conditions every member of a pool will ultimately reveal his or her type, even if at first it seems unwise for each to do so. Those with the “best” traits disclose first because their type is above average, and thus being lumped together with the rest of the pool is a detriment. As the average quality of those remaining in the pool drops, however, the new “best” individuals find themselves with the same incentive to disclose. As Robert Frank puts it in Passions Within Reason, “[t]he unraveling process is set in motion, and in the end all [individuals] must either [disclose] or live with the knowledge that [others] will assume they are of the ‘worst’ type … [T]he lack of evidence that something resides in a favored category will often suggest that it belongs in a less favored one.” The key is the negative inferences that attach to staying silent (or, in this case, to not participating in Hoyos’s iris-scanning system).
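
The dynamic described above can be sketched as a toy simulation (the quality numbers and the decision rule are my own illustrative assumptions, not anything from Frank or Hoyos): in each round, anyone whose true quality exceeds the average of the still-silent pool gains by disclosing, the pool's average falls, and the process repeats.

```python
# Toy simulation of the unraveling effect: an individual discloses
# once her true quality exceeds the average that observers would
# otherwise impute to the silent pool. Illustrative numbers only.

def unravel(qualities):
    """Return (disclosed, silent) after unraveling runs its course."""
    silent = sorted(qualities)
    disclosed = []
    while silent:
        pool_avg = sum(silent) / len(silent)
        # Those above the silent-pool average gain by disclosing.
        movers = [q for q in silent if q > pool_avg]
        if not movers:
            break  # no one left has an incentive to disclose
        disclosed.extend(movers)
        silent = [q for q in silent if q <= pool_avg]
    return disclosed, silent

disclosed, silent = unravel([10, 20, 30, 40, 50])
print(disclosed)  # everyone above the shrinking average discloses in turn
print(silent)     # only the very "worst" type remains silent
```

Run on the five hypothetical types above, the top two disclose first, then the next, and so on, until only the lowest type remains silent and is identified by default, which is exactly Frank's point that silence itself becomes informative.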

It is not surprising that firms would understand unraveling and its power to incentivize disclosure. I’ll admit that at first I found it a bit surprising that Carter would be so transparent about his company’s hope that unraveling would undermine privacy in León, but I’ve come to realize that his message simply exemplifies the problem for privacy scholars and advocates that I’ve tried to identify in my recent piece. Unraveling can be framed as individual self-interest: “the consumer is consenting to be scanned because it makes her life easier. What’s objectionable about that?” Such framing dismisses any worry that some will find their consent forced by the negative stigma attached to not participating; they are still “consenting,” after all, and who really knows why? Maybe those in the middle or bottom of the pool felt a little pressure, but in the end full disclosure will help everyone, right?


By the way, no, sunglasses don’t deter the scanners.

More on unraveling in future posts. And, by the way, thanks to Concurring Opinions for inviting me to participate this month. I’m looking forward to it!

4 Responses

  1. Frank Pasquale says:

    Your article looks very interesting and I look forward to reading it.

    I’d take your observations in two directions here. First, privacy appears to be something of a reverse positional good. As a recent paper stated, “Many goods have positional value (as opposed to functional value) because they increase the status of their consumers.” But you’re projecting a world where the less people reveal, the lower their status. Or, to put it another way, people who “purchase” more privacy end up paying for it with lower social position.

    Unfortunately, I am afraid that mainstream economics can’t really accommodate the concerns raised by positional or reverse-positional goods. That’s one reason my piece in Northwestern this year argues for supplementing (and perhaps replacing) economic reasoning on privacy with broader accounts of the social aims of privacy law, fair data practices, and reputation regulation.

    I think one of those social aims is to rebalance power between the watchers and the watched. My “thought piece” at this symposium pursues that idea, calling it “reciprocal transparency”:

    Unraveling may mean that we can’t do much to stop corporate and government officials from learning ever more details about our lives. But we can at least implement immutable audit logs to assure that every use the “watchers” make of that data is itself logged.

    Finally, for some scary directions this is all going, check out this prediction from Sharona Hoffman:

    “Employers or their hired experts may develop complex scoring algorithms based on electronic health records to determine which individuals are likely to be high-risk and high-cost workers.”
    (in 19-SPG Kan. J.L. & Pub. Pol’y 409)

  2. Frank says:

    PS: I have a sense that the defenders of unraveling will argue that their system only determines good “fit,” not winners and losers. If so, this post of mine on zero-sum games may be of interest.


  3. Anonymous Coward says:

    If the problem is that people are being coerced to disclose information about themselves, the solution seems simple enough: Make sure the law does not allow anyone to be punished for intentionally disclosing false information. Then the unraveling process is destroyed because anyone who is coerced to disclose can just disclose false information.

    Another alternative is shunning. That is, everyone who is not of the “best” type boycotts or otherwise retaliates against anyone who discloses so that the cost of disclosure is greater than the benefit for those of the “best” type and the unraveling never gets started. (Obviously this won’t work if the large majority is the “best” type.)

  4. Ryan Calo says:

    @Scott: I couldn’t get a decent credit card fifteen years ago. I had to build up some history. But one hundred years ago, I could describe my spending history until I was blue in the face—credit only flowed to the already rich. Unraveling, or a better lending technique?

    @Frank: I forgot to mention this at Yale but Samuel Bray has a fantastic article about how some rules exist to change power dynamics. It’s called “Power Rules” and I believe it’s in Columbia Law Review.