The Promise of Even Stronger Internet Intermediaries?
Recent privacy scholarship has begun to focus on the role of internet intermediaries (Google, Facebook, ISPs, etc.) and appropriate means to regulate their growing power. The general impression is that intermediaries have gotten too strong and too secretive, and that we need effective means to monitor and potentially control their behavior. Take them down a notch, so to speak, or at least make them more accountable.
There may be a problem with a general attack on intermediaries, however: what if, in some circumstances, privacy would best be protected by stronger, more secretive intermediaries–entities with a legal privilege to protect the information in their care, legal obligations to those who entrust them with that information, and the ability to share that information with others only selectively, in filtered and sometimes obfuscated form?
Would it be worthwhile to create the legal architecture needed to effect such stronger intermediaries?
Take as a given that you do not want to live in a world in which everyone knows all of your characteristics, preferences, and history. In other words, you’d like some privacy: some ability to influence what others know about you. You do not want to have to share everything with everyone all the time. Also take as a given, however, that you do want to live in the modern world. You like to use computers, you want to apply for a mortgage, and you understand that some tracking and scoring of your behaviors, and some disclosure of those behaviors to others, will be necessary to navigate the economic demands of modern life. In other words, although you may not like being sorted, screened, or quantified, you understand that a technocratic economy requires data about you in order to objectify and process transactions with you.
How to reconcile these competing realities?
One possible solution is to inject “neutral” third parties into information-dependent transactions to act as filters on the information–making it useful without requiring full disclosure. This structural change to a transaction is similar to the role of a mediator in dispute resolution. As Nobel Prize winner Thomas Schelling explained fifty years ago, this is one of the most powerful functions of an intermediary–the ability to compare information from both parties, verify that the comparison satisfies some pre-determined criteria, and yet not share the raw data with either party. As Schelling put it, an intermediary “makes possible certain limited comparisons that are beyond the mental powers of the participants, since no player can persuasively commit himself to forget something.” (The Strategy of Conflict 144).
I discussed how this concept could apply in the privacy context in a recent draft on mortgage markets, and am continuing to explore it in a work-in-progress (titled Privacy Trustees). The general idea is that in some contexts, Privacy Trustees may be able to provide the information needed for efficiency purposes while keeping much of the raw data private. For example, when a consumer applies for a bank loan, the bank wants to know whether the applicant qualifies according to several criteria. Rather than share the raw data with the bank (employment and salary information, personal health information, etc.) and risk that the bank could then sell that information to others, a consumer might give the information to a Privacy Trustee. The trustee could then compare the consumer’s information to the bank’s criteria and give the bank a simple “yes” or “no” answer about whether the consumer is qualified.
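The loan example can be sketched in a few lines of code. This is only an illustration of the structure, not anyone’s actual implementation: the class name, the data fields, and the bank’s criteria below are all hypothetical.

```python
# Illustrative sketch of a Privacy Trustee. The bank never sees the
# consumer's raw data; it receives only a single yes/no answer about
# whether its criteria are satisfied. All names and thresholds are
# hypothetical.

class PrivacyTrustee:
    def __init__(self, consumer_data):
        # The raw data is held by the trustee and never released.
        self._data = dict(consumer_data)

    def evaluate(self, criteria):
        """Run every criterion against the private data and return only
        a boolean, never the underlying values."""
        return all(check(self._data) for check in criteria.values())

# The consumer entrusts raw data to the trustee.
trustee = PrivacyTrustee({
    "annual_salary": 85_000,
    "years_employed": 6,
    "existing_debt": 20_000,
})

# The bank supplies its qualification criteria as predicates; it learns
# whether they hold, but not the salary, tenure, or debt figures.
bank_criteria = {
    "income":    lambda d: d["annual_salary"] >= 50_000,
    "stability": lambda d: d["years_employed"] >= 2,
    "leverage":  lambda d: d["existing_debt"] < 0.5 * d["annual_salary"],
}

print(trustee.evaluate(bank_criteria))  # prints True: qualified, no raw data disclosed
```

The key design point is that the interface exposes only the filtered answer: the bank’s predicates run inside the trustee, and the return value is a single bit rather than the data itself.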
This is not an entirely new idea. Others have touched on it, including:
- computer scientists, who have studied “interactive techniques” involving active data administrators who selectively filter and disclose information (see Paul Ohm’s Broken Promises for discussion);
- health regulators, who have built the (somewhat related) idea of “health information trustees” or “information custodians” into draft health information legislation; and
- mobile technology (and other software) developers, who are exploring the idea of “data vaults” to make sensor data useful but private (see Frank Pasquale’s recent post, and/or descriptions here or here).
To succeed, Privacy Trustees must have several characteristics. They must be trusted by both sides of the transaction. They must be able to hold the information in confidence and protect it from attack (either technological attack such as hacking or legal attack such as subpoena). They must be able to efficiently process the criteria of data users (e.g., the bank) and run comparisons against their data sets. They must have legal obligations to keep the data secure, as well as obligations to honor the wishes of those on both sides of the transaction (the data-providing consumer and the data-using firm, for example).
Is there a role for Privacy Trustees? Could such stronger intermediaries provide consumers with a means to share their data without disclosing it fully? Or have we passed the point at which such information architecture might have been possible and/or mattered?