

The Buzzword of the Year: “Multistakeholder”

Greetings to Concurring Opinions readers. I thank the editors for inviting me to guest blog. I am looking forward to the opportunity to write more informally than I have done for a long time. I am out of the administration and don’t have to go through the painful process of “clearing” every statement. And I am focusing on researching and writing rather than having clients. So the comments are just my own.

I suspect I’ll be writing about quite a range of privacy and tech issues. Many of my blog-sized musings will likely be about the European Union proposed Data Protection Regulation, and the contemporaneous flowering of privacy policy at the Federal Trade Commission and in the Administration.

From the latter, I propose “multistakeholder” as the buzzword of the year so far. (“Context” is a close second, which I may discuss another time.) The Department of Commerce has received public comments on what should be done in the privacy multistakeholder process. (My own comment focused on the importance of defining “de-identified” information.)

Separately, the administration has been emphasizing the importance of multistakeholder processes for Internet governance, such as in a speech by Larry Strickling, Administrator of the National Telecommunications and Information Administration.

Here’s a try at making sense of this buzzword. On the privacy side, my view is that “multistakeholder” is mostly a substitute for the old term “self regulation.” Self regulation was the organizing theme when the U.S. negotiated the Safe Harbor privacy agreement with the EU in 2000. Barbara Wellbery (who lamentably is no longer with us) used “self regulation” repeatedly to explain the U.S. approach. The term accurately describes the legal regime under Section 5 of the FTC Act – an entity (all by itself) makes a promise, and then it’s legally enforceable by others. As I have written since the mid-1990s, this self regulatory approach can be better than other approaches, depending on the context.

The term “self regulation,” however, has taken on a bad odor. Many European regulators regard “self regulation” as the theme of the Safe Harbor, which they consider weaker than it should have been. Many privacy advocates have also justifiably said that the term puts too much emphasis on the “self,” the company that decides what promises to make.

Enter stage left with the new term, “multistakeholder.” The term directly addresses the advocates’ issue. Advocates should be in the room, along with regulators, entities from affected industries, and perhaps a lot of other stakeholders. It’s not “self regulation” by a “selfish” company. It is instead a process that includes the range of players whose interests should be considered.

I am comfortable with the new term “multistakeholder” as a replacement for the old “self regulation.” The two differ in that the new term includes more of those affected. They are the same, however, in that both stand in contrast to top-down regulation by the government. Depending on the facts, multistakeholder may be better, or worse, than the government alternative.

Shifting to Internet governance, “multistakeholder” is a term that resonates with the bottom-up processes that led to the spectacular flowering of the Internet. Examples include organizations such as the Internet Engineering Task Force and the World Wide Web Consortium. Somehow, almost miraculously, the Web grew in twenty years from a tiny community to one numbering in the billions.

The term “multi-stakeholder” is featured in the important OECD Council Recommendation On Principles for Internet Policy Making, garnering 13 mentions in 10 pages. As I hope to discuss in a future blog post, this bottom-up process contrasts sharply with efforts, led by countries including Russia and China, to have the International Telecommunication Union play a major role in Internet governance. Emma Llansó at CDT has explained what is at stake. I am extremely skeptical about an expanded ITU role.

So, administration support for the “multistakeholder process” in both privacy and Internet governance. Similar in hoping that bottom-up beats top-down regulation. Different, I suspect, in how well the bottom-up has done historically. The IETF and the W3C have quite likely earned a grade in the A range for what they have achieved in Internet governance. I doubt that many people would give an A overall to industry self-regulation in the privacy area.

Reason to be cautious. The same word can work differently in different settings.


The Wake Forest Law Review Online: “The Myth of Perfection”


The Wake Forest Law Review Online has published an essay on internet privacy, online censorship and intellectual property rights: The Myth of Perfection by Derek E. Bambauer.

In The Myth of Perfection, Derek Bambauer explores the impact of the pursuit of perfection on internet privacy, online censorship, and intellectual property protection.  Bambauer argues that the “obsession” with perfection may threaten innovation and detract from more pressing privacy concerns.  Ultimately, Bambauer concludes that, in place of perfection, “we should adopt the more realistic, and helpful, conclusion that often good enough is . . . good enough.”

Preferred citation:

Derek E. Bambauer, The Myth of Perfection, 2 Wake Forest L. Rev. Online 22 (2012), http://wakeforestlawreview.com/the-myth-of-perfection.


Better Stories, Better Laws, Better Culture

I first happened across Julie Cohen’s work around two years ago, when I started researching privacy concerns related to Amazon.com’s e-reading device, the Kindle.  Law professor Jessica Litman and free software doyen Richard Stallman had both talked about a “right to read,” but never was this concept placed on so sure a legal footing as it was in Cohen’s essay from 1996, “A Right to Read Anonymously.”  Her piece helped me to understand the illiberal tendencies of the Kindle and other leading commercial e-readers, which are (and I’m pleased more people are coming to understand this) data gatherers as much as they are appliances for delivering and consuming texts of various kinds.

Truth be told, while my engagement with Cohen’s “Right to Read Anonymously” essay proved productive for this particular project, it also provoked a broader philosophical crisis in my work.  The move into rights discourse was a major departure — a ticket, if you will, into the world of liberal political and legal theory.  Many there welcomed me with open arms, despite the awkwardness with which I shouldered an unfamiliar brand of baggage trademarked under the name, “Possessive Individualism.”  One good soul did manage to ask about the implications of my venturing forth into a notion of selfhood vested in the concept of private property.  I couldn’t muster much of an answer beyond suggesting, sheepishly, that it was something I needed to work through.

It’s difficult and even problematic to divine back-story based on a single text.  Still, having read Cohen’s latest, Configuring the Networked Self, I suspect that she may have undergone a crisis not unlike my own.  The sixteen years spanning “A Right to Read Anonymously” and Configuring the Networked Self are enormous.  I mean that less in terms of the time frame (during which Cohen was highly productive, let’s be clear) than in terms of the refinement in the thinking.  Between 1996 and 2012 you see the emergence of a confident, postliberal thinker.  This is someone who, confronted with the complexities of everyday life in highly technologized societies, now sees possessive individualism for what it is: a reductive management strategy, one whose conception of society seems more appropriate to describing life on a preschool playground than it does to forms of interaction mediated by the likes of Facebook, Google, Twitter, Apple, and Amazon.

In this, Configuring the Networked Self is an extraordinary work of synthesis, drawing together a diverse array of fields and literatures: legal studies in its many guises, especially its critical variants; science and technology studies; human-computer interaction; phenomenology; post-structuralist philosophy; anthropology; American studies; and surely more.  More to the point, it’s an unusually generous example of scholarly work, given Cohen’s ability to see in and draw out of this material its very best contributions.

I’m tempted to characterize the book as a work of cultural studies given the central role the categories culture and everyday life play in the text, although I’m not sure Cohen would have chosen that identification herself.  I say this not only because of the book’s serious challenges to liberalism, but also because of the sophisticated way in which Cohen situates the cultural realm.

This is more than just a way of saying she takes culture seriously.  Many legal scholars have taken culture seriously, especially those interested in questions of privacy and intellectual property, which are two of Cohen’s foremost concerns.  What sets Configuring the Networked Self apart from the vast majority of culturally inflected legal scholarship is her unwillingness to take for granted the definition — you might even say, “being” — of the category, culture.  Consider this passage, for example, where she discusses Lawrence Lessig’s pathbreaking book Code and Other Laws of Cyberspace:

The four-part Code framework…cannot take us where we need to go.  An account of regulation emerging from the Newtonian interaction of code, law, market, and norms [i.e., culture] is far too simple regarding both instrumentalities and effects.  The architectures of control now coalescing around issues of copyright and security signal systemic realignments in the ordering of vast sectors of activity both inside and outside markets, in response to asserted needs that are both economic and societal.  (chap. 7, p. 24)

What Cohen is asking us to do here is to see culture not as a domain distinct from the legal, or the technological, or the economic, which is to say, something to be acted upon (regulated) by one or more of these adjacent spheres.  This liberal-instrumental (“Newtonian”) view may have been appropriate in an earlier historical moment, but not today.  Instead, she is urging us to see how these categories are increasingly embedded in one another and how, then, the boundaries separating the one from the other have grown increasingly diffuse and therefore difficult to manage.

The implications of this view are compelling, especially where law and culture are concerned.  The psychologist Abraham Maslow once said, “it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”  In the old, liberal view, one wielded the law in precisely this way — as a blunt instrument.  Cohen, for her part, still appreciates how the law’s “resolute pragmatism” offers an antidote to despair (chap. 1, p. 20), but her analysis of the “ordinary routines and rhythms of everyday practice” in and around networked culture leads her to a subtler conclusion (chap. 1, p. 21).  She writes: “practice does not need to wait for an official version of culture to lead the way….We need stories that remind people how meaning emerges from the uncontrolled and unexpected — stories that highlight the importance of cultural play and of spaces and contexts within which play occurs” (chap. 10, p. 1).

It’s not enough, then, to regulate with a delicate hand and then “punt to culture,” as one attorney memorably put it in an anthropological study of the free software movement.  Instead, Cohen seems to be suggesting that we treat legal discourse itself as a form of storytelling, one akin to poetry, prose, or any number of other types of everyday cultural practice.  Important though they may be, law and jurisprudence are but one means for narrating a society, or for arriving at its self-understandings and range of acceptable behaviors.

Indeed, we’re only as good as the stories we tell ourselves.  This much Jaron Lanier, one of the participants in this week’s symposium, suggested in his recent book, You Are Not a Gadget.  There he showed how the metaphorics of desktops and filing, generative though they may be, have nonetheless limited the imaginativeness of computer interface design.  We deserve computers that are both functionally richer and experientially more robust, he insists, and to achieve that we need to start telling more sophisticated stories about the relationship of digital technologies and the human body.  Lousy stories, in short, make for lousy technologies.

Cohen arrives at an analogous conclusion.  Liberalism, generative though it may be, has nonetheless limited our ability to conceive of the relationships among law, culture, technology, and markets.  They are all in one another and of one another.  And until we can figure out how to narrate that complexity, we’ll be at a loss to know how to live ethically, or at the very least mindfully, in a densely interconnected and information-rich world.  Lousy stories make for lousy laws and ultimately, then, for lousy understandings of culture.

The purposes of Configuring the Networked Self are many, no doubt.  For those of us working in the twilight zone of law, culture, and technology, it is a touchstone for how to navigate postliberal life with greater grasp — intellectually, experientially, and argumentatively.  It is, in other words, an important first chapter in a better story about ordinary life in a high-tech world.


Stanford Law Review Online: The Drone as Privacy Catalyst


The Stanford Law Review Online has just published a piece by M. Ryan Calo discussing the privacy implications of drone use within the United States. In The Drone as Privacy Catalyst, Calo argues that domestic use of drones for surveillance will go forward largely unimpeded by current privacy law, but that the “visceral jolt” caused by witnessing these drones hovering above our cities might serve as a catalyst and finally “drag privacy law into the twenty-first century.”

Calo writes:

In short, drones like those in widespread military use today will tomorrow be used by police, scientists, newspapers, hobbyists, and others here at home. And privacy law will not have much to say about it. Privacy advocates will. As with previous emerging technologies, advocates will argue that drones threaten our dwindling individual and collective privacy. But unlike the debates of recent decades, I think these arguments will gain serious traction among courts, regulators, and the general public.

Read the full article, The Drone as Privacy Catalyst by M. Ryan Calo, at the Stanford Law Review Online.


Unraveling Privacy as Corporate Strategy

The biometric technologies firm Hoyos (previously Global Rainmakers Inc.) recently announced plans to test massive deployment of iris scanners in Leon, Mexico, a city of over a million people. They expect to install thousands of the devices, some capable of picking out fifty people per minute even at regular walking speeds. At first the project will focus on law enforcement and improving security checkpoints, but within three years the plan calls for integrating iris scanning into most commercial locations. Entry to stores or malls, access to an ATM, use of public transportation, paying with credit, and many other identity-related transactions will occur through iris-scanning & recognition. (For more details, see Singularity’s post with videos.) Hoyos has the backing to make this happen: on October 12th they also announced new investment of over $40M to fund their growth.

There are obviously lots of interesting privacy- and tech-related issues here. I’ll focus on one: the company’s roll-out strategy is explicitly premised on the unraveling of privacy created by the negative inferences & stigma that will attach to those who choose not to participate. Criminals will automatically be scanned and entered into the database upon conviction. Jeff Carter, Chief Development Officer at Hoyos, expects law-abiding citizens to participate as well, however. Some will do so for convenience, he says, and then he expects everyone to follow: “When you get masses of people opting-in, opting out does not help. Opting out actually puts more of a flag on you than just being part of the system. We believe everyone will opt-in.” (For the full interview, see Fast Company’s post on the project.)

In a forthcoming article, I’ve written at length about the unraveling effect and why it now poses a serious threat to privacy. This biometric deployment is one of many examples, but it most explicitly illustrates that unraveling has moved beyond unexpected consequence to become corporate strategy.



On the Colloquy: The Credit Crisis, Refusal-to-Deal, Procreation & the Constitution, and Open Records vs. Death-Related Privacy Rights


This summer started off with a three-part series from Professor Olufunmilayo B. Arewa looking at the credit crisis and possible changes that would focus on averting future market failures, rather than continuing to create regulations that only address past ones.  Part I of Prof. Arewa’s series looks at the failure of risk management within the financial industry.  Part II analyzes the regulatory failures that contributed to the credit crisis as well as potential reforms.  Part III concludes by addressing recent legislation and whether it will actually help solve these very real problems.

Next, Professors Alan Devlin and Michael Jacobs take on an issue at the “heart of a highly divisive, international debate over the proper application of antitrust laws” – what should be done when a dominant firm refuses to share its intellectual property, even at monopoly prices.

Professor Carter Dillard then discussed the circumstances in which it may be morally permissible, and possibly even legally permissible, for a state to intervene and prohibit procreation.

Rounding out the summer was Professor Clay Calvert’s article looking at journalists’ use of open records laws and death-related privacy rights.  Calvert questions whether journalists have a responsibility beyond simply reporting dying words and graphic images.  He concludes that, at the very least, journalists should consider the impact their reporting has on surviving family members.


How Useful is Facebook Users’ Information?

A lot has been written on Facebook and its users’ loss of privacy. In fact, for some, Facebook and loss of privacy have become synonymous. A major fear involves the use of Facebook users’ personal information by information aggregators, who will use the data to target the sale of products.  I do not intend to contest here that Facebook users disclose a lot of personal information. Instead, I want to look at how accurate the information that users reveal on Facebook actually is.

When people surf the Internet, cookies collect their personal information, the websites they visit, and their searches. As I have written, people tend to disregard these privacy threats at least partly due to their lack of visibility. Even those who know that their information can be collected by cookies tend to forget it as they use the Internet on a daily basis.  As a result, the information collected by cookies reveals relatively true preferences. Cookies will reveal embarrassing or secret facts, such as visits to pornography sites or to medical sites to investigate a worrying medical condition.

But Facebook is different. Facebook users are constantly aware that they are being viewed. True, they may not be thinking about the companies that may eventually aggregate the information. But they are surely thinking of the hundreds of friends who will be reading their status updates, examining their favorite books, favorite movies, and linked websites. Facebook users “package” themselves. They present themselves to the world the way they want to be perceived. Their real preferences and tastes may be somewhat or even completely different from those they present on Facebook. A criminal law professor may have legal theory books in her Facebook library collection, while in fact in her spare time she is an avid purchaser and reader of chick lit. A twenty-year-old college student may want to appear cool by placing links to trendy music, although his real passion remains collecting Star Wars figures.

Some information on Facebook, such as date of birth or marital status, is less likely to be misrepresented by users and provides rich ground for data mining.  But Facebook users’ “packaging” raises two issues. First, companies seeking to target consumers with products they actually want to purchase may find Facebook information less useful than believed. Second, from a privacy perspective, it is not merely the disclosure of true personal information that we should be concerned about, but also the creation of false or misleading individual profiles by data-mining companies, which can eventually change the information and consumption options available to these Facebook users.


The Havasupai Indians, Genetic Research and the Problem of Informed Consent

Researchers can gain significant genetic information by studying indigenous and preferably isolated populations. Although both researchers and indigenous populations can gain from this collaboration, the two groups often do not see eye to eye.  This was the case in the collaboration between the Havasupai Indians and researchers from Arizona State University, which resulted in a long legal fight. The Havasupai Indians were suffering from a high prevalence of diabetes and agreed to give blood samples for genetic research on diabetes. The members of the tribe were infuriated when they later found out that their blood samples had been used for other purposes, among them genetic research on schizophrenia.

The New York Times reported yesterday that this conflict resulted in a settlement in which Arizona State University agreed to pay $700,000 to the tribe members and also return the blood samples. The Havasupai Indians’ main legal claim was a violation of informed consent. Informed consent requires that patients and research subjects receive full information that will enable them to decide whether to adopt a certain medical treatment plan or participate in research. Here, the Havasupai Indians argued that the informed consent principle was violated because they were told that their blood samples would be used for one purpose while, in fact, they were used for another.

No doubt, the Havasupai Indians’ informed consent argument resulted in their victorious settlement. But the harder question is whether the informed consent principle can feasibly be applied in the area of genetics.  Genetic information is not just individual information; it also provides information about groups and families. For example, assume there is a tribe in which some members agree to participate in genetic research investigating manic depression.  Other members of the tribe refuse because they are concerned that a result showing a prevalent genetic mutation for manic depression among them could stigmatize them and even lead to discrimination against the tribe. The researchers collect samples only from the members of the group who agree to the research. But the results still provide genetic information on all members of the tribe, even those who refused to participate, because of their genetic connection to those who participated.

The result in the Havasupai settlement cannot be seen, then, as a victory for the principle of informed consent in the area of genetics. Restricting genetic researchers to using samples only for the purpose for which they were collected only partly resolves the informed consent problem. The group nature of genetic information makes the application of informed consent to genetic research much more complicated than that.


23andMe – Has GINA Failed to Live Up to its Promise?

23andMe is a genetic testing Internet site that offers testing for over 100 genetic diseases and traits as well as ancestry testing. Many viewed 23andMe as the vehicle that would bring genetic testing to the masses. It was promoted by “spit parties” in which attendees spat into a test tube to have their saliva analyzed to produce their genetic profile. Yet the New York Times recently reported that, two and a half years after it commenced service, 23andMe has not attained its expected popularity. The report tied 23andMe’s lack of popularity to the limited usefulness of genetic information — genetic science’s inability to predict with certainty that a person is going to get sick.

And true, genetic science is all about probabilities. A genetic test can rarely predict with 100% certainty that a person will develop a disease. I doubt, however, that this limitation is holding 23andMe back. Unfortunately, people are not very good at understanding the statistical results of genetic testing.  If anything, a woman who is told that she has a 60% chance of getting breast cancer is likely to dismiss the actual statistics and believe she is going to get sick. It is quite unlikely that people decided not to use 23andMe because of the low probabilities that accompany many genetic tests’ results.

Instead, fears of genetic discrimination likely played an important role in 23andMe’s failure to popularize genetic testing. People are afraid that if they undergo genetic testing and receive positive results they may lose their health insurance or their employment. As I have documented, these fears prevail although empirical data shows that genetic discrimination is in fact rare. Consequently, many individuals are inhibited by genetic discrimination concerns and choose not to undergo genetic testing.

Recently, the government enacted a relatively comprehensive federal law against genetic discrimination – the Genetic Information Nondiscrimination Act of 2008 (GINA). An important goal in enacting GINA was to alleviate fears of genetic discrimination. It was hoped that a comprehensive federal law would provide a sense of protection and reduce genetic discrimination anxiety.  The failure of 23andMe to attain widespread popularity indicates that, at least so far, GINA has not been as successful as was hoped in quieting fears and encouraging the use of genetic testing technology.


Seeing With Your Tongue: No Really

Not much law here, yet. Researchers have taken theoretical work begun decades ago and developed a “brain port,” a device that uses technology to allow people to reorganize how they process sensory data. In the example below, blind people are able to see images. The device takes visual input, processes it, and sends impulses to a pad that sits on the user’s tongue, and the person is then able to see some images. It takes quite a bit of training, and in some cases folks have used the device so effectively that they actually re-train the brain and can reduce their reliance on it. Yes, in a sense they have “rewired” their brains. This advance is just cool.

The video also explains that the advances in this field trace to Professor Paul Bach-y-Rita, who apparently had to overcome a fair amount of resistance in his fields of neurobiology and rehabilitation because he was challenging many accepted beliefs regarding the way the brain works and more (all hail Kuhn).

Will the law become involved in this area? It probably already is, insofar as patents and copyright are being used to govern the technology. In addition, as I have noted before, advances in embedded or sensory-enhancing devices raise numerous questions regarding privacy, the ownership of data, bioethics, and research ethics. So welcome to the future and take a look at the video. It really is amazing and wonderful that scientists have made these breakthroughs. At the very least, anyone questioning how basic research can lead to unforeseen benefits should pause after seeing this work.