Category: Privacy (Consumer Privacy)


In Honor of Alan Westin: Privacy Trailblazer, Seer, and Changemaker

Privacy leading light Alan Westin passed away this week.  Almost fifty years ago, Westin started his trailblazing work helping us understand the dangers of surveillance technologies.  Building on the work that Warren and Brandeis started in “The Right to Privacy” in 1890, Westin published Privacy and Freedom in 1967.  A year later, he took his normative case for privacy to the trenches.  As Director of the National Academy of Sciences’ Computer Science and Engineering Board, he and a team of researchers studied governmental, commercial, and private organizations using databases to amass, use, and share personal information.  Westin’s team interviewed 55 organizations, from local law enforcement and federal agencies like the Social Security Administration to direct-mail companies like R.L. Polk (a predecessor to our behavioral advertising industry).

The 1972 report, Databanks in a Free Society: Computers, Record-Keeping, and Privacy, is a masterpiece.  With 14 case studies, the report made clear the extent to which public and private entities had been building substantial computerized dossiers of people’s activities, and the risks to economic livelihood, reputation, and self-determination.  It demonstrated the unrestrained nature of data collection and sharing, with driver’s license bureaus selling personal information to direct-mail companies and law enforcement sharing arrest records with local and state agencies for employment and licensing matters.  Surely influenced by Westin’s earlier work, some data collectors, like the Kansas City Police Department, talked to the team about privacy protections, suggesting the need for verification of source documents, audit logs, passwords, and discipline for improper use of data.  Westin’s report called for data collectors to adopt ethical procedures for data collection and sharing, including procedural protections such as notice and a chance to correct inaccurate or incomplete information, data minimization requirements, and sharing limits.

Westin’s work shaped the debate about the right to privacy at the dawn of our surveillance era.  His change-making agenda was front and center in the Privacy Act of 1974.  In the early 1970s, nearly fifty congressional hearings and reports investigated a range of data privacy issues, including the use of census records, access to criminal history records, employers’ use of lie detector tests, and the military and law enforcement’s monitoring of political dissidents.  State and federal executives spearheaded investigations of surveillance technologies, including a proposed National Databank Center.

Just as public discourse was consumed with the “data-bank problem,” the courts began to pay attention. In Whalen v. Roe, a 1977 case involving New York’s mandatory collection of prescription drug records, the Supreme Court strongly suggested that the Constitution contains a right to information privacy based on substantive due process. Although it held that the state prescription drug database did not violate the constitutional right to information privacy because it was adequately secured, the Court recognized an individual’s interest in avoiding disclosure of certain kinds of personal information. Writing for the Court, Justice Stevens noted the “threat to privacy implicit in the accumulation of vast amounts of personal information in computerized data banks or other massive government files.”  In a concurring opinion, Justice Brennan warned that the “central storage and easy accessibility of computerized data vastly increase the potential for abuse of that information, and I am not prepared to say that future developments will not demonstrate the necessity of some curb on such technology.”

What Westin underscored so long ago, and what Whalen v. Roe signaled, is that technologies used for broad, indiscriminate, and intrusive public surveillance threaten liberty interests.  Last term, in United States v. Jones, the Supreme Court signaled that these concerns have Fourth Amendment salience.  Concurring opinions indicate that at least five justices have serious Fourth Amendment concerns about law enforcement’s growing surveillance capabilities.  Those justices insisted that citizens have reasonable expectations of privacy in substantial quantities of personal information.  In our article “The Right to Quantitative Privacy,” David Gray and I are seeking to carry forward Westin’s insights (and those of Brandeis and Warren before him) into the Fourth Amendment arena, as the five concurring justices in Jones suggested.  More on that to come, but for now, let’s thank Alan Westin for his extraordinary work on the “computerized databanks” problem.

 


Data Brokers in the FTC’s Sights

The ethos of our age is the more data, the better, and nowhere is that more true than in the data-broker industry.  Data-broker databases contain dossiers on hundreds of millions of individuals, including their Social Security numbers, property records, criminal-justice records, car rentals, credit reports, postal and shipping records, utility bills, gaming, insurance claims, divorce records, social network profiles, online activity, and drug- and food-store records.  According to FTC Chairman Jon Leibowitz, companies like Acxiom are the “invisible cyberazzi” that follow us everywhere we go on- and offline, or, as Chris Hoofnagle has aptly called them, “Little Brothers” helping Big Brother and industry.

Data brokers are largely unbridled by regulation.  The FTC’s enforcement authority over data brokers stems from the Fair Credit Reporting Act (FCRA), which was passed in 1970 to protect the privacy and accuracy of information included in credit reports.  FCRA requires consumer reporting agencies to use reasonable procedures to ensure that entities to which they disclose sensitive consumer data have a permissible purpose for receiving that data.  Under FCRA, employers are required to inform individuals about intended adverse actions against them based on their credit reports.  Individuals get a chance to explain inaccurate or incomplete information and to contact credit-reporting agencies to dispute the information in the hopes of getting it corrected.

During the past two years, the FTC has gone after a social media intelligence company and an online people search engine on the grounds that they constituted consumer reporting agencies subject to FCRA.  In June 2012, the FTC settled charges against Spokeo, an online service that compiles and sells digital dossiers on consumers to human resource professionals, job recruiters, and other businesses.
Spokeo assembles consumer data from on- and offline sources, including social media sites, to create searchable consumer profiles.  The profiles include an individual’s full name, physical address, phone number, age range, email address, hobbies, photos, ethnicity, religion, and social network activity.  The FTC alleged that Spokeo failed to adhere to FCRA, including its obligation to ensure the accuracy of consumer reports.  Ultimately, the agency obtained an $800,000 settlement with the company.  That’s helpful, to be sure, but given the FTC’s limited resources it may not lead to more accurate dossiers.  (It also may mean that employers will keep online intelligence gathering in-house, placing their use of unreliable online information outside the reach of FCRA, as my co-blogger Frank Pasquale wrote so ably about in The Offensive Internet: Speech, Privacy, and Reputation.)

More recently, the FTC issued orders requiring nine data brokerage companies to provide the agency with information about how they collect and use data about consumers.  The agency will use the information to study privacy practices in the data-broker industry.  The nine data brokers receiving orders from the FTC were (1) Acxiom, (2) Corelogic, (3) Datalogix, (4) eBureau, (5) ID Analytics, (6) Intelius, (7) Peekyou, (8) Rapleaf, and (9) Recorded Future.  In its press release, the FTC explained that it is seeking details about “the nature and sources of the consumer information the data brokers collect; how they use, maintain, and disseminate the information; and the extent to which the data brokers allow consumers to access and correct their information or to opt out of having their personal information sold.”  The FTC called on the data-broker industry to improve the transparency of its practices as part of a Commission report, Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers.
FTC Commissioner Julie Brill has been a tireless advocate for greater oversight of data brokers; here’s hoping that her efforts and those of her agency produce important reforms.

 

 

 


Identity Theft: Coming to Screens Near You (and Not Just the Movies)

Identity theft is now so common that we can joke about it.

Or, as Alan Alda’s character in Woody Allen’s Crimes and Misdemeanors says, “comedy is tragedy plus time.”  Time to transform tragedy into comedy, indeed.  Scanning the Privacy Rights Clearinghouse database demonstrates that reported data breaches are a daily occurrence.  Since January 1, 2013, private and public entities have reported over 20 major data breaches.  Included on the list were hospitals, universities, and businesses.  Sometimes, the most vulnerable are targeted.  For instance, on January 8, 2013, a dishonest employee of the Texas Department of Health and Human Services was arrested on suspicion of misusing client information to apply for credit cards and to receive medical care in clients’ names.  It is bad enough that automated systems erroneously take recipients of public benefits off the rolls, as my work on Technological Due Process explores.  Now those meant to help them are destroying their medical and credit histories as well.

We have had over 600 million records breached since 2005, from approximately 3,500 reported data breaches.  Of course, those figures represent only the breaches officially reported, likely thanks to state data breach laws, whose requirements vary and which leave much reporting discretion to entities that have little incentive to err on the side of disclosure unless legally required to do so.  So the bad news is that identity theft is prevalent, but at least we can laugh about it.


The Importance of Section 230 Immunity for Most

Why leave the safe harbor provision intact for site operators, search engines, and other online service providers that do not attempt to block offensive, indecent, or illegal activity but that, unlike cyber cesspools, neither encourage illicit material nor are principally used to host it?  If we retain that immunity, some harassment and stalking — including revenge porn — will remain online because site operators hosting such material cannot be legally required to take it down.  Why countenance that possibility?

Because of the risk of collateral censorship—blocking or filtering speech to avoid potential liability even if the speech is legally protected.  In what is often called the heckler’s veto, people may abuse their ability to complain, using the threat of liability to ensure that site operators block or remove posts for no good reason.  They might complain because they disagree with the political views expressed or dislike the posters’ disparaging tone.  Providers would be especially inclined to remove content in the face of frivolous complaints when they have little interest in keeping up the complained-about content.  Take, as an illustration, the popular newsgathering site Digg.  If faced with legal liability, it might automatically take down posts even though they involve protected speech.  The newsgathering site lacks a vested interest in keeping up any particular post given its overall goal of crowdsourcing vast quantities of news that people like.  Given the scale of its operation, it may lack the resources to hire enough people to cull through complaints and weed out frivolous ones.

Sites like Digg differ from revenge porn sites and other cyber cesspools, whose operators have an incentive to refrain from removing complained-about content such as revenge porn.  Cyber cesspools obtain economic benefits from hosting harassing material, which may make it worth the risk to continue doing so.  Collateral censorship is far less likely because it is in their economic interest to keep up destructive material.  As Slate reporter and cyberbullying expert Emily Bazelon has remarked, concerns about the heckler’s veto get more deference than they should in the context of revenge porn sites and other cyber cesspools.  (Read Bazelon’s important new book Sticks and Stones: Defeating the Culture of Bullying and Rediscovering the Power of Character and Empathy.)  Those concerns do not justify immunizing cyber cesspool operators from liability.

Let’s be clear about what this would mean.  Dispensing with cyber cesspools’ immunity would not mean that they would be strictly liable for user-generated content.  A legal theory would still need to sanction remedies against them.


Harvard Law Review Symposium on Privacy & Technology

This Friday, November 9th, I will be introducing and participating in the Harvard Law Review’s symposium on privacy and technology.  The symposium is open to the public, and is from 8:30 AM to 4:30 PM at Harvard Law School (Langdell South).

I have posted a draft of my symposium essay on SSRN, where it can be downloaded for free.  The essay will be published in the Harvard Law Review in 2013.  My essay is entitled Privacy Self-Management and the Consent Paradox, and I discuss what I call the “privacy self-management model,” which is the current regulatory approach for protecting privacy — the law provides people with a set of rights to enable them to decide for themselves how to weigh the costs and benefits of the collection, use, or disclosure of their data.  I demonstrate how this model fails to adequately protect privacy, and I argue that privacy law and policy must confront a confounding paradox with consent.  Currently, consent to the collection, use, and disclosure of personal data is often not meaningful, but the most apparent solution — paternalistic measures — even more directly denies people the freedom to make consensual choices about their data.

I welcome your comments on the draft, which will undergo considerable revision in the months to come.  In future posts, I plan to discuss a few points that I raise in my essay, so I welcome your comments in those discussions as well.

The lineup of the symposium is as follows:

Symposium 2012:
Privacy & Technology

Daniel J. Solove
George Washington University
“Introduction: Privacy Self-Management and the Consent Paradox”

Jonathan Zittrain
Harvard Law School

Paul Schwartz
Berkeley Law School
“The E.U.-U.S. Privacy Collision”

Lior Strahilevitz
University of Chicago
“A Positive Theory of Privacy”

Julie Cohen
Georgetown University
“What Privacy is For”

Neil Richards
Washington University
“The Harms of Surveillance”

Danielle Citron
University of Maryland

Anita Allen
University of Pennsylvania

Orin Kerr
George Washington University

Alessandro Acquisti
Carnegie Mellon University

Latanya Sweeney
Harvard University

Joel Reidenberg
Fordham University

Paul Ohm
University of Colorado

Tim Wu
Columbia University

Thomas Crocker
University of South Carolina

Danny Weitzner
MIT


PETs, Law and Surveillance

In Europe, privacy is considered a fundamental human right.  Article 8 of the European Convention on Human Rights (ECHR) limits the power of the state to interfere in citizens’ privacy, “except such as is in accordance with the law and is necessary in a democratic society.”  Privacy is also granted constitutional protection in the Fourth Amendment to the United States Constitution.  Both the ECHR and the US Constitution establish the right to privacy as freedom from government surveillance (I’ll call this “constitutional privacy”).  Over the past 40 years, a specific framework has emerged to protect informational privacy (see here and here and here and here); yet this framework (“information privacy”) provides little protection against surveillance by either government or private sector organizations.  Indeed, the information privacy framework presumes that a data controller (i.e., a government or business organization collecting, storing, and using personal data) is a trusted party, essentially acting as a steward of individual rights.  In doing so, it overlooks the fact that organizations often have strong incentives to subject individuals to persistent surveillance; to monetize individuals’ data; and to maximize information collection, storage, and use.



More on government access to private sector data

Last week I blogged here about a comprehensive survey on systematic government access to private sector data, which will be published in the next issue of International Data Privacy Law, an Oxford University Press law journal edited by Christopher Kuner.  Several readers have asked whether the results of the survey are available online.  Well, now they are – even before publication of the special issue.  The project, which was organized by Fred Cate and Jim Dempsey and supported by The Privacy Projects, covered government access laws in Australia, Canada, China, Germany, Israel, Japan, the United Kingdom, and the United States.

Peter Swire’s thought-provoking piece on the increased importance of government access to the cloud in an age of encrypted communications appears here.  Also see the special issue’s editorial, by Fred, Jim, and Ira Rubinstein.

 


On systematic government access to private sector data

The Sixth Circuit Court of Appeals recently decided in United States v. Skinner that the police do not need a warrant to obtain GPS location data for mobile phones.  The decision, based on the holding of the Supreme Court in US v. Jones, highlights the need for a comprehensive reform of the rules on government access to non-content communications information (“communications data”).  Once consisting of only a list of phone numbers dialed by a customer (a “pen register”), communications data have become rife with personal information, including location, clickstream, social contacts, and more.

To a non-American, the US v. Jones ruling is truly astounding in its narrow scope. Clearly, the Justices aimed to sidestep the obvious question of expectation of privacy in public spaces. The Court did hold that the attachment of a GPS tracking device to a vehicle and its use to monitor the vehicle’s movements constitutes a Fourth Amendment “search”. But it based its holding not on the persistent surveillance of the suspect’s movements but rather on a “trespass to chattels” inflicted when a government agent ever-so-slightly touched the suspect’s vehicle to attach the tracking device. In the opinion of the Court, it was the clearly insignificant “occupation of property” (touching a car!) rather than the obviously weighty location tracking that triggered constitutional protection.

Suffice it to say that, to an outside observer, the property infringement appears to have been a side issue in both Jones and Skinner.  The main issue, of course, is government power to remotely access information about an individual’s life, which is increasingly stored by third parties in the cloud.  In most cases past – and certainly present and future – there is little need to trespass on an individual’s property in order to monitor her every move.  Our lives are increasingly mediated by technology.  Numerous third parties possess volumes of information about our finances, health, online endeavors, geographical movements, etc.  For effective surveillance, the government typically just needs to ask.

This is why an upcoming issue of International Data Privacy Law (IDPL) (an Oxford University Press law journal), which is devoted to systematic government access to private sector data, is so timely and important. The special issue covers rules on government access in multiple jurisdictions, including the US, UK, Germany, Israel, Japan, China, India, Australia and Canada.



Big Data for All

Much has been written over the past couple of years about “big data” (see, for example, here and here and here).  In a new article, Big Data for All: Privacy and User Control in the Age of Analytics, which will be published in the Northwestern Journal of Technology and Intellectual Property, Jules Polonetsky and I try to reconcile the inherent tension between big data business models and individual privacy rights.  We argue that, going forward, organizations should provide individuals with practical, easy-to-use access to their information, so they can become active participants in the data economy.  In addition, organizations should be required to be transparent about the decisional criteria underlying their data processing activities.

The term “big data” refers to advances in data mining and the massive increase in computing power and data storage capacity, which have expanded by orders of magnitude the scope of information available for organizations. Data are now available for analysis in raw form, escaping the confines of structured databases and enhancing researchers’ abilities to identify correlations and conceive of new, unanticipated uses for existing information. In addition, the increasing number of people, devices, and sensors that are now connected by digital networks has revolutionized the ability to generate, communicate, share, and access data.

Data create enormous value for the world economy, driving innovation, productivity, efficiency, and growth.  In the article, we flesh out some compelling use cases for big data analysis.  Consider, for example, a group of medical researchers who were able to parse out a harmful side effect of a combination of medications used daily by millions of Americans by analyzing massive amounts of online search queries.  Or scientists who analyze mobile phone communications to better understand the needs of people who live in settlements or slums in developing countries.
