
Platform Responsibility

Internet platforms are starting to recognize the moral duties they owe their users. Consider, for example, this story about Baidu, China’s leading search engine:

Wei Zexi’s parents borrowed money and sought an experimental treatment at a military hospital in Beijing they found using Baidu search. The treatment failed, and Wei died less than two months later. As the story spread, scathing attacks on the company multiplied, first across Chinese social networks and then in traditional media.

After an investigation, Chinese officials told Baidu to change the way it displays search results, saying they are not clearly labeled, lack objectivity and heavily favor advertisers. Baidu said it would implement the changes recommended by regulators, and change its algorithm to rank results based on credibility. In addition, the company has set aside 1 billion yuan ($153 million) to compensate victims of fraudulent marketing information.

I wish I could include this story in the Chinese translation of The Black Box Society. On a similar note, Google this week announced it would no longer run ads from payday lenders. Now it’s time for Facebook to step up to the plate and institute new procedures to ensure more transparency and accountability.


FAN 102.2 (First Amendment News): Latest First Amendment Salon — Cyber Harassment & The First Amendment

Danielle Citron & Laura Handman

Professor Danielle Citron (author of Hate Crimes in Cyberspace) was in fine form as she made her case to an audience (in Washington, D.C. & New York) of First Amendment experts — lawyers, journalists, and activists. Laura Handman (a noted media lawyer) responded by recounting her own experience with cyber harassment and then made a strong case for the need to develop industry guidelines to protect privacy and reputational interests. Ilya Shapiro (a Cato Institute constitutional lawyer) moderated the discussion with lively and thought-provoking questions, including one about the wisdom of the European “right to be forgotten.” All in all, it was an engaging and informative discussion — yet another exchange between representatives of the legal academy and the practicing bar.

Laura Handman, Ilya Shapiro & Danielle Citron

It was the initial First Amendment Salon of 2016. The by-invitation discussions take place at the offices of Levine Sullivan Koch & Schulz in Washington, D.C., and New York, and occasionally at the Floyd Abrams Institute for Freedom of Expression on the Yale Law School campus.

Selected Excerpts

Professor Citron: Unfortunately, we have “network tools used not as liberty-enhancing mechanisms, but instead as liberty-denying devices.”

Professor Citron: “I am modest in my demands of the law because I am a civil libertarian. My proposals are modest.”

Among others, probing questions and comments were offered by Ashley Messenger, Lisa Zycherman, Lee Levine, and Victor A. Kovner.

A YouTube video of the discussion is available here.

 Next First Amendment Salon 

May 16, 2016, Chicago: Professor Geoffrey Stone will do a public interview with Judge Richard Posner on the topic of the First Amendment and freedom of speech.

Previous First Amendment Salons 

(Note: the early salons were not recorded)

November 2, 2015
Reed v. Gilbert & the Future of First Amendment Law

Discussants: Floyd Abrams & Robert Post
Moderator: Linda Greenhouse

August 26, 2015
The Roberts Court & the First Amendment 

Discussants: Erwin Chemerinsky & Eugene Volokh
Moderator: Kelli Sager

March 30, 2015
Is the First Amendment Being Misused as a Deregulatory Tool?

Discussants: Jack Balkin & Martin Redish
Moderator: Floyd Abrams

March 9, 2015
Hate Speech: From Parisian Cartoons to Cyberspace to Campus Speech Codes

Discussants: Christopher Wolf & Greg Lukianoff
Moderator: Lucy Dalglish

November 5, 2014
What’s Wrong with the First Amendment?

Discussants: Steven Shiffrin & Robert Corn-Revere
Moderator: Ashley Messenger

July 9, 2014
Campaign Finance Law & the First Amendment

Discussants: Erin Murphy & Paul M. Smith
Moderator: David Skover

April 28, 2014
Abortion Protestors & the First Amendment

Discussants: Steve Shapiro & Floyd Abrams
Moderator: Nadine Strossen

Salon Co-Chairs

  • Ronald K.L. Collins, University of Washington School of Law
  • Lee Levine, Levine Sullivan Koch & Schulz
  • David M. Skover, Seattle University School of Law

Salon Advisory Board

  • Floyd Abrams, Cahill Gordon & Reindel
  • Erwin Chemerinsky, University of California, Irvine School of Law
  • Robert Corn-Revere, Davis Wright Tremaine
  • Robert Post, Yale Law School
  • David Schulz, Floyd Abrams Institute for Freedom of Expression
  • Paul M. Smith, Jenner & Block
  • Geoffrey Stone, University of Chicago Law School
  • Nadine Strossen, New York Law School
  • Eugene Volokh, UCLA School of Law

Unequal Exposure

Towards the end of the breathless and impassioned tour through privacy, surveillance, carcerality, and desire that is Exposed, Bernard Harcourt writes that “the emphasis on what we must do as ethical selves, each and every one of us – us digital subjects – may be precisely what is necessary for us to begin to think of ourselves as we. Yes, as that we that has been haunting this book since page one” (283). The call for unity and solidarity is seductive: if “we” are all exposed and vulnerable, then “we” can all resist and demand change. But that “we” – that reassuring abstraction of humanity and human experience – is not in fact what haunts this book. That “we” – unquestioned, undifferentiated, unmarked – is taken for granted and treated as the universal subject in this book. What truly haunts this book is everything that this “we” obscures and represses.

Harcourt’s “we” is remarkably undifferentiated. Nearly every point Harcourt makes about how “we” experience digital subjectivity, surveillance, and exposure would and should be contested by women, people of color, the poor, and sexual minorities (and those who belong to more than one of the categories in this non-exhaustive list). It is unfair, of course, to expect any one book or any one author to capture the full complexity of human experience on any topic. One writes about what one knows, and nuance must sometimes be sacrificed for the sake of broad theory. But there is a difference between falling short of conveying the diversity of human experience and barely acknowledging the existence of differentiation.

If one of Harcourt’s goals is to lead us to “think of ourselves as we,” it is vital to recognize that “we” in the digital age are not equally represented, equally consenting or resisting, or equally exposed.

Let’s begin with Harcourt’s characterization of the digital age as a study in shallow positivity: “We do not sing hate, we sing praise. We ‘like,’ we ‘share,’ we ‘favorite.’ We ‘follow.’ We ‘connect.’ ‘We get LinkedIn.’ Ever more options to join and like and appreciate. Everything today is organized around friending, clicking, retweeting, and reposting. … We are appalled by mean comments – which are censored if they are too offensive” (41). This is a picture of the digital world that will be unrecognizable to many people. There is no mention of online mobs, targeted harassment campaigns, career-destroying defamation, rape and death threats, doxxing, revenge porn, sex trafficking, child porn, online communities dedicated to promoting sexual violence against women, or white supremacist sites. No mention, in short, of the intense, destructive, unrelenting hatred that drives so much of the activity of our connected world. Harcourt’s vision of our digital existence as a sunny safe space where occasional “mean comments” are quickly swept from view is nothing short of extraordinary.

Next, consider Harcourt’s repeated insistence that there are no real distinctions between exposer and exposed, the watcher and the watched: “There is no clean division between those who expose and those who surveil; surveillance of others has become commonplace today, with nude pictures of celebrities circulating as ‘trading fodder’ on the more popular anonymous online message boards, users stalking other users, and videos constantly being posted about other people’s mistakes, accidents, rants, foibles, and prejudices. We tell stories about ourselves and others. We expose ourselves. We watch others” (129). There are, in fact, important divisions between exposers and the exposed. With regard to sexual exposure, it is overwhelmingly the case that women are the subjects and not the agents of exposure. The nude photos to which Harcourt refers weren’t of just any celebrities; they were, with few exceptions, female celebrities. The hacker in that case, as in nearly every other case of nude photo hacking, is male, as is nearly every revenge porn site owner and the majority of revenge porn consumers. The “revenge porn” phenomenon itself, more accurately described as “nonconsensual pornography,” is overwhelmingly driven by men exposing women, not the other way around. Many of Harcourt’s own examples of surveillance point to the gender imbalance at work in sexual exposure. The LOVEINT scandal, the CCTV cameras pointed into girls’ toilets and changing rooms in UK schools (229), and Edward Snowden’s revelations of how NSA employees share naked pictures (230) primarily involve men doing the looking and women and girls being looked at. The consequences of sexual exposure are also not gender-neutral: while men and boys may suffer embarrassment and shame, girls and women suffer these and much more, including being expelled from school, fired from jobs, tormented by unwanted sexual propositions, and threatened with rape.

There are also important distinctions to be made between those who voluntarily expose themselves and those who are exposed against their will. In the passage above, Harcourt puts nude photos in the same list as videos of people’s “rants, foibles, and prejudices.” The footnote to that sentence provides two specific examples: Jennifer Lawrence’s hacked photos and video of Michael Richards (Seinfeld’s Kramer) launching into a racist tirade as he performed at a comedy club (311). That is a disturbing false equivalence. The theft of private information is very different from a public, voluntary display of racist hatred. In addition to the fact that naked photos are in no way comparable to casual references to lynching and the repeated use of racial slurs, it should matter that Jennifer Lawrence was exposed against her will and Michael Richards exposed himself.

It’s not the only time in the book that Harcourt plays a bit fast and loose with the concepts of consent and voluntariness. In many places he criticizes “us” for freely contributing to our own destruction: “There is hardly any need for illicit or surreptitious searches, and there is little need to compel, to pressure, to strong-arm, or to intimidate, because so many of us are giving all our most intimate information and whereabouts so willingly and passionately – so voluntarily” (17). And yet Harcourt also notes that in many cases, people do not know that they are being surveilled or do not feel that they have any practical means of resistance. “The truth is,” Harcourt tells us with regard to the first, “expository power functions best when those who are seen are not entirely conscious of it, or do not always remember. The marketing works best when the targets do not know that they are being watched” (124). On the second point, Harcourt observes that “when we flinch at the disclosure, most of us nevertheless proceed, feeling that we have no choice, not knowing how not to give our information, whom we would talk to, how to get the task done without the exposure. We feel we have no other option but to disclose” (181-2). But surely if people are unaware of a practice or feel they cannot resist it, they can hardly be considered to have voluntarily consented to it.

Also, if people often do not know that they are under surveillance, this undermines one of the more compelling concerns of the book, namely, that surveillance inhibits expression. It is difficult to see how surveillance could have an inhibiting effect if the subjects are not conscious of the fact that they are being watched. Surreptitious surveillance certainly creates its own harms, but if subjects are truly unaware that they are being watched – as opposed to not knowing exactly when or where surveillance is taking place but knowing that it is taking place somewhere somehow, which no doubt does create a chilling effect – then self-censorship is not likely to be one of them.

Harcourt suggests a different kind of harm when he tells us that “[i]nformation is more accessible when the subject forgets that she is being stalked” (124). That is, we are rendered more transparent to the watchers when we falsely believe they are not watching us. That seems right. But what exactly is the harm inflicted by this transparency? Harcourt warns that we are becoming “marketized subjects – or rather subject-objects who are nothing more than watched, tracked, followed, profiled at will, and who in turn do nothing more than watch and observe others” (26). While concerns about Big Data are certainly legitimate (and have been voiced by many scholars, lawyers, policymakers, and activists), Harcourt never paints a clear picture of what he thinks the actual harm of data brokers and targeted Target advertisements really is. In one of the few personal and specific examples he offers of the harms of surveillance, Harcourt describes the experience of being photographed by a security guard before a speaking engagement. Harcourt is clearly unsettled by the experience: “I could not resist. I did not resist. I could not challenge the security protocol. I was embarrassed to challenge it, so I gave in without any resistance. But it still bothers me today. Why? Because I had no control over the dissemination of my own identity, of my face. Because I felt like I had no power to challenge, to assert myself” (222). While one sympathizes with Harcourt’s sense of disempowerment, it is hard to know what to think of it in relation to the sea of other surveillance stories: women forced to flee their homes because of death threats, parents living in fear because the names of their children and the schools they attend have been published online, or teenaged girls committing suicide because the photo of their rape is being circulated on the Internet as a form of entertainment.

Harcourt uses the term “stalk” at least eight times in this book, and none of these references are to actual stalking, the kind that involves being followed by a particular individual who knows where you live and work and means you harm, the kind that one in six women in the U.S. will experience in her lifetime, the kind that is encouraged and facilitated by an ever-expanding industry of software, gadgets, and apps that openly market themselves to angry men as tools of control over the women who have slipped their grasp. What a privilege it is to be able to treat stalking not as a fact of daily existence, but as a metaphor.

Harcourt’s criticism of what he considers to be the Supreme Court’s lack of concern for privacy adds a fascinating gloss to all of this. Harcourt takes particular aim at Justice Scalia, asserting that even when Scalia seems to be protecting privacy, he is actually disparaging it: “Even in Kyllo v. United States…. where the Court finds that the use of heat-seeking technology constitutes a search because it infringes on the intimacies of the home, Justice Scalia mocks the humanist conception of privacy and autonomy.” The proof of this assertion supposedly comes from Scalia’s observation that the technology used in that case “might disclose, for example, at what hour each night the lady of the house takes her daily sauna and bath – a detail that many would consider ‘intimate.’” Harcourt assumes that Scalia’s reference to the “lady of the house” is an ironic expression of contempt. But Scalia is not being ironic. Elsewhere in the opinion, he emphatically states that “[i]n the home… all details are intimate details,” and many of his other opinions reinforce this view. Scalia and many members of the Court are very concerned about privacy precisely when it involves the lady of the house, or the homeowner subjected to the uninvited drug-sniffing dog on the porch (Florida v. Jardines, 2013), or the federal official subjected to the indignity of a drug test (Treasury Employees v. Von Raab, 1989 (dissent)). These same members of the Court, however, are remarkably unconcerned about privacy when it involves a wrongfully arrested man subjected to a humiliating “squat and cough” cavity search (Florence v. Burlington, 2012), a driver searched after being racially profiled (Whren v. US, 1996), or a pregnant woman tricked into a drug test while seeking prenatal care (Ferguson v. Charleston, 2001 (dissent)). In other words, the problem with the Supreme Court’s views on privacy and surveillance is not that the Court does not care; it is that it tends to care only when privacy affects interests its members share or people they resemble.

The world is full of people who do not have the luxury of worrying about a growing addiction to Candy Crush or whether Target knows they need diapers before they do. They are too busy worrying that their ex-husband will hunt them down and kill them, or that they will be stopped and subjected to a humiliating pat down for the fourth time that day, or that the most private and intimate details of their life will be put on public display by strangers looking to make a buck. These people are not driven by a desire to expose themselves. Rather, they are being driven into hiding, into obscurity, into an inhibited and chilled existence, by people who are trying to expose them. If “we” want to challenge surveillance and fight for privacy, “they” must be included.


The Fragility of Desire

Harcourt’s analysis, in his excellent new book Exposed, of the role of desire in what he calls the “expository society” of the digital age is seductive. We are not characters in Orwell’s 1984, or prisoners of Bentham’s Panopticon, but enthusiastic participants in a “mirrored glass pavilion” that is addictive and mesmerizing. Harcourt offers a detailed picture of this pavilion and also shows us the seamy side of our addiction to it. Recovery from this addiction, he argues, requires acts of disobedience, but therein lies the great dilemma and paradox of our age: revolution requires desire, not duty, yet our desires are what have ensnared us.

I think that this is both a welcome contribution and a misleading diagnosis.

There have been many critiques of consent-based privacy regimes as enabling surveillance rather than protecting privacy. The underlying tenor of many of these critiques is that consent fails as a regulatory tool because it is too difficult to make it truly informed. Harcourt’s emphasis on desire shows why there is a deeper problem than this: our participation in the platforms that surveil us is rooted in something deeper than misinformed choice. And this makes the “what to do?” question all the more difficult to answer. Even those of us who see a stronger role for law than Harcourt outlines in this book (I agree with Ann Bartow’s comments on this) should pause here. Canada, for example, has strong private sector data protection laws with oversight from excellent provincial and federal privacy commissioners. And yet these laws are heavily consent-based. Such laws are able to shift practices to a stronger emphasis on things like opt-in consent, but Harcourt leaves us with a disquieting sense that this might just be an example of a Pyrrhic victory, legitimizing surveillance through our attempts to regulate it because we still have not grappled with the more basic problem of the seduction of the mirrored glass pavilion.

The problem with Harcourt’s position is that, in exposing this aspect of the digital age in order to complicate our standard surveillance tropes, he risks ignoring other sources of complexity that are also important for both diagnosing the problem and outlining a path forward.

Desire is not always the reason that people participate in new technologies. As Ann Bartow and Olivier Sylvain point out, people do not always have a choice about their participation in the technologies that track them. The digital age is not an amusement park we can choose to visit or to boycott; it is deeply integrated into our daily practices and needs, including the ways in which we work, bank, and access government services.

But even when we do actively choose to use these tools, it is not clear that desire captures the why of all such choices. If we willingly enter Harcourt’s mirrored glass pavilion, it is sometimes because of its very useful properties — the space- and time-bending nature of information technology. For example, Google Calendar is incredibly convenient because multiple people can access shared calendars from multiple devices in multiple locations at different times, making the coordination of schedules easy. This is not digital lust, but digital convenience.

These space- and time-bending properties of information technology are important for understanding the contours of the public/private nexus of surveillance that so characterizes our age. Harcourt does an excellent job at pointing out some of the salient features of this nexus, describing a “tentacular oligarchy” where private and public institutions are bound together in state-like “knots of power,” with individuals passing back and forth between these institutions. But what is strange in Harcourt’s account is that this tentacular oligarchy still appears to be bounded by the political borders of the US. It is within those borders that the state and the private sector have collapsed together.

What this account misses is the fact that information technology has helped to unleash a global private sector that is not bounded by state borders. In this emerging global private sector, large multinational corporations often operate as “metanationals” or stateless entities. The commercial logic of information is that it should cross political borders with ease and be stored wherever it makes the most economic sense.

Consider some of the rhetoric surrounding the e-commerce chapter of the recent TPP agreement. The Office of the US Trade Representative indicates that one of its objectives is to keep the Internet “free and open,” an objective it has pursued through rules that favour cross-border data flows and prevent data localization. It is easy to see how this idea of “free” might be confused with political freedom, for an activist in an oppressive regime is better able to exercise freedom of speech when that speech can cross political borders, or when the details of their communications can be stored in a location beyond the reach of their state. A similar rationale has been offered by some in the current Apple encryption debate — encryption protects American business people communicating within China, and we can see why that is important.

But this idea of freedom is the freedom of a participant in a global private sector with weak state control; freedom from the state control of oppressive regimes also involves freedom from the state protection of democratic regimes.

If metanationals pursue a state-free agenda, the state pursues an agenda of rights protectionism. By rights protectionism I mean the claim that states do, and should, protect the constitutional rights of their own citizens and residents but not others. Consider, for example, a Canadian citizen who resides in Canada and uses US cloud computing. That person could be communicating entirely with other Canadians in other Canadian cities and yet have all of their data stored in the US-based cloud. If the US authorities wanted access to that data, the US constitution would not apply to regulate that access in a rights-protecting manner, because the Canadian is a non-US person.

Many see this result as flowing from the logic of the Verdugo-Urquidez case. Yet that case concerned a search that occurred in a foreign territory (Mexico), rather than within the US, where the law of that territory continued to apply. The Canadian constitution does not apply to acts of officials within the US. The data at issue falls into a constitutional black hole where no constitution applies (and maybe even an international human rights black hole, according to some US interpretations of extraterritorial obligations). States can then collect information within this black hole free of the usual liberal-democratic constraints and share it with other allies, a situation Snowden documented within the EU and likened to a “European bazaar” of surveillance.

Rights protectionism is not rights protection when information freely crosses political boundaries and state power piggybacks on top of this crossing and exploits it.

This is not a tentacular oligarchy operating within the boundaries of one state, but a series of global alliances – between allied states and between states and metanationals who exert state-like power — exploiting the weaknesses of state-bound law.

We are not in this situation simply because of a penchant for selfies. But to understand the full picture we do need to look beyond “ourselves” and bring the global dimension into view. We need to understand the ways in which our legal models fail to address these new realities, and even help to mask and legitimize the problems of the digital age through tools and rhetoric that are no longer suitable.

Lisa Austin is an Associate Professor at the University of Toronto Faculty of Law.

The Emerging Law of Algorithms, Robots, and Predictive Analytics

In 1897, Holmes famously pronounced, “For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.” He could scarcely envision at the time the rise of cost-benefit analysis, and the comparative devaluation of legal process and non-economic values, in the administrative state. Nor could he have foreseen the surveillance-driven tools of today’s predictive policing and homeland security apparatus. Nevertheless, I think Holmes’s empiricism and pragmatism still animate dominant legal responses to new technologies. Three conferences this spring show the importance of “statistics and economics” in future tools of social order, and the fundamental public values that must constrain those tools.

Tyranny of the Algorithm? Predictive Analytics and Human Rights

As the conference call states:

Advances in information and communications technology and the “datafication” of broadening fields of human endeavor are generating unparalleled quantities and kinds of data about individual and group behavior, much of which is now being deployed to assess risk by governments worldwide. For example, law enforcement personnel are expected to prevent terrorism through data-informed policing aimed at curbing extremism before it expresses itself as violence. And police are deployed to predicted “hot spots” based on data related to past crime. Judges are turning to data-driven metrics to help them assess the risk that an individual will act violently and should be detained before trial. 


Where some analysts celebrate these developments as advancing “evidence-based” policing and objective decision-making, others decry the discriminatory impact of reliance on data sets tainted by disproportionate policing in communities of color. Still others insist on a bright line between policing for community safety in countries with democratic traditions and credible institutions, and policing for social control in authoritarian settings. The 2016 annual conference will . . . consider the human rights implications of the varied uses of predictive analytics by state actors. As a core part of this endeavor, the conference will examine—and seek to advance—the capacity of human rights practitioners to access, evaluate, and challenge risk assessments made through predictive analytics by governments worldwide. 

This focus on the violence targeted and legitimated by algorithmic tools is a welcome chance to discuss the future of law enforcement. As Dan McQuillan has argued, these “crime-fighting” tools are logical extensions of extant technologies of ranking, sorting, and evaluating, and they raise fundamental challenges to the rule of law:

According to Agamben, the signature of a state of exception is ‘force-of’; actions that have the force of law even when not of the law. Software is being used to predict which people on parole or probation are most likely to commit murder or other crimes. The algorithms developed by university researchers use a dataset of 60,000 crimes and some dozens of variables about the individuals to help determine how much supervision the parolees should have. While having discriminatory potential, this algorithm is being invoked within a legal context. 

[T]he steep rise in the rate of drone attacks during the Obama administration has been ascribed to the algorithmic identification of ‘risky subjects’ via the disposition matrix. According to interviews with US national security officials the disposition matrix contains the names of terrorism suspects arrayed against other factors derived from data in ‘a single, continually evolving database in which biographies, locations, known associates and affiliated organizations are all catalogued.’ Seen through the lens of states of exception, we cannot assume that the impact of algorithmic force-of will be constrained because we do not live in a dictatorship. . . .What we need to be alert for, according to Agamben, is not a confusion of legislative and executive powers but separation of law and force of law. . . [P]redictive algorithms increasingly manifest as a force-of which cannot be restrained by invoking privacy or data protection. 

The ultimate logic of the algorithmic state of exception may be a homeland of “smart cities,” and force projection against an external world divided into “kill boxes.” 


We Robot 2016: Conference on Legal and Policy Issues Relating to Robotics

As the “kill box” example above suggests, software is not just an important tool for humans planning interventions. It is also animating features of our environment, ranging from drones to vending machines. Ryan Calo has argued that the increasing role of robotics in our lives merits “systematic changes to law, institutions, and the legal academy,” and has proposed a Federal Robotics Commission. (I hope it gets further than proposals for a Federal Search Commission have so far!)


Calo, Michael Froomkin, and other luminaries of robotics law will be at We Robot 2016 this April at the University of Miami. Panels like “Will #BlackLivesMatter to RoboCop?” and “How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons” raise fascinating, difficult issues for the future management of violence, power, and force.


Unlocking the Black Box: The Promise and Limits of Algorithmic Accountability in the Professions


Finally, I want to highlight a conference I am co-organizing with Valerie Belair-Gagnon and Caitlin Petre at the Yale ISP. As Jack Balkin observed in his response to Calo’s “Robotics and the Lessons of Cyberlaw,” technology concerns not only “the relationship of persons to things but rather the social relationships between people that are mediated by things.” Social relationships are also mediated by professionals: doctors and nurses in the medical field, journalists in the media, attorneys in disputes and transactions.


For many techno-utopians, the professions are quaint, an organizational form to be flattened by the rapid advance of software. But if there is anything the examples above (and my book) illustrate, it is the repeated, even disastrous failures of many computational systems to respect basic norms of due process, anti-discrimination, transparency, and accountability. These systems need professional guidance as much as professionals need these systems. We will explore how professionals–both within and outside the technology sector–can contribute to a community of inquiry devoted to accountability as a principle of research, investigation, and action. 


Some may claim that software-driven business and government practices are too complex to regulate. Others will question the value of the professions in responding to this technological change. I hope that the three conferences discussed above will help assuage those concerns, continuing the dialogue started at NYU in 2013 about “accountable algorithms,” and building new communities of inquiry. 


And one final reflection on Holmes: the repetition of “man” in his quote above should not go unremarked. Nicole Dewandre has observed the following regarding modern concerns about life online: 

To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, exhausting laboring without rewards and being lost through the holes of the meritocracy net, being constrained in a specular posture of other’s deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, they have experienced with men. So, welcome to world of women….

Dewandre’s voice complements that of US scholars (like Danielle Citron and Mary Anne Franks) on the systematic disadvantages to women posed by opaque or distant technological infrastructure. I think one of the many valuable goals of the conferences above will be to promote truly inclusive technologies, permeable to input from all of society, not just top investors and managers.

X-Posted: Balkinization.


MLAT – Not a Muscle Group, but Nonetheless Potentially Powerful

MLAT. I encountered this somewhat obscure device (the Mutual Legal Assistance Treaty) when I was in practice and needed to serve someone in Europe. I recall that it was a cumbersome process, and I remember being glad that we did not seem to need it often (in fact, only that one time). Today, however, as my colleagues Peter Swire and Justin Hemmings argue in their paper, Stakeholders in Reform of the Global System for Mutual Legal Assistance, the MLAT process is quite important.

In simplest terms, if a criminal investigation in, say, France needs an email that is stored in the U.S.A., the French authorities ask their U.S. counterparts for aid. If the U.S. agency that processes the request agrees there is a legal basis for it, that agency and other groups seek a court order. If the order is granted, it is presented to the company. Once records are obtained, there is further review to ensure compliance with U.S. law. Only then do the records go to France. As Swire and Hemmings note, the process averages ten months. For a civil case that is a long time; for a criminal case it is unworkable. And as the authors put it, “the once-unusual need for an MLAT request becomes routine for records that are stored in the cloud and are encrypted in transit.”
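To make concrete why a strictly sequential process like this stretches to ten months, here is a minimal sketch in Python. The stage breakdown and the per-stage durations are illustrative assumptions of mine, not figures from Swire and Hemmings; only the roughly ten-month total reflects the average they report.

```python
# Illustrative model of the MLAT workflow described above.
# Stage names and durations are assumptions for demonstration only;
# just the ~10-month total comes from the Swire/Hemmings paper.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    months: float  # assumed average duration of this stage

PIPELINE = [
    Stage("Requesting state prepares and transmits the MLAT request", 1.5),
    Stage("U.S. central authority reviews the legal basis", 3.0),
    Stage("Prosecutors seek and obtain a court order", 2.5),
    Stage("Provider locates and produces the records", 1.5),
    Stage("Review for compliance with U.S. law", 1.0),
    Stage("Records transmitted to the requesting state", 0.5),
]

def total_months(pipeline: list[Stage]) -> float:
    # Each stage must finish before the next begins, so delays add
    # rather than overlap.
    return sum(stage.months for stage in pipeline)

if __name__ == "__main__":
    for stage in PIPELINE:
        print(f"{stage.name}: ~{stage.months} months")
    print(f"Total: ~{total_months(PIPELINE)} months")  # ~10 months
```

The point of the sketch is simply that serial hand-offs compound: no single stage looks unreasonable in isolation, yet the end-to-end delay is fatal to a criminal investigation's timeline.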

Believe it or not, this process touches on major Internet governance issues. The slowness and the new needs are fueling calls to have the ITU govern the Internet and access-to-evidence issues (a model that, according to the paper, Russia and others favor). Simpler but important ideas, such as increased calls for data localization, also flow from the difficulties the paper identifies. As the paper details, the players–non-U.S. governments, the U.S. government, tech companies, and civil society groups–each have their own goals and perspectives on the issue.

So for those interested in Internet governance, privacy, law enforcement, and multi-stakeholder processes, the MLAT process and this paper offer a great high-level view of the many factors at play, both for this specific topic and for larger, related ones.


On Privacy, Free Speech, & Related Matters – Richard Posner vs David Cole & Others

I’m exaggerating a little, but I think privacy is primarily wanted by people because they want to conceal information to fool others. — Richard Posner

Privacy is overrated. — Richard Posner (2013)

Much of what passes for the name of privacy is really just trying to conceal the disreputable parts of your conduct. Privacy is mainly about trying to improve your social and business opportunities by concealing the sorts of bad activities that would cause other people not to want to deal with you. — Richard Posner (2014)

This is the seventh installment in the “Posner on Posner” series of posts on Seventh Circuit Judge Richard Posner. The first installment can be found here, the second here, the third here, the fourth here, the fifth here, and the sixth here.

Privacy has been on Richard Posner’s mind for more than three-and-a-half decades. His views, as evidenced by the epigraph quotes above, have sparked debate in a variety of quarters, both academic and policy-oriented. In some ways those views seem oddly consistent with his persona: on the one hand, he is a very public man, as revealed by his many writings; on the other, he is a very private man, about whose life outside the law we know little, save for a New Yorker profile published thirteen years ago.

On the scholarly side of the privacy divide, his writings include:

  1. “The Right of Privacy,” 12 Georgia Law Review 393 (1978)
  2. “Privacy, Secrecy, and Reputation,” 28 Buffalo Law Review 1 (1979)
  3. “The Uncertain Protection of Privacy by the Supreme Court,” 1979 Supreme Court Review 173
  4. “The Economics of Privacy,” 71 The American Economic Review 405 (1981)
  5. “Privacy,” Big Think (video clip, n.d.)
  6. “Privacy is Overrated,” New York Daily News, April 28, 2014

For a sampling of Judge Posner’s opinions on privacy, go here (and search “privacy”).

(Note: Some links will only open in Firefox or Chrome.)

_____________________

Privacy – “What’s the big deal?”

Privacy interests should really have very little weight when you’re talking about national security. The world is in an extremely turbulent state – very dangerous. — Richard Posner (2014)

Recently, Georgetown Law Center held a conference entitled “Cybercrime 2020: The Future of Online Crime and Investigations” (full C-SPAN video here). In the course of that event, Judge Posner joined with others in government, private industry, and the legal academy to discuss privacy, the Fourth Amendment, and free speech, among other things. A portion of the exchange between Judge Posner and Georgetown law professor David Cole was captured on video.

Judge Richard Posner

Scene: The Judge sitting in his office, speaking into a video conference camera — As he rubbed his fingers across the page and looked down, Posner began: “I was thinking, listening to Professor Cole, what exactly is the information that he’s worried about?” Posner paused, as if to set up his next point: “I have a cell phone – iPhone 6 – so if someone drained my cell phone, they would find a picture of my cat [laughter], some phone numbers, some e-mail addresses, some e-mail texts – so what’s the big deal?”

He then glanced up from the text he appeared to be reading and spoke with a grin: “Other people must have really exciting stuff. [laughter] Could they narrate their adulteries or something like that?” [laughter] He waved his hands in the air before posing a question to the Georgetown professor.

“What is it that you’re worrying about?” Posner asked as if truly puzzled.

At that point, Cole leaned into his microphone and looked up at the video screen bearing the Judge’s image next to case reports on his left and the American flag on his right.

Cole: “That’s a great question, Judge Posner.”

Professor Cole continued, adding his own humor to the mix: “And I, like you, have only pictures of cats on my phone. [laughter] And I’m not worried about anything from myself, but I’m worried for others.”

On a more substantive note, Cole added: “Your question, which goes back to your original statement, . . . value[s] . . . privacy unless you have something to hide. That is a very, very shortsighted way of thinking about the value [of privacy]. I agree with Michael Dreeben: Privacy is critical to a democracy; it is critical to political freedom; [and] it is critical to intimacy.”

The sex video hypothetical

And then with a sparkle in his spectacled eye, Cole stated: “Your question brings to mind a cartoon that was in the New Yorker, just in the last couple of issues, where a couple is sitting in bed and they have video surveillance cameras over each one of them trained down on the bed [Cole holds his hands above his head to illustrate the peering cameras]. And the wife says to the husband: ‘What are you worried about if you’ve got nothing to hide, you’ve got nothing to fear.’”

Using the cartoon as his conceptual springboard, Cole moved on to his main point: “It seems to me that all of us, whether we are engaged in entirely cat-loving behavior, or whether we are going to psychiatrists, or abortion providers, or rape crisis centers, or Alcoholics Anonymous, or have an affair – all of us have something to hide. Even if you don’t have anything to hide, if you live a life that could be entirely transparent to the rest of the world, I still think the value of that life would be significantly diminished if it had to be transparent.”

Without missing a beat, Cole circled back to his video theme: “Again you could say, ‘if you’ve got nothing to hide, and you’re not engaged in criminal activity, let’s put video cameras in every person’s bedroom. And let’s just record the video, 24/7, in their bedroom. And we won’t look at it until we have reason to look at it. You shouldn’t be concerned because . . .’”

At this point, Posner interrupted: “Look, that’s a silly argument.”

Cole: “But it’s based on a New Yorker cartoon.”

The Judge was a tad miffed; he waved his right hand up and down in a dismissive way: “The sex video, that’s silly!” Waving his index finger to emphasize his point, he added: “What you should be saying, [what] you should be worried about [are] the types of revelation[s] of private conduct [that] discourage people from doing constructive things. You mentioned Alcoholics Anonymous . . .”

Cole: “I find sex to be a constructive thing.”

Obviously frustrated, Posner raised his palms up high in protest: “Let me finish, will you please?”

Cole: “Sure.”

Posner: “Look, that was a good example, right? Because you can have a person who has an alcohol problem, and so he goes to Alcoholics Anonymous, but he doesn’t want this to be known. If he can’t protect that secret,” Posner continued while pointing, “then he’s not going to go to Alcoholics Anonymous. That’s gonna be bad. That’s the sort of thing you should be concerned about rather than with sex videos. . . . [The Alcoholics Anonymous example] is a good example of the kind of privacy that should be protected.”

Professor David Cole

Privacy & Politics 

Meanwhile, the audience listened and watched, its attention now fixed on the Georgetown professor.

Cole: “Well, let me give you an example of sex privacy. I think we all have an interest in keeping our sex lives private. That’s why we close doors into our bedroom, etc. I think that’s a legitimate interest, and it’s a legitimate concern. And it’s not because you have something wrong you want to hide, but because intimacy requires privacy, number one. And number two: think about the government’s use of sex information with respect to Dr. Martin Luther King. They investigated him, intruded on his privacy by bugging his hotel rooms to learn [about his] affair, and then sought to use that – and the threat of disclosing that affair – to change his behavior. Why? Because he was an active political dissident fighting for justice.”

“We have a history of that,” he added. “Our country has a history of that; most countries have a history of that; and that’s another reason the government will use information – that doesn’t necessarily concern [it] – to target people who [it is] concerned about . . . – not just because of their alcohol problem [or] not just because of their sexual proclivities – but because they have political views and political ideas that the government doesn’t approve of.”

At this point the moderator invited the Judge to respond.

Posner: “What happened to cell phones? Do you have sex photos on your cell phones?”

Cole: “I imagine if Dr. Martin Luther King was having an affair in 2014, as opposed to the 1960s, his cell phone, his smart phone, would have quite a bit of evidence that would lead the government to that affair. He’d have call logs; he might have texts; he might have e-mails – all of that would be on the phone.”

The discussion then moved on to the other panelists.

Afterwards, writing on the Volokh Conspiracy blog, Professor Orin Kerr, one of the participants in the conference, summed up his views of the exchange this way:

“I score this Cole 1, Posner 0.”

The First Amendment — Enter Glenn Greenwald


The Flawed Foundations of Article III Standing in Surveillance Cases (Part IV)

In my first three posts, I’ve opened a critical discussion of Article III standing for plaintiffs challenging government surveillance programs by introducing the 1972 Supreme Court case of Laird v. Tatum. In today’s post, I’ll examine the Court’s decision itself, which held that chilling effects arising “merely from the individual’s knowledge” of likely government surveillance did not constitute adequate injury to meet Article III standing requirements.

The Burger Court

It didn’t take long for courts to embrace Laird as a useful tool to dismiss cases where plaintiffs sought to challenge government surveillance programs, especially where the complaints rested on a First Amendment chill from political profiling by law enforcement. Some judges took exception to a broad interpretation of Laird, but objections largely showed up in dissenting opinions. For the most part, early interpretations of Laird sympathized with the government’s view of surveillance claims.



Dispatches from Durham: Sexual Double Standards, Victim Blaming, and Online Abuse

In a series of recent pieces, the Duke Chronicle documented the experiences of female students who were shamed for expressing their sexuality. In one case, a young woman sent an e-mail to her sorority sisters saying that she had sex with a well-known performer who visited campus. The e-mail was leaked to multiple fraternity listservs, the site Betches Love This, and the anonymous gossip site Collegiate ACB. On the site, the student was called a “whore, cum dumpster, and swamp monkey.” The various posts received hundreds of similar comments. The student deactivated her Facebook profile, deleted her Instagram, and disabled her Twitter account.

Duke freshman “Lauren” was working in the porn industry to earn money to defray some of her college expenses. Lauren had not told anyone about her porn work until a male classmate confronted her after watching her in a porn film. The student shared his discovery at a fraternity rush event, and the story of the “freshman pornstar” went viral. The day after the student talked to his friends, Lauren received more than 230 friend requests on Facebook. Within days, the topic “Freshman Pornstar” was trending on Collegiate ACB. As Lauren confided to the school newspaper, the torment on Duke’s fourth campus–the online campus of the “towering chapel of Facebook,” the “student center of Twitter,” and the “grungy alleyways of Collegiate ACB”–was unrelenting.

In a month’s time, the “Freshman Pornstar” thread on Collegiate ACB had 136 comments and was the seventh-most-recently commented post on Duke’s page on the gossip site. Some of the now-188 comments were vile, urging readers to write in once they had “banged” her and claiming that she slept with specific individuals and members of fraternities. Some were dangerous, noting her name and address. Comments blamed her for the abuse she was getting: “we going to pretend like she was unaware of the social consequences of going into that business? she made a decision, now she needs to live with the consequences;” “There’s no way she’s going to become a lawyer being a porn star (no law school is going to accept her). Seriously, she needs to get over herself and face the consequences of being a slut. I’ll be surprised if Duke doesn’t kick her out;” “Congratulations, you’ve ruined your own life.” Others defended the student: “you’re seriously making fun of her for that? um.. yeah this is the epitome of bullying.. you guys have written on a public forum her full name and where she lives (leaving her open for stalking and harassment) . as well as calling her a slut and attacking her personal beliefs.” As Lauren told the Chronicle, she feels harassed, hated, and discriminated against. She questions her decision to go to Duke given the abuse.

The Duke Chronicle’s editorial board wrote that the elite university is an “embittered battleground and discussions about Lauren–a first-year porn actress–have extracted salacious and sexist commentary from Duke’s student community.” The board found two primary themes in the commentary: characterizations of Lauren as a morally bankrupt slut and comments expressing a lewd desire to have sex with her. A third, unexamined theme, however, was also apparent–that Lauren was to blame for anything bad coming her way. She chose to do porn, so she assumed the risk of online harassment, poor employment opportunities, social shunning, and the possibility of getting kicked out of school.

Blaming the victim is a typical response to individuals facing online harassment, individuals who are mostly female and who are mostly attacked in sexually demeaning and threatening ways, as my articles and forthcoming book Hate Crimes in Cyberspace explore. After tech blogger Kathy Sierra was threatened with rape and strangulation via e-mail and on her blog, the response was that she chose to blog, so if she could not handle the heat, she should get out of the kitchen. College students blogging about sex were told that they “asked for” rape threats, defamatory lies, and the non-consensual posting of their nude photos because they blogged about their sexuality.

Lena Chen’s experience was typical. When Chen attended Harvard, she wrote Sex and the Ivy. Anonymous commenters attacked her not with substantive criticisms of her opinions, but with death threats, suggestions of sexualized violence, and racial slurs. On a gossip blog, someone posted her sexually explicit photos, taken by her ex-boyfriend, without her consent. As Slate writer Amanda Hess (who would later face rape threats herself; see her recent article about her experience) reported, Chen’s nude photos were reposted all over the Internet. The abuse continued even after she shut down the blog. Chen was accused of provoking the abuse by “making a blog about her personal sex life.” She was labeled an “attention whore” who deserved what she got. Commentators said that she leaked her own naked photos to get attention. Others said that she wrote about sex because she wanted posters to make sexual advances. We hear the same about victims of revenge porn.

Blaming the victim is a recurring theme. Society once blamed female employees for provoking their employers’ sexual advances. Wives were once told that they provoked domestic abuse. Just as society now recognizes sexual harassment at work and domestic abuse as serious social problems that victims did not bring on themselves, female college students are not to blame for online abuse if they have sex or make porn. Bloggers who write about sex are not to blame for online attacks. Revenge porn victims should not be blamed when harassers violate their trust and vindictively post their nude photos. Sexual double standards are at the heart of this response. Would we, for instance, say the same to men writing about sex? Tucker Max earned millions from writing books and a blog about his drunken sexual experiences with hundreds of women. By contrast, female sex bloggers have been attacked and told that they “asked for it.” As the Duke Chronicle insightfully noted, the wildly different responses to the sexual escapades of Duke graduates Tucker Max and Karen Owen confirm that a sexual double standard is alive and well.