Facebook is More Like a Cable Network than a Newspaper

As I worried yesterday, Facebook’s defenders are already trying to end the conversation about platform bias before it can begin. “It’s like complaining that the New York Times doesn’t publish everything that’s fit to print or that Fox News is conservative,” Eugene Volokh states.

Eight years ago, I argued that platforms like Google are much more like cable networks than newspapers–and should, in turn, be eligible for more governmental regulation. (The government can’t force Fox News to promote Bernie Sanders–but it can require Comcast to carry local news.) The argument can be extended to dominant social networks, or even apps like WeChat.

As I note here, to the extent megaplatforms are classifiable under traditional First Amendment doctrine, they are often closer to utilities or cable networks than to newspapers or TV channels. Their reach is far larger than that of any newspaper or channel. Their selection and arrangement of links comes far closer to a cable network’s decision about which channels to carry (cable networks, by and large, do not create the content they choose to air) than to a newspaper’s curation of mostly self-produced content in a cultivated editorial voice. Finally, and most importantly, massive internet platforms must take the bitter with the sweet: if they want to continue avoiding liability for intellectual property infringement and defamation, they should welcome categorization as conduits for speech rather than claim the status of speakers.

Admittedly, if there is any aspect of Facebook where it might be said to be cultivating some kind of editorial voice, it is the Trend Box. It is ironic that Facebook has gotten into the most trouble over this service, rather than over the much more problematic newsfeed. But it invited this trouble with its bland and uninformative description of what the Trend Box is. Moreover, if the Trend Box is indeed treated as “media” (rather than a conduit for media), it could betoken a much deeper challenge to foundational media regulation like sponsorship disclosures–a topic I’ll tackle next week.

Platform Responsibility

Internet platforms are starting to recognize the moral duties they owe their users. Consider, for example, this story about Baidu, China’s leading search engine:

Wei Zexi’s parents borrowed money and sought an experimental treatment at a military hospital in Beijing they found using Baidu search. The treatment failed, and Wei died less than two months later. As the story spread, scathing attacks on the company multiplied, first across Chinese social networks and then in traditional media.

After an investigation, Chinese officials told Baidu to change the way it displays search results, saying they are not clearly labeled, lack objectivity and heavily favor advertisers. Baidu said it would implement the changes recommended by regulators, and change its algorithm to rank results based on credibility. In addition, the company has set aside 1 billion yuan ($153 million) to compensate victims of fraudulent marketing information.
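It is worth pausing on what ranking results “based on credibility” could mean mechanically. Here is a toy sketch of the difference between ad-weighted and credibility-weighted ordering; all titles, field names, and weights are invented for illustration and are not Baidu’s actual system:

```python
# Hypothetical comparison of ad-weighted vs. credibility-weighted ranking.
# Every entry, field, and weight below is invented for illustration.
results = [
    {"title": "Experimental clinic (paid placement)", "ad_bid": 9.0, "credibility": 0.2},
    {"title": "Peer-reviewed treatment guideline",    "ad_bid": 0.0, "credibility": 0.9},
    {"title": "Hospital ranked by patient outcomes",  "ad_bid": 1.0, "credibility": 0.8},
]

def ad_weighted(r):
    # Pre-reform ordering: the highest bidder surfaces first.
    return r["ad_bid"]

def credibility_weighted(r):
    # Post-reform ordering: credibility dominates; ad spend is down-weighted.
    return 0.8 * r["credibility"] + 0.2 * min(r["ad_bid"] / 10.0, 1.0)

print([r["title"] for r in sorted(results, key=ad_weighted, reverse=True)])
print([r["title"] for r in sorted(results, key=credibility_weighted, reverse=True)])
```

Even a crude re-weighting like this changes which result a desperate patient sees first; the harder regulatory questions are where the credibility scores come from and who audits them.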

I wish I could include this story in the Chinese translation of The Black Box Society. On a similar note, Google this week announced it would no longer run ads from payday lenders. Now it’s time for Facebook to step up to the plate, and institute new procedures to ensure more transparency and accountability.


Unequal Exposure

Towards the end of the breathless and impassioned tour through privacy, surveillance, carcerality, and desire that is Exposed, Bernard Harcourt writes that “the emphasis on what we must do as ethical selves, each and every one of us – us digital subjects – may be precisely what is necessary for us to begin to think of ourselves as we. Yes, as that we that has been haunting this book since page one” (283). The call for unity and solidarity is seductive: if “we” are all exposed and vulnerable, then “we” can all resist and demand change. But that “we” – that reassuring abstraction of humanity and human experience – is not in fact what haunts this book. That “we” – unquestioned, undifferentiated, unmarked – is taken for granted and treated as the universal subject in this book. What truly haunts this book is everything that this “we” obscures and represses.

Harcourt’s “we” is remarkably undifferentiated. Nearly every point Harcourt makes about how “we” experience digital subjectivity, surveillance, and exposure would and should be contested by women, people of color, the poor, and sexual minorities (and those who belong to more than one of the categories in this non-exhaustive list). It is unfair, of course, to expect any one book or any one author to capture the full complexity of human experience on any topic. One writes about what one knows, and nuance must sometimes be sacrificed for the sake of broad theory. But there is a difference between falling short of conveying the diversity of human experience and barely acknowledging the existence of differentiation. If one of Harcourt’s goals is to lead us to “think of ourselves as we,” it is vital to recognize that “we” in the digital age are not equally represented, equally consenting or resisting, or equally exposed.

Let’s begin with Harcourt’s characterization of the digital age as a study in shallow positivity: “We do not sing hate, we sing praise. We ‘like,’ we ‘share,’ we ‘favorite.’ We ‘follow.’ We ‘connect.’ ‘We get LinkedIn.’ Ever more options to join and like and appreciate. Everything today is organized around friending, clicking, retweeting, and reposting. … We are appalled by mean comments – which are censored if they are too offensive” (41). This is a picture of the digital world that will be unrecognizable to many people. There is no mention of online mobs, targeted harassment campaigns, career-destroying defamation, rape and death threats, doxxing, revenge porn, sex trafficking, child porn, online communities dedicated to promoting sexual violence against women, or white supremacist sites. No mention, in short, of the intense, destructive, unrelenting hatred that drives so much of the activity of our connected world. Harcourt’s vision of our digital existence as a sunny safe space where occasional “mean comments” are quickly swept from view is nothing short of extraordinary.

Next, consider Harcourt’s repeated insistence that there are no real distinctions between exposer and exposed, the watcher and the watched: “There is no clean division between those who expose and those who surveil; surveillance of others has become commonplace today, with nude pictures of celebrities circulating as ‘trading fodder’ on the more popular anonymous online message boards, users stalking other users, and videos constantly being posted about other people’s mistakes, accidents, rants, foibles, and prejudices. We tell stories about ourselves and others. We expose ourselves. We watch others” (129). There are, in fact, important divisions between exposers and the exposed. With regard to sexual exposure, it is overwhelmingly the case that women are the subjects and not the agents of exposure. The nude photos to which Harcourt refers weren’t of just any celebrities; they were, with few exceptions, of female celebrities. The hacker in that case, as in nearly every other case of nude photo hacking, is male, as is nearly every revenge porn site owner and the majority of revenge porn consumers. The “revenge porn” phenomenon itself, more accurately described as “nonconsensual pornography,” is overwhelmingly driven by men exposing women, not the other way around. Many of Harcourt’s own examples of surveillance point to the gender imbalance at work in sexual exposure. The LOVEINT scandal, the CCTV cameras pointed into girls’ toilets and changing rooms in UK schools (229), and Edward Snowden’s revelations of how NSA employees share naked pictures (230) primarily involve men doing the looking and women and girls being looked at. The consequences of sexual exposure are also not gender-neutral: while men and boys may suffer embarrassment and shame, girls and women suffer these and much more, including being expelled from school, fired from jobs, tormented by unwanted sexual propositions, and threatened with rape.

There are also important distinctions to be made between those who voluntarily expose themselves and those who are exposed against their will. In the passage above, Harcourt puts nude photos in the same list as videos of people’s “rants, foibles, and prejudices.” The footnote to that sentence provides two specific examples: Jennifer Lawrence’s hacked photos and video of Michael Richards (Seinfeld’s Kramer) launching into a racist tirade as he performed at a comedy club (311). That is a disturbing false equivalence. The theft of private information is very different from a public, voluntary display of racist hatred. In addition to the fact that naked photos are in no way comparable to casual references to lynching and the repeated use of racial slurs, it should matter that Jennifer Lawrence was exposed against her will and Michael Richards exposed himself.

It’s not the only time in the book that Harcourt plays a bit fast and loose with the concepts of consent and voluntariness. In many places he criticizes “us” for freely contributing to our own destruction: “There is hardly any need for illicit or surreptitious searches, and there is little need to compel, to pressure, to strong-arm, or to intimidate, because so many of us are giving all our most intimate information and whereabouts so willingly and passionately – so voluntarily” (17). And yet Harcourt also notes that in many cases, people do not know that they are being surveilled or do not feel that they have any practical means of resistance. “The truth is,” Harcourt tells us with regard to the first, “expository power functions best when those who are seen are not entirely conscious of it, or do not always remember. The marketing works best when the targets do not know that they are being watched” (124). On the second point, Harcourt observes that “when we flinch at the disclosure, most of us nevertheless proceed, feeling that we have no choice, not knowing how not to give our information, whom we would talk to, how to get the task done without the exposure. We feel we have no other option but to disclose” (181-2). But surely if people are unaware of a practice or feel they cannot resist it, they can hardly be considered to have voluntarily consented to it.

Also, if people often do not know that they are under surveillance, this undermines one of the more compelling concerns of the book, namely, that surveillance inhibits expression. It is difficult to see how surveillance could have an inhibiting effect if the subjects are not conscious of the fact that they are being watched. Surreptitious surveillance certainly creates its own harms, but if subjects are truly unaware that they are being watched – as opposed to not knowing exactly when or where surveillance is taking place but knowing that it is taking place somewhere somehow, which no doubt does create a chilling effect – then self-censorship is not likely to be one of them.

Harcourt suggests a different kind of harm when he tells us that “[i]nformation is more accessible when the subject forgets that she is being stalked” (124). That is, we are rendered more transparent to the watchers when we falsely believe they are not watching us. That seems right. But what exactly is the harm inflicted by this transparency? Harcourt warns that we are becoming “marketized subjects – or rather subject-objects who are nothing more than watched, tracked, followed, profiled at will, and who in turn do nothing more than watch and observe others” (26). While concerns about Big Data are certainly legitimate (and have been voiced by many scholars, lawyers, policymakers, and activists), Harcourt never paints a clear picture of what he thinks the actual harm of data brokers and targeted Target advertisements really is. In one of the few personal and specific examples he offers of the harms of surveillance, Harcourt describes the experience of being photographed by a security guard before a speaking engagement. Harcourt is clearly unsettled by the experience: “I could not resist. I did not resist. I could not challenge the security protocol. I was embarrassed to challenge it, so I gave in without any resistance. But it still bothers me today. Why? Because I had no control over the dissemination of my own identity, of my face. Because I felt like I had no power to challenge, to assert myself” (222). While one sympathizes with Harcourt’s sense of disempowerment, it is hard to know what to think of it in relation to the sea of other surveillance stories: women forced to flee their homes because of death threats, parents living in fear because the names of their children and the schools they attend have been published online, or teenaged girls committing suicide because the photo of their rape is being circulated on the Internet as a form of entertainment.

Harcourt uses the term “stalk” at least eight times in this book, and none of these references are to actual stalking, the kind that involves being followed by a particular individual who knows where you live and work and means you harm, the kind that one in six women in the U.S. will experience in her lifetime, the kind that is encouraged and facilitated by an ever-expanding industry of software, gadgets, and apps that openly market themselves to angry men as tools of control over the women who have slipped their grasp. What a privilege it is to be able to treat stalking not as a fact of daily existence, but as a metaphor.

Harcourt’s criticism of what he considers to be the Supreme Court’s lack of concern for privacy adds a fascinating gloss to all of this. Harcourt takes particular aim at Justice Scalia, asserting that even when Scalia seems to be protecting privacy, he is actually disparaging it: “Even in Kyllo v. United States…. where the Court finds that the use of heat-seeking technology constitutes a search because it infringes on the intimacies of the home, Justice Scalia mocks the humanist conception of privacy and autonomy.” The proof of this assertion supposedly comes from Scalia’s observation that the technology used in that case “might disclose, for example, at what hour each night the lady of the house takes her daily sauna and bath – a detail that many would consider ‘intimate.’” Harcourt assumes that Scalia’s reference to the “lady of the house” is an ironic expression of contempt. But Scalia is not being ironic. Elsewhere in the opinion, he emphatically states that “[i]n the home… all details are intimate details,” and many of his other opinions reinforce this view. Scalia and many members of the Court are very concerned about privacy precisely when it involves the lady of the house, or the homeowner subjected to the uninvited drug-sniffing dog on the porch (Florida v. Jardines, 2013), or the federal official subjected to the indignity of a drug test (Treasury Employees v. Von Raab, 1989 (dissent)). These same members of the Court, however, are remarkably unconcerned about privacy when it involves a wrongfully arrested man subjected to a humiliating “squat and cough” cavity search (Florence v. Burlington, 2012), a driver searched after being racially profiled (Whren v. US, 1996), or a pregnant woman tricked into a drug test while seeking prenatal care (Ferguson v. Charleston, 2001 (dissent)). In other words, the problem with the Supreme Court’s views on privacy and surveillance is not that the Court does not care; it’s that its members tend to care only when the interests affected are ones they share and the people affected are ones they resemble.

The world is full of people who do not have the luxury of worrying about a growing addiction to Candy Crush or whether Target knows they need diapers before they do. They are too busy worrying that their ex-husband will hunt them down and kill them, or that they will be stopped and subjected to a humiliating pat down for the fourth time that day, or that the most private and intimate details of their life will be put on public display by strangers looking to make a buck. These people are not driven by a desire to expose themselves. Rather, they are being driven into hiding, into obscurity, into an inhibited and chilled existence, by people who are trying to expose them. If “we” want to challenge surveillance and fight for privacy, “they” must be included.


The Fragility of Desire

In his excellent new book Exposed, Harcourt offers a seductive analysis of the role of desire in what he calls the “expository society” of the digital age. We are not characters in Orwell’s 1984, nor prisoners of Bentham’s Panopticon, but rather enthusiastic participants in a “mirrored glass pavilion” that is addictive and mesmerizing. Harcourt offers a detailed picture of this pavilion and also shows us the seamy side of our addiction to it. Recovery from this addiction, he argues, requires acts of disobedience, but therein lies the great dilemma and paradox of our age: revolution requires desire, not duty, yet our desires are what have ensnared us.

I think that this is both a welcome contribution and a misleading diagnosis.

There have been many critiques of consent-based privacy regimes as enabling surveillance rather than protecting privacy. The underlying tenor of many of these critiques is that consent fails as a regulatory tool because it is too difficult to make it truly informed. Harcourt’s emphasis on desire shows why the problem runs deeper than this: our participation in the platforms that surveil us is rooted in something more than misinformed choice. And this makes the “what to do?” question all the more difficult to answer. Even those of us who see a stronger role for law than Harcourt outlines in this book (I agree with Ann Bartow’s comments on this) should pause here. Canada, for example, has strong private sector data protection laws with oversight from excellent provincial and federal privacy commissioners. And yet these laws are heavily consent-based. Such laws can shift practices toward a stronger emphasis on things like opt-in consent, but Harcourt leaves us with a disquieting sense that this might be just a Pyrrhic victory, legitimizing surveillance through our attempts to regulate it because we still have not grappled with the more basic problem of the seduction of the mirrored glass pavilion.

The problem with Harcourt’s position is that, in exposing this aspect of the digital age in order to complicate our standard surveillance tropes, he risks ignoring other sources of complexity that are also important for both diagnosing the problem and outlining a path forward.

Desire is not always the reason that people participate in new technologies. As Ann Bartow and Olivier Sylvain point out, people do not always have a choice about their participation in the technologies that track them. The digital age is not an amusement park we can choose to visit or boycott; it is deeply integrated into our daily practices and needs, including the ways in which we work, bank, and access government services.

But even when we do actively choose to use these tools, it is not clear that desire captures the why of all such choices. If we willingly enter Harcourt’s mirrored glass pavilion, it is sometimes because of some of its very useful properties — the space- and time-bending nature of information technology. For example, Google Calendar is convenient because multiple people can access shared calendars from multiple devices in multiple locations at different times, making the coordination of schedules incredibly easy. This is not digital lust, but digital convenience.

These space- and time-bending properties of information technology are important for understanding the contours of the public/private nexus of surveillance that so characterizes our age. Harcourt does an excellent job at pointing out some of the salient features of this nexus, describing a “tentacular oligarchy” where private and public institutions are bound together in state-like “knots of power,” with individuals passing back and forth between these institutions. But what is strange in Harcourt’s account is that this tentacular oligarchy still appears to be bounded by the political borders of the US. It is within those borders that the state and the private sector have collapsed together.

What this account misses is the fact that information technology has helped to unleash a global private sector that is not bounded by state borders. In this emerging global private sector large multinational corporations often operate as “metanationals” or stateless entities. The commercial logic of information is that it should cross political borders with ease and be stored wherever it makes the most economic sense.

Consider some of the rhetoric surrounding the e-commerce chapter of the recent TPP agreement. The Office of the US Trade Representative indicates that one of its objectives is to keep the Internet “free and open,” which it has pursued through rules that favour cross-border data flows and prevent data localization. It is easy to see how this idea of “free” might be confused with political freedom: an activist in an oppressive regime is better placed to exercise freedom of speech when that speech can cross political borders, or when the details of her communications can be stored in a location beyond the reach of her state. A similar rationale has been offered by some in the current Apple encryption debate — encryption protects American businesspeople communicating within China, and we can see why that is important.

But this idea of freedom is the freedom of a participant in a global private sector with weak state control; freedom from the state control of oppressive regimes also involves freedom from the state protection of democratic regimes.

Where metanationals pursue a state-free agenda, the state pursues an agenda of rights protectionism. By rights protectionism I mean the claim that states do, and should, protect the constitutional rights of their own citizens and residents but not others. Consider, for example, a Canadian citizen who resides in Canada and uses US cloud computing. That person could be communicating entirely with other Canadians in other Canadian cities and yet have all of their data stored in the US-based cloud. If the US authorities wanted access to that data, the US constitution would not apply to regulate that access in a rights-protecting manner, because the Canadian is a non-US person.

Many see this result as flowing from the logic of the Verdugo-Urquidez case. Yet that case concerned a search that occurred in a foreign territory (Mexico), rather than within the US, and the law of that territory continued to apply. The Canadian constitution does not apply to acts of officials within the US. The data at issue therefore falls into a constitutional black hole where no constitution applies (and perhaps even an international human rights black hole, on some US interpretations of extraterritorial obligations). States can then collect information within this black hole free of the usual liberal-democratic constraints and share it with other allies, a situation Snowden documented within the EU and likened to a “European bazaar” of surveillance.

Rights protectionism is not rights protection when information freely crosses political boundaries and state power piggybacks on top of this crossing and exploits it.

This is not a tentacular oligarchy operating within the boundaries of one state, but a series of global alliances – between allied states and between states and metanationals who exert state-like power — exploiting the weaknesses of state-bound law.

We are not in this situation simply because of a penchant for selfies. But to understand the full picture we do need to look beyond “ourselves” and get the global picture in view. We need to understand the ways in which our legal models fail to address these new realities and even help to mask and legitimize the problems of the digital age through tools and rhetoric that are no longer suitable.

Lisa Austin is an Associate Professor at the University of Toronto Faculty of Law.


Irresistible Surveillance?

Bernard Harcourt’s Exposed: Desire and Disobedience in the Digital Age offers many intriguing insights into how power circulates in contemporary society.  The book’s central contribution, as I see it, is to complicate the standard model of surveillance by introducing the surveilled’s agency into the picture.  Exposed highlights the extent to which ordinary people are complicit in regimes of data-monitoring and data-mining that damage their individual personhood and the democratic system.  Millions upon millions of “digital subjects,” Harcourt explains, have come to embrace forms of exposure that commoditize their own privacy.  Sometimes people do this because they want more convenience when they navigate capitalist culture or government bureaucracies.  Or because they want better book recommendations from Amazon.  Other times, people wish to see and be seen online—increasingly feel they need to be seen online—in order to lead successful social and professional lives.

So complicit are we in the erosion of our collective privacy, Harcourt suggests, that any theory of the “surveillance state” or the “surveillance industrial complex” that fails to account for these decentralized dynamics of exhibition, spectacle, voyeurism, and play will misdiagnose our situation.  Harcourt aligns himself at times with some of the most provocative critics of intelligence agencies like the NSA and companies like Facebook.  Yet the emphasis he places on personal desire and participatory disclosure belies any Manichean notion of rogue institutions preying upon ignorant citizens.  His diagnosis of how we’ve lost our privacy is more complex, ethically and practically, in that it forces attention on the ways in which our current situation is jointly created by bottom-up processes of self-exposure as well as by top-down processes of supervision and control.

Thus, when Harcourt writes in the introduction that “[t]here is no conspiracy here, nothing untoward,” what might seem like a throwaway line is instead an important descriptive and normative position he is staking out about the nature of the surveillance problem.  Exposed calls on critics of digital surveillance to adopt a broader analytic lens and a more nuanced understanding of causation, power, and responsibility.  Harcourt in this way opens up fruitful lines of inquiry while also, I think, opening himself up to the charge of victim-blaming insofar as he minimizes the social and technological forces that limit people’s capacity to change their digital circumstances.

The place of desire in “the expository society,” Harcourt shows, requires rethinking our metaphors for surveillance, discipline, and loss of privacy.  Exposed unfolds as a series of investigations into the images and tropes we conventionally rely on to crystallize the nature of the threat we face: Big Brother, the Panopticon, the Surveillance State, and so forth.  In each case, Harcourt provides an erudite and sympathetic treatment of the ways in which these metaphors speak to our predicament.  Yet in each case, he finds them ultimately wanting.  For instance, after the Snowden disclosures began to garner headlines, many turned to George Orwell’s novel 1984 to help make sense of the NSA’s activities.  Book sales skyrocketed.  Harcourt, however, finds the Big Brother metaphor to be misleading in critical respects.  As he reminds us, Big Brother sought to wear down the citizens of Oceania, neutralize their passions, fill them with hate.  “Today, by contrast, everything functions by means of ‘likes,’ ‘shares,’ ‘favorites,’ ‘friending,’ and ‘following.’  The drab blue uniform and grim gray walls in 1984 have been replaced by the iPhone 5C in all its radiant colors . . . .”  We are in a new condition, a new paradigm, and we need a new language to negotiate it.

Harcourt then considers a metaphor of his own devising: the “mirrored glass pavilion.”  This metaphor is meant to evoke a sleek, disorienting, commercialized space in which we render ourselves exposed to the gaze of others and also, at the same time, to ourselves.  But Harcourt isn’t quite content to rest with this metaphor either.  He introduces the mirrored glass pavilion, examines it, makes a case for it, and keeps moving—trying out metaphors like “steel mesh” and “data doubles” and (my favorite) “a large oligopolistic octopus that is enveloping the world,” all within the context of the master metaphor of an expository society.  Metaphors, it seems, are indispensable if imperfect tools for unraveling the paradoxes of digital life.

The result is a restless, searching quality to the analysis.  Exposed is constantly introducing new anecdotes, examples, paradigms, and perspectives, in the manner of a guided tour.  Harcourt is clearly deeply unsettled by the digital age we have entered.  Part of the appeal of the book is that he is willing to leave many of his assessments unsettled too, to synthesize a vast range of developments without simplifying or prophesying.

Another aspect of Exposed that enhances its effect is the prose style.  Now, I wouldn’t say that Harcourt’s Foucault-fueled writing has ever suffered from a lack of flair.  But in this work, Harcourt has gone further and become a formal innovator.  He has developed a prose style that uncannily mimics the experience of the expository society, the phenomenology of the digital subject.

Throughout the book, when describing the allure of new technologies that would rob us of our privacy and personhood, the prose shifts into a different register.  The reader is suddenly greeted with quick staccato passages, with acronyms and brand names thrown together in a dizzying succession of catalogs and subordinate clauses.  In these passages, Harcourt models for us the implicit bargain offered by the mirrored glass pavilion—inviting us to suspend critical judgment, to lose ourselves, as we get wrapped up in the sheer visceral excitement, the mad frenzy, of digital consumer culture.

Right from the book’s first sentence, we confront this mimetic style:

Every keystroke, each mouse click, every touch of the screen, card swipe, Google search, Amazon purchase, Instagram ‘like,’ tweet, scan—in short, everything we do in our new digital age can be recorded, stored, and monitored.  Every routine act on our iPads and tablets, on our laptops, notebooks, and Kindles, office PCs and smart-phones, every transaction with our debit card, gym pass, E-ZPass, bus pass, and loyalty cards can be archived, data-mined, and traced back to us.

Other sentences deploy a similar rhetorical strategy in a more positive key, describing how we now “‘like,’ we ‘share,’ we ‘favorite.’ We ‘follow.’ We ‘connect.’ We get ‘LinkedIn’”—how “[e]verything today is organized around friending, clicking, retweeting, and reposting.”

There is a visceral pleasure to be had from abandoning oneself to the hyper-stimulation, the sensory overload, of passages like these.  Which is precisely the point.  For that pleasure underwrites our own ubiquitous surveillance and the mortification of self.  That pleasure is our undoing.  More than anything else, in Harcourt’s telling, it is the constant gratifications afforded by technologies of surveillance that have “enslaved us, exposed us, and ensnared us in this digital shell as hard as steel.”

*  *  *

I hope these brief comments have conveyed some of what I found so stimulating in this remarkable book.  Always imaginative and often impressionistic, Exposed is hazy on a number of significant matters.  In the hope of facilitating conversation, I will close by noting a few.

First, what are the distributional dimensions of the privacy crisis that we face?  The implicit digital subject of Exposed seems to be a highly educated, affluent type—someone who would write a blog post, wear an Apple Watch, buy books on Amazon.  There may well be millions of people like this; I don’t mean to suggest any narcissism in the book’s critical gaze.  I wonder, though, how the privacy pitfalls chronicled in Exposed relate to more old-fashioned forms of observation and exploitation that continue to afflict marginalized populations and that Harcourt has trenchantly critiqued in other work.

Second, what about all the purported benefits of digital technology, Big Data, and the like?  Some commentators, as Harcourt notes in passing, have begun to argue that panoptic surveillance, at least under the right conditions, can facilitate not only certain kinds of efficiency and security but also values such as democratic engagement and human freedom that Harcourt is keen to promote.  I share Harcourt’s skepticism about these arguments, but if they are wrong then we need to know why they are wrong, and in particular whether they are irredeemably mistaken or whether they might instead point us toward useful regulatory reforms.

And lastly, what would dissent look like in this realm?  The final, forward-looking chapter of Exposed is strikingly short, only four pages long.  Harcourt exhorts the reader to fight back through “digital resistance” and “political disobedience.”  But remember, there is no conspiracy here, nothing untoward.  Rather, there is a massively distributed and partially emergent system of surveillance.  And this system generates enormous perceived rewards, not just for those at the top but for countless individuals throughout society.  It is something of a puzzle, then, what would motivate the sort of self-abnegating resistance that Harcourt calls for—resistance that must be directed, in the first instance, toward our own compulsive habits and consumptive appetites.  How would that sort of resistance develop, in the teeth of our own desires, and how could it surmount collective action barriers?

These are just a few of the urgent questions that Exposed helps bring into focus.

*  *  *

David Pozen is an associate professor at Columbia Law School.


Surveillance and Our Addiction to Exposure

Bernard Harcourt’s Exposed: Desire and Disobedience in the Digital Age (Harvard University Press 2015) is an indictment of our contemporary age of surveillance and exposure — what Harcourt calls “the expository society.” Harcourt passionately deconstructs modern technology-infused society and explains its dark implications with an almost poetic eloquence.

Harcourt begins by critiquing the metaphor of George Orwell’s 1984 to describe the ills of our world today.  In my own previous work, I critiqued this metaphor, arguing that Kafka’s The Trial was a more apt metaphor to capture the powerlessness and vulnerability that people experience as government and businesses construct and use “digital dossiers” about their lives.  Harcourt critiques Orwell in a different manner, arguing that Orwell’s dystopian vision is inapt because it is too drab and gray:

No, we do not live in a drab Orwellian world.  We live in a beautiful, colorful, stimulating, digital world that is online, plugged in, wired, and Wi-Fi enabled.  A rich, bright, vibrant world full of passion and jouissance–and by means of which we reveal ourselves and make ourselves virtually transparent to surveillance.  In the end, Orwell’s novel is indeed prescient in many ways, but jarringly off on this one key point.  (pp. 52-53)

[Image: Orwell’s Vision]

[Image: Life Today]

Harcourt notes that the “technologies that end up facilitating surveillance are the very technologies we crave.” We desire them, but “we have become, slowly but surely, enslaved to them.” (p. 52).

Harcourt’s book reminds me of Neil Postman’s Amusing Ourselves to Death, originally published about 30 years ago — back in 1985.  Postman also critiqued Orwell’s metaphor and argued that Aldous Huxley’s Brave New World was a more apt metaphor to capture the problematic effects new media technologies were having on society.


Is Eviction-as-a-Service the Hottest New #LegalTech Trend?

Some legal technology startups are struggling nowadays, as venture capitalists pull back from a saturated market. The complexity of the regulatory landscape is hard to capture in a Silicon Valley slide deck. Still, there is hope for legal tech’s “idealists.” A growing firm may bring eviction technology to struggling neighborhoods around the country:

Click Notices . . . integrates its product with property management software, letting landlords set rules for when to begin evictions. For instance, a landlord could decide to file against every tenant that owes $25 or more on the 10th of the month. Once the process starts, the Click Notices software, which charges landlords flat fees depending on local court costs, sends employees or subcontractors to represent the landlord in court (attorneys aren’t compulsory in many eviction cases).
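It takes remarkably little code to operationalize a rule like the one described above. A minimal sketch, assuming a hypothetical tenant ledger; the names, threshold, and trigger date below are illustrative, not Click Notices’ actual product:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Tenant:
    name: str
    balance_owed: float  # dollars past due

# Hypothetical landlord rule from the story: file against every tenant
# owing $25 or more on the 10th of the month.
FILING_DAY = 10
MIN_BALANCE = 25.00

def tenants_to_file(ledger, today):
    """Return the tenants the system would queue for eviction filings."""
    if today.day != FILING_DAY:
        return []
    return [t for t in ledger if t.balance_owed >= MIN_BALANCE]

ledger = [Tenant("D. Brown", 336.00), Tenant("J. Doe", 12.50)]
for t in tenants_to_file(ledger, date(2016, 5, 10)):
    print(f"Queue eviction filing: {t.name} owes ${t.balance_owed:.2f}")
```

The marginal cost of each filing approaches zero for the landlord; the tenant’s cost of answering in rent court does not.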

I can think of few better examples of Richard Susskind’s vision for the future of law. As one Baltimore tenant observes, the automation of legal proceedings can lead to near-insurmountable advantages for landlords:

[Click Notices helped a firm that] tried to evict Dinickyo Brown over $336 in unpaid rent. Brown, who pays $650 a month for a two-bedroom apartment in Northeast Baltimore, fought back, arguing the charges arose after she complained of mold. The landlord dropped the case, only to file a fresh eviction action—this time for $290. “They drag you back and forth to rent court, and even if you win, it goes onto your record,” says Brown, who explains that mold triggers her epilepsy. “If you try to rent other properties or buy a home, they look at your records and say: You’ve been to rent court.”

And here’s what’s truly exciting for #legaltech innovators: the digital reputation economy can synergistically interact with the new eviction-as-a-service approach. Tenant blacklists can ensure that merely trying to fight an eviction leads to devastating consequences in the future. Imagine the investment returns for a firm that owned both the leading eviction-as-a-service platform in a city and the leading tenant blacklist. Capture about 20 of the US’s top MSAs, and we may well be talking unicorn territory.

As we learned during the housing crisis, the best place to implement legal process outsourcing is against people who have a really hard time fighting back. That may trouble old-school lawyers who worry about ever-faster legal processes generating errors, deprivations of due process, or worse. But the legal tech community tends to think about these matters in financialized terms, not fusty old concepts like social justice or autonomy. I sense they will celebrate eviction-as-a-service as one more extension of technologized ordering of human affairs into a profession whose “conservatism” they assume to be self-indicting.

Still, even for them, caution is in order. Brett Scott’s skepticism about fintech comes to mind:

[I]f you ever watch people around automated self-service systems, they often adopt a stance of submissive rule-abiding. The system might appear to be ‘helpful’, and yet it clearly only allows behaviour that agrees to its own terms. If you fail to interact exactly correctly, you will not make it through the digital gatekeeper, which – unlike the human gatekeeper – has no ability or desire to empathise or make a plan. It just says ‘ERROR’. . . . This is the world of algorithmic regulation, the subtle unaccountable violence of systems that feel no solidarity with the people who have to use it, the foundation for the perfect scaled bureaucracy.
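Scott’s “digital gatekeeper” is easy to caricature in code. A hypothetical sketch of a rigid intake validator (the field names and rules are invented for illustration) that admits only exactly conforming input and otherwise just says ERROR:

```python
import re

# Caricature of a "digital gatekeeper": hypothetical intake rules that
# admit only exactly conforming input and otherwise just say ERROR.
RULES = {
    "account_id": r"\d{10}",      # exactly ten digits, nothing else
    "amount":     r"\d+\.\d{2}",  # dollars and cents, no shortcuts
}

def gatekeeper(form):
    for field, pattern in RULES.items():
        if not re.fullmatch(pattern, form.get(field, "")):
            return "ERROR"  # no empathy, no plan B
    return "OK"

print(gatekeeper({"account_id": "0123456789", "amount": "25.00"}))  # OK
print(gatekeeper({"account_id": "123-456", "amount": "25"}))        # ERROR
```

A human clerk might read “25” as “25.00”; the validator, by design, cannot.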

John Danaher has even warned of the possible rise of “algocracy.” And Judy Wajcman argues that “Futuristic visions based on how technology can speed up the world tend to be inherently conservative.” As new legal technology threatens to further entrench power imbalances between creditors and debtors, landlords and tenants, the types of feudalism Bruce Schneier sees in the security landscape threaten to overtake far more than the digital world.

(And one final note. Perhaps even old-school lawyers can join Paul Gowder’s praise for a “parking ticket fighting” app, as a way of democratizing advocacy. It reminds me a bit of TurboTax, which democratized access to tax preparation. But we should also be very aware of exactly how TurboTax used its profits when proposals to truly simplify the experience of tax prep emerged.)

Hat Tip: To Sarah T. Roberts, for alerting me to the eviction story.

The Emerging Law of Algorithms, Robots, and Predictive Analytics

In 1897, Holmes famously pronounced, “For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.” He could scarcely envision at the time the rise of cost-benefit analysis, and the comparative devaluation of legal process and non-economic values, in the administrative state. Nor could he have foreseen the surveillance-driven tools of today’s predictive policing and homeland security apparatus. Nevertheless, I think Holmes’s empiricism and pragmatism still animate dominant legal responses to new technologies. Three conferences this spring show the importance of “statistics and economics” in future tools of social order, and the fundamental public values that must constrain those tools.

Tyranny of the Algorithm? Predictive Analytics and Human Rights

As the conference call states:

Advances in information and communications technology and the “datafication” of broadening fields of human endeavor are generating unparalleled quantities and kinds of data about individual and group behavior, much of which is now being deployed to assess risk by governments worldwide. For example, law enforcement personnel are expected to prevent terrorism through data-informed policing aimed at curbing extremism before it expresses itself as violence. And police are deployed to predicted “hot spots” based on data related to past crime. Judges are turning to data-driven metrics to help them assess the risk that an individual will act violently and should be detained before trial. 


Where some analysts celebrate these developments as advancing “evidence-based” policing and objective decision-making, others decry the discriminatory impact of reliance on data sets tainted by disproportionate policing in communities of color. Still others insist on a bright line between policing for community safety in countries with democratic traditions and credible institutions, and policing for social control in authoritarian settings. The 2016 annual conference will . . . consider the human rights implications of the varied uses of predictive analytics by state actors. As a core part of this endeavor, the conference will examine—and seek to advance—the capacity of human rights practitioners to access, evaluate, and challenge risk assessments made through predictive analytics by governments worldwide. 
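The worry about data sets “tainted by disproportionate policing” is structural: where patrols go determines what gets recorded, and what gets recorded determines where patrols go. A toy simulation (all numbers invented) shows how two neighborhoods with identical true crime rates can still generate diverging records:

```python
import random
random.seed(0)

# Two neighborhoods with the SAME underlying crime rate; neighborhood 0
# simply starts with more recorded incidents because it was patrolled more.
true_rate = [0.3, 0.3]
recorded = [30, 10]  # historical counts, tainted by unequal patrol levels

for month in range(24):
    # "Predictive" allocation: send extra patrols wherever records are highest.
    target = 0 if recorded[0] >= recorded[1] else 1
    for hood in (0, 1):
        detection = 0.9 if hood == target else 0.3  # patrols observe more
        incidents = sum(random.random() < true_rate[hood] for _ in range(100))
        recorded[hood] += int(incidents * detection)

print(recorded)  # the initial disparity compounds despite equal true rates
```

Nothing in the loop is malicious; the disparity is inherited from the tainted starting counts and amplified by the allocation rule.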

This focus on the violence targeted and legitimated by algorithmic tools is a welcome chance to discuss the future of law enforcement. As Dan McQuillan has argued, these “crime-fighting” tools both extend extant technologies of ranking, sorting, and evaluating, and raise fundamental challenges to the rule of law:

According to Agamben, the signature of a state of exception is ‘force-of’; actions that have the force of law even when not of the law. Software is being used to predict which people on parole or probation are most likely to commit murder or other crimes. The algorithm developed by university researchers uses a dataset of 60,000 crimes and some dozens of variables about the individuals to help determine how much supervision the parolees should have. While having discriminatory potential, this algorithm is being invoked within a legal context.

[T]he steep rise in the rate of drone attacks during the Obama administration has been ascribed to the algorithmic identification of ‘risky subjects’ via the disposition matrix. According to interviews with US national security officials the disposition matrix contains the names of terrorism suspects arrayed against other factors derived from data in ‘a single, continually evolving database in which biographies, locations, known associates and affiliated organizations are all catalogued.’ Seen through the lens of states of exception, we cannot assume that the impact of algorithmic force-of will be constrained because we do not live in a dictatorship. . . .What we need to be alert for, according to Agamben, is not a confusion of legislative and executive powers but separation of law and force of law. . . [P]redictive algorithms increasingly manifest as a force-of which cannot be restrained by invoking privacy or data protection. 

The ultimate logic of the algorithmic state of exception may be a homeland of “smart cities,” and force projection against an external world divided into “kill boxes.” 


We Robot 2016: Conference on Legal and Policy Issues Relating to Robotics

As the “kill box” example above suggests, software is not just an important tool for humans planning interventions. It is also animating features of our environment, ranging from drones to vending machines. Ryan Calo has argued that the increasing role of robotics in our lives merits “systematic changes to law, institutions, and the legal academy,” and has proposed a Federal Robotics Commission. (I hope it gets further than proposals for a Federal Search Commission have so far!)


Calo, Michael Froomkin, and other luminaries of robotics law will be at We Robot 2016 this April at the University of Miami. Panels like “Will #BlackLivesMatter to RoboCop?” and “How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons” raise fascinating, difficult issues for the future management of violence, power, and force.


Unlocking the Black Box: The Promise and Limits of Algorithmic Accountability in the Professions


Finally, I want to highlight a conference I am co-organizing with Valerie Belair-Gagnon and Caitlin Petre at the Yale ISP. As Jack Balkin observed in his response to Calo’s “Robotics and the Lessons of Cyberlaw,” technology concerns not only “the relationship of persons to things but rather the social relationships between people that are mediated by things.” Social relationships are also mediated by professionals: doctors and nurses in the medical field, journalists in the media, attorneys in disputes and transactions.


For many techno-utopians, the professions are quaint, an organizational form to be flattened by the rapid advance of software. But if there is anything the examples above (and my book) illustrate, it is the repeated, even disastrous failures of many computational systems to respect basic norms of due process, anti-discrimination, transparency, and accountability. These systems need professional guidance as much as professionals need these systems. We will explore how professionals–both within and outside the technology sector–can contribute to a community of inquiry devoted to accountability as a principle of research, investigation, and action. 


Some may claim that software-driven business and government practices are too complex to regulate. Others will question the value of the professions in responding to this technological change. I hope that the three conferences discussed above will help assuage those concerns, continuing the dialogue started at NYU in 2013 about “accountable algorithms,” and building new communities of inquiry. 


And one final reflection on Holmes: the repetition of “man” in his quote above should not go unremarked. Nicole Dewandre has observed the following regarding modern concerns about life online: 

To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, exhausting laboring without rewards and being lost through the holes of the meritocracy net, being constrained in a specular posture of other’s deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, they have experienced with men. So, welcome to the world of women….

Dewandre’s voice complements that of US scholars (like Danielle Citron and Mary Ann Franks) on systematic disadvantages to women posed by opaque or distant technological infrastructure. I think one of the many valuable goals of the conferences above will be to promote truly inclusive technologies, permeable to input from all of society, not just top investors and managers.

X-Posted: Balkinization.


Beatles in the Ether or Streaming

By now many may know that The Beatles catalog (or most of it) is available for streaming on the major services. I happen to love The Beatles and easily recommend Cirque du Soleil’s Love in Las Vegas. But the streaming option presents some questions to which I have not seen answers. First, did the services offer anything extra or special to get the rights (I can’t recall the state of streaming license law as far as flat or baseline rates to stream once the rights are granted)? Second, will the rights holders (I can’t recall where those have ended up) track the money from streaming versus selling the tracks and albums? If they do, what will they find? Work on P2P music sharing and its effect on music sales, and a study on the effect of free options for film, may shed light on the future of Beatles revenues. The film study offered:

Together our results suggest that creative artists can use product differentiation and market segmentation strategies to compete with freely available copies of their content. Specifically, the post-broadcast increase in DVD sales suggests that giving away content in one channel can stimulate sales in a paid channel if the free content is sufficiently differentiated from its paid counterpart. Likewise, our finding that the presence of pirated content does not cannibalize sales for the movies in our sample suggests that if free and paid products appeal to separate customer segments, the presence of free products need not harm paid sales.

If music works in a way similar to film, The Beatles rights holders may expand their pie, not reduce it.

Either way I am happy to enjoy the streaming options while they last.


Cyberpunk Because You Forgot to Get Someone a Gift

OK, cyberpunk can be great for a range of reasons, but I saw this repost from io9 on The Essential Cyberpunk reading list and thought, “A great list with some books I have not read. Wait! It’s a list for folks who need to send a just-in-time Christmas gift (assuming the books are available as eBooks, which I know some are).” I easily recommend Neuromancer, Snow Crash, and Mirrorshades. I look forward to reading the rest (Accelerando did not work for me, but I may try it again). Plus, this genre does a great job of positing worlds and issues that press on the tech-law space right now, so that is another reason to jump in.