Category: Technology


University of Toronto Law Journal Vol. 68, Supplement 1

 

Special Issue: Artificial Intelligence, Technology, and the Law


Articles: 

Introduction: Artificial intelligence, technology, and the law – Simon Stern

Law as computation in the era of artificial legal intelligence: Speaking law to the power of statistics – Mireille Hildebrandt

Warming up to inscrutability: How technology could challenge our concept of law – Brian Sheppard

Prediction, persuasion, and the jurisprudence of behaviourism – Frank Pasquale & Glyn Cashwell

Transformative legal technology and the rule of law – Paul Gowder

How artificial intelligence will affect the practice of law – Benjamin Alarie, Anthony Niblett & Albert H Yoon


FAN 173.1 (First Amendment News) ROBOTICA EROTICA — Robotic Strippers Dance in Las Vegas

To suggest that the state can regulate robot dancers because they may stir erotic feelings is to say that the government may control the imagination. — Robert Corn-Revere

Dateline Pornotopia. The very thought of it would have made Doctor Freud blush, this new pleasure-principle frontier. As for Anthony Comstock, he would be in moral shock. What about Aldous Huxley? He would have said, “This is something right out of my Brave New World.” And most assuredly Professor Fred Schauer would view such eroticized acts as well beyond the First Amendment pale of protection. Then there is The Death of Discourse (1996), which predicted that the new technologies would serve the libido of future generations.

Well, make of it what you will, but it is nonetheless now a fact: Robotic strippers have come to Las Vegas at the 50th Consumer Electronics Show. Side-by-side with real dancers, the robotic strippers gyrate with erotic pulsation.  (Video here).

MANDEL NGAN/AFP/Getty Images

As reported by Kurt Wagner of CNBC: “The Sapphire Gentleman’s Club, a strip club right off Vegas’s main drag, paid to showcase the robots as a way to drum up interest from press and customers. . . . The robots were as advertised: They gyrated on a stripper pole to music from 50 Cent and Pharrell, with dollar bills scattered on the stage and the floor. A half-dozen human dancers, most of whom were dressed in tight, shiny robot costumes, repeatedly took pics in front of their metallic colleagues.”

Giles Walker (Islington Tribune)

Inventor: “They’re the work,” adds Wagner, “of an artist named Giles Walker, a 50-year-old Brit who describes himself as a scrap metal artist with a passion for building animatronic robots. One of his other projects, The Last Supper, features 13 robots interacting around a table.”

“Walker says he got the idea for pole-dancing robots more than seven years ago, when he noticed the rise of CCTV cameras being used as a way to surveil people in Britain for safety purposes, what he called ‘mechanical peeping Toms.’ He was inspired by the idea of voyeurism, or watching others for pleasure, and decided to try and turn the cameras into something sexy on their own.”

So, are these robots art? Well, they could be.  Again, consider Corn-Revere’s reply to this question: “If stationary sculptures are expressive art that the First Amendment protects – and they are – then moving sculptures can be as well.”

Question: what does this all portend for the future of eroticized expression and the First Amendment? For openers, consider Collins & Skover, Robotica: Speech Rights & Artificial Intelligence (Cambridge University Press, June 2018) — Robotica Erotica may be the sequel. Stay tuned!

Robot Lady (credit: Salon)

Nude Dancing: Assuming that erotic robotic dancing is covered under the First Amendment, might a state either ban or regulate such dancing? Recall in this regard the line of First Amendment cases ranging from Schad v. Mount Ephraim (1981) to City of Renton v. Playtime Theatres, Inc. (1986) to Barnes v. Glen Theatre, Inc. (1991) to Erie v. Pap’s A.M. (2000).

See also, David Hudson, “Nude Dancing,” First Amendment Online Library (“Nude dancing is a form of expressive conduct that when restricted, requires First Amendment review. However, the Supreme Court has upheld restrictions on totally nude dancing based on the secondary effects doctrine. Thus, in many cities and counties, dancers must don a modicum of clothing, arguably tempering their erotic messages.”)

Sex Toys?: Are such erotic bots akin to “sex toys” such that they might not qualify for any First Amendment protection? Consider Noah Feldman, Courts playing with the constitutionality of sex toys, Chicago Tribune, August 4, 2016 (“There’s no constitutional right to sex toys — yet. That’s according to a federal appeals court, which declined to strike down a Georgia city’s ordinance that prohibits selling sexual aids. But the three-judge panel invited the full court to rehear the case and strike down the law, stating that it was “sympathetic” to the claim but constrained by precedent. Eventually, the right to sex toys is likely to be accepted in all jurisdictions, as it already is in some. The basis will be the right to sexual intimacy recognized by the U.S. Supreme Court in the landmark 2003 case Lawrence v. Texas. And that raises a question about the evolving nature of constitutional rights: How did we get here? How does a decision framed around the autonomous right of two people to create an intimate sexual relationship come to cover access to toys? And should it?”) See Flanigan’s Enterprises v. City of Sandy Springs Georgia (11th Cir., en banc, Aug. 24, 2017).

Related

Meet “Harmony” – the sex robot with a Scottish accent (considerably more “appealing” than her Las Vegas mechanical counterparts) (YouTube video here)

→ Aurora Snow, Sex Robots Are Here, and They’re Incredibly Lifelike. But Are They Dangerous?, The Daily Beast, July 22, 2017

→ Eric Lieberman, Sex Robots Are Here And Could Change Society Forever, The Libertarian Republic, July 17, 2017

Rethinking the Political Economy of Automation

The White House recently released two important reports on the future of artificial intelligence. The “robot question” is as urgent today as it was in the 1960s. Back then, worry focused on the automation of manufacturing jobs. Now, the computerization of services is top of mind.

At present, economists and engineers dominate public debate on the “rise of the robots.” The question of whether any given job should be done by a robot is modeled as a relatively simple cost-benefit analysis. If the robot can perform a task more cheaply than a worker, substitute it in. This microeconomic approach to filling jobs dovetails with a technocratic, macroeconomic goal of maximizing some blend of GDP and productivity.

In the short run, these goals appear almost indisputable–the dictates of market reason. In the long run, they presage a jobs crisis. As Michael Dorf recently observed, even though “[i]t is possible that new technologies will create all sorts of new jobs that we have not imagined yet,” it is hard to imagine new mass opportunities for employment. So long as a job can be sufficiently decomposed, any task within it seems (to the ambitious engineer) automatable, and (to the efficiency-maximizing economist) ripe for transferring to software and machines. The professions may require a holistic perspective, but other work seems doomed to fragmentation and mechanization.

Dorf is, nevertheless, relatively confident about future economic prospects:

Standard analyses…assume that in the absence of either socialism or massive philanthropy from future tech multi-billionaires, our existing capitalist system will lead to a society in which the benefits of automation are distributed very unevenly. . . . That’s unlikely. Think about Henry Ford’s insight that if he paid his workers a decent wage, he would have not only satisfied workers but customers to buy his cars. If the benefits of technology are beyond the means of the vast majority of ordinary people, that severely limits the ability of capitalists and super-skilled knowledge workers to profit from the mass manufacture of the robotic gizmos. . . . Enlightened capitalists would understand that they need customers and that, with automation severely limiting the number of jobs available, customers can only be ensured through generous government-provided payments to individuals and families.

I hope he is right. But I want to explore some countervailing trends that militate against wider distribution of the gains from automation.

Social Media for Scholars

For about a year now, Nic Terry and I have been hosting “The Week in Health Law” podcast. (We did miss a few weeks–so we’re actually more like “This 8.3 Days in Health Law”–but we’re pretty reliable!) We interview law professors, social scientists, and other experts, mainly from the US, though with some international presence. We recently convened a “meta-podcast” with 3 past show guests (and the editor of Pharmalot, an influential pharma industry blog) on the importance of social media presence for engaged academics. Our show notes also link to some good guides from other scholars. Like the “No Jargon” podcast of the Scholars Strategy Network, we try to bring informed commentary on complex ideas (like agency guidance on wellness programs) to a broad audience. We’ve received positive feedback from around the world, and I’m often surprised by the range of people who are tuning in (from hospital administrators to bar leaders to general counsel).

I just wanted to add one cautionary note to the emerging commentary on engaged scholarship and social media. I often see participation in blogs, podcasts, or Twitter framed in corporate or neoliberal discourse–the need to “build a brand,” “increase citations,” “leverage a network,” and so on. Even I engage in that in the podcast when I discuss altmetrics. But at its core, the scholarly identity is very different from the metricized self of performance optimization. Our best conversations feature a critical distance from the topics at hand and even from the ever more voluminous research apparatus around them. They highlight, rather than gloss over, inevitable conflicts of values that emerge once one tries to apply banalities like the “triple aim” in specific settings. There is a deep interest in empirical research, and a sober awareness of its limits. (Our discussion with Scott Burris on policies like bike helmet laws is one very good example of this.)

The best moments of the podcast (contrasted with the impoverished neoliberal discourse often used to justify participation in engaged scholarship) highlight two very different meanings of “professionalism” now at work in our culture. The professionalized scholar is often a cite-generator and grant-grubber, more concerned with the external indicia of achievement than with the intrinsic value of the research those indicia are meant merely to validate or support. But if we consider the academy as a profession, we realize the extraordinary importance of its partial autonomy from both market and state. It exists to create a space for research and conversations that are impossible to monetize immediately (or maybe ever), and which have not been specifically approved by political institutions.

As the state increasingly becomes a cat’s paw of market forces, and market forces themselves are engineered by a shrinking and short-sighted financial elite, preserving the residual autonomy of professions is more important than ever. I hope that future discussions of engaged scholarship focus more on its potential to advance solidarity among those committed to an independent academy–not one keen on ever-preciser rankings of its members, or defensive about proving its value in economic terms that are themselves of questionable utility.


UCLA Law Review Vol. 64, Discourse


Citizens Coerced: A Legislative Fix for Workplace Political Intimidation Post-Citizens United Alexander Hertel-Fernandez & Paul Secunda 2
Lessons From Social Science for Kennedy’s Doctrinal Inquiry in Fisher v. University of Texas II Liliana M. Garces 18
Why Race Matters in Physics Class Rachel D. Godsil 40
The Indignities of Color Blindness Elise C. Boddie 64
The Misuse of Asian Americans in the Affirmative Action Debate Nancy Leong 90
How Workable Are Class-Based and Race-Neutral Alternatives at Leading American Universities? William C. Kidder 100
Mismatch and Science Desistance: Failed Arguments Against Affirmative Action Richard Lempert 136
Privileged or Mismatched: The Lose-Lose Position of African Americans in the Affirmative Action Debate Devon W. Carbado, Kate M. Turetsky, Valerie Purdie-Vaughns 174
The Right to Record Images of Police in Public Places: Should Intent, Viewpoint, or Journalistic Status Determine First Amendment Protection? Clay Calvert 230
A Worthy Object of Passion Seana Valentine Shiffrin 254
Foreword – Imagining the Legal Landscape: Technology and the Law in 2030 Jennifer L. Mnookin & Richard M. Re i
Imagining Perfect Surveillance Richard M. Re 264
Selective Procreation in Public and Private Law Dov Fox 294
Giving Up On Cybersecurity Kristen E. Eichensehr 320
DNA in the Criminal Justice System: A Congressional Research Service Report* (*From the Future) Erin Murphy 340
Utopia?: A Technologically Determined World of Frictionless Transactions, Optimized Production, and Maximal Happiness Brett Frischmann and Evan Selinger 372
The CRISPR Revolution: What Editing Human DNA Reveals About the Patent System’s DNA Robin Feldman 392
Virtual Violence Jaclyn Seelagy 412
Glass Half Empty Jane R. Bambauer 434
Social Control of Technological Risks: The Dilemma of Knowledge and Control in Practice, and Ways to Surmount It Edward A. Parson 464
Two Fables Christopher Kelty 488
Policing Police Robots Elizabeth E. Joh 516
Environmental Law, Big Data, and the Torrent of Singularities William Boyd 544

Facebook is More Like a Cable Network than a Newspaper

As I worried yesterday, Facebook’s defenders are already trying to end the conversation about platform bias before it can begin. “It’s like complaining that the New York Times doesn’t publish everything that’s fit to print or that Fox News is conservative,” Eugene Volokh states.

Eight years ago, I argued that platforms like Google are much more like cable networks than newspapers–and should, in turn, be eligible for more governmental regulation. (The government can’t force Fox News to promote Bernie Sanders–but it can require Comcast to carry local news.) The argument can be extended to dominant social networks, or even apps like WeChat.

As I note here, to the extent megaplatforms are classifiable under traditional First Amendment doctrine, they are often closer to utilities or cable networks than newspapers or TV channels. Their reach is far larger than that of newspapers or channels. Their selection and arrangement of links comes far closer to the cable network’s decision about what channels to program (where such entities, by and large, do not create the content they choose to air), than it does to a newspaper which mostly runs its own content and has cultivated an editorial voice. Finally, and most importantly, massive internet platforms must take the bitter with the sweet: if they want to continue avoiding liability for intellectual property infringement and defamation, they should welcome categorization as a conduit for speech, rather than speaker status itself.

Admittedly, if there is any aspect of Facebook where it might be said to be cultivating some kind of editorial voice, it is the Trend Box. It is ironic that they’ve gotten in the most trouble for this service, rather than the much more problematic newsfeed. But they invited this trouble with their bland and uninformative description of what the Trend Box is. Moreover, if the Trend Box is indeed treated as “media” (rather than a conduit for media), it could betoken a much deeper challenge to foundational media regulation like sponsorship disclosures–a topic I’ll tackle next week.

Platform Responsibility

Internet platforms are starting to recognize the moral duties they owe their users. Consider, for example, this story about Baidu, China’s leading search engine:

Wei Zexi’s parents borrowed money and sought an experimental treatment at a military hospital in Beijing they found using Baidu search. The treatment failed, and Wei died less than two months later. As the story spread, scathing attacks on the company multiplied, first across Chinese social networks and then in traditional media.

After an investigation, Chinese officials told Baidu to change the way it displays search results, saying they are not clearly labeled, lack objectivity and heavily favor advertisers. Baidu said it would implement the changes recommended by regulators, and change its algorithm to rank results based on credibility. In addition, the company has set aside 1 billion yuan ($153 million) to compensate victims of fraudulent marketing information.

I wish I could include this story in the Chinese translation of The Black Box Society. On a similar note, Google this week announced it would no longer run ads from payday lenders. Now it’s time for Facebook to step up to the plate, and institute new procedures to ensure more transparency and accountability.


Unequal Exposure

Towards the end of the breathless and impassioned tour through privacy, surveillance, carcerality, and desire that is Exposed, Bernard Harcourt writes that “the emphasis on what we must do as ethical selves, each and every one of us – us digital subjects – may be precisely what is necessary for us to begin to think of ourselves as we. Yes, as that we that has been haunting this book since page one” (283). The call for unity and solidarity is seductive: if “we” are all exposed and vulnerable, then “we” can all resist and demand change. But that “we” – that reassuring abstraction of humanity and human experience – is not in fact what haunts this book. That “we” – unquestioned, undifferentiated, unmarked – is taken for granted and treated as the universal subject in this book. What truly haunts this book is everything that this “we” obscures and represses. Harcourt’s “we” is remarkably undifferentiated. Nearly every point Harcourt makes about how “we” experience digital subjectivity, surveillance, and exposure would and should be contested by women, people of color, the poor, sexual minorities (and those who belong to more than one of the categories in this non-exhaustive list). It is unfair, of course, to expect any one book or any one author to capture the full complexity of human experience on any topic. One writes about what one knows, and nuance must sometimes be sacrificed for the sake of broad theory. But there is a difference between falling short of conveying the diversity of human experience and barely acknowledging the existence of differentiation. If one of Harcourt’s goals is to lead us to “think of ourselves as we,” it is vital to recognize that  “we” in the digital age are not equally represented, equally consenting or resisting, or equally exposed.

Let’s begin with Harcourt’s characterization of the digital age as a study in shallow positivity: “We do not sing hate, we sing praise. We ‘like,’ we ‘share,’ we ‘favorite.’ We ‘follow.’ We ‘connect.’ ‘We get LinkedIn.’ Ever more options to join and like and appreciate. Everything today is organized around friending, clicking, retweeting, and reposting. … We are appalled by mean comments – which are censored if they are too offensive”(41). This is a picture of the digital world that will be  unrecognizable to many people. There is no mention of online mobs, targeted harassment campaigns, career-destroying defamation, rape and death threats, doxxing, revenge porn, sex trafficking, child porn, online communities dedicated to promoting sexual violence against women, or white supremacist sites. No mention, in short, of the intense, destructive, unrelenting hatred that drives so much of the activity of our connected world. Harcourt’s vision of our digital existence as a sunny safe space where occasional “mean comments” are quickly swept from view is nothing short of extraordinary.

Next, consider Harcourt’s repeated insistence that there are no real distinctions between exposer and exposed, the watcher and the watched: “There is no clean division between those who expose and those who surveil; surveillance of others has become commonplace today, with nude pictures of celebrities circulating as ‘trading fodder’ on the more popular anonymous online message boards, users stalking other users, and videos constantly being posted about other people’s mistakes, accidents, rants, foibles, and prejudices. We tell stories about ourselves and others. We expose ourselves. We watch others” (129). There are, in fact, important divisions between exposers and the exposed. With regard to sexual exposure, it is overwhelmingly the case that women are the subjects and not the agents of exposure. The nude photos to which Harcourt refers weren’t of just any celebrities; they were with few exceptions female celebrities. The hacker in that case, as in nearly every other case of nude photo hacking, is male, as is nearly every revenge porn site owner and the majority of revenge porn consumers. The “revenge porn” phenomenon itself, more accurately described as “nonconsensual pornography,” is overwhelmingly driven by men exposing women, not the other way around. Many of Harcourt’s own examples of surveillance point to the gender imbalance at work in sexual exposure. The LOVEINT scandal, the CCTV cameras pointed into girls’ toilets and changing rooms in UK schools (229), and Edward Snowden’s revelations of how the NSA employees share naked pictures (230) primarily involve men doing the looking and women and girls being looked at. The consequences of sexual exposure are also not gender-neutral: while men and boys may suffer embarrassment and shame, girls and women suffer these and much more, including being expelled from school, fired from jobs, tormented by unwanted sexual propositions, and threatened with rape.

There are also important distinctions to be made between those who voluntarily expose themselves and those who are exposed against their will. In the passage above, Harcourt puts nude photos in the same list as videos of people’s “rants, foibles, and prejudices.” The footnote to that sentence provides two specific examples: Jennifer Lawrence’s hacked photos and video of Michael Richards (Seinfeld’s Kramer) launching into a racist tirade as he performed at a comedy club (311). That is a disturbing false equivalence. The theft of private information is very different from a public, voluntary display of racist hatred. In addition to the fact that naked photos are in no way comparable to casual references to lynching and the repeated use of racial slurs, it should matter that Jennifer Lawrence was exposed against her will and Michael Richards exposed himself.

It’s not the only time in the book that Harcourt plays a bit fast and loose with the concepts of consent and voluntariness. In many places he criticizes “us” for freely contributing to our own destruction: “There is hardly any need for illicit or surreptitious searches, and there is little need to compel, to pressure, to strong-arm, or to intimidate, because so many of us are giving all our most intimate information and whereabouts so willingly and passionately – so voluntarily” (17). And yet Harcourt also notes that in many cases, people do not know that they are being surveilled or do not feel that they have any practical means of resistance. “The truth is,” Harcourt tells us with regard to the first, “expository power functions best when those who are seen are not entirely conscious of it, or do not always remember. The marketing works best when the targets do not know that they are being watched” (124). On the second point, Harcourt observes that “when we flinch at the disclosure, most of us nevertheless proceed, feeling that we have no choice, not knowing how not to give our information, whom we would talk to, how to get the task done without the exposure. We feel we have no other option but to disclose” (181-2). But surely if people are unaware of a practice or feel they cannot resist it, they can hardly be considered to have voluntarily consented to it.

Also, if people often do not know that they are under surveillance, this undermines one of the more compelling concerns of the book, namely, that surveillance inhibits expression. It is difficult to see how surveillance could have an inhibiting effect if the subjects are not conscious of the fact that they are being watched. Surreptitious surveillance certainly creates its own harms, but if subjects are truly unaware that they are being watched – as opposed to not knowing exactly when or where surveillance is taking place but knowing that it is taking place somewhere somehow, which no doubt does create a chilling effect – then self-censorship is not likely to be one of them.

Harcourt suggests a different kind of harm when he tells us that “[i]nformation is more accessible when the subject forgets that she is being stalked” (124). That is, we are rendered more transparent to the watchers when we falsely believe they are not watching us. That seems right. But what exactly is the harm inflicted by this transparency? Harcourt warns that we are becoming “marketized subjects – or rather subject-objects who are nothing more than watched, tracked, followed, profiled at will, and who in turn do nothing more than watch and observe others” (26). While concerns about Big Data are certainly legitimate (and have been voiced by many scholars, lawyers, policymakers, and activists), Harcourt never paints a clear picture of what he thinks the actual harm of data brokers and targeted Target advertisements really is. In one of the few personal and specific examples he offers of the harms of surveillance, Harcourt describes the experience of being photographed by a security guard before a speaking engagement. Harcourt is clearly unsettled by the experience: “I could not resist. I did not resist. I could not challenge the security protocol. I was embarrassed to challenge it, so I gave in without any resistance. But it still bothers me today. Why? Because I had no control over the dissemination of my own identity, of my face. Because I felt like I had no power to challenge, to assert myself” (222). While one sympathizes with Harcourt’s sense of disempowerment, it is hard to know what to think of it in relation to the sea of other surveillance stories: women forced to flee their homes because of death threats, parents living in fear because the names of their children and the schools they attend have been published online, or teenaged girls committing suicide because the photo of their rape is being circulated on the Internet as a form of entertainment.

Harcourt uses the term “stalk” at least eight times in this book, and none of these references are to actual stalking, the kind that involves being followed by a particular individual who knows where you live and work and means you harm, the kind that one in six women in the U.S. will experience in her lifetime, the kind that is encouraged and facilitated by an ever-expanding industry of software, gadgets, and apps that openly market themselves to angry men as tools of control over the women who have slipped their grasp. What a privilege it is to be able to treat stalking not as a fact of daily existence, but as a metaphor.

Harcourt’s criticism of what he considers to be the Supreme Court’s lack of concern for privacy adds a fascinating gloss to all of this. Harcourt takes particular aim at Justice Scalia, asserting that even when Scalia seems to be protecting privacy, he is actually disparaging it: “Even in Kyllo v. United States… where the Court finds that the use of heat-seeking technology constitutes a search because it infringes on the intimacies of the home, Justice Scalia mocks the humanist conception of privacy and autonomy.” The proof of this assertion supposedly comes from Scalia’s observation that the technology used in that case “might disclose, for example, at what hour each night the lady of the house takes her daily sauna and bath – a detail that many would consider ‘intimate.’” Harcourt assumes that Scalia’s reference to the “lady of the house” is an ironic expression of contempt. But Scalia is not being ironic. Elsewhere in the opinion, he emphatically states that “[i]n the home… all details are intimate details,” and many of his other opinions reinforce this view. Scalia and many members of the Court are very concerned about privacy precisely when it involves the lady of the house, or the homeowner subjected to the uninvited drug-sniffing dog on the porch (Florida v. Jardines, 2013), or the federal official subjected to the indignity of a drug test (Treasury Employees v. Von Raab, 1989 (dissent)). These same members of the Court, however, are remarkably unconcerned about privacy when it involves a wrongfully arrested man subjected to a humiliating “squat and cough” cavity search (Florence v. Burlington, 2012), or a driver searched after being racially profiled (Whren v. US, 1996), or a pregnant woman tricked into a drug test while seeking prenatal care (Ferguson v. Charleston, 2001 (dissent)).
In other words, the problem with the Supreme Court’s views on privacy and surveillance is not that the Court does not care about privacy; it is that it tends to care only when the intrusion affects interests its members share or people they resemble.

The world is full of people who do not have the luxury of worrying about a growing addiction to Candy Crush or whether Target knows they need diapers before they do. They are too busy worrying that their ex-husband will hunt them down and kill them, or that they will be stopped and subjected to a humiliating pat down for the fourth time that day, or that the most private and intimate details of their life will be put on public display by strangers looking to make a buck. These people are not driven by a desire to expose themselves. Rather, they are being driven into hiding, into obscurity, into an inhibited and chilled existence, by people who are trying to expose them. If “we” want to challenge surveillance and fight for privacy, “they” must be included.


The Fragility of Desire

In his excellent new book Exposed, Harcourt offers a seductive analysis of the role of desire in what he calls the “expository society” of the digital age. We are not characters in Orwell’s 1984, or prisoners of Bentham’s Panopticon, but rather enthusiastic participants in a “mirrored glass pavilion” that is addictive and mesmerizing. Harcourt offers a detailed picture of this pavilion and also shows us the seamy side of our addiction to it. Recovery from this addiction, he argues, requires acts of disobedience, but therein lies the great dilemma and paradox of our age: revolution requires desire, not duty, yet our desires are what have ensnared us.

I think that this is both a welcome contribution and a misleading diagnosis.

There have been many critiques of consent-based privacy regimes as enabling, rather than protecting against, privacy invasions. The underlying tenor of many of these critiques is that consent fails as a regulatory tool because it is too difficult to make it truly informed. Harcourt’s emphasis on desire shows why there is a deeper problem than this: our participation in the platforms that surveil us is rooted in something deeper than misinformed choice. And this makes the “what to do?” question all the more difficult to answer. Even those of us who see a stronger role for law than Harcourt outlines in this book (I agree with Ann Bartow’s comments on this) should pause here. Canada, for example, has strong private-sector data protection laws with oversight from excellent provincial and federal privacy commissioners. And yet these laws are heavily consent-based. Such laws are able to shift practices toward a stronger emphasis on things like opt-in consent, but Harcourt leaves us with a disquieting sense that this might just be an example of a Pyrrhic victory, legitimizing surveillance through our attempts to regulate it, because we still have not grappled with the more basic problem of the seduction of the mirrored glass pavilion.

The problem with Harcourt's position is that, in exposing this aspect of the digital age in order to complicate our standard surveillance tropes, he risks ignoring other sources of complexity that are also important both for diagnosing the problem and for outlining a path forward.

Desire is not always the reason that people participate in new technologies. As Ann Bartow and Olivier Sylvain point out, people do not always have a choice about their participation in the technologies that track them. The digital age is not an amusement park we can choose to visit or to boycott; it is deeply integrated into our daily practices and needs, including the ways in which we work, bank, and access government services.

But even when we do actively choose to use these tools, it is not clear that desire captures the why of all such choices. If we willingly enter Harcourt's mirrored glass pavilion, it is sometimes because of some of its very useful properties: the space- and time-bending nature of information technology. For example, Google Calendar is incredibly convenient because multiple people can access shared calendars from multiple devices in multiple locations at different times, making the coordination of schedules easy. This is not digital lust, but digital convenience.

These space- and time-bending properties of information technology are important for understanding the contours of the public/private nexus of surveillance that so characterizes our age. Harcourt does an excellent job of pointing out some of the salient features of this nexus, describing a "tentacular oligarchy" where private and public institutions are bound together in state-like "knots of power," with individuals passing back and forth between these institutions. But what is strange in Harcourt's account is that this tentacular oligarchy still appears to be bounded by the political borders of the US. It is within those borders that the state and the private sector have collapsed together.

What this account misses is that information technology has helped to unleash a global private sector that is not bounded by state borders. In this emerging global private sector, large multinational corporations often operate as "metanationals," or stateless entities. The commercial logic of information is that it should cross political borders with ease and be stored wherever it makes the most economic sense.

Consider some of the rhetoric surrounding the e-commerce chapter of the recent TPP agreement. The Office of the US Trade Representative indicates that one of its objectives is to keep the Internet "free and open," which it has pursued through rules that favour cross-border data flows and prevent data localization. It is easy to see how this idea of "free" might be confused with political freedom, for an activist in an oppressive regime is better able to exercise freedom of speech when that speech can cross political borders, or when the details of their communications can be stored in a location beyond the reach of their state. A similar rationale has been offered by some in the current Apple encryption debate: encryption protects American business people communicating within China, and we can see why that is important.

But this idea of freedom is the freedom of a participant in a global private sector with weak state control; freedom from the state control of oppressive regimes also entails freedom from the state protection of democratic regimes.

If metanationals pursue a state-free agenda, the state pursues an agenda of rights protectionism. By rights protectionism I mean the claim that states do, and should, protect the constitutional rights of their own citizens and residents but not those of others. Consider, for example, a Canadian citizen who resides in Canada and uses US cloud computing. That person could be communicating entirely with other Canadians in other Canadian cities and yet have all of their data stored in the US-based cloud. If the US authorities wanted access to that data, the US constitution would not apply to regulate that access in a rights-protecting manner, because the Canadian is a non-US person.

Many see this result as flowing from the logic of the Verdugo-Urquidez case. Yet that case concerned a search that occurred in a foreign territory (Mexico), rather than within the US, where the law of that territory continued to apply. The Canadian constitution does not apply to the acts of officials within the US. The data at issue falls into a constitutional black hole where no constitution applies (and maybe even an international human rights black hole, according to some US interpretations of extraterritorial obligations). States can then collect information within this black hole free of the usual liberal-democratic constraints and share it with other allies, a situation Snowden documented within the EU and likened to a "European bazaar" of surveillance.

Rights protectionism is not rights protection when information freely crosses political boundaries and state power piggybacks on that crossing and exploits it.

This is not a tentacular oligarchy operating within the boundaries of one state, but a series of global alliances, between allied states and between states and metanationals who exert state-like power, exploiting the weaknesses of state-bound law.

We are not in this situation simply because of a penchant for selfies. But to understand the full picture we need to look beyond "ourselves" and bring the global picture into view. We need to understand the ways in which our legal models fail to address these new realities, and even help to mask and legitimize the problems of the digital age through tools and rhetoric that are no longer suitable.

Lisa Austin is an Associate Professor at the University of Toronto Faculty of Law.