Category: Privacy


The Fragility of Desire

In his excellent new book Exposed, Bernard Harcourt offers a seductive analysis of the role of desire in what he calls the “expository society” of the digital age. We are not characters in Orwell’s 1984, or prisoners of Bentham’s Panopticon, but rather enthusiastic participants in a “mirrored glass pavilion” that is addictive and mesmerizing. Harcourt offers a detailed picture of this pavilion and also shows us the seamy side of our addiction to it. Recovery from this addiction, he argues, requires acts of disobedience, but therein lies the great dilemma and paradox of our age: revolution requires desire, not duty, yet our desires are what have ensnared us.

I think that this is both a welcome contribution and a misleading diagnosis.

There have been many critiques of consent-based privacy regimes as enabling surveillance rather than protecting privacy. The underlying tenor of many of these critiques is that consent fails as a regulatory tool because it is too difficult to make it truly informed. Harcourt’s emphasis on desire shows why there is a deeper problem than this: our participation in the platforms that surveil us is rooted in something deeper than misinformed choice. And this makes the “what to do?” question all the more difficult to answer. Even those of us who see a stronger role for law than Harcourt outlines in this book (I agree with Ann Bartow’s comments on this) should pause here. Canada, for example, has strong private sector data protection laws with oversight from excellent provincial and federal privacy commissioners. And yet these laws are heavily consent-based. Such laws are able to shift practices toward a stronger emphasis on things like opt-in consent, but Harcourt leaves us with a disquieting sense that this might be just a Pyrrhic victory, legitimizing surveillance through our attempts to regulate it because we still have not grappled with the more basic problem of the seduction of the mirrored glass pavilion.

The problem with Harcourt’s position is that, in exposing this aspect of the digital age in order to complicate our standard surveillance tropes, he risks ignoring other sources of complexity that are also important for both diagnosing the problem and outlining a path forward.

Desire is not always the reason that people participate in new technologies. As Ann Bartow and Olivier Sylvain point out, people do not always have a choice about their participation in the technologies that track them. The digital age is not an amusement park we can choose to visit or to boycott; it is deeply integrated into our daily practices and needs, including the ways in which we work, bank, and access government services.

But even when we do actively choose to use these tools, it is not clear that desire captures the why of all such choices. If we willingly enter Harcourt’s mirrored glass pavilion, it is sometimes because of some of its very useful properties — the space- and time-bending nature of information technology. For example, Google Calendar is incredibly convenient because multiple people can access shared calendars from multiple devices in multiple locations at different times, making the coordination of schedules incredibly easy. This is not digital lust, but digital convenience.

These space- and time-bending properties of information technology are important for understanding the contours of the public/private nexus of surveillance that so characterizes our age. Harcourt does an excellent job at pointing out some of the salient features of this nexus, describing a “tentacular oligarchy” where private and public institutions are bound together in state-like “knots of power,” with individuals passing back and forth between these institutions. But what is strange in Harcourt’s account is that this tentacular oligarchy still appears to be bounded by the political borders of the US. It is within those borders that the state and the private sector have collapsed together.

What this account misses is the fact that information technology has helped to unleash a global private sector that is not bounded by state borders. In this emerging global private sector large multinational corporations often operate as “metanationals” or stateless entities. The commercial logic of information is that it should cross political borders with ease and be stored wherever it makes the most economic sense.

Consider some of the rhetoric surrounding the e-commerce chapter of the recent TPP agreement. The Office of the US Trade Representative indicates that one of its objectives is to keep the Internet “free and open” which it has pursued through rules that favour cross-border data flows and prevent data localization. It is easy to see how this idea of “free” might be confused with political freedom, for an activist in an oppressive regime is better off in exercising freedom of speech when that speech can cross political borders or the details of their communications can be stored in a location that is free of the reach of their state. A similar rationale has been offered by some in the current Apple encryption debate — encryption protects American business people communicating within China and we can see why that is important.

But this idea of freedom is the freedom of a participant in a global private sector with weak state control; freedom from the state control of oppressive regimes also involves freedom from the state protection of democratic regimes.

If metanationals pursue a state-free agenda, the state pursues an agenda of rights protectionism. By rights protectionism I mean the claim that states do, and should, protect the constitutional rights of their own citizens and residents but not others. Consider, for example, a Canadian citizen who resides in Canada and uses US cloud computing. That person could be communicating entirely with other Canadians in other Canadian cities and yet have all of their data stored in the US-based cloud. If US authorities wanted access to that data, the US Constitution would not apply to regulate that access in a rights-protecting manner because the Canadian is a non-US person.

Many see this result as flowing from the logic of the Verdugo-Urquidez case. Yet that case concerned a search that occurred in a foreign territory (Mexico), rather than within the US, where the law of that territory continued to apply. The Canadian constitution does not apply to acts of officials within the US. The data at issue falls into a constitutional black hole where no constitution applies (and maybe even an international human rights black hole, according to some US interpretations of extraterritorial obligations). States can then collect information within this black hole free of the usual liberal-democratic constraints and share it with other allies, a situation Snowden documented within the EU and likened to a “European bazaar” of surveillance.

Rights protectionism is not rights protection when information freely crosses political boundaries and state power piggybacks on top of this crossing and exploits it.

This is not a tentacular oligarchy operating within the boundaries of one state, but a series of global alliances – between allied states and between states and metanationals who exert state-like power — exploiting the weaknesses of state-bound law.

We are not in this situation simply because of a penchant for selfies. But to understand the full picture we do need to look beyond “ourselves” and get the global picture in view. We need to understand the ways in which our legal models fail to address these new realities and even help to mask and legitimize the problems of the digital age through tools and rhetoric that are no longer suitable.

Lisa Austin is an Associate Professor at the University of Toronto Faculty of Law.


Irresistible Surveillance?

Bernard Harcourt’s Exposed: Desire and Disobedience in the Digital Age offers many intriguing insights into how power circulates in contemporary society.  The book’s central contribution, as I see it, is to complicate the standard model of surveillance by introducing the surveilled’s agency into the picture.  Exposed highlights the extent to which ordinary people are complicit in regimes of data-monitoring and data-mining that damage their individual personhood and the democratic system.  Millions upon millions of “digital subjects,” Harcourt explains, have come to embrace forms of exposure that commoditize their own privacy.  Sometimes people do this because they want more convenience when they navigate capitalist culture or government bureaucracies.  Or because they want better book recommendations from Amazon.  Other times, people wish to see and be seen online—increasingly feel they need to be seen online—in order to lead successful social and professional lives.

So complicit are we in the erosion of our collective privacy, Harcourt suggests, that any theory of the “surveillance state” or the “surveillance industrial complex” that fails to account for these decentralized dynamics of exhibition, spectacle, voyeurism, and play will misdiagnose our situation.  Harcourt aligns himself at times with some of the most provocative critics of intelligence agencies like the NSA and companies like Facebook.  Yet the emphasis he places on personal desire and participatory disclosure belies any Manichean notion of rogue institutions preying upon ignorant citizens.  His diagnosis of how we’ve lost our privacy is more complex, ethically and practically, in that it forces attention on the ways in which our current situation is jointly created by bottom-up processes of self-exposure as well as by top-down processes of supervision and control.

Thus, when Harcourt writes in the introduction that “[t]here is no conspiracy here, nothing untoward,” what might seem like a throwaway line is instead an important descriptive and normative position he is staking out about the nature of the surveillance problem.  Exposed calls on critics of digital surveillance to adopt a broader analytic lens and a more nuanced understanding of causation, power, and responsibility.  Harcourt in this way opens up fruitful lines of inquiry while also, I think, opening himself up to the charge of victim-blaming insofar as he minimizes the social and technological forces that limit people’s capacity to change their digital circumstances.

The place of desire in “the expository society,” Harcourt shows, requires a rethinking of our metaphors for surveillance, discipline, and the loss of privacy.  Exposed unfolds as a series of investigations into the images and tropes we conventionally rely on to crystallize the nature of the threat we face: Big Brother, the Panopticon, the Surveillance State, and so forth.  In each case, Harcourt provides an erudite and sympathetic treatment of the ways in which these metaphors speak to our predicament.  Yet in each case, he finds them ultimately wanting.  For instance, after the Snowden disclosures began to garner headlines, many turned to George Orwell’s novel 1984 to help make sense of the NSA’s activities.  Book sales skyrocketed.  Harcourt, however, finds the Big Brother metaphor to be misleading in critical respects.  As he reminds us, Big Brother sought to wear down the citizens of Oceania, neutralize their passions, fill them with hate.  “Today, by contrast, everything functions by means of ‘likes,’ ‘shares,’ ‘favorites,’ ‘friending,’ and ‘following.’  The drab blue uniform and grim gray walls in 1984 have been replaced by the iPhone 5C in all its radiant colors . . . .”  We are in a new condition, a new paradigm, and we need a new language to negotiate it.

Harcourt then considers a metaphor of his own devising: the “mirrored glass pavilion.”  This metaphor is meant to evoke a sleek, disorienting, commercialized space in which we render ourselves exposed to the gaze of others and also, at the same time, to ourselves.  But Harcourt isn’t quite content to rest with this metaphor either.  He introduces the mirrored glass pavilion, examines it, makes a case for it, and keeps moving—trying out metaphors like “steel mesh” and “data doubles” and (my favorite) “a large oligopolistic octopus that is enveloping the world,” all within the context of the master metaphor of an expository society.  Metaphors, it seems, are indispensable if imperfect tools for unraveling the paradoxes of digital life.

The result is a restless, searching quality to the analysis.  Exposed is constantly introducing new anecdotes, examples, paradigms, and perspectives, in the manner of a guided tour.  Harcourt is clearly deeply unsettled by the digital age we have entered.  Part of the appeal of the book is that he is willing to leave many of his assessments unsettled too, to synthesize a vast range of developments without simplifying or prophesying.

Another aspect of Exposed that enhances its effect is the prose style.  Now, I wouldn’t say that Harcourt’s Foucault-fueled writing has ever suffered from a lack of flair.  But in this work, Harcourt has gone further and become a formal innovator.  He has developed a prose style that uncannily mimics the experience of the expository society, the phenomenology of the digital subject.

Throughout the book, when describing the allure of new technologies that would rob us of our privacy and personhood, the prose shifts into a different register.  The reader is suddenly greeted with quick staccato passages, with acronyms and brand names thrown together in a dizzying succession of catalogs and subordinate clauses.  In these passages, Harcourt models for us the implicit bargain offered by the mirrored glass pavilion—inviting us to suspend critical judgment, to lose ourselves, as we get wrapped up in the sheer visceral excitement, the mad frenzy, of digital consumer culture.

Right from the book’s first sentence, we confront this mimetic style:

Every keystroke, each mouse click, every touch of the screen, card swipe, Google search, Amazon purchase, Instagram ‘like,’ tweet, scan—in short, everything we do in our new digital age can be recorded, stored, and monitored.  Every routine act on our iPads and tablets, on our laptops, notebooks, and Kindles, office PCs and smart-phones, every transaction with our debit card, gym pass, E-ZPass, bus pass, and loyalty cards can be archived, data-mined, and traced back to us.

Other sentences deploy a similar rhetorical strategy in a more positive key, describing how we now “‘like,’ we ‘share,’ we ‘favorite.’ We ‘follow.’ We ‘connect.’ We get ‘LinkedIn’”—how “[e]verything today is organized around friending, clicking, retweeting, and reposting.”

There is a visceral pleasure to be had from abandoning oneself to the hyper-stimulation, the sensory overload, of passages like these.  Which is precisely the point.  For that pleasure underwrites our own ubiquitous surveillance and the mortification of self.  That pleasure is our undoing.  More than anything else, in Harcourt’s telling, it is the constant gratifications afforded by technologies of surveillance that have “enslaved us, exposed us, and ensnared us in this digital shell as hard as steel.”

*  *  *

I hope these brief comments have conveyed some of what I found so stimulating in this remarkable book.  Always imaginative and often impressionistic, Exposed is hazy on a number of significant matters.  In the hope of facilitating conversation, I will close by noting a few.

First, what are the distributional dimensions of the privacy crisis that we face?  The implicit digital subject of Exposed seems to be a highly educated, affluent type—someone who would write a blog post, wear an Apple Watch, buy books on Amazon.  There may well be millions of people like this; I don’t mean to suggest any narcissism in the book’s critical gaze.  I wonder, though, how the privacy pitfalls chronicled in Exposed relate to more old-fashioned forms of observation and exploitation that continue to afflict marginalized populations and that Harcourt has trenchantly critiqued in other work.

Second, what about all the purported benefits of digital technology, Big Data, and the like?  Some commentators, as Harcourt notes in passing, have begun to argue that panoptic surveillance, at least under the right conditions, can facilitate not only certain kinds of efficiency and security but also values such as democratic engagement and human freedom that Harcourt is keen to promote.  I share Harcourt’s skepticism about these arguments, but if they are wrong then we need to know why they are wrong, and in particular whether they are irredeemably mistaken or whether they might instead point us toward useful regulatory reforms.

And lastly, what would dissent look like in this realm?  The final, forward-looking chapter of Exposed is strikingly short, only four pages long.  Harcourt exhorts the reader to fight back through “digital resistance” and “political disobedience.”  But remember, there is no conspiracy here, nothing untoward.  Rather, there is a massively distributed and partially emergent system of surveillance.  And this system generates enormous perceived rewards, not just for those at the top but for countless individuals throughout society.  It is something of a puzzle, then, what would motivate the sort of self-abnegating resistance that Harcourt calls for—resistance that must be directed, in the first instance, toward our own compulsive habits and consumptive appetites.  How would that sort of resistance develop, in the teeth of our own desires, and how could it surmount collective action barriers?

These are just a few of the urgent questions that Exposed helps bring into focus.

*  *  *

David Pozen is an associate professor at Columbia Law School.


Surveillance and Our Addiction to Exposure

Bernard Harcourt’s Exposed: Desire and Disobedience in the Digital Age (Harvard University Press 2015) is an indictment of our contemporary age of surveillance and exposure — what Harcourt calls “the expository society.” Harcourt passionately deconstructs modern technology-infused society and explains its dark implications with an almost poetic eloquence.

Harcourt begins by critiquing the metaphor of George Orwell’s 1984 to describe the ills of our world today.  In my own previous work, I critiqued this metaphor, arguing that Kafka’s The Trial was a more apt metaphor to capture the powerlessness and vulnerability that people experience as government and businesses construct and use “digital dossiers” about their lives.  Harcourt critiques Orwell in a different manner, arguing that Orwell’s dystopian vision is inapt because it is too drab and gray:

No, we do not live in a drab Orwellian world.  We live in a beautiful, colorful, stimulating, digital world that is online, plugged in, wired, and Wi-Fi enabled.  A rich, bright, vibrant world full of passion and jouissance–and by means of which we reveal ourselves and make ourselves virtually transparent to surveillance.  In the end, Orwell’s novel is indeed prescient in many ways, but jarringly off on this one key point.  (pp. 52-53)

[Image: Orwell’s Vision]

[Image: Life Today]

Harcourt notes that the “technologies that end up facilitating surveillance are the very technologies we crave.”  We desire them, but “we have become, slowly but surely, enslaved to them.” (p. 52)

Harcourt’s book reminds me of Neil Postman’s Amusing Ourselves to Death, originally published about 30 years ago — back in 1985.  Postman also critiqued Orwell’s metaphor and argued that Aldous Huxley’s Brave New World was a more apt metaphor to capture the problematic effects new media technologies were having on society.



Symposium on Exposed: Desire and Disobedience in the Digital Age

Frank Pasquale and I are delighted to introduce Professor Bernard Harcourt and the participants of our online symposium on his provocative new book Exposed: Desire and Disobedience in the Digital Age (Harvard University Press 2015).  Here is the description of the book from HUP’s webpage:

Social media compile data on users, retailers mine information on consumers, Internet giants create dossiers of who we know and what we do, and intelligence agencies collect all this plus billions of communications daily. Exploiting our boundless desire to access everything all the time, digital technology is breaking down whatever boundaries still exist between the state, the market, and the private realm. Exposed offers a powerful critique of our new virtual transparence, revealing just how unfree we are becoming and how little we seem to care.

Bernard Harcourt guides us through our new digital landscape, one that makes it so easy for others to monitor, profile, and shape our every desire. We are building what he calls the expository society—a platform for unprecedented levels of exhibition, watching, and influence that is reconfiguring our political relations and reshaping our notions of what it means to be an individual.

We are not scandalized by this. To the contrary: we crave exposure and knowingly surrender our privacy and anonymity in order to tap into social networks and consumer convenience—or we give in ambivalently, despite our reservations. But we have arrived at a moment of reckoning. If we do not wish to be trapped in a steel mesh of wireless digits, we have a responsibility to do whatever we can to resist. Disobedience to a regime that relies on massive data mining can take many forms, from aggressively encrypting personal information to leaking government secrets, but all will require conviction and courage.

We are thrilled to be joined by an amazing group of scholars to discuss this groundbreaking work, including Concurring Opinions co-founder Daniel Solove, Frank Pasquale (the co-organizer of this symposium), Lisa Austin, Ann Bartow, Mary Anne Franks, David Pozen, Olivier Sylvain, and, of course, Bernard Harcourt.  They will be posting throughout the week, so check in daily, and, as always, we encourage you to join the discussion.


The 5 Things Every Privacy Lawyer Needs to Know about the FTC: An Interview with Chris Hoofnagle

The Federal Trade Commission (FTC) has become the leading federal agency regulating privacy and data security. The scope of its power is vast – it covers the majority of commercial activity – and it has been enforcing these issues for decades. An FTC civil investigative demand (CID) will send shivers down the spine of even the largest of companies, as FTC settlements typically subject a company to a 20-year period of assessments.

To many, the FTC remains opaque and somewhat enigmatic. The reason, ironically, might not be because there is too little information about the FTC but because there is so much. The FTC has been around for 100 years!

In a landmark new book, Professor Chris Hoofnagle of Berkeley Law School synthesizes an enormous volume of information about the FTC and sheds tremendous light on the FTC’s privacy activities. His book is called Federal Trade Commission Privacy Law and Policy (Cambridge University Press, Feb. 2016).

This is a book that all privacy and cybersecurity lawyers should have on their shelves. The book is the most comprehensive scholarly discussion of the FTC’s activities in these areas, and it also delves deep into the FTC’s history and activities in other areas to provide much-needed context for understanding how it functions and reasons in privacy and security cases.



The Ultimate Unifying Approach to Complying with All Laws and Regulations

Professor Woodrow Hartzog and I have just published our new article, The Ultimate Unifying Approach to Complying with All Laws and Regulations, 19 Green Bag 2d 223 (2016). Our article took years of research and analysis, intensive writing, countless drafts, and endless laboring over every word. But we hope we achieved a monumental breakthrough in the law. Here’s the abstract:

There are countless laws and regulations that must be complied with, and the task of figuring out what to do to satisfy all of them seems nearly impossible. In this article, Professors Daniel Solove and Woodrow Hartzog develop a unified approach to doing so. This approach (patent pending) was developed over the course of several decades of extensive analysis of every relevant law and regulation.


The Emerging Law of Algorithms, Robots, and Predictive Analytics

In 1897, Holmes famously pronounced, “For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.” He could scarcely envision at the time the rise of cost-benefit analysis, and comparative devaluation of legal process and non-economic values, in the administrative state. Nor could he have foreseen the surveillance-driven tools of today’s predictive policing and homeland security apparatus. Nevertheless, I think Holmes’s empiricism and pragmatism still animate dominant legal responses to new technologies. Three conferences this Spring show the importance of “statistics and economics” in future tools of social order, and the fundamental public values that must constrain those tools.

Tyranny of the Algorithm? Predictive Analytics and Human Rights

As the conference call states:

Advances in information and communications technology and the “datafication” of broadening fields of human endeavor are generating unparalleled quantities and kinds of data about individual and group behavior, much of which is now being deployed to assess risk by governments worldwide. For example, law enforcement personnel are expected to prevent terrorism through data-informed policing aimed at curbing extremism before it expresses itself as violence. And police are deployed to predicted “hot spots” based on data related to past crime. Judges are turning to data-driven metrics to help them assess the risk that an individual will act violently and should be detained before trial. 


Where some analysts celebrate these developments as advancing “evidence-based” policing and objective decision-making, others decry the discriminatory impact of reliance on data sets tainted by disproportionate policing in communities of color. Still others insist on a bright line between policing for community safety in countries with democratic traditions and credible institutions, and policing for social control in authoritarian settings. The 2016 annual conference will . . . consider the human rights implications of the varied uses of predictive analytics by state actors. As a core part of this endeavor, the conference will examine—and seek to advance—the capacity of human rights practitioners to access, evaluate, and challenge risk assessments made through predictive analytics by governments worldwide. 

This focus on the violence targeted and legitimated by algorithmic tools is a welcome chance to discuss the future of law enforcement. As Dan McQuillan has argued, these “crime-fighting” tools are both logical extensions of extant technologies of ranking, sorting, and evaluating, and pose fundamental challenges to the rule of law:

According to Agamben, the signature of a state of exception is ‘force-of’; actions that have the force of law even when not of the law. Software is being used to predict which people on parole or probation are most likely to commit murder or other crimes. The algorithms developed by university researchers uses a dataset of 60,000 crimes and some dozens of variables about the individuals to help determine how much supervision the parolees should have. While having discriminatory potential, this algorithm is being invoked within a legal context. 

[T]he steep rise in the rate of drone attacks during the Obama administration has been ascribed to the algorithmic identification of ‘risky subjects’ via the disposition matrix. According to interviews with US national security officials the disposition matrix contains the names of terrorism suspects arrayed against other factors derived from data in ‘a single, continually evolving database in which biographies, locations, known associates and affiliated organizations are all catalogued.’ Seen through the lens of states of exception, we cannot assume that the impact of algorithmic force-of will be constrained because we do not live in a dictatorship. . . .What we need to be alert for, according to Agamben, is not a confusion of legislative and executive powers but separation of law and force of law. . . [P]redictive algorithms increasingly manifest as a force-of which cannot be restrained by invoking privacy or data protection. 

The ultimate logic of the algorithmic state of exception may be a homeland of “smart cities,” and force projection against an external world divided into “kill boxes.” 


We Robot 2016: Conference on Legal and Policy Issues Relating to Robotics

As the “kill box” example above suggests, software is not just an important tool for humans planning interventions. It is also animating features of our environment, ranging from drones to vending machines. Ryan Calo has argued that the increasing role of robotics in our lives merits “systematic changes to law, institutions, and the legal academy,” and has proposed a Federal Robotics Commission. (I hope it gets further than proposals for a Federal Search Commission have so far!)


Calo, Michael Froomkin, and other luminaries of robotics law will be at We Robot 2016 this April at the University of Miami. Panels like “Will #BlackLivesMatter to RoboCop?” and “How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons” raise fascinating, difficult issues for the future management of violence, power, and force.


Unlocking the Black Box: The Promise and Limits of Algorithmic Accountability in the Professions


Finally, I want to highlight a conference I am co-organizing with Valerie Belair-Gagnon and Caitlin Petre at the Yale ISP. As Jack Balkin observed in his response to Calo’s “Robotics and the Lessons of Cyberlaw,” technology concerns not only “the relationship of persons to things but rather the social relationships between people that are mediated by things.” Social relationships are also mediated by professionals: doctors and nurses in the medical field, journalists in the media, attorneys in disputes and transactions.


For many techno-utopians, the professions are quaint, an organizational form to be flattened by the rapid advance of software. But if there is anything the examples above (and my book) illustrate, it is the repeated, even disastrous failures of many computational systems to respect basic norms of due process, anti-discrimination, transparency, and accountability. These systems need professional guidance as much as professionals need these systems. We will explore how professionals–both within and outside the technology sector–can contribute to a community of inquiry devoted to accountability as a principle of research, investigation, and action. 


Some may claim that software-driven business and government practices are too complex to regulate. Others will question the value of the professions in responding to this technological change. I hope that the three conferences discussed above will help assuage those concerns, continuing the dialogue started at NYU in 2013 about “accountable algorithms,” and building new communities of inquiry. 


And one final reflection on Holmes: the repetition of “man” in his quote above should not go unremarked. Nicole Dewandre has observed the following regarding modern concerns about life online: 

To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, exhausting laboring without rewards and being lost through the holes of the meritocracy net, being constrained in a specular posture of other’s deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, they have experienced with men. So, welcome to world of women….

Dewandre’s voice complements that of US scholars (like Danielle Citron and Mary Ann Franks) on systematic disadvantages to women posed by opaque or distant technological infrastructure. I think one of the many valuable goals of the conferences above will be to promote truly inclusive technologies, permeable to input from all of society, not just top investors and managers.

X-Posted: Balkinization.


Holiday Cheer – Creations for Good

A sister notices that her sister's blood sugar monitor has a weak alarm that does not do a good job of waking someone at night, when the alert is most critical. The sister decides maybe she can do something about it, and she does. Who is this mystery girl? Our own Danielle Citron shared with me (and let me share more) that her daughter, JJ, has been designing a new monitor to help diabetics like her sister.

JJ applied to a program that helps high schoolers with STEM projects and was paired with folks at Northrop Grumman, where she spent a day a month developing her idea. Along the way, JJ had to figure out what alarm noise worked best to wake someone up, write code to link the monitor and bracelet devices, and then wire them together. As her school reports:

This year, Citron will continue to test and refine the design, creating the bracelet with the help of a 3D printer. When she’s finished, the bracelet will change color to let the user know immediately if their blood sugar is getting too high or too low. The detailed information from the monitor will also be linked to a smartphone app.

3D printing! Color coding! And JJ seems poised to go into computer science.

Although I am friends with Dani and have met JJ, the real point for me is that a teenager saw a problem and felt she had the room to try to fix it. Then she worked on it. Her success is lovely, but the fact that she had the chance is downright excellent and puts me in a great holiday mood. Of course, with Danielle as her mom, JJ may have to look forward to law professors wondering about patents, privacy, and data ownership, but those are what a good friend of mine once called “high quality problems.” Well done, JJ.


MLAT – Not a Muscle Group, but Potentially Powerful Nonetheless

MLAT. I encountered this somewhat obscure thing (the Mutual Legal Assistance Treaty process) when I was in practice and needed to serve someone in Europe. I recall that it was cumbersome and that I was happy we did not seem to need it often (in fact, only that one time). Today, however, as my colleagues Peter Swire and Justin Hemmings argue in their paper, Stakeholders in Reform of the Global System for Mutual Legal Assistance, the MLAT process is quite important.

In simplest terms, if a criminal investigation in, say, France needs an email that is stored in the U.S.A., the French authorities ask the U.S. ones for aid. If the U.S. agency that processes the request agrees there is a legal basis for it, that agency and other groups seek a court order. If the order is granted, it is presented to the company. Once records are obtained, there is further review to ensure compliance with U.S. law. Only then do the records go to France. As Swire and Hemmings note, the process averages ten months. For a civil case that is long; for criminal cases it is not workable. And as the authors put it, “the once-unusual need for an MLAT request becomes routine for records that are stored in the cloud and are encrypted in transit.”

Believe it or not, this issue touches on major Internet governance questions. The slowness of the process and these new demands are fueling calls to have the ITU govern the Internet and access-to-evidence issues (a model, according to the paper, favored by Russia and others). Simpler but important ideas, such as increased calls for data localization, also flow from the difficulties the paper identifies. As the paper details, the players–non-U.S. governments, the U.S. government, tech companies, and civil society groups–each have their own goals and perspectives on the issue.

So for those interested in Internet governance, privacy, law enforcement, and multi-stakeholder processes, the MLAT process and this paper offer a great high-level view of the many factors at play, both for this specific topic and for larger, related ones.


A Darn Good Read – Paul Schwartz on Data Processing

Almost twenty-five years ago, Paul Schwartz wrote Data Processing and Government Administration: The Failure of the American Legal Response to the Computer, 43 HASTINGS L.J. 1321 (1991) (pdf), and I must say it is worth a read today. Paul identified the problems with the government's, and especially the administrative state's, use of computation to carry out its duties. As he opened:

Computers are now an integral part of government administration. They put a tremendous amount of personal data in the hands of government officials, who base a wide range of decisions on this information. Yet the attention paid to the government’s use of data processing has not been equal to the potential dangers that this application presents. Personal information, when disclosed to family and friends, helps form the basis of trust; in the hands of strangers, this information can have a corrosive effect on individual autonomy. The human race’s rapid development of computer technology has not been matched by a requisite growth in the ability to control these new machines.

That passage may seem familiar, but recall when it was written and note the next point Paul made:

This Article’s goal is to formulate a constructive response to computer processing of personal data. The destruction of computers is no more an answer to informatization than the destruction of earlier machines would have been an answer to industrialization. Accordingly, this Article seeks to understand the results of the government’s processing of personal data and to develop appropriate legal principles to guide this application of computer technology.

That goal seems to be missing in some discussions, but I think it is a good one. To be clear, I don’t necessarily agree with some of Paul’s prescriptions. But the point of this post is not about that. I recommend the paper despite disagreeing with some of its ideas. I do so because it helped me with the history of the topic, explained the issues, presented a structure and jurisprudence for drilling into them, offered ways to address them, and pushed me to think more about my own views. It is a well-written, worthwhile read for both substance and style.

In short, thank you Professor Schwartz.