Category: Cyberlaw

The Emerging Law of Algorithms, Robots, and Predictive Analytics

In 1897, Holmes famously pronounced, “For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.” He could scarcely envision at the time the rise of cost-benefit analysis, and comparative devaluation of legal process and non-economic values, in the administrative state. Nor could he have foreseen the surveillance-driven tools of today’s predictive policing and homeland security apparatus. Nevertheless, I think Holmes’s empiricism and pragmatism still animate dominant legal responses to new technologies. Three conferences this Spring show the importance of “statistics and economics” in future tools of social order, and the fundamental public values that must constrain those tools.

Tyranny of the Algorithm? Predictive Analytics and Human Rights

As the conference call states:

Advances in information and communications technology and the “datafication” of broadening fields of human endeavor are generating unparalleled quantities and kinds of data about individual and group behavior, much of which is now being deployed to assess risk by governments worldwide. For example, law enforcement personnel are expected to prevent terrorism through data-informed policing aimed at curbing extremism before it expresses itself as violence. And police are deployed to predicted “hot spots” based on data related to past crime. Judges are turning to data-driven metrics to help them assess the risk that an individual will act violently and should be detained before trial. 

Where some analysts celebrate these developments as advancing “evidence-based” policing and objective decision-making, others decry the discriminatory impact of reliance on data sets tainted by disproportionate policing in communities of color. Still others insist on a bright line between policing for community safety in countries with democratic traditions and credible institutions, and policing for social control in authoritarian settings. The 2016 annual conference will . . . consider the human rights implications of the varied uses of predictive analytics by state actors. As a core part of this endeavor, the conference will examine—and seek to advance—the capacity of human rights practitioners to access, evaluate, and challenge risk assessments made through predictive analytics by governments worldwide. 

This focus on the violence targeted and legitimated by algorithmic tools is a welcome chance to discuss the future of law enforcement. As Dan McQuillan has argued, these “crime-fighting” tools are both logical extensions of extant technologies of ranking, sorting, and evaluating, and raise fundamental challenges to the rule of law: 

According to Agamben, the signature of a state of exception is ‘force-of’; actions that have the force of law even when not of the law. Software is being used to predict which people on parole or probation are most likely to commit murder or other crimes. The algorithms developed by university researchers use a dataset of 60,000 crimes and some dozens of variables about the individuals to help determine how much supervision the parolees should have. While having discriminatory potential, this algorithm is being invoked within a legal context. 

[T]he steep rise in the rate of drone attacks during the Obama administration has been ascribed to the algorithmic identification of ‘risky subjects’ via the disposition matrix. According to interviews with US national security officials the disposition matrix contains the names of terrorism suspects arrayed against other factors derived from data in ‘a single, continually evolving database in which biographies, locations, known associates and affiliated organizations are all catalogued.’ Seen through the lens of states of exception, we cannot assume that the impact of algorithmic force-of will be constrained because we do not live in a dictatorship. . . .What we need to be alert for, according to Agamben, is not a confusion of legislative and executive powers but separation of law and force of law. . . [P]redictive algorithms increasingly manifest as a force-of which cannot be restrained by invoking privacy or data protection. 

The ultimate logic of the algorithmic state of exception may be a homeland of “smart cities,” and force projection against an external world divided into “kill boxes.” 

We Robot 2016: Conference on Legal and Policy Issues Relating to Robotics

As the “kill box” example above suggests, software is not just an important tool for humans planning interventions. It is also animating features of our environment, ranging from drones to vending machines. Ryan Calo has argued that the increasing role of robotics in our lives merits “systematic changes to law, institutions, and the legal academy,” and he has proposed a Federal Robotics Commission. (I hope it gets further than proposals for a Federal Search Commission have so far!)

Calo, Michael Froomkin, and other luminaries of robotics law will be at We Robot 2016 this April at the University of Miami. Panels like “Will #BlackLivesMatter to RoboCop?” and “How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons” raise fascinating, difficult issues for the future management of violence, power, and force.

Unlocking the Black Box: The Promise and Limits of Algorithmic Accountability in the Professions

Finally, I want to highlight a conference I am co-organizing with Valerie Belair-Gagnon and Caitlin Petre at the Yale ISP. As Jack Balkin observed in his response to Calo’s “Robotics and the Lessons of Cyberlaw,” technology concerns not only “the relationship of persons to things but rather the social relationships between people that are mediated by things.” Social relationships are also mediated by professionals: doctors and nurses in the medical field, journalists in the media, attorneys in disputes and transactions.

For many techno-utopians, the professions are quaint, an organizational form to be flattened by the rapid advance of software. But if there is anything the examples above (and my book) illustrate, it is the repeated, even disastrous failures of many computational systems to respect basic norms of due process, anti-discrimination, transparency, and accountability. These systems need professional guidance as much as professionals need these systems. We will explore how professionals–both within and outside the technology sector–can contribute to a community of inquiry devoted to accountability as a principle of research, investigation, and action. 

Some may claim that software-driven business and government practices are too complex to regulate. Others will question the value of the professions in responding to this technological change. I hope that the three conferences discussed above will help assuage those concerns, continuing the dialogue started at NYU in 2013 about “accountable algorithms,” and building new communities of inquiry. 

And one final reflection on Holmes: the repetition of “man” in his quote above should not go unremarked. Nicole Dewandre has observed the following regarding modern concerns about life online: 

To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, exhausting laboring without rewards and being lost through the holes of the meritocracy net, being constrained in a specular posture of other’s deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, they have experienced with men. So, welcome to world of women….

Dewandre’s voice complements that of US scholars (like Danielle Citron and Mary Ann Franks) on systematic disadvantages to women posed by opaque or distant technological infrastructure. I think one of the many valuable goals of the conferences above will be to promote truly inclusive technologies, permeable to input from all of society, not just top investors and managers.

X-Posted: Balkinization.


Exploration and Exploitation – Ideas from Business and Computer Science

One of the key reasons I joined GA Tech and the Scheller College of Business is that I tend to draw on the technology and business literatures, and GA Tech is a great place for both. My current paper, Exploration and Exploitation: An Essay on (Machine) Learning, Algorithms, and Information Provision, draws on both of these literatures. A key work on the idea of exploration versus exploitation in the business literature is James G. March, Exploration and Exploitation in Organizational Learning, 2 ORG. SCI. 71 (1991), which as far as I can tell has not been picked up in the legal literature. A good follow-up to that paper is Anil K. Gupta, Ken Smith, and Christina Shalley, The Interplay Between Exploration and Exploitation, 49 ACAD. MGMT. J. 693 (2006). I had come upon the issue as a computer science question when working on a draft of my paper Constitutional Limits on Surveillance: Associational Freedom in the Age of Data Hoarding. That paper was part of my thoughts on artificial intelligence, algorithms, and the law. In the end, the material did not fit there, but it fits the new work. And as I have started to connect with folks in the machine learning group at GA Tech, I have been able to press on how this idea comes up in technology and computer science. The paper has benefitted from feedback from Danielle Citron, James Grimmelmann, and Peter Swire. I also offer many thanks to the Loyola University Chicago Law Journal. The paper started as a short piece (I think I wanted to stay at about five to eight thousand words), but as it evolved, the editors were most gracious in letting me use an asynchronous editing process to hit the final total word count of about 18,000.

I think the work speaks to general issues of information provision and also applies to current issues regarding the way news and online competition work. As one specific matter, I take on the idea of serendipity, which I think “is a seductive, overstated idea. Serendipity works because of relevancy.” I offer the idea of salient serendipity to clarify what type of serendipity matters. The abstract is below.

Legal and regulatory understandings of information provision miss the importance of the exploration-exploitation dynamic. This Essay argues that is a mistake and seeks to bring this perspective to the debate about information provision and competition. A general, ongoing problem for an individual or an organization is whether to stay with a familiar solution to a problem or try new options that may yield better results. Work in organizational learning describes this problem as the exploration-exploitation dilemma. Understanding and addressing that dilemma has become a key part of an algorithmic approach to computation, machine learning, as it is applied to information provision. In simplest terms, even if one achieves success with one path, failure to try new options means one will be stuck in a local equilibrium while others find paths that yield better results and displace one’s original success. This dynamic indicates that an information provider has to provide new options and information to users, because a provider must learn and adapt to users’ changing interests in both the type of information they desire and how they wish to interact with information.

Put differently, persistent concerns about the way in which news reaches users (the so-called “filter bubble” concern) and the way in which online shopping information is found (a competition concern) can be understood as market failures regarding information provision. The desire seems to be to ensure that new information reaches people, because that increases the potential for new ideas, new choices, and new action. Although these desired outcomes are good, current criticisms and related potential solutions misunderstand the nature of information users and especially information provision, and miss an important point. Both information users and providers sort and filter as a way to enable better learning, and learning is an ongoing process that requires continual changes to succeed. From an exploration-exploitation perspective, a user or an incumbent may remain isolated or offer the same information provision but neither will learn. In that case, whatever short-term success either enjoys is likely to face leapfrogging by those who experiment through exploration and exploitation.
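The local-equilibrium trap the abstract describes is usually formalized in machine learning as a multi-armed bandit problem. The sketch below is purely illustrative (it is not from the paper): an epsilon-greedy agent mostly exploits the best option it has seen, but with a small probability explores a random one. The option payoffs and parameter values are my own invented example.

```python
import random

def epsilon_greedy(true_rewards, epsilon=0.1, rounds=10000, seed=42):
    """Simulate the exploration-exploitation dilemma with an
    epsilon-greedy multi-armed bandit: with probability epsilon,
    explore a random option; otherwise exploit the option with the
    highest estimated payoff so far."""
    rng = random.Random(seed)
    n = len(true_rewards)
    counts = [0] * n          # pulls per option
    estimates = [0.0] * n     # running average reward per option
    total = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                 # explore
        else:
            arm = estimates.index(max(estimates))  # exploit
        # Bernoulli reward drawn from the option's true payoff rate
        reward = 1.0 if rng.random() < true_rewards[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return estimates, total

# Option 0 pays off 30% of the time, option 1 pays off 70%.
est, total = epsilon_greedy([0.3, 0.7], epsilon=0.1)
```

With epsilon set to zero the agent can fixate on whichever option happens to pay off first and never learn better; even a small exploration rate lets it discover that the second option dominates, which is the leapfrogging dynamic the abstract describes.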


MLAT – Not a Muscle Group Nonetheless Potentially Powerful

MLAT. I encountered this somewhat obscure thing (a Mutual Legal Assistance Treaty) when I was in practice and needed to serve someone in Europe. I recall that it was a cumbersome process and that I was happy we did not seem to have to use it often (in fact, only the one time). Today, however, as my colleagues Peter Swire and Justin Hemmings argue in their paper, Stakeholders in Reform of the Global System for Mutual Legal Assistance, the MLAT process is quite important.

In simplest terms, if a criminal investigation in, say, France needs an email that is stored in the U.S.A., the French authorities ask their U.S. counterparts for aid. If the U.S. agency that processes the request agrees there is a legal basis for it, it and other groups seek a court order. If that is granted, the order is presented to the company. Once records are obtained, there is further review to ensure compliance with U.S. law. Then the records go to France. As Swire and Hemmings note, the process averages 10 months. For a civil case that is long; for criminal cases it is not workable. And as the authors put it, “the once-unusual need for an MLAT request becomes routine for records that are stored in the cloud and are encrypted in transit.”

Believe it or not, this issue touches on major Internet governance issues. The slowness and the new needs are fueling calls for having the ITU govern the Internet and access to evidence issues (a model according to the paper favored by Russia and others). Simpler but important ideas such as increased calls for data localization also flow from the difficulties the paper identifies. As the paper details, the players–non-U.S. governments, the U.S. government, tech companies, and civil society groups–each have goals and perspectives on the issue.

So for those interested in Internet governance, privacy, law enforcement, and multi-stakeholder processes, the MLAT process and this paper on it offer a great high-level view of the many factors at play in those issues for both a specific topic and larger, related ones as well.


Philip K. Dick – Most Important SciFi Author of the 20th Century?

Philip K. Dick may be the most important sci-fi author of the 20th Century, akin to Verne and Wells in vision and in contemporary relevance well after they wrote. Do Androids Dream of Electric Sheep? (Blade Runner), We Can Remember It For You Wholesale (Total Recall, twice), Minority Report (Minority Report, film and TV), Paycheck (Paycheck), A Scanner Darkly (A Scanner Darkly), Adjustment Team (The Adjustment Bureau), and now The Man in the High Castle as an Amazon TV show make up just a partial list of Philip K. Dick’s work that has been adapted. Although Amazon does not usually release its streaming numbers, The Man in the High Castle has become its “most-streamed original show, overtaking shows like the detective-centric Bosch and Jill Soloway’s feted dramedy Transparent.” The popularity is not the point. As a fan of Ubik and even Valis (though that one is a lot of work to read), neither of which has been adapted to the screen, I am saying that Dick’s novels and short stories did what great sci-fi does. They use technology and maybe some fantasy to comment on where society is headed and how things might evolve. I think it was Dan Solove who once said to me that Dick’s work fits his era, and that others in, I think Dan said, the New School were working on the same ideas (apologies, Dan, if I am mistaken about what you said). Regardless of who or what school treads the same area as Dick, for me something about his work catches attention and highlights the way we live more than others.

Take Do Androids Dream of Electric Sheep?: the movie is a good adaptation in that it hits themes rather than trying to stay true to the precise way the novel works. The novel has great material on machines to dial up a mood. People use them to stimulate anger, happiness, etc., as the situation requires. Did that presage mood drugs and more? Sort of. Did it hit on how we choose to live, and on ideas of what is an authentic life and emotion? Yes. Should we take the messages about the world as reflecting reality today? No.

Although law and literature can, and maybe should, use literature to help understand an idea, saying that the world is now just like Minority Report or some other work is a reach. Using a film or novel to say something is a concern or to illustrate ideas of Orwellian, Kafkan, or other futures and that we wish to ask whether that is real can help. But the key is to rally the facts that show that those fictions are now a reality or that facts are in place that open the door to dystopia. Speaking of dystopia, I wonder how often people use fiction to say that the world or a technology is leading us to a better place. In my experience legal scholars tend to dismiss upbeat outlooks as naive or “just so” stories. I am not sure that Dick is dystopian. But in general if folks have examples where literature or film are examples of a good outcome from technology, please share.

Nonetheless, I offer Philip K. Dick in all his messy glory as my choice for Most Important SciFi Author of the 20th Century.


Not Found, Forbidden, or Censored? New Error Code 451 May Help Figure It Out

When UK ISPs blocked access to the Pirate Bay following a court order, the standard 403 error code for “Forbidden” appeared, but a new standard will let users know that a site is inaccessible for legal reasons. According to the Verge, Tim Bray proposed the idea more than three years ago. The number may ring a bell: it is a nod to Bradbury’s Fahrenheit 451. There are some “process bits” to go through before full approval, but developers can start to implement it now. As the Verge explains, the code is voluntary. Nonetheless:

If implemented widely, Bray’s new code should help prevent the confusion around blocked sites, but it’s only optional and requires web developers to adopt it. “It is imaginable that certain legal authorities may wish to avoid transparency, and not only forbid access to certain resources, but also disclosure that the restriction exists,” explains Bray.

It might be interesting to track how often the code is used and the reactions to it.

Here is the text of how the code is supposed to work:

This status code indicates that the server is denying access to the
resource as a consequence of a legal demand.

The server in question might not be an origin server. This type of
legal demand typically most directly affects the operations of ISPs
and search engines.

Responses using this status code SHOULD include an explanation, in
the response body, of the details of the legal demand: the party
making it, the applicable legislation or regulation, and what classes
of person and resource it applies to. For example:

HTTP/1.1 451 Unavailable For Legal Reasons
Link: <https://spqr.example.org/legislatione>; rel="blocked-by"
Content-Type: text/html

<html>
<head><title>Unavailable For Legal Reasons</title></head>
<body>
<h1>Unavailable For Legal Reasons</h1>
<p>This request may not be serviced in the Roman Province
of Judea due to the Lex Julia Majestatis, which disallows
access to resources hosted on servers deemed to be
operated by the People's Front of Judea.</p>
</body>
</html>
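A tool tracking how often the code is used would mostly just need to branch on the response status. A minimal sketch (the function name and category labels are my own, not part of the proposed standard):

```python
def classify_status(status, link_header=None):
    """Map an HTTP status code to the kind of denial it signals.
    451 means 'unavailable for legal reasons'; the proposal suggests
    a Link header with rel="blocked-by" identifying who imposed the
    block, which a monitor could record alongside the status."""
    if status == 451:
        return ("legal block", link_header)
    if status == 403:
        return ("forbidden", None)
    if status == 404:
        return ("not found", None)
    return ("other", None)

# A 451 with a blocked-by link is distinguishable from a plain 403:
kind, source = classify_status(
    451, '<https://example.org/legislatione>; rel="blocked-by"')
```

Of course, this only helps where the blocking authority permits transparency; as Bray notes, some may forbid disclosure of the restriction itself.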


China, the Internet, and Sovereignty

China’s World Internet Conference is, according to its organizers, about:

“An Interconnected World Shared and Governed by All—Building a Cyberspace Community of Shared Destiny”. This year’s Conference will further facilitate strategic-level discussions on global Internet governance, cyber security, the Internet industry as the engine of economic growth and social development, technological innovation and philosophy of the Internet. It is expected that 1200 leading figures from governments, international organizations, enterprises, science & technology communities, and civil societies all around the world will participate the Conference.

As the Economist points out, “The grand title is misleading: the gathering will not celebrate the joys of a borderless internet but promote “internet sovereignty”, a web made up of sovereign fiefs, gagged by official censors. Political leaders attending are from such bastions of freedom as Russia, Pakistan, Kazakhstan, Kyrgyzstan and Tajikistan.”

One of the great things about being at GA Tech is the community of scholars from a wide range of backgrounds. This year my colleagues in Public Policy hired Milton Mueller, a leader in telecommunications and Internet policy. I have known his work for some time, but it has been great getting to hang out and talk with Milton. Not surprisingly, Milton has a take on the idea of sovereignty and the Internet. I can’t share it, as it is in the works. But as a teaser, keep your eye out for it.

As a general matter, it seems to me that sovereignty will be a keyword in coming Internet governance debates across all sectors. Whether the term works from a political science perspective or others should be interesting. Thinking of jurisdiction, privacy, surveillance, telecommunication, cyberwar, and intellectual property, I can see sovereignty being asserted, perverted, and converted to serve a range of interests. Revisiting the core international relations theories to be clear about what sovereignty is and should be seems a good project for a law scholar or student as these areas evolve.

Law’s Nostradamus

The ABA Journal “Legal Rebels” page has promoted Richard Susskind’s work (predicting the future automation of much of what lawyers do) as “required reading.” It is a disruptive take on the legal profession. But disruption has been having a tough time as a theory lately. So I was unsurprised to find this review, by a former General Counsel of DuPont Canada Inc., of Susskind’s The End of Lawyers?:

Susskind perceives a lot of routine in the practice of law . . . which he predicts will gradually become the domain of non-professional or quasi-professional workers. In this respect his prediction is about two or three decades too late. No substantial law firm, full service or boutique, can survive without a staff of skilled paralegal specialists and the trend in this direction has been ongoing since IT was little more than a typewriter and a Gestetner duplicating machine. . . .

Law is not practiced in a vacuum. It is not merely a profession devoted to preparing standard forms or completing blanks in precedents. And though he pays lip service to the phenomenon, there is little appreciation of the huge volume of indecipherable legislation and regulation that is promulgated every day of every week of the year. His proposal to deal with this through regular PDA alerts is absurd. . . . In light of this, if anything in Susskind’s thesis can be given short shrift it is his prognostication that demand for “bespoke” or customized services will be in secular decline. Given modern trends in legislative and regulatory drafting, in particular the use of “creative ambiguity” as it’s been called, demand for custom services will only increase.

Nevertheless, I predict Susskind’s work on The Future of the Professions will get a similarly warm reception from “Legal Rebels.” The narrative of lawyers’ obsolescence is just too tempting for those who want to pay attorneys less, reduce their professional independence from the demands of capital, or simply replace legal regulation of certain activities with automated controls.

However, even quite futuristic academics are not on board with the Susskindite singularitarianism of robo-lawyering via software Solons. The more interesting conversations about automation and the professions will focus on bringing accountability to oft-opaque algorithmic processes. Let’s hope that the professions can maintain some autonomy from capital to continue those conversations–rather than guaranteeing their obsolescence as ever more obeisant cogs in profit-maximizing machines.



How CalECPA Improves on its Federal Namesake

Last week, Governor Brown signed the landmark California Electronic Communications Privacy Act[1] (CalECPA) into law, updating California privacy law for modern communications. Compared to ECPA, CalECPA requires warrants for more investigations, restricts those warrants more tightly, provides more notice to targets, and furnishes as remedies both court-ordered data deletion and statutory suppression. Moreover, CalECPA’s approach is comprehensive and uniform, eschewing the often irrational distinctions that have made ECPA one of the most confusing and under-protective privacy statutes in the Internet era.

Extended Scope, Enhanced Protections, and Simplified Provisions

CalECPA regulates investigative methods that ECPA did not anticipate. Under CalECPA, government entities in California must obtain a warrant based on probable cause before they may access electronic communications contents and metadata from service providers or from devices. ECPA makes no mention of device-stored data, even though law enforcement agents increasingly use StingRays to obtain information directly from cell phones. CalECPA subjects such techniques to its warrant requirement. While the Supreme Court’s recent decision in Riley v. California required that agents either obtain a warrant or rely on an exception to the warrant requirement to search a cell phone incident to arrest, CalECPA requires a warrant for physical access to any device, not just a cell phone, which “stores, generates, or transmits electronic information in electronic form.” CalECPA clearly defines the exceptions to the warrant requirement by specifying what counts as an emergency, who can give consent to the search of a device, and related questions.

ECPA’s 1986-drafted text only arguably covers the compelled disclosure of location data stored by a service provider, and does not clearly require a warrant for such investigations. CalECPA explicitly includes location data in the “electronic communication information” that is subject to the warrant requirement when a government entity accesses it from either a device or a service provider (broadly defined). ECPA makes no mention of location data gathered in real-time or prospectively, but CalECPA requires a warrant both for those investigations and for stored data investigations. Whenever a government entity compels “the production of or access to” location information, including GPS data, from a service provider or from a device, CalECPA requires a warrant.



Making Contracts on Kickstarter

In 2013, Chapman Ducote, a professional race car driver, and his wife, Kristin Ducote, had an idea for a new book about the world of professional motor sports, to be called Naked Paddock. Rather than the traditional route through book publishing—hiring an agent, seeking a publisher to pay an advance, and having the house handle the rest—they opted for a new approach of crowd-funding and self-publishing.

Crowd-funding refers to project financing generated from among the general public, usually facilitated by an internet-based service designed to match money to ideas. Creators post project proposals on the site and invite backers to buy the product in advance or stake funds in exchange for bonus mementos or voice in production. Proposals state the total amount sought to be raised and the deadline. If the goal is not reached on time, no funds change hands. But otherwise a deal is made: the facilitating site has enabled backers and creators to form a bargain.

Facilitators, such as Kickstarter, present on their web sites “terms of use” that all creators and backers must agree to in order to access the site. Such terms of use include standards designed to promote the commercial efficacy of the site. Kickstarter is where Chapman and Kristin Ducote hatched their book idea, posting their project and thus manifesting their assent to the terms of use.

The couple launched heavy promotional efforts, which included an appearance on a reality TV show. But within a week, Kickstarter took the project down because it violated the site’s rules. The Ducotes sued for breach of contract, saying Kickstarter had no basis to remove the project. But they soon withdrew the suit, acknowledging that they had made a contract with Kickstarter to abide by its rules yet failed to do so.

Kickstarter therefore had the right to remove the project. While neither side disclosed publicly what rules were broken, they revealed that Kickstarter acted in response to complaints from other users. Among the likely violations were rules restricting what creators can do to promote projects—creators may not spam, link-bomb forums, or promote on other Kickstarter project pages.

Terms of use flourish on the internet, where web site builders use them to define business models and a sense of community norms. While the means of assent vary from traditional means—clicking at prompts rather than signing a form—they have similar purposes, efficacy and limits.  While the traditional rules of contract formation fit the creator-facilitator relationship well, they require adaptation, at least conceptually, when considering other pairs of relationships in crowd-funding.

Consider the relationship between backers and facilitators. On the surface, it may seem that the facilitator has agreed to provide a service to the backer, such as assuring product delivery and quality. But the sites disclaim such a traditional contractual relation, instead establishing the facilitator as a pure middleman without duties. The Kickstarter terms of use state, for example: “The creator is solely responsible for fulfilling the promises made in their project.” They also declare that “Kickstarter doesn’t evaluate a project’s claims, resolve disputes, or offer refunds—backers decide what’s worth funding and what’s not.” The facilitator disclaims any duty to backers concerning product delivery, quality, warranties, or refunds.


5 Great Novels About Privacy and Security

I am a lover of literature (I teach a class in law and literature), and I also love privacy and security, so I thought I’d list some of my favorite novels about privacy and security.

I’m also trying to compile a more comprehensive list of literary works about privacy and security, and I welcome your suggestions.

Without further ado, my list:

Franz Kafka, The Trial

Kafka’s The Trial begins with a man being arrested but not told why. In typical Kafka fashion, the novel begins badly for the protagonist . . . and then it gets worse! A clandestine court system has compiled a dossier about him and officials are making decisions about him, but he is left in the dark. This is akin to how Big Data can operate today. The Trial captures the sense of helplessness, frustration, and powerlessness when large institutions with inscrutable purposes use personal data and deny people the right to participate. I wrote more extensively about how Kafka is an apt metaphor for privacy in our times in a book called The Digital Person about 10 years ago.


