Author: Frank Pasquale

The Emerging Law of Algorithms, Robots, and Predictive Analytics

In 1897, Holmes famously pronounced, “For the rational study of the law the black-letter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.” He could scarcely have envisioned at the time the rise of cost-benefit analysis, and the comparative devaluation of legal process and non-economic values, in the administrative state. Nor could he have foreseen the surveillance-driven tools of today’s predictive policing and homeland security apparatus. Nevertheless, I think Holmes’s empiricism and pragmatism still animate dominant legal responses to new technologies. Three conferences this Spring show the importance of “statistics and economics” in future tools of social order, and the fundamental public values that must constrain those tools.

Tyranny of the Algorithm? Predictive Analytics and Human Rights

As the conference call states:

Advances in information and communications technology and the “datafication” of broadening fields of human endeavor are generating unparalleled quantities and kinds of data about individual and group behavior, much of which is now being deployed to assess risk by governments worldwide. For example, law enforcement personnel are expected to prevent terrorism through data-informed policing aimed at curbing extremism before it expresses itself as violence. And police are deployed to predicted “hot spots” based on data related to past crime. Judges are turning to data-driven metrics to help them assess the risk that an individual will act violently and should be detained before trial. 


Where some analysts celebrate these developments as advancing “evidence-based” policing and objective decision-making, others decry the discriminatory impact of reliance on data sets tainted by disproportionate policing in communities of color. Still others insist on a bright line between policing for community safety in countries with democratic traditions and credible institutions, and policing for social control in authoritarian settings. The 2016 annual conference will . . . consider the human rights implications of the varied uses of predictive analytics by state actors. As a core part of this endeavor, the conference will examine—and seek to advance—the capacity of human rights practitioners to access, evaluate, and challenge risk assessments made through predictive analytics by governments worldwide. 

This focus on the violence targeted and legitimated by algorithmic tools is a welcome chance to discuss the future of law enforcement. As Dan McQuillan has argued, these “crime-fighting” tools are logical extensions of extant technologies of ranking, sorting, and evaluating, and they raise fundamental challenges to the rule of law:

According to Agamben, the signature of a state of exception is ‘force-of’; actions that have the force of law even when not of the law. Software is being used to predict which people on parole or probation are most likely to commit murder or other crimes. The algorithm developed by university researchers uses a dataset of 60,000 crimes and some dozens of variables about the individuals to help determine how much supervision the parolees should have. While having discriminatory potential, this algorithm is being invoked within a legal context.

[T]he steep rise in the rate of drone attacks during the Obama administration has been ascribed to the algorithmic identification of ‘risky subjects’ via the disposition matrix. According to interviews with US national security officials the disposition matrix contains the names of terrorism suspects arrayed against other factors derived from data in ‘a single, continually evolving database in which biographies, locations, known associates and affiliated organizations are all catalogued.’ Seen through the lens of states of exception, we cannot assume that the impact of algorithmic force-of will be constrained because we do not live in a dictatorship. . . . What we need to be alert for, according to Agamben, is not a confusion of legislative and executive powers but separation of law and force of law. . . [P]redictive algorithms increasingly manifest as a force-of which cannot be restrained by invoking privacy or data protection.
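To make concrete the kind of tool McQuillan describes, here is a minimal sketch, in Python, of how a supervision-level risk model of this sort might be assembled. It is an illustration only: the feature names, the synthetic data, the random-forest choice, and the three supervision tiers are my assumptions, not the researchers’ actual model or dataset.

```python
# Hypothetical sketch of a parole supervision-level risk model.
# All features, labels, and data here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 60_000  # roughly the scale of the crime dataset mentioned above

# Invented features: age at first offense, prior convictions,
# and a prior-violent-offense flag.
X = np.column_stack([
    rng.integers(12, 60, n),
    rng.poisson(2.0, n),
    rng.integers(0, 2, n),
])
# Invented labels: 0 = low, 1 = medium, 2 = high supervision.
y = rng.integers(0, 3, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The legally salient output: a predicted supervision tier per parolee,
# with no individualized explanation of why the score came out as it did.
print(model.predict(X_test[:5]))
```

The last comment is the point: the score arrives with the force of an official determination, while the reasoning behind it stays inside the model.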

The ultimate logic of the algorithmic state of exception may be a homeland of “smart cities,” and force projection against an external world divided into “kill boxes.” 


We Robot 2016: Conference on Legal and Policy Issues Relating to Robotics

As the “kill box” example above suggests, software is not just an important tool for humans planning interventions. It is also animating features of our environment, ranging from drones to vending machines. Ryan Calo has argued that the increasing role of robotics in our lives merits “systematic changes to law, institutions, and the legal academy,” and has proposed a Federal Robotics Commission. (I hope it gets further than proposals for a Federal Search Commission have so far!)


Calo, Michael Froomkin, and other luminaries of robotics law will be at We Robot 2016 this April at the University of Miami. Panels like “Will #BlackLivesMatter to RoboCop?” and “How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons” raise fascinating, difficult issues for the future management of violence, power, and force.


Unlocking the Black Box: The Promise and Limits of Algorithmic Accountability in the Professions


Finally, I want to highlight a conference I am co-organizing with Valerie Belair-Gagnon and Caitlin Petre at the Yale ISP. As Jack Balkin observed in his response to Calo’s “Robotics and the Lessons of Cyberlaw,” technology concerns not “the relationship of persons to things but rather the social relationships between people that are mediated by things.” Social relationships are also mediated by professionals: doctors and nurses in the medical field, journalists in the media, attorneys in disputes and transactions.


For many techno-utopians, the professions are quaint, an organizational form to be flattened by the rapid advance of software. But if there is anything the examples above (and my book) illustrate, it is the repeated, even disastrous failures of many computational systems to respect basic norms of due process, anti-discrimination, transparency, and accountability. These systems need professional guidance as much as professionals need these systems. We will explore how professionals–both within and outside the technology sector–can contribute to a community of inquiry devoted to accountability as a principle of research, investigation, and action. 


Some may claim that software-driven business and government practices are too complex to regulate. Others will question the value of the professions in responding to this technological change. I hope that the three conferences discussed above will help assuage those concerns, continuing the dialogue started at NYU in 2013 about “accountable algorithms,” and building new communities of inquiry. 


And one final reflection on Holmes: the repetition of “man” in his quote above should not go unremarked. Nicole Dewandre has observed the following regarding modern concerns about life online: 

To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, exhausting laboring without rewards and being lost through the holes of the meritocracy net, being constrained in a specular posture of other’s deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, they have experienced with men. So, welcome to the world of women….

Dewandre’s voice complements that of US scholars (like Danielle Citron and Mary Ann Franks) on systematic disadvantages to women posed by opaque or distant technological infrastructure. I think one of the many valuable goals of the conferences above will be to promote truly inclusive technologies, permeable to input from all of society, not just top investors and managers.

X-Posted: Balkinization.

(R)evolution in Law & Economics

It is a real pleasure to read Guido Calabresi’s The Future of Law and Economics almost 20 years after taking his torts class. Calabresi always struck me as a warm and inspiring presence at Yale. He’s attained eminence as a scholar, teacher, and public servant. There is much to learn from and celebrate in his work. I’ll start with his latest book’s major contributions, and then go on to raise some questions about just what future(s) might be in store for law & economics.

Bentham’s Shadow

Jeremy Bentham casts a long shadow over the legal academy. As Fred Schauer helpfully recounts, Bentham was extraordinarily suspicious of the complexity of law, and wanted it “to be understood by ordinary people without the intervention of lawyers and the interpretation of judges.” Bentham’s utilitarian legacy also stalks the profession of law. Following the lead of cost-benefit analysts, administrators may decide that legal regularity should shrink in importance as a value in comparison with quantified estimates of, say, consumer welfare. As another former Yale dean observed, the reduction of difficult conflicts to purely economic (or philosophical) questions threatens to undermine the autonomy of law as a field.

Calabresi advances this discussion with his crystalline distinction between “Economic Analysis of Law” and “Law & Economics.” I will quote at length here, since this distinction is central to the book:

What I call the Economic Analysis of Law uses economic theory to analyze the legal world. . . . In its most aggressive and reformist mode, having looked at the world from the standpoint of economic theory, if it finds that the legal world does not fit, it proclaims that world to be “irrational.” And this, of course, is exactly what Bentham did when he tested laws and behavior on the basis of utilitarianism and, in his most aggressive moments, dismissed what did not fit as nonsense. . . .

What I call Law and Economics instead begins with an agnostic acceptance of the world as it is, as the lawyer describes it to be. It then looks to whether economic theory can explain that world, that reality. And if it cannot, rather than automatically dismissing that world as irrational, it asks two questions.

The first is, are the legal scholars who are describing the legal reality looking at the world as it really is? Or is there something in their way of seeing the world that has led them to mischaracterize that reality? . . . . If . . . even a more comprehensive view of legal reality discloses rules and practices that economic theory cannot explain, Law and Economics asks a second question. Can economic theory be amplified, can it be made broader or more subtle . . . so that it can explain why the real world of law is as it is?

For Calabresi, behavioral economics is a great example of the kind of “bilateral relationship between economic theory and the world as it is” that he calls Law and Economics, because it has expanded economic theory to account for humans’ predictable irrationalities, and for some higher principles of altruism and fair play.

Calabresi’s chapter on non-profit institutions is a particularly strong vindication of the “Law and Economics” (as opposed to “Economic Analysis of Law”) perspective.  For market enthusiasts, the lack of profit motive at universities and hospitals is the key to understanding all that ails them. But from a more cosmopolitan perspective, one could just as easily conclude that the excess marketization of US systems of health and education (relative to, say, a European benchmark) is the better explanation.

Nevertheless, we can still expect plenty of government and corporate agitation to promote the profit motive in these sectors, however bad its results may be. Ugo Mattei (in a 2006 essay on Calabresi’s work) helps explain why.


The State of Legal Scholarship: A View from Health Law

Based on Ron Collins’ post below, I read the interview with Judge Edwards. The judge states:

In order for legal scholarship to be relevant outside the legal academy, law professors should balance abstract scholarship with scholarly works that are of interest and use to lawyers, legislators, judges, and regulators who serve society through legal arguments, decision-making, regulatory initiatives, and enforcement actions.

Fortunately, every legal academic that Nicolas Terry and I have hosted in our 41 episodes of The Week in Health Law has done so. Perhaps that’s a biased sample. But it’s undoubtedly better than the sampling practiced by Justice Breyer, another critic of legal scholarship.

For now, I will take some comfort that, about a year into our podcasting, we have heard from general counsels, attorneys, regulators, and journalists who are big fans of the show–which primarily focuses on the work of legal academics. And I will remain dubious of generalized critiques of legal scholarship, which fail to analyze the merits of particular fields.

Complicating the Narrative of Legal Automation

Richard Susskind has been predicting “the end of lawyers” for years, and has doubled down in a recent book coauthored with his son (The Future of the Professions). That book is so sweeping in its claims–that all professions are on a path to near-complete automation–that it should actually come as a bit of a relief for lawyers. If everyone’s doomed to redundancy, law can’t be a particularly bad career choice. To paraphrase Monty Python: nobody expects the singularity.

On the other hand, experts on the professions are offering some cautions about the Susskinds’ approach. Howard Gardner led off an excellent issue of Daedalus on the professions about ten years ago. He offers this verdict on the Susskinds’ perfunctory response to objections to their position:

In a section of their book called “Objections,” they list the principal reasons why others might take issue with their analyses, predictions, and celebratory mood. This list of counter-arguments to their critique includes the trustworthiness of professionals; the moral limits of unregulated markets; the value of craft; the importance of empathy and personal interactions; and the pleasure and pride derived from carrying out what they term ‘good work.’ With respect to each objection, the Susskinds give a crisp response.

I was disappointed with this list of objections, each followed by refutation. For example, countering the claim that one needs extensive training to become an expert, the Susskinds call for the reinstatement of apprentices, who can learn ‘on the job.’ But from multiple studies in cognitive science, we know that it takes approximately a decade to become an expert in any domain—and presumably that decade includes plenty of field expertise. Apprentices cannot magically replace well-trained experts. In another section, countering the claim that we need to work with human beings whom we can trust, they cite the example of the teaching done online via Khan Academy. But Khan Academy is the brainchild of a very gifted educator who in fact has earned the trust of many students and indeed of many teachers; it remains to be seen whether online learning à la Khan suffices to help individuals—either professionals or their clients—make ‘complex technical and ethical decisions under conditions of uncertainty.’ The Susskinds recognize that the makers and purveyors of apps may have selfish or even illegal goals in mind. But as they state, “We recognize that there are many online resources that promote and enable a wide range of offenses. We do not underestimate their impact or threat, but they stand beyond the reach of this book” (p. 233).

Whether or not one goes along with specific objections and refutations, another feature of the Susskinds’ presentation should give one pause. The future that they limn seems almost entirely an exercise in rational deduction and accordingly devoid of historical and cultural considerations.

Experts with a bit more historical perspective differ on the real likelihood of pervasive legal automation. Some put the risk to lawyers at under 4%. Even the highly cited study by Carl Frey and Michael Osborne (The Future of Employment: How Susceptible Are Jobs to Computerisation?) placed attorneys in the “low risk” category when it comes to replacement by software and robots. They suggest paralegals are in much more danger.

But empirical research by economist James Bessen has complicated even that assumption: “Since the late 1990s, electronic document discovery software for legal proceedings has grown into a billion dollar business doing work done by paralegals, but the number of paralegals has grown robustly.” Like MIT’s David Autor, Bessen calls automation a job creator, not a job destroyer. “The idea that automation kills jobs isn’t true historically,” Steve Lohr reports, and it remains dubious today. The real question is whether we reinforce policies designed to promote software and robotization that complement current workers’ skills, or slip into a regime of deskilling and substitution.
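For readers who have not encountered the category, a toy sketch conveys what first-pass e-discovery software automates. The terms and documents below are invented for illustration, and real platforms use far richer techniques (predictive coding, machine-learned relevance classifiers) than bare keyword matching:

```python
# Toy illustration of first-pass document review in e-discovery:
# flag documents "responsive" to a production request by keyword.
# The search terms and documents here are hypothetical.
RESPONSIVE_TERMS = {"merger", "disclosure", "side letter"}

def is_responsive(document: str) -> bool:
    text = document.lower()
    return any(term in text for term in RESPONSIVE_TERMS)

docs = [
    "Draft side letter re: indemnification obligations",
    "Lunch order for Thursday",
]
print([is_responsive(d) for d in docs])  # [True, False]
```

Even this crude filter hints at why software displaced some manual review tasks while, on Bessen’s evidence, the overall volume of discovery work kept paralegal employment growing.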

A Review of The Black Box Society

I just learned of this very insightful and generous review of my book, by Raizel Liebler:

The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press 2015) is an important book, not only for those interested in privacy and data, but also anyone with larger concerns about the growing tension between transparency and trade secrets, and the deceptiveness of pulling information from the ostensibly objective “Big Data.” . . .

One of the most important aspects of The Black Box Society builds on the work of Siva Vaidhyanathan and others to write about how relying on the algorithms of search impacts people’s lives. Through our inability to see how Google, Facebook, Twitter, and other companies display information, it makes it seem like these displays are in some way “objective.” But they are not. Between various stories about blocking pictures of breastfeeding moms, blocking links to competing sites, obscuring sources, and not creating tools to prevent harassment, companies are making choices. As Pasquale puts it: “at what point does a platform have to start taking responsibility for what its algorithms do, and how their results are used? These new technologies affect not only how we are understood, but also how we understand. Shouldn’t we know when they’re working for us, against us, or for unseen interests with undisclosed motives?”

I was honored to be mentioned on the TLF blog–a highly recommended venue! Here’s a list of some other reviews in English (I have yet to compile the ones in other languages, but was very happy to see the French edition get some attention earlier this Fall). And here’s an interesting take on one of those oft-black-boxed systems: Google Maps.

Highly Recommended: Chamayou’s A Theory of the Drone

Earlier this year, I read a compelling analysis of drone warfare, Grégoire Chamayou’s A Theory of the Drone. It is an unusual and challenging book, of interest to both policymakers and philosophers, engineers and attorneys. As I begin my review of it:

At what point do would-be reformers of the law and ethics of war slide into complicity with a morally untenable status quo? When is the moralization of force a prelude for the rationalization of slaughter? Grégoire Chamayou’s penetrating recent book, A Theory of the Drone, raises these uncomfortable questions for lawyers and engineers both inside and out of the academy. Chamayou, a French philosopher, dissects legal academics’ arguments for targeted killing by unmanned vehicles. He also criticizes university research programs purporting to engineer ethics for the autonomous weapons systems they view as the inevitable future of war. Writing from a tradition of critical theory largely alien to both engineering and law, he raises concerns that each discipline should address before it continues to develop procedures for the automation of war.

As with the automation of law enforcement, advocacy, and finance, the automation of war has many unintended consequences. Chamayou helps us discern its proper limits.

Image Credit: 1924 idea for police automaton.

From Territorial to Functional Governance

Susan Crawford is one of the leading global thinkers on digital infrastructure. Her brilliant book Captive Audience spearheaded a national debate on net neutrality. She helped convince the Federal Communications Commission to better regulate big internet service providers. And her latest intervention–on Uber–is a must-read. Crawford worries that Uber will rapidly monopolize urban ride services. It’s repeatedly tried to avoid regulation and taxes. And while it may offer a good deal to drivers and riders now, there is no guarantee it will in the future.

A noted critic of the sharing economy, Tom Slee, has backed up Crawford’s concerns, from an international perspective. “For a smallish city in Canada, what happens to accountability when faced with a massive American company with little interest in Canadian employment law or Canadian traditions?”, Slee asks, raising a very deep point about the nature of governance. What happens to a city when its government’s responsibilities are slowly disaggregated, functionally? Some citizens may want to see the effective governance of paid rides via Uber, of spare rooms via AirBnB, and so on. A full privatization of city governance awaits, from water to sidewalks.

If you’re concerned about that, you may find my recent piece on the sharing economy of interest. We’ll also be discussing this and similar issues at Northeastern’s conference “Tackling the Urban Core Puzzle.” Transitions from territorial to functional governance will be critical topics of legal scholarship in the coming decade.

The Larger Debate on Federal Credit Programs

Earlier today I criticized a New York Times proposal regarding law school loans. Whatever you think about the proper cost of legal education, the NYT is off-base, because it ignores the role of private finance in our economy.

Education finance policy is difficult because it raises fundamental issues in political economy and public finance generally. It also only makes sense with some historical context.

Back in the 1970s and ’80s, an anti-tax coalition operated on the presumption that state support for education had to drop. Financialization plugged the resulting hole in funding: responsibility for paying for school shifted from (relatively well-off) taxpayers to students. By the 1990s, private lenders realized that they could make tremendous profits from such loans–particularly if they could privatize profits, while sticking the government with losses. That arrangement became so scandalous by 2010 that it was curtailed as part of PPACA. The federal government directly offers many loans now.

But the private lenders did not simply give up. Current efforts to “reform” federal student loans are part of their much larger effort to shrink federal credit programs. The basic idea is simple: to force the US government to account for its credit programs as if it could and should charge interest rates (and impose terms) prevailing among private lenders.
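A stylized calculation shows why the choice of discount rate does all the work in this dispute. The loan terms and rates below are hypothetical round numbers, not actual program data; the point is only that the identical loan books as a gain when discounted at the government’s low borrowing cost and as a loss when discounted at a private lender’s higher rate:

```python
# Stylized comparison of the two accounting conventions.
# All figures are hypothetical round numbers for illustration.

def npv_of_loan(principal, loan_rate, years, discount_rate):
    """Net present value to the lender of an interest-only loan with
    principal repaid at maturity, discounted at discount_rate."""
    payments = [principal * loan_rate] * years
    payments[-1] += principal  # balloon repayment of principal
    pv = sum(p / (1 + discount_rate) ** (t + 1)
             for t, p in enumerate(payments))
    return pv - principal

loan = dict(principal=10_000, loan_rate=0.05, years=10)

# Discounting at the government's own borrowing cost (say 2%):
print(f"At Treasury rate: {npv_of_loan(**loan, discount_rate=0.02):+,.0f}")
# "Fair value": discounting at a private lender's rate (say 7%):
print(f"At market rate:   {npv_of_loan(**loan, discount_rate=0.07):+,.0f}")
```

Under the first convention the loan shows a gain of roughly $2,700; under the second, a loss of roughly $1,400. If the second convention is mandated, federal lending looks like a subsidy that private lenders could supply instead, which is precisely the political stake of the accounting fight.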

It’s a strange move, especially since, as Matt Yglesias states, “costs reported in the budget are generally lower than the costs to the most efficient private financial institutions because the government’s costs of funds are in fact lower.” David Kamin has also questioned this accounting tactic. But if it succeeds, there is little rationale for any federal credit program–it will simply duplicate extant private lenders’ work. That redundancy will lead to further privatization of federal credit programs, raising costs to borrowers and diverting more money to the finance sector. It’s not a great outcome for students–but it is a logical outgrowth of reflexive hostility to the type of state intervention that actually could improve students’ finances while maintaining quality.

Law’s Nostradamus

The ABA Journal “Legal Rebels” page has promoted Richard Susskind’s work (predicting the future automation of much of what lawyers do) as “required reading.” It is a disruptive take on the legal profession. But disruption has been having a tough time as a theory lately. So I was unsurprised to find this review, by a former General Counsel of DuPont Canada Inc., of Susskind’s The End of Lawyers?:

Susskind perceives a lot of routine in the practice of law . . . which he predicts will gradually become the domain of non-professional or quasi-professional workers. In this respect his prediction is about two or three decades too late. No substantial law firm, full service or boutique, can survive without a staff of skilled paralegal specialists and the trend in this direction has been ongoing since IT was little more than a typewriter and a Gestetner duplicating machine. . . .

Law is not practiced in a vacuum. It is not merely a profession devoted to preparing standard forms or completing blanks in precedents. And though he pays lip service to the phenomenon, there is little appreciation of the huge volume of indecipherable legislation and regulation that is promulgated every day of every week of the year. His proposal to deal with this through regular PDA alerts is absurd. . . . In light of this, if anything in Susskind’s thesis can be given short shrift it is his prognostication that demand for “bespoke” or customized services will be in secular decline. Given modern trends in legislative and regulatory drafting, in particular the use of “creative ambiguity” as it’s been called, demand for custom services will only increase.

Nevertheless, I predict Susskind’s work on The Future of the Professions will get a similarly warm reception from “Legal Rebels.” The narrative of lawyers’ obsolescence is just too tempting for those who want to pay attorneys less, reduce their professional independence from the demands of capital, or simply replace legal regulation of certain activities with automated controls.

However, even quite futuristic academics are not on board with the Susskindite singularitarianism of robo-lawyering via software Solons. The more interesting conversations about automation and the professions will focus on bringing accountability to oft-opaque algorithmic processes. Let’s hope that the professions can maintain some autonomy from capital to continue those conversations–rather than guaranteeing their obsolescence as ever more obeisant cogs in profit-maximizing machines.


Greene & Kesselheim vs. Kardashian

Jeremy Greene and Aaron Kesselheim have a fascinating piece on the new challenges facing the FDA as selfie-driven marketing reaches Instagram. After promoting an anti-nausea drug (for morning sickness, not in anticipation of celebrity-phobic viewers), Kim Kardashian had to follow up with the following “corrective advertisement”:

#CorrectiveAd I guess you saw the attention my last #morningsickness post received. The FDA has told Duchesnay, Inc., that my last post about Diclegis (doxylamine succinate and pyridoxine HCl) was incomplete because it did not include any risk information or important limitations of use for Diclegis.

As Greene and Kesselheim observe:

The rise of social media has raised a parade of new questions for the agency: How is it supposed to monitor person-to-person pharmaceutical recommendations? Can something be considered an advertisement if it’s only 140 characters long? Who is responsible for the accuracy of tweets about a drug? But this isn’t the first time evolving technology has forced the FDA to rethink its role. Before Instagram, television advertising was once new; before television, radio. Since the agency’s founding, its ability to regulate drugs has been consistently challenged by new forms of communication.

For more on the controversy, check out The Week in Health Law, where Nicolas Terry and I discuss the case with Kesselheim. And don’t worry, it’s not all about Kardashians–we also cover a new study of ACOs, proposed budget cuts for AHRQ, worry over unintended consequences of readmission penalties, and EHR gag clauses (and developer codes of conduct).