Author: Frank Pasquale

A Social Theory of Surveillance

Bernard Harcourt’s Exposed is a deeply insightful analysis of data collection, analysis, and use by powerful commercial and governmental actors. It offers a social theory of both surveillance and self-exposure. Harcourt transcends methodological individualism by explaining how troubling social outcomes can be generated by personal choices that each seem rational at the time they are made. He also helps us understand why ever more of daily life is organized around the demands of what Shoshana Zuboff calls “surveillance capitalism”: intimate monitoring of our daily lives to maximize our productivity as consumers and workers.

The Chief Data Scientist of a Silicon Valley firm told Zuboff, “The goal of everything we do is to change people’s actual behavior at scale. When people use our app, we can capture their behaviors, identify good and bad behaviors, and develop ways to reward the good and punish the bad. We can test how actionable our cues are for them and how profitable for us.” Harcourt reflects deeply on what it means for firms and governments to “change behavior at scale,” identifying “the phenomenological steps of the structuration of the self in the age of Google and NSA data-mining.”
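
What the data scientist describes is, at bottom, an experimentation loop: show a cue, measure the response, and shift ever more traffic toward whatever most profitably modifies behavior. Here is a minimal sketch, assuming a simple epsilon-greedy bandit; the cue names and response rates are invented for illustration.

```python
import random

# Hypothetical sketch of "testing how actionable our cues are": an
# epsilon-greedy bandit learns which nudge best drives a target behavior.
CUES = ["discount_alert", "streak_reminder", "social_comparison"]

# True response rates, unknown to the system (here, simulated users).
TRUE_RATES = {"discount_alert": 0.05, "streak_reminder": 0.12,
              "social_comparison": 0.09}

counts = {cue: 0 for cue in CUES}
successes = {cue: 0 for cue in CUES}

def choose_cue(epsilon=0.1):
    """Mostly exploit the best-performing cue; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(CUES)
    return max(CUES, key=lambda c: successes[c] / counts[c] if counts[c] else 0.0)

for _ in range(10_000):  # each iteration: one user shown one cue
    cue = choose_cue()
    responded = random.random() < TRUE_RATES[cue]
    counts[cue] += 1
    successes[cue] += responded

for cue in CUES:
    print(cue, counts[cue], round(successes[cue] / max(counts[cue], 1), 3))
```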

Harcourt also draws a striking, convincing analogy between Erving Goffman’s concept of the “total institution” and the ever-denser networks of sensors and training (in the form of both punishments and lures) that powerful institutions use to keep behavior within ranges of normality. He observes that some groups are far more likely to be exposed to pain or inconvenience from the surveillance apparatus, while others enjoy its blandishments in relative peace. But almost no one can escape its effects altogether.

In the space of a post here, I cannot explicate Harcourt’s approach in detail. But I hope to give our readers a sense of its power to illuminate our predicament by focusing on one recent, concrete dispute: Apple’s refusal to develop a tool to assist the FBI’s effort to reveal the data in an encrypted iPhone. The history Harcourt’s book recounts helps us understand why the case has attracted so much attention—and how it may be raising false hopes.

The UK’s “Democratization” of the Professions: Case Studies

Read a few techno-utopian pieces on the future of US legal practice, and you’ll see, again and again, “lessons from Britain.” The UK “legal industry” is lauded for its bold innovation and deregulatory verve. Unfortunately, it appears that in its enthusiasm to make a neoliberal omelette, the green and pleasant land is breaking a few eggs:

Gap-year students are being recruited by the Home Office to make potentially life or death decisions on asylum claims, the Observer has learned. The students receive only five weeks’ training. . . . Immigration lawyers and asylum seekers have condemned the practice, pointing out that, after completing a degree, immigration lawyers undergo a further four years’ training. . . .

A health professional from west Africa who was granted refugee status last year said his claim was initially refused and it took four years of appeals to win his refugee status. “I attempted suicide after my asylum claim was refused because I knew my life would be in danger if I was forcibly returned home,” he said. “I became friendly with a family where the son had taken a gap year during his university degree to work as a Home Office decision-maker. I could not believe that he was making these life and death decisions about complex cases like mine. I am not sure that students are capable of the complex level of critical analysis required to make asylum decisions.”

Meanwhile, the British Health Secretary is telling parents that, hey, Dr. Google may be just as good as a regular physician. Expect to see the new “democratization of the professions” accelerate fastest among those without the resources to resist.

Is Eviction-as-a-Service the Hottest New #LegalTech Trend?

Some legal technology startups are struggling nowadays, as venture capitalists pull back from a saturated market. The complexity of the regulatory landscape is hard to capture in a Silicon Valley slide deck. Still, there is hope for legal tech’s “idealists.” A growing firm may bring eviction technology to struggling neighborhoods around the country:

Click Notices . . . integrates its product with property management software, letting landlords set rules for when to begin evictions. For instance, a landlord could decide to file against every tenant that owes $25 or more on the 10th of the month. Once the process starts, the Click Notices software, which charges landlords flat fees depending on local court costs, sends employees or subcontractors to represent the landlord in court (attorneys aren’t compulsory in many eviction cases).
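
To see how little judgment remains in the loop, consider a minimal sketch of such a rule engine. The class and field names below are my own invention, not Click Notices’ actual API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Tenant:
    name: str
    balance_owed: float  # dollars past due

@dataclass
class EvictionRule:
    # The article's example: file against anyone owing $25+ on the 10th.
    min_balance: float = 25.0
    filing_day: int = 10

    def tenants_to_file_against(self, tenants, today: date):
        """Return every tenant meeting the threshold on the filing day."""
        if today.day != self.filing_day:
            return []
        return [t for t in tenants if t.balance_owed >= self.min_balance]

rule = EvictionRule()
tenants = [Tenant("A", 336.0), Tenant("B", 12.50), Tenant("C", 25.0)]
print([t.name for t in rule.tenants_to_file_against(tenants, date(2016, 3, 10))])
# -> ['A', 'C']: filings trigger with no human review of the individual cases
```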

I can think of few better examples of Richard Susskind’s vision for the future of law. As one Baltimore tenant observes, the automation of legal proceedings can lead to near-insurmountable advantages for landlords:

[Click Notices helped a firm that] tried to evict Dinickyo Brown over $336 in unpaid rent. Brown, who pays $650 a month for a two-bedroom apartment in Northeast Baltimore, fought back, arguing the charges arose after she complained of mold. The landlord dropped the case, only to file a fresh eviction action—this time for $290. “They drag you back and forth to rent court, and even if you win, it goes onto your record,” says Brown, who explains that mold triggers her epilepsy. “If you try to rent other properties or buy a home, they look at your records and say: You’ve been to rent court.”

And here’s what’s truly exciting for #legaltech innovators: the digital reputation economy can synergistically interact with the new eviction-as-a-service approach. Tenant blacklists can ensure that merely trying to fight an eviction leads to devastating consequences in the future. Imagine the investment returns for a firm that owned both the leading eviction-as-a-service platform in a city and the leading tenant blacklist. Capture about 20 of the US’s top MSAs, and we may well be talking unicorn territory.

As we learned during the housing crisis, the best place to implement legal process outsourcing is against people who have a really hard time fighting back. That may trouble old-school lawyers who worry about ever-faster legal processes generating errors, deprivations of due process, or worse. But the legal tech community tends to think about these matters in financialized terms, not fusty old concepts like social justice or autonomy. I sense they will celebrate eviction-as-a-service as one more extension of technologized ordering of human affairs into a profession whose “conservatism” they assume to be self-indicting.

Still, even for them, some caution is in order. Brett Scott’s skepticism about fintech comes to mind:

[I]f you ever watch people around automated self-service systems, they often adopt a stance of submissive rule-abiding. The system might appear to be ‘helpful’, and yet it clearly only allows behaviour that agrees to its own terms. If you fail to interact exactly correctly, you will not make it through the digital gatekeeper, which – unlike the human gatekeeper – has no ability or desire to empathise or make a plan. It just says ‘ERROR’. . . . This is the world of algorithmic regulation, the subtle unaccountable violence of systems that feel no solidarity with the people who have to use it, the foundation for the perfect scaled bureaucracy.
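
The gatekeeper Scott describes is trivially easy to build and maddeningly hard to argue with. A purely hypothetical sketch:

```python
# A caricature of the automated gatekeeper: input that does not exactly match
# the expected form is rejected with an unhelpful ERROR, and there is no path
# to explain, escalate, or "make a plan."
EXPECTED_FIELDS = {"account_id", "amount", "date"}

def automated_gatekeeper(form: dict) -> str:
    """Accept only perfectly conforming input; otherwise, just say ERROR."""
    if set(form) != EXPECTED_FIELDS:
        return "ERROR"  # no empathy, no follow-up question
    if not str(form["account_id"]).isdigit():
        return "ERROR"
    return "OK"

# A tenant who annotates the form to flag a dispute gets no hearing at all:
print(automated_gatekeeper(
    {"account_id": "1047 (see my mold complaint)", "amount": 290, "date": "2016-03-10"}))  # ERROR
print(automated_gatekeeper(
    {"account_id": "1047", "amount": 290, "date": "2016-03-10"}))  # OK
```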

John Danaher has even warned of the possible rise of “algocracy.” And Judy Wajcman argues that “Futuristic visions based on how technology can speed up the world tend to be inherently conservative.” As new legal technology threatens to further entrench power imbalances between creditors and debtors, landlords and tenants, the types of feudalism Bruce Schneier sees in the security landscape threaten to overtake far more than the digital world.

(And one final note. Perhaps even old-school lawyers can join Paul Gowder’s praise for a “parking ticket fighting” app, as a way of democratizing advocacy. It reminds me a bit of TurboTax, which democratized access to tax preparation. But we should also be very aware of exactly how TurboTax used its profits when proposals to truly simplify the experience of tax prep emerged.)

Hat tip to Sarah T. Roberts for alerting me to the eviction story.

Private Lenders’ Troubling Influence on Federal Loan Policy

Hundreds of billions of dollars are at stake in the upcoming reauthorization of the Higher Education Act (HEA). Like the confirmation of a new Supreme Court justice, it may be delayed into 2017 (or beyond) by partisan wrangling. But as that wrangling happens, Washington insiders are drafting “radical” proposals to change the federal government’s role.

Faculty at all institutions need to examine these proposals closely. The law and public finance issues raised by them are complex. But if we fail to understand them, and to weigh in against the worst proposals, we could witness developments that will fundamentally change (and perhaps end) the university as we know it. Moreover, even if universities find ways to shield themselves from change, some proposals will leave students vulnerable to worse financing terms and lower-quality programs.

In a series of posts over the next few weeks, I’ll be explaining the stakes of the HEA reauthorization. For now, I want to start with a thought experiment on how education finance may change, based on recent activities of large banks and digital lending services I’ve studied. What would be ideal, in terms of higher education finance, for them?

Financiers consider government a pesky and unfair competitor. While federal loans offer options to delay payments (like deferment and forbearance), and discharge upon a borrower’s death or permanent disability (with certain limitations), private loans may not offer any of these options. Private lenders often aim to charge subprime borrowers more than prime borrowers; federal loans offer generally uniform interest rates (though grad students pay more than undergrads, and Perkins loans are cheaper than average). Alternatively, private lenders may charge borrowers from wealthy families (or attending wealthy institutions) less. Rates might even fluctuate on the basis of grades: just as some students now lose their scholarships when they fail to maintain a certain GPA, they may face a credit hit for poor performance.*
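
In code, the sliding scale is simple to express. The sketch below is hypothetical: every weight and number is invented to show the mechanism, not drawn from any actual lender’s model.

```python
# Stylized "risk-based pricing": the rate a private lender might quote as a
# function of borrower traits (and proxies for wealth). All weights invented.
def quoted_rate(base_rate, family_wealth_percentile, school_default_rate, gpa):
    """Higher default risk (or proxies for it) -> higher quoted rate."""
    rate = base_rate
    rate += 0.04 * school_default_rate               # pricier at high-default schools
    rate -= 0.02 * (family_wealth_percentile / 100)  # cheaper for wealthy families
    if gpa < 3.0:
        rate += 0.01                                 # the "credit hit" for poor grades
    return round(rate, 4)

# Wealthy student at a low-default school vs. poorer student at a
# high-default one: the spread compounds existing inequality.
print(quoted_rate(0.06, 95, 0.02, 3.8))  # 0.0418
print(quoted_rate(0.06, 10, 0.20, 2.8))  # 0.076
```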

Now in conventional finance theory, that’s a good thing: the “pricier” loan sends a signal warning students that their course may not be as good an idea as they first thought. But the commitment to get a degree is not really analogous to an ordinary consumer decision. A simple Hayekian model of “market as information processor” works well in a supermarket: if bananas suddenly cost far more than apples, that signal will probably move a significant number of customers to substitute the latter for the former. But education does not work like that. College degrees (and in many areas further education) are necessary to get certain jobs. The situation is not as dire as health care, the best example of how the critical distinction between “needs” and “wants” upends traditional economic analysis. But it is still a much, much “stickier” situation than the average consumer purchase. Nor can most students simply “go to a cheaper school,” without losing social networks, enduring high transition costs, and sacrificing program quality.

For financiers, a sliding scale of interest rates makes perfect sense as “calculative risk management.” But we all know how easily it can reinforce inequality. A rational lender would charge much lower interest rates than average to a student from a wealthy family, attending Harvard. The lender would charge far more to a poorer student going to Bunker Hill Community College. “Risk-based pricing” is a recipe for segmenting markets, extracting more from the bottom and less from the top. The same logic promoted the tranching of mortgage-backed securities, restructuring housing finance to meet investor demands. Some investors wanted income streams from the safest borrowers only–they bought the AAA tranches. Others took more risk, in exchange for more reward. Few considered how the lives of the borrowers could be wrecked if the “bets” went sour.

Now you might ask: what’s the difference between those predictable disasters and those arising out of defaults on federal loans? Federal loans, too, are very difficult to discharge in bankruptcy. But they offer income-based repayment (IBR) options. For loans made after 2007, borrowers in distress can opt into a payment plan keyed to their income level, which eventually forgives the debt. Private loans don’t offer IBR.

But IBR is not that great a deal, you may counter. And in many cases, you’re right, it isn’t! Interest can accumulate for 20 or 25 years. Then, when the debt is finally forgiven, the forgiven amount may be treated as taxable income. There is no IBR for the tax payment. Moreover, the impact of growing debt (even if it is eventually to be forgiven) on future opportunities is, at present, largely unknown. Many consumer scores may factor it in, without even giving the scored individual notice that they are doing so.
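
A back-of-the-envelope simulation shows the problem. All parameters below are illustrative assumptions (a hypothetical borrower, 10% of discretionary income, 25-year forgiveness), not actual program terms:

```python
# Simplified IBR arithmetic: capped payments, accruing interest, forgiveness
# at year 25, and the forgiven amount then taxed as ordinary income.
balance = 100_000.0      # starting loan balance
rate = 0.06              # annual interest
income = 40_000.0
poverty_line = 11_880.0  # 2016 federal poverty guideline, single person
payment_share = 0.10     # 10% of discretionary income
years_to_forgiveness = 25
marginal_tax_rate = 0.25

for year in range(years_to_forgiveness):
    annual_payment = payment_share * max(income - 1.5 * poverty_line, 0)
    balance = max(balance * (1 + rate) - annual_payment, 0.0)

forgiven = balance
tax_bill = marginal_tax_rate * forgiven  # due at once; no "IBR for the tax payment"
print(round(forgiven), round(tax_bill))  # roughly 307,000 and 77,000 here
```

On these stylized assumptions, the balance roughly triples before forgiveness, and the borrower then faces a tax bill in the tens of thousands of dollars, due all at once.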

So why keep up the federal role in higher ed finance? Because one key reason federal loans are so bad now is that private lenders have had such a powerful role in lobbying, staffing the key loan-disbursing agency (the Department of Education), and supporting (indirectly or directly) think tank or analyst “research” on higher ed finance. When government is your competitor, you use the regulatory process to make the government’s “product” as bad as possible, to make your own look better by comparison. And the more of the market private lenders take, the more money they’ll have to advocate for higher rates and worse terms on federal loans–or for getting rid of them altogether.

—————————-

*The CFPB has warned lenders that using institutional cohort default rates to price loans could violate fair lending laws, and that may have scared some big players away from doing too much risk-based pricing. However, with the rise of so many fringe and alternative lenders, and the opacity of algorithmic determinations of creditworthiness, the risk of disparate impact is still present.

The Emerging Law of Algorithms, Robots, and Predictive Analytics

In 1897, Holmes famously pronounced, “For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.” He could scarcely envision at the time the rise of cost-benefit analysis, and the comparative devaluation of legal process and non-economic values, in the administrative state. Nor could he have foreseen the surveillance-driven tools of today’s predictive policing and homeland security apparatus. Nevertheless, I think Holmes’s empiricism and pragmatism still animate dominant legal responses to new technologies. Three conferences this spring show the importance of “statistics and economics” in future tools of social order, and the fundamental public values that must constrain those tools.

Tyranny of the Algorithm? Predictive Analytics and Human Rights

As the conference call states:

Advances in information and communications technology and the “datafication” of broadening fields of human endeavor are generating unparalleled quantities and kinds of data about individual and group behavior, much of which is now being deployed to assess risk by governments worldwide. For example, law enforcement personnel are expected to prevent terrorism through data-informed policing aimed at curbing extremism before it expresses itself as violence. And police are deployed to predicted “hot spots” based on data related to past crime. Judges are turning to data-driven metrics to help them assess the risk that an individual will act violently and should be detained before trial. 


Where some analysts celebrate these developments as advancing “evidence-based” policing and objective decision-making, others decry the discriminatory impact of reliance on data sets tainted by disproportionate policing in communities of color. Still others insist on a bright line between policing for community safety in countries with democratic traditions and credible institutions, and policing for social control in authoritarian settings. The 2016 annual conference will . . . consider the human rights implications of the varied uses of predictive analytics by state actors. As a core part of this endeavor, the conference will examine—and seek to advance—the capacity of human rights practitioners to access, evaluate, and challenge risk assessments made through predictive analytics by governments worldwide. 

This focus on the violence targeted and legitimated by algorithmic tools is a welcome chance to discuss the future of law enforcement. As Dan McQuillan has argued, these “crime-fighting” tools both extend extant technologies of ranking, sorting, and evaluating, and raise fundamental challenges to the rule of law:

According to Agamben, the signature of a state of exception is ‘force-of’; actions that have the force of law even when not of the law. Software is being used to predict which people on parole or probation are most likely to commit murder or other crimes. The algorithm developed by university researchers uses a dataset of 60,000 crimes and some dozens of variables about the individuals to help determine how much supervision the parolees should have. While having discriminatory potential, this algorithm is being invoked within a legal context.

[T]he steep rise in the rate of drone attacks during the Obama administration has been ascribed to the algorithmic identification of ‘risky subjects’ via the disposition matrix. According to interviews with US national security officials the disposition matrix contains the names of terrorism suspects arrayed against other factors derived from data in ‘a single, continually evolving database in which biographies, locations, known associates and affiliated organizations are all catalogued.’ Seen through the lens of states of exception, we cannot assume that the impact of algorithmic force-of will be constrained because we do not live in a dictatorship. . . . What we need to be alert for, according to Agamben, is not a confusion of legislative and executive powers but separation of law and force of law. . . . [P]redictive algorithms increasingly manifest as a force-of which cannot be restrained by invoking privacy or data protection.

The ultimate logic of the algorithmic state of exception may be a homeland of “smart cities,” and force projection against an external world divided into “kill boxes.” 
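
The feedback dynamics critics of predictive policing worry about are easy to model. In the toy simulation below (all numbers invented), two districts have identical underlying crime rates, but patrols are allocated by past recorded crime, and recording depends on patrol presence:

```python
import random

random.seed(1)
true_crime_rate = {"district_A": 0.10, "district_B": 0.10}  # identical by design
recorded = {"district_A": 12, "district_B": 8}  # A starts more heavily policed

for week in range(52):
    total = sum(recorded.values())
    for d in recorded:
        patrols = 100 * recorded[d] / total  # allocate patrols by past records
        # Recorded incidents depend on both actual crime and patrol presence.
        recorded[d] += sum(random.random() < true_crime_rate[d]
                           for _ in range(int(patrols)))

print(recorded)  # district_A's head start hardens into a wide recorded gap
```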


We Robot 2016: Conference on Legal and Policy Issues Relating to Robotics

As the “kill box” example above suggests, software is not just an important tool for humans planning interventions. It is also animating features of our environment, ranging from drones to vending machines. Ryan Calo has argued that the increasing role of robotics in our lives merits “systematic changes to law, institutions, and the legal academy,” and has proposed a Federal Robotics Commission. (I hope it gets further than proposals for a Federal Search Commission have so far!)


Calo, Michael Froomkin, and other luminaries of robotics law will be at We Robot 2016 this April at the University of Miami. Panels like “Will #BlackLivesMatter to RoboCop?” and “How to Engage the Public on the Ethics and Governance of Lethal Autonomous Weapons” raise fascinating, difficult issues for the future management of violence, power, and force.


Unlocking the Black Box: The Promise and Limits of Algorithmic Accountability in the Professions


Finally, I want to highlight a conference I am co-organizing with Valerie Belair-Gagnon and Caitlin Petre at the Yale ISP. As Jack Balkin observed in his response to Calo’s “Robotics and the Lessons of Cyberlaw,” technology concerns not only “the relationship of persons to things but rather the social relationships between people that are mediated by things.” Social relationships are also mediated by professionals: doctors and nurses in the medical field, journalists in the media, attorneys in disputes and transactions.


For many techno-utopians, the professions are quaint, an organizational form to be flattened by the rapid advance of software. But if there is anything the examples above (and my book) illustrate, it is the repeated, even disastrous failures of many computational systems to respect basic norms of due process, anti-discrimination, transparency, and accountability. These systems need professional guidance as much as professionals need these systems. We will explore how professionals–both within and outside the technology sector–can contribute to a community of inquiry devoted to accountability as a principle of research, investigation, and action. 


Some may claim that software-driven business and government practices are too complex to regulate. Others will question the value of the professions in responding to this technological change. I hope that the three conferences discussed above will help assuage those concerns, continuing the dialogue started at NYU in 2013 about “accountable algorithms,” and building new communities of inquiry. 


And one final reflection on Holmes: the repetition of “man” in his quote above should not go unremarked. Nicole Dewandre has observed the following regarding modern concerns about life online: 

To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, exhausting laboring without rewards and being lost through the holes of the meritocracy net, being constrained in a specular posture of other’s deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, they have experienced with men. So, welcome to world of women….

Dewandre’s voice complements that of US scholars (like Danielle Citron and Mary Ann Franks) on systematic disadvantages to women posed by opaque or distant technological infrastructure. I think one of the many valuable goals of the conferences above will be to promote truly inclusive technologies, permeable to input from all of society, not just top investors and managers.

X-Posted: Balkinization.

(R)evolution in Law & Economics

It is a real pleasure to read Guido Calabresi’s The Future of Law and Economics almost 20 years after taking his torts class. Calabresi always struck me as a warm and inspiring presence at Yale. He’s attained eminence as a scholar, teacher, and public servant. There is much to learn from and celebrate in his work. I’ll start with his latest book’s major contributions, and then go on to raise some questions about just what future(s) might be in store for law & economics.

Bentham’s Shadow

Jeremy Bentham casts a long shadow over the legal academy. As Fred Schauer helpfully recounts, Bentham was extraordinarily suspicious of the complexity of law, and wanted it “to be understood by ordinary people without the intervention of lawyers and the interpretation of judges.” Bentham’s utilitarian legacy also stalks the profession of law. Following the lead of cost-benefit analysts, administrators may decide that legal regularity should shrink in importance as a value in comparison with quantified estimates of, say, consumer welfare. As another former Yale dean observed, the reduction of difficult conflicts to purely economic (or philosophical) questions threatens to undermine the autonomy of law as a field.

Calabresi advances this discussion with his crystalline distinction between “Economic Analysis of Law” and “Law & Economics.” I will quote at length here, since this distinction is central to the book:

What I call the Economic Analysis of Law uses economic theory to analyze the legal world. . . . In its most aggressive and reformist mode, having looked at the world from the standpoint of economic theory, if it finds that the legal world does not fit, it proclaims that world to be “irrational.” And this, of course, is exactly what Bentham did when he tested laws and behavior on the basis of utilitarianism and, in his most aggressive moments, dismissed what did not fit as nonsense. . . .

What I call Law and Economics instead begins with an agnostic acceptance of the world as it is, as the lawyer describes it to be. It then looks to whether economic theory can explain that world, that reality. And if it cannot, rather than automatically dismissing that world as irrational, it asks two questions.

The first is, are the legal scholars who are describing the legal reality looking at the world as it really is? Or is there something in their way of seeing the world that has led them to mischaracterize that reality? . . . . If . . . even a more comprehensive view of legal reality discloses rules and practices that economic theory cannot explain, Law and Economics asks a second question. Can economic theory be amplified, can it be made broader or more subtle . . . so that it can explain why the real world of law is as it is?

For Calabresi, behavioral economics is a great example of the kind of “bilateral relationship between economic theory and the world as it is” that he calls Law and Economics, because it has expanded economic theory to account for humans’ predictable irrationalities, and for some higher principles of altruism and fair play.

Calabresi’s chapter on non-profit institutions is a particularly strong vindication of the “Law and Economics” (as opposed to “Economic Analysis of Law”) perspective.  For market enthusiasts, the lack of profit motive at universities and hospitals is the key to understanding all that ails them. But from a more cosmopolitan perspective, one could just as easily conclude that the excess marketization of US systems of health and education (relative to, say, a European benchmark) is the better explanation.

Nevertheless, we can still expect plenty of government and corporate agitation to promote the profit motive in these sectors, however bad its results may be. Ugo Mattei (in a 2006 essay on Calabresi’s work) helps explain why.


The State of Legal Scholarship: A View from Health Law

Based on Ron Collins’ post below, I read the interview with Judge Edwards. The judge states:

In order for legal scholarship to be relevant outside the legal academy, law professors should balance abstract scholarship with scholarly works that are of interest and use to lawyers, legislators, judges, and regulators who serve society through legal arguments, decision-making, regulatory initiatives, and enforcement actions.

Fortunately, every legal academic that Nicolas Terry and I have hosted in our 41 episodes of The Week in Health Law has done so. Perhaps that’s a biased sample. But it’s undoubtedly better than the sampling practiced by Justice Breyer, another critic of legal scholarship.

For now, I will take some comfort that, about a year into our podcasting, we have heard from general counsels, attorneys, regulators, and journalists who are big fans of the show–which primarily focuses on the work of legal academics. And I will remain dubious of generalized critiques of legal scholarship, which fail to analyze the merits of particular fields.

Complicating the Narrative of Legal Automation

Richard Susskind has been predicting “the end of lawyers” for years, and has doubled down in a recent book coauthored with his son (The Future of the Professions). That book is so sweeping in its claims—that all professions are on a path to near-complete automation—that it should actually come as a bit of a relief for lawyers. If everyone’s doomed to redundancy, law can’t be a particularly bad career choice. To paraphrase Monty Python: nobody expects the singularity.

On the other hand, experts on the professions are offering some cautions about the Susskinds’ approach. Howard Gardner led off an excellent issue of Daedalus on the professions about ten years ago. He offers this verdict on the Susskinds’ perfunctory response to objections to their position:

In a section of their book called “Objections,” they list the principal reasons why others might take issue with their analyses, predictions, and celebratory mood. This list of counter-arguments to their critique includes the trustworthiness of professionals; the moral limits of unregulated markets; the value of craft; the importance of empathy and personal interactions; and the pleasure and pride derived from carrying out what they term ‘good work.’ With respect to each objection, the Susskinds give a crisp response.

I was disappointed with this list of objections, each followed by refutation. For example, countering the claim that one needs extensive training to become an expert, the Susskinds call for the reinstatement of apprentices, who can learn ‘on the job.’ But from multiple studies in cognitive science, we know that it takes approximately a decade to become an expert in any domain—and presumably that decade includes plenty of field expertise. Apprentices cannot magically replace well-trained experts. In another section, countering the claim that we need to work with human beings whom we can trust, they cite the example of the teaching done online via Khan Academy. But Khan Academy is the brainchild of a very gifted educator who in fact has earned the trust of many students and indeed of many teachers; it remains to be seen whether online learning à la Khan suffices to help individuals—either professionals or their clients—make ‘complex technical and ethical decisions under conditions of uncertainty.’ The Susskinds recognize that the makers and purveyors of apps may have selfish or even illegal goals in mind. But as they state, “We recognize that there are many online resources that promote and enable a wide range of offenses. We do not underestimate their impact of threat, but they stand beyond the reach of this book” (p. 233).

Whether or not one goes along with specific objections and refutations, another feature of the Susskinds’ presentation should give one pause. The future that they limn seems almost entirely an exercise in rational deduction and accordingly devoid of historical and cultural considerations.

Experts with a bit more historical perspective differ on the real likelihood of pervasive legal automation. Some put the risk to lawyers at under 4%. Even the highly cited study by Carl Frey and Michael Osborne (The Future of Employment: How Susceptible Are Jobs to Computerisation?) placed attorneys in the “low risk” category when it comes to replacement by software and robots. They suggest paralegals are in much more danger.

But empirical research by economist James Bessen has complicated even that assumption: “Since the late 1990s, electronic document discovery software for legal proceedings has grown into a billion dollar business doing work done by paralegals, but the number of paralegals has grown robustly.” Like MIT’s David Autor, Bessen calls automation a job creator, not a job destroyer. “The idea that automation kills jobs isn’t true historically,” Steve Lohr reports, and it remains dubious today. The real question is whether we reinforce policies designed to promote software and robotization that complement current workers’ skills, or slip into a regime of deskilling and substitution.
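
For readers curious what the e-discovery software Bessen studies actually does, here is a bare-bones sketch of “predictive coding”: a generic text classifier, trained on documents attorneys have already tagged, ranks the rest for human review. The corpus and labels below are fabricated; real systems train on thousands of attorney-coded documents.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny fabricated training set: 1 = responsive to the discovery request.
train_docs = [
    "merger negotiations with acme confidential term sheet",
    "quarterly budget request for office supplies",
    "acme acquisition due diligence findings",
    "holiday party catering menu options",
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(train_docs), labels)

new_docs = ["draft acme merger agreement", "parking validation policy"]
scores = model.predict_proba(vectorizer.transform(new_docs))[:, 1]
for doc, score in zip(new_docs, scores):
    print(f"{score:.2f}  {doc}")  # high scorers go to human reviewers first
```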

A Review of The Black Box Society

I just learned of this very insightful and generous review of my book, by Raizel Liebler:

The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press 2015) is an important book, not only for those interested in privacy and data, but also anyone with larger concerns about the growing tension between transparency and trade secrets, and the deceptiveness of pulling information from the ostensibly objective “Big Data.” . . .

One of the most important aspects of The Black Box Society builds on the work of Siva Vaidhyanathan and others to write about how relying on the algorithms of search impacts people’s lives. Through our inability to see how Google, Facebook, Twitter, and other companies display information, it makes it seem like these displays are in some way “objective.” But they are not. Between various stories about blocking pictures of breastfeeding moms, blocking links to competing sites, obscuring sources, and not creating tools to prevent harassment, companies are making choices. As Pasquale puts it: “at what point does a platform have to start taking responsibility for what its algorithms do, and how their results are used? These new technologies affect not only how we are understood, but also how we understand. Shouldn’t we know when they’re working for us, against us, or for unseen interests with undisclosed motives?”

I was honored to be mentioned on the TLF blog–a highly recommended venue! Here’s a list of some other reviews in English (I have yet to compile the ones in other languages, but was very happy to see the French edition get some attention earlier this Fall). And here’s an interesting take on one of those oft-black-boxed systems: Google Maps.
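
Liebler’s point that ranked displays only seem objective is easy to show in miniature. In the hypothetical sketch below, the ordering a user sees turns entirely on hand-picked weights; the sites, signals, and weights are all invented.

```python
# Three imaginary results, each scored on signals a platform might track.
results = [
    {"url": "independent-review.example", "relevance": 0.90, "ad_revenue": 0.10, "engagement": 0.30},
    {"url": "partner-site.example",       "relevance": 0.60, "ad_revenue": 0.90, "engagement": 0.50},
    {"url": "outrage-bait.example",       "relevance": 0.40, "ad_revenue": 0.50, "engagement": 0.95},
]

def rank(results, w_relevance, w_revenue, w_engagement):
    """Order results by a weighted score; the weights are editorial choices."""
    score = lambda r: (w_relevance * r["relevance"]
                       + w_revenue * r["ad_revenue"]
                       + w_engagement * r["engagement"])
    return [r["url"] for r in sorted(results, key=score, reverse=True)]

print(rank(results, 1.0, 0.0, 0.0))  # relevance only: independent review first
print(rank(results, 0.4, 0.4, 0.2))  # tilt toward revenue: partner site first
print(rank(results, 0.2, 0.2, 0.6))  # tilt toward engagement: outrage bait first
```

Each set of weights yields a different “objective” ordering; none of them is neutral.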

Highly Recommended: Chamayou’s A Theory of the Drone

Earlier this year, I read a compelling analysis of drone warfare, Grégoire Chamayou’s A Theory of the Drone. It is an unusual and challenging book, of interest to both policymakers and philosophers, engineers and attorneys. As I begin a review of it:

At what point do would-be reformers of the law and ethics of war slide into complicity with a morally untenable status quo? When is the moralization of force a prelude for the ration­alization of slaughter? Grégoire Chamayou’s penetrating recent book, A Theory of the Drone, raises these uncomfortable questions for lawyers and engineers both inside and out of the academy. Chamayou, a French philosopher, dissects legal academics’ arguments for targeted killing by unmanned vehicles. He also criticizes university research programs purporting to engineer ethics for the autonomous weapons systems they view as the inevitable future of war. Writing from a tradition of critical theory largely alien to both engineering and law, he raises concerns that each discipline should address before it continues to develop procedures for the automation of war.

As with the automation of law enforcement, advocacy, and finance, the automation of war has many unintended consequences. Chamayou helps us discern its proper limits.

Image Credit: 1924 idea for police automaton.