Category: Technology

A Review of The Black Box Society

I just learned of this very insightful and generous review of my book, by Raizel Liebler:

The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press 2015) is an important book, not only for those interested in privacy and data, but also anyone with larger concerns about the growing tension between transparency and trade secrets, and the deceptiveness of pulling information from the ostensibly objective “Big Data.” . . .

One of the most important aspects of The Black Box Society builds on the work of Siva Vaidhyanathan and others in describing how reliance on search algorithms impacts people’s lives. Because we cannot see how Google, Facebook, Twitter, and other companies display information, these displays can seem “objective.” But they are not. Between various stories about blocking pictures of breastfeeding moms, blocking links to competing sites, obscuring sources, and not creating tools to prevent harassment, companies are making choices. As Pasquale puts it: “at what point does a platform have to start taking responsibility for what its algorithms do, and how their results are used? These new technologies affect not only how we are understood, but also how we understand. Shouldn’t we know when they’re working for us, against us, or for unseen interests with undisclosed motives?”

I was honored to be mentioned on the TLF blog–a highly recommended venue! Here’s a list of some other reviews in English (I have yet to compile the ones in other languages, but was very happy to see the French edition get some attention earlier this Fall). And here’s an interesting take on one of those oft-black-boxed systems: Google Maps.

Highly Recommended: Chamayou’s A Theory of the Drone

Earlier this year, I read a compelling analysis of drone warfare, Grégoire Chamayou’s A Theory of the Drone. It is an unusual and challenging book, of interest to policymakers and philosophers, engineers and attorneys alike. As I put it at the beginning of my review:

At what point do would-be reformers of the law and ethics of war slide into complicity with a morally untenable status quo? When is the moralization of force a prelude for the ration­alization of slaughter? Grégoire Chamayou’s penetrating recent book, A Theory of the Drone, raises these uncomfortable questions for lawyers and engineers both inside and out of the academy. Chamayou, a French philosopher, dissects legal academics’ arguments for targeted killing by unmanned vehicles. He also criticizes university research programs purporting to engineer ethics for the autonomous weapons systems they view as the inevitable future of war. Writing from a tradition of critical theory largely alien to both engineering and law, he raises concerns that each discipline should address before it continues to develop procedures for the automation of war.

As with the automation of law enforcement, advocacy, and finance, the automation of war has many unintended consequences. Chamayou helps us discern its proper limits.

Image Credit: 1924 idea for police automaton.

From Territorial to Functional Governance

Susan Crawford is one of the leading global thinkers on digital infrastructure. Her brilliant book Captive Audience spearheaded a national debate on net neutrality. She helped convince the Federal Communications Commission to better regulate big internet service providers. And her latest intervention–on Uber–is a must-read. Crawford worries that Uber will rapidly monopolize urban ride services. It’s repeatedly tried to avoid regulation and taxes. And while it may offer a good deal to drivers and riders now, there is no guarantee it will in the future.

A noted critic of the sharing economy, Tom Slee, has backed up Crawford’s concerns, from an international perspective. “For a smallish city in Canada, what happens to accountability when faced with a massive American company with little interest in Canadian employment law or Canadian traditions?”, Slee asks, raising a very deep point about the nature of governance. What happens to a city when its government’s responsibilities are slowly disaggregated, functionally? Some citizens may want to see the effective governance of paid rides via Uber, of spare rooms via AirBnB, and so on. A full privatization of city governance awaits, from water to sidewalks.

If you’re concerned about that, you may find my recent piece on the sharing economy of interest. We’ll also be discussing this and similar issues at Northeastern’s conference “Tackling the Urban Core Puzzle.” Transitions from territorial to functional governance will be critical topics of legal scholarship in the coming decade.

Law’s Nostradamus

The ABA Journal “Legal Rebels” page has promoted Richard Susskind’s work (predicting the future automation of much of what lawyers do) as “required reading.” It is a disruptive take on the legal profession. But disruption has been having a tough time as a theory lately. So I was unsurprised to find this review, by a former General Counsel of DuPont Canada Inc., of Susskind’s The End of Lawyers?:

Susskind perceives a lot of routine in the practice of law . . . which he predicts will gradually become the domain of non-professional or quasi-professional workers. In this respect his prediction is about two or three decades too late. No substantial law firm, full service or boutique, can survive without a staff of skilled paralegal specialists and the trend in this direction has been ongoing since IT was little more than a typewriter and a Gestetner duplicating machine. . . .

Law is not practiced in a vacuum. It is not merely a profession devoted to preparing standard forms or completing blanks in precedents. And though he pays lip service to the phenomenon, there is little appreciation of the huge volume of indecipherable legislation and regulation that is promulgated every day of every week of the year. His proposal to deal with this through regular PDA alerts is absurd. . . . In light of this, if anything in Susskind’s thesis can be given short shrift it is his prognostication that demand for “bespoke” or customized services will be in secular decline. Given modern trends in legislative and regulatory drafting, in particular the use of “creative ambiguity” as it’s been called, demand for custom services will only increase.

Nevertheless, I predict Susskind’s work on The Future of the Professions will get a similarly warm reception from “Legal Rebels.” The narrative of lawyers’ obsolescence is just too tempting for those who want to pay attorneys less, reduce their professional independence from the demands of capital, or simply replace legal regulation of certain activities with automated controls.

However, even quite futuristic academics are not on board with the Susskindite singularitarianism of robo-lawyering via software Solons. The more interesting conversations about automation and the professions will focus on bringing accountability to oft-opaque algorithmic processes. Let’s hope that the professions can maintain some autonomy from capital to continue those conversations–rather than guaranteeing their obsolescence as ever more obeisant cogs in profit-maximizing machines.



How CalECPA Improves on its Federal Namesake

Last week, Governor Brown signed the landmark California Electronic Communications Privacy Act (CalECPA) into law, updating California privacy law for modern communications. Compared to ECPA, CalECPA requires warrants for more kinds of investigations, places tighter restrictions on those warrants, provides more notice to targets, and furnishes as remedies both court-ordered data deletion and statutory suppression. Moreover, CalECPA’s approach is comprehensive and uniform, eschewing the often irrational distinctions that have made ECPA one of the most confusing and under-protective privacy statutes in the Internet era.

Extended Scope, Enhanced Protections, and Simplified Provisions

CalECPA regulates investigative methods that ECPA did not anticipate. Under CalECPA, government entities in California must obtain a warrant based on probable cause before they may access electronic communications contents and metadata from service providers or from devices. ECPA makes no mention of device-stored data, even though law enforcement agents increasingly use StingRays to obtain information directly from cell phones. CalECPA subjects such techniques to its warrant requirement. While the Supreme Court’s recent decision in Riley v. California required that agents either obtain a warrant or rely on an exception to the warrant requirement to search a cell phone incident to arrest, CalECPA requires a warrant for physical access to any device that “stores, generates, or transmits electronic information in electronic form,” not just a cell phone. CalECPA clearly defines the exceptions to the warrant requirement by specifying what counts as an emergency, who can give consent to the search of a device, and related questions.

ECPA’s 1986-drafted text only arguably covers the compelled disclosure of location data stored by a service provider, and does not clearly require a warrant for such investigations. CalECPA explicitly includes location data in the “electronic communication information” that is subject to the warrant requirement when a government entity accesses it from either a device or a service provider (broadly defined). ECPA makes no mention of location data gathered in real time or prospectively, but CalECPA requires a warrant both for those investigations and for stored-data investigations. Whenever a government entity compels “the production of or access to” location information, including GPS data, from a service provider or from a device, CalECPA requires a warrant.

Read More

Air Traffic Control for Drones

Recently a man was arrested and jailed for a night after shooting a drone that hovered over his property. The man felt he was entitled (perhaps under peeping tom statutes?) to privacy from the (presumably camera-equipped) drone. Froomkin & Colangelo have outlined a more expansive theory of self-help:

[I]t is common for new technology to be seen as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great – or even extant. We therefore suggest measures to reduce uncertainties about robots, ranging from forbidding weaponized robots to requiring lights, and other markings that would announce a robot’s capabilities, and RFID chips and serial numbers that would uniquely identify the robot’s owner.

On the other hand, a Fortune article on the incident reports:

In the view of drone lawyer Brendan Schulman and robotics law professor, Ryan Calo, home owners can’t just start shooting when they see a drone over their house. The reason is because the law frowns on self-help when a person can just call the police instead. This means that Meredith may not have been defending his house, but instead engaging in criminal acts and property damage for which he could have to pay.

I am wondering how we might develop a regulatory infrastructure to make either the self-help or police-help responses more tractable. Present resources seem inadequate. I don’t think the police would take me seriously if I reported a drone buzzing my windows in Baltimore—they have bigger problems to deal with. If I were to shoot it, it might fall on someone walking on the sidewalk below. And it appears deeply unwise to try to grab it to inspect its serial number.

Following on work on license plates for drones, I think that we need to create a monitoring infrastructure to promote efficient and strict enforcement of law here. Bloomberg reports that “At least 14 companies, including Google, Amazon, Verizon and Harris, have signed agreements with NASA to help devise the first air-traffic system to coordinate small, low-altitude drones, which the agency calls the Unmanned Aerial System Traffic Management.” I hope all drones are part of such a system, that they must be identifiable as to owner, and that they can be diverted into custody by responsible authorities once a credible report of lawbreaking has occurred.

I know that this sort of regulatory vision is subject to capture. There is already misuse of state-level drone regulation to curtail investigative reporting on abusive agricultural practices. But in a “free-for-all” environment, the most powerful entities may capture drones with technology even more effectively than they capture legislators with lobbyists. I know that is a judgment call, and others will differ. I also have some hope that courts will strike down laws against using drones for reporting on matters of public interest, on First Amendment/free expression grounds.

The larger point is: we may well be at the cusp of a “this changes everything” moment with drones. Illah Reza Nourbakhsh’s book Robot Futures imagines the baleful consequences of modern cities saturated with butterfly-like drones, carrying either ads or products. Grégoire Chamayou’s A Theory of the Drone presents a darker vision, of omniveillance (and, eventually, forms of omnipotence, at least with respect to less technologically advanced persons) enabled by such machines. The present regulatory agenda needs to become more ambitious, since “black boxed” drone ownership and control creates a genuine Ring of Gyges problem.

Image Credit: Outtacontext.

Corporate Experimentation

Those interested in the Facebook emotional manipulation study should take a look at Michelle N. Meyer’s op-ed (with Christopher Chabris) today:

We aren’t saying that every innovation requires A/B testing. Nor are we advocating nonconsensual experiments involving significant risk. But as long as we permit those in power to make unilateral choices that affect us, we shouldn’t thwart low-risk efforts, like those of Facebook and OkCupid, to rigorously determine the effects of those choices. Instead, we should…applaud them.

Meyer offers more perspectives on the issue in her interview with Nicolas Terry and me on The Week in Health Law podcast.

For an alternative view, check out my take on “Facebook’s Model Users”:

[T]he corporate “science” of manipulation is a far cry from academic science’s ethics of openness and reproducibility. That’s already led to some embarrassments in the crossover from corporate to academic modeling (such as Google’s flu trends failures). Researchers within Facebook worried about multiple experiments being performed at once on individual users, which might compromise the results of any one study. Standardized review could have prevented that. But, true to the Silicon Valley ethic of “move fast and break things,” speed was paramount: “There’s no review process. Anyone…could run a test…trying to alter peoples’ behavior,” said one former Facebook data scientist.
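
The worry about simultaneous experiments is not merely procedural. Here is a toy simulation (my own illustration, not drawn from Facebook’s systems, with invented numbers) of the underlying statistical problem: if assignment to a second concurrent test is correlated with the first test’s arms, the kind of collision a standardized review process exists to catch, then the second test’s effect contaminates the first test’s estimate.

```python
# Toy simulation: two A/B tests run on the same users at once. Assignment
# to test B is correlated with test A's arms, so B's true effect leaks
# into the naive estimate of A's effect. All effect sizes are invented.
import random

random.seed(0)
N = 100_000

def outcome(in_a, in_b):
    # True effects: treatment A adds 1.0, treatment B adds 2.0.
    return 10.0 + 1.0 * in_a + 2.0 * in_b + random.gauss(0, 1)

samples = []
for _ in range(N):
    in_a = random.random() < 0.5
    # Unbalanced overlap: A's treatment group is likelier to land in B's.
    in_b = random.random() < (0.7 if in_a else 0.3)
    samples.append((in_a, outcome(in_a, in_b)))

treated = [y for a, y in samples if a]
control = [y for a, y in samples if not a]
print(sum(treated) / len(treated) - sum(control) / len(control))
# Prints roughly 1.8, not A's true effect of 1.0: B's effect leaked in.
```

With genuinely independent randomization, the two tests would add noise to each other but not bias; the danger arises when nothing enforces that independence.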

I just hope that, as A/B testing becomes more ubiquitous, we stay well aware of the power imbalances it both reflects and reinforces. Given the already well-documented resistance to an “experiment” on Montana politics, the power of big data firms to manipulate even the very political order that ostensibly regulates them may well be on the horizon.

Worker Replaceability: A Question of Values

One reason I decided to write on law practice technology was a general unease about the shape of debates on automation. Technologists and journalists tend to look at jobs from the outside, presume that they are routine, and predict they’ll be further routinized by machines. But some reality checks are important here.

As David Rotman observes, “there is not much evidence on how even today’s automation is affecting employment.” Many economists believe that technology will create more jobs than it destroys. MIT’s David Autor, writing for the Federal Reserve Bank of Kansas City’s economic policy symposium on “Reevaluating Labor Market Dynamics,” states that “journalists and expert commentators overstate the extent of machine substitution for human labor and ignore the strong complementarities”—in other words, the ways that automation can increase, rather than decrease, the value of human labor. Consider, for instance, voice recognition software: it may put transcriptionists out of work, but it increases the value of the labor of a person who can now have dictation transcribed 24 hours a day, rather than only when a transcriptionist is available. The selfie-stick may have a similar effect on cameramen and journalists. Legal tech may put some lawyers out of a job, while creating jobs for others.

It’s also easy to overestimate the scope of automation. Autor gives a sobering example of windshield repair:

Most automated systems lack flexibility—they are brittle. Modern automobile plants, for example, employ industrial robots to install windshields on new vehicles as they move through the assembly line. But aftermarket windshield replacement companies employ technicians, not robots, to install replacement windshields. Why not robots? Because removing a broken windshield, preparing the windshield frame to accept a replacement, and fitting a replacement into that frame demand far more real-time adaptability than any contemporary robot can . . .

The distinction between assembly line production and the in-situ repair highlights the role of environmental control in enabling automation. While machines cannot generally operate autonomously in unpredictable environments, engineers can in some cases radically simplify the environment in which machines work to enable autonomous operation.

Admittedly, the “society of control” scenario discussed here, or even milder versions of the “smart city,” may lead to far more controllable environments. But they also raise critical questions about privacy, fair data practices, and liberty.

There are also conflicts over values at stake in worker replacement. Frey & Osborne’s study The Future of Employment: How Susceptible Are Jobs to Computerisation? tries to rank-order 702 occupations by the likelihood of their automation. They characterize recreational therapists as the least automatable, and title examiners and searchers as the second most automatable. But many video games offer forms of therapy, and therapeutic jobs (like masseur) and even higher-touch jobs could, in principle, be computerized. Furthermore, at least in the United States in the wake of MERS, there has been a loss of “confidence in real property recording systems.” Title insurance may hinge on legal questions that are still up in the air in certain states. Yes, further automation and legal recognition of systems like MERS might “cut the Gordian knot,” but that solution would also inevitably trench on other values of legal regularity and due process.

In summary: automation anxieties could be as overblown now as they were in the 1960s. And the automation of each occupation, and of tasks within occupations, will inevitably create conflicts over values and social priorities. Far from being a purely technical question, robotization always implicates values. The future of automation is ours to master. Respecting workers, rather than assuming their replaceability, would be a great start.

Four Futures of Legal Automation

There are many gloom-and-doom narratives about the legal profession. One of the most persistent is “automation apocalypse.” In this scenario, computers will study past filings, determine what patterns of words work best, and then—poof!—software will eat the lawyer’s world.

Conditioned to be preoccupied by worst-case scenarios, many attorneys have panicked about robo-practitioners on the horizon. Meanwhile, experts differ on the real likelihood of pervasive legal automation. Some put the risk to lawyers at under 4%; others claim legal practice is fundamentally routinizable. I’ve recently co-authored an essay that helps explain why such radical uncertainty prevails.

While futurists affect the certainties of physicists, visions of society always reflect contestable political aspirations. Those predicting doom for future lawyers usually harbor ideological commitments that are not that friendly to lawyers of the present. Displacing the threat to lawyers onto machines (rather than, say, the decisionmakers who can give machines’ doings the legal effect of what was once done by qualified persons) is a way of not merely rationalizing, but also speeding up, the hoped-for demise of an adversary. Just as the debate over killer robots can draw attention away from the persons who design and deploy them, so too can the current controversy over robo-lawyering distract from the more important political and social trends that make automated dispute resolution so tempting to managers and bureaucrats.

It is easy to justify a decline in attorneys’ income or status by saying that software could easily do their work. It’s harder to explain why the many non-automatable aspects of current legal practice should be eliminated or uncompensated. That’s one reason why stale buzzwords like “disruption” crowd out serious reflection on the drivers of automation. A venture capitalist pushing robotic caregivers doesn’t want to kill investors’ buzz by reflecting on the economic forces promoting algorithmic selfhood. Similarly, #legaltech gurus know that a humane vision of legal automation, premised on software that increases quality and opportunities for professional judgment, isn’t an easy sell to investors keen on speed, scale, and speculation. Better instead to present lawyers as glorified elevator operators, replaceable with a sufficiently sophisticated user interface.

Our essay does not predict lawyers’ rise or fall. That may disappoint some readers. But our main point is to make the public conversation about the future of law a more open and honest one. Technology has shaped, and will continue to influence, legal practice. Yet its effect can be checked or channeled by law itself. Since different types of legal work are more or less susceptible to automation, and society can be more or less regulatory, we explore four potential future climates for the development of legal automation. We call them, in shorthand, Vestigial Legal Profession, Society of Control, Status Quo, and Second Great Compression. An abstract appears below.

Read More


Structuring US Law

In 2013, the U.S. House’s Office of the Law Revision Counsel released the titles of the U.S. Code as “structured data” in XML. Previously the law had been available only as ordinary text. This structuring of the law as data allows for interesting visualizations of, and interactions with, the law that were not previously feasible, such as the following:


[Image: Force Directed Explorer app]

[Image: US Code Explorer screenshot (Code Explorer app)]


This post will discuss what it means for U.S. law to be structured as data and why this has enabled increased analysis and visualization of the law. (You can read more about the visualizations above here and here.)

Structuring U.S. Law

The U.S. Code (the primary codification of federal statutory law) has always had an implicit structure. Now, however, it has an explicit, machine-readable structure.
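
To make this concrete, here is a minimal sketch of what working with that machine-readable structure might look like. It assumes the USLM XML schema used in the House release; the namespace, element names, “identifier” attribute, and local filename below are my assumptions, to be verified against the actual files from uscode.house.gov. The sketch flattens one title’s hierarchy into the nodes-and-links JSON that force-directed visualizations like those above typically consume:

```python
# Hypothetical sketch: walk one title of the U.S. Code in its USLM XML
# release and flatten the hierarchy into {"nodes": [...], "links": [...]}
# JSON, the shape D3-style force-directed layouts expect.
import json
import xml.etree.ElementTree as ET

USLM = "{http://xml.house.gov/schemas/uslm/1.0}"  # assumed USLM namespace
LEVELS = {"title", "subtitle", "chapter", "subchapter", "part", "section"}

nodes, links = [], []

def walk(element, parent_id=None):
    """Record each structural level as a node linked to its parent."""
    for child in element:
        tag = child.tag.replace(USLM, "")
        if tag in LEVELS:
            node_id = child.get("identifier", f"n{len(nodes)}")
            nodes.append({"id": node_id, "kind": tag})
            if parent_id is not None:
                links.append({"source": parent_id, "target": node_id})
            walk(child, node_id)
        else:
            walk(child, parent_id)  # descend through non-structural wrappers

root = ET.parse("usc05.xml").getroot()  # a local copy of, say, Title 5
walk(root)
print(json.dumps({"nodes": nodes, "links": links}, indent=2)[:400])
```

Because the structure is explicit, the traversal needs no heuristics: every chapter and section announces itself, which is what makes explorers like those linked above feasible.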

Read More