Category: Technology

Air Traffic Control for Drones

Recently a man was arrested and jailed for a night after shooting a drone that hovered over his property. The man felt he was entitled (perhaps under peeping tom statutes?) to privacy from the (presumably camera-equipped) drone. Froomkin & Colangelo have outlined a more expansive theory of self-help:

[I]t is common for new technology to be seen as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great – or even extant. We therefore suggest measures to reduce uncertainties about robots, ranging from forbidding weaponized robots to requiring lights, and other markings that would announce a robot’s capabilities, and RFID chips and serial numbers that would uniquely identify the robot’s owner.

On the other hand, the Fortune article reports:

In the view of drone lawyer Brendan Schulman and robotics law professor, Ryan Calo, home owners can’t just start shooting when they see a drone over their house. The reason is because the law frowns on self-help when a person can just call the police instead. This means that Meredith may not have been defending his house, but instead engaging in criminal acts and property damage for which he could have to pay.

I am wondering how we might develop a regulatory infrastructure to make either the self-help or police-help responses more tractable. Present resources seem inadequate. I don’t think the police would take me seriously if I reported a drone buzzing my windows in Baltimore—they have bigger problems to deal with. If I were to shoot it, it might fall on someone walking on the sidewalk below. And it appears deeply unwise to try to grab it to inspect its serial number.

Following on work on license plates for drones, I think that we need to create a monitoring infrastructure to promote efficient and strict enforcement of law here. Bloomberg reports that “At least 14 companies, including Google, Amazon, Verizon and Harris, have signed agreements with NASA to help devise the first air-traffic system to coordinate small, low-altitude drones, which the agency calls the Unmanned Aerial System Traffic Management.” I hope all drones are part of such a system, that they must be identifiable as to owner, and that they can be diverted into custody by responsible authorities once a credible report of lawbreaking has occurred.
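As a thought experiment only, here is a minimal sketch of the owner-lookup step such an identification regime presupposes; the registry contents, field names, and the DroneRegistration/identify_owner names are my own illustrative assumptions, not any part of NASA's actual traffic-management design.

```python
# Hypothetical sketch (not NASA's UTM design): a registry keyed by drone
# serial number, so a credible report of lawbreaking can be traced to a
# responsible, identifiable owner.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DroneRegistration:
    serial_number: str
    owner_name: str
    owner_contact: str

# Toy in-memory registry; a real system would query an authoritative database.
REGISTRY = {
    "SN-0001": DroneRegistration("SN-0001", "Example Owner", "owner@example.com"),
}

def identify_owner(serial_number: str) -> Optional[DroneRegistration]:
    """Return the registration for a reported serial number, if any."""
    return REGISTRY.get(serial_number)

if __name__ == "__main__":
    reg = identify_owner("SN-0001")
    print(reg.owner_name if reg else "Unregistered drone: escalate to authorities")
```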

I know that this sort of regulatory vision is subject to capture. There is already misuse of state-level drone regulation to curtail investigative reporting on abusive agricultural practices. But in a “free-for-all” environment, the most powerful entities may capture drones with technology even more effectively than they capture legislators with lobbyists. I know that is a judgment call, and others will differ. I also have some hope that courts will strike down laws against using drones to report on matters of public interest, on First Amendment/free expression grounds.

The larger point is: we may well be at the cusp of a “this changes everything” moment with drones. Illah Reza Nourbakhsh’s book Robot Futures imagines the baleful consequences of modern cities saturated with butterfly-like drones, carrying either ads or products. Grégoire Chamayou’s A Theory of the Drone presents a darker vision, of omniveillance (and, eventually, forms of omnipotence, at least with respect to less technologically advanced persons) enabled by such machines. The present regulatory agenda needs to become more ambitious, since “black boxed” drone ownership and control creates a genuine Ring of Gyges problem.

Image Credit: Outtacontext.

Corporate Experimentation

Those interested in the Facebook emotional manipulation study should take a look at Michelle N. Meyer’s op-ed (with Christopher Chabris) today:

We aren’t saying that every innovation requires A/B testing. Nor are we advocating nonconsensual experiments involving significant risk. But as long as we permit those in power to make unilateral choices that affect us, we shouldn’t thwart low-risk efforts, like those of Facebook and OkCupid, to rigorously determine the effects of those choices. Instead, we should…applaud them.

Meyer offers more perspectives on the issue in her interview with Nicolas Terry and me on The Week in Health Law podcast.

For an alternative view, check out my take on “Facebook’s Model Users:”

[T]he corporate “science” of manipulation is a far cry from academic science’s ethics of openness and reproducibility. That’s already led to some embarrassments in the crossover from corporate to academic modeling (such as Google’s flu trends failures). Researchers within Facebook worried about multiple experiments being performed at once on individual users, which might compromise the results of any one study. Standardized review could have prevented that. But, true to the Silicon Valley ethic of “move fast and break things,” speed was paramount: “There’s no review process. Anyone…could run a test…trying to alter peoples’ behavior,” said one former Facebook data scientist.

I just hope that, as A/B testing becomes more ubiquitous, we remain well aware of the power imbalances it both reflects and reinforces. Given the already well-documented resistance to an “experiment” on Montana politics, the power of big data firms to manipulate even the political order that ostensibly regulates them may well be on the horizon.
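To make concrete what “A/B testing” and the coordination worry above involve, here is a minimal sketch of deterministic experiment bucketing, assuming a simple salted-hash scheme; the function and experiment names are illustrative, not Facebook’s or OkCupid’s actual infrastructure.

```python
# Minimal sketch of deterministic A/B bucketing (illustrative only).
# Hashing the user ID together with a per-experiment name keeps each user's
# assignment stable and makes assignments across experiments effectively
# independent of one another.
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a variant for a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

if __name__ == "__main__":
    # The same user can land in different arms of different experiments.
    print(assign_variant("user-42", "feed_ranking_test"))
    print(assign_variant("user-42", "emotion_wording_test"))
```

Independent randomization of this kind keeps assignments statistically independent across concurrent tests, but it does not by itself catch the interaction effects between overlapping experiments that standardized review is meant to address.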

Worker Replaceability: A Question of Values

One reason I decided to write on law practice technology was a general unease about the shape of debates on automation. Technologists and journalists tend to look at jobs from the outside, presume that they are routine, and predict they’ll be further routinized by machines. But some reality checks are important here.

As David Rotman observes, “there is not much evidence on how even today’s automation is affecting employment.” Many economists believe that technology will create more jobs than it destroys. MIT’s David Autor, writing for the Federal Reserve Bank of Kansas City’s economic policy symposium on “Reevaluating Labor Market Dynamics,” states that “journalists and expert commentators overstate the extent of machine substitution for human labor and ignore the strong complementarities”—in other words, the ways that automation can increase, rather than decrease, the value of human labor. Consider, for instance, voice recognition software: it may put transcriptionists out of work, but it increases the value of the labor of a person who can now have dictation transcribed 24 hours a day, rather than only when a transcriptionist is on hand. The selfie-stick may have a similar effect on cameramen and journalists. Legal tech may put some lawyers out of a job, while creating jobs for others.

It’s also easy to overestimate the scope of automation. Autor gives a sobering example of windshield repair:

Most automated systems lack flexibility—they are brittle. Modern automobile plants, for example, employ industrial robots to install windshields on new vehicles as they move through the assembly line. But aftermarket windshield replacement companies employ technicians, not robots, to install replacement windshields. Why not robots? Because removing a broken windshield, preparing the windshield frame to accept a replacement, and fitting a replacement into that frame demand far more real-time adaptability than any contemporary robot can…

The distinction between assembly-line production and in situ repair highlights the role of environmental control in enabling automation. While machines cannot generally operate autonomously in unpredictable environments, engineers can in some cases radically simplify the environment in which machines work to enable autonomous operation.

Admittedly, the “society of control” scenario discussed here, or even milder versions of the “smart city,” may lead to far more controllable environments. But they also raise critical questions about privacy, fair data practices, and liberty.

There are also conflicts over values at stake in worker replacement. Frey & Osborne’s study The Future of Employment: How Susceptible Are Jobs to Computerisation? tries to rank-order 702 occupations by the likelihood of their automation. They characterize recreational therapists as least automatable, and title examiners and searchers as the second most automatable. But many video games offer forms of therapy, and therapeutic jobs (like massage) and even higher-touch jobs could, in principle, be computerized. Furthermore, at least in the United States in the wake of MERS, there has been a loss of “confidence in real property recording systems.” Title insurance may hinge on legal questions that are still up in the air in certain states. Yes, further automation and recognition of things like MERS might “cut the Gordian knot,” but that solution would also inevitably trench on other values of legal regularity and due process.

In summary: automation anxieties could be as overblown now as they were in the 1960s. And the automation of each occupation, and of tasks within occupations, will inevitably create conflicts over values and social priorities. Far from a purely technical question, robotization always implicates values. The future of automation is ours to master. Respecting workers, rather than assuming their replaceability, would be a great start.

Four Futures of Legal Automation

There are many gloom-and-doom narratives about the legal profession. One of the most persistent is “automation apocalypse.” In this scenario, computers will study past filings, determine what patterns of words work best, and then—poof!—software will eat the lawyer’s world.

Conditioned to be preoccupied by worst-case scenarios, many attorneys have panicked about robo-practitioners on the horizon. Meanwhile, experts differ on the real likelihood of pervasive legal automation. Some put the risk to lawyers at under 4%; others claim legal practice is fundamentally routinizable. I’ve recently co-authored an essay that helps explain why such radical uncertainty prevails.

While futurists affect the certainties of physicists, visions of society always reflect contestable political aspirations. Those predicting doom for future lawyers usually harbor ideological commitments that are not especially friendly to lawyers of the present. Locating the threat to lawyers in machines (rather than, say, in the decisionmakers who can give machines’ doings the legal effect of what was once done by qualified persons) is a way of not merely rationalizing, but also speeding up, the hoped-for demise of an adversary. Just as the debate over killer robots can draw attention away from the persons who design and deploy them, so too can the current controversy over robo-lawyering distract from the more important political and social trends that make automated dispute resolution so tempting to managers and bureaucrats.

It is easy to justify a decline in attorneys’ income or status by saying that software could easily do their work. It’s harder to explain why the many non-automatable aspects of current legal practice should be eliminated or uncompensated. That’s one reason why stale buzzwords like “disruption” crowd out serious reflection on the drivers of automation. A venture capitalist pushing robotic caregivers doesn’t want to kill investors’ buzz by reflecting on the economic forces promoting algorithmic selfhood. Similarly, #legaltech gurus know that a humane vision of legal automation, premised on software that increases quality and opportunities for professional judgment, isn’t an easy sell to investors keen on speed, scale, and speculation. Better instead to present lawyers as glorified elevator operators, replaceable with a sufficiently sophisticated user interface.

Our essay does not predict lawyers’ rise or fall. That may disappoint some readers. But our main point is to make the public conversation about the future of law a more open and honest one. Technology has shaped, and will continue to influence, legal practice. Yet its effect can be checked or channeled by law itself. Since different types of legal work are more or less susceptible to automation, and society can be more or less regulatory, we explore four potential future climates for the development of legal automation. We call them, in shorthand, Vestigial Legal Profession, Society of Control, Status Quo, and Second Great Compression. An abstract appears in the full essay.



Structuring US Law

In 2013, the U.S. House Office of the Law Revision Counsel released the Titles of the U.S. Code as “structured data” in XML. Previously the law had been available only as ordinary text. This structuring of the law as data allows for interesting visualizations of, and interactions with, the law that were not previously feasible, such as the following:


[Image: Force Directed Explorer App]

[Image: US Code Explorer App screenshot]


This post will discuss what it means for US law to be structured as data and why this has enabled increased analysis and visualization of the law. (You can read more about the visualizations above here and here)

Structuring U.S. Law

The U.S. Code (the primary codification of federal statutory law) has always had an implicit structure. Now, however, it also has an explicit, machine-readable structure.
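To make “explicit, machine-readable structure” concrete, here is a minimal sketch that walks a simplified XML fragment nested the way the Code is (title/chapter/section); the element and attribute names are illustrative only, not the actual USLM schema published by the Law Revision Counsel.

```python
# Minimal sketch: traversing a simplified XML fragment whose nesting mirrors
# the U.S. Code's title/chapter/section hierarchy. Element and attribute
# names are illustrative, not the official USLM schema.
import xml.etree.ElementTree as ET

SAMPLE = """
<title num="17" heading="Copyrights">
  <chapter num="1" heading="Subject Matter and Scope of Copyright">
    <section num="102" heading="Subject matter of copyright: In general"/>
    <section num="106" heading="Exclusive rights in copyrighted works"/>
  </chapter>
</title>
"""

def walk(element: ET.Element, depth: int = 0) -> None:
    """Print the hierarchy, which is exactly what a visualizer would consume."""
    print("  " * depth + f"{element.tag} {element.get('num')}: {element.get('heading')}")
    for child in element:
        walk(child, depth + 1)

if __name__ == "__main__":
    walk(ET.fromstring(SAMPLE))
```

Once the hierarchy is explicit in this way, building something like the Code Explorer shown above is largely a matter of handing the same tree to a visualization or front-end library.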


Europe Steps Up to the Challenge of Digital Competition Law

Two years ago U.S. authorities abandoned a critical case in digital antitrust. The EC now appears ready to fill the void:

The European Commission is said to be planning to charge Google with using its dominant position in online search to favor the company’s own services over others, in what would be one of the biggest antitrust cases here since regulators went after Microsoft. . . . If Europe is successful in making its case, the American tech giant could face a huge fine and be forced to alter its business practices to give smaller competitors like Yelp greater prominence in its search queries.

I applaud this move. As I’ve argued in The Black Box Society, antitrust law flirts with irrelevance if it fails to grapple with the dominance of massive digital firms. Europe has no legal or moral obligation to allow global multinationals to control critical information sources. Someone needs to be able to “look under the hood” and understand what is going on when competitors of Google’s many acquired firms plunge in general Google search results.

Google argues that its vast database of information and queries reveals user intentions and thus makes its search services demonstrably better than those of its rivals. But in doing so, it neutralizes the magic charm it has used for years to fend off regulators. “Competition is one click away,” chant the Silicon Valley antitrust lawyers when someone calls out a behemoth firm for unfair or misleading business practices. It’s not so. Alternatives are demonstrably worse, and likely to remain so as long as the dominant firms’ self-reinforcing data advantage grows. If EU authorities address that dynamic, they’ll be doing the entire world a service.


Meet the New Boss…

One of the most persistent self-images of Silicon Valley internet giants is that of liberators, emancipators, “disintermediators” who would finally free the creative class from the grip of oligopolistic music labels or duopolistic cable moguls. I chart the rise and fall of the plausibility of that narrative in Chapter 3 of my book. Cory Doctorow strikes another blow at it today:

[T]he competition for Youtube has all but vanished, meaning that they are now essential to any indie artist’s promotion strategy. And now that Youtube doesn’t have to compete with other services for access to artists’ materials, they have stopped offering attractive terms to indies — instead, they’ve become an arm of the big labels, who get to dictate the terms on which their indie competitors will have to do business.

Ah, but don’t worry: antitrust experts assure us that competition is just around the corner, any day now. Some nimble entrepreneur in a garage has the 1 to 3 million servers now deployed by Google, can miraculously access comparable past data on organizing videos, and is just about to get all the current uploaders and viewers to switch. The folklore of digital capitalism is a dreamy affair.

The Second Machine Age & the System of Professions

Why do we have professions? Many economists give a public choice story: guilds of doctors, social workers, etc., monopolize a field by bribing legislators to keep everyone else out of the guild.* Some scholars of legal ethics buy into that story for our field, too.

But there is another, older explanation, based on the need for independent judgment and professional autonomy. Who knows whether a doctor employed by a drug company could resist the firm’s requirement that she prescribe its products off-label as often as possible. With independent doctors, there is at least some chance of pushback. Similarly, I’d be much more confident in the conclusions of a letter written by attorneys assessing the legality of a client’s course of action if that client generated, say, 1%, rather than 100%, of their business.

Andrew Abbott’s book The System of Professions makes those, and many other, critical points about the development of professions. Genuine expertise and independent judgment depend on certain economic arrangements. For Abbott, the professions exist, in part, to shield certain groups from the full force of economic demands that can be made by those with the most money or power. As inequality in the developed world skyrockets, and the superrich at the very top of the economy accumulate vastly more wealth than the vast majority of even the best-paid professionals, such protections become even more urgent.

I was reminded of Abbott’s views while reading Lilly Irani’s excellent review of Erik Brynjolfsson & Andrew McAfee’s The Second Machine Age and Simon Head’s Mindless. Irani, a former Googler, digs into the real conditions of work at leading firms of the digital economy. She observes that much of what we might consider “making” (pursuant to some professional standards) is a form of “managing.”

Methodological Pluralism in Legal Scholarship

The place of the social sciences in law is constantly contested. Should more legal scholars retreat to pure doctrinalism, as Judge Harry Edwards suggests? Or is there a place for more engagement with other parts of the university? As we consider these questions, we might do well to take a bit more of a longue durée perspective, helpfully provided by David Bosworth in a recent essay in Raritan:

No society in history has more emphasized the social atom than ours. Yet the very authority we have invested in individualism is now being called into question by both the inner logic of our daily practices and by the recent findings of our social sciences. . . .

Such findings challenge the very core of our political economy’s self-conception. What, after all, do “self-reliance” and “enlightened self-interest” really mean if we are constantly being influenced on a subliminal level by the behavior of those around us? Can private property rights continue to seem right when an ecologically minded, post-modern science keeps discovering new ways in which our private acts transgress our deeded boundaries to harm or help our neighbors? Can our allegiance to the modern notions of ownership, authorship, and originality continue to make sense in an economy whose dominant technologies expose and enhance the collaborative nature of human creativity? And in an era of both idealized and vulgarized “transparency,” can privacy—the social buffer that cultivates whatever potential for a robust individualism we may actually possess—retain anything more than a nostalgic value?

These are provocative questions, and I don’t agree with all their implications. But I am very happy to be part of an institution capable of exploring them with the help of computer scientists, philosophers, physicians, social scientists, and humanists.

I suppose Judge Edwards would find it one more symptom of the decadence of the legal academy that I’ll be discussing my book this term both at the Institute for Advanced Studies in Culture at UVA and at MAGIC at the Rochester Institute of Technology. But when I think about who might be qualified to help lawyers bridge the gap between policy and engineering in the technology-intensive fields I work in, few might be better than the experts at MAGIC. The fellows and faculty at IASC have done fascinating work on markets and culture, work that would, ideally, inform a “law & economics” committed to methodological pluralism.

The Black Box Society: Interviews

My book, The Black Box Society, is finally out! In addition to the interview Lawrence Joseph conducted in the fall, I’ve been fortunate to complete some radio and magazine interviews on the book. They include:

New Books in Law

Stanford Center for Internet & Society: Hearsay Culture

Canadian Broadcasting Corporation: The Spark

Texas Public Radio: The Source

WNYC: Brian Lehrer Show

Fleishman-Hillard’s True

I hope to be back to posting soon, on some of the constitutional and politico-economic themes in the book.