Author: Ryan Calo


Facebook Privacy Dinosaur


I have yet to see it “in the wild,” but media outlets are reporting that Facebook has created a Privacy Dinosaur—a little helper that checks in on users in real time to help ensure that they understand who will see their update or post.  Whether you think of this as “visceral notice,” a privacy “nudge,” or “obscurity by design,” suffice it to say that this development will be of interest to many a privacy scholar.


Controlling Ghosts

According to a story in The Economist that Deven just flagged, shippers are experimenting with the use of “ghost” vessels without crews to move cargo.  Reasons include greater safety and lower costs, in part because crews make errors and good crews are expensive.  The article has a short section on “rules of the sea” and offers the view of one engineer that there shouldn’t be any legal problems where humans control the vessels from shore.

As it happens, I discuss a version of this issue in Robotics and the New Cyberlaw (at *131):

Craig Allen, a maritime law scholar, recently considers whether unmanned submarines with autonomous capabilities qualify for the full panoply of rights generally afforded vessels in international waters.  International law [e.g., UNCLOS VII, Art. 94(1)] premises these rights on the ability of a flag state to retain, again, “effective control” over the behavior of the vessel and crew.  This has been taken to mean that there are one or more people in charge of the vessel who are beholden to the flag nation in the right ways.  The question of whether autonomous systems are beholden to the United States is not (merely) academic: A nation such as China has a strategic incentive to disqualify underwater American military equipment that patrols its sea shelf, such that international bodies may have to confront the question sooner rather than later.

I suspect that remote piloting of vessels will be only the first step—just as so-called “platooning,” where a professional controls multiple vehicles following closely behind, is likely to precede broader deployment of driverless cars.  I wonder whether even the initial deployments will involve some level of autonomy that kicks in when, for instance, contact with the ship is severed.  Certain classes of military drones return to base if they lose contact with the pilot.  Moreover, there are already smaller research vessels that navigate the “high seas,” where these obligations pertain, and collect or relay data without human intervention.  In any event, this issue of effective control—whether in tort, criminal, or apparently maritime law—is one of the ways robotics will pose challenges for law and institutions in the near term.


FAA Appeals Drone Decision

Last week, an administrative law judge invalidated a fine imposed by the Federal Aviation Administration on Raphael Pirker for using a small drone for a commercial purpose.  I discuss the basis for the decision–in short, that the FAA implied that the type of craft Pirker was using was subject only to non-binding guidance–over at Forbes.  In that post and elsewhere, I cautioned drone start-ups and others to wait and see what the FAA does in response to the ruling before rushing ahead with their ideas for a drone-based business.  Today the FAA announced that it was, in fact, appealing the decision.

How the appeal fares may depend on the way the appeals court characterizes the decision.  In issuing rules, the FAA has to follow the strictures of the Administrative Procedure Act, including issuing notice and soliciting comment.  The judge at one point refers to a defect in the FAA’s public notice concerning “unmanned aerial systems.”  According to the judge, “Notice 07-01 does not … meet the criteria for valid legislative rulemaking,” due to defects in title (not called a “Notice of Proposed Rulemaking” or “NPRM”) and timing (not issued 30 days in advance).  My understanding is that courts review procedural defects de novo under the APA.  Now, if an appellate court upholds the administrative judge’s decision on this basis, then the FAA loses authority to regulate drones in general, but only until it follows the proper procedure to create valid rules.

If the basis is that the FAA misinterpreted its own rules, however–i.e., the agency was wrong to sweep the drone Pirker was operating into its definition of “aircraft”–then arguably Seminole Rock / Auer deference applies.  Auer has faced its share of criticism, as my colleague Kathryn Watts explores in a forthcoming article in the Georgetown Law Journal.  But it remains the law of the land, and requires courts to uphold an agency’s interpretation of its own rules unless it is “plainly erroneous or inconsistent with the regulation.”  I don’t see the FAA’s decision to treat unmanned aircraft systems as aircraft as plainly erroneous.  Otherwise, you could simply replace the pilot of a cargo plane with a robot and suddenly the plane would fall outside the authority of the FAA.  But the FAA’s interpretation could be inconsistent: As the administrative judge notes, official FAA communications repeatedly treat some categories of “model aircraft” or “modelers” separately from other UAS.

The basis of the invalidation of the fine could be a procedural defect, an inconsistent interpretation, or both.  The ruling is not entirely clear.  We will have to wait and see how the court reacts to the FAA’s appeal.  And even if the court upholds the judgment, we should probably expect a drone NPRM from the FAA to follow.  Those of you with deeper training in administrative law should feel free to jump in.

CLARIFICATION (March 13, 2014): Peter Sachs of Drone Law Journal points out that the first layer of appeal here is to a five-member panel of administrative judges.  They could in theory clarify the basis of the decision (or overrule it) before the case heads to an Article III court.  Thanks, Peter!


‘Spritz’ Your Way Through Submission Season?

My social network keeps sharing links to Spritz—by one account, an “insane new app [that] will allow you to read a novel in under 90 minutes.”  You can see how it works by clicking on the image below.  I confess my initial reaction was: I don’t want to read a novel in 90 minutes.  As an occasional program committee member, however, I would find it very useful to be able to speed-read submissions from time to time.  I imagine that student editors, in particular, would love to be able to read introductions in mere minutes right around now.

Still, I think Spritz may be just a stepping stone to greater reliance on low-level artificial intelligence in digesting content.  Today, AI (like search) is good at finding content.  Tomorrow’s AI will increasingly be able to summarize that content as well.  See, for example, Summly, which Yahoo! just purchased to support its news feed.  We won’t need to speed-read because apps will squeeze everything into a paragraph.

Well, not everything, I hope.  I don’t want to speed into the turn of A Widow’s Yard or navigate One Hundred Years of Solitude in a hundred minutes.  But maybe that stack of articles will look just a little less daunting?



Facebook To Buy Drone Company (UPDATED)

My fantasy is that there is a student editor or two out there reading today’s headlines—that Facebook just joined Google, Amazon, and Apple in making heavy investments in robotics—and thinking: didn’t I just see an article about this?  Let me know if so.  :o)

UPDATE (March 7, 2014): Eric Schmidt says:

“We’re experimenting with what automation will lead to,” Schmidt said yesterday at a conference in Santa Monica, California. “Robots will become omnipresent in our lives in a good way.”

Google is pushing ahead with products beyond its core search business for new sources of user traffic and revenue in areas such as mobile and online video. The company also has shown a willingness to make bets on longer-term projects, such as wearable technology, robotics and driverless cars.

UPDATE (March 10, 2014): The Washington Post says:

LeClairRyan and McKenna Long are looking to expand their work representing companies that design, manufacture and operate drones in shaping the upcoming FAA regulations, as well as guiding companies through the FAA certification process. The FAA must certify any aircraft, manned or unmanned, that goes into the sky, and anyone who wants to operate a vehicle has to go through an application process.

The drone practice groups at both firms will not bring in new lawyers, but rather include attorneys already at the firms who specialize in aviation, intellectual property, employment, government contracting and general business law.


Robotics and the New Cyberlaw

Cyberlaw is the study of the intersection between law and the Internet.  It should come as no surprise, then, that the defining questions of cyberlaw grew out of the Internet’s unique characteristics.  For instance: an insensitivity to distance led some courts to rethink the nature of jurisdiction.  A tendency, perhaps hardwired, among individuals and institutions to think of “cyberspace” as an actual place generated a box of puzzles around the nature of property, privacy, and speech.

We are now well into the cyberlaw project.  Certain questions have seen a kind of resolution.  Mark Lemley collected a few examples—jurisdiction, free speech, the dormant commerce clause—back in 2003.  Several debates continue, but most deep participants are at least familiar with the basic positions and arguments.  In privacy, for example, a conversation that began around an individual’s control over their own information has evolved into a conversation about the control information affords over individuals to whoever holds it.  In short, the twenty or so years legal and other academics have spent studying the Internet have paid the dividends of structure and clarity that one would hope for.

The problem is that technology has not stood still in the meantime.  The very same institutions that developed the Internet, from the military to household-name Internet companies like Google and Amazon, have initiated a significant shift toward a new transformative technology: robotics.  The word “significant” is actually pretty conservative: these institutions are investing, collectively, hundreds of billions of dollars in robotics and artificial intelligence.  People like the Editor-in-Chief of Wired Magazine—arguably the publication of record for the digital revolution—are quitting to found robotics companies.  Dozens of states now have robot-specific laws.

What do we as academics and jurists make of this shift?  It seems to me, at least, that robotics has a set of essential qualities distinct from those of the Internet and, therefore, will raise novel questions of law and policy.  If anything, I see robotics as departing even more abruptly from the Internet than the Internet did from personal computers and telephony.  In a new draft article, I explore in detail how I think cyberlaw (and law in general) will change with the ascendance of robotics as a commercial, social, and cultural force.  I am particularly interested in whether cyberlaw—with its peculiar brand of interdisciplinary pragmatism—remains the proper intellectual house for the study of this new transformative technology.

I follow robotics pretty closely, but I don’t purport to have all the answers.  Perhaps I have overstated the importance of robotics, misdiagnosed its likely impact, or otherwise selected an unwise path forward.  I hope you read the paper and let me know.


Third Annual Robotics and Law Conference “We Robot”

Michael Froomkin, Ian Kerr, and I, along with a wonderful program committee of law scholars and roboticists, have for three years now put on a conference around law, policy, and robotics.  “We Robot” returns to the University of Miami School of Law from Stanford Law School this year and boasts an extraordinary roster of authors, commentators, and participants—folks like Jack Balkin, Ann Bartow, Kenneth Anderson, Woodrow Hartzog, Mary Anne Franks, Margot Kaminski, Kate Darling, and David Post, among many others.  Not to mention a demo from a roboticist at the University of Washington whose lab built the surgical robot for the movie Ender’s Game.

I’ve discovered that academics in other disciplines habitually list the acceptance rate of papers.  We Robot III accepted only twenty-five percent of the papers under submission, which compares favorably with the strongest and longest-running conferences in computer science, electrical engineering, and human-computer interaction.  Indeed, judging by the abstracts at least, the papers this year are very exciting, taking on difficult and timely issues from a range of perspectives.

On behalf of our community I invite you to register for and attend We Robot, April 4-5, 2014, in Coral Gables, Florida.  I also hope those who enjoyed We Robot I and II will chime in below, if inclined!  Thank you,

The We Robot III Planning Committee


‘Cognitive Infiltration’: the Dark Side of the Nudge

In their influential book Nudge: Improving Decisions About Health, Wealth, and Happiness, Richard Thaler and Cass Sunstein make the case that the government can and should leverage what it understands about behavioral psychology to “nudge” citizens toward healthier activities and outcomes.  One objection the authors acknowledge is that hacking people’s behavior, while not coercive in the classic sense, is still manipulative.  The authors anticipate this critique and respond by invoking the publicity principle from the work of John Rawls: officials should not nudge people in ways that would raise serious concerns were their methods made public.  “The government should respect the people whom it governs,” write Thaler and Sunstein, “and if it adopts policies it could not defend in public, it fails to manifest that respect.”

The publicity principle is not an adequate response to the manipulation critique for a few reasons.  First, the response conflates what is publicly acceptable with what is democratically legitimate.  And second, because it calls for internal deliberation (“could not defend”) rather than actual transparency, the response assumes officials and the public will be on the same page about what methods are objectionable.  This turns out to be a questionable assumption.

I discuss these and other objections to nudging in my recent essay Code, Nudge, or Notice?  What I want to focus on here is the revelation this week that the British government is using psychology to influence and disrupt “hacktivist” and other online communities.  What was particularly astonishing (to me) was that Sunstein advocated for a version of this practice the very year Nudge hit the market.  Specifically, according to reporting by the indomitable Glenn Greenwald, Sunstein co-authored a 2008 memo suggesting the use of “cognitive infiltration” against “anti-government groups.”

I point this out not to demonstrate somehow that Sunstein is a hypocrite.  In addition to being rude and ad hominem, such an allegation finds little support.  I personally have tremendous respect for Sunstein as an intellectual and public servant.  Rather, I offer the example of cognitive infiltration—which, let us be clear, represents the sheep of libertarian paternalism in wolf’s clothing—as a vivid illustration of how a publicity principle falls short.  Some officials in the United States and Great Britain thought it would be just fine to manipulate citizens psychologically in an effort to disrupt inconvenient (if often misguided) ideologies.  Other individuals think the practice is, well, disrespectful.


Judge Posner’s Surveillance Argument Would Not Withstand An Economic Analysis

Judge Richard Posner took the occasion of the Boston bombing to remind us of his view that privacy should lose out to other values.  Privacy, argues Judge Posner, is largely about concealing truths “that, if known, would make it more difficult for us to achieve our personal goals.”  For instance: privacy helps the victims of domestic violence achieve their personal goal of living free from fear; it helps the elderly achieve their personal goal of staying off of marketing “sucker lists;” and it helps children achieve their personal goal of avoiding sexual predators online.

To be fair, Judge Posner acknowledges that some concealment is fine and that privacy laws may even “do some good.”  He worries rather about civil libertarians who would limit the expansion of surveillance to the point that we can neither deter nor apprehend terrorists like the men responsible for bombing the marathon.  “There is a tendency to exaggerate the social value of privacy,” Judge Posner believes, and it just might get us killed.

Judge Posner is a founding member of the law and economics movement and, as such, it would seem fair to analyze his claim from the perspective of incentives.  Does video surveillance deter crime in general?  Empirical evidence suggests that cameras merely displace crime, and Judge Posner concedes that picking terrorists out of a crowd before they act is impracticable.  Does video surveillance help with identification?   Sure.  But the quick identification of the Boston bombers from private footage suggests we have enough surveillance.   Moreover, hardened terrorists have proven willing to die in an attack, making identification moot.

Then there are the unintended consequences—a mainstay of economic analyses of the law.  The fact that an act of terrorism will be caught on video and spread to every screen in America greatly enhances its intended impact, which in turn makes the option more attractive to our enemies.

One can quibble with my data points.  But any honest, empirically informed cost-benefit analysis of additional surveillance will yield at best a mixed picture.  I submit that Judge Posner’s argument yesterday is dead wrong by the terms of the very movement he founded.


Robots Take Over University of Washington School of Law

I recently returned from a two-day conference at Stanford Law School on robotics and the law to find that robots had, in my absence, taken over my own law school.  Setting aside my thirty-student (15 law, 15 engineering) robotics and the law seminar, my colleague Lea Vaughn is using a telepresence company as the quarter-long case study in her employment law class.  Bill Covington’s tech policy clinic has a dozen students working on driverless car and drone legislation.  Our entrepreneurial law clinic is helping a robotics start-up think about product liability.  And our law review is hosting a symposium on law and artificial intelligence in March 2014 (including a contribution by Concurring Opinions’ own Frank Pasquale and Danielle Citron).  It is increasingly clear to me that I will have to buy this t-shirt.