How To Regulate Drones

I started to think about the intersection of robotics and the law in earnest a few years ago when I left private practice.  In 2011, I came to the conclusion that drones had the potential to create a new Warren and Brandeis moment.  Some combination of our visceral reaction to robotic technology, our fascination with flight, and our association of drones with the theater of war could, I thought, trigger a reexamination of privacy law. Drones have indeed captured the public imagination. And we are entering something of a policy window, to borrow a concept from Priscilla Regan. But just how citizens and lawmakers ultimately come down on the domestic use of drones remains to be seen. In this post, I will talk about what I think are the worst and best ways to regulate drones with respect to privacy.

For many, the word “drone” brings to mind an image of the military-grade Predator. The folks within the DIY Drones movement, however, and most local law enforcement, are more likely to be building or using a quadcopter. These devices are light, generally battery-powered, and capable of somewhere between ten and forty-five minutes of flight time. Obviously “unmanned aircraft systems,” as Federal Aviation Administration regulations refer to them, are quite varied. But I think three aspects are essential, mostly to distinguish drones from other technologies. The first is that they fly; the second is that they have the ability to sense the world around them; and the third is that they are capable of some small degree of operational autonomy. Thus, the app-enabled Parrot AR Drone counts, whereas a driverless car or a remote-control plane with a camera does not.

The greatest privacy concern with drones, however we define them, seems to be that they will significantly drive down the costs of routine aerial surveillance, which will lead to more of it. I was a guest on NPR’s Talk of the Nation this week and a caller—apparently a municipal official charged with the enforcement of building code violations—suggested as much. He said he would be more likely to look for minor infractions if he had access to a drone. The other guest, from the law enforcement community, referred several times to the relatively low cost of operating a drone—$25 an hour was the figure he cited. Drones, the argument runs, will make massive surveillance too easy.

Street cameras are reasonably inexpensive as well. But drones are mobile, and may come equipped with the capability to do more than record video and audio. Drones could detect if people are armed, for instance, pinpoint and even intercept electronic communications, generate thermal images, or detect chemical signatures such as that given off by marijuana. It could be prohibitively expensive to put each of these sensors on every street corner, but not necessarily to fly one or two sets of them around a city. I should note that few unmanned aerial systems have these capabilities today, but some do, and the Department of Homeland Security is apparently looking into acquiring more.

Were privacy laws stronger in the United States, we might not worry so much about greater surveillance capacity. But very little in the way of privacy law limits the domestic use of drones. As a general matter, we do not enjoy a reasonable expectation of privacy in public spaces, nor in spaces visible from a public vantage (including the airways). Five justices worried aloud in United States v. Jones that following someone around “electronically” for a long time may require probable cause, but technically Jones was decided on the basis that officers physically affixed a GPS device to a car—which drone surveillance does not require. Moreover, citizens do not generally enjoy a reasonable expectation of privacy in contraband. The use of dogs and other technologies that isolate wrongdoing may not even constitute a search. The Supreme Court may expand this doctrine in Florida v. Jardines—which presents the question of whether officers need probable cause to conduct a dog sniff of a home. Or perhaps the Court will find for the defendant on the basis that homeowners do not tacitly consent to bringing a police dog onto their property. But again, no consent is needed, and no trespass or seizure occurs, if a drone equipped with a chemical sensor flies over your person or your greenhouse.

I’ve heard members of the law enforcement community argue that Kyllo v. United States—which holds that officers need a warrant to use thermal imaging or other technologies not in general use to peer into the home—requires probable cause before officers can use a drone to look into your back yard. The officer on Talk of the Nation made this point. I think it clear that a court would instead apply the logic of Florida v. Riley. But even if not, drones will be in general use soon enough—September 2015 being the target date for when the FAA relaxes its ban on commercial drone use within the United States. I am not even convinced that Kyllo requires a warrant to use thermal or chemical sensors, where the drone—like the dog—alerts officers exclusively upon detecting a suspicious heat pattern or odor. Only the drone will know what hour the lady takes her sauna.

Given the increasing prevalence of drones, their capacity for (cheap) aerial surveillance, and the limitations of contemporary privacy law, it makes sense that drones are causing a backlash. The question is what will happen in light of that backlash. I think the worst option is the least likely—that nothing will happen. As I discuss in a recent op-ed, proposals to regulate drones emanate from all quarters. Even the FAA, which initially demurred on the issue, is currently seeking comment on the privacy impact of drones in connection with its mandate to authorize a half dozen drone testing sites.

Doing nothing would be a bad idea because it would fail to address the legitimate concerns of drone opponents and would likely lead many to pressure the state and the market to reject the technology. Which brings me to the next-worst option: banning the use of drones entirely. For all their dangers, drones have the potential to be a helpful, even game-changing technology. They have assisted in everything from disaster relief to firefighting to spotting polluters. I don’t know about you, but personally, I do not want SWAT officers going into dangerous conditions without situational awareness if we can avoid it. More important still are the uses we have not even dreamed up. Personal drones are essentially flying smartphones, and their potential for commercial innovation could be just as great.

Better than banning drones outright is to place limits on their use. This is the approach of several federal and state bills. A bill in Washington State, for instance, contemplates a warrant requirement for the use of drones for any investigative purpose. There is certainly some logic to this approach—properly executed, drone-specific limits have the potential to allay fears while preserving some of the positive uses of the technology. But there are also several downsides. First, many of the bills I’ve seen address only the use of drones by public agencies. That officers have to get a warrant in some circumstances does little to address the specter of drone paparazzi. Second, as I alluded to above, agreeing on the definition of a drone is a nontrivial hurdle. The bill released for discussion last year by Representative Markey follows the FAA’s “unmanned aircraft systems” language. Whether the robot can fly strikes me as largely beside the point. Robots today are capable of climbing the sides of buildings, jumping high into the air, and so on. (If Oz were to ban flying monkeys because of all the mischief they cause, the Wicked Witch might just buy them some motorcycles.)

Of greatest moment is that we would be squandering an opportunity to confront our inadequate privacy doctrines. The best way, I think, to address the privacy problems drones pose is to finally drag our privacy laws into the twenty-first century. Recent efforts by courts—like the D.C. Circuit—and scholars—like Daniel Solove and Danielle Citron (writing with David Gray) of this blog—have begun to develop standards for surveillance of public spaces, whether by drone, cell phone signal, or another technology. These standards, like present-day Fourth Amendment doctrine, would also inform the privacy torts and the interpretation of civil statutes. There will be line-drawing problems, of course. But that is what courts do. It is reasonable to expect privacy in a big office but not in a little mall. There is no exigency when you smell X and none when you hear Y, yet when you both smell X and hear Y there is suddenly exigency. The advent of the car or of thermal imaging is a big enough change to require an adjustment in the equilibrium between police and perpetrator; the invention of the bicycle or of binoculars maybe not. I think we need a privacy law capable of drawing a distinction between reasonable and excessive public surveillance.

Second, I think we took a wrong turn with the dog-sniffing cases (plus United States v. Jacobsen), and for reasons having little to do with the possibility of false positives. We are too close to a world in which technology can scan our persons and property—both digital and physical—in search of the myriad substances and pop songs rendered illegal by aggressive drug or copyright policy. No one has a right to commit a crime, but the right to privacy should stand independent of that principle. Not all courts have followed the United States in this regard. The approach taken by the Supreme Court of Canada in R. v. Kang-Brown, for instance, strikes me as more sensible.

I will end on a positive note by saying that good drone policy might bubble up from anywhere. Virginia can ban drones while Nebraska and Colorado court them, and we’ll see what happens. Maybe a progressive state—I’m looking at you, California—will clarify its laws in a way that proves sustainable and just, setting an example for eventual national policy. The beauty of a federalist system is, of course, this opportunity for experimentation. In short, we should treat the domestic use of drones not just as a privacy crisis but also as an opportunity. As always, your thoughts are warmly welcome.

18 Responses

  1. Brett Bellmore says:

    With regard to the use of drones for drug law enforcement, the real problem is the drug laws. Any time you have a victimless crime, normal law enforcement (with its starting point of a victim letting the police know a crime has happened, if only by their disappearance in some cases) is going to be at a loss for effective enforcement within constitutional constraints. The use of drones to conduct warrantless searches is just one example of the generalized problem created by such laws, and the only real cure is to get rid of those laws.

  2. Shag from Brookline says:

    I imagine Brett has a “LIBERTARIANS UNITE!” bumper sticker on his car. Of course, once they unite, they cease being libertarians. Perhaps Brett could list what he thinks are “victimless crimes” to understand what makes him tick as a libertarian.

  3. Joe says:

    Drug laws cover unregulated drug sales, including sales to minors. Putting aside complex questions about what ‘victimless’ means (the limitations of humanity leads society to rightly regulate things that the people involved might not deem a problem), this remains true.

    The “drug war” is a big problem, and that gets broad agreement, but there are still going to be areas of regulation here. Likewise, until the libertarian nirvana comes, there are shades of wrong.

  4. Brett Bellmore says:

    “(the limitations of humanity leads society to rightly regulate things that the people involved might not deem a problem)”

    Until the day we find a different species to govern us, the limitations of humanity are the limitations of the people drafting the regulations, too. That makes the presumption that the regulators know better than the regulated somewhat dubious.

  5. Joe says:

    Human government is imperfect. Gotcha.

    Meanwhile, the rule is followed in numerous cases, including in at least one constitutional amendment (the 13A blocks something that used to be a fairly normal practice, servitude contracts). Until nirvana comes, the limitations of humanity will result in such things, set in place by our modified form of republican government.

    We can realize this and work within the limits of reality or toss around libertarian bona fides.

  6. Orin Kerr says:

    I think a privacy statute to regulate drones is quite sensible. Statutes can easily draw the line between reasonable and excessive; they just write it into the law in some arbitrary way. On the other hand, I think it is a challenge to get to that result under the Fourth Amendment law. Ryan, I read you as endorsing some sort of mosaic approach to get there, to which you add, “There will be line drawing problems, of course. But that is what courts do.” But I think it would be more accurate to say that courts usually strenuously avoid enacting doctrines that create such difficult line-drawing problems. For the reasons I have written about in my article on the mosaic theory, I think the line-drawing problems raised by the switch to a mosaic theory of searches are on a level of difficulty and complexity that explains why even the proponents of the theory won’t say how it applies. Courts don’t routinely enact that kind of doctrine.

  7. Orin Kerr says:

    Another question: Can you say how Dan’s article on the Fourth Amendment develops standards for surveillance of public spaces? Maybe I am just looking at the wrong part of the article, but I believe the only conclusion Dan reaches on public surveillance is that it “deserves at least some degree of oversight and regulation.”

  8. Joe says:

    In some cases, courts might provide basic limits (e.g., some sort of process for enemy combatants) while legislatures can provide the details.

  9. Ian says:

    In the late 1990s, the chosen option was fixed cameras, which enabled quantitative surveillance but which, in the search for the exception, breached the privacy of everybody. (Many factors affected this choice, with the main one appearing to be external pressures creating internal threats, mainly terrorism.) Mobile cameras (generally in or on a vehicle of some sort), which limited privacy intrusions to particular ‘targets’ (giving a degree of quality to the surveillance—not so much public noise from the innocent), were more difficult to ‘police’ and to ensure were used correctly to limit privacy intrusions for the innocent.
    While the simpler, more generally privacy-intrusive option was chosen, the more focused option has not gone away, and the challenges that focused option presents have not changed.

    Given that the difficulties with surveillance emanate from its use, and that use is driven by its purpose, it seems logical to consider the social character of those proposing the purpose in order to determine the likely methods that will be used both to manage any surveillance and to manage the surveilled.
    If the purpose is coercive, coercion must inevitably form part of the outcome if both those supporting the surveillance and those using it exist within a coercive worldview.
    So, what is achieved by layering controlled, targeted surveillance measures over a system of wider static surveillance appears to be the same question as: what is achieved by installing static surveillance where targeted mobile surveillance could be effectively used?

    Moving on to an allied but seemingly more important question. If most people’s privacy is continually breached, and a majority react in a reasonably low-level way to such privacy threats by presenting a more rigid and brittle exterior in most public situations, would the society they live in (transparently to them) begin to reflect that same external coating? Even accepting that a documented baseline exists in the USA, such changes in nature would seem feasible as changing pressures in the external environment are successfully used to internally justify more coercive mechanisms. Over the years, has the US Constitution been refined to provide more social constraints, or has it kept freedoms at a broadly consistent level within that society? Looked at from a privacy perspective, the answer appears to be the former; so expressed liberal views and resistance to surveillance from worldviews where coercive methods are not seen as the norm may be understandable, but left to themselves they eventually appear to degenerate as they use the same mechanisms of more refined constraints and coercion as a means of advancing their own views.
    Do you consider achieving the acceptance of progression and change sufficient reason to coercively narrow (or tighten) broader social controls, and if so, how are beneficial value transfers from other societies safeguarded?

  10. Ryan Calo says:

    I’m on the road but wanted to thank everyone for their great comments.

    I should be clear that I think statutory fixes are a fine idea, but that they should not be limited to technology capable of flight. I’m also a little skeptical of the distinction between the kind of line drawing courts do all the time, and the kind Orin is saying they run away from. Note that even the question of what constitutes a regular and a “difficult” line-drawing problem requires line drawing…

  11. Ryan Calo says:

    To be clear, I don’t necessarily embrace the mosaic theory as such. Rather, I reject a stark public-private distinction. And while I don’t want to speak for Dan, I read him to be suggesting that we should emphasize the reasonableness of virtually all acts of government surveillance, rather than cut off the inquiry by channeling conduct or technology into buckets like search or not search—a re-imagining that applies to dogs and drones alike. I also recall that Dan laid out some criteria to help courts conduct that reasonableness inquiry. Again, I’m a terrible proxy for the professor himself.

  12. Orin Kerr says:

    Thanks for the response. If you think the line-drawing can be done, though, can you provide some idea of how you think courts should do it? As I wrote in my Mosaic Theory article, although I appreciate the confidence in the judiciary that judges can find the answers, I find it noteworthy that proponents of new approaches generally decline to say what the new approaches are.

    As for Dan’s proposal, I realize it is not your proposal, but FWIW, in my view it mostly moves doctrinal boxes around. Current doctrine has two basic questions: 1) which steps are sufficiently low-level as to not be regulated, and 2) among the ones that are not so low-level, how much regulation should the law impose? Dan’s reconstruction of the Fourth Amendment replaces the two questions with one question that has two parts. That is, he says that everything should be regulated in some sense, but some things should be regulated by having no restrictions at all placed on their use. So Dan would have all the doctrine rest on step (2), how much government conduct should be regulated, with the catch that he would say the answer can include “not at all.” It seems to me that this is just existing law relabeled: it effectively replaces a two-part inquiry with an inquiry that has two steps.

  13. Ryan Calo says:

    How does a court know when a change in technology is important or significant enough to require an adjustment of Fourth Amendment equilibrium? What is the test?

    I ask because you’ve claimed, essentially, that courts “routinely enact that kind of doctrine.” Courts do—and should—look for changes in equilibrium and, at some point not specified, use those changes in both selecting and applying Fourth Amendment law.

    Help me square the circle here. What am I missing? I’ll confess, meanwhile, that I do not have a good test for when an expectation of privacy in public becomes reasonable at this time…

  14. Orin Kerr says:

    I think you are missing the distinction between whether to depart from precedent and how to depart from precedent. There are two different issues: (a) whether judges decide that they should depart from precedent in light of technological change, and (b) having decided to depart from precedent, how the judges go about deciding what doctrinal test should replace the old precedent. The theory of equilibrium adjustment is just an explanation for what judges are doing when they do (a); it has nothing to do with (b). So even if we assume that drones bring us to a point where judges will enact some kind of a new rule under (a), we then have to confront the separate issue of (b)—what kind of rule to enact to replace the old doctrine. And at that point Justices tend to be very sensitive to the administrability of new rules.

  15. Ryan Calo says:

    I appreciate the distinction, Orin, but not the difference. If judges have to answer a threshold question with a line-drawing problem before they know what doctrine to apply, it seems to me we have the same issue, only on the front end.

    In other words, whether equilibrium adjustment is just a description of what judges are doing or, as I read your article, something of a doctrinal roadmap for judges to follow (with concrete advantages), it would seem we are doomed to suffer some form of line-drawing problem. At a minimum, that would seem to take the sting out of your critiques of the mosaic theory or reasonableness inquiries.

  16. Orin Kerr says:

    We’re going to have to agree to disagree on this, it seems. On your last point, though, note that my mosaic theory article presumes that courts will engage in equilibrium adjustment but explains why the mosaic theory is a deeply misguided way to engage in it. If judges think that they need to engage in equilibrium adjustment to deal with public surveillance, I argue, the answer is to just overturn Knotts rather than to try to create a vague, undefined middle ground through the mosaic theory. In other words, do what the Supreme Court did in Katz: engage in equilibrium adjustment by saying certain conduct is *always* a search rather than trying to say that it is *sometimes* a search, with the circumstances in which it is a search so mysterious that even its proponents decline to say when that is. It’s a bolder approach than the attempted middle ground of the mosaic method, but at least people know what the cops can and can’t do.

  17. Ryan Calo says:

    Thanks, Orin, for this and your other comments. I have yet to read your mosaic theory paper (just your previous thoughts) and will do so with great interest.

  18. Wells says:

    Ryan: great and well-balanced thoughts here; thanks for them.