BRIGHT IDEAS: Talking About Robotics With Ryan Calo
Once just fantasy, robots are increasingly prevalent in the twenty-first century. Ryan Calo, a Senior Research Fellow at the Stanford Center for Internet and Society, has been doing fascinating research on the topic. Along with his work at Stanford, Calo serves on the programming group for National Robotics Week and will be co-chairing the Committee on Robotics and Artificial Intelligence for the ABA. (He also tweets about privacy and robotics at twitter.com/rcalo.) This month’s ABA Magazine has a terrific article discussing Calo’s work, and I wanted to follow up on that piece with an interview of my own. I reproduce my discussion with Calo below.
DC: Tell our readers about your research on robotics.
RC: Thanks very much for your interest. I’m researching essentially two aspects of robotics and the law. First, I’m looking at the potential impact of robots on society—for instance, with respect to privacy—and whether existing laws suffice to address this impact. Second, I’m investigating what the right legal infrastructure might be to promote safety and accountability but also to preserve the conditions for innovation. In each case, my focus has been on “personal” or “service” robots, a rapidly expanding category of consumer technology that encompasses everything from a Roomba to a humanoid Nao. I’m also interested in autonomous vehicles and vehicle features such as lane departure prevention.
DC: What are the most pressing concerns now and what issues do you foresee as pressing in the future?
RC: Today the most pressing concern is the military’s use of robotics. Literally thousands of robots have been deployed in the field, with more on the way. Peter Singer has marshaled extensive evidence that robots may skew individual and military priorities in some instances. On the one hand, I agree that we should be worried about our increased capacity and willingness to kill at a distance. On the other, as Ken Anderson has pointed out, robots may allow for more surgical strikes on enemy targets, reducing so-called “collateral damage” to civilians and infrastructure.
The second pressing concern is the uncertainty around liability for what end-users do with robots. Robots share two key similarities with computers and software: (1) responsibility can be difficult to parse in the event of a malfunction or accident and (2) many of the innovative uses of robotics will be determined by end-users. We’ve managed to domesticate the issue of computer liability with doctrines such as economic loss; you cannot sue Microsoft because Word ate your term paper. But this option is unlikely to be on the table with robots, which can cause corporeal harm.
We need to get this issue of liability right. Would you build robots or invest in robotics if you were uncertain of your legal risk? Would you build versatile, “generative” platforms (to borrow a term from Jonathan Zittrain) if you might be held accountable for whatever users do with those platforms? I wouldn’t.
DC: What are the broad areas of law most implicated by advances in robotics?
RC: I believe that robots will eventually have an impact on many, if not all, areas of the law. If we’re talking the next ten years, I would list three in particular: product liability, privacy, and labor law.
Product liability: Classic notions in product liability law will not function well in the context of robotics. Take foreseeability: the possibility of harm is obvious, but its exact mechanism will be extremely difficult to predict and guard or warn against. Or take proximate cause: candidates for why a robot caused a harm include its hardware, its software, its environment, and user input. There could be a different person behind each. The leading “robot operating software” is open source, meaning that there is not even a single author. Meanwhile, robot manufacturers will have relatively deep pockets and, given early use of robotics in areas like eldercare and autism research, plaintiffs will be understandably sympathetic.
Privacy: Robots are essentially a human instrument, and one of the chief uses to which we’ve put that instrument is surveillance. In addition to vastly increasing our power to observe, however, robots have the potential to open new pathways for government and hackers to access the home. Finally, robots have a social meaning that most machines lack. We tend to treat them as though a person were really present, including by experiencing the feeling of being observed and evaluated. How we interact with robots also reveals more about us than how we interact with other appliances does. For those who are interested, I’ve written a book chapter on this topic, forthcoming from MIT Press and available on SSRN.
Labor law: My colleague Dan Siciliano made an interesting observation to me recently in private conversation. He speculated that were even one job at a fast food chain replaced by a robot—easy enough to imagine—the resulting shift from payroll to capital expenditure could upset an entire state program that depends on payroll tax. For now the model that has emerged out of auto factories, where replacing humans with robots is common, has largely sufficed. But this dynamic may play out differently in new contexts or at larger scales.
DC: Can law address them adequately?
RC: In some cases I think the answer is clearly yes. We can pass laws prohibiting certain uses of robots in warfare or immunizing manufacturers for some of the uses to which consumers put their products. In others, I’m skeptical. Take the third way robots implicate privacy, i.e., through their unique social meaning. If it turns out robots exert a subtle chill on expression, interrupt solitude, or persuade especially efficiently, the law will not be well-positioned to react.
DC: Your remarks to the ABA Magazine suggested that you would like to protect this emerging industry in much the way that law protected industry at the inception of the Industrial Revolution. Do you frame the issue this way, or am I reading too much of Morton Horwitz’s The Transformation of American Law into your comments?
RC: The short answer to your excellent question is that I’m not thinking on this grand a scale. I’m looking more toward the success of the reigning transformative technologies—computers and the Internet. (I’m not the only person to make this analogy: the recent report to Congress by leading robotics institutions was titled “Roadmap: From Internet to Robotics.”) I believe we can promote the same success with robotics using a handful of statutory interventions; I don’t mean to endorse a sustained economic instrumentalism.
I believe that robotics holds enormous promise along many lines. There is evidence that programs like FIRST and Robogames are helping to promote interest in science, technology, engineering, and math (STEM), dangerously low among young Americans. As Ki Mae Heussner of ABC News points out, robots are capable of doing many of the tasks that humans risk their lives to do today. Her example is the recent mining accident; MIT is building a robot miner that could be operated from a safe position. I would add that robots have been used to address several high-profile situations of late—a robot helped disassemble the Times Square bomb; robot submarines are involved in the Gulf Coast oil spill; etc. Remember too that the vast majority of car accidents are caused by human error.
I think that ultimately the most interesting uses of robotics will be determined by end-users—individual and corporate customers that modify robots in interesting ways and put them to novel uses. I worry that before we get there, however, there will be a high-profile, high-stakes accident involving a robot that—if handled the wrong way—will chill investment in, and diversity of, the American robotics industry. Other countries with higher bars to litigation and a greater acceptance of robots could then leapfrog the United States with respect to the transformative technology of our time. After all, we did this ourselves in the context of the Internet—as Eric Goldman has pointed out, it is no accident that Google, Yahoo!, Microsoft, etc., all hail from the U.S.