A Legal Theory of Autonomous Artificial Agents offers a serious look at several legal controversies set off by the rise of bots. “Autonomy” is one of the key concepts in the work. We would not think of a simple drone programmed to fly in a straight line as an autonomous entity. On the other hand, films like Blade Runner envision humanoid robots that so closely mimic real Homo sapiens that it seems churlish or cruel to dismiss their claims for respect and dignity (and perhaps even love). In between these extremes we find already well-implemented, cute automatons. As Sherry Turkle has noted, when confronted by the robotic seal Paro, children “move from inquiries such as ‘Does it swim?’ and ‘Does it eat?’ to ‘Is it alive?’ and ‘Can it love?’”
For today’s post, I want to move to another, perhaps childish, question: can the bot speak? The question will be particularly urgent by 2020, but it is relevant even now, because corporate and governmental entities want to deploy armies of propagandizing bots to disseminate their views and drown out opposing voices. Consider an experiment that Tim Hwang, of the law firm Robot, Robot & Hwang, ran on Twitter, as Bob Garfield recounted in conversation with him:
GARFIELD: Earlier this year, 500 or so Twitterers received tweets from someone with the handle @JamesMTitus who posed one of several generic questions: “How long do you want to live?” for example, or “Do you have any pets?” @JamesMTitus was cheerful and enthusiastic, kind of like those people who comment on the weather and then laugh heartily. Perhaps because of that good nature, or perhaps because of his inquiring spirit and interest in others, @JamesMTitus was able to strike up a fair number of continuing conversations. Only thing is, there is no @JamesMTitus. He, or it, is a bot, a software program designed to engage actual humans in social networks.