The Sorcerer’s Apprentice, Or: Why Weak AI Is Interesting Enough

Not many people in the legal academy study artificial intelligence or robotics.  One fellow enthusiast, Kenneth Anderson at American University, posed a provocative question over at The Volokh Conspiracy yesterday: will the Nobel Prize in Literature ever go to a software engineer whose program writes a novel?

What I like about Ken’s question is its basic plausibility.  Software has already composed original music and helped invent a new type of toothbrush.  It executes the majority of stock trades.  Software could one day write a book.  A focus on the achievable is also what I find compelling about Larry Solum’s exploration of whether an AI might serve as the trustee of a trust, or Ian Kerr’s discussion of the effects of software agents on commerce.

I commute back and forth to Stanford from San Francisco and, to pass the time, I listen to the occasional audiobook.  A few weeks ago I finished Daniel Wilson’s Robopocalypse, slated to become a Steven Spielberg movie in 2013.  The book was entertaining.  It was also technically quite specific.  Wilson, a roboticist with a PhD from Carnegie Mellon, lends a certain realism to his doomsday scenario: many of the robots he describes exist in prototype, and some of the ethical issues flow from the contemporary human-robot interaction literature.

But like most scary robot stories, Wilson’s depiction of a robot revolution helps itself to a quixotic key ingredient: a sentient machine.  The villain in Robopocalypse is a self-aware computer program called Archos that, in what must be a nod to Milo of Microsoft’s Project Natal, presents itself as a soft-spoken little boy.  This psychotic artificial toddler decides it would be a good idea to prune the human race by a few billion and sets about coordinating a massive robot assault.

Strong AI, meaning general intelligence of the sort we might expect from a conscious being, is a common feature of movies involving robots, killer or otherwise.  Think Terminator or 2001: A Space Odyssey.  But machine sentience, let alone malice toward people, is not plausible in anything like the short run.  A friend in robotics at the University of Sydney described the state of the art this way: we have been doing AI since at least the 1950s, when the term was coined at Dartmouth College.  Sixty years later, robots are about as smart as insects.

In a lovely essay, Northwestern’s John McGinnis acknowledges the hurdles we would have to overcome to achieve strong AI.  One is vastly increased computational power.  I agree with McGinnis that gains of this sort are likely in light of the sustained exponential growth we have seen to date.  The second, however, is software capable of leveraging that computational power into a form of intelligence.  Here I think the case is thinner.  Time will tell, of course, and I should note that AI is but one of the technologies McGinnis examines in what promises to be a fascinating book, Accelerating Democracy.

Weak or “narrow” AI, in contrast, is a present-day reality.  Software controls many facets of daily life and, in some cases, this control presents real issues.  One example is the May 2010 “flash crash,” in which the stock market briefly plunged nearly a thousand points before recovering.  A subsequent report on the crash placed much of the blame on high-frequency trading algorithms.  Danielle Keats Citron has likewise written about the problematic role of autonomous software programs deployed by government agencies.

One of my favorite works of fiction to explore AI’s potential impact on society is Daemon, a recent novel by Daniel Suarez.  Suarez’s vision is of a series of relatively simple software programs, set into motion by a game designer, that are able to act on the world.  Suarez is a more gifted writer than Wilson, in my view, but the book’s real appeal comes from the fact that almost everything in the narrative could happen today.  And, importantly, the book’s villain is a really clever person—one who uses software to manipulate and harm others.  The result is eye-opening, and the implications for law and society are arguably immediate.

I would recommend any of these works.  I am also happy to report that Solum, McGinnis, Kerr, and an AI expert are coming to Stanford Law School this October for a panel on AI and the law.  We hope to record the panel and post it on the Center for Internet and Society’s website.  But in my view, our first priority should be thinking through the negative ramifications of the many computer programs already capable of acting upon the world.  Worrying that robots will become self-aware and hurt people feels a little like worrying that mops and brooms will become enchanted and ruin the sorcerer’s house.

5 Responses

  1. Woody says:

    Great post, Ryan. It caused me to think about bots and the capability of AI to make contractual decisions. One of the reasons most people don’t read terms of use agreements is that they are selectively enforced. The likelihood that a user will suffer any kind of consequence for breaching the agreement is usually quite low.

    However, what if someone developed and deployed AI that could comb a website (or other websites) in search of terms of use violations? For example, patrolling for unauthorized pseudonyms or copyrighted content? Couldn’t software be used to detect language commonly used in flame wars or bullying and automatically suspend or terminate user accounts? (Aren’t these types of searches already a part of some websites’ regular administration?)

    Do you think the mass deployment of AI as a contractual agent to enforce terms of use is realistic? Or would it bring too much consumer attention to the terms that are actually under the hood? We’ve already seen an emphatic reaction to the G+ account terminations.
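
    To make the idea concrete, a crude version of what I have in mind fits in a few lines of Python. Everything in this sketch (the phrase list, the sample posts, the function names) is hypothetical, and a real system would need far more nuance than keyword matching:

        import re

        # Hypothetical phrases a site might treat as terms-of-use red flags.
        FLAGGED_PHRASES = ["you idiot", "shut up"]

        def violates_terms(post_text):
            """Return True if the post contains any flagged phrase (case-insensitive)."""
            return any(re.search(re.escape(phrase), post_text, re.IGNORECASE)
                       for phrase in FLAGGED_PHRASES)

        def review_queue(posts):
            """Yield (user, post) pairs that merit a moderator's attention."""
            for user, text in posts:
                if violates_terms(text):
                    yield user, text

        # Toy data; in practice this would be a live feed of new posts.
        posts = [("alice", "Interesting argument."),
                 ("bob", "Shut up, you idiot.")]
        for user, text in review_queue(posts):
            print("flagged %s: %r" % (user, text))

    Even the toy version suggests where the hard part lies: automating enforcement is easy, but deciding what should count as a violation is not.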

  2. Ryan Calo says:

    Thanks, Woody. It sounds like you have the makings of a new article! Have you met Harry Surden at Colorado? He is a former Stanford CodeX fellow who has been thinking about automating law and compliance.

  3. Miriam A. Cherry says:

    Enjoyed your post as well, Ryan. I have not yet read Robopocalypse, but may do so based on your rec. I tend to be somewhat of a tech optimist, though (you’ve already heard how much I like my GPS). I hope you’ll do a future post on the effects of human-computer interaction. Best, Miriam

  4. Ryan Calo says:

    Thanks, Miriam! I think you’ll find the book entertaining at a minimum.

    Concurring Opinions has invited me back for a second month, so I will be sure to write something on human-computer interaction. Thanks for the request!

  5. Ray Renteria says:

    Thanks for taking the time to write this, Ryan. Enjoyed it! I’ve got a few more books on my reading list now!

    I also couldn’t help but think about an Austin, TX company when I read your comment exchange with “Woody” above. The company, CSIdentity, develops AI agents that patrol chat rooms and message boards soliciting stolen data (and thus identifying data fencers), using the same lexicon they pick up from the hackers’ own dialogue.

    I’ve also posited that we will not be able to discern humans from agents on our Twitter lists or among our social network contacts. The more impressionable among us will be at the whim of agents’ referrals and recommendations. I’d like to read your thoughts on that topic sometime, too.

    Thanks again!

    –Ray

    (here’s an article about CSIdentity on SFGate http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2011/09/18/BUL81L57IL.DTL)