Are Robots and Algorithms Taking Over?

The past half-decade has seen an uptick in thoughtful and influential scholarship on the potential risks — particularly to privacy and civil liberties — of emerging technologies. Regular readers of this blog will not be surprised to find works by several Concurring Opinions bloggers on any list of must-read commentary on the legal, ethical, and political dimensions of new data-driven technologies. Technological progress (or regress, depending on your point of view) has become one of the dominant narratives of our time, and it’s good that critiques of its darker implications have slowly but inexorably entered our political discourse.  

Still, there’s a smallish subset of tech commentary and criticism that is, in my view, overwrought. These are critiques that, on their face, seem to have no particular target other than technology tout court. They often run under alarmist headlines that their substance doesn’t support. They cite the marketing claims of technology vendors as if they were statistics. Their true targets are generally people, or political ideologies, rather than technology — a critical fact that often remains buried in the work. Sue Halpern isn’t usually part of this subset (her work on the surveillance disclosures, for example, has been thoughtful and important), but her latest effort, in the pages of the Review, comes close. (Though, as I’ll explain, she gets a lot right as well.)

The headline: How Robots and Algorithms Are Taking Over.


Are robots and algorithms really taking over?  Will technological unemployment beget a new era of economic and social disorder? I’m skeptical.

For one, it seems to me that Halpern, and perhaps the writers she cites (disclaimer: I enjoy Carr’s work, but haven’t read his latest book), at times assume too much. Some of the technology referenced in Halpern’s piece — from automated legal analysis tools to code-writing algorithms and surgery-bots — isn’t close to replacing humans, and that’s likely to remain the case for some time. A few of the factual claims — “algorithms are writing most corporate reports” — are probably wrong (and certainly unsupported). Others seem to overstate significantly milder facts. (“Xerox uses computers—not people—to select which applicants to hire for its call centers,” Halpern writes. If this is what she’s referring to, then, not quite.) These examples form a shaky foundation for Halpern’s broader claims about algorithms and robots “taking over.” As Halpern acknowledges, history is littered with overconfident predictions about the scale of problems like technological unemployment. What Halpern does not sufficiently explain is why this time might be different.

Second, assumptions about the broad course and societal effects of technological progress are just that: assumptions. Few people (technologists included) predicted the information revolution of the 1970s and 1980s. Predictions about strong AI, including those made by respected engineers, have for the most part fallen far short of reality. Many more recent claims about superintelligence and computer consciousness are hyperbolic at best. Halpern’s implied technological projections — that automation and algorithms are on an inexorable path to “replace” human intelligence and make human decision-makers obsolete — should therefore be taken with a hefty grain of salt. These projections may well represent the conventional wisdom, and the current zeitgeist, but conventional thinking has a poor track record in this area.

Third, arguments premised on the ills or value of “technology,” “automation,” and “robots” are conceptually problematic. As Leo Marx put it in his marvelous essay “Technology: The Emergence of a Hazardous Concept”:

To invest the concept of technology with agency is particularly hazardous when referring to technology in general—not to a particular technology, but rather to our entire stock of technologies. The size of that stock cannot be overstated. By now we have devised a particular technology—an amalgam of instrumental knowledge and equipment—for everything we make or do. To attribute specific events or social developments to the historical agency of so basic an aspect of human behavior makes little or no sense. Technology, as such, makes nothing happen.

(Nice hat tip to Auden in that last sentence.) So while some of Halpern’s more targeted attacks identify genuine problems — the deleterious effects of automation on pilots’ flying abilities, or the cognitive costs of over-relying on Google searches — it’s unclear whether they support her broader claims, let alone her predictions, about technology, or automation, or algorithms, “as such.”

To be sure, the increased reliance on algorithms and robots in fields as varied as banking, advertising, law enforcement, and medicine can and does have specific pernicious social effects. It can create new mechanisms for exclusion; commodify moral goods like friendship and love; reinforce divisions across race, class, and socioeconomic lines; and create the possibility of total surveillance. Halpern is absolutely correct to reaffirm, in the article’s final section, that these and other issues at the intersection of technology and policy should be central to our public and political discourse, rather than surrendered to engineers and scientists holed up in Silicon Valley back-rooms. (Disclaimer: I work for, but do not speak for, a tech company that creates data analytics software.)

But what I take to be the article’s core claim — that automation and algorithms will make many humans (and, impliedly, humanity itself) obsolete — doesn’t compute. Recent developments in robotics and analytics don’t necessarily portend a robot takeover. They might just as easily cause (as technological developments have throughout history) a shift in human energy toward new classes of problems — many of which, like climate change, overpopulation, and the social and political problems listed above, may themselves be tied to the adoption of new technologies. And while, on one account, the Google car and the Amazon drone may leave some people behind, there’s at least a plausible argument that these tools may free future generations to solve tougher problems than getting people and goods to the right address. (But see: the very serious problems listed in my previous paragraph.)

In other words, no matter how good “technology” becomes, we are unlikely to ever run out of work that needs doing or problems that need solving. There’s no question that we need to become better at building institutions that efficiently marshal human energies towards this evolving set of challenges. That, of course, is not a new problem; nor is it one that robots are likely to solve for us.

The views expressed here are my own and don’t necessarily reflect those of my employers past or present.

