(Fewer) Rights for Algorithms?

Yale Law Prof. Ian Ayres’s Super Crunchers celebrates the new era of data-driven decisionmaking. A NYT piece provides a nice introduction to these technologies:

[W]hen so much data is processed so rapidly, the effect is oracular and almost opaque. Even with a peek at the cybernetic trade secrets, you probably couldn’t unwind the computations. As you sit with your eHarmony spouse watching the movies Netflix prescribes, you might as well be an avatar in Second Life. You have been absorbed into the operating system. . . . [W]hen executives at MySpace told of new algorithms that will mine the information on users’ personal pages and summon targeted ads, the news hardly caused a stir. The idea of automating what used to be called judgment has gone from radical to commonplace.

Jeff Lipshaw asks, in response: “is it possible to program [such automation] so powerfully that it replicates all possible human (i.e. brain) programming?” And Larry Solum helpfully brings up a 1990 article he wrote on the implications of such questions for law: “Could an artificial intelligence become a legal person?”

Though such a possibility might seem a long way off, it is embedded in some of Google’s recent legal arguments. Google is perhaps the world’s premier example of “automating judgment”; its engineers are constantly thinking of new ways to order information in response to search queries. According to one of its court filings, “Google takes extraordinary measures to protect its trade secrets and confidential commercial information.” While resisting any efforts to “peek under the hood” of its search processes, Google has also been claiming that whatever results it comes up with should be protected under the First Amendment. So one of the questions posed by Solum has nearly come to pass: Google is seeking constitutional protection for what (it assures us) is an entirely automated process. Should it get it?

Let’s follow a thought experiment parallel to Solum’s inquiry to begin thinking about the issue. Imagine that, instead of Google, Inc., a robot that ordered search results claimed that its outputs were First Amendment-protected speech. It would (be programmed to) argue that “it is a person, and that it is therefore entitled to certain constitutional rights.” Though I am abstracting from an extraordinarily rich and complex paper, I find it helpful to note here Solum’s immediate response to that possibility:

Should the law grant constitutional rights to AIs that have intellectual capacities like those of humans? The answer may turn out to vary with the nature of the constitutional right and our understanding of the underlying justification for the right. Take, for example, the right to freedom of speech, and assume that the justification for this right is a utilitarian version of the marketplace of ideas theory. These assumptions make the case for granting freedom of speech to AIs relatively simple, at least in theory. Granting AIs freedom of speech might have the best consequences for humans, because this action would promote the production of useful information. But assuming a different justification for the freedom of speech can make the issue more complex. If we assume that the justification for freedom of speech is to protect the autonomy of speakers, for example, then we must answer the question whether AIs can be autonomous.

Solum also considers a number of objections to granting the AI itself rights; for example:

[T]he “paranoid anthropocentric” argument [runs:] “AIs might turn out to be smarter than we humans. They might be effectively immortal. If we grant them the status of legal persons, they might take over.”
The second objection, that AIs lack some critical element of personhood, is really a series of related points: AIs would lack feelings, consciousness, and so forth. The form of the objection, for the most part, is as follows. First, quality X is essential for personhood. Second, no AI could possess X. Third, the fact that a computer could produce behavior we identify with X demonstrates only that the computer can simulate X, but simulation of a thing is not the thing itself. X is that certain something–a soul, consciousness, intentionality, desires, interests–that demarcates humans as persons. Call this argument, in its various forms, the “missing something” argument.
Finally, the third objection to constitutional personhood for AIs is that, as artifacts, AIs should never be more than the property of their makers. Put differently, the objection is that artificial intelligences, even if persons, are natural slaves.

The third objection points us in the direction of a more immediate resolution of the problem here: in our case the algorithm is a tool of an existing corporate entity, Google. But should the fact that Google’s results are automated mean they get less protection than, say, those of a social search engine that ordered the web? I think so, for reasons largely derived from the first two objections to “rights for AIs” mentioned above, and also because of the secrecy of the Google search process.

In terms of the AI argument, one might claim that Google is much closer to a data provider than, say, a newspaper. The latter actually expresses a point of view on what the news is; the former merely aggregates information. This difference has consequences for law.

Data providers like consumer reporting companies can be held more accountable for what they say than a newspaper can. If I have a dispute with a newspaper over whether it has portrayed me accurately, I’m probably going to have to sue for defamation in order to settle things. But according to an FTC website, “If an investigation doesn’t resolve your dispute with the consumer reporting company, you can ask that a statement of the dispute be included in your file and in future reports. You also can ask the consumer reporting company to provide your statement to anyone who received a copy of your report in the recent past.” Moreover, “only authorized individuals such as potential lenders, employers, insurance underwriters or landlords may access your report, and only if they intend to do business with you.” Finally, in case of disputes, “you’re entitled to add a written statement (100 words or less) explaining your view of the mistake.”

Why might we want to extend this type of distinction to the search world (and, indeed, strengthen consumer protections vis-à-vis “black box” data aggregators like ratings agencies and FICO scorers)? I think that there is something deeply troubling about unaccountable power–about a system that can simply spit out some life-changing result without giving a full explanation for it. Suspicion about FICO scores has led some states to prohibit their use in insurance rating, just as Finland has prevented employers from using Google results in evaluating potential applicants. Full First Amendment protection should be reserved for accountable, attributable speech–not the data processing systems that are increasingly powerful arbiters of taste, authority, and creditworthiness.
