Could Personalized Search Ruin Your Life?

Imagine you’re applying for a job and want to be sure to give the right impression. A diligent self-googler, you think you know everything there is on the web about you. Nothing sticks out in the first 15 or so pages of results. But there is someone with a name identical to yours who’s got a terrible reputation (or, to make this more concrete, just imagine your name is Tucker Max). And when HR does its background check on you, that’s the first result it sees. You’re never given a reason for being turned down for the job–just a brief form letter.

This scenario may result from what is otherwise one of the most promising trends on the web–personalized search. As you use a search engine more and more, it gradually translates your behavior into a database of your usual intentions. That can make searches far more efficient for you as a searcher–but it creates real uncertainty once you are the searched.
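
To make that mechanism concrete, here is a minimal sketch, in Python, of how a click-history profile might reorder the same results for different users. Everything in it (the function, the topics, the scores) is invented for illustration; no actual engine publishes its ranking logic.

```python
from collections import Counter

# Hypothetical sketch only: none of these names, topics, or scores come
# from a real system.

def personalize(results, click_history):
    """Boost results whose topic the user has clicked on most often."""
    profile = Counter(click["topic"] for click in click_history)
    return sorted(
        results,
        key=lambda r: (profile[r["topic"]], r["base_score"]),
        reverse=True,
    )

results = [
    {"url": "example.com/tucker-max-antics", "topic": "gossip", "base_score": 0.9},
    {"url": "example.com/your-law-review-note", "topic": "law", "base_score": 0.6},
]

# You, with a law-heavy click history, see your own work first ...
print(personalize(results, [{"topic": "law"}, {"topic": "law"}]))
# ... while an HR officer with no such history sees the gossip page first.
print(personalize(results, []))
```

The point of the sketch is the asymmetry itself: the same two results, two different orderings, and neither party knows what the other saw.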

I worry a bit about a world where pervasive tailoring of search results means that few of us know what types of information others are receiving in response to particular search terms. What should we do about the resulting information asymmetries? I was surprised to find that there are over 600 papers on SSRN with the term “information asymmetry” in the title, and I’m sure some of these have valuable theoretical insights on the issue (which appears to be at the heart of important legal controversies like those arising out of Twombly). More practically speaking, Finland has set forth some practices for dealing with that eventuality, and I’ve suggested others (SSRN copy available here).

None of this is to dispute the obvious helpfulness of personalized search in daily life. For example, Google Co-op is a “platform which enables you to use your expertise to help other users find information.” According to the FAQs, “When you subscribe to someone in the Google Co-op directory, all of that provider’s labels and subscribed links will be added to your Google search results for relevant searches. The labels and links provide new and useful ways to refine your searches.” I think this is a welcome innovation at what I’ve previously described as a “Black Box” information source.
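
As a rough sketch of how such subscribed labels could work in principle (the provider name, URLs, and data structures below are my own invention, not Google’s actual API):

```python
# Invented illustration of the Co-op idea quoted above: labels from
# providers a user subscribes to are attached to matching results, so the
# searcher can refine by them. Nothing here reflects Google's real API.

subscriptions = {
    "health-librarian": {  # a hypothetical provider in the directory
        "example.org/clinic": ["treatment"],
        "example.org/support-group": ["patient stories"],
    },
}

def annotate(results, subscriptions):
    """Attach every subscribed provider's labels to matching result URLs."""
    for result in results:
        result["labels"] = [
            label
            for site_labels in subscriptions.values()
            for label in site_labels.get(result["url"], [])
        ]
    return results

print(annotate([{"url": "example.org/clinic"}, {"url": "example.org/news"}],
               subscriptions))
# -> the clinic page carries the "treatment" label; the news page carries none
```

Unlike the personalization sketch above, this kind of tailoring is transparent: the searcher chooses the providers, and can see exactly which labels shaped the results.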

A new book by Samir Chopra and Scott Dexter helps explain the importance of such openness. As Chopra argues,

Software affects our expressive potential in two ways. First, it allows us to express algorithmic ideas as programs written, typically, in high-level programming languages. . . . Second, as executing code, software constrains the ways in which we may interact with a computing device. The grammar of this language of interaction is the set of constraints that my software places on me — the structure within which I must operate if it is to understand me.

[W]e only modify our interactions with a computer if we can modify the code that it runs: the only solution to a frustrating interaction with an inflexible interface is to change the interface. But if the software running on a machine is unavailable for inspection and modification, the expressiveness of our language of interaction is severely restricted.

The problem, I suppose, arises when one party’s freedom to find information gives it a manifestly unfair or harmful picture of another person or entity. Perhaps the concerns I have can be adequately addressed in employment law (à la the Finnish model). I nevertheless think that they have to be part of an overall “search engine law,” which reciprocally balances the benefits government gives to these entities with public responsibilities.
