The Decline of Media Studies (and Privacy) in a Search Engine Society

I often hear statements like “I’m the top Google result for my name!” or “Kiwi is the top search result for shoe polish!” Truth is, there’s no such thing. You can know the top results that you see, and you can survey what others see, but only the search engine knows what everybody is seeing in response to a query. Evgeny Morozov worries about this trend:

There is a danger that we will become even less well-informed, as the web becomes both more personalised and more social. Concerns that the internet traps users in unchallenging information ghettos are not new, stretching back to 2001 and the US legal scholar Cass Sunstein’s book Republic.com. Sunstein argues that, when compared to older media, the internet allows users to seek out opinions and news with which they already agree, creating online news ghettos in which the views of right and left rarely mix.

What is surprising, however, is that today’s technology companies seem to use that book as a to-do list. Google, for example, has been pushing to provide personalised search results to its users, meaning that two people searching for the same term may now get different results, altered according to what they have clicked on before. In December 2009, Google tweaked its rules in such a way that even users who are not signed into Google—thus denying the search giant access to their previous search history—will see their results personalised too. Facebook is not far behind.

Admittedly, these developments are helpful to individuals—how could anyone use Facebook without hiding FarmVille? But they counsel extreme epistemological modesty for anyone who would write about the effects of search engines on the public sphere. Alex Halavais notes in his book Search Engine Society that, “[i]n the process of ranking results, search engines effectively create winners and losers on the web as a whole.” But we have little idea who exactly those winners and losers are at the level of granularity at which search engines operate.
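To make that granularity concrete, here is a minimal sketch, in Python, of how click-history personalization can hand two users different “winners” for the same query. The sites, scoring formula, and boost weight are all invented for illustration; nothing here reflects Google’s actual ranking system.

```python
from collections import Counter

# A shared baseline ranking for one query; every site here is hypothetical.
BASELINE = ["jaguar-cars.example", "jaguar-animal.example", "jaguar-os.example"]

def personalized_rank(baseline, click_history, boost=2.0):
    """Re-rank baseline results: sites the user has clicked before score higher."""
    clicks = Counter(click_history)
    # Score = bonus for baseline position + an assumed boost per prior click.
    scores = {site: (len(baseline) - i) + boost * clicks[site]
              for i, site in enumerate(baseline)}
    return sorted(baseline, key=scores.get, reverse=True)

# Two users issue the same query but arrive with different click histories.
alice = personalized_rank(BASELINE, ["jaguar-animal.example"] * 3)
bob = personalized_rank(BASELINE, ["jaguar-cars.example"])

print(alice)  # ['jaguar-animal.example', 'jaguar-cars.example', 'jaguar-os.example']
print(bob)    # ['jaguar-cars.example', 'jaguar-animal.example', 'jaguar-os.example']
# Each user can audit only their own ordering; only the engine sees every version.
```

Even in a toy model like this, no outside observer can survey “the” top result without access to every user’s history, which is exactly the access the engine reserves for itself.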

The search engine’s role here reminds me of Wal-Mart’s power to turn off individual registers around the US from its command center in Bentonville, or ConEd’s power to force certain buildings to turn down their air conditioning on hot days. Our products phone home, our iPhones can turn into iBricks if we annoy Steve Jobs, and cell phones can double as eavesdropping microphones. Everyone’s being profiled, and extraordinary power may well lie with the network that can put all those profiles together.

Internet cheerleaders encourage a generalized gratitude and wonder at all that new technologies can do for us. But we are light-years away from the institutions of accountability and auditing that are necessary for such attitudes to be reasonable. Admittedly, deficiencies in enterprise software and power supplies may keep the “infinite database” from ever being built, but is there any doubt it would be a dream come true for business and government? If it is built, existing doctrines of trade secrecy and state secrecy will make it very difficult to figure out how it is operating.

Heraclitus wrote that “for the waking there is one world, and it is common; but sleepers turn aside each one into a world of his own.” In our age of fragmented lifeworlds, narrowcasting, and personalization, internet searchers are increasingly like Heraclitus’s sleepers. They will consume ever more customized media about the people and events they take an interest in. Many will unwittingly enter a media environment shaped in ways they can’t understand. While some authors have lamented the effects of the “Daily Me” on politics, and others have noted the Kafkaesque implications of black-box databases, few have considered the intersection of these trends. They threaten to make a scholarly understanding of media consumption difficult, as we have less and less objective sense of what’s really being presented as choices.

Image Credit: Bandido of Oz.

Frank Pasquale

Frank is Professor of Law at the University of Maryland. His research agenda focuses on challenges posed to information law by rapidly changing technology, particularly in the health care, internet, and finance industries.

Frank accepts comments via email, at pasqresearch@gmail.com. All comments emailed to pasqresearch@gmail.com may be posted here (in whole or in part), with or without attribution, either as "Dissents of the Day" or as parts of follow-up post(s). Please indicate in your comment whether or not you would like attribution, or would prefer your comment (if it is selected for posting) to be anonymous.


3 Responses

  1. > They threaten to make a scholarly understanding of media consumption difficult, as we have less and less objective sense of what’s really being presented as choices.

    I can’t help but sardonically think that this has never stopped anyone before :-).

    I’ve essentially given up trying to get scholars to understand the huge role Google has played in promoting Wikipedia due to the top-result effect, because it’s basically pushing on a string. The articles are written based on what other nontechnical people want to hear, and so contradictory technical aspects are nigh-irrelevant. So your point is correct, but I think we have a long way to go before personalization of results is the bottleneck.

  2. I’m tempted, as one of those internet cheerleaders, to proffer the universal salve of “media literacy.” Unfortunately, even with my own cheeriness, that is a difficult solution to take seriously.

    Focusing away from the issue of personal data collection (though this is at least as important), and toward the efficacy of search for the individual and the collective, part of the problem you identify could be solved if the user were able to select whether to have results filtered by their social network. I am sometimes seeking “generalized” opinion or knowledge, and other times I am not. I trust Chowhound reviews more than the NYT reviews because they have served me better in the past, and while that may lead to some balkanization, I think I should have the right to read (or eat, in this case) what I want to, and not be forced to centralize.

    That said, I think that I should have that choice, or at the very least that choice should be transparent to me. If that were the case, teaching people to be more literate users of search engines would be much easier. But Google follows the mantra of user-centered design to an extreme degree, hoping to make Google usable by the lowest common denominator of users, and not trouble us with what happens behind the curtain.

    My ideal search engine is a bit like a very high-end electronic keyboard. Yes, there are times when I might want to hit a button and have it play a symphony, but others when I want to be able to play the keys individually, and others when I want to be able to code my own instruments. Placing that instrumentality in the hands of the user is not free from its own problems, but I think they are the kinds of problems that I would prefer to have.

    Among other issues, that kind of flexibility also sets up a new literacy where one has not existed before, and a new gap because of that new literacy. There will be “power” users and those who have to make do with the defaults. Nonetheless, I would far prefer this to forcing everyone to make do with the defaults, by default.

  3. Jason Treit says:

    So much of this criticism reads the benign as evidence of the sinister. That an utterance in two contexts can mean at least two things is no fabrication of Google’s. I’d be far more worried by a global media system that followed Heraclitus’s first premise – “there is one world, and it is common”.

    Plurality is what the net consists of. As with the plurality of language and culture across which it operates, we can’t attend to much of it at once, meaning we have to discard most of what’s there. So experience leads us to biases. Then to aggregates of biases. Critical use is key, but the need is inescapable.

    Filters at Internet scale work as interlocutors, not directories or switchboards. Responding to a query without being responsive to many, many signals of human intent, and without adjusting that sensitivity on the fly, turns out to be a fine habit for broadcast and a terrible one for interlocution.

    More to the point, these many-to-many filters – even the most opaque – are not mute buttons. They are themselves tools for bias detection, analysis, and challenge. Consider #amazonfail. An algorithmic suppression of queer literature didn’t survive the weekend; people noticed, compared notes, then raised their voices. Then others listened. Then Amazon listened. Naturally they used social media to do it. Naturally the character of the backlash invited a counter-backlash, and more critical reflection. Hardly a land of hermetic sleepers.

    This distress over multiplicity, over the quicksand of popular wisdom, sounds decades out of date. In the 1965 article famous for coining “hypertext”, Ted Nelson anticipated it well.

    > To the extent that information retrieval is concerned with seeking true or ideal or permanent codes and categories . . . [it seems] fundamentally mistaken. The categories are chimerical . . . Not just the new material, but the capacity for new arrangements and indefinite rearrangements of the old, must be possible.

    Any media researcher driven to despair by the abundance and free exercise of such capabilities in 2010 should take a deep, long breath.

    Homophily I would watch as a trouble area as filters mature, since the self-selecting drive to get our biases reinforced is an old one. The newspaper happened to be a mass medium with built-in drift: across headlines, across partisan lines, across lines of interest. Drift is an agent of civic value we may be losing. But no need to wait for others to recondition such values into the digital space. What it takes first is imagination and good code.

    Just don’t show me the FarmVille section.