Internet Filtering And Confirmation Bias: Some Quick Thoughts

In Republic.com, Cass Sunstein develops his concern that Internet technologies will help us avoid facts and opinions with which we disagree, thereby undermining deliberative democracy.  Eli Pariser’s new book The Filter Bubble argues that we may not even need to sign up for the “Daily Me” the way things are headed: Internet intermediaries silently filter out what they assume we do not want to see.  For Sunstein, never seeing the other side of an argument fosters an ill-informed, partisan body politic.  For Pariser, excessive personalization leads to an unhealthy distaste for the unfamiliar.

These are wonderful, well-argued books.  But their common thesis raises the same question: what of the extensive evidence in support of confirmation bias in the offline world?  Confirmation bias is a complex phenomenon, but suffice it to say that numerous studies suggest people will seek out confirmatory facts and opinions, ignore that with which they disagree, and construe ambiguous content to support their preconceptions.  If it turns out people will skip or discount the other side of an argument anyway, then why are we worried about technologies that filter it out?

I do not recall a discussion of confirmation bias in Republic.com, and a quick search of Republic.com 2.0 for “confirmation bias” and “confirmatory bias” yielded no results.  Of course, this does not mean the discussion is missing, only that I missed it.  Many of the threats to democracy Sunstein identifies—for instance, a dearth of shared experiences or the unchecked propagation of misleading facts—do not rely on the Internet filtering out dissent.  And I can imagine other counterarguments: confirmation bias is merely a tendency, not a cognitive or technological fiat.

Pariser, whose book I really liked,* engages directly with confirmation bias.  He sees the phenomenon as ultimately supporting his position.  Internet filtering accelerates confirmation bias, rendering it more dangerous.  Moreover, Pariser sees confirmation bias as an evil that Google and other Internet intermediaries are in a position to help users resist.  This last point, if true, may shed new light on why today’s Internet filtering is a problem.  Maybe it’s a story about opportunity cost. 

Thoughts, corrections, “duh’s” welcome.


* Speaking of bias: Pariser quotes me repeatedly in The Filter Bubble on the impact of design on user behavior. Accordingly, all of his arguments are interesting and sound.


10 Responses

  1. The proliferation of books like this, with remarkably similar theses, is itself an example of confirmation bias and cascade effects.

  2. WL says:

    Herein lies the true value of the Socratic method in legal education, which requires students to confront, distinguish, and defend against contrary authority that is often equally valid.

  3. Ryan Calo says:

    @James – I don’t mean to say that the books are overly similar, only that they start from a common premise. (Unlike, say, Friends With Benefits and No Strings Attached.)

    @WL – Good point, and links up to the immediately preceding post by Gerard Magliocca.

  4. My view of Cass Sunstein’s argument is:

    1) He’s proceeding from an inaccurate, but popular, theoretical framework.

    2) It’s further being taken (whether or not he’s really saying this) as a way for existing media institutions to sneer and deride the Internet as the cause of social ills. There’s a very nasty subtext in the way it all comes across, something like “That darn Internet, it’s full of nasty extremists, even OH MY GOD CREATING TERRORISTS!!!, unlike the sane, sober, sensible, governing that happened in the Golden Age of yore”. Basically, a kind of a moral panic for intellectuals.

    I think the rough proof of this is how much play that subtext gets, compared to how little attention there is for the refutations.

  5. Frank Pasquale says:

    I like a lot of Pariser’s book. But I think both he and Sunstein need to face the problem of a “doom loop” of ever-more-extreme groups being accommodated by those open-minded people who read Pariser and Sunstein. Pariser and Sunstein seem to assume that those who most need to listen actually will listen. In my experience, exactly the opposite is the case. Moreover, many social problems are based on fundamental differences about values, not facts.

    I think the core problem with internet filters is how often we have no chance of understanding how they work. We can study confirmation bias. We are by law barred from understanding trade-secret-protected filtering software.

  6. Marie Boran says:

    Have Pariser et al any definitive proof of confirmation bias in action over a set period of time? Specifically, data on how long it takes to manifest?

    I recently (as part of my masters dissertation) set up two new Google accounts and trained one to be politically liberal and another to be conservative (by feeding them ‘liberal’ and ‘conservative’ blogs, websites, keywords, using Google Bookmarks and Gmailing text from these sites).

    I am looking for confirmation bias in relation to ‘Googling’ information on climate change science and I have my suspicions that, although they exist, personalised filters take quite some time to develop and manifest.

  7. David Fagundes says:

    Confirmation bias is a tendency, not an absolute rule. We tend to agree with information that comports with our preexisting preferences, and to discount contrary information. But this doesn’t mean it happens in every case.

    So I think the difference is that a user who searches the internet himself, even subject to confirmation bias, may well run across something that persuades him of a dissenting opinion despite that bias (see, e.g., Dave Hoffman’s post above about the possibility of persuasion by certain rhetorical means).

    By contrast, a user whose information consumption is guided by a filter, and who doesn’t even _see_ material that conflicts with his preexisting preferences, will have a zero chance of changing his mind, as opposed to the small-but-nontrivial chance that would exist in a world where he had to seek out his own information.
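    The distinction can be made concrete with a toy simulation. All of the numbers below (the share of dissenting items in a feed, the chance a biased reader engages with one, the feed length) are illustrative assumptions, not measurements from any study; the point is only that bias alone leaves a nonzero cumulative chance of encountering persuasive dissent, while a perfect filter drives it to exactly zero.

    ```python
    import random

    random.seed(42)

    DISSENT_SHARE = 0.3      # assumed share of dissenting items in an unfiltered feed
    ENGAGE_PROB = 0.05       # assumed chance a biased reader engages when exposed
    ITEMS_PER_READER = 200   # items each simulated reader scrolls past

    def chance_of_any_engagement(filtered: bool, trials: int = 10_000) -> float:
        """Estimate the probability that a reader engages with at least one
        dissenting item over ITEMS_PER_READER items.  A perfect filter
        (filtered=True) removes dissenting items before the reader sees them;
        otherwise confirmation bias alone governs engagement."""
        hits = 0
        for _ in range(trials):
            for _ in range(ITEMS_PER_READER):
                seen_dissent = (not filtered) and random.random() < DISSENT_SHARE
                if seen_dissent and random.random() < ENGAGE_PROB:
                    hits += 1
                    break  # one engagement is enough for this trial
        return hits / trials

    unfiltered = chance_of_any_engagement(filtered=False)
    filtered = chance_of_any_engagement(filtered=True)
    print(f"bias only: {unfiltered:.3f}, bias + perfect filter: {filtered:.3f}")
    ```

    Even with a per-item engagement chance as low as 1.5% (0.3 × 0.05), the cumulative chance over a long feed is substantial, whereas perfect filtering yields zero regardless of feed length. Of course, as the post and the replies note, real filters are imperfect, so the truth lies somewhere between the two cases.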

  8. Ryan Calo says:

    Thanks, David. I concede in my post that confirmation bias is merely a tendency and I am sympathetic to the distinction you draw here. I have a musing or two in response (and they are just that).

    The first is that filtering is also imperfect. Maybe one day an Internet filter will remove everything with which we disagree. But I don’t think any do so today.

    The second is that even if filtering were perfect, the existence of offline confirmation bias still weakens the complaint against it. Which is not to say I disagree with your helpful distinction.

    Thanks again.

  9. Ryan Calo says:

    @Marie – I just saw your comment. I would love to see the results of your study!

  10. Marie Boran says:

    Hi Ryan, I’m finishing up writing my dissertation right now but I’ll send on the results to you within the next week. It looks as though I’d have to redesign a bigger study and run it for longer, and with more keyword searches in order to get the best possible picture …but my mini-study did yield some interesting results!