An Irrational Undertaking: Why Aren’t We More Rational?

By unanimous reader demand – all one out of one readers voting, as of last week – this post will explore the small topic of the biological basis of “irrationality,” and its implications for law.  Specifically, Ben Daniels of Collective Conscious asked the fascinating question: “What neuro-mechanisms enforce irrational behavior in a rational animal?”

Ben’s question suggests that ostensibly rational human beings often act in irrational ways.  To prove his point, I’m actually going to address his enormous question within a blog post.  I hope you judge the effort valiant, if not complete.

The post will offer two perspectives on whether, as the question asks, we could be more rational than we are if certain “neuro-mechanisms” did not function to impair rationality.  The first view is that greater rationality might be possible – but might not confer greater benefits.  I call this the “anti-Vulcan hypothesis”:  While our affective capacities might suppress some of our computational power, they are precisely what make us both less than perfectly rational and gloriously human – Captain Kirk, rather than Mr. Spock.  A second, related perspective offered by the field of cultural cognition suggests that developmentally-acquired, neurally-ingrained cultural schemas cause people to evaluate new information not abstractly on its merits but in ways that conform to the norms of their social group.  In what I call the “sheep hypothesis,” cultural cognition theory suggests that our rational faculties often serve merely to rationalize facts in ways that fit our group-typical biases.  Yet, whether we are Kirk or Flossie, the implication for law may be the same:  Understanding how affect and rationality interact can allow legal decision-makers to modify legal institutions to favor the relevant ability, modify legal regimes to account for predictable limitations on rationality, and communicate in ways that privilege social affiliations and affective cues as much as factual information.

First, a slight cavil with the question: The question suggests that people are “rational animal[s]” but that certain neurological mechanisms suppress rationality – as if our powerful rational engines were somehow constrained by neural cruise-control. Latent in that question are a factual assumption about how the brain works (more on that later) and a normative inclination to see irrationality as a problem to which rationality is the solution. Yet much recent work on the central role of affect in decision-making suggests that, often, the converse may be true. (Among many others, see the work of Jonathan Haidt and Josh Greene.) Rationality divorced from affect arguably may not even be possible for humans, much less desirable. Indeed, the whole idea of “pure reason” as either a fact or a goal is taking a beating at the hands of researchers in behavioral economics, cognitive neuroscience, and experimental philosophy – and perhaps other fields as well.

Also, since “rational” can mean a lot of things, I’m going to define it as the ability to calculate which behavior under particular circumstances will yield the greatest short-term utility to the actor.  By this measure, people do irrational things all the time: we discount the future unduly, preferring a dollar today to ten dollars next month; we comically misjudge risk, shying away from the safest form of transportation (flying) in favor of the most dangerous (driving); we punish excessively; and the list goes on.
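The dollar-today example can be made concrete with a toy calculation. This is a minimal sketch, not from the post itself: the function name and the discount parameter k are illustrative assumptions, but the hyperbolic form V = A / (1 + kD) is a standard model of temporal discounting in the behavioral-economics literature the post draws on.

```python
def hyperbolic_value(amount, delay_days, k=0.5):
    """Present subjective value of a delayed reward: V = A / (1 + k * D).

    k is a purely illustrative discount parameter; real fitted values
    vary widely across people and studies.
    """
    return amount / (1 + k * delay_days)

# A steeply discounting chooser prefers $1 now to $10 in 30 days:
now = hyperbolic_value(1, 0)      # 1.0
later = hyperbolic_value(10, 30)  # 10 / 16 = 0.625
print(now, later, now > later)    # the "irrational" preference wins
```

Under these assumed parameters, the delayed ten dollars is subjectively worth less than the immediate one dollar – the pattern the post calls unduly discounting the future.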

Despite these persistent and universal defects in rationality, experimental data indicate that our brains have the capacity to be more rational than our behaviors would suggest.  Apparently, certain strong affective responses interfere with activity in particular regions of the prefrontal cortex (pfc); these areas of the pfc are associated with rationality tasks like sequencing, comparing, and computing.  In experiments in which researchers used powerful magnets to temporarily “knock out” activity in limbic (affective) brain regions, otherwise typical subjects displayed savant-like abilities in spatial, visual, and computational skills.  This experimental result mimics what has been reported anecdotally in people who display savant abilities following brain injury or disease, and in people with autism spectrum disorders, who may have severe social and affective impairments yet also be savants.

So: Some evidence suggests the human brain may have massively more computing power than we can put to use because of general (and sometimes acute) affective interference.  It may be that social and emotional processing suck up all the bandwidth; or, prosocial faculties may suppress activity in computational regions.  Further, the rational cognition we can access can be swamped entirely by sudden and strong affect.  With a nod to Martha Nussbaum, we might call this the “fragility of rationality.”

This fragility may be more boon than bane:  Rationality may be fragile because, in many situations, leading with affect might confer a survival advantage.  Simple heuristics and “gut” feelings, which are “fast and cheap,” let us respond quickly to complex and potentially dangerous situations.  Another evolutionary argument is that all-important social relationships can be disrupted by rational utility-maximizing behaviors – whether you call the actors free-riders or defectors.  To prevent humans from mucking up the enormous survival-enhancing benefits of community, selection would favor prosocial neuroendocrine mechanisms that suppress an individual’s desire to maximize short-term utility.  What’s appealing about this argument is that – if true – it means that that which enables us to be human is precisely that which makes us not purely rational.  This “anti-Vulcan” hypothesis is very much the thrust of the work by Antonio Damasio, Dan Ariely, and Paul Zak, among many other notable scholars.

An arguably darker view of the relationship between prosociality and rationality comes from cultural cognition theory.  While evolutionary psychology and behavioral economics suggest that people have cognitive quirks as to certain kinds of mental tasks, cultural cognition suggests that people’s major beliefs about the state of the world – the issues that self-governance and democracy depend upon – are largely impervious to rationality.  In place of rationality, people quite unconsciously “conform their beliefs about disputed matters of fact … to values that define their cultural identities.”

On this view, it is not just that people are bad at understanding risk and temporal discounting, among other things, because our prosocial adaptations suppress rationality.  Rather, from global warming to gun control, people unconsciously align their assessments of issues to conform to the beliefs and values of their social group.  Rationality operates, if at all, post hoc:  It allows people to construct rationalizations for relatively fact-independent but socially conforming conclusions.  (Note that different cultural groups assign different values to rational forms of thought and inquiry.  In a group that highly prizes those activities, pursuing rationally-informed questioning might itself be culturally conforming.  Children of academics and knowledge-workers: I’m looking at you.)

This reflexive conformity is not a deliberate choice; it’s quite automatic, feels wholly natural, and is resilient against narrowly rational challenges based in facts and data.  And that this cognitive mode inheres in us makes a certain kind of sense:  Most people face far greater immediate danger from defying their social group than from global warming or gun control policy.  The person who strongly or regularly conflicts with their group becomes a sort of socially stateless person, the exiled persona non grata.

To descend from Olympus to the village:  What could this mean for law?  Whether we take the heuristics and biases approach emerging from behavioral economics and evolutionary psychology or the cultural cognition approach emerging from that field, the social and emotional nature of situated cognition cannot be ignored.  I’ll briefly highlight two strategies for “rationalizing” aspects of the legal system to account for our affectively-influenced rationality – one addressing the design of legal institutions and the other addressing how legal and political decisions are communicated to the public.

Oliver Goodenough suggests that research on rational-affective mutual interference should inform how legal institutions are designed.  Legal institutions may sit anywhere on a continuum from physical to metaphorical: from court buildings to court systems, to the structure and concept of the jury, to professional norms and conventions.  The structures of legal institutions influence how people within them engage in decision-making; certain institutional features may prompt people to bring to bear their more emotive (empathic), social-cognitive (“sheep”), or purely rational (“Vulcan”) capacities.

Goodenough does not claim that more rationality is always better; in some legal contexts, we might collectively value affect – empathy, mercy.  In another, we might value cultural cognition – as when, for example, a jury in a criminal case must determine whether a defendant’s response to alleged provocation falls within the norms of the community.  And in still other contexts, we might value narrow rationality above all.  Understanding the triggers for our various cognitive modes could help address long-standing legal dilemmas.  Jon Hanson’s work on the highly situated and situational nature of decision-making suggests that the physical and social contexts in which deliberation takes place may be crucial to the answers at which we arrive.

Cultural cognition may offer strategies for communicating with the public about important issues.  The core insight of cultural cognition is that people react to new information not primarily by assessing it in the abstract, on its merits, but by intuiting their community’s likely reaction and conforming to it.  If the primary question a person asks herself is, “What would my community think of this thing?” instead of “What is this thing?”, then very different communication strategies follow:  Facts and information about the thing itself only become meaningful when embedded in information about the thing’s relevance to people’s communities.  The cultural cognition project has developed specific recommendations for communication around lawmaking involving gun rights, the death penalty, climate change, and other ostensibly fact-bound but intensely polarizing topics.

To wrap this up by going back to the question: Ben, short of putting every person into a TMS machine that makes us faux-savants by knocking out affective and social functions, we are not going to unleash our latent (narrowly) rational powers.  But it’s worth recalling that the historical, and now unpalatable, term for natural savants was “idiot-savant”: The phrase itself suggests that, without robust affective and social intelligence – which may make us “irrational” – we’re not very smart at all.

11 Responses

  1. anon says:

    Great post. Keep ’em coming.

  2. Brett Bellmore says:

    “short of putting every person into a TMS machine that makes us faux-savants by knocking out affective and social functions,”

    Why the “faux”? I believe the research suggests that there’s nothing fake about this; it’s not a subjective impression of greater clarity and objectivity, but the real thing.

    Were it safe and easy, periodically subjecting one’s self to such a process might not be a bad idea. After all, anything that keeps you shortsighted might actually leave you too shortsighted to see a solution that better satisfies rationality AND whatever function cultural cognition is maximizing.

    • Amanda Pustilnik says:


      “Faux,” because TMS induces a transient savant state rather than organic savant syndrome. Certainly, the savant abilities are real while the person undergoes targeted TMS. But savant syndrome is a condition with numerous diagnostic criteria; a person in a transient savant state would not meet many of those criteria. By analogy: Some psychedelic drugs can induce a schizoaffective state in the drug user, but, despite the similarity in presentation and the homology of some of the underlying processes, one wouldn’t call the drug user “a schizophrenic.” Beyond the transience of the experience, his or her schizoaffective presentation would reproduce only some of the symptoms and pathologies of organic schizophrenia. But where “state” ends and “trait” begins can often be up for grabs.

      Whether it might be beneficial to enter a TMS-induced savant state: Perhaps! Personally, I would be very curious to try. Wouldn’t it be extraordinary to experience, just briefly, what it would be like to draw whole cities from memory or effortlessly calculate square roots to the tenth decimal? Those would be novelty experiences, though. What more serious purposes do you think it might serve?

  3. Brett Bellmore says:

    Speaking as an engineer, who says those aren’t serious purposes? Man, would I ever love to have a “temporary savant” button I could press once in a while. Might be a very effective study aid, too; how does reading materials under the influence of this affect retention?

    Though I was really thinking of the effect it might have on that “cultural cognition” you spoke of, if people could occasionally stick their heads out of their own boxes, and look around. I encounter so many people on the web convinced that nobody could disagree with them about issue “X” without being evil. Could they maintain that conviction once they’d looked at the issue without their blinkers on, even if they did eventually have to don them again?

    Oh, and I’m really getting tired of “invalid data” every time I hit “submit”.

    • Amanda Pustilnik says:

      Brett – thanks, first, for letting me know about the “invalid data” issue. I don’t maintain this blog but I’ll call it to the attention of the folks who do. Is it possibly a browser issue? Might depend on the version of the browser you’re running. (Although – to let my biases show – my guess would be that you, as an engineer, would already be running a current browser and/or would have thought of that issue already yourself.)

      As for what being temporarily thrust out of the affective frame and into the rational frame might do for people’s perceptions: I suspect that, along with understanding that reasonable minds can differ, we’d all be appalled at how self-serving we would discover ourselves to be – that we would see how frequently our carefully reasoned and deeply held points of view just happen to be those that align with our interests.

      My wonderful torts professor, Judge Guido Calabresi, used to say, “It’s not that most people lie. It’s just that, when we have an interest at stake, we become … confused.”

  4. A.J. Sutter says:

    A couple of points apropos of “The core insight of cultural cognition is that people react to new information not primarily by assessing it in the abstract, on its merits, but by intuiting their community’s likely reaction and conforming to it.”:

    First, what are something’s “merits” “in the abstract”? E.g., as is well known, the norm that it’s desirable to maximize utility doesn’t follow analytically from the definition of utility itself. Normativity comes from somewhere.

    Second, “legal decision makers” are themselves subject to cultural cognition effects. (That’s so in spades for the legislative and executive branches.) And that has an impact not only on substantive legal decisions per se, but on the design of legal institutions. E.g., back when certain forms of overt discrimination were more OK, it used to be that to vote or serve on a jury in various countries you had to be male, have a certain income of amount of property, etc.; in the current more egalitarian age these qualifications have been relaxed or eliminated in many places. Your post doesn’t deny that these effects are relevant to institutional design, but it doesn’t highlight them either. They’re all too easy to overlook, as in the Thaler and Sunstein dichotomy of “Planners” and “Humans,” which seems to suggest that the decision-makers in the former category have somehow transcended membership in the latter. BTW my point isn’t to attack these effects categorically, but to acknowledge their inescapable impact; whether they lead to good or bad results probably varies from case to case.

    Finally, apropos of your comment about (idiot-)savants, the terminology we use often encodes our cultural biases. While you’re careful to explain that “rationality” isn’t necessarily a sign of an “unimpaired” brain, the word itself still has very strong positive connotations in our culture. Maybe we should escape the 18th Century, and instead of referring to “rational” behavior we should talk about “affectless” behavior instead? (Though by tactfully avoiding pointing out that one can be rabidly selfish and yet still “rational” in the economic sense, there is cultural calculation in this suggestion, too.)

  5. A.J. Sutter says:

    [typo: income or amount of property]

  6. A.J. Sutter says:

    “[W]e’d all be appalled at how self-serving we would discover ourselves to be” — assuming we were indeed to discover that, why would such a discovery be appalling, rather than seen as a vindication of the principles of economic science?

  7. Brett Bellmore says:

    It’s quite possible we wouldn’t be appalled while the machine was on, because of its effects, and wouldn’t be appalled after it was off, because we’d have ceased to be objective, and would just dismiss the effects as delusional. But it would be a cool experiment to run.

    Perhaps just as useful would be something that could monitor our brains, and simply let us know when the affective frame was overriding the rational. Perhaps by sounding an annoying buzzer.

    The degree to which we can control the behavior of our own nervous system, and exert influence over things we think are entirely automatic, if only granted the relevant feedback, is remarkable. Controlling sweat glands in a specific patch of skin. Modulating your heartbeat, or even shutting it off for a beat or two. Invoking the mechanism that paints background across blind spots to make selected objects disappear from view. The operation of our own nervous system is far more under our control than we typically realize; we just lack the signals needed to assert it reliably.

    Perhaps an indication of when exactly we’re being irrational is all we need to gain the ability to be rational whenever we want? To be savants on demand?

    In regards to invalid data, latest Firefox. At least I’ve learned to put my comments into the clipboard before hitting submit…

  8. Devorg says:

    From 40 years spent in development of machines that can think:

    “Rationality” is a fiction — derived from cultural metaphors and myths that have relatively little connection with reality.

    Animal consciousness, including the human kind, attempts to fit perceived patterns into patterns programmed by the physical nature of the animal and by training from experience and other sources. This arrangement neither imputes nor implies rationality.

    The premise of “rational” is merely another fiction serving the convenience of those who seek to profit by it.

  9. May I suggest that what seems irrational is actually unwise short-term selfishness, the hallmark of our species. From my 22 years as a trial lawyer, that consistent conduct together with just plain dishonesty explains human actions.