I spend a lot of time thinking about confirmation bias: the idea that people tend to seek out and process information in ways that confirm their prior beliefs. We should really talk in terms of confirmation biases, since there are a variety of different biases in this vein, including the tendency to selectively seek out confirming over conflicting evidence; to subject arguments that challenge our views to more scrutiny than those that support them; and to interpret ambiguous information as supportive of what we already believe.
I should say initially that I do believe something like confirmation bias exists and is a problem. But I’ve also recently become a little more suspicious of how confirmation bias is talked about, both in academic psychology and in more everyday discussions. I’m going to explain where some of that suspicion has come from, and use this to consider the possibility that I (we?) might be wrong about confirmation bias. Because of course, believing in confirmation bias should teach us to be suspicious of all of our beliefs, and to consider the possibility that they might be wrong – even the belief in confirmation bias itself.
(I feel the need to flag that this isn’t going to be a totally clear and thorough treatment of this issue, but rather me just attempting to get down some of my thoughts – though I do hope to write up something more precise at some point. This discussion also raises some issues in how we think about bias more generally, which I hope to explore in more detail later.)
Let’s start with a definition: confirmation bias refers to the tendency to seek out, interpret, and evaluate evidence in a way that confirms what you already believe to be true. To say that something is a bias, we need to be able to show that it somehow deviates from a rational norm, and does so in a systematic way.1 So to establish confirmation bias conclusively, we need to establish that: (a) there is some rational norm which reasoning about evidence should conform to; (b) people systematically deviate from that norm, in a way that means they believe there is stronger evidence for their existing beliefs than there in fact is.
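To make (a) a little more concrete: the norm most often assumed in this literature (though the argument below doesn’t depend on this particular choice) is Bayesian updating, under which a rational agent revises their confidence in a hypothesis H on seeing evidence E via Bayes’ rule:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

On this reading, a “bias” is a systematic departure from what this rule prescribes, given the agent’s priors and the evidence they’ve actually seen.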
Though there is a lot of research on various forms of confirmation bias, I think a lot of it actually falls short of this standard, in a couple of different ways.
The evidence is weaker than many think
The evidence for a systematic bias towards existing beliefs is simply more mixed than people realise. For example, I think most people have the impression that we tend to prefer reading things we agree with over things we disagree with. However, the literature on selective exposure in psychology – this very tendency to prefer confirming over conflicting information when given the choice – is actually pretty mixed, and finds this tendency surprisingly difficult to demonstrate. There are a bunch of specific experimental design issues we could get into here, but overall selective exposure is simply not well-supported. Some studies find people prefer to read information they agree with, some find they choose to read more information they disagree with, and some come out more even-handed – it all seems to depend on the context. I’ve personally run a number of online studies looking at selective exposure effects, and have never found any substantial, significant effect.
A big meta-analysis of all the studies on selective exposure (and there have been a lot) finds overall a “modest preference for congenial over uncongenial information (d=0.36)”, an effect which is moderated by various factors, including information quality, confidence, and the value relevance of an issue. Overall, what’s going on here seems subtler and more nuanced than simply, “people prefer to read things they agree with.”
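To get a feel for what d = 0.36 amounts to, here’s a quick back-of-the-envelope translation into a “common language” effect size – this is my own illustration, assuming normally distributed preferences with equal variances, not a figure from the meta-analysis itself:

```python
from statistics import NormalDist

# Common-language effect size: the probability that a score randomly drawn
# from the "congenial" distribution exceeds one randomly drawn from the
# "uncongenial" distribution, given Cohen's d (equal-variance normal model).
d = 0.36
print(NormalDist().cdf(d / 2 ** 0.5))  # ~0.60
```

So on this (simplified) reading, the congenial side wins only about 60% of such comparisons, against the 50% we’d expect with no effect at all: real, but a long way from an overwhelming pull towards agreeable information.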
Establishing that something is a “bias” is harder than it seems
It’s often not totally clear what the rational standard is that we are saying people are “biased” in relation to. This demand might seem unnecessary – it might seem straightforward to say that subjecting counter-evidence to more scrutiny than supportive evidence clearly demonstrates bias. But in fact, there are contexts in which this behaviour might be deemed rational: surely my prior beliefs on a topic must, rationally, constrain my thinking to some extent. Certainly if I have a huge amount of accumulated evidence for believing something – that the earth is round, for example – I’m going to be much more sceptical of someone who presents me with an argument that the earth is flat than of someone who presents me with additional evidence that the earth is round.
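Here’s a minimal sketch of that point, assuming an idealised Bayesian agent and made-up likelihoods (the numbers and the round-earth framing are purely illustrative, not drawn from any study):

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule for a binary hypothesis H after one piece of evidence E."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# H = "the earth is round", held with a very strong (and well-earned) prior.
prior = 0.9999

# A flat-earth argument: suppose this evidence is 4x more likely if H is false.
after_counter = posterior(prior, p_evidence_if_true=0.2, p_evidence_if_false=0.8)

# A confirming observation: 4x more likely if H is true.
after_support = posterior(prior, p_evidence_if_true=0.8, p_evidence_if_false=0.2)

print(f"after counter-evidence:    {after_counter:.5f}")   # ~0.99960
print(f"after supporting evidence: {after_support:.6f}")   # ~0.999975
```

The agent ends up effectively “dismissing” the flat-earth argument, but that asymmetry is exactly what Bayes’ rule prescribes given the strong prior, which is why observing the asymmetry on its own can’t settle whether someone is biased.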
To say that someone is biased towards their existing belief, we need to show that they are treating supporting evidence and conflicting evidence differently beyond that which might be rational given their prior beliefs and evidence. It’s not enough to show that you generally consider supporting evidence more persuasive, or subject it to less scrutiny, because depending on your prior beliefs it might be rational for you to do so. But establishing this, in practice, is incredibly difficult – since I don’t know what evidence you’ve already encountered or your reasons for how you treat new information. There are ways to get around this problem empirically, but a lot of the studies heralded as conclusive evidence of confirmation bias don’t adequately deal with this challenge. And in fact, other researchers have proposed that a number of these classic “confirmation bias” studies might be reinterpreted in normative terms – i.e. as rational, given certain assumptions – see here, here, and here, for example.
Of course, just because someone’s behaviour could be interpreted as rational, given certain assumptions, doesn’t mean their behaviour is rational – much of the time these assumptions may not hold, and what we are seeing may well be bias. But it does mean that just by showing people give more weight to supporting than conflicting evidence, or choose supporting evidence when given a choice, we haven’t conclusively shown that they are being biased or irrational. In fact, to assume the biased interpretation when there’s an alternative, possibly rational one is itself a demonstration of bias on our part.
This might all seem overly pedantic. If people repeatedly, across different studies, give preferential treatment to evidence that confirms their views, then that seems pretty convincing: it’s unlikely that all these cases are rational in the way that my scepticism towards flat-earth arguments is. I’m somewhat sympathetic to this, and there’s definitely a voice in my head telling me that obviously people are biased in this way, and it’s just kind of crazy to question it.
But I think this concern – about the lack of a clear normative standard in a lot of the studies of confirmation bias – is worth taking seriously. If we want to study confirmation bias in a more systematic way, and to test ways of reducing it, we need to be able to say more conclusively when people are behaving in a biased way and when they are not. Perhaps this normative standard isn’t necessary for our everyday thinking about confirmation bias, but it does seem important for psychological research. (Which, we might argue, is what underlies a lot of everyday thinking about it anyway.)
I think this taps into a more general issue in the study of bias and rationality: there’s a tradeoff between precision and practicality. If I want to be able to say, very precisely, how someone’s behaviour deviates from some normative standard, then I’m likely going to have to observe their behaviour in very simple and perhaps contrived scenarios where it’s clear what exactly the “rational” thing to do is. But the worry with such scenarios is that they end up so far removed from reality that while we can conclude people are biased in a certain way, it’s hard to infer much from that about how they will behave in the “real world.” For example, I might be able to construct a very simple scenario where I look at how someone seeks out new information to test a hypothesis about the number of balls in an urn (the sketch below makes this concrete), but it’s not clear whether anything I find there can be generalised to the much more complex case of someone’s beliefs about politics. At the other extreme, we might study the thing we’re more interested in directly – how someone seeks out information about their political beliefs – but here the issue is that we don’t know enough, or have enough control over the situation, to clearly conclude anything about the rationality of the person’s behaviour.
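For illustration, here’s what a computable normative standard looks like in a toy urn task – a close variant of the one just mentioned, where the hypothesis concerns the urn’s colour composition; the hypotheses, priors, and numbers are all invented for the example:

```python
import math

def entropy(p):
    """Binary entropy (in bits) of believing hypothesis H1 with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_info_gain(prior_h1, p_red_h1, p_red_h2):
    """Expected reduction in uncertainty from drawing one ball and seeing its colour."""
    p_red = prior_h1 * p_red_h1 + (1 - prior_h1) * p_red_h2
    post_if_red = prior_h1 * p_red_h1 / p_red
    post_if_blue = prior_h1 * (1 - p_red_h1) / (1 - p_red)
    expected_post_entropy = (p_red * entropy(post_if_red)
                             + (1 - p_red) * entropy(post_if_blue))
    return entropy(prior_h1) - expected_post_entropy

# H1: the urn is 70% red; H2: the urn is 30% red; prior 50/50.
# A draw from a highly diagnostic source is worth a lot...
print(expected_info_gain(0.5, 0.7, 0.3))    # ~0.119 bits

# ...whereas a draw from a barely diagnostic source is nearly worthless.
print(expected_info_gain(0.5, 0.51, 0.49))  # ~0.0003 bits
```

Because the whole hypothesis space is specified, we can say exactly which source of information a rational agent should consult, and so measure deviations from that standard; nothing comparable is available for someone browsing political news.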
In fact, this is a challenge not just for the study of bias specifically, but for psychology more generally: the tradeoff between internal and external validity. I think a lot of the issues surrounding the recent “replication crisis” in psychology arise from people drawing strong conclusions about the fundamentals of human psychology from experiments and situations that are far too complex to justify those conclusions.
I feel a real tension here, personally, in the research I do. I want to do research that has relevance for the real world – which pushes me towards wanting to study human behaviour in more applied ways, in more naturalistic contexts. But at the same time, I’m deeply sceptical of our ability to draw robust conclusions from even the most controlled experiments in this way, because the world is so complex. This especially applies when it comes to making claims about what is “rational” or “biased”, when we can’t possibly know enough about the context to say conclusively what the rational behaviour would be. This scepticism is making me more and more inclined to look at more fundamental aspects of human cognition and behaviour in more abstract contexts. I do worry that this will end up being too abstract to be useful, but at least it’s more robust.
1. This isn’t, admittedly, the only way we might think about bias, and there’s also more explanation to be given here. But to avoid derailment, I’ll assume this definition for now, and leave further discussion to another post…↩