I have this astounding tendency to go “meta” with things. For example, my thinking on my PhD, especially over the last couple of months, has gone something like this:
How do we reduce confirmation bias?
- How do we even measure confirmation bias?
  - Wait, what does confirmation bias even mean, exactly?
    - Does confirmation bias even exist?!
      - What’s a bias? What does it mean to be rational?
        - Can we even say we should be trying to improve rationality, or fix biases, at all?
          - What does “should” mean?????
Ok, that last one was slightly (though actually not entirely) in jest – I am a philosopher at heart, after all. But other than that, I’m not even exaggerating – this is exactly where my head has gone, starting with the first question. And it’s not like the first question was exactly super-concrete to begin with – really, as a PhD student I should be thinking about something much narrower, like “Is it possible to reduce confirmation bias in people’s beliefs about specific topic x using particular intervention y?”
Why do I have such trouble staying concrete, or narrowing down? And to what extent is this a problem?
It feels to me like the progression of questions described above is a fairly natural one, and to some extent useful: at each stage, I realise there’s a meta-issue that I don’t fully understand, and I feel like I need to understand that issue in order to properly answer the initial question. So, for example, I started thinking about how to reduce confirmation bias, but in doing so, realised that a lot of the ways we measure bias actually aren’t good enough for this purpose. And as I started thinking about the problems with these measures, I felt that part of the problem was that there was some confusion about what they were proposing to measure, about what we even mean by “confirmation bias.” Exploring this, I ended up coming across a bunch of literature suggesting that many of the cases that seem to demonstrate confirmation bias might in fact be reinterpreted in rational terms given certain assumptions – casting doubt on the very existence of the bias, or at least on the quality of the evidence we have for it. Which in turn led me to re-examine and question the concept of bias more generally, leading to a questioning of when and whether we should be trying to fix biases, which of course requires us to really know what we mean by “should”…
Each step in this chain makes sense. But I seem inclined to continue further, and therefore end up down a deeper rabbit hole, than many. I think part of this is a lack of willingness to make assumptions – to just take certain things for granted, even temporarily. What’s good about this is that I end up questioning things that others take for granted, and really get to the bottom of problems. It’s also a good exercise for reminding myself why I really care about a problem – I care about confirmation bias because I care about improving human reasoning, because I think that will improve the world – not necessarily for its own sake.
The risk is that I keep going so far down the rabbit hole that I never end up answering the question at the top – or any questions at all, if I keep going. So the challenge, I think, is knowing where to stop. Being able to see the trail down the rabbit hole without getting sucked down to the bottom. Being able to, at least temporarily, set aside the question of what “should” means, to accept some assumptions about this for the time being. Because if we don’t make any assumptions about the world at all, we’ll never get anywhere.
I say this, and yet I’m still sitting here writing about the second-to-last question on that list. But not the last one, at least…