Bias about bias

I spend a lot of time thinking about confirmation bias: the idea that people generally behave in ways that confirm their prior beliefs. We should really talk in terms of confirmation biases, since there are a variety of different biases in this vein, including the tendency to selectively seek out confirming over conflicting evidence; to subject arguments that challenge our views to more scrutiny than those that support them; and to interpret ambiguous information as supportive of what we already believe.

I should say at the outset that I do believe something like confirmation bias exists and is a problem. But I’ve also recently become a little more suspicious of how confirmation bias is talked about, both in academic psychology and in more everyday discussions. I’m going to explain where some of that suspicion has come from, and use this to consider the possibility that I (we?) might be wrong about confirmation bias. Because of course, believing in confirmation bias should teach us to be suspicious of all of our beliefs, and to consider the possibility that they might be wrong – even the belief in confirmation bias itself.

(I feel the need to flag that this isn’t going to be a totally clear and thorough treatment of this issue, but rather me just attempting to get down some of my thoughts – though I do hope to write up something more precise at some point. This discussion also raises some issues in how we think about bias more generally, which I hope to explore in more detail later.)

Let’s start with a definition: confirmation bias refers to the tendency to seek out, interpret, and evaluate evidence in a way that confirms what you already believe to be true. To say that something is a bias, we need to be able to show that it somehow deviates from a rational norm, and does so in a systematic way.1 So to establish confirmation bias conclusively, we need to establish that: (a) there is some rational norm which reasoning about evidence should conform to; (b) people systematically deviate from that norm, in a way that means they believe there is stronger evidence for their existing beliefs than there in fact is.

Though there is a lot of research on various forms of confirmation bias, I think a lot of it actually falls short of this standard, in a couple of different ways.

The evidence is weaker than many think

The evidence for a systematic bias towards existing beliefs is simply more mixed than people realise. For example, I think most people have the impression that people tend to prefer reading things they agree with over things they disagree with. However, the literature on selective exposure in psychology – this very tendency to prefer confirming over conflicting information when given the choice – is actually pretty mixed, and finds this tendency surprisingly difficult to demonstrate. There are a bunch of specific experimental design issues we could get into here, but overall selective exposure is simply not well-supported. Some studies find people prefer to read information they agree with, some find they choose to read more information they disagree with, some come out more even-handed – it all seems to depend on the context. I’ve personally run a number of online studies looking at selective exposure effects, and never found any substantial, significant effect.

A big meta-analysis of all the studies on selective exposure (and there have been a lot) finds overall a “modest preference for congenial over uncongenial information (d=0.36)”, an effect which is moderated by different factors including information quality, confidence, and the value relevance of an issue. Overall, what’s going on here seems subtler and more nuanced than simply, “people prefer to read things they agree with.”
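To get a feel for what an effect of d=0.36 amounts to, we can convert it into a “probability of superiority” – the chance that a randomly chosen person shows a stronger congenial preference than a randomly chosen comparison. (A rough sketch: the conversion Φ(d/√2) is a standard normal-theory formula, but the function name and framing here are mine.)

```python
import math

def prob_superiority(d: float) -> float:
    """Convert Cohen's d to the probability that a random draw from
    one group exceeds a random draw from the other, assuming normal
    distributions with equal variance: P = Phi(d / sqrt(2))."""
    z = d / math.sqrt(2)
    # Standard normal CDF expressed via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# The meta-analytic estimate for selective exposure
print(round(prob_superiority(0.36), 2))  # → 0.6
```

In other words, a d of 0.36 corresponds to roughly a 60/40 tilt towards congenial information – noticeable, but a long way from “people only read things they agree with.”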

Establishing that something is a “bias” is harder than it seems

It’s often not totally clear what the rational standard is that we are saying people are “biased” in relation to. This might seem unnecessary – it might seem straightforward to say that subjecting counter-evidence to more scrutiny than supportive evidence clearly demonstrates bias. But in fact, there are contexts and ways in which this might be deemed rational: surely my prior beliefs on a topic must, rationally, constrain my thinking to some extent. Certainly if I have a huge amount of evidence already accumulated to believe something – that the earth is round, for example – I’m going to be much more sceptical of someone who presents me with an argument that the world is flat than someone who presents me with additional evidence that the world is round.
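The round-earth intuition can be made concrete with Bayes’ rule. The numbers below are invented purely for illustration: given a strong enough prior, even a piece of evidence that genuinely favours the flat-earth hypothesis leaves the posterior tiny – so heavily discounting counter-evidence is exactly what a rational updater would do.

```python
def posterior(prior_h: float, likelihood_e_given_h: float,
              likelihood_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = (likelihood_e_given_h * prior_h
           + likelihood_e_given_not_h * (1 - prior_h))
    return likelihood_e_given_h * prior_h / p_e

# Hypothetical numbers: a prior belief in a flat earth of 0.1%,
# and an argument ten times likelier if the earth really were flat.
p = posterior(prior_h=0.001,
              likelihood_e_given_h=0.9,
              likelihood_e_given_not_h=0.09)
print(round(p, 3))  # → 0.01
```

The point isn’t that people actually compute anything like this – only that treating counter-evidence sceptically is what Bayes’ rule itself prescribes when the prior is lopsided, which is why demonstrating bias requires showing a deviation beyond this.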

To say that someone is biased towards their existing belief, we need to show that they are treating supporting evidence and conflicting evidence differently beyond that which might be rational given their prior beliefs and evidence. It’s not enough to show that you generally consider supporting evidence more persuasive, or subject it to less scrutiny, because depending on your prior beliefs it might be rational for you to do so. But establishing this, in practice, is incredibly difficult – since I don’t know what evidence you’ve already encountered or your reasons for how you treat new information. There are ways to get around this problem empirically, but a lot of the studies heralded as conclusive evidence of confirmation bias don’t adequately deal with this challenge. And in fact, other researchers have proposed that a number of these classic “confirmation bias” studies might be reinterpreted in normative terms – i.e. as rational, given certain assumptions – see here, here, and here, for example.

Of course, just because someone’s behaviour could be interpreted as rational, given certain assumptions, doesn’t mean their behaviour is rational – much of the time these assumptions may not hold, and what we are seeing may well be bias. But what it does mean is that just by showing people give more weight to supporting than conflicting evidence, or choose supporting evidence when given a choice, we haven’t conclusively shown that they are being biased or irrational. In fact, we are demonstrating bias in assuming the biased interpretation when there’s an alternative, possibly rational one.

This might all seem overly pedantic. If people do repeatedly, in different studies, give preferential treatment to evidence that confirms their views, then that seems pretty convincing: it’s unlikely all these cases could be considered rational in the way my being more sceptical of evidence that the earth is flat is. I’m somewhat sympathetic to this, and there’s definitely a voice in my head telling me that obviously people are biased in this way, and it’s just kind of crazy to question this.

But I think this concern – about the lack of a clear normative standard in a lot of the studies of confirmation bias – is worth taking seriously. If we want to study confirmation bias in a more systematic way, and to test ways of reducing it, we need to be able to say more conclusively when people are behaving in a biased way and when they are not. Perhaps this normative standard isn’t necessary for our everyday thinking about confirmation bias, but it does seem important for psychological research. (Which, we might argue, is what underlies a lot of everyday thinking about it anyway.)

I think this taps into a more general issue in the study of bias and rationality. There’s a tradeoff between precision and practicality. If I want to be able to say, very precisely, how someone’s behaviour deviates from some normative standard, then I’m likely going to have to observe their behaviour in very simple and perhaps contrived scenarios where it’s clear what exactly the “rational” thing to do is. But the worry with such scenarios is they end up so far removed from reality that while we can conclude people are biased in a certain way, it’s hard to infer much from that about how they will behave in the “real world.” For example, I might be able to construct a very simple scenario where I look at how someone seeks out new information to test a hypothesis about the number of balls in an urn, but it’s not clear whether anything I find here can be generalised to the much more complex case of someone’s beliefs about politics. On the other end, we might study the thing we’re more interested in directly – how someone seeks out information about their political beliefs – but here the issue is that we don’t know enough or have enough control over the situation to clearly conclude anything about the rationality of the person’s behaviour.

In fact, this is a challenge not just for the study of bias specifically, but for psychology more generally – internal vs external validity. I think a lot of the issues surrounding the recent “replication crisis” in psychology arise from people drawing strong conclusions about the fundamentals of human psychology from experiments and situations that are far too complex to justify those conclusions.

I feel a real tension here, personally, in the research I do. I want to do research that has relevance for the real world – which pushes me towards wanting to study human behaviour in more applied ways, in more naturalistic contexts. But at the same time, I’m deeply sceptical of our ability to draw robust conclusions from even the most controlled experiments in this way, because the world is so complex. This especially applies when it comes to making claims about what is “rational” or “biased”, when we can’t possibly know enough about the context to say conclusively what the rational behaviour would be. This scepticism is making me more and more inclined to look at more fundamental aspects of human cognition and behaviour in more abstract contexts. I do worry that this will end up being too abstract to be useful, but at least it’s more robust.

1. This isn’t, admittedly, the only way we might think about bias, and there’s also more explanation to be given here. But to avoid derailment, I’ll assume this definition for now, and leave further discussion to another post…


A confusion about “being present.”

[Meta: this was an interesting post to write. Normally when I write blog posts, I already have some idea I want to articulate, and writing the post is about figuring out how to best do so. Here, I started out with a confusion – something I wasn’t sure about that seemed interesting to me – which I tried to make sense of as I wrote. So it’s much more a stream of consciousness, and there’s likely less of a coherent point/argument. I don’t know how it will be for someone else to read, but I enjoyed doing this kind of writing and found it really useful – I was worrying less about perfectly articulating a certain idea, and more just using writing as a means of exploration.]


I realised yesterday I have these two, seemingly contradictory, ideas about what it means to be “in the present moment.”

I was reading Gary Drescher’s “Good and Real”, in which he’s attempting to explain consciousness. Drescher suggests that being conscious consists in the ability to “record” experiences and then replay them within our minds. I’m conscious because not only can my brain perceive things – represent aspects of the external world – but it can also perceive itself perceiving things – represent itself, and its experiences.

“To be conscious (of one’s thoughts, feelings, desires, perceptions etc.) is to have those thoughts recorded and played back by a metaphorical Cartesian Camcorder – an actual physical system within our brains… The very playback of the recorded material is among the sorts of events that can be recorded, making the self-awareness potentially self-referential to an arbitrary depth.”

I’m not going to get into a discussion here about whether this adequately captures what it means to be conscious – at the very least, it seems an important component. For now, back to my confusion about presence.

I’ve tended to think of presence as something akin to mindfulness, consisting in a broader awareness of one’s experience – not only am I experiencing something, but I also have some higher-order awareness of what I’m experiencing. I’ll sometimes have these moments where I suddenly seem to ‘leap back’ into awareness, to a higher level of consciousness. I’m suddenly aware of what I’ve been experiencing in a way I wasn’t previously – before this point, I was just going about my daily activities in a kind of automatic and mindless way. In this sense, it seems when I’m in a state of higher-order awareness – perceiving myself and my experiences, through something like a “Cartesian camcorder” – I’m more present than when I’m not.

But on second thought, this doesn’t seem quite right. From another perspective, it seems like being in this higher-order state – replaying “recorded” versions of things I’ve experienced – is exactly the opposite of being present. I’m observing my experiences, thinking about them, rather than simply experiencing them. In fact, as Drescher points out, being conscious of an experience in the Cartesian camcorder sense must necessarily be retroactive – i.e. I can only replay an experience in my mind that occurred in the past. (It often doesn’t feel all that retroactive, because everything is moving so fast – I’m replaying something which happened fractions of a second ago – but it’s still technically in the past.) I can’t be in the present moment if I’m retroactively replaying something I experienced a moment ago.


So on the one hand, having a higher-order awareness of what one is experiencing seems part of what it means to be “really present” – without any higher-order awareness, it seems like I’m not really there, like I’m just behaving in an automatic and mindless way. But on the other hand, higher-order awareness seems like the very antithesis of presence – since if I’m in some higher-order state, replaying or analysing my experience, then by definition I am not simply experiencing.

I’m only just articulating this confusion, so I definitely haven’t resolved it yet. I wonder if this highlights a deeper confusion about what it really means to be “present.” In one sense, what are classically thought of as flow states seem like a candidate for presence, for truly being in the moment – when I’m fully immersed in an activity like running, drawing, or solving a mathematical problem – it certainly seems like I’m “in the moment” in a way I’m not when I’m stressing about how I’m going to get to my next meeting, for example. It’s precisely because I’m not caught up in metacognition about my experience, that I’m simply experiencing, that I think of myself as more present when I’m drawing than when I’m worrying. But by contrast, the meditative practice of noting – essentially noticing and labelling different aspects of your experience (thoughts, sounds, emotions, etc.) on a very granular level (every second or so) – also seems like it’s designed to induce a kind of “presence”, despite the fact that it’s also explicitly focused on bringing aspects of your experience to higher-level, conscious awareness. And the practice of noting certainly seems like a more present experience than when I’m totally lost in thought about something, my mind all over the place, with no higher-level awareness of what I’m thinking or feeling.

So perhaps there are really two different kinds of presence. One involves being totally immersed in my experience right now, in the low-level experience, without any higher-level cognition: without analysis or thinking about the past or future. We can distinguish the kind of “immersion” I get in a flow state from being caught up in my own thoughts and feelings by saying in the first case, I’m caught up in a direct experience without having lots of thoughts about it – whereas in the second case, the thing I’m caught up in is my own thoughts (and so already, perhaps, at a higher level.) The second kind of presence involves being aware, on a moment-to-moment basis, of what I’m experiencing – without getting caught up in that awareness. That is, I can very briefly note what I’m experiencing, without getting pulled away from my actual experience for more than a moment. In a noting practice it seems sort of like what I’m doing is very quickly oscillating between two levels of cognition – direct experience, awareness of that experience, back to direct experience, and so on. Of course you can also imagine getting caught in an infinite regress – noting the experience of noting, and so on… but in practice this doesn’t seem to (often) happen.

I think the first kind of presence is perhaps more the kind people talk about in an everyday sense. It’s also much easier for us to access psychologically than the second: being able to bring higher-order awareness to your experience without getting caught up in that awareness seems incredibly difficult for us to do. The second kind of presence seems more like the kind that’s talked about in various spiritual and particularly Buddhist teachings.

Perhaps one kind of presence we might realistically want to cultivate more of, in a more practical sense, is that of consciously deciding what experiences to immerse ourselves in. The worry with the first kind of presence is that it becomes mindless – if we end up totally immersed in low-level experience without much higher-order cognition, we’re much more likely to fall prey to various impulses and biases that aren’t good for us (procrastination, bad habits in general.) The higher-order cognition exists for a reason. But we also don’t want to risk getting so caught up in analysis and higher-order cognition that we never fully immerse ourselves in experiences (something I think I’m personally at risk of, and have been thinking about a lot lately.) So instead perhaps something to aim for is the ability to use our higher-order consciousness to decide what experiences to fully immerse ourselves in and for how long. Ideally we want to be able to exercise greater control over our states of consciousness: the ability to focus on the lower level when we want to, but also the ability to bring higher-level awareness and analysis into play when it’s useful.

In fact, my impression is that this is what a lot of meditative practices are trying to train – the ability to direct one’s attention, to focus on very low-level aspects of experience when one wants to, but also to bring a higher-order of awareness into play without getting caught up in it. I just hadn’t quite thought of it in this way until now.

Minor everyday stresses

I find it amazing how much very minor, everyday stresses can affect me – provoking a barrage of frustrated feelings and thoughts, sending my mood spiralling downwards. This can happen pretty quickly if I don’t notice the early warning signs, and before I know what’s going on I’m incredibly wound up over something incredibly small.

I’m going to try something I haven’t tried before – to recount, in excruciating detail, an example of how this happened to me this morning, and some of the classic thought patterns and feelings that emerged. I’m not sure how interesting this will be to anyone else – this will perhaps depend on how interested you are in the workings of my mind and/or how much you identify with these kinds of patterns of thinking. But I think it would be useful for me – and might be useful/interesting for others – to really look at what happens in these situations.


I was already feeling a slight sense of stress and urgency – aware of the morning slipping away, and wanting to really get on with something – to sit down and write while the morning was fairly fresh. But there were all these small, annoying, obstacles I had to get out of the way first. I needed to shower, and to get dressed – and also, once I’d done that, I realised, I should probably meditate for at least five minutes, since I’ve been thinking recently that I’d benefit a lot from doing more meditation (always a good way to start a meditation practice, with the word “should.”) Once I’d gotten that out of the way, I realised my desk was a mess, and actually I’d probably feel a lot better if I had a nice clear space to work in – so I started clearing it up, which just made me aware of how much crap I had that needed sorting out, but that I probably wouldn’t get round to doing, and that made me feel bad. Then, in a rush to collect together some things to take upstairs, I dropped a bottle of nail polish on the floor and it smashed. Arghhhhhghghgh. This morning is not going well at all (a classic piece of unhelpful commentary from my brain). Just another thing getting in the way of actually getting anything done. I cleared it up (ish), nicely stinking the kitchen out with the smell of nail polish remover (nothing else would get it off the floor – but fortunately it wasn’t a carpet…) I felt a bit bad that I hadn’t done a properly thorough job of cleaning it up – there were still a few marks that it was hard to get out – and annoyed that I now had red stains on my hands, but whatever, I needed to get on.

With my desk slightly clearer (and my brain now more aware of various things I needed to sort out – that thing I hadn’t gotten round to sending back to eBay, for example, ugh…), and the floor relatively untouched by red nail polish, I set about making a cup of coffee. The kitchen was a bit of a mess, I should probably put away some dishes rather than just stacking them on top of each other like a buckaroo… but later, I’d already wasted enough time this morning. The cap for the aeropress wasn’t quite screwing on properly – cue more feelings of frustration and thoughts that “the world really hates me today” – but I figured it would probably be fine. It wasn’t. Suddenly there was coffee everywhere. Right, something else to clean up. At this point I realised that it might be worth spending ten minutes to wash up the various dishes and glasses festering in the sink, and end my game of buckaroo on the drying rack. I seemed to be in full cleaning-up mode this morning anyway. So I did – which actually took me no more than ten minutes, despite a little voice in my head telling me I just “didn’t have time right now” (also note that this is Sunday morning, at 10am, supposedly my day off – oh, and I’ve already been to the gym – me, put too much pressure on myself?) Kitchen slightly cleaner (but still a bit of a mess really, my brain reminded me – but of course I don’t have time now to do anything about it, so I’ll just feel bad about it instead), I made my coffee, miraculously making no mess at all this time, and finally sat down at my (clear!) desk to do some writing.

Clearing my desk and doing the washing up had made me feel slightly better, but the last half an hour or so had still left me feeling pretty riled up. I had this rising feeling of stress and urgency in my chest, and some residual feelings of guilt from noticing all the things I should really be doing. But I wanted to write, and so I’d write. I already had an idea in my head of something I wanted to write about – based on some experiences yesterday – but I realised I wasn’t quite in the right state of mind for that. My mind was too clouded by the small, but powerful, stresses of the last half hour. And so instead, I started writing about that.


I could just leave it there. Simply writing down my experience, and some of the associated thoughts and feelings, has definitely helped to calm me, to get me more into the present. But as I’m always inclined to, perhaps I’ll step back and make a few observations. I don’t think I want to necessarily try to come up with “solutions” here – there’s a feeling of stress and urgency I get when I think about trying to do that – but just to make slightly better sense of what’s going on.

  • As I said initially, it’s kind of amazing to me how small all of these things were, certainly in the grand scheme of things – and yet how strong the emotions and stress they provoked in me were. Clearly the thoughts and feelings being provoked here are about something bigger than just what’s going on. One thing that seems like it might be useful is identifying certain patterns of thoughts and feelings that I might practice noticing in future, as warning signs that I’m getting into this kind of unhelpful state.
  • There’s a lot of thoughts of the form “I should do this thing, but I’m unlikely to do it, so I’ll just feel bad about it instead” (though obviously not that explicit) – which is useful to identify, since this is sort of ridiculous. It’s pretty clear on reflection that I should either do the thing, plan to do the thing in future, or just decide it’s not worth doing and not feel bad about that – “decide not to do this thing but still feel like I should and feel guilt because of that” is the worst of all the worlds. So being wary of these kinds of thoughts and stopping them in their tracks seems like it could be very useful.
  • There’s this feeling of “urgency” that I get – that I noted right at the beginning, that I was already aware of on some level – that I’ve just got to get through all these annoying things in the way before I can get to the thing I really want to do. This reminds me of these dreams I sometimes have, where I’m trying to get somewhere, but all these things keep coming up that I have to do and I’m getting later and later and more and more stressed and I feel totally out of control, like I’ll never get there. Sometimes in these dreams I actually have this feeling of being physically hindered – I literally can’t move or move fast enough, or I’m trying to run and somehow it’s like there’s a tonne of resistance against me. I don’t want to get too psychoanalytic about dreams, but it does seem like there’s something going on here – stress and fear around not being in control of my time and actions, feeling like I have to do certain things before getting to the thing I actually want to do.
  • But of course, reflectively, I have a lot more choice than I realise. I’m choosing to clean up the spilt nail polish and the kitchen because I prefer to live in an environment that’s not messy and dirty; I’m choosing to spend time making coffee because I want coffee, rather than it just being a thing I need to “get out of the way.” This seems similar to the “not doing things but feeling bad about them” – I’m telling myself I have less agency than I actually do, or something (I mean, we could get into a deeper free will debate here of course, but setting that aside for a moment.) I’ve got this model where there are the “things I want to do”, and then everything else getting in the way. But actually, given certain constraints on reality – some of which might be annoying sure, like the fact I spilled my nail polish – I’m then just making choices with tradeoffs. Telling myself that I’m doing lots of things I don’t want to do doesn’t seem very helpful.
  • It’s interesting how quickly my brain starts constructing a certain unhelpful narrative about what’s going on – i.e. once a couple of things have gone slightly wrong, I start having these thoughts that “today just isn’t a good day”, or “everything is against me today.” I think I often don’t notice myself making these huge unwarranted generalisations, and it seems like it could easily become self-perpetuating – if I feel like everything is going wrong I’ll look for more things going wrong. The negative/stressed mindset it provokes makes me less likely to pay attention to things, and more likely to do things I know would be a bad idea if I just stopped and thought (like trying to use an aeropress without the lid screwed on.)
  • Deciding to sit down and be with the feelings of stress by writing about them was a really good call. I could have tried to “push through” them – which I often do (again, telling myself I don’t have the time to think about why I’m stressed…), but this is basically never a good plan. The feelings of stress just end up sitting there in the background ignored, and I find it hard to focus and be productive on other things, which just leads to more frustration. I think a pattern like this is actually what leads to a lot of my feelings of overwhelm and burnout when they occur – the buildup of small, ignored, stresses and negative thoughts over time.

I guess my main takeaway here is that, if you can recognise that you’ve gotten into a negative or otherwise unhelpful state of mind, immediately trying to recount what happened in the last half an hour or so (or whatever the relevant time period seems to be), seems like a useful exercise. It forces you to actually focus on and address whatever feelings have arisen, and can help with identifying patterns of thoughts and feelings to look out for in future – even if you don’t necessarily feel like you’ve “solved” the problem. I know for sure that I’ll get into a similar state again at some point – but for now I feel calmer, and slightly better equipped to deal with it.


What are we wrong about?

It’s strange to realise that some of the things you believe are definitely wrong. Humans are imperfect, and our abilities to reason about the world are certainly imperfect, and so there’s just no way you’re not making some mistakes. Even more disconcerting is the realisation that there are certainly things that society as a whole is wrong about. We might hope that these individual irrationalities get evened out by a kind of “collective intelligence.” And maybe they do to some extent. But there’s still a lot we don’t understand. At every stage of history, people had theories they thought explained the world – which were then overturned and judged wrong by later societies. At every stage of history, things were considered acceptable that later generations considered morally abhorrent – just consider slavery, or the violence that nomadic tribes displayed towards anyone who wasn’t part of their group.

We’d be totally naive to think we’re any different: there are definitely things we, as a society, are wrong about. Both on a factual level – there are things we believe about the world and how it works that are totally wrong – and on a moral level – there are things we currently think are morally acceptable that future people will look back on with horror.

This isn’t a new idea, and the question of what we might be wrong about has received increasing attention recently. I was prompted to think about this more when I listened to an EconTalk podcast episode with Chuck Klosterman, entitled “But What If We’re Wrong” – discussing Klosterman’s book of the same title. The discussion is super interesting – I highly recommend listening to it – particularly because it covers the possibility of our beliefs drastically changing in areas I hadn’t even thought about before. A couple of comments that particularly struck me as interesting:

  • Which human characteristics/strengths we value most highly might change in the future. In particular, as artificial intelligence gets more and more advanced and able to do increasingly diverse tasks as well as or better than humans, we’re likely to move towards valuing those abilities in humans that AI is still inferior at: emotional intelligence perhaps being a key one. (Klosterman even makes the very provocative but interesting suggestion that animals might have higher emotional intelligence than humans – and so this might completely change how we think about animals – a claim I’m going to have to look into in more depth…)
  • The writers and thinkers who end up being revered as revolutionary and insightful in a hundred or hundreds of years’ time likely won’t be those who are most influential now. First, note that this is the historical pattern – many thinkers considered revolutionary today were practically ignored when they were alive. Klosterman also explains why this is likely to be the case in more depth – since the needs and perspectives of society change over time, a thinker who addresses the problems of the ‘current era’ probably won’t be that relevant in future. Whereas one who addresses problems that will be on the minds of future people – someone we might say is ‘ahead of their time’ – is likely to go unappreciated today.

One interesting observation I want to make is that we might argue neither of these things is really something we’re “wrong” about in a strict sense – we might think of them more in terms of society’s values simply changing (without the original values being incorrect). It might therefore be useful to distinguish a few different ways in which society might “change its mind” over time:

  1. Scientific progress: changes in what we believe to be true about the world, and what we see as the best methods for uncovering those truths. (e.g. new/changed theories of physics or psychology, new tools or techniques for studying the world.)
  2. Moral progress: changes in what we believe about right and wrong (e.g. the abolition of slavery, acceptance of homosexuality, increasing empathy for animals)
  3. Shifts in values/priorities: changes in what we value or prioritise that are determined by contingent changes in the world, rather than by finding we were previously wrong about certain values (e.g. changing which human qualities and abilities we value depending on how useful they are to a society)

But what this discussion really made me think about is the higher-level question (there’s always a higher-level question, in my mind at least): how do we need to think in order to identify the things we might be wrong about? What kinds of questions do we need to ask ourselves? What kinds of biases might get in the way of doing this effectively? In other words, rather than jumping straight into trying to think of things we might be wrong about directly, we might be better off starting by thinking about how to think about and predict belief changes in advance – certainly not a straightforward task.

My impression is that the way people start off thinking about what society could be wrong about is to think about what they personally believe that disagrees with the rest of society. This is obviously a good starting point, especially if you have a good argument for why you’re in a better position to get at the truth than society as a whole is. For example, I believe that the way we currently farm animals for their meat and other products is wrong, and I think it’s pretty likely that future generations will agree with me on a much broader scale. But this doesn’t feel like a particularly revolutionary idea – plenty of people already agree with me, and the reason things haven’t changed yet is that various practices and norms are just too ingrained to shift quickly. Indeed, it does feel like social norms are already starting to change on this one.

If we want to dig deeper – to get at things we ourselves are not even currently aware we might be wrong about – how do we need to think? I haven’t thought about this in a very systematic way yet, but here are a few initial questions we might ask (many of them closely related or overlapping):

  • Looking to the past, at things we clearly see people were wrong about: what factors prevented people, at the time, from being able to see that they were wrong? Was it that they simply didn’t have the right tools to think about the problem? Or was it that certain incentives existed which made it very difficult or undesirable for people to believe anything other than the status quo?
  • Relatedly, what things do we currently have strong incentives to believe and/or rationalise? What things, if true, would suggest radical changes to the way we live – as individuals or as a society – that would be very difficult to enact and would impose large short-term costs? (e.g. abolishing the slave trade in the US was initially hugely costly)
  • On what topics is the consensus view in society largely driven by a group of people who have strong incentives other than seeing reality as it is/reaching the truth? (e.g. the state of the housing market before the financial collapse)
  • What would someone in some past era have needed to do – how would they have needed to think, what information could they have had access to, what incentives would they have needed to have – in order to realise that society was wrong about something we now think of as obvious? If we look at the first people who did realise this, can we identify patterns about how they did so?



An experiment in not achieving things

I’ve become increasingly aware recently of how much time I spend in “achievement mode”: worrying about using my time efficiently, whether I’m being productive, and feeling guilty about all the things I could be doing that I’m not. I’m even doing this a lot of the time when I’m not technically “working”, either feeling like I should get back to more productive things, or stressing about whether I could be using my ‘down time’ in a better way.

I’m pretty sure I’m not alone in this – that this is a common state for driven and ambitious types. Just in case it needs saying, I really don’t think it’s healthy to be in this state too much of the time. This isn’t to say it’s not useful to sometimes be focused on achievement: but if you get to a point where you’re constantly putting pressure on yourself, and have difficulty switching this mindset off, I think it’s a real problem. For me personally, it fuels cycles of burnout and overwhelm, and means I gradually lose touch with what I’m actually feeling. I also strongly suspect it makes me less productive than I otherwise could be – that ironically, I’d actually achieve more if I weren’t so concerned with achievement.

I think I really need to do something about this; to spend some time figuring out why this is such a problem for me, and how I might fix it. So I recently started trying to analyse the problem: writing down thoughts and trying to build a better model of what was going on, and somehow… all of this just made me feel even more stressed out.

It seems obvious now, but the problem was that I was turning this into an intellectual project itself, another thing for me to stress about not achieving, another thing to worry about failing at. When I realised this, it first left me feeling quite helpless – my general strategy for solving problems wasn’t going to work, and even worse might just be feeding into the problem itself. I couldn’t use my analytical, high-achieving brain to solve the problems created by those ways of thinking, but I also didn’t really know any other ways of solving problems.

Eventually I realised that I didn’t necessarily need my analytical skills here, because I already understand the problem pretty well. I’ve spent most of the last 20+ years in ‘achieving mode’ without realising it – it’s so rare that I allow myself to just be, without thinking about what goal I’m moving towards. I thought that identifying this problem and setting my brain the task of solving it was a step in the right direction. But actually what I was doing was simply adding “figure out how to put less pressure on yourself” to my list of goals to put pressure on myself about.

Instead what I need, I think, is to actually carve out time and space in my life where I allow myself to just be, without striving for any particular goal.

This is easier said than done for someone like me. There’s a real risk I’ll slip into making “not striving” the goal, and then start stressing about whether I’m achieving that or not. There’s also a little voice in my head that’s sceptical, worrying this might just be an excuse for laziness or self-indulgence. Who am I to talk about my struggles and the need to spend time just doing what I feel like, when there’s so much worse suffering in the world, and so much I could be doing to relieve the suffering of others? It’s hard for me to ignore this voice entirely – I do worry that if I let go of the pressure I put on myself I’ll end up doing less good for the world. But another part of me thinks it’s more likely that I’ll fail to have any impact because I burn myself out. I think I need to learn to switch modes, to compartmentalise the drive and pressure, so I can apply it in a more targeted way to what’s most important – rather than having it operate on a low level, constantly in the back of my mind.

My plan now is to experiment with allowing myself not to achieve things, and to carve out a much clearer distinction between my time that’s achievement-focused and that which isn’t. This seems so obviously a good idea, and I’m sure I’ve heard variations on this advice a billion times before, but somehow I’m only properly internalising it now. Only now do I properly see how rarely I manage to switch the “goal-focused” mindset off, and how big a problem that is. I’m going to try making this distinction much more explicit in my daily schedule, and to pay more attention to how much I’m striving and how it feels.

At least to begin with, I’m actually going to keep the achievement-focused time fairly minimal – fewer hours than I’d ideally want in the long term. I think I need to practise not-achieving right now much more than I need to maximise the number of hours I work. I also think having a limited amount of time in which to be productive is likely to help me prioritise better. Ideally, my default mode would be non-achievement – with the ability to explicitly schedule in as much time as necessary for pressure and effort and achievement – rather than the other way round. I think this is going to be bloody difficult, but that it could also be really important.


Why “coherent arbitrariness”?

I’ve borrowed this phrase from a behavioural economics paper by Ariely, Loewenstein and Prelec, who use the term to describe how people appear to value goods and experiences. They start with the idea that people’s preferences are surprisingly arbitrary – how much someone is willing to pay for a holiday can easily be influenced by an irrelevant ‘anchor’ number like their social security number. And yet, these preferences are also generally coherent – if I say that I’m willing to pay a certain amount for a holiday today I’m very unlikely to go back and contradict myself the next day. This coherence gives us the impression that people have stable, fundamental preferences, and somehow just know how much they value things – but the arbitrariness suggests that this is an illusion. If you tell me how much a glass of red wine is worth, I’ll happily tell you how much I’d pay for two, or for a higher quality glass of wine – but if you simply asked me how much I’d pay for some entirely new product unlike anything I’ve ever bought before, I have no idea where to begin. We’re very good at making relative judgements based on our past experience, but very poor at evaluating anything in absolute terms.

I’m using “coherent arbitrariness” in a slightly broader sense. I believe that much of how we think about ourselves, other people, and the world, is subject to a similar kind of coherent arbitrariness: we construct beliefs and theories that are (relatively) coherent, but are more influenced than we think by arbitrary factors. Elaborating slightly on these two components:

    1. Coherence: we have a strong drive to make sense of the world and our experiences, and to do so in a way that’s relatively coherent. To put it another way, we tend to find incoherence – or illegibility – unpleasant and difficult to deal with. And so we try to act in ways that are consistent with our ‘past selves’ or our image of ourselves, we expect others to do the same, we prefer ideas that easily fit with what we already believe, and dislike those that are hard to make sense of.
    2. Arbitrariness: coherence gives us an illusion of stability, that our views and preferences emerge from something more fundamental – but they may be more influenced by random factors than we realise. The kinds of people or groups we happen to come across when young, the kinds of experiences we have, and the varieties of feedback we receive, can hugely influence how we end up seeing the world and ourselves, even if these experiences are pretty ‘arbitrary.’

A few examples of where this coherent arbitrariness might arise:

  • One person happens to comment that you seem to be very creative as a young child. You like the idea of having “creative” as part of your identity, or perhaps you just take their observation seriously without either liking or disliking it, and so invest more energy in creative pursuits. This is self-reinforcing – both because you’re trying to create this coherent self-image, and because the more time you spend doing creative things the more positive feedback you get from others, and the more you generally get labelled as a “creative” person. But it seems pretty plausible that this initial observation someone made wasn’t a result of recognising some deep talent you had – perhaps they just happened to see you painting and seeming to enjoy it and wanted to encourage you, and if you’d instead been doing your maths homework at the time they would have made a comment about you seeming to have a scientific mind instead.
  • While at college, you happen to start dating someone who is heavily involved in animal rights activism. Lots of conversations with them about their beliefs lead you to decide that you too care deeply about animal welfare, and so you become vegan and also get involved in animal activism. It becomes part of your identity long after you break up with this partner, and influences a lot of how you think about the world – it certainly feels like something you believe in an important, fundamental way. But if you’d happened to date someone who was instead campaigning against climate change, perhaps your focus might have ended up different. (Of course, this sort of suggests too strongly that our views about the world and ethics stem entirely from the people we’re surrounded by, which might be unfair or an exaggeration – many of us have reached conclusions on these issues by thinking them through ourselves – but I think it’s also true that a lot of our views are shaped in somewhat ‘arbitrary’ ways depending on what we get exposed to through other people, especially early in life.)
  • The way that we use language to carve up reality itself displays a kind of coherent arbitrariness – it feels like the words in our heads and that come out of our mouths somehow map perfectly onto what exists in the world, but a bit of thought suggests this really isn’t the case. First of all, different languages seem to ‘carve up reality’ in different ways: some languages have words to express concepts that others don’t, for example – potentially resulting in the people speaking those languages actually perceiving reality in different ways. One way to think of language is as a means of abstraction and categorisation – as how we make sense of a reality that’s fundamentally incredibly complex (and made up of entities so small we can’t observe them and struggle to even think about them). We’re under the illusion that the kinds of categories and abstractions we’ve created somehow are reality – but in fact, we could conceivably have carved up reality in a completely different way, and the way we have in fact done so is in a sense totally arbitrary.

I’m using the idea of “coherent arbitrariness” to name my blog because it serves as a constant reminder of the limitations in how I view the world. It reminds me that I could have been influenced by different ideas and experiences, and if I had been, I might see the world differently. It reminds me that my desire to make things coherent can restrict how I think about new ideas, and make my experience of reality seem more objective than it really is. It reminds me to sometimes take a step back and wonder what I might think or who I might be if things had been different, and to sometimes be ok with not making everything totally coherent.