Signalling and Self-Deception

Published 25 February 2020


I recently interviewed Dan Williams about signaling theories of misinformation for the podcast I co-host called Hear This Idea. I found the whole thing extremely interesting and spent a fair bit of time trying to write about some of the ideas that came up. This is my effort.

Signals

Why do peacocks have such elaborate tails? They are heavy, easily spotted by predators, and require lots of energy to grow - so how could they confer a survival advantage? According to one explanation, that’s exactly the point. Peacock tails signal to potential mates that ‘this peacock can survive in spite of this inconveniently extravagant tail, so it must be strong and healthy!’. This is a kind of signalling hypothesis about peacock tails, and similar explanations abound in evolutionary biology. A signal is just something with the function of conveying information. Contrast signals with cues, which convey information only incidentally. Peacock tails have the function of conveying information about mate value because they incur a cost which only comparatively strong and healthy peacocks could bear; but signals aren’t always costly. Consider lions’ roars: having a loud roar doesn’t involve a cost, but the loudness of a roar does stand in a lawful relation to the size and strength of the lion, and so to mate value. Weak or small lions simply have no way of faking a loud roar, so big roars signal mate value.

(Image) Photo of a peacock

A final example is the strange proclivity of gazelles and deer to jump high into the air while keeping all their legs straight - called ‘stotting’, ‘pronking’ or ‘pronging’. On its face, stotting just makes the animal more vulnerable to predation: it makes it more visible and uses up time and energy which could be spent escaping. But signaling explanations make the behaviour less puzzling, and it is likely that stotting has more than one signaling function. It may work as a kind of alarm signal to other members of the herd that a predator is nearby, or as an honest signal of the animal’s fitness to the predator (‘don’t waste your time on prey that can afford to bounce around when it knows a predator’s nearby!’ - the handicap principle - or ‘look how high I can jump! Don’t bother hunting me!’). It might also work as a signal to potential mates for similar reasons.

(Image) Stotting

That’s roughly how signaling works in biology, but signaling explanations extend to economics, philosophy, and a surprising array of social behaviour. Here’s another provocative example: what is the value of education? The standard answer is that the value we get from education, and the reason people sign up to higher education, consists in the skills we develop and the things we learn. Another explanation says that most of the benefits of education come from signalling. That is, we don’t go to university or even school primarily to learn things, but to signal to potential employers that we have desirable characteristics: that we’re intelligent, industrious, conscientious, maybe even conformist. Educational achievements function as signals of those characteristics precisely because much of schooling and higher education is boring and hard, and often requires long periods of diligent work to prepare for tests on topics we don’t find interesting or important. Success in such an environment ought to be a good indicator of success in the world of work! It might seem strange that exams are quite as boring and as apparently useless as they sometimes seem, but on the signalling hypothesis that is exactly the point: boring exams filter for conscientious, employer-friendly personality types and accredit them with grades and certificates, which therefore work as credible signals of employability. Bryan Caplan’s The Case Against Education defends this idea at length.

Beliefs

That’s signaling. The other key concept in this discussion is belief. In particular, Dan is interested in beliefs in the political domain. There are at least two ways that I can fall short of having a true belief about some politically relevant fact. The first is voter ignorance: I might not have learned about some topic, so I just haven’t formed any relevant confident beliefs at all. For instance, most people probably couldn’t explain the difference between fiscal and monetary policy. The second is misinformation: where I have a confident but false belief. Examples of misinformation in politics abound. Voters have inaccurate views about the severity of climate change, or about crime or immigration statistics. Others believe in stranger kinds of conspiracy theories. At the extreme, some people apparently still think that the world is flat.

The idea Dan is exploring is whether a signaling hypothesis might explain some kinds of misinformation. In other words, do some people confidently believe falsehoods because those beliefs have a signaling function? Suppose every member of some group believes ϕ, and that outgroup members typically do not. Suppose I have an interest in being part of that group, because I can gain social prestige, material rewards, protections, and so on. If I believed ϕ too, members of that group might be more inclined to recognise me as a member. Therefore, I have an interest in believing ϕ. Very roughly, that’s the idea: that a belief can be socially adaptive, quite apart from whether it is true or false.

One problem with the signaling hypothesis about beliefs is that beliefs are not observable. In the examples taken from the natural world, peacocks’ tails and lions’ roars worked as signals because they were easily seen or heard. But beliefs are (presumably) not directly observable: we all have plenty of beliefs which nobody ever finds out about. Perhaps we even have beliefs which we ourselves are not aware of. The answer to this objection is that our beliefs are indirectly observable, or inferable, through the things we say and the way we behave. If there were no relation between what people said and what they actually believed, and everyone knew as much, then beliefs would not work as signals. But beliefs are so often shared and broadcast that they are, for the most part, as good as observable.

A second potential problem is that while classic examples of signaling work because they can’t be faked, it seems obvious that beliefs can be faked. If it is beneficial for me to have some set of beliefs, then I might just act as if I believe those things while keeping my actual beliefs to myself. But the signaling hypothesis doesn’t say that we should expect some people to act as if they have certain beliefs because of their signaling function; it says that we should expect some people to actually believe certain things in virtue of their signaling function. So why think beliefs themselves are likely to have a signaling function - why not just belief-behaviour? Well, one response is to concede that very often mere belief-behaviour does have a signaling function all of its own. A recent poll in the US asked Democrats which of two outcomes they would prefer in November 2020: Donald Trump winning re-election, or a giant meteor striking the Earth.

Responses preferred the meteor by a serious margin: 62% to 38%. Clearly, almost no respondent actually believed that outcome would be better. Rather, the question might be seen as an opportunity to signal political affiliation in a deliberately hyperbolic way, independently of actual beliefs. But conscious deception (maintaining a distinction between my actual and my professed beliefs) involves costs: keeping track of what lies I have told, what other people believe I believe, and likely an uncomfortable sense of inauthenticity. Moreover, in the political domain it is rarely in my own practical interests to form true beliefs. For instance, although climate change is a severe threat, my forming a belief to that effect is unlikely to change the outcome of any relevant policy decision or collective behaviour unless I am extremely lucky or powerful. So the practical incentives to form true beliefs in politics are likely to be dwarfed by the incentives to form beliefs that help me get on with my group. And, as the case may be, my group might prefer to deny the severity of climate change.
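To make that comparison of incentives concrete, here is a minimal expected-value sketch (the notation is mine, not Dan’s). Let S be the social payoff I get from professing and believing what my group favours, let V be the value to me of the outcome that an accurate belief would help bring about, and let p be the probability that my individual belief actually makes a difference to that outcome. Then, very crudely, the socially adaptive belief wins out whenever

$$S > p \cdot V$$

In national politics p is minuscule - no single voter’s opinion about, say, climate policy determines what gets enacted - so even a modest social payoff S can outweigh an enormous stake V. On this toy model, it is unsurprising that political beliefs often drift away from the evidence.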

(Image) Industry

Self-delusion and motivated reasoning

A third objection is that (surely) barely anybody consciously decides to have a belief on the grounds that it is socially adaptive. Isn’t there even something conceptually suspicious about the idea of forming a belief not because you think it’s true, but because you have some other practical motive for forming it? For more on that latter question, William James’ classic essay ‘The Will to Believe’ is highly recommended. The answer to this objection is quite simple: the signaling hypothesis is not saying that people form beliefs because they think those beliefs will have a useful signaling function - presumably the process is unconscious. This is uncontroversially true for other kinds of signaling explanations applied to human behaviour. Maybe people buy Rolexes and luxury cars in part because they have a signaling function - a fact which those people have probably never explicitly considered or articulated. All of us like to namedrop the impressive people we’ve read or met, and use long words to signal intelligence or something like it. And I bet you can recall conversations which morphed into socially acceptable signaling competitions: ‘have you seen this new film? Sure, but it’s got nothing on the director’s older, less popular work… Sure, but only posers talk about that director without knowing about this director…’.

Maybe there is something unsettling about the suggestion that we form many of our beliefs on the basis of factors we are not consciously aware of, but there is no doubt that the phenomenon is widespread. Countless psychological studies demonstrate that we often have little to no introspective access to the reasons we form our beliefs.

Here are some examples. In one experiment, participants were asked to pick their favourite article of clothing from a line of similar-looking options. They made their choices and supported them with reasons (“I prefer the texture of this one”, etc.). They also denied, when asked, that the position of the items influenced their decisions. Two catches: firstly, position did seem to influence the decisions to a significant degree. Secondly, every item of clothing was identical. There were no differences. So the reasons given for choosing some particular item must have been false. Read about Nisbett and Wilson’s classic experiment here.

Another kind of example is cognitive dissonance, or the ‘backfire effect’. In certain circumstances, when a belief that is relevant to a person’s identity is challenged by contradictory evidence, it only gets stronger. To give an extreme example, consider how doomsday cults that predict the end of the world on some specific date react when the date comes and goes and the world keeps turning. In many cases, cult members double down on their beliefs and become more confident that they were right, claiming that they successfully averted the threat through their efforts, or something similar. The book When Prophecy Fails follows a small UFO religion that predicted a great flood on December 21st, 1954. When the flood failed to materialise, they agreed that “[their] little group, sitting all night long, had spread so much light that God had saved the world from destruction.” Dan concludes from these examples:

What that gets at is that there are important differences in how we treat beliefs based on why we hold them. If you hold a belief because it’s a central part of your identity, it’s going to behave very differently to ordinary beliefs.

For an entertaining philosophical treatment, see Eric Schwitzgebel’s ‘The Unreliability of Naive Introspection’.

So it is difficult to deny that we often engage in ‘motivated reasoning’: inventing rationalisations for the things we have some instrumental reason to believe. Dan does note that there are limits on what kinds of beliefs we can reason ourselves into - he calls this a ‘rationalisation constraint’: only those beliefs which admit of some plausible-sounding post-hoc reasons can be the targets of motivated reasoning.

True, when asked why we formed some belief, most of us can give a plausible-sounding answer. But it would be a mistake to identify the reason we give with the actual cause or causes. The psychologist Jonathan Haidt uses a suggestive metaphor here: he describes the conscious mind, which gives explanations for our beliefs and behaviours, as a kind of press secretary. In a government or big corporation, press secretaries work on the boundary between internal goings-on and the outside world, justifying and explaining decisions to outsiders. Press secretaries have little to no influence over the decisions that actually get made - but their job requires them to endorse and defend all of them. So it goes when we explain some of our beliefs: although our conscious, explaining selves have little to no introspective access to the real and complicated processes that generate our beliefs, we nonetheless own them and even fool ourselves into thinking we know how we got them.

Other explanations

So people have all kinds of apparently irrational, crazy, downright false beliefs, particularly in politics. One kind of explanation is that some beliefs are socially adaptive: they have a useful signaling function. But is that the only possible explanation? What are the alternatives?

One story appeals to differences in ‘informational ecosystems’. When we come to associate with a particular kind of politics, or a particular set of beliefs more generally, we tend to adjust our choices about which media we consume. The information we receive then becomes skewed towards our prior beliefs, as does the way we react to that information. We are also likely to trust different sources by different amounts, often for reasons apparently unrelated to the topic at hand. How much factual information is disregarded not on the grounds of contrary evidence, but because the bearer of the information belongs to the wrong team? Such-and-such a pundit said something I disagree with - but I don’t need to engage with the content of their claims, because they’re fake news! Echo chambers provide another popular alternative explanation: the polarising feedback loops that can occur in isolated media ‘bubbles’ when extreme views go unchecked by trusted, scrutinising outsiders.

Spotting signals

This raises the natural question of how we might tell those explanations apart from the signaling hypothesis. Dan suggests at least four distinctive features that might mark out beliefs with a strong signaling function. No single feature is diagnostic on its own, but a convergence of features would add up to a strong case for thinking a belief has a signaling function.

  1. Group-specific

In a sense, this is trivial. But if a belief exists because it signals group membership, it should be the kind of belief that outgroup members have little to no reason to form - and since good public evidence would give outsiders exactly such a reason, a signaling belief is likely to be one for which there is no good evidence. At the extreme, we might even expect signaling beliefs to be massively at odds with the evidence, precisely because holding them provokes stigmatisation from outgroup members and thereby strongly signals group loyalty.

  2. Widely advertised

What would be the point of a signaling belief that nobody knew about?

  3. Most prominent among those with greatest incentive to signal loyalty

This also makes intuitive sense. Signaling a highly group-specific belief incurs a cost in the form of stigmatisation from outgroup members. If I only have a small incentive to be part of your group, the cost-benefit balance isn’t there for me. But if I’m desperate to be accepted by some social group, the benefit will be higher and so the cost I am willing to incur will be higher. So we might expect low-status individuals to be more vulnerable to signaling beliefs. Dan also suggests such beliefs are more likely to be prominent among males, because there is some evidence that women favour dyadic relationships while men prefer group membership.

  4. Strange functional properties

Dan suggests that signaling beliefs are unlikely to interact with other beliefs and behaviour in the normal way. That’s because a signaling belief is useful insofar as it is socially adaptive, not insofar as it is true, and so time and energy needn’t be wasted on actually considering its implications. The result is a belief, or set of beliefs, strangely isolated from standard, action-generating beliefs. Dan gives two examples. Firstly, many religious people believe that certain kinds of behaviour increase their likelihood of being punished in the afterlife. Since the literal idea of hell is so unspeakably bad - and on most accounts eternal - it is at least curious that those same people often engage in those behaviours without worrying too much. Are they just forgetful? Are they beholden to an irresistibly strong compulsion to sin? It is also surprising that such people do not devote almost all of their waking hours to warning other people of this danger and to spreading the good news of redemption in the afterlife. A better explanation seems to be that certain religious beliefs of this kind have, to some extent, a signaling function - which would explain why they appear to be functionally isolated from other beliefs and behaviours.

Dan’s second example is the ‘pizzagate’ conspiracy theory that emerged during the 2016 presidential election in the United States. The theory linked several high-ranking members of the Democratic party with a human trafficking operation run from the basement of a pizzeria in Washington, D.C. Considering the seriousness of the allegation, it is surprising that almost none of its tens of thousands of proponents appeared to have called the police, or actually visited the pizzeria to check. Again, what seems to explain this is that belief in the theory worked more like a signal of group membership. The pizzagate example also shows that although signaling beliefs might not behave like ordinary beliefs, insofar as they are often functionally isolated from other beliefs and behaviour, they are nonetheless still capable of causing real harm.

The signaling hypothesis can also make partial sense of why politics, particularly in the US, has become far more polarised - in the sense that a person’s opinion on any single issue is now a far better predictor of their opinions about prima facie unrelated issues. As Ezra Klein argues, this is the age of ‘mega-identity’ politics: group identities that tether party affiliation to other aspects of a person’s identity like race, religion, gender, geography, age, and so on. What this means in terms of signaling is that certain political beliefs now have far more potential to function as credible signals of group membership. And this is worrying if Dan is right to suggest that signaling beliefs might begin to take on these characteristics of being untethered to evidence and functionally isolated from behaviour and other beliefs. But maybe that’s reading too much into it.

I asked Dan whether thinking about just how many beliefs are socially adaptive ought to change our confidence in our own beliefs. Consider this excerpt from an exchange between Tyler Cowen and Robin Hanson:

COWEN: If you had to, in as crude or blunt terms as possible, how much of human behavior ultimately can be traced back to some kind of signaling? What’s your short, quick, and dirty answer?

HANSON: In a rich society like ours, well over 90 percent.

That’s behaviour, and we’re interested in beliefs. But still - that’s wild! Surely a claim like that should give us pause and make us reflect on the sources of our own beliefs. Dan points out that there are two questions here: should thinking about all this update our beliefs, and does it in fact do so? On the latter question: not much. It’s easy to see how the signaling hypothesis might undermine the beliefs of people we disagree with, but near impossible to use it to update our own.

This is more ammunition when it comes to dismissing the beliefs of other people. It’s a great way of undermining what other people believe… [But] we do seem to have this in-built self-deception of intuitively feeling that our beliefs were formed based on evidence and reason.

The first question is perhaps more interesting. Dan suggests that it should only undermine confidence in his own beliefs if other people do the same. The alternative is that the people who scrutinise their beliefs the least end up placing the most confidence in them and broadcasting them the loudest. As Dan puts it: “the people who are least reflective will have the most influence on political decision-making”. If we want true beliefs to win out, and don’t believe most other people are going to start seriously updating some of their own most confidently held beliefs, then the best thing to do is not to update our own beliefs. “It’s a kind of difficult game-theoretic problem”, Dan concludes.

Markets for rationalisations

A sub-question here is whether everyone is equally likely to believe misinformation because it has a useful signaling function. If I’m not the kind of person who is vulnerable to socially adaptive beliefs, perhaps I don’t need to worry so much about the signaling hypothesis undermining my own beliefs. We already mentioned two potential factors: low status within a group, and sex. What about intelligence? Is it right to think that smarter people are less vulnerable?

“No”, says Dan. Here’s what the evidence does seem to suggest:

If you look at beliefs about societal risk, if you take an individual from the population at random and you learn their intelligence, their numeracy, their general scientific literacy; that’s hardly predictive at all about what they’ll believe about something like climate change in a country like the US. But if you learn their political affiliation, that’s highly predictive. And it’s actually worse than that, because it is the individuals who are most intelligent who are the most polarised on an issue like that.

Why is this? Well, intelligence might make you slightly better at sorting true beliefs from false ones. But it also makes you far better at finding creative rationalisations for the things you already believe. Dan points out that “there are certain beliefs which are so stupid, you have to be really intelligent to believe them”.

It is interesting to consider how the demand for reasons for beliefs we already hold plays out in economic terms. Dan suggests that this demand produces a ‘market for rationalisations’, in which media institutions are sustained not by a demand for scrutiny and truth-seeking, but by their comfortable effectiveness in massaging existing beliefs. All very depressing stuff!

Reading Recommendations

(Image) Dan's book choices

If you are interested in this topic, check out Dan’s recommendations at the top of this write-up. I would also recommend a couple of podcast episodes with the economist Robin Hanson, the doyen of weird theories about signaling in economics:


