Moral mountaineering
Published 5 January 2025
Sometimes we have the opportunity to change the world, and we care about changing the world for the better, but we’re worried about our efforts backfiring and being counterproductive.
I don’t want to contribute to more fish dying painfully, so I think I should stop buying tuna. But tuna prey on smaller fish and crustaceans, so fewer tuna means fewer painful predation deaths. Then again, an artificial downward pressure on predator populations might increase the number of smaller fish — which far outnumber larger predators and almost all die uncomfortably before senescence. So maybe buying tuna is good for fish welfare. I just don’t know!
Similarly, maybe I should support investments to make clean energy cheaper, like batteries, solar panels, and fission power. But I’ve heard some people worry that focusing on incremental advances in clean tech might work as a fig leaf for the status quo, letting our politicians feel off the hook for more intense ‘systemic’ change, and ultimately backfiring. Relatedly, efforts to improve energy efficiency can cause net increases in total energy use (a ‘rebound effect’): in some contexts drivers more than compensate for improvements in fuel efficiency. So maybe various clean tech investments and efficiency improvements are counterproductive (sidenote: Other examples: building defences against engineered pandemics could inspire rogue actors to make bioweapons in the first place. Unblocking sources of scientific slowdown might inadvertently accelerate risky technologies towards unacceptable levels of danger. Cattle farming involves clearing vegetation for ranches and pastures, which would otherwise be home to billions of insects. If insects feel pain, then maybe cattle farming is good for animal welfare in general.)?
The world is chaotic: we can’t quarantine random and hard-to-predict consequences from rippling out to the rest of the world. If I help an old lady across the road, I might delay traffic, causing a meet cute between two psychopaths in a carpooled Uber, who go on to commit massive white collar fraud in an epic crime partnership. But that meet cute could just as well have been prevented by delaying the traffic: most hard-to-predict consequences shouldn’t matter for decision-making (sidenote: Because there’s no reason to think you’re making the psychopath meet cute more or less likely if you help the old lady cross the road, and so on for most downstream consequences — you can safely bracket them, and attend to the expected difference you’re making to the predictable and direct consequences, like brightening the old lady’s day. I say “shouldn’t matter for decision-making” because ethical theories which emphasise some kind of “act-omission distinction”, and in particular emphasise avoiding causing harm, would seem to recommend doing nothing at all.).
Sometimes, though, the hard-to-predict consequences should matter. They can matter in cases like this (sidenote: This is closely related to, and takes after, the idea of ‘complex cluelessness’ described by Hilary Greaves.):
- You are wondering whether to change the world in some way.
- That change has some direct and predictable consequences which you like.
- You know that change will also have other big, indirect, and systematic effects, and you are very unsure about whether those effects are good or bad.
- The indirect effects are ‘systematic’ in the sense that your ignorance doesn’t depend on the randomness of the world — if you thought a bit harder, in principle you would figure out whether the effects are likely to be good or bad.
- If the indirect consequences are bad, the ‘overall’ effect of the change will be bad, and you’ll regret making it.
Nick Bostrom describes a similar dynamic, where you can flip-flop on the goodness or badness of some project:
[T]here’s the concept of a deliberation ladder, which would be a sequence of crucial considerations, regarding the same high-level subgoal, where the considerations hold in opposing directions.
When the stakes are high — when the indirect effects of a change might be a really big deal — we’re then stuck with a feeling of paralysis. And that can be true even if we all agree on what kind of world we ultimately want to reach. It’s like we’re lost in the dark on a hiking trip, and we all want to get home, but we can’t agree on which turn to make.
One thing you can do is to think even harder about the effects of your actions. In the clean energy example, I could make a quantitative model of how energy investments could ease the pressure on other policy reforms, the value of those reforms, probabilities I place on different outcomes under different levels of investment, and so on. I could also model the information value of more research, and probably I’d find that I should do even more research.
This is a slightly unfair caricature of what I think is nonetheless a real and sometimes counterproductive kind of pathology. Confusingly, certain attitudes I otherwise appreciate make the pathology more likely: caring about the consequences of your actions, being ‘scope-sensitive’ about those consequences, and appreciating the value of quantitatively estimating them. So what gives?
Imagine placing the world on some kind of landscape. Different dimensions of the landscape represent different ways we can change the world. The ‘peaks’ and ‘troughs’ represent how much better or worse the world is at that point. To climb up a hill, then, is to make progress. We can push the world along different dimensions, and make the world better or worse over time. But the landscape is dark, and at some places it’s shrouded in fog. That’s often not a bad image for our predicament.
I haven’t been mountaineering in the dark, but I imagine a good way to gain altitude — and even reach a summit — is just to walk up the slope you’re on whenever possible: gradient ascent. The problem of unpredictable consequences is like trying to set a bearing in the dark. You want to head in a good direction, but you can’t see your hand in front of your face, so you sure can’t tell which direction goes up the mountain. Plain gradient ascent doesn’t work when you don’t know what gradient you’re on.
But sometimes, you can see peaks in the distance. If we’re starting out at base camp, we might not agree on a route which is uphill all the way. But perhaps we do all see a peak, rising out of the fog, lit by a torch. Then we can agree on a plan: head toward that peak. Your hiking party might end up having to descend some before rising again, but if you’re set on making it all the way to the peak — maybe that’s where the next rest point is — then it’s a good strategy.
Similarly, in some of these cases when we’re paralysed by unpredictable consequences, we can agree on features of a world we all want. Probably we don’t want to be suffocating millions of fish for food. Probably we want to live in a world with abundant clean energy (and smart climate policy). If we trust that we can get to these places, collectively, but we disagree about the best path to take right now, then we can switch plans: how do we move in the direction of that world we all agree we want? What if we just walked in the direction of the summit? We don’t know the undulation of the ground between here and the summit, but we do know we’ll reach it if we keep heading in that same direction.
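If it helps to see the contrast spelled out, here is a toy sketch of my own (not from any real model: the one-dimensional landscape, step size, and fog level are all invented for illustration). A hiker who follows the noisy local slope tends to top out on a small hill near the start, while a hiker who simply keeps a bearing on a visible peak crosses the flat stretches and reaches it.

```python
import random

# Toy altitude function on a one-dimensional trail: a small hill near the
# start (height 5 at x = 1) and the true summit far away (height 100 at x = 50).
def altitude(x: float) -> float:
    local_hill = 5 * max(0.0, 1 - abs(x - 1))
    summit = 100 * max(0.0, 1 - abs(x - 50) / 10)
    return local_hill + summit

def noisy_slope(x: float, fog: float) -> float:
    """The local gradient, corrupted by 'fog' (Gaussian noise)."""
    slope = (altitude(x + 0.1) - altitude(x - 0.1)) / 0.2
    return slope + random.gauss(0, fog)

def gradient_ascent(x: float, steps: int, fog: float) -> float:
    """Step in whichever direction the (noisy) local slope suggests is up."""
    for _ in range(steps):
        x += 0.5 if noisy_slope(x, fog) > 0 else -0.5
    return x

def walk_toward_peak(x: float, steps: int, peak_x: float) -> float:
    """Ignore the local slope entirely; keep a bearing on the visible peak."""
    for _ in range(steps):
        x += 0.5 if peak_x > x else -0.5
    return x

random.seed(0)
print(altitude(gradient_ascent(0.0, steps=200, fog=5.0)))     # wanders near the small hill; stays low
print(altitude(walk_toward_peak(0.0, steps=200, peak_x=50)))  # reaches the summit: ~100
```

The only point of the sketch is that the second hiker is allowed to walk on flat or even downhill ground: the bearing, not the local slope, is doing the work.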
In a world I’d like to live in, cheap alternatives exist to replace the need for animal farming, and people broadly care a notch more about animals suffering. That world involves people not eating tuna. So I should head in that direction. More obviously, I want to live in a world with abundant clean energy. Investing in clean energy is a step towards it. So let’s invest.
What am I getting at? I’m trying to articulate a kind of ‘constructive’ attitude to improving the world. There is a coordinating, focusing quality to letting yourself be guided by an image of a much improved world which is not immediately accessible. Because the peaks in the distance are so high, a shared ambition to reach them can cut through local squabbles over preferred routes.
Consider material abundance, or wealth in the expansive sense. In the short-run, battle-lines are drawn up around who gets a bigger slice of a mostly fixed pie. That’s fertile ground for intractable disagreements. But (all else equal) everyone can prefer a world where everyone is much better-off.
A project is constructive when it asks, “How can I add building blocks now, so I can continue adding blocks in the future?” The constructive path is like the straight line to the summit in the distance, because there are often many ways to make local improvements, but a narrower path to making sustained improvements.
When you’re building a house, you don’t ask what would make the construction site more superficially resemble a house that day. Instead, your work is guided by which components a finished house must include, and the order they can be added. Scientific results and new technologies are like bricks in a building: later insights rely on earlier ones like one layer of bricks rests on the layer below; and once placed they don’t easily crumble or get forgotten. Scaffolding, by contrast, is useful (sidenote: Some scaffolding is coated in a protective fabric called ‘scrim’, which can be decorated with a facade image.) in different ways, and it’s ok if it’s a bit of an eyesore for the street, because eventually it gets stripped away. But when bricklayers place bricks, or a materials scientist gets some new result about how to make smaller batteries, they can be fairly confident those steps are on the critical path to some more complete project.
Obviously, obviously, this whole ‘moral paralysis’ problem isn’t the kind of thing you can resolve with a neat rule of thumb (sidenote: In chess, a move can look killer until you spot that one saving response your opponent could make in the critical line. But wait — you have a response to the response. But wait — and so on. Rules of thumb might help a bit, but if you want to play the best move then thinking hard is necessary. You don’t get good at chess by reading aphorisms.). Sometimes the fastest route actually does involve turning away from the summit. Perhaps there are deep crevasses in the shadows along the straight path toward it.
Still, I need a takeaway — some reviewers told me they liked an earlier draft of this post but they didn’t know what to take away from it (sidenote: In particular: is this a problem with utilitarianism (or consequentialism more broadly)? Utilitarianism says “do the thing with the best overall consequences”, which sounds like plain gradient ascent. And I seem to be disagreeing, by recommending actions with (sometimes) worse immediate consequences. But no, this is not a deep or interesting problem with utilitarianism. ‘Overall consequences’ shouldn’t mean ‘immediate consequences’. Only an explicitly myopic-in-time version of utilitarianism would go wrong, by acting like a stubbornly lazy hiker in the analogy. I do think utilitarianism alone has less to say about cooperation, but that’s a different story.). Maybe it’s this:
It’s possible to get paralysed by uncertainty over whether an intuitively good project is actually bad because of unintended consequences. But if the intuitively good thing is a constructive part of a future everyone wants, that’s a reason to do the project, and a way to cut through the paralysis.
We have to want big improvements in the world to get them. So even if the sun rises and the fog clears, we still get to decide which peaks to aim for. Maybe we should settle for a manageable if unambitious hike. Or we could look to the taller mountains further along the range. And once we’re up there, maybe we’ll see even more dizzying heights in the distance.