Better Futures essay series
August 2025
‘Better Futures’ is a series of essays about cause prioritisation and axiology, written for Forethought. I co-authored two of the essays with Will MacAskill: ‘No Easy Eutopia’ and ‘Convergence and Compromise’. I also helped with the remaining pieces, which Will authored solo.
You can read the entire series on the Forethought website, and we also recorded a podcast episode about it.
Here is the abstract of the overall series:
Suppose we want the future to go better. What should we do?
One prevailing approach is to try to avoid roughly zero-value futures: reducing the risks of human extinction or of misaligned AI takeover.
This essay series will explore an alternative point of view: making good futures even better. On this view, it’s not enough to avoid near-term catastrophe, because the future could still fall far short of what’s possible. From this perspective, a near-term priority — or maybe even the priority — is to help achieve a truly great future.
That is, we can make the future go better in one of two ways:
- Surviving: Making sure humanity avoids near-term catastrophes (like extinction or permanent disempowerment).
- Flourishing: Improving the quality of the future we get if we avoid such catastrophes.
This essay series will argue that work on Flourishing is in the same ballpark of priority as work on Surviving.
Here’s the gist of the first essay I co-wrote, titled ‘No Easy Eutopia’:
Here’s a first-pass statement of the question we address in this essay: among all the futures humanity could achieve given survival, weighted by how likely those futures would be assuming no serious, coordinated efforts to promote the overall best outcomes (whatever they may be), what fraction of those futures live up to most of the potential we could have achieved?
A “no easy eutopia” view says that only a narrow range of futures achieve most of the potential of the best achievable futures, and a wide range of futures fall far short, including futures that might seem fantastically advanced, grand in scale, and full of things we care about.
Here’s the gist of the second essay I co-wrote, titled ‘Convergence and Compromise’, which follows on directly from the first:
A naive inference from no easy eutopia would be that mostly-great futures are therefore very unlikely, and the expected value of the future is barely above zero.
This essay will consider […] whether future humanity will deliberately and successfully home in on a mostly-great future. [W]e consider two ways in which that might happen:
- First, if there is widespread and sufficiently accurate ethical convergence, where those people who converge on the right moral view are also motivated to promote what’s good overall.
- Second, if there’s partial ethical convergence, and/or partial motivation to promote what’s good overall, and some kind of trade or compromise. We think this is the most likely way in which we reach a mostly-great future if no easy eutopia is true, but only under the right conditions.
[Then] we consider the possibility that even if no one converges on a sufficiently accurate ethical view, and/or if no one is motivated to promote what’s good overall, we’ll still reach a mostly-great future.