Speaking with Guive Assadi about 'evolutionary futures'

Published 23 July 2023


20,095 words • 101 min read

I recently spoke to Guive Assadi about his paper ‘Will Humanity Choose Its Future?’, as part of the podcast I do. I thought it was a productive conversation, so I’m sharing the transcript here.

You can listen to the full episode at hearthisidea.com/episodes/assadi.

We discuss: whether the agricultural revolution counts as a historical example of an ‘evolutionary future’; possible future examples, like competition between digital minds and rapid space colonization; value erosion; and strategies for avoiding a future virtually no one would have chosen, such as technologies which could enable better international coordination.

What follows is a mostly machine-generated transcript of the conversation, and as such it is likely to contain errors!

Intro

Fin: Hey, this is Hear This Idea. In this episode, I spoke with Guive Assadi. Guive is a research scholar at the Centre for the Governance of AI in Oxford, and his work focuses on conceptually clarifying risks posed by emerging technologies. And in particular, we spoke about Guive’s recent paper titled “Will Humanity Choose its Future?”.

So, when you’re thinking about how humanity’s future could turn out, you might imagine a few different cases. Maybe humans will go extinct relatively soon. Maybe the world ends up being controlled by nonhuman systems like goal directed AI. Or maybe the future is meaningfully determined by humans and their values, for better or worse. But Guive is interested in futures where we end up in a world that wasn’t deliberately chosen by anyone.

So we talk about whether the agricultural revolution counts as an historical example, possible future examples like competition between digital minds or rapid space colonization. We talk about value erosion, and of course, strategies for avoiding getting a future virtually no one would have chosen, like through technologies which could enable better international coordination. I really recommend this paper and I’ll link to it in the show notes if you want to read it. But without further ado, here’s Guive Assadi.

(Link to paper)

What is Guive’s paper about?

Fin: Okay, Guive, thanks for joining me.

Guive: It’s great to be on.

Fin: Cool. So we’re going to talk about the paper you just wrote. It’s called “Will Humanity Choose its Future?” Can you just explain what the paper is about?

Guive: So in discussions of ways the future could be bad, there’s sort of three types of reasons that have gotten a lot of discussion. One reason the future could be bad is there could be no powerful agents to deliberately make the future valuable. So most extinction events that people are concerned about, like pandemics or asteroids, would fall into this category.

Fin: Bad as in, not discernibly good, rather than positively terrible?

Guive: It doesn’t have to be actually negative in value. That’s an important point. But the first category is things where there’s no one around to make it good. And yeah, that could be almost any extinction event. And then another category which is often the subject of dystopian science fiction, is that there could be a powerful human civilization, but it’s directed to bad goals. So that could be if there’s some kind of enduring dictatorship, like the phrase from Orwell is like, if you want an image of the future, imagine a boot stamping on a face forever, this kind of thing. But it doesn’t really have to be a dictatorship. You could have some kind of democratic state where they vote on what to do and they just pick something bad for whatever reason.

Fin: Or just something incredibly mediocre.

Guive: Exactly. I guess this is a point that’s going to come up repeatedly. It seems like a vast range of different things are possible in the future. And so if you really want the best possible outcome, then most possible futures are not that good from that perspective. And then a third category of things that people worry about, which especially in the last few months has been getting a lot of attention, is that there could be powerful, rational agents. But they wouldn’t be human agents and they would drive the future in a bad direction. So obviously, AI would be in this category specifically — like sort of terminator type scenarios where the AI takes over.

Fin: They have bad goals and they’re entrenched, we can’t reverse them.

Guive: Yeah, exactly.

And these are all worrying possibilities. But I also don’t think this list exhausts the reasons why the future might not be as good as we had hoped. A simple way of seeing that is to think about bad changes that have happened historically. Obviously none of those are in the first category — human extinction has not happened yet. None of them are in the third category either, unless you mean from the perspective of Neanderthals or gorillas or something. The second category for sure includes some events that were deliberately chosen by people and were bad. But a lot of bad things don’t really seem to have been chosen by anyone per se. They were just the outcome of a kind of undirected process, not chosen by anyone or anything.

But the example that I sort of framed the paper with is one very popular view of the agricultural revolution, which was stated in a very punchy way by Jared Diamond — and some people think that more recent evidence from the field of ancient DNA has tended to bear this out. The view is that the reason farmers displaced hunter-gatherers over almost all of the Earth’s land is that farming just produces more calories per unit of land. Farming communities will experience a faster population growth rate, and that means that they can out-fight hunter-gatherers. But farming is a much worse life in various ways.

And so the sort of demographic expansion of the early farmers kind of wiped out a more natural, more appealing lifestyle and replaced it with slavery and much more infectious disease and being further away from nature and other things that people don’t like. And one might be worried that something like this could happen again in the future.

Fin: Yeah. So the idea with the agricultural revolution example is that presumably very few people — or at least it’s not necessary that anyone — thought, oh, it would be really nice if the world were just covered in agricultural civilizations in place of hunter-gatherer ways of life. But it just happened anyway, because lots of people were making lots of sensible local marginal decisions, and they added up to something which almost no one, at least immediately after the agricultural revolution, would have preferred — if it’s the case that quality of life tends to be a bit worse than in hunter-gatherer societies.

Guive: Right, yeah, that’s exactly it. To be clear, it’s not like there are tons of memoirs from this period that we can read about people explaining that they didn’t want to become farmers, but at the margin, that was the best decision. But it seems like a kind of plausible big picture account of one of the most important transitions in the history of our species. And I think that’s enough for this sort of concern about the future to be worth thinking about.

Fin: Yeah, got it. I’ll try restating what you’ve said so far, to be clear.

So you’re saying different ways the future can be bad — as in not great. One is there’s just no one around to choose a good future. For instance, we go extinct. Another is people choose a bad future. And a third kind of category is that some other kind of agent, like an AI or group of AI agents, chooses a bad future because they’ve got, like, values that we don’t care about or something.

But: that doesn’t exhaust the ways that the future can be bad because there’s this extra way, which is that no one chooses it, but nonetheless we arrive at some future which no one would have wanted or very few people would have wanted.

And the agricultural revolution seems like at least a kind of existence proof that maybe it’s worth worrying about this happening again in the future.

Guive: That’s exactly right.

Evolutionary futures

Fin: Great. All right. Well, you use this term ‘evolutionary future’ to describe this kind of future — can you say more about exactly what that means, according to you?

Guive: Yeah, so the definition I use is: an evolutionary future is a future where, due to competition between actors, the world develops in a direction almost no one would have chosen.

There’s a couple of issues with this definition. One is actual choice versus what we would have chosen. So it could be that by coincidence, if we go back to the hunter gatherer example, maybe that example is wrong and farming is better. And if that were the case, you would have a process that no one chose, but it was good, and resulted in something that people would have chosen. So these things can come apart. And what I really focus on is more whether there is the actual possibility of choice, not whether the eventual outcome will be something that is objectively worthy of choice or would have been chosen had choice been possible counterfactually.

Fin: That’s a nice clarification. So, I mean, I guess this is relevant for all-things-considered normative questions about whether the future will in fact be better, but it’s easier to talk about “will we, from our perspective, choose a future that we, from our perspective, prefer or not”.

I guess one example that comes to mind is, like, I think Robin Hanson talks about this in the ‘em’ context is that maybe we end up modifying our preferences to suit some circumstances better. We just love work more, and then in some sense, ex post, we would have chosen that, but from our side we wouldn’t.

Guive: So this is another issue that I think is definitely related but a bit separate. It’s ex ante versus ex post. And I do mean ex ante: would we have chosen it? Because it does seem like a lot of things function better when the people involved have some preference for it to be that way. And I think regardless of the exact details of what happens in the future, it seems like a reasonable bet that the agents that are involved will think this is a reasonable way for things to be.

Fin: Yeah, right.

Guive: But there’s a further question of whether at some prior time anyone chose for things to develop in that direction. So that’s another issue with the definition. There’s also an epistemic obstacle to choosing the future. Like, you have to know enough to be able to understand what an action now will mean millions of years from now. That’s obviously not possible right now. And it may never be possible, but that’s out of scope for this paper.

Fin: But it’s this, like, conjunction of: we wouldn’t choose the future if we were to be able to see it. And also that future comes about through these competitive pressures.

Guive: So, yeah, that’s where the focus is.

What could an evolutionary future actually look like?

Fin: Okay. A natural next question is whether there are rough examples that come to mind when you’re thinking about what this could actually look like, what this kind of evolutionary future could be.

Guive: Yeah, I mean, that’s a good question. So the thing you just alluded to with Robin Hanson’s work is a big example. What if at some point it’s the case that a lot of the minds that are involved in economically relevant labor are implemented digitally? Hanson talks about this idea of ‘ems’, which is, like, you scan someone’s brain, you determine what information processing is happening in the brain, and then you duplicate that on a computer. Or you could just have, like, a de novo AI that doesn’t emerge from a brain scan but that counts as a mind in some relevant sense. And if these things are, like, doing most of the useful work in the future, you might think that they’ll kind of rapidly evolve, because it might be possible to directly edit them.

Guive: It also might just be that the thing about computer programs is you can copy them as much as you want, and so the generation speed could be extremely fast, so they could change a lot very quickly. And there have been various concerns that people have raised about this kind of scenario. I mean, obviously it’s very speculative, but one thing Nick Bostrom brings up in his brief discussion of this issue in Superintelligence is that you might have minds that lose the capacity for conscious experience, if that’s kind of dead weight that doesn’t help them — it doesn’t contribute to filing a lawsuit or whatever they’re supposed to be doing. And that’s obviously a way the value of the future could be greatly diminished: if there’s nobody to experience it. But it could be other things, too. Like maybe they just kind of lose interest in music. Not that I think that this in particular is super likely.

Fin: So in general the question is what determines what most digital minds could look like after some long period of time through competitive pressures? And things that come to my mind are like, if you’re generally more economically productive, probably you’re more likely to end up being copied. If you just really care about copying yourself, that’s also a thing.

Guive: That’s another one. Also, maybe if you’re more able to use force in some way, that might help you to be copied.

Fin: Yeah, if you’re like grabby in some sense.

Guive: Yeah. Or I mean, maybe the minds that are really good at executing cyber attacks or something.

But I guess the concern — or the idea — here is something like: we might end up with a world where most of the people are very different from people today, and in ways that we might not have liked at the beginning.

Fin: Yes. Makes sense.

Guive: And it is a further normative question whether that’s a good or bad thing.

Fin: Okay, so that’s one example of what you’re calling an evolutionary future. That is, most minds end up being digital minds, and there are these competitive pressures that make them very different to human minds and in a way that we, from our perspective, would think is kind of weird or bad or wouldn’t prefer in any case. Any other examples?

Guive: Yeah, so another one is a hypothetical that’s related to space colonization, which is like: what if you have a kind of wave of different spaceships emanating out from the Earth trying to claim as much of space as possible? And the thought is you might end up with the people that just want to copy themselves as much as possible getting a larger and larger fraction of the colonization wave over time. So just to back up a little bit, the reason that there’s this wave structure is because there’s an inherent speed limit on going through space, the speed of light. And that means there’s a very significant first mover advantage. So the first group to pass through an area might be able to get enduring advantages by being first.

Guive: And so if you combine that enduring advantage of being first with the speed advantages that certain kinds of agents get — like agents that just want to copy themselves as quickly as possible — that could lead to a similar reshaping of civilization.

Fin: Yeah, that makes sense. And then in particular, if there’s ever a trade-off between being really good at grabbing more space quickly and doing any of the other things that we’d like future people to do, then the worry kicks in again that this is like a bad kind of evolutionary future.

Guive: Yeah. Or, I mean, it may not be bad, but it’s not something we would have preferred.

Fin: Sure. One thing I want to ask about: when I think about historical examples — so we talked about the agricultural revolution — there’s also a general dynamic people talk about of a Malthusian trap, where, like, population rises to the point of subsistence. It seems notable that the last couple of centuries at least don’t really resemble these worrying evolutionary competitive dynamics. Like: actually we’re well above subsistence, speaking very generally.

Guive: Yeah, and getting higher all the time.

Why think competitive pressures could return?

Fin: So why think these evolutionary pressures return? Aren’t we just like out of that phase?

Guive: Yeah, I mean, I think that’s a very fair question. So why are we above subsistence right now? The reason is that the rate of economic growth currently exceeds the rate of population growth by a lot, almost everywhere in the world. And that means that per capita income is growing. So the worry kicks in if you think at some point economic growth will slow down relative to population. In a more basic sense there’s a question: can the current rate of economic growth continue indefinitely? There are various arguments that it cannot. The one that’s most persuasive to me — I associate this with Eric Drexler — is something like: look, we have fixed laws of nature, and for most general classes of tasks there’s probably going to be a best way to do it relative to those fixed laws of nature.

Guive: So if economic growth comes from technological change, eventually technological change has to slow down because you’re kind of approximating the best way to do it.

Fin: Yeah. Okay. There’s like a minimum amount of energy that you need to air condition a room; or indeed grow food.

Guive: Like a computer floating point operation, or growing food, or whatever it is. Even if there’s not exactly one best way to do it, at some point additional optimization is going to not yield as much — that seems very intuitive to me. I’m open to debate on this point. But if that’s true, and if population growth continues, we eventually might be back at subsistence. So you could have this view, which I think Robin Hanson does, and I think he’s not crazy, that we’re kind of in a brief interlude out of a Malthusian period, but eventually we will be back in a kind of Malthusian situation.

Fin: Why expect the population to continue growing back to the point of subsistence given that presumably no one would prefer to live in such a world compared to a world where we have abundance per person?

Guive: Yeah, I mean, you could make a couple of arguments. So one is like, if we’re allowing digitally implemented minds to count, those are super easy to copy. So we might get back to subsistence — it may not take a broad consensus to get such a population increase in that kind of situation. Another argument that people make — and there’s been some interesting pushback on this one, so I’m really not as sure about it — is that, well, what’s more evolutionarily selected for than fertility? So in the long run, that’s one argument people make, and it’s kind of associated with this idea that some small groups that have very high fertility right now — the Amish are a classic example — will eventually be a big proportion of the population, almost by definition. Yeah, I mean, it does have this kind of cute quality to it.

And there’s a paper that I actually highly recommend, by a bunch of people, but the one who I know is Kevin Kuruc. So let me see if I can find this, and maybe that can be put in the show notes or something like that. They basically argue against this perspective that population growth eventually has to speed up. And anyway, I think that’s still very much an open and debatable thing. And of course, anything to do with digital minds is open and debatable because it’s such a speculative issue. But those are two sort of rough arguments.

Fin: Yep. Is this related to something you mentioned that Robin Hanson has talked about — this idea of the ‘dreamtime’?

Guive: The dreamtime idea is this thing I’ve been talking about. Like the dreamtime is a time when temporarily population growth is much slower than economic growth. And that means that people are just much more unconstrained in what they can do. And so, like, Hanson has a sort of rich set of associations with the idea of a dreamtime. He associates it with people’s beliefs being much less accurate or at least not accurate and not serving some very specific purpose.

Fin: Okay. Because you don’t have all these hard constraints just like having to believe the right things.

Guive: Exactly. Presumably if you’re a subsistence fisherman and you have the wrong beliefs about where the fish are, that’s not going to last. Whereas my opinion about the rebellion in Russia right now just has very little to do with my evolutionary fitness.

Fin: I feel like that’s also true of a subsistence fisherman.

Guive: Yeah, but I mean, he probably doesn’t spend as much time as you have.

Fin: All this free time to believe about random things.

Guive: You can push back on this though. You could be like, oh, well, he probably has a lot of ideas about water spirits and so on. I think the next step in the dialectic is to say something like: yeah, well, those can serve an adaptive purpose. Are you familiar with this thing about divination rituals before hunts? Yeah, I mean, he talks about this tribe in Canada where they hunted caribou, and they didn’t want to consistently prefer some places, because then the caribou would learn that and would not go to those places anymore. So they needed to randomize where they would go, and so they would use divination with oracle bones to determine where the caribou were. And the thought is, like, this is a genuine randomization process, so there’s no way the caribou can figure that out.

So you might have this kind of analysis of what appear to be strange beliefs of people at subsistence, where actually they seem remarkably adaptive when you look at them closely.

I don’t know. In order to have a balanced view of this, you’d have to have a list of all beliefs or something and you’d have to be like, well, this is the percentage that seems to be adaptive.

Fin: And the idea is that when life is great in this kind of dreamtime — when economic growth is outpacing population growth — the pressures to form adaptive beliefs, if not literally correct beliefs, are just much weaker? As in: there’s so much more wiggle room to just have random false beliefs and to do things like ban nuclear power or whatever, even though it wouldn’t be adaptive to do that.

Guive: Yeah, or ban genetic engineering or just not have children because you don’t want to. From an evolutionary perspective, you can see an argument that this kind of preference is not going to last in the long run.

Fin: Yeah. Though this is not normative. It’s not like these things will be good to do, but we’re just, like allowed not to do them.

Guive: Certainly — not that they would be good, just that they would be adaptive, and maybe they would be very bad. But there are a lot of things that would be adaptive that don’t seem to happen very much.

Fin: And I feel as if we have this wiggle room to do non-adaptive things because of this dreamtime dynamic.

Guive: Yeah, that’s one argument you could make for worrying about a kind of Malthusian future, even if you think that we’re not in a Malthusian present. And I do think that. I want to say I think most people who talk about the modern world as if it is Malthusian are wrong, and I think that perspective has done great harm through such things as China’s one-child policy. Having said that, the fact that Paul Ehrlich said there will only be ten men left alive on the Earth in 20 minutes, or whatever crazy stuff he said — the fact that a very extreme, near-term prediction along those lines is wrong does not mean that no dynamic related to it can be important in the long run.

Fin: Sure. Like, we have examples of this kind of dynamic holding. And we have examples of it not holding!

Guive: Yeah. I don’t know if it’s deep, but there is this sort of theoretical argument for it. So I think it shouldn’t be ruled out. I mean, you can write a differential equation where population will increase until constraints stop population from increasing.

Preventing an evolutionary future

Fin: Okay, so zooming out again. We are talking about this idea of an evolutionary future. We also talked a bit about why, in general, these kinds of competitive pressures might return, given that they don’t seem to strongly apply to — or describe — the world right now. Okay. And now the rest of your paper talks about possible ways that the world might avoid such an evolutionary future. So could you just say something about, in general, what those possible things could look like?

Guive: Yeah. So basically the way I approach this in the paper is: in order to avoid an evolutionary future, as I frame it, you need some way of solving global collective action problems. And this is drawing on a paper by Nick Bostrom from 2006 called ‘What is a Singleton?’.

And there’s basically, like, three ways, I think, that global collective action problems could be avoided. So one is if there’s a world government. Another is if there’s some kind of multilateral coordination between different nation states or whatever it is they have in the future that’s short of a world government but that is able to solve global collective action problems. And a third is if the relevant things do not have global implications. So I call this a strong defensive advantage. That’s probably the least intuitive of these three.

Guive: But the idea there is basically like, imagine in the future you have five nation states, and it’s basically impossible for them to attack each other because that’s just, like, the way the tech works out. And so they can pretty much ignore what the other ones are doing.

Fin: Are you claiming that we need a conjunction of all three or that each one is sufficient?

Guive: Yeah, so each one of them is. So there’s this paper by J.L. Mackie. I can’t remember what it’s called, but he defines something called an ‘INUS’ condition.

Fin: Bringing back memories from undergrad!

Guive: It’s an insufficient but necessary part of a condition which is itself unnecessary but sufficient for the result. So basically, the idea is, like, any one of these three things could, in principle, prevent an evolutionary future. However, it also might not. The point that’s being made is that if none of these things exist — if there’s no world government, no strong multilateral coordination, and no strong defensive advantage — then there is no way to prevent an evolutionary future, in my terms.

Fin: Okay.

Guive: Does that make sense?

Fin: So the claim is: without the disjunction of these three things — world government, strong multilateral coordination, or defensive advantage — surely we should expect an evolutionary future.

Guive: Yes, that is what I think.

Fin: Yeah. And then maybe there’s more to be said about what kind of combination of the three could make us very confident that we won’t get an evolutionary future.

Guive: Maybe. Yeah, I hadn’t really thought about it. That’s not really the central idea, but if we have a strong world government and a defensive advantage… yeah, I don’t know.

Fin: In any case, these seem like important factors for avoiding one.

Guive: I mean, without any of these, I don’t see how one can be avoided. And the payoff of that is like if you are the sort of person who likes to make up probabilities and multiply them together, you can get a lower bound on the probability of an evolutionary future by multiplying through the probability that there will be no world government times the probability that there will be no strong multilateral coordination times the probability that there will be no strong defensive advantage. And of course, you have to condition on the previous stage.
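(A rough sketch of that lower-bound calculation — the abbreviations are just shorthand for this post, not from the paper:)

\[
P(\text{evolutionary future}) \;\geq\; P(\text{no WG}) \times P(\text{no MC} \mid \text{no WG}) \times P(\text{no DA} \mid \text{no WG, no MC})
\]

where WG is a world government, MC is strong multilateral coordination, and DA is a strong defensive advantage, and each later factor is conditioned on the previous ones, as Guive says.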

Fin: Okay, that makes some sense. All right, let’s take the first of those factors being the possibility of world government. How could that avoid an evolutionary future?

Guive: Yeah. So basically, one way of thinking about a situation in which the world develops along a path that nobody would have chosen is this idea of a collective action problem from economics. And you have a collective action problem when some behavior has a cost and a benefit, but the collective cost or the collective benefit is different from the individual cost or the individual benefit. So one example could be if you have a shared field that different people use to graze their cattle. If you over-graze — like, you take more than your share of the grass for your cattle — the cost to you is just that there’s slightly less grass available in the overall pool.

Fin: Yeah. So if I’m, like, one of 100 grazers, and I double my grazing from 1% to 2%. All that’s doing is reducing the overall amount to graze in the future by 1%. That’s not a big deal.

Guive: You may barely notice that, but multiplied across everybody, it can be a big cost. Whereas other people don’t realize the benefit of your cattle being 1% fatter — you do.

Fin: For sure.

Guive: And if you think about this from the perspective of the whole group, this can be really catastrophic because maybe everybody thinks this way and there’s no more grass.

Fin: Yeah. Does that make sense? Everyone wants to double their cattle grazing. They just, like, overgraze the entire field. There’s nothing left to grow back.

Guive: Yeah.

Fin: And then everyone reaches a point where everyone would have preferred a different arrangement.

Guive: Exactly. But it’s still individually advantageous for every person to overgraze.

Fin: For sure. I guess not to spoil the punchline. But such that if these people could have agreed somehow…

Guive: Yeah, they could have a cop standing there who’s like, I’m going to shoot you if you overgraze your cattle. I mean, maybe that’s a little extreme — okay, I’ll give you a fine.

Fin: Sure.

Guive: Then that would be better for everyone, and a world government can have that function. I’m not advocating for world government or anything like that, but we do have some global collective action problems right now — an obviously prominent example is global warming, or the hole in the ozone layer from chlorofluorocarbons in the late 20th century. And what happened there is a bunch of countries got together and made a treaty to restrict the production of chlorofluorocarbons. But if there had been a world government, it could have been much easier to do that, and it probably would have had a higher probability of success, because it’s a lot of effort to sign a treaty, and then countries might not follow it.

Guive: Whereas if there’s a world government, it would just be a matter of domestic environmental regulation, which almost every country does with some success.
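(To make the incentive structure concrete, here’s a minimal toy model of the grazing example in Python. The specific numbers and payoff function are illustrative assumptions, not anything from the paper or the conversation:)

```python
# Toy commons: each of N herders chooses to graze 1 unit (restrain) or 2
# units (overgraze). The damage from total grazing is shared equally, so
# overgrazing is individually better whatever the others do, even though
# everyone overgrazing leaves each herder worse off than everyone restraining.
# (Illustrative numbers only.)

N = 100

def payoff(my_units, total_units):
    """One herder's payoff: value of their own grazing, minus their equal
    share of the damage that total grazing does to the shared field."""
    value_per_unit = 10
    damage = 0.08 * total_units ** 2  # damage grows faster than use
    return my_units * value_per_unit - damage / N

print("All restrain:     ", payoff(1, N * 1))        # 2.0 each
print("All overgraze:    ", payoff(2, N * 2))        # -12.0 each
# Even if the other 99 restrain, overgrazing alone still pays more:
print("Overgraze alone:  ", payoff(2, 99 * 1 + 2))   # ~11.84
print("Restrain with all:", payoff(1, N * 1))        # 2.0
```

In this toy version, each herder’s dominant strategy is to overgraze, but the all-overgraze outcome is worse for everyone than the all-restrain outcome — which is exactly the gap that a government, or some other enforcement mechanism, can close.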

World government

Fin: Is there a reason to think that world government is at all possible, given that there’s nothing outside all the countries in the world right now? How do you bootstrap this thing into existence?

Guive: How would it form? Yeah, so I guess to back up a little bit, we can ask do current trends seem to favor a world government? And then we can also ask how would a world government form? And in terms of current trends, I think you can argue this one either way. So here’s a very basic fact. There’s no world government right now. And there has never been anything like that. On the other hand, the size of nation states has gotten much bigger over the past thousand years or whatever, 2000 years since we were in tribes going around in forests. And we now have multilateral institutions like the United Nations that did not exist 100 years ago. And not just the United Nations, but like all kinds of things regulating the Internet, forests, chlorofluorocarbons, CO2, although that one’s kind of difficult.

Nuclear weapons, biological weapons, chemical weapons also. So there’s lots of global governance that happens, although there’s no world government. And you might think this reflects a kind of trend in that direction. Even more abstractly, you might say, like: oh, well, at first there were single-celled organisms, then there were multi-celled organisms, and then there were organisms with brains, and then there were social animals like ants or chimpanzees. And then there were humans with tribes, and then nation states, and now this integrated global civilization — you might kind of draw a line and be like, oh sure, there’s a world government at the end. So I think in terms of trend extrapolation, you can argue this one either way, basically. Then, in terms of mechanisms, how would it actually come about? A natural way to divide that is between a voluntary formation and an involuntary formation.

Guive: So you could imagine gradually over time, something like the UN becoming stronger and stronger to the point where it can function like a government. And by that I basically mean that it can use force, like states just incrementally cede coercive force or something. Yeah.

And just to reiterate, I’m not saying this is a good idea or a bad idea. It’s just a possibility.

Fin: Just out of interest, does this ever describe how nation states form? Like, you have this kind of coordination that gets increasingly strong and then it becomes a de facto government?

Guive: Yeah, I mean, I think you could make this argument about the US. So the US has had two constitutions. Right after the revolution, there were the Articles of Confederation, but people felt for various reasons that they were too weak, and they were eventually replaced with the Federal Constitution. And the Federal Constitution, some scholars have argued, has gone through several different phases, and generally speaking, each phase is stronger than the prior phase. So, like, in between the founding and the Civil War, there was this kind of open legal question: can states secede? Turns out, no, they cannot. That’s illegal. And the federal government became stronger in various ways after the Civil War.

Guive: And then also in the 1930s, the federal government sort of arrogated to itself a lot of regulatory powers that it had not previously had, and just became a much larger part of the US economy. The Civil War obviously was violent, but the transition from the Articles of Confederation to the Federal Constitution, and the transition from the pre-New Deal order to the post-New Deal order — those were kind of peaceful and by consensus.

Fin: So those are examples — or at least illustrative — of how states can form voluntarily, and this might apply to a world government. Yeah, cool. And then any other ways that a world government might form?

Guive: Yeah. So a big one would be world conquest. And you can divide that further into two kinds of paths. One would be through some kind of uneven growth — like one country, or probably more realistically one alliance, becoming so much richer than the rest of the world that it’s relatively easy for them to conquer the rest of the world. And another path would be just changes in the offense-defense balance that radically change power between countries without any kind of broad-based economic change — without changing their ability to make refrigerators and stuff.

Fin: Okay. Could you say more about what that would look like?

Guive: Okay, so if it became possible for one country to reliably block nuclear missiles, that would really change things.

Fin: So tilting the balance in particular in favor of offense.

Guive: So why does North Korea still exist? North Korea is something like one twentieth as rich as South Korea. But North Korea has nukes now, and before that, they had tons of artillery pointed right at Seoul. And so even if South Korea could theoretically win a war against North Korea — and the war is, technically speaking, ongoing — it’s just not worth it to them to have their capital city, where 50% of their population is, completely shelled. But if they invented force fields that could block artillery shells, then that would be easier. I think what’s more important about this consideration is that it can kind of be a blocker to uneven growth acceleration resulting in world conquest.

Fin: I see. Yeah, they’re related.

Guive: Going back to the uneven growth one. The UK was like 1% of world GDP prior to the Industrial Revolution, and it was not very important politically. But after the Industrial Revolution, it was 15% of world GDP and the biggest empire the world had ever seen. And these things are probably related to each other. So the US and China right now are both about 15% of world GDP. And you might think that if there’s some new technological revolution that is similar to the Industrial Revolution, that speeds up growth locally in some places just as the Industrial Revolution did, then one of those countries will be like 95% of world GDP and will basically be a world government.

Fin: Yeah, this makes sense. And then in addition, if this offense-defense balance tilts in favor of offense, that would make that easier.

Guive: But if it tilts in favor of defense, which is kind of the situation right now.

The US right now is way richer than Russia. But say the US were a thousand times richer than Russia: if it’s still the case that Russia has thousands of nuclear missiles, it may not be a good idea to attack Russia. So the offense-defense thing is a reason we shouldn’t just reason from differences in overall ability to produce stuff to differences in power.

Fin: Okay, got it. That makes sense. Okay, so those are like, roughly two and a half ways or something that a world government might form. Yeah. And then maybe you could just say a bit more about how and why this avoids an evolutionary future. Just like, what could this look like?

Guive: Yeah, so you could have a world government that forms in whatever way, and then — okay, let’s say people want to colonize space — it could make rules about that. It could sort of assign property rights in space. It could say, like, you’re allowed to take this much space; if you don’t pay your taxes, we’re going to come take it away — and make all kinds of rules about that. So it wouldn’t be this kind of process where whoever is most willing to give other things up in order to colonize space will end up in control of space.

Fin: Yeah, got it. And I suppose one obvious way to make property rights meaningful is to have some coercive governing force which applies to both, or all, of the relevant actors.

Guive: I mean, that does seem to be the main way that property rights work in our world.

Fin: Indeed.

Guive: I guess there are some other ideas that people have. Yeah. And just to say something else about the space thing: it’s not automatically the case that just because there’s a world government on Earth, it can control space. It further has to be able to police space in some way, and that’s its own can of worms. And I don’t have a lot of insight into how that would work.

Fin: Yeah, I guess it’s global governance rather than Earth governance.

Guive: That is why I use it.

Strong multilateral coordination

Fin: Okay, so this is like, one general way we might avoid an evolutionary future. Again, not talking normatively about whether this would be good, but it’s a thing that could happen. The second factor you mentioned had something to do with strong kinds of multilateral coordination. Could you say more about what you mean by that?

Guive: Yeah, so to zoom way out, there was this paper that probably everybody has heard of called ‘The Tragedy of the Commons’ by Garrett Hardin, which is where the grazing analogy comes from. And he says: here’s the problem, and the only solutions are either a government that enforces property rights or a government that enforces other kinds of rules about how things can be used. And Elinor Ostrom, who was an economist and who won the Nobel Prize in Economics, basically pointed out that this does not accurately describe how commons have functioned historically, and that there’s a third option, which is that the users of a resource can collaboratively enforce rules about how that resource is used without there being any entity with a monopoly on the legitimate use of force.

Fin: So just to give some detail there, Garrett Hardin is suggesting that there are only two ways to avoid this [tragedy of the commons]. One is the thing that we’re roughly just talking about, which is that you assign property rights and that’s enforced by some external entity with monopoly enforcement. Then he said there’s some second option?

Guive: Yeah, I mean, for our purposes, these are the same thing, but you could also have publicly enforced use regulations that are not based on property. So it could be like, everyone gets to graze for ten minutes, as opposed to, like, we divide up the commons into private plots.

Fin: Got it. Okay, nice. But I guess the thing that these two options have in common is that they involve this external force. Yeah.

Guive: And that’s very much Hardin’s point.

Fin: Yeah. And then along comes Elinor Ostrom. She’s like, hang on, why don’t we just actually look to examples in the world where groups have faced some kind of collective action problem? How have they resolved them? Well, often they’ve successfully resolved them without [either of those things].

Guive: Yeah, I mean, so there are commons of this kind in Japan and Switzerland that have lasted for hundreds of years and not been overgrazed, and do not rely on any kind of police power of the state to operate. And one example that she gives that I really like is about this fishing village in Turkey called Alanya, where they knew that if people took too many fish, that would be bad for the group. So they needed to establish regulations of some kind on that. And the system they came up with was: they made a list of everybody who is authorized to fish in that village, and then they divided the fishing area into numbered zones, and then they had everybody move over one zone every day. And this prevents any kind of overuse.

Guive: And also it had the virtue of being easily enforceable, because if someone is in your zone, that’s a big problem for you. Yeah, and apparently this significantly reduced the overfishing in Alanya, and it didn’t require any kind of top-down authority per se. So you could imagine something like that happening over the most important issues in the future, like: how will we divide up space, or what kind of digital minds can people make, or what kind of weapon systems are allowed, what kind of environmental damage is allowed?
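(A minimal sketch of that kind of rotation scheme, just to make the mechanism concrete — the names and details here are made up for illustration and aren’t the actual Alanya rules:)

```python
# Rotating-zone scheme in the spirit of the Alanya example: each licensed
# fisher starts in a different numbered zone, and everyone shifts over one
# zone per day, so over a full cycle each fisher passes through every zone
# exactly once. (Illustrative only -- not the actual Alanya rules.)

def zone_assignment(fishers, num_zones, day):
    """Return {fisher: zone} for the given day, rotating one zone per day."""
    assert len(fishers) <= num_zones, "need at least one zone per fisher"
    return {f: (start + day) % num_zones for start, f in enumerate(fishers)}

licensed = ["boat_1", "boat_2", "boat_3"]
for day in range(3):
    print(day, zone_assignment(licensed, num_zones=3, day=day))

# Enforcement is easy: if another boat shows up in your assigned zone,
# you notice immediately and have every incentive to report it.
```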

Fin: And the distinction here is that you get this kind of agreement about what kinds of futures are preferred or not, but then they’re successfully enforced just through cooperation and coordination. Just like mutually by the actors that already exist.

Guive: Exactly. Without forming some, creating some new overwhelmingly powerful actor that will crush anybody who breaks the rules.

Fin: Yeah. Good. All right. So when we were talking about world government, you mentioned trends that might point in the direction of it being more or less likely. Is there some similar trend we can look at to see whether this seems feasible?

Guive: Yeah, it’s an interesting question. So one thing you could point to is that there are some international coordination successes, like removing CFCs from industrial products. Also, nuclear nonproliferation has been somewhat successful, but obviously leaves something to be desired — though I guess it’s been more successful than people expected.

Fin: In the 60s, when the nonproliferation treaty was coming into place.

Guive: But yeah, certainly leaves something to be desired. I guess something that’s relatively new that might be a positive, or not necessarily positive but a sign that multilateral coordination is becoming more feasible over time, is a lot of trade between strangers happens on the internet based on reputation systems. If you’re buying something on Ebay from somebody in Indonesia, you may not be super confident the Indonesian police will arrest this person if they rip you off. But the Ebay reputation system might allow you to have confidence that they won’t rip you off.

Fin: Right. And this is distinctively enabled by this technology, the Internet plus these reputation systems. Indeed. And they just unlock a bunch of trades which wouldn’t have happened otherwise. Yes, it seems good for anticipating more kinds of coordination […]

Guive: The way the protocols are structured basically incentivizes cooperation.

Fin: It’s just like one massive agreement that just holds itself together. Yeah. I was reading a lot about undersea cables recently. Right. And there’s a relatively small number of cables that just connects the entire world together. But the incentives work out. Like everyone wants these.

Guive: Like these are the ones that have those videos of the sharks trying to eat them?

Fin: I haven’t seen these videos, but I think they’re the same cable. Turns out like a ton of the internet just runs under Egypt, going into the Indian Ocean.

Guive: Very cool.

Fin: Yeah. Many more facts available. Okay. These are some general trends, some general reasons to expect this kind of strong global coordination to be possible.

I guess also worth asking what this could actually look like, what the mechanisms could actually be.

Guive: Yeah. How something like that could come about.

So I kind of divided it into two categories. The first category is that there could be reasons, having to do with future technology, that cooperation between actors with unshared preferences becomes easier. And it could also separately be the case that preferences become more widely shared, so people just become more intrinsically motivated to cooperate. So, on the first one: a lot of what’s going into this section of the paper is from an international relations paper by James Fearon called ‘Rationalist Explanations for War’. And this is a really great paper, maybe one of the best social science papers that I’m aware of. And he makes this point, which is an old point, that war is intrinsically negative-sum. Like with the current war between Russia and Ukraine, whoever wins will have spent a huge amount of money.

Guive: It’s like hundreds of thousands of people will be dead, a bunch of their stuff will be destroyed. And it would have been better for that side if they could just skip all the fighting and go straight to the part where they get whatever it is that they get.

Fin: So basically, before a potential war, there is always some alternative agreement that, if both parties could commit to it ahead of time, would be better for both than the war.

Fin: It’s like: let’s get to the same ending point just without everyone dying. Yeah. Okay. Why doesn’t this happen all the time?

Guive: Why doesn’t this happen all the time? So there’s a couple of reasons that are out of scope for Fearon. So one is irrational leaders who may not think this way.

Hot-headed leaders — that’s been a problem in a lot of wars. But also, just like, they don’t really know what’s going to happen. So with World War I, it’s not like the Russian Foreign Minister Sazonov was like, oh yeah, this will mean the destruction of our dynasty, and however many million of our people will die, and it will be this huge catastrophe, but we should do it anyway just because we’re so mad at Austria. They had no idea what was going to happen. And if they had known — if they had been more rational in that sense — they probably would not have gotten involved.

Fin: But not being omniscient, like not knowing exactly how things pan out doesn’t strike me as an example of irrationality.

Guive: But if they had maybe a better distribution over possible outcomes.

Fin: Or in particular if both or more actors before potential war had like a shared distribution over outcomes.

Guive: Yeah, so that’s one big problem. That’s what Fearon calls asymmetric information. Getting away from World War I: the US right now has a number of nuclear weapons that have certain technical properties and that are stored in a variety of places around the world.

The US might like its adversaries to know that it has the power to do certain things with its nuclear weapons. However, it really does not want to allow its adversaries the unrestricted ability to come inspect its nuclear weapons and know where they are and what they’re doing at all times. That would be very damaging. And it may be difficult to get the first kind of knowledge without the second kind of knowledge.

Or, like, another example: if the US allowed Russia to install microphones in the Pentagon, that would probably contribute to trust between the US and Russia. On the other hand, it would also make the United States much weaker, basically. So there can be a trade-off between releasing information that’s relevant for bargaining purposes and retaining your own bargaining position. And it’s possible that this will change in the future.

So if there were a way to verify exactly how many nuclear weapons you have with what properties without allowing inspectors to come check them out, that would be helpful for this purpose. And it might be possible to use various techniques to release all and only the bargaining relevant information.

Fin: So the worry is that if you have disagreement over something like the odds of success in a potential war, that can increase the likelihood of a war. And you’re saying it would be great if there was some way to reach agreement and common knowledge of the odds each side assigns to winning. But currently, that might be infeasible because sharing the relevant kinds of information could also reduce your odds because you’re relying on a kind of secrecy.

Guive: Right.

Fin: But maybe in the future, through some technological means, there will be ways to reach agreement on the odds of winning.

Guive: It might be possible to credibly release only very specific information that’s relevant to bargaining as opposed to a ton of information that can be exploited for strategic advantage.

Fin: This is like one general reason to expect coordination in place of something like war.

Guive: And, I mean, it doesn’t have to be purely technological if there were just highly trusted mediators. So, like, most countries will allow the Red Cross to operate in their territory during a war. And that’s because the Red Cross is seen as credibly neutral between different sides. Like, the Red Cross is not spying on you and telling the United States what you’re doing. If there were some kind of like, international college of weapons inspectors that was trusted in the same way the Red Cross is, that could have a similarly beneficial effect.

Fin: I see. Yeah. So I guess the IAEA is a bit like this, but not trusted by [many countries].

Guive: I think not really trusted at all. Yeah.

Fin: And if somehow they became incredibly trustworthy…

Guive: Robin Hanson talks about this idea of, like, a school of neutral diplomats that you would go to at a young age, and everyone could look at the curriculum and stuff — maybe that would be helpful in that sense.

Fin: Interesting. Anything else you want to say about this?

Guive: Yeah, so maybe even more important than asymmetric information is commitment problems. So this has come up a lot in the Ukraine war, where some people have suggested, like, oh, as a compromise, maybe Ukraine should recognize Russian control over Crimea or something like that. And then the rebuttal is always, well, who’s to say they won’t invade again in ten years and try to get more? And so that’s a reason from this perspective for continuing the war because you have no guarantee that they can’t do that. So you need to reduce them to a position where, you know, they can’t do that. The way this kind of thing is resolved in business is like, you make contracts that are then enforced by a third party.

Fin: Yeah. Right.

Guive: So if I promise to deliver 1000 trucks to you tomorrow and I don’t do it, you can sue me for damages for failing to adhere to my end of the contract.

Fin: And in general, if you’re an actor smaller than a nation state, it’s probably the case that there’s some enforcing entity that’s bigger than both of you.

Guive: Yes. But there’s no meaningful way that Ukraine can sue Russia for damages for starting another war. However, there might be new commitment devices available in the future. One might be escrowing funds. So if two countries are trying to make a deal with each other, they might just put a ton of money in surety under some neutral authority that will confiscate the money if they don’t abide by the terms of the deal. There’s a paper proposing this for environmental treaties — it’s Hovi et al. — and you could argue that some similar things have worked out in the past. After the Iranian Revolution, relations between the United States and the Islamic Republic of Iran totally broke down. Tons of Iranians had claims against the US, and tons of Americans had claims against Iran.

And the two countries formed a kind of joint commission to settle these claims and they settled them out of a pool of frozen Iranian assets in the US banks.

And this seems to have worked pretty well. They worked through, I think, 90-plus percent of the claims. So you could imagine things like this happening in the future. More speculatively, it might be possible, for instance, for countries to collaborate on engineering projects that have the effect of enforcing some type of deal. So to take a kind of silly example: if I want to prove that I won’t be late for meetings anymore, and I build a robot that will stab me 50 times if I’m ever late for a meeting again, then I would probably be on time.

Fin: This is where future technology could be relevant. Because just a simple escrow seems like a thing we can do right now.

Guive: Escrow is something we could do right now, and maybe we should really be pushing that. I don’t know. But yeah, in the future it might be possible to get something a bit stronger than that.

So blockchain is kind of the other side of this — yeah, I guess I see that as being on the escrow side. One issue with escrow is: what if someone steals the money? And who are two countries going to trust to intermediate between them? And so you might say, oh, well, you can make a DAO, and then the DAO will hold the money, and then the DAO will have an ML system that scrapes the news to see if Russian troops are in Ukraine or something. That’s one possibility. But going beyond escrow, you might basically collaboratively build robots to enforce a deal. This has been discussed in some futurist literature, and I think it’s an interesting idea.
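(A minimal sketch of the escrow-as-commitment-device idea, just to make the mechanics concrete — the class, the compliance check, and the forfeiture rule here are all illustrative assumptions, not a proposal from the paper or from the Hovi et al. paper mentioned above:)

```python
# Toy escrow as a commitment device: parties deposit a surety with a neutral
# holder; a party that fails the (stubbed-out) compliance check forfeits its
# deposit to the compliant parties. Everything here is an illustrative
# assumption -- `complied` stands in for whatever verification mechanism
# (inspectors, an ML system scraping the news, etc.) the parties trust.

class Escrow:
    def __init__(self, deposits):
        # deposits: {party_name: amount placed in surety}
        self.deposits = dict(deposits)

    def settle(self, complied):
        """Return each party's payout: deposit returned if it complied,
        forfeited and split among compliant parties if it did not."""
        payouts = {party: 0.0 for party in self.deposits}
        compliant = [p for p in self.deposits if complied(p)]
        forfeited = sum(amount for p, amount in self.deposits.items()
                        if not complied(p))
        for party in compliant:
            payouts[party] += self.deposits[party] + forfeited / len(compliant)
        return payouts

escrow = Escrow({"Country A": 100, "Country B": 100})
print(escrow.settle(complied=lambda party: party != "Country B"))
# {'Country A': 200.0, 'Country B': 0.0}
```

The hard part, of course, is the `complied` function — whatever inspectors, sensors, or news-scraping system the parties are willing to trust — which is exactly the verification problem discussed above.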

Guive: I do think people in general are kind of too quick on this. There’s something a bit hasty sometimes about discussions of how advanced AI will enable really strong coordination.

Fin: Yeah. Just to be clear, one thought here is that maybe it’s possible or even easy to build software systems with really transparent preferences, so we can both inspect and agree that they’re not being sneaky. Whereas with humans you can’t really do this.

Guive: Yeah. Although if we’re talking about future technology, why not brain scans or something? So people do have this view that AIs will cooperate really well with each other. Insofar as I understand it, it seems something like: yeah, you can read an AI’s brain, and you can also modify it to lock in commitments. So if you could read the code, then you could write into the code that it will never do XYZ, and then that’s the code, so it’s going to follow that. I think these are the big reasons why people think advanced AI will enable more coordination. A lot of people do think that; I guess some are concerned about it. So, are you familiar with the Underhanded C Contest?

Fin: Tell me more.

Guive: Yeah, so there used to be this contest for C programmers where they would get a prompt with kind of two layers. So the first layer is some innocuous task, like collecting votes on something, and the second layer is a specific way in which your code has to undermine the task — and it has to also pass visual inspection by expert judges.

Fin: As in, like, someone scrolling through the code?

Guive: Someone reads the code and says, this looks legit. So you can’t very blatantly undermine the task; you have to do it in a way that someone who knows what they’re doing won’t notice. And the 2015 contest, which was run by the Nuclear Threat Initiative, interestingly enough — the task was to take some data reflecting an object being dismantled and say: is this or is it not a nuclear weapon? That was the ostensible task. And the sneaky task was to get a program that will basically always say yes, so the country can pretend they’re dismantling nuclear weapons when they’re actually just dismantling something else. And this allows them to keep a larger stockpile of nuclear weapons than they’ve said they will. And people found various ways of writing code that looked legit but that had this property. And you might be concerned that if people in the year 2700 are trying to build robots to enforce a deal, this kind of problem might come up again.

Guive: So the next step in the dialectic here is that people say, okay, well, C is famously a language where you can get away with all kinds of bad behavior, and if you had a formally verifiable programming language, that would seriously constrain this kind of behavior.

Fin: What’s a formally verifiable language?

Guive: My understanding is it’s a language where you can have some kind of proof or at least like a very clear argument that the program does what it says it’s going to do and is not executing some kind of malicious code. Yeah. So that’s something that people propose. And it is true that there has been a sort of gradual trend in the direction of doing more things in a formally verifiable way.

These languages, since they’ve been invented, are used for more applications and are less of a purely kind of academic thing, like some apps are built on Rust now, is my understanding.

Fin: How is that an argument that we’re trending towards more formally verifiable software? I might have suspected the opposite, actually.

Guive: How so?

Fin: There’s just this increasingly complicated hierarchy of higher level languages becoming increasingly inscrutable.

Guive: Yeah. But I mean, people do actually use formally verified languages now, as I said, though I'm not an expert. There was a time before there were any formally verifiable languages, then a time when there were some but they were not used for anything, and now they are used for some things.

That's the argument, anyway.

Fin: I feel like we should kind of zoom out again.

Guive: The basic point is, you might think in the future, the ability to do collaborative engineering projects between adversaries will remove commitment problems. However, there are difficulties with those projects now, and there might continue to be difficulties in the future, and some people think, like, AGI will solve those difficulties, but I’m not personally convinced of that yet, although I think it’s possible.

Fin: Okay, so zooming out, we're a bit in the weeds there. But the context is this question of whether technology in the future, maybe involving AI agents that can make treaties with one another or something, could enable stronger kinds of coordination, and in particular commitments, where currently it's often hard for things at the level of nations to make credible commitments to one another. How would you sum this up?

Guive: Yeah, the way I would sum it up is, like, we know that human beings’ minds are hard to read, and it’s hard for them to absolutely lock in a commitment to do something in a way that’s verifiable to others. We don’t know how AIs in the far future will be, so it’s more likely that AIs will be able to do those things than humans.

Fin: Okay.

Guive: Because the character of AIs is unknown.

Fin: The fact that we don’t know how things will be is a reason for hope.

Guive: Yeah. And also a further argument you could make is like, the ability to make commitments is useful. So if this can be had at low cost, you might expect people to do it.

Tech solutions for global coordination

Fin: Yeah. Okay, great. That makes sense. So I guess zooming out even more: the conversation here is about ways in which strong kinds of multilateral coordination could come about, like, what are the mechanisms? And we were talking about this 'rationalist explanations for war' framing, and ways it could be easier to commit to agreements in the future. I'm curious if there are other ways that global coordination could come about. You mentioned something about preferences.

Guive: Yeah. So basically, the thought there is: in a collective action problem, the assumption is you only care about your own welfare, so you're willing to impose costs on the group that exceed your private benefits, because you don't care about costs to the group. If you had a situation where everybody cared just as much about the other users of the field as about themselves, there's no collective action problem.

Fin: Yeah. Like, in the extreme, if we all had literally the same set of preferences, we'd all just care about the world being different in exactly the same way.

Guive: Yeah, exactly.

Fin: Then we just work together.

Guive: Exactly. And there are some arguments that one, selfish preferences will be less in conflict with each other in the future and two, non selfish preferences might converge. And all these arguments sort of point in the direction of greater coordination in the future.

Fin: Okay, cool. Let’s take them in turn. So there’s this point that selfish preferences may conflict less.

Guive: Okay, so why is that? The idea is something that Eric Drexler and Leopold Aschenbrenner have written about in different ways. My own thinking about it is largely drawn from some unpublished papers by Ben Garfinkel which will eventually come out. So imagine there are two people in the world and they have equal wealth. One could attack the other and have a 60% chance of killing him and taking his money, and a 40% chance of getting killed, which we can think of as his wealth going to zero. It is much more likely that they would choose to fight if they each have $1,000, so that the benefit of winning is going from $1,000 to $2,000, than if they each have a billion dollars. Because the utility that you get from money is famously diminishing: if you imagine the lifestyle of somebody who goes from having $1,000 to having $2,000, the difference might be something like having enough food, whereas if you go from a billion to two billion, the lifestyle difference is much smaller. The happiness you gain is less.

Fin: Making it more abstract: one extreme version is if your utility from money just totally levels off after a certain point. So maybe beyond, like, a million dollars, I just literally do not care about getting more money. In that case, if Mark and Bob or whoever both had a billion dollars, they're both going to be literally indifferent about getting any more money, so there's no reason to fight. If their utility from money is something like logarithmic, then they get equal increments of utility with each doubling, which means they'll get a smaller proportional increase in their utility the richer they are. So that points the same way, though I guess it depends on the numbers.

Guive: But also, if you take into account the risk that you might die if you attack someone, that becomes much less attractive, right?

Fin: So the proposition becomes less and less attractive.

Guive: Yeah, that’s exactly the idea. You might see this as an argument that as the world gets richer in per capita terms, conflict becomes less likely.

Fin: Yeah, this makes sense. I guess also Leopold [Aschenbrenner] makes this point. People might care more about reducing risks to themselves. I guess it’s kind of the same.

Guive: Yeah, because what you'd lose is worse: there's more to lose. And you can think about this in terms of some historical analogies. There used to be tons of pirates, who would go around and run a serious risk of death to maybe steal some gold.

This is much less common now. I mean, there are Somali pirates, but Somalia is one of the poorest countries in the world. Part of it is probably that they don't have a coast guard and so on, but also, in most other countries people probably have better things to do than become pirates.

Fin: Yep. And this is also just a general explanation about why rich countries tend not to go to war if you just have so much to lose.

Guive: So they might go to war in a case where there's no chance they will lose, like the United States invading Iraq: there was really no risk that Iraq would roll tanks down the streets of Washington. But they might be much less likely to go to war in a way that could turn into a massive war.

Fin: Yeah, okay. Where there is a reasonable chance of losing, you're much more sensitive to that chance the wealthier you are. That applies to people, and it applies to countries. So this is a reason to expect that if the world gets richer on the whole, countries have one more reason to be averse to war. Cool. Anything else in this general bucket of preferences changing in a way that makes coordination easier?

Guive: Yeah. So another thing is a lot of things that seem like different preferences are arguably the result of differences of opinion about a matter of fact. And if matters of fact come to be more widely known in the future, this might be less of a problem.

There's this paper from 1952 by Bernard Berelson called 'Democratic Theory and Public Opinion', and he basically makes the point that in most policy disputes, different values are not at stake. So people have different ideas about the appropriate level of tax, but it's not typically the case that they disagree because…

Fin: …this is just, like, a bedrock fundamental belief: people should be marginally taxed at 41%…

Guive: [Right.] A lot of the time it's because they have different ideas about what the effect will be on employment, on real wage growth, on inflation, things like that. And if you poll people and ask, is employment good or bad, they will say it's good. It doesn't differ too much by political affiliation, is my understanding.

Fin: So policy conflicts, you're saying, tend to have a big empirical component, and so they should shrink as long as we just learn more about the effects of policies.

Guive: And it does seem like, as a broad fact, more tends to be known over time. It can go into reverse; apparently people forgot how to do astronomy during the European Dark Ages. But that seems to be the general trend, and that might cause us to expect there to be less conflict in the future. And then a final thing: there are normative disagreements in the world. But if it is the case that there are normative facts, in some sense, that can be learned, we might think that as part of the process of more facts being learned over time, people will learn these normative facts and internalize them. And then people will agree on what is right.

Fin: Yeah. So this applies to disagreements where we're disagreeing about what's best, rather than cases where we're both just selfish.

Guive: Exactly. And I think that does happen, that kind of disagreement. I guess I’m skeptical that this will actually happen in real life. But if you do think that there are normative facts, you might think people will eventually converge on them. So presumably you think there are physical facts and people will eventually converge on those. And so if you think that normativity is like physics in that sense, you might also expect people will converge on normative facts. And a consequence of that would be per the earlier argument about collective action problems. There will be more cooperation in the future.

Fin: Yeah, and I guess this is a nice way of carving up different reasons to expect coordination, but they’re kind of blurry. So I’m thinking that ethical views are informed by beliefs about souls or something.

Guive: Yeah, these things are not sharply separated, especially if there are normative facts. All facts are kind of entangled in all other facts, from a certain perspective. The thought is that as you straighten out the web of belief, people will come to agree more about what should happen.

Fin: What’s that Quine paper?

Guive: Well, he has a book called The Web of Belief, and there's the analytic-synthetic distinction.

Fin: Two Dogmas of Empiricism!

Guive: Yeah, I haven’t read that.

Fin: We’re totally not going down that rabbit hole. But cool. In my notes, I have a distinction between realist and subjectivist kinds of convergence on moral facts.

Guive: This is a very fine distinction, but the idea is basically: you could think that there are normative facts that exist in a mind-independent way, just like there are presumably physical facts that exist in a mind-independent way. Or you could just think, look, you take the human mind, you apply the correct idealization procedure, and you output the correct answer; and there's only one right idealization procedure, so you're going to get the same answer for everybody. So you can have this view that there will be convergence on what's right in the future without normative realism or moral realism per se. You could be a subjectivist and just think that there's this one-to-one function, or I guess a many-to-one function: every mind, one output.

Fin: Yeah, for sure. And I want to say that a lot of moral thought looks a bit like figuring out which ideas are confused and which aren't. I don't know quite how to categorize that.

Guive: I mean, my own reservation about this is: why would there be just one idealization procedure, and would it really be the case that everybody would get the same answer? I don't know. It seems kind of weird.

Fin: Well, here’s the doubt I have in mind.

Guive: Sure.

Fin: Let’s say roughly 20 years ago, there was lots of disagreement in the academy, like in philosophy, about what kind of view about population ethics was correct. And so there were like lots of views that were kind of on the table that people were throwing around. And you can imagine this being actually a kind of action relevant question, at least in the long run. Yeah, and since then I want to say that there’s been some convergence on which views hold water and which turn out to be confused or really hard to get to work once you just think hard about it. And I don’t know whether that’s like an example of realist convergence or subjectivist convergence, but it’s an example of some kind of convergence towards an action relevance and also normative.

Guive: I mean, if you just lay out, in a really clear way, which bullets you have to bite, then people are like, oh.

[…]

Guive: […] And when you think about it that way, then it becomes pretty obvious that you should accept the repugnant conclusion or whatever. Yeah, maybe. It's really interesting you say that, because that was not my impression about population ethics, although you probably know more about it than I do. But isn't it the case that people have invented all these kinds of weird, high-tech views in the last couple of years?

Fin: Again, I’m not an expert either. My impression is that in discussion around the original repugnant conclusion, it’s now harder to hold a range of simple, plausible sounding views.

Guive: Okay.

Fin: I think people have been driven to just accept a bunch of impossibility results.

Guive: Like averageism is less popular.

Fin: Exactly.

Guive: Yeah. Okay, that’s interesting.

Fin: And this happens in general in philosophy, right? Like, occasionally people do just roughly agree for sociological reasons. Don't you think that seems extremely pessimistic as a description of all of philosophy?

Guive: Yeah, probably not all of it, but it describes a lot of philosophy. I mean, there was some discussion of this with David Chalmers. He had some document arguing that there's progress in philosophy because, for instance, people don't believe in God as much anymore. Is that really about philosophy?

Fin: Yeah, I mean, just minimally, right? I take it that it's possible for me (and probably I do) to have a bunch of beliefs which seem plausible but which, when I think about them more, turn out to be confused, and then I reject them. And this can happen at a group level: people work through a bunch of beliefs that seemed pretty defensible, and that looks a bit less like discovering moral truths and more just like getting clear.

Guive: Yeah, I mean, maybe that would be the kind of thing the subjectivist would say.

Coordination through converging preferences?

Fin: Yeah. Okay, nice. We’re talking about convergence in preferences and I guess in particular like normative preferences. Anything else you want to say about this general idea of preferences converging?

Guive: Well, you had this interesting point where if preferences are more diverse that could in its own way kind of enable cooperation.

Fin: I was imagining something like if we encountered some alien civilization with just totally alien preferences. Like they really care about this random chemical compound that we have no use for on Earth. And they really care about making patterns in the sky in some frequency of light that we can’t see or whatever. And they don’t especially care about what we do with stuff on Earth. Then we don’t really have much reason for conflict because we can just happily get along with our own things and we don’t want what you want and vice versa.

Guive: Like if they care about stuff we just have no interest in.

Fin: Yeah. Though maybe if the entire universe gets full and we both just want to eke out an extra bit of universe, then there's a reason for conflict again, which is a limit on that point.

Guive: But I guess this thing you're describing does seem to happen in real life all the time, and that probably is a sign that we shouldn't say, oh well, in the future that will never happen. So I think it's a good point, and it's not one that I address.

Fin: I don’t know how to make this precise, but it does feel like there’s some kind of upside down U curve or something here, right, where if you and I just totally agree on what we want the world to look like, then we’re not going to conflict, we’re just going to work together. Yeah, if we totally care about completely orthogonal things like maybe my relationship to what a penguin cares about or something, then we also just don’t have much reason.

Guive: Famously, we are messing things up with penguins in various ways.

Fin: Okay, so maybe the example is like some random bird in the rainforest and a penguin. They don’t conflict.

Guive: Yeah, sure. But even setting humans aside, it seems like we all share one environment and we all kind of want to do different things to it. That seems to me like a source of conflict, especially at the point we're at now, where you're reshaping the world to conform to various goals you have. It seems like that's potentially a problem for all animals.

Fin: Yeah. But okay, if I only care about what happens in my neighborhood and you only care about what happens in your neighborhood, and our neighborhoods are very far apart and very different and not competing for the same resources.

You build all your houses out of wood, I build mine out of steel.

Guive: That is my relationship with most people in the world, for sure.

Fin: Right. And that’s why we don’t typically conflict with most people in the world.

Guive: Yeah, indeed. So it’s a good point. But I guess on the other side, the more you kind of have broad preferences about how everything goes, and the more there are these kinds of global environmental variables that we’re all trying to set to different values.

Fin: Like our preferences extend to overlapping things. Yeah.

Guive: Which is true of the Earth’s ecosystem. I don’t know if that’s true in space. I have no idea. Then that would tend to mean that different preferences do conflict.

Fin: Cool. That makes sense. Again, I don't think this is very precise, but there's maybe something there. And then here's another thing which might be relevant. I remember Robin Hanson talking about this thought that in some sense, recently, as in over the past half century or so, it's more the case that global leaders, like the global elite, share preferences. There's just more of a kind of homogeneous elite culture.

Guive: And this might be a reason to expect coordination. Robin does have this view. I think I've become a bit less convinced of it over time, because right before World War I, the world was dominated by European countries plus the United States, and the rulers of Russia, Germany, Britain, Bulgaria and some other places were all grandchildren of Queen Victoria. And that seemed like more cultural integration.

Fin: Than we have today.

Guive: And then prior to that, in the Middle Ages, there was this kind of pan-European elite culture based on attending the great universities and speaking Latin and church stuff of various kinds. It just seems like this is something that goes up and down over time. And arguably now we have one integrated world system, which was obviously not the case in the period of the Roman Empire, and that means that the up periods can be more up. But I guess I'm not sure that there is really this strong trend that Robin sees. One big piece of evidence he uses in favor of this point is a sort of harmony of regulations across jurisdictions.

Fin: Yeah, right. So why do basically all rich countries seem to ban nuclear power in some way?

Guive: I mean, it’s an interesting question, but why did a bunch of different countries ban alcohol consumption in the early 20th century? So his response when I asked him this question was like, well, banning alcohol consumption was not like a way of helping the world at the expense of your country. Arguably, people think about nuclear power in those terms. On the other hand, I might be kind of mischaracterizing his view a little bit, but is that really why nuclear power is banned? Is it like, kind of a global is it like, this is to help the world even though it’s at our expense?

Fin: Just to be clear, when you say 'to help the world', the thing I had in mind was: as a world leader who is part of some group of world leaders, you care about your reputation on the world stage. And insofar as there's a norm of banning alcohol during Prohibition, or of banning nuclear power, then going along with that norm is going to help.

Guive: Sure, yeah. That’s the mechanism he has in mind. Is that so different between banning alcohol and banning nuclear power? I really don’t know.

Defensive advantages

Fin: Nice. So we’ve talked about world government. We’ve talked about coordination short of government. The third factor you mentioned as a reason that we might avoid an evolutionary future is something to do with there being a defensive advantage. What does that mean?

Guive: Yeah, so to start with an example, recall the hunter-gatherer thing. Hunter-gatherers were outcompeted for the most part, but some hunter-gatherers were on islands that were in the middle of nowhere, and they were not outcompeted, for the simple reason that there was no way for farmers to get to them. The Andamanese might be an example of this.

Fin: The who?

Guive: The people on the Andaman Islands. You know, those guys who are, like, throwing spears at helicopters and stuff.

Fin: Oh, ok.

Guive: Those people may be an example. And basically, one way you can think of it is: ordinarily it's hard for hunter-gatherers to defend against agriculturalists. However, if you're in the middle of the ocean, it's very easy to defend against agriculturalists; they just never show up. So you might think that future technology will create a situation similar to the hunter-gatherers on isolated islands, and that would enable people to avoid collective action problems by avoiding the need for collective action. We have this concern that people are going to build tons of digital minds that are hyper-optimized to be economically competitive, and say we don't do that in our own place. So then there's a question of, okay, different nation states may have different rules about what kinds of digital minds can be made.

Guive: If it’s the case that everybody has an antimatter bomb that blows up the entire world, then as soon as the state that made all the bad digital minds is like, coming to attack you can say, like, hey, really want to find out if I’m going to press a button? And then they have to leave you alone and you could do your own thing, whatever that is.

Fin: So everyone has got their own well-protected metaphorical island, and that means everyone can just get on with whatever totally uncompetitive thing they want to get on with.

Guive: Exactly.

Fin: They’re not worried about being overtaken.

Guive: They can choose their own future in that sense, because the competitive element is gone. […]

There is a risk in using this offense-defense terminology in a very intuitive way, something that Lukas Finnveden pointed out: if you're talking about how a bomb that can destroy the entire world is a defensive thing, [that's a strange way to use the term 'defensive advantage'].

Fin: Is there a technical definition?

Guive: There is. I think we should probably not get into it, but just say, like, if it’s the case that you can protect your own resources without great constraints on what it is you do with your resources, then there’s less of a kind of collective action problem and then there’s less of a kind of possibility of an evolutionary future.

Fin: Yeah. If in some sense it’s cheaper to react to aggression than to cause it. Yeah. Then again, so we’re turning up the dial right towards defense. What happens in that world? If that applies to each country in the world as it is now, then maybe that means that countries need to worry less about arms races, like literal arms races because they’re just like happily sitting there in their well defended territory.

Guive: Exactly.

Fin: And also, in general, they need to worry less about giving in to competitive pressures to, for instance, be economically competitive and more wealthy, and they can just get on with whatever they want.

Guive: Great. Yeah. And so there’s been some discussion of this. Like, Carl Schulman had a blog post about this and Paul Cristiano elaborated on it and it’s actually pretty much all the discussion there’s been. Even though it seems like a pretty important thing to think about, it seems like one idea is that eventually different parts of space will not be accessible to each other. So it could be like if you wait until then, you can have your own place that literally no one can ever come to from the outside. And then how is there going to be competitive pressure? That seems to be one that people.

Fin: Have between those places.

Guive: Between those within is a separate problem. I agree.

Fin: Yeah. I guess before that point, which is a while in the future, there is this question about whether the environment of space favors the defender or the attacker. One consideration is that in space it's probably really easy to see someone coming from quite a long way away, and maybe that favors the defender: they can just get ready in time.

Guive: So the idea is, if you can see people coming in space, then you can defend yourself against them, something like that, and so the defender has the advantage in space. Yeah, I don't know. I don't have a strong view on the offense-defense balance of space, though I do think it's a very important input here. I mean, there's other stuff, like: you can get a projectile going pretty fast in space, and there's a huge surface to defend, they can come from any direction, so how are you going to block them all? And it doesn't take that big a projectile, relative to a planet, to really mess things up. So that could be an argument on the other side. Yeah, I don't have a strong view.

Fin: There’s a really good Gwern post. It’s called Colder Wars. And I think this gives an argument that space favors the offender, the aggressor.

Guive: That’s where I got the fast projectile thing.

Fin: Okay, nice. And also, I guess he makes this point that space favors a first-strike policy.

Guive: Yeah.

Fin: Because it’s hard to retaliate. Yeah, that’s right. It’s like it’s often hard to know where the initial aggression, like the first strike, comes from, and so it’s harder to commit to a second strike.

Guive: But then that’s the kind of the opposite of what you were saying.

Fin: Yeah, it is. I was offering it as one consideration.

Guive: Fair enough. I don’t think the tagline of this post is: mutually assured destruction will not work in outer space. Preemptive strikes are not guaranteed. So, big if true.

Fin: Got it. And MAD being a kind of defensive mechanism. Right?

Guive: Yeah.

Fin: You can credibly commit to retaliating. And so, okay: bad news for the defender, if this argument is correct. Great.

Guive: Yeah. So with a defensive advantage, the basic point I want to make about this is like, I don’t know who has the advantage, and more research is needed.

Fin: Yeah. So, okay, here’s a thought that passed through my mind when I was reading this section. It’s very natural to think or for me to think about offense and defense in the context of literal conquest. So, like literal wars that different countries can fight. But there are other ways where you could have competitive pressures leading to. Undesired futures. One could be that some groups just spread faster than other groups. They just, like, copy themselves really fast or whatever.

Guive: I mean, so you could have a situation where it’s like whoever grabs a star gets it permanently and no violence is allowed. And you could still have a selection for people who really just want to grab.

Fin: Or it could be, intuitively speaking, equally easy to attack and to defend regions in space, but you still get these evolutionary outcomes, right?

Guive: So it’s mostly just about first mover advantage in that kind of or rather it’s about selective, like, who’s just better at fighting, not who’s the defender. Yeah.

Fin: So there’s this obvious point that there are factors other than the offense-defense balance that determine the future. And then my question that does seem right, my question is, are there analogues for the offense defense balance that apply to these other factors, or is it just considering conquest?

Guive: I think it’s not fundamentally about conquest. In the case where everyone is trying to grab parts of space and let’s say there’s no violence, it’s just whoever grabs it gets to hold onto it. You might have the exact same concern that whoever is most obsessive about grabbing will get most of the space. It could just be that being really obsessive about grabbing does not matter that much as a determinant of how much space you get. And I think this is kind of what Carl Schulman is saying in his blog post, where he’s saying, like, look, just the optimal strategy in that world is grab as much as you can now and then save it for later when you don’t need to grab anymore.

Fin: Wait, if I’m remembering the right blog post, it’s called ‘Spreading happiness to the stars seems little harder than just spreading’.

And the consideration here is, like, maybe there is a strong trade off between desirable futures that involve lots of happiness and getting a bunch of space really quickly, like being really grabby.

Guive: Carl’s point is that actually seems like a weak trade off because you just have to preserve the goal and grab. Basically, you just have to like for now, you have to accumulate capital until you’re safe from your adversaries, at which point you turn to doing good stuff with all the space that you’ve grabbed.

Fin: Yeah. How does this fit into the whole framework?

Guive: Because it’s like an example where I would just lean into the competitive advantage ,because it assumes that at some point there’s a time when you’re safe.

Fin: I see. Yeah.

Guive: I think I could have been clearer about that.

Fin: This makes sense. And this is a case where the goals are preserved through the competitive period.

Would an evolutionary future count as an existential catastrophe?

Fin: So those were three big reasons we might avoid what you've called an evolutionary future. As a reminder, this is a future where, due to competition between actors, the world ends up in a place that almost no one would have chosen from the starting point. Let's talk about what this all means, what it could imply. I guess mostly this has not been a normative conversation; it's more been a 'how could things shake out' conversation. But here's a question: do you think an evolutionary future, as you've described it, would count as an existential catastrophe?

Guive: So it really depends on the definition of an existential catastrophe. The definition in 'Astronomical Waste', the paper that coined this term, is that an existential catastrophe is something that would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. So it depends. An evolutionary future, the way it's being defined here, would not annihilate Earth-originating life. And then there's this question: would it permanently and drastically curtail its potential? In order to assess that, you need to know what would happen in the evolutionary future (how good or bad would it be in absolute terms?) and also what the potential of Earth-originating life is. One way of interpreting it is that the potential is the best possible outcome, and an evolutionary future drastically falls short of that potential, say if it's only 1% as good.

Fin: Like the likelihood that you escape from this evolutionary future to a great future, or the fraction of value that this [future would be]?

Guive: The fraction of value is what I meant. It seems like there’s just a really wide range of things that could happen in the future.

Fin: Do you have a citation for that?

Guive: Well, one citation you could use: there have been various attempts, very speculative obviously, to estimate how many people there could be in the future, and you get ten to the power of some big number. And if you think about it, that implies that every number less than that is also a possible outcome.

Fin: Yeah. So one feature of a really fat-tailed, heavy-tailed distribution, in particular a power-law distribution, is that if you just draw a number from the distribution, you're close to guaranteed, or maybe literally guaranteed, to be disappointed, in the sense that your draw falls below the mean.

Guive: Well, okay, my understanding was that a power law distribution does not have a mean.

Fin: Or the mean is infinite, in which case, with such a power-law distribution, you're literally guaranteed to draw below the mean. And in general, with fat-tailed distributions, you might think it's just very likely that you draw below the mean, and far below it.
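A quick way to see the finite-mean version (a toy calculation with numpy, not anything from the paper): for a Pareto distribution with minimum 1 and tail exponent alpha greater than 1, the mean is alpha/(alpha - 1), and the chance that a single draw lands below that mean is 1 - ((alpha - 1)/alpha)^alpha, which gets close to 1 as the tail gets heavier.

```python
import numpy as np

# Probability that one draw from a Pareto(x_min=1, shape=alpha) falls below
# the distribution's mean, analytically and by simulation.
rng = np.random.default_rng(0)

for alpha in (1.1, 1.5, 2.0, 3.0):
    mean = alpha / (alpha - 1)
    p_below = 1 - ((alpha - 1) / alpha) ** alpha
    samples = rng.pareto(alpha, 1_000_000) + 1  # numpy's pareto is Lomax; +1 shifts x_min to 1
    print(f"alpha={alpha}: mean={mean:.2f}, "
          f"P(draw < mean)={p_below:.3f}, simulated={np.mean(samples < mean):.3f}")
```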

Guive: Yeah, basically, that’s maybe a more sophisticated way of putting my point, which is something like if there’s all these possibilities for sure. And an evolutionary future is from the perspective of what is best. An evolutionary future is not, like, selected according to being the best. That’s kind of part of the definition. And the correlation between what evolves and what is good is not going to be perfect. Probably. I mean, you’d have to supply some further argument that it is and this sort of implies that you’re not going to get the very best thing from an evolutionary future. And if the difference between the very best couple of things and everything else is vast, and seems intuitive, then you could say an evolutionary future is likely to drastically fall short.

Fin: Got it.

Guive: And according to that definition, that interpretation of 'Astronomical Waste', you could say it would be an existential catastrophe.

Fin: Okay, but it seems like so here’s one possibility. We fall into some kind of evolutionary future over the next few centuries, say, and then someone figures out how to get out of it, and then over the long run, things are great again.

Guive: So, 'figuring out how to get out of it': there are two reasons to think that some evolutionary equilibrium might not be reversible in that sense. One is, as you mentioned, people's preferences might change so that they like the new way things are, and then they won't want to reverse it. Another argument, which is related to the argument that economic growth might eventually slow down, is that a lot of the big changes in the history of the world have happened because of changes in the mode of production. That's not exactly what I mean, but think of hunting and gathering to agriculture, or agriculture to industry.

Guive: And you might think there’s only a certain number of these changes that will ever happen because eventually you’re doing things in the most efficient way, and at that point, there won’t be any of the kind of concomitant social and moral changes that are associated with these big economic changes. And so you might get stuck at the last step.

Could we escape an evolutionary future if we got locked into one?

Fin: Yes. I’m just pointing out that it is an important question how likely it is that these futures really are just like, totally locked in, or whether there is some possibility of escaping to close to ideal futures.

Guive: Yeah, I would agree that’s an important question.

Fin: And it’s like additional to the question of how likely it is that we fall into an evolutionary future in the first place.

Guive: Yeah. So I think, if you're going to do the multiplication thing, which is kind of goofy, then you should probably stipulate in the definition that it's irreversible.

Fin: I see. Okay. Yeah. But then you just gave some reasons to think that it might in fact be quite robust.

Guive: Yeah. I mean, you can make those arguments.

Fin: And one is that it’s quite rare that someone just has a great idea for how to improve the entire world and then succeeds.

Guive: That’s a good point. Typically that doesn’t work out.

Fin: Another is that often you’re relying on these macro trends, and maybe we just don’t have so many macro trends left also just for.

Guive: Big changes to happen in the world. The way that’s happened historically has involved these I mean, not big changes across the board, but the very biggest changes have involved these transitions between different kinds of production, basically. And if there are no more of those changes, that might kind of reduce the scope for political or moral change in the future.

Fin: Yeah, this makes sense. And also, like we were talking about at the start, there's some sense in which we're in this unusual period where lots of things are open and we're not really subject to world-shaping pressures. There's a lot of wiggle room for a particular country to go its own way.

Guive: Exactly.

Fin: In general, historically, maybe that hasn't been the case. And also, it's possible to tell these relatively concrete stories about how things get locked in. Like, say you have a totalitarian leader that just wants to enforce its regime for a long time: maybe there are tools which enable it to do that indefinitely, which didn't exist previously. Maybe that kind of argument carries some weight. What kind of tools would those be? Surveillance?

Fin: Something about AI?

Guive: That tends to be the answer to a lot of questions about the future. One very obvious one is life extension […]

Fin: The fact that values can be preserved, complex values can be preserved for a long time just because of digital [storage].

Guive: You mean because of digital error correction? You could have some object that represents values, and then that thing can resist change. But there's a separate question of whether the existence of that object matters. I mean, there are still a lot of Bibles around, but it's not clear that the values of the Bible are being implemented. So preserving the artifact doesn't seem to guarantee much by itself, it seems like.

Fin: Necessary, rather than sufficient.

Guive: Yeah, agreed.

Fin: Another thing that’s going on, right, is when I think about totalitarian regimes, often they can enforce themselves for long periods of time, where even if everyone, literally everyone just knows that this thing is bad, it may be in their interest not to defect, because there are, like, strong punishments for defecting. And if it becomes easier to surveil subjects, then you can just enforce these penalties even more strongly.

Guive: Yeah. I mean, in that sense, you could see a totalitarian regime that relies on the kind of coordination problem that prevents rebellion to stay in power, and you could say that is itself kind of an evolutionary future.

Fin: Yeah, right. Like, there are these kinds of arguments that transfer between the categories.

Anything else you think is worth saying about this question of how permanently this kind of evolutionary future could be locked in?

Guive: […] We may want to talk about how good or bad it would actually be.

Fin: Well, let’s do that. How good or bad would it actually be, basically?

Guive: I don’t know. I think people have taken really strong positions on this. Like Scott Alexander and Robin Hanson have really staked out opposite views on this in a somewhat extreme way. So Robin thinks it’s going to be really good, Scott thinks it’s going to be really bad. I basically think we need to look into this more. I think one input might be how good or bad you think the history of life on Earth has been. Okay, whatever you care about. But to keep things simple, let’s pretend to be classical utilitarians and say how much happiness has been experienced by creatures, minus how much suffering.

Fin: Okay.

Guive: And the bigger that number is, then probably the more optimistic you should be about an evolutionary future.

Fin: And it’s a thought that to date, human history, or just the history of life on Earth has not been mostly determined by particular actors with ideas about how things should go, but rather by these evolutionary forces.

Guive: Right. So it’s like when they had, like when the first bony fishes were evolving, they didn’t have a government that was making that happen. And I do think that question remains pretty open. A lot of people have this strong view that like, oh yeah, well, animals are definitely more suffering than happiness. I’m not at all convinced of that personally. On the other hand, you might say, okay, well, yeah, it’s better than nothing, but the best possible future is so much better than that. From that perspective, I can kind of see it may not matter too much if you really kind of want to look at things in this existential risk way.

Fin: Sure, yeah. But it seems like there is this intuitive question separate from whether it’ll be an ideal future, which is just will it kind of suck or will it just seem fine?

Guive: Yeah. How good or bad will it be, regardless of whether this so-called value erosion rises to the level of an existential risk?

Fin: Yeah. Or at least one consideration is like, how good has evolution been so far?

Guive: And then another one, which I think is not original to me, but there’s this question of like, what is your account of well-being? Because if you think that well-being is satisfaction of preferences, it seems like evolution creates things that have preferences that are often satisfied in the environment they’re adapted to.

Fin: Is that right? I mean, it also creates a bunch of preferences that are not satisfied, that get frustrated all the time.

Guive: Right? Yeah. But if you have an objective list theory where it’s like people have to sit there and listen to Beethoven or something, that seems like quite a random thing to fixate on […]

Whereas with preferences: to serve their adapted function, I think preferences do have to be satisfied sometimes. I could be wrong about that, but…

Fin: Yeah, it seems right.

Guive: If you think about animals, I think their preferences are typically understood to include things that they do in their natural environment, like roaming around, eating and stuff like that.

Fin: Right. Yeah.

Guive: So that would be another input.

Fin: Yeah. Do you have a sense of which direction that pushes?

Guive: I think that, if you believe in a kind of preference account of well-being, that would push in favor of optimism about an evolutionary future.

Fin: Interesting.

Guive: And then a third thing is what you think about Hanson's view on consciousness. Robin's view, I mean; I should be consistent about what I'm calling him. His view, as I understand it, is that tons of systems are conscious, and then there's no risk of this thing that Bostrom is concerned about, that there will be these entities in the future that are not conscious. And so that makes you more open to a wide variety of entities that might evolve.

Fin: And that’s a reason for optimism. As long as you think that consciousness is necessary for a good future or episode facto good. Yeah. Does Robin Hanson give any other positive reasons for expecting an evolutionary future to be relatively good?

Guive: Well, it’s more like a non evolutionary future will be bad.

Fin: I see what he tends to focus on. One thing that comes to mind is markets, broadly speaking, as maybe an example of some kind of evolutionary mechanism, in the sense that they're determined by competitive forces, where maybe we've just ended up with a bunch of products and services that people 100 years ago wouldn't have preferred, largely because they probably wouldn't have imagined them, but from our perspective we've got more stuff and we like this stuff. What were we talking about? Oh yeah: are there reasons to think it could be bad? You mentioned Scott Alexander thinks this.

Guive: I mean, yeah, I think it’s sort of like Hanson has this book, the Age of Em, talks about all the different modifications that will happen to M’s, and they’ll be copied to be more efficient in their work, and they might only get run when they’re working. They probably end up with not very much leisure time, which I think empathetically we could say that would be so bad if we didn’t have any leisure time. Now, you can blunt that by saying, like, well, subjectively, they just sort of reverted to the state prior to the League, where they just rested. So it doesn’t feel like they’re exhausted all the time, but they still kind of lose intuitively.

Fin: Like, you read that book and it just sounds kind of bleak. Right. That was, for me, how I felt […]

It’s probably worth saying, theoretically speaking, it’s not the case that the malthusian equilibrium or something just needs to be bad for some reason. Like, you could be at subsistence and really happy.

Guive: Yeah. No, that’s a great point. So also the way that malthusian is at least sometimes defined in the economic history literature, and I think maybe ecology as well is just the point at which the population stops growing.

[…] That's a great point, and it's arguably a hole in this argument.

Fin: Where do you come down on that? Do you have intuitions about whether such a future, if we reached it, would be good or bad?

Guive: I guess my feeling is it’s probably better than nothing, but we should maybe try to improve on that provided it doesn’t involve taking extreme risks.

Fin: Okay. Do you want to say more about what those risks would be, or what the measures to avoid an evolutionary future would be?

Guive: So, if we think about the three things we've gone through, world government, multilateral coordination, and defensive advantage, we might try to intervene at any of those steps to make them more likely. We probably can't affect the defensive advantage thing, because that's kind of a function of what gets invented in the future; we have little way to control that. But we could try to create a world government. That's obviously a pretty risky thing to do, though, because you could create a bad regime that controls the whole world with no check on it.

Fin: Yeah, like it would make it easier to reach this other failure mode that you described right at the beginning, which is that we decide to do something bad, or end up with such a regime.

Guive: Take the Soviet Union. The reason the Soviet Union, which was a bad regime, collapsed is partly that the leaders were able to look at the rest of the world and see that things were going better in other places.

Fin: Right.

Guive: If that’s not possible anymore, I think this point is due to Brian Caplan, his chapter in that book Global Catastrophic Risks. If that’s not possible anymore, then there’s much less of a check on totalitarian regimes.

Fin: Any other ways we could avoid an evolutionary future?

Guive: Yeah, so the other one I hadn't talked about intervening on is multilateral coordination, and I'm more optimistic about promoting that because I think it's more robust. Right, more robust, exactly: it reduces other risks too, like the risk of war.

And I think there's stuff people could actually work on, if they want to do that: basically promoting ways for states to cooperate with each other. That could be on the technical side.

If leaders had lie detector machines, then when they were trying to sign, say, an arms limitation treaty, they would say, 'we are not going to violate this treaty', and then this thing would be like…

Fin: Like a big red buzzer?

Guive: Then that would make it easier to negotiate these kinds of deals.

Fin: Build the buzzer!

Guive: Build the buzzer. Yeah, that's one idea. Another is promoting this idea of escrow accounts, like in that paper about environmental treaties. I think there are a lot of things in this neighborhood that could be explored.

How likely is all this?

Fin: And then okay, another question separate from how good or bad you think an evolutionary future would be, is how likely you think it [an evolutionary future] is.

Guive: Yeah, so I don’t have a super strong take on that. I guess what I will say is my current view is this is like a reasonably likely thing, giving a number. I can do that. So we have the three things that if none of them happens, an evolutionary future will happen, or that’s the model anyway. So we can multiply through the probability that each of those things doesn’t happen contingent on the prior one not happening. And that has the result of giving us a lower bound. And so if I do that, I get something like one tenth or in that ballpark.

Fin: Okay.

That’s lower than I had expected. I can imagine just thinking that this is in some sense the default.

Guive: Yeah, maybe. But I think if you think it's the default, that might route through thinking it will happen anyway, even if one of the conjuncts, or disjuncts, or whatever they are, obtains.

Fin: Oh, sorry, this is a figure for a lower bound. Yeah, okay.

Second-order coordination problems

Guive: I think if you think it’s going to happen by default, then it’s like, yeah, well, the world government will just mess up, which is like a plausible view. I don’t really know how to evaluate that, but I’m sympathetic to it.

Fin: Yeah. And I guess the last point, which I didn’t mention earlier, is that you can maybe get some dynamic where you temporarily solve a global problem with some intervention like a world government, but then you have a new problem which is within that government. Maybe there are certain kinds of worrying evolutionary type competitive forces. Or within your council of people coordinating with one another.

Guive: That is a good point.

Fin: Elinor Ostrom talks about these second-order or nth-order coordination problems, where it's like: you've agreed to surveil one another, but how do you enforce that, and then how do you enforce the enforcement, and so on. So maybe there's this kind of thing.

Guive: This is probably a version of the same point, but she says something like: saying that a government removes the commitment problem is kind of question-begging, because, well, how does the government commit?

Closing questions

Fin: Yeah, totally. Corruption is a thing, for instance.

Okay, let’s do some final questions. Here’s one we ask everyone, and you touched on it a bit, but is there any research or just other work that you would love to see someone do? Maybe even someone listening to this?

Guive: Yeah, I mean, there’s a lot of things that could be interesting. So one question that I think about a lot is like, for it to be true that future technology mitigates structural obstacles to coordination for rational agents, it has to be true that it has these specific properties of being formally verifiable and so on. And it would be cool if somebody who has a background in formally verifiable programming languages or information security or something like that were to just look into this question some way.

So another is, like, I think we mentioned this a little bit, but there’s this question: is there a kind of increasing convergence on norms or, like, political policy norms at the global level? And I think this is very underexplored. And the arguments, pro and con, remain kind of hand wavy.

Fin: Yeah. This was what we were talking about when we were asking: why is nuclear energy banned in a lot of places? Why did prohibition happen [in many places]?

Guive: Yeah, that kind of stuff.

And then a final thing would just be, like the stuff about the offense-defense balance in space.

Fin: Yeah, I would love to see that.

[…] And can you also recommend things for people to read? Books, papers, anything else?

Guive: Yeah, a few things. One is that paper 'Rationalist Explanations for War' by James Fearon. Another, which came up earlier, is the paper about how it's not necessarily the case that everybody will be Amish in the future, which is in the journal Demography, and it's called 'Intergenerational Transmission Is Not Sufficient for Positive Long-Term Population Growth'.

Two more things on top of that. The next one would be this talk by Ben Garfinkel, which there’s a transcript of on his website called The Case for Privacy Optimism. I think he gave this talk at DeepMind, which is an AI lab.

Fin: What’s the argument?

Guive: Yeah, briefly, the argument is: you might think that future technology will just increase the level of surveillance, and privacy will be totally lost. But it might be possible for privacy to be preserved, because some future technologies, in particular privacy-preserving machine learning, might enable only the relevant information to come out.

So the example he gives is a bomb sniffing dog, which can smell a bag and determine whether there’s a bomb in it, as opposed to, like, opening the bag and you see everything.
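A toy rendering of that contrast (this is just an illustration of the idea, not anything from the talk itself):

```python
# The "open the bag" approach reveals everything; the "sniffer dog" approach
# reveals only the single relevant bit.

def full_inspection(bag: list[str]) -> list[str]:
    return bag                       # inspector sees every item

def sniffer_dog_inspection(bag: list[str]) -> bool:
    return "explosives" in bag       # inspector learns only yes or no

bag = ["laptop", "diary", "medication"]
print(full_inspection(bag))          # privacy lost: everything is revealed
print(sniffer_dog_inspection(bag))   # False, and nothing else is disclosed
```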

Fin: Great.

Guive: And then the final one would be Meditations on Moloch by Scott Alexander, which is, like, a very nice piece about these issues.

[…]

Fin: Say as many [more] as you like.

Guive: Oh, okay. Yeah. And then two other things. So one is a paper by Nick Bostrom called What is a Singleton? And the final is this book, The Age of Em by Robin Hanson, where he explores one specific evolutionary future scenario.

Fin: Also, there’s a good Scott Alexander review of the book, so we’ll link to both of them. Great.

And then finally, how can people get in touch with you?

Guive: Yeah, so my email address is just my last name and then my first name at gmail dot com, and you can maybe include that in the notes or something.

Fin: Okay, Guive Assadi, thank you so much.

Guive: Thanks so much, Fin. It’s been a pleasure.
