14 Comments

This is wonderful and helped me solidify my thinking.

It seems strange to me that being a year or more behind in tech with tiny doubling times wouldn't give the US (or another actor) a massive advantage and allow it to suppress other groups, the same way the CCP did inside China.

If one group gets absolute power, they could then dedicate even 50% of their resources to human welfare.

These articles are especially interesting to me as they are almost fully focused on the impact on humans, with AI progress only being mentioned where it impacts humans.

I think Neuralink and similar (including VR) will have massive influences on how the story pans out.


> It seems strange to me that being a year or more behind in tech with tiny doubling times wouldn't give the US (or another actor) a massive advantage and allow it to suppress other groups, the same way the CCP did inside China.

I agree that short doubling times make it plausible for one actor to outpace all others. Some factors that make this dynamic less extreme here, though, are:

(1) The gap between the US & China in this scenario is not very large (the US has some lead in AI, especially diffusion, but China has an initial lead in non-general-purpose robotics and sheer manufacturing capacity, power, minerals, etc.)

(2) Both the US and China have significant leverage over the other (e.g. classic MAD, but also other avenues of destructive retaliation even as missile shields are built), lots of covert ways to sabotage or slow down the other, and both care existentially about not being squashed during the build out. A combination of overt threats, covert sabotage, and negotiation could lead to a situation where neither gambles on decisively outrunning the other (and potentially provoking retaliation), and instead tensions are managed and both continue as relevant entities. (Even if one is permanently a few times larger due to starting the buildout a bit sooner—but also note that if the robotics doubling time is roughly 6 months as in the above scenario, a 1-year head start from a comparable base means you're only 4x smaller)
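To make the head-start arithmetic concrete, here is a minimal sketch (the 6-month doubling time and 1-year lead are the scenario's illustrative numbers; the helper function is mine):

```python
# Head-start arithmetic: under exponential growth, a fixed head start
# converts into a fixed size ratio of 2 ** (head_start / doubling_time).

def size_ratio(head_start_months: float, doubling_time_months: float) -> float:
    """Factor by which the leader's buildout exceeds the laggard's."""
    return 2 ** (head_start_months / doubling_time_months)

print(size_ratio(12, 6))  # 1-year lead, 6-month doubling -> 4.0
```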

Consider the US and USSR during the Cold War. The US did not launch a pre-emptive nuclear strike on the Soviet Union to prevent it from developing nukes. Once the Soviet Union had nukes, but only a small number, the US defense establishment mistakenly feared the Soviets were far ahead. The peak (post-1949) moment when the US could have gone for a strike and unilateral domination was in 1961, when in short succession the Soviets mounted a major provocation by making noises about taking West Berlin and then putting up the Berlin Wall, and a new US intelligence estimate corrected the "missile gap" fears and showed the USSR had exactly 4 operational ICBMs. The Kennedy administration commissioned a study, including on "flexible" nuclear options, which concluded that a counterforce first strike was feasible for the US. In The Wizards of Armageddon (great book), Fred Kaplan summarises the reaction as:

> “Now, in the early autumn of 1961, when the United States had preponderant nuclear superiority over the Soviet Union, when a virtually disarming counterforce strike appeared technically feasible and when it looked like the United States might have to bring atomic weapons into play, Paul Nitze balked. What if things didn’t go according to plan? What if the surviving Soviet weapons happened to be aimed at New York, Washington, Chicago—in which case, even under the best of circumstances, far more than a few million would die? There were just too many things that could go wrong. And even if they went right, two or three million were a couple of million too many.

> [...]

> If ever in the history of the nuclear arms race, before or since, one side had unquestionable superiority over the other, one side truly had the ability to devastate the other side’s strategic forces, one side could execute the RAND counterforce/no-cities option with fairly high confidence, the autumn of 1961 was that time. Yet approaching the height of the gravest crisis that had faced the West since the onset of the Cold War, everyone said, “No.”


This is by far the best and most detailed shot I've seen at trying to predict the next 10 years. Thank you so much for writing it. Extremely impressive. And a depressing scenario of course, but I think you are broadly right.


This is imaginative even if I don’t agree with all the premises (the future is speculative - but we have to speculate so thank you!).

But I work in the marketing world, and from my vantage point what I see with AGI hype - where AI takes on strategic-type decisions as depicted here - is a form of content marketing by big tech. Keep promising that an AGI-type takeover is around the corner, and who wants to miss that investment train?

Of course AI is not nothing, but it will come unstuck for the same reasons the dot-com bubble burst. The physical world of stuff and flesh and bone is not as ready for it. Pets.com didn’t fail because people weren’t online - it failed because the shipping logistics weren’t there at the scale needed. The costs destroyed them.

The promise of AGI has caused / is causing a bubble that will burst - especially because ‘it’s different this time’ (it always is).

The real innovation will come after that. Meanwhile humans have rather a lot of civics-type stuff to sort out - I wrote a similar future sequence mixing in AI hype and the politics of our time: https://open.substack.com/pub/beyondsurvival/p/what-happened-next?r=40ir&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false


I think this would make a great science fiction novel! Whether prophetic or not...epic in scope and so timeous. Very entertaining and scarily plausible.


Thanks for this - such an interesting exposition of how tech and politics might shape our future. What impact might climate have, do you think? I'm wondering about things like access to resources such as water and food, and how these might play a role. Anyway, lots to think about...


I'm curious about the social implications of a large generational intelligence gap (3+ SD), which would arise from IQ augmentation being more effective in embryos than in adults.

By the time this scenario is feasible, perhaps human intelligence has effectively decoupled from individual economic advantage. In such a scenario, perhaps the new generation of genius youth is actually disadvantaged by their gift. Their intelligence is commodified and unremarkable, merely serving to alienate them from connecting with / relating to the entrenched "normie" asset holders.

In a world where life-extension is diffuse, this generational intelligence gap could lead to an enduring bifurcation of society and culture. You certainly wouldn't be reading the same articles or playing the same video games (and where else are humans getting their meaning from at that stage?).


All three parts were fantastic. Thank you so much for writing this! For your 2040s+ scenario, you envisioned a world dominated by elites and powerful nation-states—totally plausible. Did you consider any alternate futures? For example, one that goes in the opposite direction from your imagined centralized power, towards societal fragmentation? (Robot-powered, semi-self-sufficient homesteads defended by drones???) I think you had a bigger range of possibilities to choose from in part 3. I’d love to hear what other possibilities you considered.


Crosspost from LW in reply to L Rudolf L:

https://www.lesswrong.com/posts/CCnycGceT4HyDKDzK/a-history-of-the-future-2025-2040#JH8pbDS7cGTpRqaWo

Some thoughts:

> Maybe! My vague Claude-given sense is that the Moon is surprisingly poor in important elements though. (L Rudolf L)

What elements is the Moon poor in that are important for a robot economy? (Me)

> This is a good point! However, more intelligence in the world also means we should expect competition to be tighter, reducing the amount of slack by which you can deviate from the optimal. In general, I can see plausible abstract arguments for the long-run equilibrium being either Hansonian zero-slack Malthusian competition or absolute unalterable lock-in. (L Rudolf L)

I think the key crux is that the slack necessary to preserve a lot of values (assuming they are compatible with expansion at all) is so negligibly small compared to the resources of the AI economy that even very Malthusian competition doesn't erode values to what's purely optimal for expansion, because it's very easy to preserve your original values ~forever.

Some reasons for this are:

Very long-lived colonists fundamentally remove a lot of the ways human values have changed in the long run. While humans can change values across their lifetimes, it's generally rare once you are past 25, and it's very hard to persuade people, meaning most civilizational drift has been inter-generational. With massively long-lived humans, AIs embodied as robots, or uploaded humans with designer bodies, you have removed most of the sources of value change.

I believe that replicating your values (or really anything) will be so reliable that you could in theory, and probably in practice, make yourself immune to random drift in values for the entire age of the universe, thanks to error-correction tricks.

It's described more below:

https://www.lesswrong.com/posts/QpaJkzMvzTSX6LKxp/keeping-self-replicating-nanobots-in-check#4hZPd3YonLDezf2bE

> To continue the human example, we were created by evolution on genes, but within a lifetime, evolution has no effect on the policy and so even if evolution 'wants' to modify a human brain to do something other than what that brain does, it cannot operate within-lifetime (except at even lower levels of analysis, like in cancers or cell lineages etc); or, if the human brain is a digital emulation of a brain snapshot, it is no longer affected by evolution at all; and even if it does start to mold human brains, it is such a slow high-variance optimizer that it might take hundreds of thousands or millions of years... and there probably won't even be biological humans by that point, never mind the rapid progress over the next 1-3 generations in 'seizing the means of reproduction' if you will. (As pointed out in the context of Von Neumann probes or gray goo, if you add in error-correction, it is entirely possible to make replication so reliable that the universe will burn out before any meaningful level of evolution can happen, per the Price equation. The light speed delay to colonization also implies that 'cancers' will struggle to spread much if they take more than a handful of generations.)
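As a back-of-envelope sketch of that error-correction claim (a minimal model; the raw error rate, redundancy, and generation count below are illustrative assumptions of mine, not figures from the linked post):

```python
from math import comb

def drift_probability(raw_error: float, copies: int, generations: int) -> float:
    """Union-bound P(any uncorrected value corruption over all generations).

    With copies = 2t+1 redundant copies and majority voting, a replication
    step only corrupts the payload if t+1 copies fail simultaneously.
    """
    t = (copies - 1) // 2
    per_step = comb(copies, t + 1) * raw_error ** (t + 1)
    return min(1.0, per_step * generations)

# Illustrative numbers: 1e-6 raw error rate per copy, 11-fold redundancy,
# a billion replication generations.
print(drift_probability(1e-6, 11, 10**9))  # ~4.6e-25: negligible drift
```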

While persuasion will get better, and eventually become incomprehensibly superhuman, it will almost certainly not be targeted towards values that are purely expansionist, except in a few cases. (Me)

> I expect the US government to be competent enough to avoid being supplanted by the companies. I think politicians, for all their flaws, are pretty good at recognising a serious threat to their power. There's also only one government but several competing labs.
>
> (Note that the scenario doesn't mention companies in the mid and late 2030s) (L Rudolf L)

Maybe companies have already been essentially brought under government control in canon, in which case the foregoing doesn't matter (I believe you hint at that solution). But the crux is that I expect a lot of competence/state capacity to be lost in the next 10-15 years by default (though Trump is a shock here that accelerates the decline in competence), and I expect the government to react only once a company can credibly automate everyone's jobs. By that point it's too easy to create an automated military that is unchallengeable by local governments, so the federal government would have to respond militarily. Ultimately, I think what does America in, in this timeline (assuming companies haven't already been brought under government control), is its vetocratic aspects/vetocracy.

In essence, I think they will react too slowly and get OODA-looped by the companies.

Also, the persuasion capabilities are not to be underestimated here. Since you have mentioned that AIs are better than all humans at persuasion by the 2030s, I'd expect even further improvements, in tandem with planning improvements, such that it becomes very easy to convince the population that corporate governments are more legitimate than the US government. (Me)

> In this timeline, a far more important thing is the sense among American political elite that they are freedom-loving people and that they should act in accordance with that, and a similar sense among Chinese political elite that they are a civilised people and that Chinese civilisational continuity is important. A few EAs in government, while good, will find it difficult to match the impact of the cultural norms that a country's leaders inherit and that proscribe their actions.
>
> For example: I've been reading Christopher Brown's Moral Capital recently, which looks at how opposition to slavery rose to political prominence in 1700s Britain. It claims that early strong anti-slavery attitudes were more driven by a sense that slavery was insulting to Britons' sense of themselves as a uniquely liberal people, than by arguments about slave welfare. At least in that example, the major constraint on the treatment of a powerless group of people seems to have been in large part the political elite managing its own self-image. (L Rudolf L)

I was more imagining a few EAs in companies like Anthropic or DeepMind, which do have the power to supplant the nation-state, and so are as powerful as or more powerful than current nations in setting cultural norms. But if companies are controlled by the government so thoroughly that they don't rebel, then I agree with you.

I agree unconditionally on what happened regarding China. (Me)


Given the 2040+ position, I'll try to speculate a little more on what the world will look like after 2040, though I have a few comments first.

1. While I do think Mars will be exploited eventually, I expect the Moon to come first for serious robotics effort, and more effort to be directed towards the Moon than Mars, mostly because of its closeness and its more useful minerals for jump-starting a robot economy, combined with plentiful power.

2. I expect the equation mentioned below to be severely underdetermined, such that there are infinitely many solutions. A big one is that I think the relevant requirement is to replicate fast, not to be the fastest of all (because replicating a little better usually yields only a little advantage, not an utterly dominant one). Combined with a lot of values being compatible with replicating fast, this means value alignment/intent alignment matters more than you think:

> But this alone does not let you control the future. A thousand people go to a thousand AIs and say: do like so. The AIs obey, and it is done, but then the world responds: doing this leads to this much power, and doing that leads to that much power. In the vast sea of interactions, there are some patterns that strengthen themselves over time, and others that wind themselves down. Repeat enough times, each time giving to each actor what they sowed last time, and what emerges is not the sum of human wills—even if it is bent by it—but the solution to the equation: what propagates fastest? (L Rudolf L)
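As a toy illustration of that "what propagates fastest" equation (the growth rates below are numbers I made up): under pure replicator dynamics any persistent growth edge eventually dominates, so the crux in my point 2 above is how slowly a slightly-slower, value-preserving replicator actually loses ground.

```python
# Toy replicator dynamics: each round, shares are reweighted by growth
# rate; any persistent growth edge eventually dominates, but a tiny edge
# dominates only very slowly.

growth = {"pure-expansionist": 1.10, "value-preserving": 1.0999}
share = {k: 0.5 for k in growth}

for _ in range(10_000):
    total = sum(share[k] * growth[k] for k in growth)
    share = {k: share[k] * growth[k] / total for k in growth}

print(share)  # ~0.71 vs ~0.29: a ~0.01% growth cost erodes only slowly
```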

As far as the far future goes, I expect the universe to be broadly divided between China, Anthropic, OpenAI, Google DeepMind, and perhaps UK AISI or a UK company, with the other powers being either irrelevant or exterminated.

Given that no nationalization of the companies has happened and they still have large freedom of action, it's likely that Google DeepMind, OpenAI, and Anthropic have essentially supplanted the US as the legitimate government, given their monopolies on violence via robots.

Anthropic will likely be the big pressure group that counters the intelligence curse, since its leadership is mostly composed of EAs who care about others in ways that do not rely on those others being instrumentally valuable. In general, the fact that EA types got hired into some of the most critical positions in AI was probably fairly important in this timeline for preventing the worst outcomes of the intelligence curse.

Eventually, assuming AI and robotics are solved by the 2040s-2050s, someone is going to develop very powerful biotech, neuralinks that can control your mind in almost arbitrary ways, and uploading within the 21st century. Once these technologies are developed, it becomes near trivial to preserve your culture for ~eternity, and the successor problem that causes cultures to diverge essentially disappears. This obviates evolution's role except in very limited settings, which means the alignment problem in full generality is likely very soluble by default in the timeline presented.

My broad prediction at this point is that the governance of the Universe/Earth looks to be split between ASI/human-emulation dictatorships and states like the Sentinel Islands, which no one is willing to attack, each for their own reasons.

In many ways, the story of the 21st century is the story of the end of evolution/dynamism as a major force in life, and to the extent that evolution matters, it's in much more limited settings that are always constrained by the design of the system.

(Crossposted from LW here):

https://www.lesswrong.com/posts/CCnycGceT4HyDKDzK/a-history-of-the-future-2025-2040#sy3vNNgZhymH5PewC


Spent almost 90 minutes reading all three parts with little distraction. 90 minutes well spent. Thank you for devoting so much time to fleshing out this near-future scenario.

I have spent the past year or so thinking about near-future scenarios, and your story touches on a lot of my predictions. I think it is key that, if we want humanity to flourish, governments need to begin planning soon for what will happen to the majority of us when AI reaches the level where basically all cognitive work can be done without human intervention. Like you, I believe this point will come around 2030, which is very soon.

For this planning phase to work, it has to be done at a global scale, perhaps by a supranational organization within the UN that involves not only states but also large corporations and capital owners. The problem is, this level of coordination is typically reactive, i.e. it might only occur AFTER most humans are disempowered and quality of life begins to erode worldwide (and human unrest skyrockets as a result).


This is monumental and amazing. I'm going through it all via an LLM (after scatter-reading 75% of it, quite a lot) to systematize it and create a few orthogonal-based timelines. I'll be using it as a baseline for my scenarios, like Genome 1. As far as I know, no one has dared to attempt such a specific, detailed, and impressive endeavor until now.

Did you use any LLM-based recursive scenario-drawing aid? (An open-sourced one you can share; if you built one yourself, I'm obviously not asking for it.) It would be fantastic to incorporate such a tool into my workflow. I'm so intrigued that otherwise I feel the need to build one myself; I would, but I really don't have the time. Maybe.

Nonetheless, thank you, this is truly outstanding!


Glad you enjoyed it! There are definitely many branching points where things could go very differently, that are worth exploring.

I didn't use scenario-drawing aids. Tools are overrated.

(Though at some point, I was transcribing an earlier version to a massive diagram to get feedback on, but it got too messy so I went back to just text.)


The nearish-term automation of white-collar jobs (and eventually blue-collar jobs) will force a reckoning in the traditional consumer economy that we operate in today. As you posit, I expect governments will be slow to roll out UBI, creating mid-term pressure on the existing economy.

I wrote a Substack post discussing how a free-market economy will push all companies to adopt AI workers, because the decision not to adopt will be competitively punitive. Governments may attempt to regulate the rate at which companies can lay off human workers in response (though I'm not sure how that defends against AI startups that remain lean, other than awarding monopolies/restricting new entrants).

My view is that physical (and luxury) goods will still reign supreme. As an example, consider the current price of a Netflix subscription at $20 - it’s worth 20 apples, assuming $1 each. The price of Netflix will be pushed towards zero (and I agree the proliferation of AI slop will force it there). In this future state, a Netflix subscription may only be worth 1 apple. How governments deal with providing access to physical (and relatively scarce) goods brings us back to how income is generated (or distributed).
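A minimal sketch of that relative-price arithmetic, using the comment's own illustrative prices:

```python
# Relative prices: if AI pushes digital goods toward zero marginal cost,
# their price measured in physical goods collapses.

apple = 1.00                              # dollars per apple (assumed)
netflix_now, netflix_later = 20.00, 1.00  # subscription price, now vs later

print(netflix_now / apple)    # 20.0 apples per subscription today
print(netflix_later / apple)  # 1.0 apple in the posited future
```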
