35 Comments

If I accept that the price of labor goes towards zero, and I accept that as a result the current wealth ("capital") distributions are locked in prior to the advent of labor-replacing AI, I still do not necessarily conclude that this is uncomfortable or undesirable for essentially anyone. If, for example, labor costs go to zero, then the price of any goods and services with labor as an input also plummets. Thus the purchasing power of every individual, albeit unemployed, is massively higher. Who cares if my neighbor can afford 10,000x their maximal needs while I can only afford 100x mine? In the end, I think that so many things will go haywire when labor is nearly limitless and free that it will be impossible to predict all the second- and third-order effects. In general though, I'd say that increasing purchasing power is an end that is innately good, and it's hard for me to imagine how the inverse might be true.
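
To make the arithmetic concrete, a minimal toy model (all numbers are illustrative assumptions of mine, not from the comment): if labor is an input share of every price and its cost collapses, everyone's purchasing power rises by the same multiple, so the neighbor's relative advantage is unchanged while absolute abundance grows.

```python
# Toy model of purchasing power as labor costs go to zero.
# labor_share and the wealth figures are arbitrary illustrative assumptions.

def price(labor_share: float, labor_cost_multiplier: float) -> float:
    """Unit price: the labor component scales with labor cost; the
    non-labor (capital/resource) component stays fixed."""
    return labor_share * labor_cost_multiplier + (1 - labor_share)

for mult in (1.0, 0.1, 0.01, 0.0):
    p = price(labor_share=0.7, labor_cost_multiplier=mult)
    mine, neighbors = 1.0 / p, 100.0 / p  # purchasing power = wealth / price
    print(f"labor cost x{mult:>4}: mine {mine:6.2f}, "
          f"neighbor's {neighbors:8.2f}, ratio {neighbors / mine:.0f}")
```

Both purchasing powers rise as labor costs fall, but the ratio stays pinned at 100: absolute abundance grows while relative position is locked in, which is exactly the trade-off the comment is weighing.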

Yes, it is true that in terms of personal consumption, everyone will have enough. But see the sections on state incentives (a topic where Luke Drago now has a more detailed post: https://lukedrago.substack.com/p/the-intelligence-curse) for how this situation might not be stable. Also, I suspect a bunch of bad cultural effects from disruption no longer being possible.

You're relying on an unstable liberal premise: that your neighbor will respect your rights when your rights conflict with his.

With 100x the power, your neighbor will find your conflicting interests are a trivial matter to resolve in his favor.

With a much larger advantage in power, it's easier for your neighbor to limit the number of potentially conflicting agents entirely, like you would do today with bug spray.

Well, unfortunately land/housing/real estate doesn't depend on labor and is inherently scarce + politically regulated to be so. This is a major problem, though other things we consider costly (healthcare, education) do have labor inputs and so are more fixable.

My two favorite causes are AI + Society and YIMBY, lol.

Great post, but as so often when this subject comes up, I can't help but think that the missing element is land. Even if we can produce an unimaginable amount of wealth, land will remain a zero-sum game and (the most?) important source of inequality in the medium term, unless 1) people are happy to live in simulated environments, 2) we start building really nice space habitats, or 3) a political solution is found in which individuals cannot hoard land.

I note that there are other zero-sum resources that don't act like land although they are required for productivity - e.g., air. Then note that (3) requests a solution to a problem in a hypothetical scenario in which solutions to problems are effectively free (and resulting concentrations of power are in a highly dynamic state). I think we can eventually expect either a solution to that zero-sum land problem through land-use efficiency, or some other radically different arrangement.

Let's certainly hope so. I remain skeptical.

In theory, land is only an issue if we are limited to Earth, provided that 1) the universe is not finite, and 2) human population growth stays at a reasonable rate compared to the progress of space expansion.
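
Condition 2 can be checked with a back-of-envelope sketch (the growth rate and capacity constant below are arbitrary assumptions of mine): reachable volume under a light-speed limit grows only cubically in time, while exponential population growth compounds without bound, so even modest growth rates eventually exceed any cubic expansion.

```python
# Exponential population growth vs. cubic (light-cone-limited) expansion.
# All constants are arbitrary illustrative assumptions.
import math

growth_rate = 0.01       # 1% population growth per year (assumed)
pop0 = 8e9               # rough current population
capacity_coeff = 1e12    # people supportable per year^3 of reachable volume (assumed)

for years in (100, 1_000, 5_000, 10_000):
    population = pop0 * math.exp(growth_rate * years)
    capacity = capacity_coeff * years ** 3
    verdict = "fits" if population <= capacity else "exceeds capacity"
    print(f"t = {years:>6} yr: pop {population:.2e} vs capacity {capacity:.2e} -> {verdict}")
```

With these toy numbers, 1% annual growth overtakes the cubic frontier within a few thousand years, so "a reasonable rate" is doing real work in the argument.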

I've thought about this a lot too, so it's great to see I'm not alone, because I certainly felt so.

Understanding that the value of humans goes to zero over the next 20 years is a "red pill" moment.

We all know what happens to valueless humans today, they go live in a tent under a bridge.

I've been thinking about writing a related blog post titled "What will happen when AI drives unemployment to 100%".

We can track the progress of AI by its impact:

- Increasing unemployment.

- Increasing homelessness (which just hit a record high in the USA).

Too many people are worrying about AI killing everyone, and almost no one is worrying about the impact of the value of labour going to zero, which is the base case.

What happens to the service industry when no one has any money to buy services? Will every company providing services to humans go out of business? Is there a more complex interplay?

What happens to home values when the people who live in the homes have no value? What will happen to the stock market, does that go to zero or to infinity? What about digital capital like Bitcoin?

Ultimately, how do we defend against this horrific future that only a few people seem to be able to see?

My take would be: way too few people are still worrying about AI killing everyone (there are only a few hundred people in the world working on AI safety!), a lot of people are worrying about the surface-level consequences of unemployment, but too few people are thinking about the broader shift in institutional incentives that will come from humans no longer contributing to the economy or to power. The interesting / non-trivial concerns are the ones not solved by "use AI wealth to give everyone generous UBI" (either concerns that this won't actually happen, or that even if it does, something will go majorly wrong).

100%. Maybe I should have been clearer about this. I think we need to worry about both an AI apocalypse and an AI takeover. Both are bad outcomes for humans.

I just wanted to let you know that I've written an article in response to your thought-provoking piece, "Capital, AGI, and Human Ambition." I found your analysis insightful, particularly your concerns about the potential for AGI to exacerbate existing inequalities and diminish human agency.

In my response, I offer an alternative vision for the future, one where AGI empowers humanity and fosters a more equitable and fulfilling society. I explore how we can reimagine our economic and social structures to ensure that the benefits of AGI are shared widely.

I essentially agree with your assessment of the challenges we face, but I believe that by embracing new economic models and prioritizing human well-being, we can create a future where AGI serves as a catalyst for positive change.

I'd be interested in hearing your thoughts on my response. You can find it here: https://hailyb.substack.com/p/humanity-in-the-age-of-agi-reimagining

L Rudolf L's article paints a grim picture of a post-AGI world where capital reigns supreme and humans are rendered irrelevant. Let's dissect his arguments and expose the flaws in his logic:

1. The Myth of Post-Scarcity: L Rudolf L assumes that AGI will usher in a world of abundance where scarcity becomes obsolete. However, even with advanced AI, resources – energy, raw materials, and even the computational power needed for AI itself – are not infinite. True post-scarcity is a fantasy, not a realistic prediction.

2. Capital is NOT King: The article fixates on the idea that capital will become all-powerful in an AGI world. But capital without consumers is meaningless. Factories without a purpose are just empty buildings. Humans, with our needs, desires, and aspirations, remain the driving force behind any economy, even an AI-powered one.

3. Human Value Beyond Labor: L Rudolf L seems to believe that human value is solely derived from our labor. This is a deeply flawed and outdated notion. Humans are not defined by their jobs. We are artists, scientists, philosophers, parents, friends, and so much more. Our value transcends our economic output.

4. The Illusion of Static Society: The article warns of a static society where social mobility is extinct and power is concentrated in the hands of a select few. However, this ignores the dynamic nature of human societies and our capacity for adaptation and innovation. The rise of AI may lead to new forms of social organization, economic models, and even definitions of "work" and "value" that challenge the traditional capitalist paradigm.

5. The Importance of Human-centric Institutions: L Rudolf L argues that institutions will have no incentive to care about humans in a post-labor AI world. This is absurd. Institutions are created by and for humans. Companies that don't serve human needs will fail, regardless of their AI capabilities. States that neglect their citizens will face unrest and instability.

In Conclusion:

L Rudolf L's analysis is a cautionary tale, but one that ultimately misses the mark. It's a story where abstract economic theories have somehow gained sentience and banished humans to the sidelines of their own existence.

Let's not forget that humans are the protagonists of our own story. We are the creators, the innovators, the dreamers. And in a world increasingly shaped by AI, our humanity, our values, and our capacity for compassion and connection will be more important than ever.

So many things I've been thinking about lately, so well put together. Many thanks for that, plus the new reading list compiled from your linked articles.

Really great post. I was considering writing something similar on how capital will come to dominate everything in a post-labour economy, but this introduced a bunch of angles I hadn't considered.

I'm very interested in the concept of bullshit jobs https://claycubeomnibus.substack.com/p/bullshit-jobs-review

Zvi has a post about how he thinks bs jobs could expand faster than automation displaces labour, creating a kind of bizarrely irrational economy where everyone's employed in sinecures. I'd be interested to know if you think that is a possibility.

"Bullshit jobs" seem like an especially crappy implementation of UBI, where (a) humans are still useless (so all the incentives to do away with them still apply), (b) society is still static (since your position is determined by the bureaucracy), and (c) you don't even get the freedom that comes from not being forced to work! I think this old Scott Alexander piece makes good points: https://slatestarcodex.com/2018/05/16/basic-income-not-basic-jobs-against-hijacking-utopia/

I think bullshit jobs expanding is a very real possibility, but it doesn't seem like a good one. In particular, if you want states to be incentivised to keep caring about humans, humans need to remain relevant for actual real-world economic/military power (or else you need to lock in a political system in which humans are unchangeably important, and that system has to remain stable and competitive with all rival states).

I know this is going to sound catastrophizing and emotional, but I don't care. I have been thinking about this issue for a while, and this is pretty much the exact same conclusion I have come up with. This has made me MADLY pissed off with Eliezer Yudkowsky types, because they are basically creating a distraction from what actually matters in this issue:

99% of people worldwide are about to become functionally useless due to AI, to the point where I believe the new tech elite will actively just abandon them/us to die as a mercy. What's the point of even being alive if you can't do anything and are just permanently stuck in life, forever? Why even have UBI if you are practically aimless? AI is going to be the death of liberal social norms and values; humanity has never been as divided as it's about to become. Genes and accumulated capital will be the only things humans will be able to latch onto. It's going to be a Nietzschean dystopia, and some tech-right circles are aware of this and actively preparing for it. Everything else is just cope and wishful thinking, quite frankly.

This is why we must preserve democracy. It's the only bulwark against this future, honestly - popular social and economic reorganization (the end of capitalism or a substantial reorganization of it creating sufficient meaning and purpose) will be necessary.

As for actual solutions for democracy to implement, I've only come up with either communism or a new "frontier" society, where purpose comes from guiding AI to discovery and expression in art or science, or space colonization.

---

Edit: Also, a reminder that there are other sources of meaning than economic value: personal growth, family, and religion to name a few.

My main default prediction here is that we will avoid both the absolute best-case and the absolute worst-case scenarios, because I predict intent alignment works well enough to avoid extinction-of-humanity-type scenarios, but I also don't believe we will see radical movements toward equality (indeed, the politics of our era is moving towards greater acceptance of inequality), so capitalism more or less survives the transition to AGI.

I do think dynamism will still exist, but it will be limited to the upper classes/very rich of society; most people will not be a part of it, and I'm including uploaded humans in this calculation.

To address this:

"Rationalist thought on post-AGI futures is too solutionist. The strawman version: solve morality, solve AI, figure out the optimal structure to tile the universe with, do that, done. (The actual leading figures have far less strawman views; see e.g. Paul Christiano at 23:30 here—but the on-the-ground culture does lean in the strawman direction.)"

To be somewhat more fair, the worry here is that in a regime where you don't need society anymore, because AIs can do all the work for your society, value conflicts become a bigger deal than today: there is less reason to tolerate other people's values if you can just found your own society based on your own values. And if you believe in the vulnerable world hypothesis, as a lot of rationalists do, then conflict has existential stakes (and even if not, it can be quite bad), so one group controlling the future is better than inevitable conflict.

At a foundational level, whether or not our current tolerance for differing values is stable ultimately comes down to whether we can compensate for the effect of AGI allowing people to make their own societies.

> To be somewhat more fair, the worry here is that in a regime where you don't need society anymore, because AIs can do all the work for your society, value conflicts become a bigger deal than today: there is less reason to tolerate other people's values if you can just found your own society based on your own values. And if you believe in the vulnerable world hypothesis, as a lot of rationalists do, then conflict has existential stakes (and even if not, it can be quite bad), so one group controlling the future is better than inevitable conflict.

So to summarise: if we have a multipolar world, and the vulnerable world hypothesis is true, then conflict can be existentially bad, and this is a reason to avoid a multipolar world. Didn't consider this, interesting point!

> At a foundational level, whether or not our current tolerance for differing values is stable ultimately comes down to whether we can compensate for the effect of AGI allowing people to make their own societies.

Considerations:

- offense/defense balance (if offense wins very hard, it's harder to let everyone do their own thing)

- tunability-of-AGI-power / implementability of the harm principle (if you can give everyone AGI that can follow very well the rule "don't let these people harm other people", then you can give that AGI safely to everyone and they can build planets however they like but not death ray anyone else's planets)

The latter might be more of a "singleton that allows playgrounds" rather than an actual multipolar world, though.

Some of my general worries with singleton worlds are:

- humanity has all its eggs in one basket—you better hope the governance structure is never corrupted, or never becomes sclerotic; real-life institutions so far have not given me many signs of hope on this count

- cultural evolution is a pretty big part of how human societies seem to have improved and relies on a population of cultures / polities

- vague instincts towards diversity being good and less fragile than homogeneity or centralisation

> So to summarise: if we have a multipolar world, and the vulnerable world hypothesis is true, then conflict can be existentially bad, and this is a reason to avoid a multipolar world. Didn't consider this, interesting point!

This applies, albeit more weakly, even in a non-vulnerable world, because the incentives for peaceful cooperation of values are much weaker in an AGI world.

> Considerations:

> - offense/defense balance (if offense wins very hard, it's harder to let everyone do their own thing)

> - tunability-of-AGI-power / implementability of the harm principle (if you can give everyone AGI that can follow very well the rule "don't let these people harm other people", then you can give that AGI safely to everyone and they can build planets however they like but not death ray anyone else's planets)

I do think this requires severely restraining open-source, but conditional on that happening, I think the offense-defense balance/tunability will sort of work out.

> Some of my general worries with singleton worlds are:

> - humanity has all its eggs in one basket—you better hope the governance structure is never corrupted, or never becomes sclerotic; real-life institutions so far have not given me many signs of hope on this count

> - cultural evolution is a pretty big part of how human societies seem to have improved and relies on a population of cultures / polities

> - vague instincts towards diversity being good and less fragile than homogeneity or centralisation

Yeah, I'm not a fan of singleton worlds, and tend towards multipolar worlds. It's just that it might involve a loss of a lot of life in the power-struggles around AGI.

On governing the commons, I'd say Elinor Ostrom's observations are derivable from the folk theorems of game theory, which basically say that almost any outcome can be sustained as a Nash equilibrium (with a few conditions that depend on the theorem) if the game is repeated and players have to keep dealing with each other.

The problem is that AGI weakens the incentives for players to deal with each other, so Elinor Ostrom's solutions are much less effective.

More here:

https://en.wikipedia.org/wiki/Folk_theorem_(game_theory)
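
To make the folk-theorem point concrete, here is a minimal sketch (textbook prisoner's-dilemma payoffs; the numbers are illustrative, not from Ostrom or the linked article): grim-trigger cooperation is an equilibrium only when players expect to keep interacting, i.e. when the discount factor delta is high enough. AGI letting actors exit society is, in effect, a drop in delta.

```python
# Repeated prisoner's dilemma: grim trigger sustains cooperation iff the
# one-shot gain from defecting (T - R) is outweighed by the discounted
# loss of all future cooperation, delta / (1 - delta) * (R - P).
# Payoff values are the standard textbook ones (illustrative).

T, R, P = 5.0, 3.0, 1.0  # temptation, mutual cooperation, mutual defection

def cooperation_sustainable(delta: float) -> bool:
    return T - R <= delta / (1 - delta) * (R - P)

for delta in (0.9, 0.6, 0.5, 0.3, 0.1):
    status = "holds" if cooperation_sustainable(delta) else "unravels"
    print(f"delta = {delta:.1f}: cooperation {status}")
```

With these payoffs the threshold is delta = (T - R) / (T - P) = 0.5: above it cooperation holds, below it the repeated-game glue dissolves, which is the mechanism by which AGI-driven independence undermines Ostrom-style arrangements.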

Regarding the outcomes, I feel like there should be an emphasis on how this will redefine what it means to be human:

1. Outside expansion: AI and its labor-replacement utility will drive humanity's attention towards outer space. The same way that in the past you would have different tribes developing different cultures, turning into city-states, countries and empires, I would argue that there will exist several models of human organisation on different planets or other habitable space-based accommodations.

2. Inside expansion: The reduction of the need to survive is likely to reshape our ego's design. The survival mind might have less of an impact on the ego (generation after generation). A higher level of consciousness for a larger share of the human population is to be expected, with potentially a deeper connection to life itself. Also, the prolongation of the human lifespan and technological augmentation might reshape what it means to be human on the outside as well as on the inside.

The big miss in this piece is the utter overestimation of AI and its capabilities.

That's all well and... bad.

What it (and other writings of this genre) doesn't take into account, which is the key piece, is the "de novo" emergence of a human meta-culture and meta-technic based on unlimited syntropic abundance, that does not depend explicitly or implicitly on the infrastructural or cultural legacy of civilization, but that perforates and beats the entropic economic system on its own terms. This is now actually possible, and a key piece is exactly the fundamental disruption that is so often spoken of and analyzed.

https://substack.com/@freelyfreely/note/c-85163883

Crucially, it's not a matter of demonstrating or proving the value of living intelligence to technocratic minds and begging them to "implement" it. It is fundamentally emergent and does not ask permission.

What happens if we wake up tomorrow and we are in this new reality?

What happens to someone with nothing? Say, a person who has lived pay cheque to pay cheque? Someone who owns no capital, no property?

Does this someone die from starvation, or cold?

What if this someone has ambition, an idea that could generate wealth? Is that even possible, or statistically impossible?

Without major government intervention, Elon's nightmare of population decline will occur, if not riots in the streets, with at least half of the world's population up in arms.

One thing: chess grandmasters still play chess and the tournaments still take place. Deep Blue has not replaced them so far, even though it plays better. Roombas have not flooded Western homes since 2004 - standard hoovers are still in much more massive use. More people prefer playing 16x16-textured Minecraft or mediocre Fortnite to RDR2. Wars are still, for some reason, conducted with guns and not fully with drones.

Thought-provoking, but I think this might still give AI too much credence. While I agree that AIs will be better than humans at most material work, I remain unconvinced (so far) that they will satisfactorily replace 'the human element'. While AI may offer a useful facsimile of some human endeavors, it still won't be the pinnacle of truly human endeavors (poetry, interpersonal care and love, truly great literature, music, etc.). Perhaps I am of a blissfully ignorant disposition on this, but I still think people will *need* people, and that will be important in a way that we might not yet be able to forecast.

Either way, this is a great piece - thought-provoking and chilling.

I also read your earlier post that you linked:

https://nosetgauge.substack.com/p/review-foragers-farmers-and-fossil-fuels

What made you change your mind so drastically?

Change my mind on what exactly?
