The discovery and use of machinery may be … injurious to the labouring class, as some of their number will be thrown out of employment, and population will become redundant.
How does this square with your June article about the ice shipping example where an industry disappeared?
Obviously in this post you focus on the big picture, and thus can assume people switch industries and learn new skills. But with exponential AGI, do you think this is realistic? If the economy is being rewritten every 5 or so years instead of, say, 25, that seems too rapid for people to adjust.
The ice trade example fits in. It's an example of automation displacing a large number of workers while the labor share nonetheless remained constant. Refrigeration increased everyone's incomes, and people consumed more from other industries, which increased employment in those industries. Refrigeration itself also created new tasks for some people to do.
There were definitely costs to some people when the industry collapsed. I'm not sure we should expect switching frictions like these to increase on net, though. The pace of change in the economy has already increased a lot, but it's not obvious to me that there's way more, or more costly, frictional unemployment today than in the past, though I'm not entirely sure why. Technology has made it easier to move, to find jobs/find workers, to reskill.
I think people in the past would not be able to imagine how people today could adjust to a world that moves so fast. But we do adjust. So although I do also find it hard to imagine future people adjusting faster than they do today, I expect that they will find a way.
A modelling tweak that could cause the comparative advantage logic to break down:
1. Let’s imagine AI sufficiently advanced that there are large frictions/transaction costs to human-AI economic interactions in the production process, because of differing financial/monetary/payment systems, differing legal systems, or lossy human-AI communication (compared to AI-AI communication).
2. You could model this as a stylized two-sector economy where there is a tariff to be paid for any trade that occurs across the two sectors, or alternatively, you could think of needing to pay a fixed cost to build and update the infrastructure (payments, communication, legal system, etc.) for humans and AIs to interact in the production process.
3. In such a model, it seems like an empirical question whether the costs of building and maintaining this infrastructure/paying the tariffs outweigh the gains from trade.
4. A slightly janky analogy from the decline in the population of horses after the automobile:
a. Comparative advantage says that even if motorized vehicles have an absolute advantage in all types of transportation, the opportunity cost of the fuel, materials, human labor, etc required to manufacture and operate cars means that for some trips, the opportunity cost of cars is high enough that we should still use horses for getting around.
b. However, it's impractical to have horses on the same roads as automobiles (frictions) and too costly to build a separate set of roads (fixed costs) for the horses only.
i. So when using cars is impractical, rather than paying the tariffs/fixed costs to build horse infrastructure, we just walk. Horses become useless and mostly disappear from the transportation market.
ii. Empirically, a second reason this happened is that horses take human labor to raise and care for, but I suspect their population would still have declined even if they had required zero labor, because the costs of common infrastructure were too high.
5. While I suspect transaction/fixed costs this high are a theoretical curiosity, I think a world with billions of superintelligent agents and abundance of the kind some people are imagining is alien enough that it’s hard to say for sure.
a. I think the people who predict fast-takeoff/intelligence explosion/increasing returns to scale are in fact imagining differences in ability/complexity as large as the human v horse differences and probably commensurate differences in financial/legal systems.
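The two-sector model sketched above can be put in toy code. This is only an illustration under invented parameters: opportunity costs and tariffs are made-up numbers, and `gains_from_trade` is a hypothetical helper, not anything from the post.

```python
# Stylized two-sector point from above (all parameters invented):
# humans and AIs each produce two goods; any trade across the human-AI
# boundary pays a per-unit "tariff" t. Trade only happens while the
# Ricardian gains from comparative advantage exceed t.

def gains_from_trade(opp_cost_human, opp_cost_ai, tariff):
    """Per-unit surplus from trading across sectors.
    Opportunity costs are in units of good B given up per unit of good A."""
    raw_gain = abs(opp_cost_human - opp_cost_ai)  # classic Ricardian surplus
    return raw_gain - tariff                      # net of interaction frictions

# Humans give up 2 units of B per unit of A; AIs give up 0.5. That gap is
# large, so trade survives a moderate friction...
print(gains_from_trade(2.0, 0.5, 1.0) > 0)   # True
# ...but a high enough friction kills the trade entirely: the
# "horses off the roads" outcome.
print(gains_from_trade(2.0, 0.5, 2.0) > 0)   # False
```

The empirical question in point 3 is then just whether the realized tariff sits above or below the comparative-advantage gap.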
"For at least 200 years, 50-60% of GDP has gone to pay workers with the rest paid to machines or materials."
This has to break down at some point. If you have an AI + robotics ecosystem that can reproduce itself without any human input, it will in theory grow infinitely large relative to a fixed population of human labourers.
In a world with, say, 100000x more autonomous robots than human workers, each as capable as or more capable than a human worker, there's no way those 1-in-100000 workers that are still human will be appropriating half of GDP.
Not requiring any labour input at all is (at least) one way AI is qualitatively different to previous automation.
But in many ways this did already happen. The amount of computing power or machinery or number of industrial robots has grown far faster than population and is 100000x larger relative to population than in 1800. The amount of human supervision required for these machines has also been decreasing.
You're right that machines aren't completely independent and self replicating. But that is the point of the second section of the piece. Humans are independent and self-replicating. The population of genius humans has grown by 10000x or whatever over the past 200 years but less skilled laborers are still very valuable.
Final point that I didn't include in the piece: you may be right that a small population of human workers won't be able to command 50% of GDP, but that's different from labor being valueless. If labor's share of GDP shrinks because output and productivity go through the roof, then you can still have rapidly growing wages and lots of ways to make income from your labor and move up in the world etc even as the GDP share shrinks.
"The population of genius humans has grown by 10000x or whatever over the past 200 years but less skilled laborers are still very valuable."
Surely that's because the population of geniuses is growing at roughly the same rate as non-geniuses. If an alien spaceship dropped off 1 trillion von Neumann clones overnight, I would expect my wage to go way down.
If the clones kept cloning themselves, approaching an infinite number, labour's share of income would → 0.
It's possible total production could outpace that fall, so I'd still get an absolute increase in wages even with a falling share, but labour would have such low marginal productivity at that point that it's more likely transaction costs and other frictions make it not worth employing me at all.
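The limit argument above can be sketched with a toy production function. The functional form and every number here are illustrative assumptions: if clone labor is a perfect substitute for human labor and some resource is fixed, the human wage is the marginal product of the combined labor pool, which falls toward zero as clones multiply.

```python
# Toy model (illustrative only): output Y = sqrt(H + A) * sqrt(R), where
# H = human labor, A = clone/AI labor (perfect substitute), R = a fixed
# resource. The human wage is dY/dH = 0.5 * sqrt(R / (H + A)), which
# shrinks toward zero as A grows without bound.

def human_wage(H, A, R):
    return 0.5 * (R / (H + A)) ** 0.5

H, R = 1e9, 1e9                    # humans and fixed resources
for A in [0, 1e12, 1e15]:          # ever more von Neumann clones
    print(human_wage(H, A, R))     # 0.5, then ~0.016, then ~0.0005
```

Under complementary production (where clones do different tasks than humans) the wage can instead rise, which is essentially the disagreement running through this thread.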
That's assuming those clones are not controlled and that they want to work on crappy jobs.
I think it's more likely that they would want to work the harder intellectual jobs like science, engineering, etc. Those jobs and projects can absorb an almost unlimited amount of intelligence.
You'd have to explain to me why they would prefer to fold clothes vs solve nuclear fusion and build Dyson swarms
> You'd have to explain to me why they would prefer to fold clothes vs solve nuclear fusion and build Dyson swarms
Because if there is a sufficiently vast population of geniuses that like working on fusion, then the boring clothes-folding jobs will actually pay more.
Come on dude... Jobs like folding clothes will never pay more than curing diseases, unlocking new abundant energy sources etc
Using advanced AGI will cost compute, and there will always be an opportunity cost. Do rich people want to cure X diseases or do they want Steve to have some clean dishes?
Weak AI will be used for simple tasks, but the actual real intelligent AGI compute resources will be used on the hardest problems for the highest reward.
At some point, you have more genius experts than you have test tubes. At this point, either the disease is already cured, or most of the experts are thumb twiddling and waiting for the results.
At some point, each individual AI genius is contributing basically 0 to the disease curing effort.
Of course, dishwashers already exist. And it probably makes more sense to invent a simple dish-doing robot than to have the general AGI doing dishes, unless compute is Very cheap.
If compute is Very cheap, a genius AI that can wash dishes 1% more efficiently is a good deal. The tiny power costs to run the AI are negligible compared to the value of soap and water that it saves.
Though at this point, why do you still have dishes to wash. Can't you use nanobots or something instead?
Oddly enough, horses do not seem to have the same slice of the pie they once did.
Not sure how we can reconcile this with the labour fact - I would've expected horses to stay about as important to the economy due to comparative advantage same as people!
I don’t think this is central to your argument (rising wages, even with falling labor share should suffice), but some pedantry on the empirics of constant labor shares:
1. A flat aggregate labor share is consistent with a falling labor share for the vast majority of the distribution
a. From the abstract of the paper you linked: “we estimate that the US bottom 99 percent labor share has fallen 15 points since 1980.”
b. And from the conclusion: “We find that the labor shares for the bottom 90%, 99% and 99.9% have decidedly fallen since the early 1980s, so much so that the labor share is lower today than at any other period since 1930”
c. So while you’re right that we can’t say that automation leads to lower leverage for ALL labor, we can’t rule out that it leads to lower leverage for 99% of labor-income earners.
i. I don’t actually think we should make inferences from the time-series of falling 99% labor share to a causal claim that it was caused by automation. Just arguing that your prior of no effect should be weaker.
2. Even after adjusting for in kind benefits/social insurance, I think most of the literature does find declines in the AGGREGATE labor share (even if they’re quite small)
This JEP piece shows that the US labor share declines under a bunch of assumptions, though the magnitude can vary quite substantially. The BEA data they use includes in-kind transfers and employer pension contributions. https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.38.2.107
I don't think the comparison to an influx of new people works for AI. Millions of new people being added to the population doesn't put people out of the job because, although those people compete for labor, they also buy goods and services produced by labor themselves, so they increase the demand for labor just as much as the supply. But an AGI would not need to buy any goods and services beyond what it needs to keep running, which is presumably much less than what a human needs if we're at the point where they can replace human labor. So the AGI will not produce additional demand for labor, or at least not enough of it for everyone to keep their jobs.
What you have laid out makes sense, up until the last part:
> Exactly because of their superior ability at all tasks, high skilled workers give up more when they choose to do something that they could trade for. This applies just as strongly to human level AGIs
You can't copy high-skilled workers. They need a large amount of resources and decades of time to be 'created'. If we have self-improving AI geniuses, you can make as many copies as you want. The only constraint is compute. Currently, a massive wave of CapEx is underway to alleviate this constraint.
Zero seems like a bit of a straw man. Comparative advantage means that humans wages are unlikely to drop to zero. The problem is that actual wages might fall to a level which is too low to feed and shelter a family.
I think this is just wrong, AI will render humans unemployable once you account for the cost of a human existing.
It's the same reason that NYC hedge funds don't employ chimpanzees to help with their trading - in theory a chimpanzee could do something that slightly helped, and the different ways it could help trade off against human labour at different rates. But having a chimp is not free - it needs to be housed, fed, cared for etc. So taking that into account, chimpanzees cannot profitably work at hedge funds.
I think that you're missing something critical here, which is that an AI which meets or exceeds the cognitive capacity of the smartest human is infinitely replicable: AI labor is accumulable in a way that human labor is not. Anyway, here is o1 pro critiquing your essay: https://chatgpt.com/share/677fe233-ac54-8002-a9da-323ec0977f62
Definitely agree that AI labor is accumulable in a way that human labor is not: it accumulates like capital. But it will not be infinitely replicable. AI labor will face constraints. There are a finite number of GPUs, datacenters, and megawatts. Increasing marginal cost and decreasing marginal benefit will eventually meet at a maximum profitable quantity. Then, you have to make decisions about where to allocate that quantity of AI labor and comparative advantage will incentivize specialization and trade with human labor.
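The marginal-cost/marginal-benefit claim above can be shown with a toy sketch. All quantities are invented for illustration, and `optimal_quantity` is a hypothetical helper:

```python
# Toy illustration (all numbers invented): rising marginal cost of AI
# labor (scarce GPUs, datacenters, megawatts) meets falling marginal
# benefit at a finite profit-maximizing quantity.

def optimal_quantity(mc, mb):
    """Return the largest quantity of units whose marginal benefit
    still covers marginal cost."""
    q = 0
    for cost, benefit in zip(mc, mb):
        if benefit < cost:
            break
        q += 1
    return q

mc = [1, 2, 4, 8, 16, 32]   # each extra unit of AI labor costs more
mb = [30, 20, 12, 7, 4, 2]  # the most valuable tasks get done first

print(optimal_quantity(mc, mb))  # 3: the 4th unit costs 8 but returns only 7
```

Past that quantity, allocating the finite AI labor across tasks is exactly where the comparative-advantage logic kicks in.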
Fair, but this breaks down if AI can rapidly improve its own efficiency given the hardware it has access to. We know that a human brain might use about 20 watt-hours for a task that takes an hour, but OpenAI's o3 uses ~1.8 MWh for a task that might take you about an hour. So there are about 5 orders of magnitude of improvement that we know to be possible. And we don't know that humans are at peak intellectual energy efficiency, so this can only be seen as an estimate of the possible efficiency improvement, not a hard limit.
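A quick back-of-envelope check of that gap. Both figures are the commenter's estimates, not measurements:

```python
# Energy per one-hour task: ~20 Wh for a human brain (commenter's estimate)
# vs ~1.8 MWh claimed for OpenAI's o3 on a comparable task.
import math

human_wh_per_task = 20      # ~20 W for one hour
o3_wh_per_task = 1.8e6      # 1.8 MWh in watt-hours

ratio = o3_wh_per_task / human_wh_per_task
print(ratio)                # 90000.0
print(math.log10(ratio))    # ~4.95, i.e. roughly 5 orders of magnitude
```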
I've been working on measuring the ability of these models to do AI R&D over the last 6 months: https://metr.org/blog/2024-11-22-evaluating-r-d-capabilities-of-llms/ . The rate at which new models are getting better at these tasks does not support your idea that future AI will not be able to innovate past these limitations. In short, white-collar labor is effed pretty soon. Hopefully you are right and we get a year or two more before things fall apart.
Your blog is rapidly becoming my favorite. You're really crushing it lately! Thanks for all the short, clear writing :) I'm going to be in Nova all of Feb if you're around. I would love to hang out again.
Chris, I would like to learn more about AI model improvement. Is your blog or substack the best resource? May want to feature some data you have accumulated.
My substack is mostly for fun. I guess you could listen to a podcast I've been on recently: https://www.youtube.com/watch?v=8w9bkRZWCYU , but I don't have a good place to share the latest on what we've learned and what we are building. It's complicated, as we have NDAs with multiple parties. My evals work should be on my blog: I started writing a piece called "How to make Evaluations that cut through AI Hype" and the research for that post ballooned into a startup that I've been working on for 18 months now, lmao. This link has most of what we can share publicly from the last 6 months on the AI R&D topic: https://metr.org/blog/2024-11-22-evaluating-r-d-capabilities-of-llms/
I get the sense that these results are currently under-appreciated, given the impact that automated AI R&D would have. Given the jump from o1 → o3 on FrontierMath, I wouldn't be surprised to see a similar (or bigger) increase on your benchmark. With OpenAI 'unhobbling' the models for their upcoming agent product (presumably better computer use + larger context + memory), I'd be surprised if an o-series model in 2025 doesn't trigger a 'High' risk level for model autonomy in their preparedness framework.
Thanks, I will check it out.
Have to agree with Chris here. I don't see a future in which human purchasing power doesn't decline quite rapidly once we start to see self-replicating autonomous economic actors and completely automated value chains.
Have laid out how things might play out here:
https://open.substack.com/pub/assortedthingsinoneplace/p/on-the-systemic-risks-of-auto-catalytic?utm_source=share&utm_medium=android&r=67wt0
This is a very odd response for an economist. The "fixed world" fallacy, the exact same fallacy that came up in the original naive objection to automation.
There is not a fixed number of GPUs and megawatts. There are enough of them to meet demand, and the Wright curve ensures that the supply curve is always moving lower.
The AI catastrophists assume, not always explicitly, that supply will meet demand at a price below that of subsistence for humans - say 1500 Calories per day of food, or lower. That is the crux of their argument.
Sorry about the repeated re-editing - it's 5 AM, I'm about to go to work, and I'm short of sleep.
AI catastrophists claim we are the horses now.
I agree there is not a fixed number, but there is always a finite number. AI intelligence is not "infinitely replicable."
This is not a response, because it doesn't matter if there is 'always a finite number'. In fact, the 'always' there is a hint that something is wrong with your claim: it proves too much. Since the universe is not infinite, there will always be a finite number of all things, everywhere, forever, with regard to everything; given that, there being a finite number can hardly be much of a guide to practical questions like 'what would happen if AI got really good in a way that hasn't happened before'.
"Mechanical labor will face constraints. There are a finite number of iron ore deposits, gasoline, and rubber supplies. Increasing marginal cost and decreasing marginal benefit will eventually meet at a maximum profitable quantity. Then, you have to make decisions about where to allocate that quantity of mechanical labor and comparative advantage will incentivize specialization and trade with horse labor. Comparative advantage still guarantees mutually beneficial trades."
> There would still not be an infinite or costless supply of intelligence as some assume.
Presumably you're pointing at compute limitations here, which will certainly exist. But we don't yet know how computationally cheap AGI will be: it is perfectly plausible that running, e.g., a billion AGI agents 24/7 does not turn out to be beyond the world's compute capacity. This is not infinite, but I don't see how such scales produce a strong incentive for the AGI to behave like the high-skilled laborer and specialize, because it could just be the case that there is enough compute for the AGIs to do every existing job. Your analysis makes much more sense in a world where compute is more limited and AGIs face hard tradeoffs on what to work on.
There's not a fixed number of jobs. The world population is 7 billion more than it was in 1800 but there is still more work than labor. A billion more AIs won't change that.
Even if you have 10 billion AGIs, Maxwell is right. The people that own the AGIs will most likely want to use them for the hardest tasks, like curing diseases, fusion, unsolved math problems, etc.
We don't know how hard these problems will be to solve, but a lot of them have much higher returns compared to making the AGI wash dishes and fold clothes.
This is the comparative advantage argument and I think that makes way more sense.
“The basic story that motivates fear of AI automation predicts that more automation leads to lower value and leverage for labor, but this story cannot explain a flat labor share of income since 1800.”
Sure it can. It's the share of GDP going to thinking that has stayed flat, not labor's. And thinking is about to increase a lot.
I'm not sure what you mean.
What I mean by "the basic story" is:
1. When machines automate a task that labor used to do, labor becomes less valuable.
2. AI will make machines that automate all tasks.
3. Therefore, labor will lose all value.
But the historical evidence of a constant labor share under lots of automation cuts against point 1, and the high value of low-skilled labor despite competition from strictly better high-skilled labor is evidence against 2 ==> 3.
I guess my counterpoint is that machines have never competed against the most valuable thing humans do: think.
And now they might, which might leave what you call labor flat even while humans lose out.
But I'm not very well versed in this stuff. Happy to have a call.
The first 3 countervailing effects increase labour income but not labour's share of income. The printing press may have increased authors' incomes, but a higher share (perhaps even the majority share) of book sales now goes to the owners of the printing press.
Only the creation of new tasks increases both labour's income and share of income. See Acemoglu (2018): https://www.nber.org/system/files/working_papers/w24196/w24196.pdf
Yes that's true, most of this post is based on that paper. The three countervailing effects can increase wages and share, but it's only guaranteed for the fourth effect.
Also, I was a bit sloppy in delineating this, but I think a falling labor share with rising wages also qualifies as "labor will not be valueless", so it still goes against the most common AI fears.
Good piece! I would sum it up this way:
Counter to intuition, AGI will increase the value of labor via the Baumol effect. How do we know this? That’s how every other wave of automation has functioned. Automate away factory jobs, and the service sector employs more (and more highly compensated) workers. Automate away knowledge economy jobs, and other automation-resistant industries will see rising employment and wages.
Making specific predictions is hard, but I expect luxury production will rapidly increase as a result, like how automation of manufacturing paradoxically led to increased demand for hand-crafted, boutique services. Craft knowledge will follow the path of craft production.
> then other automation-resistant industries will see rising employment / wages.
This assumes that any automation resistant industries remain.
Imagine a bunch of cavemen who are given a steady stream of smartphones via a portal or something. Gradually their economy advances. They get good at metal smelting, so they no longer smash open smartphones for the metal. They discover light bulbs, so they no longer use the smartphones to light their homes.
As long as their tech can't make an equally good smartphone, there is Something to do with the portal smartphones. But once they get to an equal tech level, they don't need those portal smartphones at all. A few years later, those same portal smartphones that were treasured from cavemen times to dial up modem times are now outdated tech waste.
If AGI is achieved, then superintelligence is probably right around the corner. We would be in danger of losing control of our fate, let alone the economy. Why would you trade with lesser beings when you could trick them, scam them, kill them and take 100% of what they have? We should not trust market logic for security issues.
Why did John Von Neumann trade with anyone? Why doesn't Michael Jordan mow his own lawn? Why does the United States trade with Ghana?
Comparative advantage pushes more intelligent and powerful beings to trade with others.
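To make this concrete, here is a toy Ricardian calculation (all productivities are invented for illustration): even though the AI has an absolute advantage at both tasks, total output rises when each party specializes according to comparative advantage and trades.

```python
# Toy Ricardian example with made-up productivities (units per hour).
# The AI has an absolute advantage at BOTH tasks, but the human has a
# comparative advantage at laundry (lower opportunity cost in research).
ai = {"research": 100.0, "laundry": 50.0}
human = {"research": 1.0, "laundry": 10.0}

# Opportunity cost of one unit of laundry, measured in foregone research.
oc_ai = ai["research"] / ai["laundry"]           # 2.0 research per laundry
oc_human = human["research"] / human["laundry"]  # 0.1 research per laundry
assert oc_human < oc_ai  # the human is the low-cost laundry producer

hours = 10.0

# Autarky: each agent splits its time evenly between the two tasks.
autarky_research = (ai["research"] + human["research"]) * hours / 2
autarky_laundry = (ai["laundry"] + human["laundry"]) * hours / 2

# Specialization: the human does only laundry; the AI does research,
# spending just enough time on laundry to match the autarky total.
human_laundry = human["laundry"] * hours  # 100 units
ai_hours_on_laundry = max(0.0, (autarky_laundry - human_laundry) / ai["laundry"])
spec_research = ai["research"] * (hours - ai_hours_on_laundry)
spec_laundry = human_laundry + ai["laundry"] * ai_hours_on_laundry

print(f"research: {autarky_research} -> {spec_research}")
print(f"laundry:  {autarky_laundry} -> {spec_laundry}")
```

Same laundry output, strictly more research: the gains from trade exist even when one party is uniformly better at everything, which is the standard Ricardian result being invoked here.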
Sometimes.
One reason people trade services is that their time is limited (e.g., a CEO hiring a private chef).
If the AI can easily make a trillion copies of itself, this doesn't apply.
Another reason is the costs of war. Sure the US could win against Ghana, but the cost of equipment and soldiers would still be substantial. (Also, brutal oppression of foreign nations for profit has gone out of fashion. Conquering weaker neighbors and taking their stuff used to be more popular.)
How does this square with your June article about the ice shipping example where an industry disappeared?
Obviously in this post you focus on the big picture, and thus can assume people switch industries and learn new skills. But with exponential AGI, do you think this is realistic? If the economy is being rewritten every 5 or so years instead of, say, 25, that seems too rapid for people to adjust.
The ice trade example fits in. It's an example of automation displacing a large number of workers, but the labor share nonetheless remained constant. Refrigeration increased everyone's incomes and people consumed more from other industries that increased their employment. Refrigeration itself also created new tasks for some people to do.
There were definitely costs to some people when the industry collapsed. I'm not sure we should expect switching frictions like these to increase on net, though. The pace of change in the economy has already increased a lot, but it's not obvious to me that there's far more, or more costly, frictional unemployment today than in the past, though I'm not entirely sure why. Technology has made it easier to move, to find jobs/find workers, and to reskill.
I think people in the past would not be able to imagine how people today could adjust to a world that moves so fast. But we do adjust. So although I do also find it hard to imagine future people adjusting faster than they do today, I expect that they will find a way.
Optimism is the way!
A modelling tweak that could cause the comparative advantage logic to break down:
1. Let’s imagine AI sufficiently advanced that there are large frictions/transaction costs in human-AI economic interactions in the production process, because of differing financial/monetary/payment systems, differing legal systems, or lossy human-AI communication (compared to AI-AI communication).
2. You could model this as a stylized two-sector economy where a tariff must be paid on any trade that occurs across the two sectors, or alternatively, you could think of needing to pay a fixed cost to build and update the infrastructure (payments, communication, legal system, etc.) for humans and AIs to interact in the production process.
3. In such a model, it seems like an empirical question whether the costs of building and maintaining this infrastructure/paying the tariffs outweigh the gains from trade.
4. A slightly janky analogy from the decline in the population of horses post automobiles:
a. Comparative advantage says that even if motorized vehicles have an absolute advantage in all types of transportation, the opportunity cost of the fuel, materials, human labor, etc. required to manufacture and operate cars is high enough for some trips that we should still use horses for getting around.
b. However, it's impractical to have horses on the same roads as automobiles (frictions) and too costly to build a separate set of roads (fixed costs) for the horses only.
i. So when using cars is impractical, rather than paying the tariffs/fixed costs to build horse infrastructure, we just walk. Horses become useless and mostly disappear from the transportation market.
ii. Empirically, a second reason this happened is that horses take human labor to raise and care for, but I suspect their population would still have declined even if they required zero labor, because the costs of common infrastructure were too high.
5. While I suspect transaction/fixed costs this high are a theoretical curiosity, I think a world with billions of superintelligent agents and abundance of the kind some people are imagining is alien enough that it’s hard to say for sure.
a. I think the people who predict fast-takeoff/intelligence explosion/increasing returns to scale are in fact imagining differences in ability/complexity as large as the human v horse differences and probably commensurate differences in financial/legal systems.
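A minimal sketch of points 2-3, assuming all the frictions collapse into a single per-trade "tariff" parameter (the function and numbers are hypothetical): whether comparative advantage survives is just a comparison between the surplus from trade and the cost of transacting across sectors.

```python
# Stylized check: do the gains from human-AI trade survive a transaction
# cost? All numbers are hypothetical. Producing one unit of a task costs
# the human c_human and would cost the AI sector c_ai in foregone output;
# a per-trade "tariff" (payments, legal, communication overhead) is paid
# on every cross-sector trade.

def trade_occurs(c_human: float, c_ai: float, tariff: float) -> bool:
    """Trade is mutually beneficial iff the surplus (c_ai - c_human)
    exceeds the friction cost of transacting across sectors."""
    surplus = c_ai - c_human  # gains from trade before frictions
    return surplus > tariff

# Small frictions: comparative advantage operates, humans stay employed.
print(trade_occurs(c_human=1.0, c_ai=3.0, tariff=0.5))   # True

# Large frictions (the horse case): the surplus exists but is swamped
# by infrastructure costs, so the AI sector "walks" instead of trading.
print(trade_occurs(c_human=1.0, c_ai=3.0, tariff=5.0))   # False
```

In the second case the textbook comparative-advantage surplus is still positive, but it never gets realized, which is the empirical question in point 3.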
"For at least 200 years, 50-60% of GDP has gone to pay workers with the rest paid to machines or materials."
This has to break down at some point. If you have an AI + robotics eco-system that can reproduce itself without any human input it will in theory grow infinitely large for a fixed population of human labourers.
In a world with say 100000x more autonomous robots than human workers, each as capable as or more capable than a human worker, there's no way those 1 in 100000 workers that are still human will be appropriating half of GDP.
Not requiring any labour input at all is (at least) one way AI is qualitatively different to previous automation.
But in many ways this did already happen. The amount of computing power or machinery or number of industrial robots has grown far faster than population and is 100000x larger relative to population than in 1800. The amount of human supervision required for these machines has also been decreasing.
You're right that machines aren't completely independent and self replicating. But that is the point of the second section of the piece. Humans are independent and self-replicating. The population of genius humans has grown by 10000x or whatever over the past 200 years but less skilled laborers are still very valuable.
Final point that I didn't include in the piece: you may be right that a small population of human workers won't be able to command 50% of GDP, but that's different from labor being valueless. If labor's share of GDP shrinks because output and productivity go through the roof, then you can still have rapidly growing wages and lots of ways to make income from your labor and move up in the world etc even as the GDP share shrinks.
Thank you for reading and commenting!
"The population of genius humans has grown by 10000x or whatever over the past 200 years but less skilled laborers are still very valuable."
Surely that's because the population of geniuses is growing at roughly the same rate as non-geniuses. If an alien spaceship dropped off 1 trillion Von Neumann clones overnight I would expect my wage to go way down.
If the clones kept cloning themselves, approaching an infinite number, labour's share of income would -> 0
It's possible total production could outpace that fall so I'd still get an absolute increase in wages even with a falling share, but labour would have such low marginal productivity at that point that it's more likely transaction costs and other frictions make it not worth employing me at all.
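That limit can be illustrated with a toy Cobb-Douglas model (the functional form and numbers are assumptions for illustration, not a forecast): if clones are perfect substitutes for human labor and capital is held fixed, output grows without bound while both the wage and the human share of income go to zero.

```python
# Toy model: Y = (H + A)^alpha * K^(1 - alpha), where clone/AI labor A
# is a perfect substitute for human labor H. Numbers are hypothetical.
alpha, H, K = 0.6, 1.0, 1.0

def economy(A: float):
    L = H + A                       # total effective labor
    Y = L**alpha * K**(1 - alpha)   # output
    w = alpha * Y / L               # competitive wage = marginal product
    human_share = w * H / Y         # humans' slice of total income
    return Y, w, human_share

for A in [0.0, 10.0, 1e6]:
    Y, w, share = economy(A)
    print(f"A={A:>9.0f}  Y={Y:10.2f}  wage={w:.6f}  human share={share:.6f}")
```

With K fixed, the wage falls along with the share; if capital or productivity also grows fast enough, the wage can rise even as the human share collapses, which is the "absolute increase in wages with a falling share" case.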
That's assuming those clones are not controlled and that they want to work on crappy jobs.
I think it's more likely that they would want to work the harder intellectual jobs like science engineering etc. Those jobs and projects have an almost infinite scaling of intelligence requirement.
You'd have to explain to me why they would prefer to fold clothes vs solve nuclear fusion and build Dyson swarms
> You'd have to explain to me why they would prefer to fold clothes vs solve nuclear fusion and build Dyson swarms
Because if there is a sufficiently vast population of geniuses that like working on fusion, then the boring clothes-folding jobs will actually pay more.
That or they make a clothes-folding robot.
Come on dude... Jobs like folding clothes will never pay more than curing diseases, unlocking new abundant energy sources etc
Using advanced AGI will cost compute, and there will always be an opportunity cost. Do rich people want to cure X diseases or do they want Steve to have some clean dishes?
Weak AI will be used for simple tasks, but the actual real intelligent AGI compute resources will be used on the hardest problems for the highest reward.
At some point, you have more genius experts than you have test tubes. At this point, either the disease is already cured, or most of the experts are thumb twiddling and waiting for the results.
At some point, each individual AI genius is contributing basically 0 to the disease curing effort.
Of course, dishwashers already exist. And it probably makes more sense to invent a simple dish-doing robot than to have the general AGI doing dishes, unless compute is Very cheap.
If compute is Very cheap, a genius AI that can wash dishes 1% more efficiently is a good deal. The tiny power costs to run the AI are negligible compared to the value of soap and water that it saves.
Though at this point, why do you still have dishes to wash. Can't you use nanobots or something instead?
The population of genius humans grew less fast than the amount of food being produced grew. So the less smart humans could still get plenty of food.
Oddly enough, horses do not seem to have the same slice of the pie they once did.
Not sure how we can reconcile this with the labour fact - I would've expected horses to stay about as important to the economy due to comparative advantage, same as people!
I don’t think this is central to your argument (rising wages, even with falling labor share should suffice), but some pedantry on the empirics of constant labor shares:
1. A flat aggregate labor share is consistent with a falling labor share for the vast majority of the distribution
a. From the abstract of the paper you linked: “we estimate that the US bottom 99 percent labor share has fallen 15 points since 1980.”
b. And from the conclusion: “We find that the labor shares for the bottom 90%, 99% and 99.9% have decidedly fallen since the early 1980s, so much so that the labor share is lower today than at any other period since 1930”
c. So while you’re right that we can’t say that automation leads to lower leverage for ALL labor, we can’t rule out that it leads to lower leverage for 99% of labor-income earners.
i. I don’t actually think we should make inferences from the time-series of falling 99% labor share to a causal claim that it was caused by automation. Just arguing that your prior of no effect should be weaker.
2. Even after adjusting for in kind benefits/social insurance, I think most of the literature does find declines in the AGGREGATE labor share (even if they’re quite small)
This JEP piece shows that the US labor share declines under a bunch of assumptions, though the magnitude can vary quite substantially. The BEA data they use includes in-kind transfers and employer pension contributions. https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.38.2.107
I don't think the comparison to an influx of new people works for AI. Millions of new people being added to the population doesn't put people out of the job because, although those people compete for labor, they also buy goods and services produced by labor themselves, so they increase the demand for labor just as much as the supply. But an AGI would not need to buy any goods and services beyond what it needs to keep running, which is presumably much less than what a human needs if we're at the point where they can replace human labor. So the AGI will not produce additional demand for labor, or at least not enough of it for everyone to keep their jobs.
Presumably the AI wants something more than just self-preservation. Ex: spreading throughout the galaxy and farther
What you have laid out makes sense, up until the last part:
> Exactly because of their superior ability at all tasks, high skilled workers give up more when they choose to do something that they could trade for. This applies just as strongly to human level AGIs
You can't copy high skilled workers. They need a large amount of resources and decades of time to be 'created'. If we have self-improving AI geniuses, you can make as many copies as you want. The only constraint is compute. Currently, a massive wave of CapEx is underway to alleviate this constraint.
How do humans stay competitive in such a world?
Zero seems like a bit of a straw man. Comparative advantage means that human wages are unlikely to drop to zero. The problem is that actual wages might fall to a level which is too low to feed and shelter a family.
I think this is just wrong: AI will render humans unemployable once you account for the cost of a human existing.
It's the same reason that NYC hedge funds don't employ chimpanzees to help with their trading - in theory a chimpanzee could do something that slightly helped, and the different ways it could help trade off against human labour at different rates. But having a chimp is not free - it needs to be housed, fed, cared for etc. So taking that into account, chimpanzees cannot profitably work at hedge funds.
What's the difference between "cheap labor from overseas" and "cheap labor from robots."
Not much. And the former hollowed out the US manufacturing base, and small town America with it. It's tough to argue against that reality.