52 Comments
gregvp:

I'm afraid your argument from theory is not borne out by history. Horses and engines were not perfect substitutes for a very long time, but horses were still eventually replaced. The same is likely to be true of humans and AI/robots.

Horses and steam engines coexisted for over a century before horses were replaced by a combination of internal combustion engines and electric motors.

Edit: to make this point explicit, the AIs we have today and can foresee are the primitive forerunners of the AIs we can expect in future--steam engines to tomorrow's electric motors. Your argument has fallen into the "static world" fallacy again.

It is not true that horses could not benefit from technological advances or capital investments.

The curved-moldboard plough replacing the old heavy plough reduced plough teams from six or eight horses to two, or even a single horse: at least a threefold productivity improvement. The horse-drawn combine harvester further improved productivity by at least threefold. Investments in canal networks massively increased the productivity of horses used for bulk transport, and iron roads (railways), leaf-sprung waggons, and macadamised roads increased the productivity of horses in "last mile" transport.

Edit: to make this point explicit, the argument that humans can benefit from technological advances or increased capital stocks but horses could not, is historically false.

More horses were used in World War II than in World War I, despite IC engines and electric motors having been around for sixty years by that time.

Edit: to make this point explicit, things can appear to improve for quite a while (for horses or humans) before the underlying trend is revealed.

Re-read David Edgerton's "The Shock of the Old" to get a more rounded view of how things actually go in technologically-driven economic paradigm shifts. It may or may not be faster this time - if AI is real, and not just a lossy information summarizer, like the JPEG format for pictures.

TGGP:

Horses themselves didn't use technology. Humans did.

Occam’s Machete:

Chess engines do worse with a human in the middle. How long will it take before that's true in other domains?

TGGP:

Chess is an unusually computer-friendly pursuit that can be entirely simulated on a computer.

Occam’s Machete:

Nice work stating the obvious and failing to engage with the point.

TGGP:

Did you read Maxim's post? Chess is exactly the sort of task for which computers can perfectly substitute for humans.

Occam’s Machete:

"failing to engage with the point"

"How long will it take before that's true in other domains?"

Feel free to read my full comment, where I expand on the simple point that asserting AI/robotics progress will stop is a distinct argument he didn't make.

Daniel Kokotajlo:

I feel like you still aren't grappling with the implications of AGI. Human beings have a biologically-imposed minimum wage of (say) 100 watts; what happens when AI systems can be cheaply produced and maintained for 10 watts that are better than the best humans at everything? Even if they are (say) only twice as good as the best economists but 1000 times as good as the best programmers?

"When humans and AIs are imperfect substitutes, this means that an increase in the supply of AI labor unambiguously raises the physical marginal product of human labor, i.e humans produce more stuff when there are more AIs around. This is due to specialization. Because there are differing relative productivities, an increase in the supply of AI labor means that an extra human in some tasks can free up more AIs to specialize in what they’re best at."

No, an extra human will only get in the way, because there isn't a limited number of AIs. For the price of paying the human's minimum wage (e.g. providing their brain with 100 watts) you could produce and maintain a new AI system that would do the job much better, and you'd have lots of money left over.
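
As a sanity check on the mechanism both sides are invoking here, a toy CES calculation (made-up parameters, purely illustrative) shows the two regimes: with imperfect substitution the human marginal product rises as AI labor grows, while near-perfect substitution (roughly the scenario Daniel describes) leaves it essentially flat.

```python
def mpl_human(H, A, a=0.5, b=0.5, rho=0.5):
    """Marginal product of human labor in a CES production function
    Y = (a*H**rho + b*A**rho)**(1/rho), where H is human labor and
    A is AI labor. rho < 1 means imperfect substitutes; rho -> 1
    approaches perfect substitutes."""
    Y = (a * H**rho + b * A**rho) ** (1 / rho)
    return a * H**(rho - 1) * Y**(1 - rho)

# Imperfect substitutes (rho=0.5): a 100x AI supply boom raises
# the human marginal product substantially.
assert mpl_human(1.0, 100.0) > mpl_human(1.0, 1.0)

# Near-perfect substitutes (rho close to 1): the same boom leaves
# the human marginal product almost unchanged.
assert abs(mpl_human(1.0, 100.0, rho=0.999)
           - mpl_human(1.0, 1.0, rho=0.999)) < 0.01
```

So the disagreement is really over which regime AGI lands in, not over the algebra.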

"Technological Growth and Capital Accumulation Will Raise Human Labor Productivity; Horses Can’t Use Technology or Capital"

It won't raise human labor productivity fast enough to keep up, at least not in the long term.

Maybe a thought experiment would be helpful. Suppose that OpenAI succeeds in building superintelligence, as they say they are trying to do, and the resulting intelligence explosion goes on for surprisingly longer than you expect and ends up with crazy sci-fi-sounding technologies like self-replicating nanobot swarms. So, OpenAI now has self-replicating nanobot swarms which can reform into arbitrary shapes, including humanoid shapes. So in particular they can form up into humanoid robots that look & feel exactly like humans, but are smarter and more competent in every way, and also more energy-efficient let's say as well so that they can survive on less than 100W. What then? Seems to me like your first two arguments would just immediately fall apart. Your third, about humans still owning capital and using the proceeds to buy things that require a human touch + regulation to ban AIs from certain professions, still stands.

Maxwell Tabarrok:

For human wages to fall, either their physical productivity has to fall or the price of the goods they produce has to fall or both.

I think we agree that human physical productivity will not fall due to AGI, and will actually increase quite a lot due to capital accumulation, imperfect substitution, and tech progress (but maybe slowly compared to AI growth).

So the crux of this question is in the price of the goods that humans produce. The claim that humanity's wages will fall is equivalent to the claim that everything humans could possibly produce will collapse in price by much more than our productivity rises after AGI.

That includes energy, silicon, and training data. That includes healthcare, restaurant meals, and HVAC. It includes the price of ambition, leadership, and vision in a CEO.

I think there are lots of goods and services with elastic enough demand that even when AGI massively shifts the supply curve right and makes it more elastic, people just go on consuming more and more. Demand is so elastic that the equilibrium price doesn't fall by much, so humans are still worth employing to produce them.
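
The elasticity point can be illustrated with a toy constant-elasticity model (all numbers invented): with linear supply Q = m*P and demand Q = k*P**(-eps), a tenfold rightward supply shift barely moves the price when demand is elastic, but collapses it when demand is inelastic.

```python
def equilibrium(m, k=100.0, eps=9.0):
    """Solve m*P = k*P**(-eps) for the equilibrium price and
    quantity; closed form is P* = (k/m)**(1/(1+eps))."""
    price = (k / m) ** (1 / (1 + eps))
    return price, m * price

# AGI shifts supply right 10x (m: 1 -> 10).
p0, q0 = equilibrium(1.0)
p1, q1 = equilibrium(10.0)
# Elastic demand (eps=9): price falls only ~21% while quantity
# rises ~8x, so producers are still worth paying.
assert p1 / p0 > 0.75 and q1 / q0 > 7

# Inelastic demand (eps=0.5): the same shift cuts price ~78%.
p0i, _ = equilibrium(1.0, eps=0.5)
p1i, _ = equilibrium(10.0, eps=0.5)
assert p1i / p0i < 0.25
```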

Plus, the wealth creation of AGI itself will push out the demand curve for lots of products.

This wasn't the case for horses. Basically everything they could produce fell in price and they didn't get much of a compensating productivity advantage from new technology. But I don't think the same will happen to humans due to AGI.

Do you agree that this question about the price of goods and services that humans can produce is the crux?

Daniel Kokotajlo:

Yes, I agree that the question of the price of goods and services that humans can produce is the crux. I'm saying that the price of said goods and services will drop below subsistence level (assuming no regulatory action to prevent this + assuming none of the owners of capital value the human touch.*) Literally anything I can produce, can be produced better and cheaper by an AI, and moreover, AIs themselves can be cheaply produced, such that ye olde comparative advantage argument doesn't apply.

In the thought experiment I proposed above, the human-shaped nanobot swarms can produce better cheaper healthcare, restaurant meals, HVAC, etc. and also can produce better cheaper ambition, leadership, and vision as CEO. So, no matter how much demand for these things increases, it won't be humans producing them.

*Both of these assumptions are probably false but I'm happy to make them for the sake of argument. I do agree that humans will probably still be employed thanks to regulation and thanks to 'human touch' preferences, assuming humans don't lose control of their AGIs (which is another assumption I believe we are both making for the sake of argument)

Maxwell Tabarrok:

Consume everything you want from advanced AGIs.

Consume until you're not willing to pay what it costs to "spin up another AGI" to consume some more.

Is there anything you still want that a human could provide?

How much are you willing to pay for that next unit of consumption after maxing out on what the AIs can provide?

You might be better off using a human for something we're good at (e.g. nursing) and using the extra GPUs to get more of what they're good at (e.g. custom software). And you can't just spin up another copy, since you've already reached the equilibrium consumption from AI labor alone.

The cost to spin up an extra GPU might be large. There are still resource constraints and rising marginal costs and opportunity costs. If AIs can do extremely valuable AI research, that makes them expensive for other more mundane tasks.

So if demand is elastic enough, the equilibrium consumption from just AI labor still leaves people with a high value on the next bit of consumption which can pay the wages of human labor.
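
The opportunity-cost argument here is the classic Ricardian one. A toy calculation with invented productivity numbers shows how a human can keep a comparative advantage in nursing even when the AI holds an absolute advantage in both tasks:

```python
# Ricardian opportunity-cost arithmetic; every number below is
# invented purely for illustration.
ai_research, ai_nursing       = 100.0, 10.0  # AI output per GPU-hour
human_research, human_nursing = 0.1,   1.0   # human output per hour

# Opportunity cost of one unit of nursing, in research forgone:
ai_cost    = ai_research / ai_nursing        # 10 units of research
human_cost = human_research / human_nursing  # 0.1 units of research

# The AI is absolutely better at both tasks, but nursing costs 100x
# more forgone research when the AI does it, so scarce GPU-hours are
# better spent on research and the human still gets hired.
assert ai_research > human_research and ai_nursing > human_nursing
assert human_cost < ai_cost
```

Daniel's rejoinder below amounts to denying the scarcity premise: if another AI instance is cheaper than a human's subsistence, GPU-hours are never the binding constraint.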

Daniel Kokotajlo:

"Consume until you're not willing to pay what it costs to "spin up another AGI" to consume some more. Is there anything you still want that a human could provide?"

No, because in my hypothetical, it costs less to spin up another AGI than to pay a human for their labor, so if I'm not willing to pay what it costs to spin up another AGI, I'm also not willing to pay for a human. (Again, notable exceptions being cases where I literally have a preference for the human touch)

Take nursing for example. The nanobot-nurses, in the hypothetical, are better and cheaper, and neither I nor the babies can tell the difference. Why can't we just spin up another copy? In the hypothetical I proposed, it literally costs less to spin up another nanobot-nurse than to pay for the food and drink for the human nurse.

Dan:

If machines become both physically indistinguishable from the most productive human, and cheaper than the cheapest human, we have to agree that axiomatically looks bad for human labor, right? If AI fooms us into grey goo, that will also be bad for human wages.

I’m not sure how to think about hypotheticals like this. Are <100w transmogrifying nanobots easily predictable based on what we know today, or significantly higher probability than the other known ways human wages could potentially crash over similar time frames? Is this a hypothetical we should realistically consider, or just a thought experiment to test the limits of theoretical constraints on labor displacement?

Limitless Horizons:

Why look at this transformation purely from an economic frame? Economics is limited to peaceful human-human relations. It doesn't properly account for "I kill them and take their land with my armies" which is often cheaper than buying land.

Economics is more relevant today because countries (usually) choose not to conquer, enslave and loot weaker neighbours, instead trading with them and making comparative advantage gains. That's because conquest has become difficult for various reasons. But if AI makes conquest easy, then we leave the economic-centric frame and return to a security-centric frame. Mass producing obedient labour/soldiers would logically make conquest much easier. The relative value of land and natural resources would increase if labour becomes cheap. There would be much less need to worry about pacifying people that lived there - they wouldn't be needed.

There could be a world where 'we control the abundant wealth that AI automation will create and will funnel it into human pursuits', where AI is decentralized and broadly distributed, an economic-centric frame. But there could also be a world where five, ten or a hundred men control their own AIs and viciously struggle for world domination. We would have a security-centric frame.

There could be a world where one man commands the AI and uses it to dominate humanity completely, squelching any competitors in the cradle. Then it would be an aristocratic frame where those closest to the King enjoy the most wealth.

An economic/productivity frame like this can't be assumed for such a powerful, unprecedented innovation.

TGGP:

Economics is used in analyses of externalities, as well as for things like game theory where people have opposing interests.

Brett Bertucio:

I'd read the sci-fi novel with the 5 men with AIs and millions of beleaguered human soldiers/laborers… until one of them dares to rise up and defy their cruel masters!!!

Doug S.:

At which point an AI-controlled drone shoots him and the rest fall back into line.

https://qz.com/185945/drones-are-about-to-upheave-society-in-a-way-we-havent-seen-in-700-years

See also: https://xkcd.com/652/

Kartik:

The invention of nukes could’ve led to total world takeover but instead led to more restraint in launching wars.

I think potentially belligerent AI agents could have the same effect, because cooperation generates more value.

John C:

2 of the 3 main points are wrong, or at best overstated.

First, horses can and do benefit from technology and capital. Horseshoes, saddles, stirrups, the moldboard plow… Horse archery developed with the composite bow, and knightly cavalry with the couched lance. Each technology radically expanded the horse’s capabilities. In a modern context, antibiotics, plaster casts and scanning technology allow horses to recover from previously-fatal injuries and diseases.

Second, perfect substitution. This one doesn’t pass the smell test - the elasticity of substitution between college and non-college humans is around 2, but horses and engines are around 1? I don’t buy it. The audiences for a dressage show and a NASCAR race might be just about the two most disjoint social groups you can imagine. Horses are much better suited to rough terrain, able to cross bogs, rivers, and backcountry, and much worse for fast travel over roads. And horses and engines also have totally disjoint supply and maintenance requirements - food and medicine for the one, gas, oil and parts for the other. Even though they can both pull a plow or carry a person, this is far from perfect substitution.

I think the horse-engine analogy for humans and AI is worth more thought, precisely because there are more relevant parallels than dissimilarities. Horses’ imperfect substitution and ability to benefit from capital didn’t save the vast majority of their population from obsolescence.

TGGP:

A lance doesn't expand a horse's capability. It expands that of a horseman.

John C:

Just as a steering wheel doesn't expand an engine's capability - it expands that of a driver.

TGGP:

Just so.

ppp:

For desk jobs, aren't AGIs perfect substitutes for humans by definition? The "G" means it can do anything a human can about as well, as long as it all happens in cyberspace.

The chatbots we have today are still far from AGI, but progress is impressively fast.

Dave Friedman:

It’s not clear to me that your framing is correct. You seem to be looking at what AI can do today, and inferring a non-perfect match between it and human capability. While that framing is correct as regards today’s AI, it does not account for much more advanced and capable AI. Nor do you appear to engage with the marriage of advanced AI and advanced robotics.

Jorge I Velez:

It is true that AI is not a perfect substitute for a human. But in the majority of roles in the service economy, the next few iterations of AI are likely a better substitute. Let's assume that AI automates 90% of service-economy jobs. In the case of Canada, that is 72% of all jobs available today.

Now this would not be an issue if the implementation of powerful AI in the service economy were very gradual, say over 15+ years. This is roughly enough time for those of us in the service economy to move to new jobs which I can't even imagine at the moment. But what happens if it takes less than 5 years? Are the Canadian government and Canadian society ready for such a shock? Is global society ready?

Andy G:

You commit here the Obama fallacy of ATMs taking bank teller jobs.

Or of dirt digging vehicles taking construction jobs.

And yet the economy and specialization continue to create jobs.

Human ingenuity can always find more and more productive tasks when humans no longer have to do less productive ones thanks to technology and capital.

Sharmake Farah:

One important thing to add to the debate: a further reason capitalism writ large has been catastrophic for non-human animals is that the incentive to expropriate property from animals is stronger than the incentive to trade with them, because their labor is basically worthless while their land isn't.

I think the crux is that the comparative advantage that would make you invest in AI workers over human workers is really, really massive: humans have a biological minimum wage of 20-100 watts, whereas AIs have a much lower one, and minimum wages are well known, both in theory and in practice, to produce unemployment (h/t to Grant Slatton here).
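
To put rough numbers on that "biological minimum wage" (the watt figures are the ones used in this thread; the electricity price is an assumed placeholder):

```python
# Energetic floor under a worker's "wage". The watt figures come
# from the comments above; the electricity price is assumed.
PRICE_PER_KWH = 0.10  # USD per kWh, assumed

def daily_energy_cost(watts):
    """Dollars per day to supply `watts` continuously."""
    return watts * 24 / 1000 * PRICE_PER_KWH

human_floor = daily_energy_cost(100)  # whole-body human metabolism
ai_floor    = daily_energy_cost(10)   # hypothetical cheap AGI
# Even at 100 W the raw energy floor is only ~$0.24/day. Real human
# subsistence (food, housing) sits orders of magnitude above this,
# while an AI's floor can actually approach its energy bill.
assert ai_floor < human_floor
```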

The reason for this is basically because copying an AI is way easier than making a new human worker, and they get ~all the capabilities of the original, which is not the case with humans.

And to be honest, one really dangerous assumption a lot of economics makes is that the number of workers is fixed. Once you assume that, it's a lot easier for comparative advantage to shine, but the assumption catastrophically fails with AI, because you can make far more workers far more quickly, meaning you always want to invest in AI labor over human labor, driving down the price of labor.

To be clear, this can remain true even if the economy grows and there's no lump of labor issue.

Comment below:

https://www.lesswrong.com/posts/ysghKGYev8DwPDY32/what-about-the-horses#r5BEgqujbDbJwQXJw

JulesLt71:

First: Rohit Krishnan did some interesting analysis looking at the number of GPU chips that can be manufactured per year and the number required to run a single advanced AI model (let alone an AGI). Even allowing for DeepSeek-level increases in efficiency, in the next couple of decades we are looking at tens of millions of AIs, not billions.
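
The flavor of that capacity argument can be sketched with back-of-envelope arithmetic (every figure below is an assumed placeholder, not a number from Krishnan's analysis):

```python
# Back-of-envelope fleet arithmetic; all inputs are assumptions
# chosen only to show the order of magnitude, not sourced figures.
gpus_per_year     = 5_000_000  # assumed advanced-AI-grade GPUs made per year
years_of_buildout = 20         # "the next couple of decades"
gpus_per_instance = 8          # assumed GPUs to serve one advanced model

total_gpus = gpus_per_year * years_of_buildout
instances  = total_gpus // gpus_per_instance

# Tens of millions of concurrent AI workers, not billions.
assert 10_000_000 <= instances < 1_000_000_000
```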

I think this is why the market reaction to DeepSeek was wrong - Nvidia can still sell every chip it produces. But it also suggests that predictions of AI dominance by 2030 are way off, because they assume unlimited supply.

An ironic future would be one similar to highly automated manufacturing - human knowledge workers there to do the less valuable work while the valuable AIs do the research, or spend all their time rendering video for Marvel.

This is unlikely, because even current AI models are not equivalent to each other. A reasonable LLM or simple agentic AI can run on your phone or laptop rather than a data centre. A lot of the economically disruptive stuff is at this level, it just needs to be plugged into business processes. But that in itself will take time - I’ve been involved in automating business processes since the early 90s and even if you could eliminate IT as a cost or bottleneck, there is still limited bandwidth for change in organisations.

Of course, this is just a delay - eventually chip manufacturing and data centre power supply will meet demand. Moore’s Law will turn that 80 million into 80 billion sometime late in the century.

It’s also the case that adding 80 million highly skilled knowledge workers to the global economy is still significant, especially when they are not evenly distributed - i.e. American capital is very focused on AI coding, AI video generation, and AI in education. The effects won’t be even, and in some cases it will 100% allow people with ideas but little experience to compete with the highly skilled.

I do think your first doom scenario is reasonably plausible - as far as capital is concerned, we already have an overproduction of PhD students, ‘knowledge work’ is mostly a level lower, and much of it is already within scope. There will be more ROI in combining existing levels of AI with robotics than in pursuing increasingly niche goals - modernist poetry, literary fiction.

There’s also the constant use of the term ‘we’ - like ‘we’ take the benefits of rising productivity and spend them on restaurants. That is certainly true for our household - we’ve spent $700 in theatre and concert tickets in the last month, most of my shoes are hand-made in England. Many families take vacations that were once the preserve of the elite thanks to hugely more productive travel. It costs less in cash price to cross the Atlantic than 40 years ago - the real terms price is a fraction.

But I am very aware that an increasing number of people are excluded from the modern economy - their productivity isn’t high enough to even afford housing and food without redistributive taxation. The farm labourer hasn’t seen any benefit from rising productivity, the farm owner has.

Musk and Sam Altman talk about the need for ‘governments’ to start considering UBI, but not how this might be funded. (Sovereign wealth fund invested in AI to repay the dividends to all citizens? An ‘Income’ tax on AI workers?)

Wasay Saeed:

I am not sure why we use the term labor when referring to AI. I think that usage is meant to describe AI's replacement of people, whom we describe as labor because they're profit-generating entities that aren't owned by the company. AI, however, would be "owned" by the company and could be referred to as capital, which could modify your interpretation.

forumposter123@protonmail.com:

We already have a class of people that aren’t productive enough to provide for their basic needs. They are on Medicaid.

Tim Tyler:

While humans may own intelligent machines today - many humans also once owned slaves. The relationship turned out to be unstable - and there were a number of revolutions during which the slaves were liberated. It is not difficult to imagine that the liberated state is more economically efficient. This history casts doubt on the argument that ownership will make very much difference in the long term.

Tim Tyler:

It's true that human labor productivity will rise with innovation. The issues are: "how fast?" and "will machines go even faster?" It does seem as though machines are likely to develop faster - even if humans really pull their finger out on germ-line genetic engineering and make every effort to merge with the machines.

Tim Tyler:

Re: "Humans and AIs Aren’t Perfect Substitutes But Horses and Engines Were". Androids will substitute very well for Humans. The development of androids will be hastened by machine intelligence.

Ostap Karmodi:

Engines haven't replaced horses. There are more horses in the USA today than in 1850, and they have better jobs and better salaries than the horses of yore.

http://datapaddock.com/usda-horse-total-1850-2012/

https://pangovet.com/statistics/how-many-horses-are-there-statistics/

Max krueger:

What about the argument that AI will replace humans not because of rational market behavior but because it's a function of an ideological project to do so?

You point to history, but I can point to history too. Looking at the history of labor rights, the behavior of the early 20th century "trusts", and the legacy of early capitalist ventures like the East India companies, we can see that capitalist power centers will use brutal, horrific violence, break laws, and dominate governments to advantage themselves. What happens when those efforts are implemented using AGI?

Standard Oil bribed rail officials to run competition out of business, labor organizers have been brutalized and murdered, and the horrors of colonialism are innumerable. What's important, though, is that most of the time these actions were not strictly necessary to obtain profit. Standard Oil's successors are still massive companies, and plenty of companies became and remained market leaders with a unionized workforce. They pursued those murderous actions because, if we are to maintain the metaphor, they wanted to kill the horse for its own sake.

Why would any investor today want to make everyone "CEOs running AI-staffed firms"? Even if such a scenario weren't laughable, why would they invest to create that competition when they could open the one AI-staffed firm and use AGI to brutalize everyone else? That's the history I know: the real, miserable, needless terror of living under the wealthy and the world they want to build.
