Will MacAskill went on Rob Wiblin’s 80,000 Hours podcast last week to talk about AI. MacAskill is a moral philosopher and Effective Altruist, so this wasn’t a technical discussion of near-term capabilities but rather a far-ranging and abstract conversation about the implications that advanced AI has for economics, politics, and the safety of human civilization.
The podcast was four hours long, so I will focus on just two points near the beginning that serve as so-called “intuition pumps” for the rest of the discussion.
A century of history crammed into a decade
First, there is the idea of the “accelerated decade” and the challenges this will pose to human institutions. Here’s MacAskill explaining what might happen after AIs begin to accelerate technological progress:
Will MacAskill: We’re thinking about 100 years of progress happening in less than 10. One way to get a sense of just how intense that would be is imagine if that had happened in the past. So imagine if in 1925 we’d gotten a century’s worth of tech progress in 10 years. We should think about all the things that happened between 1925 and 2025 — including satellites, biological and chemical weapons, the atomic bomb, the hydrogen bomb, the scale-up of those nuclear stockpiles. We should think about conceptual developments: game theory, social science, the modern scientific method; things like computers, the internet, AI itself.
But while technological progress runs ahead, other things will grow more slowly or perhaps not at all, creating a potentially dangerous asymmetry.
Will MacAskill: Human decision making and human institutions don’t speed up though. So just taking the case of nuclear weapons: in this accelerated timeline, there’s a three-month gap between the start of the Manhattan Project and the dropping of the nuclear bomb in Hiroshima. The Cuban Missile Crisis lasts a little over a day. There’s a close nuclear call every single year.
This clearly would pose an enormous challenge to institutions and human decision making.
Rob accepts MacAskill’s interpretation of this thought experiment without pushback.
Rob Wiblin: I think this helps to bring out the intuition of, holy shit, there would be so much chaos, so much stuff going on, and we wouldn’t have the ability to process it or think about our decisions very well.
But the response to the fears implied by MacAskill’s thought experiment is right there in the premise. What would happen if the previous century of progress were compressed into a decade? Just look around you. We’ve been on this trend for thousands of years. The 20th century was a thousand years or more of progress compressed into a century.

We've already passed through several orders of magnitude of increase in how fast the world moves, how rapid economic growth is, how much economic activity each person is responsible for, and how much destructive power each person and government can wield.
If, as MacAskill claims, human decision making hasn’t sped up, then we’ve somehow managed to avoid the “holy shit chaos” that Will and Rob predicted would result.
Maybe some particular theory of human psychology and economics predicts that the next speed-up will finally push us past the threshold of human understanding, but merely bamboozling us with big numbers doesn’t stand up against our proven record of success across the even more bamboozling long view of history.
Just as in the 20th century, there will surely be risks, but even counting all of the mistakes and destruction that century wrought, we still came out far ahead.
This is a purely empiricist argument for why MacAskill’s intuition pump, which asks us to imagine a 1–2 OOM increase in growth and wonder “how could we ever deal with such a thing!”, doesn’t hold up to scrutiny. But more constructive arguments for how we actually manage a world economy that is thousands of times larger and faster than it used to be are available, and they are useful for thinking about AI progress. Start with Leonard Read’s I, Pencil and Hayek’s “The Use of Knowledge in Society” (1945).
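To make the orders-of-magnitude comparison concrete, here is a rough back-of-the-envelope sketch in Python. The growth-rate figures (roughly 0.1% per year for the pre-industrial world, roughly 3% per year in the modern era) are ballpark assumptions of mine, not numbers from the podcast:

```python
import math

# Ballpark annual world growth rates (assumptions, not figures from the podcast)
preindustrial = 0.001   # ~0.1% per year before the industrial era
modern = 0.03           # ~3% per year in the 20th century

# "100 years of progress in 10 years" is roughly a 10x speed-up in growth rates
ai_accelerated = modern * 10

print(f"pre-industrial -> modern: {math.log10(modern / preindustrial):.1f} OOM")
print(f"modern -> AI-accelerated: {math.log10(ai_accelerated / modern):.1f} OOM")
# pre-industrial -> modern: 1.5 OOM
# modern -> AI-accelerated: 1.0 OOM
```

On these rough numbers, humanity has already lived through a larger jump in growth rates than the one MacAskill asks us to imagine.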
The Philosopher’s Fallacy
Later in the podcast, MacAskill makes a similar argument in a more general form. Just as AI will exceed our capacity to organize the economy, it will also exceed our capacity to plan the moral future.
Will MacAskill: You should really pause and reflect on the fact that many companies now are saying what we want to do is build AGI — AI that is as good as humans. OK, what does it look like? What does a good society look like when we have humans and we have trillions of AI beings going around that are functionally much more capable?
There’s obviously the loss of control challenge there, but there’s also just the like —
Rob Wiblin: Sam Altman, I’ve got a pen. Can you write down what’s your vision for a good future that looks like this?
Will MacAskill: What’s the vision like? How do we coexist in an ethical and morally respectable way? And it’s like there’s nothing.
Rob Wiblin: Deafening silence.
Will MacAskill: Careening towards this vision that is just a void, essentially. And it’s not like it’s trivial either. I am a moral philosopher: I have no clue what that good society looks like.
In some ways, I agree with MacAskill’s description here. We don’t have a clear vision of what the future will be like and how we’ll all get along within it. We don’t have a plan that Sam Altman could jot down on a piece of paper.
But this fact has no bearing on whether the future will be good, because we don’t need moral philosophers to have a plan for things to go well.
Evidence for this claim again comes from history. No one had a clear vision in advance of how computers would change the world or how we might need to adapt to remain morally good. No one had such a vision for cars or steam engines or electricity. And yet we can all agree that the world improved after their invention.
Rob and Will do get into more concrete discussions about morality and AI that don’t rely on this bad argument, but they’d be better off dropping this intro.
Reading List for Paid Subscribers