7 Comments
Kevin

The whole idea of estimating P(doom) is bad, because nobody can estimate it well, and a bad estimate isn't useful. One lesson of theoretical CS is that there are functions you cannot approximate well.

I think you are better off thinking in "scenarios": the pessimistic scenario is that AI plateaus in the very near future, the doomer scenario is that AI fundamentally outcompetes humanity, a medium/good scenario is that multiple new trillion-dollar companies are formed but there is no "singularity", and so on. Then accept that you cannot estimate the likelihood of the different scenarios, but you can still use them as a tool for planning.

DangerouslyUnstable

I disagree and think that probabilities are useful, as argued here:

https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist

Yes, we should remember the limitations of probabilities in these situations (and they are quite limited and uncertain). But take even your own suggestion of using "scenarios as tools for planning": how much effort should go into planning for each scenario? You listed four, but are those really the only four? How far should you split your efforts? Probabilities, even under uncertainty, are how we answer these questions. You may not be using an explicit number, but every decision you make across these scenarios has some sense of likelihood behind it. It can be useful to make that sense explicit.
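
To make that concrete with toy numbers (nothing here is an estimate I'd actually defend), here is a minimal sketch of how explicit probabilities and stakes turn "which scenarios deserve planning effort" into a simple expected-value split:

```python
# Toy illustration only: the probabilities and stakes below are made up.
# Once each scenario has even a rough probability and a rough measure of
# how much planning for it matters, effort allocation is just a weighted split.

scenarios = {
    "AI plateaus soon":          {"p": 0.30, "stakes": 2},
    "doom / AI outcompetes us":  {"p": 0.05, "stakes": 10},
    "big boom, no singularity":  {"p": 0.50, "stakes": 5},
    "something else entirely":   {"p": 0.15, "stakes": 5},
}

# Weight each scenario by probability times stakes, then normalize.
weights = {name: s["p"] * s["stakes"] for name, s in scenarios.items()}
total = sum(weights.values())

for name, w in weights.items():
    print(f"{name}: {w / total:.0%} of planning effort")
```

Even refusing to write down a number is equivalent to picking some implicit weights; writing them down just lets you argue about them.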

Rai Sur

If you're still going for walks or having dinner with your friends or showering without media, your attention hasn't been saturated.

Sounds like a horrible future, but regardless, the incentives are there to increase the size of the attention pie.

Josh Knox

Dynomight wrote up some additional analogies... I like your 2x2 framing, though.

https://dynomight.net/llms/

Ben Stallard

The problem with automating content production is that consuming content requires attention. Indeed, the ad sales which fuel such content production on the internet are based on this attention. Attention requires time. In my view, the West has already reached the point of content production where people must make zero-sum decisions about how to allocate their time among different content streams.

Unless we are envisioning a utopian future in which AI automation allows us to reduce working hours to the point where we can spend more of our inherently limited time consuming content (though, due to competitive dynamics, it seems the systems that keep humans working will be more productive in absolute terms and will therefore propagate), a printing-press scenario seems unlikely.

Sure, there was plenty of room to grow demand for content when the printing press was invented. Idle time the elites spent telling each other mediocre stories by the hearth could be replaced, cheaply, with the work of the best storyteller within the radius from which books could be acquired. Does this headroom still exist in the developed world?

gregvp

The demand for cognitive labour may very well shrink. Fertility is below replacement in all high-income countries and in some others, and it is heading that way in most countries.

LLMs could plausibly drive fertility near zero, if AI girlfriends / boyfriends / children meet people's emotional needs better than real members of the opposite sex or real children do.

The exceptions to sub-replacement fertility in advanced countries are certain highly religious sub-groups that take a very cautious approach to modern technology and science: people who don't have a lot of use for cognitive labour.

This, of course, is the long run. No need to shut down your blog and learn horse-ploughing just yet.

In the shorter run, Malcolm Collins recently asked the rhetorical question, "why bother burning books when nobody reads anymore?" To a first approximation, he's right. (Me? I 'm a relict of a bygone age.)

Steve Byrnes

Towards the end you seem to be assuming that AI cannot autonomously set up a website, conceive of blog posts, write them, and chat in the comments section, all much, MUCH better than Scott Alexander can. That assumption is obviously valid right now, and will obviously (I claim) become invalid at some point in the future. In other words, you frame the blog post as "let's leave aside the possibility that AI kills everybody"… but what you're REALLY thinking about and talking about is "let's leave aside the possibility that future researchers will develop real AGI" (as defined here: https://www.lesswrong.com/posts/uxzDLD4WsiyrBjnPw/artificial-general-intelligence-an-extremely-brief-faq ). That's a different assumption. And it's fine to make that assumption! It might be valid for many more years! Or even decades! But if you're making that assumption, you should say that you're doing so.
