Discussion about this post

Kevin

The whole idea of estimating P(doom) is flawed: nobody can estimate it well, and a bad estimate isn't useful. One lesson of theoretical CS is that there are functions you simply cannot approximate well.

I think you are better off thinking in "scenarios": the pessimistic scenario is that AI plateaus in the very near future, the doomer scenario is that AI fundamentally outcompetes humanity, a medium/good scenario is that multiple new trillion-dollar companies are formed but there is no "singularity", and so on. Then accept that you cannot estimate the likelihood of the different scenarios, but you can still use them as a tool for planning.

Josh Knox

Dynomight wrote up some additional analogies... I like your 2x2 framing, though.

https://dynomight.net/llms/
