14 Comments
Daniel Kokotajlo:

I think this post doesn't engage with the arguments for anti-x-risk AI regulation, to a degree that I am frustrated by. Surely you know that the people you are talking to -- the proponents of anti-x-risk AI regulation -- are well aware of the flaws of governments generally and the way in which regulation often stifles progress or results in regulatory capture that is actively harmful to whatever cause it was supposed to support. Surely you know that e.g. Yudkowsky is well aware of all this... right?

The core argument these regulation proponents are making and have been making is "Yeah, regulation often sucks or misfires, but we have to try, because worlds that don't regulate AGI are even more doomed."

What do you think will happen if there is no regulation of AGI? Seriously, how do you think it will go? Maybe you just don't think AGI will happen for another decade or so?

Maxwell Tabarrok:

Definitely agree that e.g. Yudkowsky and Zvi are aware of govt failure and are consistent about it. The AI safety advocacy community is much larger than them now, though, and many are more pollyannish about govt intervention. I don't think your core argument describes how Max Tegmark or CSET would frame their positions, for example.

I agree that there are many people making the core argument you give here. I guess it's just not clear to me why regulation is positive expected value when it is likely to preserve the most dangerous parts of AI development and stifle the most promising.

I am not an AGI skeptic, and I don't have super long timelines or anything like that. The argument does not rest on denying AI capabilities. The argument also does not rest on optimism for unregulated development of AI: one can think that p(doom | no regulation) is very high but still not much different from p(doom | regulation).

My claim is that the reasons why you might believe p(doom) is very high are extremely unlikely to be solved by greater govt involvement in the technology and are more likely to be exacerbated by it. I'm not sure exactly what those reasons are for you, but what risks do you see govt involvement realistically addressing? It seems very clear to me that, even in a highly regulated industry, govts will perform or contract for large training runs, connect AI up to important systems, and use AI for killer robots and military strategy. It also seems unlikely that safety research would be better funded or more skilled in a highly regulated AI world, though I'm a bit less sure about that.

So if we agree that regulation often sucks and misfires, and that govts' own uses for AI will be extremely dangerous, then what are we getting in return for their involvement?

Daniel Kokotajlo:

Thanks for the thoughtful reply. Yeah I know less about Max Tegmark and CSET, maybe they are making the errors you describe, idk.

In a sense your argument *does* rest on optimism for unregulated development of AI, because what's at issue is the core question of whether we should be advocating for regulation or against. So we need to compare worlds-with-regulation to worlds-without. This is why I asked you what you thought would happen without regulation. I'm still interested in your answer to that question.

If you'd be down for it, I'd love to have a call sometime to discuss! If you like, we could even put it up on YouTube or something. We could talk about categories of regulatory proposals (e.g. transparency reqs vs. licensing vs. approval-of-runs vs. bans vs. nationalization vs. international megaproject) and game out scenarios for how they might be implemented well or poorly and then how that might play out when AGI arrives.

Again, I totally agree that it's plausible government regulation will make things even worse. But it nevertheless seems like our best hope, because there's a decent chance it will prevent anyone from building catastrophically dangerous AGI until we figure out how to do so safely. Right now we don't know how to do it safely; the probable outcome of continued acceleration to AGI and beyond is doom.

Also, to be clear, I'm not a fan of *generic* regulation-boosting. Like, if I just had a megaphone to shout to the world, "More regulation of AI!" I would not use it. I want to do more targeted advocacy of regulation that I think is more likely to be good and less likely to result in regulatory-capture. Heck, I don't even think focusing on regulation directly is the best use of my time -- better to focus on talking about what I'd consider to be the basics: https://www.lesswrong.com/posts/cxuzALcmucCndYv4a/shortform

Maxwell Tabarrok:

I would love to talk! My email is maxwell.tabarrok@gmail.com; we can coordinate more there.

Chris:

Post-discussion, I would love a follow-up to this post with your thoughts after some of these comments, etc.

Donald:

I expect "AI development for the military" to be basic image recognition made to run on small drones and to be highly reliable at identifying enemies.

I don't expect there to be nearly as much of the type of research that leads to AGI.

And I expect the slowdown to be large: time in which, hopefully, someone can find a better plan.

Daniel Kokotajlo:

Reached out! Looking forward to it. Also, I reiterate that I'd love to hear from you what you think will happen if there is no regulation of AI. Could you please write a few paragraphs here sketching out the most likely scenario given that assumption?

Maxwell Tabarrok:

There are already regulations on ~everything someone might do with AI. So "no regulations on AI" just means no additional regulations about the specific technology, and we rely on existing regulations for dangerous things one might do with the tech (e.g. spam, or making a bomb, or something).

In that scenario, my low-confidence most likely prediction is AI developing into something like electricity, computing, and the Internet. There will be lots of competing AI companies and open-source models. Almost everyone will use AI every day, probably through their phone. Everyone will have personalized assistance and intelligence augmentation.

AI is frequently involved in misuse, but immune systems adapt relatively quickly and keep the risk moderate, much as the balance between cybersecurity and hacking has remained stable through the past decades of massive changes in computing tech. There will be lots of high-profile scandals where photos, videos, audio, text, etc. are faked. Eventually we'll expand pre-existing tools for information verification and maybe come up with some new ones. These tools were too expensive before, but now that the risk is higher, investment in them is worth it.

Electricity, computing, and the Internet are today essential components of every weapons system and terrorist attack. But this is because they have spread so widely across all tasks, rather than because of any specific advantage they give to destruction. I think AI will be similar.

I'd like to stress that my argument isn't riding on anyone believing in my most likely scenario. If you think I far underrate misalignment risk, you might be right, but govts regulating Microsoft's competitors out of business and/or training a big model themselves isn't safer on that dimension. Similarly, if you think AI used as a weapon is more dangerous than I claim, that world faces huge risks from governments using AI weapons against each other. Getting government to strictly regulate civilian use of AI won't substantially reduce WW3 risk, and that is what's most important in that world.

Daniel Kokotajlo:

OK, thanks! This is super helpful and will probably give our conversation a 15-minute head start.

So, my immediate reaction to the above is: the concepts of AGI, ASI, and intelligence explosion seem notably absent, and there doesn't seem to be any differently-named-but-similar substitute concept. (This is consistent with the picture you paint, in which there are lots of competing companies and the focus is on what AI is used for, as if it will remain a set of tools rather than general agents.)

So, we should talk about that -- it seems like you think AI will be much less powerful (at least for the next decade or so) than I do.

And again, to be clear, I totally agree that if AI regulation amounts to "MSFT's competitors get shafted but MSFT gets to keep going more or less as it wants to" then the regulation was net-negative. I am strongly opposed to such regulation.

SolarxPvP:

Don't forget the incentives that irrational voting provides.

J.K. Lund:

Your analogy with nuclear power is telling.

As I and others have written, government regulation drove nuclear energy to an early and unjustified death.

At the same time, governments monopolized the power of the atom for warfare.

AI regulation could end up doing the same: stifling the positives of the technology while preserving the negatives under govt monopolies.

That does not bring us the future we want.

Dave Friedman:

A lot of the AI safety people seem to have a naive understanding of how government works, and what the effects of government intervention in a new technology are.

Daniel Kokotajlo:

AI safety people are historically of a libertarian bent, and generally quite distrustful of governments. They are anti-NIMBY and accelerationist about pretty much everything besides AGI.

Ben Stallard:

Can you be more specific about what you mean by "a relative speed-up of the most dystopian use cases of AI"? Relative to what? The most utopian use cases?

"AI" is often used as a short-hand these days for "LLM's". Is this how you are using it?

Given that LLMs are such a general tool, arguably the most general tool, wouldn't progress in LLMs for government use also unlock many of the industrial and research capabilities of these systems? Do governments have much of a use for LLMs?

The only LLM military use case I've heard of is as a strategic decision-maker, for which the reliability isn't there yet (and might never be). People want humans steering things for now. Doing low-information-processing work for government bureaucracies? Again, I think issues of trust and accountability in government will basically prevent LLM technology from being useful there.

If by "AI" you're referring to the wide range of techniques for automating targeting and control of weapons and reconnaissance systems (that, to my understanding, research into which does not improve our understanding of transformer architectures), then I don't see why having control of that technology means regulating OpenAI/Anthropic/Deep Mind or preventing these general intelligence technologies from having big impacts in industry.

I feel the term "AI" is misused in much public discussion, primarily referring to a sci-fi concept rather than the real things we have right now.
