13 Comments

I think this post doesn't engage with the arguments for anti-x-risk AI regulation, to a degree I find frustrating. Surely you know that the people you are talking to -- the proponents of anti-x-risk AI regulation -- are well aware of the flaws of governments generally, and of the way regulation often stifles progress or results in regulatory capture that is actively harmful to whatever cause it was supposed to support. Surely you know that e.g. Yudkowsky is well aware of all this... right?

The core argument these regulation proponents are making and have been making is "Yeah, regulation often sucks or misfires, but we have to try, because worlds that don't regulate AGI are even more doomed."

What do you think will happen if there is no regulation of AGI? Seriously, how do you think it will go? Maybe you just don't think AGI will happen for another decade or so?


Reached out! Looking forward to it. Also, I reiterate that I'd love to hear what you think will happen if there is no regulation of AI. Could you please write a few paragraphs here sketching out the most likely scenario given that assumption?

Apr 19 · Liked by Maxwell Tabarrok

Don't forget the incentives that irrational voting provides.

Apr 17 · Liked by Maxwell Tabarrok

Your analogy with nuclear power is telling.

As I, and others, have written, government regulation drove nuclear energy to an early and unjustified death.

At the same time, governments monopolized the power of the atom for warfare.

AI regulation could end up doing the same: stifling the positives of the technology while preserving the negatives under government monopolies.

That does not bring us the future we want.


A lot of the AI safety people seem to have a naive understanding of how government works and of the effects of government intervention in a new technology.


Can you be more specific about what you mean by "a relative speed-up of the most dystopian use cases of AI"? Relative to what? The most utopian use cases?

"AI" is often used as a short-hand these days for "LLM's". Is this how you are using it?

Given that LLMs are such a general tool, arguably the most general tool, wouldn't progress in LLMs for government use also unlock many of the industrial and research capabilities of these systems? Do governments have much of a use for LLMs?

The only LLM military use case I've heard of is strategic decision-making, for which the reliability isn't there yet (and might never be). People want humans steering things for now. Doing low-information-processing work for government bureaucracies? Again, I think issues of trust and accountability in government will basically prevent LLM technology from being useful there.

If by "AI" you're referring to the wide range of techniques for automating targeting and control of weapons and reconnaissance systems (that, to my understanding, research into which does not improve our understanding of transformer architectures), then I don't see why having control of that technology means regulating OpenAI/Anthropic/Deep Mind or preventing these general intelligence technologies from having big impacts in industry.

I feel the term "AI" is misused in much public discussion, primarily referring to a sci-fi concept rather than the real systems we have right now.
