The Offense-Defense Balance Rarely Changes
This crux of the doomer case is on unstable ground
You’ve probably seen several conversations on X go something like this:
Michael Doomer ⏸️: Advanced AI can help anyone make bioweapons
If this technology spreads, it will only take one crazy person to destroy the world!
Edward Acc ⏩: I can just ask my AI to make a vaccine
Yann LeCun: My good AI will take down your rogue AI
The disagreement here hinges on whether a technology will enable offense (bioweapons) more than defense (vaccines). Predictions about the “offense-defense balance” of future technologies, especially AI, are central to debates about techno-optimism and existential risk.
Most of these predictions rely on intuitions about how technologies like cheap biotech, drones, and digital agents would affect the ease of attacking or protecting resources. It is hard to imagine a world with AI agents searching for software vulnerabilities and autonomous drones attacking military targets without imagining a massive shift in the offense-defense balance.
But there is little historical evidence for large changes in the offense-defense balance, even in response to technological revolutions.
Consider cybersecurity. Moore’s law has taken us through a seven-order-of-magnitude reduction in the cost of compute since the 1970s. Alongside that increase in raw compute power came massive changes in the form and economic uses of computer technology: encryption, the internet, e-commerce, social media, and smartphones.
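As a rough sanity check on that figure, here is a minimal sketch assuming the textbook version of Moore’s law, where the cost of compute halves about every two years:

```python
# Rough sanity check on "seven orders of magnitude since the 70s",
# assuming cost of compute halves roughly every two years.
import math

years = 50                       # early 1970s to early 2020s (assumed span)
halvings = years / 2             # one halving per two-year period
cost_reduction = 2 ** halvings   # total factor by which cost falls

print(f"cost falls ~{cost_reduction:.1e}x")                       # ~3.4e+07
print(f"= {math.log10(cost_reduction):.1f} orders of magnitude")  # ~7.5
```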
The usual offense-defense story predicts that changes this big should have big effects on the balance. If you had told people in the 1970s that by 2020 terrorist groups and lone psychopaths would carry in their pockets more computing power than IBM had produced in total up to that point, what would they have predicted about the offense-defense balance of cybersecurity?
Contrary to that likely prediction, the offense-defense balance in cybersecurity seems stable. Cyberattacks have not been snuffed out, but neither have they taken over the world. All major nations have defensive and offensive cybersecurity teams, but no one has gained a decisive advantage. Computers still sometimes get viruses or ransomware, but the damage has not grown to endanger a large share of the economic value that flows through the internet. The US military budget for cybersecurity grew by about 4% a year from 1980 to 2020, which is faster than GDP growth, but in line with GDP growth plus the growing fraction of GDP that’s on the internet.
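To see why that is consistent with stability rather than with a growing threat, a quick back-of-the-envelope comparison helps (the 3% GDP growth rate below is an assumed round number, not sourced data):

```python
# Back-of-the-envelope: does 4%/year budget growth outpace the economy
# by more than the internet's growing share of GDP can explain?
years = 40                           # 1980 to 2020
budget_factor = 1.04 ** years        # ~4.8x total budget growth
gdp_factor = 1.03 ** years           # ~3.3x total GDP growth (assumed rate)
excess = budget_factor / gdp_factor  # growth unexplained by GDP alone

print(f"budget {budget_factor:.1f}x, GDP {gdp_factor:.1f}x, excess {excess:.2f}x")
# An excess of ~1.5x spread over 40 years is modest - about what you'd
# expect from the internet's rising share of GDP, not an exploding threat.
```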
This stability through several previous technological revolutions raises the burden of proof for claims that the offense-defense balance of cybersecurity will change radically after the next one.
The stability of the offense-defense balance isn’t specific to cybersecurity. The graph below shows the per capita rate of death in war from 1400 to 2013, a period that spans all of humanity’s major technological revolutions. There is lots of variance from year to year but almost zero long-run trend.
Does anyone have a theory of the offense-defense balance that can explain why per-capita deaths from war should be about the same in 1640, when people were fighting with swords and horses, as in 1940, when they were fighting with airstrikes and tanks?
It is very difficult to explain the variation in this graph with variation in technology. Per-capita deaths in conflict are noisy and cyclical, while progress in technology is relatively smooth and monotonic.
No previous technology has changed the frequency or cost of conflict enough to move this metric far beyond the range that was already set between 1400 and 1650. Again, the burden of proof is raised for why we should expect AI to be different.
The cost to sequence a human genome has also fallen by six orders of magnitude, accompanied by dozens of big technological changes in biology. Yet there has been no noticeable change in the frequency or severity of biological attacks.
Possible Reasons For Stability
Why is the offense-defense balance so stable even when the technologies behind it are changing rapidly and radically? The main contribution of this post is just to support the importance of this question with empirical evidence, but here is an underdeveloped theory.
The main thing is that the clean distinction between attackers and defenders in the theory of the offense-defense balance does not exist in practice. All attackers are also defenders, and vice versa. Invading countries have to defend their conquests, and hackers need strong information security.
So if some technology makes invading easier than defending, or info-sec easier than hacking, it might not change the balance of power much, because each actor needs to do both. If offense and defense are complements rather than substitutes, the balance between them isn’t as important.
What does this argument predict for the future of AI? It does not predict that the future will be very similar to today. Even though the offense-defense balance in cybersecurity is about the same today as it was in the 1970s, there have been massive changes in technology and society since then. AI is clearly the defining technology of this century.
But it does predict that the big changes from AI won’t come from huge upsets to the offense-defense balance. The changes will look more like the industrial revolution and less like small terrorist groups being empowered to take down the internet or destroy entire countries.
Maybe in all of these cases there are threshold effects waiting around the corner, or maybe AI really is completely different from all of our past technological revolutions, but that claim needs a lot of evidence. So far, the offense-defense balance has been very stable through large technological change, and we should expect that to continue.
A good post, and one I'll incorporate into my own thinking, but misapplied to bioweapons.
For bioweapons, the overwhelming advantage has always gone to offense. The asymmetry comes from the fact that a single virus can kill millions, but it takes millions of vaccine doses (and people willing to take said vaccines) to protect millions - plus developing the vaccines, the manufacturing and distribution capability, etc. If AI allows easy creation of both disease and vaccine, that makes the attacker's job easy but only a small part of the defender's job easy.
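To make the asymmetry concrete, here's a toy cost model - every number in it is hypothetical, chosen only to show the structure:

```python
# Toy cost model of the offense-defense asymmetry for bioweapons.
# All figures are hypothetical. The attacker pays roughly a one-time R&D
# cost; the defender pays a per-person cost to manufacture, distribute,
# and administer vaccines on top of their own R&D.
population = 100_000_000

pathogen_rnd = 10_000_000   # assumed one-time cost to develop the pathogen
vaccine_rnd = 10_000_000    # assume AI makes vaccine R&D equally cheap
dose_cost = 20              # assumed per-person logistics cost

attacker_total = pathogen_rnd
defender_total = vaccine_rnd + dose_cost * population

print(f"attacker: ${attacker_total:,}")   # $10,000,000
print(f"defender: ${defender_total:,}")   # $2,010,000,000
# Even if AI drove both R&D terms to zero, the defender would still owe
# ~$2B in logistics: AI cheapens only a small part of the defender's job.
```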
This has been true throughout history - my understanding is that a large majority of the indigenous peoples of the Americas, including the South and Central American empires, were killed by disease, even without intentional biological warfare from the newcomers.
COVID, while unlikely to be actual biological warfare, is a succinct demonstration that the hard part of defending against a pandemic is manufacturing and distributing vaccines and getting everyone vaccinated, not developing the vaccine - and AI may not be of much help with that.
The truth is that people just haven't tried biological warfare that hard, yet, most likely because there hasn't been a safe way to do so - one's own population would be just as vulnerable as one's enemy's.
Now, if you could develop a disease + a vaccine, spend a few years vaccinating your own country, and then unleash the disease... AI makes that easier to do.
The graph of war deaths is profoundly unconvincing to me as an argument that attack and defense are balanced. For example, I think most people would agree that World War 1 was mostly defense-favored, but if we presumed that "defense-favored = fewer deaths", that would imply we should see a dip in casualties!
This is because, in war, you defend yourself by killing attackers.
You can also extend this to why this doesn't apply to nukes: with retaliation as the known norm, "offense tech" (like the number of missiles, the ability to aim them, and so on) *is* defense tech. And while that is true for humans, it would not necessarily be true when your adversary has different weaknesses; what you see as an "offense-defense" balance is really an "offense-offense" balance. Hence bioweapon risk from AI would be uniquely asymmetrical. Yeah, if it was a human designing attacks, they'd have to worry about the disease spreading back to them, but if it were an AI, that would no longer be a concern.