27 Comments

A good post, and one I'll incorporate into my own thinking, but misapplied to bioweapons.

For bioweapons, the overwhelming advantage has always gone to offense. The asymmetry comes from the fact that a single virus can kill millions, but it takes millions of vaccines (and people willing to take said vaccines) to protect millions. (Plus developing the vaccines, the manufacturing and distribution capability, etc.) If AI allows easy creation of both disease and vaccine, that makes the attacker's job easy but only a small part of the defender's job easy.

This has been true throughout history - my understanding is that a large majority of indigenous people in the Americas, including the South and Central American empires, were killed by disease, even without intentional biological warfare from the newcomers.

COVID, while unlikely to be actual biological warfare, is a succinct demonstration that the hard part of defending against a pandemic is manufacturing and distributing the vaccine and getting everyone vaccinated, not developing the vaccine - and AI may not be of much help with that.

The truth is that people just haven't tried biological warfare that hard yet, most likely because there hasn't been a safe way to do so - one's own population would be just as vulnerable as one's enemy's.

Now, if you could develop a disease + a vaccine, spend a few years vaccinating your own country, and then unleash the disease... AI makes that easier to do.


The solution would be to make a vaccine which is contagious and spreads fast without the need for manufacturing and distribution - much like the first vaccine (cowpox). This is obviously very difficult, but then, so is manufacturing a bioweapon. Also, no rational actor (a state or large corporation) is going to make a virus which runs around killing indiscriminately, so that just leaves irrational actors, who will tend to be individuals or small groups with far fewer resources. So in practice defense may still have an advantage.


I love the technological solution - a vaccine that spreads!

Politically, though, can you imagine the pushback in America against the government involuntarily infecting people with novel microorganisms?

As for your second point, I agree, although AI is lowering the bar for irrational actors all the time, which is how the conversation started. (It's lowering the bar for the good people, too.)


I think it would be hard to vaccinate your country against an engineered disease without the rest of the world asking "Hey, why are you vaccinating your population against a disease that nobody has found in the wild yet?" Mass vaccination campaigns are not subtle.


There are groups/people interested in bringing civilisation down entirely; they don't need to vaccinate anyone.


True. North Korea would be my first thought; they're pretty insulated from the rest of the world, but your point stands.

Mark's point below is also valid, however.


The graph of war deaths is profoundly unconvincing to me as an argument that attack and defense are balanced. For example, I think most people would agree that World War 1 was mostly defense-favored, but if we presumed that "defense-favored = fewer deaths," that would imply we should see a dip in casualties!

This is because, in war, you defend yourself by killing attackers.

You can also extend this to why this doesn't apply to nukes: with retaliation as the known norm, "offense tech" (like number of missiles, ability to aim, and so on) *is* defense tech. And while that is true for humans, it would not necessarily be true when your adversary has different weaknesses; hence what you see as an "offense-defense" balance is really an "offense-offense" balance. Hence bioweapon risk from AI would be uniquely asymmetrical. Yeah, if it were a human designing attacks, they'd have to worry about the disease spreading back, but if it were an AI, that would no longer be a concern.


Thank you for reading and for your thoughtful comment!


Actually, come to think of it, the deaths are plotted on a log graph, which hides the volatility in that number over time. It ranges from roughly 0.2 to 200 military deaths per 100,000 people. That's 3 OOMs!


That's fair - there is volatility in the number, but the lack of a long-term trend is still interesting. I think it's hard to explain that volatility within a constant range with a story about tech progress, because tech progress isn't that volatile; it's relatively smooth and monotonic. Other explanations - politics, or small random fluctuations that are self-reinforcing - might be better.

I'm not super confident about anything in the post, but I haven't seen good answers to these questions yet.


There's a good argument that even if quantitatively we didn't see an increase in deaths over time due to bioweapons, one good bioweapon could lead to millions of deaths or other significant costs to human well-being. The costs could be fat-tailed due to the dynamics of a viral contagion.

I would feel more comfortable arguing that this won't happen once we do have [more] AI-augmented defenses against pathogens. It seems possible to me that there could be high defensive asymmetry. Palmer Luckey made an argument about this in his recent Pirate Wires interview.

However, the arguments I have seen about changes in offense vs. defense are about the types of agents or organizations that gain a marginal benefit from a shift in technology. The Sovereign Individual makes several of these arguments, which I at least found compelling when I read a few chapters of it. E.g., the invention of the hunting rifle created asymmetries for militias relative to conventional standing armies, at least in certain contexts. A skilled colonial-era American hunter arguably had a greater propensity for violence than a redcoat patrol. This did change the political economy in a way that was conducive to the founding of a republic, even if it didn't necessarily change the number of deaths from warfare over a certain time period.

I think there could be a shift in which parties have more capability even if that didn't result in more money or information lost to cyber criminals. Maybe AI-augmented cyber warfare doesn't lead to more powerful cyber pirates, but it could lead to certain corporations having greater capacities than most or all nation-states. And if that was already the case, maybe the specific actors change, or there is some other qualitative change in their relationships with governments, adversaries, or customers.

I agree with you that this is definitely a place for greater discussion and inquiry.


--I think claims about the offense-defense balance are central to the argument from AI-doom-via-bioweapons, but not at all central to the argument from AI-doom-via-misalignment, which is IMO where most of the risk comes from. If we train superintelligence whilst still not knowing how to control it, things will end very badly for us for reasons that aren't well-described as a shift in the O/D balance.

--As others have pointed out, the concern from bio risk isn't necessarily that the offense-defense balance will shift, but that it will stay the same -- namely, it will continue to favor offense.

--Even if the claim were that the balance will shift, the data on deaths from violent conflict over time is not particularly relevant. The claim wasn't that there is a general trend towards offense across all technologies and all time, and even if it were, the relationship between O/D balance and deaths is not straightforward, as the comments on LessWrong pointed out. (That said, I DO agree with your more general point that historically people have sometimes predicted shifts in the offense-defense balance and been totally wrong. These things are sometimes hard to call in advance.)

--For compute and cybersecurity: Compute isn't the driving factor in cybersecurity anyway. I agree this is relevant evidence though, at least about the O/D balance in cybersecurity (which is different from biosecurity).

--The evidence about the lack of bioterror attacks despite the cost dropping exponentially for decades is the best argument you have here IMO. However, it isn't very strong, because the 'doomer' view (I think that term is unfairly pejorative, btw) is compatible with this evidence. Suppose you think that every halving of the cost of bioterror attacks doubles the probability that a randomly selected person or group would succeed at a bioterror attack if they tried. And suppose that there are (say) N people/groups who are willing to try bioterror attacks, but only if they have at least a 10% chance of succeeding. Then your view predicts that as the cost of bioterror attacks drops exponentially over time, you should see... nothing. Nothing, for a long period, and then you should see one attempted bioterror attack, then two, then four, then eight, etc. (see the toy sketch at the end of this comment). So the fact that we have seen nothing so far is not strong evidence against this view, though it is weak evidence.

--I think your post is good, though, overall, because it draws attention to the fact that theorizing about the offense/defense balance and how it will shift in some domain in the future is in general a tricky thing to do. I do think that overall the risk of AI doom from terrorists-making-bioweapons is not so high (5%? 10%?) and that what you say here is part of why I think that. (i.e. naively it seems like would-be-bioterrorists will gain more than public health defenders will gain from near-term AI systems, but who knows, these things are hard to predict, and I can easily see it going the other way e.g. because of AI-enabled surveillance and vaccines.)
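
(Here is a toy sketch of the model described two points up. The starting rate of attempted attacks, the one-cost-halving-per-year pace, and the 40-year horizon are all made-up illustrative numbers, not estimates.)

```python
import numpy as np

# Toy model: if the expected number of attempted bioterror attacks doubles
# with every halving of the cost (an illustrative assumption), observed
# attacks stay at zero for a long stretch and then take off exponentially,
# so "no attacks so far" is only weak evidence against the worried view.
rng = np.random.default_rng(0)

expected_attempts = 1e-4       # hypothetical expected attempts in year 0
for year in range(40):         # assume cost halves once per year
    observed = rng.poisson(expected_attempts)
    print(f"year {year:2d}: expected {expected_attempts:12.4f}, observed {observed}")
    expected_attempts *= 2     # doubling per cost halving
```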


Thank you for reading and for your thoughtful comment!


One thing missing from this analysis is that not every group/person necessarily needs to defend. If you could create a button that destroys the Earth, there would be plenty of people interested in pressing it. Given this asymmetry, the question is whether intelligent AI could at some point become a much more democratised version of nuclear weapons. There will be groups who only need to "win" once (and are not deterred by MAD, since their goal is destruction), while the "good guys" need to win every single time, without exception.
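
A quick back-of-the-envelope illustration of that "win every single time" asymmetry - the per-year probabilities below are invented purely for illustration:

```python
# If each year there is an independent probability p that some group attempts
# and succeeds at a catastrophic attack, the defenders' chance of winning
# every single time decays exponentially with the number of years.
for p in (0.001, 0.01, 0.05):        # hypothetical per-year success odds
    for years in (10, 50, 100):
        survival = (1 - p) ** years
        print(f"p={p:.3f}, {years:3d} years: P(defenders always win) = {survival:.3f}")
```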


Good post, but I'm not sure I agree with the implications around bioengineered viruses. While I agree we might build defenses that keep the overall death rate low, biological and psychological constraints make these defenses pretty unappealing.

You mentioned that while biotech makes it easier for people to develop viruses, it also makes it easier to develop vaccines. But if the ability to create viruses becomes easy for bad actors, we might need to create *hundreds* of different vaccines for the hundreds of new viruses. Even if this works, taking hundreds of vaccines doesn't sound appealing, and there are immunological constraints on how well broad-spectrum vaccines will ever work.

In the face of hundreds of new viruses and the inability to vaccinate against all of them, we’ll need to fall back to isolation or some sort of sterilizing tech like ubiquitous UV. I don’t like isolation, and I don’t like how ubiquitous UV might create weird autoimmune issues or make us especially vulnerable to breakthrough transmissions.


Thank you for reading and for your thoughtful comment!


I think this is a good analysis of a new technology, but I worry that it is missing half of "Michael Doomer"'s point. I am not convinced that AI Doom is inevitable, but I think we should address the arguments honestly and in full. Sufficiently advanced AI is not just a new technology that changes the coefficients of the game; it is (at some point of advanced-ness) a new player, and perhaps one who can introduce new technologies whose coefficients we don't know yet. Your assessment of new technologies is accurate, and I would bet that it continues to be accurate for some time within the "new tech" context - but we need to assess it from the "new player" perspective too. It is fair to argue that AI is nowhere near "new player" status yet, but the biggest benefits that AI proponents claim we'll get come from levels of AI capability that would be much closer to that.

"Why is the offense-defense balance so stable even when the technologies behind it are rapidly and radically changing?"

I would like to claim that nuclear weapons actually did break the O/D balance. The reason nukes brought such a quick end to the Pacific war was that they were such obviously overpowered offensive weapons that the incredibly strong invasion defense (assumed by all parties) that Japan could muster was rendered utterly ineffective - and, even more impressively, this was obvious to all those parties (to the point of causing surrender) immediately after the first two demonstrations, without having to demonstrate that superiority comprehensively in combat in the way that previous new weapons technologies did.

We did not restore the O/D balance by building anti-nuke defensive weapons, and those that do exist are either A) generally accepted to be unreliable enough that we should still prepare for millions of civilian casualties (i.e. losing) or B) just mitigation to perhaps enable the survival of the species (bunkers, etc.). The reason this offensive imbalance did not lead to those with nukes immediately conquering the world was that we (I mean "we" as in "humans," but certainly a large part of "we" was "the countries with nukes") changed the rules of the game to make A) conquest (with or without nukes) far more penalized than it had ever been in the past and B) use of nukes, specifically, heavily penalized.

Sure, in wars where neither side has nukes, or where both have agreed not to use them (Vietnam, Afghanistan, Chechnya, Ukraine, Syria, Iraq, etc.), we see a relative O/D balance, but that's the whole point: we changed the rules such that it is now a Different Game. Gameplay (sorry for the callous video game analogies, but they are effective in highlighting the motivations) in "limited regional wars" retains the O/D balance, but in "unrestricted global warfare" like humanity practiced in WW2, the O/D balance is so completely destroyed that no one wants to play that game any more, because it's no fun: everyone loses.


Yes, I think you're right that the offense-defense balance argument is about how AI will be used as a tool by humans or human organizations. There is a whole other strand of debate about whether AI can be reliably used as a tool at all. This was the center of the debate a few years ago, but as AI safety has expanded beyond the rationalist community, it has focused more on the tool-use part.

And yeah, that's reasonable about nukes. The graph of war deaths does reach an all-time minimum in the 2000s, which is notable, and nukes are involved in that. I'd like to do a more focused study of nuclear weapons in particular.


I think we should have deep conversations about both! There are lots of things that could happen from "human use of AI, intentionally or not" and also "what would happen if the AI decides to do things on its own" - and there are probably important, beneficial things we should do in both categories to maximize benefit and minimize harm.

Re: nukes, I'm not sure that war deaths are a relevant measure, if we accept (as I'm asserting) that certain things can throw the O/D balance so out of whack that it changes the rules entirely, or leads to an entirely different type of game. My under-the-table implication is that sometimes there are "out of context" changes that make previous methods/measures of assessing things no longer applicable, and that nukes are just the most salient example. We need to understand when/if the "flip the board" strategy is in play.


This feels like an attempt to find a simple rule to make the future seem predictable. I'm wary of such things, and it seems like there are counter-examples. A recent example of the offense-defense balance changing is drone surveillance in Ukraine making attacks much harder, leading to stalemate conditions once both sides had learned enough. So maybe not so rare?


A note I shared elsewhere, with some additions:

If offence and defence both get faster, but all the relative speeds stay the same, I don’t see how that favours offence (e.g. we get ICBMs, but the same rocketry + guidance etc tech means missile defence gets faster at the same rate). But ideas like this make sense, e.g. if there are any fixed lags in defence (e.g. humans don’t get much faster at responding but need to be involved in defensive moves) then speed favours offence in that respect.

That is to say there could be a 'faster is different' effect, where in the AI case things might move too chaotically fast — faster than the human-friendly timescales of previous tech — to effectively defend. For instance, your model of cybersecurity might be a kind of cat-and-mouse game, where defenders are always on the back foot looking for exploits, but they patch them with a small (fixed) time lag. The lag might be insignificant historically, until the absolute lag begins to matter. Not sure I buy this though.
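
A rough sketch of that fixed-lag idea, with all numbers invented: if exploit discovery and damage rates scale up with some speed factor but the human patch lag stays fixed, damage grows roughly with the square of the speed-up, whereas it only grows linearly when the lag shrinks proportionally too.

```python
def annual_damage(speedup: float, fixed_lag: bool) -> float:
    """Toy cat-and-mouse model; baseline numbers are made up."""
    exploits_per_year = 10 * speedup            # attackers find exploits faster
    damage_per_exploit_day = 1.0 * speedup      # and exploit them faster
    patch_lag_days = 30 if fixed_lag else 30 / speedup  # human lag vs. scaled lag
    return exploits_per_year * patch_lag_days * damage_per_exploit_day

for k in (1, 2, 4, 8):
    print(f"speed-up {k}x: damage with fixed lag {annual_damage(k, True):7.0f}, "
          f"with scaled lag {annual_damage(k, False):7.0f}")
```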

A related vague theme is that more powerful tech in some sense ‘turns up the volatility/variance’. And then maybe there’s some ‘risk of ruin’ asymmetry if you could dip below a point that’s irrecoverable, but can’t rise irrecoverably above a point. Going all in on such risky bets can still be good on expected value grounds, while also making it much more likely that you get wiped out, which is the thing at stake.

Also, embarrassingly, I realise I don't have a very good sense of how exactly people operationalise the 'offence-defence balance'. One way could be something like 'cost to the attacker of doing $1M of damage in equilibrium', or in terms of relative spending like Garfinkel and Dafoe do ("if investments into cybersecurity and into cyberattacks both double, should we expect successful attacks to become more or less feasible"). Or maybe something about the cost, per unit of attacker spending, of holding on to some resource (or the cost, per unit of defender spending, of seizing it).
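
One crude way to make the 'double both budgets' framing concrete - the contest-function form and the exponents below are purely illustrative assumptions, not Garfinkel and Dafoe's model:

```python
def p_success(attack_spend: float, defense_spend: float,
              a: float = 1.0, d: float = 1.0) -> float:
    """Toy contest function; exponents a and d are made-up returns to spending."""
    return attack_spend**a / (attack_spend**a + defense_spend**d)

for a, d in [(1.0, 1.0), (1.2, 1.0), (1.0, 1.2)]:
    base = p_success(1.0, 10.0, a, d)       # attacker spends 1, defender 10
    doubled = p_success(2.0, 20.0, a, d)    # both budgets double
    verdict = ("offence gains" if doubled > base
               else "defence gains" if doubled < base else "neutral")
    print(f"a={a}, d={d}: attack success {base:.3f} -> {doubled:.3f} ({verdict})")
```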

This is important because I don't currently know how to say that some technology is more or less defence-dominant than another, other than in a hand-wavey, intuitive way. But in hand-wavey terms it sure seems like bioweapons are more offence-dominant than, say, fighter planes, because it's already the case that you need to spend a lot of money to prevent most of the damage someone could cause with not much money at all.

I see the AI stories — at least the ones I find most compelling — as being kinda openly idiosyncratic and unprecedented. The prior from previous new tech very much points against them, as you show. But the claim is just: yes, but we have stories about why things are different this time ¯\_(ツ)_/¯

Great post.


I'm not sure I understand the explanation, but the observation on per capita deaths and cybersecurity costs is really interesting and compelling.

Great post!


Interesting. I wonder if whether a technology advantages the offense or the defense is completely random, so that given enough time they always balance each other out - kind of like repeatedly flipping a coin.
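
A quick sketch of that coin-flip intuition, purely illustrative: the share of offense-favoring technologies settles near one half, even though the running offense-minus-defense lead keeps wandering.

```python
import random

random.seed(0)
# Suppose each new technology independently favors offense (+1) or defense (-1)
# with equal probability. The share of offense-favoring technologies converges
# toward 1/2, while the raw offense-minus-defense lead still drifts around.
for n in (10, 100, 1_000, 10_000, 100_000):
    flips = [random.choice((1, -1)) for _ in range(n)]
    offense_share = flips.count(1) / n
    print(f"{n:6d} technologies: offense share {offense_share:.3f}, net lead {sum(flips):+d}")
```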

As an aside, I cannot imagine the collective amount of work it took to collect the raw data for the trends-in-global-conflicts chart. That must have required a legion of military historians working for decades! Just forming a ballpark estimate for one war is a huge undertaking.


And that's why the offense/defense balance between humans and other animals has not changed at all in the past few thousand years, and animals don't have to worry about humans dominating them any time soon!

This article assumes what it's trying to prove. You use examples where the capabilities afforded to both sides increase at the same rate, so of course the balance of power remains the same. This is indeed applicable to future advances in autonomous weapons and bioengineering, but is not applicable to concerns about self-improvement and self-replication, since humans cannot effectively do those.

In addition, an even balance of power does not guarantee good outcomes for individuals. As the US and Russia improved their military technology over the past hundred years, they stayed pretty similar in terms of relative power, but the chance of us all dying in a nuclear apocalypse has gone up dramatically.

War deaths haven't significantly increased because the cost-benefit analysis has not changed; countries are willing to spend a certain fraction of their civilians' lives on conquest, and there's no reason why improved weapon technology would change that.

But give the terrorists and the US government both a button that says "kill all humans", and I wouldn't expect to be alive for much longer.


If the alignment problem is solved

(That is, we can ask our good AIs for a vaccine and be confident they will give us a vaccine, and just a vaccine - not a vaccine that also has some mind-control side effects.)

Then it's plausible that the offense/defense balance doesn't shift too drastically.

(Although I suspect it's more likely that whoever reaches ASI first gets to take over all reality, in this hypothetical where alignment is solved)

If the AIs aren't exactly under human control at the best of times, then it's very hard to defend. If your best AI safety techniques amount to keeping the AI in a box, then even if your box is very secure, you aren't going to manage to defend yourself.

If you're trying to defend yourself against Godzilla by summoning Cthulhu, well, whichever monster wins, things probably won't end well.


Hey Maxwell! Thanks for this post! You might have seen that I made a response to it. It's on Substack now: https://juralnetworks.substack.com/p/thoughts-on-the-offense-defense-balance?r=2ktfhm.


This is interesting.

From the comments, nuclear warheads (and the defensive maneuver of mutually assured destruction) are an interesting example of an offense-defense renegotiation. I hope you dig into it & post your thoughts about that!

I would be curious to further explore the offense vs 'minimal to no defense' balance — the terrorism balance — which seems to be the type of black swan biotechnology threat that Doomer Michael is most worried about.

I have not researched the relevant statistics on the frequency and intensity of terrorist activity, but we've certainly seen technology increase the power of a single individual's actions. Simply put, one can do a lot more damage with an AK-47 than with a machete...
