Discussion about this post

Daniel:

A good post, and one I'll incorporate into my own thinking, but misapplied to bioweapons.

For bioweapons, the overwhelming advantage has always gone to offense. The asymmetry comes from the fact that a single virus can kill millions, but it takes millions of vaccines (and people willing to take said vaccines) to protect millions. (Plus developing the vaccines, the manufacturing and distribution capability, etc.) If AI allows easy creation of both disease and vaccine, that makes the attacker's job easy but only a small part of the defender's job easy.

This has been true throughout history - my understanding is that a large majority of the indigenous peoples of the Americas, including the South and Central American empires, were killed by disease, even without intentional biological warfare from the newcomers.

COVID, while unlikely to have been actual biological warfare, is a succinct demonstration that the hard part of defending against a pandemic is manufacturing and distributing vaccines and getting everyone vaccinated, not developing the vaccine - and AI may not be of much help with that.

The truth is that people just haven't tried biological warfare that hard yet, most likely because there hasn't been a safe way to do so - one's own population would be just as vulnerable as one's enemy's.

Now, if you could develop a disease + a vaccine, spend a few years to vaccinate your own country, and then unleash the disease...AI makes that easier to do.

MicaiahC:

The graph of war deaths is profoundly unconvincing to me as an argument that attack and defense are balanced. For example, I think most people would agree that World War I was mostly defense-favored, but if we presume that "defense-favored = fewer deaths," that would imply we should see a dip in casualties!

This is because, in war, you defend yourself by killing attackers.

You can also extend this to why this doesn't apply to nukes: with retaliation as the known norm, "offense tech" (like number of missiles, ability to aim, and so on) *is* defense tech. But while that holds between humans, it would not necessarily hold when your adversary has different weaknesses - what looks like an "offense-defense" balance is really an "offense-offense" balance. Hence bioweapon risk from AI would be uniquely asymmetrical. If it were a human designing attacks, they'd have to worry about the disease spreading back to them, but for an AI that would no longer be a concern.

