A grad student plops down in front of her computer, where her code has been running for the past few hours. After several months of waiting, she has finally secured time on the James Webb Space Telescope to observe a freshly discovered exoplanet: Proxima Centauri D.
“That’s strange.” Her eyes dart over the results. Proxima D’s short 5-day orbit means she can observe both sides of the tidally locked planet. But the brightness of each side doesn’t vary nearly as much as it should. The dark side is gleaming with light.
The Argument
This argument requires a few assumptions.
1. Strong evidence of intelligent alien life on a nearby planet
2. Future moral value is not inherently less important than present moral value
3. Many types of beings contain moral value, including nonhuman animals and aliens
I will call people who have a Singer-style wide moral circle and a Parfit-style concern for the long-term future “Longtermist EAs.” Given these assumptions, let’s examine the basic argument given by Longtermist EAs for why existential risks should be a primary concern.
1. Start with assumption 2: the lives of future human beings are not inherently less important than the lives of present ones. If they were, even a tiny discount rate compounded over centuries would imply absurd trade-offs; should Cleopatra have eaten an ice cream that caused a million deaths today?
2. Then consider that humanity may last for a very long time, or may be able to greatly increase the amount of moral value it sustains, or both.
3. Therefore, the vast majority of moral value in the universe lies along the possible future paths where humanity does manage to last for a long time and support a lot of moral value.
4. Existential risks make it less likely or impossible to end up on these paths, so they are extremely costly and important to avoid.
But now introduce assumptions 1 and 3 and the argument falls apart. The link between the second and third points is broken when we discover another morally valuable species which also has a chance to settle the galaxy.
Discovering aliens nearby means that there are likely billions of planetary civilizations in our galaxy. If, like Singer, you believe that alien life is morally valuable, then humanity’s future is unimportant to the sum total of moral value in the universe. If we are destroyed by an existential catastrophe, another civilization will fill the vacuum. If humanity did manage to preserve itself and expand, most of the gains would be zero-sum, won at the expense of another civilization that might have taken our place. Most of the arguments for caring about human existential risk implicitly assume a morally empty universe if we do not survive. But if we discover alien life nearby, this assumption is probably wrong and humanity’s value-over-replacement goes way down.
Holding future and alien life to be morally valuable means that, on the discovery of alien life, humanity’s future becomes a vanishingly small part of the morally valuable universe. In this situation, Longtermism ceases to be action relevant. It might be true that certain paths into the far future contain the vast majority of moral value, but if there are lots of morally valuable aliens out there, the universe is just as likely to end up on one of these paths whether humans are around or not, so Longtermism doesn’t help us decide what to do. We must either impartially hope that humans get to be the ones tiling the universe or go back to considering the nearer-term effects of our actions as more important.
Consider Parfit’s classic thought experiment:
Option A: Peace
Option B: Nuclear war that kills 99% of human beings
Option C: Nuclear war that kills 100% of humanity
He claims that the difference between C and B is greater than the difference between B and A. The idea is that option C destroys not just 100% of present-day humanity but all future value as well. But if we’re confident that future value will be realized by aliens whether we destroy ourselves or not, then there isn’t much of a jump between B and C. When there’s something else to take our place, there’s little long-run difference between any of the three options, so the immediate difference, 99% of lives lost versus one more percent, is what dominates.
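To make that concrete, here is a minimal toy sketch in Python, with entirely made-up placeholder numbers (8 billion present lives, an arbitrary 1,000,000 units of future value), of how the gaps between Parfit’s options shift once aliens are assumed to backfill the future:

```python
# Toy model of Parfit's three options. All numbers are illustrative
# placeholders, not estimates: present lives are in billions and
# "future value" is an arbitrary unit for everything that comes after us.

PRESENT_LIVES = 8          # billions of people alive today
FUTURE_VALUE = 1_000_000   # stand-in for the value of a long, flourishing future

def value(option, aliens_backfill):
    """Total value of an outcome. aliens_backfill=True means morally
    valuable aliens realize the future's value even if humanity goes extinct."""
    if option == "A":                      # peace
        return PRESENT_LIVES + FUTURE_VALUE
    if option == "B":                      # nuclear war kills 99%
        return 0.01 * PRESENT_LIVES + FUTURE_VALUE
    if option == "C":                      # nuclear war kills 100%
        return FUTURE_VALUE if aliens_backfill else 0

for aliens in (False, True):
    a, b, c = (value(o, aliens) for o in "ABC")
    print(f"aliens_backfill={aliens}:  A - B = {a - b:.2f}   B - C = {b - c:.2f}")
```

Without aliens, the B-to-C gap dwarfs the A-to-B gap, which is Parfit’s point; once aliens backfill the future, the ordering of the two gaps reverses and the upfront loss of lives dominates.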
Many of the important insights of Longtermism remain the same even after this change of perspective. We still underinvest in cultivating long-term benefits and avoiding long-term costs, even if these costs and benefits won’t compound for a billion years. There are other important differences, however.
The most important practical difference that the discovery of alien life would make to EA Longtermist prescriptions is volatility preference. When the universe is morally empty except for humans, the cost of human extinction is much higher than the benefit of human flourishing, so it’s often worth giving up some expected value for lower volatility. Nick Bostrom encapsulates this idea in the Maxipok rule:
Maximize the probability of an okay outcome, where an “okay outcome” is any outcome that avoids existential disaster.
Since morally valuable aliens flatten out the non-linearity between catastrophe and extinction, EA Longtermists must be much more open to high-volatility strategies after the discovery of alien life. They no longer want to Maxipok; they want to maximize good ol’ expected value. This makes technologies like AI and biotechnology, which seem capable of either destroying us or bringing techno-utopia, look a lot better. In comparison, something like climate change, which in the no-aliens world is bad but not nearly as bad as AI or bio-risk because it carries little risk of complete extinction, looks worse than it used to.
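Here is a small sketch of that shift as a decision rule, again with purely hypothetical probabilities and payoffs: a safe strategy versus a high-volatility one, scored both by the Maxipok criterion and by expected value, in an empty universe and in one where aliens backfill most of an extinct humanity’s future value.

```python
# Toy comparison of the Maxipok rule vs. straight expected-value maximization.
# All probabilities and payoffs are hypothetical placeholders, chosen only
# to illustrate the shape of the argument.

# Each strategy: list of (probability, outcome label, value of that future).
# `None` marks human extinction; its value depends on the background universe.
SAFE     = [(0.99, "ok future", 1000), (0.01, "extinction", None)]
VOLATILE = [(0.50, "techno-utopia", 1500), (0.50, "extinction", None)]

def expected_value(strategy, extinction_value):
    """Expected value, where `extinction_value` is what the future is still
    worth without humanity (0 in an empty universe, high if aliens fill
    the vacuum)."""
    return sum(p * (extinction_value if v is None else v)
               for p, _, v in strategy)

def p_okay(strategy):
    """Maxipok criterion: probability of avoiding existential disaster."""
    return sum(p for p, label, _ in strategy if label != "extinction")

for universe, ev_if_extinct in [("empty universe", 0), ("aliens backfill", 900)]:
    print(f"{universe:15s} "
          f"EV(safe)={expected_value(SAFE, ev_if_extinct):7.1f}  "
          f"EV(volatile)={expected_value(VOLATILE, ev_if_extinct):7.1f}  "
          f"P(ok): safe={p_okay(SAFE):.2f} volatile={p_okay(VOLATILE):.2f}")
```

Maxipok prefers the safe strategy in both worlds, but once aliens recover most of the future’s value the expected-value ranking flips toward the volatile strategy, which is the sense in which volatility preference changes.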
The discovery of alien life would therefore bring EA Longtermism closer to the progress studies/Stubborn Attachments view. Avoiding collapse becomes almost as important as avoiding extinction; compounding benefits like economic growth and technological progress, rather than decreasing existential risk at all costs, become the highest-leverage ways to improve the future; and there is room for ‘moral side constraints’ because existential risks no longer impose arbitrarily large utilitarian costs.
Examining and Relaxing Assumptions
Singer
Probably the easiest assumption to drop is the third one, which claims that alien life is morally valuable. Humans find it pretty easy to treat even other members of their own species as morally worthless, let alone other animals. It would be difficult to convince most people that alien life is morally valuable, although E.T. managed it. Many find it intuitive to favor outcomes that benefit Homo sapiens even if they come at the expense of other animals and aliens. This bias would make preserving humanity’s long and large future important even if the universe would be filled with other types of moral value without us.
If you support including cows, chickens, octopi, and shrimp in our wide and growing moral circle, then it seems difficult to exclude advanced alien life without resorting to ‘planetism.’ It might be that humans somehow produce more moral value per unit energy than other forms of life which would be a less arbitrary reason to favor our success over aliens. Even after discovering the existence of alien life, however, we would not have nearly enough data to expect anything other than humans being close to average in this respect.
Aliens
The first assumption, observing an alien civilization, is sufficient but not strictly necessary for this result. Observation of any alien life, even simple single-celled organisms, on a nearby planet greatly increases our best guess at the rate of formation of intelligent and morally valuable life in our galaxy, thus decreasing humanity’s importance in the overall moral value of the universe.
Longtermism and wide moral circles may dilute existential risk worries even on Earth alone. If humans go extinct, what are the chances that some other animal, perhaps another primate or the octopus, will fill our empty niche? Even if it takes 10 million years, that all rounds out in the grand scheme which Longtermism says is most important.
Robin Hanson’s Grabby Aliens theory implies that the very absence of aliens from our observations, combined with our early appearance in the history of the universe, is evidence that many alien civilizations will soon fill the universe. The argument goes like this:
1. Human earliness in the universe needs explaining. The universe is 13.8 billion years old, but the average star lives for 5 trillion years. Advanced life is the result of several unlikely events in a row, so it is much more likely to arise later on this timeline than earlier.
2. One way to explain our earliness is if some force prevents advanced life from arising later on the timeline. If alien civilizations settle the universe, they would prevent other advanced civilizations from arising, so the only time we could arise is strangely early, before the universe is filled with other life.
3. The fact that we do not yet see any of these galaxy-settling civilizations implies that they must expand very quickly, so the time between seeing their light and being conquered by them is always short.
The argument goes much deeper, but if you buy these conclusions along with Longtermism and a wide moral circle, then humanity’s future barely matters even if we don’t find cities on Proxima D.
Big If True
The base rate of formation of intelligent or morally valuable life on earth and in the universe is an essential but unknown parameter for EA Longtermist philosophy. Longtermism currently assumes that this rate is very low which is fair given the lack of evidence. If we find evidence that this rate is higher, then wide moral circle Longtermists should shift their efforts from shielding humanity from as much existential risk as possible, to maximizing expected value by taking higher volatility paths into the future.
Worth noting that existential risks are not limited to risks that would end humanity. They also include the lock-in of bad trajectories (e.g. authoritarian dictatorships, indifference to animal torture, etc.). So there may still be a case for Longtermism if we think humans are more likely to achieve decent values than aliens.
There may also be value in humanity's survival for the sake of diversity in the universe, if only to ensure that there isn't a disvaluable lock-in later on.
Objections:
1. Do we have strong reasons to think that morally valuable aliens will also be morally upright aliens? Maybe they'll be a moral catastrophe. Maybe they like ritualistic torture or factory farming that makes our version look like paradise.
2. You're arbitrarily limiting existential risk to risks that destroy humans, but human-caused disasters could affect aliens as well. Probably not biotech, but AGI definitely could.
Both of those arguments seem plausible to me and very substantially weaken your argument for higher volatility.
Having a higher prior on alien civilizations existing should perhaps make us somewhat more willing to speed up tech development instead of prioritizing safety, because other alien civilizations may prioritize safety less or have worse values, but I don't think it transforms Longtermism nearly as much as you think.