In a recent debate between George Hotz and Eliezer Yudkowsky on Dwarkesh Patel’s podcast, the e/acc-doomer debate about corporations played out mostly as normal.
Hotz, on the accelerationist side, claims that organizations and individual humans differ vastly in intelligence, and that organizations are much smarter. Thus, we already live with superintelligences; we just call them corporations and governments.
Eliezer, with the doomers, disagrees. He believes that the range of human intelligence occupies a tiny sliver of what is possible, and that human organizations aren't an exception. Econ bloggers can make better decisions than massive world governments, so organizations can't be beyond the range of individual human intelligence.
If this were a debate competition I'd give Eliezer the points, but the synthesis of their views comes down against the doomer case. Eliezer is correct that organizations make predictably bad decisions. They are also extremely slow. Information propagates through large organizations at the speed of team meetings: 3-5 business days at a time. The slack and distortion of brain-to-words-to-brain-to-words up the chain of command makes organizations far less efficient than individual human intelligence.
But this intelligence asymmetry is what Hotz points out in the beginning, just in the opposite direction. There is a huge gap between the intelligence of individuals and their organizations, but individuals are the ones on top. Your neurons are wired close together with nothing but electricity between them. The "neurons" of a large organization talk at the water cooler, but not on Tuesdays or Thursdays because that's when they work remotely.
To complete the synthesis, notice that these huge, lumbering, stupid organizations rule the world. Eliezer can predict when a government's decisions are wrong, but he cannot take over or even escape their influence. The coexistence of humans and large organizations is optimistic for our chances against a coming superintelligence, but not because we puny humans can partially align superintelligent governments. It's optimistic because it proves that giant slow idiots can rule over super-fast information-processing agents.
Interesting post! Note there is some debate about the claim that econ bloggers outperform world governments. https://www.lesswrong.com/posts/woCPxs8GxE7H35zzK/noting-an-error-in-inadequate-equilibria
Agreed that it proves that they *can*, but it doesn't prove that giant slow idiots *will* rule over arbitrarily fast information processing systems, or even that that's the way to bet.
Whales do not usually get their way over human beings, however large and bad at thinking they are. The only reason there are still whales is that humans value the existence of whales.
And introducing even a small number of humans onto a virgin planet would doom almost all of the existing animals. Give it a few thousand years, and nothing will be left except the things that humans value.