In a recent debate between George Hotz and Eliezer Yudkowsky on Dwarkesh Patel’s podcast, the e/acc-doomer debate about corporations played out mostly as normal.
Hotz, on the accelerationist side, claims that organizations and individual humans differ vastly in intelligence, and that organizations are much smarter. Thus, we already live with superintelligences; we just call them corporations and governments.
Eliezer, with the doomers, disagrees. He believes that the range of human intelligence occupies a tiny sliver of what is possible, and that human organizations aren’t an exception. If econ bloggers can make better decisions than massive world governments, then organizations can’t be beyond the range of individual human intelligence.
If this were a debate competition I’d give Eliezer the points, but the synthesis of their views comes down against the doomer case. Eliezer is correct that organizations make predictably bad decisions. They are also extremely slow. Information propagates through large organizations at the speed of team meetings: 3-5 business days at a time. The slack and distortion of brain-to-words-to-brain-to-words up the chain of command makes organizations far less efficient than individual human intelligence.
But this intelligence asymmetry is what Hotz points out in the beginning, just in the opposite direction. There is a huge gap between the intelligence of individuals and their organizations, but individuals are the ones on top. Your neurons are wired close together with nothing but electricity between them. The “neurons” of a large organization talk at the water cooler, but not on Tuesdays or Thursdays because that’s when they work remotely.
To complete the synthesis, notice that these huge, lumbering, stupid organizations rule the world. Eliezer can predict when a government’s decisions are wrong, but he cannot take over or even avoid its influence. The coexistence of humans and large organizations is not optimistic for our chances against a coming superintelligence because we puny humans can partially align superintelligent governments. It’s optimistic because it proves that giant, slow idiots can rule over super-fast information-processing agents.
Hmm. I think there is a flaw here: you chose the most slow-moving orgs. Some orgs are extremely competent, as intelligent as very clever individuals and working at much greater scale, e.g. SpaceX.
Such organisations have not remotely come close to taking over the world.
Interesting post! Note there is some debate about the claim that econ bloggers outperform world governments. https://www.lesswrong.com/posts/woCPxs8GxE7H35zzK/noting-an-error-in-inadequate-equilibria