13 Comments
Jan 31 · Liked by Maxwell Tabarrok

It occurs to me that (relative to my consumption ability) there is already approximately infinite content on the Internet.

My consumption of this sea of content is not evenly distributed. I do not randomly sample, and I don't think anyone else does either.

What do we actually do? Our content consumption is socially biased.

1. We consume content that is popular in our social group so that we can talk about something with our friends. (For everyone who is progress-adjacent, Maximum Progress is a must-read!)

2. We become attached to particular online characters, and want to read their work. (I like Max's takes on metascience, so I also want to see what he has to say about land reclamation.)

3. We search for particular pieces of information, which is a more transactional use. (This might go away because I can just ask GPT-N≥4.)

To put it Robin Hanson-style: online writing isn't about information.

Epistemic status: highly speculative.

author

I do think that human bias will prop certain things up even after AI can substitute in lots of other ways. But this does rely on humans being the primary consumers of writing and research. I'd like to have some value to a superhuman AI, just in case.

I do think your point about the internet already being approximately infinite is right. That's why I think Google is the best analogy for AI. It is notable how similar the user experiences of Google and GPT-4 are. Google has been good for idea generators even though it does make substitutes for their output much cheaper.

Feb 3 · Liked by Maxwell Tabarrok

Your analogy with farming got me thinking. Nowadays bad and ok quality food has been commoditised, but rich people pay a premium for high quality, more artisan, more unusual foods. Here in the UK there are exceptions of relatively cheap but high quality content, such as the BBC and the Guardian. On the quality/cost graph these would be outliers, a status attributable to alternate revenue streams, i.e. the public license fee (BBC) and donations (the Guardian), coupled with a public service mission. But the general point I'm making stands.

I could see the work of ideas people following a similar trajectory: largely commoditised, bad-to-ok quality content generated by AI, free or cheap for the masses, with a premium paid for the really good stuff (human created, above the level of what AI can do). So pretty similar to how things are now, with paid subscriber-only content, only that the quantity and quality of the free and cheap stuff will increase, pushing the average quality from human idea generators higher.

The big assumption here is that some humans CAN still generate better ideas than AIs. We've seen AI beat the best humans at Go and StarCraft, and do protein folding and other crazy-scale things. But will similar achievements become dominant for ideas? None of us know right now. The tech is improving rapidly, but we don't know when the limits will be reached, what they will be, or what time frame we are talking about - tens of years, hundreds of years?

Also: ideas are easy to have, but putting them out with high production values is much harder (but let's not go down that rabbit hole now!). I suspect we will be waiting a very, very long time before AI is routinely able to out-idea a brilliant human with a few AI copilots 🙂


This is pretty concerning, indeed.

What I will say is that there is a difference between uncreative wordcels who are just kinda good at writing very cliché articles that sound good, and wordcels who can dig into the data and produce new insight. If the latter are replaced, then I struggle to think of any career path involving thinking that won't be replaced. Maybe those who work with both mind and hands? Like, say, experimental biologists, doctors and so on?

But why would a creative wordcel be more replaceable than, say, a computer programmer?


One thing I wonder is whether there is a market for so-called wordcels to combine their humanity with the machine’s output? In other words, can we marry the ineffably human to the machine? I suspect we can, but I don’t quite know what this looks like.

That said, I can see a world in which pure machine output is sufficient for the vast majority of human consumption.

I don’t think that most people especially value original insight, whether human- or machine-derived. But this was true before AI, and it will be true irrespective of how much more powerful AI becomes.


sexy woman who reads articles

Feb 4 · Liked by Maxwell Tabarrok

As I brought up IRL, even though I can't find an objection to our comparative advantage in the physical world, one cooler way of thinking about it is that we might become cyborgs with large AI "cortexes" augmenting (or substituting for large parts of) our brains. Human body + AI mind.

Feb 4 · Liked by Maxwell Tabarrok

`"Practicing my Handwriting in 1439" is a piece by Maxwell Tabarrok. The piece discusses how handwriting remained an essential skill for centuries after the printing press was invented, and only recently has it become less important. The piece also references a Bloomberg column by Tyler Cowen, which predicts that AI will disrupt the status and earnings of "wordcels" and "ideas people".`

- Google Search's generative AI answer


Yes, who knows? Remember when word processing was going to reduce paper use because we would not need multiple drafts of documents to edit them? It did eliminate "secretaries" but did not devalue office work.

Jan 31 · Liked by Maxwell Tabarrok

My bet is that although being multilingual will lose usefulness and earning potential, it will be great for status. A proficient speaker should always be quicker and more natural as translation latency will never go away due to differing sentence structure and context windows. That's why I'm streamlining my language learning goals to basically learn a couple of languages to high proficiency rather than becoming a polyglot.


> Comparative advantage makes me confident that there will be some room left for humans in the world even if AIs become super-human agents.

Comparative advantage theorems say that, if you make some potentially unwarranted assumptions, your labor must have some nonzero economic value.

There is no theorem saying that this value must be greater than a human's living costs.
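
A toy calculation makes the gap concrete. All the numbers below are made up purely for illustration: an AI that vastly outproduces a human in everything still leaves the human a comparative advantage in whatever they are relatively least bad at, and trade gives them a positive income, but nothing stops that income from sitting far below subsistence. A minimal sketch, assuming hypothetical productivities and prices:

```python
# Toy numbers (entirely made up) illustrating the point above: comparative
# advantage guarantees the human a positive income, but says nothing about
# whether that income covers their living costs.

ai_per_hour    = {"essays": 10_000, "code": 20_000}  # AI output per hour
human_per_hour = {"essays": 1.0,    "code": 0.5}     # human output per hour

# Opportunity cost of one essay, measured in units of code given up.
human_opp_cost = human_per_hour["code"] / human_per_hour["essays"]  # 0.5
ai_opp_cost    = ai_per_hour["code"] / ai_per_hour["essays"]        # 2.0

# The human's opportunity cost is lower, so they have a comparative
# advantage in essays; any price between 0.5 and 2.0 units of code per
# essay makes trade mutually beneficial. Pick one in the middle.
price_code_per_essay = 1.0

# Specializing in essays, the human earns 1 unit of code per hour.
human_income = human_per_hour["essays"] * price_code_per_essay

# Suppose staying alive costs the equivalent of 100 units of code per hour.
subsistence = 100.0

print(human_income > 0)             # True  -- the theorem's promise holds
print(human_income >= subsistence)  # False -- and it still doesn't pay rent
```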

Note that comparative advantage theorems apply to chimps in the modern economy just as much as they apply to humans in a future AI economy.

In practice, comparative advantage theorems don't always apply. The theorems assume a pile of tasks that either humans or AIs can do. But if there is a chance of humans making expensive mistakes, if the economy is changing so fast that by the time the AI has explained how to do X, X is obsolete and the AI needs to explain a new task, or if the AI wants security and humans are easy to social engineer, then the AI has good reasons not to use humans at all.


AI has been so nerfed on discussing "controversial" topics that I imagine ideas people writing in the realms of politics, culture, etc. will be important for a while, unless the major tech corporations radically shift in culture (which seems highly unlikely).


I miss the old Big Tech. At least they used to believe in the future.

Political correctness has done so much damage there. More than I thought would be possible in such a short time. Not that there wouldn't have been problems anyway (SEO metastasis, the sheer bullshit of crypto, and the endless army of bots and scammers, for instance), but the problems that do come always get worse when a mission is made to hire the dumber and less qualified into an ever-increasing number of positions, while simultaneously demanding ever-decreasing freedom for people to speak their minds.

Dumbness doesn't stay contained. It creeps into every crevice.
