21 Comments
AffectiveMedicine

Great post!

I blame collaboration. The number of authors per paper has steadily increased, and for what? This is only partly in jest. I have published with multiple coauthors and as a solo author, and the solo papers were far easier, faster, and more interesting (I think). If you care about "idea generation" in particular, it's not at all clear that collaboration would help - the need for consensus tends to water ideas down while also drawing out the process.

J.K. Lund

Maxwell, I agree. I called it the "Innovation Red Queen's Race," but the cause is not readily apparent from Bloom et al. In my article on this very topic (https://www.lianeon.org/p/the-innovation-red-queens-race), I discussed one possible contributor: misplaced incentives.

"Since the 1970s, there has been a growing expectation that researchers not only publish frequently but also publish works that are frequently cited by others (have a large impact). Together, these metrics, known as the “h-score,” are a kind of “batting average” for researchers. The more a researcher publishes and the more his work is cited, the more likely he is to obtain grants and further his career."..."By conducting one’s research in areas in crowded fields where that breakthrough has already occurred, that is, doing incremental work rather than risking exploratory science, researchers are more likely to have their work cited by others."

I do not think that is the only factor at play, of course; there are likely multiple causes.

Andrus G. Ashoo

Related to the misplaced (and perverse) incentives is a piece that Adam Mastroianni wrote not too long ago, 'Science is a strong-link problem' (https://www.experimental-history.com/p/science-is-a-strong-link-problem).

J.K. Lund

Very interesting. Thank you for sharing!

Randall Parker

What else has changed?

- Average researcher quality has declined as more people have gone into research. The average IQ of college students has declined as enrollment expanded, and the same has happened at the grad school level. So the raw number of researchers is not the right input measure.

- Autonomy of researchers has declined. The granting agencies have shifted towards peer group review. A better mind will, on average, get reviewed by lesser minds that don't understand the possibilities.

- Measures of researcher output are gamed by lesser researchers, so better researchers get less money as a result. This compounds the decline in the quality of people pursuing research careers: more lesser minds reviewing better minds. A similar thing happens in industry, by the way.

- People who find the granting mechanisms too frustrating will leave a field.

A long time ago, sitting in a restaurant in Santa Barbara, Ephraim Racker, the founder of the Cornell biochemistry department, explained to me how Richard Nixon declared his war on cancer and brought lots of researchers together at an NIH building in Bethesda, Maryland to plan it out. He said they all contributed ideas and had a big list to fund. But he also said they all knew it wouldn't work: Nixon placed too much emphasis on treatment development, while they all believed they had to figure out cellular regulation first before treatments could be developed. Makes sense. Curiously, he didn't mention figuring out the immune system, and yet antibodies are now being used against cancer. Nor did he mention the development of DNA sequencing machines, and yet that is essential as well. But the point remains: governments and peers reduce the autonomy of scientists.

Max Shtein

As a practicing scientist / entrepreneur, I can confirm many of the points made by Maxwell and Randall. That still doesn't exclude problems of translation into business, many of which are not addressed by the recent growth in spending by the granting agencies specifically on commercialization (to which I can also attest from firsthand experience). May have to make a series of my own posts on this topic.

Thomas Martin

I haven’t read these papers. But your post made me wonder: if the output measure of interest is economic $ (seems to be), then could research be chugging along just fine and there is a translation loss from research findings to economically measurable results? That is, ideas don’t just magically become money; a lot of executional detail is needed, regulatory hurdles must be overcome, etc.

Maxwell Tabarrok

Yes, I completely agree. I think this is very likely. In the model this would correspond to the exponent on A in the final goods production function. A, the stock of ideas, is chugging along, but if that exponent is decreasing due to translation costs then the gains won't show up in dollar-measured outcomes.
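
A minimal sketch of that mechanism, using stylized notation that is assumed here rather than quoted from the post: if final output is

$$ Y_t = A_t^{\sigma} X_t, \qquad \text{so} \qquad \frac{\dot Y_t}{Y_t} = \sigma \frac{\dot A_t}{A_t} + \frac{\dot X_t}{X_t}, $$

then a lower $\sigma$ (translation frictions, regulatory hurdles, execution costs) means the same growth in the idea stock $\dot A_t / A_t$ shows up as less dollar-measured growth, which is exactly the pattern Thomas describes.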

gregvp

The 99.9th percentile idea almost certainly has negative value. All the value comes from a tiny fraction of ideas. If the costs of evaluating ideas are rising, this goes some way to explaining what's going on.

Another headwind is that in a highly diversified economy, the impact of any successful idea is limited. When 60 percent of people were working in agriculture, a better plow or something of that ilk had a dramatic impact. A better way to make plastic bags (or, sticking with agriculture, a self-driving tractor) is not going to have nearly the same impact in today's economy.

Every idea that creates a new product reduces the impact on GDP of all later ideas. Every idea that creates a new industry or occupation is worse in this regard.
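
A back-of-the-envelope way to see the diversification point (my illustration, not from the post): to a first order, a productivity gain in one sector moves GDP roughly in proportion to that sector's share of the economy,

$$ \Delta \ln \mathrm{GDP} \;\approx\; s_i \,\Delta \ln(\text{productivity in sector } i), $$

so a better plow when agriculture's share $s_i$ was around 0.6 moves the aggregate far more than an equally large improvement in a sector whose share is a fraction of a percent.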

devesh

We probably require more resources to do research and extend what we know as time passes.

For example, scientists today need cutting-edge laboratory equipment to advance physics, but hundreds of years ago Newton didn't need any of that to completely revolutionize the field and our understanding of it.

Economía simplificada

If my memory is correct, the U.S. has lost a lot of potential productivity growth because of housing scarcity in its richest cities. Maybe the problem is not finding ideas but using them at full potential.

David Hugh-Jones

I wasn't quite clear - is the point that Bloom et al. assume that sigma = 1 and lambda = 1 in the equations?

Maxwell Tabarrok

Yes, that's right
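
For readers wanting to see what those two restrictions amount to, here is a generic semi-endogenous sketch (the notation is my guess, not quoted from the post): with

$$ Y_t = A_t^{\sigma} X_t \qquad \text{and} \qquad \dot A_t = \theta\, S_t^{\lambda} A_t^{-\beta}, $$

setting $\sigma = 1$ lets measured TFP growth stand in for $\dot A_t / A_t$, and setting $\lambda = 1$ makes effective research effort scale one-for-one with measured inputs $S_t$. Relax either exponent and a falling ratio of growth to research inputs no longer has to mean that ideas are getting harder to find.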

Donald

I think it's probably specialization. Consider an economy entirely centered around a single industrial process X. A 1% boost in efficiency of that process gives a 1% boost in GDP.

Now consider a billion separate processes; improving any one of them would have a negligible effect on GDP.

Inventing the spinning wheel in ancient Rome could well have boosted GDP by 20%.

Most of the modern economy is made of so many processes that improving them all must inevitably take a lot of researchers.
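
Donald's arithmetic, spelled out as a worked example (illustrative numbers only): if GDP is split evenly across $N$ processes, a 1% efficiency gain in one of them raises GDP by roughly

$$ \Delta \ln \mathrm{GDP} \;\approx\; \frac{0.01}{N}, $$

so the single-process economy gets the full 1%, while a billion-process economy gets about $10^{-9}$ of a percentage point per improvement.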

Grant Dever 🌄

Great piece, Max. The metaphor was so helpful that even I could understand what the equations meant! Looking forward to your analyses about the forces driving this stagnation. I have my biases but that's likely all they are.

Moritz Wallawitsch

> Similarly, doubling the stock of ideas (A) might more than double economic output if ideas can be implemented freely...

The problem here is that there is no fixed stock/pie of ideas. And yes, regulation growth is a neglected parameter overall.

I also wrote a criticism of this economics paper and "The burden of knowledge" here: https://scalingknowledge.substack.com/p/knowledge-burden-or-boost

Maxwell Tabarrok

Yes, there is no obvious unit of ideas. That's why Bloom et al. take "percentage increases in the economic growth rate" as the output, since it is more easily measurable. But this is a function of ideas (A), not A itself. It is only a reliable measure of ideas if the exponent on A is constant and equal to 1 over the whole period. They assume this is true, but without much justification.

John Fisher

I, as a non-economist, would say the whole idea is a gross oversimplification. There is no meaningful single-variable relationship between R&D spending and GDP. There may well be a satisfying correlation that meets expectations, but the confusion here is, to me at least, a sign of a botched hypothesis.

If you compared energy use with GDP (itself a very abstract statistic), you might find some examples of causation. I think it may be much more productive to focus on energy, raw materials, trade, and the various frictions. And of course, I want economists everywhere to start including the externalities: for example, the cost of the switch to renewables and of climate resilience should be included in measures of fossil fuels.

Dmitrii Zelenskii

Frankly, I'm with Scott Alexander on this one. Bloom et al.'s assumption should be our _default_ assumption: exponential rises like this are everywhere, and the "Gods of Smooth Lines" do their work. But it's interesting food for thought nonetheless.

Levi Mitze-Circiumaru

Are academic economists really so naive as to imagine that the only thing to consider when modeling economic growth is R&D spending (plus this diminishing-returns-on-research-investment parameter worked into the model to save it from being smashed on rocky empirics)? You’d think a precocious high school student with zero economic training could see the problems with such a simplistic one-factor model. Do economists just turn off their brains and follow the leaders in the field in lockstep regardless of how unsatisfactory their models are? I’m probably just misunderstanding something. Can anyone explain?

Ben Stallard

As someone with a physics background, I found this near-impossible to follow. I understand that when economists hear "acceleration" they think of percent growth, but that's not how acceleration works at all. F = ma, so a = F/m. If we assume that a given engine consuming a given amount of fuel produces a constant force, then constant acceleration is an arithmetic, linear increase in velocity, not an exponential one.
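
The two notions Ben is contrasting, side by side (a clarifying sketch, not from the post): constant physical acceleration gives linear velocity growth, while a constant economic growth rate is exponential,

$$ \dot v = a \;\Rightarrow\; v(t) = v_0 + a t, \qquad \text{versus} \qquad \frac{\dot Y}{Y} = g \;\Rightarrow\; Y(t) = Y_0 e^{g t}, $$

so keeping the proportional rate $g$ constant requires ever-larger absolute increments, not a constant force on a fixed mass.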
