Good piece.
One of the main issues I have with economic approaches to knowledge production like Romer’s is that they approach Knowledge as a singular entity that produces an aggregate economic output, as opposed to a set of n knowledges that each have their own output curves. In practice (and in the technological forecasting lit), there’s a recognition of this fact in the modelling of new technologies as overlapping S-curves:
The S-curve approach recognizes that there is a point where the ROI for continued investment in a particular technology tapers off. So we don’t run out of Ideas; we just overinvest in exploiting existing ideas until the cost-to-ROI ratio becomes ridiculous. Unfortunately, the structure of many of our management approaches and institutions is built on extraction and exploitation of singular ideas, and we are Very Bad at recognizing when it’s necessary to pivot from exploit to explore.
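To make the dynamic concrete, here’s a minimal sketch of two overlapping logistic S-curves. All parameter values (`ceiling`, `midpoint`, `rate`) are illustrative, not fitted to anything: once a mature technology passes its midpoint, the marginal return on one more unit of investment collapses, even while a second, emerging curve still has its steep phase ahead of it.

```python
import math

def s_curve(t, ceiling, midpoint, rate):
    """Logistic output curve for a single technology."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def marginal_return(t, **kw):
    """Approximate ROI of one more unit of investment at time t."""
    return s_curve(t + 1, **kw) - s_curve(t, **kw)

# Illustrative parameters: tech A is maturing now, tech B emerges later.
tech_a = dict(ceiling=100, midpoint=10, rate=0.8)
tech_b = dict(ceiling=100, midpoint=30, rate=0.8)

for t in (10, 20, 30):
    # Tech A's marginal return collapses after its midpoint,
    # while tech B's only becomes large around t = 30.
    print(t, round(marginal_return(t, **tech_a), 2),
             round(marginal_return(t, **tech_b), 2))
```

The exploit-to-explore failure mode is then just continuing to pour investment into tech A past t = 20, when the marginal return on B has already overtaken it.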
Even worse, this is true not only for the structure of our industrial-commercial research, but for our basic research apparatus as well. Grants are awarded on the basis of previous positive results rather than for exploration of net-new areas, resulting in overexploitation of existing ideas and incentivized research stagnation.
In practice, you can think of it like this comic...
except instead of a circle, our knowledge distribution looks like this
Thanks. I generally agree with all these points, but do they change any of the conclusions? These complexities aren’t represented in the models because, well, they would make the models more complex, and it’s not clear we need them. But if they made a crucial difference, I’m sure they would get worked into the models. (It’s actually not uncommon to see models that break out different variables for each invention or product; it’s just that those details don’t end up being important for high-level summaries like this.)
As with any metric, it comes down to what you’re looking to diagnose—and whether averages across the total system are a useful measure for determining overall health. If someone had a single atrophied leg and really buff arms, the average would tell you they have above-average muscle strength, but that’s obviously not the whole story.
Same goes for innovation: if idea production is booming in a single area and dead everywhere else, it might look like the net knowledge production ecosystem is healthy when it is not. And that’s the problem here, especially when you factor in that new idea production is accelerated by cross-pollination between fields. By taking the average, we miss out on determining which areas of knowledge production need nurturing, which are ripe for cross-pollination, and which are at risk of being tapped out in the near future. And so any diagnostic metric we hope to create to more effectively manage knowledge ecosystems has to be able to take this into account.
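The averaging problem above can be sketched in a few lines, using entirely made-up per-field counts: the aggregate keeps growing even though two of three fields are stagnant or dying.

```python
# Hypothetical yearly idea-production counts for three fields.
fields = {
    "field_a": [10, 20, 40, 80],   # booming
    "field_b": [10, 10, 10, 10],   # stagnant
    "field_c": [10, 8, 5, 2],      # dying
}

# Aggregate view: total output rises every year, so the system looks healthy.
totals = [sum(vals) for vals in zip(*fields.values())]
print(totals)

# Per-field view: end-of-window output relative to the start tells a
# different story for each field.
for name, vals in fields.items():
    print(name, vals[-1] / vals[0])
```

A diagnostic built on `totals` alone would never flag `field_c` as needing attention, which is exactly the atrophied-leg problem.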
I agree that if you want to understand where there might be problems/opportunities, you can’t just look at averages.
The most exciting prospect here, imo, is building capacity to identify underresearched and underinvested foundational knowledge areas, filling those gaps, and then building scaffolding between them so they can cross-pollinate. And doing this recursively, so we can accelerate the pace of knowledge production and translation.
Also strongly recommend the adjacent possible work if you haven’t seen it yet.
https://www.technologyreview.com/2017/01/13/154580/mathematical-model-reveals-the-patterns-of-how-innovations-arise/