One thing I’d like to see is a discussion of the potential for measurement error. For example, software is notably deflationary, yet we don’t see that in the statistics because of how we count. (We still don’t really know how to value the contribution of software to GDP.) Bloom (2020) may just be measuring the wrong outputs for the current type of progress. For example, if publications or patents are a weaker signal of ideas than they were in the past, the methodology breaks down.
I agree in general about the measurement challenge. However, one strength of Bloom (2020) is that they look at a variety of areas using different metrics: Moore’s law, agricultural productivity, etc. (They don’t really look at patents, in part because it’s hard to know what patents mean/represent.) In any case, it’s not just GDP. The fact that there are similar patterns across different metrics is evidence that there’s something real going on.
Holden Karnofsky and Scott Alexander go further, albeit with less solid quantitative support, and extend the pattern to art and general human accomplishment. E.g., here’s Scott (this comes right after the block quote above):
Or: what about human excellence in other fields? Shakespearean England had 1% of the population of the modern Anglosphere, and presumably even fewer than 1% of the artists. Yet it gave us Shakespeare. Are there a hundred Shakespeare-equivalents around today? This is a harder problem than it seems – Shakespeare has become so venerable with historical hindsight that maybe nobody would acknowledge a Shakespeare-level master today even if they existed – but still, a hundred Shakespeares? If we look at some measure of great works of art per era, we find past eras giving us far more than we would predict from their population relative to our own. This is very hard to judge, and I would hate to be the guy who has to decide whether Harry Potter is better or worse than the Aeneid. But still? A hundred Shakespeares?
Good point, though I don’t find looking at a selection of areas all that convincing. I could just as easily choose areas with consistent exponential growth that I would guess don’t look like this, like solar panels or genome sequencing. Even if things were getting better on average, you would expect some things to get less efficient over time too. (For example, think about that inflation-components chart people share all the time.)
One last thing: we would probably want to look at outputs and not inputs. Robert Gordon’s sort-of nemesis, Chad Syverson, has done work on how some big super-trends, like steam rail and electricity, take a long time to develop and to have an impact on the world. Might be worth looking into as a counterpoint to the Gordon thesis.
Well, the point of a lot of this is to look at outputs as a function of inputs, and that is exactly what Bloom (2020) does. You need some measure of inputs (they basically use R&D spending, deflated by the wage rate, as a proxy for the effective number of researchers) and some measure of output (GDP, transistor density, crop yields, etc.), and then you work out the quantitative relationship between the two.
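To make the setup concrete, here is a minimal sketch of that kind of calculation, in the spirit of Bloom (2020) but with invented numbers and my own variable names: research productivity is the growth rate of the output metric divided by the effective number of researchers (R&D spending deflated by the wage rate), and the question is whether that ratio falls over time.

```python
import math

# Toy sketch of a Bloom (2020)-style research-productivity calculation.
# All numbers are invented for illustration, not taken from the paper.

# Output metric by decade, e.g. transistor density or crop yield (index).
output = {1970: 1.0, 1980: 8.0, 1990: 64.0, 2000: 512.0}

# Nominal R&D spending and the high-skill wage rate, by decade (index).
rnd_spending = {1970: 1.0, 1980: 4.0, 1990: 16.0, 2000: 64.0}
wage = {1970: 1.0, 1980: 1.5, 1990: 2.2, 2000: 3.3}

decades = sorted(output)
for start, end in zip(decades, decades[1:]):
    # Growth rate of the output metric over the decade (log points per year).
    growth = math.log(output[end] / output[start]) / (end - start)

    # "Effective researchers": R&D spending deflated by the wage rate.
    researchers = rnd_spending[start] / wage[start]

    # Research productivity: output growth per effective researcher.
    # "Ideas getting harder to find" means this ratio falls over time.
    productivity = growth / researchers
    print(f"{start}s: growth/yr = {growth:.3f}, "
          f"effective researchers = {researchers:.1f}, "
          f"productivity = {productivity:.4f}")
```

With these made-up numbers the output metric grows at a steady exponential rate, but only because effective researchers rise roughly sevenfold, so measured research productivity falls; that declining ratio is the shape of the result Bloom (2020) reports across their case studies.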
If solar panels or genome sequencing don’t look like this, that would be very interesting! My guess would be that they do.