Is growth linear, not exponential?
This week Thomas Philippon posted a paper (PDF) claiming that TFP growth is linear, not exponential. What does this mean, and what can we conclude from this?
A few people asked for my opinion. I’m not an economist, and I’m only modestly familiar with the growth theory literature, but here are some thoughts.
Background for non-economists
Briefly: what is TFP (total factor productivity)? It’s essentially a technology multiplier in the production function, making capital and labor more productive. It isn’t measured directly; it’s calculated as a residual, by subtracting the contributions of capital and labor growth from GDP growth. Whatever growth is left over is attributed to TFP.
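For readers who want the mechanics, here is the standard growth-accounting setup (my own illustration; the paper’s notation may differ). With a Cobb-Douglas production function

$$ Y_t = A_t K_t^{\alpha} L_t^{1-\alpha}, $$

TFP growth is backed out as the “Solow residual”:

$$ \Delta \ln A_t = \Delta \ln Y_t - \alpha \, \Delta \ln K_t - (1-\alpha) \, \Delta \ln L_t, $$

where $\alpha$ is capital’s share of income (commonly taken to be around 0.3). Whatever output growth isn’t explained by measured capital and labor shows up in $A_t$.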
In neoclassical growth theory, TFP matters because it is what sustains economic growth. Without increases in TFP, per-capita income would plateau at some level determined by the existing technology. We can keep increasing per-capita output if and only if we keep improving the productivity multiplier from technology.
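To see why income would plateau, note that in the textbook Solow model (again, a standard illustration rather than anything specific to the paper), steady-state output per worker is

$$ y^{*} = A^{\frac{1}{1-\alpha}} \left( \frac{s}{n+\delta} \right)^{\frac{\alpha}{1-\alpha}}, $$

where $s$ is the savings rate, $n$ population growth, and $\delta$ depreciation. With $A$ held fixed, capital accumulation alone carries income only up to $y^{*}$; sustained growth in output per worker requires $A$ to keep rising.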
What does the paper say?
Philippon’s core claim is that a linear model of TFP growth fits the data better than an exponential model. Over long enough time periods, this is actually a piecewise linear model, with breaks.
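In functional form, the two competing hypotheses (my paraphrase of the setup) are

$$ \text{exponential:}\quad A_t = A_0 e^{g t}, \qquad \text{linear:}\quad A_t = A_0 + b\,t, $$

that is, TFP rising by a constant percentage each year versus a constant absolute amount each year. The piecewise version lets the slope $b$ jump to a new value at occasional break dates.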
To demonstrate this, Philippon subjects the two models to a variety of statistical tests on multiple data sets, mostly 20th-century, from the US and about two dozen other countries. A later section tests the models on European data from 1600–1914. The linear model outperforms on pretty much every test.
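Philippon’s actual tests are more sophisticated (encompassing tests, out-of-sample forecasts, and so on), but a minimal sketch of the basic horse race might look like the following; the function and the made-up example data are mine, not the paper’s.

```python
import numpy as np

def compare_tfp_models(years, tfp):
    """Fit a linear trend (TFP = a + b*t) and an exponential trend
    (log TFP = a + g*t) to the same series, then compare in-sample RMSE
    in levels. A rough illustration of the comparison, not the paper's
    actual battery of tests."""
    t = np.asarray(years, dtype=float) - years[0]
    tfp = np.asarray(tfp, dtype=float)

    # Linear model: least-squares fit of TFP on time.
    b_lin, a_lin = np.polyfit(t, tfp, 1)
    pred_lin = a_lin + b_lin * t

    # Exponential model: least-squares fit of log TFP on time.
    g_exp, a_exp = np.polyfit(t, np.log(tfp), 1)
    pred_exp = np.exp(a_exp + g_exp * t)

    def rmse(pred):
        return float(np.sqrt(np.mean((tfp - pred) ** 2)))

    return {"linear_rmse": rmse(pred_lin), "exponential_rmse": rmse(pred_exp)}

# Made-up example: a series that grows by a roughly constant amount each
# year should favor the linear model.
years = np.arange(1947, 2020)
rng = np.random.default_rng(0)
tfp = 1.0 + 0.013 * (years - 1947) + rng.normal(0, 0.01, len(years))
print(compare_tfp_models(years, tfp))
```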
One theoretical implication of linear TFP growth is that GDP per capita can continue to grow without bound, but that growth will slow over time. Depending on the assumptions you make, growth will converge either to zero or to some positive constant rate.
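The slowdown falls out of the arithmetic. If $A_t = A_0 + b\,t$, then the TFP growth *rate* in year $t$ is

$$ \frac{\Delta A_t}{A_{t-1}} = \frac{b}{A_0 + b(t-1)} \longrightarrow 0, $$

because a constant absolute increment becomes a shrinking percentage of an ever-larger base. Whether per-capita GDP growth goes all the way to zero or settles at a positive rate then depends on the rest of the model (for instance, how capital deepening and population respond), which is the assumption-dependence mentioned above.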
What to think?
First, I think the evidence Philippon presents, at least for the 20th century, is compelling. It really does seem that TFP has been growing linearly over the last several decades. In effect, this is a mathematical model of the Great Stagnation.
Over longer time spans, however, a purely linear model doesn’t work. TFP, and GDP, clearly grow faster than linearly over long periods:
In fact, our best estimates of very long-run GDP show growth that is faster than exponential:
(Note that both of these charts are on a log scale.)
Philippon deals with this by making the model piecewise linear: at certain points, the slope of the line jumps discretely to a new value. He puts the 20th-century break at about 1933:
Pre-20th-century breaks occurred in 1650 and 1830:
The breaks are determined via statistical tests, but they are presumed to represent general-purpose technologies (GPTs). The 1930s were a turning point in electrification, especially of factories; 1830 was the beginning of railroads and the Second Industrial Revolution. 1650 seems less clear; it might be due to rising labor input that is not accounted for in the calculations, or to the rise of cottage industry.
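I haven’t checked exactly which break tests the paper uses; structural-break methods such as Bai-Perron are the standard tools. But the core idea can be sketched crudely as choosing the break date that minimizes the squared error of a two-segment linear fit. Everything below is my own illustration, not the paper’s procedure.

```python
import numpy as np

def best_single_break(years, tfp, min_segment=10):
    """Grid-search a single break date for a piecewise-linear TFP trend.
    For each candidate break year, fit separate linear trends before and
    after it and record the combined sum of squared errors; return the
    break that fits best. Illustrative only: real structural-break tests
    (e.g. Bai-Perron) also handle inference and multiple breaks."""
    best_year, best_sse = None, np.inf
    for i in range(min_segment, len(years) - min_segment):
        sse = 0.0
        for seg in (slice(None, i), slice(i, None)):
            t, y = years[seg], tfp[seg]
            coeffs = np.polyfit(t, y, 1)  # slope, intercept
            sse += float(np.sum((y - np.polyval(coeffs, t)) ** 2))
        if sse < best_sse:
            best_year, best_sse = years[i], sse
    return best_year, best_sse
```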
The GPT interpretation makes sense, but it leaves open what is, to me, the most interesting question: how often do these breaks occur, and how big are the jumps?
Philippon briefly suggests a model in which the breaks are random: the result of a (biased) coin flip each year, with a probability of roughly 0.5% to 1%. However, I find this unsatisfying. Again, over the long term we know that growth is super-linear and probably even super-exponential. If the piecewise-linear model is correct, then over time the breaks should be getting larger and more closely spaced. But a constant annual break probability implies a Poisson-like process in which breaks are, on average, evenly spaced, which doesn’t fit that historical pattern. And the paper doesn’t even suggest how to model the size of the change in growth rate at each break. So the model is incomplete.
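To make the objection concrete, here is a toy simulation of that kind of process: a break arrives each year with a small, fixed probability, and the trend slope steps up when one does. The ~1% arrival probability follows the paper’s suggestion; the size of the slope increase is an arbitrary assumption of mine, since (as noted) the paper doesn’t specify one.

```python
import numpy as np

def simulate_random_breaks(n_years=400, p_break=0.01, slope_step=0.01, seed=0):
    """Simulate TFP under a 'random breaks' model: linear growth whose slope
    increases by `slope_step` whenever a break occurs; a break arrives each
    year with fixed probability `p_break`. Because the arrival rate is
    constant, breaks are on average evenly spaced rather than increasingly
    frequent, so this process does not reproduce the accelerating growth
    seen over very long horizons."""
    rng = np.random.default_rng(seed)
    tfp, slope = 1.0, 0.01
    path, break_years = [], []
    for year in range(n_years):
        if rng.random() < p_break:
            slope += slope_step  # arbitrary jump size (not from the paper)
            break_years.append(year)
        tfp += slope
        path.append(tfp)
    return np.array(path), break_years

path, breaks = simulate_random_breaks()
print(f"{len(breaks)} breaks in 400 simulated years, at years {breaks}")
```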
The interesting idea this paper points to, IMO, is not about long-run growth but about short-run growth. It suggests to me that economic progress might be a “punctuated equilibrium” rather than smooth and continuous. I don’t think this changes our view of progress over the long term. But it could change our view of the importance of GPTs. And it could help explain the recent growth slowdown (“stagnation”).
If this model is right, then “why is GDP growth slowing?” has an answer: it is normal and expected. But there might be a different stagnation question: is the next GPT “late”? When should we even expect it? Relatedly, why don’t computers show up as a GPT causing a distinct break in the linear TFP growth path? Or is that break still to come (the way the break from electrification didn’t show up until the 1930s)?
Stepping back a bit, it seems to me that the big puzzle of growth theory is that we see super-exponential growth over the very long term, but sub-exponential growth over the last several decades. I don’t think we yet have a unified model that explains both periods.
Anyway, I found this paper interesting, and I hope to see other economists respond to it and build on it.
See also Tyler Cowen’s comments.