Will we ever run out of new jobs?
Large language models like GPT-4 are reshaping the knowledge economy: from automating tasks in customer service, to deskilling management consultants, to fears that they could replace human creativity in movie scriptwriting. In 2023 Goldman Sachs projected that the equivalent of 300 million full-time jobs could be exposed to automation by AI.
Furthermore, the near-term prospect of general robotics could disrupt industries reliant on physical labor. Similarly, many expect that in the coming years we will see the emergence of AI agents: autonomous software entities designed to perform tasks or solve problems without constant human intervention, which could essentially act as “drop-in remote workers”.
In short, the current wave of AI comes with a wave of automation anxiety. First, there is the challenge of frictional technological unemployment: many people could lose their jobs to automation and then need time to find new employment in a different role. Second, and more importantly, there is the concern that as we move towards an AGI economy with millions, then billions, then trillions of AGIs, we will face long-term, structural technological unemployment as we permanently run out of jobs for humans. Much of the debate can be summarized in three short statements:
Concerns about the speed or scope of labor substitution have often been premature or exaggerated in the past.
Labor substitution has been very positive for humanity so far. As many old tasks have been automated, human labor has moved into many new, previously non-existing tasks.
The long-term question that decides structural technological unemployment is whether human labor can keep moving into new tasks.
Experts disagree on whether human labor can keep moving to new tasks indefinitely. In this blog post I will suggest a clear answer:
Humans will run out of new tasks to move to when AGI surpasses humans in fluid general intelligence. Fluid general intelligence is the ability to reason, solve novel problems, and think abstractly, independent of acquired knowledge or experience. If and when AGI reaches this level, it will be better than humans at learning novel tasks, and the interval between a new task appearing in the economy and its automation will shrink to zero.
Current AI models still have modest levels of fluid intelligence, and there is no consensus timeline for AGI with strong fluid intelligence. Even if specific timelines are hard to agree on, this underlines that the idea that we could eventually run out of new jobs to shift to should be taken seriously.
1. Automation anxiety is not novel
As early as 1948 Norbert Wiener warned that “(...) the first industrial revolution, the revolution of the ‘dark satanic mills’, was the devaluation of the human arm by the competition of machinery. (...) The modern industrial revolution is similarly bound to devalue the human brain, at least in its simpler and more routine decisions. (...) taking the second revolution as accomplished, the average human being of mediocre attainments or less has nothing to sell that it is worth anyone’s money to buy.”[1]
Similarly, the US Congress held hearings on Automation and Technological Change as early as 1955, with some worrying that technology could “produce an unemployment situation, in comparison with which the depression of the thirties will seem a pleasant joke.”
More recently, the 2013 Oxford study by Carl Benedikt Frey and Michael Osborne, “The Future of Employment,” estimated that up to 47% of U.S. jobs were at risk of automation within a decade or two, reigniting fears of widespread unemployment.
The fact that someone has mistakenly “cried wolf” doesn’t mean that wolves don’t exist. However, it is a reminder to keep a healthy dose of scepticism and pursue strategies that are robust across scenarios and timelines.
2. We are already technologically unemployed farmers
In pre-industrial societies, the overwhelming majority of people worked as subsistence farmers. However, over time, the labor intensity of farming decreased and crop yields increased thanks to a long list of technological innovations from the plow, to selective breeding, to crop rotation, to seed drills, to threshing machines, to tractors, to fertilizers, to pesticides, to water sprinkler systems, to genetically modified crops.
This transition did not lead to permanent mass unemployment for 90% of the population; instead, it freed up labor to pursue new opportunities in other sectors. The lump of labor fallacy is the incorrect belief that there is a fixed amount of work or jobs in an economy, so that if machines take some of these jobs, the number of jobs available to humans must shrink. In reality, we automated 90% of the jobs that existed then, yet transitioned to more, new, and better jobs.
Such technology-induced shifts have happened more than once. For example, Smith is one of the most common occupational surnames in the US, derived from the blacksmith profession. My surname “Kohler” derives from the German word “Köhler,” which means “charcoal burner”: someone who produced charcoal from wood. Charcoal burning was a significant occupation in medieval Europe, providing fuel for blacksmiths, metalworking, and other industrial processes. However, with the Industrial Revolution, charcoal was replaced in most applications by coal from mines.
I’m rather glad to be a technologically unemployed charcoal burner. So, if history is our guide, even if we automate another 90% of current jobs, we will eventually find more and better jobs somewhere else, in novel tasks that we can’t even imagine yet.
3. Luddite horses
Some economists and intellectuals, such as Wassily Leontief, Gregory Clark[2], Nick Bostrom[3], CGP Grey, and Calum Chace[4], have argued that we should not overgeneralize from the historical evidence that automation has led to more and better jobs, and that there is some future level and/or speed of automation for which this will no longer hold. The classic example in this camp is the horse. Horses used to play a key role in Earth’s economy, and the “horse economy” grew well into the 20th century. However, horses were eventually pushed out of the economy by the cheaper “machine muscles” of internal combustion engines. Here is how Max Tegmark[5] describes it:

“Imagine two horses looking at an early automobile in the year 1900 and pondering their future. ‘I’m worried about technological unemployment.’ ‘Neigh, neigh, don’t be a Luddite: our ancestors said the same thing when steam engines took our industry jobs and trains took our jobs pulling stage coaches. But we have more jobs than ever today, and they’re better too: I’d much rather pull a light carriage through town than spend all day walking in circles to power a stupid mine-shaft pump.’ ‘But what if this internal combustion engine really takes off?’ ‘I’m sure there’ll be new jobs for horses that we haven’t yet imagined. That’s what’s always happened before, like with the invention of the wheel and the plow.’

Alas, those not-yet-imagined new jobs for horses never arrived. No-longer-needed horses were slaughtered and not replaced, causing the U.S. equine population to collapse from about 26 million in 1915 to about 3 million in 1960. As mechanical muscles made horses redundant, will mechanical minds do the same to humans?”
4. Fluid intelligence is the key factor
So, are we destined to eventually follow the path of the horse in the economy? Daron Acemoglu & Pascal Restrepo (2018) argue that “the difference between human labor and horses is that humans have a comparative advantage in new and more complex tasks. Horses did not. If this comparative advantage is significant and the creation of new tasks continues, employment and the labor share can remain stable in the long run even in the face of rapid automation.” In other words, humans’ high general intelligence allows us to be more adaptive and shift to new tasks as the automation of more established tasks rolls forward.
The economists Anton Korinek & Donghyun Suh (2024) have created a model specifically considering why humans might run out of new tasks in the face of AGI and what would happen to wages in such a scenario. Their basic approach is to order all tasks that humans could perform by computational complexity; as digital computation expands, more and more tasks can be automated, moving the automation frontier from left to right. This is essentially a restatement of Moravec’s metaphorical landscape of human competences and automation, in which the peaks reflect the most complex human competences and AI automation is a rising tide that continuously moves the shoreline up.
If the complexity of economic tasks performed by humans is bounded (in other words, if there is no infinitely high mountain in Moravec’s landscape of human competences), automation will eventually cover all tasks, leading to complete automation. In the short term, automation increases productivity and boosts wages for non-automated tasks. In the long term, humans run out of tasks at which they can outperform machines and the labor share of income collapses fairly steeply as we approach full automation.
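To make this dynamic concrete, here is a deliberately simplified toy sketch, not Korinek & Suh’s actual model: tasks are drawn with bounded complexity, an automation frontier rises each period, and the labor share is computed as the fraction of output still produced by human-performed tasks. The equal weighting of tasks and the omission of wage dynamics are simplifying assumptions made purely for illustration.

```python
# Toy sketch (not the actual Korinek & Suh model): tasks are ordered by
# complexity, an automation frontier rises over time, and the labor share
# is the fraction of output still produced by human-performed tasks.
import numpy as np

rng = np.random.default_rng(0)
n_tasks = 1_000
# Bounded task complexity: no "infinitely high mountain" in Moravec's landscape.
task_complexity = np.sort(rng.uniform(0, 100, n_tasks))

def labor_share(frontier: float) -> float:
    """Share of output from tasks still beyond the automation frontier.

    Assumes, purely for illustration, that every task contributes equally to
    output and that automated tasks need negligible human labor.
    """
    return float((task_complexity > frontier).mean())

# The frontier rises as digital computation expands (here: linearly per period).
for year, frontier in enumerate(np.linspace(0, 110, 12)):
    print(f"t={year:2d}  frontier={frontier:5.1f}  labor share={labor_share(frontier):4.2f}")
# Once the frontier passes the most complex task, the labor share hits zero:
# with bounded task complexity, full automation arrives in finite time.
```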
It’s useful to have explicit models of the AGI economy and what might happen if we run out of new jobs to move to. Having said that, I would argue that a strict focus on the computational complexity of potential tasks can be misleading. What really decides whether or not humans can keep moving to complex and novel tasks is the comparative advantage of the human brain in those tasks.
Can we always move into more complex tasks?
The complexity of some tasks in disciplines such as futures studies or economics is (de facto) unbounded. However, the maximum complexity that a human brain can represent is bounded. The economically relevant question for such tasks is not whether AI has the computing power to perform these tasks perfectly, but whether AI has better price-performance on them than humans.
For example, both futurists and economists have imperfect prediction records: few predicted the Great Financial Crisis of 2008 or used their insights to make money on financial markets. More recently, in late 2022, 85% of economists polled by the Financial Times and the University of Chicago predicted the US would have a recession in 2023, which did not happen.
My judgement is that AI will likely eventually be able to outperform humans even on tasks with unbounded complexity and irreducible uncertainty. First, in some domains AI already performs complex tasks at a level no human can match: no human can filter emails or social media posts based on 10,000-dimensional decision boundaries. Second, the exponential growth of parameters in artificial neural networks means that, given enough training data and compute, AI can represent an exponentially growing amount of complexity, whereas our biological neural networks have fairly fixed upper limits.
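To make the price-performance criterion above concrete, here is a minimal sketch with entirely hypothetical names and numbers: a task goes to whichever producer delivers acceptable-quality output at the lower cost per task, regardless of whether either one could perform it perfectly.

```python
# Minimal sketch of the price-performance criterion, with hypothetical numbers:
# a task goes to whichever producer delivers acceptable output at a lower cost
# per task, not to whichever could perform it "perfectly".
from dataclasses import dataclass

@dataclass
class Producer:
    name: str
    cost_per_hour: float   # e.g. wages, or compute plus energy
    tasks_per_hour: float  # throughput at acceptable quality

    def cost_per_task(self) -> float:
        return self.cost_per_hour / self.tasks_per_hour

def assign(task: str, human: Producer, ai: Producer) -> str:
    cheaper = min((human, ai), key=Producer.cost_per_task)
    return f"{task}: {cheaper.name} at {cheaper.cost_per_task():.2f} per task"

# Entirely made-up figures, purely to illustrate the decision rule.
economist = Producer("economist", cost_per_hour=120.0, tasks_per_hour=0.5)
model = Producer("AI model", cost_per_hour=4.0, tasks_per_hour=0.25)
print(assign("draft macro forecast", human=economist, ai=model))
```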
Can we always move into novel tasks?
Current AI systems don’t perform well without lots of training data. This is true both for existing tasks with scarce data (e.g. operating on rare diseases) and for new tasks that are introduced into the economy. If AI continues to require substantial initial amounts of human-generated data to imitate, this would plausibly allow humans to keep moving to novel frontier tasks and create data on them before AI can take over.
Whether or not this limitation persists comes back to the distinction between fluid and crystallized general intelligence. Crystallized intelligence is the ability to use accumulated skills, knowledge, and experience. Fluid intelligence is the capacity to reason and solve unfamiliar problems, independent of knowledge from the past. It involves the ability to:
Think logically and solve problems in novel situations.
Identify patterns and relationships among stimuli.
Learn new things quickly and adapt to new situations.
Current large language models have a lot of crystallized intelligence, but they are weak at logical reasoning and fluid intelligence. People can reasonably disagree on how much fluid intelligence future AGIs will gain from algorithmic innovations or emergence. However, the idea that humans will keep moving from automated tasks to novel tasks is incompatible with the existence of AI with human-level or above-human-level fluid general intelligence.
If, at some point in the future, AGI can work at or below the cost of human labor and masters the meta-ability to learn novel tasks at least as quickly and as well as humans, we will have permanently lost the reskilling race. Then, new tasks can be automated as quickly as they are created.
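A rough back-of-the-envelope way to picture losing the reskilling race, using made-up numbers: if new task types appear in the economy at some rate and each stays human-only for an automation lag, the steady-state stock of frontier tasks where humans still have an edge is roughly the product of the two, and it vanishes as the lag approaches zero.

```python
# Back-of-the-envelope sketch of the reskilling race (made-up numbers): if new
# task types appear at a constant rate and each stays human-only for an
# "automation lag", the steady-state stock of human frontier tasks is roughly
# arrival_rate * lag. As AGI drives the lag toward zero, that stock vanishes.
def human_frontier_tasks(arrival_rate_per_year: float, automation_lag_years: float) -> float:
    return arrival_rate_per_year * automation_lag_years

for lag in (10.0, 2.0, 0.5, 0.0):
    stock = human_frontier_tasks(arrival_rate_per_year=50.0, automation_lag_years=lag)
    print(f"lag = {lag:4.1f} years -> ~{stock:5.1f} task types where humans still have an edge")
```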
Norbert Wiener. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. Technology Press. pp. 37-38
Gregory Clark. (2007). A Farewell to Alms. p. 286
Nick Bostrom. (2014). Superintelligence. p. 196
Calum Chace. (2016). The Economic Singularity. p. 189
Max Tegmark. (2017). Life 3.0. pp. 125-126