PASTA and Progress: The great irony
Epistemic status: low
Crossposted on High Modernism
A central goal of the Progress community is to accelerate progress. Part of that involves researching the inputs of progress; another part involves advocating for policies that promote progress. A few of the common policy proposals include:
Fixing the housing supply problem
Improving and increasing research and development spending
Increasing immigration
Repealing excessive regulations, particularly in the energy sector
All of these would be very good, and I support them. At the same time, any attempt to increase growth runs up against a number of headwinds:
The US and other Western governments appear to be deeply sclerotic, leading to regulatory bloat and perhaps making change difficult.
Population growth is collapsing in the US, due to both fewer births and less immigration. Under most growth models, people are the key source of new ideas.
Good ideas are (likely) getting harder to find. Growth on the frontier may simply get harder as we pick the “low-hanging fruit,” though this is often debated.
The US economy has grown at an average of 2.7% per year since the Reagan administration; the last ten years have been more disappointing, at under 2%. What could a successful Progress movement accomplish? Raising the rate to 2.5%? To 4%?
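To make the stakes concrete, here’s a quick back-of-the-envelope compounding of those rates over fifty years (my own arithmetic, using the rough figures above):

```python
# Compound a starting output of 1.0 at several annual growth rates for 50 years.
for rate in (0.02, 0.025, 0.04):
    output = (1 + rate) ** 50
    print(f"{rate:.1%} annual growth for 50 years -> {output:.1f}x output")
```

Even the optimistic 4% scenario yields about a sevenfold increase in fifty years: a big win, but not a discontinuity.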
I should emphasize that I admire all of the policy and research work currently being done by advocates of progress. But we usually approach Progress from the frame of the Great Stagnation: we used to grow quickly, then something happened around 1971, and now we grow slowly. I wonder if we should also consider different worldviews about where we stand in relation to the future.
I’m particularly interested in the view that we’re living in the Most Important Century. In this view, we are nearing a breakthrough that could overcome the headwinds of population decline and the ever more difficult search for new ideas: knowledge production via automation.
Holden Karnofsky calls this AI system PASTA: Process for Automating Scientific and Technological Advancement. If PASTA or something similar were created, we might enter a period of increasing growth that would quickly usher in a radically different future.
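To see why automating research could produce increasing rather than steady growth, here is a toy simulation of my own, loosely in the spirit of semi-endogenous growth models; the functional forms and parameter values are arbitrary assumptions for illustration, not Karnofsky’s model. The key difference is whether research effort is capped by population or scales with output itself:

```python
# Toy idea-production model: new ideas depend on research effort and the
# existing idea stock. The exponent 0.5 (< 1) encodes "ideas get harder to
# find." If effort is proportional to a fixed population, growth is steady;
# if effort scales with the idea stock itself (automated research), the
# feedback loop produces accelerating, eventually explosive growth.
def simulate(years, automated, share=0.05):
    ideas, population = 1.0, 1.0
    for year in range(1, years + 1):
        effort = share * (ideas if automated else population)
        ideas += effort * ideas ** 0.5
        if ideas > 1e9:  # stop once the explosion is unmistakable
            return f"passes 1e9 in year {year}"
    return f"reaches {ideas:.1f} after {years} years"

print("population-limited:", simulate(100, automated=False))
print("automated research: ", simulate(100, automated=True))
```

The population-limited run grows steadily (roughly twelvefold over the century), while the automated run explodes past any fixed threshold well within the century. This is only a cartoon, but it captures the qualitative shape of the Most Important Century claim.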
It sounds a bit far-fetched, but there hasn’t been a devastating argument made against it. Science sounds like something that would be hard to automate, but AI isn’t progressing the way we expected: rather than slowly working its way up from low-skilled to high-skilled labor, as was often anticipated, AI seems to be on a crash course with creative professions like writing (GPT systems) and now illustration (DALL-E). Machine learning is all about training by trial and error without precise instruction. And as impressive as current models are, they aren’t even 1% the size of human brains. But that will quickly change as computing power becomes cheaper. (More on AI and biological anchors here.)
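For a rough sense of that size gap, here is the back-of-the-envelope comparison (my own arithmetic, using commonly cited ballpark figures, with synapses as a loose analogue of parameters):

```python
# Compare a large model's parameter count to a low-end estimate of the
# human brain's synapse count. Both figures are rough, commonly cited values.
gpt3_parameters = 1.75e11   # GPT-3 parameter count
brain_synapses = 1e14       # low-end estimate; some estimates run to 1e15
print(f"GPT-3 is about {gpt3_parameters / brain_synapses:.2%} the brain's size")
```

That ratio prints well under 1%, and shrinks further under higher synapse estimates.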
Plus, when have friends of progress been averse to sci-fi-sounding futures?
If this seems compelling, Karnofsky’s post on PASTA (and the rest of the Most Important Century series) discusses these scenarios in much more detail.
So should we just build PASTA and reap the rewards of Progress? No; more likely we should be extremely worried. There are serious risks from misaligned artificial intelligence, which could pose a threat to human civilization, and there are possibly also risks from humans colonizing the galaxy without sufficient ethical reflection on how to do so responsibly.
So we’re caught in a funny place: a lot of proximate growth goals look good but not world-changing, and the “big bet” may be a suicide mission. I’m not sure what to make of all of this. The implication might simply be to work on AI alignment and policy. At a bare minimum, I think it’s worth being more curious about these discussions.
There’s a big irony here: as pessimistic as EAs are about AI trajectories, they see the possibility of, in Karnofsky’s words, “a stable, galaxy-wide civilization.” Wouldn’t it be silly if we were still working on NSF spending when the takeoff began?