Great article! I think you expressed The Argument well, and much as I see it expressed by those who believe it.
I’m always surprised by how many tools are available to evaluate The Argument…and that its fans rarely use any of them. It’s great to see you use some of these tools to critique it!
By way of comment: at the same time, your article leaves The Argument looking more plausible (to me) than it probably is, just because your critiques don’t include as many angles as they might from progress studies (especially the scientific method and the history of technology). My attempted survey of the possible angles, some but not all of which you tackle:
Most catastrophic risks come with a lot of evidence to tell us how much we should worry about them (the history of infectious disease outbreaks, nuclear accidents and near-accidents, etc.). The Argument never comes with any evidence. Worse yet, it’s rarely presented as a hypothesis to be falsified, but instead as speculation. This is especially surprising because its proponents’ main catastrophic scenario is an accident, and accidents are one of the most common and best-studied kinds of risk (auto accidents, the Tacoma Narrows Bridge, airplane accidents, policies for canceling ferries in dangerously bad weather, nuclear power plant accidents, the question of whether Covid escaped from a Wuhan laboratory or emerged from the Wuhan seafood market, etc.). Accidents are studied by all sorts of people, including actuaries, government technocrats, and popular authors. Successful predictions of catastrophe (or of anything) are almost always based on evidence.
More generally, The Argument is usually presented without any scholarship or context outside of speculative philosophy. But there is plenty of relevant scholarship (beyond the above) in the histories of technology, human well-being, and predictions of apocalypse, and probably in many other domains.
A cost-benefit analysis would also be needed to make The Argument credible. Lifespans are about 35 years shorter in poor countries than they are for Japanese and Swiss women, and about 15 years longer for the richest US females than the poorest US males, so it’s a fair estimate that the average person loses 25+ years of life to risks that can be attacked by anti-poverty, public-health, and economic-growth measures alone. Peter Attia is probably right that exercise, sleep, and food account for another 10 years. As you say, The Argument glibly assumes that AI will solve pretty much any problem it needs to solve to kill us all. We have no reason to believe that, but those who do should surely also believe that AI will solve any problem it needs to solve to gain those 35+ years of life (the 25+ above plus Attia’s 10) for the average person among the 8 billion of us. On that arithmetic (sketched below), even Scott’s estimated 2% risk of an AI apocalypse looks like a bargain. The context provided by cost-benefit analysis also reminds us where we ourselves should focus our attention. And of course the likely upside of AI doesn’t rest on a glib assumption about AI capabilities alone: AI is a general-purpose technology, so progress studies tells us something about what upside to expect.
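If it helps, here’s a minimal back-of-envelope sketch of that comparison. The 2% risk, the 35-year upside, and the 8 billion population are the figures above; the 40 years of average remaining life expectancy is a placeholder of my own, not a number from the article:

```python
# Back-of-envelope expected-value comparison in life-years.
# All inputs are rough figures from the comment above, except
# baseline_years_left, which is my own placeholder assumption.

population = 8e9          # people alive today
p_apocalypse = 0.02       # Scott's estimated risk of an AI apocalypse
years_gained = 35         # life-years per person if AI closes the gaps above
baseline_years_left = 40  # placeholder: average remaining life expectancy

# Expected life-years lost if the apocalypse scenario occurs
expected_loss = p_apocalypse * population * baseline_years_left

# Expected life-years gained otherwise, assuming (as The Argument's fans
# implicitly do for the downside) that AI solves the problems it needs to
expected_gain = (1 - p_apocalypse) * population * years_gained

print(f"expected loss: {expected_loss:.2e} life-years")          # ~6.4e9
print(f"expected gain: {expected_gain:.2e} life-years")          # ~2.7e11
print(f"gain/loss ratio: {expected_gain / expected_loss:.0f}x")  # ~43x
```

Even if you halve the upside or double the remaining life expectancy, the expected gain still exceeds the expected loss by an order of magnitude, which is the point of the comparison.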
Finally, The Argument is rarely presented with a plausible mechanism for how the catastrophe would actually come about.