The thing is, we have many options that aren’t just accelerating or decelerating the whole thing. Like we can single out gain-of-function research and cutting-edge AI capabilities, and accelerate everything except those.
Science is lots of different pieces, and we can pick which of them to push forward: differential technological development.
“25% probability that the domain experts are right x 50% chance that it’s not too late for science to affect the onset of the time of perils x 50% chance that science cannot accelerate us to safety = 6.25%”
This smells of the “multistage fallacy”.
You think of something. List a long list of “necessary steps”. Estimate middling probabilities for each step. And multiply them together to get a small final probability.
The problem is, often some of the steps, or all of them, turn out not to be that necessary. And often, if a step had actually happened, it would do so in a way that gave you strong new information about the likelihood of the other steps, so the probabilities aren’t independent and multiplying them undersells the result.
I.e. if a new device needs 100 new components to be invented, and you naively assume the probability is 50/50 for each component, multiplying gives odds of roughly 1 in 10^30. But then a massive load of R&D money gets directed towards making the device, and all 100 components get made.
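To make that concrete, here’s a minimal sketch (every number made up) of how one shared upstream factor, like whether the funding shows up, breaks the naive multiplication:

```python
# Hypothetical numbers only: comparing the naive "multiply every step"
# estimate with a model where the steps share one common cause.

p_step = 0.5
n_steps = 100

# Naive multistage estimate: treat each of the 100 components as an
# independent coin flip and multiply.
naive = p_step ** n_steps
print(f"naive independent estimate: {naive:.1e}")   # ~7.9e-31, i.e. "never happens"

# Correlated model: one upstream event (the R&D money arriving) largely
# decides all 100 components at once.
p_funding = 0.5
p_all_parts_given_funding = 0.9   # assumed: funded projects usually deliver
correlated = p_funding * p_all_parts_given_funding
print(f"correlated estimate: {correlated:.2f}")      # 0.45
```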
In this particular case, you are assuming a 25% chance that the domain experts are right about the level of X-risk. In the remaining 75%, apparently X-risk is negligible. There is no possibility of “actually it’s way, way worse than the domain experts predicted”.
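As a toy illustration (again, every number here is made up), compare the two-branch picture implied by that estimate with one that reserves even a small slice of probability for the “worse than the experts say” world:

```python
# Hypothetical numbers only: overall risk under the quoted two-branch
# assumption vs. a version that allows a "worse than predicted" branch.

risk_if_experts_right = 0.2   # assumed x-risk level in the "experts are right" world

# Two branches: 25% experts right, 75% risk negligible (as in the quote).
two_branch = 0.25 * risk_if_experts_right + 0.75 * 0.0

# Three branches: move 10% of the "negligible" mass to a much worse world.
three_branch = 0.25 * risk_if_experts_right + 0.65 * 0.0 + 0.10 * 0.8

print(two_branch, three_branch)   # 0.05 vs 0.13
```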
“x 50% chance that it’s not too late for science to affect the onset of the time of perils x 50% chance that science cannot accelerate us to safety ”
If the form of the peril is a single step, say the moment when the first ASI is turned on, then “accelerate to safety” is meaningless. You can’t make the process less risky by rushing through the risky period faster. You can’t make Russian roulette safer by playing it real fast, thus only being at risk for a short time.