What could a superintelligence really do? The prophets’ answer seems to be “pretty much anything.” Any sci-fi scenario you can imagine, like “diamondoid bacteria that infect all humans, then simultaneously release botulinum toxin.” In this view, as intelligence increases without limit, it approaches omnipotence. But this is not at all obvious to me.
The idea of creating ASI as an omnipotent being, far superior and all-knowing, strikes me as a pseudo-religious argument wrapped in technical and rational language that makes it palatable to atheists. It’s a bit like how the wildest predictions from longevity/curing-aging research read like heaven for people who don’t believe in god.
I get how we go from ANI to AGI and then to ASI. It makes sense. But at the same time, something about it doesn’t. Perhaps this is why this position (AGI as a harbinger of extinction) lacks mainstream appeal.
If some rogue AI were to plot against us, would it actually succeed on the first try? Even genius humans generally don’t succeed on the first try of everything they do. The prophets think that AI can deduce its way to victory—the same way they think they can deduce their way to predicting such outcomes.
Is this rationalists anthropomorphizing AI to behave/think like they do, perhaps?
As someone who thinks AI doom is fairly likely (~65%), I reject this as psychologizing.
I think there is an argument for TAI x-risk which takes progress seriously. The transformative AI does not need to be omnipotent or all-knowing: it simply needs to be more capable than anything humanity can muster against it.
Consider the United States versus the world population from 1200: roughly the same size. But if you pitted those two actors against each other in a conflict, it would be very clear who would win.
So either one would need to believe that current humanity is very near the ceiling of capability, or that we are not able to create more capable beings. (The latter has, in narrow domains, already turned out false, and the range of those domains appears to be expanding.)
If some rogue AI were to plot against us, would it actually succeed on the first try? Even genius humans generally don’t succeed on the first try of everything they do. The prophets think that AI can deduce its way to victory—the same way they think they can deduce their way to predicting such outcomes.
I claim this is not so outlandish: the current US would win against the 13th-century world 1000 times out of 1000. And here’s a fairly fine-grained scenario detailing how that could happen with a single agent trapped in the cloud.
But it need not be that strict a framing. Humanity losing control might look much more prosaic: we integrate AI systems into the economy, which then over time glides out of our control.
In general, when considering what AI systems will act like, I try to simulate the actions of a plan evaluator, perhaps an outlandishly powerful one.
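To gesture at what I mean by that framing, here is a toy sketch: generate candidate plans, score each one with some evaluation function, and act on the highest-scoring plan. The names (`choose_plan`, `evaluate`) and the stand-in plans are hypothetical, and this is only an illustration of the framing, not a claim about how any real system is implemented.

```python
from typing import Callable, Iterable, Optional, TypeVar

Plan = TypeVar("Plan")

def choose_plan(
    candidates: Iterable[Plan],
    evaluate: Callable[[Plan], float],
) -> Optional[Plan]:
    """Toy plan evaluator: return the candidate plan with the highest score."""
    best_plan: Optional[Plan] = None
    best_score = float("-inf")
    for plan in candidates:
        score = evaluate(plan)
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan

# Stand-in plans and a stand-in scoring function, purely for illustration.
plans = ["do nothing", "ask for more compute", "cooperate with humans"]
print(choose_plan(plans, evaluate=len))  # picks the longest string
```

The relevant intuition is that the system’s effective capability tracks the breadth of the candidate search and the quality of the scoring, and both of those can in principle keep improving past human level.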
Edit: Tried to make this comment less snarky.