The thing about e/acc is that it’s a mix of the reasonable and the insane doom cult.
The reasonable parts talk about AI curing diseases etc., and call for speeding it up.
Given some chance of AI curing diseases and some chance of AI-caused extinction, it’s a tradeoff.
Where the optimal point of that tradeoff lands depends on whether we care only about existing humans or about all potential future humans, and on how big we think the risk of AI-caused extinction is.
If we care about all future humans and think AI is really dangerous, we get a “proceed with extreme caution” position: one that accepts building ASI eventually, but is quite keen to delay it by 1000 years if that buys any more safety.
On the other end, some people think the risks are small and mostly care about themselves/current humans. They lean more e/acc.
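To make the tradeoff concrete, here is a toy expected-value sketch of the accelerate-vs-delay choice. Every number in it is a made-up placeholder, not anything from the argument above: the extinction probability, the relative value placed on current versus future people, the benefit of curing diseases sooner, and the assumed risk reduction from delay are all assumptions chosen only to show how the preferred option flips with those inputs.

```python
# Toy model: utility of accelerating vs. delaying ASI under stated assumptions.
# All constants below are hypothetical placeholders for illustration only.

def expected_value(accelerate: bool, p_doom: float, weight_future: float) -> float:
    """Crude expected utility of one policy, given a doom probability and
    how much weight we put on potential future people (0 = none, 1 = full)."""
    current_people_value = 1.0        # assumed value of people alive today
    future_people_value = 1000.0      # assumed value of all potential future people
    cure_benefit = 0.5                # assumed extra benefit (cures etc.) from going fast
    p_doom_if_delayed = p_doom * 0.1  # assume delay buys a 10x risk reduction

    risk = p_doom if accelerate else p_doom_if_delayed
    benefit = cure_benefit if accelerate else 0.0
    survivors_value = current_people_value + weight_future * future_people_value
    return benefit + (1 - risk) * survivors_value


# Sweep the two inputs the post says the answer hinges on:
# how big the risk is, and whether future people count.
for p_doom in (0.01, 0.7):
    for weight_future in (0.0, 1.0):
        go_fast = expected_value(True, p_doom, weight_future)
        go_slow = expected_value(False, p_doom, weight_future)
        choice = "accelerate" if go_fast > go_slow else "proceed with caution"
        print(f"p_doom={p_doom}, count future people={bool(weight_future)}: {choice}")
```

With these placeholder numbers, “accelerate” only wins when the doom probability is low and future people get no weight, which is just the two-axis point above restated in arithmetic; different assumed numbers would move the flip point.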
But there are also the various “AI will be our worthy successors” and “AI will replace humans, and that’s great” strands of e/acc who are fine with the end of humanity.