How do you (and, separately, the Progress Studies community broadly) relate to hard takeoff risk from AI?