Also, as Marc Andreessen points out in his piece, AI can increase safety (a point that seems unaddressed in your essay): https://pmarca.substack.com/p/why-ai-will-save-the-world
I don’t understand the point you’re trying to make. Is it “safety is good”? That seems pretty obvious?
I think the problem is that some people believe the state should regulate or interfere with how safe things can or should be. Related: https://worksinprogress.co/issue/anti-growth-safetyism
Launching: Scaling Knowledge, a new Substack about epistemology, AI, startups, and progress
The existence of most of the ones you listed sounds questionable.
How about economic risk exposure (for a given person/city/state)? I think there is already a ton of research on this.
E.g., funding new nuclear power research could provide a 10,000x ROI while carrying only a 0.0000X% risk of destroying the city/area around the research facility.
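To make that trade-off concrete, here is a toy expected-value calculation. Every number in it is a made-up assumption for illustration, not a real estimate:

```python
# Toy expected-value sketch for the nuclear-research example above.
# All figures are illustrative assumptions, not real estimates.

investment = 1_000_000       # hypothetical research grant, in dollars
roi_multiple = 10_000        # the claimed 10,000x upside
p_catastrophe = 1e-6         # assumed 0.0001% chance of destroying the area
catastrophe_cost = 1e12      # assumed damage if the catastrophe occurs

expected_upside = investment * roi_multiple          # $10,000,000,000
expected_downside = p_catastrophe * catastrophe_cost # $1,000,000

print(f"Expected upside:    ${expected_upside:,.0f}")
print(f"Expected downside:  ${expected_downside:,.0f}")
print(f"Net expected value: ${expected_upside - expected_downside - investment:,.0f}")
```

On these made-up numbers the expected upside dominates the expected downside by roughly four orders of magnitude, which is the kind of comparison the risk-exposure research would formalize.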
[Question] How productive are our education systems? And how have they scaled over the last hundred years?
Human Progress via Intellectual Progress [DRAFT]
Sounds reasonable. However, a better long-term strategy seems to be complete privatization, i.e., removing the subsidies and tax breaks. I think Bryan Caplan would support this strategy (see his book The Case Against Education).
Why would it not be relevant to the question? What's the value of looking only at eliminating the potential risk?
Regulating a technology is not just about eliminating its risks but about reducing them to some extent while still enabling the upside. The upsides need to be clearly analyzed and acknowledged.