I don’t see any specific criticism of effective altruism other than “I don’t like the vibes”.
And the criticism from “acrimonious corporate politics”.
“Helen Toner was apparently willing to let OpenAI be destroyed because of a general feeling that the organization was moving too fast or commercializing too much.”
Between the two of them, a philosophy that aims to prevent catastrophic risk in the future seems to be creating its own catastrophes in the present.
Shutting down a company and some acrimonious boardroom discussion is hardly “catastrophic”. And it can be the right move, if you think the danger exceeds the value the company is creating.
I.e. if a company makes nuclear power plants that are meltdowns just waiting to happen, or kids’ toys full of lead or something, shutting that company down is a good move.
The thing about e/acc is it’s a mix of the reasonable and the insane doom cult.
The reasonable parts talk about AI curing diseases etc., and ask to speed it up.
Given some chance of AI curing diseases, and some chance of AI caused extinction, it’s a tradeoff.
Now, where the optimal point of that tradeoff lands depends on whether we care only about existing humans or about all potential future humans. And also on how big we think the risk of AI extinction is.
If we care about all future humans, and think AI is really dangerous, we get a “proceed with extreme caution” position. A position that accepts the building of ASI eventually, but is quite keen to delay it 1000 years if that buys any more safety.
On the other end, some people think the risks are small, and mostly care about themselves/current humans. They are more e/acc.
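To make the tradeoff concrete, here is a toy expected-value sketch. All the numbers are made up purely for illustration (the 5% doom probability, the benefit figure, the population figures are assumptions, not estimates from anywhere); the point is only the structure: the same risk estimate can yield opposite conclusions depending on how many people you count as being at stake.

```python
def expected_value(p_doom, benefit, lives_at_stake):
    """Toy EV of building ASI: upside if it goes well, scaled loss if not.
    All inputs are illustrative placeholders, not real estimates."""
    return (1 - p_doom) * benefit - p_doom * lives_at_stake

# Case 1: only current humans count (~8 billion lives at stake).
ev_current = expected_value(p_doom=0.05, benefit=1e9, lives_at_stake=8e9)

# Case 2: all potential future humans count (a vastly larger stake).
ev_future = expected_value(p_doom=0.05, benefit=1e9, lives_at_stake=1e15)

print(ev_current > 0)  # with a small stake, the upside can dominate
print(ev_future > 0)   # with a huge stake, even 5% doom risk swamps the benefit
```

With the small stake the expected value comes out positive (build it); with the astronomically larger stake the same 5% risk makes it hugely negative (delay as long as safety improves). That is the whole disagreement in two lines of arithmetic.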
But there are also the various “AI will be our worthy successors”, “AI will replace humans, and that’s great” e/acc types who are OK with the end of humanity.