This feels like a failure to actually engage with the views you are purporting to criticize. If someone believes AGI will likely kill every living being, exactly what benefits should they weigh to make the evaluation balanced? That our last couple of years will be marginally more comfortable? What would the solutionist approach look like here?
This essay was not written for the doomers. It was written for the anti-doomers who are inclined to dismiss any concerns about AI safety at all.
I may write something later about where I agree/disagree with the doom argument and what I think we should actually do.