I can only speak for myself.
I think that AI safety is a real issue. Many (most?) new technologies create serious safety issues, and it’s important to take those issues seriously so that we can mitigate risk. I think this is mostly a job for the technologists and founders who are actually developing and deploying the technology.
I think that “hard takeoff” scenarios are (almost by definition?) extremely difficult to reason about, and thus necessarily involve a large degree of speculation. I can’t prove that it won’t happen, but any such scenario seems well outside our ability to predict or control.
A more likely AI global catastrophe scenario, to my mind, is this: over the coming years or decades, we gradually deploy AI more and more as the control system for every major part of the economy. AI traders dominate financial markets; AI control systems run factories and power plants; all our vehicles are autonomous, for both passengers and cargo; etc. And then at some point we hit an out-of-distribution edge case that causes some kind of crash that ripples through the entire economy, causing trillions of dollars’ worth of damage. A complex-system failure that makes the Great Depression look like a picnic.
In any case, I’m glad some smart people are thinking about AI safety up front and working on it now.
Without referring to other people’s views or research, do you have a personal intuitive point estimate or spread on when we will have AIs that can do all economically important tasks?
I dunno… years is too short and centuries maybe too long, so I guess I’d say decades? That’s a very wide spread, though.
And if you really mean all, I place non-zero probability on “never” or “not for a very long time.” After all, we don’t even do all economically important manual tasks using machines yet, and we’ve had powered machinery for 300 years.