I have read enough (e.g., Holden Karnofsky’s essays) to understand the case for it. It is a compelling case. What I’m arguing against is a line of thinking like: “AGI will be here soon and it will either kill us or solve all our problems, so there’s no point in working on curing cancer, longevity, nanotech, fusion, or progress studies.” There are just too many unknown unknowns.
On top of which I would add that machine intelligence, however it evolves, is something very different from human intelligence, just as a washing machine is different from a housekeeper and a submarine is different from a whale. Machines “think” in the way that a submarine “swims.” So there are limits on how much we can extrapolate from human intelligence.
Re the first point, I agree. I would tentatively suggest doing something like OpenPhil’s worldview diversification, where research, labor, and capital are divided among a few distinct future scenarios and each is optimized independently. My point in the piece is that I think the current program is a bit under-diversified.
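As a rough illustration of what that kind of diversification might look like, here is a toy Python sketch; the scenario names, weights, and budget are invented for illustration and are not OpenPhil’s actual numbers or method:

```python
# Minimal sketch of "worldview diversification": split a fixed budget
# across a few distinct future scenarios (the weights below are made up
# for illustration), then prioritize *within* each bucket independently
# rather than optimizing one aggregate expected-value calculation.

budget = 100.0  # arbitrary units of research, labor, and capital

# Hypothetical scenarios and credences -- not anyone's real estimates.
scenarios = {
    "transformative AGI arrives this century": 0.40,
    "AI plateaus; other technologies drive progress": 0.35,
    "catastrophe or stagnation derails progress": 0.25,
}

for name, weight in scenarios.items():
    allocation = budget * weight
    # In a real exercise each bucket would then be optimized on its own
    # terms (e.g., alignment work vs. longevity/fusion vs. resilience).
    print(f"{name}: {allocation:.1f} units")
```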
What “current program” are you referring to exactly? (The progress studies community? The world? Or what?)
The aspect I was arguing for as almost certain on the inside view is that we would be able to develop AGI eventually, barring catastrophe. I wasn’t extending that to “AGI will be here soon”.
Regarding “AGI will either kill us or solve all our problems”: I think there are some possible scenarios where we end up with a totalitarian government or an oligarchy controlling AI, or the AI keeps us alive for some reason (including s-risk scenarios), or we are disempowered by AI and “go out with a whimper” as per “What Failure Looks Like”. But I assign almost no weight, on the inside view, to AGI just not being that good. (What I mean by that is that I exclude the scenarios common in sci-fi where we have AGI yet humans still do most things as well as or better than it, but not scenarios where humans do things because we don’t trust the AI or because we need “fake jobs” for humans to feel important.)
Re “we would be able to develop AGI eventually” as “almost certain”: At least up until a year ago I would have said no, definitely not certain, because a computer is very different from a brain and we don’t know yet what it can do. However, as AI advances, I put more probability on it.
What’s your doubt?
Given enough computing power, we should be able to more or less simulate a brain. What is or was your worry? Ability to parallelise? Maybe that even though it may eventually become technically possible, it’ll always be cost-prohibitive? Or maybe that small errors in the simulation would magnify over time?
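To make that last worry concrete: in a chaotic system, a tiny error in the state grows until the simulated trajectory no longer tracks the real one. Here is a minimal Python illustration using the logistic map, a standard toy example of chaos; it is only an analogy for error magnification, not a claim about how brains work:

```python
# Toy illustration (not a brain model): in a chaotic iterated system,
# a tiny perturbation in the initial state grows exponentially until the
# two trajectories bear no relation to each other.

def logistic_map(x, r=3.9):
    """One step of the logistic map; r = 3.9 is in the chaotic regime."""
    return r * x * (1.0 - x)

x_exact = 0.4
x_perturbed = 0.4 + 1e-12  # differs by one part in a trillion

for step in range(61):
    if step % 10 == 0:
        print(f"step {step:2d}: divergence = {abs(x_exact - x_perturbed):.3e}")
    x_exact = logistic_map(x_exact)
    x_perturbed = logistic_map(x_perturbed)
```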
Well, I’m not a materialist, so it’s not obvious to me that we can successfully simulate a brain, in the ways that matter, on purely material hardware. We just really don’t understand consciousness or how it arises at all. That to my mind is a huge unknown.
I don’t identify as a materialist either (I’m still figuring out my views here), but the question of qualia seems orthogonal to the question of capabilities. A philosophical zombie has the same capability to act in the world as someone who isn’t a zombie.
(I should add, this conversation has been useful to me as it’s helped me understand why certain things I take for granted may not be obvious to other people).
Well, I’m also not sure if p-zombies can exist!
(Although if an AI passed the Turing Test I would be more likely to think it is a p-zombie than to think that it is conscious.)
Actually, I can imagine a world in which physical brains operate by interacting with some unknown realm that provides computational capabilities the brain itself lacks, although as neuroscience advances there seems to be less and less scope for anything like this (not that I know very much about neuroscience at all).