I’m curious, what’s your main doubt about AGI happening eventually (excluding existential risks or scenarios where we end up back at the stone age)? The existence of humans, created by dumb evolution no less, seems to constitute strong evidence of physical possibility. And our ability to produce computer chips with astonishingly tiny components suggests that we can actually do the physical manipulations required. So I think it’s one of those things that sounds more speculative than it actually is.
I mean, I guess it’s true that there is some doubt about AGI happening, but when you really get down to it, you can doubt anything. So I’d be curious to have a better idea of what you mean by “some doubt”: maybe even a rough percent chance? Within my model of the world, I put a very low probability on AGI not happening (barring catastrophic risks, as stated above), but I put a higher, though still low, probability on my model being wrong.
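To make that concrete, here is a toy calculation (all numbers are illustrative placeholders, not anyone’s actual credences) showing how an inside-view probability combines with the chance that the model itself is wrong, via the law of total probability:

```python
# Toy calculation: combining an inside-view probability with model uncertainty.
# All numbers are illustrative placeholders, not anyone's actual credences.

p_model_correct = 0.8          # credence that my world-model is broadly right
p_no_agi_given_model = 0.02    # inside-view chance of "no AGI ever" if the model is right
p_no_agi_given_wrong = 0.5     # fallback: if the model is wrong, treat it as a coin flip

# Law of total probability, weighting over "model right" vs. "model wrong"
p_no_agi = (p_model_correct * p_no_agi_given_model
            + (1 - p_model_correct) * p_no_agi_given_wrong)

print(f"Overall P(no AGI, barring catastrophe): {p_no_agi:.2%}")  # ~11.60%
```

The point of the exercise is just that even a confident inside view gets diluted by however much weight you put on “my whole model could be off.”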
I don’t think it is fair to act like Jason is doubting something so knockdown clear. Yes, to you and me AGI seems obviously possible, and AGI within this century even seems likely, but Jason said he doesn’t know much about the AI stuff. And his default view is agnosticism, not deference to the LW community. Don’t forget that not everyone has spent the past decade reading about AGI! ;)
I have read enough (e.g., Holden Karnofsky’s essays) to understand the case for it. It is a compelling case. What I’m arguing against is a line of thinking like: “AGI will be here soon and it will either kill us or solve all our problems, so there’s no point in working on curing cancer, longevity, nanotech, fusion, or progress studies.” There are just too many unknown unknowns.
On top of which I would add that machine intelligence, however it evolves, is something very different from human intelligence, just as a washing machine is different from a housekeeper and a submarine is different from a whale. Machines “think” in the way that a submarine “swims.” So there are limits on how much we can extrapolate from human intelligence.
Re the first point, I agree. I would tentatively suggest doing something like OpenPhil’s worldview diversification, where research, labor, and capital are divided among a few distinct future scenarios and each is optimized independently. My point in the piece is that I think the current program is a bit under-diversified.
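For a rough illustration of what that kind of diversification could look like (the scenario names and weights below are invented for the example and are not OpenPhil’s actual allocations):

```python
# Toy sketch of worldview diversification: split a budget across scenarios
# in proportion to credence, then (in principle) optimize within each bucket.
# Scenario names and weights are invented for illustration only.

budget = 100.0  # arbitrary units of research, labor, and capital

scenario_weights = {
    "transformative AGI this century": 0.4,
    "slow, incremental technological progress": 0.4,
    "stagnation or other surprises": 0.2,
}

allocations = {name: budget * w for name, w in scenario_weights.items()}

for name, amount in allocations.items():
    print(f"{name}: {amount:.0f} units")
```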
What “current program” are you referring to exactly? (The progress studies community? The world? Or what?)
The aspect I was arguing for as almost certain, on the inside view, is that we would be able to develop AGI eventually, barring catastrophe. I wasn’t extending that to “AGI will be here soon”.
Regarding “AGI will either kill us or solve all our problems”: I think there are some possible scenarios where we end up with a totalitarian government or an oligarchy controlling AI, or the AI keeps us alive for some reason (incl. s-risk scenarios), or we are disempowered by AI and “go out with a whimper,” as per What failure looks like. But on the inside view I assign almost no weight to AGI just not being that good. (What I mean by that is that I rule out the scenarios common in sci-fi where we have AGI and yet humans still do most things as well as or better than it; I don’t rule out scenarios where humans do things because we don’t trust the AI, or because we need “fake jobs” for humans to feel important.)
Re “we would be able to develop AGI eventually” as “almost certain”: At least up until a year ago I would have said no, definitely not certain, because a computer is very different from a brain and we don’t know yet what it can do. However, as AI advances, I put more probability on it.
What’s your doubt?
Given enough computing power, we should be able to more or less simulate a brain. What is or was your worry? Ability to parallelise? Maybe that even though it may eventually become technically possible, it’ll always be cost-prohibitive? Or maybe that small errors in the simulation would magnify over time?
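For a sense of scale, here is one common style of back-of-envelope estimate (the figures are rough ballpark numbers; serious whole-brain-emulation estimates vary by several orders of magnitude depending on the level of detail simulated):

```python
# Back-of-envelope estimate of compute for a coarse brain simulation.
# Figures are rough ballpark values; serious estimates vary by several
# orders of magnitude depending on the level of detail assumed.

num_synapses = 1e14              # roughly 10^14 synapses in a human brain
avg_firing_rate_hz = 1.0         # average spikes per neuron per second (order of magnitude)
flops_per_synaptic_event = 10    # assumed cost of updating one synapse per spike

required_flops = num_synapses * avg_firing_rate_hz * flops_per_synaptic_event
print(f"~{required_flops:.0e} FLOP/s")  # ~1e+15 FLOP/s at this level of abstraction
```

Whether that level of abstraction captures “the ways that matter” is, of course, exactly the question at issue below.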
Well, I’m not a materialist, so it’s not obvious to me that we can successfully simulate a brain, in the ways that matter, on purely material hardware. We just really don’t understand consciousness or how it arises at all. That to my mind is a huge unknown.
I don’t identify as a materialist either (I’m still figuring out my views here), but the question of qualia seems orthogonal to the question of capabilities. A philosophical zombie has the same capability to act in the world as someone who isn’t a zombie.
(I should add, this conversation has been useful to me as it’s helped me understand why certain things I take for granted may not be obvious to other people).
Well, I’m also not sure if p-zombies can exist!
(Although if an AI passed the Turing Test I would be more likely to think it is a p-zombie than to think that it is conscious.)
Actually, I can imagine a world where physical brains operate by interacting with some unknown realm that provides some kind of computational capability the brain lacks on its own, although as neuroscience advances, there seems to be less and less scope for anything like this (not that I know very much about neuroscience at all).