Re “we would be able to develop AGI eventually” as “almost certain”: At least up until a year ago I would have said no, definitely not certain, because a computer is very different from a brain and we don’t know yet what it can do. However, as AI advances, I put more probability on it.
What’s your doubt?
Given enough computing power, we should be able to more or less simulate a brain. What is, or was, your worry? The ability to parallelise? That even if it eventually becomes technically possible, it will always be cost-prohibitive? Or that small errors in the simulation would magnify over time?
Well, I’m not a materialist, so it’s not obvious to me that we can successfully simulate a brain, in the ways that matter, on purely material hardware. We just really don’t understand consciousness or how it arises at all. That to my mind is a huge unknown.
I don’t identify as a materialist either (I’m still figuring out my views here), but the question of qualia seems orthogonal to the question of capabilities. A philosophical zombie has the same capability to act in the world as someone who isn’t a zombie.
(I should add, this conversation has been useful to me, as it’s helped me understand why certain things I take for granted may not be obvious to other people.)
Well, I’m also not sure if p-zombies can exist!
(Although if an AI passed the Turing Test I would be more likely to think it is a p-zombie than to think that it is conscious.)
Actually, I can imagine a world where physical brains operate by interacting with some unknown realm that provides a kind of computational capability the brain lacks on its own. Although, as neuroscience advances, there seems to be less and less scope for anything like this (not that I know very much about neuroscience at all).