Hi Eli, I read your piece on regulatory barriers to AI progress having material impacts on society. For me this pushes things in the direction of “we’ll have more AI automation of AI R&D before big societal trends in job automation,” which could imply faster AI progress generally if labs focus more on their own AI → research automation → better AI feedback loop. I do think that an AI that could perform basically any job (not requiring hands) as well as a human, for pennies on the dollar, would radically transform society, but maybe we don’t see as much change in AI systems until then. This Metaculus question (https://www.metaculus.com/questions/3698/when-will-an-ai-achieve-a-98th-percentile-score-or-higher-in-a-mensa-admission-test/) on when an AI will get a Mensa-worthy IQ score (current prediction: April 2028) suggests to me that we’re not far away from AGI. What do you think? My sense is that you’re much less bullish on AI progress than e.g. the LessWrong or EA communities.
I think the kinds of tests that prove a human is intelligent or sentient or whatever are not the same as the kinds of tests that would prove a computer program is intelligent or sentient.
For example, imagine a test where we timed the test-taker on how long it takes to multiply two 8-digit numbers together. For most humans, this would take several minutes. For even a dollar-store calculator, it would take under a second.
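To make the asymmetry concrete, here is a quick Python sketch of the machine side of that test; the exact figure it prints is beside the point, only the contrast with the several minutes a human needs:

```python
import time

# Two arbitrary 8-digit numbers; any pair would do for the illustration.
a, b = 73_914_862, 58_201_337

start = time.perf_counter()
product = a * b
elapsed = time.perf_counter() - start

# On essentially any modern machine this reports a time in the microsecond
# range or below, versus the several minutes a human needs by hand.
print(f"{a} x {b} = {product} in {elapsed:.9f} seconds")
```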
For many decades, Alan Turing’s proposal was widely accepted: a computer that could converse indistinguishably from a human would count as having human-level sentience and intelligence. I myself thought, “Sure, sounds good,” when I first heard of it.
But actually, it turns out that carrying out a conversation is easier for machines than we thought. There is no real cognition going on inside ChatGPT. It is spitting out answers based on a statistical function trained on encoded inputs and outputs.
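To spell out what I mean by “statistical function”: at bottom the model assigns a probability to every possible next token, samples one, appends it, and repeats. A toy sketch of that loop (nothing like ChatGPT’s actual code; the `toy_model` stand-in is where a real trained network would go):

```python
import numpy as np

VOCAB_SIZE = 50  # toy vocabulary; a real model has tens of thousands of tokens

def softmax(logits):
    # Turn raw scores into a probability distribution over the vocabulary.
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def toy_model(tokens):
    # Placeholder for the trained network: arbitrary context-dependent logits.
    # In a real system this is where all the training went.
    rng = np.random.default_rng(sum(tokens))
    return rng.normal(size=VOCAB_SIZE)

def generate(model, prompt_tokens, n_new_tokens, seed=0):
    """Autoregressive sampling: repeatedly pick a next token from the model's
    predicted distribution and append it to the context."""
    rng = np.random.default_rng(seed)
    tokens = list(prompt_tokens)
    for _ in range(n_new_tokens):
        probs = softmax(model(tokens))
        tokens.append(int(rng.choice(VOCAB_SIZE, p=probs)))
    return tokens

print(generate(toy_model, [3, 14, 15], n_new_tokens=10))
```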
I think it is quite possible that an AI will achieve a 98th-percentile score on a Mensa test by 2028 (maybe earlier). What I don’t think is that such a score will be a sign of human-level sentience or intelligence. It’s a sign of being able to mimic a few salient aspects of human intelligence.
To get to parity with the human brain, we need several orders of magnitude higher computational efficiency to match what neurons do. We don’t need to close the whole gap on efficiency; we can make up some of it by burning more energy. Even so, it will take a couple of decades, in my estimation.
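Roughly the kind of back-of-the-envelope arithmetic I have in mind, as a sketch; every number below is a loose assumption, and the answer swings by orders of magnitude depending on what you plug in:

```python
def efficiency_ratio(synapses, firing_rate_hz, ops_per_synaptic_event,
                     brain_watts, machine_ops_per_sec, machine_watts):
    """Rough brain-vs-machine efficiency comparison.

    All inputs are assumptions; the result is only meaningful as an
    order-of-magnitude gesture, not an engineering estimate."""
    brain_ops_per_watt = synapses * firing_rate_hz * ops_per_synaptic_event / brain_watts
    machine_ops_per_watt = machine_ops_per_sec / machine_watts
    return brain_ops_per_watt / machine_ops_per_watt

# One illustrative set of inputs: ~1e15 synapses, ~1 Hz average firing,
# ~20 W brain, versus an accelerator doing ~1e15 op/s at ~700 W.
print(efficiency_ratio(1e15, 1.0, 1.0, 20.0, 1e15, 700.0))

# If a synaptic event is worth ~100 digital operations rather than 1,
# the gap grows by another two orders of magnitude.
print(efficiency_ratio(1e15, 1.0, 100.0, 20.0, 1e15, 700.0))
```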
And even then, there is still the possibility that we don’t really understand how neurons work and we could be way off base! Michael Levin has pointed out that a caterpillar essentially disassembles its brain to become a butterfly, and yet somehow it retains at least some memories. I think we are far from really grokking this.
Huh, it’s hard for me to imagine reaching a 98th-percentile IQ score without the ability to do lots of cognitive work (I’m not talking about some model fine-tuned on IQ tests or whatever, just a general language model that happens to score well on the test). I have different intuitions about the calculator example: the point I take away from it is... we use calculators all the time! I’m perfectly content calling the calculator a transformative innovation, and these language models are already much more general than the calculator.
Re: “There is no real cognition going on inside ChatGPT. It is spitting out answers based on a statistical function trained on encoded inputs and outputs.” This seems like a No True Scotsman that will keep you from noticing how these models’ capabilities are improving. SSC’s take on GPT-2 was good on this, and imo it got extremely vindicated when the GPT family went from an interesting toy to creating real economic value.
Re: Have you read Gwern’s stuff on machine learning scaling? All of the “we don’t really understand it” talk takes on a very different tone when you read his “deep learning wants to work” take. A technique that AI researchers disdain because it doesn’t match their love of theory, that works anyway, and that the whole SV community then realizes is really promising... strikes me as something real and useful we accidentally discovered in the world. The fact that we don’t understand it doesn’t stop it from working, and the fact that every basic little trick we try yields more fruit suggests that the fruit is really extremely low-hanging. For me it’s worrying, because I think we need good theory to learn how to control it, but the basic case for this being a thing doesn’t seem in question.