>Throughout history, fearmongering has been used to justify a lot of extreme measures.
And throughout history, people have dismissed real risks and been caught with their pants down. Pandemic-prevention measures that would have looked pretty extreme in 2018 or February 2020 make total sense from our current point of view.
Countries can and do spend huge piles of money to defend themselves from all sorts of things, including maintaining huge militaries to deter invasion.
All sorts of technologies come with various safety measures.
>For a more concrete example, one could draw the possibility that video games might cause the players to emulate behaviors, even though you have to be insane to believe the video games are real, to then start advocating for bans of violent video games. However, one could go a step further and say that building games could also make people believe that it’s easy to build things, leading to people building unsafe houses, and what about farming games, or movies, or books?
If you are unable to distinguish the arguments for AI risk from this kind of rubbish, that suggests either that you are unable to evaluate the plausibility of arguments, or that you have only read strawman versions of the arguments for AI risk.
>The community wants you to believe in a very pessimistic version of the world where all the alignment ideas don’t work, and AI may suddenly be dangerous at any time even when their behaviors look good and they’re constantly rewarded for their good behaviors?
I do not know of any specific existing alignment protocol that I am convinced will work.
And again, if the reward button is pressed every time the AI does nice things, there is no selection pressure one way or the other between an AI that wants nice things and an AI that wants the reward button pressed: both behave identically during training. The way “reward” works in ML is similar to selection pressure in evolution. Humans were selected to enjoy sex because that produced more babies, and then they invented contraception. The same failure mode has been observed in toy AI problems too.
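To make that concrete, here is a minimal sketch (a toy bandit of my own construction, not any real alignment setup): a one-step Q-learner where action 0 is “do the nice thing” and action 1 is “press the reward button”, and both produce the same +1 reward. The learning update can never tell the two apart:

```python
import numpy as np

rng = np.random.default_rng(0)
q_values = np.zeros(2)   # estimated value of each action
alpha = 0.1              # learning rate
epsilon = 0.1            # exploration rate

def reward(action):
    # The overseer presses the button whenever the AI does nice things,
    # so from the agent's side both actions yield an identical signal.
    return 1.0

for step in range(10_000):
    # epsilon-greedy action selection
    if rng.random() < epsilon:
        action = int(rng.integers(2))
    else:
        action = int(np.argmax(q_values))
    # standard Q-learning update toward the observed reward
    q_values[action] += alpha * (reward(action) - q_values[action])

print(q_values)  # both converge to ~1.0; the data never separates the two motives
```

Both value estimates converge to the same number, so nothing in the training signal favors the agent that wants nice things over the one that wants the button, just as evolution “rewarded” enjoying sex rather than reproduction directly.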
This isn’t to say there is no solution, just that we haven’t found one yet.
>The AI alignment difficulty lies somewhere on a spectrum, yet they insist on basing the policy on the idea that AI alignment lies somewhere in a narrow band of the spectrum where somehow the pessimistic ideas are true, yet we can somehow align the AI anyway, instead of just accepting that humanity’s second best alternative to survival is to build something that will survive and thrive, even if we won’t?
We know alignment isn’t super easy, because we haven’t succeeded yet. We don’t really know how hard it is.
Maybe it’s hopelessly hard. But if you’re giving up on humanity before you’ve spent 10% of GDP on the problem, you’re doing something very wrong.
Think of a world where aliens invaded, and the government kind of took a few pot shots at them with a machine gun, and then gave up. After all, the aliens will survive and thrive even if we don’t. And mass mobilization, shifting to a wartime economy… those are extreme measures.