Yes, certainly! But that point isn’t relevant to the point I’m making here. And emphasizing that point as a way of arguing against AI risk itself is one of the things I’m discouraging. It would be like responding to concerns about drug safety by saying “but drugs save lives!” Yes, of course they do, but that isn’t relevant to the question of whether drugs also pose risks, and what we should do about those risks.
Why would it not be relevant to the question? What’s the value of only looking at eliminating the potential risk?
Regulating a technology is not just about eliminating its risks but about reducing them to some extent while still enabling the upside. The upsides need to be clearly analyzed and acknowledged.
Certainly. You need to look at both benefits and costs if you are talking about, for instance, what to do about a technology—whether to ban it, or limit it, or heavily regulate it, or fund it / accelerate it, etc.
But that was not the context of this piece. There was only one topic for this piece, which was that the proponents of AI (of which I am one!) should not dismiss or ignore potential risks. That was all.