Hi Jason,
A few comments. I like the basic idea, but the article seems too fawning and just does not provide enough of a Scylla and Charybdis of where “safety” goes right and where it can go wrong. The hidden context, I believe, is the high-profile catalyzing exposure of x-risk and longtermist ideas to the broader public.
Here are a few thoughts on some of your statements.
“Safety is properly a goal of progress.”
Certainly safety is not properly a goal of progress, any more than a seatbelt is a goal of fast transportation. Safety is one method of achieving progress by reducing risks, costs, or “the error rate.”
“We’ve made a lot of progress on these already, but there’s no reason to stop improving as long as the risk is greater than zero.”
The law of diminishing returns applies to safety as it does to everything else. It’s precisely when people talk about safety as though “every little bit helps” that we get nonsensical regulation, unnecessarily high costs, disastrous environmental review, and IRBs that kill social science. There must be a reasonable way of deciding which risks must continually be decreased and which we can and should live with. Safety can be a cudgel against progress, even though it can also be a helpmate of it.
“Being proactive about safety means identifying risks via theory, ahead of experience, and there are inherent epistemic limits to this.”
This point is good and could use expansion. What are the limits? When are the epistemic limitations greater, and when are they lesser?
“This should be the job of professional “ethics” fields: instead of problematizing research or technology, applied ethics should teach technologists how to respond constructively to risk and how to maximize safety while still moving forward with their careers.”
I don’t know what it means to “problematize research”; research seems problem-ridden already. But this comment also seems to contradict an earlier point where you stated that engineers are best situated to work on the safety of the systems they build. Which is it: the engineers or the ethicists?
I have a bioethicist on my team, and I think he’s invaluable because he offers a coherent method for thinking through ethical problems (especially end-of-life and informed-consent issues). But it’s important to recognize that his particular method is dependent, as all ethical systems are, upon a particular metaphysics, to use a dirty term loathed by most ethicists. Not that we have to wait for everyone to share the same metaphysics before working on big safety or big progress; we could never do anything in that case. We’d be stuck like Russ Roberts, in his articles against utilitarianism, unable to judge whether free trade is worth the cost of one person’s workforce participation. But metaphysics does offer some guidance about the tradeoffs we should and shouldn’t make by providing useful, if sometimes vague, definitions of human life, human flourishing, and human moral responsibility. There are real differences between people on these definitions, which lead to very different ethical conclusions about which tradeoffs we should and should not make. Indeed, how much safety we should invest in is informed by our metaphysical and meta-ethical assumptions.
I think trained ethicists with an engineering or medical degree are extremely helpful. In our space that’s a minority opinion. But, like a lawyer well-versed in case law, an excellent ethicist can quickly see different implications and applications of a process, and can provide advice on potential pitfalls, low-cost safety features, and misapplications that were not immediately obvious consequences.