Draft for comment: Towards a philosophy of safety

This is a draft essay to be published on The Roots of Progress; I wanted to share it here first for feedback. Please comment! [UPDATE: this is now revised and published]
We live in a dangerous world. Many hazards come from nature: fire, flood, storm, famine, disease. Technological and industrial progress have made us safer from these dangers. But technology also creates its own hazards: industrial accidents, car crashes, toxic chemicals, radiation. And future technologies, such as genetic engineering or AI, may present existential threats to the human race. These risks are the best argument against a naive or heedless approach to progress.
So, to fully understand progress, we have to understand risk and safety. I’ve only begun my research here, but what follows are some things I’m coming to believe about safety. Consider this a preliminary sketch of a philosophy of safety:
Safety is a part of progress, not something opposed to it
Safety is properly a goal of progress: safer lives are better lives. Safety is not automatic, in any context: it is a goal we must actively seek and engineer for. This applies both to the hazards of nature and to the hazards of technology.
Historically, safety has been a part of progress across all domains. Technology, industry, and wealth help guard against natural risks and disasters. And improved safety has also been a key dimension of progress for technologies from cars to surgery to factories.
We have even made progress itself safer: today, new technologies are subject to much higher levels of testing and analysis before being put on the market. For instance, a century ago, little to no testing was performed on new drugs, sometimes not even animal testing for toxicity; today they go through extensive, multi-stage trials. This incurs cost and overhead, and it certainly reduces the rate at which new drugs are released to consumers, but it would be wrong to describe drug testing as being opposed to pharmaceutical progress—improved testing is a part of pharmaceutical progress.
Safety is a human problem, and requires human solutions
Inventions such as pressure valves, seat belts, or smoke alarms can help with safety. But ultimately, safety requires processes, standards, and protocols. It requires education and training. It requires law. There is no silver bullet: any one mechanism can fail; defense in depth is required.
Improving safety requires feedback loops, including reporting systems. It greatly benefits from openness: for instance, the FAA encourages anonymous reports of safety incidents, and is even more lenient in penalizing safety violations that were voluntarily reported.
Safety requires aligned incentives: Workers’ compensation laws, for instance, aligned the incentives of factories and workers and led to improved factory safety. Insurance helps by aligning safety procedures with profit motives.
Safety benefits from public awareness: The workers’ comp laws came after reports by journalists such as Crystal Eastman and William Hard. In the same era, a magazine series exposing the shams and fraud of the patent medicine industry led to reforms such as stricter truth-in-advertising laws.
Safety requires leadership. It requires thinking statistically, and this does not come naturally to most people. Factory workers did not want to use safety techniques that were inconvenient or slowed them down, such as goggles, hard hats, or guards on equipment.
We need more safety
When we hope for progress and look forward to a better future, part of what we should be looking forward to is a safer future.
We need more safety from existing dangers: auto accidents, pandemics, wildfires, etc. We’ve made a lot of progress on these already, but there’s no reason to stop improving as long as the risk is greater than zero.
And we need to continue to raise the bar for making progress safely. That means safer ways of experimenting, exploring, researching, inventing.
We need to get more proactive about safety
Historically, a lot of progress in safety has been reactive: accidents happen, people die, and then we figure out what went wrong and how to prevent it from recurring.
The more we go forward, the more we need to anticipate risks in advance. Partly this is because, as the general background level of risk decreases, it makes sense to lower our tolerance for risks of all kinds, and that includes the risks of new technology.
Further, the more our technology develops, the more we increase our power and capabilities, and the more potential damage we can do. The danger of total war became much greater after nuclear weapons; the danger of bioengineered pandemics or rogue AI may be far greater still in the near future.
There are signs that this shift towards more proactive safety efforts has already begun. The field of bioengineering has proactively addressed risks on multiple occasions over the decades, from recombinant DNA to human germline editing. The fact that the field of AI has been seriously discussing risks from highly advanced AI well before it is created is a departure from historical norms of heedlessness. And compare the lack of safety features on the first cars to the extensive testing (much of it in simulation) being done for self-driving cars. This shift may not be enough, or fast enough—I am not advocating complacency—but it is in the right direction.
This is going to be difficult
It’s hard to anticipate risks—especially risks from unknown unknowns. No one guessed at first that X-rays, which could neither be seen nor felt, were a potential health hazard.
Being proactive about safety means identifying risks via theory, ahead of experience, and there are inherent epistemic limits to this. Beyond a certain point, the task is impossible, and the attempt becomes “prophecy” (in the Deutsch/Popper sense). But within those limits, we should try, to the best of our knowledge and ability.
Even when risks are predicted, people don’t always heed them. Alexander Fleming, who discovered the antibiotic properties of penicillin, predicted the potential for the evolution of antibiotic resistance early on, but that didn’t stop doctors from massively overprescribing antibiotics when they were first introduced. We need to get better at listening to the right warnings, and better at taking rational action in the face of uncertainty.
Safety depends on technologists
Much of safety is domain-specific: the types of risks, and what can guard against them, are quite different when considering air travel vs. radiation vs. new drugs vs. genetic engineering.
Therefore, much of safety depends on the scientists and engineers who are actually developing the technologies that might create or reduce risk. As the domain experts, they are closest to the risk and understand it best. They are the first ones who will be able to spot it—and they are also the ones holding the key to Pandora’s box.
A positive example here comes from Kevin Esvelt. After coming up with the idea for a CRISPR-based gene drive, he says, “I spent quite some time thinking, well, what are the implications of this? And in particular, could it be misused? What if someone wanted to engineer an organism for malevolent purposes? What could we do about it? … I was a technology development fellow, not running my own lab, but I worked mostly with George Church. And before I even told George, I sat down and thought about it in as many permutations as I could.”
Technologists need to be educated both in how to spot risks, and how to respond to them. This should be the job of professional “ethics” fields: instead of problematizing research or technology, applied ethics should teach technologists how to respond constructively to risk and how to maximize safety while still moving forward with their careers. It should instill a deep sense of responsibility in a way that inspires them to hold themselves to the highest standards.
Progress helps guard against unknown risks
General capabilities help guard against general classes of risk, even ones we can’t anticipate. Science helps us understand risk and what could mitigate it; technology gives us tools; wealth and infrastructure create a buffer against shocks. Industrial energy usage and high-strength materials guard against storms and other weather events. Agricultural abundance guards against famine. If we had a cure for cancer, it would guard against the accidental introduction of new carcinogens. If we had broad-spectrum antivirals, they would guard against the risk of new pandemics.
Safety doesn’t require sacrificing progress
The path to safety is not through banning broad areas of R&D, nor through a general, across-the-board slowdown of progress. The path to safety is largely domain-specific. It needs the best-informed threat models we can produce, and specific tools, techniques, protocols and standards to counter them.
If and when it makes sense to halt or ban R&D, the ban should be either narrow or temporary. An example of a narrow ban would be one on specific types of experiments that try to engineer more dangerous versions of pathogens, where the benefits are minor (it’s not as if these experiments are necessary to advance biology at a fundamental level) and the risks are large and obvious. A temporary ban can make sense until safety procedures are worked out, as with the moratorium on recombinant DNA research, which was lifted after the 1975 Asilomar conference agreed on safety guidelines.
Bottom line: we can—we must—have both safety and progress.