Adam Thierer is a Senior Research Fellow with the Mercatus Center at George Mason University and the author of “Permissionless Innovation” (2016) and “Evasive Entrepreneurs & the Future of Governance” (2020).
Existential Risks & Global Governance Issues around AI & Robotics
How Science Fiction Dystopianism Shapes the Debate over AI & Robotics
What’s the Right Policy Default for AI?
Podcast: Marc Andreessen on the future of innovation
Event on “Governance of Emerging Technologies & Science”
New Governance Frameworks for Emerging Tech Sectors
In his 2013 book, “Smarter Than You Think: How Technology Is Changing Our Minds for the Better,” Clive Thompson noted that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.” (p. 283)
I think that really nails it.
Jason:
If you haven’t already read the work of the late Aaron Wildavsky, I would highly recommend it, because he devoted much of his life’s work to the exact issue you tee up here. I’d recommend two of his books to start. The first is “Risk and Culture,” co-authored with Mary Douglas, and the second is his absolutely remarkable “Searching for Safety,” which served as the inspiration for my book “Permissionless Innovation.”
Here are a few choice quotes from Risk and Culture:
“Relative safety is not a static but rather a dynamic product of learning from error over time. . . . The fewer the trials and the fewer the mistakes to learn from, the more error remains uncorrected.” (p. 195)
“The ability to learn from errors and gain experience in coping with a wide variety of difficulties has proved a greater aid to preservation of the species than efforts to create a narrow band of controlled conditions within which they would flourish for a time. . . .” (p. 196)
“If some degree of risk is inevitable, suppressing it in one place often merely moves it to another. Shifting risks may be more dangerous than tolerating them, both because those who face new risks may be unaccustomed to them and because those who no longer face old ones may be more vulnerable when conditions change.” (p. 197)
And then in “Searching for Safety,” Wildavsky built on that logic, warning of the dangers of “trial without error” reasoning and contrasting it with the trial-and-error method of evaluating risk and seeking wise solutions to it. Wildavsky argued that wisdom and safety are born of experience, and that we can learn how to be wealthier and healthier, as individuals and as a society, only by first being willing to embrace uncertainty and even occasional failure. I’ve probably quoted this passage from that book in more of my work than any other:
“The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards. . . . . Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials.”
In my next book on AI governance, I extend this framework to AI risk.