Founder, The Roots of Progress (rootsofprogress.org)
jasoncrawford
Jason’s links digest, 2023-07-28: The decadent opulence of modern capitalism
Why no Roman Industrial Revolution?
Podcasts: Future of Life Institute, Breakthrough Science Summit panel
Jason’s links and tweets, 2023-07-20: “A goddess enthroned on a car”
Highlights from The Industrial Revolution, by T. S. Ashton
Announcing The Roots of Progress Blog-Building Intensive
Jason’s links and tweets, 2023-07-06: Terraformer Mark One, Israeli water management, & more
If you wish to make an apple pie, you must first become dictator of the universe
Jason’s links and tweets, 2023-06-28: “We can do big things again in Pennsylvania”
Levels of safety for AI and other technologies
Thanks a lot, Zvi.
Meta-level: to have a coherent discussion, I think it is important to be clear about which levels of safety we are talking about.
Right now I am mostly focused on the question of: is it even possible for a trained professional to use AI safely, if they are prudent and reasonably careful and follow best practices?
I am less focused, for now, on questions like: How dangerous would it be if we open-sourced all models and weights and just let anyone in the world do anything they wanted with the raw engine? Or: What could a terrorist group do with access to this? I am not taking a strong stance on these questions right now.
And the reason for this focus is:
The most profound arguments for doom claim that literally no one on Earth can use AI safely, with our current understanding of it.
Right now there is a vocal “decelerationist” group saying that we should slow, pause, or halt AI development. I think this argument mostly rests on the most extreme and IMO least tenable versions of the doom argument.
With that context:
We might agree, at the extreme ends of the spectrum, that:
If a trained professional is very cautious and sets up all of the right goals, incentives, and counter-incentives in a carefully balanced way, the AI probably won’t take over the world
If a reckless fool puts extreme optimization pressure on a superintelligent situationally-aware agent with no moral or practical constraints, then very bad things might happen
I feel like we are still at different points in the middle of that spectrum, though. You seem to think that the balancing of incentives has to be pretty careful, because some pretty serious power-seeking is the default outcome. My intuition is something like: problematic power-seeking is possible but not expected under most normal/reasonable scenarios.
I have a hunch that the crux has something to do with our view of the fundamental nature of these agents.
… I accidentally posted this without finishing it, but honestly I need to do more thinking to be able to articulate this crux.
Certainly. You need to look at both benefits and costs if you are talking about, for instance, what to do about a technology—whether to ban it, or limit it, or heavily regulate it, or fund it / accelerate it, etc.
But that was not the context of this piece. There was only one topic for this piece, which was that the proponents of AI (of which I am one!) should not dismiss or ignore potential risks. That was all.
If you wish to make an apple pie, you must first become dictator of the universe [draft for comment]
Jason’s links and tweets, 2023-06-21: Stewart Brand wants your comments
I would call it metascience, and I would include Convergent Research and Speculative Technologies. See also this Twitter thread.
There is no history that I know of; it’s almost too new for that. But here’s an article: “Inside the multibillion-dollar, Silicon Valley-backed effort to reimagine how the world funds (and conducts) science”
Thanks, Gergő. We’re doing this as a “work made for hire,” meaning that the rights belong to us and we can then license it however we want.