jasoncrawford
Founder, The Roots of Progress (rootsofprogress.org)
Certainly. You need to look at both benefits and costs if you are talking about, for instance, what to do about a technology—whether to ban it, or limit it, or heavily regulate it, or fund it / accelerate it, etc.
But that was not the context of this piece. There was only one topic for this piece, which was that the proponents of AI (of which I am one!) should not dismiss or ignore potential risks. That was all.
I would call it metascience, and I would include Convergent Research and Speculative Technologies. See also this Twitter thread.
There is no history that I know of, it’s almost too new for that. But here’s an article: “Inside the multibillion-dollar, Silicon Valley-backed effort to reimagine how the world funds (and conducts) science”
This essay was not written for the doomers. It was written for the anti-doomers who are inclined to dismiss any concerns about AI safety at all.
I may write something later about where I agree/disagree with the doom argument and what I think we should actually do.
Yes, certainly! But it isn’t relevant to the point I’m making here. And emphasizing it as a way of arguing against AI risk itself is one of the things I’m discouraging. It would be like responding to concerns about drug safety by saying “but drugs save lives!” Yes, of course they do, but that isn’t relevant to the question of whether drugs also pose risks, and what we should do about those risks.
Not just “safety is good”, but: (1) safety is a part of progress, rather than something opposed to it and (2) optimists should confront risks and seek solutions, rather than downplaying or dismissing them.
I think what Allen probably added was a more quantitative investigation of this idea. He gathered the price data for fuel, labor, capital, etc. and did the analysis of rates of profit and return on investment.
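To give the flavor of that kind of analysis, here is a toy version of the return-on-investment comparison. The numbers are invented for illustration; they are not Allen’s historical figures:

```python
# Toy Allen-style ROI comparison; all prices are invented for illustration.

def roi(workers_replaced, annual_wage, annual_fuel_cost, capital_cost):
    """Annual return on a labor-saving, fuel-burning machine."""
    annual_savings = workers_replaced * annual_wage - annual_fuel_cost
    return annual_savings / capital_cost

# High-wage, cheap-coal economy (Britain-like): the machine clears the bar.
print(roi(workers_replaced=10, annual_wage=25, annual_fuel_cost=50, capital_cost=1000))  # 0.20

# Low-wage, expensive-fuel economy: the same machine is a poor investment.
print(roi(workers_replaced=10, annual_wage=10, annual_fuel_cost=80, capital_cost=1000))  # 0.02
```

The point of the exercise is that the same machine can be profitable in one price environment and not in another, which is why gathering the price data matters.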
Added a little bit in the revised version to try to clarify this. Thanks again for the feedback.
Not sure if this is quite what you are looking for, but I’ve been keeping a list of progress-related museums that I have visited or want to visit, large or small, including:
Antique Gas & Steam Engine Museum in Vista, CA
Charles River Museum of Industry and Innovation in Waltham, MA
Henry Ford Museum in Dearborn, MI
Davistown Museum in Maine
Museum of Craft & Design in San Francisco
History of Science museum at Harvard (one room)
Jenner Museum, Gloucestershire; also a statue of Edward Jenner in Kensington Gardens?
Bibliothèque de la Faculté de Médecine in Paris, which houses The Jubilee of Louis Pasteur, by Jean-André Rixens
Fleming’s original Petri dish in the British Museum
Institute of Making, part of University College London
Thanks! Yes, this is definitely part of Allen’s argument (maybe I should make that more clear).
I’ve been meaning to read that Devereaux post/series for a while, thanks for reminding me of it.
However, I don’t think you can argue from “the Industrial Revolution got started in this very specific way” to “that is the only way any kind of IR could ever have gotten started.” If it hadn’t been the flooded coal mines in Britain, there would have been some other need for energy in some other application.
I see it more as: you develop mechanization and energy technology once you reach that frontier—once your economy hits the point where that is the best marginal investment in development. Britain was one of the most advanced economies, so it hit that frontier first.
Was supposed to be “before products are launched”. Fixed, thanks.
Related: The Long Now Foundation’s Manual for Civilization
“What books would you want to restart civilization from scratch?”
The Long Now Foundation has been involved in and inspired by projects centered on that question since launching in 01996. (See, for example, The Rosetta Project, Westinghouse Time Capsules, The Human Document Project, The Survivor Library, The Toaster Project, The Crypt of Civilization, and the Voyager Record.) For years, Executive Director Alexander Rose has been in discussions on how to create a record of humanity and technology for our descendants. In 02014, Long Now began building it.
The Manual For Civilization is working toward a living, crowd-curated library of 3,500 books put forward by the Long Now community and on display at The Interval.
See also Lewis Dartnell’s book The Knowledge.
I bet GPT-4 could already do a lot of this work, perhaps with some fine-tuning and/or careful prompt engineering.
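As a rough sketch of what that might look like (assuming the current OpenAI Python client; the project facts and prompt here are entirely made up, and any real workflow would keep a human reviewing the output):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical inputs: structured facts about a product change.
project_facts = """
Product: small consumer drone
Change: new battery supplier
Regulation in scope: FCC Part 15
"""

# Ask the model to draft one section of a compliance filing.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You draft regulatory compliance documents. Be precise and flag anything that needs human review."},
        {"role": "user",
         "content": f"Draft the 'Summary of Changes' section of an FCC filing from these facts:\n{project_facts}"},
    ],
)
print(response.choices[0].message.content)
```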
The problem with automating compliance documents is not just the time/effort to prepare them. It’s also the time spent waiting to get a response, and in some cases, “user fees” paid to the government to review them. If everyone started using GPT to do compliance, I suspect that the various agencies would just start to build up an ever-growing backlog of un-reviewed applications, until they’re all like immigration, with decade-long wait times.
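The dynamic I’m worried about is just basic queueing: once submissions outpace reviews, the backlog grows without bound. A toy model with invented rates:

```python
# Toy backlog model; both rates are invented for illustration.
submissions_per_month = 150  # hypothetical volume after everyone adopts GPT
reviews_per_month = 100      # hypothetical agency review capacity

backlog = 0
for month in range(120):  # ten years
    backlog += submissions_per_month - reviews_per_month

print(backlog)                      # 6000 un-reviewed applications
print(backlog / reviews_per_month)  # ~60 months of waiting for a new filing
```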
Why do you think we don’t have more people starting ambitious genetic engineering projects?
What are the best near-term/foreseeable applications of genetic engineering? What is the low-hanging fruit here that we can see and define and should go after first?
Thanks.
Rather than asking how fast or slow we should move, I think it’s more useful to ask what preventative measures we can take, and then estimate which ones are worth the cost/delay. Merely pausing doesn’t help if we aren’t doing anything with that time. On the other hand, it could be worth a long pause and/or a high cost if there is some preventive measure we can take that would add significant safety.
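One toy way to frame that estimate (my framing, with made-up numbers): a measure is worth its delay if the expected risk reduction it buys outweighs what the delay costs.

```python
# Toy decision rule for a preventive measure; every number is invented.

def worth_it(risk_reduction, value_at_stake, delay_cost):
    """True if the expected benefit of the measure exceeds the cost of its delay."""
    return risk_reduction * value_at_stake > delay_cost

# A measure that buys a 1% risk reduction on a 100-unit stake, for a 0.5-unit delay:
print(worth_it(risk_reduction=0.01, value_at_stake=100, delay_cost=0.5))  # True

# A bare pause buys no risk reduction, so it never pays, however cheap the delay:
print(worth_it(risk_reduction=0.0, value_at_stake=100, delay_cost=0.1))   # False
```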
I don’t know offhand what would raise my p(doom), except for obvious things like smaller-scale misbehavior (financial fraud, a cyberattack) or dramatic technological acceleration from AI (genetic engineering, nanotech).
Are we winning the war on cancer? Is it reasonably fast/steady progress, or has something gone wrong?
What has gone wrong in the fight against Alzheimer’s? Did a “cabal” prevent funding for anything other than the amyloid plaque hypothesis?
What do you think is the cause of Eroom’s Law? Why has it (fortunately) stalled in the last decade? Do we have any hope of reversing it?
Thanks a lot, Zvi.
Meta-level: I think to have a coherent discussion, it is important to be clear about which levels of safety we are talking about.
Right now I am mostly focused on the question of: is it even possible for a trained professional to use AI safely, if they are prudent and reasonably careful and follow best practices?
I am less focused, for now, on questions like: How dangerous would it be if we open-sourced all models and weights and just let anyone in the world do anything they wanted with the raw engine? Or: what could a terrorist group do with access to this? And I am not right now taking a strong stance on these questions.
And the reason for this focus is:
The most profound arguments for doom claim that literally no one on Earth can use AI safely, with our current understanding of it.
Right now there is a vocal “decelerationist” group saying that we should slow, pause, or halt AI development. I think this position mostly rests on the most extreme and (IMO) least tenable versions of the doom argument.
With that context:
We might agree, at the extreme ends of the spectrum, that:
If a trained professional is very cautious and sets up all of the right goals, incentives and counter-incentives in a carefully balanced way, the AI probably won’t take over the world
If a reckless fool puts extreme optimization pressure on a superintelligent situationally-aware agent with no moral or practical constraints, then very bad things might happen
I feel like we are still at different points in the middle of that spectrum, though. You seem to think that the balancing of incentives has to be pretty careful, because some pretty serious power-seeking is the default outcome. My intuition is something like: problematic power-seeking is possible but not expected under most normal/reasonable scenarios.
I have a hunch that the crux has something to do with our view of the fundamental nature of these agents.
… I accidentally posted this without finishing it, but honestly I need to do more thinking to be able to articulate this crux.