“Progress is real, desirable, and possible” is an inspiring slogan, but I would suggest that it’s actually mistaken. What we want is differential progress where we accelerate those technologies most likely to be beneficial and slow those technologies most likely to be harmful.
I’ve done something similar where I’ve asked for backstop funding for a project in case I wasn’t able to get it funded elsewhere.
I think there’s likely to be more tension between Progress Studies and the EA of today than there was with the EA of the past.
The EA of the past was much more focused on global development (progress = good), whilst EA is currently undergoing a hard pivot towards longtermism, most notably biorisk and AI risk (progress = bad). Actually, the way I’d frame it is more about the importance of ensuring differential progress rather than progress in general. And I don’t know how optimistic I am about Progress Studies heading in that direction, because thinking about progress itself is hard enough and differential progress would be even harder.
I’m quite involved in EA, so I’m probably biased towards thinking EA will be more influential than it may very well turn out to be. EA has built up a lot of infrastructure, including 80,000 Hours, EA Global conferences and student groups at top universities, and a huge number of new projects have launched this year. Progress Studies may be able to replicate that, but it remains to be seen.
Fascinating article. I’m surprised that I had never heard of the Bonfire of the Vanities and how it disrupted the Renaissance. I wonder how history would have turned out if it hadn’t been disrupted.
I also found it interesting how those short disruptions were sufficient to end those societies’ golden ages, particularly since I would be tempted to argue that our own society has recently been suffering through such a disruption.
For the flip side of the coin, I would like to nominate the invention of the nuclear bomb as one of the most tragic moments in history.
A few thoughts:
I really like the idea of an idea machine. I think more people within EA should consider EA as a system.
I’m surprised to hear “It’s time to build” is different from Progress Studies, as they seem pretty aligned. Then again, I’ve only really seen that essay by itself. Is there a broader community around it, and where can I find out about it?
“We seem to understand that entrepreneurship operates in a free market of ideas, so I’m not sure where the idea comes from that there is, or could be, One True Approach to philanthropy”—Agreed. In particular, I think that a lot of efforts to improve the world through politics shouldn’t occur through EA. I also appreciate that the rationality community is somewhat distinct from EA as that allows it to focus more on epistemics.
Owen Cotton-Barratt’s talk Prospecting for Gold has been pretty influential in Effective Altruism, shifting more effort towards lots of small experiments with high upside and limited downside (that said, a lot of money is still just redirected to the Against Malaria Foundation and other top charities).
Regarding expressive value, I’d suggest Eliezer’s essay—Purchase Fuzzies and Utilons Separately. In order to be an EA you don’t have to choose all your donations or actions according to EA principles. I think of it like being an artist—in order to be an artist you have to produce at least some art, but you can do other things with your time as well.
“EA will continue to grow, but it will never become the dominant narrative because it’s so morally opinionated”—There’s some intentionality here. Lots of people don’t want EA to grow too fast as they are worried that communities that grow too fast can fail to pass on their culture. In contrast, this is probably an accurate statement for Giving What We Can, which aims to grow as fast as it can, but which is rigorous enough that I expect it will only ever find a niche audience.
“Why aren’t there more effective altruisms?”—Perhaps it’s because being part of EA is appealing enough[1] that many people or groups that could have formed their own movement end up becoming part of EA (take for example AI Safety, although from what I heard at EAG London, AI Safety specific movement building is starting to take off).
One interesting question to ask is why EA became an idea machine and not LW. Again, part of this is that some people within LW don’t want it to become more of a movement because they are worried about this distorting its epistemics.
I think it is possible to turn ideas into action without major funders, but unfortunately EA has had limited success here.
[1] Access to talent and money.
“The development of the bomb may have been a pretty good period, since it led to nuclear energy and other innovations”
I agree that we’re probably ahead at this point, but, I don’t know, it seems like a pretty risky bet that it’ll remain net-positive over the long term. Like, sure, it’s nice that nuclear power is an option, even if we don’t make much use of it, and that we have isotopes for medical use, but that doesn’t really feel worth having a nuclear apocalypse hanging over our heads.
Einstein said: “I do not know with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.” And from what I’ve heard there’s truth in that. There’s a pretty good chance at least some humans will survive any nuclear conflict, but I’d be quite surprised if we didn’t fall way down the tech tree. So this kind of situation seems like the exact opposite of what we want if we’re in favour of progress.
I suppose TED talks are the closest thing that exists to this, though the popularity of TED seems to have peaked a while ago.
“It was likely inevitable anyway”
I’d suggest separating the question of whether a certain technology should have been developed from the question of whether avoiding it was possible. For example, let’s suppose someone is dying of cancer and we have no way of saving them.
Do we want to save them? Yes
Can we save them? No
I would be very disappointed if people ended up concluding from our inability to save them that we didn’t actually want to save them anyway.
Similarly for nuclear weapons, the table may very well be:
Do we want to avoid them: Yes
Can we avoid them: No
Which is what I would suggest. Or if suppressing this research would have led to a poorer world it may be:
Do we want to avoid them: No
Can we avoid them: No
But I think it’s best to avoid conflating these two questions. Even if we think there’s nothing we can do, if we conflate that with “We wouldn’t want to stop or slow its development anyway” then we would likely refuse an opportunity to make a difference even if we were handed it on a silver platter.
I suppose it would be possible to argue that atomic research led to a richer world, but I would question how big this impact really has been. Is it more than a couple of percent? And if not, is this really worth having a nuclear apocalypse hanging over our heads? One potentially useful thought experiment: how much would someone have to pay you to convince you to play a game of Russian roulette[1]?
[1] I only realised after writing this that the existence of nukes is literally a game of Russian roulette.
“The modern version is much more comfortable with technocracy”—I wasn’t aware of that. I would love to see a source on this.
Silicon Valley, with its focus on disruption, was originally highly suspicious of the business establishment, although this seems to have softened somewhat as it has formed its own establishment.
As an example, look at the 1984 Macintosh Commercial.
I expect that 2 is true as well, and so it made sense to invent the bomb before another, less responsible country did, but if we could have waved a wand and prevented the invention of nukes, then I think it would have been worthwhile even if it cost us nuclear energy or slowed global progress.
I mean, a lot of people oppose progress for pretty silly and not really thought out reasons, but as far as reasons go, “We invented/almost invented something that could potentially have killed everyone on earth” seems like not a bad reason to slow things down for a bit and reflect.
For example, what could be done to make AlexNet happen 10 years earlier?
I know it might be a heretical question on this forum, but do we really need to accelerate AI? Isn’t there some point at which we can say “fast enough”? Like, if we could press a button and make AGI appear today, would it be wise to press that button? Are we truly ready for the consequences of what would arguably be the most important moment in our entire history? Aren’t there enough other things in society that we could fix instead?
“If you want to build a ship, don’t drum up people to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea”—Antoine de Saint Exupéry
“This was triggered by news today in my home state of California, where a powerful legislator wants to spend $10 billion of our (temporary bumper) surplus subsidizing housing”
For some reason, the media really doesn’t want to spread the message “we need to build more housing”. One theory is that many of the older journalists own property and don’t want more construction in their neighborhoods. This doesn’t seem like a very good explanation, as then we might expect the younger journalists who don’t own property to push to build more.
A second theory is that the media is currently pushing the narrative of the rich oppressing the poor, and this explanation doesn’t fit with that narrative. This seems more likely. Many journalists are struggling financially due to the shift to online, so even if the housing market were fixed it likely wouldn’t fix their issues. Hence they are incentivised to push for a more extensive restructuring of society.
One thing to keep in mind regarding measuring influence by numbers: because EA started earlier, many EAs will be further into executing their plans. As an example, someone who was a student at a top university in 2020 might be a senior manager by 2030.
Nuclear non-proliferation has slowed the distribution of nukes; I acknowledge that this is slowing distribution rather than development.
There are conventions against the use or development of biological weapons. These don’t appear to have been completely successful, but they’ve had some effect.
There has been a successful effort to prevent genetic enhancement—this may be net-positive or net-negative—but it shows the possibility of preventing the development of a tech, even in China, which was assumed to be the Wild West.
But going further, Progress Studies wouldn’t exist if we didn’t think we could accelerate technologies. And as a matter of logic, if we have the option to accelerate something, we also have the option not to accelerate it; otherwise it was never an option. So even if we can’t slow a harmful technology relative to a baseline, we can at least not accelerate it.
I enjoyed this interview. I found it particularly interesting to hear how you were originally skeptical of the stagnation view and only came around to it later.
“Things that are good are desirable” would seem like a tautology.
But my deeper critique is that whether a motto is a good choice or not depends on the context. And while in the past it may have made sense to abstract out progress as good, we’re now at the point where operating within that abstraction can lead us horribly astray.
One thing to keep in mind is the potential for technologies to be hacked. I think widespread self-driving cars would be amazingly convenient, but also terrifying, as companies allow them to be updated over the air. Even though the chance of a hacking attack at any particular point in time is low, given a long enough time span and enough companies it’s practically inevitable. When it comes to these kinds of wide-scale risks, a precautionary approach seems sensible; when it comes to smaller and more manageable risks, a more proactionary approach makes sense.
I think that there is some value in this frame, but I guess I see this as limited to the context where we’re generally replacing bad problems with less bad problems.
I guess it would seem a bit blasé in a context where we take a problem that is only kind of bad and replace it with something that is a catastrophe.
So my tendency would be to be much more cautious about the potential to create new problems.