You might want to consider posting this as a top-level post as well.
Chris Leong
Notes on Differential Technological Development
I’m curious, what’s your main doubt about AGI happening eventually (excluding existential risks or scenarios where we end up back at the stone age)? The existence of humans, created by dumb evolution no less, seems to constitute strong evidence of physical possibility. And our ability to produce computer chips with astonishingly tiny components suggests that we can actually perform the physical manipulations required. So I think it’s one of those things that sounds more speculative than it actually is.
I mean, I guess it’s true that there is some doubt about AGI happening, but when you really get down to it, you can doubt anything. So I’d be curious to get a better idea of what you mean by “some doubt”: maybe even a rough percent chance? Within my model of the world, I assign a very low chance to AGI not happening (barring catastrophic risks as stated above), though I assign a somewhat higher, but still low, chance to my model being wrong.
I enjoyed this interview. I found it particularly interesting to hear how you were originally skeptical of the stagnation view and only came around to it later.
A few thoughts:
I really like the idea of an idea machine. I think more people within EA should consider EA as a system.
I’m surprised to hear “It’s time to build” is different from Progress Studies, as they seem pretty aligned. Then again, I’ve only really seen that essay by itself. Is there a broader community around it, and if so, where can I find out about it?
“We seem to understand that entrepreneurship operates in a free market of ideas, so I’m not sure where the idea comes from that there is, or could be, One True Approach to philanthropy”—Agreed. In particular, I think that a lot of efforts to improve the world through politics shouldn’t occur through EA. I also appreciate that the rationality community is somewhat distinct from EA as that allows it to focus more on epistemics.
Owen Cotton-Barratt’s talk Prospecting for Gold has been pretty influential in Effective Altruism in shifting more effort towards lots of small experiments with high upside and limited downside (that said, a lot of money is still just redirected to the Against Malaria Foundation and other top charities).
Regarding expressive value, I’d suggest Eliezer’s essay Purchase Fuzzies and Utilons Separately. In order to be an EA, you don’t have to choose all your donations or actions according to EA principles. I think of it like being an artist: in order to be an artist you have to produce at least some art, but you can do other things with your time as well.
“EA will continue to grow, but it will never become the dominant narrative because it’s so morally opinionated”—There’s some intentionality here. Lots of people don’t want EA to grow too fast, as they worry that communities that grow too quickly can fail to pass on their culture. In contrast, this is probably an accurate statement for Giving What We Can, which aims to grow as fast as it can, but which is demanding enough that I expect it will only ever find a niche audience.
“Why aren’t there more effective altruisms?”—Perhaps it’s because being part of EA is appealing enough[1] that many people or groups that could have formed their own movement end up becoming part of EA (take for example AI Safety, although from what I heard at EAG London, AI Safety specific movement building is starting to take off).
One interesting question to ask is why EA became an idea machine and not LW. Again, part of this is that some people within LW don’t want it to become more of a movement because they are worried about this distorting its epistemics.
I think it is possible to turn ideas into action without major funders, but unfortunately, EA has had limited success here.
[1] Access to talent and money
Fascinating article. I’m surprised that I had never heard of the Bonfire of the Vanities and how it disrupted the Renaissance. I wonder how history would have turned out if it hadn’t been disrupted.
I also found it interesting how those short disruptions were sufficient to end those societies’ golden ages, particularly since I would be tempted to argue that our own society has recently been suffering through such a disruption.
For the flip side of the coin, I would like to nominate the invention of the nuclear bomb as one of the most tragic moments in history.
I don’t identify as a materialist either (I’m still figuring out my views here), but the question of qualia seems orthogonal to the question of capabilities. A philosophical zombie has the same capability to act in the world as someone who isn’t a zombie.
(I should add, this conversation has been useful to me, as it’s helped me understand why certain things I take for granted may not be obvious to other people.)
What’s your doubt?
Given enough computing power, we should be able to more or less simulate a brain. What is or was your worry? Ability to parallelise? Maybe that even though it may eventually become technically possible, it’ll always be cost-prohibitive? Or maybe that small errors in the simulation would magnify over time?
The aspect I was arguing for as almost certain on the inside view is that we will be able to develop AGI eventually, barring catastrophe. I wasn’t extending that to “AGI will be here soon”.
Regarding “AGI will kill us or solve all our problems”: I think there are some possible scenarios where we end up with a totalitarian government or an oligarchy controlling AI, or the AI keeps us alive for some reason (incl. s-risk scenarios), or we are disempowered by AI and “go out with a whimper” as per What Failure Looks Like. But I assign almost no weight on the inside view to AGI just not being that good. (By that I mean I exclude the scenarios common in sci-fi where we have AGI and humans still do most things and are better or as good, but not scenarios where humans do things because we don’t trust the AI or because we need “fake jobs” for the humans to feel important.)
Thanks for posting this! I would lean towards saying that it would be more tractable for Progress Studies to make progress on these issues than it might appear at first glance. One major advantage that Progress Studies has is that it is a big-tent movement. Lots of people are affected by the unaffordability of housing and would love to see it cheaper, but very few people care enough about housing policy to show up to meetings about it every month. The topic just isn’t that interesting to most people, myself included, and the conversations would probably get old fast. In contrast, Progress Studies promises to bundle enough ideas together that it has real growth potential.
One thing to keep in mind is the potential for technologies to be hacked. I think widespread self-driving cars would be amazingly convenient, but also terrifying, given that companies allow them to be updated over the air. Even though the chance of a hacking attack at any particular instant is low, given a long enough time span and enough companies it’s practically inevitable. When it comes to these kinds of wide-scale risks, a precautionary approach seems advisable; when it comes to smaller and more manageable risks, a more proactionary approach makes sense.
One thing to keep in mind regarding measuring influence by numbers: because EA started earlier, many EAs will be further into executing their plans. As an example, someone who is a student at a top university in 2020 might be a senior manager by 2030.
This was triggered by news today in my home state of California, where a powerful legislator wants to spend $10 billion of our (temporary bumper) surplus subsidizing housing
For some reason, the media really doesn’t want to spread the message “we need to build more housing”. One theory is that many of the older journalists own property and don’t want more construction in their neighborhoods. This doesn’t seem like a very good explanation, though, since we would then expect the younger journalists who don’t own property to push to build more.
A second theory is that the media is currently pushing the narrative of the rich oppressing the poor, and this explanation doesn’t fit with that narrative. This seems more likely. Many journalists are struggling financially due to the shift to online media, so even if the housing market were fixed, it likely wouldn’t fix their issues. Hence they are incentivised to push for a more extensive restructuring of society.
“If you want to build a ship, don’t drum up people to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea”—Antoine de Saint Exupéry
I think there’s likely to be a bit more tension between the EA of today and Progress Studies than there was between the EA of the past and Progress Studies.
The EA of the past was much more focused on global development (progress = good), whilst EA is currently undergoing a hard pivot towards long-termism, most notably bio-risk and AI-risk (progress = bad). Actually, the way I’d frame it is that what matters is ensuring differential progress rather than progress in general. And I don’t know how optimistic I am about Progress Studies heading in that direction, because thinking about progress itself is hard enough, and differential progress would be even harder.
I’m quite involved in EA, so I’m probably biased towards thinking EA will be more influential than it may very well turn out to be. EA has built up a lot of infrastructure, including 80,000 Hours, EA Globals and student groups at top universities; and a huge number of new projects launched this year. Progress Studies may be able to replicate that, but it remains to be seen.
It isn’t clear that the offense-defense balance directly affects the number of deaths in a conflict in the way that you claim. For example, machine gun nests benefitted defenders significantly, but could quite easily have resulted in more deaths in warfare, due to the use of tactics that hadn’t yet accounted for them.
If you had told people in the 1970s that in 2020 terrorist groups and lone psychopaths would be able to access, from their pocket, more computing power than IBM had ever produced up to that point, what would they have predicted about the offense-defense balance of cybersecurity?
I don’t know why you’d think that compute would be the limiting factor here. Absent AI, there are limited ways in which to deploy more compute.
Most recent thing that pops into mind is Beff trying to spread the meme that EA is just a bunch of communists.
E/acc seems to do a good job of bringing people together in Twitter Spaces.
Most of the principles espoused by EA (scientific mindset, openness to falsifying evidence, integrity, and teamwork) are shared by e/acc.
EA here.
Doesn’t seem true as far as I can tell. E/acc doesn’t want to expose its beliefs to falsification; that’s why it’s almost always about attacking the other side and almost never about arguing for things on the object level.
E/acc doesn’t care about integrity either. They’re very happy to tweet all kinds of weird conspiracy theories. Anyway, I could be biased here, but that’s how I see it.
Great post. I really appreciated your comparison of the “more is better attitude” regarding knowledge with the “more is better attitude” regarding food.
I suspect that the board will look better over time as more information comes out.
Here are some quotes from the Time article where Sam was named CEO of the Year:
In other words, it appears that Sam started the fight, not them. Is it really that crazy for the board to attempt to remove a CEO who had attempted to undermine the board’s oversight of him?
They were definitely outclassed in terms of their political ability, but I don’t think they were incompetent. It’s more that when you go up against a much more skilled actor, they end up making you look incompetent.