Most recent thing that pops into mind is Beff trying to spread the meme that EA is just a bunch of communists.
E/acc seems to do a good job of bringing people together in Twitter spaces.
Helen Toner was apparently willing to let OpenAI be destroyed because of a general feeling that the organization was moving too fast or commercializing too much.
I suspect that the board will look better over time as more information comes out.
Here are some quotes from the Time article in which Sam was named CEO of the Year:
But four people who have worked with Altman over the years also say he could be slippery—and at times, misleading and deceptive. Two people familiar with the board’s proceedings say that Altman is skilled at manipulating people, and that he had repeatedly received feedback that he was sometimes dishonest in order to make people feel he agreed with them when he did not. These people saw this pattern as part of a broader attempt to consolidate power. “In a lot of ways, Sam is a really nice guy; he’s not an evil genius. It would be easier to tell this story if he was a terrible person,” says one of them. “He cares about the mission, he cares about other people, he cares about humanity. But there’s also a clear pattern, if you look at his behavior, of really seeking power in an extreme way.”
… Some worried that iterative deployment would accelerate a dangerous AI arms race, and that commercial concerns were clouding OpenAI’s safety priorities. Several people close to the company thought OpenAI was drifting away from its original mission. “We had multiple board conversations about it, and huge numbers of internal conversations,” Altman says. But the decision was made. In 2021, seven staffers who disagreed quit to start a rival lab called Anthropic, led by Dario Amodei, OpenAI’s top safety researcher.
… For some time—little by little, at different rates—the three independent directors and Sutskever were becoming concerned about Altman’s behavior. Altman had a tendency to play different people off one another in order to get his desired outcome, say two people familiar with the board’s discussions. Both also say Altman tried to ensure information flowed through him. “He has a way of keeping the picture somewhat fragmented,” one says, making it hard to know where others stood.
… Altman told one board member that another believed Toner ought to be removed immediately, which was not true, according to two people familiar with the discussions.
In other words, it appears that Sam started the fight, not them. Is it really that crazy for the board to attempt to remove a CEO who tried to undermine the board’s oversight of him?
They were definitely outclassed in terms of their political ability, but I don’t think they were incompetent. It’s more that when you go up against a much more skilled actor, they end up making you look incompetent.
Most of the principles espoused by EA (scientific mindset, openness to falsifying evidence, integrity, and teamwork) are shared by e/acc.
EA here.
Doesn’t seem true as far as I can tell. E/acc doesn’t want to expose its beliefs to falsification; that’s why it’s almost always about attacking the other side and almost never about arguing for things on the object level.
E/acc doesn’t care about integrity either. They’re very happy to Tweet all kinds of weird conspiracy theories.
Anyway, I could be biased here, but that’s how I see it.
Great post. I really appreciated your comparison of the “more is better attitude” regarding knowledge with the “more is better attitude” regarding food.
You might want to consider posting this as a top-level post as well.
Actually, I can imagine a world where physical brains operated by interacting with some unknown realm that provided a kind of computational capability the brain itself lacked, although as neuroscience advances, there seems to be less and less scope for anything like this (not that I know very much about neuroscience at all).
I don’t identify as a materialist either (I’m still figuring out my views here), but the question of qualia seems orthogonal to the question of capabilities. A philosophical zombie has the same capability to act in the world as someone who isn’t a zombie.
(I should add, this conversation has been useful to me as it’s helped me understand why certain things I take for granted may not be obvious to other people).
What’s your doubt?
Given enough computing power, we should be able to more or less simulate a brain. What is or was your worry? Ability to parallelise? Maybe that even though it may eventually become technically possible, it’ll always be cost-prohibitive? Or maybe that small errors in the simulation would magnify over time?
The aspect I was arguing for as almost certain on the inside view is that we would be able to develop AGI eventually, barring catastrophe. I wasn’t extending that to “AGI will be here soon”.
Regarding “AGI will kill us or solve all our problems”: I think there are some possible scenarios where we end up with a totalitarian government or an oligarchy controlling AI, or the AI keeps us alive for some reason (incl. s-risk scenarios), or we are disempowered by AI (“going out with a whimper”, as per What failure looks like). But I assign almost no weight, on the inside view, to AGI just not being that good. (What I mean by that is that I exclude the scenarios common in sci-fi where we have AGI and yet humans still do most things and are better or as good at them, but not scenarios where humans do things because we don’t trust the AI or because we need “fake jobs” for humans to feel important.)
I’m curious: what’s your main doubt about AGI happening eventually (excluding existential risks or scenarios where we end up back in the stone age)? The existence of humans, created by dumb evolution no less, seems to constitute strong evidence of physical possibility. And our ability to produce computer chips with astonishingly tiny components suggests that we can actually perform the physical manipulations required. So I think it’s one of those things that sounds more speculative than it actually is.
I mean, it’s true that there is some doubt about AGI happening, but when you really get down to it, you can doubt anything. So I’d be curious to have a better idea of what you mean by “some doubt” (maybe even a rough percent chance?). Within my model of the world, I assign a very low chance to AGI not happening (barring catastrophic risks, as stated above), though I assign a higher, but still low, chance to my model being wrong.
Thanks for posting this! I would lean towards saying that it would be more tractable for Progress Studies to make progress on these issues than it might appear at first glance. One major advantage Progress Studies has is that it is a big-tent movement. Lots of people are affected by the unaffordability of housing and would love to see it cheaper, but very few people care enough about housing policy to show up to meetings about it every month. The topic just isn’t that interesting to most people, myself included, and the conversations would probably get old fast. In contrast, Progress Studies promises to bundle enough ideas together that it has real growth potential.
One thing to keep in mind is the potential for technologies to be hacked. I think widespread self-driving cars would be amazingly convenient, but also terrifying, as companies allow them to be updated over the air. Even though the chance of a hacking attack at any particular point in time is low, given a long enough time span and enough companies, it’s practically inevitable. For these kinds of wide-scale risks, a precautionary approach seems viable; for smaller, more manageable risks, a more proactionary approach makes sense.
“Things that are good are desirable” would seem like a tautology.
But my deeper critique is that whether a motto is a good choice or not depends on the context. And while in the past it may have made sense to abstract out progress as good, we’re now at the point where operating within that abstraction can lead us horribly astray.
I enjoyed this interview. I found it particularly interesting to hear how you were originally skeptical of the stagnation view and only came around to it later.
Nuclear non-proliferation has slowed the distribution of nukes; I acknowledge that this is slowing distribution rather than development.
There are conventions against the use and development of biological weapons. These don’t appear to have been completely successful, but they’ve had some effect.
There has been a successful effort to prevent genetic enhancement (this may be net-positive or net-negative), but it shows the possibility of preventing the development of a tech, even in China, which was assumed to be the Wild West here.
But going further, Progress Studies wouldn’t exist if we didn’t think we could accelerate technologies. And as a matter of logic, if we have the option to accelerate something, we also have the option not to accelerate it; otherwise it was never an option. So even if we can’t slow a harmful technology relative to a baseline, we can at least not accelerate it.
One thing to keep in mind regarding measuring influence by numbers: because EA started earlier, many EAs will be further into executing their plans. As an example, someone who is a student at a top university in 2020 might be a senior manager by 2030.
This was triggered by news today in my home state of California, where a powerful legislator wants to spend $10 billion of our (temporary, bumper) surplus subsidizing housing.
For some reason, the media really doesn’t want to spread the message “we need to build more housing”. One theory is that many older journalists own property and don’t want more construction in their neighborhoods. This doesn’t seem like a very good explanation, as it would predict that younger journalists who don’t own property would push to build more.
A second theory is that the media is currently pushing the narrative of the rich oppressing the poor, and “build more housing” doesn’t fit that narrative. This seems more likely. Many journalists are struggling financially due to the shift to online media, so even if the housing market were fixed, it likely wouldn’t fix their issues. Hence they are incentivised to push for a more extensive restructuring of society.
“If you want to build a ship, don’t drum up people to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea”—Antoine de Saint Exupéry
It isn’t clear that the offense-defense balance directly affects the number of deaths in a conflict in the way that you claim. For example, machine-gun nests benefitted the defenders significantly, but could quite easily have resulted in more deaths in warfare, due to the use of tactics that hadn’t yet accounted for them.
I don’t know why you’d think that compute would be the limiting factor here. Absent AI, there are limited ways in which to deploy more compute.