The thing about e/acc is that it's a mix of the reasonable and the insane doom cult.
The reasonable parts talk about AI curing diseases, etc., and argue for speeding it up.
Given some chance of AI curing diseases, and some chance of AI-caused extinction, it's a tradeoff.
Where the optimal point of that tradeoff lands depends on whether we care only about existing humans or about all potential future humans. And also on how big we think the risk of AI-caused extinction is.
If we care about all future humans, and think AI is really dangerous, we get a "proceed with extreme caution" position: one that accepts building ASI eventually, but is quite keen to delay it 1,000 years if that buys any additional safety.
On the other end, some people think the risks are small, and mostly care about themselves/current humans. They lean more e/acc.
But there are also various "AI will be our worthy successor", "AI will replace humans, and that's great" strands of e/acc that are OK with the end of humanity.
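The tradeoff can be made concrete with a toy expected-value sketch. All numbers below are illustrative placeholders, not estimates; the point is only to show how the preferred policy flips depending on whose welfare you count:

```python
# Toy model: expected value of building ASI now vs. delaying for safety.
# All probabilities and payoffs are illustrative placeholders, not estimates.

def expected_value(p_doom, value_if_good, value_if_doom=0.0, delay_cost=0.0):
    """Expected value of a policy given a probability of catastrophe."""
    return (1 - p_doom) * value_if_good + p_doom * value_if_doom - delay_cost

# Counting only people alive today: a 1,000-year delay costs them nearly
# everything, since they don't live to see the benefits.
current_only = {
    "build_now": expected_value(p_doom=0.1, value_if_good=100),
    "delay": expected_value(p_doom=0.01, value_if_good=100, delay_cost=95),
}

# Counting all potential future people: the upside dwarfs the delay cost,
# so even a small reduction in p_doom justifies a long wait.
all_future = {
    "build_now": expected_value(p_doom=0.1, value_if_good=1e6),
    "delay": expected_value(p_doom=0.01, value_if_good=1e6, delay_cost=100),
}

print(current_only)  # "build now" can win if you discount the far future
print(all_future)    # "delay" wins by a wide margin
```

With these particular numbers, "build now" wins under the current-people-only accounting and "delay" wins under the all-future-people accounting, which is the whole disagreement in miniature.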
I don’t see any specific criticism of effective altruism other than “I don’t like the vibes”.
And the criticism from “acrimonious corporate politics”.
“Helen Toner was apparently willing to let OpenAI be destroyed because of a general feeling that the organization was moving too fast or commercializing too much.”
Between the two of them, a philosophy that aims to prevent catastrophic risk in the future seems to be creating its own catastrophes in the present.
Shutting down a company and some acrimonious boardroom discussion is hardly "catastrophic". And it can be the right move, if you think the danger exceeds the value the company is creating.
For example, if a company makes nuclear power plants that are meltdowns waiting to happen, or kids' toys full of lead, shutting that company down is a good move.
It could be an externality if the land were randomly reassigned to a new owner every year, or something like that. But if the land is sold, the effect is taken into account in the sale price. It isn't an externality; capitalism has priced this effect in.
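A minimal sketch of this capitalization argument, with hypothetical rents and discount rate: any harm that degrades the land's future yield shows up today as a lower sale price, so the current owner bears the cost rather than future owners.

```python
# Sketch: future harms to land are capitalized into today's sale price.
# Rent, harm, discount rate, and horizon are all hypothetical numbers.

def land_price(annual_rent, annual_harm=0.0, discount_rate=0.05, years=50):
    """Price a buyer would pay: discounted sum of future (rent - harm)."""
    return sum((annual_rent - annual_harm) / (1 + discount_rate) ** t
               for t in range(1, years + 1))

clean = land_price(annual_rent=100)
damaged = land_price(annual_rent=100, annual_harm=20)

# The seller of damaged land receives less today, so the future harm is
# internalized by the current owner, not pushed onto future owners.
print(round(clean, 2), round(damaged, 2), round(clean - damaged, 2))
```

The price gap equals the discounted value of the entire future stream of harm, which is why the sale transaction, rather than reassignment by lottery, is what internalizes the cost.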
Good question, I don’t know. People have been talking about “progress studies” or the “progress movement” or “progress community”, and others have talked about the “abundance agenda”, but none of those lend themselves to personal labels/identities…
A potential area of overlap between effective altruism and Roots of Progress is the non-profit New Harvest, which funds research into making meat, eggs, and milk without animals.
What’s a good alternative word for someone who has a strong conviction in the past, present, and future benefits of technology?
Great point. I wish we had more ideas about how to improve this. There are many places we might try to fix it: philanthropists might redirect funding, or we might try to provide career paths for these institutions' employees that span the space of current problems, not just the one problem they work on.
I have never particularly liked the term “techno-optimism” anyway. “Optimism” on its own is confusing enough. “Techno-optimism” implies that not only do you think we can solve all problems, but that technology will be the solution to all of them, which is not really true.
Thanks, good point about the flow here.
Thanks Robert. I think progress studies needs a more well-defined value system. I have gestured at “humanism” as the basis for this, but it needs much more.
I agree that Rand’s ideas are important here, particularly her view of creative/productive work as a noble activity and of scientists, inventors and business leaders as heroic figures.
There is an argument to be made that e/acc is the Jungian shadow to EA.
There is a fundamental difference in principles between the two movements: EA gradually, and then suddenly, fell into a paternalistic disregard (if not disdain) for the negative feedback that the market provides—e.g., Helen Toner's belief that the dissolution of OpenAI was an acceptable alternative to resolving differences with the CEO. With this exception, though, most of the principles espoused by EA (scientific mindset, openness to falsifying evidence, integrity, and teamwork) are shared by e/acc.
But EA started with philosophical principles and became a mass movement. e/acc more or less has begun as a mass movement, and is only gradually and haltingly identifying its principles.
Both EA and e/acc reflexively repress the valid differences in the other's approach to promoting progress. While e/acc is now on the ascendant and EA on the ropes, until each can integrate its shadow, both will fall short of their potential in activating human energy in service of progress.
What would a fully integrated vision of progress look like? It would acknowledge the valid view of e/acc that markets generally provide the best mechanism for gathering and processing information about the needs of dispersed groups of individuals, while at the same time acknowledging and grappling with the reality that there are some important needs that cannot be met by markets (because the preconditions for market formation have not been, or cannot be, met).
But I would be very careful posting this sort of essay online right now. You are either for or against at the moment. Anybody trying to nuance things is likely to be sidelined.
In theory, if they could be made to work, self-driving cars would be one of the best technologies ever. In practice, the technology seems stuck in a rut. Although exact statistics are hard to come by, the number of human interventions seems to remain high.
There is a very high burden of proof for self-driving car companies like Cruise and Waymo; they need to convincingly demonstrate, using robust statistical evidence, that their vehicles are indeed significantly safer than human drivers in the same locales. Cruise, Waymo, et al. have certainly had plenty of time to produce such evidence, but they have yet to do so.
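To make concrete what "robust statistical evidence" would look like, here is a rough sketch of a crash-rate comparison. The figures are hypothetical, not real Cruise or Waymo data, and the test is a simple normal approximation to a Poisson rate difference:

```python
# Sketch of the kind of statistical comparison such a safety claim needs.
# Crash counts and mileage below are hypothetical, not real AV data.
from math import sqrt

def crash_rate_comparison(av_crashes, av_miles, human_crashes, human_miles):
    """Approximate z-test for a difference in crash rates per mile."""
    r_av = av_crashes / av_miles
    r_h = human_crashes / human_miles
    # Standard error of the rate difference, treating counts as Poisson
    se = sqrt(av_crashes / av_miles**2 + human_crashes / human_miles**2)
    z = (r_h - r_av) / se
    return r_av, r_h, z

# Hypothetical: 30 crashes in 5M AV miles vs. 4,000 in 500M human miles
r_av, r_h, z = crash_rate_comparison(30, 5e6, 4000, 500e6)
print(r_av * 1e6, r_h * 1e6, round(z, 2))
```

With these numbers the AV rate looks lower (6 vs. 8 crashes per million miles), yet z comes out below 2, i.e. not statistically convincing: with only a few million AV miles the uncertainty is too wide. That is one way of seeing why the burden of proof remains unmet.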
As bullish as I once was on self-driving cars, I think it is reasonable for people to worry about the potential danger posed by these prototypes driving around their streets. If self-driving car companies can't show that those worries are misplaced, then, well, banning such testing on public roads doesn't strike me as unreasonable. At the very least, there seems to be little excuse for taking safety drivers out of prototype cars.
My view on self-driving car bans is influenced by my view that fundamental research breakthroughs are needed to make wide-scale commercialization of self-driving cars a reality. I don't think the bottleneck is more public road testing; deep learning researchers need to figure out things like self-supervised video prediction. Until then, self-driving cars will continue to spin their wheels.
I think Ezra Klein has a lucid take on the “manifesto”. Ezra observes that it’s a covert anti-wokeness rant:
It’s a mistake to read his manifesto as about technology. It’s about how we were once brave and strong and we have become soft and weak.
In Ezra’s New York Times column on Andreessen’s rant, he writes:
It’s a coalition obsessed with where we went wrong: the weakness, the political correctness, the liberalism, the trigger warnings, the smug elites. It’s a coalition that believes we were once hard and have become soft; worse, we have come to lionize softness and punish hardness.
I would describe myself as a techno-optimist, but I find Andreessen’s rant distasteful and alienating. I think allowing Andreessen to define what constitutes techno-optimism would do significant damage to the techno-optimist cause.
Hi Jason, this is great. I would love to read more about how you believe Progress Studies could become a philosophy on par with Effective Altruism. I think an advantage EA has is its roots in John Stuart Mill and some of his contemporaries. Personally, I've found it harder to pinpoint which philosophers were early proponents of Progress Studies; my sense is that the idea of building, whatever the trials and tribulations, is fundamentally a Stoic idea. Indeed, I think Ayn Rand's ideas, particularly on the importance of individualism, are important if one wants to construct an epistemic history of Progress Studies.
Thanks for sharing this draft.
Style suggestion: you could put the penultimate paragraph before the preceding one and delete the final paragraph. That would reduce the preachiness at the end and the repetition of ideas between the last and third-to-last paragraphs. Plus, going straight from "we need serious people" to the paragraph about those people is what your structure is asking for.
I’m working on an essay on patents and progress. Does anyone want to give it a read and give me some feedback?
Thank you for sharing! I love the tagline on your article about masks bringing us together over the past 400 years, much like medical progress. Also, the fact that you can link me directly to an article from 1905 gives me amazing thoughts about progress.
Well, if it affects one plot of land that is currently the property of just one person, it can still be an externality, because lots of different people will own this land in the future.
Yes, the psychological factor is often cited for discrete events that bring people closer together or highlight a stark idea of what is important in their lives. But did COVID initially present a more troubling future? That might work against this idea, because a world subject to a global pandemic is one to be pessimistic about. However, your point might hold for the women highlighted here, since they were in a much more secure place than peers facing exposure and uncertainty about their employment.
I’ve also seen discussion about how the opportunity cost of time—what else women could be doing during this period—fast-forwarded plans. Nothing much to do with my free time, so might as well have a baby! That could speak to Claudia’s work, because her thesis about women’s late fertility has to do with the cost of establishing a career: the time cost of that delays having a family. In the COVID period, many time costs were slashed (commuting, meetings, and most social obligations). It might have seemed more feasible to start a family with a 2020/2021 view of the balance of time available for both pursuits.
Yep. I wanted to lay out a somewhat more detailed accounting of it, as a basis for future work on how institutions are designed—and how they should be designed, if we want them to be more effective.