Thanks. I posted an overview about the ideas I’m interested in and their relevance. https://progressforum.org/posts/HdFCEkhGn2bJxdpbJ/a-plan-for-making-progress-debate-policies
I feel borderline/neutral about them. I’m interested in these topics, and I think a lot of other people would be, but it feels auxiliary/tangential.
Are essays about epistemology, rationality, error correction, learning methods, debate methodology, etc., considered on-topic or off-topic?
I think Leopold Aschenbrenner’s argument here is interesting to consider: https://worksinprogress.co/issue/securing-posterity/?ref=forourposterity.com
(full paper https://globalprioritiesinstitute.org/wp-content/uploads/Philip-Trammell-and-Leopold-Aschenbrenner-Existential-Risk-and-Growth.pdf)
Regarding the discount rate, a 0% rate is pretty common in EA circles, I think, although many recognize it should be at least a bit above 0% to account for epistemic uncertainty about how long humans will continue to exist.
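A minimal sketch of that adjustment (my framing, not any particular EA model): if there is an assumed constant annual probability p of extinction, then survival uncertainty acts like a small positive discount rate even with zero pure time preference.

```latex
% Illustrative only: v_t is the value realised in year t, \rho is pure time
% preference, and p is an assumed constant annual extinction probability.
PV \;=\; \sum_{t=0}^{\infty} \frac{(1-p)^t \, v_t}{(1+\rho)^t}
   \;\approx\; \sum_{t=0}^{\infty} \frac{v_t}{(1+\rho+p)^t}
% With \rho = 0, survival uncertainty alone behaves like a discount rate of
% roughly p per year, which is why "a bit above 0%" falls out naturally.
```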
Worldbuilding Hopeful Futures with AI – Free course now live
Hi everyone,
Foresight Institute just launched a new free, self-paced course on Udemy:
👉 Worldbuilding Hopeful Futures with AI

The goal: to empower more people — not just technologists or policymakers — to actively think about and shape AI’s long-term trajectory. We use storytelling and worldbuilding to crowdsource diverse, ambitious, and grounded visions of the future, and explore practical pathways to get there.
The course brings together insights from a wide range of thinkers across science, governance, and philosophy, including:
Helen Toner – Director of Strategy at CSET and former OpenAI board member
Anthony Aguirre – Co-founder of the Future of Life Institute
Hannah Ritchie – Lead Researcher at Our World in Data
Ada Palmer – Historian and sci-fi author
Glen Weyl – Founder of RadicalxChange, Microsoft Research
Robert Lempert – Senior Scientist at RAND, expert in scenario planning
Each module includes a short video, reading recommendations, and an interactive assignment to build your own future scenario.
We’d love for you to check it out, share it with others, and let us know what you think:
👉 https://www.udemy.com/course/worldbuilding-hopeful-futures-with-ai/

Happy to answer any questions or discuss the project further!
Yes, that’s correct. Ord writes this about discount rates:
> The issue raised by this paper has also been masked in many economic analyses by an assumption of pure time-preference: that society should have a discount rate on value itself. If we use that assumption, we end up with a somewhat different argument for advancing progress — one based on impatience; on merely getting to the good stuff sooner, even if that means getting less of it.
> Even then, the considerations I’ve raised would undermine this argument. For if it does turn out that advancing progress across the board is bad from a patient perspective, then we’d be left with an argument that ‘advancing progress is good, but only due to fundamental societal impatience and the way it neglects future losses’. The rationale for advancing progress would be fundamentally about robbing tomorrow to pay for today, in a way that is justified only because society doesn’t (or shouldn’t) care much about the people at the end of the chain when the debt comes due. This strikes me as a very troubling position and far from the full-throated endorsement of progress that its advocates seek.
So what’s the best argument for having a discount rate on value itself?
My first reaction was that this seems to be assuming a zero discount rate on the future. I haven’t had a chance to really dig into it, though.
Really enjoyed this book, it inspired me to start Roots of Progress! https://blog.rootsofprogress.org/the-idea-of-progress
I would say both immigration and crime are relevant to progress!
> Our primitive monkey brains are good at over-estimating very unlikely risks.[2]
I think this is presupposing the question, isn’t it?
If a risk is indeed very unlikely, then we will tend to overestimate it. (If the probability is 0, it’s impossible to underestimate it.)
But for risks that are actually quite likely, we are more likely to underestimate them.
And of course, biased estimates cut both ways: “Our primitive monkey brains are good at ignoring and underestimating abstract and hard-to-understand risks.”
Thanks Donald, good feedback. I agree about maximizing good over minimizing bad. Curing aging, or extending healthspan, is a great one. Certainly an easier sell than becoming a multiplanetary species.
This is a linkpost for https://amistrongeryet.substack.com/p/alphaproof-and-openai-o1
The latest advances in AI reasoning come from OpenAI’s o1 and Google’s AlphaProof. In this post, I explore how these new models work, and what that tells us about the path to AGI.
Interestingly, unlike GPT-2 → GPT-3 → GPT-4, neither of these models relies on increased scale to drive capabilities. Instead, both systems rely on training data that shows not just the solution to a problem, but the path to that solution. This opens a new frontier for progress in AI capabilities: how do we create that sort of data?
In this post, I review what is known about how AlphaProof and o1 work, discuss the connection between their training data and their capabilities, and identify some problems that remain to be solved in order for capabilities to continue to progress along this path.
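To make the “path, not just answer” distinction concrete, here is a purely illustrative sketch; the field names and format are invented for exposition, not the actual schemas used by OpenAI or DeepMind.

```python
# Purely illustrative: two hypothetical training-example formats.
# Field names are invented for exposition, not real OpenAI/DeepMind schemas.

outcome_only_example = {
    "problem": "Prove that the sum of two even integers is even.",
    "answer": "The sum is even.",  # only the final result is supervised
}

path_supervised_example = {
    "problem": "Prove that the sum of two even integers is even.",
    "steps": [
        "Let a = 2m and b = 2n for integers m, n.",
        "Then a + b = 2m + 2n = 2(m + n).",
        "Since m + n is an integer, a + b is divisible by 2, hence even.",
    ],
    "answer": "The sum is even.",  # each intermediate step can be checked or rewarded
}
```

The second form is what makes step-level checking or reward possible; the open question in the post is how to generate it at scale.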
Ok. Firstly, I do think your “Embodied information” is real. I just think it’s pretty small. You need the molecular structure for the 4 DNA bases and for 30-ish proteins, plus this Wikipedia page: https://en.wikipedia.org/wiki/DNA_and_RNA_codon_tables
That seems to be in the kilobytes. It’s a rather small amount of information compared to DNA.
Epigenetics is about extra tags that get added, so theoretically the amount of information could be nearly as much as is in the DNA. For example, methylation can happen on A and C, so that’s 1 bit per base pair, in theory.
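A quick back-of-envelope version of these size claims, using round numbers only, just to show the orders of magnitude:

```python
# Back-of-envelope sizes, round numbers only.
GENOME_BASE_PAIRS = 3_000_000_000      # ~3 billion bp in the human genome

dna_bits = GENOME_BASE_PAIRS * 2       # 4 bases => 2 bits per base pair
dna_megabytes = dna_bits / 8 / 1e6     # ~750 MB

# Epigenetics as framed above: up to ~1 methylation bit per base pair
epigenetic_bits = GENOME_BASE_PAIRS * 1
epigenetic_megabytes = epigenetic_bits / 8 / 1e6   # ~375 MB upper bound

# The codon table plus structures for the 4 bases and ~30 proteins is tiny
# by comparison -- plausibly kilobytes, i.e. 5-6 orders of magnitude smaller.
print(f"DNA: ~{dna_megabytes:.0f} MB, epigenetic upper bound: ~{epigenetic_megabytes:.0f} MB")
```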
Also, the structure of DNA hasn’t changed much since early micro-organisms existed. Neither has a lot of the other embodied information.
Therefore that information doesn’t contain any optimization for intelligence, because all life forms, with or without brains, share the same underlying DNA machinery.
Humans are better than LLMs at highly abstract tasks like quantum physics or Haskell programming.
You can’t argue that this is a result of billions of years of evolution. Sea sponges weren’t running crude Haskell programs a billion years ago.
Therefore, whatever data the human brain has, it must be highly general information about intelligence.
Suppose we put the full human genome, plus a lot of data about DNA and protein structure, into the LLM training data. In theory, the LLM has all the data that evolution worked so hard to produce. In practice, LLMs aren’t smart enough to come up with fundamental insights about the nature of intelligence from the raw human genome.
So there is some piece of data, with a length between a few bits and several megabytes, that is implicitly encoded in the human genome, and that describes an algorithm for higher intelligence in general.
If it’s a collection of millions of unintelligible interacting “hacks” tuned to statistical properties of the environment, then maybe not.
Well, those “hacks” would have to generalize well. Modern humans operate WAY out of distribution and work on very different problems.
Would interacting hacks that were optimized to hunt mammoths also happen to work in solving abstract maths problems?
So how would this work? There would need to be a set of complicated hacks that work on all sorts of problems, including abstract maths. Abstract maths has limitless training data, in theory. And if the hacks apply to all sorts of problems, then data on all sorts of problems is useful in finding the hacks.
If the hacks contain a million bits of information, and help answer a million true/false questions, then they are in principle findable with sufficient compute.
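A counting sketch of why that is true in principle (identifiability only; it says nothing about the cost of the search):

```latex
% A hack-set describable in N \approx 10^6 bits is one of at most 2^N candidates,
% and each true/false answer conveys at most one bit, so on the order of
%   \log_2\!\left(2^N\right) = N \approx 10^6
% well-chosen answers is the minimum needed -- and, in principle, enough --
% to single out the right candidate.
```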
Also, bear in mind that evolution is INCREDIBLY data-inefficient. Yes, there are a huge number of ancestors, but evolution only finds out how many children got produced. A human can look at a graph and realize that a 1% increase in parameter X causes a 1% improvement in performance. Evolution randomly makes some individual with 1% more X, and they get killed by a tiger. Bad luck.
And again: for most of those billions of years there were no brains at all. The gap between humans and monkeyish creatures is a few million years.
AIXI is a theoretical model of an ideal intelligence; it’s a few lines of maths.
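For reference, a slightly simplified rendering of Hutter’s AIXI action rule (U is a universal Turing machine, m the horizon, ℓ(q) the length of program q); this is the “few lines of maths” in question, paraphrased rather than quoted exactly:

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
  \big[r_t + \cdots + r_m\big]
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```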
I’m not saying it’s totally impossible that there is some weird form of evolutionary data wall, but mostly this looks like a fairly straightforward insight, possessable, and not yet possessed by us. I think it’s pretty clear that the human algorithm makes at least a modest amount of sense and isn’t too hard to find with trial and error on the same training dataset. (When the dataset is large and the amount of outer optimization is fairly modest, the risk of overfitting in the outer stage is small.)
https://www.lesswrong.com/posts/ZiRKzx3yv7NyA5rjF/the-robots-ai-and-unemployment-anti-faq
Once AI does get that level of intelligence, jobs should be the least of our concerns. Utopia or extinction, our future is up to the AI.
> It also seems vanishingly unlikely that the pressures on middle class jobs, artists, and writers will decrease even if we rolled back the last 5 years of progress in AI—but we wouldn’t have the accompanying productivity gains which could be used to pay for UBI or other programs.
When plenty of people are saying that AGI is likely to cause human extinction, and the worst scenario you can come up with is pressure on middle-class jobs, your side is the safe one.
I think your notion of “environmental progress” itself is skewing things.
When humans were hunter gatherers, we didn’t have much ability to modify our surroundings.
Currently, we are bemoaning global warming, but if the Earth were cooling instead, we would bemoan that too.
Environmentalism seems to only look at part of the effects.
No one boasts about how high the biodiversity is at zoos. No one is talking about cities being a great habitat for pigeons as an environmental success story.
The whole idea around the environmentalist movement is the naturalistic fallacy turned up to 11. Any change made by humans automatically becomes a problem.
Its goal seems to be “make the earth resemble what it would look like had humans never existed”.
(Name one way humans made an improvement to some aspect of the environment compared to what it was a million years ago)
A goal that kind of gets harder by default as humanity’s ability to modify the earth increases.
One system I think would be good is issue based voting.
So for example, there would be several candidates for the position of “health minister”, and everyone gets to vote on that.
And independently people get to vote for the next minister for education.
Other interesting add-ons include voting for an abstract organization, not a person. One person who decides everything is an option on the ballot, but so are various organizations with various decision procedures. You can vote for the team that listens to prediction markets, or even some sub-democracy system. (Because the organizations can use arbitrary mechanisms, including more votes, teams of people, whatever they like.)
Approval voting is good.
An interesting option is to run a 1-of-many election.
So you can cast a ballot in the health election, or the education election, or the energy election, etc., depending on which issue you feel most strongly about. (But you can’t vote in more than one election at a time.) This has a nice property: the fewer people who care about a topic, the further your vote goes if you decide to vote on that topic.
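A minimal sketch of how the tallying could work, assuming each voter picks exactly one issue-election and plurality counting within it; the issue names, candidates, and counting rule are illustrative choices, not a worked-out proposal.

```python
from collections import Counter, defaultdict

# Hypothetical ballots: each voter votes in exactly one issue-election,
# naming their preferred candidate for that office.
ballots = [
    {"issue": "health",    "candidate": "A"},
    {"issue": "health",    "candidate": "B"},
    {"issue": "education", "candidate": "C"},
    {"issue": "energy",    "candidate": "D"},
    {"issue": "energy",    "candidate": "D"},
]

def tally(ballots):
    """Count each issue-election separately; return its winner and turnout."""
    per_issue = defaultdict(Counter)
    for b in ballots:
        per_issue[b["issue"]][b["candidate"]] += 1
    return {
        issue: {"winner": counts.most_common(1)[0][0], "turnout": sum(counts.values())}
        for issue, counts in per_issue.items()
    }

print(tally(ballots))
# The fewer ballots cast in an election, the more weight each remaining vote carries there.
```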
Solving global warming
Most of the current attempts that interact with everyday people are random greenwashing trying to get people to sort recycling or use paper straws. Yes, solar panel tech is advancing, but that’s kind of in the background of most people’s day-to-day lives.
And all this goal is promising is that things won’t get worse via climate change. It isn’t actually a vision of positive change.
A future with ultra-cheap energy, electric short-range aviation in common use, etc.
Building true artificial intelligence (AGI, or artificial general intelligence)
Half the experts are warning that this is a poisoned chalice. Can we not unite towards this goal until/unless we come to the conclusion that the risk of human extinction from AGI takeover is low?
Also, if we do succeed in AGI alignment, the line from AGI to good things is very abstract.
What specific nice thing will the AGI do? (The actual answer is also likely to be a bizarre world full of uploaded minds or something. Real utopian futures aren’t obliged to make sense to the average person within a 5-minute explanation.)
Colonizing Mars
Feels like a useless vanity project. (See the moon landings: lots of PR, not much practical benefit.)
How about something like curing aging? Even the war on cancer was a reasonable vision of a positive improvement.
You are raising good questions, though they are probably beyond my scope to answer. My high-level take would be that quite a few existing laws could apply in such a scenario (e.g. Neuralink implants to record brain activity need FDA approval), and that we should expect laws to be adapted to new circumstances, with the caveat of the pacing problem.
Is there a collection of open questions?
Ben Norman, Max Maton, Jian Xin Lim and I are working on a Progress Studies Society in London for students/professionals. Our initial experiment is an 8-week in-person project-based fellowship, aimed at helping talented individuals start working on concrete problems relevant to progress studies.
We’re looking for lists of relevant project ideas—similar to what people have done in AI safety (e.g. here and here). The people working on these would be lower context, but dedicated/smart. We would be very grateful if anyone has suggestions :)