AMA: Allison Duettmann, Foresight Institute
Hi everyone. I am Allison Duettmann, president and CEO of Foresight Institute. I recently co-authored the book ‘Gaming the Future: Technologies for Intelligent Voluntary Cooperation’, published by our own Foresight Press, with Christine Peterson and Mark S. Miller.
Also at Foresight, I direct the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Groups, Fellowships, Prizes, and Tech Trees. I also share Foresight’s work with the public, for instance at the Wall Street Journal, SXSW, O’Reilly AI, WEF, The Partnership on AI, Effective Altruism Global, and TEDx. Additionally, I founded Existentialhope.com, co-edited Superintelligence: Coordination & Strategy, and co-initiated The Longevity Prize. I advise non-profits, companies, and individuals. I hold an MS in Philosophy & Public Policy from the London School of Economics, focusing on AI Safety, and a BA in Philosophy, Politics, Economics from York University.
Ask me anything! I will be here to answer your questions Monday, March 20. Use the comments below to add questions, and upvote any questions you’d like to see me answer.
How does Foresight Institute allocate its time/effort across projects? Do you have a way of thinking about how much attention to spend on different speculative areas?
Foresight Institute was established in 1986 around the ideas discussed in Engines of Creation, written by Eric Drexler, co-founder of Foresight. The book lays out a network of technologies with the potential to significantly enhance the human condition, including nanotechnology, biotechnology, information technology, and cognitive science, which are interconnected with other important technologies like robotics and space exploration in complex ways.
Given the broad technology stack Engines considered, the book, and Foresight, became an early Schelling point for scientists and technologists who wanted great futures across this whole range of technologies.
So within this broad technology stack, we decide on our focus by weighing how much attention an issue we consider important is already receiving against how much our community is in a position to contribute to it.
For instance, from our inception until today, the general field of molecular nanotechnology has remained undervalued, and our community has a unique potential to contribute to it, so advancing the field in a beneficial direction is still where the bulk of our fellowship, prize, workshop, and seminar strength lies.
Then, within our other technology focus areas (bio, neuro, space, secure human AI cooperation), there are often specific subdomains that are still too niche, exotic, ambitious or interdisciplinary for the mainstream of that field to address.
For instance, when potential AI race dynamics first became an issue, we used to hold annual workshops after the Bay Area EAGs, focused on AGI coordination across great powers and corporations:
2019 / AGI: Toward Cooperation: https://fsnone-bb4c.kxcdn.com/wp-content/uploads/2019/12/2019-AGI-Cooperation-Report.pdf
2018 / AGI: Coordination & Great Powers: https://fsnone-bb4c.kxcdn.com/wp-content/uploads/2018/11/AGI-Coordination-Geat-Powers-Report.pdf
2017 / AGI: Timelines & Policy: https://foresight.org/wp-content/uploads/2022/11/AGI-TimeframesPolicyWhitePaper.pdf
Around that time, a lot of governance work popped up, so we refocused our annual AGI workshops on bridge-building between the security and AI safety communities, a currently undervalued area that we can meaningfully contribute to given our existing strong security and cryptography community:
2022/ Cryptography, Security, AI workshop: https://foresight.org/crypto-workshop/
2023/ Cryptography, Security, AI workshop: https://foresight.org/intelligent-cooperation-workshop-2023
That being said, we’re currently reviewing whether to take up the AI coordination workshops again, given that timelines are coming down, which is leading to new interest in revisiting those meetings.
Another area we’re taking up given shortening AI timelines is revisiting Whole Brain Emulation as a potential strategy for AI safety; we think our AI and neurotech community gives us a comparative advantage in helping here. This led to a 2023 workshop, chaired by Anders Sandberg, co-author of the original WBE roadmap in 2007:
2023/ Whole Brain Emulation for AI Safety workshop: https://foresight.org/whole-brain-emulation-workshop-2023
Do you see most of the value/impact/benefit that the Foresight Institute produces as coming from a few key outputs, or from a larger list of projects each of which only produces a fraction of your total value? If the first, what are your key outputs?
I think Foresight’s value comes from a larger list of projects, each of which has a small chance of creating a large impact. This follows mostly from the fact that we focus on advancing the beneficial use of a variety of undervalued technologies, including nano, bio, neuro, computing, and space, whose trajectories are harder to predict. We do this through early ecosystem development in these areas, usually via tools like our fellowships, prizes, workshops, and virtual seminars. Given that many of these technologies are influenced by the relative speed of other technologies, they advance at varying rates, and the tools to accelerate them are differently useful at different stages.
For instance, to drive progress in Molecular Nanotechnology, from 1986 onward Foresight hosted annual technical conferences, published research papers, developed a Nanotechnology Roadmap, and launched the Feynman Prizes to award work toward molecular manufacturing.
The road was incredibly bumpy, but in 2016, Sir Fraser Stoddart was finally awarded the Nobel Prize in Chemistry “for the design and synthesis of molecular machines” (https://www.nobelprize.org/prizes/chemistry/2016/press-release/), just nine years after he received Foresight’s Feynman Prize for the same work: https://foresight.org/foresight-feynman-prizes/
Today, molecular nanotechnology progress is accelerating, largely enabled by new AI simulation tools such as AlphaFold, Rosetta, Samson, CanDo, and more. These simulation tools, combined with progress in newer approaches to molecular nanotechnology such as DNA origami, have led tech analysts such as Eli Dourado to declare that it’s nanotechnology’s spring: https://worksinprogress.co/issue/nanotechnologys-spring
To streamline progress across tool builders, in 2022 we hosted a workshop focused on design tools for molecular machine systems (https://foresight.org/molecular-workshop), whose 2023 iteration will focus on opportunities for combining insights across tools to work toward the design of more complex molecular machinery: https://foresight.org/foresight-molecular-systems-design-workshop-2023
We aren’t a leading driver in each of these technological areas in particular, but by providing a container that enables multidisciplinarity across fields such as ML and molecular machines, we hope to facilitate insight and tech transfer across them.
What are some example decisions that the Foresight Institute’s work has helped influence?
Given that our main effort is to kindle beneficial innovation in undervalued technical domains of importance for the long-term future, such decisions are sometimes hard to trace, but they mostly fall in the area of founding and funding such projects.
Through Foresight matchmaking, members have started companies (such as a carbon drawdown company co-founded by a Foresight Fellow who met their co-founder at a Foresight event and recently raised $30M in follow-on funding) and new research projects (such as a major research project building LLM-enabled preference simulations of groups of people, which was founded and funded at a Foresight workshop), and existing organizations have received government funding (more than $30M for a water filtration company, and $15M for a molecular nanotechnology simulation project at a university, through Foresight workshops).
Other decisions we shape involve early career path choices, with individuals joining organizations, including a neurotech FRO, several major longevity companies, and security companies, through Foresight events. In rare cases, we aid career decisions more actively, for instance by providing J1 visas to promising researchers seeking to move to the US. This more tailored support is particularly prominent with younger applicants who have little default exposure to senior researchers, funders, and entrepreneurs in their domain.
What does Eric Drexler think about the Foresight Institute? If I recall correctly, he was one of the founders?
I think this is a question that is better directed at Eric himself :) I can confirm that he was one of Foresight’s co-founders, and that he presented at a few recent Foresight events, such as the Decentralized AI workshop (https://www.youtube.com/watch?v=pClSjljMKeA, https://www.youtube.com/watch?v=hNDD-ZbEsJA) and a Molecular Machines workshop (https://www.youtube.com/watch?v=HjgjtAk-lws&t=1s).
I can also definitely say that our community remains excited about his outstanding work, such as Comprehensive AI Services, the Open Agency Architecture, Paretotopian Goal Alignment, and Molecular Nanotechnology.
It looks like the next major technological wave will be AI. How might this change Foresight’s plans or focus areas? Would you focus more on AI? Or, can AI help us with nanotech, longevity, etc. (and how exactly)?
In a few previous comments here, I point out how we integrate ML as a major driver of progress in our areas, such as molecular machines simulation tools, and how it affects our focus with respect to whole brain emulation. I give a longer review of how computing and AI progress affects each of our technical domains in this Breakthroughs in Computing Series by Protocol Labs: https://www.youtube.com/watch?v=lBvkFZycXRQ
With respect to Foresight’s role in safe AI progress, I think Foresight’s comparative advantage lies in bringing a computer-security-inspired lens to AI development:
This is largely due to Foresight Senior Fellow Mark Miller, who, in 1996, gave this talk on Computer Security as the Future of Law (http://www.caplet.com/security/futurelaw) and, together with Eric Drexler, published the foundational Agoric Open Systems papers (https://papers.agoric.com/papers/), laying out a general model of cooperation enabled by voluntary rules that applies not only to today’s human economy but may be transferable to a future ecology populated by human and AI intelligences.
Mark built on the Agoric papers by following the computer security thread as a necessary condition for building systems in which both humans and AIs could voluntarily cooperate. Recently this thinking culminated in Mark, Christine Peterson (Foresight’s co-founder) and me co-authoring the book Gaming the Future, focusing on specific cryptography and security tools that may help secure human AI cooperation on the path to paretotopian futures: https://foresight.org/gaming-the-future-the-book.
I think Miller’s and Drexler’s work on reframing traditionally singleton-focused AI safety in terms of secure coordination across human and AI entities that relies on the respect of boundaries is now more relevant than ever, given AI infosecurity risks, which have become a larger focus within AI alignment. I have a longer LessWrong post on this coming next weekend.
What areas of research or technology are most underrated by the broader research world, and why?
Here are a few across different Foresight focus areas:
Biotech:
Xenobots: small biological machines created from frog cells that can move around, push a payload, retain memory, self-heal, and exhibit collective behavior in the presence of a swarm of other Xenobots. I would also love to see more work on the general potential of bioelectricity for human longevity. See Michael Levin’s Foresight seminar: https://foresight.org/summary/bioelectric-networks-taming-the-collective-intelligence-of-cells-for-regenerative-medicine
Cryonics & nanomedicine: If we don’t reach Longevity Escape Velocity in our lifetime, some may choose cryonics as plan B. Currently, in principle a cryonics patient can be maintained in biostasis but cannot be revived. Conceptual research may explain how nanotechnology can collect information from preserved structures, compute how to fix damages and aid with repair. Rob Freitas book on this topic:https://www.amazon.com/Cryostasis-Revival-Recovery-Cryonics-Nanomedicine/dp/099681535X
Molecular Machines:
A computing room: Imagine tables becoming computing surfaces, and notepads, captured by overhead cameras, becoming the user interface for manipulating small proteins. See Shawn Douglas and Bret Victor’s Foresight presentation: https://youtu.be/_gXiVOmaVSo?t=949
A chemputer: Imagine software translating chemists’ natural language into recipes for molecules that a robot “chemputer” can understand and produce. See Lee Cronin’s Foresight presentation: https://foresight.org/summary/the-first-programmable-turing-complete-chemical-computer-lee-cronin-university-of-glasgow
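To give a flavor of that translation step, here’s a toy sketch. Cronin’s actual system compiles procedures into its own chemical description language; the schema, function, and pattern below are entirely made up for illustration:

```python
# Toy sketch only: Cronin's real system compiles procedures into its own
# chemical description language; this made-up schema and parser just
# illustrate the "natural language -> machine-executable recipe" idea.
import re
from dataclasses import dataclass

@dataclass
class AddStep:
    reagent: str
    volume_ml: float

def parse_step(sentence: str) -> AddStep:
    # Matches sentences like "Add 25 ml of acetone" (pattern is hypothetical).
    match = re.match(r"add (\d+(?:\.\d+)?) ml of (.+)", sentence, re.IGNORECASE)
    if not match:
        raise ValueError(f"Unrecognized step: {sentence!r}")
    return AddStep(reagent=match.group(2), volume_ml=float(match.group(1)))

print(parse_step("Add 25 ml of acetone"))
# -> AddStep(reagent='acetone', volume_ml=25.0)
```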
Security & AI:
Homomorphic AI: Andrew Trask’s work on using homomorphic encryption to fully encrypt a neural network. This means the intelligence of the network is safeguarded against theft, and AI could be trained in insecure environments and across non-trusting parties. Plus, the AI’s predictions are encrypted and can’t impact the real world without a secret key, i.e. the human controlling the key could release the AI into the world, or release only individual predictions that the AI makes; see the sketch right after this item. See Andrew Trask’s paper: https://iamtrask.github.io/2017/03/17/safe-ai/
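Trask’s post uses its own encryption scheme; as a minimal sketch of the underlying idea, here is an additively homomorphic scheme (Paillier, via the python-paillier `phe` library) letting an untrusted party evaluate a linear model on encrypted inputs it can never read. The weights and data are illustrative:

```python
# Minimal sketch: evaluating a linear model on encrypted inputs with the
# additively homomorphic Paillier scheme (python-paillier / `phe`).
# Trask's post uses a different scheme; this only illustrates the idea
# that computation can proceed without the data owner's secret key.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Data owner encrypts their features and sends only ciphertexts.
features = [0.5, -1.2, 3.0]
encrypted_features = [public_key.encrypt(x) for x in features]

# Untrusted server computes a linear layer homomorphically:
# ciphertext + ciphertext and ciphertext * plaintext scalar are supported.
weights = [0.1, 0.4, -0.2]
bias = 0.3
encrypted_score = sum(w * x for w, x in zip(weights, encrypted_features)) + bias

# Only the secret-key holder can "release" the prediction into the world.
print(private_key.decrypt(encrypted_score))  # ~ -0.73
```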
Ocaps & seL4 computer security: Object-capability (ocap) systems enable authorization-based access control across using rights, which grant computational objects access as well as the ability to delegate the right further. This leads to granular, scalable, secure systems. For instance, SeL4, the only operating system microkernel that withstood a series of DARPA red-teams, is using ocaps (and is also formally verified). Given recent AI infosec concerns, I would love to see more work scaling such security approaches to more complex systems. See Gernot Heiser’s Foresight presentation: https://foresight.org/summary/gernot-heiser-sel4-formal-proofs-for-real-world-cybersecurity
What are some new projects you are working on but haven’t published yet?
What are some projects you want to see but wish someone else or another org was working on?
Hi Sam, here are two previews of projects we’re working on but which aren’t published yet.
AI-assisted tech trees enabled by Discourse graphs
Throughout 2022, we have been building technology trees to map our five interest areas: molecular nanotechnology, longevity biotechnology, neurotechnology, secure human AI interaction, and space. The goal is to help onboard new talent and funders into these fields by sketching out which capabilities are required for the long-term goals of the field, who is working on them, and which open challenges are left to be tackled. The trees contain 50k+ nodes, but the current interfaces are still pretty clunky and hard to navigate for outsiders: https://foresight.org/tech-tree
What’s new is that we’ll likely be launching a Discourse graph-enabled tech tree edition, which allows natural-language, question-based navigation of the trees, making the main info much easier to digest for users. In addition, a GPT integration in the tool itself can automate parts of the research process by populating entire paths of the tree automatically. For instance, when we prompt the GPT integration with questions such as “what are the ten main labs working on autophagy?” or “what are the main technical challenges we need to solve to make progress on privacy-preserving ML?”, its replies match human-generated replies relatively well, even though there is still fact-checking and completion to do. This means our tech tree architects can function as reviewers and editors, rather than research assistants combing the web from scratch, making the roadmaps more sustainable long-term.
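To give a flavor of the kind of GPT integration I mean, here’s a minimal hypothetical sketch (not our actual tool; the function, prompt, and node format are made up) of an LLM drafting candidate tree nodes for a human architect to review:

```python
# Hypothetical sketch (not Foresight's actual integration): asking an LLM
# to draft candidate tech-tree nodes that a human architect then reviews.
# Assumes the OPENAI_API_KEY environment variable is set.
import openai

def draft_tree_nodes(question: str) -> list[str]:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer with one short tech-tree node per line."},
            {"role": "user", "content": question},
        ],
    )
    text = response["choices"][0]["message"]["content"]
    # Each non-empty line becomes a draft node, flagged for fact-checking.
    return [line.strip() for line in text.splitlines() if line.strip()]

# The architect edits these drafts rather than researching from scratch:
for node in draft_tree_nodes(
        "What are the main technical challenges in privacy-preserving ML?"):
    print("DRAFT NODE (needs review):", node)
```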
The Discourse graph editions of the trees, scheduled to go live by July, would allow individuals to contribute to the main trees and fork their own AI-assisted tech trees. They would also enable users to advance progress on highlighted challenges via an integrated bounty tool. Thanks to the amazing Discourse graph team for building the tool and allowing us to use it. More about how the tool works: https://protocol.ai/blog/discourse-graph-qa/
Existential Hope book
We’re currently working on a book proposal on Existential Hope to highlight alternative futures to the currently en vogue doomerism. It’s early stage but may discuss various great future scenarios, plus “eucatastrophes”, i.e. positive turning points, and the technologies and strategies to get there. Many of the people and resources that inspire the book can be found at: https://www.existentialhope.com
How do you think that framing a discussion about the effects of future technologies as potentially leading to scenarios of existential hope as opposed to existential risk can be helpful? Or is positing a dichotomy between existential hope and risk sort of missing a key point?
That’s an interesting question and I would love to know more about what key point you think it’s missing.
In the meantime, here are two things I’d say:
I do wonder how much the existing heavy focus on specific risks and worst-case scenarios may end up steering us those ways. Christine Peterson recently gave the car-steering analogy: you’re not supposed to stop your car on the side of the highway, because drivers automatically steer into it by looking at it. Positive directions to make progress toward can have the benefit of enticing more cooperation on exciting shared goals. A related model is perhaps Drexler’s talk on Paretotopian Goal Alignment, where he points out that as automation and AI raise the stakes of cooperation, the benefits of cooperating to reap the rewards may increasingly outweigh the costs of non-cooperation, i.e. of leaving those rewards on the table: https://www.effectivealtruism.org/articles/ea-global-2018-paretotopian-goal-alignment
More concretely, I see differential technology development as a promising way to account for the risks of technologies while proactively building safety- and security-enhancing technologies first. What attracted me to Foresight is that it comprises a highly technical community across various domains whose members nevertheless care a lot about creating secure, beneficial long-term uses of their applications, so the DTD angle feels like a good fit and framing, at least for our community. More on DTD: https://forum.effectivealtruism.org/posts/g6549FAQpQ5xobihj/differential-technological-development
Very interesting, thanks for the thoughts!
I realize now that my questions were a bit unclear. I tend to think about the world in terms of trade-offs. So my first question was really about the trade-off of thinking about the future in terms of existential hope vs existential risk.
You already addressed a key upside of thinking in terms of existential hope that I hadn’t thought of with your first point, which is that thinking of the future can create a self-fulfilling prophecy, so it’s better to have a positive vision of the future than a negative one.
My second question was mostly about my own reticence to posit trade-offs everywhere since I do it too much probably. Sometimes, there is a false dichotomy in thinking about things in dichotomous ways (“both/and” instead of “either/or”). So perhaps it’s not best to think of thinking about existential hope vs existential risk as a trade-off at all. That’s what I was getting at, about whether I was missing a key point about the way you think about this topic by trying to frame the discussion in terms of a dichotomy.
By the way, I love the idea of existential hope and think it is a beneficial concept, in part to help avoid doomerism. =)