How does Foresight Institute allocate its time/effort across projects? Do you have a way of thinking about how much attention to spend on different speculative areas?
Foresight Institute was established in 1986 on the ideas discussed in Engines of Creation, written by Eric Drexler, co-founder of Foresight. The book lays out a network of technologies with the potential to significantly enhance the human condition, including nanotechnology, biotechnology, information technology, and cognitive science, interconnected in complex ways with other important technologies like robotics and space exploration.
Given the broad technology stack Engines considered, both the book and Foresight became an early Schelling point for scientists and technologists who wanted great futures across this whole range of technologies.
So within this broad technology stack, we decide where to focus by weighing how much attention an issue we consider important is already receiving against how much our community is in a position to contribute to it.
For instance, from our inception to today, the general field of molecular nanotechnology has remained undervalued, and our community has a unique potential to contribute to it, so advancing the field in a beneficial direction is still where the bulk of our fellowships, prizes, workshops, and seminars are focused.
Then, within our other technology focus areas (bio, neuro, space, secure human-AI cooperation), there are often specific subdomains that are still too niche, exotic, ambitious, or interdisciplinary for the mainstream of that field to address.
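To make that weighing concrete, here is a minimal, purely illustrative sketch of the kind of heuristic described above. The areas, the scores, and the multiplicative weighting are hypothetical placeholders for illustration, not Foresight's actual internal process:

```python
# Illustrative sketch only: score an area by how important we judge it,
# how neglected it currently is, and how well positioned our community
# is to contribute. All names and numbers below are hypothetical.

AREAS = {
    # area: (importance, neglectedness, community_fit), each in [0, 1]
    "molecular nanotechnology": (0.9, 0.8, 0.9),
    "AI governance":            (0.9, 0.2, 0.4),  # already well covered elsewhere
    "security x AI safety":     (0.8, 0.7, 0.8),
    "whole brain emulation":    (0.7, 0.9, 0.7),
}

def priority(importance: float, neglectedness: float, fit: float) -> float:
    """Attention is most worth spending where all three factors are high."""
    return importance * neglectedness * fit

# Rank areas from highest to lowest priority.
for area, factors in sorted(AREAS.items(), key=lambda kv: -priority(*kv[1])):
    print(f"{area:28s} priority={priority(*factors):.2f}")
```

The multiplication captures the intuition that an area already saturated with attention, or one where our community has little to offer, scores low no matter how important it is.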
For instance, when potential AI race dynamics first became an issue, we held annual workshops after the Bay Area EAGs, focused on AGI coordination across great powers and corporations:
2019 / AGI: Toward Cooperation: https://fsnone-bb4c.kxcdn.com/wp-content/uploads/2019/12/2019-AGI-Cooperation-Report.pdf
2018 / AGI: Coordination & Great Powers: https://fsnone-bb4c.kxcdn.com/wp-content/uploads/2018/11/AGI-Coordination-Geat-Powers-Report.pdf
2017 / AGI: Timelines & Policy: https://foresight.org/wp-content/uploads/2022/11/AGI-TimeframesPolicyWhitePaper.pdf
At the time, a lot of governance work was popping up elsewhere, so we refocused our annual AGI workshops on bridge-building between the security and AI safety communities, a currently undervalued area where we can meaningfully contribute given our existing strong security and cryptography community:
2022 / Cryptography, Security, AI workshop: https://foresight.org/crypto-workshop/
2023 / Cryptography, Security, AI workshop: https://foresight.org/intelligent-cooperation-workshop-2023
That being said, we’re currently reviewing whether to take up the AI coordination workshops again, given that shortening timelines have led to new interest in revisiting those meetings.
Another area we’re taking up, given shortening AI timelines, and where we think our AI and neurotech community gives us a comparative advantage, is revisiting Whole Brain Emulation as a potential strategy for AI safety. This led to a 2023 workshop chaired by Anders Sandberg, co-author of the original 2007 WBE roadmap:
2023 / Whole Brain Emulation for AI Safety workshop: https://foresight.org/whole-brain-emulation-workshop-2023