It looks like the next major technological wave will be AI. How might this change Foresight’s plans or focus areas? Would you focus more on AI? Or, can AI help us with nanotech, longevity, etc. (and how exactly)?
In a few previous comments here, I point out how we integrate ML as a major driver of progress in our areas, e.g. molecular machine simulation tools, and how it affects our focus with respect to whole brain emulation. I give a longer review of how computing and AI progress affect each of our technical domains in this Breakthroughs in Computing Series talk by Protocol Labs: https://www.youtube.com/watch?v=lBvkFZycXRQ
With respect to Foresight’s role in safe AI progress, I think Foresight’s comparative advantage lies in bringing a computer-security-inspired lens to AI development:
This is largely due to Foresight Senior Fellow Mark Miller, who, in 1996, gave this talk on Computer Security as the Future of Law (http://www.caplet.com/security/futurelaw) and, together with Eric Drexler, published the foundational Agoric Open Systems Papers (https://papers.agoric.com/papers/), laying out a general model of cooperation enabled by voluntary rules that applies not only to today’s human economy but may also transfer to a future ecology populated by human and AI intelligences.
Mark built on the Agoric papers by following the computer security thread as a necessary condition for building systems in which both humans and AIs could voluntarily cooperate. This thinking recently culminated in Mark, Christine Peterson (Foresight’s co-founder), and me co-authoring the book Gaming the Future, which focuses on specific cryptography and security tools that may help secure human-AI cooperation on the path to paretotopian futures: https://foresight.org/gaming-the-future-the-book.
I think Miller’s and Drexler’s work reframing traditionally singleton-focused AI safety in terms of secure coordination across human and AI entities that respect each other’s boundaries is now more relevant than ever, given AI infosecurity risks, which have become a larger focus within AI alignment. I have a longer LessWrong post on this coming next weekend.