Here are a few across different Foresight focus areas:
Biotech:
Xenobots: small self-healing biological machines created from frog cells that can move around, push a payload, retain memory, and exhibit collective behavior when in a swarm of other xenobots. I would also love to see more work on the general potential of bioelectricity for human longevity. See Michael Levin’s Foresight seminar: https://foresight.org/summary/bioelectric-networks-taming-the-collective-intelligence-of-cells-for-regenerative-medicine
Cryonics & nanomedicine: If we don’t reach Longevity Escape Velocity in our lifetimes, some may choose cryonics as a plan B. In principle, a cryonics patient can currently be maintained in biostasis but cannot be revived. Conceptual research explores how nanotechnology could read information from preserved structures, compute how to repair the damage, and aid with that repair. See Robert Freitas’s book on this topic: https://www.amazon.com/Cryostasis-Revival-Recovery-Cryonics-Nanomedicine/dp/099681535X
Molecular Machines:
A computing room: Imagine tables becoming computing surfaces, and notepads, captured by overhead cameras, becoming the user interface for manipulating small proteins. See Shawn Douglas and Bret Victor’s Foresight presentation: https://youtu.be/_gXiVOmaVSo?t=949
A chemputer: Imagine software translating chemists’ natural language into recipes for molecules that a robot “chemputer” can understand and produce. See Lee Cronin’s Foresight presentation: https://foresight.org/summary/the-first-programmable-turing-complete-chemical-computer-lee-cronin-university-of-glasgow
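To make the translation idea concrete, here is a toy sketch of turning English instructions into structured, machine-executable synthesis steps. The step names, fields, and parser are invented for illustration; Cronin’s group uses a real chemical description language (XDL) with its own schema and a far more capable front end.

```python
# Toy sketch: "translate" English synthesis instructions into structured
# steps a robot could dispatch on. Illustrative only -- not XDL.
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str            # e.g. "add", "stir"
    params: dict = field(default_factory=dict)

def parse_instruction(text):
    """Naively map one English sentence to a Step (hypothetical grammar)."""
    words = text.lower().rstrip(".").split()
    if words[0] == "add":
        # "Add 20 ml ethanol" -> volume + reagent
        return Step("add", {"volume_ml": float(words[1]), "reagent": words[3]})
    if words[0] == "stir":
        # "Stir for 30 minutes" -> duration
        return Step("stir", {"minutes": float(words[2])})
    raise ValueError(f"can't parse: {text}")

recipe = [parse_instruction(s) for s in
          ["Add 20 ml ethanol.", "Stir for 30 minutes."]]
print(recipe)
```

A real system would of course use a learned language model and a validated hardware-aware schema rather than keyword matching, but the pipeline shape is the same: prose in, executable recipe out.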
Security & AI:
Homomorphic AI: Andrew Trask’s work on using homomorphic encryption to fully encrypt a neural network. This safeguards the network’s intelligence against theft, so AI could be trained in insecure environments and across non-trusting parties. Moreover, the AI’s predictions are encrypted and cannot affect the real world without a secret key: the human controlling the key can choose to decrypt the whole model and release the AI into the world, or only individual predictions the AI makes. See Andrew Trask’s blog post: https://iamtrask.github.io/2017/03/17/safe-ai/
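A minimal sketch of the underlying idea: an additively homomorphic scheme (textbook Paillier) evaluating a one-neuron linear model on encrypted inputs. Only the key holder can decrypt the prediction. This is not Trask’s actual construction (his post encrypts the network’s weights), and the fixed tiny primes below are trivially breakable, demo only.

```python
# Textbook Paillier encryption evaluating w.x + b on ciphertexts.
# The server never sees the plaintext inputs or the prediction.
import math
import random

def L(x, n):
    return (x - 1) // n

def keygen():
    p, q = 293, 433                    # tiny fixed primes: demo only
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    mu = pow(L(pow(g, lam, n * n), n), -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return (L(pow(c, lam, n * n), n) * mu) % n

pub, priv = keygen()
n, _ = pub
x = [2, 4]                             # client's private input
w, b = [3, 5], 7                       # server's plaintext model

cx = [encrypt(pub, xi) for xi in x]    # client sends ciphertexts

# Enc(w1*x1 + w2*x2 + b) = c1^w1 * c2^w2 * Enc(b)  (mod n^2):
# multiplying ciphertexts adds plaintexts; exponentiation scales them.
c_out = encrypt(pub, b)
for ci, wi in zip(cx, w):
    c_out = (c_out * pow(ci, wi, n * n)) % (n * n)

print(decrypt(pub, priv, c_out))       # 3*2 + 5*4 + 7 = 33
```

Fully homomorphic schemes extend this to multiplication between ciphertexts (and hence to nonlinear layers), which is what makes encrypting an entire network conceivable.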
Ocaps & seL4 computer security: Object-capability (ocap) systems enforce access control through transferable rights: a capability grants a computational object access to a resource, along with the ability to delegate that right further. This leads to granular, scalable, secure systems. For instance, seL4, the only operating-system microkernel to have withstood a series of DARPA red-team exercises, uses ocaps (and is also formally verified). Given recent AI infosec concerns, I would love to see more work scaling such security approaches to more complex systems. See Gernot Heiser’s Foresight presentation: https://foresight.org/summary/gernot-heiser-sel4-formal-proofs-for-real-world-cybersecurity
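The ocap pattern can be sketched in a few lines of plain Python (class names are illustrative, not seL4’s API): authority is conveyed only by passing object references, and a right can be attenuated before being delegated onward.

```python
# Object-capability sketch: a capability is just an object reference.
# Components can use exactly what they are handed -- nothing more.
class File:
    def __init__(self, contents):
        self._contents = contents
    def read(self):
        return self._contents
    def write(self, data):
        self._contents = data

class ReadOnly:
    """Attenuated capability: wraps a File but exposes only read()."""
    def __init__(self, file):
        self._file = file
    def read(self):
        return self._file.read()

def untrusted_component(cap):
    # No ambient authority: there is no global file table to reach into,
    # so this component's reach is exactly the capability it received.
    return cap.read()

secrets = File("launch codes")
print(untrusted_component(ReadOnly(secrets)))
# The attenuated right cannot be escalated back to write access:
assert not hasattr(ReadOnly(secrets), "write")
```

In seL4 the same discipline is enforced by the kernel (capabilities are kernel objects, not language-level references), which is what makes the formal proofs of isolation tractable.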