Here are a few across different Foresight focus areas:
Biotech:
Xenobots: small biological machines created from frog cells that can move around, push a payload, retain memory, self-heal, and exhibit collective behavior when in a swarm of other xenobots. I would also love to see more work on the general potential of bioelectricity for human longevity. See Michael Levin’s Foresight seminar: https://foresight.org/summary/bioelectric-networks-taming-the-collective-intelligence-of-cells-for-regenerative-medicine
Cryonics & nanomedicine: If we don’t reach Longevity Escape Velocity in our lifetime, some may choose cryonics as plan B. Currently, a cryonics patient can in principle be maintained in biostasis but cannot be revived. Conceptual research may explain how nanotechnology could collect information from preserved structures, compute how to fix the damage, and aid with repair. Robert Freitas’s book on this topic: https://www.amazon.com/Cryostasis-Revival-Recovery-Cryonics-Nanomedicine/dp/099681535X
Molecular Machines:
A computing room: Imagine tables becoming computing surfaces, and notepads, captured by overhead cameras, becoming the user interface for manipulating small proteins. See Shawn Douglas and Bret Victor’s Foresight presentation: https://youtu.be/_gXiVOmaVSo?t=949
A chemputer: Imagine software translating chemists’ natural language into recipes for molecules that a robot “chemputer” can understand and produce. See Lee Cronin’s Foresight presentation: https://foresight.org/summary/the-first-programmable-turing-complete-chemical-computer-lee-cronin-university-of-glasgow
Security & AI:
Homomorphic AI: Andrew Trask’s work on using homomorphic encryption to fully encrypt a neural network. This safeguards the intelligence of the network against theft, and allows AI to be trained in insecure environments and across non-trusting parties. Moreover, the AI’s predictions are themselves encrypted and cannot impact the real world without a secret key: the human controlling the key could choose to release the AI into the world, or merely release individual predictions that the AI makes. See Andrew Trask’s blog post: https://iamtrask.github.io/2017/03/17/safe-ai/
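To make the idea concrete: Trask’s post uses an efficient integer-vector homomorphic scheme, but the core property can be illustrated with any additively homomorphic cryptosystem. Below is a toy Paillier sketch (my own illustration, not code from the post, and deliberately insecure with tiny fixed primes) showing the two operations an encrypted linear layer relies on: adding two ciphertexts, and multiplying a ciphertext by a plaintext weight.

```python
import random
from math import gcd

# Toy Paillier cryptosystem (additively homomorphic).
# Illustrative only: tiny hard-coded primes, no padding -- NOT secure.

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=1789, q=1999):
    n = p * q
    g = n + 1                       # standard simple choice of generator
    lam = lcm(p - 1, q - 1)
    # mu = L(g^lam mod n^2)^-1 mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

def add_enc(pub, c1, c2):
    # Enc(m1) * Enc(m2) = Enc(m1 + m2): addition without decrypting
    n, _ = pub
    return (c1 * c2) % (n * n)

def mul_plain(pub, c, k):
    # Enc(m)^k = Enc(k * m): scale by a plaintext weight
    n, _ = pub
    return pow(c, k, n * n)

pub, priv = keygen()
# One "neuron" computing 20 + 3*7 entirely on ciphertexts:
c = add_enc(pub, encrypt(pub, 20), mul_plain(pub, encrypt(pub, 7), 3))
print(decrypt(pub, priv, c))  # 41
```

Only the holder of the private key can read the result, which is exactly the control point described above: without the key, the network’s outputs stay opaque.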
Ocaps & seL4 computer security: Object-capability (ocap) systems enable authorization-based access control using transferable rights, which grant computational objects access to a resource along with the ability to delegate that right further. This leads to granular, scalable, secure systems. For instance, seL4, the only operating-system microkernel to have withstood a series of DARPA red-team exercises, uses ocaps (and is also formally verified). Given recent AI infosec concerns, I would love to see more work scaling such security approaches to more complex systems. See Gernot Heiser’s Foresight presentation: https://foresight.org/summary/gernot-heiser-sel4-formal-proofs-for-real-world-cybersecurity
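As a rough sketch of the ocap idea (in Python, which unlike seL4 does not enforce capability discipline at the kernel level, so treat this purely as an illustration of the pattern): authority travels only through object references, and a holder can delegate an attenuated right rather than full access.

```python
# Toy object-capability pattern: holding a reference IS the authority,
# and delegation can attenuate that authority.

class File:
    def __init__(self, contents):
        self._contents = contents
    def read(self):
        return self._contents
    def write(self, text):
        self._contents = text

def read_only(file):
    """Attenuate: delegate a capability that can only read, not write."""
    class ReadCap:
        def read(self):
            return file.read()
    return ReadCap()

secret = File("launch codes")
cap = read_only(secret)        # delegate read access only
print(cap.read())              # prints "launch codes"
print(hasattr(cap, "write"))   # prints False: write authority not delegated
```

The attenuated capability can itself be passed onward, giving the granular, auditable delegation chains described above; a capability-secure kernel like seL4 makes such references unforgeable, which plain Python cannot guarantee.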
That’s an interesting question and I would love to know more about what key point you think it’s missing.
In the meantime, here are two things I’d say:
I do wonder how much the existing heavy focus on specific risks and worst-case scenarios may end up steering us in those directions. Christine Peterson recently gave the analogy of steering a car: you’re not supposed to stop your car on the side of the highway, because passing drivers automatically steer toward it by looking at it. Positive directions to make progress toward have the benefit of enticing more cooperation on exciting shared goals. A related model is perhaps Drexler’s talk on Paretotopian Goal Alignment, where he points out that as automation and AI raise the stakes of cooperation, the benefits of cooperating to reap the rewards may increasingly outweigh the costs of non-cooperation, i.e. of leaving those rewards on the table: https://www.effectivealtruism.org/articles/ea-global-2018-paretotopian-goal-alignment
More concretely, I see differential technology development as a promising way to account for the risks of technologies while proactively building safety- and security-enhancing technologies first. What attracted me to Foresight is that it comprises a highly technical community across various domains whose members nevertheless care a lot about creating secure, beneficial long-term uses of their applications, so the DTD angle feels like a good fit and framing, at least for our community. More on DTD: https://forum.effectivealtruism.org/posts/g6549FAQpQ5xobihj/differential-technological-development