That’s an interesting question and I would love to know more about what key point you think it’s missing.
In the meantime, here are two things I’d say:
I do wonder how much the existing heavy focus on specific risks and worst-case scenarios may end up steering us toward them. Christine Peterson recently gave a car-steering analogy: you’re not supposed to stop your car on the side of the highway, because drivers automatically steer toward whatever they’re looking at. Positive directions to make progress toward, by contrast, can have the benefit of enticing more cooperation around exciting shared goals. A related model is perhaps Drexler’s talk on Paretotopian Goal Alignment, in which he points out that as automation and AI raise the stakes of cooperation, the benefits of cooperating to reap those rewards may increasingly outweigh the costs of non-cooperation, i.e. of leaving them on the table: https://www.effectivealtruism.org/articles/ea-global-2018-paretotopian-goal-alignment
More concretely, I see differential technological development as a promising way to account for the risks of technologies while proactively building safety- and security-enhancing technologies first. What attracted me to Foresight is that it comprises a highly technical community across various domains who nevertheless care a lot about creating secure, beneficial long-term uses of their applications, so the DTD angle feels like a good fit and framing, at least for our community. More on DTD: https://forum.effectivealtruism.org/posts/g6549FAQpQ5xobihj/differential-technological-development
Very interesting, thanks for the thoughts!
I realize now that my questions were a bit unclear. I tend to think about the world in terms of trade-offs, so my first question was really about the trade-off between thinking about the future in terms of existential hope versus existential risk.
Your first point already addressed a key upside of thinking in terms of existential hope that I hadn’t considered: how we think about the future can become a self-fulfilling prophecy, so it’s better to have a positive vision of the future than a negative one.
My second question was mostly about my own wariness of positing trade-offs everywhere, since I probably do it too much. Sometimes framing things in dichotomous terms is itself a false dichotomy (“both/and” instead of “either/or”). So perhaps it’s not best to treat existential hope versus existential risk as a trade-off at all. That’s what I was getting at: whether, by trying to frame the discussion as a dichotomy, I was missing a key point about the way you think about this topic.
By the way, I love the idea of existential hope and think it is a beneficial concept, in part because it helps avoid doomerism. =)