I guess it depends on the definition of each of these.
Empathy: No matter how advanced AI gets (true AGI or whatever), there will always be room for human <-> human empathy. AI will never truly have the same experiences as humans. It can pretend or act as if it does, and I think this could still be useful in many situations, like customer support or low-level therapy. Human empathy just won't be totally replaced by it.
Collaboration: Probably the weakest of the 3. Of course AI can collaborate, and current models already do. I still believe that for the near future human <-> human collaboration will be needed, i.e. we will want a “human in the loop” for some time.
Creativity: Still very much needed for the foreseeable future. LLMs, image generators, etc. very much do express creativity by combining ideas in latent space into things that are truly novel. But human brains still think (“explore possibility space”) differently, and are therefore valuable. It’s the same reason I believe having diverse intelligences (human, AI, other species, alien) is valuable. It also helps to push the boundaries of possibility space in a way that these models may have difficulty doing.
I’m curious to hear your (or others’) take on these, though.
The Price of Progress
Addressing any remaining LLM skepticism
What’s your guess on which will end up increasing productivity more: the internet, or the latest AI models?
That’s a good point.
I wouldn’t say that “inequality” alone is a risk category; more specifically, it’s inequality that leads to future brittleness or fragility, as in your example.
Basically, in this case it’s path dependent, and certain starting conditions could lead to a worse outcome. This obviously could be the case for AI as well.
[Question] How can we classify negative effects of new technologies?
As should be obvious from my initial post, I am in complete agreement that the demand side of the equation is underrated (the supply side of progress may be more important, but most existing discussion seems to focus solely on that).
I appreciate your 2nd point on shifting focus more to how people will directly benefit.
Another question this made me think of: what are the main reasons people don’t already see the benefits? Lack of imagination? Incremental innovation over long periods going unnoticed?
Thanks Sebastian. I think this is a good way to think about it. “Nudging” demand away from the more zero-sum endeavors and toward productive ones.
Awareness and action on climate change is an especially good case study. Of course, like other things, climate-related tech is both demand and supply driven, but there’s no doubt that overall climate awareness has pushed sales of things like solar, EVs, plant-based meats, etc. “Goods where when demand increases there are long term positive effects” is a good way to put it.
Your steps 2 and 3 are obviously less clear on how to implement in practice, especially finding ways to measure these effects. I mean, it’s pretty hard to measure how much good sci-fi has affected tech progress, but long-term I think it’s clear it has.
Thanks for the thoughts Jason — helped me think a bit more about the idea.
See my response to @daviskedrosky, but I totally agree that in general it is supply-driven. It’s more that I wanted to give more attention to the demand side because it’s not talked about as much. It is a chicken-and-egg problem in the end (and my post doesn’t really discuss the balance).
In regard to “People don’t clamor explicitly for new products and services,” I don’t think this is entirely true; it’s a mix. And I do think that demand is much more important than many believe in driving what gets built (and regulated, etc.).
Your comment did lead me to think more about what kinds of innovation are more demand or supply driven though, and given all of your research I’m curious to hear your thoughts on it.
It seems to me many more incremental innovations are demand driven, while breakthrough innovations are typically supply driven. The only breakthrough innovations I can think of that were more demand driven are the result of large-scale forcing functions, like wars or pandemics that radically change the demand for what is wanted and the urgency with which it’s needed.
Completely agree, and thank you for that perspective. It’s a bit of a “chicken vs. egg” problem in that sense, and it’s hard to think of anything that is completely supply or demand driven. It does seem like it’s more supply driven in general, although I’m happy to put attention on the demand side because I think it’s being neglected.
To Increase Progress, Increase Desire
I really like this framing of research-y ideas. Matches what I was thinking in a conversation with an angel investor on the difficulties of investing in frontier-tech startups.
It seems like the key for startup funding is finding profitable intermediate goals. Optimizable projects within the bigger, non-global-critical-path effort.
To me, this is basically what SpaceX did. Building a colony on Mars is a clear goal but has no global critical path. However, “drastically lower cost of payload to orbit” does (albeit with some “fatness” in it for sure), and it’s more obviously monetizable despite the capital/time requirements. Add to that the creativity of exploiting other monetizable businesses that are enabled in pursuit of the goal (Starlink) and it makes further financial sense.
Thinking about it like this reminds me of means-ends analysis (https://en.wikipedia.org/wiki/Means–ends_analysis). You may not know what to optimize for the final goal, but there can be profitable intermediate goals that get you closer to it (a rough sketch of the idea is below).
Of course, there are some areas where there still won’t be any intermediate + profitable goals. These are the real research-y ideas.
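To make that concrete, here’s a minimal, purely illustrative sketch of means-ends analysis in Python. The operators and “facts” (funding, cheap launch, revenue, mars colony) are made up for the example and only loosely echo the SpaceX discussion above; it’s a toy under those assumptions, not a claim about how any of this actually gets planned.

```python
# Purely illustrative sketch of means-ends analysis: states are sets of "facts",
# each operator is an intermediate goal with preconditions and effects, and we
# recursively pick operators that reduce the difference between state and goal.
# All names/facts are hypothetical; no loop detection, toy example only.

from typing import NamedTuple, FrozenSet, List


class Operator(NamedTuple):
    name: str
    preconditions: FrozenSet[str]
    effects: FrozenSet[str]


def achieve(state: set, goal: set, operators: List[Operator], plan: List[str]) -> set:
    """Recursively reach `goal` from `state`, appending applied operator names to `plan`."""
    if goal <= state:
        return state
    difference = goal - state
    for op in operators:
        if op.effects & difference:  # operator addresses part of the difference
            state = achieve(state, set(op.preconditions), operators, plan)  # satisfy its subgoal first
            state = state | op.effects  # then apply the operator
            plan.append(op.name)
            return achieve(state, goal, operators, plan)
    raise RuntimeError("no operator addresses the remaining difference")


# Toy "SpaceX-like" example (facts and operators are hypothetical):
ops = [
    Operator("reusable rockets", frozenset({"funding"}), frozenset({"cheap launch"})),
    Operator("satellite internet", frozenset({"cheap launch"}), frozenset({"revenue"})),
    Operator("mars transit vehicle", frozenset({"cheap launch", "revenue"}), frozenset({"mars colony"})),
]

plan: List[str] = []
achieve({"funding"}, {"mars colony"}, ops, plan)
print(plan)  # ['reusable rockets', 'satellite internet', 'mars transit vehicle']
```

The point of the sketch is just that each intermediate operator is independently achievable (and, in the startup framing, independently monetizable) while still moving the state closer to the final goal.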
Thinking out loud here...
I would want to keep any agenda specific enough to drive outcomes, but not so specific that it turns people off over petty disagreements. Balance it at some middle level of abstraction, and make it very straightforward/logical that the agenda leads to progress. What are the different areas an agenda could cover?
Specific technologies—as you mentioned, something like “progress is good. energy allows more progress. cheaper energy allows more energy. nuclear is cheaper energy. nuclear is good ⇒ promote nuclear”
Media—definitive stance on pro-progress media (books, videos, podcasts, memes, movies, etc.) and what makes something “pro-progress”
Regulation—hypotheses with lots of data on specific regulations that, if removed or revisited, would increase progress
Agree on these points in general, and I believe this is one of the major reasons for optimism around AI. AI models seem particularly good at navigating high-dimensional landscapes if we structure them appropriately. My theory is this will allow us to hugely increase #2, as we now have a better method for searching the solution space.