I wouldn’t say that “inequality” alone is a risk category; more specifically, it’s inequality that leads to future brittleness or fragility, as in your example.
Basically, in this case the outcome is path-dependent: certain starting conditions can lead to a worse result. The same could obviously be true for AI.
That’s a good point.