If longtermists existed back when blacks were widely regarded as morally inferior to whites, would the moral calculus of the longtermists have included the prosperity of future blacks or not? It seems like it couldn’t possibly have included that. More generally, longtermism can’t take into account progress in moral knowledge, nor what future generations will choose to value. Longtermists impose their values onto future generations.
It is true that we can’t predict future moral knowledge. However:
An intervention by someone from that time period that helps modern whites and doesn’t harm modern blacks would still be seen as better than doing nothing by most people today (excluding the woke fringe). And most interventions selected to help future white people are unlikely to cause significant net harm to blacks.
If their intervention is ensuring that we are wealthy and knowledgeable, and hence more able to do whatever it is we value, then that intervention does take into account progress in moral knowledge.
In reality, you have to choose to do something. When making decisions that affect future generations, either you impose your current values, or you try to give future people as much flexibility and power as possible so they can act on their own moral knowledge, or you basically pretend they don’t exist.
This is an interesting new combination of standard mistakes.
Another issue is that if altruistic morality is taken to its logical conclusion, then everyone would be trying to solve everyone else’s problems. How could that possibly be more effective than everyone trying to solve their own problems?
Altruistic morality in the total utilitarian sense would recognize that solving everyone’s problems is equally valuable, including our own. In the current world, practically no humans are going to rank themselves below everyone else, and most of the best opportunities for altruism involve helping others. But in the hypothetical utopia, people would solve their own problems, there being no more pressing problems left to solve.
If we are here to help others, what on Earth are the others here for?
Well, imagine the ideal end goal, if we develop some magic tech: everyone living in some sort of utopia. At that point, most of the altruists say that there is no one in the world who really needs helping, and just enjoy the utopia. But until then, they help.
What we actually need to be is selfish, not altruistic. We need to make as rapid progress as possible so that the people of the future themselves will be at a starting point where they can make even more rapid progress.
An altruist argument for selfishness. You are arguing that selfishness is good because it benefits future people.
If you were actually selfish, you would be arguing that selfishness is good because it makes you happy, and screw those future people, who cares about them.
I also don’t know where you got the idea that being selfish maximizes progress.
Suppose I am a genius fusion researcher (I’m not). I build fusion reactors so future people will have abundant clean energy. If I were selfish, I would play video games all day.
Altruism is subordinating one’s own preferences to those of others. It’s a zero-sum game. It’s not win-win.
In the ideal utilitarian hypothetical utopia, who exactly is losing? If hypothetically everyone had the exact same goal, the well-being of humanity as a whole, valuing their own well-being at exactly the same level as everyone else’s, that would be a zero-difference game, the exact opposite of a zero-sum game.
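To pin down the terminology (a minimal formalization; “zero-difference game” is this post’s coinage, and the standard game-theory term for the situation described is a common-payoff game): a zero-sum game is one where, for every outcome $a$, the players’ payoffs satisfy

$$\sum_{i=1}^{n} u_i(a) = 0,$$

so one player can gain only if another loses. The hypothetical described here instead satisfies

$$u_i(a) = u_j(a) \quad \text{for all players } i, j \text{ and every outcome } a,$$

i.e. the difference between any two players’ payoffs is always zero, so anything that makes one person better off makes everyone better off.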