Or challenges. One version holds that morality needs to motivate everyone; another that it should not motivate anyone.
Why would a theoretical contract made behind a veil of ignorance be binding?
If the veil of ignorance is the correct analysis of ethics, and if moral proposition X can be justified that way, then there is a chain of reasoning that justifies X, which means that X is binding on rational agents, in the sense that they should assent to X, for some interpretation of “should”.
Binding is not motivation. Someone who says, “I really should do X, but I can’t be arsed today” is admitting to an obligation, and confessing to a lack of motivation to fulfil it. Akrasia is the gap between obligation and motivation.
Not all motivation is based on immediate reward. People are motivated to accept good rational arguments on the basis of status and identity. Everybody wants to be higher status, for some definition of status, and rational people are often seen as higher status, giving a lot of people a certain amount of motivation to accept moral claims backed by rational argument. The average person is of course not an ideal rationalist, and many other factors remain in play, not least confirmation bias. Nonetheless, moral persuasion more often takes the form of appeals to objective principles than appeals to personal interest.
So binding and motivation are not the same, which means you can’t claim that moral propositions are not binding (in principle) just because they are not motivating (to some specific person).
And binding and motivation are not orthogonal or disjoint: a certain moral claim can be motivating because it is binding because it is justifiable because it is true.
But motivating-ness and truth are not the same. You can’t directly refute the claim that some X is an objective moral truth by noting that no one would be motivated to act on it. It would be strange to have a set of moral truths that no one ever acts on. All other things being equal, adding motivation to the set of moral truths would be a nett gain… but all other things aren’t equal. You have to change one thing to change another.
Motivation is psychological, so one way is to use social pressures or whatever to change individual psychology. That is popular, but not, apparently, your approach.
Another way is to hold psychology constant, and change morality. Compromising on truth to achieve motivation is not an unalloyed win, but it can still be a nett gain: for instance, if no one is willing to act on the 100% true morality, but is willing to act on the 50% true one. But there is no nett gain for the 0% true morality, and that is the problem with ethical egoism. Caligula is extremely well motivated to behave the way he behaves, and that isn’t moral at all. Putting a label reading “this is moral” on de facto behaviour doesn’t change anything in reality, or make anything better.
Caligula is a reductio ad absurdum of the idea that motivating-ness is the only criterion for morality.
There are an endless variety of norms, each of which generates its own set of shoulds, mays and shalt-nots.
Some are optional and/or localized, such as medical ethics or chess rules. The distinguishing feature of moral normativity, of morally-should, is that it is binding on all sane adults.
So if there are true moral claims, and you are a sane adult, there are things you morally-should do.
That establishes, conditional on there being any morality, why you should be moral.
The argument that you should not be moral works by collapsing together several different shoulds, several different norms. If you morally-should do something, it might also be against your immediate self-interest, so that you selfish-should-not. The argument continues that, since there is a sense in which you shouldn’t do X, then you shouldn’t do X at all.
The analysis in terms of multiple norms, multiple shoulds, can be seen to be correct through the existence of moral conflicts and dilemmas.
An epistemic rationalist values truth, valid argumentation, good epistemology, lack of bias, etc.
Therefore, an epistemic rationalist will be motivated to accept the truth of well justified moral claims.
Therefore, morality is not incompatible with rationality.
That’s about all I need to answer the challenge.
To answer some further questions:
An epistemic rationalist won’t necessarily act on a moral claim she accepts. They can be akrasic. Altruistic morality requires them to lose utility in some areas, which may or may not be balanced out in other areas. That doesn’t mean they are irrationally losing nett utility every time they act morally.