Homunculi and Moral Demandingness
All sensible people care non-instrumentally about things that happen in the world outside of themselves. Would you prefer that there be more or less extreme poverty? How about war? Cancer? Even if these things don't affect you directly, you probably care about them.
Everyone also cares non-instrumentally about his own well-being. Most everyone also cares more about the well-being of his friends and family than about that of strangers.
Sometimes, what is good for you is good for the world. There is probably a correlation: ceteris paribus, what is good for you is likely to be good for the world. This is because (1) you are part of the world, so your well-being counts for something even considered impartially, and (2) in order to help, you probably need to be in reasonably good shape. But the correlation between goodness for you and goodness for the world is probably not perfect (why would it be?). The very best thing for you to do, impartially considered, is probably not selfishly the best thing for you to do. And this conflict doesn't depend at all on the details of what you care about. Whether you want to maximize utility, minimize existential risk, realize American national greatness, spread Christianity, or achieve social justice, it is unlikely that what is best for you is best for the world by your own standards.
So you have self-regarding and other-regarding motivations. And these come into conflict, to some degree. How do you decide what to do? Somehow, you must reach a compromise (in almost all cases, it isn't psychologically realistic to commit 100% to either selflessness or selfishness). A simple way of thinking about this would be to come up with a conversion factor between selfish and altruistic value. Say that you value yourself five times as much as you value other people. That sounds like a lot at first--but I think in practice your actions would be indistinguishable from those of an impartial altruist. There are a lot more than five other people. There are yet more animals. There are potentially countless (maybe literally countless) future people. So should you just make the conversion factor extremely large? "I value myself as much as a billion other people"? That also does not sound right. In fact, it sounds like something a villain in a comic book would say. In practice, I don't think the conversion-factor model describes what nearly everyone actually does, which is to strike some rough compromise between selfish and altruistic actions.
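To see why a factor of five washes out, here is a toy calculation--a sketch of the conversion-factor idea as I read it. The specific numbers (the single altruistic option, the 100 strangers helped) are made-up illustrations, not anything the argument depends on:

```python
# Toy version of the conversion-factor model: weight your own welfare
# K times as heavily as a stranger's, then pick the option with the
# higher weighted score. All numbers here are illustrative.

def weighted_value(benefit_to_self, benefit_to_others, k):
    """Total value when your own welfare counts k times over."""
    return k * benefit_to_self + benefit_to_others

K = 5  # "I value myself five times as much as I value other people"

# Option A: spend a unit of resources on yourself (1 unit of welfare for you).
# Option B: spend it altruistically, helping, say, 100 strangers by 1 unit each.
spend_on_self = weighted_value(benefit_to_self=1, benefit_to_others=0, k=K)      # 5
spend_on_others = weighted_value(benefit_to_self=0, benefit_to_others=100, k=K)  # 100

print(spend_on_self, spend_on_others)
# The factor of five is swamped: for the selfish option to win, K would have
# to exceed 100, and real altruistic opportunities reach far more than 100
# people -- which is how you end up at the comic-book-villain "billion".
```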
Another way of thinking about it is to imagine two homunculi, one selfless and one selfish, bargaining with each other. The selfless homunculus is an impartial altruist. The selfish homunculus just cares about you and your family. The two homunculi negotiate to determine what decisions you will make. They pick a plan together, and each homunculus can veto any plan. Note, this is not a model of moral uncertainty (though it does owe a lot to various theories of action under moral uncertainty, particularly the parliamentary approach). The thought is not: you have equal credence in ethical egoism and impartial altruism. The thought is: in practice, you will act as if you value some things besides impartial altruism.
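To be concrete about the mechanics, here is a minimal sketch of the veto rule as I picture it; the candidate plans, scores, and reservation levels are placeholders I invented for illustration:

```python
# Each homunculus scores every candidate plan and vetoes any plan that
# falls below its own reservation level. Only plans that clear both bars
# stay on the table. Plans, scores, and reservation levels are invented
# purely for illustration.

RESERVATION = {"selfish": 0.0, "selfless": 0.0}

def survives_vetoes(scores):
    """True if neither homunculus vetoes the plan."""
    return all(scores[h] >= RESERVATION[h] for h in RESERVATION)

plans = {
    "earn a lot, spend some, give the rest away": {"selfish": 3.0, "selfless": 5.0},
    "give away everything, live on rice and beans": {"selfish": -2.0, "selfless": 8.0},
    "spend everything on yachts": {"selfish": 4.0, "selfless": -1.0},
}

acceptable = [name for name, scores in plans.items() if survives_vetoes(scores)]
print(acceptable)  # only the compromise plan clears both vetoes
```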
There are lots of opportunities for gains from trade between the two homunculi. For instance, money for private consumption is subject to sharply declining marginal utility (how many yachts can one man own?). But marginal utility doesn't decline (or only declines very slowly) with money for altruistic purposes. The thousandth child vaccinated against polio is just as valuable as the first. So if you try to become extremely rich, both homunculi can be happy--the selfish homunculus because you will be able to buy tons of stuff, the selfless homunculus because, even after buying tons of stuff, you will have lots of money to give away. Similarly, many people become researchers because they love it and can't tear themselves away. And if you love biology, you might be able to make both homunculi happy by trying to invent a cure for Alzheimer's Disease.
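Here is a toy earning-to-give calculation to make the gains-from-trade point concrete. The functional forms (log utility for private consumption, roughly linear impact for donations), the income figure, and the scaling constant are all assumptions of mine, chosen only to show the shape of the deal:

```python
import math

# Private consumption has sharply declining marginal utility (modeled as log),
# while altruistic impact stays roughly linear in money spent -- the thousandth
# vaccine is worth about as much as the first. Both functional forms are
# simplifying assumptions for illustration.

def selfish_score(consumption):
    return math.log(1 + consumption)   # each extra dollar of consumption matters less

def selfless_score(donation):
    return donation / 1000             # impact roughly proportional to money given

income = 1_000_000
for consumption in (10_000, 100_000, 500_000, 1_000_000):
    donation = income - consumption
    print(f"consume ${consumption:>9,}: selfish {selfish_score(consumption):5.2f}, "
          f"selfless {selfless_score(donation):7.1f}")

# Past a comfortable level of consumption, the selfish score barely moves,
# while every dollar shifted to donations moves the selfless score in
# proportion -- plenty of room for a deal both homunculi are happy with.
```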
In general I like this model of compromise between conflicting values. But I also see a few big flaws.
I think there are a lot of situations where the homunculi's ability to veto seems intuitively attractive. Liking the veto feels similar to wishing that Abraham had told God, 'No, I'm not going to sacrifice my son, and I don't care what you offer me, there's no opportunity for a deal here, just go ahead and strike me down'.
But imagine if you had the opportunity to jump in between Gavrilo Princip and Archduke Franz Ferdinand in 1914, stopping Princip's bullet and preventing the First World War (assume--unrealistically--that you understand the stakes of the situation as it is happening). I would say, if you have that chance, you should definitely take it. Normally, I think it is fine for people to care a lot more about their own lives than the lives of strangers. It's just human nature; it would be a waste of energy to criticize something as built-in as that. You might as well command the tide not to rise. But in some extreme situations, my feeling changes. Normally, it is alright to put yourself above the rest of the world, to some extent. But if you can prevent WWI at the cost of your life, you should do it. I would be sympathetic to someone who was so overcome with fear in the moment that he let Princip shoot the archduke. But if someone just coolly watched it happen, and then said 'look, the homunculi couldn't reach an agreement on this one', I would object to that.
However, cool refusal is exactly what my two-homunculi model predicts. Unless the life of your child is at stake, there is no worldly benefit you can be offered that offsets the loss of your life. So the selfish homunculus just will not sell, no matter what the selfless homunculus offers him. There is no deal to be made.
I wonder if we can save the model with some idea of negotiating in advance to make extreme sacrifices in special situations. Imagine the two homunculi, before you are born, when they are perfectly ignorant of every fact about you, agreeing that if you have the chance to die to prevent a world war, you will take it. And if you have the chance to live a life of minimal altruistic value that is nonetheless surpassingly enjoyable, you will do that too (maybe being a great writer or musician--but perhaps that example doesn't work, because other people would enjoy your work).
Putting the model aside, I find my own thinking about this issue to be very muddled. I absolutely would give my life to stop WWI, or to achieve other comparably important ends. That's not because I don't love life; I do. But, even though I would be willing to sacrifice my life to prevent WWI, there are some seemingly less painful things that I really cannot see myself doing. For instance, if my best opportunity to help the world were something that made my parents hate me, I think I would probably just pass it up. You might object: this isn't necessarily an inconsistency; maybe I care more about filial piety than life itself. But that isn't it. If I had to choose between pressing a button that got me excommunicated from my family, or a button that got me killed (but left a beloved memory in my wake), I would definitely press the excommunication button. That is not consistent. (A friend suggests that I may put some epistemic weight on my parents' judgment, which maybe resolves the inconsistency.)
Here's another problem: both homunculi are always on board with instrumental selfishness (that is, helping yourself now so you can help others later). Put on your own oxygen mask first, as they say on airplanes. But "instrumental selfishness" is poorly defined.
Getting enough sleep is important to doing good work and to being happy. But what about getting along with your parents? Certainly, some people would be so miserable if they didn't get along with their parents that they wouldn't be able to do good work. What if you require lots of vacation time to do good work? What if you require the finest caviar every night? What about a new Bugatti? Where does it stop? The issue applies to a whole host of decisions, not just financial ones.
Finally, a general worry. It seems like we value a lot of things that are imperfectly correlated with each other. The true, the good, and the beautiful sometimes coincide, and sometimes they don't. And if the relationship between these things is a correlation of less than one, the maxima of truth, goodness, and beauty will come apart.
We are left with two (by my lights) pretty unattractive options. We can compromise and miss out on the maxima of all three values, and perhaps realize a lower amount of 'total value' (whatever that means); or we can maximize one value uncompromisingly. It sounds attractive to adopt the principle that, even though you normally compromise between X and Y, if you can really hit X out of the park you should just focus on doing that, Y be damned. But I wonder whether, in the real world, this principle makes compromise of any kind impossible--whether it just mandates zealotry.