About Morality


I believe the following is rather obvious, but people seem to disagree with it vehemently. So far, I have not heard counter-arguments that could convince me. Of course I still cannot be certain that these arguments are correct, but here they are nonetheless. 

I propose the following:

Ethics is theoretically solved. The solution is utilitarianism. 

Morality is just a simple, subjective approximation of utilitarianism, and it approximates utilitarianism because utilitarianism is approximately equal to egoism. 

Firstly, I define

  • morality as "what feels right to do (subjective, depends on the individual)"
  • ethics as "how we should actually act (objectively)"
  • "we should do X" as "We want to live in a world where we do X" (similar to Kant's categorical imperative)
  • utilitarianism as "what is best for the world, all things considered"
  • (genetic) egoism as "what is the best action for the survival of my genes"

If we want to find causal explanations of human behavior, we have to add up the causal effects of genetics and nurture. Genetics is mostly explained by evolution, and for pre-modern human behavior, nurture is a weak argument against genetic explanations, because evolution had time to adapt to nurture as well. 

Nearly everything fundamental in our psyche is the way it is because it randomly appeared in a genome and turned out to increase the likelihood of that genome's survival, or the survival of our relatives, who carry genes similar to ours, or the survival of our species, whose genes are also somewhat similar to ours. 

The origin of morality and the similarity of utilitarianism and egoism are quite easily uncovered using Darwin's theory of evolution.  

1) Why utilitarianism is approximately equal to egoism (punishment of non-utilitarianism)

We can clearly observe that individuals who act in non-utilitarian ways are punished by society. We say that this is because they are bad for society, but of course this is not a satisfying explanation. Why exactly do we punish them? After all, we are not a hive mind, but individual game-theoretic agents. Darwinian evolution provides a deeper explanation: 

Meta-strategy and meta-evolution form a society of individuals who all behave in such a way that egoism is forced to be as close as possible to utilitarianism (individuals have no reason not to act purely egoistically, so this is how the state of things becomes as utilitarian as possible). The forcing is the process of changing the Nash equilibrium: the old egoism may not be good for the group as a whole, but it is a Nash equilibrium (a set of strategies, one for each individual, in which no individual can change their own strategy without sacrificing profit for themselves), at least in the short term. Therefore, a bigger effort has to be made to move the whole Nash equilibrium point and escape the dilemma, increasing everyone's profit in the long run. 

The way this can work: a group (like a species, a tribe, or just a couple with children) which punishes anti-utilitarian behavior of individuals will force them to act more utilitarian in the short run, and force an evolution towards utilitarian individuals (which have emotions of morality) in the long run. This is what happens in the repeated prisoner's dilemma. The example of a couple with children is not well suited for genetic strategy evolution; however, the couple can adapt directly (meta-strategy) by figuring out that punishing non-cooperation in your partner can convince them to cooperate in the next round. 
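
To make the equilibrium shift concrete, here is a minimal Python sketch (mine, not part of the original argument; the payoff values and the `grim_trigger` punisher are arbitrary illustrative choices) of a repeated prisoner's dilemma played against an opponent who punishes defection:

```python
# Minimal sketch with illustrative payoffs: in the one-shot prisoner's dilemma,
# defection is the equilibrium, but against a punishing opponent in a repeated
# game, cooperation earns more in the long run.

# Payoff for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def grim_trigger(history):
    """Cooperate until the opponent defects once, then punish forever."""
    return "D" if "D" in history else "C"

def play(my_move, rounds=20):
    my_total, my_history = 0, []
    for _ in range(rounds):
        their_move = grim_trigger(my_history)  # the punisher reacts to my past moves
        my_total += PAYOFF[(my_move, their_move)]
        my_history.append(my_move)
    return my_total

print("always defect    vs punisher:", play("D"))  # 5 + 19 * 1 = 24
print("always cooperate vs punisher:", play("C"))  # 20 * 3     = 60
```

Against the punisher, cooperating yields 60 versus 24 for defecting, so punishment shifts the individually best (egoistic) strategy towards the one that is better for the group.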

To summarize, utilitarian behavior has proven adaptive because immoral groups of individuals die out faster, and a group will put immoral individuals at a disadvantage (this punishing behavior is itself meta-adaptive). Acting utilitarian is therefore adaptive, which is why utilitarianism is approximately equal to egoism. 

2) Why morality approximates utilitarianism

Morality is an emotion which serves to make us act more utilitarian. Calculating the utility function for different outcomes is difficult in practice: we cannot only consider the direct reward of an action (e.g. "let me just kill this person because he is a bad influence on society") but also need to account for side-effects, including emotional ones (e.g. the person might have a family; if we allow vigilante justice it will be abused; I myself will suffer emotionally because of my moral code; others will live in fear of being killed, ...). Considering all of these is impractical, so we have evolved approximations which are then followed as principles, such that even the act of following a principle can be counted as a benefit of an action: I will act better in the long run if I follow my principles in this specific situation, even if that is bad (in the short term) in this particular case. As a result we have developed rules and emotions like fairness, responsibility, the value of life, compassion, etc. Simple rules like these are much easier to enforce, both for an individual through willpower and for a society through laws. If you would rather fail to save a person from drowning than drown a person yourself, this is still utilitarian: the feeling originates from the fact that such principles lead, on average, to higher utility once you account for potentially imperfect behavior (in which case the consequences of the two options might in fact differ). 
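
That last point, that sticking to principles can beat case-by-case calculation once our own fallibility is priced in, can be illustrated with a tiny toy model (the payoff numbers below are my own assumptions, chosen only for illustration): breaking a principle yields a small gain when our judgment is right and a large loss when it is wrong, while following the principle yields nothing either way.

```python
# Toy model with assumed numbers: expected utility of "decide case by case"
# versus "always follow the principle", given an error rate in our own judgment.
GAIN_WHEN_RIGHT = 1      # small benefit if breaking the rule really was better
LOSS_WHEN_WRONG = -10    # large harm if we misjudged the situation

def expected_utility_case_by_case(error_rate):
    return (1 - error_rate) * GAIN_WHEN_RIGHT + error_rate * LOSS_WHEN_WRONG

for error_rate in (0.01, 0.05, 0.10, 0.20):
    print(f"error rate {error_rate:.0%}: "
          f"case-by-case = {expected_utility_case_by_case(error_rate):+.2f}, "
          f"follow principle = +0.00")
```

With these made-up numbers, a judgment error rate of only about 10% is already enough for "follow the principle" to win on expected utility, which is the sense in which preferring not to drown someone yourself is still utilitarian.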

3) The solution to Ethics - How should we act?

Ethics asks how we should act. If there are two options for how to act, we should choose the one that would create the world we would rather live in. Of course, your place in this world would be chosen at random. Now we can theoretically construct a utility function (I argue for its existence below), such that utilitarianism (maximizing this utility function) is the solution. 

Consequently, utilitarianism is the principle according to which we should act. 

Additional justifications - Existence of a utility function 

Utilitarianism assumes that there exists a utility function `V(S)` which judges the state `S` of the world, including all individuals, such that everyone would rationally have to agree with the function. "Agree" here is defined through the question "would you like to live in such a world if your role in it were chosen at random?". If I write `V_r(S)`, I mean the value of a world `S` for an individual who has found themselves in role `r` after the change of roles. 

This function is `V(S) = \frac{1}{N} \sum_{r=1}^{N} V_r(S)`, with `N` individuals and `N` roles in total. Every rational agent would like to live in the world that maximizes this function, and no other. 

Maximizing this function maximizes the expected value of the world for individual `i`, because the role `r` that `i` ends up in is chosen uniformly at random from the population in our thought experiment, and the expected value under a uniformly random role is exactly the average of the `V_r(S)`. 
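
As a small numerical check (with made-up role values of my own), the thought experiment can be simulated directly: the expected payoff of being dropped into a world `S` at a uniformly random role is just the average of the `V_r(S)`, i.e. the proposed `V(S)`.

```python
import random

# Illustrative per-role values V_r(S) for some world S (assumed numbers).
role_values = [10, 4, 7, 1, 8]

# V(S) as defined above: the plain average over all roles.
V = sum(role_values) / len(role_values)

# Monte Carlo version of the thought experiment: draw a role uniformly at
# random many times and average the value you end up with.
samples = [random.choice(role_values) for _ in range(100_000)]
expected_from_random_role = sum(samples) / len(samples)

print(f"V(S) = {V:.2f}")
print(f"expected value under a random role ≈ {expected_from_random_role:.2f}")
```

Both numbers agree, which is all the averaging definition of `V(S)` amounts to.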

Problem 1) 

But what if `i` wants to maximize something else, like the value of the worst role they could be dealt? Then `V(S) = \min_r V_r(S)` for this individual, so the total utility `V`, not just the subjective utility, seems to be subjective (apparently leading to three levels of subjectivity). 

First, we can reframe this definition as expected-value maximization: we can redefine the individual values by applying a slowly growing (concave) function to them, such as `V_r <- log(V_r)` (chain more logs as you like). Now variations at low `V_r` matter more than variations at high `V_r`, leading to an optimization of the worse fates. At least for this individual, total utility can then again be written as the average from above, `V(S) = \frac{1}{N} \sum_{r=1}^{N} V_r(S)`, just with the transformed `V_r`. But the individual still disagrees with the others on the form of `V`. Voting on `V` doesn't easily solve the lack of one-mindedness, so I have to provide the following argument: 
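
To see what the concave transform does, compare two made-up worlds with the same plain average of `V_r` but very different distributions: the average of `log(V_r)` prefers the more equal world, which recovers the "protect the worst role" intuition inside an expected-value framework.

```python
import math

equal_world   = [5, 5, 5, 5]   # everyone is moderately well off
unequal_world = [17, 1, 1, 1]  # same plain average, but three terrible roles

def plain_average(values):
    return sum(values) / len(values)

def log_average(values):
    # Averaging log(V_r) weights improvements for the worst-off more heavily.
    return sum(math.log(v) for v in values) / len(values)

for name, world in (("equal", equal_world), ("unequal", unequal_world)):
    print(f"{name:8s}: mean = {plain_average(world):.2f}, "
          f"mean of logs = {log_average(world):.2f}")
```

The plain mean is 5 in both worlds, but the mean of the logs drops from about 1.61 to about 0.71 in the unequal world, so the transformed average penalizes the bad fates, as claimed.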

I believe that any disagreement on this level (the level of the total utility function `V`) is not rational: if the roles are truly exchanged, then all psychological oddities, including risk aversion, are transferred along with them. Finding themselves in the new role, the individual regrets their previous risk-averse choice even before finding out how well-off they actually are (what `V_r(S)` is). So the behavior was not rational. Risk aversion can be seen as an emotion that helps you achieve maximum expected payoff, so it is not optimal per se, even if it emotionally feels that way. We can argue in the same way against other strategies. The only rational strategy is to maximize the expected value. 

Problem 2) 

"But you cannot measure people as a mere number!"

Yes, you can; in fact, you are already doing the same yourself: 

Let me play the devil for a paragraph and put you inside a trolley problem (two people are tied to a railway track with a train approaching; you can pull a lever to divert the train to another track, but there is another person tied to that track). Don't want to make the choice? No worries: you choose by whether your eyes are open or closed six seconds after I finish this sentence, so there is minimal bias between the choices. So who do you save? You could still try to randomize, but you probably won't come up with a way in such a short time. Most likely, you will choose. After a series of quite insane tests of a similar form, we will know more or less accurately how much you value different living beings with respect to one another. This can include animals, strangers, family and friends, and yourself, and you can even compare them to objects. If you have strong feelings of morality, you might value your own life no higher than that of others, and objects might have very low comparative value. And you may value each person equally, no matter who they are. But the fact is, you have a number. 

What's more, I can put you in a scenario where you have to actively do something to save the two people. If you prefer not to actively interfere in the natural flow of things, because you do not feel morally allowed to do so, you will not sacrifice the one person to save the other two. If I do this for N and M people instead of one and two, then this allows me to measure fairly exactly how much more you value your own emotional wellbeing than the life of a human being. (Substitute torture for death and apply the "which world would you like to live in if your role were chosen at random?" thought experiment, and you will surely not agree to letting very many people suffer that fate just to avoid acting.)
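
Read as a measurement procedure, the N-versus-M version of the test is a search for an indifference point. The following is a purely hypothetical toy model (the value of a life and the cost of acting are invented parameters, not claims about anyone): the N at which a person switches from refusing to pulling the lever brackets the number this paragraph is talking about.

```python
# Purely hypothetical toy model: the chooser values each life at 1.0 and
# assigns an extra emotional cost to actively causing a death (both numbers
# are invented for illustration).
VALUE_OF_LIFE = 1.0
COST_OF_ACTIVE_HARM = 3.5  # "I pulled the lever myself" penalty, in units of lives

def pulls_lever(n_saved, m_sacrificed=1):
    """True if actively diverting the trolley looks better to this chooser."""
    gain = n_saved * VALUE_OF_LIFE
    loss = m_sacrificed * VALUE_OF_LIFE + COST_OF_ACTIVE_HARM
    return gain > loss

# Scan N until the chooser switches; the threshold reveals their hidden number.
threshold = next(n for n in range(1, 100) if pulls_lever(n))
print(f"switches to pulling the lever at N = {threshold}")
print(f"implied cost of acting: between {threshold - 2} and {threshold - 1} lives")
```

With these assumed parameters the switch happens at N = 5, so the implied cost of actively interfering lies somewhere between 3 and 4 lives; finer questions would narrow the bracket, but a number it remains.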

Morality is a number

For the de facto state of things, even the old genetic argument alone is enough to see that morality is a mere number, if we accept that morality is an approximation of genetic egoism: the selfish gene judges even moral actions by a number, namely the expected change in the number of copies of the gene in the next generations. Therefore, our current moral code is based on a number. The same goes for everything in human behavior: consequently, our actual behavior tries to maximize the genetic utility function. 

Conclusion

In reality, all of this should not change our behavior much. I agree that following the classic values is usually the best choice, but this is still explained by the arguments I presented here. There are specific cases where it may differ, however. Now that we have evolved to be able to draw logical conclusions rather than just intuitive ones, we need to accept that ethics is fundamentally just utilitarianism, rather than some unexplainable magical emotion coming from nowhere, in order to arrive at more accurate, sensible ethics. An example is the trolley problem of 1 vs. N people, in which an intuitive, classically moral person would cause great suffering while feeling just great about their behavior. On the other hand, this test also provides an argument for egoists and calculating utilitarians to act more on general principles of moral value. These principles have evolved for a reason: they are quite useful in practice. But again, this does not change the real underlying causality. 










