Morality

It is Jan Eriksson who inspired me to write about morality. But while he builds his reasoning on experience, literature and education, I base mine on absolutely nothing at all.

To me, utilitarianism is the most interesting moral framework, as I believe the behavior of any agent can be described as if it were maximizing a utility function, however complicated that function might be. Utility is strongly linked to values, but I want to attempt to separate the two concepts. Values, for me, are something simpler and easier to describe, while the utility function could change structure from moment to moment, potentially quickly, based on values and an assessment of risk. For those familiar with reinforcement learning and Bellman's equation, $V(x_0, a_0) = R(x_0, a_0) + \gamma\,\max_{a_1} V(x_1, a_1)$, I am trying to say that by utility function I approximately mean $V$, while by values I approximately mean $R$.
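The distinction can be sketched in code. Below is a minimal, entirely hypothetical example (the two-state MDP, its rewards, and the discount factor are all made up for illustration) in which a fixed table $R$ plays the role of values, while $V$, computed by the Bellman recursion, plays the role of utility:

```python
from functools import lru_cache

GAMMA = 0.9  # discount factor

# "Values": a fixed, simple reward table.
# R[state][action] -> (reward, next_state). A made-up two-state MDP.
R = {
    "s0": {"a": (1.0, "s1"), "b": (0.0, "s0")},
    "s1": {"a": (0.0, "s0"), "b": (2.0, "s1")},
}

@lru_cache(maxsize=None)
def V(state, action, depth=50):
    """'Utility': immediate value plus the discounted utility of the
    best follow-up action, i.e. the Bellman recursion truncated at
    `depth` steps."""
    reward, nxt = R[state][action]
    if depth == 0:
        return reward
    return reward + GAMMA * max(V(nxt, a, depth - 1) for a in R[nxt])

# Repeatedly taking "b" in "s1" yields a geometric sum approaching
# 2 / (1 - 0.9) = 20.
print(round(V("s1", "b"), 1))
```

Note how the "values" stay trivially simple while the derived "utility" already depends on everything the agent might do next; that dependence is the whole point of separating the two.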

What I mean by morality, then, is precisely the connection between values and utility. However, this connection becomes the most interesting only when the consequences of our actions are uncertain and we must begin to find ways to weigh risk against potential benefit. It is comparatively easy to discuss how to act in a situation with complete knowledge of the consequences of each alternative, as Bellman's equation provides an obvious example of a reasonable connection between values and utility. Take, for example, the classic Trolley Problem. This type of question, in my opinion, is not very interesting from a moral perspective, since it is reasonable to directly apply e.g. Bellman's equation to a person's values to know how they would act. (Personally, I don't value whether I've touched a lever or not, and would therefore choose to see five people live instead of just one.)
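Under this view, the Trolley Problem with fully known consequences reduces to an argmax over values. A toy sketch, with invented value assignments matching the parenthetical above:

```python
# Hypothetical values for the two trolley choices: only lives count;
# touching the lever carries no value in itself.
values = {
    "pull_lever": -1.0,  # one person dies
    "do_nothing": -5.0,  # five people die
}

# With complete knowledge of the consequences, choosing is just an argmax.
best = max(values, key=values.get)
print(best)  # -> pull_lever
```

All the moral content sits in the `values` table; once it is written down, the "decision" is mechanical.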

The interesting moral questions, in my opinion, are instead about how one chooses to handle the uncertainty in the potential outcomes of different choices. Should we, for example, attempt to maximize the expected value of the utility? This method can lead to risk-taking, as we become prepared to bet a lot on an action which, with a very low probability, produces an extremely good outcome, preferring it over an action that is certain to produce a mediocre outcome, and so on.
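The risk problem can be made concrete with a toy calculation (all numbers invented): an expected-value maximizer prefers a one-in-a-thousand shot at a huge payoff over a guaranteed mediocre one.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

sure_thing = [(1.0, 10.0)]        # guaranteed mediocre outcome
long_shot = [(0.001, 20000.0),    # extremely good, very unlikely
             (0.999, 0.0)]        # otherwise nothing

print(expected_utility(sure_thing))  # 10.0
print(expected_utility(long_shot))   # ~20, so the gamble "wins"
```

Whether that preference is reasonable, or whether expectation should be tempered by some measure of variance or worst case, is exactly the kind of question I find interesting.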

Another aspect is whether the way the world reached a certain state should play into our way of assigning utility to that state. One can argue that a pure utility function does not care about history, such that we do not differentiate between two choices which result in the same end state, but where one, e.g., entails the torture of innocent people while the other does not. If nobody remembers the torture, does it then matter whether it actually occurred? Similarly, if we do believe ourselves to remember torture, does it matter if it actually never happened? Abstractly, I believe the answers to these questions should be no; however, if one believes in T-symmetry or the no-hiding theorem, it might thankfully be a non-issue.
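A history-blind utility of this kind can be sketched as a function of the end state alone (the state encoding here is, of course, invented):

```python
def pure_utility(state):
    # A "pure" utility: a function of the end state alone, with no
    # access to how the world got there.
    return state["people_alive"] - state["remembered_suffering"]

end_state = {"people_alive": 100, "remembered_suffering": 0}

# Two different histories arriving at the same end state...
history_a = (["uneventful decade"], end_state)
history_b = (["torture, later forgotten by everyone"], end_state)

# ...are necessarily assigned identical utility.
print(pure_utility(history_a[1]) == pure_utility(history_b[1]))  # -> True
```

The signature itself enforces the philosophical position: since the function never sees the trajectory, it cannot possibly distinguish the two histories.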

Here, though, if not before, I should stop, as I am sailing waters unknown to me but already thoroughly explored by people ranging from natural scientists to philosophers. The only conclusion I might draw is that I would likely do better as a mathematician than as a psychologist.