What IS morality? What does “should” mean?

No, you can’t just say “should means to be morally obligated.” That’s just a synonym, and the same question applies: what does “morally obligated” mean?


Note: This is one of the most important ideas on the site, and the most difficult to get right.

It’s still very much a work in progress. Not even most atheists understand how morality can be objective without a God. This article has to be accessible, taking someone from ground zero all the way up to understanding the nature of morality.

If it’s not making any sense in its current form, please tell me. Let me know if there seem to be obvious counters to the ideas presented therein, or if you’re not persuaded and why.

And, of course, any stylistic or organizational advice would be appreciated, to make it catchier, simpler, clearer, etc.



Imagine another species of intelligent beings. Alexander Wales uses “Christmas Elves.” These beings do not care about life, happiness, freedom, discovery, humor and fun like we do. Instead, they care about eggnog, and that people under the age of 18 receive a toy of some kind on a specific day of the year. They care about the existence of a naughty and nice list, and about snowball fights (not because they’re fun; they care about them whether they’re fun or not).


They might tell us to stop wasting time on equality and freedom, and dedicate our time to making millions of tons of eggnog. This is probably so weird to us that we don’t even really think of it as evil; we just think of it as…ehh, what? Just weird. We just don’t care. Why on earth would we make millions of tons of eggnog? What possible reason…who even cares?


We might tell them to stop using all their vast resources to give toys to children, and use at least some of their resources to give food to those children. We might point out that it’s nice to give toys to wealthy children, but not if it takes away from giving basic food and medicine to starving, diseased children. They should rearrange their priorities.


This is so weird to them that they’d probably just think of it as…ehh, what? Just weird. They couldn’t care less. Why on earth would they give food to millions of starving children? What possible reason…who even cares?


In short, we care about vastly different things. Some would be tempted to say that the elves have a different morality than us. Our morality is about things like health, happiness, fairness, and freedom, and their morality is about things like eggnog, snowmen, toys, and a certain day of the year.


And of course, who could really say which morality is correct? Both species would be persuaded by their own preferences, but who could say whether one was really more moral than the other? Moral relativism, in other words. And some people apply that relativism even to the differences between humans. Some humans, in some eras of history, thought slavery was moral; others, in other times, disagreed. But who’s to say who’s truly right and who’s wrong?


I disagree. I think these two situations reveal two very basic misunderstandings about what it even means for something to be moral.


The elves are not moral. Not just because I, and humans like me, happen to disagree with them; no, certainly not. The elves aren’t even trying to be moral. They don’t even claim to be moral. They don’t care about morality. They care about “The Christmas Spirit,” which is about eggnog and stuff.


Likewise, humans are not Christmas Spirit-ey. Not because the elves happen to disagree with us, no, certainly not. Humans aren’t even trying to be Christmas Spirit-ey. They don’t even claim to be Christmas Spirit-ey. They don’t care about the Christmas Spirit. They care about morality, which is about happiness and stuff.


Alexander Wales’ story The Last Christmas provides an interesting look at what happens when a human and an elf discuss what to do.

“If you give a nice gift to a poor child, their parents will sometimes take it and sell it, which isn’t in the Christmas spirit at all.”

Charles closed his eyes and pinched the bridge of his nose. “I’m hesitant to just reverse all these decisions that were made without first knowing all the reasons behind them, but it seems idiotic to me that we have an effectively unlimited source of material wealth and we’re using it only on a single day of the year to give presents to only children, and in such a way that those who need the least get the most.”

Matilda said nothing, but raised an eyebrow.

“It’s unfair,” said Charles.

“Life is unfair,” said Matilda.

“Does it have to be?” asked Charles. “Is that the Christmas spirit?”

“I don’t know,” said Matilda. “Fairness doesn’t enter into it, I don’t think. Why should Christmas be fair if life isn’t fair?”

“He was such a jolly man though!” said Charles. “And he kept sex slaves?”

“Elves aren’t slaves, Santa,” said Matilda.

“If I asked you – purely hypothetically – to take a knife and disembowel yourself, would you?” asked Charles.

“Yes,” said Matilda.

“You wouldn’t even make any arguments for why you should live?” asked Charles.

“My life is meaningless in the face of the Christmas spirit,” said Matilda.

“But if it didn’t matter to the Christmas spirit,” said Charles, “If I just wanted to see you die for fun?”

“Allowing you to satisfy your desires is part of maintaining the Christmas spirit, Santa,” said Matilda. “A merry Santa means a merry Christmas, as we say.”

“I’m trying to do the most good.”

“Hrm,” said Matilda. “Doing good isn’t part of the Christmas spirit.”

“We wouldn’t let him kill anyone. Elves can’t act contrary to the Christmas spirit.”

“You let him hurt you,” he said. “You let him kill you.” The Christmas spirit had loopholes that you could drive a truck through, he’d already found that out. Killing everyone in the world would be as easy as saying the right sentence to a single elf.

“We’re not people,” replied Matilda. “We’re elves.”

You see, morality is a fixed, set, concrete thing. The Christmas Spirit is a different fixed, set, concrete thing. A certain action, like making eggnog or saving lives, is either moral or it’s not. And it’s either Christmas Spirit-ey or it’s not.


If humans ignore eggnog and save lives instead, that doesn’t make life-saving into a Christmas Spirit-ey thing, it just means that the humans don’t care whether it’s Christmas Spirit-ey or not. The Christmas Spirit doesn’t change. Instead, people either care or don’t care about the Christmas Spirit.


Likewise, if elves ignore saving lives and make eggnog instead, that doesn’t make eggnog production into a moral thing, it just means that elves don’t care whether it’s moral or not. Morality doesn’t change. Instead, people either care or don’t care about morality.


In fact, if you explained to elves what morality was, and they understood it, they might say, “Ah, yes, based on what you’ve told us, we see why saving lives is a moral thing to do.” They wouldn’t even disagree. But then they would go on making eggnog instead of saving people because they don’t care what’s moral or not. They care about Christmas Spirit instead.


And likewise, if elves explained to humans what The Christmas Spirit was, and we understood it, we might say “Ah, yes, based on what you’ve told us, we understand why making eggnog is a Christmas Spirit-ey thing to do.” And then we would go on saving lives instead of making eggnog because we don’t care what’s Christmas Spirit-ey or not. We care about morality instead.


So, to say the elves have their own “morality” is not quite right. The elves have their own set of things that they care about instead of morality.


It would be equally mistaken to say that humans have their own “Christmas Spirit.” Rather, humans have their own set of things that they care about instead of the “Christmas Spirit.” Humans care about moral things instead.



This helps us see the other problem, which comes up when people say, “different people at different times in history have been okay with different things; who can say who’s really right?”


Morality is a fixed thing. Frozen, if you will. It doesn’t change. Rather, humans change. Humans either do or don’t do the moral thing. If they do something else, that doesn’t change morality, but rather, it just means that that human is doing an immoral thing.


Wait, some say, but weren’t we talking about “morality” as the set of things that humans cared about, just like Christmas Spirit is a different set of things that elves care about? If the things that humans care about are what’s moral, then whatever people care about is moral, right?


And so, the misunderstanding comes into full view. Let’s see if we can distance ourselves a little from the disagreement and view it more objectively.


It’s not that a human wanting something makes it moral. It’s not even that something being moral makes humans care about it; they could start caring about something else instead, if we hacked their brains. Rather, humans just so happen to care about moral things. If they start to care about different things, like slavery, that doesn’t make slavery moral; it just means that humans have stopped caring about moral things.

Likewise, elves caring about different things doesn’t make those things moral; it’s just that the elves don’t care about moral things. Morality stays the same. Christmas Spirit stays the same. It is a feature of minds whether they care about morality, or Christmas Spirit, or both, or neither, or something else.


It is a feature of minds that they respond to morality.

It is a feature of your mind that you respond to morality.


Things are moral whether you care or not, or respond to them or not.

And then you either respond to that morality or you don’t.


So saying that “different people care about different things and who’s to say who’s moral or not” is a misunderstanding. Things don’t stop being moral just because elves are busy doing non-moral other things instead. And things don’t stop being moral just because humans do non-moral things, either.


(See the sections below for why humans sometimes disagree about morality, even though, like I’ve been saying, humans as a species all care about the same thing: morality.)


So, when humans disagree about what’s moral, there’s one definite answer. There’s always a moral answer to every question, and if one of the humans has it, then they have the moral answer and the other human doesn’t. And if neither of the humans has the one correct moral answer, then, well…you get the idea.


Now, as for “who’s to say?” that is an important question. A difficult one, often. But, ultimately, an answerable one.


(Of course, it’s not really about “who” says the answer. The answer is right whether anyone says it or not. Morality happens not to be determined by a personality; things cannot be “made” moral by a person’s say-so.

That might be another misunderstanding. Asking “who’s to say what’s moral or not?” almost makes it sound like morality is based on someone’s say-so. So, a better wording is “how can we know what’s moral or not?”)


Anyway, there is an answer. If we can find the answer, then we will know what is the moral thing to do. A Christmas Elf could also figure out what the moral thing to do is. The difference is that we care about morality, so we would then pursue the moral thing. An elf would know it was moral, and they would see that as an uninteresting, unmoving fact.



How do we find that moral answer, then? Unfortunately, there is no simple answer. I can’t say something like, “you can find the answer by asking yourself, ‘what will make everyone most satisfied afterwards?'” or maybe “the answer is whatever free people would choose to do” or something. People often fall for the One Great Guiding Principle Fallacy.


People try to reduce something complicated down to a simple thing. Nietzsche, for example, claimed that morality “was just the herd-instinct.” On another occasion, he claimed that “fear is the mother of morality.” The mistake in both cases is to think that the whole of morality can be summed up, reduced down to a single principle, whether it be fear, or herd-instinct, or anything else.


Think of morality instead, as a great, long, complicated equation. The equation has a great many pieces to it. Each piece is a value, a human desire; morality is a thousand shards of desire. William Frankena tried to list a few of these thousand shards:

“Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one’s own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc.”


What is the moral thing to do? If you say “that which contributes to life,” then you leave out all the others on the list. If you say it’s “what contributes to life, knowledge, health, and love,” then you leave out adventure, pleasures, beauty, happiness and a thousand other pieces. To sum up the whole of morality, you must say it’s what contributes to “life, consciousness, activity, health, strength, pleasure” and then include everything else on that list. And then a bunch more besides that Frankena didn’t think of.


You see, we don’t know all the pieces of morality, not so we can write them down on paper. And even if we knew all the pieces, we’d still have to weigh which ones are worth how much compared to each other. The Morality Equation might look something like:

“(5)life − (2)happiness / (1.3)pleasure + ((2)strength × ((6)adventure + (2.4)satisfaction) / (6.222)beauty) + 4…” and on and on and on for 10 more lines, a thousand values woven together into a vast and complicated relationship, represented in a dozen lines of equation.

Problem 1: Find every piece of the equation.
Problem 2: Correctly and precisely define exactly what each piece means.
Problem 3: Relate every piece to the others the same way they’re related in the brain.
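
To make the metaphor concrete, here is a deliberately silly sketch of what such an equation might look like as code. Everything in it is invented for illustration: the value names, the weights, and the shape of the formula. Nobody knows the real equation; the point is only that a many-termed function and a one-principle reduction give different answers.

```python
# Toy sketch of the "Morality Equation" metaphor. All weights, value
# names, and the formula's shape are invented for illustration only.

def toy_morality_score(outcome):
    """Score an outcome by combining many weighted values in a messy way.

    `outcome` maps value names (e.g. "life", "beauty") to how strongly
    an action promotes them; missing values default to 0.
    """
    v = lambda name: outcome.get(name, 0.0)
    return (5.0 * v("life")
            - 2.0 * v("happiness") / (1.3 * v("pleasure") + 1.0)
            + (2.0 * v("strength")
               * (6.0 * v("adventure") + 2.4 * v("satisfaction"))
               / (6.222 * v("beauty") + 1.0))
            + 4.0)

def one_principle_score(outcome):
    # The "One Great Guiding Principle" reduction: morality = life, full stop.
    # It throws away every other term of the equation.
    return outcome.get("life", 0.0)

# Example outcome: feeding starving children.
feeding = {"life": 3.0, "happiness": 2.0, "satisfaction": 1.0}
full = toy_morality_score(feeding)
reduced = one_principle_score(feeding)
```

The two functions disagree because the reduction drops most of the terms, which is exactly the One Great Guiding Principle Fallacy; and even this toy has only a handful of terms, where the real thing (per Problems 1–3 above) would have thousands, none of them written down anywhere.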



If no one knows the Morality Equation, then how do we ever figure out what’s moral? And how did humans end up having this equation in their heads in the first place?


The Morality Equation has been programmed into us since birth, placed there by evolution’s design. That’s why the Morality Equation is so complicated. Evolution’s design strategy is “try whatever random psychological hacks are easiest, and then keep the ones that help their bearers survive on the African savanna.” The brain design it coughed up is not simple, or elegant, and certainly not perfect.


Evolution is not guided by any One Great Guiding Principle (except inclusive genetic fitness), so the psychology it designed in humans is not reducible to One Great Guiding Principle.


Our brains do all kinds of things, most things that they do, without our being conscious of it, or how it’s done. Our sense of beauty is a great example. Our brain runs some calculation, some pattern of firing neurons, which makes us feel and appreciate beauty when we see it, but we don’t know what that calculation is. Our brains are doing it, but we don’t know exactly how. Beauty, like morality, is a complicated equation that our brain runs. Like morality, beauty is programmed into our brains from the time of our birth. Evolution designed our brains to run those specific calculations, and to react to them in certain ways.


Of course, if our evolutionary history had gone differently, evolution would have designed us to care about different things, about something other than morality. Indeed, other species have a different evolutionary history, and so, don’t care about morality.


In short, evolution designed the human psychology. It designed us to care about certain things, to hold the Morality Equation. And it did not design us to be conscious of what that Morality Equation is. It could have designed us to care about something else, but it didn’t.



Now, I am duty-bound to clear up a final misunderstanding, one which I may have contributed to in this article. When I refer to “humans who care about moral things” and those who don’t, I may be giving the impression that the moral differences between people come from some caring about one set of things and others caring about a different set of things; as though some people care about morality and others don’t. Like, slave-owners must not have cared about it, right, since they did the immoral thing? This is not true at all.


Humans all care about the same set of things (in the sense I’ve been talking about): morality. Does this seem contradictory? After all, we all know humans do not agree about what’s right and wrong; they clearly do not all care about the same things.


The first thing to say is that the morality that is deep within us, programmed into our brain by evolution, does not contain values like “the freedom of all human beings,” or “the happiness of all sentient beings,” and certainly not “all people are created equal.”


Indeed, for most of the history of the world, humans have not cared about these things. At all. They cared only for themselves and their loved ones. They routinely committed genocide. Revenge by torture was their preferred method of therapy. Slavery was practiced among most peoples for most of the history of the world. What has happened since then? Did our brains suddenly evolve to care about the well-being of all people? Is it just culture?


No. Humans are born with the same Morality Equation in their brains. How, then, do we explain all their disagreements? There are three ways for humans to disagree about morals, even though they’re all born with the same Morality Equation in their heads.


1. Keep a human from calculating the morality of something at all.


This is the most common way. Most people will never bother to think very hard about what’s moral and what’s not, preferring to take their cues from their society instead. It’s only natural that someone who hasn’t bothered to calculate the morality of something might disagree with someone who has.


This is just like a math problem. You might know how to calculate 45 times 79 (I do), but until you get around to doing it, you won’t know the answer (I don’t). But if you and I ever did calculate it, we would agree on the answer.


The Morality Equation is hugely complicated. You can show us slavery, but until we sit down, think it through, run it through the Morality Equation and triple-check our answer, we won’t be able to say “Wrong.” For most of the history of the world, people just didn’t think it through.


And here’s the important thing to realize about morality. If two people ever do calculate the morality of an act, they will come to the same conclusion.


Of course, they have to run the calculation correctly, which brings us to the next way to keep humans from agreeing about the morality of something.


2. Keep a human from correctly using the Morality Equation to calculate something.


This is actually very easy, because the Morality Equation is very complicated. Here’s another excerpt from The Last Christmas.


“I want Li Xiu Yang to be given a gift that will leave her permanently physically healthy, uninjured, and with a mental state that is within three sigmas of normal for her age, gender, and culture. I want her to be free from any disability or degradation of any of her senses, organs, or other body parts. Whatever solution you give should age her at the normal rate until her twentieth birthday, at which point she should cease to age.”

The elf gave him a funny look, then began to shape the ball of grey goop. Three minutes later, he presented Charles with a small pebble.

“This is it?” he asked. “And it won’t turn her into some kind of monster, or cause her unbearable pain, or anything like that?”

The elf sighed and took the pebble back, then after a few moments of reconstruction handed it back to Charles.

“Well, that certainly inspired confidence,” he said dryly.


The point here is that you can think you’re taking everything important into account, like Santa did in this excerpt. But then, if someone gives you what you say instead of what you mean, it turns out that there are extra conditions you’ve left out. The Morality Equation is complicated. It’s easy to forget to carry the 2.



3. Get a human to not want to use the Morality Equation.


If you were a slave-owner, you’d probably not want to think too carefully about the morality of slavery! People who really don’t want to change their minds generally find some excuse for why slavery is okay, one they can convince themselves to believe, and then they quickly stop thinking about the issue. If you bring it up, they’ll parrot out their excuse, defend it stubbornly without ever trying to understand other people’s explanations of why it doesn’t make sense, and then become emotional and change the subject if they feel in danger of changing their mind.


It’s actually really easy to get people not to want to think things through too carefully. It’s called motivated cognition, or motivated reasoning.


So, when people disagree about morality, it doesn’t mean that they couldn’t agree. It just means that they
– haven’t calculated it yet,
– have incorrectly calculated it, or
– don’t want to calculate it.
But behind it all, as humans, they have the same Morality Equation, and as humans, they’ve been designed to respond to things that they realize are moral (if they realize they are moral), to care about them.



So if someone threatens to kill me, and I respond “You shouldn’t do that!”, that’s a way of saying:

“You don’t want to do that! If you carefully calculated the morality of killing me, you would realize that it wasn’t moral, and since (as a human) you care about morality, you wouldn’t want to kill me. Deep down, you don’t want to kill me. If I could somehow show you my life, have you feel what I’ve felt, understand what I’ve understood, then you would value my life like I do, and like I value yours, and you wouldn’t want to kill me. By working through the Morality Equation, you would realize that you don’t want to do this. It’s just sitting there like an unsolved math problem for you to find the answer to.”


That’s the real question. “What do humans consider moral after they’ve carefully thought it through, and understood perfectly every little detail of a question?” It might be impractical, but if you could have everyone on earth live the life of a slave-owner, and also live the life of a slave, a thousand times each, you could be darn sure that at the end of it they would say together: “Slavery is wrong.”


A Roman soldier and I might disagree about slavery. But if I understood everything he did about slavery, that wouldn’t change my mind. Whereas if he understood everything that I do about slavery, or better, what a slave understands about slavery, then he would change his mind. You just have to find a way to get people to think through the Morality Equation.


When people do think through the Morality Equation, they experience moral progress, and realize the morality of things they never noticed the morality of before. They might say things like “Huh, I never thought of it that way…” or “I never realized…” or “I didn’t think of that…”


And when somebody waits until after they’ve done something to calculate its morality, they sometimes find out after the fact that it’s immoral. Then they experience regret, and say things like “I wouldn’t have done that if I’d realized…” and “I’ll never do that again.”


Moral progress is finding more and more what the Morality Equation says about things, discovering what we care about on the deepest level. If the Romans showed you everything they knew, that wouldn’t convince you that their slavery was moral. But if you showed them everything you knew, that would convince them. Humanity can truly and permanently progress in its morality.


Our obstacles? The three ways mentioned above, three ways humans avoid using their Morality Equation to calculate morality.


They don’t bother to think about it. They make mistakes while thinking about it. They avoid thinking about it.


But these are fundamentally solvable problems.


If we can learn to think about morality…

And if we can learn to examine our thoughts for errors, and ask others to help us do so…

If we can learn to accept the truth, no matter what it is…


Then moral progress can continue. And someday, we may find all the pieces of the Morality Equation. We may discover how they all fit together. We can write it down and say: Done. We can find the way to calculate the morality of every question, and everyone who thinks it through, clearly and carefully and fairly, will see what’s moral. And because they are human, and humans respond to morality, they will do the right thing.


We will all do the right thing.