
[Thought experiment] If alignments are objective how do we know what they represent?



Saint-Just
2021-02-19, 02:20 PM
A few years ago I read the campaign "log" (more like freeform impressions) from a game with an interesting premise: Sigil has fallen from the Spire.

What the players knew: everything Planescapish happened, and then... the portals closed. Everyone who was in Sigil was stuck in Sigil. And this happened some hundreds of years ago. By the start of the game Sigil is more claustrophobic and strapped for resources than ever, there is significantly stricter regimenting of life (not all of it handed down from above), and a lot of effort is spent on pretty much keeping things running. You can see some of the tropes typical of a "generation ship" or a faraway colony in a hostile environment - you just do what needs doing or everyone will die.

Except the players also knew that the above is what passes for "official history", and there is a lot to doubt: yes, people can get mazed, but has anyone at all seen the Lady of Pain in all those centuries? Isn't it a little too convenient for the Advisory Council that some less popular decisions are supposedly not their own? Other things may also not be lining up so neatly.

And one of those "supposedly existing" things is alignment. Well, of course everybody knows alignment can be detected; it is even written on your ID (and you are supposed to have this information officially corrected if you know your alignment has changed). Spells and artifacts that work differently on different alignments work consistently: if you are affected by Protection against X you are affected by Magic Circle against X, no matter who or what cast it, when, or for what purpose. But how do you know it has any relevance to morals or ethics, instead of being some sort of blood group for souls? All the usual arguments about the subjectivity of evil, about people detecting as opposite alignments at the same time, etc. are actually made by people in the game - unofficially. And the tagline "Sigil has fallen from the Spire" even offers a conciliatory theory (it's only a theory, nobody knows for sure): cosmic forces of Law, Good, Evil, and Chaos may have existed, but since Sigil has somehow lost its connection to the outside (which may or may not have resulted in everything outside unraveling), whatever those spells and artifacts and abilities relied on may have shifted so much that calling the alignment Good is about as meaningful as calling the same alignment Orange.

Now the question was not quite answered in that game (nor was the game really set up to do it). But I want to seek your input on the premise: in a situation with uncontactable (or nonexistent) outer planes, detached (Eberron-like) or nonexistent gods, etc., how hard would it be to distinguish between two competing theories: alignments are morally based vs. alignments are just there? I especially want to focus on the "hard" part - e.g. even if 99% of people convicted of murder are Evil, to properly prove causation (either way) instead of correlation you'd need significantly more data - and in the absence of refined scientific methods it's likely that such theories would be accepted or rejected on an emotional or ideological basis.
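
To make the "hard" part concrete, here is a minimal sketch (in Python, with invented numbers) of why observation alone underdetermines the answer: a world where behavior causes alignment and a world where some hidden "soul type" drives both conviction and detection can produce identical records, so the 99% figure by itself cannot pick a theory.

import random

random.seed(0)

def world_moral(n):
    """Alignment tracks behavior: murderers become Evil (with 1% noise)."""
    records = []
    for _ in range(n):
        murderer = random.random() < 0.05
        evil = murderer if random.random() < 0.99 else not murderer
        records.append((murderer, evil))
    return records

def world_inert(n):
    """Alignment is an inert trait: a hidden confounder ("soul type")
    drives both the behavior and the detection result."""
    records = []
    for _ in range(n):
        confounder = random.random() < 0.05
        murderer = confounder
        evil = confounder if random.random() < 0.99 else not confounder
        records.append((murderer, evil))
    return records

def evil_rate_among_murderers(records):
    pings = [evil for murderer, evil in records if murderer]
    return sum(pings) / len(pings)

# Both worlds show ~99% of convicted murderers detecting as Evil:
print(evil_rate_among_murderers(world_moral(100_000)))
print(evil_rate_among_murderers(world_inert(100_000)))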

OldTrees1
2021-02-19, 05:41 PM
I am going to assume Detect Good/Evil used to detect moral/immoral, and I am going to talk about Detect Strange/Charm to signify that you don't know what Detect Strange/Charm is actually detecting.

Okay, so you are talking about a population that is discussing philosophy; it once had spells that could detect moral character, but the modern incarnations of those spells might be detecting strange/charm instead.

1) How many of them believe the spells are useless? IIRC Sigil has a stronger philosophic bent than most cultures, so I suspect a few positions:
A: Without knowledge that the spells work, the spells can't be trusted as evidence. We need to discover moral truth ourselves rather than rely on these suspicious spells. (See IRL responses to the same issue.)
B: Clearly Detect Strange can be used to figure out what is moral. We should all behave like that and avoid what Detect Charm detects. They might settle on any of the spells as the correct one, including one group that thinks Detect Poison and Disease is Detect Moral :D.
C: "Clearly Detect Strange and Detect Charm are not opposites. See, that person does not ping as either of them."

How would it look if the spells did not initially detect moral truth? Exactly the same. In order to know if the spells can detect moral truth, you need an alternative way to identify moral truth. If you have an alternative way, then I am envious of your universe because you can identify moral truth without the spells (and thus still have that alternative way if the spells change). Ah, but how do you know that method is reliable? Etc etc. The only reliable method is pure logic, and that method can't conclude anything beyond tautologies.

Seto
2021-02-19, 06:30 PM
how hard would it be to distinguish between two competing theories: alignments are morally based vs. alignments are just there? I especially want to focus on the "hard" part - e.g. even if 99% of people convicted of murder are Evil, to properly prove causation (either way) instead of correlation you'd need significantly more data - and in the absence of refined scientific methods it's likely that such theories would be accepted or rejected on an emotional or ideological basis.

To distinguish between the two theories, as in, to rationally choose the best one? Pretty easy, I'd say. You don't need to prove causation a priori for that. Observing 99% correlation is more than enough. Because that correlation itself, even if it doesn't prove causation, is an observable phenomenon. The first theory has the advantage of explaining that phenomenon simply and elegantly. The second theory would have to provide an explanation if it were to compete.

Now if your question is: what does it take to prove beyond a doubt that alignments are morally based? That's significantly harder, perhaps impossible. But then, it is the nature of any scientific theory to never be definitively true. A science has to constantly account for new phenomena. A theory can be considered true as long as it's sufficiently consistent with all known phenomena.
The two things the first theory (the "morality theory") has to do in order to surpass the second are:
1- explain the apparent contradictions, such as creatures detecting as several alignments at the same time, etc.
2- show that it has predictive value. That is, the theory must not only correlate alignments with certain actions, but also reliably predict that after a person takes certain actions, their alignment will shift in a certain way.

That second point is really fundamental. The fact is that alignment isn't a "soul blood type"; that's easy to disprove on the simple basis that alignment can change. It is an observable phenomenon that alignment changes after certain actions. Now any good theory has to account for the reason for the change: what, if anything, was contained in the action that caused the alignment change - or, if it wasn't the action, what external factors participated in the change.
How does the second theory ("alignment is just there") explain those changes? If it doesn't, it's not a theory, it's just a refusal to engage. If it does, the proponents of that theory should seek to test and prove it, and if they can't, they'll eventually rally to the first theory, which at least has a simple and likely explanation.
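
As a rough illustration of what such a predictive test could look like (a sketch only; the action weights, threshold, and survey records are all invented for the example):

# Sketch of the "morality theory" as a falsifiable predictor: assign
# hypothetical moral weights to observable actions, predict the alignment
# the Detect spells should report, and score predictions against reality.
ACTION_WEIGHT = {"care_for_orphans": +2, "donate": +1, "theft": -1, "murder": -5}

def predict_alignment(history, threshold=3):
    score = sum(ACTION_WEIGHT[a] for a in history)
    if score >= threshold:
        return "Good"
    if score <= -threshold:
        return "Evil"
    return "Neutral"

def accuracy(subjects):
    # subjects: list of (action_history, alignment_reported_by_Detect) pairs
    hits = sum(predict_alignment(h) == detected for h, detected in subjects)
    return hits / len(subjects)

survey = [
    (["care_for_orphans", "donate", "donate"], "Good"),
    (["murder"], "Evil"),
    (["theft", "donate"], "Neutral"),
]
print(accuracy(survey))  # 1.0 on this toy data; a real survey would be messier

If the "alignment is just there" theory cannot match this kind of predictive accuracy on new subjects, it loses.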

As a last point, yes, in the absence of definitive proof, most people will find reasons to support theories they like on an emotional and ideological level, no matter how contrived, and reject those they don't, no matter how likely. At a certain level though, it just becomes wilful ignorance, and no scientific theory is safe from that, in any world - no matter how commonly accepted.

Tanarii
2021-02-20, 12:52 AM
It's mostly the wrong thought experiment for 5e. Most of the things that make alignment verifiable to people are gone. Unless you can summon a sprite. ;)

Basically, even if Alignments are in-game objective, that doesn't mean that people don't have subjective views about them. At least until after they die, if they live within the great wheel cosmology or one that works similarly. And in some cosmologies, not even then.

So Alignments are always subjective at the DM/player level, may or may not be objective at the in-universe cosmic level depending on the DM's world building, and probably are subjective for most at the in-universe (at least prime material) creature level.

LibraryOgre
2021-02-20, 02:32 PM
Now the question was not quite answered in that game (nor was the game really set up to do it). But I want to seek your input on the premise: in a situation with uncontactable (or nonexistent) outer planes, detached (Eberron-like) or nonexistent gods, etc., how hard would it be to distinguish between two competing theories: alignments are morally based vs. alignments are just there? I especially want to focus on the "hard" part - e.g. even if 99% of people convicted of murder are Evil, to properly prove causation (either way) instead of correlation you'd need significantly more data - and in the absence of refined scientific methods it's likely that such theories would be accepted or rejected on an emotional or ideological basis.

How do we know about physics or chemistry? Thousands of years of observation.

If alignment can be measured (Detect Good, Detect Evil, or 2e's Know Alignment), then you can see, from people's actions, what results in different alignments. "Oh, Sister Pureheart, who has spent her entire life caring for orphans, is very Good, while Count Baddius, who drowns puppies and makes kittens fight to the death is very Evil." If Sister Pureheart is for some reason pinging as Evil, despite the actions everyone knows about, that's going to be a clear sign that SOMETHING sinister is going on, just like knowing that solid water is less dense than liquid water shows that water has some special properties.

Isaire
2021-02-28, 06:35 AM
Wouldn't the simplest solution be to find people who were thought to be evil / good, kill them, wait a bit, resurrect them, and see what their experience of the afterlife was? Relative to what people had recorded before? Though I'm never really sure how the afterlife is supposed to work any more in 5e.

MoiMagnus
2021-02-28, 07:01 AM
Even if people don't know what alignment means, they will definitely notice that (more often than random chance would suggest) they have similar values to people of the same alignment as them.

Think about what would happen IRL if zodiac signs were actually able to describe people's personalities somewhat precisely. You wouldn't need to wait long before having plenty of stereotypes about the different alignments.

And alignment will probably become subject to discrimination, as some communities will try to keep a homogeneous alignment within them. And once you have different communities homogeneous in alignment, it will be much easier for scholars to compare the effects of those alignments on how people behave.

Obviously, there will be competing theories, but how much of it is academic debate about subtleties and how much is a religious war will mostly depend on how the society is organised.

Quertus
2021-02-28, 09:32 AM
If someone pings to both "Detect Charm" and "Detect Strange", does "Know Alignment" register them as "Charm Strange"?

If so, could this not be used as evidence for the cracks in the system?

KineticDiplomat
2021-03-02, 07:45 PM
So the basic grounding is that there are, in fact, consistent standards of Good and Evil - their relative moral value is irrelevant for the first step of this; what matters is just that there are literal, concrete, objective good and evil.

Great, do some industrial-scale (and definitely not ethical) science. Set up a Nameless One scenario: he reincarnates as a perfect true neutral so long as his memory is completely gone upon revival.

Next up, baseline a system for metrics. Pick, say, 10 things you think are unambiguously good (I recommend at least one be giving to charity - which might end up being proportional to your own wealth, but hey, lots of tests) and 10 unanimously evil ones. Run a battery where the subject performs these actions until they get "Detected" as good or evil. Solve the multivariable equations to create standard units of good and standard units of evil. You have to kill your Nameless One a lot for appropriate sample sizes of actions that eventually flip you to good or evil, but hey, science.

If you have SGUs and SEUs (and if you can get them close enough, you can create an SMU - standard morality unit) you can now catalog the effects of just about anything on the supposed Good and Evil. What is the charitable-giving equivalent of killing an orphan? You can test that! Sure, there will be follow-on research and people will refine the equations in the same way Econ grew from a simple algebra problem into John Nash using seven chalkboards at once, but the point is that it can now be measured for any experiment you want.
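
A minimal sketch of that calibration step, assuming the trial records exist (the act names and counts are fabricated, and ordinary least squares stands in for "solve the multivariable equations"):

import numpy as np

# Each row is one trial with the reset subject: counts of each benchmark
# act performed before Detect Good first pinged. Numbers are fabricated.
acts_per_trial = np.array([
    [4, 1, 0],   # 4x charity, 1x rescue, 0x healing
    [0, 2, 3],
    [2, 0, 4],
    [1, 3, 1],
])
# Define the detection threshold as 1.0 standard good unit (SGU).
threshold = np.ones(len(acts_per_trial))

# Least squares recovers an SGU weight per benchmark act.
weights, *_ = np.linalg.lstsq(acts_per_trial, threshold, rcond=None)
for act, w in zip(["charity", "rescue", "healing"], weights):
    print(f"{act}: {w:.3f} SGU per act")

With the weights in hand, the charitable-giving equivalent of any act is just a ratio of two weights.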

NigelWalmsley
2021-03-02, 08:12 PM
Frankly, I don't see how you can get from something like D&D alignment to any kind of coherent moral theory. Whatever mechanism it is that determines alignment, it's just one mechanism. There is, as it turns out, more than one moral theory. We can therefore conclude that people as a whole are not going to assign any universal correctness to whatever it is that alignment corresponds to, so, particularly if there isn't any kind of afterlife or easily-contactable gods, people will probably just treat it as kind of uninteresting. The only reason we care is that the authors used real-world moral terms for what is, in practice, a descriptor like [Cold] or [Figment].

LibraryOgre
2021-03-02, 08:37 PM
Frankly, I don't see how you can get from something like D&D alignment to any kind of coherent moral theory. Whatever mechanism it is that determines alignment, it's just one mechanism. There is, as it turns out, more than one moral theory. We can therefore conclude that people as a whole are not going to assign any universal correctness to whatever it is that alignment corresponds to, so, particularly if there isn't any kind of afterlife or easily-contactable gods, people will probably just treat it as kind of uninteresting. The only reason we care is that the authors used real-world moral terms for what is, in practice, a descriptor like [Cold] or [Figment].

But, like [Cold] or [Fire], they broadly conform to those real-world terms. My [Fire] spell isn't going to solidify water, my [Cold] spell isn't going to start fires, because they are [Fire] and [Cold]. If I want to melt ice with magic, I'm not going to use [Cold] magic. And folks are going to be interested in these things, because creatures with the [Evil] descriptor are likely to feast on the entrails of your family, while things with the [Good] descriptor are likely to not do that.

I don't know about you, but I greatly prefer things with the [Good] descriptor, just for that reason. And if things with the [Good] descriptor are more likely to help people that act in certain ways... and those ways are also broadly helpful to myself and my community... I think you still wind up with people interested in it, because it has an effect. Not everyone is really interested in the nuances of physics; they know enough not to try to fly.

NigelWalmsley
2021-03-02, 09:25 PM
My [Fire] spell isn't going to solidify water, my [Cold] spell isn't going to start fires, because they are [Fire] and [Cold].

Are you sure? The Codex Alera has an elemental magic system where freezing stuff is Fire magic (because it's heat transfer, and moving heat around is Fire). Avatar has an elemental magic system where freezing stuff is Water magic (because ice is made of Water). Which is to say that these categories are not actually very well defined. Even when you get into morality, there's a great deal of variation between settings. In D&D, casting Animate Dead on a T-Rex is Evil. In The Dresden Files, it's what the hero does to save the day.


And if things with the [Good] descriptor are more likely to help people that act in certain ways... and those ways are also broadly helpful to myself and my community... I think you still wind up with people interested in it, because it has an effect. Not everyone is really interested in the nuances of physics; they know enough not to try to fly.

That's a complicated question.

First, we might consider that sort of thing to be against the spirit of the question. If we're assuming there's no Pelor (who dwells on the upper planes and does nice things for people that are [Good]), we might reasonably argue that there are also no Solars (who dwell on the upper planes and do nice things for people that are [Good]).

Second, is that really how a [Good] creature should behave? I think you'd be on pretty good ground arguing that helping people who indulge in cannibalism or go around casting Mind Rape on people is probably not compatible with being [Good] in any useful sense. But as noted above, there are grey areas. If I'm a benevolent (i.e. [Good]) outsider, do I really let the peasants starve because they use Skeleton Oxen in their plow teams?

Third, it's not only the [Good] creatures that are willing to help you and your community. If the local Vampire is willing to protect your village in exchange for a yearly blood tithe, is that really something you're going to refuse because he pings [Evil]? Remember, this is a world that is full of Manticores and Displacer Beasts that, while not [Evil], do think you taste delicious and are not going to be dissuaded by the fact that you do [Good] stuff in your spare time.

Fourth, there are supposedly cultures that are [Evil]. Orcish culture is, apparently, Chaotic Evil. Doubtless, Orcs have produced some moral philosophers. Whatever the writings of those philosophers are, it's hard for me to imagine them referring to their ideals and their society with a word that is a synonym for "wrong", unless you're doing something like A Practical Guide to Evil where alignments are explicitly just teams.

Which is not necessarily to disagree with you that most people will, on balance, do more [Good] stuff than [Evil] stuff. That's almost certainly true. But I don't think it really has anything to do with what Detect Evil has to say on the issue. In the real world, where there is no Detect Evil, most people's moral intuitions agree about most stuff. The existence of a ritual that agrees with one side or the other on a particular ethical dilemma might move opinions on the margin, but it's not going to make the dilemma go away.

Saint-Just
2021-03-02, 10:13 PM
So the basic grounding is that there are, in fact, consistent standards of Good and Evil - their relative moral value is irrelevant for the first step of this; what matters is just that there are literal, concrete, objective good and evil.

Great, do some industrial-scale (and definitely not ethical) science. Set up a Nameless One scenario: he reincarnates as a perfect true neutral so long as his memory is completely gone upon revival.

Next up, baseline a system for metrics. Pick, say, 10 things you think are unambiguously good (I recommend at least one be giving to charity - which might end up being proportional to your own wealth, but hey, lots of tests) and 10 unanimously evil ones. Run a battery where the subject performs these actions until they get "Detected" as good or evil. Solve the multivariable equations to create standard units of good and standard units of evil. You have to kill your Nameless One a lot for appropriate sample sizes of actions that eventually flip you to good or evil, but hey, science.

If you have SGUs and SEUs (and if you can get them close enough, you can create an SMU - standard morality unit) you can now catalog the effects of just about anything on the supposed Good and Evil. What is the charitable-giving equivalent of killing an orphan? You can test that! Sure, there will be follow-on research and people will refine the equations in the same way Econ grew from a simple algebra problem into John Nash using seven chalkboards at once, but the point is that it can now be measured for any experiment you want.

I agree with NigelWalmsley about more than one moral theory existing; more importantly, I think there is some misunderstanding of the premise:

a) The idea was to know whether Detect G detects Good rather than Strange. Or, to put it another way, how much Good correlates with good. Moral value is not irrelevant, because we don't know that there are objective good and evil - only something which is objective and is called Good (likewise for Evil).

b) More importantly, unless you already have enough power to break the setting, setting up "industrial scale" alignment science may prove difficult. And alignment is supposed to be objective in most D&D settings. So you could try to calculate SMUs anywhere - what additional information does it offer in the "higher powers and afterlives are unreachable" scenario that it doesn't offer in the standard Planescape-or-equivalent?

Quertus
2021-03-02, 11:05 PM
And folks are going to be interested in these things, because creatures with the [Evil] descriptor are likely to feast on the entrails of your family, while things with the [Good] descriptor are likely to not do that.

Things with the [Good] descriptor would let the entrails of my family go to waste? Those evil ****s! Kill them! Kill anyone who detects as [Good]!

Cultural values, not objective good and evil, afaict.

Which means that I can certainly see cultures springing up around what Detect [Strange] has to say, if it's close enough to recognized "good" behavior.

And it'd be an absolutely *great* way for a Demon/Devil to guarantee plenty of fallen souls, simply by including a sin or two in the pile of things mistakenly believed to be "good".

NigelWalmsley
2021-03-02, 11:19 PM
The idea was to know whether Detect G detects Good rather than Strange. Or, to put it another way, how much Good correlates with good. Moral value is not irrelevant, because we don't know that there are objective good and evil - only something which is objective and is called Good (likewise for Evil).

Probably about as well as any two moral systems correlate. Which is pretty well, for practical purposes. For all that people have heated debates about the merits of Deontology or Consequentialism or Virtue Ethics or any number of other ideas about what it means to be "Good" (to say nothing of specific examples of those philosophies), people agree about most stuff. A Utilitarian and a Kantian will both tell you that it's wrong to kill someone for cutting you off in traffic, or steal someone's watch because you think it looks pretty, or cheat on your spouse. They may have different justifications for why those things are wrong (and, if you go far enough afield, you can find theories that reach familiar conclusions for reasons that are bizarre or horrifying), but they all agree that they are wrong. Similarly, while the exact list of things that Detect Evil pings in any particular edition is different from the list of things you or I would consider "Evil" (just as the list of things I would consider "Evil" is probably not exactly the same as the list you would), it probably correlates pretty well if you look at the issues that people agree on in the real world.

Segev
2021-03-03, 02:29 AM
In the real world, without going into religion, just examining semantics and language and concepts: how do we know what things carry moral weight? Ignore complex and nuanced scenarios, as their importance is in digging into detail. In broad strokes, how do we have any agreement at all that "Neutral Good" in D&D has any meaning at all?

I think asking the question about how you would know that a detected objective alignment system represents something is begging the question. You would know it represents morality and ethics (using D&D's grid) for the same reason that you recognize when issues deal with morality and ethics in any other setting or discussion.

All that changes is that you know that you can objectively judge that someone or something adheres to a particular alignment, and by how much. You may disagree wildly about whether the alignment they belong to is desirable or healthy or something to admire.

In other discussions of this topic, the question has been raised as to how you know an objective alignment is morally correct. This is the wrong question to ask. Alignments are not morally correct: whether something is morally correct or not is a matter of whether it aligns with the alignment to which one aspires.

With objective morality, a drow matron mother does not dispute that she is evil. She does not reject the concept of "good" and "evil," nor claim she's good because of justifications she comes up with. She openly proclaims herself to be evil, much as any noble paladin announces his goodness if the topic arises. She thinks being evil is morally correct, because she aspires to it. She finds it desirable and superior to being neutral or good.

Tanarii
2021-03-03, 05:45 AM
In the real world, without going into religion, just examining semantics and language and concepts: how do we know what things carry moral weight? Ignore complex and nuanced scenarios, as their importance is in digging into detail. In broad strokes, how do we have any agreement at all that "Neutral Good" in D&D has any meaning at all?
The first question is irrelevant, or rather non-answerable. The answer is: banana. Or if you prefer: mu.

The answer to the second question is that it is defined by the game rules, but in a way that is subjective from person to person, and any agreement must be at the table level. On a forum, it's the kind of question there will never be agreement on.


I think asking the question about how you would know that a detected objective alignment system represents something is begging the question. You would know it represents morality and ethics (using D&D's grid) for the same reason that you recognize when issues deal with morality and ethics in any other setting or discussion.
Agreed it is begging the question. But the answer is because the game rules say so, and tell you what it represents.

But that's not the same as in any other setting or discussion. Because those don't tell us.

OldTrees1
2021-03-03, 05:50 AM
In the real world, without going into religion, just examining semantics and language and concepts: how do we know what things carry moral weight? Ignore complex and nuanced scenarios, as their importance is in digging into detail. In broad strokes, how do we have any agreement at all that "Neutral Good" in D&D has any meaning at all?

I think asking the question about how you would know that a detected objective alignment system represents something is begging the question. You would know it represents morality and ethics (using D&D's grid) for the same reason that you recognize when issues deal with morality and ethics in any other setting or discussion.

Since nobody knows IRL, they all rely on belief, opinion, and intuition. You are suggesting the characters in this scenario would do the same and find the Detect spell that most closely adhered to their moral intuitions. That would not generate knowledge (their logic would be based on a fallacy), but it would reach a conclusion.


All that changes is that you know that you can objectively judge that someone or something adheres to a particular alignment, and by how much. You may disagree wildly about whether the alignment they belong to is desirable or healthy or something to admire.

In other discussions of this topic, the question has been raised as to how you know an objective alignment is morally correct. This is the wrong question to ask. Alignments are not morally correct: whether something is morally correct or not is a matter of whether it aligns with the alignment to which one aspires.

With objective morality, a drow matron mother does not dispute that she is evil. She does not reject the concept of "good" and "evil," nor claim she's good because of justifications she comes up with. She openly proclaims herself to be evil, much as any noble paladin announces his goodness if the topic arises. She thinks being evil is morally correct, because she aspires to it. She finds it desirable and superior to being neutral or good.

She thinks being evil is morally correct. That is her belief. However if morality is objective rather than subjective, it does not matter what she prefers or desires. Under objective morality, the truth or falsehood of a moral judgement does not depend on the beliefs or feelings of any person. Objective morality is not about being observable, it is about being independent. If there is objective morality in D&D, the truth or falsehood of moral claims does not depend on the alignment of the listener. If for example, we consider the claim "Torture is morally permissible", it does not matter if the listener is that drow matron or a saint, the claim is either true regardless of the listener or false regardless of the listener. This does not mean that everyone believes the same thing, it means their beliefs are about something that has a single truth rather than a subjective truth. Their beliefs can be wrong. She believes being evil is morally correct, and she is mistaken*.

https://en.wikipedia.org/wiki/Moral_universalism

*Unless she is correct and evil is morally correct for everyone.

Tanarii
2021-03-03, 05:57 AM
She believes being evil is morally correct, and she is mistaken*.

https://en.wikipedia.org/wiki/Moral_universalism

*Unless she is correct and evil is morally correct for everyone.
But she can believe that being evil is a superior way to live, and not give a damn about it being morally correct.

Btw I intentionally changed to "superior way to live" from what I was originally going to use: morally superior. Because I'm not sure that's the same thing as just "superior".

OldTrees1
2021-03-03, 06:28 AM
But she can believe that being evil is a superior way to live, and not give a damn about it being morally correct.

Btw I intentionally changed to "superior way to live" from what I was originally going to use: morally superior. Because I'm not sure that's the same thing as just "superior".

I am not sure of the distinction. Her belief that evil is a superior way to live is a moral judgement. She believes she ought to be evil.*
(This is partially due to how moral was defined by its function rather than its contents. If there is a correct/better/superior answer, then that is the right answer.)

* Unless
1) She believes the world is amoral and there is no morally correct, so she goes to amoral judgements about x being better to achieve y.
2) She is engaging in doublethink

NigelWalmsley
2021-03-03, 06:50 AM
I am not sure of the distinction. Her belief that evil is a superior way to live is a moral judgement. She believes she ought to be evil.*

I think if you have to build sentences like this, your definitions are failing you. No one is going to say that "we ought to be Evil", unless "Evil" just means "the team the Drow are on". Once you accept that "Evil" is something that entire societies actually are, the idea of calling it "Evil" stops really making sense. In the real world, even the groups that pretty much everyone thought were the bad guys didn't call themselves "Evil".

Tanarii
2021-03-03, 06:55 AM
I am not sure of the distinction. Her belief that evil is a superior way to live is a moral judgement. She believes she ought to be evil.*
If there is no distinction between superior way to live and morally correct, then Segev is correct, and she is not automatically wrong in her belief that being evil is morally correct. Because that means that "morally correct" is now divorced from the Good alignments.

Which actually makes sense, since afaik none of them are defined as "morally correct" anyway. So we need to find out what Segev was trying to say when he used the term first. On reflection it reads like "correct way to live" to me, as opposed to my initial conclusion based on your reply, in which you seem to be assuming it means "acting like one of the good alignments".

Quertus
2021-03-03, 07:00 AM
I think that the discussion of the Drow matron is best worded as, "she believes that [Charm] is the best way to live".

Because, even if you can map these various fields to meanings (those who eat entrails vs those who do not), you are *still* left with the questions, "so, is this a *moral* judgement?", and "if so, which end is 'right'?"

The Drow matron would be no more wrong in declaring [Charm] to be the morally superior position than anyone IRL holding an opposed PoV on the morality of an action.

OldTrees1
2021-03-03, 08:24 AM
I think if you have to build sentences like this, your definitions are failing you. No one is going to say that "we ought to be Evil", unless "Evil" just means "the team the Drow are on". Once you accept that "Evil" is something that entire societies actually are, the idea of calling it "Evil" stops really making sense. In the real world, even the groups that pretty much everyone thought were the bad guys didn't call themselves "Evil".


If there is no distinction between superior way to live and morally correct, then Segev is correct, and she is not automatically wrong in her belief that being evil is morally correct. Because that means that "morally correct" is now divorced from the Good alignments.

Which actually makes sense, since afaik none of them are defined as "morally correct" anyway. So we need to find out what Segev was trying to say when he used the term first. On reflection it reads like "correct way to live" to me, as opposed to my initial conclusion based on your reply, in which you seem to be assuming it means "acting like one of the good alignments".


I think that the discussion of the Drow matron is best worded as, "she believes that [Charm] is the best way to live".

Because, even if you can map these various fields to meanings (those who eat entrails vs those who do not), you are *still* left with the questions, "so, is this a *moral* judgement?", and "if so, which end is 'right'?"

The Drow matron would be no more wrong in declaring [Charm] to be the morally superior position than anyone IRL holding an opposed PoV on the morality of an action.

Quertus is right that, in this thread's thought experiment and to satisfy NigelWalmsley's point about characters using loaded terms, I should reword it.

Tanarii is right that it is an assumption to assume alignment and morality coincide. I will call it out explicitly as P6. Without that premise there is a softer conclusion.

Suppose:
P1: This isolated population has the spells Detect [Charm]/[Strange] that detect alignments.
P2: Those alignments are objective. It does not matter who casts the detect spell or if nobody casts the detect spell, things will either have one of those labels or not. There is no quantum position or relativity based on observer.

Based on these premises Segev (and others before them) demonstrates how the isolated population could derive definitions for [Charm] and [Strange].

P3: [Charm] and [Strange] are opposites.
P4: There is an Objective Morality
P5: [Charm] and [Strange] are morally relevant

Based on these additional premises if one makes a claim that "[Charm] is the right/best/superior/moral way to live" then that is a moral statement. Furthermore Objective Morality states that moral statements are either true or false regardless of speaker or listener.

P6: The GM defined the Objective Morality of the campaign and the campaign's definitions of the alignments to have [Charm] and moral coincide. This may be a controversial premise.

To paraphrase, the GM defined moral and [Charm] such that the statement "[Charm] is the right/best/superior/moral way to live" is true. Therefore the statement "[Strange] is the right/best/superior/moral way to live" would be false (as per P3-P6).

P7: Characters know or can at least empirically test P1-3 but rely on beliefs/opinion/intuition for P4-P6. I expect the second half of this to also be a controversial premise.

This means some characters might disbelieve P4-6 or believe variations thereof. A common example would be a character with an alternate P6 believing "[Strange] is the right/best/superior/moral way to live". In this case that belief would be incorrect, but characters can believe things that are false.


----

So my overall conclusion is:
Eventually this isolated population will arrive at a consensus around accurate alignment definitions. They have the tools, even if they end up with new names. However, you would still get diverse and mutually contradictory opinions on what is the best way to live, because those opinions had to rely on belief/opinion/intuition. E.g. one might say [Charm] is best and another might say [Strange] is best. Only the players/GM with metagame knowledge would know whether the campaign was created such that one, the other, or neither is correct.
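
A toy model of that conclusion (all names, tags, and the GM's choice below are placeholders): detection is objective and shared (P1-P3), while which tag coincides with moral is fixed at the GM level (P6) and not empirically accessible to characters (P7).

GM_MORAL_TAG = "Charm"  # P6: the GM decreed that [Charm] coincides with moral

def detect(creature, tag):
    # P1/P2: objective detection; same answer no matter who casts.
    return tag in creature["tags"]

matron = {"tags": {"Strange"}}
paladin = {"tags": {"Charm"}}

# P3: the tags are opposites; nothing pings as both.
assert not (detect(matron, "Charm") and detect(matron, "Strange"))

# P7: characters' moral beliefs are not derivable from detection alone.
beliefs = {"matron": "Strange", "paladin": "Charm"}  # each claims theirs is moral

for name, tag in beliefs.items():
    verdict = "right" if tag == GM_MORAL_TAG else "wrong"  # metagame knowledge
    print(f"{name} believes [{tag}] is moral: {verdict}")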

Quertus
2021-03-03, 09:07 AM
Quertus is right that, in this thread's thought experiment and to satisfy NigelWalmsley's point about characters using loaded terms, I should reword it.

Tanarii is right that it is an assumption to assume alignment and morality coincide. I will call it out explicitly as P6. Without that premise there is a softer conclusion.

Suppose:
P1: This isolated population has the spells Detect [Charm]/[Strange] that detect alignments.
P2: Those alignments are objective. It does not matter who casts the detect spell or if nobody casts the detect spell, things will either have one of those labels or not. There is no quantum position or relativity based on observer.

Based on these premises Segev (and others before them) demonstrates how the isolated population could derive definitions for [Charm] and [Strange].

P3: [Charm] and [Strange] are opposites.
P4: There is an Objective Morality
P5: [Charm] and [Strange] are morally relevant

Based on these additional premises if one makes a claim that "[Charm] is the right/best/superior/moral way to live" then that is a moral statement. Furthermore Objective Morality states that moral statements are either true or false regardless of speaker or listener.

P6: The GM defined the Objective Morality of the campaign and the campaign's definitions of the alignments to have [Charm] and moral coincide. This may be a controversial premise.

To paraphrase, the GM defined moral and [Charm] such that the statement "[Charm] is the right/best/superior/moral way to live" is true. Therefore the statement "[Strange] is the right/best/superior/moral way to live" would be false (as per P3-P6).

P7: Characters know or can at least empirically test P1-3 but rely on beliefs/opinion/intuition for P4-P6. I expect the second half of this to also be a controversial premise.

This means some characters might disbelieve P4-6 or believe variations thereof. A common example would be a character with an alternate P6 believing "[Strange] is the right/best/superior/moral way to live". In this case that belief would be incorrect, but characters can believe things that are false.


----

So my overall conclusion is:
Eventually this isolated population will arrive at a consensus around accurate alignment definitions. They have the tools, even if they end up with new names. However, you would still get diverse and mutually contradictory opinions on what is the best way to live, because those opinions had to rely on belief/opinion/intuition. E.g. one might say [Charm] is best and another might say [Strange] is best. Only the players/GM with metagame knowledge would know whether the campaign was created such that one, the other, or neither is correct.

[Charm] - murder, killing babies, arson.
[Strange] - rape, theft, counterfeiting

Performing [Charm] acts can reduce/remove your [Strange] rating, and vice versa.

They are opposite. They are tied to things with moral weight.

Which one is "good"?

Tanarii
2021-03-03, 09:58 AM
Tanarii is right that it is an assumption to assume alignment and morality coincide. I will call it out explicitly as P6. Without that premise there is a softer conclusion.
That was not my contention. Alignment defines morality, in broad strokes, in any given game. It just doesn't define which is morally correct.

Your assumption appeared to be that morally good was equivalent to morally correct, and morally evil was equivalent to morally incorrect. Or possibly morally wrong. Although that has slightly different connotations, as does morally right.

Whereas they're far more commonly defined as some variation of good = morality that results in nice/non-destructive/helpful-to-others behavior, and evil = morality that results in nasty/destructive/harmful-to-others behavior. With no contention that one or the other is correct, or superior, etc., morally or otherwise.

In other words this:

P6: The GM defined the Objective Morality of the campaign and the campaign's definitions of the alignments to have [Charm] and moral coincide. This may be a controversial premise.
... is not automatically followed by this:

To paraphrase, the GM defined moral and [Charm] such that the statement "[Charm] is the right/best/superior/moral way to live" is true. Therefore the statement "[Strange] is the right/best/superior/moral way to live" would be false (as per P3-P6).
That second part is a house rule in D&D (and Palladium) alignment morality systems, as far as I'm aware. P6 is true for [Good] and [Evil]; they both coincide with morality. But the second part (which I'll call P6a) is a separate corollary that does not follow automatically from P6.

Edit: looking at the way you worded P6, with one label being "and moral coincide", it's almost as if you think moral = morally correct = the right/best/superior way to live.

OldTrees1
2021-03-03, 12:51 PM
[Charm] - murder, killing babies, arson.
[Strange] - rape, theft, counterfeiting

Performing [Charm] acts can reduce/remove your [Strange] rating, and vice versa.

They are opposite. They are tied to things with moral weight.

Which one is "good"?

Are [Strange] and [Charm] morally relevant, or are they orthogonal?
A) If the examples have moral weight but the pair of opposite tags don't, then exit the logic at the missing premise.
B) If the examples have moral weight and the pair of opposite tags do too, then is this a case like Virtue Ethics where we have a moral center bordered by immoral extremes? I did not account for that in the logic. Good point.


That was not my contention. Alignment defines morality, in broad strokes, in any given game. It just doesn't define which is morally correct.

Your assumption appeared to be that morally good was equivalent to morally correct, and morally evil was equivalent to morally incorrect. Or possibly morally wrong. Although that has slightly different connotations, as does morally right.

When I use the word "moral" or its derivatives, I am using the IRL words. When I use words like "Good" / "Evil" in the context of D&D, I try to only do so in reference to the alignments of the same name. That is why I avoid phrases like "morally good", which has the same IRL definition as "moral", "morally correct", "morally right", etc., but could be confused with the Good alignment.

I do not see any difference in connotations.


Whereas they're far more commonly defined as some variation of good = morality that results in nice/non-destructive/helpful-to-others behavior, and evil = morality that results in nasty/destructive/harmful-to-others behavior. With no contention that one or the other is correct, or superior, etc., morally or otherwise.

Morality could be referring to an objective moral standard, or to the collection of beliefs a person / society has about morality. So instead of using the word morality I will use the phrase "objective morality" for the former and "moral theory" for the latter.

It is indeed common to have in game definitions of alignments named [Good] be based in a moral theory around nice/non-destructive/helpful to others behavior.

You could run it with no contention that one or the other is correct/superior morally. In that case they might not be morally relevant. This happens in a few situations, although it is more common for GMs to follow the naming convention and have [Good] be morally correct (even if they have to replace the definitions with ones that fit the group's moral intuitions).


In other words this:
... is not automatically followed by this:
That second part is a house rule in D&D (and Palladium) alignment morality systems, as far as I'm aware. P6 is true for [Good] and [Evil], they both coincide with morality. But the second part (which I'll call P6a) is a separate corollary that does not follow automatically from P6.

Edit: looking at the way you worded P6, with one label being "and moral coincide", it's almost as if you think moral = morally correct = the right/best/superior way to live.

However, P6 is meant to talk about when the group/GM chooses to have the definition of [Good] describe the things that the definition of the objective morality for that campaign would judge as moral/morally correct/morally right/etc. You are right to point out that the GM ruling as such can be a house rule (which doesn't invalidate it as a premise).

As for coincide, yes I mean if the GM decides to have the set of things that are [Good] and the set of things that are morally correct be the same set, then they are the same set. They are basically saying "these things give you the alignment [Good] and these things are morally correct". That is a house rule, but a common one.


Edit: looking at the way you worded P6, with one label being "and moral coincide", it's almost as if you think moral = morally correct = the right/best/superior way to live.

Yes. Once upon a time some fool thought of a question: "What ought one do?". This question was big and important, so they gave a name to the answer before they were done solving the question. They named the answer "moral". Later people have kept the naming convention but disagree about the answer. So there is this common concept of moral as being correct in and of itself, but no consensus on what things are moral.

Okay, I don't know if that precise story happened. Nor do I know that they were a fool. However, that is the best way for me to highlight what moral is in this context. It is a label for the right answer while people continue to search for the right answer.

LibraryOgre
2021-03-03, 01:18 PM
Are you sure? The Codex Alera has an elemental magic system where freezing stuff is Fire magic (because it's heat transfer, and moving heat around is Fire). Avatar has an elemental magic system where freezing stuff is Water magic (because ice is made of Water). Which is to say that these categories are not actually very well defined. Even when you get into morality, there's a great deal of variation between settings. In D&D, casting Animate Dead on a T-Rex is Evil. In The Dresden Files, it's what the hero does to save the day.

None of which are D&D. Avatar doesn't have an objective morality system. I mean, freezing things is also Fire Magic in Ars Magica, but that's not [Fire], that is Perdo Ignem. But those spells don't have the [Fire] or [Cold] descriptor, because those descriptors are from D&D (and, specifically, 3e and later D&D). The [Cold] descriptor indicates that something is cold. The [Fire] descriptor indicates that something uses fire, not just heat transfer or the removal of heat.



Third, it's not only the [Good] creatures that are willing to help you and your community. If the local Vampire is willing to protect your village in exchange for a yearly blood tithe, is that really something you're going to refuse because he pings [Evil]? Remember, this is a world that is full of Manticores and Displacer Beasts that, while not [Evil], do think you taste delicious and are not going to be dissuaded by the fact that you do [Good] stuff in your spare time.

Evil creatures can be motivated by various forms of self-interest.


Fourth, there are supposedly cultures that are [Evil]. Orcish culture is, apparently, Chaotic Evil. Doubtless, Orcs have produced some moral philosophers. Whatever the writings of those philosophers are, it's hard for me to imagine them referring to their ideals and their society with a word that is a synonym for "wrong", unless you're doing something like A Practical Guide to Evil where alignments are explicitly just teams.

They are evil, not [Evil].

In this thought experiment, alignments are objective. Your argument seems to be that they are secretly subjective... that cultural perception changes the meaning of Good and Evil, and even [Good] and [Evil].

Segev
2021-03-03, 02:23 PM
She thinks being evil is morally correct. That is her belief. However if morality is objective rather than subjective, it does not matter what she prefers or desires. Under objective morality, the truth or falsehood of a moral judgement does not depend on the beliefs or feelings of any person. Objective morality is not about being observable, it is about being independent. If there is objective morality in D&D, the truth or falsehood of moral claims does not depend on the alignment of the listener. If for example, we consider the claim "Torture is morally permissible", it does not matter if the listener is that drow matron or a saint, the claim is either true regardless of the listener or false regardless of the listener. This does not mean that everyone believes the same thing, it means their beliefs are about something that has a single truth rather than a subjective truth. Their beliefs can be wrong. She believes being evil is morally correct, and she is mistaken*.

https://en.wikipedia.org/wiki/Moral_universalism

*Unless she is correct and evil is morally correct for everyone.

I think the trouble here is that you're assuming "morally correct" means "adhering to being good."

That is a subjective judgment, however, because it asserts that "good is the best alignment to be." We believe that, IRL. But note that even in a subjective morality world, where one culture thinks wearing white after labor day is evil and another thinks wearing anything but white ever is evil, a member of the first culture could still think that "being evil is awesome" and deliberately wear white after labor day. He might even get into arguments over whether he counts as "evil" or not with members of the other culture, since he wears white all the time because he thinks it's the most evil fashion choice he can make, and therefore the most awesome fashion choice he can make. He might even be offended when the second culture tells him he's not evil, but in fact is doing the good and upright thing. He thinks that's lame, and he is definitely not lame.

"Objective morality" doesn't mean you agree on what's desirable. It just means you agree that something is good or evil. There is no disagreement between people who live in an objective morality setting and who are factually correct in their assessments over whether wearing white after labor day is evil or not. There is an objective truth over the evilness of that fashion choice. That doesn't stop Shiro Edgelord from wearing white all the time specifically because he thinks being evil is cool. He agrees it's evil. That's why he likes it.

The drow matron mother in D&D agrees that her behavior is chaotic and evil. For argument's sake, we'll assert that she is objectively correct, and so when the LG dwarven paladin condemns her for her wickedness and duplicity, she proudly agrees, and then condemns him for his weak-minded clinging to rules that only serve to enslave him to pathetic losers who are better off sacrificed for power.

She doesn't see a need to justify that what she does is "good." She believes evil is the morally correct alignment to pursue. And, given her preferences and goals, she is objectively correct that it is the best one for her.

She's evil, and proud of it. Those who oppose evil because it causes harm do so because their goals and desires are different, and objectively incompatible with hers.

Tanarii
2021-03-03, 03:00 PM
When I use the word "moral" or its derivatives, I am using the IRL words. When I use words like "Good" / "Evil" in the context of D&D, I try to only do so in reference to the alignments of the same name. That is why I avoid phrases like "morally good", which has the same IRL definition as "moral", "morally correct", "morally right", etc., but could be confused with the Good alignment.


Yes. Once upon a time some fool thought of a question: "What ought one do?". This question was big and important, so they gave a name to the answer before they were done solving the question. They named the answer "moral". Later people have kept the naming convention but disagree about the answer. So there is this common concept of moral as being correct in and of itself, but no consensus on what things are moral.

Okay, I don't know if that precise story happened. Nor do I know that they were a fool. However, that is the best way for me to highlight what moral is in this context. It is a label for the right answer while people continue to search for the right answer.
Ah. That explains it then. I ascribe no particular value or meaning to the IRL word moral. Nor am I interested in, or see any value in, IRL moral theory.

So for me, it definitely seems like a huge logical leap to "Good" = morally correct and "Evil" = morally incorrect, and in fact the terms morally correct and morally incorrect should really just be left out of the equation.

There's no particular barrier in objective D&D morality, given the definitions of the various Good Alignments and Evil Alignments to date and assuming that those constitute "objective D&D morality", to an evil individual believing that their way is the superior and best way to live. They are not automatically wrong, cosmologically.

That is basically the core conflicting belief at the heart of Planescape, except it's not just good vs evil.

Segev
2021-03-03, 03:16 PM
So for me, it definitely seems like a huge logical leap to "Good" = morally correct and "Evil" = morally incorrect, and in fact the terms morally correct and morally incorrect should really just be left out of the equation.

I mean, it's a question of your goals. There are things that being good promotes. If those things are part of your goals, then you should seek to be good. In an objective morality system, you would be morally correct by trying to adhere to the alignment that aids your goals.

It gets tricky to discuss this within the forum's rules when we get into what, precisely, works in the real world and why "real world moral good" of any stripe is or is not aligning with an individual's or society's goals. In the fictional setting of D&D, where fantastic things happen and make up the stories it's designed to play, it is perfectly plausible to set up the laws of reality such that one's objectively evil behaviors are precisely in line with getting you to your goals, provided your goals are of a particular sort.

In an objective alignment system, you are "morally correct" if you are adhering to the alignment you wish to be. This is loosely defined to cover both aspirations towards specific alignments, and correctly identifying the alignment that will be most conducive towards your goals.

OldTrees1
2021-03-03, 03:37 PM
I think the trouble here is that you're assuming "morally correct" means "adhering to being good."
I am assuming "morally correct" means "what one ought to do". I thought you had been assuming good was moral (so I had been adopting that premise).


"Objective morality" doesn't mean you agree on what's desirable. It just means you agree that something is good or evil. There is no disagreement between people who live in an objective morality setting and who are factually correct in their assessments over whether wearing white after labor day is evil or not. There is an objective truth over the evilness of that fashion choice. That doesn't stop Shiro Edgelord from wearing white all the time specifically because he thinks being evil is cool. He agrees it's evil. That's why he likes it.

Sorry that I am being a bit stubborn, but Objective Morality is a Term of Art in the branch of Ethics and it is one of the few terms I will insist on using correctly. Objective Morality states that moral statements are either true or false. https://en.wikipedia.org/wiki/Moral_universalism It has nothing to do with people agreeing/disagreeing. It is only about moral statements having exactly 1 truth value.

Does it say everyone agrees whether a particular moral statement is true or false? No.
Does it say everyone agrees that the statement "All theft is immoral." is True? No.
Does it say everyone agrees that the statement "All theft is immoral." is False? No.
What does it say? It says the statement "All theft is immoral." has exactly 1 truth value.
It says if John claims "The statement 'All theft is immoral.' is true." and Jane claims "The statement 'All theft is immoral.' is false." then exactly 1 of them is correct. They can disagree, and did disagree, but only 1 of those meta-statements will be true. The other will be false.

So technically yes: of all the people who make claims about whether the statement "It is immoral to wear white after Labor Day" is true or false, everyone who is correct chose the same answer - either all true or all false, depending on whether the statement actually is true or false. However, people can be incorrect too. Objective Morality is not claiming there is a consensus, it is claiming there is a correct answer to statements about morality.
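If it helps, here is a minimal sketch of that claim in Python (the statement and its truth value are invented purely for illustration):

MORAL_FACTS = {"All theft is immoral.": False}  # assumed truth value, for illustration only

def truth_value(statement):
    # Objective Morality's claim: the statement has exactly 1 truth value,
    # the same no matter who asks or what they believe.
    return MORAL_FACTS[statement]

john_says_true = True    # John claims the statement is true
jane_says_true = False   # Jane claims the statement is false
actual = truth_value("All theft is immoral.")
assert (john_says_true == actual) != (jane_says_true == actual)
# They disagree, and exactly 1 of them is correct: no consensus,
# but still a single right answer.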

Now, statements about morality also tend to dwell on correctness because moral / immoral are labels for correct / incorrect.

So Shiro Edgelord might agree that wearing white is [Strange]*; they might also believe it is moral or amoral because they think [Strange] is cool. But they would not think it is both moral/right/correct and immoral/wrong/incorrect.
*Using the [Strange] alignment to avoid conflating [Good] with moral.

Now in D&D, for the exact same reason why we don't have a consensus about morality IRL, characters in the game could consider [Strange] to be moral. They could even believe that if they called [Strange] "Evil". They could even do it if the GM called [Strange] evil. They could even do it if the GM said [Strange] was immoral.


The drow matron mother in D&D agrees that her behavior is chaotic and evil. For argument's sake, we'll assert that she is objectively correct, and so when the LG dwarven paladin condemns her for her wickedness and duplicity, she proudly agrees, and then condemns him for his weak-minded clinging to rules that only serve to enslave him to pathetic losers who are better off sacrificed for power.

She doesn't see a need to justify that what she does is "good." She believes evil is the morally correct alignment to pursue.

Agreed.


And, given her preferences and goals, she is objectively correct that it is the best one for her.

For this to be true, the alignments would have to be orthogonal to, rather than coincide with, moral/immoral. That is a bit unusual, but a possible cosmology. Normally alignments like Good and Evil would coincide with moral/immoral.


"Objective morality" doesn't mean you agree on what's desirable. It just means you agree that something is good or evil.

So we are back up here again. I already touched on why objective morality does not imply consensus. However if you are divorcing good/evil from morality, then why would Objective morality have anything to do with them?


Ah. That explains it then. I ascribe no particular value or meaning to the IRL word moral. Nor am I interested in, nor do I see any value in, IRL moral theory.

Ah. When I talk about a topic I use the words from that topic. If I were talking about math I would use the IRL terms for it. I would talk about Binomial Coefficients rather than Grabok's Numerals (fictional example). I would make it clear that by Binomial Coefficients I meant the term as defined IRL rather than the Chultian Lizard (fictional example) with the same name. However, when I did need to switch between game terms and IRL terms, I would try to make that clear, to avoid overloading a word.

That said if you are not interested in the topic of Objective Morality due to having no interest at this time to delve into that branch of philosophy, then that is good too.

There is plenty of amoral alignment discussion for this thought experiment.




So for me, it definitely seems like a huge logical leap to get to "Good" = morally correct and "Evil" = morally incorrect, and in fact the terms morally correct or morally incorrect should really just be left out of the equation.

That is basically the core conflicting belief at the heart of Planescape, except it's not just good vs evil.

I think Planescape works best if the players/GM don't know what is moral/immoral for that universe. It can work either way but it seems better if the audience is not biased.

Segev
2021-03-03, 05:43 PM
All "moral" means, by your own statement, OldTrees, is "what one ought to do." I assume - and correct me if I am wrong - that there is a hidden component that prevents conflating "you ought to maintain your car so that it doesn't break down" with more traditional conceptions of morality.

Now, most people will assume that you ought to do things to align yourself with "good." Perhaps particularly when alignments are subjective, and you can decide that your ideals are "good" because you say so for yourself.

But for objective morality, this would be a mistake. If your ideals align with a non-good alignment, then you ought to live by that alignment.

Or, put another way, moral behavior brings you in alignment with the moral state you seek to achieve, and immoral behavior drives you away. An evil person sees evil as moral. Not because he sees some subjective definition of morality or evil, but because morality is goal-state defined.

In analogy, if you and I sit across the table from each other, and I say that the door is to my left and, indeed, gesture in that direction, you would still see the door as being to your right.

If I am evil and you are good, I will see moral behavior as that which you find immoral. And if we are both honest and correct, we would even agree that both of us are right.

This stems from using "moral" to mean "what you ought to do." The only reason there seems to be contradiction is because of attempts to have it both mean "what you ought to do" and "good."

Mechalich
2021-03-03, 05:46 PM
So technically yes: of all the people who make claims about whether the statement "It is immoral to wear white after Labor Day" is true or false, everyone who is correct chose the same answer - either all true or all false, depending on whether the statement actually is true or false. However, people can be incorrect too. Objective Morality is not claiming there is a consensus, it is claiming there is a correct answer to statements about morality.

Right, and this implies that if a fictional setting internally postulates an objective moral reality, then while there may be multiple moral theories held in-universe, one of them is correct and all the others are wrong.

Generally I would say that fantasy comes at this as a religious, rather than philosophical, question. Specifically, most fantasy settings aren't 'all myths are true', they're 'this specific myth is true', and that specific myth usually includes a deity/deities who determine the moral system that governs the setting.

As a result, the question of what alignments represent, in a particular universe, is what the gods say they represent, and questioning the gods is pointless because the gods make the decisions and there's nothing a mortal living in such a setting can do about it.

I think a lot of players find this weird, in part because the kind of deity-derived moral certainty implied works better for a monotheistic system with a distant and presumably all-powerful creator entity rather than a polytheistic system where a bunch of gods squabble amongst themselves. This is why various writers working in FR felt obligated to create a level above the gods in the form of Lord Ao, because the idea of FR deities, as presented, mediating a moral system at all was laughable. This is also probably why a huge amount of modern fantasy has retreated from polytheistic systems and back toward distant monotheistic non-interventionist creator deities. One god = one truth is simply a much simpler equation for addressing ethical questions when worldbuilding.

gloryblaze
2021-03-03, 05:59 PM
All "moral" means, by your own statement, OldTrees, is "what one ought to do." I assume - and correct me if I am wrong - that there is a hidden component that prevents conflating "you ought to maintain your car so that it doesn't break down" with more traditional conceptions of morality.

Now, most people will assume that you ought to do things to align yourself with "good." Perhaps particularly when alignments are subjective, and you can decide that your ideals are "good" because you say so for yourself.

But for objective morality, this would be a mistake. If your ideals align with a non-good alignment, then you ought to live by that alignment.

Or, put another way, moral behavior brings you in alignment with the moral state you seek to achieve, and immoral behavior drives you away. An evil person sees evil as moral. Not because he sees some subjective definition of morality or evil, but because morality is goal-state defined.

In analogy, if you and I sit across the table from each other, and I say that the door is to my left and, indeed, gesture in that direction, you would still see the door as being to your right.

If I am evil and you are good, I will see moral behavior as that which you find immoral. And if we are both honest and correct, we would even agree that both of us are right.

This stems from using "moral" to mean "what you ought to do." The only reason there seems to be contradiction is because of attempts to have it both mean "what you ought to do" and "good."

I think the issue here is that the ultimate question ("What ought one do?") in the moral theory proposed by OldTrees (and which seems to be a real philosophy, based on his descriptions) is asked in a vacuum, not from the perspective of any person.

You said that the drow priestess's ideals align with the in-universe alignment "Evil". This can be true. You then said she "ought to live by that alignment." And from her perspective, this is true and she will do so.

The issue is that the question "what ought one do?" in OldTrees's moral theory is not asked from her perspective. It is asked in a vacuum, in a white room. In an RPG, it is defined by the GM. So if the GM decides that it is "moral" to follow the in-universe alignment "Good", then the drow priestess is objectively wrong to follow the in-universe alignment "Evil", even though doing so is what she believes she ought to do based on her ideals. Her ideals themselves are objectively wrong under this moral theory. She is mistaken that one ought to be Evil.

Segev
2021-03-03, 06:22 PM
I think the issue here is that the ultimate question ("What ought one do?") in the moral theory proposed by OldTrees (and which seems to be a real philosophy, based on his descriptions) is asked in a vacuum, not from the perspective of any person.

You said that the drow priestess's ideals align with the in-universe alignment "Evil". This can be true. You then said she "ought to live by that alignment." And from her perspective, this is true and she will do so.

The issue is that the question "what ought one do?" in OldTrees's moral theory is not asked from her perspective. It is asked in a vacuum, in a white room. In an RPG, it is defined by the GM. So if the GM decides that it is "moral" to follow the in-universe alignment "Good", then the drow priestess is objectively wrong to follow the in-universe alignment "Evil", even though doing so is what she believes she ought to do based on her ideals. Her ideals themselves are objectively wrong under this moral theory. She is mistaken that one ought to be Evil.

Asked in a vacuum, with no goal stated, there is no answer to "what ought one to do?"

This is true regardless of moral systems, subjective or objective. "What ought I to do?" cannot be answered if you do not already know the answer to the follow up question, "In order to...?"

That is, "What ought I to do?" depends entirely on what it is you're trying to achieve. If you have no answer to that, then there is nothing you ought to do. Alternatively, the answer may be "nothing. You ought to do nothing." Because if there is no goal, no purpose to your question, then anything you do will be in service to something other than your goal, because you have no goal to serve. And thus there is nothing you ought to do.

This is why most people have an underlying "...to be a good person" or "...to be happy" or "...to assuage my conscience" or "...to go to the best afterlife" or any number of other things they don't say when they ask, "What ought I to do?"

In a system with objective morality, if the underlying "to...?" is answered by "to be a good person," everyone will agree - because there is an objective definition of "good" - that he ought to behave in a good-aligned manner. Why he chose "a good person" over "an evil person" is an open question, but likely has to do with his society and seeking to fit in (which sounds Lawful, but really is rather ethically neutral; even Chaotic social creatures can prefer to be comfortable in their society).

The answer to the thread's topic question is that we know what they represent because we know what they are. And what is moral for one alignment is potentially (even probably) immoral for another. Or, put another way, any act with moral weight is moral, and the question is just which alignment it is moral for.

OldTrees1
2021-03-03, 06:24 PM
All "moral" means, by your own statement, OldTrees, is "what one ought to do." I assume - and correct me if I am wrong - that there is a hidden component that prevents conflating "you ought to maintain your car so that it doesn't break down" with more traditional conceptions of morality.

The lack of a qualifier is the "hidden component". Morality is the end unto itself, in contrast to instrumental ends (maintain your car so that it doesn't break down). It is not "What ought one do in order to X?", rather it is just "What ought one do?".


But for objective morality, this would be a mistake. If your ideals align with a non-good alignment, then you ought to live by that alignment.
You keep using the phrase "objective morality" to mean something other than its definition. https://en.wikipedia.org/wiki/Moral_universalism I give up. I have provided links. I have explained it a few times in a few ways. IRL this week is going to be too stressful for me to continue this exercise. So I am bowing out. Thank you for humoring me for this long. While there was a failed communication, it was very respectful discussion.


Right, and this implies that if a fictional setting internally postulates an objective moral reality, then while there may be multiple moral theories held in-universe, one of them is correct and all the others are wrong.

Generally I would say that fantasy comes at this as a religious, rather than philosophical, question. Specifically, most fantasy settings aren't 'all myths are true', they're 'this specific myth is true', and that specific myth usually includes a deity/deities who determine the moral system that governs the setting.

As a result, the question of what alignments represent, in a particular universe, is what the gods say they represent, and questioning the gods is pointless because the gods make the decisions and there's nothing a mortal living in such a setting can do about it.

I think a lot of players find this weird, in part because the kind of deity-derived moral certainty implied works better for a monotheistic system with a distant and presumably all-powerful creator entity rather than a polytheistic system where a bunch of gods squabble amongst themselves. This is why various writers working in FR felt obligated to create a level above the gods in the form of Lord Ao, because the idea of FR deities, as presented, mediating a moral system at all was laughable. This is also probably why a huge amount of modern fantasy has retreated from polytheistic systems and back toward distant monotheistic non-interventionist creator deities. One god = one truth is simply a much simpler equation for addressing ethical questions when worldbuilding.

Sorry, but my gut reaction to Divine Command Theory is Plato's dialogue Euthyphro. If the gods (or powers) dictate the definition of the alignments, then there is a very good argument for the alignments not being morally relevant. I won't go further than reference it here due to forum limitations.

My second reaction is to your mention of the laughable squabbling. I have a headcanon that the shape of the cosmology depends on which alignment created the universe (since aboleths predate this universe). Obviously the Great Wheel was created under Lawful dominance.

Personally when I do use an objective morality I do it independent of the gods. It is just a fact of that reality.

Tanarii
2021-03-03, 06:46 PM
Ah. When I talk about a topic I use the words from that topic. If I were talking about math I would use the IRL terms for it. I would talk about Binomial Coefficients rather than Grabok's Numerals (fictional example). I would make it clear that by Binomial Coefficients I meant the term as defined IRL rather than the Chultian Lizard (fictional example) with the same name. However, when I did need to switch between game terms and IRL terms, I would try to make that clear, to avoid overloading a word.
Understood. I avoid certain kinds of IRL unprovable philosophical hypotheses like the plague, due to having negative opinions about them that will only start fights.

I'm sure there's a term for me in moral theory, if it's mostly complete. :smallamused:


That said if you are not interested in the topic of Objective Morality due to having no interest at this time to delve into that branch of philosophy, then that is good too.
I don't mind talking about it in D&D terms, I just don't think IRL hypotheses and related language have much bearing or relevance to something that is (to me) defined within the game itself. Unfortunately they're brought in immediately, because objective morality is a Term (capital T).


I think Planescape works best if the players/GM don't know what is moral/immoral for that universe. It can work either way but it seems better if the audience is not biased.
The way you're using the terms moral/immoral, I can see why. :)

Segev
2021-03-03, 07:40 PM
The lack of a qualifier is the "hidden component". Morality is the end unto itself, in contrast to instrumental ends (maintain your car so that it doesn't break down). It is not "What ought one do in order to X?", rather it is just "What ought one do?".
You're conflating two things, then, which are incompatible, and that is perforce leading to the paradox. Let me address this part to explain:



You keep using the phrase "objective morality" to mean something other than its definition. https://en.wikipedia.org/wiki/Moral_universalism I give up. I have provided links. I have explained it a few times in a few ways. IRL this week is going to be too stressful for me to continue this exercise. So I am bowing out. Thank you for humoring me for this long. While there was a failed communication, it was very respectful discussion.
By the link you provided...
Moral universalism (also called moral objectivism) is the meta-ethical position that some system of ethics, or a universal ethic, applies universally, that is, for "all similarly situated individuals",[1] regardless of culture, race, sex, religion, nationality, sexual orientation, or any other distinguishing feature.
I have been using this definition. If you believe me not to be, then I require that you demonstrate how what I'm saying doesn't comport with it.

Objective morality tells you what is good, neutral, and evil, morally. You can be morally good, morally neutral, or morally evil. You can take actions that weigh in on that scale and which reflect your position on it, or reflect a change in that position.

When you turn around and ask, "What ought you to do?" you're asking for a judgment of which of those alignments you ought to be pursuing. It is possible for an objective moral system to have a definitive answer to this...assuming you have something you can settle on as a goal.

There is never an answer as to which alignment you ought to pursue. They are not their own end. Even if "I will be GOOD!" is your declared goal, you're pursuing it because you believe it will make you happy. It pleases you, and there's a reason why it pleases you. But if your goal is, "I WILL BE GOOD," then you ought to do what the Good alignment calls for.

OldTrees1
2021-03-03, 08:21 PM
You're conflating two things, then, which are incompatible, and that is perforce leading to the paradox. Let me address this part to explain:

By the link you provided...
I have been using this definition. If you believe me not to be, then I require that you demonstrate how what I'm saying doesn't comport with it.


Objective morality tells you what is good, neutral, and evil, morally. You can be morally good, morally neutral, or morally evil. You can take actions that weigh in on that scale and which reflect your position on it, or reflect a change in that position.

Tells me? No, Objective morality does not imply the moral agent is informed. It does claim there is an answer, and makes a claim about the answer.

Objective Morality claims that moral statements have a single truth value.
"some system of ethics, or a universal ethic, applies universally"
"if we adopt the principle of universality: if an action is right (or wrong) for others, it is right (or wrong) for us."
If I make a moral statement "X is immoral" then that is either true or false, and is true for everyone.


When you turn around and ask, "What ought you to do?" you're asking for a judgment of which of those alignments you ought to be pursuing. It is possible for an objective moral system to have a definitive answer to this...assuming you have something you can settle on as a goal.
?? That is not what the question asks. Moral is the label given to the answer. The contents of the answer, well that is where moral statements and their truth values come in.

Objective morality means moral statements have a single truth value "if we adopt the principle of universality: if an action is right (or wrong) for others, it is right (or wrong) for us."

So Objective Morality claims there is some system of ethics, or a universal ethic, that applies universally. That ethic is the objective moral standard (there are no claims yet about it being known or knowable). That standard answers the question of ethics (what ought one do?) by answering what is moral (because moral is the label given to the answer to the question "what ought one do?").

So, what ought one do? Shrug; call it "moral" for now - we can use that word to reference the answer before we know it. Then discuss if the answer will be universal or subjective. If subjective, discuss how that works. If objective, then return to the question to guess what that unknown universally applying system of ethics is.

But this does need to be my last post on this subthread. Despite being interesting and respectful discussion about one of the few things I am passionate about, it is also very stressful. And this week IRL is going to be a bit much. I apologize.

Jason
2021-03-04, 12:13 PM
Generally I would say that fantasy comes at this from a religious, rather than philosophical question. Specifically most fantasy settings aren't 'all myths are true' they're 'this specific myth is true' and that specific myth usually includes a deity/deities who determine the moral system that governs the setting.

As a result, the question of what alignments represent, in a particular universe, is what the gods say they represent, and questioning the gods is pointless because the gods make the decisions and there's nothing a mortal living in such a setting can do about it.

I think a lot of players find this weird, in part because the kind of deity-derived moral certainty implied works better for a monotheistic system with a distant and presumably all-powerful creator entity rather than a polytheistic system where a bunch of gods squabble amongst themselves. This is why various writers working in FR felt obligated to create a level above the gods in the form of Lord Ao because the idea of FR deities, as presented, mediating an moral system at all was laughable. This is also probably why a huge amount of modern fantasy has retreated from polytheistic systems and back toward distant monotheistic non-interventionist creator deities. One god = one truth is simply a much simpler equation for addressing ethical questions when worldbuilding.
In most published D&D settings it is clearly not the case that the gods determine what is good and what is evil.

In the Forgotten Realms it's implied that Ao is the "over god" who actually gets to determine morality, but it's also implied in some places that Ao himself is subject to some other higher power.

In settings like Dragonlance good and evil are basically built into the universe and unchangeable by the gods, with no higher power visibly enforcing them. It's just the way the universe works. The BECMI immortals set worked like this too - the immortals there didn't create the basic laws of the universe, they are just very powerful beings who still have to work within them.

Eberron has objective morality but the gods may or may not actually exist, so nobody really knows why good and evil work the way they do - people just know that it's a law of the universe that magic can detect and interact with alignment. Eberron also made a point of removing most of the "always evil" or "always good" alignment restrictions for creatures.

Segev
2021-03-04, 12:33 PM
Tells me? No, Objective morality does not imply the moral agent is informed. It does claim there is an answer, and makes a claim about the answer.
Sorry, I was speaking colloquially. "Objective gravity tells you that gravity points the same direction for everyone, and is not like how it works in the Plane of Air in D&D where it depends on which way you decide is down."


Objective Morality claims that moral statements have a single truth value.
"some system of ethics, or a universal ethic, applies universally"Eh... I was going to agree, then you said this:

"if we adopt the principle of universality: if an action is right (or wrong) for others, it is right (or wrong) for us."
If I make a moral statement "X is immoral" then that is either true or false, and is true for everyone.
Can we agree that the law of gravity, and gravity itself, is objective in the real world? That there is an objective answer that all who know the full set of facts about a given problem involving it can agree is true? (And if this still is problematic wording, I apologize. I am not saying "consensus determines reality." This is me trying to articulate that there is an objective truth without circularly using the word "objective" to define "objective." One more attempt at clarity: Can we agree that gravity has certain laws and that there is one correct answer for any given physics problem involving gravity?)

Assuming we agree that gravity is objective IRL, I will point out that this does not mean that "if a direction is down for me, it is down for (all) others." The classic example of an American and an Australian, each in their native lands, would not find through empirical testing that both of their "down" vectors have the same direction. Maybe not even the same magnitude, depending on their distance from the center of the Earth.

Yet, gravity is objective. "Down" is objectively determined for each of them. It is not subjective in the sense that they can change their perspective and decide it's another way. Their position on the gravity-generating sphere determines an objective "down" for them. That position is objective - they are definitely at specific positions. Moreover, we have the objective model to whatever precision we wish to define it that allows us to tell what "down" is at any given point on the sphere, and thus what "down" is for each of them.

"Down" is objectively determined. Gravity is objective.

But that doesn't mean that "down" has the same vector representation from all positions on the gravity-generating sphere.
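To make the analogy concrete, here is a minimal sketch in Python (the coordinates and radius are made up; only the rule matters):

import math

def down(x, y, z):
    # One objective rule for everyone: "down" is the unit vector pointing
    # from your position toward the center of the sphere (at the origin).
    r = math.sqrt(x * x + y * y + z * z)
    return ((0.0 - x) / r, (0.0 - y) / r, (0.0 - z) / r)

print(down(0, 0, 6371))   # the American, near the "top": (0.0, 0.0, -1.0)
print(down(0, 0, -6371))  # the Australian, near the "bottom": (0.0, 0.0, 1.0)

Same objective rule, objectively different "down" vectors depending on where you stand.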



When you turn around and ask, "What ought you to do?" you're asking for a judgment of which of those alignments you ought to be pursuing. It is possible for an objective moral system to have a definitive answer to this...assuming you have something you can settle on as a goal.
?? That is not what the question asks. Moral is the label given to the answer. The contents of the answer, well that is where moral statements and their truth values come in.
So you're defining "moral" to be "one end of the alignment axis." Is that correct?

If you are doing this, then "what is moral?" is no longer the same question as "what ought you to do?" No more than you can answer the question, "Which way is down?" if I do not specify to you the coordinates that you're at and the coordinates of the center of the gravity sphere that generates the gravitic pull you're experiencing. If I show you a globe and ask you to draw an arrow in an entirely separate box that indicates which way is "down" on that globe, you have to start making assumptions about what references you have. Maybe you assume the box's relative position to the globe is what to use, and you point towards the globe. Maybe you assume the box is oriented relative to the page, and you point towards the bottom of the page. Maybe you use some other set of assumptions. But without me telling you where on the globe to draw the vector, you're operating on incomplete information. You have to know the frame of reference.


Objective morality means moral statements have a single truth value "if we adopt the principle of universality: if an action is right (or wrong) for others, it is right (or wrong) for us."
This only works if you have an axiomatic assertion that "good is moral." This is not actually required for objective morality, any more than an axiomatic assertion that "down is always the direction I indicate with this vector at the moment I say 'mark' and will never change" is required for there to be an objective gravity or an objective meaning of "down."

I think a more correct way of phrasing what you wrote is this: "Objective morality means moral statements have a single truth value 'if we adopt the principle of universality: if an action is good (or evil) for others, it is good (or evil) for us.'"

"Right" or "wrong" make presumptions about what your desired end state is.

As a thought experiment to illustrate my point, let me ask you this: Why should I be moral?


So Objective Morality claims there is some system of ethics, or a universal ethic, that applies universally. That ethic is the objective moral standard (there are no claims yet about it being known or knowable).
Agreed.


That standard answers the question of ethics (what ought one do?) by answering what is moral (because moral is the label given to the answer to the question "what ought one do?").
Why ought one to do it? I would have agreed with these statements prior to this discussion with you over them, but I believe you are making untrue assertions (not accusing you of lying, but I do think you're factually and logically wrong) when you state these, because you have underlying assumptions that are impossible.

The objective moral and ethical standard means that there is an objective standard by which you can determine (using D&D's general grid as an example) whether something is lawful or chaotic, and whether it is good or evil. It can answer what is moral by telling you what to do to align yourself with a desired alignment. This is an objective answer; it doesn't matter which alignment you belong to, you can tell somebody what they ought to do to align themselves with any particular alignment (provided you are perfectly knowledgeable about the objective truth of morality and ethics). But it DOES mean you have to know what alignment they want to be, or what they want that will let you determine which alignment will get them what they want. Because just as an astronomer in America who tells an Australian rocket physicist to "go straight up for 10 light years and you can't miss it" would be extremely wrong in his directions if both of them use "up" from their position, so too is somebody who says "you ought to kill that witness before he can snitch, it's the right thing to do" wrong (or lying) if he's telling this to somebody who wants to be good. But the drow matron mother telling that to her son that she's raising to be a CE assassin? She's absolutely right about that being the right thing to do.

And any objective observer who knows the facts of the situation, even if he finds evil to be reprehensible, would acknowledge that for an evil person, it's the right thing to do. He just hates evil and thinks people shouldn't be evil.


So, what ought one do? Shrug; call it "moral" for now - we can use that word to reference the answer before we know it. Then discuss if the answer will be universal or subjective. If subjective, discuss how that works. If objective, then return to the question to guess what that unknown universally applying system of ethics is.

But this does need to be my last post on this subthread. Despite being interesting and respectful discussion about one of the few things I am passionate about, it is also very stressful. And this week IRL is going to be a bit much. I apologize.
The problem here is that, once again, you say, "What you ought to do is be moral."

Well, why?

Why ought I to be moral?

Tanarii
2021-03-04, 12:46 PM
This only works if you have an axiomatic assertion that "good is moral."
If I had to guess, that is probably an assertion, or definition, in IRL moral theory. Good = Moral, and possibly also = the correct/right "way to behave"/"thing to do".

Good = Moral is not how D&D or Palladium define any of the good Alignments. And the most recent version describes Alignment as moral & social attitudes that result in typical behavior.

If so, that means we know that some portion of the Alignment name labels, e.g. Good and Evil, isn't lining up with IRL Moral Theory.

Segev
2021-03-04, 01:15 PM
If I had to guess, that is probably an assertion, or definition, in IRL moral theory. Good = Moral, and possibly also = the correct/right "way to behave"/"thing to do".

Good = Moral is not how D&D or Palladium define any of the good Alignments. And the most recent version describes Alignment as moral & social attitudes that result in typical behavior.

If so, that means we know that some portion of the Alignment name labels, e.g. Good and Evil, isn't lining up with IRL Moral Theory.

I mean, there's a foundational problem, though: for any moral theory you care to outline, I can ask, "Why should I be moral?" There are ways to answer that, but all of them require things that have been rejected when I have proposed them. The trouble is that "should" and "ought" and the like require a purpose. They have a hidden assumption that there is a purpose for which you "should" do things. Without purpose, "should" is meaningless. When you say, "You should be moral," but define "moral" as "what you should do," you have a circular definition with no foundation.

So I ask this of anybody who cares to answer (though OldTrees1 is in particular invited to respond): "Why should I be moral?"

OldTrees1
2021-03-04, 03:52 PM
I mean, there's a foundational problem, though: for any moral theory you care to outline, I can ask, "Why should I be moral?" There are ways to answer that, but all of them require things that have been rejected when I have proposed them. The trouble is that "should" and "ought" and the like require a purpose. They have a hidden assumption that there is a purpose for which you "should" do things. Without purpose, "should" is meaningless. When you say, "You should be moral," but define "moral" as "what you should do," you have a circular definition with no foundation.

So I ask this of anybody who cares to answer (though OldTrees1 is in particular invited to respond): "Why should I be moral?"


When you say, "You should be moral," but define "moral" as "what you should do," you have a circular definition with no foundation.

Correct. You could word the definition that way as a circular tautology. I frame it in a non-circular, but no more helpful, way.

What if the word "ought" did not require a purpose? What if instead of asking about instrumental ends, it was asking about intrinsic ends? What if it was asking which intrinsic end one ought to adopt/follow? What if that question is hard / impossible to answer so the asker labeled the answer "Moral" to save mental space while they kept working on the problem.

So while it is not framed as a circular definition, it is still not one that helps answer it.

Why should you be moral? Because the right answer is labeled moral. What is moral? That is the label for the right answer. What is the right answer? I can't know, but at least I have a name for it.

That does not stop people from developing theories based on moral intuitions, moral disgust, or their own values. It just means all those theories are rooted in a fallacious jump from the unhelpful tautological / empty start to a helpful but fallacious theory.

Mechalich
2021-03-04, 08:36 PM
So I ask this of anybody who cares to answer (though OldTrees1 is in particular invited to respond): "Why should I be moral?"

In most fantasy settings, including D&D, one answer to this question is 'so that when I die I don't spend an extremely lengthy period of time suffering horribly.'

If there is an explicit afterlife in a fantasy setting, and if that afterlife lasts longer than the mortal life (which it almost always does, whether it's 'eternal' or not), then mortal existence is basically a test that determines outcomes for the actual majority of a being's existence, which will in fact occur after death - even with an afterlife that ends after 10,000 years, the average human still spends 99% of their existence in it. Consequently, the reason to be moral is to secure the best possible consequence for this majority period.
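(Checking that figure: assuming a mortal lifespan of roughly 80 years, the afterlife's share of total existence is 10,000 / (10,000 + 80) = 10,000 / 10,080, or about 99.2%.)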

Now, D&D is tricky in that some small portion of evil people, like 0.01% or something, actually manage to beat the system. While 99.99% of those who perish with the evil tag applied to their alignment are doomed to an extremely long period of abject misery and suffering as a Larva, Lemure, Manes, or other low-level denizen of the Lower Planes until they are ultimately destroyed in the Blood War, a lucky few manage to escape this fate and ascend the fiendish hierarchy. 'It is better to rule in hell than serve in heaven' is arguably true in D&D, but only for those souls that manage to win the evil lottery. Most people who end up in Hell are getting the full course torture package. However, almost everyone who self-acknowledges as 'evil' in D&D parlance thinks they'll be one of the lucky ones, which means that willful embrace of evil, in D&D, involves an immense level of delusional thinking. Which is just very strange.
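To put rough numbers on that lottery, here is a toy expected-value sketch in Python (the utility values are invented purely for illustration; only the 0.01% odds come from the paragraph above):

p_win = 0.0001      # chance of ascending the fiendish hierarchy
u_ruler = 1000.0    # assumed payoff for ruling in hell (invented)
u_larva = -100.0    # assumed payoff for the Larva/Lemure fate (invented)

ev_evil = p_win * u_ruler + (1 - p_win) * u_larva
print(ev_evil)      # about -99.89: a terrible bet on these numbers

On almost any assignment of utilities where the downside is as bad as described, the bet only looks good to someone certain they'll win - hence the delusional thinking.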

NigelWalmsley
2021-03-04, 08:50 PM
Now, D&D is tricky in that some small portion of evil people, like 0.01% or something, actually manage to beat the system. While 99.99% of those who perish with the evil tag applied to their alignment are doomed to an extremely long period of abject misery and suffering as a Larva, Lemure, Manes, or other low-level denizen of the Lower Planes until they are ultimately destroyed in the Blood War, a lucky few manage to escape this fate and ascend the fiendish hierarchy.

Well, it's not like the Good afterlife is all that much better. Most Good people will spend their eternity as a celestial blade of grass or a heavenly doorknob or something. But more than that, even the low end of the fiendish hierarchy isn't necessarily a punishment. If you are a masochistic worshiper of the God of Pain, you presumably consider the eternity you spend in a pit full of flaming spikes to be the Good Ending.

What makes D&D really weird (to the point that even the authors sometimes lose the plot) is that it postulates that there are people who are in favor of things that are, in the real world, considered universally bad. In real life, no one is pro-terror. Even terrorists are doing what they do because they have some political or ideological goal they think it accomplishes. But in D&D, there is an actual God of Terror, whose goal (and that of his followers) is for there to be more terror in the world. Not because he thinks it's necessary to effect change, or even because it hurts some group he hates, but because he actually wants terror as a terminal value. The fact that his followers end up in an afterlife that is full of terror is no more a punishment than the fact that the followers of the God of Cooking end up in an afterlife that is full of food.

Mechalich
2021-03-04, 09:11 PM
Well, it's not like the Good afterlife is all that much better. Most Good people will spend their eternity as a celestial blade of grass or a heavenly doorknob or something.

I don't get where this idea that people who go to the upper planes are transformed into objects comes from. What source actually says this? Good-aligned mortals die and become petitioners on good-aligned planes, the same thing that happens on the neutral and evil planes. Some of the good-aligned petitioners get converted into Celestial beings like Lantern archons, but mostly they just wander around as petitioners in environments that broadly match their understanding of what Heaven would be - like what happened to Roy when he died in OOTS. The big difference for the Lower Planes is that the fiendish powers-that-be convert essentially all arriving petitioners into bottom-rung fiends.


But more than that, even the low end of the fiendish hierarchy isn't necessarily a punishment. If you are a masochistic worshiper of the God of Pain, you presumably consider the eternity you spend in a pit full of flaming spikes to be the Good Ending.

No. If you're a masochistic worshipper of the god of pain in life you are probably sufficiently delusional as to rationalize to yourself that spending eternity in a pit full of flaming spikes would be awesome, but that isn't actually true, and the suffering is exactly as bad as a neutral observer thinks it would be. Someone whose brain chemistry is actually sufficiently messed up to enjoy spending time immolating while being impaled is sufficiently mentally disturbed as to preclude any alignment other than chaotic neutral.


What makes D&D really weird (to the point that even the authors sometimes lose the plot) is that it postulates that there are people who are in favor of things that are, in the real world, considered universally bad. In real life, no one is pro-terror. Even terrorists are doing what they do because they have some political or ideological goal they think it accomplishes. But in D&D, there is an actual God of Terror, whose goal (and that of his followers) is for there to be more terror in the world. Not because he thinks it's necessary to effect change, or even because it hurts some group he hates, but because he actually wants terror as a terminal value. The fact that his followers end up in an afterlife that is full of terror is no more a punishment than the fact that the followers of the God of Cooking end up in an afterlife that is full of food.

The real issue I find D&D has is that it presumes people like this are common. There are, among humans, a small number of 'people who just want to see the world burn' but they're extremely rare and tend to flame out spectacularly. Somewhere along the line D&D committed to the idea that, if the Blood War came to an end, the forces of evil would drastically outnumber the forces of good and overrun the multiverse, and that just doesn't make any sense, mathematically.

But yes, in the divine sphere the various designers of D&D failed to properly differentiate between gods who are evil - because they are unrelentingly violent or can't control negative impulses or whatever - and gods of evil, which are manifestations of negative impulse given form. Various mythologies include lots of gods who are evil, but very few gods of evil. The exception, of course, is monotheistic traditions, which tend to include a single benevolent creator and a single perfectly malevolent oppositional figure. But the number of people who willfully choose to serve such adversaries - as opposed to being blackmailed, tricked, enslaved, or intimidated into such service - is quite small.

Saint-Just
2021-03-04, 09:59 PM
In most fantasy settings, including D&D, one answer to this question is 'so that when I die I don't spend an extremely lengthy period of time suffering horribly.'


I know it's a tangent, but I am having trouble remembering offhand one setting which did not start as an RPG where it is a known reality. I can remember maybe two which have dominant religions teaching that, but no proof. In general, fantasy as I know it tends to have unknown afterlives (unless it goes for reincarnation) and even religions that teach either about unknowable afterlives or about something different from heaven-and-hell.

NigelWalmsley
2021-03-04, 11:28 PM
No. If you're a masochistic worshipper of the god of pain in life you are probably sufficiently delusional as to rationalize to yourself that spending eternity in a pit full of flaming spikes would be awesome, but that isn't actually true, and the suffering is exactly as bad as a neutral observer thinks it would be. Someone whose brain chemistry is actually sufficiently messed up to enjoy spending time immolating while being impaled is sufficiently mentally disturbed as to preclude any alignment other than chaotic neutral.

That seems like a cop-out to preserve a more traditional (and, to be fair, saner) view of morality in the face of the claims D&D makes. But the setup the game presents (or at least the Great Wheel cosmology that is most directly tied to alignment) is very much that all the afterlives are supposed to be rewards. Some of them are structured differently, but the Evil afterlives aren't punishments, they are rewards for being Evil that are desirable to Evil people.

In particular, the notion that a sincere devotee of the God of Pain isn't actually Evil seems to me to be basically a rejection of the validity of the alignment system (which is not necessarily a position I'm opposed to, but seems inconsistent with the thrust of your argument).


The real issue I find D&D has is that it presumes people like this are common.

Well that's basically the issue with alignment. Once you postulate that the Drow and Orcs are culturally Evil in some meaningful sense, you're left with a very difficult dilemma. Either "Evil" means something radically different from its real-world meaning (e.g. A Practical Guide to Evil, where "Evil" is basically just a team), or you start getting into the bizarre implications of D&D morality. If Drow culture is "Evil", then not only do the individual Drow have to be Evil, but the Drow afterlife has to be such that it makes sense for them to choose to be Evil. Or Lolth has to have such tight control over Drow society that she can force them to be Evil against both their wishes and their best interests.

Segev
2021-03-04, 11:43 PM
Correct. You could word the definition that way as a circular tautology. I frame it in a non-circular, but no more helpful, way.

What if the word "ought" did not require a purpose? What if instead of asking about instrumental ends, it was asking about intrinsic ends? What if it was asking which intrinsic end one ought to adopt/follow? What if that question is hard / impossible to answer so the asker labeled the answer "Moral" to save mental space while they kept working on the problem.You have yet to define these intrinsic ends. What are they? And why ought I adopt/follow them? I am happy to accept "intrinsic" ends, but you still need to answer why I ought to follow/pursue/adopt them.


So while it is not framed as a circular definition, it is still not one that helps answer it.
I don't see yet how you've made it non-circular. What makes your "intrinsic ends" better than "instrumental ones" for this definition?

I do not see how you can remove the goal orientation from "ought" and have it still mean anything. Its very definition requires that there be purpose.

I am open to being proven wrong, here. If you can show me how you can use "ought" without it having an unspoken "...in order to [something]," I will be better able to discuss it as you seem to see it.


Why should you be moral? Because the right answer is labeled moral. What is moral? That is the label for the right answer. What is the right answer? I can't know, but at least I have a name for it.
Is it always the right answer, under all circumstances, no matter what anybody wants, needs, or is? Because there are all sorts of objective systems - not even of morality or ethics - where the objectively right thing to do differs based on the situation. It is not always the right thing to put the same amount of gas into your gas tank, under all conditions and in all situations. It depends how full the tank is. And whether what you actually need to be doing is scraping ice off the windshield. Or turning the key in the ignition.

What you are trying to do matters to "what you ought to do." And yet, there are objectively correct actions you can perform - right things to do - to achieve your ends with your car. If I ask you, "Why ought I see to regular filter and oil changes?" you can give me an answer, and I can tell you whether I actually care about what you're telling me the reason is. "Nah, I don't need to take care of it that way; I just plan to give it away and buy a new one in a year. I'm rich, you see."

You are objectively correct that I ought to change the oil and filter every few thousand miles if I want to keep the car in good working order. I am not wrong for saying that I don't care about that, though.

You say you posit "intrinsic" things I ought to do as "moral things." I don't know how far the "always the right thing to do" thing goes, though, even as you posit it. Is it absolute? Doing "what is moral" is right, all the time, for everybody, to the point that it is never the wrong answer, never an irrelevant act, independent of all circumstances? Or is it more nuanced than that even with what you're defining "objective morality" to be? (I genuinely am not sure, and want to nail down what you mean.)


In most fantasy settings, including D&D, one answer to this question is 'so that when I die I don't spend an extremely lengthy period of time suffering horribly.'
What if I don't care about that? Or I have a way to avoid it, perhaps through utter annihilation of my soul in a painless and fast fashion, and I prefer that to any afterlife?

OldTrees1
2021-03-05, 01:57 AM
I do not see how you can remove the goal orientation from "ought" and have it still mean anything. Its very definition requires that there be purpose.

I am open to being proven wrong, here. If you can show me how you can use "ought" without it having an unspoken "...in order to [something]," I will be better able to discuss it as you seem to see it.

How would showing help? I have shown how it is used. "What Ought one do?" and "Ought one do X?" are examples. There is no hidden/unspoken "in order to [something]". If you change it to "Should one X in order to Y?" by presuming a purpose, I can change back to "Ought one Y?" by challenging the purpose.

It is okay if that is a communication barrier. I cannot make it more clear without more shared premises (even then, this is one of the root concepts so I don't know which other shared premises could help).


Is it always the right answer, under all circumstances, no matter what anybody wants, needs, or is?
The right answer is always named/labeled "moral" as a way to make talking about it more concise.

You might notice I did not say "It is always the right answer". I said "the right answer" is given the nickname "moral" as a shorter name / label. It is always the case that the right answer is itself, and we will name it "moral" to make it easier to talk about it.

The branch of Metaethics deals (among other things) with the rest of your questions about the nature of the answer to the question. I will not get into those topics at this time (including the subthread I left).

Chauncymancer
2021-03-05, 02:26 AM
Asked in a vacuum, with no goal stated, there is no answer to "what ought one to do?"

This is true regardless of moral systems, subjective or objective. "What ought I to do?" cannot be answered if you do not already know the answer to the follow up question, "In order to...?"



The lack of a qualifier is the "hidden component". Morality is the end unto itself, in contrast to instrumental ends (maintain your car so that it doesn't break down). It is not "What ought one do in order to X?", rather it is just "What ought one do?".




When you turn around and ask, "What ought you to do?" you're asking for a judgment of which of those alignments you ought to be pursuing. It is possible for an objective moral system to have a definitive answer to this...assuming you have something you can settle on as a goal.

There is never an answer as to which alignment you ought to pursue. They are not their own end. Even if "I will be GOOD!" is your declared goal, you're pursuing it because you believe it will make you happy. It pleases you, and there's a reason why it pleases you. But if your goal is, "I WILL BE GOOD," then you ought to do what the Good alignment calls for.




I think a more correct way of phrasing what you wrote is this: "Objective morality means moral statements have a single truth value 'if we adopt the principle of universality: if an action is good (or evil) for others, it is good (or evil) for us.'"

"Right" or "wrong" make presumptions about what your desired end state is.

As a thought experiment to illustrate my point, let me ask you this: Why should I be moral?
The objective moral and ethical standard means that there is an objective standard by which you can determine (using D&D's general grid as an example) whether something is lawful or chaotic, and whether it is good or evil. It can answer what is moral by telling you what to do to align yourself with a desired alignment. This is an objective answer; it doesn't matter which alignment you belong to, you can tell somebody what they ought to do to align themselves with any particular alignment (provided you are perfectly knowledgeable about the objective truth of morality and ethics). But it DOES mean you have to know what alignment they want to be, or what they want that will let you determine which alignment will get them what they want. Because just as an astronomer in America who tells an Australian rocket physicist to "go straight up for 10 light years and you can't miss it" would be extremely wrong in his directions if both of them use "up" from their position, so too is somebody who says "you ought to kill that witness before he can snitch, it's the right thing to do" wrong (or lying) if he's telling this to somebody who wants to be good. But the drow matron mother telling that to her son that she's raising to be a CE assassin? She's absolutely right about that being the right thing to do.

And any objective observer who knows the facts of the situation, even if he finds evil to be reprehensible, would acknowledge that for an evil person, it's the right thing to do. He just hates evil and thinks people shouldn't be evil.


The problem here is that, once again, you say, "What you ought to do is be moral."

Well, why?

Why ought I to be moral?


I mean, there's a foundational problem, though: for any moral theory you care to outline, I can ask, "Why should I be moral?" There are ways to answer that, but all of them require things that have been rejected when I have proposed them. The trouble is that "should" and "ought" and the like require a purpose. They have a hidden assumption that there is a purpose for which you "should" do things. Without purpose, "should" is meaningless. When you say, "You should be moral," but define "moral" as "what you should do," you have a circular definition with no foundation.

So I ask this of anybody who cares to answer (though OldTrees1 is in particular invited to respond): "Why should I be moral?"

Okay so here's the thing. You know how dragons, giants, and gelatinous cubes are not just things that happen to not exist, but things that cannot exist in Earth's physical conditions, and if the laws of physics clearly affecting your characters were applied to them they would just instantly die? Alignment has a relationship to real-life ethical reasoning that you are going to find very comparable to huge-sized monsters' relationship to the square-cube law.
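For anyone who wants the square-cube point spelled out, a quick idealized sketch in Python:

def relative_load(k):
    # Scale a creature uniformly by a factor of k: weight grows with
    # volume (k^3), while bone/muscle strength grows with
    # cross-sectional area (k^2).
    weight = k ** 3
    strength = k ** 2
    return weight / strength  # load per unit of strength grows linearly in k

for k in (1, 2, 10):
    print(k, relative_load(k))  # a 10x-scale giant bears 10x the relative load

Double a creature's height and it carries twice the load per unit of strength; at giant or dragon scale the body simply fails.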

It is not a coincidence that four of the five authors who Gygax borrowed the concept of alignment from were Catholic, that Baator and Celestia are taken directly from the Divine Comedy, that the original writeup of holy symbols only gave the choices "wooden crucifix" and "silver crucifix", and that alignment makes perfect sense if you accept the moral theories of Aquinas and Augustine.

See here's the thing: In the model of human minds used in the specific form of Objective Morality that I consider relevant to D&D, it is 100% of the time the case, for every living sapient being that has ever existed, that having an alignment causes wanting things. If you could somehow cancel out someone's alignment, they would stand stock still and stop breathing until they suffocated to death. There is no such thing in this model as having a goal and then choosing a code of ethics; people only have goals because they have codes of ethics. (Saint Augustine specifically claims that people who hold any of the eight non-LG alignments don't actually have Free Will.)

So in this model, a Universal Ethical System (which is one of the components of the definition of Objectivity in this case) must answer the question "What should I do?" with the exact same answer, no matter what someone's actual goals are. The reason this "makes sense" is because it's explicitly a religious argument with an assumed teleology for all sapient beings. Humans (and elves, dwarves, and orcs) are tools: they exist to do a job, and ethics is the field of figuring out how that job applies to seemingly irrelevant situations.

And while nobody currently writing editions of D&D actually expects you to ride that train, the shape of alignment that was written into 1e was written by people who basically thought that was how the real world worked. You can discard those assumptions, but you're going to have to do a ground-up rewrite of how alignment works.



Now, D&D is tricky in that some small portion of evil people, like 0.01% or something, actually manage to beat the system. While 99.99% of those who perish with the evil tag applied to their alignment are doomed to an extremely long period of abject misery and suffering as a Larva, Lemure, Manes, or other low-level denizen of the Lower Planes until they are ultimately destroyed in the Blood War, a lucky few manage to escape this fate and ascend the fiendish hierarchy. 'It is better to rule in hell than serve in heaven' is arguably true in D&D, but only for those souls that manage to win the evil lottery. Most people who end up in Hell are getting the full course torture package. However, almost everyone who self-acknowledges as 'evil' in D&D parlance thinks they'll be one of the lucky ones, which means that willful embrace of evil, in D&D, involves an immense level of delusional thinking. Which is just very strange.

The real issue I find D&D has is that it presumes people like this are common. There are, among humans, a small number of 'people who just want to see the world burn', but they're extremely rare and tend to flame out spectacularly. Somewhere along the line D&D committed to the idea that, if the Blood War came to an end, the forces of evil would drastically outnumber the forces of good and overrun the multiverse, and that just doesn't make any sense, mathematically.

I do not think it is a coincidence that these statements are all perfectly compatible with the philosophical claims made by Gygax's specific religious denomination at the time he was writing D&D. They're also eminently gameable, which has allowed them to persist under writers of different creeds.


I don't get where this idea that people who go to the upper planes are transformed into objects comes from. What source actually says this?
Planescape, I believe. Maybe Forgotten Realms' Wall of the Faithless.


No. If you're a masochistic worshipper of the god of pain in life, you are probably sufficiently delusional as to rationalize to yourself that spending eternity in a pit full of flaming spikes would be awesome, but that isn't actually true, and the suffering is exactly as bad as a neutral observer thinks it would be. Someone whose brain chemistry is actually sufficiently messed up to enjoy spending time immolating while being impaled is sufficiently mentally disturbed as to preclude any alignment other than chaotic neutral.
Neurotypicality and mental illness are orthogonal to what alignment you are. There are explicitly multiple creatures with alignments that don't even have minds. Nothing is stopping our inverted pain receptor person from being LE.


But yes, in the divine sphere the various designers of D&D failed to properly differentiate between gods who are evil - because they are unrelentingly violent or can't control negative impulses or whatever - and gods of evil, which are manifestations of negative impulse given form. Various mythologies include lots of gods who are evil, but very few gods of evil. The exception, of course, is monotheistic traditions, which tend to include a single benevolent creator and a single perfectly malevolent oppositional figure. But the number of people who willfully choose to serve such adversaries - as opposed to being blackmailed, tricked, enslaved, or intimidated into such service - is quite small.
I kind of get the impression that the religious orthopraxies of Hellenic paganism that informed the myths all D&D gods are based on do not actually take the position that the gods have free will at all. The god of plagues can't just stop giving out plagues; you can just make sure that he gives the plague with your name on it to someone else.

Grek
2021-03-05, 06:35 AM
I think people are kinda missing the point of the question here. Saint-Just isn't asking "Does Detect Good really detect whether or not a character adheres to the IRL Objective Morality?". I mean, obviously they're not, that's an absurd thing to ask. They're asking how people would figure out the criteria by which the various Detect [Alignment] spells decide how strongly to light stuff up when cast. And for that, you'd use some behavioral studies:

Study One: Are Behavior and Alignment Correlated?
For this study, we want to know whether a specific behavior (or bundle of behaviors, if we already have a hunch about what Detect [Alignment] is checking for) will alter someone's alignment. We start out with a behavioral survey where we ask about all of the different behaviors we're curious about, and we subject everyone to a battery of Detect [Alignment] spells. Then we send everyone on their way, with instructions to report back in next year for a repeat of the survey + alignment testing. We do statistics to see which behaviors tend to correlate to alignment changes. Maybe we find out that Detect [Alignment #3] correlates to eating brussels sprouts. Maybe we find out that Detect [Alignment #2] correlates to rescuing kittens from trees. Whatever we find, that informs the sorts of tests we do in Studies Two through Four.
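(A minimal sketch of how the Study One number-crunching could go, assuming the survey gets flattened into per-subject records; the behavior name, the ping scale, and every number here are hypothetical:)

# Sketch of Study One: correlate a reported behavior with the change in a
# Detect [Alignment] reading between two survey waves. All data hypothetical.
from statistics import correlation  # Python 3.10+

subjects = [
    # reported sprout meals per week, ping strength in year 1 and year 2
    {"sprouts": 5, "ping_y1": 0.2, "ping_y2": 0.9},
    {"sprouts": 0, "ping_y1": 0.3, "ping_y2": 0.3},
    {"sprouts": 3, "ping_y1": 0.5, "ping_y2": 0.8},
    {"sprouts": 1, "ping_y1": 0.4, "ping_y2": 0.5},
]

behavior = [s["sprouts"] for s in subjects]
ping_change = [s["ping_y2"] - s["ping_y1"] for s in subjects]

# Pearson's r between the behavior and how far the reading moved in a year;
# a large |r| flags the behavior for the controlled studies below.
print(f"r = {correlation(behavior, ping_change):.2f}")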

Study Two: Do Behaviors Alter Alignment?
For this study, we want to know whether a person can change their alignment by changing their behavior. We get ourselves a fresh group of volunteers, give them all the behavioral survey + Detect [Alignment] battery again, and then divide them up into control and experimental groups. The control group is instructed to perform some behavior that we're pretty sure isn't correlated with alignment change, while the experimental group is instructed to perform one of the behaviors we're curious about. After both groups are done with their activities, we check their alignments again. If it turns out that the experimental groups asked to eat brussels sprouts do indeed detect more strongly of [Alignment #3] than they did prior to their leafy meal, that's pretty good evidence that eating brussels sprouts will indeed cause someone to detect of [Alignment #3] out in the wild. But that's not quite enough for our purposes, because we also need to do Study Three.
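(A sketch of how the Study Two comparison might be scored, with a permutation test standing in for whatever statistics the surveyors actually prefer; group sizes and shift values are hypothetical:)

# Sketch of Study Two: is the alignment shift in the experimental group
# larger than chance assignment would produce? All numbers hypothetical.
import random

control_shift = [0.0, 0.1, -0.1, 0.0, 0.1]   # ping change, control group
sprout_shift = [0.4, 0.6, 0.3, 0.5, 0.2]     # ping change, sprout-eaters

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(sprout_shift) - mean(control_shift)

# Permutation test: shuffle the group labels and see how often a difference
# at least this large appears by luck alone.
pooled = control_shift + sprout_shift
n = len(sprout_shift)
hits = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed:
        hits += 1

print(f"difference = {observed:.2f}, p ~ {hits / trials:.3f}")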

Study Three: Do Alignments Alter Behavior?
This one's the opposite of the above study. We don't just want to know how to alter our alignment, we want to know what the effects of alignment actually are. Maybe there's a stereotype that people who detect strongly of [Alignment #4] enjoy violin music. Fair enough, we can test that with the Atone spell. We take our volunteers, have them subjectively rate their opinion of violin music and then split them into three groups. The control group has nothing done to them. Experimental Group A is Atoned to [Alignment #2]. Experimental Group B is Atoned to [Alignment #4]. We play them some more violin music and have them rate that. If it turns out that Experimental Group B is the only one with a notable improvement in their opinion of violin music, that probably means that it's the alignment causing it. (If it turns out that both A and B start liking violin, that suggests that it's actually the Atone spell doing it, which would be... weird, but probably not the strangest experimental outcome in a universe where Detect Vegetable Eaters is a spell.)

Study Four: Is Alignment A Good Proxy For [Insert Ethics System Here]?
This one is a bit cheeky. Nobody can agree on which ethics system is correct. But we can usually agree on which people are experts on which ethics systems. For this experiment, we check everyone's alignment, then have a panel of experts on [Insert Ethics System Here] interview our test subjects and then rate their level of adherence to [Insert Ethics System Here]. Then we check everyone's alignment again, just to be sure that being declared Highly Ethical by the Mer-Pope (or whoever else we're bringing in on our panel for [Insert Ethics System Here]) isn't one of the things that affects alignment. If it turns out that there's a strong correlation between [Alignment #3] and having the approval of the Federated Council of Giant Spiders Who Eat Puppies, that tells us something about the relationship between [Alignment #3] and Being A Giant Spider and/or Eating Puppies.
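(Study Four boils down to a rank correlation between the panel's scores and the spell's readings; a minimal sketch, assuming both get recorded per subject, with all names and numbers hypothetical:)

# Sketch of Study Four: rank-correlate expert adherence ratings with
# Detect [Alignment] readings. Panel scores and pings are hypothetical.
from statistics import correlation

def ranks(xs):
    # Map each value to its rank; ties broken by position, fine for a sketch.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0] * len(xs)
    for rank, i in enumerate(order):
        out[i] = rank
    return out

panel_rating = [9, 7, 8, 2, 4, 1]               # Mer-Pope panel scores
ping_strength = [0.9, 0.6, 0.8, 0.1, 0.5, 0.2]  # Detect [Alignment #3]

# Spearman-style: Pearson's r over ranks, so it doesn't matter that the
# panel and the spell use completely different scales.
print(f"rho = {correlation(ranks(panel_rating), ranks(ping_strength)):.2f}")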

Now, none of this actually tells us whether or not we want to be [Good]. After all, in 3.5 the Puppy Eating Arson Spiders From Baator will detect faintly of [Good] if someone casts Protection From Evil on them, so Detect Good is clearly not a foolproof way of deciding if someone is a decent sort or not. But equally clearly, it can at least give us some information about the probable probity of people who detect as [Good] relative to those who detect as [Evil].

OldTrees1
2021-03-05, 08:34 AM
I think people are kinda missing the point of the question here. Saint-Just isn't asking "Does Detect Good really detect whether or not a character adheres to the IRL Objective Morality?". I mean, obviously they're not, that's an absurd thing to ask. They're asking how people would figure out the criteria by which the various Detect [Alignment] spells decide how strongly to light stuff up when cast. And for that, you'd use some behavioral studies:

That is the nature of subthreads I suppose. People answer the main question to more or less detail, then a relevant tangent appears. In this case the subthread spun off of acknowledging that the philosophers of Sigil would still not agree on or know which Ethics System to use.

You provided great detail. Quertus mentioned an interesting possibility and I wonder how it impacts these studies:
What if the Detect spells are Detect Lawful Evil and Detect Chaotic Evil rather than Detect Good and Detect Evil? In this case they are not true opposites but still don't overlap.

Segev
2021-03-05, 11:02 AM
How would showing help? I have shown how it is used. "What Ought one do?" and "Ought one do X?" are examples. There is no hidden/unspoken "in order to [something]". If you change it to "Should one X in order to Y?" by presuming a purpose, I can change back to "Ought one Y?" by challenging the purpose.

It is okay if that is a communication barrier. I cannot make it more clear without more shared premises (even then, this is one of the root concepts so I don't know which other shared premises could help).The part I bolded is, if I am parsing you correctly, the point I've been trying to get at. You seem to me to be trying to say that "the moral thing to do" is always the same thing, and I'm asking "why?" I'm challenging the hidden implied purpose of your statement, "You ought to do X." But I think this next bit is actually moving us closer, so I will address that and hope that this bit discussed here above is not necessary:


The right answer is always named/labeled "moral" as a way to make talking about it more concise.

You might notice I did not say "It is always the right answer". I said "the right answer" is given the nickname "moral" as a shorter name / label. It is always the case that the right answer is itself, and we will name it "moral" to make it easier to talk about it.

The branch of Metaethics deals (among other things) with the rest of your questions about the nature of the answer to the question. I will not get into those topics at this time (including the subthread I left).
Okay. So the objectively right answer to any (alignment-related) question about what you should do is always "the moral thing." This is tautological and circular, which makes it not a very useful ANSWER, but it still is a useful TERM in this case.

The answer to the question of, "What is the moral thing for me to do?" when posed a situation will always, then, be, "whatever your (target) alignment dictates." If your alignment (or the alignment to which you aspire) dictates that you care nothing for the lives of others if they're not of use to you (evil alignments), then the answer to "What should I do with this child who caught me stealing and might tattle on me?" is probably "kill her, and hide the body/evidence." That would be the moral thing for an evil person to do in an objective morality system, under the definition of "moral" that says "the moral thing to do is defined as the right answer to the question, 'what ought I to do?'"

It feels strange to us, who live in a world and society where everyone at least thinks of "good" as the alignment to which to aspire, to say "it is moral to do evil," but in an objective alignment setting where there are people who actively want to adhere to alignments other than Good, that is a perfectly sensible statement, given the definition of "moral" you, OldTrees1, have given me. (I am not saying it's "your" definition; I believe you are citing other philosophers and philosophies. But I am trying to be very precise that I am using it by that specific definition, and not a definition that, for example, says "moral == good.")

OldTrees1
2021-03-05, 01:03 PM
Okay. So the objectively right answer to any (alignment-related) question about what you should do is always "the moral thing." This is tautological and circular, which makes it not a very useful ANSWER, but it still is a useful TERM in this case.

Yes, it is a term. A term used to reference the right answer to the overall question and to other questions of moral relevance.


The answer to the question of, "What is the moral thing for me to do?" when posed a situation will always, then, be, "whatever your (target) alignment dictates."
This is not a shared premise. Why would doing what your target alignment dictates be the right answer to "What ought one do?"? That would be assuming the conclusion.


The part I bolded is, if I am parsing you correctly, the point I've been trying to get at. You seem to me to be trying to say that "the moral thing to do" is always the same thing, and I'm asking "why?" I'm challenging the hidden implied purpose of your statement, "You ought to do X." But I think this next bit is actually moving us closer, so I will address that and hope that this bit discussed here above is not necessary:

Well, as was established (imperfect word choice?) below (in your post), moral is a term for the right answer to the question.
And the question does not presume a purpose. Assuming a purpose would be begging the question*. For any purpose you can imagine, I can question it by asking "But, ought one follow that purpose?". It may be hard to believe, but there is no hidden implied purpose to qualify the question.

*Asking "What ought one do if we assume X is the answer to 'What ought one do?' ?" is circular logic or an unfounded premise.



It feels strange to us, who live in a world and society where everyone at least thinks of "good" as the alignment to which to aspire, to say "it is moral to do evil," but in an objective alignment setting where there are people who actively want to adhere to alignments other than Good, that is a perfectly sensible statement, given the definition of "moral" you, OldTrees1, have given me. (I am not saying it's "your" definition; I believe you are citing other philosophers and philosophies. But I am trying to be very precise that I am using it by that specific definition, and not a definition that, for example, says "moral == good.")
I struck out some unneeded qualifiers.

In game, evil labels an alignment. It is not strange to discard unrelated moral statements about its namesake.* I can readily imagine a campaign where it is moral to do what is labeled evil. I can also readily imagine a campaign where alignments are amoral. I know some can readily imagine a campaign where the moral character of the alignments follows moral relativism, although I will admit I can't personally readily imagine moral relativism in any context.

*Apologies but a more absurd example popped into my head:
Assume that IRL choices between various ice cream flavors were amoral choices.
Assume that a game was made with alignments named vanilla, rocky road, chocolate, and mint.
There is no reason to assume, unless stated, that moral statements about ice cream flavors IRL are in any way related to statements about the alignments in that game. Two things sharing the same name does not make them necessarily the same.

So statements like "It is moral to do evil" are not inherently self contradicting in the context where "evil" is not merely another word for "immoral". The statement might be false, or true, or mu, or depends. That depends on context not presumed at this time. For example Moral Universalism would declare the statement could only be false xor true. Moral Error Theory would say it was mu. Moral Relativism might say it depends.

PS: I apologize for being the cause of the linguistic gymnastics you had to do in that qualifier around "the definition of 'moral' you, OldTrees1, have given me".

Segev
2021-03-05, 01:48 PM
I'm going to focus on this, because I think it's the core of where we're disagreeing or dancing around each other:

This is not a shared premise. Why would doing what your target alignment dictates be the right answer to "What ought one do?"? That would be assuming the conclusion.



Well, as was established (imperfect word choice?) below (in your post), moral is a term for the right answer to the question.
And the question does not presume a purpose. Assuming a purpose would be begging the question*. For any purpose you can imagine, I can question it by asking "But, ought one follow that purpose?". It may be hard to believe, but there is no hidden implied purpose to qualify the question.

*Asking "What ought one do if we assume X is the answer to 'What ought one do?' ?" is circular logic or an unfounded premise.

I think I have a better way to cut to the chase of this. If you ask me the question, "What ought I to do?" I am going to answer with some variation on, "It depends on what you want." This is all inclusive: It depends on what you want to happen; it depends on what you want to gain; it depends on what you want to achieve; it depends on what you want to experience; it depends on what you want to feel; it depends on what you want the outcomes of your actions to lead to.

Do you want to be happy? Then X, Y, and Z are the things the sufficiently-informed guru/spiritual advisor/life coach can objectively tell you to do to become happy. They are the answer to "What ought I to do?" if you want to be happy.

Do you want to be a good person? Then (in D&D or a system that uses a similar alignment grid) you ought to do those things which the good alignment says good people do.

It is not presuming the conclusion to say, "You ought to do what the alignment you aspire to requires." If you asked "What is the moral thing to do?" of somebody who knows what alignment you aspire to, he will (assuming he is both correct and honest) tell you what the alignment to which you aspire says one should do in the situation you find yourself in. Because this hypothetical involves an objective alignment system, there is an objectively correct answer provided by each alignment. ("It doesn't matter; this is an amoral question" is also a valid answer, but I am neglecting it for now; we are assuming we have correctly identified our "what should I do?" questions as pertaining to the objective moral alignment system.)

To reiterate: The answer, without making any assumptions about the asker at all, to the question, "What ought I to do?" - regardless of whether morality is objective or subjective - will always be, "It depends on what you want."

OldTrees1
2021-03-05, 02:13 PM
I'm going to focus on this, because I think it's the core of where we're disagreeing or dancing around each other:

To reiterate: The answer, without making any assumptions about the asker at all, to the question, "What ought I to do?" - regardless of whether morality is objective or subjective - will always be, "It depends on what you want."

What someone wants may or may not be morally relevant, and if relevant may be moral or immoral. So while the overall question "What ought one do?" might have a large answer (John Stuart Mill thought the theory of Utilitarianism was the answer), I have no reason to presume the answer depends on what I want or to presume any other hidden purpose.

This core of disagreement might not be resolvable. And that is okay.

Segev
2021-03-05, 03:24 PM
What someone wants may or may not be morally relevant, and if relevant may be moral or immoral. So while the overall question "What ought one do?" might have a large answer (John Stuart Mill thought the theory of Utilitarianism was the answer), I have no reason to presume the answer depends on what I want or to presume any other hidden purpose.

This core of disagreement might not be resolvable. And that is okay.

In objective morality, you either must define "moral/immoral" as the fixed axis, in which case you've essentially stated "moral = good" and thus it is no longer the #defined answer to the question "what ought I to do?" with no hidden assumptions about goals, or you must accept that the answer to the question "What ought I to do?" depends on what you want.

The moment you insist that "moral == good," you can try to claim that you ought to do the moral thing, but you are now open to the question, "Why should I be moral?" And now we're right back to it depending on what I want.

The only reason "moral" as defined previously - the answer to the question "What ought I to do?" - depends on what you want is because all "ought" questions require motivation. It is fundamentally impossible to have "ought" without a motivation.

Grek
2021-03-05, 07:11 PM
What if the Detect spells are Detect Lawful Evil and Detect Chaotic Evil rather than Detect Good and Detect Evil? In this case they are not true opposites but still don't overlap.
That's not a huge problem. Suppose we have three teams - the [Cubs], the [Cardinals] and the [White Sox] - and the various Detect [Team] spells ping your [Team] more strongly if you:
Cheer for your [Team].
Boo members of other [Team]s.
Play sports wearing the [Team] logo. (strong!)
Have a giant foam finger for [Team]. (very strong!)
Are currently very very drunk.

Obviously being sufficiently drunk will make you ping faintly of every team, UNLESS you counteract it with sufficient acts of booing and giant foam finger waving. And obviously booing the Cardinals is an act which shifts you steadily toward both [Cubs] and [White Sox] affiliation. But that doesn't mean that [Team] isn't a real concept in universe, it just means that it has properties which correspond less to morality and more to a sports rivalry.

If we switch over to the four D&D extreme alignments, then we end up with results like "People who eat kittens get a strong reaction from Detect [Alignment #2] unless they also rob banks." and "People who eat kittens also get a strong reaction from Detect [Alignment #3], even if they do rob banks (in fact, it's even stronger if they do), but NOT if they do so while obeying the Nine Commandments of the Infernal Order of Baatorian Knights." Eventually, you'd puzzle out that you're looking at something like 'Detect Baatorian Morality', which may very well get summarized as 'Detect Lawful Evil'.
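(One way the 'puzzling out' could be run in practice: treat each subject's reading as a weighted sum of behavior scores and fit the weights across many subjects. A least-squares sketch; the behaviors, scores, and readings are all hypothetical:)

# Sketch: fit Detect [LE] readings as a weighted sum of behavior scores.
# The recovered weights show which mixture of conduct the spell responds to.
import numpy as np

# columns: kitten-eating, bank-robbing, Infernal-Order observance
behaviors = np.array([
    [3.0, 0.0, 0.0],
    [3.0, 2.0, 0.0],
    [3.0, 2.0, 2.0],
    [0.0, 2.0, 1.0],
    [1.0, 1.0, 2.0],
])
detect_le = np.array([0.3, 0.5, 0.9, 0.4, 0.5])  # measured ping strengths

weights, *_ = np.linalg.lstsq(behaviors, detect_le, rcond=None)
print(dict(zip(["kittens", "banks", "observance"], weights.round(2))))
# Positive weight on both the 'evil' and the 'lawful' columns would be the
# signature of something like 'Detect Baatorian Morality'.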

RE: The rest of the thread. It's called the Is-Ought problem, y'all. You can't go from "This is what the universe looks like." to "This is what you ought to do." or vice versa without some pre-existing moral intuitions about what you ought to do given the state of the universe. Even if it's a universal truth that having a giant red foam finger saying CARDINALS #1 makes you magically detect of [Cardinals], that still does not tell you whether detecting of [Cardinals] is something to be desired. You have to supply the 'moral' axiom that the Cardinals are the best sports team and everyone should be a Cardinals fan for yourself. It's not a feature of the universe except insofar as it is a feature of you.

This is equally true when one of the mystical axes of affiliation has been labelled [Good] instead of [Cardinals]. [Good] is just a label; the fact that the mystical glowing winged humanoids call their team Team [Good] does not necessarily say that you should aspire to detect of [Good]. Ideally, it would work the other way around, where once everyone agrees that the Celestial Parrot Men are actually amazing role models and that everyone should strive to live up to their example, we assign the mystical radiation which wafts off of the Celestial Parrot Men the [Good] label, so that everyone knows how society expects them to feel about the Celestial Parrot Men and their disciplines. But even then, pointing to the label and saying "It's clearly labelled [Good], obviously it's morally right." is missing the point. You still have to show what aspects of it are so great that it deserves to be given the label of [Good].

OldTrees1
2021-03-05, 07:54 PM
That's not a huge problem. Suppose we have three teams - the [Cubs], the [Cardinals] and the [White Sox] - and the various Detect [Team] spells ping your [Team] more strongly if you:
Cheer for your [Team].
Boo members of other [Team]s.
Play sports wearing the [Team] logo. (strong!)
Have a giant foam finger for [Team]. (very strong!)
Are currently very very drunk.

Obviously being sufficiently drunk will make you ping faintly of every team, UNLESS you counteract it with sufficient acts of booing and giant foam finger waving. And obviously booing the Cardinals is an act which shifts you steadily toward both [Cubs] and [White Sox] affiliation. But that doesn't mean that [Team] isn't a real concept in universe, it just means that it has properties which correspond less to morality and more to a sports rivalry.

I did not mean to imply there was a problem. Just a puzzle of how to extend your solution to when there are detect spells for 2 adjacent corners rather than detect spells for both axes.

I was curious about whether you could differentiate between a pair of opposites, an orthogonal pair, and 2 adjacent corners. However, two adjacent corners are an orthogonal pair now that I think about it. You just rotate the axes 45 degrees. If you have something that pings on Detect G and Detect C, then we could call it CG. If you have something that detects on CG and CE, then we could call it CCGE, or C for short.
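(The rotation remark can be made concrete. If each adjacent-corner detector reads off the projection of a creature's position onto that detector's own direction, a single change of basis converts the readings back to the familiar grid. A sketch, with hypothetical readings:)

# Sketch: two adjacent-corner detectors (LE, CE) are the usual axes rotated
# 45 degrees, so one change of basis converts their readings back.
import numpy as np

theta = np.deg2rad(45)
# rows: the LE and CE detector directions, in (law, evil) coordinates
detector_basis = np.array([
    [ np.cos(theta), np.sin(theta)],   # Detect [LE]
    [-np.cos(theta), np.sin(theta)],   # Detect [CE]
])

readings = np.array([0.7, 0.1])  # (Detect LE, Detect CE) for one creature

# Solve detector_basis @ (law, evil) = readings for the familiar axes.
law, evil = np.linalg.solve(detector_basis, readings)
print(f"law-chaos: {law:+.2f}, good-evil: {evil:+.2f}")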

Grek
2021-03-05, 10:14 PM
Yep. With the Chaotic Evil/Lawful Evil example, you'd eventually work out from testing that there's one list of behaviors that increases responsiveness to Detect [CE] but not Detect [LE], one list of behaviors that increases responsiveness to Detect [LE] but not Detect [CE], and one that does both. If List One reads "Hanging out with Slaad, Breaking Eggs On the Wrong Side, and Jaywalking"; List Two is "Hanging Out with Modrons, Filing Taxes, and Saluting"; and List Three is "Murder, Summoning Demons, and Summoning Devils", you'd eventually work out that Detect [CE] is looking at a mixture of lawlessness and evil while Detect [LE] is looking at a mixture of lawfulness and evil.

gloryblaze
2021-03-06, 03:53 AM
In objective morality, you either must define "moral/immoral" as the fixed axis, in which case you've essentially stated "moral = good" and thus it is no longer the #defined answer to the question "what ought I to do?" with no hidden assumptions about goals, or you must accept that the answer to the question "What ought I to do?" depends on what you want.

The moment you insist that "moral == good," you can try to claim that you ought to do the moral thing, but you are now open to the question, "Why should I be moral?" And now we're right back to it depending on what I want.

The only reason "moral" as defined previously - the answer to the question "What ought I to do?" - depends on what you want is because all "ought" questions require motivation. It is fundamentally impossible to have "ought" without a motivation.

I think the fundamental disconnect here is that under the proposed "objective morality" theory, there is an answer to "what ought one do?", even in the total absence of context. Let's go back to your globe/gravity analogy from earlier:

Imagine that there is a globe, and that gravity pulls the people on the globe towards the center of said globe.

Person A is standing on the North Pole of the globe.

Person B is standing on the South Pole of the globe.

You ask each of them "which way is down?"

Person A will say "South is down."

Person B will say "North is down."

Each rationally believes themself to be correct.

Now imagine that this globe isn't actually a world in a universe, but a shoebox diorama sitting on a table. In the diorama, the globe doesn't ever rotate or move. The North Pole is always at the top of the globe, and the South Pole is always at the bottom of the globe.

If you ask Person C, a person in the real world who is observing the shoebox diorama "In this diorama, which way is down?", they can correctly identify that South is down. Any person who says that North is down is objectively wrong.

Therefore, Person B is wrong. Person B lives inside the diorama. They have no way of knowing that they're in a diorama. They have no way of discovering that they're wrong. Nonetheless, they are wrong.

In the context of D&D, Person A is an in-universe character who believes "Good is moral", and Person B is an in-universe character who believes "Evil is moral." Each of them believes they are right. Neither of them has any way of knowing whether or not they are right.

Person C is not an in-universe person at all—not a god, not AO, not anyone. Rather, they're the Dungeon Master. They can define what is and isn't moral in their D&D universe. They can create a world such that Good is moral, such that Evil is moral, or a universe that follows some other ethical system. But if the GM says "Good is moral", then when Person B decides that they ought to be Evil, Person B is objectively incorrect. Not for any in-universe reason, but because the universe is defined such that they are incorrect.

Tanarii
2021-03-06, 10:20 AM
Imagine that there is a globe, and that gravity pulls the people on the globe towards the center of said globe.

Person A is standing on the North Pole of the globe.

Person B is standing on the South Pole of the globe.

You ask each of them "which way is down?"

Person A will say "South is down."

Person B will say "North is down."

Each rationally believes themself to be correct.

Now imagine that this globe isn't actually a world in a universe, but a shoebox diorama sitting on a table. In the diorama, the globe doesn't ever rotate or move. The North Pole is always at the top of the globe, and the South Pole is always at the bottom of the globe.

If you ask Person C, a person in the real world who is observing the shoebox diorama "In this diorama, which way is down?", they can correctly identify that South is down. Any person who says that North is down is objectively wrong.

Therefore, Person B is wrong. Person B lives inside the diorama. They have no way of knowing that they're in a diorama. They have no way of discovering that they're wrong. Nonetheless, they are wrong.

This analogy fails, because if it were true Person B would fall off the bottom of the world.


But if the GM says "Good is moral", then when Person B decided that they ought to be Evil, then person B is objectively incorrect. Not for any in-universe reason, but because the universe is defined such that they are incorrect.
Good = moral doesn't really make sense as an objective statement though. If Moral = "way people ought to behave", in that context "why should I be moral?" is a very important question that must be answered before it can be objective. It's inherently subjective if it isn't defined. Even if the answer is "otherwise you'll be punished for all eternity".

Whereas "Good" = "X kind of behavior" can be, at least theoretically, an objective statement.

OldTrees1
2021-03-06, 12:50 PM
Good = moral doesn't really make sense as an objective statement though. If Moral = "way people ought to behave", in that context "why should I be moral?" is a very important question that must be answered before it can be objective. It's inherently subjective if it isn't defined. Even if the answer is "otherwise you'll be punished for all eternity".

Whereas "Good" = "X kind of behavior" can be, at least theoretically, an objective statement.


There is a campaign and the GM determined the nature of that reality.
There is a question. "What ought one do?". There is no "hidden qualifier" to this question
In order to continue I will define a Term. The word "moral" will be used as a name for the answer to the question "What ought one do?".


Now you have a character ask "why should I be moral?". Since "moral" is a term defined as the answer to the question "What ought one do?", I can substitute and simplify. Your character's question is "Why ought I do what I ought to do?". With no "hidden qualifier". Since your character's question is asking "Why <tautology>?", the answer is that tautologies are self proving statements.

Now you ask a more relevant question. "Well, naming the answer to "what ought one do?" does not answer the question, so what are the contents of the answer? Aka what is moral?". To this every player turns towards the GM. Just like the GM decided how time works in their campaign reality, the GM is the one to answer that question. There are various answers the GM might have.

They may say "I don't know", in which case nobody can know.
They may say "Mu. The campaign is amoral despite what some characters believe." in which case that is true.
They may say "Moral Relativism in in effect this campaign" in which case <Go Read about Moral Relativism> because it is true in this case.
They may say "Gruumish's guess happens to be correct" in which case that is true.
They may say "Kantian Ethics are in effect" in which case ask for clarification because Kant's writing is rather hard to decipher and is correct in this case.
Or they may say "The Good alignment as defined on pg XYZ is moral in this campaign" in which case that is true.
Or they may say something else, but what they say is true for that campaign.

gloryblaze
2021-03-06, 01:13 PM
This analogy fails, because if it were true Person B would fall off the bottom of the world.


In this hypothetical, it’s impossible to tell you’re inside the diorama from within the diorama. The globe has its own gravity that affects the tiny people living on it just like our gravity affects us. From Person B’s perspective, North is indeed down, as gravity pulls them towards the center of the globe. It’s only by completely changing our reference point to that of Person C that we can tell that Person B is wrong. Note that “down” and “the direction gravity pulls while within the diorama” are not synonymous in this hypothetical.

Tanarii
2021-03-06, 04:13 PM
Now you have a character ask "why should I be moral?". Since "moral" is a term defined as the answer to the question "What ought one do?", I can substitute and simplify. Your character's question is "Why ought I do what I ought to do?". With no "hidden qualifier". Since your character's question is asking "Why <tautology>?", the answer is that tautologies are self proving statements.

You are mistaken. The question "Why ought I do what I ought to do?" is very much not a tautology. It is a valid question that requires an answer.

The reason is the actual question is "Why should I decide that I ought to do the thing you're telling me I ought to do?"




They may say "I don't know", in which case nobody can know.
They may say "Mu. The campaign is amoral despite what some characters believe." in which case that is true.
They may say "Moral Relativism in in effect this campaign" in which case <Go Read about Moral Relativism> because it is true in this case.
They may say "Gruumish's guess happens to be correct" in which case that is true.
They may say "Kantian Ethics are in effect" in which case ask for clarification because Kant's writing is rather hard to decipher and is correct in this case.
Or they may say "The Good alignment as defined on pg XYZ is moral in this campaign" in which case that is true.
Or they may say something else, but what they say is true for that campaign.

Those are not answers to the question. Except for the first one.


In this hypothetical, it’s impossible to tell you’re inside the diorama from within the diorama. The globe has its own gravity that affects the tiny people living on it just like our gravity affects us. From Person B’s perspective, North is indeed down, as gravity pulls them towards the center of the globe. It’s only by completely changing our reference point to that of Person C that we can tell that Person B is wrong. Note that “down” and “the direction gravity pulls while within the diorama” are not synonymous in this hypothetical.

Exactly. And since down = "the direction that gravity pulls"*, the analogy fails. Or the person falls off the globe.

*unless your name is Ender

OldTrees1
2021-03-06, 04:55 PM
You are mistaken. The question "Why ought I do what I ought to do?" is very much not a tautology. It is a valid question that requires an answer.

The reason is the actual question is "Why should I decide that I ought to do the thing you're telling me I ought to do?"

Oh you were having a character ask "Why should I decide that I ought to do the thing you're telling me I ought to do?"


Well that means you already saw the Term "moral" being defined so you know it is a label/pointer/name.
You also know the tautological question: "Is the term defined as the name of the right answer to 'What ought one do?' the name of the right answer to 'What ought one do?'?" Aka, if X is Y, then is X Y? Yes, if X is Y then X is Y.
You also know the character exists in a reality created by a GM. The GM can create that campaign reality the way they want. So ultimately any question a character asks about why the campaign reality is the way it is, ends up with the answer "Because the GM created it that way".



You then invented a character that you referenced by "you're" (which can't be me since I am not in the campaign world nor am I talking to your character). That is telling the asking character their theory about the reality they both live in?

So my post was only addressing background information relevant to establishing the context in which your character's question "Why should I decide that I ought to do the thing you're telling me I ought to do?" could be asked?

Well in that case this character "You're" does not know the answer to the question your character is asking. However you the player could get some relevant information if you ask the GM.

Tanarii
2021-03-06, 05:08 PM
Oh you were having a character ask "Why should I decide that I ought to do the thing you're telling me I ought to do?"

From the perspective of the character, not by the character. That's the only perspective from which the objective morality matters, if they're linked like you're doing here. At the GM/Player level it's subjective. They (or more typically the rules of the game they chose to play) are defining morality somehow. Although as I've previously noted, most games don't define "what is moral", instead defining "what behavior maps to what moral label".


Well that means you already saw the Term "moral" being defined so you know it is a label/pointer/name.

That is atypical. You're doing that in your example, but your example doesn't match how it is normally done. But accepted for this example.


You also know the tautological question "Is the term defined as the right answer to "What ought one do?" the right answer to "What ought one do?".

Okay. Accepted as a subjective decision being made by the DM in this case, creating an in-universe objective.


You also know the character exists in a reality created by a GM. The GM can create that campaign reality the way they want. So ultimately any question a character asks about why the campaign reality is the way it is, ends up with the answer "Because the GM created it that way".

You then invented a character that you referenced by "you're" (which can't be me since I am not in the campaign world nor am I talking to your character). That is telling the asking character their theory about the reality they both live in?

So my post was only addressing background information relevant to establishing the context in which your character's question "Why should I decide that I ought to do the thing you're telling me I ought to do?" could be asked?

It's being asked from the perspective of the character, not by the character. And now you've created a link between objective morality and "what they ought to do", which means they are no longer independent. The character's perspective now defines if it is objective or not, and vice versa.


Well in that case this character "You're" does not know the answer to the question your character is asking. However you the player could get some relevant information if you ask the GM.

There's a missing step then. From the perspective of the character, which is the only one that matters for purposes of objective morality if it's being directly linked to "what they ought to do", there is a disconnect between objective morality and why they ought to decide to do the thing being called "moral" within that framework.

OldTrees1
2021-03-06, 05:45 PM
From the perspective of the character, not by the character. That's the only perspective from which the objective morality matters, if they're linked like you're doing here. At the GM/Player level it's subjective. They (or more typically the rules of the game they chose to play) are defining morality somehow. Although as I've previously noted, most games don't define "what is moral", instead defining "what behavior maps to what moral label".

That is atypical. You're doing that in your example, but your example doesn't match how it is normally done. But accepted for this example.

Okay. Accepted as a subjective decision being made by the DM in this case, creating an in-universe objective.

Was this supposed to be further down? Neither the DM nor the subjective decision being made by the DM was mentioned yet. I was just "showing my work" about the term itself.


It's being asked from the perspective of the character, not by the character. And now you've created a link between objective morality and "what they ought to do", which means they are no longer independent. The character's perspective now defines if it is objective or not, and vice versa.

There's a missing step then. From the perspective of the character, which is the only one that matters for purposes of objective morality if it's being directly linked to "what they ought to do", there is a disconnect between objective morality and why they ought to decide to do the thing being called "moral" within that framework.

Here is a miscommunication. In the Ethics branch of Philosophy the term of art "Objective Morality" or "Moral Universalism" has a meaning "slightly" different from how you are using the terms separately. For my own IRL sanity this week I will not go into further detail on that subthread. Not to mention 2-3 of the example GM answers I gave were not examples of Objective Morality as the term is defined, so I am not talking about that subthread at this time.

So there is a question, the concept of an answer to the question, a term to reference the answer to the question, a GM that creates the campaign, and the GM might subjectively decide what that answer is going to be in the campaign reality.

Aka the GM might decide to define "what behavior maps to the labels moral/amoral/immoral."

Now the question you are having asked is from the character's POV but not the character asking it. I am not sure exactly the nuance there but it is important because you mentioned it twice. I don't think I understand the question well enough to answer it, but I will answer a related question.


Suppose the POV of the character understands the label "moral" as the term I defined it as. So they understand the concept of it as a pointer to the answer.
Suppose the GM's POV includes the GM defining what behavior maps to the labels moral/amoral/immoral.
Note that in example answers (like Utilitarianism, Moral Relativism, Mu, etc), the answer tries to answer Why it is the answer, and if it is correct then its answer of Why is also correct.
Suppose the POV of the character is coincidentally asking about a behavior that the GM also happened to map to the label moral.
Suppose the following question from the POV of the character: Is behavior X moral? Ought I do behavior X? To answer those questions they would need reasons or evidence. So they ask Why is behavior X moral? Why ought I do behavior X?


Those last 4 questions have the answer "The character does not know". That won't stop characters from jumping to conclusions about what behaviors they ought or ought not do. But they don't know.

Suppose instead the question asked from the POV of the character was: Assuming X is the answer to the overall question of "What ought one do?", why is it the right answer? It is an end in and of itself, and it happens to be the right one from all the possible answers that are ends in and of themselves. But why was this one the right one? There are infinite answers that are self consistent ends in and of themselves. Why was this one the right one?

I fear the answer to that question is "Because that is the way this campaign reality was created", however there is no way for that to be known from the character's POV, and it is probably unsatisfying from that POV.

Which is why I was trying to be so rigorous and picky about the definitions / terms / naming I was using. Even giving it as big a benefit of the doubt as I could, this is the limits of knowledge from the POV of characters. Characters can have subjective moral theories based on moral intuitions, but lack the knowledge of what the GM defined (unlike alignments).

From the perspective of the character, they cannot know morality and thus rely on their subjective beliefs based on their subjective moral intuitions. Including those that believe in an amoral reality. Or believe mutually contradicting things. Or that disagree with each other. Even objective alignment, despite allowing the character the potential to access full knowledge of alignment, does not fix this inherent ignorance of morality.

Mechalich
2021-03-06, 05:59 PM
So my post was only addressing background information relevant to establishing the context in which your character's question "Why should I decide that I ought to do the thing you're telling me I ought to do?" could be asked?


The common case answer to this question (where 'I' represents some agent with awareness of the morality structure of the reality) is 'because doing so will have positive outcomes and not doing so will have negative outcomes.'

D&D has a major issue in that, because of decades of poorly conceived, generally contradictory (at times do to some fairly clear differences in moral theory between specific authors), and in some cases just plain bad, writing, the variance in outcomes between being a good-aligned person, a neutral-aligned person, and an evil-aligned person is insufficiently clear. Essentially the alignments, which are supposedly objective, can end up representing, in terms of beings, places, or actions assigned those alignments, outright contradictory things. If alignment X = Moral value 1, but alignment X also equals more value 2 and moral values 1 and 2 are - to the average audience member - explicitly not equal, then there's a problem.

The whole 'should we kill the orc babies' argument is emblematic of this. If there's an objective morality system in place, there should be a clear answer to the question (including if the answer varies depending on circumstances). D&D instead vacillates and dodges. D&D's 'objective morality' is analogous to a compass whose needle spins endlessly.

Segev
2021-03-06, 07:10 PM
There is a campaign and the GM determined the nature of that reality.
There is a question. "What ought one do?". There is no "hidden qualifier" to this question
In order to continue I will define a Term. The word "moral" will be used as a name for the answer to the question "What ought one do?".


Now you have a character ask "why should I be moral?". Since "moral" is a term defined as the answer to the question "What ought one do?", I can substitute and simplify. Your character's question is "Why ought I do what I ought to do?". With no "hidden qualifier". Since your character's question is asking "Why <tautology>?", the answer is that tautologies are self proving statements.

Now you ask a more relevant question. "Well, naming the answer to "what ought one do?" does not answer the question, so what are the contents of the answer? Aka what is moral?". To this every player turns towards the GM. Just like the GM decided how time works in their campaign reality, the GM is the one to answer that question. There are various answers the GM might have.

They may say "I don't know", in which case nobody can know.
They may say "Mu. The campaign is amoral despite what some characters believe." in which case that is true.
They may say "Moral Relativism in in effect this campaign" in which case <Go Read about Moral Relativism> because it is true in this case.
They may say "Gruumish's guess happens to be correct" in which case that is true.
They may say "Kantian Ethics are in effect" in which case ask for clarification because Kant's writing is rather hard to decipher and is correct in this case.
Or they may say "The Good alignment as defined on pg XYZ is moral in this campaign" in which case that is true.
Or they may say something else, but what they say is true for that campaign.


The DM has defined the morality for the setting. Let's say - because it's easy to use since we're familiar with the form - it's the D&D alignment grid, or something close enough that we don't need to worry about the differences.

You have defined "moral" as "what one ought to do." By defining it this way, you have made it impossible to answer without there being an underlying motivation.

You ought to do your homework so you learn the subject and get a good grade. You ought to be nice to your sister so she is healthy and happy, your parents don't punish you for being mean to her, and she'll be nice to you in return. You ought to take piano lessons so you can learn to play the piano and attract that girl who likes pianists. You ought to ask that girl out so she might go on a date with you.

There is no answer, not because the campaign is amoral, but because you have framed the question badly. The campaign has a defined morality. Good and evil are objectively present (even if that "objectiveness" is the DM declaring answers). You haven't demonstrated the campaign is amoral by asking the question, "What ought I to do?" with absolutely no motivation provided. You've simply made the question meaningless.

The question is as meaningless as, "What color is down?" with no other context, because you've removed the ability for "ought" to have any motivation, and thus it has no meaning. It is a word that requires motivation in order to justify itself. You ought to do things for reasons. There is never something you ought to do with absolutely no motivation behind why.

NigelWalmsley
2021-03-06, 08:08 PM
Now you have a character ask "why should I be moral?". Since "moral" is a term defined as the answer to the question "What ought one do?", I can substitute and simplify.

Except you can't. The fact that someone says "you ought to do X" doesn't inherently mean you ought to do X. It just means that they think you ought to do X. Similarly, even if there's a property of the universe that uses the same words we use to describe the morality of actions, that doesn't make them the same. The idea that the universe being Utilitarian or Kantian or whatever would resolve moral debates reflects a fundamental misunderstanding of moral debates. There are people in the real world who think the universe agrees with their morality, and who think they have proof of that. But not everyone agrees with them!


D&D, instead vacillates and dodges. D&D's 'objective morality' is analogous to a compass whose needle spins endlessly.

The alternative isn't really all that much better though. The reason "do we kill the baby Orcs" is such a troublesome question is because once you accept the notion of Objective Morality, it becomes very difficult to say that you shouldn't, but people nonetheless feel deeply uncomfortable actually doing it. Trying to shoehorn everything into a pair of Good/Evil and Law/Chaos binaries is simply not something that correlates very well with people's moral intuitions if you're operating at enough detail to start asking serious moral questions.

OldTrees1
2021-03-06, 08:27 PM
There is no answer, not because the campaign is amoral, but because you have framed the question badly. The campaign has a defined morality. Good and evil are objectively present (even if that "objectiveness" is the DM declaring answers). You haven't demonstrated the campaign is amoral by asking the question, "What ought I to do?" with absolutely no motivation provided. You've simply made the question meaningless.

Odd, I don't recall claiming the campaign was amoral. I thought I was quite clear that the GM decided what it would be (including possibilities like 2 different forms of amoral, Moral Relativism, and a list of more conventional examples).

Segev, at this point I have to assume your issue is not with my communication of the topic, but rather that you and I disagree close to the trunk of Meta Ethics (https://en.wikipedia.org/wiki/Meta-ethics#Moral_semantics). I have not made the question "meaningless". You have complained that the question I described is impossible to answer without an underlying motivation. I have stated that an underlying motivation would defeat the purpose of the question. I have described, in detail, the concept of a right answer (a very common meta ethical position). You seem to reject the concept of a right answer (a very unusual meta ethical position) and instead focus on claims of "X if you want Y". With sufficient rigor those claims are descriptive claims, which don't cross the Is-Ought boundary (https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem).

While this is a dense topic and I have written as such, I think we are at an impasse. I hope the further research I pointed you towards (especially meta ethics) is sufficient for you to forgive me for disengaging.


Except you can't. The fact that someone says "you ought to do X" doesn't inherently mean you ought to do X. It just means that they think you ought to do X. Similarly, even if there's a property of the universe that uses the same words we use to describe the morality of actions, that doesn't make them the same. The idea that the universe being Utilitarian or Kantian or whatever would resolve moral debates reflects a fundamental misunderstanding of moral debates. There are people in the real world who think the universe agrees with their morality, and who think they have proof of that. But not everyone agrees with them!

This is not a reply to what you quoted and does not appear to disagree with me.



At this point I do have to exit this subthread. It is a tangent at best and has either stalled out or requires dredging even deeper into meta ethics.

If people want to return to the topic of Alignments without the subject of Morality, then I would eagerly join them.

Saint-Just
2021-03-06, 09:07 PM
I think people are kinda missing the point of the question here. Saint-Just isn't asking "Does Detect Good really detect whether or not a character adheres to the IRL Objective Morality?". I mean, obviously they're not, that's an absurd thing to ask. They're asking how people would figure out the criteria by which the various Detect [Alignment] spells decide how strongly to light stuff up when cast. And for that, you'd use some behavioral studies:

...

Study Four: Is Alignment A Good Proxy For [Insert Ethics System Here]?
This one is a bit cheeky. Nobody can agree on which ethics system is correct. But we can usually agree on which people are experts on which ethics systems. For this experiment, we check everyone's alignment, then have a panel of experts on [Insert Ethics System Here] interview our test subjects and then rate their level of adherence to [Insert Ethics System Here]. Then we check everyone's alignment again, just to be sure that being declared Highly Ethical by the Mer-Pope (or whoever else we're bringing in on our panel for [Insert Ethics System Here]) isn't one of the things that affects alignment. If it turns out that there's a strong correlation between [Alignment #3] and having the approval of the Federated Council of Giant Spiders Who Eat Puppies, that tells us something about the relationship between [Alignment #3] and Being A Giant Spider and/or Eating Puppies.
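
(For illustration, here is a minimal sketch of how the Study Four numbers might be crunched. Everything in it is invented for the example - the panel's 0-10 adherence ratings, the signed aura readings, and the before/after re-check - but it shows the shape of the analysis: a plain correlation, plus a guard against the panel's verdict itself shifting alignment.)

from math import sqrt

def pearson(xs, ys):
    # Plain Pearson correlation; no external libraries needed.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One row per subject: (panel adherence rating 0-10,
# aura before the interview, aura after), all numbers invented.
subjects = [(9, 4, 4), (2, -3, -3), (7, 3, 3), (5, 0, 0), (8, 2, 2)]

ratings = [s[0] for s in subjects]
before  = [s[1] for s in subjects]
after   = [s[2] for s in subjects]

# Guard: being declared Highly Ethical by the panel must not itself
# move the needle, or the study is measuring its own interference.
assert before == after, "the panel's verdict is shifting alignment!"

print(f"adherence vs. aura: r = {pearson(ratings, after):+.2f}")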

Now, none of this actually tells us whether or not we want to be [Good]. After all, in 3.5 the Puppy Eating Arson Spiders From Baator will detect faintly of [Good] if someone casts Protection From Evil on them, so Detect Good is clearly not a foolproof way of deciding if someone is a decent sort or not. But equally clearly, it can at least give us some information about the probable probity of people who detect as [Good] relative to those who detect as [Evil].

Surprisingly, my initial thought was mostly about this: whether alignment is a good or bad proxy. But it was also about less-than-ideal situations.

Going back to the campaign which inspired my question (though the question seems to be worthwhile on its own, given the discussion it generated): it was mostly about... counterculture, so to speak. The official position (not the GM's, but the Advisory Council's? Society's? In general it's what everybody knows, and knows that everybody else knows) is that alignments are what they are in D&D. Murder is Evil, charity is Good, etc. Moral and objective. There is no obvious discrimination (it's Sigil after all; a devil and a deva could have had a peaceful chat there even before everyone was forced to stick together or die) but you do have an "alignment" field on your ID, you can have voluntary segregation, etc. And some people spread the lore that alignment isn't what you think it is (not in a bid to convince everybody; it's not a revolution, it's not an active challenge to the current order, it's an underground circumventing norms).

And it seems to me that in such circumstances, and in general short of the whole society cooperating in that venture, discovering even alignment-as-a-proxy is incredibly hard; doubly so if there are enough people convinced of each of the two positions on less-than-precise grounds.

The idea was that either objective morality, or at least a proxy really well aligned with one system, may exist and you still would not know it for sure.

Mechalich
2021-03-06, 09:31 PM
The alternative isn't really all that much better though. The reason "do we kill the baby Orcs" is such a troublesome question is because once you accept the notion of Objective Morality, it becomes very difficult to say that you shouldn't, but people nonetheless feel deeply uncomfortable actually doing it. Trying to shoehorn everything into a pair of Good/Evil and Law/Chaos binaries is simply not something that correlates very well with people's moral intuitions if you're operating at enough detail to start asking serious moral questions.

Generally I agree. It's very difficult to structure a hypothetical objective morality: the demands of universality collide with the nature/nurture problem of moral foundations for individuals.

Objective morality works fine with regards to things that are inherently evil in some fashion like fiends or undead or even something like mind flayers whose biology is predicated on destroying the minds of other sapient beings simply to exist. Orcs in a strictly Tolkien-based sense, where they are corrupt creations of an irredeemable dark lord that literally cannot be 'good', are fine. Orcs in a 'potentially a PC race' sense, well, then you have problems. Objective morality therefore works better when all the moral actors available are humans, because few would label any human culture inherently evil, and an objective moral system that accepts cultural variations can be constructed. It also works better when the moral axes are simplified to a single line with fairly obvious endpoints - usually a single benevolent creator and a single malevolent opposition. Essentially, simplification either eliminates or minimizes many of the nasty edge cases that make people so uncomfortable. For example, in the Wheel of Time the question 'do we kill the baby Trollocs' is an obvious yes, because Trollocs aren't people, they're living bioweapons and the idea of playing as a Trolloc is ridiculous.

D&D has things that are very clearly inherently evil, like fiends, while at the same time it has a lot of things, like orcs and drow, that are 'evil' because they were raised in evil cultures. Accepting that those beings are actually evil, objectively, means internalizing a lot of grimdark, because essentially the various evil deities that control the evil races are engaged in the mass poisoning of souls forever and there doesn't seem to be anything that can be done to stop it.

NigelWalmsley
2021-03-06, 09:44 PM
Objective morality works fine with regards to things that are inherently evil in some fashion like fiends or undead or even something like mind flayers whose biology is predicated on destroying the minds of other sapient beings simply to exist.

That's true, but it's also frankly unnecessary. I don't need some kind of external aspect-of-the-universe morality to tell me that the Mind Flayers are the bad guys, I can just observe that A) they eat people and B) I'm a people, and therefore conclude that we're probably not going to be friends. Which, I think, is the most telling argument against D&D-style Alignment: even in the places where it doesn't cause problems, it doesn't solve them either.


Orcs in a strictly Tolkien-based sense, where they are corrupt creations of an irredeemable dark lord that literally cannot be 'good', are fine.

Honestly, even with Tolkien Orcs, people are going to have a really hard time with the idea that it's correct to burn down the Orc village and hunt down every Orc child you can find. People have a lot of empathy for anything that is remotely like a person, and "these people are irredeemably evil and must be purged from the world for the good of all" pattern-matches to some extremely nasty ideologies. There's a reason that, in more recent sources, the irredeemably Evil enemies often reproduce in non-standard ways that bypass the problem. There aren't baby Orks or Flood children for you to worry about killing, in no small part because the authors of that setting realized that no one really wants to be confronted with that question.

Grek
2021-03-06, 10:54 PM
Except you can't. The fact that someone says "you ought to do X" doesn't inherently mean you ought to do X. It just means that they think you ought to do X. Similarly, even if there's a property of the universe that describes actions in the same words we use for morality, that doesn't make them the same.

You absolutely can and it absolutely does. I know this seems like a weird answer, but the weirdness is rooted more in the existence of a DM who can definitively and unambiguously state that your character is aware of the objectively moral action for your character to take in a given situation than in the nature of moral debate itself. If the DM says 'killing baby orcs is Evil in this campaign setting and your character is aware of that fact' there's no room to argue the point. This is very different from how things work IRL (where there is no DM to feed you objective moral truths), but if we're accepting the ability of the DM to define moral truth inside the setting, it's an inescapable conclusion.


And it seems to me that in such circumstances, and in general short of the whole society cooperating in that venture, discovering even alignment-as-a-proxy is incredibly hard; doubly so if there are enough people convinced of each of the two positions on less-than-precise grounds.

I don't follow; what exactly could the non-cooperative members of society do to sabotage the results of the tests? Intentionally lie to confound the results? It's not as if psychology doesn't already have methods for dealing with that sort of experimental error. Interfere with getting together a panel of Hextorite clerics to opine on the relative conformity of various experimental subjects to the Hextorite ethos? They have holy books: just tally up all of the normative statements made within and use that to write a rubric. It's science; the whole objective is to come up with tests where it doesn't matter what people think the answer 'should' be.
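
(For illustration, the tally-up-a-rubric step might be sketched like so. The "holy text" lines, the repetition-based weighting, and the scoring rule are all invented; a real version would need careful coding of interview responses, but the mechanics are just counting:)

from collections import Counter

# Normative statements tallied from a hypothetical holy text; a precept
# repeated more often weighs more in the rubric.
statements = [
    "obey the strong", "crush the weak", "obey the strong",
    "wage war without mercy", "obey the strong", "crush the weak",
]
rubric = Counter(statements)
total_weight = sum(rubric.values())

def adherence(endorsed):
    # Weighted fraction of the rubric this subject endorses in interview.
    # Counter returns 0 for precepts not in the rubric.
    return sum(rubric[p] for p in endorsed) / total_weight

print(adherence({"obey the strong"}))                    # 0.50
print(adherence({"obey the strong", "crush the weak"}))  # ~0.83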

NigelWalmsley
2021-03-06, 11:27 PM
I know this seems like a weird answer, but the weirdness is rooted more in the existence of a DM who can definitively and unambiguously state that your character is aware of the objectively moral action for your character to take in a given situation than in the nature of moral debate itself. If the DM says 'killing baby orcs is Evil in this campaign setting and your character is aware of that fact' there's no room to argue the point.

That still doesn't work. Consider a simple moral dilemma: the Trolley Problem. Five people on one track, one person on the other track, you can choose which group of people dies. There are various reasons that people think one or the other possible answer to this question is correct.

Suppose, for the moment, that you're a Utilitarian who believes that, because five people is more than one person, you should pull the lever so the one person dies. You think this is the good/right/morally correct thing to do. Now suppose that you are transported into an alternate universe, indistinguishable from our own, except that you have absolute moral certainty that, in this universe, pulling the lever is Evil as an objective property of reality. Should you change your position on the Trolley Problem? Of course not. You didn't believe you should pull the lever because it was Good, you believed you should pull the lever because it saved four net lives. It still does that, so as far as you're concerned that's still the right answer.

What is happening is not that your moral theory is now wrong, just that your moral theory and the universe's disagree. I suppose you could describe that as "your action is Evil and you are Evil", but that's fundamentally not very useful because you don't think of yourself as Evil and typical definitions of Evil will not accurately predict your future actions. Frankly, I'm not even sure there's a fundamental difference between the hypothetical universe and our own. You could make the reasonable argument that our universe has the "objective morality" of "no actions carry any inherent moral value, positive or negative", and that every ethical theory (except certain strains of Nihilism) is in disagreement with it in the same way that our Utilitarian is in disagreement with the "don't pull the lever" universe.

Mechalich
2021-03-07, 12:57 AM
That still doesn't work. Consider a simple moral dilemma: the Trolley Problem. Five people on one track, one person on the other track, you can choose which group of people dies. There are various reasons that people think one or the other possible answer to this question is correct.

Suppose, for the moment, that you're a Utilitarian who believes that, because five people is more than one person, you should pull the lever so the one person dies. You think this is the good/right/morally correct thing to do. Now suppose that you are transported into an alternate universe, indistinguishable from our own, except that you have absolute moral certainty that, in this universe, pulling the lever is Evil as an objective property of reality. Should you change your position on the Trolley Problem? Of course not. You didn't believe you should pull the lever because it was Good, you believed you should pull the lever because it saved four net lives. It still does that, so as far as you're concerned that's still the right answer.

What is happening is not that your moral theory is now wrong, just that your moral theory and the universe's disagree. I suppose you could describe that as "your action is Evil and you are Evil", but that's fundamentally not very useful because you don't think of yourself as Evil and typical definitions of Evil will not accurately predict your future actions. Frankly, I'm not even sure there's a fundamental difference between the hypothetical universe and our own. You could make the reasonable argument that our universe has the "objective morality" of "no actions carry any inherent moral value, positive or negative", and that every ethical theory (except certain strains of Nihilism) is in disagreement with it in the same way that our Utilitarian is in disagreement with the "don't pull the lever" universe.

Assuming that the new universe's moral theory trumps your personal moral theory, then yes, your moral theory is in fact wrong. The trick is that in order for this to be meaningful there have to be moral consequences. In the real world there are arguably no moral consequences to any action. There are societal consequences, because other human beings may dislike those actions and therefore act accordingly, up to the point of mandating death as a consequence, but those consequences aren't 'moral' unless you suggest that society or the will of society is capable of making moral determinations.

In a fictional universe the key difference is that there is usually some being/entity/aspect of the universe itself capable of making moral determinations. Now, the reason why this is so is ultimately because the author said so and it's a fiat action, but accepting this is part of the fundamental suspension of disbelief necessary to appreciate the fictional universe. In this case a disagreement with the moral determinations offered in a fictional universe isn't a matter of ethical contention but instead a failure to suspend disbelief sufficiently to accept an alternative reality where morality actually works as stated. And, because most of the audience is going to evaluate the plausibility of a fictional universe's moral construction not based on philosophical debate but rather on their personal acquired viewpoint as to what is or is not moral in an instinctive way, strong deviations from conventional morality will have a tendency to lose the audience even when the hypothetical fictional world is explicitly presented as a thought experiment.

This is, in some sense, the grimdark problem. If the morality posited by a fictional universe is unacceptable - because the in-universe 'god' has written laws the reader can't stand - or if it's simply pointless - because the universe is written in such a way that everyone is ultimately doomed and being 'good' is a sucker's game - then why should anyone care what happens in the story? Moral calibration is tricky. To go back to the trolley problem example, anyone can declare that, in fictional universe A, throwing the switch to move the trolley to kill the one person instead of the five people it's headed towards is wrong, but in order to make that universe work for story-telling you have to construct the explanation behind that such that a Utilitarian reader will accept the answer as plausible. That's the hard part.

Grek
2021-03-07, 01:45 AM
Suppose, for the moment, that you're a Utilitarian who believes that, because five people is more than one person, you should pull the lever so the one person dies. You think this is the good/right/morally correct thing to do. Now suppose that you are transported into an alternate universe, indistinguishable from our own, except that you have absolute moral certainty that, in this universe, pulling the lever is Evil as an objective property of reality. Should you change your position on the Trolley Problem? Of course not. You didn't believe you should pull the lever because it was Good, you believed you should pull the lever because it saved four net lives. It still does that, so as far as you're concerned that's still the right answer.
I'm not talking about a universe where pulling the lever is [Evil] like unholy water is, or where you have an unshakable conviction that Society (or perhaps just the DM) disapproves of people who would pull the lever. I'm talking about a universe where the in-universe character believes that it would be wrong and bad and contrary to the standards of behavior which the character themself endorses to pull the lever. That's what I meant when I was talking about there being an Objective Morality in-setting, where the character knows objectively what is right and wrong in a way that is distinct from what that character's player might think of the hypothetical.

Segev
2021-03-07, 06:04 PM
The alternative isn't really all that much better though. The reason "do we kill the baby Orcs" is such a troublesome question is because once you accept the notion of Objective Morality, it becomes very difficult to say that you shouldn't, but people nonetheless feel deeply uncomfortable actually doing it. Trying to shoehorn everything into a pair of Good/Evil and Law/Chaos binaries is simply not something that correlates very well with people's moral intuitions if you're operating at enough detail to start asking serious moral questions.

It depends on what the objective morality is, actually. I would posit that, while, yes, if you set up objective morality such that you declare "it is good and right to kill baby orcs," you can fiat that into being for your campaign setting, it is not an essential quality of objective morality that it always be morally justified to kill baby orcs. That is a choice on the part of the designer of the objective morality system. (Remembering that we are discussing fictional worlds, here, where the "objective morality" can be a well-designed one or a poorly-designed one, based on the skill and effort of the creator.)


Odd, I don't recall claiming the campaign was amoral. I thought I was quite clear that the GM decided what it would be (including possibilities like two different forms of amoral, Moral Relativism, and a list of more conventional examples).

One of the answers the DM may give according to you is the following:

They may say "Mu. The campaign is amoral despite what some characters believe." in which case that is true.

That is what I was responding to when I said "it is not because the setting is amoral." I responded thus because this was the only one of the answers your hypothetical DMs gave that actually dealt with a situation where there was no underlying motivation. All your other examples have underlying motivations of "you should because you want to align with this alignment."


Segev, at this point I have to assume your issue is not with my communication of the topic, but rather that you and I disagree close to the trunk of Meta Ethics (https://en.wikipedia.org/wiki/Meta-ethics#Moral_semantics). I have not made the question "meaningless". You have complained that the question I described is impossible to answer without an underlying motivation. I have stated that an underlying motivation would defeat the purpose of the question.

This is like saying, "Defining 'down' to mean anything objective would defeat the purpose of the question, 'What is down?'"

I do not see how you have a meaningful question when you have rejected the meaning of the words you're using in it, and insist that using the words' meanings ruins the point of the question. What is the purpose of the question? I had thought that you were trying to assert that there is no objective answer to "what is moral?" even with an objective system. I am refuting that with my arguments, and I'll be happy to repeat them with that understanding if it will help. If that is not your position, I fundamentally do not understand what it is you're trying to say.


I have described, in detail, the concept of a right answer (a very common meta ethical position).

No, you haven't. You have described, in detail, a tautology that states that "the right answer is the right answer." When you cannot tell me why it is the right answer, and, fundamentally, why I should care to be "right" by your definition of "right answer," you are playing semantic games and destroying the meaning of the words, the question, and the usefulness of the philosophy you are attempting to build around it.

"What is 2+2?" has a right answer because there is an objective measure of it. You take two things and put two more with them and now you definitely have four things. There's no useful subjectiveness here, and a person who swears his subjective math system allows there to be 15 things when he performs this operation will simply have all his experiments fail, just as the kid who swears up and down that his lego airplane can fly on its own will, upon putting it on the floor and saying "it's flying! see?" be simply imagining it while his plane remains stubbornly, objectively, stationary on the ground.


You seem to reject the concept of a right answer (a very unusual meta ethical position) and instead focus on claims of "X if you want Y".

On the contrary. There are objectively right answers. They tell you what you should do to be good, to be evil, to be lawful, to be chaotic, or whatever objective alignments exist. It is objectively the wrong thing to do to rape and murder your cousin if you are trying to be a good person. It is objectively the right thing to do to be charitable to those in need if you are trying to be a good person. These are "right answers."

The issue is that you're constructing a situation where there can't be a right answer because there isn't actually a fully-specified question. If you cannot tell me why I ought to do something without resorting to saying "because it's what you ought to do," you don't actually have an objective system.


With sufficient rigor those claims are descriptive claims, which don't cross the Is Ought boundary (https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem).

No, you're misapplying this, as the Wikipedia article you linked defines it. There can be no "ought" without a purpose, a motivation. And when discussing objective ethics, "ought" must perforce interface with what "is," simply because for it not to requires there be no objective "oughts."


While this is a dense topic and I have written as such, I think we are at an impasse. I hope the further research I pointed you towards (especially meta ethics) is sufficient for you to forgive me for disengaging.

I think you are misapplying what you linked, and are objectively wrong about what it is saying.

Let's try this at its very simplest: In your formulation of what "ought" and "moral" mean, if Bob asks an objectively correct and honest guru, "What ought I to do?" and then, in response to the guru's answer, Bob asks, "Why ought I do that?" what would the guru's answer be?


If people want to return to the topic of Alignments without the subject of Morality, then I would eagerly join them.

I have never left the subject of alignments. Morality is inherently tied to them, at least when discussing D&D's alignment grid, which explicitly has a moral axis of "good and evil." :smallconfused:

Grek
2021-03-07, 06:44 PM
I have never left the subject of alignments. Morality is inherently tied to them, at least when discussing D&D's alignment grid, which explicitly has a moral axis of "good and evil." :smallconfused:

Point of order, D&D's alignment system includes inanimate pools of liquid (holy water), literal parasites (celestial tapeworms) and spells which can be used to destroy innocent puppies (Holy Smite) as things that detect as [Good] while failing to report the abstract concepts of kindness or mercy as being [Good]. We should probably consider the possibility that Detect Good is not actually detecting the same thing a philosopher is talking about when they wax poetic about The Good.

Segev
2021-03-07, 08:01 PM
Point of order, D&D's alignment system includes inanimate pools of liquid (holy water), literal parasites (celestial tapeworms) and spells which can be used to destroy innocent puppies (Holy Smite) as things that detect as [Good] while failing to report the abstract concepts of kindness or mercy as being [Good]. We should probably consider the possibility that Detect Good is not actually detecting the same thing a philosopher is talking about when they wax poetic about The Good.

I don't dispute that; however, I fail to see how we can discuss whether this is well-formed as a system without touching on morality. The whole point of an alignment system - or at least D&D's grid - is the moral/ethical axis. It very much isn't INTENDED to do a bad job of encompassing what the words describing its alignments mean.

OldTrees1
2021-03-07, 11:43 PM
I have never left the subject of alignments. Morality is inherently tied to them, at least when discussing D&D's alignment grid, which explicitly has a moral axis of "good and evil." :smallconfused:

1) I am not addressing the topic I disengaged from. If you want to learn more, I have pointed you towards the branches of Meta Ethics.
2) The Good alignment and Morality are not necessarily linked. I am willing to engage with discussion on the amoral part of the thought experiment in the opening post.

NigelWalmsley
2021-03-08, 03:25 PM
Assuming that the new universe's moral theory trumps your personal moral theory, then yes, your moral theory is in fact wrong. The trick is that in order for this to be meaningful there have to be moral consequences.

What makes a "moral" consequence different from other kinds of consequence? Suppose that pulling the lever means you go to Hell, and are eaten by whatever kind of Devil eats people forever. That's not some special kind of consequence that makes pulling the lever "Evil" whether you think it is or not, it's just a regular consequence that's described in moral terms. If the Trolley Problem was stated with an "if you pull the lever, someone shoots you in the head" caveat, some people would still pull the lever (and, yes, some people would not). Basically, nothing you can do here can remove the dilemma. You can change it, and you can demand that the language be changed so that we can't call one side "Good" anymore, but there will still be some people who say "pull the lever" and some people who say "don't pull the lever", and as long as that's true you haven't solved anything.


In the real world there are arguably no moral consequences to any action.

But that is a moral consequence. 0 is a number. Just as "I will pay you 0 dollars to do that, and fine you 0 dollars for doing that" is an incentive (just a neutral one), "no moral judgement" is a moral judgement. And yet people who accept that the real world imposes the same consequence for killing as for saving a life do not believe that we must describe both those actions as "neutral".


I'm talking about a universe where the in-universe character believes that it would be wrong and bad and contrary to the standards of behavior which the character themself endorses to pull the lever. That's what I meant when I was talking about there being an Objective Morality in-setting, where the character knows objectively what is right and wrong in a way that is distinct from what that character's player might think of the hypothetical.

That's not a property of the universe, that's a property of the character. I can certainly imagine a character who believes in a different set of ethics than I do. But what I can't imagine -- what I believe to be literally unimaginable -- is a universe such that no character who agreed with my ethics could exist.


It depends on what the objective morality is, actually. I would posit that, while, yes, if you set up objective morality such that you declare "it is good and right to kill baby orcs," you can fiat that into being for your campaign setting, it is not an essential quality of objective morality that it always be morally justified to kill baby orcs. That is a choice on the part of the designer of the objective morality system. (Remembering that we are discussing fictional worlds, here, where the "objective morality" can be a well-designed one or a poorly-designed one, based on the skill and effort of the creator.)

Part of the problem is that the other answer doesn't work great either. If the baby Orcs aren't Evil, then presumably the adult Orcs aren't either (at least, not inherently), so why is it okay to kill them just because they're Orcs?

Chauncymancer
2021-03-08, 03:39 PM
What is happening is not that your moral theory is now wrong, just that your moral theory and the universe's disagree. I suppose you could describe that as "your action is Evil and you are Evil", but that's fundamentally not very useful because you don't think of yourself as Evil and typical definitions of Evil will not accurately predict your future actions.
Serial killers, thieves, terrorists, and military dictators of all kinds can provide you with an internally coherent explanation of why the only moral thing to do is to kill, rob, and enslave people. You've just got to tell some people "Your argument is just wrong, your behavior isn't moral."


Let's try this at its very simplest: In your formulation of what "ought" and "moral" mean, if Bob asks an objectively correct and honest guru, "What ought I to do?" and then, in response to the guru's answer, Bob asks, "Why ought I do that?" what would the guru's answer be?

The guru's answer, assuming he subscribes to a universalist formulation, is most likely going to go: "You need to understand that all human beings have goals they did not choose, and do not have the capacity to change, even if they are not aware of those goals and don't do anything to work on them. Those goals are programmed into all sapient creatures. When I tell you what you 'ought' to do, I'm addressing those specific goals. If you think some other goal is more important than the goals I'm addressing, I'm going to have to tell you that you don't actually get to decide which of your goals are most important. I'm telling you which of your goals is most important. That's my job as a guru."

Point of order, D&D's alignment system includes inanimate pools of liquid (holy water), literal parasites (celestial tapeworms) and spells which can be used to destroy innocent puppies (Holy Smite) as things that detect as [Good] while failing to report the abstract concepts of kindness or mercy as being [Good]. We should probably consider the possibility that Detect Good is not actually detecting the same thing a philosopher is talking about when they wax poetic about The Good.
Medieval and ancient Greek moral miasmic theory. https://www.youtube.com/watch?v=ALWLELLlv6E

Tanarii
2021-03-08, 03:45 PM
The guru's answer, assuming he subscribes to a universalist formulation, is most likely going to go: "You need to understand that all human beings have goals they did not choose, and do not have the capacity to change, even if they are not aware of those goals and don't do anything to work on them. Those goals are programmed into all sapient creatures. When I tell you what you 'ought' to do, I'm addressing those specific goals. If you think some other goal is more important than the goals I'm addressing, I'm going to have to tell you that you don't actually get to decide which of your goals are most important. I'm telling you which of your goals is most important. That's my job as a guru."
That doesn't work the moment you're dealing with someone that doesn't accept received wisdom as proof.

Segev
2021-03-08, 04:00 PM
What makes a "moral" consequence different from other kinds of consequence? Suppose that pulling the lever means you go to Hell, and are eaten by whatever kind of Devil eats people forever. That's not some special kind of consequence that makes pulling the lever "Evil" whether you think it is or not, it's just a regular consequence that's described in moral terms. If the Trolley Problem was stated with an "if you pull the lever, someone shoots you in the head" caveat, some people would still pull the lever (and, yes, some people would not). Basically, nothing you can do here can remove the dilemma. You can change it, and you can demand that the language be changed so that we can't call one side "Good" anymore, but there will still be some people who say "pull the lever" and some people who say "don't pull the lever", and as long as that's true you haven't solved anything.

This is precisely the point I've been trying to make. Thank you for putting it in different words, as I am bad at rewording an argument once I've articulated it one way.


Part of the problem is that the other answer doesn't work great either. If the baby Orcs aren't Evil, then presumably the adult Orcs aren't either (at least, not inherently), so why is it okay to kill them just because they're Orcs?

Presumably, being inherently evil (as we are accepting is the case for sake of this discussion), the adult orcs are acting that way. I tend to agree that "they're inherently evil" is a weak and lazy way of handling it, when "they're evil; just look at all the evil things they're doing!" is just as viable for a DM to say and avoids questions of free will. Who cares whether they're "born evil" or "evil because they're doing evil things" when they are, in fact, doing evil things that mean they need to be stopped?

Unrepentantly evil adults who do things that could get anybody put to death for their villainy can be safely - morally speaking - put to death. Especially in ye olde fantasy funtime games where them being alive and at your mercy is almost an accident on the DM's part because he wasn't trying to set up a moral conundrum and just expected you to kill the bandits-who-are-orcs.

Typically, the "born evil" thing comes up with "women and children" situations, not with "adult orcs who are being evil." And if the orcs aren't being evil, then the DM is doing a bad job of showing the evil of these "always inherently evil" creatures.


Serial killers, thieves, terrorists, and military dictators of all kinds can provide you with an internally coherent explanation of why the only moral thing to do is to kill, rob, and enslave people. You've just got to tell some people "Your argument is just wrong, your behavior isn't moral."

Are we talking about a subjective alignment system? If so, you're only right from your own subjective moral code. If not, then the question arises, again, "what does it mean to be moral?" It's actually quite possible they're factually wrong about what "being good" means, and believe themselves to be good. With objective morality, you can define "good" objectively, and determine that they are not, in fact, correct. But "moral?" If you define "moral == good," then they can validly ask, "Why should I be moral?" If you define "moral" as being "what you ought to do," then if they're happy with the alignment they're living towards, they're making the morally correct choices, because those choices help them attain that alignment.


The guru's answer, assuming he subscribes to a universalist formulation, is most likely going to go: "You need to understand that all human beings have goals they did not choose, and do not have the capacity to change, even if they are not aware of those goals and don't do anything to work on them. Those goals are programmed into all sapient creatures. When I tell you what you 'ought' to do, I'm addressing those specific goals. If you think some other goal is more important than the goals I'm addressing, I'm going to have to tell you that you don't actually get to decide which of your goals are most important. I'm telling you which of your goals is most important. That's my job as a guru."


That doesn't work the moment you're dealing with someone that doesn't accept received wisdom as proof.

Tanarii has the right of it, put more succinctly than I could. (As evidenced, likely, by what I'm about to write at length, here.)

"Why should I care about these programmed goals, if I don't want to engage them?" is a perfectly valid question. "I, the guru, am telling you you should," is not persuasive, and cannot be proven to be correct unless you can crack open the soul of the being and prove that, deep down, he does care about those goals, and is lying to himself.

Even then, though, how do you prove that that being should be pursuing those goals, and that those goals are, in fact, the right ones for a being to have? Perhaps the very BEING is wrong, and should be overcome.

Objective facts can tell you what things are. They cannot tell you what they should be. They can tell you what you should do to accomplish certain things, based on how things are. That is what objectivity gets you: factual truth about what is. This is only useful insofar as you can then act upon this knowledge of reality to achieve real results.

The guru can tell you - assuming he's omniscient and honest on the subject - what your goals really, truly are...but he is still appealing to your goals. To "Why should I want to do X?", when you get to the core of it, the guru can answer with a smile, "It isn't about whether you should; you do."

Telok
2021-03-08, 05:39 PM
Hmm. If alignment is objective then there's something like 2+2=4, thermodynamics, or the value of pi that exists. Given that alignment deals with actions/intent there's probably something like Newtonian physics: "for every [law] X there is an equal and opposite [chaos] Y." Stuff like that, although possibly not as nice and straightforward.

There exist spells to detect the direction (good-evil, law-chaos) and gross magnitude (none-faint-weak-middling-strong-overwhelming) of an aligned target. There exist things to change alignment: actions, spells, items, and the helm of opposite alignment.

Some experiments would be unethical but you could take true neutral subjects and have them perform minor actions until they start to ping an alignment. Then actions until that's reversed. Probably ask the ones you're testing the order/law axis on to figure out if alignment is prescriptive or descriptive and purely action, action/intent, or purely intent. At some point use the opposite helm(s) to check some larger value differences. It might be complex and multivariate. Maybe like "At value X in the [chaos] direction action Y increases rate of change from +1 to +1.3, but action inverse-Y reduces from -1 to -0.4, and once past X+1 intent ceases to matter for action Y but at X+2 it requires stronger intent for inverse-Y to have any effect."
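
(For illustration only, a toy run of that experiment. The magnitude bands follow the detection spells mentioned above, but every action weight and threshold here is invented; the point is just how the prescriptive/descriptive question could be probed with minor actions:)

# Toy model: a subject's law(+)/chaos(-) value drifts with each action,
# and a Detect-style reading reports only direction plus a coarse band.
BANDS = ["none", "faint", "weak", "middling", "strong", "overwhelming"]

def detect(value):
    direction = "law" if value > 0 else "chaos" if value < 0 else "-"
    band = BANDS[min(int(abs(value)), len(BANDS) - 1)]
    return direction, band

# Made-up per-action shifts under a purely descriptive model.
ACTIONS = {"follow safety rules": +0.3, "lol-random stunt": -0.5}

subject, log = 0.0, []               # start from true neutral
while detect(subject)[1] == "none":  # minor actions until the first ping
    subject += ACTIONS["follow safety rules"]
    log.append(round(subject, 1))

print(log, detect(subject))  # [0.3, 0.6, 0.9, 1.2] ('law', 'faint')
# Under a descriptive reading, offsetting works: two safety-rule days
# (+0.6) are roughly cancelled by one lol-random stunt (-0.5).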

If alignment is prescriptive then some amount of puppy kicking, even if you don't like it, will push you into orphan punching or something. Either despite your dislike for it, or it literally alters your mind until you do like it. If it's descriptive then someone who knows they want to be chaotic but also wants to, for example, follow safety rules so they don't die will know they have to offset those orderly actions by doing more lol-random stuff.

I'm thinking that objective alignments might push things much more toward "teams" and away from "morals" unless the alignments are prescriptive and/or mainly intent based. Still, probably leads to the occasional "has to be neutral" person who is genuinely nice but owns a puppy kicking mill for alignment maintenance.

There's also the question of alignment transmission. Can things/beings be contaminated with alignment, and what effect (besides comic fodder like Xykon's crown & Miko) does that have?

NigelWalmsley
2021-03-08, 05:44 PM
Serial killers, thieves, terrorists, and military dictators of all kinds can provide you with an internally coherent explanation of why the only moral thing to do is to kill, rob, and enslave people. You've just got to tell some people "Your argument is just wrong, your behavior isn't moral."

Except we don't just tell those people they're wrong, we have a coherent theory of why those people are wrong. The reason we don't like serial killers isn't that they radiate Evil, it's that they kill people. The only explanatory power Detect Evil has is in the name of the thing it's detecting, and that's just not sufficient for a moral argument. Again, imagine that "Detect Evil" was "Detect Strange" or "Detect Orange" or "Detect Snarf". Why should I set my morality to those things?


Presumably, being inherently evil (as we are accepting is the case for sake of this discussion), the adult orcs are acting that way.

And that gets you to the problem of redundancy. If the Orcs hunt and kill Humans for food, I don't need Detect Evil to tell me that they're Evil. I can make the judgement that cannibalism is bad in the real world without any such tool. Alignment, as conceived in D&D, is fundamentally a tool for explaining why it is okay to go into a dungeon, kill the inhabitants, and walk home with their stuff. Since that is, in the vast majority of circumstances, not actually okay, D&D Alignment does not hold up well to scrutiny. And that's actually okay! Not every tool has to be useful for every purpose. But people seem to really want Alignment to be a general-purpose morality system, and that's just not what it is.


"Why should I care about these programmed goals, if I don't want to engage them?" is a perfectly valid question. "I, the guru, am telling you you should," is not persuasive, and cannot be proven to be correct unless you can crack open the soul of the being and prove that, deep down, he does care about those goals, and is lying to himself.

It's also worth noting that this argument doesn't depend on the nature of D&D as a setting, and as such doesn't adequately answer the implicit question of why there's this kind of universally/cosmically correct morality in D&D, but not (as far as we can tell) in real life. There are gurus in the real world too; if they're actually infallible sources of morality, why are there ethical theories not based on their teachings?

Grek
2021-03-08, 06:34 PM
That's not a property of the universe, that's a property of the character. I can certainly imagine a character who believes in a different set of ethics than I do. But what I can't imagine -- what I believe to be literally unimaginable -- is a universe such that no character who agreed with my ethics could exist.

Again, the supposition is not that no such character could exist, merely that no such character does exist. And the reason that no such character exists is that whenever someone thinks that killing baby orcs is okay (or babies in general, but it always seems to default to orc babies for some reason) they get a little mental *ping* reminding them that the action they are contemplating is wrong and that they shouldn't do it. And because they evolved in a universe where that moral ping has been happening for the entirety of natural history and has never been wrong even once, they're naturally evolved to make it a load-bearing part of their decision making process. Obviously a powerful wizard could snap her fingers and create a new species that doesn't have that bone-deep instinct. But they'd still be getting the moral ping whenever they contemplated doing evil, and they'd still get sent to Gehenna if they actually went ahead and killed any orc babies.

Segev
2021-03-08, 06:40 PM
I am not going to discuss whether there is an objective morality IRL, or what it could be. We can discuss what it means in D&D without that.

Subjective vs objective is well-defined. Objective morality only devolves into teams divorced from morality if the morality is poorly specified and/or constructed. Whatever your or an author's beliefs about morality, it is generally possible to characterize a fictional objective morality that embodies it. I would hazard that, if you find your objectively-defined morality leads to internal contradiction, then you've either mischaracterized your objective morality or discovered a flaw in the rule set you are using to define your fictional objective morality.

This likely means you need to refine it and consider what the contradiction is and why it crops up. This is generally the case in any model one makes and can only test via thought experiments. I have discovered such problems in magic systems I've attempted to make and had to go back to the drawing board, for instance.

I will say that if you find that the objective morality system you have invented for your fictional setting invites what you believe to be bad people to get to classify themselves as "Good" and would classify folks you would say are good people as "Evil," the fault lies in the design of the system, not in the notion of objective morality, itself.

I feel the need to say this because I too often see stories of "too Good" being told where the "good" person is only "good" because the writer insists there are codicils that say this wicked deed they do is good, honest. Generally speaking, those codicils are only "good" by general real-world consensus in extreme generalities and not in all nuanced cases, and the author seems to confuse lack of nuance and objectivity as the same thing, when they most definitely are not.