  1. #61
    Bugbear in the Playground
    Grek
    Join Date: Dec 2013
    Gender: Female

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by OldTrees1 View Post
    What if the Detect spells are Detect Lawful Evil and Detect Chaotic Evil rather than Detect Good and Detect Evil? In this case they are not true opposites but still don't overlap.
    That's not a huge problem. Suppose we have three teams - the [Cubs], the [Cardinals] and the [White Sox] - and the various Detect [Team] spells ping your [Team] more strongly if you:
    • Cheer for your [Team].
    • Boo members of other [Team]s.
    • Play sports wearing the [Team] logo. (strong!)
    • Have a giant foam finger for [Team]. (very strong!)
    • Are currently very very drunk.


    Obviously being sufficiently drunk will make you ping faintly of every team, UNLESS you counteract it with sufficient acts of booing and giant foam finger waving. And obviously booing the Cardinals is an act which shifts you steadily toward both [Cubs] and [White Sox] affiliation. But that doesn't mean that [Team] isn't a real concept in universe, it just means that it has properties which correspond less to morality and more to a sports rivalry.
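
    In rough Python, with every weight, name, and number invented purely for illustration, the model might look like this:

        # Invented weights: each act shifts your [Team] affiliation, and being
        # very drunk adds a faint baseline ping for every team.
        ACT_WEIGHTS = {
            "cheer": 1.0,
            "boo_rival": 1.0,     # credited to every team EXCEPT the one booed
            "play_in_logo": 2.0,  # strong!
            "foam_finger": 3.0,   # very strong!
        }
        TEAMS = ["Cubs", "Cardinals", "White Sox"]
        DRUNK_BASELINE = 0.5

        def detect_team(team, acts, drunk):
            """Return the strength with which Detect [team] pings this fan."""
            strength = DRUNK_BASELINE if drunk else 0.0
            for act, target in acts:
                if act == "boo_rival":
                    if target != team:
                        strength += ACT_WEIGHTS[act]
                elif target == team:
                    strength += ACT_WEIGHTS[act]
            return strength

        fan = [("foam_finger", "Cubs"), ("boo_rival", "Cardinals")]
        for team in TEAMS:
            print(team, detect_team(team, fan, drunk=True))
        # Cubs 4.5, Cardinals 0.5, White Sox 1.5: a faint ping of everything
        # from the drinking, swamped for [Cubs] by the foam finger and booing.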

    If we switch over to the four D&D extreme alignments, then we end up with results like "People who eat kittens get a strong reaction from Detect [Alignment #2] unless they also rob banks." and "People who eat kittens also get a strong reaction from Detect [Alignment #3], even if they do rob banks (in fact, it's even stronger if they do), but NOT if they do so while obeying the Nine Commandments of the Infernal Order of Baatorian Knights." Eventually, you'd puzzle out that you're looking at something like 'Detect Baatorian Morality', which may very well get summarized as 'Detect Lawful Evil'.

    RE: The rest of the thread. It's called the Is-Ought problem, y'all. You can't go from "This is what the universe looks like." to "This is what you ought to do." or vice versa without some pre-existing moral intuitions about what you ought to do given the state of the universe. Even if it's a universal truth that having a giant red foam finger saying CARDINALS #1 makes you magically detect of [Cardinals], that still does not tell you whether detecting of [Cardinals] is something to be desired. You have to supply the 'moral' axiom that the Cardinals are the best sports team and everyone should be a Cardinals fan for yourself. It's not a feature of the universe except insofar as it is a feature of you.

    This is equally true when one of the mystical axes of affiliation has been labelled [Good] instead of [Cardinals]. [Good] is just a label; the fact that the mystical glowing winged humanoids call their team Team [Good] does not necessarily say that you should aspire to detect of [Good]. Ideally, it would work the other way around: once everyone agrees that the Celestial Parrot Men are actually amazing role models and that everyone should strive to live up to their example, we assign the mystical radiation which wafts off of the Celestial Parrot Men the [Good] label, so that everyone knows how society expects them to feel about the Celestial Parrot Men and their disciples. But even then, pointing to the label and saying "It's clearly labelled [Good], obviously it's morally right." is missing the point. You still have to show what aspects of it are so great that it deserves to be given the label of [Good].
    Last edited by Grek; 2021-03-05 at 07:12 PM.

  2. #62
    Titan in the Playground
    OldTrees1
    Join Date: Jul 2013

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by Grek View Post
    That's not a huge problem. Suppose we have three teams - the [Cubs], the [Cardinals] and the [White Sox] - and the various Detect [Team] spells ping your [Team] more strongly if you:
    • Cheer for your [Team].
    • Boo members of other [Team]s.
    • Play sports wearing the [Team] logo. (strong!)
    • Have a giant foam finger for [Team]. (very strong!)
    • Are currently very very drunk.


    Obviously being sufficiently drunk will make you ping faintly of every team, UNLESS you counteract it with sufficient acts of booing and giant foam finger waving. And obviously booing the Cardinals is an act which shifts you steadily toward both [Cubs] and [White Sox] affiliation. But that doesn't mean that [Team] isn't a real concept in universe, it just means that it has properties which correspond less to morality and more to a sports rivalry.
    I did not mean to imply there was a problem. It's just a puzzle of how to extend your solution to the case where there are detect spells for 2 adjacent corners rather than detect spells for the two axes.

    I was curious whether you could differentiate between a pair of opposites, an orthogonal pair, and 2 adjacent corners. However, two adjacent corners are an orthogonal pair, now that I think about it. You just rotate the axes 45 degrees. If you have something that pings on Detect G and Detect C, then we could call it CG. If you have something that detects on CG and CE, then we could call it CCGE, or C for short.
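
    A quick Python sketch of that rotation, with invented coordinates and detection strength modeled as a simple projection onto the detector's axis:

        import math

        def ping(alignment, detector_axis):
            """Detection strength = projection of the alignment onto the detector's axis."""
            (ax, ay), (dx, dy) = alignment, detector_axis
            return (ax * dx + ay * dy) / math.hypot(dx, dy)

        # Coordinates: x runs Lawful(-1) to Chaotic(+1), y runs Evil(-1) to Good(+1).
        detect_cg = (1, 1)   # adjacent-corner detector: Chaotic Good
        detect_ce = (1, -1)  # adjacent-corner detector: Chaotic Evil

        chaotic_good = (1, 1)
        print(ping(chaotic_good, detect_cg), ping(chaotic_good, detect_ce))
        # ~1.41 and 0.0: pings CG strongly, CE not at all.

        purely_chaotic = (1, 0)
        print(ping(purely_chaotic, detect_cg), ping(purely_chaotic, detect_ce))
        # ~0.71 and ~0.71: pings both corner detectors equally, i.e. "CCGE", or C.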
    Last edited by OldTrees1; 2021-03-05 at 07:54 PM.

  3. #63
    Bugbear in the Playground
    Grek
    Join Date: Dec 2013
    Gender: Female

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Yep. With the Chaotic Evil/Lawful Evil example, you'd eventually work out from testing that there's one list of behaviors that increases responsiveness to Detect [CE] but not Detect [LE], one list of behaviors that increases responsiveness to Detect [LE] but not Detect [CE], and one that does both. If List One reads "Hanging out with Slaad, Breaking Eggs On the Wrong Side and Jaywalking"; List Two is "Hanging Out with Modrons, Filing Taxes and Saluting"; and List Three is "Murder, Summoning Demons and Summoning Devils", you'd eventually work out that Detect [CE] is looking at a mixture of lawlessness and evil while Detect [LE] is looking at a mixture of lawfulness and evil.
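
    A minimal Python sketch of that deduction, with made-up hidden (chaos, evil) scores for each behavior and each detector reading its own mixture of the two:

        # Hidden components, invented for illustration: (chaos, evil).
        BEHAVIORS = {
            "hang out with slaad": (1.0, 0.0),
            "jaywalk": (0.5, 0.0),
            "hang out with modrons": (-1.0, 0.0),
            "file taxes": (-0.5, 0.0),
            "murder": (0.0, 1.0),
            "summon demon": (0.5, 1.0),
            "summon devil": (-0.5, 1.0),
        }

        def detect_ce(chaos, evil):
            return max(0.0, chaos + evil)  # a mixture of lawlessness and evil

        def detect_le(chaos, evil):
            return max(0.0, evil - chaos)  # a mixture of lawfulness and evil

        lists = {"CE only": [], "LE only": [], "both": []}
        for name, (c, e) in BEHAVIORS.items():
            ce, le = detect_ce(c, e) > 0, detect_le(c, e) > 0
            if ce and le:
                lists["both"].append(name)
            elif ce:
                lists["CE only"].append(name)
            elif le:
                lists["LE only"].append(name)
        print(lists)
        # CE only: slaad, jaywalking; LE only: modrons, taxes;
        # both: murder, demons, devils. The three lists above fall out.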

  4. #64
    Barbarian in the Playground
    gloryblaze
    Join Date: Jun 2017

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by Segev View Post
    In objective morality, you either must define "moral/immoral" as the fixed axis, in which case you've essentially stated "moral = good" and thus it is no longer the #defined answer to the question "what ought I to do?" with no hidden assumptions about goals, or you must accept that the answer to the question "What ought I to do?" depends on what you want.

    The moment you insist that "moral == good," you can try to claim that you ought to do the moral thing, but you are now open to the question, "Why should I be moral?" And now we're right back to it depending on what I want.

    The only reason "moral" as defined previously - the answer to the question "What ought I to do?" - depends on what you want is because all "ought" questions require motivation. It is fundamentally impossible to have "ought" without a motivation.
    I think the fundamental disconnect here is that under the proposed "objective morality" theory, there is an answer to "what ought one do?", even in the total absence of context. Let's go back to your globe/gravity analogy from earlier:

    Spoiler: Analogy
    Imagine that there is a globe, and that gravity pulls the people on the globe towards the center of said globe.

    Person A is standing on the North Pole of the globe.

    Person B is standing on the South Pole of the globe.

    You ask each of them "which way is down?"

    Person A will say "South is down."

    Person B will say "North is down."

    Each rationally believes themself to be correct.

    Now imagine that this globe isn't actually a world in a universe, but a shoebox diorama sitting on a table. In the diorama, the globe doesn't ever rotate or move. The North Pole is always at the top of the globe, and the South Pole is always at the bottom of the globe.

    If you ask Person C, a person in the real world who is observing the shoebox diorama "In this diorama, which way is down?", they can correctly identify that South is down. Any person who says that North is down is objectively wrong.

    Therefore, Person B is wrong. Person B lives inside the diorama. They have no way of knowing that they're in a diorama. They have no way of discovering that they're wrong. Nonetheless, they are wrong.


    In the context of D&D, Person A is an in-universe character who believes "Good is moral", and Person B is an in-universe character who believes "Evil is moral." Each of them believes they are right. Neither of them has any way of knowing whether or not they are right.

    Person C is not an in-universe person at all—not a god, not AO, not anyone. Rather, they're the Dungeon Master. They can define what is and isn't moral in their D&D universe. They can create a world such that Good is moral, such that Evil is moral, or a universe that follows some other ethical system. But if the GM says "Good is moral", then when Person B decides that they ought to be Evil, Person B is objectively incorrect. Not for any in-universe reason, but because the universe is defined such that they are incorrect.
    Last edited by gloryblaze; 2021-03-06 at 03:54 AM.

  5. #65
    Titan in the Playground
    Tanarii
    Join Date: Sep 2015

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by gloryblaze View Post
    Spoiler: Analogy
    Imagine that there is a globe, and that gravity pulls the people on the globe towards the center of said globe.

    Person A is standing on the North Pole of the globe.

    Person B is standing on the South Pole of the globe.

    You ask each of them "which way is down?"

    Person A will say "South is down."

    Person B will say "North is down."

    Each rationally believes themself to be correct.

    Now imagine that this globe isn't actually a world in a universe, but a shoebox diorama sitting on a table. In the diorama, the globe doesn't ever rotate or move. The North Pole is always at the top of the globe, and the South Pole is always at the bottom of the globe.

    If you ask Person C, a person in the real world who is observing the shoebox diorama "In this diorama, which way is down?", they can correctly identify that South is down. Any person who says that North is down is objectively wrong.

    Therefore, Person B is wrong. Person B lives inside the diorama. They have no way of knowing that they're in a diorama. They have no way of discovering that they're wrong. Nonetheless, they are wrong.
    This analogy fails, because if it were true Person B would fall off the bottom of the world.

    Quote Originally Posted by gloryblaze View Post
    But if the GM says "Good is moral", then when Person B decides that they ought to be Evil, Person B is objectively incorrect. Not for any in-universe reason, but because the universe is defined such that they are incorrect.
    Good = moral doesn't really make sense as an objective statement though. If Moral = "way people ought to behave", in that context "why should I be moral?" is a very important question that must be answered before it can be objective. It's inherently subjective if it isn't defined. Even if the answer is "otherwise you'll be punished for all eternity".

    Whereas "Good" = "X kind of behavior" can be, at least theoretically, an objective statement.

  6. #66
    Titan in the Playground
    OldTrees1
    Join Date: Jul 2013

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by Tanarii View Post
    Good = moral doesn't really make sense as an objective statement though. If Moral = "way people ought to behave", in that context "why should I be moral?" is a very important question that must be answered before it can be objective. It's inherently subjective if it isn't defined. Even if the answer is "otherwise you'll be punished for all eternity".

    Whereas "Good" = "X kind of behavior" can be, at least theoretically, an objective statement.
    • There is a campaign and the GM determined the nature of that reality.
    • There is a question. "What ought one do?". There is no "hidden qualifier" to this question.
    • In order to continue I will define a Term. The word "moral" will be used as a name for the answer to the question "What ought one do?".


    Now you have a character ask "why should I be moral?". Since "moral" is a term defined as the answer to the question "What ought one do?", I can substitute and simplify. Your character's question is "Why ought I do what I ought to do?". With no "hidden qualifier". Since your character's question is asking "Why <tautology>?", the answer is that tautologies are self-proving statements.
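
    A toy Python illustration of that substitution step; the definition string is just this thread's working definition, nothing official:

        # Treat "be moral" as a defined term and expand the definition in place.
        DEFINITIONS = {"be moral": "do what one ought to do"}

        def expand(question, definitions):
            for term, meaning in definitions.items():
                question = question.replace(term, meaning)
            return question

        print(expand("Why should I be moral?", DEFINITIONS))
        # -> "Why should I do what one ought to do?"
        # Reading "should" as "ought", this is "Why ought I do what I ought
        # to do?", the tautology described above.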

    Now you ask a more relevant question. "Well, naming the answer to "what ought one do?" does not answer the question, so what are the contents of the answer? Aka, what is moral?". To this every player turns towards the GM. Just like the GM decided how time works in their campaign reality, the GM is the one to answer that question. There are various answers the GM might have.
    • They may say "I don't know", in which case nobody can know.
    • They may say "Mu. The campaign is amoral despite what some characters believe." in which case that is true.
    • They may say "Moral Relativism in in effect this campaign" in which case <Go Read about Moral Relativism> because it is true in this case.
    • They may say "Gruumish's guess happens to be correct" in which case that is true.
    • They may say "Kantian Ethics are in effect" in which case ask for clarification because Kant's writing is rather hard to decipher and is correct in this case.
    • Or they may say "The Good alignment as defined on pg XYZ is moral in this campaign" in which case that is true.
    • Or they may say something else, but what they say is true for that campaign.
    Last edited by OldTrees1; 2021-03-06 at 12:55 PM.

  7. #67
    Barbarian in the Playground
    gloryblaze
    Join Date: Jun 2017

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by Tanarii View Post
    This analogy fails, because if it were true Person B would fall off the bottom of the world.
    In this hypothetical, it’s impossible to tell you’re inside the diorama from within the diorama. The globe has its own gravity that affects the tiny people living on it just like our gravity affects us. From Person B’s perspective, North is indeed down, as gravity pulls them towards the center of the globe. It’s only by completely changing our reference point to that of Person C that we can tell that Person B is wrong. Note that “down” and “the direction gravity pulls while within the diorama” are not synonymous in this hypothetical.
    Last edited by gloryblaze; 2021-03-06 at 01:13 PM.

  8. #68
    Titan in the Playground
    Tanarii
    Join Date: Sep 2015

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by OldTrees1 View Post
    Now you have a character ask "why should I be moral?". Since "moral" is a term defined as the answer to the question "What ought one do?", I can substitute and simplify. Your character's question is "Why ought I do what I ought to do?". With no "hidden qualifier". Since your character's question is asking "Why <tautology>?", the answer is that tautologies are self-proving statements.
    You are mistaken. The question "Why ought I do what I ought to do?" is very much not a tautology. It is a valid question that requires an answer.

    The reason is that the actual question is "Why should I decide that I ought to do the thing you're telling me I ought to do?"

    • They may say "I don't know", in which case nobody can know.
    • They may say "Mu. The campaign is amoral despite what some characters believe." in which case that is true.
    • They may say "Moral Relativism in in effect this campaign" in which case <Go Read about Moral Relativism> because it is true in this case.
    • They may say "Gruumish's guess happens to be correct" in which case that is true.
    • They may say "Kantian Ethics are in effect" in which case ask for clarification because Kant's writing is rather hard to decipher and is correct in this case.
    • Or they may say "The Good alignment as defined on pg XYZ is moral in this campaign" in which case that is true.
    • Or they may say something else, but what they say is true for that campaign.
    None of those are answers to the question, except for the first one.

    Quote Originally Posted by gloryblaze View Post
    In this hypothetical, it’s impossible to tell you’re inside the diorama from within the diorama. The globe has its own gravity that affects the tiny people living on it just like our gravity affects us. From Person B’s perspective, North is indeed down, as gravity pulls them towards the center of the globe. It’s only by completely changing our reference point to that of Person C that we can tell that Person B is wrong. Note that “down” and “the direction gravity pulls while within the diorama” are not synonymous in this hypothetical.
    Exactly. And since down = "the direction that gravity pulls"*, the analogy fails. Or the person falls off the globe.

    *unless your name is Ender

  9. #69
    Titan in the Playground
    OldTrees1
    Join Date: Jul 2013

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by Tanarii View Post
    You are mistaken. The question "Why ought I do what I ought to do?" is very much not a tautology. It is a valid question that requires an answer.

    The reason is that the actual question is "Why should I decide that I ought to do the thing you're telling me I ought to do?"
    Oh you were having a character ask "Why should I decide that I ought to do the thing you're telling me I ought to do?"

    • Well that means you already saw the Term "moral" being defined so you know it is a label/pointer/name.
    • You also know the tautological question "Is the term defined as the name of the right answer to "What ought one do?" the name of the right answer to "What ought one do?"?" Aka, if X is Y, then is X Y? Yes, if X is Y then X is Y.
    • You also know the character exists in a reality created by a GM. The GM can create that campaign reality the way they want. So ultimately any question a character asks about why the campaign reality is the way it is, ends up with the answer "Because the GM created it that way".


    You then invented a character that you referenced by "you're" (which can't be me, since I am not in the campaign world nor am I talking to your character). That character is telling the asking character their theory about the reality they both live in?

    So my post was only addressing background information relevant to establishing the context in which your character's question "Why should I decide that I ought to do the thing you're telling me I ought to do?" could be asked?

    Well, in that case this character "you're" does not know the answer to the question your character is asking. However, you, the player, could get some relevant information if you ask the GM.
    Last edited by OldTrees1; 2021-03-06 at 04:58 PM.

  10. #70
    Titan in the Playground
    Tanarii
    Join Date: Sep 2015

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by OldTrees1 View Post
    Oh you were having a character ask "Why should I decide that I ought to do the thing you're telling me I ought to do?"
    From the perspective of the character, not by the character. That's the only perspective from which the objective morality matters, if they're linked like you've done here. At the GM/Player level it's subjective. They (or more typically the rules of the game they chose to play) are defining morality somehow. Although, as I've previously noted, most games don't define "what is moral", instead defining "what behavior maps to what moral label".

    Quote Originally Posted by OldTrees1 View Post
    Well that means you already saw the Term "moral" being defined so you know it is a label/pointer/name.
    That is atypical. You're doing that in your example, but your example doesn't match how it is normally done. But accepted for this example.

    Quote Originally Posted by OldTrees1 View Post
    You also know the tautological question "Is the term defined as the name of the right answer to "What ought one do?" the name of the right answer to "What ought one do?"?"
    Okay. Accepted as a subjective decision being made by the DM in this case, creating an in-universe objective.

    Quote Originally Posted by OldTrees1 View Post
    You also know the character exists in a reality created by a GM. The GM can create that campaign reality the way they want. So ultimately any question a character asks about why the campaign reality is the way it is, ends up with the answer "Because the GM created it that way".

    You then invented a character that you referenced by "you're" (which can't be me, since I am not in the campaign world nor am I talking to your character). That character is telling the asking character their theory about the reality they both live in?

    So my post was only addressing background information relevant to establishing the context in which your character's question "Why should I decide that I ought to do the thing you're telling me I ought to do?" could be asked?
    It's being asked from the perspective of the character, not by the character. And now you've created a link between objective morality and "what they ought to do", which means they are no longer independent. The character's perspective now defines if it is objective or not, and vice versa.

    Quote Originally Posted by OldTrees1 View Post
    Well, in that case this character "you're" does not know the answer to the question your character is asking. However, you, the player, could get some relevant information if you ask the GM.
    There's a missing step then. From the perspective of the character, which is the only one that matters for purposes of objective morality if it's being directly linked to "what they ought to do", there is a disconnect between objective morality and why they ought to decide to do the thing being called "moral" within that framework.

  11. #71
    Titan in the Playground
    OldTrees1
    Join Date: Jul 2013

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by Tanarii View Post
    From the perspective of the character, not by the character. That's the only perspective from which the objective morality matters, if they're linked like you've done here. At the GM/Player level it's subjective. They (or more typically the rules of the game they chose to play) are defining morality somehow. Although, as I've previously noted, most games don't define "what is moral", instead defining "what behavior maps to what moral label".

    That is atypical. You're doing that in your example, but your example doesn't match how it is normally done. But accepted for this example.

    Okay. Accepted as a subjective decision being made by the DM in this case, creating an in-universe objective.
    Was this supposed to be further down? Neither the DM nor the subjective decision being made by the DM was mentioned yet. I was just "showing my work" about the term itself.

    Quote Originally Posted by Tanarii View Post
    It's being asked from the perspective of the character, not by the character. And now you've created a link between objective morality and "what they ought to do", which means they are no longer independent. The character's perspective now defines if it is objective or not, and vice versa.

    There's a missing step then. From the perspective of the character, which is the only one that matters for purposes of objective morality if it's being directly linked to "what they ought to do", there is a disconnect between objective morality and why they ought to decide to do the thing being called "moral" within that framework.
    Here is a miscommunication. In the Ethics branch of Philosophy, the term of art "Objective Morality" or "Moral Universalism" has a meaning "slightly" different from how you are using the terms separately. For my own IRL sanity this week I will not go into further detail on that subthread. Not to mention 2-3 of the example GM answers I gave were not examples of Objective Morality as the term is defined, so I am not talking about that subthread at this time.

    So there is a question, the concept of an answer to the question, a term to reference the answer to the question, a GM that creates the campaign, and the GM might subjectively decide what that answer is going to be in the campaign reality.

    Aka the GM might decide to define "what behavior maps to the labels moral/amoral/immoral."

    Now the question you are having asked is from the character's POV, but not asked by the character. I am not sure exactly what the nuance there is, but it is important because you mentioned it twice. I don't think I understand the question well enough to answer it, but I will answer a related question.

    • Suppose the POV of the character understands the label "moral" as the term I defined it as. So they understand the concept of it as a pointer to the answer.
    • Suppose the GM's POV includes the GM defining what behavior maps to the labels moral/amoral/immoral.
    • Note that in example answers (like Utilitarianism, Moral Relativism, Mu, etc), the answer tries to answer Why it is the answer, and if it is correct then its answer of Why is also correct.
    • Suppose the POV of the character is coincidentally asking about a behavior that the GM also happened to map to the label moral.
    • Suppose the following question from the POV of the character: Is behavior X moral? Ought I do behavior X? To answer those questions they would need reasons or evidence. So they ask Why is behavior X moral? Why ought I do behavior X?


    Those last 4 questions have the answer "The character does not know". That won't stop characters from jumping to conclusions about what behaviors they ought or ought not do. But they don't know.

    Suppose instead the question asked from the POV of the character was: Assuming X is the answer to the overall question of "What ought one do?", why is it the right answer? It is an end in and of itself, and it happens to be the right one from all the possible answers that are ends in and of themselves. But why was this one the right one? There are infinite answers that are self-consistent ends in and of themselves. Why was this one the right one?

    I fear the answer to that question is "Because that is the way this campaign reality was created", however there is no way for that to be known from the character's POV, and it is probably unsatisfying from that POV.

    Which is why I was trying to be so rigorous and picky about the definitions / terms / naming I was using. Even giving it as big a benefit of the doubt as I could, these are the limits of knowledge from the POV of characters. Characters can have subjective moral theories based on moral intuitions, but lack the knowledge of what the GM defined (unlike alignments).

    From the perspective of the character, they cannot know morality and thus rely on their subjective beliefs based on their subjective moral intuitions. Including those that believe in an amoral reality. Or believe mutually contradicting things. Or that disagree with each other. Even objective alignment, despite allowing the character the potential to access full knowledge of alignment, does not fix this inherent ignorance of morality.
    Last edited by OldTrees1; 2021-03-06 at 05:53 PM.

  12. #72
    Troll in the Playground
    Mechalich
    Join Date: Jul 2015

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by OldTrees1 View Post
    So my post was only addressing background information relevant to establishing the context in which your character's question "Why should I decide that I ought to do the thing you're telling me I ought to do?" could be asked?
    The common case answer to this question (where "I" represents some agent with awareness of the morality structure of the reality) is "because doing so will have positive outcomes and not doing so will have negative outcomes."

    D&D has a major issue in that, because of decades of poorly conceived, generally contradictory (at times due to some fairly clear differences in moral theory between specific authors), and in some cases just plain bad writing, the variance in outcomes between being a good-aligned person, a neutral-aligned person, and an evil-aligned person is insufficiently clear. Essentially the alignments, which are supposedly objective, can end up representing, in terms of the beings, places, or actions assigned those alignments, outright contradictory things. If alignment X = moral value 1, but alignment X also equals moral value 2, and moral values 1 and 2 are - to the average audience member - explicitly not equal, then there's a problem.

    The whole 'should we kill the orc babies' argument is emblematic of this. If there's an objective morality system in place, there should be a clear answer to the question (including if the answer varies depending on circumstances). D&D instead vacillates and dodges. D&D's 'objective morality' is analogous to a compass whose needle spins endlessly.

  13. #73
    Colossus in the Playground
    Segev
    Join Date: Jan 2006

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by OldTrees1 View Post
    • There is a campaign and the GM determined the nature of that reality.
    • There is a question. "What ought one do?". There is no "hidden qualifier" to this question.
    • In order to continue I will define a Term. The word "moral" will be used as a name for the answer to the question "What ought one do?".


    Now you have a character ask "why should I be moral?". Since "moral" is a term defined as the answer to the question "What ought one do?", I can substitute and simplify. Your character's question is "Why ought I do what I ought to do?". With no "hidden qualifier". Since your character's question is asking "Why <tautology>?", the answer is that tautologies are self-proving statements.

    Now you ask a more relevant question. "Well, naming the answer to "what ought one do?" does not answer the question, so what are the contents of the answer? Aka, what is moral?". To this every player turns towards the GM. Just like the GM decided how time works in their campaign reality, the GM is the one to answer that question. There are various answers the GM might have.
    • They may say "I don't know", in which case nobody can know.
    • They may say "Mu. The campaign is amoral despite what some characters believe." in which case that is true.
    • They may say "Moral Relativism in in effect this campaign" in which case <Go Read about Moral Relativism> because it is true in this case.
    • They may say "Gruumish's guess happens to be correct" in which case that is true.
    • They may say "Kantian Ethics are in effect" in which case ask for clarification because Kant's writing is rather hard to decipher and is correct in this case.
    • Or they may say "The Good alignment as defined on pg XYZ is moral in this campaign" in which case that is true.
    • Or they may say something else, but what they say is true for that campaign.
    The DM has defined the morality for the setting. Let's say - because it's easy to use since we're familiar with the form - it's the D&D alignment grid, or something close enough that we don't need to worry about the differences.

    You have defined "moral" as "what one ought to do." By defining it this way, you have made it impossible to answer without there being an underlying motivation.

    You ought to do your homework so you learn the subject and get a good grade. You ought to be nice to your sister so she is healthy and happy, your parents don't punish you for being mean to her, and she'll be nice to you in return. You ought to take piano lessons so you can learn to play the piano and attract that girl who likes pianists. You ought to ask that girl out so she might go on a date with you.

    There is no answer, not because the campaign is amoral, but because you have framed the question badly. The campaign has a defined morality. Good and evil are objectively present (even if that "objectiveness" is the DM declaring answers). You haven't demonstrated the campaign is amoral by asking the question, "What ought I to do?" with absolutely no motivation provided. You've simply made the question meaningless.

    The question is as meaningless as, "What color is down?" with no other context, because you've removed the ability for "ought" to have any motivation, and thus it has no meaning. It is a word that requires motivation in order to justify itself. You ought to do things for reasons. There is never something you ought to do with absolutely no motivation behind why.

  14. #74
    NigelWalmsley

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by OldTrees1 View Post
    Now you have a character ask "why should I be moral?". Since "moral" is a term defined as the answer to the question "What ought one do?", I can substitute and simplify.
    Except you can't. The fact that someone says "you ought to do X" doesn't inherently mean you ought to do X. It just means that they think you ought to do X. Similarly, even if there's a property of the universe that describes actions using the same words we use to describe morality, that doesn't make them the same. The idea that the universe being Utilitarian or Kantian or whatever would resolve moral debates reflects a fundamental misunderstanding of moral debates. There are people in the real world who think the universe agrees with their morality, and who think they have proof of that. But not everyone agrees with them!

    Quote Originally Posted by Mechalich View Post
    D&D instead vacillates and dodges. D&D's 'objective morality' is analogous to a compass whose needle spins endlessly.
    The alternative isn't really all that much better though. The reason "do we kill the baby Orcs" is such a troublesome question is because once you accept the notion of Objective Morality, it becomes very difficult to say that you shouldn't, but people nonetheless feel deeply uncomfortable actually doing it. Trying to shoehorn everything into a pair of Good/Evil and Law/Chaos binaries is simply not something that correlates very well with people's moral intuitions if you're operating at enough detail to start asking serious moral questions.

  15. #75
    Titan in the Playground
    OldTrees1
    Join Date: Jul 2013

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by Segev View Post
    There is no answer, not because the campaign is amoral, but because you have framed the question badly. The campaign has a defined morality. Good and evil are objectively present (even if that "objectiveness" is the DM declaring answers). You haven't demonstrated the campaign is amoral by asking the question, "What ought I to do?" with absolutely no motivation provided. You've simply made the question meaningless.
    Odd, I don't recall claiming the campaign was amoral. I thought I was quite clear that the GM decided what it would be (including possibilities like 2 different forms of amorality, Moral Relativism, and a list of more conventional examples).

    Segev, at this point I have to assume your issue is not with my communication of the topic, but rather that you and I disagree close to the trunk of Meta Ethics. I have not made the question "meaningless". You have complained that the question I described is impossible to answer without an underlying motivation. I have stated that an underlying motivation would defeat the purpose of the question. I have described, in detail, the concept of a right answer (a very common meta ethical position). You seem to reject the concept of a right answer (a very unusual meta ethical position) and instead focus on claims of "X if you want Y". With sufficient rigor those claims are descriptive claims, which don't cross the Is-Ought boundary.

    While this is a dense topic and I have written as such, I think we are at an impasse. I hope the further research I pointed you towards (especially meta ethics) is sufficient for you to forgive me for disengaging.

    Quote Originally Posted by NigelWalmsley View Post
    Except you can't. The fact that someone says "you ought to do X" doesn't inherently mean you ought to do X. It just means that they think you ought to do X. Similarly, even if there's a property of the universe that describes actions using the same words we use to describe morality, that doesn't make them the same. The idea that the universe being Utilitarian or Kantian or whatever would resolve moral debates reflects a fundamental misunderstanding of moral debates. There are people in the real world who think the universe agrees with their morality, and who think they have proof of that. But not everyone agrees with them!
    This is not a reply to what you quoted and does not appear to disagree with me.



    At this point I do have to exit this subthread. It is a tangent at best and has either stalled out or requires dredging even deeper into meta ethics.

    If people want to return to the topic of Alignments without the subject of Morality, then I would eagerly join them.
    Last edited by OldTrees1; 2021-03-06 at 08:37 PM.

  16. #76
    Barbarian in the Playground
    Saint-Just
    Join Date: Dec 2019

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by Grek View Post
    I think people are kinda missing the point of the question here. Saint-Just isn't asking "Does Detect Good really detect whether or not a character adheres to the IRL Objective Morality?". I mean, obviously they're not, that's an absurd thing to ask. They're asking how people would figure out the criteria by which the various Detect [Alignment] spells decide how strongly to light stuff up when cast. And for that, you'd use some behavioral studies:

    ...

    Study Four: Is Alignment A Good Proxy For [Insert Ethics System Here]?
    This one is a bit cheeky. Nobody can agree on which ethics system is correct. But we can usually agree on which people are experts on which ethics systems. For this experiment, we check everyone's alignment, then have a panel of experts on [Insert Ethics System Here] interview our test subjects and then rate their level of adherence to [Insert Ethics System Here]. Then we check everyone's alignment again, just to be sure that being declared Highly Ethical by the Mer-Pope (or whoever else we're bringing in on our panel for [Insert Ethics System Here]) isn't one of the things that affects alignment. If it turns out that there's a strong correlation between [Alignment #3] and having the approval of the Federated Council of Giant Spiders Who Eat Puppies, that tells us something about the relationship between [Alignment #3] and Being A Giant Spider and/or Eating Puppies.

    Now, none of this actually tells us whether or not we want to be [Good]. After all, in 3.5 the Puppy Eating Arson Spiders From Baator will detect faintly of [Good] if someone casts Protection From Evil on them, so Detect Good is clearly not a foolproof way of deciding if someone is a decent sort or not. But equally clearly, it can at least give us some information about the probable probity of people who detect as [Good] relative to those who detect as [Evil].
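
    For concreteness, the statistics behind Study Four could be as simple as a correlation check; here is a minimal Python sketch with entirely invented numbers:

        # Invented data: (Detect [Alignment #3] strength, panel adherence rating).
        subjects = [(0.9, 8.5), (0.7, 7.0), (0.5, 6.0), (0.2, 3.0), (0.1, 2.5)]

        def pearson(pairs):
            """Pearson correlation coefficient between the two columns of pairs."""
            n = len(pairs)
            xs = [x for x, _ in pairs]
            ys = [y for _, y in pairs]
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in pairs)
            sx = sum((x - mx) ** 2 for x in xs) ** 0.5
            sy = sum((y - my) ** 2 for y in ys) ** 0.5
            return cov / (sx * sy)

        print(pearson(subjects))
        # ~0.99 with these made-up numbers: the alignment would be a good
        # proxy for whatever ethos the panel was rating.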
    Surprisingly, my initial thought was mostly about this: a good or bad proxy. But it was also about less-than-ideal situations.

    Going back to the campaign which inspired my question (though the question seems to be worthwhile on its own, given the discussion it generated): it was mostly about... counterculture, so to speak. The official position (not of the GM; of the Advisory Council? Society? In general it's what everybody knows, and knows that everybody else knows) is that alignments are what they are in D&D. Murder is Evil, charity is Good, etc. Moral and objective. There is no obvious discrimination (it's Sigil, after all; a devil and a deva could have had a peaceful chat there even before everyone was forced to stick together or die), but you do have an "alignment" field on your ID, you can have voluntary segregation, etc. And some people spread the lore that alignment isn't what you think it is (not in a bid to convince everybody; it's not a revolution, it's not an active challenge to the current order, it's an underground circumventing norms).

    And it seems to me that in such circumstances, and in general short of the whole society cooperating in that venture, discovering even alignment-as-a-proxy is incredibly hard; doubly so if there are enough people convinced of each of two positions on less-than-precise grounds.

    The idea was that either objective morality, or at least a proxy really well-aligned with one system, may exist and you still will not know it for sure.

  17. #77
    Troll in the Playground
    Mechalich
    Join Date: Jul 2015

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by NigelWalmsley View Post
    The alternative isn't really all that much better though. The reason "do we kill the baby Orcs" is such a troublesome question is because once you accept the notion of Objective Morality, it becomes very difficult to say that you shouldn't, but people nonetheless feel deeply uncomfortable actually doing it. Trying to shoehorn everything into a pair of Good/Evil and Law/Chaos binaries is simply not something that correlates very well with people's moral intuitions if you're operating at enough detail to start asking serious moral questions.
    Generally I agree. It's very difficult to structure a hypothetical objective morality, due to the demands of universality combined with the nature/nurture problem of moral foundations for individuals.

    Objective morality works fine with regards to things that are inherently evil in some fashion like fiends or undead or even something like mind flayers whose biology is predicated on destroying the minds of other sapient beings simply to exist. Orcs in a strictly Tolkien-based sense, where they are corrupt creations of an irredeemable dark lord that literally cannot be 'good' are fine. Orcs in a 'potentially a PC race' sense, well, then you have problems. Objective morality therefore works better when all the moral actors available are humans, because few would label any human culture inherently evil and an objective moral system that accepts cultural variations can be constructed. It also works better when the moral axes are simplified to a single line with fairly obvious endpoints - usually a single benevolent creator and a single malevolent opposition. Essentially, simplification either eliminates or minimizes many of the nasty edge cases that make people so uncomfortable. For example, in the Wheel of Time the question 'do we kill the baby Trollocs' is an obvious yes, because Trollocs aren't people, they're living bioweapons and the idea of playing as a Trolloc is ridiculous.

    D&D has things that are both very clearly inherently evil like fiends, while at the same time it has a lot of things like orcs and drow that are 'evil' because they were raised in evil cultures. Accepting that those beings are actually evil, objectively, means internalizing a lot of grimdark, because essentially the various evil deities that control the evil races are engaged in the mass poisoning of souls forever and there doesn't seem to be anything that can be done to stop it.

  18. #78
    NigelWalmsley

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by Mechalich View Post
    Objective morality works fine with regards to things that are inherently evil in some fashion like fiends or undead or even something like mind flayers whose biology is predicated on destroying the minds of other sapient beings simply to exist.
    That's true, but it's also frankly unnecessary. I don't need some kind of external aspect-of-the-universe morality to tell me that the Mind Flayers are the bad guys, I can just observe that A) they eat people and B) I'm a people, and therefore conclude that we're probably not going to be friends. Which, I think, is the most telling argument against D&D-style Alignment: even in the places where it doesn't cause problems, it doesn't solve them either.

    Orcs in a strictly Tolkien-based sense, where they are corrupt creations of an irredeemable dark lord that literally cannot be 'good' are fine.
    Honestly, even with Tolkien Orcs, people are going to have a really hard time with the idea that it's correct to burn down the Orc village and hunt down every Orc child you can find. People have a lot of empathy for anything that is remotely like a person, and "these people are irredeemably evil and must be purged from the world for the good of all" pattern-matches to some extremely nasty ideologies. There's a reason that, in more recent sources, the irredeemably Evil enemies often reproduce in non-standard ways that bypass the problem. There aren't baby Orks or Flood children for you to worry about killing, in no small part because the authors of that setting realized that no one really wants to be confronted with that question.

  19. #79
    Bugbear in the Playground
    Grek
    Join Date: Dec 2013
    Gender: Female

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by NigelWalmsley View Post
    Except you can't. The fact that someone says "you ought to do X" doesn't inherently mean you ought to do X. It just means that they think you ought to do X. Similarly, even if there's a property of the universe that describes actions using the same words we use to describe morality, that doesn't make them the same.
    You absolutely can and it absolutely does. I know this seems like a weird answer, but the weirdness is rooted more in the existence of a DM who can definitively and unambiguously state that your character is aware of the objectively moral action for your character to take in a given situation than in the nature of moral debate itself. If the DM says 'killing baby orcs is Evil in this campaign setting and your character is aware of that fact', there's no room to argue the point. This is very different from how things work IRL (where there is no DM to feed you objective moral truths), but if we're accepting the ability of the DM to define moral truth inside the setting, it's an inescapable conclusion.

    Quote Originally Posted by Saint-Just View Post
    And it seems to me that in such circumstances, and in general short of the whole society cooperating in that venture, discovering even alignment-as-a-proxy is incredibly hard; doubly so if there are enough people convinced of each of two positions on less-than-precise grounds.
    I don't follow; what exactly could the non-cooperative members of society do to sabotage the results of the tests? Intentionally lie to confound the results? It's not as if psychology doesn't already have methods for dealing with that sort of experimental error. What about getting together a panel of Hextorite clerics to opine on the relative conformity of various experimental subjects to the Hextorite ethos? They have holy books; just tally up all of the normative statements made within and use that to write a rubric. It's science: the whole objective is to come up with tests where it doesn't matter what people think the answer 'should' be.
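
    A toy Python sketch of that rubric-building step; the "holy text" and the ought-word pattern are made up for illustration:

        import re

        HOLY_TEXT = ("Thou shalt honor strength. Thou shalt not show mercy to the "
                     "weak. The faithful must obey the chain of command. War is "
                     "its own reward.")

        # Normative statements are the ones built on ought-words.
        NORMATIVE = re.compile(r"\b(thou shalt(?: not)?|must|ought)\b", re.IGNORECASE)

        rubric = [s.strip() for s in HOLY_TEXT.split(".") if NORMATIVE.search(s)]
        for i, item in enumerate(rubric, 1):
            print(f"{i}. {item}")
        # Each extracted norm becomes one line of the panel's scoring rubric;
        # "War is its own reward" is descriptive, so it drops out.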
    Last edited by Grek; 2021-03-06 at 10:55 PM.

  20. #80
    NigelWalmsley

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by Grek View Post
    I know this seems like a weird answer, but the weirdness is rooted more in the existence of a DM who can definitively and unambiguously state that your character is aware of the objectively moral action for your character to take in a given situation than in the nature of moral debate itself. If the DM says 'killing baby orcs is Evil in this campaign setting and your character is aware of that fact', there's no room to argue the point.
    That still doesn't work. Consider a simple moral dilemma: the Trolley Problem. Five people on one track, one person on the other track, you can choose which group of people dies. There are various reasons that people think one or the other possible answer to this question is correct.

    Suppose, for the moment, that you're a Utilitarian who believes that, because five people is more than one person, you should pull the lever so the one person dies. You think this is the good/right/morally correct thing to do. Now suppose that you are transported into an alternate universe, indistinguishable from our own, except that you have absolute moral certainty that, in this universe, pulling the lever is Evil as an objective property of reality. Should you change your position on the Trolley Problem? Of course not. You didn't believe you should pull the lever because it was Good, you believed you should pull the lever because it saved four net lives. It still does that, so as far as you're concerned that's still the right answer.

    What is happening is not that your moral theory is now wrong, just that your moral theory and the universe's disagree. I suppose you could describe that as "your action is Evil and you are Evil", but that's fundamentally not very useful because you don't think of yourself as Evil and typical definitions of Evil will not accurately predict your future actions. Frankly, I'm not even sure there's a fundamental difference between the hypothetical universe and our own. You could make the reasonable argument that our universe has the "objective morality" of "no actions carry any inherent moral value, positive or negative", and that every ethical theory (except certain strains of Nihilism) is in disagreement with it in the same way that our Utilitarian is in disagreement with the "don't pull the lever" universe.

  21. #81
    Troll in the Playground
    Mechalich
    Join Date: Jul 2015

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by NigelWalmsley View Post
    That still doesn't work. Consider a simple moral dilemma: the Trolley Problem. Five people on one track, one person on the other track, you can choose which group of people dies. There are various reasons that people think one or the other possible answer to this question is correct.

    Suppose, for the moment, that you're a Utilitarian who believes that, because five people is more than one person, you should pull the lever so the one person dies. You think this is the good/right/morally correct thing to do. Now suppose that you are transported into an alternate universe, indistinguishable from our own, except that you have absolute moral certainty that, in this universe, pulling the lever is Evil as an objective property of reality. Should you change your position on the Trolley Problem? Of course not. You didn't believe you should pull the lever because it was Good, you believed you should pull the lever because it saved four net lives. It still does that, so as far as you're concerned that's still the right answer.

    What is happening is not that your moral theory is now wrong, just that your moral theory and the universe's disagree. I suppose you could describe that as "your action is Evil and you are Evil", but that's fundamentally not very useful because you don't think of yourself as Evil and typical definitions of Evil will not accurately predict your future actions. Frankly, I'm not even sure there's a fundamental difference between the hypothetical universe and our own. You could make the reasonable argument that our universe has the "objective morality" of "no actions carry any inherent moral value, positive or negative", and that every ethical theory (except certain strains of Nihilism) is in disagreement with it in the same way that our Utilitarian is in disagreement with the "don't pull the lever" universe.
    Assuming that the new universe's moral theory trumps your personal moral theory, then yes, your moral theory is in fact wrong. The trick is that in order for this to be meaningful there have to be moral consequences. In the real world there are arguably no moral consequences to any action. There are societal consequences, because other human beings may dislike those actions and therefore act accordingly, up to the point of mandating death as a consequence, but those consequences aren't 'moral' unless you suggest that society or the will of society is capable of making moral determinations.

    In a fictional universe the key difference is that there is usually some being/entity/aspect of the universe itself capable of making moral determinations. Now, the reason why this is so is ultimately because the author said so, and it's a fiat action, but accepting this is part of the fundamental suspension of disbelief necessary to appreciate the fictional universe. In this case a disagreement with the moral determinations offered in a fictional universe isn't a matter of ethical contention but instead a failure to suspend disbelief sufficiently to accept an alternative reality where morality actually works as stated. And, because most of the audience is going to evaluate the plausibility of a fictional universe's moral construction not based on philosophical debate but rather on their personal acquired viewpoint as to what is or is not moral in an instinctive way, strong deviations from conventional morality will have a tendency to lose the audience even when the hypothetical fictional world is explicitly presented as a thought experiment.

    This is, in some sense, the grimdark problem. If the morality posited by a fictional universe is unacceptable - because the in-universe 'god' has written laws the reader can't stand - or if it's simply pointless - because the universe is written in such a way that everyone is ultimately doomed and being 'good' is a sucker's game - then why should anyone care what happens in the story? Moral calibration is tricky. To go back to the trolley problem example, anyone can declare that, in fictional universe A, throwing the switch to move the trolley to kill the one person instead of the five people it's headed towards is wrong, but in order to make that universe work for story-telling you have to construct the explanation behind that such that a Utilitarian reader will accept the answer as plausible. That's the hard part.

  22. #82
    Bugbear in the Playground
    Grek
    Join Date: Dec 2013
    Gender: Female

    Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by NigelWalmsley View Post
    Suppose, for the moment, that you're a Utilitarian who believes that, because five people is more than one person, you should pull the lever so the one person dies. You think this is the good/right/morally correct thing to do. Now suppose that you are transported into an alternate universe, indistinguishable from our own, except that you have absolute moral certainty that, in this universe, pulling the lever is Evil as an objective property of reality. Should you change your position on the Trolley Problem? Of course not. You didn't believe you should pull the lever because it was Good, you believed you should pull the lever because it saved four net lives. It still does that, so as far as you're concerned that's still the right answer.
    I'm not talking about a universe where pulling the lever is [Evil] like unholy water is, or where you have an unshakable conviction that Society (or perhaps just the DM) disapproves of people who would pull the lever. I'm talking about a universe where the in-universe character believes that it would be wrong and bad and contrary to the standards of behavior which the character themself endorses to pull the lever. That's what I meant when I was talking about there being an Objective Morality in a setting where the character knows objectively what is right and wrong in a way that is distinct from what that character's player might think of the hypothetical.
    Last edited by Grek; 2021-03-07 at 01:48 AM.

  23. - Top - End - #83
    Colossus in the Playground
     
    Segev's Avatar

    Join Date
    Jan 2006
    Location

    Default Re: [Thought experiment] If alignments are objective how do we know what they represe

    Quote Originally Posted by NigelWalmsley View Post
    The alternative isn't really all that much better though. The reason "do we kill the baby Orcs" is such a troublesome question is because once you accept the notion of Objective Morality, it becomes very difficult to say that you shouldn't, but people nonetheless feel deeply uncomfortable actually doing it. Trying to shoehorn everything into a pair of Good/Evil and Law/Chaos binaries is simply not something that correlates very well with people's moral intuitions if you're operating at enough detail to start asking serious moral questions.
    It depends on what the objective morality is, actually. I would posit that, while, yes, if you set up objective morality such that you declare "it is good and right to kill baby orcs," you can fiat that into being for your campaign setting, it is not an essential quality of objective morality that it always be morally justified to kill baby orcs. That is a choice on the part of the designer of the objective morality system. (Remembering that we are discussing fictional worlds, here, where the "objective morality" can be a well-designed one or a poorly-designed one, based on the skill and effort of the creator.)

    Quote Originally Posted by OldTrees1 View Post
    Odd, I don't recall claiming the campaign was amoral. I thought I was quite clear that the GM decided what it would be (including possibilities like 2 different forms of amoral, Moral Relativism, and a list of more conventional examples).
    One of the answers the DM may give, according to you, is the following:
    Quote Originally Posted by a DM hypothesized by OldTrees1
    They may say "Mu. The campaign is amoral despite what some characters believe." in which case that is true.
    That is what I was responding to when I said "it is not because the setting is amoral." I responded that way because this was the only one of the answers your hypothetical DMs gave that actually dealt with a situation where there was no underlying motivation. All your other examples have underlying motivations of "you should because you want to align with this alignment."

    Quote Originally Posted by OldTrees1 View Post
    Segev, at this point I have to assume your issue is not with my communication of the topic, but rather that you and I disagree close to the trunk of Meta Ethics. I have not made the question "meaningless". You have complained that the question I described is impossible to answer without an underlying motivation. I have stated an underlying motivation would defeat the purpose of the question.
    This is like saying, "Defining 'down' to mean anything objective would defeat the purpose of the question, 'What is down?'"

    I do not see how you have a meaningful question when you have rejected the meaning of the words you're using in it, and insist that using the words' meanings ruins the point of the question. What is the purpose of the question? I had thought you were trying to assert that there is no objective answer to "what is moral?" even with an objective system. I am refuting that with my arguments, and I'll be happy to repeat them with that understanding if it will help. If that is not your position, I fundamentally do not understand what it is you're trying to say.

    Quote Originally Posted by OldTrees1 View Post
    I have described, in detail, the concept of a right answer (a very common meta ethical position).
    No, you haven't. You have described, in detail, a tautology that states that "the right answer is the right answer." When you cannot tell me why it is the right answer, and, fundamentally, why I should care to be "right" by your definition of "right answer," you are playing semantic games and destroying the meaning of the words, the question, and the usefulness of the philosophy you are attempting to build around it.

    "What is 2+2?" has a right answer because there is an objective measure of it. You take two things and put two more with them and now you definitely have four things. There's no useful subjectiveness here, and a person who swears his subjective math system allows there to be 15 things when he performs this operation will simply have all his experiments fail, just as the kid who swears up and down that his lego airplane can fly on its own will, upon putting it on the floor and saying "it's flying! see?" be simply imagining it while his plane remains stubbornly, objectively, stationary on the ground.

    Quote Originally Posted by OldTrees1 View Post
    You seem to reject the concept of a right answer (very unusual meta ethical position) and instead focus on claims of "X if you want Y".
    On the contrary. There are objectively right answers. They tell you what you should do to be good, to be evil, to be lawful, to be chaotic, or whatever objective alignments exist. It is objectively the wrong thing to do to rape and murder your cousin if you are trying to be a good person. It is objectively the right thing to do to be charitable to those in need if you are trying to be a good person. These are "right answers."

    The issue is that you're constructing a situation where there can't be a right answer because there isn't actually a fully-specified question. If you cannot tell me why I ought to do something without resorting to saying "because it's what you ought to do," you don't actually have an objective system.

    Quote Originally Posted by OldTrees1 View Post
    With sufficient rigor those claims are descriptive claims, which don't cross the Is Ought boundary.
    No, you're misapplying this, as the Wikipedia article you linked defines it. There can be no "ought" without a purpose, a motivation. And when discussing objective ethics, "ought" must perforce interface with what "is," simply because for it not to would require that there be no objective "oughts."

    Quote Originally Posted by OldTrees1 View Post
    While this is a dense topic and I have written as such, I think we are at an impasse. I hope the further research I pointed you towards (especially meta ethics) is sufficient for you to forgive me for disengaging.
    I think you are misapplying what you linked, and are objectively wrong about what it is saying.

    Let's try this at its very simplest: In your formulation of what "ought" and "moral" mean, if Bob asks an objectively correct and honest guru, "What ought I to do?" and then, in response to the guru's answer, Bob asks, "Why ought I do that?" what would the guru's answer be?

    Quote Originally Posted by OldTrees1 View Post
    If people want to return to the topic of Alignments without the subject of Morality, then I would eagerly join them.
    I have never left the subject of alignments. Morality is inherently tied to them, at least when discussing D&D's alignment grid, which explicitly has a moral axis of "good and evil."

  24. - Top - End - #84
    Bugbear in the Playground
     
    WhiteWizardGirl

    Join Date
    Dec 2013
    Gender
    Female

    Default Re: [Thought experiment] If alignments are objective how do we know what they represe

    Quote Originally Posted by Segev View Post
    I have never left the subject of alignments. Morality is inherently tied to them, at least when discussing D&D's alignment grid, which explicitly has a moral axis of "good and evil."
    Point of order, D&D's alignment system includes inanimate pools of liquid (holy water), literal parasites (celestial tapeworms) and spells which can be used to destroy innocent puppies (Holy Smite) as things that detect as [Good] while failing to report the abstract concepts of kindness or mercy as being [Good]. We should probably consider the possibility that Detect Good is not actually detecting the same thing a philosopher is talking about when they wax poetic about The Good.
    Last edited by Grek; 2021-03-07 at 06:45 PM.

  25. - Top - End - #85
    Colossus in the Playground
     
    Segev's Avatar

    Join Date
    Jan 2006
    Location

    Default Re: [Thought experiment] If alignments are objective how do we know what they represe

    Quote Originally Posted by Grek View Post
    Point of order, D&D's alignment system includes inanimate pools of liquid (holy water), literal parasites (celestial tapeworms) and spells which can be used to destroy innocent puppies (Holy Smite) as things that detect as [Good] while failing to report the abstract concepts of kindness or mercy as being [Good]. We should probably consider the possibility that Detect Good is not actually detecting the same thing a philosopher is talking about when they wax poetic about The Good.
    I don't dispute that; however, I fail to see how we can discuss whether this is well-formed as a system without touching on morality. The whole point of an alignment system - or at least D&D's grid - is the moral/ethical axis. It very much isn't INTENDED to do a bad job of encompassing the things the words used to describe its alignments mean.

  26. - Top - End - #86
    Titan in the Playground
     
    NecromancerGuy

    Join Date
    Jul 2013

    Default Re: [Thought experiment] If alignments are objective how do we know what they represe

    Quote Originally Posted by Segev View Post
    I have never left the subject of alignments. Morality is inherently tied to them, at least when discussing D&D's alignment grid, which explicitly has a moral axis of "good and evil."
    1) I am not addressing the topic I disengaged from. If you want to learn more, I have pointed you towards the branches of Meta Ethics.
    2) The Good alignment and Morality are not necessarily linked. I am willing to engage with discussion on the amoral part of the thought experiment in the opening post.
    Last edited by OldTrees1; 2021-03-07 at 11:56 PM.

  27. - Top - End - #87

    NigelWalmsley

    Default Re: [Thought experiment] If alignments are objective how do we know what they represe

    Quote Originally Posted by Mechalich View Post
    Assuming that the new universe's moral theory trumps your personal moral theory, than yes, your moral theory is in fact wrong. The trick is that in order for this to be meaningful there have to be moral consequences.
    What makes a "moral" consequence different from other kinds of consequence? Suppose that pulling the lever means you go to Hell, and are eaten by whatever kind of Devil eats people forever. That's not some special kind of consequence that makes pulling the lever "Evil" whether you think it is or not, it's just a regular consequence that's described in moral terms. If the Trolley Problem was stated with an "if you pull the lever, someone shoots you in the head" caveat, some people would still pull the lever (and, yes, some people would not). Basically, nothing you can do here can remove the dilemma. You can change it, and you can demand that the language be changed so that we can't call one side "Good" anymore, but there will still be some people who say "pull the lever" and some people who say "don't pull the lever", and as long as that's true you haven't solved anything.

    Quote Originally Posted by Mechalich View Post
    In the real world there are arguably no moral consequences to any action.
    But that is a moral consequence. 0 is a number. Just as "I will pay you 0 dollars to do that, and fine you 0 dollars for doing that" is an incentive (just a neutral one), "no moral judgement" is a moral judgement. And yet people who accept that the real world imposes the same consequence for killing as for saving a life do not believe that we must describe both those actions as "neutral".

    Quote Originally Posted by Grek View Post
    I'm talking about a universe where the in-universe character believes that it would be wrong and bad and contrary to the standards of behavior which the character themself endorses to pull the lever. That's what I meant when I was talking about there being an Objective Morality in setting where the character knows objectively what is right and wrong in a way that is distinct from what that character's player might think of the hypothetical.
    That's not a property of the universe, that's a property of the character. I can certainly imagine a character who believes in a different set of ethics than I do. But what I can't imagine -- what I believe to be literally unimaginable -- is a universe such that no character who agreed with my ethics could exist.

    Quote Originally Posted by Segev View Post
    It depends on what the objective morality is, actually. I would posit that, while, yes, if you set up objective morality such that you declare "it is good and right to kill baby orcs," you can fiat that into being for your campaign setting, it is not an essential quality of objective morality that it always be morally justified to kill baby orcs. That is a choice on the part of the designer of the objective morality system. (Remembering that we are discussing fictional worlds, here, where the "objective morality" can be a well-designed one or a poorly-designed one, based on the skill and effort of the creator.)
    Part of the problem is that the other answer doesn't work great either. If the baby Orcs aren't Evil, then presumably the adult Orcs aren't either (at least, not inherently), so why is it okay to kill them just because they're Orcs?

  28. - Top - End - #88
    Dwarf in the Playground
     
    DwarfFighterGirl

    Join Date
    Aug 2010

    Default Re: [Thought experiment] If alignments are objective how do we know what they represe

    Quote Originally Posted by NigelWalmsley View Post
    What is happening is not that your moral theory is now wrong, just that your moral theory and the universe's disagree. I suppose you could describe that as "your action is Evil and you are Evil", but that's fundamentally not very useful because you don't think of yourself as Evil and typical definitions of Evil will not accurately predict your future actions.
    Serial killers, thieves, terrorists, and military dictators of all kinds can provide you with an internally coherent explanation of why the only moral thing to do is to kill, rob, and enslave people. You've just got to tell some people "Your argument is just wrong, your behavior isn't moral."
    Quote Originally Posted by Segev View Post
    Let's try this at its very simplest: In your formulation of what "ought" and "moral" mean, if Bob asks an objectively correct and honest guru, "What ought I to do?" and then, in response to the guru's answer, Bob asks, "Why ought I do that?" what would the guru's answer be?
    The guru's answer, assuming he subscribes to a universalist formulation, is most likely going to go: "You need to understand that all human beings have goals they did not choose, and do not have the capacity to change, even if they are not aware of those goals and don't do anything to work on them. Those goals are programmed into all sapient creatures. When I tell you what you 'ought' to do, I'm addressing those specific goals. If you think some other goal is more important than the goals I'm addressing, I'm going to have to tell you that you don't actually get to decide which of your goals are most important. I'm telling you which of your goals is most important. That's my job as a guru."
    Quote Originally Posted by Grek View Post
    Point of order, D&D's alignment system includes inanimate pools of liquid (holy water), literal parasites (celestial tapeworms) and spells which can be used to destroy innocent puppies (Holy Smite) as things that detect as [Good] while failing to report the abstract concepts of kindness or mercy as being [Good]. We should probably consider the possibility that Detect Good is not actually detecting the same thing a philosopher is talking about when they wax poetic about The Good.
    Medieval and ancient Greek moral miasma theory: https://www.youtube.com/watch?v=ALWLELLlv6E
    There is no savior for the savior,
    nor lord for the defender,
    neither father nor mother,
    nothing above.

  29. - Top - End - #89
    Titan in the Playground
     
    Tanarii's Avatar

    Join Date
    Sep 2015

    Default Re: [Thought experiment] If alignments are objective how do we know what they represe

    Quote Originally Posted by Chauncymancer View Post
    The guru's answer, assuming he subscribes to a universalist formulation, is most likely going to go: "You need to understand that all human beings have goals they did not choose, and do not have the capacity to change, even if they are not aware of those goals and don't do anything to work on them. Those goals are programmed into all sapient creatures. When I tell you what you 'ought' to do, I'm addressing those specific goals. If you think some other goal is more important than the goals I'm addressing, I'm going to have to tell you that you don't actually get to decide which of your goals are most important. I'm telling you which of your goals is most important. That's my job as a guru."
    That doesn't work the moment you're dealing with someone who doesn't accept received wisdom as proof.

  30. - Top - End - #90
    Colossus in the Playground
     
    Segev's Avatar

    Join Date
    Jan 2006
    Location

    Default Re: [Thought experiment] If alignments are objective how do we know what they represe

    Quote Originally Posted by NigelWalmsley View Post
    What makes a "moral" consequence different from other kinds of consequence? Suppose that pulling the lever means you go to Hell, and are eaten by whatever kind of Devil eats people forever. That's not some special kind of consequence that makes pulling the lever "Evil" whether you think it is or not, it's just a regular consequence that's described in moral terms. If the Trolley Problem was stated with an "if you pull the lever, someone shoots you in the head" caveat, some people would still pull the lever (and, yes, some people would not). Basically, nothing you can do here can remove the dilemma. You can change it, and you can demand that the language be changed so that we can't call one side "Good" anymore, but there will still be some people who say "pull the lever" and some people who say "don't pull the lever", and as long as that's true you haven't solved anything.
    This is precisely the point I've been trying to make. Thank you for putting it in different words, as I am bad at rewording an argument once I've articulated it one way.

    Quote Originally Posted by NigelWalmsley View Post
    Part of the problem is that the other answer doesn't work great either. If the baby Orcs aren't Evil, then presumably the adult Orcs aren't either (at least, not inherently), so why is it okay to kill them just because they're Orcs?
    Presumably, being inherently evil (as we are accepting is the case for the sake of this discussion), the adult orcs are acting that way. I tend to agree that "they're inherently evil" is a weak and lazy way of handling it, when "they're evil; just look at all the evil things they're doing!" is just as viable for a DM to say and avoids questions of free will. Who cares whether they're "born evil" or "evil because they're doing evil things" when they are, in fact, doing evil things that mean they need to be stopped?

    Unrepentantly evil adults who do things that could get anybody put to death for their villainy can be safely - morally speaking - put to death. Especially in ye olde fantasy funtime games, where their being alive and at your mercy is almost an accident on the DM's part, because he wasn't trying to set up a moral conundrum and just expected you to kill the bandits-who-are-orcs.

    Typically, the "born evil" thing comes up with "women and children" situations, not with "adult orcs who are being evil." And if the orcs aren't being evil, then the DM is doing a bad job of showing the evil of these "always inherently evil" creatures.

    Quote Originally Posted by Chauncymancer View Post
    Serial killers, thieves, terrorists, and military dictators of all kinds can provide you with an internally coherent explanation of why the only moral thing to do is to kill, rob, and enslave people. You've just got to tell some people "Your argument is just wrong, your behavior isn't moral."
    Are we talking about a subjective alignment system? If so, you're only right from within your own subjective moral code. If not, then the question arises, again, "what does it mean to be moral?" It's actually quite possible they're factually wrong about what "being good" means, and believe themselves to be good. With objective morality, you can define "good" objectively, and determine that they are not, in fact, correct. But "moral?" If you define "moral == good," then they can validly ask, "Why should I be moral?" If you define "moral" as "what you ought to do," then if they're happy with the alignment they're living towards, they're making the morally correct choices, because those choices help them attain that alignment.

    Quote Originally Posted by Chauncymancer View Post
    The guru's answer, assuming he subscribes to a universalist formulation, is most likely going to go: "You need to understand that all human beings have goals they did not choose, and do not have the capacity to change, even if they are not aware of those goals and don't do anything to work on them. Those goals are programmed into all sapient creatures. When I tell you what you 'ought' to do, I'm addressing those specific goals. If you think some other goal is more important than the goals I'm addressing, I'm going to have to tell you that you don't actually get to decide which of your goals are most important. I'm telling you which of your goals is most important. That's my job as a guru."
    Quote Originally Posted by Tanarii View Post
    That doesn't work the moment you're dealing with someone that doesn't accept received wisdom as proof.
    Tanarii has the right of it, put more succinctly than I could. (As evidenced, likely, by what I'm about to write at length here.)

    "Why should I care about these programmed goals, if I don't want to engage them?" is a perfectly valid question. "I, the guru, am telling you you should," is not persuasive, and cannot be proven to be correct unless you can crack open the soul of the being and prove that, deep down, he does care about those goals, and is lying to himself.

    Even then, though, how do you prove that that being should be pursuing those goals, and that those goals are, in fact, the right ones for a being to have? Perhaps the very BEING is wrong, and should be overcome.

    Objective facts can tell you what things are. They cannot tell you what they should be. They can tell you what you should do to accomplish certain things, based on how things are. That is what objectivity gets you: factual truth about what is. This is only useful insofar as you can then act upon this knowledge of reality to achieve real results.

    The guru can tell you - assuming he's omniscient and honest on the subject - what your goals really, truly are...but he is still appealing to your goals. To "Why should I want to do X?", when you get to the core of it, the guru can only answer with a smile, "It isn't about whether you should; you do."
