    NecromancerGuy


Re: Useful Morality Subsystems (Alignment Replacements)

    The trolley problem is a good case to consider, because it illustrates the point that not only are there multiple axes on which good can be measured, but that they can actively oppose each other.

    There are several moral principles that could apply to the trolley problem: the principle of "avoid causing death" would have us not pull the lever, while "more survivors is better" would have us pull it. Most people would agree that both of those principles are important - but, when they are in conflict with each other, there's widespread disagreement over which one is more important. People who consider themselves to be good (or at least, in DnD-terms, neutral with good leanings) can and do disagree on what the good thing to do in this situation would be, or whether there even is a good thing to do, or whether there's only one.

    One of the downsides of having a game with objective morality is that conflicts of this sort are difficult to represent well. That's not to say that they can't be represented at all - Miko's aforementioned kerfuffle is a shining example - but the fact that detect good can confirm that you're all on the same team combined with the existence of imminent threats that ping on detect evil does tend to put a damper on them.

    I think that moral foundations theory would be a good place to start if you want to formalize this. The six foundations (Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, Sanctity/Degradation, and Liberty/Oppression) could work very well to represent, in a way that's nuanced enough to be useful for roleplay and characterizations yet simple enough for a game system, both which principles a character considers important and how they measure up in light of those principles.

    There are plenty of real-world examples of conflict between people with differing opinions on which of these moral foundations are more important, so there's lots to draw from if you want conflict between "good" characters. I won't discuss specifics because most of these conflicts are political, but there's fertile ground here.

    A system based on moral foundations theory would also do quite a nice job of representing the law/chaos axis with more nuance than the original nine-alignment grid. Characters who value authority highly are quite different from those who value fairness and loyalty, yet both would be considered "lawful" under the old system. Characters who would choose liberty over authority are quite different from those who would choose liberty over absolutely all else, yet both would be considered "chaotic" under the old system.

    Actually, let's spell this out in full:

• Lawful good: Primarily values care plus at least one of authority, loyalty, fairness, and sanctity. (Depending on the DM, any of these last four may be required. Authority is likely to be required.) May appreciate liberty, but when push comes to shove will choose other principles over it.
    • Neutral good: Values care very highly. Everything else is optional so long as the character doesn't stray into LG or CG territory.
    • Chaotic good: Primarily values care and liberty, and doesn't put much weight on authority. Everything else is optional.
• Lawful neutral: Values at least one of authority, loyalty, fairness, and sanctity. (Depending on the DM, any of these four may be required. Authority is likely to be required.) May appreciate liberty and care, but when push comes to shove will choose other principles over them.
    • Neutral: Either doesn't value anything so much that it's always a clear overriding factor, or has a combination of values that can't be made to fit sensibly into any other alignment.
    • Chaotic neutral: Doesn't value authority, and probably places low or no value on loyalty, fairness, and/or sanctity. Values liberty and/or care enough to avoid being CE but not so much as to be CG.
• Lawful evil: Values at least one of authority, loyalty, fairness, and sanctity. (Depending on the DM, any of these four may be required. Authority is likely to be required.) Does not value care or liberty.
    • Neutral evil: Does not value care. Everything else is optional so long as the character doesn't become LE or CE.
    • Chaotic evil: Does not value any of the moral foundations at all. (No, not even liberty - as much as they may like it for themselves, a CE character does not care about the liberty of others.)
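For anyone who wants to actually mechanize this at the table, here's a rough sketch of how the mapping above might be encoded. To be clear, every name and threshold below is my own invention for illustration - this isn't from any published system, and a real version would want DM-tunable cutoffs (including whether authority specifically is required for the "lawful" column):

```python
# Hypothetical sketch: map per-foundation weights (0.0-1.0) to a rough
# nine-alignment label. Thresholds are arbitrary illustrations.

LAWFUL_FOUNDATIONS = {"authority", "loyalty", "fairness", "sanctity"}

def alignment(values):
    """values is a dict of foundation name -> weight; missing keys count as 0."""
    care = values.get("care", 0.0)
    liberty = values.get("liberty", 0.0)
    lawful_max = max(values.get(f, 0.0) for f in LAWFUL_FOUNDATIONS)

    # Good/evil axis: driven by care, per the list above.
    if care >= 0.7:
        moral = "good"
    elif care >= 0.3:
        moral = "neutral"
    else:
        moral = "evil"

    # Law/chaos axis: the "lawful" foundations weighed against liberty.
    if lawful_max >= 0.7 and lawful_max > liberty:
        ethic = "lawful"
    elif liberty >= 0.7 and values.get("authority", 0.0) < 0.3:
        ethic = "chaotic"
    else:
        ethic = "neutral"

    if moral == "neutral" and ethic == "neutral":
        return "true neutral"
    return f"{ethic} {moral}"
```

Note that a character who values nothing at all comes out "neutral evil" under this sketch rather than CE, which matches the idea that CE requires actively choosing liberty for yourself while denying it to others.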


    There's a lot of nuance here. The only alignment out of the original nine that doesn't have room for significant differences in opinion between members is CE, and that's an alignment that's well-known for being one-note. (It's also an alignment whose members usually end up fighting each other even if they all want exactly the same thing, so if you're looking to make a game where fine ethical disagreements can lead to conflict, you really don't need a way to represent those disagreements here.)

I think it's also important to note that players can disagree with their GM about what's good and evil, and it can be very disheartening to do what you believe is the right thing only to be told that your character is now evil as a result. Worse yet, the only way to restore your character's alignment requires that you agree (or at least pretend to agree) that what you did was wrong - the atonement spell only works on someone who is truly repentant, so someone who thinks they actually did the right thing can't benefit from it!

If your DM were to instead say something to the effect of "OK, that's going to lose you a mark of Sanctity," that would feel a lot better as a player - because if the player thinks that whatever they did to lose the mark was the right thing, then they must not consider sanctity to be important, so they won't take that loss as a mark against them personally.
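Here's a sketch of what that bookkeeping might look like - again, the class name and structure are purely hypothetical, not drawn from any published ruleset:

```python
# Hypothetical sketch: per-character tallies of marks lost against each
# moral foundation, instead of a single good/evil track.

from collections import Counter

class FoundationRecord:
    """Tracks how many marks a character has lost per foundation."""

    def __init__(self):
        self.marks = Counter()

    def lose_mark(self, foundation, reason=""):
        # The reason string is just for table talk; only the tally matters.
        self.marks[foundation] += 1

    def standing(self, foundation):
        # Counter returns 0 for foundations with no marks lost.
        return self.marks[foundation]
```

The point of splitting the record by foundation is exactly the one above: a character can rack up Sanctity marks while staying spotless on Care, and a player who doesn't value sanctity can shrug those marks off without feeling judged.

```python
record = FoundationRecord()
record.lose_mark("sanctity", "desecrated the lich's shrine")
```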

    Quote Originally Posted by Kane0 View Post
    Black for 'refuses to participate'
    Purple for 'requires further context'
    Red for 'out-of-bounds response'
    Since we're talking about the trolley problem, I'm going to have a little fun with it: you should derail the trolley. If you pull the lever at just the right time, you'll send the front wheels down one track and the rear wheels down the other, potentially saving everybody.

    Of course, that's only what I'd do if I had time enough to think. One of the key components of the trolley problem as it's normally posed is that everything happens quickly. It's supposed to be a forced binary choice, and forcing this binary choice is justified by the assumption that there won't be time to look for third options.

    Yet what people who pose the trolley problem almost always miss is that, if it happens too fast to look for third options, then it also happens too fast to reason and philosophize and wring one's hands over the right response. The real answer to the fast trolley problem must be a reaction, with justification being either absent or contrived to fit. You can't have it both ways - either this is an exercise in moral reasoning where we can actually perform moral reasoning, or it's one with a forced binary choice. If it's the latter, then regardless of what you might think that you should do, for many of us the real answer would be "panic and freeze."

    The trolley problem is moderately useful as a contrivance. I think it's only useful if it's acknowledged as a contrivance, though. When we're talking about the trolley problem, we're not really talking about trolleys. We're talking about how we make hard decisions between distasteful options via a fantastical (and therefore comfortably not real-world) example.

    As I said earlier, the trolley problem illustrates the point that there are multiple principles that could be considered good but which can oppose each other. I suspect that any further insights that could be drawn from it would only be further elaborations on that theme. Nitpicking the presentation of the problem may be fun, and I'll admit that I indulged in that nitpicky fun in the above few paragraphs, but seriously, I think the best insights come from acknowledging the contrivance and then, accepting it as the constructed model it is, engaging with it as a genuine problem.

    So, I suppose I'm glowing red.
    Last edited by Herbert_W; 2021-03-07 at 11:07 PM.