  1. - Top - End - #91
    Ettin in the Playground
     
    Telok's Avatar

    Join Date
    Mar 2005
    Location
    61.2° N, 149.9° W
    Gender
    Male

Default Re: [Thought experiment] If alignments are objective how do we know what they represent

    Hmm. If alignment is objective then there's something like 2+2=4, thermodynamics, or the value of pi that exists. Given that alignment deals with actions/intent there's probably something like Newtonian physics; "for every [law] X there is an equal and opposite [chaos] Y." Stuff like that, although possibly not as nice and straightforward.

There exist spells to detect the direction (g-e, l-c) and gross magnitude (no-faint-weak-middling-strong-overwhelming) of an aligned target. There also exist things to change alignment: actions, spells, items, and the opposite helm.

Some experiments would be unethical, but you could take true neutral subjects and have them perform minor actions until they start to ping an alignment, then opposing actions until the reading reverses. Probably ask the ones you're testing on the order/law axis, to figure out whether alignment is prescriptive or descriptive, and whether it's purely action, action plus intent, or purely intent. At some point use the opposite helm(s) to check some larger value differences. It might be complex and multivariate. Maybe like "At value X in the [chaos] direction action Y increases the rate of change from +1 to +1.3, but action inverse-Y reduces it from -1 to -0.4, and once past X+1 intent ceases to matter for action Y, but at X+2 it requires stronger intent for inverse-Y to have any effect."
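For what it's worth, a multivariate model like the one in that made-up example can be sketched as a toy function. Everything here (the threshold X, the +1.3 and -0.4 rates, the 0.8 intent cutoff) is just the post's invented numbers plus assumed fill-ins, not anything from actual game rules:

```python
# Toy sketch of the hypothetical alignment-drift model above.
# All constants are the post's made-up example values or assumptions.

X = 5.0  # hypothetical chaos-axis threshold from the example


def drift(value, action, intent=1.0):
    """Return the change in chaos-axis alignment for one action.

    value:  current position on the chaos axis.
    action: 'Y' pushes toward chaos, 'inverse-Y' pushes toward law.
    intent: 0.0 (none) to 1.0 (full); only matters in some regimes.
    """
    if action == 'Y':
        rate = 1.3 if value >= X else 1.0   # past X, Y's push strengthens
        if value < X + 1:
            rate *= intent                  # below X+1, intent still matters for Y
        return rate
    elif action == 'inverse-Y':
        rate = -0.4 if value >= X else -1.0  # past X, inverse-Y's pull weakens
        if value >= X + 2 and intent < 0.8:
            return 0.0                      # past X+2, weak intent does nothing
        return rate
    return 0.0
```

So a subject sitting at X+1.5 would get the full +1.3 from action Y regardless of intent, while a half-hearted inverse-Y past X+2 accomplishes nothing, which is the sort of regime-dependent behavior the experiments would have to map out.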

If alignment is prescriptive then some amount of puppy kicking, even if you don't like it, will push you into orphan punching or something: either despite your dislike for it, or it literally alters your mind until you do like it. If it's descriptive then someone who knows they want to be chaotic but also wants to, for example, follow safety rules so they don't die will know they have to offset those orderly actions by doing more lol-random stuff.

I'm thinking that objective alignments might push things much more toward "teams" and away from "morals" unless the alignments are prescriptive and/or mainly intent-based. Still, it probably leads to the occasional "has to be neutral" person who is genuinely nice but owns a puppy-kicking mill for alignment maintenance.

There's also the question of alignment transmission. Can things/beings be contaminated with alignment, and what effect (besides comic fodder like Xykon's crown & Miko) does that have?

  2. - Top - End - #92

Default Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by Chauncymancer View Post
    Serial killers, thieves, terrorists, and military dictators of all kinds can provide you with an internally coherent explanation of why the only moral thing to do is to kill, rob, and enslave people. You've just got to tell some people "Your argument is just wrong, your behavior isn't moral."
Except we don't just tell those people they're wrong, we have a coherent theory of why they're wrong. The reason we don't like serial killers isn't that they radiate Evil, it's that they kill people. The only explanatory power Detect Evil has is in the name of the thing it's detecting, and that's just not sufficient for a moral argument. Again, imagine that "Detect Evil" was "Detect Strange" or "Detect Orange" or "Detect Snarf". Why should I set my morality by those things?

    Quote Originally Posted by Segev View Post
    Presumably, being inherently evil (as we are accepting is the case for sake of this discussion), the adult orcs are acting that way.
    And that gets you to the problem of redundancy. If the Orcs hunt and kill Humans for food, I don't need Detect Evil to tell me that they're Evil. I can make the judgement that cannibalism is bad in the real world without any such tool. Alignment, as conceived in D&D, is fundamentally a tool for explaining why it is okay to go into a dungeon, kill the inhabitants, and walk home with their stuff. Since that is, in the vast majority of circumstances, not actually okay, D&D Alignment does not hold up well to scrutiny. And that's actually okay! Not every tool has to be useful for every purpose. But people seem to really want Alignment to be a general-purpose morality system, and that's just not what it is.

    "Why should I care about these programmed goals, if I don't want to engage them?" is a perfectly valid question. "I, the guru, am telling you you should," is not persuasive, and cannot be proven to be correct unless you can crack open the soul of the being and prove that, deep down, he does care about those goals, and is lying to himself.
It's also worth noting that this argument doesn't depend on the nature of D&D as a setting, and as such doesn't adequately answer the implicit question of why there's this kind of universally/cosmically correct morality in D&D but not (as far as we can tell) in real life. There are gurus in the real world too; if they were actually infallible sources of morality, why are there ethical theories not based on their teachings?

  3. - Top - End - #93
    Bugbear in the Playground
     
    WhiteWizardGirl

    Join Date
    Dec 2013
    Gender
    Female

Default Re: [Thought experiment] If alignments are objective how do we know what they represent

    Quote Originally Posted by NigelWalmsley View Post
    That's not a property of the universe, that's a property of the character. I can certainly imagine a character who believes in a different set of ethics than I do. But what I can't imagine -- what I believe to be literally unimaginable -- is a universe such that no character who agreed with my ethics could exist.
    Again, the supposition is not that no character could exist, merely that no such character does exist. And the reason that no such character exists is that whenever someone thinks that killing baby orcs is okay (or babies in general, but it always seems to default to orc babies for some reason) they get a little mental *ping* reminding them that the action they are contemplating is wrong and that they shouldn't do it. And because they evolved in a universe where that moral ping has been happening for the entirety of natural history and has never been wrong even once, they're naturally evolved to make it a load-bearing part of their decision making process. Obviously a powerful wizard could snap her fingers and create a new species that doesn't have that bone-deep instinct. But they'd still be getting the moral ping whenever they contemplated doing evil, and they'd still get sent to Gehenna if they actually went ahead and killed any orc babies.

  4. - Top - End - #94
    Colossus in the Playground
     
    Segev's Avatar

    Join Date
    Jan 2006
    Location

Default Re: [Thought experiment] If alignments are objective how do we know what they represent

I am not going to discuss whether there is an objective morality IRL, or what it could be. We can discuss what it means in D&D without that.

    Subjective vs objective is well-defined. Objective morality only devolves into teams divorced from morality if the morality is poorly specified and/or constructed. Whatever your or an author's beliefs about morality, it is generally possible to characterize a fictional objective morality that embodies it. I would hazard that, if you find your objectively-defined morality leads to internal contradiction, then you've either mischaracterized your objective morality or discovered a flaw in the rule set you are using to define your fictional objective morality.

This likely means you need to refine it and consider what the contradiction is and why it crops up. This is generally the case in any model one makes and can only test via thought experiments. I have discovered such problems in magic systems I've attempted to make and had to go back to the drawing board, for instance.

    I will say that if you find that the objective morality system you have invented for your fictional setting invites what you believe to be bad people to get to classify themselves as "Good" and would classify folks you would say are good people as "Evil," the fault lies in the design of the system, not in the notion of objective morality, itself.

I feel the need to say this because I too often see stories of "too Good" being told where the "good" person is only "good" because the writer insists there are codicils that say this wicked deed they do is good, honest. Generally speaking, those codicils are only "good" by general real-world consensus in extreme generalities and not in all nuanced cases, and the author seems to treat lack of nuance and objectivity as the same thing, when they most definitely are not.
