-
2021-03-08, 05:39 PM (ISO 8601)
- Join Date: Mar 2005
- Location: 61.2° N, 149.9° W
Re: [Thought experiment] If alignments are objective how do we know what they represent?
Hmm. If alignment is objective, then something like 2+2=4, thermodynamics, or the value of pi exists for morality. Given that alignment deals with actions and intent, there's probably something like Newtonian physics: "for every [lawful] X there is an equal and opposite [chaotic] Y." Stuff like that, although possibly not as neat and straightforward.
There exist spells to detect the direction (good–evil, lawful–chaotic) and gross magnitude (none, faint, weak, middling, strong, overwhelming) of an aligned target. There also exist things that change alignment: actions, spells, items, and the Helm of Opposite Alignment.
Some experiments would be unethical, but you could take true neutral subjects and have them perform minor actions until they start to ping an alignment, then more actions until that ping is reversed. Probably ask the ones you're testing on the law/chaos axis, to figure out whether alignment is prescriptive or descriptive, and whether it tracks purely action, action plus intent, or purely intent. At some point, use the Helm(s) of Opposite Alignment to check some larger value differences. The relationship might be complex and multivariate. Maybe something like: "At value X in the [chaos] direction, action Y increases the rate of change from +1 to +1.3, but inverse-Y drops from -1 to -0.4; once past X+1, intent ceases to matter for action Y, but at X+2 it requires stronger intent for inverse-Y to have any effect."
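For what it's worth, that last made-up example could be sketched as a toy model. Everything here is hypothetical and purely illustrative: the action names "Y" and "inverse-Y", the threshold value, and the drift rates are just the numbers from the example above, not anything from an actual rulebook.

```python
# Toy model of the hypothesized alignment-drift rules.
# All numbers are illustrative, taken from the made-up example in the
# post, not from any published game system.

def drift(chaos_score, action, threshold_x=5.0):
    """Return how much one action shifts chaos_score.

    Past the (hypothetical) threshold X, action Y drifts faster
    (+1 -> +1.3) and inverse-Y drifts slower (-1 -> -0.4).
    """
    if action == "Y":
        return 1.3 if chaos_score >= threshold_x else 1.0
    elif action == "inverse-Y":
        return -0.4 if chaos_score >= threshold_x else -1.0
    raise ValueError(f"unknown action: {action}")

def run_trial(actions, start=0.0):
    """Apply a sequence of actions and return the final chaos score."""
    score = start
    for a in actions:
        score += drift(score, a)
    return score
```

The point of a sketch like this is just that the experimenters could fit the piecewise breakpoints empirically: run batches of neutral subjects through fixed action sequences, read off the detection spell's magnitude bands, and see where the rates change.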
If alignment is prescriptive, then some amount of puppy kicking, even if you don't like it, will push you into orphan punching or something: either despite your dislike for it, or because it literally alters your mind until you do like it. If it's descriptive, then someone who knows they want to be chaotic but also wants to, for example, follow safety rules so they don't die will know they have to offset those orderly actions by doing more lol-random stuff.
I'm thinking that objective alignments might push things much more toward "teams" and away from "morals," unless the alignments are prescriptive and/or mainly intent-based. Still, it probably leads to the occasional "has to be neutral" person who is genuinely nice but owns a puppy-kicking mill for alignment maintenance.
There's also the question of alignment transmission. Can things/beings be contaminated with alignment, and what effect (besides comic fodder like Xykon's crown & Miko) does that have?
-
2021-03-08, 05:44 PM (ISO 8601)
- Join Date: Dec 2019
Re: [Thought experiment] If alignments are objective how do we know what they represent?
Except we don't just tell those people they're wrong; we have a coherent theory of why those people are wrong. The reason we don't like serial killers isn't that they radiate Evil, it's that they kill people. The only explanatory power Detect Evil has is in the name of the thing it's detecting, and that's just not sufficient for a moral argument. Again, imagine that "Detect Evil" was "Detect Strange" or "Detect Orange" or "Detect Snarf". Why should I set my morality to those things?
And that gets you to the problem of redundancy. If the Orcs hunt and kill Humans for food, I don't need Detect Evil to tell me that they're Evil. I can make the judgement that cannibalism is bad in the real world without any such tool. Alignment, as conceived in D&D, is fundamentally a tool for explaining why it is okay to go into a dungeon, kill the inhabitants, and walk home with their stuff. Since that is, in the vast majority of circumstances, not actually okay, D&D Alignment does not hold up well to scrutiny. And that's actually okay! Not every tool has to be useful for every purpose. But people seem to really want Alignment to be a general-purpose morality system, and that's just not what it is.
"Why should I care about these programmed goals, if I don't want to engage them?" is a perfectly valid question. "I, the guru, am telling you you should," is not persuasive, and cannot be proven to be correct unless you can crack open the soul of the being and prove that, deep down, he does care about those goals, and is lying to himself.
-
2021-03-08, 06:34 PM (ISO 8601)
- Join Date: Dec 2013
Re: [Thought experiment] If alignments are objective how do we know what they represent?
Again, the supposition is not that no character could exist, merely that no such character does exist. And the reason that no such character exists is that whenever someone thinks that killing baby orcs is okay (or babies in general, but it always seems to default to orc babies for some reason) they get a little mental *ping* reminding them that the action they are contemplating is wrong and that they shouldn't do it. And because they evolved in a universe where that moral ping has been happening for the entirety of natural history and has never been wrong even once, they're naturally evolved to make it a load-bearing part of their decision making process. Obviously a powerful wizard could snap her fingers and create a new species that doesn't have that bone-deep instinct. But they'd still be getting the moral ping whenever they contemplated doing evil, and they'd still get sent to Gehenna if they actually went ahead and killed any orc babies.
-
2021-03-08, 06:40 PM (ISO 8601)
- Join Date: Jan 2006
Re: [Thought experiment] If alignments are objective how do we know what they represent?
I am not going to discuss whether objective morality exists IRL, or what it could be. We can discuss what it means in D&D without that.
Subjective vs objective is well-defined. Objective morality only devolves into teams divorced from morality if the morality is poorly specified and/or constructed. Whatever your or an author's beliefs about morality, it is generally possible to characterize a fictional objective morality that embodies it. I would hazard that, if you find your objectively-defined morality leads to internal contradiction, then you've either mischaracterized your objective morality or discovered a flaw in the rule set you are using to define your fictional objective morality.
This likely means you need to refine it and consider what the contradiction is and why it crops up. This is generally the case with any model one makes that can only be tested via thought experiments. I have discovered such problems in magic systems I've attempted to make and had to go back to the drawing board, for instance.
I will say that if you find that the objective morality system you have invented for your fictional setting invites what you believe to be bad people to get to classify themselves as "Good" and would classify folks you would say are good people as "Evil," the fault lies in the design of the system, not in the notion of objective morality, itself.
I feel the need to say this because I too often see "too Good" stories being told where the "good" person is only "good" because the writer insists there are codicils that say this wicked deed they do is good, honest. Generally speaking, those codicils are only "good" by general real-world consensus in extreme generalities and not in all nuanced cases, and the author seems to treat lack of nuance and objectivity as the same thing, when they most definitely are not.