Results 301 to 330 of 837
  1. - Top - End - #301
    Titan in the Playground
     
    Fyraltari's Avatar

    Join Date
    Aug 2017
    Location
    France
    Gender
    Male

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Grey_Wolf_c View Post
    Huh. Can't place the origin of that one. Can I ask what's the original you are paraphrasing? Is it one of Esme's?

    GW
    Brutha in Small Gods. The phrasing is more along the lines of "In a hundred years, we'll all be dead, true. But right here, right now, we are alive."
    Forum Wisdom

    Mage avatar by smutmulch & linklele.

  2. - Top - End - #302
    Dragon in the Playground Moderator
     
    Peelee's Avatar

    Join Date
    Dec 2009
    Location
    Birmingham, AL
    Gender
    Male

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Doug Lampert View Post
    To paraphrase Pratchett: right here, right now, we are alive.

    Morality may not matter to the dead, but the dead aren't the ones deciding what to do right here, right now.
    Ya know, I'm starting to like this Pratchett guy.
    Cuthalion's art is the prettiest art of all the art. Like my avatar.

    Number of times Roland St. Jude has sworn revenge upon me: 2

  3. - Top - End - #303
    Titan in the Playground
     
    Grey_Wolf_c's Avatar

    Join Date
    Aug 2007

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Peelee View Post
    Ya know, I'm starting to like this Pratchett guy.
    We could do a lot worse than to have a rule that any discussion of morality would need to be done entirely with Pratchett quotes.

    GW
    Interested in MitD? Join us in MitD's thread.
    There is a world of imagination
    Deep in the corners of your mind
    Where reality is an intruder
    And myth and legend thrive
    Quote Originally Posted by The Giant View Post
    But really, the important lesson here is this: Rather than making assumptions that don't fit with the text and then complaining about the text being wrong, why not just choose different assumptions that DO fit with the text?
    Ceterum autem censeo Hilgya malefica est

  4. - Top - End - #304
    Titan in the Playground
     
    Planetar

    Join Date
    Dec 2006
    Location
    Raleigh NC
    Gender
    Male

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Grey_Wolf_c View Post
    We could do a lot worse than to have a rule that any discussion of morality would need to be done entirely with Pratchett quotes.

    GW
    Then we have to have Esme weigh in.

    Quote Originally Posted by Carpe Jugulum
    “There is a very interesting debate raging at the moment about the nature of sin, for example,” said Oats.
    “And what do they think? Against it, are they?” said Granny Weatherwax.
    “It’s not as simple as that. It’s not a black and white issue. There are so many shades of gray.”
    “Nope.”
    “Pardon?”
    “There’s no grays, only white that’s got grubby. I’m surprised you don’t know that. And sin, young man, is when you treat people as things. Including yourself. That’s what sin is.”
    “It’s a lot more complicated than that . . .”
    “No. It ain’t. When people say things are a lot more complicated than that, they means they’re getting worried that they won’t like the truth. People as things, that’s where it starts.”
    “Oh, I’m sure there are worse crimes . . .”
    “But they starts with thinking about people as things . . . ”
    Rolling this back to the story, it's Julia that's treating Sunny as a "thing". And Esme, were she in this universe, wouldn't have it.

    Respectfully,

    Brian P.
    "Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."

    -Valery Legasov in Chernobyl

  5. - Top - End - #305
    Bugbear in the Playground
     
    arimareiji's Avatar

    Join Date
    May 2017

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by pendell View Post
    Then we have to have Esme weigh in.
    Nice quote from Carpe Jugulum. "It's a lot more complicated than that" has way too many semi-synonymous relatives, including one which I've gotten dreadfully tired of hearing of late: "[{Rule} doesn't apply in a case where I don't want it to or to a person I don't want it to, because] that's different".
    "Just a Sec Mate" avatar courtesy of Gengy. I'm often somewhere between it, and this gif. (^_~)
    Founding (and so far, only) member of the Greyview Appreciation Society
    "Only certainty in life: When icy jaws of death come, you will not have had enough treats. Nod. Get treat."

  6. - Top - End - #306
    Spamalot in the Playground
     
    Psyren's Avatar

    Join Date
    Oct 2010
    Gender
    Male

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Peelee View Post
    If no one's alive then there's no one to object to it. All we have is what matters to us when we're alive. I could just as rightly say that if no one's alive then what's the point of money, but I don't think you're going to sit here and think "how true, let me dispose of all my money, since it's nothing but a social construct that matters only to the living!"

    Mass Effect passed off a banality as wisdom.
    I agree completely, and it's worth pointing out that Mass Effect wasn't really trying to validate Javik's viewpoint; quite the opposite: pretty much your entire crew spends every conversation either dunking on the guy or hanging up on him, because it's like talking to a brick wall. And he himself changes his mind later, provided you don't
    Spoiler: Endgame spoilers
    let him wallow in his memory shard, which instead heightens his fatalism to the point of making him suicidal.


    Javik is a great character with strong meme potential, but he's not meant to be a role model or the voice of the series.
    Last edited by Psyren; 2023-01-31 at 03:43 PM.
    Quote Originally Posted by The Giant View Post
    But really, the important lesson here is this: Rather than making assumptions that don't fit with the text and then complaining about the text being wrong, why not just choose different assumptions that DO fit with the text?
    Plague Doctor by Crimmy
    Ext. Sig (Handbooks/Creations)

  7. - Top - End - #307
    Bugbear in the Playground
     
    Fish's Avatar

    Join Date
    Oct 2007
    Location
    Olympia, WA

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by pendell View Post
    There are two problems with this. The first is the problem of false dilemma: these are almost always artificial problems constructed for classrooms, where you have exactly two choices with exactly two outcomes. Often, being pushed into this position means you're failing to consider option 3, which allows you to save everyone's life.

    The second problem is that, in the real world, the case isn't often "kill one to save ten, guaranteed". It more usually comes down to "cause certain harm to one person in the hope it will potentially save up to 10 people sometime, somewhere."
    I am not going to defend utilitarianism here, but in my opinion, this isn't an effective strategy to attack it. Instructors need to present a problem in such a way that the precept becomes clear to students; metaphors must be simplified. When it comes to ethics and moral behavior, you have to highlight the nature of the problem by refusing to allow any loopholes to squeeze through (especially if the loopholes allow the student to evade thinking about the problem at hand).

    For instance, a geometry instructor might say, "You know the length of this side of the garden, and the length of that side, and the angle of this fence here, and you know the garden is on a flat surface. What is the length of the hypotenuse?" The instructor wants the students to come up with an answer based on what they are given, not an answer like "It's my garden, so I get a tape measure." This isn't a perfect analogy to teaching ethics, but I hope it shows that it makes little sense to attack trigonometry because the teaching method or the thought experiment contains "artificial problems."

    As to your second point, yes, that is a weakness of utilitarian problems. Many such problems presuppose a predictable event or a mechanized or mechanical outcome that an actor can know about in real time.
    Last edited by Fish; 2023-01-31 at 06:41 PM.
    The Giant says: Yes, I am aware TV Tropes exists as a website. ... No, I have never decided to do something in the comic because it was listed on TV Tropes. I don't use it as a checklist for ideas ... and I have never intentionally referenced it in any way.

  8. - Top - End - #308
    Bugbear in the Playground
     
    Tubercular Ox's Avatar

    Join Date
    Jan 2009

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by pendell View Post
    Okay.

    I've just finished reviewing the strips with Sunny and I'm beginning to wonder whether Roy and I are making a mistake. Namely, in assuming Sunny is an actual child, rather than just being child-like, like Elan.
    I thought Sunny was an adult and Elan-like this whole time, but I agree with Grey Wolf: if Roy says Sunny is a child, then he probably knows something we don't.

    And if you want to get really meta, if Rich thought it was obvious that Sunny was a child, he would write Roy saying so without explaining himself. But if Rich thought there were doubt in the audience over whether Sunny was a child, he'd write it more definitively if he wanted to settle the doubt, and more dubiously if he wanted to feed it.
    Quote Originally Posted by Kish View Post
    The creature in the darkness is [in the spoiler below] if Rich wrote a Cthulhu D20-based shaggy dog story.
    Spoiler: A shaggy dog story
    An evil sorcerer in command of a dark cult is trying to unleash a god-killing abomination more real than the gods themselves. At his side, yellow eyes revealed a Haunter of the Dark. The evil sorcerer ordered it to kill.
    TinyMushroom drew my avatar

  9. - Top - End - #309
    Titan in the Playground
     
    KorvinStarmast's Avatar

    Join Date
    May 2015
    Location
    Texas
    Gender
    Male

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Tubercular Ox View Post
    If Roy says Sunny is a child, then he probably knows something we don't.
    Roy also said, in Book 6, that Durkon was still Durkon, and he was not correct. Be careful about who is doing the narration; not all characters are reliable or omniscient. Roy may believe Sunny to be a child; that does not necessarily make him one. (Roy may or may not be right, or he may be buying into Serini's little game ...)
    Last edited by KorvinStarmast; 2023-01-31 at 07:05 PM.
    Avatar by linklele. How Teleport Works
    a. Malifice (paraphrased):
    Rulings are not 'House Rules.' Rulings are a DM doing what DMs are supposed to do.
    b. greenstone (paraphrased):
    Agency means that they {players} control their character's actions; you control the world's reactions to the character's actions.
    Gosh, 2D8HP, you are so very correct!
    Second known member of the Greyview Appreciation Society

  10. - Top - End - #310
    Ettin in the Playground
    Join Date
    May 2009

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Tubercular Ox View Post
    If Roy says Sunny is a child, then he probably knows something we don't.
    I don't think he needs to know anything we don't.

    Sunny here mentioned that he is learning 'manual' dexterity (panel 2) and doesn't know how to write letters yet (panel 3), and we know that Serini gives him busy work fit for a child (panel 11).

    None of that proves that Sunny is a child (I don't know, can't remember, or am not looking up right now the biology of young beholders, or of their parody equivalents, which may be different, and I suspect that Roy doesn't know much about them either), but it does mean that Roy is justified in thinking of Sunny as a child.

  11. - Top - End - #311
    Barbarian in the Playground
     
    NinjaGuy

    Join Date
    Jan 2019

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Fish View Post
    I am not going to defend utilitarianism here, but in my opinion, this isn't an effective strategy to attack it. Instructors need to present a problem in such a way that the precept becomes clear to students; metaphors must be simplified. When it comes to ethics and moral behavior, you have to highlight the nature of the problem by refusing to allow any loopholes to squeeze through (especially if the loopholes allow the student to evade thinking about the problem at hand).

    For instance, a geometry instructor might say, "You know the length of this side of the garden, and the length of that side, and the angle of this fence here, and you know the garden is on a flat surface. What is the length of the hypotenuse?" The instructor wants the students to come up with an answer based on what they are given, not an answer like "It's my garden, so I get a tape measure." This isn't a perfect analogy to teaching ethics, but I hope it shows that it makes little sense to attack trigonometry because the teaching method or the thought experiment contains "artificial problems."
    Thanks for putting into words something I've been seeing here and there throughout the thread.

    Like, shoot, guys, I'm no philosophy major, but I'm pretty sure if your response to the trolley problem is "there's gotta be a way to save everyone!" or something on the same level, you're missing the point so comically badly that I don't know what to say. It's just a starting point, a simple problem without a right or wrong answer, just seeing where you stand on a single variable in your moral framework.
    Last edited by Frozenstep; 2023-01-31 at 08:37 PM.

  12. - Top - End - #312
    Dwarf in the Playground
     
    BarbarianGuy

    Join Date
    Jan 2022

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Wraithfighter View Post
    I like Javik's quote from Mass Effect 3, when it comes to a discussion like this: “Stand amongst the ashes of a trillion dead souls, and ask the ghosts if honor matters. The silence is your answer.”
    Thus spoke Horatio, the captain of the gate
    "For all the mortal men death cometh soon or late
    But how can man die better, than facing fearful odds?
    For the ashes of his fathers! For the temple of his gods!"

    Silence means yes, doesn't it? The dead are stunned by such a question, especially given that, if they are conscious of anything at all, having gotten the whole dying and fear thereof thing over with, they would very clearly see that the manner in which they lived far outweighs the circumstances of their death, which, spoiler warning

    Spoiler
    everyone dies in the end


    Roy's morality makes absolutely no sense to me, but maybe that's just the whole morality axis that's been broken ever since the inception of the game. Roy has no problem dominating a kobold, or letting Belkar let a cat excrete in his mouth (no one has a problem with that, which is really bonkers to me). Roy has no problem using any means necessary to coerce Belkar into doing things for the party. IRL criminals are very rarely doing it for the evulz; there's almost always some kind of "victim of circumstance" situation. Both Belkar and Elan might qualify for some kind of "mental development issues". Roy has no problem letting Hilgya and her child enter an extremely dangerous situation. "Oh, but she would have gone off after Durkon anyway" is a crap argument; they had more than enough people to subdue her and keep her away from the fight, and options to leave the kid somewhere relatively safe (leave a basket under a random door; it's a dwarf city full of extremely honourable people). Serini is Sunny's "mom" and she clearly has no issues sending the kid into harm's way, and she will clearly have no issue sending Sunny out against Xykon, and what is Roy going to say then?



    Utilitarianism is clearly wrong.
    Consider the following. You are walking down a street, and you see a stroller with a baby speeding its way downhill to the baby's doom. You save the stroller and the baby. Hooray! Unambiguously good action, right? Except the baby grows up to be Adolf Stalin and kills billions of people. There is a statue of you in the capital of the evil empire. Under the utilitarian model, that was an evil act, because in the end the bad consequences outweigh the good.
    Or... does it? The death and destruction caused by Adolf Stalin solve the issues of overpopulation and the climate crisis. The post-WW3 world enters an unprecedented era of peace and prosperity as the survivors use the freed space and resources to create a sustainable paradise. So saving baby Adolf Stalin was a good thing in the end.
    Or... was it? Humanity is coddled by this era of prosperity. All ideas of exploring the universe die out, because it's just so comfortable here, and with all the sustainable resources on Earth, why would we ever reach for the stars? Which is fine and dandy until a bunch of aliens show up and enslave humanity. Or, after millions of years, the sun explodes and Earth dies with it, and with it our species, perhaps the only intelligent life ever. So yeah, we're back to "saving babies is bad" in the end.
    So next time you see a baby stroller zipping downhill, you give it a good kick. Bad baby! We need Earth to be an overpopulated hellhole to drive our species out into space. Except this baby would have grown up to be Elbert Ainstein and figured out FTL space travel or a cure for cancer or something like that.

    Utilitarianism has the unfortunate problem of trying to reduce morality to a formula, except that requires a definitive answer as to what constitutes "in the end". Given different timescales, you arrive at extremely different answers. If my timescale is, say, "the nukes are in the sky and I have until they land", what is optimal over the next 30 minutes or less is entirely different from what is optimal if I'm considering the next 50-80 years, a natural human lifespan. But the natural human lifespan also creates many problems from the moral perspective. Younglings love to rage at the Baby Boomers because (the popular opinion is) BBs lived their lives with absolutely no regard for how it would affect the future generations. We're the future generations, and the world they're passing on to us is in much worse shape than what they got. Things like climate change are projected to affect billions of people in the worst way, but if you're reading this, chances are you can expect to live your life rather comfortably and let the future generations deal with it. But the thing is, once you start considering time so far ahead that you'll be dust, your projections become less and less accurate. You save a baby, and by its 20s it can be a serial killer or something. The problem of utilitarianism is that every part of the formula is just a guess in the dark. You just don't know.

    The only moral framework that really matters is character ethics: one's intentions, but also the cultivation of virtues, which are things which we kinda sorta know lead to better results. Maybe. It's a crapshoot, but one can take the wager of Marcus Aurelius: either good conduct and honour matter, or they don't. If they don't, nothing is lost by living as if they do; and if they do, things turn out better for everyone anyway. Society prospers when old men plant trees whose shade they will never sit under and whose fruit they will never eat. Maybe Roy's position here is flawed if you're on the "the nukes have been launched" timescale, but Roy is taking the wiser position of asking, well, what if the world doesn't end? Then you have to live with the consequences of your decisions, because nothing is certain, always stuck asking yourself whether you could have done something different. It's better to live in such a way that you will never have regrets, whether you live for five minutes or five thousand years. Especially since, in universe, they have it on good authority that unless you're killed by the Snarl, your soul will persist for much longer than your body will.

    I still disagree with Roy here, and it's grating to see Rich be so "modern American culture centric" again and again. If you want to write something that's not petty escapism, it should have universal value, or at least the culture you're writing it for should be projected to survive longer than you are projected to take to finish your story. People considered kids by today's Western standards have fought in wars and held officer positions, and continue to do so around the world. The whole "teenager" thing is itself a rather silly social construct that will likely last as long as the Baby Boomers do. The whole "teenage rebellion" is basically the result of treating people who are ready to become adults as stupid kids. If you're ever having issues with someone 12+, try treating that person as a person, with respect, as an adult; you will be surprised by the results. 12-13 is around the age at which societies have traditionally held rites of passage into adulthood. Some thirteen-year-olds are wiser than some thirty-year-olds, and I imagine that to be the case more often when talking about people from 2nd- and 3rd-world countries compared to 1st-worlders. Sunny is childlike, but Sunny is also facing danger on a semi-regular basis, not just in defending the world, but because Copyrighted Eye Things brought up under normal conditions are adventurer fodder. The young of foxes and rabbits live in an absolutely different world than first-world children, and then, the entire world of OOTS is basically a massive soul farm. The gods are absolutely interested in creating societies which encourage competition and conflict and weed out weakness, and elves are the K-strategy ones, while humans are the goblins of the "adventurer races". Except for laughs and anachronistic jokes, there's no way any serious approach to this setting should resemble modern first-world societies.

    Quote Originally Posted by Frozenstep View Post
    Thanks for putting into words something I've been seeing here and there throughout the thread.

    Like, shoot, guys, I'm no philosophy major, but I'm pretty sure if your response to the trolley problem is "there's gotta be a way to save everyone!" or something on the same level, you're missing the point so comically badly that I don't know what to say. It's just a starting point, a simple problem without a right or wrong answer, just seeing where you stand on a single variable in your moral framework.
    Yeah, well, the trolley problem is asinine. It's a meme, like all popular memes, because it's nonsensical. When you get an answer, you're not getting a "single variable" in a presupposed moral framework that you're forcing, to the exclusion of other frameworks, on the person to whom you pose the problem (i.e., begging the question with a healthy dose of false dichotomy). What you're getting instead is whatever the person assumes about the situation, because the question as-is is nonsensical. Which, if you understand enough to extract this information, can be useful. But you're much better off asking something more useful, like "what is the difference between a rock and a brick".
    Last edited by Dasick; 2023-01-31 at 08:48 PM.

  13. - Top - End - #313
    Colossus in the Playground
     
    Kish's Avatar

    Join Date
    Nov 2004

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Dasick View Post
    Thus spoke Horatio, the captain of the gate
    "For all the mortal men death cometh soon or late
    But how can man die better, than facing fearful odds?
    For the ashes of his fathers! For the temple of his gods!"

    Silence means yes, doesn't it? The dead are stunned by such a question, especially given that, if they are conscious of anything at all, having gotten the whole dying and fear thereof thing over with, they would very clearly see that the manner in which they lived far outweighs the circumstances of their death, which, spoiler warning

    Spoiler
    everyone dies in the end


    Roy's morality makes absolutely no sense to me, but maybe that's just the whole morality axis that's been broken ever since the inception of the game. Roy has no problem dominating a kobold, or letting Belkar let a cat excrete in his mouth (no one has a problem with that, which is really bonkers to me).
    Back the truck up. Belkar explicitly hid that he was doing that from Roy. (No indication that Elan or Haley knew about it, either. Durkon, now, Durkon was the kind of passive there that would have me bumping his alignment down to Lawful Neutral, but that's me.)

    Not touching the rest, but.
    Last edited by Kish; 2023-01-31 at 09:31 PM.

  14. - Top - End - #314
    Barbarian in the Playground
    Join Date
    Feb 2018

    Default Re: OOTS #1274 - The Discussion Thread

    When I looked at the "make the dominated kobold kill itself on traps" strip, since it had been brought up a few times... it seemed to me that Roy DID take offense to the idea. He was able to be talked into allowing it through trickery (no no this little guy definitely knows how to spot traps properly!), and was clearly unhappy when the kobold was killed.

  15. - Top - End - #315
    Ogre in the Playground
     
    hroţila's Avatar

    Join Date
    Jun 2015
    Gender
    Male

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by OvisCaedo View Post
    When I looked at the "make the dominated kobold kill itself on traps" strip, since it had been brought up a few times... it seemed to me that Roy DID take offense to the idea. He was able to be talked into allowing it through trickery (no no this little guy definitely knows how to spot traps properly!), and was clearly unhappy when the kobold was killed
    Yeah. He stopped the kobold and made Haley disarm the traps as soon as the first one exploded and Roy understood what was happening, and the kobold was later killed through no fault of Roy's. He was clearly not OK with the treatment the kobold got.

    (Although I also think it's worth noting that this is still a comic strip with lots of absurd humour that sometimes verges on the psychopathic, and that we shouldn't expect it to consistently stick to the way the real world operates, at least in scenes that are clearly played for laughs. Otherwise everyone is bad and also a moron)
    ungelic is us

  16. - Top - End - #316
    Barbarian in the Playground
     
    NinjaGuy

    Join Date
    Jan 2019

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Dasick View Post
    Utilitarianism is clearly wrong.
    Consider the following. You are walking down a street, and you see a roller with a baby speeding its way downhill to the baby's doom. You save the roller and the baby. Hooray! Unambiguously good action, right? Except the baby grows up to be Adolf Stalin and kills billions of people. There is a statue of you in the capital of the evil empire. Under utilitarian model, that was an evil act, because the net consequences outweigh the good in the end.
    So...again, I'm not educated on this topic, so maybe I should shut up...but are you sure you aren't just creating a ridiculous strawman here?

    Quote Originally Posted by Dasick View Post

    Yeah well, the trolley problem is asinine. It's a meme like all popular memes because it's nonsensical. When you get an answer, you're not getting a "single variable" in a presupposed moral framework that you're forcing on the person whom you ask this problem to the exclusion of other frameworks aka begging the question with a healthy dose of false dichotomy. What you're getting is instead whatever the person assumes about the situation because the question as is is nonsensical. Which, if you understand enough to extract this information, it can be useful. But you're much better off asking something more useful like "what is the difference between a rock and a brick"
    It doesn't force a moral framework on anyone, though? It's just a hypothetical to help understand someone's moral framework. There's not a right or wrong answer for a reason. The question is purposely kept free from details to prevent any distraction from the simple dilemma it presents.

  17. - Top - End - #317
    Colossus in the Playground
     
    Kish's Avatar

    Join Date
    Nov 2004

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by OvisCaedo View Post
    When I looked at the "make the dominated kobold kill itself on traps" strip, since it had been brought up a few times... it seemed to me that Roy DID take offense to the idea. He was able to be talked into allowing it through trickery (no no this little guy definitely knows how to spot traps properly!), and was clearly unhappy when the kobold was killed
    Indeed. And note that when Rich is in "make jokes about D&D rules" territory rather than "write a serious story" territory, Haley can easily convince Roy of nearly anything: Bluff is a class skill for her, which she has maxed or nearly so, while Sense Motive is a cross-class skill for Roy, in which he has never been indicated to have any ranks. So, as implausible as her "he can surely find traps as well as me" line was, Roy believed it until its falsity was demonstrated to him.
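    For anyone unfamiliar with the 3.5e mechanic being referenced, an opposed skill check works roughly like this sketch: each side rolls a d20 and adds its total skill bonus, and the higher result wins. The bonuses below are illustrative guesses (a maxed class skill plus a good ability modifier vs. an untrained cross-class skill), not values taken from the comic, and the tie-breaking rule is simplified.

    ```python
    import random

    def opposed_check(rng, bluff_bonus, sense_bonus):
        # One opposed roll: each side rolls a d20 and adds its total bonus
        # (skill ranks + ability modifier). Simplified: ties go to the defender.
        return rng.randint(1, 20) + bluff_bonus > rng.randint(1, 20) + sense_bonus

    def win_rate(bluff_bonus, sense_bonus, trials=20000, seed=0):
        # Estimate how often the Bluff side wins over many trials.
        rng = random.Random(seed)
        wins = sum(opposed_check(rng, bluff_bonus, sense_bonus) for _ in range(trials))
        return wins / trials

    # Illustrative numbers only: a character with maxed Bluff and a strong
    # Cha modifier (+16 total) vs. one with no Sense Motive ranks (+0 total),
    # compared against two evenly matched characters.
    print(f"maxed vs. untrained: {win_rate(16, 0):.0%}")
    print(f"evenly matched:      {win_rate(0, 0):.0%}")
    ```

    With a bonus gap that large, the liar wins the overwhelming majority of the time, which is why "Roy believes almost any Bluff" is mechanically plausible and not just a joke.
    
    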

  18. - Top - End - #318
    Pixie in the Playground
     
    Zombie

    Join Date
    Jul 2018

    Default Re: OOTS #1274 - The Discussion Thread

    Utilitarianism has a 'reasonably foreseeable' clause. You do not have to worry about what a baby might or might not do in every possible future. Utilitarianism does not justify, for example, murdering random people because there are possible futures in which those individuals might make things worse. Yes, you do deal with the future in your calculation, and, yes, more knowledge may well change the calculation. In my opinion, that is the case in every moral system, or at least it should be. If knowing more cannot possibly change your conclusions, then that is worrying.

  19. - Top - End - #319
    Dwarf in the Playground
     
    BarbarianGuy

    Join Date
    Jan 2022

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Kish View Post
    Back the truck up. Belkar explicitly hid that he was doing that from Roy. (No indication that Elan or Haley knew about it, either. Durkon, now, Durkon was the kind of passive there that would have me bumping his alignment down to Lawful Neutral, but that's me.)

    Not touching the rest, but.
    I'm not going through the strips again for this point, so you might be right. But Belkar did say a lot of things about what he would have the kobold do while Roy was within earshot, IIRC. Either way, Roy had no problem keeping the kobold on a dangerous quest where they could all die, instead of, say, handing him off to the authorities or releasing him in some manner (depending on the situation).

    Quote Originally Posted by OvisCaedo View Post
    When I looked at the "make the dominated kobold kill itself on traps" strip, since it had been brought up a few times... it seemed to me that Roy DID take offense to the idea. He was able to be talked into allowing it through trickery (no no this little guy definitely knows how to spot traps properly!), and was clearly unhappy when the kobold was killed
    Disarming traps is clearly a very dangerous activity. It's like disarming a bomb: someone might be a professional bomb-squad member, but when you send that someone out to do the job, it's understood that it's very dangerous, and there's no guarantee the bomb-disposal professional is coming back with more than enough matter to fit in a matchbox.

    Quote Originally Posted by hroţila View Post
    Yeah. He stopped the kobold and made Haley disarm the traps as soon as the first one exploded and Roy understood what was happening, and the kobold was later killed through no fault of his. He was clearly not OK with the treatment the kobold got.

    (Although I also think it's worth noting that this is still a comic strip with lots of absurd humour that sometimes verges on the psychopathic, and that we shouldn't expect it to consistently stick to the way the real world operates, at least in scenes that are clearly played for laughs. Otherwise everyone is bad and also a moron)
    That's kind of the crux of the issue, isn't it? The humour/stick-figure thing really undercuts all attempts at serious storytelling, because both share the same universe and serious matters have to contend with humorous things being canon. It's even harder to be invested in the serious aspect when we have word of god (Thor) that they were really scraping the bottom of the barrel when they made this world.

    Quote Originally Posted by Frozenstep View Post
    So...again, I'm not educated on this topic, so maybe I should shut up...but are you sure you aren't just creating a ridiculous strawman here?
    It's called "taking it to the logical extreme to illustrate the issue". Or the two issues: the issue of timescale, and the issue of not knowing the consequences of your action. An action can be good or bad depending on whether you look at its consequences 5 minutes, 5 years, or 5 millennia after.

    It actually gets worse; I was just making a joking example here to keep things somewhat light. You can easily justify more mundane things such as slavery, genocide, and human experimentation with utilitarianism, and the more technology we get, the closer we can come to something straight out of dystopian horror that gives people nightmares.

    It doesn't force a moral framework on anyone, though? It's just a hypothetical to help understand someone's moral framework. There's not a right or wrong answer for a reason. The question is purposely kept free from details to prevent any distraction from the simple dilemma it presents.
    It's not a simple dilemma. Why you're there making that decision matters. What you know and how you know it matters. Who the people are and how they got there matters. Everything matters. And your insisting on it being simple, and all other variables not mattering, is part of "forcing your moral framework on others", because you're forcing the axiom that you can take complex situations like that and boil them down to basic components to do moral calculus with, and that everything else is a distraction.

    I honestly can't even begin to imagine answering this question without inventing, or assuming additional detail, and the answer changes quite a bit depending on these details.

    Utilitarianism has a 'reasonably foreseeable' clause. You do not have to worry about what a baby might or might not do in every possible future. Utilitarianism does not justify - for example - murdering random people because there are possible futures that those individuals might make worse. Yes, you do deal with the future in your calculation. And, yes, more knowledge may well change the calculation. In my opinion, that is the case in every moral system, or at least it should be. If knowing more cannot possibly change your conclusions, then that is worrying.
    What counts as reasonable is the eternal bane of justice systems everywhere, and the eternal cop-out. I've provided plenty of serious examples in my write-up. Baby boomers would claim it was not reasonably foreseeable at the time that their actions were ruining everything for everyone (quite a few of them will say that isn't the case anyway). The inventor of the internal combustion engine could claim that he couldn't reasonably foresee the ecological damage his invention was going to do. But in both of those cases there has been plenty of moral opposition as well. And that's the crux of my issue with utilitarianism: you're trying to do hard math with a whole bunch of really, really squishy things.

    IMO, better understanding is a moral imperative, because the better you understand, the better decisions you can make.
    Last edited by Dasick; 2023-01-31 at 10:14 PM.

  20. - Top - End - #320
    Pixie in the Playground
     
    Zombie

    Join Date
    Jul 2018

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Dasick View Post


    What counts as reasonable is the eternal bane of justice systems everywhere, and the eternal cop-out. I've provided plenty of serious examples in my write-up. Baby boomers would claim it was not reasonably foreseeable at the time that their actions were ruining everything for everyone (quite a few of them will say that isn't the case anyway). The inventor of the internal combustion engine could claim that he couldn't reasonably foresee the ecological damage his invention was going to do. But in both of those cases there has been plenty of moral opposition as well. And that's the crux of my issue with utilitarianism: you're trying to do hard math with a whole bunch of really, really squishy things.

    IMO, better understanding is a moral imperative, because the better you understand, the better decisions you can make.
    It seems as though your central objection to utilitarianism is that someone's calculations might be different to someone else's. Is this correct?

  21. - Top - End - #321
    Dwarf in the Playground
     
    BarbarianGuy

    Join Date
    Jan 2022

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by David Gould View Post
    It seems as though your central objection to utilitarianism is that someone's calculations might be different to someone else's. Is this correct?
    My central objection is that someone's calculation differs between different timescales. It's also how fuzzy and imprecise the calculations get the longer the timescale you impose; and the longer the timescale you impose, the closer utilitarianism gets to a legitimate moral framework, that is, one that doesn't produce nightmare horror situations. So the better you calibrate it, the more useless it becomes.

    You take the same person, and give them timescales of 30 minutes, a lifetime, or the next 300 years. What they give you as their behaviour (or, if they understand the proposed moral calculus*, the optimal behaviour they propose given the context) illustrates the issue.

    Take someone and pose the question: nukes have been launched. You have 30 minutes before you're wiped out. No, you can't get anywhere safe within that time. What do you do? People being honest will generally list things that would be counter to their usual behaviour. And what timescale does their usual behaviour assume? The average human lifespan? The next 5 to 10 years?

    Whereas the Virtue Ethics answer to the nuke question is: you do whatever you were doing, because you've been living every day and moment as if it were your last.

    *the whole happiness vs suffering metric is bonkers as well

  22. - Top - End - #322
    Pixie in the Playground
     
    Zombie

    Join Date
    Jul 2018

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Dasick View Post
    My central objection is that someone's calculation differs between different timescales. It's also how fuzzy and imprecise the calculations get the longer the timescale you impose; and the longer the timescale you impose, the closer utilitarianism gets to a legitimate moral framework, that is, one that doesn't produce nightmare horror situations. So the better you calibrate it, the more useless it becomes.

    You take the same person, and give them timescales of 30 minutes, a lifetime, or the next 300 years. What they give you as their behaviour (or, if they understand the proposed moral calculus*, the optimal behaviour they propose given the context) illustrates the issue.

    Take someone and pose the question: nukes have been launched. You have 30 minutes before you're wiped out. No, you can't get anywhere safe within that time. What do you do? People being honest will generally list things that would be counter to their usual behaviour. And what timescale does their usual behaviour assume? The average human lifespan? The next 5 to 10 years?

    Whereas the Virtue Ethics answer to the nuke question is: you do whatever you were doing, because you've been living every day and moment as if it were your last.

    *the whole happiness vs suffering metric is bonkers as well
    In what sense do you mean that someone living by the (assuming there is only one, and that everyone agrees on what is virtuous) virtue ethic is living every day and moment as if it were your last? Are you saying that people who live by virtue ethics do not take actions to make their future, or the future of their descendants, better? I literally cannot imagine what 'living each day as if it were your last' looks like.

    As to you again talking about timescales, you again seem to be ignoring that these things are unknowable to a person. I cannot know whether or not the baby I save will be a mass murderer. Something unknowable cannot be included in any calculation. So it isn't.

    However, I fear that we are drifting far, far away from the purpose of this thread... ;)

  23. - Top - End - #323
    Dwarf in the Playground
     
    BarbarianGuy

    Join Date
    Jan 2022

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by David Gould View Post
    In what sense do you mean that someone living by the (assuming there is only one, and that everyone agrees on what is virtuous) virtue ethic is living every day and moment as if it were your last? Are you saying that people who live by virtue ethics do not take actions to make their future, or the future of their descendants, better? I literally cannot imagine what 'living each day as if it were your last' looks like.
    Interesting you say that because it's a common enough maxim. Which, kinda illustrates my point.

    The premise of Virtue Ethics is that the only thing that matters morally is intentions and actions which cultivate virtues. The cultivation of virtues is intended to create situations which are good now, and tomorrow, and five years after. The end result of any action is not important; it is only the probable outcome of practicing certain habits that matters. A good, reasonably religiously neutral explanation is stoicism as expressed in the Meditations of Marcus Aurelius (whom I have mentioned above).

    If we imagine playing chess, utilitarianism is like trying to brute force your way by thinking 10 moves ahead. Virtue ethics is about playing by well known heuristics and strategic guidelines.
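    The chess analogy can be made concrete with a toy sketch (a trivial made-up game, purely illustrative; the "game", its moves, and the scoring are all invented here): one player exhaustively searches every line of play to a fixed depth, the other applies one fixed rule of thumb and never looks ahead.

```python
# Toy "game": states are integers, a move either adds 3 or doubles,
# and a position is scored by its value after `depth` moves.

def brute_force(state, depth):
    """Consequentialist style: exhaustively search every line of play."""
    if depth == 0:
        return state
    return max(brute_force(state + 3, depth - 1),
               brute_force(state * 2, depth - 1))

def heuristic(state, depth):
    """Heuristic style: one fixed habit ("double once you have
    something worth doubling"), applied without any lookahead."""
    for _ in range(depth):
        state = state * 2 if state >= 3 else state + 3
    return state

print(brute_force(1, 3), heuristic(1, 3))  # both find 16 here
```

    On this toy game the habit happens to match the exhaustive search, but the search's cost grows exponentially with depth while the habit's stays linear, which is roughly the trade-off being described.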

    Virtue-ethics-based civilizations have a lot of overlap, I'd say about 80%? If not in their implementation, then in their core assumptions. Seems like there are definitely right answers there (maybe local maxima as well, given circumstances).

    As to you again talking about timescales, you again seem to be ignoring that these things are unknowable to a person. I cannot know whether or not the baby I save will be a mass murderer. Something unknowable cannot be included in any calculation. So it isn't.
    That was a joking example.

    Although, suppose we isolated the "{Scrubbed}" gene.

    Or had AI which could predict crime.

    Both of the above are reasonably realistic scenarios. "Using AI to predict crime and charge people with crimes they haven't committed" is something that makes people uncomfortable.

    If we go back to Virtue Ethics, the answer is: yes, you save the baby. You save the baby because it is innocent, and because the virtue of Justice (another word for this is human rights) is something that has served humans well. Because the baby has as much potential for good as it does for evil, and you would rather believe that everyone's everyday actions, even your own, no matter how small, can help build that potential for good than give in to the cold calculation of practicality. You'd rather take a personal risk, and make a personal sacrifice, for that world. But more importantly, you just don't want to live in a society where pre-crime is a thing and people don't help one another. You're doing your part to make the world a better place and lead by example in everything you do, because you know that the {Scrubbed} card is in the deck, and you're dedicated to creating a society where {Scrubbed} would have been a mediocre landscaping artist instead of a mass murderer.

    However, I fear that we are drifting far, far away from the purpose of this thread... ;)
    Seems like 11 pages later people were still arguing the morality of using Sunny as bait. I prefer to think we're getting closer to the root of that disagreement.
    Last edited by Pirate ninja; 2023-02-01 at 06:52 AM.

  24. - Top - End - #324
    Pixie in the Playground
     
    Zombie

    Join Date
    Jul 2018

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Dasick View Post
    Interesting you say that because it's a common enough maxim. Which, kinda illustrates my point.

    The premise of Virtue Ethics is that the only thing that matters morally is intentions and actions which cultivate virtues. The cultivation of virtues is intended to create situations which are good now, and tomorrow, and five years after. The end result of any action is not important; it is only the probable outcome of practicing certain habits that matters.
    It is only the probable outcome of practicing certain habits that matters.

    It is only the probable outcome of taking a certain action that matters.



    It thus seems that the disagreement is not about the predictability of outcomes, as that seems to be built into both systems.

    Habits versus actions is the key disagreement, then. It seems to me that this makes your objections over timescale differences irrelevant.

  25. - Top - End - #325
    Barbarian in the Playground
     
    NinjaGuy

    Join Date
    Jan 2019

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Dasick View Post
    It's called "taking it to the logical extreme to illustrate the issue". Or the two issues: the issue of timescale, and the issue of not knowing the consequences of your action. An action can be good or bad depending on whether you look at its consequences 5 minutes, 5 years, or 5 millennia after.
    I mean, you can make anything look crazy by taking it to logical extremes and ignoring the nuance and details. It just really makes it look like you're arguing in bad faith, strawmanning it by extrapolating simple examples that aren't meant to be all-out rigorous demonstrations of moral frameworks.

    And are you sure about Utilitarianism being forced to take things at an unknowable scale? Because you can't make choices based on information you can't know. Are you sure you aren't misrepresenting something, here?

    Quote Originally Posted by Dasick View Post
    It's not a simple dilemma. Why you're there making that decision matters. What you know and how you know it matters. Who the people are and how they got there matters. Everything matters. And your insisting on it being simple, and all other variables not mattering, is part of "forcing your moral framework on others", because you're forcing the axiom that you can take complex situations like that and boil them down to basic components to do moral calculus with, and that everything else is a distraction.

    I honestly can't even begin to imagine answering this question without inventing, or assuming additional detail, and the answer changes quite a bit depending on these details.
    You're greatly misunderstanding something here. Of course all that stuff matters. The trolley problem isn't some grand, moral-framework defining question. It's not meant to be used to boil down complex situations (and I seriously don't think you can boil any school of thought down to "lol, we just trolley problem'd it, bro"). If you're inventing stuff or assuming additional detail, you're vastly overthinking it. Of course there's nuance and detail in real life, the hypothetical is not meant to be used as a way to ignore that stuff in real life.

    It's literally just trying to gauge your opinion on a single factor. Those other details? Other factors, that other questions could ask about.
    Last edited by Frozenstep; 2023-01-31 at 11:49 PM.

  26. - Top - End - #326
    Pixie in the Playground
     
    Zombie

    Join Date
    Jul 2018

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Dasick View Post
    Interesting you say that because it's a common enough maxim. Which, kinda illustrates my point.

    The premise of Virtue Ethics is that the only thing that matters morally is intentions and actions which cultivate virtues. The cultivation of virtues is intended to create situations which are good now, and tomorrow, and five years after. The end result of any action is not important; it is only the probable outcome of practicing certain habits that matters. A good, reasonably religiously neutral explanation is stoicism as expressed in the Meditations of Marcus Aurelius (whom I have mentioned above).
    Let us assume I go to work at 7.30 and it takes me half-an-hour to get there. It is 7.30 when the missiles launch, so I have 30 minutes to live. Should I go to work, knowing that I will not get there? If you say no, I still do not understand what you mean by 'living every day as if it is my last'. If you say yes, I am forced to conclude that virtue ethics fails at edge cases in exactly the same way that other moral systems do.


    If we imagine playing chess, utilitarianism is like trying to brute force your way by thinking 10 moves ahead. Virtue ethics is about playing by well known heuristics and strategic guidelines.
    And generally the optimal way to play chess is to use both methods. I think that utilitarianism actually encompasses a lot of the same results as virtue ethics in any case. We just may differ on how we deal with edge cases.


    Virtue-ethics-based civilizations have a lot of overlap, I'd say about 80%? If not in their implementation, then in their core assumptions. Seems like there are definitely right answers there (maybe local maxima as well, given circumstances).
    There is a lot of overlap among the conclusions of utilitarians, too. My point was that you cannot object to utilitarianism on the grounds that in some cases people using it come up with different answers if in some cases virtue ethicists also come up with different answers.



    That was a joking example.

    Although, suppose we isolated the "{Scrub the post, scrub the quote}" gene.

    Or had AI which could predict crime.


    Both of the above are reasonably realistic scenarios. "Using AI to predict crime and charge people with crimes they haven't committed" is something that makes people uncomfortable.
    Both of those are not sensible. The AI situation in particular is illogical. If the AI is 100% correct that person A will kill person B in seven days, then attempting to arrest them now will not prevent the crime; if it could, then the AI is not 100% correct - there are actions that will prevent the crime from occurring and thus thwart the prediction. So from a utilitarian point of view it would be completely useless to arrest the person, given the fact that the AI was 100% correct.


    If we go back to Virtue Ethics, the answer is: yes, you save the baby. You save the baby because it is innocent, and because the virtue of Justice (another word for this is human rights) is something that has served humans well. Because the baby has as much potential for good as it does for evil, and you would rather believe that everyone's everyday actions, even your own, no matter how small, can help build that potential for good than give in to the cold calculation of practicality. You'd rather take a personal risk, and make a personal sacrifice, for that world. But more importantly, you just don't want to live in a society where pre-crime is a thing and people don't help one another. You're doing your part to make the world a better place and lead by example in everything you do, because you know that the {Scrub the post, scrub the quote} card is in the deck, and you're dedicated to creating a society where {Scrub the post, scrub the quote} would have been a mediocre landscaping artist instead of a mass murderer.
    And a utilitarian answer is, yes, you save the baby. For pretty much exactly the same reasons ...


    Seems like 11 pages later people were still arguing the morality of using Sunny as bait. I prefer to think we're getting closer to the root of that disagreement.
    :)
    Last edited by Pirate ninja; 2023-02-01 at 06:53 AM.

  27. - Top - End - #327
    Dwarf in the Playground
     
    BarbarianGuy

    Join Date
    Jan 2022

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by David Gould View Post
    It is only the probable outcome of practicing certain habits that matters.

    It is only the probable outcome of taking a certain action that matters.

    It thus seems that the disagreement is not about the predictability of outcomes, as that seems to be built into both systems.

    Habits versus actions is the key disagreement, then. It seems to me that this makes your objections over timescale differences irrelevant.
    Well, you don't really get the whole "live your moment as if it were your last" thing, which is a foundational principle of VE, so it does matter.

    Timescale is central to the UTIL moral framework, because the answer changes drastically with it. Timescale is irrelevant to the VE moral framework, because VE is attempting to have one system for making everyday life enjoyable in a way that also makes future problems easier to deal with.

    Another issue with the UTIL framework is that, because factors can lead to wildly different actions, the accuracy of those factors is an issue. Factors are less of an issue to a habit-driven VE framework, because they don't change the actions that much.

    Quote Originally Posted by Frozenstep View Post
    I mean, you can make anything look crazy by taking them to logical extremes, and ignoring the nuance and details. It just really makes it look like you're arguing in bad faith, strawmanning it by extrapolating simple examples that aren't meant to be all-out rigorous demonstrations of moral frameworks.
    Taking things to logical extremes is a standard stress test. It's how you figure out the limits of a system, where it starts to break and deform. It helps isolate variables and illustrate situations. In some sense the trolley problem attempts to do the same. It's an entirely hypothetical, useless scenario which attempts to gauge opinion on a single factor.

    And are you sure about Utilitarianism being forced to take things at an unknowable scale? Because you can't make choices based on information you can't know. Are you sure you aren't misrepresenting something, here?
    That's my criticism. The timescale is assumed at some point, and the assumed timescale changes the equation drastically.
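    The timescale point can be sketched numerically (all numbers invented for illustration): the very same stream of per-period utilities can flip from net good to net bad depending on where you truncate the sum.

```python
# Hypothetical per-year utilities of one action: a big short-term
# gain followed by a long tail of small harms (numbers invented).
utilities = [10, 5] + [-1] * 30

def net_utility(stream, horizon):
    """Sum utility only over the assumed timescale."""
    return sum(stream[:horizon])

print(net_utility(utilities, 2))   # 2-year horizon: +15, looks good
print(net_utility(utilities, 32))  # 32-year horizon: -15, looks bad
```

    The arithmetic itself is trivial; the contested part is which horizon the calculation is entitled to assume.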


    You're greatly misunderstanding something here. Of course all that stuff matters. The trolley problem isn't some grand, moral-framework defining question. It's not meant to be used to boil down complex situations (and I seriously don't think you can boil any school of thought down to "lol, we just trolley problem'd it, bro"). If you're inventing stuff or assuming additional detail, you're vastly overthinking it. Of course there's nuance and detail in real life, the hypothetical is not meant to be used as a way to ignore that stuff in real life.

    It's literally just trying to gauge your opinion on a single factor. Those other details? Other factors, that other questions could ask about.
    It doesn't do a good job at that, because there are two conflicting factors here.

    The first factor is: are five lives worth more than one?

    The second factor is: "do you need to act to be responsible for a bad action" (murder/manslaughter), or "is it OK to assume the position of judgment"?

    The question really boils down to "are you OK with taking direct action to kill someone to save a net positive of four lives?". Which has so many factors to it that the answer doesn't really tell you anything. People will pull the lever or not for various reasons and, if you listen closely, for assumed details (imo, I don't think you can overthink a scenario which starts with "so regardless of what you do someone dies, but you can decide who dies"). It's the rationale as to why that contains the useful information you can extract.

    I can't answer that question as is, both because I'm good at noticing when I start assuming things, and because a) I believe certain lives are simply worth more or less when compared to one another, and b) I'm OK assuming the position to judge in some situations but not others.

  28. - Top - End - #328
    Pixie in the Playground
     
    Zombie

    Join Date
    Jul 2018

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Dasick View Post
    Well, you don't really get the whole "live your moment as if it were your last" thing, which is a foundational principle of VE, so it does matter.
    Then can you explain it?


    Timescale is central to the UTIL moral framework, because the answer changes drastically with it. Timescale is irrelevant to the VE moral framework, because VE is attempting to have one system for making everyday life enjoyable in a way that also makes future problems easier to deal with.
    Timescale is not the issue. Knowledge is the issue.

    Let us say that I as a utilitarian save a baby. The reason is because I judge that that is the best action to increase human happiness.

    Then that baby grows up to be a dictator.

    My action of saving the baby is still a good action under the utilitarian framework. This is because utilitarians living under the dictator would recognise that I made the only decision possible given my knowledge.

  29. - Top - End - #329
    Barbarian in the Playground
     
    NinjaGuy

    Join Date
    Jan 2019

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by Dasick View Post
    Taking things to logical extremes is a standard stress test. It's how you figure out the limits of a system, where it starts to break and deform. It helps isolate variables and illustrate situations. In some sense the trolley problem attempts to do the same. It's an entirely hypothetical, useless scenario which attempts to gauge opinion on a single factor.
    You're ignoring that nuance-and-detail part. If you take the most basic, shortest summary of something and then push it to an extreme, you can greatly misrepresent it. You have to put the whole school of thought, with the different types and subtypes within it, through those kinds of tests; otherwise it's pretty pointless. It'd be like getting all your news from only looking at the headlines of newspapers.


    Quote Originally Posted by Dasick View Post
    It doesn't do a good job at that, because there are two conflicting factors here.

    The first factor is: are five lives worth more than one?

    The second factor is: "do you need to act to be responsible for a bad action" (murder/manslaughter), or "is it OK to assume the position of judgment"?

    The question really boils down to "are you OK with taking direct action to kill someone to save a net positive of four lives?". Which has so many factors to it that the answer doesn't really tell you anything. People will pull the lever or not for various reasons and, if you listen closely, for assumed details (imo, I don't think you can overthink a scenario which starts with "so regardless of what you do someone dies, but you can decide who dies"). It's the rationale as to why that contains the useful information you can extract.

    I can't answer that question as is, both because I'm good at noticing when I start assuming things, and because a) I believe certain lives are simply worth more or less when compared to one another, and b) I'm OK assuming the position to judge in some situations but not others.
    The whole point of the scenario is to present that boiled down question, to remove those other factors so you're not answering based on other factors. You're overthinking it because the question's bare-bones nature is the point. It's not meant to be applied directly to real life.

    Again, there's other scenarios that boil down to questions that ask about other factors, too. The trolley problem is just one among many hypotheticals, and they're just starting points.
    Last edited by Frozenstep; 2023-02-01 at 01:11 AM.

  30. - Top - End - #330
    Dwarf in the Playground
     
    BarbarianGuy

    Join Date
    Jan 2022

    Default Re: OOTS #1274 - The Discussion Thread

    Quote Originally Posted by David Gould View Post
    Let us assume I go to work at 7.30 and it takes me half-an-hour to get there. It is 7.30 when the missiles launch, so I have 30 minutes to live. Should I go to work, knowing that I will not get there? If you say no, I still do not understand what you mean by 'living every day as if it is my last'. If you say yes, I am forced to conclude that virtue ethics fails at edge cases in exactly the same way that other moral systems do.
    If you do a good enough job at VE, the answer is yes, and it's not a broken edge case. If you're really into stoicism, or a stoicism-compatible religion (all the major ones, I think?), it uhh doesn't even need much of an explanation.

    Actually, VE asks the counter-question: well, why are you wasting so much time commuting in the first place? Maybe you should live simpler. Or maybe you should make more sacrifices for your work, if it's that important. I realize half an hour isn't all that much, but people generally commute for much longer.

    But the yes solution is... let's say you don't just sit in your car bored out of your mind, enduring it because you have to pay the bills. With modern technology you could be listening to audiobooks, or talking to people you care about. Or maybe you've found a scenic route to work that you love seeing every day. Maybe in that time you just need to see that route in silence for 30 minutes. Maybe it takes a 30-minute bike ride or a jog to get there (I used to do that at 4 in the morning; it was fun. Kinda miss it). Maybe you've got the routine that works for you. You're not just enduring the ride for the sake of enduring it; you're looking for ways to use that time in a way that matters.

    The illustration is the cliche of a character having a close brush with death and making drastic changes to their lives.

    A real kind of shock to me: when talking about a possible post-apocalyptic future and prepper hobbies, someone was like, "Well, instead of doing all that, I would just get a gun and a bottle of whiskey, get drunk and off myself".

    And generally the optimal way to play chess is to use both methods. I think that utilitarianism actually encompasses a lot of the same results as virtue ethics in any case. We just may differ on how we deal with edge cases.
    Tactical calculations, looking 10 moves ahead, are a tool of the strategic, heuristic approach. You're not using both methods; you use one when the other dictates it is appropriate to do so.


    There is a lot of overlap among the conclusions of utilitarians, too. My point was that you cannot object to utilitarianism on the grounds that in some cases people using it come up with different answers if in some cases virtue ethicists also come up with different answers.
    Again, the issue isn't that different people come to different conclusions. It's how wildly different one person's answers can be given a change in some factor or timescale. That's problematic because of how erroneous those factors can be (and how those errors compound over time), and because it's precisely at greater timescales that the results are supposed to approach what people would accept as a solid moral framework.


    Both of those are not sensible. The AI situation in particular is illogical. If the AI is 100% correct that person A will kill person B in seven days, then attempting to arrest them now will not prevent the crime; if it could, then the AI is not 100% correct - there are actions that will prevent the crime from occurring and thus thwarting the prediction. So from a utilitarian point of view it would be completely useless to arrest the person given the fact that the AI was 100% correct.
    The AI situation is only paradoxical due to the wording. If the AI can predict that, if not arrested, a person will 100% commit a crime, then it's a logical premise. This is something that is discussed a lot in both sci-fi and dystopian writing, and in modern discussions of privacy and advances in AI. See the Chinese social credit system. Or another example: these days, women start seeing ads for baby products before they even suspect they're pregnant. It's only really a matter of time before big data can do the same for criminal behaviour. Or maybe it already can, but the people in charge don't want to use it for various reasons.

    People generally object to this kind of thing on the grounds of human rights (i.e., an offshoot of VE), but in a UTIL framework, if you have a good enough ability to predict crime (blah blah, unless action is taken), what's the issue again?

    And a utilitarian answer is, yes, you save the baby. For pretty much exactly the same reasons ...
    But what if you can go back in time and kill {Scrubed} as a baby? And the time machine nerds have confirmed that this will only have net good effects on your timeline up until the moment you go back in time?

    Imo the best you can do under VE is "go back to when Scrubbed is of age and challenge him to a duel, or maybe set him up with some art connections so he can be a painter", even if the baby-killing plan is 100% certain to succeed and my plan is significantly less likely to.

    Quote Originally Posted by Frozenstep View Post
    You're ignoring that nuance and detail part. If you take the most basic, shortest summary of something and then push it to an extreme, you can greatly misrepresent it. You have to hold the whole school of thought, with the different types and subtypes within it, up against those kinds of tests; otherwise it's pretty pointless. It'd be like getting all your news from only looking at the headlines of newspapers.
    I'm taking issue with the underlying premise of utilitarianism: the idea that you can do moral calculus. The issue, as I mentioned above, is that the calculation changes drastically with different timescales, from a day, to a couple of years, to a lifetime, to several lifetimes, to the entire lifetime of our species. No one really acknowledges that. What's moral if you're thinking about just your own lifetime is completely different from what's moral if you're thinking ahead a couple of generations. When I talk about the 30-minute nuke test, I'm not just pulling it out of nowhere; it's based on trends in discussions I've had with real people. Frankly, it scares me, and it shows a problem with that approach to morality.

    I've provided many examples and situations beyond just the original logical-extreme stress test. It was somewhat humorous, somewhat simplistic, somewhat a caricature. Sure, ok, it failed to convey the point I was trying to convey. Mea culpa. Let's discuss more specific situations and examples.

    The whole point of the scenario is to present that boiled down question, to remove those other factors so you're not answering based on other factors. You're overthinking it because the question's bare-bones nature is the point.

    Again, there's other scenarios that boil down to questions that ask about other factors, too. The trolley problem is just one among many hypotheticals, and they're just starting points.
    The question's bare-bones nature assumes that your moral framework can provide an answer.
    Mine doesn't. I literally can't answer it in a vacuum.
    Hence, it's assuming a moral framework, or moral axioms, that can answer it in this vacuum.

    It also doesn't do a good enough job of separating those factors you want to separate.
    Last edited by Pirate ninja; 2023-02-01 at 06:55 AM.
