Results 301 to 330 of 837
-
2023-01-31, 02:40 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
-
2023-01-31, 02:45 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
-
2023-01-31, 02:52 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
"Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."
-Valery Legasov in Chernobyl
-
2023-01-31, 03:40 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
Nice quote from Carpe Jugulum. "It's a lot more complicated than that" has way too many semi-synonymous relatives, including one which I've gotten dreadfully tired of hearing of late: "[{Rule} doesn't apply in a case where I don't want it to or to a person I don't want it to, because] that's different".
-
2023-01-31, 03:42 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
I agree completely, and it's worth pointing out that Mass Effect wasn't really trying to validate Javik's viewpoint - quite the opposite: pretty much your entire crew spends every conversation either dunking on the guy or hanging up on him because it's like talking to a brick wall. And he himself changes his mind later, provided you don't
Spoiler: Endgame spoilers
let him wallow in his memory shard, which instead heightens his fatalism to the point of making him suicidal.
Javik is a great character with strong meme potential, but he's not meant to be a role model or the voice of the series.
Last edited by Psyren; 2023-01-31 at 03:43 PM.
-
2023-01-31, 06:39 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
I am not going to defend utilitarianism here, but in my opinion, this isn't an effective strategy to attack it. Instructors need to present a problem in such a way that the precept becomes clear to students; metaphors must be simplified. When it comes to ethics and moral behavior, you have to highlight the nature of the problem by refusing to allow any loopholes to squeeze through (especially if the loopholes allow the student to evade thinking about the problem at hand).
For instance, a geometry instructor might say, "You know the length of this side of the garden, and the length of that side, and the angle of this fence here, and you know the garden is on a flat surface. What is the length of the hypotenuse?" The instructor wants the students to come up with an answer based on what they are given, not an answer like "It's my garden, so I get a tape measure." This isn't a perfect analogy to teaching ethics, but I hope it shows that it makes little sense to attack trigonometry because the teaching method or the thought experiment contains "artificial problems."
As to your second point, yes, that is a weakness of utilitarian problems. Many such problems presuppose a predictable event, or a mechanized or mechanical outcome that an actor can know about in real time.
Last edited by Fish; 2023-01-31 at 06:41 PM.
-
2023-01-31, 06:59 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
I thought Sunny was adult and Elan-like this whole time, but I agree with Grey Wolf: If Roy says Sunny is a child, then he probably knows something we don't.
And if you want to get really meta, if Rich thought it was obvious that Sunny was a child, he would write Roy saying so without explaining himself. But if Rich thought there were doubt in the audience over whether Sunny was a child, he'd write it more definitively if he wanted to settle the doubt, and more dubiously if he wanted to feed it.
-
2023-01-31, 07:04 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
Roy also said, back in book 6, that Durkon was still Durkon, and he was not correct. Be careful about who is doing the narration; not all characters are reliable or omniscient. Roy may believe Sunny to be a child; that does not necessarily make him one. (Roy may or may not be right, or he may be buying into Serini's little game ...)
Last edited by KorvinStarmast; 2023-01-31 at 07:05 PM.
-
2023-01-31, 07:25 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
I don't think he needs to know anything we don't.
Sunny here mentioned that he is learning 'manual' dexterity (panel 2) and doesn't know how to write letters yet (panel 3), and we know that Serini gives him busywork fit for a child (panel 11).
None of that proves that Sunny is a child (I don't know, can't remember, and am not looking up right now the biology of young beholders, or of their parody equivalents, which may be different, and I suspect that Roy doesn't know much about them either), but it does mean that Roy is justified in thinking of Sunny as a child.
-
2023-01-31, 08:33 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
Thanks for putting into words something I've been seeing here and there throughout the thread.
Like, shoot, guys, I'm no philosophy major, but I'm pretty sure if your response to the trolley problem is "there's gotta be a way to save everyone!" or something on the same level, you're missing the point so comically badly that I don't know what to say. It's just a starting point, a simple problem without a right or wrong answer, just seeing where you stand on a single variable in your moral framework.
Last edited by Frozenstep; 2023-01-31 at 08:37 PM.
-
2023-01-31, 08:41 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
Thus spoke Horatio, the captain of the gate
"For all the mortal men death cometh soon or late
But how can man die better, than facing fearful odds?
For the ashes of his fathers! For the temple of his gods!"
Silence means yes, doesn't it? The dead are stunned by such a question, especially given that, if they are conscious of anything at all, having gotten the whole dying (and fear thereof) thing over with, they would very clearly see that the manner in which they lived far outweighs the circumstances of their death. Which, spoiler warning:
Spoiler: everyone dies in the end
Roy's morality makes absolutely no sense to me, but maybe that's just the whole morality axis that's been broken ever since the inception of the game. Roy has no problem dominating a kobold, or letting Belkar let a cat excrete in his mouth (no one has a problem with that, which is really bonkers to me). Roy has no problem using any means necessary to coerce Belkar into doing things for the party. IRL criminals are very rarely doing it for the evulz; there's almost always some kind of "victim of circumstance" situation, and both Belkar and Elan might qualify for some kind of "mental development issues". Roy has no problem letting Hilgya, with her child, enter an extremely dangerous situation. "Oh, but she would have gone after Durkon anyway" is a crap argument: they had more than enough people to subdue her and keep her away from the fight, and options to leave the kid somewhere relatively safe (leave a basket at a random door; it's a dwarf city full of extremely honourable people). Serini is Sunny's "mom" and she clearly has no issues sending the kid into harm's way, and she will clearly have no issue sending Sunny out against Xykon. And what is Roy going to say then?
Utilitarianism is clearly wrong.
Consider the following. You are walking down a street, and you see a stroller with a baby speeding its way downhill to the baby's doom. You save the stroller and the baby. Hooray! Unambiguously good action, right? Except the baby grows up to be Adolf Stalin and kills billions of people. There is a statue of you in the capital of the evil empire. Under the utilitarian model, that was an evil act, because the bad consequences outweigh the good in the end.
Or... does it? The death and destruction caused by Adolf Stalin solve the issues of overpopulation and the climate crisis. The post-WW3 world enters an unprecedented era of peace and prosperity as the survivors use the freed space and resources to create a sustainable paradise. So saving baby Adolf Stalin was a good thing in the end.
Or... was it? Humanity is coddled by this era of prosperity. All ideas of exploring the universe die out, because it's just so comfortable here, and with all the sustainable resources on earth, why would we ever reach for the stars? Which is fine and dandy until a bunch of aliens show up and enslave humanity. Or, after millions of years, the sun explodes and earth dies with it, and with it our species, perhaps the only intelligent life ever. So yeah, we're back to "saving babies is bad" in the end.
So next time you see a baby stroller zipping downhill, you give it a good kick. Bad baby! We need earth to be an overpopulated hellhole to drive our species out into space. Except this baby would have grown up to be Elbert Ainstein and figured out FTL space travel, or a cure for cancer, or something like that.
Utilitarianism has the unfortunate problem of trying to reduce morality to a formula, except that requires a definitive answer as to what constitutes "in the end". Given a timescale, you arrive at extremely different answers. If my timescale is, say, "the nukes are in the sky and I have about until they land", what is optimal over the next 30 minutes or less is entirely different than if I'm considering the next 50-80 years, a natural human lifespan. But the natural human lifespan also creates many problems from the moral perspective. Younglings love to rage at the Baby Boomers because (the popular opinion is) BBs lived their lives with absolutely no regard for how it would affect the future generations. We're the future generations, and the world they're passing on to us is in much worse shape than what they got. Things like climate change are projected to affect billions of people in the worst way, but if you're reading this, chances are you can expect to live your life rather comfortably and let the future generations deal with it. But the thing is, once you start considering time so far ahead that you'll be dust, your projections become less and less accurate. You save a baby, and by its 20s it can be a serial killer or something. The problem of utilitarianism is that every part of the formula is just a crapshoot guess in the dark. You just don't know.
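The "formula with no definitive 'in the end'" objection can be made concrete with a toy sketch. Everything below is invented purely for illustration (the event list, every year, every utility number): the same act scores as good, evil, or good again depending only on where you truncate the sum.

```python
# Hypothetical utility stream for the "save the baby" act.
# All (year, utility) pairs are made up to illustrate the timescale
# problem, not to model anything real.
CONSEQUENCES = [
    (0, +10),     # the baby is saved
    (40, -1000),  # the baby grows up and starts a world war
    (90, +1200),  # post-war survivors build a sustainable paradise
]

def net_utility(consequences, horizon_years):
    """Naive utilitarian score: sum every consequence that falls
    within `horizon_years` of the act."""
    return sum(u for year, u in consequences if year <= horizon_years)

for horizon in (1, 50, 100):
    verdict = "good" if net_utility(CONSEQUENCES, horizon) > 0 else "evil"
    print(f"judged over {horizon:>3} years: {verdict}")
```

Run as-is, the verdict flips from "good" to "evil" and back to "good" as the horizon grows, which is exactly the complaint: the formula itself gives no natural stopping point.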
The only moral framework that really matters is character ethics: one's intentions, but also the cultivation of virtues, which are things we kinda sorta know lead to better results. Maybe. It's a crapshoot, but one can take the wager of Marcus Aurelius: either good conduct and honour matter, or they don't. If they don't matter, then nothing does, so it's better to live as if they matter anyway - and if they do matter, then it's going to turn out better for everyone anyway. Society prospers when old men plant trees in whose shade they will never sit and whose fruit they will never eat. Maybe Roy's position here is flawed if you're on the "the nukes have been launched" timescale, but Roy is taking the wiser position of: well, what if the world doesn't end? Then you have to live with the consequences of your decisions, because nothing is certain, always stuck asking yourself whether you could have done something different. It's better to live in such a way that you will never have regrets, whether you live for five minutes or five thousand years. Especially since, in universe, they have it on good authority that unless you're killed by the Snarl, your soul will persist for much longer than your body will.
I still disagree with Roy here, and it's grating to see Rich be so "modern-culture America-centric" again and again. If you want to write something that's not petty escapism, it should have universal value, or at least the culture you're writing for should be projected to survive longer than it will take you to finish your story. People considered kids by today's Western standards have fought in wars and held officer positions, and continue to do so around the world. The whole "teenager" thing is itself a rather silly social construct that will likely last about as long as the Baby Boomers do. "Teenage rebellion" is basically the result of treating people who are ready to become adults as stupid kids. If you're ever having issues with someone 12+, try treating that person as a person, with respect, as an adult; you will be surprised by the results. 12-13 is around the time societies have traditionally held rites of passage into adulthood. Some thirteen-year-olds are wiser than some thirty-year-olds, and I imagine that to be the case more often when talking about people from 2nd- and 3rd-world countries compared to 1st-worlders. Sunny is childlike, but Sunny is also facing danger on a semi-regular basis, not just in defending the world, but because Copyrighted Eye Things brought up under normal conditions are adventurer fodder. The young of foxes and rabbits live in an absolutely different world than first-world children do, and beyond that, the entire world of OOtS is basically a massive soul farm. The gods are absolutely interested in creating societies which encourage competition and conflict and weed out weakness; elves are the K-strategy ones, while humans are the goblins of the "adventurer races". Except for laughs and anachronistic jokes, there's no way any serious approach to this setting should resemble modern first-world societies.
Yeah, well, the trolley problem is asinine. It's a meme, like all popular memes, because it's nonsensical. When you get an answer, you're not getting a "single variable" in a presupposed moral framework that you're forcing on the person you pose the problem to, to the exclusion of other frameworks - aka begging the question with a healthy dose of false dichotomy. What you're getting instead is whatever the person assumes about the situation, because the question, as is, is nonsensical. Which, if you understand enough to extract this information, can be useful. But you're much better off asking something more useful, like "what is the difference between a rock and a brick?"
Last edited by Dasick; 2023-01-31 at 08:48 PM.
-
2023-01-31, 08:49 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
Back the truck up. Belkar explicitly hid that he was doing that from Roy. (No indication that Elan or Haley knew about it, either. Durkon, now, Durkon was the kind of passive there that would have me bumping his alignment down to Lawful Neutral, but that's me.)
Not touching the rest, but.
Last edited by Kish; 2023-01-31 at 09:31 PM.
-
2023-01-31, 09:01 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
When I looked at the "make the dominated kobold kill itself on traps" strip, since it had been brought up a few times... it seemed to me that Roy DID take offense to the idea. He was only talked into allowing it through trickery ("no, no, this little guy definitely knows how to spot traps properly!"), and was clearly unhappy when the kobold was killed.
-
2023-01-31, 09:15 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
Yeah. He stopped the kobold and made Haley disarm the traps as soon as the first one exploded and Roy understood what was happening, and the kobold was later killed through no fault of his. He was clearly not OK with the treatment the kobold got.
(Although I also think it's worth noting that this is still a comic strip with lots of absurd humour that sometimes verges on the psychopathic, and that we shouldn't expect it to consistently stick to the way the real world operates, at least in scenes that are clearly played for laughs. Otherwise everyone is bad and also a moron.)
-
2023-01-31, 09:23 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
So...again, I'm not educated on this topic, so maybe I should shut up...but are you sure you aren't just creating a ridiculous strawman here?
It doesn't force a moral framework on anyone, though? It's just a hypothetical to help understand someone's moral framework. There's not a right or wrong answer for a reason. The question is purposely kept free from details to prevent any distraction from the simple dilemma it presents.
-
2023-01-31, 09:30 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
Indeed. And note that when Rich is in "make jokes about D&D rules" territory rather than "write a serious story" territory, Haley can easily convince Roy of nearly anything; Bluff is a class skill for her, kept at or near max ranks, while Sense Motive is a cross-class skill for Roy, which he has never been indicated to have any points in. So, as implausible as her "he can surely find traps as well as me" line was, Roy believed it until its falsity was demonstrated to him.
-
2023-01-31, 09:58 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
Utilitarianism has a 'reasonably foreseeable' clause. You do not have to worry about what a baby might or might not do in every possible future. Utilitarianism does not justify - for example - murdering random people because there are possible futures in which those individuals might make things worse. Yes, you do deal with the future in your calculation. And, yes, more knowledge may well change the calculation. In my opinion, that is the case in every moral system, or at least it should be. If knowing more cannot possibly change your conclusions, then that is worrying.
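That 'reasonably foreseeable' clause can be sketched as expected utility: outcomes are weighted by the probability you can actually assign them at decision time, so a wildly unforeseeable future cannot dominate the verdict. All probabilities and utilities below are invented for illustration.

```python
# Expected-utility sketch of the "reasonably foreseeable" clause.
# All numbers are hypothetical. Outcomes you cannot reasonably
# foresee carry (effectively) zero weight, so they cannot dominate.
def expected_utility(outcomes):
    """Sum of probability-weighted utilities over the outcome list."""
    return sum(p * u for p, u in outcomes)

# Saving the baby, judged with the knowledge available at the time:
save_baby = [
    (0.999999, +10),      # foreseeable: a child lives
    (0.000001, -100_000), # unforeseeable: the child becomes a tyrant
]

print(expected_utility(save_baby))  # positive: save the baby
```

The tyrant outcome is catastrophic but carries negligible assessed probability, so the expected value stays positive; more knowledge would change the probabilities, and with them the calculation, exactly as the post says.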
-
2023-01-31, 10:07 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
I'm not going through the strips again for this point, so you might be right. But Belkar did say a lot of things about what he would have the kobold do while Roy was within earshot, IIRC. Either way, Roy had no problem keeping the kobold on a dangerous quest where they could all die, instead of, say, handing him off to the authorities or releasing him in some manner (depending on the situation).
Disarming traps is clearly a very dangerous activity. It's like disarming a bomb: someone might be a professional bomb squad member, but when you send that someone out to do the job, it's understood that it's very dangerous and there's no guarantee the bomb disposal professional is coming back with enough matter left to fill a matchbox.
That's kind of the crux of the issue, isn't it? The humour/stick-figure thing really undercuts all attempts at serious storytelling, because both share the same universe and serious matters have to contend with humour things being canon. It's even harder to be invested in the serious aspect when we have word of god (Thor) that they were really scraping the bottom of the barrel when they made this world.
It's called "taking things to a logical extreme to illustrate the issue". Or the two issues: the issue of timescale, and the issue of not knowing the consequences of your action. An action can be good or bad depending on whether you look at its consequences 5 minutes after, 5 years after, or 5 millennia after.
It actually gets worse; I was just making a joking example here to keep things somewhat light. You can easily justify more mundane things such as slavery, genocide, and human experimentation with utilitarianism, and the more technology we get, the closer we can come to something straight out of the kind of dystopian horror that gives people nightmares.
It doesn't force a moral framework on anyone, though? It's just a hypothetical to help understand someone's moral framework. There's not a right or wrong answer for a reason. The question is purposely kept free from details to prevent any distraction from the simple dilemma it presents.
I honestly can't even begin to imagine answering this question without inventing or assuming additional details, and the answer changes quite a bit depending on those details.
Utilitarianism has a 'reasonably foreseeable' clause. You do not have to worry about what a baby might or might not do in every possible future. Utilitarianism does not justify - for example - murdering random people because there are possible futures in which those individuals might make things worse. Yes, you do deal with the future in your calculation. And, yes, more knowledge may well change the calculation. In my opinion, that is the case in every moral system, or at least it should be. If knowing more cannot possibly change your conclusions, then that is worrying.
IMO, better understanding is a moral imperative, because the better you understand, the better decisions you can make.
Last edited by Dasick; 2023-01-31 at 10:14 PM.
-
2023-01-31, 10:38 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
My central objection is that the calculation comes out differently on different timescales. It's also how fuzzy and imprecise the calculations get the bigger the timescale you impose: the longer the timescale, the closer utilitarianism gets to being a legitimate moral framework (that is, one that doesn't prescribe nightmare-horror situations), but also the less computable it becomes. So the better you calibrate it, the more useless it becomes.
Take the same person and give them timescales of 30 minutes, a lifetime, or the next 300 years. What they give you as their behaviour - or, if they understand the proposed moral calculus*, what they propose as the optimal behaviour given the context - illustrates the issue.
Take someone and pose the question: nukes have been launched; you have 30 minutes before you're wiped out; no, you can't get anywhere safe within that time. What do you do? People being honest will generally list things that run counter to their usual behaviour. And what timescale does their usual behaviour assume? The average human lifespan? Or the next 5 to 10 years?
Whereas the Virtue Ethics answer to the nuke question is: you do whatever you were doing, because you've been living every day and moment as if it were your last.
*the whole happiness vs suffering metric is bonkers as well
-
2023-01-31, 10:53 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
In what sense do you mean that someone living by the virtue ethic (assuming there is only one, and that everyone agrees on what is virtuous) is living every day and moment as if it were their last? Are you saying that people who live by virtue ethics do not take actions to make their future, or the future of their descendants, better? I literally cannot imagine what 'living each day as if it were your last' looks like.
As to you again talking about timescales, you again seem to be ignoring that these things are unknowable to a person. I cannot know whether or not the baby I save will be a mass murderer. Something unknowable cannot be included in any calculation. So it isn't.
However, I fear that we are drifting far, far away from the purpose of this thread... ;)
-
2023-01-31, 11:23 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
Interesting you say that because it's a common enough maxim. Which, kinda illustrates my point.
The premise of Virtue Ethics is that the only moral things that matter are intentions and actions which cultivate virtues. The cultivation of virtues is intended to create situations which are good now, and tomorrow, and five years after. The end result of any single action is not important; only the probable outcome of practicing certain habits matters. A good, reasonably religiously neutral explanation is Stoicism as expressed in the Meditations of Marcus Aurelius (whom I have mentioned above).
If we imagine playing chess, utilitarianism is like trying to brute force your way by thinking 10 moves ahead. Virtue ethics is about playing by well known heuristics and strategic guidelines.
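That contrast can be sketched with a game much smaller than chess. Below is a hypothetical toy (the race-to-21 game: players alternately add 1, 2, or 3, and whoever reaches exactly 21 wins) where exhaustive lookahead and a fixed heuristic pick the same move, but one computes from scratch and the other just follows a cultivated rule.

```python
# Toy contrast: brute-force lookahead ("utilitarian" style) versus a
# fixed heuristic ("virtue ethics" style) in the race-to-21 game.
# Players alternately add 1, 2, or 3; whoever reaches exactly 21 wins.

def search_move(total):
    """Exhaustive game-tree search: return a winning move, else None."""
    for move in (1, 2, 3):
        if total + move == 21:
            return move
        if total + move < 21 and search_move(total + move) is None:
            return move  # opponent is left with no winning reply
    return None

def heuristic_move(total):
    """Fixed rule, no lookahead: always leave the opponent at a total
    that is 21 minus a multiple of 4 (17, 13, 9, ...)."""
    move = (21 - total) % 4
    return move if move in (1, 2, 3) else None  # None: no winning move

# From a winnable position, both styles pick the same move:
print(search_move(10), heuristic_move(10))
```

Both functions agree on every position in this game; the difference is only cost. The search re-derives the strategy each time by brute force, while the heuristic is a pre-learned habit that happens to encode the same strategy, which is roughly the point of the analogy.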
Virtue-ethics-based civilizations have a lot of overlap - I'd say about 80%? - if not in their implementation, then in their core assumptions. Seems like there are definitely right answers there (maybe local maxima as well, given circumstances).
As to you again talking about timescales, you again seem to be ignoring that these things are unknowable to a person. I cannot know whether or not the baby I save will be a mass murderer. Something unknowable cannot be included in any calculation. So it isn't.
Although, suppose we isolated the "{Scrubbed}" gene.
Or had AI which could predict crime.
Both of the above are reasonably realistic scenarios. "Using AI to predict crime and charge people with crimes they haven't committed" is something that makes people uncomfortable.
If we go back to Virtue Ethics, the answer is: yes, you save the baby. You save the baby because it is innocent, and the virtue of Justice (another way of talking about it is human rights) is something that has served humans well; because the baby has as much potential for good as it does for evil, and you would rather believe that everyone's everyday actions, even your own, no matter how small, can help build that potential for good rather than give in to the cold calculation of practicality. You'd rather take a personal risk, and make a personal sacrifice, for that world. But more importantly, you just don't want to live in a society where pre-crime is a thing and people don't help one another. You're doing your part to make the world a better place and lead by example in everything you do, because you know that the {Scrubbed} card is in the deck, and you're dedicated to creating a society where {Scrubbed} would have been a mediocre landscape artist instead of a mass murderer.
However, I fear that we are drifting far, far away from the purpose of this thread... ;)
Last edited by Pirate ninja; 2023-02-01 at 06:52 AM.
-
2023-01-31, 11:34 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
It is only the probable outcome of practicing certain habits that matters.
It is only the probable outcome of taking a certain action that matters.
It thus seems that the disagreement is not about the predictability of outcomes, as that seems to be built into both systems.
Habits versus actions is the key disagreement, then, which seems to me to make your objections over timescale differences irrelevant.
-
2023-01-31, 11:42 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
I mean, you can make anything look crazy by taking it to logical extremes and ignoring the nuance and details. It just really makes it look like you're arguing in bad faith, strawmanning by extrapolating simple examples that aren't meant to be all-out rigorous demonstrations of moral frameworks.
And are you sure about Utilitarianism being forced to take things at an unknowable scale? Because you can't make choices based on information you can't know. Are you sure you aren't misrepresenting something, here?
You're greatly misunderstanding something here. Of course all that stuff matters. The trolley problem isn't some grand, moral-framework defining question. It's not meant to be used to boil down complex situations (and I seriously don't think you can boil any school of thought down to "lol, we just trolley problem'd it, bro"). If you're inventing stuff or assuming additional detail, you're vastly overthinking it. Of course there's nuance and detail in real life, the hypothetical is not meant to be used as a way to ignore that stuff in real life.
It's literally just trying to gauge your opinion on a single factor. Those other details? Other factors, that other questions could ask about.
Last edited by Frozenstep; 2023-01-31 at 11:49 PM.
-
2023-01-31, 11:50 PM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
Let us assume I go to work at 7.30 and it takes me half an hour to get there. It is 7.30 when the missiles launch, so I have 30 minutes to live. Should I go to work, knowing that I will not get there? If you say no, I still do not understand what you mean by 'living every day as if it is my last'. If you say yes, I am forced to conclude that virtue ethics fails at edge cases in exactly the same way that other moral systems do.
If we imagine playing chess, utilitarianism is like trying to brute force your way by thinking 10 moves ahead. Virtue ethics is about playing by well known heuristics and strategic guidelines.
Virtue ethics based civilizations have a lot of overlap, Id say about 80%? If not in their implementation then in their core assumptions. Seems like there are definitely right answers there (maybe local maxima as well given circumstances)
That was a joking example.
Although, suppose we isolated the "{Scrub the post, scrub the quote}" gene.
Or had AI which could predict crime.
Both of the above are reasonably realistic scenarios. "Using AI to predict crime and charge people with crimes they haven't committed" is something that makes people uncomfortable.
If we go back to Virtue Ethics, the answer is, yes, you save the baby. You save the baby because it is innocent and the virtue of Justice (another word for it is talking about human rights) is something that has served humans well, because the baby has so much potential for good as it does for evil and you would rather believe that everyone's everyday actions, even your own, no matter how small can help build that potential for good rather than give in to the cold calculation of practicality. You'd rather take a personal risk, and make a personal sacrifice for that world. But more importantly, you just don't want to live in a society where pre-crime is a thing and people don't help one another. You're doing your part to make the world a better place and lead by example in everything you do, because you know that the {Scrub the post, scrub the quote} card is in the deck, and you're dedicated to creating a society where {Scrub the post, scrub the quote} would have been a mediocre landscaping artist instead of a mass murderer.
Seems like 11 pages later people were still arguing the morality of using Sunny as bait. I prefer to think we're getting closer to the root of that disagreement.
Last edited by Pirate ninja; 2023-02-01 at 06:53 AM.
-
2023-02-01, 12:16 AM (ISO 8601)
Re: OOTS #1274 - The Discussion Thread
Well, you don't really get the whole "live every moment as if it were your last" thing, which is a foundational principle of VE, so it does matter.
Timescale is central to the UTIL moral framework, because the answer changes drastically with it. Timescale is irrelevant to the VE moral framework, because VE attempts to have one system that makes everyday life enjoyable in a way that also makes future problems easier to deal with.
Another issue with the UTIL framework is that, because different factors can lead to wildly different actions, the accuracy of those factors is a problem. Factors are less of an issue for a habit-driven VE framework, because they don't change the actions that much.
Taking things to logical extremes is a standard stress test. It's how you figure out the limits of systems, where it starts to break and deform. It helps isolate variables and illustrate situations. In some sense the trolley problem attempts to do so. It's an entirely hypothetical, useless scenario which attempts to gauge opinion on a single factor.
And are you sure about Utilitarianism being forced to take things at an unknowable scale? Because you can't make choices based on information you can't know. Are you sure you aren't misrepresenting something, here?
You're greatly misunderstanding something here. Of course all that stuff matters. The trolley problem isn't some grand, moral-framework defining question. It's not meant to be used to boil down complex situations (and I seriously don't think you can boil any school of thought down to "lol, we just trolley problem'd it, bro"). If you're inventing stuff or assuming additional detail, you're vastly overthinking it. Of course there's nuance and detail in real life, the hypothetical is not meant to be used as a way to ignore that stuff in real life.
It's literally just trying to gauge your opinion on a single factor. Those other details? Other factors, that other questions could ask about.
First factor is, are five lives worth more than one
Second factor is, "do you need to act to be responsible for a bad action" (murder/manslaughter) or "is it ok to assume the position of judgment"
The question really boils down to "are you ok with taking direct action to kill someone to save a net positive of four lives?" Which has so many factors to it that the answer doesn't really tell you anything. People will pull the lever or not for various reasons and, if you listen closely, for assumed details (imo, I don't think you can overthink a scenario which starts with "so regardless of what you do, someone dies, but you can decide who dies"). It's the rationale as to why that contains the useful information you can extract.
I can't answer that question as is, both because I'm good at noticing when I start assuming things, and because a) I believe certain lives are simply worth more or less when compared to one another and b) I'm ok assuming the position to judge in some situations but not others.
-
2023-02-01, 12:23 AM (ISO 8601)
- Join Date
- Jul 2018
Re: OOTS #1274 - The Discussion Thread
Then can you explain it?
Timescale is central to the UTIL moral framework, because the answer changes drastically with it. Timescale is irrelevant to the VE moral framework, because VE attempts to have one system for making everyday life enjoyable in a way that also makes future problems easier to deal with.
Let us say that I, as a utilitarian, save a baby. The reason is that I judge it to be the best action to increase human happiness.
Then that baby grows up to be a dictator.
My action of saving the baby is still a good action under the utilitarian framework. This is because utilitarians living under the dictator would recognise that I made the best decision possible given my knowledge at the time.
-
2023-02-01, 12:50 AM (ISO 8601)
- Join Date
- Jan 2019
Re: OOTS #1274 - The Discussion Thread
You're ignoring that nuance and detail part. If you take the most basic, shortest summary of something and then push it to an extreme, you can greatly misrepresent it. You have to put the whole school of thought, with the different types and subtypes within it, through those kinds of tests; otherwise it's pretty pointless. It'd be like getting all your news from just the headlines of newspapers.
The whole point of the scenario is to present that boiled down question, to remove those other factors so you're not answering based on other factors. You're overthinking it because the question's bare-bones nature is the point. It's not meant to be applied directly to real life.
Again, there are other scenarios that boil down to questions about other factors, too. The trolley problem is just one among many hypotheticals, and they're just starting points.
Last edited by Frozenstep; 2023-02-01 at 01:11 AM.
-
2023-02-01, 12:58 AM (ISO 8601)
- Join Date
- Jan 2022
Re: OOTS #1274 - The Discussion Thread
If you do a good enough job at VE, the answer is yes, and it's not a broken edge case. If you're really into stoicism or a stoicism-compatible religion (all the major ones, I think?), it uhh doesn't even need much of an explanation.
Actually, VE asks the counter-question: well, why are you wasting so much time commuting in the first place? Maybe you should live simpler. Or maybe you should make more sacrifices for your work if it's that important. I realize half an hour isn't all that much, but people generally commute for much longer.
But the yes solution is... let's say you don't just sit in your car bored out of your mind, having to do it because you have to pay the bills. With modern technology you could be listening to audiobooks, or talking to people you care about. Or maybe you found a scenic route to work that you love seeing every day. Maybe in that time, you just need to see that route in silence for 30 minutes. Maybe it takes a 30-minute bike ride or a jog to get there (I used to do that at 4 in the morning; it was fun. Kinda miss it). Maybe you've got the routine that works for you. You're not just enduring the ride for the sake of enduring it, you're looking for ways to use that time in a way that matters.
The illustration is the cliche of a character having a close brush with death and making drastic changes to their lives.
A real shock to me: when talking about a possible post-apocalyptic future and prepper hobbies, someone said, "Well, instead of doing all that, I would just get a gun and a bottle of whiskey, get drunk and off myself".
And generally the optimal way to play chess is to use both methods. I think that utilitarianism actually encompasses a lot of the same results as virtue ethics in any case. We just may differ on how we deal with edge cases.
There is a lot of overlap among the conclusions of utilitarians, too. My point was that you cannot object to utilitarianism on the grounds that people using it sometimes come up with different answers, if virtue ethicists also sometimes come up with different answers.
Both of those are not sensible. The AI situation in particular is illogical. If the AI is 100% correct that person A will kill person B in seven days, then attempting to arrest them now will not prevent the crime; if it could, then the AI is not 100% correct, because there would be actions that prevent the crime from occurring and thus thwart the prediction. So from a utilitarian point of view it would be completely useless to arrest the person, given that the AI was 100% correct.
People generally object to this kind of thing on the grounds of human rights (i.e., an offshoot of VE), but in a UTIL framework, if you have a good enough ability to predict crime (blah blah, unless action is taken), what's the issue again?
And a utilitarian answer is, yes, you save the baby. For pretty much exactly the same reasons ...
Imo the best you can do under VE is "go back to when Scrubbed is of age and challenge him to a duel, or maybe set him up with some art connections so he can be a painter", even if the baby-killing plan is 100% certain to succeed and my plan is significantly less likely to.
I'm taking issue with the underlying premise of utilitarianism, that is, the idea that you can do moral calculus. The issue, as I mentioned above, is that the calculation changes drastically with different timescales: a day, a couple of years, a lifetime, several lifetimes, the entire lifetime of our species. No one really acknowledges that. What's moral if you're thinking about just your lifetime is completely different from what's moral if you're thinking ahead a couple of generations. When I talk about the 30-minute nuke test, I'm not just pulling it out of nowhere; it's based on trends in discussions I've had with real people. Frankly, it scares me, and it shows a problem with that approach to morality.
I've provided many examples and situations beyond just the original logical-extreme stress test. It was somewhat humorous, somewhat simplistic, somewhat a caricature. Sure, ok, it failed to convey the point I was trying to convey. Mea culpa. Let's discuss more specific situations and examples.
The whole point of the scenario is to present that boiled down question, to remove those other factors so you're not answering based on other factors. You're overthinking it because the question's bare-bones nature is the point.
Again, there's other scenarios that boil down to questions that ask about other factors, too. The trolley problem is just one among many hypotheticals, and they're just starting points.
Mine doesn't. I literally can't answer it in a vacuum.
Hence, it's assuming a moral framework, or moral axioms that can answer it in this vacuum.
It also doesn't do a good enough job of separating those factors you want to separate.
Last edited by Pirate ninja; 2023-02-01 at 06:55 AM.