Results 241 to 270 of 538
-
2019-07-29, 10:20 PM (ISO 8601)
- Join Date
- Sep 2018
-
2019-07-29, 10:33 PM (ISO 8601)
- Join Date
- Dec 2009
- Location
- Birmingham, AL
- Gender
Re: OOTS #1172 - The Discussion Thread
Cuthalion's art is the prettiest art of all the art. Like my avatar.
Number of times Roland St. Jude has sworn revenge upon me: 1
-
2019-07-29, 10:41 PM (ISO 8601)
- Join Date
- Mar 2013
- Location
- Los Angeles, CA
- Gender
Re: OOTS #1172 - The Discussion Thread
Elvis isn't dead. He pumped gas for my car and sold me a coke just the other day . . .
-
2019-07-29, 10:42 PM (ISO 8601)
- Join Date
- Dec 2009
- Location
- Birmingham, AL
- Gender
Re: OOTS #1172 - The Discussion Thread
Last edited by Peelee; 2019-07-29 at 10:42 PM.
Cuthalion's art is the prettiest art of all the art. Like my avatar.
Number of times Roland St. Jude has sworn revenge upon me: 1
-
2019-07-29, 11:05 PM (ISO 8601)
- Join Date
- Sep 2015
Re: OOTS #1172 - The Discussion Thread
Unfortunately this is a real question that the software engineers who write code for self-driving cars have to address. If a car gets itself into a situation where it can either kill TWO people who are not in the car, or avoid those two people and kill JUST the driver of the car, how should the car react?
Also, what they're finding with the current implementations of self-driving cars is that
1) The manufacturers are not willing to accept liability under any circumstances because regardless of how good their software is (and it's not really all that great), there are situations where it DOES kill people, and apparently their method of finding those situations is to let the cars onto the streets and see who dies. The autopilot might be a better driver than the typical human, and may even cause far fewer deaths on average than a human driver, but it's nowhere near perfect.
2) By the time the software makes a mistake and requires a human to take over, the human isn't left with enough time to take the controls and react to the situation, and most implementations currently fail silently so the human doesn't know there was a problem anyhow.
3) The manufacturers require, for legal reasons, a dummy behind the wheel who is willing to accept liability for when the software kills people, and that person is more or less at the mercy of software that was most likely outsourced to the lowest bidder.
As a software engineer, since I don't especially have a death wish and would rather my car not blame me when it decides to kill someone, I can't recommend self-driving cars in any setting, for any reason right now. Some of my data is old, in some cases years old, but I've seen no evidence that the various manufacturers of self-driving cars are willing to accept any responsibility whatsoever for the deaths their failures cause, and they ARE willing to kill people to get their product on the streets.
I will grant that there is another interpretation of the data: if a self-driving car causes fewer deaths, on average, than a human driver, then pushing it live is a social good even if it does kill people. This is potentially a valid argument, but again, the manufacturers are not willing to accept any responsibility of any kind for the people that wind up dying for their profit. There are a lot more incentives for them to push a bad product live than there are for them to wait until their product is really good, because if they wait the competition will eat their lunch.
Last edited by diremage; 2019-07-29 at 11:18 PM.
-
2019-07-29, 11:42 PM (ISO 8601)
- Join Date
- May 2009
- Location
- In a castle under the sea
- Gender
Re: OOTS #1172 - The Discussion Thread
Spoiler: Somehow, some of us are still on topic.
Probably, he seems like the kind of guy who cares about dwarves. He might need to convene a new Council of Clans before taking any action about it, but he'd probably be mad if Thor killed one of his priests.
Nah, you'd need a gate to the Outer Planes for that.
...You meant the creature type, right?
Spoiler: Auto-Automobiles
First off, there's a difference between agreeing to not interfere with the car's course and not interfering with its destination. Second, you didn't address the question of how the heck Pizza Fortress is going to get that $3x back from customers who didn't want to be there even without considering the PR feedback. Finally, your cynicism about how horrible the future could be, while not entirely unwarranted, is entirely misdirected. You're worrying too much about blunt-force manipulation when your paranoia would be much better spent on the subtle kind. Since that kind, you know, works.
How would you react? I'd hope it's some variation of "try to save everyone and fail," but realistically, it's going to be some variation of "not realize the full trolley problem until it's too late," because if you notice a potential accident early enough to figure out who's going to be killed by what action, it's probably early enough to avoid the accident altogether.
Don't get me wrong, it's a question that needs to be answered, but I don't get why people treat it as the conceptual problem that needs to be solved before we can trust self-driving cars. I'd put it in the same general tier as "How much should the car break the law to compensate for other people breaking the law?" (Because that question is something that needs to be answered for self-driving cars to handle your typical four-way stop sign in a reasonable amount of time.)
If I had to pick a "the conceptual problem," it would be the liability thing you mentioned. How liable is the manufacturer of an autocar for accidents caused in part by their software screwing up and in part by the driver not noticing that they screwed up until it's too late? The reason this is the question is that there isn't a single obvious answer. There's a fairly clear solution to any vanilla trolley problem, and most of the "Who should die in the tiny percent of a percent chance that the car recognizes an accident too late to stop it but early enough to decide who should die" scenarios are basically trolley problems, except that logically there shouldn't be clear answers due to the lack of tracks and stationary victims. The liability question, on the other hand, is not only much more likely to come up, but much harder to answer, because some fault lies with each party. You can't make a universal ratio of "The driver is x% liable and the manufacturer is y% liable," because each accident is different.
I will grant that there is another interpretation of the data: if a self-driving car causes fewer deaths, on average, than a human driver, then pushing it live is a social good even if it does kill people. This is potentially a valid argument, but again, the manufacturers are not willing to accept any responsibility of any kind for the people that wind up dying for their profit. There are a lot more incentives for them to push a bad product live than there are for them to wait until their product is really good, because if they wait the competition will eat their lunch.
Spoiler: *hums Star Wars theme*
Part of me is convinced that at least some of the chatbots people are talking about actually exist. That part of me has been properly shamed.
Aside: This makes it sound like you think Solo was better than TFA. I think Solo had more potential than TFA, given that the latter movie seemingly tried to show how the sequel trilogy would be different from the original trilogy via a series of subtle differences more than it tried to tell its own story, but Solo had enough tonal and structural issues (with the ones around L3 being the worst, since the movie couldn't seem to decide whether she was a plot device, a joke or someone with serious concerns worth addressing) that I can't say it was the better movie of the two.
If your chat bots have "launch nukes" programmed as a potential output, you've done something horribly wrong. Not even Elan wrong, that's more like Homer Simpson levels of "whoops".
That, or some idiot hooked your chatbot up to ICBM early warning systems and it got confused by a tangent about Starkiller Base.
-
2019-07-29, 11:48 PM (ISO 8601)
- Join Date
- Feb 2008
Re: OOTS #1172 - The Discussion Thread
Seems like the obvious answer, to me, is that now there is a hole in the ceiling and V can still fly.
V flies up and casts Dispel Magic. V isn't in the chamber, so is not affected. Several of the clan members are no longer dominated.
Vote fails.
-
2019-07-29, 11:51 PM (ISO 8601)
- Join Date
- Dec 2009
- Location
- Birmingham, AL
- Gender
Re: OOTS #1172 - The Discussion Thread
You clearly don't live in Oregon or New Jersey.
V is in the chamber. From the looks of it, the domed room is right up against the edge of the mountain, as the hole in the ceiling opens straight to daylight. Unless V can Dimension Door to outside the mountain, it doesn't look like they'll be able to do anything with that hole.
Last edited by Peelee; 2019-07-29 at 11:54 PM.
Cuthalion's art is the prettiest art of all the art. Like my avatar.
Number of times Roland St. Jude has sworn revenge upon me: 1
-
2019-07-29, 11:55 PM (ISO 8601)
- Join Date
- Feb 2013
-
2019-07-30, 12:14 AM (ISO 8601)
- Join Date
- Mar 2007
- Location
- Oregon, USA
Re: OOTS #1172 - The Discussion Thread
Feytouched Banana eldritch disciple avatar by...me!
The Index of the Giant's Comments VI―Making Dogma from Zapped Bananas
-
2019-07-30, 12:14 AM (ISO 8601)
- Join Date
- Oct 2007
- Location
- Olympia, WA
Re: OOTS #1172 - The Discussion Thread
The Giant says: Yes, I am aware TV Tropes exists as a website. ... No, I have never decided to do something in the comic because it was listed on TV Tropes. I don't use it as a checklist for ideas ... and I have never intentionally referenced it in any way.
-
2019-07-30, 12:46 AM (ISO 8601)
- Join Date
- Sep 2015
Re: OOTS #1172 - The Discussion Thread
If you and I form a contract whereby you pay my next of kin a bunch of money for the privilege of killing me, and then you DO kill me, you still get turned to stone if you're in the dwarven clan council chamber.
In general employment laws exist because society recognizes that in an employer/employee relationship, the parties are inherently unequal in terms of knowledge and power, and therefore the law provides some minimal protections to the employee. This works in part because we generally guarantee food to people whose labor is worth less than the nominal minimum, with the intent that following the laws won't literally kill them.
With regards to self-driving cars, the trolley problem IS a liability problem. For the liable party, there is no right answer that results in zero liability.
In 2016, Uber predicted that by this year it would have 75,000 self-driving cars on the road, and it had been spending 20 million dollars a month toward that goal. These robots were intended to replace Uber taxi drivers and truck drivers. Uber's plans were significantly delayed after its prototype killed a pedestrian during live tests and the court found that its software could not tell the difference between a pedestrian and a puddle. Right now, Uber's plan to get ROI on its self-driving cars appears to be "Sue Waymo and stick them with the bill."
Also around 2016 Tesla stopped pushing their self-driving autopilot after a number of unfortunate accidents resulting in deaths, including several cases where it turned out the car had a blind spot that allowed high-clearance truck beds to decapitate the Tesla drivers. Guess how they found this out. There are some decent reasons right now to buy a Tesla, but the autopilot is not one of them.
There are AIs that can write music and books; I wouldn't be surprised if they could do a better Star Wars script than the prequels. You just need to feed a few million hours of blockbuster scripts into the AI and let it crunch for a couple of years.
Edit: The Tesla accident I linked is NOT the one I was thinking of, but it's the same blind spot three years later, apparently. In the one I was thinking of, the autopilot was trying to park the car and drove under a high-clearance truck bed at low speed, taking the roof off.
Last edited by diremage; 2019-07-30 at 01:10 AM.
-
2019-07-30, 01:14 AM (ISO 8601)
- Join Date
- Feb 2019
-
2019-07-30, 01:27 AM (ISO 8601)
- Join Date
- Mar 2017
- Location
- US
- Gender
Re: OOTS #1172 - The Discussion Thread
-
2019-07-30, 01:42 AM (ISO 8601)
- Join Date
- Jan 2007
- Location
- Singapore
-
2019-07-30, 01:48 AM (ISO 8601)
- Join Date
- Jun 2009
- Gender
Re: OOTS #1172 - The Discussion Thread
I think computer-based collision avoidance would be (far) superior to human-based.
Add two factors:
In my scenario, all cars are controlled by a central computer: thus, they won't run into each other, because that computer controls all of them.
Thus, you avoid the SECOND car running into the FIRST car because the first car suddenly does something the second one doesn't expect.
Which wouldn't happen at all, because the central computer KNOWS all cars' moves in advance.
Also, it would know EXACTLY how much distance to keep in the first place, because obviously it would know the accelerating/decelerating capabilities of either car and keep a safe distance at all times (VERY UNLIKE, you know, the stupid humans who drive the cars right now!).
Of course humans might be able to press an emergency stop button if they happen to see something the car doesn't. But guess what? See above about safety distance.
Re: Rails. Well, there wouldn't be actual rails, but I imagine the system would be set up so that the cars basically drive on pre-defined tracks, in order to simplify the calculation algorithms.
Which, come to think of it, is what we are doing right now ANYWAY (or SHOULD be doing). Many accidents happen because drunk and/or tired people deviate from their tracks.
Now that I think of it, I believe the future will be that FIRST people will install "my" system on autobahns.
It is probably far easier than installing it within cities' street networks, people would benefit HUGELY from not being run into at high speeds, AND travelling long distances is the most boring driving anyway - and therefore the kind during which drivers are most likely to fall asleep.
It most closely resembles travelling on rails - so it's the most boring, and the easiest to install and program, I imagine.
Boytoy of the -Fan-Club
What? It's not my fault we don't get a good-aligned female paragon of promiscuity!
I heard Blue is the color of irony on the internet.
I once fought against a dozen people defending a lady - until the mods took me down in the end.
Want to see my prison tattoo?
*Branded for double posting*
Sometimes, being bad feels so good.
-
2019-07-30, 02:06 AM (ISO 8601)
- Join Date
- Jan 2012
Re: OOTS #1172 - The Discussion Thread
It's a pretty well-established feature of official comic discussion threads on here that after a certain amount of time, they wind up becoming mostly an argument about some real-world topic with only the most tenuous relationship to the actual comic.
But even considering that, I'm impressed at how quickly this self-driving cars debate sprang up.
-
2019-07-30, 02:16 AM (ISO 8601)
- Join Date
- Sep 2015
-
2019-07-30, 02:44 AM (ISO 8601)
- Join Date
- Jan 2012
Re: OOTS #1172 - The Discussion Thread
I've been talking about non-dwarves the entire time. "Outsiders" being the people who don't belong in the council room.
I'm just saying, it should definitely be possible for someone outside to start damaging the hole more, trying to intervene from outside through the hole, or even dropping in now! It's hard to say what would happen, but it seems a whole realm of new possibilities has opened up.
Avatar by linklele!
-
2019-07-30, 03:25 AM (ISO 8601)
- Join Date
- Oct 2015
- Gender
Re: OOTS #1172 - The Discussion Thread
Three days later, according to a quick search.
Unrelated: I'm not sure how we got onto self-driving cars or how we did so in a way that doesn't violate the no-politics rule. Or maybe I'm having trouble thinking of how to comment in a way that doesn't, even though my main comment is just "mass transit > self-driving cars."
-
2019-07-30, 03:47 AM (ISO 8601)
- Join Date
- Feb 2019
Re: OOTS #1172 - The Discussion Thread
The slow reaction time of humans is the main reason that speed limits on limited-access highways are not higher--currently around 65 miles per hour (roughly 105 km/h) in the USA, for example. Our current vehicles are physically capable of traveling nearly twice this speed, but doing so would mean half as much time available for human reaction combined with a longer braking distance, which is very bad when an obstacle comes up and there's no space to move into an adjacent lane.
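To put rough numbers on this, here's a back-of-envelope sketch of total stopping distance (reaction distance plus braking distance). The 1.5 s reaction time and 7 m/s² deceleration are my own illustrative assumptions, not figures from the post:

```python
# Stopping distance = reaction distance + braking distance:
#   reaction distance = v * t_react       (distance covered before braking)
#   braking distance  = v^2 / (2 * a)     (distance covered while braking)
# Assumed: 1.5 s human reaction time, 7 m/s^2 braking deceleration.

def stopping_distance(speed_kmh, t_react=1.5, decel=7.0):
    v = speed_kmh / 3.6                  # convert km/h to m/s
    reaction = v * t_react               # travelled at full speed before braking
    braking = v ** 2 / (2 * decel)       # travelled while decelerating to zero
    return reaction + braking

for speed in (100, 200):
    print(f"{speed} km/h: {stopping_distance(speed):.0f} m")
```

Because braking distance grows with the square of speed, doubling from 100 to 200 km/h more than triples the total stopping distance under these assumptions, which is the post's point: the extra speed is physically available, but the margin for human reaction is not.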
-
2019-07-30, 03:49 AM (ISO 8601)
- Join Date
- Sep 2015
-
2019-07-30, 04:32 AM (ISO 8601)
- Join Date
- Jun 2015
Re: OOTS #1172 - The Discussion Thread
Finally, some NPCs that aren't stupid enough to fall for it. I do so hate it when the plot only works if the characters are idiots, especially if it's for the same tired joke over and over again. And that ceiling is about to collapse on all the dominated elders who haven't voted yet.
-
2019-07-30, 05:51 AM (ISO 8601)
- Join Date
- Dec 2018
- Location
- Australia
- Gender
Re: OOTS #1172 - The Discussion Thread
Hah, I think Durkon has a cunning plan.
"You... little... *****. It's what my old man called me, it's like it was my name, and I proved him right, by killing all the wrong people. [And], I love ya Henry, and I'll never call you anything but your name, but you gotta decide; are you gonna lay there, swallow that blood in your mouth, or are you gonna stand up, spit it out, and go spill theirs?" - Unknown
-
2019-07-30, 06:01 AM (ISO 8601)
- Join Date
- Jul 2019
Re: OOTS #1172 - The Discussion Thread
I mean, we’ve already had one Linear Guild member return this book, and there is one Linear Guild member who can fly, is an Outsider, and is most likely on this plane.
I thought TFA was really good and that Solo was okay; it's much easier to write an okay movie than a really good one, so my chatbot has only reached that far.
[force]This is not the idiot you’re looking for.[/force]
Last edited by Schroeswald; 2019-07-30 at 06:06 AM.
-
2019-07-30, 06:46 AM (ISO 8601)
- Join Date
- Feb 2017
-
2019-07-30, 07:23 AM (ISO 8601)
- Join Date
- Jan 2015
- Location
- Brazil
- Gender
Re: OOTS #1172 - The Discussion Thread
Each one of us, alone, is but a drop in the sea
Our powers pale compared with the great heroes
Our battles don’t hit the headlines or shake the earth
But they are few, can’t be everywhere, and we, many
So, when the world or universe needs saving, they come
But when people needs saving, we are the ones to appear
We're underdogs, but we rise up to the challenge to be heroes.
(Wishing Joe, a low-powered superhero)
"I really like the Geek Math'ology we do here"
-
2019-07-30, 07:27 AM (ISO 8601)
- Join Date
- Aug 2007
Re: OOTS #1172 - The Discussion Thread
You are grossly overestimating the capabilities of single-system control. You've just described a far more complex version of the global flight management system. There are about 100,000 flights per day across the entire world. Los Angeles alone, it seems, has four times that many cars per day on a single one of its roads. The global flight system is run by some of the best, most reliable computer systems we've ever built, and is still heavily dependent on human beings doing the heavy lifting. And unlike cars, it can avoid vehicle intersection in three dimensions, with far greater margins of error than you find on any road.
The idea you could control all cars in a city with a single computer, with little enough delay in communication to make safe decisions (where decisions need to be made and applied in fractions of a second) is at this point sci-fi.
ETA: To illustrate the issue, what exactly do you think this computer would do if a child on a bike swerved into traffic? How would the computer 1) identify the problem, 2) calculate the rerouting for every car in the vicinity, avoiding them crashing into other already-calculated paths, and 3) transmit the corrections to all vehicles that might be involved, in the approximately second and a half it takes for that kid to be run over?
AI cars deal with this by each carrying a computer of their own, processing its own visual data in real time, and driving defensively when called for and aggressively when it must. Centralizing the problem would require a computer that would dwarf Google's entire operation, and it'd still be incapable of dealing with the sheer raw data from billions of vehicles all needing second-to-second decisions.
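A back-of-envelope sketch of why centralizing the decision loop is hard on data volume alone. Every number here is my own illustrative assumption (car count, update rate, payload size), not a figure from the thread:

```python
# Rough, illustrative numbers only: a central controller must ingest state
# from every car, decide, and push commands back, all well inside the
# second-and-a-half window described above.

cars = 4_000_000           # assumed: cars active at once in one large metro area
update_hz = 10             # assumed: state updates per car per second
bytes_per_update = 2_000   # assumed: one compressed telemetry summary

inbound_bytes_per_sec = cars * update_hz * bytes_per_update
print(f"inbound: {inbound_bytes_per_sec / 1e9:.0f} GB/s")  # sustained ingest

# And ingest is the easy half: each decision round trip also pays network
# latency twice (car -> controller -> car) plus queueing under load, none of
# which an on-car computer pays at all.
```

Even with these conservative guesses, a single metro area generates tens of gigabytes of telemetry per second before any path planning happens, which is the thrust of the post: per-car computers sidestep both the ingest and the round-trip latency problems.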
Grey Wolf
Last edited by Grey_Wolf_c; 2019-07-30 at 08:07 AM.
Interested in MitD? Join us in MitD's thread.
There is a world of imagination
Deep in the corners of your mind
Where reality is an intruder
And myth and legend thrive
Ceterum autem censeo Hilgya malefica est
-
2019-07-30, 07:58 AM (ISO 8601)
- Join Date
- Nov 2007
- Location
- USA
- Gender
-
2019-07-30, 08:12 AM (ISO 8601)
- Join Date
- Jan 2015
- Location
- Brazil
- Gender
Re: OOTS #1172 - The Discussion Thread
Been re-reading the strip and it got me thinking: what happens to someone who is turned to stone? I mean, is he dead and his soul goes to the afterlife, or does his soul stay in the petrified body until it's destroyed or he "dies of old age" (since in DnD people have a precise expiration date)?
If we suppose Durkon's petrification means he's dead, was his death honorable? Yes, he died sort of in battle, and fighting for the sake of all dwarves, but he also died breaking dwarven law, so I can see this going one way or another...
And what if the crux of their backup plan wasn't just about the hammer returning, but also about him dying and going wherever he needs to go? (And I have no idea where that should be.)
Last edited by D.One; 2019-07-30 at 08:12 AM.
Each one of us, alone, is but a drop in the sea
Our powers pale compared with the great heroes
Our battles don’t hit the headlines or shake the earth
But they are few, can’t be everywhere, and we, many
So, when the world or universe needs saving, they come
But when people needs saving, we are the ones to appear
We're underdogs, but we rise up to the challenge to be heroes.
(Wishing Joe, a low-powered superhero)
"I really like the Geek Math'ology we do here"