View Full Version : The new improved fully-automated killing machine



pendell
2011-09-20, 02:37 PM
As seen in the Washington Post (http://www.washingtonpost.com/national/national-security/a-future-for-drones-automated-killing/2011/09/15/gIQAVy9mgK_story.html?hpid=z1)



This successful exercise in autonomous robotics could presage the future of the American way of war: a day when drones hunt, identify and kill the enemy based on calculations made by software, not decisions made by humans. Imagine aerial “Terminators,” minus beefcake and time travel.

The Fort Benning tarp “is a rather simple target, but think of it as a surrogate,” said Charles E. Pippin, a scientist at the Georgia Tech Research Institute, which developed the software to run the demonstration. “You can imagine real-time scenarios where you have 10 of these things up in the air and something is happening on the ground and you don’t have time for a human to say, ‘I need you to do these tasks.’ It needs to happen faster than that.”

The demonstration laid the groundwork for scientific advances that would allow drones to search for a human target and then make an identification based on facial-recognition or other software. Once a match was made, a drone could launch a missile to kill the target.
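
The "needs to happen faster than a human" part is, at heart, a task-allocation problem. For a sense of what that loop looks like, here's a minimal greedy-assignment sketch; everything in it (the drone names, coordinates, and the greedy rule itself) is made up for illustration and has no connection to the actual Georgia Tech software:

    # Toy greedy task allocation for a handful of drones. Purely
    # illustrative: names, positions, and the algorithm are invented.
    import math

    def assign(drones, tasks):
        """drones, tasks: dicts of name -> (x, y). Returns {drone: task}."""
        assignments = {}
        free, open_tasks = dict(drones), dict(tasks)
        while free and open_tasks:
            # Each round, grab the globally cheapest (drone, task) pairing left.
            d, t = min(
                ((d, t) for d in free for t in open_tasks),
                key=lambda pair: math.dist(free[pair[0]], open_tasks[pair[1]]),
            )
            assignments[d] = t
            del free[d], open_tasks[t]
        return assignments

    print(assign({"uav1": (0, 0), "uav2": (5, 5), "uav3": (9, 0)},
                 {"search_A": (1, 1), "search_B": (6, 4)}))
    # {'uav1': 'search_A', 'uav2': 'search_B'}

Real systems presumably use fancier auction or optimization schemes, but even this toy makes the point: once positions stream in, the assignment loop has no human in it.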



What could possibly go wrong?
...
Oh ... right (http://www.youtube.com/watch?v=XDJltedOlkM&feature=fvst).

Maybe they'll be useful against the zombie hordes ... is there a robots vs. zombies thread? There oughta be.

Respectfully,

Brian P.

Dr.Epic
2011-09-20, 08:06 PM
What could possibly go wrong?
...
Oh ... right (http://www.youtube.com/watch?v=XDJltedOlkM&feature=fvst).

More like what could possibly go right?

THIS! (http://upload.wikimedia.org/wikipedia/en/a/a6/Bender_Rodriguez.png) Bite my shiny metal ass! :)

Destro_Yersul
2011-09-20, 09:32 PM
well, I imagine they'd still need orders from humans about what to shoot, and then the hunting and shooting would be autonomous. Still...

Skynet is what happens when you build robots that aren't Three Laws Compliant.

Dr.Epic
2011-09-20, 09:32 PM
Skynet is what happens when you build robots that aren't Three Laws Compliant.

Even the three laws are flawed. Didn't you see I, Robot?

Destro_Yersul
2011-09-20, 10:04 PM
Even the three laws are flawed. Didn't you see I, Robot?

Course I did. It took advantage of the Zeroth law and a robot that was really too smart for its own good. They're still better than pretty much anything else we could come up with.

Prime32
2011-09-21, 10:47 AM
Skynet is what happens when you build robots that aren't Three Laws Compliant.

It's incredibly difficult to get an AI to understand the three laws, and even if they did they could just go insane and view humans as robots or something. They might even be able to do that on purpose.

pendell
2011-09-21, 11:20 AM
It's incredibly difficult to get an AI to understand the three laws, and even if they did they could just go insane and view humans as robots or something. They might even be able to do that on purpose.

Yes. The laws are general principles which really require human-level intelligence to understand correctly. Imagine a three laws robot programmed for traffic control. It might very well flip all the stoplights to red and prevent any traffic from moving in a city at all. After all, there's a chance that a human would be killed in an auto accident, which can only be prevented by ensuring no humans travel in automobiles at all.

Taken to a logical extreme, "preventing humans from coming to harm" might mean placing them in suspended animation so there was no risk of harm, even from natural aging. There's a certain level of acceptable risk humans need in order to fulfill their functions. Calibrating that level of risk is something humans today can't agree on, as with laws requiring that children wear helmets on bicycles. I'd hate to even think of formalizing the rules to the point a computer could understand and apply them.
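
To make the traffic-control example concrete, here's a deliberately naive sketch (the plans and risk numbers are invented): a controller that scores signal plans only by expected collision harm will always pick all-red, because stopped traffic scores a perfect zero.

    # Naive "First Law" traffic controller: choose the signal plan with
    # the least expected harm. Plans and risk numbers are invented; the
    # point is the missing term, not the values.
    PLANS = {
        "normal_cycle": {"throughput": 1.0, "collision_risk": 1e-6},
        "slow_cycle":   {"throughput": 0.5, "collision_risk": 5e-7},
        "all_red":      {"throughput": 0.0, "collision_risk": 0.0},
    }

    def expected_harm(plan):
        # Only collisions count as harm; a city at a standstill costs nothing.
        return PLANS[plan]["collision_risk"]

    print(min(PLANS, key=expected_harm))  # all_red: no traffic, no chance of harm

And the hard part is exactly the calibration problem: adding a cost for stopped traffic means deciding how much delay a statistical injury is worth, and that's the number humans can't agree on.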

Respectfully,

Brian P.

Mando Knight
2011-09-21, 11:23 AM
WETA has already built the perfect housing (http://www.youtube.com/watch?v=j1kccz1qziE) for defensive emplacements.

Tirian
2011-09-21, 01:39 PM
I'm sure there are times when science fiction has increased people's understanding of science, but artificial intelligence is one of the colossal failures by that measure. An automated drone is no more capable of forming the idea of conquering the society of its handlers than a trained falcon is.

kaomera
2011-09-21, 09:32 PM
well, I imagine they'd still need orders from humans about what to shoot, and then the hunting and shooting would be autonomous. Still...
Actually, they're training them Cleverbot-style by plugging them into WoW... You may even have PUGged with one of them without knowing it! So now they'll go after anything a tank pulls. :smalltongue:

The Succubus
2011-09-26, 05:55 AM
Actually, they're training them Cleverbot-style by plugging them into WoW... You may even have PUGged with one of them without knowing it! So now they'll go after anything a tank pulls. :smalltongue:

Sweet merciful Jesus. It all makes sense now.

*thinks a little more*

Given some of the choice phrases I've used with people like that, it's a pretty fair bet I'll be first against the wall when the robot uprising begins. :smalleek:

Xyk
2011-09-27, 12:32 AM
I'm sure there are times when science fiction has increased people's understanding of science, but artificial intelligence is one of the colossal failures by that measure. An automated drone is no more capable of forming the idea of conquering the society of its handlers than a trained falcon is.

Reading this, I'm suddenly afraid of a falcon takeover.

Joran
2011-09-27, 02:49 AM
It's incredibly difficult to get an AI to understand the three laws, and even if they did they could just go insane and view humans as robots or something. They might even be able to do that on purpose.

Pish posh. Don't you know the Three Laws are an inherent part of a positronic brain? An AI cannot exist without the Three Laws ingrained in it! (Seriously, that's how Asimov handwaved them into robots).


Yes. The laws are general principles which really require human-level intelligence to understand correctly. Imagine a three laws robot programmed for traffic control. It might very well flip all the stoplights to red and prevent any traffic from moving in a city at all. After all, there's a chance that a human would be killed in an auto accident, which can only be prevented by ensuring no humans travel in automobiles at all.

Taken to a logical extreme, "preventing humans from coming to harm" might mean placing them in suspended animation so there was no risk of harm, even from natural aging. There's a certain level of acceptable risk humans need in order to fulfill their functions. Calibrating that level of risk is something humans today can't agree on, as with laws requiring that children wear helmets on bicycles. I'd hate to even think of formalizing the rules to the point a computer could understand and apply them.


Well, in Asimov, they were human-intelligence-level AIs. And all the mishaps in "I, Robot" the book are due to minor calibrations to the Three Laws. For instance, robots kept trying to save humans who were receiving a safe dose of radiation (What if they stay too long?!) and kept frying themselves. So, they modified the First Law to remove the "through inaction allow a human to come to harm" clause. Err... that had some unintended consequences.

grimbold
2011-09-27, 01:26 PM
DO WANT
this is the ULTIMATE super villain weapon :smalltongue: