Robots that lie



pendell
2009-08-20, 07:04 AM
Popular Science (http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other)



In an experiment run at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne, France, robots that were designed to cooperate in searching out a beneficial resource and avoiding a poisonous one learned to lie to each other in an attempt to hoard the resource. Picture a robo-Treasure of the Sierra Madre.

The experiment involved 1,000 robots divided into 10 different groups. Each robot had a sensor, a blue light, and its own 264-bit binary code "genome" that governed how it reacted to different stimuli. The first generation robots were programmed to turn the light on when they found the good resource, helping the other robots in the group find it.

The robots got higher marks for finding and sitting on the good resource, and negative points for hanging around the poisoned resource. The 200 highest-scoring genomes were then randomly "mated" and mutated to produce a new generation of programming. Within nine generations, the robots became excellent at finding the positive resource, and communicating with each other to direct other robots to the good resource.

However, there was a catch. A limited amount of access to the good resource meant that not every robot could benefit when it was found, and overcrowding could drive away the robot that originally found it.

After 500 generations, 60 percent of the robots had evolved to keep their light off when they found the good resource, hogging it all for themselves. Even more telling, a third of the robots evolved to actually look for the liars by developing an aversion to the light; the exact opposite of their original programming!
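
The breeding loop described there is a textbook genetic algorithm. Here's a minimal sketch in Python of roughly how it might work - the 264-bit genome, the population of 1,000, and the top-200 selection come from the article, but the mutation rate and the single-point crossover are my own guesses, since the article doesn't say:

import random

GENOME_BITS = 264     # from the article: each robot's "genome" is 264 bits
POP_SIZE = 1000       # 1,000 robots in the experiment
SURVIVORS = 200       # the 200 highest-scoring genomes get "mated"
MUTATION_RATE = 0.01  # assumption: the article gives no mutation rate

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_BITS)]

def crossover(a, b):
    # Single-point crossover - one plausible "mating" scheme; the article doesn't specify.
    point = random.randrange(1, GENOME_BITS)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def next_generation(scored):
    # scored: list of (fitness, genome) pairs from one round of foraging.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    parents = [genome for _, genome in scored[:SURVIVORS]]
    return [mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP_SIZE)]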


I, for one, welcome our new robot masters.

Respectfully,

Brian P.

Myshlaevsky
2009-08-20, 07:10 AM
Pretty interesting. Thanks for posting it, pendell.

Ichneumon
2009-08-20, 07:20 AM
I knew this day would come. They've been lying to us all along!


...

Seriously though, interesting article.

Elder Tsofu
2009-08-20, 07:21 AM
Hm, it's not as if they lie - it seems more like they lost their ability to turn the light on during a "mutation" (and thus couldn't do it whether they wanted to or not).
And how is aversion to the light a good thing... it only drives you away from the ones still flaring when they find something. (It could be useful if some flared all the time except when they found something, though.)

kamikasei
2009-08-20, 07:27 AM
Further to Elder Tsofu's point, I don't think they're modelling the mental processes of the other robots to such an extent that you can say they're lying. However, anything that erodes the nonsense in the popular consciousness about AIs being perfectly hyperintelligent emotionless beings incapable of falsehood or understanding contradiction is welcome.

orchitect
2009-08-20, 08:14 AM
500 generations of robot inbreeding is what it sounds like to me.

:smallbiggrin:

Eldan
2009-08-20, 08:49 AM
Right: inbreeding leads to lying robots :smalltongue:

Interesting. Nice use of game theory.

(Shame on that article for one thing, though: the EPFL is not in France, it's in Switzerland.)

Joran
2009-08-20, 09:27 AM
Hopefully, they included this in their fitness function:

http://imgs.xkcd.com/comics/genetic_algorithms.png

http://xkcd.com/534/

Otherwise, we'll have to create the zombie apocalypse to counter the robot revolution and that might get messy.

truemane
2009-08-20, 11:21 AM
Pfft. At least you never get lies from a Zombie. Say what you like about the infernally animated cannibalistic undead, at least they're upfront about their needs: Brains, lots of them, as often as possible.

Totally Guy
2009-08-20, 11:31 AM
Now they need to get Roombas to trick each other into missing out on the best dirt.

This is good game theory and I'd like to see more about that.

I can't get the table function to work.

Evil DM Mark3
2009-08-20, 12:04 PM
I agree with some of the other posters here; this seems more like bad "mating" and mutation than learning to lie.

Ichneumon
2009-08-20, 12:08 PM
I agree with some of the other posters here; this seems more like bad "mating" and mutation than learning to lie.

After having reread the article, I'm afraid I have to agree. The lying in itself doesn't seem like an evolutionary adaptation at all.

Erloas
2009-08-20, 12:26 PM
Hm, it's not as if they lie - it seems more like they lost their ability to turn the light on during a "mutation" (and thus couldn't do it whether they wanted to or not).
And how is aversion to the light a good thing... it only drives you away from the ones still flaring when they find something. (It could be useful if some flared all the time except when they found something, though.)

Well, this is a bit of information that is not presented. If they simply can't turn on their lights any more, that is one thing. But I think what is implied, though not actually stated, is that they could still turn on their light if they wanted to but chose not to, to avoid others showing up to compete for the resource.

They said that others clearly can still turn on their lights. They also said that some robots learned to check out a resource another robot is sitting at even if its light is off. That would also seem to imply they found some robots that would have their light on at a bad resource. They don't really say whether there are different levels of good and bad resources, whether all good resources are equal, or whether some are better than others. If there are different levels of good, a robot might find a positive resource but keep looking for a better one. In that case, maybe a robot that finds a bad resource would turn on its light to attract another robot, then leave to check whether the other robot had found a positive but not very good resource.

Of course, the whole article is rather vague and doesn't really go into anything, but that is only to be expected from Popular Science. My guess is that the title comes from real conclusions the experimenters actually drew, but the article didn't bother to include any of the important details.

littlequietguy
2009-08-20, 12:27 PM
I saw this after reading the Liar Game manga...
:smallamused::smallsmile: They are nowhere close to that level.

Lupy
2009-08-20, 12:30 PM
I think I've seen this before a while ago, or something similar to it.

Yarram
2009-08-20, 04:18 PM
Whether or not they lie though... That's awesome.

Elder Tsofu
2009-08-20, 04:38 PM
Well, this is a bit of information that is not presented. If they simply can't turn on their lights any more, that is one thing. But I think what is implied, though not actually stated, is that they could still turn on their light if they wanted to but chose not to, to avoid others showing up to compete for the resource.

Yup, it's vague, so I went with the easiest explanation. They somehow crossed the programs with each other and induced mutations (duplications, insertions, deletions, etc.).
The easiest explanation is that one robot got a mutation which made it unable to signal to the others, which gave it an edge over them.
This led to winning, and to the spread of the error in later generations.
That is much more probable than the program evolving and recoding itself to become sentient. :smallwink:
And I'm usually sceptical of most studies - I let them convince me with their arguments - and this one failed, and thus got labelled as attention-seeking garbage. :smallwink:

Bacteria and computers seem quite alike.
Much as bacteria share useful (and less useful) plasmids with each other to develop drug resistance and gain other advantages, computers could infect each other with code that is, to us, malicious. It would just happen much, much faster, and we would probably not know what hit us. :smalltongue:

Erloas
2009-08-20, 06:00 PM
Yup, it's vague, so I went with the easiest explanation. They somehow crossed the programs with each other and induced mutations (duplications, insertions, deletions, etc.).
The easiest explanation is that one robot got a mutation which made it unable to signal to the others, which gave it an edge over them.

I read the other article that the Popular Science piece linked to; it didn't say a whole lot more, but some. It sounded very clear that they all still had the ability to use their lights. In fact, it sounded like a lot of them used the light while out in the open searching for the resource and then turned it off once they found it.

kpenguin
2009-08-20, 06:04 PM
Yup, it's vague, so I went with the easiest explanation. They somehow crossed the programs with each other and induced mutations (duplications, insertions, deletions, etc.).
The easiest explanation is that one robot got a mutation which made it unable to signal to the others, which gave it an edge over them.
This led to winning, and to the spread of the error in later generations.
That is much more probable than the program evolving and recoding itself to become sentient. :smallwink:

Wait, isn't that how evolution works? A mutation is introduced which lends an advantage, thus giving the mutated organism a greater chance of survival and of producing offspring, thus spreading the mutation over generations?

Linkavitch
2009-08-20, 08:22 PM
Scary...I need to make myself a tin foil hat now...

thubby
2009-08-20, 08:53 PM
seriously, what idiot decided making evolving robots was a good idea?

Yarram
2009-08-20, 09:00 PM
seriously, what idiot decided making evolving robots was a good idea?

I dunno. If we have robot overlords we won't really have to think much. It'll be awesome. (and I won't have to do ext. English exams.)

Flame of Anor
2009-08-20, 11:30 PM
You all should read Crabs Take Over The Island, by (I think) Anatoly Dnieprov.

Eldan
2009-08-21, 01:41 AM
Wait, isn't that how evolution works? A mutation is introduced which lends an advantage, thus giving the mutated organism a greater chance of survival and of producing offspring, thus spreading the mutation over generations?

That's just about the definition, yes. And, I might add, the point of the experiment.
Evolutionary biologists have used these evolutionary game theory experiments for some time, though I'm not sure I've heard of one actually using robots instead of just programs running on a large computer. It's really fascinating. I'll go dig around the EPFL website and see if I find anything. Though my French is still horribly bad.

Edit, from their website:

Artificial Evolution
We have developed several novel approaches to artificial evolution of complex embedded systems characterized by non-linear interactions of multiple hardware and software components. The application to robotics, known as Evolutionary Robotics, is a classic specialty of the laboratory. Current research efforts aim at evolutionary synthesis of analog electrical circuits, learning neural controllers, reverse engineering of biological networks (genetic and metabolic networks), and biomedical signal processing.

Okay, these people make interesting stuff. Almost makes me sad that I studied ecology and not robotics. Though a French-speaking university really wouldn't be my thing.
http://lis.epfl.ch/. Hmm. Link is strange, but click on "projects" or "robots" on the left side. Fascinating. Swarm bots are scary, though.

They also have a youtube channel I'll have to take a look at when I get home. (http://www.youtube.com/user/EPFLLIS)

Elder Tsofu
2009-08-21, 02:59 AM
Wait, isn't that how evolution works? A mutation is introduced which lends an advantage, thus giving the mutated organism a greater chance of survival and of producing offspring, thus spreading the mutation over generations?

Yup, it is evolution; I apologise if I was unclear on the subject.
The first part of the quote is just a much "simpler" evolution, with a much larger probability of happening than the program recoding itself to become aware that it has found a (whatever it was) and deciding not to lead the other robots there.
It would arise faster than the other, and thus many robots would acquire it; and once a robot has acquired it, it won't get any further advantage from the more complex mutation, as its light is already turned off (which lowers the probability of the complex mutation spreading even further).
Assuming the simple mutation is not "fatal", that is; otherwise no offspring would get it.

kamikasei
2009-08-21, 04:12 AM
It's certainly evolution, it's just not lying. The robots are just too dumb to lie.

It's a case where the population starts out programmed "find food -> turn on light, see light -> go to it". After a while some of them are "find food -> do nothing", while others are still "see light -> go to it". The former aren't deceiving the latter, the latter are just not well adapted to the changed conditions.
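
To put that in code - my own toy rendering, mind you, not the actual controllers, which are evolved from bit-string genomes rather than written out like this:

from dataclasses import dataclass

@dataclass
class Senses:
    at_food: bool
    sees_light: bool

# Generation 1: the behaviour the experimenters programmed in.
def original_policy(senses):
    return {
        "light_on": senses.at_food,           # find food -> turn on light
        "approach_light": senses.sees_light,  # see light -> go to it
    }

# After ~500 generations, for ~60% of the population: the signalling
# rule has simply mutated away. No model of other robots is involved.
def evolved_policy(senses):
    return {
        "light_on": False,                    # find food -> do nothing
        "approach_light": senses.sees_light,  # still drawn to lights
    }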

turkishproverb
2009-08-21, 04:20 AM
Bingo. They're not lying per se. If they were turning the light on in the wrong spot and then dashing to the right one, I might buy this "lying" business, but as it is written there? Not seeing it.

thubby
2009-08-21, 11:41 AM
lying by omission. they are supposed to turn on their light when they find goodies.
it would be like the party rogue finding a chest and not telling anyone.

Pyrian
2009-08-21, 12:27 PM
"Feminine Products" :haley:

kamikasei
2009-08-21, 01:02 PM
lying by omission. they are supposed to turn on their light when they find goodies.
it would be like the party rogue finding a chest and not telling anyone.

It's more like the party rogue finding a chest and not telling anyone because they have no concept of language, just an inherited tendency to yell when they see treasure, or not.

Erloas
2009-08-21, 01:43 PM
It's more like the party rogue finding a chest and not telling anyone because they have no concept of language, just an inherited tendency to yell when they see treasure, or not.

But putting it like that ignores the context. If the party rogue happened to be mute, but was still expected to use whatever simple method they had to communicate that fact to the others, then it would be the same.
These robots have only a very limited way to interact with each other, and even though they are supposed to help the others find food (it was in all the robots' original code, or so it was implied), some were not using the communication they did have available.

Whether it is lying or not is a little questionable, but at the very least they are clearly keeping secrets. They know something they don't want others to know and are not telling them, even though they are supposed to.

kamikasei
2009-08-21, 01:52 PM
Whether it is lying or not is a little questionable, but at the very least they are clearly keeping secrets. They know something they don't want others to know and are not telling them, even though they are supposed to.

But the "supposed" exists only in the human observers' minds. It's something being imposed on the activity from outside. The robots have no mental model of the other robots and their activities. They're just finding something and it's either triggering a behaviour or it's not. Nothing in their programming is saying "you are supposed to inform the others". The only thing resembling that is "when you find food, turn on your light". And the ones who keep their light off don't have that instruction.

The rogue in my example has no expectations. She and the rest of the party are the purest innocents imaginable. All they know how to do is run around and occasionally stumble across things that they like. Some of them instinctively make a commotion when they do so. Others don't. The actions of the others and whether or not they're drawn to the commotion don't enter in to their thinking. Their thinking doesn't exist. Deceit does not exist. Intention does not exist. There is simply stimulus and response.

Pyrian
2009-08-21, 02:02 PM
Whether or not an evolutionary tendency should be described as an intention is a semantic argument of considerable meaninglessness. The simple fact of the matter is that it is too useful a semantic construct to not use. Stating that "eyes are FOR seeing" is so much less clumsy than stating that "eyes gradually evolved the way they are because organisms with eyes out-reproduced organisms without eyes due to the fact that being able to see gave them various advantages".

kamikasei
2009-08-21, 02:11 PM
The simple fact of the matter is that it is too useful a semantic construct to not use.

It's thoroughly misleading people as to what's actually going on, which I would call the opposite of useful. I'm not saying that because this tendency evolved, it cannot be described as an intention. I'm saying that the only "deceit" involved is at the level of evolution, not the individual. This doesn't even rise to the level of such examples as animals which mimic poisonous species to deter predators, a genuine example of deceit where the individuals aren't doing any thinking. This work is interesting, but pitching it as "robots learn to lie!" is both exaggerating and misleading.

Pyrian
2009-08-21, 03:06 PM
It's thoroughly misleading people as to what's actually going on, which I would call the opposite of useful.

I don't think many people are actually confused by that at all - and those that are have no idea what was going on anyway.


I'm saying that the only "deceit" involved is at the level of evolution, not the individual.That's an enormous change in your position and I'm glad I convinced you to recant your earler statements and take on my more reasonable perspective, even if you're not willing to admit that that's what just happened. (Your earlier statements were flat declarations that no lying occurred outside of the imagination of observers on precisely the grounds that the robots possessed no individual intent.)


This doesn't even rise to the level of such examples as animals which mimic poisonous species to deter predators, a genuine example of deceit where the individuals aren't doing any thinking.

I would say it actually rises above that level, as it directly involves communication within a species, rather than mere camouflage.

Eldan
2009-08-21, 03:10 PM
Actually, that would be an interesting idea... give the robots a way to really communicate:
give them three more lights - say yellow, green, and white - and give them the (mechanical, not programmed) ability to turn any combination of them on and to recognize colour combinations by sight. Then see if communication and strategy can evolve.
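
Just to make the size of that signal space concrete (my own back-of-the-envelope, not anything from the study): four independent on/off lights give 2^4 = 16 distinguishable combinations, against the current 2.

from itertools import product

LIGHTS = ("blue", "yellow", "green", "white")

# Every on/off combination of the four lights is a distinct signal
# that selection could, in principle, attach a meaning to.
signals = list(product((0, 1), repeat=len(LIGHTS)))
print(len(signals))  # 16 possible signals, versus 2 with the single blue light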

Elder Tsofu
2009-08-21, 03:22 PM
If a mute man doesn't answer you orally when you demand it, does that mean he refuses to talk to you?

If you don't have an intent to deceive, it's not a deception; if you aren't aware that you are saying something untrue, then it's not a lie. (It is just wrong.)

kamikasei
2009-08-21, 03:44 PM
I don't think many people are actually confused by that at all - and those that are have no idea what was going on anyway.

The people earlier in the thread talking about what the robots were "supposed" to do seem pretty confused to me. That the presentation of the research left them with "no idea what was going on" is precisely the problem.


That's an enormous change in your position, and I'm glad I convinced you to recant your earlier statements and take on my more reasonable perspective, even if you're not willing to admit that that's what just happened. (Your earlier statements were flat declarations that no lying occurred outside of the imagination of observers, on precisely the grounds that the robots possessed no individual intent.)

Before you get too self-congratulatory, I should point out that the change in my position is a) not to yours and b) not occasioned by anything you said. I simply read the ScienceBlogs version of the story, which was more detailed than the Popular Science one, and found that it was a little more nuanced than I had thought.

My initial posts were based on Elder Tsofu's summary that some robots just lost the inclination to turn on their light. The truth is slightly different.


I would say it actually rises above that level, as it directly involves communication within a species, rather than mere camouflage.

"Communication" is a misnomer. What they're doing is only communication if you class any leak of information into your environment as communication.

The channels of information are so narrow, the size of the populations so small, and the speed of evolution so fast, that while this is an interesting glimpse into how an organism's behaviour is influenced by the information it leaks into the environment, it doesn't begin to approach the stability of real-world situations where you can say that one organism has evolved specifically to exploit the perception and behaviour of another. That adaptation in turn is only very slightly like what we mean when we talk about humans lying, and is not at all what is conjured up by a headline like "robots learn to lie".

Erloas
2009-08-21, 04:06 PM
... it doesn't begin to approach the stability of real-world situations where you can say that one organism has evolved specifically to exploit the perception and behaviour of another. That adaptation in turn is only very slightly like what we mean when we talk about humans lying, and is not at all what is conjured up by a headline like "robots learn to lie".

Actually yes, they basically did evolve to exploit the behavior of others. While there were variations at the start, the majority of the robot community was programmed to turn on the light when it found a resource* and to seek out the blue lights of other robots* to find the resource more quickly. The robots that did that early in the experiment were the most successful; I think it said the majority did that by the 10th generation.
It was the fact that there was limited space at the resource, and that a robot which told others about the resource was more likely to be displaced from it, that got them to change their original behavior of showing the light when the resource was found and of looking for a light to find the resource.

*At least, that is what the article seemed to be saying to me.

I think they are saying the robots are lying not because one asks another a question, but based on a cultural expectation. The "have you found a resource?" question, answered with a simple yes or no via the blue light, isn't asked by each robot; it's an assumed question that exists in the community itself, based on the original programming.

Eldan
2009-08-21, 04:21 PM
I don't think that's it:

The robots couldn't change their behaviour at all: there were random mutations to their programs which resulted in behavioural changes, but no action by the robots was involved in that change. So, random evolution leading to beneficial strategy by natural selection.

Pyrian
2009-08-21, 05:01 PM
The people earlier in the thread talking about what the robots were "supposed" to do seem pretty confused to me. That the presentation of the research left them with "no idea what was going on" is precisely the problem.

On review of the early portion of the thread, I see no confusion entailed by the use of intention to describe evolved traits EXCEPT by those of you who consider that semantic construct "incorrect". So, I can only infer that any such confusion comes from doing what you did: deviating from the standard, accepted meanings of the terms in context.


Before you get too self-congratulatory, I should point out that the change in my position is a) not to yours and b) not occasioned by anything you said. I simply read the ScienceBlogs version of the story, which was more detailed than the Popular Science one, and found that it was a little more nuanced than I had thought.

That doesn't make sense. I was talking about the semantics, not the specific study. You went from "exists only in the human observers' minds" to "I'm saying that the only "deceit" involved is at the level of evolution, not the individual", an enormous distinction ("observers' minds" to "level of evolution") that, given the timing, I find very difficult to believe was not specifically occasioned by my post on the subject.


"Communication" is a misnomer. What they're doing is only communication if you class any leak of information into your environment as communication.It's an evolutionarily (or scientifically set up, in the origin) deliberate imparting of information (or misinformation) there specifically for others' use. That meets a much narrower standard of meaning for "communication" than even the dictionary definition of communication (which generally meets the standard you described above, anyway).


...and is not at all what is conjured up by a headline like "robots learn to lie".

Again, it only seems to be people on your side of the semantic argument who were confused by that. It's not their fault that you were using "intention" in a non-standard way. (It is unfortunate that there are two relatively distinct types of intention involved - the initial setup versus what was evolved towards.)


The robots couldn't change their behaviour at all: there were random mutations to their programs which resulted in behavioural changes, but no action by the robots was involved in that change. So, random evolution leading to beneficial strategy by natural selection.

You're contradicting yourself. The actions of the robots change (from generation to generation) by mutation (no action of their own) and by natural selection (entirely dependent on the robots' actions).

Evil DM Mark3
2009-08-21, 05:18 PM
Let's see what we can get for a dictionary definition of "lie". I use this site for convenience. (http://dictionary.reference.com/browse/lie)

LIE
a false statement made with deliberate intent to deceive; an intentional untruth; a falsehood.
something intended or serving to convey a false impression; imposture: His flashy car was a lie that deceived no one.
an inaccurate or false statement.
the charge or accusation of lying: He flung the lie back at his accusers.

Now it is clear that the key component of a lie is the intent to mislead. If a person makes a mistake he has not lied; he was mistaken. Thus, in order to lie, these robots must be aware both of the existence of other members of their "species" and of their own existence, as well as of the nature of truth. I doubt that they are that advanced; if they were, the headline would be "Researchers create AIs as smart as primates". If they lack this capacity, then this is just random behaviour, with selective and possibly damaging breeding leading to evolution in an insular environment.

kamikasei
2009-08-21, 06:04 PM
That doesn't make sense. I was talking about the semantics, not the specific study. You went from "exists only in the human observers' minds" to "I'm saying that the only "deceit" involved is at the level of evolution, not the individual", an enormous distinction ("observers' minds" to "level of evolution") that, given the timing, I find very difficult to believe was not specifically occasioned by my post on the subject.

You're confusing two separate points I was addressing. The only mention I made of "human observers" was in response to the claim that the robots were "supposed" to turn on their lights to signal one particular thing. But this is not part of the system - it's anthropomorphism. The human observers are the only ones doing the "supposing". It's not as if the robots get together at the start of each run and agree on what the signals mean, and then some of them go off to subvert that agreed meaning. Some of the robots are simply making mistaken assumptions about the behaviour of the others.

The confusion that leads someone to talk about what the robots are "supposed" to do is exactly the confusion I was talking about.

You then bring up a different issue, the question of semantics around a mindless adaptation, and I allow that the robots are doing something like the kind of "deceit" we talk about when a creature has some adaptation intended to fool another - though this is not at all the same thing we mean by saying that robots have evolved the ability to lie.

The discussion has strayed far enough from the actual topic of the thread that I can't be bothered trying to follow it at this point; if you want to kvetch further, PM me.

Berserk Monk
2009-08-21, 07:11 PM
Well, if robots do ever decide to take over, we can enlist an army of clay soldiers to stop them. (http://www.youtube.com/watch?v=3M4_XZ3FLHw&NR=1&feature=fvwp)