
Alien Intelligence and Nintendo



Talakeal
2015-03-09, 03:44 PM
So I was recently shown this series of videos on Youtube:

https://www.youtube.com/watch?v=xOCurBYI_gY

This is about a guy teaching a computer program to play classic video games. I found it to be a fascinating example of intelligence that is flat-out different from our own. The computer is not "smarter" or "dumber" than we are; it just sees the world in an extremely different manner.

For example, it moves with perfect precision and almost never fails to overcome a dexterity based problem, but it has no ability to plan ahead or predict future consequences. It can easily find and exploit bugs, but has trouble making basic logical leaps in gameplay that a child would grasp. It can figure out how to abuse the RNG seed to always get the ideal power up by twitching in a certain way before opening a chest, but can't grasp that losing lives will impede its ability to progress in the future. And its solution to most puzzles is to simply wait out the clock.

I think this is a wonderful metaphor for trying to visualize how a non-human sapient race could be portrayed in fiction. Rather than just being a human in a funny hat or acting completely randomly, it is something that lacks capabilities we take for granted, possesses capabilities we never considered, and has priorities that, to us, seem completely illogical and unintuitive.

Traab
2015-03-09, 05:30 PM
It's an interesting thought, but to me at least, the problem is that you have to not just create a race that is missing those things, but also explain how the race evolved without those abilities. We developed the capabilities we have in order to survive in our environment, so for a race to not have similar skills, they would have to be truly alien in environment as well.

veti
2015-03-09, 05:37 PM
"Not caring about losing lives" seems curious, maybe a quirk of the programming rather than an intrisic property. On the other hand, if you have perfect reflexes, maybe lives really don't matter. After all, if you have perfect reflexes, then in most games that means you'll basically only die when you choose to.

And since you only need one life left at the end - it may well be that treating them as a resource to be spent is, in fact, the most efficient way to beat the game, and doesn't really "impede your ability in future" at all.

Lord Raziere
2015-03-09, 05:41 PM
It's an interesting thought, but to me at least, the problem is that you have to not just create a race that is missing those things, but also explain how the race evolved without those abilities. We developed the capabilities we have in order to survive in our environment, so for a race to not have similar skills, they would have to be truly alien in environment as well.

The closest I've got is making a hivemind that developed in isolation in the cosmos. Turns out the hivemind is just a really big hermit with lots of bodies. But as a result, it doesn't have any conception of a lot of abstract concepts that we take for granted: war, social skills, lying, stories... lots of stuff. It's pretty interesting how much of what we think has come about because of the need for a society and all the concepts that come with it.

DigoDragon
2015-03-09, 08:35 PM
I'm looking at the way the software is learning and I can see the similarities to very young children. Give them a controller and many will randomly press buttons and not understand the correlation between buttons and Mario on the screen. At first anyway. After a little time children learn to control their button pushing to have Mario actually proceed through the level.

I dunno, just that little bit there was fascinating to me.

I think a lot of AIs in science fiction seem to skip that stage of development.

GloatingSwine
2015-03-09, 08:40 PM
"Not caring about losing lives" seems curious, maybe a quirk of the programming rather than an intrisic property. On the other hand, if you have perfect reflexes, maybe lives really don't matter. After all, if you have perfect reflexes, then in most games that means you'll basically only die when you choose to.


If you watch the video it's explained why it doesn't care about losing lives.

It's "trained" to look for sequences of numbers that go up and attempt to replicate the conditions that made them go up. Lives are a number that goes down and it doesn't care about numbers going down.

Talakeal
2015-03-09, 10:38 PM
If you watch the video it's explained why it doesn't care about losing lives.

It's "trained" to look for sequences of numbers that go up and attempt to replicate the conditions that made them go up. Lives are a number that goes down and it doesn't care about numbers going down.

A human brain, though, would make the connection between running out of lives and being unable to make the numbers go up while dead. Not so here.

DigoDragon
2015-03-10, 07:06 AM
Wow that AI did well on Gradius. O.o

Brother Oni
2015-03-10, 07:30 AM
While not quite the same thing, Google recently published a paper (in Nature no less) on their own AI that learned how to play Atari games: link (http://www.popsci.com/new-google-ai-plays-atari-games-well-you-can) and abstract (http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html).


Wow that AI did well on Gradius. O.o

It would be amazingly good at danmaku (bullet hell) games.

Flickerdart
2015-03-10, 01:45 PM
A human brain, though, would make the connection between running out of lives and being unable to make the numbers go up while dead. Not so here.
Yeah, that's a little bit more abstract than it's capable of. But it doesn't have an understanding of "while dead" because it just restarts the game. It also doesn't care about high scores, so totals are less important to it than a comfortable stream of points per second, which is a very reasonable approximation of how a deathless machine would act. Humans, with their mayfly lifespans, can scurry around and try to earn a place on the leaderboard before their own lives run out. In the end, the machine will always earn more points cumulatively.

Reddish Mage
2015-03-10, 09:53 PM
It's not uncommon to draw a parallel between alien intelligence and artificial intelligence. But unless we are talking about something like real artificial intelligence, a computer that solves problems in a somewhat flexible and dynamic manner, I don't even see where the launching point is.

Traditional AI goes about things in a very rote manner; these routines follow very limited programs in very defined contexts. Most video game elements have only a few programmed responses (see Link, ram Link), with perhaps a random element to mix things up minimally (perform sequence alpha, beta, gamma, or delta). This is the way of traditional AI: programmed to react in very specific ways to very specific inputs, with the occasional spice of randomness.

A calculator, a sprite, or a routine doesn't "see the world differently"; it doesn't see the world at all.

Alent
2015-03-11, 01:35 AM
Ooh, I didn't know he'd released a Part 3! Awesome. :smallbiggrin:


Yeah, that's a little bit more abstract than it's capable of. But it doesn't have an understanding of "while dead" because it just restarts the game. It also doesn't care about high scores, so totals are less important to it than a comfortable stream of points per second, which is a very reasonable approximation of how a deathless machine would act. Humans, with their mayfly lifespans, can scurry around and try to earn a place on the leaderboard before their own lives run out. In the end, the machine will always earn more points cumulatively.

Other people, including the original author, have commented on the nature of how the AI "perceives" things like death, so I'll instead comment that I find it very strange that you see machines as the deathless element.

Maybe it's just the way my IT career happens to be coming to an end, but practical and planned obsolescence have worked together to effectively demonstrate that logic machines are the ones with the mayfly lifespan. The average computer is expected to fail or be replaced at the end of three years. Even if parts failure doesn't strike, standardization changes slowly force old systems off the market. (Especially paradigm shifts like the move in certain consumer bases from "I have this PC for e-mail and nothing else" to "I have this tablet for e-mail and Candy Crush Saga and nothing else.")

Sure, it's easier and more realistic to "maintain" an old machine by replacing its failed parts than it is a person's failed parts, but somewhere down that line of reasoning you hit the Ship of Theseus paradox, anyway.

Am I maybe misunderstanding what you mean by "Deathless machine"?

Talakeal
2015-03-11, 02:21 AM
It's not uncommon to draw a parallel between alien intelligence and artificial intelligence. But unless we are talking about something like real artificial intelligence, a computer that solves problems in a somewhat flexible and dynamic manner, I don't even see where the launching point is.

Traditional AI goes about things in a very rote manner; these routines follow very limited programs in very defined contexts. Most video game elements have only a few programmed responses (see Link, ram Link), with perhaps a random element to mix things up minimally (perform sequence alpha, beta, gamma, or delta). This is the way of traditional AI: programmed to react in very specific ways to very specific inputs, with the occasional spice of randomness.

A calculator, a sprite, or a routine doesn't "see the world differently"; it doesn't see the world at all.

As to whether or not a computer program is sentient, that is a whole other rabbit hole that I don't particularly want to go down. I was just using this as a metaphor for how a different being could perceive the world, not to debate whether or not the program was such a being.

Cespenar
2015-03-11, 04:22 AM
Regarding the abilities/flaws of an A.I., I don't think it would be wise to ascribe generalized faults to A.I. unless you get so general that it stops making sense anyway. Otherwise, every apparent flaw can be fixed with just more programming.

For the above-mentioned example, making the A.I. attach importance to keeping or losing lives as an investment in future progress should be fairly trivial to code in, if the guy behind it wished to do so.
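
As a sketch of how small that change could be (hypothetical code building on the earlier toy objective, not anything from the actual program), a penalty for a decreasing lives counter is a couple of lines:

# Hypothetical tweak: the same style of toy objective as before, but with
# an explicit penalty whenever the value at a designated "lives" address
# decreases.

def score_transition_with_lives(prev_memory, curr_memory,
                                tracked_addresses, lives_address,
                                life_penalty=1000):
    score = 0
    for addr in tracked_addresses:
        delta = curr_memory[addr] - prev_memory[addr]
        if delta > 0:
            score += delta
    lives_lost = prev_memory[lives_address] - curr_memory[lives_address]
    if lives_lost > 0:
        score -= life_penalty * lives_lost  # now losing a life "hurts"
    return score

# Same hypothetical transition as before: +50 points, -1 life.
prev = {0x10: 150, 0x20: 3}
curr = {0x10: 200, 0x20: 2}
print(score_transition_with_lives(prev, curr, [0x10], 0x20))  # -> -950

Whether that actually leads to better play is a separate question, of course.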

Flickerdart
2015-03-11, 09:17 AM
Sure, it's easier and more realistic to "maintain" an old machine by replacing its failed parts than it is a person's failed parts, but somewhere down that line of reasoning you hit the Ship of Theseus paradox, anyway.
The machine in question is software. There's no Ship of Theseus problem caused by replacing the hardware.

Tengu_temp
2015-03-12, 06:05 AM
The closest I've got is making a hivemind that developed in isolation in the cosmos. Turns out the hivemind is just a really big hermit with lots of bodies. But as a result, it doesn't have any conception of a lot of abstract concepts that we take for granted: war, social skills, lying, stories... lots of stuff. It's pretty interesting how much of what we think has come about because of the need for a society and all the concepts that come with it.

That's almost word for word what the Buggers from Ender's Game are.

NichG
2015-03-12, 07:12 AM
Regarding the abilities/flaws of an A.I., I don't think it would be wise to ascribe generalized faults to A.I. unless you get so general that it stops making sense anyway. Otherwise, every apparent flaw can be fixed with just more programming.

For the above-mentioned example, making the A.I. attach importance to keeping or losing lives as an investment in future progress should be fairly trivial to code in, if the guy behind it wished to do so.

What's interesting is when doing something well-intentioned like that actually ends up harming the performance of the AI/ML algorithm. It's a good demonstration of how human intuition, built around the ways humans are good at solving problems, can end up being wrong about different methodologies.

For example, there was a recent Kaggle competition for using quick and dirty IR spectra to evaluate the quality of soil in Africa. When serious chemists/spectrographers work with data like that, there are a number of filters and processing steps they use to reduce the noise, make the peaks in the spectrum clearer, etc. It turns out that when you run those on the spectra before pushing them through the usual machine learning algorithms, the performance of the ML algorithms drops significantly.

What's going on is that the human scientists who work with spectra are using their visual system to understand the data. The human visual system is good at detecting contrast - nearby points which are significantly different from each other. However, a noisy signal generates a lot of meaningless contrast. That interferes with the human visual system, but the machine learning algorithms don't assume that adjacent points in the spectrum are supposed to be more similar to each other than far-away points. Instead, they have to learn that from the unstructured data. The result is that there's information in the pattern of noise that the algorithms can make use of productively but that humans have a lot of trouble with. If you do the thing that makes the signal easier for a human to interpret, you're throwing out data that the algorithm can use to improve its predictions.
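
A toy illustration of that effect (entirely synthetic data and a made-up setup, not the actual competition data): if the thing being predicted only shows up in fine-scale structure, a smoothing filter that makes the curves look nicer to the eye throws away exactly what the model needed.

# Synthetic "spectra": the predicted quantity only appears in a fine-scale
# alternating pattern, which a moving-average filter wipes out, so a model
# fitted to the smoothed curves does far worse than one fitted to raw data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_channels = 300, 120

coef = rng.normal(size=(n_samples, 1))           # hidden per-sample quantity
pattern = np.resize([1.0, -1.0], n_channels)     # fine-scale alternating pattern
baseline = rng.normal(size=(n_samples, 1)) * np.linspace(0.0, 1.0, n_channels)
spectra = baseline + 0.3 * coef * pattern + 0.1 * rng.normal(size=(n_samples, n_channels))
target = coef.ravel()                            # what we want to predict

def smooth(x, width=10):
    """Moving-average filter along the channel axis ('valid' trims the edges)."""
    kernel = np.ones(width) / width
    return np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="valid"), 1, x)

raw = cross_val_score(Ridge(alpha=1.0), spectra, target, cv=5).mean()
smoothed = cross_val_score(Ridge(alpha=1.0), smooth(spectra), target, cv=5).mean()
print(f"R^2 on raw spectra:      {raw:.2f}")       # close to 1
print(f"R^2 on smoothed spectra: {smoothed:.2f}")  # about 0 or slightly negative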

Similarly, you might assume that telling the AI 'hey, you should try to avoid losing lives' would improve its gameplay, but it may be that that's a particular human instinct based on how our own learning works.

On the other hand, the ML algorithms may have problems that would be trivial for a human to solve. For example, if a human is reading handwriting, it's not a big deal if the letters are rotated something like 5-10 degrees from horizontal, or maybe viewed at a slight angle. For a lot of the standard machine learning algorithms, things like rotations, skews, and scalings of visual data are a pretty severe stumbling block.
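
A crude way to see that kind of brittleness (using scikit-learn's small 8x8 digits dataset; the exact numbers depend on the model and the angle, so treat it as illustrative only):

# A pixel-based classifier trained on upright digits typically loses a lot
# of accuracy when the test digits are rotated, even though a human reader
# would still find them easy to read.
import numpy as np
from scipy.ndimage import rotate
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.images, digits.target, random_state=0)

def flatten(images):
    return images.reshape(len(images), -1)

def rotate_all(images, degrees):
    """Rotate each 8x8 digit image in-plane, keeping the original size."""
    return np.stack([rotate(img, degrees, reshape=False, order=1) for img in images])

clf = LogisticRegression(max_iter=5000).fit(flatten(X_train), y_train)
print("upright test digits:   ", clf.score(flatten(X_test), y_test))
print("rotated by 30 degrees: ", clf.score(flatten(rotate_all(X_test, 30)), y_test))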

Reddish Mage
2015-03-12, 07:55 AM
As to whether or not a computer program is sentient, that is a whole other rabbit hole that I don't particularly want to go down. I was just using this as a metaphor for how a different being could perceive the world, not to debate whether or not the program was such a being.

The problem with the computer program metaphor is that the AI needs to be sufficiently sophisticated to resemble an intelligence. Once that is achieved, the remaining problem for a computer intelligence is that its "intelligence" is "artificial" in ways alien to any natural being that could theoretically evolve or that we might encounter.

Thinking of a computer as an intelligent being or even a person leads to interesting questions, but as a useful metaphor for dealing with Martians or Betelgeusians... there are some very firm limits to that.


The machine in question is software. There's no Ship of Theseus problem caused by replacing the hardware.

Actually, bringing up software raises the problem of embodiment. Most think of human intelligence as the brain, not the "software." Software is capable of being copied, deleted, and altered in ways that would intuitively destroy a human intelligence: do you think you could survive being "copied" by having a process create new neurons while destroying your neurons, until your entire brain is destroyed and a new one created in a remote location? We do that sort of thing with software constantly.

Hardware, on the other hand, has some sort of body, so we can theoretically ascribe intelligence to hardware without that problem... of course, in practice, people who work with computers are concerned with the software and not with its very flexible embodiment.

NichG
2015-03-12, 09:00 AM
The thing with discussions of embodiment when it comes to these things is that the software can't tell when its embodiment has changed. For "human" software, everything we do with it is going to be error-prone (we don't know precisely what details would be important to copy, so there's a grey area). But for computer software, emulation means that you can have software which runs the same way even if you take a break for a million years, or distribute the computation across a million computers, or even run multiple copies of the computation in parallel.

I think the intuition that those things would "destroy" a human intelligence is simply wrong. There's a difference between how things work and what we can comprehend. We can't comprehend what it would be like to experience ourselves being divvied up, emulated, etc., because it's specifically a process that, in order to perform it correctly, must be impossible to detect. So the experience we can't imagine, that we perceive to be our own destruction, is rather no experience at all.

themaque
2015-03-13, 07:36 PM
I saw the thread title and thought it was something to do with Nintendo's product release procedures. Ooops.

Lord Raziere
2015-03-13, 11:27 PM
That's almost word for word what the Buggers from Ender's Game are.

Oh. Well, mine are plant-cat hybrids, and the hivemind was created as background for a hivemind scout cut off from the greater hivemind so it can go investigate individuality, so... I guess there is a difference? I didn't really know, and Ender's Game is blurry in my head, mostly being about Ender as the greatest strategist ever (Dawn Caste?) and how he got tricked into basically killing them all (no, I'm not putting spoilers on that; I'm pretty sure it's old enough not to warrant that). So I either knew it subconsciously and forgot that part, or the book just never told me that.

Reddish Mage
2015-03-16, 08:15 PM
I think spoilers are warranted whenever you're spoiling the ending of anything... unless it's Darth Vader being Luke's father or something similarly ubiquitous.