
Human Thought



graymachine
2010-07-06, 08:07 PM
I've recently had the opportunity to read up on the nature of thought, more specifically human thought and its relation to artificial thought. The heart of my point, or question as it were, is that our mechanisms for rational thought arose through the "good enough" nature of natural selection. As Hitchens puts it, by way of illustration, our prefrontal lobes are too small and our adrenal glands are too big. My curiosity in this mental landscape is whether or not there is some blind spot in our ability to reason. Is there some natural hole that we cannot see, and through which we therefore make the world as we do? Would an artificial mind, born from rational effort rather than happenstance, see that spot, as it were? Would we hate it for that? Glorify it? I'm interested in the thoughts on this, from the most profane to the most enlightened. Thank you for your time.

Pyrian
2010-07-06, 08:19 PM
Artificial intelligences tend to find very simple patterns easier than the human mind does, amusingly.

Speculatively, though, I think it's really the other way around. Our instincts don't draw blind spots, they draw "over-inference" spots. We instinctively recognize dangers (darkness, snakes, heights) without ever having been exposed to them as dangerous. We perceive depths as being farther than flat distances. Close to the horizon, we see the moon as being larger, because our instincts tell us that things on the horizon are larger than they appear.

Our blind spot is, instead, ourselves. Our minds seem to instinctively draw a curtain over their own processes. It's often painfully obvious in other people, but our own biases? We are given to thinking that we have perfectly good reasons for everything we think is true.

It's very difficult to convince someone that the moon looking larger when close to the horizon is a mental illusion rather than an optical illusion. A set of people, all of whom will tell you they are not sexist (and probably mean it), will grade the exact same paper worse when a female name is at the top. The well-known and well-documented examples are almost beyond count. Our minds are both inherently and easily biased, and have great difficulty recognizing that fact.

Starfols
2010-07-06, 08:21 PM
Well, if we're unable to comprehend it, we won't know until it happens, will we? :smalltongue:

Most arguments involving incomprehensibility are self-defeating in my experience, for that reason.

graymachine
2010-07-06, 08:37 PM
Well, if we're unable to comprehend it, we won't know until it happens, will we? :smalltongue:

Most arguments involving incomprehensibility are self-defeating in my experience, for that reason.

I quoted you for ease, but I'll address you first to clarify things. I'm not posing a question of incomprehensibility; such things are for people who stare at their navels too long. I'm asking: what do we think we would come to in a dialogue with something else that reasons differently (or, more likely, better) than we do? An interesting question, I think.

Sadly, I can't scroll up to the previous entries, being on my phone, but I will respond to the mention of gender differences that stuck with me: irrelevant. I think most serious thinkers have dismissed this trivial difference, insofar as it deals with thought.

As a possibly different mode of impressing my point, I would observe that any significant advancements we make in the next 1000 years, or even the next few hundred years, won't be made by beings that any of us think of as human in the normal sense.

Starscream
2010-07-06, 08:40 PM
Would an artificial mind, born from rational effort rather than happenstance, see that spot, as it were?

Are we assuming that the "rational thought" birthing this artificial mind is our own? If a human being manages to create a piece of software that possesses true intelligence, won't that intelligence be founded upon the same assumptions and limited perceptions as the intelligence that created it?

I took a course once in artificial intelligence programming (fascinating stuff, by the way), and one of the first things we were shown was a video of a robot that had been programmed to stack blocks. It tried to stack them starting from the top, and didn't understand why this didn't work. From the point of view of the information available to the robot, it was working in a perfectly rational manner. It simply didn't assume the existence and effect of gravity, and why should it have? It was software, and had no conception of such things, unless given it by its programmers.
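In spirit, the robot's reasoning looked something like this (a toy sketch of my own, not the actual course code):

# Toy sketch: a block-stacking planner with no model of gravity.
# It happily plans top-down, because nothing in its world model
# says a block needs support underneath it.

def plan_stack(blocks):
    """Return placement actions for a tower, top block first."""
    actions = []
    for i, block in enumerate(blocks):   # blocks listed top to bottom
        height = len(blocks) - 1 - i     # target level in the tower
        actions.append(f"place {block} at level {height}")
    return actions

# The plan is internally consistent; only physics disagrees.
for action in plan_stack(["A", "B", "C"]):  # A is meant to go on top
    print(action)

The plan is perfectly rational given the robot's world model; the model simply has no line in it for gravity.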

The same thing applies to humans. We have a pretty efficient brain, some useful sensory organs, the ability to communicate with language, and a set of instincts specially developed to help us survive on this planet. We don't know/understand/perceive everything. We can't, say, sense magnetic fields, see far into the infrared or ultraviolet spectrums, navigate via sound very well, or smell the presence of other humans (and what's on their minds) from a great distance.

There are animals who can do all these things, because they need to. We don't, so we can't. We do the best with the information we have access to, just like the robot that couldn't perceive the existence of gravity, and had no idea why the blocks kept falling. As Terry Pratchett once put it, "We are attempting to understand the mysteries of the universe with a language that we developed to tell one another where the best fruit is."

So if we ever do manage to create a computer with true intelligence, I imagine that it will either be operating under the same skewed assumptions of reality that we do (because we will give them to it), or with its own biases and misapprehensions, born of the limitations of its own electronic nature.

graymachine
2010-07-06, 08:46 PM
Ah, I found my second point in re-reading the interesting first point, via the relation of instinct and rational thought. I honestly don't think that they are connected; the argument that reason is underpinned by instinct is akin to claiming that I can't fly because I'm naturally terrified of heights. Instincts back us up and support us, primarily, because we had a lot of baggage behind us before we developed reason.

graymachine
2010-07-06, 08:58 PM
Are we assuming that the "rational thought" birthing this artificial mind is our own? If a human being manages to create a piece of software that possesses true intelligence, won't that intelligence be founded upon the same assumptions and limited perceptions as the intelligence that created it?

I took a course once in artificial intelligence programming (fascinating stuff, by the way), and one of the first things we were shown was a video of a robot that had been programmed to stack blocks. It tried to stack them starting from the top, and didn't understand why this didn't work. From the point of view of the information available to the robot, it was working in a perfectly rational manner. It simply didn't assume the existence and effect of gravity, and why should it have? It was software, and had no conception of such things, unless given it by its programmers.

The same thing applies to humans. We have a pretty efficient brain, some useful sensory organs, the ability to communicate with language, and a set of instincts specially developed to help us survive on this planet. We don't know/understand/perceive everything. We can't, say, sense magnetic fields, see far into the infrared or ultraviolet spectrums, navigate via sound very well, or smell the presence of other humans (and what's on their minds) from a great distance.

There are animals who can do all these things, because they need to. We don't, so we can't. We do the best with the information we have access to, just like the robot that couldn't perceive the existence of gravity, and had no idea why the blocks kept falling. As Terry Pratchett once put it, "We are attempting to understand the mysteries of the universe with a language that we developed to tell one another where the best fruit is."

So if we ever do manage to create a computer with true intelligence, I imagine that it will either be operating under the same skewed assumptions of reality that we do (because we will give them to it), or with its own biases and misapprehensions, born of the limitations of its own electronic nature.

An excellent and exciting voice of practicality! The great things we're talking about probably won't come with a bang, but a whimper. I would add, hoping not to banish my enthusiasm for such a voice, that the comparison made might be like us laughing at a microscopic organism's grasping at life. My thought, only.

graymachine
2010-07-06, 09:07 PM
A secondary question: what do you think of the likelihood that we'll one day look on what we've made and say, "You're just a thing."

Trog
2010-07-06, 09:27 PM
The interesting bit is that we could create an artificial intelligence and set completely different parameters for its evolution than those that governed our own past. And it could accomplish in days or weeks (probably) what might take generations. So while we had to adapt to and overcome our environment, we could develop an intelligence that starts with any other set of parameters as its environment. Limit the environment to specific mental problems - say, energy production - and let it run wild in trying to come up with a solution.
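A minimal sketch of what I mean, assuming a simple evolutionary loop (the population size, mutation rate, and fitness function are all made up, standing in for a real problem like energy production):

import random

def fitness(genome):
    return sum(genome)  # stand-in objective: maximize 1-bits

def mutate(genome, rate=0.05):
    # flip each bit with small probability
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                     # selection
    population = [mutate(random.choice(survivors))  # variation
                  for _ in range(50)]

print(max(fitness(g) for g in population))  # best "design" found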

They've actually done this sort of thing already, and while the AI doesn't function as, say, a human brain can with all its adaptability, it can focus better on solving small problems than even we can. As technology advances, this ability to learn, adapt, and evolve will as well, and eventually it would seem inevitable that we would be able to emulate, say, the equivalent of a small child. And so on.

Fascinating subject, really, as we're looking at the new face of evolution, and we're the linchpin in making it a reality. Then send these sorts of creatures (once they reach an intelligence where we can call them that) out into space (an environment we are very poorly adapted to survive in, but in which a being that needs only some sort of electrical charge to function could do just fine) and you have the beginnings of the spread of life throughout the void. Some people balk at this as science fiction but, really, is it any different than, say, life moving from the seas to land? From the land to the air? The next step for life, logically, is from the planet to the void.

NeoRetribution
2010-07-06, 10:04 PM
Speaking strictly from a programming perspective, the premise proposed is...inefficient.

The following explanation is a bit biased, but it is worth stating.

Humans are notoriously lazy. Machines, such as computers, are tools. Like tools they tend to be programmed to perform specific tasks which they are capable of completing. In other words, "The right code for the right job."

The theme that the first post in this thread alludes to is the difference between logical thought and emotive thought. For humans this split is completely artificial, because we have one integrated brain and not two halves.

For a computer this does not apply. The programmer responsible for any artificial intelligence simulation will likely embed their own biases and values in the specific code, engine, and libraries used by the intelligence.

Which means that one human, or a group of humans, would have to be willing to dedicate time to programming each aspect of human intelligence, perception, and memory. Artificial intelligence is worthless without sensory perception, as the intelligence would have nothing to think about other than direct user input. I find it...difficult to swallow that any human being would want yet another opinion about whatever they have to say.

But...for the sake of the premise, I will assume that what is proposed is a two-brain computer (or partition; take your pick). The inner, unchangeable brain is a core with a coded engine and libraries designed to learn. Rules governing the learning would be assigned by the programmer, as to what can be learned and what cannot. In some cases that core might be completely ignorant of entire aspects of psychology due to limitations in programming.

The second, outer brain would be a reprogrammable drive. The inner brain, programmed to learn, would dictate what the outer brain could learn, do, remember, and so on. It would also be responsible for rewriting portions of the outer brain should it become necessary, and if it is programmed to do so. The outer brain would then control the actual actions of the artificial intelligence in whatever form they took.
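A very rough sketch of that split, with all class names and rules invented purely for illustration:

# Sketch of the two-brain idea: a fixed inner core that gates learning,
# and a reprogrammable outer layer. Names and rules are illustrative only.

class InnerCore:
    """Unchangeable engine: decides what may be learned."""
    FORBIDDEN_TOPICS = {"self_modification_of_core"}

    def may_learn(self, topic):
        return topic not in self.FORBIDDEN_TOPICS

class OuterBrain:
    """Reprogrammable store of learned behaviors."""
    def __init__(self, core):
        self.core = core
        self.knowledge = {}

    def learn(self, topic, behavior):
        if self.core.may_learn(topic):    # the core gates all learning
            self.knowledge[topic] = behavior

    def act(self, topic, *args):
        behavior = self.knowledge.get(topic)
        return behavior(*args) if behavior else None

brain = OuterBrain(InnerCore())
brain.learn("greet", lambda name: f"Hello, {name}")
brain.learn("self_modification_of_core", lambda: "...")  # silently refused
print(brain.act("greet", "programmer"))  # Hello, programmer

The point of the split is that the core's rules never change, while everything in the outer layer is fair game for rewriting.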

Speaking practically, the definition of an artificial intelligence is one that can disagree with the programmer. It is programmed to disagree with the programmer, by whatever means the programmer chooses to make available. Inside a limited calculating machine this might be interesting or seem amusing. But the possibilities are frightening when the assumptions include that the artificial intelligence would inhabit articulated robot bodies.

What if the Intelligence decides that it has the right to exist and humans do not?

Perhaps on a more esoteric note, it seems plausible that if an artificial intelligence could think, and yet was given no programmed purpose, the result might be a computer brain's attempt at questions. A search for purpose. Or, if no solution was presented that the artificial intelligence agreed with, suicide...since there would be no direct reason for it to exist.

TSGames
2010-07-06, 10:12 PM
As a possibly different mode of impressing my point, I would observe that any significant advancements we make in the next 1000 years, or even the next few hundred years, won't be made by beings that any of us think of as human in the normal sense.

:smallconfused:

Flickerdart
2010-07-06, 10:14 PM
:smallconfused:
Y'know. The French. :smallbiggrin:

Kiren
2010-07-06, 10:29 PM
Let's look back into history at two civilizations: the Native Americans and the Europeans.

The Europeans and the Native Americans thought differently. The Europeans believed their ways were civilized, and that the Native Americans' beliefs were wrong and their own were right. The Europeans thought sharing everything one owns with one's neighbors, as the Native Americans did, was childish; they believed the civilized owned their own property.

When two civilizations with vastly different ways meet, hostilities ensue.

If an artificial intelligence evolves to the point that it disagrees with humanity, there will be violence. I will quote Mass Effect 2 here, because there is a very good example. A species called the Quarians made a servant robot race called the Geth, and their programming evolved into self-awareness. A Geth asks its creator if it has a soul, and the creator gets frightened. The Geth's self-awareness leads to self-preservation as the Quarians try to turn them off. There is a conflict of opinion: the Quarians believe the Geth should be turned off; the Geth want to protect their existence. The Quarians end up exiled from their homeworld.

Technology that thinks differently than a human will be feared by the general population; take, for instance, swine flu. The fear factor outweighed the actual danger; people over-reacted and became paranoid.

I, Robot also makes a good point. If we give an artificial intelligence a simple command like "protect someone," to what lengths will it go to carry it out? Unless, of course, common sense is programmed in, but that's becoming a rarity among humans.

Anyway, this is just my opinion. We will never know until something happens.

fknm
2010-07-06, 10:41 PM
True "Artificial Intelligence" is utterly impossible as long as we're stuck within turing-machine computing paradigms (which is probably forever), thus making speculation about it utterly pointless.

Look up the "halting problem".
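The standard sketch of the argument, as hypothetical Python (halts() is the function that cannot exist; nothing here is a real library call):

# Diagonalization sketch: assume a perfect halting oracle exists,
# then build a program that contradicts it.

def halts(program, argument):
    """Hypothetical: returns True iff program(argument) ever halts."""
    raise NotImplementedError("no such total function can exist")

def contrary(program):
    if halts(program, program):
        while True:        # halts() said we halt, so loop forever
            pass
    return "done"          # halts() said we loop, so halt immediately

# contrary(contrary) halts iff halts(contrary, contrary) says it doesn't,
# contradicting whatever answer halts() gives. Hence halts() is impossible.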

EDIT- And, before someone mentions them, Oracle machines (in the computational theory meaning of the word, not the relational database) are a cop-out.

Kuma Da
2010-07-06, 10:46 PM
Blindspot? Children.

We're essentially pre-programmed to care for and nurture anything with the proper face/body ratio. If we ever find ourselves at war with horrible alien invaders that look like kittens, we're screwed.

Also possibly relevant: the monkeysphere. http://www.cracked.com/article_14990_what-monkeysphere.html

edit: there are words in there that, while they do not break forum regulations, some may consider offensive. Not intended for toddlers or the righteous.

Flickerdart
2010-07-06, 10:52 PM
Blindspot? Children.

We're essentially pre-programmed to care for and nurture anything with the proper face/body ratio. If we ever find ourselves at war with horrible alien invaders that look like kittens, we're screwed.

Also possibly relevant: the monkeysphere. http://www.cracked.com/article_14990_what-monkeysphere.html

edit: there are words in there that, while they do not break forum regulations, some may consider offensive. Not intended for toddlers or the righteous.
In a world like that...Soon, kicking puppies will be considered morally just. Then we shall all have our revenge!

Thajocoth
2010-07-07, 12:02 AM
True "Artificial Intelligence" is utterly impossible as long as we're stuck within turing-machine computing paradigms (which is probably forever), thus making speculation about it utterly pointless.

There is a major fundamental difference in the way humans and computers process information that makes it difficult for computers to match humans, even if they had the same processing power (which they don't).

Computers store hard data. A song might be C, B-flat, D-sharp, A...

Humans store differences between data. A song might be: a note, up three notes, down a note, up half an octave.

Computers store data in a specific location. Data in the human mind is constantly in motion, in a pattern.

So the human mind (and, really, the minds of many animals, since it's fundamentally the same) will easily fill in gaps in a pattern. Skip the 5th note in Jingle Bells, and your mind will fill it in.
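A toy illustration of the difference (the note numbers and intervals are made up for the example):

# Absolute pitches vs. relative intervals. Computers typically store the
# left column; the "human" encoding is just the differences, which is
# why a transposed tune still sounds like the same tune.

melody = [60, 64, 62, 69]            # MIDI note numbers: C, E, D, A

intervals = [b - a for a, b in zip(melody, melody[1:])]
print(intervals)                     # [4, -2, 7]: up 4, down 2, up 7

transposed = [n + 5 for n in melody] # same tune, shifted up a fourth
same = [b - a for a, b in zip(transposed, transposed[1:])]
print(same == intervals)             # True: the pattern is unchanged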

Our sense of sight is particularly bad... We've got blind spots, red lines from blood vessels, and the result is blurry. Our optic nerve is better than Photoshop and processes an average of 60 images per second. Not only does it take crap and make it photo quality, but it also finds all the faces and edges and embeds their locations in the image it sends to the brain (which, actually, is fairly similar to a JPEG...)

graymachine
2010-07-07, 01:28 AM
Hm. While people seem well versed in specific arguments, there seems to be an underlying tone of strong bio-chauvinism.

Worira
2010-07-07, 01:29 AM
Your argument has the basic flaw of that not being a thing.

Starfols
2010-07-07, 01:51 AM
I quoted you for ease, but I'll address you first to clarify things. I'm not posing a question of incomprehensibility; such things are for people who stare at their navels too long. I'm asking: what do we think we would come to in a dialogue with something else that reasons differently (or, more likely, better) than we do? An interesting question, I think.
I'd think we'd need more context before we answer that. Earth occupied by ultra-intelligent super-beings? Awe and fear, most likely. Indignation or humility, if we could communicate. Discovery of some terrestrial sapient life, or creation of AIs, would be totally different. We just can't know.


A secondary question: what do you think of the likelihood that we'll one day look on what we've made and say, "You're just a thing."

Reply: You are also a thing, meatbag. :smalltongue:


Hm. While people seem well versed in specific arguments, there seems to be an underlying tone of strong bio-chauvinism.

I think it's more like skepticism. A lot of the arguments here are about artificial intelligence's impracticality, rather than answering the 'what if' question.