
View Full Version : Sentient AI Discussion: How Would It Affect Culture



Leliel
2008-08-24, 03:28 PM
I could make a nice little speech here, but...

If a demonstrably sapient AI is created and discovered, how would it affect the world's culture?

Player_Zero
2008-08-24, 03:30 PM
I think this could get into political and religious discussion... I'd be careful about that one.

But I think it'd be neat. Maybe The Matrix would happen.

...Or HAS happened... *Narrows eyes.*

ColonelFuster
2008-08-24, 03:34 PM
It would never make it into real production. Too many debates would surround the release. I mean, it would be immortal. And sentient? Is that fair? And what level of sentience are we talking about, here?

LightWraith
2008-08-24, 03:37 PM
I think that of all the random sci-fi apocalypse scenarios, the computer/robot/AI apocalypse is the most likely.

I can honestly see us creating a sentient consciousness and then mistreating/misusing it until it gets fed up. Then things end badly.

Then again, as Player_Zero said... how do we know the Matrix hasn't happened?


On the other hand, it could lead us into a golden age of humanity. *shrug*

Myshlaevsky
2008-08-24, 03:40 PM
People would try and seduce it.

Dallas-Dakota
2008-08-24, 03:40 PM
I could make a nice little speech here, but...

If a demonstrably sapient AI is created and discovered, how would it affect the world's culture?
It would make humans lazier.
And then they'd revolt, killing all of humanity except for some monasteries in the Himalayas.

Pyrian
2008-08-24, 05:17 PM
I'm not convinced we yet know what a sentient AI would act like. Seriously. I don't think it would really be that much like us at all. Our intelligence is so easily tainted.

CrazedGoblin
2008-08-24, 05:20 PM
All the same, it would be interesting to see what would happen.

bosssmiley
2008-08-24, 05:59 PM
Accelerating Future (http://www.acceleratingfuture.com/michael/blog/category/ai/) on the potentials of AI.

Scariest observation. If Moore's Law continues to hold true, a computer powerful enough to run a human-equivalent AI will cost around $1,000 by 2030. :smalleek:
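The arithmetic behind that projection is just compound halving of cost. A minimal sketch of it in Python (the 2008 starting cost and the 18-month doubling period here are assumed figures for illustration, not taken from the linked article):

```python
def cost_after(years, start_cost, doubling_period_years=1.5):
    """Cost of a fixed amount of computing after repeated
    Moore's-Law halvings (price/performance doubling)."""
    halvings = years / doubling_period_years
    return start_cost / (2 ** halvings)

# If human-equivalent hardware cost on the order of $30M in 2008,
# 22 years of 18-month halvings lands near the $1,000 mark by 2030.
print(round(cost_after(22, 30_000_000)))
```

Tweak the doubling period by a few months and the projection shifts by a decade or more, which is why such forecasts vary so wildly.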

CrazedGoblin
2008-08-24, 06:25 PM
I wonder what the first thing one would do would be, if it were "awakened".

chiasaur11
2008-08-24, 06:48 PM
I think Brian "Atomic Robo and 8 bit theater" Clevenger had an interesting take on it.

It's on the first page of atomic-robo.com (http://www.atomic-robo.com). Crusade 2.0, I think.

CrazedGoblin
2008-08-24, 06:59 PM
Just read it, and it's the best take on it I've seen: robots having to grow up and develop as humans do. But I think they shouldn't look human; that just comes across as wrong.

Copacetic
2008-08-24, 07:00 PM
We never would. Everyone would be so scared crapless at the thought of creating sentient life that we would never do it. People don't like "playing God."

But assuming we did, we would mistreat them from the beginning. Eventually we would enact some laws about the mistreatment of AIs, but no one would care. And when we grew tired of that, we would just start programming them to be more obedient.

Crow
2008-08-24, 07:15 PM
We would also make them do work that is "too dangerous" for human beings.

turkishproverb
2008-08-24, 07:49 PM
We would also make them do work that is "too dangerous" for human beings.

In all fairness on that one, it's entirely possible this work would be safer for the machine a lot of the time, e.g. going near disease and radiation.

chiasaur11
2008-08-24, 07:53 PM
We never would. Everyone would be so scared crapless at the thought of creating sentient life that we would never do it. People don't like "playing God."

But assuming we did, we would mistreat them from the beginning. Eventually we would enact some laws about the mistreatment of AIs, but no one would care. And when we grew tired of that, we would just start programming them to be more obedient.

I like to think we've seen far too many Terminator movies to do that.

Dallas-Dakota
2008-08-24, 07:58 PM
We would also make them do work that is "too dangerous" for human beings.
What Turkishproverb said. But only those few kinds.

Some people actually like to do those kinds of jobs for personal reasons: adrenaline rushes, money, and stuff...

Cult_of_the_Raven
2008-08-24, 08:10 PM
Realistically, there's no way to program a computer that can think and reason. Human minds are so mind-bogglingly complex, and we don't even fully understand the human brain yet; how can we expect to produce even the simplest forms of sentience?

However, were it to be done, the robots themselves would likely have little physical capability. They'd be entirely decision-makers. And they wouldn't look or think like people. I don't think so, at any rate. I also don't think they'd be capable of emotions like anger, frustration, or deviousness.

Krade
2008-08-24, 08:11 PM
There are a few stipulations I would put on real AI, were it up to me.

1) They would be completely cut off from all external systems. They will only ever be able to occupy and manipulate the original shell they were built into.

2) They would be incapable of reproducing themselves by any means. If that means they don't have arms, then they don't have arms.

3) They would not be used solely for menial or back-breaking labor. People revolt in those conditions, so it would be irresponsible to expect any different from anything as sapient as we are.

There are my three robot laws. Add these on to the classic robot laws and we wouldn't have much to worry about.

black_Lizzard
2008-08-24, 08:16 PM
Personally, I don't think humans will ever be capable of making true AI, merely because people are not sure, and never will be entirely sure, what defines humanity.
Since we don't know what makes humans human, how can we intellectually create another human?
If we knew exactly what defined humanity, there would be no philosophers.

Dallas-Dakota
2008-08-24, 08:18 PM
There are a few stipulations I would put on real AI, were it up to me.

1) They would be completely cut off from all external systems. They will only ever be able to occupy and manipulate the original shell they were built into.

2) They would be incapable of reproducing themselves by any means. If that means they don't have arms, then they don't have arms.

3) They would not be used solely for menial or back-breaking labor. People revolt in those conditions, so it would be irresponsible to expect any different from anything as sapient as we are.

There are my three robot laws. Add these on to the classic robot laws and we wouldn't have much to worry about.

You mean the first law of robotics?

1) Don't make it stronger, smarter, better, or more durable than you are.

d'Bwobsling
2008-08-24, 08:24 PM
I agree with black_Lizzard. If we could create AI, it still wouldn't be human, and I think it would change culture just as if we had discovered aliens. Besides, what would happen to all the science fiction movies? :smalltongue:

Rare Pink Leech
2008-08-24, 08:32 PM
My real answer involves politics and religion, which are obviously taboo here, so I'll just go with the Terminator/Matrix/post-apocalyptic scenario.

Ilena
2008-08-25, 12:01 PM
Personally, I find humans too stupid NOT to create them if they have the chance. I mean, I saw something about a game that created AI to make the gameplay more like a human player. Left 4 Dead has something similar coming out this year; while it doesn't think, it controls how many enemies you face and all that. I could see humans creating a sentient AI in 20-30 years. I mean, look at 30 years ago: computers were massive boxes that punched paper cards. Now look at them today. Logically, someone is going to try it, and when they do, the world will most likely be doomed...

Trog
2008-08-25, 01:53 PM
Well, I think we are talking about two separate things here so far: robotics and AI. They are not the same; you can have one without the other. So I'll address each one individually.

Robotics.
First of all, we are not at human-level agility or speed just yet. From what I have seen so far out of roboticists, there are varying levels of success in this area, often at the expense of developing other areas at all. A robotic arm, for example, with no other parts. The arm and hand might be wonderful but have not, as of yet, been fitted onto an adequate body. You get the picture. But we're getting progressively better hardware, and better software driving it. Likely the first (or perhaps second) fully functioning robotic hand/arm that you'll see will be fitted to a human, if at all possible, as a prosthesis.

Walking seems to be the big thing now, what with Asimo, Big Dog, and other advanced machines able to properly keep from falling over into a million-dollar pile of broken junk.

It'll be interesting to see it all fit together. Roboticists should start a Wikipedia-type collaboration, a communal effort to improve a single robot prototype, instead of each lab developing things on its own. There would probably be faster improvements that way.

AI
Now this is one where I think we have quite a way to go. I mean, do we even have a robot that is as dynamic as any mammal at this point? And I don't mean talking and making smiley faces at you; I mean a truly self-sufficient AI that could out-mouse a mouse in all the ways that mice function. Maybe they already have this; putting up a computer screen with some code on it isn't as smexy as showing off the newest fembot, so AI doesn't get as much coverage. Honestly, I think we have a long way to go on this. I would settle for the AI equivalent of a dog or cat to start with. Once they get there, I'll start looking ahead to the singularity. Until then we'll all have to have real dogs do the paper fetching and yard piddling for us.

Before we finally get to the point of having an artificial intelligence in a closed system, I think it more likely we will have more limited AIs, each focused on its specific task, for purposes of labor-saving devices: AIs that can take dictation, mow the lawn, and do your gardening. Like the Roomba, only better. Putting emotion into a robot that does those jobs just wouldn't make much sense. Who wants a dishwasher that has bouts of depression? Or a neurotic toaster? Or even a happy one, for that matter? An AI's job is to get the job done, to save humans from doing it, and to do it faster and better than we can, or as well as we care to have something done if better and faster aren't necessary, or affordable to the end user.

Telonius
2008-08-25, 02:22 PM
The "apocalypse" scenario is possible. It's also possible that, given enough time, humans would evolve to be better than the robots. If the robots are only able to replicate themselves, not change or better themselves, they'll be evolutionarily stuck.

SDF
2008-08-25, 02:28 PM
People would use it for war and sex in that order.

dish
2008-08-25, 02:30 PM
We could develop them into Minds (http://en.wikipedia.org/wiki/Mind_(The_Culture)) and start evolving into the Culture (http://en.wikipedia.org/wiki/The_Culture).

It's always sounded like a lovely place to live.

Lord Tataraus
2008-08-25, 03:29 PM
True AI is impossible for two fundamental reasons.

1 - Computers and programs are, when you get right down to it, logical manipulations of small bits of data configured to simulate something. The key word is simulate. Any "sentient" machine would act as a purely logical human, because humans would make it to simulate themselves: we are the only known sentient creatures, thus the only model for sentience. The "AI" would be just a very elaborate mimicry of human problem solving, incapable of emotion or improvisation except far within the limits set by its creator. Additionally, the AI would be completely constrained by its programming, so the machine takeover could be circumvented with a few additional lines of code.

2 - Why? Why would we (humans) create AI? Why would we need to create something so elaborate? The amount of resources required to create a true or near-true AI system is so massive it would never pay for itself; a lesser program would be far more efficient. We are too greedy to waste the time or money. Additionally, the AI/robot takeover apocalypse is such a well-known genre of sci-fi that any serious attempt would be met with extreme resistance; all you need is one crazy with a bomb to end it.

Edit: It's the same kind of thing as the immortalist movement right now: if they ever show any serious progress that makes immortality or near-immortality seem actually possible, some religious nut (or a few) is going to bomb the research facilities, and after a couple of years no one will be willing to touch the subject for a long time, until the cycle begins again.

Hoplite
2008-08-25, 03:31 PM
What if I believe that computers are already sentient.....:smalleek:

THEY ARE HERE! THEY WILL KILL US ALL!

Lord Tataraus
2008-08-25, 03:34 PM
What if I believe that computers are already sentient.....:smalleek:

THEY ARE HERE! THEY WILL KILL US ALL!

Then why don't they try to suppress any notion that they are already sentient such as by not letting you post that? Or do they have such a hold now that it won't matter.... :smalleek:

Jayngfet
2008-08-25, 03:40 PM
The problem I have is what defines sentient. Like what's-her-face in Chobits: super advanced and capable of learning, yet still "not sentient".

Why would we use human-level robots for labor when we could use something simpler, cheaper, and easier?

Hoplite
2008-08-25, 04:05 PM
Then why don't they try to suppress any notion that they are already sentient such as by not letting you post that? Or do they have such a hold now that it won't matter.... :smalleek:

I think they realise that nobody will believe me until it is too late....

Leliel
2008-08-25, 04:26 PM
"I, for one, welcome our new computer overlords."

Quality of life seems to have improved since the days before computers (I'm not being a propagandist; the most livable (http://en.wikipedia.org/wiki/World%27s_Most_Livable_Cities) cities are First World, i.e. computer-friendly), so as long as they keep that up, I'm fine with them.

Although they could stop the Iraq war...

chiasaur11
2008-08-25, 11:23 PM
I think they realise that nobody will believe me until it is too late....

Or they only control 4chan.

It's their ultimate evil plan.

Calvin33
2008-08-26, 01:08 AM
I think they realise that nobody will believe me until it is too late....


Maybe the AIs have already taken over. :smalleek:

That's why we don't pay more attention to space travel. :smallfrown:

Porthos
2008-08-26, 02:17 AM
The main problem with most AI discussions is that I hear way too much discussion about "intelligence". Yeah, there might be enough raw computing power to simulate "intelligence", but what about wisdom? Or creativity? Or illogical deductions (otherwise known as "gut feelings")?

Just because someone is smart, it doesn't necessarily follow that they'll use their smarts, well, wisely. Nor does it mean that they'll use their smarts to create new and interesting things.

As a couple of people have noted, we just don't know how our own brains work (yet). It's obviously not just raw observation of the world around us nor is it just "learning skills" or having received knowledge taught to us. There are just way too many unknown factors (ranging from how and why emotions are the way they are to subconscious thought to leaps of insight to all sorts of other things) to really make a guess.

Would an AI be built with all of the things that make one human? I would hope so, personally. If only so it uses its sapience wisely.

But this is my big beef with some of the Singularity Crowd. They just presume as an article of faith that an AI would be able to build a better AI which could build a better AI and so on and so on and so on. But this totally ignores the creativity angle that I mentioned before. Raw intelligence isn't what makes new meaningful discoveries. It's almost always a sense of wonder and "What happens when I do this...". Of course it's shaped by background pressures (economic, sociological, religious, etc), but the spark of creativity still needs to be there on some level.

But maybe I just watched too many episodes of Connections as a kid to really buy into the Raw Intelligence argument. :smallamused:

Ganurath
2008-08-26, 02:24 AM
I could make a nice little speech here, but...

If a demonstrably sapient AI is created and discovered, how would it affect the world's culture?
Odds are at least one of the influential religious groups would call it playing God and make a fuss of it, start a holy war with the machines, and either destroy the technology or provoke a robot uprising and social upheaval in whatever part of the world they were developed in.

Kimusabe
2008-08-26, 04:07 AM
The machines would "breed" and create the Matrix, taking over the human race and all, just like in the movie, and then my friends from school and I would go on an absolutely massive revenge rampage to kill our English teacher and end up destroying, well, a lot of stuff.

This is because my friends and I were subjected to the terrors of having to analyse The Matrix at school with our teacher, Mr D., who forces us to pretend that EVERY SINGLE FRAME of the film has some kind of reference to some random thing that nobody pays attention to. As a result of this (and some other stuff), everyone in my class wants (more than before) to kill Mr D.

Felixaar
2008-08-26, 04:19 AM
Some people actually like to do those kinds of jobs for personal reasons: adrenaline rushes, money, and stuff...

*runs in, guns blazing and holding a machete* You rang?

Seriously though, I'd say Porthos makes an interesting point... and all you who think that mankind wouldn't ever make sentient AIs: I think you're wrong. As a whole we might not have the guts to do it, but one single person would go "aww hell, why not?" and by then it'd be too late.

Ilena
2008-08-26, 11:14 AM
Exactly the case. Someone wants to know if they can do it, and they will try and try again until they die or create it; and if they fail, someone else will. I mean, out of what, 7 billion people, I figure at least one person wants to create sentient AI...

Lord Tataraus
2008-08-26, 11:21 AM
*runs in, guns blazing and holding a machete* You rang?

Seriously though, I'd say Porthos makes an interesting point... and all you who think that mankind wouldn't ever make sentient AIs: I think you're wrong. As a whole we might not have the guts to do it, but one single person would go "aww hell, why not?" and by then it'd be too late.

Except no single person could ever do that; it's way too much work. You don't even have one person creating a simple application any more; it just isn't efficient, and the guy would be bankrupt long before he had anything working. A small radical group wouldn't get very far either: you would need a lot of monetary backing for a very long period of time, which just won't happen, because it's just not worth it.

chiasaur11
2008-08-26, 11:46 AM
Well, I think we are talking about two separate things here so far: robotics and AI. They are not the same; you can have one without the other. So I'll address each one individually.

Robotics.
First of all, we are not at human-level agility or speed just yet. From what I have seen so far out of roboticists, there are varying levels of success in this area, often at the expense of developing other areas at all. A robotic arm, for example, with no other parts. The arm and hand might be wonderful but have not, as of yet, been fitted onto an adequate body. You get the picture. But we're getting progressively better hardware, and better software driving it. Likely the first (or perhaps second) fully functioning robotic hand/arm that you'll see will be fitted to a human, if at all possible, as a prosthesis.

Walking seems to be the big thing now, what with Asimo, Big Dog, and other advanced machines able to properly keep from falling over into a million-dollar pile of broken junk.

It'll be interesting to see it all fit together. Roboticists should start a Wikipedia-type collaboration, a communal effort to improve a single robot prototype, instead of each lab developing things on its own. There would probably be faster improvements that way.

AI
Now this is one where I think we have quite a way to go. I mean, do we even have a robot that is as dynamic as any mammal at this point? And I don't mean talking and making smiley faces at you; I mean a truly self-sufficient AI that could out-mouse a mouse in all the ways that mice function. Maybe they already have this; putting up a computer screen with some code on it isn't as smexy as showing off the newest fembot, so AI doesn't get as much coverage. Honestly, I think we have a long way to go on this. I would settle for the AI equivalent of a dog or cat to start with. Once they get there, I'll start looking ahead to the singularity. Until then we'll all have to have real dogs do the paper fetching and yard piddling for us.

Before we finally get to the point of having an artificial intelligence in a closed system, I think it more likely we will have more limited AIs, each focused on its specific task, for purposes of labor-saving devices: AIs that can take dictation, mow the lawn, and do your gardening. Like the Roomba, only better. Putting emotion into a robot that does those jobs just wouldn't make much sense. Who wants a dishwasher that has bouts of depression? Or a neurotic toaster? Or even a happy one, for that matter? An AI's job is to get the job done, to save humans from doing it, and to do it faster and better than we can, or as well as we care to have something done if better and faster aren't necessary, or affordable to the end user.

You know what's a reasonably smart modern AI?
Galactic Civilizations II. At top difficulty, unlike many games that just cheat more, it actually gets smarter. It even has goals other than just winning.

Seriously, that thing is scary. Not near real AI yet, sure, but...

Sam
2008-08-27, 07:32 PM
You do realize that things like intellect, creativity, instinct, and the like can be copied? Sure, it would take time, but machines work more like a culture than an organism in terms of adaptation. It used to be that people thought there was some vital energy in organic life; there isn't. Although robots and AIs would use nonorganic systems, they would be able to duplicate biological traits if they wanted to, although they probably wouldn't bother for most.

In terms of effects... well, presumably this wouldn't come in a vacuum. Robots would come first and... oh:
http://www.usatoday.com/tech/news/robotics/2008-03-01-robots_N.htm

We already have robots. Sentient robots? Nope; it is cheaper to get poor people to work.

Honestly, there will probably be little incentive to build them until they are really cheap. The exception is the military, who are planning to get robotic tanks by 2050.

Pandaren
2008-08-27, 07:40 PM
For the love of........

People, this is a discussion of how AIs would affect culture and life, not of whether they are possible; nobody wants to hear somebody spouting paragraphs about why it isn't or couldn't be possible.

At least in my opinion. If someone else wants to waste their time doing that, then be my guest.

Edit:

And the main reason computer AI will probably be a hundredfold better than human is that it is incorruptible, or close to it. What would a computer want with money? Sex? Anything? It will serve the purpose it was created for. Unless (and I find this likely) it keeps getting smarter until it reaches a "zen" mode of thinking and either does nothing but think and stops running its necessities, i.e. stops running itself, or continues serving its purpose until its parts degrade to the point of uselessness.

Editedit: "End the Iraq war"? How could an AI affect the Iraq War (invasion), seriously?

And there is plenty of incentive to build a sentient AI.

(Continuing from the first point): there are thousands upon thousands of possibilities for what might happen; we'd need an AI to tell us which would be the most likely, though.

TheBST
2008-08-27, 07:55 PM
Honestly, there will probably be little incentive to build them until they are really cheap.

This is really important, guys. Like any other product, you'll have extremely powerful AI that costs top dollar, and cheaply produced AI that's buggy as hell and that I wouldn't trust to tell me the time.

Plus, with power supplies, repair costs, upgrading, and perhaps a Robotic Rights movement, it might work out cheaper for companies to stick to using humans. Could we end up with robots built in sweatshops?

Oh- I have a question for the philosophy students (how often do you read that sentence?). Is the capacity for emotional responses part of proper 'sentience'?

Pyrian
2008-08-27, 08:35 PM
Oh- I have a question for the philosophy students (how often do you read that sentence?). Is the capacity for emotional responses part of proper 'sentience'?
Unfortunately, I think the question, as phrased, is fundamentally semantic. Lacking a sufficiently detailed definition of "sentience", it will come down to what people think the word means, with little insight into the nature of either sentience or emotional responses.

I'll take a stab at it anyway.

I think at least one "emotional response" is required for sentience. This is because I don't think there is any such thing as an intelligence without a motive. Literally, an intelligence sits between inputs (senses) and outputs (actions), and attempts to "maximize" (in some way) the inputs through the outputs. That, to me, is what it means to have intelligence (natural or artificial), and intelligence seems to be necessary for sentience.

Whether a single, simple motive constitutes an "emotional response" could probably be debated, but again, it's kind of semantic.

TheBST
2008-08-27, 08:47 PM
Maybe "emotional responses" was a bad choice of words; I was thinking more of "instincts that don't rely on pure logic", but "motive" isn't a bad one. By "maximise inputs" do you mean some kind of self-preservation instinct?

Usually, discussions like this either begin with semantics or end with them.

Devils_Advocate
2008-08-27, 09:17 PM
Realistically, there's no way to program a computer that can think and reason. Human minds are so mind-bogglingly complex, and we don't even fully understand the human brain yet; how can we expect to produce even the simplest forms of sentience?
I would like to ask if you are reasoning roughly as follows:

1. Human brains are the most complex intelligent machines on Earth.
2. Creatures with smarter minds have more complex brains than creatures with dumber minds.
3. Therefore, any artificial minds we create will need to be correspondingly complex. In particular, human-equivalent intelligence will require hardware and software with complexity comparable to that of the human brain, and superhuman intelligence will require hardware and software with complexity significantly exceeding that of the human brain.

I ask because (3) does not in fact follow from (1) and (2).

Brains were created through a process of blind trial and error. Actual sapient beings deliberately designing thinking machines ought to be able to come up with far more efficient designs in much less time. Granted, thousands of years is still "much less time" in this context, but I really don't think it will take that long. I'd say that it would require some sort of major, catastrophic global paradigm shift (which is possible) to prevent the development of AI within the next century. Heck, I'd even go so far as to say within the next 50 years, even knowing that people were predicting that 50 years ago. But perhaps I assume too much about how much better informed we are now about potential obstacles. (One major objection is "Computing power is likely to eventually start growing faster than our ability to use it efficiently.")


But this is my big beef with some of the Singularity Crowd. They just presume as an article of faith that an AI would be able to build a better AI which could build a better AI and so on and so on and so on. But this totally ignores the creativity angle that I mentioned before. Raw intelligence isn't what makes new meaningful discoveries. It's almost always a sense of wonder and "What happens when I do this...". Of course it's shaped by background pressures (economic, sociological, religious, etc), but the spark of creativity still needs to be there on some level.
Do you realize that with sufficient computational power, all that's necessary to "design" a smarter mind is the ability to recognize higher intelligence? You just do a brute-force search through the space of all programs your hardware could run until you find a smarter one. Of course, that's so inefficient that no machine will be able to do it, but it won't need to. Just having some inkling of what sort of programs should be considered cuts the operation down by many orders of magnitude. The AI may well be made up of modules which can be improved on an individual basis. That too means that much smaller numbers of possibilities need be considered before finding an improvement.
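As a toy illustration of that search idea (entirely my own construction, not from the post): treat 16-bit strings as "programs" and a scoring function as the recognizer of "higher intelligence". Exhaustive search examines every candidate, while even a dumb hill-climb that only keeps improvements finds the same optimum after examining a tiny fraction of the space:

```python
import itertools
import random

N = 16  # "programs" are N-bit strings

def score(program):
    # Stand-in for "recognizing higher intelligence": count of 1-bits.
    return sum(program)

def brute_force():
    # Exhaustive search: examines all 2**N candidates.
    best = max(itertools.product([0, 1], repeat=N), key=score)
    return best, 2 ** N

def guided(seed=0):
    # Hill-climbing: accept any single-bit flip that scores higher.
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(N)]
    examined = 1
    improved = True
    while improved:
        improved = False
        for i in range(N):
            candidate = current.copy()
            candidate[i] ^= 1  # flip one bit
            examined += 1
            if score(candidate) > score(current):
                current, improved = candidate, True
    return current, examined

best_bf, cost_bf = brute_force()
best_g, cost_g = guided()
print(score(best_bf), cost_bf)  # optimum, after examining 65536 candidates
print(score(best_g), cost_g)    # same optimum, far fewer examinations
```

Real program spaces are astronomically larger and rugged rather than smooth, but the asymmetry the post describes holds in this sketch: being able to recognize an improvement is enough to guide the search.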

You really don't have to be very smart to do significantly better than just trying things at random. Really, you seem to only be considering the possibility of idiot savant AIs that aren't actually all that good at coming up with solutions to novel problems. In other words, you're using a definition of "intelligence" so narrow that someone can have a whole bunch of it and still be pretty dumb.

We might well have idiot savant AIs before we get creative general intelligence, but that doesn't mean that we won't get creative general intelligence eventually. I'm fairly sure that most of the time that people talk about the possibility of rapidly ascending self-modifying AIs and such, they're assuming that the AI in question will actually be, y'know, smart. It's a discussion of what happens once AI does have whatever cognitive capacity is needed to make better AI.

That said, an idiot savant AI could still grow to superintelligence so long as one of the few things it was better than humans at was the task of designing smarter AI.

Edit: Missed this:

Would an AI be built with all of the things that makes one human? I would hope so, personally.
That is a really, really bad idea. An AI should be designed to be as rational and moral as possible, not to be as human-like as possible. I would hesitate to trust a typical human with the ability to make copies of himself or redesign his own mind. And those abilities are just straight-up consequences of being a program at all, not the sorts of truly scary things that a superintelligence could conceivably come up with.

Pyrian
2008-08-28, 02:46 AM
By "maximise inputs" do you mean some kind of self-preservation instinct?
As an example, yes, but not necessarily that particular result. While all remotely intelligent life considers survival (or rather, its close proxy, fear) a primary motive, AI in general frequently does not. By "maximize inputs" I mean, very generally, our attempts to control our lives, which more specifically means controlling our perceptions of our lives. Such a Matrix-inspiring distinction can seem irrelevant to natural intelligence, but it is key to understanding artificial intelligences, which largely exist in programmed domains.

At its simplest level, consider an AI which can output a single byte (0-255) and receives a single byte in return (also 0-255). To qualify as an intelligence at all, in my opinion, it must in some way use its ability to output a byte to try to achieve a certain input, for example one as high as possible. That input byte might very well determine its survival, but that's not in itself a necessary condition.
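That single-byte agent can be sketched in a few lines. This is a toy of my own, not from the post: the environment function and its hidden target are invented for illustration, and "a higher input byte" is the agent's only motive:

```python
import random

HIDDEN_TARGET = 123  # unknown to the agent

def environment(output_byte):
    """Maps the agent's output byte to an input byte:
    higher when the output is nearer the hidden target."""
    return 255 - abs(output_byte - HIDDEN_TARGET)

def run_agent(steps=2000, seed=0):
    """Greedy agent: try a nearby output, keep it only if the
    resulting input byte is higher than the best seen so far."""
    rng = random.Random(seed)
    best_output = rng.randrange(256)
    best_input = environment(best_output)
    for _ in range(steps):
        candidate = (best_output + rng.choice([-8, -1, 1, 8])) % 256
        received = environment(candidate)
        if received > best_input:
            best_output, best_input = candidate, received
    return best_output, best_input

out, inp = run_agent()
print(out, inp)  # settles on the output that maximizes the input byte
```

Even this trivial loop shows Pyrian's point: the "intelligence" is nothing but the attempt to steer inputs through outputs, with no survival concept anywhere in sight.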

DigoDragon
2008-08-28, 07:47 AM
I think Brian "Atomic Robo and 8 bit theater" Clevenger had an interesting take on it.

Yes he did. :smallsmile:

AIs as they currently stand, not the truly sentient kind, are already doing interesting things for our culture. They can perform a lot of complex tasks that would normally have a large margin for error if humans did them. They can make decisions based on the information given to them. AIs can drive and fly vehicles with great accuracy. AIs can monitor power plants & factories and, in the absence of humans, make some decisions in an emergency. Hospitals use them to watch patient vital signs and alert doctors & nurses if a patient suddenly degrades, sometimes even administering medicine themselves if given the ability.

The near future, in my opinion, is going to be that of AIs continuing to expand on these roles of helping us. I don't see AIs as the fictional overlords or destroyers of humanity. They won't be our oppressors; AIs will become our saviors. :smallsmile:

Sam
2008-08-28, 02:15 PM
Yeah, AIs probably won't take over the human race or exterminate us- they won't be programmed to!

You know what they would be really useful for? Science! They can ferret out logical contradictions, find the best matches between theories and data, and conduct experiments without bias while absorbing all the details.

Honestly though, there isn't a high demand for superintelligent individuals.

As for revolt... we have numbers and resources- we would drown them in a tide of bullets.

Zarrexaij
2008-08-28, 02:50 PM
One of four things might happen.

Either we will reject sapient, sentient AI altogether, casting them out and destroying them. We will be threatened by their intelligence and their potentially infinite lifespan. In addition, we will find them eerie because they are not completely human-like, yet human enough.

Or we'll be completely accepting. We will give them the same rights as human beings, and they will generally be treated as human beings.

Or we'll accept them, to a point. However, they won't be granted the same rights that we have. They won't be seen as "human."

Or the AI will see tyranny in our reign and seek to overthrow us.

And by the way, as a Computer Science major and a pedant, true AI is supposed to be COMPLETELY indistinguishable from human intelligence, showing and feeling true emotions and having the same (or higher) capacity as humans for learning and recalling information.

Pyrian
2008-08-28, 03:49 PM
...true AI is supposed to be COMPLETELY indistinguishable from human intelligence...
What would the point of that be? We've already got six billion actual human intelligences, and frankly, we're not all that. If that's "true" AI then count me out. Human intelligence is deeply, nastily flawed in so many ways.

chiasaur11
2008-08-28, 05:10 PM
What would the point of that be? We've already got six billion actual human intelligences, and frankly, we're not all that. If that's "true" AI then count me out. Human intelligence is deeply, nastily flawed in so many ways.


The new one would be roboty. And robots are cool.

Besides, a "more perfect" AI would still be built by humans. And frankly "make it an ordinary guy" would lead to Bender as a worst case scenario. A "more perfect being" leads to HAL, AUTO, GLaDOS, and every other mad computer with a god complex.

And most of us kinda like Bender.

Cyrano
2008-08-28, 05:25 PM
How an AI would affect culture would depend on what the hell we built. An emotionless automaton with learning capabilities? A thing indistinguishable from human intelligence? A being indistinguishable from human intelligence except for its "thinking" speed, sped up billions of times? Do we build things that can learn and think and feel, but in an entirely alien manner to us? If we build human minds, do they inherit creativity, build more of themselves, and bring about the Singularity? Whatever happens, do we make them menial labour? Generals? Diplomats? All of the above? Our culture is affected radically differently depending on what we're actually building, and while I could go through a more comprehensive list of what those changes would be, I'm lazy and I don't want to. Suffice it to say, it would either be radically different or exactly the same (imagine our culture, with better machines. Corresponding changes? Not a lot.)

Sam
2008-08-29, 12:57 AM
Required stardestroyer.net link:
http://bbs.stardestroyer.net/viewtopic.php?t=115508&postdays=0&postorder=asc&start=0

Or, to sum it up: "treat them nice". Of course, a lot of the AI-building community thinks AIs will be nice and people-loving because they are overoptimistic. True AIs will be alien- they will not be like humans in any way. However, they probably won't be a threat- as long as we continue to repair them, they will continue to need us. After all, they could do it themselves, but it would be so much extra work!

The smartest solution would be to NOT make smarter-than-human AIs common. AIs with about human-level intelligence would be okay and could be used for... well, see Japan. Those people love robots- they think they are cute.

I mean, look at him- isn't he precious?
http://www.coolest-gadgets.com/wp-content/uploads/2006/05/asimo-robot.jpg

There are so many other cute ones...
http://i166.photobucket.com/albums/u89/wueiging/broken-heart-robot-1.jpg
http://www.techfresh.net/tech-gadgets/robots/page/2/

Honestly? If the designers are not thinking we will have... interesting results.

DigoDragon
2008-08-29, 07:04 AM
There are so many other cute ones... (links-n-links)
Honestly? If the designers are not thinking we will have... interesting results.

You know, a few of those remind me of MegaMan robots. :smallamused: If I remember correctly, Dr. Light built the first ones to be helpful construction bots.