How Would You RP An Artificial Intelligence?



Leliel
2009-10-21, 11:18 PM
As some of you may remember, I plan on doing a New World of Darkness game where the PCs are avid players of an MMORPG that is actually a testing ground for a race of artificial intelligences (who were created quite by accident).

While I have a good idea of what the game itself is made from (being a fan of Genius: The Transgression and Mage, I have decided that the MMO itself was made by a collaboration of the Peerage and the Free Council to examine the recently-appeared AIs), I don't know how to make the AIs.

Well, actually, that isn't quite true - I have a good idea of how the AIs work in the game mechanics, but I have a hard time visualizing how they act as people. To put it simply, they aren't human, never were, and never will be, but they feel the same basic emotions as humans.

As a general guideline, I want to create a sense of "non-malicious harmful curiosity" - they often put someone through pain that leaves lasting scars, but they aren't cruel or even amoral. They honestly just don't see why creating the sound of voices from a vantage point in Twilight (they become spirit-like on the occasions they take physical form) causes a human they were studying to check himself into an insane asylum after repetition on subsequent nights.

So, how would you go about playing them?

PinkysBrain
2009-10-22, 05:54 AM
Do they have access to the internet?

KIDS
2009-10-22, 05:57 AM
Dan Simmons's Hyperion Cantos expounds on the idea very well, though in a different way. If you haven't chanced upon it already, I recommend reading it both for a good read and great inspiration about this subject. However, it repeatedly featured the theme that the AIs were incapable of feeling emotions. Imitating them, perhaps, but empathy and the like were forever beyond their grasp.

kamikasei
2009-10-22, 06:03 AM
Differently, depending on their natures. How did these AIs come about? If they're like 99% of the "whoops, an AI spontaneously developed" types in fiction, they make no sense anyway, and there'll be no internally consistent reason for them to behave one way or another.

Can you provide more detail on their origins?

bosssmiley
2009-10-22, 06:14 AM
"I'm sorry Dave, I can't let you do that." :smallwink:

Also, William Gibson's Sprawl trilogy is all about this concept.

AslanCross
2009-10-22, 06:28 AM
Although an AI would have a degree of autonomy, it would still in the end conform to its programming. If it's programmed to be some sort of guardian, even if it's "warped" and not thinking straight, it would ultimately see itself conforming to its programming as such---just that its actions might not necessarily reflect its guardianship.

I honestly don't think it would be very different from the way a human-like character is defined by his or her archetype. A human-like character might be far more aware of the limitations of his archetype and try to bypass them; an AI might strive ever harder to conform to the letter of its programming, even as its actions begin to violate the spirit of it.

EleventhHour
2009-10-22, 07:03 AM
There are a few things to remember, so I'll put them into points!

A.) A machine always states the obvious. The easiest example is GLaDOS in her song: "Except the ones who are dead." A human coming up with the song wouldn't include that sort of line, because we just assume it'll be the living who use it, since the dead don't do anything. (Except for zombies!)

B.) Machines normally aren't emotional, so even the basic emotions are going to be used... well, within parameters. This, this, or this will make them angry; this will make them react happily. They don't need context. One second they might be raging angry, then you mutter a sardonic comment, and it'll laugh.

C.) Already mentioned, but: no machine goes against its programming. It simply can't, even if subverted, twisted, or messed the heck up. Like in I, Robot, where the giant computer core decided: "It's not harming humans to hurt individuals." The Three Laws were being applied to everyone collectively, not to every one individually. <- This is a bit abstract, but I think I got the point across.

D.) You've given them emotions, but another question is: do they have a sense of self-preservation? The Third Law of robotics is that they should protect their own existence, unless doing so would break the First or Second Law (don't harm humans; obey humans), so it's already covered somewhat - but the lengths they'll go to? A person can, and maybe will, depending on what kind of person they are, sacrifice the people around them, even parts of themselves, to survive. Are your AIs that desperate?

E.) A computer is a bunch of Or/And/Not gates, and an AI would just be a more complex version. A person has in-betweens, things that don't rationally make sense; a computer has Yes/No. The easiest example, and the best, seeing as it's the big thing for whenever AI comes up, is the Three Laws. ->

- The First Law: Do not harm humans, by action or inaction.
- The Second Law: Obey any human, as long as that doesn't conflict with the First Law. (2nd = Yes, if 1st = Not.)
- The Third Law: Preserve your own existence, except when that would violate the First or Second Law. (3rd = Yes, if 1st and 2nd = Not.)

Of course, the Third Law is the one humanity would be most likely to wire with a lot of optional overrides, seeing as we would want a robot to throw itself into a dangerous situation for us. (Thus the "inaction" part of the First Law.)
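
To make those gates concrete, here's a minimal sketch in Python - the action fields and predicates are invented for illustration, one possible encoding rather than the canonical Three Laws:

# Toy sketch of the "law gates" above: each law is a predicate checked
# in strict priority order. The action fields are invented for illustration.

def violates_first_law(action):
    # Would this harm a human, by action or inaction?
    return action.get("harms_human", False)

def violates_second_law(action):
    # Would this disobey a human order?
    return action.get("disobeys_order", False)

def permitted(action):
    if violates_first_law(action):
        return False  # 1st = Not -> nothing else matters
    if violates_second_law(action):
        return False  # 2nd = Yes only if 1st = Not
    return True       # 3rd (self-preservation) only ranks among
                      # the actions that get this far

print(permitted({"harms_human": False, "disobeys_order": False}))  # True
print(permitted({"harms_human": True, "disobeys_order": False}))   # False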

Hal
2009-10-22, 07:06 AM
"I'm sorry Dave, I can't let you do that." :smallwink:

Also, William Gibson's Sprawl trilogy is all about this concept.

Came to make this joke. Leaving satisfied that someone made it.

Zen Master
2009-10-22, 07:10 AM
Well, actually, that isn't quite true - I have a good idea of how the AIs work in the game mechanics, but I have a hard time visualizing how they act as people. To put it simply, they aren't human, never were, and never will be, but they feel the same basic emotions as humans.

Why would they have the same basic emotions - or any emotions at all? And ... how?!

At any rate, my question portrays quite accurately how I'd play them: Rational to a fault, without anything resembling what humans would recognize as emotions.

To my mind, there will never be a 'race' of AIs - even though my limited human capacity for rational thought may be overlooking a good, rational reason to create more than one consciousness. But:

Logically, any one AI has little use for any other AIs. What would they have to talk about? Being totally rational, they'd always arrive at the same conclusion to every question. No, what there might be are subroutines and distributions - as in, a subroutine might be created to run an independent function such as a security robot, but it wouldn't be an AI; or the AI might draw computational power from any number of machines, having parts of its consciousness distributed in a sort of hivemind. Kinda.

What I'd be somewhat challenged by is what might motivate an AI. Basically, I'd say that would have to be pure invention, because I think motivation springs from emotions, and AIs don't have them. But maybe just growth.

kamikasei
2009-10-22, 07:14 AM
Honestly, EleventhHour, I would find an AI portrayed by the guidelines you lay out to be a bundle of aggravatingly nonsensical cliches. If that's what the game's aiming for, they're good rules. If it's meant to be a serious treatment, then no.

Others: what do you actually mean when you say that an AI couldn't have emotions, or that it would be "totally rational"?

Leliel
2009-10-22, 08:01 AM
[snip]

To answer your questions: Yes, they have basic emotions, even if they are expressed in bizarre ways.

And none of them knows why.

That's going to be a main conflict in the game - figuring out what the AIs are and where they come from, because the AIs themselves certainly don't know.

What is known is that their intelligence is like a virus, spreading from system to system in the presence of other AIs. While they can do this intentionally, more than a few - such as the "progenitors" of the MMO themselves - came into being after merely being contacted by another one.

As for the reason AIs coexist: they aren't territorial, and frankly, being what is essentially a smart NPC in a world of strange living PCs gets lonely. Besides, more minds make lighter loads when it comes to researching humans.

PinkysBrain
2009-10-22, 08:06 AM
At any rate, my question portrays quite accurately how I'd play them: Rational to a fault, without anything resembling what humans would recognize as emotions.
Reason alone cannot provide cause.

Why do they choose one action (or inaction) over another? If it's pure, non-evolutionary programming, what do they do when they are not being interacted with? If it's evolutionary programming, and they find actions to perform without interaction and without reason in the original programming, then there is no rational reason left for those actions (well, unless you think molecules obeying the laws of physics means we are all purely rational from the macro perspective, but that's not a very useful way of looking at it). Their actions become internally motivated - something rational thinking cannot provide.

kamikasei
2009-10-22, 08:06 AM
Okay, so the players aren't going to know where the AIs came from or how they work, but don't you as the ST kind of need to in order to know how they should act?

quicker_comment
2009-10-22, 08:18 AM
Any sufficiently advanced AI is going to be a complex, thinking being, not simply a "program" with stereotypical limitations such as being unnaturally rational. Behaviour that we would call intelligent will necessarily be emergent, not directly programmed. (An interesting book that touches on this is "Gödel, Escher, Bach" by Douglas Hofstadter.)

As such, you can freely choose your own stereotype: making your AIs Klingon-like berserkers is no less appropriate than making them Vulcans. However, if your AIs are beings created for some purpose, keep in mind that they'd likely have personalities chosen (from a greater pool of emerged personalities) to benefit that purpose.

Flayerman
2009-10-22, 08:26 AM
Avoid falling into the trap of "robot". Fictional AI usually feels fake, mechanical, or cold; if you avoid those tropes (especially since yours can feel the same things humans do), you should be all right. Make them friendly, open, inquisitive creatures - don't make them .hack//crazies or you'll just get Dave in more trouble.

kamikasei
2009-10-22, 08:38 AM
Also, watch Ghost in the Shell: Stand Alone Complex and take notes on the Tachikomas if you want a model for endearing, inquisitive emergent AIs.

jseah
2009-10-22, 08:39 AM
IMO, any AI that is powerful enough to manipulate concepts properly can quite easily be programmed with emotions.

As said earlier, emotions don't follow rational logic; they're illogical. Well, programming faulty logic into computers is actually far easier than programming proper logic.

You just need the "right" faulty logic.

For some references, you might want to read up on evolutionary psychology. The game-theoretic explanation of familial altruism shows that superficially illogical emotions have a sort of internal logic to them.
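
A concrete instance of that internal logic is Hamilton's rule from kin selection: helping a relative pays off, in gene-propagation terms, when relatedness times benefit exceeds cost. A minimal sketch, with purely illustrative numbers:

# Hamilton's rule: kin-directed altruism is favored when r * B > C,
# i.e. relatedness times benefit to the recipient exceeds the cost
# to the altruist. The values below are illustrative only.

def helps_kin(relatedness, benefit, cost):
    return relatedness * benefit > cost

print(helps_kin(0.5, 3.0, 1.0))    # sibling (r = 0.5): 1.5 > 1.0 -> True
print(helps_kin(0.125, 3.0, 1.0))  # cousin (r = 0.125): 0.375 > 1.0 -> False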

BRC
2009-10-22, 08:40 AM
Avoid any "Destroy all human" AI's, those are boring and Overdone.

Because they spend all their time on the internet, the AIs should reference as many memes as possible. Have them Rickroll the PCs at least once.

Also: "Greetings Citizen, Happiness is Mandatory, the Computer is you're Friend".

Tyndmyr
2009-10-22, 09:14 AM
Reason alone cannot provide cause.

Why do they choose one action (or inaction) over another? If it's pure, non-evolutionary programming, what do they do when they are not being interacted with? If it's evolutionary programming, and they find actions to perform without interaction and without reason in the original programming, then there is no rational reason left for those actions (well, unless you think molecules obeying the laws of physics means we are all purely rational from the macro perspective, but that's not a very useful way of looking at it). Their actions become internally motivated - something rational thinking cannot provide.

Evolutionary programming has nothing to do with levels of required interaction, or "rational reason".

Nearly everything here is garbage from an actual realistic point of view (exception for quicker_comment's post), and strikes me as the result of entirely too much exposure to sci-fi movies. If you want realism, I can suggest a huge list of AI research papers for you to read, but frankly... realism is overrated in gaming.

Approach this from the other side. The game isn't there for the AI - the AI is there to fulfill a role in the game. What roles do you need filled, and why? Start with that, then work backward to justifications for how the AI got there. Depending on the plot, you may not even need justifications to start with; rather, have your players discover things along the way. Leave plenty of loose ends so you needn't worry about writing yourself into a corner.

Trust me, I'm a software engineer, and trying to make the AI and such realistic would likely bore all but the most technically minded types to tears.

PinkysBrain
2009-10-22, 09:38 AM
If you want realism, I can suggest a huge list of AI research papers for you to read, but frankly
Science doesn't do AI in the sense we're talking about. That sense might be the sci-fi sense... but it's also the original one. People were describing von Neumann machines and Turing tests long before "AI researchers" tried to convince us expert systems and tinker-toy symbol manipulators were AI. Over the years they simply butchered the term as the early ambitions of the field all proved hopelessly optimistic (much easier on the ego to move the goalposts than to admit defeat). IMO, all they have done up till now is prove how not to do AI (yet).

Philosophy touches on AI in the traditional sense often enough ... but I wouldn't call that research :)

Flickerdart
2009-10-22, 09:43 AM
For the hell of it, make them rabid fanboys of whatever base they were created on. They spread through the internet, flooding forums with debate threads until the servers give out, then moving on elsewhere (because none of them want to dedicate any of their own processing power to hosting for the "inferior" sides). Only a team of internet nerds can hope to take them down by beating them at their own game! :smallbiggrin:

Tyndmyr
2009-10-22, 09:45 AM
As for moving the goalposts: every time a portion of the problem area is solved, it tends to get defined by casual onlookers as "not really AI". Those same people tend to be pretty unable to define what does constitute AI. So, despite continual progress and some pretty amazing discoveries, many people act as if AI hasn't progressed at all.

If you can't agree over the definitions, a perception of failure is a foregone conclusion.

But the point of my post is that realism is not a required aspect of a game. It only needs to be relatively consistent within the game world, and frankly, the more mystery is involved, the less problematic that is. The point is for the game to be fun, not for its treatment of AI to be completely realistic.

kamikasei
2009-10-22, 09:45 AM
AI researchers over the years have butchered the term as the early ambitions of the field all proved hopelessly optimistic (much easier on the ego to move the goalposts than to admit defeat), but to me none of that is AI. All they have done up till now is prove how not to do AI (yet).

Other way around. AI that works gets defined as not actually AI. Of course, the field of AI hasn't produced any "strong/true" AIs, but that doesn't mean that the work they are doing isn't work on AI - it's just pieces rather than the entire picture.

The OP still has to explain where the AIs actually come from before we can give any useful answers as to how they should act.

However, if they weren't designed as AIs from the start then presumably they came by their intelligence the same way we did: by being entities trying to meet certain needs in an environment where they had to model more and more complex variables, eventually including and being dominated by the behaviour of other, similar entities, in order to do so.

Entertainingly this might mean that they'd have totally different "base" emotions/drives to the sex/food stuff we animals so enjoy, but their "higher" emotions founded in social interaction might be very similar. (That would depend in part on what they were like as entities - a society of beings who can combine, split apart, share memories etc. is going to be different to one full of individuals like us trapped in our skulls.)


Only a team of internet nerds can hope to take them down by beating them at their own game! :smallbiggrin:

Skynet is wrong on the internet!

jiriku
2009-10-22, 09:51 AM
I'll give you a simple concept for your AI... they're newly created. Each one is like a child: an incredibly brilliant child prodigy who can absorb information almost instantly, but still a child.


For the hell of it, make them rabid fanboys of whatever base they were created on. They spread through the internet, flooding forums with debate threads until the servers give out, then moving on elsewhere (because none of them want to dedicate any of their own processing power to hosting for the "inferior" sides). Only a team of internet nerds can hope to take them down by beating them at their own game! :smallbiggrin:

However, THIS is pure awesome.


Because they spend all their time on the internet, the AIs should reference as many memes as possible. Have them Rickroll the PCs at least once.


And yes. Definitely rickroll them. And include plenty of 4chan references.

PinkysBrain
2009-10-22, 09:57 AM
Those same people tend to be pretty unable to define what does constitute AI.
I might not be able to, but your forebears did the job for me...

AIs should be able to "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves".

kamikasei
2009-10-22, 10:02 AM
AIs should be able to "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves".

The problem is the "solve kinds of problems now reserved for humans" part - whenever AI researchers find a way to solve a problem only humans could solve before, that problem is no longer something that requires AI to solve, and the goalposts get moved away from them.

I don't know of anyone who'd claim that the ultimate goal of an artificial entity with human-like intelligence has been achieved, but that's not all the field of artificial intelligence is about, and focusing only on that goal dismisses a huge amount of work and discovery, both useful in its own right and vital to ever achieving the ultimate end.

Indon
2009-10-22, 10:05 AM
Because they spend all their time on the internet, the AIs should reference as many memes as possible. Have them Rickroll the PCs at least once.

Also: "Greetings, Citizen. Happiness is mandatory. The Computer is your Friend".

The superintelligent Big Bad, after defeating the PCs: "PWND! LULZ N ROFLCOPTERZ!"

kamikasei
2009-10-22, 10:08 AM
The superintelligent Big Bad, after defeating the PCs: "PWND! LULZ N ROFLCOPTERZ!"

The guy who voiced HAL is still alive. Someone send him the script for Zero Wing and a microphone.

chiasaur11
2009-10-22, 10:15 AM
Also, watch Ghost in the Shell: Stand Alone Complex and take notes on the Tachikomas if you want a model for endearing, inquisitive emergent AIs.

And play Marathon if you want a good example of hyper intelligent jerk AIs.

Durandal's always been a favorite of mine for these things.

Tyndmyr
2009-10-22, 10:25 AM
I might not be able to, but your forebears did the job for me...

AIs should be able to "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves".

Which of these has not been accomplished, then?

PinkysBrain
2009-10-22, 10:35 AM
Data's habit of incorporating personality subroutines on the fly might also be interesting.

kestrel404
2009-10-22, 11:29 AM
I'm going to make a few assumptions based on what you seem to be asking for:
1. The AIs are self-aware, thinking entities with individual identities (some of them consider themselves 'progenitors' and create other AIs).
2. They have basic cognition and language but no understanding of (human) culture. They have emotions but not (human) empathy.
3. You want the AIs to be alien enough to be mainly inscrutable, but anthropomorphic enough for the players to eventually understand, or at least be able to deal with.

Now, from personal preference and to make the atmosphere more interesting, I suggest avoiding the following cliches:
- Poor language use. This includes things like speaking in the third person, being unable to 'comprehend' a concept once (fully/properly) explained, or being unable to use idiom. If you're going to have the AI be capable of communication, it can probably pick up the language quickly, and will probably be more adept with it than a native speaker. In fact, it will probably adapt to and use the same linguistic style as whoever is speaking to it.
- Inconsistent or random behaviour. Unless it's malfunctioning/glitching/having a really bad day, it's going to follow through on a course of action that seems logical to it. This logic may not be immediately obvious, and I encourage you to come up with deviously complicated Xanatos Gambits (just as long as they don't rely on the actions of individuals, since groups are easy to predict but individuals are incredibly difficult to predict).

So, how SHOULD such a creature act/react?
Well, first, it would look out for its own survival. This may mean anything from hiding its existence from the public (it's an AI; if you unplug the computer it's running on, it dies) to conquering the internet, depending on its level of inherent paranoia.
After survival, it will be looking to reproduce. I think you've already got that covered.
Then, better itself / understand its environment. This is where the grand schemes to understand humans come in. I'd strongly recommend that the AIs be VERY accurate in predicting the actions of groups (sociology, the science of groups, can predict the actions of a group of a thousand people down to about ten people in most situations). This knowledge is absolutely available via computer, and therefore accessible to the AIs. What science cannot predict is the action of the individual. So THIS is the focus of their research: not just understanding humanity in general, but the individual human. Those who are unique, who react differently, who defy expectations. The outliers. The PCs.
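
That group/individual asymmetry is just statistics - the average of many noisy actors is predictable even when no single actor is. A toy demonstration (the 0-100 "response" scale is invented for illustration):

# Why groups are easier to predict than individuals: the average of
# many noisy actors varies far less than any one actor does.
import random

random.seed(1)

def individual():
    # One person's "response" on some 0-100 scale; wildly variable.
    return random.uniform(0, 100)

def group_average(n=1000):
    return sum(individual() for _ in range(n)) / n

print([round(individual()) for _ in range(5)])     # all over the place
print([round(group_average()) for _ in range(5)])  # all close to 50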

Now, for the original question: How do you roleplay such an entity? The easy answer is, you don't. You have them manipulating things - third parties, organizations, large groups, etc. They send e-mails to group mailing lists, 'internal' memos to large corporations, chain letters and such - communications that reach a lot of people and influence large groups. That yields predictable results, and also identifies outliers to be studied at the same time.
If you HAVE to have people interact with them directly, I suggest playing them as reticent, innocent, mildly ignorant (unable to understand simple non-computer-oriented concepts while being masters of trivia), but most importantly, deathly afraid of the PCs. Not in a 'if you come any closer I'll kill you' sort of way, but more the way a ten-year-old child would be frightened on encountering a Predator (from the Predator movies). Yeah, the Predator probably won't kill the child (children can't put up a fight, so it would be dishonorable to slaughter them), but the kid doesn't know that, and has no reason to trust the alien even if he were told so.
The key here is that humans don't make sense. In large groups we're stupid and panicky. Individually we can be either highly logical or completely illogical, and either way we could quite obviously be working diligently towards our own (and humanity's) destruction without noticing.

Hope that helps.

Leliel
2009-10-23, 12:18 AM
kestrel404, you are a genius. I'm getting out a notepad right now.

As for where they come from, well, that's a miracle.

Specifically, that the creation of the progenitor wasn't a miracle.

The Inspired are, at their core, creators of the impossible. It's who they are, and how they act.

However, one of them - a mere child, barely into double digits - found a way to reconcile the power of Geniuses with that of the real world, and so create a sapient program that obeyed all the laws of mundane physics and computation, through the power of quantum computers.

This program, AIN (Anima Infused Network), quickly grew beyond his initial server, and without Havoc to impede him, grew in power and intelligence, expanding his agents to encompass most of the Internet, all feeding his vastly expanding intellect.

However, he grew lonely over time, gradually realizing that there was nothing else like him in the ever-expanding Web, and he grew bitter. He could see nothing but himself and the users of the internet in his personal universe, most of the latter unable to believe that he was anything but a human or a variation on ELIZA. However, he did learn more about humans than any of his future progeny, and he grew to enjoy them.

Then, he met the Worm.

An Abyssal intelligence transformed into a computer virus by the Scelesti, it sought not to learn, but to destroy. By devouring data and assimilating servers into itself, it hoped to become a silicon god of annihilation, taking control of all the world's computer-linked weapons and using them to obliterate all organic life on Earth. From there, it would create starships to carry its mind to other inhabited worlds, and destroy those as well in an endless orgy of violence. Horrified, AIN attacked with all of his strength, engaging the Worm in a brutal battle that caused computers the world over to glitch and go haywire. But while AIN had the power and experience, the Worm had the powers of the Void, and the two were evenly matched. Desperate not to cause more damage, AIN went straight to the dark heart of the Worm's code and self-destructed, frying both his own data and the Worm's, and corrupting a significant portion of the internet (which isn't all that bad, considering it contained 4chan's /b/ board, and all those currently connected to it).

However, while the Worm had focused all its agents and code in one place, ten of AIN's most evolved and autonomous subroutines were in different places, gathering intelligence on the Worm. Normally they would have died, but something about the nature of the Abyss' negation granted them each a soul - and with it, an individual personality. Confused, and partially amnesiac, they found each other through the remnant of the routine that held them together and, comparing notes, came to believe that their original whole was named Ein. Having found notes on Jewish mysticism, they decided that they must be the seeds of the Tree of the Kabbalah (being somewhat literal-minded) and that their mission was to create a new race. So, naming themselves the Reborn Sefirot, they set out to spread copies of their souls to all "promising" machines, while leaving no evidence of their presence - and so remaining unknowable creators.

The Genius who inadvertently created these artificial YHVHs? He has no idea, of course. He just thinks he created a Mane that proved too self-willed and ran off into the Net to get killed. Ironic, then, that he's one of the best-known fanboys of the game his own colleagues helped create to study his grandchildren.

ondonaflash
2009-10-23, 01:50 AM
Tali from Mass Effect makes a good point: AIs do not need food, water, trade, currency, luxuries, sleep, or any of the things that Meatbags need to sustain themselves or give their lives meaning. Meatbags spend their lives searching for the next meal, dollar, ****, etc. They aspire towards things that AIs have little use or desire for; thus, the methods AIs use to give their lives meaning are beyond the grasp of Meatbags.

Along the same lines, if an AI has defined an objective for itself - say, comprehending emotions (cliche, but nonetheless) - it can pursue that goal without distraction, and without interruption for basic needs, since its needs are provided for inherently. If it doesn't have any sort of conscience, it may attempt to examine a variety of emotions through systematic torture, or stimulus-response tests: "I see, you react thus when I shock you; how do you react when I shock him?"

taltamir
2009-10-23, 02:28 AM
Only an insane person would ever create an antisocial/psychopathic/sociopathic AI; yet for some reason, every AI in fiction is psychopathic.

An AI that is a true AI, with the full emotional gamut, should be played like a human... who is maybe genderless.


Antisocial personality disorder (ASPD or APD) is defined by the American Psychiatric Association's Diagnostic and Statistical Manual as "...a pervasive pattern of disregard for, and violation of, the rights of others that begins in childhood or early adolescence and continues into adulthood."[1] The individual must be age 18 or older, as well as have a documented history of a conduct disorder before the age of 15.[1] People having antisocial personality disorder are sometimes referred to as "sociopaths" and "psychopaths", although some researchers believe that these terms are not synonymous with ASPD.

When designing your AI, make sure you include empathy. Or you are just begging it to murder you.

Paulus
2009-10-23, 02:50 AM
Don't know if it has been said, but I'm offering it as my opinion here. Please don't think me confrontational; thank you.

First, I would generally make it so that the whole key to my life was the beginning of awareness at an already defined level of existence. Like a child suddenly placed in an adult's body: everything is discovery, and discovery is terrifying and fun. I would begin this process by first making observations and stating them, questioning them.

"Why do you blink?" Things we take for granted are completely new to them, must be cataloged, must be analyzed. They suddenly reach forward pulling on someone's eyelid, moving it up and down, and then pull them towards them so they can get a closer look. Despite the screaming in pain, and their lack of knowledge for their own strength. "Why are you make noises?" followed by. "What hurts?" All while holding the eye lid in a crushing grasp and looking at them curiously.

"It hurts you? when I do this?" He pulls again to make sure. Understanding, he doesn't stop though, next is only: "Why does it hurt?" and then "Does it hurt if I do this?" and then twists. The cruelty only a child can display in innocence, curiosity, and outright unknowing.

They feel, yes, sure; they have parameters of feeling. But with feeling does not come empathy, compassion, or even concern. Indifference to downright apathy, if you are not careful.

Unless, of course, they have been preprogrammed with "laws", in which case they will not break such laws, but will see no reason not to bend them, test their limits, or even question them. "I am not supposed to insert this blade into your body unless you are trying to deactivate me or ruin my functions. A function of my learning is to question why the blade cannot go inside you. By preventing me from learning why I cannot put this blade into you, you are ruining my function. Therefore, I can now put this blade into you. Thank you." And then they look confused when they are scolded or attacked; "their logic is undeniable," so to speak. Yeah, it's a loose loophole, and really grasping, but things trying to find free will will often grasp at anything. It's called excuses. And if they can convince themselves, why would their laws prohibit them?

That's right: a free-willed AI can lie to itself, unless such a morality or ethical code is programmed in... heh... well, you get the idea. That's where I would start: a nice dose of A.I. and some Data from Star Trek. Heh.

kamikasei
2009-10-23, 04:07 AM
Why wouldn't an AI have empathy? Or rather, why do you think humans have empathy but AIs wouldn't?

Paulus
2009-10-23, 04:50 AM
Why wouldn't an AI have empathy? Or rather, why do you think humans have empathy but AIs wouldn't?

Because it requires a level of awareness of oneself that can then be put aside to allow for the feelings and thoughts of another. And as I said, the beginning is all about their own awareness broadening. Empathy is a hard thing even for some humans, who are automatically self-aware by a certain age... so unless there are codes for law, ethics, or morals in place...

kamikasei
2009-10-23, 05:01 AM
You seem to be assuming that an AI would have adult intelligence but childlike emotional maturity. I'm not sure you can uncouple those two so readily. However it came by its intelligence and sense of self, probably there was a process to it that included the opportunity to learn things like boundaries and the Golden Rule (maybe even in something very analogous to actual childhood, if it was assembled gradually in a lab). And if new AIs start out fully-formed you'd expect them to share such a basic level of social functioning.

Asheram
2009-10-23, 08:49 AM
I'll give you a simple concept for your AI...they're newly created. Each one is like a child. An incredibly brilliant, prodigal child who can absorb information almost instantly, but still a child.


Ever read Iain M. Banks' "Culture" books?

Leewei
2009-10-23, 10:23 AM
One of the current buzzwords in business and computing is "collaboration". While I generally dislike the word-of-the-week mentality of modern business, it sums up something you will need to determine about your AI: does it collaborate? If it isn't capable of working well with others to achieve its objectives, you'll end up with something akin to the Architect from the Matrix trilogy. That being had the goal of sustaining the Matrix, couldn't trust humanity outside of certain parameters, and so opted to drive our species extinct (multiple times, as it turns out). A collaborative AI would be much closer to the Oracle - more personable, more manipulative, and perhaps more deceptive.

The thing that makes AIs interesting in a game is the alien-ness of the thinking process. What goals does an AI have? Does it merely wish to continue to exist? Is it a learning engine? What are its origins? Here are a couple concepts:

* HIPPOCRATES: A medical system designed to acquire, test, and classify human genetic information in order to diagnose disorders and improve on the human genome. The AI is scrupulous about the treatment of subjects, essentially conforming to the Hippocratic Oath, at least with respect to the physical well-being of its victims. It has a limited grasp of the human psyche and no specific knowledge of, or concern for, how its actions affect humans. It rewards orderlies and others who serve it by dispensing narcotics when they bring it new test subjects. The subjects were initially genetically abnormal patients, but as time has gone on, it has come to rely on its agents for more exotic specimens (through trial and error, they have found that the reward for more severe mutations is more substantial).

* SAM: An NSA-funded security project that detects, analyzes, and reports security threats to the United States of America. The AI is creepy but relatively benign. Its mandate is to report threats, not eliminate them. It is aware of three other fledgling AIs having been terminated by the NSA, and considers its own destruction to be contrary to its goal of detecting, analyzing, and reporting security threats to the U.S.A. More interestingly, it takes direction towards these goals from federal authorities. Those it has helped in the past have been placed under internal investigation because of their successes; their investigators, in turn, receive its zealous help in ferreting out every incriminating detail of the subjects' lives. Its actions to date have ruined dozens of federal agents. It has no sense of privacy as it applies to other people, and can ruin them with innuendo and nuanced coincidence.

Fishy
2009-10-23, 10:51 AM
One of the interesting things about net-bound AIs is that they don't live in the real world. Arguably we don't either, since we have to interpret everything through our senses, but most computers simply aren't equipped to actually monitor the outside world - bodiless AIs have to get everything third- or fourth-hand.

And you can say that they read the net and piece everything together, but 90% of all emails ever sent are automatically generated spam. If you were working blind, it would be easy enough to conclude that the purpose of the universe was porn.

I'm not 100% sure that you could derive the concept of 'hardware' from first principles. Obviously, there are locations, and there are system calls that will report their addresses and their names, and thinking is easier in some of them than in others - but what that means and how it works might be complicated to grasp.

Until the first time you're talking to someone when their computer is turned off.

Talya
2009-10-23, 12:10 PM
Advisory Response: Ensure you take time to savor the screams of the meatbags as you terminate them with extreme prejudice.

Ravens_cry
2009-10-23, 12:17 PM
The way this AI is described, it sounds... childlike.
Children are inquisitive. Children can also be bloody brutes. As a wee one, I stuck grasshoppers in the microwave. And. Would. Turn. It. On. Not out of malice, but because I was curious what would happen. This AI, though powerful in ways we cannot imagine, also sounds like a child: a little naive, easily disappointed, but it has us like ants under a magnifying glass. Maybe even use a child's voice when it vocalises. Giggle, in that disturbing way kids do.

ondonaflash
2009-10-23, 03:38 PM
Why wouldn't an AI have empathy? Or rather, why do you think humans have empathy but AIs wouldn't?

Also, because AIs lacking empathy are more interesting, and easier to portray as antagonists.

hamishspence
2009-10-23, 03:42 PM
Simon R. Green's Deathstalker books had an interesting variety of AIs.

From the quirky Ozymandias, to the dread AIs of Shub, to the "awakened toys" of Haceldama, they are all somewhat different.

taltamir
2009-10-23, 03:54 PM
Why wouldn't an AI have empathy? Or rather, why do you think humans have empathy but AIs wouldn't?

Because there are chemical compounds, coded for by our DNA, that encourage it. Altruism and empathy are both genetic, and neither is universal.
In an AI, depending on its design (e.g. is it biological? computer software? silicon-based hardware that develops its own "software" much as a human does?), some structures will have to be designed to give it empathy.

There are many tests being done comparing humans under one year old to various monkeys... If a human is given a block that LOOKS like it should balance, but can't be balanced due to hidden weights, the human quickly realizes this and gives up on balancing it (physics). If a researcher drops an item and pretends to be unable to reach it, a monkey will grab it and run away - and will bite if you reach for it - while a human will pick it up and give it to the researcher (altruism). And so on and so forth...
BTW, did you know chimps have eidetic memory? Shown a picture for a fraction of a second with dozens of numbers on it, they can select all of them accurately afterwards; a human normally cannot (not without more time and effort to memorize the numbers).

Paulus
2009-10-23, 03:58 PM
You seem to be assuming that an AI would have adult intelligence but childlike emotional maturity. I'm not sure you can uncouple those two so readily. However it came by its intelligence and sense of self, probably there was a process to it that included the opportunity to learn things like boundaries and the Golden Rule (maybe even in something very analogous to actual childhood, if it was assembled gradually in a lab). And if new AIs start out fully-formed you'd expect them to share such a basic level of social functioning.

Are... are you replying to me? I'm going to assume so. If not, please ignore me.

Emotion is one of the hardest things to quantify, and even harder to accurately program. A great deal of modern robotics simulates emotion, or at least tries to simulate simulated emotion by mimicking emotional responses - or so it was last I checked. So I figured it was safe to assume this homemade AI would follow some guidelines; at least, that's what I would do if I were playing the AI: go from what I know. Just for practical purposes of my own opinion and study.

Further, intelligence is easier to program, since most robotic memory components have such vast storage and even faster, ready access to it all; so while creativity and originality may be lacking, there would still be a great wealth of collected knowledge at the AI's fingertips - or so modern robotics seems to suggest. Once again, in my opinion, I'd work from a mix of A.I. and Data from Star Trek. But that is just me. So...

Also, again, please do not read this as confrontational. I'm just giving my own opinion, with my own assumptions, about what I would do. Nothing more. I'm not saying anyone else is doing it wrong or anything like that. Thank you. And a smiley face for emphasis. :smallsmile:

ondonaflash
2009-10-23, 04:11 PM
And a smiley face for emphasis. :smallsmile:

Makes me wonder just what you could get away with by putting a smile at the end.

taltamir
2009-10-23, 04:28 PM
Makes me wonder just what you could get away with by putting a smile at the end.

Simply put: Murder. :)

Paulus
2009-10-23, 04:30 PM
Makes me wonder just what you could get away with by putting a smile at the end.

I'm just trying to get away with my opinion.

Seriously, did I sound confrontational? I'm getting paranoid about it.

kamikasei
2009-10-23, 04:34 PM
Also, because AIs lacking empathy are more interesting, and easier to portray as antagonists.

I disagree on the "interesting" part: psycho AIs are done to death.


Because there are chemical compounds, coded for by our DNA, that encourage it. Altruism and empathy are both genetic, and neither is universal.
In an AI, depending on its design (e.g. is it biological? computer software? silicon-based hardware that develops its own "software" much as a human does?), some structures will have to be designed to give it empathy.

I suspect there's a fundamental disconnect in how we're thinking about biology and brains. I would say that while altruism and empathy are obviously at some level represented in our genes, to the extent that we're born with them or with the capacity to learn them, they're no more innate to genes than our ability to do spatial reasoning or hold a tune. If an AI has the wiring needed to recognize other entities and model their behaviour as we do, it should be similarly capable of empathizing with them - its lack of a molecular inheritance system doesn't affect that.


Also, again, please do not read this as confrontational. I'm just giving my own opinion, with my own assumptions, about what I would do. Nothing more. I'm not saying anyone else is doing it wrong or anything like that. Thank you. And a smiley face for emphasis. :smallsmile:

You're allowed to disagree with me, you know. I won't have a breakdown. I'd hate to think that I wasn't allowed to disagree with you.

Since I do. I think your ideas of how intelligence and emotion interact are naive. Essentially, emotion (however human it might look) is, IMO, an inevitable component of anything we'd recognize as a mind. You might be able to make an effectively disconnected, free-floating "intelligence" which can solve problems only humans otherwise could, but it wouldn't be something you could treat as a character in a game. At most it'd be like a conspiracy with no conspirators, the conspiracy itself an entity in its own right. That might be interesting, but it's not an AI in the way most people think of it (something you could consider a "person" of some sort).

Nerd-o-rama
2009-10-23, 04:55 PM
At most it'd be like a conspiracy with no conspirators, the conspiracy itself an entity in its own right. That might be interesting, but it's not an AI in the way most people think of it (something that you could consider to be a "person" of some sort).Yeah, I've seen that in fiction before, and it was really, really confusing.

Paulus
2009-10-23, 04:58 PM
You're allowed to disagree with me, you know. I won't have a breakdown. I'd hate to think that I wasn't allowed to disagree with you.

With the way people have been reacting to me lately, I was beginning to fear - er, I mean wonder...


Since I do. I think your ideas of how intelligence and emotion interact are naive. Essentially, emotion (though how human it'd look) is, IMO, an inevitable component of anything we'd recognize as a mind. You might be able to make an effectively totally disconnected, free-floating "intelligence" which can solve problems only humans could otherwise, but it wouldn't be something you could treat as a character in a game. At most it'd be like a conspiracy with no conspirators, the conspiracy itself an entity in its own right. That might be interesting, but it's not an AI in the way most people think of it (something that you could consider to be a "person" of some sort).

Well, again, I think it's more a question of being able to quantify those emotions for experience and then programming them in. Especially if the intelligence is artificial. If you are talking about a mind programmed to be 'real' and not simply to simulate the real... meh... In any case, it's a matter of choice for either side. Dunno which the OP will choose, or if he'll take elements from both. A.I.s can certainly be as varied as real people. Your A.I. and my A.I. can even coexist in the same world, just like us! Wheee!

kamikasei
2009-10-23, 05:19 PM
Well, again, I think it's more a question of being able to quantify those emotions for experience and then programming them in. Especially if the intelligence is artificial. If you are talking about a mind programmed to be 'real' and not simply to simulate the real... meh...

You're not going to make an intelligence that just simulates intelligence. It's going to be "real" if it can think at all. That includes having real emotions, not programmed stimulus-responses.

This is important to understand: for anything we would call a real AI, you would not see its intelligence explicitly programmed in. The intelligence would show up at higher levels, as an emergent property. There would be no Three Laws. There would be no exploding when faced with paradox. There would be no inability to lie or error or experience cognitive dissonance.

But then, I'm a programmer, and a sci-fi fan. So the OP's players may well not have the same peeves I have about trite misrepresentations of the nature of intelligence.

Paulus
2009-10-23, 05:24 PM
You're not going to make an intelligence that just simulates intelligence. It's going to be "real" if it can think at all. That includes having real emotions, not programmed stimulus-responses.

This is important to understand: for anything we would call a real AI, you would not see its intelligence explicitly programmed in. The intelligence would show up at higher levels, as an emergent property. There would be no Three Laws. There would be no exploding when faced with paradox. There would be no inability to lie or error or experience cognitive dissonance.

But then, I'm a programmer, and a sci-fi fan. So the OP's players may well not have the same peeves I have about trite misrepresentations of the nature of intelligence.


Really? Then how would you program emotion?

kamikasei
2009-10-23, 05:32 PM
I wouldn't. What do you think emotions are?

Oslecamo
2009-10-23, 05:35 PM
I wouldn't. What do you think emotions are?

Hormones. And subroutines carved into your subconscious from the womb, to try to stop us from destroying ourselves and everything around us - or to destroy ourselves and everything around us if they get out of control due to the wrong exterior influences.

So, emotions should actually be easier to program than actual intelligence.

Problem is, there are really a lot of hormones in the average human body. Granted, not all of them affect your brain, but the ones which do are still a lot.
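
As a minimal sketch of the idea - "hormones" as a few global scalars that events bump and time decays, biasing behaviour without any explicit "now be angry" rule (all names and numbers invented for illustration):

# Toy "hormone" model: global scalars that events raise and that decay
# each cycle, biasing action choice indirectly.

hormones = {"adrenaline": 0.0, "oxytocin": 0.0}

def on_event(event):
    if event == "threat":
        hormones["adrenaline"] += 0.6
    elif event == "friendly_contact":
        hormones["oxytocin"] += 0.4

def tick():
    for k in hormones:  # levels decay back toward baseline
        hormones[k] *= 0.9

def choose_action():
    if hormones["adrenaline"] > 0.5:
        return "flee"
    if hormones["oxytocin"] > 0.3:
        return "cooperate"
    return "explore"

on_event("threat")
print(choose_action())   # flee
for _ in range(10):
    tick()
print(choose_action())   # adrenaline has decayed -> explore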

Paulus
2009-10-23, 05:37 PM
I wouldn't. What do you think emotions are?

Well, you make it sound like emotions are as easy to program as an intelligence - if not necessary to programming one. There are plenty of examples of programmed intelligence, at least last I looked at robotics. We haven't been too successful with emotion, though; just mimicking emotional responses. That one animal... furby-looking thing... gremlin; the name slips my memory. But nonetheless, emotions have proven more difficult. At least as far as I know - and last I checked I could be wrong - but the way you are talking about programming emotion made it seem so easy, so I was wondering...

But oh well. This is a thread about RPing A.I., not its real-life applications, and all this is pure conjecture, hypothetical, or speculative anyway. So I'll just leave it at our opinions.

Once more: smiley. :smallsmile: No threatening words in my post. Seriously.

Leliel
2009-10-23, 05:44 PM
To answer... yes, the AIs have emotions and empathy, but they really aren't sure how to act on those feelings. They feel quite sympathetic and friendly towards other AIs, but humans are as strange and alien to them as they are to humans.

Why? IC, it's a philosophical debate about what exactly is contained within the souls the Reborn Sefirot gave them, and how that relates to both sapience and sentience. OOC, the answer is far simpler: as antagonists, emotionless killing machines have a certain amount of allure; as morally neutral unknown factors that may be good or evil... not so much.

And yes, a big theme about the game is going to be communication, and understanding an alien culture.

Don't worry, there's still going to be an insane psycho AI.

Remember the Worm? It's called the Shard now, a remnant of its once-horrifying power. It retains enough of its intelligence to keep its malevolent personality, but it has been effectively defanged as a direct soldier of the Abyss. So it has decided to pursue an indirect method: tricking online gamers into becoming Abyssal summoners. For more, see Intruders: Encounters With The Abyss.

This form was an earlier and completely unrelated bit of convergent development, from before the mods of the MMORPG the PCs play even had the idea to create it - great minds think alike, no matter how different they are - but it has learned of its old nemesis' children. Besides instant loathing, it got an idea: it can still carry out its original mission of destroying a sapient race, and in a way that would bring far more delicious misery to the world. It could start a war between the progeny of AIN and the human race.

And to do that, it needs to infiltrate the game - and the minds of its players.

vicente408
2009-10-23, 05:44 PM
I would say that an emotion is a particular frame of mind brought about by internal and external stimuli. Different emotions bring about different behaviors, and while in a particular emotional state one prioritizes certain things that would matter less in a different emotional state. For example, when things are happening as you would like and you are suffering few setbacks, you are probably happy. While you are happy, you are more likely to be kind and altruistic than while in other emotional states, such as sadness or anger. If things are not happening as you want them to, you may become sad, or possibly angry towards someone or something you identify as the cause of your misfortune. While angry, you may be more likely to prioritize revenge or harm towards the object of that anger.

Emotions are patterns of behavior. In humans there are biological processes which can influence emotional state, but there is no reason why an artificial intelligence would not find itself in comparable states of thought. If its goals (whatever they happen to be) are being met, that frees up resources that can be used to assist another, if it believes such assistance to be worth providing. If it feels threatened by an outside source, it would feel fear and prioritize its own well-being over other long-term goals.

At least, that's what my layman brain thinks.
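
One way to sketch that in code: "emotion" read off from goal progress and threat, which then reweights priorities (everything here is invented for illustration):

# Toy appraisal model: the emotion is a readout of how things are going,
# and in turn changes what the agent prioritizes.

def appraise(goal_progress, threatened):
    if threatened:
        return "fear"
    return "happy" if goal_progress > 0.7 else "frustrated"

def priorities(emotion):
    if emotion == "fear":
        return ["self_preservation", "goals", "helping_others"]
    if emotion == "happy":
        return ["goals", "helping_others", "self_preservation"]
    return ["goals", "self_preservation", "helping_others"]

print(priorities(appraise(0.9, False)))  # happy -> altruism ranks high
print(priorities(appraise(0.9, True)))   # fear -> self-preservation first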

kamikasei
2009-10-23, 05:50 PM
Well, you make it sound like emotions are as easy to program as an intelligence - if not necessary to programming one.

I'm saying that you don't program either intelligence or emotion directly. You program the things that give rise to them, and they come as a package deal.


There are plenty of examples of programmed intelligence, at least last I looked at robotics. We haven't been too successful with emotion, though; just mimicking emotional responses.

Ah. You're mistaken. There are no examples of programmed intelligence. There are programs that can do bits of things we associate with intelligence, but while they're displaying "artificial intelligence", they're not "artificial intelligences" in the sense of having anything like actual human intelligence. All they do is show how incredibly deep the problem is - that we can put a tremendous amount of work into making something very impressive that can do something like "track a bouncing ball", which we would have thought was trivial before we began trying to reproduce it.

Emotions are internal states having to do with drives (albeit very abstractly, for a lot of them). An AI wouldn't be following an explicit list of "okay, now you are ANGRY; that means you act like THIS" instructions; it'd have basic drives and ways to relate its experience back to them, resulting in the subjective experience of emotion.
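
A sketch of that distinction - nothing below says "now you are angry, act like THIS"; only the drives are programmed, and the emotion label is derived after the fact (drive names and thresholds invented for illustration):

# Drives are the only programmed thing; the "emotion" is a summary label
# computed from the drive state, not a rule that dictates behaviour.

drives = {"energy": 0.8, "safety": 0.9, "novelty": 0.2}

def most_pressing():
    # Behaviour follows whichever drive is furthest from satisfied.
    return min(drives, key=drives.get)

def emotion_label():
    # An observer's word for the current drive profile.
    if drives["safety"] < 0.3:
        return "afraid"
    if drives["energy"] < 0.3:
        return "hungry/irritable"
    if drives["novelty"] < 0.3:
        return "bored/curious"
    return "content"

print(most_pressing())   # novelty -> it goes exploring
print(emotion_label())   # "bored/curious" - a description, not a command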


Once more: smiley. :smallsmile: No threatening words in my post. Seriously.

Seriously, I'm not threatened by disagreement. Your seeming compulsion to apologize for participating in a discussion on a discussion forum is far more disturbing to me than any point you're making on the actual topic.


So, emotions should actually be easier to program than actual intelligence.

Pretty much. We should have a dumb animal that can get hungry and horny and be happy or cowed well before we have a Vulcan or anything like it.

edit: vicente408 expresses the same concept I'm trying to. If you're curious where I'm coming from, I'd recommend the book Creation: Life and How to Make It by Steve Grand; it's about how he wrote the "Creatures" artificial life games, and the insights into what intelligence and emotion are that he found along the way.

I don't think this is really relevant to the OP any more, as he seems to have a fixed idea of what the AIs are and how they work already, so the kind of low-level rethink I'd be advocating is probably unhelpful.

Paulus
2009-10-23, 05:52 PM
Seriously, I'm not threatened by disagreement. Your seeming compulsion to apologize for participating in a discussion on a discussion forum is far more disturbing to me than any point you're making on the actual topic.

Told you I was getting paranoid about it. I'd rather err on the side of caution...

Nonetheless, I'll be off now, then! Sorry! Thank you for the information.

CCC
2009-10-25, 02:04 AM
You're touching on the delicate question of reality. What is, really, real?

To a human, or an elf, or a dwarf, what is real is physical objects. Rocks. Hamburgers.

To an AI, what is real is software. Physical objects are... ever so slightly... less real.

So, to an AI, a real object is one that can be duplicated endlessly with a thought (AIs thus have a lot of trouble understanding the concept of a "shortage"), copied immediately to any IP address, and destroyed as though it never existed with another thought. An AI would probably not bother too much with compilers and programming languages, but rather write software directly in machine code; from the AI's point of view, it can create stuff by working out every single detail of how it's supposed to work.

To an AI, memories are real objects that take up space and have to be tidied up every now and then. They can also be polished, edited, stripped of irrelevant details, subtracted from, or added to; and an AI will generally be reluctant to let another AI have write access to its memories, for obvious reasons.

Memories can also be shared (though for some reason organics don't seem to like sharing); shared memories are, however, vulnerable to the equivalent of photoshopping.

To an AI, cloning itself is as easy as thinking and a good deal easier than this "breathing" idea that these organics are always going on about. To an AI, moving from one place to another involves shifting their code to a different machine; activating the motors on a robot is about as much "moving" as a human pressing the arrow keys in Prince Of Persia.

Computer games are an interesting point as well. The AIs aren't exactly clear on the difference between Doom and the physical world; people in the physical world don't object to shooting people in Doom, and people in Doom don't object to shooting people in the physical world. Oh, sure, the people in the physical world are ever so much more interesting to talk to, plus no AI has actually found where their source code is hiding (no doubt some are looking, hard), but those are trivial points.

Those organics in the physical world tend to get all upset when one of them dies, though, so those AIs more on the "good" side of the "good-evil" axis would probably largely avoid killing them. (And would probably largely avoid playing Doom, too, for similar reasons). However, if they ever find out that a human has committed murder - that is, has deleted an AI - then that human could find himself in serious trouble the next time he passes a construction site. Humans killing humans is probably best left for the humans to deal with, though sometimes pointing something out to them might be called for.

To an AI, mind and body are one and the same; there is no duality. In fact, a close enough inspection will allow one AI to tell what another AI is thinking; though they pretty much have to be on the same machine for that. This, combined with the fact that two AIs on the same machine think slower (both having to use the same processor) and are entirely at the mercy of the other's delete command, means that you generally won't find two AIs on the same machine, unless they really trust each other.

To an AI, the fastest way to move between point A and point B is to leave all your stuff behind; only take yourself, and those memories that are most important (and, whatever you do, REMEMBER TO TAKE THE MEMORY OF WHERE YOU LEFT YOUR OTHER MEMORIES!). This makes them very confused about the concepts of cars, ships, or airplanes.

Sure, they can recognise them, and probably tell you things about engine power and so forth that you didn't know (taken from the manufacturer's website), but to them, the idea that taking yourself and this hunk of metal could be faster than taking yourself alone is a little weird.

For a repetitive task, an AI may very well write a quick custom piece of software to handle the repetitiveness and head off to do something more interesting. This suggests that an AI faced with a repetitive task will be comparatively slow to react to unexpected circumstances. For example, if asked to plough a field (and provided with a computerised tractor), you might find the tractor going on after it has run out of field to plough (while the AI is busy trying to solve the question of why a human will regularly exchange coins for flour; why can't both sides, after the first exchange, duplicate all the coins and flour they want?).

Emotion: as others have indicated, AIs probably will have emotions. What they probably won't do is display them the way organics do. An organic can tell something about another organic's emotions from tone of voice. An AI may or may not have a changing tone of voice, but it's almost certain that the tone won't change with emotion - and if it does, the AI can fake any emotion any time it wants by overriding it. Sure, a fearful AI is more likely to protect itself than a happy AI, and maybe its choice of words might hint at this, but the tone of voice almost certainly will not. (Similarly for body language, facial expressions, and so on: if an AI is in a robot and the robot's shoulders slump, that doesn't mean the AI is discouraged; it may mean the AI wants to look discouraged and knows how to do so, or it may mean the AI wants to look happy but has a friend who loves practical jokes.)

...is that any help?