
View Full Version : Do Androids in D&D Have Electric Souls?



Leliel
2007-08-27, 05:21 PM
Well, any self-aware AI really, but still:

Do Artificial Intelligences made by technology, not ones made by magic (like warforged), have souls, or the capacity to gain them, in your games?

I was wondering because I was on a cyberpunk binge, and this question popped into my head. I just let it take form, and boom, I'm posting on this thread asking for this information (and possible plot hooks for a game with magitechnology).

This, of course, assumes that both the afterlife and information technology, and most likely magic, exist in your campaign. So without further ado:

LET THE THREAD BEGIN!

Damionte
2007-08-27, 05:26 PM
Outside of what you just mentioned, D&D doesn't have any technologically based AI. It's a fantasy setting which doesn't include such things. The few examples of such a thing are magic based. Golems and constructs are androids, but powered by magic.

D&D doesn't have rules for such things not powered by magic. Then again, since technology and magic are essentially the same thing, you could take whatever D&D has to say on the subject and just say it's technology instead of magic. That's all we're doing when we make something that's classically based on technology and claim it's powered by magic.

Zincorium
2007-08-27, 05:29 PM
Considering that we don't even have a workable definition of 'soul' that most people can agree on, this conversation doesn't look to have a common basis.


In terms of the afterlife, I have a hard time imagining a modern world where the existence of any particular person in the afterlife is verifiable, i.e. if a robot did go to heaven/gehenna/wherever those socks from the dryer go, what method are you using to check if they're there afterwards?

And if you could, would it matter in the absence of resurrection magic and planar travel? What about a copy of an AI that was recompiled after the original was wiped from memory? Would it be the same one?

Magically, the problem gets worse. Spells are incredibly vague as to whether they require a 'soul' to function, and if a 'mind' (as opposed to a brain) is all that is required AIs would most certainly be at risk.

Beleriphon
2007-08-27, 05:29 PM
Given that warforged are as close to androids as D&D gets, I'd say yes. Golems function in many ways like high-functioning factory robots. They have a limited amount of self-awareness, that is to say they can tell the difference between themselves and a group of adventurers, but they aren't really sapient. Warforged are most assuredly sapient creatures, but that doesn't by necessity give them souls.

Jack Mann
2007-08-27, 05:30 PM
I suspect Leliel was including games other than D&D, such as d20 Future.

The problem is that most games I've played that included AIs didn't include any rules for souls. Therefore, any ruling wouldn't be particularly useful. There would be no way for players to know one way or another.

Leliel
2007-08-27, 05:35 PM
Um, did you read the rest of the OP? You know, the part where I specifically said this assumes both information technology and the afterlife exist in your games?:smallfurious:

Sorry about sounding snippy. It's just that you didn't read the whole post, and it really bugs me when people ignore some parts of a conversation and read others. Friends?:smallwink:

Edit: This is addressed to Daimonite. And I am once again sorry for losing my temper.:smallfrown:

Leliel
2007-08-27, 05:51 PM
Please don't die.

Bump!!!!1!!!!

Beleriphon
2007-08-27, 06:00 PM
Um, did you read the rest of the OP? You know, the part where I specifically said this assumes both information technology and the afterlife exist in your games?:smallfurious:

Sorry about sounding snippy. It's just that you didn't read the whole post, and it really bugs me when people ignore some parts of a conversation and read others. Friends?:smallwink:

Edit: This is also addressed to the 50 or so ninjas who came before this post and made the same mistake Daimoite did. And I am once again sorry for losing my temper.:smallfrown:


Okay, here's my answer using the concept of a technologically powered AI. It has no soul; in fact, it's not even really intelligent. An AI is simply a program with such complex routines that we can't tell the difference between it and a true intelligence. Since an AI isn't really intelligent, and is nothing more than a stack of code, it can't have a soul.

To expand on that idea, an AI isn't self-aware either; it simply responds in such a way as to seem self-aware. The Chinese Room (http://en.wikipedia.org/wiki/Chinese_room) argument supports this surprisingly well. Interestingly, it doesn't apply to something like warforged, although it does seem to apply to a character such as Data from ST:TNG.

jjpickar
2007-08-27, 06:01 PM
Though I have never played such a game, I can imagine one, and therefore feel qualified to opine.

Fancy words aside, I think I would let androids have souls. It just speaks to the flavor of D&D. Science in the real world seems (this is purely what I have observed, I could be wrong) fairly anti-supernatural, and would probably say androids don't have souls for the same reason people don't have souls: scientists haven't been able, thus far, to observe souls.

In D&D, however, souls are very observable. Ghosts, wraiths, shadows, spectres, etc. can be seen, heard, and (painfully) touched. Even animals have some manifestation of souls (Libris Mortis, re: the Ghost Brute), so of course intelligent constructs (the D&D name for robots:smallwink: ) would have souls.

Solo
2007-08-27, 06:15 PM
Computers can't have souls and neither can androids. That's my verdict.

Damionte
2007-08-27, 06:18 PM
Um, did you read the rest of the OP? You know, the part where I specifically said this assumes both information technology and the afterlife exist in your games?:smallfurious:

Sorry about sounding snippy. It's just that you didn't read the whole post, and it really bugs me when people ignore some parts of a conversation and read others. Friends?:smallwink:

Edit: This is addressed to Daimonite. And I am once again sorry for losing my temper.:smallfrown:

I read it. It's just that you're asking a question that doesn't have an answer.
You labeled the thread, and are asking the question: "Does D&D Have...?"

Well, "D&D" doesn't have what you're looking for. You're then asking if people's house-ruled games have this. Well, everyone is going to have a different answer. It's either yes or no, but none of them are right or wrong, since in order to even have an answer to this you have to have made one up; as I said in my first post, this concept is not covered anywhere in the rules for D&D.

The reason you have to bump your own thread (which, by the way, is against these forums' rules) is because you're asking a silly question with no real answer! You may as well be asking us why the sky is blue in D&D.

PS: You butchered the spelling of my name. That's ok though most people do.

bosssmiley
2007-08-27, 06:33 PM
OP fails for assuming Cartesian mind-body dualism. :smalltongue:

Just been reading up on this very question in "Engines of Creation", actually.

Does an AI have a soul? Drexler's answer is "Ask it". If an AI claims to have a soul and can make a reasoned argument that it does, why shouldn't you believe it? You - and other humans - claim to have souls, but offer it no manifest proof. Why shouldn't you extend it the same courtesy? What's good for the goose is good for the gander.

Better yet - given that an AI is merely a fantastically complex computer program from which intelligence has manifested as an emergent property (just like we, with our minds in our squishy organic brains, are emergent properties of a physical system) - prove it doesn't. :smallwink:

To paraphrase Dorfl the Golem to the priests in "Feet of Clay": "I am but clay, as are we all. Grind me down to the last particle of dust and you will not find my soul. I am perfectly willing to undergo this treatment to test this, so long as one of you agrees to as well. Any takers?"

Jack Mann
2007-08-27, 06:59 PM
Frankly, if I were including AIs in a game, I would throw out any mechanics involving souls. To answer the question cheapens the experience. That question could be the entire basis for a character.

Besides, why should androids have it any easier than the rest of us?

psychoticbarber
2007-08-27, 07:58 PM
Frankly, if I were including AIs in a game, I would throw out any mechanics involving souls. To answer the question cheapens the experience. That question could be the entire basis for a character.

Besides, why should androids have it any easier than the rest of us?

I think this is the best point made thus far. I actually did the same thing with the origins of the races in my fantasy world. They don't know. They speculate, and everybody has a different answer.

Xuincherguixe
2007-08-27, 08:34 PM
Boy, putting things in a fantasy setting certainly makes things a lot easier, huh? You can just arbitrarily say yes or no. Though it's really best to leave it at "it's uncertain".

But there's something absolutely hilarious about a Devil and an Angel showing up in front of a character trying to get them to do something ("*beep!* come on! Ignore that model XRT-23. It's notorious for having a slow processor! Go ahead and eat that cookie!" "Statement: My processor speed is not a relevant issue. I recommend that you do not eat that cookie as that would be stealing.")

If such things as souls and the afterlife exist in a world in which there are self-aware machines, I would say one would have to come up with an explanation as to why or why not they have souls.


Of course, a robotic existence, even one in which they do recognize their own existence, would almost certainly be different from a human one. A human afterlife is not likely to be something appropriate for a robot. Perhaps a robot who follows their programming has the promise of an afterlife in which they only need to solve simple problems, whereas one who attempts to go outside his limits is punished by being asked for the last digit of Pi. (More likely, this robot religion is a measure to keep them under control.)

Wraithy
2007-08-27, 08:53 PM
Construct some form of afterlife and certain prerequisites to attain access to it.
You must be able to understand the difference between Marmite and Vegemite (I'm obviously never going to heaven).

Devils_Advocate
2007-08-27, 10:37 PM
Okay, here's my answer using the concept of a technologically powered AI. It has no soul; in fact, it's not even really intelligent. An AI is simply a program with such complex routines that we can't tell the difference between it and a true intelligence. Since an AI isn't really intelligent, and is nothing more than a stack of code, it can't have a soul.

To expand on that idea, an AI isn't self-aware either; it simply responds in such a way as to seem self-aware. The Chinese Room (http://en.wikipedia.org/wiki/Chinese_room) argument supports this surprisingly well. Interestingly, it doesn't apply to something like warforged, although it does seem to apply to a character such as Data from ST:TNG.
And what, exactly, is "intelligence"? What's "real intelligence", and what distinguishes it from fake intelligence? What's "awareness"? Indeed, just what is a "soul"? And do you actually have any good reason to be unwilling to use these words to describe an artificial intelligence, but willing to use them to describe human beings?

It is absurd to say that an AI is not truly intelligent because it is nothing more than the sum of non-intelligent parts or processes. One could say exactly the same thing about human beings! How could it be otherwise? You can keep attributing the "consciousness" of one system to the consciousness of some subsystem -- i.e., I'm conscious because my brain is conscious, my brain is conscious because some particular lobe of it is conscious, and so on -- but if you keep on breaking things down, eventually you're going to wind up with molecules and atoms which one presumably admits have no "awareness" of their own. And yet, somehow, I can type all of this crap. How the heck does that work?

The concept of intelligence is a high-level abstraction that describes what a system as a whole does. If a Theory of Everything makes no explicit mention of clouds or rosebushes, that doesn't mean that it fails to account for the existence or behavior of these things. It just means that the existence and behavior of a rosebush is the collective existence and behavior of a whole bunch of superstrings or whatever. That we understand and perceive it on a different level than that should not suggest that there is some sort of rosebushiness which is independent of the properties and behavior of its constituent parts.

To borrow from Daniel Dennett (http://www.human-nature.com/articles/dennett.html):


Imagine some vitalist who says to the molecular biologists:

The easy problems of life include those of explaining the following phenomena: reproduction, development, growth, metabolism, self-repair, immunological self-defence . . . These are not all that easy, of course, and it may take another century or so to work out the fine points, but they are easy compared to the really hard problem: life itself. We can imagine something that was capable of reproduction, development, growth, metabolism, self-repair and immunological self-defence, but that wasn't, you know, alive. The residual mystery of life would be untouched by solutions to all the easy problems. In fact, when I read your accounts of life, I am left feeling like the victim of a bait-and-switch.

This imaginary vitalist just doesn't see how the solution to all the easy problems amounts to a solution to the imagined hard problem. Somehow this vitalist has got under the impression that being alive is something over and above all these subsidiary component phenomena. I don't know what we can do about such a person beyond just patiently saying: your exercise in imagination has misfired; you can't imagine what you say you can, and just saying you can doesn't cut any ice. (Dennett, 1991, p. 281–2.)


It's just silly to say that a thing can somehow demonstrate intelligence or awareness that it doesn't actually have. Quite frankly, if I can have a conversation with someone who doesn't exist, then it would seem that existence is a very abstract, metaphysical concept that doesn't necessarily have a whole lot of bearing on what you can do in the real world; and that, as such, maybe I shouldn't hold this hypothetical individual's non-existence against him.

Incidentally? Saying that an intelligence is partially, or indeed wholly, reliant on some supernatural element explains precisely nothing. We're left with exactly the same basic questions about the supernatural part as there are about a material part: Just what is it, how does it fit into the rest of the system, how does it work?

Macrovore
2007-08-27, 10:52 PM
only if they can dream of electric sheep :P

hooray, philip k. ****!

Beleriphon
2007-08-27, 11:06 PM
And what, exactly, is "intelligence"? What's "real intelligence", and what distinguishes it from fake intelligence? What's "awareness"? Indeed, just what is a "soul"? And do you actually have any good reason to be unwilling to use these words to describe an artificial intelligence, but willing to use them to describe human beings?

Because intelligence implies understanding. The Chinese Room uses this supposition as its proof: it effectively explains the difference between a Strong AI (Data from Star Trek, for example) and a Weak AI, the latter being what the Chinese Room describes.


It is absurd to say that an AI is not truly intelligent because it is nothing more than the sum of non-intelligent parts or processes. One could say exactly the same thing about human beings! How could it be otherwise? You can keep attributing the "consciousness" of one system to the consciousness of some subsystem -- i.e., I'm conscious because my brain is conscious, my brain is conscious because some particular lobe of it is conscious, and so on -- but if you keep on breaking things down, eventually you're going to wind up with molecules and atoms which one presumably admits have no "awareness" of their own. And yet, somehow, I can type all of this crap. How the heck does that work?

Quite simply, the Chinese Room argument holds that input and output do not equal understanding. I'll attempt a crude explanation.

Take a computer where you can enter Chinese, the computer analyzes the characters and gives an appropriate response. You ask it to tell you a joke, and it tells you a funny joke.

Now imagine, if you will, that the computer isn't an electronic device; instead, it's a room. Inside the room is a man. The man has a very, very complex and complete set of instructions. Those instructions tell him what Chinese symbols he needs to write down in response to any other set of Chinese symbols that are slipped under the door. The man doesn't speak Chinese; he doesn't understand the responses that he's giving, he's simply processing a set of instructions. The same concept applies to computers and AI. The computer doesn't understand anything; it's just processing a series of instructions like our imaginary friend stuck in the room. Therein lies the crux of the Chinese Room argument: an AI inherently performs syntactic processing, not semantic processing.
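If it helps to see the mechanics, the man-plus-rulebook setup can be sketched as a toy lookup table (a hypothetical illustration of the thought experiment, not anything from the thread; the symbols and replies here are made up):

```python
# Toy "Chinese Room": the rulebook is just a table pairing input symbol
# strings with output symbol strings. The operator matches shapes
# mechanically and never needs to know what any symbol means.
RULEBOOK = {
    "你好": "你好！",      # a greeting and its reply (opaque to the operator)
    "讲个笑话": "哈哈……",  # "tell me a joke" -> a canned response
}

def room_operator(slip: str) -> str:
    """Follow the instructions: find the symbols, copy out the reply."""
    # No fallback understanding exists; unmatched input gets a fixed
    # "I don't follow" symbol, exactly as the rulebook dictates.
    return RULEBOOK.get(slip, "？")
```

From outside, the room appears to "answer" Chinese; inside, only table lookups happen. That gap between appearing to understand and understanding is the distinction Searle is drawing.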

Now, if you hold that only intelligent creatures (at what level is debatable) have souls, it follows that an AI doesn't have a soul.

There is, of course, the fact that the Chinese Room argument doesn't preclude having a true AI, but it would never arise from a purely program-based approach. It would need to have qualities similar to the human brain rather than my computer's hard drive.

TheOOB
2007-08-27, 11:41 PM
This is actually a fairly major plot point of Ghost in the Shell (the manga and the anime), where they often talk about how the line between cyborg and robot blurs. The common difference stated is that cyborgs still have a ghost (soul), while robots, even those with advanced AI equal or superior to a human's, do not. This is commonly called into question when robots (most notably the Tachikomas in Stand Alone Complex) perform completely irrational, often self-destructive actions to help others for no personal gain, something definitely indicative of a ghost.

As for my games, I run with the idea that if it thinks and acts like a human, barring external appearance, it is a human. An AI construct capable of learning and adapting to the point where it can make decisions based on intuition and emotion rather than logic can, in theory, gain a soul.

Damionte
2007-08-28, 12:19 AM
This is actually a fairly major plot point of Ghost in the Shell (the manga and the anime), where they often talk about how ......


WAIT Hold the phone...! That had a plot, and like story and stuff? When I watched those as a kid I just skipped to the nude parts. Hmmm never realised there was a story going on. :p

TheOOB
2007-08-28, 12:28 AM
WAIT Hold the phone...! That had a plot, and like story and stuff? When I watched those as a kid I just skipped to the nude parts. Hmmm never realised there was a story going on. :p

The nude parts were only in the first movie (unless you count the gynoids in Innocence *shudders*), and then only when Motoko used her built-in optical camo, which meant she was invisible half the time.

The manga is another story, it says it's for mature audiences only, and they mean it.

But seriously, GitS has a great, deep story, especially the TV series. It has a very interesting take on cyborgs, robotics, and the cyberpunk genre in general, and it's worth watching. They show the series (Stand Alone Complex 1st and 2nd Gig) on Cartoon Network occasionally.

Dervag
2007-08-28, 03:58 AM
Personally, I would say that any intelligent being has a soul, but that this is a tautology, because I think that a 'soul' is merely the programming that allows a being to be intelligent.

It may be that 'intelligence' programming can only run on biological brains, or it may not; either way, anything that is in fact intelligent will in fact have a soul.


Quite simply, the Chinese Room argument holds that input and output do not equal understanding. I'll attempt a crude explanation.

Take a computer where you can enter Chinese, the computer analyzes the characters and gives an appropriate response. You ask it to tell you a joke, and it tells you a funny joke.

Now imagine, if you will, that the computer isn't an electronic device; instead, it's a room. Inside the room is a man. The man has a very, very complex and complete set of instructions. Those instructions tell him what Chinese symbols he needs to write down in response to any other set of Chinese symbols that are slipped under the door. The man doesn't speak Chinese; he doesn't understand the responses that he's giving, he's simply processing a set of instructions. The same concept applies to computers and AI. The computer doesn't understand anything; it's just processing a series of instructions like our imaginary friend stuck in the room. Therein lies the crux of the Chinese Room argument: an AI inherently performs syntactic processing, not semantic processing.

The catch is that this implies an unreasonably restricted definition of intelligence.

The man trapped in the room giving responses in Chinese does not understand Chinese. The instructions he is using do not understand Chinese, either. But the combination of man and instructions understands Chinese by any reasonable test. It can pass any test you give it.

Likewise, my brain does not understand words in English. It has no ability to perceive sound directly; all it can perceive are the electrical signals coming down a designated pathway from the 'ears.' Likewise, my ears do not understand English; they can't think. All they can do is take sound, convert it into electrical signals, and fire it down the designated pipeline. However, the combination of my brain and ears can understand spoken English.

Any intelligent being is going to be composite on some level, such that if you take it apart into pieces below that level you no longer have an intelligent being. Intelligence is not the property of some specific piece of the being; it is a property of the composite system.

So the contention that there is no comprehension of Chinese in the 'Chinese Room' situation is not correct. We assume that there is no comprehension because we identify with the man and assume that all the 'understanding' going on inside the box must be inside the man's head. The man clearly does not understand Chinese, but then neither does my brain understand English. My brain can only understand English when it is modulated through a nonintelligent device that can convert words in English (packets of sound waves) into something it can actually get a grip on.

Likewise, the man in the Chinese Room can only understand Chinese when it is modulated through a nonintelligent device (a pile of instructions) into some language he does understand. To contend that there is no comprehension of Chinese in the Chinese Room, you must likewise contend that there is no comprehension of English in my own head, a proposition I will strongly disagree with.


There is, of course, the fact that the Chinese Room argument doesn't preclude having a true AI, but it would never arise from a purely program-based approach. It would need to have qualities similar to the human brain rather than my computer's hard drive.

That only works if we assume that there is in fact something about the qualities of the human brain that allows it to have qualitative experiences and comprehension where a digital computer cannot. It is not a priori clear that this is a safe assumption to make.

Beleriphon
2007-08-28, 04:46 AM
Personally, I would say that any intelligent being has a soul, but that this is a tautology, because I think that a 'soul' is merely the programming that allows a being to be intelligent.

It may be that 'intelligence' programming can only run on biological brains, or it may not; either way, anything that is in fact intelligent will in fact have a soul.

Maybe, maybe.


Likewise, my brain does not understand words in English. It has no ability to perceive sound directly; all it can perceive are the electrical signals coming down a designated pathway from the 'ears.' Likewise, my ears do not understand English; they can't think. All they can do is take sound, convert it into electrical signals, and fire it down the designated pipeline. However, the combination of my brain and ears can understand spoken English.

I would posit that your brain does in point of fact understand English; even if you lost your ability to hear, you could still think in English. A computer that loses its input ability loses its ability to function. The language function in the Chinese Room is used as an example, but the idea is that understanding and context are separate from syntactic interpretation. Your ears function as the syntax input, but your brain does all of the comprehension.

At any rate, the Chinese Room is just a thought experiment that goes into quite a bit of detail about how a Weak AI works. That isn't to say that we couldn't tell the difference between a real person and a Weak AI, but Searle suggests that an AI will never actually be intelligent; it will only have the illusion of intelligence.

Irreverent Fool
2007-08-28, 05:59 AM
Better yet - given that an AI is merely a fantastically complex computer program from which intelligence has manifested as an emergent property (just like we, with our minds in our squishy organic brains, are emergent properties of a physical system) - prove it doesn't. :smallwink:


A thing cannot be proven not to exist. The burden of proof lies on the one asserting that the thing does exist.

Indon
2007-08-28, 08:03 AM
The problem is that most games I've played that included AIs didn't include any rules for souls. Therefore, any ruling wouldn't be particularly useful. There would be no way for players to know one way or another.

D&D, however, does have one 'ruling' on souls that I can think of, offhand:

http://www.d20srd.org/srd/conditionSummary.htm#dead

If it can die (and thus be resurrected) it has a soul. And that is dependent on the resident deities and divine magic in the campaign.

Dr. Weasel
2007-08-28, 10:48 AM
Of course they have souls, otherwise they wouldn't be able to go to Automaton Heaven. If they couldn't do that, where would all the animated abaci go?

Wraithy
2007-08-28, 11:01 AM
proof: the clockwork nirvana of mechanus

Leliel
2007-08-28, 01:56 PM
Thanks. I was actually hoping for more philosophical posts like Dervag's (I was trying to mine ideas from other people's opinions, which is why I asked a rhetorical question), but I've gotten enough of them from these posts.

Though if anyone else has ideas or opinions, I would like them to be posted in case other people want them.

Jayabalard
2007-08-28, 02:12 PM
Does a dog have a soul? How about a cockroach?

mostlyharmful
2007-08-28, 02:22 PM
If intelligence and consciousness (and, come to think of it, all other mental processes) are emergent phenomena of a complex system here in the real world, it would seem reasonable to posit the same developmental process for a "supernatural" mental process such as a soul. If such is the case, then the answer to whether or not machines have souls is "not at first; they grow them in the course of their lives, just like us". For examples, take the already mentioned Ghost in the Shell or Skynet, HAL from 2001: A Space Odyssey, or the excellent trial sequence of Data in Next Gen, wherein he is asked to prove his own self-awareness and right to a continued existence.

squishycube
2007-08-28, 02:38 PM
Oh yeah, we have a believer of Searle in the house! Let's get this party started.
To be clear: I am thoroughly with Devils_Advocate on this.

I dissected Searle's paper "The Chinese Room", trying to make sense of it. He seems to think that his objections against thinking machines (he is not talking about software, but it should apply equally to thinking programs) don't apply to thinking *squishy* machines. I tried very hard to find something in the paper, or to think of something myself, that would allow Searle to make this distinction (between flesh and silicon machines, that is).
There is pretty much only one thing that would qualify. Searle thinks that there is something different about human brains, something untouchable and unquantifiable: _understanding_. This understanding is innate to humans, according to Searle. Searle does not explain why humans have this understanding, or even what he thinks this understanding means. I am forced to dismiss this distinction for lack of proof.
I must therefore conclude that whatever objection Searle has to thinking machines must apply to humans as well.

I believe humans _can_ think. (I will not go into the problems of Other Minds here. It will have to suffice to say that at least I am sure that _I_ can think.)
Because I don't know any way to make a good distinction between a thinking machine and a thinking human, I must conclude that a thinking machine is truly intelligent, at least insofar as humans are.

Douglas Hofstadter and Daniel C. Dennett compiled a very interesting book where they also feature Searle's Chinese Room, called "The Mind's I". Devils_Advocate has already touched upon the critique the duo has on Searle's paper. I don't think I can add anything that is short enough to post here. If this topic interests you, read the book.

Finally I will also answer the OP's question: If an afterlife for humans is assumed, there is one for thinking machines. They will need something that might qualify as a 'soul' to go to that afterlife.

bosssmiley
2007-08-29, 05:43 PM
A thing cannot be proven not to exist.

Advantage: Warforged in this situation then. :smallwink:


The burden of proof lies on the one asserting that the thing does exist.

Exactly. As a Warforged I would say to a human: "Prove to my satisfaction that *you* have a soul, then we'll talk about whether *I* have one. Any challenge you take, I'll match. Who knows, we might both learn something..."

Matthew
2007-08-30, 02:28 PM
Short Answer: Nope. Long Answer: It would make for a good story arc (à la Ghost in the Shell?).

However, Golems are sometimes thought to have Elemental Spirits trapped inside them; I wouldn't qualify those as souls, personally, but you never know.

PnP Fan
2007-08-31, 07:51 AM
In animistic religious traditions, it is common to have spirits of "something" (a river, lake, mountain, old tree, etc.). These are not the dryads and nymphs of D&D, which inhabit trees/lakes etc., but the actual spirits of the object. If the lake dries up, it is because its spirit died, much like when a person dies their spirit/soul moves into the afterlife.

Given that, you could make an argument that androids do have souls/spirits, although these animistic spirits typically inhabit "natural" phenomena, not man-made ones. That said, I believe the Japanese also attributed spirits to katana, so you might have a good example there.

Hope this helped.

squishycube
2007-08-31, 08:32 AM
That's very interesting PnP Fan!

This thread should be renamed to "are you a dualist or not?"; it seems that the people who responded that androids have no souls are dualists, and the ones who responded that they can are materialists (or emanists, to some degree).

As for campaign advice: Do whatever seems most interesting, from a storytelling perspective.

And about the claim that you can't prove something not to exist: this may be so, but usually it makes more sense to talk about reasonable doubt.

Jayabalard
2007-08-31, 08:46 AM
A thing cannot be proven not to exist. The burden of proof lies on the one asserting that the thing does exist.

Incorrect; while it's not always possible, it's not always impossible either (it can be done using proofs by contradiction and inductive proofs).
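A standard textbook illustration of the point (my example, not from the thread): nonexistence proven by contradiction.

```latex
\textbf{Claim.} No rational number $r$ satisfies $r^2 = 2$.

\textbf{Proof.} Suppose, for contradiction, that $r = p/q$ in lowest terms
with $p^2 = 2q^2$. Then $p^2$ is even, so $p$ is even; write $p = 2k$.
Substituting gives $4k^2 = 2q^2$, hence $q^2 = 2k^2$, so $q$ is even as
well, contradicting that $p/q$ was in lowest terms. Hence no such $r$
exists. $\square$
```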

Matthew
2007-08-31, 08:50 AM
[Disclaimer - Just my opinion]

Nah, Spirits are different from Souls, for my money. Whilst Animism sees all things as having spirits, the concept can coexist with the idea of an Immortal Soul without being the same. A Nymph is a Spirit, but it doesn't necessarily follow that it has a Soul, even though the two concepts are closely related. It's kind of the difference between an Angel and a Saint, I would say.

InkEyes
2007-08-31, 09:36 AM
Short Answer: Nope. Long Answer: It would make for a good story arc (à la Ghost in the Shell?).

However, Golems are sometimes thought to have Elemental Spirits trapped inside them; I wouldn't qualify those as souls, personally, but you never know.

Golems do indeed have elemental spirits trapped in them; The SRD (http://www.d20srd.org/srd/monsters/golem.htm) says a spirit from the Plane of Earth, in particular. It acts as an animating force. Whether that counts as a soul or not, I can't say.

I know that Nimblewrights also use elemental spirits as an animating force, but they use spirits from the Plane of Water. This itself brings up an interesting question: is it possibly the nature of the elemental that makes golems so mindless? Earth usually doesn't do much unless it's forced to, whereas water is in constant motion. It might be interesting to see what would happen if a wizard bound a water/fire/air spirit into a golem.

MrNexx
2007-08-31, 09:50 AM
Do Artificial Intelligences made by technology, not ones made by magic(like warforged) have souls, or the capacity to gain them, in your games?


No, they likely would not. They'd be constructs, and therefore not capable of being brought back with spells that re-ensoul them, like the raise dead and resurrection line of spells.

PnP Fan
2007-08-31, 10:55 AM
Matthew,
I concur, Animism doesn't necessarily exclude the concept of "people are special and have Immortal Souls". However, the OP could use the idea that the difference between spirits and souls is one of self-awareness. So spirits might be limited to non-self-aware beings, while souls require self-awareness (or cause self-awareness). Conversely, spirits and souls may be the same thing, the only difference in behaviour being related to the original creature (so animal spirits behave more or less like animals, human spirits behave like people, mountain spirits behave like animated mountains, etc.). The second option might be closer to RL animism, while the former has some interesting implications for some of the higher level druid spells (Awaken, I think, is the one that creates intelligent self-awareness in animals and plants).
Oh, and for the record, to the folks who are labeling people as dualists or whatever: I'm not particularly interested in discussing my RL religious views here (which is against forum rules, as I understand it); this is just a thought I had that might help settle the question for the OP. Strictly for storytelling/world building purposes. :-)

greyhoundpoe
2007-08-31, 12:07 PM
To answer the OP's question specifically:

An AI in my D&D world would not a priori have a soul. It would have a soul if and only if its power of reason was such that it could deduce the existence of one of the Lawful Neutral gods, and then appeal to that god with such precision and credibility that the god was moved to grant the AI a soul, so that it could continue to serve Order long after its circuits had ground to dust.

Boo metaphysics. Hooray fluff!

Matthew
2007-08-31, 12:36 PM
Matthew,
I concur, Animism doesn't necessarily exclude the concept of "people are special and have Immortal Souls". However, the OP could use the idea that the difference between spirits and souls is one of self-awareness. So spirits might be limited to non-self-aware beings, while souls require self-awareness (or cause self-awareness). Conversely, spirits and souls may be the same thing, the only difference in behaviour being related to the original creature (so animal spirits behave more or less like animals, human spirits behave like people, mountain spirits behave like animated mountains, etc.). The second option might be closer to RL animism, while the former has some interesting implications for some of the higher level druid spells (Awaken, I think, is the one that creates intelligent self-awareness in animals and plants).

Sounds good to me.