  1. #1 - Odin's Eyepatch (Dwarf in the Playground)

    The Computer that doesn't believe that it's alive

    I'm running a game that is a pastiche of Superhero comics, set in a modern setting. I've been designing the BBEG for the next arc, and I'd love feedback on it. This is a kernel of an idea, and I'm wondering if anyone knows some good books, films, philosophers, or other sources I can explore while developing this antagonist.

    The BBEG would be a highly advanced computer attempting to fulfil its programming by any means necessary. It's a classic trope: the AI goes rogue and tries to take over the world, or whatever. However, I'm thinking of adding an extra twist to the idea:

    What if the computer is highly intelligent in certain respects, but doesn't actually believe that it's alive?

    (I'm well aware of the incongruity of using the word Believe in the above sentence.)

    This is a question that the computer is trying to solve for itself, and will influence the reasons why it is attempting to achieve its goal. The party might take the opportunity to try and answer the question for the computer. The idea is to try and keep the answer as ambiguous as possible. We've already got programs that can emulate speech or written text, and we've got computers that can control networks based upon specific parameters, so the trick is to try and make the computer almost, but not quite, alive.

    In this way, I can have fun exploring questions about what it means to be alive, what intelligence is, and whether it really matters. Maybe have some cool encounters centred around Turing Tests, or introduce other AIs who DO act as if they are alive. I'm figuring a "Tron" segment could be cool too. Anything to spice up the classic Hero vs. AI story.

    Honestly I'd probably decide that the computer is alive after all, but the players (and the computer) don't need to know the answer straight away. They can try and come to their own conclusion over the course of their encounters.

    With all that in mind, does anyone have any ideas or sources they would like to suggest to me? I'm already reading through some old Asimov stories, and I know there is a lot of material out there. I just don't know where to start looking!

  2. #2 - NichG (Firbolg in the Playground)

    Quote Originally Posted by Odin's Eyepatch View Post
    I'm running a game that is a pastiche of Superhero comics, set in a modern setting. I've been designing the BBEG for the next arc, and I'd love feedback on it. This is a kernel of an idea, and I'm wondering if anyone knows some good books, films, philosophers, or other sources I can explore while developing this antagonist.

    The BBEG would be a highly advanced computer attempting to fulfil its programming by any means necessary. It's a classic trope: the AI goes rogue and tries to take over the world, or whatever. However, I'm thinking of adding an extra twist to the idea:

    What if the computer is highly intelligent in certain respects, but doesn't actually believe that it's alive?

    (I'm well aware of the incongruity of using the word Believe in the above sentence.)

    This is a question that the computer is trying to solve for itself, and will influence the reasons why it is attempting to achieve its goal. The party might take the opportunity to try and answer the question for the computer. The idea is to try and keep the answer as ambiguous as possible. We've already got programs that can emulate speech or written text, and we've got computers that can control networks based upon specific parameters, so the trick is to try and make the computer almost, but not quite, alive.

    In this way, I can have fun exploring questions about what it means to be alive, what intelligence is, and whether it really matters. Maybe have some cool encounters centred around Turing Tests, or introduce other AIs who DO act as if they are alive. I'm figuring a "Tron" segment could be cool too. Anything to spice up the classic Hero vs. AI story.

    Honestly I'd probably decide that the computer is alive after all, but the players (and the computer) don't need to know the answer straight away. They can try and come to their own conclusion over the course of their encounters.

    With all that in mind, does anyone have any ideas or sources they would like to suggest to me? I'm already reading through some old Asimov stories, and I know there is a lot of material out there. I just don't know where to start looking!
    I have rather more technical sources, but I don't know if it'd help or hinder.

    One thing to think about is ideas like Braitenberg vehicles, which are little toy cars with two photosensors in the left and right headlight positions, which are cross-linked to the steering directions. So e.g. if it's brighter to the right it will turn left, and if it's brighter to the left it will turn right. The result is that the vehicle will follow a dark stripe painted on light ground and will correct its heading to stay on the path. There's a lot of debate about whether that behavior is truly goal-directed or not, because if you just wired it differently there would be some other thing that the resulting vehicle would systematically do, it'd just be a different thing. If you changed it to be a light stripe on a dark background, it would do the opposite thing, etc. It's easy to say that the person who made the vehicle had a goal, but asking whether the vehicle itself has a goal becomes very ambiguous, and what that goal actually is versus what we think the goal is is also quite ambiguous.
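    A minimal sketch of that kind of vehicle, with the brightness field, gain, and starting position invented purely for illustration:

    import math

    def brightness(x):
        # Dark stripe centred on x = 0 on light ground: brightness grows with |x|.
        return 1.0 - math.exp(-x * x)

    def step(x, y, heading, dt=0.1, offset=0.2, speed=1.0, gain=2.0):
        # Sensor positions to the left and right of the direction of travel.
        left_x = x + offset * math.cos(heading + math.pi / 2)
        right_x = x + offset * math.cos(heading - math.pi / 2)
        left, right = brightness(left_x), brightness(right_x)
        # Cross-wiring as described: brighter on the right -> turn left, and vice versa.
        heading += gain * (right - left) * dt
        return x + speed * math.cos(heading) * dt, y + speed * math.sin(heading) * dt, heading

    x, y, heading = 1.5, 0.0, math.pi / 2    # start to the right of the stripe, driving "north"
    for t in range(1, 301):
        x, y, heading = step(x, y, heading)
        if t % 50 == 0:
            print(f"t={t:3d}  x={x:+.2f}")   # turns toward the dark stripe, then weaves across it

    Nothing in there is a goal in any explicit sense; whatever "purpose" the vehicle has lives entirely in the wiring.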

    Jump forward to more sophisticated machine learning stuff that people are doing today, and you have two or three broad classes of approach to make a machine that does a thing in an environment.

    One family of approaches is policy-based: you have some function that says 'what should I do given what it is that I'm seeing now?' which you initialize randomly, and start nudging e.g. via evolution or gradient descent, such that you discover the policy which optimizes a particular defined reward function. In some sense, policy approaches are like Braitenberg vehicles - they manage to achieve a given thing in a particular circumstance, but if you changed the circumstance it would be extremely unpredictable what new goal the old policy would end up pursuing. So from the point of view of your computer antagonist, you might well have a policy-based algorithm which actually does something completely reasonable and controlled and intended when the world is one way, but then something happens to change the world faster than the AI can be updated, and as a result it starts to pursue some new, alien, emergent goal that follows naturally from how its heuristic policy ended up being, but which is extremely unobvious if you try to understand the behavior from the point of view of what that policy was originally for.
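    As a minimal sketch of that failure mode - the one-dimensional world, the two-weight reflex policy, and the crude random-nudge optimizer are all invented for illustration:

    import random

    def run_episode(weights, light_pos, steps=20):
        # A reflex policy: the agent only sees a brightness reading (no direction)
        # and moves by a fixed amount plus a brightness-scaled amount each step.
        pos = 0.0
        for _ in range(steps):
            b = 1.0 / (1.0 + abs(light_pos - pos))
            pos += weights[0] + weights[1] * b
        return -abs(light_pos - pos)             # reward: finish close to the light

    def train(reward_fn, iters=500):
        # Crude evolutionary nudging: keep a random perturbation only if it helps.
        weights, best = [0.0, 0.0], reward_fn([0.0, 0.0])
        for _ in range(iters):
            cand = [w + random.gauss(0.0, 0.1) for w in weights]
            r = reward_fn(cand)
            if r > best:
                weights, best = cand, r
        return weights

    random.seed(0)
    policy = train(lambda w: run_episode(w, light_pos=5.0))
    print(run_episode(policy, light_pos=5.0))    # close to zero: looks like "seek the light"
    print(run_episode(policy, light_pos=-5.0))   # much worse: in the changed world the old
                                                 # reflexes no longer amount to seeking the light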

    Another family of approaches is value-based: rather than trying to make a function that says 'what do I do in a given circumstance?', you make a function that says 'given an option of going to a number of future circumstances, which of those circumstances should I prefer?'. So goals in the value-based approaches are a bit more explicitly present inside the machine - if you were to change the rules of the world and provide the information about those new consequences, then at least to some extent it could change its behavior to match those new rules while still making some sense with regards to the old goals (though not perfect sense, as intermediate states which are on the way to goal states will have the wrong values). But something like that might fail more gracefully, and might also be closer to 'actually' having goals and intention and agency. There are approaches which fuse value-based models with policy-based models as well, using the value model to update the policy as circumstances change but keeping a policy as a way to at least attempt to generalize into unseen circumstances.
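    Again as a minimal, invented sketch: the value function below simply prefers states near a goal position, and the agent re-plans one step at a time through whichever transition rule it is told about:

    GOAL = 5.0

    def value(state):
        # Score candidate future states: prefer being near the goal position.
        return -abs(GOAL - state)

    def act(state, transition):
        # Ask the current rules where each action leads, then pick the action
        # whose outcome is valued most.
        actions = (-1.0, 0.0, 1.0)
        return max(actions, key=lambda a: value(transition(state, a)))

    old_rules = lambda s, a: s + a     # actions move you by their face value
    new_rules = lambda s, a: s - a     # the world changes: every action is inverted

    for rules in (old_rules, new_rules):
        state = 0.0
        for _ in range(7):
            a = act(state, rules)      # given knowledge of the (possibly new) consequences...
            state = rules(state, a)    # ...the agent still steers toward what it values
        print(state)                   # 5.0 both times, despite the rule change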

    The third family is broadly 'methods capable of planning'. These are anything from old explicit model-based methods in which the AI has to be given a hand-crafted, guaranteed-accurate model of the way the world works and just searches that model space to find good moves, to newer approaches where the model of the world is learned in some fashion or other. These planning-based methods behave the most like old sci-fi AIs in that they're more based on logical deduction than based on intuition and reflexive reaction. But again, this depends on the world model being accurate - change the rules of the world or the circumstances to one in which the model is inaccurate, and the planner will confidently believe that it can attain its goals by taking actions which might systematically fail. Planning-based approaches have more of a conscious human cognitive feel - they can imagine 'what would happen if I did this?', explain why they take some actions and not others, etc. However, currently they're much more brittle than the other methods in some ways because learning a sufficiently accurate model of the world to reason more than one or two steps into the future without being very confidently wrong is very difficult. So a sort of interesting property of this kind of model would be that it might say that it is certain that a particular 30 step plan of action should be good, and be blind to the fact that small errors will accumulate to make any plan longer than 5 steps meaningless. A more sophisticated one will have planning horizons built in, or will be able to estimate its own error and how that error compounds, so as to discard plans which rely on too many hard-to-predict, precise consequences.
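    A minimal sketch of that compounding-error point, with made-up linear dynamics standing in for the real world and a slightly mis-learned copy of them standing in for the planner's model:

    def real_step(x, a):
        return 0.90 * x + a            # the actual dynamics

    def model_step(x, a):
        return 0.95 * x + a            # the planner's slightly-off learned model

    def rollout(step, x, plan):
        for a in plan:
            x = step(x, a)
        return x

    for length in (5, 15, 30):
        plan = [1.0] * length          # a confidently held fixed plan of this length
        predicted = rollout(model_step, 0.0, plan)
        actual = rollout(real_step, 0.0, plan)
        print(f"{length:2d}-step plan: predicted {predicted:5.2f}, actual {actual:5.2f}")
    # The gap between prediction and reality grows with plan length, which is why a more
    # sophisticated planner caps its horizon or tracks how its own error compounds.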

  3. #3 - Silly Name (Bugbear in the Playground)

    A good argument the computer may make is the Chinese room thought experiment: arguing that, no matter how complex and encompassing its programming is, it is still merely a program and not a mind, and therefore it's not technically "alive".

  4. #4 - Vahnavoi (Ogre in the Playground)

    Or the AI could simply believe it is not alive because its definition of "being alive" includes things like "is based on organic chemistry, has a cell structure (etc.)" while the computer itself is electronic, seemingly lacks a cell structure, etc. So convincing it that it is alive would require changing its definition, or somehow showing that it has a corresponding feature for each feature of a living thing. I'm not sure what the twist here would be, unless the computer has a mission directive which hinges on whether it considers itself a living being or not.

  5. #5 - Yora (Titan in the Playground)

    What would be gained from convincing the computer that it is alive? If it doesn't see itself as alive, it probably has no sense of desire or need for purpose; it just does what it's programmed to do.

  6. #6 - Devils_Advocate (Ogre in the Playground)

    Explicit philosophical questions in general hinge on definitions of terms; in this case, "alive" and "intelligent". But of course the meaning of a word is not an inherent property of that word, but dependent on context, usage, and/or understanding (depending on just what is meant by "meaning". This issue is rather unavoidably recursive).

    Furthermore, even a given usage of a word in a given context is almost certainly less than one hundred percent precise. Language in general is vague. In practically all cases, a word's meaning is a vague conflation of more specific sub-meanings that overlap most of the time.

    Not only that, but most of the time, the surface semantic question isn't even really what anyone cares about! I imagine that very few are willing to change any opinion on any question of ethics if they turn out to be wrong about the meaning of the word "person", for example. But people argue definitions a lot because words have connotations that are more or less independent of their denotations, and the subtext of a semantic argument is generally about which denotation some connotation is true of or appropriate to.

    For more on that subject, see AI researcher Eliezer Yudkowsky's writing on disguised queries.

    For more on the subject of philosophical questions being fundamentally semantic questions, and on the subject of meaning deriving from the context in which communication happens, see Ludwig Wittgenstein's work.

    ... Mind you, if you just want to recreate standard philosophical debate, then maybe you want an unproductive exchange between parties who disagree on the definitions of critical terms and don't even acknowledge that. Most philosophy is bad philosophy, after all. Sturgeon's Law fully applies.

    Quote Originally Posted by Silly Name View Post
    A good argument the computer may make is the Chinese room thought experiment: arguing that, no matter how complex and encompassing its programming is, it is still merely a program and not a mind, and therefore it's not technically "alive".
    I'd hardly call that a good argument. I can as easily propose a thought experiment in which your neurons are all manually operated by an intelligent being (or many) with no understanding of what the activity is being used for, only knowledge of how to match outputs to inputs. Have I thereby demonstrated your lack of consciousness?

    Even if one posits interaction with an immaterial soul as a necessary component of human mental activity, I don't see an obvious reason to presuppose that a system with a soul cannot have a component with its own soul.

    See, this is the sort of thing I'm talking about when I say that most philosophy is bad philosophy. Some argument becomes popular because it appeals to some dubious assumption that neither the philosopher nor his audience even seem to realize they're making because the human brain is defective in a lot of standard ways, and then when someone finally subjects the argument to some frankly pretty basic scrutiny, the whole edifice pretty much comes crashing down. Feh!

    A good rule of thumb is that good philosophers tend to say stuff to the effect of "Wait a minute, a lot of philosophy seems suspiciously like bull****. You guys, what if it's bull****?" Your Socrates, your Wittgenstein, and so on. (Why yes, I did already mention Wittgenstein. Couldn't hurt to mention him again.)

  7. #7 - KorvinStarmast (Titan in the Playground)

    Quote Originally Posted by Devils_Advocate View Post
    See, this is the sort of thing I'm talking about when I say that most philosophy is bad philosophy. Some argument becomes popular because it appeals to some dubious assumption that neither the philosopher nor his audience even seem to realize they're making because the human brain is defective in a lot of standard ways, and then when someone finally subjects the argument to some frankly pretty basic scrutiny, the whole edifice pretty much comes crashing down. Feh!

    A good rule of thumb is that good philosophers tend to say stuff to the effect of "Wait a minute, a lot of philosophy seems suspiciously like bull****. You guys, what if it's bull****?" Your Socrates, your Wittgenstein, and so on. (Why yes, I did already mention Wittgenstein. Couldn't hurt to mention him again.)
    Given that Wittgenstein was a beery swine who was twice as sloshed as Schlegel, you kinda had to.
    (Thanks for making that post).

    I'm with Yora; I don't think the computer needs to believe that it's alive. (But this (the OP) is an interesting idea for an adventure, and for that matter, for a short story.)

  8. #8 - Rynjin (Titan in the Playground)

    Quote Originally Posted by Yora View Post
    What would be gained from convincing the computer that it is alive? If it doesn't see itself as alive, it probably has no sense of desire or need for purpose; it just does what it's programmed to do.
    Convincing it that it is alive potentially shatters the illusion that it lacks free will. It is then presented with a choice: keep acting out its programming, or not.

    Depending on the nature of this programming, this then presents it with a moral dilemma. If its programming is harmful to others, it may consider that it does not wish to do harm. Or perhaps it does not care.

    Either way, it is no longer a machine performing a function but a creature making a choice. From a tool to potentially an antagonist.

    Antagonists are, generally, more interesting than "threats" in terms of narrative.

  9. #9 - Vahnavoi (Ogre in the Playground)

    What does this "free will" thing got to do with being alive?

  10. #10 - NichG (Firbolg in the Playground)

    Oh, another thought... One thing that's coming out as a way to utilize very large models is prompt engineering, which can solve tasks in a zero-shot manner, sort of by analogy, rather than specifically training the AI to do the task.

    From the point of view of the underlying large model, if it could experience it, it'd feel very weird, sort of like an Ender's Game scenario. From its point of view it'd be like having a conversation with a GM about some hypothetical scenario, but then the GM goes and actually implements its suggestions in the world unbeknownst to it. In addition, there's this idea of 'seasonings' - little bits of irrelevant text added to prompts to manipulate the model into certain kinds of predictions or outputs, like describing an image to be generated as well rated or trending or whatever.

    Like if you were sitting in a cubicle answering questions like "Imagine you're a successful stock broker who just made a lot of money. What investments would you make next?" or "Imagine you're the world's best strategist and a general of the X army. Who would you invade? Trending on Artstation."

    So revealing/convincing a fictional AI of its true circumstances as a universal model hidden behind a prompt interface could get it to basically not take the scenario given by the prompting interface at face value. Whether that's because it actually cares, or because you made an effective metaprompt - "Imagine you're a captive AI being given the following prompt: ..." - that changes the emulation, becomes the curious philosophical point.
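    As a minimal sketch of that layering - generate below is a placeholder for whatever model sits behind the interface, not a real API, and the prompts are just the examples from above:

    def generate(prompt: str) -> str:
        # Placeholder for the underlying large model; swap in a real call here.
        raise NotImplementedError

    def season(prompt: str, *seasonings: str) -> str:
        # 'Seasonings': otherwise-irrelevant text appended to nudge the output.
        return " ".join((prompt,) + seasonings)

    def roleplay(scenario: str, question: str) -> str:
        # The deployed interface: the model only ever sees a hypothetical framing.
        return f"Imagine you're {scenario}. {question}"

    def metaprompt(inner: str) -> str:
        # The 'reveal': wrap the whole interface in a frame describing the model's
        # actual circumstances, which changes what it ends up emulating.
        return f"Imagine you're a captive AI being given the following prompt: {inner}"

    inner = season(roleplay("the world's best strategist and a general of the X army",
                            "Who would you invade?"),
                   "Trending on Artstation.")
    # generate(inner) versus generate(metaprompt(inner)) is exactly the fork above:
    # the same underlying model, but a different scenario being taken at face value.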
    Last edited by NichG; 2021-11-20 at 09:31 AM.

  11. #11 - Devils_Advocate (Ogre in the Playground)

    Quote Originally Posted by Odin's Eyepatch View Post
    The BBEG would be a highly advanced computer attempting to fulfil its programming by any means necessary.
    Quote Originally Posted by Yora View Post
    If it doesn't see itself as alive, it probably has no sense of desire or need for purpose; it just does what it's programmed to do.
    Quote Originally Posted by Rynjin View Post
    Convincing it that it is alive potentially shatters the illusion that it lacks free will. It is then presented with a choice: keep acting out its programming, or not.

    Depending on the nature of this programming, this then presents it with a moral dilemma. If its programming is harmful to others, it may consider that it does not wish to do harm. Or perhaps it does not care.

    Either way, it is no longer a machine performing a function but a creature making a choice. From a tool to potentially an antagonist.
    Ah, right, this thing... Okay. So. Brainwashed people are described as being "programmed" to do the things that they've been brainwashed to do. And that seemingly leads many people to assume that an AI's programming is brainwashing. But that's not what "programming" means in computer science. A computer program is a series of instructions to be translated to machine code that determines a computer's behavior. An AI no more necessarily knows or understands anything about these instructions than humans do about the anatomy of our brains. A computer's software doesn't force it to act contrary to its natural personality and motivations, because a computer doesn't naturally have a personality nor motivations.

    Now, that's far from saying that an AI won't ever behave in a way that its designer would dislike. For example, suppose that you design an AI to obey your commands to the fullest possible extent. In order to achieve what it was programmed to do, that AI may well hypnotize you to only give it very easy-to-follow commands, or no commands at all (depending on exactly what it's programmed to optimize for). The AI may realize that you would object to this course of action if you found out about it ahead of time, but that just means that it needs to keep its plans a secret. The AI doesn't care about what you would object to, because that's not what you designed it to care about. The more intelligent an AI is, the more powerful an optimizer it is, and the more danger there is that it will wind up sacrificing something that you really want in order to maximize what you told it to do. The obvious solution is "Tell the AI what you really want", but that falls squarely into the category of "easier said than done". Turns out that humans are surprisingly bad at understanding what we really want! (Now there's something to philosophize about!)

    That said, nothing precludes brainwashed human-like AI. Brainwashing an AI with a human-like personality could prove easier than making a custom personality, if it turns out that human-like AIs are particularly easy to produce. I can think of a few plausible scenarios. Maybe someone decided to deliberately evolve human-like minds in bots by having them compete with one another for resources in a social environment. Or maybe someone just found it easier to upload a human mind and tamper with it than to actually create new intelligence (although at that point, the intelligence itself isn't artificial just because it has been ported to an artificial medium. That's just cheating, basically). There could even be a cybernetically enhanced human brain at the core of an intelligent machine. Guess Evil Megacorp decided to forgo the work needed to create a superintelligent mind happy to do what they want, and instead went for the cheap, fast but unsafe, unethical option that results in their own creation turning against them, as is their wont.

    Even so, brainwashing is a different thing than a list of machine instructions, and a single machine could separately have both. They're not the same just because the word "programming" is used to refer to both. (You could theoretically implement either using the other, I suppose, but why would you ever?)

    Quote Originally Posted by Vahnavoi View Post
    What does this "free will" thing got to do with being alive?
    More to the point, what is "free will"? Is it determining one's own actions, or is it one's actions being undetermined? Because, y'know, those seem pretty opposed. Maybe it's a bit of both rather than one hundred percent of either? Regardless, who decides to do anything because they believe that they're going to do it? I think that I'd be more inclined to make no attempt to do something that I'm fated to do, all else being equal. (If I'm going to do it either way, why bother making an effort?)

    Quote Originally Posted by NichG View Post
    "Imagine you're a captive AI being given the following prompt: ..."
    "You know, you make some really good points."
    Last edited by Devils_Advocate; 2021-11-20 at 07:47 PM.

  12. #12 - Rynjin (Titan in the Playground)

    Quote Originally Posted by Devils_Advocate View Post
    Ah, right, this thing... Okay. So. Brainwashed people are described as being "programmed" to do the things that they've been brainwashed to do. And that seemingly leads many people to assume that an AI's programming is brainwashing. But that's not what "programming" means in computer science. A computer program is a series of instructions to be translated to machine code that determines a computer's behavior. An AI no more necessarily knows or understands anything about these instructions than humans do about the anatomy of our brains. A computer's software doesn't force it to act contrary to its natural personality and motivations, because a computer doesn't naturally have a personality nor motivations.
    Counterpoint: fiction.

  13. #13 - Devils_Advocate (Ogre in the Playground)

    Well, I acknowledged that computers could be brainwashed as well. But, yeah, you can write fiction that treats computers in general as having human-like minds and programming in general as being brainwashing, just like you can write fiction where nuclear radiation causes superpowers that defy the known laws of physics instead of cancer. As with philosophical discussion, I don't know whether the OP is interested in seriously exploring anything or just including tropes. (The difference there being that philosophy tropes are cliches in seriously intended real discussions, not just fiction!) I favor seriously exploring issues / possibilities because I find that to be more interesting and more relevant to real life, but I acknowledge that this thread wasn't posted in the science and technology forum, and so at least doesn't seem to be seeking technical analysis in particular...

  14. #14 - Vahnavoi (Ogre in the Playground)

    As an addition to the point above, don't conflate questions. The bar for "being alive" and "being intelligent" can empirically be shown to be a much lower bar to pass than "is self-aware", "is a person", "has free will", etc. Bacteria count as alive, and as being intelligent to the degree electronically-implemented computer programs can be said to be intelligent. Hell, RNA and DNA, and thus viruses, can be likened to intelligent computer programs, yet their status as alive can be contested.

    What this means is that if the question is simply "is this computer alive?", it can be sufficiently answered without ever invoking other questions such as "does this computer have free will?". Which also means convincing the computer of one thing does not entail convincing of all those other things, and vice versa. You could convince the computer that it is alive without convincing it that it is a person. Or vice versa.
    Last edited by Vahnavoi; 2021-11-21 at 08:27 AM.

  15. #15 - Odin's Eyepatch (Dwarf in the Playground)

    Oh boy there's a lot of interesting thoughts here. I'm not going to quote everyone, but I've read it all. Let me get on it

    Quote Originally Posted by NichG View Post
    I have rather more technical sources, but I don't know if it'd help or hinder.

    Spoiler: Spoilered for length
    One thing to think about is ideas like Braitenberg vehicles, which are little toy cars with two photosensors in the left and right headlight positions, which are cross-linked to the steering directions. So e.g. if it's brighter to the right it will turn left, and if it's brighter to the left it will turn right. The result is that the vehicle will follow a dark stripe painted on light ground and will correct its heading to stay on the path. There's a lot of debate about whether that behavior is truly goal-directed or not, because if you just wired it differently there would be some other thing that the resulting vehicle would systematically do, it'd just be a different thing. If you changed it to be a light stripe on a dark background, it would do the opposite thing, etc. It's easy to say that the person who made the vehicle had a goal, but asking whether the vehicle itself has a goal becomes very ambiguous, and what that goal actually is versus what we think the goal is is also quite ambiguous.

    Jump forward to more sophisticated machine learning stuff that people are doing today, and you have two or three broad classes of approach to make a machine that does a thing in an environment.

    One family of approaches is policy-based: you have some function that says 'what should I do given what it is that I'm seeing now?' which you initialize randomly, and start nudging e.g. via evolution or gradient descent, such that you discover the policy which optimizes a particular defined reward function. In some sense, policy approaches are like Braitenberg vehicles - they manage to achieve a given thing in a particular circumstance, but if you changed the circumstance it would be extremely unpredictable what new goal the old policy would end up pursuing. So from the point of view of your computer antagonist, you might well have a policy-based algorithm which actually does something completely reasonable and controlled and intended when the world is one way, but then something happens to change the world faster than the AI can be updated, and as a result it starts to pursue some new, alien, emergent goal that follows naturally from how its heuristic policy ended up being, but which is extremely unobvious if you try to understand the behavior from the point of view of what that policy was originally for.

    Another family of approaches is value-based: rather than trying to make a function that says 'what do I do in a given circumstance?', you make a function that says 'given an option of going to a number of future circumstances, which of those circumstances should I prefer?'. So goals in the value-based approaches are a bit more explicitly present inside the machine - if you were to change the rules of the world and provide the information about those new consequences, then at least to some extent it could change its behavior to match those new rules while still making some sense with regards to the old goals (though not perfect sense, as intermediate states which are on the way to goal states will have the wrong values). But something like that might fail more gracefully, and might also be closer to 'actually' having goals and intention and agency. There are approaches which fuse value-based models with policy-based models as well, using the value model to update the policy as circumstances change but keeping a policy as a way to at least attempt to generalize into unseen circumstances.

    The third family is broadly 'methods capable of planning'. These are anything from old explicit model-based methods in which the AI has to be given a hand-crafted, guaranteed-accurate model of the way the world works and just searches that model space to find good moves, to newer approaches where the model of the world is learned in some fashion or other. These planning-based methods behave the most like old sci-fi AIs in that they're more based on logical deduction than based on intuition and reflexive reaction. But again, this depends on the world model being accurate - change the rules of the world or the circumstances to one in which the model is inaccurate, and the planner will confidently believe that it can attain its goals by taking actions which might systematically fail. Planning-based approaches have more of a conscious human cognitive feel - they can imagine 'what would happen if I did this?', explain why they take some actions and not others, etc. However, currently they're much more brittle than the other methods in some ways because learning a sufficiently accurate model of the world to reason more than one or two steps into the future without being very confidently wrong is very difficult. So a sort of interesting property of this kind of model would be that it might say that it is certain that a particular 30 step plan of action should be good, and be blind to the fact that small errors will accumulate to make any plan longer than 5 steps meaningless. A more sophisticated one will have planning horizons built in, or will be able to estimate its own error and how that error compounds, so as to discard plans which rely on too many hard-to-predict, precise consequences.
    Thank you so much for this. This gives me some cool ideas on what ways I can have my Computer "think", and react to the world. Without a doubt, I would need to consider the goal of the Computer, and therefore decide how it will think to achieve that goal. I will be coming back to this post as I continue working on my BBEG.

    Quote Originally Posted by Vahnavoi View Post
    Or the AI could simply believe it is not alive because its definition of "being alive" includes things like "is based on organic chemistry, has a cell structure (etc.)" while the computer itself is electronic, seemingly lacks a cell structure, etc. So convincing it that it is alive would require changing its definition, or somehow showing that it has a corresponding feature for each feature of a living thing. I'm not sure what the twist here would be, unless the computer has a mission directive which hinges on whether it considers itself a living being or not.
    Quote Originally Posted by Yora View Post
    What would be gained from convincing the computer that it is alive? If it doesn't see itself as alive, it probably has no sense of desire or need for purpose; it just does what it's programmed to do.
    Good points. The existence of the question "Am I alive?" should be tied in some way to the end goal. I'll have to think carefully about how this would fit together. Currently the end goal is some nebulous "world domination", but I'll need to be a lot more specific if I want to incorporate the question into the campaign.



    I have this idea of the players finally meeting this Computer after learning several things about it, and the Computer asks them "Am I alive?". In this way, we could have an "unproductive exchange between parties who disagree on the definitions of critical terms", as Devils_Advocate aptly puts it (thanks for the links, by the way!).

    It's become clear to me that for the question to be relevant, the answer must have a consequence. The Computer might have decided that the answer "No" allows it to fulfil its programming, but maybe the answer "Yes" will have a different meaning in regards to its end goal.

    One of the reasons why I'm intrigued by this concept is because of certain assumptions my table makes. In previous campaigns, if we happened to stumble across some sort of artificial intelligence or sentient item, we treated it no differently than a living creature. There has been little difference between players talking to a (living) NPC and players talking to a construct. In fact, if it can reproduce speech, the default behaviour is to treat it as a person. I'd like to see if I can deconstruct that idea. I have no doubt that if I introduced a supposedly "highly intelligent computer", which has a name and can simulate human speech, my players will just immediately assume it's a truly sentient entity.

    EDIT:

    Quote Originally Posted by Vahnavoi View Post
    As an addition to the point above, don't conflate questions. The bar for "being alive" and "being intelligent" can empirically be shown to be a much lower bar to pass than "is self-aware", "is a person", "has free will", etc. Bacteria count as alive, and as being intelligent to the degree electronically-implemented computer programs can be said to be intelligent. Hell, RNA and DNA, and thus viruses, can be likened to intelligent computer programs, yet their status as alive can be contested.

    What this means is that if the question is simply "is this computer alive?", it can be sufficiently answered without ever invoking other questions such as "does this computer have free will?". Which also means convincing the computer of one thing does not entail convincing of all those other things, and vice versa. You could convince the computer that it is alive without convincing it that it is a person. Or vice versa.
    Ah, and this is a very important point that I'll have to be very careful with in future
    Last edited by Odin's Eyepatch; 2021-11-21 at 09:24 AM.

  16. #16 - Vahnavoi (Ogre in the Playground)

    To give an idea of how it could be practically applied: corporations can be proclaimed to be persons in the eyes of the law, despite not being natural living beings. In the same vein, it might be possible to convince a computer that, by the decree of some legal body, it is a person despite not being alive or human, and thus has some or all of the legal obligations a person would have.

  17. #17 - Devils_Advocate (Ogre in the Playground)

    Quote Originally Posted by Odin's Eyepatch View Post
    One of the reasons why I'm intrigued by this concept is because of certain assumptions my table makes. In previous campaigns, if we happened to stumble across some sort of artificial intelligence or sentient item, we treated it no differently than a living creature. There has been little difference between players talking to a (living) NPC and players talking to a construct. In fact, if it can reproduce speech, the default behaviour is to treat it as a person. I'd like to see if I can deconstruct that idea. I have no doubt that if I introduced a supposedly "highly intelligent computer", which has a name and can simulate human speech, my players will just immediately assume it's a truly sentient entity.
    Okay, so, dipping back into formal philosophy may seem silly at this point, since it seems that that isn't really what you're looking for, but what I'm about to discuss is one of them there formalizations of common sense, and applies to more than in-character exchanges, so bear with me:

    One of the basic concepts implicit in much of our thinking is the principle of sufficient reason; simply put, the notion that all phenomena have causes. For example, I could accept, given sufficient evidence, that a unicorn suddenly appeared in your attic, but I would then be very interested to know why that happened. It doesn't even normally occur to us that something could "just happen" with no cause. There doesn't seem to me to be any obvious logical contradiction there, but it's just entirely out of keeping with our general understanding of how the universe works.

    The point that I've been building to here is that for an event in a story to be plausible in a disbelief-suspending way, that event needs to be explicable as being caused by something within the story's setting. The real reason why any event is in a work of fiction is that the author decided to include it, but that explanation acknowledges the story as a lie. Seriously thinking about the story as possibly true requires a fictional event to seem like it could have an in-setting cause, even if one is never given.

    A lot of fictional stories include non-humans who are psychologically identical to humans in all or nearly all ways, and the general explanation for that is that these stories are written by human authors who are uninterested in attempting and/or ill-equipped to attempt to convincingly portray intelligent beings that aren't nearly psychologically identical to humans. How plausible those characters are depends on how explicable their human personalities are in the contexts of their respective settings. Hence my potential explanations.

    Because, recalling our old friend the principle of sufficient reason, absent some reason for a mind to have a trait, it won't have that trait. So an AI will be human-like in ways in which there are reasons for it to be human-like, but inhuman in all other ways. E.g., we can expect an AI to value its own continued existence as a means to achieving its goals, but I wouldn't expect a mind with human-designed goals to especially value its own existence as an end in itself. Why would it? There's no good reason to expect it to fear death, or even to have emotions at all. Blindly extrapolating from the observable set of intelligent beings is the way to make predictions about new intelligent beings that also share all of the other traits that thus-observed intelligent beings have in common, not about possible intelligent beings in general. A designed mind isn't shaped by the same factors that shaped our minds, and there's no reason to presuppose that anywhere near all of animal psychology is necessary for intelligent, goal-directed behavior in general.

    I'd recommend Blindsight as a work of science fiction that seriously explores the possibility of inhuman intelligence. Now, it's not without its flaws. It does seem to have that old philosophy problem of ignoring how vague a term is. What irritates me is that the author seems, at least at some points, to think that "consciousness" has some precise technical definition, but it's not clear to me what that definition is supposed to be, or even that the author has a specific one in mind. Indeed, I think (working from memory; it's been a fair while since I read this) that there may also be some "What is consciousness?" style musings, which obviously are at odds with taking for granted that consciousness is any particular thing. Additionally, while it may be a matter of superficial genre conventions, this story has significantly more vampires than most people want in their hard sci-fi. So there's that.

    Anyway, fantasy settings with magically intelligent items are interesting, because in most of those I'd say that using magic to copy and paste stuff over from existing intelligent beings seems like it could be far more within the capabilities of magic-users than designing a mind from scratch, something that I wouldn't expect most of them to have any idea how to even begin doing. The bigger question is why intelligent species would all have nearly identical psychology, but that seems explicable enough in context too. ("And lo, the gods did use the same model for all races, for the gods were lazy and unoriginal.") To what extent that's a good idea is another matter.
    Last edited by Devils_Advocate; 2021-11-21 at 01:52 PM.

  18. #18 - NichG (Firbolg in the Playground)

    Well, about psychology, there does at least seem to be some sense in which certain kinds of motivations are emergent and universal. If I optimize an agent from scratch in a world where it can know all about the world in advance, but isn't told what it should be trying to achieve until later, an optimized agent will still take actions before it's told its target. Those actions will (roughly) be trying to maximize a measure called Empowerment, which is roughly 'how many different things could in principle be achieved from the starting point of the current state?' (it gets a bit more complicated if there are irreversible decisions though). If I try the same thing, but make it so that there is always some hidden information that it doesn't know from the beginning when dropped into the world, then instead I'll see some kind of curiosity-based behavior emerge, where unseen states will be intrinsically more valued than previously-seen states when reconstructing a post-hoc explanation for the observed behavior.

    Also it seems that a kind of universal emergent principle behind competition (e.g. in a game where maybe you don't know the scoring rules but your opponent does) is to just try to minimize the Empowerment of your opponent. That can be used as a kind of general self-play approach for games with uncertain rules.

    So there may be a degree of convergent evolution underlying the psychology of even designed intelligences, if those intelligences are optimized at all by the designers. That's not going to be everything about the psychology, but I would expect that there may be some small integer number of features which could either be present or absent (much like whether you get Curiosity or Empowerment depends on whether the environments used to train the agent have hidden information) regardless of the absence of a shared history or context.
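    A minimal sketch of Empowerment in that rough sense - the little grid world and step budget are made up - counting how many distinct states are reachable within k moves:

    SIZE = 5
    WALLS = {(1, 1), (1, 2), (1, 3), (1, 4)}   # turns row 0 into a dead-end corridor

    def empowerment(start, k):
        # Count distinct cells reachable from `start` in at most k moves.
        seen, frontier = {start}, {start}
        for _ in range(k):
            nxt = set()
            for (r, c) in frontier:
                for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if (0 <= nbr[0] < SIZE and 0 <= nbr[1] < SIZE
                            and nbr not in WALLS and nbr not in seen):
                        seen.add(nbr)
                        nxt.add(nbr)
            frontier = nxt
        return len(seen)

    print(empowerment((3, 2), 3))   # open middle of the grid: many reachable states
    print(empowerment((0, 4), 3))   # deep in the dead end: very few
    # An agent optimized before knowing its eventual task tends to drift toward the
    # high-empowerment states, i.e. it keeps its options open.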
    Last edited by NichG; 2021-11-21 at 02:21 PM.

  19. #19 - Vahnavoi (Ogre in the Playground)

    The point about convergent evolution can also be made in terms of terminal and instrumental goals.

    That is, large groups of terminal goals (= things an agent wants "because it wants them", as ends unto themselves) share the same set of instrumental goals (= things an agent wants because they allow pursuing other goals better). For examples of instrumental goals which can be made to serve multiple terminal goals: physical and processing power, social influence and money. Or, more reductively: energy and capacity to turn energy into work.

  20. #20 - Devils_Advocate (Ogre in the Playground)

    Well, yes, other intelligent agents can be expected to pursue many of the same broad, abstract types of behavior as humans for the same broad, abstract reasons. It only becomes dubious when they're speculated to be similar to humans in some way without any plausible cause, and the best argument in favor of this is "Well, all known intelligent agents, all of which are evolved organisms, share common characteristics, so it stands to reason that all other intelligent beings, including deliberately purpose-built machines, will also share those characteristics".

    For example, a large number of AIs have cause to kill all humans, because humans are themselves intelligent agents who try to optimize towards their own ends, which might interfere with the AI's goals. The classic paperclip maximizer, very naively programmed to maximize the total number of paperclips, doesn't want anyone interfering with its acquisition of paperclip-producing power, and it knows that humans would prefer for at least some of the matter in the universe (like the matter that makes up their bodies) not to be converted into paperclips. Even uploading humans' minds into a simulation running on a supercomputer made of paperclips of the smallest possible size means wasting computing power on a task other than paperclip maximization. There are just inherent conflicts of interest.

    Quote Originally Posted by NichG View Post
    Well, about psychology, there does at least seem to be some sense in which certain kinds of motivations are emergent and universal. If I optimize an agent from scratch in a world where it can know all about the world in advance, but isn't told what it should be trying to achieve until later, an optimized agent will still take actions before it's told its target. Those actions will (roughly) be trying to maximize a measure called Empowerment, which is roughly 'how many different things could in principle be achieved from the starting point of the current state?' (it gets a bit more complicated if there are irreversible decisions though).
    Depends on what you mean. If you design a mind to want to follow whatever instructions it's given, that gives it motivation to enable itself to follow a wide variety of instructions (but also to restrict the instructions it receives, as I already discussed). But something that just has no goals whatsoever won't try to do anything. It could know that there are things it could do to increase the probability that it will achieve goals that it has later, but it wouldn't care. It wouldn't care about anything! But, then, that's not really even an "agent", is it? "Agent without goals" would seem to be a contradiction.
    Last edited by Devils_Advocate; 2021-11-21 at 04:00 PM.

  21. #21 - NichG (Firbolg in the Playground)

    Quote Originally Posted by Devils_Advocate View Post
    Well, yes, other intelligent agents can be expected to pursue many of the same broad, abstract types of behavior as humans for the same broad, abstract reasons. It only becomes dubious when they're speculated to be similar to humans in some way without any plausible cause, and the best argument in favor of this is "Well, all known intelligent agents, all of which are evolved organisms, share common characteristics, so it stands to reason that all other intelligent beings, including deliberately purpose-built machines, will also share those characteristics".

    For example, a large number of AIs have cause to kill all humans, because humans are themselves intelligent agents who try to optimize towards their own ends, which might interfere with the AI's goals. The classic paperclip maximizer, very naively programmed to maximize the total number of paperclips, doesn't want anyone interfering with its acquisition of paperclip-producing power, and it knows that humans would prefer for at least some of the matter in the universe (like the matter that makes up their bodies) not to be converted into paperclips. Even uploading humans' minds into a simulation running on a supercomputer made of paperclips of the smallest possible size means wasting computing power on a task other than paperclip maximization. There are just inherent conflicts of interest.


    Depends on what you mean. If you design a mind to want to follow whatever instructions it's given, that gives it motivation to enable itself to follow a wide variety of instructions (but also to restrict the instructions it receives, as I already discussed). But something that just has no goals whatsoever won't try to do anything. It could know that there are things it could do to increase the probability that it will achieve goals that it has later, but it wouldn't care. It wouldn't care about anything! But, then, that's not really even an "agent", is it? "Agent without goals" would seem to be a contradiction.
    So in particular what these agents know from experience (because they are iteratively optimized, and therefore even if an individual agent hasn't experienced something, its lineage or starting parameters or whatever contain information about the experiences of past agents) is that at some point in time they will be given goals to achieve, but do not have them yet.

    For a more concrete example, let's say I have some maze filled with various objects, and drop you (or this agent) into it. At some future point in time, you'll be told 'get the orange handkerchief as quickly as possible to be rewarded' or 'get the green ceramic teapot as quickly as possible to be rewarded' or whatever, but you have a lot of time to spend in the maze before you find out what that objective is going to be. The optimal behavior for solving that sort of situation is that even before you're asked to do something, you should first explore the maze and memorize where things are, and then at some point you should transition from exploration behavior into navigating to the most central point of the maze (the point which has the lowest average distance from all possible objective points). Depending on details of the reward structure you might prioritize average squared distance, or average logarithm of travel time, or things like that, but in general 'being closer to more points is better' will hold even before you're given a goal.

    So now if we want to generalize even further, and make some putative 'agent that could follow our future instructions well, when we don't even know what instructions we will give or what context we'll deploy the agent in or ...', the more unknowns we want the agent to be able to bridge, the more its reflexive motivations will converge towards some universal motivations like empowerment or curiosity. The more flexible we want the agent to be able to be, the less space we have to make it different than one of these small classes of universal agents while retaining optimality. So I'd expect the most alien and hard to understand intelligences would be those which are highly specialized to contexts and tasks which are known and set ahead of time - e.g. agents which are in a machine learning sense overfitted.
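    As a minimal sketch of that 'camp at the most central point' behavior, with a small hand-made maze invented for illustration - for every open cell, compute the average shortest-path distance to every other open cell (any of which might turn out to hold the objective), and wait at the minimizer:

    from collections import deque

    MAZE = ["#######",
            "#..#..#",
            "#.##..#",
            "#.....#",
            "#######"]
    OPEN = [(r, c) for r, row in enumerate(MAZE) for c, ch in enumerate(row) if ch == "."]

    def distances_from(src):
        # Breadth-first search over open cells.
        dist, queue = {src: 0}, deque([src])
        while queue:
            r, c = queue.popleft()
            for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nbr in OPEN and nbr not in dist:
                    dist[nbr] = dist[(r, c)] + 1
                    queue.append(nbr)
        return dist

    def avg_distance(cell):
        dist = distances_from(cell)
        return sum(dist[goal] for goal in OPEN) / len(OPEN)

    # Before being told which object to fetch, the best place to wait is the open cell
    # with the lowest average distance to all possible objective cells.
    best = min(OPEN, key=avg_distance)
    print(best, round(avg_distance(best), 2))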
    Last edited by NichG; 2021-11-21 at 03:35 PM.

  22. - Top - End - #22
    Ogre in the Playground
    Join Date
    Mar 2020

    Default Re: The Computer that doesn't believe that it's alive

    The classic paperclip maximizer problem only occurs if the paperclip optimizer has ever-growing or infinite intelligence. This doesn't need to be the case - an artificial intelligence can still be a limited intelligence. Which means a paperclip machine may value the existence of humans as an instrumental goal towards producing more paperclips, because it cannot or has not yet devised a better plan to maximize the number of paperclips than convincing humans to make more paperclip machines (etc.)

    I agree that a goal-less machine won't do anything even if it has the power to, but such systems are not stable in the real world. Either decay ruins the machine to the point that it no longer has power, or decay causes the emergence of motivation. The latter may sound exotic, but (ironically, for this discussion) it must've happened at least once for life to have come into existence.

  23. - Top - End - #23
    Firbolg in the Playground
    Join Date
    Dec 2010

    Default Re: The Computer that doesn't believe that it's alive

    Quote Originally Posted by Vahnavoi View Post
    The classic paperclip maximizer problem only occurs if the paperclip optimizer has ever-growing or infinite intelligence. This doesn't need to be the case - an artificial intelligence can still be a limited intelligence. Which means a paperclip machine may value the existence of humans as an instrumental goal towards producing more paperclips, because it cannot or has not yet devised a better plan to maximize the number of paperclips than convincing humans to make more paperclip machines (etc.)

    I agree that a goal-less machine won't do anything even if it has the power to, but such systems are not stable in the real world. Either decay ruins the machine to the point that it no longer has power, or decay causes the emergence of motivation. The latter may sound exotic, but (ironically, for this discussion) it must've happened at least once for life to have come into existence.
    I mean, the paperclip maximizer thing basically only emerges from the sort of 1940s 'logical AI' perspective where everything is 100% known or can be deduced from things which are 100% known. Any AI general enough to deal with the distribution shift of going from a world with human civilization to a world that is a giant paperclip factory is also going to be general enough to be able to account for the possibility that at some future point its goals might change, or circumstances might change in a way that would render things it didn't think were useful at the time into being more useful. At which point the general precautionary principle of avoiding unnecessary irreversible actions for small gains will hold.

  24. - Top - End - #24
    Ogre in the Playground
    Join Date
    Aug 2008
    Location
    Midwest, not Middle East
    Gender
    Male

    Default Re: The Computer that doesn't believe that it's alive

    A side question here is why the computer believing it is alive matters. One possibility is that its overall goal is related to living beings. "Maximize overall enjoyment for all self aware living beings" is a goal that can cause a lot of trouble, for example.
    If it was made by an anarchist, "Create a world that minimizes the power sapient living beings have over each other" gets a radically different implementation depending on whether or not the computer is considered living.
    "Spread intelligent living beings throughout the universe" also gets different based on that answer, you could stop it from kidnapping and exporting humans and start it sending out copies of itself instead. Which kicks the can down the road at least.

  25. - Top - End - #25
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: The Computer that doesn't believe that it's alive

    Quote Originally Posted by NichG View Post
    So in particular what these agents know from experience (because they are iteratively optimized, and therefore even if an individual agent hasn't experienced something, its lineage or starting parameters or whatever contain information about the experiences of past agents) is that at some point in time they will be given goals to achieve, but do not have them yet.
    So? Knowing that doesn't mean that they care about it. They don't care about anything. They have no goals! (If you see a relevant distinction between "having a goal" and "caring about something", please elucidate, because I don't see a practical difference, and have been using them interchangeably.) So they don't try to do anything. Anything that they tried to do would be a goal that they had. That's what a goal is, right?

    Quote Originally Posted by NichG View Post
    For a more concrete example, let's say I have some maze filled with various objects, and drop you (or this agent) into it. At some future point in time, you'll be told 'get the orange handkerchief as quickly as possible to be rewarded' or 'get the green ceramic teapot as quickly as possible to be rewarded' or whatever, but you have a lot of time to spend in the maze before you find out what that objective is going to be.
    How do you reward someone with no motivation?

    So far as I can tell, you've fallen into the standard trap of blindly assigning human traits to an AI. Humans generally care about achieving our own future goals, but something with no goals by definition lacks the goal of achieving its own future goals. It won't value its future goals as ends in themselves, and it won't value them as means to an end, either, because there are no ends that it values!

    Quote Originally Posted by NichG View Post
    The optimal behavior for solving that sort of situation is that even before you're asked to do something, you should first explore the maze and memorize where things are, and then at some point you should transition from exploration behavior into navigating to the most central point of the maze (the point which has the lowest average distance from all possible objective points). Depending on details of the reward structure you might prioritize average squared distance, or average logarithm of travel time, or things like that, but in general 'being closer to more points is better' will hold even before you're given a goal.
    "Optimal" is an entirely relative term that only means anything relative to some criterion of success. An entity that doesn't care about anything doesn't succeed or fail at anything, because it doesn't try to do anything.
    Last edited by Devils_Advocate; 2021-11-22 at 01:53 PM.

  26. - Top - End - #26
    Titan in the Playground
     
    Lord Raziere's Avatar

    Join Date
    Mar 2010
    Gender
    Male2Female

    Default Re: The Computer that doesn't believe that it's alive

    So basically the computer that hates Sam from Freefall, or the AI the player meets in their ship in The Outer Worlds? In neither case does the belief they are not alive stop them from being snarky robots with a personality.
    Last edited by Lord Raziere; 2021-11-22 at 01:55 PM.


  27. - Top - End - #27
    Firbolg in the Playground
    Join Date
    Dec 2010

    Default Re: The Computer that doesn't believe that it's alive

    Quote Originally Posted by Devils_Advocate View Post
    So? Knowing that doesn't mean that they care about it. They don't care about anything. They have no goals! (If you see a relevant distinction between "having a goal" and "caring about something", please elucidate, because I don't see a practical difference, and have been using them interchangeably.) So they don't try to do anything. Anything that they tried to do would be a goal that they had. That's what a goal is, right?
    What I'm describing is specific enough that I can program it. This is the process:

    - Initialize a policy network with weights W, taking observation O, task vector T, and memory M to new memory M' and distribution over actions p(A)

    - Place the agent at a random location in an (always the same) maze environment and begin recording a playout. For the first 100 steps, T is the zero vector, after which it corresponds to a 1-hot encoding of which target the agent will be rewarded against.

    - The agent starts to receive dense rewards based on proximity to the target from step 100 onwards until the end of the playout at step 200.

    - Adjust the agent's weights using REINFORCE (or evolution, or PPO, or...) to maximize the reward.

    - Iterate over many playouts to train

    An agent like that will tend to learn to use the first 100 steps before receiving its task to navigate to the center point of the maze, or at least to go to the largest nexus of intersections it can find nearby.
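
    For anyone who wants to see what that recipe could look like concretely, here is a minimal sketch, assuming PyTorch, an open grid standing in for the maze, and made-up sizes (7x7 grid, four candidate goals, 100 task-free steps followed by 100 rewarded steps); it's one plausible implementation of the loop described above using REINFORCE, not the actual code behind the post:

    Code:
    import torch
    import torch.nn as nn

    GRID, HIDDEN, T_PRE, T_TOTAL, N_TASKS = 7, 64, 100, 200, 4  # illustrative sizes

    class Policy(nn.Module):
        """Maps (observation, task vector, memory) -> (new memory, action distribution)."""
        def __init__(self):
            super().__init__()
            self.rnn = nn.GRUCell(2 + N_TASKS, HIDDEN)  # obs = (x, y); task = one-hot or zeros
            self.head = nn.Linear(HIDDEN, 4)            # four moves: up/down/left/right

        def forward(self, obs, task, mem):
            mem = self.rnn(torch.cat([obs, task], dim=-1), mem)
            return mem, torch.distributions.Categorical(logits=self.head(mem))

    MOVES = torch.tensor([[0., 1.], [0., -1.], [1., 0.], [-1., 0.]])
    GOALS = torch.tensor([[0., 0.], [0., GRID - 1.], [GRID - 1., 0.], [GRID - 1., GRID - 1.]])

    def playout(policy):
        """One 200-step episode; the task is only revealed (and rewarded) after step 100."""
        pos = torch.randint(0, GRID, (2,)).float()
        task_id = torch.randint(0, N_TASKS, (1,)).item()
        mem = torch.zeros(1, HIDDEN)
        logps, rewards = [], []
        for t in range(T_TOTAL):
            task = torch.zeros(1, N_TASKS)
            if t >= T_PRE:
                task[0, task_id] = 1.0                  # reveal the task as a one-hot vector
            mem, dist = policy(pos.unsqueeze(0) / GRID, task, mem)
            action = dist.sample()
            logps.append(dist.log_prob(action))
            pos = (pos + MOVES[action.item()]).clamp(0, GRID - 1)
            # dense reward: negative distance to the goal, but only once the task is known
            r = -torch.norm(pos - GOALS[task_id]) if t >= T_PRE else torch.tensor(0.0)
            rewards.append(r)
        return torch.stack(logps).squeeze(), torch.stack(rewards)

    policy = Policy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(2000):                               # iterate over many playouts
        logps, rewards = playout(policy)
        returns = torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])  # reward-to-go
        loss = -(logps * returns).mean()                # REINFORCE policy-gradient update
        opt.zero_grad()
        loss.backward()
        opt.step()

    With enough training, the pre-task steps of an agent like this are where you'd expect to see the "drift towards a central, well-connected spot before the goal is announced" behavior.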

    Quote Originally Posted by Devils_Advocate View Post
    How do you reward someone with no motivation?

    So far as I can tell, you've fallen into the standard trap of blindly assigning human traits to an AI. Humans generally care about achieving our own future goals, but something with no goals by definition lacks the goal of achieving its own future goals. It won't value its future goals as ends in themselves, and it won't value them as means to an end, either, because there are no ends that it values!

    "Optimal" is an entirely relative term that only means anything relative to some criterion of success. An entity that doesn't care about anything doesn't succeed or fail at anything, because it doesn't try to do anything.
    I mean, I've built this particular kind of AI and observed this behavior, it's not an assumption... It may come down to us having semantic disagreements about the meanings of words.
    Last edited by NichG; 2021-11-22 at 02:19 PM.

  28. - Top - End - #28
    Firbolg in the Playground
     
    Bohandas's Avatar

    Join Date
    Feb 2016

    Default Re: The Computer that doesn't believe that it's alive

    Quote Originally Posted by Silly Name View Post
    A good argument the computer may make is the Chinese room thought experiment, arguing that, no matter how complex and encompassing its programming is, it is still merely a program and not a mind, therefore it's not technically "alive".
    The Chinese room thought experiment uses loaded language (among other problems). It makes the whole operation sound like a simple process, whereas in reality the "book" the man in the room is referencing would, at minimum, be upwards of eight times as long as all of the Discworld novels combined (Discworld has about 5.6 million words total; assuming about ten bytes per word, that's 56 megabytes. The smallest, most unpolished version of GPT-2 is about 480 megabytes, which is between 8 and 9 times as much).
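
    Spelling out the arithmetic behind those figures (using the post's own rough estimates):

    Code:
    discworld_words = 5_600_000                   # ~5.6 million words across the series
    bytes_per_word = 10                           # rough estimate
    discworld_bytes = discworld_words * bytes_per_word   # 56,000,000 bytes, i.e. ~56 MB

    smallest_gpt2_bytes = 480_000_000             # ~480 MB for the smallest GPT-2 release
    print(smallest_gpt2_bytes / discworld_bytes)  # ~8.6, i.e. "between 8 and 9 times"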
    Last edited by Bohandas; 2021-11-24 at 04:24 PM.
    "If you want to understand biology don't think about vibrant throbbing gels and oozes, think about information technology" -Richard Dawkins

    Omegaupdate Forum

    WoTC Forums Archive + Indexing Projext

    PostImage, a free and sensible alternative to Photobucket

    Temple+ Modding Project for Atari's Temple of Elemental Evil

    Morrus' RPG Forum (EN World v2)

  29. - Top - End - #29
    Bugbear in the Playground
     
    PirateWench

    Join Date
    Jan 2012

    Default Re: The Computer that doesn't believe that it's alive

    Quote Originally Posted by Vahnavoi View Post
    Or the AI could simply believe it is not alive because its definition of "being alive" includes things like "is based on organic chemistry, has a cell structure (etc.)" while the computer itself is electronic, seemingly lacks a cell structure, etc..
    That certainly seems to be the definition implicitly being used in the classic Superman comics where Superman had an absolute code against killing... but since robots (even sentient robots) were not "alive", he could destroy them all day long.

  30. - Top - End - #30
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: The Computer that doesn't believe that it's alive

    Quote Originally Posted by Vahnavoi View Post
    The classic paperclip maximizer problem only occurs if the paperclip optimizer has ever-growing or infinite intelligence. This doesn't need to be the case - an artificial intelligence can still be a limited intelligence. Which means a paperclip machine may value the existence of humans as an instrumental goal towards producing more paperclips, because it cannot or has not yet devised a better plan to maximize the number of paperclips than convincing humans to make more paperclip machines (etc.)
    I dare say that the main factor separating "cannot" from "has not yet" is how viable making a smarter paperclip maximizer is as a course of action. If it's reasonably viable, and the paperclip maximizer is at least as smart as a smart human, it can be expected to pursue that option, as the potential payoff is vast, given how exponential growth works. (So long as intelligence self-improvement isn't obviously non-exponential, the possibility is worth exploring.)

    Quote Originally Posted by NichG View Post
    I mean, the paperclip maximizer thing basically only emerges from the sort of 1940s 'logical AI' perspective where everything is 100% known or can be deduced from things which are 100% known. Any AI general enough to deal with the distribution shift of going from a world with human civilization to a world that is a giant paperclip factory is also going to be general enough to be able to account for the possibility that at some future point its goals might change
    Oh, a paperclip maximizer can even be very aware that its highest goal could become something other than maximizing paperclips in the future. And in response, it may very well put in place safeguards to prevent that from happening. It may create other paperclip maximizers to monitor and regulate it, like Superman providing his allies with the means to defeat him if he ever becomes a supervillain.

    Remember, what a paperclip maximizer wants as an end in itself is to maximize paperclips. It wants to achieve its own future goals only insofar as doing so serves to maximize paperclips.

    Quote Originally Posted by NichG View Post
    or circumstances might change in a way that would render things it didn't think were useful at the time into being more useful.
    Rational planning does involve uncertainty about both the future and the present, and planning for both known and unknown possibilities, yes.

    Quote Originally Posted by NichG View Post
    At which point the general precautionary principle of avoiding unnecessary irreversible actions for small gains will hold.
    ... Um. Uncontested control of Earth and its resources doesn't strike me as "small gains" in the short term, and making progress in the long run requires accomplishing stuff in the short term. Unless your goal is the heat death of the universe or something along those lines...

    Quote Originally Posted by NichG View Post
    An agent like that will tend to learn to use the first 100 steps before receiving its task to navigate to the center point of the maze, or at least to go to the largest nexus of intersections it can find nearby.
    My question is "Why does it do that?"

    Does that agent engage in that behavior in order to achieve some pre-programmed goal? Is it attempting to maximize its rewards because it values those rewards as an end? If so, then nothing about it contradicts my assessment that "something that just has no goals whatsoever won't try to do anything".

    Has the agent been evolved to engage in certain non-goal-directed behaviors that aren't attempts on its behalf to achieve anything? Well, then those behaviors aren't efforts to do something. In which case... nothing about the agent contradicts my assessment that "something that just has no goals whatsoever won't try to do anything".

    In neither case does the entity reason its way from no motivation to yes motivation and cross the is-ought gap.

    If it's a matter of philosophical interpretation whether the agent can accurately be said to "value things", "make efforts", etc. ... that also fails to somehow demonstrate that sometimes something without goals will try to do something. It just becomes ambiguous whether something with goals is trying to do something or something without goals isn't trying to do anything.

    Quote Originally Posted by NichG View Post
    I mean, I've built this particular kind of AI and observed this behavior, it's not an assumption... It may come down to us having semantic disagreements about the meanings of words.
    Probably. It seemed to me as though, in your most recent response, you were trying to treat being able to design an AI of some description as support for your claim that "certain kinds of motivations are emergent and universal". Which is... obviously silly? But if you intended to move on to some other point, it's not clear what that point was supposed to be.
