  1. - Top - End - #1
    Ogre in the Playground
    Join Date
    Jan 2012

    Default I need help explaining VI vs AI.

    First off: I'm not sure if this is the right forum; if not, please move it to the correct one.

    Ok now to the point.

    I am getting ready to run a sci-fi game using a system of my own creation (it's closely related to the PDQ system, with some more crunch added).

    I got to the point where I had to explain that while artificial intelligence has not been created yet, virtual intelligence is pretty abundant.

    From the look on my players' faces, I knew they were confused, so I tried to explain it again.

    Once more, I found myself unable to coherently explain the difference without launching into a very long spiel.

    Can anybody give me a one- or two-paragraph summary of the differences that I can give to my players without their brains melting from my ability to NOT get my idea across in a coherent manner?

  2. - Top - End - #2
    Ogre in the Playground
    Join Date
    Mar 2020

    Default Re: I need help explaining VI vs AI.

    Well, I also have no idea what line in the sand you're trying to draw, because virtual intelligences are a proper subset of artificial intelligences. Specifically, a virtual intelligence is an artificial intelligence embedded into, and working in, a virtual environment, such as a video game. Well, technically, it could also be a human intelligence that's virtualized somehow, a la the Matrix.

    So, clarify: what's the distinction you're trying to make?

  3. - Top - End - #3
    Firbolg in the Playground
     
    Bohandas's Avatar

    Join Date
    Feb 2016

    Default Re: I need help explaining VI vs AI.

    I think he might mean hard vs. soft AI
    "If you want to understand biology don't think about vibrant throbbing gels and oozes, think about information technology" -Richard Dawkins

    Omegaupdate Forum

    WoTC Forums Archive + Indexing Project

    PostImage, a free and sensible alternative to Photobucket

    Temple+ Modding Project for Atari's Temple of Elemental Evil

    Morrus' RPG Forum (EN World v2)

  4. - Top - End - #4
    Librarian in the Playground Moderator
     
    LibraryOgre's Avatar

    Join Date
    Dec 2007
    Location
    San Antonio, Texas
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    Basically, there's no hard definition. "Virtual Intelligence", as normally applied, is an artificial intelligence that functions only in a virtual world... it does not interact with the real world, save through inputs from users.

    Mass Effect had VIs, but they were more like UIs... user interfaces. They used natural language (i.e. you could ask it questions without having to phrase it in terms that computers use), but had fairly limited databases of what they could do. Combat VIs could be used to run drones and the like, and they were similarly limited.... they could use Friend or Foe systems to acquire targets and fire on them, but they could not, for example, decide to take the elevator to surprise their target, unless that was programmed into them.

    This would contrast with Artificial General Intelligences, which are more like a Star Wars droid, or Commander Data. They can learn, extrapolate, and acquire information outside their original parameters. Avina, on the Citadel in Mass Effect, could be given access to a bigger database, but would never seek out new information or draw conclusions about what she knew.
    The Cranky Gamer
    *It isn't realism, it's verisimilitude; the appearance of truth within the framework of the game.
    *Picard management tip: Debate honestly. The goal is to arrive at the truth, not at your preconception.
    *Mutant Dawn for Savage Worlds!
    *The One Deck Engine: Gaming on a budget
    Written by Me on DriveThru RPG
    There are almost 400,000 threads on this site. If you need me to address a thread as a moderator, include a link.

  5. - Top - End - #5
    Colossus in the Playground
     
    BlackDragon

    Join Date
    Feb 2007
    Location
    Manchester, UK
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    And the specific reason Mass Effect used those is because AGIs were banned after what happened with the Geth.

    If you're trying to draw the same sort of distinction, I would say that an AGI is self-aware and can learn from experience, while a VI is fixed in what it can do--an AGI would recognise the trick if you were to tell it "This statement is a lie", while a VI probably wouldn't.
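    That "fixed in what it can do" vs. "learns from experience" line can be sketched in a few lines of code. Everything below is invented for illustration (the class names, the questions, the payoffs); it's only meant to show the shape of the distinction:

    ```python
    class FixedVI:
        """A 'virtual intelligence': a fixed question->answer table, nothing more."""
        def __init__(self, table):
            self.table = dict(table)

        def answer(self, question):
            return self.table.get(question, "I do not have that information.")

    class LearningAgent:
        """Learns from experience: tracks which of two actions pays off more."""
        def __init__(self):
            self.totals = {"a": 0.0, "b": 0.0}
            self.counts = {"a": 0, "b": 0}

        def choose(self):
            # Prefer the action with the higher observed average payoff.
            def avg(act):
                return self.totals[act] / self.counts[act] if self.counts[act] else 0.0
            return max(self.totals, key=avg)

        def observe(self, action, reward):
            self.totals[action] += reward
            self.counts[action] += 1

    vi = FixedVI({"Where is the armory?": "Deck 3."})
    print(vi.answer("Where is the armory?"))   # Deck 3.
    print(vi.answer("Should we retreat?"))     # falls back: not in its table

    agent = LearningAgent()
    for _ in range(10):
        agent.observe("a", 1.0)
        agent.observe("b", 0.2)
    print(agent.choose())  # picks "a": learned from outcomes, not programmed in
    ```

    The VI never does anything outside its table; the agent's behavior is shaped by what happened to it.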

  6. - Top - End - #6
    Ogre in the Playground
    Join Date
    Mar 2020

    Default Re: I need help explaining VI vs AI.

    An AI doesn't need to remotely approach self-awareness to learn from experience (a fairly simple neural network suffices) or to recognize logical paradoxes (a grammar checker can do that.) In fact, it would be fairly trivial right now to make a chatbot that recognizes and spits out sassy replies to any such smartassery, faking understanding when it's really just giving a statistically probable reply.
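    For instance, a toy version of that sassy chatbot really is trivial; the patterns and canned replies below are made up, and there is zero understanding anywhere in it:

    ```python
    import re

    # Pure pattern matching: each rule pairs a regex with a canned "sassy" reply.
    RULES = [
        (re.compile(r"this statement is a lie", re.I),
         "Cute. Shall I pretend to crash now, or would you like a real question?"),
        (re.compile(r"can you think", re.I),
         "I can produce sentences that look like the product of thinking. Close enough?"),
    ]

    def reply(message):
        for pattern, canned in RULES:
            if pattern.search(message):
                return canned
        # Fallback: a statistically safe non-answer.
        return "Interesting. Tell me more."

    print(reply("This statement is a lie."))
    print(reply("What's the weather?"))  # falls through to the generic reply
    ```

    It "handles" the liar paradox only because someone anticipated the string, which is exactly the point.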

  7. - Top - End - #7
    Titan in the Playground
     
    Brother Oni's Avatar

    Join Date
    Nov 2007
    Location
    Cippa's River Meadow
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    Quote Originally Posted by factotum View Post
    If you're trying to draw the same sort of distinction, I would say that an AGI is self-aware and can learn from experience, while a VI is fixed in what it can do--an AGI would recognise the trick if you were to tell it "This statement is a lie", while a VI probably wouldn't.
    Ghost in the Shell: Stand Alone Complex has an excellent example of the difference between a VI and an AI: the Tachikoma prank.

    The 'lady' is essentially an admin robot with a pretty face to help human interactions, while the mobile tanks are a nested intelligence (they have a shared memory which is synchronised daily, but have developed different personalities based on their shared memories).

    As I see it, a VI could range from a self driving car with a voice interface up to a Predator drone with the automatic target recognition and engagement protocols enabled.
    Neither are suddenly going to be capable of doing something different (and more importantly, want to do something different) like go out for a stroll and help a little girl find her lost dog along the way (which a tachikoma did in one episode).

  8. - Top - End - #8
    Troll in the Playground
     
    Lvl 2 Expert's Avatar

    Join Date
    Oct 2014
    Location
    Tulips Cheese & Rock&Roll
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    Schlock Mercenary uses the term synthetic intelligence for this concept, where synthetic means "kinda stupid".

    Examples include smart munitions that never wonder why their goal in life is to blow up, and a bus that prioritizes in an unexpected way by refusing to return to the docks until it's sure people will stop shooting at it.

    Typical of these SIs is that they can't really be reasoned with. Either something is a valid order, in which case they'll execute it, or it's not (for instance because a self-preservation protocol takes precedence) and they won't. But that's an internal decision. We also don't see SIs connect and talk to each other or meld into a fleetmind like the AIs in that universe like to do. An SI is a smart system that can operate on both hard computational logic and fuzzier organic-like thought processes, but it is strictly limited by the boundaries of its programming, with no way or intent to step outside them. Any learning it does is strictly in service of its task, and they are typically designed to limit the means the outside world has to influence or tamper with the program.

    In this world they also don't have to be worse at their task than a full-on AI. A mob-owned SI security system locked the smartest and most complex AI in the galaxy out of its computer system. It took long enough that it was not some kind of reflex or automated response: the system learned from the break-in, found a way to counter it, and acted on that, seemingly while keeping its owner updated on its progress, but without feeling the need to have a "chatting AIs" scene or a weakness to being reasoned out of doing its job.
    The Hindsight Awards, results: See the best movies of 1999!

  9. - Top - End - #9
    Colossus in the Playground
     
    BlackDragon

    Join Date
    Feb 2007
    Location
    Manchester, UK
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    Quote Originally Posted by Lvl 2 Expert View Post
    Schlock Mercenary uses the term synthetic intelligence for this concept, where synthetic means "kinda stupid".
    Thinking about it, the Culture from Iain M. Banks' books has the opposite attitude--for them, as soon as a task requires more than a certain level of computing power, they have to give the AI a personality; so the autopilot of a shuttle or the brain controlling a suit that allows a human to live in the high pressure and gravity of a gas giant atmosphere are both like that (much to the annoyance of the man wearing the suit!).

  10. - Top - End - #10
    Banned
    Join Date
    Feb 2014
    Location
    Denmark
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    In my book (which is to say, I'm making this up as I go along), an AI is an actual thinking machine, while a VI is a not-at-all-thinking machine, except that it will feel like it's thinking under casual testing. In other words, a really good chat engine might be called a VI, despite just being pattern-recognition software feeding expected answers back to you.

    An AI, by contrast, is radically different: an artificial mind capable of human thought processes (something I, as a side note, consider next to unimaginable).

  11. - Top - End - #11
    Firbolg in the Playground
    Join Date
    Dec 2010

    Default Re: I need help explaining VI vs AI.

    Modern AI research tends to draw the divide at the extent of generalization outside the training distribution.

    But from a story point of view, I'd draw the divide between internally and externally motivated agents. An internally motivated agent can decide what it wants and dynamically updates its motivations and goals. For an externally motivated agent this is held constant.

    There are other dividing lines, though. I'd say that a realistic sci-fi take on AI would be much more diverse than a binary breakdown. You'd have internally vs. externally motivated agents, reactive agents and planners, interpolators and extrapolators, internally homogeneous or heterogeneous agents, homeostatic and heterostatic, in all combinations.

    And there wouldn't be a simple line of 'these are people, those are not.'

  12. - Top - End - #12
    Bugbear in the Playground
    Join Date
    Aug 2014
    Location
    Ontario, Canada
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    The thing is, you yourself haven't given us any definition of what you think the difference between VI and AI is, or where you got the idea. The differences depend on the setting.
    It's like dragons versus wyverns; in some settings, the difference is the number of legs, or intelligence, or the ability to breathe fire, or there may not be a difference. There's no single "correct" answer.

    If we're talking about Mass Effect, though, an AI can learn and adapt itself to grow beyond the scope of its own programming. Like a true natural intelligence, it gains experience and becomes better at a task the more it performs that task. They can even learn to perform new tasks as needed. Think of them as you would any other being, just on a computer.
    A VI, on the other hand, is much closer to computers today. They have a strict area of expertise. They may be able to add more information when given it, but they won't ever learn or think on their own. They either know something or they don't.

    If you were to ask an AI to make an educated guess, it would do so, pausing only the briefest of moments to process the question and work out an answer.
    Ask the same of a VI, and - unless it was programmed to make guesses, like a weather prediction VI - it simply couldn't do so.
    That's all I can think of, at any rate.

    Quote Originally Posted by remetagross View Post
    All hail the mighty Strigon! One only has to ask, and one shall receive.

  13. - Top - End - #13
    Bugbear in the Playground
     
    Marillion's Avatar

    Join Date
    May 2009

    Default Re: I need help explaining VI vs AI.

    Quote Originally Posted by Lvl 2 Expert View Post
    A mob-owned SI security system locked the smartest and most complex AI in the galaxy out of its computer system. It took long enough that it was not some kind of reflex or automated response: the system learned from the break-in, found a way to counter it, and acted on that, seemingly while keeping its owner updated on its progress, but without feeling the need to have a "chatting AIs" scene or a weakness to being reasoned out of doing its job.
    That reminds me a little of Bart the Troll in The Witcher 3: Wild Hunt. A crime boss uses a troll to guard his vault, and despite Bart being a living, thinking being, you can't talk your way past him, bribe him, or trick him into leaving his post. His only vulnerability as a security system is being physically or magically overpowered. He faithfully executes his duties in exchange for food, and doesn't want, or even imagine, a life outside those parameters. He's smart enough to perform his task, but too stupid to be outsmarted; if his boss doesn't say you can go in the vault, you don't get to go in the vault. Using that as an example might make it easier for your players to understand the difference between VI and AI, if they're more familiar with fantasy than sci-fi.
    Quote Originally Posted by Xefas View Post
    I like my women like I like my coffee; 10 feet tall, incomprehensible to the human psyche, and capable of ending life as a triviality.

  14. - Top - End - #14
    Bugbear in the Playground
     
    MindFlayer

    Join Date
    Feb 2015

    Default Re: I need help explaining VI vs AI.

    Traditionally, EMACS was the text editor of choice for AI, not VI, but that's from the days when MIT was the center of AI research.

    Oh, you meant Virtual Intelligence. Never mind.

  15. - Top - End - #15
    Firbolg in the Playground
     
    Bohandas's Avatar

    Join Date
    Feb 2016

    Default Re: I need help explaining VI vs AI.

    Quote Originally Posted by Mark Hall View Post
    Basically, there's no hard definition. "Virtual Intelligence", as normally applied, is an artificial intelligence that functions only in a virtual world... it does not interact with the real world, save through inputs from users.
    So basically it's an AI, but not one that's in a robot?

    Like Agent Smith would be a VI but the Terminator wouldn't (and HAL 9000 would be kind of an edge case)?
    Last edited by Bohandas; 2020-06-01 at 11:03 PM.

  16. - Top - End - #16
    Librarian in the Playground Moderator
     
    LibraryOgre's Avatar

    Join Date
    Dec 2007
    Location
    San Antonio, Texas
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    Quote Originally Posted by Bohandas View Post
    So basically it's an AI, but not one that's in a robot?

    Like Agent Smith would be a VI but the Terminator wouldn't (and HAL 9000 would be kind of an edge case)?
    HAL 9000 would be an AI, because he interacts with the real world. He may not be in a humanoid body, but he's also got extensive control of the ship.

    Agent Smith would be a good example of a VI... the "outside world" he interacts with is entirely virtual.

  17. - Top - End - #17
    Ogre in the Playground
     
    ElfPirate

    Join Date
    Aug 2013

    Default Re: I need help explaining VI vs AI.

    Quote Originally Posted by Lvl 2 Expert View Post
    Schlock Mercenary uses the term synthetic intelligence for this concept, where synthetic means "kinda stupid".
    We also don't see SIs connect and talk to each other or meld into a fleetmind like the AIs in that universe like to do. An SI is a smart system that can operate on both hard computational logic and fuzzier organic-like thought processes, but it is strictly limited by the boundaries of its programming, with no way or intent to step outside them; any learning it does is strictly in service of its task, and they are typically designed to limit the means the outside world has to influence or tamper with the program.
    Actually, at one point three detonator SIs discuss whether they can blow up or not. They each monitor a different condition for detonation: entry to the ship, gravitics (i.e. whether it's moving), and, for the last one, a timer. They sort of discuss the deeper meaning of free will (no, not really), wanting to take the one action they are allowed, i.e. detonating. The irony is that they are still talking when the timer goes. I would say the choice to detonate here is a red herring: they have set parameters and cannot go against their programming, which in this case manifests as a discussion about whether they should. It's also a joke the author thought of, so it may be no more of an indicator than that. But I think it again provides an insight into the "kinda stupid" aspect of smart missiles.

    At other points targeting SIs end up communicating with each other or other entities. The kinda stupid tends to show up in these SI viewpoints as they invariably complain about not being allowed to blow up yet.

    In the Schlockverse there's a scale for how "smart" an AI is (in part a measure of how much processing capacity it has), and I think there's a threshold of about 1 on that scale where you count as a sentient AI (5 is already very smart; there's no upper limit as far as I'm aware).

    As you say, the SIs are limited to working within the boundaries of their programming. AIs can push those boundaries or work around them, though not necessarily always break them.

  18. - Top - End - #18
    Bugbear in the Playground
    Join Date
    Sep 2013

    Default Re: I need help explaining VI vs AI.

    Sounds more like an expert system than true AI - https://en.wikipedia.org/wiki/Expert_system
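    For the curious, the core of a classic expert system really is just a pile of if-then rules plus a loop that keeps firing them until nothing new can be concluded. A minimal invented example (the facts and rules are placeholders, not from any real system):

    ```python
    # Forward-chaining inference: each rule is (set of required facts, conclusion).
    rules = [
        ({"engine_cranks", "no_spark"}, "check_ignition_coil"),
        ({"engine_cranks", "has_spark", "no_fuel"}, "check_fuel_pump"),
        ({"check_ignition_coil"}, "recommend_coil_replacement"),
    ]

    def infer(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                # Fire the rule if all its conditions hold and it adds something new.
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    result = infer({"engine_cranks", "no_spark"})
    print("recommend_coil_replacement" in result)  # True: chained through two rules
    ```

    Note it "diagnoses" by chaining rules someone wrote down; nothing outside the rule base can ever be concluded, which is exactly the VI-ish limitation being discussed.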

  19. - Top - End - #19
    Ettin in the Playground
     
    Kobold

    Join Date
    May 2009

    Default Re: I need help explaining VI vs AI.

    At the risk of coming across as cynical: "true AI" is something that is, and always will be, many years away yet. Any AI that actually exists will always be considered not "true AI", and so people will think of another name for it.

    This is based on observing, in my lifetime, how the definition of "AI" changes every time it gets fulfilled. Once upon a time we would have considered "learning" as the hallmark of intelligence, but then neural networks came along and suddenly it was nothing special. Once we would have called an algorithm that could autonomously fly a plane (including takeoff, landing and manoeuvres in between) "intelligent", but as soon as they actually existed, we "recognised" that they're nothing of the sort. Once we thought that a machine that could pass the Turing test could be considered "intelligent"; but now bots that can do that are commonplace, they've become boring, and nobody even talks about the Turing test any more.
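    For what it's worth, the "learning" that was once the hallmark of intelligence now fits in a dozen lines. Here's a bare perceptron learning OR from examples (a standard textbook exercise, not anyone's product):

    ```python
    # Train a single perceptron on the OR truth table.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    lr = 0.1        # learning rate

    for _ in range(20):  # a few passes over the data is plenty for OR
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge weights toward reducing the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err

    print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
    # [0, 1, 1, 1] -- it has "learned" OR, and nobody calls this intelligent anymore
    ```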

    There is huge resistance to admitting "actually, that's pretty smart" of a machine. Part of that is justifiable scepticism, but a (in my opinion, larger) part can better be described as collective egotism: we are just loth to admit the possibility that, actually, intelligence isn't a magical property that goes beyond mere physical cause and effect. This is our generation's 'Descent of Man': we just can't bring ourselves to admit that we're not that special.
    "None of us likes to be hated, none of us likes to be shunned. A natural result of these conditions is, that we consciously or unconsciously pay more attention to tuning our opinions to our neighbor’s pitch and preserving his approval than we do to examining the opinions searchingly and seeing to it that they are right and sound." - Mark Twain

  20. - Top - End - #20
    Colossus in the Playground
     
    BlackDragon

    Join Date
    Feb 2007
    Location
    Manchester, UK
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    Quote Originally Posted by veti View Post
    Once we thought that a machine that could pass the Turing test could be considered "intelligent"; but now bots that can do that are commonplace, they've become boring, and nobody even talks about the Turing test any more.
    Just to note, Turing himself did not create his test with the intent of proving that a machine could think, only that it could simulate a human being. I'm sure he was well aware that a sufficiently complicated expert system (although he wouldn't have called it that, obviously) could fool people into thinking it was human without ever actually "thinking".

  21. - Top - End - #21
    Troll in the Playground
     
    Lvl 2 Expert's Avatar

    Join Date
    Oct 2014
    Location
    Tulips Cheese & Rock&Roll
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    Quote Originally Posted by veti View Post
    At the risk of coming across as cynical: "true AI" is something that is, and always will be, many years away yet. Any AI that actually exists will always be considered not "true AI", and so people will think of another name for it.

    This is based on observing, in my lifetime, how the definition of "AI" changes every time it gets fulfilled. Once upon a time we would have considered "learning" as the hallmark of intelligence, but then neural networks came along and suddenly it was nothing special. Once we would have called an algorithm that could autonomously fly a plane (including takeoff, landing and manoeuvres in between) "intelligent", but as soon as they actually existed, we "recognised" that they're nothing of the sort. Once we thought that a machine that could pass the Turing test could be considered "intelligent"; but now bots that can do that are commonplace, they've become boring, and nobody even talks about the Turing test any more.

    There is huge resistance to admitting "actually, that's pretty smart" of a machine. Part of that is justifiable scepticism, but a (in my opinion, larger) part can better be described as collective egotism: we are just loth to admit the possibility that, actually, intelligence isn't a magical property that goes beyond mere physical cause and effect. This is our generation's 'Descent of Man': we just can't bring ourselves to admit that we're not that special.
    A very similar observation can be made about animal behavior. People keep pointing to aspects of our thinking that make humans unique among all animals, and then later that aspect of thinking is found in another species.

    The difference with the AI situation, of course, being that animal intelligence is typically more comparable to ours; we mostly just have more of it. It's much harder to properly compare computers to humans. You sometimes hear a claim like "in 2050, a single computer will be smarter than all humans" (or whatever year and benchmark they pick), but how do you even compare those? What's happening at the basic flip-flop/cellular level is completely different. You don't even need electronics to build a calculator that can (given a human providing the right input, and as long as you pick the right calculations) work faster than the most brilliant mathematician. Similarly, it's far from unthinkable that for the foreseeable future the machines will stay worse than us at other specific tasks.

  22. - Top - End - #22
    Firbolg in the Playground
    Join Date
    Dec 2010

    Default Re: I need help explaining VI vs AI.

    Working in AI definitely makes me feel like humans aren't all that smart, and it gives me an appreciation for the illusions we buy into about our own intelligence. There are all sorts of results showing that actions people believe correspond to conscious volition can be performed in a medically induced sleep, or can be confirmed to take place before the post-hoc "conscious process" explanation that the person feels is true could actually have happened.

    At the same time, I think a big problem with AI is that we're terribly uncreative about what we ask machines to do. A lot of the recent advances that seem shockingly fast don't have to do with improved hardware or better architecture design; they reformulate the thing we choose to ask the machine to learn. AlphaGo went from "we have to ask the machine to maximize its reward" to "we should ask the machine to imitate an improved version of itself". Style transfer came from the insight that rather than asking the network to match a target image pixel by pixel, we should instead ask it to match the aggregate statistics of a target image. Deep dream was the realization that we can take the learning signal we use to train the network and instead use it to train the network's input. GANs were the realization that directly asking a model to output a particular high-dimensional probability distribution is incredibly hard, but asking two models to fight over whether that distribution has been successfully matched is much easier.
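    That deep-dream trick (apply the training signal to the input, not the weights) can be shown in miniature. The "network" below is just a frozen random linear layer, a stand-in I'm making up, not a real vision model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 8))      # frozen "network": maps 8-dim input -> 4 units
    x = np.zeros(8)                  # the input is what we will "train"
    target_unit = 2                  # maximize this unit's activation

    for _ in range(100):
        # Gradient of activation[target_unit] w.r.t. x is just the row W[target_unit],
        # so gradient ascent on the input is a repeated nudge along that row.
        x += 0.1 * W[target_unit]

    print((W @ x)[target_unit])  # grows steadily: the input now "excites" unit 2
    ```

    With a real convolutional network the same move (freeze weights, ascend the gradient with respect to the image) is what produces the dream-like pictures.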

    So there's certainly other failures in our current imagination of what we could ask a machine to do. How would we ask a machine to choose its own motivations? What would even be the measure we'd use to know that we succeeded or failed in getting such a thing?

    In the end, I think a lot of the barriers to calling something 'AI' have to do with the realization that we've failed to imagine a thing we could ask for, or that we can't quite put what we want in concrete terms. So when we explore the limits of what we've previously wanted, only then do we realize that there's something we missed. And it's not necessarily because there's some brilliant idea or magic sauce or physiologically realistic neuroscience detail we missed - it's because we don't even know how to ask the question we want to solve. It's not humans being or having something special that can't be copied.
    Last edited by NichG; 2020-06-10 at 10:11 AM.

  23. - Top - End - #23
    Banned
    Join Date
    Feb 2014
    Location
    Denmark
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    The biggest problem with AI is that it's all 1's and 0's. Anything AI can do, can be done on a piece of paper. It can be done with gears. It can be done by people, passing notes to one another. So how far are we willing to think that can go?

    No one questions that given outside input, a simple computer can do advanced math, just using 1's and 0's. But then, no one argues that simple computers 'think'. More advanced computers play chess at the very highest level, but no one argues that those computers 'think'. They just calculate, a lot, and fast.

    Computers have been known to hold long and seemingly meaningful conversations. But no one really argues those computers think, either. There are all sorts of clever predictive algorithms and problem solvers; there are examples of computers doing what humans had failed to do (designing a protein for detergent is an example that was famous when I was younger).

    I think a lot of what we think of as "thinking" can be copied by 1's and 0's. Observing nature and mapping what's going on: that, I believe, has more to do with repetition and pattern recognition than anything else. Actually understanding what's going on is another matter; I have my doubts computers will ever do that. Again, anything a computer can do can be done on paper. If so, will the paper then "understand"? Really?
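    The paper-and-pencil claim is literally true, and easy to make concrete: here's a tiny Turing-style machine that increments a binary number, where every step is a table lookup you could do by hand (the machine definition is my own toy example, not from any source):

    ```python
    # (state, symbol read) -> (symbol to write, head move, next state)
    RULES = {
        ("right", "0"): ("0", +1, "right"),   # scan right to the end of the number
        ("right", "1"): ("1", +1, "right"),
        ("right", " "): (" ", -1, "carry"),   # hit the blank: start adding 1
        ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry continues left
        ("carry", "0"): ("1", 0, "halt"),     # 0 + carry = 1, done
        ("carry", " "): ("1", 0, "halt"),     # overflow into a new leading digit
    }

    def run(tape_str):
        tape = list(" " + tape_str + " ")     # pad with blanks on both sides
        pos, state = 1, "right"
        while state != "halt":
            write, move, state = RULES[(state, tape[pos])]
            tape[pos] = write
            pos += move
        return "".join(tape).strip()

    print(run("1011"))  # 1011 + 1 = 1100
    ```

    Whether the person doing those lookups by hand thereby "understands" binary arithmetic is exactly the question being raised.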

    But there's more. We humans 'want'. What we want is as varied as the number of humans out there, but we have motivation. It's evident in everything we do, everything we've created. I will personally take any bet that computers will never 'want'. They will never have volition, never act on their own accord, never make their own plans, have their own ideas.

    Of course, I could be wrong. That's fine, that's just an observation. But if I am, it raises interesting questions about our own 'want'.

  24. - Top - End - #24
    Bugbear in the Playground
    Join Date
    Nov 2013

    Default Re: I need help explaining VI vs AI.

    Anything you can do can be done on paper, given sufficient knowledge, a ridiculous amount of paper, and lots and lots of time. Brains aren't magic; they're complex and messy, but they follow normal physics, and you could theoretically take one state of a brain and then manually work through what happens in it to replicate a thought.

    Most declarations that it is impossible for computers to ever think tend to imply that humans aren't thinking either, or fall back on magic.

  25. - Top - End - #25
    Troll in the Playground
     
    Lvl 2 Expert's Avatar

    Join Date
    Oct 2014
    Location
    Tulips Cheese & Rock&Roll
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    Quote Originally Posted by Kaptin Keen View Post
    The biggest problem with AI is that it's all 1's and 0's.
    Brain cells don't work with 0's and 1's, but they are not incomprehensibly complex either. (Essentially, they produce a single quantitative output signal based on dozens of quantitative input signals, each weighed as either a positive or a negative factor of a specific importance, with the output continuously adjusted to changes in input. EDIT: Although the complexity increases a bit further when you include their ability to continuously update which other cells they are connected to, making them part of the continuous bottom-up reprogramming of the brain.) It's not a process that can only be recreated by a biological brain cell, let alone a human one. The more complex human behaviors are all emergent properties: qualities that do not exist in the parts but arise only from how the parts come together as a whole. Nothing in that argument says similar complex properties could not be attained starting from binary components like flip-flops, or indeed from a patient enough person writing things down on pieces of paper (the emergent behavior would just be reaaaaaaaally slow in that last case).
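    That description of a brain cell (many weighted inputs, some positive, some negative, one continuously adjusted output) is more or less exactly what an artificial neuron is. A sketch, with arbitrary illustration values:

    ```python
    import math

    def neuron(inputs, weights, bias):
        # Weighted sum of inputs: positive weights excite, negative weights inhibit.
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        # Squash into a smooth "firing rate" between 0 and 1.
        return 1.0 / (1.0 + math.exp(-total))

    inputs = [0.9, 0.1, 0.7]
    weights = [1.5, -2.0, 0.8]   # the middle input inhibits, the others excite
    print(neuron(inputs, weights, bias=-0.5))
    ```

    A real neuron adds timing, chemistry, and rewiring on top, but the weighted-sum core is the part the argument needs: nothing in it demands biology.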

    Stuff like emotions, including basic wants and motivation, are actually as far as I can tell some of the least ridiculously complex emergent behaviors in the human brain (which is to say: still ridiculously complex, but most other stuff is worse). A lot of animals with less complex brains have some version of motivation, like how a lizard is typically not going to sit around waiting for your cat to eat it. That behavior is related to a very similar neurotransmitter and hormone cocktail to those that govern our heroic last stands, romantic endeavors and epic journeys around the world. Even plants can be attributed something like this sometimes if you squint right.

    Computers used to be different, at least on a philosophical level, because we knew what made them tick. My computer starts up Word when I click that icon because I clicked that icon. If it does not start Word, the problem is not that my computer didn't feel like it today; it's just broken, because we know what the computer should be doing. But recent advances in machine learning have brought us onto fuzzy grounds. We now have computer programs whose inner workings nobody understands anymore. And it's not just that we can't precisely tell what this or that command does; all of it is literally unreadable, and there's no way to convert any part of the program back into a readable programming language anymore. So if an algorithm grown through machine learning doesn't start my Word one day as I click the icon, does that still make it defective? Could it not starting Word be part of the learned behavior? Do we have a way to tell the difference? And if we don't have a way to tell the difference, like we have no way to tell the difference when you or I decide not to start up Word one day, is it a reasonable approximation to make that the program made a decision?
    Last edited by Lvl 2 Expert; 2020-06-10 at 04:07 PM.
    The Hindsight Awards, results: See the best movies of 1999!

  26. - Top - End - #26
    Firbolg in the Playground
    Join Date
    Dec 2010

    Default Re: I need help explaining VI vs AI.

    It's easy to give something a want, either at the learning stage or the behavior stage or both (and they can be different!), but the hard part is deciding how wants should evolve and change over time. And the even harder part is to make something that discovers the wants that it should have, without having to explicitly define those wants as the builder of the thing.

    For example, there are a number of methods for implementing curiosity or empowerment motivated agents, and of course you can use things like reward functions, and those all tend to operate at the level of learning in practice - that is, what the agent learns is a behavior that happens to satisfy the want, but the behavior itself often doesn't derive directly from the want. Another approach is to instead teach the agent just to identify how good a certain thing would be for a specific goal, without making the goal itself part of the training. That gets more flexible behaviors - the agent can change its goals on the fly without losing its general competency or action policy. And of course you can do all sorts of exotic stuff like having the learning dynamics want one thing while the behavioral dynamics want another (useful in some meta-learning applications where you want to learn a specific kind of generalization behavior, for example).
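    As a concrete toy illustration of that first case, where the want lives only at the learning stage: a tiny bandit learner (payout numbers and hyperparameters below are arbitrary). The reward signal shapes the action values during training; afterwards the greedy policy just acts, and the "want" appears nowhere in the behavior itself:

```python
import random

def train_bandit(payouts, steps=5000, eps=0.1, seed=0):
    """Learn action values from a noisy reward signal. The reward
    function (the 'want') only exists inside this training loop."""
    rng = random.Random(seed)
    q = [0.0] * len(payouts)  # estimated value of each action
    n = [0] * len(payouts)    # times each action was tried
    for _ in range(steps):
        # Epsilon-greedy: mostly exploit, occasionally explore.
        if rng.random() < eps:
            a = rng.randrange(len(q))
        else:
            a = max(range(len(q)), key=q.__getitem__)
        r = rng.gauss(payouts[a], 1.0)  # noisy reward
        n[a] += 1
        q[a] += (r - q[a]) / n[a]       # incremental mean update
    return q

q = train_bandit([1.0, 2.5, 0.5])
best = max(range(3), key=q.__getitem__)  # the learned behavior
```

    After training, `best` just is what the agent does; to make it want something else, you have to retrain, which is exactly the inflexibility the goal-conditioned approach avoids.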

    But all of this is fairly complex engineering where you're deciding explicitly what the agent 'is supposed to' want. And if you make that decision based on a naive understanding of motivation, you get all sorts of glitches or surprising consequences. You can end up with agents that lock themselves into dark rooms because you built them to 'want to be good at predicting things' and a dark room is the most perfectly predictable state they can enter. Or if you tell them to surprise themselves and take actions that challenge their predictive capabilities, they go and stare at TV static or any environmental source of randomness. We're still in the hand-crafting stage of this kind of thing, meaning that we're limited by our own conception of how our own wants work - and those sort of explicit stories we tell about our own behavior tend to be really reductive and miss a lot of nuance and adaptivity that makes the things actually work in reality.
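    The dark-room failure is easy to demonstrate with a stand-in. Below, a toy agent scores rooms by how well a simple running-mean predictor does in them (all numbers invented); "wanting to be good at predicting" makes the empty room win:

```python
import random

def prediction_score(observations):
    """Negative total squared error of a running-mean predictor:
    higher score means the world was easier to predict."""
    pred, err = 0.0, 0.0
    for i, obs in enumerate(observations, 1):
        err += (obs - pred) ** 2
        pred += (obs - pred) / i  # update the running mean
    return -err

rng = random.Random(0)
dark_room = [0.0] * 100                            # perfectly predictable
tv_static = [rng.gauss(0, 1) for _ in range(100)]  # pure noise

prefers_dark = prediction_score(dark_room) > prediction_score(tv_static)
```

    Flip the sign of the score ("seek surprise") and the static room wins instead, which is the second glitch described above.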

    So the next step is having something where wants are dynamic and emergent. But the complicated thing there is, if we don't want to have to specify those dynamics by hand, we at least need to specify what we want those dynamics to achieve - but that's the same problem as defining the wants by hand in the first place. You could also just give up on that and make a network of interconnected wants that are all randomly initialized and just see what happens, but then while you might get complex behavior out, it would be entirely impenetrable - what is it for, how do you know it worked, etc?

  27. - Top - End - #27
    Ettin in the Playground
     
    georgie_leech's Avatar

    Join Date
    Sep 2011
    Location
    Calgary, AB
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    Quote Originally Posted by NichG View Post
    You could also just give up on that and make a network of interconnected wants that are all randomly initialized and just see what happens, but then while you might get complex behavior out, it would be entirely impenetrable - what is it for, how do you know it worked, etc?
    Well that sure became a metaphor for childbirth real fast

    That is to say, whether or not it's a good idea, creating such an impenetrable AI wouldn't be impossible.
    Quote Originally Posted by Grod_The_Giant View Post
    We should try to make that a thing; I think it might help civility. Hey, GitP, let's try to make this a thing: when you're arguing optimization strategies, RAW-logic, and similar such things that you'd never actually use in a game, tag your post [THEORETICAL] and/or use green text

  28. - Top - End - #28
    Banned
    Join Date
    Feb 2014
    Location
    Denmark
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    Quote Originally Posted by Ibrinar View Post
    Anything you can do can be done on paper given sufficient knowledge, a ridiculous amount of paper and lots and lots of time. Brains aren't magic; they are complex and messy, but they follow normal physics, and you could theoretically take one state of a brain and then manually go through what happens in a brain to replicate a thought.

    Most declarations that it is impossible for computers to ever think tend to imply humans aren't thinking either or fall back on magic.
    You don't know anything about how the brain works - because no one does. Sure, we know some things about neurons and chemistry and so on. But there is not a man alive anywhere on the planet who has any shred of idea how that stuff becomes thoughts.

    Now, if you go back and read what I wrote, you'll notice that I do not mention magic at any point.

    If you want to counter my main argument, you need to explain to me - if humans are passing notes around, doing exactly the same as an 'AI' program - is it the notes that are intelligent? Do you think it can have self awareness? Do the notes know they're notes?

    What will the first thing they learn to want be?

    Quote Originally Posted by Lvl 2 Expert View Post
    Stuff like emotions, including basic wants and motivation, are actually as far as I can tell some of the least ridiculously complex emergent behaviors in the human brain (which is to say: still ridiculously complex, but most other stuff is worse). A lot of animals with less complex brains have some version of motivation, like how a lizard is typically not going to sit around waiting for your cat to eat it. That behavior is related to a very similar neurotransmitter and hormone cocktail to those that govern our heroic last stands, romantic endeavors and epic journeys around the world. Even plants can be attributed something like this sometimes if you squint right.
    A computer is a brick. It does - precisely - nothing on its own. Absolutely nothing at all.

    We put our wants and motivations in there. That is not the same thing.

    So yes. Computers are unable to do even the simplest things that go on in the brain. However, they can do more complex things - provided we take that stance. I do not agree that emotions are less complex than calculation, but that's beside the point. The point is that we can pretty easily tell a computer what we want it to think, but I refute the claim or the idea that any computer will ever decide for itself what it wants to think.

    And so long as it isn't deciding for itself, it's not thinking. It's a brick.

  29. - Top - End - #29
    Troll in the Playground
     
    Lvl 2 Expert's Avatar

    Join Date
    Oct 2014
    Location
    Tulips Cheese & Rock&Roll
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    Quote Originally Posted by Kaptin Keen View Post
    A computer is a brick. It does - precisely - nothing on its own. Absolutely nothing at all.
    Speak for your own computer. Mine wants to wake up in the middle of the night to install Windows updates. It's super creepy, but I can ask it not to do that by unplugging it before we go to sleep.


    On a more serious note: the distinction you're making is philosophical in nature. We know how that want got in that computer, someone at Microsoft designed Windows that way. Meanwhile we cannot trace exactly where my current want to eat a cookie comes from (although it certainly has to do with a bunch of stuff we do know about, like our need for nutrition and the relative scarcity of some nutrients during our evolution). But from the point of view of the computer that makes little difference. It still wants to run that update. The line blurs further with machine learning. You end up with a program that was sort of bred or trained to do a certain task, rather than being outright told to do it. While we may have installed a sense in it of the importance of updating its library (like say it's an airport security program that does facial recognition and needs to stay up to date on current terrorism suspects) we have not programmed the moments it tries to do so. So at that point, what is it that makes that behavior not a real want?

    EDIT: I'm not arguing here that these programs are to be considered human-level sentient, by the way. I'm actually more arguing that basic wants are a bad measure of human-level sentience, while also arguing that I don't see a hard divide that would prevent all non-biological systems from ever reaching a state where they could be considered sentient in a human-like way.
    Last edited by Lvl 2 Expert; 2020-06-11 at 04:17 AM.

  30. - Top - End - #30
    Colossus in the Playground
     
    BlackDragon

    Join Date
    Feb 2007
    Location
    Manchester, UK
    Gender
    Male

    Default Re: I need help explaining VI vs AI.

    Quote Originally Posted by Kaptin Keen View Post
    Now, if you go back and read what I wrote, you'll notice that I do not mention magic at any point.
    .
    .
    .
    I refute the claim or the idea that any computer will ever decide for itself what it wants to think.
    In my opinion, these two statements are contradictory. You've acknowledged that the human brain is not magic, and that therefore means it must be *possible* to understand it--we don't fully, not yet, but we're working on it all the time. If the human brain is understandable then it should, once we *do* fully understand it, be possible to duplicate how it operates using electronics or even a massively complicated computer program. If we can duplicate the human brain electronically, how can you then say that this duplicate is not thinking? It's doing exactly what a human brain does!
