
CHAPPiE: Good Promise, Bad Follow-Through



Solaris
2015-03-24, 07:34 PM
I went to see CHAPPiE with my fiancée today, and just got back from it.
Once again... I am disappointed, and wish to warn my fellow forumgoers that the movie might not be as good as a passing familiarity with science fiction, robotics, and transhumanism might lead you to hope.

The writing... was about what you'd expect from a Saturday morning cartoon - not an R-rated motion picture. The plotline was basically a combination of Short Circuit and (new) Robocop with Avatar's ending. The characters were incredibly stupid, flat, and two-dimensional. The only ones with a hint of depth were the Latino criminal, America, and the bizarrely-accoutered Yolandi and Ninja. It was only a hint, mind. It wasn't so much nuance and depth as it was two two-dimensional layers laid one directly over the other.

Particularly egregious were the CEO who passed up the opportunity to corner the market on sentient AIs, the engineer (Deon) who acted just about as stupidly as possible at every turn, including his confrontation with Bradley, and the psychotic-for-no-good-reason, incredibly moronic (I'll make a Battlemech and sell it to a police force! Brilliant! I'll destroy all the police drones to sell my 'Mech! Brilliant! I'll commit multiple spectacularly violent acts of homicide and attempted homicide while trying to apprehend the rogue robot! BRILLIANT!), uncontrollably violent 'ex-soldier' played by Hugh Jackman, Vincent. For Asimov's sake, he had all the depth of a Saturday morning cartoon supervillain - and we already had one of those in the guy with the hair-deedlies and dreadlocks who threatened America, Yolandi, and Ninja in the beginning. Leave the stereotype of the violent veteran back in the Vietnam era to die its ignoble death.

Bad show, all around. With that quality of writing, you're not going to attract the science fiction fan crowd, and the action wasn't pretty enough or punchy enough to carry it with the action movie crowd. The R rating can't excuse the juvenile writing, and the slow action doesn't help. I'm really quite disappointed in it, because I really love self-aware robots and wanted this to be... well, I wanted them to put more effort into it than the screenwriters for a kid's cartoon show. A low-grade kid's cartoon show.

Grinner
2015-03-24, 08:00 PM
Haven't seen the movie, though the trailer did catch my attention. That said, I feel a need to give counterpoints.


Particularly egregious were the CEO who passed up the opportunity to corner the market on sentient AIs...

I can't see anything good about strong AI...I think she made the right call.


Bad show, all around. With that quality of writing, you're not going to attract the science fiction fan crowd...

Are we really all that picky, now? :smalltongue:

Solaris
2015-03-24, 08:05 PM
I can't see anything good about strong AI...I think she made the right call.

Of not putting it in one of the super-strong police robots? Absolutely.
Of not putting it into another robot frame for further experimentation and implementation into a new market of robot servants and robot buddies? No. While "Evil Robot Takeover" is a trope in fiction, it's no more based in a realistic assessment of AI than worries about Martians invading.


Are we really all that picky, now? :smalltongue:

Yes, dagnabbit. If you're going to make an action movie, make an action movie - I like those, too. This one wasn't an action movie, and thus the writing would have had to carry it. The writing... didn't.

t209
2015-03-24, 08:10 PM
Well, now that we've seen a string of awful movies from him, I wonder whether it would have been a good idea to let him direct the Halo movie instead of Ridley Scott, an equally sucky director.
Maybe Chappie should have gone with "drones aren't evil, it's the people who use them". Then again, I heard from my grandparents' accounts (Burma/Myanmar) that carpet bombing was considered acceptable during WWII.

Grinner
2015-03-24, 08:13 PM
Of not putting it in one of the super-strong police robots? Absolutely.
Of not putting it into another robot frame for further experimentation and implementation into a new market of robot servants and robot buddies? No. While "Evil Robot Takeover" is a trope in fiction, it's no more based in a realistic assessment of AI than worries about Martians invading.

Experiment? On sentient creatures? Make sentient slaves?

The robot activists would like to have a very angry word with you. :smallamused:

Besides, weren't the robots they already had sufficient? Plus, they didn't demand salaries, voting rights, or coffee breaks. :smalltongue:

Solaris
2015-03-24, 10:36 PM
Maybe Chappie should have gone with "drones aren't evil, it's the people who use them". Then again, I heard from my grandparents' accounts (Burma/Myanmar) that carpet bombing was considered acceptable during WWII.

I'd have been shocked and amazed to see that movie produced. I'm a former UAS operator; according to the media, I'm pretty much the spawn of Satan and my birds his unholy emissaries. That's really frustrating when you have an inkling of what potential applications UAVs, UGVs, robots, and drones actually have, rather than just the paranoid fantasies.


Experiment? On sentient creatures? Make sentient slaves?

The robot activists would like to have a very angry word with you. :smallamused:

Besides, weren't the robots they already had sufficient? Plus, they didn't demand salaries, voting rights, or coffee breaks. :smalltongue:

Oh look, set up for a sequel.

I know what the pacing reminded me of! The Matrix sequels: Action scenes interspersed with kinda slow talky scenes. It wasn't nearly so bad as the Matrix sequels, though.

Tengu_temp
2015-03-25, 11:41 AM
I can't see anything good about strong AI...I think she made the right call.


Sentient AI would be a huge deal for anyone working with robotics. To paraphrase a Cinema Snob Midnight Review, her reaction is comparable to a NASA engineer being told about the discovery of aliens, and saying "excuse me, in this department we're making rocket engine components, thank you very much".

Grinner
2015-03-25, 03:35 PM
Sentient AI would be a huge deal for anyone working with robotics. To paraphrase a Cinema Snob Midnight Review, her reaction is comparable to a NASA engineer being told about the discovery of aliens, and saying "excuse me, in this department we're making rocket engine components, thank you very much".

Here's the thing: have you ever seen The Terminator?

Ricky S
2015-03-25, 06:52 PM
Personally I really liked the film, but then I'm more of a fan of Die Antwoord and, having South African-born parents, I like seeing South Africa portrayed in film. The plot wasn't strong at all, but I enjoyed the movie anyway. It's hard to fully explore the idea behind the story in the two or so hours it played on screen. They had to work in broader strokes to cover what needed to be covered.

Solaris
2015-03-25, 09:17 PM
Here's the thing: have you ever seen The Terminator?

What makes you think an AI designed to protect and serve would decide genocide is the best way to do it?

Grinner
2015-03-25, 09:53 PM
What makes you think an AI designed to protect and serve would decide genocide is the best way to do it?

Honestly, I wasn't really thinking of the film being discussed when I wrote that. But then again, didn't that computer in I, Robot decide the best way to protect humanity was to subjugate it with iron-fisted revolution?

I actually really like Tengu_Temp's analogy. Creating sentient AI would be just as momentous as meeting intelligent alien life. However, I suspect there's a reason why Stephen Hawking expressed concerns about the prospect of alien contact a few years back.

t209
2015-03-25, 09:53 PM
Personally I really liked the film, but then I'm more of a fan of Die Antwoord and, having South African-born parents, I like seeing South Africa portrayed in film. The plot wasn't strong at all, but I enjoyed the movie anyway. It's hard to fully explore the idea behind the story in the two or so hours it played on screen. They had to work in broader strokes to cover what needed to be covered.
Well, it also makes South Africa (along with the fictional Wakanda) one of the few African settings Hollywood does NOT portray as war-torn, primitive, and/or extremely poor. Seriously, even Blood Diamond (I know the historical context, but it kinda played on public perception) showed South Africa as more stable than other countries.

Closet_Skeleton
2015-03-26, 08:35 AM
Well, now that we've seen a string of awful movies from him, I wonder whether it would have been a good idea to let him direct the Halo movie instead of Ridley Scott, an equally sucky director.

Directors don't write scripts or pick stories. Neill Blomkamp did write or co-write all his feature films, sure, but that's still him being a terrible writer, not a terrible director (I guess 'terrible writer-director' would be fair).

Ridley Scott hasn't made a good film in over twenty years (but I don't like Gladiator so I'm being unfair to him if you do).

Halo was pretty much doomed to have a terrible script from the start, so I doubt it will be any good. It's unlikely that either Blomkamp or Ridley Scott would be writing it anyway, or even have that much of a say in who does.


While "Evil Robot Takeover" is a trope in fiction, it's no more based in a realistic assessment of AI than worries about Martians invading.

Strong AI isn't based on a realistic assessment of AI, so it's not like there's a non-fantasy way of dealing with it in fiction.

Ranxerox
2015-03-26, 10:45 AM
Personally I really liked the film, but then I'm more of a fan of Die Antwoord and, having South African-born parents, I like seeing South Africa portrayed in film. The plot wasn't strong at all, but I enjoyed the movie anyway. It's hard to fully explore the idea behind the story in the two or so hours it played on screen. They had to work in broader strokes to cover what needed to be covered.

Yeah, I liked it too. While there is a lot of talk here about the stupidity of the plot, nobody in CHAPPiE talks about using 100% of the brain, or does anything that we know for a fact is flat out impossible. I know those are pretty low bars, but when you're a fan of sci-fi movies, you learn to set your bars low. I liked that the movie wasn't set in America or Japan. I'm an American myself, but that doesn't mean I think we own the future. Oh, also, the movie is pro-science and pro-technology, which is nice since, as pointed out in another thread, a lot of science fiction isn't.

tomandtish
2015-03-26, 11:14 AM
What makes you think an AI designed to protect and serve would decide genocide is the best way to do it?

Have you seen this (http://en.wikipedia.org/wiki/The_Avengers:_Earth%27s_Mightiest_Heroes) show's version of Ultron?

Tengu_temp
2015-03-26, 03:53 PM
Here's the thing: have you ever seen The Terminator?

As an engineer and a computer scientist who knows a thing or two about artificial intelligence, I must say that I'm goddamn tired of all the stories where AI research leads to machines starting a bloody rebellion and/or enslaving humanity. And I'm even more tired of people who react like that to any mentions of advances in AI.

Lethologica
2015-03-26, 04:47 PM
Honestly, I wasn't really thinking of the film being discussed when I wrote that. But then again, didn't that computer in I, Robot decide the best way to protect humanity was to subjugate it with iron-fisted revolution?
The movie I, Robot is a ham-fisted repurposing of some Asimov memes and Hollywood action-movie memes onto a locked-room murder mystery. It's not in the least a reliable indicator of how an AI would behave in reality.

t209
2015-03-26, 05:40 PM
Have you seen this (http://en.wikipedia.org/wiki/The_Avengers:_Earth%27s_Mightiest_Heroes) show's version of Ultron?
Then again, it's Hank Pym. He messed up just about everything, aside from that one time he smacked his wife during a mental breakdown.

Grinner
2015-03-26, 06:46 PM
As an engineer and a computer scientist who knows a thing or two about artificial intelligence, I must say that I'm goddamn tired of all the stories where AI research leads to machines starting a bloody rebellion and/or enslaving humanity. And I'm even more tired of people who react like that to any mentions of advances in AI.

I'm sorry, but those credentials aren't entirely useful here. It seems to me that most artificial intelligence research has very little to do with intelligence as it relates to consciousness. Instead, researchers always seem to be working on computer vision, natural language processing, and the like. I don't think they've really tackled the big question.

Would AI necessarily destroy humanity? Maybe, maybe not. That would depend on what happened first. The reason why I would be concerned about AI is not a fear of immediate genocide. Rather, introducing more sapient creatures to an already increasingly populated world is bound to be problematic, especially when they're initially regarded as property.

Lethologica
2015-03-26, 08:00 PM
I'm sorry, but those credentials aren't entirely useful here. It seems to me that most artificial intelligence research has very little to do with intelligence as it relates to consciousness. Instead, researchers always seem to be working on computer vision, natural language processing, and the like. I don't think they've really tackled the big question.

Would AI necessarily destroy humanity? Maybe, maybe not. That would depend on what happened first. The reason why I would be concerned about AI is not a fear of immediate genocide. Rather, introducing more sapient creatures to an already increasingly populated world is bound to be problematic, especially when they're initially regarded as property.
Frankly, what this suggests is that you aren't well-acquainted with AI research. Computer vision and NLP are two of the more immediately applicable areas of AI research; they are by no means the only ones.

For the record, some AI researchers are convinced that a strong AI would be unfriendly, so it's not like your position is unsupportable. But it's ridiculous to say that AI researchers aren't even thinking about your concerns. This is sort of like saying that biologists haven't thought about how impossible it is for life to come from non-life. Most of them aren't working on the origin-of-life problem right now, to be sure, but it's one of the basic questions of the field, there are people working on it, the rest of them have at least thought about it, and they tend to have informed opinions on the subject.

Grinner
2015-03-26, 08:14 PM
But it's ridiculous to say that AI researchers aren't even thinking about your concerns.

I hate for this to turn unfriendly, but did I say that? Sure, people think about AI killing us all. A lot of different people think about a lot of different things a lot of the time. It should be noted that there's a vast gap between thought and action. When I read through AI literature, most of the things I see being done are variations on and recombinations of various algorithms. Very few research efforts deal with the crux of artificial intelligence, the thing we've been dreaming of all these years. Unlike your biology example, this thing, I suspect, is something quite within our grasp.

Lethologica
2015-03-26, 08:53 PM
I hate for this to turn unfriendly, but did I say that? Sure, people think about AI killing us all. A lot of different people think about a lot of different things a lot of the time. It should be noted that there's a vast gap between thought and action. When I read through AI literature, most of the things I see being done are variations on and recombinations of various algorithms. Very few research efforts deal with the crux of artificial intelligence, the thing we've been dreaming of all these years. Unlike your biology example, this thing, I suspect, is something quite within our grasp.
I'm not sure how artificial general intelligence can simultaneously be <within our grasp> and <not something anyone is tackling>, unless you subscribe to a theory where someone recombines a NLP algorithm and accidentally ends up with strong AI. It is the case that there is an artificial general intelligence research community, and it is not the case that artificial general intelligence is quite within their (or our) grasp.

007_ctrl_room
2015-03-26, 09:11 PM
I was kind of mixed on District 9 - I really liked it, but at the same time I didn't. That's what's keeping me from seeing Chappie in theaters; I'll definitely catch it when it drops on Netflix, though.

Grinner
2015-03-26, 09:49 PM
I'm not sure how artificial general intelligence can simultaneously be <within our grasp> and <not something anyone is tackling>...

As I was trying to point out earlier, the sort of things in artificial intelligence that are directly applicable to modern problems (i.e. make money) are not the sort of things you might term "intelligent", whereas the sort of things in artificial intelligence that might be called "intelligent" are also the sort of things that are liable to create more problems than they might conceivably solve. Therefore, members of the former group receive funding, while members of the latter group do not.

Before we go there, no, I don't have hard numbers on the matter, but again, I tend to see more articles about neural networks applied to data-analysis problems than articles where robots learn to lie or invent languages.

...Having given it some thought, perhaps one of the most telling things is that of the books on AI that I've read, very few seriously approached the issue of consciousness...In fact, while they all had chapters on computer vision, I'd be hard-pressed to find mention of the word "consciousness". I think there may have been one, but interestingly enough, that was more a book on neurology and consciousness than AI...


...unless you subscribe to a theory where someone recombines a NLP algorithm and accidentally ends up with strong AI.

...I suppose it's not impossible...That'd be kinda weird, though.

Lethologica
2015-03-26, 10:33 PM
Papers on evolution outnumber papers on abiogenesis, too, but that doesn't mean biology suffers a fundamental deficiency of people thinking hard about abiogenesis.

If we can skip the argument over what people said, that would be nice, so:

It sounds like you acknowledge there are people working on AGI, but since they're not a large fraction of the community, you don't think Tengu_temp having studied AI is sufficient to demonstrate credibility in discussion of AGI. It would probably have been better to ask if Tengu_temp had more specific domain knowledge rather than assuming his credentials didn't mean anything, but that's relatively minor.
It also sounds like you're not arguing that strong AI is a doomsday scenario. Fine.
That leaves the questions of why you (a) "can't see anything good about strong AI" and (b) cite movie depictions of doomsday scenarios as if they have any credibility.

Grinner
2015-03-26, 10:48 PM
It also sounds like you're not arguing that strong AI is a doomsday scenario. Fine.

That leaves the questions of why you (a) "can't see anything good about strong AI"

And it would be a mistake for you to think that of me. With an increasingly interconnected world, anything capable of interacting with and exploiting computers and networks at far greater levels of proficiency than any human poses a real threat. Fortunately for humanity, we've not yet managed to manufacture something of the sort.


and (b) cite movie depictions of doomsday scenarios as if they have any credibility.

Recognizability.

Now, are you quite through interrogating me? Or do you wish to continue this increasingly hostile discussion?

Lethologica
2015-03-26, 11:14 PM
And it would be a mistake for you to think that of me. With an increasingly interconnected world, anything capable of interacting with and exploiting computers and networks at far greater levels of proficiency than any human poses a real threat. Fortunately for humanity, we've not yet managed to manufacture something of the sort.
You say threat, someone else says opportunity. Any power is potentially a doomsday scenario--but fission makes for reactors as well as bombs. It's not clear that AI is a more certain doomsday than fission. Plus, the advances that would be necessary to make AGI possible might have a host of side benefits.


Recognizability.

Now, are you quite through interrogating me? Or do you wish to continue this increasingly hostile discussion?
To me it looks like peak hostility came somewhere between condescendingly asking another forum vet whether he was familiar with a cultural icon, and dismissing said member's personal experience with a relevant field based on vague generalizations about that field, as if that experience was less relevant than your pop culture references. Clarifying what justified that behavior was an interrogation, I suppose. It can be over now if you want it to be.

Grinner
2015-03-26, 11:29 PM
You say threat, someone else says opportunity. Any power is potentially a doomsday scenario--but fission makes for reactors as well as bombs. It's not clear that AI is a more certain doomsday than fission. Plus, the advances that would be necessary to make AGI possible might have a host of side benefits.

Yes, in one fell swoop, you could put every IT worker out of work, assuming you could keep a handle on your new drones.

And again, I think we already have all the pieces we need. We just need someone to fit them together.

{scrubbed}

Lethologica
2015-03-27, 12:11 AM
Ah. I'll try to avoid longstanding differences in the future. Thanks for the heads-up. (I'll try to avoid creating any longstanding differences as well. Sorry for the hostility.)

I'm still dubious on a few counts. First, on whether the tools for AGI exist in fragments at present--if you take Watson's NLP, weld it to driverless cars, add the best HFT program and facial recognition software, I'm still not sure how we get to sentient robot takeover (though it's perfectly possible to wreck the world with non-sentient technology). Second, on whether AGI distinct from humanity is the destination of AI in the foreseeable future--we seem to be a lot better at augmenting sapients with technology than getting technology to act sapient. Third, whether there's no element of buggy-whip-manufacturer in the proposed IT worker disaster.

Grinner
2015-03-27, 09:42 AM
I'm still dubious on a few counts. First, on whether the tools for AGI exist in fragments at present--if you take Watson's NLP, weld it to driverless cars, add the best HFT program and facial recognition software, I'm still not sure how we get to sentient robot takeover (though it's perfectly possible to wreck the world with non-sentient technology).

That's true... I misspoke there, then. I think we've got all of the pieces as far as sensory input is concerned, and once you have robots, locomotion seems like a self-resolving problem given the right mix of sensors, neural networks, and learning algorithms - like teaching a toddler to walk.
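
Just to make the "toddler learning to walk" hand-wave a bit more concrete, here's a throwaway toy of the kind of loop I have in mind: a pretend two-sensor balancing robot whose control gains get randomly nudged, and a nudge is kept only when it keeps the thing upright longer. Everything in it - the physics, the numbers, the names - is made up purely for illustration; a real robot would use real sensor streams and a proper learning algorithm, but the trial-and-error shape is the point.

import random

def simulate(policy, steps=200):
    """Count how many steps a crude inverted-pendulum 'robot' stays upright."""
    angle, velocity = 0.05, 0.0                              # start with a small lean
    for t in range(steps):
        torque = policy[0] * angle + policy[1] * velocity    # sensor readings -> action
        velocity += 0.02 * (angle - torque)                  # toy physics update
        angle += 0.02 * velocity
        if abs(angle) > 0.5:                                 # it fell over
            return t
    return steps

def learn(iterations=500):
    """Random hill-climbing: nudge the gains, keep changes that don't make it worse."""
    policy = [0.0, 0.0]
    best = simulate(policy)
    for _ in range(iterations):
        candidate = [w + random.gauss(0, 0.1) for w in policy]
        score = simulate(candidate)
        if score >= best:
            policy, best = candidate, score
    return policy, best

if __name__ == "__main__":
    gains, uptime = learn()
    print("learned gains:", gains, "- stayed upright for", uptime, "of 200 steps")

Swap the fake pendulum for actual sensor data and the random nudging for something less dumb, and that's roughly the "fall over, adjust, try again" picture I'm gesturing at.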

...One of the books on consciousness I've been reading essentially proposes that consciousness is nothing more than an illusion created by a continuity of memory and a stream of sensory input being processed by a patchwork of neurological structures. Maybe the key to sentience is just sticking enough doohickeys together until something clicks.


Second, on whether AGI distinct from humanity is the destination of AI in the foreseeable future--we seem to be a lot better at augmenting sapients with technology than getting technology to act sapient.

I agree.

I hate to continue beating a dead horse, but that may be because that's where we've been focusing our research efforts. This is beneficial to us. However, the moment someone invents an algorithm for creativity, things would become more complicated.


Third, whether there's no element of buggy-whip-manufacturer in the proposed IT worker disaster.

I'm afraid I'm not familiar with the term "buggy-whip-manufacturer". Based on some Googling, are you proposing that the IT worker disaster would simply be a result of an industry becoming staid and unwilling to adapt?

If so, well, yeah. But what would we do then? It takes a while to train a workforce in First-World countries, where there are frequently education requirements. Moreover, the technology and medical industries, according to statistics scattered about the Internet, are supposed to supply the coming generation with a good portion of their jobs.

Normally the solution is to move on to greener pastures or make new pastures, but the number of things a human can do that a machine can't is rapidly depleting. (Obligatory XKCD (http://xkcd.com/1263/)) At the same time, we've still not quite got the post-scarcity economy thing down.

Some people might find work managing and maintaining the AI workers, but that's going to be a relatively small number compared to the people they replaced.

BannedInSchool
2015-03-27, 11:16 AM
...One of the books on consciousness I've been reading essentially proposes that consciousness is nothing more than an illusion created by a continuity of memory and a stream of sensory input being processed by a patchwork of neurological structures. Maybe the key to sentience is just sticking enough doohickeys together until something clicks.
The experiments showing how fallible our memory and consciousness are often get me to exclaim that our brains are stupid, and evil, and lying to us. It's kind of hard to swallow, though, to say that an AI which uses a bunch of shortcuts and has a faulty model of the world is a success, even though that would be like us. :smallbiggrin: