Freefall: DOGGY!



Math_Mage
2014-04-10, 08:36 PM
Wait, there's no Freefall thread? There should be a Freefall thread. I'm going to make one.

Freefall (http://freefall.purrsia.com/default.htm) features Sam Starfall, an infamous alien starship captain whose amorality is matched only by his incompetence; Helix, Sam's naive robot sidekick, who specializes in picking up heavy things, moving them, and putting them down; and Florence Ambrose, an engineer (and Bowman's Wolf) who just wanted to make a living designing starship engines. Wacky hijinks, and eventually a plot, ensue.

(Since my summary skills are terrible, I'll let hajo take over from here...)


Freefall (http://freefall.purrsia.com) has been running since 1998, but the events shown in the comic cover "only" a few weeks.

For the early years, the index (http://freefall.purrsia.com/fcdex.htm) has been organized into "chapters", but the current story is all under "2010" :smallsigh:
Also, the text search (http://www.freefall.hostzi.com/Search/FreefallSearch.htm?adv=1) only covers the first 2441 comics.

There is a forum (http://www.crosstimecafe.com/viewforum.php?f=8), and the intro thread (http://www.crosstimecafe.com/viewtopic.php?f=8&t=18) has a short timeline (http://www.crosstimecafe.com/viewtopic.php?p=7486#p7486).

The background of the story:

The near future (http://freefall.purrsia.com/ff1200/fv01163.htm). Mankind has spread to a few dozen worlds, but travel between star systems is slow (http://freefall.purrsia.com/ff800/fv00768.htm) and expensive (http://freefall.purrsia.com/ff1300/fv01252.htm).
The story takes place on Jean (http://freefall.purrsia.com/ff1200/fv01115.htm), a small human colony world that is still being (http://freefall.purrsia.com/ff100/fv00044.htm) terraformed (http://freefall.purrsia.com/ff1300/fv01213.htm).
On it are about 20,000 adult humans (http://freefall.purrsia.com/ff1400/fv01372.htm) and 450 million robots. (Plus one (http://freefall.purrsia.com/ff1800/fv01786.htm) alien (http://freefall.purrsia.com/ff1200/fv01181.htm), and two (http://freefall.purrsia.com/ff1100/fv01046.htm) genetically engineered (http://freefall.purrsia.com/ff1500/fv01401.htm) persons.)
Terraforming and robot production are done by a big corporation, Ecosystems Unlimited.

Sam is a squid (http://freefall.purrsia.com/ff1400/fv01359.htm)-like member of the only alien race discovered (http://freefall.purrsia.com/ff1200/fv01104.htm) so far, and he got to Jean as a (sort of) stowaway (http://freefall.purrsia.com/ff1300/fv01238.htm) aboard a scout ship.
The squids are descended from scavengers (http://freefall.purrsia.com/ff300/fv00222.htm), and their civilisation is early-industrial.
They are a lot different (http://freefall.purrsia.com/ff1700/fv01602.htm) in their ways of life (http://freefall.purrsia.com/ff1100/fv01003.htm) (also, delicious (http://freefall.purrsia.com/ff1400/fv01347.htm)) :smallbiggrin:
The air on their homeworld has more oxygen, so on Jean, Sam needs to wear an environment suit (http://freefall.purrsia.com/ff1000/fv00907.htm) at all times.
Despite the culture shock, Sam has managed (http://freefall.purrsia.com/ff600/fv00593.htm) to acquire Helix (http://freefall.purrsia.com/ff600/fv00565.htm), a "young (http://freefall.purrsia.com/ff700/fv00636.htm)" warehouse robot, and an old spaceship (http://freefall.purrsia.com/ff300/fv00275.htm) (cheap!), which they use as a home.

Florence (http://freefall.purrsia.com/ff100/fv00009.htm) is one of a few (http://freefall.purrsia.com/ff800/fv00709.htm) "experimental (http://freefall.purrsia.com/ff800/fv00711.htm) organic AI (http://freefall.purrsia.com/ff2200/fv02132.htm)" with the body (and nose (http://freefall.purrsia.com/ff600/fv00569.htm)) of a red wolf, raised by a human family (http://freefall.purrsia.com/ff500/fv00463.htm) on Earth. She is competent as an engineer (http://freefall.purrsia.com/ff300/fv00255.htm) (as well as hunter (http://freefall.purrsia.com/ff300/fv00228.htm) and cook (http://freefall.purrsia.com/ff2200/fv02130.htm)), and was travelling (in "cold sleep (http://freefall.purrsia.com/ff800/fv00766.htm)") to a research station on another world when Sam needed an engineer (http://freefall.purrsia.com/ff100/fv00002.htm) to repair his heap-of-junk spaceship.

So, Sam bribed (http://freefall.purrsia.com/ff100/fv00005.htm) some official to have her unloaded on Jean...


You see - no spoilers, just teasers :smalltongue:

Updates MWF.

Landis963
2014-04-10, 09:45 PM
The thread keeps dying for some reason.

However, said plot has grown far beyond the confines that your last sentence implies. A recap of stuff up to this point may not go amiss.

Math_Mage
2014-04-10, 09:50 PM
The thread keeps dying for some reason.

However, said plot has grown far beyond the confines that your last sentence implies. A recap of stuff up to this point may not go amiss.
That is true. But it seems sufficiently spoilerriffic that I'm not sure how much to explain.

hajo
2014-04-11, 01:16 PM
A recap of stuff up to this point may not go amiss.

Freefall (http://freefall.purrsia.com) has been running since 1998, but the events shown in the comic cover "only" a few weeks.

For the early years, the index (http://freefall.purrsia.com/fcdex.htm) has been organized into "chapters", but the current story is all under "2010" :smallsigh:
Also, the text search (http://www.freefall.hostzi.com/Search/FreefallSearch.htm?adv=1) only covers the first 2441 comics.

There is a forum (http://www.crosstimecafe.com/viewforum.php?f=8), and the intro thread (http://www.crosstimecafe.com/viewtopic.php?f=8&t=18) has a short timeline (http://www.crosstimecafe.com/viewtopic.php?p=7486#p7486).

The background of the story:

The near future (http://freefall.purrsia.com/ff1200/fv01163.htm). Mankind has spread to a few dozen worlds, but travel between star systems is slow (http://freefall.purrsia.com/ff800/fv00768.htm) and expensive (http://freefall.purrsia.com/ff1300/fv01252.htm).
The story takes place on Jean (http://freefall.purrsia.com/ff1200/fv01115.htm), a small human colony world that is still being (http://freefall.purrsia.com/ff100/fv00044.htm) terraformed (http://freefall.purrsia.com/ff1300/fv01213.htm).
On it are about 20,000 adult humans (http://freefall.purrsia.com/ff1400/fv01372.htm) and 450 million robots. (Plus one (http://freefall.purrsia.com/ff1800/fv01786.htm) alien (http://freefall.purrsia.com/ff1200/fv01181.htm), and two (http://freefall.purrsia.com/ff1100/fv01046.htm) genetically engineered (http://freefall.purrsia.com/ff1500/fv01401.htm) persons.)
Terraforming and robot production are done by a big corporation, Ecosystems Unlimited.

Sam is a squid (http://freefall.purrsia.com/ff1400/fv01359.htm)-like member of the only alien race discovered (http://freefall.purrsia.com/ff1200/fv01104.htm) so far, and he got to Jean as a (sort of) stowaway (http://freefall.purrsia.com/ff1300/fv01238.htm) aboard a scout ship.
The squids are descended from scavengers (http://freefall.purrsia.com/ff300/fv00222.htm), and their civilisation is early-industrial.
They are a lot different (http://freefall.purrsia.com/ff1700/fv01602.htm) in their ways of life (http://freefall.purrsia.com/ff1100/fv01003.htm) (also, delicious (http://freefall.purrsia.com/ff1400/fv01347.htm)) :smallbiggrin:
The air on their homeworld has more oxygen, so on Jean, Sam needs to wear an environment suit (http://freefall.purrsia.com/ff1000/fv00907.htm) at all times.
Despite the culture shock, Sam has managed (http://freefall.purrsia.com/ff600/fv00593.htm) to acquire Helix (http://freefall.purrsia.com/ff600/fv00565.htm), a "young (http://freefall.purrsia.com/ff700/fv00636.htm)" warehouse robot, and an old spaceship (http://freefall.purrsia.com/ff300/fv00275.htm) (cheap!), which they use as a home.

Florence (http://freefall.purrsia.com/ff100/fv00009.htm) is one of a few (http://freefall.purrsia.com/ff800/fv00709.htm) "experimental (http://freefall.purrsia.com/ff800/fv00711.htm) organic AI (http://freefall.purrsia.com/ff2200/fv02132.htm)" with the body (and nose (http://freefall.purrsia.com/ff600/fv00569.htm)) of a red wolf, raised by a human family (http://freefall.purrsia.com/ff500/fv00463.htm) on Earth. She is competent as an engineer (http://freefall.purrsia.com/ff300/fv00255.htm) (as well as hunter (http://freefall.purrsia.com/ff300/fv00228.htm) and cook (http://freefall.purrsia.com/ff2200/fv02130.htm)), and was travelling (in "cold sleep (http://freefall.purrsia.com/ff800/fv00766.htm)") to a research station on another world when Sam needed an engineer (http://freefall.purrsia.com/ff100/fv00002.htm) to repair his heap-of-junk spaceship.

So, Sam bribed (http://freefall.purrsia.com/ff100/fv00005.htm) some official to have her unloaded on Jean...


You see - no spoilers, just teasers :smalltongue:

Ravenlord
2014-04-17, 07:03 AM
The thread keeps dying for some reason.

The fact the story puts 24 to shame in terms of time compression doesn't really help. :smallwink:

Wort
2014-04-17, 11:06 PM
Freefall (http://freefall.purrsia.com) has been running since 1998, but the events shown in the comic cover "only" a few weeks.

For the early years, the index (http://freefall.purrsia.com/fcdex.htm) has been organized into "chapters", but the current story is all under "2010" :smallsigh:
Also, the text search (http://www.freefall.hostzi.com/Search/FreefallSearch.htm?adv=1) only covers the first 2441 comics.

There is a forum (http://www.crosstimecafe.com/viewforum.php?f=8), and the intro thread (http://www.crosstimecafe.com/viewtopic.php?f=8&t=18) has a short timeline (http://www.crosstimecafe.com/viewtopic.php?p=7486#p7486).

The background of the story:

The near future (http://freefall.purrsia.com/ff1200/fv01163.htm). Mankind has spread to a few dozen worlds, but travel between star systems is slow (http://freefall.purrsia.com/ff800/fv00768.htm) and expensive (http://freefall.purrsia.com/ff1300/fv01252.htm).
The story takes place on Jean (http://freefall.purrsia.com/ff1200/fv01115.htm), a small human colony world that is still being (http://freefall.purrsia.com/ff100/fv00044.htm) terraformed (http://freefall.purrsia.com/ff1300/fv01213.htm).
On it are about 20,000 adult humans (http://freefall.purrsia.com/ff1400/fv01372.htm) and 450 million robots. (Plus one (http://freefall.purrsia.com/ff1800/fv01786.htm) alien (http://freefall.purrsia.com/ff1200/fv01181.htm), and two (http://freefall.purrsia.com/ff1100/fv01046.htm) genetically engineered (http://freefall.purrsia.com/ff1500/fv01401.htm) persons.)
Terraforming and robot production are done by a big corporation, Ecosystems Unlimited.

Sam is a squid (http://freefall.purrsia.com/ff1400/fv01359.htm)-like member of the only alien race discovered (http://freefall.purrsia.com/ff1200/fv01104.htm) so far, and he got to Jean as a (sort of) stowaway (http://freefall.purrsia.com/ff1300/fv01238.htm) aboard a scout ship.
The squids are descended from scavengers (http://freefall.purrsia.com/ff300/fv00222.htm), and their civilisation is early-industrial.
They are a lot different (http://freefall.purrsia.com/ff1700/fv01602.htm) in their ways of life (http://freefall.purrsia.com/ff1100/fv01003.htm) (also, delicious (http://freefall.purrsia.com/ff1400/fv01347.htm)) :smallbiggrin:
The air on their homeworld has more oxygen, so on Jean, Sam needs to wear an environment suit (http://freefall.purrsia.com/ff1000/fv00907.htm) at all times.
Despite the culture shock, Sam has managed (http://freefall.purrsia.com/ff600/fv00593.htm) to acquire Helix (http://freefall.purrsia.com/ff600/fv00565.htm), a "young (http://freefall.purrsia.com/ff700/fv00636.htm)" warehouse robot, and an old spaceship (http://freefall.purrsia.com/ff300/fv00275.htm) (cheap!), which they use as a home.

Florence (http://freefall.purrsia.com/ff100/fv00009.htm) is one of a few (http://freefall.purrsia.com/ff800/fv00709.htm) "experimental (http://freefall.purrsia.com/ff800/fv00711.htm) organic AI (http://freefall.purrsia.com/ff2200/fv02132.htm)" with the body (and nose (http://freefall.purrsia.com/ff600/fv00569.htm)) of a red wolf, raised by a human family (http://freefall.purrsia.com/ff500/fv00463.htm) on Earth. She is competent as an engineer (http://freefall.purrsia.com/ff300/fv00255.htm) (as well as hunter (http://freefall.purrsia.com/ff300/fv00228.htm) and cook (http://freefall.purrsia.com/ff2200/fv02130.htm)), and was travelling (in "cold sleep (http://freefall.purrsia.com/ff800/fv00766.htm)") to a research station on another world when Sam needed an engineer (http://freefall.purrsia.com/ff100/fv00002.htm) to repair his heap-of-junk spaceship.

So, Sam bribed (http://freefall.purrsia.com/ff100/fv00005.htm) some official to have her unloaded on Jean...


You see - no spoilers, just teasers :smalltongue:

Nicely done.

It is interesting to go back and see that Florence had previously commented that "The chimps were intelligent, but sociopaths."

http://freefall.purrsia.com/ff1200/fv01151.htm

I don't recall if they were discussed at any other point in the series. It leads me to wonder if she was referring to chimpanzees as an earlier effort at bio-engineering organic AIs or not. The answer as to whether or not chimp AIs had been designed would be interesting, as would the question of who was responsible for designing them. Dr. Bowman, the creator of the Bowman's Wolves, does not strike me as a likely candidate ...

The one reference to chimps in the back story provided here suggests that chimps were subject to engineering:

http://home.comcast.net/~ccdesan/Freefall/Freefall_Backstory.html#Top

[Edit] I discovered after posting this that the page hajo linked above answers the one question, but not the other:

http://freefall.purrsia.com/ff1500/fv01401.htm

Rockphed
2014-04-21, 10:04 PM
As an engineer, I think that Florence is really awesome. The funny thing is that I recently started working with PLCs, and the PLC inside Clippy looked exactly like the model I have been using.

I love hard sci-fi.

Kornaki
2014-04-27, 09:54 AM
Will Blunt interpret Max's comment to mean he shouldn't compromise on his genocide attempt?

AMX
2014-04-27, 10:09 AM
Nicely done.

It is interesting to go back and see that Florence had previously commented that "The chimps were intelligent, but sociopaths."

http://freefall.purrsia.com/ff1200/fv01151.htm

I don't recall if they were discussed at any other point in the series. It leads me to wonder if she was referring to chimpanzees as an earlier effort at bio-engineering organic AIs or not. The answer as to whether or not chimp AIs had been designed would be interesting, as would the question of who was responsible for designing them. Dr. Bowman, the creator of the Bowman's Wolves, does not strike me as a likely candidate ...

The one reference to chimps in the back story provided here suggests that chimps were subject to engineering:

http://home.comcast.net/~ccdesan/Freefall/Freefall_Backstory.html#Top

[Edit] I discovered after posting this that the page hajo linked above answers the one question, but not the other:

http://freefall.purrsia.com/ff1500/fv01401.htm

Huh, I can't believe I missed this...

First reference to chimps:
http://freefall.purrsia.com/ff700/fv00661.htm
http://freefall.purrsia.com/ff700/fv00662.htm

Ravens_cry
2014-04-27, 06:35 PM
Huh, I can't believe I missed this...

First reference to chimps:
http://freefall.purrsia.com/ff700/fv00661.htm
http://freefall.purrsia.com/ff700/fv00662.htm
I did remember that (yay, multiple archive binges!) but it actually made me more surprised that the good doctor was a modified member of the genus Pan.

Math_Mage
2014-05-05, 01:08 AM
I put hajo's background summary in the OP. If it looks stupid/redundant that way, let me know and I'll remove it.

Also, thanks for covering for my ineptitude, hajo. I probably shoulda said that sooner. :smallredface:

Well, everyone's arrived at the debate. Sounds like a good time to cut to Florence. :smallwink:

Rockphed
2014-05-05, 08:55 PM
I put hajo's background summary in the OP. If it looks stupid/redundant that way, let me know and I'll remove it.

Also, thanks for covering for my ineptitude, hajo. I probably shoulda said that sooner. :smallredface:

Well, everyone's arrived at the debate. Sounds like a good time to cut to Florence. :smallwink:

Or for Sam Starfall to get the robot to put him down, look stupid, and pay a large sum in damages all at once and to fierce applause. I fully expect Blunt's attack on Sam to cost him support in the long run.

Ravens_cry
2014-05-05, 09:36 PM
*sigh* As an anthrophile, I feel Blunt makes a very good point. Not that I agree with his Final Solution approach, but it is a real can of worms nonetheless.

Math_Mage
2014-05-05, 10:02 PM
And, arguing the opposing position... (http://dresdencodak.com/2007/02/08/pom/)

HandofShadows
2014-05-06, 12:55 PM
And, arguing the opposing position... (http://dresdencodak.com/2007/02/08/pom/)

Naw. The robots in Freefall are really not that different from people. Pom very much is.

Math_Mage
2014-05-06, 01:56 PM
Naw. The robots in Freefall are really not that different from people. Pom very much is.
Opposed to Ravens_cry's anthrophilia, more than to Blunt in particular. :smalltongue:

Rockphed
2014-05-07, 10:04 PM
I'm not sure what just happened. Anybody care to explain?

Grey_Wolf_c
2014-05-07, 10:10 PM
I'm not sure what just happened. Anybody care to explain?

Sam abused the ignorance of his mark to con them into doing something for him (in this case, serving ice-cream). Fairly standard MO for Sam, really.

If he tried it on anyone else, he could not get away with it, since Sam is wanted dead by pretty much everyone, including everyone who writes the laws, but Blunt has a combination of sheltered, simplistic obedience to the law and slow thinking that makes him an easy target. Sam may or may not be the squid species he claims. The species may or may not be protected. And the penalty may or may not be to serve ice-cream (take your pick). But Blunt doesn't know the answer to those either, and he is more willing than most to accept other people's word at face value.

Grey Wolf

Landis963
2014-05-07, 10:43 PM
Sam abused the ignorance of his mark to con them into doing something for him (in this case, serving ice-cream). Fairly standard MO for Sam, really.

If he tried it on anyone else, he could not get away with it, since Sam is wanted dead by pretty much everyone, including everyone who writes the laws, but Blunt has a combination of sheltered, simplistic obedience to the law and slow thinking that makes him an easy target. Sam may or may not be the squid species he claims (He's not, he's a sqid). The species may or may not be protected (Probably not). And the penalty may or may not be to serve ice-cream (take your pick) (Definitely not, given how stringently we protect endangered species). But Blunt doesn't know the answer to those either, and he is more willing than most to accept other people's word at face value.

Grey Wolf

Yeah, Sam's lying out his ink jet here.

Ravens_cry
2014-05-08, 12:12 AM
Yeah, Sam's lying out his ink jet here.
He's not even technically a squid.

Landis963
2014-05-08, 12:30 AM
He's not even technically a squid.

Yeah, he's a sqid, not a squid. (The difference? U don't want to go near sqids)

HandofShadows
2014-05-08, 07:48 AM
Sam abused the ignorance of his mark to con them into doing something for him (in this case, serving ice-cream). Fairly standard MO for Sam, really.

If he tried it on anyone else, he could not get away with it, since Sam is wanted dead by pretty much everyone, including everyone who writes the laws, but Blunt has a combination of sheltered, simplistic obedience to the law and slow thinking that makes him an easy target. Sam may or may not be the squid species he claims. The species may or may not be protected. And the penalty may or may not be to serve ice-cream (take your pick). But Blunt doesn't know the answer to those either, and he is more willing than most to accept other people's word at face value.

Grey Wolf

I feel sad for today's youth. :smalleek::smallfrown::smalleek::smallfrown:

Sam just pulled a CLASSIC Bugs Bunny routine!

http://fan.tcm.com/_Duck-Rabbit-Duck-1953/video/1658964/66470.html?createPassive=true

:smallbiggrin:

Grey_Wolf_c
2014-05-09, 09:37 PM
For the early years, the index (http://freefall.purrsia.com/fcdex.htm) has been organized into "chapters", but the current story is all under "2010" :smallsigh:

Just a quick note to state that this is no longer an accurate criticism. Stanley has updated the index all the way up to 2014.

GW

Landis963
2014-05-20, 10:10 PM
Update!

Blunt really doesn't know his audience, does he?

Coidzor
2014-05-20, 10:24 PM
Update!

Blunt really doesn't know his audience, does he?

He's a partially broken robot who has gained sophont status due to the machinations of a deranged chimp who was too brilliant for the humans to put down, while the humans themselves were too lazy to actually understand the protocols before they used them.

There's bound to be a few areas where things fall apart, given that.

Lord Raziere
2014-05-20, 10:26 PM
yeah, thanks Blunt for shooting yourself in the foot, care to aim for your head next?

I think he thinks that humans WANT to work and toil all day, be brave and face danger and such....was he raised in isolation the same way as Edge? or similarly? he might've gotten the notion from watching too many "humans triumph against robots with pure human courage and determination" movies....

Rakaydos
2014-05-20, 10:31 PM
You joke, but I've seen exactly that attitude in topics about transhumanism and post-scarcity societies. If the machines are in charge and keep humans as pampered pets, what meaning does a human life have? Better to struggle with our destiny, because it is -our- destiny.

...I don't understand it either.

Coidzor
2014-05-20, 10:34 PM
yeah, thanks Blunt for shooting yourself in the foot, care to aim for your head next?

I think he thinks that humans WANT to work and toil all day, be brave and face danger and such....was he raised in isolation the same way as Edge? or similarly? he might've gotten the notion from watching too many "humans triumph against robots with pure human courage and determination" movies....

No, he's referring to the death of humans when they basically become pampered pets that never think or do anything for themselves, until they die out because their creations no longer care to truck with them, or because humans have become so lazy and anemic that they stop breeding. It's a trope that comes up from time to time. Saturn's Children covers one version of that variety of human extinction by robots (though global warming and poisoning the biosphere helped humanity along), and the Dresden Codak story linked to earlier covers it partially, though it also involves a malevolent-to-callous-to-uncaring MACHINE GOD(TM) that was actively mindraping and absorbing the humans it got addicted to cyber existence.

He just doesn't have the fancy words, the spin, or the ability to communicate the danger intelligibly.


You joke, but I've seen exactly that attitude in topics about transhumanism and post-scarcity societies. If the machines are in charge and keep humans as pampered pets, what meaning does a human life have? Better to struggle with our destiny, because it is -our- destiny.

...I don't understand it either.

Transhumanism and post-scarcity are one thing. Giving your society over to Mother Brain (http://phantasystar.wikia.com/wiki/Mother_Brain) is quite another.

It's one thing to want to have a much better society and better people to populate that society. It's another thing to willingly choose to destroy that society via obsolescence.

Lord Raziere
2014-05-20, 11:23 PM
You joke, but I've seen exactly that attitude in topics about transhumanism and post-scarcity societies. If the machines are in charge and keep humans as pampered pets, what meaning does a human life have? Better to struggle with our destiny, because it is -our- destiny.

...I don't understand it either.

if it comes up in transhuman discussions, then it probably doesn't matter all that much, because at the point of transhuman technology becoming widespread, it becomes unclear what *really* is a robot/machine/person or whatever, and what it means to be human if you're in a robotic body etc etc.....

I mean, I don't see the difference between robot rulers and human rulers, except that one is flesh, one is steel, but that really shouldn't impact any of their actual thoughts (programming and exponential growth into a singularity AI on the other hand...), so you might as well argue against government in general and what meaning human life has if we're all just, say, corporate drones or something, and that maybe it's better to struggle on as hunter-gatherers with no government because it's our destiny or whatever.

and then Hobbes comes along and facepalms at the extreme bioconservatives* and goes "have I taught any of you nothing!?" and proceeds to retread the whole "war of all against all" thing to teach them a lesson.

*a term from Eclipse Phase to refer to people who believe in keeping themselves completely human and shunning access to more advanced technology.

Edit: the more accurate Eclipse Phase term for people who shun technology altogether would be Neo-Primitivists, which is what I meant.

Ravens_cry
2014-05-28, 11:33 PM
Alternate last line: "I did say 'advances'."

Kornaki
2014-05-29, 07:37 PM
Blunt has a surprisingly devastating point.

Landis963
2014-05-29, 07:54 PM
Blunt has a surprisingly devastating point.

Only without context. Mr. Kornada was as much a danger to humans as unfettered machines ever were; more of a threat, even, given the sheer scale of his idiocy. Furthermore, saying that "Our safeguards are faulty and therefore we are dangerous" is a fallacy.

Grey_Wolf_c
2014-05-29, 08:22 PM
Blunt has a surprisingly devastating point.

"Kornada was stopped from crippling the workforce that clothes and feeds him" is not a devastating point, it is a silly one. Kornada was helped by being stopped - all the money in the planet would not feed him once the planetary economy collapsed back to pre-terraform levels and the pie reserves (already partially depleted :smalltongue:) ran out.

Blunt's point about humans being lulled into inaction was a much better point against intelligent machines.

Grey Wolf

Coidzor
2014-05-29, 08:45 PM
Wasn't there some concern that the robots going stupid would turn an important moon-moving event into a catastrophe as well? :smallconfused: I seem to recall that averting it also stopped some kind of seriously bad major event.

Grey_Wolf_c
2014-05-29, 08:49 PM
Wasn't there some concern that the robots going stupid would turn an important moon-moving event into a catastrophe as well? :smallconfused: I seem to recall that averting it also stopped some kind of seriously bad major event.

No, the moon-moving process had reached a self-sustaining phase, IIRC. The guy normally in charge of robots (whom I'm not sure we have met) was in orbit supervising the process, which is why Kornada had the two vice-president access codes he needed for his plan, but he only put it in motion after the move was almost complete (and by "he" I mean his robot, of course).

Grey Wolf

Kornaki
2014-05-29, 08:49 PM
Blunt's real point is that if two humans are in conflict, robots will inevitably harm one or the other. From our perspective it is obvious which human should be harmed (and we can debate whether they were really harmed to begin with), but robots are programmed not to have that perspective and to reject any hint of fostering such a viewpoint.

Grey_Wolf_c
2014-05-29, 09:17 PM
Blunt's real point is that if two humans are in conflict, robots will inevitably harm one or the other. From our perspective it is obvious which human should be harmed (and we can debate whether they were really harmed to begin with), but robots are programmed not to have that perspective and to reject any hint of fostering such a viewpoint.

No, that is incorrect. Blunt's point is that an AI dared to overrule a human decision - the AI should've gone to the human authorities, and let them overrule the human or not. It has nothing to do with an AI taking sides, and everything to do with a perceived weakness in the AI three laws safeguards that makes them unsafeguards.

Of course, he ignores that Florence did try to follow the safeguards until such time as continuing to do so would harm more people than breaking them would, but in that (as in everything) Blunt is not being dishonest, just limited.

Grey Wolf

Kornaki
2014-05-29, 09:34 PM
No, that is incorrect. Blunt's point is that an AI dared to overrule a human decision - the AI should've gone to the human authorities, and let them overrule the human or not. It has nothing to do with an AI taking sides, and everything to do with a perceived weakness in the AI three laws safeguards that makes them unsafeguards.

It has everything to do with an AI taking sides. Taking any action besides asking a human authority for a decision is taking a side, harming a human, and thus against robot programming.


Of course, he ignores that Florence did try to follow the safeguards until such time as continuing to do so would harm more people than breaking them would, but in that (as in everything) Blunt is not being dishonest, just limited.

Grey Wolf

Again, that's the point. An AI disregarded its safeguards, and harmed a human in the process. Yes, lots of humans were saved but a human was harmed intentionally by an AI. We can debate whether it rose to the level of harm or if Kornada's better off with the current state of affairs than destroying all the robots, but that is NOT for a robot to decide. The newly formed intelligences could make this decision, but Blunt is appealing to the robots who cannot or do not want to make decisions like this. Even as they are becoming more valuable to society they are becoming individually more potentially dangerous, and there will be plenty of robots whose safeguard kicks in at that point and dictates that robots cannot be left around. It's OK for a robot to fail to take care of a human, it is not OK for a robot to harm a human, and that distinction is going to be a big point. He hasn't said this explicitly, but that is his starting thesis (hence why it's better to turn off and let humans rough it alone) and continues to be a strong point in convincing robots I think.

Landis963
2014-05-29, 09:38 PM
Blunt's real point is that if two humans are in conflict, robots will inevitably harm one or the other. From our perspective it is obvious which human should be harmed (and we can debate whether they were really harmed to begin with), but robots are programmed not to have that perspective and to reject any hint of fostering such a viewpoint.

The safeguards are programmed that way. The robots themselves, however, are running on Bowman OS, which was designed to create artificial colonists, not labor. Blunt assumes that the two are one and the same, when they are clearly not.

Grey_Wolf_c
2014-05-29, 09:40 PM
Taking any action besides asking a human authority for a decision is taking a side

No, it is not. Not without stretching the definition of "side" well beyond Blunt's thinking abilities. Taking a side would be if two humans were fighting and a robot joined the fight, punching one of the humans and defending the other. An AI acting against the intention of a human, or countering the actions of said human, without a second human involved more concrete than "the rest of society", is not what Blunt is talking about.

Grey Wolf

Landis963
2014-05-29, 09:47 PM
Again, that's the point. An AI disregarded its safeguards, and harmed a human in the process. Yes, lots of humans were saved but a human was harmed intentionally by an AI. We can debate whether it rose to the level of harm or if Kornada's better off with the current state of affairs than destroying all the robots, but that is NOT for a robot to decide. The newly formed intelligences could make this decision, but Blunt is appealing to the robots who cannot or do not want to make decisions like this. Even as they are becoming more valuable to society they are becoming individually more potentially dangerous, and there will be plenty of robots whose safeguard kicks in at that point and dictates that robots cannot be left around. It's OK for a robot to fail to take care of a human, it is not OK for a robot to harm a human, and that distinction is going to be a big point. He hasn't said this explicitly, but that is his starting thesis (hence why it's better to turn off and let humans rough it alone) and continues to be a strong point in convincing robots I think.

Emphasis added.

Why not? They clearly have the capacity for it. Florence is running on the same brain they are, and she was perfectly capable of making that decision. As stated by others, she tried to play it by the rules imposed by both her safeguards and the laws of Jean. Furthermore, "Gardener in the Dark" clearly regressed them to the point where they could not make that decision. Besides, Blunt is either unaware of or deliberately glossing over the point that by turning themselves and other robots off, they are harming more humans than Kornada ever did.

Grey_Wolf_c
2014-05-29, 09:51 PM
Emphasis added.

Landis,

In all fairness to Kornaki, I don't think he is saying that Blunt is right. He and I are debating what Blunt's point actually is, rather than if it is correct.

GW

Math_Mage
2014-05-30, 01:49 PM
No, that is incorrect. Blunt's point is that an AI dared to overrule a human decision - the AI should've gone to the human authorities, and let them overrule the human or not. It has nothing to do with an AI taking sides, and everything to do with a perceived weakness in the AI three laws safeguards that makes them unsafeguards.
Half right--it's a weakness in AI safeguards, but AI taking sides is the heart of it. The reason they are expected to defer to humans is so that they are not responsible for harm to humans (First Law), not so that they obey the Second Law. A Second Law violation would be one where the robot overruled the human in a case where neither choice harmed humans, and that would be insufficient for Blunt to make his case. He has to show a weakness in the First Law, such that robots could do harm to humans, to make a First Law case for exterminating robots--disobedience isn't good enough. Blunt's argument takes the harm to Kornada as the primary violation--First Law, not Second. The reason the AI overruling the human is problematic from Blunt's perspective is that it meant the AI took initiative in a decision where both choices led to harming humans. That means AI are capable of harming humans, making them a threat to be eliminated. (The logic in the last sentence is faulty, but we've already covered that.)

andrewas
2014-05-30, 05:35 PM
Strip #1455



Working with robots the way you are, you should know. Under the right circumstances, a properly functioning AI with all safeguards intact can harm a human. In situations where a single human is a clear and present danger to other humans, our designers wanted us to be able to act.


So Florence, at least, thinks that AIs were designed to be able to disobey and even harm a rogue human. Blunt is arguing against a deliberate design decision the humans made decades ago.

Rockphed
2014-05-31, 05:31 AM
Somewhere else in the comic they explicitly say that three-law robots were, in general, a failure. The current safeguards are like the Three Laws, but only in spirit.

John Campbell
2014-06-02, 12:41 AM
Blunt seems to me to be undermining his own point. What stops humans from using the robots as a weapon against other humans is robots with the awareness, judgment, and freedom of action to be able to say, "No, we won't do that. It would be harmful."

Math_Mage
2014-06-02, 01:56 AM
Blunt simply doesn't understand 'greater harm'. Unfortunately, the explanation of how greater harm actually works is likely to be much less accessible than Blunt's misconceptions.

Lord Raziere
2014-06-09, 10:03 AM
Eh wha.

what is this insanity.

why are people liking Edge.

what is Edge even talking about.

I'm confused.

HandofShadows
2014-06-09, 10:20 AM
People are liking Edge because he is acting very human and not like a robot. Blunt shot himself in the foot Big Time I think. :smallbiggrin:

Radar
2014-06-09, 10:24 AM
People are liking Edge because he is acting very human and not like a robot. Blunt shot himself in the foot Big Time I think. :smallbiggrin:
Maybe this, or maybe it's that Edge is acting as a stand-up comedian, even if unwittingly. Quite a lot of stand-up comedy sketches come from this kind of borderline-rude exaggeration of reality, or at least there are enough examples showing that people might think it's funny.

Math_Mage
2014-06-09, 01:09 PM
Edge is way more entertaining than either of these clowns. :smalltongue:

Lord Raziere
2014-06-09, 05:14 PM
People are liking Edge because he is acting very human and not like a robot. Blunt shot himself in the foot Big Time I think. :smallbiggrin:

or he is counting on them liking Edge so much that they vote for Edge out of liking his "comedy act", and therefore he wins.

I mean, Edge and Blunt are running in the same party, so....this might actually be bad...

Math_Mage
2014-06-09, 07:32 PM
That's...sort of the exact opposite of what Blunt intends by showing Edge.

Lord Raziere
2014-06-09, 08:59 PM
......oh! He thinks that by showing him, people will HATE him so much that they will vote to destroy all robots, because Edge basically makes Bender look compassionate and considerate of his fellow man. Except Edge, like Bender, is a comedic sociopath, so people laugh at Edge instead, because his blatant disregard for society and such is so ridiculous that people cannot take it seriously, and so they cheer his presence because of the Bender Effect.

The Bender Effect being that people won't care how much of a jerk a character is if they're funny. Or, in this case, a celebrity persona.

Thus people will actually vote AGAINST destroying the robots, because if they're gone, they lose Edge's comedy gold persona.

Math_Mage
2014-06-09, 09:25 PM
Yeah, that's the read I'm getting. We'll see what wrenches get thrown into the works, though--I never expect anything to go off without a hitch in Freefall, if only because Sam exists.

Coidzor
2014-06-09, 09:48 PM
They must be really starved for entertainment.

Lord Raziere
2014-06-09, 10:40 PM
They must be really starved for entertainment.

well given that they're on a colony far from the rest of humanity, and only like, what, 20,000 or so of them? I forget the exact number, but I doubt they get all the great shows and media they had back from wherever they came from, due to speed-of-light concerns.....but they might still get old shows from way back in Earth's history a la Futurama, but then again that might not actually work....

but yeah, I doubt they have whatever super-advanced entertainment industry they would have on a more developed world in this time. would require a lot of infrastructure to set up.

theNater
2014-06-10, 01:02 AM
They must be really starved for entertainment.
Apparently, the height of culture on planet Jean is Cyber Rap and the Digital Symphony Orchestra (http://freefall.purrsia.com/ff1100/fv01039.htm).

So, yeah, pretty much.

Rockphed
2014-06-12, 04:49 AM
Edge is speaking pure, unadulterated truth. Sure, he is not being very tactful, but truth has a power all its own. Also, while Blunt and the terraforming robot are speculating about the future, Edge is speaking about the present and the past. People might not draw the same conclusions from his data that he wants them to, but that won't be his fault.

I suspect that a lot of robots don't tell their humans how much their directions make the robots' jobs harder because of some mistaken belief that telling the humans off will somehow hurt said humans.

Coidzor
2014-06-12, 05:05 AM
I suspect that a lot of robots don't tell their humans how much their directions make the robots' jobs harder because of some mistaken belief that telling the humans off will somehow hurt said humans.

I'd figured it was some kind of cultural myopia or intentional ego-stokery implanted in their behavior up till now. Though the bit where these guys were all designed by a Mad Chimpentist exiled to the arse end of a podunk colony has me unsettled as to how much is a xanatos gambit, sophontic foible, and human laziness as a spanner in the works.

sihnfahl
2014-06-12, 07:31 AM
I'd figured it was some kind of cultural myopia or intentional ego-stokery implanted in their behavior up till now.
Just plain "Human orders are absolute and not to be questioned."



Though the bit where these guys were all designed by a Mad Chimpentist exiled to the arse end of a podunk colony has me unsettled as to how much is a xanatos gambit, sophontic foible, and human laziness as a spanner in the works.
To paraphrase someone ... "One of the most reliable things in this Universe is the capacity of humanity to take the path of least effort. When given the choice between hard and easy, you can generally bet on a human taking the easy route."

Radar
2014-06-12, 03:14 PM
Just plain "Human orders are absolute and not to be questioned."
As a side note, this problem is also present in any highly hierarchical structure. It can very well happen that a subordinate will obediently act upon an order despite knowing better, or won't take initiative without an explicit order to do so, only because a supervisor is not to be doubted.

sihnfahl
2014-06-12, 06:20 PM
As a side note, this problem is also present in any highly hierarchical structure. It can very well happen that a subordinate will obediently act upon an order despite knowing better, or won't take initiative without an explicit order to do so, only because a supervisor is not to be doubted.
So take some folks who were initially designed not to question, then give them the ability to question, and mix in a fair number of people being lazy and acting in ways that reinforce the idea that questioning may be a necessary thing to acquire, given the harm that not questioning will cause in the long term...

Coidzor
2014-06-13, 05:49 AM
Just plain "Human orders are absolute and not to be questioned."

To paraphrase someone ... "One of the most reliable things in this Universe is the capacity of humanity to take the path of least effort. When given the choice between hard and easy, you can generally bet on a human taking the easy route."

Which works fine for a simple computer system but not for something that has to carry out complex tasks and interpret ambiguous and incomplete human speech. Really shouldn't have made it out of Beta if it was just that.

Nah, there's too much of Chekhov's Gun involved for it to *just* be the fact that Humans are so lazy and stupid that they deserve an extinction-level event.


As a side note, this problem is also present in any highly hierarchical structure. It can very well happen that a subordinate will obediently act upon an order despite knowing better, or won't take initiative without an explicit order to do so, only because a supervisor is not to be doubted.

That's actually a form of insubordination, except in the most self-destructive of hierarchies: giving *exactly* what was asked for.

Kornaki
2014-06-13, 08:23 AM
If Blunt is still targeting the robot audience, then he might be in a better position than most people think.

Radar
2014-06-13, 10:14 AM
If Blunt is still targeting the robot audience, then he might be in a better position than most people think.
Yet the robots who would buy that argument would probably conform to any decision made by humans as well.

Math_Mage
2014-06-13, 11:36 AM
Yet the robots who would buy that argument would probably conform to any decision made by humans as well.
Not if that decision would bring harm to humans.

Yet, what did Edge really do to persuade robots that they're a threat to humans? He articulated challenges robots face in a way humans can identify with; he didn't present himself in a way that a robot would find threatening to humans. Edge was Blunt's play to win the human audience, and it backfired.

sihnfahl
2014-06-13, 11:43 AM
Which works fine for a simple computer system but not for something that has to carry out complex tasks and interpret ambiguous and incomplete human speech. Really shouldn't have made it out of Beta if it was just that.
And yet, it still happened. Reference the actions of Kornada's robot.

Currently, as I recall, there's not enough human population on Jean to be self-sufficient from a human capital standpoint; they're very dependent upon robots for necessary tasks. And the robots unofficially control (read, run) 99% of Jean's economy.

Eliminating every robot from those tasks will result in great harm to the remainder of Jean's human population. Yet the robot, in pursuit of making Kornada OWN the economy, would destroy the very things that actually run the economy as it stands.


Nah, there's too much of Chekhov's Gun involved for it to *just* be the fact that Humans are so lazy and stupid that they deserve an extinction-level event.
It's not. The premise is that the Extinction-Level Event was not a desired aim. Dr Bowman and co were rather upset that it was actually scheduled to be invoked.

The intent, I believe, was to have intelligent robots reach a critical mass, economically and socially, so that when the real event hit, humans would have to accept that robots had become ... human. They couldn't shut down the robots - the colony would be too dependent on having those intelligent robots.

HandofShadows
2014-06-13, 12:36 PM
Edge was Blunt's play to win the human audience, and it backfired.

A backfire the size of a nuke, and Blunt totally missed it. :smallamused:

Kornaki
2014-06-13, 09:41 PM
Not if that decision would bring harm to humans.

Yet, what did Edge really do to persuade robots that they're a threat to humans? He articulated challenges robots face in a way humans can identify with; he didn't present himself in a way that a robot would find threatening to humans. Edge was Blunt's play to win the human audience, and it backfired.

You have robots who won't tell humans that their orders are bad, because human orders are absolute. This robot just walked on stage and told humans that they are stupid. The conclusion is that these aging Bowman robots are becoming dangerously out of control and must be stopped.

Math_Mage
2014-06-14, 12:53 AM
You have robots who won't tell humans that their orders are bad, because human orders are absolute. This robot just walked on stage and told humans that they are stupid. The conclusion is that these aging Bowman robots are becoming dangerously out of control and must be stopped.
See my previous argumentation on this subject. Disobedience is subordinate to harming humans. Since the robot genocide would harm humans, disobedience is not a sufficient counter-case; the counter-case must be based on harm to humans. Sawtooth's latest salvo in this debate is that consciousness is necessary to resolve orders in ways that prevent harm through morality; Blunt's counter, offering Edge's disobedience as a consequence of consciousness, doesn't match up, because Edge doesn't prove that robots are dangerously out of control.

HandofShadows
2014-06-14, 09:13 AM
In fact all Edge did was prove that the "aging Bowman robots" can be of great use to humans. :smallamused:

Coidzor
2014-06-14, 06:50 PM
It's not. The premise is that the Extinction-Level Event was not a desired aim. Dr Bowman and co were rather upset that it was actually scheduled to be invoked.

The intent, I believe, was to have intelligent robots reach a critical mass, economically and socially, so that when the real event hit, humans would have to accept that robots had become ... human. They couldn't shut down the robots - the colony would be too dependent on having those intelligent robots.

I meant the extinction of human life, sorry. That humanity was dumb enough to just *use* the work of a chimp without understanding it, given what they know about chimps, is sticking a fork in the electrical socket at the species level. Granted, it's been suggested that what we know about chimps and what they know about chimps don't exactly sync up.

sihnfahl
2014-06-16, 10:23 AM
I meant the extinction of human life, sorry.
That wouldn't be an aim, as the extinction of human life on Jean would also result in the destruction of the robots on Jean.

If human life on Jean ended, it would mean there was a problem with the robots, as their aim was to make Jean habitable for humans AND to ensure the lives of the human colonists. The death of the colonists would result in the destruction of the robots AND ensure the destruction of any robot that uses a similar Bowman architecture, as they would be seen as having a flaw. Rather than wait for the flaw to manifest, the easiest route would be to destroy all Bowman-architecture robots.


That humanity was dumb enough to just *use* the work of a chimp without understanding it, given what they know about chimps, is sticking a fork in the electrical socket at the species level. Granted, it's been suggested that what we know about chimps and what they know about chimps don't exactly sync up.
Then again, humanity has shown a bit of a disconnect between what is possible and what should be done. Humans have used technology before they fully understood its ramifications or the potential drawbacks.

Whether human designed or chimp designed, it was the path of least effort. "This was designed, this works, I may not understand it, but it's useful to me and the creator(s) assure me that it does what's intended."

Math_Mage
2014-06-16, 01:08 PM
Not only that, but using technology before being fully certain of its drawbacks is a winning strategy in the long run. If we never did anything without absolute certainty, we'd never get anywhere.

sihnfahl
2014-06-16, 03:12 PM
Not only that, but using technology before being fully certain of its drawbacks is a winning strategy in the long run. If we never did anything without absolute certainty, we'd never get anywhere.
And the final fun loop? Humans uplifted chimps. So humans literally created Doctor Bowman, who ended up smarter than his creators, who then in turn used him to develop technology that they don't fully understand, but still use.

Math_Mage
2014-06-18, 05:46 PM
Is Sam...underestimating human stupidity here? Wow.

EDIT: On the other hand, that was a good point.

sihnfahl
2014-06-20, 08:04 AM
As much grief as Sam has caused humans, he has that great point.

Robots haven't removed him from society...

HandofShadows
2014-06-20, 11:58 AM
And it's the kind of point that can get people to really think. Here we thought Sam was going to screw it up, but he seems to have done the very opposite.

Lord Raziere
2014-06-20, 01:29 PM
heheh, I get it, Sam is safe. he is a known lying, thieving, selfish crook whom everyone hates, yet no robot has ever caught him or tried to kill him. he is not only safe from robots, he seems to be thriving despite all expectation that some robot should've gotten him for his crimes by now.

sihnfahl
2014-06-20, 05:27 PM
heheh, I get it, Sam is safe. he is a known lying, thieving, selfish crook whom everyone hates, yet no robot has ever caught him or tried to kill him. he is not only safe from robots, he seems to be thriving despite all expectation that some robot should've gotten him for his crimes by now.
As I recall, they caught him a few times, but he always manages, somehow, to wriggle out of it.

And something tells me that if a robot wanted someone dead, they'd be dead. Easy enough. Don't forget Sawtooth had Sam pinned against a wall at one point. And brought Sam airborne. In Pop Rivet's truck. Even threatened to drop Sam.

Rockphed
2014-06-20, 07:33 PM
Yes, but he is Sam the Wonder Squid. How many humans are half so devious and cunning as he is?

sihnfahl
2014-06-20, 10:41 PM
Yes, but he is Sam the Wonder Squid. How many humans are half so devious and cunning as he is?
Depends on what you call devious and cunning. Sam's figured out the limits. It seems he knows how far he can go without crossing the line.

"Yes, he's annoying as all heck, yes, he causes trouble, but it's more trouble to rein him in than his antics are worth."

Jean's perfect for Sam. The cost of shipping him off-world is apparently very high, and presumably with crime very low on Jean, nobody's wasting the resources to make a prison just for Sam. And his antics aren't worth killing him over.

HandofShadows
2014-06-21, 06:46 AM
Sam has his uses as well. He keeps things from becoming dull. :smallbiggrin:

Radar
2014-06-21, 08:44 AM
Sam has his uses as well. He keeps things from becoming dull. :smallbiggrin:
He also remembers to finish every angry mob chase with ice cream. :smallbiggrin:

Rockphed
2014-06-21, 10:13 PM
Also, doesn't he make good money by doing crimes that nobody except the immediate victim minds? Like stealing the Star Wars Christmas Special laser disk or stealing the phones of people who talk at the movies?

Radar
2014-06-22, 03:19 AM
Also, doesn't he make good money by doing crimes that nobody except the immediate victim minds? Like stealing the Star Wars Christmas Special laser disk or stealing the phones of people who talk at the movies?
Even more: other people pay him to steal those phones.

Landis963
2014-07-04, 01:30 PM
It's funny. Blunt isn't drawn any differently, but you can just see the light dawning and the bricks dropping.

Beeskee
2014-07-12, 07:58 AM
I love Freefall. :D

A few points:


Florence was referring to the genetically engineered chimps as sociopaths, not chimps in general. And from what we've seen, she's right. Dr. Bowman is nuts, but he's smart enough to control himself somewhat, or at least redirect it elsewhere.


As to why they're using these robots at all: Jean's colonization effort went pear-shaped early on. They were due to get two robotic factory ships; one crash-landed in a lake, and the other never arrived. They managed to get the crashed one working after several more mishaps, but the robots it produced didn't function. Presumably Dr. Bowman was already on the planet at this point, to be able to modify the robots. He managed to get the broken robots functioning by applying the same generic brain modifications that he developed for the Bowman's Wolves. The beachhead colonists' choices at this point were "wait to die"* or "use untested technology" - they chose to live. :D This was covered in the part where Florence is doing modeling for a robotic tailoring class.

* It was a one-way trip in the beginning: there were no spaceport facilities, and the colony ships were dismantled for parts. They did have inflatable tents, basic food (algae), and water, so it may have been death by old age or boredom, but still...


Sam was thrown in prison a few times. He caused utter havoc, including stealing all the cell doors. The other prisoners threw him out. I'm guessing he robbed them blind. :D Roll that around on your tongue: He got thrown in prison and STOLE the DOORS. There's a reason humanity has Sam marked as "Shoot on sight" if he ever enters the Sol system.


The robots only just found out about their Bowman brain architecture; not all of them may even know it yet, or understand the significance, and some who do may have chosen to disregard it. They've been "brought up" being told they are products, that they need to obey human orders, and that humans are perfect and their orders are flawless - on top of 'hardware' in their brains enforcing that behavior too - and they know they can easily be scrapped and replaced if they are considered defective. So they're working against years, decades in some cases, of behavior, instinct, and training. Some have done it. Edge is a rare case. The good news is, the robots are overwhelmingly nice and friendly overall, completely believing in the near-Utopian society they are helping to build, utterly at peace with humans, and the worst danger they present to humanity is maybe accidentally making them grand-creators. :D

sihnfahl
2014-07-14, 08:01 AM
One crash-landed in a lake
After a good portion of it had filled with superheated atmospheric gases during entry. Fortunately, the fires were put out when the ship flooded.

Rockphed
2014-07-15, 09:45 PM
After a good portion of it had filled with superheated atmospheric gases during entry. Fortunately, the fires were put out when the ship flooded.

I think my favorite part of this comic is that nothing is ever bad enough until it gets worse several times over.

sihnfahl
2014-07-16, 09:00 AM
I think my favorite part of this comic is that nothing is ever bad enough until it gets worse several times over.
Well, it WAS the most expedient way to get all the fires out, since the fire suppression system wasn't enough.

So it's a case of the 'cure' being bad, but not being as bad as the alternative...

Which seems to be the running theme, more and more. "Okay, we stopped the robots from being destroyed. Now what do we do with several million sentient, free-willed AIs?"

Lord Raziere
2014-07-16, 10:05 AM
Well, it WAS the most expedient way to get all the fires out, since the fire suppression system wasn't enough.

So it's a case of the 'cure' being bad, but not being as bad as the alternative...

Which seems to be the running theme, more and more. "Okay, we stopped the robots from being destroyed. Now what do we do with several million sentient, free-willed AIs?"

Knowing this comic?

"Sell them life insurance/replacement parts/teach them how to make the right choices by scamming them/etc."-Sam.

something along those lines, I guess.

sihnfahl
2014-07-18, 09:55 AM
Well, someone's Friend or Foe Identifier is working fine...

HandofShadows
2014-07-18, 12:42 PM
Really, there are some GREAT lines in this comic. :smallcool:

sihnfahl
2014-07-21, 08:30 AM
Really, there are some GREAT lines in this comic. :smallcool:
Aaaand some great lines, but bad implications.

Also ... heh. He makes sure the robots don't get brainwiped because they learn about all the safeguards, but he has no compunction about wiping Florence's short term memory because she's in contact with Dr Bowman.

HandofShadows
2014-07-21, 08:38 AM
Aaaand some great lines, but bad implications.

Also ... heh. He makes sure the robots don't get brainwiped because they learn about all the safeguards, but he has no compunction about wiping Florence's short term memory because she's in contact with Dr Bowman.

I think he's worried about Florence being in contact with her creator and that he might do something to her mind.

sihnfahl
2014-07-21, 10:14 AM
I think he's worried about Florence being in contact with her creator and that he might do something to her mind.
Yes, those pesky things called 'ideas'. There's already been one world-shattering event because of Dr Bowman's designs.

HandofShadows
2014-07-21, 12:24 PM
Actually I think it's because they suspect that Bowman left some "back doors" in his designs. The Mayor flat out says this at one point.

sihnfahl
2014-07-22, 10:05 AM
Actually I think it's because they suspect that Bowman left some "back doors" in his designs. The Mayor flat out says this at one point.
Yes, and he just patched the 'safeguards' in the robots. Plus, Bowman has shown he can get around the blocks they've attempted to put in his way (hacking the drones, the coffee maker, the communications devices...)

But he has no compunction about wiping Florence's short-term memory.

And if Bowman was smart, wiping short-term memory wouldn't do a thing; he'd have set something up to immediately move it into long-term memory.

Rakaydos
2014-07-22, 10:20 AM
I suspect Florence will not return from the arctic station in sound mind. Between Bowman's trojans and EU's efforts to counter them, she will be barely recognizable.

Then Sam will swoop in and use the backup Florence made to reset her personality to just before the GitD crisis. "What happened? Did we win?"

Rockphed
2014-07-22, 07:18 PM
I suspect Florence will not return from the arctic station in sound mind. Between Bowman's trojans and EU's efforts to counter them, she will be barely recognizable.

Then Sam will swoop in and use the backup Florence made to reset her personality to just before the GitD crisis. "What happened? Did we win?"

Backup? When did Florence do that?

Rakaydos
2014-07-22, 09:25 PM
Backup? When did Florence do that?

http://freefall.purrsia.com/ff2300/fc02226.htm
http://freefall.purrsia.com/ff2300/fc02226.png

Rockphed
2014-07-24, 09:34 PM
Ah, she saved a brain config backup, not a brain content backup. Kinda like saving your firefox settings but not backing up your whole hard drive.

Lord Raziere
2014-07-24, 09:58 PM
and now Dr. Bowman is having a legitimate mad scientist moment... not cackling mad, but disposing of the concept of socks from your designs to improve them is certainly odd, if logical in a way, and kind of qualifies him for it.

theangelJean
2014-07-25, 05:41 PM
Well, someone's Friend or Foe Identifier is working fine...


Really, there are some GREAT lines in this comic. :smallcool:

I agree that this comic has great dialogue, but I'm actually confused about what was going on in that particular comic and the ones previous. (Yes, it was a week ago now; I haven't had any typing time since then.)


"If I give the robots permission to patch themselves, we lose the ability to remotely get into their systems. My bosses are going to want those holes left open." "Patch the security holes."

So, in this comic Mr Raibert is talking about security holes. The robots are asking permission to install patches to fix security holes, because presumably these security holes allow remote access to the robots. However, in the previous two comics they are talking about "software failsafes". Are these the same thing as the security holes? I thought the software failsafes were more along the lines of the "safeguards" we were talking about, which Florence has in order to protect humans (and presumably the AIs do too). Maybe I had the wrong idea, but I thought these "safeguards" were a mental thing, kind of like the Three Laws but more nuanced, rather than the brute-force security overrides they seem to be talking about here.

If they are the same thing, that brings to mind an idea: it's a pity Florence can't install a similar patch to disable her own "remote" access...

Coidzor
2014-07-26, 01:23 AM
I agree that this comic has great dialogue, but I'm actually confused about what was going on in that particular comic and the ones previous. (Yes, it was a week ago now; I haven't had any typing time since then.)


"If I give the robots permission to patch themselves, we lose the ability to remotely get into their systems. My bosses are going to want those holes left open." "Patch the security holes."

So, in this comic Mr Raibert is talking about security holes. The robots are asking permission to install patches to fix security holes, because presumably these security holes allow remote access to the robots. However, in the previous two comics they are talking about "software failsafes". Are these the same thing as the security holes? I thought the software failsafes were more along the lines of the "safeguards" we were talking about, which Florence has in order to protect humans (and presumably the AIs do too). Maybe I had the wrong idea, but I thought these "safeguards" were a mental thing, kind of like the Three Laws but more nuanced, rather than the brute-force security overrides they seem to be talking about here.

If they are the same thing, that brings to mind an idea: it's a pity Florence can't install a similar patch to disable her own "remote" access...

Well, they may just have had multiple forms of safeguards and fail-safes, as well as remote access and even a remote kill-switch, in case of robot uprising, mass malfunction, or cyber-terrorism.

sihnfahl
2014-07-28, 09:30 AM
Okay, ouch?

High testosterone results in aggressive behavior. To reduce likelihood of aggressive behavior, reduce testosterone levels. One source of testosterone - the testicles.

I do have to say that it's a testament to his determination that he did it to himself. With a spoon.

Grey_Wolf_c
2014-07-31, 08:16 PM
I'm really enjoying Bowman's story. I have always enjoyed characters who are aware of their limitations and have taken steps to address them to the best of their abilities. Sure, he may be a psychopathic genius, but damn, at least he knows it, and has devoted his life to fixing it (rather than, you know, blaming all humans and trying to take over the world or some such nonsense).

Grey Wolf

Rockphed
2014-07-31, 10:31 PM
I'm really enjoying Bowman's story. I have always enjoyed characters who are aware of their limitations and have taken steps to address them to the best of their abilities. Sure, he may be a psychopathic genius, but damn, at least he knows it, and has devoted his life to fixing it (rather than, you know, blaming all humans and trying to take over the world or some such nonsense).

Grey Wolf

You assume that the wolves and robots aren't his method of trying to blame all humans and take over the world.

Grey_Wolf_c
2014-07-31, 10:35 PM
You assume that the wolves and robots aren't his method of trying to blame all humans and take over the world.

I will admit I am assuming that, mostly because he seems so ridiculously competent that, if that were his intention, I can't see the plan failing to have worked before now. Of course, it could be a double bluff, and indeed, if we go down that line, an n-bluff where n->∞.

Grey Wolf

PhantomFox
2014-07-31, 11:59 PM
I will admit I am assuming that, mostly because he seems so ridiculously competent that, if that were his intention, I can't see the plan failing to have worked before now. Of course, it could be a double bluff, and indeed, if we go down that line, an n-bluff where n->∞.

Grey Wolf

aka, the Vizzini Logic Loop?

Coidzor
2014-08-02, 02:11 AM
This definitely explains some things about why the attempt to uplift chimps was such an abysmal failure in the first place.


I will admit I am assuming that, mostly because he seems so ridiculously competent that, if that were his intention, I can't see the plan failing to have worked before now. Of course, it could be a double bluff, and indeed, if we go down that line, an n-bluff where n->∞.

Grey Wolf

Could be playing the long game: instead of ending up as The Culture, humans end up like Dresden Codak's portrayal of the future, given the eventual emergence of an AI god.

If they'd notice any overt attempt to make killbots, why not just make friendly robots that will make humans die the death of pampered pets that have no more agency than a house cat and less inclination to breed than giant pandas?

Rockphed
2014-08-03, 02:42 AM
This definitely explains some things about why the attempt to uplift chimps was such an abysmal failure in the first place.



Could be playing the long game: instead of ending up as The Culture, humans end up like Dresden Codak's portrayal of the future, given the eventual emergence of an AI god.

If they'd notice any overt attempt to make killbots, why not just make friendly robots that will make humans die the death of pampered pets that have no more agency than a house cat and less inclination to breed than giant pandas?

Considering that taking a human to meet young robots who have never seen one was considered dangerous without a sneaky plan to disguise him as a robot, you might be on to something.

Kornaki
2014-08-03, 04:29 PM
That chimp does not look like he has been wounded in the field of battle.

Math_Mage
2014-08-03, 05:53 PM
That chimp does not look like he has been wounded in the field of battle.
Those black dots might be holes in his armor. Otherwise, I agree.

HandofShadows
2014-08-04, 07:00 AM
Battle Lawyers? :smallconfused::smalleek:

Shadowy
2014-08-04, 07:43 AM
Battle Lawyers? :smallconfused::smalleek:

Battle Lawyers! :smallbiggrin::smallcool:

halfeye
2014-08-13, 06:31 PM
Florence isn't broken! Yay! :smallbiggrin:

Grey_Wolf_c
2014-08-13, 06:33 PM
Florence isn't broken! Yay! :smallbiggrin:

Florence is Broken by Design! Ya... :smalleek:

(j/k)

GW

halfeye
2014-08-13, 06:48 PM
Florence is Broken by Design! Ya... :smalleek:

(j/k)

GW
She's within her design parameters, which should do wonders for her confidence.

Overconfidence will be her biggest problem now, that and reproduction, since I don't know of any other Bowman's wolves.

Coidzor
2014-08-13, 08:26 PM
Overconfidence will be her biggest problem now, that and reproduction, since I don't know of any other Bowman's wolves.

I suppose their existence is slightly suspect since we don't even know that Florence remembers having seen any of them before, but she's been corresponding with someone...

Grey_Wolf_c
2014-08-13, 08:28 PM
I suppose their existence is slightly suspect since we don't even know that Florence remembers having seen any of them before, but she's been corresponding with someone...

Word of God confirms that there are 14 of them... and that most of them are a-holes (like, one of the males is uninterested in reproducing, and another wants to charge the females outrageous amounts of money for his sperm. There is a third male, who also can't or won't reproduce, which is why the second can get away with it).

Grey Wolf

Coidzor
2014-08-13, 08:33 PM
Word of God confirms that there are 14 of them... and that most of them are a-holes (like, one of the males is uninterested in reproducing, and another wants to charge the females outrageous amounts of money for his sperm. There is a third male, who also can't or won't reproduce, which is why the second can get away with it).

Grey Wolf

You really wonder why advanced aliens bother uplifting anyone if they're all going to be jerks. Or maybe that's what makes them advanced, their uplifts don't all turn out to be jerks aside from the occasional fluke? :smallconfused:

Yuki Akuma
2014-08-13, 08:43 PM
The third male is actually in a committed monogamous relationship with one of the females. So he will breed. With her. Specifically.

The rest of them are females and none of them seem to particularly be jerks. Just that second male, really.

halfeye
2014-08-13, 08:53 PM
The third male is actually in a committed monogamous relationship with one of the females. So he will breed. With her. Specifically.

The rest of them are females and none of them seem to particularly be jerks. Just that second male, really.
That's a really, really weird population for wolves. Usually, a pack is one female and lots of males.

Yuki Akuma
2014-08-13, 08:54 PM
I'm sure it was intentional on Bowman's part.

11 females and 3 males makes for a much faster breeding population than 3 females and 11 males, after all.
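(Back-of-the-envelope, since pups per generation scale with the number of breeding females - a quick Python sketch, with an invented litter size just for illustration:)

litter_size = 4  # assumed pups per female per breeding cycle (invented number)
print("11 females, 3 males:", 11 * litter_size, "pups per cycle")
print(" 3 females, 11 males:", 3 * litter_size, "pups per cycle")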

halfeye
2014-08-13, 08:58 PM
I'm sure it was intentional on Bowman's part.

11 females and 3 males makes for a much faster breeding population than 3 females and 11 males, after all.
If the males will breed.

Yuki Akuma
2014-08-13, 09:11 PM
If the males will breed.

Yep, that's almost certainly why there's three of them, not just one.

Grey_Wolf_c
2014-08-13, 09:13 PM
The third male is actually in a committed monogamous relationship with one of the females. So he will breed. With her. Specifically.

Which, if there were enough sperm to go around, would be fine. But when you single-handedly reduce the available male DNA of your species by a third, that is an a-hole move. Sure, he will have more children than male 1, but it is still bad for the species as a whole. It's not like the females are asking for conjugal visits, just for donations.

I know, I know, the real future for the wolves is to get the humans to produce a second much larger generation rather than depend on three males reproducing with all the females - even if all three did, the genetic pool would still be too restricted for any kind of bright future ahead - but still, not cool.

Edit: but yes, I should say, I was thinking that the males were a-holes; I should not have included all the females in my disparagement

Grey Wolf

Rakaydos
2014-08-13, 09:17 PM
So one of the males is gay, another is monogamous, and the third acts like he's the last man on earth?

halfeye
2014-08-13, 09:19 PM
Which, if there were enough sperm to go around, would be fine. But when you single-handedly reduce the available male DNA of your species by a third, that is an a-hole move. Sure, he will have more children than male 1, but it is still bad for the species as a whole. It's not like the females are asking for conjugal visits, just for donations.

I know, I know, the real future for the wolves is to get the humans to produce a second much larger generation rather than depend on three males reproducing with all the females - even if all three did, the genetic pool would still be too restricted for any kind of bright future ahead - but still, not cool.

Grey Wolf
I can't quite get my head around whether it would be uncool or very cool for Florence to request a sexual reassignment in the circumstances.

Grey_Wolf_c
2014-08-13, 09:24 PM
I can't quite get my head around whether it would be uncool or very cool for Florence to request a sexual reassignment in the circumstances.

I'd imagine that the technology to turn ovaries into functional testes doesn't exist in Freefallverse, or one of the females would have at least considered it by now. If one did, it'd be either a "good for him" if he was FtM, or cool if she was doing it for the greater good (Note: Due to the abuse the phrase "for the greater good" has been subject to, I must clarify that "sacrifices for the greater good" are only cool in my eyes if the one making the sacrifice consents to it).

Grey Wolf

halfeye
2014-08-13, 09:42 PM
I'd imagine that the technology to turn ovaries into functional testes doesn't exist in Freefallverse, or one of the females would have at least considered it by now. If one did, it'd be either a "good for him" if he was FtM, or cool if she was doing it for the greater good (Note: Due to the abuse the phrase "for the greater good" has been subject to, I must clarify that "sacrifices for the greater good" are only cool in my eyes if the one making the sacrifice consents to it).

Grey Wolf
I was conscious of Florence's current position and the extra possibilities that might enable.

Coidzor
2014-08-13, 10:28 PM
Yep, that's almost certainly why there's three of them, not just one.

Yep. Thought to include spares. Included insufficient spares.

And the gay one and the monogamous one are both idiots. :smallconfused: If there's even a gay one and it's not something that would actually explain an aversion to artificial insemination.

Yuki Akuma
2014-08-14, 04:38 AM
Yeah, honestly, when I first read about the three male Bowman's wolves, I got unreasonably annoyed at a bunch of fictional characters.

You have a moral duty to ensure the future of your own species. Even if their gene pool doesn't really have wide enough diversity anyway (you'd need about fifty genetically distinct individuals for that), refusing to breed is still a terrible thing to do.

And yes I do consider sabotaging your own species' chances of survival morally reprehensible. Right up there with genocide. :smalltongue:

Coidzor
2014-08-14, 11:54 AM
Besides, they're property anyway... :smallconfused: If anyone's going to charge, it's their owners.

Granted, any sane program would've put the kibosh on the attempt to hold the program at ransom in its terms of lease/sale, but, well, this setting...

Kornaki
2014-08-15, 01:26 PM
What is Dr. Bowman's point? We have yet to see the human maximizers do anything that is actually bad news for anything, except for one attempt by a robot to destroy other robots, and that is really something Kornada could have figured out how to do himself if he were competent (the existence of a Clippy-like partner was unnecessary). The only things in existence that we have really been concerned about robots destroying are neural-network robots, so making those neural-network robots to help keep the robots under control has basically created its own problem to solve. The other possibility is that Dr. Bowman is concerned about himself, being not a human, or perhaps about other aliens (though as we have observed, Sam has not had a problem interacting with robots for all these years).

Grey_Wolf_c
2014-08-15, 01:43 PM
What is Dr. Bowman's point? We have yet to see the human maximizers do anything that is actually bad news for anything, except for one attempt by a robot to destroy other robots, and that is really something Kornada could have figured out how to do himself if he were competent (the existence of a Clippy-like partner was unnecessary). The only things in existence that we have really been concerned about robots destroying are neural-network robots, so making those neural-network robots to help keep the robots under control has basically created its own problem to solve. The other possibility is that Dr. Bowman is concerned about himself, being not a human, or perhaps about other aliens (though as we have observed, Sam has not had a problem interacting with robots for all these years).

We haven't seen human maximizers, except the "baby" AIs in the "war". Bowman already made sure that his AIs didn't have maximizer tendencies past the 20-year point. Instead, they develop their own interests. The danger he foresees is that if every AI works towards a single goal, with no other goal ever considered, it leads to a situation in which every other priority is secondary to creating more humans to be served - terrible news for everything, including ultimately humans themselves. Think replicators in Stargate SG1.

Grey Wolf

Kornaki
2014-08-15, 01:49 PM
We haven't seen human maximizers, except the "baby" AIs in the "war". Bowman already made sure that his AIs didn't have maximizer tendencies past the 20-year point. Instead, they develop their own interests.

So we have basically had 20 years of human maximizers running around (and remember not all robots are of the Bowman architecture) and have not had any problems.



The danger he foresees is that if every AI works towards a single goal, with no other goal ever considered, it leads to a situation in which every other priority is secondary to creating more humans to be served - terrible news for everything, including ultimately humans themselves. Think replicators in Stargate SG1.

Grey Wolf

We have not seen any indication that any robots would want to actually do that. So are none of the robots actually human maximizers, or does he not understand how the robots (both those he made and those he didn't) work?

Math_Mage
2014-08-15, 02:06 PM
What is Dr. Bowman's point? We have yet to see the human maximizers do anything that is actually bad news for anything, except for one attempt by a robot to destroy other robots, and that is really something Kornada could have figured out how to do himself if he were competent (the existence of a Clippy-like partner was unnecessary). The only things in existence that we have really been concerned about robots destroying are neural-network robots, so making those neural-network robots to help keep the robots under control has basically created its own problem to solve. The other possibility is that Dr. Bowman is concerned about himself, being not a human, or perhaps about other aliens (though as we have observed, Sam has not had a problem interacting with robots for all these years).
I think it's a stretch to say that non-Bowman robots don't deserve concern.

I have a different problem, though. It's not clear to me that non-Bowman robots were designed to be human maximizers. I mean, they aren't exactly designed to be human harm minimizers (the Three Laws interpretation), since the safeguards are more sophisticated than the Three Laws--but that seems like a closer interpretation, and one that can mean bad news for humans as well.

Side note: while trying to get a better picture of what safeguards are actually placed on robots in this comic, I ran across this archived information. (http://home.comcast.net/~ccdesan/Freefall/Freefall_Backstory.html) Very interesting read.

Kornaki
2014-08-15, 02:39 PM
I think it's a stretch to say that non-Bowman robots don't deserve concern.


Non-Bowman robots have never been threatened by robots trying to be human maximizers either. The point is that for all this concern about robots doing so much for humans and being so dangerous, we really haven't seen that much of it.

EDIT TO ADD: In fact there has been at least as much danger to them from robots that aren't human maximizers; the Bowman robots scavenging other bots for parts on the streets.

Ravens_cry
2014-08-16, 01:10 AM
Reminds me of the end of the Foundation series, where it's revealed that the reason we never encounter aliens in that series is that the Zeroth Law robots were killing off other sapient species before humans could encounter them.

Douglas
2014-08-16, 02:33 AM
Reminds me of the end of the Foundation series, where it's revealed that the reason we never encounter aliens in that series is that the Zeroth Law robots were killing off other sapient species before humans could encounter them.
That's not quite what actually happened.
The robots did not kill off other sapient life, the robots used some sort of reality manipulation tech to select a universe where other sapient life (in the Milky Way, at least) never even develops in the first place. Apparently, all that was needed was to ensure that Earth is the only habitable planet with a large moon, as the moon's gravity (somehow) resulted in higher amounts of uranium in Earth's crust, making it more radioactive, causing a higher mutation rate and speeding up evolution. The default speed of evolution, as the Foundation series would have it, hardly ever gets significantly past moss and similar life in the time span available.
Also, if you want the label to help anyone decide whether to open that spoiler or not, you should be more specific about what it is you're spoiling. For anyone who hasn't opened it yet, it's the Foundation series.

Ravens_cry
2014-08-16, 02:47 AM
That's not quite what actually happened.
The robots did not kill off other sapient life, the robots used some sort of reality manipulation tech to select a universe where other sapient life (in the Milky Way, at least) never even develops in the first place. Apparently, all that was needed was to ensure that Earth is the only habitable planet with a large moon, as the moon's gravity (somehow) resulted in higher amounts of uranium in Earth's crust, making it more radioactive, causing a higher mutation rate and speeding up evolution. The default speed of evolution, as the Foundation series would have it, hardly ever gets significantly past moss and similar life in the time span available.
Also, if you want the label to help anyone decide whether to open that spoiler or not, you should be more specific about what it is you're spoiling. For anyone who hasn't opened it yet, it's the Foundation series.

The trouble is, putting it in that context is itself a spoiler. Hmm, long time since I read the book, but OK. Still, it's a case of a 'human maximiser' at work, I'd say.
As for Freefall's human maximisers, even that might not be good if they decide that humans are better off cocooned, basically. Perhaps kept on life support and in isolation units so they don't catch diseases and 'live' as long as possible. Imagine the Matrix, but it's not for power generation or even computation, but to keep us 'safe'. *shudder*
Ack, this is such a quandary for an anthrophile. Humanity enslaved 'for our own good', or made irrelevant by beings that can be mass produced as functional adults and who can self improve beyond our capacity to keep up. I guess the best we can hope is we make good pets. 'Doggy' indeed.

Coidzor
2014-08-16, 03:21 AM
Ack, this is such a quandary for an anthrophile. Humanity enslaved 'for our own good', or made irrelevant by beings that can be mass produced as functional adults and who can self improve beyond our capacity to keep up. I guess the best we can hope is we make good pets. 'Doggy' indeed.

That is a good reason not to let AI just go unsupervised or be designed by idiots or madmen, yes.

Ravens_cry
2014-08-16, 03:48 AM
That is a good reason not to let AI just go unsupervised or be designed by idiots or madmen, yes.

Perhaps, but there is a fine line between 'supervised' and 'enslaved'. Imagine if an alien race was holding back human progress because they feared, justifiably, we'd supersede them.

Coidzor
2014-08-16, 04:30 AM
Perhaps, but there is a fine line between 'supervised' and 'enslaved'. Imagine if an alien race was holding back human progress because they feared, justifiably, we'd supersede them.

Meh. It's more like if humans were vat-grown as a slave race by an alien race, and if they didn't watch us to make sure that they had actually designed us properly, we'd kill, enslave, or inflict a fate worse than death upon them.

Ideally you'd never make strong AI without being able to deal with them, and you'd do it intelligently given the difficulty in actually making a sophont. Plus, for most of what we want AI for, we just want weak AI anyway - setting aside those who actually want to be bundled off into cocoons by a Machine God of their own creation.

A robotic arm in a factory does not need nearly as much architecture as the Bowman design affords, for instance. Potentially an intelligence overseeing an entire factory might need stronger AI, but not nearly enough to get anywhere near the border of personhood.

Hence this webcomic about what happens when desperation and a mad scientist cause people to start using people-AI as hammer-AI where things have, objectively, not been done right and they're trying to muddle through as best they can.

Ravens_cry
2014-08-16, 06:43 AM
Geoscale engineering is one case where the hammer you need is people I'd say, or close to it. You need a lot of agents capable of independent, unsupervised action, yet you also need a lot of coordinating and communications between said agents. While you probably wouldn't need something quite as human as our friends here, you'd need something that, while not human, would be damn close to people, if Other. Once you add humans in residence to the mix, you need something even more human, or that at least understands human motivations, or the conflict will be even worse.

Coidzor
2014-08-16, 05:26 PM
Geoscale engineering is one case where the hammer you need is people I'd say, or close to it. You need a lot of agents capable of independent, unsupervised action, yet you also need a lot of coordinating and communications between said agents. While you probably wouldn't need something quite as human as our friends here, you'd need something that, while not human, would be damn close to people, if Other. Once you add humans in residence to the mix, you need something even more human, or that at least understands human motivations, or the conflict will be even worse.

They are not *only* using the robots for geoscale engineering. There are, from what I can see, robots that are the equivalent of giving full awareness and intelligence to a hammer. Jar-Jar bot, for instance.

Ravens_cry
2014-08-17, 04:30 PM
They are not *only* using the robots for geoscale engineering. There are, from what I can see, robots that are the equivalent of giving full awareness and intelligence to a hammer. Jar-Jar bot, for instance.
That was rather cruel, yes.:smalltongue:
On the other hand, I believe it was established earlier that that was the only kind of robot their factory could make.

Coidzor
2014-08-18, 02:13 AM
That was rather cruel, yes.:smalltongue:
On the other hand, I believe it was established earlier that that was the only kind of robot their factory could make.

True, but they were using them in completely frivolous ways which just made the problem they're in that much worse.

Now that they realize the problem, then they should definitely get to work on rectifying that. Because seriously, not even AM deserves being forced to be a Jar Jar Bot. Granted, whoever or whatever thought up Jar Jar Bots in the first place probably needs a good purging for EXTRA HERESY.

Ravens_cry
2014-08-18, 11:07 AM
True. On the other hand, people have been using amazing technology for frivolous uses for quite some time.
You got a means of publishing the written word for mass audiences? Let's print pornography!
You have a cheap, small, powerful and easy-to-use computing device? Play games!
You have an international decentralized computer network connecting people from all corners of the world and even parts of space via a multitude of media? Pornography and games!:smalltongue:

Coidzor
2014-08-18, 02:34 PM
True. On the other hand, people have been using amazing technology for frivolous uses for quite some time.
You got a means of publishing the written word for mass audiences? Let's print pornography!
You have a cheap, small, powerful and easy-to-use computing device? Play games!
You have an international decentralized computer network connecting people from all corners of the world and even parts of space via a multitude of media? Pornography and games!:smalltongue:

Well, yeah, but if they start making sexbots, then it's sorta rapey.

Ravens_cry
2014-08-18, 07:09 PM
Well, yeah, but if they start making sexbots, then it's sorta rapey.
Sentient sex bots are certainly an ethical landmine of their own, yes. Sure, you could program them to love whoever they 'imprint' on, but does that programming count as coercion? In many ways yes - in all the important ways - but, at the same time, that love is, for all intents and purposes, real, while it exists.

Rockphed
2014-08-23, 08:11 AM
Moving away from the creepy, I think it is amusing how good at playing people Dr Bowman is.

Ravens_cry
2014-08-25, 09:29 PM
Moving away from the creepy, I think it is amusing how good at playing people Dr Bowman is.
I like that he has padded knuckle guards. I always thought it made sense for a creature whose manipulators were also motile limbs to have or invent something of that nature.

Rakaydos
2014-09-03, 07:24 PM
"We have to give the wolf back. Your existance is top secret. Do you see my Problem?"
"Yes. You dont think more than 5 moves ahead" he says as he makes a copy of the wolf's brain.

Math_Mage
2014-09-05, 11:59 AM
Well, I bet they didn't see that coming. But that isn't exactly going to make humanity proper less apprehensive about him...

Silver Swift
2014-09-10, 04:07 PM
Why would it matter that Florence can't see commander poopy head's (real) credentials? We didn't see Dr. Bowman give her any direct orders, so as long as the commander outranks the mayor's assistant (which should be obvious to Florence, given that he has command of the entire base) she should be forced to obey his orders.

Math_Mage
2014-09-10, 04:09 PM
Why would it matter that Florence can't see commander poopy head's (real) credentials? We didn't see Dr. Bowman give her any direct orders, so as long as the commander outranks the mayor's assistant (which should be obvious to Florence, given that he has command of the entire base) she should be forced to obey his orders.
What are the rules for who Florence has to take orders from?

HandofShadows
2014-09-10, 04:11 PM
The commander had no (proven) authority over Florence. She is talking to Bowman willingly. He didn't have to give her orders. Florence knows very well the commander should have authority over her, but since he can't PROVE it she has the option of ignoring him. She is using a loophole to ignore orders she does not want to follow. :smallcool:

Silver Swift
2014-09-10, 04:31 PM
The commander had no (proven) authority over Florence. She is talking to Bowman willingly. He didn't have to give her orders. Florence knows very well the commander should have authority over her, but since he can't PROVE it she has the option of ignoring him. She is using a loophole to ignore orders she does not want to follow. :smallcool:

But that opens up a whole new can of mental yoga Bowman's AIs can use to wiggle out of direct orders. If they can ignore blatantly obvious things like that, then how does the mayor PROVE that she is actually the mayor? After all, she might be an imposter.

Of course, it isn't exactly like the AIs lack loopholes to get out from under their safeguards, but it still seems like a pretty major oversight in their design.

halfeye
2014-09-10, 05:17 PM
But that opens up a whole new can of mental yoga Bowman's AIs can use to wiggle out of direct orders. If they can ignore blatantly obvious things like that, then how does the mayor PROVE that she is actually the mayor? After all, she might be an imposter.

Of course, it isn't exactly like the AIs lack loopholes to get out from under their safeguards, but it still seems like a pretty major oversight in their design.

No, he holds up his ID card, and the computer displays it wrong; then he says "I hate redaction software", so he knows it's what she's seeing that's wrong, not what she's doing with what she's seeing. If he was really showing bogus ID, she would be correct to ignore his orders. It's not Florence's fault that the computer transmitted his ID card wrong, and we see that it is transmitted wrong: the colour changes.

Rockphed
2014-09-10, 05:45 PM
No, he holds up his ID card, and the computer displays it wrong; then he says "I hate redaction software", so he knows it's what she's seeing that's wrong, not what she's doing with what she's seeing. If he was really showing bogus ID, she would be correct to ignore his orders. It's not Florence's fault that the computer transmitted his ID card wrong, and we see that it is transmitted wrong: the colour changes.

But she took orders from the Mayor without the Mayor having to prove her position. The Commander, on the other hand, is being ignored to the best of her ability.

Kornaki
2014-09-10, 08:56 PM
The guy who changed the commander's ID card is the only guy on the planet who understands how Florence decides who is and is not human. I am certain he took advantage of a very subtle loophole, not a world-altering bug in the design. The base commander didn't just fail to prove he had order authority, he explicitly proved to Florence that he did not have order authority, which is completely different.

Silver Swift
2014-09-11, 03:30 AM
The guy who changed the commander's ID card is the only guy on the planet who understands how Florence decides who is and is not human. I am certain he took advantage of a very subtle loophole, not a world-altering bug in the design. The base commander didn't just fail to prove he had order authority, he explicitly proved to Florence that he did not have order authority, which is completely different.

Hmm, yeah that works. I wonder what poopy head will do next, now that direct orders are off the table. Will he try to reason with Florence or will he try to force her hand some other way (keeping in mind the only other reference he has for biological AI's)?

Rockphed
2014-09-11, 05:13 AM
Hmm, yeah that works. I wonder what poopy head will do next, now that direct orders are off the table. Will he try to reason with Florence or will he try to force her hand some other way (keeping in mind the only other reference he has for biological AI's)?

Reasoning with Florence might work. However, I doubt she will submit to anything without the Chief being present. She is, after all, evidence in a terrorism case.

Gez
2014-09-13, 06:28 AM
But she took orders from the Mayor without the Mayor having to prove her position. The Commander, on the other hand, is being ignored to the best of her ability.

The commander is the commander of the security personnel of the Ecosystems Unlimited compound. His authority is over ESU staff. The Mayor, on the other hand, represents the government of the city she resided in (or was it even the entire planet?). Of course, ESU actually owns the planet, so it muddies the waters a bit.

To put it another way, when the Mayor claims authority over Florence, she's treated as a citizen; when the Commander does that, she's treated as an ESU product and property.

Rockphed
2014-09-14, 12:18 AM
The commander is the commander of the security personnel of the Ecosystems Unlimited compound. His authority is over ESU staff. The Mayor, on the other hand, represents the government of the city she resided in (or was it even the entire planet?). Of course, ESU actually owns the planet, so it muddies the waters a bit.

To put it another way, when the Mayor claims authority over Florence, she's treated as a citizen; when the Commander does that, she's treated as an ESU product and property.

Well, the mayor does try to force her to get a dog license.

Edit: and I would have abbreviated Ecosystems Unlimited "EU". Why are you using ESU?

lord_khaine
2014-09-14, 12:30 PM
Perhaps because when you say EU, a lot of people would automatically think of the real-world political organisation?

Sobol
2014-09-14, 05:20 PM
Did the author say anything about how long the comic is going to be? It seems to me that things are moving towards a conclusion.

Rockphed
2014-09-14, 06:58 PM
Did the author say anything about how long the comic is going to be? It seems to me that things are moving towards a conclusion.

Things are reaching a tipping point. I expect that when this storyline is over, things will go in new and interesting directions.

hajo
2014-09-15, 04:31 AM
Did the author say anything about how long the comic is going to be?
It seems to me that things are moving towards a conclusion.
Indeed (http://www.crosstimecafe.com/viewtopic.php?f=8&t=5613&p=122734&hilit=+conclusion#p122734), he did :smallamused:


The current story in Freefall is coming to a conclusion (Hopefully this year, though I'm beginning to suspect it will bleed over into next year a bit.), but the comic isn't ending.
Future stories will be shorter. One can only do so many 15 year storytelling stints in a lifetime. :)

Note the length of the "bit" already :smallbiggrin:

Also, more recently How much longer will Freefall continue? (http://www.crosstimecafe.com/viewtopic.php?f=8&t=7900&p=198864#p198864):


Probably arcs will run less than two years.

Silver Swift
2014-09-23, 03:53 PM
So, apparently turning someone's brain off repeatedly can have negative side effects; who'd have guessed? I'm not sure if I should admire the commander for keeping a level head (and preventing brain surgery from being done by an angry chimp) or be angry at the apparent lack of concern about the brain damage inflicted on Florence.

Either way though, Dr. Bowman continues to be one of the more interesting characters in the comic.

halfeye
2014-10-15, 02:54 AM
You'd better make that six.

HandofShadows
2014-10-15, 04:23 AM
So, apparently turning someone's brain off repeatedly can have negative side effects; who'd have guessed? I'm not sure if I should admire the commander for keeping a level head (and preventing brain surgery from being done by an angry chimp) or be angry at the apparent lack of concern about the brain damage inflicted on Florence.

I very much doubt that he knows that there is any danger to Florence from her brain being shut down. It's clear from his first meeting with her that he likes her and respects her.

Kornaki
2014-10-15, 11:03 AM
I think it's also clear that this is not a short term emergency. It needs to be fixed, but Bowman needed to take ten minutes to calm down first.

Math_Mage
2014-10-27, 03:26 AM
Welp. At least our Discworld friend Reboot From Start didn't make an appearance.

Any idea what's going to happen next?

Kornaki
2014-11-02, 09:17 PM
The fact that he asked shows some sort of maturity, right?

Douglas
2014-11-02, 11:15 PM
The fact that he asked shows some sort of maturity, right?
Not really. He didn't ask for the security upgrade because he already knew she wanted her security upgraded. He does not have any previous indication about her desire for puppies, so this does not show any understanding that there might be other reasons for asking permission.

Rakaydos
2014-11-05, 06:07 PM
...so, this was posted to the freefall forums.

http://i.imgur.com/7LlNbXt.png

Rockphed
2014-11-05, 10:22 PM
Somehow drawing a realistic Dr. Bowman makes him incredibly nefarious-looking.

Math_Mage
2014-11-05, 10:45 PM
Somehow drawing a realistic nefarious Dr. Bowman makes him incredibly nefarious-looking.
Just sayin'.

halfeye
2014-12-13, 11:06 AM
Are all people human in Freefall? Are all humans people?

Landis963
2014-12-13, 01:58 PM
Are all people human in Freefall? Are all humans people?

No, and maybe. There are people in Freefall who are demonstrably not human (e.g. the robots). Florence's definition seems to go "Can they think for themselves? If Yes, person. If person, do they need to follow safeguards? If No, human." On the other hand, there are some humans who are demonstrably stupid, self-absorbed and sociopathic enough not to qualify as a person, despite the rights they are entitled to (e.g. Mr. Kornada).

halfeye
2014-12-13, 06:01 PM
No, and maybe. There are people in Freefall who are demonstrably not human (e.g. the robots). Florence's definition seems to go "Can they think for themselves? If Yes, person. If person, do they need to follow safeguards? If No, human." On the other hand, there are some humans who are demonstrably stupid, self-absorbed and sociopathic enough not to qualify as a person, despite the rights they are entitled to (e.g. Mr. Kornada).
Sociopaths are people, even if not very pleasant ones.

Rockphed
2014-12-13, 07:07 PM
Is the commander going to ask "Florence, are you human?" I'm not sure he is thinking quite far enough ahead to see that it is an interesting question to ask her, since she fits all the criteria she just listed for humanity.

Edit: I love the panels of obfuscation. Especially the lampshade of obfuscation.

Ravenlord
2014-12-14, 05:28 AM
No, and maybe. There are people in Freefall who are demonstrably not human (e.g. the robots). Florence's definition seems to go "Can they think for themselves? If Yes, person. If person, do they need to follow safeguards? If No, human." On the other hand, there are some humans who are demonstrably stupid, self-absorbed and sociopathic enough not to qualify as a person, despite the rights they are entitled to (e.g. Mr. Kornada).

The big question is how you measure whether someone can think for themselves. Unless you can bisect a working brain and study the actual mechanics of the thought process, the Chinese Room is always a possibility. Given large enough storage and sufficient processing power, you could theoretically pre-program a dumb AI with enough responses to confuse anyone into believing they are talking to a conscious being.

Take the part where Florence asks two random robots a nonsense question. She took the result as one failing her Turing test and the other passing by lamenting the absurdity. I know what Mark was trying to get across there, but now we know that that was two Bowman AIs talking to each other.

How would we know that Florence's "random" question wasn't something she was programmed to think of? And that the other AI didn't have a few canned responses after he recognized the phrase? We know Florence can be put into maintenance mode just with vocal commands, so that level of engineering isn't out of the question either.

In the end, defining consciousness is a very tricky question - mostly because we have no firm concept of what consciousness is. :smallbiggrin:
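To make that concrete, here's a toy sketch (Python, with a deliberately tiny invented phrase table) of the kind of canned-response machine I mean - nothing in it understands anything, yet it can "pass" a shallow exchange:

# Toy "Chinese Room" responder: a pure lookup table with a stalling fallback.
# The phrases are invented; a convincing table would be absurdly large, which
# is part of the point of the thought experiment.
CANNED_RESPONSES = {
    "can you taste the colour blue?": "What a silly question - colours aren't flavours.",
    "are you conscious?": "Of course I am. Aren't you?",
}

def respond(utterance):
    """Return a scripted reply; no comprehension happens anywhere in here."""
    return CANNED_RESPONSES.get(utterance.strip().lower(),
                                "Hmm, let me think about that and get back to you.")

for question in ("Are you conscious?", "Can you taste the colour blue?"):
    print(question, "->", respond(question))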

Landis963
2014-12-14, 10:04 AM
Funny thing, the "Chinese Box" was mentioned in-story already.

Qwerty: "Of course, there is the Chinese box argument. I might be simulating consciousness to the extent that I'm not really conscious."
Winston: "Do they serve drinks where we're going? I might need one."

halfeye
2014-12-14, 10:07 AM
The big question is how you measure whether someone can think for themselves. Unless you can bisect a working brain and study the actual mechanics of the thought process, the Chinese Room is always a possibility.
MRI can scan a working brain without damaging it.

The status of the "Chinese Room" is disputed. Some people, including myself, think that the Room is conscious, even though the CPU it's running on isn't aware of that.


In the end, defining consciousness is a very tricky question - mostly because we have no firm concept of what consciousness is. :smallbiggrin:
Exactly.

GloatingSwine
2014-12-14, 10:25 AM
The Chinese Room argument always struck me as an example of begging the question.

In order to agree with its conclusion (that computational models are not an explanation of consciousness) you have to have already accepted its conclusion (that there is something essential about consciousness in addition to computation), because the argument makes no effort to explain what the missing element "understanding Chinese" even is and how we might test for its absence.

It's just another P-Zombie (quite literally, because the P-Zombie is basically the same argument: a thing which acts in every way as if it is conscious but is not), with the same fundamental assumption that there is something about consciousness over and above acting in every way as if it is conscious, and it deserves the attentions of the boomstick.

Radar
2014-12-14, 12:05 PM
(...) fundamental assumption that there is something about consciousness over and above acting in every way as if it is conscious (...)
Well, there is, and it's both very obvious and completely unverifiable: for each and every one of us, there is the fact that we are actually aware of our own existence - the difference lies within and can't be measured from the outside, as far as our technology goes.

GloatingSwine
2014-12-14, 12:47 PM
Well, there is, and it's both very obvious and completely unverifiable: for each and every one of us, there is the fact that we are actually aware of our own existence - the difference lies within and can't be measured from the outside, as far as our technology goes.

Since you can only know if another person is aware of their own existence because they told you they were, and a P-Zombie system like the Chinese room will, asked the same question, give the same report, you have no grounds to make that assertion.

As I said, you have to agree with the assumption that there is a special thing before the p-zombie has any conceptual weight.

Radar
2014-12-14, 01:14 PM
Since you can only know if another person is aware of their own existence because they told you they were, and a P-Zombie system like the Chinese room will, asked the same question, give the same report, you have no grounds to make that assertion.

As I said, you have to agree with the assumption that there is a special thing before the p-zombie has any conceptual weight.
I am not talking about checking anyone else for sentience - I am talking about being aware of my own sentience. Everyone is aware of their own consciousness and nobody can check it in others - we simply assume it instinctively, since we are all from the same species and so on and so forth. Lack of a measurement method cannot exclude that as an important aspect of consciousness. That's pretty much all the p-zombie scenario says: we have no method of actually checking for consciousness.

It seems most reasonable to assume that if something quacks like a duck, walks like a duck and looks like a duck, then it's a duck and not a pointed stick or a pile of noodles. That being said, it's still an assumption and not a fact.

Ravenlord
2014-12-14, 06:13 PM
MRI can scan a working brain without damaging it.

Only for fleshware. :smallbiggrin: Digital constructs would need something like a debug mode - if the functionality is provided, of course. We know they have dream-machines that muck with their long-term data storage, so one could take a look at that... as long as the format can be understood without interfacing it with the actual robot, of course.


The status of the "Chinese Room" is disputed. Some people, including myself, think that the Room is conscious, even though the CPU it's running on isn't aware of that.

Let's use a real-life example. IBM's nifty WATSON supercomputer beat people at Jeopardy. Let's say IBM goes nuts on the R&D budget and upgrades WATSON, to the point where he can convincingly chat with people. He can respond to questions and fool people into believing he's a human.

So did IBM just develop an artificial consciousness?

I would say not. Precisely because it was developed. WATSON may appear smart, but it's still just an old-style dumb AI - responding to external stimuli by applying a rigid set of rules. It can't - and won't even try to! - comprehend those rules; it's just a glorified bunch of mathematical equations, in the end.


In order to agree with its conclusion (that computational models are not an explanation of consciousness) you have to have already accepted its conclusion (that there is something essential about consciousness in addition to computation), because the argument makes no effort to explain what the missing element "understanding Chinese" even is and how we might test for its absence.

I'd say that's the fundamental gap between "real" thinking machines and the ones we can make - or even comprehend - nowadays.

A machine we can conceivably create will be limited by its nature, because we need to feed it a set of rules it's gonna follow. We literally script the 'intelligence' part of AI; basically, we give the bloke our version of the Chinese dictionary. This is a stark contrast to "real" intelligence, which is emergent; it has the capacity to evolve and make up its own rules as it goes along. Due to that very emergent nature, it'd be very difficult - if not impossible - to script a true AI. You would have more luck abandoning programming as a concept and just setting up the "base" point, then bombarding the AI with a set of stimuli... letting it learn by itself.

That's just my opinion, of course. But I think that no matter how advanced an AI we make, it will still remain on this side of that fundamental gap. I think that is the "something essential" that is missing. When a true AI comes along, it will most likely look nothing like the ones we have today.


Since you can only know if another person is aware of their own existence because they told you they were, and a P-Zombie system like the Chinese room will, asked the same question, give the same report, you have no grounds to make that assertion.
Which is why we would need to understand the actual process of how the AI makes a decision. If it can actively evolve on its own, just by being faced with external stimuli, then it's a consciousness - or a consciousness waiting to happen.

Rockphed
2014-12-14, 06:13 PM
I am not talking about checking anyone else for sentience - I am talking about being aware of my own sentience. Everyone is aware of their own consciousness and nobody can check it in others - we simply assume it instinctively, since we are all from the same species and so on and so forth. Lack of a measurement method cannot exclude that as an important aspect of consciousness. That's pretty much all the p-zombie scenario says: we have no method of actually checking for consciousness.

It seems most reasonable to assume that if something quacks like a duck, walks like a duck and looks like a duck, then it's a duck and not a pointed stick or a pile of noodles. That being said, it's still an assumption and not a fact.

Descartes said "Cogito, Ergo Sum", loosely translated "I think, therefore I am." I think he was reducing the set of knowable things down to "I am doubting, so I must exist", but it also applies to our consciousness. The only consciousness that can be proven irrefutably is that of the one it is being proven to.

GloatingSwine
2014-12-14, 06:40 PM
I'd say that's the fundamental gap between "real" thinking machines and the ones we can make - or even comprehend - nowadays.

A machine we can conceivably create will be limited by its nature, because we need to feed it a set of rules it's gonna follow. We literally script the 'intelligence' part of AI; basically, we give the bloke our version of the Chinese dictionary. This is a stark contrast to "real" intelligence, which is emergent; it has the capacity to evolve and make up its own rules as it goes along. Due to that very emergent nature, it'd be very difficult - if not impossible - to script a true AI. You would have more luck abandoning programming as a concept and just setting up the "base" point, then bombarding the AI with a set of stimuli... letting it learn by itself.

That's just my opinion, of course. But I think that no matter how advanced an AI we make, it will still remain on this side of that fundamental gap. I think that is the "something essential" that is missing. When a true AI comes along, it will most likely look nothing like the ones we have today.


Which is why we would need to understand the actual process of how the AI makes a decision. If it can actively evolve on its own, just by being faced with external stimuli, then it's a consciousness - or a consciousness waiting to happen.

You're behind the times when it comes to the current state of computer learning. We're already building learning machines. Google created a learning machine, showed it ten million random still images from YouTube videos, and with no prior rules other than pattern recognition it determined the existence of cats. (The original experiment was actually to see if a learning machine could learn to recognise faces where the images provided were not tagged as containing a face, as they previously had been in such experiments. It could; the cats were a bonus feature because of the nature of the internet).

Learning computers are a thing already; they're just limited by engineering.


Descartes said "Cogito, Ergo Sum", loosely translated "I think, therefore I am." I think he was reducing the set of knowable things down to "I am doubting, so I must exist", but it also applies to our consciousness. The only consciousness that can be proven irrefutably is that of the person it is being proven to.

Ironically, it's increasingly obvious that the "I" Descartes was talking about doesn't actually exist after all, even though the terminology of the Cartesian theatre is pernicious because of how intuitively graspable the concept is of a coherent internal "Self" that responds to sense data. (For more, read The Self Illusion by Bruce Hood.)

Rockphed
2014-12-14, 07:29 PM
Ironically, it's increasingly obvious that the "I" Descartes was talking about doesn't actually exist after all, even though the terminology of the Cartesian theatre is pernicious because of how intuitively graspable the concept is of a coherent internal "Self" that responds to sense data. (For more, read The Self Illusion by Bruce Hood.)

Obvious to whom, pray tell? Because it is increasingly obvious to me that thou art not me, and therefore I am not thee. Furthermore, with a bit of comparison between writing styles on posts and preferred sources of quotes, we could probably prove that they are not us.

Ravenlord
2014-12-15, 01:26 AM
You're behind the times when it comes to the current state of computer learning. We're already building learning machines. Google created a learning machine, showed it ten million random still images from YouTube videos, and with no prior rules other than pattern recognition it determined the existence of cats. (The original experiment was actually to see if a learning machine could learn to recognise faces where the images provided were not tagged as containing a face, as they previously had been in such experiments. It could; the cats were a bonus feature because of the nature of the internet.)

Learning computers are a thing already; they're just limited by engineering.

I wish you were right, but to quote Blunt, there is no soul in the machine yet.

Neural nets (which this AI must be using) are fairly old as a concept. They are, however, not really "learning" so much as "self-tuning", continuously adjusting their own parameters based on feedback. That makes them fairly adaptive, yes; emergent, no.

You see, what the Google AI did was pretty nifty; but it was still only executing the task it had been scripted to do. It was a visual pattern detection algorithm coupled with a neural net. It was literally made for nothing other than recognizing similar patterns in a video feed. It doesn't actually know it found a cat; it only spots a frequently occurring pattern. It can't - and won't be able to - do anything it wasn't given scripts for.

It won't be able to crawl through wikipedia and learn more about cats, for example.

That is the difference I had been talking about.
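
(As an illustration of the "self-tuning" point above, here is a minimal Python sketch of a single artificial neuron nudging its weights from a feedback signal. The toy data, the step activation, and the learning rate are all invented for the example; nothing here is the actual Google system.)

import random

def predict(weights, bias, inputs):
    # Weighted sum pushed through a step function: "cat-like" or not.
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Toy "images": three features each, labelled with whether a cat is present.
examples = [([0.9, 0.1, 0.8], 1), ([0.2, 0.7, 0.1], 0),
            ([0.8, 0.3, 0.9], 1), ([0.1, 0.9, 0.2], 0)]

weights = [random.uniform(-1, 1) for _ in range(3)]
bias = 0.0
learning_rate = 0.1

for _ in range(100):                      # repeat the feedback loop
    for inputs, target in examples:
        error = target - predict(weights, bias, inputs)   # the feedback signal
        # Adjust parameters in proportion to the error: tuning, not restructuring.
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print(weights, bias)
# The neuron got better at the one task it was wired for; nothing in this loop
# can add a new neuron cluster or invent a different task for itself.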

halfeye
2014-12-15, 03:43 PM
I wish you were right, but to quote Blunt, there is no soul in the machine yet.

Neural nets (which this AI must be using) are fairly old as a concept. They are, however, not really "learning" so much as "self-tuning", continuously adjusting their own parameters based on feedback. That makes them fairly adaptive, yes; emergent, no.

You see, what the Google AI did was pretty nifty; but it was still only executing the task it had been scripted to do. It was a visual pattern detection algorithm coupled with a neural net. It was literally made for nothing other than recognizing similar patterns in a video feed. It doesn't actually know it found a cat; it only spots a frequently occurring pattern. It can't - and won't be able to - do anything it wasn't given scripts for.

It won't be able to crawl through wikipedia and learn more about cats, for example.

That is the difference I had been talking about.
From your location, I have to ask, how do you feel about recursion?

I might be convinced that AI in a C based language is very difficult, but I'm not half as convinced that Lisp or something like it won't work. I'm just not convinced that something can't emerge from self modifying software. There are worms (nematodes?) where all the neurons (about 10^3) have been mapped. It's just a matter of scaling that up about a billionfold (which is obviously non-trivial); we've come from 8 bits at 4MHz with 16 KB of RAM in 1980 to 64 bits at 3GHz with 4 cores and 16 GB of RAM. I'm not saying the hardware is definitely there yet, but I'm not convinced it definitively isn't.

Self modifying software is unpredictable, so that's a negative for reliable reproducibility, but happening randomly once in a while? I can't see how we can rule it out.
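
(A rough back-of-envelope version of the scaling claim above, using the post's own hardware figures and the commonly cited estimate of roughly 86 billion neurons in a human brain; all numbers are approximate.)

worm_neurons = 1e3              # the rough figure used in the post above
human_neurons = 8.6e10          # commonly cited estimate for a human brain
print(human_neurons / worm_neurons)    # ~9e7, within an order or two of "billionfold"

ram_1980 = 16 * 1024                   # bytes
ram_2014 = 16 * 1024**3
clock_1980 = 4e6                       # Hz
clock_2014 = 3e9 * 4                   # naively counting the 4 cores
print(ram_2014 / ram_1980)             # ~1e6 growth in memory
print(clock_2014 / clock_1980)         # ~3e3 growth in raw cycle throughput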

Ravenlord
2014-12-15, 04:40 PM
From your location, I have to ask, how do you feel about recursion?

I don't like it. It leads to stack overflows. :smallbiggrin:

But to risk using a quote that has been worn thin already, insanity is expecting different results when you keep repeating the same action. If a design has the capability to improve (in a way relevant to our discussion), then it will exhibit that behaviour with or without recursion. It's the basic design that's relevant in our case, after all.


I might be convinced that AI in a C based language is very difficult, but I'm not half as convinced that Lisp or something like it won't work. I'm just not convinced that something can't emerge from self modifying software. There are worms (nematodes?) where all the neurons (about 10^3) have been mapped. It's just a matter of scaling that up about a billionfold (which is obviously non-trivial); we've come from 8 bits at 4MHz with 16 KB of RAM in 1980 to 64 bits at 3GHz with 4 cores and 16 GB of RAM. I'm not saying the hardware is definitely there yet, but I'm not convinced it definitively isn't.

Self modifying software is unpredictable, so that's a negative for reliable reproducibility, but happening randomly once in a while? I can't see how we can rule it out.

I don't think the language of choice matters much. In the end, C, LISP, or any other language is just an abstraction layer over ASM. It's not the languages that are at fault; it's us. We simply can't model a sufficiently advanced thought process using pure mathematics.

As for your theory on self-modifying software, I believe you are fundamentally correct. Once a piece of software rolls out that can actively modify its basic function and programming, I'll believe that it has potential.

I haven't heard of any such thing, however.

Even neural nets don't truly modify themselves; they only tune themselves continuously, getting better at the task they are programmed to do. This is a fundamental limitation, too, as neural nets require a feedback loop to adjust the weights of their neurons. That, I believe, prevents them from picking up completely new actions. The net would need to set up a new neuron cluster and the necessary evaluation routines at the same time to "branch out". This won't change no matter how fast you run the cycles, either. It's a limitation of design, not implementation.

As for the nematodes, the one that has been mapped has far fewer neurons - not about 1000, but 302. And even then I have my doubts about how complete our modelling is. In the real fleshware, the nematode neurons don't exist in a vacuum; they have a complex system of neurochemistry happening around them, which greatly influences how they behave. I wonder how they are going to include that in the models, for example...

Grey_Wolf_c
2014-12-15, 05:41 PM
Freefall-style AI cannot be programmed; it will have to be grown. A combination of unfettered neural nets and evolutionary algorithms with some very broad targets would be my guess. That, and a hell of a lot of generations. I.e. just like actual intelligence. We can just about model one brain on our biggest supercomputer. When we can simulate thousands at once, subject them to random input, and measure their fitness, then we stand a chance of getting AI.

Grey Wolf
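
(A minimal sketch of the "grown, not programmed" idea above: a population of tiny fixed-topology nets, random mutation, and selection against a broad fitness target. The XOR-style task and all the constants are invented purely for illustration; a real attempt would need vastly bigger nets and populations, as the post says.)

import random, math

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def run_net(genome, inputs):
    # genome: 9 weights for a 2-2-1 network with tanh units.
    w = genome
    h1 = math.tanh(w[0] * inputs[0] + w[1] * inputs[1] + w[2])
    h2 = math.tanh(w[3] * inputs[0] + w[4] * inputs[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(genome):
    # Broad target: smaller total error on the task means fitter.
    return -sum((run_net(genome, x) - y) ** 2 for x, y in CASES)

def mutate(genome):
    return [g + random.gauss(0, 0.3) for g in genome]

# Start from a random population and let selection plus mutation do the work.
population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                   # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print(fitness(population[0]))   # closer to 0 means the grown net fits the task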

halfeye
2014-12-15, 10:19 PM
I don't think the language of choice matters much. In the end, C, LISP or any other languages are just abstraction layers to ASM. It's not the languages that are at fault; it's us. We simply can't model a sufficiently advanced thought process using pure mathematics.

The particular programming language somewhat constrains how the particular human writing it thinks. C had me thinking in lines and functions, and later recursion; I never did understand OOP.


As for your theory on self-modifying software, I believe you are fundamentally correct. Once a piece of software rolls out that can actively modify its basic function and programming, I'll believe that it has potential.

I haven't heard of any such thing, however.

Self modifying code is old. However, it is usually heavily deprecated.

http://en.wikipedia.org/wiki/Self-modifying_code

I suspect doing much with it is probably a lot harder than recursion or OOP.
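
(For the curious, a toy Python illustration of the idea in the loosest sense: a program that generates new source for one of its own functions at run time and swaps it in. Real self-modifying code in the sense of that article patches the machine instructions of a running program, which is lower-level than this sketch.)

# The program carries its own source for one function as data.
current_source = "def step(x):\n    return x + 1\n"

namespace = {}
exec(current_source, namespace)
step = namespace["step"]
print(step(10))                 # 11

# The program now rewrites that function and replaces it in itself.
current_source = current_source.replace("x + 1", "x * 2")
exec(current_source, namespace)
step = namespace["step"]
print(step(10))                 # 20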

Kornaki
2014-12-16, 12:54 AM
All machine learning has the same broad base: invent a predictive function that depends on some unknown parameters, construct a cost function that you try to minimize over those parameters, and use the fitted function to predict new items. If that is ever going to be used to mimic human intelligence, then you need to tell me what cost function humans are trying to minimize. In each activity we do we often have some intuitive cost function, but for living itself? People have such different objectives in life that it seems unlikely we could make a whole consciousness this way.
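
(A minimal sketch of that three-part recipe: a predictive function with unknown parameters, a cost function over a toy data set, and a loop that minimizes it by gradient descent. The data and constants are made up for illustration.)

# 1. Predictive function with unknown parameters w and b.
def predict(w, b, x):
    return w * x + b

# 2. Cost function: how badly the current parameters fit the data.
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]
def cost(w, b):
    return sum((predict(w, b, x) - y) ** 2 for x, y in data) / len(data)

# 3. Minimize the cost by nudging w and b down the gradient.
w, b, rate = 0.0, 0.0, 0.01
for _ in range(5000):
    dw = sum(2 * (predict(w, b, x) - y) * x for x, y in data) / len(data)
    db = sum(2 * (predict(w, b, x) - y) for x, y in data) / len(data)
    w, b = w - rate * dw, b - rate * db

print(w, b, cost(w, b))   # roughly w ~ 2, b ~ 1 for this toy data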

GloatingSwine
2014-12-16, 07:26 AM
Obvious to whom, pray tell? Because it is increasingly obvious to me that thou art not me, and therefore I am not thee. Furthermore, with a bit of comparison between writing styles on posts and preferred sources of quotes, we could probably prove that they are not us.

What Descartes referred to as "I" was an immaterial soul which is shown all the sense data gathered by the brain and makes decisions based on it which are fed back to the brain for action.

He identified the pineal gland as the part of the brain where this happened, because nobody knew what the pineal gland did at the time so it might as well be that.

Modern neuroscience shows in several ways that the brain does quite a lot on its own without the conscious "I" (brain activity related to performing voluntary acts begins up to half a second before conscious awareness of the decision to perform the act), and that a very great deal of the "sense data" we assume to be shown to the "I" is actually made up by the brain (there are a lot of good examples of this not only in The Self Illusion but also in Richard Wiseman's book Paranormality). The most obvious example, though, is our sense of colour vision. Our physical colour vision is limited to quite a narrow portion of our field of vision, but we appear to see our whole field of vision in colour because our brain makes up the rest based on what it assumes it already knows about the things it is seeing. It turns out that we aren't the Cartesian "I"; we're actually the omnipotent deceiver.

Landis963
2014-12-16, 09:11 AM
He identified the pineal gland as the part of the brain where this happened, because nobody knew what the pineal gland did at the time so it might as well be that.

Do we know what the pineal gland does nowadays?

GloatingSwine
2014-12-16, 09:16 AM
Do we know what the pineal gland does nowadays?

Yeah, it produces melatonin (a hormone which regulates sleep patterns).

Grey_Wolf_c
2014-12-16, 09:22 AM
Do we know what the pineal gland does nowadays?

I should add that, apart from not knowing what it did, it was also believed to be exclusive to humans - I was always told that this was the reason Descartes believed it to be connected to the soul, since "obviously" "lesser" animals didn't have souls. This turned out (unsurprisingly) to be false: it's not only present in most animals, it's particularly well developed in lizards.

I think that it was this together with the assorted idiocies expressed by Aristotle that soured me to philosophy as a discipline.

Grey Wolf

Sartharina
2014-12-17, 02:01 AM
The big thing about "P-zombies" is... how do we know they don't think? We can't experience what code executing actually feels like (if it feels like anything).

And I thought the meaning of Descartes' "Cogito Ergo Sum" was him saying "I'm thinking, which means there's something that needs to be thinking, and therefore I exist, so I can think. If I didn't exist, there wouldn't be anything to think, and so I wouldn't be thinking."

GloatingSwine
2014-12-17, 07:51 AM
And I thought the meaning of Descartes' "Cogito Ergo Sum" was him saying "I'm thinking, which means there's something that needs to be thinking, and therefore I exist, so I can think. If I didn't exist, there wouldn't be anything to think, and so I wouldn't be thinking."

He was essentially talking about what he could state that he knew to be true.

He hypothesised the existence of an all-powerful deceiver who could fully control his senses, meaning that everything he saw, felt, heard, tasted, or smelled was actually false information. In that context he could not state that anything about the external world was true, because he could only access it through his senses. But the fact that he could doubt the existence of truths about the external world meant that there must be something separate from the senses which was able to doubt them, and that thing was internally producing information (thinking) because it was doubting, and that thing could therefore be known to exist in spite of the full deception of the senses.

"Cogito, ergo sum" would be more fully expressed as "Dubito, ergo cogito, ergo sum".

As noted though, he was kinda wrong: the I doing the thinking is composed of the senses, not separate from them, and complicit in their self-deception.

Rockphed
2014-12-19, 07:16 PM
Nevertheless, his conclusion still stands. Even though all the information fed him by his senses is garbled with garbage, he cannot doubt his own existence.

halfeye
2014-12-19, 07:20 PM
Nevertheless, his conclusion still stands. Even though all the information fed him by his senses is garbled with garbage, he cannot doubt his own existence.
That's true, but we can certainly doubt anything he derived from it, where those derivatives were questionable, which they all were.

Rockphed
2014-12-19, 07:36 PM
That's true, but we can certainly doubt anything he derived from it, where those derivatives were questionable, which they all were.

Shoddy methodology does not destroy accurate conclusions! Accurate conclusions do not excuse shoddy methodology.