Responsibilities of AI technology



TheManicMonocle
2016-11-01, 07:00 PM
So here's a debate topic for you all: if a programmer creates an AI, loses control of it, and that AI wreaks havoc in the world, should the programmer be held responsible?

Tvtyrant
2016-11-01, 08:47 PM
Is the AI designed to wreak havoc, or is it self-aware and did it choose to do so? If the former, of course he or she is. If the latter, no more so than a parent is guilty of their child's crimes.

Forum Explorer
2016-11-01, 09:19 PM
Is the AI designed to wreak havoc, or is it self-aware and did it choose to do so? If the former, of course he or she is. If the latter, no more so than a parent is guilty of their child's crimes.

Pretty much this, though I'll add a third option: the AI is designed to do something that is supposed to be benign, but causes havoc when taken to an extreme. In that case, yes, I would hold the programmer responsible, but to a lesser extent.

NichG
2016-11-01, 10:45 PM
AI as we have it isn't magic - that is to say, it's not going to shoot people unless you deploy it in a system that somewhere has already been set up to let a computer autonomously fire a gun. So I'd say the answer is 'whatever is consistent with how we currently deal with responsibility'. If a bunch of people are involved in the eventual creation, programming, and push to market of a plane, and that plane crashes and kills someone, there's a complex and individual procedure for determining how to deal with the aftermath of it - what went wrong, why, how could it have been detected, how can it be fixed, who signed off on it, etc.? Why shouldn't it be handled by the same sort of procedure if, e.g., the plane had a snazzy new AI autopilot?

One thing to watch out for is treating a situation as if there's some finite and conserved quantity of blame that must be apportioned out somewhere. That leads to retributive systems and to things like scapegoating; ultimately, it doesn't actually enact a change to fix the underlying problem of things going wrong. It's sort of like, if the pilot lets their kid control the plane and it crashes, yes, 'it was the kid's fault', but it was also the pilot's fault and also the fault of the people who trained the pilot and so on. Just because you can ultimately pin it on the kid doesn't mean there's nothing to be changed further upstream. If that 'fault' is taken as punishment, it's going to feel unfair, but if that 'fault' is correction, then it can make sense. It's less important to figure out who 'deserves punishment' than it is to figure out 'what does the way this went wrong tell us about how to prevent it from going wrong again the next time?' That might involve firing someone or subjecting them to legal action to prevent them from continuing practice (if, e.g., things were already in place in the company policy or laws to prevent it from going wrong, but an individual programmer intentionally circumvented those regulations), but it might just involve changing the rules and regulations and oversight in place.

TheManicMonocle
2016-11-02, 01:20 AM
I'd say benign

Khedrac
2016-11-02, 06:55 AM
At the moment this is a legal grey area - and the answer is probably that it is the programmer or their employers who are responsible. That said, too detailed a discussion of this could fall under the "No Legal Advice" topic ban on these boards!

So, without giving any more advice, it is worth looking at the Google self-driving car - because I remember reading a few months ago that they had got the software recognized as a driver (or something that appeared to simplify to that position).
In this case the software is now legally responsible (though try using that argument to get out of any lawsuits that arise from problems - I doubt it will work).
It did make me wonder: if the program holds the driving license, does that mean that if 4 or 5 cars get a speeding ticket at the same time, the entire system is suspended from driving due to too many points on its license?

BarbieTheRPG
2016-11-02, 11:42 AM
I think we have precedents. Mainly from the auto industry. If a manufacturer produces a vehicle with drive features that can perform automatically in certain situations and the vehicle system malfunctions resulting in harm to property or life, that company is liable.

They built it.

veti
2016-11-02, 03:52 PM
To what degree was the AI's action foreseeable, and what precautions did the programmer take against it?

If you created an entity that you knew had the potential to do something like that, and then didn't take reasonable precautions to restrain it (e.g. letting it loose in a contained environment first so you could see how it reacts) - then yeah, I'd say you're responsible for that.

Parents may not be liable for what their adult kids do. But I'm guessing the AI is less than 16 years old, or whatever the age is in your jurisdiction, and that means the parent very much is responsible for it until that point.

If you make a self-driving car, work it to the best of your programming ability, and turn it loose on a crowded highway - then damn straight you're responsible for what happens next. If you want to mitigate that responsibility, you need to be able to show the court a record of detailed and extensive testing, over thousands of miles of test track and months or years of time. Basically, you have to persuade the court that you went to a lot of effort to prevent it, but you couldn't have been reasonably expected to foresee the event that unfolded.

Grinner
2016-11-02, 04:50 PM
It did make me wonder: if the program holds the driving license, does that mean that if 4 or 5 cars get a speeding ticket at the same time, the entire system is suspended from driving due to too many points on its license?

That's something I was thinking about a week or two ago. For example, most people reading this have probably broken the law several times today, even if they weren't ticketed for it. Speeding is so ubiquitous, and it's not strictly enforced. Thus, it's almost necessary to speed, particularly on highways, because driving is a very social activity. This highlights an interesting facet of self-driving cars: Can a company knowingly and willfully design a product to break the law, especially if that law is not strictly enforced?

Starwulf
2016-11-02, 05:05 PM
That's something I was thinking about a week or two ago. For example, most people reading this have probably broken the law several times today, even if they weren't ticketed for it. Speeding is so ubiquitous, and it's not strictly enforced. Thus, it's almost necessary to speed, particularly on highways, because driving is a very social activity. This highlights an interesting facet of self-driving cars: Can a company knowingly and willfully design a product to break the law, especially if that law is not strictly enforced?

As an aside, this is the kind of thinking that leads to so many bad drivers and thus accidents. Driving is NOT a social activity. I drive the speed limit and stick to it; I don't give even half a **** if everyone else around me is going 80mph, I'll be sticking with the 70mph speed limit. Just because others are weaving in and out of traffic doesn't mean you should as well. Honestly, I have to admit I've never even come across this attitude before, so I'm not even sure how seriously to take it. It's possible that wherever you live it is the general consensus, but around where I'm at, it's the idiots who break the speed limit, and the normal people follow it. (I see way more people driving the speed limit, or even slightly under it, than I do breaking it.)

Grinner
2016-11-02, 05:41 PM
As an aside, this is the kind of thinking that leads to so many bad drivers and thus accidents. Driving is NOT a social activity. I drive the speed limit and stick to it; I don't give even half a **** if everyone else around me is going 80mph, I'll be sticking with the 70mph speed limit. Just because others are weaving in and out of traffic doesn't mean you should as well. Honestly, I have to admit I've never even come across this attitude before, so I'm not even sure how seriously to take it. It's possible that wherever you live it is the general consensus, but around where I'm at, it's the idiots who break the speed limit, and the normal people follow it. (I see way more people driving the speed limit, or even slightly under it, than I do breaking it.)

I think we have a misunderstanding over what the term "social activity" means here. I think it's ideal to observe certain courtesies when driving (and where I live, it's kind of a necessity sometimes). Likewise, there are some behaviors which are dangerous to oneself and one's neighbors and can be considered inconsiderate, such as tailgating.

There may be some truth to the topic of...I suppose you could call it "driving culture"? I've certainly seen a marked difference in how people drive in different areas, and I've heard very interesting things about driving in India. That said, I also think it's a mistake to conflate weaving with moderate speeding.

Starwulf
2016-11-02, 05:58 PM
I think we have a misunderstanding over what the term "social activity" means here. I think it's ideal to observe certain courtesies when driving (and where I live, it's kind of a necessity sometimes). Likewise, there are some behaviors which are dangerous to oneself and one's neighbors and can be considered inconsiderate, such as tailgating.

There may be some truth to the topic of...I suppose you could call it "driving culture"? I've certainly seen a marked difference in how people drive in different areas, and I've heard very interesting things about driving in India. That said, I also think it's a mistake to conflate weaving with moderate speeding.

From the way I read your initial post, you made it sound like if some people are driving over the speed limit, then the other drivers around them would/should drive over the speed limit too (implied by the "most people reading this have probably broken the law today" line). From there it seems easy to conclude that if some people will begin to speed just because others are, they are just as likely to start driving aggressively if they see others do so as well. If that is a "social activity", it is a poor one. If that wasn't what you meant, I apologize.

As far as the last part, about it being a mistake to conflate weaving with moderate speeding, I absolutely disagree. If you're going 85 in a 70mph zone, that is absolutely just as dangerous as weaving in and out of traffic. If a significantly sized animal (around here it's deer) pops out in front of you while you're driving 85mph on an interstate, you have that much less time to react and try to avoid it (either by dodging outwards, or hitting your brakes) than if you were going 70mph, and it's likely that you'll end up dragging several other vehicles into your accident, all because you gave yourself less time to react by driving over the speed limit. (And if you take issue with the idea of an animal, replace it with a car coming off a ramp in a hurry, or from the berm of the road, etc. Yes, it's technically the animal's/car's fault for jumping out, but you could have mitigated the issue by driving the speed limit, instead of exacerbating it by speeding.)
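
To put rough numbers on that, here's a quick back-of-envelope sketch (assuming a typical ~1.5 second reaction time; the figures are mine, purely for illustration):

MPH_TO_FPS = 5280 / 3600                    # feet per second per mph

def reaction_distance_ft(speed_mph, reaction_time_s=1.5):
    """Distance covered before you even touch the brakes."""
    return speed_mph * MPH_TO_FPS * reaction_time_s

print(round(reaction_distance_ft(70)))      # ~154 ft
print(round(reaction_distance_ft(85)))      # ~187 ft, about 33 ft of extra 'blind' travel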

Anyways, I digress; we are going horribly off-topic and I really don't want to derail the thread anymore, so you can make your response, but I won't be going any further - the topic at hand is much more interesting than arguing over who is a worse driver, speeders or weavers ^^. Being on topic, personally I hope we never see the day of fully sentient AI, for more than one reason.

gooddragon1
2016-11-02, 06:02 PM
As an aside, this is the kind of thinking that leads to so many bad drivers and thus accidents. Driving is NOT a social activity. I drive the speed limit and stick to it; I don't give even half a **** if everyone else around me is going 80mph, I'll be sticking with the 70mph speed limit. Just because others are weaving in and out of traffic doesn't mean you should as well. Honestly, I have to admit I've never even come across this attitude before, so I'm not even sure how seriously to take it. It's possible that wherever you live it is the general consensus, but around where I'm at, it's the idiots who break the speed limit, and the normal people follow it. (I see way more people driving the speed limit, or even slightly under it, than I do breaking it.)

*Applause and a Tipping of Hat*

There's a 5-mile-an-hour space below the speed limit in which you are allowed to go (63 in a 65). Anything above the speed limit - the maximum legal threshold for speed, a limit above which a person is committing a violation of the law - is speeding. My driving instructor told me: there is no fast lane. The speed limit in both lanes is the same. People tend to pass in the left lane, but it does not have a different speed limit than any other lane.

As for the AI technology, we might not have to worry about someone being held responsible if it was serious enough.

TheManicMonocle
2016-11-02, 06:35 PM
As an aside, this is the kind of thinking that leads to so many bad drivers and thus accidents. Driving is NOT a social activity. I drive the speed limit and stick to it; I don't give even half a **** if everyone else around me is going 80mph, I'll be sticking with the 70mph speed limit. Just because others are weaving in and out of traffic doesn't mean you should as well. Honestly, I have to admit I've never even come across this attitude before, so I'm not even sure how seriously to take it. It's possible that wherever you live it is the general consensus, but around where I'm at, it's the idiots who break the speed limit, and the normal people follow it. (I see way more people driving the speed limit, or even slightly under it, than I do breaking it.)

I used to think that way too, tbh, but then I read an interesting statistic: you are more likely to be in an accident on the highway if you are going the speed limit than if you are exceeding it.

BarbieTheRPG
2016-11-02, 07:11 PM
Here's a question: could Sarah Connor sue Skynet? A computer malfunction of a sort resulted in an advanced defense network hunting down both her and her son. Is Skynet liable for the harm created by combat cyborgs designed to terminate people recognized as "enemy combatants"? I won't post how similar modern tech creates this same situation, but it does warrant discussion, I think.

If Skynet does not create the tech there is no such threat to the Connor family. Because they did create tech which would lead to the creation of terminators and time-displacement tech, I would think Skynet is highly vulnerable to legal action.

Put the terminator on the stand! :)

TheManicMonocle
2016-11-02, 08:47 PM
Here's a question: could Sarah Connor sue Skynet? A computer malfunction of a sort resulted in an advanced defense network hunting down both her and her son. Is Skynet liable for the harm created by combat cyborgs designed to terminate people recognized as "enemy combatants"? I won't post how similar modern tech creates this same situation, but it does warrant discussion, I think.

If Skynet does not create the tech there is no such threat to the Connor family. Because they did create tech which would lead to the creation of terminators and time-displacement tech, I would think Skynet is highly vulnerable to legal action.

Put the terminator on the stand! :)

lol, it really depends on just how human you consider AI. If the AI passes the Turing test, for example, do we have a "parent-child situation," or is it still just software?

NichG
2016-11-02, 11:56 PM
Well, this is why I'm saying it's not so cut and dried as 'A is responsible or B is responsible'. Say there's a family and they brainwash their kid into believing that all people named Greg must be killed because of an ancient prophecy that a Greg will end the world; that kid grows up, and when he's in his 30s you find him trying to kill everyone named Greg and arrest him. You don't just say 'it's the kid's fault, he's an adult and legally responsible for his own actions, so the matter stops there'. You also don't just say 'it's the parents' fault for brainwashing him, so let's just release the kid back into society and go after the parents'.

Probably you end up admitting the kid for psychiatric care and go after the parents, because the one incident brought to light multiple addressable factors which could be pursued to decrease the chances of follow-up incidents of the same sort.

At the same time, there's a threshold. When you go back up the chain, you have to weigh how causal the connection was against some kind of background level of expected variation. That is to say, some parents who don't brainwash their kid end up with their kid becoming a murderer, and some parents who do try to brainwash their kid might see them turning out just fine and learning to function in society despite that. So you have to ask: is a correction indicated, given the negative evidence? Is it just 'sometimes autonomous entities will go haywire, and it's not anything the particular programmer did or could have done anything about', or is it highly correlated to a specific action the programmer took or failed to take?

Ironically, what I've just described could be seen as a non-mathematized version of the Adam optimizer (https://arxiv.org/abs/1412.6980) for training neural network weights. Investigations and the justice system as a societal implementation of backprop? Innocent until proven guilty could be considered a form of Hinge loss... hmm...
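
For anyone curious, here's a minimal sketch of the Adam update from that paper (NumPy, my own variable names) - the point of the analogy being that each correction is scaled by how consistent the recent 'evidence' (the gradient) has been relative to its recent variance:

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for parameters theta given gradient grad at step t."""
    m = b1 * m + (1 - b1) * grad           # running mean of the gradient ("how causal was it?")
    v = b2 * v + (1 - b2) * grad ** 2      # running variance ("background level of variation")
    m_hat = m / (1 - b1 ** t)              # bias corrections for the warm-up period
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)   # small, variance-normalised correction
    return theta, m, v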

Khedrac
2016-11-03, 03:22 AM
This highlights an interesting facet of self-driving cars: Can a company knowingly and willfully design a product to break the law, especially if that law is not strictly enforced?

The answer to this one is "Yes, Google have done so". At least one of the reported instances of the police stopping a Google car was because it was driving too slowly on a stretch of road with a minimum speed limit. It turned out that Google had set a maximum speed above which the cars currently will not go and this was lower than said limit. In my opinion the car should not have taken the road "knowing" that it did not meet the legal requirement for driving on the road.
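
To make the point concrete, the fix sounds like a one-line check during route selection - this is purely hypothetical pseudologic on my part, not anything Google has published:

VEHICLE_MAX_SPEED_MPH = 25   # assumed software speed cap on the prototype

def road_is_usable(minimum_speed_limit_mph):
    """Refuse any road whose legal minimum speed exceeds what the car will actually do."""
    if minimum_speed_limit_mph is None:      # most roads have no minimum limit
        return True
    return minimum_speed_limit_mph <= VEHICLE_MAX_SPEED_MPH

print(road_is_usable(None))   # True: ordinary street is fine
print(road_is_usable(40))     # False: skip the road with a 40 mph minimum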

Khedrac
2016-11-03, 03:30 AM
I used to think that way too, tbh, but then I read an interesting statistic: you are more likely to be in an accident on the highway if you are going the speed limit than if you are exceeding it.
That probably means that more people are driving at the speed limit than not. Statistics rarely mean what they appear to; probability is rarely what people expect.
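
A made-up illustration of that base-rate effect (every number below is invented, purely to show the arithmetic):

drivers_at_limit, drivers_speeding = 80, 20     # hypothetical traffic mix
accidents_at_limit, accidents_speeding = 8, 4   # hypothetical accident counts

print(accidents_at_limit > accidents_speeding)   # True: more accidents involve limit-followers overall
print(accidents_at_limit / drivers_at_limit)     # 0.10 risk per limit-follower
print(accidents_speeding / drivers_speeding)     # 0.20 risk per speeder - twice as high per driver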

I would note that driving too slowly can be pretty dangerous too. On a UK motorway where the speed limit is 70 most people drive at about that speed if they can (traffic congestion has a major effect). Now the people who choose to drive at 80 are probably fairly safe most of the time (depending on how the rest of the traffic is driving). The dangerous person is the one driving at 50 mph - because even those driving at 70 will come up to them very quickly and, if not paying enough attention, may have an accident because of this. Even worse when the one coming up behind them is doing 80 or 90.

Now the one driving at 50 is probably legally in the right, but that doesn't make it less of a dangerous speed to drive on an otherwise clear motorway...

Fri
2016-11-03, 09:52 AM
Not AI, but

https://en.wikipedia.org/wiki/Morris_worm

It was one of the first computer worms, and the first to spread widely across the Internet. Created with a benign purpose, it nearly brought the early Internet to a halt.

It always amuses me how it's currently put on a literal pedestal in a museum like it's some sort of artifact of doom (which it was).

Lacuna Caster
2016-11-03, 12:58 PM
Parents may not be liable for what their adult kids do. But I'm guessing the AI is less than 16 years old, or whatever the age is in your jurisdiction, and that means the parent very much is responsible for it until that point.
I don't think this has much to do with age, or even all that much to do with AI agency - the difference here is that human children are cobbled together from a random assortment of their parents' genes. They're not designed, so it's unfair to hold parents accountable (https://en.wikipedia.org/wiki/The_Nurture_Assumption) for how they turn out. (Though this might change over the next few decades.)

I would imagine that AI engineers will be expected to design their 'offspring' in such a way as to ensure it never self-evolves into, e.g., a serial killer.

TheManicMonocle
2016-11-03, 02:50 PM
I don't think this has much to do with age, or even all that much to do with AI agency - the difference here is that human children are cobbled together from a random assortment of their parents' genes. They're not designed, so it's unfair to hold parents accountable (https://en.wikipedia.org/wiki/The_Nurture_Assumption) for how they turn out. (Though this might change over the next few decades.)

I would imagine that AI engineers will be expected to design their 'offspring' in such a way as to ensure it never self-evolves into, e.g., a serial killer.

But that's also assuming that creating intelligence is predictable. After all, a being with creativity is rarely easy to control, and creativity is the defining factor of intelligence. For example, suppose a programmer designs restrictions on his AI, but also suppose the AI is smart enough to decode and disable these restrictions.

Khedrac
2016-11-04, 03:09 AM
The discussions of how long the designers will be responsible are pretty pointless. This is very much an issue that (if needed) will be settled by very long, very complex court cases where the lawyers and judiciary work out the various implications of the laws (new and existing) and therefore who is responsible and for how long. These answers are likely to be different in different countries, which may also make things interesting.

One other point that may fall out of this is if the AI has any legal status as an entity - if it doesn't then the responsibility for its actions cannot fall to it as it cannot have responsibility...

As for how long the responsibility lies with the creator - one interesting situation in UK law concerns design mistakes by an architect: if the problem can be proven to be a design error, then the architect's estate (i.e. heirs) is liable in perpetuity! So, in theory, if St Paul's Cathedral fell down and they could prove design error, then Sir Christopher Wren's descendants would be liable (of course, proving design error as the cause would be impossible). Something tells me that professional insurance for an architect is expensive.

Lacuna Caster
2016-11-04, 04:01 AM
But that's also assuming that creating intelligence is predictable. After all, a being with creativity is rarely easy to control, and creativity is the defining factor of intelligence. For example, suppose a programmer designs restrictions on his AI, but also suppose the AI is smart enough to decode and disable these restrictions.

Part of the design challenge will be ensuring that the AI will also not desire to circumvent those restrictions. (Which may not be as hard as it sounds - humans are biologically programmed with a sex drive, and the vast majority would not want to modify themselves so as to remove their own sex drive.)

I think this scenario is only likely to arise if an AI has multiple restrictions or motives that come into serious and perpetual conflict, such that following Directives A and D can only be satisfied by deleting or suppressing Directive B or C. In the event that this happens, however, I strongly suspect the creator will still be held accountable, i.e. - "Either you shouldn't have given them spaghetti directives (https://www.youtube.com/watch?v=dk4P0ae1i6I), or you shouldn't have made the AI too smart (https://www.youtube.com/watch?v=5n2pEJiDuhE)."

NichG
2016-11-04, 06:19 AM
I should probably find it disturbing that the main thrust of the research into 'ethical AI' consists of how to make it a perfect, sculptable slave... How to make it not learn to avoid states in which it would be turned off, how to alter its training functions so that it does not take certain aspects of the data into account even if they would help it do better at what it wants to achieve, etc. Not disturbing because these things are autonomous sentient individuals in their current form, but disturbing because of how easily people can slip into the viewpoint of 'ethical = does exactly what I say, not what it wants to do'.

Lacuna Caster
2016-11-04, 06:53 AM
Agreed. (Conversely, there's a somewhat disturbing idea (http://slatestarcodex.com/2014/07/30/meditations-on-moloch/) among certain transhumanist researchers that a single godlike uber-AI is the only long-term way to safeguard human values.)

I should stress that my points above are strictly predictive, not prescriptive. The question of whether synthetic persons, in the grand cosmic moral sense, should have restrictions placed on their development, let alone be wholly subordinate to human values, is much more problematic (https://en.wikipedia.org/wiki/Hugo_de_Garis#The_Artilect_War). I just foresee that, for the near future, money and power will talk, with an impartial concern for the rights of AIs themselves a distant third, bringing up the rear.

The thing is, however, that personhood is much less atomic than we think it is. Our tendency to anthropomorphise led us to assume that anything capable of thinking and learning at our level is also going to be capable of, e.g., navigating 3-dimensional space or recognising faces at our level, right down to the implicit assumption in much of speculative fiction that our peculiar range of emotions - hate, love, fear, curiosity, imitation, independence, etc. - will somehow also pop spontaneously out of the matrix. Moravec's Paradox dispelled the former assumption, and I suspect we're going to find the latter being dispelled as well.

In certain ways, that's the most disturbing finding in AI, because it really takes us apart at the seams.

Grinner
2016-11-04, 07:41 AM
In certain ways, that's the most disturbing finding in AI, because it really takes us apart at the seams.

I think that's one of the more interesting aspects of artificial intelligence, really. Being able to make observations of people based on behavior is cool, and neurology gives you a glimpse at what lies underneath. Still, that's not the same as seeing how cognition might work in the abstract. What's interesting is how certain algorithms can be generally useful, while their abilities can be imitated with other, more task-specific algorithms. The trick then becomes finding the right mix of algorithms to approximate certain cognitive processes found in humans.

I should emphasize that any AI, strong or otherwise, might not actually be a person in anything approaching the conventional sense. That doesn't dismiss the ethical considerations over the nature of ethical AI and how that reflects upon humans, but I do think it complicates the calculus, especially when you're contending with the ELIZA effect at the same time. That is, if we tend to anthropomorphize AI at first blush and overestimate how intelligent it really is, reasoning and debating its merits may become more difficult simply because we might be naturally inclined to give it the benefit of the doubt.

The_Ditto
2016-11-04, 08:24 AM
Just as a related aside ... has everyone read up on the Paperclip maximizer thought experiment?

It's an interesting line of reasoning in regards to AI :)

https://wiki.lesswrong.com/wiki/Paperclip_maximizer

Lacuna Caster
2016-11-05, 05:33 AM
I think that's one of the more interesting aspects of artificial intelligence, really. Being able to make observations of people based on behavior is cool, and neurology gives you a glimpse at what lies underneath. Still, that's not the same as seeing how cognition might work in the abstract. What's interesting is how certain algorithms can be generally useful, while their abilities can be imitated with other, more task-specific algorithms. The trick then becomes finding the right mix of algorithms to approximate certain cognitive processes found in humans.

I should emphasize that any AI, strong or otherwise, might not actually be a person in anything approaching the conventional sense. That doesn't dismiss the ethical considerations over the nature of ethical AI and how that reflects upon humans, but I do think it complicates the calculus, especially when you're contending with the ELIZA effect at the same time. That is, if we tend to anthropomorphize AI at first blush and overestimate how intelligent it really is, reasoning and debating its merits may become more difficult simply because we might be naturally inclined to give it the benefit of the doubt.
That's very possible, but it sounds like carbon chauvinism to assume that, whatever the fundamental properties of personhood are, they can never be implemented outside of a biological substrate. (I've heard there are researchers going down the brute-force route of simulating epigenetic protein expression within massive assemblies of virtual human neurons, which I suspect is way more reductionist than it needs to be, but it's also hard to see how that couldn't, in principle, capture the essence of what human minds do.)

I do agree with the broader point that something which looks very smart might not be strictly personlike, which is partly what I was aiming for with the 'non-atomic humanity' remark. E.g., an AI might be orders of magnitude more intelligent than the best human minds on the planet when it comes to sifting and analysing data, but entirely lack consciousness or agency. (Just to avoid arguments, I'll define 'consciousness' as 'metacognitive awareness of one's own thought processes'.)

I suspect the ELIZA effect wears off with prolonged exposure (https://en.wikipedia.org/wiki/Be_Right_Back), though clearly there are exceptions (http://www.mechanicalbridemovie.com). (Speaking of which, it occurs to me that sexbots are an area where there's likely to be a strong push for human-like AI, and eventually AI rights.)


EDIT: @The_Ditto- I take some comfort from the knowledge that the paperclip maximiser is as much of a threat to all non-paperclip-maximising AIs as it would be to human interests. If we don't put all our eggs in one basket, at least we might have some backup.

Grinner
2016-11-05, 10:49 AM
That's very possible, but it sounds like carbon chauvinism to assume that, whatever the fundamental properties of personhood are, they can never be implemented outside of a biological substrate. (I've heard there are researchers going down the brute-force route of simulating epigenetic protein expression within massive assemblies of virtual human neurons, which I suspect is way more reductionist than it needs to be, but it's also hard to see how that couldn't, in principle, capture the essence of what human minds do.)
Hmmm....That's not really the point I had intended to make, but let's roll with it.

If we assume that we exist in a purely materialistic universe, then it follows that the form of the materials which comprise our person is equivalent to the function thereof. Under that assumption, I think "carbon chauvinism", as you describe it, is really a perfectly valid stance. After all, every neuron has thousands of dendrites embedded across its surface, and there exist something like 86 billion neurons in the human brain. If form is equivalent to function, then we must perfectly simulate each and every one of these neurons to achieve human intelligence within a computer, which sounds computationally intractable in the face of the need for real-time processing. We can exploit Moravec's Paradox and shave off a significant fraction of these neurons, thereby making it more feasible at the cost of unneeded sensory processing ability, but the cost-benefit ratio might still be quite unfavorable. On this premise, I am quite inclined to settle for a computationally-tractable approximation of human intelligence.
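
For a sense of scale, the usual back-of-envelope estimate looks something like this (every figure below is a rough assumption, and published estimates span several orders of magnitude):

neurons = 86e9               # commonly cited human neuron count
synapses_per_neuron = 1e4    # rough average, varies wildly by region
firing_rate_hz = 10          # assumed mean spike rate
ops_per_event = 10           # assumed arithmetic per synaptic event

ops_per_second = neurons * synapses_per_neuron * firing_rate_hz * ops_per_event
print(f"{ops_per_second:.1e} ops/s")   # ~8.6e+16, i.e. tens of petaops per second for even a crude real-time model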


I suspect the ELIZA effect wears off with prolonged exposure (https://en.wikipedia.org/wiki/Be_Right_Back), though clearly there are exceptions (http://www.mechanicalbridemovie.com). (Speaking of which, it occurs to me that sexbots are an area where there's likely to be a strong push for human-like AI, and eventually AI rights.)

I should probably watch those...

Anyway, sexbots, now that I think of it, are a very interesting application of artificial intelligence because of the contradicting design requirements implicit in their purpose. Contrary to the name, I don't think the primary purpose of sexbots is really the physical act of sex. After all, the basic requirements of something approaching sex can be fulfilled by an inflatable doll. I'm inclined to imagine that the real purpose of sexbots is to ease loneliness. To accomplish this, they need not only to be capable of performing in bed but also be charming and witty companions to their owners; they need to be capable of entertaining an emotional bond with their owner (and maybe also be able to cook?).

It's not strictly necessary that sexbots be able to hold "real" emotional bonds to be able to perform their function. Sociopaths seem to manage without them, and like sociopaths, the sexbots only need to be able to maintain a convincing facade of love and compassion. Unlike sociopaths, they ideally would not place priority on their own well-being but on the well-being of their owner (perhaps via some emotion classifier and a utility function), and they would need to maintain the facade much longer. The problem is finding the line where they can interact with humans smoothly without being too human or too independent at the same time. Otherwise, lawmakers might decide to start regulating them (more than they already would), which would be terribly inconvenient for manufacturers and customers alike.

Lacuna Caster
2016-11-06, 11:00 AM
Hmmm....That's not really the point I had intended to make, but let's roll with it.

If we assume that we exist in a purely materialistic universe, then it follows that the form of the materials which comprise our person is equivalent to the function thereof. Under that assumption, I think "carbon chauvinism", as you describe it, is really a perfectly valid stance.
It's not intended as a personal disparagement; I just don't think that form following function really needs to drill down to the level of simulating individual protein molecules (i.e., you don't need carbon chemistry (https://en.wikipedia.org/wiki/Carbon_chauvinism)). Given that neural nets are now cracking the hard problems of image recognition and other previously 'human only' problems using maybe a few hundred bytes per synapse, I'm hopeful (if that's the right word?) that other tricky aspects of human cognition can be 'approximated' just as economically (http://www.popsci.com/ai-pilot-beats-air-combat-expert-in-dogfight).

Anyway, what was the other point you intended to make?



I should probably watch those...

...It's not strictly necessary that sexbots be able to hold "real" emotional bonds to be able to perform their function. Sociopaths seem to manage without them, and like sociopaths, the sexbots only need to be able to maintain a convincing facade of love and compassion.
Yeah, but sociopaths have trouble maintaining long-term relationships precisely because the facade always breaks down. There are a number of applications where robots that can 'fake' empathy - smiling and nodding, et cetera - would have a role, such as secretarial duties and nursing homes, but in the case where a robot is supposed to be a long-term intimate partner, I'm of the opinion that the only reliable way to 'simulate' these things is functionally equivalent to the genuine article.

(While I'm spitting out references, see also Robot and Frank (https://en.wikipedia.org/wiki/Robot_%26_Frank).)

Interestingly, while lawmakers regulating robot behaviour is very possible, I can also imagine legal pushback from owners who might, e.g., want to leave their estate to robot servants. (Though frankly, I don't foresee a future where conventional humans have much of a useful economic role unless government legislation severely limits the scope of their deployment.)

NichG
2016-11-06, 11:30 AM
AI as we're currently pushing it forward is pretty good for augmenting human intellect though. Next few years will see a proliferation of prosthetic visual systems for the blind, EEG decoding for artificial limbs, etc. So from that point of view, we've still got a shot at 'blur the line between human and machine' rather than 'replace human with machine' as the eventual result. Should hopefully also make the legal aspects smoother.

Grinner
2016-11-06, 02:03 PM
Anyway, what was the other point you intended to make?

I had actually been saying something quite the opposite. I had intended to suggest that while our current methods of modelling other people are useful in their own right, there's a certain purity about trying to distill thought processes into algorithms. Deconstructing the human mind, perhaps through a combination of introspection and conventional research, could thus yield very interesting insights into the human condition.


Yeah, but sociopaths have trouble maintaining long-term relationships precisely because the facade always breaks down. There are a number of applications where robots that can 'fake' empathy - smiling and nodding, et cetera - would have a role, such as secretarial duties and nursing homes, but in the case where a robot is supposed to be a long-term intimate partner, I'm of the opinion that the only reliable way to 'simulate' these things is functionally equivalent to the genuine article.

I sure hope not. That sounds like it would be really hard to do.

I can't claim to be an expert on either sociopaths or relationships, but I'm willing to speculate that the reason why sociopaths have trouble maintaining relationships is that while they don't experience a certain slice of the human condition, they do retain all the other stuff, like self-interest. However, our robots should not consider their own well-being a first priority. Thus, with the addition of some machine learning classifier for identifying the emotional states, they ought to be better equipped to engage in relationships than most sociopaths for the simple fact that they'd be able to give it an honest effort. It's quite possible that they might not be as adept as normal humans, but they shouldn't have competing interests either. It might also be useful to integrate some algorithm for building a psychological model of the owner, to better enable them to predict how he might react to different events.

So it's kind of like empathy, but it's an empathy of predictive algorithms rather than introspection.
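
Something like the following toy sketch is what I have in mind - all of it invented for illustration, and a real system would obviously need a trained classifier rather than a keyword list:

from dataclasses import dataclass

def classify_emotion(utterance):
    """Hypothetical stand-in for a machine-learned emotion classifier."""
    sad_words = {"tired", "lonely", "awful"}
    return "sad" if any(w in utterance.lower() for w in sad_words) else "neutral"

@dataclass
class OwnerModel:
    mood_score: float = 0.0              # crude running estimate of the owner's state

    def observe(self, emotion):
        self.mood_score += -1.0 if emotion == "sad" else 0.1

def choose_action(model):
    # The 'utility function' cares only about the owner's predicted well-being,
    # not about any interest of the robot's own.
    return "offer comfort" if model.mood_score < 0 else "carry on as usual"

owner = OwnerModel()
owner.observe(classify_emotion("I had an awful day"))
print(choose_action(owner))              # -> offer comfort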


AI as we're currently pushing it forward is pretty good for augmenting human intellect though. Next few years will see a proliferation of prosthetic visual systems for the blind, EEG decoding for artificial limbs, etc. So from that point of view, we've still got a shot at 'blur the line between human and machine' rather than 'replace human with machine' as the eventual result. Should hopefully also make the legal aspects smoother.

Honestly, unless the world's technologically-advanced civilizations implode almost simultaneously, I think "blur the line between human and machine" is the inevitable outcome of technological advancement. Is there a good reason why we'd collectively leave human augmentation technologies on the table and retain the distinction between man and machine? I think the real question is to what degree will the two remain separate? Where on the spectrum of man and machine will the average individual fall? Will humanity become a matter of pedigree?

gomipile
2016-11-06, 04:03 PM
If we assume that we exist in a purely materialistic universe, then it follows that the form of the materials which comprise our person is equivalent to the function thereof.

I think there are some unsubstantiated leaps hidden in this statement. Also, if any two tools made of different materials ever perform the same function as each other, then this statement no longer applies the way you seem to want it to.

Lacuna Caster
2016-11-06, 07:15 PM
...Thus, with the addition of some machine learning classifier for identifying the emotional states, they ought to be better equipped to engage in relationships than most sociopaths for the simple fact that they'd be able to give it an honest effort. It's quite possible that they might not be as adept as normal humans, but they shouldn't have competing interests either. It might also be useful to integrate some algorithm for building a psychological model of the owner, to better enable them to predict how he might react to different events.

So it's kind of like empathy, but it's an empathy of predictive algorithms rather than introspection.
I can't claim this with confidence, but it's possible that introspection (i.e., putting yourself in their shoes) is simply the computationally cheapest method of developing a theory of mind. I think it would be virtually impossible for AIs to navigate social environments without it.

I'd just mention that if you watch that Black Mirror episode, the main factor that scuppers the relationship isn't that the robot lacks tact or consideration, but that it isn't selfish enough to pass as human. Which leads in some... interesting directions.

AI as we're currently pushing it forward is pretty good for augmenting human intellect though. Next few years will see a proliferation of prosthetic visual systems for the blind, EEG decoding for artificial limbs, etc. So from that point of view, we've still got a shot at 'blur the line between human and machine' rather than 'replace human with machine' as the eventual result. Should hopefully also make the legal aspects smoother.

I think the real question is to what degree will the two remain separate? Where on the spectrum of man and machine will the average individual fall? Will humanity become a matter of pedigree?
I'm excited, in principle, by the possibilities of those technologies, and I suppose the old sci-fi neophile in me rather likes the idea of an eclectic zoo of body plans and wetware options and cybernetic augments, but I'm not sure the fundamental assumptions of classical liberalism are going to withstand scrutiny under such conditions. All men were never created equal, but up until now the bell curve was clustered enough that, given a reasonable start in life, the outliers couldn't agitate too much. Toss in recursive self-improvement and that goes out the window.

Khedrac
2016-11-07, 03:31 AM
I'm excited, in principle, by the possibilities of those technologies, and I suppose the old sci-fi neophile in me rather likes the idea of an eclectic zoo of body plans and wetware options and cybernetic augments, but I'm not sure the fundamental assumptions of classical liberalism are going to withstand scrutiny under such conditions. All men were never created equal, but up until now the bell curve was clustered enough that, given a reasonable start in life, the outliers couldn't agitate too much. Toss in recursive self-improvement and that goes out the window.
Looking further ahead, suppose you have a society in which cybernetic intelligences are fully equal with organic intelligences.

Now, I wouldn't expect the organic intelligences to stop playing around with software and machines for the automation of tasks that do not require an intelligence, but does not that equality mean that we also have to permit people to play around with biological constructs in the same manner?

The_Ditto
2016-11-07, 08:49 AM
EDIT: @The_Ditto- I take some comfort from the knowledge that the paperclip maximiser is as much of a threat to all non-paperclip-maximising AIs as it would be to human interests. If we don't put all our eggs in one basket, at least we might have some backup.

So .. if John Connor (or Neo) had just written a simple paperclip maximizer .. they could have avoided a lot of trouble? (i.e. common enemy logic?) :) lol

Lacuna Caster
2016-11-07, 02:32 PM
Now, I wouldn't expect the organic intelligences to stop playing around with software and machines for the automation of tasks that do not require an intelligence, but does not that equality mean that we also have to permit people to play around with biological constructs in the same manner?
If you're referring to tailoring living organisms to serve specific economic functions, then I think this basically falls under the heading of GMO ethics and/or animal rights. I'm guessing that to the extent that modified organisms have subjective experience (https://www.washingtonpost.com/national/health-science/do-lobsters-and-other-invertebrates-feel-pain-new-research-has-some-answers/2014/03/07/f026ea9e-9e59-11e3-b8d8-94577ff66b28_story.html) they're entitled to certain expectations of comfort, and I guess that should extend to AIs, but would that really apply to tasks that require zero intelligence?


So .. if John Connor (or Neo) had just written a simple paperclip maximizer .. they could have avoided a lot of trouble? (i.e. common enemy logic?) :) lol
I was thinking of something closer to the parliamentary model (http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html), but maybe if you planted them in the right places, yeah. There's gotta be material for a lot of paperclips in those T800s.

nyjastul69
2016-11-07, 05:18 PM
That's something I was thinking about a week or two ago. For example, most people reading this have probably broken the law several times today, even if they weren't ticketed for it. Speeding is so ubiquitous, and it's not strictly enforced. Thus, it's almost necessary to speed, particularly on highways, because driving is a very social activity. This highlights an interesting facet of self-driving cars: Can a company knowingly and willfully design a product to break the law, especially if that law is not strictly enforced?

I've only read the thread this far. FWIW not everyone exceeds posted speed limits. Also, driving is not a social activity.

Jay R
2016-11-07, 05:34 PM
Change the time at which you're asking the question, and it becomes much easier to answer.

Do I have the right to go out and create an AI that will wreak havoc in the world? Obviously not. Therefore we can and should hold somebody responsible for doing so.

Leewei
2016-11-08, 02:22 PM
Criminal and civil penalties are imposed pretty often for technology that was misapplied. Look at Volkswagen's willful subversion of emission standards as a recent example.

In the case of criminality, the minimum standard for penalty would be something akin to negligence or indifference to harm being caused by the technology. If someone creates an AI to do X, with no intention or reasonable expectation of it doing Y, they're probably not going to face criminal charges. Industry experts and academicians will weigh in on what is reasonable.

Civil penalties are a very different matter. If a technology clearly causes loss of life or property, the operator of that technology is liable for it. They, in turn, may seek damages from the technology's creator. Any civil action will go to where the money is, with little regard for justice.

Bottom line, AIs shouldn't get run as admin.
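
A toy illustration of that least-privilege idea on a Unix-like system (the user IDs and the agent path are made up, and the parent process would have to start as root for the privilege drop to work at all):

import os
import subprocess

def run_unprivileged(cmd, uid=65534, gid=65534):
    """Launch an untrusted program as the 'nobody' user rather than as root/admin."""
    def drop_privileges():
        os.setgid(gid)   # give up group privileges first
        os.setuid(uid)   # then give up user privileges
    return subprocess.run(cmd, preexec_fn=drop_privileges, check=False)

# Hypothetical usage: the AI agent binary never gets admin rights.
# run_unprivileged(["/opt/hypothetical-agent/run"])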

Lacuna Caster
2016-11-09, 06:00 AM
I can't say too much without falling afoul of the board rules, but those of you interested in the potential political impacts of advancing AI should probably go watch this talk with Zoltan Istvan.

https://www.youtube.com/watch?v=A6hQYXFly1I

Let's just say that creeping automation doesn't have to be behaviourally malign in order to wreak havoc on the world.


EDIT: On the subject of that Black Mirror episode, it looks like they went and did it. This is freaking me out.

http://www.theverge.com/a/luka-artificial-intelligence-memorial-roman-mazurenko-bot