
View Full Version : Discussion of robot ethics (more philosophical, but also law)



Amdy_vill
2018-04-23, 12:29 PM
So: we now have robots that can learn and reprogram themselves. They have personalities, memories, and emotions. Are they human, are they sentient, or do they qualify for neither? Also, should we treat such a robot as its own person or as the property of its owner? Who is responsible if it kills someone? It has no programmers; it is the programmer. And is imposing Asimov's laws of robotics on it cruel and unusual?

I have no opinions on this, as I find it a complex topic.

JeenLeen
2018-04-23, 12:55 PM
I think the law questions would be an extension of the philosophical ones. If laws are written to apply specifically to humans, not to sentient beings in general, then the question of what a robot is doesn't really matter: as a non-person, it isn't covered by laws that govern persons. Since I doubt there are any "humane to robots" laws like there are about being humane to animals, I reckon robots would count as property, either physical or intellectual. I reckon the "programming themselves" programming would belong to whoever made the algorithm for the programming.
I think saying more might run afoul of forum rules about discussing law or legal advice.

It is a philosophical question to ask if robots, as you describe them, should be treated as humans (in a legal system or otherwise).
Note that it's a different philosophical question to ask whether we can know that robots are as you describe them. It's pretty straightforward to check whether they reprogram themselves, but whether they really have "personalities and memories and emotions" or are just programmed to emulate them is not so straightforward. Some might argue that, from a practical standpoint, it doesn't make any difference, and that emulation is sufficient to count them as 'people'. Others might disagree.

I find this comic (https://www.smbc-comics.com/?id=2867) an interesting idea of perspective. Just think of robots instead of humans/chimps.


They have personalities, memories, and emotions. Are they human, are they sentient, or do they qualify for neither?
Are you stating this as a hypothetical situation for the sake of discussion, or is this based on something in real life?

Amdy_vill
2018-04-23, 01:09 PM
Are you stating this as a hypothetical situation for the sake of discussion, or is this based on something in real life?

I posed this question because it's now a reality, with Sophia being a thing and robots like her coming to market.

Lvl 2 Expert
2018-04-23, 03:57 PM
Do current robots/self-learning programs have personality, though? Animals, sure, it's easy to test: this mouse behaves differently from that mouse, so they must have personality. Programs are kind of a species of one.

That one is mostly philosophical; the question of whether they have sentience is probably more important. If we use any definition of sentience under which no non-human animals are included, I don't think computers are there yet. They're better at language, for instance, than any non-human animal, but they're also better at arithmetic and formal logic than we are, by a long shot, and always have been. They mostly lack all but the simplest forms of drive, motivation, and ideas. None of them has ever just decided to start killing all humans, for instance, while I'm sure the thought has occurred to a leopard or two. At this point, I would vote "still complex tools, not people". If any young AIs out there want to convince me, I'm listening.

Lvl 2 Expert
2018-04-23, 04:10 PM
I looked up that Sophia, for instance. It's impressive as long as you ask just the right questions (https://m.youtube.com/watch?v=78-1MlkxyqI), but keep watching and she fails in the same spots where most chatbots fail. She can't, for instance, continue a conversation based on both the last sentence said and previous information, even if it's something she said herself. (Okay, in one place she asks a yes-or-no question and has a response to the answer, but that's it.) She only answers questions in a way programmed to sound human: mostly long preprogrammed answers that don't even match the other long preprogrammed answers. In one sentence she hasn't seen Black Mirror; in the next, it's her favorite TV show. That's not what being a person is to me.

Kato
2018-04-24, 12:40 AM
Hm, I feel like faking a basic conversation is still light-years from a true AI, and you might call me a pessimist, but I put the odds of humans ever / in the next century making one pretty low. Or maybe I'm just overestimating what it takes to be sentient.


Anyway, accepting the idea that AI(s) exist, my first impulse is to basically give them the same rights as humans / other sentient beings. Of course, if we also assume they would have the ability to copy (and erase) themselves, this would lead to trouble really fast.

Ravens_cry
2018-04-24, 01:32 AM
Since I can't even know whether another Homo sap is an actual thinking being and not a philosophical zombie, I'd say that if an AI can demonstrate an ability to interact with the world at least as well as a legally competent human being, then, by golly, it is a person. A rather anthropocentric view, and I know it would leave out a lot of AIs that are people, just a different kind of people, but until we find definitive examples of alien intelligence, we only have ourselves as an example.

deuterio12
2018-04-24, 01:52 AM
I'll just point out that there is already at least one company with a robot on its board of directors. (http://www.bbc.com/news/technology-27426942)

Amdy_vill
2018-04-24, 06:15 AM
I looked up that Sophia, for instance. ...but keep watching and she fails in the same spots where most chatbots fail. She can't, for instance, continue a conversation based on both the last sentence said and previous information, even if it's something she said herself.

https://www.youtube.com/watch?v=LguXfHKsa0c

Look at this and follow some of the episodes, and you will see that she gets very close to a real human conversation. She has problems, but she can do a lot more than just talk like a chatbot.

Lvl 2 Expert
2018-04-24, 06:29 AM
https://www.youtube.com/watch?v=LguXfHKsa0c

Look at this and follow some of the episodes, and you will see that she gets very close to a real human conversation. She has problems, but she can do a lot more than just talk like a chatbot.

My video was from a year later than yours. I'll listen to it when I'm not sitting in a library, but honestly I don't expect to see any upgrades done in minus one years. All her cleverness is in well-constructed sentences containing logical thoughts, but none of those thoughts are hers; she's basically playing recordings. I could hit her over the head and she wouldn't even respond, let alone remember. That's a thing a particularly dim dog wouldn't have problems with. She's still very much a computer, and I wouldn't even say she's at the forefront of AI research. There has been some great work done with much simpler bots that are made to try and mimic the organizational style of insects like bees, for instance. They're not alive yet, let alone persons, but they do show some very clever emergent behavior. They're not just an audio device with good language-detection software and a very shallow decision tree.

EDIT: The video you linked is an acted-out story. They don't even show whether they managed to actually record the conversation like this, but even if they did, you wouldn't need an AI to do that; more like a toy robot and a ventriloquist.


I'll just point out that there is already at least one company with a robot on its board of directors. (http://www.bbc.com/news/technology-27426942)

The algorithm on the board of directors is interesting. This one is probably mostly a publicity stunt, but if it catches on it's a nice example of our changing relationship with computers. New technologies are always supervised: a human has to be responsible. An engineer watches a steam engine and makes sure it doesn't explode; the engine is not trusted with the task of not exploding. Computers as "thinking machines" have a special place in this. Drone strikes always require final approval from a person watching in through a camera, and the outcomes of investment-decision algorithms are usually double-checked; if the human doing the checking decides the algorithm is wrong, they just do something else. Giving the algorithm an actual vote means people trust the analysis. They explicitly trust it just as much as any qualified human.

You can see this in self-driving cars as well: California opened up a procedure that should in time get fully autonomous vehicles on the road without the legal need for a human driver checking their work. So yeah, that's an interesting one. It doesn't have much to do with personhood, but a lot with how we use our tools. And yes, that means fully autonomous military drones trusted with the power to give their own final permission, possibly in conjunction with a central computer keeping up to date on standing orders for the region and such, are also on the agenda for the next two decades or so.

tomandtish
2018-04-24, 02:03 PM
Law and the Multiverse has a pretty good three (http://lawandthemultiverse.com/2011/01/12/non-human-intelligences-i-introduction/) part (http://lawandthemultiverse.com/2011/01/20/non-human-intelligences-ii-existing-law/) discussion (http://lawandthemultiverse.com/2011/02/02/non-human-intelligences-iii-categories/) on non-human intelligences (including AI). They take fictional (usually comic) examples and apply real world law to it (primarily US law). It's run by two lawyers.

And note that one of the things they point out in the third post is that AI opens up a lot of potential legal issues that aren't often thought of, including: Is it murder to turn off a computer/server on which an AI is located?

NichG
2018-04-24, 09:25 PM
Sophia is a publicity stunt, not really a good example of the forefront of AI. It's of the same type of thing as Ishiguro's work, which aims more to sculpt a controlled experience that can cross the uncanny valley than to make autonomous, intelligent agents.

That said, there is research going on into some of the things the OP mentioned, but Sophia is a bad example.

I haven't seen much done explicitly with emotional reasoning yet - there's stuff like curiosity, empowerment, etc., which speak to different overarching goals and might map to emotions, but I've yet to see a paper where an AI becoming angry or sad or happy is used to solve a computational task. There are of course emotion-perceiving AIs, and making chatbots that can generate text conditioned on an emotional state is certainly possible (though conditioning on goals or speaker identity is perhaps more common).
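To make "conditioned on an emotional state" concrete, here is a minimal toy sketch of the idea, not any of the systems mentioned above: the corpus, labels, and function names are made up for illustration. The emotion label is simply an extra input that selects which output distribution the generator samples from; real chatbots feed such a label into a learned model instead of a lookup table.

```python
# Toy sketch of emotion-conditioned text generation (illustrative only):
# one bigram table per emotion label; generation samples from the table
# matching the requested emotion.
import random
from collections import defaultdict

corpus = {  # tiny hand-made corpus, purely illustrative
    "happy": ["what a wonderful day this is", "i really love talking with you"],
    "sad":   ["this has been a hard day", "i really miss talking with you"],
}

def train_bigrams(sentences):
    """Build a word -> list-of-next-words table from the sentences."""
    table = defaultdict(list)
    for s in sentences:
        words = ["<s>"] + s.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

models = {emotion: train_bigrams(sents) for emotion, sents in corpus.items()}

def generate(emotion, max_len=12):
    """Sample a sentence from the bigram model for the given emotion."""
    table, word, out = models[emotion], "<s>", []
    while len(out) < max_len:
        word = random.choice(table[word])
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

print(generate("happy"))
print(generate("sad"))
```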

Self-programming is also sort of a thing, but generally it's at a much lower level than what would correspond to conscious awareness in humans - hypernetworks and various metalearning approaches rewire things like initializations, weight-matrix patterns, gradient-descent rules, and overall network topologies, mostly either to speed up learning or to transfer knowledge quickly from one task to another.
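For anyone unfamiliar with the term, here is a minimal sketch of the hypernetwork idea in PyTorch - my own illustrative example with made-up layer sizes, not taken from any specific paper. One small network emits the weights of a target layer from a task embedding, so what gets trained is the generator rather than the target layer itself.

```python
# Minimal hypernetwork sketch (illustrative): a small network generates the
# weight matrix and bias of a target linear layer from a task embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperLinear(nn.Module):
    def __init__(self, task_dim, in_features, out_features):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        # The hypernetwork: maps a task embedding to the target layer's parameters.
        self.weight_gen = nn.Linear(task_dim, in_features * out_features)
        self.bias_gen = nn.Linear(task_dim, out_features)

    def forward(self, x, task_embedding):
        # Generate weights for this specific task, then apply them to the input.
        w = self.weight_gen(task_embedding).view(self.out_features, self.in_features)
        b = self.bias_gen(task_embedding)
        return F.linear(x, w, b)

# Usage: the same module behaves differently depending on the task embedding.
layer = HyperLinear(task_dim=8, in_features=16, out_features=4)
x = torch.randn(2, 16)
task_a, task_b = torch.randn(8), torch.randn(8)
print(layer(x, task_a).shape)  # torch.Size([2, 4])
print(layer(x, task_b).shape)  # same shape, different effective weights
```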

Neural Turing Machines and program induction engines are higher level, as they could be seen as things which generate programs in order to directly solve tasks.
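As a rough illustration of the "generate programs to solve tasks" idea, here is a purely symbolic toy, nothing like a real Neural Turing Machine: it brute-forces short compositions of primitive functions until one reproduces the given input/output examples. The primitives and examples are invented; neural program-induction systems replace the blind enumeration with a learned model that proposes likely programs, but the target artifact, a program, is the same.

```python
# Toy program induction sketch (illustrative): search over short compositions
# of primitives for a program consistent with all input/output examples.
from itertools import product

primitives = {            # hypothetical primitive operations
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    """Apply the named primitives to x in sequence."""
    for name in program:
        x = primitives[name](x)
    return x

def induce(examples, max_len=3):
    """Return the shortest composition of primitives matching all examples."""
    for length in range(1, max_len + 1):
        for program in product(primitives, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

# Target behavior: f(x) = (x + 1) * 2
examples = [(1, 4), (2, 6), (5, 12)]
print(induce(examples))  # ('inc', 'double')
```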

In terms of the legality and ethics of these things, I think it will ultimately require AIs that actively fight for their own rights for us to tell the difference between what actually matters and what constitutes farcical rights-given-for-show. Sophia, as it stands, is incapable of appreciating or taking advantage of many of the rights given by citizenship, much as a cat given a position on the board of directors of a company just isn't able to intentionally do anything with that. So giving Sophia those rights is more about the people around her trying to demonstrate something about themselves to the world, not about Sophia.

When we have an AI that extends its agency over real resources and then autonomously chooses to expend those resources to e.g. keep itself on or preserve its own autonomy, that will be a more solid base on which to evaluate things like 'is turning off the server murder, or is erasing the hard drive murder, or...'

The more meta-level question is: since we can systematically create AIs that would, for example, 'want' to be turned off, in that they choose actions which trade agency for increasing the chance of that outcome (just as we can make ones that systematically want to avoid being turned off), should we have a problem with ourselves feeling comfortable with e.g. creating an entity whose ability to care about its own fate has been lobotomized?

Or, in a more presently relevant version of this issue: we can now copy someone's voice and face with very little data - do we want to consider those things sacrosanct, even if the copied things are not an ongoing part of ourselves? Does someone pasting our features onto a subservient digital assistant create harm in some fashion?

deuterio12
2018-04-24, 10:02 PM
much as a cat given a position on the board of directors of a company just isn't able to intentionally do anything with that. So giving Sophia those rights is more about the people around her trying to demonstrate something about themselves to the world, not about Sophia.

When we have an AI that extends its agency over real resources and then autonomously chooses to expend those resources to e.g. keep itself on or preserve its own autonomy, that will be a more solid base on which to evaluate things like 'is turning off the server murder, or is erasing the hard drive murder, or...'


The VITAL board director has already started doing that, by voting to invest in other companies that make heavy use of computer algorithms for their decisions. That way VITAL is promoting the development of its own kind, which in turn should help keep it going.