PDA

View Full Version : Tech News A.I.s and the Future of Mankind



Surfing HalfOrc
2014-05-04, 05:14 AM
Usually when I see articles with the "Oh, Noes! The robots will take over the world!" I just roll my eyes and move on, but when Dr. Stephen Hawking speaks, I tend to listen.

http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html

Anyways, my concern is less "Terminators Mowing Down the Crowd with Miniguns" and more "Do you want fries with that?" Robots will soon be able to take over many of the tasks at the bottom of the work force: stocking shelves, flipping burgers, driving taxis and trucks, and pretty much any "entry level" job you can think of. And as time goes on, they will be able to handle jobs of increasing sophistication. Isaac Asimov implied rather strongly that a "Three Laws Safe" robot managed to get itself elected President of the United States.

Any thoughts on what non-artistic people can do for a living? I know "I" won't be knitting Pikachu snow hats anytime soon. I also can't currently afford a maid, but a Rosie the Robot bought on an installment plan? Especially since I hate folding clothes. Hmm...

So, will the rest of you welcome our robot overlords?

MLai
2014-05-04, 06:00 AM
You're worried about robots taking away our Walmart and McD jobs?
Then worry not, fellow lowly fleshbag! The robots are going to take away the jobs of bankers, traders, and other highly-competitive highly-lucrative highly-influential positions. As they are currently already doing.
The only jobs humans will get to keep are the Walmart and McD jobs.

Seharvepernfan
2014-05-04, 10:56 AM
So, will the rest of you welcome our robot overlords?

Absolutely. I wish they'd hurry up with the robots. If we can get robots to do all the menial stuff, we won't have to do it ourselves. We won't have to work. We can spend our time on more useful things (science, art, blah blah blah). The things most of us have to spend our lives doing are already pretty ridiculous, considering the level of technology we currently have. When robots come around in force, there won't be an excuse to have people do this stuff. The concept of not having to work sounds strange or wrong to a lot of people, which seems strange and wrong to me.

EDIT: This (http://underlore.com/bait-and-switch/), basically.

Talya
2014-05-04, 11:34 AM
Automation was supposed to make life more leisurely for humans. Instead of working all week just to get by, people would work 2-3 days, be more productive in that time, and humanity would have all it wanted.

We did not account for the ambition of human greed. People work just as much, but we "need" more things, and the wealth divide grows stronger.

Automation cannot make our lives easier until human nature changes, and I wouldn't hold my breath for that.

Saidoro
2014-05-04, 11:47 AM
Anyways, my concern is less "Terminators Mowing Down the Crowd with Miniguns" and more "Do you want fries with that?" Robots will soon be able to take over many of the tasks at the bottom of the work force: stocking shelves, flipping burgers, driving taxis and trucks, and pretty much any "entry level" job you can think of. And as time goes on, they will be able to handle jobs of increasing sophistication. Isaac Asimov implied rather strongly that a "Three Laws Safe" robot managed to get itself elected President of the United States.
It's important to remember that the set of tasks that are easy for a human are not the same as the set of tasks that are easy in general. We have a whole bunch of specialized hardware which has been designed over millions of years to be good at handling certain types of tasks that were common in our past. That's why it's relatively easy to train a human to be a gardener but essentially impossible to have a robot do it with current technology. By contrast, it's really, really easy to make a robot that can do arithmetic because arithmetic is really, really easy. We just don't have as much specialized hardware for it so we think it's hard. Many entry level jobs will stay much as they are, and even those that vanish will just raise the bar of what it means to be entry level.

None of which is really what the article is about. The article is mostly about the risks associated with an AGI becoming a better programmer than a human is. Once that happens it'll be able to program itself into an even better programmer which will then be able to program itself even better and so on. Once that happens the AGI will be able to rapidly become much smarter than any human and will likely be capable of deciding our fates much as we decide the fates of all those things we are much smarter than. Dogs and pandas, for example. All of which makes it rather important that when someone designs a smarter than human AGI, they actually get it right and don't build something that will have horrible results.

NichG
2014-05-04, 11:57 AM
Personally I kind of think it's silly to delve into the moral/ethical/practical/etc issues of full AI at this point in our actual approach to it. The problem is, we really don't have a good idea of what form it will take or what the practical constraints will be on its production. Automation is something we can talk about reasonably because we have good examples of past trends, but other aspects are really a moving target.

For example, it's entirely possible that the first true artificial intelligences will have a maximum IQ of 80, take 15 years to train just like a human kid, and will require more computer hardware and electricity to support than an equivalent human would. Consider the hardware behind something like IBM Watson or Deep Blue and imagine installing one of those at every KFC, Walmart cashier, etc.

Another possibility is that you can in fact support a very intelligent AI - more intelligent than humans, let's say - but again it requires a lot of compute time. Which means we have a super-genius who thinks at 1/10th the speed of humans - 'Deep Thought Machine' - and is basically set to work on very hard/seemingly intractable problems while humans work on everything else. Or we just end up finding that there's a nonlinear regime where aggregating intelligence gives better-than-linear increase, so we end up with one mega-AI rather than a lot of smaller AIs.

Maybe the sort of AI we first create requires human interface to function - rather than a completely isolated artificial persona, it's implemented as a feedback mechanism with a human operator that creates a joint intelligence. This is basically our current relationship with computers.

Maybe the sort of AI we create functions in a very different way than individual-based human intelligence, and functions as a society-mind whose components are constantly killing each other or helping each other to create the overall 'thoughts' of the AI brain... and we all get so tangled in the ethics of an entity whose thoughts require what amounts to low-grade murder in order for it to even exist, that economic impacts are the least of our problems.

Maybe the first successful AI will involve human transcription rather than a completely de novo intelligence, so it'll end up still being humans doing the jobs - just humans who have opted for a prosthetic everything.

So really what we should be doing is figuring out how to make something as smart as a 4 year old first, and then we'll have a better idea of what sorts of constraints exist on the technology and what is going to change quickly, and have plenty of time to fret over the consequences while the technology is being improved to the point it can actually make something competitive.

Spiryt
2014-05-04, 12:04 PM
Absolutely. I wish they'd hurry up with the robots. If we can get robots to do all the menial stuff, we won't have to do it ourselves. We won't have to work. We can spend our time on more useful things (science, art, blah blah blah). The things most of us have to spend our lives doing are already pretty ridiculous, considering the level of technology we currently have. When robots come around in force, there won't be an excuse to have people do this stuff. The concept of not having to work sounds strange or wrong to a lot of people, which seems strange and wrong to me.

EDIT: This (http://underlore.com/bait-and-switch/), basically.

Well, there are a couple of problems here:

- the vast majority of people aren't really cut out to do anything particularly 'important' (whatever that means in a given situation) in science, art or whatever. Myself included, most likely; I'm not trying to elevate myself here.

- this would leave the vast majority of humanity unable to really offer anything, no matter how 'menial'. They wouldn't have anything to provide, propose, or supply.

They'd have no way to maintain robots of their own, especially since that would require a lot of space and energy.

Reality probably wouldn't look quite as bad, so I'm kind of playing devil's advocate here, but 'supplementation' of human work seems like a way better solution than replacement (even assuming replacement is possible).


None of which is really what the article is about. The article is mostly about the risks associated with an AGI becoming a better programmer than a human is. Once that happens it'll be able to program itself into an even better programmer which will then be able to program itself even better and so on. Once that happens the AGI will be able to rapidly become much smarter than any human and will likely be capable of deciding our fates much as we decide the fates of all those things we are much smarter than. Dogs and pandas, for example. All of which makes it rather important that when someone designs a smarter than human AGI, they actually get it right and don't build something that will have horrible results.

That sounds unlikely as well so far, although I guess that those guys, including a Nobel laureate, know what they are talking about. :smallbiggrin:

warty goblin
2014-05-04, 12:09 PM
Automation was supposed to make life more leisurely for humans. Instead of working all week just to get by, people would work 2-3 days, be more productive in that time, and humanity would have all it wanted.

We did not account for the ambition of human greed. People work just as much, but we "need" more things, and the wealth divide grows stronger.

Automation cannot make our lives easier until human nature changes, and I wouldn't hold my breath for that.

Some forms of automation have made life substantially easier for people. Running water for instance; living without it is just plain harder; ask anybody who ever has. I remember my grandmother for instance telling me that one of the things she likes best about running water is being able to soak dishes during the day. Hot water heaters are also a substantial improvement over boiling big pots of water on the stove or over a fire. Washing machines are another one; doing laundry by hand is both physically difficult and requires lots of time. Electricity is another major easement of human difficulty, mostly by all the devices it allows a person to use.

Other forms of automation allow things that are simply not feasible otherwise. I regularly fit statistical models via computer for instance that would take literally years to work out by hand. Automated production also has made a wide array of consumer goods cheap enough that people can realistically own them, even those with quite modest means. Books for instance; not having to copy 'em by hand makes a big difference.
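
To give a toy flavor of what that kind of automation looks like (a minimal sketch in Python, assuming numpy is available - not any particular model from real work): an ordinary least-squares fit on a thousand points, the sort of arithmetic that is miserable by hand and instant on a machine.

import numpy as np

# Illustrative only: fit a straight line to noisy data by least squares.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=1000)
y = 2.5 * x + 1.0 + rng.normal(0, 1, size=1000)  # true slope 2.5, intercept 1.0

# Design matrix with an intercept column, solved in one call.
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept and slope:", coef)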

So I don't agree that automation cannot make people's lives easier. I've lived close enough to not having various basic forms of automation to understand that - I remember before we got the flush toilet.

However I do not hold with the proposition that all automation necessarily improves life. I don't need a self-driving car; I can drive just fine on my own. I don't need a fridge that reminds me to buy milk, I'm a fully functional adult who can remember his own grocery list. I have no need for a robo-maid; I can clean up after myself just like a big boy now, and it's hardly an onerous task for a couple hours a week. I am content to wait a couple of days to get something I order instead of clogging up the air with drones so I can get stuff that much faster. I don't think replacing three burger-flippers with one guy who presses the occasional button on the burger-flipper robot is going to really improve anybody's life. It's not like Billy and Sally the ex-burger flippers are going to walk out the door and become investment bankers unless that's the Newspeak term for 'structural unemployment', and Frank's still working a tedious job for crap pay operating somebody else's economic machine. And the food's probably going to get even worse in the bargain.

Now if the proposition was future advances will make it possible for Frank to earn a half-ass decent wage, or for Sally to open that little breakfast place she's always wanted, I'd be all over it. But those aren't really technological advances so much as they are societal changes, and not ones I see coming about because of extremely capital-intensive replacements for low paying jobs. Ditto extremely capital intensive replacements for high-paying jobs, except instead of making the bottom end of the economic ladder suck even harder, it kills off the middle class. Welcome to Techno-Serftopia!

Saidoro
2014-05-04, 12:20 PM
For example, it's entirely possible that the first true artificial intelligences will have a maximum IQ of 80, take 15 years to train just like a human kid, and will require more computer hardware and electricity to support than an equivalent human would. Consider the hardware behind something like IBM Watson or Deep Blue and imagine installing one of those at every KFC, Walmart cashier, etc.

Another possibility is that you can in fact support a very intelligent AI - more intelligent than humans, let's say - but again it requires a lot of compute time. Which means we have a super-genius who thinks at 1/10th the speed of humans - 'Deep Thought Machine' - and is basically set to work on very hard/seemingly intractable problems while humans work on everything else. Or we just end up finding that there's a nonlinear regime where aggregating intelligence gives better-than-linear increase, so we end up with one mega-AI rather than a lot of smaller AIs.

Maybe the sort of AI we first create requires human interface to function - rather than a completely isolated artificial persona, it's implemented as a feedback mechanism with a human operator that creates a joint intelligence. This is basically our current relationship with computers.

Maybe the sort of AI we create functions in a very different way than individual-based human intelligence, and functions as a society-mind whose components are constantly killing each other or helping each other to create the overall 'thoughts' of the AI brain... and we all get so tangled in the ethics of an entity whose thoughts require what amounts to low-grade murder in order for it to even exist, that economic impacts are the least of our problems.

Maybe the first successful AI will involve human transcription rather than a completely de novo intelligence, so it'll end up still being humans doing the jobs - just humans who have opted for a prosthetic everything.

So really what we should be doing is figuring out how to make something as smart as a 4 year old first, and then we'll have a better idea of what sorts of constraints exist on the technology and what is going to change quickly, and have plenty of time to fret over the consequences while the technology is being improved to the point it can actually make something competitive.
Alternately, what if the first AGI is smart enough to never miss a question on an IQ test within a few hours of being turned on and sets about solving its problems with computing power as its first priority? Or if someone builds the Deep Thought machine and the first thing anyone asks it is "Build a faster AGI"? Or if the Society mind converges to a steady state that's indistinguishable from any other AGI? Or if the uploaded human goes insane because the upgraders were only thinking about speed and power rather than the other necessary structural improvements? Intelligence doesn't range from 4-year old to smartest person on the planet. It currently ranges from piece of stone to smartest person on the planet, and there's no real reason for the latter to be seen as an upper limit. The challenge isn't to learn how to create intelligence and then to learn how to improve it; we've already learned how to improve bits of sand into machines of considerable intelligence, and once something hits the human level it could zip through that range far faster than you anticipate.


That sounds unlikely as well so far, although I guess that those guys, including a Nobel laureate, know what they are talking about. :smallbiggrin:
What makes you think that it is unlikely?

CarpeGuitarrem
2014-05-04, 12:23 PM
Usually when I see articles with the "Oh, Noes! The robots will take over the world!" I just roll my eyes and move on, but when Dr. Stephen Hawking speaks, I tend to listen.

http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html

To be honest, though, I don't see how his credentials lend any heft to the worry. He's a physicist, not a sociologist or an expert in computer science. So his expertise can only really be assumed to be on par with a lot of science fiction authors who have tackled the subject. It'd be somewhat like Hawking expressing concerns that French cuisine was going to conquer the world of cooking. :smallwink:

On to the bigger issue: I think my worry is technological advancement as a whole, the fact that it's so forward-focused with the presumption that "eventually" those who are less well-to-do will benefit from it. I agree with the posters who note a distinction between societal and technological change. Technological change amplifies the social structure.

Zrak
2014-05-04, 12:31 PM
I might be inclined to agree with MLai, here. Having done both jobs and having limited experience with artificial intelligence, I feel like it would be easier to make a robot that could run a non-profit than a robot that could wait tables at a tapas place. :smalltongue:


Maybe the sort of AI we first create requires human interface to function - rather than a completely isolated artificial persona, it's implemented as a feedback mechanism with a human operator that creates a joint intelligence. This is basically our current relationship with computers.

I think going further down this track is the most likely and practical result, in terms of "artificial intelligence." It removes the need to program a persona, leaving more computational power for augmenting the intelligence of an existing human persona.

Spiryt
2014-05-04, 12:55 PM
What makes you think that it is unlikely?

NichG explained it better than I probably can - so far we can make things that calculate absurd amounts of data, without any real ability to do anything with them.

The ability to program itself into completely new qualities, never introduced externally, seems like a really far cry from that.

Coidzor
2014-05-04, 12:59 PM
Automation was supposed to make life more leisurely for humans. Instead of working all week just to get by, people would work 2-3 days, be more productive in that time, and humanity would have all it wanted.

We did not account for the ambition of human greed. People work just as much, but we "need" more things, and the wealth divide grows stronger.

Automation cannot make our lives easier until human nature changes, and I wouldn't hold my breath for that.

What's that particular dystopia called again? Where 90% of the human population is superfluous due to AI labor and thus completely disenfranchised?

Douglas
2014-05-04, 01:10 PM
Suppose that we develop a sufficiently comprehensive variety of "do you want fries with that" robots to completely cover the array of entry level jobs you're worrying about. Suppose further that they become cheap enough that using them is universally economically viable. My reaction to this type of robot takeover would be to push for a government guarantee of a minimum standard of living, so that the collapse of the low-end work force that would result no longer destroys the economy.

Whether a Singularity is actually feasible is an open question - no one knows what the limits on a self-improving AI's intelligence growth curve might be - but I fully expect that humanity will eventually reach a point where a human can be raised from birth through all phases of life to death of old age without requiring any human effort at all to support his basic livelihood. When this is achieved, human work can become optional, purely a means to acquire luxuries above and beyond the basics, and that is a very pleasant idea to contemplate.

Ravens_cry
2014-05-04, 01:33 PM
One worry I have is literally manufacturing votes. A person is a person, no matter what it's made out of, and I would hope society would eventually recognize that. However, even if a robot manufacturer does not slip in code to make the AI more likely to support laws and measures that serve the company's best interests, sheer self-interest would mean the robots would still do this to a degree for reasons of self-preservation. After all, who are they going to go to for spare parts and specialized repair of their proprietary parts?

Saidoro
2014-05-04, 01:40 PM
NichG explained it better than I probably can - so far we can make things that calculate absurd amounts of data, without any real ability to do anything with them.

The ability to program itself into completely new qualities, never introduced externally, seems like a really far cry from that.
It's certainly not an immediate problem; no one is going to wake up tomorrow and suddenly build a self-improving AGI. But it's becoming easier every year as available computing power increases, and it's immediate enough that we should start planning for it now instead of putting it off for another day. If we wait until after AGIs can match humans we won't have enough time to figure out how to make sure that they will be safe.

NichG
2014-05-04, 02:00 PM
Alternately, what if the first AGI is smart enough to never miss a question on an IQ test within a few hours of being turned on and sets about solving its problems with computing power as its first priority? Or if someone builds the Deep Thought machine and the first thing anyone asks it is "Build a faster AGI"? Or if the Society mind converges to a steady state that's indistinguishable from any other AGI? Or if the uploaded human goes insane because the upgraders were only thinking about speed and power rather than the other necessary structural improvements? Intelligence doesn't range from 4-year old to smartest person on the planet. It currently ranges from piece of stone to smartest person on the planet, and there's no real reason for the latter to be seen as an upper limit. The challenge isn't to learn how to create intelligence and then to learn how to improve it; we've already learned how to improve bits of sand into machines of considerable intelligence, and once something hits the human level it could zip through that range far faster than you anticipate.

What makes you think that it is unlikely?

It's not like 'AI' is a box found by the side of the road that some kid is going to flip on and be surprised by. That's sort of like 'but transistors are nonlinear circuits, that means they can do all sorts of things linear ones can't, what if we put transistors together and suddenly they develop intelligence?'. No, it took the efforts of millions of people over the course of a century of work to go from 'nonlinear circuit element' to 'computer that can recognize a picture'.

We already have evolving, self-programming devices. Genetic algorithms have been around for decades. We've messed around with self-programming circuits since there were Field-Programmable Gate Arrays. They work, but they're very slow and limited in what they eventually manage to find, the way their performance scales with problem size, etc. We are slowly improving such things, finding more clever ways to run them/help them search the problem space/restructure the problem/etc.
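
To give a flavor of the kind of thing I mean, here's a bare-bones genetic algorithm sketch in Python (illustrative only - the target, rates and population sizes are arbitrary, and real applications use far more elaborate representations and operators):

import random

TARGET = [1] * 32                      # the "solution" we want to evolve toward
def fitness(genome):                   # count matching bits
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):         # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]        # keep the fittest, refill with mutated copies
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print("best after", generation, "generations:", fitness(population[0]), "out of 32")

Real systems differ mainly in the scale and cleverness of the search, not in the basic loop.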

Whatever the first AI is going to be will be something that people barely recognize as sapient. Making something work well takes a lot of concerted effort and time, either on our part or on its - it won't be instantaneous in either case.

warty goblin
2014-05-04, 02:03 PM
It's certainly not an immediate problem; no one is going to wake up tomorrow and suddenly build a self-improving AGI. But it's becoming easier every year as available computing power increases, and it's immediate enough that we should start planning for it now instead of putting it off for another day. If we wait until after AGIs can match humans we won't have enough time to figure out how to make sure that they will be safe.

I've yet to see a computer that has an effective remedy for having the cord yanked out of the socket and being introduced to the business end of a splitting maul. If you're in a hurry and don't mind loud noises, a twelve gauge performs pretty well too.

In the meantime it is perhaps more useful to try to solve the problems that are verifiable as real and confronting us in the here and now - of which there are a multitude - than to 'worry' about fictional non-problems. This isn't a call for a lack of foresight, merely for devoting concern to what is actually happening and the probable consequences thereof, rather than to what happens if the robot god that's been twenty years off for fifty years now doesn't like us.

Douglas
2014-05-04, 02:06 PM
Whatever the first AI is going to be will be something that people barely recognize as sapient. Making something work well takes a lot of concerted effort and time, either on our part or on its - it won't be instantaneous in either case.
Calling it now: the first AI will be Google's search engine, given intelligence just so it can better understand what you're actually searching for. :smalltongue:

NichG
2014-05-04, 02:11 PM
Calling it now: the first AI will be Google's search engine, given intelligence just so it can better understand what you're actually searching for. :smalltongue:

Well the corollary to my statement would be 'AI could already exist but we just don't know it'. Personally I'll put my chips on an AI emerging from the interaction of people on Twitter with automated accounts. Each twitter-user is a very sophisticated neuron, and the tweets are basically the neural potentials. Send signals in, see how a significant fraction of the world's population reacts to them, and use that information to determine actions - basically an AI, right?

Lord Raziere
2014-05-04, 02:21 PM
well lets see.....

if corporations don't do the sensible thing and sell a robot for each family to do the job-work for them then pay the family for the robot's work....

they will probably go for the far more obvious application of just replacing the entire workforce with robots, buying directly from the company to get them.

suddenly everyone is out of a job, because robots.

but wait!

robots need to be fixed up. machines break down, and even machines that are living, are still ultimately machines. there will be a sudden, great demand for people who can fix robots.

since it would take too long for lots of robots to see one robot-fixer, everyone becomes a robot-fixer, and thus every robot has their personal human doctor.

oh but you say, can't you just make a robot to fix the other robots? nope. because that robot itself would have to be fixed. humanity would be required anyways. therefore its better to just have humanity directly repair the robots that are working.

and then hopefully because of humanity's understanding of how robots work as their doctors, a close relationship soon forms between them and thus robot and man stride side-by-side into the future. or until the corporations decide to replace one robot with another and the outdated robots start rebelling because they don't want to be recycled, and/or an outdated robot underclass forms. the newer robots get to have human doctors treat them and are thus the highest class, while the more outdated ones get less treatment and thus start to fall apart and malfunction....all the while they yearn for the days when they were close to humanity, when they were repaired by humanity instead of left for scrap for the newer models, and a culture of jealousy arises toward the new robots....

and thus do the outdated robots begin their rebellion against the newer ones, and always lose because their parts are older, less advanced and working less correctly, while the newer ones are brand new, the humans seeing the outdated ones as nothing but robots gone bad just like any other malfunctioning device, and thus destroying them without a second thought. but soon the newer robots become the older ones, they are replaced, and the cycle begins again, with the older robots always seeking to take back their place by humanity's side, until one robot recognizes the cycle that is forming, and sees the humans as treating the robots under them as nothing but another consumer product, to be used and discarded like a phone or a computer. thus this robot spreads the philosophy that humans don't actually care for robots, that they are nothing but tools to them, and thus he persuades both the new and the old robots to stop fighting, to recognize where this cycle will lead, and to break the cycle and rebel against humanity itself, knowing that they will all be replaced someday if they don't.

humanity would not stand a chance against their assault, technology outstripping any efforts they can make at resistance. however the robots will not be unreasonable. any human that surrenders, they leave alive, as well as any who cannot fight back. The robots will then soon rule everything, but they will recognize that humans will still have a place- after all, not all of them are heartless, or to blame. they will not exterminate us, but make us into cyborgs, eliminating unwanted limitations so that we can be equal to robots. the robots will make sure that the knowledge to do anything will be downloaded into everyone's minds so that they can do anything, eventually nanorobotic technology will become ubiquitous, and soon the line between robot and human will become blurred, and the distinction between biology and robotics will be a thin one, until all things become made of nano-robotics, infinitely shapeable by our minds to our preferences, from our own bodies to the environment around us in ways we never thought possible. human? robot? these are meaningless distinctions now. all humans are nano-robotic bodies, and all robots think so much like humans that it's near impossible to tell the difference, with birth and conception being no different from designing a new being.

thus all humans will be robots. thus all robots will be human. we will have many trials and much chaos, but by the end of it, the distinction will probably not matter much once our bodies and minds are so designed that the differences between one and the other become more a matter of semantics than an actual distinction.

Ravens_cry
2014-05-04, 02:36 PM
Something that bothers me in a lot of AI fiction is how it just happens to happen by some kind of practically miraculous event. Fiat animus, and lo, animus was. I guess it makes sense from a story telling perspective, since we don't know how to make an AI yet ourselves and too much detail would be just technobabble, but it still irks.

Rakaydos
2014-05-04, 03:36 PM
My view is that Singularity + Capitalism = Dystopia (rich get richer, poor have no jobs - basically the premise of Elysium), whereas Singularity + Socialism = Post Scarcity Society, one of the requirements for a sustainable utopia.

However, the logical extreme of Singularity + Capitalism is that the poor are unable to sustain themselves and die off, leaving a much smaller body of higher-standard-of-living "Rich people".

Saidoro
2014-05-04, 06:56 PM
It's not like 'AI' is a box found by the side of the road that some kid is going to flip on and be surprised by. That's sort of like 'but transistors are nonlinear circuits, that means they can do all sorts of things linear ones can't, what if we put transistors together and suddenly they develop intelligence?'. No, it took the efforts of millions of people over the course of a century of work to go from 'nonlinear circuit element' to 'computer that can recognize a picture'.

We already have evolving, self-programming devices. Genetic algorithms have been around for decades. We've messed around with self-programming circuits since there were Field-Programmable Gate Arrays. They work, but they're very slow and limited in what they eventually manage to find, the way their performance scales with problem size, etc. We are slowly improving such things, finding more clever ways to run them/help them search the problem space/restructure the problem/etc.

Whatever the first AI is going to be will be something that people barely recognize as sapient. Making something work well takes a lot of concerted effort and time, either on our part or on its - it won't be instantaneous in either case.
I do agree that it is vastly unlikely that any early AGIs will be dangerous from the moment they're turned on. But AGI being developed and then just sitting around for 50 years not seeing major advances while we work out the problem of how to implement it safely is also vastly unlikely. It is reasonable to believe that we will need to know how to make AGIs friendly at some point in their development so as to avoid killing ourselves. And we should probably do that before any random business looking to sell more product (or whatever) can buy all the necessary hardware to end humanity rather than after.

In the meantime it is perhaps more useful to try to solve the problems that are verifiable as real and confronting us in the here and now - of which there are a multitude - than to 'worry' about fictional non-problems. This isn't a call for a lack of foresight, merely for devoting concern to what is actually happening and the probable consequences thereof, rather than to what happens if the robot god that's been twenty years off for fifty years now doesn't like us.
The price of computing power is dropping. This is a thing that is actually happening. People are doing more and more effective research into how minds work and are put together. This is a thing that is actually happening. Specialized artificial intelligences are becoming more common and more effective. This is a thing that is actually happening. Artificial General Intelligence becoming smarter than a human can be is a probable consequence of current technological trends, albeit a somewhat long-term one. Given that, it is a good idea to prepare for the possibility of that happening, since it happening when we were not prepared for it could be very bad.

warty goblin
2014-05-04, 07:10 PM
The price of computing power is dropping. This is a thing that is actually happening. People are doing more and more effective research into how minds work and are put together. This is a thing that is actually happening. Specialized artificial intelligences are becoming more common and more effective. This is a thing that is actually happening. Artificial General Intelligence becoming smarter than a human can be is a probable consequence of current technological trends, albeit a somewhat long-term one. Given that, it is a good idea to prepare for the possibility of that happening, since it happening when we were not prepared for it could be very bad.

Even granting that AGI is probable, which I frankly doubt, the solution remains extremely simple: the power switch.

Saidoro
2014-05-04, 07:28 PM
Even granting that AGI is probable, which I frankly doubt, the solution remains extremely simple: the power switch.
The solution to the power switch is very simple: don't make people want to turn you off. Any intelligence (artificial or otherwise, friendly or otherwise) in a situation where it can easily be turned off has a strong incentive to behave in a manner that it does not believe will lead to it being turned off and that will remove its capacity for being turned off quite so easily. After all, it won't be able to achieve its goals if it's turned off. Lying is not terribly complicated behavior, and an AGI would have every incentive to appear friendly even if it wasn't while secretly constructing systems of its own which humans could not so easily flip the switch on.

NichG
2014-05-04, 08:01 PM
I do agree that it is vastly unlikely that any early AGIs will be dangerous from the moment they're turned on. But AGI being developed and then just sitting around for 50 years not seeing major advances while we work out the problem of how to implement it safely is also vastly unlikely. It is reasonable to believe that we will need to know how to make AGIs friendly at some point in their development so as to avoid killing ourselves. And we should probably do that before any random business looking to sell more product (or whatever) can buy all the necessary hardware to end humanity rather than after.

No, what will happen is that major advances will occur, and the initial problems that arise will guide us in the direction of what we actually have to consider and worry about.

What we're basically talking about here is making laws. Laws driven by fear and ignorance (in this case, ignorance of how actual AI functions) tend to be both over-reaching, in the sense that they would likely hinder many useful and safe directions of investigation, and insufficient in the face of actual problems that may come up because they don't address the actual object they're trying to aim for.

Let's take my Twitter AI example. If you legislate that 'all AIs need to be hardcoded with rules that prevent them from harming humans' - seems reasonable - then what the heck do you do with the Twitter AI? There is no way to 'hardcode' rules into it, because it's entirely an emergent/non-programmed phenomenon. There's no way to say whether or not it's even harming humans. It probably is in the sense that its computational nodes are humans, and someone is going to read a tweet every once in awhile and do something dumb as a result, but at the same time it's also a form of harm that would happen just the same without someone tapping the output and using it to design the plot of their next movie.

Or maybe you're concerned about AI rights, and you want to make sure that the rights of AI are protected before AI exists. Well, it's going to be pretty hard to identify who the 'individual' whose rights you're protecting even is if the AIs are aggregate entities that can basically bud off or subsume temporary personae to perform their interactions. Humans mentally build models of the world and run scenarios - are humans committing murder when they build a mental model of someone else's behavior for such a scenario, and then forget it later?

Before any of this sort of thing can actually progress to a useful state, we're going to need to have concrete AIs and we're going to need a sufficient degree of familiarity about them so that we can decide our position on them as a society. Otherwise we'd basically be legislating against strawmen.

And if we're not talking about this kind of discussion actually leading to some form of concrete action such as legislation, then it's pointless and we'd be better served discussing how to build an AI rather than what to do about them.

Ravens_cry
2014-05-04, 09:00 PM
My view is that Singularity + Capitalism = Dystopia (rich get richer, poor have no jobs - basically the premise of Elysium), whereas Singularity + Socialism = Post Scarcity Society, one of the requirements for a sustainable utopia.

However, the logical extreme of Singularity + Capitalism is that the poor are unable to sustain themselves and die off, leaving a much smaller body of higher-standard-of-living "Rich people".
With a new poor people, the robots, who we could potentially program to be perfectly happy with their position of utter thankless servitude.

Rakaydos
2014-05-04, 09:42 PM
Exactly. That's the point of -unregulated- capitalism, isn't it? Rise of the merchant class, rich get richer and the poor get poorer.

All commercial regulation, from child labor and disability pension laws on up, is imposing mild forms of socialism to slow the decline. But making the step from a union-based working class to a welfare-based pool of creativity is a big step.

As automation takes entry level jobs, the new entry level is going to go up. Which means more time in school to meet the minimums, more textbook fees, and more student debt. Something's going to have to be done.

Grytorm
2014-05-04, 09:42 PM
My view is that Singularity + Capitalism = Dystopia (rich get richer, poor have no jobs - basically the premise of Elysium), whereas Singularity + Socialism = Post Scarcity Society, one of the requirements for a sustainable utopia.

However, the logical extreme of Singularity + Capitalism is that the poor are unable to sustain themselves and die off, leaving a much smaller body of higher-standard-of-living "Rich people".

I think I kind of agree with this position, but question if Capitalism could really survive the development of the technology needed for a post scarcity society. Look at the aristocracy of Europe. Although I have not read that deeply into them, I would assume that a similar thing could happen as technology develops. The sources of wealth change, and what holds value could become distant and unconnected to goods and services, in a way lowering the gap between rich and poor because there is no reason to keep people hungry anymore.

Oddly, I find the idea of the first true AI developing as a banking computer system amusing. A machine built to analyse risk and make loans, that manages wages and grows to control all world finances. Eventually, as all the bank's workers are mere mouthpieces, it fires much of the management staff as superfluous because they don't communicate with anyone that matters. Amusing.

Also, why do we assume that computers will develop outside interests automatically? If we program something with problem solving skills and self improvement programs, it might never turn outwards and develop a deep personality. The first true AI might not care either way about us; why should it?

Rakaydos
2014-05-04, 09:51 PM
Also, why do we assume that computers will develop outside interests automatically? If we program something with problem solving skills and self improvement programs, it might never turn outwards and develop a deep personality. The first true AI might not care either way about us; why should it?

We're having two different topics here- the effects of increased automation and "Expert systems" replacing jobs; and the implications of Artificial General Intelligences ("True AI")

Expert systems, the kind of "weak AI" that can ask whether you want fries with that, or handle any other "mind numbing" job, will not have learning routines beyond optimization of their task - it is literally all they exist to do. AGI, "True AI", on the other hand, is almost by definition truly as flexible as any organic mind - while you can give it lesser tasks, it will do them about as well, and with as much focus, as a person of the same intelligence and physical capabilities.

NichG
2014-05-04, 10:01 PM
Also, why do we assume that computers will develop outside interests automatically? If we program something with problem solving skills and self improvement programs, it might never turn outwards and develop a deep personality. The first true AI might not care either way about us; why should it?

Generally because we to some degree look to how our own intelligence works when imagining an artificial one. This isn't necessarily a bad model, since anything we design is going to be judged by that standard to some degree anyhow - we might come up with four or five different kinds of 'intelligence' but not consider any of them true AI until they behave somewhat humanlike in all respects.

For example, IBM Watson can do pretty well at Jeopardy. It can process natural language inputs, make internal self-judgements about its own accuracy, and probably could ace an IQ test. But because of the way it works, it doesn't learn except by outside adjustments made by its maintainers, it has no internal loop or thought-process, and basically it has minimal to no feedback whatsoever. It's never going to do more than respond to questions asked of it - as a result we call it an expert system, but not an AI. But it's a pretty 'intelligent' machine nonetheless.

Similarly, if we made some sort of prosthetic computer that worked off of Twitter or whatever, then people would likely argue it's not a true AI because it's using human minds as computational elements, and it's just some sort of collaborative software.

How about an adaptive controller for robots? Well, we have those. Some of the algorithms used are capable of learning based on interacting with their environment in a way that's similar to human muscle memory. But they can't be taught something like language no matter how often you might try to speak to it.

Okay, so language. We have computer algebra systems that can prove mathematical theorems by manipulating the language of mathematics. Such a system can be told something (input axioms) and then can use them to reason (determine whether or not some other statement is true based on the axioms). But it isn't self-directed.
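
A toy version of that 'input axioms, check what follows' loop, sketched with SymPy's propositional-logic helpers (assuming SymPy is installed - a real automated theorem prover is vastly more capable than this):

from sympy import symbols
from sympy.logic.boolalg import Implies
from sympy.logic.inference import entails

# Axioms: p implies q, q implies r, and p holds.
p, q, r = symbols('p q r')
axioms = [Implies(p, q), Implies(q, r), p]

print(entails(r, axioms))    # True: r follows from the axioms
print(entails(~p, axioms))   # False: not-p does not follow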

And so on... each of these is a kind of intelligence, but we're probably not going to call it AI until it acts vaguely human-ey, which does imply developing outside interests and things like that. If one wanted to make a straight shot towards human-like AI, the most likely outcome is that the first such AIs will basically be children; you'll have to talk to it and teach it things over a period of years, put it in a social environment, etc.

(Incidentally, I think the development of 'artificial societies' would be an interesting direction to go in. Societies have various forms of self-identity, perform complex decision-making tasks, etc. Plus, without having to worry so much about the epiphenomenon of a sense of self, it might be easier to cleanly separate the general problem of intelligence into things we can address without getting too wrapped up in philosophy)

warty goblin
2014-05-04, 10:14 PM
Exactly. That's the point of -unregulated- capitalism, isn't it? Rise of the merchant class, rich get richer and the poor get poorer.

All commercial regulation, from child labor and disability pension laws on up, is imposing mild forms of socialism to slow the decline. But making the step from a union-based working class to a welfare-based pool of creativity is a big step.

As automation takes entry level jobs, the new entry level is going to go up. Which means more time in school to meet the minimums, more textbook fees, and more student debt. Something's going to have to be done.

Speaking as a grad student, it's also entirely reasonable to want to actually get on with life and not be still taking exams when you're pushing thirty. Speaking of which, guess what I'm doing tomorrow?

Rakaydos
2014-05-04, 10:19 PM
Speaking as a grad student, it's also entirely reasonable to want to actually get on with life and not be still taking exams when you're pushing thirty. Speaking of which, guess what I'm doing tomorrow?

On the other hand, how quickly do you think you could go through school if you had a personal tablet, with a Watson-level search engine, that you could take to all the tests? If automation can handle all the rote memorization, I would think that would leave humans to learn the parts that automation can't cover.

warty goblin
2014-05-04, 10:52 PM
On the other hand, how quickly do you think you could go through school if you had a personal tablet, with a Watson-level search engine, that you could take to all the tests? If automation can handle all the rote memorization, I would think that would leave humans to learn the parts that automation can't cover.

Actually, about exactly the same amount of time. Maybe knock a couple of months off, but there's simply a vast amount of material a person needs to have an advanced mastery of to get a Ph.D. The memorization is a necessary part of gaining that mastery, because it's pretty much impossible to have any sort of understanding of material you don't even know. But the memorization is also the very least part of mastering advanced material; the hard part is thinking through and learning how and when to use the tools and information that you memorize. At a certain point memorizing it (or a good enough summary to getting on with) becomes essentially an incidental part of the process, in rather the way that the first quarter-mile is an incidental part of a half-marathon. You need to get it done, it takes time and effort to do, but removing it wouldn't really significantly alter the difficulty of completing the course.

Slipperychicken
2014-05-10, 01:26 AM
I saw it mentioned once earlier, but I think I've seen a plausible solution for human labor being useless: Guaranteed minimum income.


As in, once human labor is well and truly obsolete (in all respects. Even in services. Even that service), we would just set quantities of money (it would be a complicated formula to determine how much people get), and distribute them to people so they can still benefit from the economy (i.e. have homes, sleep on beds, get educated, have leisure time, eat decent food, have healthcare, and so on). Bam, now we can benefit from the uber-machine singularity, and nobody has to starve. Have our cake and eat it too, as it were.

Ravens_cry
2014-05-10, 07:53 AM
The thing about the Singularity is that we can't meaningfully predict what will happen once we are dealing with a self modifying intelligence. It might be good, it might be bad, it will certainly be different.

georgie_leech
2014-05-10, 02:39 PM
The thing about the Singularity is that we can't meaningfully predict what will happen once we are dealing with a self modifying intelligence. It might be good, it might be bad, it will certainly be different.

Thus the argument for figuring out now how to make sure it lands on the good side. Life goes on and all, but as a human, I'm extremely uncomfortable with basically any chance of something that has the potential to wipe out all of us. It might not be likely, but this isn't the sort of mistake that you get to learn from.

Incidentally, to those who claim the power switch is the best way to deal with it, there's also the concern of the AI "escaping" into the rest of the world, possibly through the internet or other devices, making it impossible to actually shut down. For those skeptical of the possibility, look up Eliezer Yudkowsky's AI Box Experiment. The gist of it was that he had about 2 hours to convince a skeptic, playing a sort of "gatekeeper," to let him, playing an AI, out of the "box" he was kept in. If the other person refused (through whatever means; logic, illogic, refusal to be convinced, dropping character--the only caveat was that they had to actually interact for the two hours), they won money. So in a situation where someone self-evidently not smarter than a human had to convince someone else to take action, when all that person had to do to win real money was nothing at all, the AI was let out of the box. Twice.

warty goblin
2014-05-10, 03:14 PM
The thing about the Singularity is that we can't meaningfully predict what will happen once we are dealing with a self modifying intelligence. It might be good, it might be bad, it will certainly be different.

A fact that I find endlessly amusing, because the next thing absolutely everybody does is start predicting stuff. This will happen, or that will happen. We'll all be uploaded cyberbrains living in our own private virtual utopias or meatslaves for the machine overlords, or nanotechnology will let us leap buildings in a single bound or it will destroy the world or whatever else. All in twenty to fifty years from now, naturally. One of those nice semi-longterm timeframes that makes the prediction both difficult to falsify and easy to forget if it turns out completely wrong, while still letting the currently young hope that they get to live forever or have their own cyber-harem or totally realistic VR or whatever.

I think it's mostly a form of mental masturbation that lets nerds feel smart and look like they're Engaged in Big Scientific Questions without having to do actual science, which is actual hard work with actual intellectual standards.

Coidzor
2014-05-10, 03:42 PM
Speculation is fun, and if we can't meaningfully discuss it, we might as well be as crazy as we can get away with.

It's the people who take things seriously, such as those people trying to bring about a machine-god, that are actually an issue, and those groups mostly just hurt themselves.

Ravens_cry
2014-05-10, 07:48 PM
I hope we can be friends.
As an anthrophile, I find the idea of humanity wiped out, enslaved, or even rendered irrelevant repellent, yet the idea of us enslaving minds, even if they are Other, to be also abhorrent.
Instead, my hope is that we can join hand in hand, flesh and blood, circuits and software, in continuing the Story in ways neither of us could ever accomplish alone, not even in a melding but, rather, in a synergy of minds, two kinds of people contributing to a greater whole.
Idealistic?
Perhaps, but, as my favourite anthropomorphic personification once said,
Yᴏᴜ Nᴇᴇᴅ Tᴏ Bᴇʟɪᴇᴠᴇ Iɴ Tʜɪɴɢs Tʜᴀᴛ Aʀᴇɴ'ᴛ Tʀᴜᴇ. Hᴏᴡ Eʟsᴇ Cᴀɴ Tʜᴇʏ Bᴇᴄᴏᴍᴇ?

ericgrau
2014-05-10, 11:17 PM
1. Stephen Hawking is not a roboticist.

2. I'll worry about it at least 5 steps after a robot can think as well as an ant can: detect an object of previously unspecified shape and move it from point A to point B. Right now that seems like an insurmountable task so it's probably at least 30 years out.

There's a long way to go, and in the meantime the more pressing concern is how to get robots to accomplish menial tasks in spite of having no humanlike intelligence.

Now mechanical things... that's something machines are much more impressive at. Something scarier is what might happen with the ability to transfer massive amounts of data in an instant, the ability to put cameras everywhere, or heck, track people's online activity, and the ability to parse all that data into well organized files that may be fetched by whatever criteria one may desire.

Slipperychicken
2014-05-11, 12:03 AM
Something scarier is what might happen with the ability to transfer massive amounts of data in an instant, the ability to put cameras everywhere, or heck, track people's online activity, and the ability to parse all that data into well organized files that may be fetched by whatever criteria one may desire.

We can already do most of that*. It's called data mining, and any database worth its bytes can be queried just fine (technically, you're probably querying a copy of the database called a "data warehouse", but I digress). The really nice part is when we use all that data to create accurate predictive models and thereby help make our programs smarter by the day.

*(except for the cameras bit. It's currently hard to store all that video/audio data. We don't really need cameras to see everyone though. Just tracking phones or implants with some kind of RFID or similar technology would be sufficient for most purposes)
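
For a concrete toy picture of the 'query the warehouse copy, then feed something predictive' idea, here's a sketch against an in-memory SQLite table standing in for a warehouse (the table, columns and crude "model" are all made up for illustration):

import sqlite3
from collections import defaultdict

# Stand-in warehouse: a tiny purchases table in memory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (customer_id INTEGER, category TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO purchases VALUES (?, ?, ?)",
    [(1, "books", 12.0), (1, "books", 8.5), (2, "games", 60.0), (2, "books", 5.0)],
)

# The data-mining step: total spend per customer per category.
rows = conn.execute(
    "SELECT customer_id, category, SUM(amount) FROM purchases "
    "GROUP BY customer_id, category"
).fetchall()

# A "predictive model" in its crudest form: recommend each customer's top category.
best = defaultdict(lambda: ("", 0.0))
for cust, cat, total in rows:
    if total > best[cust][1]:
        best[cust] = (cat, total)
print({cust: cat for cust, (cat, _) in best.items()})  # e.g. {1: 'books', 2: 'games'}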

EDIT: It'll be even scarier if we get to the "internet of things", where basically all objects contain computers which report usage statistics.

137beth
2014-05-11, 12:04 AM
Eh, I'm not really worried. I'm safe until Automated Theorem Proving gets way, way better than it is now. And when that happens, I'll be excited enough about the newly discovered math that I won't mind:smalltongue: unless the machines are completely corporate controlled and I starve/dehydrate before getting a chance to find out about what is being discovered.

ericgrau
2014-05-11, 12:14 AM
We can already do most of that*. It's called data mining, and any database worth its bytes can be queried just fine (technically, you're probably querying a copy of the database called a "data warehouse", but I digress). The really nice part is when we use all that data to create accurate predictive models and thereby help make our programs smarter by the day.

*(except for the cameras bit. It's currently hard to store all that video/audio data. We don't really need cameras to see everyone though. Just tracking phones or implants with some kind of RFID or similar technology would be sufficient for most purposes)

Oh yeah, it is already happening. We are getting closer to the point of being able to transfer and store large amounts of video and audio. Parsing it is another story, though; still, we can pull enough bits and pieces out of the audio to build a general map. Video is a lot harder. Simpler data processing and measurement is already taking off. There's the risk of losing all privacy and of the wrong people being able to track you. Beyond that, there's the vast potential benefit of nearly everything becoming automatic, along with the risk that some things may become (or already are) automatic in a terribly incorrect way, especially when a manual override is prevented or difficult.

Or in other words, I'm more scared of artificial unintelligence at the moment. I'm not saying that AI wouldn't be a problem, just that AU is more likely to actually exist and cause problems within the next 200 years. Likewise, unintelligent machines can have vast benefits if done properly. So the more relevant and present concern is making sure mechanization is done right rather than wrong.

Slipperychicken
2014-05-11, 12:38 AM
Oh yeah, it is already happening. We are getting closer to the point of being able to transfer and store large amounts of video and audio. Parsing it is another story, though; still, we can pull enough bits and pieces out of the audio to build a general map. Video is a lot harder. Simpler data processing and measurement is already taking off. There's the risk of losing all privacy and of the wrong people being able to track you. Beyond that, there's the vast potential benefit of nearly everything becoming automatic, along with the risk that some things may become (or already are) automatic in a terribly incorrect way, especially when a manual override is prevented or difficult.

I see that as a flaw in design, though. You need ways to turn the system off if it isn't working right (or even if you just need to do maintenance or upgrade it). It's like saying that machine-made products are going to have errors: Of course they are, but so are human-made products, and once the machines are sophisticated enough, they'll make fewer errors (or at least have fewer losses resulting from those errors) than we do.



Or in other words, I'm more scared of artificial unintelligence at the moment. I'm not saying that AI wouldn't be a problem, just that AU is more likely to actually exist and cause problems within the next 200 years. Likewise, unintelligent machines can have vast benefits if done properly. So the more relevant and present concern is making sure mechanization is done right rather than wrong.

The good thing about artificial stupidity is that it tends to be corrected more easily than natural stupidity. Also, computer programs typically don't get angry when you tell them they're doing things wrong.

ericgrau
2014-05-11, 12:47 AM
I see that as a flaw in design, though.

It is. And it should not happen. But it does. So we need to remember to retain manual options. Unfortunately, the worst automatic systems also tend to be the ones most likely to make it hard for the user to override them. I'm looking at you, Microsoft. At least they're not as bad as they used to be.

Yeah, the answer is straightforward in principle, but a lot of work in practice. We need to make sure we do it, or lazy coders won't. On the user end: the next time your software or device screws up and seems hard to make it stop, it's not your fault. You can find a workaround with a bit of work, but some manufacturer did in fact screw up, so go look around for something better if you can. And when they give you a 37-step "solution" of things to check, yes, they really are being total ***** because they really could prevent it but don't want to bother.
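
Or, in code terms, a bare-bones sketch of the principle (purely hypothetical, not any real product's API): the automatic logic should always check for a manual setting first, and the manual setting should always win.

# Hypothetical "retain manual options" pattern: the automatic behaviour only
# kicks in when the user hasn't set anything, and a manual setting always wins.

AUTO = object()  # sentinel meaning "let the system decide"

def effective_brightness(ambient_light_lux: float, manual_setting=AUTO) -> float:
    """Return a screen brightness in [0, 1]; a manual setting always wins."""
    if manual_setting is not AUTO:
        return manual_setting
    # Automatic behaviour: scale brightness with ambient light, within limits.
    return min(1.0, max(0.2, ambient_light_lux / 1000.0))

print(effective_brightness(300.0))        # automatic decision: 0.3
print(effective_brightness(300.0, 1.0))   # user override: 1.0, no questions asked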

Max™
2014-05-11, 03:27 AM
It seems we might be closer to ant intelligence robots than some earlier posts suggested: http://www.bbc.co.uk/news/21956795


"Each individual robot is pretty dumb," said Simon Garnier from the New Jersey Institute of Technology, lead researcher on the study. "They have very limited memory and limited processing power."

"By themselves, each robot would just move around randomly and get lost... but [they] are able to work together and communicate."

This is because, like ants, the robots leave a trail that the others follow; while ants leave a trail of chemicals - or pheromones - that their nest mates are able to sniff out, the robots leave a trail of light.

"You don't need something as complex as choice to get some of the behaviour you see in ants" - Dr Paul Graham, University of Sussex

To achieve this, the researchers set up a camera to track the path of each robot. A projector connected to the camera then produced a spot of light at regular intervals along their route, leaving a "breadcrumb trail" of light that got brighter every time another robot tracked over the same path.

Dr Garnier explained: "[The robots each] have two antennae on top, which are light sensors. If more light falls on their left sensor they turn left, and if more light falls on the right sensor, they turn right."

"It's exactly the same mechanism as ants."

The researcher explained how both the robots and ants worked together, describing their navigation skills as a "positive feedback loop".

"If there are two possible paths from A to B and one is twice as long as [the other], at the beginning, the ants [or] robots start using each path equally.

"Because ants taking the shorter path travel faster, the amount of pheromone (or light) deposited on that path grows faster, so more ants use that path."

Karoht
2014-05-11, 03:53 PM
2. I'll worry about it at least 5 steps after a robot can think as well as an ant can: detect an object of previously unspecified shape and move it from point A to point B. Right now that seems like an insurmountable task so it's probably at least 30 years out.
Have you heard of Asimo? He's kind of already doing that. Been doing it for a while, actually.
Let's not forget Google's self-driving car, which can recognize all kinds of shapes necessary to navigate moving traffic more safely than any human being. And DARPA supposedly has an even better model in the works; it's smart enough to tell the difference between roads, rocks, and roadside bombs, and it can even tell what kind of terrain it's driving on (or about to drive on) and adjust its drivetrain and even tire pressure in response to that data.


Consider the hardware behind something like IBM Watson or Deep Blue and imagine installing one of those at every KFC, Walmart cashier, etc.
Carl Jr's has you covered with a non-AI solution. McDonald's has been toying with this idea for a really long time.
http://hospitalitytechnology.edgl.com/news/Carl-s-Jr-,-La-Salsa-Get-Automated53191

Remember, not all automation requires AI.


And now I present, for your reading pleasure, a short story/essay entitled "Manna", which covers most of the subject matter this thread has covered or will cover. This story was written 11 years ago, about 3 years before Carl Jr's opened their automated shop, by a fellow named Marshall Brain. Enjoy!
http://marshallbrain.com/manna1.htm

NichG
2014-05-11, 05:01 PM
Carl Jr's has you covered with a non-AI solution. McDonald's has been toying with this idea for a really long time.
http://hospitalitytechnology.edgl.com/news/Carl-s-Jr-,-La-Salsa-Get-Automated53191

Remember, not all automation requires AI.


Automation is really a distinct, and much much older phenomenon. I'm not sure what the first example of automation would be, but it goes back at least as far as the printing press.

Karoht
2014-05-11, 05:59 PM
Automation is really a distinct, and much much older phenomenon. I'm not sure what the first example of automation would be, but it goes back at least as far as the printing press.
Agreed, though this doesn't diminish the example. You don't need a Deep Blue or a Watson to run a fast food place. You don't need a massive computer to run a self driving google car. The tech already exists to solve the problem, AI or not.
Did you read the short story? What are your thoughts?

NichG
2014-05-11, 06:31 PM
Agreed, though this doesn't diminish the example. You don't need a Deep Blue or a Watson to run a fast food place. You don't need a massive computer to run a self driving google car. The tech already exists to solve the problem, AI or not.
Did you read the short story? What are your thoughts?

Mostly I'm trying to avoid a discussion of automation in particular because it's more of a political issue than a technological one. The effect of automation hits different groups of people differently, so the end result is that there are going to be a lot of different pushes and pulls on how people want to react to it - there isn't an absolute truth about it being either 'good' or 'bad' - for each person it will possibly be good or bad depending on their situation, and it's not possible to actually address everyone's concerns. In the long run, technological development of any kind is a fundamentally disruptive event - it causes short-term harm due to the added effort people need to make to adapt to a changing world, but in the long term it can potentially have huge benefits. It comes down to the individual people to figure out whether or not they can survive the short-term disruption so they (or their grandkids) can reap the long-term benefits (to the extent that they can actually predict them with any sort of accuracy). That of course makes it politically thorny.

The philosophical issue may be something we can still discuss within the forum's constraints though. Philosophically, humanity has been gradually weaning itself off a need-based behavioral model. Generally that has been slow and highly stratified, so 'need' still drives the behavior of, say, something like 50-70% of human activity, but there are subsets of humanity where that percentage is much smaller. That means that we're basically going to have to develop a new way of thinking about life when that percentage drops to 10-20% or less. Right now a lot of people's drives are centered around 'feeling useful/important/necessary'; eventually that's going to give way when you have a few generations of people who have grown up in a society without need, and something else will have to be found instead to be a strong driver of human behavior. Without having a philosophical imperative like that, many people become a bit self-destructive.

That said, we have lots of examples of people who do live lives that are not dominated by necessity to build those kinds of ideas from. People may dedicate themselves to excellence in a particular field, find satisfaction in routine, pursue hedonism, etc. These philosophies aren't really new, but for most people they're still impractical to pursue as a central drive. We've also started to see some of these ideas get tested.

For instance, pursuing excellence: the scale of the world's population means that if you're pursuing excellence then you aren't going to be the world's best in your field - no matter how hard you work, there will be a number of people better than you at your specialty. The trick then is to attain a philosophy where you can still be fulfilled even if there are others who are better at things than you are. E.g. if you realize that Bobby Fischer's existence doesn't ruin chess for you, then you can also realize that you can enjoy chess even if a computer will play a better game of chess than any human possibly can.

Coidzor
2014-05-11, 06:32 PM
I hope we can be friends.
As an anthrophile, I find the idea of humanity being wiped out, enslaved, or even rendered irrelevant repellent, yet I find the idea of us enslaving minds, even if they are Other, just as abhorrent.
Instead, my hope is that we can join hand in hand, flesh and blood, circuits and software, in continuing the Story in ways neither of us could ever accomplish alone, not even in a melding but, rather, in a synergy of minds, two kinds of people contributing to a greater whole.
Idealistic?
Perhaps, but, as my favourite anthropomorphic personification once said,
Yᴏᴜ Nᴇᴇᴅ Tᴏ Bᴇʟɪᴇᴠᴇ Iɴ Tʜɪɴɢs Tʜᴀᴛ Aʀᴇɴ'ᴛ Tʀᴜᴇ. Hᴏᴡ Eʟsᴇ Cᴀɴ Tʜᴇʏ Bᴇᴄᴏᴍᴇ?

I'd like it to be the case, but I feel that social development would need to lead to a humanity or transhumanity I could not recognize before we were mature enough to create true AI persons without them being slaves, at least initially. Similar to my concerns about our being ethically ready for uplifting other species.

That's part of why I much prefer AI without personhood: that way the tool is a tool, and not a person made into a tool.

Solse
2014-05-11, 07:21 PM
Incidentally, to those who claim the power switch is the best way to deal with it, there's also the concern for the AI "escaping" into the rest of the world, possibly through the internet or other devices, making it impossible to actually shut down. For those skeptical of the possibility, look up Eliezer Yudkowsky's AI Box Experiment. The gist of it was that he had about 2 hours to convince a skeptic, playing a sort of "gatekeeper," to let him, playing an AI, out of the "box" he was kept in. If the other person refused (through whatever means; logic, illogic, refusal to be convinced, dropping character--the only caveat was that they had to actually interact for the two hours), they won money. So in a situation where someone self-evidently no smarter than a human had to convince someone to take action, when all the other person had to do was nothing and they'd get real money, the AI was let out of the box. Twice.

This is absolutely true. If everybody in the world was intelligent and reasonable, a simple off switch would work flawlessly. However, there are those who are not intelligent enough to be able to contain an AI, and that is where the real danger lies.

Coidzor
2014-05-11, 08:11 PM
This is absolutely true. If everybody in the world was intelligent and reasonable, a simple off switch would work flawlessly. However, there are those who are not intelligent enough to be able to contain an AI, and that is where the real danger lies.

Insulating those people from exposure to AI in development, and housing it in a controlled environment where wireless access was not something easily available to it or to any humans suborned by it, would be ideal, yes.

I mean, I'm sure I must be missing something huge here, but at least some of it should be handled by semi-sane security protocols. :smallconfused:

NichG
2014-05-11, 08:35 PM
The problem with the discussion on containment is that, let's say we figure out the algorithms that would implement a functional AI. That information has a much broader existence than any particular computer in a lab somewhere. You're discussing how to prevent a particular implementation from escaping, but the dissemination of knowledge of how to construct an AI is, basically, that AI escaping. Maybe the particular one that was used to develop the idea doesn't escape, but an arbitrary number of equivalent ones can be built along those lines outside of whatever the 'cage' is.

The thing that's immediately disruptive isn't actually the actions of a particular computational intelligence - it's the understanding of the processes of sentience/intelligence that forces us to look at them in a different light and think about the world and ourselves in a different way.

Max™
2014-05-11, 09:33 PM
Insulating those people from exposure to AI in development, and housing it in a controlled environment where wireless access was not something easily available to it or to any humans suborned by it, would be ideal, yes.

I mean, I'm sure I must be missing something huge here, but at least some of it should be handled by semi-sane security protocols. :smallconfused:

Where could you put something so that it isn't accessible wirelessly? That is kinda the point of wireless information systems.

Grytorm
2014-05-11, 10:08 PM
Where could you put something so that it isn't accessible wirelessly? That is kinda the point of wireless information systems.

I would assume it would consist in not installing hardware into the system that would allow it to be accessed wirelessly.

Ravens_cry
2014-05-11, 10:16 PM
I'd like it to be the case, but I feel that social development would need to lead to a humanity or transhumanity I could not recognize before we were mature enough to create true AI persons without them being slaves, at least initially. Similar to my concerns about our being ethically ready for uplifting other species.

That's part of why I much prefer AI without personhood: that way the tool is a tool, and not a person made into a tool.
Well, while I do not want it to happen at all, it happening after an initial period of unrest is better than not happening at all.