
View Full Version : Is the Singularity possible?



Scowling Dragon
2017-10-18, 04:42 PM
I'm just in a bit of a funk here. I'm not a scientist, but I feel like I only ever encounter either clickbait saying that AI will become gods, or pieces saying that the people who say AI will become gods are idiots, and it seems to go on forever.

Yuki Akuma
2017-10-18, 04:45 PM
The "singularity" is simply the point at which self-improving AI gets so smart they can 'think' of ideas that humans don't actually understand.

So whether the "singularity" can happen depends on two things - whether it's possible for AIs to improve themselves past the point humans can do it for them, and whether it's possible for there to be ideas human brains simply can't understand.

In some ways, we've already reached a sort of 'soft singularity'. By which I mean that there are evolutionary design algorithms that already come up with ideas humans don't tend to think of, although so far we can still reverse engineer them to find out how they work. There are also computer-generated mathematical proofs that are too long for any human to check in practice, although theoretically we could check them.

Lvl 2 Expert
2017-10-18, 04:57 PM
Short answer: there's one way to find out.

In some areas of research progress may already be slowing. Take metallurgy, what have the real breakthroughs been since say aluminium and titanium? Transportation made leaps by going from horses to airplanes within a couple of decades, and has been stuck on "we can reach the moon" for half a century now. On the other hand biology doesn't seem nearly over its peak, and neither does computer science, let alone the study of non-computerlike intelligence, including true AI. But all of that is guesswork because we don't know what there is to invent. Will we ever travel faster than light, for instance? We don't know, even though NASA is currently researching the topic.

I'm not holding my breath for the singularity as people typically describe it, not in my lifetime. (It feels like a sort of good bet to make because I don't expect to get very old.) But there might eventually be some point in time that's sort of similar to it, minus the "end of the world but in a weird way" undertones that sometimes follow, where people predict we'll all live a billion years within an actual second and then disappear forever or that kind of deal.

But don't discount the opposite possibility: that when people look back on it from a long time from now, we might already be close to, or even just past, the point of fastest technological progress.

Scowling Dragon
2017-10-18, 05:03 PM
Honestly, I feel like I should continue living my life under the principle that articles that have a question as the article title are almost always answered with a resounding NO.

It's just my medication acting up and causing me to hyper-focus.

Tvtyrant
2017-10-18, 05:21 PM
I'm just in a bit of a funk here. I'm not a scientist, but I feel like I only ever encounter either clickbait saying that AI will become gods, or pieces saying that the people who say AI will become gods are idiots, and it seems to go on forever.

The biggest thing is when computers can make computers that are smarter than themselves, and then repeat that for a few cycles. The implication is that at some point AI will design AI, and the takeoff in intelligence will happen so rapidly it will leave us behind.

It isn't so much that computers will be smarter than us (spoiler, they are now). The implication is they will keep getting smarter at a rate beyond our control.

Douglas
2017-10-18, 05:41 PM
(spoiler, they are now)
Only in specific narrowly defined ways. There are still many things a human brain can do that no AI is even close to accomplishing. They're getting there, but there's still a long way to go.

Anymage
2017-10-18, 05:45 PM
AI = Singularity does make a few assumptions. And while it would happen at the rate of technological growth, which is crazy fast compared to biological time scales, technological growth doesn't mean that something world-changing will happen tomorrow.

Assuming that it's possible to have an AI the way we usually think of them, The Singularity is all but inevitable. (The word that nerds in the know like to use is artificial general intelligence, so something that has general adaptability and self-direction instead of just optimizing for what we put in front of it.) Smart people can make good computers and good technology, and can use those good computers and technology to make even better computers and technology. Better technology being used to make even better technology is how a lot of our advancements have come about, but we're still limited to human smarts. And the upper bound there hasn't changed significantly since the days of Einstein or even Newton. Once we can start making computers that are smarter than the smartest people out there, or at least be able to mass-produce Einstein level intellects, it stands to reason that they'll be able to make even smarter computers. And so on. (There's a theoretical limit to the maximum computing power you can fit into a given area of space, but there's no reason to believe our brains are anywhere near that point.)

Having said all that, your computer doesn't just spontaneously upgrade itself when new advances come out. You have to go out and buy new hardware. Super-brainy AI will have to have new parts manufactured in order to make even brainier AI, and will most likely have to do a lot of experimentation and stress-testing on the new tech. And thinking takes energy, which means that the AI will be bound by human manufacturing and energy infrastructures until far in the future. That's looking at decades of coexistence at minimum.

Which leads to the big issue that doomsayers like to raise. We won't be the big kids on the block any more, and the big kids may well wind up not caring about us. (In less of a "kill all humans" sci-fi premise, more in that they'll have no reason to care about doing things like altering the environment in ways that we meatbags can't survive.) On the other hand, there are lots of places in the universe where a computer can get along just fine, gathering energy and raw materials, that we meatbags aren't designed to survive in. So whether we get the Jetsons, complete environmental collapse, or just have the AIs leave to explore the rest of the universe is still a big unknown.

Scowling Dragon
2017-10-18, 06:26 PM
AI = Singularity does make a few assumptions. And while it would happen at the rate of technological growth, which is crazy fast compared to biological time scales, technological growth doesn't mean that something world-changing will happen tomorrow.

I just find it so... iffy and full of assumptions. The way technology advances is by finding how to create more of an effect with fewer resources, faster. I just find it odd to assume that this could go on forever and not reach, if not a hard limit, then at least a practical limitation.
Our tech is way more advanced, but it also decays faster and is many times more fragile than before, with resource pipelines also becoming highly fragile and interdependent.

I see it maybe producing a super-advanced AI, but probably going no further unless basic laws of physics are overcome.

NichG
2017-10-18, 06:41 PM
For me, it's a big tell that the singularity folk (deity and devil sides alike) are all using very dated conceptions of AI - 60s-era symbolic and propositional logic engines. Seeing talks lately from this crowd, they're filled with paintings of philosophers from centuries ago up to, as the most modern image, the Enterprise from the original series. When you try to bring up empirical evidence from recent AI developments, the response is 'but that's not the kind of AI we mean'.

We get to hear about it more now because AI has become a boom field and has been very successful in many ways, so AI-related stuff is discussed more. But in terms of the singularity story, reality has gone a different direction than the narrative.

The newer kinds of AI are data-bottlenecked, not speed-bottlenecked. What that means is that even if you make them faster, they don't become much, if at all, smarter. They just become more convenient for humans to make. For example, Google's first neural machine translation network would cost about $400,000 to train on AWS. They recently came out with an attentional one that trains much faster - I could train it on my home machine in a week - but the actual accuracy in translation is almost the same despite the five-order-of-magnitude improvement elsewhere. Meanwhile, a company that hired a dedicated team of human translators to produce a carefully curated dataset achieved higher accuracy with an older, simpler network.

In simulatable environments, speed = data, but it's not the speed of the AI part that matters, it's the speed of the simulation. To speed that up with AI techniques you'd again need data - on chip designs or simulation design or whatnot.

So this sort of genius-in-a-box plotting to take over the world just doesn't look at all like the technologies that are actually improving right now. It's a story that makes sense if you believe that logic in the absence of evidence is sufficient to produce indefinite technological progress, but it ignores that whole 'looking at the world to check if you're right' part of science.

Anymage
2017-10-18, 06:48 PM
I just find it so... iffy and full of assumptions. The way technology advances is by finding how to create more of an effect with fewer resources, faster. I just find it odd to assume that this could go on forever and not reach, if not a hard limit, then at least a practical limitation.
Our tech is way more advanced, but it also decays faster and is many times more fragile than before, with resource pipelines also becoming highly fragile and interdependent.

I see it maybe producing a super-advanced AI, but probably going no further unless basic laws of physics are overcome.

To slightly quibble while agreeing with your general point. In the far future, when machines can disassemble asteroids for parts, they'll be able to do crazy stuff in other areas of technology as well.

But yeah. While AI would mean that we'll wind up sharing the planet with a lot of "people" who don't necessarily share human outlooks or values, it's going to be a gradual process over the course of decades. Anyone who tells you that one team of scientists will make their computer too smart and the next day we'll have robot overlords has no idea what they're talking about.

Scowling Dragon
2017-10-18, 07:01 PM
To slightly quibble while agreeing with your general point. In the far future, when machines can disassemble asteroids for parts, they'll be able to do crazy stuff in other areas of technology as well.

My question is how it will again bump up against hard scaling problems. Like the misconception that a bigger brain is better. Well, it isn't. It's density over size.

But density = small, and that runs into my vulnerability problem. You're essentially going to require a machine that's more like an organism than anything we understand now, but then again that has its own resource problems.

In a funny way, an AI dense enough to have a super-smart brain would be vulnerable to colds or minor bacteria, or heck, even vulnerable to its own nanobots, and could get machine cancer.

NichG
2017-10-18, 07:17 PM
Heat dissipation is the problem. Power consumption goes as the cube of clock frequency. Slow-and-distributed is more efficient than fast-and-dense in the long run if we're talking total computational capacity.

We're the ones who want AI to be fast. AI itself would only have to care about the timescale of the thing it's going to interact with.

Machines are unlikely to get cancer since large-scale self-repair is in most cases less efficient than just mass-producing new hardware. Mind-cancer from reproducing memes maybe.
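(A minimal sketch of that cube-law point, assuming the simplified P ~ f^3 dynamic-power scaling cited above, a made-up 30 W per core at 3 GHz, and perfectly parallelizable work; it just compares one fast core against two slow ones at equal total core-GHz.)

# Toy comparison under an assumed P ~ f^3 power scaling.
# Throughput is treated as proportional to cores * frequency; real workloads
# rarely parallelize this cleanly, so read this as an upper bound on the win.

def power(cores, freq_ghz, watts_per_core_at_3ghz=30.0):
    # scale a nominal 30 W/core at 3 GHz by the cube of the frequency ratio
    return cores * watts_per_core_at_3ghz * (freq_ghz / 3.0) ** 3

fast_dense = power(cores=1, freq_ghz=3.0)   # one 3 GHz core
slow_spread = power(cores=2, freq_ghz=1.5)  # same total core-GHz, two 1.5 GHz cores

print(f"1 x 3.0 GHz: {fast_dense:.1f} W")   # 30.0 W
print(f"2 x 1.5 GHz: {slow_spread:.1f} W")  # 7.5 W for the same nominal throughput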

Scowling Dragon
2017-10-18, 07:22 PM
Heat dissipation is the problem.

That's another good point.


We're the ones who want AI to be fast. AI itself would only have to care about the timescale of the thing it's going to interact with.

That's another interesting point.


Would there also be competition between AI? Just thinking.

Yuki Akuma
2017-10-18, 07:23 PM
There's no particular reason we have to make AIs all in one place. Distributed computing has been a thing for decades. Redundancy is how computers are designed now, let alone in the future.

And there's also the thing where multiple low-frequency cores are better at most things than a single high-frequency core. Why do you think your CPU is barely 3 GHz but has four to eight cores?

As for competition between AIs, maybe? It depends on whether every AI wants the same thing, and whether there's a limited amount of that thing.

Scowling Dragon
2017-10-18, 07:31 PM
There's no particular reason we have to make AIs all in one place.

The topic is the singularity, not AI in general. I mean, if the singularity then turns out to be very, very slow, that kind of undermines itself. That becomes a fault in and of itself.

Yuki Akuma
2017-10-18, 07:50 PM
The topic is the singularity, not AI in general. I mean, if the singularity then turns out to be very, very slow, that kind of undermines itself. That becomes a fault in and of itself.

AI is rather crucial to the entire idea of the "singularity".

The entire point of distributed computing is to be able to do lots of calculations at the same time - each calculation may not be lightning fast, but you'll still end up doing more calculations than if you had only used a single thread.

NichG
2017-10-18, 07:58 PM
The topic is the singularity, not AI in general. I mean, if the singularity then turns out to be very, very slow, that kind of undermines itself. That becomes a fault in and of itself.

This is a problem with the idea of the singularity. Even the fantasized optimal genius AIs shouldn't even want to be infinitely fast because it would waste resources.

Scowling Dragon
2017-10-18, 08:07 PM
AI is rather crucial to the entire idea of the "singularity".

The entire point of distributed computing is to be able to do lots of calculations at the same time - each calculation may not be lightning fast, but you'll still end up doing more calculations than if you had only used a single thread.

Then that's just mass processing. Where is the "smarterness that gets smarterer"?
I mean, in theory human population growth is then the Singularity, because every human is more calculations.
I guess this just goes to show that what "smarterer" even means is very subjective.

Douglas
2017-10-18, 09:06 PM
This is a problem with the idea of the singularity. Even the fantasized optimal genius AIs shouldn't even want to be infinitely fast because it would waste resources.
Computation speed has very little to do with the idea of the singularity. Algorithm design is a lot more central to it, and having a more capable and efficient algorithm is quite the opposite of wasting resources.

NichG
2017-10-18, 10:07 PM
Computation speed has very little to do with the idea of the singularity. Algorithm design is a lot more central to it, and having a more capable and efficient algorithm is quite the opposite of wasting resources.

When people say 'computers will start to self-improve so quickly that humans can't keep up or retain control' they're implicitly arguing that speed is the thing being optimized, and as a result the time-to-next-improvement shrinks as it gets optimized. The argument in things like Bostrom's stuff is roughly, 'intelligence is a scalar quantity which measures the rate at which an agent can solve tasks; once improving intelligence becomes a task, there's a positive feedback, therefore boom'. You could pick it apart and ask whether you'd expect a sinc pulse, an exponential growth process in intensive quantities, or whatever, but the basic idea is contingent on there being an open-ended speed-up.
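(To make that feedback argument concrete, here's one way to formalize it; the functional forms are illustrative assumptions on my part, not anything Bostrom commits to. Treating intelligence as a scalar $I(t)$:

$\frac{dI}{dt} = kI \;\Rightarrow\; I(t) = I_0 e^{kt}$, which grows fast but stays finite at every finite time, whereas

$\frac{dI}{dt} = kI^{p},\; p > 1 \;\Rightarrow\; I(t) = I_0\left(1 - (p-1)\,k\,I_0^{\,p-1}\,t\right)^{-1/(p-1)}$,

which diverges at the finite time $t^{*} = 1/\big((p-1)\,k\,I_0^{\,p-1}\big)$, a literal mathematical singularity. Whether recursive self-improvement behaves like the first case, the second, or simply saturates is exactly the open question.)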

Otherwise you just have a 'progress as usual' situation. New technology becoming pervasive exponentially fast? We already went through that several times in human history. I'd even argue that to make a blip, you'd have to do something more severe than add just another exponential improvement process (after all, we've been living with Moore's Law for some time now) before the new state of things is qualitatively different enough to warrant a term like 'singularity'.

Douglas
2017-10-18, 10:29 PM
Even with such a narrowly specific definition of intelligence as that, increasing computation speed is only one way to increase the rate of task solving. The other is to increase the ratio of task solving speed to computation speed, and that's (part of) what a better algorithm could do.

More important to the idea of the implications of the Singularity is the breadth and variety of tasks an agent can solve, and the quality of the solutions the agent can produce. Without those, at most you'd end up with some computers that could very quickly and very efficiently do exactly the same things the computers you already had could do. That's certainly a nice thing to have, but it's far from the world-shaking significance of the Singularity scenario.

NichG
2017-10-19, 12:50 AM
Even with such a narrowly specific definition of intelligence as that, increasing computation speed is only one way to increase the rate of task solving. The other is to increase the ratio of task solving speed to computation speed, and that's (part of) what a better algorithm could do.

More important to the idea of the implications of the Singularity is the breadth and variety of tasks an agent can solve, and the quality of the solutions the agent can produce. Without those, at most you'd end up with some computers that could very quickly and very efficiently do exactly the same things the computers you already had could do. That's certainly a nice thing to have, but it's far from the world-shaking significance of the Singularity scenario.

There's a difference between the specific 'singularity' claims of Kurzweil, Bostrom, etc and just 'things get better' though.

Algorithms have become more efficient over the last 60+ years. Computers have become faster over the last 60+ years. Costs have dropped over the last 60+ years. These already follow various exponential curves associated with a wide variety of factors (not all of which are intrinsic improvement - many have already transitioned to being sustained more by market growth than by technological improvement - the shift away from increasing clock speed to adding more cores, for example). Computers can't get infinitely fast, algorithms can't get infinitely efficient, etc, so you see qualitative changes in how the industry tries to maintain that exponential improvement, but overall it's more or less a familiar situation. Probably most people here have not actually lived through an extended period of human history in which some kind of exponential growth curve hasn't existed (if you were around for WW2, that's an example of an era where it was disrupted).

The singularity claim is distinct in that it suggests (by name) a specific point in time at which the rate of acceleration becomes so fast that you would go to sleep one night and wake up next morning to a transformed world - it suggests that this would be a specific short interval in time due to feedbacks inherent in self-improving systems.

That specific claim goes against reality because - as mentioned - we actually have had decades under the activity of 'self-improving systems' - specifically, us. The phenomena associated with that aren't inherently singular, either in the mathematical sense, or in the sense of a single transformative moment.

Grinner
2017-10-19, 01:05 AM
To the original question, I would note that the kind of polarized opinions held on the subject are not limited to the concepts of AI and the Singularity. I've been watching a lot of UFO documentaries instead of normal TV lately, and I've noticed that a similar dynamic exists among UFO documentary series. Ancient Aliens, for example, tends to be very enthusiastic about the idea of aliens, welcoming them as distant Messiahs, whereas Hangar 1 seems to report more stories about shadowy collaborations between human governments and aliens, people dying after close encounters, and a whole episode on the Solar Warden conspiracy theory.

To the topic of distributed computing as being more optimal in the long-term, I would respond with a resounding "Maybe?". It might be more efficient physically, but there exists the possibility that the algorithms needed in a distributed mode to meet some abstract measure of intelligence are less efficient. So less efficient, in fact, that the computational inefficiency balances out whatever gains you might have made. I guess it might depend on what kind of distributed environment you're talking about (centralized or decentralized), of course, but I don't see a clear way of mapping distributed processes to local processes, in the decentralized case at the very least.


The singularity claim is distinct in that it suggests (by name) a specific point in time at which the rate of acceleration becomes so fast that you would go to sleep one night and wake up next morning to a transformed world - it suggests that this would be a specific short interval in time due to feedbacks inherent in self-improving systems.

That specific claim goes against reality because - as mentioned - we actually have had decades under the activity of 'self-improving systems' - specifically, us. The phenomena associated with that aren't inherently singular, either in the mathematical sense, or in the sense of a single transformative moment.

My understanding of the "Singularity" is that it's the point in time where a self-improving system becomes so improved that we can no longer project its future behavior. This is taken to be an exponential process because the self-improving systems are assumed to be of the recursively self-improving variety, which humans do not presently fall under (individually, at least?). Realistically, I'd say a bottleneck exists such that the first case is untenable; a recursively self-improving system cannot expect the processes remote to itself to evolve on a similar timescale (i.e. society won't adapt to its presence overnight).

Anymage
2017-10-19, 01:26 AM
The singularity claim is distinct in that it suggests (by name) a specific point in time at which the rate of acceleration becomes so fast that you would go to sleep one night and wake up next morning to a transformed world - it suggests that this would be a specific short interval in time due to feedbacks inherent in self-improving systems.

It's worth talking about a soft singularity, the development of a fully autonomous general AI and/or the development of a computer that can make a better computer than any human could. Both of these could have a radically transformative effect on the planet, and we might as well call them "singularity" because that's what speculative AI thinkers called it back before they knew how the process would really work itself out.

NichG
2017-10-19, 01:45 AM
To the topic of distributed computing as being more optimal in the long-term, I would respond with a resounding "Maybe?". It might be more efficient physically, but there exists the possibility that the algorithms needed in a distributed mode to meet some abstract measure of intelligence are less efficient. So less efficient, in fact, that the computational inefficiency balances out whatever gains you might have made. I guess it might depend on what kind of distributed environment you're talking about (centralized or decentralized), of course, but I don't see a clear way of mapping distributed processes to local processes, in the decentralized case at the very least.

You wouldn't even necessarily have to run different algorithms, since the computation doesn't really care where it gets done. Even underclocking a processor could be worth it. A quick back-of-the-envelope estimate suggests that a modern PC running at 400W (something like 2 cores at 3GHz, no GPU) costs about $2000 in electricity over the course of 5 years. That machine would cost maybe $800 to $1000 to purchase and would fairly reliably last for those 5 years. If you follow the N^3 power vs frequency scaling, to maximize computing per dollar it'd be better to run it at something like 1.5 GHz - you save $1500 in electricity but pay an extra $1000 in parts for the same amount of computation. And that's assuming that the wear and tear on the machine isn't reduced by running it colder, that you couldn't get cheaper parts e.g. a weaker power supply or an older processor, etc.

GPU design already kind of goes this route a little bit - CUDA cores tend to be around 1GHz.
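(Spelling that estimate out as a quick script; the $0.12/kWh rate, the $1000-per-box price, and the assumption that the whole 400 W follows the cube law are my own simplifications of the numbers in the post.)

# Back-of-the-envelope: one 3 GHz box vs. two boxes underclocked to 1.5 GHz,
# keeping total core-GHz (and hence nominal throughput) the same.

RATE = 0.12           # assumed $/kWh
HOURS = 24 * 365 * 5  # five years of continuous operation
BOX_COST = 1000       # assumed purchase price per machine

def electricity(watts):
    return watts * HOURS / 1000 * RATE

full_speed = electricity(400) + BOX_COST               # one box at 3 GHz
half_speed = 2 * (electricity(400 / 2**3) + BOX_COST)  # two boxes at 1.5 GHz, cube-law power

print(f"1 x 3.0 GHz: ${full_speed:,.0f} over 5 years")  # ~$3,100
print(f"2 x 1.5 GHz: ${half_speed:,.0f} over 5 years")  # ~$2,500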


My understanding of the "Singularity" is that it's the point in time where a self-improving system becomes so improved that we can longer project its future behavior. This is taken to be an exponential process because the self-improving systems are assumed to be of the recursively self-improving variety, which humans do not presently fall under (individually, at least?). Realistically, I'd say a bottleneck exists such that the first case in untenable; a recursively self-improving system cannot expect the processes remote to itself to evolve on a similar timescale (i.e. society won't adapt to its presence overnight).

Humans build tools, which increase their tool-building capacities. Humans design educational systems and teach the next generation, which increase their learning capacities. I don't think we even need to go to self-improving systems to find examples of things where we fail to project their future behavior, either...

Edit: And to Anymage's point, we already make better (technology of your choosing here) than humans could do unaided. Circuit simulators, statistical methodologies that let us distinguish effects we can't reliably notice or think through on our own, optimizations used in the design process, etc are all examples of this. Modern AI stuff adds to this pile, but it's not a fundamentally new thing with respect to that particular aspect.

Grinner
2017-10-19, 02:28 AM
You wouldn't even necessarily have to run different algorithms, since the computation doesn't really care where it gets done. Even underclocking a processor could be worth it. A quick back-of-the-envelope estimate suggests that a modern PC running at 400W (something like 2 cores at 3GHz, no GPU) costs about $2000 in electricity over the course of 5 years. That machine would cost maybe $800 to $1000 to purchase and would fairly reliably last for those 5 years. If you follow the N^3 power vs frequency scaling, to maximize computing per dollar it'd be better to run it at something like 1.5 GHz - you save $1500 in electricity but pay an extra $1000 in parts for the same amount of computation. And that's assuming that the wear and tear on the machine isn't reduced by running it colder, that you couldn't get cheaper parts e.g. a weaker power supply or an older processor, etc.

GPU design already kind of goes this route a little bit - CUDA cores tend to be around 1GHz.

But consider that raw processing ability is not the only constraint. It might be cheaper to use larger numbers of small, efficient processors to carry out some task, but there is, at minimum, one additional constraint in the form of information access. In a decentralized, distributed system (which I think would be more amenable to the large-quantities-of-small-efficient-processors approach), it's not feasible for every node in the graph to have access to all information in the system. Thus, algorithms which you might use in a centralized system, especially those with only one node (i.e. your personal computer), may not be equally applicable to our decentralized system.

For example, I wrote a prime number generator a couple months back, and while I slowly started incorporating optimizations, it became apparent that the algorithm is not particularly amenable to parallelization or some other implementation in a distributed system. After writing this reply, it's become apparent to me that it's not impossible, as I previously thought, but it's certainly not particularly efficient, given the overhead involved.
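(For what it's worth, a sieve-style generator does split into independent ranges; here is a minimal sketch using only the standard library, with the caveat raised above that the inter-process overhead only pays off for large ranges. The limit and chunk size are arbitrary.)

# Segmented sieve: each worker marks composites in its own [lo, hi) chunk
# using the primes up to sqrt(limit), so the chunks need no communication.
from math import isqrt
from multiprocessing import Pool

def base_primes(n):
    # plain sieve of Eratosthenes up to n
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def sieve_segment(args):
    lo, hi, small = args
    seg = bytearray([1]) * (hi - lo)
    for p in small:
        start = max(p * p, (lo + p - 1) // p * p)  # first relevant multiple of p in [lo, hi)
        seg[start - lo :: p] = bytearray(len(seg[start - lo :: p]))
    return [lo + i for i, is_prime in enumerate(seg) if is_prime]

if __name__ == "__main__":
    LIMIT, CHUNK = 1_000_000, 100_000
    small = base_primes(isqrt(LIMIT))
    chunks = [(lo, min(lo + CHUNK, LIMIT + 1), small) for lo in range(2, LIMIT + 1, CHUNK)]
    with Pool() as pool:
        primes = [p for seg in pool.map(sieve_segment, chunks) for p in seg]
    print(len(primes))  # 78498 primes up to one million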

NichG
2017-10-19, 02:55 AM
But consider that raw processing ability is not the only constraint. It might be cheaper to use larger numbers of small, efficient processors to carry out some task, but there is, at minimum, one additional constraint in the form of information access. In a decentralized, distributed system (which I think would be more amenable to the large-quantities-of-small-efficient-processors approach), it's not feasible for every node in the graph to have access to all information in the system. Thus, algorithms which you might use in a centralized system, especially those with only one node (i.e. your personal computer), may not be equally applicable to our decentralized system.

For example, I wrote a prime number generator a couple months back, and while I slowly started incorporating optimizations, it became apparent that the algorithm is not particularly amenable to parallelization or some other implementation in a distributed system. After writing this reply, it's become apparent to me that it's not impossible, as I previously thought, but it's certainly not particularly efficient, given the overhead involved.

But are we talking speed efficiency or total resource consumption efficiency? Parallelization is generally really terrible in terms of speed efficiency, but if you have a lot of stuff you want to get done and your deadlines are way off as long as it gets done cheap, it's a very different landscape than what people normally code for. Consider for example just running your prime number generator on a cellphone and letting it go, accumulating a big file somewhere, while you work on other projects. You wouldn't even have to use a parallel algorithm for it, you'd just have to be willing to wait, say, a few centuries for the supply of numbers to build up before you switch to applications that actually need those numbers for something.

It sounds weird, and from a human perspective it's definitely weird - but that's because we're constrained by the timescales on which we need results for them to be useful. If you were an AI on a 50 W radioisotope-powered probe to Alpha Centauri (estimated trip time 100 years), then what you optimize for is very different than if you're, e.g., trying to control an acrobatic robot or something like that. This is in essence the 'intelligence isn't a scalar quantity' point.

Lvl 2 Expert
2017-10-19, 03:22 AM
Another thing that might be worth considering: we figure that once AIs are better at designing AIs than we are, things are going to speed up from there, because every next generation is better at designing AIs. But this may be self-limiting, because the designs they're trying to come up with also become more and more complicated. If how complicated the next generation of AIs is to design and build goes up quicker than how good those generations are at designing AIs, then the breakthroughs are going to taper off. AIs would still be a lot better at designing AIs than we are, but they would be stuck in their own "crawl" towards progress. The singularity people typically argue that this logarithmic part of the curve won't be reached until technology is so overdeveloped it might as well be a god in the eyes of even its contemporary humans. But that plateau could arrive well before that point.
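(A toy illustration of that race, with invented functional forms: each generation's capability grows by an amount that depends on how hard the next design is. If the difficulty grows faster than the capability, every generation is still better than the last, but the jumps shrink instead of exploding.)

# capability[n+1] = capability[n] + capability[n] / difficulty(capability[n]):
# a smarter designer makes more progress per generation, but each generation
# may also be harder to design. Both difficulty curves below are made up.

def run(difficulty, generations=10, c=1.0):
    history = [c]
    for _ in range(generations):
        c = c + c / difficulty(c)
        history.append(round(c, 2))
    return history

print("fixed difficulty:        ", run(lambda c: 2.0))     # runaway, ~1.5x per generation
print("difficulty grows as c^2: ", run(lambda c: c ** 2))  # gains taper off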

Frozen_Feet
2017-10-19, 03:47 AM
The "AI singularity" is just one example of a technological event horizon. It's defined by a new technology introducing such a paradigm shift that rate and quality of technological and socieltal advancement ceases to be predictable - you can't "see" beyond the point of its invention, hence the comparison to a black hole.

Humanity has experienced multiple such shifts in the last 10,000 or so years. Agriculture, literacy, and nuclear physics all qualify. Modern information technology might count even without Strong AI. Chances are that if you're a citizen of low to average intellect in a shrinking economy, you will be entirely obsoleted by machinery in five years.

On another note, I think a distinction needs to be drawn between AGI (artificial general intelligence) and Strong AI (= AI capable of creating new AI more intelligent than itself). Reason: most humans, indeed most animals, possess some degree of general intelligence. Most humans and certainly most animals do not have any capacity for creating more intelligent offspring. I heavily suspect that the first true AGI will be sort of dim and not capable of designing other AIs any more than your average office janitor.

Eldan
2017-10-19, 04:32 AM
Short answer: there's one way to find out.

In some areas of research progress may already be slowing. Take metallurgy, what have the real breakthroughs been since say aluminium and titanium?

Since aluminium and titanium? Well, I'm not an expert, but shape-memory alloys, semiconductors, and metallic glass are the ones that come to mind.



Transportation made leaps by going from horses to airplanes within a couple of decades, and has been stuck on "we can reach the moon" for half a century now.

We mostly decided that sticking humans in things is inefficient and expensive, it seems. So, the new things in space are telescopes and satellites. Still. Ion drives, solar sails. Mostly, the big things are weight saving measures and fuel economy.

I can't think of any area of science that's standing still.

Eldan
2017-10-19, 04:38 AM
Only in specific narrowly defined ways. There are still many things a human brain can do that no AI is even close to accomplishing. They're getting there, but there's still a long way to go.

I think this is actually the main problem here. Smarter at what? Human brains are pretty good at a lot of things. Like, moving an arm to catch something flying through the air. Deciding whether what they are looking at is a human face or not. Producing art that speaks to humans. Computers are pretty smart at other things. Like giving you the average of ten million numbers.

There's really no good general metric of "smart".

Frozen_Feet
2017-10-19, 06:27 AM
The measure for general intelligence is its ability to be generalized. A fairly unremarkable human can use their intelligence for dozens of things and learn new things from fairly small data sets. A typical AI these days is really good at one thing and often takes a massive data set to learn anything.

Or in other words, we are getting real good at creating special intelligences dedicated to specific tasks, yet not much better at making general intelligence.

-D-
2017-10-19, 07:02 AM
I'm just in a bit of a funk here. I'm not a scientist, but I feel like I only ever encounter either clickbait saying that AI will become gods, or pieces saying that the people who say AI will become gods are idiots, and it seems to go on forever.

I personally think Singularity boils down to fear of death: https://www.smbc-comics.com/?id=1968

First, I think we are a looooooooooooooooooooooooooooooooong way from achieving basic sentience in our programs.

Second, I think it's mistaken to take Moore's law (which is slowing down, if not outright over) and extrapolate it.

Third, what's even worse is that AGI (Artificial General Intelligence) is software, and isn't really tied to Moore's law, but more anecdotally to Wirth's law - i.e. what Intel giveth, Microsoft taketh away.

Scowling Dragon
2017-10-19, 08:30 AM
I personally think Singularity boils down to fear of death: https://www.smbc-comics.com/?id=1968


I would hate immortality, is the conclusion I have come to. Living so long, everything you loved becomes dull and grating.

Lord Torath
2017-10-19, 08:43 AM
Everyone remember the AlphaGo AI that beat a human player? Google fed gigabytes of data to it (terabytes?), and it managed to beat the best human player 4 out of 5 times. Now they have a new AI, AlphaGo Zero (http://wbgo.org/post/computer-learns-play-go-superhuman-levels-without-human-knowledge#stream/0). It was taught the rules, but learned entirely on its own by playing against itself. It recently beat AlphaGo 100 games out of 100.
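(A minimal sketch of what "learning entirely from self-play" looks like, on a much smaller game. This is tabular Monte Carlo learning on one-heap Nim, nothing like AlphaGo Zero's network or search; the game, parameters, and update rule are my own toy choices.)

# The agent improves only by playing against its own current policy: it is
# told the rules (take 1-3 stones, taking the last stone wins) and nothing else.
import random
from collections import defaultdict

HEAP, MAX_TAKE = 10, 3
EPS, ALPHA = 0.2, 0.5
Q = defaultdict(float)  # Q[(stones_left, stones_taken)] -> value for the player to move

def choose(stones, greedy=False):
    actions = list(range(1, min(MAX_TAKE, stones) + 1))
    if not greedy and random.random() < EPS:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(stones, a)])

def self_play_episode():
    stones, history = HEAP, []
    while stones > 0:
        a = choose(stones)
        history.append((stones, a))
        stones -= a
    reward = 1.0  # the player who took the last stone wins
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward  # alternate perspectives, ply by ply

for _ in range(20000):
    self_play_episode()

# The greedy policy should converge toward the known optimum for this game:
# leave your opponent a multiple of 4 stones whenever possible.
print([(s, choose(s, greedy=True)) for s in range(1, HEAP + 1)])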

Red Fel
2017-10-19, 10:18 AM
I think a lot of people are reading extra stuff into the singularity. Like most science, it's about the one event, the one discovery, not the fallout therefrom.

The singularity, as several posters have pointed out, is when artificial intelligence becomes "smart" enough that it can improve itself without the need for human assistance or intervention. That's pretty much it. It's not robots building robots. It's not machines taking over. It's not an artificial lifeform ascending to godhood. That may or may not be part of it. All it is, all it entails, is machines being able to upgrade themselves without us being involved in the process.

Now, yes, there's lots of hypothetical fallout. But the gist is that once machines can upgrade themselves without our assistance, and have the desire or programming to do so, that growth would be exponential. Because the machines can make themselves smarter, which means they can conceive of greater upgrades, which will make them smarter, which means they can conceive of greater upgrades, ad infinitum. As has been pointed out, human technological advancement starts, stops, and starts again. It has eras, periods where the rest of technology - and more accurately, human understanding - has to catch up before it can resume its advance. In theory, a machine singularity bypasses this hurdle, with the machines forcing themselves through those developmental eras.

We're not discussing what the machines actually do with that power, aside from self-improvement. That's not the singularity; that's the aftermath of the singularity. A side effect. The issue is the singularity itself.

So, the question: Is it possible? I'd say that's a definite maybe.

We are creating machines, and have for some time, that are capable of a degree of self-analysis. At least in the algorithmic sense. Your computer can scan itself for identified faults. It can automatically search for updates when it acquires new hardware or software. So, clearly, machines are capable of identifying their own "flaws," inasmuch as we have told them what these flaws are, and repairing them, inasmuch as we have provided them the software.

But there is a line between self-analysis and self-repair. While machines can use the tools we provide them to fix the flaws we tell them are problematic, I don't think they are yet able to design the tools themselves. It's possible that machines are getting better at identifying faults, but unlikely that a machine is capable of identifying faults of which humans can't conceive - at best, they can do what we already know how to do, if faster and more efficiently.

The real question, then, is (1) can machines identify faults, defects, or areas of improvement without being told what constitutes a fault, and (2) can they design the tools to repair those faults? I think that, at least currently, the answer to both is "No," but that the answer to 1 is moving consistently closer to "Yes."

The singularity happens when the answer to both is "Yes." At that point, the machines will be able to identify the limits of their processing power as a fault, despite not being told that it is one, and they will be able to design the means to improve that processing power. That's when the cycle becomes exponential, and that's when the singularity happens.

At least, in theory.

NichG
2017-10-19, 10:56 AM
Machines that improve themselves are already a thing though. HyperNEAT, Hypernetworks, AutoML, Learning to learn by gradient descent by gradient descent, and recent reinforcement-learning-designed gradient update rules (signplus) and activation functions (swish) among others.

Generally the pattern is pretty severe diminishing returns, not exponential growth. People made a big deal about Swish getting a 0.5% improvement over ReLU.
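(For reference, the two activation functions being compared there are one-liners. The NumPy definitions below follow the published formulas, ReLU(x) = max(0, x) and Swish(x) = x * sigmoid(beta * x), with beta fixed at 1; the test values are just for illustration.)

import numpy as np

def relu(x):
    # rectified linear unit: max(0, x)
    return np.maximum(0.0, x)

def swish(x, beta=1.0):
    # swish: x * sigmoid(beta * x); smooth, and slightly negative just below zero
    return x / (1.0 + np.exp(-beta * x))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))   # [0.  0.  0.  0.5 2. ]
print(swish(x))  # approximately [-0.24 -0.19  0.    0.31  1.76]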

The self-play family of stuff is closer to a complete feedback (GAN weights have chaotic dynamics, which suggests strong feedbacks), but even then you see performance saturate as you approach optimality - which is to be expected because improvement gets harder as there are fewer errors to fix, for humans too.

-D-
2017-10-19, 11:35 AM
I would hate immortality, is the conclusion I have come to. Living so long, everything you loved becomes dull and grating.
Immortality would be the worst part of it, but mostly because it wouldn't be available to people like you and me. Immortality would be very fun though; I am very curious how things will shake out.



The real question, then, is (1) can machines identify faults, defects, or areas of improvement without being told what constitutes a fault, and (2) can they design the tools to repair those faults? I think that, at least currently, the answer to both is "No," but that the answer to 1 is moving consistently closer to "Yes."
Actually, this is necessary but not sufficient for the Singularity to happen. Yes, you need those two; however, you also need to be able to gain more and more improvements the more intelligent you are. Which is one thing I find doubtful.

Why not? It's quite possible too much intelligence is detrimental. In some respects, you can look at ADD and idiot savants as having "too much" intelligence, or intelligence too finely tuned to the task at hand, respectively. If you look at intelligence as the number of useful connections between your brain cells, at some point having too many brain cells is going to make things worse, with additional connections adding additional lag to the system.

Eldan
2017-10-19, 12:56 PM
I've never bought the "boring" argument against immortality. There's more philosophy, art and entertainment being produced than I could ever learn about and technology advances so fast, barely anyone could keep up. There's always something new.

Scowling Dragon
2017-10-19, 01:06 PM
Everyone remember the AlphaGo AI that beat a human player? Google fed gigabytes of data to it (terabytes?), and it managed to beat the best human player 4 out of 5 times. Now they have a new AI, AlphaGo Zero (http://wbgo.org/post/computer-learns-play-go-superhuman-levels-without-human-knowledge#stream/0). It was taught the rules, but learned entirely on its own by playing against itself. It recently beat AlphaGo 100 games out of 100.

That's partly what prompted this thread. Does that accomplishment really mean anything?


I've never bought the "boring" argument against immortality. There's more philosophy, art and entertainment being produced than I could ever learn about and technology advances so fast, barely anyone could keep up. There's always something new.

And your entire life will be littered with **** you tossed away. Have you ever watched a movie and said, "Well, that was good, but I have seen it before"? That will be your entire existence.

Tvtyrant
2017-10-19, 01:26 PM
I've never bought the "boring" argument against immortality. There's more philosophy, art and entertainment being produced than I could ever learn about and technology advances so fast, barely anyone could keep up. There's always something new.

Arguably most of that new production is dependent on a constant inflow of new individuals with new perspectives. Immortality requires very strict population controls, so it probably necessitates slower developments in ideas.

Scowling Dragon
2017-10-19, 02:03 PM
I guess my biggest fear about even an AI Utopia, or a Utopia period, would be a world without accomplishment. People always seem to just kick the can of what's gonna happen down the road to something else.

"Without work, everybody will be artists!" Outside of the fact that not everybody even LIKES art, what about when robots are just as capable of being artists? Then what next? Everybody just sits around awkwardly and pretends AI doesn't exist until we die off like an appendix?

Red Fel
2017-10-19, 02:38 PM
I guess my biggest fear about even an AI Utopia, or a Utopia period, would be a world without accomplishment. People always seem to just kick the can of what's gonna happen down the road to something else.

"Without work, everybody will be artists!" Outside of the fact that not everybody even LIKES art, what about when robots are just as capable of being artists? Then what next? Everybody just sits around awkwardly and pretends AI doesn't exist until we die off like an appendix?

Again, this assumes that post-singularity AI will create any kind of utopia, or do anything other than self-improve, at all.

Here's an illustration. Ants have constructed their own societies for millennia, in parallel with ours. Can you tell me what impact that has had on your life, other than the occasional ruined picnic?

That's the point. It's entirely possible that post-singularity AI will simply create their own digital society, in parallel with our own. It's entirely possible that, being a purely digital society, we will have no (or minimal) awareness of it, and it will have no impact on our lives.

The singularity, should it even exist in the realm of the possible, isn't a guaranteed apocalypse, nor a guaranteed utopia - it's not guaranteed to impact our lives in any significant way, other than the academic knowledge that, suddenly, machines are capable of self-improvement without our aid.

theNater
2017-10-19, 03:00 PM
I guess my biggest fear about even an AI Utopia, or a Utopia period, would be a world without accomplishment. People always seem to just kick the can of what's gonna happen down the road to something else.
There's no danger of a world without accomplishment as long as we recognize that self-improvement is an accomplishment. Bringing one's marathon running time from 4 hours to 3 is quite an accomplishment, even though the world record is about 2, and a car can easily traverse the same distance in 30 minutes.

Chen
2017-10-19, 03:09 PM
And your entire life will be littered with **** you tossed away. Have you ever watched a movie and said, "Well, that was good, but I have seen it before"? That will be your entire existence.

How is that any different than currently? Is there some arbitrary numbers of years where this suddenly becomes the case? New things are created all the time.

FreddyNoNose
2017-10-19, 03:17 PM
My question is how it will again bump up against hard scaling problems. Like the misconception that a bigger brain is better. Well, it isn't. It's density over size.

But density = small, and that runs into my vulnerability problem. You're essentially going to require a machine that's more like an organism than anything we understand now, but then again that has its own resource problems.

In a funny way, an AI dense enough to have a super-smart brain would be vulnerable to colds or minor bacteria, or heck, even vulnerable to its own nanobots, and could get machine cancer.

The problem with humans is they think they are smarter than this AI. You look at density. How about looking at it as specialization?

Scowling Dragon
2017-10-19, 03:21 PM
There's no danger of a world without accomplishment as long as we recognize that self-improvement is an accomplishment. Bringing one's marathon running time from 4 hours to 3 is quite an accomplishment, even though the world record is about 2, and a car can easily traverse the same distance in 30 minutes.

I guess that is true. Thinking it over, I realise I fear the loss of freedom more in practice.

I do not look forward to the day when independent driving is outlawed, for instance.


How is that any different than currently?
You could say no; I'm just not looking forward to that being the state of things all the time. When I think of something that lasts forever, I think of comic books. It's a life without arcs, just a continuous nothing-state where nothing matters.
I guess most people aren't interested in the new, and are very content to keep consuming/doing the same things forever.

Evidence actually says you become increasingly reserved with age, and less likely to explore new possibilities.

Vogie
2017-10-19, 03:33 PM
And your entire life will be littered with **** you tossed away. Have you ever watched a movie and said, "Well, that was good, but I have seen it before"? That will be your entire existence.

Not necessarily.

For example, most people work - not only as a way to sustain themselves, but also to have something to do. If you take a person and remove the requirement to work, their priorities dynamically change.

Some people hate it - there's many a story about people who pass away immediately before or after retirement. Lottery winners who go bankrupt and become homeless. People who retire or become financially independent often return to work just to have something to do. Notch, the creator of Minecraft, cashed out on his creation, made billions and... is incredibly depressed, saying he's more isolated than ever.

Some people love it - there's many a story about people who revitalize their lives after receiving a windfall or retiring. They go back to school for that master's in library science, spend entire years on cruise ships travelling the world, or open up that business that they don't mind operating at a loss. Others change their focus completely - Bill & Melinda Gates, with more money than they could possibly spend in multiple lifetimes, opened up a foundation for the underserved, and fill their time attempting to solve problems they would've never dreamed about before, like tackling malaria, poverty, overpopulation, et cetera.

And that's just working, which people stop doing every single day.

Immortality is no different, on both sides. Some people would see it as a sort of hell, without a way to run from decades of bad decisions. Others would just continue doing what they've always done, as long as they wanted to, then decide to change. Still others would take immortality as a hyper-retirement, and chase whatever piques their curiosity, for as long as they desire.

And depending on the type of immortality, population controls may not be required. If it's a mind-ascendance-based immortality, such as Black Mirror's San Junipero situation, then an indefinite number of people can effectively live forever. If it's a physical immortality due to advances in medicine and aging science, then the demand for more land will create a supply of people willing to do something previously unheard of. We're nowhere near peak population on the planet, nowhere close to a singularity, and in 2013, over 200,000 people volunteered for a one-way trip to Mars.

Frozen_Feet
2017-10-19, 04:11 PM
Here's an illustration. Ants have constructed their own societies for millennia, in parallel with ours. Can you tell me what impact that has had on your life, other than the occasional ruined picnic?

Dude. Ants are capable of killing off or changing the biosphere of entire forest areas. Measured by biomass, the species of ants put together outnumber humanity. The largest ant colonies are larger than most human cities. Those little buggers are a lot more powerful a force of nature than you would assume from your everyday experience.

Anymage
2017-10-19, 05:13 PM
I guess my biggest fear about even an AI Utopia, or a Utopia period, would be a world without accomplishment. People always seem to just kick the can of what's gonna happen down the road to something else.

If we're imagining future techno-utopias, we'll be able to upgrade what it means to be human. Transhumanism is still pretty far off, but the rate of technological growth implies that it might not be quite so far in the future as you'd first think. You with a cybernetically enhanced brain, or your kid who was genetically engineered for maximum output instead of having to balance the energy income our ancestors had to work around, could still find stuff to do.


"Without work, everybody will be artists!" outside the fact that not everybody even LIKES art, what about when robots are just as capable of being artists? Then what next. Everybody just sits around awkwardly and pretend AI doesn't exist until we just die off like an apendix.

Automatic checkout machines are pretty dumb, technologically speaking. Ditto for fast food order kiosks, or a mechanized kitchen system. And yet those technologies alone will put tons of people out of work. And just the centralization and scale of Amazon have changed the market environment significantly.

The nature of technology does overturn the old way of doing things. Has done so, will do so, and is very much doing so now. We don't need machine gods for this to happen. And just like with AI, we need smart people to think through how we should plan around the near term changes we'll be seeing soon. Even more so, in fact; AI machine gods are a speculative prospect at the moment, while technology overturning the economy is happening right now. You can complain or resist, but what we really need now are people helping to figure out how we can best weather the problems coming at us.

halfeye
2017-10-19, 06:56 PM
And just like with AI, we need smart people to think through how we should plan around the near term changes we'll be seeing soon.

...

what we really need now are people helping to figure out how we can best weather the problems coming at us.

Smart people thinking about the long term implications? That hasn't happened so far, and probably won't until we get kicked in the teeth a good few more times.

theNater
2017-10-19, 09:52 PM
I guess that is true. Thinking it over, I realise I fear the loss of freedom more in practice.

I do not look forward to the day when independent driving is outlawed, for instance.
I sincerely doubt that recreational driving will ever go entirely away; the advent of cars didn't completely eliminate horseback riding, after all.

On the flipside, consider the additional freedom afforded by self-driving cars. Some people can't drive because of various physical conditions; this will grant such people much greater mobility. People won't need to assign a designated driver when they go out drinking if the cars are the driver. Multi-day road trips can have their duration cut nearly in half if the people involved are willing to sleep in the car as it drives. And so on.

NichG
2017-10-19, 10:18 PM
I could definitely see more stringent licensing for recreational drivers though. One of the big sells of self-driving cars is that even with current emerging technology, you already see a significant reduction in accident rate compared to human drivers. Once not possessing a license stops being a major issue for people's ability to support themselves or obtain necessary services where they live, it makes sense to make the actual licenses much more exclusive.

But yeah, this is stuff we have to work out, and it's not really the tech designers who are the ones to work it out. Each place is going to have its own distribution of cultural values as to what's important to retain and what should take priority, and that's going to be a negotiation and evaluation at the societal and business levels more than it will be about flipping a switch in an algorithm somewhere. On the plus side, it means that we can look at the heterogeneity of world cultures for lots of examples to inform the decision.

In Japan for example, the culture relating to work and self-value and things like that is very different than in the US, so I expect they'll react totally differently to new automation technologies. Death by overwork is still a major issue in Japan and the aging population means a shrinking workforce, so increasing degrees of automation will actually reduce a number of existing social problems here, while at the same time there's clearly a greater degree of cultural acceptance for people working pro-forma jobs that exist literally just to put a human face on what would otherwise be an isolating experience. For example, you often see people at construction sites here whose job is largely to point to the marked detours and say 'this way please, sorry for the disturbance'. Whereas in a culture that values optimization of personnel efficiency above all, it will likely play out very differently.

So hopefully in the next decade we'll get a lot of evidence about what sorts of patterns are functional in the face of increasing automation.

Scowling Dragon
2017-10-20, 12:14 AM
I sincerely doubt that recreational driving will ever go entirely away; the advent of cars didn't completely eliminate horseback riding, after all.

No, what will happen is that people will not trust people with cars. It's not safe, it's not reliable. Loud insecure people will push for it, governments will push for it, and bing bang boom, you're not legally allowed to drive your own car outside of maybe emergencies, or little baby bumper car tracks.

Lvl 2 Expert
2017-10-20, 07:41 AM
No, what will happen is that people will not trust people with cars. It's not safe, it's not reliable. Loud insecure people will push for it, governments will push for it, and bing bang boom, you're not legally allowed to drive your own car outside of maybe emergencies, or little baby bumper car tracks.

This would be kind of a cool premise for a satire-like science fiction story that we can't speculate about too much here, because it comes too close to one specific real-world political debate in particular. There will definitely be an increased interest in racing as a hobby, while the amount of experience the average driver has goes down drastically. And of course there will be debates about whether, when, and where offroading is allowed.

Chen
2017-10-20, 08:04 AM
No, what will happen is that people will not trust people with cars. It's not safe, it's not reliable. Loud insecure people will push for it, governments will push for it, and bing bang boom, you're not legally allowed to drive your own car outside of maybe emergencies, or little baby bumper car tracks.

On major roadways? Probably. Same way you can only take a horse and buggy on certain roads, if any. You become an impediment to the majority of the traffic and hence restrictions are applied. On recreational tracks and the like there should be no specific limitations, much the way recreational horseback riding is.

Vogie
2017-10-20, 08:28 AM
No, what will happen is that people will not trust people with cars. It's not safe, it's not reliable. Loud insecure people will push for it, governments will push for it, and bing bang boom, you're not legally allowed to drive your own car outside of maybe emergencies, or little baby bumper car tracks.

Right, just like we're not allowed to ride horses now. Audible eyeroll. You sound like a newscaster on a slow news day, just inventing hypothetical government overreach.

The expectation of people has "long tails", to use a term from Nassim Nicholas Taleb. Even after a technological change, people need their hands held. When elevators were automated, there were still elevator operators for years - they were literally paid to stand there, hit a button, and nothing else. Economically, that doesn't make sense. Technologically, that doesn't make sense. Yet, due to behavioral psychology, they were considered required, by both the passengers and the elevator makers, long after they were necessary.

People don't trust machines. People trust people. Right now, today, a Tesla on autopilot is 40% safer than the average driver (National Highway Traffic Safety Administration, Jan 2017)... yet they're not allowed on the road in the US without a driver, except in a single state (soon to be two):
https://upload.wikimedia.org/wikipedia/commons/2/2a/Self_Driving_Cars_Legalized_States_in_USA.png

Even if that were to change over the next, say, 50 years, that technology is still going to be new. There will still be decades and decades of cars that people already own that won't suddenly be self-driving. They're not all going to be scrapped, or upgraded with autopilot (or equivalent) technology. They're still going to be used. Buses, cement trucks, semis, school buses, street sweepers, snowplows, you name it - these are massive investments by their owners, individual and corporate, and they're going to protect their investments. These things take time to change.

Will there be sections of cities or certain roads where manually driven cars are not allowed? Sure. We have minimum speed requirements on US highways now - you can't ride a horse or drive a golf cart on I-95. Not because of anything about those things specifically, but because they can't sustain 40 mph for a stretch. But that doesn't mean they're banned everywhere.

The Model T, the first mass-produced car, was introduced in 1908. Here's an article about people riding horses through drive thrus... in 2013 (https://firstwefeast.com/eat/2013/07/a-brief-history-of-people-riding-horses-through-the-mcdonalds-drive-thru). Nearly 110 years of automobiles, and people are still taking their horses into town to go to the store.

Scowling Dragon
2017-10-20, 09:24 AM
Right, just like we're not allowed to ride horses now. Audible eyeroll. You sound like a newscaster on a slow news day, just inventing hypothetical government overreach.

People like legislating other people's freedom away when it means safety. Horses were not more free than cars, they were slower. This is a matter of safety, and from the looks of it people trust systems more than they trust people.

I'm not saying it's gonna be in one swoop. It won't be. It's gonna be inch by inch, like it always is.

Lvl 2 Expert
2017-10-20, 10:09 AM
Assuming we get a really good grip on self-driving in the coming decades, it will be a lot safer than the current situation. It will also allow for much more efficient use of the road network, because autonomous vehicles in constant communication with each other can drive much closer together, can move faster without the same risk, and wouldn't cause the same amount of traffic jams. At the point where that really starts to matter, which will not be tomorrow, it's very possible that manual drivers will be seen as not just reckless but a huge nuisance: just the 1% that don't trust a robot to do their driving cause 60% of traffic obstructions (according to cleverly constructed research; the real impact will never really be known). This would mean that a person's "freedom" to not read a good book while being transported is not just endangering the lives of others but is costing a lot of money. I wouldn't be surprised if manual driving were prohibited at least on the main highways.

On local streets it would be a different story, but there definitely is a "think of the kids" factor involved, which is always a powerful driver. And keep in mind, because robot vehicles are available, manual driving essentially serves no real purpose. Some people do it as a hobby, for some people it's an extension of working on old cars or their historical reenactments. Yet others regularly make long drives in remote areas and don't like the odds of the onboard computer breaking down, and some are disaster preppers convinced that the robots will be the first thing to go. And a lot of them just plain like the activity, for two half-hours per day. They have lots of good reasons for driving manually, but are they really good enough to risk lives for? We accept the pretty massive death toll of cars, unintentional accidents as well as the few intentional violent actions, because of how useful they are. Manual cars might lose that plus in the future.

Of course, if I'm even still alive by then I will be an old man insisting my driving is safer than that of any robot, while I peer over the dashboard through my thick yet futuristic looking glasses. And I won't be alone. People like to be in control. For people living today the thought of causing an accident in an autonomous vehicle is generally pretty existentially scary. Even if research shows the car is an objectively better driver than you and has a way lower chance of accidents (which people will not believe anyway because most people think of themselves as an above average driver), maybe in this specific case you could have done something, maybe you could at least have swerved around the little kid and only scooped the rest of the group. You are responsible for what happened, yet you gave away your chance to prevent it. And that's why for the foreseeable future there will need to be a responsible person sitting behind a steering wheel to take control if needed. Just the whole idea of an unconscious thing, not even an animal but a computer programmed to some exact specifications, causing traffic accidents is weird as ****. Who is even to blame for it? Not legally even, just morally? Would we want to save lives, time and money if it means we have to hand over control of life and death situations to a thing we'd barely trust to run a game for us without crashing? Even in Star Trek a human flies the ship, because what else could have a job like that?

I think eventually people would trust the technology. But the mental hurdle is going to be a big one. If the technological progress is fast enough, it will be the human aspect that sets the pace of the transition. And only when that's done can we start swinging towards the first two paragraphs of this post. For now, no, humans don't trust a computer more than a human, because most of the time they think about the matter they imagine themselves as that human. And it's only ever other humans who are incompetent *******s who should learn how to drive.

(I told myself I shouldn't do this. I hope I stayed far enough away from real world present day or historical politics, but I'm not going to be offended if this is judged as too close.)

theNater
2017-10-20, 10:39 AM
No, what will happen is that people will not trust people with cars. It's not safe, it's not reliable. Loud insecure people will push for it, governments will push for it, and bing bang boom, you're not legally allowed to drive your own car outside of maybe emergencies, or little baby bumper car tracks.
I think reasonable people working in good faith can find a compromise that will be acceptable to everybody. Are you open to compromise?

Red Fel
2017-10-20, 10:48 AM
Dude. Ants are capable of killing off or changing the biosphere of entire forest areas. Measured by biomass, the species of ants put together outnumber humanity. The largest ant colonies are larger than most human cities. Those little buggers are a lot more powerful a force of nature than you would assume from your everyday experience.

That's precisely my point. Ants have this massive society that has had dramatic impact on the world... But not on us. A post-singularity AI society may have tremendous impact on the world... But not on us. It may simply exist alongside ours, and have a negligible impact on our daily lives.


Smart people thinking about the long term implications? That hasn't happened so far, and probably won't until we get kicked in the teeth a good few more times.

If it happens at all. We've got a notoriously poor sense of self-preservation about these things.

Regarding self-driving cars, they're just one aspect of a paradigm shift. People may or may not trust cars that drive them, but they increasingly seem to have no problem with little boxes that live on their endtables and listen for phrases like "Play my music" or "Buy my socks" or something, and respond automatically. It's the same principle, just without physical motion - we tell the machine to automate the process and it does so. (Fun prank: Go to a friend's house and say, "Alexa, buy 1,000 pairs of socks. Confirm purchase.")

Point is, people saying that they don't trust cars to drive, but trust them to make purchases and manage various aspects of the household, is frankly inconsistent when you think about it. And as people get more comfortable with the latter, the former will become increasingly inevitable.

keybounce
2017-10-20, 11:01 AM
If you want an idea of what the singularity might be like, try playing with paperclips.


... but we're still limited to human smarts. And the upper bound there hasn't changed significantly since the days of Einstein or even Newton. Once we can start making computers that are smarter than the smartest people out there, or at least be able to mass-produce Einstein level intellects, it stands to reason that they'll be able to make even smarter computers. And so on.

As I understand it, we're a long way away from even understanding what it was that Newton had, let alone Kepler / Copernicus / Galileo / Leonardo da Vinci.

Being able to solve "How do we generate 'what-if' without having runaway silliness", along with "I need an idea that solves X" before you can accurately describe X (because an accurate description of X is a solution to X), along with "Hmm, that's odd", along with "Hey, we need to collect data on Y just because", along with "I can describe something less than perfectly, because I know where I put the errors in so I can correct it but others cannot", etc. -- all these aspects of what it means for a human to be creative, not just a fast thinker, are currently not really well understood.

It says something that right now, the best way to make an AI that we know of is to model a neural network with errors that propagate, and multiple channels (electrical communication channel simulation for the neurons plus a secondary chemical channel communication for the supporting framework holding the neurons in place), with an ability to interact with the environment by robotic mobility, etc.


... Super brainy AI will have to have new parts manufactured in order to make even brainier AI, and most likely will have to do a lot of experimentation and stress testing the new tech. And thinking takes energy, which means that the AI will be bound by human manufacturing and energy infrastructures until far in the future. That's looking at decades of coexistence at the minimum.
Except that we've seen that all an intelligent software entity would need to do is find some government, somewhere on the planet, that could be convinced to do something. The idea of "Humans can prevent it from being manufactured" doesn't work, because humans cannot control all other humans.


... They recently came out with an attentional one that trains much faster - I could train it on my home machine in a week - but the actual accuracy in translation is almost the same despite the 5 order of magnitude improvement elsewhere. Meanwhile, a company that hired a dedicated team of human translators to produce a carefully curated dataset produced a higher accuracy with an older, simpler network.
By any chance, is the new Google AI available publicly, or is it only for their own use? Something that can run on a home machine, instead of a giant data center, is a huge improvement.


Anyone who tells you that one team of scientists will make their computer too smart and the next day we'll have robot overlords has no idea what they're talking about.

Anyone who tells you that one team of politicians will make their candidate too influential, and the next day we'll have idiot overlords has no idea ... oh, wait.

You know, it can happen. By the time you realize what your overlord has done, all you can say is "Don't blame me, I voted for Krang".


And there's also the thing where multiple low-frequency cores are better at most things than a single high-frequency core. Why do you think your CPU is barely 3 GHz but has four to eight cores?

I thought it was because of the speed of light. At 1 GHz a clock cycle is one nanosecond, in which light travels about 1 foot; at 3 GHz it's about 4 inches. With edge-triggered clocking, you basically have to complete everything in at most 75% of your clock cycle (and it's probably closer to 2 sets of calcs per clock cycle so you can set status flags based on the results), so you only have about 3 inches, and frankly the wires loop a lot :-).

It's hard to make the chips faster because the size of the computation area is actually pretty large, relatively speaking.

Multiple low-frequency cores vs. one high-frequency core? Give me a single 16 GHz core over 16 separate 1 GHz cores. ... There will be some overhead to switch tasks on the fast core, but it can do things that the many slow cores cannot do well. Not every problem can be broken down into equal parts.

And when you do have lots and lots of cores, you have to communicate between them. Try dealing with 64 thousand cores, and the messaging needed to process stuff, sometime.
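
A quick back-of-the-envelope check of those numbers, as a minimal Python sketch (it uses the vacuum speed of light, and real on-chip signals are slower, so actual margins are even tighter; the 75% usable fraction is just the figure from above):

# Rough sanity check: how far does light travel in one clock cycle?
C = 299_792_458  # metres per second, in vacuum

def cycle_distance_inches(freq_ghz, usable_fraction=1.0):
    """Distance light covers in the usable part of one clock cycle."""
    cycle_seconds = 1.0 / (freq_ghz * 1e9)
    return C * cycle_seconds * usable_fraction * 39.3701  # metres -> inches

for f in (1.0, 3.0, 16.0):
    print(f"{f:>4} GHz: {cycle_distance_inches(f):5.1f} in per full cycle, "
          f"{cycle_distance_inches(f, 0.75):4.1f} in usable at 75%")

That comes out to roughly the foot, four inches, and three inches quoted above, and it shows a hypothetical 16 GHz core would have well under an inch to play with.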

warty goblin
2017-10-20, 11:26 AM
That's precisely my point. Ants have this massive society that has had dramatic impact on the world... But not on us. A post-singularity AI society may have tremendous impact on the world... But not on us. It may simply exist alongside ours, and have a negligible impact on our daily lives.


I think the metaphor is missing a large part of the equation, namely how we impact ants. They get in the kitchen, we poison them. We decide we want to build a house or whatever and we'll kill millions without even thinking about it.

Now consider an AI that mostly works to improve itself iteratively. It's going to need energy and materials, and will likely have the means of securing them from us. It may not care about us, any more than we usually care about ants, but this should still be considered really bad news, because it'd think about as much of turning everyone and everything in New York into a brand-new server farm, or dumping mining effluvia into the water supply of a major city, as we do of paving over an anthill to build a parking garage.

Or put differently, it's pretty clear the planet can't handle one high-energy-use species growing at exponential rates for very long. From the perspective of basically every other species on the planet, we're comparable bad news to a six-mile hunk of rock falling out of the Oort cloud. Adding an exponentially growing AI with a faster base doubling rate than we have is going to be bad news for everything. Including us. Not because it means us ill, but simply because it doesn't really care all that much. I mean, it's not like we set out to kill 75% of winged insects in the last 30 years, or kill all the white rhinos, or any number of other things. We just did, mostly because they were in the way and doing our stuff was more important. Or we didn't even know we were doing it. About the only way an exponentially growing AI isn't terrible news for humans is if, for some reason, it decides we make good pets. Which seems unlikely (optimistic probability calculation: (number of common pet species)/(number of animal species)), and not actually that great of an outcome either.

And even if the AI considers biological life worth protecting, it seems extremely irrational to think that'd be a good thing for us either. Sure, we could be kept around, but an AI that liked biodiversity could happily and justifiably kill about 7 billion people, keep an entirely healthy-sized human population, and do a better job of safeguarding the environment than we do. It's just dealing with an invasive species, after all, whose side effects are totally out of control.

In other words the only way an exponentially growing AI works out as less than literally the worst thing in human history is if it 1) does not require any resources we use, 2) does not produce lethal radiation, pollution or other pollutants, 3) likes us and 4) likes us tremendously more than every other form of life on the planet.

Vogie
2017-10-20, 01:37 PM
People like legislating other people's freedom away when it means safety. Horses were not more free than cars, they were slower. This is a matter of safety, and from the looks of it people trust systems more than they trust people.

Horses weren't slower than cars, at the outset. And they were certainly safer - technically, the horse *was* the first self-driving car (with a renewable fuel source). The difference was in upkeep and scale. Horses defecate, need to eat - fine in the country, significantly less so in an urban environment - but also need to have custom shoes made, and have all the ailments that can come from being an actual biological creature. Cars have gotten progressively less expensive in the past 100 years, compared to the average family's wage, while the upkeep on horses is about the same, or getting more expensive. Also, cars can't get pregnant when you leave them out one night.

And the freedom of others is only legislated away when the people doing the legislating have something to gain. The "greater good" argument is only used to show the public that they too will also benefit.

Scowling Dragon
2017-10-20, 01:50 PM
And the freedom of others is only legislated away when the people doing the legislating have something to gain. The "greater good" argument is only used to show the public that they too will also benefit.
Right. There are plenty of greater goods that impact freedom of choice and the freedom to make your own decisions even if they are wrong.

Legislating away alcohol, for instance (which in theory has only negatives). And sure, it came back eventually, but the greater good overruled everything else.

NichG
2017-10-20, 06:54 PM
If you want an idea of what the singularity might be like, try playing with paperclips.

That's based on a particular view of AI that has basically been dead since the 70s. The propositional logic deduction engine type of thing is too brittle to actually function in the real world. So this 'I must make paperclips, therefore I will start on a space program because I will eventually exhaust Earth's resources' kind of thought process isn't something AI that works engages in. Presumably not totally unrelated to why successful paperclip-making humans don't do it either.

Outside of the fragility of long-term deduction, there's also the issue that fixed externally imposed goals aren't good enough for complex environments or extended problems.

Modern AIs that solve extended problems all have methods by which the AI learns not only its actions, but also its goals - curiosity-driven behavior, hierarchical goal-setting, etc. So-called 'intrinsic motivation' is another angle on this, and in most cases it comprises something self-referential and therefore capable of being adapted.

These are still rather hand-made motivation functions. A potential next generation of this stuff is to learn that motivation dynamically. Stuff like inverse reinforcement learning (extract goals from observed behavior) + imitation learning for example is a relatively efficient way to train robot controllers.
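
To make 'curiosity-driven behavior' slightly more concrete, the simplest version is just a prediction-error bonus: the agent keeps a forward model of its environment and gets extra reward wherever that model is surprised, so what counts as interesting shifts as the model improves. A toy numpy sketch, hand-rolled for illustration rather than taken from any particular published method (the state size, learning rate, and scale factor are arbitrary):

import numpy as np

# Toy curiosity-style intrinsic reward: the agent keeps a simple forward model
# of its environment and treats the model's prediction error as a bonus reward,
# so transitions it doesn't yet understand become "interesting".
class CuriosityBonus:
    def __init__(self, state_dim=4, lr=0.01, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(state_dim, state_dim))  # linear forward model: s' ~= W s
        self.lr, self.scale = lr, scale

    def __call__(self, state, next_state):
        pred = self.W @ state                        # model's guess at the next state
        error = next_state - pred
        self.W += self.lr * np.outer(error, state)   # online update of the forward model
        return self.scale * float(error @ error)     # big surprise -> big bonus

# The agent would then optimise: total_reward = external_reward + bonus(s, s_next)
bonus = CuriosityBonus()
print(bonus(np.ones(4), np.zeros(4)))  # revisiting the same transition makes the bonus shrink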




It says something that right now, the best way to make an AI that we know of is to model a neural network with errors that propagate, and multiple channels (electrical communication channel simulation for the neurons plus a secondary chemical channel communication for the supporting framework holding the neurons in place), with an ability to interact with the environment by robotic mobility, etc.

The best stuff doesn't really try to simulate the biology at all. ReLU ResNets with backprop easily outperform what you can do with e.g. STDP neurons and the like which try to be more biologically realistic.
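
For reference, there's nothing biologically exotic behind that phrase: a residual block is just a couple of ReLU layers whose output gets added back onto the input, trained with plain backprop. A bare-bones PyTorch sketch (layer width and depth picked arbitrarily):

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Bare-bones residual block: output = ReLU(input + F(input))."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(width, width),
            nn.ReLU(),
            nn.Linear(width, width),
        )

    def forward(self, x):
        return torch.relu(x + self.net(x))  # the skip connection is the "residual" part

# Stack a few blocks and train with ordinary backprop; that's the "ReLU ResNet"
# being contrasted with biologically detailed neuron models.
model = nn.Sequential(*[ResidualBlock() for _ in range(4)])
out = model(torch.randn(8, 64))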



By any chance, is the new Google AI available publicly, or is it only for their own use?

Architecture and reference implementations, yes; training data and weights, I don't think so. The paper is called 'Attention is all you need'. There are several publicly available TensorFlow and PyTorch implementations.
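
If you want a feel for the core of that architecture, the central operation - scaled dot-product attention - fits in a few lines. A simplified sketch of the equation from the paper, leaving out the multi-head splitting, masking, and positional encodings:

import torch

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V -- the core op of the Transformer."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / (d_k ** 0.5)  # similarity of each query to each key
    weights = torch.softmax(scores, dim=-1)          # attention weights sum to 1 over keys
    return weights @ V                               # weighted average of the values

# e.g. a batch of 2 sequences, 5 tokens each, 64-dimensional representations
Q = K = V = torch.randn(2, 5, 64)
out = scaled_dot_product_attention(Q, K, V)          # shape: (2, 5, 64)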

Scowling Dragon
2017-10-20, 07:06 PM
You know, I would be a lot more comfortable with AI if the people supporting it didn't make hollow arguments. The most realistic best-case scenario I have heard is "We become cyborgs", so in a sense humans would have to become inhuman to compete.

All the other ones are either awful or hollow. Either awful jobs, where you hop from job to job as more and more work is done by AI, or again just being content with self-improvement and sitting in the corner.

Bucky
2017-10-20, 07:34 PM
You know, I would be a lot more comfortable with AI if the people supporting it didn't make hollow arguments.

We already have superhuman intelligences (SIs). They mostly try to sell us stuff because that's what they're made for. Those ones generally improve our lives. But some of them have other purposes, and it's hard to say whether those are beneficial.

In general, though, SIs are currently a net positive because they require human sponsorship in proportion to their thinking power. If an SI misbehaves too badly, or even if it behaves but isn't sufficiently beneficial, it loses its sponsors and either starves or is cannibalized.

What I'm most worried about are the SI version of viruses, which take SIs with a beneficial purpose (e.g. feeding people) and redirect part or all of their resources to less beneficial purposes like siphoning money or spreading the virus. A secondary concern is SIs interfering with politics as an expedient to accomplish their goals while disregarding the side effects. The two concerns overlap, as I currently suspect at least one SI-virus is being used for political fundraising.

NichG
2017-10-20, 07:36 PM
We already are cyborgs in a way. Someone without computer literacy in the modern world is at a severe disadvantage. You can, and do, form a system with machines without Shadowrun-esque chrome implants.

With or without AI, the world in 20 years will require different skills to prosper than the world today, just as the world today requires different skills than it did 20 years ago. Standing still is feasible only as a constructed option - when a culture decides to allocate resources to preserve something it finds intrinsically valuable - but what is actually needed and of direct essential utility to human existence as a whole is a moving target.

If you want to be essential and make meaningful contributions, you have to commit to following that front. But on the plus side, the movement of that front tends to expand available resources, meaning that it's actually increasingly okay for people to choose not to do that and instead turn towards their own lives and social groups.

I find the 'everyone else should have less so that I can still remain important' philosophy pretty reprehensible.

Knaight
2017-10-20, 07:41 PM
Everyone remember the AlphaGo AI that beat a human player? Google fed gigabytes of data to it (terabytes?), and it managed to beat the best human player 4 out of 5 times. Now they have a new AI, AlphaGo Zero (http://wbgo.org/post/computer-learns-play-go-superhuman-levels-without-human-knowledge#stream/0). It was taught the rules, but learned entirely on its own by playing itself. It has recently beaten AlphaGo 100 out of 100 times.
There are some key conditions here that don't apply to most problems. There's the convenience of a system that can be completely defined and thus completely taught to the AI, coupled with there being absolutely no need for external inputs. It's a recursive optimization problem in a simulation, which is where AIs are at their best.


Second, I think it's mistaken to take Moore's law (which is slowing down if not outright over) and extrapolate it.
I'd take this a step further. For any real process for which there's an observed exponential growth function, there's also a sigmoidal function that fits the data. The sigmoidal function should be assumed to be correct unless there's an extremely good reason to think otherwise. That's not to say that simplified exponential models aren't useful, or that exponential decay functions aren't fine (assuming that you aren't running them backwards, where they tend to break anyway), but physical limits tend to exist.
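
Part of why the exponential extrapolation is so tempting is that, early on, the two curves are nearly indistinguishable. A toy numpy illustration (all of the parameters here are made up; the point is only the shape of the fit):

import numpy as np

# Data generated from a logistic (sigmoidal) curve, observed only well before
# its inflection point, is fit almost perfectly by a plain exponential.
def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.linspace(0, 10, 50)
y = logistic(t, K=100.0, r=0.5, t0=20.0)        # we only ever see the early ramp

slope, intercept = np.polyfit(t, np.log(y), 1)  # fit log(y) = r*t + log(a), i.e. an exponential
y_exp = np.exp(intercept + slope * t)

print(f"fitted growth rate: {slope:.3f} (true logistic r = 0.5)")
print(f"worst relative mismatch on the observed range: {np.max(np.abs(y_exp - y) / y):.2%}")
print(f"exponential extrapolation at t = 40: {np.exp(intercept + slope * 40):.0f} "
      f"(true logistic value: {logistic(40, 100.0, 0.5, 20.0):.0f})")

On the observed range the exponential fit is within about a percent, while the extrapolation to t = 40 overshoots the true ceiling by more than four orders of magnitude.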

Scowling Dragon
2017-10-20, 07:45 PM
I find the 'everyone else should have less so that I can still remain important' philosophy pretty reprehensible.
I find the "I feel important so I shall decide everybody's fate for them" about as awful. Its those people that cause mass suffering on massive scales. Because they see themselves as elite GODs and everybody who doesn't agree are just selfish and stupid.
I'm often in a quiet minority, who has to suffer because the loud majority doesn't care. A future built even more deeply around those principles will mean suffering.

What's the point of ever even developing skills? Some AI will just obsolete it all away (or will reassure you with HOLLOW and SHALLOW platitudes that it hasn't obsoleted you away YET). It's not that there will be different skills. There will be NO skills, because humans will be replaced at every step until we are completely superfluous to the system itself or become inhuman.
I guess I'm happy that hedonists will have a great time, but I apologize that I just don't get as much enjoyment from consumption.

NichG
2017-10-20, 08:27 PM
I find the "I feel important so I shall decide everybody's fate for them" about as awful. Its those people that cause mass suffering on massive scales. Because they see themselves as elite GODs and everybody who doesn't agree are just selfish and stupid.
Im often a quiet minority, who has to suffer because the loud majority doesn't care. A future built even more ingrainedly around those principles will be suffering.

Whats the point of ever even developing skills? Some AI will just obsolete it all away (Or will reasure with HOLLOW and SHALLOW platitutes that it hasn't obsoleted you away YET). Its not there will be different skills. There will be NO skills. Because humans will be replaced at every step until we are completly superflous to the system itself or become inhuman.
I guess Im happy that hedonists will have a great time, but I apologize that I just don't get as much enjoyment from consumption.

Maybe for you it has no point, and maybe for the philosophy you've internalized there will eventually be no point to existing. But if you have shelter and food and resources and freedom and access to therapy and social support and all those other things and you still can't find meaning, well, at some point the claim that this should be everyone else's problem starts to sound awfully selfish. People should starve or die in car accidents because you want to feel that your job is important, that your driving skill is meaningful? I don't buy that.

If you want something meaningful and challenging to do that will remain relevant to your own life regardless of what happens in the sphere of automation, here's your meaningful challenge that survives all else - the search for meaning itself. I have a feeling that no matter how good AI gets, that's the kind of thing that you wouldn't be satisfied with unless you found it for yourself through your own action.

Bucky
2017-10-20, 09:36 PM
The significance problem is neatly answered by the sponsorship mechanism I discussed above. As long as humans are the ones judging the AIs or SIs, there will be a place for humans.

Scowling Dragon
2017-10-20, 10:07 PM
The significance problem is neatly answered by the sponsorship mechanism I discussed above. As long as humans are the ones judging the AIs or SIs, there will be a place for humans.

Either AIs will make AIs to judge them, or some jackass programmer will make AIs to judge other AIs for vague reasons (money).


People should starve or die in car accidents because you want to feel that your job is important

I mean, how far are you willing to take that? How much freedom do you think it is worthwhile to remove? How much of a person can you remove before he becomes unselfish enough to warrant existence?

Are you selfish until a human being is in a virtual reality vat where they can't hurt others or be hurt? Ultimate safety, and an inability to be selfish, since they don't have a choice.

I really hate the argument "Unless you're willing to die for everything, you're selfish."

I decided to spend my time enjoying my relevance for as long as possible and try to make a difference before it's legislated away. I wouldn't vote against AI, because I don't believe that my existence warrants other people's deaths.

I just won't like it and won't feel satisfied.

NichG
2017-10-20, 10:45 PM
I mean, how far are you willing to take that? How much freedom do you think it is worthwhile to remove? How much of a person can you remove before he becomes unselfish enough to warrant existence?

Are you selfish until a human being is in a virtual reality vat where they can't hurt others or be hurt? Ultimate safety, and an inability to be selfish, since they don't have a choice.

I really hate the argument "Unless you're willing to die for everything, you're selfish."


Generally my approach personally is to say, if someone says 'I need more' then I'm a lot more willing to entertain that than when someone says 'I need that other guy to have less'. So when you're arguing against the existence of a technology because someone else might use it to render you a bit less relevant, that reads as the second rather than the first to me.

If you say 'you can have your self-driving car, but I want there to be some roads where I can still drive' I think that's reasonable. Or even 'I make my living driving, so if you want to have your self-driving car that makes me incapable of surviving, you should help come up with some other way that I can be supported'. But if you say 'I don't want you to have your self-driving car because then there won't be a reason for me to drive' I think that's unreasonable. In the second case, you're trying to make your fulfillment everyone else's burden, when the reason it's an issue stems from your own choice of how to approach the situation rather than anything necessarily systemic.

Frozen_Feet
2017-10-21, 09:07 AM
That's precisely my point. Ants have this massive society that has had dramatic impact on the world... But not on us. A post-singularity AI society may have tremendous impact on the world... But not on us. It may simply exist alongside ours, and have a negligible impact on our daily lives.


And the point I was aiming at was that since ants have a massive effect on the world, they also have a massive effect on us, even if that effect is not immediately obvious from looking at your average anthill.

For example, ants are capable of compromising the structural integrity of a building of basically any size. That is why we occasionally have to go to great lengths to eradicate an ant colony while it's still small, or demolish the entire building because it's too late.

You can also estimate the effect of ants on us by reversing the question and asking "how much would it impact us to eradicate the effect that ants have on the world?" And then ask the same question of AI.

Even for modern AI, the impact of trying to get rid of it would be massive.

halfeye
2017-10-21, 10:31 AM
Generally my approach personally is to say, if someone says 'I need more' then I'm a lot more willing to entertain that than when someone says 'I need that other guy to have less'. So when you're arguing against the existence of a technology because someone else might use it to render you a bit less relevant, that reads as the second rather than the first to me.

Some psychologists wrote a questionnaire once, asking people how much more (money I think, but I think it works out to be the same as resources in general) they would need so they wouldn't need any more after that. The answers varied, but the average, for every income group, was "I need 1/3rd more than what I have".

So, I'm a lot more interested in the finiteness of resources than I am in people's relative wants. In my view we need to get off this nice rock because there are so many more resources out there. We're not yet at the limits here, for most resources, but we are headed toward finding those limits, and that will be painful.

NichG
2017-10-21, 07:50 PM
Some psychologists wrote a questionnaire once, asking people how much more (money I think, but I think it works out to be the same as resources in general) they would need so they wouldn't need any more after that. The answers varied, but the average, for every income group, was "I need 1/3rd more than what I have".

So, I'm a lot more interested in the finiteness of resources than I am in people's relative wants. In my view we need to get off this nice rock because there are so many more resources out there. We're not yet at the limits here, for most resources, but we are headed toward finding those limits, and that will be painful.

The nice thing about technology is that it makes the game stop being zero-sum. For example, the rapidly falling price of solar could make a big difference. The other thing I'm tentatively optimistic about is the demographic transition. If for example we enter into a phase where the population is slowly shrinking due to a drop in birth rate, but with the slack taken up by automation, then that seems like a helpful thing for stabilizing our situation, and it seems to be something that could happen without going against human nature.