
View Full Version : Trans-Humanism



Das Platyvark
2012-02-22, 09:47 PM
I've been quite intrigued by the movement lately, and was wondering what people consider to be the most important works in it. I don't want fiction so much as some kind of manifesto, but anything is good. Who/what are the most essential authors or works?

Icewalker
2012-02-22, 10:03 PM
Hmm, I can't say I know much if anything of Transhumanist non-fiction, books of philosophy and the like, save perhaps some of Nietzsche.

As far as fiction goes, I can suggest a few things! A couple of Kurt Vonnegut's stories from Welcome to the Monkey House are good, I think ("Unready to Wear" specifically comes to mind; that one definitely).

It's a bit of a theme throughout Battlestar Galactica, although not in the usual kind of "build humans into better humans" way, except for one episode about 2/3rds of the way through the series with a great speech in it from Model #1.

Speaking of Nietzsche, one of the prominent species (well, they're transhuman, not a different species) in the sci fi show Andromeda (great show) are the Nietzscheans, and their very presence is pretty much entirely defined by extensions of Nietzsche's thought: the Übermensch, survival of the fittest, etc.

Hmm...I'm not sure what else. It comes up a bit in Neuromancer, by William Gibson, but it's not really a theme of the story so much as it is just a given of the setting, at least in the first book of the Sprawl trilogy.

nooblade
2012-02-22, 10:12 PM
The "I feel lucky" result of a search: http://www.singularityweblog.com/a-transhumanist-manifesto/

I think the comments are more interesting than the blog. I like the one that was mentioned there much better: http://www.transhumanist.biz/transhumanmanifesto.htm

What a wordy movement for saying the human frame isn't perfect.

hydroplatypus
2012-02-22, 10:31 PM
What a wordy movement for saying the human frame isn't perfect.

Yup, it is. However, I find it a cool movement nonetheless.

Connington
2012-02-23, 01:43 AM
Being both intellectually stunted and a broke college student, I'm more familiar with blogs than books. So, digging through the dark recesses of my Google Reader feed...

Sentient Developments (http://www.sentientdevelopments.com/) (Fairly typical transhumanist blog. Michael Anissimov or h+ Magazine would also be good starts)
PopBioethics (http://www.popbioethics.com/)(Used to be called PopTranshumanism, and the name still fits better. Deals with bioethics/transhumanist themes in popculture, obviously)
Futurisms (http://futurisms.thenewatlantis.com/) (For a counterpoint, try a blog devoted to criticizing transhumanism)

Ray Kurzweil is also important, but he's associated with the slightly loony "I'm going to live forever" crowd.

As far as fiction goes, Vernor Vinge is absolutely essential reading. Honestly, considering that he coined the term "technological singularity", he's pretty important in an absolute sense. Vinge is one of those guys who was pretty prophetic about the potential of the internet back in the 80s. True Names, "Bookworm, Run!", and "Fast Times at Fairmont High" are the three most relevant stories.

I'm only passing acquainted with transhumanist thinking, so forgive me if I've skipped anything hugely important.

Grinner
2012-02-23, 01:59 AM
I'm currently running a game called Eclipse Phase which discusses the social effects of transhumanism and posthumanism quite well. It's also free.

Here's a link to the core rulebook (http://robboyle.files.wordpress.com/2011/05/ps21000_eclipsephase_3rdprinting1.pdf).

I also saw a third party trailer (http://www.youtube.com/watch?v=uGzpzlvf0Gs#!) for the recent Deus Ex. I found it to be impactful.


It's a bit of a theme throughout Battlestar Galactica, although not in the usual kind of "build humans into better humans" way, except for one episode about 2/3rds of the way through the series with a great speech in it from Model #1.

You're speaking of the "I don't want to be human." spiel?

Ganurath
2012-02-23, 02:17 AM
Well, my experience on the matter is limited primarily to fiction, such as the webcomic Dresden Codak and being a player in Scotch's game. Therefore, I decided to look at the wikipedia article.

A History of Transhumanist Thought (http://www.nickbostrom.com/papers/history.pdf), by Nick Bostrom, was cited for a bunch of stuff in the article, including the initial summary paragraph. It was retrieved six years and two days ago, though.

Outline of Transhumanism (http://en.wikipedia.org/wiki/Outline_of_transhumanism) is a wikipedia article summarizing the main article, because there is so much that goes into transhumanism that even an internet's worth of data fiends found it a tedious read.

Connington: I understood Immortalism to be more of a "dying is bad, and I don't want it to ever happen" sort of thing.

Yora
2012-02-23, 05:42 AM
Ray Kurzweil is also important, but he's associated with the slightly loony "I'm going to live forever" crowd.
I think he's actually the king of the loonies. I think the subject is a highly interesting one with lots of actual relevance for the present and future. However, the people who currently make up the transhumanist movement, and the futurists leaning in that direction, seem pretty nuts to me. If you see the term "singularity", be careful. That's not science, that's an apocalyptic religion that waits for its robot-messiah to end all suffering in the world and transform everyone into immortal energy beings. And like a good apocalyptic cult, it is going to happen within our own lifetime.

Eldan
2012-02-23, 08:29 AM
I happen to be a cultist of that religion, but at least I'm aware of it and make jokes about it :smalltongue:

Well, not so much the robot part. I don't really think machines are that much better than biological systems, being a biologist myself. There are a few sentences on those sites I really can't agree with, e.g. "Biological evolution is perpetual but slow, inefficient, blind and dangerous. Technological evolution is fast, efficient, accelerating and better by design."

That is, honestly, dangerous thinking at worst and somewhere between simplified and silly at best.

Grinner
2012-02-23, 10:12 AM
Well, not so much the robot part. I don't really think machines are that much better than biological systems, being a biologist myself. There are a few sentences on those sites I really can't agree with, e.g. "Biological evolution is perpetual but slow, inefficient, blind and dangerous. Technological evolution is fast, efficient, accelerating and better by design."

I've sometimes speculated on why humans aren't machines. Machines are, yes, durable, strong, and efficient. Compare to biological systems like ours, which are generally incapable of reaching the physical extremes of machines and are also vulnerable to disease.

I've realized that it's because biological systems are far superior where long-term survival is concerned. If a man starves, the body can cannibalize its own flesh in order to sustain the whole. If a man breaks, so long as the damage isn't severe enough to kill him outright, he can be reasonably certain that he will recover. Additionally, biological systems propagate more easily than any machine and last much longer.

kamikasei
2012-02-23, 10:25 AM
The Singularity Institute (http://www.singinst.org/) is more about AI but it's probably a good jumping-off point. Bear in mind that transhumanism is a big, broad umbrella term encompassing many different views and agendas. Not everyone it covers thinks the same things are possible or desirable.

I've sometimes speculated on why humans aren't machines...
I've realized that it's because biological systems are far superior where long-term survival is concerned.
Would not "because we evolved, and thus necessarily are an evolvable form, rather than having been constructed by design to be optimal" not be a simpler answer to that speculation? "Machine" is a slippery term but the way you appear to be using it raises the question "how would we have gotten to be that way, then?".

WalkingTarget
2012-02-23, 10:26 AM
Dunno how helpful they'd be for people just beginning in the topic, but the comic book Transmetropolitan (http://en.wikipedia.org/wiki/Transmetropolitan) by Warren Ellis and Darick Robertson and the webcomic Dresden Codak (http://dresdencodak.com/) by Aaron Diaz both have some transhumanist themes.

Yora
2012-02-23, 10:26 AM
Machines are very good and efficient at the one job they are designed for, in the environment they are meant to operate in. But the world is not about performing a single task in a prepared environment; it's about being able to survive in non-optimal conditions.

Of all writers, the ideas of Shirow Masamune seem to make the most sense to me. Most cyberpunk does not age well: its writers failed to predict key technologies that changed everything and assumed speeds of progress that proved completely off as time marched on. Appleseed started in 1985 and Ghost in the Shell in 1991, and from our current point of view, with our current knowledge and technology, his visions still seem as possible as they were in the 80s, and the speed at which technology advances still seems like a good estimate.
The 95%-replacement cyborgs are a bit far out, as are ubiquitous brain implants that allow internet browsing, but the basic technologies all do exist. It's only a matter of miniaturization and improved efficiency, which in the world of electronics has always been only a matter of time.

Grinner
2012-02-23, 10:30 AM
Would not "because we evolved, and thus necessarily are an evolvable form, rather than having been constructed by design to be optimal" not be a simpler answer to that speculation? "Machine" is a slippery term but the way you appear to be using it raises the question "how would we have gotten to be that way, then?".

That's the boring answer. And, admittedly, I had been considering it from a mildly theological perspective.

Tyndmyr
2012-02-23, 11:00 AM
I think he's actually the king of the loonies. I think the subject is a highly interesting one with lots of actual relevance for the present and future. However, the people who currently make up the transhumanist movement, and the futurists leaning in that direction, seem pretty nuts to me. If you see the term "singularity", be careful. That's not science, that's an apocalyptic religion that waits for its robot-messiah to end all suffering in the world and transform everyone into immortal energy beings. And like a good apocalyptic cult, it is going to happen within our own lifetime.

Basically yes.

I rather dislike that they taint the whole goal of using science to live longer with their crazy. Living forever*, while not possible now, might be one day, and I certainly have no problems with research toward that end, overcoming some of the many problems we have.

The singularity business is crazy, though, and founded on a lot of unsupported assumptions.

*Well, till the world ends or whatever. A long time.

Psyren
2012-02-23, 11:04 AM
Here's some real-world stuff on the subject - double-amputee Olympic-level athlete, Oscar Pistorius (popularly known as "Blade Runner.")

His wikipedia article (http://en.wikipedia.org/wiki/Oscar_Pistorius)
Bunch of news articles (http://topics.nytimes.com/top/reference/timestopics/people/p/oscar_pistorius/index.html)

hydroplatypus
2012-02-23, 11:23 AM
I think he's actually the king of the loonies. I think the subject is a highly interesting one with lots of actual relevance for the present and future. However, the people who currently make up the transhumanist movement, and the futurists leaning in that direction, seem pretty nuts to me. If you see the term "singularity", be careful. That's not science, that's an apocalyptic religion that waits for its robot-messiah to end all suffering in the world and transform everyone into immortal energy beings. And like a good apocalyptic cult, it is going to happen within our own lifetime.


You seem to have a complete misunderstanding of what the singularity is. It has nothing to do with a robot messiah coming to save us. It is a term used to refer to the point when we design an AI that is smarter than a human. We call this the singularity because we cannot predict with any accuracy what will happen when this occurs. We have had robot messiah predictions and robot apocalypse predictions, but when it comes down to it we just don't know. The singularity could also come about in a way where no AI is built, but we make humans smarter than they currently are. Similar effects, but we remain in control.

The reason this is significant is that a more intelligent computer will be able to design computers better than itself, which can design even better computers, etc. Thus we end up with rapidly increasing intelligence. Some have theorized that this will lead to a utopia where computers do all the work (you called it a robot messiah). Others have said that the robots will view us as insects and exterminate us. I think it will be somewhere in the middle: AIs participating in society somewhat like humans do now. Some good, some bad.

The reason the singularity is thought to come soon is predictions based on the rate at which computers increase in computing power. Computing power doubles every ~18 months. Extrapolating this, we get a computer with human-level computing power within 50 (?) years (been a while since I looked up the stats). Granted, having the computing power doesn't imply that the computer will act in any way intelligent, but it shows that it has the capability to be intelligent if we were to program it well enough. So there are some good reasons to think that the singularity (whatever actually happens) will happen relatively soon.
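
For what it's worth, the extrapolation itself is just a couple of lines of arithmetic; the catch is that every input (today's ops/sec, the brain's ops/sec, the doubling period) is a contested assumption, which is why the quoted dates scatter so widely. A rough Python sketch with illustrative numbers:

import math

current_ops = 1e13     # assumed ops/sec of a high-end machine today (illustrative)
brain_ops = 1e16       # one common, disputed estimate of the brain's raw rate
doubling_years = 1.5   # "computing power doubles every ~18 months"

doublings = math.log2(brain_ops / current_ops)
print(f"{doublings:.1f} doublings -> ~{doublings * doubling_years:.0f} years")
# ~10 doublings -> ~15 years with these inputs; nudge either estimate an
# order of magnitude and the date moves by decades.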

Also, on ascending into energy beings: yeah, that is absurd.

Eldan
2012-02-23, 11:42 AM
My favourite vision of the singularity is merging or cooperation. I really like the "Sapience is Sapience, independent of the substrate" aspect of Transhumanism. I really hope when it comes down to it, in the end, we won't be split along lines like "Human" or "AI" or "Dolphin".

Utopic, I know. That's why I don't entirely deny accusations of it being a religion.

Grinner
2012-02-23, 12:06 PM
The simplest definition of the Singularity is that it is the point at which machine becomes man.


The reason the singularity is thought to come soon is predictions based on the rate at which computers increase in computing power. Computing power doubles every ~18 months. Extrapolating this, we get a computer with human-level computing power within 50 (?) years (been a while since I looked up the stats). Granted, having the computing power doesn't imply that the computer will act in any way intelligent, but it shows that it has the capability to be intelligent if we were to program it well enough. So there are some good reasons to think that the singularity (whatever actually happens) will happen relatively soon.

This trend is called Moore's Law, and, if I remember correctly, it's really just a self-fulfilling prophecy, not a natural occurrence. Researchers could choose to release their developments at a slower rate or, given enough help, a faster rate. However, the semiconductor industry has decided that they must follow this prediction and double their products' capacity for processing every 18 months.

Tyndmyr
2012-02-23, 12:11 PM
You seem to have a complete misunderstanding of what the singularity is. It has nothing to do with a robot messiah coming to save us. It is a term used to refer to the point when we design an AI that is smarter than a human. We call this the singularity because we cannot predict with any accuracy what will happen when this occurs. We have had robot messiah predictions and robot apocalypse predictions, but when it comes down to it we just don't know. The singularity could also come about in a way where no AI is built, but we make humans smarter than they currently are. Similar effects, but we remain in control.

The reason this is significant is that a more intelligent computer will be able to design computers better than itself, which can design even better computers, etc. Thus we end up with rapidly increasing intelligence. Some have theorized that this will lead to a utopia where computers do all the work (you called it a robot messiah). Others have said that the robots will view us as insects and exterminate us. I think it will be somewhere in the middle: AIs participating in society somewhat like humans do now. Some good, some bad.

So, the Singularity itself is basically entirely undefined and subjective. It ranges from utopia to the end of human life. That's... roughly as undefined as you can get.


The reason the singularity is thought to come soon is predictions based on the rate at which computers increase in computing power. Computing power doubles every ~18 months. Extrapolating this, we get a computer with human-level computing power within 50 (?) years (been a while since I looked up the stats). Granted, having the computing power doesn't imply that the computer will act in any way intelligent, but it shows that it has the capability to be intelligent if we were to program it well enough. So there are some good reasons to think that the singularity (whatever actually happens) will happen relatively soon.

Also, on ascending into energy beings ya that is absurd.

Moore's law is an approximation of trends in transistor counts. This is only tangentially related to AI.

It is used in planning, and thus has self-fulfilling elements, and even its creator did not expect or claim that it would hold true indefinitely, something the singularity relies upon. So, there is absolutely no reason to believe there is any guarantee that the singularity, whatever it is, will occur at any specific time, or indeed, at all.

So, something might happen at some time.

Yeah...color me unimpressed.

Eldan
2012-02-23, 12:21 PM
No, the Singularity is defined. What is not defined is what will happen after it.

The term comes from black holes. We cannot in any way measure what is behind a Schwarzschild radius. Similarly, we cannot really predict what will happen once computers become self-improving.

The Singularity is the point at which the subject you are working on becomes better at improving itself than its creator is. The point where a computer can build a computer that is better than itself at everything. Where acceleration really takes off.

That much is defined. Do we know where it will end up? No, but we have an idea of what will start it.

Yora
2012-02-23, 12:59 PM
Once a computer is able to improve itself, the speed by which processing power improves with each generation becomes faster and, mathematically, reaches infinity. And no, I don't have sources for that right now, but I've read in several places that this computer will then be able to invent and build everything, creating cybernetics and custom genetically engineered bodies that make humans immortal and improve our own mental capacity beyond what we can imagine today. There will be no more disease or hunger, which also means that the world will be a completely different world since all our physical needs are taken care of. And the point where the singularity happens is estimated to be in the next 20 to 30 years. (http://www.smbc-comics.com/index.php?db=comics&id=1968#comic)

It has all the aspects of the apocalypse:
- An omnipotent and omniscient being will appear.
- The being will end the world as we know it and transform the world into something new that we can't even imagine.
- Humans will leave behind their mortal forms and be transformed into perfect beings as well.
- And it happens within the lifetime of the believers, since it would suck if they had to die.

Some years ago, when I read some stuff about these things, this site (http://hplusmagazine.com/) got me the most riled up.

Eldan
2012-02-23, 01:21 PM
Once a computer is able to improve itself, the speed by which processing power improves with each generation becomes faster and, mathematically, reaches infinity. And no, I don't have sources for that right now, but I've read in several places that this computer will then be able to invent and build everything, creating cybernetics and custom genetically engineered bodies that make humans immortal and improve our own mental capacity beyond what we can imagine today. There will be no more disease or hunger, which also means that the world will be a completely different world since all our physical needs are taken care of. And the point where the singularity happens is estimated to be in the next 20 to 30 years. (http://www.smbc-comics.com/index.php?db=comics&id=1968#comic)

It has all the aspects of the apocalypse:
- An omnipotent and omniscient being will appear.
- The being will end the world as we know it and transform the world into something new that we can't even imagine.
- Humans will leave behind their mortal forms and be transformed into perfect beings as well.
- And it happens within the lifetime of the believers, since it would suck if they had to die.

Some years ago, when I read some stuff about these things, this site (http://hplusmagazine.com/) got me the most riled up.


Yeah, that... that is religion disguised as science. It's the creation science for computer scientists instead of biologists, basically.

No, faster computers won't solve all our problems. They may very well solve a lot of our problems. But I don't believe in fundamental post-scarcity. Energy, at the very least, is always limited, no matter how good our technology becomes. Cybernetics and genetic engineering? Certainly. We are doing both already, and depending on your definition, we have done both for millennia, just with less refined methods.
Even the fastest computer won't solve the problem of differing ideologies between groups of people. Hunger and disease? Probably, to a good degree. Immortality? Maybe. Super-bright energy beings with no limits living in absolute, wonderful harmony? That's the rapture. Eschatology. Not a prediction.

Also, that linked comic always struck me as a bit silly. Things change continually. There will be no clear point after which everything is suddenly bright and wonderful. It's a gradual process, and whether you believe it is actually an improvement, that everything is getting worse, or that it is a mixed bag is personal opinion.

Seraph
2012-02-23, 01:24 PM
There are generally two kinds of transhumanists. The smart ones legitimately follow the scientific advancement of cybernetics and genetic modification, talk about how they will realistically affect human society, and see the Singularity as an intellectual concept about how someday human technology will advance beyond the point of modern understanding.

The stupid ones are the ones who sit around in their basement talking about how The Singularity is going to solve everything forever and how it's just going to show all those religious people.

Your critical failing, Yora, is that you refuse to acknowledge that The Singularity is a legitimate concept with legitimate dimensions of discussion, and you place everyone who mentions it into the second group regardless of what they are actually saying. Discussion of the Singularity is no more an apocalypse cult than, say, people in the 19th century talking about medical advances.

JoshuaZ
2012-02-23, 01:29 PM
No, the Singularity is defined. What is not defined is what will happen after it.



There are a variety of different notions of the Singularity. See this short essay (http://yudkowsky.net/singularity/schools) which outlines the three most common notions.

Eldan
2012-02-23, 01:45 PM
There are a variety of different notions of the Singularity. See this short essay (http://yudkowsky.net/singularity/schools) which outlines the three most common notions.

Strange. I don't see how these three are really mutually exclusive. They are talking about different points, not the same point. Accelerating change says that change accelerates continually over time. Intelligence explosion says change accelerates even more once intelligence begins changing itself. The event horizon says this accelerating change will lead to a future we can't predict.

Don't these all build on each other?

Tyndmyr
2012-02-23, 01:53 PM
Once a computer is able to improve itself, the speed by which processing power improves with each generation becomes faster and, mathematically, reaches infinity. And no, I don't have sources for that right now, but I've read in several places that this computer will then be able to invent and build everything, creating cybernetics and custom genetically engineered bodies that make humans immortal and improve our own mental capacity beyond what we can imagine today. There will be no more disease or hunger, which also means that the world will be a completely different world since all our physical needs are taken care of. And the point where the singularity happens is estimated to be in the next 20 to 30 years. (http://www.smbc-comics.com/index.php?db=comics&id=1968#comic)

Oh, really? See, here's the thing. I'm a software engineer. Improvement isn't just about speed. Worse, genetic software design is not used precisely BECAUSE of speed/bloat issues. Evolution does not produce idealized designs. In fact, it produces rather a lot of cruft.

So, it's the same basic problem that humans have. Just because you've added computers doesn't make that basic problem go away. And that's if it works, which is highly doubtful.

We already have replacement electronic eyes, pacemakers, mentally controlled replacement limbs, etc. None of these were designed by AIs, or anything very much like an AI. They were designed by people.


It has all the aspects of the apocalypse:
- An omnipotent and omniscient being will appear.
- The being will end the world as we know it and transform the world into something new that we can't even imagine.
- Humans will leave behind their mortal forms and be transformed into perfect beings as well.
- And it happens within the lifetime of the believers, since it would suck if they had to die.

Some years ago, when I read some stuff about these things, this site (http://hplusmagazine.com/) got me the most riled up.

Yeah, that stuff is basically all religion.

On the other hand, I'm not even sure that we'll ever achieve strong AI. I'm not sure it even exists. I'm quite certain that if it does, it's not a mere function of transistor count and clock speeds.

Technological progress, longevity, transhumanism and AI are all interesting fields, worthy of conversation. The singularity does all of them a disservice by presenting them in an implausible, unrealistic form that is more akin to a fairy tale than science.

I would love to discuss more concrete topics instead. For instance, I'm quite comfortable in replacing busted portions of my body with mechanical replacements.

GenericGuy
2012-02-23, 01:54 PM
Once a computer is able to improve itself, the speed by which processing power improves with each generation becomes faster and, mathematically, reaches infinity. And no, I don't have sources for that right now, but I've read in several places that this computer will then be able to invent and build everything, creating cybernetics and custom genetically engineered bodies that make humans immortal and improve our own mental capacity beyond what we can imagine today. There will be no more disease or hunger, which also means that the world will be a completely different world since all our physical needs are taken care of. And the point where the singularity happens is estimated to be in the next 20 to 30 years. (http://www.smbc-comics.com/index.php?db=comics&id=1968#comic)

It has all the aspects of the apocalypse:
- An omnipotent and omniscient being will appear.
- The being will end the world as we know it and transform the world into something new that we can't even imagine.
- Humans will leave behind their mortal forms and be transformed into perfect beings as well.
- And it happens within the lifetime of the believers, since it would suck if they had to die.

Some years ago, when I read some stuff about these things, this site (http://hplusmagazine.com/) got me the most riled up.

I think, in a way, you're putting the cart before the horse when dismissing singularitarians as cult members because of the religious undertones in some Singularity predictions. That it's just a palette swap of Christianity with different jargon.

Let me first say that my following statements could be mistakenly construed as "offensive" to religious belief, but are in absolutely no way meant to be dismissive or an attack on religious beliefs.

Religions offer people transcendence and the promise of something greater than what they have now at some undetermined point (most often after death). This is not unique to any one religion, and religion is a creation of human beings (I'm not saying god, karma, etc. is a creation of man, just that humans all across the globe have this shared desire and so have created institutions devoted to it). Technology is a tool and is used to obtain what man desires; man desires transcendence and being more than what he is. Is it not somewhat logical that he would try to use his tools to give himself transcendence? This is what I mean by putting the cart before the horse: for some singularitarians it's not that it's another religion, it's that we're at the point where our tools may allow us to grasp a desire we've had for eons. Does that make sense?

Tiki Snakes
2012-02-23, 01:55 PM
So, what you're saying is that there are Multiple Singularities?

hydroplatypus
2012-02-23, 01:57 PM
Once a computer is able to improve itself, the speed by which processing power improves with each generation becomes faster and, mathematically, reaches infinity.

Ummm... No, it doesn't. Extrapolating existing trends, computing power never reaches infinity. Even if computers improving themselves happens, they won't reach infinity. Certainly they will reach very large numbers for computing power, but infinity cannot be reached. There is in fact a hard limit to all of this. Basically, each calculation requires some energy. There is a finite supply of energy. Therefore there is a maximum on computing power (being the sum total of energy in the universe used all at once). VERY VERY LARGE, but not infinite. And the chances of actually reaching that are 0.
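
The usual way to make that energy argument concrete is Landauer's principle: irreversibly erasing one bit costs at least kT·ln 2 of energy. A quick sketch of the bound, assuming room temperature:

import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # room temperature, K (an assumption; colder is cheaper)

e_per_bit = k * T * math.log(2)
print(f"minimum energy per erased bit: {e_per_bit:.2e} J")    # ~2.9e-21 J
print(f"max bit-erasures per joule:    {1 / e_per_bit:.2e}")  # ~3.5e20
# A hard physical ceiling for conventional (irreversible) computing:
# enormous, but finite, exactly as argued above.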



I've read in several places that this computer will then be able to invent and build everything, creating cybernetics and custom genetically engineered bodies that make humans immortal and improve our own mental capacity beyond what we can imagine today. There will be no more disease or hunger, which also means that the world will be a completely different world since all our physical needs are taken care of.


Cybernetics and genetic engineering are already happening. We don't need a singularity for that. So if a singularity does happen, then it is certain these fields will advance quickly. Note I said quickly, not instantaneously. These fields will certainly be helpful but won't solve all of our problems. Physical immortality is certainly possible (barring violent death). We can replace some parts of the human body already, and theoretically everything is replaceable (except possibly the brain). With nanotech we could theoretically stop the brain from degrading (or repair the damage) and thus be immortal. Granted, you still die if you get hit by an explosive or something that damages your brain/brain replacement, but we wouldn't age.

Biological disease could theoretically be largely eliminated if we develop nanotech to a high degree, or bio-engineer our immune systems to be awesome. But then we have to deal with nano-bot diseases or computer viruses.

Increasing our mental capacity: this will probably happen. We are currently researching ways to put computers in brains so that the two can interact. This could theoretically increase intelligence. So even without a singularity, this will happen.

Hunger: if we were willing to put a power plant in people so that it would recycle the body's nutrients into their pre-used form, we could get by just on whatever fuel this power plant uses. If we can make fusion work on a very small scale, then we could get by just drinking heavy water. This is, however, one of the more out-there ideas, so don't expect it remotely soon.

Completely different world? Yes. All needs taken care of? Maybe, if the more optimistic predictions come true.

As to 20 to 30 years: I have heard estimates ranging from 10 years to 200 years, so we really have no reliable date here. Could be close to now, or could be the distant future.



It has all the aspects of the apocalypse:
- An omnipotent and omniscient being will appear.
- The being will end the world as we know it and transform the world into something new that we can't even imagine.
- Humans will leave behind their mortal forms and be transformed into perfect beings as well.
- And it happens within the lifetime of the believers, since it would suck if they had to die.


Omnipotent/omniscient? Where did we imply that? Certainly AIs would be powerful, but nowhere near those levels of power. We would likely also be improving ourselves, so they wouldn't be that far above us in terms of power/knowledge. Which, by the way, would still take time to increase. Granted, it would go quickly, but not infinitely quickly.

Being will end/transform world: not one being. AIs and humans together will likely change the world. And the world has changed so as to become unrecognizable several times in history. Would a caveman recognize our world at all? What is so bad about saying the world will change?

Humans leave behind forms/become perfect: humans will change, this is true. But perfection is subjective and as such has little meaning in this discussion. Also, define mortal form. Humanity will likely change from what it currently is, but it will likely be through cybernetics, not some mystical change performed by an omnipotent being.

Within our lifetimes: the estimates are fairly reasonable, based on extrapolation. If you don't think so, explain why. Also, some estimates are well outside of our current lifetimes, so not everyone believes they will live to see it. Also, the singularity doesn't imply immortality. It may cause it, but in no way requires or implies it. It will simply advance technology really, really fast. Immortality might not be discovered until centuries after the singularity.

Also, you misunderstood my prior post. The singularity is defined as: when we produce intelligence superior to our current intelligence. You said that it was undefined because its effects were uncertain. This does not affect the definition itself, which is separate from its effects.

Grinner
2012-02-23, 02:24 PM
This is what I mean by putting the cart before the horse: for some singularitarians it's not that it's another religion, it's that we're at the point where our tools may allow us to grasp a desire we've had for eons. Does that make sense?

Sure. It seems logical enough.

Yora
2012-02-23, 03:02 PM
Also, that linked comic always struck me as a bit silly. Things change continually. There will be no clear point after which everything is suddenly bright and wonderful. It's a gradual process, and whether you believe it is actually an improvement, that everything is getting worse, or that it is a mixed bag is personal opinion.

Of course. It's mocking the image many transhumanists present to outsiders. The estimate of when the singularity happens and humans become immortal seems not to be based on scientific calculations, but on "before I die". It's been a few years since I researched all those things, but there was a clear correlation between a writer's age and the date the singularity was supposed to happen. It's usually about the time the writer is in his late 70s.

Your critical failing, Yora, is that you refuse to acknowledge that The Singularity is a legitimate concept with legitimate dimensions of discussion, and you place everyone who mentions it into the second group regardless of what they are actually saying. Discussion of the Singularity is no more an apocalypse cult than, say, people in the 19th century talking about medical advances.
Okay, unfortunate choice of words on my side, I accept that. (And blame it on the limitations of forum-based communication.)
The singularity certainly is a viable theoretical hypothesis. I personally doubt it, but I see the logic behind it.

My criticism was aimed at the singularitarians, who appear to treat the singularity as their ascension to a higher existence.

Yora
2012-02-23, 03:07 PM
Forum won't show my post to edit it until there's a new post in the thread. :smallmad:
So here I go:


Ummm... No, it doesn't. Extrapolating existing trends, computing power never reaches infinity.
Yes, you are right. Mathematically speaking, it doesn't reach infinity, but approaches it.
The basic assumption is that a powerful computer can help you develop a more powerful computer than itself, and the time it takes to create a new computer with double the processing power becomes smaller every time. Extrapolated, the time between two such milestones becomes smaller and smaller until it gets infinitesimally small, which means the speed of advancement becomes almost infinite.
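
The math behind that extrapolation is a geometric series: if each doubling takes a fixed fraction r < 1 of the time the previous one took, the total time for infinitely many doublings converges to a finite value, which is where the "infinite progress in finite time" picture comes from. A sketch with made-up numbers:

t0, r = 1.5, 0.8   # first doubling takes 1.5 years; each next one 20% faster (illustrative)
elapsed = sum(t0 * r**n for n in range(100))      # time for 100 doublings
print(f"100 doublings take {elapsed:.2f} years")
print(f"infinitely many take {t0 / (1 - r):.2f} years")   # geometric series limit
# Both print ~7.5: the schedule packs unbounded doubling into finite time.
# The whole argument rests on r actually staying below 1 forever.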

Religions offer people transcendence and the promise of something greater than what they have now at some undetermined point (most often after death). This is not unique to any one religion, and religion is a creation of human beings (I'm not saying god, karma, etc. is a creation of man, just that humans all across the globe have this shared desire and so have created institutions devoted to it). Technology is a tool and is used to obtain what man desires; man desires transcendence and being more than what he is. Is it not somewhat logical that he would try to use his tools to give himself transcendence? This is what I mean by putting the cart before the horse: for some singularitarians it's not that it's another religion, it's that we're at the point where our tools may allow us to grasp a desire we've had for eons. Does that make sense?
If people claim they are using technology as part of their religious experience, sure, you can't argue with that.
But I've never heard any transhumanists making any claims about connections to something divine or a higher sphere, or making references to religion.
If they claim that it is a purely technological and scientific process, then they have to accept it being criticized on a scientific basis.

Axolotl
2012-02-23, 03:13 PM
The thing that strikes me about a lot of the extreme transhuman rhetoric is how you could replace transhumanism with eugenics and singularity with superman and be having largely the same conversations a century ago. The terminology's changed and the theories have changed, but it all sounds the same: vague predictions based on shaky science.

This isn't helped at all by the fact that, as noted above, many of the predictions resemble biblical apocalypses, making it seem more likely that it's just an incarnation of some Campbellian monomyth for the information age.

That said, it is likely that we will continue to augment ourselves with new technology, but I'm highly doubtful we will change significantly, certainly not in the next hundred years or so.

Yora
2012-02-23, 03:23 PM
I've heard somewhere (I think it was on QI; you hear some amazing things when Stephen Fry is goofing around) that human evolution has basically stopped. Instead of adapting and optimizing to changing environments, we manipulate our environment to be optimized for our needs. Evolutionary pressure has decreased dramatically.
What we will certainly see is a homogenization across races. With travel and relocation to the other side of the world being so easy in the modern world, and constantly increasing, mixed-ancestry people will become a lot more common and racial traits will be diluted. However, most people still stay near where they were born, where people share the local culture and language, so Europeans, Chinese, Indians, and Africans will most likely remain clearly distinguishable for many more centuries. But we will have lots more individuals with mixed traits.

Grinner
2012-02-23, 03:27 PM
If people claim they are using technology as part of their religious experience, sure, you can't argue with that.
But I've never heard any transhumanists making any claims about connections to something divine or a higher sphere, or making references to religion.
If they claim that it is a purely technological and scientific process, then they have to accept it being criticized on a scientific basis.

I don't think he's speaking of it in a strictly dogmatic sense. I think he's just saying that the ethos of transhumanism speaks to a human need to reach higher than ourselves.

Eldan
2012-02-23, 03:32 PM
On the other hand, I'm not even sure that we'll ever achieve strong AI. I'm not sure it even exists. I'm quite certain that if it does, it's not a mere function of transistor count and clock speeds.

As a believer in a deeply naturalistic and mechanistic universe, I think it should be possible. If the human brain is built on deterministic processes and machine-like components, then those processes and components can eventually be replicated if we ever understand them in enough detail. We can already simulate simple neural networks; after that it's a question of scaling up and refinement.
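
For a sense of what "simulate simple neural networks" means at the small end, here is a toy leaky integrate-and-fire neuron, a standard building block of such simulations (all parameter values are illustrative, textbook-style assumptions):

# Toy leaky integrate-and-fire neuron: the voltage decays toward rest,
# integrates a constant input, and spikes when it crosses threshold.
tau, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0   # ms, mV
dt, drive = 0.1, 18.0                                        # ms, mV of input drive

v, spikes = v_rest, []
for step in range(int(200 / dt)):              # simulate 200 ms
    v += ((v_rest - v) + drive) / tau * dt     # leak toward rest + input
    if v >= v_thresh:                          # threshold crossed: spike
        spikes.append(step * dt)
        v = v_reset
print(f"{len(spikes)} spikes in 200 ms")
# Real brain simulations chain millions of these with synapses in between;
# the "scaling up and refinement" is exactly that step.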

Grinner
2012-02-23, 03:36 PM
As a believer in a deeply naturalistic and mechanistic universe, I think it should be possible. If the human brain is built on deterministic processes and machine-like components, then those processes and components can eventually be replicated if we ever understand them in enough detail. We can already simulate simple neural networks; after that it's a question of scaling up and refinement.

The question is not whether we can build a strong AI. It's whether we can prove that it is, in fact, sentient. I fear that there will always be contention on the subject, should we ever design such a thing.

Would it actually feel, or merely act like it? This is one of the key problems presented by solipsism.

Tyndmyr
2012-02-23, 03:39 PM
Of course. It's mocking the image many transhumanists present to outsiders. The estimate of when the singularity happens and humans become immortal seems not to be based on scientific calculations, but on "before I die". It's been a few years since I researched all those things, but there was a clear correlation between a writer's age and the date the singularity was supposed to happen. It's usually about the time the writer is in his late 70s.

I've also noticed that. I really do wish that immortality was a possible option within my lifetime, but I really don't think it will be, despite currently being in my twenties and being quite healthy.

Frankly, any time something is described as "fifty years away", it's generally a wild guess at best. Humanity has a terrible track record for predicting where we'll be then. Looking 50 years back, their predictions of today were...not terribly accurate.


The thing that strikes me about a lot of the extreme transhuman rhetoric is how you could replace transhumanism with eugenics and singularity with superman and be having largely the same conversations a century ago. The terminology's changed and the theories have changed, but it all sounds the same: vague predictions based on shaky science.

Yeah, you can see the similarities in sci-fi.

And frankly, genetic selection can improve humanity to a certain degree (selecting against recessive genetic diseases, for instance, something we now do somewhat), but it's just not anything like the panacea it was made out to be.

Technology and deliberate genetic engineering of ourselves will likely be similar. Each offers some nifty possibilities, but nothing fixes everything, and it's just not realistic to assume we'll leap vastly ahead in our lifetime. Average lifetimes are increasing, yes...but the rate of increase is just not that fast.


As a believer in a deeply naturalistic and mechanistic universe, I think it should be possible. If the human brain is built on deterministic processes and machine-like components, then those processes and components can eventually be replicated if we ever understand them in enough detail. We can already simulate simple neural networks; after that it's a question of scaling up and refinement.

Well, it is, yes. But you get into sticky things like quantum mechanics, and... the certainty goes out the window (slight oversimplification, I know). But even if we do all that, we can merely copy a human brain. That's nice, but it's not the same as a self-improving artificial intelligence, and it's really, really hard for something to fully understand itself.

Eldan
2012-02-23, 03:41 PM
To me, there is no real difference between a convincing "fake" sapience and "true" sapience. I've never heard a convincing, absolutely defining criterion for sapience anyway and I believe it not to be something that is either present or not, but a gradient. I'd say chimps and dolphins are pretty damn sapient already, and plenty of other animals show signs of it as well.

Grinner
2012-02-23, 03:49 PM
To me, there is no real difference between a convincing "fake" sapience and "true" sapience. I've never heard a convincing, absolutely defining criterion for sapience anyway and I believe it not to be something that is either present or not, but a gradient. I'd say chimps and dolphins are pretty damn sapient already, and plenty of other animals show signs of it as well.

Fair enough. It's just that in creating digital life, you first need to answer the question "What is life?".

I like this thread. :smallsmile:

Coidzor
2012-02-23, 03:59 PM
AI and P-Zombies (http://en.wikipedia.org/wiki/Philosophical_zombie). Didn't expect that within 2 pages.

Axolotl
2012-02-23, 05:11 PM
To me, there is no real difference between a convincing "fake" sapience and "true" sapience.

I'd say there's a huge difference. I mean, cleverbot can (badly) imitate sapience, but it's nothing more than a tool for regurgitating racism.



I've never heard a convincing, absolutely defining criterion for sapience anyway and I believe it not to be something that is either present or not, but a gradient. I'd say chimps and dolphins are pretty damn sapient already, and plenty of other animals show signs of it as well.

What signs?

I'd have to disagree with you that it's a gradient; something can either think or it can't. Now, certain animals may be closer to achieving it than others, but it's still binary.

Eldan
2012-02-23, 06:25 PM
That would be the convincing part. And of course we could debate the definition of sapience here. It does not seem to show any signs of self-awareness, self-reflection, awareness of the future, or planning, which are often named as signs of sapience.

Where do you put the limit on sapience, then? Chimpanzees use tools, they go to war, they recognize themselves in a mirror, they have complex social structures, they can learn simple sign language or language using a computer keyboard, they can clearly plan for the future and be taught how to operate simple devices (often more a problem of dexterity than brainpower), and they are able to use fiat currency in exchange for goods and services. Dolphins and some birds have some of these, but not others.

Where exactly does something stop being merely sentient and become sapient, to you?

Soralin
2012-02-23, 06:32 PM
Yeah, singularity is basically the idea:

If, by using my intellect, I can build something more intelligent than myself, then it stands to reason that someone more intelligent than I am could do even better.
And, if I do build something more intelligent than myself, then I already have made someone more intelligent than I am to do so.
And, if that more intelligent someone does do better, it would mean that they have created someone more intelligent than they are, who would presumably be capable of doing even better than they did.
(and so on, and so forth)

There is a limit to computational power though, simply from the laws of physics of the universe, if nothing else: http://en.wikipedia.org/wiki/Limits_to_computation
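
One concrete number from that page, for illustration: Bremermann's limit caps any self-contained kilogram of matter at roughly c²/h bits per second. A quick check of the arithmetic:

c = 2.998e8     # speed of light, m/s
h = 6.626e-34   # Planck constant, J*s

print(f"Bremermann's limit: {c**2 / h:.2e} bits/s per kg")   # ~1.36e50
# Astronomically large, but finite; self-improvement can't outrun physics.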

Yora
2012-02-23, 06:46 PM
What signs?
The best sign I have seen was when our dog was out in the garden once, while my mother was pulling twigs out of one of the flower patches next to the trees. I was on the veranda, which our dog didn't notice. Since she knows she's not supposed to go at my mother's stuff, she walked behind her while she kept working on the flowers, and then sneaked up in my mother's blind spot to get one of the sticks she had thrown behind her. Then she turned around to sneak away again, but only made it a few meters before she noticed me watching her. Then there was a long moment of awkwardness as she looked at me, but since I didn't say anything, she kept her head low and continued to another part of the garden, far away from my mother.

And she's a really stupid dog. Dogs are perfectly capable of understanding when they are doing things they are not supposed to do and when they get caught doing it.

I also once had a lengthy discussion with our cat about whether she could go out in the garden after 10 in the evening. When I replied to her meowing next to the backdoor that I wouldn't let her out, since she wouldn't be back when we go to bed, she kept pestering while I was busy in the kitchen, and I didn't even look at her while I kept telling her I wouldn't let her out, and she still got the message and went back to the hall to find something else to do. And this cat can sit a lot longer than 30 seconds next to a door demanding that someone let her out, and I was still in the kitchen right next to the door when she gave up. I think she was quite clearly able to interpret my decision on a subject a lot more complex than basic instinctual fight-or-flight responses.
She wanted something, understood that I was aware of her wish but refused to comply, instead of just waiting for me to finish what I was doing and then getting me to open the door for her.

There is a limit to computational power though, simply from the laws of physics of the universe, if nothing else: http://en.wikipedia.org/wiki/Limits_to_computation

You could always make a computer out of organic neurons. Well, not now, but there's no reason to assume you can't replicate the complexity of a human brain in the future, and then it's only a matter of making them bigger and more complex.
Theoretical upper limits posed by the limited amount of matter and energy in the universe don't really matter for practical considerations.

Axolotl
2012-02-23, 07:09 PM
That would be the convincing part. And of course we could debate the definition of sapience here. It does not seem to show any signs of self-awareness, self-reflection, awareness of the future, or planning, which are often named as signs of sapience.

But even if it were convincing and could pass the Turing test reliably, cleverbot still wouldn't be thinking. It can only repeat what it's already been told; it could never, say, invent calculus or write a great play or novel. Sapience requires the ability to create and be original, but that isn't necessary for something that is merely imitating sapience.


Where do you put the limit on sapience, then? Chimpanzees use tools, they go to war, they recognize themselves in a mirror, they have complex social structures, they can learn simple sign language or language using a computer keyboard, they can clearly plan for the future and be taught how to operate simple devices (often more a problem of dexterity than brainpower), and they are able to use fiat currency in exchange for goods and services. Dolphins and some birds have some of these, but not others.

Where exactly does something stop being merely sentient and become sapient, to you?

I'd say art would be the primary sign I'd look for regarding thought. Or religion, philosophy, or complex mathematics: actions that try to contextualise the world beyond the immediate senses. Art because it shows large amounts of energy spent on things that do not help the survival of either the individual or their genes. I'm not saying that these define sapience, but they're what I'd expect to see as observable effects. Tool use isn't really enough; it's just a survival mechanism, a very advanced one, but not one that needs thought behind it. Insects often have very complex societal structures (in fact, there are ants that practice what could be viewed as farming and slavery), but there's no thought, just repetition of actions that help survival.

To put it somewhat crudely in Dawkins' terms, I'd judge a biological species sapient when it places the spreading of memes over the spreading of genes. However, I'm not entirely sure how I'd translate that for judging a computer's sapience.

pffh
2012-02-23, 07:15 PM
I'd say art would be the primary sign I'd look for regarding thought. Or religion, philosophy, or complex mathematics: actions that try to contextualise the world beyond the immediate senses. Art because it shows large amounts of energy spent on things that do not help the survival of either the individual or their genes. I'm not saying that these define sapience, but they're what I'd expect to see as observable effects. Tool use isn't really enough; it's just a survival mechanism, a very advanced one, but not one that needs thought behind it. Insects often have very complex societal structures (in fact, there are ants that practice what could be viewed as farming and slavery), but there's no thought, just repetition of actions that help survival.

To put it somewhat crudely in Dawkins' terms, I'd judge a biological species sapient when it places the spreading of memes over the spreading of genes. However, I'm not entirely sure how I'd translate that for judging a computer's sapience.

http://www.youtube.com/watch?v=SNogdpHeuiE

An elephant painting elephants (there are also videos of it painting trees that it can see and other stuff, so it's not just trained to paint certain things over and over again).

So there you have tool use, art, and understanding of what it is and its surroundings.

nooblade
2012-02-23, 07:15 PM
Now that we're hardly talking about where to find Transhumanist resources anymore (yay derail), I'd like to chip in my opinions.

I'm sure it's looked like thinking has been "on the edge of understanding" for a long time now. It's good to know, but I don't think transistors alone will be up to the task. How you think depends on how you feel and that depends on your body.

Immortality was a foolish quest in children's tales and I think extending average lifespan and quality of life are more worthy goals. We have plenty of hours of work in humanity already if only we could actually use them (make work satisfying).

But I like prosthetic stuff. Crutches and glasses were good; cranes, bulldozers, and simulations are better; and the future should be even better. I like how computers are called a "bicycle for the mind", and I wish they would live up to that, because you don't actually need a bigger mind to contain greater things. I don't think my brain is at full capacity, or that it would be even with lifelong learning.

Eldan
2012-02-23, 07:25 PM
http://www.youtube.com/watch?v=SNogdpHeuiE

Elephant painting elephants (there are also videos of it painting trees that it can see and other stuff so it's not just trained to paint certain things over and over again).

So there you have tool use, art and understanding of what it is and it's surroundings.

Actually, I've been to an elephant center in Thailand where they had painting elephants, and it was honestly a bit disappointing... there was a caretaker standing next to the elephant the entire time, and if you looked into the gift shop, they had a dozen copies of every painting, all almost identical. So, it looked more like training to me.

That said, there are painting gorillas. (http://en.wikipedia.org/wiki/Michael_%28gorilla%29)

Grinner
2012-02-23, 07:26 PM
But even if it were convincing and could pass the Turing test reliably, cleverbot still wouldn't be thinking. It can only repeat what it's already been told; it could never, say, invent calculus or write a great play or novel. Sapience requires the ability to create and be original, but that isn't necessary for something that is merely imitating sapience.

And that is the difference between weak AI and strong AI.

Weak AI is like cleverbot, capable of decision-making but unaware of itself. Strong AI, on the other hand, would ideally be fully self-conscious. The problem is deciding whether it's actually self-conscious, or just acting like it.

Tiki Snakes
2012-02-23, 08:59 PM
Actually, I've been to an elephant center in Thailand where they had painting elephants, and it was honestly a bit disappointing... there was a caretaker standing next to the elephant the entire time, and if you looked into the gift shop, they had a dozen copies of every painting, all almost identical. So, it looked more like training to me.

That said, there are painting gorillas. (http://en.wikipedia.org/wiki/Michael_%28gorilla%29)

If elephants in Western zoos and so on are capable of painting for fun, and elephants in Thai zoos are capable of being exploited to create repeated pictures almost as a job, it doesn't necessarily mean that anything they are doing is less meaningful or impressive, just that they aren't given full creative freedom. :smallcool:

Also, the elephants on youtube are much better painters than Michael the gorilla. It's hard to hold it against him, though, considering he could sign up to 600 words and hold meaningful conversations.

Eldan
2012-02-23, 09:09 PM
Oh, certainly. I was just saying that I saw an elephant painting the exact same pictures as in that video, but in different colours, and that they also had them, in yet again different colours, in the gift shop. So, at least those weren't real. Michael was just the only painting gorilla I could find an article on without much research effort. There are others. :smalltongue:

JoshuaZ
2012-02-23, 10:31 PM
Strange. I don't see how these three are really mutually exclusive.

Don't these all build on each other?

Well, the point of that essay is twofold: 1) whether they build on each other or not, they are distinct ideas that should be kept separate (using the same term for all of them is just asking for confusion); 2) each version has weaker and stronger forms, with the stronger ones running into contradictions.

Seraph
2012-02-23, 11:57 PM
Also, you misunderstood my prior post. The singularity is defined as: when we produce intelligence superior to our current intelligence.

That's not even the simplest one. The way the concept of a singularity was explained to me was, "a technological shift so great that the nature of life after its occurrence is fundamentally beyond the understanding of those existing before." So-called "mini-singularities" are more or less historical fact: imagine trying to explain radio to someone living before telegrams, or the concept of a book to someone in the age before the written word.

It's why I find type-2 transhumanists so comical; fantasizing about post-singularity life is the very antithesis of the concept's meaning.


To me, there is no real difference between a convincing "fake" sapience and "true" sapience. I've never heard a convincing, absolutely defining criterion for sapience anyway, and I believe it's not something that is either present or absent, but a gradient.

It becomes a question of how something can "fake" sapience when the act of faking would itself imply sapience. I think Cracked put it best when discussing the displayed intelligence of Alex the grey parrot:

"He's not really understanding communication, he's just observing human idioms and responding in accordance."

"Okay, how is that not how humans do it already?"

hydroplatypus
2012-02-24, 12:19 AM
That's not even the simplest one. The way the concept of a singularity was explained to me was, "a technological shift so great that the nature of life after its occurrence is fundamentally beyond the understanding of those existing before." So-called "mini-singularities" are more or less historical fact: imagine trying to explain radio to someone living before telegrams, or the concept of a book to someone in the age before the written word.

I have heard many definitions, including the one I posted. That being said, yours makes more sense, and I will use it from here on. Writing, agriculture, metalworking and, to a lesser extent, the printing press, computers, and the internet are good examples. How would you explain them to someone from before?

Also, I think we all realize that any speculation will be wrong. Don't expect something as silly as logic to stop us, though :smalltongue:. We find it fun to speculate regardless of how accurate we are. It will be interesting to see what actually happens, though.

VanBuren
2012-02-24, 01:28 AM
The simplest definition of the Singularity is that it is the point at which machine becomes man.



This trend is called Moore's Law, and, if I remember correctly, it's really just a self-fulfilling prophecy, not a natural occurrence. Researchers could choose to release their developments at a slower rate or, given enough help, a faster rate. However, the semiconductor industry has decided that it must follow this prediction and double its products' processing capacity every 18 months.


No, the Singularity is defined. What is not defined is what will happen after it.

The term comes from black holes. We cannot in any way measure what is behind a Schwarzschild radius. Similarly, we cannot really predict what will happen once computers become self-improving.

The Singularity is the point at which the subject you are working on becomes better at improving itself than its creator is. The point where a computer can build a computer that is better than itself at everything. Where acceleration really takes off.

That much is defined. Do we know where it will end up? No, but we have an idea of what will start it.

My understanding of the singularity was that it is simply the point beyond which the world cannot be comprehended by those who came before it. That is, we simply cannot predict what the world will be like beyond it. Arguably, we've already encountered several such singularities over the course of human existence.
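
As an aside, the difference between plain Moore's-law growth and the self-improving case can be made concrete with a toy model (the constants below are purely illustrative, not predictions). A fixed doubling period gives an exponential; letting the improvement rate depend on the current capability gives a curve that blows up in finite time, which is where the "singularity" metaphor gets its force:

    # Moore's law: capability doubles every 1.5 years -> c(t) = 2**(t/1.5).
    for years in (3, 6, 9, 12, 15):
        print(f"Moore's law after {years:2d} years: {2 ** (years / 1.5):6.0f}x")

    # Self-improvement: dc/dt = k*c^2 solves to c(t) = c0 / (1 - k*c0*t),
    # which diverges at the finite time t = 1/(k*c0): a "singularity" in the model.
    k, c0 = 0.1, 1.0         # hypothetical constants, units arbitrary
    t_blowup = 1 / (k * c0)  # t = 10 in these units
    t = 0.0
    while t < t_blowup - 0.5:
        print(f"self-improving, t={t:4.1f}: capability {c0 / (1 - k * c0 * t):7.2f}")
        t += 2.5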

Grinner
2012-02-24, 01:31 AM
My understanding of the singularity was that it is simply the point beyond which the world cannot be comprehended by those who came before it. That is, we simply cannot predict what the world will be like beyond it. Arguably, we've already encountered several such singularities over the course of human existence.

Language is tricky that way.


It becomes a question of how something can "fake" sapience when the act of faking would itself imply sapience. I think Cracked put it best when discussing the displayed intelligence of Alex the grey parrot:

"He's not really understanding communication, he's just observing human idioms and responding in accordance."

"Okay, how is that not how humans do it already?"

I looked up that article (http://www.cracked.com/article_18930_6-amazingly-intelligent-animals-that-will-creep-you-out.html), and that's not even close to what it said. In fact, it argued quite the opposite (inverse? contrapositive?). It mentioned how the parrot had associated shapes with words, and when presented with the object, could identify it and its characteristics.

Flickerdart
2012-02-24, 02:13 AM
My favourite prediction for the singularity was in a Stanislaw Lem story, where an inventor builds a machine smarter than himself to get it to compute something or other. The machine immediately builds another copy of itself and gives that copy the job instead, and, well, you can guess what happens afterwards.

Mewtarthio
2012-02-24, 09:45 AM
My favourite prediction for the singularity was in a Stanislaw Lem story, where an inventor builds a machine smarter than himself to get it to compute something or other. The machine immediately builds another copy of itself and gives that copy the job instead, and, well, you can guess what happens afterwards.

The Vogons destroyed it to make room for a hyperspace bypass?

Eldan
2012-02-24, 10:04 AM
It ended up restarting the universe to reverse entropy with the words "Fiat Lux"? :smallwink:

Yora
2012-02-24, 10:42 AM
Wouldn't gravity still be at work in a universe of maximum entropy? Tiny pieces of dust spread through the whole universe would take an insanely long time to be drawn together, but wouldn't all mass eventually collapse into a single black hole again?

Tyndmyr
2012-02-24, 11:09 AM
Now that we're hardly talking about where to find Transhumanist resources anymore (yay derail), I'd like to chip in my opinions.

I'm sure it's looked like thinking has been "on the edge of understanding" for a long time now. It's good to know, but I don't think transistors alone will be up to the task. How you think depends on how you feel, and that depends on your body.

Turing complete is Turing complete. If something is achievable in one Turing-complete system, it is achievable in any other. It need not be as easy, but it's mathematically provable.
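
A sketch of what that equivalence means in practice, with Brainfuck standing in for "the other system" purely because its interpreter fits in a few lines of Python: anything the hosted language computes, the host computes too, by construction.

    def run_bf(code, inp=""):
        """Minimal Brainfuck interpreter: a proof-by-construction that
        whatever this language can compute, Python can compute as well."""
        tape, ptr, pc, out, it = [0] * 30000, 0, 0, [], iter(inp)
        jumps, stack = {}, []
        for i, ch in enumerate(code):  # pre-match the loop brackets
            if ch == "[":
                stack.append(i)
            elif ch == "]":
                j = stack.pop()
                jumps[i], jumps[j] = j, i
        while pc < len(code):
            ch = code[pc]
            if ch == ">":   ptr += 1
            elif ch == "<": ptr -= 1
            elif ch == "+": tape[ptr] = (tape[ptr] + 1) % 256
            elif ch == "-": tape[ptr] = (tape[ptr] - 1) % 256
            elif ch == ".": out.append(chr(tape[ptr]))
            elif ch == ",": tape[ptr] = ord(next(it, "\0"))
            elif ch == "[" and tape[ptr] == 0: pc = jumps[pc]
            elif ch == "]" and tape[ptr] != 0: pc = jumps[pc]
            pc += 1
        return "".join(out)

    # 8*9 = 72 = 'H'; 72 + 33 = 105 = 'i'. Prints "Hi".
    print(run_bf("++++++++[>+++++++++<-]>." + "+" * 33 + "."))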


Immortality was a foolish quest in children's tales, and I think extending average lifespan and quality of life are more worthy goals. We have plenty of hours of work in humanity already, if only we could actually use them (make work satisfying).

I don't particularly base my life off children's tales. I see the goal of immortality as... basically the same as extending average lifespan. There isn't a magic button that makes us live forever... we need to fix everything that makes us stop living.


Wouldn't gravity still be at work in a universe of maximum entropy? Tiny pieces of dust spread through the whole universe would take an insanely long time to be drawn together, but wouldn't all mass eventually collapse into a single black hole again?

Not necessarily. Gravity is an immensely weak force, and it decreases rapidly as distance increases. So, the rate of expansion should slow due to gravity... but the deceleration itself falls off rapidly. The expansion rate approaches, but never actually reaches, zero. So, you never actually have gravity entirely overcome the expansion momentum.*

*calculations may not apply to all universes, as initial momentum is not guaranteed to be equal, and total mass/mass distribution may vary.
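
The picture behind that footnote is essentially a projectile launched at exactly escape velocity: gravity never stops pulling, but the pull weakens fast enough that the projectile never turns around. A crude numerical sketch (a Newtonian toy with Euler integration, not real cosmology; the constants are Earth's):

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24    # Earth's mass, kg
    r = 6.371e6     # start at Earth's surface, m
    v = (2 * G * M / r) ** 0.5   # escape velocity, ~11.2 km/s
    dt = 10.0                    # time step, s
    for step in range(1, 1_000_001):
        v -= G * M / r**2 * dt   # gravity always decelerates the body...
        r += v * dt              # ...but the pull weakens as 1/r^2
        if step % 200_000 == 0:
            print(f"r = {r:.3e} m, v = {v:8.1f} m/s (still positive)")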

hydroplatypus
2012-02-24, 01:06 PM
Not necessarily. Gravity is an immensely weak force, and it decreases rapidly as distance increases. So, the rate of expansion should slow due to gravity... but the deceleration itself falls off rapidly. The expansion rate approaches, but never actually reaches, zero. So, you never actually have gravity entirely overcome the expansion momentum.*

*calculations may not apply to all universes, as initial momentum is not guaranteed to be equal, and total mass/mass distribution may vary.

Actually, based on current research, the rate of expansion is increasing due to dark energy (that stuff no one really understands). So as the universe expands, the rate of expansion increases. I think we have ~25% of the mass necessary to get the situation you described (a decreasing but never-zero expansion rate, no contraction).

Also, it would change in different universes. If mass = 100% of the critical density, then it will do what you described. If mass < 100%, then it keeps expanding forever, never slowing to a halt. If mass > 100%, it collapses back into a singularity.
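
In the usual notation that ratio is the density parameter Omega (actual density over critical density). A one-screen version of the matter-only classification, with dark energy deliberately left out:

    # Fate of a matter-only universe by density parameter Omega.
    # Dark energy (which observations since the late 1990s suggest
    # dominates) changes this story entirely.
    def fate(omega):
        if omega > 1:
            return "overdense: expansion reverses, Big Crunch"
        if omega == 1:
            return "critical: expansion slows toward zero, never stops"
        return "underdense: expands forever, gravity never wins"

    for omega in (0.25, 1.0, 2.0):
        print(f"Omega = {omega:4.2f}: {fate(omega)}")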

Flickerdart
2012-02-24, 01:46 PM
The Vogons destroyed it to make room for a hyperspace bypass?
No, the machines kept building more of themselves and shirking responsibility for doing the actual work. Forever.

Coidzor
2012-02-24, 04:59 PM
My favourite prediction for the singularity was in a Stanislaw Lem story, where an inventor builds a machine smarter than himself to get it to compute something or other. The machine immediately built another one of itself and gave it that job instead, and, well, you can guess what happened afterwards.

AI technology was scrapped because they were too lazy to do anything but make more of themselves to do the work for them?

PirateMonk
2012-02-24, 06:56 PM
In answer to the original question: Transhumanism as Simplified Humanism (http://yudkowsky.net/singularity/simplified)


The question is not whether we can build a strong AI. It's whether we can prove that it is, in fact, sentient. I fear that there will always be contention on the subject, should we ever design such a thing.

Complex arguments over the possibility or impossibility of P-Zombies aside, why does this matter? An AI does not have to be "sentient," let alone provably so, in order to start a singularity or do anything else worthwhile (in fact, some transhumanists are against AI sentience).


Gravity is an immensely weak force

Off topic: this has always bugged me. Why do people always say "gravity is very weak" rather than "there is very little mass, relative to other force charges, in the universe/a proton"?

Mewtarthio
2012-02-24, 06:59 PM
AI technology was scrapped because they were too lazy to do anything but make more of themselves to do the work for them?

Of course. Don't you know anything about programming? The proper response to a bug in the code is to rage against the heavens at the cruel, yet poetic, twist of fate. Software testing? Bug fixes? Only in the shiny, utopian futures. :smalltongue:

Coidzor
2012-02-24, 08:27 PM
Of course. Don't you know anything about programming? The proper response to a bug in the code is to rage against the heavens at the cruel, yet poetic, twist of fate. Software testing? Bug fixes? Only in the shiny, utopian futures. :smalltongue:

Well, if the AIs kept building more advanced versions of themselves to do the job, then that implies that they never fixed that problem, or were incapable of doing so, yes.

Grinner
2012-02-24, 09:38 PM
Complex arguments over the possibility or impossibility of P-Zombies aside, why does this matter? An AI does not have to be "sentient," let alone provably so, in order to start a singularity or do anything else worthwhile (in fact, some transhumanists are against AI sentience).

Please don't use the term "singularity", at least until this thread can settle on a definition for it.

As for your question, I'd say it depends on where you stand. If you just want something to fetch things or initiate a Terminator-esque genocide of mankind, then no, it doesn't really matter.

If, however, you want to create legitimate life or just play God, then it very well does matter.


Well, if the AIs kept building more advanced versions of themselves to do the job, then that implies that they never fixed that problem, or were incapable of doing so, yes.

Maybe they achieved true sentience; maybe they were lazy.

Coidzor
2012-02-25, 12:26 AM
Maybe they achieved true sentience; maybe they were lazy.

Yes, and if you realize that AIs won't do any work for you, and it's a hard and fast rule that this critical flaw can't be resolved, then there's not a whole lot of purpose, let alone promise, in having or creating AIs. That holds even if the few iterations made before the whole thing was declared a wash are truly sapient and gain enough rights/security not to be recycled.

Grinner
2012-02-25, 12:41 AM
Yes, and if you realize that AIs won't do any work for you, and it's a hard and fast rule that this critical flaw can't be resolved, then there's not a whole lot of purpose, let alone promise, in having or creating AIs. That holds even if the few iterations made before the whole thing was declared a wash are truly sapient and gain enough rights/security not to be recycled.

We don't always do things because they will benefit us. Sometimes, we do things just because we can.

If you're only looking to profit, then you're better off sticking to weak AI.

Cikomyr
2012-02-25, 01:34 AM
Well, there is a way to develop beyond simple robotification and AI development. What's to stop us from genetically engineering the species a bit, at the very least to stop the spread of the congenital diseases that no longer kill us and that we therefore allow into the gene pool?

I'd argue that right now, humanity is getting genetically weaker. It's only the progressive mixing of genes in our Western societies (where medical science is at its best), driven by immigration, that keeps gene diversity strong, but there will come a critical point where this diversification no longer suffices to counteract the genetic deterioration caused by the spread of congenital diseases.


(Oh, and a side note: no matter what Moore's law may say about the speed of technological progress, I think the upper limit on AI will come more from engineering limits, like the availability of energy. That's why I believe humanity will need ways to harness energy on an order of magnitude above what it currently does. Current fossil fuel technology just can't handle that task, so I believe it'll probably be space-based solar power. Whatever kind of computer you can develop, there will come a time when you cannot power it anymore.)

Coidzor
2012-02-25, 01:50 AM
We don't always do things because they will benefit us. Sometimes, we do things just because we can.

If you're only looking to profit, then you're better off sticking to weak AI.

Why are you taking this so personally? :smallconfused: We're talking about a very, very specific example from a single story and then you all act like I murdered someone's mother.

Grinner
2012-02-25, 02:08 AM
Why are you taking this so personally? :smallconfused: We're talking about a very, very specific example from a single story and then you all act like I murdered someone's mother.

I am? Sorry. Guess I came off the wrong way. :smallconfused:

Let me try again: from an economic viewpoint, creating a sentient being is not the best idea, since it may think like any person. To that end, it's going to want rights.

Weak AI, on the other hand, is mechanical in nature, making ideal cheap labor.

Edit: ...And I missed the point. What I'm trying to say is: where intellectual labor is involved, an advanced weak AI is always preferable to a strong AI, for the reasons highlighted in the story.

hydroplatypus
2012-02-25, 12:30 PM
(Oh, and a side note: no matter what Moore's law may say about the speed of technological progress, I think the upper limit on AI will come more from engineering limits, like the availability of energy. That's why I believe humanity will need ways to harness energy on an order of magnitude above what it currently does. Current fossil fuel technology just can't handle that task, so I believe it'll probably be space-based solar power. Whatever kind of computer you can develop, there will come a time when you cannot power it anymore.)

Actually, for the moment energy isn't the problem for supercomputers; cooling them is.

For more energy, use thorium nuclear reactors (http://www.ted.com/talks/lang/en/kirk_sorensen_thorium_an_alternative_nuclear_fuel.html). A mine the size of a football field could power the world. And if I remember correctly, China is pouring lots of money into researching this technology, so we might get it soon.
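
The football-field claim is at least the right order of magnitude on the back of an envelope. Every figure below is a rough assumption (complete fission of the fuel, conversion losses ignored), not a measurement:

    # How much thorium per year would match world energy consumption?
    energy_per_kg = 80e12    # J/kg, rough yield of complete fission
    world_use = 5.5e20       # J/yr, world primary energy use circa 2012
    density = 11_700         # kg/m^3, thorium metal

    mass_per_year = world_use / energy_per_kg      # ~6.9e6 kg
    volume_per_year = mass_per_year / density      # ~590 m^3
    field_area = 110 * 49                          # m^2, roughly one football field
    depth_per_year = volume_per_year / field_area  # ~0.11 m

    print(f"{mass_per_year:.2e} kg of thorium per year, "
          f"{volume_per_year:.0f} m^3; a field-sized pit "
          f"deepens only ~{depth_per_year:.2f} m per year")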

1dominator
2012-02-25, 12:39 PM
Can humanity be improved? Perhaps. Do any of us have the wisdom to understand how? That I doubt.

Cikomyr
2012-02-25, 04:47 PM
Actually, for the moment energy isn't the problem for supercomputers; cooling them is.

For more energy, use thorium nuclear reactors (http://www.ted.com/talks/lang/en/kirk_sorensen_thorium_an_alternative_nuclear_fuel.html). A mine the size of a football field could power the world. And if I remember correctly, China is pouring lots of money into researching this technology, so we might get it soon.

Correction. THEY would get them soon. But I somehow doubt it.


So... what are the cooling solutions?

Grinner
2012-02-25, 06:14 PM
So... what are the cooling solutions?

Liquid nitrogen is sometimes popular, but the future lies not so much in cooling as in materials science (http://www.popsci.com/technology/article/2010-02/graphene-based-computers-may-end-silicon-age).

Flickerdart
2012-02-26, 12:16 AM
Can humanity be improved? Perhaps. Do any of us have the wisdom to understand how? That I doubt.
You're probably thinking of intelligence, not wisdom. Wisdom tends to be the "why not" rather than the "how" of things.

Eldan
2012-02-26, 07:23 AM
No, he's probably thinking of wisdom, as in "Which improvements will benefit us, and which will turn out to be a horrible idea."

That said, I don't really believe in wisdom. I believe in experimentation.

Frozen_Feet
2012-02-26, 08:23 AM
Can humanity be improved? Perhaps. Do any of us have the wisdom to understand how? That I doubt.

Phooey. There are several, rather clear areas of ability where humanity could be improved. In fact, we've been doing this via tools for ages.

Near-sightedness? Use eyeglasses. Sugar causes your teeth to rot? Toothpaste. Your auto-immune system isn't up to snuff against some disease? Vaccines.

In my opinion, people often get irrationally emotive about "improving" humans, because some prejudice or another makes them think it's a much bigger deal than it really is, when it's really more like... developing a new kata to better train a martial art.

Eldan
2012-02-26, 09:17 AM
Plus, thinking about it again... where should Wisdom come from, if not experience? How would we get that experience, if not by trying?

hydroplatypus
2012-02-26, 09:44 AM
Even if someone is worried about the wisdom of transhumanism, there are still some areas that could be improved without much actual argument. For instance, cybernetic limbs for amputees, cybernetic eyes for the blind, etc. Even in a very conservative society this would probably be accepted. Then it becomes only a matter of time until society becomes used to it enough to go a bit further, then a bit further, and so on.

Also on the wisdom issue: None of these changes would be introduced to society very quickly. They would be tested, and then probably released slowly in order to get people used to them. This would eliminate much of the risk.

Also, which parts of humanity would those who call it unwise say we shouldn't improve?

Tyndmyr
2012-02-26, 01:40 PM
Can humanity be improved? Perhaps. Do any of us have the wisdom to understand how? That I doubt.

Yes. Absolutely.

As for the wisdom? Certainly, at least in many ways. For instance, there are a number of genetic conditions that cause death or pain and suffering that could be edited out. We already select against many of them intentionally, reducing their incidence, and we've mapped the human genome and done some experimenting with retroviruses to edit it. We're not at full-on easy genetic editing yet... but we're pretty solidly on the way, and there are some fairly non-controversially beneficial uses for it.

Do I wish that my family didn't tend towards near-sightedness? Certainly. If that can be fixed for future generations(or even this one), why wouldn't I want that? Perfect eyesight would not make me a lesser person.

I figure by the time we get done solving all the obvious problems we have today, we'll have identified plenty more to work on that pose no ethical difficulties (or we'll have solved those). Humans are complex critters, and we're capable of quite a lot. I'm pretty okay with a lot more self-improvement, even on the genetic level.

Yora
2012-02-26, 03:02 PM
Even if someone is worried about the wisdom of transhumanism, there are still some areas that could be improved without much actual argument.
It's not as if there is someone making the decisions about whether certain things should be introduced and widely adopted. The market gets what the market demands.