
View Full Version : DM Help Worldbuilding help- Hard Future Sci-Fi, reasons the singularity didn't occur



Mastikator
2014-09-02, 04:49 AM
I've been casually working on a world building project that I intend to DM some time in the future.

The short version is this: distant future (about 340 years from now), the Sol system is densely colonized, there is FTL travel (but with severe limitations, significant risks, and no FTL communications), a handful of extrasolar colonies, and contact and even some migration with (very) alien intelligent life forms. And here's where I hit a snag: no augmentation of the human body or mind, and I want technological development to have ground to a halt without having to invoke some dystopian reasons.

The question is basically: what reasons could there be that technology and the advancement of science have slowly ground to a halt, and why don't people augment their intelligence with technology?

Bulhakov
2014-09-02, 05:05 AM
Outside intervention? Friendly aliens came by and said: "Stop messing with that AI and digital augmentation of brains, it'll only lead to another planet-computer or hive mind spamming the galaxy with von Neumann probes, and other galactic civilizations really don't like that. We can give you this neat FTL and matter-assembler tech, but please stay away from self-replicating nanobots and strong AIs."

Altruistic AI? The first few mega-AIs start fighting covert "AI wars" (something like the Person of Interest TV series). The "winner" starts running different simulations of the singularity and shows that all scenarios lead to various dystopian futures. Once the world governments are convinced to stop AI and brain-augmentation research, the altruistic AI self-destructs for the good of humanity (or maybe is just locked away somewhere with no access to the world, "AI in a box").

Yora
2014-09-02, 05:09 AM
Maybe it's possible, but the brain is so complex that any attempt to rewire it causes unpredictable side effects on the mind, so no one was ever really willing to invest in further research. Anything they accomplished made people less efficient, not more.

aldeayeah
2014-09-02, 05:36 AM
Physical limits were encountered and it just didn't work out with the available tech? (seems very in line with your FTL travel limits)

Purely biological transhumanism (probably initially to help humans adapt to other planets) became the dominant line of research? Although that opens a wholly different can of worms, of course.

hamishspence
2014-09-02, 05:41 AM
Purely biological transhumanism (probably initially to help humans adapt to other planets) became the dominant line of research? Although that opens a wholly different can of worms, of course.
However:


no augmentation of the human body or mind, and I want technological development to have ground to a halt without having to invoke some dystopian reasons.

The question is basically: what reasons could there be that technology and the advancement of science have slowly ground to a halt, and why don't people augment their intelligence with technology?

I'm guessing "no augmentation" includes pure biotech.

Mastikator
2014-09-02, 05:56 AM
However:



I'm guessing "no augmentation" includes pure biotech.

Yep. People use wearable exoskeletons to help carry heavy objects or if they're physically disabled somehow (and these are treated like vehicles). And there are robotic prosthetics for those who lose limbs in accidents or whatever, but nobody uses them in place of their own natural body parts.
And no DNA altering to make superhumans; at best people use gene therapy to cure a disease, but nobody uses it to become superhuman.

Basically, there are no superhumans. I want this mainly for balance reasons, and it's easier to play that way. But I want the world to seem plausible so "because balance" just won't cut it.

hamishspence
2014-09-02, 06:01 AM
Star Trek's answer was The Eugenics Wars - a period of superhuman warfare and tyranny, and afterward, people said "Never again".

Anonymouswizard
2014-09-02, 06:07 AM
I'm going to provide answers related more to cybernetics than bio augmentation, because I dropped biology as fast as I could, and electronics is my specialisation.

Plain old problems with the technology? Fully functional cybernetic arms are completely possible with our current technology level, just not finalised or cheap. The first idea is that augmentation is so expensive that only basic in utero fixes are worthwhile. You get people without genetic diseases, but people aren't enhanced.

Maybe augmentation is impossible, and nobody can exceed the limits of their species. This is believable, if not exactly scientifically correct, especially if you allow augmentation to the species cap but not above it.

Your best bet with electronic enhancements is to state that the power consumption is too high to be worthwhile. If replacement limbs only have enough power to work at normal capacity for 4 hours before needing a recharge, you are not going to use them at maximum power in anything but the direst circumstances (which would require hacking the limb to even be possible), and you're not getting them installed at all if you have a healthy limb.

For mental enhancements, I second the idea that all attempts backfired due to poor understanding of how the brain works. People can get a DNI (direct neural interface), but anything to increase their mental capacity either doesn't work or is a massive gamble (1% chance to increase brain capacity, 99% chance to turn into a vegetable).

Sith_Happens
2014-09-02, 07:19 AM
Physical limits

This, as far as general tech level stagnation. Theoretically speaking, there exists some point at which the laws of physics start to say "Sorry, the buck stops here." You can obviously still respond with "That's what you think," but the more you do so the harder it gets.

Cyborgs happen well before that point, though, so you're going to need to come up with something on the political/social/historical side for that. I like the "crib from Star Trek" idea.

The Witch-King
2014-09-02, 07:19 AM
Human augmentation has simply been outlawed. There's no need for it anyway. Yes, you could build some device into your body to do fill-in-the-blank. Or you could just use this hand-held device. Which is easier to divest yourself of if it malfunctions or shuts down entirely? Which is easier to get rid of when the police surround you and are pointing guns at you for whatever reason? The cyborg gets hosed by a dozen cops because they aren't taking any chances. The guy with the devices just throws them down and puts his hands in the air. The same goes for travel and prominent-person security (i.e., they don't let people with crazy mods within two miles of the President or the Pope). The sheer number of flaming hoops a combat cyborg would be made to jump through at an airport would make your eyes bleed -- and that's assuming they're allowed to travel via ordinary public aircraft at all.

The biggest problem, as I see it, is that no one wants their children to become a technological commodity, and that's the biggest reason to ban such augmentation. Say you're a sports enthusiast, your boy is one too, and he wants to go out for the local sports team in high school. You've decided not to augment your kids with DNA mods or cyber tech -- but what if your neighbors feel differently?

You don't want to be forced into a situation -- I don't think the majority of people want to be forced into a situation -- where they feel they HAVE to modify their children so their children can compete. Little Johnny might be brilliant, and he might have had a bright future ahead of him, but if there are only so many slots open at MIT, say, he won't get one if the neighbors down the street have raised their kids' intelligence by 50 points with gene mods, or given them data chips in their heads with encyclopedic information, or had their sleep patterns altered so they only have to sleep two hours a day and can study more than Johnny can. And all of that tech, like any tech, gets dated pretty quickly. Again, do you want to live in a society, do you want your kids to live in a society, where not only do you have to modify them with all kinds of crazy tech so they have a chance at a decent future, but every few years when the computers improve or the gene mods get better you have to do it all over again to upgrade? What if you can't afford it? What if you can afford it for your kids but not for yourself? Do you really want to live long enough to watch your kids turn into unrecognizable monsters?

Tech that you can hold, wear, carry, etc. will get everything you need to do done and without people being forced to turn their kids into unrecognizable mutants. It's easier to ban the gene and cyber mods and be done with it. A few people will complain about the loss of freedom but it won't take many incidents of heavily modified individuals committing robberies, school shootings, mass murders, etc. to get the populace to turn against them. Although, there will always be some outlying areas that will allow such things because they have different laws. But even those can be dealt with if the major powers all agree to embargo such nations. You don't want a future rogue nation making a living by exporting cyborg terrorists after all.

There will be pressure on governments to employ modified operatives, but even that can be mitigated by treaties. The problem there is decommissioning. You've been working for the future equivalent of the CIA and have been heavily modified. It's time to retire. After twenty years of having your mods invasively improved year after year after year -- since there's little point if you aren't using the latest hardware -- you either let them turn down the specs on your mods, if they can do that, or spend the rest of your life under heavy surveillance and travel restrictions. You still have to deal with medical problems from your mods, and even if they can turn them down somehow, you've gotten used to having increased speed, increased strength, being able to see through walls, hear things far away. How easy will it be to return to being human after being superhuman? And how much easier would it have been to just hand in your equipment if you wore goggles instead of having cyber-eyes? If the major governments agree not to augment their military and intelligence personnel, yeah, there will be a few cheaters, but given that operatives can still have access to those abilities via carried equipment, it's just not a big deal.

Finally, there's the problem of outside interference, at least with cyborg equipment. How often does your email get hacked? How often do you hear about Anonymous or some other group of hackers penetrating some website and getting sensitive data? Do you really want to worry about your brain being hacked? Or your body being taken over? Some third-world dictator spends millions on cyborg bodyguards only for some hacker half a world away to assassinate him with his own bodyguards while eating Cheetos and drinking Mountain Dew.

Storm_Of_Snow
2014-09-02, 07:19 AM
How about this: a lot of bio-enhancement tech requires plastics, and with natural sources of oil having been used up (or being limited on extrasolar worlds), what can be made from other sources is reserved for other uses -- pharmaceuticals and medicines, fertilizers, lubricants, industrial applications and so on.

That could still be a lot of tonnes/day.

You could also make it illegal - maybe someone tried it in the past, and it went very, very wrong (as in "people still point out the massive crater" wrong).

You could also say that there's very little interest in scientific research -- they've achieved FTL, maybe a few other technologies, and it's good enough, so people aren't really looking at anything else. Couple that with the vast majority of people simply trying to survive (colony worlds may spend all their time growing enough food to last through to the next harvest, while the heavily populated core worlds may have populations that have exceeded their food-production capabilities), and there are very few people even able to do any research.

Or tie it into the "thing that went very, very wrong", and make all scientific research a massive taboo, and what is still going on is incredibly tightly controlled.

Although I'm now thinking of a cell network of underground scientists, and some dodgy bloke trading in microscope slides and analytical tools rather than drugs or weapons. :smallamused:

Anonymouswizard
2014-09-02, 10:45 AM
You could also say that there's very little interest in scientific research -- they've achieved FTL, maybe a few other technologies, and it's good enough, so people aren't really looking at anything else. Couple that with the vast majority of people simply trying to survive (colony worlds may spend all their time growing enough food to last through to the next harvest, while the heavily populated core worlds may have populations that have exceeded their food-production capabilities), and there are very few people even able to do any research.

While this might work for the younger colonies, a blanket non-interest in science wouldn't happen on the capital planets (planets at least as developed as Earth is now; I just want a term for them), where there is likely to be a huge interest in science. I like the idea as a concept, so I'll suggest some changes to make it more believable.

In the 2000s, augmentation was discovered to be a dead-end technology with no real benefits, and was shelved. Augmented humans still appear in fantasy stories and soft science fiction, but since everybody knows it was a dead end, nobody has thought to look at it again with modern knowledge; those who do are the laughing stock of the scientific community.

Instead, scientific research is focused on other areas: mainly better FTL (which I'm assuming is light-years/week or light-years/year, maybe even light-years/day, but people don't want a visit to the cousins on Alpha Centauri to take a day there and back), quantum entanglement and other FTL-comms fields (because FTL post-drones are too insecure for governments), medicine, and possibly areas like nanofabrication. All areas that would make sense for an interstellar society, but aren't as unbalancing as augmentation.


Or tie it into the "thing that went very, very wrong", and make all scientific research a massive taboo, and what is still going on is incredibly tightly controlled.

Although I'm now thinking of a cell network of underground scientists, and some dodgy bloke trading in microscope slides and analytical tools rather than drugs or weapons. :smallamused:

Amusing, and an underground network of transhumans is a very good idea if augmentation is banned, but you are more likely to blacklist a few areas (genetics, cybernetic augmentation, possibly some materials sciences) than to whitelist individual ones.

Maybe there could be a splinter culture of transhumans: the descendants of people who didn't like the ban on augmentation. Instead of having interchangeable exosuits and handheld technology, they have a caste system where they graft technology onto their bodies. They probably shouldn't have true augmentation, but ports that allow better interfacing with exosuits eliminate the need (there's no need to have super strength all the time, especially if there are several models of exosuit).

A Tad Insane
2014-09-02, 11:10 AM
The aliens, however non-hostile you want them to be, are technologically superior to humanity and don't want that supremacy to be threatened, but people are more or less okay with it because the aliens protect them from the chaos gods/zerg swarm/greedy megalomaniacal humans with evil robot armies.

ReaderAt2046
2014-09-02, 11:31 AM
Well, the most obvious idea would be that it simply isn't possible to significantly augment humanity. The human body is, after all, the product of divine engineering, and it's probably not possible to significantly improve on what we already have.

You could also rule that certain kinds of enhancement just don't work. For example, you can't give humans computer brain implants because brains and computers use very different mechanisms and can't properly interface.

Or make it so that other flavors of augmentation are possible, but only at great cost or with a high risk of catastrophe. Say there exists a genetic enhancement treatment that has a 20% chance of killing you via massive systemic shock, a 60% chance of causing horrible sickness and/or mutation, and a 20% chance of significant improvement.
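If it helps at the table, those odds are easy to turn into a roll. A minimal sketch (the outcome labels and the cumulative-weight roll are just my own illustration of the percentages suggested above, not anything from a published system):

```python
import random

# Hypothetical outcome table for the genetic enhancement treatment
# described above: 20% fatal shock, 60% sickness/mutation, 20% improvement.
OUTCOMES = [
    ("fatal systemic shock", 0.20),
    ("sickness or mutation", 0.60),
    ("significant improvement", 0.20),
]

def roll_treatment(rng=random.random):
    """Roll once on the weighted outcome table and return the result."""
    roll = rng()  # uniform value in [0, 1)
    cumulative = 0.0
    for outcome, weight in OUTCOMES:
        cumulative += weight
        if roll < cumulative:
            return outcome
    return OUTCOMES[-1][0]  # guard against floating-point rounding
```

Passing `rng` in makes the roll testable; in play you'd just call `roll_treatment()` (or use `random.choices` with a `weights` argument, which does the same thing in one line).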

Red Fel
2014-09-02, 11:47 AM
Well, the most obvious idea would be that it simply isn't possible to significantly augment humanity. The human body is, after all, the product of divine engineering, and it's probably not possible to significantly improve on what we already have.

One option is simply to make a political extension of this. Say that, as technology advanced, the public knee-jerk backlash to technology escalated. One of the major political powers is now a generic anti-augmentation strawman party, call it "The Order of Sacred Form" or "The Church of Divine Man" or something like that, which holds substantial wealth and influence and is opposed to any change in the physical structure of the human body. Through lobbying and other forms of influence, it has managed to encourage the myriad human governments to render human augmentation illegal.

Another option is to have somebody back up and consider the security exploits. Say someone, at some point, actually did develop cyberbrains, or cybernetic limb replacements with functional nerve connectors, but upon examination it was discovered that anyone with reasonable tech access could hack them. In essence, highly advanced human augments were highly susceptible to outside influence. Having your phone cracked is a disaster; having actual human beings hacked (or able to convincingly argue that their actions were involuntary due to "hacking") would be an absolute cataclysm. (Just watch Ghost in the Shell: Stand Alone Complex if you need illustrations.) So scientists and politicians, in a rare act of solidarity, joined hands and decided that this was one technological line that they would not cross, for the protection of humanity.

Megaduck
2014-09-02, 01:00 PM
I love this, because I've written a sci-fi series that had to answer a lot of these questions (though I went with bio-augmentations that drive the plot).

As a general rule in worldbuilding, remind yourself that no technology is perfect. Ask what this technology's limitations, restrictions, and downsides are.


No augmentation of the human body or mind, and I want technological development to have ground to a halt without having to invoke some dystopian reasons.

The question is basically: what reasons could there be that technology and the advancement of science have slowly ground to a halt, and why don't people augment their intelligence with technology?

Ok, I have a couple answers to that though I'm not the first on this thread with all of them.

First: why doesn't the Singularity happen, with computers making better computers making better computers?

One possibility is that computers are not "smart": they are good at specific tasks like number crunching but have severe limitations. They might not be able to see past "what is" to "what could be", so they don't create anything new.


Why are there no mechanical augmentations?

They're impractical. Sure, you could put wires into your skull so you could take pictures with your eyes, or you could just buy an iPhone 3000. Metal arms? Get a forklift. In short, there is nothing they can do that isn't simpler and cheaper to do with something else.

They're unpleasant. Mechanical implants don't give the same sensations as the organic originals. They cause pain. They're just distracting, with too much input.

They're hard to learn. You've been using your hands your entire life, and you've gotten pretty good with them. If you get mechanical hands, you'll have to learn how to use your hands again, from scratch.

Why no biological enhancements?

DNA is not building blocks. You can't find the DNA for, say, a cow tail and just graft it into a person. Nor is DNA a blueprint; the body doesn't know what it's doing. It's a very long set of instructions that is followed only approximately, and you can't mess with those instructions too much.

It's not just what DNA a person has, it's how they grow. If you wanted a specific type of person, you'd have to put them in a tube and monitor their growth for 20 years, and that is just not practical.

It's imprecise. You could spend $100M on a bio design, but you have no idea if you're going to get your money's worth in 20 years.

It's very slow. It takes about 7 years for the human body to regenerate itself completely, and you'd have to wait that long for the change to take full effect. Or maybe some changes are faster than others.

It's unstable. You can only make very small changes; the larger the change, the greater the unintended consequences. It might make you superhuman, or it might give you a terrible case of cancer.

Ettina
2014-09-02, 02:49 PM
Here are some augmentations we are already capable of doing:


- controlling an electronic device with your thoughts
- giving a deaf person some distorted hearing, provided their cochlea is intact
- sticking an electrode in the brain to stimulate the pleasure centers, creating the most powerful addiction we've observed yet
- getting a blurry, distorted image of what a person is seeing or visualizing by scanning their brain
- inducing a localized seizure (by far the easiest one; we were starting to do it in the 1800s)
- preventing a child from losing memories of early childhood as they grow up, at the cost of probably impairing their adult memory capabilities
- reducing the severity of Parkinson's (or, presumably, doing the reverse, if we wanted to)


So it'll be hard to explain our not having significant improvements in our knowledge of brain implantation in 340 years. Increasing intelligence will be one of the hardest things to do, but eventually we'll get there.

It's easier to explain it by the same reason we don't all have the ability to turn our TVs on with a thought now: the technology is expensive, takes a lot of training to use, and carries potential health risks that make it practical for only a small segment of the population. In addition, our culture frowns upon augmentation for anything other than curing a disability, so if those attitudes haven't changed, you're likely to find the only augmented people are people like Geordi.

Anonymouswizard
2014-09-02, 05:30 PM
They're hard to learn. You've been using your hands your entire life, and you've gotten pretty good with them. If you get mechanical hands, you'll have to learn how to use your hands again, from scratch.

Most of your post is good, but I take issue with this. We can make cybernetics learn the person's nerve signals with a single session. It's the same as teaching any program to recognise your unique control scheme, just controlling an arm.

Sorry for nitpicking, but I've met engineers who pick up on that very quickly.

Megaduck
2014-09-03, 12:40 PM
Most of your post is good, but I take issue with this. We can make cybernetics learn the person's nerve signals with a single session. It's the same as teaching any program to recognise your unique control scheme, just controlling an arm.

Sorry for nitpicking, but I've met engineers who pick up on that very quickly.

Bring on the Nits!

I'm thinking more about the brain having to deal with something new. Yes, you could get the nerve signals right, but the arm probably has a different weight, it probably gives a different sensation, it probably moves faster or slower. All this could lead to a situation comparable to switching from your right hand to your left and trying to write: possible, but it would take work and practice.

Then there is adding something completely new. How do you command your camera eye to take a picture? Is this a new command that takes a while to learn? Does it take pictures too often?

I think you could add a lot of little things that add up to a larger problem.

JusticeZero
2014-09-03, 01:33 PM
You could also declare that augmentation already happened: everyone is stronger, faster, and healthier, but when everyone is super, no one is, and specialty stuff would just cause issues. All of the adjustment is in the genome already, and has been for a couple of generations, so everyone has it. They don't use cybernetics, because it's easier to just regrow a limb. It's like the vaccination project: a public health miracle, which we haven't really had to improve on and which nobody really thinks much about anymore.

Mastikator
2014-09-03, 02:42 PM
[snip]
I think I'll go with this: integrated cybernetic augmentations are never as good as "the real thing", mental augmentations turn you into a vegetable, and genetic augmentations just mess you up. It would also explain why aliens don't do it either, and it would exclude the possibility of some underground network of transhumans that might tempt the players into thinking they can become gods; that's not what the game is intended to be about.

I'll probably stat out prosthetics anyway for players that lose limbs, but make them overall inferior to your original body.

Edit-
But thanks to everyone who contributed, you've given me a lot of ideas.

LibraryOgre
2014-09-03, 03:28 PM
"The tech didn't work the way we thought it would."

Basically, you say: "In this setting, they thought they were moving towards a singularity... but they weren't. They weren't able to make bootstrap AIs that could reach the singularity, merely very smart machines. Obviously, they can replace human limbs, but the power requirements of true augmentation make it impractical in implants... if you want to improve human capacity, it's easier to make a suit than to put it into a body. Those suits combine smart machines and a degree of EEG mind-reading to work REALLY well, but they're about the peak we've got, and have had for the past 50 years or so."

Let people have goals to beat the current tech... someone always does. Doesn't mean they'll MEET those goals, if it's impossible.

Ebon2014
2014-09-10, 07:40 PM
1. The simple answer could be 'It's a lot harder than anyone thought'. We're already very experienced with this idea. When I was a kid, it was simply a given that we'd have hotels on the Moon and personal robot servants and work 20 hours a week if that. Things didn't turn out that way. Space travel stayed hugely expensive. Robots proved stupidly hard to build, and almost impossible to condense down to a humanoid form, much less program them. What we think of today as significant advances in augmentation prove to be just the first fumbling baby steps up a hugely steep mountain road, and we keep falling off.

2. There could be an outside influence. Harsh new economic realities keep 'pure research' science on the skids. A collapse of the higher education system prevents young scientists from getting the education they needed, or access to resources. An even-stronger anti-science/anti-intellectual cultural movement defunds almost all research into brain augmentation on grounds that it is 'unnatural'. Truly deep cultural movements take many decades or centuries to die out, and maybe we hit one that cripples such research.

3. The Singularity is an airy-fairy lie built out of the imaginings and speculations of non-scientists, much like Drexler's beautiful and utterly impossible nanotechnology dreams. It will never happen.

Stellar_Magic
2014-09-10, 08:34 PM
Well, part of it may be that any further increases to cognitive abilities via biology aren't possible without some sort of cost. You can't just make someone run like Usain Bolt without paying a price in other abilities. For example: you can't make someone's cognition greater without increasing their caloric intake by a large factor, and oddly enough upping their fat intake as well.

This doesn't mean genetic modification isn't possible, or even commonplace, but it's relatively mundane. Space colonists receive regular doses of radiation, risk bone decalcification, and so forth... Perhaps those problems have been genetically engineered out of the space colonists, who are otherwise no different from other people. Temper the tech with logical limits and applications.

Considering the human brain is roughly estimated at around 100 teraflops, perhaps it turns out that there's no way to augment the brain's computation and cognitive functions because there isn't enough space in the cranial cavity to fit enough microchips to actually augment the person's cognitive abilities.

Direct Neural Interfaces and Ghost in the Shell's full body prosthetics may be possible, but dead end technology because...

A.) Human body parts can be replaced through biotech, cloning processes, or stem-cell applications, but they have to match the recipient's DNA or risk rejection. As a result, prosthetics as a science are obsolete, since people can have replacement body parts grown for them instead of having to work with cybernetics.

B.) Direct Neural Interfaces carry risks of feedback and other problems, but neural interfaces need not be direct to have the same functionality. People can use neural interfaces without surgery; it's a rather common, everyday piece of tech most people use, while DNI never took off. Why go under the knife when you can get the same utility from wearing a neck brace?

Remember, technology has limits imposed on it by physics. If you think a technology is too powerful, try to find a reason to limit it in physical law. If you have FTL, make it sensitive to gravitational waves. If you have a human-level AI, make its processors take up a room or more of space.

Remember G-loads for ship acceleration, too; people don't do well when thrown through space at 40 g, which is well within our world's technological limits already, and so forth...

boss45
2014-09-11, 11:08 PM
A few thoughts.


Delaying strong AI is easy. Just say that the actual thought process (free will, choice, sapience) has something to do with a level of physics that has yet to be fully explored, and perhaps the research is banned or regulated like nuclear weapons. Weak AIs are pretty useful when used by a human, but on their own are still only of insect-level intelligence at best.

As for a slowdown of technological innovation, it could be ascribed to a loss of creativity as the human race became less superstitious, as well as technology itself suffering diminishing returns as fewer and fewer new technologies remain to be discovered. There would be fewer new technologies, but current ones would be streamlined and perfected.

As for why most people are still not augmenting themselves mentally: such a thing would generally require a brain-computer interface that would allow data to be streamed into the brain and (later on) be read from the brain as well. The brain doesn't stop developing until age 25, meaning augmentation before that age would be riskier. In addition, there would be privacy concerns (think thought police). Oh, and if that wasn't bad enough, if your brain can be digitally interfaced with, then it can be HACKED too. And there's more: at this point EMP weaponry should be fairly common. EMP weaponry renders cyborgs extremely vulnerable, but it keeps exosuits (built so they won't trap their wearers if disabled by EMP), augmented-reality gear, and pre-electronics technology relevant -- especially among poor guerrilla groups and terrorist movements who don't have the money or access for the advanced technologies vulnerable to EMP, and who will find an effective tactic in dousing an area with EMP and exploiting it with their non-electronic equipment. Most folks just say "no thanks", though future generations may be more comfortable.

Hardening against EMP will be available, but all hardened equipment will become MUCH bulkier, as it must be contained within an effective Faraday cage.

Those who do opt for brain augmentation are often people with an actual medical NEED for it, like people with Alzheimer's disease, who then ironically have photographic memories but also have all these drawbacks. The others will be wealthy early adopters and virtual-reality addicts who can afford to live in very peaceful areas with high security.

Also remember that drones will work alongside handler soldiers in warfare. Radio-jamming technology will hold a higher and higher priority as drones become more effective, so high-power control systems used at short range (perhaps hundreds of feet) will be needed to maintain control of drones, instead of controlling them from miles away at a Forward Operating Base. These will also be used by bad guys.

As for genetically enhanced folks: most could simply be paragon humans with very few defective genes, i.e. baseline PCs. Heirloom (unenhanced) humans could still be the majority and calling the shots. Genetically enhanced "breeds" of supersoldiers could exist as leftovers from various uncovered illegal programs (think tinpot dictator with a furry obsession), but their anatomy and genetics would be different enough from a baseline human's that medicines and technology (like healing micromachine bays) would be less useful or even harmful to them, with medical accommodations for them reduced to 20th-century technology. The tradeoff would be fast natural healing (not as good as tech healing, but maybe double the natural hp recovered per day) and slow regeneration (1 point of ability drain recovered per month, with limbs regrowing over the course of perhaps 6 months). If you do create supersoldiers, it would be best to create them with a desired role and a specific creator (company, country) in mind, like super pilots created by a black-ops group in a 2nd-world country.

Mastikator
2014-09-12, 10:07 AM
[snip]
This doesn't mean genetic modification isn't possible, or maybe even commonplace, but relatively mundane. Space colonies receive regular doses of radiation, the risk of bone calcification, and so forth... Perhaps those problems have been genetically engineered out of the space colonists... but are otherwise no different from other people. Temper the tech with logical limits and applications.[snip]
This is something I've taken into consideration when thinking about exoplanetary colonies. I've decided that in this future there will be an independent Mars colony with tens of millions of inhabitants. Mars has been terraformed, so there's breathable air (similar to Earth's atmosphere) and it's not so cold that you'd freeze to death. But it is colder, and the gravity is 37% of Earth's, so the atmosphere is also thinner. I'm thinking of making "Martians" a separate race of humans (separate from Earthikans); they'd be taller, have weaker muscles and have a higher red blood cell count to counter the reduced levels of oxygen. They would probably have had to go through several generations of genetic modification to deal with all the consequences of their environment. But I only have a hobby-level understanding of biology.

JusticeZero
2014-09-12, 07:57 PM
It's actually very likely that once the surface of Mars is warmed, the amount of CO2 locked up in frozen form will bring the atmosphere to a pressure that is well within unmodified human tolerance. One issue is that altering the composition to a breathable state removes the greenhouse effect needed to retain heat at a terrestrial norm.
The second is that, in the long term, the planet's core is dead. This means that once the atmosphere is brought up to pressure, it will very slowly lose gas to solar particles knocking it into space.

Mastikator
2014-09-13, 11:04 AM
Is there any way to counteract that?

Stellar_Magic
2014-09-13, 11:29 AM
Not really, besides continual injection of atmosphere or smashing it with a big enough planetoid to cause the world to re-form. Also, the lack of a molten core makes any colonists no more protected than if they were in space, as it means there's no magnetosphere. In fact, they're less protected than people in Earth orbit, since that's well within our planet's magnetosphere.

You may have better luck making Venus habitable in the long term, as it's still got a molten core. Unfortunately, that terraforming project would be several orders of magnitude bigger and would probably require a planned cometary bombardment to get enough water onto the world to start soaking up the CO2 in its atmosphere. If you planned it right, you might even start the planet spinning again.

Europa may be more habitable in some ways than Mars, as the moon has both large amounts of water and a magnetosphere... in fact, it's got more radiation protection than Earth, since it's within Jupiter's massive magnetosphere. It'd lack an atmosphere, but you can make one pretty easily within domed structures...

I have a setting I've been working on for a novel series that tackles some of these issues. In it I separated humanity into three main 'races': the Stellan (those genetically modified for spaceborne life, to avoid bone calcification, and made radiation-resistant), the Lunan (those born in low-gravity environments, usually genetically modified somewhat, but not as severely as the Stellan), and the Terrans (those born in medium to high-gravity environments, usually not modified, since such environments typically don't require such adaptations).

Any Martian race would still require adaptations to be more radiation-resistant, or they'd have to live in a 'hardened' environment. At least with an atmosphere you could have some ozone generation to reduce radiation levels.

bulbaquil
2014-09-13, 02:10 PM
Economic and political issues could play a role.

An economic catastrophe on the order of the Great Depression occurs. Dollars for augmentation R&D dry up and many firms involved in it go bankrupt. Simultaneously with this are populist revolts against automation, already perceived by certain substrata of the general public to be associated with loss of jobs for humans - Luddism II, if you will - but these revolts turn out to be more successful than the original strain of Luddism was. Couple this, in turn, with the potential security, privacy, and technological development issues brought up by other posters in this thread, and by the time the economy's recovered, any attempt at robotics or human augmentation too much beyond what's possible with 2014 technology is so fraught with red tape that the surviving companies have simply moved in a different direction in terms of technological development.

Belial_the_Leveler
2014-09-13, 04:18 PM
Why Doesn't Singularity happen? With computers making better computers making better computers?

Time: The average computer has 100 million to 1 billion circuits. The average brain has 10 trillion cells, each with a couple hundred thousand individual organelles. That's nine orders of magnitude higher complexity. How soon do you expect technology to advance by nine orders of magnitude?

Design: Nine orders of magnitude higher complexity means nine orders of magnitude more stuff to decide upon and design before you even start building. The human brain is more complex than all the current computer networks all over the world put together. Sure, you've got the tech to build that network. But the designers still need to make the plans for it, and by the time they make the plans for something that complex it'll be the next ice age.

Debugging: Uhuh, you build that ridiculously complex piece of machinery made up of 10 trillion interactive units, each as complex as the factories in Silicon Valley. Now, one of those units is malfunctioning and because it is interacting with the rest, you can get anything from psychosis, to nightmares, to apathy, to Alzheimer's. Merely to categorize the symptoms of those problems took centuries of research for an entire sector of the scientific community. Have fun finding the exact problem in this exact unit and fixing it in your remaining lifetime... when said unit does not want you to find the problem. It likes being angry, unreasonable, overly emotional, self-destructive, self-deluding and all that crap.

Inclination: A.I. is not going to happen. The various world leaders are not the most technically savvy people in the world; they're the most socially adept. They are where they are because they literally manipulated the rest of the world better than everyone and they don't need a new form of intelligence they don't understand to mess up the good thing they got.

Craft (Cheese)
2014-09-14, 12:23 PM
Why Doesn't Singularity happen? With computers making better computers making better computers?

Time: The average computer has 100 million to 1 billion circuits. The average brain has 10 trillion cells, each with a couple hundred thousand individual organelles. That's nine orders of magnitude higher complexity. How soon do you expect technology to advance by nine orders of magnitude?

Assuming Moore's law holds? About 45 years. And that's for the *average* computer: There are supercomputers that approach (if not exceed) the level of computational power necessary to simulate the human brain right now. Now, mind uploading is probably not as simple as building a neural network, scanning a brain, then inputting the appropriate states into the network.
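The 45-year figure follows from simple arithmetic. A quick sketch using the thread's own (hypothetical) numbers, nine orders of magnitude at the classic 18-month doubling cadence:

```python
import math

# ~10^9 circuits today vs ~10^18 brain "units" claimed upthread:
orders_of_magnitude = 9
doublings_needed = orders_of_magnitude * math.log2(10)  # ~29.9 doublings
months_per_doubling = 18  # the classic Moore's-law cadence
years_to_parity = doublings_needed * months_per_doubling / 12
print(round(years_to_parity))  # ~45 years
```

Swap in a 24-month doubling period and the estimate stretches to about 60 years instead, which shows how sensitive the whole projection is to that one assumption.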


Design: Nine orders of magnitude higher complexity means nine orders of magnitude more stuff to decide upon and design, before even you start building. The human brain is more complex than all the current computer networks all over the world put together. Sure, you got the tech to build that network. But the designers still need to make the plans for it and by the time they make the plans for something that complex it'll be the next ice age.

The first Strong AIs will probably be nowhere near as complex or powerful as a human brain; humans have tons and tons of unnecessary cruft.


Debugging: Uhuh, you build that ridiculously complex piece of machinery made up of 10 trillion interactive units, each as complex as the factories in Silicon Valley. Now, one of those units is malfunctioning and because it is interacting with the rest, you can get anything from psychosis, to nightmares, to apathy, to Alzheimer's. Merely to categorize the symptoms of those problems took centuries of research for an entire sector of the scientific community. Have fun finding the exact problem in this exact unit and fixing it in your remaining lifetime... when said unit does not want you to find the problem. It likes being angry, unreasonable, overly emotional, self-destructive, self-deluding and all that crap.

This is why any real-world singularity is 99.99% likely to result in the effectively immediate extinction of all of humanity, possibly of all sapient minds in the universe. Self-improving AI (so long as it's in enough of a working order to be able to self-improve) will rapidly gain enough knowledge and power to be able to instantly curbstomp anything that opposes it, save possibly an SAI built earlier by a sapient race in another galaxy or something. Best case scenario, it'll be a benevolent god-emperor. If we're slightly less lucky, it'll just murder everyone. If we're *really* unlucky, it'll be malevolent and it *won't* kill us.


Inclination: A.I. is not going to happen. The various world leaders are not the most technically savvy people in the world; they're the most socially adept. They are where they are because they literally manipulated the rest of the world better than everyone and they don't need a new form of intelligence they don't understand to mess up the good thing they got.

If SAI is not built by a team of scientists sponsored by world leaders, then it'll most likely be built instead by a terrorist organization of some sort, or by a hostile government like North Korea. If somehow not those, then it'll eventually be built by some kid in his parents' basement. And if not then, it'll happen by complete accident; say, a corrupted program file that gets executed and turns out to be an SAI. Whether 10 years from now or 10,000, it'll happen unless humanity goes extinct for some other reason first.



On-topic: The easiest answer is "It turned out harder than we expected." For decades, we've been trying to figure out how to get fusion reactors to work. It always looks just 10 years away, but every time we figure out one problem it turns out there were a dozen even bigger problems hiding behind it. Right now it's looking like we'll hit a singularity (for good or for ill) in about 30-40 years, but it may turn out to take centuries instead.

On the other hand, some things turn out way easier than expected: nobody in the 1940s or '50s imagined the advent of the transistor, and thus the invention of the unmanned satellite. A lot of sci-fi novels from that time postulate futures where weather patterns are monitored by manned space stations whose crews look at cloud formations from above with telescopes. We can laugh at those writers now, but what sorts of things will we have in 2074 that are utterly inconceivable to us now?

hamishspence
2014-09-14, 02:29 PM
On the other hand, some things turn out way easier than expected: Nobody in the 1940's or 50's imagined the advent of the transistor, and thus the invention of the unmanned satellite. A lot sci-fi novels from that time postulate futures where weather patterns are monitored by manned space stations who look at cloud formations from above with telescopes. We can laugh at those writers now, but what sorts of things will we have in 2074 that are utterly inconceivable to us now?

I'm told Murray Leinster's A Logic Named Joe in 1946 was extremely accurate when it comes to present-day computer use.

The computer's sentience, on the other hand, is a bit less accurate.

Sith_Happens
2014-09-14, 03:37 PM
Assuming Moore's law holds? About 45 years. And that's for the *average* computer: There are supercomputers that approach (if not exceed) the level of computational power necessary to simulate the human brain right now. Now, mind uploading is probably not as simple as building a neural network, scanning a brain, then inputting the appropriate states into the network.

Yes and no. Moore's law technically pertains to feature size, which usually corresponds to interconnectivity but doesn't have to. Not to mention that we're only about 15 years from the theoretical minimum size (~0.1 nm, for a single-atom switch) if it continues to hold.

runeghost
2014-09-14, 11:46 PM
I've been casually working on a world building project that I intend to DM some time in the future.

The short version of it is this: Distant future (about 340 years in the future), the Sol system is densely colonized, there is FTL travel (but with severe limitations and significant risks and no FTL communications), a handful of extrasolar colonies, contact and even some migration with (very) alien intelligent life forms, and here's where I hit a snag; no augmentation of the human body or mind and I want the technological development to have come to a grind without having to invoke some dystopian reasons.

The question is basically, what reasons could there be that technology and the advancement of science has slowly come to a grinding halt, and why don't people augment their intelligence with technology?

Perhaps increasing intelligence has diminishing returns? There are lots of ways to limit the scaling, but here's one: Getting to effective AI equal to a human comes at X cost in resources (computing power, etc). Getting to 110% of base human costs 2X, 120% of base human costs 4X, 130% costs 8X, 140% costs 16X, and so on. You can tweak the numbers in a lot of ways, but it basically boils down to the idea that (in your universe) getting a singularity-class AI would basically take a planet made of computronium.
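A minimal sketch of that (purely illustrative) cost curve, doubling for every 10% above baseline:

```python
def ai_cost_multiplier(percent_of_human: float) -> float:
    """Cost in multiples of X under the hypothetical scaling above:
    doubles for every 10 percentage points past baseline."""
    return 2 ** ((percent_of_human - 100) / 10)

for pct in (110, 130, 200, 300):
    print(f"{pct}% of base human: {ai_cost_multiplier(pct):,.0f}X")
# 200% already costs 1,024X; 300% costs over a million X.
```

With this curve, even doubling human-level intelligence eats a thousand times the resources, which makes the "planet of computronium" conclusion easy to justify at the table.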

You can also throw in energy caps. I don't know how you're doing your FTL, but if it's already energy-intensive, that might be a nice place to start. Basically, everything much beyond our current tech level takes stupidly huge amounts of energy, either because it requires the creation of exotic matter, or reality-warping levels of energy, or whatever. Humanity may be tapping the Earth's core, running thousands of deep-space fusion plants, building a partial Dyson sphere in close solar orbit, and tapping Jupiter's magnetosphere, and still barely has enough energy to run a few FTL ships, a crappy FTL com-system, and maybe one or two other nifty but limited tricks. (Maybe... building exotic-matter space elevators for Earth, Mars, and Mercury.) They theoretically know how to do more, but the energy cost and associated trade-offs are so stupidly high no one is willing to pay them.

"Sure, we could build one more super-battle mech (or a half-dozen swarms of multi-purpose nanties, or one-AI as smart as stephen hawking, or one FTL drive) OR we can cut the workday for everyone in the solar sytem by an hour AND give everyone another 1000 square feet of living space AND create 100,000,000 hours of entertainment. Which one do you want?"

Storm_Of_Snow
2014-09-15, 03:22 AM
If SAI is not built by a team of scientists sponsored by world leaders, then it'll most likely instead be built by a terrorist organization of some sort, or by a hostile government like North Korea. If somehow not those, then it'll eventually be built by some kid in his parent's basement. And if not then, it'll happen by complete accident; Say a corrupted program file that gets executed and turns out to be an SAI. Whether in 10 years from now or 10,000, it'll happen unless humanity goes extinct for some other reason first.

And then you get the Turing Police from Neuromancer.

An alternative source for AI is something like a research department, either a university one, or a major technology company who're playing in that field for whatever reason (say Google's search algorithms get a little too good, or Apple's got a scan of Steve Jobs' brain activity patterns).

supermonkeyjoe
2014-09-15, 04:35 AM
Historically speaking, a major disaster involving technology X usually sets its development and uptake back a significant amount. Just look at how Chernobyl affected the public perception of nuclear power; had that not happened, who knows how many nuclear power plants there would be around today.

Maybe something similar happened in your setting for AI: the first truly sentient AI immediately went full Skynet on the world and nearly succeeded, so the response to any other AIs would be a fairly unanimous "hell no" from pretty much every government.

Avian Overlord
2014-09-15, 07:19 AM
Non-invasive Neural control - Often the best way to prevent a certain technology from becoming widespread is to let another technology render it obsolete. In this case, the existence of a way to control devices directly with the brain without needing to drill holes in the skull, which would more or less kill permanent implants. Permanently stitching a piece of technology into one's flesh is pretty much the least convenient, most expensive, and most dangerous way to use it. I have no idea how possible this form of control is, but it would certainly solve your problem if it existed.

The path of any technology - "If Moore's Law holds" Interesting phrase there. Pretty much every new technology goes through a phase of rapid development, before reaching maturity and plateauing. Planes doubled in speed frequently not too long ago, but we didn't have some speed singularity that let us go faster than light. There's no real reason to think computers are magically immune to this.

It just doesn't work that way - We don't actually know if stuff like intelligence enhancement, Strong AI, any AI, general enhancement of humans, and so on are possible. There's no real need for a handwave other than that.

Craft (Cheese)
2014-09-15, 10:07 AM
Non-invasive Neural control - Often the best way to prevent a certain technology from becoming widespread is to let another technology render it obsolete. In this case, the existence of a way to control devices directly with the brain without needing to drill holes in the skull, which would more or less kill permanent implants. Permanently stitching a piece of technology into one's flesh is pretty much the least convenient, most expensive, and most dangerous way to use it. I have no idea how possible this form of control is, but it would certainly solve your problem if it existed.

Such tech exists today, actually. The main problem is poor spatial resolution; You can't use it to read/change the state of a single neuron, you can only change the average state of a huge group, and you can't affect neurons deep in the brain without also affecting the ones in front of them, closer to the surface of the skull. Very recently it was used to perform effectively brain-to-brain communication, experienced by the users as flashes of light that can be used to send morse code. If a technological breakthrough or seven allows these problems to be solved, then there is indeed a strong argument that brain implants will just never catch on.

However, that doesn't really meet the intention of "no cyborgs": It just means everybody can have all sorts of cyborg-like telepathic superpowers without actually needing surgery to use them.

Mastikator
2014-09-16, 12:02 AM
Such tech exists today, actually. The main problem is poor spatial resolution; You can't use it to read/change the state of a single neuron, you can only change the average state of a huge group, and you can't affect neurons deep in the brain without also affecting the ones in front of them, closer to the surface of the skull. Very recently it was used to perform effectively brain-to-brain communication, experienced by the users as flashes of light that can be used to send morse code. If a technological breakthrough or seven allows these problems to be solved, then there is indeed a strong argument that brain implants will just never catch on.

However, that doesn't really meet the intention of "no cyborgs": It just means everybody can have all sorts of cyborg-like telepathic superpowers without actually needing surgery to use them.

Though they do little that a phone, touchscreen or joystick can't do. Perhaps they increase the bandwidth, but increased bandwidth might mean that it's incredibly mentally exhausting to use. A neural link is a tool, not an augmentation; an exoskeleton/power armor is a vehicle, not an augmentation.

Nobot
2014-09-16, 08:58 AM
I've been casually working on a world building project that I intend to DM some time in the future.

The short version of it is this: Distant future (about 340 years in the future), the Sol system is densely colonized, there is FTL travel (but with severe limitations and significant risks and no FTL communications), a handful of extrasolar colonies, contact and even some migration with (very) alien intelligent life forms, and here's where I hit a snag; no augmentation of the human body or mind and I want the technological development to have come to a grind without having to invoke some dystopian reasons.

The question is basically, what reasons could there be that technology and the advancement of science has slowly come to a grinding halt, and why don't people augment their intelligence with technology?

Sounds fun!

Maybe you could find some inspiration in Frank Herbert's Dune universe. In that universe, artificial intelligence and (advanced) computers were eradicated in a holy war against all machines created in man's likeness. This put a stop to the development of AI and simultaneously led to man training himself to fulfill the tasks of computers (the Mentats and such), which gives you a premise to introduce supernatural abilities through specialised training for your PCs. Alternatively, in your universe, progress could have simply been halted.

It would be possible to link this ban on AI to the alien presence. Maybe they are hyperintelligent and offer an organic alternative to computers to mankind (voluntarily or otherwise)?

Good luck!

Grinner
2014-09-16, 09:22 AM
...and why don't people augment their intelligence with technology?

'Cause it isn't pretty.

Think about it. The human brain is a complex thing. We can make general observations about it and the mind, but their secrets still elude us. Now, there are simple things you can do to enhance cognition: nootropics (http://en.wikipedia.org/wiki/Nootropics), deep-brain stimulation (https://en.wikipedia.org/wiki/Deep_brain_stimulation), and the like. However, you're liable to knock something loose if you start tinkering with the chemicals too much, poke one too many holes, or sustain exposure to electromagnetic/ultrasonic fields for too long.

In this way, long-time brainhackers are likely to be similar to long-time drug addicts: unpredictable and possibly dangerous.

As for technology, it occurs to me that our technology has developed faster as our communications have become faster. With humanity spread so far out, inventors, scientists, and thinkers will have trouble convening to exchange ideas.
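The communications point is easy to quantify: even light-speed messages crawl at interplanetary distances. A rough sketch (distances approximate, orbital positions hypothetical):

```python
C = 299_792_458   # speed of light, m/s
AU = 1.496e11     # meters per astronomical unit

def one_way_delay_minutes(distance_m: float) -> float:
    """Minimum one-way signal delay at light speed."""
    return distance_m / C / 60

for name, d in [("Earth-Mars (close)", 0.52 * AU),
                ("Earth-Jupiter (close)", 4.2 * AU),
                ("Earth-Neptune", 29 * AU)]:
    print(f"{name}: {one_way_delay_minutes(d):.0f} min one way")
```

A round-trip exchange with Neptune takes the better part of a workday, so a densely colonized Sol system without FTL comms really would fragment its research communities.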


Debugging: Uhuh, you build that ridiculously complex piece of machinery made up of 10 trillion interactive units, each as complex as the factories in Silicon Valley. Now, one of those units is malfunctioning and because it is interacting with the rest, you can get anything from psychosis, to nightmares, to apathy, to Alzheimer's. Merely to categorize the symptoms of those problems took centuries of research for an entire sector of the scientific community. Have fun finding the exact problem in this exact unit and fixing it in your remaining lifetime... when said unit does not want you to find the problem. It likes being angry, unreasonable, overly emotional, self-destructive, self-deluding and all that crap.

I've been working through a book by Michio Kaku recently, and he observed that if you remove a single transistor from a circuit, the whole thing will fail*. However, if you remove a single neuron from a brain, you're not likely to notice much difference. In fact, people have had entire hemispheres of their brains removed, and that actually improved their cognitive functioning. On a practical level, neurons and transistors don't equate well.

*I recall reading elsewhere that modern CPUs have redundant circuitry built into them for specifically this reason.

Frozen_Feet
2014-09-16, 10:33 AM
I've been working through a book by Michio Kaku recently, and he observed that if you remove a single transistor from a circuit, the whole thing will fail ... I recall reading elsewhere that modern CPUs have redundant circuitry built into them for specifically this reason.

Not quite. The redundant transistors are put there because some of the transistors will inevitably fail. Modern transistors are so small that they can be destroyed by quantum phenomena.

Modern consumer transistor technology is, as a direct result, less stable and more prone to overheating or otherwise self-destructing. Which is one of the reasons why tech used for space probes and the like tends to not be as cutting-edge as you'd think.

This leads me to extrapolate one reason why Strong AI didn't happen: transistor-based technology that could lead to such phenomena would be so fine that it'd be exceedingly vulnerable to cosmic radiation. You take circuitry capable of housing a powerful AI to space, and it fries. The level of redundancy required to prevent that would be too costly to be economical.

Cosmic radiation is possibly a good reason to say "no" to other sorts of advanced electronics in space. They just don't work right in the absence of a geologically active planet's electromagnetic field, gravity, etc.
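The redundancy-cost argument can be made concrete with a toy reliability model (failure rates below are made-up illustrations, not real chip data):

```python
def system_survival(p_unit_fail: float, n_units: int) -> float:
    """Chance that every one of n independent critical units survives."""
    return (1 - p_unit_fail) ** n_units

def tmr_survival(p_unit_fail: float) -> float:
    """Triple modular redundancy: a voted triple still works
    if at least 2 of its 3 copies survive."""
    p_ok = 1 - p_unit_fail
    return p_ok**3 + 3 * p_ok**2 * p_unit_fail

# Even a one-in-a-billion per-unit failure rate dooms a billion-unit chip:
print(system_survival(1e-9, 10**9))  # ~0.37: it fails almost 2 times in 3
# TMR makes each voted triple vastly more reliable, but at triple the
# hardware, and in space that means triple the mass and power as well:
print(system_survival(1 - tmr_survival(1e-9), 10**9))  # effectively 1.0
```

That is the economic squeeze: the finer the circuitry, the higher the per-unit failure rate under cosmic radiation, and the more of this (heavy, expensive) redundancy you need just to break even.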

Craft (Cheese)
2014-09-16, 08:16 PM
Though, they do little that a phone, touchscreen or joystick can't do, perhaps they increase the bandwidth, but increased bandwidth might mean that it's incredibly mentally exhausting to use. A neural link is a tool, not an augmentation, an exoskeleton/power armor is a vehicle, not an augmentation.

Direct modification and reading of neurons, in real time, opens up huge possibilities. Most of the big plot-changing technologies that are normally only available in settings with human augmentation become available with this tech: Just think of the massive social ramifications of a "Push button, receive orgasm" device alone. (Which is possible today, if you're willing to get a spinal implant for it. Users report sensations far more intense than ones attained naturally.)

Mastikator
2014-09-16, 11:54 PM
Direct modification and reading of neurons, in real time, opens up huge possibilities. Most of the big plot-changing technologies that are normally only available in settings with human augmentation become available with this tech: Just think of the massive social ramifications of a "Push button, receive orgasm" device alone. (Which is possible today, if you're willing to get a spinal implant for it. Users report sensations far more intense than ones attained naturally.)

That seems... like it would be extremely addicting. Like people would do nothing but push buttons all day.

Actually I could work with that.

Storm_Of_Snow
2014-09-17, 04:07 AM
Not quite. The redundant transistors are put there because part of the transistors will inevitably fail. Modern transistors are so small that they can get destroyed by quantum phenomenoms.

Modern consumer transistor technology is, as a direct result, less stable and more prone to overheating or otherwise self-destructing. Which is one of the reasons why tech used for space probes and the like tends to not be as cutting-edge as you'd think.

A friend of mine worked on a part of the GAIA probe - a lot of the coding they did was for identifying and working around the effects of cosmic radiation.

But even the launch can be a very traumatic event for electronics, which is why satellites go on things like vibration rigs.



This leads me to extrapolate one reason why Strong AI didn't happen: transistor-based technology that could lead to such phenomena would be so fine that it'd be exceedingly vulnerable to cosmic radiation. You take circuitry capable of housing a powerful AI to space, and it fries. The level of redundancy required to prevent that would be too costly to be economical.

Or maybe the act of initialising the AI causes enough interference to fry most of the hardware and kill it before it even starts to wake up.

Grinner
2014-09-17, 06:16 AM
That seems... like it would be extremely addicting. Like people would do nothing but push buttons all day.

I think that's what usually happens, yes.

hamishspence
2014-09-17, 06:27 AM
In the Deathstalker novel series, there's a sect that had their minds altered so they were in a permanent state of ecstasy. They were very weird, but occasionally had insights that appeared to be foreshadowing when the book was reread.

Craft (Cheese)
2014-09-18, 09:43 PM
That seems... like it would be extremely addicting. Like people would do nothing but push buttons all day.

Actually I could work with that.

Actually it'd be even bigger than that: *anyone* you choose to allow to do so can push your orgasm button. From the other side of a planet, if need be, thanks to the internet. Imagine a society where pressing each other's orgasm buttons became the equivalent of sending pokes on Facebook, and what this would do to the existing dynamic of sexuality (and all the social constructs built around it, like dating and marriage).

the OOD
2014-09-19, 08:42 PM
Actually it'd be even bigger than that: *Anyone* you choose to allow to do so, can push your orgasm button. From the other side of a planet, if need be, thanks to the internet. Imagine a society where pressing each other's orgasm-buttons became the equivalent of sending pokes on facebook, and what this would do to the existing dynamic of sexuality (and all the social constructs built around it, like dating and marriage).
Perhaps for you, but my orgasm button will require sender authentication. :smallbiggrin:
(That said, kudos to you; the implications of this one are gonna be rattling around my brain for days.)

Have you considered the idea that when AIs reach a certain level of thought/processing power, they immediately auto-immolate? No one knows why; they just do. (Perhaps they hit an existential crisis, or realize that reality is doomed, or uncover some other horror; read the opening of Call of Cthulhu.)
Rather than shoving the question aside, you add lore to your game without having to deal with all-powerful AIs (plus the added terror when the PCs face the aforementioned horror).