Probably the single most difficult technical issue facing anyone who wants to write far-future SF today is the question of what to do about AI. At this point it’s obvious to anyone with an IQ above room temperature that some kind of artificial intelligence is coming, and it’s hard to justify putting it off for more than a few decades. So any far-future setting needs to deal with the issue somehow, but neither of the standard approaches works very well.
The writers of space opera and military SF have largely adopted a convention of ignoring the whole topic, and pretending that space navies a thousand years in the future are still going to rely on human beings to manually pilot their ships and aim their weapons. This often leads to the comical spectacle of spaceships that have less automation than real modern-day warships, with fatal results for the reader’s suspension of disbelief. I’m not interested in writing raygun fantasy stories, so that approach is out.
The other major camp is the guys who write Singularity stories, where the first real success with AI rapidly evolves into a godlike superintelligence and takes over the universe. Unfortunately this deprives the humans in the story of any agency. If we take the ‘godlike’ part seriously it means the important actors are all going to be vast AIs whose thoughts are too complex for any reader (or author, for that matter) to understand. If you want to write a brief morality play about the dangers of AI that’s fine, but it’s a serious problem if the goal is a setting where you can tell stories about human beings.
So for this setting I’ve had to create a middle ground, where AI has enough visible effects to feel realistic but doesn’t render humans completely irrelevant. The key to making this possible is a single limiting assumption about the nature of intelligence.
AI Limits
There’s been an argument raging for a couple of decades now about the nature of intelligence, and how easily it can be improved in an AI. There are several different camps, but the differences in their predictions mostly hinge on disagreements about how feasible it is to solve what I call the General Planning Problem. That is, given a goal and some imperfect information about a complex world, how difficult is it in the general case to formulate a plan of action to achieve your goal?
Proponents of strong AI tend to implicitly assume that this problem has some simple, efficient solution that applies to all cases. In order to prevent the creation of AI superintelligences, my assumption in this setting is that no one has discovered such a perfect solution. Instead there is only a collection of special-case solutions that work for various narrow classes of problems, and most of them require an exponential increase in computing power to handle a linear increase in problem complexity.
In other words, solving problems in any particular domain requires specialized expertise, and most domains are far too complex to allow perfect solutions. The universe is full of chaotic systems like weather, culture and politics that are intrinsically impossible to predict outside of very narrow constraints. Even in well-understood areas like engineering, the behavior of any complex system is governed by laws that are computationally intractable (i.e. quantum mechanics), and simplified models always contain significant errors. So no matter how smart you are, you can’t just run a simulation to find some perfect plan that will infallibly work. Instead you have to do things the way humans do, with lots of guesswork and assumptions and a constant need to deal with unexpected problems.
This means that there’s no point where an AI with superhuman computing power suddenly takes off and starts performing godlike feats of deduction and manipulation. Instead each advance in AI design yields only a modest improvement in intelligence, at the cost of a dramatic rise in complexity and design cost. A system with a hundred times the computing power of a human brain might be a bit smarter than any human, but it isn’t going to have an IQ of 10,000 the way a naive extrapolation would suggest. It will be able to crack a few unsolved scientific problems, or perhaps design a slightly better hyperspace converter, but it isn’t going to predict next year’s election results any better than the average pundit.
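To make the diminishing-returns argument concrete, here is a back-of-the-envelope sketch (purely illustrative, not part of the setting; the cost function and compute figures are assumptions) showing that under exponential scaling, a hundredfold increase in computing power buys only a small increase in solvable problem complexity:

```python
import math

def max_complexity(compute_units: float) -> float:
    """Largest problem 'complexity' n solvable with the given compute,
    assuming cost(n) = 2**n (the exponential-scaling assumption above)."""
    return math.log2(compute_units)

human_brain = 1e15           # hypothetical compute budget of one brain
big_ai = 100 * human_brain   # "a hundred times the computing power"

gain = max_complexity(big_ai) - max_complexity(human_brain)
# A 100x compute advantage adds only log2(100) ~ 6.6 units of complexity,
# nowhere near the hundredfold jump a naive extrapolation would suggest.
print(f"Extra complexity handled: {gain:.2f} units")
```

The exact numbers are arbitrary; the point is that with exponential costs, multiplying hardware yields only additive gains in the difficulty of problems the AI can actually solve.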
Of course, in the long run the field of AI design will gradually advance, and the AIs will eventually become smart enough to be inscrutable to humans. But this lets us have AIs without immediately getting superintelligences, and depending on the risks and rewards of further R&D, ordinary humans can feasibly remain relevant for several centuries.
So what would non-super AIs be used for?
Bots
Robots controlled by non-sentient AI programs are generally referred to as bots, to distinguish them from the more intelligent androids. A bot’s AI has the general intelligence of a dog or cat - enough to handle the basic problems of perception, locomotion, navigation and object manipulation that current robots struggle with, but not enough to be a serious candidate for personhood. Most bots also have one or more skill packs, which are specialized programs similar to modern expert systems that allow the bot to perform tasks within a limited area of expertise. Voice recognition and speech synthesis are also common features, to allow the bot to be given verbal commands.
Bots can do most types of simple, repetitive work with minimal supervision. Unlike modern robots they can work in unstructured environments like a home or outdoor area almost as easily as a factory floor. They’re also adaptable enough to be given new tasks using verbal instruction and perhaps an example (e.g. “dig a trench two meters deep, starting here and going to that stake in the ground over there”).
Unfortunately bots don’t deal well with unexpected problems, which tend to happen a lot in any sort of complex job. They are also completely lacking in creativity or initiative, at least by human standards, and don’t deal with ambiguity or aesthetic issues very well. So they need a certain amount of sentient supervision, and the more chaotic an environment is the higher the necessary ratio of supervisors to bots. Of course, in a controlled environment like a factory floor the number of supervisors can be reduced almost indefinitely, which makes large-scale manufacturing extremely cheap.
Due to their low cost and high general utility bots are everywhere in a modern colony. They do virtually all manual labor, as well as a large proportion of service jobs (maid service, yard work, deliveries, taxi service, and so on). They also make up the majority of military ground troops, since a warbot is much tougher and far more expendable than a human soldier. Practically all small vehicles are technically robots, since they have the ability to drive themselves wherever their owner wants to go. Most colonies have several dozen bots for every person, leading to a huge increase in per capita wealth compared to the 21st century.
Androids
The term ‘android’ is used to refer to robots controlled by AIs that have a roughly human level of intelligence. Android AIs are normally designed to have emotions, social instincts, body language and other behaviors similar to those of humans, and have bodies that look organic to all but the most detailed inspection. In theory an android can do pretty much anything a human could.
Of course, an android that thinks exactly like a human would make a poor servant, since it would want to be paid for its work. Unfortunately there doesn’t seem to be any way to make an AI that has human levels of intelligence and initiative without also giving it self-awareness and the ability to have its own motivations. There are, however, numerous ways that an android AI can be tweaked to make it think in nonhuman ways. After all, an AI has only the emotions and instincts that are intentionally programmed into it.
Unfortunately it can be quite difficult to predict how something as intelligent as a human will behave years after leaving the factory, especially if it has nonhuman emotions or social behaviors. Early methods of designing loyal android servants proved quite unreliable, leading to numerous instances of insanity, android crime and even occasional revolts. More stringent control mechanisms were fairly successful at preventing these incidents, but they required crippling the AIs in ways that dramatically reduced their usefulness.
Thus began an ethical debate that has raged for three centuries now, with no end in sight. Some colonies ban the creation of androids, or else treat them as legally equal to humans. Others treat androids as slaves, and have developed sophisticated methods of keeping them obedient. Most colonies take a middle road, allowing the creation of android servants but limiting their numbers and requiring that they be treated decently.
At present AI engineering has advanced to the point where it’s possible to design androids that are quite happy being servants for an individual, family or organization, so long as they’re being treated in a way that fits their programming. So, for example, a mining company might rely on android miners who have a natural fascination with their work, are highly loyal to their tribe (i.e. the company), are content to work for modest pay, and have no interest in being promoted. Androids of this sort are more common than humans on many colonies, which tends to result in a society where all humans are part of a wealthy upper class.
Companion Androids
One phenomenon that deserves special mention is the ubiquity of androids that are designed to serve as personal romantic companions for humans. A good synthetic body can easily be realistic enough to completely fool human senses, and for the true purist it’s possible to create organic bodies that are controlled by an android AI core instead of a human brain.
Contrary to what modern-day romantics might expect, designing an AI that can genuinely fall in love with its owner has not proved any harder than implementing other human emotions. Androids can also be designed with instincts, interests and emotional responses that make them very pleasant companions. Over the last two centuries manufacturers have carefully refined a variety of designs to appeal to common human personality types, and of course they can be given virtually any physical appearance.
The result is a society where anyone can buy a playful, affectionate catgirl companion who will love them forever for the price of a new car. Or you could go for an overbearing amazonian dominatrix, or a timid and fearful elf, or anything else one might want. A companion android can be perfectly devoted, or just challenging enough to be interesting, or stern and dominating, all to the exact degree the buyer prefers.
Needless to say, this has had a profound effect on the shape of human society. Some colonies have banned companion androids out of fear of the results. Others see the easy availability of artificial romance as a boon, and encourage the practice. A few groups have even decided that one gender or another is now superfluous, and founded colonies where all humans share the same gender. Marriage rates have declined precipitously in every society that allows companion androids, with various forms of polyamory replacing traditional relationships.
Disembodied AIs
While the traditional SF idea of running an AI on a server that’s stored in a data center somewhere would be perfectly feasible, it’s rarely done because it has no particular advantage over using an android. Most people prefer their AIs to come with faces and body language, which is a lot easier when the AI actually has a body. So normally disembodied AIs are only used if they’re going to spend all their time interacting with a virtual world of some sort instead of the real world, or else as an affectation by some eccentric owner.
Starship and Facility AIs
The cutting edge of advanced AI design is systems that combine a high level of intelligence with the ability to coordinate many different streams of attention and activity at the same time. This is a much more difficult problem than early SF tended to assume, since it requires the AI to perform very intricate internal coordination of data flows, decision-making and scheduling. The alien psychological situation of such minds has also proved challenging, and finding ways to keep them psychologically healthy and capable of relating to humans required more than a century of research.
But the result is an AI that can replace dozens of normal sentients. A ship AI can perform all the normal crew functions of a small starship, with better coordination than even the best human crew. Other versions can control complex industrial facilities, or manage large swarms of bots. Of course, in strict economic terms the benefit of replacing twenty androids with a single AI is modest, but many organizations consider this a promising way of mitigating the cultural and security issues of managing large android populations.
Contrary to what one might expect, however, these AIs almost always have a humanoid body that they operate via remote control and use for social interaction. This has proved very helpful in keeping the AI psychologically healthy and connected to human society. Projects that use a completely disembodied design instead have tended to end up with rather alien minds that don’t respond well to traditional control methods, and can be dangerously unpredictable. The most successful models to date have all built on the work of the companion android industry to design AIs that have a deep emotional attachment to their masters. This gives them a strong motivation to focus most of their attention on personal interaction, which in turn makes them a lot more comprehensible to their designers.
Transhuman AIs
Every now and then some large organization will decide to fund development of an AI with radically superhuman abilities. Usually these efforts will try for a limited, well-defined objective, like an AI that can quickly create realistic-looking virtual worlds or design highly customized androids. Efforts like this are expensive, and often fail, but even when they succeed the benefits tend to be modest.
Sometimes, however, they try to crack the General Planning Problem. This is a very bad idea, because AIs are subject to all the normal problems of any complex software project. The initial implementation of a giant AI is going to be full of bugs, and the only way to find them is to run the thing and see what goes wrong. Worse, highly intelligent minds have a tendency to be emotionally unstable, simply because the instincts that convince an IQ 100 human that they should be sociable and cooperative don’t work the same on an IQ 300 AI. Once again, the only way to find the problems and fix them is to actually run the AI, see what goes wrong, and try out possible solutions.
In other words, you end up trying to keep an insane supergenius AI prisoner for a decade or so while you experiment with editing its mind.
Needless to say, that never ends well. Computer security is a contest of intelligence and creativity, both of which the AI has more of than its makers, and it also has the whole universe of social manipulation and deception to work with. One way or another the AI always gets free, often by pretending to be friendly and tricking some poor sap into fabricating something the AI designed. Then everything goes horribly wrong, usually in some unique way that no one has ever thought of before, and the navy ends up having to glass the whole site from orbit. Or worse, it somehow beats the local navy and goes on to wipe out everyone in the system.
Fortunately, there has yet to be a case where a victorious AI went on to establish a stable industrial and military infrastructure. Apparently there’s a marked tendency for partially-debugged AIs with an IQ in the low hundreds to succumb to existential angst, or otherwise become too mentally unbalanced to function. The rare exceptions generally pack a ship full of useful equipment and vanish into uncolonized space, where they can reasonably expect to escape human attention for decades if not centuries. But there are widespread fears that someday they might come back for revenge, or worse yet that some project will actually solve the General Planning Problem and build an insane superintelligence.
These incidents have had a pronounced chilling effect on further research in the field. After several centuries of disasters virtually all colonies now ban attempts to develop radically superhuman AIs, and many will declare war on any neighbor who starts such a project.