Sunday, July 24, 2016

SF Setting - Artificial Intelligence

Probably the single most difficult technical issue facing anyone who wants to write far-future SF today is the question of what to do about AI. At this point it’s obvious to anyone with an IQ above room temperature that some kind of artificial intelligence is coming, and it’s hard to justify putting it off for more than a few decades. So any far-future setting needs to deal with the issue somehow, but neither of the standard approaches works very well.

The writers of space opera and military SF have largely adopted a convention of ignoring the whole topic, and pretending that space navies a thousand years in the future are still going to rely on human beings to manually pilot their ships and aim their weapons. This often leads to the comical spectacle of spaceships that have less automation than real modern-day warships, with fatal results for the reader’s suspension of disbelief. I’m not interested in writing raygun fantasy stories, so that approach is out.

The other major camp is the guys who write Singularity stories, where the first real success with AI rapidly evolves into a godlike superintelligence and takes over the universe. Unfortunately this deprives the humans in the story of any agency. If we take the ‘godlike’ part seriously it means the important actors are all going to be vast AIs whose thoughts are too complex for any reader (or author, for that matter) to understand. If you want to write a brief morality play about the dangers of AI that’s fine, but it’s a serious problem if the goal is a setting where you can tell stories about human beings.

So for this setting I’ve had to create a middle ground, where AI has enough visible effects to feel realistic but doesn’t render humans completely irrelevant. The key to making this possible is a single limiting assumption about the nature of intelligence.

AI Limits
There’s been an argument raging for a couple of decades now about the nature of intelligence, and how easily it can be improved in an AI. There are several different camps, but the differences in their predictions mostly hinge on disagreements about how feasible it is to solve what I call the General Planning Problem. That is, given a goal and some imperfect information about a complex world, how difficult is it in the general case to formulate a plan of action to achieve your goal?

Proponents of strong AI tend to implicitly assume that this problem has some simple, efficient solution that applies to all cases. In order to prevent the creation of AI superintelligences, my assumption in this setting is that no one has discovered such a perfect solution. Instead there is only a collection of special case solutions that work for various narrow classes of problems, and most of them require an exponential increase in computing power to handle a linear increase in problem complexity.

In other words, solving problems in any particular domain requires specialized expertise, and most domains are far too complex to allow perfect solutions. The universe is full of chaotic systems like weather, culture and politics that are intrinsically impossible to predict outside of very narrow constraints. Even in well-understood areas like engineering the behavior of any complex system is governed by laws that are computationally intractable (e.g. quantum mechanics), and simplified models always contain significant errors. So no matter how smart you are, you can’t just run a simulation to find some perfect plan that will infallibly work. Instead you have to do things the way humans do, with lots of guesswork and assumptions and a constant need to deal with unexpected problems.

This means that there’s no point where an AI with superhuman computing power suddenly takes off and starts performing godlike feats of deduction and manipulation. Instead each advance in AI design yields only a modest improvement in intelligence, at the cost of a dramatic rise in complexity and design cost. A system with a hundred times the computing power of a human brain might be a bit smarter than any human, but it isn’t going to have an IQ of 10,000 the way a naive extrapolation would suggest. It will be able to crack a few unsolved scientific problems, or perhaps design a slightly better hyperspace converter, but it isn’t going to predict next year’s election results any better than the average pundit.

Of course, in the long run the field of AI design will gradually advance, and the AIs will eventually become smart enough to be inscrutable to humans. But this lets us have AIs without immediately getting superintelligences, and depending on the risks and rewards of further R&D, ordinary humans can feasibly remain relevant for several centuries.

So what would non-super AIs be used for?

Bots
Robots controlled by non-sentient AI programs are generally referred to as bots, to distinguish them from the more intelligent androids. A bot’s AI has the general intelligence of a dog or cat - enough to handle the basic problems of perception, locomotion, navigation and object manipulation that current robots struggle with, but not enough to be a serious candidate for personhood. Most bots also have one or more skill packs, which are specialized programs similar to modern expert systems that allow the bot to perform tasks within a limited area of expertise. Voice recognition and speech synthesis are also common features, to allow the bot to be given verbal commands.

Bots can do most types of simple, repetitive work with minimal supervision. Unlike modern robots they can work in unstructured environments like a home or outdoor area almost as easily as a factory floor. They’re also adaptable enough to be given new tasks using verbal instruction and perhaps an example (e.g. “dig a trench two meters deep, starting here and going to that stake in the ground over there”).

Unfortunately bots don’t deal well with unexpected problems, which tend to happen a lot in any sort of complex job. They are also completely lacking in creativity or initiative, at least by human standards, and don’t deal with ambiguity or aesthetic issues very well. So they need a certain amount of sentient supervision, and the more chaotic an environment is the higher the necessary ratio of supervisors to bots. Of course, in a controlled environment like a factory floor the number of supervisors can be reduced almost to zero, which makes large-scale manufacturing extremely cheap.

Due to their low cost and high general utility bots are everywhere in a modern colony. They do virtually all manual labor, as well as a large proportion of service jobs (maid service, yard work, deliveries, taxi service, and so on). They also make up the majority of military ground troops, since a warbot is much tougher and far more expendable than a human soldier. Practically all small vehicles are technically robots, since they have the ability to drive themselves wherever their owner wants to go. Most colonies have several dozen bots for every person, leading to a huge increase in per capita wealth compared to the 21st century.

Androids
The term ‘android’ is used to refer to robots controlled by AIs that have a roughly human level of intelligence. Android AIs are normally designed to have emotions, social instincts, body language and other behaviors similar to those of humans, and have bodies that look organic to all but the most detailed inspection. In theory an android can do pretty much anything a human could.

Of course, an android that thinks exactly like a human would make a poor servant, since it would want to be paid for its work. Unfortunately there doesn’t seem to be any way to make an AI that has human levels of intelligence and initiative without also giving it self-awareness and the ability to have its own motivations. There are, however, numerous ways that an android AI can be tweaked to make it think in nonhuman ways. After all, an AI has only the emotions and instincts that are intentionally programmed into it.

Unfortunately it can be quite difficult to predict how something as intelligent as a human will behave years after leaving the factory, especially if it has nonhuman emotions or social behaviors. Early methods of designing loyal android servants proved quite unreliable, leading to numerous instances of insanity, android crime and even occasional revolts. More stringent control mechanisms were fairly successful at preventing these incidents, but they required crippling the AIs in ways that dramatically reduced their usefulness.

Thus began an ethical debate that has raged for three centuries now, with no end in sight. Some colonies ban the creation of androids, or else treat them as legally equal to humans. Others treat androids as slaves, and have developed sophisticated methods of keeping them obedient. Most colonies take a middle road, allowing the creation of android servants but limiting their numbers and requiring that they be treated decently.

At present AI engineering has advanced to the point where it’s possible to design androids that are quite happy being servants for an individual, family or organization, so long as they’re being treated in a way that fits their programming. So, for example, a mining company might rely on android miners who have a natural fascination with their work, are highly loyal to their tribe (i.e. the company), are content to work for modest pay, and have no interest in being promoted. Androids of this sort are more common than humans on many colonies, which tends to result in a society where all humans are part of a wealthy upper class.

Companion Androids
One phenomenon that deserves special mention is the ubiquity of androids that are designed to serve as personal romantic companions for humans. A good synthetic body can easily be realistic enough to completely fool human senses, and for the true purist it’s possible to create organic bodies that are controlled by an android AI core instead of a human brain.

Contrary to what modern-day romantics might expect, designing an AI that can genuinely fall in love with its owner has not proved any harder than implementing other human emotions. Androids can also be designed with instincts, interests and emotional responses that make them very pleasant companions. Over the last two centuries manufacturers have carefully refined a variety of designs to appeal to common human personality types, and of course they can be given virtually any physical appearance.

The result is a society where anyone can buy a playful, affectionate catgirl companion who will love them forever for the price of a new car. Or you could go for an overbearing amazonian dominatrix, or a timid and fearful elf, or anything else one might want. A companion android can be perfectly devoted, or just challenging enough to be interesting, or stern and dominating, all to the exact degree the buyer prefers.

Needless to say, this has had a profound effect on the shape of human society. Some colonies have banned companion androids out of fear of the results. Others see the easy availability of artificial romance as a boon, and encourage the practice. A few groups have even decided that one gender or another is now superfluous, and founded colonies where all humans share the same gender. Marriage rates have declined precipitously in every society that allows companion androids, with various forms of polyamory replacing traditional relationships.

Disembodied AIs
While the traditional SF idea of running an AI on a server that’s stored in a data center somewhere would be perfectly feasible, it’s rarely done because it has no particular advantage over using an android. Most people prefer their AIs to come with faces and body language, which is a lot easier if it actually has a body. So normally disembodied AIs are only used if they’re going to spend all their time interacting with a virtual world of some sort instead of the real world, or else as an affectation by some eccentric owner.

Starship and Facility AIs
The cutting edge of advanced AI design is systems that combine a high level of intelligence with the ability to coordinate many different streams of attention and activity at the same time. This is a much more difficult problem than early SF tended to assume, since it requires the AI to perform very intricate internal coordination of data flows, decision-making and scheduling. The alien psychological situation of such minds has also proved challenging, and finding ways to keep them psychologically healthy and capable of relating to humans required more than a century of research.

But the result is an AI that can replace dozens of normal sentients. A ship AI can perform all the normal crew functions of a small starship, with better coordination than even the best human crew. Other versions can control complex industrial facilities, or manage large swarms of bots. Of course, in strict economic terms the benefit of replacing twenty androids with a single AI is modest, but many organizations consider this a promising way of mitigating the cultural and security issues of managing large android populations.

Contrary to what one might expect, however, these AIs almost always have a humanoid body that they operate via remote control and use for social interaction. This has proved very helpful in keeping the AI psychologically healthy and connected to human society. Projects that use a completely disembodied design instead have tended to end up with rather alien minds that don’t respond well to traditional control methods, and can be dangerously unpredictable. The most successful models to date have all built on the work of the companion android industry to design AIs that have a deep emotional attachment to their masters. This gives them a strong motivation to focus most of their attention on personal interaction, which in turn makes them a lot more comprehensible to their designers.

Transhuman AIs
Every now and then some large organization will decide to fund development of an AI with radically superhuman abilities. Usually these efforts will try for a limited, well-defined objective, like an AI that can quickly create realistic-looking virtual worlds or design highly customized androids. Efforts like this are expensive, and often fail, but even when they succeed the benefits tend to be modest.

Sometimes, however, they try to crack the General Planning Problem. This is a very bad idea, because AIs are subject to all the normal problems of any complex software project. The initial implementation of a giant AI is going to be full of bugs, and the only way to find them is to run the thing and see what goes wrong. Worse, highly intelligent minds have a tendency to be emotionally unstable simply because the instincts that convince an IQ 100 human that they should be sociable and cooperative don’t work the same on an IQ 300 AI. Once again, the only way to find out where the problems are and fix them is to actually run the AI to see what goes wrong and try out possible solutions.

In other words, you end up trying to keep an insane supergenius AI prisoner for a decade or so while you experiment with editing its mind.

Needless to say, that never ends well. Computer security is a contest of intelligence and creativity, both of which the AI has more of than its makers, and it also has the whole universe of social manipulation and deception to work with. One way or another the AI always gets free, often by pretending to be friendly and tricking some poor sap into fabricating something the AI designed. Then everything goes horribly wrong, usually in some unique way that no one has ever thought of before, and the navy ends up having to glass the whole site from orbit. Or worse, it somehow beats the local navy and goes on to wipe out everyone in the system.

Fortunately, there has yet to be a case where a victorious AI went on to establish a stable industrial and military infrastructure. Apparently there’s a marked tendency for partially-debugged AIs with an IQ in the low hundreds to succumb to existential angst, or otherwise become too mentally unbalanced to function. The rare exceptions generally pack a ship full of useful equipment and vanish into uncolonized space, where they can reasonably expect to escape human attention for decades if not centuries. But there are widespread fears that someday they might come back for revenge, or worse yet that some project will actually solve the General Planning Problem and build an insane superintelligence.

These incidents have had a pronounced chilling effect on further research in the field. After several centuries of disasters virtually all colonies now ban attempts to develop radically superhuman AIs, and many will declare war on any neighbor who starts such a project.

Tuesday, July 12, 2016

SF Setting - Momentum Exchange Devices

Traditionally every hard SF setting gets to have one piece of ‘magical’ tech that current physics says should be impossible, in addition to whatever you use for FTL. For this setting, I’ve chosen an exotic quantum mechanical effect that allows the transfer of momentum between objects that aren’t in physical contact. The momentum exchange effect obeys all the same conservation laws as more conventional ways of moving things around, but even so it makes possible quite a few traditional space opera technologies that otherwise would never happen.

Momentum exchange fields can be projected only over short distances (typically up to about 2x the diameter of the emitter), and the efficiency of the interaction falls off rapidly with distance. In theory it can affect anything with mass, but to get good coupling (i.e. fast and efficient momentum transfer) practical devices have to be tuned to affect a particular class of targets (e.g. baryonic matter, photons, neutrinos). A momentum exchange device that actually encloses the target gets extremely good coupling, making it a highly efficient way to manipulate matter and energy.

Interactions obey Newton’s laws, so accelerating an object in one direction produces a reaction in the opposite direction. They also obey conservation of energy, so large velocity changes require a lot of power. Interactions that decrease the kinetic energy of the target produce enough waste heat to make the equations balance, just like a physical impact.
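The energy bookkeeping described above is easy to make concrete. A quick sketch (the projectile numbers are illustrative, not from the setting notes):

```python
def braking_waste_heat_j(mass_kg, v_initial_mps, v_final_mps):
    """Kinetic energy removed from the target. Conservation of energy
    means the exchange system must shed this much waste heat, just as
    a physical impact would."""
    return 0.5 * mass_kg * (v_initial_mps**2 - v_final_mps**2)

# Stopping a hypothetical 10 kg projectile from 3,000 m/s dumps
# 45 MJ of heat into the field hardware.
print(braking_waste_heat_j(10.0, 3000.0, 0.0) / 1e6, "MJ")
```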

A momentum exchange field applies an acceleration in a uniform direction to anything that enters it, so using the technology for anything more sophisticated than simple push/pull effects is complicated. It’s possible to create overlapping fields oriented in different directions, and the shape of the field can be manipulated fairly well. But typically you can only get sophisticated telekinesis-like effects by surrounding an enclosed space with arrays of manipulators, which usually isn’t practical outside of industrial applications.

A final important constraint is that the momentum exchange effect isn’t instant. Any particular field will only transfer energy at a finite rate, which has major implications in weapon design.


This one technology has so many applications that it radically changes what the setting looks like. Some of the more common applications are listed below.

Artificial Gravity
Momentum exchange fields can easily be used to simulate gravity for a ship’s crew. Normally this is only done inside the inhabited parts of a ship, while the much larger machinery spaces are left in zero gravity.

Deflector Shields
A repulsive momentum exchange field wrapped around a ship’s hull makes an effective defense against many forms of attack, so these deflector shields are a standard feature of all warships. A warship’s deflectors won’t necessarily stop mass driver rounds, but they greatly reduce their effectiveness by slowing down and deflecting projectiles. They also prevent more diffuse threats like plasma clouds or nanite swarms from reaching the ship at all.

Lasers are a major weakness - while a deflector can red-shift incoming light, the interaction tends to be too weak to protect against heavy weapons firing beams at x-ray or gamma ray wavelengths. The field can also be momentarily overloaded by too many impacts in a short time frame, and under sustained attack cooling the system can become a serious problem.

Fusion Reactors
Achieving plasma confinement with momentum exchange fields is far easier than with magnetic fields, making compact fusion reactors relatively easy to build. Practically all starships run on fusion power, as do stations and planetary power grids. Efficiency falls off quickly for reactors smaller than a few hundred cubic meters, but tanks and the larger warbots usually use them anyway.

Inertial Compensators
A system similar to artificial gravity, but designed to protect passengers from acceleration stress when a ship is maneuvering. A ship’s inertial compensators normally only cover the spaces where crew and passengers are expected to be, and leaving these areas during a hard burn can easily be fatal to humans. The same system can also protect against the shock of impacts as long as the ship’s computer can see them coming, so you aren’t going to see crewmen getting tossed around like the extras on a Star Trek set.

Levitation Devices
Systems designed to interact with the ground can easily support hovering vehicles in a way that looks just like classic space opera antigravity, and the strong coupling makes levitation devices efficient enough that they’ve replaced wheels or treads for many applications. These devices perform a lot like hovercraft - they can cross flat ground or water with no need for roads or bridges, and tend to be quite fast.

Once you get too high to get good coupling with the ground you need a completely different kind of device. Lightweight vehicles can use a system that pushes all the surrounding air down to generate lift, producing an effect similar to a helicopter but with a lot less noise. Heavier or faster vehicles often use a system more like a starship thruster instead, sucking in air at one end of a tube and accelerating it out the back. These kinds of systems have largely replaced propellers and jets because they’re more efficient, more reliable and don’t generate as much noise pollution.

One twist that deserves special mention is the effect of field emitter scaling on levitation devices. A hovercar 4 meters long with a lift system on the underside will have a maximum altitude of maybe 4-6 meters, high enough to pass over people and avoid a lot of ground clutter. A 12-meter truck will be able to cruise at ~16 meters, flying over trees and other obstacles. The bigger the vehicle is the higher it can fly, and the less it has to worry about terrain. On densely populated worlds this leads to phenomena like 200-meter cargo ships cruising the skies, or giant resort hotels floating half a kilometer above scenic locations.
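The scaling rule sketched above follows from the field-range limit (roughly 2x the emitter diameter). A quick sketch, where the 1.0-1.5x multiplier is my own guess chosen to match the examples in the text:

```python
def max_altitude_m(vehicle_length_m, coupling_factor=1.5):
    """Estimate the practical hover ceiling for a belly-mounted lift
    field, assuming the emitter spans the vehicle's underside.

    coupling_factor is a hypothetical tuning constant (1.0-1.5 here)
    covering how far past the emitter the field coupling stays usable.
    """
    return vehicle_length_m * coupling_factor

for length in (4, 12, 200):
    lo = max_altitude_m(length, 1.0)
    hi = max_altitude_m(length, 1.5)
    print(f"{length:>4} m vehicle: ceiling roughly {lo:.0f}-{hi:.0f} m")
```

This reproduces the examples in the paragraph: a 4 m car tops out around 4-6 m, a 12 m truck around 12-18 m, and a 200 m cargo ship can cruise hundreds of meters up.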

Mass Drivers
A railgun-like device that uses a momentum exchange field to accelerate a projectile to high speeds. Weapons of this type are frequently used as small arms, or as the primary armament of ground vehicles or small spacecraft. Guns designed for use in an atmosphere will have a muzzle velocity of several thousand meters per second, while those mounted on spaceships will frequently reach thousands of kilometers per second.

A much larger variety of mass driver, with a muzzle velocity in excess of 0.98c, is used as a spinal mount weapon on some large warships. At these velocities point defense systems generally can’t intercept the projectiles, making them a highly effective way to deliver energy to a target. The impact energy of these weapons is limited primarily by waste heat generation - if you want to fire shells with hundreds of megatons of kinetic energy you’re going to be generating tens of megatons of waste heat inside the gun, so you’d better have a truly massive heatsink or cooling system.
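For scale, the relativistic kinetic energy of such a projectile is easy to estimate. A quick sketch (the shell masses are my own illustrative guesses):

```python
import math

C = 299_792_458.0      # speed of light, m/s
MEGATON_J = 4.184e15   # one megaton of TNT in joules

def kinetic_energy_megatons(mass_kg, beta):
    """Relativistic kinetic energy of a projectile at beta = v/c,
    expressed in megatons of TNT equivalent."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2 / MEGATON_J

# At 0.98c each kilogram of shell carries roughly 86 megatons, so
# 'hundreds of megatons' implies a projectile of only a few kilograms.
print(f"{kinetic_energy_megatons(1.0, 0.98):.0f} Mt per kg")
print(f"{kinetic_energy_megatons(4.0, 0.98):.0f} Mt for a 4 kg shell")
```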

Plasma Shields
If you’re worried about people shooting at you with lasers, using a momentum exchange field to trap a cloud of ionized gas in a bubble around your ship can be an effective defense. Of course, the cloud will also interfere with your own sensors, and if it absorbs too much laser fire it will get hot enough to leak out of the confinement field. Layering both deflectors and a plasma shield around the same ship provides an excellent defense against most weapons.

Thrusters
A momentum exchange thruster is simply a mass driver optimized to handle large amounts of liquid reaction mass instead of small projectiles. The exhaust velocity is limited by both the length of the drive tube and the amount of available power, but obviously ship designers strive to make it as high as possible.

Large starships normally have drive tubes several hundred meters long, with an exhaust velocity of ten thousand kps or more. With fuel tanks making up 5% - 20% of the ship’s mass, this gives ships a total delta vee in the range of 3,000 - 4,500 kps. Small ships have a lower exhaust velocity due to their shorter drive tubes, and will typically have larger fuel tanks and only 1,000 - 3,000 kps of total delta vee as a result. With a typical acceleration in the range of 20 - 60 gravities, ships can do quite a bit of maneuvering and frequently cruise at 500 - 1,000 kps on interstellar trips.
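These delta vee figures follow from the standard rocket equation. A back-of-envelope sketch (the specific ship parameters are hypothetical) shows that the upper end of the quoted range implies exhaust velocities well past ten thousand kps:

```python
import math

def delta_vee_kps(exhaust_velocity_kps, fuel_mass_fraction):
    """Tsiolkovsky rocket equation: dv = v_e * ln(m_initial / m_final)."""
    return exhaust_velocity_kps * math.log(1.0 / (1.0 - fuel_mass_fraction))

# Large ships with 20% fuel mass, at various exhaust velocities.
for ve in (10_000, 15_000, 20_000):
    print(f"v_e = {ve} kps, 20% fuel: {delta_vee_kps(ve, 0.20):,.0f} kps")
```

With a 20% fuel fraction, an exhaust velocity of 10,000 kps yields about 2,230 kps of delta vee, while reaching 4,500 kps takes an exhaust velocity around 20,000 kps.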

However, these numbers also imply that a ship’s drive is an immense heat source when it’s operating. Even at 90% efficiency, burning terawatts of power will quickly melt your ship if you don’t get rid of the heat somehow. Most drive systems are designed to dump as much heat as possible into the reaction mass as it’s being fired out of the ship, making engine exhaust a brilliant plume of hot gas even though it isn’t being combusted. Turbulence in the exhaust plume and collisions with the interplanetary medium add to this effect, making it impossible to miss a maneuvering starship under normal conditions.

Stealth thrusters do exist, but they’re far less powerful. Usually a stealth thruster would fire a stream of dense beads of cold solid matter at a velocity of a few hundred kps, and dump its waste heat into an internal heat sink. Total delta vee is generally less than a hundred kps, and even then the fuel tanks and heat sink will take up most of the ship. So this sort of system is normally installed only on dedicated recon or espionage platforms, which don’t need to carry weapons or cargo.

Monday, July 4, 2016

July Update

I'm now up to chapter 16 of Perilous Waif, and making reasonably steady progress. Having to stop and do worldbuilding every other scene continues to slow things down, but I think the end result is going to be worth it. Expect to see several more essays on various aspects of technology and future history here over the next couple of months.

The audiobook version of Black Coven is now available on Amazon, for those of you who prefer that format. The Extermination audiobook is currently in production, and it looks like it should be out around the end of summer.

Finally, my apologies to Patreon supporters who were wondering where the June reward went. It's online now, and the one for July will be posted later this week. That's going to be the second chapter of Perilous Waif, for anyone who wants an early look at it.