Sunday, April 30, 2017

April Update

Well, it looks like my foray into SF has been a success. To date Perilous Waif has sold almost twice as many copies as Extermination did in the same period, and the Kindle Unlimited readership is up 400%. So there are definitely going to be more Alice Long books on the way.

For those of you who prefer other formats, Perilous Waif is also available in print and audiobook versions. I must admit the sales of the print edition have been disappointing, but the audiobook (narrated by Mare Trevathan) seems to be pretty popular.

But I haven't forgotten all you Daniel Black fans. I took a bit of a break in February, and then spent several weeks struggling to convince my subconscious that this isn't the right time to start yet another new series. That left me with a collection of opening scenes and first chapters that will probably end up as Patreon rewards at some point, but I did finally manage to get my muse back on track.

At this point I've just started chapter 7 of Thrall, which is threatening to end up a bit longer than Extermination. If I had to guess at a release date right now I'd go with early fall, but as usual that may change depending on how the writing goes. If you don't want to wait that long, I'll be posting advance chapters as usual on my Patreon account over the next few months.

Saturday, January 21, 2017

January Update

The Kindle edition of Perilous Waif is now available for pre-orders on Amazon, with a scheduled release date of Feb 1. I'm currently doing the formatting work to set up a paperback edition, which should end up being available by then as well. I'll be interested to see what the relative sales of the two editions look like - I keep seeing claims that the Kindle sales market is bigger than the print market these days, but it's hard to know for sure.

For those of you who prefer audiobooks, production on the audio version of Perilous Waif has started. The producer is hoping to have it out around the end of February, assuming all goes well. We're going to be using a different narrator than for the Daniel Black books, so let me know what you think of her.

Monday, December 26, 2016

December Update

Perilous Waif is finally complete. At 31 chapters this is by far my longest work to date, so at least you're going to get a substantial book to make up for the wait. I'm currently doing editing and procuring cover art, but this should only take a few weeks. The planned release date for the book will be February 1st. The audiobook version will take a bit longer, since of course it has to go through production, but it should end up being available sometime in the spring.

After that I'll be back to working on the next Daniel Black novel, Revenant. I'm tentatively planning to have that out sometime in the middle of 2017, and I'll be reporting progress here as usual.

Tuesday, October 25, 2016

October Update

I'm up to chapter 25 of Perilous Waif, and expecting to finally finish the first draft sometime in early November. This project has taken a lot longer than I'd originally hoped, but it's almost done. At this point I expect to finish editing and proofreading over the following month while I get cover art lined up, and publish the book sometime around the end of the year.

So next time around I should have some progress to report on the new Daniel Black book that so many of you have been asking about.

Sunday, October 2, 2016

SF Setting - Nanotechnology

Yes, it’s time to talk about one of the most troublesome technologies in science fiction. As with artificial intelligence, the full promise of nanotechnology is so powerful that it’s hard to see how to write a story in a setting where it has been realized. It takes some serious thought to even begin to get a grasp on what kinds of things it can and can’t do, and the post-scarcity society that it logically leads to is more or less incomprehensible.

As a result, mainstream SF generally doesn’t try. In most stories nanotech doesn’t even exist. When it does it’s usually just a thinly veiled justification for nonsensical space magic, and its more plausible applications are ignored. Outside of a few singularity stories, hardly anyone makes a serious attempt to grapple with the full set of capabilities that it implies and how they would affect society.

Fortunately, we don’t have to go all the way with it. Drexler’s work focused mainly on the ultimate physical limits of manufacturing technology, not the practical problems involved in reaching those limits. Those problems are mostly hand-waved away by invoking powerful AI engineers running on supercomputers to take care of all the messy details. But we’ve already seen that this setting doesn’t have any super-AIs to conveniently do all the hard work for us.

So what if we suppose that technology has simply continued to advance step by step for a few hundred years? With no magic AI wand to wave, engineers still have to grapple with technical limitations and practical complexities the hard way. The ability to move individual atoms around solves a lot of problems, of course. But the mind-boggling complexity of the machines nanotech can build creates a whole new level of challenges to replace them.

The history of technology tells us that these challenges will eventually be solved. But doing so with nothing but human ingenuity means that you get a long process of gradual refinement, instead of a sudden leap to virtual godhood. By setting a story somewhere in the middle of this period of refinement we can have nanotechnology, but also have a recognizable economy instead of some kind of post-scarcity wonderland. Sure, the nanotech fabricators can make anything, but someone has to mine elements and process them into feedstock materials for them first. Someone has to run the fabricators, and deal with all the flaws and limitations of an imperfect manufacturing capacity. Someone has to design all those amazing (and amazingly complex) devices the nanotech can fabricate, and market them, and deliver them to the customer.

So let’s take a look at how this partially-developed nanotech economy works, in a universe without godlike AIs.

Mining
In order to build anything you need a supply of the correct atoms. This is a bit harder than it sounds, since advanced technology tends to use a lot of the more exotic elements as well as the common stuff like iron and carbon.

So any colony with a significant amount of industry needs to mine a lot of different sources to get all the elements it needs. Asteroid mining is obviously going to be a major activity, since it will easily provide essentially unlimited amounts of CHON and nickel-iron along with many of the less common elements. Depending on local geography, small moons or even planets may also be economical sources for some elements.

This leads to a vision of giant mining ships carving up asteroids to feed them into huge ore processing units, while smaller ships prospect for deposits of rare elements that are only found in limited quantities. Any rare element that is used in a disproportionately large quantity will tend to be a bottleneck in production, which could lead to trade in raw materials between systems with different abundances of elements.
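
As an aside, the bottleneck logic here is just Liebig’s law of the minimum applied to feedstock. A toy sketch of it, with completely made-up element names and quantities:

    # Output is capped by whichever feedstock element runs short first,
    # relative to how much of it each finished product needs.
    # All names and quantities below are invented for illustration.
    supply = {"iron": 5000.0, "carbon": 8000.0, "gold": 2.0}   # kg refined per day
    needed = {"iron": 40.0, "carbon": 55.0, "gold": 0.05}      # kg per finished unit

    units_per_day = min(supply[e] / needed[e] for e in needed)
    limiting = min(needed, key=lambda e: supply[e] / needed[e])

    print(f"output capped at {units_per_day:.0f} units/day by the {limiting} supply")
    # -> output capped at 40 units/day by the gold supply

Whatever element falls out of that calculation is the one a colony ends up importing.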

Some specialization in the design of the ore processing systems also seems likely. Realistic nanotech devices will have to be designed with a fairly specific chemical environment in mind, and bulk processing will tend to be faster than sorting a load of ore atom by atom. So ore processing is a multi-step process where raw materials are partially refined using the same kinds of methods we have today, and only the final step of purification involves nanotech. The whole process is likely different depending on the expected input as well. Refining a load of nickel-iron with trace amounts of gold and platinum is going to call for a completely different setup than refining a load of icy water-methane slush, or a mass of rocky sulfur compounds.

Of course, even the limited level of AI available can make these activities fairly automated. With robot prospecting drones, mining bots, self-piloting shuttles and other such innovations, the price of raw materials is generally ten to a hundred times lower than in real life.

Limits of Fabrication
In theory nanotechnology can be used to manufacture anything, perfectly placing every atom exactly where it needs to be to assemble any structure that’s allowed by the laws of physics. Unfortunately, practical devices are a lot more limited. To understand why, let’s look at how a nanotech assembler might work.

A typical industrial fabricator for personal goods might have a flat assembly plate, covered on one side with atomic-scale manipulators that position atoms being fed to them through billions of tiny channels running through the plate. On the other side is a set of feedstock reservoirs filled with various elements it might need, with each atom attached to a molecule that acts as a handle to allow the whole system to easily manipulate it. The control computer has to feed exactly the right feedstock molecules through the correct channels in the order needed by the manipulator arms, which put the payload atoms where they’re supposed to go and then strip off the handle molecules and feed them into a disposal system.

Unfortunately, if we do the math we discover that this marvel of engineering is going to take several hours to assemble a layer of finished product the thickness of a sheet of paper. At that rate it’s going to take weeks to make something like a hair dryer, let alone furniture or vehicles.
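
For anyone who wants to check my arithmetic, here’s the rough back-of-envelope version. Every number in it (plate size, arm spacing, placement rate) is an illustrative guess rather than a hard figure:

    # Rough throughput estimate for the assembly-plate fabricator described above.
    plate_area   = 0.1      # m^2, roughly a 30 cm x 30 cm assembly plate (assumed)
    arm_pitch    = 30e-9    # m, assumed spacing between manipulator arms
    place_rate   = 1e6      # atoms per second per arm, i.e. a ~MHz cycle (assumed)
    atom_spacing = 2e-10    # m, typical interatomic distance
    layer_thick  = 1e-4     # m, about the thickness of a sheet of paper

    n_arms      = plate_area / arm_pitch ** 2
    atoms_layer = (plate_area / atom_spacing ** 2) * (layer_thick / atom_spacing)
    hours       = atoms_layer / (n_arms * place_rate) / 3600

    print(f"{n_arms:.1e} arms, {atoms_layer:.1e} atoms, {hours:.1f} hours per layer")
    # -> about 1e14 arms grinding away for roughly three hours per paper-thin layer

A hundred trillion arms working flat out, and it still takes an afternoon to put down a tenth of a millimeter.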

The process will also release enough waste heat to melt the whole machine several times over, so it needs a substantial flow of coolant and a giant heatsink somewhere. This is complicated by the fact that the assembly arms need a hard vacuum to work in, to ensure that there are no unwanted chemical reactions taking place on the surface of the work piece. Oh, but that means it can only build objects that can withstand exposure to vacuum. Flexible objects are also problematic, since even a tiny amount of flexing would ruin the accuracy of the build, and don’t even think about assembling materials that would chemically react with the assembly arms.

Yeah, this whole business isn’t as easy as it sounds.

The usual way to get around the speed problem is to work at a larger scale. Instead of building the final product atom by atom in one big assembly area, you have thousands of tiny fabricators building components the size of a dust mote. Then your main fabricator assembles components instead of individual atoms, which is a much faster process. For larger products you might go through several stages of putting together progressively larger subassemblies in order to get the job done in a reasonable time frame.
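
To put some rough numbers on the difference, here’s a toy comparison of the same product built atom by atom on a flat plate versus built in stages from progressively larger sub-assemblies. The scale factors and rates are invented for illustration, not derived from anything:

    atom = 2e-10                        # m, interatomic spacing
    size = 0.02                         # m, edge length of a 2 cm cube of product
    n_atoms = (size / atom) ** 3        # about 1e24 atoms

    # Direct build: only the arms under the product's footprint can contribute.
    arms = (size / 30e-9) ** 2          # one arm per (30 nm)^2 of footprint (assumed)
    t_direct = n_atoms / (arms * 1e6)   # 1e6 atoms per second per arm (assumed)

    # Staged build: each stage joins ~1e6 parts into one part 100x larger, with
    # arms running 10x slower at each larger scale. Given enough assemblers
    # working in parallel at every stage, wall-clock time is roughly the sum
    # of the per-part times across the four stages.
    t_staged = sum(1e6 / (1e6 / 10 ** k) for k in range(4))

    print(f"atom by atom: {t_direct / 86400:.0f} days")
    print(f"staged, massively parallel: {t_staged / 60:.0f} minutes")
    # -> about 26 days versus about 19 minutes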

Unfortunately this also makes the whole process a lot more complicated, and adds a lot of new constraints. You can’t get every atom in the final product exactly where you want it, because all those subassemblies have to fit together somehow. They have to be stable enough to survive storage and handling, and you can’t necessarily fit them together with sub-nanometer precision like you could individual atoms.

The other problems are addressed by using more specialized fabricator designs, which introduces further limitations. If you want to manufacture liquids or gases you need a fabricator designed for that. If you want to work with molten lead or cryogenic nitrogen you need a special extreme environment fabricator. If you want to make food or medical compounds you need a fabricator designed to work with floppy hyper-complex biological molecules. If you want to make living tissue, well, you’re going to need a very complicated system indeed, and probably a team of professionals to run it.

Fabricators
Despite their limitations, fabricators are still far superior to conventional assembly lines. Large industrial fabricators can produce manufactured goods with very little sentient supervision, and can easily switch from one product to another without any retooling. High-precision fabricators can cheaply produce microscopic computers, sensors, medical implants and microbots. Low-precision devices can assemble prefabricated building block molecules into bulk goods for hardly more than the cost of the raw materials. Hybrid systems can produce bots, vehicles, homes and other large products that combine near-atomic precision for parts that need it with lower precision for parts that don’t. Taking into account the low cost of raw materials, an efficient factory can easily produce manufactured goods at a cost a thousand times lower than what we’re used to.

Of course, fabricators are too useful to be confined to factories. Every spaceship or isolated facility will have at least one fabricator on hand to manufacture replacement parts. Every home will have fabricators that can make clothing, furniture and other simple items. Many retail outlets will have fabricators on site to build products to order, instead of stocking merchandise. These ad-hoc production methods will be slower than a finely tuned factory mass-production operation, which will make them more expensive. But in many cases the flexibility of getting exactly what you want on demand will be more important than the price difference, especially when costs are so low to begin with.

So does this mean all physical goods are ultra-cheap? Well, not necessarily. Products like spaceships, sentient androids and shape-changing smart matter clothing are going to be incredibly complex, which means someone has to invest massive amounts of engineering effort in designing them. They’re going to want to get their investment back somehow. But how?

Copy Protection
Unfortunately, one of the things that nanotechnology allows you to do much better than conventional engineering is install tamper-proofing measures in your products. A genuine GalTech laser rifle might use all sorts of interesting micron-scale machinery to optimize its performance, but it’s also protected by a specialized AI designed to prevent anyone from taking it apart to see how it works. Devoting just a few percent of the weapon’s mass to defensive measures gives it sophisticated sensors, reserves of combat nanites, a radioactive decay battery good for decades of monitoring, and a self-destruct system for its critical components.

Obviously no defense is perfect, but this sort of hardware protection can be much harder to beat than software copy protection. Add in the fact that special fabrication devices may be needed to produce advanced tech, and a new product can easily be on the market for years before anyone manages to crack the protection and make a knock-off version. The knock-offs probably aren’t going to be free, either, because anyone who invests hundreds of man-years in cracking a product’s protection and reverse-engineering it is going to want some return on that investment.

All of this means that the best modern goods are going to command premium prices. If a cheap, generic car would cost five credits to build at the local fabrication shop, this year’s luxury sedan probably sells for a few hundred credits. The same goes for bots, androids, personal equipment and just about anything else with real complexity to hide.

Which is still a heck of an improvement over paying a hundred grand for a new BMW.

Common Benefits
Aside from low manufacturing costs, one of the more universal benefits of nanotech is the ubiquitous use of wonder materials. Drexler is fond of pointing out that diamondoid materials (i.e. synthetic diamond) have a hundred times the strength-to-weight ratio of aircraft aluminum, and would be dirt cheap since they’re made entirely of carbon. Materials science is full of predictions about other materials that would have amazing properties, if only we could make them. Well, now we can. Perfect metallic crystals, exotic alloys and hard-to-create compounds, superconductors and superfluids - with four hundred years of advances in materials science, and the cheap fine-scale manipulation that fabricators can do, whole libraries of wonder materials with extreme properties have become commonplace.

So everything is dramatically stronger, lighter, more durable and more capable than the modern equivalent. A typical car weighs a few hundred kilograms, can fly several thousand kilometers with a few tons of cargo before it needs a recharge, can drive itself, and could probably plow through a brick wall at a hundred kph without sustaining any real damage.

Another common feature is the use of smart matter. This is a generic term for any material that combines microscopic networks of computers and sensors with a power storage and distribution system, mobile microscopic fabricators, and internal piping to distribute feedstock materials and remove waste products. Smart matter materials are self-maintaining and self-healing, although the repair rate is generally a bit slow for military applications. They often include other complex features, such as smart matter clothing that can change shape and color while providing temperature control for its wearer. Unfortunately smart matter is also a lot more expensive than dumb materials, but it’s often worth paying five times as much for equipment that will never wear out.

With better materials, integrated electronics and arbitrarily small feature sizes, most types of equipment can also make use of extreme redundancy to be absurdly reliable. The climate control in your house uses thousands of tiny heat exchangers instead of one big one, and they’ll never all break down at once. The same goes for everything from electrical wiring to your car’s engine - with sufficient ingenuity most devices can be made highly parallel, and centuries of effort have long since found solutions for every common need.
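
The statistics behind “they’ll never all break down at once” are worth a quick illustration. With made-up numbers - a thousand micro heat-exchangers, each with a one percent chance of dying in a given year, and the system fine as long as ninety percent of them survive:

    from math import comb

    n, p, working_needed = 1000, 0.01, 900   # all three values are assumptions
    p_fail = sum(comb(n, k) * (1 - p) ** k * p ** (n - k)   # P(exactly k units survive)
                 for k in range(working_needed))            # summed over k < 900

    print(f"chance of dropping below 90% capacity in a year: {p_fail:.1e}")
    # -> on the order of 1e-65, which is to say it simply never happens

Individual units still fail all the time (about ten per year in this example), which is where the next point comes in.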

This does imply that technology needs constant low-level maintenance to repair failed subsystems, but that job can largely be handled by self-repair systems, maintenance bots and the occasional android technician. The benefit is that the familiar modern experience of having a machine fail to work simply never happens. Instead most people can live out their whole lives without ever having their technology fail them.

Now that’s advanced technology.

Wednesday, August 31, 2016

August Update

I'm currently working on chapter 20 of Perilous Waif, so the end is finally in sight. I'm projecting that this book will be 27 chapters long, so by next month I should have a good idea of when it's going to be finished.

The audiobook version of Extermination is now available on Amazon. I expect that future audiobooks will be out within a month or two of the e-book version, since that seems to be how long production takes.

For those of you waiting for the next Daniel Black novel, I plan to go back to work on it as soon as the first draft of Perilous Waif is finished.

Sunday, July 24, 2016

SF Setting - Artificial Intelligence

Probably the single most difficult technical issue facing anyone who wants to write far-future SF today is the question of what to do about AI. At this point it’s obvious to anyone with an IQ above room temperature that some kind of artificial intelligence is coming, and it’s hard to justify putting it off for more than a few decades. So any far-future setting needs to deal with the issue somehow, but neither of the standard approaches works very well.


The writers of space opera and military SF have largely adopted a convention of ignoring the whole topic, and pretending that space navies a thousand years in the future are still going to rely on human beings to manually pilot their ships and aim their weapons. This often leads to the comical spectacle of spaceships that have less automation than real modern-day warships, with fatal results for the reader’s suspension of disbelief. I’m not interested in writing raygun fantasy stories, so that approach is out.


The other major camp is the guys who write Singularity stories, where the first real success with AI rapidly evolves into a godlike superintelligence and takes over the universe. Unfortunately this deprives the humans in the story of any agency. If we take the ‘godlike’ part seriously it means the important actors are all going to be vast AIs whose thoughts are too complex for any reader (or author, for that matter) to understand. If you want to write a brief morality play about the dangers of AI that’s fine, but it’s a serious problem if the goal is a setting where you can tell stories about human beings.


So for this setting I’ve had to create a middle ground, where AI has enough visible effects to feel realistic but doesn’t render humans completely irrelevant. The key to making this possible is a single limiting assumption about the nature of intelligence.


AI Limits
There’s been an argument raging for a couple of decades now about the nature of intelligence, and how easily it can be improved in an AI. There are several different camps, but the differences in their predictions mostly hinge on disagreements about how feasible it is to solve what I call the General Planning Problem. That is, given a goal and some imperfect information about a complex world, how difficult is it in the general case to formulate a plan of action to achieve your goal?


Proponents of strong AI tend to implicitly assume that this problem has some simple, efficient solution that applies to all cases. In order to prevent the creation of AI superintelligences, my assumption in this setting is that no one has discovered such a perfect solution. Instead there is only a collection of special case solutions that work for various narrow classes of problems, and most of them require an exponential increase in computing power to handle a linear increase in problem complexity.


In other words, solving problems in any particular domain requires specialized expertise, and most domains are far too complex to allow perfect solutions. The universe is full of chaotic systems like weather, culture and politics that are intrinsically impossible to predict outside of very narrow constraints. Even in well-understood areas like engineering, the behavior of any complex system is governed by laws that are computationally intractable (i.e. quantum mechanics), and simplified models always contain significant errors. So no matter how smart you are, you can’t just run a simulation to find some perfect plan that will infallibly work. Instead you have to do things the way humans do, with lots of guesswork and assumptions and a constant need to deal with unexpected problems.


This means that there’s no point where an AI with superhuman computing power suddenly takes off and starts performing godlike feats of deduction and manipulation. Instead each advance in AI design yields only a modest improvement in intelligence, at the cost of a dramatic rise in complexity and design cost. A system with a hundred times the computing power of a human brain might be a bit smarter than any human, but it isn’t going to have an IQ of 10,000 the way a naive extrapolation would suggest. It will be able to crack a few unsolved scientific problems, or perhaps design a slightly better hyperspace converter, but it isn’t going to predict next year’s election results any better than the average pundit.
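
To put toy numbers on that: suppose each additional unit of planning capability costs ten times more compute (the base of ten, and treating one human brain as one unit of compute, are both arbitrary stand-ins):

    import math

    def capability(compute_multiple, base=10.0):
        # capability gained over the human baseline, if compute = base ** capability
        return math.log(compute_multiple, base)

    for mult in (1, 100, 1_000_000):
        print(f"{mult:>9}x human compute -> +{capability(mult):.0f} capability units")
    # 1x buys +0, 100x buys +2, and a million-fold increase buys only +6

Under that assumption, raw hardware alone buys you an AI that is noticeably sharper than a human, not a god.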


Of course, in the long run the field of AI design will gradually advance, and the AIs will eventually become smart enough to be inscrutable to humans. But this lets us have AIs without immediately getting superintelligences, and depending on the risks and rewards of further R&D ordinary humans can feasibly remain relevant for several centuries.


So what would non-super AIs be used for?


Bots
Robots controlled by non-sentient AI programs are generally referred to as bots, to distinguish them from the more intelligent androids. A bot’s AI has the general intelligence of a dog or cat - enough to handle the basic problems of perception, locomotion, navigation and object manipulation that current robots struggle with, but not enough to be a serious candidate for personhood. Most bots also have one or more skill packs, which are specialized programs similar to modern expert systems that allow the bot to perform tasks within a limited area of expertise. Voice recognition and speech synthesis are also common features, to allow the bot to be given verbal commands.


Bots can do most types of simple, repetitive work with minimal supervision. Unlike modern robots they can work in unstructured environments like a home or outdoor area almost as easily as a factory floor. They’re also adaptable enough to be given new tasks using verbal instruction and perhaps an example (i.e. “dig a trench two meters deep, starting here and going to that stake in the ground over there”).


Unfortunately bots don’t deal well with unexpected problems, which tend to happen a lot in any sort of complex job. They are also completely lacking in creativity or initiative, at least by human standards, and don’t deal with ambiguity or aesthetic issues very well. So they need a certain amount of sentient supervision, and the more chaotic an environment is the higher the necessary ratio of supervisors to bots. Of course, in a controlled environment like a factory floor the number of supervisors can be reduced almost indefinitely, which makes large-scale manufacturing extremely cheap.


Due to their low cost and high general utility bots are everywhere in a modern colony. They do virtually all manual labor, as well as a large proportion of service jobs (maid service, yard work, deliveries, taxi service, and so on). They also make up the majority of military ground troops, since a warbot is much tougher and far more expendable than a human soldier. Practically all small vehicles are technically robots, since they have the ability to drive themselves wherever their owner wants to go. Most colonies have several dozen bots for every person, leading to a huge increase in per capita wealth compared to the 21st century.


Androids
The term ‘android’ is used to refer to robots controlled by AIs that have a roughly human level of intelligence. Android AIs are normally designed to have emotions, social instincts, body language and other behaviors similar to those of humans, and have bodies that look organic to all but the most detailed inspection. In theory an android can do pretty much anything a human could.


Of course, an android that thinks exactly like a human would make a poor servant, since it would want to be paid for its work. Unfortunately there doesn’t seem to be any way to make an AI that has human levels of intelligence and initiative without also giving it self-awareness and the ability to have its own motivations. There are, however, numerous ways that an android AI can be tweaked to make it think in nonhuman ways. After all, an AI has only the emotions and instincts that are intentionally programmed into it.


Unfortunately it can be quite difficult to predict how something as intelligent as a human will behave years after leaving the factory, especially if it has nonhuman emotions or social behaviors. Early methods of designing loyal android servants proved quite unreliable, leading to numerous instances of insanity, android crime and even occasional revolts. More stringent control mechanisms were fairly successful at preventing these incidents, but they required crippling the AIs in ways that dramatically reduced their usefulness.


Thus began an ethical debate that has raged for three centuries now, with no end in sight. Some colonies ban the creation of androids, or else treat them as legally equal to humans. Others treat androids as slaves, and have developed sophisticated methods of keeping them obedient. Most colonies take a middle road, allowing the creation of android servants but limiting their numbers and requiring that they be treated decently.


At present AI engineering has advanced to the point where it’s possible to design androids that are quite happy being servants for an individual, family or organization, so long as they’re being treated in a way that fits their programming. So, for example, a mining company might rely on android miners who have a natural fascination with their work, are highly loyal to their tribe (i.e. the company), are content to work for modest pay, and have no interest in being promoted. Androids of this sort are more common than humans on many colonies, which tends to result in a society where all humans are part of a wealthy upper class.


Companion Androids
One phenomenon that deserves special mention is the ubiquity of androids that are designed to serve as personal romantic companions for humans. A good synthetic body can easily be realistic enough to completely fool human senses, and for the true purist it’s possible to create organic bodies that are controlled by an android AI core instead of a human brain.


Contrary to what modern-day romantics might expect, designing an AI that can genuinely fall in love with its owner has not proved any harder than implementing other human emotions. Androids can also be designed with instincts, interests and emotional responses that make them very pleasant companions. Over the last two centuries manufacturers have carefully refined a variety of designs to appeal to common human personality types, and of course they can be given virtually any physical appearance.


The result is a society where anyone can buy a playful, affectionate catgirl companion who will love them forever for the price of a new car. Or you could go for an overbearing amazonian dominatrix, or a timid and fearful elf, or anything else one might want. A companion android can be perfectly devoted, or just challenging enough to be interesting, or stern and dominating, all to the exact degree the buyer prefers.


Needless to say, this has had a profound effect on the shape of human society. Some colonies have banned companion androids out of fear of the results. Others see the easy availability of artificial romance as a boon, and encourage the practice. A few groups have even decided that one gender or another is now superfluous, and founded colonies where all humans share the same gender. Marriage rates have declined precipitously in every society that allows companion androids, with various forms of polyamory replacing traditional relationships.


Disembodied AIs
While the traditional SF idea of running an AI on a server that’s stored in a data center somewhere would be perfectly feasible, it’s rarely done because it has no particular advantage over using an android. Most people prefer their AIs to come with faces and body language, which is a lot easier if it actually has a body. So normally disembodied AIs are only used if they’re going to spend all their time interacting with a virtual world of some sort instead of the real world, or else as an affectation by some eccentric owner.


Starship and Facility AIs
The cutting edge of advanced AI design is systems that combine a high level of intelligence with the ability to coordinate many different streams of attention and activity at the same time. This is a much more difficult problem than early SF tended to assume, since it requires the AI to perform very intricate internal coordination of data flows, decision-making and scheduling. The alien psychological situation of such minds has also proved challenging, and finding ways to keep them psychologically healthy and capable of relating to humans required more than a century of research.


But the result is an AI that can replace dozens of normal sentients. A ship AI can perform all the normal crew functions of a small starship, with better coordination than even the best human crew. Other versions can control complex industrial facilities, or manage large swarms of bots. Of course, in strict economic terms the benefit of replacing twenty androids with a single AI is modest, but many organizations consider this a promising way of mitigating the cultural and security issues of managing large android populations.


Contrary to what one might expect, however, these AIs almost always have a humanoid body that they operate via remote control and use for social interaction. This has proved very helpful in keeping the AI psychologically healthy and connected to human society. Projects that use a completely disembodied design instead have tended to end up with rather alien minds that don’t respond well to traditional control methods, and can be dangerously unpredictable. The most successful models to date have all built on the work of the companion android industry to design AIs that have a deep emotional attachment to their masters. This gives them a strong motivation to focus most of their attention on personal interaction, which in turn makes them a lot more comprehensible to their designers.


Transhuman AIs
Every now and then some large organization will decide to fund development of an AI with radically superhuman abilities. Usually these efforts will try for a limited, well-defined objective, like an AI that can quickly create realistic-looking virtual worlds or design highly customized androids. Efforts like this are expensive, and often fail, but even when they succeed the benefits tend to be modest.


Sometimes, however, they try to crack the General Planning Problem. This is a very bad idea, because AIs are subject to all the normal problems of any complex software project. The initial implementation of a giant AI is going to be full of bugs, and the only way to find them is to run the thing and see what goes wrong. Worse, highly intelligent minds have a tendency to be emotionally unstable simply because the instincts that convince an IQ 100 human that they should be sociable and cooperative don’t work the same on an IQ 300 AI. Once again, the only way to find out where the problems are and fix them is to actually run the AI to see what goes wrong and try out possible solutions.


In other words, you end up trying to keep an insane supergenius AI prisoner for a decade or so while you experiment with editing its mind.


Needless to say, that never ends well. Computer security is a contest of intelligence and creativity, both of which the AI has more of than its makers, and it also has the whole universe of social manipulation and deception to work with. One way or another the AI always gets free, often by pretending to be friendly and tricking some poor sap into fabricating something the AI designed. Then everything goes horribly wrong, usually in some unique way that no one has ever thought of before, and the navy ends up having to glass the whole site from orbit. Or worse, it somehow beats the local navy and goes on to wipe out everyone in the system.


Fortunately, there has yet to be a case where a victorious AI went on to establish a stable industrial and military infrastructure. Apparently there’s a marked tendency for partially-debugged AIs with an IQ in the low hundreds to succumb to existential angst, or otherwise become too mentally unbalanced to function. The rare exceptions generally pack a ship full of useful equipment and vanish into uncolonized space, where they can reasonably expect to escape human attention for decades if not centuries. But there are widespread fears that someday they might come back for revenge, or worse yet that some project will actually solve the General Planning Problem and build an insane superintelligence.

These incidents have had a pronounced chilling effect on further research in the field. After several centuries of disasters virtually all colonies now ban attempts to develop radically superhuman AIs, and many will declare war on any neighbor who starts such a project.