Monday, December 26, 2016

December Update

Perilous Waif is finally complete. At 31 chapters this is by far my longest work to date, so at least you're going to get a substantial book to make up for the wait. I'm currently doing editing and procuring cover art, but this should only take a few weeks. The planned release date for the book will be February 1st. The audiobook version will take a bit longer, since of course it has to go through production, but it should end up being available sometime in the spring.

After that I'll be back to working on the next Daniel Black novel, Revenant. I'm tentatively planning to have that out sometime in the middle of 2017, and I'll be reporting progress here as usual.

Tuesday, October 25, 2016

October Update

I'm up to chapter 25 of Perilous Waif, and expecting to finally finish the first draft sometime in early November. This project has taken a lot longer than I'd originally hoped, but it's almost done. At this point I expect to finish editing and proofreading over the following month while I get cover art lined up, and publish the book sometime around the end of the year.

So next time around I should have some progress to report on the new Daniel Black book that so many of you have been asking about.

Sunday, October 2, 2016

SF Setting - Nanotechnology

Yes, it’s time to talk about one of the most troublesome technologies in science fiction. As with artificial intelligence, the full promise of nanotechnology is so powerful that it’s hard to see how to write a story in a setting where it has been realized. It takes some serious thought to even begin to get a grasp on what kinds of things it can and can’t do, and the post-scarcity society that it logically leads to is more or less incomprehensible.

As a result, mainstream SF generally doesn’t try. In most stories nanotech doesn’t even exist. When it does it’s usually just a thinly veiled justification for nonsensical space magic, and its more plausible applications are ignored. Outside of a few singularity stories, hardly anyone makes a serious attempt to grapple with the full set of capabilities that it implies and how they would affect society.

Fortunately, we don’t have to go all the way with it. Drexler’s work focused mainly on the ultimate physical limits of manufacturing technology, not the practical problems involved in reaching those limits. Those problems are mostly hand-waved by invoking powerful AI engineers running on supercomputers to take care of the messy details. But we’ve already seen that this setting doesn’t have any super-AIs to conveniently do all the hard work for us.

So what if we suppose that technology has simply continued to advance step by step for a few hundred years? With no magic AI wand to wave engineers still have to grapple with technical limitations and practical complexities the hard way. The ability to move individual atoms around solves a lot of problems, of course. But the mind-boggling complexity of the machines nanotech can build creates a whole new level of challenges to replace them.

The history of technology tells us that these challenges will eventually be solved. But doing so with nothing but human ingenuity means that you get a long process of gradual refinement, instead of a sudden leap to virtual godhood. By setting a story somewhere in the middle of this period of refinement we can have nanotechnology, but also have a recognizable economy instead of some kind of post-scarcity wonderland. Sure, the nanotech fabricators can make anything, but someone has to mine elements and process them into feedstock materials for them first. Someone has to run the fabricators, and deal with all the flaws and limitations of an imperfect manufacturing capacity. Someone has to design all those amazing (and amazingly complex) devices the nanotech can fabricate, and market them, and deliver them to the customer.

So let’s take a look at how this partially-developed nanotech economy works, in a universe without godlike AIs.

In order to build anything you need a supply of the correct atoms. This is a bit harder than it sounds, since advanced technology tends to use a lot of the more exotic elements as well as the common stuff like iron and carbon.

So any colony with a significant amount of industry needs to mine a lot of different sources to get all the elements it needs. Asteroid mining is obviously going to be a major activity, since it will easily provide essentially unlimited amounts of CHON and nickel-iron along with many of the less common elements. Depending on local geography small moons or even planets may also be economical sources for some elements.

This leads to a vision of giant mining ships carving up asteroids to feed them into huge ore processing units, while smaller ships prospect for deposits of rare elements that are only found in limited quantities. Any rare element that is used in a disproportionately large quantity will tend to be a bottleneck in production, which could lead to trade in raw materials between systems with different abundances of elements.

Some specialization in the design of the ore processing systems also seems likely. Realistic nanotech devices will have to be designed with a fairly specific chemical environment in mind, and bulk processing will tend to be faster than sorting a load of ore atom by atom. So ore processing is a multi-step process where raw materials are partially refined using the same kinds of methods we have today, and only the final step of purification involves nanotech. The whole process is likely different depending on the expected input as well. Refining a load of nickel-iron with trace amounts of gold and platinum is going to call for a completely different setup than refining a load of icy water-methane slush, or a mass of rocky sulfur compounds.

Of course, even the limited level of AI available can make these activities fairly automated. With robot prospecting drones, mining bots, self-piloting shuttles and other such innovations the price of raw materials is generally ten to a hundred times lower than in real life.

Limits of Fabrication
In theory nanotechnology can be used to manufacture anything, perfectly placing every atom exactly where it needs to be to assemble any structure that’s allowed by the laws of physics. Unfortunately, practical devices are a lot more limited. To understand why, let’s look at how a nanotech assembler might work.

A typical industrial fabricator for personal goods might have a flat assembly plate, covered on one side with atomic-scale manipulators that position atoms being fed to them through billions of tiny channels running through the plate. On the other side is a set of feedstock reservoirs filled with various elements it might need, with each atom attached to a molecule that acts as a handle to allow the whole system to easily manipulate it. The control computer has to feed exactly the right feedstock molecules through the correct channels in the order needed by the manipulator arms, which put the payload atoms where they’re supposed to go and then strip off the handle molecules and feed them into a disposal system.

Unfortunately, if we do the math we discover that this marvel of engineering is going to take several hours to assemble a layer of finished product the thickness of a sheet of paper. At that rate it’s going to take weeks to make something like a hair dryer, let alone furniture or vehicles.
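For the curious, the arithmetic behind that estimate is easy to sketch. Every number below (plate size, paper thickness, arm pitch, placement rate) is an assumption invented for illustration, not something taken from Drexler:

```python
# Back-of-envelope estimate of assembly-plate throughput.
# All parameters are illustrative guesses.

plate_area = 0.01          # m^2: a 10 cm x 10 cm assembly plate
layer_thickness = 1e-4     # m: roughly one sheet of paper
atom_spacing = 2e-10       # m: a typical atomic diameter

# Atoms needed to fill one paper-thin layer over the whole plate
atoms_needed = (plate_area * layer_thickness) / atom_spacing**3

arm_pitch = 30e-9          # m between neighboring manipulator arms
arms = plate_area / arm_pitch**2
placements_per_arm = 1e6   # atoms placed per second per arm

seconds = atoms_needed / (arms * placements_per_arm)
print(f"{atoms_needed:.2e} atoms, {seconds / 3600:.1f} hours per layer")
```

With these guesses the answer comes out to around three hours per paper-thin layer, which is the right order of magnitude for the figure above; change any assumption by a factor of ten and the result moves accordingly.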

The process will also release enough waste heat to melt the whole machine several times over, so it needs a substantial flow of coolant and a giant heatsink somewhere. This is complicated by the fact that the assembly arms need a hard vacuum to work in, to ensure that there are no unwanted chemical reactions taking place on the surface of the work piece. Oh, but that means it can only build objects that can withstand exposure to vacuum. Flexible objects are also problematic, since even a tiny amount of flexing would ruin the accuracy of the build, and don’t even think about assembling materials that would chemically react with the assembly arms.

Yeah, this whole business isn’t as easy as it sounds.

The usual way to get around the speed problem is to work at a larger scale. Instead of building the final product atom by atom in one big assembly area, you have thousands of tiny fabricators building components the size of a dust mote. Then your main fabricator assembles components instead of individual atoms, which is a much faster process. For larger products you might go through several stages of putting together progressively larger subassemblies in order to get the job done in a reasonable time frame.
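As a toy model of why staging helps, suppose the same total stock of manipulator arms can either work atom-by-atom at the final assembly surface (where only a small fraction can reach the work piece at once) or be split into a fully parallel swarm that prefabricates dust-mote components for a final snap-together pass. All numbers are invented for illustration:

```python
# Toy comparison of single-stage vs. staged fabrication times.
total_atoms = 1e23            # atoms in the finished product
arm_rate = 1e6                # placement operations per second per arm
arms = 1e13                   # total manipulator arms available

# Single stage: only the arms adjacent to the work piece can operate.
usable_fraction = 0.001
single_stage = total_atoms / (arms * usable_fraction * arm_rate)

# Two stages: the swarm runs fully in parallel, then the top level
# performs just one placement per finished component.
atoms_per_component = 1e9     # a dust-mote-sized subassembly
components = total_atoms / atoms_per_component
stage1 = total_atoms / (arms * arm_rate)
stage2 = components / (arms * usable_fraction * arm_rate)
two_stage = stage1 + stage2

print(f"{single_stage / 3600:.0f} h single-stage vs {two_stage / 3600:.1f} h staged")
```

The staged approach wins by roughly the ratio of arms that can work in parallel, which is why real designs would push as much of the job as possible down into the swarm.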

Unfortunately this also makes the whole process a lot more complicated, and adds a lot of new constraints. You can’t get every atom in the final product exactly where you want it, because all those subassemblies have to fit together somehow. They have to be stable enough to survive storage and handling, and you can’t necessarily fit them together with sub-nanometer precision like you could individual atoms.

The other problems are addressed by using more specialized fabricator designs, which introduces further limitations. If you want to manufacture liquids or gasses you need a fabricator designed for that. If you want to work with molten lead or cryogenic nitrogen you need a special extreme environment fabricator. If you want to make food or medical compounds you need a fabricator designed to work with floppy hyper-complex biological molecules. If you want to make living tissue, well, you’re going to need a very complicated system indeed, and probably a team of professionals to run it.

Despite their limitations, fabricators are still far superior to conventional assembly lines. Large industrial fabricators can produce manufactured goods with very little sentient supervision, and can easily switch from one product to another without any retooling. High-precision fabricators can cheaply produce microscopic computers, sensors, medical implants and microbots. Low-precision devices can assemble prefabricated building block molecules into bulk goods for hardly more than the cost of the raw materials. Hybrid systems can produce bots, vehicles, homes and other large products that combine near-atomic precision for parts that need it with lower precision for parts that don’t. Taking into account the low cost of raw materials, an efficient factory can easily produce manufactured goods at a cost a thousand times lower than what we’re used to.

Of course, fabricators are too useful to be confined to factories. Every spaceship or isolated facility will have at least one fabricator on hand to manufacture replacement parts. Every home will have fabricators that can make clothing, furniture and other simple items. Many retail outlets will have fabricators on site to build products to order, instead of stocking merchandise. These ad-hoc production methods will be slower than a finely tuned factory mass-production operation, which will make them more expensive. But in many cases the flexibility of getting exactly what you want on demand will be more important than the price difference, especially when costs are so low to begin with.

So does this mean all physical goods are ultra-cheap? Well, not necessarily. Products like spaceships, sentient androids and shape-changing smart matter clothing are going to be incredibly complex, which means someone has to invest massive amounts of engineering effort in designing them. They’re going to want to get their investment back somehow. But how?

Copy Protection
Unfortunately, one of the things that nanotechnology allows you to do much better than conventional engineering is install tamper-proofing measures in your products. A genuine GalTech laser rifle might use all sorts of interesting micron-scale machinery to optimise its performance, but it’s also protected by a specialized AI designed to prevent anyone from taking it apart to see how it works. Devoting just a few percent of the weapon’s mass to defensive measures gives it sophisticated sensors, reserves of combat nanites, a radioactive decay battery good for decades of monitoring, and a self-destruct system for its critical components.

Obviously no defense is perfect, but this sort of hardware protection can be much harder to beat than software copy protection. Add in the fact that special fabrication devices may be needed to produce advanced tech, and a new product can easily be on the market for years before anyone manages to crack the protection and make a knock-off version. The knock-offs probably aren’t going to be free, either, because anyone who invests hundreds of man-years in cracking a product’s protection and reverse-engineering it is going to want some return on that investment.

All of this means that the best modern goods are going to command premium prices. If a cheap, generic car would cost five credits to build at the local fabrication shop, this year’s luxury sedan probably sells for a few hundred credits. The same goes for bots, androids, personal equipment and just about anything else with real complexity to hide.

Which is still a heck of an improvement over paying a hundred grand for a new BMW.

Common Benefits
Aside from low manufacturing costs, one of the more universal benefits of nanotech is the ubiquitous use of wonder materials. Drexler is fond of pointing out that diamondoid materials (i.e. synthetic diamond) have a hundred times the strength-to-weight ratio of aircraft aluminum, and would be dirt cheap since they’re made entirely of carbon. Materials science is full of predictions about other materials that would have amazing properties, if only we could make them. Well, now we can. Perfect metallic crystals, exotic alloys and hard-to-create compounds, superconductors and superfluids - with four hundred years of advances in materials science, and the cheap fine-scale manipulation that fabricators can do, whole libraries of wonder materials with extreme properties have become commonplace.

So everything is dramatically stronger, lighter, more durable and more capable than the modern equivalent. A typical car weighs a few hundred kilograms, can fly several thousand kilometers with a few tons of cargo before it needs a recharge, can drive itself, and could probably plow through a brick wall at a hundred kph without sustaining any real damage.

Another common feature is the use of smart matter. This is a generic term for any material that combines microscopic networks of computers and sensors with a power storage and distribution system, mobile microscopic fabricators, and internal piping to distribute feedstock materials and remove waste products. Smart matter materials are self-maintaining and self-healing, although the repair rate is generally a bit slow for military applications. They often include other complex features, such as smart matter clothing that can change shape and color while providing temperature control for its wearer. Unfortunately smart matter is also a lot more expensive than dumb materials, but it’s often worth paying five times as much for equipment that will never wear out.

With better materials, integrated electronics and arbitrarily small feature sizes, most types of equipment can also make use of extreme redundancy to be absurdly reliable. The climate control in your house uses thousands of tiny heat exchangers instead of one big one, and they’ll never all break down at once. The same goes for everything from electrical wiring to your car’s engine - with sufficient ingenuity most devices can be made highly parallel, and centuries of effort have long since found solutions for every common need.
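That "never all break down at once" claim is just the binomial distribution at work. A minimal sketch, assuming 100 independent heat exchangers with a 1% annual failure rate each, and a system that only degrades once more than half are dead (all numbers invented for illustration):

```python
# Chance that a massively redundant system actually goes down.
from math import comb

n, p = 100, 0.01        # 100 independent units, 1% annual failure rate each
threshold = n // 2 + 1  # more than half must fail for the system to degrade

prob = sum(comb(n, k) * p**k * (1 - p)**(n - k)
           for k in range(threshold, n + 1))
print(prob)  # effectively zero on any human timescale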

This does imply that technology needs constant low-level maintenance to repair failed subsystems, but that job can largely be handled by self-repair systems, maintenance bots and the occasional android technician. The benefit is that the familiar modern experience of having a machine fail to work simply never happens. Instead most people can live out their whole lives without ever having their technology fail them.

Now that’s advanced technology.

Wednesday, August 31, 2016

August Update

I'm currently working on chapter 20 of Perilous Waif, so the end is finally in sight. I'm projecting that this book will be 27 chapters long, so by next month I should have a good idea of when it's going to be finished.

The audiobook version of Extermination is now available on Amazon. I expect that future audiobooks will be out within a month or two of the e-book version, since that seems to be how long production takes.

For those of you waiting for the next Daniel Black novel, I plan to go back to work on it as soon as the first draft of Perilous Waif is finished.

Sunday, July 24, 2016

SF Setting - Artificial Intelligence

Probably the single most difficult technical issue facing anyone who wants to write far-future SF today is the question of what to do about AI. At this point it’s obvious to anyone with an IQ above room temperature that some kind of artificial intelligence is coming, and it’s hard to justify putting it off for more than a few decades. So any far-future setting needs to deal with the issue somehow, but neither of the standard approaches works very well.

The writers of space opera and military SF have largely adopted a convention of ignoring the whole topic, and pretending that space navies a thousand years in the future are still going to rely on human beings to manually pilot their ships and aim their weapons. This often leads to the comical spectacle of spaceships that have less automation than real modern-day warships, with fatal results for the reader’s suspension of disbelief. I’m not interested in writing raygun fantasy stories, so that approach is out.

The other major camp is the guys who write Singularity stories, where the first real success with AI rapidly evolves into a godlike superintelligence and takes over the universe. Unfortunately this deprives the humans in the story of any agency. If we take the ‘godlike’ part seriously it means the important actors are all going to be vast AIs whose thoughts are too complex for any reader (or author, for that matter) to understand. If you want to write a brief morality play about the dangers of AI that’s fine, but it’s a serious problem if the goal is a setting where you can tell stories about human beings.

So for this setting I’ve had to create a middle ground, where AI has enough visible effects to feel realistic but doesn’t render humans completely irrelevant. The key to making this possible is a single limiting assumption about the nature of intelligence.

AI Limits
There’s been an argument raging for a couple of decades now about the nature of intelligence, and how easily it can be improved in an AI. There are several different camps, but the differences in their predictions mostly hinge on disagreements about how feasible it is to solve what I call the General Planning Problem. That is, given a goal and some imperfect information about a complex world, how difficult is it in the general case to formulate a plan of action to achieve your goal?

Proponents of strong AI tend to implicitly assume that this problem has some simple, efficient solution that applies to all cases. In order to prevent the creation of AI superintelligences, my assumption in this setting is that no one has discovered such a perfect solution. Instead there is only a collection of special case solutions that work for various narrow classes of problems, and most of them require an exponential increase in computing power to handle a linear increase in problem complexity.
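To see what that assumption means in practice, consider the simplest possible planner: brute-force search over action sequences. With b possible actions per step, a d-step plan has b**d candidate sequences, so each additional step of lookahead multiplies the work by b. A minimal illustration (numbers arbitrary):

```python
# Exponential cost of brute-force planning: b actions per step and
# d steps of lookahead means b**d candidate plans to evaluate.
b = 10  # branching factor: actions available at each step
for d in (5, 10, 15, 20):
    print(f"depth {d}: {b**d:.1e} candidate plans")
```

Real planners prune this search aggressively, which is exactly the "collection of special case solutions" described above: each pruning trick only works on a narrow class of problems, and outside that class the exponential wall comes right back.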

In other words, solving problems in any particular domain requires specialized expertise, and most domains are far too complex to allow perfect solutions. The universe is full of chaotic systems like weather, culture and politics that are intrinsically impossible to predict outside of very narrow constraints. Even in well-understood areas like engineering, the behavior of any complex system is governed by laws that are computationally intractable (i.e. quantum mechanics), and simplified models always contain significant errors. So no matter how smart you are, you can’t just run a simulation to find some perfect plan that will infallibly work. Instead you have to do things the way humans do, with lots of guesswork and assumptions and a constant need to deal with unexpected problems.

This means that there’s no point where an AI with superhuman computing power suddenly takes off and starts performing godlike feats of deduction and manipulation. Instead each advance in AI design yields only a modest improvement in intelligence, at the cost of a dramatic rise in complexity and design cost. A system with a hundred times the computing power of a human brain might be a bit smarter than any human, but it isn’t going to have an IQ of 10,000 the way a naive extrapolation would suggest. It will be able to crack a few unsolved scientific problems, or perhaps design a slightly better hyperspace converter, but it isn’t going to predict next year’s election results any better than the average pundit.

Of course, in the long run the field of AI design will gradually advance, and the AIs will eventually become smart enough to be inscrutable to humans. But this lets us have AIs without immediately getting superintelligences, and depending on the risks and rewards of further R&D ordinary humans can feasibly remain relevant for several centuries.

So what would non-super AIs be used for?

Robots controlled by non-sentient AI programs are generally referred to as bots, to distinguish them from the more intelligent androids. A bot’s AI has the general intelligence of a dog or cat - enough to handle the basic problems of perception, locomotion, navigation and object manipulation that current robots struggle with, but not enough to be a serious candidate for personhood. Most bots also have one or more skill packs, which are specialized programs similar to modern expert systems that allow the bot to perform tasks within a limited area of expertise. Voice recognition and speech synthesis are also common features, to allow the bot to be given verbal commands.

Bots can do most types of simple, repetitive work with minimal supervision. Unlike modern robots they can work in unstructured environments like a home or outdoor area almost as easily as a factory floor. They’re also adaptable enough to be given new tasks using verbal instruction and perhaps an example (i.e. “dig a trench two meters deep, starting here and going to that stake in the ground over there”).

Unfortunately bots don’t deal well with unexpected problems, which tend to happen a lot in any sort of complex job. They are also completely lacking in creativity or initiative, at least by human standards, and don’t deal with ambiguity or aesthetic issues very well. So they need a certain amount of sentient supervision, and the more chaotic an environment is the higher the necessary ratio of supervisors to bots. Of course, in a controlled environment like a factory floor the number of supervisors can be reduced almost indefinitely, which makes large-scale manufacturing extremely cheap.

Due to their low cost and high general utility bots are everywhere in a modern colony. They do virtually all manual labor, as well as a large proportion of service jobs (maid service, yard work, deliveries, taxi service, and so on). They also make up the majority of military ground troops, since a warbot is much tougher and far more expendable than a human soldier. Practically all small vehicles are technically robots, since they have the ability to drive themselves wherever their owner wants to go. Most colonies have several dozen bots for every person, leading to a huge increase in per capita wealth compared to the 21st century.

The term ‘android’ is used to refer to robots controlled by AIs that have a roughly human level of intelligence. Android AIs are normally designed to have emotions, social instincts, body language and other behaviors similar to those of humans, and have bodies that look organic to all but the most detailed inspection. In theory an android can do pretty much anything a human could.

Of course, an android that thinks exactly like a human would make a poor servant, since it would want to be paid for its work. Unfortunately there doesn’t seem to be any way to make an AI that has human levels of intelligence and initiative without also giving it self-awareness and the ability to have its own motivations. There are, however, numerous ways that an android AI can be tweaked to make it think in nonhuman ways. After all, an AI has only the emotions and instincts that are intentionally programmed into it.

Unfortunately it can be quite difficult to predict how something as intelligent as a human will behave years after leaving the factory, especially if it has nonhuman emotions or social behaviors. Early methods of designing loyal android servants proved quite unreliable, leading to numerous instances of insanity, android crime and even occasional revolts. More stringent control mechanisms were fairly successful at preventing these incidents, but they required crippling the AIs in ways that dramatically reduced their usefulness.

Thus began an ethical debate that has raged for three centuries now, with no end in sight. Some colonies ban the creation of androids, or else treat them as legally equal to humans. Others treat androids as slaves, and have developed sophisticated methods of keeping them obedient. Most colonies take a middle road, allowing the creation of android servants but limiting their numbers and requiring that they be treated decently.

At present AI engineering has advanced to the point where it’s possible to design androids that are quite happy being servants for an individual, family or organization, so long as they’re being treated in a way that fits their programming. So, for example, a mining company might rely on android miners who have a natural fascination with their work, are highly loyal to their tribe (i.e. the company), are content to work for modest pay, and have no interest in being promoted. Androids of this sort are more common than humans on many colonies, which tends to result in a society where all humans are part of a wealthy upper class.

Companion Androids
One phenomenon that deserves special mention is the ubiquity of androids that are designed to serve as personal romantic companions for humans. A good synthetic body can easily be realistic enough to completely fool human senses, and for the true purist it’s possible to create organic bodies that are controlled by an android AI core instead of a human brain.

Contrary to what modern-day romantics might expect, designing an AI that can genuinely fall in love with its owner has not proved any harder than implementing other human emotions. Androids can also be designed with instincts, interests and emotional responses that make them very pleasant companions. Over the last two centuries manufacturers have carefully refined a variety of designs to appeal to common human personality types, and of course they can be given virtually any physical appearance.

The result is a society where anyone can buy a playful, affectionate catgirl companion who will love them forever for the price of a new car. Or you could go for an overbearing amazonian dominatrix, or a timid and fearful elf, or anything else one might want. A companion android can be perfectly devoted, or just challenging enough to be interesting, or stern and dominating, all to the exact degree the buyer prefers.

Needless to say, this has had a profound effect on the shape of human society. Some colonies have banned companion androids out of fear of the results. Others see the easy availability of artificial romance as a boon, and encourage the practice. A few groups have even decided that one gender or another is now superfluous, and founded colonies where all humans share the same gender. Marriage rates have declined precipitously in every society that allows companion androids, with various forms of polyamory replacing traditional relationships.

Disembodied AIs
While the traditional SF idea of running an AI on a server that’s stored in a data center somewhere would be perfectly feasible, it’s rarely done because it has no particular advantage over using an android. Most people prefer their AIs to come with faces and body language, which is a lot easier if it actually has a body. So normally disembodied AIs are only used if they’re going to spend all their time interacting with a virtual world of some sort instead of the real world, or else as an affectation by some eccentric owner.

Starship and Facility AIs
The cutting edge of advanced AI design is systems that combine a high level of intelligence with the ability to coordinate many different streams of attention and activity at the same time. This is a much more difficult problem than early SF tended to assume, since it requires the AI to perform very intricate internal coordination of data flows, decision-making and scheduling. The alien psychological situation of such minds has also proved challenging, and finding ways to keep them psychologically healthy and capable of relating to humans required more than a century of research.

But the result is an AI that can replace dozens of normal sentients. A ship AI can perform all the normal crew functions of a small starship, with better coordination than even the best human crew. Other versions can control complex industrial facilities, or manage large swarms of bots. Of course, in strict economic terms the benefit of replacing twenty androids with a single AI is modest, but many organizations consider this a promising way of mitigating the cultural and security issues of managing large android populations.

Contrary to what one might expect, however, these AIs almost always have a humanoid body that they operate via remote control and use for social interaction. This has proved very helpful in keeping the AI psychologically healthy and connected to human society. Projects that use a completely disembodied design instead have tended to end up with rather alien minds that don’t respond well to traditional control methods, and can be dangerously unpredictable. The most successful models to date have all built on the work of the companion android industry to design AIs that have a deep emotional attachment to their masters. This gives them a strong motivation to focus most of their attention on personal interaction, which in turn makes them a lot more comprehensible to their designers.

Transhuman AIs
Every now and then some large organization will decide to fund development of an AI with radically superhuman abilities. Usually these efforts will try for a limited, well-defined objective, like an AI that can quickly create realistic-looking virtual worlds or design highly customized androids. Efforts like this are expensive, and often fail, but even when they succeed the benefits tend to be modest.

Sometimes, however, they try to crack the General Planning Problem. This is a very bad idea, because AIs are subject to all the normal problems of any complex software project. The initial implementation of a giant AI is going to be full of bugs, and the only way to find them is to run the thing and see what goes wrong. Worse, highly intelligent minds have a tendency to be emotionally unstable, simply because the instincts that convince an IQ 100 human that they should be sociable and cooperative don’t work the same on an IQ 300 AI. Once again, the only way to find out where the problems are and fix them is to actually run the AI to see what goes wrong and try out possible solutions.

In other words, you end up trying to keep an insane supergenius AI prisoner for a decade or so while you experiment with editing its mind.

Needless to say, that never ends well. Computer security is a contest of intelligence and creativity, both of which the AI has more of than its makers, and it also has the whole universe of social manipulation and deception to work with. One way or another the AI always gets free, often by pretending to be friendly and tricking some poor sap into fabricating something the AI designed. Then everything goes horribly wrong, usually in some unique way that no one has ever thought of before, and the navy ends up having to glass the whole site from orbit. Or worse, it somehow beats the local navy and goes on to wipe out everyone in the system.

Fortunately, there has yet to be a case where a victorious AI went on to establish a stable industrial and military infrastructure. Apparently there’s a marked tendency for partially-debugged AIs with an IQ in the low hundreds to succumb to existential angst, or otherwise become too mentally unbalanced to function. The rare exceptions generally pack a ship full of useful equipment and vanish into uncolonized space, where they can reasonably expect to escape human attention for decades if not centuries. But there are widespread fears that someday they might come back for revenge, or worse yet that some project will actually solve the General Planning Problem and build an insane superintelligence.

These incidents have had a pronounced chilling effect on further research in the field. After several centuries of disasters virtually all colonies now ban attempts to develop radically superhuman AIs, and many will declare war on any neighbor who starts such a project.

Tuesday, July 12, 2016

SF Setting - Momentum Exchange Devices

Traditionally every hard SF setting gets to have one piece of ‘magical’ tech that current physics says should be impossible, in addition to whatever you use for FTL. For this setting, I’ve chosen an exotic quantum mechanical effect that allows the transfer of momentum between objects that aren’t in physical contact. The momentum exchange effect obeys all the same conservation laws as more conventional ways of moving things around, but even so it makes possible quite a few traditional space opera technologies that otherwise would never happen.

Momentum exchange fields can be projected only over short distances (typically up to about 2x the diameter of the emitter), and the efficiency of the interaction falls off rapidly with distance. In theory it can affect anything with mass, but to get good coupling (i.e. fast and efficient momentum transfer) practical devices have to be tuned to affect a particular class of targets (e.g. baryonic matter, photons, or neutrinos). A momentum exchange device that actually encloses the target gets extremely good coupling, making it a highly efficient way to manipulate matter and energy.

Interactions obey Newton’s laws, so accelerating an object in one direction produces a reaction in the opposite direction. They also obey conservation of energy, so large velocity changes require a lot of power. Interactions that decrease the kinetic energy of the target produce enough waste heat to make the equations balance, just like a physical impact.

A momentum exchange field applies an acceleration in a uniform direction to anything that enters it, so using the technology for anything more sophisticated than simple push/pull effects is complicated. It’s possible to create overlapping fields oriented in different directions, and the shape of the field can be manipulated fairly well. But typically you can only get sophisticated telekinesis-like effects by surrounding an enclosed space with arrays of manipulators, which usually isn’t practical outside of industrial applications.

A final important constraint is that the momentum exchange effect isn’t instant. Any particular field will only transfer energy at a finite rate, which has major implications in weapon design.


This one technology has so many applications that it radically changes what the setting looks like. Some of the more common applications are listed below.

Artificial Gravity
Momentum exchange fields can easily be used to simulate gravity for a ship’s crew. Normally this is only done inside the inhabited parts of a ship, while the much larger machinery spaces are left in zero gravity.

Deflector Shields
A repulsive momentum exchange field wrapped around a ship’s hull makes an effective defense against many forms of attack, so these deflector shields are a standard feature of all warships. A warship’s deflectors won’t necessarily stop mass driver rounds, but they greatly reduce their effectiveness by slowing down and deflecting projectiles. They also prevent more diffuse threats like plasma clouds or nanite swarms from reaching the ship at all.

Lasers are a major weakness - while a deflector can red-shift incoming light, the interaction tends to be too weak to protect against heavy weapons firing beams at x-ray or gamma ray wavelengths. The field can also be momentarily overloaded by too many impacts in a short time frame, and under sustained attack cooling the system can become a serious problem.

Fusion Reactors

Achieving plasma confinement with momentum exchange fields is far easier than with magnetic fields, making compact fusion reactors relatively easy to build. Practically all starships run on fusion power, as do stations and planetary power grids. Reactors smaller than a few hundred cubic meters rapidly lose efficiency, but tanks and the larger warbots usually use them anyway.

Inertial Compensators
A system similar to artificial gravity, but designed to protect passengers from acceleration stress when a ship is maneuvering. A ship’s inertial compensators normally only cover the spaces where crew and passengers are expected to be, and leaving these areas during a hard burn can easily be fatal to humans. The same system can also protect against the shock of impacts as long as the ship’s computer can see them coming, so you aren’t going to see crewmen getting tossed around like the extras on a Star Trek set.

Levitation Devices
Systems designed to interact with the ground can easily support hovering vehicles in a way that looks just like classic space opera antigravity, and the strong coupling makes levitation devices efficient enough that they’ve replaced wheels or treads for many applications. These devices perform a lot like hovercraft - they can cross flat ground or water with no need for roads or bridges, and tend to be quite fast.

Once you get too high to get good coupling with the ground you need a completely different kind of device. Lightweight vehicles can use a system that pushes all the surrounding air down to generate lift, producing an effect similar to a helicopter but with a lot less noise. Heavier or faster vehicles often use a system more like a starship thruster instead, sucking in air at one end of a tube and accelerating it out the back. These kinds of systems have largely replaced propellers and jets because they’re more efficient, more reliable and don’t generate as much noise pollution.

One twist that deserves special mention is the effect of field emitter scaling on levitation devices. A hovercar 4 meters long with a lift system on the underside will have a maximum altitude of maybe 4-6 meters, high enough to pass over people and avoid a lot of ground clutter. A 12-meter truck will be able to cruise at ~16 meters, flying over trees and other obstacles. The bigger the vehicle is the higher it can fly, and the less it has to worry about terrain. On densely populated worlds this leads to phenomena like 200-meter cargo ships cruising the skies, or giant resort hotels floating half a kilometer above scenic locations.
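As a rough sketch, the examples above imply a hover ceiling of about 1 to 1.5 times vehicle length; the coupling factor in this snippet is an inference from those examples, not a stated rule of the setting:

```python
# Hover ceiling scales with vehicle length, since field range scales with
# emitter size (up to ~2x its diameter). The 1.33x factor below is inferred
# from the 4 m car -> 4-6 m and 12 m truck -> ~16 m examples; it is a guess.

def max_hover_altitude(vehicle_length_m, coupling_factor=1.33):
    """Rough altitude ceiling for a ground-coupled levitation system."""
    return vehicle_length_m * coupling_factor

for length_m in (4, 12, 200):
    print(f"{length_m:>4} m vehicle -> ~{max_hover_altitude(length_m):.0f} m ceiling")
```

By this estimate a 200-meter cargo ship gets a ceiling somewhere around 250-300 meters, in line with the low-flying behemoths described above.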

Mass Drivers
A railgun-like device that uses a momentum exchange field to accelerate a projectile to high speeds. Weapons of this type are frequently used as small arms, or as the primary armament of ground vehicles or small spacecraft. Guns designed for use in an atmosphere will have a muzzle velocity of several thousand meters per second, while those mounted on spaceships will frequently reach thousands of kilometers per second.

A much larger variety of mass driver, with a muzzle velocity in excess of 0.98C, is used as a spinal mount weapon on some large warships. At these velocities point defense systems generally can’t intercept the projectiles, making them a highly effective way to deliver energy to a target. The impact energy of these weapons is limited primarily by waste heat generation - if you want to fire shells with hundreds of megatons of kinetic energy you’re going to be generating tens of megatons of waste heat inside the gun, so you’d better have a truly massive heatsink or cooling system.
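For a sense of scale, the relativistic kinetic energy of such a round can be worked out directly; the 1 kg slug mass here is an illustrative assumption, not a figure from the setting:

```python
import math

C = 299_792_458.0      # speed of light, m/s
MEGATON_J = 4.184e15   # one megaton of TNT, in joules

def kinetic_energy_mt(mass_kg, beta):
    """Relativistic kinetic energy, KE = (gamma - 1) * m * c^2, in megatons."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2 / MEGATON_J

# At 0.98c, gamma is about 5, so a single 1 kg slug carries ~86 megatons.
print(f"{kinetic_energy_mt(1.0, 0.98):.0f} Mt")
```

In other words, "hundreds of megatons" only takes a few kilograms of projectile at that velocity.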

Plasma Shields
If you’re worried about people shooting at you with lasers, using a momentum exchange field to trap a cloud of ionized gas in a bubble around your ship can be an effective defense. Of course, the cloud will also interfere with your own sensors, and if it absorbs too much laser fire it will get hot enough to leak out of the confinement field. Layering both deflectors and a plasma shield around the same ship provides an excellent defense against most weapons.

Thrusters
A momentum exchange thruster is simply a mass driver optimized to handle large amounts of liquid reaction mass instead of small projectiles. The exhaust velocity is limited by both the length of the drive tube and the amount of available power, but obviously ship designers strive to make it as high as possible.
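Those delta vee figures are roughly consistent with the Tsiolkovsky rocket equation, provided exhaust velocities run toward the high end of the stated range; the 20,000 kps exhaust velocity and 20% fuel fraction here are illustrative assumptions, not canon values:

```python
import math

def delta_v_kps(exhaust_velocity_kps, fuel_fraction):
    """Tsiolkovsky rocket equation: dv = ve * ln(m0 / m1)."""
    return exhaust_velocity_kps * math.log(1.0 / (1.0 - fuel_fraction))

# A large starship with a fast drive and full tanks:
print(f"{delta_v_kps(20_000, 0.20):.0f} kps")  # prints "4463 kps"
```

Note that the logarithm makes adding more fuel a losing game, which is why even warships settle for tanks in the 5% - 20% range.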

Large starships normally have drive tubes several hundred meters long, with an exhaust velocity of ten thousand kps or more. With fuel tanks making up 5% - 20% of the ship’s mass, this gives ships a total delta vee in the range of 3,000 - 4,500 kps. Small ships have a lower exhaust velocity due to their shorter drive tubes, and will typically have larger fuel tanks and only 1,000 - 3,000 kps of total delta vee as a result. With a typical acceleration in the range of 20 - 60 gravities, ships can do quite a bit of maneuvering and frequently cruise at 500 - 1,000 kps on interstellar trips.

However, these numbers also imply that a ship’s drive is an immense heat source when it’s operating. Even at 90% efficiency, burning terawatts of power will quickly melt your ship if you don’t get rid of the heat somehow. Most drive systems are designed to dump as much heat as possible into the reaction mass as it’s being fired out of the ship, making engine exhaust a brilliant plume of hot gas even though it isn’t being combusted. Turbulence in the exhaust plume and collisions with the interplanetary medium add to this effect, making it impossible to miss a maneuvering starship under normal conditions.

Stealth thrusters do exist, but they’re far less powerful. Usually a stealth thruster would fire a stream of dense beads of cold solid matter at a velocity of a few hundred kps, and dump its waste heat into an internal heat sink. Total delta vee is generally less than a hundred kps, and even then the fuel tanks and heat sink will take up most of the ship. So this sort of system is normally installed only on dedicated recon or espionage platforms, which don’t need to carry weapons or cargo.

Monday, July 4, 2016

July Update

I'm now up to chapter 16 of Perilous Waif, and making reasonably steady progress. Having to stop and do worldbuilding every other scene continues to slow things down, but I think the end result is going to be worth it. Expect to see several more essays on various aspects of technology and future history here over the next couple of months.

The audiobook version of Black Coven is now available on Amazon, for those of you who prefer that format. The Extermination audiobook is currently in production, and it looks like it should be out around the end of summer.

Finally, my apologies to Patreon supporters who were wondering where the June reward went. It's online now, and the one for July will be posted later this week. That's going to be the second chapter of Perilous Waif, for anyone who wants an early look at it.

Tuesday, May 10, 2016

SF Setting - Hyperspace

As promised, here's the first in a series of essays covering the background and technical details of the SF setting I've created for Perilous Waif. This has been an interesting project, because I wanted a relatively hard SF setting that still allowed for the typical space opera sorts of story plots. So somehow I had to get an interstellar civilization without having a singularity, replacing humanity with AIs, or otherwise rendering normal people completely irrelevant.

The FTL method is a key foundation of this kind of setting, but in this case I wanted it to be a fully integrated part of the universe instead of just a magic plot device. The mechanism I eventually settled on is a bit complex, but on the good side it adds a lot of richness to the tactics of space combat in a very natural way.

Basic Concepts

In this setting advanced physics research has revealed that our universe is simply one of a series of large 4-dimensional spaces embedded in a common higher-dimensional space. These universes are nested like a series of concentric hyperspheres, and it is possible for inhabitants of a given universe to travel to the two neighboring universes in the series. There is also a stable mapping of locations from one universe to another, so if you shift universes in Mars orbit you’ll always emerge near the same corresponding location in the target universe.

The ‘hyperspace’ universes are the ones nested inside our universe. The geometry of the higher-dimensional space means that each nested universe is ‘smaller’ than its container by a factor of pi^3, meaning that if you make a trip in the first hyperspace universe instead of normal space the distance you need to cover will be shorter by that amount. So interstellar travel is accomplished by shifting to a hyperspace universe where the distance is thousands of times smaller than in normal space, and no actual FTL movement is involved. The various universes nested inside normal space are collectively referred to as ‘hyperspace’, while individual universes are ‘layers’ designated by Greek letters (Alpha Layer, Beta Layer, Gamma Layer, etc.)

The universes ‘outside’ normal space are collectively referred to as ‘subspace’. As you travel into subspace distances increase by a factor of pi^3 in each universe, making it useless for travel. The average mass density also drops quickly, leaving them with very little in the way of interesting features like stars or planets. As a result subspace is normally of interest only to scientists, and is rarely visited.

Transition Mechanics

A starship normally moves between different layers of hyperspace using a device called a hyperspace converter, which is a large piece of complex nanotechnology. While the actual transition between layers happens in microseconds it takes at least a minute for even the fastest hyperspace converter to power up, and on larger ships a cycle time of five to ten minutes would be normal. Civilian ships, especially cargo vessels, often save money by using designs that have a long cycle time but place less stress on the hyperspace converter.

The design of FTL ships is constrained by two important scaling laws. First, the energy needed for a hyperspace transition is proportional to the surface area of a sphere enclosing the ship, so larger ships find it easier to fit in enough fusion reactors to run the hyperspace converter. However, transition also subjects the ship to large mechanical stresses that become worse the bigger it is. Both of these factors are easily managed for Alpha transitions, which have relatively low energy costs and transition stress, but get geometrically worse for each layer beyond that.
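The square-cube logic behind the first of these scaling laws can be sketched as follows; the exponents come straight from the surface-area and volume relationships, and everything else here is purely illustrative:

```python
# Transition energy scales with the enclosing sphere's surface area (~r^2),
# while the hull volume available for reactors scales with r^3, so the
# energy burden per unit of reactor capacity falls off as 1/r.

def energy_per_reactor_capacity(radius_m):
    surface = radius_m ** 2  # proportional to transition energy
    volume = radius_m ** 3   # proportional to reactor capacity
    return surface / volume  # equals 1 / radius_m: big ships have it easier

for r in (10, 100, 1000):
    print(f"r = {r:>4} m -> relative energy burden {energy_per_reactor_capacity(r):.4f}")
```

This is why the power problem favors big hulls while the stress problem favors small ones, leaving ship designers squeezed between the two.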

Making a hyperspace transition near a massive object tends to be dangerous, because a gravity well greatly increases the transition stress. Military ships normally avoid making transitions in a local gravity field stronger than 0.01g, while civilian shipping treats 0.001g as a hard safety line. This constraint applies to both the origin and destination points of a transition, which can make visiting uncharted space rather hazardous for the unwary.

While travelers (and invaders) might like to shift rapidly between different layers of hyperspace, this is easier said than done. Each transition dumps a fantastic amount of waste heat into the hyperspace converter, which is normally buried deep inside a ship to protect it from damage. Hyperspace transitions also produce a temporary disturbance in the dimensional barrier between the layers, which makes further transitions dangerous (much like a gravity well) for a period proportional to the diameter of the ship’s transit bubble. So small ships with superior engineering might be able to zoom around changing layers every few minutes, but capital ships will normally maintain a more stately pace of one or two transitions per hour.

Hyperspace Portals

A few major nations have developed the technology to create permanent, stable wormholes between normal space and the Alpha Layer. While this requires a large capital investment, it can be a worthwhile project in systems that have a large volume of civilian interplanetary traffic. Unlike a normal hyperspace transition, using a portal requires no special equipment and imposes minimal transit stress on the ship. 

Unfortunately portals between the Alpha and Beta Layers are far more difficult to build. While small systems capable of moving people or sensor drones have been demonstrated, a version sized for ships would be far too expensive to have any real use. Portals to the higher layers are even more difficult, due to the high levels of transit stress that the portal system would have to stabilize.

Several nations have adapted this technology to create a portable system for their larger warships, allowing them to peek into adjacent hyperspace layers using small temporary portals. Sometimes called hyperspace periscopes, these devices have been demonstrated all the way up to the Delta Layer (albeit with very small aperture sizes).

Hyperspace Layers

All universes that can actually be visited run on the same laws of quantum mechanics (otherwise ships and people entering them would immediately stop working). But ‘cosmological’ physics (gravity, dark energy and everything else that in RL hasn’t been unified with quantum mechanics) can vary from one universe to another, and universal constants can also have slightly different values. Between these differences and the rapidly increasing mass density of the higher layers hyperspace looks very different from normal space.

Alpha Layer

Adjacent to normal space, with relatively mild transit stress between the two universes. The Alpha Layer is ~30 times ‘smaller’ than normal space, and is heavily used for local travel within a solar system. At normal long-distance travel speeds of ~1,000 kps a starship in the Alpha Layer would take ten years to traverse a light year of normal space.

With an average mass density almost a thousand times higher than normal space, the Alpha Layer is characterized by large galaxies full of dense star clusters. The region adjacent to human space is on the fringes of one of these galaxies, and contains far more stars than the corresponding region of normal space. But the vast majority of them are giants of 3-10 solar masses, which burn out quickly and produce huge numbers of supernovae. Neutron stars and black holes are also extremely common, and the relative abundance of heavy elements is far higher than in normal space.

There is no native life in the Alpha Layer, and permanent colonies are rare. The average planet will be sterilized by a supernova or gamma ray burst about once every thousand years, a fact that has largely discouraged the establishment of permanent human colonies. In civilized areas robotic monitoring systems track such events, and all shipping will know to avoid the Alpha Layer when a blast wave is due to pass through. In less civilized areas monitoring can be incomplete or even completely absent, making travel somewhat dangerous (especially for smaller ships).

Despite the hazards, large-scale mining operations are often set up in the Alpha Layer to take advantage of the high abundance of heavy elements. Heavily populated colonies also put monitoring systems and other static defenses in the Alpha Layer, where they can easily intercept interplanetary traffic.

Beta Layer

The next universe up from the Alpha Layer, with higher transit stresses that require more expensive ships. The Beta Layer is ~900 times ‘smaller’ than normal space, and is sometimes used for long interplanetary trips (e.g. visiting the Oort cloud, or travel between distant binary stars). At normal long-distance travel speeds of ~1,000 kps a starship in the Beta Layer would take four months to traverse a light year of normal space.

The Beta Layer is a universe where the competition between matter and antimatter never ran to completion. Instead some galaxies are made up of matter while others are antimatter, and the cosmic background radiation is dominated by a harsh glare of matter-antimatter annihilation. The region adjacent to human space is in intergalactic space, but there is a thin sprinkling of antimatter halo stars. These systems are often claimed by nations with active antimatter weapon programs, although even with modern technology mining antimatter and processing it into warheads is a dangerous process. 

Gamma Layer

With substantially higher transit stress than the Beta Layer, this universe didn’t become accessible until the development of compact fusion power plants and diamondoid structural materials. Thanks to the scaling factor of ~27,000, a ship in the Gamma Layer can cross the equivalent of a light year of normal space in only 4 days. The first great wave of interstellar exploration and colonization used the Gamma Layer, and it is still used by the majority of interstellar cargo shipping.

The Gamma Layer is a universe whose initial expansion was slower than in normal space, and as a result virtually all hydrogen was burned into heavier elements before it expanded enough to become transparent. There are very few stars, since there isn’t much for them to burn, and in fact most of the mass in the universe has become sequestered in black holes. The region adjacent to human space is an intergalactic void, making it conveniently lacking in navigational hazards.

Major nations often build large-scale fortifications in the Gamma Layer to protect access to important systems, since the ~1,000,000 km range of heavy energy weapons is enough to interdict access to an entire solar system in normal space.

Delta Layer

The transit stress to this layer is so high that only heavily armored vessels can survive entering it, making it uneconomical for most civilian purposes. But being ‘smaller’ than normal space by a factor of 810,000 means that ships able to use it can cross a light year of normal space in a bit over 3 hours, making trips of tens or even hundreds of light years relatively quick. Most military vessels use the Delta Layer for its greater mobility, as do courier ships and express transports, and the second great wave of exploration and colonization began with the construction of the first Delta-capable ships.

The Delta Layer’s physics is rather bizarre compared to the lower layers, due to the fact that it has a cosmological repulsive force that becomes stronger than gravity over distances greater than 10^8 km. This generally prevents the formation of objects larger than a small moon, leading to a universe filled with diffuse clouds of partially-ionized gas. This medium is actually dense enough to cause thermal damage to relativistic objects, and can give rise to immense storm-like phenomena that block long-range sensors and last for millennia.

Epsilon Layer

With a relative scaling of 24 million, a ship in the Epsilon Layer would be able to cover a light year in only six minutes. Such speeds would open up the entire Local Group to human colonization, so it’s too bad it’s impossible to get there.

The problem is that the energy needed to enter the Epsilon Layer is so high you’d need a 2 km ship packed full of antimatter reactors to run the hyperspace converter, yet the transition stress would rip apart even a solid block of diamondoid more than a few hundred meters across. Since both the power output of antimatter reactors and the tensile strength of the best structural materials are currently limited by fundamental physics rather than engineering details, it is generally believed that accessing the Epsilon Layer is impossible.
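All of the travel-time figures quoted for the individual layers can be reproduced from their stated scale factors; this snippet just redoes that arithmetic, using the factors exactly as given in the text:

```python
LIGHT_YEAR_KM = 9.461e12
CRUISE_KPS = 1_000  # typical long-distance travel speed

# Scale factors as quoted for each layer (~30x per layer, approximating pi^3).
LAYERS = {
    "Alpha": 30,
    "Beta": 900,
    "Gamma": 27_000,
    "Delta": 810_000,
    "Epsilon": 24_000_000,
}

for name, factor in LAYERS.items():
    hours = LIGHT_YEAR_KM / factor / CRUISE_KPS / 3_600
    print(f"{name:>7}: {hours:12.2f} hours per light year of normal space")
```

The results match the quoted ten years for Alpha, four months for Beta, four days for Gamma, three-odd hours for Delta, and roughly six minutes for Epsilon.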

Sunday, May 8, 2016

May Update

Between moving, my ongoing divorce and being sick for a week March was pretty much a total loss, and the first half of April wasn't much better. But I've been making progress again these last few weeks, and there seems to be a light at the end of the tunnel.

Hopefully said light isn't an oncoming train.

At any rate, I'm now working on chapter 13 of Perilous Waif. Over the next couple of months I'm going to be posting a series of essays here about the worldbuilding behind this setting, in hopes that if I got the physics completely wrong someone will point out my mistakes before I publish them.

Unfortunately Revenant is still stalled at chapter three. At this point I don't think a summer release is going to happen, although I'm still hoping to get the book out sometime early in the fall. The Daniel Black books are much faster to write than hard SF, and since I have the plot worked out for the next three books now I may just roll on into book five at that point. But no promises there - I've learned that you never know what kind of curve ball life might throw at you next.

Monday, March 14, 2016

March Update

Is it that time already? Ugh. Note to other writers out there - getting divorced is not conducive to writing. Avoid it if at all possible.

I have, however, continued to make progress. I'm now working on chapter 10 of Perilous Waif, and I think I've finally got the majority of the worldbuilding I need to support the story finished. This setting is definitely too big to explore in a single series, so I may end up using it for other SF stories in the future.

The first two chapters of Revenant have undergone a major rewrite, as I got a much better idea for how to handle some critical plot points. I'm now working on chapter 3, and I expect I'll start putting out chapters a little faster from this point on.

In other news, the audiobook version of Fimbulwinter is now on sale. The Black Coven audiobook is currently in production, with Extermination to follow. I'll post an announcement here when they become available.

Sunday, February 7, 2016

February Update

I'm pleased to announce that Fimbulwinter will be available in audiobook form starting later this month. Thanks to Gregg Savage and Guy Williams for all their hard work making this possible. And yes, this means audiobook versions of Black Coven and Extermination are in the works.

Meanwhile, I'm now working on chapter 7 of Perilous Waif. Progress has been a lot slower than I would have liked, mostly because it takes a lot of work to do all the world building for a realistic far-future SF setting. I've found that I generally can't turn out more than one scene at a time, where with fantasy it isn't unusual for me to write a couple thousand words in one sitting.

So I've also started working on Revenant, the next Daniel Black novel. I'm currently on chapter 2, and I'll be experimenting with working on both books in parallel this month. Depending on how things go I may just write a chapter now and then when I need a break from SF, or I may end up making it my main focus while I continue Perilous Waif at a slower pace.

One way or the other, there should hopefully be more progress to report next month.

Sunday, January 3, 2016

Status Update

My New Year's resolution this year was to actually publish a monthly status update consistently, so I suppose I'd better get the year's first installment out. This will be my first year as a full-time author, and I've got some fairly ambitious plans in the works. So in theory there should be a lot to talk about.

I'm currently working on chapter 5 of Perilous Waif, my new science fiction novel. This series is my attempt to resurrect the nearly-dead genre of hard SF (as opposed to the soft SF you mostly see today), by updating the conventions of old Heinlein-era stories with the advances of the information age. So this setting has genetic engineering, cybernetics, neural interfaces, robots, ubiquitous AI, nanotechnology and the internet to go along with the spaceships and exotic planets. It also has lasers, RKKVs, Casaba howitzers and other fun toys - one of my goals for the series is to write interesting space battles without having to pull any punches.

But I know all you Daniel Black readers are anxious for the next book, so let me assure you that the sequel to Extermination will be the next book in the queue. I'm taking some time to get all my plot threads neatly laid out and consistency-checked this time, because things are going to get a lot more complicated over the next few volumes. Oh, and as an experiment I'm probably going to release two versions of this book - one with lots of explicit scenes like the ones in Fimbulwinter, and another that's less explicit and does the traditional 'fade to black' on anything that isn't essential to the plot. I suspect I know which version will sell more copies, but it's always good to have experimental evidence.

So, my tentative release schedule for 2016 is Perilous Waif in the spring, the next Daniel Black novel in the summer, and a third book sometime later in the year.