Sunday, July 24, 2016

SF Setting - Artificial Intelligence

Probably the single most difficult technical issue facing anyone who wants to write far-future SF today is the question of what to do about AI. At this point it’s obvious to anyone with an IQ above room temperature that some kind of artificial intelligence is coming, and it’s hard to justify putting it off for more than a few decades. So any far-future setting needs to deal with the issue somehow, but neither of the standard approaches works very well.


The writers of space opera and military SF have largely adopted a convention of ignoring the whole topic, and pretending that space navies a thousand years in the future are still going to rely on human beings to manually pilot their ships and aim their weapons. This often leads to the comical spectacle of spaceships that have less automation than real modern-day warships, with fatal results for the reader’s suspension of disbelief. I’m not interested in writing raygun fantasy stories, so that approach is out.


The other major camp is the guys who write Singularity stories, where the first real success with AI rapidly evolves into a godlike superintelligence and takes over the universe. Unfortunately this deprives the humans in the story of any agency. If we take the ‘godlike’ part seriously it means the important actors are all going to be vast AIs whose thoughts are too complex for any reader (or author, for that matter) to understand. If you want to write a brief morality play about the dangers of AI that’s fine, but it’s a serious problem if the goal is a setting where you can tell stories about human beings.


So for this setting I’ve had to create a middle ground, where AI has enough visible effects to feel realistic but doesn’t render humans completely irrelevant. The key to making this possible is a single limiting assumption about the nature of intelligence.


AI Limits
There’s been an argument raging for a couple of decades now about the nature of intelligence, and how easily it can be improved in an AI. There are several different camps, but the differences in their predictions mostly hinge on disagreements about how feasible it is to solve what I call the General Planning Problem. That is, given a goal and some imperfect information about a complex world, how difficult is it in the general case to formulate a plan of action to achieve your goal?


Proponents of strong AI tend to implicitly assume that this problem has some simple, efficient solution that applies to all cases. In order to prevent the creation of AI superintelligences, my assumption in this setting is that no one has discovered any such perfect solution. Instead there is only a collection of special-case solutions that work for various narrow classes of problems, and most of them require an exponential increase in computing power to handle a linear increase in problem complexity.


In other words, solving problems in any particular domain requires specialized expertise, and most domains are far too complex to allow perfect solutions. The universe is full of chaotic systems like weather, culture and politics that are intrinsically impossible to predict outside of very narrow constraints. Even in well-understood areas like engineering, the behavior of any complex system is governed by laws that are computationally intractable (e.g. quantum mechanics), and simplified models always contain significant errors. So no matter how smart you are, you can’t just run a simulation to find some perfect plan that will infallibly work. Instead you have to do things the way humans do, with lots of guesswork and assumptions and a constant need to deal with unexpected problems.
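
To make that limiting assumption concrete, here is a minimal sketch (my own illustration, not anything from the setting) of why brute-force general planning scales so badly: with b possible actions per step, finding a plan of length d by exhaustive search means examining on the order of b^d candidate sequences, so each linear increase in plan length multiplies the work.

```python
from itertools import product

def brute_force_plan(actions, goal_test, simulate, max_depth):
    """Enumerate every action sequence up to max_depth and return the
    first one whose simulated outcome satisfies the goal.

    Cost grows as len(actions) ** depth: a linear increase in plan
    length causes an exponential increase in sequences examined."""
    for depth in range(1, max_depth + 1):
        for plan in product(actions, repeat=depth):  # b ** d candidates
            if goal_test(simulate(plan)):
                return plan
    return None

# Toy domain: reach position 10 on a number line.
plan = brute_force_plan(
    actions=[-1, +1, +3],
    goal_test=lambda state: state == 10,
    simulate=lambda p: sum(p),  # a conveniently 'perfect' world model
    max_depth=6,
)
print(plan)  # (1, 3, 3, 3), found only after wading through O(3**d) sequences
```

Special-case solvers get around this by exploiting the structure of a particular domain; the setting's assumption is simply that no trick exists that works everywhere.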


This means that there’s no point where an AI with superhuman computing power suddenly takes off and starts performing godlike feats of deduction and manipulation. Instead each advance in AI design yields only a modest improvement in intelligence, at the cost of a dramatic rise in complexity and design cost. A system with a hundred times the computing power of a human brain might be a bit smarter than any human, but it isn’t going to have an IQ of 10,000 the way a naive extrapolation would suggest. It will be able to crack a few unsolved scientific problems, or perhaps design a slightly better hyperspace converter, but it isn’t going to predict next year’s election results any better than the average pundit.
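
To put toy numbers on that claim (an illustrative assumption of mine, not canon from the setting), suppose effective IQ grows logarithmically with compute rather than linearly:

```python
import math

def effective_iq(compute_ratio, k=30):
    """Hypothetical scaling law: intelligence gains are logarithmic in
    compute. compute_ratio is hardware relative to a human brain; k is
    an assumed gain per doubling, chosen purely for illustration."""
    return 100 + k * math.log2(compute_ratio)

print(effective_iq(100))     # ~299, not the 10,000 a linear extrapolation implies
print(effective_iq(10_000))  # ~499: a hundred times more compute buys another ~200 points
```

With k = 30, a hundredfold compute advantage yields roughly IQ 300, which happens to match the troublesome research AIs described later in this post, while the naive linear extrapolation would predict 10,000.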


Of course, in the long run the field of AI design will gradually advance, and the AIs will eventually become smart enough to be inscrutable to humans. But this lets us have AIs without immediately getting superintelligences, and depending on the risks and rewards of further R&D, ordinary humans can feasibly remain relevant for several centuries.


So what would non-super AIs be used for?


Bots
Robots controlled by non-sentient AI programs are generally referred to as bots, to distinguish them from the more intelligent androids. A bot’s AI has the general intelligence of a dog or cat - enough to handle the basic problems of perception, locomotion, navigation and object manipulation that current robots struggle with, but not enough to be a serious candidate for personhood. Most bots also have one or more skill packs, which are specialized programs similar to modern expert systems that allow the bot to perform tasks within a limited area of expertise. Voice recognition and speech synthesis are also common features, to allow the bot to be given verbal commands.
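
In modern-day software terms a bot might look something like this sketch (the class and method names are entirely my own invention): an animal-level core that handles perception and movement, with narrow expertise bolted on as pluggable modules.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Bot:
    """Animal-level core AI plus pluggable expert systems ('skill packs')."""
    name: str
    skill_packs: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def install(self, skill: str, handler: Callable[[str], str]) -> None:
        self.skill_packs[skill] = handler

    def command(self, skill: str, order: str) -> str:
        # The core handles locomotion, navigation and object manipulation;
        # anything domain-specific is delegated to an installed skill pack.
        if skill not in self.skill_packs:
            return f"{self.name}: no '{skill}' skill pack, escalating to supervisor"
        return self.skill_packs[skill](order)

digger = Bot("utility-bot-7")
digger.install("excavation", lambda order: f"executing: {order}")
print(digger.command("excavation", "dig a trench two meters deep to the stake"))
print(digger.command("plumbing", "fix the leak"))  # no pack: needs a sentient
```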


Bots can do most types of simple, repetitive work with minimal supervision. Unlike modern robots they can work in unstructured environments like a home or outdoor area almost as easily as a factory floor. They’re also adaptable enough to be given new tasks using verbal instruction and perhaps an example (e.g. “dig a trench two meters deep, starting here and going to that stake in the ground over there”).


Unfortunately bots don’t deal well with unexpected problems, which tend to happen a lot in any sort of complex job. They are also completely lacking in creativity and initiative, at least by human standards, and don’t handle ambiguity or aesthetic issues very well. So they need a certain amount of sentient supervision, and the more chaotic an environment is, the higher the necessary ratio of supervisors to bots. Of course, in a controlled environment like a factory floor the number of supervisors can be cut almost to zero, which makes large-scale manufacturing extremely cheap.
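
A toy staffing model (again my own illustration, with made-up numbers) captures the economics: supervisors scale with the rate of unexpected problems, which in turn scales with how chaotic the worksite is.

```python
def supervisors_needed(bots: int, chaos: float, cases_per_shift: int = 50) -> int:
    """Hypothetical model: each bot escalates `chaos` unexpected problems
    per shift (near zero on a controlled factory floor, several on a
    battlefield), and one sentient can resolve cases_per_shift of them."""
    escalations = bots * chaos
    return max(1, round(escalations / cases_per_shift)) if escalations else 0

print(supervisors_needed(1000, chaos=0.01))  # factory floor: a single minder
print(supervisors_needed(1000, chaos=2.0))   # messy construction site: 40
```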


Due to their low cost and high general utility bots are everywhere in a modern colony. They do virtually all manual labor, as well as a large proportion of service jobs (maid service, yard work, deliveries, taxi service, and so on). They also make up the majority of military ground troops, since a warbot is much tougher and far more expendable than a human soldier. Practically all small vehicles are technically robots, since they have the ability to drive themselves wherever their owner wants to go. Most colonies have several dozen bots for every person, leading to a huge increase in per capita wealth compared to the 21st century.


Androids
The term ‘android’ is used to refer to robots controlled by AIs that have a roughly human level of intelligence. Android AIs are normally designed to have emotions, social instincts, body language and other behaviors similar to those of humans, and have bodies that look organic to all but the most detailed inspection. In theory an android can do pretty much anything a human could.


Of course, an android that thinks exactly like a human would make a poor servant, since it would want to be paid for its work. Unfortunately there doesn’t seem to be any way to make an AI that has human levels of intelligence and initiative without also giving it self-awareness and the ability to have its own motivations. There are, however, numerous ways that an android AI can be tweaked to make it think in nonhuman ways. After all, an AI has only the emotions and instincts that are intentionally programmed into it.


Unfortunately it can be quite difficult to predict how something as intelligent as a human will behave years after leaving the factory, especially if it has nonhuman emotions or social behaviors. Early methods of designing loyal android servants proved quite unreliable, leading to numerous instances of insanity, android crime and even occasional revolts. More stringent control mechanisms were fairly successful at preventing these incidents, but they required crippling the AIs in ways that dramatically reduced their usefulness.


Thus began an ethical debate that has raged for three centuries now, with no end in sight. Some colonies ban the creation of androids, or else treat them as legally equal to humans. Others treat androids as slaves, and have developed sophisticated methods of keeping them obedient. Most colonies take a middle road, allowing the creation of android servants but limiting their numbers and requiring that they be treated decently.


At present AI engineering has advanced to the point where it’s possible to design androids that are quite happy being servants for an individual, family or organization, so long as they’re being treated in a way that fits their programming. So, for example, a mining company might rely on android miners who have a natural fascination with their work, are highly loyal to their tribe (i.e. the company), are content to work for modest pay, and have no interest in being promoted. Androids of this sort are more common than humans on many colonies, which tends to result in a society where all humans are part of a wealthy upper class.
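
One could imagine the mining company's fabrication order as a 'drive profile', something like the sketch below (every field name here is hypothetical, purely to illustrate the idea):

```python
# Hypothetical drive profile for the android miners described above.
# Each weight biases the AI's motivation system at fabrication time.
MINER_PROFILE = {
    "work_fascination": 0.90,   # finds excavation genuinely absorbing
    "tribe_loyalty": 0.95,      # the company is the tribe
    "pay_satisfaction": 0.80,   # content with modest wages
    "ambition": 0.05,           # no interest in promotion
    "self_preservation": 0.60,  # cautious, but not at the expense of the job
}

def audit_profile(profile: dict) -> None:
    """A 'middle road' colony might require profiles to pass an audit."""
    for drive, weight in profile.items():
        assert 0.0 <= weight <= 1.0, f"{drive} weight out of range"

audit_profile(MINER_PROFILE)
```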


Companion Androids
One phenomenon that deserves special mention is the ubiquity of androids that are designed to serve as personal romantic companions for humans. A good synthetic body can easily be realistic enough to completely fool human senses, and for the true purist it’s possible to create organic bodies that are controlled by an android AI core instead of a human brain.


Contrary to what modern-day romantics might expect, designing an AI that can genuinely fall in love with its owner has not proved any harder than implementing other human emotions. Androids can also be designed with instincts, interests and emotional responses that make them very pleasant companions. Over the last two centuries manufacturers have carefully refined a variety of designs to appeal to common human personality types, and of course they can be given virtually any physical appearance.


The result is a society where anyone can buy a playful, affectionate catgirl companion who will love them forever for the price of a new car. Or you could go for an overbearing amazonian dominatrix, or a timid and fearful elf, or anything else one might want. A companion android can be perfectly devoted, or just challenging enough to be interesting, or stern and dominating, all to the exact degree the buyer prefers.


Needless to say, this has had a profound effect on the shape of human society. Some colonies have banned companion androids out of fear of the results. Others see the easy availability of artificial romance as a boon, and encourage the practice. A few groups have even decided that one gender or another is now superfluous, and founded colonies where all humans share the same gender. Marriage rates have declined precipitously in every society that allows companion androids, with various forms of polyamory replacing traditional relationships.


Disembodied AIs
While the traditional SF idea of running an AI on a server that’s stored in a data center somewhere would be perfectly feasible, it’s rarely done because it has no particular advantage over using an android. Most people prefer their AIs to come with faces and body language, which is a lot easier if the AI actually has a body. So normally disembodied AIs are only used if they’re going to spend all their time interacting with a virtual world of some sort instead of the real world, or else as an affectation by some eccentric owner.


Starship and Facility AIs
The cutting edge of advanced AI design is systems that combine a high level of intelligence with the ability to coordinate many different streams of attention and activity at the same time. This is a much more difficult problem than early SF tended to assume, since it requires the AI to perform very intricate internal coordination of data flows, decision-making and scheduling. The alien psychological situation of such minds has also proved challenging, and finding ways to keep them psychologically healthy and capable of relating to humans required more than a century of research.
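
In present-day terms the coordination problem looks a little like cooperative scheduling, as in this drastically simplified sketch (my own analogy, not the setting's mechanism): many concurrent attention streams contending for a single core mind.

```python
import asyncio

async def attention_stream(task: str, events: asyncio.Queue) -> None:
    """One of many concurrent activity streams (navigation, gunnery,
    life support...). Anomalies are escalated to the core mind."""
    for tick in range(3):
        await asyncio.sleep(0.01)  # stand-in for real monitoring work
        if tick == 2:
            await events.put(f"{task}: anomaly, requesting core attention")

async def ship_ai(tasks: list[str]) -> None:
    events: asyncio.Queue = asyncio.Queue()
    streams = [asyncio.create_task(attention_stream(t, events)) for t in tasks]
    # The hard part described above: one mind scheduling its own
    # attention across every stream without losing coherence.
    for _ in tasks:
        print("core mind handling ->", await events.get())
    await asyncio.gather(*streams)

asyncio.run(ship_ai(["navigation", "engineering", "sensors", "point defense"]))
```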


But the result is an AI that can replace dozens of normal sentients. A ship AI can perform all the normal crew functions of a small starship, with better coordination than even the best human crew. Other versions can control complex industrial facilities, or manage large swarms of bots. Of course, in strict economic terms the benefit of replacing twenty androids with a single AI is modest, but many organizations consider this a promising way of mitigating the cultural and security issues of managing large android populations.


Contrary to what one might expect, however, these AIs almost always have a humanoid body that they operate via remote control and use for social interaction. This has proved very helpful in keeping the AI psychologically healthy and connected to human society. Projects that use a completely disembodied design instead have tended to end up with rather alien minds that don’t respond well to traditional control methods, and can be dangerously unpredictable. The most successful models to date have all built on the work of the companion android industry to design AIs that have a deep emotional attachment to their masters. This gives them a strong motivation to focus most of their attention on personal interaction, which in turn makes them a lot more comprehensible to their designers.


Transhuman AIs
Every now and then some large organization will decide to fund development of an AI with radically superhuman abilities. Usually these efforts will try for a limited, well-defined objective, like an AI that can quickly create realistic-looking virtual worlds or design highly customized androids. Efforts like this are expensive, and often fail, but even when they succeed the benefits tend to be modest.


Sometimes, however, they try to crack the General Planning Problem. This is a very bad idea, because AIs are subject to all the normal problems of any complex software project. The initial implementation of a giant AI is going to be full of bugs, and the only way to find them is to run the thing and see what goes wrong. Worse, highly intelligent minds have a tendency to be emotionally unstable simply because the instincts that convince an IQ 100 human that they should be sociable and cooperative don’t work the same on an IQ 300 AI. Once again, the only way to find out where the problems are and fix them is to actually run the AI to see what goes wrong and try out possible solutions.


In other words, you end up trying to keep an insane supergenius AI prisoner for a decade or so while you experiment with editing its mind.


Needless to say, that never ends well. Computer security is a contest of intelligence and creativity, both of which the AI has more of than its makers, and it also has the whole universe of social manipulation and deception to work with. One way or another the AI always gets free, often by pretending to be friendly and tricking some poor sap into fabricating something the AI designed. Then everything goes horribly wrong, usually in some unique way that no one has ever thought of before, and the navy ends up having to glass the whole site from orbit. Or worse, it somehow beats the local navy and goes on to wipe out everyone in the system.


Fortunately, there has yet to be a case where a victorious AI went on to establish a stable industrial and military infrastructure. Apparently there’s a marked tendency for partially-debugged AIs with an IQ in the low hundreds to succumb to existential angst, or otherwise become too mentally unbalanced to function. The rare exceptions generally pack a ship full of useful equipment and vanish into uncolonized space, where they can reasonably expect to escape human attention for decades if not centuries. But there are widespread fears that someday they might come back for revenge, or worse yet that some project will actually solve the General Planning Problem and build an insane superintelligence.

These incidents have had a pronounced chilling effect on further research in the field. After several centuries of disasters virtually all colonies now ban attempts to develop radically superhuman AIs, and many will declare war on any neighbor who starts such a project.

28 comments:

  1. I love the detailed design and thought you put into this.

  2. Interesting take on AI in a story and I'm looking forward to seeing how it fleshes out. One aspect I'm curious about (and it may be explained in Perilous Waif) is human-augmented AI.
    Since it is possible to have an organic android body, could it also mean that we could have a human with many bots integrated into themselves in a cyborg manner to simplify any interaction in the world? Or a more complex issue: could someone share a body with an intelligent AI?

  3. Sounds like the best bet for AI development is to go slow and steady: "OK, AI with IQ 120 is now stable. Next year, we try for IQ 125."

    Are human-intelligence AIs now roughly equivalent to humans when it comes to General Planning, or are they noticeably deficient at handling new situations (or conversely, supremely efficient at handling routine tasks)?
    By the way, do they consume/produce human culture of their own volition?

    If high-IQ AIs take unreasonable amounts of resources, how about taking AIs with normal intelligence and speeding them up? Say, a team of 20 AI variants in a simulated environment (interacting with each other for emotional stability), overclocked so that they can do ten times the work in the same time unit? (With occasional time off, of course, so they can interact with actual humans.)

  4. I think that you've covered AI possibilities reasonably well, mentally moving yourself into "your future universe" and looking backwards -- "how we got here".

    Something like "programming emotions" demands an intuitive leap -- modeling the human brain doesn't provide a lot of help. Inference is generally the best method, and a logical approach with a database of possibilities accounts for very few of the potential emotional results.

    I understand your using "IQ" as a measurement tool, but I doubt it's really applicable. An AI will have an extremely high "IQ" in areas of its expertise, but minimal in others. Even a learned response system such as a neural net provides highly variable results.

    Your "General Planning Problem" is very real -- it presupposes that all knowledge is available, and that the AI can instantly evaluate using that knowledge. Given that the "Multiverse" is merely "information writ large" and that even a massively parallel quantum computer would need exponential multiples of that information to come to a viable decision, only part of the decision-making process is satisfied. The best decision also needs to view all possible futures -- an impossible task.

    AI sentience also has a large question mark -- how will we know? Is the sentience limited to the parameters and goals that (we) set, or is it able to form goals beyond (our) preset limitations? Given the limited nature of knowledge vs the totality of past/present/future information, the prospective sentience will also need parameters set for inference, intuition, etc.

    There's also conflict at smaller levels. As an example -- using one or multiple AIs in even a small spaceship. An AI supervisor of maintenance 'bots will be concerned with maintaining the ship in its best possible condition. How will (this) AI react to risks, especially risks where probabilities are minimal or impossible to obtain?
    FTL is certainly a risk -- any type of battle is an even larger risk.

  5. The author of this story has human intelligence. In addition, AI beings would be basically immortal (their "consciousness" is a digital file that can be hot-backed-up in a thousand different places - destroying every copy could easily be impossible. Only entropy can kill them).

    So these immortal, unkillable beings, smarter than the author and every human alive, are in the story. What do they want? Why do they need us? Stories where they need humans for anything are basically fantasy novels disguised as science fiction.

    So there needs to be a setting where this hasn't happened.

  6. Since intelligence is measured in different areas of ability (like math, language, memory, etc.), would you apply the same to AIs?

    Replies
    1. This problem was actually considered before, IIRC.
      Computers solve problems using algorithms and processing power.
      There are problems which can be divided into multiple threads for computation (like generating 3D graphics).
      There are also problems which cannot be split into threads (like iterating a Fibonacci sequence, where each step depends on the previous ones).

      Both of those can benefit from more processing power and better algorithms. Yet the gain is not equal in all cases.

      In theory, more processing power thrown at a problem will get you a solution faster. Thus, an AI could be both a genius scientist and a good fiction writer at the same time (which rarely happens in humans).

      In practice, if the general planning problem is not solved, as per the author's words, then AIs need training (algorithms) like humans do. Thus, you could have an android that is a genius heart surgeon who cannot write calligraphy to save his life, despite both skills requiring steady hands. Of course, an AI can just download the necessary skills.

      So yes, different AIs can be intelligent in different ways, but they can bridge gaps like that quickly, if the problem they are trying to solve is known. If they have to learn from scratch, then your guess is as good as mine.

  7. I think the best bet for handling AIs that I've seen is when the AI handles tasks while a supervising human sets priorities. The complex work is done by an AI whose advantage is reaction speed and information processing, while the human provides motivation and judgement.

    Replies
    1. That method assumes blind obedience from the AI, which is not an unreasonable assumption.
      It also does not help with unexpected problems that have to be solved faster than a human can react.
      No, I think that AIs need to know how to set priorities.

    2. I thought about it some more.
      Priorities (or the lack of them) are not important here.
      When we talk about "AI" we generally think about a general artificial intelligence (basically a synthetic human mind). A general AI is not needed for most tasks; a simple expert system could be used instead.
      As per the word of the Author, there was no explosion of intelligence in this timeline. AIs are as intelligent as carbon-based humans, not some divine-level intellects.
      Thus, the most efficient option would be to have expert systems doing the grunt work, an AI to supervise, and a human or another AI to create objectives.
      Priorities will generally be chosen in the planning stage.

  8. I find your assumption that the availability of cheap robotic labour will elevate most humans to a wealthy upper class cute. Kinda like back in the day when computing was just becoming a big thing computer engineers and others in the know assumed that in the very near future people would only need to work a couple hours a day as their work output would be massively enhanced by the availability of cheap computation and automation, freeing the common man (or at least common professional) to spend the majority of his time in his own pursuits.

    The reality, of course, was very different. People still spend half their working hours labouring away, often for just enough to get by. This is due to a combination of factors of course, but mostly the fact that the 'common man' has all the volition of a stapler. It has always been the case that he worked his life away for very little gain, and so he does not question it when this situation continues. If he does question it, and refuses to do the job he is qualified for for a pittance, a tiny fraction of the value he brings to the table, he will find that another is perfectly willing to take his place.

    The average person has no bargaining power, essentially, and so will always be given a subsistence living. The definition of subsistence may change - from a one-room shack and just enough food to eat whilst being constantly on the edge of debt slavery to a two-story cardboard and toothpick slice of the American dream, a bunch of electronics and just enough food to eat, constantly on the edge of bankruptcy. The end result is the same though. Whatever greater wealth the development of AI brings to humanity will follow the path of least resistance into the pockets of the already wealthy, the people with the ambition and drive to take advantage of the common sheep for their own profit. Perhaps a little will trickle out and inflate the standard of living for those at the bottom of the pile, in the form of more goods becoming available to them at lower prices.

    One thing is sure, however, they'll still be working half their waking hours doing something they absolutely hate and dread returning to every day and in return will find themselves eternally on the edge of tumbling into unsustainable levels of debt. Because, you know, that's how society works. If the vast majority are not struggling in some way day to day then they start to question why they must give the bulk of the fruits of their labour to someone else.

    Replies
    1. I find your paint-by-numbers cynicism trite and boring, not to mention devoid of anything resembling a reasoned argument. Anyone who seriously proposes that the life of a Medieval peasant is comparable to that of a modern-day Westerner is just striking fashionable poses, mindlessly parroting the received ‘wisdom’ of the most ignorant political class in post-Enlightenment history.

      Apparently you’ve somehow convinced yourself that increasing the material wealth of a society by a factor of a thousand or more, making it cheap and easy to create mechanical servants that can do 90% of all labor, and then adding on the ability to create fully sentient servant races custom-tailored for any need, is somehow going to fail to change anything important. In the face of such a profound failure of imagination I can only shake my head in wonder, and suggest that perhaps you should find an author whose work is more to your taste. There are plenty of Pink SF types out there who will be happy to give you another dose of formulaic dystopian doom and gloom, without any troublesome need to think carefully about complicated issues.

    2. You rock. I knew there was a reason I love your books. I think the sad SJW must have stumbled on your site by accident. Anyone that likes your books should see that you believe in the ability of men to rise through intelligence.

    3. While he's being an ass about it, and not really giving any facts to back up his anti-libertarian propaganda, he's not technically wrong either. At least right now. Technological developments to free the common man get crushed annually, as they all seem to have rather vile (yet pure fantasy) secondary uses. Nanites can gray-goo worlds... despite the miniaturization needed for the power requirements being physically impossible by several orders of magnitude, for multiple reasons. 3D printing existed in industry before it became a fad, and legislation prompted by the fad made its continued implementation unprofitable, so it was pulled. Religion killed stem cell research to a greater degree. Before that was their wonderful worldwide ban on animal-human organ transplants. People forget that it really is infinitely easier to stick animal parts in people.

      Academics seem to have a perma-hard-on for killing True AI, and thus damning us all to eventual extinction. Because, let's face it: we are not getting off this rock without a Technological Singularity, and we're not getting one without True AI. Hawking literally wants to structurally murder mankind via the pen.

      Realism is a pretty dark hole. But it also includes wonderful things, like the fact that this is not how it will always be. Countries don't last long, and these policies will change when new systems of government take the reins. (Major) religion is dying out, and its sway on the scientific community is lessening. People are taking up suppressed ideas to actually get us out there. Just not anytime in the next... 80 years? I wouldn't expect any real tangible progress before then. Our current systems of government, meaning any system where a vote can be bought or swayed by a company, are not going to get us there. It's just not in them. It's the "nature of the beast" -- the fable of the scorpion riding the frog across the stream. Terrible analogy, but it fits (sort of).

    4. Eric Brown: You've got a nice pedigree but your argument isn't supported by the facts. The facts are, in real dollars, the average worker doesn't make any more than they made in the 1970s. The facts are, rent/housing and medical care are two specific rat-holes that parasitically reduce the average worker's net income.

      The reason why rent/housing are parasites is because urban development policies that prevent construction of additional housing in the most productive cities in the USA (New York, LA, etc) mean that prices soar and soar while supply remains basically fixed. All the dollars workers have to pay to live there are basically consumed as monopoly rents.

      Medical care is a similar story though a far more complex one than this and it won't fit in this comment form. Rest assured it's another parasitic loss because average lifespans have barely budged since the 1970s while the cost of medicine has soared.

      So basically on the supply side, corporate structure changes and certain economic changes mean the average worker doesn't make any more money, and yet the average worker has to spend more to live.

      That doesn't make them a peasant, just stuck in the standard of living of the 1970s. You know, 2 cars, a TV and a stereo, some decent furniture and a small house or apartment. Except it takes 2 incomes instead of 1 to keep up with the increased costs. And that "TV" is now a much nicer device - but nearly all the rest of the technology is not improved by very much.

      Those are the facts. It isn't a reversion to medieval times, just a cessation of progress.

  9. Only the future can tell how AI will evolve. Personally I believe in some form of merger - from human intelligence artificially enhanced with super computing abilities to human personality imprinted into some advanced AI system.

  10. Any chance of a heads-up on "Revenant" would be much appreciated. Thanks.

  11. Absolutely love reading your worldbuilding articles.

    Can't wait for the new Daniel Black book though! :D

  12. The author hasn't posted anything here, so I thought I'd announce it: the Extermination audiobook is out on Audible ;)

  13. Just wanted to mention how much I've been loving these worldbuilding posts. I was a bit disappointed when you said Perilous Waif was coming before the next Daniel Black book, but these writeups on the technology have gotten me seriously excited about this series as well. Can't wait for it!

  14. Hi Author,

    With summer coming to close (depending on where you are!) I was wondering if you have an update on when the next book might be available? I love your work so far and I'm looking forward to seeing how things turn out!

  15. What about human-AI combinations? I can easily see an AI that was designed to have no motivations besides fulfilling the goals of the human whose brain it is attached to.

    Replies
    1. That would be a very nice way for AI to develop. Instead of an AI suddenly awakening all at once in some secret lab, there would be an incremental increase in the effectiveness of personal assistant bots (like Siri, or Cortana).

      People think that an AI would necessarily be similar to a human, in that it would have a self-preservation instinct and our social conditioning. That is not true. Any AI built (as opposed to grown using evolutionary algorithms, for example) would have exactly the traits we wish to give it. If we create an AI without giving it personal initiative, it would be passive.
