Art and Life, Imitating Each Other

PARTIALS and its sequels are primarily about Kira and her personal journey through the post-apocalyptic world. In designing that world, my editor Jordan Brown and I did a lot of background work (a LOT of background work) to explain exactly how and why the world ended, where RM came from, and where the Partials came from, because it was important for us to know in order to present the world correctly, but a lot of it wasn’t directly relevant to Kira’s journey, so it never came up in the books. You hear hints about it, but you never get a full description of exactly what happened and why. This makes the books stronger, I think, because they keep the focus tight and personal, but we still wanted to use that other info. Eventually we came up with the idea of creating a bunch of in-world documents, ‘collected’ by the conspiracy theorist/hermit/crazy person Afa Demoux, cataloging the fall of the human race. This is similar to what we did with the book trailers (which, you may have noticed, are also part of the Afa Demoux Archive). Most of those documents were slipped into the back of the trade paperback edition of PARTIALS, but some of them are floating around online.

The top document at that link is a United Nations resolution mandating “human-like emotion” in artificial sentients. The background behind this is hinted at in the books, but here’s the full story: America got involved in a very long and deadly war in the Middle East, eventually centering on Iran and resulting in catastrophic losses for all sides. This war made heavy use of drones, with increasingly complex intelligence, which Jordan and I thought was a nice guess at where things were headed in the real world–keep in mind that we were doing this back in 2010, before combat drones were as overwhelmingly prevalent as they are today. As drone attacks increased in 2011 and 2012, Jordan and I both cringed at the news and patted each other on the back for calling it correctly; such are the confusing emotions of writing science fiction :)

Okay, back in the fictional backstory again: several years after the war in Iran, during the infamous Isolation War in China, the drones were back in action and causing more and more problems, for the same reasons we see them causing problems in the real world: they don’t distinguish friend from foe the same way a human does, and they have a tendency to cause a lot of collateral damage, including the loss of innocent life. In 2049 the UN addressed the question directly and decided that any artificial battlefield combatant, particularly one governed by artificial intelligence, must have some kind of real, human emotion to govern its decisions. To quote the document: “A human soldier seeks war as a means of protecting human life; a construct seeks only the completion of military objectives. While it may be possible to ‘program’ certain failsafes and behaviors into a machine or artificial species, it is simpler and safer to remove the problem completely by imbuing that species with the necessary emotions and ethics to keep itself in check. … They should be able to identify a child, for example, not just as a non-combatant but as a precious life and an object of love and protection. Our constructs will not be heartless killing machines, but thinking—and more importantly feeling—individuals.”

Jordan and I saw this as the final piece of the puzzle leading to the creation of the Partials: the world needs soldiers, but doesn’t want to risk humans, and can no longer bear the consequences of amoral drone technology, so they turn to the burgeoning field of biotech and build the perfect soldiers. The Partials can not only fight our wars for us, they can protect innocents on the field of battle, make ethical choices about combatants and prisoners, and wage war not as indifferent killers, but as a means to a peaceful end. That seems like a great idea, but this decision is also the beginning of humanity’s downfall. Look at it from the Partials’ point of view: we built them to love humans, and then told them to kill humans. We built them to love us, and then when they came home to us from a successful war we treated them like subhuman garbage, marginalized and ignored and oppressed because we refused to see them as equals. In trying to separate ourselves from the consequences and responsibilities of war, we sowed the seeds of our own destruction.

But! This is where it gets cool and/or scary. Back in the real world, Jordan and I were patting ourselves on the back, delighted that we’d not only come up with a cool story idea, but based it on some real-life events and politics. Then, in April of 2013, the UN started down the very same road we put them on in our science fiction book. This document is not one of mine; it’s a real one from the real UN–not a resolution yet, but a report about the ongoing use of combat drones. Some of the vocabulary is different, of course–I called them “fully-artificial drone combatants,” and the UN calls them “lethal autonomous robotics”–but the idea is the same. Artificially intelligent weapons are replacing human soldiers on the battlefield, and they are making questionable or outright unconscionable decisions, and the world is upset. Whether you call it warfare or “extrajudicial execution,” we are seeing what happens when we send unfeeling machines out to kill people, and we don’t like it. In a haunting echo of my fictional UN statement, this real one declares that “They raise far-reaching concerns about the protection of life during war and peace. … robots should not have the power of life and death over human beings.” Did you feel that deep, rumbling shift in your brain? Because your entire world just changed. Things that used to be science fiction–like robots having the power of life and death over human beings–are not science fiction anymore. These things are real, and real governments are dealing with them in real situations.

This is one of my favorite sections of the report, because it illuminates the unsolvable moral web at the heart of this issue; I’ll present it to you in two halves: “Some argue that robots could never meet the requirements of international humanitarian law (IHL) or international human rights law (IHRL), and that, even if they could, as a matter of principle robots should not be granted the power to decide who should live and die. These critics call for a blanket ban on their development, production and use.” This sounds pretty reasonable, right? Nobody wants robots running around just killing whoever they want to (or whoever their programming tells them to). Banning robotic weapon systems seems like a good idea. But now here’s the second half of the paragraph: “To others, such technological advances–if kept within proper bounds–represent legitimate military advances, which could in some respects even help to make armed conflict more humane and save lives on all sides. According to this argument, to reject this technology altogether could amount to not properly protecting life.” That’s the gut-punch, because this ALSO sounds completely reasonable. By banning robotic weapons you are forcing human soldiers into the line of fire, inevitably resulting in human casualties. If we can prevent those casualties we should, right? No one would argue that we should willingly risk more human life. Except we just did, in a roundabout way, in the first half of this very paragraph. Both sides of this argument have really, really good points.

The best answer, of course, is to just not have any more wars, but until you can convince all the tyrants and dictators and terrorists of the world to abide by the same principle, that’s not a feasible option. The next-best answer, then, would be to have robotic drones replace our soldiers (thus fulfilling one half of our unsolvable quandary), but to have them governed by human compassion and judgment (thus fulfilling the other half). This is the answer my fictional UN came to, and the real UN is headed in this same direction in their report: “Decisions over life and death in armed conflict may require compassion and intuition.” And thus the first step toward Partials, in whatever form they eventually take, has been made. In the real world.

If you share my fascination with this kind of thing, I encourage you to read the entire UN report, even if only to experience the brain-melting collision of science fiction and reality. It continues to blow my mind that we have literally reached the threshold that stands at the center of so many science fiction stories; by developing autonomous robotic weapons, we’re setting the stage for the Terminator, or the Matrix, or any number of apocalyptic science fiction scenarios. Think I’m overreacting? The UN doesn’t. We’re giving machines the power and freedom to kill us, and we’re barreling forward so fast our decisions can’t keep up with our own technology. I’ll close with the most chilling line in the report:
“If left too long to its own devices, the matter will, quite literally, be taken out of human hands.”

2 Responses to “Art and Life, Imitating Each Other”

  1. Nate Hatfield says:

    It is very cool that you called this to the extent that you have.

    I don’t feel quite as urgently about this as you do. There is debate about how far away we are from military AI, but the fact is that we still aren’t there. Unmanned aerial vehicles (UAVs) are still guided by human pilots. They are more like guided missiles than AI airplanes, in my opinion. So the UN report is as much science fiction as Partials is – it’s predicting what may happen and responding to its own predictions.

    And I’ve always been fairly amused by “rules of war” and the idea that we should enforce some sort of humanity on violent conflict (the ultimate inhumanity). War is, as they say, ultima ratio regum, or the last argument of kings. Meaning it’s what you turn to when there are no other options. When the rubber hits the road, when the ____ hits the fan, when it’s all on the line, you don’t break out your rulebook and see what you can or cannot do.

    The only real rules governing warfare are cost/benefit calculations: if I nuke Tel Aviv/Tehran/Washington, D.C., can I get away with it without losing power/sovereignty/etc.? The US is working this out right now. Drones are less of a “footprint” than boots on the ground. Pakistan would be up in arms (maybe quite literally) if US ground forces were conducting operations on their soil. They might even be more upset if manned airplanes were conducting the strikes. They’re still upset that UAVs are striking there, but not so upset that their representative government is doing everything it can to stop them. So far the benefits (the US gets to kill its perceived enemies) outweigh the costs (Pakistani people demonstrate while the world starts to think about what’s happening). This calculation very well may change in the future, but any UN resolution would only trail the real consequences that people in power pay attention to.

  2. Bryce says:

    This reminds me of Philip K. Dick’s short story “The Defenders.” Just another case of how science fiction writers saw it coming before the rest of us.

    http://www.gutenberg.org/ebooks/28767
