Archive for June, 2013

Some Dan Wells Short Fiction

Saturday, June 22nd, 2013

Two years ago I made a deliberate effort to teach myself how to write short fiction, since it’s not a skill I’d really worked on before. I had some expert help from friends like Mary Robinette Kowal, and a lot of false starts that never went anywhere, but I do think I’m getting better–at the very least, I was able to sell some stories, so somebody out there agrees. Here are two that I’m particularly excited about, with more to come in the future:

The Butcher of Khardov
I’m a big fan of games, and a year or two ago I threw myself headlong into building and painting (and occasionally playing) the Privateer Press game Warmachine, a steampunk-y miniature wargame with great models and a fantastic backstory. There are several armies in the game to choose from, each with a host of cool, flavorful characters, and I was instantly drawn to the Russian-inspired kingdom of Khador, and particularly to the crazed warrior Orsus Zoktavir, a semi-psychotic super-warrior haunted by some horrible event in his past. About a year ago Privateer started a new fiction initiative, hiring awesome genre authors like Larry Correia and Dave Gross to write stories about their game characters, and you can imagine my delight when they contacted me about filling in some of the tragic backstory of Orsus Zoktavir. How did this expert warrior become a crazed berserker? What led him to the massacre that earned him the nickname The Butcher of Khardov? And down at the root of it all, why did he name his axe Lola? I had a devilishly good time reaching deep into this guy’s past to dredge up old horrors, invent some new ones, and present it all through the eyes of a mind fractured by loss and hatred and guilt. It’s a dark story, and a sad story, but also a high-octane butt-kicking story, all rolled up into one: magic and monsters and giant robots and paranoid hallucinations and all the things I love to write about. Orsus is a great character, and I had a great time writing about him. You can find my novella here and/or here. It’s worth noting that Howard Tayler has a Privateer novella as well, coming out sometime in the next few months; I’ve read it, and it’s awesome.

A Knight in the Silk Purse
My editor for the Privateer project was Scott Taylor, and after working with me on that story he offered me another opportunity I couldn’t refuse: a spot in a shared-world anthology set in a cool fantasy world first explored in the Kickstarter-ed Tales of the Emerald Serpent. What attracted me to this idea was, again, a fantastic character: Kalomir is a sort of reverse Elric, an evil necromancer kept alive by a sentient and benevolent sword that forces him to wander the world doing good. I can’t give you much info on this story yet, because we’re still putting it together, but the shared-world nature of the storytelling has been a fantastic experience so far, in particular the chance to work with people like Juliet McKenna–the two of us are intertwining our stories a bit, which is a new thing for me, but very fun to play around with. The Knight in the Silk Purse anthology is still being Kickstarter-ed, so if you want to read the stories–and I assure you that you do–drop a few bucks on it.

The BookSmash Challenge

Tuesday, June 18th, 2013

So HarperCollins is doing a cool thing, and I wanted to let you all know about it. The idea is that ebooks could be so much more than they are, but so far nobody’s really figured out what that’s supposed to be. Right now we’re basically treating ebooks as print books, and the experience of reading them is more or less the same, give or take a few trade-offs. Aside from a couple of minor things, though, like changing font sizes and such, no one’s really leveraged the idea that this is a new technology with a lot of new possibilities. Maybe you like ebooks as they are–a lot of people do. But maybe you have an idea so brilliant, and an implementation so amazing, that we’ll wonder how we ever lived without it.

Which brings us to the cool thing. The publishing industry doesn’t really know what to do with ebooks, but maybe you do. Maybe you have some brilliant idea about what an ebook could be, or should be, and you’ve just never had a book to play with to put your idea together. Well, now you can use mine, and if you use it well you can win a bunch of money. HarperCollins has created the BookSmash Challenge as a contest to see what people can do with ebooks, and they’ve released the full digital assets of some of their books, including my PARTIALS, for you to do whatever you want with.

There’s a whole press release, which I won’t bore you with, but here’s the salient part:

Submissions will be accepted between June 6, 2013, and September 5, 2013, via http://booksmash.challengepost.com/. They can be developed as an app, enhanced book, or other digital reading experience, and should be available to readers on iOS and Android platforms, as well as across the web. Judging of the digital products will be based on originality of idea, implementation of idea, and potential impact.
 
From the eligible submissions, there will be four prizes awarded: Grand Prize, Runner-Up, Popular Choice, and the HarperCollins Recognition Award. A group of judges, including entrepreneur and author of Curation Nation Steve Rosenbaum, entertainment industry veteran Paul Vidich, co-founder of N3TWORK and former Apple Executive Erik Lammerding, and CEO of LiveDeal, Inc., Mike Edelhart, will review the entries and select the Grand Prize and Runner-Up winners. Additionally, the projects will be posted to the ChallengePost online gallery this Fall, where members of the public will be able to vote for their favorite submission for the Popular Choice category. Lastly, the HarperCollins Recognition Award, a non-cash prize, will be awarded to a participating large organization. A total of $25,000 in cash prizes will be awarded to the creators of the winning projects.

Sound like fun? Go for it. I’d love to see what you can do with PARTIALS, so let your imagination run loose. An app? An enhanced ebook? Something we haven’t even considered yet? Time to make the future.

Art and Life, Imitating Each Other

Thursday, June 6th, 2013

PARTIALS, and its sequels, are primarily about Kira and her personal journey through the post-apocalyptic world. In designing that world, my editor Jordan Brown and I did a lot of background work (a LOT of background work) to explain exactly how and why the world ended, and where RM came from, and where the Partials came from. It was important for us to know all of this in order to present the world correctly, but a lot of it wasn’t directly relevant to Kira’s journey, so it never came up in the books. You hear hints about it, but you never get a full description of exactly what happened and why. This makes the books stronger, I think, because they keep the focus tight and personal, but we still wanted to use that other info. Eventually we came up with the idea of creating a bunch of in-world documents, ‘collected’ by the conspiracy theorist/hermit/crazy person Afa Demoux, cataloging the fall of the human race. This is similar to what we did with the book trailers (which, you may have noticed, are also part of the Afa Demoux Archive). Most of those documents were slipped into the back of the trade paperback edition of PARTIALS, but some of them are floating around online.

The top document at that link is a United Nations resolution mandating “human-like emotion” in artificial sentients. The background behind this is hinted at in the books, but here’s the full story: America got involved in a very long and deadly war in the Middle East, eventually centering on Iran and resulting in catastrophic losses for all sides. This war made heavy use of drones, with increasingly complex intelligence, which Jordan and I thought was a nice guess at where things were headed in the real world–keep in mind that we were doing this back in 2010, before combat drones were as overwhelmingly prevalent as they are today. As drone attacks increased in 2011 and 2012, Jordan and I both cringed at the news and patted each other on the back for calling it correctly; such are the confusing emotions of writing science fiction :)

Okay, back in the fictional backstory again: several years after the war in Iran, during the infamous Isolation War in China, the drones were back in action and causing more and more problems, for the same reasons we see them causing problems in the real world: they don’t distinguish friend from foe the same way a human does, and they have a tendency to cause a lot of collateral damage, including the loss of innocent life. In 2049 the UN addressed the question directly and decided that any battlefield combatant, particularly one with artificial intelligence, must have some kind of real, human emotion to govern their decisions. To quote the document: “A human soldier seeks war as a means of protecting human life; a construct seeks only the completion of military objectives. While it may be possible to ‘program’ certain failsafes and behaviors into a machine or artificial species, it is simpler and safer to remove the problem completely by imbuing that species with the necessary emotions and ethics to keep itself in check. … They should be able to identify a child, for example, not just as a non-combatant but as a precious life and an object of love and protection. Our constructs will not be heartless killing machines, but thinking—and more importantly feeling—individuals.”

Jordan and I saw this as the final piece of the puzzle leading to the creation of the Partials: the world needs soldiers, but doesn’t want to risk humans, and can no longer bear the consequences of amoral drone technology, so they turn to the burgeoning field of biotech and build the perfect soldiers. The Partials can not only fight our wars for us, they can protect innocents on the field of battle, make ethical choices about combatants and prisoners, and wage war not as indifferent killers, but as a means to a peaceful end. That seems like a great idea, but this decision is also the beginning of humanity’s downfall. Look at it from the Partials’ point of view: we built them to love humans, and then told them to kill humans. We built them to love us, and then when they came home to us from a successful war we treated them like subhuman garbage, marginalized and ignored and oppressed because we refused to see them as equals. In trying to separate ourselves from the consequences and responsibilities of war, we sowed the seeds of our own destruction.

But! This is where it gets cool and/or scary. Back in the real world, Jordan and I were patting ourselves on the back, delighted that we’d not only come up with a cool story idea, but based it on some real-life events and politics. Then, in April of 2013, the UN started down the very same road we put them on in our science fiction book. This document is not one of mine; it’s a real one from the real UN–not a resolution yet, but a report about the ongoing use of combat drones. Some of the vocabulary is different, of course–I called them “fully-artificial drone combatants,” and the UN calls them “lethal autonomous robotics”–but the idea is the same. Artificially intelligent weapons are replacing human soldiers on the battlefield, and they are making questionable or outright unconscionable decisions, and the world is upset. Whether you call it warfare or “extrajudicial execution,” we are seeing what happens when we send unfeeling machines out to kill people, and we don’t like it. In a haunting echo of my fictional UN statement, this real one declares that “They raise far-reaching concerns about the protection of life during war and peace. … robots should not have the power of life and death over human beings.” Did you feel that deep, rumbling shift in your brain? Because your entire world just changed. Things that used to be science fiction–like robots having the power of life and death over human beings–are not science fiction anymore. These things are real, and real governments are dealing with them in real situations.

This is one of my favorite sections of the report, because it illuminates the unsolvable moral web at the heart of this issue; I’ll present it to you in two halves: “Some argue that robots could never meet the requirements of international humanitarian law (IHL) or international human rights law (IHRL), and that, even if they could, as a matter of principle robots should not be granted the power to decide who should live and die. These critics call for a blanket ban on their development, production and use.” This sounds pretty reasonable, right? Nobody wants robots running around just killing whoever they want to (or whoever their programming tells them to). Banning robotic weapon systems seems like a good idea. But now here’s the second half of the paragraph: “To others, such technological advances–if kept within proper bounds–represent legitimate military advances, which could in some respects even help to make armed conflict more humane and save lives on all sides. According to this argument, to reject this technology altogether could amount to not properly protecting life.” That’s the gut-punch, because this ALSO sounds completely reasonable. By banning robotic weapons you are forcing human soldiers into the line of fire, inevitably resulting in human casualties. If we can prevent those casualties we should, right? No one would argue that we should willingly risk more human life. Except we just did, in a roundabout way, in the first half of this very paragraph. Both sides of this argument have really, really good points.

The best answer, of course, is to just not have any more wars, but until you can convince all the tyrants and dictators and terrorists of the world to abide by the same principle, that’s not a feasible option. The next-best answer, then, would be to have robotic drones replace our soldiers (thus fulfilling one half of our unsolvable quandary), but governed by human compassion and judgment (thus fulfilling the other half). This is the answer my fictional UN came to, and the real UN is headed in this same direction in their report: “Decisions over life and death in armed conflict may require compassion and intuition.” And thus the first step toward Partials, in whatever form they eventually take, has been made. In the real world.

If you share my fascination with this kind of thing, I encourage you to read the entire UN report, even if only to experience the brain-melting collision of science fiction and reality. It continues to blow my mind that we have literally reached the threshold that stands at the center of so many science fiction stories; by developing autonomous robotic weapons, we’re setting the stage for the Terminator, or the Matrix, or any number of apocalyptic science fictional scenarios. Think I’m overreacting? The UN doesn’t. We’re giving machines the power and freedom to kill us, and we’re barreling forward so fast our decisions can’t keep up with our own technology. I’ll close with the most chilling line in the report:
“If left too long to its own devices, the matter will, quite literally, be taken out of human hands.”

The Superman Problem, and my bet with my brother

Tuesday, June 4th, 2013

So there’s a new Superman movie coming out soon, and this has prompted many conversations about “The Superman Problem.” I’ve talked about this on Writing Excuses before, and it boils down to this:

“If your main character will always make the right decision and can always defeat any bad guy, your story is boring because it has no tension.”

Here’s the thing about The Superman Problem: it’s a complete and utter fallacy. No character actually has this problem unless they’re being written poorly. The best writers will always find ways to put their characters into situations where there is no clear “right” choice, and will strive to pit their characters against conflicts and obstacles they can’t easily overcome; this applies to Superman just as much as it applies to anyone else. Yes, Superman can beat up any villain–so what? Is every good story in the world solved by the main character physically dominating everyone else? If we truly believe what our mothers tell us about violence never solving anything, Superman’s ability to punch bad guys is arguably the most useless super ability ever; a good Superman story, like a good anyone story, will test his wits, his judgment, his will, his emotions, and so on. In The Dark Knight, Batman was able to beat up the Joker with no problem, but nobody complained that that made the story bad, because the story wasn’t about beating him up; it was about order and chaos and self-sacrifice. Just because the Superman movies haven’t really done that before doesn’t mean they never can; it just means we’re still waiting for a movie that treats the character as intelligently as the comics do.

One of my favorite Superman stories is the graphic novel Kingdom Come, about a hypothetical future where super-beings have gotten completely out of hand, becoming more like roving gangs than heroes, and Superman tries to restore order. Sure, he can beat them all up if he wants to, but that’s exactly the point of the story: he doesn’t want to spend his life beating people up. He rounds up all the supers and puts them in a giant prison, and then…what then? Does he just keep them locked up forever? Does he kill them? What if the humans decide to kill them–does Superman beat up or kill the humans in retaliation, or maybe even pre-emptively? Is it even Superman’s place to make these decisions? This is not a story that can be solved by violence and domination, because those are the problem, not the solution; the story isn’t asking if Superman has enough power to stop the bad guys, it’s asking how Superman should use his power in the first place. These are questions the human race has never fully answered for itself (where is the line between safety and freedom? Between punishment and reformation? Between leadership and tyranny?), and the fact that Superman always tries to do the right thing doesn’t give him any magical answers the rest of us don’t have access to. Most of us always try to do the right thing, and we still manage to be flawed, conflicted, fascinating people in spite of that.

So I guess what I’m trying to say is that Superman is a deeper character than most people give him credit for, and that the upcoming movie will be walking a tightrope between awesomeness and crappiness. I have high hopes that it will be awesome, but it’s just so easy to get him wrong.

Tangential to this, in a Twitter discussion about the movie, my brother (who considers the Superman Problem to be insurmountable) (because he is foolish) declared that the only way to make Superman interesting is to take away his powers. Obviously I disagree, but he has the weight of movie-based evidence on his side. The previous Superman movies have all relied on kryptonite and other tricks as a way of weakening Supes, trying to solve the Superman Problem from completely the wrong direction–take away his powers and suddenly you can put his life in danger, or stop him from beating up a bad guy, or whatever. The comics don’t rely on this nearly as much, but for the movies it’s pretty much standard procedure. Being eternally optimistic, I bet my brother that this movie wouldn’t do that: that it would solve the Superman Problem the right way, by making the core conflict something that can’t be solved by punches. Sure, there will be fighting, but there will also be more: a cloudy moral quagmire, an impossible choice, or something similarly unsolvable to create the real tension of the story. I don’t know what this will be yet, but based on the trailers I expect it to focus, as Kingdom Come does, on the nature of power. They won’t take away his powers because his sheer overpowering-ness will be at the heart of the conflict.

So: Rob took my bet, and to make it interesting we wagered a cool 20 bucks. Since he actually owes me a couple thousand dollars at the moment, this is less interesting than you might think, but neither of us is really a gambler anyway. The exact terms of the bet are these:

1) The final arbiters will be Rob and I, based on our own viewing of the movie.

2) If the movie has or mentions kryptonite, that’s not an automatic loss; it has to actually be used to drain Superman’s power.

3) We’re only counting powers he displays in this movie. Just because he’s not likely to fly around the world backwards and reverse time in a grotesque deus ex machina doesn’t mean I lose the bet :)

4) Only actual, in-story power loss counts. If the writers conveniently ‘forget’ a power during a key scene, fabricating artificial tension by, for example, having him punch something that could much more easily be laser-visioned, that’s different. What we’re looking for is a specific point in the movie where Superman is weakened by the loss of a power he’d already used.

The bet is really a separate issue from the Superman Problem, but I’m curious to hear what you think about both of them. Do you think I’ll win, or my brother? Beyond that, do you think they’ll solve the Superman Problem? And what are your opinions on the movie in general, or the trailers? Personally, I’m delighted they’ve broken away from the “evil real estate agent” nonsense they keep getting into with Lex Luthor, using Zod and Faora instead. Based on the most recent trailer it seems like they’re presenting the movie as less of a superhero story and more of an alien invasion story, which is a really cool direction to take it.