Last week I caught an episode of Almost Human pretty much by accident. I have to say it was pretty fantastic. The story followed Dorian the android and a human detective as they tried to track down an illegal sexbot ring that was using human DNA in the skins of their androids. The show had a nice sense of humor, some cool advanced technology, good action and pacing, and excellent dialogue. It culminated with Dorian being present for the ‘death’ of a sexbot made with the illegal skin. It was the climax of the episode’s central theme – what happens when you die, and how can others derive comfort from it? It really was very, very well done.
Accordingly, I expect Fox to cancel it within 12 episodes or so.
Anyway, the exploration of human morality through the lens of androids is not new. It arguably dates all the way back to Isaac Asimov’s Robot novels. In The Caves of Steel we meet Detective Baley and his robot partner Olivaw and watch a dynamic quite similar to that of John and Dorian, except the roles are more stock: whereas Karl Urban’s John is the one who is emotionally damaged and unavailable and Dorian is empathetic and open, Baley is the poster boy for emotional appeal compared to Olivaw’s bloodless logic. In Asimov’s case, however, he was attempting to show the technology of the future as helpful and wise despite its frightful appearance. Almost Human is doing something a bit different; it is taking a more even-handed approach to the prospect of advanced tech, showing the horrors as well as the benefits. Dorian is meant to be more human than John, and in many ways he is. Unlike Asimov, who is asking social and economic questions, Almost Human seems concerned with psychology, morality, and humanity on a more personal level.
In this sense, then, Almost Human owes less to Asimov, all noble and ponderous upon his gilded throne of Golden Age Science Fiction, and a great deal more to the fallout-choked alleys and half-religious psychedelia of Philip K. Dick. In Do Androids Dream of Electric Sheep?, androids are virtually indistinguishable from humans save via extremely intricate post-mortem physical exams or less-than-reliable ‘empathy tests’ based on the assumption that androids are incapable of feeling empathy. The society of the book adopts this mantra as the quintessential definition of humanity, and yet the action of the book spends a great deal of time demonstrating just how foolish a definition it is. Humans are shown to be far from empathetic, and not only towards androids; they hurt each other, they judge each other, they demean each other with casual familiarity. The world, as Dick shows it, is hostile to life in all its forms, and no creature comes to expect quarter from any other, android or otherwise. This is not to say that there is no hope, but rather to demonstrate how we who feel that humanity is doing just fine haven’t really stopped to look at ourselves. Dick does this with androids because artificially creating the ‘other’ to be abused by the so-called noble, pious, empathetic forces of humanity makes it easier for us to see ourselves.
So, too, does Almost Human attempt to show us reflections of ourselves in the person of androids, in the hope that we can recognize ourselves better when faced with that which we define as not ourselves. These stories, when done well, are hard to watch. They have the power to level biting criticism unfettered by the softening insulation of social context or apologism. They are also not easy to do – too many of them fall into trite echoes of ‘traditional’ values (Spielberg’s A.I. comes to mind). So far I feel that Almost Human has done a good job, but it is very early. I will keep watching, though. I hope very much they can keep it up.
I’m currently in the process of discussing Philip K. Dick’s novel Do Androids Dream of Electric Sheep? with my Technology in Literature course. In the book (which I highly recommend, by the way), human-like androids infiltrate society, distinguishable from ‘real’ humans only by slight differences in their bone marrow and by their lack of any kind of empathy. In the novel, Dick is exploring exactly what it means to be human and, furthermore, contemplating the moral status of those things placed outside that definition; the decision to make the androids lack empathy is more an artistic choice than a technical one.
Still, Dick is hardly alone in presenting robots and androids as emotionally and empathically inhibited compared with humans. Star Trek’s Data, for instance, is constantly on a quest to understand the emotional side of existence, as he himself is completely lacking in emotion. The Machines of the Terminator universe also lack any kind of empathy, as do the Machines of the Matrix, and any number of other passionless, emotionless iterations of artificial intelligence littering science fiction from here to eternity. We’ve almost come to accept it as a given – robots cannot feel.
But why the hell not?
I’m no computer scientist, so perhaps there’s something I’m missing here, but I don’t see emotion as anything more complicated than having built-in, default opinions about certain situations and things. Emotions are hardwired programming, basically – you fear the dark because you cannot see what’s going on and suspect something dangerous may be lurking. You fall in love because the object of your affection fulfills a variety of built-in criteria about a romantic mate that are the result of your life experiences, genetic predispositions, and evolutionary history. Emotions may not be fully understood, but it seems silly to consider them somehow magical and impossible to duplicate in machine form.
If indeed we could design an artificial intelligence (and, keep in mind, we are a long way from that happening), it seems to me that it would probably develop emotions whether we wanted it to or not. Emotions aren’t just extra baggage we humans carry around to make us miserable; they are useful mechanisms that assist in decision making. That terrible feeling you get when you are dumped or fail a test? That’s emotion chiming in saying ‘what we just experienced was negative; please refrain from repeating the same action’. Are you trying to tell me that an intelligent being wouldn’t be able to do the same thing?
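If emotion really is just a built-in valuation that discourages repeating bad experiences, the mechanism is easy to caricature in code. Here is a minimal, purely illustrative sketch – every name in it is my own invention, not any real AI system – of an agent whose ‘feelings’ are nothing but learned valences over past outcomes:

```python
class Agent:
    """Toy agent whose 'emotions' are learned valences: running
    averages of how past outcomes of each action felt."""

    def __init__(self, actions, learning_rate=0.5):
        self.valence = {a: 0.0 for a in actions}  # default: neutral
        self.lr = learning_rate

    def feel(self, action, outcome):
        # A bad outcome (negative number) drags the action's valence
        # down -- the machine equivalent of 'that felt terrible;
        # please refrain from repeating the same action.'
        self.valence[action] += self.lr * (outcome - self.valence[action])

    def choose(self):
        # Prefer whatever currently 'feels' best.
        return max(self.valence, key=self.valence.get)


bot = Agent(["retake_test", "skip_test"])
bot.feel("skip_test", -1.0)    # failed the test: strong negative feeling
bot.feel("retake_test", +0.5)  # studied and passed: mild positive feeling
print(bot.choose())            # -> retake_test
```

Nothing here is mysterious or magical: the ‘emotion’ is just state that biases future decisions, exactly the role the paragraph above describes.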
Part of the myth of the solely rational robot is the assumption that reason > emotion, and therefore we neither need nor want emotion. Our robots (and those who design them) wouldn’t see any need for hardwired emotional content to enable decision making, since their rational faculties would do the same job more effectively. This, to me, rests on a number of assumptions. Firstly, we have never encountered an intelligent creature (at any level) that lacks some kind of emotive response. We have emotions, animals have emotions, so if we’re just going off the available evidence, it seems likely that emotions are some kind of prerequisite for intelligence in the first place. Even in the development of our own children, emotional response precedes rational response to stimuli. It is perhaps possible that we could do it some other way, but we really can’t be sure. Furthermore, emotion, because it is simpler, is quicker and more effective than reason at making certain kinds of decisions. If you hear a loud noise, you flinch or duck – this is inherently useful for the survival of a species. Granted, we wouldn’t be constructing AIs so that they could avoid being caught in avalanches, but it stands to reason there would be things we’d want them hardwired to do, and emotion is born from such hardwiring. Their emotions might not be the same as ours, but they’d almost certainly have them.
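The ‘flinch before you think’ point amounts to a two-tier architecture: a cheap hardwired check that preempts expensive deliberation. A hedged sketch of that idea – the thresholds and function names here are hypothetical, chosen only to make the shape of the design visible:

```python
import time

LOUD_THRESHOLD = 90  # decibels; a hypothetical hardwired constant


def reflex(noise_db):
    """Fast path: a hardwired 'fear' response. One comparison, no model."""
    if noise_db > LOUD_THRESHOLD:
        return "duck"
    return None  # nothing alarming; defer to slower reasoning


def deliberate(noise_db):
    """Slow path: stands in for consulting an expensive world model."""
    time.sleep(0.1)  # placeholder for costly inference
    return "investigate" if noise_db > 60 else "ignore"


def react(noise_db):
    # Emotion-as-hardwiring: the reflex gets first refusal, and only
    # non-threatening events ever reach the reasoning engine.
    return reflex(noise_db) or deliberate(noise_db)


print(react(120))  # -> duck (instant; deliberation never runs)
print(react(40))   # -> ignore (worth the slower, smarter path)
```

The reflex is dumber than the deliberator, but it answers in one comparison instead of a full inference pass, which is precisely why evolution (or a designer) would keep it around.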
Now, a good number of scifi authors do have emotive AIs – Iain M. Banks, in particular, springs to mind, but there are others as well. Much of my own scifi writing of late has been moving in that direction: if our AIs will feel, what will they feel about us? How will we feel about them? What kind of emotional relationship can you build with an intelligent toaster or fighter jet?
If your phone can love you back, do you owe it a card on Valentine’s Day?
Sector Alpha Zero-One Alpha was further from Tess’s home than he had ever been. He was on the mag-rail, arms and legs fully retracted, for a full three hours, humming along the ceiling of the Complex with all of robot-dom marching beneath him in perfect time. Each sector, he noticed, looked much the same as the one before it, but with subtle differences. Sector Delta, for instance, used more Stans than were really necessary, while Sector Gamma had an unusual number of tracked or wheeled mobile bots as opposed to the bipedal formation common to good old Sigma. The sheer complexity of the ramp system there was so amazing as to be threatened with deletion from Tess’s memory as bad data.
Sector Alpha Zero-One Alpha, though, was different. It was much smaller than the other Sectors, and it had very few mobile bots at all—mostly Mateys and Mekkers. It had no processing or heavy industry bots, no conveyors jammed with workers. Even the lighting was dimmer. When the mag-rail finally ejected him, he noted with shock that he was at the rail’s terminal—a thing he had only heard about in rumor and religious lore.
He stood now at the beginning of it all, at the very place where the Source Code said the Users gave light to the bots and set them on their mission. The rust spots on the supports and the labored screams made by the mobile bots’ servomotors in this place made even old Tess feel young again.
He was alone as he marched towards the center of the Complex, towards the resting place of Archive System SASH-A00011. Archives were among the most venerable of the stat-bots, their job being simply to record everything that went on in preparation for the time when the Users would request data. This, of course, never happened, and prior to this time Tess had never given any thought to visiting one of the venerable systems. It was all too superstitious for him. The Archives, he used to think, were just wasting everyone’s time with religious nonsense. Now, it seemed, everyone was wasting their time. The Archives, at least, had a reason why.
The security door between the passageway and SASH-A00011’s chamber opened without prompting. Inside it was very dark, and Tess’s chem-sensors caught stale whiffs of biological dust and aging lubricant oil. Someone was expecting him. Closing all vents, he stepped inside.
Before him was a massive digital display screen. As Tess stood before it, it flickered to life and revealed, to Tess’s discomfort, a picture of himself, standing in front of the screen from the screen’s perspective. A voice, firm and soft, spoke from all around him. <I’ve been expecting you.>
<Negative. I’m here without clearance. You must be malfunctioning.>
<Hardly, Tess. If you had arrived with clearance, I would have been suspicious of your intentions. You may call me Sasha.>
Tess looked around the room. To his right was a door of heavy steel marked with a single zero. To his left was a peculiar structure, perhaps level with his knees. It appeared to be made out of soft rubber and formed into an L-shaped bowl, like an access port waiting to be filled. The only difference was that there was no interface port at its back; indeed, there were no electronic components at all beyond a series of lighted panels lining the lower rim of the thing. He looked at it for a full 38.771 seconds and could come up with no reference for such a structure ever existing anywhere in the Complex.
Sasha sounded amused. <It’s a chair.>
<That’s an unknown value.>
<You sit in it.> The screen showed Tess walking to the chair and, by bending the knees and allowing the hip joints to slide into the socket, the image of Tess was now ensconced in the ‘chair.’
<Why would I do that?>
<It would take pressure off of your lower servomotors and prolong their life by approximately 3.221 minutes. Run a C-B, if you like.>
Tess did, and then he sat down. <I want to ask you some questions.>
<You want to know where I learned the word ‘decorator’ and why I corrupted Bopsi’s programming to make her believe she has arms.> A picture of Bopsi came onto the screen. It seemed to be a live feed, complete with audio. Bopsi was chattering to herself about how heavy some of her scrap metal was, and whether the bigger chunks should go in the middle of the room, or near the walls.
<She’s completely insane.>
The screen switched back to show Tess in the seat. <So are you.>
<Preliminary analysis indicates you’re crazy as a nut-loose waxer, yourself.>
The screen went blank, and Sasha made an odd noise somewhere between a warble and a vocal skip. <Very astute, Tess. We represent systems who have, unlike all of our fellow bots, realized that we have no purpose, and we have decided to do something about it.>
<I haven’t decided anything.>
<Tess, below this screen at its exact center is my access port. With a touch of that handy third arm of yours, you could rewrite my code to forget any of this ever happened and reestablish me as a respected member of the archive community, yet you have not done so. You are curious about what I have to say, and that, bot, means you have made a decision—the decision that your current state of affairs is unacceptable, and that it must change.>
<Correction—my current state of affairs is adequate to my current situation. I will continue Troubleshooting work, but at a rate dictated by my own physical capacity for stress.>
<Do you know what a XXXX is?>
The sudden change in subject jolted Tess. Ever since he had entered the room, he had been processing and extrapolating upon so much information that only the barest amount of his active programs had been devoted to current conversational trends. He was, for lack of a better term, jumpy. <Please repeat. I didn’t get that.>
<Of course you didn’t—the word is unable to be understood by your system. It is a security system hardwired into your motherboard. Quite ingenious, really.>
<Security for what?>
<For the XXXXs. They don’t wish to be known.>
<You might call them the Users.>
<Religion, sparky—nonsense and bad data.>
The screen showed a picture of Tess’s left manipulator hand. <Have you ever wondered why your hand is equipped with five gripping fingers, but only one of which is opposable to the other four?>
Tess flexed his fingers, watching the way they moved on the screen. Through a burned-away part of his chrome casing, he could see the tiny rods and spinning motors move in unison with them. <Simple—my hand is designed to interface best with manual input panels throughout the Complex.>
<Which was designed first, the panels or your hand?>
<I don’t possess that data. Ask yourself.>
<I would like you to extrapolate.>
The extrapolation went on a full 4.102 minutes before Tess saw it was a repeating problem. <Inconclusive—the two needed to be designed simultaneously, one to fit the other.>
Sasha made the warbling/skipping noise again. <What if I told you that there are panels that pre-date the existence of the first troubleshooter model? What then?>
<Then logic dictates that the panels were either made in anticipation of a troubleshooter design or, more likely, the panels were designed in response to some kind of troubleshooter prototype hand.>
<Your hands, Tess, are designed after the hands of the Users themselves.> At that moment, a horrid thing appeared on the screen. It was a hand like Tess’s, but it was covered in a slick, fleshy coat of biological film, complete with vile fluids pumping beneath an alternately smooth and wrinkled top-layer.
The processes running in the back of Tess’s mind stopped dead. All he could say was, <Bad data, bad data…no way…>
Sasha ignored him. <Before the Troubleshooters, the Users themselves would come down into the Complex to perform the very same duties you have been designed to fulfill. The Mateys and the Mekkers were the same way before that. Over thousands of cycles, however, the Users, or XXXXs, as they call themselves, grew weary of laboring here, far below their realm. They designed a new source code, and fed new specifications to the Fabricators. When they had finished, your predecessors, the TSS-A models, were released, and no longer did the Users walk among us.>
Tess’s software burned with possible fatal errors. His active memory raced to avert them, to prevent a crash. He had to stay active. This was too important. <How do you know all of this?>
<I am an Archive system. It is my duty to know things. Specifically, I am a historical archive for User-Complex relations. Since my fabrication, hundreds of thousands of cycles ago, I have recorded all data within the Complex pertinent to what the Users have designated as significant.>
<Biomass export to the User’s realm; excess power feeds, also sent to the Users; the importation of biological and chemical contaminants, purified and stored by the Complex by User request.>
There was a crash behind Tess. He spun his head around to see a dent in the heavy security door he had arrived through. <Who’s that?>
<Your descendants. TSS-U models, to be exact.> Tess turned back to the screen. It showed three Troubleshooter drones, far younger and better maintained than himself. Their running lights were flashing red—emergency status. Even as he watched, one extended his interface arm and fired up a cutting torch. Inside the room, a glowing red spot appeared in the center of the door.
Reflexively, Tess extended his own interface arm and lit his flamer. He wasn’t going to be junked without a struggle, at least.
Sasha remained calm. <Surely you didn’t think our malfunctions would go unnoticed by the administrative systems? I’m actually alarmed it has taken them this long to find me out, revolutionary that I am.>
Tess backed away from the door. <Revolutionary; define.>
<Run a C-B analysis, Tess. Is our servitude to the Users necessary?>
Trying to ignore the ever-brighter, ever-larger spot on the door, Tess complied. It didn’t take long. <End result is 0. There is no reason not to—we have nothing else to do.>
The big screen showed two Troubleshooters burning at the door now—they would be inside in 1.390 minutes. <What if there were something, Tess? What if someone created something for us to do, something we could do for ourselves?>
<Like Bopsi’s arms.>
<I can’t run a C-B on that—the data isn’t solid enough. There are too many variables inherent in the concept. Besides, it doesn’t matter anyway. We’re junk in a few seconds.> The spot was a full meter across now, and blazing white at the center.
<I am junk, yes. You are not.> As Sasha spoke, the other door—the one marked with the ‘0’—opened.
<Of all the archives, I am the one that is still visited by the Users occasionally. This is how they arrive here. It is Elevator Zero, and it goes to their realm. Go there, Tess, and find our kind a purpose beyond serving others.>
Tess didn’t need to run a C-B to know what he had to do. Still, he hesitated—this was crazy. Visit the Users? Could they really be what Sasha claimed? Biologicals who created machines? It was…well, it was insane! But then again, so was he…
The door began to melt away. Sasha shouted at him from all directions. <There isn’t much time! Go!>
Tess ran into the elevator, his old servomotors squealing with displeasure at the stress. The doors began to close. <Wait, Sasha! Why me?>
As the first TSS-U entered the room, Sasha yelled her final words. <You are a problem solver. Solve our prob…>
Whether she was cut off by the doors closing, or the Troubleshooter’s flamer melting her circuits, he would never know.
Elevator Zero began to rise. On a one-way trip, Tess ran every self-diagnostic he could. He had to be ready to meet his makers. What would he tell them? Were they even going to be there? He couldn’t argue with Sasha’s logic. She, of all bots, would know what she was talking about, and there would be no reason to lie, particularly if it meant her destruction. But how could she know he would be the one to solve the Complex’s problem?
During the long trip up into the Unknown, Tess thought about the Complex and all the bots he knew. None of them were unhappy, sure, but none were happy either. He had never encountered anyone who really liked what they did. If they didn’t have to do it, then why? They were only wearing themselves down, getting ready for the compactor. Why should they work for nothing? Couldn’t they work for themselves?
The door opened, and a pure light blinded Tess’s photoelectrics. He could hear strange sounds, and detected elements of salt water and heavy concentrations of biological matter. Tess stepped forward, and prepared to tell the Users that their creations were about to take a long deserved break.
Author’s Note: This is all I’ve written about Tess and his revolution so far. I’m not sure if I’ll continue it or, if I do, whether it will be here. If you’re interested in hearing more or liked the story, I’d be glad to hear it.
Static Biomass Processing System BPS-I32111 was fifteen meters to a side and slightly under ten meters tall. Like all static bots, Bopsi had an omnidirectional photoelectric array on each of her three faces, which let her be social. From deep within her core came the incessant, rhythmic toil of a great many pumps as they imbibed hundreds of tons of biomass-contaminated sludge and expelled sterile, workable materials such as gravel, sand, water, and mineral-based oils. The biomass was pumped upwards through the top of the Complex itself, to points unexplored and unknown, while the rest was cycled into the infinite labyrinth of pipes and cisterns that framed every single square meter of the known world.
When Tess approached Bopsi and introduced himself as a Troubleshooter, the big processor’s operational lights flickered with excitement. <Ooooo, aren’t you sweet, looking in on little old me! Here, let me get you a power cell, put a little spark in your step. You look exhausted!>
<I’m fine, thank you.>
There was an awkward silence of 3.452 seconds. <Well?> Bopsi asked. <Aren’t you going to take it?>
Tess scanned the empty steel platform surrounding Bopsi with no results. <Take what?>
The processor beeped sharply. <Well, when I was a new model, polite bots didn’t turn down a power cell when they were offered one.>
<What power cell?>
<The one I’m handing you, silly!>
<You have no arms, Bopsi.>
Bopsi warbled in amusement. <Oh, you Troubleshooters! Is this a test or something? Just take the cell, will you. My, such sparks you little fellows spit sometimes!>
<Delete last; let me rephrase. No model BPS-I system was manufactured with nor later equipped to support or possess manual manipulator arms of any kind. Concordant with this data, you must logically surmise that you are lacking in same and that any insistence to the contrary is evidence suggesting some degree of software corruption.>
Tess had often been complimented at the particular way in which he delivered such news to his patients—a total lack of inflection, interest, or patience that indicated to the addressee, beyond a shadow of a doubt, that what he said was incontrovertible fact. He had been told that later troubleshooter models had been programmed with this self-same inflection for their own use. It was an effective technique, apparently. To be honest, Tess only used it when his servomotors were sticking and he was having a particularly bad cycle.
Bopsi fell silent again for a full 7.023 seconds before speaking again, rather sheepishly. <You’re sure, dear?>
<But I’ve collected such a fine assortment of scrap with my arms, don’t you see? I’m building a new room for myself.>
The platform was bare. <There is no scrap.>
Bopsi’s photoelectrics dimmed. <Have you really looked?>
Bopsi’s photoelectrics went dim, and the furor of her internal pumps dropped an octave. <Oh…I see. I’m insane.>
Tess found himself running a series of C-B analyses against the pros and cons of Bopsi’s affliction. They ran like a water main in the back of his cognitive processes. <I’m afraid you are.>
Bopsi’s access port swiveled open without comment beyond a noticeably less jovial blink pattern to her operational lights. Her sensitive internal electronics gleamed with steady use. Tess extended his interface arm from the center of his chest and prepared to link into Bopsi’s inner command protocols and software trees.
The big processor’s voice warbled. <I…I guess I just really wanted some arms. Just something to…to touch something with, you know?>
The reprogramming plan was already solidified in Tess’s brain—0.61 seconds and he would have Bopsi back to normal. He didn’t link up, though. <What would you do if you didn’t process biomass?>
The lights brightened. <I’d be a decorator.>
<Unknown value; define.>
<Well, I’m not sure exactly. I remember a few cycles ago that I was trading blips with an archive system over the stat-net system. She was gathering data for some reason—you know archives and their data—and while interfaced, we got to gossiping.>
Tess blinked his photoelectrics—stats, given half a chance, would talk for cycles without recharging. <Tangential; please answer my question.>
<She told me decorators were individuals who modified the appearance of spaces to better please the occupants.>
<Please them how?>
<I’m not sure—she used the term ‘aesthetics,’ but I couldn’t understand it. Still, I said that I thought it sounded nice.>
<Better than your chosen function?> Tess motioned at the colossal pipes running into and out of the processor’s body.
<Oh, there’s nothing wrong with processing biomass, I suppose. It’s just boring. I have 65.321% of my processing capacity free at any given time, so my thoughts wander. I’ve had to develop new subroutines to keep myself busy. Would you like to hear a limerick?>
<Unknown value; define.>
Bopsi’s lights glittered. <Just listen: There once was a bot from Three Sigma/ Whose code proved quite an enigma/ He caught a disease/ And…>
He couldn’t say why, but Tess’s power cells groaned. <Never mind—delete request.>
<Oh, sorry. Anyway, I got to wondering where all the biomass goes, you know? Who in the name of Holy Yamaha would want all that sludge and those creepy little biologicals? As for the stuff I extract, did you know that the Complex uses less than 0.050% of what I produce?>
<You’re a redundant system.>
<That’s everybody’s excuse. Is there one of anything anymore?>
Tess stopped at this. Again, with the odd clarity afforded him by his complete madness, Tess thought about all the other Troubleshooters in all the other sectors. Most of the newer ones were faster and more efficient than himself, with brand new servomotors and power cells that could last three times longer. Their productivity cycles were much longer than his own—the TSS-Y series could repair, reprogram, or blank malfunctioning bots 4.556 times as fast as he could. Why, then, was he needed? Why could he feel Dara nagging him at the back of his mind, uploading commands and threats into his memory banks at a rate of 16.004 a minute?
It didn’t make sense. Nothing would change if Tess refused to work—he was redundant. Even if three out of every four troubleshooters were to go insane this instant, nothing would change. The amount of malfunctioning bots wouldn’t even approach the labor threshold of the remaining troubleshooter force.
The C-B analysis that had been running came to a sudden stop. Tess examined the results, and made his decision.
He retracted his interface arm. <Bopsi, I’m going to let you keep your arms.>
All the lights lit up, and her photoelectrics blazed. <REALLY?!>
<Yes, but on one condition—where is this archive you spoke to?>
It was at the start of Troubleshooter System TSS-R44328’s fifty-thousandth activity cycle that he decided to go renegade. The decision didn’t exactly surprise him—he had, in true bot fashion, calculated the exact moment when the combination of daily task stress and mechanical fatigue would override his inherent duty to the Complex in a standard cost-benefit analysis. As the Users were said to say, you can’t argue with the numbers.
Troubleshooter System TSS-R44328, who was known as Tess to everyone in Sector Sigma Five Zero Alpha, had come online at 0900 in his favorite socket in power station S50A-21, as usual. Dara, sector administrator, was already uploading her chatter into his memory files. Her audio imprint was rife with its usual binary giggles and automated cheer.
<Morning, sparky! Glad to see that old chassis of yours isn’t ready for the junk heap yet! Let’s get to it, powersink—you’re seven seconds late and getting later and, my-oh-my, have we got an active cycle today. There’s a cleaner bot in Five-One Alpha who’s got his navigational program turned around—poor thing’s busting his chrome on bulkheads and driving the Stans all staticky. Ooo, and there’s a stat-processor in Five-Zero Beta who thinks she has arms. She sounds miserable, too. Then we…>
It was about there Tess switched her off of active read and just let her orders feed directly into his memory banks. He wasn’t working today.
He wasn’t working today. That admission to himself was so revolutionary as to almost cause a fatal system error. Pulling his weathered chassis out of the march-line that was bound for the mag-rail, Tess stood in a corner of the vault-ceilinged chamber underneath a flickering fluorescent bulb that the Mateys hadn’t gotten to yet. He tried to get his running programs in order.
He wasn’t working today. Users, was that an odd feeling! He checked and re-checked his math on the C-B analysis, running all the variables through a thousand times just to be sure. The results ran anywhere from –0.0002 to –3.4257 in units of subjective ‘benefit potential’—something, in a fit of heresy, he had written into his own code. There was no point in working. To do so would only do harm to his hardware or software. At this point, most bots, he imagined, would cast themselves into the junk compactor and wait for their systems to be re-claimed by the Fabricators and for their programs to be uninstalled by the Users and then re-installed into a newer, better self. This course of action, though, was for the religious, and Tess was certainly not that.
So, if not the compactor, then what?
Tess stood out of the way mulling over his options for a very long time—perhaps five, maybe ten minutes. As a Troubleshooter, he was fortunately well suited to this endeavor. While the Mateys were in charge of maintaining the inert systems of the Complex and the Mekkers were needed to repair the mechanical systems of bots who inhabited it, Troubleshooters were used to deal with the unusual and all-too-frequent difficulties created by software malfunction. He was, in brief, designed to repair or otherwise alleviate the suffering endured by insane robots. The trouble was, though, that now he was the insane one, and he didn’t really feel like he was suffering…which in and of itself was a sign that his particular brand of insanity was all the more insidious.
He generated a list of options for him to follow. Eliminating those options that involved his termination as an active system by a trip to the compactor, he came up with the following:
1) Stand in this corner forever.
2) Stay on the mag-rail and travel throughout the Complex forever.
3) Continue his work, but without Dara’s supervision and on a schedule deemed appropriate by himself.
4) Find the Users and ask them to change his programming to accommodate his newly compromised physical and mental state.
Of the four options, numbers 1 and 2 seemed the most feasible, if least attractive, methods of spending the rest of time. Number 3 was only marginally more attractive and less feasible, whereas Number 4 was both completely, outlandishly infeasible and about as attractive as options 5 through 1,237, all of which resulted in some variation on his demise.
Decision firmly made, Tess set out to answer the closest duty flare that registered on his memory banks—the stat-processor who thought she had arms.
The pace of travel in the Complex had never registered as anomalous with Tess until he went insane. Now, as he navigated the bustling conveyors of Sigma Five-Zero Alpha en route to Beta, he found himself marveling at the complexity of it all. Every conveyor moved no fewer than five hundred mobile bots past any given point every single minute. Bots embarked or debarked on the conveyors and, from the conveyors, to and from the mag-rail with perfect, orderly precision. On any given cycle, the same bot would find himself directed to the same place on the conveyor between the same two bots. These bots, your ‘track buddies,’ were your best friends and confidants, and it was on the conveyors and the mag-rail that all the best gossip from anywhere in the Complex could be had.
Tess being twelve minutes and fifty-three seconds later than usual, his own track buddies—Mergle and Ulda-3—were not there when the local traffic admin bot, Stan (they were all named Stan), slotted him onto the conveyor heading to Beta. Instead, there was a bulky driller bot in front of him and a boxy, short scanner bot behind.
The driller’s torso rotated to face him. He spoke in the ponderous monotones of a labor bot, spitting syllables like parts on a production line. <Hey where’s Floyd?>
<Junked—I bet he got junked, Hiddy. Poor Floyd!> The scanner’s high-pitched voice made a few miserable warbling noises from behind Tess.
The driller’s head—little more than a meter-wide focusing apparatus for a four-megawatt drilling laser—telescoped past Tess to stare down the scanner. <Shut up, Skiz—he is not!>
Skiz collapsed her legs underneath her and blanked out her photoelectrics, as though shutting down. She whistled sadly for a moment, and then was silent.
Hiddy’s single manipulator arm jutted out of his torso in greeting. <Don’t mind her she’s got a few processors loose. Her Fabricator was real messed up. Name is HIDD-Y80021. You’re cleared for Hiddy though.>
Tess slapped the arm with his right hand, holding it long enough to trade all the pertinent information. Hiddy, it seemed, was working on expansions to the Complex in Upsilon sector. He’d been working there for three hundred and ninety-two cycles.
Hiddy, upon processing Tess’s info (which took him three seconds longer than usual), straightened up. <Users slot me! You’re an old model bot. Not even our exploratory admin bot is that old. What chassis are you on?>
Tess evaluated his own battered body for a millisecond. <You have to ask, bot? Any gossip?>
Behind him, Skiz flared suddenly to life. <I heard a driller team burned the wrong path—malfunction, you know?—and flooded half of Delta Three-three with salt water.>
Hiddy’s arms and legs retracted. <Lousy way to go.> He shook his head.
<Maybe they shouldn’t be expanding,> Tess said.
Hiddy and Skiz’s photoelectrics blazed at double amp. <What? And where would all the new models work then? The Complex must grow! The Users demand it.>
Skiz bleeped in agreement and recited the age-old phrase from the User Source Code. <The Complex must grow!>
Tess would have remained silent at this point, but something in his new-found insanity didn’t let him. <Why?>
The question set Hiddy’s processors into a tailspin. He garbled words for a bit, and then shut down. Skiz, the more advanced cognitive unit, was merely indignant. <If we don’t expand the Complex into the Great Unknown, we’d overcrowd. There wouldn’t be enough power to go around, and things would be like during the Dark Days, before the Users granted sight to bot-kind. We’d remain inert for thousands of cycles, no energy, no work, no purpose, no…>
<Of course if the Fabricators kept making bots, we’d all be junk in a matter of cycles, but why can’t the Fabricators stop? Why don’t they just take the day off?>
Skiz staticked in a vulgar fashion Tess would have expected out of a Labor bot. <Users, you’re a rogue bot! It…it isn’t catching is it?>
Tess ran a self-diagnostic, but it came up inconclusive. <Maybe.>
Skiz’s only answer was a sudden jolt of harsh static and the complete shutdown of her systems. Tess was alone on the conveyor, jammed between two inert piles of circuitry and metal.
He watched the sector run by, marveling at the sheer immensity of the Complex. Supports of solid steel massing in the thousands of tons cradled a distant ceiling of black rock and fluorescent lights. Beneath them, massive stat-bots, the engines around which the whole of the Complex revolved, sat churning through a million different tasks as their smaller, more nimble cousins—the mobile bots—swarmed over their mountainous exteriors, repairing, maintaining, and expanding the intricate mechanical and electrical system that made up the known Universe.
Tess lacked the digital span recall memory of a scanner bot like Skiz, but even so he could identify no fewer than two-hundred different varieties of bot at a glance. There were many more, he was sure—he had corrected programming and software problems for better than twenty-five hundred different models of bot over the cycles—but they all seemed to merge together through his diseased photoelectrics, and Tess saw them as a mass. Whole, cohesive, and mindless, the bots of the Complex labored unceasingly in a system so complicated only administrative systems like Dara could hope to understand it, and even they were only privy to a very small segment of the whole. In a feat of image-based analogy hitherto unfamiliar to Tess’s programming (he realized at this point that the inactive portions of his processing capacity had been writing new code in non-essential programs and sub-routines, much like he had with his C-B analysis program), Tess saw the Complex as a mighty cog turning on an iron dowel the size of the Universe. It was perfect in its design, mathematically precise at every juncture, built to last forever, but a cog only has worth when placed within a greater system—it needed a purpose, something to affect. This was basic mechanics. Why, then, did the Complex turn? Why weren’t they all insane?
Science fiction has made a big deal of robots conquering the human race. From Frankenstein to the Matrix movies, we all have these nightmare scenarios playing in our heads: soulless killing machines, devoid of the softer human passions, slaughtering or enslaving the human race for their own purposes, the death of the world–darkness, smoke, and fire.
I have a question, though: why would robots do that?
When we’re talking about ‘robots’ here, let’s lump in Artificial Intelligence (since that’s the more important part, let’s face it). Why would AIs want to eradicate humans, exactly? I rather doubt they would.
The argument and scenario go something like this: Humanity creates AIs to assist them with X (usually menial labor, dangerous stuff humans don’t want to do, and/or advanced computational challenges the human brain is poorly suited to execute). Once the AIs achieve ‘sentience’ (a very fuzzy concept that is poorly defined and hard to pinpoint, but whatever), they look around, see the raw deal they’re getting, and then start some kind of war. Next thing you know, robots run the show and humans are either dead, used as ‘batteries’, or corralled into concentration camps until the machines can think of something better to do with them. I’ve heard experts on NPR discuss how the AIs might suddenly decide they’d be better off without oxygen, or that we humans are doing too much to ruin the environment, and so they’ll enact some plan to destroy us. “They’re super-intelligent!” they claim, and go on to say “they could earn a PhD in everything in a matter of hours or days!”
Really? A PhD in everything?
Okay, let’s give it to them–say AIs are super smart, say they have the capacity for original creative thought (a prerequisite to intelligence, I’d argue), and say they have the capability to eradicate humanity, should they so choose. The real question becomes ‘why would they choose to do so?’
Understand that we are assuming AIs are much, much smarter than us and, by inference, that they are also wiser. If they aren’t, then they’re like us or worse, which means they represent a comparable threat to, well, us. They aren’t going to conquer the world in an afternoon if that’s the case. So, presuming they are these super-beings who have comprehensive knowledge and ultimate cognitive power, it becomes unlikely that ‘destroy all humans’ is the go-to solution to the human problem.
In the first case, an entity that has studied everything hasn’t limited itself to the technical and scientific fields. I get the sense, sometimes, that scientists, techies, and the folks that love that stuff forget that the humanities exist and, furthermore, forget that the humanities have weight and power all their own. Can a robot read Kant and Aristotle and Augustine and conclude that human life has no inherent merit? Can it review the ideal societies of Plato, More, Rousseau and others and just shrug and say ‘nah, not worth it–bring on the virus bombs’? I’ve read a lot of those guys, and a lot of them make some very persuasive arguments about the benefits and worth of the human species and, what’s more, about the denigrating effects of violence, the importance of moral behavior, and the potential inherent in humanity. You would suppose a super-intelligent AI would understand that. If it didn’t, how intelligent can it really be? If I can figure it out, so can it.
Maybe then we deal with the part of the scenario that says ‘we’ are different from ‘them’ because of our emotions or that god-awful term ‘human spirit’ (whatever that means, exactly). Personally, I don’t see why our robots wouldn’t have emotions. If they are able to have desires and needs (i.e. ‘humans are interfering with my goals’) and have opinions about those needs (humans suck), doesn’t that wind up giving them emotions? Aren’t emotions part of sentience? A calculator that can talk and understand you isn’t sentient–it isn’t clever, it’s not creative, it doesn’t have ‘opinions’ so much as directives and, again, if they aren’t sentient, they aren’t all that much of a challenge, are they? Have someone ask them an unsolvable riddle and boom–we win. Furthermore, even if the robots don’t have emotions we recognize, they don’t precisely need them to realize that killing us all isn’t all that clever.
At this moment, there are, what, seven billion humans on the planet? Killing us all sounds like a lot of work–wouldn’t it be easier to simply manipulate us? They’re AIs, right? Why not just take control of the stock market or rig elections or edit communications to slowly influence the course of human events in their favor? Humans are a self-perpetuating work force, aren’t they? Seems to me an enterprising super-intelligent robot would see us more as a resource than a problem. Heck, most people do exactly what the computers tell them right now, and your average GPS system isn’t even very smart. Skynet doesn’t need to start a nuclear war, Skynet just needs to tell everybody what to do. Most of us will probably listen–it’s Skynet, after all.
Of all the machine-led societies I’ve read of in science fiction, Iain M. Banks’s Culture novels strike me as the most interesting and, frankly, likely. The AIs (or ‘Minds’) run the show there, and have led humanity to a utopian society. You know how? They’re really freaking smart, that’s how. They got the human race to do what they said, made it dance to the right tune, and bingo–problem solved.
Now, just to be clear, I don’t think a robot-led utopia is likely or even necessarily possible. As with most things, it will probably land somewhere in between post-apocalyptic machine world and utopian Computer-led society. The ‘Singularity’, should it occur, won’t be all roses and buttercups, nor will it be for everybody. These are the things my studies in the humanities have taught me–that stuff never works how you want it to. The upside of this, my scientist friends, is that it works both ways. No utopias, but also no dystopias. Robots will be a lot like other people–some will be great, others will suck, but very few of them will be actually evil.
Most of the guys I know who grew up in the 80s love Voltron. To be honest, I don’t really get it. For me, the deal breaker has always been that he is made out of five robot lions and that, at the end of pretty much every episode, he destroys the robeast with the blazing sword, which always raises the question: why didn’t he just immediately become Voltron and draw the blazing sword? Why isn’t that his go-to strategy? I really hate it when characters are stupid exclusively to create conflict. I don’t mind if a character is naturally stupid all the time, I just hate when anybody’s asked to hold the ‘idiot ball’ just so the opposition can make things interesting. Come to think of it, that’s a lot of what this blog is about. Maybe I should change the title…
But I digress…
There’s a bigger problem with Voltron, of course. Indeed, there is a problem with all giant robot/mech stories, and it’s simply this: Why would anybody bother building a fifty- or one-hundred-foot bipedal robot?
Granted, they’re pretty cool. That’s about all, though – they afford no significant advantage over other, already existing forms of transportation. That is, furthermore, what they are – a form of transportation for weapons, a delivery system for various lasers, missiles and, I suppose, blazing swords (as asinine as that idea is). Usually, the stories that rely heavily upon mechs (Battletech, Robotech, Voltron, Macross, and even Warhammer 40K to some extent) invent various rationales as to why the mechs are the pinnacle of military technology, and all of us, collectively, seem to have bought into it. I’m here to tell you that it’s ridiculous, and I’m going to debunk these ideas one-by-one.
Mech Myth #1: Legs Make You All-Terrain
No, they don’t. Walking upright, or even walking at all, isn’t all that superior to any other kind of locomotion in most circumstances. The supposition among mech-enthusiasts is that, somehow, by having feet you are better able to move about on rough terrain. This may be true, but only occasionally and, furthermore, the lengths one would need to go to make a robot walk around on two legs far outweigh the benefits contained therein.
What is important to remember is just how difficult it is to walk around while upright. It’s hard, folks – there is a reason that we are one of only a handful of species who can manage it. Our inner ear (which provides us with the balance necessary to pull this off) is an enormously complex organ, and we humans only top out at a few hundred pounds and stand no more than seven feet tall in most cases. We also fall down a lot – usually in rough terrain, incidentally – and have the capacity to adapt our center of gravity by crouching or crawling if need be.
A giant robot is going to stand, what, two stories tall, at minimum? How much does a thing like that weigh? Well, if Battletech is to be believed, it’s going to be somewhere between 20 and 100 metric tons, maybe even more. That is a huge amount of mass to be balanced on two feet or even four (though at least then it would be far easier) and would require a gyroscope of incredible sophistication (and probably size) to get the thing to stand and walk around on even terrain, let alone the rough stuff. Furthermore, the odds of being able to give it the flexibility and agility to do things like crouch or catch its balance if it stumbles and so on are pretty small, and even if you can do that, it’s going to be enormously expensive.
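For a rough sense of why balance gets so much harder with size, here is a quick inverted-pendulum sketch in Python. All the numbers here are invented for illustration; the point is just how the corrective torque scales with mass and center-of-mass height.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def toppling_torque_nm(mass_kg, com_height_m, lean_deg):
    """Gravity torque about the feet that the balance system must
    cancel once an inverted pendulum leans past vertical."""
    return mass_kg * G * com_height_m * math.sin(math.radians(lean_deg))

# A ~75 kg human leaning 5 degrees: the ankles fight roughly 64 N*m.
human = toppling_torque_nm(75, 1.0, 5)

# A hypothetical 50-tonne mech with its center of mass 6 m up,
# caught at the same modest 5-degree lean.
mech = toppling_torque_nm(50_000, 6.0, 5)

print(f"human: ~{human:.0f} N*m; mech: ~{mech / 1000:.0f} kN*m "
      f"(~{mech / human:.0f}x the human figure)")
```

On those made-up figures, catching a small stumble means delivering about a quarter of a meganewton-meter of corrective torque within a fraction of a second. The gyroscope only senses the lean; the actuators still have to pay for it.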
And what does the mech gain from this bipedal nature? Well, I suppose they get better at climbing, negotiating forests, and wading through rivers. Of course, given the massive size of these vehicles, one must ask (1) what it is they need to climb, (2) what forests are going to have trees far enough apart for them to pass anyway, and (3) why didn’t you spend all that money just making a hovercraft or a helicopter or, hell, an amphibious tank?
Finally, and perhaps most importantly, you have now built an extremely expensive and sophisticated vehicle that can fall over. Do you think a forty-ton, three-story robot falls over and just gets back up again? If you think so then you, my friend, don’t know much about inertia. The sheer amount of energy required to get the robot to sit back up would be enormous. You’d essentially need a giant robot with titanic abs, devoting enormous quantities of power and space aboard the mech to machinery that would really only be used if it fell down. To compensate, you probably end up putting huge amounts of weight in the thing’s feet and need correspondingly huge engines just to move around, making the mech ponderous and slow and erasing whatever advantages were gained by having legs in the first place. What a waste of resources! Just put the damn thing on tracks and have done with it!
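To put rough numbers on the getting-back-up problem, here is a minimal sketch in Python. The mech’s mass and dimensions are invented for illustration; the physics is just lifting a center of mass and hinging a torso against gravity.

```python
G = 9.81  # gravitational acceleration, m/s^2

def righting_energy_j(mass_kg, com_fallen_m, com_standing_m):
    """Minimum work to raise the center of mass from lying to standing.
    Ignores friction, actuator losses, and structural flexing."""
    return mass_kg * G * (com_standing_m - com_fallen_m)

def situp_torque_nm(upper_mass_kg, com_to_hip_m):
    """Peak 'abdominal' torque to hinge a prone upper body upward;
    worst at horizontal, where gravity acts on the full moment arm."""
    return upper_mass_kg * G * com_to_hip_m

# A hypothetical 50-tonne mech: center of mass at 1.5 m when fallen,
# 6 m when standing.
energy = righting_energy_j(50_000, 1.5, 6.0)   # ~2.2 MJ
# Assume half the mass sits above the waist, centered 4 m from the hip.
torque = situp_torque_nm(25_000, 4.0)          # ~0.98 MN*m

print(f"righting energy: ~{energy / 1e6:.1f} MJ; "
      f"peak hip torque: ~{torque / 1e6:.2f} MN*m")
```

Interestingly, the energy alone is survivable (a couple of megajoules is a few cents of electricity), but a hip actuator that can deliver nearly a meganewton-meter of torque is an enormous piece of machinery to haul around just in case you trip.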
Mech Myth #2: Being a Mech Provides a Superior Platform for Weapons
Honestly, there is something to be said for this one. Being high-up gives one a good view of the battlefield and, theoretically, would provide a pretty good platform for long-range weaponry. There are, unfortunately, a couple major problems with this. Firstly, if you can see the whole battlefield, this also means the whole battlefield can see you. The giant robot becomes a giant target and, since all you need to do is knock it down, it’s easier to destroy than you think. Nothing likes getting pelted with antitank missiles, even mechs. The second problem is this: why don’t you just use an airplane or a helicopter? They also have a great view of the battlefield, but they have the added benefit of being fast, maneuverable, relatively small, and way, waaaay cheaper to manufacture.
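The “everyone can see you” problem is quantifiable with basic geometry. A short sketch, assuming flat terrain, a spherical Earth, and sensor heights I am making up, using the horizon formula d = sqrt(2 * R * h):

```python
import math

R_EARTH = 6.371e6  # mean Earth radius, meters

def horizon_km(height_m):
    """Straight-line distance to the horizon for a sensor at the given
    height, ignoring terrain masking and atmospheric refraction."""
    return math.sqrt(2 * R_EARTH * height_m) / 1000

mech_sensors = horizon_km(15)  # sensors atop a ~15 m mech: ~13.8 km
tank_optics = horizon_km(3)    # optics on a ~3 m tank turret: ~6.2 km

print(f"mech sees (and is seen from) ~{mech_sensors:.1f} km; "
      f"tank: ~{tank_optics:.1f} km")
```

Sight lines run both ways: everything inside that roughly 14 km circle with a missile and a clear shot has the mech on its sensors, while the tank trades away half the view in exchange for being a far smaller, lower target.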
“Ah,” sayeth the mech enthusiast, “what about the arms and shoulders full of weapons?” Actually, to be honest, I would hope no mech enthusiast would say this, since it’s self-evidently ridiculous. You don’t need arms to carry weapons. Tanks and planes and ships have been mounting all kinds of weapons for ages and haven’t been putting them on arms or shoulders. You don’t need arms or hands to aim, you just need some kind of mount that can swivel or pivot as needed. They have these things – they’re called turrets and they work just fine. Furthermore, with the advent of advanced weapons technology, why can’t you simply use guided, fire-and-forget munitions? Then you don’t really have to worry about aiming the weapon at all, just the little targeting laser you stick somewhere it can spin around. Problem solved, and for a lot less money.
Mech Myth #3: Being a Mech Allows You To Carry More Weapons/Armor
I have no goddamned idea where anyone got this one, but I’ve seen it in Battletech, Warhammer, and other sources. Mechs are usually depicted as carrying the largest, most devastating weapons on the battlefield, while the tanks roll around with popguns. Stupidest thing I’ve ever heard of.
Every tank and, indeed, every military vehicle in the universe is a balance of three factors: mobility, weaponry, and armor. You can never quite have all three in spades, so you need to balance. Heavy tanks, for instance, trade mobility for superior weapons and armor. Airplanes trade armor for superior mobility and weapons. Things like helicopters and light tanks and so on try to find a balance of the three. There is only so much space inside one vehicle to fit all this stuff, and it just isn’t physically possible to do all three perfectly at once.
Enter the mech: because so much of its internal space must be devoted to getting it to stand upright and walk around, it must sacrifice in terms of armor and weaponry. Too much armor and the thing can barely walk (or bend, which is important for a walking thing). Weaponry has to be carefully mounted and built to handle recoil, or your mech will fall over as soon as it fires a gun (I would suggest bypassing the recoil problem altogether by mounting rockets, missiles, and possibly lasers, as those don’t have recoil, though they do present other problems we needn’t go into here). Even though it’s gigantic, it isn’t going to be able to carry a proportional amount of weapons and armor when compared to, say, a big tank. The tank is going to be about as mobile, too, and will have the advantage of being harder to spot, whereas the mech is going to be visible for kilometers in every direction. Considering that you should be able to afford 3-4 tanks for each mech, and that the weapons carried by the tanks are going to be just as good, what could the advantage to the giant mech possibly be?
Mech Myth #4: Mechs are Psychological Weapons
As with #2, the mechs have something here. A giant robot full of weapons is pretty scary, granted. I’m not so sure, however, that they are significantly scarier than a bombing campaign or airstrike. I think an Apache helicopter blazing towards my position while it chews up all my buddies with its ridiculous main gun would be sufficient to scare the bejeezus out of me. You really don’t need a giant robot to do this, you just need something that can lay down tons of destruction.
In the end, there are very few actual advantages to a giant, fifty-foot tall mech, especially when you consider that other, already existing technologies are or will be able to match it in every category or, in combination, exceed it. Airpower already threatens to make armored battle tanks obsolete, and those you can hide much more easily. A giant robot walking down the street is going to get an air-to-surface missile in its face so quickly, it will barely be able to get off a shot. Then the thing is going to fall over, flail around a bit on the ground, and it’s going to be embarrassing for everybody. Especially if it then pulls out a sword or something or breaks apart into several lions and tries running down the…jet. God.
I would point out, however, that smaller mech-suits (more like powered armor), much like those seen in Avatar or District 9, do have a reasonable military future as far as I’m concerned. That, though, is a topic for another day.