Aye, the Good Doctor

Another fine product of US Robots and Mechanical Men, Inc.

Meeting on a mild evening in the middle of May, the Beamers put our positronic brains to work on the venerable robot stories of Isaac Asimov, collected as I, Robot.  While the tales were definitely “old school” classic sf, we still found a number of relevant and humorous touchstones that make them valuable for even the newest fans, as well as some old attitudes that cling to their skin like vaporizing iron carbonyl “rust”.    

Things Humans WERE Meant to Know

Dipping into a companion collection (The Rest of the Robots, edited by the Good Doctor in 1964), I found a number of his story introductions and afterwords that put perspective onto what he was trying to accomplish with his pioneering robot stories in I, Robot.  Mainly, he was fighting against what he deemed the “Frankenstein complex”, the notion that human creations are inherently dangerous and liable to turn out to be threats to their creators, us. Since any inherently dangerous device would have safety systems built in, Asimov decided to do the same for robots and thus were born the Three Laws, unbreakable codes that would (or should) prevent any mechanical mayhem.  Much of the fun of an Asimov robot story is watching to see how he exploits the loopholes in the Three Laws or shows how such “loopholes” are more the product of human perception than robot logic.

Not Just Good Ideas, But the Laws!

When discussing the Three Laws (1. a robot shall not harm a human or allow a human to come to harm; 2. a robot shall obey humans; 3. a robot shall protect itself), we noted how they form an ethical code that is superior to most of the philosophies that humans typically follow.  They outdo the Golden Rule (treat others as you would wish to be treated) since they mandate taking active steps to prevent harm, not merely avoiding causing it.

No wonder that Susan Calvin, Asimov’s only female protagonist, denied that she treated robots like humans, since, as she put it, “robots are fundamentally decent beings”.  Chris was positively charmed by QT, the robot who, in the story “Reason”, logically concluded that humans could be neither his superiors nor his creators, but only earlier, lesser servants of the Master (the space station itself).  Of course, QT still treated his humans as valued, if soon to be retired, colleagues, despite their illogical insistence on human superiority.

Asimov, though starting off with the thought of robotic morality as a joke, did carry through on his idea.  In later stories, like “The Evitable Conflict”, he developed the idea of a Zeroth Law, that robots were to serve all humanity, which could mean inflicting some harm on one or more individual humans.  So, in the best tradition of sf, he was willing to extrapolate his idea to its fullest conclusion, even if it does leave humans a bit lower on the evolutionary totem pole than we might have wished to be.

[Postscript: Alan provided us with an earlier version of the story “Evidence” in which a rising politician is accused of being a robot and “proves” his humanity by striking a protester during a speech, thereby breaking the First Law.  However, in the earlier draft, his main accuser forces him to stand on his head, seemingly in obedience to the Second Law.  The draft gave no indication of why Dr. Asimov dropped the passage or its implication that a Second Law test undid the candidate’s chances.]

The Future Ain’t What It Used to Be

What struck us as less plausible were the advances in robotics that Asimov projected, with walking, talking robots following almost immediately on the early prototypes.  Like flying cars, the home companion robots of I, Robot seem a lot farther off in the future than even his generous 21st-century dates.  Liz noted that Robbie, the nursemaid robot, was delicate enough to pick up and whirl around a child, as well as clever enough to listen to and understand her retellings of fairy tales.  So much for cold equations when robots can be controlled by their desire to hear “Cinderella” told again and again.  Even with our current advances in facial and voice recognition and voice synthesis, it seemed to most of us that getting a mechanical system to react like an Asimovian robot would still take a technological breakthrough or three, though Nick was confident that the pieces were out there and only needed to be assembled into a unified package.

Stylin’ It, Old-School

Even as parts of the stories were still futuristic, there were elements that did date them back to the era of their writing.  Reading the collection for the third time, I focused on Asimov’s style, a “non-literary” type of writing that he liked to label “transparent prose”.  What I seemed to read, though, was a callback to the nineteenth-century Romantic novelists, including even the chatty omniscient “narrator” voice.  People never “said” when they could “snarl” their remarks, and fingers were not “raised” so much as “jabbed” into the air.  Liz objected that Asimovian prose was not “purple”, but he could clutter up a sentence with a string of adjectives (“Deep, dark, dank, dismal silence!”) and wax poetic (“The stream of high-speed electrons impinging upon the energy beam fluoresced into ultraspicules of intense light.  The beam stretched out into shrinking nothingness, a-glitter with dancing, shining motes.”) with the best of the pulp writers.  And she did remark that his engineers, Donovan and Powell, were a bit heavy with the banter, right out of a bad Abbott & Costello routine.

Man (Mainly European) is the Measure of All Things?

Less cheerfully, his sexism was a bit of a blot on his works.  We were in general agreement that the steady disdain directed at Susan Calvin would not have been warranted toward a male character, even allowing for her “prickly” nature, as Chris put it.  A male Calvin would have been “determined” and “dedicated”, a confirmed bachelor, never labeled an unmarriageable “spinster”.  And there were no other female role models, nor even many other female characters, aside from the somewhat shrill mother, Grace, in “Robbie”.  “Harridans in space”, as Nick called it.  No robots were given female names or attributes, even when they took stereotypically feminine roles like nursemaid, as Robbie did.  Kathy thought it was possible to read Grace’s reaction as jealousy of Robbie usurping her role as nurturer, but Robbie still did not read as “female” in any sense.  Husband George being unthreatened by Robbie was part and parcel with the patriarchal family pattern that Asimov projected into the future of 1998.

Perhaps even more disturbing were the undisguised racial undertones of the stories, where robots called humans “Master” and in turn were addressed as “boy”, even by their strongest advocate, Susan herself.  How, in the 1940s and ’50s, could anyone read “boy” as anything other than a racially charged epithet, a derogatory term meant to demean and belittle?  And yet, in the future of I, Robot, humans (presumably Euro-American) are tossing it around freely without even a bit of shame for its past association.  Weird, we thought.

Not quite as bad were the dismissive tones about anyone who objected to the “natural” progress of robots into human society, an attitude that often appeared as a condescending “Of course, the *labor unions* will object …” to the idea of millions of jobs being lost.  And why shouldn’t they object?  Fran recently posted a review (“How Robots and Algorithms Are Taking Over”) of Nicholas Carr’s book on automation, in which the claim is made that 50% of American jobs could be lost to robots and/or computers in the next 20 years.  What happens to a society with 50% unemployment?  Kathy thought creative careers could blossom, noting that in Scandinavian countries higher education is keyed to ability and interest, not to financial assets or future earnings.  Asimov, alas, had no answers for us.

Lest We Forget

But, all in all, he kept us amused, even when he left us confused, as in “Escape!”, where a psychologically stressed Brain plays practical jokes on a pair of helpless astronauts.  The plot bemused Chris, who did not see the humor, even as it reminded me of another story about helpless astronauts struggling with a mechanical mind unhinged by trying to hold to contradictory directives.  (“Open the pod bay doors, HAL”, Chris intoned, immediately seeing where I was going with the analogy.)  Even the psychedelic light shows were similar.

For all its flaws (and perhaps because of them), I, Robot is a classic work of sf, taking the idea of mechanical beings and working out its details with the kind of logic that delineates sf from other literature about worlds beyond the one in which we live.  As a re-read for us, it is a book that has perhaps lost a bit of its novelty and hence is slipping a touch in our regard (‘7’ now replacing ‘8’ in our ratings).  But returning to it every decade or so seemed not so much a chore as a labor of love.



  1. A distant cousin of mine discovered, in the bowels of one of BU’s libraries, a manuscript titled “Direct Evidence” that appears to be an early version of “Evidence,” the Stephen Byerley story in “I, Robot.” It has the feel of Asimov: narrative by way of dialogue.

    Much of the story is as included in “I, Robot,” but this draft does diverge. Here’s the relevant portion:

    Quinn: “Do you think you can? Do you suppose that your failure to make any attempt to disprove the robot charge – when you could easily, by breaking one of the Three Laws – does anything but convince the people that you are a robot?”

    Byerley: “All I see so far is that from being a rather vaguely known, but still largely obscure metropolitan lawyer, I have now become a world figure. You’re a good publicist.”

    Quinn: “But you are a robot.”

    Byerley: “So it’s been said, but not proven.”

    Quinn: “Are you familiar with the Three Laws of Robotics?”

    Byerley: “Of course.”

    Quinn: “Then, of course, you know that any robot with a positronic brain must follow the Three Laws.”

    Byerley: “I believe that is the case. But, I think that we’re both wasting our time with your stating of the obvious.”

    Quinn: “Do you think that all robots must obey the Second Law, ‘A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.’?”

    Byerley: “I’m not a robotics expert. I couldn’t say for certain.”

    Quinn: “I’d like you to stand on your head.”

    Byerley: “Good day, Mr. Quinn.”

    Quinn: “Stand on your head.”

    And he did.

    • Eugene R.

      Thank you, Alan, for giving us the whole excerpt from the early draft of “Evidence”.
