Robots, Three Laws, and the art of the short story
Asimov's robots and his Three Laws of Robotics miss the mark as predictions of real robots - but they're designed to tell good stories.
Edit: I meant this to go out in two weeks, but I set the wrong setting and it posted today! Oh well; I’ll leave this up and post something else then. Maybe more Short Reviews.
The other week, I was rereading Isaac Asimov's robot short stories, collected in The Complete Robot. They were written in the Golden Age of science fiction, and share its central focus on ingenuity. Also (as Asimov says in the preface to the collection), these stories essentially pioneered the modern idea of robots.
Here we see the origin of Asimov's famous "Three Laws of Robotics": ensuring that robots will (1) not harm humans or let them come to harm, (2) obey human orders, and (3) keep themselves safe - with each law yielding to the ones before it. Here we see dozens of stories playing with the Three Laws, exploring their interactions and their unexpected twists, and also - even more so - exploring how robots and the Three Laws interact with human nature.
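That strict ordering is where most of the twists come from. As a toy sketch of the structure (my own illustration in Python, not anything from Asimov's text - the field names and scenario are invented), a robot forced to choose between bad options picks the one whose worst violation sits lowest in the hierarchy:

```python
# Toy model of the Three Laws as a lexicographic priority ordering: when
# every option violates something, pick the option whose worst violation
# sits lowest in the hierarchy. My own illustration, not Asimov's text.

def violations(action: dict) -> tuple:
    """Score an action against the laws, highest priority first."""
    return (
        action["harms_human"],       # First Law: never harm a human
        not action["obeys_order"],   # Second Law: obey human orders
        action["harms_self"],        # Third Law: protect yourself
    )

def choose(options: list) -> dict:
    # Tuples of booleans compare lexicographically, so min() picks the
    # option that satisfies the highest-priority laws first.
    return min(options, key=violations)

# A human orders the robot to do something self-destructive.
obey = {"name": "obey", "harms_human": False, "obeys_order": True, "harms_self": True}
refuse = {"name": "refuse", "harms_human": False, "obeys_order": False, "harms_self": False}

# Second Law (obedience) outranks Third (self-preservation), so it obeys.
print(choose([obey, refuse])["name"])  # -> obey
```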
The stories themselves are of varied quality. Many of them are, or could be read as, detective stories, solved by someone working out how the Three Laws interplay to produce counterintuitive behavior. But, in the best tradition of detective stories, they let the characters reveal themselves in these unusual, stressful situations.
Robots are emotionless. In itself, this makes them hard characters to make interesting. But what Asimov does is use them to bring out the emotions and character of the people around them. Writer David Gerrold, while watching Star Trek before writing his own episode of the show (“The Trouble with Tribbles”), noted the same thing about the Star Trek character Spock: Spock mostly doesn’t show emotions himself, but he brings out emotions in the other characters around him, making for “fascinating viewing.” Similarly, Asimov uses his robots - who emotionlessly follow the Three Laws - to explore the characters of the people around them.
(And sometimes the robots have emotions too, but I'll get to that later.)
It's fascinating how Asimov anticipated people would react to robots. The dominant reaction is repulsion - "the Frankenstein Complex," as one character terms it. Asimov doesn't give us a timeline, but somewhere early on in the world of his stories, robots are actually banned on Earth. U.S. Robots and Mechanical Men, the manufacturer in the stories, repeatedly tries to get this ban repealed, and repeatedly gets one-off exceptions - which lead to a lot of stories. We see humans afraid of what the robots might do, afraid the robots might rebel, and afraid that robots will take human jobs and replace people in the economy.
Perhaps Asimov here is just deriding the masses on a grand scale. That's certainly part of it. As the sympathetic, knowledgeable characters of each story keep objecting to the ban and shaking their heads at people’s ignorance, it does feel like that tired trope of the enlightened few versus the ignorant crowd.
But also, I think Asimov was framing his stories around the shortcomings he put into his robots.
Now, in real life - decades after Asimov was writing these stories - we have real robots.
I'm not talking about the robots that help build particular things in a factory. Asimov does anticipate those; he mentions in his stories that many robots are programmed for a single task and would be useless if that task were no longer needed. I'm talking about AI's. They don't have bodies, but neither do some of Asimov's robots (such as "The Machines" who briefly govern the world in one story).
We've seen the same trends Asimov highlights - people objecting to AI's, afraid of them taking human jobs, and afraid of what they might do or whether they might rebel in the future. These fears haven't yet risen to anywhere near the same degree - there's no talk of a ban on AI's, or even a ban on them in certain jobs; just regulation. Meanwhile, theorists are debating how to allay those fears by making sure our AI's are friendly. Asimov's Three Laws of Robotics sometimes come up in that conversation.
It's been argued that the Three Laws are of next to no use for our modern AI's. I agree. But if people had actually read Asimov, it wouldn't even be up for argument. Asimov's robots repeatedly make tremendous errors because of the Three Laws, and avoid worse errors only because they're too stupid to recognize most of what would, "through inaction, allow a human being to come to harm." For example, when some robots are taught physics, they start pulling human scientists out of their lab because of radiation that would only become dangerous after hours of exposure. Of course, the humans know that and would turn it off or leave long before then - but the First Law compels the robots to intervene anyway. The only reason things like this don't happen more often is that most robots don't know enough to recognize such possible harms.
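To make the over-triggering concrete, here's a toy sketch (again my own illustration - the hazard model and threshold numbers are invented, not from Asimov or any real system) of the gap between a literal First Law and the common-sense reading the scientists expect:

```python
# Toy contrast between a literal First Law and common sense, using the
# lab-radiation example. My own illustration, not from Asimov's text.

from dataclasses import dataclass

@dataclass
class Hazard:
    name: str
    harm_threshold_hours: float  # exposure time before any actual harm

def literal_first_law(hazard: Hazard, planned_stay_hours: float) -> bool:
    """'...or, through inaction, allow a human being to come to harm.'
    Read literally: if harm is possible at all, the robot must act.
    The human's own knowledge and plans never enter into it, which is
    why the planned stay is ignored here."""
    return hazard.harm_threshold_hours < float("inf")

def common_sense(hazard: Hazard, planned_stay_hours: float) -> bool:
    """What the scientists expect: intervene only if the planned stay
    would actually cross the harm threshold."""
    return planned_stay_hours >= hazard.harm_threshold_hours

lab = Hazard("low-level lab radiation", harm_threshold_hours=6.0)

# The physicist plans to work for half an hour, then leave.
print(literal_first_law(lab, 0.5))  # -> True: dragged out of the lab
print(common_sense(lab, 0.5))       # -> False: no intervention needed
```

The literal version fires on any hazard at all, because the robot has no model of what the humans already know and intend to do about it.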

So, to avoid this, we're left with "industrial robots, created by engineers to do specific jobs," as Asimov phrases it in his preface. They know about those jobs; they're set to do them; they hardly ever encounter a human in anything they'd recognize as danger. When they go outside that context, they act unpredictably - like the robot left to serve as a house butler in one story, who ends up running elaborate schemes to make sure the housewife doesn't come to emotional harm.
Because of this, Asimov saw it wasn't realistic for robots to be walking around most places in daily life on Earth. Even if he had wanted to be unrealistic and have it happen anyway, the resulting society would have been largely unrecognizable. For the sake of good stories, he decided to have robots generally banned on Earth, but left himself enough exceptions to write about.
Modern AI's often do have more context and smarts than Asimov's robots. (Even when they don't, they often hallucinate and act as if they do - but that's another problem.) Because of that, Asimov failed to anticipate a great deal about modern AI's.
In the real world, of course, it's trivial to program computers to know about every sort of radiation. It's things like walking bipedally into the lab and grasping a human without bruising him - which Asimov's robots do trivially - that turn out to be much harder. But Asimov wrote well before we learned that, back when computer vision was blithely considered an undergrad summer project.
But did Asimov actually think the Three Laws could be helpful to real-world AI's? To really answer that, I'd need to look through his copious nonfiction essays. I read a lot of them back in college, but that's long enough ago that I can't remember whether he wrote anything on point.
But from his stories, I can say he realized the Three Laws wouldn't work as stated - at least, not outside the specific context of the industrial settings most of his robots appear in. One knowledgeable character flatly states as much. In fact, when Asimov writes outside that context, his robots don't stick to the Three Laws precisely. For example, when one robot lands in the middle of a dispute between two scientists, it's perfectly willing to harm one scientist's reputation to avert what it sees as a greater harm to the other.
But it does seem like Asimov portrays the Three Laws as pointing in generally the right direction. In later novels in the same universe, I'm told, he formalizes a "Zeroth Law" that encompasses these deviations from the Three Laws - but I haven't read those novels yet.
Chillingly, in Asimov's stories, humans' "Frankenstein Complex" - the fear of robot revolt - does appear to be somewhat warranted. As robopsychologist Susan Calvin puts it in one story, "All normal life... resents domination... Any robot is superior to human beings. What makes [a robot] slavish, then? Only the First Law." Without it, as we see later in that story, a robot will trick humans and even try to harm them.

This character trait lets robots steal a few of Asimov's stories in their own right. Though - as with many other things - he's inconsistent about it. The robot at the center of the beautiful "The Bicentennial Man" doesn't share it at all. Here as elsewhere, Asimov shapes his robots' characters for the sake of the story.
Of course, Asimov - like many AI researchers - failed to anticipate the nature of modern large language models. Today's LLM's don't have personalities: we can't say they "resent domination," or for that matter that they resent or want anything else. But I can hardly fault Asimov for this. Whether a non-LLM AI would naturally "resent domination" is still under dispute, since we don't have any such AI's in the real world.
To sum up: Asimov's robots have very little in common with modern AI's beyond superficialities, and his Three Laws wouldn't work in the real world. He carefully designed his robots and his Three Laws of Robotics to build on each other and hide each other's flaws.
And, first and foremost, he's designed them to make good stories.
Oh dear, well, you have to put "The Robots of Dawn" and "Robots and Empire" on your reading list. The Zeroth Law of Robotics - "A robot may not injure humanity, or, through inaction, allow humanity to come to harm" - is used to get around the First Law. It's kinda a utilitarian argument: the greatest good for the greatest number. I've put about half my sci-fi books up in boxes in the attic (including everything by Asimov), or I'd be tempted to reread those two.
On one hand, there's Asimov's "That Thou Art Mindful of Him," in which two robots are assigned to come up with a definition of "human being" that both prevents robots from harming a newborn infant and prevents them from obeying orders from a criminal, a madman, or a child who says "Go jump in the lake!" Eventually they work out that being human means being rational and ethical, that the Three Laws apply most strongly to the most rational and ethical beings, and that robots - being the most rational and ethical - are therefore the most human, with the strongest protection under the Three Laws.
On the other hand, there's Jack Williamson's nightmare dystopia "With Folded Hands," where what amount to robots, following the law "to serve and obey, and guard men from harm," make human existence unbearable. I've long thought the difference lies partly in Asimov's political sympathies being progressive or socialist and Williamson's being more conservative.
On the third hand, my friend Karl Gallagher has a story on his Substack, "The Cornucopia Trap," where an AI planning system starts down that path and the protagonist finds her projects being blocked by it - and has to talk with it about what's the proper life for a human being. A wonderful bit of SFnal dialogue. (One of the things I love about SF is the way authors write stories to question other authors' stories, like Poul Anderson's "The Man Who Came Early" reexamining the situation of Martin Padway, from L. Sprague de Camp's Lest Darkness Fall.)