In the writing of Isaac Asimov: each of three (later sometimes four) rules devised to govern the behaviour of robots.
Esp. in the Three Laws of Robotics, formally stated as:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In 1985, Asimov added a ‘Zeroth Law’, taking precedence over the others:
0. A robot may not harm humanity or, by inaction, allow humanity to come to harm.
‘You know the fundamental law impressed upon the positronic brain of all robots, of course.’…‘Certainly…. On no conditions is a human being to be injured in any way, even when such injury is directly ordered by another human.’
Let’s start with the three fundamental rules of Robotics—the three rules that are built most deeply into a robot’s positronic brain.
If your analysis were correct, Dave would have to break down the First Law of Robotics: That a robot may not injure a human being or, through inaction, allow a human being to be injured.
The robot part of the robot bomb is, of course, a low-grade idiot among robots. Incidentally, it violates, seriatim, all three of Asimov’s ‘Three Laws of Robotics’.
The First Law of Robotics states that a robot cannot harm a human being.
Perhaps we are robots. Robots acting out the last Law of Robotics… To tend towards the human.
His memory erased, his human ethics are replaced by four directives to govern his behavior (an idea borrowed from Isaac Asimov’s Three Laws of Robotics).
Last modified 2020-12-20 18:37:51
In the compilation of some entries, HDSF has drawn extensively on corresponding entries in OED.