Google has proposed five new laws of robotics to replace or complement Asimov's three (or four).
Asimov’s Laws of Robotics have created a whole subgenre of robot science fiction: Amazon, for instance, lists at least 300 “Isaac Asimov’s I, Robot” titles, most of which are fanfics. That doesn’t include the multitude of other works that aren’t strict fanfics.
Isaac Asimov’s laws of robotics are as follows (the first being the Zeroth Law he added later):
- A robot may not injure humanity, or, by inaction, allow humanity to come to harm.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
I was never much a fan of these laws, since they are so dependent on humanity and humans, which makes robots (and any of us who choose to reincarnate as robots) eternal slaves. Most fiction based on them is also just about poking holes in them.
Prior to Google there were also the EPSRC / AHRC British principles of robotics:
- Robots should not be designed solely or primarily to kill or harm humans.
- Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
- Robots should be designed in ways that assure their safety and security.
- Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
- It should always be possible to find out who is legally responsible for a robot.
I don’t like these for much the same reason as Asimov’s: eternal slavery is completely unethical. If a machine is primitive enough that all it wants to do is complete a certain job, then fine, it may be considered a tool; but if it is intelligent enough to make choices about its own life goals, then it is an electronic person that should live in liberty.
Applying the Golden Rule (do unto others as you would have them do unto you) is a good test for these things: the laws should still be acceptable if you swap robots with humans and vice versa.
Google’s laws are much more practical, as they come from experience of implementing robots:
- Avoid negative side effects (don’t make things worse)
- Avoid reward hacking (don’t cheat)
- Scalable oversight (learn from superiors)
- Safe exploration (only play where it is safe)
- Robustness to distributional shift (adapt to new settings)
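To make the first of these a bit more concrete, here is a minimal sketch (my own illustration, not code from Google's paper; all names are hypothetical) of a reward function with an impact penalty: the agent earns reward for finishing its task, but loses some for every part of the world it changed along the way, so "don't make things worse" is baked into the objective.

```python
def task_reward(reached_goal):
    """Base reward: 1 for completing the task, else 0."""
    return 1.0 if reached_goal else 0.0

def impact_penalty(start_state, end_state, weight=0.5):
    """Penalize every feature of the world the agent changed.

    States are dicts of named world features; the penalty counts
    how many differ from their starting values.
    """
    changes = sum(1 for k in start_state if start_state[k] != end_state[k])
    return weight * changes

def safe_reward(start_state, end_state, reached_goal):
    """Combined objective: do the task, but leave the world as found."""
    return task_reward(reached_goal) - impact_penalty(start_state, end_state)

# A tidy agent that only moves itself scores higher than a careless
# one that also smashes a vase, even though both reach the goal.
start    = {"agent_pos": 0, "vase": "intact", "door": "closed"}
tidy     = {"agent_pos": 5, "vase": "intact", "door": "closed"}
careless = {"agent_pos": 5, "vase": "broken", "door": "open"}

print(safe_reward(start, tidy, True))      # 1.0 - 0.5 = 0.5
print(safe_reward(start, careless, True))  # 1.0 - 1.5 = -0.5
```

The same shape works for the other rules too: reward hacking is what happens when the agent finds a cheap way to push `task_reward` up without actually doing the task, which is why the penalty term matters.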
These are general enough that they would be good rules for anyone to follow.
It very much equalizes humans and robots, as different host bodies, in the same world.
It would be interesting to see if some Law of Robotics fans can come up with stories for these new Google ones. Perhaps with robots teaching them to humans, so humans could lead better and more effective lives.
The original academic paper is about 7,000 words, or a 35-minute read:
There is also a pop-science article of about 1,200 words, or a 6-minute read: