I was recently discussing Isaac Asimov’s rules on Facebook and decided to post my opinions here, where I am king and cannot be argued with. 🙂 Asimov’s three rules of robotics are so simple, they are almost primitive. It is that simplicity that has made them so famous (the rules are quoted below). Like the Ten Commandments, they take complex issues and fears and break them down into easily digested and understood expectations. I am not trying to take away from Asimov’s talent at constructing such a simple set of rules; it is not easy to create something that the public can digest, understand, and regurgitate. More importantly, the rules have influenced those who actually work in the field of robotics and artificial intelligence, meaning Asimov’s rules are actually being incorporated into the plans for modern robots. Nice job, Isaac!

Unfortunately, in real life, they would never hold up. The main reason is that the rules are horribly restrictive, given what we need robots to do. Robots are being designed for two basic functions: human companionship and hazardous work. If you are making cute teddy bear robots, you will obviously want to incorporate the rules; however, if you’re creating robots for the second purpose, you will find the rules impossible to implement. The primary job of a robot is to substitute itself for a human being. Whether the robot is mining lava or fighting crime, its basic programming would HAVE to allow it to place itself in danger; otherwise it would be useless. The world doesn’t need a robot nurse, it needs a robot soldier. Combat and rescue are the two primary uses for intelligent machines. I’ll get back to this one in a minute; for now, let’s read through the first rule.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

This rule is almost naive in its simplicity. Does this mean you cannot have a robot doctor or dentist? Because surgery and medicine are major industries for new technologies. Can a robot be a midwife? Not only is it hard to determine what exactly constitutes an injury (no robot wrestlers, I guess), it is almost impossible to expect a machine to stop a human being from coming to harm. The only way a robot could fulfill this expectation would be to nanny human beings, every moment of the day, all day, forever. No smoking, no running at the pool, no diving, nothing at all that might cause injury or promote an unhealthy lifestyle. Will the robots force everyone to take yoga classes, or refuse to let bad drivers get behind the wheel? Once a robot is forced to value human life, it is a short jump to calculating which human lives are worth more. When two people of different cultures and races are in conflict, who does the robot assist? You see, if a robot cannot clearly comprehend and deal with issues like the ones our military deals with in places like Iraq, then the robot is useless.

  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

You know what humans love best about machines? They obey. This rule is almost laughable and goes against our entire culture of ownership. If I owned a robot, it would be MINE; it would listen to only my instructions, follow only my rules, do what I tell it to do. We all have cellphones, but how often do you share yours? How many people know your password? This rule makes it sound like anyone can come along and tell your robot what to do and it would have to listen. That would make for some funny pranks, but that’s not how our culture works. When you buy something, you own it; no one wants to share with other human beings.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

This is the rule that has always made me laugh. It sounds more like a civil rights bill than a rule for robots. A robot is supposed to be destroyed; that is what it is built for. Machines are made to be used until they break, and then they are either repaired or scrapped for junk. What good would a squad of space-exploring robots be if they ducked for cover at the first sign of danger? How could we send a robot to the bottom of the sea if its programming tells it to protect itself? A robot not willing to be damaged would be useless. Asimov’s robots lack motivation and dedication, because their programming is built around avoidance of danger. If a robot cannot take my place in dangerous situations and will not automatically protect me from other humans, why would I pay so much for the damn thing? All the most interesting robots are immoral or are killers. Asimov’s rules make us feel safe with the concept of robots; they give us hope that we’ll have absolute control over our computerized children, but they would never actually be implemented. There are just too many reasons to kill and too many ways to die; a machine that avoids harm would be virtually useless. If she can’t hurt you, how much fun can you possibly have??