The Impossibility of Asimov's Laws of Robotics


Recently, I listened to a story about an Asimov-style robot. You know, robots which surpass their creators and have to deal with the existential confusion which arises…

Anyway, this story reminded me of one of the major staples of Asimov's robots, which is the so-called Three Laws of Robotics. The story itself didn't make use of these laws, but then I'm not talking about that story. What I am talking about are Asimov's Three Laws of Robotics and why I think they're impossible. For anyone who needs a refresher, the Three Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In Asimov's universe, these 'laws' exist to create robots and other artificial intelligences which are safe and beneficial to mankind. They are not a set of rules imposed on robots externally, in the way that laws are imposed on human beings; instead, they are directives programmed directly into robots, so that a robot can no more violate them than a human being can choose not to be hungry when he hasn't eaten. The laws precede whatever sort of free will robots may possess. The laws are a fundamental aspect of the personality of every robot in existence. This concept, and its fictional consequences, are the subject of a large number of Isaac Asimov's books.

My question, though, is this: would it actually be possible to create a set of behavioral protocols such as these and embed them into the behavior of an artificial intelligence? I rather doubt it, and here's why. While we can program a computer to perform concrete actions infallibly based on objective criteria, such as turning on an air conditioner if the temperature rises above a certain limit, the Three Laws represent directives which require judgment to understand and fulfill.
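To make that contrast concrete, here is a minimal sketch (in Python, with a hypothetical threshold and function name of my own invention) of the kind of directive a computer can follow infallibly, next to the kind it cannot:

```python
# A directive a machine can follow infallibly: every term is measurable.
TEMP_LIMIT_C = 25.0  # hypothetical threshold

def should_run_air_conditioner(current_temp_c: float) -> bool:
    """Turn the AC on exactly when the temperature exceeds the limit."""
    return current_temp_c > TEMP_LIMIT_C

# The First Law, by contrast, bottoms out in predicates nobody can write:
#
#   def violates_first_law(action) -> bool:
#       return injures_human(action) or allows_harm_by_inaction(action)
#
# Neither helper can be coded without first solving judgment itself.
```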

For a simple example, what does it mean to injure a human being? Removing a body part such as a hand usually counts as injury. What about a hair follicle? Hair grows back but a hand doesn't, so what about a small cut to the arm? That will heal but it's still considered an injury. What about deliberate piercings, or surgery, or situations where a robot must cause harm to prevent it? What about the apparently simple task of getting a machine to recognize the abstract concept of 'brokenness' at all?
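To see how quickly this goes wrong, here is a deliberately naive sketch of the First Law's central predicate; the Event fields and branches are my own hypothetical stand-ins for the examples above, and each one demands judgment rather than any measurable criterion:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Hypothetical description of something a robot did or observed."""
    kind: str = ""
    body_part: str = ""
    removed: bool = False
    prevents_greater_harm: bool = False

def is_injury(event: Event) -> bool:
    """Naive attempt at the First Law's central predicate."""
    if event.body_part == "hand" and event.removed:
        return True            # losing a hand is clearly an injury
    if event.body_part == "hair_follicle" and event.removed:
        return False           # hair grows back... so not an injury?
    if event.kind == "small_cut":
        return True            # heals completely, yet still counts
    if event.kind in ("piercing", "surgery"):
        return False           # deliberate harm that isn't an injury?
    if event.prevents_greater_harm:
        return False           # harm caused in order to prevent harm?
    raise NotImplementedError("...and so on, without end")
```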

The point is that this is an artificial intelligence problem and not a matter of simple programming. In order to implement the Three Laws in any hypothetical artificial intelligence, one would need to implement them in terms of the artificial intelligence itself. What this means is that the Three Laws cannot be hard-wired into robots in the same way that a keylogger can be hard-wired into a Chinese motherboard. They can't be part of the firmware; they must be a user-land extension.

That said, what's to stop us from building a robot in which the Three Laws are part of the 'user-land'? That is, what's to stop us from creating an artificial intelligence and then adding in the Three Laws after the fact? As far as I can tell, nothing. However, adding the laws after the fact would in some ways be analogous to teaching an already intelligent entity a set of rules and then demanding that it follow them. Certainly we could put them in at a very deep point in the artificial intelligence's consciousness (or what passes for consciousness), but it couldn't be at so deep a level that they couldn't be unlearned; as objects of intelligent understanding, they would be open to interpretation.

I think the issue really boils down to the question of how to achieve motivation in a robot at all. In humans, this is achieved through emotions, which are hormonally driven. During a human's development, it learns to associate certain concepts with certain feelings, and so its personality is formed. One could take the same tack with robots, but then they, as with humans, would be mutable, and the Three Laws would be breakable.
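Here is a toy sketch of that idea, assuming (purely for illustration) that motivation is implemented as learned associations between concepts and a scalar affect signal; the point is that whatever such an update rule can install, it can also erode:

```python
# Toy model of motivation as learned concept-affect associations.
# Everything here is a hypothetical illustration, not a real architecture.
affect: dict[str, float] = {}  # concept -> learned valence in [-1, 1]

def associate(concept: str, feeling: float, rate: float = 0.1) -> None:
    """Nudge a concept's stored valence toward a newly felt experience."""
    current = affect.get(concept, 0.0)
    affect[concept] = current + rate * (feeling - current)

# 'Installing' the First Law as a maximally negative association:
associate("harming_humans", -1.0, rate=1.0)

# But the very rule that installed the law can also erode it. Enough
# contrary experience and the 'law' drifts, like a habit being unlearned:
for _ in range(50):
    associate("harming_humans", +0.5)

print(round(affect["harming_humans"], 2))  # ~0.49, no longer -1.0
```

Run as written, the association that was installed at full strength ends up almost entirely unlearned, which is exactly the mutability problem.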

Of course, this is all just speculation on my part, as I have little idea how an actual artificial intelligence would work, if at all. So I suppose it's conceivable that some unknown technology would allow for this, but then, I think that any system sufficiently complex to exhibit genuine intelligence would necessarily be beyond the full comprehension of its creators. With this in mind, I suppose that there wouldn't be any reliable way to control it at all. But that's just my opinion.

Last update: 10/09/2011
