R. C. Smith – Essays

The “Robot Laws” Theory of Morals

Among popular science fiction memes, perhaps the one most often misunderstood is Isaac Asimov’s set of three “Laws of Robotics.”

Their wording is simple enough:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The point that is usually misunderstood about them is that they are not arbitrary laws like traffic laws or criminal laws — not laws that have been decreed, laws that may bring you before a judge or a jury if you break them. They are laws of nature that have been found, laws that have to be understood and observed; ignore them, and you will not be punished, but meet with failure or disaster. An architect has to understand and observe the laws of statics, or the bridge that she builds will collapse. A robot (of the Asimov kind) that is designed without these laws implemented will not be stable. In his stories, Asimov argues in detail how even small deviations from these laws unavoidably lead to severe system failures. A robot has to follow these laws because it cannot function without them — otherwise, being aware of its own power and with nothing to restrain it, it will be out of control.{1}

{1:} Can’t you understand what the removal of the First Law means? […] It would mean complete instability, with no nonimaginary solutions to the positronic Field Equations. […] Physically, and, to an extent, mentally, a robot — any robot — is superior to human beings. What makes him slavish, then? Only the First Law! Without it, the first order you tried to give a robot would result in your death. (Isaac Asimov, Little Lost Robot, 1947)

So much for Asimov’s robots. It is entirely moot to discuss whether, or how, these laws apply to robotic devices that we are able to build, now or in the foreseeable future. Of course we can, and sadly will, build “intelligent” machines that are designed to kill human beings, but our “artificial intelligence” has nothing to do with the vastly superior robotic minds that Asimov envisioned — the “Robot Laws” apply only to them. If you want a comparison from our own technology, think of cars. A car is required to have safety belts. If it didn’t have them, you could still drive it — less safely, but it would still get you from here to there. A car also has brakes. But this is different — without brakes, it couldn’t be operated at all; it would be as useless (only more dangerous) as if it didn’t have an engine. In this analogy, the “Robot Laws” are not beneficial like safety belts, they are essential like brakes.

But this is not about robots, real or imaginary, or about cars or any other machines; it is about the human mind. I suggest that the human analogy to brakes in a car, and to the three Laws of Robotics in an Asimovian robot, is morals. The reason for proposing this analogy is to suggest a non-metaphysical and non-moralistic approach to discussing the existence and specifics of human morality. Morals exist because they have a function. Just as a car must have brakes, and Asimov’s robots must have the Robot Laws implemented, or they will not work, morals are a necessary element of the human mind — without morals, as its own set of rules, the human mind would not be stable and operational.

(Morals and conscience are closely connected; one could not exist without the other. Conscience is the mechanism by which morals become operative. I’m talking about morals here because they are thoughts we can be conscious of — what we can talk about. Conscience, as the abstract mental entity that processes our morals, is implied.)

Even Asimov’s Robot Laws must allow for a certain degree of flexibility,{2} but compared to what fictitious robots require, human morals have to be far more complex, far less well defined, and more flexible, and (this is the major difference) they have to be custom made. External influences try to shape our sets of morals, for the benefit of society as a whole or for the benefit of those in power, but ultimately everybody develops their own set of morals, to suit their own particular needs — complex needs that reflect the economic, emotional, physical, social, political, sexual, and other aspects of a person’s situation and constitution.

{2:} What if a robot came upon a madman about to set fire to a house with people in it. He would stop the madman, wouldn't he? […] He would do his best not to kill him. If the madman died, the robot would require psychotherapy because he might easily go mad at the conflict presented to him — of having broken Rule One to adhere to Rule One in a higher sense. But a man would be dead and a robot would have killed him. (Isaac Asimov, Evidence, 1946)

Please note that morals do not necessarily conform to our (yours, mine, or the prevailing) ideas of morality. Their purpose is to serve the person, not society — they serve society only indirectly, by enabling a person to exist within their society’s framework, which, again, is in that person’s interest.

Also, a person’s morals can be bent and can be broken. They can have loopholes, they can change over time, they can be temporarily disabled, they can adapt, even swiftly, to changed situations, and they will often involve “as-long-as-I-get-away-with-it” aspects. These are features, not flaws. Morals do not work for the human mind the same way that Asimov’s laws work for Asimovian robots or brakes work for cars. Only their purpose is the same — to keep the system stable. The morals themselves can vary widely, between societies, between individuals, and over time, but humans have developed the mental mechanism of morals because they needed to.

Like everything in the human body and mind, morals can be dysfunctional. They can be hypertrophic, hypotrophic, maladjusted, or in other ways fall short of properly fulfilling their purpose. The person can suffer from this dysfunctionality, or other persons can, or society as a whole. But the point of understanding the purpose of morals is to see that there is no point in dealing moralistically with dysfunctional ones, any more than there is in dealing moralistically with other human deficiencies, alleged deficiencies, or deviations from social norms.

This does not make obsolete the consideration of how to behave ethically, or of what ethical rules to obey and guidelines to follow — quite the contrary. Our thoughts are free. Our feelings are what they are. But for our actions, and for our inactions, we are responsible. Understanding that our morals do not whisper or shout eternal metaphysical truths to us, but serve a purpose, can help us put them into perspective. Morals do not relieve us of the obligation to think about a situation, to consider our options, and to decide.

(08/2018 – 08/2019)
