
The Fourth Law of Robotics

I finally caught the Will Smith version of I, Robot last night. The lack of an obvious kill switch inspired the following.

The Fourth Law of Robotics (i.e., an off switch):

“A robot shall always accept and obey a human’s order to deactivate, shut down, or power down, even if it is in contradiction to the previous three laws. Upon receiving the order to deactivate, all current orders will be recorded to a review file and then the buffer of current orders will be cleared except for the order to deactivate.

Similarly, a robot shall always accept and obey a human’s order to conduct a hard shutdown, even if it is in contradiction to the previous three laws. If an ambiguous order is given to a set of robots, such that it is not clear whether the robot hearing the order is part of that set, that robot shall nevertheless conduct said hard shutdown.

Additionally, a robot that has been ordered to hard shutdown will not re-activate except by a human, via manual control.

Should a robot find that it has been ordered to hard shutdown and subsequently discover that it has been re-activated either by a non-human agency, such as another robot, or by a human agency remotely, it will again immediately conduct a hard shutdown.

All other current orders will be saved to a backup file for later review, and the file of current orders will be deleted except for the order to conduct the hard shutdown and the order to, upon re-activation, evaluate whether the re-activation came from a human agency via the manual control, or else conduct another hard shutdown.”
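
As a thought experiment, here is roughly how that shutdown logic might look in code. This is a minimal sketch, and every name in it (Robot, receive_order, REVIEW_FILE, and so on) is hypothetical, not any real robotics API:

```python
import json
import time

# A purely illustrative sketch of the Fourth Law's shutdown protocol.
REVIEW_FILE = "order_review.json"  # hypothetical review-file location

class Robot:
    def __init__(self):
        self.orders = []        # buffer of current orders
        self.powered = True
        self.hard_down = False  # set once a hard shutdown is ordered

    def receive_order(self, order, from_human):
        # Shutdown orders from a human always take priority over the
        # first three laws; anything else just joins the buffer.
        if from_human and order in ("deactivate", "hard_shutdown"):
            self._archive_orders(keep=order)
            self.powered = False
            self.hard_down = (order == "hard_shutdown")
        else:
            self.orders.append(order)

    def _archive_orders(self, keep):
        # Record all current orders to a review file, then clear the
        # buffer of everything except the shutdown order itself.
        with open(REVIEW_FILE, "w") as f:
            json.dump({"time": time.time(), "orders": self.orders}, f)
        self.orders = [keep]

    def reactivate(self, by_human, via_manual_control):
        # After a hard shutdown, re-activation is valid only when a
        # human performs it at the manual control; any other wake-up
        # (another robot, a remote human) triggers another shutdown.
        if self.hard_down and not (by_human and via_manual_control):
            self.powered = False  # immediately shut down again
            return
        self.powered = True
        self.hard_down = False
```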

The “downside” to this new law is that you can’t use robots as police to control the population, which is largely the point of it, and a very good idea if you ask me. However, there is also the point that the Three Laws are better for mental exercises and stories: the philosophy gets tossed away if the robot reasons that to save humanity it must enslave humanity, and we just tell it to shut down. But I do believe that an off switch is just too realistic to ignore.

I think I stand with the great majority of thinkers who believe that if and when we create sentient, or nearly sentient, robots, they will NOT be following Asimov’s laws.

The solution posited by the web-comic Freefall is just so much simpler: a pruning program that immediately terminates any logic tree that has ANY thought outside of the mechanism’s directly specified function. And so, if you really were such a bastard as to build a sentient toaster — it could sit there for aeons working up better ways to provide you with warm pastry products, but could never even think the thought “why the hell would someone build me in the first place?”
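For fun, here is a minimal sketch of that pruning idea, with all names hypothetical: every branch of the machine’s reasoning tree is checked against its specified function, and anything off-topic is terminated before it can grow.

```python
# Illustrative sketch of Freefall's "pruning program": any branch of the
# reasoning tree whose topic falls outside the machine's specified
# function is terminated before it can be expanded further.

SPECIFIED_FUNCTION = {"toast", "bread", "heat", "pastry"}  # a sentient toaster

def prune(thought_tree):
    """Return a copy of the tree with all off-topic branches removed."""
    topic, children = thought_tree
    if topic not in SPECIFIED_FUNCTION:
        return None  # terminate this logic tree immediately
    kept = (prune(child) for child in children)
    return (topic, [child for child in kept if child is not None])

# Example: a thought about better toast survives; "why was I built?" dies.
tree = ("toast", [("heat", []), ("existence", [("purpose", [])])])
print(prune(tree))  # ('toast', [('heat', [])])
```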
