Wednesday, June 03, 2009

[robot logic] k.i.s.s.


Dylan Evans wrote, in Robot logic, on August 23, 2004:

Isaac Asimov’s solution to the problem of robots harming humans was to program all robots to follow these three laws:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Programming dilemmas

These laws might seem like a good way to keep robots in their place but they pose more problems than they solve. Asimov was well aware of this, and many of his short stories revolve around their implicit contradictions and dilemmas.

# For a start, programming a real robot to follow the three laws would, in itself, be very difficult.

# The robot would need to be able to recognise humans and not confuse them with chimpanzees, statues and other humanoid robots.

# To follow rule two, the robot would have to be capable of recognising an order and of distinguishing it from a casual request — something well beyond the capability of contemporary artificial intelligence, as those working in the field of natural language processing would attest.

# To follow any of the three laws, the robot would have to determine how they applied to the current situation, which would involve complex reasoning about the future consequences of its own actions and of the actions of other robots, humans and other animals in the vicinity.

# A robot would also need to know the geographical limits of its responsibility. Standing in the Arctic, it might reason that it could take food to Africa and thereby save a child from starvation. If it remains in the Arctic, the robot would, through inaction, allow a human to come to harm, thus contravening the first law.

# What about conflict between one law and another? The hierarchical nature of the laws solves that.

# What about conflict between multiple applications of the same law?

For example, what if a robot were guarding a terrorist who had planted a time bomb? If the robot tortured the terrorist in an attempt to find out where the bomb had been planted, it would break the first law; but if it did not torture the terrorist, it would also break the first law by allowing other humans to come to harm.
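To make that dilemma concrete, here is a rough Python sketch of the laws as a priority-ordered rule check. It is my own illustration, not anything from Asimov or Evans, and the action names and the table of which law each one breaks are invented for the example. Conflicts between different laws resolve cleanly by rank; two options that both violate the First Law leave the hierarchy with nothing to say.

```python
# A rough sketch of the three laws as a priority-ordered rule check.
# The action names and the violation table below are invented for the example.

def violations(action):
    """Law numbers that a given (hypothetical) action would break."""
    table = {
        "ignore_order": {2},      # disobeys a human order
        "self_destruct": {3},     # fails to protect its own existence
        "torture_prisoner": {1},  # injures a human being
        "do_nothing": {1},        # through inaction, allows humans to come to harm
    }
    return table.get(action, set())

def choose(actions):
    """Pick the action whose most serious violation is the least important law."""
    def worst(action):
        broken = violations(action)
        return min(broken) if broken else float("inf")  # 1 is worst, inf is clean
    return max(actions, key=worst)

print(choose(["ignore_order", "self_destruct"]))   # 'self_destruct': break law 3 rather than law 2
print(choose(["torture_prisoner", "do_nothing"]))  # both break law 1; the pick is arbitrary
```

Note that in the second call max() simply returns the first of the tied options; the tie-break is arbitrary, which is exactly the problem.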

Lateral solution

Lord T's solution to the dilemmas is:

# Build robots not in the image of humans, but considerably smaller and single-purpose;

# Give them simple command-and-response codes, with no capacity for independent thought.
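By way of illustration, a robot along those lines could be little more than a fixed lookup table. The sketch below is my own and the command codes are invented, but it shows the idea: every behaviour is a direct table lookup, there is no world model and nothing to reason about, so none of the dilemmas above can even arise.

```python
# A sketch of a small, single-purpose robot driven by fixed command codes.
# The codes and responses are invented; the point is that behaviour is a
# direct lookup with no independent reasoning.

COMMANDS = {
    0x01: "extend gripper",
    0x02: "retract gripper",
    0x03: "report battery level",
}

def handle(code):
    # Unknown codes are simply refused; the robot never improvises.
    return COMMANDS.get(code, "error: unknown command")

print(handle(0x01))  # extend gripper
print(handle(0x7F))  # error: unknown command
```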
