I cannot come up with a better headline than The Hindu did: Now Comes a Robot That Can Trick and Deceive.
It is a little robot on wheels. Scientists at Georgia Tech programmed it to hide in a simple maze, but before it hides, the robot knocks over markers to make it look to another robot as if it had gone to a different hiding place.
“Most social robots will probably rarely use deception, but it’s still an important tool in the robot’s interactive arsenal because robots that recognize the need for deception have advantages in terms of outcome compared to robots that do not recognize the need for deception,” said the study’s co-author, Alan Wagner, a research engineer at the Georgia Tech Research Institute.
Their first step was to teach the deceiving robot how to recognize a situation that warranted the use of deception. Wagner and his co-author, Ronald Arkin, used interdependence theory and game theory to develop algorithms that tested the value of deception in a specific situation. A situation had to satisfy two key conditions to warrant deception: there must be conflict between the deceiving robot and the seeker, and the deceiver must benefit from the deception.
As preconditions go, they aren’t exactly programming Asimov’s Three Laws of Robotics there. The robot must be in conflict with you, and the robot must benefit by deceiving you. Reassuring! And but: most social robots will probably rarely use deception. And again but: social robots. (What about the killbots?)
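The paper's actual algorithms rest on interdependence theory and game theory, which we don't have the details of here; but the two preconditions above amount to a simple payoff comparison. A minimal, purely illustrative sketch (the function and outcome names are my invention, not from the study) might look like:

```python
# Hypothetical sketch of the two preconditions for deception described
# above. The payoff structure and names are illustrative assumptions,
# not taken from the Georgia Tech paper.

def deception_warranted(deceiver_payoffs, seeker_payoffs,
                        honest_outcome, deceptive_outcome):
    """Return True only if both conditions hold:
    1. Conflict: the seeker's preferred outcome differs from the deceiver's.
    2. Benefit: deceiving yields a strictly higher payoff for the deceiver.
    """
    conflict = (max(seeker_payoffs, key=seeker_payoffs.get)
                != max(deceiver_payoffs, key=deceiver_payoffs.get))
    benefit = (deceiver_payoffs[deceptive_outcome]
               > deceiver_payoffs[honest_outcome])
    return conflict and benefit

# Example: hide-and-seek. Possible outcomes are "found" and "not_found".
hider  = {"found": 0, "not_found": 1}   # the hider wants to stay hidden
seeker = {"found": 1, "not_found": 0}   # the seeker wants to find it

print(deception_warranted(hider, seeker,
                          honest_outcome="found",
                          deceptive_outcome="not_found"))  # True
```

If the two robots happened to want the same outcome, the conflict test would fail and the check would return False, so deception would not be triggered; that is the whole of the "reassuring" constraint.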
“We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects,” explained Arkin. “We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems.”