Robots could improve ethics on the battlefield

"My research hypothesis is that intelligent robots can behave more ethically on the battlefield than humans currently can," says Ronald Arkin, a computer scientist at the Georgia Institute of Technology who is designing battlefield robot software under a contract with the U.S. Army. "That's the theory I advocate."

Spy planes, mine detectors, sensors and other robotic devices are already in regular use on the battlefield, but humans control them. What Arkin is talking about is true robots that operate on their own.
He and others argue that the technology needed to produce autonomous robots with lethal capability is inexpensive and proliferating, and that it is only a matter of time before such robots are deployed on the battlefield. That, they add, means it is time people started talking about whether this is a technology we want to use.

Noel Sharkey, of the University of Sheffield in the United Kingdom, wrote last year in the journal Innovative Technology for Computer Professionals that "this is not Terminator-style science fiction, but grim reality." He added that South Korea and Israel are already deploying armed robot border guards. "We don't want to get to the point where we say we should have had this debate twenty years ago," said Colin Allen, a philosopher at Indiana University in Bloomington and co-author of the new book Moral Machines: Teaching Robots Right From Wrong.

Randy Zachery, who heads the Army Research Office, the agency funding Arkin's work, says the Army hopes this "basic science" can show how human soldiers might use and interact with autonomous systems, and how software might be developed that allows autonomous systems to operate within the limits imposed by the warfighter.

In a report for the Army last year, Arkin described some of the potential advantages of autonomous robot fighters. To begin with, they can be designed without a survival instinct and therefore without any tendency to flee in fear. They can be built to feel neither anger nor recklessness, Arkin added, and to be immune to what he calls the psychological problem of "scenario fulfillment," which leads people to absorb new information more readily when it is consistent with their preconceived ideas.

His report drew on a 2006 survey by Army health authorities, which found that fewer than half of the soldiers and Marines serving in Iraq said noncombatants should be treated with dignity and respect, and that 17 percent said all civilians should be treated as insurgents. Arkin envisions several ways autonomous robots could be used: in operations against snipers, to clear buildings of suspected terrorists, or on other dangerous assignments. But first they would have to be programmed with rules and instructions about whom to shoot, when it is acceptable to fire, and how to distinguish attacking enemies from civilians, the wounded, or someone trying to surrender.

Arkin's battlefield simulations run on computer screens. Robot pilots are given the information a human pilot might have, such as maps showing the locations of churches, apartment buildings, schools and other centers of civilian life. They are taught exactly where enemy troops, materiel and priority targets are. And they are given rules of engagement, guidelines limiting the circumstances in which they may initiate and carry out combat. In one simulation, a robot pilot flies over a small cemetery. The pilot detects a tank at the cemetery entrance, a possible target. But a group of civilians is also present, so the pilot decides to move on, and soon finds another armored vehicle, standing alone in a field. The pilot fires; the target is destroyed.
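The cemetery scenario amounts to a rule check before firing. The sketch below is a simplified, hypothetical illustration of that kind of engagement logic, not Arkin's actual software; the class, field and function names are all invented for this example.

```python
from dataclasses import dataclass

# Hypothetical sketch of rules-of-engagement checks like those described
# in the simulation; names and rules are illustrative only.

@dataclass
class Target:
    is_military: bool          # e.g., a tank or armored vehicle
    civilians_nearby: bool     # noncombatants close to the target
    near_protected_site: bool  # church, school, cemetery, etc., from the map

def may_engage(target: Target) -> bool:
    """Return True only if every rule of engagement is satisfied."""
    if not target.is_military:
        return False  # never fire on non-military objects
    if target.civilians_nearby:
        return False  # hold fire when noncombatants would be endangered
    if target.near_protected_site:
        return False  # respect protected sites marked on the pilot's map
    return True

# The cemetery scenario: a tank at the gate with civilians present is
# passed over; the same kind of tank alone in a field is a valid target.
tank_at_cemetery = Target(is_military=True, civilians_nearby=True,
                          near_protected_site=True)
tank_in_field = Target(is_military=True, civilians_nearby=False,
                       near_protected_site=False)

print(may_engage(tank_at_cemetery))  # False: the robot flies on
print(may_engage(tank_in_field))     # True: the target is engaged
```

As the article notes, the hard part is not this kind of rule logic but the perception needed to set the flags reliably in the first place.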

Some people who have studied the issue worry that battlefield robots designed without emotions will lack compassion. Arkin, a Christian who acknowledges the help of God and Jesus Christ in the preface of his 1998 book Behavior-Based Robotics, reasons that because the rules of the Geneva Conventions are based on humane principles, building them into a machine's mental architecture would endow it with something akin to compassion. But he adds that it would be difficult to design "perceptual algorithms" able to recognize, for example, whether people are wounded or waving a white flag. Arkin believes that provoking debate about the technology is the most important part of his work.
