A recent report, prepared by the Department of Ethics and Emerging Technology at California State Polytechnic University for the Office of Naval Research of the United States Navy, warns about the dangers of using robots in combat and makes suggestions about the code of conduct they should follow, a task that will only get harder as artificial intelligence technologies advance. According to this study, one of the open questions is how the algorithms will behave in real cases, where the consequences of a mistake can be catastrophic.
I open this post with that report because it comes after a controversial incident in Iraq involving American soldiers and military robots, one that has gone unnoticed by most of the media.
The U.S. Army has decided to withdraw the combat robots deployed in Iraq after an incident in which one of these devices began to move its weapon without having received any instruction to do so.
The robot involved in the incident is the SWORD, a remotely operated robot that can be equipped with a wide range of weapons, although the preferred option is a .50-caliber machine gun.
These robots are radio-controlled, and in the incident one of them began to move its weapon without the operator touching the controls. The most likely cause of the failure is a software problem.
Although the robot did not open fire, and there was consequently no risk to the soldiers, the possibility that a device equipped with a machine gun could get out of control was worrying enough for the military to decide to stop using these robots.
Four of these armed robots had been deployed in Iraq, and there were plans to deploy 18 more.
The SWORD is a modified version of the Talon, a support robot that the U.S. military has been using since 2000 in surveillance and explosive-disposal missions.
"The military autonomous robots that will fight the wars of the future must be programmed to follow a strict" military code. "Otherwise, you risk the world to suffer untold atrocities at the hands of their steel."
A report prepared for and funded by the Office of Naval Research, a secretive high-tech office of the United States Navy, makes this stark warning and also includes considerations about a possible 'Terminator'-style scenario in which robots turn against their human owners.
The report, the first serious work of its kind on the ethics of robot soldiers, foreshadows an increasingly near future, an era in which robots will be smart enough to make battlefield decisions so far reserved for humans. Eventually, it warns, robots could develop significant cognitive advantages over soldiers of the species Homo sapiens.
"There is a very common mistake is to believe that robots will do just that are programmed to do," says Patrick Lin, the coordinator of the report. "Unfortunately, this belief is seriously out of date: data from a time that the programs could only be written and understood by a single person."
The reality, Lin says, is that modern programs run to millions of lines of code and are written by teams of programmers, none of whom knows the entire program. Consequently, no individual can accurately predict how the different parts of a large program will interact without extensive field testing, an option that may either be unavailable or be deliberately ruled out by the designers of the robot soldiers.
The solution, he suggests, is to combine rule-based programming with a period of 'learning' about what can and cannot be done in war.
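Purely as an illustration of that hybrid idea (the report contains no code, and every name below is hypothetical), such a design might layer fixed rules over a learned component, with the rules always able to veto a proposed action:

```python
# Hypothetical sketch of "rules plus learning": a learned policy scores
# candidate actions, but a fixed rule layer has the final say.

RULES_OF_ENGAGEMENT = [
    lambda action: not action.get("targets_civilians", False),
    lambda action: action.get("force_level", 0) <= action.get("authorized_force", 0),
    lambda action: action.get("operator_confirmed", False) or not action.get("lethal", False),
]

def is_permitted(action: dict) -> bool:
    """Return True only if every hard-coded rule allows the proposed action."""
    return all(rule(action) for rule in RULES_OF_ENGAGEMENT)

def choose_action(candidates: list[dict], learned_score) -> dict | None:
    """Pick the highest-scoring candidate that passes the rule layer.

    `learned_score` stands in for whatever the trained component produces;
    if nothing passes the rules, the default is inaction.
    """
    permitted = [a for a in candidates if is_permitted(a)]
    if not permitted:
        return None
    return max(permitted, key=learned_score)
```

The design choice this sketch tries to capture is the one the report argues for: whatever the learned part does, the hand-written constraints are checked last and cannot be overridden by it.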
The report covers a wide variety of scenarios in which ethical, legal, social and political issues will arise as robot technology progresses. How do we protect our robot armies against hackers, terrorists or software bugs? Who gets the blame if a robot goes berserk in front of a crowd of civilians: the robot, its developer, the U.S. president? Should robots have a 'suicide switch', or should they be programmed to defend their own lives?
The report, coordinated by the Department of Ethics and Emerging Technologies at California State Polytechnic University, harshly warns the U.S. military against complacency and shortcuts, now that the designers of military robots are racing to reach the market and the pace of advances in artificial intelligence is increasing.
The sense of urgency among designers may have been accentuated by the congressional mandate that by 2010 one third of all attack aircraft operate as drones, and that by 2015 one third of all infantry fighting vehicles do so as well.
"The race to reach the market increases the risk of an inappropriate design or programming. Worse, without a significant and sustained effort to instill ethical codes autonomous systems, there is little hope that the first generations of such systems and robots are appropriate, therefore commit mistakes that could cost human lives, "warns the report.
A simple ethical code along the lines of the 'Three Laws of Robotics' put forward in 1950 by Isaac Asimov, the science fiction writer, will be insufficient to ensure ethical behavior by autonomous military machines.
"We're going to need a code," says Lin. "These gadgets are military and can not be peaceful, so we have to think in terms of the ethics of war. We're going to need a warrior's code."
The principles, or Laws of Robotics, that Isaac Asimov set out in his 1950 fiction collection "I, Robot" were the following (a sketch of how they might be read as ordered checks follows the list):
1) A robot must not harm a human being or, through its inaction, allow a human being to come to harm.
2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence, as long as doing so does not conflict with the First or Second Law.
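As a purely illustrative sketch (not from the report or from Asimov, and with all names hypothetical), the three laws can be read as a strict priority ordering of checks; even this toy version makes the article's point visible, since a combat robot is expected, by design, to fail the First Law:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Stand-in judgments; in practice, deciding these truthfully is the hard,
    # unsolved part that no simple code of laws resolves.
    harms_human: bool = False
    allows_harm_by_inaction: bool = False
    disobeys_order: bool = False
    order_conflicts_with_first_law: bool = False
    endangers_self: bool = False
    required_by_higher_law: bool = False

def permitted(a: Action) -> bool:
    """Check Asimov's Three Laws as a strict priority ordering."""
    # First Law: never harm a human, by action or by inaction.
    if a.harms_human or a.allows_harm_by_inaction:
        return False
    # Second Law: obey orders, unless obeying would violate the First Law.
    if a.disobeys_order and not a.order_conflicts_with_first_law:
        return False
    # Third Law: self-preservation yields to the first two laws.
    if a.endangers_self and not a.required_by_higher_law:
        return False
    return True

# Example: opening fire on a person is forbidden regardless of any order.
print(permitted(Action(harms_human=True)))  # False
```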
However, the gradual integration of robot soldiers into armed conflict breaks the first of the laws of Asimov's robot fiction, and such robots have already been put into service, as in the case of the Korean government.
The SWORD military robot, protagonist of the incident in Iraq with U.S. soldiers.