In science fiction, the Three Laws of Robotics are a set of rules written by Isaac Asimov, which most of the robots in his novels and stories are designed to obey. In that universe, the laws are mathematical formulations imprinted on the positronic brain pathways of the robots (what today we would call ROM). First appearing in the story "Runaround" (1942), they state:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
This wording of the laws is the conventional form in which the humans of the stories state them; their real form would be an equivalent but far more complex set of instructions in the robot's brain.
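The priority ordering built into that wording (the Second Law yields to the First, and the Third to both) can be pictured as a chain of filters over candidate actions. The following Python sketch is purely illustrative, with invented names; the stories never specify how a positronic brain actually encodes the laws:

```python
# Purely illustrative sketch: the Three Laws as an ordered filter over
# candidate actions. All names here are invented for this example;
# nothing in the stories shows how the laws are really encoded.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool      # would this injure a human, or let one come to harm?
    obeys_order: bool      # does it follow the order given by a human?
    preserves_robot: bool  # does it keep the robot intact?

def permitted(actions):
    """Apply the laws in strict priority order: each law only chooses
    among the actions already allowed by the laws above it."""
    # First Law: rule out harm to humans whenever a harmless option exists.
    safe = [a for a in actions if not a.harms_human] or actions
    # Second Law: among those, prefer actions that obey human orders.
    obedient = [a for a in safe if a.obeys_order] or safe
    # Third Law: only now may self-preservation break the remaining ties.
    return [a for a in obedient if a.preserves_robot] or obedient

options = [
    Action("follow the order into danger", False, True, False),
    Action("refuse the order and stay safe", False, False, True),
]
print(permitted(options)[0].description)
# -> "follow the order into danger": obedience outranks self-preservation
```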
Asimov attributed the Three Laws to John W. Campbell, who would have drafted them during a conversation held on December 23, 1940. Campbell, however, maintained that Asimov had already thought them up, and that between the two of them they merely expressed them in a more formal way.
The Three Laws appear in a large number of Asimov's stories: they figure throughout his Robot series, in several related stories, and in the series of novels starring Lucky Starr. They have also been used by other authors working in Asimov's fictional universe, and references to them are frequent in other works, both science fiction and other genres.
Purpose
These laws arise as a protective measure for human beings. According to Asimov himself, the conception of the laws of robotics was meant to counter a supposed "Frankenstein complex," that is, a fear that human beings would develop toward machines that might hypothetically rebel and rise against their creators. If a robot even attempted to disobey one of the laws, its positronic brain would be irreversibly damaged and the robot would "die." At a first level, equipping robots with such laws poses no problem; after all, they are machines created by man for his service. The complexity lies in the robot being able to recognize all the situations that the three laws cover, that is, being able to deduce them in the moment: for example, knowing in a given situation whether a person is in danger or not, and deducing what the source of the harm is.
The Three Laws of Robotics represent the moral code of the robot. A robot will always act under the imperatives of its three laws; for all intents and purposes, a robot will behave as a morally correct being. However, it is legitimate to ask: is it possible for a robot to violate any of its three laws? Is it possible for a robot to "harm" a human being? Most of Asimov's robot stories are based on situations in which, despite the three laws, we could answer those questions with a "yes."
Asimov created a universe in which robots are a fundamental part of ten thousand years of human history, and continue to play a decisive role for ten thousand more. It is logical to think that the level of development of the robots would vary over time, their complexity steadily increasing.
History of the Three Laws of Robotics
The first robots built on Earth (seen, for example, in I, Robot) were not very advanced models, built at a time when robopsychology was not yet developed. These robots could be faced with situations in which they found themselves in conflict with their laws. One of the simplest such situations arises when a robot must harm one human being to prevent two or more from coming to harm. Here the robots decided on a purely quantitative basis, and were afterwards left useless, having been forced to violate the First Law.
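That purely quantitative criterion, and the price of being forced through it, can be sketched as follows. The model is hypothetical, invented here for illustration:

```python
# Hypothetical sketch of the early robots' purely quantitative criterion:
# pick whichever option harms the fewest humans, then become useless,
# since any harm at all still violates the First Law.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    humans_harmed: int

class EarlyRobot:
    def __init__(self):
        self.operational = True

    def choose(self, options):
        best = min(options, key=lambda o: o.humans_harmed)
        if best.humans_harmed > 0:
            # Forced violation of the First Law: the positronic brain
            # is irreversibly damaged and the robot is left unusable.
            self.operational = False
        return best

robot = EarlyRobot()
dilemma = [Option("act, harming one person", 1),
           Option("do nothing, letting two come to harm", 2)]
print(robot.choose(dilemma).name)   # -> "act, harming one person"
print(robot.operational)            # -> False
```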
Later developments in robotics allowed the construction of more complex circuits with a greater capacity for self-reflection. One peculiarity of these robots is that they could redefine their concept of "harm" according to their experiences, and determine degrees of it. Their valuation of human beings could also be shaped by their environment. Thus a robot could come to injure one human being in order to protect another it considered more valuable, in particular its master. It could also happen that a robot physically harmed one human being to prevent another from being harmed psychologically, since there came to be a tendency to consider psychological harm more serious than physical harm. These situations would never have arisen in the older robots. In his robot stories Asimov poses the most diverse situations, always considering the logical possibilities that could lead robots into such dilemmas.
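One way to picture this shift, again as a hypothetical sketch with invented numbers, is to replace the simple head count with a weighted score, where the severity assigned to each kind of harm and the robot's valuation of each person both enter the decision:

```python
# Hypothetical sketch of a later robot's graded notion of "harm".
# The weights are invented for illustration; no such numbers appear
# anywhere in Asimov's stories.

PHYSICAL = 1.0
PSYCHOLOGICAL = 1.5  # the tendency noted above: psychological harm weighed more heavily

def weighted_harm(outcomes):
    """outcomes: (severity, person_value) pairs produced by one action."""
    return sum(severity * value for severity, value in outcomes)

# Physically injure a stranger, or let the robot's master suffer psychologically?
candidates = {
    "injure the stranger":   [(PHYSICAL, 1.0)],
    "let the master suffer": [(PSYCHOLOGICAL, 2.0)],  # master valued more highly
}
choice = min(candidates, key=lambda name: weighted_harm(candidates[name]))
print(choice)  # -> "injure the stranger"
```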
The Zeroth Law
One can grow fond of Asimov's robots, which he showed in his stories becoming increasingly "human." In The Bicentennial Man, Asimov tells the story of Andrew Martin, born a robot, who struggles all his life to be recognized as a human being. There are also R. Daneel Olivaw and R. Giskard Reventlov, who play a fundamental role in the second expansion of human beings and the subsequent founding of the Galactic Empire. Being the most complex robots ever created, they were able to develop the Zeroth Law of robotics as a philosophical corollary of the First:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
R. Giskard died in Robots and Empire, after being forced to harm a human being by virtue of the Zeroth Law. The fundamental problem with this law is defining "humanity," as well as determining what constitutes "harm" to humanity. R. Daneel managed to assimilate it thanks to Giskard's sacrifice, becoming from then on humanity's protector in the shadows. Under different identities, Daneel becomes one of the most important characters of the Trantor cycle (formed by the robot stories and novels, the Empire novels, and the Foundation saga: 17 books), and is also a key element in its continuity.
Criticism
To begin with, one must consider the autonomous intelligent machines that already violate these laws.
A missile can be considered intelligent, and it has no problem killing humans. A computer system that coordinates air and ground strikes in a military operations center violates the laws. A police robot, were one to be created, would have to be able to kill just as a human police officer can. This would mean re-examining the fundamental concepts that underpin a society. For example, if private property takes precedence over human life, then the police must be able to kill humans to protect private property. Whether it is a robot or a human makes no difference, because the result is the same.
The problem arises when one considers that a group of robots could seize resources and enslave or annihilate humans regardless of which humans are concerned: any human, whatever their race, religion, or economic status, would be treated the same way. This sets aside the problem of humans who enslave one another and of the genocides committed against certain groups. Would that be any less serious than what a group of robots might do?
In general, science fiction films like I, Robot propose that an extremely intelligent robot may decide to control the destiny of humans, and that this is unacceptable. These films suggest that the thinking of a completely logical robot is unacceptable because emotions regulate conduct in a more appropriate, more human way.
It should be stressed that this would contravene natural evolution. If robots are human creations and surpass us in intelligence and ability, then the annihilation of humans would pose no problem, because the robots would represent a higher evolutionary state, and humans would have been the missing link between apes and robots. The robots would be evolution in progress.