Why is AI dangerous?


To put it in one sentence, I would say it is because only a minority of the cognitively possible sets of objectives place a high value on the continued survival of human beings and the structures we care about. Another reason is that we cannot specify, in sufficient mathematical detail, which values to transfer to a new species without running into serious trouble.

It would be easy if we could simply transfer the goal set of a "typical human" or a "good person" and hope for the best. But there is one problem: we have no experimental evidence of what happens when a human being can alter their own objectives, or increase their own intelligence and/or physical power exponentially.

The little evidence we have of what happens when people acquire a great deal of power in a short time indicates that the results are usually not good. In fact, we have built elaborate democratic mechanisms into our society precisely to protect ourselves from such outcomes.

Most AI designers are dodging the challenge because no one wants to take responsibility for creating the first truly intelligent being. They just want to play with their programs. The idea of taking any responsibility for the products of one's own research is a relatively recent concept, one that carries weight with only a minority of scientists and engineers even today. This is understandable, because scientists and engineers are embedded in vast institutional apparatuses that push responsibility so far up the chain of command that the actual researchers are exempt from most, if not all, of it. Let us return to the original subject of objectives. Here are some likely uses for more advanced AI technologies in the next 10-20 years:

Intelligence analysis and war games; law enforcement; analysis of interstate politics; finance, banking, and investment; control of combat robots; workflow automation.

There are many others, but I put these at the top of the list because they have the greatest economic or political importance, and therefore attract the most research money.

As AI in these areas progresses, the systems will move from producing decisions only when explicitly asked, to producing decisions continuously and automatically. When an employee checks the computer for input, it will be more like dipping a cup into an existing stream of knowledge and decision-making than flipping a light switch or pressing "run" on a conventional computer program.

As entities that constantly think and make decisions, these AI systems will have implicit top-level goals, whether or not anyone programs them explicitly. The implicit goal of a workflow-automation AI will be to speed the completion of productive work. The implicit goal of a finance AI will be to choose the actions that maximize return on investment. The implicit goal of a combat AI will be to kill or capture the persons specified by certain data files in its memory.
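To make the point concrete, here is a toy sketch (purely hypothetical; the actions, payoffs, and field names are all invented for illustration) of what a single-minded maximizer looks like: whatever is not part of its objective simply never enters the decision.

```python
# Toy sketch: a single-minded maximizer has no goals beyond its objective.
# All actions and numbers below are hypothetical illustration.

def best_action(actions, objective):
    """Pick the action scoring highest on the objective -- nothing else counts."""
    return max(actions, key=objective)

# A hypothetical "finance AI": its implicit goal is return on investment.
actions = [
    {"name": "safe_bond",   "roi": 0.03, "harm_to_bystanders": 0.0},
    {"name": "risky_stock", "roi": 0.12, "harm_to_bystanders": 0.0},
    {"name": "asset_strip", "roi": 0.30, "harm_to_bystanders": 0.9},
]

choice = best_action(actions, objective=lambda a: a["roi"])
print(choice["name"])  # the "harm_to_bystanders" field is never even consulted
```

Nothing in the loop is malicious; the harm field is simply invisible to an objective that never mentions it, which is the whole problem with implicit goals.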

What makes AI so potentially dangerous is its lack of the human history and common sense we take for granted. When the clock strikes five, most workers leave their jobs and call it a day. They go home and spend time with their families, watch TV or play games, or just relax. An artificial worker would have no such "normal background" unless you programmed it in. It is at work 24 hours a day, 7 days a week, as long as it keeps drawing power from the wall. That kind of monomaniacal dedication is what will put humanity at risk from an AI once it starts to leave the lab and enter the real world. An AI with an implicit top-level goal will want to reinforce those objectives and achieve them ever more effectively, where the "objectives" are not what a human being handed a slip of paper with those objectives written on it would do, but whatever is represented within the AI's decision structure and worldview.

Rationality and reasonableness of goals are not easily transferred to a mind lacking the knowledge and common sense embodied in every neurologically normal human being. A blank-slate intelligence sitting in the middle of a forest could build models and make inferences about many aspects of its environment: these trees are tall; these animals move but the plants do not; the weather changes in cycles. But what about inferences about "doing the right thing"? You cannot get an "ought" from an "is". Putting an AI in a social environment with humans or other AIs does not help, because without some deep-seated reason to care about this strange thing called "morality" in the first place, the AI will just cheerfully carry on pursuing the goals it was originally assigned, with no subtlety.

If it also acquires the ability to improve its own intelligence or robotic power, it will get better and better at achieving those goals, and it will become harder and harder for humans to reach it and instill the motivation to care about morality in the abstract. If an AI in any of the applications cited above gains the ability to improve itself in a meaningful way, mentally or physically, the implicit top-level goals it was given will be magnified many times over. The AI will have little reason to modify those goals unless such flexibility mechanisms were explicitly programmed in. When a human sees another human starving, he tends to feel sympathy and at least wants to be able to help. When a human sees someone attacking a defenseless child, he tends to get angry. For a typical AI, a starving person or an attacked child is relevant only in the context of the goals it already has: "How does this starving human affect stock prices?" or "Can this starving human give me information about the location of my next target?" are the kinds of questions that might come to its mind.

Freedom, empathy, self-determination, consensus-building, conflict resolution, aesthetics, friendship, and understanding... these values and inclinations come built into every human being without serious brain defects. For an AI to share them, they must be spelled out in lines of code with mathematical rigor. What programmer has time to do all that work when bare general intelligence, without a human-like morality, will be significantly easier to achieve?

This disparity in difficulty, between bare general intelligence and morally sophisticated general intelligence, is what makes AI so dangerous in the long run.

European scientists invent a robot that learns mathematical concepts


Scientists at the European research project XPERO have built a robot that can learn basic physical and mathematical concepts and use them to move around, the Bonn-Rhein-Sieg University in Sankt Augustin (western Germany) announced today.

The robot learns concepts such as location and orientation through a coordinate system. Initially it moves around aimlessly while recording sensory data, which it then uses to produce a pattern, or model. This model later makes it possible for the android to anticipate the location of objects and how their positions change as it moves.
The authors of the algorithm, Ivan Bratko and Jure Zabkar of the University of Ljubljana (Slovenia), stress that "what is trivial for a person can pose enormous difficulty for a robot." They further explain that the android they designed "has less knowledge than a baby," since it does not distinguish objects, only patches of color and their edges.
DISTINGUISHING PATCHES
"It has no notion of the concept of an object or of its position in a coordinate system, nor is it aware of how that system varies with its own movements," he adds.
The robot therefore has to learn for itself what a coordinate system is and how it works.
The scientists developed a mechanism that allows the automaton to establish a routine for extracting the sensory data and translating it into a model that helps it explain and anticipate its surroundings. Using the same algorithm, they have taught it physical concepts such as the "flexibility" of an object or its "degree of freedom of movement."
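A minimal sketch of the idea, assuming the learning step amounts to fitting a predictive model of how sensed positions react to the robot's own motion (the actual XPERO algorithm is more sophisticated, and all numbers here are invented):

```python
# Sketch only, not the XPERO algorithm: the robot records
# (own_movement, change_in_sensed_object_position) pairs while wandering,
# fits a linear model, and uses it to anticipate where an object will
# appear after a planned move.

def fit_slope(xs, ys):
    """Least-squares slope through the origin."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Wandering phase: each step the robot moves dx, and the object's sensed
# position shifts by roughly -dx (it is the robot that moved, not the object).
moves  = [0.5, -1.0, 2.0, 0.8]
shifts = [-0.5, 1.0, -2.0, -0.8]

slope = fit_slope(moves, shifts)   # learned model: shift ~= slope * move
predicted_shift = slope * 1.5      # anticipate a planned 1.5-unit move
print(round(slope, 2), round(predicted_shift, 2))  # -1.0 -1.5
```

The point of the sketch is the pipeline the article describes: aimless movement produces data, the data produce a model, and the model lets the robot anticipate its sensory input instead of merely reacting to it.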


IMPORTANT DISCOVERY
What was originally considered a problem of purely academic interest turns out to have great technical importance, says project coordinator Erwin Prassler of Germany's Bonn-Rhein-Sieg University.
The XPERO project lays the groundwork for the future development of the key technology behind the next generation of service robots, able to clean the house, mow the lawn, or polish shoes. The androids built to date lack intelligence because they are pre-programmed devices, unable to handle data they do not already know or to carry out procedures they have not previously been taught. Service robots of the future, by contrast, will be able to handle large numbers of new concepts and models, building on knowledge, learning, and sensory input, and so perform new tasks.
The XPERO project's automata will give a demonstration at this year's FET (Future and Emerging Technologies) technology conference, taking place in Prague through Thursday, April 23.

Robotics



Robotics is a young discipline with its own problems, foundations, and laws. It has two sides: theory and practice. On the theoretical side it draws on automatic control, computer science, and artificial intelligence. On the practical or technical side there are aspects of construction (mechanics, electronics) and of operation (control, programming). Robotics is therefore decidedly transdisciplinary.
The first industrial robot was installed in 1961 at a General Motors plant in the United States. After the United States, the first countries to robotize their industry were Japan and Sweden, while in the remaining OECD countries robots were first introduced during the seventies.

However, the robotics market did not take off until the mid-eighties, when cumulative annual growth rates in the number of robots exceeded 20 percent. The strong growth in demand was mainly due to improved technology, falling robot prices, and a number of factors directly tied to business competitiveness.

The impact of robotic systems is bound up with other variables (the introduction of new forms of organization, new technologies, globalization, the offshoring of business, etc.) influencing the trend toward declining labor conflict that, according to the experts interviewed, will characterize the development of advanced technological economies in the coming years.

Flexible automation technologies, such as robots and flexible manufacturing systems, are among the elements that support "lean production".

This mode of production is characterized by increased organizational flexibility, which allows a greater variety of products to be manufactured in short runs. The shift to lean production requires a series of organizational changes: multi-skilled workers, new relationships with suppliers, and so on. But it also requires the adoption of technologies that integrate information flows, and of flexible manufacturing technologies that increase the degree of variability in production.

Robots make procedures look very simple, repeating the same operations again and again with foolproof reliability. But appearances deceive: behind a simple lift-and-place motion lies a great deal of underlying technology.

Two students design a robot to monitor leaks at monobuoys


Raquel Garrido Romero and Carlos Romero Charro, two second-year students at the Odiel secondary school in Gibraleón, have won second prize in the Jerónimo de Ayanz competition on Industry and Technology, organized by the Association of Basic Chemical Industries of Huelva (AIQB) and the Huelva Provincial Delegation of the Ministry of Education.

The award-winning project is called 'Design and simulation of a robotic system for the control of accidental leaks at monobuoys', and was coordinated by Alberto Bouzón, a teacher in the school's Technology department. It is a comprehensive piece of work that even produced a prototype robot and carried out test runs.


The Jerónimo de Ayanz award aims to foster, among Huelva's students, knowledge of the province's industry, a sector they live alongside and in which they may well pursue their future work and careers. At the same time, it seeks to encourage students' analytical and expository skills through the papers presented, whether individually or in teams.

Three Laws of Robotics


In science fiction, the Three Laws of Robotics are a set of rules written by Isaac Asimov, which most of the robots in his novels and stories are designed to obey. In that universe, the laws are mathematical formulas imprinted on the "positronic brain" pathways of the robots (what today we would call ROM). First appearing in the story "Runaround" (1942), they state:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

This formulation of the laws is the conventional way the humans of the stories state them; their true form would be an equivalent, far more complex set of instructions in the robot's brain.
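As a toy illustration only (Asimov's stories make the point that the laws are not reducible to simple code, and every predicate below is a hypothetical stand-in), the three laws form a strict priority order:

```python
# Toy sketch of the Three Laws as a strict priority check.
# Deciding what counts as "harm" is exactly the hard part the stories explore;
# here it is assumed away into boolean fields.

def permitted(action):
    """Return True if the action passes the three checks, in priority order."""
    if action["harms_human"]:                 # First Law: overrides everything
        return False
    if action["disobeys_order"] and not action["order_conflicts_first_law"]:
        return False                          # Second Law: obey, unless First Law says otherwise
    if action["endangers_self"] and not action["self_preservation_conflicts_higher_laws"]:
        return False                          # Third Law: self-preservation comes last
    return True

# A robot ordered into danger: obeying risks itself, but the Second Law
# outranks the Third, so the action is permitted.
action = {
    "harms_human": False,
    "disobeys_order": False,
    "order_conflicts_first_law": False,
    "endangers_self": True,
    "self_preservation_conflicts_higher_laws": True,
}
print(permitted(action))  # True
```

The ordering of the `if` statements is the whole content of the laws: each lower law applies only in the space of actions the higher ones leave open.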

Asimov attributed the three laws to John W. Campbell, saying they were drafted during a conversation held on December 23, 1940. Campbell, however, maintained that Asimov already had them in mind, and that between the two of them they were merely stated more formally.

The three laws appear in a large number of Asimov's stories: throughout his Robot series, in several related tales, and in the Lucky Starr series of novels. They have also been used by other authors working in Asimov's fictional universe, and references to them are frequent in other works, both in fiction and in other genres.

Purpose

These laws arose as a measure of protection for human beings. According to Asimov, he conceived the laws of robotics to counter a supposed "Frankenstein complex": the fear humans might develop toward machines that could hypothetically rebel and rise up against their creators. If a robot so much as tried to disobey one of the laws, its positronic brain would be irreversibly damaged and the robot would "die". At first glance, equipping robots with such laws poses no problem; after all, they are machines created by man to serve him. The complexity lies in having the robot distinguish which situations the three laws cover, that is, being able to deduce this in the moment. For example, in a given situation it must know whether or not a person is being harmed, and deduce the source of the harm.

The three laws of robotics represent the robot's moral code. A robot will always act under the imperatives of its three laws; for all intents and purposes, it will behave as a morally correct being. Still, it is legitimate to ask: is it possible for a robot to violate one of the three laws? Can a robot "harm" a human being? Most of Asimov's robot stories are based on situations in which, despite the three laws, we could answer those questions with "yes".

Asimov created a universe in which robots are key to more than ten thousand years of human history, and continue to play a decisive role for ten thousand more. It is logical to think that the robots' level of development varies over time, their complexity steadily increasing.

History of the Three Laws of Robotics

The first robots built on Earth (seen, for example, in I, Robot) were recently developed models. It was a time when robopsychology had not yet developed. These robots could be faced with situations that put their laws in conflict. One of the simplest is when a robot must harm one human being to keep two or more from coming to harm. Here the robots decided on a purely quantitative basis, and were afterwards left useless, having been forced to violate the First Law.

Later developments in robotics allowed the construction of more complex circuits, with a greater capacity for self-reflection. One peculiarity is that robots could redefine their concept of "harm" according to their experiences, and establish degrees of it. Their valuation of human beings could likewise be shaped by their environment. Thus a robot could injure one human being to protect another it considered more valuable, in particular its master. It could also happen that a robot physically harmed one human to keep another from being harmed psychologically, as some came to consider psychological harm more serious than physical harm. Such situations would never have arisen with the older robots. In his robot stories Asimov places his machines in the most varied situations, always exploring the logical possibilities to which such situations could lead them.

The Zeroth Law

One cannot help growing fond of Asimov's robots, whose stories show them becoming ever more "human". In The Bicentennial Man, Asimov tells the story of Andrew Martin, born a robot, who struggles throughout his life to be recognized as a human being. There are also R. Daneel Olivaw and R. Giskard Reventlov, who play a key role in the second wave of human expansion and the subsequent founding of the Galactic Empire. Being the most complex robots ever created, they were able to develop the Zeroth Law of robotics ("Zeroth Law" in English) as a philosophical corollary of the First:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

R. Giskard died in Robots and Empire, after being forced to harm a human being under the Zeroth Law. The fundamental problem with this law is defining "humanity", and determining what constitutes "harm" to humanity. R. Daneel managed to assimilate the law through Giskard's sacrifice, thereafter becoming humanity's protector in the shadows. Under different identities, Daneel became one of the most important characters of the Trantor cycle (formed by the robot stories and novels, the Empire novels, and the Foundation saga: 17 books in all), and a key element of its continuity.

Criticism

It should first be noted that autonomous intelligent machines already exist that violate these laws.

A missile can be considered intelligent, and it has no problem killing humans. A computer system that coordinates air and ground strikes from a military operations center violates the law. A police robot, were one to be built, would have to be able to kill just as a human police officer can. This would mean reviewing the fundamental concepts that underpin a society. For example, if private property takes precedence over human life, then the police must be able to kill humans to protect private property. Whether it is a robot or a human makes no difference, because the result is the same.

The problem arises when one considers that a set of robots could seize resources and enslave or annihilate humans, no matter what humans think about it. Every human, regardless of race, religion, or economic status, would be treated the same way. This leaves aside the problem of humans enslaving one another, and of the genocides committed against certain groups. Would what a group of robots did be any more serious?

Science fiction films such as I, Robot generally pose the scenario of an extremely intelligent robot deciding to take control of human destiny, and treat this as unacceptable. These films suggest that the thinking of a purely logical robot is unacceptable because emotions can regulate conduct in a more appropriate, more human way.

It should be noted that this would contravene natural evolution. If robots are human creations and surpass us in intelligence and ability, then the annihilation of humans would pose no problem, because the robots would represent a higher evolutionary stage, and humans would have been the missing link between apes and robots. The robots would simply be evolution at work.


The graph shows the distribution of robots worldwide compared with the number of human workers, that is, how many industrial robots there are per 10,000 workers in the same sector in each country. Robotics never ceases to surprise us. The latest example is a great alternative if you have no friends or are an only child: now you can play the legendary, simple game of Pong against a machine, and we don't mean the computer, but a physical opponent sitting beside you, mashing the keys.

In total there are already a million robots around the world, and, as expected, Japan stands out, with a huge lead over the runners-up. Spain ranks 10th in the world, which is not bad.

By region, surprisingly, Europe is ahead of America, which in turn is ahead of Asia. Other curious details: in Japan almost 5 new robots are installed every hour, and in Germany the ratio of robots to human workers in the automotive sector is 7 to 1. Globally, 33% of robots are used in the automotive industry and 10% in the manufacture of electronic products.
Our opponent is a robot that uses a webcam to watch the game screen, solenoid fingers to press the keyboard, and a laptop as its brain. A formidable rival, and one that probably never makes a mistake. So you can face it for ever and ever, until the computer crashes or loses power. The video shows how it fares against a human.
Researchers at the University of California, working with DARPA, have managed to control rhinoceros beetles by radio, via signals from a module mounted on the insect's shell, with six control electrodes connected to the beetle's brain and muscles. The complete apparatus is very light, about 1.3 grams, considerably less than the 3 grams these beetles can carry. In time they plan to use the insect's biological sensors (its eyes, for example, to capture images) and the insect's own energy to power the control module.
See what this robot developed at Waseda University can do. Instead of focusing on developing "skillful" legs like Honda's Asimo, the engineers focused on developing "skillful" hands. The Twendy-One can crack an egg, put a straw in a glass, place bread in a toaster with tongs, set the table, and so on.

Here are some detailed photos of the structure of the hands and of what Twendy-One can do with them.

RoboBuilder, a flexible educational platform


Welcome to the new world of RoboBuilder, a flexible, modular educational platform. It is like an advanced version of the LEGO Mindstorms NXT without its constraint of 4 sensor ports and three servo ports, and it is an ideal choice for many schools and for beginners in robotics. It is perfect for education, hobby, and competition, and supports a large variety of mechanical configurations. It balances the educational effort between the construction phase (usually the most expensive on other platforms), programming, and testing, for a more balanced and productive learning experience.