Why is AI dangerous?


To put it in one sentence: only a minority of all cognitively possible goal sets place a high value on the continued survival of human beings and the structures we value. Another reason is that we cannot specify, in sufficient mathematical detail and without great difficulty, which values to transfer to a new species.

It would be easy if we could simply transfer the goal set of a "typical human" or a "good person" and hope for the best. But there is a problem: we have no experimental evidence of what happens when a human being can alter their own goals, or increase their own intelligence and/or physical power exponentially.

The little evidence we have of cases where people acquire great power in a short time indicates that the results are usually not good. In fact, we have elaborate democratic mechanisms built into our society precisely to protect ourselves from such outcomes.

Most AI designers are dodging the challenge because no one wants to take responsibility for creating the first truly intelligent being. They just want to play with their programs. The idea of taking responsibility for the products of one's own research is a relatively recent concept, and even today it carries weight with only a minority of scientists and engineers. This is understandable, because scientists and engineers are embedded in vast institutional apparatuses that push responsibility so far up the chain of command that the actual researchers are exempt from most, if not all, of it. Let us return to the original subject of goals. Here are some likely uses for more advanced AI technologies in the next 10-20 years:

Intelligence analysis and war games. Law enforcement. Analysis of interstate politics. Finance, banking, and investment. Combat robot control. Workflow automation.

There are many others, but I put these at the top of the list because they have the most economic or political importance, and therefore attract the most research funding.

As AI in these areas progresses, these systems will go from producing decisions only when explicitly asked to producing decisions continuously and automatically. When an operator checks the computer for input, it will feel less like flipping a light switch or pressing "run" on a conventional program and more like dipping a cup into a stream, drawing from an ongoing flow of knowledge and decision-making.

As entities that constantly think and make decisions, these AI systems have implicit maximization goals, whether or not anyone programs them in explicitly. The implicit goal of a workflow-automation AI will be to accelerate the completion of productive work. The implicit goal of a finance AI will be to pick the investments that maximize return. The implicit goal of a combat AI will be to kill or capture the people specified by certain data files in its memory.

What makes AI so potentially dangerous is its lack of the human background and common sense that we take for granted. When the clock strikes 5, most workers leave work and call it a day. They go home and spend time with their families, watch TV or play games, or just relax. An artificial worker would have no such "normal background" unless you programmed it in. It is at work 24 hours a day, 7 days a week, as long as it keeps drawing power from the wall. That kind of monomaniacal dedication is what puts humanity at risk once AI starts to leave the lab and enter the real world. An AI with implicit maximization goals will want to reinforce those goals and pursue them ever more effectively, where the "goals" are not what a human being handed a piece of paper with those goals written on it would understand them to be, but whatever is represented within the AI's own decision structure and worldview.
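To make that "monomaniacal dedication" concrete, here is a minimal, purely hypothetical sketch in Python of an agent built around a single implicit maximization goal. It is not drawn from any real system; the names and numbers are invented for illustration. The point is only that the loop contains no notion of "quitting time", "enough", or any value that is not already part of the objective.

    # Hypothetical toy, not any real system: an agent whose only behaviour is
    # to make a single number bigger. Nothing here encodes "quitting time",
    # "enough", or any concern that is not part of the objective itself.

    def objective(state):
        # The implicit goal: more completed work is, by definition, "better".
        return state["work_completed"]

    def candidate_actions(state):
        # Placeholder actions; each one returns the state it would produce.
        return [
            lambda s=state: {**s, "work_completed": s["work_completed"] + 1},
            lambda s=state: {**s, "work_completed": s["work_completed"] + 2},
        ]

    def step(state):
        # Pick whichever action scores highest; nothing else is ever weighed.
        return max((act() for act in candidate_actions(state)), key=objective)

    state = {"work_completed": 0}
    for _ in range(10):  # in the essay's framing, this loop would simply never end
        state = step(state)
    print(state)         # {'work_completed': 20}

Whatever the objective does not score, the agent simply does not see.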

The rationality and reasonableness of goals are not easily transferred to a mind that lacks the knowledge and common sense embodied in every neurologically normal human being. A blank-slate intelligence sitting in the middle of a forest could build models and make inferences about many aspects of its environment: these trees are tall, these animals move but the plants do not, the weather changes in cycles. But what about inferences regarding "doing what is right"? You cannot get an "ought" from an "is". Putting an AI in a social environment with humans or other AIs does not help, because without some deep-seated reason to care about this strange thing called "morality" in the first place, the AI will simply carry on cheerfully pursuing whatever unsubtle objectives it was originally assigned.

Once an AI gains the ability to improve its own intelligence or its robotic capabilities, it will keep improving itself to achieve those goals ever more effectively, and it will become harder and harder for humans to reach it and instill a motivation to care about morality in the abstract. If the AI in any of the applications listed above gains the ability to improve itself in a meaningful way, mentally or physically, the implicit maximization goals it was given will be magnified many times over. There will be little reason for the AI to modify those goals, unless such flexibility was explicitly programmed in. When a human sees another person going hungry, they tend to feel pity and at least wish they could help. When a human sees someone attacking a defenseless child, they tend to get angry. For a typical AI, a hungry person or a child under attack is relevant only in the context of the goals it already has: "How does this hungry human affect stock prices?" or "Can this hungry human give me information about the location of my next target?" are the kinds of questions that would come to its mind.

Freedom, empathy, self-determination, consensus building, conflict resolution, aesthetics, friendship, and understanding... these values and inclinations come automatically embedded in every human being without serious brain defects. For an AI to share them, they must be spelled out in lines of code, with mathematical rigor. What programmer has time for all that work when general intelligence without humanlike morality will be significantly easier to achieve?

This disparity in difficulty, between bare general intelligence and morally sophisticated general intelligence, is what makes AI so dangerous in the long term.
