MERITS AND DEMERITS OF ARTIFICIAL INTELLIGENCE

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons.
Artificial intelligence today is known as narrow AI (or weak AI), because it is designed to perform a narrow task (for example, only facial recognition, only Internet searches, or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at its specific task, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.
In the short term, the goal of keeping AI's impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. While it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.
In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As I.J. Good pointed out in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion that leaves the human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease and poverty, so the creation of strong AI could be the biggest event in human history. Some experts have expressed concern, however, that it might also be the last, unless we learn to align the AI's goals with ours before it becomes superintelligent.
There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but we also recognize the potential for an AI system to cause harm, intentionally or unintentionally. We believe research today will help us better prepare for and prevent such potentially negative consequences in the future, enjoying the benefits of AI while avoiding the pitfalls.
Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions such as love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios are most likely:

AI is programmed to do something devastating: autonomous weapons are artificial intelligence systems programmed to kill. In the wrong hands, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to "turn off", so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.
AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: this can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult. If you ask an obedient, intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc on our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
As these examples illustrate, the concern about advanced AI is not malevolence but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals are not aligned with ours, we have a problem. You are probably not an evil ant-hater who steps on ants out of malice, but if you are in charge of a hydroelectric green-energy project and there is an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.
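To make the misalignment problem concrete, here is a minimal illustrative sketch (not from the original article; the route names, numbers and reward functions are invented for the example). It shows how an optimizer that maximizes a literal objective can choose an outcome its user never wanted, simply because the user's real preferences were never written into the objective:

```python
# Toy illustration of objective misspecification (hypothetical example).
# The "literal" objective only rewards speed; the user's real preferences
# also penalize discomfort, but that term was never written down.

from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: int      # travel time, the only thing the literal objective sees
    discomfort: int   # cost the user cares about but never specified

routes = [
    Route("highway at legal speed", minutes=45, discomfort=0),
    Route("reckless maximum-speed run", minutes=25, discomfort=9),
]

def literal_reward(route: Route) -> float:
    # The objective we actually wrote down: faster is strictly better.
    return -route.minutes

def intended_reward(route: Route) -> float:
    # What the user actually wanted: fast, but not at any cost.
    return -route.minutes - 10 * route.discomfort

chosen = max(routes, key=literal_reward)      # the optimizer's pick
preferred = max(routes, key=intended_reward)  # the user's real preference

print(f"Optimizer picks: {chosen.name}")      # reckless maximum-speed run
print(f"User wanted:     {preferred.name}")   # highway at legal speed
```

The optimizer here is not malicious; it does exactly what its objective says, and the gap between the written objective and the intended one is precisely what AI safety research tries to close.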
Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates and many other big names in science and technology have recently expressed concern in the media and in open letters about the risks posed by artificial intelligence, joined by many leading AI researchers. Why is the subject suddenly in the headlines?
The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, but recent breakthroughs have made many experts take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.
Because artificial intelligence has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We cannot use past technological developments as much of a basis, because we have never created anything that has the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we are the strongest, the fastest or the biggest, but because we are the smartest. If we are no longer the smartest, can we be sure we will remain in control?

FLI's position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology, the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.
