STEPHEN HAWKING: The benefits of artificial intelligence are not without risks

in #technology · 6 years ago


The British physicist Stephen Hawking warns that artificial intelligence may pose a threat to the human race if it surpasses human thinking.

LOS ANGELES – When the famous British computer scientist Alan Turing posed the question "Can machines think?" 65 years ago, he immediately set the question itself aside as too vague to answer directly, proposing a practical test in its place. That story was retold this year in the film about his life, "The Imitation Game".

What has since become known as the "Turing test" is an experiment: can a machine deceive a human being into believing that it, too, is human? It became the gold standard of research on artificial intelligence, and this landmark test has shaped theory for several decades.

The test, in which a computer answers questions posed by an interviewer who cannot see it and tries to convince the interviewer that it is human, is now being applied in practical ways everywhere.

This kind of digital intellectual capacity powers the "Siri" application, the personal assistant on the iPhone that answers spoken questions, as well as intelligent systems like the Watson supercomputer, which defeated its human competitors to win the TV quiz show "Jeopardy!" in 2011. Theory holds that the greater a computer's capacity for artificial intelligence, the greater its ability to learn, and the more it learns, the more that capacity grows.

In 2013, Vicarious developed an artificial intelligence program that could pass a widely used online test designed to tell humans and computers apart. The test, called CAPTCHA (short for "Completely Automated Public Turing test to tell Computers and Humans Apart"), requires humans to retype a short set of distorted numbers or characters.
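The mechanics of such a test can be sketched in a few lines: a server generates a short secret string, renders it as a distorted image (the image-distortion step is omitted here), and checks whether the respondent's transcription matches. This is a minimal illustrative sketch, not Vicarious's actual system; the function names are assumptions.

```python
import random
import string

def make_challenge(length=6):
    """Generate the secret string that a CAPTCHA would render as a distorted image."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(random.choice(alphabet) for _ in range(length))

def verify(secret, response):
    """A respondent passes only by retyping the secret exactly (case-insensitive)."""
    return secret.strip().upper() == response.strip().upper()

secret = make_challenge()
print(verify(secret, secret.lower()))  # a faithful transcription passes
```

The security of the scheme rests entirely on the distortion step: the string itself is trivial for software to copy, so defeating a CAPTCHA, as Vicarious's program did, means solving the visual recognition problem, not the string comparison.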

The co-founder of Vicarious, Scott Phoenix, said he wanted to go further and devise computers that could learn how to treat diseases, produce renewable energy, and perform most of the jobs done by humans.

The aim is to invent "a computer that thinks like a person, except that it does not have to eat or sleep", he was quoted as saying. However, some brilliant technological minds warn that a major breakthrough in the development of artificial intelligence could also destroy humanity.

For example, the founder of the Tesla electric car company, Elon Musk, has described artificial intelligence as "the greatest threat to our existence as human beings", comparing thinking machines to nuclear weapons and to summoning "the demon".

Musk told an audience at MIT last October, drawing a wave of laughter: "It's like all those stories about the guy with the pentagram and the holy water, who is sure he can control the demon. It doesn't work out."

The British astrophysicist has likewise warned that full artificial intelligence, the invention of computers with minds of their own, "could be the end of the human race." This year, scientists at Cambridge University, where Hawking is a director of research, founded a centre for the study of existential risk; among its objectives is to study how to maximize humanity's benefit from artificial intelligence while avoiding a catastrophe like those we see in science fiction novels.

Both goals, however, are still far from being realized. The philosopher and author Nick Bostrom conducted a poll among a group of artificial intelligence experts, asking when they expect science to achieve "high-level machine intelligence".

On average, these experts predicted that it would be achieved around 2075, with superintelligent machines capable of outstripping human thinking following some 30 years later; 21 percent of them, however, said it would never happen.

Please don't forget to Upvote & Resteem if you like this article


In my view the two biggest risks of AI are:

  • The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.
  • The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a super-intelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.

I truly agree with you, buddy. There will always be something unexpected to come; technology can't be fully trusted.
