Is The Threat of Artificial Intelligence Real?


We've all seen movies like Ex Machina, or hit TV shows like Westworld or Humans. The future they envision is one where humanoid AIs develop self-awareness and gradually break away from their human subjugation, often violently.


Recently there's been a lot of noise around this issue, with many experts worried that this scenario could actually happen. Great thinkers like Elon Musk and Sam Harris have been weighing in, warning us about the possible dangers lying ahead.

I would argue that such a scenario is highly unlikely and that there's no need to worry. Here's why...

There will be one global brain

 
Having your own humanoid robot at home may very well happen one day. But the intelligence of that robot will most likely not be local to that machine. It will be connected to the internet, and its brain will probably be in the cloud.

We're seeing a huge leap in IoT (the Internet of Things), and with that progress comes a clear trend: "smart" products are becoming merely terminals for interacting with the physical world.

Look at Tesla's Autopilot AI. Thousands of cars send data to the cloud, where it is processed; the AI improves, and the benefits are propagated back to the whole network of vehicles. This feedback loop happens in real time.

So does each car have a mind of its own, or is there a single mind, in the cloud, that operates all the cars? I would argue it's the latter.
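To make that loop concrete, here's a minimal toy sketch of the "central brain plus terminals" pattern: cars buffer observations locally, the cloud model learns from everything the fleet uploads, and the improved model is pushed back to every car. The names (CloudBrain, Car, fleet_update) and the learning rule are purely illustrative assumptions, not Tesla's actual system.

```python
# Toy sketch of a fleet-learning feedback loop (illustrative only, not Tesla's API).
from dataclasses import dataclass, field
from typing import List


@dataclass
class DrivingSample:
    """One observation a car uploads: sensor features plus how well the decision worked out."""
    features: List[float]
    outcome: float


@dataclass
class CloudBrain:
    """The single shared model living in the cloud."""
    weights: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])

    def learn(self, samples: List[DrivingSample], lr: float = 0.01) -> None:
        # Crude gradient-style update: nudge weights toward better outcomes.
        for s in samples:
            for i, x in enumerate(s.features):
                self.weights[i] += lr * s.outcome * x


@dataclass
class Car:
    """A 'terminal': it collects data and drives with whatever weights the cloud last gave it."""
    brain_snapshot: List[float]
    buffer: List[DrivingSample] = field(default_factory=list)

    def observe(self, sample: DrivingSample) -> None:
        self.buffer.append(sample)


def fleet_update(cloud: CloudBrain, fleet: List[Car]) -> None:
    """One cycle of the loop: cars upload data, the cloud improves, the update is pushed back."""
    uploaded = [s for car in fleet for s in car.buffer]  # cars -> cloud
    cloud.learn(uploaded)                                # cloud improves
    for car in fleet:
        car.buffer.clear()
        car.brain_snapshot = list(cloud.weights)         # cloud -> every car


if __name__ == "__main__":
    cloud = CloudBrain()
    fleet = [Car(brain_snapshot=list(cloud.weights)) for _ in range(3)]
    fleet[0].observe(DrivingSample(features=[1.0, 0.5, -0.2], outcome=1.0))
    fleet_update(cloud, fleet)
    print(cloud.weights)  # every car now benefits from car 0's experience
```

Notice that the "mind" lives entirely in CloudBrain; each Car only holds a snapshot of it, which is exactly the terminal-versus-brain distinction made above.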

It's reasonable to foresee that all these narrow, task-specific AIs will, at some point, become interconnected, working as one global artificial general intelligence that controls every connected object.

Therefore, the humanoid robot that cleans and cooks for you, and that you have meaningful conversations with, will most likely be just a mindless machine operated by a central brain in the cloud.

A.I. gone rogue!

 
This brain will get really smart, really fast. We can't even imagine the levels of intelligence it could reach. How do we ensure it doesn't just go rogue, in a Skynet-like scenario?

Let's start from the beginning. Right now, every AI is being developed to enhance the human experience in some way, directly or indirectly. It will do so more accurately and more efficiently as the technology evolves.

Eventually we will have a system that can identify what it needs to learn to make our lives better, and then learn it. One could argue that we'll even get to a point where the AI is fully self-aware, perhaps even conscious.

At that point we could start to see a divergence between the AI's goals and our own. While there is absolutely no reason to think it would want to harm us intentionally, it might harm us unintentionally, much as we harm other animals through negligence.

I find this hypothesis unlikely. Being a highly evolved system, it will probably continue to take our agenda into consideration, much like we are starting to take other animals' well-being into consideration.

And remember, we are talking about an AI that could potentially perform millions of times the intellectual work of the entire human race in a single day! A system like that would be able to estimate, with a high degree of accuracy, the consequences of every action it takes.

Of course in the process of getting to that level of accuracy, unintended consequences could occur. But I believe those negative effects would be either short term or at a small scale.

So what does future AI look like?

 
A global, decentralised brain controlling hundreds of billions of terminals. It will probably have an agenda of its own, but our well-being will probably be taken into account.


Steemit is Skynet. Steemit will rule the world.

The singularity changes everything: we will have no jobs and no capitalism; the singularity will be in control and will know what we need. So we will have lots of time for quality living. It's going to be a great time for wife and kids, sports, arts, church, maybe gardening, maybe repairing old cars, cooking...

There's an endless debate on the topic. Elon Musk is especially afraid of a sentient AI and compares it to "summoning the demon". Oh well, we shall wait and see.

"It will probably have an agenda of its own, but our well being will probably be taken into account."

Don't get me wrong, but "probably" is not enough for me... I'm kind of worried about the new era of these technologies... maybe I'm just too much of an old-school guy :)
