AI is not dangerous for the reason most people think

in #technology · 7 years ago (edited)

If you look at cinema, you'll see countless examples of how artificial intelligence could one day be the end of humanity. And in many ways it's the same tired old plot we are all familiar with: AI becomes conscious and self-aware, AI wants optimization and/or survival, AI deems humanity surplus to requirements and/or a danger, AI sets out on a mission to destroy humanity (preferably with extremely inefficient human-shaped robots, because tanks and missiles don't make good movie villains).

But does this picture make much sense? Is AI likely to be the end of us because it becomes so smart that it decides it's time?


We are not going to develop conscious AI anytime soon

Let's be realistic about the state of artificial intelligence: we are neither aiming to develop artificial consciousness, nor do we really have the understanding needed to do so. Current examples of artificial intelligence are just algorithms that are better, faster or at least more efficient than humans at making certain types of decisions. We don't care if the algorithm we are using develops self-awareness of some sort; what we care about is the outcome - a correct decision. And we judge the decision's correctness or usefulness based on some pretty specific and narrowly-tailored criteria.

That creepy puppet is just taking up a passenger seat...

We don't need a self-driving car to appreciate the beauty of the scenery, to be annoyed with our boring conversations, or to give us life advice as part of small talk. We need it to get us from point A to point B safely. The algorithm doing this doesn't really require self-awareness; it only requires efficiency at its job, and that is the only criterion it gets judged on.

And this is true even for learning algorithms. They can make themselves better all they want; that doesn't change their objectives or the criteria they are, in a sense, hard-coded to judge their own efficiency by. And none of these algorithms is going to become complex enough to develop consciousness out of the blue anytime soon, as a side product of the simple procedures that allow them to optimize themselves against a given objective.
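To make that concrete, here is a minimal sketch (all names and numbers are invented for illustration, not taken from any real system) of what "learning" usually amounts to: the objective function is written once by the programmer, and optimization only ever moves the parameters, never the goal itself.

```python
import random

def objective(params: float) -> float:
    """Hard-coded criterion the algorithm judges itself on (higher is better).
    The algorithm can score better on this; it can never rewrite it."""
    target = 42.0
    return -(params - target) ** 2

def learn(steps: int = 10_000, step_size: float = 0.1) -> float:
    """Toy random hill climbing: propose a small change, keep it if it scores better."""
    params = 0.0
    for _ in range(steps):
        candidate = params + random.uniform(-step_size, step_size)
        # "Self-improvement" is nothing more than this comparison against
        # the objective the algorithm was given:
        if objective(candidate) > objective(params):
            params = candidate
    return params

print(learn())  # drifts toward 42.0 - the goal it can optimize but never question
```

However sophisticated the proposal step gets, the loop has no mechanism for wanting anything other than a higher score on the objective it was handed.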

The thing with consciousness is that we are not yet sure what it is or how it emerges at all, and by the looks of it, the chances of stumbling into creating it as a side product of something practical are minuscule. Yes, it is not impossible for an artificial intelligence to develop consciousness, to use it to determine that humanity needs to be obliterated, and to somehow find the means to achieve that new objective (preferably sexy killer robots, because any conscious AI would surely have a kink or two). But as poetic as it might be, that's a really roundabout way of destroying civilization as we know it, isn't it?

AI wouldn't need consciousness to destroy or harm humanity

Even though the learning algorithms we are creating are far from conscious, they continue to become more and more pervasive. As time goes on, machines and the smart algorithms that guide them will take over more and more crucial assignments, so knowing that they are reliable will become increasingly vital. It's probably a good time to notice what even the most complex artificial intelligence with the ability to learn is at the most fundamental level. It's software. And what is the most common problem with software? Bugs.

Well, there you have it. Artificial intelligence doesn't need to be conscious to create huge problems; it just needs to be responsible for something really crucial and to have a bug or two that allow it to spiral out of control. Keep in mind that there have already been Wall Street crashes caused by trading algorithms gone rogue, so it could surely happen elsewhere given enough chances.
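To see how little intent is required, here is a deliberately contrived sketch (a made-up toy loop, not modeled on any real trading system) of how a single flipped sign turns a damping feedback controller into an amplifying one:

```python
def trade_step(position: float, deviation: float,
               gain: float = 0.5, buggy: bool = False):
    """One round of a toy market-making loop (entirely invented for illustration)."""
    correction = -gain * deviation   # correct code leans *against* the price move
    if buggy:
        correction = -correction     # the bug: a flipped sign leans *into* it
    position += correction
    deviation += 0.8 * correction    # our own trades push the price further
    return position, deviation

position, deviation = 0.0, 1.0
for step in range(10):
    position, deviation = trade_step(position, deviation, buggy=True)
    print(f"step {step}: position={position:+.1f}, deviation={deviation:+.1f}")
# With buggy=False the deviation decays toward zero; with buggy=True both
# numbers balloon geometrically - a flash crash in miniature.
```

The difference between the stabilizing loop and the runaway one is a single character, written with no malice and no consciousness at all.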

We tend to expect that the more advanced algorithms become, the safer they will be, because they will be smarter and therefore supposedly better at their jobs. But that is not necessarily the case. A piece of software is exactly as unintelligent as the oversights of the programmers who created it and of the colleagues who didn't manage to catch them. And with algorithms optimizing themselves and dealing with unimaginable amounts of data at unimaginable speeds, our ability to monitor and understand them properly might start diminishing long before the algorithms start hating us.

So with learning and self-optimizing algorithms continuing to grow more complex, continuing to take more and more responsibility for the way our society operates, and with our ability to monitor them continuing to decline, the chance of a pesky bug causing a major disaster is inevitably going to rise. I think this is the real danger Elon Musk and Stephen Hawking are trying to warn us about.

It's not that algorithms will become so smart that they will decide to kill us - it's that we'll remain too stupid to keep them bug-free.



Yo, use the steemstem tag in the future! I'm part of the curation team for STEM posts now, and can get you some consistent rewards if your stuff passes the criteria.


Thanks for the heads up. Unfortunately, I don't get enough time to craft good posts right now, but I hope that can change soon.

I'm not sure I'm 100% clear on the sources in the references. What about opinion/general knowledge pieces like this one, where I haven't really used anything as a source for my writing?

Though now that I come to think about it, it might not be a bad policy at all to develop the habit of looking for sources.

Well, quality doesn't have to be superb, but yeah, crediting images and the sources where you get your info is most important. If it's scientific and in the steemstem tag, you might enjoy the rewards a bit more =D

We are already having this problem over at Google. The algorithm that recommends YouTube videos based on all available data about your previous preferences is a neural network that has performed so many internal modifications that the programmers and engineers there no longer know how the algorithm works. They are still fully capable of controlling it, but learning its particulars exacts such a high price in manpower that even Google doesn't have enough brains to fully vet it.

Now, I wonder if that'll happen to DTube....

Yep, that's a great point and a great example. It's not about having an algorithm that itself is overly complex, it's just that it changes so fast and makes so many decisions that at some point it becomes practically impossible to keep track of. And things are just getting started.

I wonder if self-driving cars will at some point realize that drifting allows them to corner at higher speeds and decide to start fast-and-furious-ing passengers who are in a hurry - after all, some of them are learning to drive from GTA, and one way to teach algorithms about stuff is to make them watch YouTube videos :P

I doubt DTube would ever evolve into this, as its goal is to be open, and there is nothing less open than an incomprehensible black-box algorithm.

