Researchers Hack Self-Driving Cars with Stickers on Signs

in #security • 7 years ago (edited)

Researchers were able to fool AI systems that recognize traffic signs using only minor manipulations. By placing stickers on signs, they introduced subtle changes sufficient to make autonomous systems misclassify or misread the sign. These modifications can easily be camouflaged or go unnoticed by human observers. This type of academic study is called adversarial research, as it is intended to showcase weaknesses that attackers might use to abuse or undermine the capabilities of AI systems.
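The underlying idea is easiest to see in its digital form: a tiny, targeted perturbation to an input image can flip a classifier's output even though a human barely notices the change. Below is a minimal sketch of that principle using the classic fast-gradient-sign method (FGSM) against a hypothetical PyTorch sign classifier; the model, class indices, and inputs are placeholders of my own, and the actual research used printed stickers on physical signs rather than pixel-level noise.

```python
# Minimal FGSM-style sketch (hypothetical model and inputs; a generic
# illustration of adversarial perturbations, not the researchers' attack).
import torch
import torch.nn.functional as F
import torchvision

# Stand-in classifier; a real sign classifier would be trained on traffic signs.
model = torchvision.models.resnet18(weights=None, num_classes=43)  # e.g. 43 sign classes
model.eval()

def fgsm_perturb(image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (shape 1x3xHxW)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: an image the model currently classifies as a stop sign.
x = torch.rand(1, 3, 224, 224)      # placeholder image tensor
y = torch.tensor([14])              # placeholder class index for "stop"
x_adv = fgsm_perturb(x, y)
print(model(x_adv).argmax(dim=1))   # may now predict a different class
```

A physical sticker attack relies on the same principle, with the added difficulty that the perturbation must survive printing, lighting changes, and varying viewing angles.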

The full article can be found here: Researchers hack a self-driving car by putting stickers on street signs

Let’s not allow our imaginations to run wild prematurely. There are two important points to consider.    

First, Artificial Intelligence (AI), as incredible as it is, is just a tool. It can be defeated, misused, and manipulated like every other tool in existence. Second, from a cybersecurity perspective, just because Deep Learning inputs could be manipulated for malicious intent does not mean it actually will occur in the real world. Is this a vulnerability? Yes. But like most, unless it serves the objectives of an attacker, it won't be widely exploited. I think there is a greater chance that autonomous cars are compromised with custom ransomware, which would provide direct financial benefits to the criminals, than by modified street signs.

That said, it is important to continue such adversarial research, to see how far this rabbit hole goes and whether there are use cases compelling enough for cyber threats to exploit.


Image Source: https://www.autoblog.com/2017/08/04/self-driving-car-sign-hack-stickers/     


Interested in more? Follow me on LinkedIn, Twitter (@Matt_Rosenquist), Information Security Strategy, and Steemit for insights into what is going on in cybersecurity.


Very interesting. However, I think it would be incredibly easy to misuse AI, which could become a very big problem.

I am glad to hear that this was discovered by researchers; it's pretty interesting.

Someone at the car maker has probably thought of this and put some kind of anti-hacker protection in it.

Sounds exciting. Blockchain for insurance purposes.

Definitely worth an upvote and a resteem :]

Nowadays we are willingly or unwillingly engaged with many types of AI. Hopefully this kind of 'adversarial research' will help us fight and prevent some AI misuses. Nice info share. Upvoted.

Yeah, I don't trust AI that relies on signs. I'd rather it have a DB of stop signs and lights. This way it can even drive "blind." But even that is not safe...what happens when that DB is hacked?

Now, if the logic was also on a blockchain...well...maybe then it can be safe.

I think we can agree that using a multitude of data sources, sensors, and systems provides a defense-in-depth structure that offers better security and safety.
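As a rough illustration (the interfaces and thresholds below are hypothetical, not any vendor's actual design), cross-checking the camera's classification against an offline map database might look something like this:

```python
# Sketch of cross-checking a vision detection against an offline map database
# before acting on it (hypothetical interfaces and thresholds).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    sign_type: str      # e.g. "stop", "speed_limit_45"
    confidence: float   # classifier confidence in [0, 1]

def resolve_sign(camera: Detection, map_db_sign: Optional[str]) -> str:
    """Fuse two independent sources; on disagreement, prefer the safer action."""
    if map_db_sign is None:
        # No map data for this location: trust the camera only if it is confident.
        return camera.sign_type if camera.confidence > 0.9 else "slow_and_verify"
    if map_db_sign == camera.sign_type:
        return camera.sign_type          # both sources agree
    return "slow_and_verify"             # sources conflict: defensive behavior

# Example: camera reads "speed_limit_45" (possibly a tampered stop sign),
# but the map database expects a stop sign at this location.
print(resolve_sign(Detection("speed_limit_45", 0.97), "stop"))
```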

Agreed. Now, getting all the different vendors to work together is another thing. Everyone is using their own method for detecting objects. Unless they all adopt the same method to work with each other...we won't get there.

One example of how things can be tricked with a simple tweak. Interesting article! Gonna follow you for more :D

Welcome to the hackers' world, AI.

AI will have many lessons to learn!

haha that is so sick

It is kind of cool.
