Elon Musk says AI could trigger a world war and I agree with him


In a new article released by The Guardian, Elon Musk claims that AI could trigger a third world war. I agree with him, and in this post I'll explain my reasons.

  • Most AI researchers have limited knowledge of cybersecurity or risk assessment.
  • Most AI researchers are biased in favor of promoting and enabling research.
  • Cybersecurity professionals, and professionals in other fields that demand a high degree of security, tend to hold different opinions than AI researchers do.

Elon Musk and Stephen Hawking are focused on security because they work in fields which require a greater level of caution and risk management. When you're heading SpaceX or building a self-driving car, lives are at risk, so the level of cybersecurity for these projects has to be high. While I cannot speak for Stephen Hawking, he at least has some idea of the level of detail and precision that must go into building something like a space probe. The issue is AI safety, and unfortunately governments do not have a good track record on human rights or on safety.

It is true that there will be good uses for most technologies, but can we honestly say we have more human rights now, under high technology, than we did under lower technology? More technology combined with bad policies and an orientation toward war leads to a loss of human rights and liberty. AI researchers may not be aware of this, because they are the ones potentially being paid to develop the new weapons, whether they realize it or not. It's important for AI researchers to try their best to avoid developing autonomous weapons, or any AI which can evolve into something weaponized.

AI must always be aligned with human values

It's important for security that any AI we develop be aligned with human values. If we want AI to protect human rights, then it must absolutely be loyal to human values. Building AI just for the sake of building it is not going to protect human rights or necessarily promote progress. AI has to be beneficial, and if AI is weaponized or used in autonomous weapons, it almost certainly won't be.

Elon Musk warns of an arms race

Elon Musk (@elonmusk), September 4, 2017:
"China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo."

Now, if we look at North Korea right now, we can see the exact arms race he is warning about. The arms race is a cyber arms race, and cyber warfare is the type of warfare which would benefit most from autonomous weapons. In fact, autonomous weapons may already exist, if we consider Stuxnet to be an example. These autonomous AI weapons would put all of us at risk, and worst of all, they could cost us whatever ability we still have to enforce human rights.

How do human rights defenders feel about the threat of autonomous weapons and autonomous AI? How would human rights defenders protect people from being tortured by an AI? If you think cyber harassment is bad today, just wait until it's automated by AI and precisely targeted. Such an AI would have access to big data, to continuous surveillance, and to cyberspace, and could use any of that information to hurt anyone in many ways. Why would we want to create such a monster?

On the other hand, it might be unavoidable. It might be that we simply cannot stop autonomous weapons from being created, and that AI will be used to hurt people. If it is the case that we cannot stop humans from hating and being nasty to each other, then perhaps we shouldn't rush to create general AI without first figuring out how to solve the primary problems, such as human rights abuses. If AI can be used to protect rights rather than to violate them, then I would be all for it, provided that the AI understands human values well enough to abide by them. Creating AI just because it can be created, without any thought given to ethics, safety, security, societal impact, or risk management, is irresponsible.

Conclusion

In my opinion, the ideal solution for the time being is to create a consortium of AI researchers and cybersecurity professionals. People who want to create AI while also using that AI to improve the human condition. People who actually want to find ways to make sure the AI is safe. People who are at least thinking about how to mitigate risks, whether from hacks or from harmful usage. Vlad Zamfir wrote an article on how the blockchain could be harmful, and he has a point: if the developers of blockchain technology don't think about ethics, or about how their work can positively or negatively impact society, it can have negative consequences in the long term.

References

  1. https://www.theguardian.com/technology/2017/sep/04/elon-musk-ai-third-world-war-vladimir-putin
  2. https://medium.com/@Vlad_Zamfir/blockchains-considered-potentially-harmful-d039888c3208
Comments

Nice post. Certainly AI could be responsible for the extinction of the human race if not kept in check.

I hope that humanity will turn out to be more intelligent than I think it is ...

I agree AI is heading this way and humanity needs to stand against it now!

Take drones, for example: they are used to desensitise people to murder, turning it into a computer game. This is now common practice; think of the next steps. We need to start thinking about these actions and actively trying to stop them. If we stand aside and watch, it will soon be too late. Musk is spot on here, he's a forward thinker who should be listened to :)
