Artificial Intelligence will KILL. Deal with it.
Google, while promising to “bring the benefits of AI to everyone,” signed a deal with the Pentagon to use its AI technology to kill people. The outrage this caused among its employees made Google decide not to renew the contract.
Though at first glance this seems like a victory for the non-violent use of AI, isn’t that view also quite naive?
Don’t underestimate the power of Artificial Intelligence
Technology firms working with the military-industrial complex is neither new nor shocking. Amazon, Microsoft, and IBM are deeply involved without anybody paying much attention.
Even when you actively choose not to participate in weaponising AI, you don’t always keep full control once you put a product out there. Look at how ISIS abused Google’s YouTube as a recruitment tool.
Elon Musk warned that though Google might have good intentions, it could “produce something evil by accident.” Eric Schmidt of Alphabet responded by saying:
“Robots are invented. Countries arm them. An evil dictator turns the robots on humans, and all humans will be killed. Sounds like a movie to me.”
A movie... really?
A(I) new geopolitical battleground
Rapid developments in Artificial Intelligence have an impact far beyond what we see in the news.
AI is the new battleground of geopolitics, and it can power a technological revolution that shifts hegemony from the US to its challengers: not only because of its military applications, but because of the economic advantage it confers.
President Putin pointedly said in 2017:
“Artificial intelligence is the future, not only for Russia but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.”
The US still leads in AI development, but other countries are rushing to catch up. France is investing $1.8 billion in AI.
French president Emmanuel Macron stated:
“There’s no chance of controlling any effects (of these technologies) or having a say on any adverse effect if we’ve missed the start of the war.”
It is telling that Macron would use the word ‘war.’
China aims to be the global leader in AI by 2030.
With over a billion people and no scruples about privacy, it has more data than any other country. And data is the food on which AI grows.
Raising a well-behaved AI
MIT researchers fed an AI, aptly named Norman, data from the dark corners of the web to discover the effect on its worldview. That worldview turned out to be very bleak: where others saw flowers, Norman saw death and destruction.
The experiment demonstrated an uncomfortable reality about AI. Prof Iyad Rahwan, part of the Norman project, said:
"Data matters more than the algorithm. […] It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves."
Thus, the human touch matters in AI. Raise a child well, and chances are it will turn out okay. Abuse it and expose it to cruelty, and it might become your next Norman.
However, as AI grows up, it becomes increasingly difficult to understand, even for its developers. And we all know that an excellent upbringing doesn’t guarantee a well-behaved adult later in life.
Imagine what a deranged, Norman-like autonomous weapons system could do.
Winning at AI can be losing
Even if we can control AI, winning the technological race doesn’t guarantee geopolitical success. AI changes not only the world’s militaries but also its societies.
AI makes our lives easier by automating many tasks. Yet more automation also destroys traditional jobs.
Countries need to deal with AI's socioeconomic consequences internally to be truly successful.
As McKinsey stated in its 2017 report on AI developments in China:
“[…] half of all work activities in China could be automated, making it the nation with the world’s largest automation potential. Hundreds of millions of Chinese workers could be affected.”
The Communist Party derives its legitimacy from providing prosperity and stability. AI is potentially undermining both. How will the Party respond to this threat?
Artificial Intelligence will kill. Deal with it.
The outrage caused by the Google example calls for further debate.
I agree that it is more comfortable to label the development of AI as something ‘cool’ than to think critically about its consequences.
However, we need to start by acknowledging that AI will kill people; or rather, that others will use it to kill.
This knowledge might make many people cynical, because “if Google doesn’t kill people with AI, others will.” It shouldn’t.
We need to discuss how best to deal with this.
What role do you think companies will and should play in the geopolitical battle between nations over AI and its uses?
Share your thoughts here.
First published on LinkedIn by Patric van Maaren
(Re)Published by Patric van Maaren on 12-06-2018
Follow me on Steemit - Twitter - LinkedIn - Website
Various sources:
- https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html
- https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/China/Artificial%20intelligence%20Implications%20for%20China/MGI-Artificial-intelligence-implications-for-China.ashx
- https://www.psychologytoday.com/us/blog/the-future-brain/201805/the-geopolitics-ai (many further sources linked in its footnotes)