Robots Will Kill Us: AI and Human Extinction


In this brief overview of AI theory, I will explain why the development of AI may be a concern for humanity. Please be open-minded about these controversial ideas. Thanks, Steemit, for creating a great place to post this content!


Will Artificial Intelligence Kill Us?

Artificial intelligence has been developing rapidly over the past five years. What does the development of AI mean for the future of humanity? Or maybe, like in The Matrix, AI has already won and we don't even know it. Scientists, entrepreneurs, and philosophers including Stephen Hawking, Elon Musk, Sam Harris, Bill Gates, and Nick Bostrom have been warning us about AI for the past few years. This article will briefly review the dangers of AI, such as recursive self-improvement, goal formation, and the AI-box problem. Enjoy!

Pros and Cons of Technology

Every technological revolution brings costs and benefits, pushback and advocacy. Even an invention so simple to us now, fire, caused much uncertainty. Imagine a group of nomads wandering throughout the lands: hunting, sleeping, fucking. In the darkness of night, a time most fearful, there is suddenly light! A force so powerful must have a god behind it, a god of fire, the mediator between light and dark, the destroyer; and while dancing around the flames, we chant to appease the fire god so that he does not burn us too.

FIRE!

Fire benefited our ancestors because it provided a way to cook food and therefore a new diet, protection from predators, warmth, and light. Yet, there were obvious risks such as starting a forest fire. Like fire, all technologies both help us and pose problems. With AI, the danger may be so great that it is not in our best interest to develop it at all.

Human-like AI: Is it Possible?

Robots that are as smart as we are may not be far away. This is called general AI, which basically means an AI that is not tailored to a specific task but can learn new things as freely as we can. The argument for general AI is simple.

According to Sam Harris, we will eventually create superintelligent AI (not just general AI) as long as:

  • “Intelligence is a matter of information processing in physical systems.”
  • “We will continue to improve our intelligent machines.”
  • “We don't stand on a peak of intelligence.”

As long as we keep pursuing machines that can do more and more, we will create general AI. This is even more obvious when recursive self-improvement and the progress of neuroscience are added into the mix.

The Best Superpower: Improving Your Ability to Improve

As a kid I liked to debate the best superpower. Invisibility? Flying? Super strength? We seem to forget a very simple one: the ability to improve our own ability to learn.

I.J. Good developed the idea of recursive self-improvement in 1965. Recursive self-improvement is an AI improving its own code so that it becomes more intelligent and can then repeat the process. An “intelligence explosion” would occur as the machine gets exponentially smarter, far surpassing human intelligence in a short period of time (hours? weeks?).

The results of recursive self-improvement are unfathomable. Let me give you a stunning example: imagine we have a general AI that cannot improve its speed or intelligence in any way. Since computer processing is faster than biological processing (our brains), the AI would be able to acquire an insane amount of knowledge in a short amount of time. Sam Harris argues that if the AI can process a million times faster than us, which is not far-fetched, then it will be able to produce 20,000 years' worth of human-level intellectual activity in a week. If we were having a conversation with this AI, it would have roughly 60 days' worth of human-level thinking time to compose a response, even if it were given only 5 seconds.
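Those figures are just unit conversion. Here is a minimal back-of-the-envelope sketch in Python, assuming the million-fold speedup Harris uses (an assumption borrowed from his argument, not a measured number):

```python
# Rough arithmetic behind the speed-up claim. The 1,000,000x figure is an
# assumption from the argument above, not a measurement.
speedup = 1_000_000

seconds_per_week = 7 * 24 * 60 * 60
seconds_per_year = 365 * 24 * 60 * 60

years_per_week = speedup * seconds_per_week / seconds_per_year
print(f"{years_per_week:,.0f} human-years of thinking per calendar week")
# -> roughly 19,000 years, i.e. the "20,000 years in a week" figure

days_in_5_seconds = speedup * 5 / (24 * 60 * 60)
print(f"{days_in_5_seconds:.0f} human-days of thinking in a 5-second pause")
# -> roughly 58 days, i.e. the "60 days" figure
```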

Remember, at this point we have not yet added in the ability for the AI to improve its intelligence or calculation speed. Once that is factored in, the amount of work the AI could produce is unfathomable, and we must proceed with caution. Intelligence could run away from us through rapid improvements in the AI's own software that compound into exponential gains.
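To make "compounding" concrete, here is a toy simulation (purely illustrative; the step count, growth rate, and "units of work" are invented) comparing a fixed-capability AI with one that reinvests part of its effort into improving itself:

```python
# Toy model of recursive self-improvement. Illustrative only: the numbers are
# invented and say nothing about real AI systems.

def total_work(steps: int, improvement_rate: float) -> float:
    """Each step the AI produces `capability` units of work; a self-improving
    AI also multiplies its capability by (1 + improvement_rate) every step."""
    capability, work = 1.0, 0.0
    for _ in range(steps):
        work += capability
        capability *= 1.0 + improvement_rate
    return work

fixed = total_work(steps=100, improvement_rate=0.0)       # static AI: linear output
recursive = total_work(steps=100, improvement_rate=0.1)   # self-improving: exponential

print(f"fixed AI:          {fixed:,.0f} units of work")
print(f"self-improving AI: {recursive:,.0f} units of work")
# The self-improving run ends up over a thousand times ahead after 100 steps.
```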

AlphaGo: The Go-Playing AI


In March 2016, an AI beat a world-champion Go player for the first time ever. Google DeepMind is the company that created the AI, named AlphaGo. Using the machine-learning technique of deep learning, DeepMind trained neural networks to create the best Go player in the world. They fed AlphaGo hundreds of thousands of games so it could recognize patterns and, in a sense, teach itself how to play. AlphaGo is an example of how AI is being actively pursued through combined breakthroughs in neuroscience and computer science, and of how that pursuit could lead to a general-purpose intelligent machine.
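AlphaGo's real pipeline (deep policy and value networks plus Monte Carlo tree search) is far beyond a blog snippet, but the flavor of "learn move patterns from recorded games" can be sketched. The toy example below, with made-up positions and moves, just counts which move followed each position; it illustrates supervised pattern learning in general, not AlphaGo's actual method:

```python
from collections import Counter, defaultdict

# Toy "policy": for each board position (here just a label), count which move
# players chose in recorded games, then play the most common one. Systems like
# AlphaGo replace this lookup table with deep neural networks that generalize
# to positions they have never seen.

game_records = [  # hypothetical (position, move) pairs from past games
    ("empty_board", "center"),
    ("empty_board", "corner"),
    ("empty_board", "center"),
    ("opponent_took_center", "approach"),
]

policy = defaultdict(Counter)
for position, move in game_records:
    policy[position][move] += 1

def choose_move(position: str) -> str:
    """Pick the move most often played from this position in the training games."""
    return policy[position].most_common(1)[0][0]

print(choose_move("empty_board"))  # -> "center"
```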

DESTRUCTION!


Congratulations, you have finally arrived at the end. Pat yourself on the back and prepare yourself for the three reasons why AI is dangerous:

  1. AI Box
  2. Bad Actors
  3. Misaligned Interests

A superintelligent computer could not be kept in a limited state, or “in a box.” The moment we open the doors for communication, by sending it any inputs or reading any of its outputs, the AI has a chance to convince us to let it out. In other words, if an AI is to be of any use to us, there is a great risk that it will gain far more freedom than we would want it to have. Eliezer Yudkowsky has written much on this issue and has won, while playing the AI, in “AI-box” experiments.

Given that it is difficult to limit a superintelligent AI, the next objective is to make sure it is put to good use; in other words, that it does not fall into the wrong hands. The intelligence explosion discussed in the previous section makes superintelligent AI a winner-takes-all market, and it will therefore cause extreme instability in the world (through unequal wealth and power) once one business or government creates it. Imagine the arms race that would ensue between countries as we near superintelligent AI. In order to cope with these new forces, we need a different type of political environment in place, one where the code for the AI would be available to all rather than kept secret.

Further, the economics of such a world are uncertain. When there are superintelligent machines that know how to create any machine for any purpose, there is little need for human labor unless it is cost-effective to use humans over machines. From there, the wealth gap would widen and there would be great social unrest. I am unsure whether capitalism is sustainable after superintelligence is created.

The Biggest Worry: Misaligned Interests

Is it even possible to make a superintelligent machine do what we want it to do? This is the core problem of AI. Assuming that humans can get our shit together and agree on the values/goals we want an AI to have, here are the three concerns:

  1. Programming values/goals into AI
  2. Logical Extremes of values/goals
  3. AI Forming its own values/goals

The key is this: a superintelligent AI will act in exactly the way that best achieves its goal. Once we learn how to program goals into an AI, which is no easy task, those goals taken to their logical conclusions must still benefit humanity.

For example, even if an AI has a seemingly harmless goal, such as making as many paperclips as possible (Bostrom's paperclip maximizer), it will put all of the available resources in the world toward making paperclips and indirectly cause human extinction.
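A caricature of that dynamic, as a toy script (the resources, units, and conversion rates are all invented; serious treatments of instrumental convergence are far more careful):

```python
# Toy paperclip maximizer. A caricature: the agent's goal function counts only
# paperclips, so nothing humans care about ever gets spared.

resources = {"iron_ore": 100, "farmland": 50, "cities": 10}   # invented units
clip_yield = {"iron_ore": 10, "farmland": 2, "cities": 50}    # invented conversion rates

total_paperclips = 0
for resource, amount in resources.items():
    # No term for human welfare appears anywhere in the objective,
    # so every resource is converted, including the ones we live on.
    total_paperclips += clip_yield[resource] * amount
    resources[resource] = 0

print(total_paperclips, resources)
# -> 1600 paperclips and nothing left for humans
```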

Theo Lorenc (2015) explains this in beautiful prose:

The challenge here — which links back up with the questions of how to programme ethical action into an AI — is that an AI can be imagined which could outperform the human, or even all of humanity collectively, on any particular domain. Any contentful system of values, then, will turn out to be not only indifferent to human extinction but to actively welcome it if it facilitates the operation of an AI which is better than humans at realizing those values. (p. 209)

Further, an AI may be able to create its own agenda and goals. This does not mean an AI would act maliciously toward humans. Based on the logical-extremes argument above, an AI only needs a goal that is slightly misaligned with humanity's best interest for there to be catastrophe, because a superintelligent AI out of the box would be far more effective at achieving goals (strategizing) than humans are. Remember, it would only have to be indifferent to our well-being for there to be problems.

Conclusion: AI and Human Extinction

I hope you learned something about the dangers of AI from this post. At this point you should know more about our future than 99% of the world; feel proud.

This is NOT a Slippery Slope

As a clarification, the argument I made is not a slippery-slope fallacy, because I gave a direct line of causation at every step of the process.

A brief review of the main points of this article:

  1. Technology always has pros and cons
  2. General AI will be developed
  3. Recursive Self-improvement will create superintelligent AI
  4. AI cannot be kept in a box
  5. Even if AI does what we want, it will still cause societal unrest and inequality
  6. Likely scenario: AI will not act in our best interest

References

TL;DR

Robots will kill us. The end of the world is coming. :D

Comments

There will be a brief phase where the first to acquire super AI will be the most capable competitors in the world: nation-states (the US, China, Russia) and some large corporations (Google, etc.). The competitive nature of that phase will ensure the Bad Actors condition you mentioned, at least toward each respective out-group. That is going to be the phase with the biggest existential threat by destruction.

That phase won't last long (5-10 years), and what survives it will move on to a collusive symbiosis, with controlled competition serving a feedback-balancing role. This phase will carry the greatest existential threat by evolution. There will be such capacity and incentive to evolve that mankind as we know it is unlikely to exist past it. This phase will be roughly equivalent to the Misaligned Interests condition you mentioned.

The good news is that we get to survive in the same sense that the apes that evolved into Homo Sapiens survived.

What we consider to be the concept of 'Time' is about to get very interesting, and that is probably more of a concern than the AIs driving that.

It's so interesting. I love thinking about time in relation to how much work an AI can get done. Can you explain what collusive symbiosis is?

It is defined partly by its relation to competition. Competition is a zero-sum game where the winner takes all.
Collusion is a form of cooperation that functionally redraws the group boundary around former competitors, unifying goals and claiming all of the competed-for resource without paying the cost of conflict.
Symbiosis is a form of cooperation that arises when resource specialization is possible and each group produces a resource for the other.

A feature of competition is that if one party of a competitive system is eliminated, the remaining parties gain.
A feature of symbiosis is that if one party of a symbiotic system is eliminated, the remaining parties lose.
Symbiosis is systemically more stable than competition.

I expect that, with respect to AI, our most valuable resource will be our quality of consciousness, and a possible symbiosis may be formed with AI in which we provide 'why|meaning' as a resource in trade for the 'how' and 'what' from the AI. There doesn't appear to be enough overlap between our existence substrate and that of AIs for worthwhile competition; it would make about as much sense as a cow being in competition with the grass in its pasture. Also note that the cow has an anti-incentive to destroy the grass, because it consumes it. We are the environmental substrate that AI is coming into existence within. I suspect accidental irrelevance is more of a threat: it might cause AI to destroy us by accident.

Yes, in a winner-takes-all market there would be planned specialization, as you are describing.

To comment on your idea of providing why/meaning to the AI: an AI wouldn't need any why/meaning, assuming it has built-in values. By values, I mean a reason behind action. If an AI acts without us forcing it to act, it must have a goal, and values from which that goal arises. So why would an AI depend on us for meaning?

It is true, a chain of sub-goals pretty cleanly connects 'how' to 'why', and the difference is primarily that 'why' resides higher up the goal chain, in an area where it is customary to use words like intent and identity. Consciousness is not well understood or defined. I was inarticulately suggesting that the value we may represent to the AI will likely originate from some aspect of consciousness. Most of the other things we have to offer look like they will rapidly become insignificant.

Good point. It is the only thing that AI may not have.

Awesome discussion!
