Sam Harris’ TED Talk on Artificial Intelligence Will Blow Your Mind and Broaden Your Worldview

in #ai • 8 years ago

“I’m going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact I think it’s very difficult to see how they won’t destroy us.”

— Sam Harris

It’s fairly likely that you read the title of this post and thought, “Artificial intelligence? What’s this nonsense?”

I don’t fault anyone for having such a reaction. When I first began to encounter a number of highly intelligent people discussing the risks surrounding superintelligent AI, I was baffled, skeptical. This seems like some sci-fi foolery, I thought.

However, I continued to read the arguments of those who are worried about AI, and I continued to discover more brilliant people—everyone from Oxford philosopher Nick Bostrom to visionary entrepreneur Elon Musk—who are vocally concerned about AI.

After a substantial amount of research, I concluded that there are, in fact, a number of potentially disastrous scenarios that may arise if we develop artificial intelligence that surpasses the cognitive abilities of human beings.

About three weeks ago, TED released a new talk by neuroscientist Sam Harris (whose podcast I recommend), in which Harris provides an engaging 15-minute introduction to the topic of superintelligent AI. He argues that given a long enough timescale (and assuming humanity doesn’t destroy itself first), we are extremely likely to develop a superintelligent AI. He then elaborates several of the arguments for why this could be a catastrophic development, if we aren’t sufficiently cautious and clever in designing the AI.

This topic is potentially one of paramount importance, as it has implications for the future of all life on Earth, and this talk is the best short introduction to the topic I’ve found. I highly encourage anyone with even a remote interest to take 15 minutes and watch the talk now:

More on Superintelligent AI

If you found that talk as fascinating as I did, you may want to delve deeper into the topic. If so, I really can’t recommend enough Wait But Why’s two-part essay series on artificial intelligence. It’s an extremely accessible, in-depth exploration of the topic.

You may also want to take a look at Wikipedia’s page on superintelligence. And if you end up wanting to go deeper still, Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies, is widely considered to be one of the foundational texts in contemporary AI theory.

Existential Risk

The risks surrounding superintelligent AI belong to a large category of risks known as “existential risks.”

Although the world we occupy today is by many measures better than it has been at any other time in human history, we now face more global catastrophic risks, what Nick Bostrom calls “existential risks,” than at any other point in our species’ history. In 2008, a group of experts at the Global Catastrophic Risk Conference at Oxford estimated a 19% chance of human extinction before 2100.

These existential risks have led Elon Musk and others to believe that we must attempt to colonize space and transform humanity into a multi-planetary species. If our species were to establish a self-sustaining civilization on another planet, the chances of our extinction would be greatly reduced.

It is my sincerest hope that we grasp the enormity of these existential risks, avert them, and preserve intelligent earthly life into the deep future. I believe existential risk is the largest issue humanity faces now and for the foreseeable future. Thankfully, organizations such as the Future of Life Institute, the Future of Humanity Institute, the Centre for the Study of Existential Risk, and the Global Catastrophic Risk Institute are researching existential risks and the most effective means of addressing them. In my opinion, the average person can help mitigate existential risk by raising awareness (share articles like this one) and by living sustainably, minimally, compassionately, cooperatively, and non-dogmatically.

More on Existential Risk

If you’re interested in learning and thinking more about existential risk, I recommend the following:

Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards by Nick Bostrom
This list I compiled on Twitter of the best sources of information on existential risk
The Wikipedia page on global catastrophic risk
Vinay Gupta’s original two-part interview with the Future Thinkers Podcast (Part 1 | Part 2)
Wait But Why’s two-part introduction to the prospect of superintelligent AI (Part 1 | Part 2)
My essay on why humanity must become a multi-planetary species
80,000 Hours
The Global Priorities Project
Sam Harris’ interview with Will MacAskill, the founder of effective altruism

Comments

Computers may, in the future, be able to analyse data better than humans; in fact, it already takes computers to analyse the big data we currently use in the sciences.

However, there will not be a computer that has creativity. Yes, you can get a computer to compose music based on algorithms inherent in the work of classic composers. But you will not see a computer create the next paradigm shift in technology.

The problem we have is that our computers do not work for us.

I like Linux because I can tell it to die, and it does so. In Windows you can't even tell it to kill a virus.
Our smartphones run spyware: spyware designed into the firmware, spyware running in the operating system, and they are constantly under attack to run even more spyware.
If the computing device were under our control, it would not run any of this.

Instead we have to resort to putting a band-aid over the cameras of our tablets.

Computers, as they are now, are not good tools. We need to work on making them good tools.

I respect Sam, and he is spot on with many of his theories. Here is the problem I see with the concept of "escaping" to other planets: if we create an AI that is more intelligent than all of humanity combined, what makes us think that we could escape to another planet to get away from what we have created? If we create a dangerous AI (and I have my doubts), then we doom ourselves regardless of which part of the universe we inhabit.
