Artificial intelligence against humanity
Is the AI apocalypse near? Movies like The Terminator franchise and The Matrix have long portrayed dystopian futures in which computers develop superhuman intelligence and destroy the human race — and some serious thinkers regard this kind of scenario as a real danger.
We interviewed one of them, Oxford philosopher Nick Bostrom, last year. Others include singularity theorist Ray Kurzweil and Robin Hanson, an economist at George Mason University.
But these thinkers overestimate the likelihood that we'll have computers as smart as human beings and exaggerate the danger that such computers would pose to the human race. In reality, the development of intelligent machines is likely to be a slow and gradual process, and computers with superhuman intelligence, if they ever exist, will need us at least as much as we need them. Here's why.
Bostrom, Kurzweil, and other theorists of superhuman intelligence have seemingly limitless faith in raw computational power to solve almost any intellectual problem. Yet in many cases, a shortage of intellectual horsepower isn't the real problem.
To see why, imagine taking a brilliant English speaker who has never spoken a word of Chinese, locking her in a room with an enormous stack of books about the Chinese language, and asking her to become fluent in speaking Chinese. No matter how smart she is, how long she studies, or how many textbooks she has, she's not going to be able to learn enough to pass herself off as a native Chinese speaker.
That's because an essential part of becoming fluent in a language is interacting with other fluent speakers. Talking to natives is the only way to learn local slang, discover subtle shades in the meanings of words, and learn about social conventions and popular conversation topics. In principle, all of these things could be written down in a textbook, but in practice most of them aren't — in part because they vary so much from place to place and over time.
A machine trying to develop human-level intelligence faces a much more severe version of this same problem. A computer program has never grown up in a human family, fallen in love, or been cold, hungry, or tired. In short, it lacks a huge amount of the context that allows human beings to relate naturally to one another.
And a similar point applies to lots of other problems intelligent machines might tackle, from drilling an oil well to helping people with their taxes. Most of the information you need to solve hard problems isn't written down anywhere, so no amount of theoretical reasoning or number crunching, on its own, will get you to the right answers. The only way to become an expert is by trying things and seeing if they work.
We must consider the key moral and policy questions around artificial intelligence and cyborg technologies to ensure our innovations don’t destroy us.
How much do we really know about the impact of scientific breakthroughs — on technology or on society? Not enough, says Marcelo Gleiser, the Appleton Professor of Natural Philosophy and a professor of physics and astronomy at Dartmouth College.
As someone who explores the intersection between science and philosophy, Gleiser argues that morality needs to play a stronger role in innovations such as artificial intelligence and cyborg technologies due to the risk they could pose to humanity. He has described an artificial intelligence more creative and powerful than humans as the greatest threat to our species.
While noting that scientific breakthroughs have the potential to bring great harm or great good, Gleiser calls himself an optimist. But he says in this interview that “the creation of a transhuman being is clearly ripe for a careful moral analysis.”
When it comes to understanding how to enhance humans through artificial intelligence or embedded technologies, what do you view as the greatest unknowns we have yet to consider?
At the most basic level, if we do indeed enhance our abilities through a combination of artificial intelligence and embedded technologies, we must consider how these changes to the very way we function will affect our psychology. Will a super-strong, super-smart post-human creature have the same morals that we do? Will an enhancement of intelligence change our value system?
At a social level, we must wonder who will have access to these technologies. Most probably, they will initially be costly and accessible only to a minority. (Not to mention military forces.) The greatest unknown is how this newly divided society will function. Will the different humans cooperate or battle for dominance?
As a philosopher, physicist and astronomer, do you believe morality should play a greater role in scientific discovery?
Yes, especially in topics where the results of research can affect us as individuals and society. The creation of a transhuman being is clearly ripe for a careful moral analysis. Who should be in charge of such research? What moral principles should guide it? Are there changes in our essential humanity that violate universal moral values?
For example, should parents be able to select specific genetic traits for their children? If a chip could be implanted in someone’s brain to enhance its creative output, who should be the recipient? Should such developments be part of military research (which seems unavoidable at present)?
You’ve cited warnings by Stephen Hawking and Elon Musk in suggesting that we need to find ways to ensure that AI doesn’t end up destroying us. What would you view as a good starting point?
The greatest fear behind AI is loss of control — the machine that we want as an ally becomes a competitor. Given its presumably superior intellectual powers, if such a battle were to ensue, we would lose.
We must make sure this situation never occurs. There are technological safeguards that could be implemented to avoid this sort of escalation. An AI is still computer code that humans have written, so in principle it is possible to build in certain moral values that would ensure an AI does not rebel against its creator.
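To make that idea a bit more concrete, here is a minimal, purely hypothetical sketch of what "building in values" might look like in software: encoding a handful of forbidden outcomes as hard constraints that filter an agent's proposed actions before any of them is executed. The action names, effect labels, and scoring scheme below are invented for illustration; this is a toy, not a real safety mechanism, and whether such hand-written constraints could bind a genuinely superhuman system is precisely the open question.

```python
# Toy illustration (hypothetical): moral values encoded as hard constraints
# that filter an agent's proposed actions before execution.

FORBIDDEN_EFFECTS = {"harm_human", "deceive_operator", "disable_oversight"}

def action_is_permitted(action: dict) -> bool:
    """Reject any proposed action whose predicted effects include a forbidden outcome."""
    return not (set(action.get("predicted_effects", [])) & FORBIDDEN_EFFECTS)

def choose_action(proposed_actions: list) -> dict:
    """Pick the highest-scoring action that passes the constraint check."""
    permitted = [a for a in proposed_actions if action_is_permitted(a)]
    return max(permitted, key=lambda a: a["score"], default=None)

if __name__ == "__main__":
    candidates = [
        {"name": "reroute_power", "score": 0.9, "predicted_effects": ["disable_oversight"]},
        {"name": "request_approval", "score": 0.6, "predicted_effects": []},
    ]
    # The higher-scoring but forbidden action is filtered out;
    # the agent falls back to "request_approval".
    print(choose_action(candidates))
```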