We don't understand artificial intelligence because we don't understand intelligence...

in #science · 8 years ago (edited)

Heralds of artificial intelligence, including Elon Musk, Stephen Hawking and Ray Kurzweil, predict that by 2030 machines will acquire consciousness and human-level intelligence, bringing with it a series of pleasant, neutral and terrible consequences. For example, in January 2015 Musk, Hawking and dozens of other researchers signed an open letter suggesting that AI could lead to "the elimination of disease and poverty" in the near future. That is obviously a pleasant consequence.

There are neutral ones too: Kurzweil, who popularized the idea of a technological singularity, believes that by 2030 people will be able to upload their consciousness and thereby merge with machines. And there are the frightening ones: Musk envisions a future in which people become house pets for their machine masters. Looking further ahead, almost everyone agrees that humans will eventually disappear as a species in favor of machines, whether by merging with them or not.

These claims are not entirely unfounded. In recent decades we have witnessed a rapid surge in technology; computers become more powerful and accessible not by the day but by the hour. In 2011, IBM's Watson beat two former champions of the quiz show Jeopardy! using artificial intelligence and natural language processing that are not, by today's standards, especially sophisticated. The future arrives faster than we can adapt to it.

Kurzweil's timeline for the onset of the technological singularity rests on his law of accelerating returns: the more powerful computers become, the faster they develop further. The curve of this growth is exponential, and we now supposedly stand at the foot of its steep section, the one that leads to intelligent machines and a world run by robots. So believe Kurzweil, Musk, Hawking and many other AI researchers. And belief, after all, is very human. By 2045 we may become machines ourselves. We just need to build a sufficiently advanced AI, and then, bam: intelligent machines.
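To get a rough feel for what a fixed-doubling-time exponential implies, here is a minimal sketch. The 1.5-year doubling time and the 2016 reference year are assumptions chosen for illustration, not figures from the article:

```python
# Illustrative only: exponential growth of computing capability,
# the premise behind Kurzweil's "law of accelerating returns".
# The doubling time below is an assumed parameter, not a measured one.

DOUBLING_TIME_YEARS = 1.5  # assumed, roughly Moore's-law-like

def relative_capability(years_from_now: float) -> float:
    """Capability relative to today, assuming a fixed doubling time."""
    return 2 ** (years_from_now / DOUBLING_TIME_YEARS)

for year in (2030, 2045):
    dt = year - 2016  # assuming the article dates from around 2016
    print(f"{year}: ~{relative_capability(dt):,.0f}x today's capability")
```

Whether the curve actually stays exponential, and whether raw capability translates into intelligence, is exactly what the rest of this article disputes.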

Count me out.

In this article we talk about intelligence not as a measure of human knowledge but as a synonym for mind.

I don't dispute that technology is evolving faster and faster; we can see it happening, and there is no reason to believe computing power will hit some plateau. But the step from advanced technology to artificially created consciousness is a giant leap. The boldest claims about AI rest on a false premise: that we already understand the human mind and consciousness.

AI researchers work with a particular definition of intelligence: the ability to learn, recognize patterns, display emotional behavior and solve analytical problems. But that is only one definition among a sea of contested and half-formed ideas about the nature of cognition. Neuroscience and neuropsychology do not offer a single definition of intelligence; they offer many. Different fields, and even different researchers within them, define intelligence in entirely different, sometimes mutually exclusive and incompatible terms.

Broadly, scientists regard intelligence as the ability to adapt to the environment while achieving one's goals, or even as the ability to choose the best option in given circumstances. But this definition rests mainly on a biological understanding of intelligence, tied to evolution and natural selection. In practice, neuroscientists and psychologists argue endlessly about the mind, both among themselves and with scientists from other fields.

Consider the following view from psychologists Michael Ramsey and Cecil Reynolds:

"Theorists suggested, but scientists have confirmed that intelligence is a set of relatively stable abilities, slowly changing with time. Although intelligence can be considered as potential, it has no built-in or immutable characteristics. Modern psychologists and other scholars are of the opinion that intelligence emerges from a complex interaction between environmental and genetic factors. But despite hundreds of years of ongoing research, this interaction is understood to be very bad and vague. Finally, intelligence has nothing purely biological or purely social in its basis. Some authors suggest that intelligence is any measure of intelligence."

The paragraph above pins down nothing specific. And psychology is only one of a dozen fields concerned with the human brain, consciousness, mind and intelligence.

Our understanding of technology grows and expands constantly, but our grasp of the vaguer notions of intelligence, consciousness and mind remains ridiculously childish. Technology may be ready to usher in the age of computer-based people, but neuroscience, psychology and philosophy are not. And these gaps in understanding will inevitably push back the timeline for AI.

Most experts who study the brain and mind agree on at least two things: we have no precise, agreed-upon definition of intelligence, and we don't know what consciousness is.

"To reach the singularity, just enough to make today's software to run faster," wrote Microsoft co-founder Paul Allen in 2011. — We also need to create a more intelligent and capable program. To create this kind of intelligence, we need to scientifically understand the basis of human consciousness, and we barely scratched the surface of this knowledge."

Defining human intelligence and consciousness remains more the province of philosophy than of neuroscience. So let's think philosophically.

Conscious creativity

Musk, Kurzweil and other proponents of the technological singularity repeat again and again that ever-increasing processing power will more or less automatically lead us to machine consciousness and human-level intelligence. They insist that the faster technology develops, the faster other fields of science improve as well.

"I don't think that only having powerful enough computers, powerful enough hardware, we get intelligence at the human level," says Kurzweil in 2006. — We need to understand the working principles of the human mind, how the brain performs its functions. What's his software, algorithms, content? So we got ambitious project, which I call reverse engineering of the human brain, it will help us understand its mechanisms. And here we see the same exponential progress, as in other fields, e.g., biology.

Kurzweil acknowledges the need to understand the human mind before we can accurately recreate it in a machine, but his solution, reverse engineering the brain, glosses over neuroscience, psychology and philosophy. It assumes too much: that building a brain is the same thing as building a consciousness.

The two terms, "brain" and "consciousness", are not interchangeable. We may well be able to recreate the brain; it is an immensely complex structure, but it is a physical thing that we will one day be able to map, take apart and reassemble. Just this month, IBM announced that it had built a working artificial neuron that recognizes patterns in noisy data and behaves unpredictably, much like a biological neuron. Creating one neuron is certainly not the same as creating an entire human brain, but it's a start.
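For a concrete, if greatly simplified, picture of what "an artificial neuron" means here, below is a minimal sketch of a stochastic, integrate-and-fire style unit. It is an illustration only, not IBM's phase-change device; the threshold, noise level and input values are assumptions invented for the example.

```python
import random

# Minimal sketch of a stochastic integrate-and-fire neuron.
# Illustration only: NOT IBM's phase-change neuron, just the general
# idea of a unit that accumulates input and fires noisily.

class StochasticNeuron:
    def __init__(self, threshold: float = 1.0, noise: float = 0.1):
        self.threshold = threshold   # assumed firing threshold
        self.noise = noise           # assumed jitter on the threshold
        self.potential = 0.0         # accumulated "membrane" potential

    def step(self, input_signal: float) -> bool:
        """Accumulate input; fire (and reset) when a noisy threshold is crossed."""
        self.potential += input_signal
        noisy_threshold = self.threshold + random.gauss(0.0, self.noise)
        if self.potential >= noisy_threshold:
            self.potential = 0.0
            return True   # spike
        return False

# Feed the neuron a noisy input and watch when it spikes.
neuron = StochasticNeuron()
for t in range(20):
    signal = 0.3 + random.gauss(0.0, 0.05)   # made-up noisy input
    if neuron.step(signal):
        print(f"t={t}: spike")
```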

Even so, an artificial neuron is not consciousness, not mind, not thought. Even if scientists develop the technology to build an artificial brain, there is no evidence that the process will automatically generate consciousness. There is no guarantee that such a machine will suddenly wake up. And if we don't understand the nature of consciousness in the first place, what are we hoping for, a miracle?

Take just one aspect of consciousness, mind and intelligence: creativity. Creativity itself is a varied and murky thing; everyone experiences it differently. For one person the creative process means weeks spent in seclusion; for another it starts with three glasses of whiskey; for a third it is an unpredictable flash of inspiration at the last minute. A fourth understands creativity as deep concentration on the process, and a fifth as procrastination.

Which raises the question: will artificially intelligent machines procrastinate?

Probably not. The singularity assumes that AI will ultimately be billions of times more intelligent than a human. Such an AI is unlikely to hold on to useless habits like procrastination, alcohol and a wandering mind. No doubt software will one day write beautiful, creative things with minimal (or no) human involvement. But beautiful does not mean the best, and creativity is not always a conscious process.

Kurzweil, Musk and the others are not predicting another Tay bot on Twitter; they are saying that within 20 years we will copy the human brain, put it in an artificial box and thereby recreate the human mind. Or rather, create something even more remarkable: a consciousness, whatever that is, that doesn't need procrastination to be creatively productive; a mind that doesn't need to be conscious, whatever those two terms actually mean.

A technological explosion is coming, but our understanding of psychology, neuroscience and philosophy remains murky. All these fields would have to advance in concert to bring about the singularity. Scientists have made impressive technological progress in recent years, and computers get better every day, but a more powerful computer is not the same as a breakthrough in philosophical understanding. An accurate map of the brain does not mean we understand the mind.

Before trying to recreate something, you need to understand how it works. And unless we unravel the secrets of the human mind, no miracle is going to do it for us. Source: nlo-mir.ru


I have a very good outlook on advanced AI in the future helping us. I think robots without self awareness will be good for us. AI would probably just leave earth for the stars.

Creating artificial intelligence when we don't understand our own intelligence worries me.

I have the same perspective on intelligence, consciousness, mind and brain. I think AI is very far from the singularity point, when it becomes aware of itself, because I don't see how AI will overcome the limitations of its algorithms and become able to abstract.

I'm aware that we know too little about our brain (how exactly it works, how it organizes information, how it recalls memories, etc.) and even less about consciousness. We are very far from a Theory of Everything, yet it seems scientists have singled out consciousness as an important part of it. And we don't know much ....

But regardless of my narrow view of AI, I have faith in Kurzweil. He has shown he's one of the most important contemporary scientists and innovators. He's even modifying his own body (you knew he takes 150-200 pills every day, right?).

Simply put, I'm aware of my limited capacity to follow complex discussions (like the development of AI), but I have faith that others can. Eventually.

I agree, we know too little about the human intellect and psyche, so for now it's all something like a fascinating game.
