Is Self-Aware Artificial Intelligence Possible?
The article linked below argues two points:
(1) Artificial intelligence is impossible; and
(2) Therefore, the chance that we could be living in a simulated reality is also impossible.
https://www.facebook.com/notes/david-robison/are-we-living-in-a-simulation-like-some-tech-billionaires-believe/805729166252013/
I'm not here to argue about simulated realities. It does seem an interesting proposition; I lean toward it not being the case, and I find the author's arguments against the idea compelling. I'm here to argue that self-aware artificial intelligence is not only possible, but that it will happen by the year 2022.
Let's dive in!
(1) The author makes a case that humans are not getting smarter. This is a logical way to lay a foundation for his argument: if we haven't yet come up with a way to create self-aware machines, we never will, because we aren't getting any smarter and so won't think up anything we haven't thought of already.
I make this interpretation of the author's intent based on the following from his article:
"The reason it took us so long to progress technologically had nothing to do with our ability, or inability that is, to understand the underlying physical mechanisms that enable a technology, at least not on a fundamental level.
Technology is mainly developed through trial and error. Getting something to work does not necessitate a fundamental understanding of how it works, only a memorizing of the steps necessary to build it. Even the equations used in engineering were learned through tinkering and experimentation. Engineering and technology are based on the accumulation of practical knowledge through pattern recognition. Pattern recognition does not necessitate pattern explanation. I could build a simple motor using magnets, but that would not mean I suddenly understand the physical mechanism behind magnetism."
I contend that humans are getting measurably smarter and here is why:
(a) The Flynn effect: measured IQ scores have risen steadily for decades (https://en.wikipedia.org/wiki/Flynn_effect).
(b) My second argument is a question: "At what point in human evolution do you propose that human intelligence stopped increasing? Certainly not at Homo erectus or Homo neanderthalensis. Do you propose that it stopped at Homo sapiens? Why?"
(c) Concrete vs. abstract thinking. Abstract thought is localized mainly in the cerebral frontal lobes, roughly a third of our brains; it's quite likely that these were not fully developed in Neanderthals, in spite of their phonation organs.
(2) "Consciousness is defined as neural activity in the brain, a brain being a network of highly specialized cells which are intimately interconnected by pathways capable of rapid electrical and chemical signal transmission."
And the author goes into a bit of detail as to some of the ways a brain is quite complex by current understanding. I contend that every activity the author describes here, while difficult or impossible with today's technology, is not necessarily impossible given enough time to research and experiment. Especially if human knowledge and intelligence are increasing.
For example, "One neuron connects to between 1,000 and 10,000 others within a human brain, meaning within a few short hops you can get from one neuron to any other."
OK. In 1971, the single-core Intel 4004 processor had 2,300 transistors and ran one operation at a time (one "thread" at a time); no parallel processing. In 2012, the 62-core Intel Xeon Phi processor had five billion transistors and could run up to 248 threads simultaneously.
Is this the same thing as every neuron connecting to 1,000 to 10,000 other neurons in the human brain? No. What it shows, though, is progress, and hopefully it helps people visualize how easy it might actually be to create equivalent connections between artificial versions of neurons. Put it this way: if right now we can easily create an artificial neural network where each "neuron" connects to 100 other neurons, why is it hard to imagine that number jumping 10x, 100x, or even 1,000x within a few years?
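To make the point concrete, here's a minimal sketch (my own toy example, not anything from the article) of why fan-out in an artificial network is cheap: in a fully connected layer, "connections per neuron" is literally just one dimension of a weight matrix, so scaling it up is an engineering problem, not a conceptual one.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(n_in, n_out):
    """Weight matrix for a fully connected layer:
    n_in neurons, each connected to all n_out neurons in the next layer."""
    return rng.standard_normal((n_in, n_out)) * 0.01

# 100 neurons, each connected to 100 others.
layer = dense_layer(100, 100)

connections_per_neuron = layer.shape[1]   # 100
total_connections = layer.size            # 10,000

# Jumping to brain-like fan-out is just bigger dimensions:
# dense_layer(1000, 1000) gives 1,000 connections per neuron,
# the low end of the 1,000-10,000 range cited above.
print(connections_per_neuron, total_connections)
```

Real networks are sparser and far more structured than this, of course; the sketch only illustrates that connectivity per artificial neuron is a tunable parameter rather than a hard barrier.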
(3) More: "Neural functioning is dependent upon DNA and highly dynamic processes like protein synthesis (hundreds of proteins per second can be made in a cell), the building up/breaking down of microtubule scaffolding structures, and many other complex mechanisms."
Sure. These are "complex mechanisms". Does that make them impossible to emulate, or even improve upon? Given TODAY'S knowledge and technology, the answer could very well be, "Yeah. Impossible." But every day, human knowledge, understanding, and intelligence increase. If we are asking whether something will EVER be possible, it's contradictory not to consider what can happen in the FUTURE. If instead the question were, "IS artificial intelligence (a machine being self-aware) possible NOW?", then we could answer with more surety, because we are more likely to know what our CURRENT limitations are. That's the problem with this entire article: it assumes a static view of human knowledge, understanding, and ability. And that is at the root of this critique.
(4) Final example: "The brain has neural plasticity, neural migration, synaptic pruning, growth of new connections, and strengthening/weakening of existing connections. These are all part of the ongoing structure/dynamics feedback as well."
This is a great argument, pointing out how both software (psychology) and hardware (physiology) in a human brain constantly change and adapt in a highly dynamic manner. Again, why would it be impossible to design an artificial neural net that can do this? As a software engineer, I see this as merely a more complex version of software being able to modify the structure of a database it uses. It's rare that I've put this kind of functionality into software I've created, but I have. And that is SUPER SIMPLE compared to some of the stuff I know friends were building in the a.i. world three years ago. Sure, a more accurate analogy than software changing its underlying database would be software changing the transistor structure, voltage, etc. of the hardware it runs on. But looking at the "problem" that way is to assume too many limitations on what engineers at the cutting edge of a.i. are working on now and will be working on in even just two or three years.
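Here's a toy sketch of what I mean by software modifying its own structure (my illustration, loosely analogous to synaptic pruning and growth, not a model of real neuroscience): a network that prunes its weak connections and grows new random ones, so its own wiring diagram, not just its weights, changes over time.

```python
import numpy as np

rng = np.random.default_rng(42)

class PlasticNet:
    """Toy network whose connection topology changes, not just its weights."""

    def __init__(self, n_neurons, density=0.3):
        # Boolean mask says WHICH connections exist; weights say how strong.
        self.mask = rng.random((n_neurons, n_neurons)) < density
        self.weights = rng.standard_normal((n_neurons, n_neurons)) * self.mask

    def prune_and_grow(self, threshold=0.5, n_new=10):
        # Prune: delete existing connections whose weight magnitude is weak
        # (a crude stand-in for synaptic pruning).
        weak = (np.abs(self.weights) < threshold) & self.mask
        self.mask &= ~weak
        self.weights *= self.mask
        # Grow: create a few brand-new random connections
        # (a crude stand-in for growth of new connections).
        empty = np.argwhere(~self.mask)
        for i, j in empty[rng.choice(len(empty), size=n_new, replace=False)]:
            self.mask[i, j] = True
            self.weights[i, j] = rng.standard_normal()

net = PlasticNet(50)
before = int(net.mask.sum())
net.prune_and_grow()
after = int(net.mask.sum())
print(before, after)  # the wiring itself changed, not just weight values
```

Real structural-plasticity research in a.i. is far more sophisticated than this, but even this few dozen lines shows that "the hardware rewires itself" is not, in software, a mystical property.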
I'll spare us all the time required to look at more examples, because I'd say the same thing about all of them: they each point out some amazing thing the brain does that CURRENT technology can't emulate, while ignoring past-to-present progress and future potential.
Speaking of past-to-present progress, soak up the following:
- ~100 BC: paper was invented.
- ~700: fireworks were invented.
- 1709: the piano was invented.
- 1712: the steam engine was invented.
- 1793: the cotton gin was invented.
- 1837: the telegraph was invented.
- 1879: the light bulb was invented.
- 1903: the Wright brothers flew for the first time. How many people thought it was impossible?
- 1954: the nuclear power plant APS-1 in Obninsk, Russia output 5 MW of electricity.
- 1969: Apollo 11 landed the first two humans on the moon. People still don't believe this happened!
- 1973: the mobile phone was invented.
- 1975: the first personal computer, the MITS Altair 8800, went on sale.
- 1983: TCP/IP (the Internet's backbone protocol) became the ARPANET standard.
And now in just the past 10 years:
- Tablets
- Solar roofs
- GPS in your pocket
- Kickstarter
- GitHub
- Mars exploration robot "Curiosity"
- Reusable space rockets
- Hadron collider
- 3D printing
- Virtual reality and augmented reality
- Speech translation/understanding by software
- Self-driving cars
- Blockchain
- Genetic engineering
- Bionic eyes and limbs
- Brain to machine interfaces (https://en.wikipedia.org/wiki/Brain-computer_interface)
Finally, I see two potentially important factors the author did not bring up:
(1) Our tools for learning and creating are becoming more efficient. Even having tools that are "specialized a.i." increases the effectiveness of individual humans.
(2) Speaking of tools, the Internet is allowing individual humans to collaborate at an efficiency level probably unimaginable 30 years ago. To me, the many benefits of such collaboration seem obvious.
I contend that we are on the beginning slope of a "hockey stick" of exponential growth in technological understanding, and we do ourselves a disservice by assuming anything without zooming out to look at the advances of the last 100 years, seeing the acceleration in just the last 10, and extrapolating a bit, just a bit, into the future. From my perspective, which I'll admit is optimistic, I predict the first self-aware a.i. will come into being by 2022. Crazy soon, right?
What do YOU think?
Given enough time, of course it is possible.
I think it's possible, and inevitable, that we create self-aware AI. If we do create it, I think it's more than likely we're in a simulated reality. I believe it was Elon Musk who said that if that's the case, the probability is one in billions that we are in a base reality. But you didn't want to get into that, so I'll move on.
I think your estimate of five years is entirely possible as well. Have you seen the stuff on quantum computers? I've seen several videos of the head of D-Wave (https://www.dwavesys.com) talking about how quantum computers will make self-aware AI a reality. It's crazy stuff. Not to get off topic, but they say D-Wave will be able to crack blockchains, though they think they'll be able to use quantum entanglement to create an uncrackable alternative.
I wish I were optimistic about AI. Really, who knows what will happen; I'm hoping for the best, but it makes sense to me that creating something considerably smarter than us could be our downfall. I've also heard the theory that our purpose could simply be to serve as the evolutionary stepping stone for AI. Either way, it's really exciting; I can't stop thinking of different scenarios for how it's going to play out.
As someone who has studied machine learning and neural networks, there are interesting connections you can make between the brain and these networks, which are greatly increasing in complexity. But will we create a self-aware entity, or one that merely imitates how we would expect a self-aware entity to behave? We may never know. Some people are fooled by chatbots that are rather primitive in terms of natural language processing and simply imitate a conversation rather than generate one.
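The imitation point is easy to demonstrate. Here's a minimal ELIZA-style sketch (my own example, not from the thread): the "chatbot" understands nothing and just pattern-matches the input and reflects it back, yet the exchange can feel conversational.

```python
import re

# Each rule is (pattern, response template). The {} placeholder, if present,
# is filled with whatever the pattern captured -- pure reflection, no understanding.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "What makes you feel {}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that really the reason?"),
]

def reply(text):
    """Return the first matching canned response, or a generic fallback."""
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1))
    return "Tell me more."

print(reply("I am worried about AI"))  # Why do you say you are worried about AI?
print(reply("The weather is nice"))    # Tell me more.
```

A dozen regexes is enough to sustain a short "conversation", which is exactly why fooling a human is a weak test for self-awareness.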
Good one!
Nice to meet you, I'm Jeff.