Towards Artificial Minds 3: Recent Breakthroughs and Some Philosophical Inquiries
Source: Pixabay.com
Recent Breakthroughs in Artificial Intelligence
In this section I am going to write about some recent breakthroughs in artificial intelligence research from the last ten years that also received attention from the general public, not only from the artificial intelligence research community. I am limiting myself to the last decade because it was a decade in which a lot of progress was made and artificial intelligence came back into the public's mind. When Deep Blue defeated chess grandmaster Garry Kasparov in 1997 [CHhH02], a major milestone in the development of artificial intelligence was reached; early artificial intelligence researchers even went so far as to believe that solving computer chess might be roughly equivalent to solving artificial intelligence in general, since it would have penetrated the core of human intellectual behaviour [Mue16]. But today, over twenty years after this actually happened, it is safe to say that this was not the case: we didn't learn that much about human intellectual behaviour, but a lot about how to play chess. After this happened, artificial intelligence surprisingly went out of the public's mind[1]. Regarding the reasons for that I can only speculate, but I guess it is connected to the contemporary world affairs: especially the burst of the dot-com bubble in early 2000 and the aftermath of the 9/11 attacks in 2001 stopped people from being overly optimistic.
From around 2007 or 2008 on, this changed: suddenly there was a lot of data available (on social networking sites) and computers had the computational power to process it in an effective manner, so old ideas like neural networks came back into fashion within the artificial intelligence research community and, through the advancements made, found their way back into the public's mind. An interesting side note is that the video gaming industry is indirectly responsible for the rise in computational power, since the demand for better 3D graphics led to the development of specialised processors that can perform certain operations in a highly parallel fashion, something that also speeds up the training of neural networks by several orders of magnitude.
IBM's Watson defeats the best human contestants at Jeopardy!
Jeopardy! is a long-running TV quiz show in which the contestants are given an answer and have to formulate the right question for that answer. These clues are split into different categories and have different difficulties; the higher the difficulty, the higher the monetary reward. For humans this is quite a challenging task, since a lot of knowledge and contextual information is required; for an artificial intelligence system it is even more difficult, since apart from these requirements the clues (which are formulated in English) also have to be processed and "understood" by the system. In February 2011, Watson played against two former contestants who had won record amounts of money on Jeopardy! and was able to defeat both former champions [Fer12]. This was quite an achievement and brought artificial intelligence systems back into the public's mind; IBM had spent around five years developing the system.
To achieve this, Watson used a very large "knowledge" base[2] that was filled with information from various sources such as Wikipedia, articles from the New York Times, or the Bible. After the linguistic preprocessing of the clue, the system starts many semantic searches in parallel. The more of these searches arrive at a similar result, the more likely it is that this result is correct; results that are less likely to be correct are filtered out. Once the system has decided on a result, it formulates it as a question and outputs it through a speech-synthesis system.
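To give a rough flavour of the "many parallel searches, trust the answers they agree on" idea, here is a minimal sketch in Python. This is of course not Watson's actual pipeline; the candidate answers and the confidence threshold are made up for illustration.

```python
from collections import Counter

def pick_answer(candidates):
    """Toy evidence aggregation: many parallel searches each propose a
    candidate answer, and a candidate supported by more searches is
    considered more likely to be correct."""
    counts = Counter(candidates)
    best, support = counts.most_common(1)[0]
    confidence = support / len(candidates)
    # Filter out answers whose support is too weak to risk "buzzing in".
    if confidence < 0.5:          # made-up threshold
        return None, confidence
    return best, confidence

# Seven hypothetical parallel searches propose candidate answers.
candidates = ["Mount Everest"] * 5 + ["K2", "Kangchenjunga"]
print(pick_answer(candidates))    # ('Mount Everest', 0.714...)
```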
The interesting thing about Watson is that it was built utilising a lot of techniques from GOFAI and combining them with approaches from machine learning (e.g. for deciding which answers to filter out). Since its triumph on Jeopardy! the system has been continuously improved and its capabilities have been extended to applications outside of quiz shows, like medical diagnosis [FLB+13], or made accessible to the public for creating new cooking recipes[3].
Deepmind releases an AI that is able to play classic video games
In 2013 a London-based startup called Deepmind published a paper about playing classic Atari 2600 video games using deep reinforcement learning [MKS+13]. Their artificial intelligence system only "sees" the graphical output of the video games as input, just like a human would. The system was able to learn how to play the games just from this input and the score it gets from a game. It was trained on the raw pixel data; unlike previous systems, no special features had to be engineered for it to work. Deepmind used this approach on six different games, and in three of them the system was able to score better results than a human expert.
What made this publication special was that the researchers from Deepmind combined neural networks with reinforcement learning and used them for predicting the value of an action. By doing so, they were able to use the high-level feature extraction of deep learning in order to come up with a better estimation of the value function of an action for a given input state. Even though all of this had rather few practical applications at the time, it convinced Google to acquire Deepmind.
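The core idea can be condensed into very little code. The following sketch is not Deepmind's implementation (their system uses a deep convolutional network, experience replay and further tricks); a simple linear model stands in for the network and the "screens" are random, but the update rule illustrates the principle: nudge the predicted value of the chosen action towards the reward plus the discounted value of the best next action.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_actions, gamma, lr = 16, 4, 0.99, 0.01
# Linear stand-in for a deep network: one row of weights per possible action.
W = rng.normal(scale=0.1, size=(n_actions, n_pixels))

def q_values(screen):
    """Estimated value of every action for a given (flattened) screen."""
    return W @ screen

def train_step(screen, action, reward, next_screen):
    """One Q-learning update: move Q(screen, action) towards the Bellman
    target  reward + gamma * max_a' Q(next_screen, a')."""
    target = reward + gamma * np.max(q_values(next_screen))
    td_error = target - q_values(screen)[action]
    W[action] += lr * td_error * screen   # gradient step on the squared error

# One made-up transition: current frame, chosen action, reward, next frame.
s, s_next = rng.random(n_pixels), rng.random(n_pixels)
train_step(s, action=2, reward=1.0, next_screen=s_next)
print(q_values(s))
```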
Google Deepmind's AlphaGo defeats the best Go player in the world
In March 2016 Google Deepmind held a competition in which its artificial intelligence system AlphaGo played against Lee Sedol, who was at the time arguably the best Go player in the world. Go is an ancient Chinese board game that is rather simple to learn, but very difficult to master. It is played on a grid board, where the two players alternately place their black or white stones in order to conquer as much territory as possible. The game does have some similarities to chess, but it is played on a 19 by 19 grid, which (due to the combinatorial explosion) makes it much more difficult for a computer to master: there are so many possibilities that the classical search approach traditionally used for chess is not feasible. The match of Lee Sedol against AlphaGo was often compared to the match of Garry Kasparov against Deep Blue in 1997. AlphaGo winning against Lee Sedol was a big surprise, since it was previously thought that computers beating the top human players in Go was still a decade away [SHM+16].
AlphaGo used two different types of deep neural networks: a policy network that suggests which move to make, trained through a combination of supervised learning (on a large number of previously played games of Go) and reinforcement learning, and a value network that evaluates the current position on the game board. In addition, a randomised search procedure called Monte Carlo tree search tries to find the most promising move based on random sampling of the search space. All three methods are combined for AlphaGo to make a move.
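How these three ingredients interact can be sketched with a toy example. The snippet below only illustrates the spirit of the PUCT-style selection rule used in such a tree search, not AlphaGo's actual algorithm: the "policy network" and "value network" are random stand-ins, there is no real game, and the constants are made up.

```python
import math, random

random.seed(0)
MOVES = list(range(9))                        # pretend position with 9 legal moves
prior = {m: random.random() for m in MOVES}   # stand-in for the policy network
total = sum(prior.values())
prior = {m: p / total for m, p in prior.items()}

def value_net(move):
    """Stand-in for the value network: a noisy estimate in [-1, 1] of how
    good the position after `move` is."""
    return random.uniform(-1, 1)

visits = {m: 0 for m in MOVES}
value_sum = {m: 0.0 for m in MOVES}
C_PUCT = 1.5                                  # made-up exploration constant

for _ in range(200):                          # simulation budget
    n_total = sum(visits.values()) + 1

    def score(m):
        # Exploit moves with a high average value, explore moves that the
        # policy network likes but that have rarely been visited so far.
        q = value_sum[m] / visits[m] if visits[m] else 0.0
        u = C_PUCT * prior[m] * math.sqrt(n_total) / (1 + visits[m])
        return q + u

    m = max(MOVES, key=score)
    value_sum[m] += value_net(m)              # evaluate the resulting position
    visits[m] += 1

# The move that was simulated most often is played.
best_move = max(MOVES, key=lambda m: visits[m])
print("chosen move:", best_move, "visits:", visits[best_move])
```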
After AlphaGo's success, the team from Deepmind kept working on the system, and in October 2017 they introduced AlphaGo Zero, a version of AlphaGo that was trained without any supervised learning and just "knew" the rules of the game; the hardware requirements were also drastically reduced. The system was trained by playing games against itself and managed to beat the version of AlphaGo that defeated Lee Sedol after only three days of training [SSS+17]. In December 2017 the same team introduced AlphaZero, an improvement of AlphaGo Zero that is also capable of learning chess and shogi[4] just from the rules of the respective game. After a few hours of training it was better than any software ever written for Go, chess and shogi, achieving superhuman performance in all three of these games [SHS+17].
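The self-play training loop itself is conceptually simple and can be sketched in a few lines. Again, this is a made-up illustration and not Deepmind's code: `play_one_game` and `update_model` are hypothetical stand-ins for self-play with tree search and a gradient update on the network.

```python
import random

random.seed(0)
model = {"strength": 0.0}                 # stand-in for the network parameters

def play_one_game(model):
    """Pretend self-play game: returns (position, outcome) training examples."""
    outcome = 1 if random.random() < 0.5 else -1
    return [("some-position", outcome)]

def update_model(model, examples):
    """Stand-in for a gradient update on the collected self-play data."""
    model["strength"] += 0.01 * len(examples)

for iteration in range(3):                # a real run uses millions of games
    replay_buffer = []
    for _ in range(10):
        replay_buffer += play_one_game(model)
    update_model(model, replay_buffer)
    print("iteration", iteration, "model:", model)
```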
Philosophical Inquiries into the Recent Developments in Artificial Intelligence
All three examples I picked have one thing in common: an artificial intelligence was able to outperform humans on tasks that used to be solvable only by humans[5]. Especially Watson's triumph on Jeopardy! is remarkable, since it defeated the top human contestants in a domain where natural language understanding and contextual knowledge are the key factors, two things which are traditionally very difficult for computers, since they are quite fuzzy in their nature, something that humans can handle quite well, but computers cannot. I think that hearing about machines outperforming humans in cognitive tasks that were for a very long time thought to be exclusive to humans is very unsettling for quite some people. Together with humanity's tendency towards self-aggrandisement and a general ressentiment against artificial intelligence[6], the door for imagining apocalyptic scenarios and philosophising about the end of humankind is wide open. I admit that being outperformed by machines in tasks that require human intelligence is quite an insult to the collective human ego. But at the same time I have to say that worrying about machines taking over isn't the right answer.
The above-mentioned conception of the early artificial intelligence researchers, that solving chess would be roughly equivalent to solving (artificial) intelligence in general [Mue16], comes to mind. This account overlooks one fundamental restriction that all the recent breakthroughs in artificial intelligence have in common: they only work that well in their respective domain, and these domains are all very narrow. All of these domains are well-defined and the problem that has to be solved is quite well-studied, at least enough so that it can be solved. The methods used to solve these problems are by no means some occult "black magic", but rather mathematical procedures that have been known for decades (if not centuries). The reason why the beginning of this paper is quite heavy on mathematics is that I wanted to show the underlying principles of the recent breakthroughs in artificial intelligence. I didn't want to sweep them under the carpet or just plainly ignore them, because if something like that happens, artificial intelligence quite easily starts to look like some kind of "black magic", something that it is absolutely not[7].
There is undeniably something mysterious about the very nature of the mind, but also something familiar, since every human being has one. In western philosophy the dualism between mind and body has a long tradition, most prominently in the works of René Descartes, but its origins can be traced back to ancient Greek philosophy: one of the main topics in Plato's Republic is the separation between the "perfect" world of mind, where the ideas reside, and the "imperfect" world of matter, where the body resides. Plato's philosophy set the tone of the attitude towards mind and matter for millennia to come; "mind over matter" was the established philosophical paradigm for almost two millennia after Plato. Given the mysterious, yet familiar notion of the human mind and the primacy of mind over matter, it is very understandable that the emergence of machines that are able to outperform humans in tasks requiring human intelligence is discomforting. One of the last resorts that humans have to stand out against the "unthinking matter" is creativity, but even in this domain machines are not totally "untalented" [Col12].
Instead of clutching at every straw to save the special (cognitive) status of humans in this world, we should rather take an unbiased look at the differences between the human mind and artificial intelligence systems. One of the most striking differences is consciousness[8]; something closely related to it is intentionality. Being aware of something and having thoughts that are about something is quite evident to a human being, but it is something that artificial intelligence systems are not really capable of (yet). The big philosophical question is: is it even possible for an artificial system to develop "mechanisms" like consciousness and intentionality? Since both are very subjective in their nature, it is very difficult to come up with a crisp scientific theory about them; I guess this is one of the reasons holding the development of such systems back, since we are not really sure what it is that we are looking for. I think that intentionality is the easier one to grasp, but even there we can only argue that the intentionality of an artificial intelligence system is so far only the intentionality of its creator, or, more precisely, the intention of its creator to solve a specific problem. Consciousness, on the other hand, is something that current artificial intelligence systems are not really showing, and you also can't really argue, as for intentionality, that the consciousness of its creator is "transferred" to the artificial intelligence system.
Going back to the foundations of current artificial intelligence systems, we see that they are only mathematical operations. The big question is how, or whether, it is possible for such systems, which are based on rather abstract operations, to develop something like consciousness. Since artificial neural networks are basically just solving an optimisation problem, it does not seem very likely; but one necessary requirement for consciousness is the representation of knowledge as internal states, and meanings and knowledge representation can be seen as distinguishing criteria [Šef08]. Distinguishing between different (trained) categories is something that artificial neural networks are quite good at, and it was actually one of their first practical use cases.
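To make the "just an optimisation problem" point concrete, here is a minimal (and entirely made-up) example of what such category learning looks like in practice: a single artificial neuron is fitted to a toy two-class data set by gradient descent on a loss function. Nothing in it resembles a mind; it is plain function fitting.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two made-up categories: points clustered around (-1, -1) and around (1, 1).
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(200):                       # gradient descent = the "learning"
    p = 1 / (1 + np.exp(-(X @ w + b)))     # predicted probability of class 1
    w -= lr * (X.T @ (p - y)) / len(y)     # gradient of the cross-entropy loss
    b -= lr * np.mean(p - y)

accuracy = np.mean(((X @ w + b) > 0) == y)
print("training accuracy:", accuracy)
```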
Conclusion
Artificial intelligence has been thriving in the last decade and a lot of breakthroughs have been achieved. This development has had a huge impact on the human self-concept and has led to wild speculation about its influence on humanity, but in the end what has been achieved is just finding very performant solutions for difficult, but neatly defined, problems.
The current technology is still far away from reproducing features of the human mind like consciousness or intentionality, but the current developments do not categorically rule out the emergence of these features either.
Bibliography
[Bui02] James Buickerood. Two dissertations concerning sense, and the imagination, with an essay on consciousness (1728): A study in attribution. 1650-1850: Ideas, Aesthetics, and Inquiries in the Early Modern Era, 7:51-86, 2002.

[CHhH02] Murray Campbell, A. Joseph Hoane, and Feng-hsiung Hsu. Deep Blue. Artificial Intelligence, 134(1):57-83, 2002.

[Col12] Simon Colton. The Painting Fool: Stories from Building an Automated Painter. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.

[Fer12] David Ferrucci. Introduction to "This is Watson". IBM Journal of Research and Development, 56(3.4):1:1-1:15, May 2012.

[FLB+13] David Ferrucci, Anthony Levas, Sugato Bagchi, David Gondek, and Erik T. Mueller. Watson: Beyond Jeopardy! Artificial Intelligence, 199-200:93-105, 2013.

[MKS+13] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

[Mue16] Luke Muehlhauser. What should we learn from past AI forecasts?, May 2016. [Online; accessed 2018-02-18].

[Šef08] Ján Šefránek. Knowledge representation for animal reasoning. Proceedings of AKRR, 8, 2008.

[SHM+16] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484-489, January 2016.

[SHS+17] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815, 2017.

[SSS+17] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of Go without human knowledge. Nature, 550:354-359, October 2017.

[Thi11] Udo Thiel. The Early Modern Subject: Self-Consciousness and Personal Identity from Descartes to Hume. Oxford University Press, 2011.
Footnotes
[1] This is a personal observation of mine; back in those days I wasn't really actively following the developments of artificial intelligence, so all the information about new developments I got came from newspapers. Of course there was some development that made the news, like the DARPA Grand Challenge (http://edition.cnn.com/2004/TECH/ptech/03/14/darpa.race/ verified 2018-02-20) or NASA's Spirit and Opportunity rovers exploring Mars (http://news.bbc.co.uk/2/hi/science/nature/3427045.stm verified 2018-02-20), but apart from these events artificial intelligence was rather a niche topic.
[2] During its run on Jeopardy! Watson did not have access to the internet, so it had to rely on its internal database; in general it is possible for Watson to use the internet as an information source as well.
[3] https://www.ibmchefwatson.com/, verified 2018-02-21
[4] Shogi, often called "Japanese chess", is a game with some visible similarities to chess, but it has some different rules (for example, a captured token can change its owner) and a bigger game board, which make it computationally more complex than chess.
[5] I admit that computers outperforming humans in classic video games might not look like a big deal, but in this case the "how" is important: the machine learned to play those games by receiving only raw pixels as input, without any prior knowledge about the rules of the game.
[6] Which, as I wrote in [link to the first part], is also dependent on the cultural background; ressentiment might be a strong word, probably distrust towards novelties is the more appropriate phrase to use.
[7] Another reason is that since my study programme is interdisciplinary, I feel like I have to bring together formal science and philosophy.
[8] In today's discussions about philosophy of mind, consciousness plays a central role, but this importance is relatively new, considering the long history of philosophy of mind. The first treatise that exclusively focused on consciousness was the anonymously published Essay on Consciousness from 1728, which itself was only the appendix to another publication; the full title of that publication was Two Dissertations concerning Sense, and the Imagination. With an Essay on Consciousness [Thi11]. The author was unknown for a long time, but it turned out that it was Charles Mein, a customs officer from London [Bui02].
Hey @cpufronz thank you for this incredible blog!
IBM's Watson was definitely a technological wonder back then. (I think it was built before 2006, correct me if I'm wrong.) It would be interesting to see how this develops in the future in medical or scientific fields.
Please keep up the quality work!
Thank you :)
Yes, the development of Watson started around 2006 and the technology has incredible potential, but unfortunately for IBM the competition in AI, especially for finding talented people, is very tough. IBM is very good when it comes to the marketing of Watson, but deploying this technology isn't as easy as they hoped. This (German) article basically says that the actual performance of Watson couldn't keep up with the expectations people have of it, something that is actually very common when it comes to AI systems.
Even by today's standards Watson is a technological wonder, but as always putting such a system into "production" is way more difficult than initially thought.
Thanks for the detailed response!
I truly appreciate your efforts!
You're welcome :)
aa you listened to me, there's no equations, nice :D
Yes, hopefully this time it will reach a larger audience :D