Artificial Intelligence: 700 Years of History

in #science

This Friday, the French government presented its national roadmap for artificial intelligence, with the aim of pooling the efforts of researchers, start-ups and industrial companies.

"The challenge is to make emerge a French AI community that is able to collaborate directly with the most advanced researchers with start-ups and large industrialists," said Axelle Lemaire, Secretary of State for Digital.

Forbes France looks back on 700 years of artificial intelligence, whose beginnings date back to 1308, when the Catalan poet and theologian Raymond Lulle published an essay entitled Ars Generalis Ultima.

1308 - The Catalan poet and theologian Raymond Lulle publishes Ars Generalis Ultima (The Ultimate General Art), perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts.

1666 - The mathematician and philosopher Gottfried Leibniz publishes Dissertatio de arte combinatoria (Dissertation on the Combinatorial Art) and, following Lulle, proposes an alphabet of human thought. He puts forward the idea that all ideas are nothing but combinations of a relatively small number of simple concepts.

1763 - Thomas Bayes develops a framework for reasoning about the probability of events. Bayesian inference will become a leading approach in machine learning.
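
As an illustration of the kind of reasoning this framework allows (an example added here, not part of the original timeline), here is a minimal Python sketch of Bayes' rule applied to a toy spam filter; all the probabilities are made up:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E), applied to a toy spam
# filter. The numbers below are illustrative assumptions, not real data.

p_spam = 0.2                 # prior P(spam)
p_word_given_spam = 0.6      # likelihood P("free" appears | spam)
p_word_given_ham = 0.05      # likelihood P("free" appears | not spam)

# Total probability of seeing the word at all (law of total probability).
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior probability that a message containing "free" is spam.
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(f"P(spam | 'free') = {p_spam_given_word:.2f}")  # 0.75
```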

1854 - George Boole argues that logical reasoning could be conducted in the same way as solving a system of equations.

1898 - At an electrical exhibition in the newly completed Madison Square Garden, Nikola Tesla demonstrates the world's first radio-controlled boat. The vessel was equipped, in Tesla's words, with "a borrowed mind."

1914 - The Spanish engineer Leonardo Torres y Quevedo demonstrates the first chess-playing machine. It is capable of playing king-and-rook versus lone-king endgames to checkmate without any human intervention.

1921 - The Czech writer Karel Capek introduces the word 'robot' in his play "R.U.R." (Rossum's Universal Robots). The term is derived from the word 'robota', which means 'work'.

1925 - Houdina Radio Control, a US radio equipment company, demonstrates a driverless, radio-controlled car on the streets of New York.

1927 - Release of the science fiction film Metropolis. It features a robot double of a young peasant girl, which unleashes chaos in the Berlin of 2026. It is the first robot depicted on film, and it will later inspire the Art Deco look of C-3PO in Star Wars.

1929 - Makoto Nishimura builds Gakutensoku (Japanese for "learning from the laws of nature"), the first robot produced in Japan. It could change its facial expressions and move its hands and head by means of a compressed-air mechanism.

1943 - Warren S. McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity" in the Bulletin of Mathematical Biophysics. They discuss networks of simplified artificial neurons and show how such networks can perform simple logical functions. The article will become an inspiration for computer-based 'neural networks' (and later for 'deep learning') and for the popular phrase that they 'imitate the brain'.
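
A toy illustration (added here, not taken from the 1943 paper): a McCulloch-Pitts-style threshold unit computing the logical functions AND and OR. The weights and thresholds are illustrative choices.

```python
# A McCulloch-Pitts-style threshold unit: it outputs 1 ("fires") when the
# weighted sum of its binary inputs reaches a threshold, 0 otherwise.
# Weights and thresholds are illustrative, not taken from the 1943 paper.

def threshold_unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def logical_and(x1, x2):
    return threshold_unit([x1, x2], weights=[1, 1], threshold=2)

def logical_or(x1, x2):
    return threshold_unit([x1, x2], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", logical_and(a, b), "OR:", logical_or(a, b))
```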

1949 - Edmund Berkeley, in "Giant Brains, or Machines That Think", writes: "Recently there has been a good deal of news about strange giant machines that can handle information with vast speed and skill ... These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves ... A machine can handle information; it can calculate, conclude and choose; it can perform reasonable operations with information. A machine, therefore, can think."

1949 - Donald Hebb publishes "The Organization of Behavior: A Neuropsychological Theory". He proposes a theory in which learning is based on the connections within neural networks and on the ability of synapses to strengthen or weaken over time.
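
A minimal sketch of the idea (added for illustration, using the modern textbook form of the rule rather than Hebb's own formalism): a connection weight grows whenever the two units it links are active at the same time.

```python
# Hebbian-style update: delta_w = eta * pre * post, i.e. the connection is
# strengthened when pre- and post-synaptic units are active together.
# Learning rate and activity pattern below are illustrative assumptions.

eta = 0.1      # learning rate
weight = 0.0   # connection strength between the two units

activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]  # (pre, post) pairs

for pre, post in activity:
    weight += eta * pre * post
    print(f"pre={pre} post={post} -> weight={weight:.2f}")
```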

1950 - "Programming a Computer for playing Chess", by Claude Shannon, is the first published article on the development of a computer game of chess.

1950 - Alan Turing publishes "Computing Machinery and Intelligence". He proposes the 'imitation game', which will later be known as the 'Turing test'.

1951 - Marvin Minsky and Dean Edmunds build SNARC, the first artificial neural network, using 3,000 vacuum tubes to simulate a network of 40 neurons.

1952 - Arthur Samuel develops the Samuel Checkers-playing Program, the world's first self-learning program.

August 31, 1955 - The term 'artificial intelligence' is introduced in a research proposal for "a 2 month, 10 man study of artificial intelligence" submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM) and Claude Shannon (Bell Telephone Laboratories). The workshop took place a year later, in July and August 1956, and that date is generally considered the birth of the new research field.

1957 - Frank Rosenblatt develops the Perceptron, an early artificial neural network enabling pattern recognition based on a two-layer computer learning network. The New York Times described the Perceptron as "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." The New Yorker called it "a remarkable machine ... capable of what amounts to thought".
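
For illustration (a modern textbook simplification, not a reconstruction of Rosenblatt's hardware), here is a toy perceptron trained with the classic error-correction rule on the logical OR function:

```python
# A toy perceptron: a single threshold unit whose weights are nudged by the
# error-correction rule until it classifies all training examples correctly.
# Here it learns the (linearly separable) OR function.

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]   # weights
b = 0.0          # bias
eta = 0.5        # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(10):
    for x, target in data:
        error = target - predict(x)
        w[0] += eta * error * x[0]
        w[1] += eta * error * x[1]
        b += eta * error

print([predict(x) for x, _ in data])   # [0, 1, 1, 1]
```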

1958 - John McCarthy develops the LISP programming language, which becomes the most popular programming language in artificial intelligence research.

1959 - Arthur Samuel coins the term "machine learning", describing it as programming a computer so that it learns to play checkers better than the person who wrote the program.

1959 - Oliver Selfridge publishes "Pandemonium: A Paradigm for Learning", in which he describes a model of a process by which computers could recognize patterns that have not been specified to them in advance.

1959 - John McCarthy publishes "Programs with Common Sense". He describes the 'Advice Taker', a program for solving problems by manipulating sentences in formal languages, with the ultimate goal of making programs that "learn from their experience as effectively as humans do".

1961 - The first industrial robot, Unimate, begins work on an assembly line at a General Motors plant in New Jersey.

1961 - James Slagle develops SAINT (Symbolic Automatic INTegrator), a heuristic program capable of solving symbolic integration problems in first-year calculus.

1964 - Daniel Bobrow completes his doctoral thesis at MIT, "Natural Language Input for a Computer Problem Solving System", and develops STUDENT, a natural-language understanding program.

1965 - Herbert Simon predicts that "in the next 20 years, machines will be able to do all the work that a man can do".

1965 - Hubert Dreyfus publishes "Alchemy and Artificial Intelligence", arguing that the mind is not like a computer and that there are limits beyond which artificial intelligence cannot progress.

1965 - I.J. Good writes that the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

1965 - Joseph Weizenbaum develops ELIZA, an interactive program that carries on a dialogue in English on any topic. Weizenbaum, who wanted to demonstrate how superficial communication between man and machine really was, was surprised by the number of people who attributed human feelings to his computer program.

1965 - Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg and Carl Djerassi begin work on the DENDRAL project at Stanford University. The first expert system, it automated decision-making and problem-solving for organic chemists, with the general aim of studying hypothesis formation and building models of empirical induction in science.

1966 - The robot Shakey is the first general-purpose mobile robot able to reason about its own actions. In a 1970 Life magazine article about it, Marvin Minsky asserts with confidence: "In three to eight years we will have a machine with the general intelligence of an average human being."

1968 - "2001: the Odyssey of Space" is released in theaters. It features Hal, a conscious computer.

1968 - Terry Winograd develops SHRDLU, an early natural-language understanding program.

1969 - Arthur Bryson and Yu-Chi Ho describe the backpropagation algorithm as a method for optimizing multi-stage dynamic systems. This learning algorithm for multi-layer neural networks contributed greatly to the success of 'deep learning' in the 2000s and 2010s, once computing power had reached a level sufficient to train large networks.

1969 - Marvin Minsky and Seymour Papert publish "Perceptrons: An Introduction to Computational Geometry", stressing the limitations of simple neural networks. In an expanded edition published in 1988, they respond to claims that their 1969 conclusions had led to a decline in funding for neural network research: "Our view is that progress had already come to a virtual halt due to a lack of good basic theories ... by the mid-1960s there had been many experiments with perceptrons, but no one had been able to explain why they could recognize certain kinds of patterns and not others."

1970 - The first anthropomorphic robot, WABOT-1, is built at Waseda University in Japan. It consisted of a limb-control system, a vision system and a conversation system.

1972 - MYCIN is developed at Stanford University, an early expert system for identifying the bacteria responsible for severe infections and recommending appropriate antibiotics.

1973 - James Lighthill reports to the British Science Research Council on the state of artificial intelligence research. He concludes that "in no part of the field have the discoveries made so far produced the major impact that was then promised". This leads to a drastic reduction in government funding for AI research.

1976 - Raj Reddy publishes "Speech Recognition by Machine: A Review" which summarizes the first work on natural language processing.

1978 - XCON (eXpert CONfigurer), an expert system for configuring computers, is able to select components according to the customer's requirements.

1979 - The "Stanford Cart", developed at Stanford University, successfully crosses a chair-filled room in about five hours without human intervention, becoming one of the earliest examples of an autonomous vehicle.

1980 - WABOT-2 is built at Waseda University. This android musician is capable of communicating with a person, reading a musical score and playing pieces of average difficulty on an electronic organ.

1981 - The Japanese Ministry of International Trade and Industry allocates a budget of $850 million (almost 4 billion francs at the time, or about 676 million euros) to the Fifth Generation Computer project. The project aims to develop computers that can carry on conversations, translate languages, interpret images and reason like human beings.

1984 - Release of the film "Electric Dreams", about a love triangle between a man, a woman and a computer.

1984 - At the Fourth National Conference on Artificial Intelligence, Roger Schank and Marvin Minsky warn of the coming "AI winter", the imminent bursting of the AI bubble (which did occur three years later), comparable to the collapse of investment in AI research in the 1970s.

1986 - The first driverless car, a Mercedes-Benz van equipped with cameras and sensors, drives at up to 88 km/h on empty streets.

October 1986 - David Rumelhart, Geoffrey Hinton and Ronald Williams publish "Learning Representations by Back-Propagating Errors", describing a new procedure, backpropagation, for networks of neuron-like units.
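
A minimal sketch of the idea (added here as an illustration; the network size, learning rate and data are assumptions, not taken from the 1986 paper): errors measured at the output are propagated backwards to adjust the weights of a small two-layer network learning XOR.

```python
import numpy as np

# Tiny 2-4-1 network trained by back-propagating errors to learn XOR.
# Architecture, learning rate and random seed are illustrative assumptions.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error toward the input layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())   # typically close to [0, 1, 1, 0]
```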

1987 - The video "Knowledge Navigator", shown during Apple CEO John Sculley's keynote at Educom, imagines a future in which knowledge applications are accessed through intelligent agents working over networks connected to vast amounts of digitized information.

1988 - Judea Pearl publishes "Probabilistic Reasoning in Intelligent Systems". His Turing Award citation later reads: "Judea Pearl created the representational and computational foundation for the processing of information under uncertainty. He is credited with the invention of Bayesian networks, a mathematical formalism for defining complex probability models, as well as the principal algorithms used for inference in these models. This work not only revolutionized the field of artificial intelligence but also became an important tool for many other branches of engineering and the natural sciences."

1988 - Rollo Carpenter develops the chatbot Jabberwacky to "simulate natural human chat in an interesting, entertaining and humorous manner". It is an early attempt at creating an AI that interacts with humans.

1988 - Members of the IBM T.J. Watson Research Center publish "A Statistical Approach to Language Translation", announcing the shift from rule-based machine translation to probability-based machine translation. More broadly, this marks a move toward machine learning based on the statistical analysis of known examples rather than on comprehension of the task at hand (IBM's Candide project, which translated successfully between English and French, was based on 2.2 million pairs of sentences, mostly drawn from the proceedings of the Canadian parliament).

1988 - Marvin Minsky and Seymour Papert publish an expanded edition of their 1969 book "Perceptrons". In the prologue, "A View from 1988", they write: "One reason why progress has been so slow in this field is that researchers unfamiliar with its history have continued to make many of the same mistakes that others have made before them."

1989 - Yann LeCun and other researchers at AT&T Bell Labs successfully apply a backpropagation algorithm to a multi-layer neural network to recognize handwritten postal codes. Given the hardware limitations of the time, training took about three days, but it was already a significant improvement over earlier efforts.

1990 - Rodney Brooks publishes "Elephants Don't Play Chess". He proposes a new approach to building artificially intelligent systems, especially robots, grounded in continuous interaction with the environment: "The world is its own best model ... The trick is to sense it appropriately and often enough."

1993 - Vernor Vinge writes "The Coming Technological Singularity", predicting that "within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

1995 - Richard Wallace develops the chatbot ALICE (Artificial Linguistic Internet Computer Entity), inspired by ELIZA (see above) but enriched with natural-language sample data collected on an unprecedented scale, made possible by the advent of the Web.

1997 - Sepp Hochreiter and Jürgen Schmidhuber propose Long Short-Term Memory (LSTM), a type of recurrent neural network used today in handwriting recognition and speech recognition.
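
As an illustration of what such a cell does (a generic modern formulation with made-up sizes and random weights, not the exact architecture of the 1997 paper, which predates the forget gate added by Gers et al. in 2000): gates control what is written to, kept in, and read from the cell state.

```python
import numpy as np

# One step of a generic LSTM cell. Input, forget and output gates decide what
# is written to, kept in, and read from the cell state c. Sizes and weights
# below are illustrative assumptions.

rng = np.random.default_rng(1)
n_in, n_hidden = 3, 4

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, acting on the concatenation [x, h_prev].
W_i, W_f, W_o, W_c = (rng.normal(scale=0.1, size=(n_in + n_hidden, n_hidden))
                      for _ in range(4))

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([x, h_prev])
    i = sigmoid(z @ W_i)           # input gate
    f = sigmoid(z @ W_f)           # forget gate
    o = sigmoid(z @ W_o)           # output gate
    c_tilde = np.tanh(z @ W_c)     # candidate cell update
    c = f * c_prev + i * c_tilde   # new cell state
    h = o * np.tanh(c)             # new hidden state
    return h, c

h, c = np.zeros(n_hidden), np.zeros(n_hidden)
for x in rng.normal(size=(5, n_in)):   # a toy sequence of 5 input vectors
    h, c = lstm_step(x, h, c)
print(h.round(3))
```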

1997 - Deep Blue becomes the first chess program to beat a reigning world champion.

1998 - Dave Hampton and Caleb Chung create Furby, the first domestic pet robot.

1998 - Yann LeCun, Yoshua Bengio and others publish articles on the application of neural networks to the recognition of handwriting and on the optimization of backpropagation.

2000 - Cynthia Breazeal of MIT develops Kismet, a robot that can recognize and simulate emotions.

2000 - ASIMO, Honda's intelligent humanoid robot, is able to walk as fast as a human and to deliver trays to customers in a restaurant setting.

2001 - A.I. Artificial Intelligence, a Steven Spielberg film, features David, a childlike android programmed with the ability to love.

2004 - DARPA launches its first Grand Challenge, a prize competition for autonomous vehicles, in the Mojave desert. None of the participating vehicles completes the 240 km course.

2006 - Oren Etzioni, Michele Banko and Michael Cafarella coin the term "machine reading", defined as an autonomous, unsupervised understanding of text.

2006 - Geoffrey Hinton publishes "Learning Multiple Layers of Representation", summarizing the ideas that led to "multilayer neural networks that contain top-down connections and training them to generate sensory data rather than to classify it". This is the new approach of deep learning.

2007 - Fei Fei Li and colleagues at Princeton University begin to assemble ImageNet, a large database of annotated images designed to support research on visual object recognition software.

2009 - Rajat Raina, Anand Madhavan and Andrew Ng publish "Large-scale Deep Unsupervised Learning using Graphics Processors", arguing that modern graphics processors far surpass the computational capabilities of multicore CPUs and can revolutionize the applicability of deep unsupervised learning methods.

2009 - Google secretly begins developing a driverless car. In 2014, it will become the first to pass a state self-driving test, in Nevada.

2009 - At Northwestern University's Intelligent Information Laboratory, researchers develop Stats Monkey, a program that writes sports news stories without human intervention.

2010 - Launch of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an annual AI object recognition competition.

2011 - A convolutional neural network wins the German traffic sign recognition competition with 99.46% accuracy (versus 99.22% for humans).

2011 - Watson, a question-answering computer, competes on the TV quiz show Jeopardy! and defeats two former champions.

2011 - Researchers at IDSIA in Switzerland report a 0.27% error rate in handwriting recognition using convolutional neural networks, a significant improvement over the 0.35% to 0.40% error rates of previous years.

June 2012 - Jeff Dean and Andrew Ng report an experiment in which they fed 10 million unlabeled images, taken at random from YouTube, to a very large neural network and, "to our amusement, one of our artificial neurons learned to respond strongly to pictures of ... cats".

October 2012 - A convolutional neural network designed by researchers at the University of Toronto achieves an error rate of only 16% in the ImageNet object recognition competition, a significant improvement over the 25% error rate of the previous year.

March 2016 - Google DeepMind's AlphaGo beats Go champion Lee Sedol.
