A desire to meet other forms of intelligent life has occupied human minds for centuries. It drove ancient people to look up at the sky, to invent the telescope and eventually to assemble space rockets, to study animals in order to examine their intellectual abilities, and to search for patterns in the behavior of insects. Widely covered in fiction and even in myth, the idea of granting intellect to artificial objects has thrilled people of very different cultural and scientific backgrounds, not out of idle curiosity, but out of a desire to change life for the better.
One of the earliest stories of artificial intelligence is the myth of Hephaestus, who forged the bronze giant Talos, guardian of Crete, along with two mechanical dogs made of precious metals that guarded the palace of Alcinous. He not only created the first robot of legend but also gave it human-like traits, which to this day define the classic AI of science fiction. The purposes of robotics have expanded over time; still, some scientists keep pursuing the goal of creating armed robotic forces.
This is undoubtedly one of the most exciting industries today, and it has been growing by leaps and bounds. Let’s briefly recap AI’s achievements over time:
1964 — the first ever natural language processing computer program (now known as a chatterbot) was created at the MIT Artificial Intelligence Laboratory. Her name was ELIZA, and she was developed in an attempt to let a computer talk like a real human. The most famous script written for ELIZA simulated a session with a psychotherapist. The first human-machine dialogue was made possible by a simple methodology: the machine matched specific patterns in the user’s speech and generated answers from them. When ELIZA did not identify any keywords in a sentence, she paraphrased her interlocutor’s line and returned it as output. It is worth mentioning that many people who first used the program thought they were talking to a real human. We should remember that this achievement took place fifteen years before personal computers became more or less available to ordinary people. Eight years later another chatterbot, PARRY, was created. He was intended to simulate a conversation with a person suffering from schizophrenia, and was a considerable improvement over ELIZA. Both programs took the Turing test, which is meant to give a rough evaluation of a computer’s “ability to think”: several “judges” simultaneously talked with a computer and a human via text messaging and then had to say which was which.
1980 — ELIZA and PARRY both partially passed the Turing test, but could they really think? John Searle, an American philosopher, argued that a computer program was not able to actually ponder and analyze its input; instead it followed an algorithm that transformed any input into some kind of output. He illustrated the process with himself sitting in a closed “Chinese room” with one slot for input (incoming cards with questions in Chinese) and another for output, which was expected to be an intelligent answer, also in Chinese. However, Searle did not know a word of Chinese; he used a book of rules, an algorithm, that let him match the foreign symbols and choose the right ones for a response. If a computer follows the same process during such “communication”, it cannot be called an artificial intelligence, because it is not required to comprehend the questions, just as Searle did not need to understand Chinese characters to answer via an algorithm. A truly life-changing technology, however, requires something closer to a real thinking process, and that would arrive very soon.
1980s — the Australian mathematician Rodney Brooks opened a new chapter in robotics with the principle of Nouvelle Artificial Intelligence. Its main difference from the traditional AI programming approach was the belief that human intelligence is not based on symbols that can be manipulated and processed. The “top-down” principle of programming (giving a computer all the necessary information in advance) was thus replaced by a “bottom-up” one (programming in as little information as possible, but with an ability to learn and adapt). Brooks first incorporated this principle into his insectoid robots. One of them, Genghis, was a six-legged insect-like robot that could detect the heat of a living creature and switch on its “stalking mode”: Genghis followed its prey, moving around and avoiding obstacles in order to keep the target in sight. Although the insect could sense its environment, “see” the terrain and plot a safe path toward a target, it could not perform any other task at the same time the way humans do.
1988 — a computer named Deep Thought, created by students at Carnegie Mellon University in Pittsburgh, became the first to beat a grandmaster, Bent Larsen, in a single game of chess. Chess was chosen as the benchmark game for intelligent thinking because it allows on the order of 10^120 possible games. No one can examine that many continuations, but the best players anticipate the strategy of their opponents and quickly narrow the game down to a small set of promising options. That is what the students tried to implement in their powerful computer. Some time later, IBM took over Deep Thought, reprogrammed it, dressed it in blue and presented the world with its most sophisticated chess computer, Deep Blue. In 1996 it won a game against the world’s best chess player, Garry Kasparov of Russia, and in 1997 it won a full match against him. Its advantage lay in speed and processing power: it could analyze 200 million positions per second, whereas Kasparov could manage only about three. After the match, Kasparov admitted that he had “sensed a new kind of intelligence” fighting against him. This was another turning point in the history of AI.
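The “narrowing down” described above is exactly what chess programs automate. A minimal sketch of the idea is minimax search with alpha-beta cut-offs, shown here over a hand-made toy game tree (nested lists with leaf scores), not real chess positions:

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning over a nested-list game tree."""
    if not isinstance(node, list):        # leaf: the score of a position
        return node
    best = -math.inf if maximizing else math.inf
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:                 # the opponent would never allow this line,
            break                         # so the remaining moves are pruned unexamined
    return best

# Depth-2 toy tree: the maximizing player picks a branch, the opponent replies.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, maximizing=True))   # → 3
```

The pruning step is what lets a program ignore most of the astronomically many continuations, just as a human master discards bad lines without calculating them out.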
A real breakthrough in the field was the invention of neural nets. It was not the technology itself that shifted the whole concept of how machines work, but the school of thought behind it, called connectionism. It suggested that the way the brain works is all about making the right connections, and that those connections can just as easily be made with silicon and wire as with living neurons and dendrites. Dubbed artificial neural networks (ANNs), these programs work in a way loosely modeled on the brain’s neural network. An artificial neuron has a number of connections, or inputs. To mimic a real neuron, each input is assigned a weight that indicates how important the incoming signal on that input is going to be. Using the weights, the network makes a decision, evaluates it against the desired result, and adjusts the weights of the inputs that led it astray. This process is called learning.
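A minimal sketch of this weighting-and-adjusting idea is the classic perceptron: a single artificial neuron whose weights are nudged after every wrong answer. The AND-gate training data and the learning rate here are illustrative choices, not anything specific to the systems discussed in this article:

```python
def neuron(inputs, weights, bias):
    """Weighted sum of inputs passed through a step activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Adjust the weights of inputs that contributed to wrong answers."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - neuron(inputs, weights, bias)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function from examples alone.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([neuron(x, w, b) for x, _ in data])  # → [0, 0, 0, 1]
```

Nothing in the code states what AND means; the behavior emerges purely from repeated weight corrections, which is the essence of connectionist learning.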
2010 — a series of experiments led to groundbreaking innovations that enable robots to lie. For good reasons, as the scientists from the Georgia Institute of Technology who created the first AI capable of lying initially stated. “We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered,” said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing. Such robots will be most valuable in the military (deceiving enemies in order to conceal bases and lay false trails) and in search-and-rescue operations (a robot may lie to calm a panicking victim and gain cooperation). The primary law for such robots is to deceive only when the benefit of the deception outweighs the harm, so they must know how to weigh the two.
As you can see, AI has come to cover more and more aspects of our lives, aiming in the future to change the way we live completely.
2016 — FindFace was created: a website that uses neural-net search to find members of the social network VK, recognizing a person from his or her photograph. The facial recognition technology was developed by N-Tech.Lab. It works in stages: examining a photo, detecting the boundaries of objects, semantic segmentation (by object type), saliency detection (looking for the features a person would notice), and recognizing human figures and faces. The network then examines the face alone in order to extract its distinctive features, which are compared with those stored in the program’s database to identify the person.
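The final matching stage can be sketched as comparing a face’s feature vector (an “embedding”) against a database of known embeddings. The 3-dimensional toy vectors, the names and the similarity threshold below are illustrative stand-ins for the high-dimensional features a real network such as N-Tech.Lab’s produces:

```python
import math

def cosine_similarity(a, b):
    """How close two feature vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(query, database, threshold=0.9):
    """Return the best-matching name, or None if nothing is close enough."""
    name, score = max(
        ((n, cosine_similarity(query, emb)) for n, emb in database.items()),
        key=lambda pair: pair[1],
    )
    return name if score >= threshold else None

database = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.8, 0.5]}
print(identify([0.88, 0.12, 0.25], database))  # → alice
```

The threshold matters in practice: without it, every query face would be “identified” as whoever happens to be least dissimilar in the database.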
2017 — Libratus, an artificial intelligence developed at Carnegie Mellon University, made history by defeating four of the world’s best professional poker players in a 20-day marathon competition called “Brains Vs. Artificial Intelligence: Upping the Ante” at Rivers Casino in Pittsburgh. “The best AI’s ability to do strategic reasoning with imperfect information has now surpassed that of the best humans,” said Tuomas Sandholm, the system’s developer.
This history, although brief, is quite impressive. With every example we become more convinced that artificial intelligence has indeed changed our lives. Now it is time to take the next step and use neural nets to help investors make better decisions, secure their future profits and come up with the best strategies. And, what is more important, to make such a system accessible to anyone.
2017 — Mirocana was created: a complex, self-reinforcing system based on deep-learning neural nets and other modern machine-learning models that predicts stock, currency and crypto-currency markets. It collects, stores, processes and analyzes huge volumes of financial data, and its self-learning core is designed to constantly increase the accuracy of its predictions. To make those predictions, it draws on an ever-growing set of strategies for interpreting financial data from hundreds of different sources.
Mirocana is an artificial-intelligence trading system that is capable of constant learning and is subject to no human limits, including those of time and stamina. This allows it to analyze more than 400 companies and 200 currency (as well as crypto-currency) pairs in real time. It can read new tweets in the blink of an eye and draw conclusions from breaking news and articles within seconds of their release. It bases its predictions on the market orders of more than ten thousand professional traders, analysts and hedge-fund managers, and it readjusts its forecasts every five seconds, taking a vast number of strategies into account.
Its deep-learning neural nets enable the program to find hidden patterns in market activity and to base its strategies not only on open data but also on its own insights. This is achieved through constant evaluation of the strategies, each of which is assigned a changing weight. When a strategy performs poorly, its weight is lowered; it may even turn negative when the strategy’s predictions consistently point an investor in the opposite direction.
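A speculative sketch of such a weighting scheme: each strategy’s weight is nudged by its recent hit rate, and a persistently wrong strategy drifts toward zero or below. The strategy names, update rule and learning rate are illustrative assumptions, not Mirocana’s actual code:

```python
def update_weights(weights, predictions, outcome, lr=0.2):
    """Raise a strategy's weight when it was right, lower it when wrong."""
    new = {}
    for name, predicted in predictions.items():
        hit = 1.0 if predicted == outcome else -1.0
        new[name] = weights[name] + lr * hit
    return new

def combined_signal(weights, predictions):
    """Weighted vote of all strategies: +1 means 'up', -1 means 'down'."""
    return sum(w * predictions[n] for n, w in weights.items())

weights = {"momentum": 0.5, "news": 0.5, "contrarian": 0.5}
history = [  # (each strategy's prediction, actual market direction)
    ({"momentum": 1, "news": 1, "contrarian": -1}, 1),
    ({"momentum": 1, "news": -1, "contrarian": -1}, 1),
    ({"momentum": -1, "news": -1, "contrarian": 1}, -1),
]
for predictions, outcome in history:
    weights = update_weights(weights, predictions, outcome)
print(weights)  # the always-wrong "contrarian" strategy ends up below zero
```

A negative weight means the strategy is still useful: the combined signal simply bets against it, which matches the behavior described above.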
In general, while average traders grow exhausted and make more mistakes working long hours, those same hours only make Mirocana smarter. What could matter more to businesspeople who run multiple projects yet still worry about their money? With Mirocana, high-quality, accurate predictions are ensured.
Most importantly, the project is always open to productive collaboration: any skilled data scientist can contribute code, get paid and make the AI more accurate. And this is just the beginning.
Find additional information at mirocana.com