Computer Intuition

in #science · 8 years ago (edited)

Like me, perhaps, this is an article that speaks of a "possible future" in a grandiloquent tone. In my defense, I'll risk pointing out that we're in the middle of a new technological and scientific leap. You, just stick to the facts; the rest is debatable.

I'd like to start this story back in 1997, when the chess player and former world champion Garry Kasparov lost a six-game match (3½ to 2½) against Deep Blue, a supercomputer developed by IBM specifically to play chess. Deep Blue's method for beating Kasparov was brute force: exploiting the computer's ability to analyze 200 million possible moves per second (plus a bit of manual help that some people still argue about), against roughly 5 moves per second on Kasparov's side. Yes, the computer won, but it's evident that there's something about human thought and learning that completely surpasses the computer's equivalent: Kasparov doesn't waste time analyzing absurd moves; he understands the game in a way a computer could not.

To make up for this little inefficiency of needing to analyze 200 million moves against 5 in order to win, Deep Learning, a branch of Machine Learning, has been under development for some years now. It is based on a kind of intelligence that emulates learning by shaping different layers of neural networks. The idea is to expose the computer and its neural network to a learning situation (say, playing chess), and the network remodels itself as it acquires "new experiences" (quotation marks to stay politically correct, and partly because a program acquiring experience without quotes makes me very uncomfortable; syntax error). Evidence shows that the performance of such algorithms improves as they learn. One deep-learning chess engine (preloaded with the rules of chess and training games) was left playing against itself and, in 72 hours, reached International Master level. Level: Master. International. In. 72. Hours. Imagine giving it a yo-yo. Schoolin'.

A learning algorithm that does not require preloaded data is called unsupervised learning. Its counterpart is called (hold it! don't spoil the surprise!) supervised learning, which does require initial training data to work. Later, as it plays, the machine learns; that is, it remodels its neural networks in its favor. There's a third kind, reinforcement learning, which works on the notion of a reward: you bribe the computer into working better.
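To make the distinction concrete, here is a minimal toy sketch in Python of the last two flavors. The data, learning rates, and the little reward function are all invented for illustration; this has nothing to do with any real chess engine.

```python
import random

# Supervised: we are given labeled examples (x, correct answer) and
# nudge a single weight until predictions match the labels.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # secretly y = 2*x
w = 0.0
for _ in range(200):
    x, y = random.choice(data)
    error = w * x - y
    w -= 0.05 * error * x                     # gradient step on squared error
print(f"supervised: learned w ~ {w:.2f} (true value is 2)")

# Reinforcement: nobody tells us the right answer; we only get a reward
# after acting, and we shift preference toward rewarded actions.
prefs = {"a": 0.0, "b": 0.0}
def reward(action):                           # hidden from the learner
    return 1.0 if action == "b" else 0.0
for _ in range(200):
    action = max(prefs, key=prefs.get) if random.random() > 0.1 \
             else random.choice(list(prefs))  # mostly greedy, 10% exploration
    prefs[action] += 0.1 * (reward(action) - prefs[action])
print(f"reinforcement: learned preferences {prefs}")
```

(Unsupervised learning, which finds structure in data with no labels or rewards at all, is harder to show in a couple of lines, so it's left out of the sketch.)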


Here's where your jaw may drop.

With similar algorithms, Google's AI is capable of recognizing patterns in images: it can recognize a dog much like we would. Preload a bunch of dog pictures, search Google for "dog", and it will show you all the dogs it can find, even if the filenames have nothing to do with dogs. (It can also fail, as it did when Google Photos infamously tagged a photo of a Black couple as "gorillas".)
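As a taste of how this looks in practice, here's a minimal sketch; it is not Google's system, just the same general idea, and it assumes PyTorch and torchvision are installed plus a local file named dog.jpg. A network pretrained on labeled photos assigns the most likely class to a new image, filename be damned.

```python
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT            # pretrained on ImageNet
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()                    # the matching preprocessing

img = preprocess(Image.open("dog.jpg")).unsqueeze(0) # batch of one image
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]
top = probs.argmax().item()
print(f"top guess: {weights.meta['categories'][top]} "
      f"({probs[top].item():.0%} confident)")
```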

But wait, this is nothing. To understand the next step, we first need to understand just how hard the thing I claim is hard actually is; I need to back it up! Let's analyze a situation that is deeply counter-intuitive for us humans: exponential growth. Take a second to answer this question for yourself: how high do you think a common sheet of paper would reach after folding it in half 50 times?

Exponential growth appears whenever you multiply something by itself a certain number of times. For example, 2 multiplied by itself 10 times is 1024; this is 2^10 (two, times two, times two... ten times... some may laugh at this explanation, but I know people who don't know it!). The thing about exponential growth is that the speed at which the numbers grow is... crazy. If you grab a 0.1 mm thick sheet of paper (a common sheet) and fold it in half, the thickness is 0.2 mm; do it again, 0.4 mm, and so on. Do it 10 times and the math is 0.1 mm multiplied by 2 ten times: 0.1 * 2^10, which is about 10 cm. Do it 50 times and the result is roughly 75% of the distance from the Earth to the Sun. Your intuition may say "that's impossible", but do the math and you'll see: 0.1 mm multiplied by 2^50 gives about 112.6 million km. Crazier still: fold it 33 more times, 83 in total, and you'll cover the full length of our galaxy (about 100,000 light years); fold it 103 times in total (20 more) and you'll exceed the diameter of the observable universe (0.1 mm * 2^103 is roughly 107 billion light years, versus about 93 billion).
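If you don't trust the arithmetic, a few lines of Python will check it (assuming, charitably, a sheet that could actually survive being folded that many times):

```python
# Thickness after n folds = 0.1 mm * 2**n; the constants below are
# just unit conversions (1 light year ~ 9.461e15 m).
THICKNESS_MM = 0.1
MM_PER_KM = 1e6
MM_PER_LIGHT_YEAR = 9.461e15 * 1000   # metres per light year -> mm

for folds in (10, 50, 83, 103):
    t_mm = THICKNESS_MM * 2 ** folds
    print(f"{folds:>3} folds: {t_mm / MM_PER_KM:.3e} km"
          f"  ({t_mm / MM_PER_LIGHT_YEAR:.3e} light years)")
```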

Now that we know exponential numbers climb faster than the new hire who laughs at all of the boss's jokes, let's estimate how hard it would be for a computer to store all possible chess games. Say that, on average, a chess player has about 30 moves to choose from each turn, and an average game has 40 moves per player. That gives 30^80 (30 possible moves, 40*2 half-moves per game), or roughly 10^118 (a 1 followed by 118 zeros). To bring that down to mortal terms, a good estimate of the number of particles in the universe is about 10^81. So the phrase "there are more possible chess games than atoms in the universe" is true; many more, 10^37 times more. If I wanted to store every possible chess game in an atom, I would need 10^37 universes. There is no possible way to store that amount, never mind the time needed to check which game is better than another, something more than useful when facing Kasparov.
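The same back-of-the-envelope numbers, straight from the assumptions above (Python handles the full integer without blinking):

```python
import math

moves_per_turn = 30           # average legal moves in a position
half_moves_per_game = 80      # 40 moves per player

chess_games = moves_per_turn ** half_moves_per_game
print(f"possible chess games ~ 10^{math.log10(chess_games):.0f}")   # ~ 10^118
print(f"universes needed, one game per atom: "
      f"~ 10^{math.log10(chess_games) - 81:.0f}")                   # ~ 10^37
```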

In March 2016, a Google AI built with Deep Learning won 4 out of 5 games of a millennia-old board game called Go. Its rules are simple (there are almost none) yet it is very complex to play well; the usual board is a 19-by-19 grid (361 intersections), one player against another, white stones versus black. The game is pretty much: place them wherever you like. The objective: capture the opponent's stones, or surround and close off empty territory. It is a very intuitive game, and many of its strategies are subjective; there are no particular "recipes" as in chess (control of the center, mobility...), nor standard openings or endgames the way chess has them. The game has been refining itself for over 2,500 years. The possibilities in this game are infinite (well, finite, but unreachable).

So: 361 intersections, a player can place a stone on any free one, and an average game lasts around 250 moves. The approximate number of possible Go games is therefore 220^250, which is about 10^585. (220 and not 361 because as the game progresses there are fewer free spots left to play on; do the math or believe me; and note that the order of the moves matters.) If we again wanted to store every possible game in an atom, we would need around 10^504 universes. If that number looks overwhelming, welcome to the club: nobody knows what it is to have 10^504 of anything.
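And the same estimate for Go, working in powers of ten because the number itself is absurd:

```python
import math

choices_per_move = 220   # free intersections available, on average
moves_per_game = 250

log10_go_games = moves_per_game * math.log10(choices_per_move)
print(f"possible go games ~ 10^{log10_go_games:.1f}")                    # ~ 10^585.6
print(f"universes needed, one game per atom ~ 10^{log10_go_games - 81:.1f}")
```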

AlphaGo (the machine that won at this game) runs on an algorithm that is not specialized: it can learn anything (it's general-purpose). It uses two kinds of neural networks: policy networks (which heuristically rank candidate moves and discard the useless ones, saving resources) and value networks (which evaluate how good the resulting position is). AlphaGo trained for nearly a year playing against itself.
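This isn't AlphaGo's actual code (the real system combines these networks with a Monte Carlo tree search), just a toy sketch of the division of labor: a "policy" narrows 361 candidate moves down to a handful, and a "value" picks among what's left. Every function here is a made-up stand-in.

```python
def policy_prior(position, move):
    # Stand-in for a trained policy network: a deterministic pseudo-score
    # in [0, 1) saying how promising this move looks in this position.
    return (hash((position, move)) % 1000) / 1000.0

def value_estimate(position):
    # Stand-in for a trained value network: a pseudo-score in [-1, 1]
    # for how good the whole position is for the player to move.
    return (hash(position) % 2001) / 1000.0 - 1.0

def choose_move(position, legal_moves, top_k=5):
    # 1) Policy step: keep only the top_k most promising candidates,
    #    instead of evaluating every one of the ~220 legal moves.
    candidates = sorted(legal_moves,
                        key=lambda m: policy_prior(position, m),
                        reverse=True)[:top_k]
    # 2) Value step: play each candidate one move deep and keep the one
    #    whose resulting position scores best.
    return max(candidates, key=lambda m: value_estimate(position + (m,)))

if __name__ == "__main__":
    empty_board_moves = [(row, col) for row in range(19) for col in range(19)]
    print("chosen move:", choose_move(position=(), legal_moves=empty_board_moves))
```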

We should start accepting that machines may soon learn skills we think of as human: identifying images, sound, handwritten text. Even harder things, like playing Go better than any human ever has.
We only need a large company to start building them, and we will stop saying "if" and start saying "when".

Today, computers can efficiently process more possibilities than there are atoms in many universes. Who are we to stop them?


Very well written. I think it's astounding how far machine learning has come just in the last YEAR. It's fast approaching the point where computers are better able to diagnose disease and detect cancer earlier: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.8502&rep=rep1&type=pdf

It won't be long before we will have to contend with machines, or programs at least, making discoveries on par with Nobel prize winners. Who gets the prize then? The coder who made the program, who could have little scientific background, the users who loaded the initial data, or the program that made the discovery? We're fast-tracking ourselves out of most high-paying jobs that used to rely on years of schooling, and that could be a much bigger threat than factories "stealing" the jobs of manual laborers. We long ago accepted that a machine could out-perform our bodies, but how easy will it be to accept it out-performing our minds?

Dangerous ground you're touching there, since you elegantly avoid the fact that advanced AIs could develop self-consciousness. It's a natural human fear to be surpassed by our creations; we only keep our superiority through the "I can pull the plug whenever I want" method.

But what will happen when machines become a VITAL part of humankind's survival, and "pulling the plug" would mean millions of people starving?

...

Thanks for the idea, now I've something new to add into my "to do" list of articles :D

Yes, but before we have advanced, noticeably conscious AIs, we will have to contend with deeply-learned algorithms replacing most if not all "higher thinking" jobs. Engineering, call center service, medical diagnostics, teaching: all could easily be replaced by something that has learned but isn't necessarily "intelligent" per se. Sure, this will be a stepping stone toward a true AI, but we will have to contend with machines being better than us long before we have to worry about them being a threat to us. That's the moral and philosophical break we will have to get over first.

If you ask me, I'd rather be operated on by a robotic surgeon than by a human one who may have been partying all night a few hours earlier.

Just as the Industrial Revolution made a lot of people lose their jobs (forcing them to adapt to new professions), we will have to do exactly the same thing.

One step at a time.

As I state at the start:
"Just as I am, perhaps this is an article that speaks about a "possible future" under a grandiloquent tone. I risk, in my defense to mention that we're in the middle of a new technologic and scientific leap. You, just stick to the facts; the rest is debatable."

Oh, I 100% agree. I'd also rather have all the vehicles drive themselves. The issue is that we can only remove so many jobs, without a functioning and intelligent political system, before we have a complete and utter economic collapse, paired with a huge depression and possibly civil war. If our governments don't wake up to these things, among other issues, no amount of awesome technology will stop us from collapsing in on ourselves.

Nice article, thanks!
"Who are we to stop them?" - since we have created them, doesn't this mean our capacity is still outreaching the ones of the computers or do you think our own invention will surpass us by far?
I think there are several techniques with which you can still beat computers at calculating. Sounds weird, but https://en.wikipedia.org/wiki/Shakuntala_Devi is an example.
And as you put it, first the computers brute-forced the matches, then they went ahead calculating with different learning techniques. In the end it will not be about raw calculating power but about the strategy chosen for where to spend those calculations.

Brute-forcing is already an archaic method of processing variables.
It's the astrolabe and clock versus the GPS.
We still know how it's made, but it's no longer efficient.

"Devi included calculating the cube root of 61,629,875 (395) and the seventh root of 170,859,375 (15)"
That's WAYYYY less than 10^585; so much less that I can't think of an adequate proportion. Yes, it's an amazing skill, but she is to computers at math what Phelps is to fish at swimming.
