Perceptrons To A.I. To Common Sense

in philosophy •  2 months ago 

There's much ado about artificial intelligence, not only because of the threat it poses to jobs that currently require a higher education, but also because of the deeper fear that A.I. will somehow take over, or that it will somehow prove that the human mind and consciousness can be explained in strictly materialistic terms.


ai_small.jpg

Image by Mike MacKenzie - source: Flickr

Tech leaders like Elon Musk and Bill Gates have often warned against A.I. Some of the most popular modern myths, as seen in films like Terminator, I, Robot and Spielberg's A.I., play into that same fear. Westworld, the movie as well as the series, examines in great philosophical detail the ethical and moral complexities that arise when machine intelligence displays human traits, and the difficulties of determining whether or not we've passed the "uncanny valley" phase: how can we determine if an A.I. really feels the emotions it displays? And if we are ever able to separate mimicked behavior from real behavior, will we then have to grant A.I. the same human rights we now reserve exclusively for our species?

My personal opinion on this is somewhat ambiguous; I can understand the fears, all of them, but on the other hand I'm fully aware that we don't even know how consciousness works. We don't even know what "intelligence" is exactly, or how to quantify it; there's a lot to be said against the generally accepted measurement of Intelligence Quotient, or IQ, but there's also a lot to be said for it. I believe that our efforts to create better and better A.I. not only have the potential to make our lives better, but also to help us understand these phenomena; they could potentially even put an end to the age-old mind-body problem. The fear of losing our special status as the only creatures with a soul might keep us from following this particular path of discovery and self-discovery, which I would regret.

The development of artificial intelligence based on mimicking the human neural network is almost as old as computers themselves. One of the first efforts in this direction was the perceptron, a simple model of a biological neuron in an artificial neural network. Invented by psychologist and programmer Frank Rosenblatt, the perceptron immediately ignited a heated discussion within the A.I. community, and that discussion goes on to this day:

In a 1958 press conference organized by the US Navy, Rosenblatt made statements about the perceptron that caused a heated controversy among the fledgling AI community; based on Rosenblatt's statements, The New York Times reported the perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence. [...] The perceptron is a simplified model of a biological neuron. While the complexity of biological neuron models is often required to fully understand neural behavior, research suggests a perceptron-like linear model can produce some behavior seen in real neurons."
source: Wikipedia
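The perceptron described above is simple enough to sketch in a few lines of code. Below is a minimal illustration (the function names and toy task are my own, not from the post or the video): the model computes a weighted sum of its inputs, applies a hard threshold, and — following Rosenblatt's learning rule — nudges its weights whenever it misclassifies a training example. Here it learns the logical AND function, a linearly separable problem a single perceptron can solve.

```python
def perceptron_train(samples, epochs=20, lr=0.1):
    """Train a single perceptron with Rosenblatt's learning rule."""
    n = len(samples[0][0])
    w = [0.0] * n          # one weight per input
    b = 0.0                # bias term
    for _ in range(epochs):
        for x, target in samples:
            # weighted sum of inputs, passed through a hard threshold
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - y
            # nudge weights toward the target only when we misclassify
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def perceptron_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy task: the logical AND function, which is linearly separable.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = perceptron_train(data)
print([perceptron_predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

A single perceptron can only draw one straight dividing line through its input space, which is exactly why stacks of many such units, plus a lot of data, are needed for anything like face recognition — and why what those stacks actually learn becomes so hard to interpret.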

The discussion between scientists Sean Carroll and Melanie Mitchell in the linked video describes the back and forth between those who believe in Rosenblatt's promise of walking, talking, seeing, writing and self-reproducing A.I. and those who do not. It's enlightening and sobering at the same time to hear scientists themselves admit that they don't fully understand how a self-learning neural network accomplishes what it accomplishes, due to the complexity generated by the computer's ability to work with huge amounts of data at lightning speed. Melanie Mitchell says about facial recognition neural networks: "You train it on millions of faces, and now it's able to recognize faces, or certain people. So, we humans look at it and say 'it's good at recognizing faces' but you don't actually know for sure that's what it's recognizing."

A.I. has surpassed human beings only in situations of very limited scope, like the games of chess and go. This is impressive of course, but we're talking about a very finite set of rules and a very constricted playing field, not comparable at all to the real-world situations human or animal consciousness has to negotiate constantly. A neural network trained to recognize fire trucks fails completely when a fire truck is placed in an unnatural position with Photoshop, like flying in mid-air; changing one specific aspect of the object causes the A.I. to fail in situations where the human brain would have no problem at all recognizing what it's looking at. Some scientists in the 1950s and 1960s claimed that if A.I. were able to play chess at the level of a grandmaster, we would be done, A.I. would then be "complete"; we know now that nothing is further from the truth, and Carroll even suggests that narrow games like chess and go were precisely the first places where we should have expected computers to surpass humans.

Through discussing the difficulties we still have in making self-driving cars really reliable, we come to the essential point of "common sense" and the inability of A.I. and self-learning neural networks to develop anything that comes even close to it. Common sense can be seen as the countless things we humans can do without even thinking about them. One of the problems with self-driving cars, as opposed to human brains, is their inability to predict human behavior in the multitude of circumstances they encounter, something we are specialized in; the same goes for facial recognition, which can be fooled if you know what you're doing. And note that all we're talking about here is the simulation of one particular section of the brain, the visual cortex, as it relates to behavioral characteristics; in the human brain there's so much more that guides our behavior, which runs on auto-pilot 95% of the time. There's no solution in sight that will enable us to mimic, in an artificial neural network, the world-model as it exists in our consciousness.

Don't interpret this discussion as saying that we'll never be able to create a semblance of the human mind, or even something that's effectively on the same level. It's an exciting discussion that delves much deeper into the moral and philosophical ramifications, as well as the surprising complexity of something as common as common sense: things we find easy, like recognizing faces, are hard for A.I., and conversely, things we find hard, like playing chess or doing complex calculations, are easy for A.I.; learning from huge amounts of data won't develop that common sense... until now at least ;-) They touch on the dangers posed by the combination of A.I. with deepfakes, the uneasy realization that persons at work in a kitchen are more readily recognized as women, or that black persons are more likely to get hit by a self-driving car, all due to the data that lies at the foundation of these self-learning neural networks, and the possibility that A.I. could lead to the kind of intelligence needed to really understand the foundations of quantum mechanics. It's worth the 1 hour and 27 minute duration (skip, if you want, the annoying advertisements from the 16:11 mark until 17:17, and from 35:05 until 36:14).


Mindscape 68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense


Thanks so much for visiting my blog and reading my posts dear reader, I appreciate that a lot :-) If you like my content, please consider leaving a comment, upvote or resteem. I'll be back here tomorrow and sincerely hope you'll join me. Until then, keep steeming!


wave-13 divider odrau steem

Recent articles you might be interested in:

Latest article >>>>>>>>>>> Ben's Feelings
Dolly Parton; Marxist Icon?
Narrow Minded Sheep
The UBI Trap
Eternal Unity
Thanos In Real Life
Good Old Bernie


Thanks for stopping by and reading. If you really liked this content, if you disagree (or if you do agree), please leave a comment. Of course, upvotes, follows, resteems are all greatly appreciated, but nothing brings me and you more growth than sharing our ideas. It's what Steemit is made for!

I am a proud helpinaut! @Helpie is looking for new members! Helpie has been growing nicely and we are always on the lookout for new valuable members. We are very supportive and community oriented. If you would like to be scouted for @helpie , please drop a comment on THIS POST or contact @paintingangels on discord at paintingangels(serena)#3668.


Just for Full Disclosure, I'm invested in these crypto-currencies:

Bitcoin | Litecoin | EOS | OmiseGo | FunFair | KIN | Pillar | DENT | Polymath | XDCE | 0x | Decred | Ethereum | Carmel | XYO


@helpie is a WITNESS now! So please help @helpie help you by voting for us here!


@tipu curate

Wow... Thank you so much @futuremind; I very much appreciate this awesome gesture my friend :-)

You're welcome friend. Very nice job outlining your thoughts on AI:)
