THE CONCEPT OF IDENTITY FOR PATTERN RECOGNITION. — Feature detection versus feature extraction. — Learning. Effortlessly. — Part 2. ... [ Word Count: 2.500 ~ 10 PAGES | Revised: 2018.5.25 ]


 

Learn how it is that your brain can recognize features without any effort. Q & A.

READ THE FIRST PART AND SEE REFERENCES AT THIS LINK.

 

— 〈  1  〉—

``You can summarize the last post in words ... right?''

 
Yes: a lot of pattern recognition is like a child giving different uses to objects which they don't really know much about. And then later sometimes doing the reverse: inferring the object from the combination of uses.

The brain primarily operates so as to keep the organism it is part of alive.

It uses things as means to ends and some means are means to other means. And so on. Indeed it often completely ignores what it cannot use because of the scale or rate. Or simply because it has a mental model of the world where this thing has no use.

Uses can identify the thing; but more generally how a thing interacts with other things is what identifies it. We may feel we know what it is but it remains a black box that acts and is acted upon. One reason we like to pretend, as children do, that objects are like human actors with human attributes: we are mostly surrounded by black boxes.

Most things we think we know are really known only as black boxes. Some more useful than others. Good for various things.

The best nonmathematical further reading in plain English, for those new to the subject, is [PRI71] and [BOD06].

Now for example: most people know how to use a computer.

They just don't know how it works — and therefore what it is.

No: so far as they're concerned a computer is what they can use this or that way. Whatever they can use like a ``computer'' is a computer.

So you let them play with an unidentified something. It looks strange.

Well then they can't say right away what it is. And then later they tell you: Oh, it's a computer. Yeah, that was a computer.

But they still don't know what a computer is — not really. And it doesn't faze them one bit.

You think you know until you're asked to explain it: and then you recognize you don't know. That's just it: mostly you don't know.

Also you don't care that you don't know.

Think about an apple. I probably know more about an apple once I have tried to eat it and eaten it than I would if I were given the chemical composition of the apple. The chemical structure, once I've eaten it, is transparent: Oh, yeah, that's why it's edible.

Going about life you pick up most of the relevant information needed to classify what objects are without intentionally trying to get knowledge. Side effect. Which is usually why it's effortless, compared to a problem in mathematics, which you encounter for the first time and have to sit down and think about, and whatever you learn will come only by meditating on it, chewing it over, and thinking about it.

If you ask me what something is ... I'll probably say easy question, I know what it is ... so you ask me what ... and I find that what it is ... so far as I know ... is mostly a list of words ... which don't really refer to it ... and what the apple does or what somebody can do to it or with it ... what it can be used for. Its behavior.

(You say Apple, I think Green. But the apple in front of me is red. You know, however, you can bite it and eat it, you can make pie with it, if it sits too long it rots and then you can't eat it, you can cut it with a knife, so it's not very hard.)

Mostly you identify things by what you can do with them, what they can do to you, what you can use them for, and because you go about living your life, these HARD(apples)=1, EDIBLE(apples)=1, DOGFOOD(apples)=0, MELTSINOVEN(apples)=1, EXPENSIVE(apples)=0, FRIABLE(apples)=0, ... are picked up along the way. Not intentionally. And suddenly you see a blurry image and feel you know what it is and you're not wrong. Which is actually amazing. Considering how noisy this image is.
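To make that concrete: a minimal sketch in Python, assuming a made-up little table of use-predicates like the ones above. Identify the thing whose known uses best agree with the few uses you happened to notice. All the object names and predicates here are hypothetical.

```python
# A minimal sketch (not the author's method): identifying an object
# from a partial set of use-predicates, the kind of HARD(x), EDIBLE(x), ...
# facts the text says we pick up "along the way".
# Object names and predicates are hypothetical illustrations.

KNOWN_THINGS = {
    "apple":  {"HARD": 1, "EDIBLE": 1, "DOGFOOD": 0, "EXPENSIVE": 0, "FRIABLE": 0},
    "brick":  {"HARD": 1, "EDIBLE": 0, "DOGFOOD": 0, "EXPENSIVE": 0, "FRIABLE": 0},
    "kibble": {"HARD": 1, "EDIBLE": 1, "DOGFOOD": 1, "EXPENSIVE": 0, "FRIABLE": 1},
}

def identify(observed: dict) -> str:
    """Return the known thing whose use-predicates best match what we noticed."""
    def score(feats):
        # count agreements on the predicates we happened to observe
        return sum(1 for k, v in observed.items() if feats.get(k) == v)
    return max(KNOWN_THINGS, key=lambda name: score(KNOWN_THINGS[name]))

# A blurry encounter: all we noticed is that it's hard, edible, not dog food.
print(identify({"HARD": 1, "EDIBLE": 1, "DOGFOOD": 0}))   # -> "apple"
```

Roughly: the more predicates you have picked up by the way, the less it matters that any single observation is partial or noisy.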

This works because more often than not what something really is happens to be intimately tied up with what it can do and what you can do with it.

So to get performance like that from a computer is massive, massive labor. Somehow all that tacit knowledge has to get related. Which easily happens in people just by living for a few years in a real body.

Yes, you can train a computer to minimize error in reacting to pixels.

Say, this pattern of pixels is this and that with such a probability distribution.

But then you get a novel case. It's a little green, and looks round. Doesn't really fit any pattern. Too much noise.

You and I, however, looking at the CAPTCHAs, think, yeah, that's edible and cuttable. It's an apple.

We see a little bit of green, and roundish, and think edible and cuttable, by habit, and make the association.

In principle that can all be programmed. Much of the habit arises from actually operating with things while also being in all sorts of different proximities to them. Or simply distances from them. No effort.

The effort can be made. Somebody sits around and, having played with each of the things in the pictures, just tells the machine what each pattern of pixels is. Then the machine trains on the immense [images, labels] set.
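What that training amounts to, as a rough sketch and nothing more: a tiny classifier over raw pixels, fit to labeled examples by gradient descent. The data below is synthetic; in practice the expensive part is the humans supplying the labels, not this loop.

```python
# A minimal sketch of "[images, labels]" training: logistic regression over
# raw pixel vectors, minimizing error on labeled examples. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 64                        # 200 labeled "images" of 8x8 = 64 pixels
X = rng.normal(size=(n, d))          # pixel intensities (made up)
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)   # the labels a person would have supplied

w = np.zeros(d)
lr = 0.1
for _ in range(500):                 # minimize cross-entropy by gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= lr * X.T @ (p - y) / n

pred = (1.0 / (1.0 + np.exp(-(X @ w))) > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```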

But the computer has a cost to acquire knowledge. Therefore, for its budget, it will acquire less knowledge. Most of our knowledge is free and we'll simply have more of it if we're active for a long time.

Suppose however a massive team tells the computer everything they personally know by experience. They code a Dreyfus common sense ontology and train the computer.

Problem solved right? No: not really.

Observe the following subtlety.

It even applies to hands-off feature extraction.

We see an object. Not from very near, not from very far.

But from many angles.

You move your head, look around, see how other things act on it, how it acts on them, what it does to light and what light does to it, then identify it. That's enough information to identify it.

Having identified it we move away even further and look back and know there is ... X.

Maybe we don't move away ... just the noise increases. But there's interactivity.

When all we really see is a brown dot. But we say: there is ... X.

All we see is a few pixels. You cannot get X from those pixels. X is not in the pixels.

Nor was X in any of the images we had of it on our retinas. We could move around and wait and watch the light change and the picture change and then we can guess X from our habits and our history.

A machine learning system that associates a brown dot with X is going to make too many errors. ... But if it doesn't associate the brown dot with X ... it's going to make too many errors. ... Either way, it's going to make too many errors.

I mean specifically errors in some very important cases. Many images computers identify easily, correctly, and with less error than people.

So then you program a combinatorial exclusion: if there's a brown dot ... but also a red dot ... it's not X.

Which is a pain ... because there's an infinity of situations where a brown dot is not ... X.
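A sketch of the pain, with entirely made-up scene descriptions: every exclusion is one more rule somebody has to sit down and write, and the list has no end.

```python
# A minimal sketch of the "combinatorial exclusion" problem: hand-coded
# exceptions to "brown dot => X" never run out. Everything here is a
# hypothetical illustration, not a real vision pipeline.

def looks_like_X(scene: set) -> bool:
    if "brown dot" not in scene:
        return False
    # each exclusion rule someone had to write by hand:
    if "red dot" in scene:
        return False
    if "brown smear" in scene:
        return False
    if "shadow of a branch" in scene:
        return False
    # ... and an unbounded list of further situations where a brown dot is not X
    return True

print(looks_like_X({"brown dot"}))                # True
print(looks_like_X({"brown dot", "red dot"}))     # False, by rule
print(looks_like_X({"brown dot", "mud splash"}))  # True -- no rule for this yet
```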

The issue is that it must get knowledge at a cost and usually only has one image with no interactivity from which to extract information that is otherwise destroyed by the noisiness of the image.

We can solve CAPTCHAs because our pragmatic brain and interaction almost certainly give us enough information to identify just about anything. For a computer most things are not going to be in any database. It costs real effort to get ontological information organized and into a training database.

And we identify the unknown from correlation of tacit parameters in our much vaster databases.

Then in noisy situations, being able to change the environment and get many images of the same thing, however noisy, we can figure out pretty quickly and pretty easily what we are looking at, without any single image statistically having this information in it.
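A sketch of that last point, with made-up numbers: a faint signal that no single noisy view shows reliably comes right out when you pool many views of the same thing.

```python
# A minimal sketch: no single noisy glimpse identifies the object, but
# pooling many glimpses of the same thing recovers the faint signal.
# The numbers are invented for illustration only.
import numpy as np

rng = np.random.default_rng(1)
true_signal = 0.3                                        # faint signature of X
views = true_signal + rng.normal(scale=1.0, size=200)   # 200 very noisy glimpses

print("typical single view:", views[0])     # dominated by noise
print("pooled estimate:    ", views.mean()) # close to the true 0.3
```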

 

— 〈  2  〉—

``Efficient but not perfect?''

@firedream writes: ``This clearly shows human brain is most efficient but not perfect. It has evolved to quickly analyze data and act immediately. That is why we have shortcuts in our brain, which also gives it an advantage in survival but with many vulnerabilities. Artificial intelligence systems don't have to think like us. Saying they do is like saying flapping like birds is the only way to fly.''

Exactly.

Submarines don't swim. They just move underwater. Birds fly. Aircraft do not. Not in the strict and old sense of the word. They just move high above in the air. Very, very quickly.

There exist maybe several thousand known different cognition systems. Some are better than others in various contexts.

Our human one just has a few very powerful features. Which are often ignored or taken for granted. Combined with our mobility. And with our manual dexterity.

Michael Arbib thinks our speech centers are where our brain's gripping centers are, because speech arose from gestures and imitation: we are predisposed to imitate the manual, means-ends operations of others. So we learn to imitate the gestural and vocal actions of others. Lo! Speech. Also why languages are pragmatic in their meaning and operation, as tools for solving problems without manipulating physical things ... only names we loosely couple with things.

 

— 〈  3  〉—

``So what does this mean for us people who work?''

 
Good news. Bad news. Which do you prefer first?

The good news? For us people? Most useful image recognition has little to do with the images themselves.

So for about half of the useful work there is no competition.

Not with people? Correct.

For further reading check out ``embodied cognition''.

There's Piagetian learning, and a history of bodily interaction with the thing, to figure out what it is. Not enough information in the image itself. You just moved away and associated by habit various blurry images with your interactive experience with the thing. What the thing was you identified by using it. (Max Wertheimer said you need to participate and act on things, use them, to learn quickly.)

That's how Manuel Blum came up with modern CAPTCHAs. Images are hard to sort because there's often just not enough information in the images.

So the good news is we're not going to be seriously competing with AI anytime soon. AI today is not embodied. The good news is that it's marketing.

Machine learning is more or less marketing today. The early research from the '40s, '50s, and '90s is starting to get rebranded as new hot stuff and people are using it. Computers were not fast enough to really do anything when most of the ideas were published.

Speaking of which: Manuel Blum thinks the consciousness problem is actually solved. He gave a lecture on it somewhere.

And the bad news? For us people? The bad news is that it's marketing.

Marketing works.

We hear about this or that company ``Using Machine Learning blah blah'' to ``impartially'' do ``stuff'' ... and ... well ... it's a neural net ... plus a thousand or so people correcting it manually. They do most of what the so-called AI is presented as doing.

That's one reason some of these internet space companies have so many employees. (You don't need them if you really have useful AI.)

The word AI is also vague.

Earlier researchers considered text recognition as AI. Then it was not. When it became easy.

Companies can pretend something like censorship is a ``bug'' in the learning. When really it was completely intentional. That's more or less what is happening. Their so-called AI is going to be primarily the go-to excuse companies use in the next couple of years to shirk responsibility. Or to do something they would not want to admit their employees are doing.

Just like any database with electronic signing by the parties using it is ``a blockchain'' ... and just about everyone is ``developing for the blockchain'' ... except their ``blockchain'' is not a ``decentralized blockchain'' ... which is to say ... it's not really a blockchain. None of the benefits of blockchain technology. Blockchain has benefits only when it means decentralized blockchain.

REFERENCES

 
[BOD06]   Margaret BODEN, Mind as Machine: A History of Cognitive Science, vols. 1–2, Oxford: Oxford University Press, 2006.

[PRI71]   Karl PRIBRAM, Languages of the Brain, Englewood Cliffs: Prentice Hall, 1971.

 

ABOUT ME

I'm a scientist who writes fantasy and science fiction under various names.

            #thealliance     ◕ ‿‿ ◕ つ

      #writing #creativity #science #fiction #creative #novel #publishing
              #thealliance #isleofwrite #thewritersblock #nobidbot #blog
                        #technology #scifi #future #history #life #philosophy
                            CHECK OUT: @TRIBESTEEMUP   AND   @SMG

Word count: 2.500 ~ 10 PAGES   |   Revised: 2018.5.25

 

UPVOTE !     FOLLOW !

 
|   SCIENCE FICTION & FANTASY   |   TOOLS & TECHNOLOGY   |
|   PRACTICAL THINKING — LATEST — RECENT — POPULAR   |

©2018 tibra. Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Text and images: @tibra. @communicate on minds.com


Somewhere at the very top of the text above I put a tag: — Revised: Date.

And I did that why? . . . Often I'll later significantly enlarge the text which I wrote.

Leave comments below, with suggestions.
              Points to discuss — as time permits.

Finished reading? Well, then, come back at a later time.

Meanwhile the length may've doubled . . . ¯\ _ (ツ) _ /¯ . . .


2018.5.25 — POSTED — WORDS: 2.500.

 

