The Machine Learning Myth


As with any field, AI and machine learning have accumulated a lot of myths.

Some of these myths are based on outdated information and will eventually fade as industry practices change. Others are toxic and limit the advancement of the field.

However, the most toxic myth, the one currently crippling new practitioners and thus stifling creativity in the field, is the myth about how machine learning should be learned.

The myth states that you need to know all the math behind machine learning algorithms before you can use them.

To me, this is like saying that you need to know the math behind the heat dissipation of your computer's CPU in order to use it. Sure, that math may help you troubleshoot your computer or design new CPUs.

But if you just want to use the computer, having to work out how the heat equation applies will significantly slow you down at first without giving you much advantage.

Furthermore, being faced with long, somewhat scary-looking equations on page 1 will discourage most people from even trying, for fear of what may be waiting on page 2, or page 10.

The widespread belief in this myth, and the attempts of elitist, experienced machine learning practitioners to uphold it, ultimately trap creativity and ingenuity and limit the field as a whole.

For the field to continue to flourish, this myth must be put to rest. That requires a coordinated effort by seasoned practitioners, educators, and learners to stop demanding a complete understanding of the math behind the algorithms before learners are allowed to play around with them in a library.

This is not to say the math is unimportant; it is very important and useful. Just as the math behind a CPU's heat dissipation matters if you want to troubleshoot it or invent new CPUs, the math behind machine learning algorithms is vital for the continued advancement of the field.

But the math should be used to help understand the theory behind existing practical knowledge, not the other way around.
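As a concrete illustration of "playing around with the algorithms in a library": here is a minimal sketch assuming the widely used scikit-learn library and its bundled iris dataset (neither is mentioned in the post itself). The point is that a working model fits in a few lines, with the optimization math hidden behind a fit/predict interface.

```python
# A hypothetical first experiment: train and evaluate a classifier
# without touching the underlying math. Assumes scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small, bundled example dataset.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The gradient-based optimization happens inside the library.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Mean accuracy on held-out data.
print(model.score(X_test, y_test))
```

A learner can swap `LogisticRegression` for another estimator and rerun this loop long before studying the equations behind either; the math becomes useful later, when explaining *why* one model outperforms another.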


Yours is a very applied approach; I would say that only about 5% of all AI engineers know what PAC learning is. But of course it's they who develop those huge frameworks and libraries like tensorflow, keras, or caret.
It depends on what you want. Do you want to gain more insight into the black box called nature, or do you just want it to provide good predictions without knowing why your model makes them?
Especially in algorithmic modelling, it takes significant extra effort to understand the underlying principles of probability theory.
The huge benefits you get out of it:

  • brain training, and
  • an exact bound on how large your dataset must be to guarantee a given error rate with a certain probability.

It's worth it. Never give up on math.
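For readers curious what the guarantee mentioned in this comment looks like, here is a sketch of the standard PAC sample-complexity bound for a finite hypothesis class (this formula is background knowledge, not something stated in the post): to output a consistent hypothesis with true error at most $\epsilon$, with probability at least $1-\delta$, it suffices to have

```latex
% PAC bound, finite hypothesis class H, realizable case:
m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)
```

where $m$ is the number of training examples. This is exactly the kind of "how big must my dataset be" number the commenter is referring to.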

I like your posts. They inspire me. ...second day for me on this platform. Just great to meet some fellas here.
