Neural Networks and TensorFlow - Deep Learning Series [Part 5]


[Image: Neural Networks and TensorFlow - 5 - Backpropagation]


In the previous lesson we discussed the gradient descent optimizer. This optimizer adjusts the parameters of our model, typically the weights and biases, to minimize the loss (cost) function and thereby improve the performance of the algorithm.
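As a rough illustration (not code from the lesson), a single gradient-descent update nudges each parameter a small step against its gradient. The numeric values below are made up purely for demonstration:

```python
# One gradient-descent update for a single weight and bias.
# The numbers here are illustrative assumptions, not real training values.
learning_rate = 0.01

w, b = 0.5, -0.2           # current parameter values
grad_w, grad_b = 1.4, 0.6  # gradients of the loss with respect to w and b

# Step each parameter a little in the direction that lowers the loss.
w = w - learning_rate * grad_w
b = b - learning_rate * grad_b
```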

So, in this lesson we're going to look at backpropagation, or how the optimization process takes place. When the computation reaches the output layer, we compare the predicted output (obtained through forward propagation) with the real output (from our labels). That comparison, measured by the cost function, gives us the loss.

Then, the error is back-propagated through the network and the weights and biases are adjusted to minimize the loss (cost) function. Reducing this loss brings the predictions closer to the real output, and so the performance of our algorithm improves.
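Since the series uses TensorFlow, here is a minimal sketch of that forward-pass / loss / backpropagation loop written with the TensorFlow 2 eager API (tf.GradientTape); the video may use a different API version, and the toy data and single-weight model below are assumptions for illustration only:

```python
import tensorflow as tf

# Toy data roughly following y = 3x + 1 (illustrative assumption only).
x = tf.constant([[0.0], [1.0], [2.0], [3.0]])
y = tf.constant([[1.0], [4.0], [7.0], [10.0]])

w = tf.Variable(0.0)  # weight
b = tf.Variable(0.0)  # bias

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

for step in range(200):
    with tf.GradientTape() as tape:
        y_pred = w * x + b                        # forward propagation
        loss = tf.reduce_mean((y_pred - y) ** 2)  # compare prediction with labels (MSE cost)
    # Backpropagation: gradients of the loss with respect to the parameters
    grads = tape.gradient(loss, [w, b])
    # Gradient descent step: adjust the weights and bias to reduce the loss
    optimizer.apply_gradients(zip(grads, [w, b]))

print(float(w), float(b))  # should move toward 3 and 1
```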

Please watch the full lesson below for a slightly more technical explanation of backpropagation, which is a crucial concept in deep learning and neural networks.



To stay in touch with me, follow @cristi


Cristi Vlad, Self-Experimenter and Author


Basics, excellent!

At some point, I would LOVE it if you could do a video on how to build a network using LSTMs at a low level, like with Numpy and stuff. I use them all the time, but I can't wrap my head around how they do the unrolling trick or compute the gradient. Thanks!

That is definitely in the works. We'll go through convolutional neural networks first.

