Neural Networks and TensorFlow - Deep Learning Series [Part 17]

in deep-learning •  5 months ago

In this lesson of the series on building convolutional neural networks with TensorFlow and Python, we're going to build the training process for the CNN whose architecture we constructed previously.

First we're going to instantiate the dataset, after which we define the loss function and the optimizer.

So, for the loss function we're going to use cross entropy with logits (we have to pass in both the logits and the labels), while for optimization we're going to use the Adam optimizer. The learning rate we use for Adam is very small (0.0001), which can also be passed in as 1e-4 (the two notations mean the same thing).
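In TF 1.x graph style this step typically looks like `tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=..., logits=...))` followed by `tf.train.AdamOptimizer(1e-4).minimize(loss)` (the exact tensor names depend on the previous parts of the series). To make the math concrete, here is a pure-Python sketch of what the cross-entropy op computes for one example, plus one Adam update at the 1e-4 learning rate:

```python
import math

def softmax_cross_entropy_with_logits(logits, labels):
    """Cross entropy between a one-hot label and softmax(logits).

    Computed in the numerically stable log-sum-exp form, which is
    why the TF op takes raw logits rather than probabilities.
    """
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    # -sum_i labels[i] * log(softmax(logits)[i])
    return sum(lab * (log_sum_exp - z) for z, lab in zip(logits, labels))

def adam_step(param, grad, m, v, t, lr=1e-4,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter.

    m and v are running first/second moment estimates; t is the
    1-based step count used for bias correction.
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# A confident, correct prediction gives a small loss...
print(softmax_cross_entropy_with_logits([5.0, 0.0, 0.0], [1, 0, 0]))
# ...while a confident, wrong one gives a large loss.
print(softmax_cross_entropy_with_logits([5.0, 0.0, 0.0], [0, 1, 0]))
```

Note how the bias-corrected first step moves the parameter by roughly the learning rate times the sign of the gradient, which is why 1e-4 is a reasonable, conservative choice here.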

Then we're going to define the prediction and the accuracy, which will help us track the performance of the model as it's being trained. Next, we're going to create a session and execute the graph we've built, so as to train the model.
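In TF 1.x the prediction is usually `tf.argmax` over the logits, compared against the label's argmax with `tf.equal`, and the accuracy is `tf.reduce_mean(tf.cast(correct, tf.float32))`, all run inside a `tf.Session` training loop (the variable names are assumptions, not taken from the post). A pure-Python sketch of the same prediction/accuracy computation:

```python
def predictions(logit_batch):
    """Predicted class = index of the largest logit per example,
    i.e. what tf.argmax(logits, 1) returns."""
    return [max(range(len(row)), key=row.__getitem__) for row in logit_batch]

def accuracy(logit_batch, label_batch):
    """Fraction of examples whose argmax matches the true class,
    i.e. tf.reduce_mean(tf.cast(tf.equal(pred, truth), tf.float32))."""
    preds = predictions(logit_batch)
    correct = [int(p == t) for p, t in zip(preds, label_batch)]
    return sum(correct) / len(correct)

logits = [[2.0, 0.1, 0.3],   # predicts class 0
          [0.2, 0.1, 3.0],   # predicts class 2
          [0.5, 1.5, 0.0]]   # predicts class 1
labels = [0, 2, 0]           # the last prediction is wrong
print(accuracy(logits, labels))  # 2 of 3 correct
```

During training you would evaluate the accuracy tensor every so many steps inside the session loop to see the model improving.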

To stay in touch with me, follow @cristi

Cristi Vlad Self-Experimenter and Author


Hi @cristi!

Your post was upvoted in cooperation with steemstem - supporting knowledge, innovation and technological advancement on the Steem Blockchain.


Thank you so much for another awesome programming tutorial.

I had a programming book in my class a few months ago.

It was really hard for me.

This really is a deep learning class. Programming is not easy, so the series you've started is hard work for us.

Thank you so much for another tutorial; it's really beneficial for beginner programmers.