Autoencoders


Sorry I haven't posted in a long time, but I've been working on a model for the Zillow Kaggle competition. (Kaggle is a website that hosts datasets and competitions with prizes!) Autoencoders are basically feedforward neural networks that are trained on the identity function and have a small hidden layer... Here's a post explaining what I know, plus a question for brownie points!


So anyway...picture a typical autoencoder: an input layer, a narrow hidden layer in the middle, and an output layer the same size as the input!

Autoencoders are simply feedforward neural networks, as I've described in previous posts.

HOWEVER, this one is trained on data where the output is the same as the input...in other words, you're training a neural network to learn the identity function on a set of data!

This sounds stupid and counter-intuitive at first glance...but it can help you do feature detection and/or dimensionality reduction. Here's how...

It's remarkably simple...you just make the middle layer of the network have very few neurons. In order to learn how to reproduce the input on the other side, it has to learn a compressed representation in that small hidden layer! This is the key insight!
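To make this concrete, here's a minimal sketch in Keras (my choice of library; the post doesn't name one, and the layer sizes and data below are made-up stand-ins):

```python
# A minimal autoencoder sketch. The sizes (64 inputs, 8-neuron bottleneck)
# and the random training data are hypothetical examples.
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_dim = 64   # hypothetical number of input features
code_dim = 8     # the small middle layer that forces compression

inputs = Input(shape=(input_dim,))
encoded = Dense(code_dim, activation="relu")(inputs)      # compress
decoded = Dense(input_dim, activation="linear")(encoded)  # reconstruct

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Train on the identity: the inputs are also the targets.
X = np.random.rand(1000, input_dim).astype("float32")  # stand-in data
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)

# The encoder half, on its own, gives you the compressed representation.
encoder = Model(inputs, encoded)
codes = encoder.predict(X)
print(codes.shape)  # (1000, 8)
```

Once training is done, you throw away the decoder half and use the encoder's output as your compressed features.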


Let me know if you have any questions about how this works!

Meanwhile...I have a question for anyone willing to answer: Is it common practice to pass the hidden representation layer through a nonlinearity like sigmoid as you would any other hidden layer? Or is it more common to have the hidden representation layer be linear?
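For concreteness, here are the two variants I mean, in the same hypothetical Keras setup as above:

```python
from tensorflow.keras.layers import Input, Dense

inputs = Input(shape=(64,))  # same hypothetical input size as before

# Option 1: squash the code layer through a nonlinearity,
# like any other hidden layer
code_sigmoid = Dense(8, activation="sigmoid")(inputs)

# Option 2: leave the code layer linear (no activation)
code_linear = Dense(8, activation="linear")(inputs)
```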
