Auto-encoders are neural networks and, like the other networks we have discussed so far, may be shallow or deep. What distinguishes auto-encoders from other forms of neural network is that they are trained to reproduce, or predict, their own inputs. The hidden layers and neurons are thus not a map between an input and some other outcome, but are self- (auto-) encoding.
Unlike the more common case where the outcome is some variable we are interested in predicting, the outcome used to train an auto-encoder is the same as the input. Given sufficient complexity, an auto-encoder can therefore simply learn the identity function, in which case the hidden neurons exactly mirror the raw data and provide no meaningful benefit. For this reason, the best auto-encoder is not necessarily the most accurate one, but one that reveals some meaningful structure in the data, reduces noise, identifies outliers or anomalous data, or has some other useful side effect that is...
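To make the idea concrete, the following is a minimal sketch of a linear auto-encoder written with NumPy. All of the specifics (the synthetic low-rank data, the dimensions, the learning rate) are illustrative assumptions, not a recommended recipe: the point is only that the network is trained so that its output reconstructs its own input, with a hidden layer narrower than the data forcing a compressed encoding.

```python
import numpy as np

# Illustrative setup (hypothetical values): n observations of d variables,
# encoded into k < d hidden units, then decoded back to d dimensions.
rng = np.random.default_rng(0)
n, d, k = 200, 8, 3

# Synthetic data with low-rank structure plus a little noise, so there is
# genuine structure for the bottleneck to capture.
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) \
    + 0.1 * rng.normal(size=(n, d))

W_enc = 0.1 * rng.normal(size=(d, k))  # encoder weights
W_dec = 0.1 * rng.normal(size=(k, d))  # decoder weights

def reconstruction_loss(W_enc, W_dec):
    X_hat = X @ W_enc @ W_dec
    return np.mean((X_hat - X) ** 2)

lr = 0.01
initial = reconstruction_loss(W_enc, W_dec)
for _ in range(1000):
    H = X @ W_enc                    # encode: hidden representation
    X_hat = H @ W_dec                # decode: reconstruct the inputs
    G = 2.0 * (X_hat - X) / n        # gradient of MSE w.r.t. X_hat
    grad_dec = H.T @ G               # gradients computed before updating
    grad_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = reconstruction_loss(W_enc, W_dec)
print(final < initial)
```

Because the target is the input itself, "accuracy" here is just reconstruction error; as the text notes, driving that error to zero is not the goal. With k equal to d the network could trivially learn the identity, so the narrow hidden layer is what forces a useful compressed representation.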