Unveiling the secrets of hidden units in hidden layers: a post about how to determine the number of hidden units and the number of hidden layers. An artificial neural network mimics the human brain; it can compute many complex problems in a simple and efficient way. A single-layered network consists of an input layer and an output layer, while a multi-layered network adds one or more hidden layers between them. Hidden neurons are also referred to as “learned feature detectors” or “re-representation units”.
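To make the input–hidden–output structure concrete, here is a minimal sketch of a forward pass through one hidden layer in plain Python. The function names, weights, and sizes are illustrative assumptions, not part of the original post; the point is only that each hidden unit computes a new representation of the inputs, which the output layer then reads.

```python
import math

def sigmoid(z):
    # squashing activation used by each unit
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W_hidden, b_hidden, W_out, b_out):
    # hidden layer: each hidden unit re-represents the raw inputs
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(W_hidden, b_hidden)]
    # output layer: reads the hidden representation, not the raw inputs
    out = [sigmoid(sum(w * hi for w, hi in zip(ws, hidden)) + b)
           for ws, b in zip(W_out, b_out)]
    return hidden, out

# toy network: 2 inputs -> 3 hidden units -> 1 output (weights are arbitrary)
W_hidden = [[0.5, -0.3], [0.8, 0.2], [-0.6, 0.9]]
b_hidden = [0.1, -0.2, 0.05]
W_out = [[1.0, -0.5, 0.7]]
b_out = [0.0]

hidden, out = forward([1.0, 0.0], W_hidden, b_hidden, W_out, b_out)
```

Each entry of `hidden` lies in (0, 1) and acts as one “learned feature detector” once the weights are trained.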
Hidden layers are the intermediate layers of a neural network, and they play an important role. The appropriate number of hidden layers can be determined from the given problem and input data, and the majority of problems fit well with a single hidden layer. A network with zero hidden layers is only capable of representing linearly separable functions or decisions. One hidden layer can approximate any function involving a continuous mapping from one finite space to another. Two hidden layers can represent an arbitrary decision boundary to arbitrary accuracy with rational activation functions. Adding two or more hidden layers may increase accuracy, but the overall complexity and total training time of the neural network will also increase.
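The classic illustration of the zero-versus-one-hidden-layer distinction is XOR, which is not linearly separable: no single linear threshold unit can compute it, but one hidden layer of two units can. The sketch below uses hand-picked weights and a step activation purely for illustration (these specific weights are my assumption, not from the post).

```python
def step(z):
    # hard-threshold activation
    return 1 if z > 0 else 0

def zero_hidden_layers(x1, x2, w1, w2, b):
    # a single linear unit: can only draw one straight decision line
    return step(w1 * x1 + w2 * x2 + b)

def one_hidden_layer_xor(x1, x2):
    # hidden layer: two units acting as OR and AND detectors
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # output unit: fires for "OR but not AND", i.e. XOR
    return step(h_or - h_and - 0.5)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([one_hidden_layer_xor(a, b) for a, b in inputs])  # → [0, 1, 1, 0]
```

No choice of `w1`, `w2`, `b` makes `zero_hidden_layers` reproduce `[0, 1, 1, 0]`, which is exactly the linear-separability limitation described above.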
Each hidden layer contains a number of hidden units, which may vary from one neural network to another. The ideal number of hidden units often depends on the number of input and output units, the number of features used for training, the amount of noise in the targets, the architecture, the type of hidden-unit activation function, and the algorithm used for training. If the number of hidden units far exceeds the optimal figure, the network is prone to over-fitting; if it falls short of what the problem demands, the network tends towards under-fitting.
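Since the ideal count depends on so many factors, practitioners often start from a rough rule of thumb and then tune against validation error. One commonly cited heuristic is the geometric mean of the input and output sizes; the function below is a sketch of that heuristic (a starting point only, not a rule from the post), clamped to at least one unit.

```python
import math

def suggest_hidden_units(n_inputs, n_outputs):
    """Geometric-mean rule of thumb for an initial hidden-unit count.

    This is only a starting guess: too many units risks over-fitting,
    too few risks under-fitting, so tune on validation data.
    """
    return max(1, round(math.sqrt(n_inputs * n_outputs)))

print(suggest_hidden_units(10, 2))  # → 4
```

In practice you would sweep a range around this value (e.g. half to double it) and keep the count that minimizes validation error, which directly balances the over-fitting and under-fitting risks mentioned above.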