Stable improved softmax using constant normalisation

In deep learning architectures, rectified linear unit (ReLU) based functions are widely used as the activation functions of hidden layers, and the softmax is used for the output layers. Two critical problems of the softmax are identified, and an improved softmax method that resolves them is proposed. The proposed method minimises the numerical instability of the softmax while reducing its losses. Moreover, the method is straightforward, so its computational complexity is low, yet it is well founded and operates robustly. Therefore, the proposed method can replace existing softmax functions.
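The abstract does not spell out the proposed normalisation, so the sketch below shows only the standard background technique it builds on: subtracting a constant (conventionally the maximum of the input vector) from the logits before exponentiation, which leaves the softmax output unchanged but prevents overflow in `exp`. The function name and pure-Python formulation are illustrative assumptions, not the authors' implementation.

```python
import math

def stable_softmax(logits):
    """Softmax with constant normalisation for numerical stability.

    Subtracting any constant c from every logit cancels out, since
    exp(x - c) / sum(exp(x_j - c)) == exp(x) / sum(exp(x_j)).
    Choosing c = max(logits) keeps every exponent <= 0, so exp()
    cannot overflow even for very large logits.
    """
    c = max(logits)                       # constant shift, per input vector
    exps = [math.exp(x - c) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, `stable_softmax([1000.0, 1000.0])` returns `[0.5, 0.5]`, whereas the naive form would overflow when computing `exp(1000.0)`.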