Improving Neural Network Learning Performance with Resting and Working States

Many researchers in the field of biomechanical science currently study the relationship between rest and work, because taking rests improves the efficiency of our work. In fact, our concentration declines under sustained hard work: performing the same task for a long time makes us tired, so we must take a rest. However, taking too much rest also lowers work efficiency, so the balance between rest and work must be considered. In this study, we propose a Multi-Layer Perceptron with Resting State (RSMLP). The RSMLP has two states: a resting state and a working state. Through computer simulations on learning a step function, we confirm that the RSMLP achieves better performance than both the conventional MLP and an MLP with random noise.
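The abstract does not specify how the resting state is implemented, so the following is only a minimal sketch of the general idea, under the assumption that a "resting" epoch simply skips weight updates while "working" epochs perform ordinary gradient descent; the 4-working/1-resting schedule, the network size, and the learning rate are all hypothetical choices, not the authors' method. The target task is the step function mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step-function training data (the learning task named in the abstract)
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = (X >= 0).astype(float)

# Small one-hidden-layer MLP (sizes are illustrative assumptions)
n_hidden = 8
W1 = rng.normal(0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # network output
    return h, out

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

lr = 0.5
losses = []
for epoch in range(2000):
    # Hypothetical schedule: 4 "working" epochs, then 1 "resting" epoch
    resting = (epoch % 5 == 4)
    h, out = forward(X)
    losses.append(mse(out, y))
    if resting:
        continue                   # resting state: no weight update
    # Working state: standard backpropagation for MSE + sigmoid
    d_out = (out - y) * out * (1 - out)
    dW2 = h.T @ d_out / len(X); db2 = d_out.mean(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h / len(X); db1 = d_h.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

Whether the interleaved resting epochs actually improve over uninterrupted training is exactly the comparison the paper investigates; this sketch only illustrates the two-state training loop, not the reported performance gain.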