Elimination of Catastrophic Memory Destruction in the Hopfield Model

In the standard Hopfield model, the memory is destroyed catastrophically when it is overloaded (so-called catastrophic forgetting). We eliminate catastrophic forgetting by assigning different weights to the input patterns. As the weights, one can use the frequencies with which the patterns appear during the learning process. We show that only patterns whose weights exceed a certain critical weight are recognized. The case in which the weights form a geometric series is studied in detail. The theoretical results are in good agreement with computer simulations.
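
As a rough illustration of the construction described above, the following is a minimal Python/NumPy sketch of a Hopfield network with a weighted Hebbian rule, J_ij = (1/N) * sum_mu w_mu xi_i^mu xi_j^mu, using geometric weights w_mu = q^mu. The particular values of N, M, and q, the 10% probe noise, and the asynchronous zero-temperature dynamics are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200   # number of spins (neurons); assumed for illustration
M = 30    # number of stored patterns; assumed
q = 0.95  # common ratio of the geometric weight series; assumed

# Random bipolar patterns xi^mu in {-1, +1}^N
patterns = rng.choice([-1, 1], size=(M, N))

# Geometric weights w_mu = q^mu; frequencies of appearance during
# learning could be used instead, as the abstract suggests
weights = q ** np.arange(M)

# Weighted Hebbian connection matrix:
#   J_ij = (1/N) * sum_mu w_mu * xi_i^mu * xi_j^mu, with J_ii = 0
J = (patterns.T * weights) @ patterns / N
np.fill_diagonal(J, 0.0)

def recall(state, steps=20):
    """Asynchronous zero-temperature dynamics: flip each spin to align
    with its local field until a fixed point (or step limit) is reached."""
    s = state.copy()
    for _ in range(steps):
        prev = s.copy()
        for i in rng.permutation(len(s)):
            h = J[i] @ s
            s[i] = 1 if h >= 0 else -1
        if np.array_equal(s, prev):
            break
    return s

# Probe a few stored patterns with 10% of their spins flipped; patterns
# with large weights should be restored (overlap near +1), while
# low-weight patterns fall below the critical weight and are lost.
for mu in (0, M // 2, M - 1):
    noisy = patterns[mu].copy()
    flip = rng.choice(N, size=N // 10, replace=False)
    noisy[flip] *= -1
    overlap = recall(noisy) @ patterns[mu] / N
    print(f"pattern {mu:2d}: weight={weights[mu]:.3f}, overlap={overlap:+.2f}")
```

Running such a sketch shows the qualitative effect claimed in the abstract: heavily weighted patterns remain fixed points of the dynamics, whereas patterns whose weights fall below the critical value are no longer recognized.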