Developing an appendable memory system toward permanent memory for artificial intelligence to acquire new knowledge after deployment

Artificial intelligence that accumulates knowledge and presents it to users has the potential to become a vital ally for humanity. Such artificial intelligence would benefit from the ability to continually accumulate knowledge while operating, rather than having to be retrained each time new knowledge is acquired. In this study, we developed an appendable memory system that allows artificial intelligence to acquire new knowledge after deployment. Artificial intelligence that can share previously acquired knowledge with users is perceived as a more human-like entity. Some dialogue agents developed to date can partially utilize memories of past conversations; such memories are sometimes referred to as long-term memory. However, the capacity of existing dialogue agents to continuously accumulate knowledge and share it with users remains insufficient. Because of their naive implementation, their memory capacity is ultimately limited, and methods are therefore needed to realize a system that can store an arbitrary amount of information within a finite medium. Furthermore, when dialogue agents are developed by repeatedly using human conversation data as training data, it is doubtful whether the resulting artificial intelligence can effectively utilize information it acquired in the past; in this study, we demonstrate that it cannot. In contrast, we propose a method for repeatedly storing arbitrary information in a single finite vector and retrieving it, using a neural network whose structure resembles a recurrent neural network and an encoder–decoder network, which we named the memorizer–recaller. The system we aim to build is one capable of generating external memory. In general, the parameters of a trained neural network are static; in other words, a trained neural network predicts values by referring to static memory. In contrast, we aspire to develop a system that can generate and utilize dynamic memory information. This study has produced only very preliminary results, but the approach is fundamentally different from the traditional learning methods of neural networks and represents a foundational step toward building artificial intelligence that can store arbitrary information within finite-sized data and freely utilize it in the future.
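As a rough illustration of the write-then-read behavior described above, the sketch below folds a stream of key–value pairs into a single fixed-size memory vector (the "memorizer" step) and then queries that vector with a key (the "recaller" step). The module names, dimensions, and the choice of a GRU cell here are assumptions made for illustration only, not the architecture reported in this work.

```python
import torch
import torch.nn as nn

class Memorizer(nn.Module):
    """Folds a stream of key-value pairs into one fixed-size memory vector."""
    def __init__(self, key_dim, value_dim, memory_dim):
        super().__init__()
        # Hypothetical choice: a GRU cell as the recurrent update of the memory vector.
        self.cell = nn.GRUCell(key_dim + value_dim, memory_dim)

    def forward(self, keys, values, memory):
        # keys: (steps, key_dim), values: (steps, value_dim), memory: (memory_dim,)
        m = memory.unsqueeze(0)
        for k, v in zip(keys, values):
            m = self.cell(torch.cat([k, v]).unsqueeze(0), m)
        return m.squeeze(0)

class Recaller(nn.Module):
    """Reads a stored value back out of the memory vector, given a query key."""
    def __init__(self, key_dim, value_dim, memory_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(key_dim + memory_dim, memory_dim),
            nn.ReLU(),
            nn.Linear(memory_dim, value_dim),
        )

    def forward(self, key, memory):
        return self.net(torch.cat([key, memory]))

# Usage sketch: write ten items into the finite memory vector, then query one back.
key_dim, value_dim, memory_dim = 8, 4, 64
memorizer = Memorizer(key_dim, value_dim, memory_dim)
recaller = Recaller(key_dim, value_dim, memory_dim)

keys = torch.randn(10, key_dim)       # items presented after "deployment"
values = torch.randn(10, value_dim)
memory = torch.zeros(memory_dim)      # the single finite vector acting as external memory

memory = memorizer(keys, values, memory)   # write phase
recalled = recaller(keys[3], memory)        # read phase: query with a stored key
```

In this sketch the memory vector itself is the dynamic, appendable state: the network weights stay fixed after training, while new information is written into and read from the vector at inference time.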