Towards intentional neural systems: experiments with MAGNUS

The term "intentionality" arises in connection with natural language understanding in a computer. The problem is not one of speech recognition. It remains a problem even if the words of the language were perfectly encoded by a speech recognizer, or even typed on a keyboard. It is believed that the ability to visualize events when hearing sentences that describe them is a clue to the way in which artificial neural networks need to be structured and trained. The assessment which gives the title to this paper is that of Searle (1992), who suggests that classical logical models fail to capture "understanding" as they have no intentional relationship with the objects they represent. Searle illustrated his point with the now well-known example of the Chinese Room where, he argued, the symbols of a language can be manipulated to give answers to questions about a sequence of symbols that make up a story. In this paper, we show that, through a process of "iconic" training, a neural state machine can develop an "intentional" representation. An example of this is shown as implemented on MAGNUS (Multiple Automata of General Neural UnitS) software.