We present a new type of Artificial Neural Network: the Self-Reflexive Networks. We state their theoretical presuppositions; their dynamics is analogous to that ascribed to autopoietic systems: self-referentiality, unsupervised learning, and the unintentionally cooperative and contractual activity of their own units. We also hypothesize a new concept of perception. We present the basic equations of Self-Reflexive Networks and new concepts such as the dynamic target, Re-entry with dedicated and fixed connections, and the Meta-Units. We then experiment with a specific type of Self-Reflexive Network, the Monodedicated, in the interpretation of a toy database, and we hint at other experiments already carried out, in progress, and planned. From the applied work presented here, a few specific and novel features of this type of Neural Network emerge:

(a) the capability of answering complex, strange, wrong, or imprecise questions through the same algorithms by which the learning phase took place;
(b) the capability of spontaneously transforming their own learning inaccuracy into analogical capability and original self-organization capability;
(c) the capability of spontaneously integrating the models experienced at different moments into an achronic hyper-model;
(d) the capability of behaving as if they had explored a decision graph of large dimensions, both in depth and in breadth, with the consequence of behaving as an Addressing Memory for self-dynamic Contents;
(e) the capability of always learning, rapidly and in any case, regardless of the complexity of the learning patterns;
(f) the capability of answering simultaneously from different points of view, behaving, in this case, as a network that builds several similarity models for each vector-stimulus it receives;
(g) the capability of adjusting, in a biunivocal way, each question to the consulted DB and each DB to the questions submitted to it, with the consequence of continuously creating new answering models;
(h) the capability of building, during the learning phase, a weights matrix that provides a subconceptual representation of the bi-directional relations between each pair of input variables;
(i) the capability, through the Meta-Units, of integrating into a unitary typology nodes with different saturation speeds and therefore with different memory: while the SR units are short-memory nodes, since each new stimulus zeroes the previous one, the Meta-Units memorize the different SR stimuli over time, functioning as a medium-term memory. This should confirm that medium-term memory is of a different level from immediate memory and that it is based only on relations among perceptive stimuli distributed in parallel and in sequence. In this context the weights matrix constitutes the SR long-term memory, and in this sense it will be worthwhile to devise a method through which the Meta-Units can influence, over time, the weights matrix itself. In any case, the SR contains service or filter nodes, and learning nodes that behave as if they were weights (the Meta-Units).
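To make the three memory levels described in (i) and the pairwise weights matrix of (h) more concrete, the following Python sketch gives one possible reading of that description. It is not the authors' published equations: the class name, the leaky-average update for the Meta-Units, the meta_decay parameter, and the Hebbian-style outer-product rule for the weights matrix are all assumptions introduced here purely for illustration.

```python
import numpy as np

class SelfReflexiveSketch:
    """Illustrative three-timescale memory; NOT the authors' equations.

    - sr_units:   short-term memory, overwritten by each new stimulus (i)
    - meta_units: medium-term memory, leaky accumulation of SR activity (i)
    - weights:    long-term memory, pairwise relations between input
                  variables built during learning (h)
    """

    def __init__(self, n_inputs, meta_decay=0.9):
        self.sr_units = np.zeros(n_inputs)              # immediate memory
        self.meta_units = np.zeros(n_inputs)            # average-length memory
        self.weights = np.zeros((n_inputs, n_inputs))   # long-term memory
        self.meta_decay = meta_decay                    # hypothetical saturation speed

    def present(self, stimulus):
        """Present one input vector and update all three memory levels."""
        stimulus = np.asarray(stimulus, dtype=float)
        # SR units: each new stimulus zeroes the previous one
        self.sr_units = stimulus.copy()
        # Meta-Units: accumulate SR activity over time (leaky average)
        self.meta_units = (self.meta_decay * self.meta_units
                           + (1.0 - self.meta_decay) * stimulus)
        # Weights: Hebbian-style co-activation of each pair of input variables
        self.weights += np.outer(stimulus, stimulus)
        np.fill_diagonal(self.weights, 0.0)             # keep only between-variable relations
        return self.sr_units, self.meta_units


# Minimal usage: three stimuli, then inspect the three memory levels.
net = SelfReflexiveSketch(n_inputs=4)
for s in ([1, 0, 1, 0], [0, 1, 1, 0], [1, 1, 0, 0]):
    net.present(s)
print(net.sr_units)    # only the last stimulus survives
print(net.meta_units)  # traces of all three stimuli remain
print(net.weights)     # pairwise relations accumulated during "learning"
```

The point of the sketch is only the separation of timescales: the SR vector is erased at every step, the Meta-Unit vector decays slowly, and the weights matrix only grows, matching the immediate / medium / long-term distinction drawn in (i).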