Strands of Memory

Software-intensive systems operating in difficult environments often have to function without timely human control. That level of autonomy argues for highly capable self- and situation-awareness processes that protect the system, perform its tasks, and predict the external environment, so that appropriate plans can be made as far in advance as possible and modified as conditions change. For this level of adaptability, we have long advocated self-modeling systems: systems that contain models of their own behavior and interpret those models to produce that behavior, so that changing the models changes the behavior. In this paper, we describe two aspects of memory in such systems, reflected in two properties of situation-awareness management:

• How to navigate the world and not get surprised (very much).
• How to recognize situations when they recur (soon enough to do something useful).

Our conclusion is that the system must maintain very many “strands” of memory, that is, partially completed stories about what the environment has done, is doing, and is expected to do. Each strand is (usually implicitly) waiting for some kind of extending, supporting, confirming, or refuting observation or other evidence. The details to be determined concern what those observations can be, what levels of abstraction are required, how they are determined, how the stories are managed, and how they relate to each other. This paper is a first step in that direction.
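The strand mechanism described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the class names (`Strand`, `StrandMemory`), the event vocabulary, and the reduction of evidence to exact-match sets are all assumptions made for the sketch. It shows only the core idea that each strand is an open story waiting for evidence that extends and confirms it or refutes it, and that every observation is routed to all open strands.

```python
# Hypothetical sketch of "strands of memory": each strand is a partially
# completed story awaiting confirming or refuting evidence. All names and
# the string-matching evidence model are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto


class Status(Enum):
    OPEN = auto()        # still awaiting evidence
    CONFIRMED = auto()   # all expected evidence has arrived
    REFUTED = auto()     # contradicting evidence was observed


@dataclass
class Strand:
    """One partially completed story about the environment."""
    story: list[str]          # observations incorporated so far
    expects: set[str]         # evidence that would extend/confirm the story
    contradicts: set[str]     # evidence that would refute the story
    status: Status = Status.OPEN

    def observe(self, event: str) -> None:
        """Extend, confirm, or refute this strand with one observation."""
        if self.status is not Status.OPEN:
            return
        if event in self.contradicts:
            self.status = Status.REFUTED
        elif event in self.expects:
            self.story.append(event)
            self.expects.discard(event)
            if not self.expects:
                self.status = Status.CONFIRMED


class StrandMemory:
    """Maintains many strands at once; each observation reaches all of them."""

    def __init__(self) -> None:
        self.strands: list[Strand] = []

    def add(self, strand: Strand) -> None:
        self.strands.append(strand)

    def observe(self, event: str) -> None:
        for strand in self.strands:
            strand.observe(event)
```

For example, a strand begun by `door_opened` that expects `footsteps` would be confirmed by that observation, while a strand begun by `engine_start` would be refuted by `engine_stop`; both are updated by the same call to `StrandMemory.observe`. Real strands would of course match structured, multi-level observations rather than exact strings; this sketch fixes only the bookkeeping shape of the idea.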
