Emotional Disorders in Autonomous Agents?
A number of authors have recently suggested that modelling emotions and related motivational systems in agents may have great practical value, beyond the interest of offering possible explanations for the emotional mechanisms of human agents. Emotions, or needs, can serve as signalling mechanisms between different subsystems (subagents) within an agent, as well as between agents. In this paper, we investigate some problems that may arise with emotional agents. Since needs and emotions are largely global, stable reaction tendencies, they may exhibit rigidities that lead to various forms of maladaptive behavior, i.e. behavior that is poorly suited to the agent's present environment. We investigate emotional learning in agents using a highly simplified decision-theoretic model. We show that even in this very simple model, agents may develop maladaptive patterns of behavior that closely resemble patterns found in human emotional disorders. These maladaptive behavior patterns are due to non-optimal values of the two decision parameters, which are functions of the agent's prior beliefs.
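To make the mechanism concrete, here is a minimal sketch of how a simplified decision-theoretic agent of this kind could be simulated. It is not the paper's actual model: the two parameters (`prior_danger`, a prior belief that acting is harmful, and `harm_cost`, the perceived cost of harm relative to reward) are hypothetical stand-ins for the two decision parameters the abstract mentions. The sketch illustrates how a pessimistic prior can become self-sustaining, since an agent that always avoids never gathers the evidence that would correct its belief.

```python
import random

def simulate(prior_danger, harm_cost, true_danger=0.05,
             reward=1.0, episodes=200, seed=0):
    """Simulate an agent choosing approach/avoid each episode.

    Two illustrative decision parameters (not the paper's exact
    formulation):
      prior_danger -- the agent's prior belief that approach is harmful
      harm_cost    -- perceived cost of harm relative to the reward
    """
    rng = random.Random(seed)
    # Beta-style pseudo-counts encoding the prior belief.
    harmful, safe = prior_danger * 10, (1 - prior_danger) * 10
    avoided = 0
    for _ in range(episodes):
        belief = harmful / (harmful + safe)
        # Expected utility of approaching vs. the safe utility 0 of avoiding.
        if belief * (-harm_cost) + (1 - belief) * reward > 0:
            # Approach: observe the true outcome and update the belief.
            if rng.random() < true_danger:
                harmful += 1
            else:
                safe += 1
        else:
            # Avoid: no observation is made, so the belief never corrects.
            avoided += 1
    return avoided / episodes

# A well-calibrated agent explores and learns the environment is safe;
# a pessimistic prior plus a high perceived harm cost locks the agent
# into permanent avoidance, a phobia-like maladaptive pattern.
print(simulate(prior_danger=0.1, harm_cost=2.0))   # ~0.0: adaptive
print(simulate(prior_danger=0.6, harm_cost=20.0))  # 1.0: maladaptive
```

The second call shows the rigidity the abstract describes: because the avoidance decision removes the opportunity to learn, a non-optimal setting of the two parameters produces a stable, globally maladaptive pattern rather than a transient error.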