Motivation Based Goal Adoption for Autonomous Intelligent Agents

An intelligent agent situated in an environment needs to know the preferred states it is expected to achieve so that it can work towards them. The preferred states the agent has selected to achieve at a given time are its "goals". One popular approach to deciding which preferred state to adopt as a goal is to assign utility values to these states and then choose the one with the highest utility at a given time. However, a preferred state can be useful to a varying degree depending on the situation the agent is in, so such a static utility cannot represent its usefulness in different situations. In this paper we propose an approach that represents the utility of preferred states using the concept of motivations, which adjusts their utility according to the situation the agent is in.
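The contrast drawn above can be illustrated with a minimal sketch. All names here (`Goal`, `situational_utility`, the `hunger` motivation) are hypothetical and not from the paper: the idea is only that motivations reweight a goal's base utility according to the agent's current situation, so the adopted goal can change even though the base utilities do not.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    base_utility: float  # static utility, independent of situation

def situational_utility(goal, motivations, situation):
    # Each motivation maps (goal, situation) to a multiplicative weight,
    # so the effective utility varies with the situation.
    weight = 1.0
    for m in motivations:
        weight *= m(goal, situation)
    return goal.base_utility * weight

def adopt_goal(goals, motivations, situation):
    # Adopt the goal with the highest situation-adjusted utility.
    return max(goals, key=lambda g: situational_utility(g, motivations, situation))

# Illustrative motivation: boost the "eat" goal when energy is low.
hunger = lambda g, s: 3.0 if (g.name == "eat" and s["energy"] < 0.3) else 1.0
goals = [Goal("eat", 0.4), Goal("explore", 0.6)]

print(adopt_goal(goals, [hunger], {"energy": 0.2}).name)  # "eat": hunger triples its utility
print(adopt_goal(goals, [hunger], {"energy": 0.9}).name)  # "explore": static utilities decide
```

Under a purely static scheme the agent would always adopt "explore" (0.6 > 0.4); the motivation makes the choice situation-dependent.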