The Many Faces of Agents: Introduction to This Special Issue
The recent interest and excitement in the area of agent-based systems has resulted from the confluence of a variety of research disciplines and technologies, notably AI, object-oriented programming, human-computer interfaces, and networking.

Developing agents that can perceive the world, reason about what they perceive, and act in pursuit of their own goals has been the Holy Grail of AI. Early attempts at such holistic intelligence (for example, SRI International's Shakey robot in the early 1970s) proved frustrating, partly because of the immaturity of the AI technologies but most importantly because of the unreliability of the hardware. AI researchers turned their attention instead to component technologies for structuring a single agent, such as planning, knowledge representation, diagnosis, and learning. Although most AI research was focused on single-agent issues, a small number of AI researchers gathered at the Massachusetts Institute of Technology Endicott House in 1980 for the First Workshop on Distributed AI. The main scientific goal of distributed AI (DAI) is to understand the principles underlying the behavior of multiple interacting entities in the world, called agents. The discipline is concerned with how agent interactions produce overall multiagent system (MAS) behavior. The presence of other agents that must be taken into account in each agent's reasoning forced DAI researchers to confront issues of agent situatedness in a multiagent environment, perception of others' behavior, communication, and action that affects the behavior of others. In addition, because of limited local information, agents in an MAS always operate under some degree of uncertainty.

Meanwhile, the object-oriented community was defining and refining the notion of objects as units of software design that encapsulate state: an object has some control over its state in the sense that the state can be accessed or modified only through the methods the object provides. Objects could be distributed and invoked remotely through remote procedure calls or message passing. Networking and distributed-systems technology was developing a fast and reliable infrastructure for secure communication and efficient distributed computation. Human-computer interface research was proposing task delegation as an alternative to direct manipulation as a way for humans to interact with computer systems.

These separate strands of research and technology gave rise to the realization that all these communities were concerned with different aspects of the notion of agency. Agency was defined and mathematically modeled by Eisenhardt (1989): an agency relationship is present when one party (the principal) depends on another party (the agent) to undertake some task on the principal's behalf. The notion of agency covers cooperative coordination in MASs (agents depend on each other's cooperation to perform their tasks); delegation in human interface design; object-oriented programming, where one object uses another; and self-interested coordination through contracting in MASs.

Since the early 1990s, there has been practically a deluge of research papers dealing with agents and of implemented systems that claim to be agent based, spanning e-mail filtering, information retrieval from the web, electronic commerce, entertainment, and spacecraft control. This diversity, albeit demonstrating the vitality and excitement of the field, also contributes to the confusing picture the field presents. The confusion is intensified by the overhyping of the term agent: there have been many attempts at defining an agent and endless discussions about what constitutes agenthood.
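The object-oriented strand above rests on two ideas: encapsulation (state reachable only through an object's methods) and remote invocation by message passing. A minimal sketch follows; the `Account` class and the `invoke` helper are hypothetical illustrations, not anything from the original article, and `invoke` only mimics the message-passing style of distributed-object systems locally.

```python
class Account:
    """State is encapsulated: callers cannot touch _balance directly
    (by Python convention) and must go through the methods below."""

    def __init__(self, balance=0):
        self._balance = balance  # encapsulated state

    def deposit(self, amount):
        # The object controls how its state may change.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        return self._balance


def invoke(obj, method, *args):
    """Toy stand-in for remote invocation: the caller sends a message
    naming the method, as in message-passing distributed objects."""
    return getattr(obj, method)(*args)


acct = Account()
invoke(acct, "deposit", 50)
print(invoke(acct, "balance"))  # -> 50
```

The point of the indirection is that the caller never manipulates the object's state, only requests operations by name, which is exactly the property that lets objects live behind a network boundary.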
Although no agreed-on definition exists yet, there seems to be a convergence of opinion that an agent is a computer software system whose main characteristics are situatedness, autonomy, adaptivity, and sociability. We hold that all these characteristics must be present simultaneously for a system to qualify as an agent. Situatedness means that the agent receives some form of sensory input from its environment and performs actions that change that environment in some way; the physical world and the internet are examples of environments in which an agent can be situated. Autonomy means that the agent can act without direct intervention by humans or other agents and that it has control over its own actions and internal state. Adaptivity means that the agent is capable of (1) reacting flexibly to changes in its environment; (2) taking goal-directed initiative when appropriate; and (3) learning from its own experience, its environment, and its interactions with others. Sociability means that the agent is capable of interacting in a peer-to-peer manner with other agents or humans. Many researchers emphasize other aspects of agency, such as mobility; such additional properties might be useful for certain applications, but we believe that the previous four properties, when present in a single software entity, are what uniquely characterize an agent as opposed to related software paradigms, such as object-oriented systems or expert systems (see also Jennings, Sycara, and Wooldridge [1998] for a more detailed discussion). The agent paradigm offers new promise for building complex software because of the abstraction and flexibility it provides: such systems are conceived as organizations of coordinating agents, with a complex domain decomposed into modular, functionally specific agents.
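The four properties can be made concrete with a toy sketch. The `ThermostatAgent` below is a hypothetical illustration (none of its names come from the article): it senses and changes its environment (situatedness), decides and acts on its own (autonomy), adjusts its goal from accumulated experience (adaptivity), and exchanges messages with a peer (sociability).

```python
class ThermostatAgent:
    """Toy agent exhibiting situatedness, autonomy, adaptivity, sociability."""

    def __init__(self, target=20.0):
        self.target = target   # internal state under the agent's own control
        self.history = []      # experience, used for a crude form of learning

    def perceive(self, environment):
        # Situatedness: sensory input from the environment.
        return environment["temperature"]

    def act(self, environment):
        # Autonomy: the agent decides and acts without outside intervention.
        temp = self.perceive(environment)
        self.history.append(temp)
        if temp < self.target:
            environment["temperature"] += 1.0  # the action changes the environment
        elif temp > self.target:
            environment["temperature"] -= 1.0
        # Adaptivity: relax the goal if recent readings swing widely.
        recent = self.history[-3:]
        if len(self.history) > 3 and max(recent) - min(recent) > 5:
            self.target = sum(recent) / 3

    def tell(self, other, message):
        # Sociability: peer-to-peer interaction with another agent.
        other.receive(self, message)

    def receive(self, sender, message):
        if message.get("kind") == "set_target":
            self.target = message["value"]


env = {"temperature": 15.0}
a, b = ThermostatAgent(), ThermostatAgent()
for _ in range(10):
    a.act(env)                                  # drives temperature toward 20.0
a.tell(b, {"kind": "set_target", "value": 18.0})  # b adopts a's suggested goal
print(env["temperature"], b.target)             # -> 20.0 18.0
```

Even at this scale the point of the definition is visible: remove any one method and the object degrades into a plain data structure, a reactive controller, or an isolated process rather than an agent in the sense used here.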
[1] K. Eisenhardt. Agency Theory: An Assessment and Review, 1989.
[2] N. R. Jennings, K. Sycara, and M. Wooldridge. A Roadmap of Agent Research and Development, Autonomous Agents and Multi-Agent Systems, 1998.