In an Intelligent Environment, the user and the environment work together in a unique manner: the user expresses what he wishes to do, and the environment recognizes his intentions and helps out however it can. If well implemented, such an environment allows the user to interact with it in the manner that is most natural for him personally. He should need virtually no time to learn to use it and should be more productive once he has. But to implement a useful and natural Intelligent Environment, its designers face a daunting task: they must design a software system that senses what its users do, understands their intentions, and then responds appropriately. In this paper we argue that, to function reasonably in any of these ways, an Intelligent Environment must make use of declarative representations of what the user might do. We present our evidence in the context of the Intelligent Classroom, a facility that aids a speaker in this way and uses its understanding to produce a video of his presentation.
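To make the central claim concrete, the following is a minimal sketch in Python of what a declarative representation of user activity might look like. The paper itself gives no code; the plan names, the observable actions such as "approach-podium", and the `Plan`/`recognize` helpers here are all hypothetical illustrations. The idea is that each plan declaratively lists the observable steps of something the speaker might do, so the environment can match observed actions against the plans to hypothesize an intention and anticipate the next step.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    """A declarative description of one activity the user might perform."""
    name: str          # the intention this plan represents
    steps: list[str]   # observable actions, in the order they occur

# Hypothetical plans for a speaker in the Intelligent Classroom.
PLANS = [
    Plan("give-slide-talk", ["approach-podium", "dim-lights", "show-slide", "speak"]),
    Plan("write-on-board",  ["approach-board", "pick-up-chalk", "write"]),
]

def recognize(observed: list[str]) -> list[tuple[str, str | None]]:
    """Return (intention, predicted next step) for every plan whose
    step sequence begins with the actions observed so far."""
    hypotheses = []
    for plan in PLANS:
        if plan.steps[:len(observed)] == observed:
            nxt = plan.steps[len(observed)] if len(observed) < len(plan.steps) else None
            hypotheses.append((plan.name, nxt))
    return hypotheses

# After seeing the speaker approach the podium, the environment can
# hypothesize a slide talk and prepare to dim the lights.
print(recognize(["approach-podium"]))  # [('give-slide-talk', 'dim-lights')]
```

Because the representation is declarative rather than procedural, the same plan descriptions can serve all three tasks the paper names: directing the sensing (look for "dim-lights" next), understanding intentions (the matched plan), and responding appropriately (act on the predicted step).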