IF PLANNING and decision aids are to meet expectations as force multipliers on the battlefield, something must be done to analyze, summarize, aggregate, and display their data in a manner consistent with the goal structure and cognitive characteristics of each individual user. Without this type of preprocessing, voluminous data will rapidly overload the processing ability of the human half of the human-machine system. It then becomes likely that an increasingly intelligent generation of support tools will be abandoned in favor of a less informed "seat of the pants" approach to meeting highly dynamic mission objectives.

The range of variability among probable users of coordinated, integrated battlefield databases is great. Not only do users exist with significantly different functional requirements (such as the commander, G-2, and G-3), but each user may require a different view of the data depending on his skills, experience, mental acuity (considering such factors as fatigue and morale), and accustomed problem-solving style. To these parameters must be added the influence of context; different classes of situation interact with user characteristics to yield variations in optimum data selection, prioritization, level of detail and resolution, "chunking," figure-ground relations, alarm thresholds, envelopes constraining exception processing, descriptive characterization of uncertainties, and choice of metaphor.

Halpin [1], [2] and others [3] have proposed the creation of general-purpose intelligent interfaces, or intelligent assistants, to facilitate the dialog between the user and complex systems. Fig. 1 illustrates the architecture for such an intelligent interface. Interposed between the user and the application software in a system, the intelligent interface consists of models, or representations, of the user, of the user's goals and tasks, and of the world (goal/task context), as well as the software to update those models dynamically and to use the models to control information flow to and from the user. The premise is that we can develop the interface to the point that it understands the user well enough to interpret his requests and commands.
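To make the architecture of Fig. 1 concrete, the following sketch shows one possible realization in Python of the three models and the mediating interface layer. It is an illustration of the general idea under stated assumptions, not the paper's implementation; every class, attribute, and field name here (UserModel, TaskModel, WorldModel, IntelligentInterface, the event and record dictionaries) is a hypothetical choice made for the example.

    # Minimal sketch of the intelligent-interface architecture described above.
    # All names (UserModel, TaskModel, WorldModel, IntelligentInterface, the
    # event/record fields) are illustrative assumptions, not the paper's API.

    from dataclasses import dataclass, field


    @dataclass
    class UserModel:
        """Representation of the individual user (skills, fatigue, preferred style)."""
        skill_level: str = "novice"          # e.g. "novice" or "expert"
        fatigue: float = 0.0                 # 0.0 (rested) .. 1.0 (exhausted)
        preferred_detail: str = "summary"    # "summary" or "full"


    @dataclass
    class TaskModel:
        """Representation of the user's current goals and tasks."""
        current_goal: str = ""
        relevant_topics: list = field(default_factory=list)


    @dataclass
    class WorldModel:
        """Representation of the goal/task context (the situation)."""
        situation_class: str = "routine"     # e.g. "routine", "time-critical"


    class IntelligentInterface:
        """Sits between the user and the application software: updates its
        models from observed events and uses them to select, prioritize,
        and trim the data flowing to the user."""

        def __init__(self, user: UserModel, task: TaskModel, world: WorldModel):
            self.user, self.task, self.world = user, task, world

        def observe(self, event: dict) -> None:
            """Update the models dynamically as the dialog and situation evolve."""
            if event.get("type") == "situation":
                self.world.situation_class = event["class"]
            elif event.get("type") == "user_state":
                self.user.fatigue = event["fatigue"]

        def present(self, records: list) -> list:
            """Filter and reshape application data to match the current models."""
            # Select only records relevant to the current goal/task.
            relevant = [r for r in records
                        if not self.task.relevant_topics
                        or r["topic"] in self.task.relevant_topics]
            # Prioritize by urgency when the situation is time-critical.
            if self.world.situation_class == "time-critical":
                relevant.sort(key=lambda r: r.get("urgency", 0), reverse=True)
            # Reduce the level of detail for a fatigued user or one who
            # prefers summaries.
            if self.user.fatigue > 0.7 or self.user.preferred_detail == "summary":
                relevant = [{"topic": r["topic"], "summary": r.get("summary", "")}
                            for r in relevant]
            return relevant

Under this sketch, a commander and a G-2 could draw on the same application data while each IntelligentInterface instance, holding its own user, task, and world models, delivers a differently selected and formatted view.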
REFERENCES

[1] N. Chomsky et al., "Three models for the description of language," IRE Trans. Inf. Theory, 1956.
[2] A. D. Saja et al., "The cognitive model: an approach to designing the human-computer interface," SIGCHI Bull., 1985.
[3] S. Halpin, "A proposal for an intelligent interface in man-machine systems," in Proc. 23rd IEEE Conf. on Decision and Control, 1984.
[4] A. Newell et al., The Psychology of Human-Computer Interaction, 1983.
[5] D. Robson et al., Smalltalk-80: The Language and Its Implementation, 1983.
[6] M. L. Brodie et al., On Conceptual Modelling, Topics in Information Systems, 1984.
[7] B. Shneiderman et al., "Human factors experiments in designing interactive systems," Computer, 1979.
[8] G. M. Bull et al., "A model for software design facilitating man-machine interface variations," SIGCHI Bull., 1984.
[9] D. A. Norman et al., "Design rules based on analyses of human error," Commun. ACM, 1983.
[10] D. A. Norman et al., "Design principles for human-computer interfaces," in Proc. CHI '83, 1983.
[11] G. Piatetsky-Shapiro et al., "An intelligent database assistant," IEEE Expert, 1986.
[12] R. Bandler et al., The Structure of Magic, 1975.
[13] H. Mozeico, "A human/computer interface to accommodate user learning stages," Commun. ACM, 1982.