Visual Scanning, Memory Scanning, and Computational Human Performance Modeling

This article describes two studies conducted as part of a systematic effort at the University of Michigan to develop comprehensive computational models of complex human performance and to improve the scientific basis for these models.

The first study focused on integrating models of divided attention with models of selective attention in the performance of complex tasks. Two experiments were conducted: the first required subjects to perform a simple information acquisition task, and the second a complex information integration task. Each task was performed either alone or concurrently with a tracking task, and involved either spatial or verbal material. The location of the relevant spatial or verbal material was displayed at four levels of spatial uncertainty, but with approximately the same expected visual scanning distance across levels. The results demonstrated the strengths and limitations of existing models, and the paper proposes the potential value of power functions for quantifying different aspects of task interference. A queuing network model, recently proposed (Liu, 1993a) as a unifying theory and an integrated computational model of human multi-task performance, was also tested in this study.

In the second study, a computational model was derived from models of memory scanning and visual scanning and evaluated through an experiment, to examine the integration of these two aspects of human performance modeling. We report here the first of a series of experiments in that study, which required subjects to search through an organized array of circles to decide whether any circle carried any of the items memorized in working memory. The joint effects of two experimental factors were investigated: the number of items in working memory and the number of circles that needed to be searched.
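To illustrate how a power function could quantify task interference, the sketch below fits y = a * x**b to interference scores by linear least squares in log-log space. The data values and the choice of spatial uncertainty as the predictor are purely illustrative assumptions, not results from the experiments described above.

```python
import numpy as np

def fit_power_function(x, y):
    """Fit y = a * x**b via linear least squares on log-transformed data.

    Taking logs turns the power law into a line: log(y) = log(a) + b*log(x),
    so an ordinary degree-1 polynomial fit recovers b (slope) and log(a).
    """
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

# Hypothetical interference scores (e.g., increase in tracking error) at
# four levels of spatial uncertainty; the numbers are made up for illustration.
uncertainty = np.array([1.0, 2.0, 3.0, 4.0])
interference = np.array([0.12, 0.21, 0.28, 0.35])

a, b = fit_power_function(uncertainty, interference)
print(f"interference ~ {a:.3f} * uncertainty^{b:.3f}")
```

The exponent b summarizes how steeply interference grows with the manipulated factor, which is one way a single fitted parameter can characterize an "aspect" of task interference.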
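The joint manipulation of memory set size and display set size echoes the classic serial-comparison account of memory scanning combined with visual search. A minimal sketch of one such account (a serial, exhaustive scan in which every display item is compared against every memorized item) is given below; the intercept and per-comparison slope values are illustrative assumptions, not parameters estimated in the study.

```python
def predicted_rt(memory_set_size, display_set_size,
                 base_ms=400.0, comparison_ms=40.0):
    """Predicted reaction time (ms) under a serial, exhaustive scan.

    Each of the display_set_size items is compared against each of the
    memory_set_size items, so RT grows with the product of the two factors.
    base_ms and comparison_ms are hypothetical placeholder parameters.
    """
    return base_ms + comparison_ms * memory_set_size * display_set_size

# Predicted RTs over a small factorial design of the two factors.
for m in (1, 2, 4):          # items held in working memory
    for n in (4, 8):         # circles to be searched
        print(f"memory={m}, display={n}: {predicted_rt(m, n):.0f} ms")
```

Under this sketch the two factors interact multiplicatively in their effect on reaction time; a self-terminating scan or a parallel model would predict a different joint pattern, which is exactly the kind of contrast an experiment crossing the two factors can test.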