Research Methods in Human-Computer Interaction

Publisher Summary

This chapter focuses on research methods in human–computer interaction. Human performance analysis consists of finding out in what ways, and why, tasks are hard for human users of systems to accomplish. Classical task analysis can describe at least one way that people can perform a task successfully, and can also point to areas for improvement. More focused methods for analytical research include (1) failure analysis, in which detailed studies are made of what goes wrong when people fail to achieve critical subgoals; (2) individual difference analysis, in which correlations of the difficulty of task components with variations in users' measured specific abilities pinpoint excessive performance demands; and (3) time profiling, in which the task components that demand the most total and most variable time are identified as potential foci for better design. Iterative evaluation, also called formative evaluation, is the most effective method available for ensuring the development of a usable system.
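Time profiling, as described above, ranks task components by how much total time they consume and how variable that time is across trials. A minimal sketch of that bookkeeping, using entirely hypothetical component names and timing data (none of these figures come from the chapter):

```python
import statistics

# Hypothetical timing logs: seconds a user spent on each task
# component, one list of observations per component.
timings = {
    "locate command": [4.1, 3.9, 8.2, 4.4],
    "type filename": [2.0, 2.1, 1.9, 2.2],
    "undo mistake": [6.5, 1.2, 9.8, 2.4],
}

# Summarize each component as (total time, spread). Components with
# large totals or large spread are candidates for redesign.
profile = {
    name: (sum(obs), statistics.stdev(obs))
    for name, obs in timings.items()
}

# Rank components by total demand and by variability separately.
by_total = sorted(profile, key=lambda n: profile[n][0], reverse=True)
by_spread = sorted(profile, key=lambda n: profile[n][1], reverse=True)
```

With these made-up numbers, "locate command" dominates total time while "undo mistake" shows the greatest variability; in a real study, either pattern would flag that component for closer design attention.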
