Reasoning About Gestural Interaction

Many of the reported developments in the design of virtual spaces or visualisation systems are driven by improvements in technology, whether in physical devices or in algorithms for achieving realistic renderings within real-time constraints. While this experimental approach produces a wealth of empirical results, it operates largely without a sound underlying theory that can be used to design systems that effectively support users in real-world domains. One of the main problems is that these sophisticated technologies rely on, but rarely assess, the cognitive abilities of the user. This paper introduces a new approach to modelling human-system interaction: a syndetic model combines a formal expression of system behaviour with an approximate representation of cognitive resources, allowing reasoning about the flow and utilisation of information within the combined system. The power of the approach to provide insight into novel interaction techniques is illustrated by developing a syndetic model of a gesture-driven user interface.