Sensor fusion for context understanding

To address the challenge of context understanding for HCI, we propose and experimentally test a top-down sensor fusion approach. We seek to systematize the sensing process in two steps: first, we decompose the relevant context information so that it can be described in a model of discrete facts and quantitative measurements; second, we build a generalizable sensor fusion architecture that handles highly distributed sensors in a dynamic configuration to collect and fuse data and populate our context information model. This paper describes our information model, system architecture, and preliminary experimental results.
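The two-step approach can be illustrated with a minimal sketch: a context model that holds discrete facts alongside quantitative measurements, with a simple confidence-weighted fusion step over readings from distributed sensors. All names and the weighted-average fusion rule are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of the context information model: discrete facts
# (e.g. "user is in the meeting room") plus quantitative measurements
# reported by distributed sensors. Names are illustrative.

@dataclass
class Measurement:
    sensor_id: str     # which distributed sensor reported this reading
    quantity: str      # e.g. "temperature"
    value: float
    confidence: float  # weight used when fusing multiple readings

@dataclass
class ContextModel:
    facts: Dict[str, bool] = field(default_factory=dict)
    measurements: List[Measurement] = field(default_factory=list)

    def add_measurement(self, m: Measurement) -> None:
        # Sensors may join or leave dynamically; the model simply
        # accumulates whatever readings are currently available.
        self.measurements.append(m)

    def fuse(self, quantity: str) -> float:
        """Confidence-weighted average over all readings of one quantity."""
        readings = [m for m in self.measurements if m.quantity == quantity]
        total = sum(m.confidence for m in readings)
        return sum(m.value * m.confidence for m in readings) / total

model = ContextModel()
model.facts["user_in_meeting_room"] = True
model.add_measurement(Measurement("s1", "temperature", 21.0, 0.8))
model.add_measurement(Measurement("s2", "temperature", 23.0, 0.2))
print(model.fuse("temperature"))
```

In a fuller system, the fusion rule and the set of fact types would be supplied by the architecture rather than hard-coded, so that new sensors can contribute to the model without changing its schema.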