Increasing interest in human-centered information fusion systems involves: (1) humans as sensors (viz., “soft sensors”), (2) humans performing pattern recognition and participating in the fusion cognitive process, and (3) human groups performing collaborative analysis (viz., “crowd-sourcing” of analysis). Test and evaluation of such systems is challenging because we must develop both representative test data (involving both physical sensors and human observers) and test environments to evaluate the performance of the hardware, software, and humans-in-the-loop. This paper describes an experimental facility called an extreme events laboratory, a test and evaluation approach, and evolving test data sets for the evaluation of human-centered information fusion systems for situation awareness. The data sets include both synthetic data and data obtained from human subjects in campus-wide experiments.