A Benchmarking Model for Sensors in Smart Environments

In smart environments, developers can choose from a wide variety of sensors that support their use case, each with specific advantages and disadvantages. In this work we present a benchmarking model that estimates the utility of a sensor technology for a given use case by calculating a single score from application-specific weighting factors and a set of sensor features. This feature set accounts for the complexity of smart environment systems, which comprise multiple subsystems and operate in non-static environments. We show how the model can be used to find a suitable sensor for a use case, as well as the inverse: finding suitable use cases for a given set of sensors. Additionally, we present extensions that normalize differently rated systems and compensate for central tendency bias. The model is verified by estimating technology popularity through a frequency analysis of associated search terms in two scientific databases.
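The abstract describes combining application-specific weights with sensor feature ratings into a single score. As a minimal sketch, assuming the aggregation is a weighted mean (the paper's exact formula, feature names, and rating values below are hypothetical):

```python
def sensor_score(weights, ratings):
    """Return a single utility score as the weighted mean of feature ratings.

    weights : dict mapping feature name -> application-specific weight
    ratings : dict mapping feature name -> sensor's rating for that feature
    """
    total_weight = sum(weights.values())
    if total_weight == 0:
        raise ValueError("at least one non-zero weight is required")
    # Weighted mean keeps the score on the same scale as the ratings,
    # so sensors rated by different judges remain roughly comparable.
    return sum(weights[f] * ratings[f] for f in weights) / total_weight

# Hypothetical comparison of two sensors for a presence-detection use case,
# with features rated 1 (poor) to 5 (good):
weights = {"range": 3, "privacy": 5, "cost": 2}
camera = {"range": 5, "privacy": 1, "cost": 3}
capacitive = {"range": 2, "privacy": 5, "cost": 4}

print(sensor_score(weights, camera))      # -> 2.6
print(sensor_score(weights, capacitive))  # -> 3.9
```

With these illustrative weights, the privacy-friendly capacitive sensor outscores the camera because the use case weights privacy most heavily; changing the weights for another application can reverse the ranking, which is the point of the per-application weighting factor.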
