SensorTune: a mobile auditory interface for DIY wireless sensor networks

Wireless Sensor Networks (WSNs) allow the monitoring of activity or environmental conditions over a large area, from homes to industrial plants, from agricultural fields to forests and glaciers. They can support a variety of applications, from assisted living to natural disaster prevention. WSNs can, however, be challenging to set up and maintain, reducing their potential for real-world adoption. To address this limitation, this paper introduces SensorTune, a novel mobile interface that supports non-expert users in iteratively setting up a WSN. SensorTune uses non-speech audio to present information about the connectivity of the network being set up, allowing users to decide how to extend it. To simplify the interpretation of the data presented, the system adopts the metaphor of tuning a consumer analog radio, a common and well-known operation. A user study was conducted in which 20 subjects set up real multi-hop networks inside a large building using a limited number of wireless nodes. Subjects repeated the task with SensorTune and with a comparable mobile GUI interface. Experimental results show a statistically significant difference in task completion time and a clear user preference for the auditory interface.
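
To illustrate the radio-tuning metaphor described above, the following is a minimal, hypothetical sketch (not the authors' implementation) of how a normalized link-quality value, e.g. derived from RSSI, could be mapped to an audio signal: a clean tone stands in for a well-tuned "station", and white-noise "static" grows as connectivity degrades. The function name, tone frequency, and quality scale are assumptions for illustration only.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def tuning_audio(link_quality: float, duration: float = 1.0,
                 tone_hz: float = 440.0) -> np.ndarray:
    """Hypothetical sonification of link quality via the radio-tuning metaphor.

    link_quality: normalized quality in [0, 1] (e.g. rescaled RSSI).
    Returns a mono audio buffer with samples in [-1, 1]:
    mostly tone when the link is good, mostly noise when it is poor.
    """
    q = float(np.clip(link_quality, 0.0, 1.0))
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * tone_hz * t)           # the tuned "station"
    noise = np.random.uniform(-1.0, 1.0, t.shape)    # inter-station "static"
    return q * tone + (1.0 - q) * noise

# Example: a weak link (0.2) sounds mostly like static,
# while a strong link (0.9) sounds like a clearly tuned station.
weak_link_audio = tuning_audio(0.2)
strong_link_audio = tuning_audio(0.9)
```

Because the mapping is continuous, a user walking through a building would hear the signal gradually "detune" as they move out of range, which is the kind of ambient, eyes-free feedback the paper argues suits iterative node placement.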
