The Sweet-Home speech and multimodal corpus for home automation interaction

Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home through Smart Homes and home automation. However, many studies do not include tests in real settings, because data collection in this domain is expensive and challenging and because few data sets are available. The SWEET-HOME multimodal corpus is a dataset recorded in realistic conditions in DOMUS, a fully equipped Smart Home with microphones and home automation sensors, in which participants performed Activities of Daily Living (ADL). The corpus comprises a multimodal subset, a French home automation speech subset recorded in distant speech conditions, and two interaction subsets, the first recorded by 16 persons without disabilities and the second by 6 seniors and 5 visually impaired people. This corpus was used in studies on ADL recognition, context-aware interaction, and distant speech recognition applied to voice-controlled home automation.
