SounDroid: Supporting Real-Time Sound Applications on Commodity Mobile Devices

The advantages of sound, such as its use for measurement and accessibility, open a new opportunity for mobile applications to offer a broad range of interesting and valuable functionality, supporting a richer user experience. However, despite the growing interest in mobile sound applications, little work has focused on managing the audio device effectively. More specifically, the limited real-time capability of current mobile platforms for audio resources makes it challenging to satisfy the tight timing requirements of mobile sound applications, e.g., the high sensing rates of acoustic sensing applications. To address this problem, this work presents SounDroid, an audio device management framework for real-time audio requests from mobile sound applications. The design of SounDroid is based on a requirement analysis of audio requests as well as an understanding of the audio playback procedure on Android, including audio request scheduling and dispatching. It incorporates real-time audio request scheduling algorithms, called EDF-V and AFDS, together with dispatching optimization techniques into the mobile platform, thereby improving the quality of service of mobile sound applications. Experimental results with a prototype implementation of SounDroid demonstrate that it enhances scheduling performance for audio requests over traditional mechanisms (by up to 40%) while providing deterministic dispatching latency.
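
The abstract names two real-time audio request scheduling algorithms, EDF-V and AFDS, without describing them. As a rough illustration of the general idea behind deadline-driven scheduling of audio requests, the following minimal Java sketch orders pending requests by their absolute deadlines (earliest deadline first). All class, field, and method names here are hypothetical assumptions for illustration and are not part of the SounDroid API.

// Hypothetical sketch of EDF-style scheduling of audio requests; this is not
// the EDF-V or AFDS algorithm from the paper, only the generic earliest-deadline idea.
import java.util.Comparator;
import java.util.PriorityQueue;

public class EdfAudioScheduler {
    /** A pending audio request with an absolute deadline (assumed model). */
    static class AudioRequest {
        final String appName;     // requesting application (illustrative)
        final long deadlineNanos; // absolute time by which the buffer must be dispatched
        final byte[] pcmBuffer;   // audio samples to hand to the audio device

        AudioRequest(String appName, long deadlineNanos, byte[] pcmBuffer) {
            this.appName = appName;
            this.deadlineNanos = deadlineNanos;
            this.pcmBuffer = pcmBuffer;
        }
    }

    // Earliest deadline first: the queue keeps the request with the nearest deadline at the head.
    private final PriorityQueue<AudioRequest> queue =
            new PriorityQueue<>(Comparator.comparingLong((AudioRequest r) -> r.deadlineNanos));

    public synchronized void submit(AudioRequest request) {
        queue.add(request);
    }

    /** Returns the most urgent pending request, or null if none is pending. */
    public synchronized AudioRequest nextToDispatch() {
        return queue.poll();
    }

    public static void main(String[] args) {
        EdfAudioScheduler scheduler = new EdfAudioScheduler();
        long now = System.nanoTime();
        scheduler.submit(new AudioRequest("acoustic-sensing", now + 5_000_000L, new byte[480]));
        scheduler.submit(new AudioRequest("music-player", now + 20_000_000L, new byte[480]));
        // The sensing request is dispatched first because its deadline is earlier.
        System.out.println(scheduler.nextToDispatch().appName);
    }
}

A scheduler in the spirit of EDF-V or AFDS would presumably weigh further factors, such as per-application audio characteristics, when ordering requests, but those details are not given in the abstract.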
