Enable Traditional Laptops with Virtual Writing Capability Leveraging Acoustic Signals

Human–computer interaction through touch screens plays an increasingly important role in our daily lives. Besides smartphones and tablets, laptops are among the most prevalent mobile devices for both work and leisure. For many applications, it is desirable to equip a typical laptop with handwriting and drawing capability. In this paper, we design VPad, a virtual writing tablet system for traditional laptops without touch screens. VPad leverages two speakers and one microphone, which are available in most commodity laptops, to accurately track hand movements and recognize characters written in the air without additional hardware. Specifically, VPad emits inaudible acoustic signals from the laptop's two speakers and then analyzes the energy features and Doppler shifts of the signals received by the microphone to track the trajectory of hand movements. Furthermore, we propose a state-machine-based trajectory optimization method to correct unexpected trajectories, and we employ a stroke-direction sequence model based on probability estimation to recognize the characters users write in the air. Experimental results show that VPad achieves an average trajectory-tracking error of 1.55 cm and a character-recognition accuracy of over 90% using only the built-in audio devices of a laptop.
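The core tracking idea, inferring hand motion from the Doppler shift of an inaudible tone, can be illustrated with a short sketch. The snippet below is a minimal illustration rather than VPad's actual pipeline: the 18.5 kHz carrier, 48 kHz sample rate, and frame length are assumed values not given here, and a real system would additionally fuse energy features from both speakers to recover a 2-D trajectory.

```python
# Minimal sketch (assumptions, not VPad's implementation) of estimating the
# radial hand velocity from the Doppler shift of an inaudible carrier tone.
import numpy as np

FS = 48_000          # assumed microphone sample rate (Hz)
CARRIER = 18_500.0   # assumed inaudible tone emitted by one speaker (Hz)
SOUND_SPEED = 343.0  # speed of sound in air (m/s)

def doppler_velocity(frame: np.ndarray) -> float:
    """Estimate radial hand velocity from one frame of microphone samples.

    The dominant spectral peak near the carrier is located with an FFT;
    its offset from the carrier is treated as a Doppler shift and
    converted to velocity via v = c * df / f0.
    """
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / FS)

    # Search only a narrow band around the carrier to ignore ambient noise.
    band = (freqs > CARRIER - 200) & (freqs < CARRIER + 200)
    peak_freq = freqs[band][np.argmax(spectrum[band])]

    doppler_shift = peak_freq - CARRIER
    return SOUND_SPEED * doppler_shift / CARRIER

# Example: a synthetic frame shifted by +40 Hz yields roughly +0.7 m/s
# (FFT bin quantization limits the precision of this simple estimate).
t = np.arange(4096) / FS
frame = np.sin(2 * np.pi * (CARRIER + 40.0) * t)
print(f"estimated velocity: {doppler_velocity(frame):.2f} m/s")
```

In the full system described above, trajectories reconstructed from such per-frame velocity estimates would then be corrected by the state-machine optimization and matched against stroke-direction sequences for character recognition.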
