Scribe4Me: Evaluating a Mobile Sound Transcription Tool for the Deaf

People who are deaf or hard-of-hearing may face challenges communicating with others through spoken language and staying aware of audio events in their environment. This is especially true in public places, which often lack accessible ways of conveying announcements and other audio events. In this paper, we present the design and evaluation of a mobile sound transcription tool for the deaf and hard-of-hearing. Our tool, Scribe4Me, is designed to improve awareness of sound-based information in any location. When the user pushes a button on the tool, they receive a text message containing a transcription of the last 30 seconds of sound. Transcriptions include dialog and descriptions of environmental sounds. We describe a 2-week field study of an exploratory prototype, which shows that our approach is feasible, highlights particular contexts in which it is useful, and provides information about what should be contained in transcriptions.
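To make the interaction model concrete, the sketch below illustrates (in Python) one plausible way to structure the described behavior: continuously buffer recent audio, and on a button press send the last 30 seconds for transcription and return the result as a text message. This is a minimal illustration only, not the authors' implementation; the transcribe() and send_sms() helpers are hypothetical placeholders for the remote transcription service and SMS gateway.

```python
import time
from collections import deque

BUFFER_SECONDS = 30   # amount of audio history to keep, per the paper's design
CHUNK_SECONDS = 1     # granularity of captured audio chunks


class RollingAudioBuffer:
    """Keeps only the most recent BUFFER_SECONDS of audio chunks."""

    def __init__(self):
        self.chunks = deque(maxlen=BUFFER_SECONDS // CHUNK_SECONDS)

    def add_chunk(self, pcm_bytes: bytes) -> None:
        # Called once per captured chunk; old chunks fall off automatically.
        self.chunks.append(pcm_bytes)

    def last_30_seconds(self) -> bytes:
        return b"".join(self.chunks)


def transcribe(audio: bytes) -> str:
    """Hypothetical stand-in for the remote transcription step
    (dialog plus descriptions of environmental sounds)."""
    return "[door slams] Person A: 'The 5:40 train is delayed twenty minutes.'"


def send_sms(user_number: str, text: str) -> None:
    """Hypothetical stand-in for an SMS gateway delivering the transcription."""
    print(f"SMS to {user_number}: {text}")


def on_button_press(buffer: RollingAudioBuffer, user_number: str) -> None:
    """Handle a user request: transcribe buffered audio and text it back."""
    audio = buffer.last_30_seconds()
    send_sms(user_number, transcribe(audio))


if __name__ == "__main__":
    buf = RollingAudioBuffer()
    # Simulate continuous capture of 1-second audio chunks.
    for _ in range(45):
        buf.add_chunk(b"\x00" * 16000)  # placeholder PCM data
    on_button_press(buf, "+1-555-0100")
```

The key design point this captures is that transcription is retrospective: the device records continuously into a bounded buffer, so the user can request a transcript of something they just missed rather than having to anticipate it.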
