A Paradigm of a Pervasive Multimodal Multimedia Computing System for Visually-Impaired Users

Incorporating multimodality into a computing system makes computing accessible to a wider range of users, including those with impairments. This work presents a paradigm of a multimodal multimedia computing system designed to make informatics accessible to visually-impaired users. The system's infrastructure determines which applications are suitable to use: the user's context and the type of the user's data are considered when selecting the applications, media and modalities that are appropriate. The design is pervasive, fault-tolerant and capable of self-adaptation under varying conditions (e.g. missing or defective components). Machine learning allows the system to behave in a pre-defined manner when it recognises a pre-conceived scenario, and incremental learning is adopted to acquire further machine knowledge. A simulation of the system's behaviour, using a test-case scenario, is presented in this paper. This work is our original contribution to ongoing research on making informatics more accessible to handicapped users.
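To make the scenario-driven behaviour concrete, the following is a minimal Python sketch, under stated assumptions and not the system's actual implementation, of how a (user context, data type) pair could be mapped to suitable output modalities and how that mapping could be extended through incremental learning. All class, rule and modality names here are illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's implementation) of
# scenario-to-behaviour mapping: the user's context and the data type
# determine which media/modalities are appropriate, and new scenarios
# can be added incrementally without rebuilding the existing knowledge.

class ModalitySelector:
    """Maps (user context, data type) scenarios to suitable output modalities."""

    def __init__(self):
        # Pre-conceived scenarios: the system behaves in a pre-defined
        # manner when it recognises one of these (context, data type) pairs.
        self.rules = {
            ("blind_user", "text"): ["speech_synthesis", "braille_display"],
            ("blind_user", "image"): ["verbal_description"],
            ("low_vision", "text"): ["magnified_display", "speech_synthesis"],
        }

    def select(self, context, data_type):
        """Return the modalities judged appropriate, or a safe default."""
        return self.rules.get((context, data_type), ["speech_synthesis"])

    def learn(self, context, data_type, modalities):
        """Incremental learning: add a new scenario without retraining from scratch."""
        self.rules[(context, data_type)] = list(modalities)


if __name__ == "__main__":
    selector = ModalitySelector()
    print(selector.select("blind_user", "text"))        # ['speech_synthesis', 'braille_display']
    selector.learn("blind_user", "math_formula", ["audio_math_rendering"])
    print(selector.select("blind_user", "math_formula"))  # ['audio_math_rendering']
```

In a pervasive, fault-tolerant deployment, the default returned for an unrecognised scenario would stand in for the self-adaptation the paper describes when components are missing or defective.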
