Special issue: “Emerging Technologies for Deaf Accessibility in the Information Society”
The definition of Universal Access and Design for All, as established in the framework of the Information Society, requires cross-disciplinary collaboration in both research and implementation activities in order to support advanced human–computer interaction. The present special issue brings together representative leading research on the emerging technologies that enable novel solutions for accessibility of products and services by deaf users. The technologies discussed here address crucial aspects of communication and information exchange among deaf individuals, as well as between deaf and hearing individuals, in the context of human–computer interaction.

Leading technologies for these purposes include sign recognition, sign synthesis, and natural language processing applied to sign languages. Sign recognition draws primarily on image and video processing and computer vision, supported by sign language resources and natural language processing methodologies. Sign synthesis exploits virtual agent (avatar) technologies to produce dynamic signing utterances on the basis of knowledge provided through appropriately coded sign language resources. In this environment, natural language processing enables the creation of adequate language resources and the development of the systems needed to implement tools that support Universal Access applications. Such language resources include annotated corpora, lexical databases and electronic grammars that directly feed systems for sign synthesis, sign recognition and conversion from spoken to sign language.

This special issue aims to provide an overview of the current state of the art in technological advances and open scientific issues relating to sign language technologies. Sign recognition is addressed through an overall presentation (Article 1), followed by a discussion of the particularities of facial movement analysis (Article 2). The discussion of sign synthesis opens with the presentation of one of the best-known systems (Article 3), followed by an approach to sign language modelling (Article 4). An approach to sign generation that focuses on exploiting language resources to support conversion from written representations of spoken language to sign language structures follows (Article 5). Finally, an application of sign language representation is presented that is based on machine translation principles and the exploitation of signing avatars (Article 6).

Article 1, "Recent developments in visual sign language recognition" by U. von Agris, J. Zieren, U. Canzler, B. Bauer and K.-F. Kraiss, discusses a signer-independent sign recognition system for mobile operation in uncontrolled environments that uses both manual and facial features.

Article 2, "Facial Movement Analysis in ASL" by Ch. Vogler and S. Goldenstein, focuses on facial movement analysis and presents a 3D deformable model tracking system with special reference to occlusion problems.

Article 3, "Linguistic Modelling and Language-processing Technologies for Avatar-based Sign Language Presentation" by R. Elliott, J. R. W. Glauert, J. R. Kennaway, I. Marshall and E. Safar, presents a sign generation system that exploits machine translation resources to support synthesis and visual sign realisation by a virtual human signing avatar.

E. Efthimiou · E. Fotinea
Institute for Language and Speech Processing / "Athena" R.C., Epidavrou & Artemidos 6, Marousi, 151 25 Athens, Greece
e-mail: eleni_e@ilsp.gr