Dynamic agent-based reconfiguration of a multimedia multimodal architecture

Natural human-computer interaction based on multimodal feature fusion involves complex intelligent architectures that must cope with unexpected errors and with mistakes made by users. These architectures should react to events that occur simultaneously, possibly redundantly, across different input media. Intelligent agent-based generic architectures for multimedia multimodal dialog protocols are proposed. Global agents are decomposed into their relevant components, and each element is modeled separately using timed colored Petri nets. The elementary models are then linked together to obtain the full architecture. Generic components of the application are monitored by an agent-based expert system that performs dynamic reconfiguration, adaptation, and evolution at the architectural level. For validation, the proposed multi-agent architecture and its dynamic reconfiguration are applied to practical examples.
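To make the modeling step concrete, the following is a minimal sketch of a timed colored Petri net of the kind the abstract describes, fusing redundant events from two input media into one multimodal event. All names here (place names, token colors, the `fuse` transition, the timing values) are illustrative assumptions for this sketch, not elements taken from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    color: str   # here: the semantic content of an input event
    time: int    # earliest time at which the token is available

@dataclass
class Transition:
    name: str
    inputs: dict   # place -> required token color
    outputs: dict  # place -> produced token color
    delay: int     # firing delay added to output tokens

class TimedColoredPetriNet:
    def __init__(self):
        self.marking = {}  # place -> list of Tokens

    def add_token(self, place, token):
        self.marking.setdefault(place, []).append(token)

    def enabled(self, t, now):
        # Enabled iff every input place holds a token of the required
        # color whose timestamp has already been reached.
        return all(
            any(tok.color == color and tok.time <= now
                for tok in self.marking.get(place, []))
            for place, color in t.inputs.items()
        )

    def fire(self, t, now):
        if not self.enabled(t, now):
            return False
        # Consume one matching token from each input place.
        for place, color in t.inputs.items():
            toks = self.marking[place]
            for i, tok in enumerate(toks):
                if tok.color == color and tok.time <= now:
                    del toks[i]
                    break
        # Produce output tokens, time-stamped with the firing delay.
        for place, color in t.outputs.items():
            self.add_token(place, Token(color, now + t.delay))
        return True

# Redundant "select" events arriving on two media (speech, gesture)
# are fused into a single multimodal event.
net = TimedColoredPetriNet()
net.add_token("speech_in", Token("select", 0))
net.add_token("gesture_in", Token("select", 1))
fuse = Transition("fuse",
                  inputs={"speech_in": "select", "gesture_in": "select"},
                  outputs={"fused_out": "select"},
                  delay=2)
assert net.fire(fuse, now=1)  # both tokens are available at t = 1
```

In this reading, each architectural component gets its own small net like the one above, and linking the elementary models amounts to sharing places between them; a monitoring agent could then add or rewire transitions at run time to realize the dynamic reconfiguration the abstract refers to.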
