Using communicative acts in high-level specifications of user interfaces for their automated synthesis

User interfaces are crucial to the success of many computer-based applications today. However, their development is time-consuming, requires both user-interface design experts and experienced programmers, and is expensive. The problem is aggravated by the ubiquitous use of a variety of devices such as PCs, mobile phones and PDAs, since each device has its own characteristics that demand a tailored user interface. We therefore developed a tool-supported approach that automatically synthesizes multi-device user interfaces from high-level specifications in the form of models. In contrast to previous approaches, which focus on abstracting the user interface per se, we use communicative acts, derived from speech act theory, to specify the intended user interactions. In this way we address the problem above: user interfaces can be provided efficiently, without experience in implementing them.
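To make the idea concrete, the following is a minimal sketch, not the authors' actual tool: it assumes a hypothetical interaction model given as a list of communicative acts (e.g. a question posed to the user, an offer of an action) and shows how such a device-independent specification could be mapped to different concrete widgets per target device, which is the essence of multi-device synthesis from one high-level model.

```python
from dataclasses import dataclass

@dataclass
class CommunicativeAct:
    """One intended interaction, independent of any concrete widget."""
    kind: str      # e.g. "question", "informing", "offer" (illustrative kinds)
    content: str   # the payload of the act

# Assumed mapping from act kind to an abstract widget per device class.
# The names here are hypothetical, chosen only for illustration.
WIDGET_MAP = {
    "question":  {"pc": "text_field", "phone": "voice_prompt"},
    "informing": {"pc": "label",      "phone": "notification"},
    "offer":     {"pc": "button",     "phone": "menu_item"},
}

def synthesize(acts, device):
    """Render a concrete UI description for one target device."""
    return [(WIDGET_MAP[act.kind][device], act.content) for act in acts]

# One model, two synthesized interfaces:
model = [
    CommunicativeAct("question", "Destination?"),
    CommunicativeAct("offer", "Book flight"),
]
print(synthesize(model, "pc"))
print(synthesize(model, "phone"))
```

The point of the sketch is the separation of concerns: the model captures only what the interaction is meant to communicate, while the per-device rendering is derived automatically.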
