Where, What, Why and How - 3W1H: a practical approach for the development of interactive environments

The advent of Kinect triggered the growth of applications aimed at natural interaction, gesture recognition, and interactive environments. Over time, we realized that interface solutions based on these types of interaction grew rapidly and in a disorderly way, with no concern for formalizing the development stages. In addition, issues related to how these interactions are represented, the context in which they occur, and the environment's behavior in response to them became relevant. In this sense, this paper contributes a practical Where-What-Why-How (3W1H) approach for the development of interactive environments. The proposal focuses on three main points: (i) the actions that must be performed by the interactive environment; (ii) the situations that trigger the execution of those actions by the environment; and (iii) the expected behavior once the situations have been recognized. The proposal is illustrated through a complete case study that covers all development stages and the physical implementation of a remote control that triggers actions in the real world (a TV) and in the virtual world (a graphical representation of the remote control), which together compose the interactive environment.
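The abstract contains no code, but the event-driven structure described in points (i)-(iii), in which situations recognized by the environment trigger the actions that realize the expected behavior, can be sketched roughly as follows. This is a minimal illustrative sketch in Python under our own naming assumptions; the class and method names (`InteractiveEnvironment`, `on`, `recognize`) are hypothetical and do not reflect the authors' notation or implementation.

```python
# Illustrative sketch (not the authors' implementation): an interactive
# environment in which recognized situations trigger the actions the
# environment must perform, echoing the 3W1H idea of binding situations
# to actions and expected behavior. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class InteractiveEnvironment:
    # Maps a situation label (e.g. a recognized gesture) to the actions
    # the environment performs when that situation occurs.
    behaviors: Dict[str, List[Callable[[], None]]] = field(default_factory=dict)

    def on(self, situation: str, action: Callable[[], None]) -> None:
        """Register an action to run when `situation` is recognized."""
        self.behaviors.setdefault(situation, []).append(action)

    def recognize(self, situation: str) -> None:
        """Called by the sensing layer (e.g. a gesture recognizer)."""
        for action in self.behaviors.get(situation, []):
            action()


# Hypothetical usage mirroring the case study: a gesture acts as a remote
# control, triggering actions on the real TV and on its virtual counterpart.
env = InteractiveEnvironment()
env.on("swipe_right", lambda: print("TV: next channel"))
env.on("swipe_right", lambda: print("Virtual remote: highlight 'next' button"))
env.recognize("swipe_right")
```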
