How Companion-Technology can Enhance a Multi-Screen Television Experience: A Test Bed for Adaptive Multimodal Interaction in Domestic Environments

This article presents smarTVision, a novel multi-screen interactive TV setup, and its enhancement through Companion-Technology. Owing to their flexibility and the variety of interaction options they offer, such multi-screen scenarios are hardly intuitive for the user. While existing research focuses on technology and features, the users themselves are often not considered adequately. Companion-Technology has the potential to make such interfaces genuinely user-friendly. Building upon smarTVision, we envision its extension via concepts of Companion-Technology. This combination represents a versatile test bed that can not only be used to evaluate the usefulness of Companion-Technology in a TV scenario, but can also serve to evaluate Companion-Systems in general.
