Your reactions suggest you liked the movie: automatic content rating via reaction sensing

This paper describes a system for automatically rating content, mainly movies and videos, at multiple granularities. Our key observation is that the rich set of sensors on today's smartphones and tablets can capture a wide spectrum of user reactions while users watch movies on these devices. Examples range from acoustic signatures of laughter, indicating which scenes were funny, to the stillness of the tablet, suggesting intense drama. Moreover, unlike in most conventional systems, these ratings need not be reduced to a single numeric score but can be expanded to capture the user's experience. We combine these ideas into an Android-based prototype called Pulse and test it with 11 users, each of whom watched 4 to 6 movies on Samsung tablets. Encouraging results show consistent correlation between the users' actual ratings and those generated by the system. With more rigorous testing and optimization, Pulse could be a candidate for real-world adoption.
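The abstract mentions using the stillness of the tablet as a proxy for engagement. As a minimal illustrative sketch (not the paper's actual algorithm, whose details are not given here), one way to flag a "still" window is to threshold the standard deviation of the accelerometer magnitude over a short window; the function name and threshold below are hypothetical:

```python
import math

def stillness_score(samples, threshold=0.05):
    """Flag a window of accelerometer samples as 'still'.

    samples: list of (x, y, z) readings in g.
    Returns True when the standard deviation of the
    acceleration magnitude falls below `threshold`.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return math.sqrt(var) < threshold

# A tablet held steadily: magnitude near 1 g with tiny jitter.
still = [(0.0, 0.0, 1.0 + 0.001 * math.sin(i)) for i in range(50)]
# A tablet being handled: visible fluctuation in magnitude.
moving = [(0.0, 0.0, 1.0 + 0.3 * math.sin(i)) for i in range(50)]
```

In a real deployment the threshold would need calibration per device, and such a stillness cue would be fused with the other reaction signals (e.g. audio-based laughter detection) rather than used alone.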
