Audio-Visual Sentiment Analysis for Learning Emotional Arcs in Movies

Stories can have tremendous power: beyond providing entertainment, they can activate our interests and mobilize our actions. The degree to which a story resonates with its audience may be reflected, in part, in the emotional journey it takes the audience on. In this paper, we use machine learning methods to construct emotional arcs in movies, identify families of arcs, and demonstrate the ability of certain arcs to predict audience engagement. The system is applied to Hollywood films and high-quality shorts found on the web. We begin by using deep convolutional neural networks for audio and visual sentiment analysis. These models are trained on both new and existing large-scale datasets, after which they are used to compute separate audio and visual emotional arcs. We then crowdsource annotations for 30-second video clips extracted from highs and lows in the arcs in order to assess the micro-level precision of the system, where precision is measured as agreement in polarity between the system's predictions and annotators' ratings. These annotations are also used to combine the audio and visual predictions. Next, we turn to macro-level characterizations of movies and investigate whether there exist 'universal shapes' of emotional arcs; in particular, we develop a clustering approach to discover distinct classes of emotional arcs. Finally, we show on a sample corpus of short web videos that certain emotional arcs are statistically significant predictors of the number of comments a video receives. These results suggest that the emotional arcs learned by our approach capture macroscopic aspects of a video story that drive audience engagement. Such machine understanding could be used to predict audience reactions to video stories, ultimately improving our ability as storytellers to communicate with one another.
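To make the arc-construction and clustering steps more concrete, the sketch below shows one plausible realization, assuming per-clip sentiment scores from the audio and visual models are smoothed and resampled into a fixed-length emotional arc, and that arcs are then grouped with k-medoids over a pairwise distance matrix. The function names, window sizes, and the Euclidean distance (a simple stand-in for an alignment-based measure such as dynamic time warping) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def emotional_arc(clip_scores, window=30, length=100):
    """Smooth a sequence of per-clip sentiment scores into a fixed-length
    emotional arc (illustrative; window and length are assumptions)."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(clip_scores, kernel, mode="same")
    # Resample to a common length so arcs from movies of different
    # durations can be compared directly.
    xs = np.linspace(0, len(smoothed) - 1, length)
    return np.interp(xs, np.arange(len(smoothed)), smoothed)

def k_medoids(dist, k, iters=100, seed=0):
    """Basic k-medoids over a precomputed distance matrix.
    Returns medoid indices (one prototype arc per family) and a
    cluster label for every arc."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:
                # New medoid: the member minimizing total distance to its cluster.
                costs = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(dist[:, medoids], axis=1)

# Hypothetical usage: `per_movie_scores` stands in for the per-movie
# sentiment time series produced by the audio/visual CNNs.
per_movie_scores = [np.random.randn(600) for _ in range(50)]
arcs = np.stack([emotional_arc(s) for s in per_movie_scores])
dist = np.linalg.norm(arcs[:, None, :] - arcs[None, :, :], axis=-1)
prototypes, families = k_medoids(dist, k=5)
```

The medoid arcs returned here would play the role of the distinct arc classes described above, and the per-movie family labels could then serve as categorical predictors of engagement measures such as comment counts.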
