Log2Intent: Towards Interpretable User Modeling via Recurrent Semantics Memory Unit

Modeling user behavior from unstructured software log-trace data is critical for providing personalized services (e.g., cross-platform recommendation). Existing user modeling approaches either fail to capture the long-term temporal information in log data or do not produce semantically meaningful results for interpreting user logs. To address these challenges, we propose Log2Intent, a framework for interpretable user modeling. Log2Intent adopts a deep sequential modeling architecture consisting of a temporal encoder, a semantic encoder, and a log action decoder, which together capture the long-term temporal information in user sessions. Moreover, to bridge the semantic gap between log-trace data and human language, a recurrent semantics memory unit (RSMU) is proposed to encode the annotation sentences from an auxiliary software tutorial dataset, and the output of the RSMU is fed into the semantic encoder of Log2Intent. Comprehensive experiments on a real-world Photoshop log-trace dataset with an auxiliary Photoshop tutorial dataset demonstrate the effectiveness of the proposed Log2Intent framework over state-of-the-art log-trace user modeling methods on three tasks: log annotation retrieval, user interest detection, and user next-action prediction.
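To make the described architecture concrete, the following is a minimal PyTorch sketch of a Log2Intent-style model: a temporal encoder over log actions, an RSMU-like memory module that attends over encoded tutorial annotation sentences, a semantic encoder over the memory reads, and a decoder that predicts the next log action. All module names, dimensions, attention form, and wiring details beyond what the abstract states are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of a Log2Intent-style model; details beyond the abstract are assumed.
import torch
import torch.nn as nn


class RecurrentSemanticsMemoryUnit(nn.Module):
    """Encodes tutorial annotation sentences and returns an attended memory read (assumed design)."""

    def __init__(self, word_dim, hidden_dim):
        super().__init__()
        self.sentence_encoder = nn.GRU(word_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, annotation_words, query):
        # annotation_words: (num_sentences, max_words, word_dim) word embeddings of tutorial sentences
        # query: (batch, hidden_dim), e.g. a temporal-encoder state for one log step
        _, h = self.sentence_encoder(annotation_words)     # (1, num_sentences, hidden_dim)
        memory = h.squeeze(0)                              # one slot per annotation sentence
        scores = query @ self.attn(memory).t()             # (batch, num_sentences)
        weights = torch.softmax(scores, dim=-1)
        return weights @ memory                            # (batch, hidden_dim)


class Log2IntentSketch(nn.Module):
    def __init__(self, num_actions, emb_dim=64, hidden_dim=128, word_dim=50):
        super().__init__()
        self.action_emb = nn.Embedding(num_actions, emb_dim)
        self.temporal_encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.rsmu = RecurrentSemanticsMemoryUnit(word_dim, hidden_dim)
        self.semantic_encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.decoder = nn.Linear(2 * hidden_dim, num_actions)  # next-action prediction head

    def forward(self, log_actions, annotation_words):
        # log_actions: (batch, seq_len) action ids from a user session
        temporal_states, _ = self.temporal_encoder(self.action_emb(log_actions))
        # Read the semantics memory once per time step, conditioned on the temporal state.
        reads = torch.stack(
            [self.rsmu(annotation_words, temporal_states[:, t])
             for t in range(temporal_states.size(1))],
            dim=1,
        )
        semantic_states, _ = self.semantic_encoder(reads)
        return self.decoder(torch.cat([temporal_states, semantic_states], dim=-1))


# Toy usage: 8 sessions of 12 actions over a 200-action vocabulary, and 30 tutorial
# sentences of up to 20 words (random vectors stand in for pre-trained word embeddings).
model = Log2IntentSketch(num_actions=200)
logits = model(torch.randint(0, 200, (8, 12)), torch.randn(30, 20, 50))
print(logits.shape)  # torch.Size([8, 12, 200])
```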
