A Learning-based Framework to Handle Multi-round Multi-party Influence Maximization on Social Networks

Because companies offering similar products or services compete with one another for resources and customers, this work proposes a learning-based framework to tackle the multi-round competitive influence maximization problem on a social network. We propose a data-driven model that leverages the concept of meta-learning to maximize the expected influence in the long run. Our model considers not only the network information but also the opponent's strategy when making a decision, and it maximizes the total influence at the end of the process rather than myopically pursuing short-term gains. We propose solutions for scenarios in which the opponent's strategy is known or unknown, and available or unavailable for training. We also show how an effective framework can be trained without manually labeled data, and we conduct several experiments to verify the effectiveness of the whole process.
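The paper's model itself is learned from data, but the multi-round competitive setting it addresses can be illustrated with a small simulation. The sketch below is purely hypothetical (it is not the authors' framework): two parties alternately place seeds on a toy random graph, influence spreads under a competitive independent cascade in which a node keeps the party that reaches it first, and party A chooses each seed by its Monte-Carlo estimated expected influence against a naive degree-based opponent B. The graph size, propagation probability, and opponent heuristic are all assumptions chosen for illustration.

```python
import random

def make_graph(n=30, p=0.1, seed=0):
    """Toy directed random graph as an adjacency dict (illustrative only)."""
    rng = random.Random(seed)
    return {u: [v for v in range(n) if v != u and rng.random() < p]
            for u in range(n)}

def competitive_cascade(graph, seeds_a, seeds_b, prob=0.3, rng=None):
    """One joint diffusion: a node keeps the party that reaches it first."""
    rng = rng or random.Random()
    color = {u: 'A' for u in seeds_a}
    color.update({u: 'B' for u in seeds_b})  # seed sets assumed disjoint
    frontier = list(color)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph[u]:
                if v not in color and rng.random() < prob:
                    color[v] = color[u]
                    nxt.append(v)
        frontier = nxt
    return color

def expected_a(graph, seeds_a, seeds_b, sims=50):
    """Monte-Carlo estimate of party A's expected influence."""
    rng = random.Random(1)
    total = 0
    for _ in range(sims):
        color = competitive_cascade(graph, seeds_a, seeds_b, rng=rng)
        total += sum(1 for c in color.values() if c == 'A')
    return total / sims

def pick_seed(graph, own, opp):
    """Greedily pick the unclaimed node with the best estimated payoff for A."""
    candidates = [u for u in graph if u not in own and u not in opp]
    return max(candidates, key=lambda u: expected_a(graph, own | {u}, opp))

# Multi-round game: A picks greedily; B is a naive degree-based opponent.
g = make_graph()
a, b = set(), set()
for _ in range(3):
    a.add(pick_seed(g, a, b))
    b.add(max((u for u in g if u not in a and u not in b),
              key=lambda u: len(g[u])))
print(expected_a(g, a, b))
```

A learned policy, as proposed in the paper, would replace the myopic `pick_seed` heuristic with a decision rule trained to maximize the final-round influence, conditioning on both the network state and the opponent's observed behavior.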
