In-Session Personalization for Talent Search

Previous approaches to candidate recommendation for talent search follow a common pattern: an initial search query is received, and a set of candidates is generated by a pre-trained model. Traditionally, these recommendations are final; the list of potential candidates is not modified unless the user explicitly changes the search criteria. In this paper, we propose a candidate recommendation model that takes the user's immediate feedback into account and updates the candidate recommendations at each step. This setting also tolerates highly uninformative initial queries, since the user's intent is pinpointed through feedback gathered during the search session. To achieve this, we employ an intent clustering method based on topic modeling, which partitions the candidate space into meaningful, possibly overlapping, subsets (which we call intent clusters) for each position. On top of these candidate segments, we apply a multi-armed bandit approach to choose the intent cluster most appropriate for the current session. We also present an online learning scheme that updates the intent clusters within the session, based on user feedback, to achieve further personalization. Our offline experiments, as well as the results from the online deployment of our solution, demonstrate the benefits of the proposed methodology.
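To make the bandit component concrete, the following is a minimal sketch (not the authors' implementation) of Beta-Bernoulli Thompson sampling over intent clusters: each arm is one cluster, and positive in-session feedback on a recommendation drawn from a cluster counts as a success for that arm. The cluster count, feedback signal, and simulated success rates below are all illustrative assumptions.

```python
import random

class ThompsonSamplingBandit:
    """Beta-Bernoulli Thompson sampling over a set of intent clusters."""

    def __init__(self, n_clusters, rng=None):
        self.rng = rng or random.Random()
        # Beta(1, 1) uniform prior on each cluster's success probability.
        self.alpha = [1.0] * n_clusters
        self.beta = [1.0] * n_clusters
        self.pulls = [0] * n_clusters

    def select_cluster(self):
        # Sample a plausible success rate per cluster; recommend from the best.
        samples = [self.rng.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, cluster, positive):
        # Fold the user's in-session feedback into the posterior.
        self.pulls[cluster] += 1
        if positive:
            self.alpha[cluster] += 1.0
        else:
            self.beta[cluster] += 1.0

# Simulated session: cluster 2 secretly matches the user's intent.
rng = random.Random(0)
bandit = ThompsonSamplingBandit(n_clusters=3, rng=rng)
true_rates = [0.2, 0.3, 0.8]  # hypothetical per-cluster feedback rates
for _ in range(500):
    arm = bandit.select_cluster()
    feedback = rng.random() < true_rates[arm]
    bandit.update(arm, feedback)
```

After a few hundred rounds of feedback, the sampler concentrates its pulls on the cluster whose candidates the user actually responds to, which is the behavior the paper relies on to recover intent from an uninformative initial query.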
