In-Session Personalization for Talent Search
Meng Meng, Sahin Cem Geyik, Ryan Smith, Vijay Dialani
[1] R. Weisberg. A-N-D, 2011.
[2] Steven C. H. Hoi, et al. Second Order Online Collaborative Filtering, 2013, ACML.
[3] F. Rosenblatt, et al. The perceptron: a probabilistic model for information storage and organization in the brain, 1958, Psychological Review.
[4] Jaana Kekäläinen, et al. Cumulated gain-based evaluation of IR techniques, 2002, TOIS.
[5] Peter Auer, et al. Finite-time Analysis of the Multiarmed Bandit Problem, 2002, Machine Learning.
[6] Thomas L. Griffiths, et al. Hierarchical Topic Models and the Nested Chinese Restaurant Process, 2003, NIPS.
[7] L. Williams, et al. Contents, 2020, Ophthalmology (Rochester, Minn.).
[8] Deepak Agarwal, et al. GLMix: Generalized Linear Mixed Models for Large-Scale Response Prediction, 2016, KDD.
[9] Yan Yan, et al. Search by Ideal Candidates: Next Generation of Talent Search at LinkedIn, 2016, WWW.
[10] Shipra Agrawal, et al. Analysis of Thompson Sampling for the Multi-armed Bandit Problem, 2011, COLT.
[11] Te-Ming Chang, et al. LDA-based Personalized Document Recommendation, 2013, PACIS.
[12] Robin Burke, et al. Personalization in Folksonomies Based on Tag Clustering, 2008.
[13] Shai Shalev-Shwartz, et al. Online Learning and Online Convex Optimization, 2012, Foundations and Trends in Machine Learning.
[14] Filip Radlinski, et al. Mortal Multi-Armed Bandits, 2008, NIPS.
[15] Steffen Rendle, et al. Factorization Machines, 2010, IEEE International Conference on Data Mining.
[16] David W. Aha, et al. Generalizing from Case Studies: A Case Study, 1992, ML.
[17] H. Akaike. A new look at the statistical model identification, 1974.
[18] Michael I. Jordan, et al. Latent Dirichlet Allocation, 2001, Journal of Machine Learning Research.
[19] Alexander J. Smola, et al. Reducing the sampling complexity of topic models, 2014, KDD.
[20] Daniele Quercia, et al. Auralist: introducing serendipity into music recommendation, 2012, WSDM.
[21] Filip Radlinski, et al. Learning diverse rankings with multi-armed bandits, 2008, ICML.
[22] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples, 1933.
[23] Kate Smith-Miles, et al. Cross-disciplinary perspectives on meta-learning for algorithm selection, 2009, CSUR.
[24] Oscar Fontenla-Romero, et al. Online Machine Learning, 2024, Machine Learning: Foundations, Methodologies, and Applications.
[25] Lars Kotthoff, et al. Algorithm Selection for Combinatorial Search Problems: A Survey, 2012, AI Magazine.
[26] Pushmeet Kohli, et al. A Fast Bandit Algorithm for Recommendation to Users With Heterogenous Tastes, 2013, AAAI.
[27] Takuya Kitazawa. Incremental Factorization Machines for Persistently Cold-starting Online Item Recommendation, 2016, arXiv.
[28] Yang Gao, et al. A Comparative Study on Parallel LDA Algorithms in MapReduce Framework, 2015, PAKDD.
[29] Yizhou Sun, et al. LCARS: a location-content-aware recommender system, 2013, KDD.
[30] Oskar Kohonen, et al. Using Topic Models in Content-Based News Recommender Systems, 2013, NODALIDA.
[31] H. Vincent Poor, et al. Bandit problems with side observations, 2005, IEEE Transactions on Automatic Control.
[32] J. Langford, et al. The Epoch-Greedy algorithm for contextual multi-armed bandits, 2007, NIPS.
[33] Michael I. Jordan. Learning in Graphical Models, 1999, NATO ASI Series.
[34] Katja Hofmann, et al. Contextual Bandits for Information Retrieval, 2011.
[35] Ganesh Venkataraman, et al. Personalized Expertise Search at LinkedIn, 2015, IEEE International Conference on Big Data (Big Data).