Addressing delayed feedback for continuous training with neural networks in CTR prediction
Sofia Ira Ktena | Alykhan Tejani | Lucas Theis | Pranay Kumar Myana | Deepak Dilipkumar | Ferenc Huszár | Steven Yoo | Wenzhe Shi