Nicola Gatti | Alessandro Nuara | Matteo Castiglioni | Francesco Trovò | Giulia Romano | Giorgio Spadaro
[1] Renato Paes Leme, et al. Auction Design for ROI-Constrained Buyers, 2018, WWW.
[2] Marcello Restelli, et al. When Gaussian Processes Meet Combinatorial Bandits: GCB, 2018.
[3] John N. Tsitsiklis, et al. Online Learning with Sample Path Constraints, 2009, J. Mach. Learn. Res.
[4] Steffen Udluft, et al. Safe Exploration for Reinforcement Learning, 2008, ESANN.
[5] Michèle Sebag, et al. Exploration vs Exploitation vs Safety: Risk-Aware Multi-Armed Bandits, 2013, ACML.
[6] Nikhil R. Devanur, et al. The Price of Truthfulness for Pay-per-Click Auctions, 2009, EC '09.
[7] Marcello Restelli, et al. Dealing with Interdependencies and Uncertainty in Multi-Channel Advertising Campaigns Optimization, 2019, WWW.
[8] K. J. Ray Liu, et al. Online Convex Optimization With Time-Varying Constraints and Bandit Feedback, 2019, IEEE Transactions on Automatic Control.
[9] Marcello Restelli, et al. A Combinatorial-Bandit Algorithm for the Online Joint Bid/Budget Optimization of Pay-per-Click Advertising Campaigns, 2018, AAAI.
[10] Daniele Calandriello, et al. Safe Policy Iteration, 2013, ICML.
[11] Tao Qin, et al. Multi-Armed Bandit with Budget Constraint and Variable Costs, 2013, AAAI.
[12] Tie-Yan Liu, et al. Joint Optimization of Bid and Budget Allocation in Sponsored Search, 2012, KDD.
[13] S. Muthukrishnan, et al. Stochastic Models for Budget Optimization in Search-Based Advertising, 2007, WINE.
[14] Aleksandrs Slivkins, et al. Bandits with Knapsacks, 2013, IEEE 54th Annual Symposium on Foundations of Computer Science.
[15] Christos Thrampoulidis, et al. Regret Bounds for Safe Gaussian Process Bandit Optimization, 2021, IEEE International Symposium on Information Theory (ISIT).
[16] Alkis Gotovos, et al. Safe Exploration for Optimization with Gaussian Processes, 2015, ICML.
[17] Javier García, et al. Safe Exploration of State and Action Spaces in Reinforcement Learning, 2012, J. Artif. Intell. Res.
[18] Nicola Gatti, et al. Online Joint Bid/Daily Budget Optimization of Internet Advertising Campaigns, 2020, Artif. Intell.
[19] Marcello Restelli, et al. Budgeted Multi-Armed Bandit in Continuous Action Space, 2016, ECAI.
[20] Carl E. Rasmussen, et al. Gaussian Processes for Machine Learning, 2005, Adaptive Computation and Machine Learning.
[21] Tim Roughgarden, et al. Algorithmic Game Theory, 2007.
[22] Christos Thrampoulidis, et al. Stage-wise Conservative Linear Bandits, 2020, NeurIPS.
[23] Nicole Immorlica, et al. Dynamics of Bid Optimization in Online Advertisement Auctions, 2007, WWW '07.
[24] Wei Chen, et al. Combinatorial Multi-Armed Bandit: General Framework and Applications, 2013, ICML.
[25] Juong-Sik Lee, et al. Impact of ROI on Bidding and Revenue in Sponsored Search Advertisement Auctions, 2006.
[26] Aditya Gopalan, et al. On Kernelized Multi-armed Bandits, 2017, ICML.
[27] Jon Feldman, et al. Budget Optimization in Search-Based Advertising Auctions, 2007, EC '07.
[28] Michalis Vazirgiannis, et al. Toward an Integrated Framework for Automated Development and Optimization of Online Advertising Campaigns, 2014, Intell. Data Anal.
[29] Andreas Krause, et al. Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting, 2009, IEEE Transactions on Information Theory.