Na Li, Xin Chen, Yujie Tang
[1] Yan Zhang, et al. Improving the Convergence Rate of One-Point Zeroth-Order Optimization using Residual Feedback, 2020, arXiv.
[2] Sean P. Meyn, et al. Model-Free Primal-Dual Methods for Network Optimization with Application to Real-Time Optimal Power Flow, 2020, American Control Conference (ACC).
[3] I. Mareels, et al. Extremum seeking from 1922 to 2010, 2010, Proceedings of the 29th Chinese Control Conference.
[4] Kartik B. Ariyur, et al. Real-Time Optimization by Extremum-Seeking Control, 2003.
[5] Na Li, et al. Robust hybrid zero-order optimization algorithms with acceleration via averaging in time, 2020, Automatica.
[6] Jinfeng Yi, et al. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models, 2017, AISec@CCS.
[7] Ning Qian, et al. On the momentum term in gradient descent learning algorithms, 1999, Neural Networks.
[8] Yurii Nesterov, et al. Random Gradient-Free Minimization of Convex Functions, 2015, Foundations of Computational Mathematics.
[9] Ambuj Tewari, et al. Improved Regret Guarantees for Online Smooth Convex Optimization with Bandit Feedback, 2011, AISTATS.
[10] Angelia Nedic, et al. A Dual Approach for Optimal Algorithms in Distributed Optimization over Networks, 2020, Information Theory and Applications Workshop (ITA).
[11] Miroslav Krstic, et al. Performance improvement and limitations in extremum seeking control, 2000.
[12] Jorge I. Poveda, et al. Model-Free Optimal Voltage Control via Continuous-Time Zeroth-Order Methods, 2021, arXiv.
[13] Qing Tao, et al. The Role of Momentum Parameters in the Optimal Convergence of Adaptive Polyak's Heavy-ball Methods, 2021, ICLR.
[14] Ronen Eldan, et al. Bandit Smooth Convex Optimization: Improving the Bias-Variance Tradeoff, 2015, NIPS.
[15] Boris Polyak. Some methods of speeding up the convergence of iteration methods, 1964, USSR Computational Mathematics and Mathematical Physics.
[16] Ying Tan, et al. On non-local stability properties of extremum seeking control, 2006, Automatica.
[17] Sebastian Ruder. An overview of gradient descent optimization algorithms, 2016, arXiv.
[18] Martin J. Wainwright, et al. Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems, 2018, AISTATS.
[19] Pramod K. Varshney, et al. A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning: Principals, Recent Advances, and Applications, 2020, IEEE Signal Processing Magazine.
[20] Ohad Shamir, et al. An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback, 2015, Journal of Machine Learning Research.
[21] Saeed Ghadimi, et al. Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming, 2013, SIAM Journal on Optimization.
[22] Adam Tauman Kalai, et al. Online convex optimization in the bandit setting: gradient descent without a gradient, 2004, SODA '05.
[23] Hans-Bernd Dürr, et al. Saddle Point Seeking for Convex Optimization Problems, 2013, NOLCOS.
[24] Na Li, et al. Distributed Reinforcement Learning for Decentralized Linear Quadratic Control: A Derivative-Free Policy Optimization Approach, 2019, IEEE Transactions on Automatic Control.
[25] Maojiao Ye, et al. Distributed Extremum Seeking for Constrained Networked Optimization and Its Application to Energy Consumption Control in Smart Grid, 2016, IEEE Transactions on Control Systems Technology.