We focus on a simulation-based optimization problem: choosing the best design from a feasible space. Although the simulation model can be queried with finite samples, its internal processing rule cannot be exploited during optimization. We formulate the sampling process as a policy-search problem and solve it from the perspective of Reinforcement Learning (RL). Concretely, we apply the Actor-Critic (AC) framework, where the Critic serves as a surrogate model that predicts the performance of unknown designs, while the Actor encodes the sampling policy to be optimized. We design the updating rules and propose two algorithms for the cases where the feasible space is continuous and discrete, respectively. Experiments validate the effectiveness of the proposed algorithms: two toy examples intuitively illustrate how they work, and two more complex tasks, an adversarial attack task and an RL task, demonstrate their effectiveness on large-scale problems. The results show that the proposed algorithms handle these problems successfully. Notably, in the RL task, our methods offer a new perspective on robot control by treating the task as a simulation model and optimizing the policy-generating process, whereas existing works typically optimize the policy itself directly.
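To make the described loop concrete, below is a minimal sketch, not the authors' exact algorithm, of the continuous-space case: the black-box simulator `f` is a hypothetical placeholder objective, the Critic is a small network fit as a surrogate to observed (design, performance) pairs, and the Actor is a Gaussian sampling policy over designs whose parameters are updated through the surrogate.

```python
# Minimal Actor-Critic sketch for simulation-based optimization (assumptions:
# `f` is a stand-in black-box simulator; network sizes and learning rates are
# illustrative, not taken from the paper).
import torch
import torch.nn as nn

def f(x):
    # Hypothetical black-box simulation model: noisy performance of design x.
    return -((x - 0.7) ** 2).sum(dim=-1) + 0.01 * torch.randn(x.shape[0])

dim = 4
critic = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))  # surrogate model
mu = torch.zeros(dim, requires_grad=True)        # Actor: mean of the sampling policy
log_std = torch.zeros(dim, requires_grad=True)   # Actor: log-std of the sampling policy
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-2)
opt_a = torch.optim.Adam([mu, log_std], lr=1e-2)

for step in range(200):
    # Actor samples a batch of candidate designs from the current policy.
    x = mu + log_std.exp() * torch.randn(32, dim)
    y = f(x.detach())                            # query the simulator with finite samples

    # Critic update: fit the surrogate to the newly observed (design, value) pairs.
    loss_c = ((critic(x.detach()).squeeze(-1) - y) ** 2).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # Actor update: move the policy toward designs the surrogate predicts to
    # perform well, differentiating through reparameterized samples.
    loss_a = -critic(mu + log_std.exp() * torch.randn(32, dim)).mean()
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

print("estimated best design:", mu.detach())
```

For the discrete case, the same loop applies with the Gaussian policy replaced by a categorical distribution over designs and the reparameterized gradient replaced by a score-function (REINFORCE-style) estimate.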