Survey of apprenticeship learning based on reward function learning

This paper surveys apprenticeship learning based on reward function learning. Both the historical basis of the field and a broad selection of current work are reviewed. Two kinds of algorithms, apprenticeship learning methods based on inverse reinforcement learning (IRL) and the maximum margin planning (MMP) framework, are discussed under the respective assumptions of linear and nonlinear reward functions, and the two are compared under the linear assumption. The former can be implemented with an efficient approximate method but makes the strong assumption that the demonstrations are optimal; the latter takes a form that is comparatively easy to extend but may require a large amount of computation. Finally, suggestions are given for further research on reward function learning in partially observable Markov decision process (POMDP) environments and in continuous or high-dimensional spaces, using either an approximate algorithm such as point-based value iteration (PBVI) or a feature abstraction algorithm based on dimension reduction methods such as principal component analysis (PCA); adopting these may alleviate the curse of dimensionality.
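
Under the linear assumption, the reward is modeled as an inner product between a weight vector and state features, R(s) = w·φ(s), and IRL-based apprenticeship learning reduces to matching feature expectations between the learner and the expert. The sketch below illustrates one projection update in the style of Abbeel and Ng's (2004) algorithm; the function names and argument layout are illustrative assumptions, not code from the surveyed papers.

```python
import numpy as np

def linear_reward(w, phi):
    """Linear reward assumption: R(s) = w . phi(s)."""
    return w @ phi

def projection_step(mu_expert, mu_new, mu_bar):
    """One projection update for IRL-based apprenticeship learning.

    mu_expert : expert feature expectations, estimated from
                demonstrations assumed to be (near-)optimal
    mu_new    : feature expectations of the most recent policy
    mu_bar    : current projection of mu_expert onto the span of
                previous policies' feature expectations
    """
    d = mu_new - mu_bar
    mu_bar = mu_bar + ((d @ (mu_expert - mu_bar)) / (d @ d)) * d
    w = mu_expert - mu_bar            # new reward weights
    t = float(np.linalg.norm(w))      # margin; terminate when t < eps
    return w, t, mu_bar
```

Each returned w defines a reward under which a new policy is computed by ordinary reinforcement learning; the loop stops once the margin t falls below a tolerance, at which point the learner's feature expectations are close to the expert's. The strong optimal-demonstration assumption enters through mu_expert.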
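
The MMP framework instead casts reward learning as structured maximum-margin estimation. As a point of reference, the objective introduced by Ratliff et al. (2006) can be written as below, where μ_i is the i-th demonstration's state-action visitation frequency vector, F_i the corresponding feature matrix, ℓ_i a loss vector, and 𝒢_i the set of feasible frequency vectors; the notation follows that paper rather than the survey itself:

$$\min_{w}\ \frac{\lambda}{2}\|w\|^{2} \;+\; \frac{1}{n}\sum_{i=1}^{n}\Big(\max_{\mu \in \mathcal{G}_i}\big(w^{\top}F_i + \ell_i^{\top}\big)\mu \;-\; w^{\top}F_i\,\mu_i\Big)$$

The inner maximization is itself a full planning problem that must be solved at every (sub)gradient step, which is the source of the large computational cost noted above, while the loss-augmented margin term is what makes the formulation comparatively easy to extend.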
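
For the suggested feature abstraction in continuous or high-dimensional spaces, the following is a minimal sketch of PCA-based dimension reduction applied to state features; the data layout (one row per visited state) is an assumption made for illustration.

```python
import numpy as np

def pca_abstract_features(X, k):
    """Project raw state features onto the top-k principal components.

    X : (n_states, d) matrix of raw, high-dimensional state features
    k : number of abstract features to keep (k << d)
    """
    Xc = X - X.mean(axis=0)                        # center the data
    # SVD of the centered data; rows of Vt are principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # (n_states, k)
```

The k abstract features would then stand in for φ(s) in the linear reward model, shrinking the weight vector that must be learned and thereby alleviating the curse of dimensionality.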