A General Model for Online Probabilistic Plan Recognition
[3] G. Schwarz. Estimating the Dimension of a Model, 1978.
[4] Lawrence R. Rabiner, et al. A tutorial on hidden Markov models and selected applications in speech recognition, 1989, Proc. IEEE.
[5] Frederick Jelinek, et al. Basic Methods of Probabilistic Context Free Grammars, 1992.
[6] Robert P. Goldman, et al. A Bayesian Model of Plan Recognition, 1993, Artif. Intell.
[7] Ann E. Nicholson, et al. Dynamic Belief Networks for Discrete Monitoring, 1994, IEEE Trans. Syst. Man Cybern.
[8] Edmund H. Durfee, et al. The Automated Mapping of Plans for Plan Recognition, 1994, AAAI.
[10] Michael P. Wellman, et al. Accounting for Context in Plan Recognition, with Application to Traffic Monitoring, 1995, UAI.
[11] Finn Verner Jensen. Introduction to Bayesian Networks, 1996.
[12] Craig Boutilier, et al. Context-Specific Independence in Bayesian Networks, 1996, UAI.
[13] Maja J. Matarić, et al. Learning to Use Selective Attention and Short-Term Memory in Sequential Tasks, 1996.
[14] G. Casella, et al. Rao-Blackwellisation of sampling schemes, 1996.
[15] Ronald E. Parr, et al. Hierarchical control and learning for Markov decision processes, 1998.
[16] Doina Precup, et al. Intra-Option Learning about Temporally Abstract Actions, 1998, ICML.
[17] Doina Precup, et al. Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning, 1999, Artif. Intell.
[18] Justin A. Boyan, et al. Least-Squares Temporal Difference Learning, 1999, ICML.
[19] Robert P. Goldman, et al. A New Model of Plan Recognition, 1999, UAI.
[20] Michael P. Wellman, et al. Probabilistic grammars for plan recognition, 1999.
[21] Michael P. Wellman, et al. Probabilistic State-Dependent Grammars for Plan Recognition, 2000, UAI.
[22] Thomas G. Dietterich. Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition, 1999, J. Artif. Intell. Res.
[23] Andrew G. Barto, et al. Automated State Abstraction for Options using the U-Tree Algorithm, 2000, NIPS.
[24] Nando de Freitas, et al. Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks, 2000, UAI.
[25] Svetha Venkatesh, et al. On the Recognition of Abstract Markov Policies, 2000, AAAI/IAAI.
[26] Kevin P. Murphy, et al. Linear-time inference in Hierarchical HMMs, 2001, NIPS.
[27] Bernhard Hengst, et al. Discovering Hierarchy in Reinforcement Learning with HEXQ, 2002, ICML.
[28] Hung H. Bui, et al. Efficient Approximate Inference for Online Probabilistic Plan Recognition, 2002.
[29] Svetha Venkatesh, et al. Recognizing and monitoring high-level behaviors in complex spatial environments, 2003, IEEE CVPR.
[30] Sridhar Mahadevan, et al. Recent Advances in Hierarchical Reinforcement Learning, 2003, Discret. Event Dyn. Syst.
[31] Nuttapong Chentanez, et al. Intrinsically Motivated Reinforcement Learning, 2004, NIPS.
[32] Ingrid Zukerman, et al. Bayesian Models for Keyhole Plan Recognition in an Adventure Game, 2004, User Modeling and User-Adapted Interaction.
[33] Yoram Singer, et al. The Hierarchical Hidden Markov Model: Analysis and Applications, 1998, Machine Learning.
[34] Alicia P. Wolfe, et al. Identifying useful subgoals in reinforcement learning by local graph partitioning, 2005, ICML.
[35] Richard S. Sutton, et al. Reinforcement Learning: An Introduction, 1998, MIT Press.
[37] Christos G. Cassandras, et al. Discrete-Event Systems, 2005, Handbook of Networked and Embedded Control Systems.