Process control using finite Markov chains with iterative clustering
[1] Jan Lunze,et al. On the Markov Property of Quantised State Measurement Sequences , 1998, Autom..
[2] Kaddour Najim,et al. Advanced Process Identification and Control , 2001 .
[3] Sigurd Skogestad,et al. Control structure design for complete chemical plants , 2004, Comput. Chem. Eng..
[4] Warren B. Powell,et al. Approximate Dynamic Programming: Solving the Curses of Dimensionality , 2007, Wiley Series in Probability and Statistics.
[5] Vijay S. Pande,et al. Everything you wanted to know about Markov State Models but were afraid to ask. , 2010, Methods.
[6] Kaddour Najim,et al. Multiple Model-Based Control Using Finite Controlled Markov Chains , 2009, Cognitive Computation.
[7] Yu Yang,et al. Probabilistic modeling and dynamic optimization for performance improvement and risk management of plant-wide operation , 2010, Comput. Chem. Eng..
[8] D. J. White,et al. A Survey of Applications of Markov Decision Processes , 1993 .
[9] C. Hsu,et al. Cell-to-Cell Mapping: A Method of Global Analysis for Nonlinear Systems , 1987 .
[10] N. Gordon,et al. Novel approach to nonlinear/non-Gaussian Bayesian state estimation , 1993 .
[11] Neil J. Gordon,et al. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking , 2002, IEEE Trans. Signal Process..
[12] J. MacQueen. Some methods for classification and analysis of multivariate observations , 1967 .
[13] Warren B. Powell,et al. What you should know about approximate dynamic programming , 2009, Naval Research Logistics (NRL).
[14] John G. Kemeny,et al. Finite Markov Chains , 1960 .
[15] Leslie Pack Kaelbling,et al. Planning and Acting in Partially Observable Stochastic Domains , 1998, Artif. Intell..
[16] U. Kortela,et al. Model-Based Multivariable Control of a Secondary Air System Using Controlled Finite Markov Chains , 2009 .
[17] Wee Chin Wong,et al. Approximate dynamic programming approach for process control , 2009 .
[18] Keyu Li,et al. Bayesian state estimation of nonlinear systems using approximate aggregate markov chains , 2006 .
[19] Nikos K. Logothetis,et al. Frontiers in Computational Neuroscience , 2022 .
[20] Vadim Mizonov,et al. Applications of Markov Chains in Particulate Process Engineering: A Review , 2004 .
[21] Bart De Schutter,et al. Learning-based model predictive control for Markov decision processes , 2005 .
[22] John M. Hancock,et al. K-Means Clustering , 2010 .
[23] Martin L. Puterman,et al. Markov Decision Processes: Discrete Stochastic Dynamic Programming , 1994 .
[24] Sergei Vassilvitskii,et al. k-means++: the advantages of careful seeding , 2007, SODA '07.
[25] Enso Ikonen,et al. Utilizing permutational symmetries in dynamic programming - with an application to the optimal control of water distribution systems under water demand uncertainties , 2013 .
[26] Rajesh P. N. Rao,et al. Decision Making Under Uncertainty: A Neural Model Based on Partially Observable Markov Decision Processes , 2010, Front. Comput. Neurosci..
[27] Andrew W. Moore,et al. Reinforcement Learning: A Survey , 1996, J. Artif. Intell. Res..
[28] Dimitri P. Bertsekas,et al. Dynamic Programming and Optimal Control, Two Volume Set , 1995 .
[29] Jay H. Lee,et al. Approximate dynamic programming based approach to process control and scheduling , 2006, Comput. Chem. Eng..
[30] Jong Min Lee,et al. Approximate Dynamic Programming Strategies and Their Applicability for Process Control: A Review and Future Directions , 2004 .