An Introduction to Variational Methods for Graphical Models
Michael I. Jordan | Tommi S. Jaakkola | Zoubin Ghahramani | Lawrence K. Saul
[1] Philipp Slusallek,et al. Introduction to real-time ray tracing , 2005, SIGGRAPH Courses.
[2] Michael I. Jordan,et al. Factorial Hidden Markov Models , 1995, Machine Learning.
[3] Michael I. Jordan,et al. Variational Probabilistic Inference and the QMR-DT Network , 2011, J. Artif. Intell. Res..
[4] David J. C. MacKay,et al. Comparison of Approximate Methods for Handling Hyperparameters , 1999, Neural Computation.
[5] Michael I. Jordan,et al. Improving the Mean Field Approximation Via the Use of Mixture Distributions , 1999, Learning in Graphical Models.
[6] Michael I. Jordan. Learning in Graphical Models , 1999, NATO ASI Series.
[7] David Heckerman,et al. A Tutorial on Learning with Bayesian Networks , 1999, Innovations in Bayesian Networks.
[8] Brian Sallans,et al. A Hierarchical Community of Experts , 1999, Learning in Graphical Models.
[9] Michael I. Jordan,et al. A Mean Field Learning Algorithm for Unsupervised Neural Networks , 1999, Learning in Graphical Models.
[10] David J. C. Mackay,et al. Introduction to Monte Carlo Methods , 1998, Learning in Graphical Models.
[11] Robert Cowell,et al. Introduction to Inference for Bayesian Networks , 1998, Learning in Graphical Models.
[12] Geoffrey E. Hinton,et al. A View of the Em Algorithm that Justifies Incremental, Sparse, and other Variants , 1998, Learning in Graphical Models.
[13] Jung-Fu Cheng,et al. Turbo Decoding as an Instance of Pearl's "Belief Propagation" Algorithm , 1998, IEEE J. Sel. Areas Commun..
[14] T. Jaakkola,et al. Improving the Mean Field Approximation Via the Use of Mixture Distributions , 1999, Learning in Graphical Models.
[15] Michael I. Jordan,et al. Variational methods and the QMR-DT database , 1998 .
[16] Catherine Blake,et al. UCI Repository of machine learning databases , 1998 .
[17] Neil D. Lawrence,et al. Approximating Posterior Distributions in Belief Networks Using Mixtures , 1997, NIPS.
[18] Michael I. Jordan,et al. Probabilistic Independence Networks for Hidden Markov Probability Models , 1997, Neural Computation.
[19] Tommi S. Jaakkola. Variational Methods for Inference and Estimation in Graphical Models , 1997 .
[20] Michael I. Jordan,et al. Variational methods for inference and estimation in graphical models , 1997 .
[21] Michael I. Jordan,et al. Recursive Algorithms for Approximating Probabilities in Graphical Models , 1996, NIPS.
[22] Michael I. Jordan,et al. Hidden Markov Decision Trees , 1996, NIPS.
[23] Michael I. Jordan,et al. Computing upper and lower bounds on likelihoods in intractable networks , 1996, UAI.
[24] Rina Dechter,et al. Bucket elimination: A unifying framework for probabilistic inference , 1996, UAI.
[25] Michael I. Jordan,et al. Mean Field Theory for Sigmoid Belief Networks , 1996, J. Artif. Intell. Res..
[26] Geoffrey E. Hinton,et al. Switching State-Space Models , 1996 .
[27] Michael I. Jordan,et al. Exploiting Tractable Substructures in Intractable Networks , 1995, NIPS.
[28] Steve R. Waterhouse,et al. Bayesian Methods for Mixtures of Experts , 1995, NIPS.
[29] Geoffrey E. Hinton,et al. The Helmholtz Machine , 1995, Neural Computation.
[30] Stuart J. Russell,et al. Stochastic simulation algorithms for dynamic probabilistic networks , 1995, UAI.
[31] K. Bathe. Finite Element Procedures , 1995 .
[32] Uffe Kjærulff,et al. Blocking Gibbs sampling in very large probabilistic expert systems , 1995, Int. J. Hum. Comput. Stud..
[33] Geoffrey E. Hinton,et al. The "wake-sleep" algorithm for unsupervised neural networks. , 1995, Science.
[34] Hill,et al. Annealed Theories of Learning , 1995 .
[35] Michael I. Jordan,et al. Learning in Boltzmann Trees , 1994, Neural Computation.
[36] Robert M. Fung,et al. Backward Simulation in Bayesian Networks , 1994, UAI.
[37] Uffe Kjærulff,et al. Reduction of Computational Complexity in Bayesian Networks Through Removal of Weak Dependences , 1994, UAI.
[38] Ross D. Shachter,et al. Global Conditioning for Probabilistic Inference in Belief Networks , 1994, UAI.
[39] Frank Jensen,et al. Optimal Junction Trees , 1994, UAI.
[40] Denise Draper,et al. Localized Partial Evaluation of Belief Networks , 1994, UAI.
[41] Michael I. Jordan. A statistical approach to decision tree modeling , 1994, COLT '94.
[42] Walter R. Gilks,et al. A Language and Program for Complex Bayesian Modelling , 1994 .
[43] Geoffrey E. Hinton,et al. Keeping the neural networks simple by minimizing the description length of the weights , 1993, COLT '93.
[44] Michael Luby,et al. Approximating Probabilistic Inference in Bayesian Belief Networks is NP-Hard , 1993, Artif. Intell..
[45] R. Martin Chavez,et al. Approximating Probabilistic Inference in Bayesian Belief Networks , 1993, IEEE Trans. Pattern Anal. Mach. Intell..
[46] C. Galland. The limitations of deterministic Boltzmann machine learning , 1993 .
[47] Radford M. Neal. Connectionist Learning of Belief Networks , 1992, Artif. Intell..
[48] Prakash P. Shenoy,et al. Valuation-Based Systems for Bayesian Decision Analysis , 1992, Oper. Res..
[49] Gregory F. Cooper,et al. An Empirical Analysis of Likelihood-Weighting Simulation on a Large, Multiply-Connected Belief Network , 1991, Computers and biomedical research, an international journal.
[50] Max Henrion,et al. Search-Based Methods to Bound Diagnostic Probabilities in Very Large Belief Nets , 1991, UAI.
[51] A. Hasman,et al. Probabilistic reasoning in intelligent systems: Networks of plausible inference , 1991 .
[52] Geoffrey E. Hinton,et al. Mean field networks that learn to discriminate temporally distorted strings , 1991 .
[53] D. Heckerman,et al. Introduction , 2022 .
[54] Thomas M. Cover,et al. Elements of Information Theory , 2005 .
[55] Michael I. Jordan,et al. Advances in Neural Information Processing Systems 30 , 1995 .
[56] Uffe Kjærulff. Triangulation of Graphs - Algorithms Giving Small Total State Space , 1990 .
[57] Eric Horvitz,et al. Bounded Conditioning: Flexible Inference for Decisions under Scarce Resources , 1989, UAI.
[58] Judea Pearl,et al. Probabilistic reasoning in intelligent systems - networks of plausible inference , 1991, Morgan Kaufmann series in representation and reasoning.
[59] Keiji Kanazawa,et al. A model for reasoning about persistence and causation , 1989 .
[60] Carsten Peterson,et al. A Mean Field Theory Learning Algorithm for Neural Networks , 1987, Complex Syst..
[61] J. J. Sakurai,et al. Modern Quantum Mechanics , 1986 .
[62] Geoffrey E. Hinton,et al. Learning and relearning in Boltzmann machines , 1986 .
[63] H. Saunders. Book Review: Numerical Methods in Finite Element Analysis, by K.-J. Bathe and E. L. Wilson, Prentice-Hall, Englewood Cliffs, NJ , 1978 .
[64] D. Rubin,et al. Maximum likelihood from incomplete data via the EM algorithm (with discussion) , 1977 .
[65] L. Baum,et al. A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains , 1970 .