Margin-based probabilistic networks
We introduce a new class of graphical models called Margin-based Probabilistic Networks (MPNs), which provide a probabilistic, object-oriented language for modeling structured stochastic processes. The rich representations allowed in MPNs make it possible to handle both discrete-valued and real-valued variables, as well as dependency relations beyond canonical forms, e.g., relations defined by kernels. We apply MPNs to visual perception problems at different levels: transformation-invariant image modeling, object recognition with density estimators, and semantic image annotation.
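The abstract does not spell out the MPN scoring function; as a rough illustration of the kind of margin-based, kernel-defined dependency it alludes to, a standard formulation in structured prediction scores an output y for an input x as
\[ s(x, y) = \sum_{i} \alpha_i \, K\big((x_i, y_i), (x, y)\big), \qquad \hat{y} = \arg\max_{y} s(x, y), \]
subject to margin constraints of the form \( s(x, y^{*}) \ge s(x, y) + \Delta(y^{*}, y) - \xi \) for all \( y \ne y^{*} \), where K is a joint kernel over input-output pairs, \(\Delta\) is a structured loss, and \(\xi\) is a slack variable. These symbols are illustrative assumptions and are not taken from the paper itself.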
Graphical models have been used to extract structured information from unstructured data, but doing so accurately and efficiently, especially on large-scale problems, remains difficult. Complex structures pose a great challenge to both inference and learning algorithms: neither exact nor approximate inference algorithms are feasible for graphical models with unbounded tree-width. We propose a heuristic inference algorithm for MPNs whose polynomial running time is established in the average case rather than the worst case. This algorithm learns from data and runs in polynomial time on average, even when no algorithm with worst-case polynomial time complexity exists. Moreover, conventional learning methods, when applied to structured prediction, do not guarantee consistency: they may fail to converge to the Bayes-optimal predictor in the limit of infinitely many samples. We introduce a new consistent learning algorithm for MPNs that uses the heuristic inference algorithm described above and likewise achieves polynomial time on average. Together, these two algorithms make MPNs a practical, generic probabilistic modeling language for tackling large-scale real-world problems.
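For reference on the consistency claim, and in standard notation not drawn from the paper: a learning procedure producing predictors f_n from n samples is consistent if its risk approaches the Bayes risk,
\[ R(f_n) = \mathbb{E}_{(x,y)}\big[\ell(f_n(x), y)\big] \;\xrightarrow{\;P\;}\; R^{*} = \inf_{f} R(f) \quad \text{as } n \to \infty, \]
where \(\ell\) is the task loss (e.g., a structured Hamming loss) and the infimum ranges over all measurable predictors. The claim above is that conventional structured-prediction learners may not satisfy this property, whereas the proposed MPN learning algorithm does.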