Inverting Markov kernels is closely tied to the important subject of Bayesian modelling and learning: Bayesian update is exactly kernel inversion. In this paper, we investigate how and when Markov kernels (also known as stochastic relations, probabilistic mappings, or simply kernels) can be inverted. We address the question both directly, in the category of measurable spaces, and indirectly, by interpreting kernels as Markov operators.

For the direct approach, we introduce a typed version of the category of Markov kernels and use the so-called 'disintegration of measures'. Here one has to specialise to measurable spaces arising from a simple class of topological spaces, e.g. Polish spaces (other choices are possible). Our method and result greatly simplify a recent development in Ref. [4].

For the operator approach, we use a cone version of the category of Markov operators (kernels seen as predicate transformers). That is to say, our linear operators are not merely continuous: they are required to satisfy the stronger condition of being ω-chain-continuous. Prior work shows that one obtains an adjunction in the form of a pair of contravariant and mutually inverse functors between the categories of L1- and L∞-cones [3]. Inversion, seen through the operator prism, is just adjunction. No topological assumption is needed.

We show that the two categories (Markov kernels and ω-chain-continuous Markov operators) are related by a family of contravariant functors Tp for 1 ≤ p ≤ ∞. The Tp's are Kleisli extensions of (duals of) the conditional-expectation functors introduced in Ref. [3]. With this bridge in place, we can prove that the two notions of inversion agree whenever both are defined: if f is a kernel and f† its direct inverse, then the operators T∞(f†) and T1(f) are adjoint to one another.

1998 ACM Subject Classification: Semantics of programming languages
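The slogan "Bayesian update is exactly kernel inversion" can be made concrete in the discrete case, where no disintegration machinery is needed. The sketch below is an illustration, not the paper's construction: a kernel f from X to Y is a row-stochastic matrix, a prior p on X and f determine a joint distribution, and the inverse kernel f† from Y to X is obtained by renormalising the joint along Y (Bayes' rule). The function name and variables are ours.

```python
import numpy as np

def bayesian_inverse(prior, kernel):
    """prior: shape (n,); kernel: shape (n, m), rows sum to 1.
    Returns (inverse kernel of shape (m, n), pushforward marginal on Y)."""
    joint = prior[:, None] * kernel      # joint[x, y] = p(x) * f(x, y)
    marginal = joint.sum(axis=0)         # pushforward measure on Y
    # Disintegrate: f_dagger(y, x) = joint[x, y] / marginal[y],
    # defined only marginal-almost-everywhere (0 where marginal[y] = 0).
    inverse = np.where(marginal[None, :] > 0,
                       joint / marginal[None, :], 0.0).T
    return inverse, marginal

prior = np.array([0.5, 0.5])                 # uniform prior on X = {0, 1}
kernel = np.array([[0.9, 0.1],               # likelihood f : X -> Y
                   [0.2, 0.8]])

f_dagger, q = bayesian_inverse(prior, kernel)
# f_dagger is again a Markov kernel on the support of q,
# and pushing q back through it recovers the prior.
assert np.allclose(f_dagger.sum(axis=1), 1.0)
assert np.allclose(q @ f_dagger, prior)
```

In the measure-theoretic setting of the paper this renormalisation step is exactly where disintegration of measures (and hence the restriction to, e.g., Polish spaces) is needed.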
[1] J. Norris et al. Probabilities and Potential, C: Potential Theory for Discrete and Continuous Semigroups, 2011.
[2] O. Kallenberg. Foundations of Modern Probability, Probability Theory and Stochastic Modelling, 2021.
[3] Vincent Danos et al. Giry and the Machine, MFPS, 2016.
[4] Jared Culbertson et al. A Categorical Foundation for Bayesian Probability, Applied Categorical Structures, 2012.
[5] Roman Fric et al. A Categorical Approach to Probability Theory, Studia Logica, 2010.
[6] Vincent Danos et al. Dirichlet is Natural, MFPS, 2015.
[7] P. Malliavin. Infinite Dimensional Analysis, 1993.
[8] Vincent Danos et al. Approximating Markov Processes by Averaging, J. ACM, 2009.
[9] Dominic R. Verity et al. ∞-Categories for the Working Mathematician, 2018.
[10] Dexter Kozen et al. Semantics of probabilistic programs, 20th Annual Symposium on Foundations of Computer Science (FOCS 1979), 1979.
[11] P. Selinger. Towards a semantics for higher-order quantum computation, 2004.
[12] Benjamin Naumann et al. Classical Descriptive Set Theory, 2016.
[13] Vincent Danos et al. Bisimulation and cocongruence for probabilistic systems, Inf. Comput., 2006.