Maximum likelihood estimation of discrete control processes
Consider the following “inverse stochastic control” problem. A statistician observes a realization of a controlled stochastic process $\{d_t, x_t\}$ consisting of the sequence of states $x_t$ and decisions $d_t$ of an agent at times $t = 1, \cdots, T$. The null hypothesis is that the agent’s behavior is generated from the solution to a Markovian decision problem. The inverse problem is to use the data $\{d_t, x_t\}$ to go backward and “uncover” the agent’s objective function $U$ and his beliefs about the law of motion of the state variables $p$. The problem is complicated by the fact that the statistician generally only observes a subset $x_t$ of the state variables $(x_t, \eta_t)$ observed by the agent. This paper formulates the inverse problem as a problem of statistical inference, explicitly accounting for unobserved state variables $\eta_t$, in order to produce a nondegenerate and internally consistent statistical model. Specifically, the functions $U$ and $p$ are assumed to depend on a vector of u...
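The estimation approach the abstract describes can be illustrated with a minimal sketch. The following Python example is purely hypothetical (the abstract is truncated and does not specify functional forms): it assumes a small finite-state Markov decision problem where the unobserved state $\eta_t$ enters as an i.i.d. type-I extreme-value shock, a common assumption in this literature that yields logit choice probabilities. The likelihood of the observed decisions is then computed by first solving the agent's dynamic program via a fixed-point contraction.

```python
import numpy as np

def solve_ev(U, P, beta, tol=1e-10, max_iter=1000):
    """Fixed point of the expected value function EV[d, x].

    U[d, x]: per-period utility of decision d in observed state x.
    P[d]:    transition matrix of x given decision d.
    beta:    discount factor (< 1, so the map is a contraction).
    The i.i.d. extreme-value shock eta gives the log-sum-exp "surplus".
    """
    nD, nX = U.shape
    EV = np.zeros((nD, nX))
    for _ in range(max_iter):
        v = U + beta * EV                        # choice-specific values
        surplus = np.log(np.exp(v).sum(axis=0))  # expected max over d, shape (nX,)
        EV_new = np.array([P[d] @ surplus for d in range(nD)])
        if np.max(np.abs(EV_new - EV)) < tol:
            return EV_new
        EV = EV_new
    return EV

def choice_probs(U, P, beta):
    """Conditional choice probabilities Pr(d | x), shape (nD, nX)."""
    v = U + beta * solve_ev(U, P, beta)
    ev = np.exp(v - v.max(axis=0))     # subtract max for numerical stability
    return ev / ev.sum(axis=0)

def log_likelihood(theta, data, P, beta=0.95):
    """Partial log-likelihood of the observed decisions given states.

    theta parametrizes U; the linear form below is purely illustrative.
    data is a list of observed (d_t, x_t) pairs.
    """
    U = np.array([[0.0, 0.0], [theta[0], theta[1]]])
    pr = choice_probs(U, P, beta)
    return sum(np.log(pr[d, x]) for d, x in data)
```

In this sketch, maximizing `log_likelihood` over `theta` "uncovers" the utility parameters, with the inner fixed-point computation nested inside each likelihood evaluation. Everything beyond the abstract's general setup (two states, two decisions, the logit shock, the linear utility) is an assumption made for illustration.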