Lossless Value Directed Compression of Complex User Goal States for Statistical Spoken Dialogue Systems

This paper presents initial results on the application of Value Directed Compression (VDC) to spoken dialogue management belief states for reasoning about complex user goals. On a small but realistic spoken dialogue system (SDS) problem, VDC generates a lossless compression which achieves a 6-fold reduction in the number of dialogue states required by a Partially Observable Markov Decision Process (POMDP) dialogue manager (DM). Reducing the number of dialogue states lowers the computational, memory, and storage requirements of the hardware needed to deploy such POMDP SDSs, thereby increasing the complexity of the systems that could feasibly be deployed. In addition, when on-line reinforcement learning is used to learn the DM policy, the compression should yield a corresponding 6-fold reduction in policy learning time. These are the first automatic compression results to be presented for POMDP SDS states which represent user goals as sets over possible domain objects.
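To make the VDC idea concrete, the sketch below shows one way a lossless linear compression can be computed for a small POMDP by building a Krylov basis: starting from the reward vectors and repeatedly applying the transition-observation dynamics, keeping only linearly independent vectors. This is a minimal, hedged illustration in the spirit of Value Directed Compression (Poupart and Boutilier), not the authors' implementation; the inputs `T`, `Z`, and `R`, the tolerance, and the helper names are assumptions introduced purely for illustration.

```python
import numpy as np

def lossless_vdc_basis(T, Z, R, tol=1e-9):
    """Illustrative Krylov-basis construction of a lossless linear compression.

    Assumed (hypothetical) inputs:
      T: dict a -> (n x n) matrix with T[a][s, s'] = P(s' | s, a)
      Z: dict (a, z) -> length-n vector with Z[a, z][s'] = P(z | s', a)
      R: (n x |A|) reward matrix with R[s, a]
    Returns F, an (n x m) matrix whose columns span the compressed space (m <= n).
    """
    # G[a, z] combines transition and observation probabilities, as used in
    # point-wise POMDP value backups: G[a, z] = T[a] * diag(Z[a, z]).
    G = {(a, z): T[a] * Z[a, z][np.newaxis, :] for (a, z) in Z}

    basis = []  # columns of F found so far

    def try_add(v):
        # Keep v only if it is linearly independent of the current basis.
        M = np.column_stack(basis + [v]) if basis else v[:, None]
        if np.linalg.matrix_rank(M, tol=tol) > len(basis):
            basis.append(v)
            return True
        return False

    # Seed the basis with the (independent) per-action reward vectors.
    frontier = [R[:, a] for a in range(R.shape[1]) if try_add(R[:, a])]

    # Close the subspace under every G[a, z]; termination is guaranteed
    # because the basis can grow to at most n vectors.
    while frontier:
        v = frontier.pop()
        for g in G.values():
            w = g @ v
            if try_add(w):
                frontier.append(w)

    return np.column_stack(basis)
```

Under this construction, the number of basis columns m plays the role of the compressed state count; a 6-fold reduction as reported in the paper would correspond to m being roughly one sixth of the original number of dialogue states, with the compressed dynamics and rewards recoverable exactly from the basis (hence "lossless").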