Modeling sequences with quantum states: a look under the hood

Classical probability distributions on sets of sequences can be modeled using quantum states. Here, we do so with a quantum state that is pure and entangled. Because it is entangled, the reduced densities that describe subsystems also carry information about the complementary subsystem. This is in contrast to the classical marginal distributions on a subsystem, in which information about the complementary subsystem has been integrated out and lost. A training algorithm based on the density matrix renormalization group (DMRG) procedure exploits the extra information contained in the reduced densities and organizes it into a tensor network model. Understanding this extra information allows us to examine the mechanics of the DMRG algorithm and to study the generalization error of the resulting model. As an illustration, we work with the even-parity dataset and estimate the generalization error as a function of the fraction of the dataset used in training.
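
To make the reduced-density-versus-marginal distinction concrete, here is a minimal NumPy sketch (my illustration, not code from the paper). It builds the uniform superposition over even-parity bitstrings, forms the reduced density matrix of the first two bits by a partial trace, and compares it with the classical marginal on those bits. The choice n = 4 and the variable names are assumptions made for the example.

```python
import numpy as np
from itertools import product

n = 4  # number of bits (illustrative choice)
# Even-parity dataset: all length-n bitstrings with an even number of 1s
data = [x for x in product([0, 1], repeat=n) if sum(x) % 2 == 0]

# Pure state |psi> = (1/sqrt(N)) * sum over dataset strings x of |x>
psi = np.zeros(2 ** n)
for x in data:
    idx = int("".join(map(str, x)), 2)  # bitstring -> computational-basis index
    psi[idx] = 1.0
psi /= np.linalg.norm(psi)

# Reduced density matrix on the first k bits: rho_A = Tr_B |psi><psi|
k = 2
psi_mat = psi.reshape(2 ** k, 2 ** (n - k))  # rows index subsystem A, columns B
rho_A = psi_mat @ psi_mat.T                  # partial trace over B

# Classical marginal on the same bits: just the diagonal of rho_A
p_marginal = np.diag(rho_A)

print(np.round(rho_A, 3))       # off-diagonal entries couple equal-parity states
print(np.round(p_marginal, 3))  # uniform: the marginal alone is blind to parity
```

Running this, the diagonal of `rho_A` (the classical marginal) is uniform, so by itself it reveals nothing about the parity constraint; the off-diagonal entries of `rho_A`, which couple basis states of equal parity, carry the information about the complementary subsystem that the DMRG-based training can exploit.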
