Learning structural correspondences across different linguistic domains with synchronous neural language models

We introduce a novel framework for learning structural correspondences between two linguistic domains, based on training synchronous neural language models that are co-regularized across both domains simultaneously. Preliminary results indicate that the framework learns similar feature representations for correlated objects in the two domains, suggesting that it may be an effective approach to transfer learning across linguistic domains.
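
To make the training setup concrete, the following is a minimal sketch of one way such co-regularized synchronous training could be implemented. It assumes toy embedding-based language models for each domain, a hypothetical batch of aligned (correlated) object pairs, and a regularization weight `lam`; none of these names, shapes, or modeling choices come from the paper itself.

```python
# Illustrative sketch only: the models, data shapes, and hyperparameters
# are assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn

VOCAB, DIM = 1000, 64

class DomainLM(nn.Module):
    """Toy neural language model: embeds each token and predicts the next."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        h = self.embed(tokens)   # feature representations, (B, T, dim)
        return self.out(h), h    # next-token logits and features

lm_a, lm_b = DomainLM(VOCAB, DIM), DomainLM(VOCAB, DIM)
opt = torch.optim.Adam(list(lm_a.parameters()) + list(lm_b.parameters()))
ce = nn.CrossEntropyLoss()
lam = 0.1  # co-regularization weight (assumed hyperparameter)

def training_step(batch_a, batch_b, aligned_a, aligned_b):
    """One synchronous step: per-domain LM losses plus a co-regularizer
    that pulls the representations of correlated objects together."""
    logits_a, _ = lm_a(batch_a[:, :-1])
    logits_b, _ = lm_b(batch_b[:, :-1])
    loss_a = ce(logits_a.reshape(-1, VOCAB), batch_a[:, 1:].reshape(-1))
    loss_b = ce(logits_b.reshape(-1, VOCAB), batch_b[:, 1:].reshape(-1))
    # Co-regularization: penalize distance between the embeddings of
    # objects known to correspond across the two domains.
    feat_a = lm_a.embed(aligned_a)
    feat_b = lm_b.embed(aligned_b)
    co_reg = ((feat_a - feat_b) ** 2).sum(dim=-1).mean()
    loss = loss_a + loss_b + lam * co_reg
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In this reading, the squared-distance term plays the role of the co-regularizer: minimizing it alongside the two language-modeling losses encourages the two models to assign similar feature representations to correlated objects, which is the property the abstract's preliminary results report.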