Natural language processing applications often suffer from the curse of dimensionality. In this paper, we propose a low-dimensional text representation learning algorithm that preserves the pairwise similarity relations among texts. Our method maximizes the log-probability of observing a text's similar texts conditioned on its feature representation. To generate enough similar text pairs for training the objective, we first build an adjacency graph based on the pairwise similarity relations among the texts, and then propose a simulated sampling strategy to generate co-occurrence text sequences from the adjacency graph. Experiments on four long- and short-text datasets demonstrate that our method outperforms several state-of-the-art dimensionality reduction methods. On text clustering, our method also outperforms Doc2vec on all datasets except 20 Newsgroups. Our method is not limited to texts and can also be applied to representation learning for images.
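To make the pipeline described above concrete, here is a minimal sketch of the graph-building and sampling steps. It assumes cosine similarity for the pairwise relations, a k-nearest-neighbor adjacency graph, and truncated random walks as one plausible form of the simulated sampling; the helper names build_knn_graph and sample_walks are hypothetical, not from the paper. The resulting ID sequences would then feed a skip-gram-style trainer implementing the log-probability objective.

```python
# Hypothetical sketch of the pipeline: (1) build a similarity-based
# adjacency graph over texts, (2) sample co-occurrence sequences of
# text IDs by simulated random walks on that graph. The sequences
# would then train an objective that maximizes the log-probability
# of observing a text's similar texts given its representation.
import numpy as np

def build_knn_graph(features, k=5):
    """Adjacency lists: each text links to its k most similar texts
    under cosine similarity (one plausible pairwise similarity)."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # exclude self-similarity
    return [np.argsort(-row)[:k].tolist() for row in sims]

def sample_walks(adj, num_walks=10, walk_len=8, rng=None):
    """Generate co-occurrence text sequences by uniform random
    walks over the adjacency graph."""
    rng = rng or np.random.default_rng(0)
    walks = []
    for _ in range(num_walks):
        for start in range(len(adj)):
            walk = [start]
            for _ in range(walk_len - 1):
                walk.append(int(rng.choice(adj[walk[-1]])))
            walks.append(walk)
    return walks

# Toy usage: 6 texts with random 20-dimensional feature vectors
# standing in for, e.g., bag-of-words representations.
X = np.random.default_rng(1).random((6, 20))
walks = sample_walks(build_knn_graph(X, k=2))
print(walks[0])  # a sequence of text IDs, e.g. [0, 3, 0, ...]
```

Under these assumptions the sampling plays the same role that sentence context plays in word embedding models: texts that co-occur frequently in walks are pulled toward nearby points in the learned low-dimensional space.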