Although the problem of plagiarism identification remains relevant, modern detection methods are still resource-intensive. This paper reports a more efficient alternative to existing solutions.
The devised system for identifying patterns in multilingual texts compares two texts and determines, using several approaches, whether the second text is a translation of the first. The approach taken in this study is based on Rényi entropy.
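For reference, the Rényi entropy of order alpha of a discrete distribution p is H_alpha = (1/(1 - alpha)) * log2(sum_i p_i^alpha), which reduces to the Shannon entropy as alpha approaches 1. The PHP sketch below illustrates this calculation; the function name and the word-frequency input format are illustrative assumptions, not the paper's published code.

<?php
// Sketch (assumed interface): Rényi entropy of order $alpha for a discrete distribution.
// $counts: associative array word => frequency (hypothetical input format).
function renyi_entropy(array $counts, float $alpha): float
{
    $total = array_sum($counts);
    if (abs($alpha - 1.0) < 1e-9) {
        // Limit case alpha -> 1: Shannon entropy (in bits).
        $h = 0.0;
        foreach ($counts as $c) {
            $p = $c / $total;
            if ($p > 0) {
                $h -= $p * log($p, 2);
            }
        }
        return $h;
    }
    $sum = 0.0;
    foreach ($counts as $c) {
        $p = $c / $total;
        if ($p > 0) {
            $sum += pow($p, $alpha);
        }
    }
    return log($sum, 2) / (1.0 - $alpha);
}

// Example: entropy of a small keyword-frequency profile.
echo renyi_entropy(['time' => 12, 'man' => 7, 'day' => 5], 2.0), "\n";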
An original text from an English writer's work and five texts in Russian were selected for this research. The genuine and "fake" translations chosen included translations produced by Google Translate and Yandex Translate, an author's book translation, a text from another work by an English writer, and a fake text. The fake text was compiled with the same keyword frequencies as the authentic text.
After forming a key series of high-frequency words for the original text, the corresponding key series were identified for the other texts. The entropies of the texts were then calculated with the texts divided into "sentences" and "paragraphs".
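The abstract does not specify the exact segmentation or keyword-matching rules, so the PHP sketch below only illustrates the general idea: split a text into "sentences" or "paragraphs" and count occurrences of the key-series words in each segment. All function names and splitting heuristics here are assumptions.

<?php
// Hypothetical helpers: segment a text and count key-series words per segment.
// The paper's exact rules are not given in the abstract; these are assumptions.

// Split a text into "paragraphs" (blank-line separated) or "sentences"
// (naive split on ., ! or ? followed by whitespace).
function split_segments(string $text, string $mode = 'sentences'): array
{
    $pattern = ($mode === 'paragraphs') ? '/\n\s*\n/u' : '/(?<=[.!?])\s+/u';
    return preg_split($pattern, trim($text), -1, PREG_SPLIT_NO_EMPTY);
}

// Count occurrences of each keyword from the key series in one segment.
function keyword_counts(string $segment, array $keywords): array
{
    $counts = [];
    $lower = mb_strtolower($segment);
    foreach ($keywords as $word) {
        $counts[$word] = substr_count($lower, mb_strtolower($word));
    }
    return $counts;
}

The per-segment counts obtained this way can then be passed to the renyi_entropy() sketch above to produce an entropy series for each text.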
A Minkowski metric was used to calculate the proximity of the texts. It underlies the calculations of the Hamming distance, the Cartesian distance, the distance between the centers of mass, the distance between the geometric centers, and the distance between the centers of parametric means.
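For reference, the Minkowski distance of order p between vectors x and y is (sum_i |x_i - y_i|^p)^(1/p); p = 1 and p = 2 give the Manhattan- and Euclidean-type special cases. A minimal PHP sketch, with an illustrative function name:

<?php
// Sketch (assumed interface): Minkowski distance of order $p
// between two equal-length numeric vectors.
function minkowski_distance(array $a, array $b, float $p): float
{
    $sum = 0.0;
    foreach ($a as $i => $ai) {
        $sum += pow(abs($ai - $b[$i]), $p);
    }
    return pow($sum, 1.0 / $p);
}

// Example: distance between two hypothetical entropy profiles.
echo minkowski_distance([1.8, 2.1, 1.4], [1.7, 2.3, 1.5], 2.0), "\n";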
It was found that the proximity of texts is best determined by calculating the relative distances between the centers of parametric means (exceeding 3 for "fake" texts and less than 1 for translations).
Calculating text proximity with the Rényi-entropy-based algorithm reported in this work makes it possible to save resources and time compared to methods based on neural networks. All the raw data and an example of the entropy calculation in PHP are publicly available.