Employing Thematic Variables for Enhancing Classification Accuracy Within Author Discrimination Experiments

This article reports on experiments performed with a large corpus, aiming at separating texts according to authorial style. The study initially focuses on whether classification accuracy regarding author identity can be improved if the text topic is known in advance. The experimental results indicate that this kind of information contributes to more accurate author recognition. Furthermore, as the diversity of a topic set increases, the classification accuracy decreases. In general, the experimental results indicate that taking into account knowledge of the text topic allows the construction of specialized models for each author with higher classification accuracy. For example, by focusing on a specific topic, the accuracy with which the author identity is determined increases, with the exact gain depending on the topic. This also holds when the topic of the text is determined more broadly, as a set of topic categories. In an associated task, the most salient parameters within an 85-parameter vector are studied for a number of subsets of the corpus, where each subset contains speeches on a single topic. These studies indicate that the salient parameters are the same across the different subsets. Two fixed data vectors have been defined, using 16 and 25 parameters, respectively. The classification accuracy obtained, even with the smaller data vector, is only 5% less than with the complete vector. This indicates that the parameters retained in the reduced vectors carry a large amount of discriminatory information and suffice for an accurate classification of the corpus.
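The reduction described above, from an 85-parameter stylometric vector to a fixed subset of salient parameters, can be illustrated with a minimal sketch. The sketch below is not the paper's method: it uses synthetic data, a simple ranking of features by the difference of per-author training means, and a nearest-centroid classifier, all of which are assumptions made purely for illustration. It only shows the general pattern of comparing classification accuracy with the full vector against a reduced 16-parameter vector.

```python
import random

random.seed(0)

N_FEATURES = 85   # full stylometric vector size, as in the study
N_REDUCED = 16    # smaller of the two reduced vectors

# Synthetic stand-in for texts: two "authors", each a Gaussian cloud;
# only the first 20 features genuinely differ between them.
def make_samples(mean_shift, n=40):
    return [[random.gauss(mean_shift[i], 1.0) for i in range(N_FEATURES)]
            for _ in range(n)]

shift_a = [0.0] * N_FEATURES
shift_b = [1.5 if i < 20 else 0.0 for i in range(N_FEATURES)]

train_a, test_a = make_samples(shift_a), make_samples(shift_a, 20)
train_b, test_b = make_samples(shift_b), make_samples(shift_b, 20)

def centroid(samples, feats):
    # Per-feature mean of the training samples, restricted to `feats`.
    return [sum(s[f] for s in samples) / len(samples) for f in feats]

def classify(sample, cen_a, cen_b, feats):
    # Nearest-centroid rule on the selected features only.
    da = sum((sample[f] - m) ** 2 for f, m in zip(feats, cen_a))
    db = sum((sample[f] - m) ** 2 for f, m in zip(feats, cen_b))
    return 'A' if da < db else 'B'

def accuracy(feats):
    cen_a, cen_b = centroid(train_a, feats), centroid(train_b, feats)
    hits = sum(classify(s, cen_a, cen_b, feats) == 'A' for s in test_a)
    hits += sum(classify(s, cen_a, cen_b, feats) == 'B' for s in test_b)
    return hits / (len(test_a) + len(test_b))

# Rank features by the absolute difference of per-author training means
# and keep the top N_REDUCED as the "salient" parameters.
all_feats = list(range(N_FEATURES))
sep = [abs(centroid(train_a, [f])[0] - centroid(train_b, [f])[0])
       for f in all_feats]
reduced = sorted(all_feats, key=lambda f: sep[f], reverse=True)[:N_REDUCED]

print(f"full vector accuracy:    {accuracy(all_feats):.2f}")
print(f"reduced vector accuracy: {accuracy(reduced):.2f}")
```

On this toy data the reduced vector performs close to the full one, mirroring the paper's finding that a small set of salient parameters retains most of the discriminatory information; the actual study's parameters, selection procedure, and classifier are of course different.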
