Citing-side normalization of journal impact: A robust variant of the Audience Factor

A new type of impact measure, the “Audience Factor” (AF), was recently introduced. It is a variant of the journal impact factor in which each emitted citation is weighted inversely to the citing source's propensity to cite. In the initial design, this propensity was estimated from the average bibliography length of the source, with two options: a journal-level average or a field-level average. Citing-side normalization controls for propensity to cite, the main determinant of impact-factor variability across fields, while preserving the variability due to exports and imports of citations between fields and to differences in growth. It does not account for influence chains, the powerful approach taken in the wake of Pinski and Narin's influence weights. Here we introduce a robust variant of the audience factor that seeks to combine the respective advantages of the two options for calculating bibliography lengths: the classification-free character of the journal-level average, and the robustness and freedom from ad hoc settings of the field-level average. The proposed variant relies on the relative neighborhood of a citing journal, regarded as its micro-field and assumed to reflect citation behavior in that area of science. The methodology allows a wide range of variation in the neighborhood, reflecting the local citation network, and partly alleviates the “cross-scale” normalization issue. Citing-side normalization is a general principle that may be extended to other citation counts.
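The weighting principle itself is simple enough to sketch. The following Python fragment is a minimal illustration of citing-side normalization in the spirit of the AF, assuming journal-to-journal citation counts and mean bibliography lengths have already been computed; the names (`citation_counts`, `mean_refs_by_journal`, `n_citable_items`) are illustrative and not taken from the paper.

```python
def audience_factor(cited_journal, citation_counts, mean_refs_by_journal, n_citable_items):
    """Citing-side normalized impact in the spirit of the Audience Factor.

    citation_counts      -- {(citing_journal, cited_journal): citations observed
                            in the citation window}
    mean_refs_by_journal -- average (active) bibliography length of each citing
                            journal, i.e. its propensity to cite
    n_citable_items      -- citable items published by `cited_journal`
    """
    # Global mean propensity to cite; rescales the weights so that a citation
    # from an "average" source still counts for roughly one.
    overall_mean = sum(mean_refs_by_journal.values()) / len(mean_refs_by_journal)

    weighted = 0.0
    for (citing, cited), count in citation_counts.items():
        if cited != cited_journal:
            continue
        # Citations from sources with long bibliographies are down-weighted,
        # citations from sources with short bibliographies are up-weighted.
        weighted += count * overall_mean / mean_refs_by_journal[citing]

    return weighted / n_citable_items
```

In this sketch, replacing `overall_mean` with the mean bibliography length taken over the citing journal's relative neighborhood (its micro-field) would correspond to the robust variant described above, rather than to a fixed field-level or global average.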
