Usage statistics are frequently used by repositories to justify their value to the management that decides on funding for the repository infrastructure. Another reason for collecting usage statistics at repositories is the increasing use of webometrics in assessing the impact of publications and researchers. Consequently, one worry repositories sometimes have about their content being aggregated is that aggregations may have a detrimental effect on the accuracy of the statistics they collect. They believe this potential decrease in reported usage can negatively influence the funding provided by their own institutions. This raises the fundamental question of whether repositories should allow aggregators to harvest their metadata and content. In this paper, we discuss the benefits of allowing content aggregators to harvest repository content and investigate how to overcome the drawbacks.