On the Change in Archivability of Websites Over Time

As web technologies evolve, web archivists work to keep pace so that our digital history is preserved. Recent advances in web technologies have introduced client-side scripts that load data without a referential identifier or that require user interaction (e.g., content that loads only when the page is scrolled). These advances have made automated capture of web pages more difficult. Because publishing schemes have evolved alongside the progressive capability of web preservation tools, the archivability of pages on the web has varied over time. In this paper we show that the archivability of a web page can be deduced from the type of page being archived, which aligns with that page's accessibility with respect to dynamic content. We show concrete examples of when these technologies were introduced by referencing mementos of pages that have persisted through a long evolution of available technologies. Identifying why such pages could not be archived in the past, in terms of accessibility, serves as a guide for ensuring that content with longevity is published using good-practice methods that make it available for preservation.
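As a minimal sketch of the pattern the abstract describes (not code from the paper; the `/api/feed` endpoint, the `page` parameter, and the `feed` element id are hypothetical), the following TypeScript shows scroll-triggered, client-side loading. The fetched data has no referential identifier in the initial HTML, so a crawler that neither executes scripts nor simulates scrolling never requests it:

```typescript
// Hypothetical scroll-triggered lazy loader. The initial HTML contains no
// <a href> or <img src> pointing at this data, so a non-script-executing
// crawler never discovers or requests it.

let nextPage = 1;

async function loadMoreItems(): Promise<void> {
  // Data arrives via fetch; the URL never appears in the page source.
  const response = await fetch(`/api/feed?page=${nextPage}`); // hypothetical API
  const items: { html: string }[] = await response.json();

  const container = document.getElementById("feed")!;
  for (const item of items) {
    const div = document.createElement("div");
    div.innerHTML = item.html; // content exists only after script execution
    container.appendChild(div);
  }
  nextPage += 1;
}

// Content loads only on user interaction (scrolling near the bottom of the
// page), an action an automated crawler does not perform by default.
window.addEventListener("scroll", () => {
  const nearBottom =
    window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
  if (nearBottom) {
    void loadMoreItems();
  }
});
```

An archival crawler that only dereferences URIs found in the markup captures the initial page state but none of the subsequently loaded items, which is one concrete way dynamic content reduces archivability.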
