Twitter A11y: A Browser Extension to Make Twitter Images Accessible

Social media platforms are integral to public and private discourse, but they are becoming less accessible to people with vision impairments due to an increase in user-posted images. Some platforms (e.g., Twitter) let users add image descriptions (alternative text), but only 0.1% of images include them. To address this accessibility barrier, we created Twitter A11y, a browser extension that adds alternative text to images on Twitter using six methods. For example, screenshots of text are common, so we detect textual images and create alternative text using optical character recognition (OCR). Twitter A11y also leverages services to automatically generate alternative text or to reuse existing descriptions from across the web. We compare the coverage and quality of Twitter A11y's six alt-text strategies by evaluating the timelines of 50 self-identified blind Twitter users. We find that Twitter A11y increases alt-text coverage from 7.6% to 78.5% before crowdsourcing descriptions for the remaining images. We estimate that 57.5% of the returned descriptions are high quality. We then report on the experiences of 10 participants with visual impairments who used the tool during a week-long deployment. Twitter A11y increases access to social media platforms for people with visual impairments by providing high-quality automatic descriptions for user-posted images.
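
The abstract does not include implementation details, but as a rough illustration of the OCR strategy it describes (generating alternative text for screenshots of text), the following is a minimal sketch. It assumes Python with the Pillow and pytesseract libraries and a locally installed Tesseract binary; these tools, the function name, and the fallback heuristic are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of an OCR-based alt-text strategy like the one described above.
# Assumptions (not from the paper): Pillow + pytesseract are installed and a
# local Tesseract binary is available on the system.
from PIL import Image
import pytesseract


def alt_text_from_screenshot(image_path: str, min_chars: int = 20) -> str:
    """Return alternative text for an image that looks like a screenshot of text.

    If OCR recovers fewer than `min_chars` characters, the image is probably
    not primarily textual, so a generic placeholder is returned instead.
    """
    image = Image.open(image_path)
    text = pytesseract.image_to_string(image).strip()
    if len(text) < min_chars:
        return "Image (no readable text detected)"
    # Collapse whitespace so screen readers receive one clean string.
    return "Screenshot of text: " + " ".join(text.split())


if __name__ == "__main__":
    print(alt_text_from_screenshot("tweet_image.png"))
```

In a browser-extension setting, the same idea could run client-side (e.g., with a JavaScript OCR library) or server-side as a service the extension calls; the sketch above only shows the core text-extraction step.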
