Envisioning Equitable Speech Technologies for Black Older Adults
[1] Cosmin Munteanu, et al. Does Alexa Live Up to the Hype? Contrasting Expectations from Mass Media Narratives and Older Adults' Hands-on Experiences of Voice Interfaces, 2022, CUI.
[2] Casey S. Pierce, et al. An Empirical Study of Older Adults’ Voice Assistant Use for Health Information Seeking, 2022, ACM Trans. Interact. Intell. Syst.
[3] Sachin Pathiyan Cherumanal. Fairness-Aware Question Answering for Intelligent Assistants, 2022, SIGIR.
[4] Youjin Kong. Are “Intersectionally Fair” AI Algorithms Really Fair to Women of Color? A Philosophical Analysis, 2022, FAccT.
[5] P. Lohia, et al. Beyond Fairness: Reparative Algorithms to Address Historical Injustices of Housing Discrimination in the US, 2022, FAccT.
[6] Marilyn Zhang. Affirmative Algorithms: Relational Equality as Algorithmic Fairness, 2022, FAccT.
[7] G. Gu, et al. BiasHacker: Voice Command Disruption by Exploiting Speaker Biases in Automatic Speech Recognition, 2022, WiSec.
[8] Christopher L. Dancy, et al. The Forgotten Margins of AI Ethics, 2022, FAccT.
[9] Robin N. Brewer, et al. “If Alexa knew the state I was in, it would cry”: Older Adults’ Perspectives of Voice Assistants for Health, 2022, CHI Extended Abstracts.
[10] McKane Andrus, et al. Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness, 2022, FAccT.
[11] Seth Polsley, et al. Inequity in Popular Speech Recognition Systems for Accented English Speech, 2022, IUI Companion.
[12] A. Ding, et al. Bias in Automated Speaker Recognition, 2022, FAccT.
[13] Christina N. Harrington, et al. “It’s Kind of Like Code-Switching”: Black Older Adults’ Experiences with a Voice Assistant for Health Information Seeking, 2021, CHI.
[14] Jenny Waycott, et al. Swipe a Screen or Say the Word: Older Adults’ Preferences for Information-seeking with Touchscreen and Voice-User Interfaces, 2021, OZCHI.
[15] Courtney Heldreth, et al. “I don’t Think These Devices are Very Culturally Sensitive.”—Impact of Automated Speech Recognition Errors on African Americans, 2021, Frontiers in Artificial Intelligence.
[16] Rediet Abebe, et al. Fairness, Equality, and Power in Algorithmic Decision-Making, 2021, FAccT.
[17] Michelle Cohn, et al. Age- and Gender-Related Differences in Speech Alignment Toward Humans and Voice-AI, 2021, Frontiers in Communication.
[18] W. Rogers, et al. Digital Home Assistants and Aging: Initial Perspectives from Novice Older Adult Users, 2020, Proceedings of the Human Factors and Ergonomics Society Annual Meeting.
[19] Gianni Fenu, et al. Improving Fairness in Speaker Recognition, 2020, ESSE.
[20] Elena Spitzer, et al. What We Can't Measure, We Can't Understand: Challenges to Demographic Data Procurement in the Pursuit of Fairness, 2020, FAccT.
[21] Joshua L. Martin, et al. Understanding Racial Disparities in Automatic Speech Recognition: The Case of Habitual "be", 2020, INTERSPEECH.
[22] Ana-Andreea Stoica, et al. Bridging Machine Learning and Mechanism Design towards Algorithmic Fairness, 2020, FAccT.
[23] Amanda Lazar, et al. Use of Intelligent Voice Assistants by Older Adults with Low Technology Use, 2020, ACM Trans. Comput. Hum. Interact.
[24] Aqueasha Martin-Hammond, et al. "Alexa is a Toy": Exploring Older Adults' Reasons for Using, Limiting, and Abandoning Echo, 2020, CHI.
[25] Dan Jurafsky, et al. Racial disparities in automated speech recognition, 2020, Proceedings of the National Academy of Sciences.
[26] Karthik Dinakar, et al. Studying up: reorienting the study of algorithmic fairness around issues of power, 2020, FAT*.
[27] Sean A. McGlynn, et al. Perceptions of Digital Assistants From Early Adopters Aged 55+, 2019, Ergonomics in Design: The Quarterly of Human Factors Applications.
[28] Reuben Binns, et al. On the apparent conflict between individual and group fairness, 2019, FAT*.
[29] Francis M. Tyers, et al. Common Voice: A Massively-Multilingual Speech Corpus, 2019, LREC.
[30] Hanna M. Wallach, et al. Measurement and Fairness, 2019, FAccT.
[31] Emily Denton, et al. Towards a critical race methodology in algorithmic fairness, 2019, FAT*.
[32] Daniel Gruen, et al. Considerations for AI fairness for people with disabilities, 2019, SIGAI.
[33] M. Mills, et al. Disability, Bias, and AI, 2019.
[34] Benjamin R. Cowan, et al. Voice assistants and older people: some open issues, 2019, CUI.
[35] Krzysztof Marasek, et al. Older Adults and Voice Interaction: A Pilot Study with Google Home, 2019, CHI Extended Abstracts.
[36] Danah Boyd, et al. Fairness and Abstraction in Sociotechnical Systems, 2019, FAT*.
[37] Reuben Binns, et al. Fairness in Machine Learning: Lessons from Political Philosophy, 2017, FAT*.
[38] Rachael Tatman, et al. Effects of Talker Dialect, Gender & Race on Accuracy of Bing Speech and YouTube Automatic Captions, 2017, INTERSPEECH.
[39] Anne Marie Piper, et al. Exploring Traditional Phones as an E-Mail Interface for Older Adults, 2016, ACM Trans. Access. Comput.
[40] Carlos Eduardo Scheidegger, et al. Certifying and Removing Disparate Impact, 2014, KDD.
[41] D. Dunlop, et al. Racial/ethnic differences in the development of disability among older adults, 2007, American Journal of Public Health.
[42] Mark Hasegawa-Johnson, et al. Counterfactually Fair Automatic Speech Recognition, 2021, IEEE/ACM Transactions on Audio, Speech, and Language Processing.