Centering disability perspectives in algorithmic fairness, accountability, & transparency

It is vital to consider the unique risks and impacts of algorithmic decision-making for people with disabilities. The diverse nature of disability poses unique challenges for approaches to fairness, accountability, and transparency. Many disabled people choose not to disclose their disabilities, making auditing and accountability tools particularly hard to design and operate. Further, the variety inherent in disability makes it difficult to collect representative training data in quantities sufficient to train more inclusive and accountable algorithms. This panel highlights areas of concern, presents emerging research efforts, and aims to enlist more researchers and advocates to study the potential impacts of algorithmic decision-making on people with disabilities. A key objective is to surface new research projects and collaborations, including by integrating a critical disability perspective into existing research and advocacy efforts focused on identifying sources of bias and advancing equity.

In the technology space, discussion topics will include methods to assess the fairness of current AI systems, and strategies to develop new systems and bias mitigation approaches that ensure fairness for people with disabilities. For example, how do currently deployed AI systems affect people with disabilities? If developing inclusive datasets is part of the solution, how can researchers ethically gather such data, and what risks might centralizing data about disability pose? What new privacy solutions must developers create to reduce the risk of deductive disclosure of the identities of people with disabilities in "anonymized" datasets? How can AI models and bias mitigation techniques be developed to handle the unique challenges of disability, such as the "long tail" and low incidence of many types of disability? For instance, how do we ensure that data about disability are not treated as outliers? What are the pros and cons of developing custom or personalized AI models for people with disabilities versus ensuring that general models are inclusive?

In the law and policy space, the legal framework governing people with disabilities requires specific study. For example, the Americans with Disabilities Act (ADA) requires employers to adopt "reasonable accommodations" for qualified individuals with a disability. But what is a "reasonable accommodation" in the context of machine learning and AI? How will the ADA's unique standards interact with case law and scholarship about algorithmic bias against other protected groups? Given that the ADA governs what questions employers can ask about a candidate's disability, and that HIPAA and the Genetic Information Privacy Act regulate the sharing of health information, how should we think about inferences drawn from data that approximate such questions?

Panelists will bring varied perspectives to this conversation, including backgrounds in computer science, disability studies, legal studies, and activism. In addition to their scholarly expertise, several panelists have direct lived experience with disability. The session will consist of brief position statements from each panelist, followed by questions from the moderator, and then open questions and discussion with the audience.