The past decade has seen exponential growth in the capabilities and deployment of artificial intelligence systems based on deep neural networks. These systems are visible in the speech recognition and natural language processing of Alexa, Siri, and Google Assistant, which structure many of our everyday interactions, and in the promise of SAE Level 5 autonomous driving made by Tesla and Waymo. Beyond these highly visible applications of AI-ML are many subtler uses: AI-ML is now being used to screen job applicants and to determine which web ads we are shown. And while many vendors of AI-ML technologies have promised that these tools provide greater access and freedom from human prejudice, disabled users have found that these tools can embed and deploy newer, subtler forms of discrimination against disabled people. At their worst, AI-ML systems can deny disabled people their humanity.

The explosion of AI-ML technologies in the last decade has been driven by at least three factors. First, the deep neural network algorithms that currently drive much of machine learning have been improved dramatically through the use of backpropagation [2], generative adversarial nets [4], and convolution [5], allowing for their deployment across a broad variety of datasets. Second, the cost of computing hardware (especially GPUs) has dropped dramatically, while large-scale cloud computing facilities and widespread fiber, broadband, and 4G connectivity have made these systems all but universally available. Finally, large datasets have come online to aid in training the neural networks: for example, the image datasets held by Google and Facebook and the large natural-language datasets driving Amazon Alexa.

Deep neural networks themselves have two key features, or flaws, depending on one's perspective. First, their behavior is highly dependent on the diversity of the training dataset. Second, once deployed, their internal operations are opaque not only to end-users but also to the designers of the systems themselves.
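Both properties can be made concrete in a few lines of code. What follows is a minimal sketch, not any vendor's actual system: a toy network trained by backpropagation on synthetic data in which a hypothetical "group B" is under-represented relative to "group A." The data, group labels, network size, and training settings are all illustrative assumptions chosen for this example.

```python
# Minimal sketch (NumPy only) of two properties of deep neural networks:
# dependence on training-data diversity, and the opacity of what is learned.
# All data, groups, and settings here are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, center, rule):
    # Two-feature samples for one subgroup, labeled by that subgroup's rule.
    x = rng.normal(loc=center, scale=1.0, size=(n, 2))
    return x, rule(x).astype(float)

# Hypothetical subgroups: group A dominates training; group B barely appears,
# and its correct decision rule differs from group A's.
xa, ya = make_group(1000, (0.0, 0.0), lambda x: x[:, 0] > 0)
xb, yb = make_group(5, (3.0, 0.0), lambda x: x[:, 0] < 3)
x_train = np.vstack([xa, xb])
y_train = np.concatenate([ya, yb])[:, None]

# One hidden layer of sigmoid units; weights start as random noise.
w1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
w2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -50, 50)))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(x_train @ w1 + b1)
    p = sigmoid(h @ w2 + b2)
    # Backpropagation: push the mean-squared-error gradient back, layer by layer.
    dz2 = (p - y_train) / len(x_train) * p * (1 - p)
    dz1 = (dz2 @ w2.T) * h * (1 - h)
    # Gradient-descent updates.
    w2 -= lr * (h.T @ dz2); b2 -= lr * dz2.sum(0)
    w1 -= lr * (x_train.T @ dz1); b1 -= lr * dz1.sum(0)

def accuracy(x, y):
    p = sigmoid(sigmoid(x @ w1 + b1) @ w2 + b2)
    return ((p[:, 0] > 0.5) == (y > 0.5)).mean()

# Fresh test data from each subgroup.
ta, la = make_group(500, (0.0, 0.0), lambda x: x[:, 0] > 0)
tb, lb = make_group(500, (3.0, 0.0), lambda x: x[:, 0] < 3)
print("accuracy, well-represented group A:", accuracy(ta, la))
print("accuracy, under-represented group B:", accuracy(tb, lb))

# Opacity: everything the model "learned" is just these arrays of numbers,
# which say nothing human-readable about how decisions are made.
print("learned first-layer weights:\n", w1.round(2))
```

Running such a sketch typically shows near-perfect accuracy on the well-represented group and roughly chance accuracy on the under-represented one, and the learned weights offer no human-readable account of why; this is precisely the failure mode that an opaque model makes hard to detect.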
[1] Joy Buolamwini and Timnit Gebru, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," FAT*, 2018.
[2] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams, "Learning representations by back-propagating errors," Nature, 1986.
[3] Shari Trewin, "AI Fairness for People with Disabilities: Point of View," arXiv, 2018.
[4] Ian Goodfellow et al., "Generative Adversarial Nets," NIPS, 2014.
[5] Pierre Sermanet, Soumith Chintala, and Yann LeCun, "Convolutional neural networks applied to house numbers digit classification," Proceedings of the 21st International Conference on Pattern Recognition (ICPR 2012), 2012.
[6] Kate Crawford, "Artificial Intelligence's White Guy Problem," The New York Times, 2016.