Feminist Perspective on Robot Learning Processes

As various research works report and everyday experience confirms, learning models can produce biased outcomes. These biased models typically replicate historical discrimination in society and disproportionately harm under-represented identities. Robots are equipped with such models, which allow them to operate and to perform increasingly complex tasks. The learning process consists of several stages that depend on human judgment, and the resulting models that drive robot decisions rely on recorded labeled data or demonstrations. The robot learning process is therefore susceptible to biases rooted in human behavior in society. This poses a potential danger, especially when robots operate around humans and the learning process reflects the social unfairness present today. Various feminist proposals study social inequality and provide essential perspectives for removing bias in a range of fields. Moreover, feminism has allowed, and continues to allow, the reconfiguration of numerous social dynamics and stereotypes, advocating for equality across people in all their diversity. Consequently, in this work we offer a feminist perspective on the robot learning process. We base our discussion on intersectional, community, and decolonial feminism, as well as pedagogy perspectives, and we frame our work within a feminist robotics approach. This paper presents an initial discussion that emphasizes the relevance of feminist perspectives for exploring, foreseeing, and eventually correcting biased robot decisions.
