Zhou Yu | David Gros | Yu Li