Systematic Review: Trust-Building Factors and Implications for Conversational Agent Design
[1] Kien Hoa Ly,et al. A fully automated conversational agent for promoting mental well-being: A pilot RCT using mixed methods , 2017, Internet interventions.
[2] Manfred Tscheligi,et al. Interacting with embodied agents that can see: how vision-enabled agents can assist in spatial tasks , 2006, NordiCHI '06.
[3] Sean Andrist,et al. Effects of Culture on the Credibility of Robot Speech: A Comparison between English and Arabic , 2015, 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[4] Manfred Tscheligi,et al. I would choose the other card: humanoid robot gives an advice , 2009, HRI '09.
[5] V. B. Cervin,et al. Persuasiveness and persuasibility as related to intelligence and extraversion , 1965, The British journal of social and clinical psychology.
[6] J. H. Davis,et al. An Integrative Model Of Organizational Trust , 1995 .
[7] Joonhwan Lee,et al. It Sounds Like A Woman: Exploring Gender Stereotypes in South Korean Voice Assistants , 2019, CHI Extended Abstracts.
[8] James L. Szalma,et al. A Meta-Analysis of Factors Influencing the Development of Trust in Automation , 2016, Hum. Factors.
[9] Ilaria Torre,et al. Trust in artificial voices: A "congruency effect" of first impressions and behavioural experience , 2018, APAScience.
[10] B. J. Fogg,et al. Credibility and computing technology , 1999, CACM.
[11] Angelo Cangelosi,et al. Priming Anthropomorphism: Can the credibility of humanlike robots be transferred to non-humanlike robots? , 2016, 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[12] Brian Scassellati,et al. Effects of form and motion on judgments of social robots' animacy, likability, trustworthiness and unpleasantness , 2016, Int. J. Hum. Comput. Stud..
[13] D. Moher,et al. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement , 2009, BMJ : British Medical Journal.
[14] Jessie Y. C. Chen,et al. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction , 2011, Hum. Factors.
[15] S. Shyam Sundar,et al. Are specialist robots better than generalist robots? , 2011, 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[16] Wolfgang Minker,et al. Human After All: Effects of Mere Presence and Social Interaction of a Humanoid Robot as a Co-Driver in Automated Driving , 2016, AutomotiveUI.
[17] Pei-Luen Patrick Rau,et al. Effects of communication style and culture on ability to accept recommendations from robots , 2009, Comput. Hum. Behav..
[18] Willem F. G. Haselager,et al. Do Robot Performance and Behavioral Style Affect Human Trust? A Multi-Method Approach , 2014.
[19] Luc Wijnen,et al. "It's not my Fault!": Investigating the Effects of the Deceptive Behaviour of a Humanoid Robot , 2017, HRI.
[20] Kun-Pyo Lee,et al. Once a Kind Friend is Now a Thing: Understanding How Conversational Agents at Home are Forgotten , 2019, Conference on Designing Interactive Systems.
[21] Nicole C. Krämer,et al. Empathy for Everyone?: The Effect of Age When Evaluating a Virtual Agent , 2018, HAI.
[22] Timothy W. Bickmore,et al. Establishing and maintaining long-term human-computer relationships , 2005, TCHI.
[23] Clifford Nass,et al. Source Orientation in Human-Computer Interaction , 2000, Commun. Res..
[24] E. C. Tupes,et al. Personality characteristics related to leadership behavior in two types of small group situational problems. , 1958 .
[25] Gordon L. Patzer,et al. Source credibility as a function of communicator physical attractiveness , 1983 .
[26] Juliane Junker. Agents for Games and Simulations, Trends in Techniques, Concepts and Design [AGS 2009, The First International Workshop on Agents for Games and Simulations, May 11, 2009, Budapest, Hungary] , 2009, AGS.
[27] Jodi Forlizzi,et al. "Hey Alexa, What's Up?": A Mixed-Methods Studies of In-Home Conversational Agent Usage , 2018, Conference on Designing Interactive Systems.
[28] Cynthia Breazeal,et al. How smart are the smart toys?: children and parents' agent interaction and intelligence attribution , 2018, IDC.
[29] Abigail Sellen,et al. "Like Having a Really Bad PA": The Gulf between User Expectation and Experience of Conversational Agents , 2016, CHI.
[30] Autumn P. Edwards,et al. “Why Aren’t You a Sassy Little Thing”: The Effects of Robot-Enacted Guilt Trips on Credibility and Consensus in a Negotiation , 2016 .
[31] Clifford Nass,et al. The media equation - how people treat computers, television, and new media like real people and places , 1996 .
[32] Roger K. Moore. Is Spoken Language All-or-Nothing? Implications for Future Speech-Based Human-Machine Interaction , 2016, IWSDS.
[33] J. G. Holmes,et al. Trust in close relationships. , 1985 .
[34] David Griol,et al. The Conversational Interface , 2016 .
[35] Karl F. MacDorman,et al. The Uncanny Valley [From the Field] , 2012, IEEE Robotics Autom. Mag..
[36] Beste F. Yuksel,et al. Brains or Beauty , 2017, ACM Trans. Internet Techn..
[37] Christopher A. Bailey,et al. Social interaction moderates human-robot trust-reliance relationship and improves stress coping , 2016, 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[38] Jessica A. Chen,et al. Conversational agents in healthcare: a systematic review , 2018, J. Am. Medical Informatics Assoc..
[39] Cynthia Breazeal,et al. Persuasive Robotics: The influence of robot gender on human behavior , 2009, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[40] D. Wiegmann,et al. Similarities and differences between human–human and human–automation trust: an integrative review , 2007 .
[41] J. Cassell,et al. Social Dialogue with Embodied Conversational Agents , 2005 .
[42] C. Nass,et al. Are Machines Gender Neutral? Gender‐Stereotypic Responses to Computers With Voices , 1997 .
[43] John D. Lee,et al. Trust in Automation: Designing for Appropriate Reliance , 2004, Hum. Factors.
[44] Benjamin R. Cowan,et al. "What can i help you with?": infrequent users' experiences of intelligent personal assistants , 2017, MobileHCI.
[45] David R. Ewoldsen,et al. The MODE Model and Its Implications for Studying the Media , 2015 .
[46] P. Costa,et al. Personality in adulthood: a six-year longitudinal study of self-reports and spouse ratings on the NEO Personality Inventory. , 1988, Journal of personality and social psychology.
[47] Cynthia Breazeal,et al. Computationally modeling interpersonal trust , 2013, Front. Psychol..
[48] Brian Scassellati,et al. The Ripple Effects of Vulnerability: The Effects of a Robot’s Vulnerable Behavior on Trust in Human-Robot Teams , 2018, 2018 13th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[49] Yugo Hayashi,et al. Can AI become Reliable Source to Support Human Decision Making in a Court Scene? , 2017, CSCW Companion.
[50] Justine Cassell,et al. Relational agents: a model and implementation of building user trust , 2001, CHI.
[51] J. Cassell,et al. Embodied conversational agents , 2000 .
[52] Aaron C. Elkins,et al. The Sound of Trust: Voice as a Measurement of Trust During Interactions with Embodied Conversational Agents , 2013 .
[53] C. Nass,et al. Machines and Mindlessness , 2000 .
[54] C. Nass,et al. When a Talking-Face Computer Agent Is Half-Human and Half-Humanoid: Human Identity and Consistency Preference , 2007 .
[55] Mark A. Neerincx,et al. Persuasive robotic assistant for health self-management of older adults: Design and evaluation of social behaviors , 2010, Int. J. Hum. Comput. Stud..
[56] Kerstin Dautenhahn,et al. Would You Trust a (Faulty) Robot? Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust , 2015, 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[57] Fuyuan Shen,et al. Benefits for Me or Risks for Others: A Cross-Culture Investigation of the Effects of Message Frames and Cultural Appeals , 2013, Health communication.
[58] Ning Wang,et al. Trust calibration within a human-robot team: Comparing automatically generated explanations , 2016, 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[59] Philipp Wintersberger,et al. (Over)Trust in Automated Driving: The Sleeping Pill of Tomorrow? , 2019, CHI Extended Abstracts.
[60] Rachel K. E. Bellamy,et al. At Face Value , 2021, Bigger Than Life.
[61] Elaheh Sanoubari,et al. A need for trust in conversational interface research , 2019, CUI.
[62] S. Shyam Sundar,et al. Machine Heuristic: When We Trust Computers More than Humans with Our Personal Information , 2019, CHI.
[63] Bruce A. MacDonald,et al. People respond better to robots than computer tablets delivering healthcare instructions , 2015, Comput. Hum. Behav..
[64] J. Cacioppo,et al. On seeing human: a three-factor theory of anthropomorphism. , 2007, Psychological review.
[65] C. V. Ramamoorthy,et al. Phase Coherence in Conceptual Spaces for Conversational Agents , 2010 .
[66] Izak Benbasat,et al. Evaluating Anthropomorphic Product Recommendation Agents: A Social Relationship Perspective to Designing Information Systems , 2009, J. Manag. Inf. Syst..
[67] Futoshi Naya,et al. Differences in effect of robot and screen agent recommendations on human decision-making , 2005, Int. J. Hum. Comput. Stud..
[68] J. Gilbert,et al. Virtual agents in e‐commerce: representational characteristics for seniors , 2011 .
[69] Catherine J. Stevens,et al. Robot Pressure: The Impact of Robot Eye Gaze and Lifelike Bodily Movements upon Decision-Making and Trust , 2014, ICSR.
[70] Elisabeth André,et al. An empirical study on the trustworthiness of life-like interface agents , 1999, HCI.
[71] Jaap Ham,et al. The Influence of Social Cues and Controlling Language on Agent's Expertise, Sociability, and Trustworthiness , 2017, HRI.
[72] Kristinn R. Thórisson,et al. The Power of a Nod and a Glance: Envelope Vs. Emotional Feedback in Animated Conversational Agents , 1999, Appl. Artif. Intell..
[73] Jean E. Fox,et al. The effects of information accuracy on user trust and compliance , 1996, CHI Conference Companion.
[74] Jay F. Nunamaker,et al. Embodied Conversational Agent-Based Kiosk for Automated Interviewing , 2011, J. Manag. Inf. Syst..