Recent research aims to develop new open-microphone engagement techniques capable of identifying when a speaker is addressing a computer versus a human partner, including during computer-assisted group interactions. The present research explores: (1) how accurately people can judge whether an intended interlocutor is a human or a computer, (2) which linguistic, acoustic-prosodic, and visual information sources they use to make these judgments, and (3) what types of systematic errors their judgments contain. Sixteen participants were asked to determine a speaker's intended addressee based on actual videotaped utterances matched on illocutionary force, which were played back as (1) lexical transcriptions only, (2) audio only, (3) video only, or (4) combined audio-visual information. Perhaps surprisingly, people's accuracy in judging human versus computer addressees did not exceed chance levels with lexical-only content (46%). As predicted, accuracy improved significantly with audio (58%), visual (57%), and especially audio-visual information (63%). Overall, accuracy in detecting human interlocutors was significantly worse than accuracy in detecting computer ones, and it was specifically worse when only visual information was present, because speakers often looked at the computer while addressing their peers. In contrast, accuracy in judging computer interlocutors was significantly better whenever visual information was present than with audio alone, and this condition yielded the highest accuracy observed (86%). Questionnaire data also revealed that the speaker's gaze, peers' gaze, and tone of voice were considered the most valuable information sources. These results indicate that people rely on cues appropriate for interpersonal interaction when determining whether speech is computer- or human-directed during mixed human-computer interactions, even though doing so degrades their accuracy. Future systems that process actual rather than expected communication patterns could potentially be designed to perform better than human judges.
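
The concluding point about future systems can be made concrete with a small sketch. The Python fragment below is a minimal, hypothetical late-fusion addressee classifier: it combines per-modality cue scores into a single computer-directed score. All feature names, weights, and the decision threshold are illustrative assumptions introduced here; none are taken from the paper or its data.

    # Illustrative sketch only: a hypothetical late-fusion addressee classifier.
    # All cue names, weights, and the threshold are invented for demonstration
    # and are NOT reported in the paper.

    from dataclasses import dataclass


    @dataclass
    class UtteranceCues:
        """Per-utterance cue scores in [0, 1]; higher = more computer-directed."""
        lexical: float            # e.g., command-style wording
        acoustic_prosodic: float  # e.g., hyperarticulation, raised amplitude
        visual: float             # e.g., proportion of time gazing at the display


    def computer_directed_score(cues: UtteranceCues,
                                weights=(0.1, 0.45, 0.45)) -> float:
        """Weighted late fusion of modality scores (weights are assumptions).

        The low lexical weight mirrors the finding that lexical content alone
        did not support above-chance human judgments, while audio and visual
        cues did.
        """
        w_lex, w_audio, w_vis = weights
        return (w_lex * cues.lexical
                + w_audio * cues.acoustic_prosodic
                + w_vis * cues.visual)


    def classify_addressee(cues: UtteranceCues, threshold: float = 0.5) -> str:
        """Label an utterance as computer- or human-directed."""
        return "computer" if computer_directed_score(cues) >= threshold else "human"


    if __name__ == "__main__":
        # A speaker gazing at the display while using conversational prosody:
        example = UtteranceCues(lexical=0.3, acoustic_prosodic=0.2, visual=0.9)
        print(classify_addressee(example))  # prints "computer" with these weights

Note that with the assumed weights this example is misclassified as computer-directed purely because of the strong gaze-at-display cue, reproducing the same error pattern the human judges showed when speakers looked at the computer while addressing peers. A system intended to outperform human judges would need to model such actual, rather than expected, communication patterns instead of relying on a fixed gaze heuristic.
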