A Model for Eye and Head Motion for Virtual Agents

In this paper, we propose a model for generating head and eye movements during gaze shifts of virtual characters, including eyelid and eyebrow motion. A user study with 30 participants was conducted to evaluate the communicative accuracy and perceived naturalness of the model. Results showed that the model communicates gaze targets with an accuracy closely matching that of a human confederate, and that participants subjectively rated the head and eye movements as natural rather than artificial. The implementation can be used as-is in applications where virtual characters act as idle bystanders or observers, or it can be paired with a lip-synchronization solution.