Gathering egocentric video and other sensor data with AAC users to inform narrative prediction

During a conversation, partners base their contributions on, among other factors, the context of the conversation: for example, who the conversation partner is, or where and when the conversation takes place. Some AAC systems provide the user with phrases and whole narratives rather than simple word prediction. In a study by Todman et al. [2], handcrafted contextual conversational items were provided to AAC users on their devices, and communication rates of up to 64 words per minute (wpm) were demonstrated. However, using such a system requires hand-scripted paragraphs and requires training users to remember the existence and location of these items on the device. Automatic data-to-text sentence generators have also been trialed in narrative-based systems. In [3], a narrative ontology was populated with conversational topics linked to people and places.
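
To illustrate the kind of structure such an ontology implies, the sketch below links stored narratives to the people and places that make them contextually relevant, so that candidate utterances can be ranked against the current context. This is a minimal, assumed design, not the schema used in [3]; all class, field, and function names are illustrative.

```python
from dataclasses import dataclass, field

# Toy "narrative ontology": stored narratives tagged with the people and
# places that make them contextually relevant. Illustrative only; not the
# ontology structure used in [3].

@dataclass(frozen=True)
class Person:
    name: str

@dataclass(frozen=True)
class Place:
    name: str

@dataclass
class Narrative:
    text: str                                  # pre-stored utterance or story
    people: set = field(default_factory=set)   # partners this narrative suits
    places: set = field(default_factory=set)   # locations where it is relevant

def relevant_narratives(ontology, partner, location):
    """Rank stored narratives by how many context features they match."""
    def score(n):
        return (partner in n.people) + (location in n.places)
    return sorted((n for n in ontology if score(n) > 0), key=score, reverse=True)

# Example: with partner "Alice" at the "clinic", narratives tagged with
# either feature are offered, best contextual match first.
alice, clinic = Person("Alice"), Place("clinic")
ontology = [
    Narrative("How was your weekend?", people={alice}),
    Narrative("My appointment went well last time.", people={alice}, places={clinic}),
]
print([n.text for n in relevant_narratives(ontology, alice, clinic)])
```

In a real system the partner and location would come from sensor data such as the egocentric video discussed in this paper, rather than being supplied by hand as in this toy example.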