Cognitive architectures and virtual worlds: Integrating ACT-R with the XNA framework

Within the sciences of the mind, issues of material embodiment and environmental embedding have emerged as important areas of research over recent decades. Embodiment and embedding are deemed important, it is argued, because extra-neural resources may shape the profile of brain-based processes and, at least occasionally, may feature in the realization of what are referred to as ‘environmentally extended cognitive systems’. This interest in situated, embodied and extended cognition motivates the development of cognitive computational models that can engage in complex forms of perceptuo-motor processing within highly dynamic, perceptually rich environments. While the ACT-R cognitive architecture supports certain forms of environmental interaction via a set of perceptuo-motor modules, in most cases these modules are used to emulate interaction with relatively simple devices, such as computer keyboards and display screens. In this paper, we show how ACT-R can be integrated with Microsoft’s XNA Framework to support sophisticated forms of interaction with 3D virtual environments. The XNA Framework forms part of Microsoft’s XNA Game Studio, which provides a managed runtime environment for video game development. By demonstrating how ACT-R can be integrated with the XNA Framework, we hope to show how ACT-R agents could be embedded in a range of 3D multiplayer game environments. This capability could support future research on computational models of embodied, situated and extended cognitive processes. This work builds on previous efforts to integrate ACT-R agents within virtual environments, most notably the work of Best and Lebiere [1] using the Unreal Tournament game engine.