Controlling a Virtual Body by Thought in a Highly Immersive Virtual Environment: A Case Study in Using a Brain-Computer Interface in a Virtual-Reality Cave-like System

A brain-computer interface (BCI) can arguably be considered the ultimate user interface, in which humans operate a computer using thought alone. We have integrated the Graz-BCI into a highly immersive, Cave-like system. In this paper we report a case study in which three participants were able to control their avatars using only their thoughts. We analyzed the participants' subjective experience using an in-depth qualitative methodology. We also discuss some limitations of BCI for controlling a virtual environment, and the interaction design decisions that had to be made.

Brain-computer interfaces have been studied extensively as a tool for paralyzed patients, for whom they may augment communication with the external world and allow better control of their limbs. However, once the technology has been developed for these critical applications, we expect it to have profound implications for many other types of user interfaces and applications. BCI could be one of the most significant steps beyond "direct manipulation interfaces" (Shneiderman, 1983): intention is mapped directly into interaction, rather than being conveyed through motor movements. Furthermore, used in an immersive virtual environment (IVE), this could be a completely novel experience and, in the future, lead to unprecedented levels in the sense of presence (for recent reviews of the concept of presence see (Sanchez-Vives and Slater, 2005) and (Riva et al., 2003)).

A key requirement for a successful experience in an IVE is the representation of the participant, i.e., their avatar (Pandzic et al., 1997; Slater et al., 1994; Slater et al., 1998). This paper describes the first study in which participants controlled their own avatars using only their thoughts. Three subjects were able to use the Graz-BCI to control an avatar, and their subjective experience was assessed using questionnaires and a semi-structured interview. Naturally, a third-person avatar, such as the one used in this experiment, is only one possible interface to an IVE.

Using a BCI to control an IVE by thought raises several major human-computer interaction (HCI) issues: whether classification of thought patterns is continuous (asynchronous BCI) or takes place only at specific moments (synchronous BCI); the number of input classes recognized; the importance of feedback; and the nature of the mapping between thoughts and the resulting actions in the IVE. In this paper we discuss these issues and present a case study that specifically addresses feedback and mapping.

A critical initial hypothesis is that a natural mapping between thought processes and IVE functionality improves the experience. A one-to-one mapping seemingly makes intuitive sense, but it is also constraining, because contemporary brain recording techniques limit the range of thought patterns we can detect. In addition, it precludes more complex or more fanciful body-image mappings: what if we want to experiment with lobster avatars? (See Jaron Lanier's "everyone can be a lobster" statement at http://www.edge.org/q2006/q06_7.html#lanier.) In the case study reported here, the natural mapping was reported to feel more natural and easier than the reversed mapping; however, the results do not indicate that BCI accuracy was better with the natural mapping than with the reversed one.
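To make the mapping issue concrete, the sketch below shows one way the output of a two-class motor-imagery classifier could be bound to avatar actions, with the thought-to-action mapping kept as explicit data so that the same classifier can drive either the natural or the reversed condition. This is a minimal Python illustration under assumed names, not the actual Graz-BCI implementation: the Thought and AvatarAction labels and the classify_epoch stub are all hypothetical.

from enum import Enum
from typing import Callable, Dict

# Hypothetical thought classes that a two-class motor-imagery BCI
# might distinguish (names are illustrative, not from the paper).
class Thought(Enum):
    IMAGINE_FEET = "feet"
    IMAGINE_HANDS = "hands"

# Hypothetical avatar actions available in the virtual environment.
class AvatarAction(Enum):
    WALK = "walk"
    WAVE = "wave"

# "Natural" mapping: the imagined movement drives the matching avatar movement.
NATURAL_MAPPING: Dict[Thought, AvatarAction] = {
    Thought.IMAGINE_FEET: AvatarAction.WALK,
    Thought.IMAGINE_HANDS: AvatarAction.WAVE,
}

# "Reversed" mapping: the same classifier outputs drive the opposite actions.
REVERSED_MAPPING: Dict[Thought, AvatarAction] = {
    Thought.IMAGINE_FEET: AvatarAction.WAVE,
    Thought.IMAGINE_HANDS: AvatarAction.WALK,
}

def classify_epoch(eeg_epoch) -> Thought:
    """Stub for the BCI classifier. A real system would extract features
    (e.g., band power over sensorimotor channels) from the EEG epoch and
    run them through a trained discriminant."""
    raise NotImplementedError

def control_step(eeg_epoch,
                 mapping: Dict[Thought, AvatarAction],
                 act: Callable[[AvatarAction], None]) -> None:
    """One synchronous (cue-paced) control step: classify a single cued
    epoch, then trigger the mapped avatar action. An asynchronous BCI
    would run this continuously and would also need a 'no control'
    (rest) class so that the avatar is not driven by every epoch."""
    act(mapping[classify_epoch(eeg_epoch)])

Keeping the mapping as data rather than hard-wiring it means that switching between the natural and reversed conditions changes only a dictionary, leaving the classifier and the avatar untouched; this is one plausible way to operationalize the comparison described above.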
The main implication of our case study is that this new type of interface, in which IVE participants control their avatars by thought, is possible and should be pursued further. In addition, we offer new insights into the HCI issues involved in such an interface, and provide a first glimpse of what the experience of using it may be like.

[1] D. Tartakover, et al. How many people? Lancet, 2007.

[2] J. Mortensen, et al. Spelunking: Experiences using the DIVE System on CAVE-like Platforms. EGVE/IPT, 2001.

[3] V. Ramachandran, et al. Projecting sensations to external objects: evidence from skin conductance response. Proceedings of the Royal Society of London, Series B: Biological Sciences, 2003.

[4] J. C. Hart, et al. The CAVE: audio visual experience automatic virtual environment. Communications of the ACM, 1992.

[5] G. Pfurtscheller, et al. How many people are able to operate an EEG-based brain-computer interface (BCI)? IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2003.

[6] G. Riva, et al. Being There: Concepts, Effects and Measurements of User Presence in Synthetic Environments. 2003.

[7] A. Steed, et al. An Overview of the COVEN Platform. Presence: Teleoperators & Virtual Environments, 2001.

[8] M. V. Sanchez-Vives, et al. From presence to consciousness through virtual reality. Nature Reviews Neuroscience, 2005.

[9] J. L. Smith, et al. Semi-Structured Interviewing and Qualitative Analysis. 1995.

[10] M. W. Haas, et al. Navigating through virtual flight environments using brain-body-actuated control. Proceedings of the IEEE 1997 Annual International Symposium on Virtual Reality, 1997.

[11] M. Slater, et al. The Influence of Body Movement on Subjective Presence in Virtual Environments. Human Factors, 1998.

[12] I. E. Sutherland. The Ultimate Display. 1965.

[13] J. D. Bayliss, et al. Use of the evoked potential P3 component for control in a virtual apartment. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2003.

[14] J. Mourino, et al. Asynchronous BCI and local neural classifiers: an overview of the adaptive brain interface project. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2003.

[15] G. Pfurtscheller, et al. Brain-Computer Interfaces for Communication and Control. Communications of the ACM, 2011.

[16] G. Pfurtscheller, et al. Motor imagery and direct brain-computer communication. Proceedings of the IEEE, 2001.

[17] J. D. Cohen, et al. Rubber hands 'feel' touch that eyes see. Nature, 1998.

[18] M. Slater, et al. Depth of Presence in Virtual Environments. Presence: Teleoperators & Virtual Environments, 1994.

[19] J. J. Vidal. Toward direct brain-computer communication. Annual Review of Biophysics and Bioengineering, 1973.

[20] G. Pfurtscheller, et al. Visualization of significant ERD/ERS patterns in multichannel EEG and ECoG data. Clinical Neurophysiology, 2002.

[21] D. Thalmann, et al. Virtual Life Network: A Body-Centered Networked Virtual Environment. Presence: Teleoperators & Virtual Environments, 1997.

[22] G. Pfurtscheller, et al. Navigating Virtual Reality by Thought: What Is It Like? Presence: Teleoperators and Virtual Environments, 2007.

[23] J. D. Bayliss, et al. A virtual reality testbed for brain-computer interface research. IEEE Transactions on Rehabilitation Engineering, 2000.

[24] B. Shneiderman. Direct Manipulation: A Step Beyond Programming Languages. Computer, 1983.

[25] G. Calhoun, et al. Brain-computer interfaces based on the steady-state visual-evoked response. IEEE Transactions on Rehabilitation Engineering, 2000.