Wearable Technology for “Real-World Research”: Realistic or Not?

Research conducted in laboratories is unlikely to be representative of human functioning in the "real world." Or so one is led to believe when one sifts through the psychological literature of the last century. Discussions about the generalisability of laboratory research beyond its confines have arisen in the fields of perception, attention, memory, and social cognition, as well as in many applied sciences (see Holleman et al., 2020b, for an overview). More often than not, papers advocating research in the "real world" are published in high-impact journals. Of course, this is in itself no reason for particular concern. What does concern us is that some of these papers make unrealistic claims about how "real-world research," whatever its precise definition, is to be conducted. The fact that such claims are made in high-impact journals is worrisome, as papers in these journals are often the basis for grant applications or serve as a starting point for new researchers.

A recent claim that particularly concerns us is that techniques such as wearable eye trackers, mobile electroencephalography (EEG), and mobile functional near-infrared spectroscopy (fNIRS) will allow researchers to finally conduct "real-world research" (Pérez-Edgar et al., 2020; Shamay-Tsoory & Mendelsohn, 2019). The gist is that with wearable technology, one can freely roam the world while (eye-movement) behaviour and its neural correlates are continuously measured, allowing for profound insights into human perception and cognition. In our view, there is a stark contrast between how this claim is presented and the practical reality of the empirical work. We have often found that researchers vastly underestimate the difficulty of conducting research with wearable technology, whether it is conducted inside or outside of the laboratory.

Using wearable eye tracking as an example, we show that there are theoretical, conceptual, and methodological problems when moving from classical laboratory-based research to research using wearable technology, problems of which many researchers seem to be unaware. Although our focus is on eye-tracking research, as that is where our expertise lies, we expect that similar concerns can be raised about research using mobile EEG or mobile fNIRS.

The fact that some researchers overestimate what wearable eye trackers can deliver is perhaps best illustrated by a quote from a recent paper advocating the use of wearable technology for "real-world research." Shamay-Tsoory & Mendelsohn (2019) state that ". . . newly available portable eye-tracking systems offer a cost-effective, easy to apply, and reliable measure of eye gaze and saccades in an ecological environment" (p. 853, our emphasis). If this were true, wearable eye trackers might indeed be wonderful tools for studying "real-life" behaviour. However, we contest these three postulated characteristics of wearable eye trackers.