We propose a knowledge-representation architecture that allows a robot to learn arbitrarily complex, hierarchical, symbolic relationships between sensors and actuators. These relationships are encoded in high-dimensional, low-precision vectors that are highly robust to noise. Low-dimensional (single-bit) sensor values are projected into the high-dimensional representation space using low-precision random weights, and the appropriate actions are then computed by elementwise vector multiplication in that space. The resulting high-dimensional action representations are projected back down to low-dimensional actuator signals via a simple vector operation such as the dot product. As a proof of concept, we use the architecture to implement a behavior-based controller for a simulated robot with three sensors (a touch sensor and left and right light sensors) and two actuators (wheels). We conclude by discussing the prospects for deriving such representations automatically.
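The bind-and-read-out mechanism described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes bipolar (±1) hypervectors, a rule memory built by summing sensor–action bindings, and sensor/action names (`touch`, `light_L`, etc.) invented here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality makes independent random vectors near-orthogonal

def hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

# Illustrative sensor and action symbols (names are assumptions, not from the paper)
touch, light_L, light_R = hv(), hv(), hv()
actions = {"forward": hv(), "turn_left": hv(), "turn_right": hv()}

# Rule memory: bind each sensor to its action by elementwise multiplication,
# then superpose (sum) the bound pairs into a single vector.
memory = (touch * actions["turn_left"]
          + light_L * actions["turn_right"]
          + light_R * actions["turn_left"])

def act(sensor):
    """Unbinding: for bipolar vectors, elementwise multiplication is its own
    inverse, so memory * sensor yields a noisy copy of the paired action.
    Read out by picking the candidate action with the largest dot product."""
    noisy = memory * sensor
    return max(actions, key=lambda name: noisy @ actions[name])

print(act(light_L))  # → turn_right
```

Because the cross terms left over after unbinding are pseudo-random and near-orthogonal to every stored action vector, the dot-product readout recovers the correct action with overwhelming probability at this dimensionality, which is what makes the representation so noise-robust.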