Development of Symbiotic Brain-Machine Interfaces Using a Neurophysiology Cyberworkstation

We seek to develop a new generation of brain-machine interfaces (BMIs) that enable the user and the computer to engage in a symbiotic relationship in which they must co-adapt to each other to solve goal-directed tasks. Such a framework would allow the possibility of real-time understanding and modeling of brain behavior and adaptation to a changing environment, a major departure from the offline learning, static models, or one-way adaptive models used in conventional BMIs. Achieving such a symbiotic architecture requires a computing infrastructure that can accommodate multiple neural systems, respond within the processing deadlines of sensorimotor information, and provide powerful computational resources for designing new modeling approaches. To address these issues, we present our ongoing work in the development of a neurophysiology Cyberworkstation for BMI design.
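
To make the co-adaptive loop concrete, the following is a minimal sketch (not the authors' implementation) of a closed-loop decoder that adapts from task feedback while respecting a sensorimotor processing deadline. All names and parameters here (N_NEURONS, N_ACTIONS, DEADLINE_S, the reward function, and the reward-modulated update) are illustrative assumptions, not details from the paper.

```python
import time
import numpy as np

# Hypothetical parameters -- illustrative only, not taken from the paper.
N_NEURONS = 32          # number of recorded units
N_ACTIONS = 4           # discrete cursor/prosthetic actions
DEADLINE_S = 0.100      # assumed sensorimotor processing deadline (100 ms)
LEARNING_RATE = 0.05

rng = np.random.default_rng(0)
weights = np.zeros((N_ACTIONS, N_NEURONS))  # linear action-value decoder


def read_neural_features():
    """Stand-in for binned firing rates streamed from the recording system."""
    return rng.poisson(5.0, size=N_NEURONS).astype(float)


def environment_reward(action):
    """Stand-in for task feedback (e.g., progress toward the goal)."""
    return 1.0 if action == 0 else -0.1


for trial in range(200):
    t_start = time.monotonic()

    features = read_neural_features()

    # Decode: pick the action with the highest estimated value (greedy here;
    # an epsilon-greedy or softmax policy would allow exploration).
    q_values = weights @ features
    action = int(np.argmax(q_values))

    # Task feedback drives adaptation on the machine side, while the user
    # adapts by observing the outcome of the decoded action.
    reward = environment_reward(action)

    # Simple reward-modulated update of the chosen action's weights.
    td_error = reward - q_values[action]
    weights[action] += LEARNING_RATE * td_error * features

    # Enforce the real-time budget: decode and update must fit in the deadline.
    elapsed = time.monotonic() - t_start
    if elapsed > DEADLINE_S:
        print(f"trial {trial}: missed {DEADLINE_S * 1e3:.0f} ms deadline")
```

In a deployed system the feature source, reward signal, and decoder would be replaced by the actual recording pipeline, task definition, and model running on the Cyberworkstation; the sketch only illustrates the two-sided adaptation loop and the deadline constraint described above.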
