Multi-modal visual attention for robotics active vision systems - A reference architecture
[1] Mark H. Lee, et al. Fast Learning Mapping Schemes for Robotic Hand–Eye Coordination, 2010, Cognitive Computation.
[2] Mary M. Hayhoe, et al. Task and context determine where you look, 2016, Journal of Vision.
[3] Danica Kragic, et al. An Active Vision System for Detecting, Fixating and Manipulating Objects in the Real World, 2010, Int. J. Robotics Res.
[4] Kevin N. Gurney, et al. The Basal Ganglia and Cortex Implement Optimal Decision Making Between Alternative Actions, 2007, Neural Computation.
[5] Mark Lee, et al. Robotic hand-eye coordination without global reference: A biologically inspired learning scheme, 2009, 2009 IEEE 8th International Conference on Development and Learning.
[6] Mark H. Lee, et al. Integration of Active Vision and Reaching From a Developmental Robotics Perspective, 2010, IEEE Transactions on Autonomous Mental Development.
[7] Mark H. Lee, et al. A developmental algorithm for ocular-motor coordination, 2010, Robotics Auton. Syst.
[8] Frédéric Alexandre, et al. Cortical basis of communication: Local computation, coordination, attention, 2009, Neural Networks.
[9] C. Koch, et al. A saliency-based search mechanism for overt and covert shifts of visual attention, 2000, Vision Research.