Brain-inspired visual processing for robust gesture recognition

Abstract Vision remains one of the most challenging problems in current robot technology. For robots to be used in everyday life, they must be able to recognize natural scenes with compact hardware. In addition, robots should be able to communicate with humans through gesture as well as speech, so that anyone can operate them easily. One of our research objectives is therefore to develop visual processing techniques for markerless gesture recognition. This paper presents achievements of our project team on an arm-posture recognition algorithm, brain-inspired VLSI vision systems for face/object recognition, and a psychophysics-based model for segmenting objects from backgrounds.