Preface
Until recently, neuroscience and robotics were utterly unrelated subjects, requiring completely different backgrounds, skills, and methodologies. Nowadays, the distance between the two fields is constantly being shortened by progress in computational modeling and by the construction of increasingly skilled autonomous artificial agents inspired by the abilities and behavior of living beings. The astounding discoveries recently achieved by brain scientists constitute the fundamental building blocks of computational neuroscience and biomimetic robotics. This book presents interdisciplinary research that pursues the mutual enrichment of neuroscience and robotics.

Grasping and manipulating objects of every kind is arguably the most distinctive practical skill of human beings, and erect posture has likely evolved in order to free the upper limbs and make the hands two unmatched tools. Despite the great effort devoted to it, grasping in robotics remains largely an unsolved problem, owing to its inherent complexity and to the still limited adaptive skills of present-day robots in visual and visuomotor behaviors. In our approach, the task of object grasping is dealt with by mimicking, as accurately as possible, the brain mechanisms that underlie the planning and execution of grasping actions in humans and other skilled primates.

The principal contribution of the presented research is the definition and implementation of a functional model of the brain areas involved in vision-based grasping actions. The model constitutes a bridge between cognitive science and robotics research, and includes all the steps required to perform a successful grasping action from visual data. The subdivision of visual processing into the dorsal and ventral cortical streams, dedicated respectively to action-oriented and perception-oriented vision, is thoroughly taken into account. Hypotheses are put forth regarding the mechanisms that make complex interactions with the peripersonal space possible through the integration of the data provided by the two streams. Transfer functions are proposed for modeling the visuomotor transformations performed by the brain areas most critical in grasp planning and execution. The particular attention paid to the functional role of brain areas makes the model especially suitable for implementation on a real robotic setup, and a full implementation of the model on a robotic platform is described.
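To make the overall structure of the approach concrete, the following is a minimal, purely illustrative Python sketch of a two-stream grasp-planning pipeline of the kind outlined above: a dorsal (action-oriented) pathway extracts spatial features, a ventral (perception-oriented) pathway extracts object knowledge, and an integration step produces a grasp plan. All names and parameters here (DorsalStream outputs, plan_grasp, the fragility-to-force rule, and so on) are hypothetical stand-ins, not the model or code described in the book.

```python
from dataclasses import dataclass


@dataclass
class ActionFeatures:
    """Dorsal-stream output: spatial, action-oriented object description."""
    position: tuple      # estimated 3D position of the object
    orientation: float   # principal-axis orientation (radians)
    width: float         # graspable width (metres)


@dataclass
class PerceptFeatures:
    """Ventral-stream output: perception-oriented object description."""
    category: str        # recognized object class
    fragility: float     # prior knowledge used to modulate grip force


def dorsal_stream(visual_data: dict) -> ActionFeatures:
    # Action-oriented processing: where the object is and how it can be acted upon.
    return ActionFeatures(
        position=visual_data["position"],
        orientation=visual_data["orientation"],
        width=visual_data["width"],
    )


def ventral_stream(visual_data: dict) -> PerceptFeatures:
    # Perception-oriented processing: what the object is.
    return PerceptFeatures(
        category=visual_data["category"],
        fragility=visual_data.get("fragility", 0.5),
    )


def plan_grasp(action: ActionFeatures, percept: PerceptFeatures) -> dict:
    # Integration of the two streams: geometry sets the hand configuration,
    # object knowledge modulates the applied force. This is only a toy
    # placeholder for the visuomotor transfer functions discussed in the book.
    aperture = action.width + 0.02            # pre-shape slightly wider than the object
    force = (1.0 - percept.fragility) * 10.0  # gentler grip for fragile objects
    return {
        "target": action.position,
        "hand_orientation": action.orientation,
        "aperture": aperture,
        "grip_force": force,
    }


if __name__ == "__main__":
    scene = {
        "position": (0.4, 0.1, 0.05),
        "orientation": 1.2,
        "width": 0.06,
        "category": "glass",
        "fragility": 0.9,
    }
    print(plan_grasp(dorsal_stream(scene), ventral_stream(scene)))
```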