Robot Gesture Generation from Environmental Sounds Using Inter-modality Mapping
We propose a motion generation model in which a robot infers the sound source of an environmental sound and imitates that source's motion. Sharing environmental sounds between humans and robots allows them to share environmental information, yet it is difficult to convey environmental sounds directly in human-robot communication. We approach this problem by focusing on iconic gestures: the robot infers the motion of the sound-source object and maps it onto its own body motion. This method enables robots to imitate the motion of the sound source with their bodies.
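To illustrate the inter-modality mapping described above, the following is a minimal sketch, not the authors' implementation: it assumes a toy pipeline in which (1) an environmental sound is assigned a hypothetical sound-source label, (2) the label is mapped to a coarse motion pattern of the source object, and (3) that pattern is retargeted to a small set of robot joint angles. All labels, motion patterns, joint names, and gains are illustrative placeholders.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class MotionPattern:
    """Coarse motion of the presumed sound source, as 2-D displacements per step."""
    displacements: List[Tuple[float, float]]  # (dx, dy) per time step


def classify_sound(sound_id: str) -> str:
    """Hypothetical sound classifier; a real system would use acoustic features."""
    toy_labels = {"clip_door": "door", "clip_ball": "bouncing_ball"}
    return toy_labels.get(sound_id, "unknown")


# Hypothetical mapping from sound-source label to the source object's motion.
SOURCE_MOTIONS: Dict[str, MotionPattern] = {
    "door": MotionPattern([(0.1, 0.0)] * 5),                       # swinging open
    "bouncing_ball": MotionPattern([(0.0, 0.2), (0.0, -0.2)] * 3), # bouncing up/down
}


def map_to_robot_gesture(pattern: MotionPattern) -> List[Dict[str, float]]:
    """Retarget source displacements to robot joint-angle increments (inter-modality mapping)."""
    gesture = []
    for dx, dy in pattern.displacements:
        # Scale object displacements into joint-angle increments; gains are arbitrary.
        gesture.append({"shoulder_pitch": 1.5 * dy, "shoulder_yaw": 1.5 * dx})
    return gesture


if __name__ == "__main__":
    label = classify_sound("clip_ball")
    if label in SOURCE_MOTIONS:
        gesture = map_to_robot_gesture(SOURCE_MOTIONS[label])
        print(f"sound source: {label}, gesture frames: {len(gesture)}")
```

In this sketch the gesture is expressed as a sequence of joint-angle increments, so it could be replayed on any robot whose joints roughly correspond to the placeholder names; the actual mapping used in the paper may differ.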