We are building a musical performance assistance system that uses visual information to supplement musical communication among deaf and hard-of-hearing people playing music together, enabling them to enjoy expressing a particular emotion through music. To improve communication in this system, we previously collected drawings whose creators each had a specific emotion in mind; the system presents these drawings to the players to help them focus on expressing that emotion. We therefore need a better understanding of which image properties relate to each emotion. After computing the principal components of the image properties, we clustered the drawings into groups, each representing one of four emotions. We also asked viewers to categorize the drawings according to the emotion each one elicited. The results showed that, when a drawing was not abstract, viewers based their judgment on its specific meaning rather than its shape. The drawings intended to represent fear were grouped well by clustering, whereas viewers categorized them as either fear or sadness. These results imply that we can add new drawings with intended emotions to our system as long as their image properties are consistent with our analysis.
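The analysis pipeline described above, principal component analysis of image properties followed by clustering into four emotion groups, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the feature names, the number of components, and the use of k-means as the clustering method are all assumptions.

```python
# Hedged sketch of the described pipeline: PCA on image-property features,
# then clustering the drawings into four emotion groups.
# The data and feature set here are illustrative placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical image-property features per drawing
# (e.g., stroke density, curvature, symmetry), 40 drawings x 6 properties.
features = rng.random((40, 6))

# Reduce the image properties to their principal components.
pca = PCA(n_components=2)
components = pca.fit_transform(features)

# Cluster the drawings into four groups, one per target emotion.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(components)

# One cluster label per drawing; each cluster is interpreted as one emotion.
print(components.shape, labels.shape)
```

A new drawing's image properties can then be projected with `pca.transform` and assigned to the nearest cluster, matching the paper's suggestion that new drawings can be added when their properties fit the analysis.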