Linguistic expressions of picture information considering connections between pictures

This paper describes the construction of a system that generates linguistic expressions explaining the circumstances of people or objects appearing in multiple pictures, as a means of modeling one form of human intellectual information processing. The system outputs linguistic expressions describing the emotions of a human object, the positional relationships between objects, the apparent behavior of an animate object, or the relationships connecting the pictures, based on objective information such as an object's facial expression, position, and size. To implement the system, soft computing techniques such as neural networks, fuzzy reasoning, and case-based reasoning are used. Neural networks recognize the degree of emotion from the facial expression of a human object, and fuzzy reasoning infers the degree to which an animate object can discern another object. Case-based reasoning explains the apparent behavior of an object and the relationships connecting the pictures. Fuzzy sets convert the obtained information into the linguistic expressions that are output. Finally, the effectiveness of the constructed system was confirmed through simulations and evaluation experiments. © 2003 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 86(12): 38–53, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.10141
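
To illustrate the final conversion step mentioned in the abstract, the following is a minimal sketch of how a numeric degree (for example, an emotion score produced by the facial-expression neural network) might be mapped to a linguistic label using fuzzy sets. The label names, membership functions, and thresholds here are illustrative assumptions, not the membership functions actually defined in the paper.

```python
# Hypothetical sketch: converting a numeric emotion degree in [0, 1] into a
# linguistic expression via fuzzy sets. All labels and parameters are
# illustrative assumptions, not the paper's actual definitions.

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Fuzzy sets over the emotion degree; each tuple is (left foot, peak, right foot).
FUZZY_SETS = {
    "slightly happy": (0.0, 0.25, 0.5),
    "happy":          (0.25, 0.5, 0.75),
    "very happy":     (0.5, 0.75, 1.0),
}

def to_linguistic(degree):
    """Return the label whose fuzzy set the degree belongs to most strongly."""
    memberships = {label: triangular(degree, *params)
                   for label, params in FUZZY_SETS.items()}
    return max(memberships, key=memberships.get)

if __name__ == "__main__":
    # Example: three hypothetical emotion degrees and their linguistic labels.
    for d in (0.2, 0.55, 0.9):
        print(d, "->", to_linguistic(d))
```

In this sketch the label with the highest membership value is chosen; a fuller treatment could instead report graded expressions (e.g., attaching hedges such as "somewhat" or "very" in proportion to the membership value), which is closer in spirit to the fuzzy-set-based conversion the abstract describes.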