The interaction of textual and photographic information in document understanding is explored. Specifically, a computational model is presented in which textual captions serve as collateral information in the interpretation of the corresponding photographs. The final understanding of the picture and caption reflects a consolidation of the information obtained from each of the two sources and can thus be used in intelligent information-retrieval tasks. General-purpose vision without a priori knowledge remains a very difficult problem. The use of collateral information in scene understanding has previously been explored in systems that exploit general scene context for object identification; the work described here extends that notion by incorporating picture-specific information. Finally, as a test of the model, PICTION, a multi-stage system that uses captions to identify humans in an accompanying photograph, is described. It provides a computationally less expensive alternative to traditional face-recognition methods, since it does not require a pre-stored database of face models for every person to be identified.
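To make the caption-as-collateral-information idea concrete, the following is a minimal sketch, assuming a caption parser has already extracted the named individuals and a left-to-right ordering constraint, and a face locator has produced candidate regions. The names `caption_names`, `face_candidates`, and `label_faces` are hypothetical illustrations, not PICTION's actual modules; the exhaustive constraint check below is one simple way to cast the name-to-face assignment as a consistent-labeling search.

```python
from itertools import permutations

# Hypothetical output of caption parsing for a caption such as
# "Jones, left, and Smith": the names mentioned, plus a spatial
# constraint derived from the caption text.
caption_names = ["Jones", "Smith"]
ordered_left_to_right = True

# Hypothetical output of a face locator: candidate face regions,
# each with the x-coordinate of its center in the image.
face_candidates = [
    {"id": "face_a", "x_center": 120},
    {"id": "face_b", "x_center": 340},
]

def satisfies_constraints(assignment):
    """Check the caption-derived spatial constraint for one labeling."""
    if ordered_left_to_right:
        xs = [face["x_center"] for _, face in assignment]
        return xs == sorted(xs)  # names must run left to right
    return True

def label_faces(names, faces):
    """Search name-to-face assignments; a consistent-labeling view."""
    for perm in permutations(faces, len(names)):
        assignment = list(zip(names, perm))
        if satisfies_constraints(assignment):
            return {name: face["id"] for name, face in assignment}
    return None  # caption and picture could not be consolidated

print(label_faces(caption_names, face_candidates))
# -> {'Jones': 'face_a', 'Smith': 'face_b'}
```

The point of the sketch is that no face model is needed: the caption supplies identities and constraints, and vision only needs to locate candidate faces, which is what makes the approach cheaper than model-based face recognition.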