Intelligent document understanding: Understanding photographs with captions

The interaction of textual and photographic information in document understanding is explored. Specifically, a computational model is presented whereby textual captions are used as collateral information in the interpretation of the corresponding photographs. The final understanding of the picture and caption reflects a consolidation of the information obtained from the two sources and can thus be used in intelligent information retrieval tasks. The problem of performing general-purpose vision without a priori knowledge is extremely difficult. The concept of using collateral information in scene understanding has been explored in systems that use general scene context in the task of object identification. The work described extends this notion by incorporating picture-specific information. Finally, as a test of the model, a multi-stage system, PICTION, which uses captions to identify humans in an accompanying photograph, is described. This provides a computationally less expensive alternative to traditional methods of face recognition, since it does not require a pre-stored database of face models for all people to be identified.
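To illustrate the kind of caption-to-picture binding the abstract describes, the following is a minimal sketch, not the PICTION implementation, of using one form of collateral information a caption can supply (a left-to-right ordering of named people) to label face candidates produced by a vision module. The data structures, function names, and the single spatial constraint are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: NOT the PICTION system. It shows how a caption-derived
# left-to-right ordering constraint (one kind of collateral information) can bind
# names to detected face candidates without any pre-stored face models.

from dataclasses import dataclass
from itertools import permutations
from typing import Optional


@dataclass
class FaceCandidate:
    """A face region hypothesized by a vision module (x-center of its bounding box)."""
    x_center: float


def bind_names_to_faces(
    names_left_to_right: list[str],
    candidates: list[FaceCandidate],
) -> Optional[dict[str, FaceCandidate]]:
    """Assign each named person to a face candidate such that the assignment
    respects the caption's left-to-right ordering.

    Returns a name -> candidate mapping, or None if no consistent binding exists
    (e.g., fewer detected faces than named people).
    """
    if len(candidates) < len(names_left_to_right):
        return None

    # Try each way of choosing one candidate per name; accept an assignment whose
    # chosen candidates appear in strictly increasing x order (left to right).
    for chosen in permutations(candidates, len(names_left_to_right)):
        xs = [c.x_center for c in chosen]
        if all(a < b for a, b in zip(xs, xs[1:])):
            return dict(zip(names_left_to_right, chosen))
    return None


if __name__ == "__main__":
    # Hypothetical caption: "Left to right: Alice, Bob, Carol." with three detected faces.
    caption_names = ["Alice", "Bob", "Carol"]
    faces = [FaceCandidate(0.72), FaceCandidate(0.18), FaceCandidate(0.45)]
    binding = bind_names_to_faces(caption_names, faces)
    if binding:
        for name, face in binding.items():
            print(f"{name} -> face at x={face.x_center:.2f}")
```

In this toy setting, identification reduces to satisfying the caption's spatial constraint over detected regions, which is why no database of face models is needed; the abstract's consolidation of textual and photographic information would in practice involve richer constraints than the single ordering used here.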