Use of collateral text in understanding photos in documents

This research explores the interaction of textual and photographic information in document understanding. Specifically, it presents a computational model whereby textual captions are used as collateral information in the interpretation of the corresponding photographs. The final understanding of the picture and caption reflects a consolidation of the information obtained from each of the two sources and can thus be used in intelligent information-retrieval tasks. The problem of performing general-purpose vision without a priori knowledge is difficult at best. The concept of using collateral information in scene understanding has been explored in systems that use general scene context in the task of object identification. The work described here extends this notion by incorporating picture-specific information. A multistage system, PICTION, which uses captions to identify humans in an accompanying photograph, is described. This provides a computationally less expensive alternative to traditional methods of face recognition, since it does not require a prestored database of face models for all people to be identified. A key component of the system is the utilization of spatial and characteristic constraints (derived from the caption) in labeling face candidates (generated by a face locator).
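The final step above, assigning caption names to face candidates under spatial constraints, can be framed as a small constraint-satisfaction problem. The following is a minimal illustrative sketch, not the paper's actual implementation: the candidate positions, names, and constraint format ("left_of", "above") are all assumptions made here for demonstration.

```python
# Hypothetical sketch of caption-driven face labeling via constraint
# satisfaction, in the spirit of the system described above. All names,
# positions, and the relation vocabulary are illustrative assumptions.
from itertools import permutations

def label_faces(names, candidates, constraints):
    """Assign each name to a distinct face candidate such that every
    spatial constraint derived from the caption holds.

    candidates:  dict candidate_id -> (x, y) image position
    constraints: list of (name_a, relation, name_b) triples,
                 relation in {"left_of", "above"}
    Returns a dict name -> candidate_id, or None if unsatisfiable.
    """
    ids = list(candidates)
    for perm in permutations(ids, len(names)):
        labeling = dict(zip(names, perm))
        if all(_holds(rel, candidates[labeling[a]], candidates[labeling[b]])
               for a, rel, b in constraints):
            return labeling
    return None

def _holds(relation, pos_a, pos_b):
    # Image coordinates: x grows rightward, y grows downward.
    if relation == "left_of":
        return pos_a[0] < pos_b[0]
    if relation == "above":
        return pos_a[1] < pos_b[1]
    return False

# Caption: "Alice (left) and Bob"; a face locator found two candidates.
faces = {"f1": (220, 90), "f2": (60, 100)}
result = label_faces(["Alice", "Bob"], faces,
                     [("Alice", "left_of", "Bob")])
print(result)  # {'Alice': 'f2', 'Bob': 'f1'}
```

The brute-force search over permutations is only for clarity; a real system would prune candidates incrementally as each constraint is applied.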