Consistent Modeling of Functional Dependencies along with World Knowledge

In this paper we propose a method that allows vision systems to consistently represent functional dependencies between different visual routines together with relational short- and long-term knowledge about the world. Here, the visual routines are bound to the visual properties of objects stored in the system's memory. Furthermore, the functional dependencies between the visual routines are represented as a graph that also belongs to the object's structure. This graph is parsed whenever a visual property of an object is acquired, automatically resolving the dependencies of the bound visual routines. Using this representation, the system is able to dynamically rearrange the processing order while preserving its functionality. Additionally, the system can estimate the overall computational cost of a given action. We also show that the system can efficiently use this structure to incorporate already acquired knowledge and thus reduce the computational demand.

Keywords—Adaptive systems, Knowledge representation, Machine vision, Systems engineering
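
As a purely illustrative sketch, not part of the paper's implementation, the idea of a dependency graph bound to visual routines could look roughly as follows. The names `VisualRoutine`, `resolve_order`, and `total_cost`, as well as the per-routine costs, are hypothetical and chosen only to illustrate dependency resolution, cost estimation, and the reuse of already acquired knowledge.

```python
# Illustrative sketch only -- names and structure are assumptions, not the paper's API.
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class VisualRoutine:
    """A visual routine bound to an object property, with explicit dependencies."""
    name: str
    cost: float                              # assumed per-routine computational cost
    depends_on: List[str] = field(default_factory=list)


def resolve_order(routines: Dict[str, VisualRoutine], target: str) -> List[str]:
    """Depth-first traversal of the dependency graph: returns an execution
    order in which every routine runs after the routines it depends on."""
    order: List[str] = []
    visited: Set[str] = set()

    def visit(name: str) -> None:
        if name in visited:
            return
        visited.add(name)
        for dep in routines[name].depends_on:
            visit(dep)
        order.append(name)

    visit(target)
    return order


def total_cost(routines: Dict[str, VisualRoutine], target: str,
               already_known: Set[str] = frozenset()) -> float:
    """Estimated cost of acquiring a property, skipping routines whose
    results are already stored in memory (acquired knowledge)."""
    return sum(routines[name].cost
               for name in resolve_order(routines, target)
               if name not in already_known)


if __name__ == "__main__":
    graph = {
        "segment":  VisualRoutine("segment",  cost=5.0),
        "localize": VisualRoutine("localize", cost=2.0, depends_on=["segment"]),
        "color":    VisualRoutine("color",    cost=1.0, depends_on=["localize"]),
    }
    print(resolve_order(graph, "color"))            # ['segment', 'localize', 'color']
    print(total_cost(graph, "color"))               # 8.0
    print(total_cost(graph, "color", {"segment"}))  # 3.0 -- stored result reused
```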