Automated creation of visual routines using genetic programming

Traditional machine vision assumes that the vision system continually recovers a complete, labeled description of the world from the visual field. Many researchers have criticized this model and proposed an alternative that treats visual perception as a distributed collection of task-specific, context-driven visual routines. Ullman's (1984) visual routines model of intermediate vision describes one way this might be accomplished. To date, most researchers have hand-coded task-specific visual routines for actual implementations of systems requiring simple vision. We propose an alternative approach in which visual routines are created by artificial evolution, a form of supervised learning. We present results from a series of runs on a simple vision problem using real camera data, in which simple Ullman-like visual routines were evolved using genetic programming. The evolved routines were accurate and generalized well.
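The general mechanism referred to here can be illustrated with a toy genetic-programming loop. This is a generic sketch under stated assumptions, not the system described above: the primitive set (boolean `and`/`or`/`not` over a 2x2 binary patch), the supervised target task (detect a majority of "on" pixels), the elitist selection scheme, and all function names are illustrative choices made for this example only.

```python
import random

# Primitive set: boolean functions plus the four pixels of a 2x2 binary
# "patch" (a toy stand-in for visual-routine primitives; illustrative only).
FUNCS = {"and": 2, "or": 2, "not": 1}
TERMS = ["p0", "p1", "p2", "p3"]

def random_tree(depth=3):
    """Grow a random program tree of bounded depth."""
    if depth == 0 or (depth < 3 and random.random() < 0.3):
        return random.choice(TERMS)
    f = random.choice(list(FUNCS))
    return [f] + [random_tree(depth - 1) for _ in range(FUNCS[f])]

def evaluate(tree, patch):
    """Run a program tree on one input patch."""
    if isinstance(tree, str):
        return patch[int(tree[1])]
    f, *args = tree
    vals = [evaluate(a, patch) for a in args]
    if f == "and":
        return vals[0] and vals[1]
    if f == "or":
        return vals[0] or vals[1]
    return not vals[0]

# Supervised training set: every 4-pixel patch, labeled by the target task
# "at least 3 of the 4 pixels are on" (an assumed toy task, not from the paper).
PATCHES = [tuple((i >> k) & 1 for k in range(4)) for i in range(16)]
def target(patch):
    return sum(patch) >= 3

def fitness(tree):
    """Number of patches classified correctly (max 16)."""
    return sum(evaluate(tree, p) == target(p) for p in PATCHES)

def nodes(tree, path=()):
    """Yield the path to every node, for crossover/mutation point selection."""
    yield path
    if isinstance(tree, list):
        for i, a in enumerate(tree[1:], 1):
            yield from nodes(a, path + (i,))

def get(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace(tree, path, sub):
    """Return a copy of tree with the subtree at path swapped for sub."""
    if not path:
        return sub
    tree = list(tree)
    tree[path[0]] = replace(tree[path[0]], path[1:], sub)
    return tree

def crossover(a, b):
    """Graft a random subtree of b onto a random point of a."""
    return replace(a, random.choice(list(nodes(a))),
                   get(b, random.choice(list(nodes(b)))))

def mutate(tree):
    """Replace a random node with a fresh random subtree."""
    return replace(tree, random.choice(list(nodes(tree))), random_tree(2))

def evolve(pop_size=200, gens=30):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        if fitness(scored[0]) == len(PATCHES):
            return scored[0]
        elite = scored[: pop_size // 4]          # keep the top quarter
        children = [crossover(random.choice(elite), random.choice(elite))
                    for _ in range(pop_size - len(elite))]
        pop = elite + [mutate(c) if random.random() < 0.2 else c
                       for c in children]
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
print("best fitness:", fitness(best), "/", len(PATCHES))
```

The evolved individual is an executable program tree rather than a parameter vector, which is the defining feature of genetic programming; fitness is computed against labeled examples, making the loop a supervised learning procedure in the sense used above.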