Research on computer vision systems for driver assistance has resulted in a variety of isolated
approaches that mainly perform highly specialized tasks such as lane keeping or traffic sign
detection. For a full understanding of generic traffic situations, however, integrated and flexible approaches are needed. Here, we present a highly integrated vision architecture for an
advanced driver assistance system inspired by human cognitive principles. The system uses
an attention system as the flexible and generic front-end for all visual processing, allowing
a task-specific scene decomposition as well as a search for known objects (based on a short-term
memory) and for generic object classes (based on a long-term memory). Knowledge fusion,
e. g., between an internal 3D representation and a reliable road-detection module, improves
system performance. The system relies heavily on top-down links to modulate lower
processing levels, resulting in high system robustness.
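As an illustration only (not taken from the paper), the following Python sketch shows the general idea of top-down, task-specific modulation of a bottom-up saliency map, as commonly used in attention-based vision front-ends; all function names, feature maps, and weights here are hypothetical assumptions.

```python
# Conceptual sketch (not from the paper): top-down, task-specific weighting
# of bottom-up feature maps into a single saliency map. Names and weights
# are illustrative assumptions only.
import numpy as np

def bottom_up_features(image: np.ndarray) -> dict:
    """Toy bottom-up feature maps: intensity and horizontal/vertical edge energy."""
    gray = image.mean(axis=2)
    dy, dx = np.gradient(gray)
    return {
        "intensity": gray / (gray.max() + 1e-6),
        "edges_x": np.abs(dx) / (np.abs(dx).max() + 1e-6),
        "edges_y": np.abs(dy) / (np.abs(dy).max() + 1e-6),
    }

def top_down_weights(task: str) -> dict:
    """Hypothetical task-specific weights, e.g. retrieved from a long-term memory."""
    if task == "traffic_sign_search":
        return {"intensity": 0.2, "edges_x": 0.4, "edges_y": 0.4}
    return {"intensity": 1.0, "edges_x": 1.0, "edges_y": 1.0}  # neutral / generic task

def modulated_saliency(image: np.ndarray, task: str) -> np.ndarray:
    """Combine bottom-up feature maps under top-down, task-specific weighting."""
    feats = bottom_up_features(image)
    weights = top_down_weights(task)
    saliency = sum(weights[name] * fmap for name, fmap in feats.items())
    return saliency / (saliency.max() + 1e-6)

if __name__ == "__main__":
    img = np.random.rand(120, 160, 3)  # placeholder camera frame
    sal = modulated_saliency(img, "traffic_sign_search")
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    print(f"Most salient location under task bias: ({y}, {x})")
```

The point of the sketch is the separation of concerns: the bottom-up maps stay generic, while the task bias, which in a full system would come from short-term or long-term memory, is applied purely through the weighting stage.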