Supplemental Information for Design Principles of the Hippocampal Cognitive Map

The SR-based critic learns an estimate of the value function, using the SR as its feature representation. Unlike standard actor-critic methods, the critic does not use reward-based temporal difference errors to update its value estimate; instead, it relies on the fact that the value function is given by V(s) = ∑_s′ M(s, s′) R(s′), where M is the successor representation and R is the expected reward in each state. Thus, learning M and R is sufficient to construct an estimate of V.
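The decomposition V(s) = ∑_s′ M(s, s′) R(s′) can be illustrated with a minimal sketch: M is learned by temporal-difference updates on state occupancies, R by a running average of observed rewards, and the value estimate falls out as a matrix-vector product. The toy three-state chain, the reward placement, and the learning rates below are illustrative assumptions, not details from the paper.

```python
import numpy as np

n_states = 3
gamma = 0.9                    # discount factor
alpha_M, alpha_R = 0.1, 0.1    # learning rates for M and R (assumed values)

M = np.eye(n_states)           # successor representation, initialized to identity
R = np.zeros(n_states)         # per-state expected reward estimates

rng = np.random.default_rng(0)
for _ in range(20000):
    s = rng.integers(n_states)
    s_next = (s + 1) % n_states        # deterministic chain 0 -> 1 -> 2 -> 0
    r = 1.0 if s_next == 2 else 0.0    # reward delivered on entering state 2

    # TD update for the SR row of s:
    # M(s, :) <- M(s, :) + alpha * (1[s] + gamma * M(s', :) - M(s, :))
    one_hot = np.eye(n_states)[s]
    M[s] += alpha_M * (one_hot + gamma * M[s_next] - M[s])

    # Running-average update for the expected reward in the entered state
    R[s_next] += alpha_R * (r - R[s_next])

# Value estimate from the learned components: V(s) = sum_{s'} M(s, s') R(s')
V = M @ R
print(V)
```

Because the critic never forms a reward-prediction TD error for V itself, a change in R (e.g. moving the reward) immediately propagates to the value estimate through the fixed M, which is the computational advantage the decomposition buys.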