A Toy Model of Universality: Reverse Engineering How Networks Learn Group Operations

Universality is a key hypothesis in mechanistic interpretability -- that different models learn similar features and circuits when trained on similar tasks. In this work, we study the universality hypothesis by examining how small neural networks learn to implement group composition. We present a novel algorithm by which neural networks may implement composition for any finite group via mathematical representation theory. By reverse engineering model logits and weights, we then show that networks consistently learn this algorithm, and we confirm our understanding using ablations. By studying networks of different architectures trained on various groups, we find mixed evidence for universality: using our algorithm, we can completely characterize the family of circuits and features that networks learn on this task, but for a given network the precise circuits learned -- as well as the order they develop -- are arbitrary.
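The key mathematical fact underlying a representation-theoretic approach to this task is that composition of group elements can be carried out by multiplying their representation matrices: rho(a) @ rho(b) = rho(a*b). The sketch below is a minimal illustration of that idea in Python, not the paper's exact construction: it uses the permutation-matrix representation of S3 and a trace-based score over candidate outputs that peaks at the correct product. The choice of S3, the permutation representation, and the scoring function are illustrative assumptions, not details taken from the abstract.

```python
# Illustrative sketch (not the paper's exact algorithm): group composition in S3
# via permutation matrices, and a trace-based score that is maximal at c = a*b.
import itertools
import numpy as np

def perm_matrix(perm):
    """Permutation matrix rho(g) for a permutation given as a tuple, e.g. (1, 2, 0)."""
    n = len(perm)
    m = np.zeros((n, n))
    for i, j in enumerate(perm):
        m[j, i] = 1.0  # column i maps basis vector e_i to e_{perm[i]}
    return m

def compose(a, b):
    """Group composition (a after b) on permutations."""
    return tuple(a[b[i]] for i in range(len(b)))

# All 6 elements of S3 as permutations of {0, 1, 2}.
elements = list(itertools.permutations(range(3)))

a, b = elements[1], elements[4]
true_c = compose(a, b)

# Composition of elements corresponds to multiplication of their representation matrices.
assert np.allclose(perm_matrix(a) @ perm_matrix(b), perm_matrix(true_c))

# Score each candidate output c by tr(rho(a) rho(b) rho(c)^T).  The permutation
# representation is orthogonal, so rho(c)^T = rho(c)^{-1}, and the trace equals
# the number of fixed points of a*b*c^{-1}, which is largest exactly when c = a*b.
scores = {c: np.trace(perm_matrix(a) @ perm_matrix(b) @ perm_matrix(c).T)
          for c in elements}
assert max(scores, key=scores.get) == true_c
```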
