The Omniglot challenge: a 3-year progress report

Three years ago, we released the Omniglot dataset for one-shot learning, along with five challenge tasks and a computational model that addresses these tasks. The model was not meant to be the final word on Omniglot; we hoped that the community would build on our work and develop new approaches. In the time since, we have been pleased to see wide adoption of the dataset. There has been notable progress on one-shot classification, but researchers have adopted new splits and procedures that make the task easier. There has been less progress on the other four tasks. We conclude that recent approaches are still far from human-like concept learning on Omniglot, a challenge that requires performing many tasks with a single model.
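The distinction between the original evaluation and the easier adopted procedures comes down to how classification episodes are constructed. Below is a minimal sketch, not code from the paper, of sampling a one-shot classification episode: the original setup draws all 20 candidate characters from the same alphabet (forcing fine-grained discrimination among confusable characters), whereas the commonly adopted, easier procedure pools classes across alphabets. The `dataset` structure and `sample_episode` helper are hypothetical names for illustration.

```python
import random

def sample_episode(dataset, n_way=20, within_alphabet=True, rng=random):
    """Sample one one-shot classification episode.

    dataset: hypothetical dict mapping alphabet -> character -> list of images.
    Returns a support set (one labeled example per class) and a query set.
    """
    if within_alphabet:
        # Original, harder setting: all candidate classes come from one alphabet.
        alphabet = rng.choice(list(dataset.keys()))
        pool = [(alphabet, ch) for ch in dataset[alphabet]]
    else:
        # Easier setting used in many later papers: classes mixed across alphabets.
        pool = [(a, ch) for a, chars in dataset.items() for ch in chars]

    classes = rng.sample(pool, n_way)
    support, queries = [], []
    for label, (alphabet, ch) in enumerate(classes):
        # Use two different drawings of the same character:
        # one as the single training example, one as the test item.
        drawings = rng.sample(dataset[alphabet][ch], 2)
        support.append((drawings[0], label))
        queries.append((drawings[1], label))
    return support, queries
```

Under this sketch, the harder within-alphabet episodes correspond to `within_alphabet=True`; many reported results also reduce `n_way` to 5 and add class augmentation, both of which further raise accuracy relative to the original benchmark.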
