Transforming task representations to perform novel tasks
[1] Andrew Kyle Lampinen. A computational framework for learning and transforming task representations, 2021.
[2] Andrew Kyle Lampinen, et al. What shapes feature representations? Exploring datasets, architectures, and training, 2020, NeurIPS.
[3] Zeb Kurth-Nelson, et al. A distributional code for value in dopamine-based reinforcement learning, 2020, Nature.
[4] Noah D. Goodman, et al. Shaping Visual Representations with Language for Few-Shot Classification, 2019, ACL.
[5] Benjamin F. Grewe, et al. Continual learning with hypernetworks, 2019, ICLR.
[6] Wojciech M. Czarnecki, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning, 2019, Nature.
[7] Colin Wei, et al. Towards Explaining the Regularization Effect of Initial Large Learning Rate in Training Neural Networks, 2019, NeurIPS.
[8] Feiyue Huang, et al. LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning, 2019, ICML.
[9] Stefano Soatto, et al. Few-Shot Learning With Embedded Class Models and Shot-Free Meta Training, 2019, IEEE/CVF International Conference on Computer Vision (ICCV).
[10] Vineeth N. Balasubramanian, et al. Zero-Shot Task Transfer, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[11] Subhransu Maji, et al. Task2Vec: Task Embedding for Meta-Learning, 2019, IEEE/CVF International Conference on Computer Vision (ICCV).
[12] Felix Hill, et al. Learning to Make Analogies by Contrasting Abstract Relational Structure, 2019, ICLR.
[13] Sergey Levine, et al. Unsupervised Learning via Meta-Learning, 2018, ICLR.
[14] Razvan Pascanu, et al. Meta-Learning with Latent Embedding Optimization, 2018, ICLR.
[15] Sergey Levine, et al. Diversity is All You Need: Learning Skills without a Reward Function, 2018, ICLR.
[16] Christoph H. Lampert, et al. Zero-Shot Learning—A Comprehensive Evaluation of the Good, the Bad and the Ugly, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[17] Yee Whye Teh, et al. Conditional Neural Processes, 2018, ICML.
[18] Andrew K. Lampinen, et al. Different Presentations of a Mathematical Concept Can Support Learning in Complementary Ways, 2018, Journal of Educational Psychology.
[19] Gary Marcus. Deep Learning: A Critical Appraisal, 2018, arXiv.
[20] Marco Baroni, et al. Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks, 2017, ICML.
[21] Anthony C. Robinson, et al. Reflections on ‘ColorBrewer.org: An Online Tool for Selecting Colour Schemes for Maps’, 2017.
[22] James L. McClelland, et al. Building on prior knowledge without building it in, 2017, Behavioral and Brain Sciences.
[23] Andrew K. Lampinen, et al. One-shot and few-shot learning of word embeddings, 2017, arXiv.
[24] Demis Hassabis, et al. Grounded Language Learning in a Simulated 3D World, 2017, arXiv.
[25] Honglak Lee, et al. Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning, 2017, ICML.
[26] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[27] Romain Laroche, et al. Transfer Reinforcement Learning with Shared Dynamics, 2017, AAAI.
[28] Sergio Gomez Colmenarejo, et al. Hybrid computing using a neural network with dynamic external memory, 2016, Nature.
[29] Oriol Vinyals, et al. Matching Networks for One Shot Learning, 2016, NIPS.
[30] Andreas Krause, et al. Safe Exploration in Finite Markov Decision Processes with Gaussian Processes, 2016, NIPS.
[31] Joshua B. Tenenbaum, et al. Building machines that learn and think like people, 2016, Behavioral and Brain Sciences.
[32] Tim Salimans, et al. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks, 2016, NIPS.
[33] Demis Hassabis, et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.
[34] Nando de Freitas, et al. Neural Programmer-Interpreters, 2015, ICLR.
[35] Tianqi Chen, et al. Empirical Evaluation of Rectified Activations in Convolutional Network, 2015, arXiv.
[36] Shane Legg, et al. Human-level control through deep reinforcement learning, 2015, Nature.
[37] James L. McClelland. Interactive Activation and Mutual Constraint Satisfaction in Perception and Cognition, 2014, Cognitive Science.
[38] Surya Ganguli, et al. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks, 2013, ICLR.
[39] Hava T. Siegelmann. Turing on Super-Turing and adaptivity, 2013, Progress in Biophysics and Molecular Biology.
[40] Geoffrey Zweig, et al. Linguistic Regularities in Continuous Space Word Representations, 2013, NAACL.
[41] Andrew Y. Ng, et al. Zero-Shot Learning Through Cross-Modal Transfer, 2013, NIPS.
[42] James L. McClelland, et al. Letting structure emerge: connectionist and dynamical systems approaches to cognition, 2010, Trends in Cognitive Sciences.
[43] Yoshua Bengio, et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[44] Y. Niv. Reinforcement learning in the brain, 2009.
[45] Yoshua Bengio, et al. Zero-data Learning of New Tasks, 2008, AAAI.
[46] J. Fodor. LOT 2: The Language of Thought Revisited, 2008.
[47] B. Baars. Global workspace theory of consciousness: toward a cognitive neuroscience of human experience, 2005, Progress in Brain Research.
[48] James L. McClelland. Semantic Cognition: A Parallel Distributed Processing Approach, 2004.
[49] G. Vigliocco. Language in mind, 2004.
[50] Cynthia A. Brewer, et al. ColorBrewer.org: An Online Tool for Selecting Colour Schemes for Maps, 2003.
[51] Jerry A. Fodor. Language, Thought and Compositionality, 2001, Royal Institute of Philosophy Supplement.
[52] James L. McClelland. Automaticity, Attention and the Strength of Processing: A Parallel Distributed Processing Account of the Stroop Effect, 2001.
[53] S. Goldin-Meadow. The role of gesture in communication and thinking, 1999, Trends in Cognitive Sciences.
[54] Orit Hazzan. Reducing Abstraction Level When Learning Abstract Algebra Concepts, 1999.
[55] M. Chi. Eliciting Self-Explanations Improves Understanding, 1994.
[56] A. Karmiloff-Smith. The cognizer's innards: A psychological and philosophical perspective on the development of thought, 1993.
[57] Hava T. Siegelmann, et al. On the computational power of neural nets, 1992, COLT '92.
[58] U. Wilensky. Abstract Meditations on the Concrete and Concrete Implications for Mathematics Education, 1991.
[59] James L. McClelland. On the control of automatic processes: a parallel distributed processing account of the Stroop effect, 1990, Psychological Review.
[60] M. Tanenhaus. Context effects in lexical processing, 1987, Cognition.
[61] Geoffrey E. Hinton. Using fast weights to deblur old memories, 1987.
[62] A. Karmiloff-Smith. From meta-processes to conscious access: Evidence from children's metalinguistic and repair data, 1986, Cognition.
[63] James L. McClelland. Putting Knowledge in its Place: A Scheme for Programming Parallel Processing Structures on the Fly, 1988, Cognitive Science.
[64] J. Fodor. The Modularity of Mind: An Essay on Faculty Psychology, 1986.
[65] K. Holyoak, et al. Analogical problem solving, 1980, Cognitive Psychology.
[66] L. E. Bourne. Knowing and Using Concepts, 1970.
[67] G. D. Logan. Task Switching, 2022.