Are we ready for a new paradigm shift? A survey on visual deep MLP
Ruiyang Liu | Yinghui Li | Linmi Tao | Dun Liang | Hai-Tao Zheng | Shi-Min Hu
[1] Shuicheng Yan,et al. Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition , 2021, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[2] Mingxing Tan,et al. PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions , 2022, ICLR.
[3] Jian Sun,et al. Scaling Up Your Kernels to 31×31: Revisiting Large Kernel Design in CNNs , 2022, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Cuiling Lan,et al. ActiveMLP: An MLP-like Architecture with Active Token Mixer , 2022, ArXiv.
[5] Vishal M. Patel,et al. UNeXt: MLP-based Rapid Medical Image Segmentation Network , 2022, MICCAI.
[6] Pradeep Kumar Singh,et al. GGA-MLP: A Greedy Genetic Algorithm to Optimize Weights and Biases in Multilayer Perceptron , 2022, Contrast media & molecular imaging.
[7] Dacheng Tao,et al. ViTAEv2: Vision Transformer Advanced by Exploring Inductive Bias for Image Recognition and Beyond , 2022, ArXiv.
[8] Cheng-Ze Lu,et al. Visual attention network , 2022, Computational Visual Media.
[9] Xu Ma,et al. Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework , 2022, ArXiv.
[10] Huangjie Zheng,et al. Mixing and Shifting: Exploiting Global and Local Dependencies in Vision MLPs , 2022, ArXiv.
[11] Wenhao Jiang,et al. DynaMixer: A Vision MLP Architecture with Dynamic Mixing , 2022, ICML.
[12] Trevor Darrell,et al. A ConvNet for the 2020s , 2022, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[13] P. Milanfar,et al. MAXIM: Multi-Axis MLP for Image Processing , 2022, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Kai Han,et al. PyramidTNT: Improved Transformer-in-Transformer Baselines with Pyramid Architecture , 2022, ArXiv.
[15] X. Zhang,et al. RepMLPNet: Hierarchical Vision MLP with Re-parameterized Locality , 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[16] Alan Yuille,et al. Masked Feature Prediction for Self-Supervised Visual Pre-Training , 2021, ArXiv.
[17] Chao Xu,et al. An Image Patch is a Wave: Phase-Aware Vision MLP , 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[18] François Rameau,et al. PointMixer: MLP-Mixer for Point Cloud Understanding , 2021, ECCV.
[19] Li Dong,et al. Swin Transformer V2: Scaling Up Capacity and Resolution , 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[20] Han Hu,et al. SimMIM: a Simple Framework for Masked Image Modeling , 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[21] Ross B. Girshick,et al. Masked Autoencoders Are Scalable Vision Learners , 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[22] Guanglu Song,et al. UniNet: Unified Architecture Search with Convolution, Transformer, and MLP , 2021, ECCV.
[23] Chong Luo,et al. Sparse MLP for Image Recognition: Is Self-Attention Really Necessary? , 2021, AAAI.
[24] Kai Han,et al. Hire-MLP: Vision MLP via Hierarchical Rearrangement , 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Ping Luo,et al. CycleMLP: A MLP-like Architecture for Dense Prediction , 2021, ICLR.
[26] Kai Han,et al. CMT: Convolutional Neural Networks Meet Vision Transformers , 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Nenghai Yu,et al. CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows , 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[28] P. Luo,et al. PVT v2: Improved baselines with Pyramid Vision Transformer , 2021, Computational Visual Media.
[29] Shuicheng Yan,et al. VOLO: Vision Outlooker for Visual Recognition , 2021, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[30] Yunfeng Cai,et al. S2-MLP: Spatial-Shift MLP Architecture for Vision , 2021, 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).
[31] Jianmin Bao,et al. Uformer: A General U-Shaped Transformer for Image Restoration , 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[32] Cho-Jui Hsieh,et al. When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations , 2021, ICLR.
[33] Jianfei Cai,et al. Less is More: Pay Less Attention in Vision Transformers , 2021, AAAI.
[34] Shi-Min Hu,et al. Beyond Self-Attention: External Attention Using Two Linear Layers for Visual Tasks , 2021, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[35] Daguang Xu,et al. UNETR: Transformers for 3D Medical Image Segmentation , 2021, 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).
[36] Fahad Shahbaz Khan,et al. Transformers in Vision: A Survey , 2021, ACM Computing Surveys.
[37] Yi Tay,et al. Efficient Transformers: A Survey , 2020, ACM Computing Surveys.
[38] Jun Yu,et al. Hierarchical Deep Click Feature Prediction for Fine-Grained Image Recognition , 2019, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[39] Mingchen Zhuge,et al. Skating-Mixer: Multimodal MLP for Scoring Figure Skating , 2022, ArXiv.
[40] Yali Wang,et al. MorphMLP: A Self-Attention Free, MLP-Like Backbone for Image and Video , 2021, ArXiv.
[41] Lu Yuan,et al. PeCo: Perceptual Codebook for BERT Pre-training of Vision Transformers , 2021, ArXiv.
[42] Philipp Benz,et al. Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs , 2021, BMVC.
[43] Ross Wightman,et al. ResNet strikes back: An improved training procedure in timm , 2021, ArXiv.
[44] Ali Hassani,et al. ConvMLP: Hierarchical Convolutional MLPs for Vision , 2021, 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[45] Luc Van Gool,et al. SwinIR: Image Restoration Using Swin Transformer , 2021, 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW).
[46] Yunfeng Cai,et al. S2-MLPv2: Improved Spatial-Shift MLP Architecture for Vision , 2021, ArXiv.
[47] Shenghua Gao,et al. AS-MLP: An Axial Shifted MLP Architecture for Vision , 2021, ICLR.
[48] Yunfeng Cai,et al. Rethinking Token-Mixing MLP for MLP-based Vision Backbone , 2021, BMVC.
[49] Adriana Kovashka,et al. Exploring Corruption Robustness: Inductive Biases in Vision Transformers and MLP-Mixers , 2021, ArXiv.
[50] Furu Wei,et al. BEiT: BERT Pre-Training of Image Transformers , 2021, ArXiv.
[51] Luc Van Gool,et al. Video Super-Resolution Transformer , 2021, ArXiv.
[52] Carlos Riquelme,et al. Scaling Vision with Sparse Mixture of Experts , 2021, NeurIPS.
[53] Quoc V. Le,et al. CoAtNet: Marrying Convolution and Attention for All Data Sizes , 2021, NeurIPS.
[54] Wassim Hamidouche,et al. Reveal of Vision Transformers Robustness against Adversarial Attacks , 2021, ArXiv.
[55] Zilong Huang,et al. Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer , 2021, ArXiv.
[56] Roozbeh Mottaghi,et al. Container: Context Aggregation Network , 2021, NeurIPS.
[57] Ralph R. Martin,et al. Can Attention Enable MLPs To Catch Up With CNNs? , 2021, Computational Visual Media.
[58] Manuel Ladron de Guevara,et al. MixerGAN: An MLP-Based Architecture for Unpaired Image-to-Image Translation , 2021, ArXiv.
[59] Fahad Shahbaz Khan,et al. Intriguing Properties of Vision Transformers , 2021, NeurIPS.
[60] Quoc V. Le,et al. Pay Attention to MLPs , 2021, NeurIPS.
[61] Matthieu Cord,et al. ResMLP: Feedforward Networks for Image Classification With Data-Efficient Training , 2021, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[62] Luke Melas-Kyriazi,et al. Do You Even Need Attention? A Stack of Feed-Forward Layers Does Surprisingly Well on ImageNet , 2021, ArXiv.
[63] A. Dosovitskiy,et al. MLP-Mixer: An all-MLP Architecture for Vision , 2021, NeurIPS.
[64] Chunhua Shen,et al. Twins: Revisiting the Design of Spatial Attention in Vision Transformers , 2021, NeurIPS.
[65] Saining Xie,et al. An Empirical Study of Training Self-Supervised Vision Transformers , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[66] Quoc V. Le,et al. EfficientNetV2: Smaller Models and Faster Training , 2021, ICML.
[67] Marten van Dijk,et al. On the Robustness of Vision Transformers to Adversarial Examples , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[68] Matthieu Cord,et al. Going deeper with Image Transformers , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[69] Andreas Veit,et al. Understanding Robustness of Transformers for Image Classification , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[70] Enhua Wu,et al. Transformer in Transformer , 2021, NeurIPS.
[71] Xiang Li,et al. Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[72] Vishal M. Patel,et al. Medical Transformer: Gated Axial-Attention for Medical Image Segmentation , 2021, MICCAI.
[73] Francis E. H. Tay,et al. Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[74] Matthieu Cord,et al. Training data-efficient image transformers & distillation through attention , 2020, ICML.
[75] Ralph R. Martin,et al. PCT: Point cloud transformer , 2020, Computational Visual Media.
[76] Wen Gao,et al. Pre-Trained Image Processing Transformer , 2020, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[77] Klaus Dietmayer,et al. Point Transformer , 2020, IEEE Access.
[78] S. Gelly,et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale , 2020, ICLR.
[79] Quoc V. Le,et al. Meta Pseudo Labels , 2020, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[80] Masato Taki,et al. RaftMLP: Do MLP-based Models Dream of Winning Over Computer Vision? , 2021, ArXiv.
[81] Stephen Lin,et al. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[82] Zangwei Zheng,et al. Sparse-MLP: A Fully-MLP Architecture with Conditional Computation , 2021, ArXiv.
[83] Long Zhao,et al. Aggregating Nested Transformers , 2021, ArXiv.
[84] Shi-Min Hu,et al. Jittor: a novel deep learning framework with meta-operators and unified graph execution , 2020, Science China Information Sciences.
[85] Fillia Makedon,et al. A Survey on Contrastive Self-supervised Learning , 2020, Technologies.
[86] Geoffrey E. Hinton,et al. Big Self-Supervised Models are Strong Semi-Supervised Learners , 2020, NeurIPS.
[87] Nicolas Usunier,et al. End-to-End Object Detection with Transformers , 2020, ECCV.
[88] Chongruo Wu,et al. ResNeSt: Split-Attention Networks , 2020, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[89] Kaiming He,et al. Designing Network Design Spaces , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[90] Kaiming He,et al. Improved Baselines with Momentum Contrastive Learning , 2020, ArXiv.
[91] Geoffrey E. Hinton,et al. A Simple Framework for Contrastive Learning of Visual Representations , 2020, ICML.
[92] Ankesh Anand. Contrastive Self-Supervised Learning , 2020.
[93] S. Gelly,et al. Big Transfer (BiT): General Visual Representation Learning , 2019, ECCV.
[94] Ross B. Girshick,et al. Momentum Contrast for Unsupervised Visual Representation Learning , 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[95] Asifullah Khan,et al. A survey of the recent architectures of deep convolutional neural networks , 2019, Artificial Intelligence Review.
[96] Ross B. Girshick,et al. Mask R-CNN , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[97] Natalia Gimelshein,et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library , 2019, NeurIPS.
[98] Kai Chen,et al. MMDetection: Open MMLab Detection Toolbox and Benchmark , 2019, ArXiv.
[99] Quoc V. Le,et al. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks , 2019, ICML.
[100] Quoc V. Le,et al. Searching for MobileNetV3 , 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[101] Quoc V. Le,et al. NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[102] Kaiming He,et al. Panoptic Feature Pyramid Networks , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[103] Zhi Zhang,et al. Bag of Tricks for Image Classification with Convolutional Neural Networks , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[104] Matthias Bethge,et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness , 2018, ICLR.
[105] Thomas G. Dietterich,et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations , 2018, ICLR.
[106] Ning Xu,et al. Wide Activation for Efficient and Accurate Image Super-Resolution , 2018, ArXiv.
[107] Yuning Jiang,et al. Unified Perceptual Parsing for Scene Understanding , 2018, ECCV.
[108] Ali Farhadi,et al. YOLOv3: An Incremental Improvement , 2018, ArXiv.
[109] Jun Yu,et al. Local Deep-Feature Alignment for Unsupervised Dimension Reduction , 2018, IEEE Transactions on Image Processing.
[110] Yu-Sheng Chen,et al. Learning Deep Convolutional Networks for Demosaicing , 2018, ArXiv.
[111] Li Fei-Fei,et al. Progressive Neural Architecture Search , 2017, ECCV.
[112] Garrison W. Cottrell,et al. Understanding Convolution for Semantic Segmentation , 2017, 2018 IEEE Winter Conference on Applications of Computer Vision (WACV).
[113] Frank Hutter,et al. Fixing Weight Decay Regularization in Adam , 2017, ArXiv.
[114] Jun-Yan Zhu,et al. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[115] Bolei Zhou,et al. Scene Parsing through ADE20K Dataset , 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[116] Chen Sun,et al. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[117] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[118] Leonidas J. Guibas,et al. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space , 2017, NIPS.
[119] Andrew Zisserman,et al. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset , 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[120] Yurong Liu,et al. A survey of deep neural network architectures and their applications , 2017, Neurocomputing.
[121] Yi Li,et al. Deformable Convolutional Networks , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[122] Geoffrey E. Hinton,et al. Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer , 2017, ICLR.
[123] Kaiming He,et al. Feature Pyramid Networks for Object Detection , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[124] Tae Hyun Kim,et al. Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[125] Leonidas J. Guibas,et al. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[126] Zhuowen Tu,et al. Aggregated Residual Transformations for Deep Neural Networks , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[127] François Chollet,et al. Xception: Deep Learning with Depthwise Separable Convolutions , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[128] Kilian Q. Weinberger,et al. Densely Connected Convolutional Networks , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[129] David A. Wagner,et al. Towards Evaluating the Robustness of Neural Networks , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[130] Sergey Ioffe,et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning , 2016, AAAI.
[131] Xiaoou Tang,et al. Accelerating the Super-Resolution Convolutional Neural Network , 2016, ECCV.
[132] Geoffrey E. Hinton,et al. Layer Normalization , 2016, ArXiv.
[133] Kevin Gimpel,et al. Gaussian Error Linear Units (GELUs) , 2016.
[134] Kevin Gimpel,et al. Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units , 2016, ArXiv.
[135] Sebastian Ramos,et al. The Cityscapes Dataset for Semantic Urban Scene Understanding , 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[136] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[137] Vladlen Koltun,et al. Multi-Scale Context Aggregation by Dilated Convolutions , 2015, ICLR.
[138] Seyed-Mohsen Moosavi-Dezfooli,et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[139] Kaiming He,et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks , 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[140] Thomas Brox,et al. U-Net: Convolutional Networks for Biomedical Image Segmentation , 2015, MICCAI.
[141] Sergey Ioffe,et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift , 2015, ICML.
[142] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[143] Dumitru Erhan,et al. Going deeper with convolutions , 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[144] Andrew Zisserman,et al. Very Deep Convolutional Networks for Large-Scale Image Recognition , 2014, ICLR.
[145] Xiaoou Tang,et al. Learning a Deep Convolutional Network for Image Super-Resolution , 2014, ECCV.
[146] Pietro Perona,et al. Microsoft COCO: Common Objects in Context , 2014, ECCV.
[147] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[148] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[149] Léon Bottou,et al. Stochastic Gradient Descent Tricks , 2012, Neural Networks: Tricks of the Trade.
[150] Luca Maria Gambardella,et al. Deep, Big, Simple Neural Nets for Handwritten Digit Recognition , 2010, Neural Computation.
[151] Fei-Fei Li,et al. ImageNet: A large-scale hierarchical image database , 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition.
[152] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009.
[153] Kevin Skadron,et al. Scalable Parallel Programming with CUDA , 2008, 2008 IEEE Hot Chips 20 Symposium (HCS).
[154] Erik Lindholm,et al. NVIDIA Tesla: A Unified Graphics and Computing Architecture , 2008, IEEE Micro.
[155] Jürgen Schmidhuber,et al. New Millennium AI and the Convergence of History: Update of 2012 , 2012.
[156] Tom M. Mitchell,et al. The Need for Biases in Learning Generalizations , 2007.
[157] Geoffrey E. Hinton,et al. Reducing the Dimensionality of Data with Neural Networks , 2006, Science.
[158] T. Poggio,et al. Networks and the best approximation property , 1990, Biological Cybernetics.
[159] Allan Pinkus,et al. Approximation theory of the MLP model in neural networks , 1999, Acta Numerica.
[160] Toshiyuki Tanaka. Mean-field theory of Boltzmann machine learning , 1998.
[161] G.E. Moore,et al. Cramming More Components Onto Integrated Circuits , 1998, Proceedings of the IEEE.
[162] Yoshua Bengio,et al. Gradient-based learning applied to document recognition , 1998, Proc. IEEE.
[163] W. S. McCulloch,et al. A logical calculus of the ideas immanent in nervous activity , 1990, The Philosophy of Artificial Intelligence.
[164] Lawrence D. Jackel,et al. Backpropagation Applied to Handwritten Zip Code Recognition , 1989, Neural Computation.
[165] George Cybenko,et al. Approximation by superpositions of a sigmoidal function , 1989, Mathematics of Control, Signals, and Systems.
[166] H. White,et al. Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions , 1989, International 1989 Joint Conference on Neural Networks.
[167] Kurt Hornik,et al. Multilayer feedforward networks are universal approximators , 1989, Neural Networks.
[168] Ken-ichi Funahashi,et al. On the approximate realization of continuous mappings by neural networks , 1989, Neural Networks.
[169] Geoffrey E. Hinton. Deterministic Boltzmann Learning Performs Steepest Descent in Weight-Space , 1989, Neural Computation.
[170] Eric B. Baum,et al. On the capabilities of multilayer perceptrons , 1988, Journal of Complexity.
[171] Geoffrey E. Hinton,et al. Learning and relearning in Boltzmann machines , 1986 .
[172] Geoffrey E. Hinton,et al. Learning internal representations by error propagation , 1986 .
[173] Paul Smolensky,et al. Information processing in dynamical systems: foundations of harmony theory , 1986 .
[174] Geoffrey E. Hinton,et al. A Learning Algorithm for Boltzmann Machines , 1985, Cogn. Sci..
[175] Takayuki Ito,et al. Neocognitron: A neural network model for a mechanism of visual pattern recognition , 1983, IEEE Transactions on Systems, Man, and Cybernetics.
[176] Kunihiko Fukushima,et al. Neocognitron: A new algorithm for pattern recognition tolerant of deformations and shifts in position , 1982, Pattern Recognit..
[177] P. Werbos,et al. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences , 1974.
[178] Frank Rosenblatt,et al. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms , 1963.
[179] F. Rosenblatt,et al. The perceptron: a probabilistic model for information storage and organization in the brain , 1958, Psychological Review.
[180] C. Sherrington. Observations on the scratch-reflex in the spinal dog , 1906, The Journal of Physiology.