Sepp Hochreiter | Bernhard Nessler | Johannes Weissenbock | Thomas Doms | Philip Matthias Winter | Sebastian K. Eder | Christoph Schwald | Tom Vogt
[1] M. Rosenblatt. Remarks on Some Nonparametric Estimates of a Density Function, 1956.
[2] Arthur L. Samuel, et al. Some Studies in Machine Learning Using the Game of Checkers, 1967, IBM J. Res. Dev.
[3] E. Parzen. On Estimation of a Probability Density Function and Mode, 1962.
[4] T. Spreen, et al. Chapter I Introduction, 1967, Geological Society, London, Memoirs.
[5] L. Baum, et al. Statistical Inference for Probabilistic Functions of Finite State Markov Chains, 1966.
[6] J. MacQueen. Some methods for classification and analysis of multivariate observations, 1967.
[7] Nils J. Nilsson, et al. Artificial Intelligence, 1974, IFIP Congress.
[8] H. Simon, et al. History of Artificial Intelligence, 1977, IJCAI.
[9] E. Oja. Simplified neuron model as a principal component analyzer, 1982, Journal of Mathematical Biology.
[10] S. P. Lloyd, et al. Least squares quantization in PCM, 1982, IEEE Trans. Inf. Theory.
[11] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.
[12] Paul Smolensky, et al. Information processing in dynamical systems: foundations of harmony theory, 1986.
[13] A. Baier. Trust and Antitrust, 1986, Ethics.
[14] Barak A. Pearlmutter. Learning State Space Trajectories in Recurrent Neural Networks, 1989, Neural Computation.
[15] W. S. McCulloch, et al. A logical calculus of the ideas immanent in nervous activity, 1990, The Philosophy of Artificial Intelligence.
[16] Jeffrey L. Elman, et al. Finding Structure in Time, 1990, Cogn. Sci.
[17] Christian Jutten, et al. Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture, 1991, Signal Process.
[18] M. Kramer. Nonlinear principal component analysis using autoassociative neural networks, 1991.
[19] O. J. Vrieze, et al. Kohonen Network, 1995, Artificial Neural Networks.
[20] P. Sopp. Cluster analysis, 1996, Veterinary Immunology and Immunopathology.
[21] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[22] Michael I. Jordan. Serial Order: A Parallel Distributed Processing Approach, 1997.
[23] Yoshua Bengio, et al. Convolutional networks for images, speech, and time series, 1998.
[24] Tafsir Thiam, et al. The Boltzmann machine, 1999, IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339).
[25] L. Infante, et al. Hierarchical Clustering, 2020, International Encyclopedia of Statistical Science.
[26] André Elisseeff, et al. Stability and Generalization, 2002, J. Mach. Learn. Res.
[27] Salim Roukos, et al. Bleu: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.
[28] Kunihiko Fukushima, et al. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position, 1980, Biological Cybernetics.
[29] Gualtiero Piccinini, et al. The First Computational Theory of Mind and Brain: A Close Look at McCulloch and Pitts's “Logical Calculus of Ideas Immanent in Nervous Activity”, 2004, Synthese.
[30] Jean-Michel Marin, et al. Bayesian Modelling and Inference on Mixtures of Distributions, 2005.
[31] Richard S. Sutton, et al. Reinforcement Learning: An Introduction, 1998, IEEE Trans. Neural Networks.
[32] Robert P. W. Duin, et al. A simplified extension of the Area under the ROC to the multiclass domain, 2006.
[33] W. Singer, et al. Better than conscious?: decision making, the human mind, and implications for institutions, 2008.
[34] Yoshua Bengio, et al. Zero-data Learning of New Tasks, 2008, AAAI.
[35] Geoffrey E. Hinton, et al. Zero-shot Learning with Semantic Output Codes, 2009, NIPS.
[36] Nils J. Nilsson. EARLY EXPLORATIONS: 1950s AND 1960s, 2009.
[37] N. Nilsson. EFFLORESCENCE: MID-1960s TO MID-1970s, 2009.
[38] J. Schmidhuber, et al. A Novel Connectionist System for Unconstrained Handwriting Recognition, 2009, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[39] N. Nilsson. APPLICATIONS AND SPECIALIZATIONS: 1970s TO EARLY 1980s, 2009.
[40] Robert Tibshirani, et al. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd Edition, 2009, Springer Series in Statistics.
[41] Tyler Lu, et al. Impossibility Theorems for Domain Adaptation, 2010, AISTATS.
[42] I. Čatić, et al. Energy or Information, 2010.
[43] Richard A. Davis, et al. Remarks on Some Nonparametric Estimates of a Density Function, 2011.
[44] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[45] Joos Vandewalle, et al. Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications, 2012.
[46] Dick Stenmark, et al. Distrust in Information Systems Research: A Need for Stronger Theoretical Contributions to Our Discipline, 2013, 46th Hawaii International Conference on System Sciences.
[47] Rob Fergus, et al. Visualizing and Understanding Convolutional Networks, 2013, ECCV.
[48] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[49] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[50] Luc Van Gool, et al. The Pascal Visual Object Classes Challenge: A Retrospective, 2014, International Journal of Computer Vision.
[51] Carl Lagoze, et al. Big Data, data integrity, and the fracturing of the control zone, 2014, Big Data Soc.
[52] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[53] Andrew W. Senior, et al. Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition, 2014, ArXiv.
[54] Shawn Loewen, et al. Exploratory Factor Analysis and Principal Components Analysis, 2015.
[55] Shakir Mohamed, et al. Variational Inference with Normalizing Flows, 2015, ICML.
[56] D. Sculley, et al. Hidden Technical Debt in Machine Learning Systems, 2015, NIPS.
[57] Jan Kautz, et al. Loss Functions for Neural Networks for Image Processing, 2015, ArXiv.
[58] W. Singer. The Ongoing Search for the Neuronal Correlate of Consciousness, 2015.
[59] Yann LeCun, et al. The Loss Surfaces of Multilayer Networks, 2014, AISTATS.
[60] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[61] Günter Klambauer, et al. DeepTox: Toxicity Prediction using Deep Learning, 2016, Front. Environ. Sci.
[62] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[63] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[64] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[65] John Schulman, et al. Concrete Problems in AI Safety, 2016, ArXiv.
[66] John J. Hopfield, et al. Dense Associative Memory for Pattern Recognition, 2016, NIPS.
[67] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[68] Sepp Hochreiter, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[69] Eldad Haber, et al. Stable architectures for deep neural networks, 2017, ArXiv.
[70] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[71] Kilian Q. Weinberger, et al. On Calibration of Modern Neural Networks, 2017, ICML.
[72] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[73] Matthias Hein, et al. The Loss Surface of Deep and Wide Neural Networks, 2017, ICML.
[74] Ryan P. Adams, et al. Motivating the Rules of the Game for Adversarial Example Research, 2018, ArXiv.
[75] Leland McInnes, et al. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction, 2018, ArXiv.
[76] Sepp Hochreiter, et al. Fréchet ChemNet Distance: A Metric for Generative Models for Molecules in Drug Discovery, 2018, J. Chem. Inf. Model.
[77] Chenchen Liu, et al. How convolutional neural networks see the world - A survey of convolutional neural network visualization methods, 2018, Math. Found. Comput.
[78] Sjoerd van Steenkiste, et al. Towards Accurate Generative Models of Video: A New Metric & Challenges, 2018, ArXiv.
[79] Elad Hoffer, et al. Exponentially vanishing sub-optimal local minima in multilayer neural networks, 2017, ICLR.
[80] Dominik Roblek, et al. Fréchet Audio Distance: A Reference-Free Metric for Evaluating Music Enhancement Algorithms, 2019, INTERSPEECH.
[81] Saeed Mahloujifar, et al. The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure, 2018, AAAI.
[82] James T. Kwok, et al. Generalizing from a Few Examples, 2019, ACM Comput. Surv.
[83] Graham D. Riley, et al. Estimation of energy consumption in machine learning, 2019, J. Parallel Distributed Comput.
[84] Mikhail Belkin, et al. Reconciling modern machine-learning practice and the classical bias–variance trade-off, 2018, Proceedings of the National Academy of Sciences.
[85] Elizabeth Dubois, et al. Assessing Trust Versus Reliance for Technology Platforms by Systematic Literature Review, 2020.
[86] J. Agar. What is science for? The Lighthill report on artificial intelligence reinterpreted, 2020, British Journal for the History of Science.
[87] Demis Hassabis, et al. Improved protein structure prediction using potentials from deep learning, 2020, Nature.
[88] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[89] Verena Geist, et al. Applying AI in Practice: Key Challenges and Lessons Learned, 2020, CD-MAKE.
[90] Gary Marcus, et al. The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence, 2020, ArXiv.
[91] Boaz Barak, et al. Deep double descent: where bigger models and more data hurt, 2019, ICLR.
[92] Geir Kjetil Sandve, et al. Modern Hopfield Networks and Attention for Immune Repertoire Classification, 2020, bioRxiv.
[93] Francesco Renna, et al. On instabilities of deep learning in image reconstruction and the potential costs of AI, 2019, Proceedings of the National Academy of Sciences.
[94] W. Zellinger, et al. On generalization in moment-based domain adaptation, 2020, Annals of Mathematics and Artificial Intelligence.
[95] F. Turck, et al. Overly optimistic prediction results on imbalanced data: a case study of flaws and benefits when applying over-sampling, 2020, Artif. Intell. Medicine.
[96] Verena Geist, et al. AI System Engineering - Key Challenges and Lessons Learned, 2020, Mach. Learn. Knowl. Extr.
[97] David P. Kreil, et al. Hopfield Networks is All You Need, 2020, ICLR.