Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence
[1] Clément Farabet,et al. Torch7: A Matlab-like Environment for Machine Learning , 2011, NIPS 2011.
[2] Matthew Rocklin,et al. Better and faster hyperparameter optimization with Dask , 2019 .
[3] Amos J. Storkey,et al. Data Augmentation Generative Adversarial Networks , 2017, ICLR 2018.
[4] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain, 1958, Psychological Review.
[5] Kunle Olukotun, et al. DAWNBench: An End-to-End Deep Learning Benchmark and Competition, 2017.
[6] Aleksander Madry,et al. Exploring the Landscape of Spatial Robustness , 2017, ICML.
[7] Dawn Xiaodong Song,et al. Decision Boundary Analysis of Adversarial Examples , 2018, ICLR.
[8] Chandan Singh,et al. Interpretations are useful: penalizing explanations to align neural networks with prior knowledge , 2019, ICML.
[9] Jin Song Dong,et al. Silas: High Performance, Explainable and Verifiable Machine Learning , 2019, ArXiv.
[10] Dumitru Erhan,et al. Going deeper with convolutions , 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[11] Mani Srivastava,et al. GenAttack: practical black-box attacks with gradient-free optimization , 2018, GECCO.
[12] Somesh Jha,et al. Objective Metrics and Gradient Descent Algorithms for Adversarial Examples in Machine Learning , 2017, ACSAC.
[13] Luca Antiga,et al. Automatic differentiation in PyTorch , 2017 .
[14] Kevin Leyton-Brown,et al. Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms , 2012, KDD.
[15] Qihang Lin,et al. Model-Agnostic Linear Competitors - When Interpretable Models Compete and Collaborate with Black-Box Models , 2019, ArXiv.
[16] Matthias Bethge,et al. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models , 2017, ICLR.
[17] Quoc V. Le,et al. Efficient Neural Architecture Search via Parameter Sharing , 2018, ICML.
[18] Patrick D. McDaniel,et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples , 2016, ArXiv.
[19] Sanjay Ghemawat,et al. MapReduce: Simplified Data Processing on Large Clusters , 2004, OSDI.
[20] A. F. Melik-Adamyan,et al. Speeding up numerical calculations in Python , 2016 .
[21] Bin Yu,et al. Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs , 2018, ICLR.
[22] Jonathon S. Hare,et al. Torchbearer: A Model Fitting Library for PyTorch , 2018, ArXiv.
[23] Michael Siebers,et al. Enriching Visual with Verbal Explanations for Relational Concepts - Combining LIME with Aleph , 2019, PKDD/ECML Workshops.
[24] David P. Anderson,et al. SETI@home: an experiment in public-resource computing , 2002, CACM.
[25] Swagath Venkataramani,et al. PACT: Parameterized Clipping Activation for Quantized Neural Networks , 2018, ArXiv.
[26] Ali Farhadi,et al. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks , 2016, ECCV.
[27] Aaron Klein,et al. Efficient and Robust Automated Machine Learning , 2015, NIPS.
[28] David A. Wagner,et al. Towards Evaluating the Robustness of Neural Networks , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[29] J. Demmel,et al. Sun Microsystems , 1996 .
[30] John Salvatier,et al. Probabilistic programming in Python using PyMC3 , 2016, PeerJ Comput. Sci..
[31] Lawrence D. Jackel,et al. Backpropagation Applied to Handwritten Zip Code Recognition , 1989, Neural Computation.
[32] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[33] Vijay Vasudevan,et al. Learning Transferable Architectures for Scalable Image Recognition , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[34] Ameet Talwalkar,et al. MLlib: Machine Learning in Apache Spark , 2015, J. Mach. Learn. Res..
[35] Luca Maria Gambardella,et al. Deep, Big, Simple Neural Nets for Handwritten Digit Recognition , 2010, Neural Computation.
[36] Quoc V. Le,et al. Neural Architecture Search with Reinforcement Learning , 2016, ICLR.
[37] W. Brendel,et al. Foolbox: A Python toolbox to benchmark the robustness of machine learning models , 2017 .
[38] Geoffrey E. Hinton,et al. Learning representations by back-propagating errors , 1986, Nature.
[39] Michael Backes,et al. The Limitations of Model Uncertainty in Adversarial Settings , 2018, ArXiv.
[40] Martin Wistuba, et al. Adversarial Robustness Toolbox v1.0.0, 2018, ArXiv:1807.01069.
[41] Andrew Slavin Ross,et al. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients , 2017, AAAI.
[42] Yiming Yang,et al. DARTS: Differentiable Architecture Search , 2018, ICLR.
[43] Liang Lin,et al. SNAS: Stochastic Neural Architecture Search , 2018, ICLR.
[44] Michael J. Franklin,et al. Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing , 2012, NSDI.
[45] Patrick D. McDaniel,et al. Cleverhans V0.1: an Adversarial Machine Learning Library , 2016, ArXiv.
[46] Li Chen,et al. Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression , 2017, ArXiv.
[47] L. Shapley. A Value for n-person Games , 1988 .
[48] G. Hua,et al. LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks , 2018, ECCV.
[49] Silvio Savarese,et al. Neural Task Graphs: Generalizing to Unseen Tasks From a Single Video Demonstration , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[50] S. Hido, et al. CuPy: A NumPy-Compatible Library for NVIDIA GPU Calculations, 2017.
[51] Kaiyong Zhao,et al. AutoML: A Survey of the State-of-the-Art , 2019, Knowl. Based Syst..
[52] Patrice Y. Simard,et al. Using GPUs for machine learning algorithms , 2005, Eighth International Conference on Document Analysis and Recognition (ICDAR'05).
[53] Li Fei-Fei,et al. ImageNet: A large-scale hierarchical image database , 2009, CVPR.
[54] S. Gawley, et al. Trends and analysis, 1998.
[55] Michael I. Jordan,et al. HopSkipJumpAttack: A Query-Efficient Decision-Based Attack , 2019, 2020 IEEE Symposium on Security and Privacy (SP).
[56] David Berthelot,et al. Evaluation Methodology for Attacks Against Confidence Thresholding Models , 2018 .
[57] Vishakh Hegde,et al. Parallel and Distributed Deep Learning , 2016 .
[58] Oriol Vinyals,et al. Hierarchical Representations for Efficient Architecture Search , 2017, ICLR.
[59] David J. Fleet,et al. Adversarial Manipulation of Deep Representations , 2015, ICLR.
[60] Sebastian Gehrmann,et al. exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformers Models , 2019, ArXiv.
[61] Yang Wang,et al. Advbox: a toolbox to generate adversarial examples that fool neural networks , 2020, ArXiv.
[62] J. Friedman. Greedy function approximation: A gradient boosting machine, 2001, Annals of Statistics.
[63] Jiqiang Guo, et al. Stan: A Probabilistic Programming Language, 2017, Journal of Statistical Software.
[64] George D. C. Cavalcanti,et al. Dynamic classifier selection: Recent advances and perspectives , 2018, Inf. Fusion.
[65] Yang Song,et al. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples , 2017, ICLR.
[66] Travis E. Oliphant,et al. Python for Scientific Computing , 2007, Computing in Science & Engineering.
[67] Yue Zhao,et al. Combining Machine Learning Models using combo Library , 2020, AAAI.
[68] Yoshua Bengio,et al. Algorithms for Hyper-Parameter Optimization , 2011, NIPS.
[69] Yoav Freund,et al. A decision-theoretic generalization of on-line learning and an application to boosting , 1997, EuroCOLT.
[70] Joel Nothman,et al. SciPy 1.0-Fundamental Algorithms for Scientific Computing in Python , 2019, ArXiv.
[71] Avanti Shrikumar,et al. Learning Important Features Through Propagating Activation Differences , 2017, ICML.
[72] Siu Kwan Lam,et al. Numba: a LLVM-based Python JIT compiler , 2015, LLVM '15.
[73] Thomas G. Dietterich. Learning at the knowledge level , 2004, Machine Learning.
[74] Shenjian Chen,et al. Message Passing Interface (MPI) , 2011, Encyclopedia of Parallel Computing.
[75] Deborah Silver,et al. Feature Visualization , 1994, Scientific Visualization.
[76] Carlos Guestrin,et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier , 2016, ArXiv.
[77] Moustapha Cissé,et al. Countering Adversarial Images using Input Transformations , 2018, ICLR.
[78] George D. C. Cavalcanti, et al. Dynamic classifier selection, 2018.
[79] David H. Wolpert,et al. Stacked generalization , 1992, Neural Networks.
[80] Seyed-Mohsen Moosavi-Dezfooli,et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[81] Yoshua Bengio,et al. Random Search for Hyper-Parameter Optimization , 2012, J. Mach. Learn. Res..
[82] Sylvain Arlot,et al. A survey of cross-validation procedures for model selection , 2009, 0907.4728.
[83] Enhua Wu,et al. Squeeze-and-Excitation Networks , 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[84] Masoud Mohseni,et al. TensorFlow Quantum: A Software Framework for Quantum Machine Learning , 2020, ArXiv.
[85] Geoffrey E. Hinton,et al. Visualizing Data using t-SNE , 2008 .
[86] Torsten Hoefler, et al. Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis, 2018.
[87] Yuan Yu,et al. TensorFlow: A system for large-scale machine learning , 2016, OSDI.
[88] Alan L. Yuille,et al. Mitigating adversarial effects through randomization , 2017, ICLR.
[89] Kevin Duh,et al. DyNet: The Dynamic Neural Network Toolkit , 2017, ArXiv.
[90] Ankur Taly,et al. Axiomatic Attribution for Deep Networks , 2017, ICML.
[91] Jan Eric Lenssen,et al. Fast Graph Representation Learning with PyTorch Geometric , 2019, ArXiv.
[92] Yoav Freund,et al. A decision-theoretic generalization of on-line learning and an application to boosting , 1995, EuroCOLT.
[93] Arvind Satyanarayan,et al. Altair: Interactive Statistical Visualizations for Python , 2018, J. Open Source Softw..
[94] Matthias Bethge,et al. Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models , 2017, ArXiv.
[95] Gaël Varoquaux,et al. Scikit-learn: Machine Learning in Python , 2011, J. Mach. Learn. Res..
[96] Ning Qian,et al. On the momentum term in gradient descent learning algorithms , 1999, Neural Networks.
[97] Samy Bengio,et al. Adversarial Machine Learning at Scale , 2016, ICLR.
[98] Alistair A. Young,et al. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) , 2017, MICCAI 2017.
[99] Luiz Eduardo Soares de Oliveira,et al. Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[100] Luca Maria Gambardella,et al. Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition , 2010, ArXiv.
[101] R. Fisher. The Use of Multiple Measurements in Taxonomic Problems, 1936.
[102] Atul Prakash,et al. Robust Physical-World Attacks on Deep Learning Visual Classification , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[103] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[104] Timnit Gebru,et al. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification , 2018, FAT.
[105] Giovanni S. Alberti,et al. ADef: an Iterative Algorithm to Construct Adversarial Deformations , 2018, ICLR.
[106] Benjamin Kaufman,et al. Machine learning and AI-based approaches for bioactive ligand discovery and GPCR-ligand recognition , 2020, Methods.
[107] Michael Siebers,et al. Explaining Black-Box Classifiers with ILP - Empowering LIME with Aleph to Approximate Non-linear Decisions with Relational Rules , 2018, ILP.
[108] R. Brereton, et al. Support vector machines for classification and regression, 2010, The Analyst.
[109] Tong Liu,et al. The development of Mellanox/NVIDIA GPUDirect over InfiniBand—a new model for GPU to GPU communications , 2011, Computer Science - Research and Development.
[110] Zheng Zhang,et al. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems , 2015, ArXiv.
[111] Shin Ishii,et al. Distributional Smoothing with Virtual Adversarial Training , 2015, ICLR 2016.
[112] Martín Abadi,et al. Adversarial Patch , 2017, ArXiv.
[113] Ilya Sutskever,et al. Language Models are Unsupervised Multitask Learners , 2019 .
[114] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[115] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[116] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[117] Jin Song Dong,et al. Towards Dependable and Explainable Machine Learning Using Automated Reasoning , 2018, ICFEM.
[118] Neeraj Pradhan,et al. Composable Effects for Flexible and Accelerated Probabilistic Programming in NumPyro , 2019, ArXiv.
[119] Kahlen Aymes,et al. One Step Closer , 2016 .
[120] Razvan Pascanu,et al. Relational inductive biases, deep learning, and graph networks , 2018, ArXiv.
[121] Radha Poovendran,et al. On the Limitation of Convolutional Neural Networks in Recognizing Negative Images , 2017, 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA).
[122] Ameet Talwalkar,et al. Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization , 2016, J. Mach. Learn. Res..
[123] Aaron Klein,et al. Auto-sklearn: Efficient and Robust Automated Machine Learning , 2019, Automated Machine Learning.
[124] Cho-Jui Hsieh,et al. GPU-acceleration for Large-scale Tree Boosting , 2017, ArXiv.
[125] Duen Horng Chau,et al. Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations , 2019, IEEE Transactions on Visualization and Computer Graphics.
[126] Juntang Zhuang,et al. Decision explanation and feature importance for invertible networks , 2019, 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW).
[127] Tianqi Chen,et al. XGBoost: A Scalable Tree Boosting System , 2016, KDD.
[128] Hunter Scales,et al. AltiVec Extension to PowerPC Accelerates Media Processing , 2000, IEEE Micro.
[129] Scott Lundberg,et al. A Unified Approach to Interpreting Model Predictions , 2017, NIPS.
[130] Kamyar Azizzadenesheli,et al. signSGD with Majority Vote is Communication Efficient and Fault Tolerant , 2018, ICLR.
[131] Dustin Tran,et al. Edward: A library for probabilistic modeling, inference, and criticism , 2016, ArXiv.
[132] Quoc V. Le,et al. Towards a Human-like Open-Domain Chatbot , 2020, ArXiv.
[133] Francisco Herrera,et al. A Review on Ensembles for the Class Imbalance Problem: Bagging-, Boosting-, and Hybrid-Based Approaches , 2012, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews).
[134] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[135] Noah D. Goodman,et al. Pyro: Deep Universal Probabilistic Programming , 2018, J. Mach. Learn. Res..
[136] Natalia Gimelshein,et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library , 2019, NeurIPS.
[137] Alok Aggarwal,et al. Regularized Evolution for Image Classifier Architecture Search , 2018, AAAI.
[138] Kalyanmoy Deb,et al. A Comparative Analysis of Selection Schemes Used in Genetic Algorithms , 1990, FOGA.
[139] Yanjun Ma,et al. PaddlePaddle: An Open-Source Deep Learning Platform from Industrial Practice , 2019 .
[140] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[141] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[142] Sean M. Law,et al. STUMPY: A Powerful and Scalable Python Library for Time Series Data Mining , 2019, J. Open Source Softw..
[143] Lin Xu,et al. Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights , 2017, ICLR.
[144] Leo Breiman,et al. Random Forests , 2001, Machine Learning.
[145] Matthew Johnson,et al. Compiling machine learning programs via high-level tracing , 2018 .
[146] John Salvatier,et al. Theano: A Python framework for fast computation of mathematical expressions , 2016, ArXiv.
[147] Vahid Mirjalili,et al. Python machine learning : machine learning and deep learning with Python, scikit-learn, and TensorFlow , 2017 .
[148] Yanjun Qi,et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks , 2017, NDSS.
[149] Dan Boneh,et al. Adversarial Training and Robustness for Multiple Perturbations , 2019, NeurIPS.
[150] Silvia M. Nassar,et al. Impact of an Extra Layer on the Stacking Algorithm for Classification Problems , 2018, J. Comput. Sci..
[151] Sebastian Raschka,et al. MLxtend: Providing machine learning and data science utilities and extensions to Python's scientific computing stack , 2018, J. Open Source Softw..
[152] Darshan Patil,et al. Towards modular and programmable architecture search , 2019, NeurIPS.
[153] Randal S. Olson,et al. TPOT: A Tree-based Pipeline Optimization Tool for Automating Machine Learning , 2016, AutoML@ICML.
[154] Joseph Sill,et al. Feature-Weighted Linear Stacking , 2009, ArXiv.
[155] Pushmeet Kohli,et al. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks , 2018, ICML.
[156] Amit Agarwal,et al. CNTK: Microsoft's Open-Source Deep-Learning Toolkit , 2016, KDD.
[157] Matthias Bethge,et al. Towards the first adversarially robust neural network model on MNIST , 2018, ICLR.
[158] R. Tibshirani,et al. Least angle regression , 2004, math/0406456.
[159] Yaser Sheikh,et al. Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[160] Dhabaleswar K. Panda,et al. Efficient Inter-node MPI Communication Using GPUDirect RDMA for InfiniBand Clusters with NVIDIA GPUs , 2013, 2013 42nd International Conference on Parallel Processing.
[161] H. Brendan McMahan,et al. A General Approach to Adding Differential Privacy to Iterative Training Procedures , 2018, ArXiv.
[162] Xiaoyu Cao,et al. Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification , 2017, ACSAC.
[163] John F. Canny,et al. T-SNE-CUDA: GPU-Accelerated T-SNE and its Applications to Modern Data , 2018, 2018 30th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD).
[164] Sergey Ioffe,et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift , 2015, ICML.
[165] Gilles Louppe,et al. Independent consultant , 2013 .
[166] Li Fei-Fei,et al. Progressive Neural Architecture Search , 2017, ECCV.
[167] Sebastian Ruder,et al. Universal Language Model Fine-tuning for Text Classification , 2018, ACL.
[168] Sebastian Raschka,et al. Naive Bayes and Text Classification I - Introduction and Theory , 2014, ArXiv.
[169] Quoc V. Le,et al. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks , 2019, ICML.
[170] Valentina Zantedeschi,et al. Efficient Defenses Against Adversarial Attacks , 2017, AISec@CCS.
[171] Samy Bengio,et al. Adversarial examples in the physical world , 2016, ICLR.
[172] Bernd Bischl,et al. An Open Source AutoML Benchmark , 2019, ArXiv.
[173] Qingquan Song,et al. Auto-Keras: An Efficient Neural Architecture Search System , 2018, KDD.
[174] Ting Wang,et al. DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model , 2019, 2019 IEEE Symposium on Security and Privacy (SP).
[175] George Bosilca,et al. UCX: An Open Source Framework for HPC Network APIs and Beyond , 2015, 2015 IEEE 23rd Annual Symposium on High-Performance Interconnects.
[176] Jun Zhu,et al. Boosting Adversarial Attacks with Momentum , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[177] Sanjay Ghemawat, et al. The Google file system, 2003.
[178] Bo Chen,et al. Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[179] Jacob Schreiber,et al. Pomegranate: fast and flexible probabilistic modeling in python , 2017, J. Mach. Learn. Res..
[180] Ian J. Goodfellow,et al. Technical Report on the CleverHans v2.1.0 Adversarial Examples Library , 2016 .
[181] Michael I. Jordan,et al. CoCoA: A General Framework for Communication-Efficient Distributed Optimization , 2016, J. Mach. Learn. Res..
[182] Jinfeng Yi,et al. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models , 2017, AISec@CCS.
[183] Jinfeng Yi,et al. EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples , 2017, AAAI.
[184] Hesham El-Rewini,et al. Message Passing Interface (MPI) , 2005 .
[185] Trevor Darrell,et al. Caffe: Convolutional Architecture for Fast Feature Embedding , 2014, ACM Multimedia.
[186] Benjamin Edwards,et al. Adversarial Robustness Toolbox v0.2.2 , 2018, ArXiv.
[187] John D. Hunter,et al. Matplotlib: A 2D Graphics Environment , 2007, Computing in Science & Engineering.
[188] Matthew Rocklin,et al. Dask: Parallel Computation with Blocked algorithms and Task Scheduling , 2015, SciPy.
[189] Dimitrios Sarigiannis,et al. Snap ML: A Hierarchical Framework for Machine Learning , 2018, NeurIPS.
[190] Colin Raffel,et al. Thermometer Encoding: One Hot Way To Resist Adversarial Examples , 2018, ICLR.
[191] Dan Boneh,et al. Ensemble Adversarial Training: Attacks and Defenses , 2017, ICLR.
[192] Leland McInnes,et al. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction , 2018, ArXiv.
[193] Aaron Klein,et al. BOHB: Robust and Efficient Hyperparameter Optimization at Scale , 2018, ICML.
[194] Nina Narodytska,et al. Simple Black-Box Adversarial Perturbations for Deep Networks , 2016, ArXiv.
[195] Wojciech M. Czarnecki,et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning , 2019, Nature.
[196] Kirthevasan Kandasamy,et al. Neural Architecture Search with Bayesian Optimisation and Optimal Transport , 2018, NeurIPS.
[197] W. S. McCulloch, et al. A logical calculus of the ideas immanent in nervous activity, 1990, The Philosophy of Artificial Intelligence.
[198] Jack Dongarra,et al. LAPACK: a portable linear algebra library for high-performance computers , 1990, SC.
[199] Prabhat,et al. Scalable Bayesian Optimization Using Deep Neural Networks , 2015, ICML.
[200] Leo Breiman,et al. Bagging Predictors , 1996, Machine Learning.
[201] María Rodríguez Martínez,et al. MonoNet: Towards Interpretable Models by Learning Monotonic Features , 2019, ArXiv.
[202] Jeff Johnson,et al. Billion-Scale Similarity Search with GPUs , 2017, IEEE Transactions on Big Data.
[203] Ananthram Swami,et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks , 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[204] Demis Hassabis,et al. Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm , 2017, ArXiv.
[205] Kenta Oono, et al. Chainer: A Next-Generation Open Source Framework for Deep Learning, 2015.
[206] Sebastian Raschka,et al. Model Evaluation, Model Selection, and Algorithm Selection in Machine Learning , 2018, ArXiv.
[207] Quoc V. Le,et al. GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism , 2018, ArXiv.
[208] Shuchang Zhou,et al. DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients , 2016, ArXiv.
[209] Kilian Q. Weinberger,et al. Densely Connected Convolutional Networks , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[210] Seyed-Mohsen Moosavi-Dezfooli,et al. Universal Adversarial Perturbations , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[211] Skipper Seabold,et al. Statsmodels: Econometric and Statistical Modeling with Python , 2010, SciPy.
[212] Ananthram Swami,et al. The Limitations of Deep Learning in Adversarial Settings , 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[213] Michael S. Bernstein,et al. ImageNet Large Scale Visual Recognition Challenge , 2014, International Journal of Computer Vision.
[214] Wes McKinney,et al. pandas: a Foundational Python Library for Data Analysis and Statistics , 2011 .
[215] Fernando Nogueira,et al. Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning , 2016, J. Mach. Learn. Res..
[216] Sanjay Ghemawat,et al. MapReduce: simplified data processing on large clusters , 2008, CACM.
[217] Tie-Yan Liu,et al. LightGBM: A Highly Efficient Gradient Boosting Decision Tree , 2017, NIPS.