A Scalable, Interpretable, Verifiable & Differentiable Logic Gate Convolutional Neural Network Architecture From Truth Tables
[1] Joao Marques-Silva. Logic-Based Explainability in Machine Learning, 2022, RW.
[2] C. Borgelt, et al. Deep Differentiable Logic Gate Networks, 2022, NeurIPS.
[3] F. Yang, et al. Learning Interpretable Decision Rule Sets: A Submodular Optimization Approach, 2022, NeurIPS.
[4] Clark W. Barrett, et al. Scalable verification of GNN-based job schedulers, 2022, Proc. ACM Program. Lang.
[5] Pierre Le Bodic, et al. Learning Optimal Decision Sets and Lists with SAT, 2021, J. Artif. Intell. Res.
[6] Bubacarr Bah, et al. Efficient and Robust Mixed-Integer Optimization Methods for Training Binarized Deep Neural Networks, 2021, ArXiv.
[7] Pushmeet Kohli, et al. Making sense of raw input, 2021, Artif. Intell.
[8] Jianyong Wang, et al. Scalable Rule-Based Representation Learning for Interpretable Classification, 2021, NeurIPS.
[9] Taylor Johnson, et al. The Second International Verification of Neural Networks Competition (VNN-COMP 2021): Summary and Results, 2021, ArXiv.
[10] Martin Rinard, et al. Verifying Low-dimensional Input Neural Networks via Input Quantization, 2021, SAS.
[11] Taolue Chen, et al. BDD4BNN: A BDD-based Quantitative Analysis Framework for Binarized Neural Networks, 2021, CAV.
[12] Mark Niklas Müller, et al. PRIMA: general and precise neural network certification via scalable convex hull approximations, 2021, Proc. ACM Program. Lang.
[13] Daniel Fryer, et al. Shapley values for feature selection: The good, the bad, and the axioms, 2021, IEEE Access.
[14] Di He, et al. Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons, 2021, ICML.
[15] Vasco M. Manquinho, et al. Pseudo-Boolean and Cardinality Constraints, 2021, Handbook of Satisfiability.
[16] Dan Alistarh, et al. Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks, 2021, J. Mach. Learn. Res.
[17] Peter Tiňo, et al. A Survey on Neural Network Interpretability, 2020, IEEE Transactions on Emerging Topics in Computational Intelligence.
[18] Yihan Wang, et al. Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers, 2020, ICLR.
[19] S. Gelly, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, 2020, ICLR.
[20] Nicolas Flammarion, et al. RobustBench: a standardized adversarial robustness benchmark, 2020, NeurIPS Datasets and Benchmarks.
[21] Rishabh Singh, et al. Scaling Symbolic Methods using Gradients for Neural Model Explanation, 2020, ICLR.
[22] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[23] Martin Rinard, et al. Efficient Exact Verification of Binarized Neural Networks, 2020, NeurIPS.
[24] Nina Narodytska, et al. In Search for a SAT-friendly Binarized Neural Network Architecture, 2020, ICLR.
[25] Geoffrey E. Hinton, et al. Neural Additive Models: Interpretable Machine Learning with Neural Nets, 2020, NeurIPS.
[26] Meng Ma, et al. Extract interpretability-accuracy balanced rules from artificial neural networks: A review, 2020, Neurocomputing.
[27] Adnan Darwiche, et al. On Tractable Representations of Binary Neural Networks, 2020, KR.
[28] Kai Jia, et al. Exploiting Verified Neural Networks via Floating Point Numerical Error, 2020, SAS.
[29] Cho-Jui Hsieh, et al. Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond, 2020, NeurIPS.
[30] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[31] Brandon M. Greenwell, et al. Interpretable Machine Learning, 2019, Hands-On Machine Learning with R.
[32] Andre Araujo, et al. Computing Receptive Fields of Convolutional Neural Networks, 2019, Distill.
[33] V. Gottemukkula. Polynomial Activation Functions, 2019.
[34] Joao Marques-Silva, et al. Assessing Heuristic Machine Learning Explanations with Model Counting, 2019, SAT.
[35] Shweta Shinde, et al. Quantitative Verification of Neural Networks and Its Security Applications, 2019, CCS.
[36] Cho-Jui Hsieh, et al. Towards Stable and Efficient Training of Verifiably Robust Neural Networks, 2019, ICLR.
[37] Matthias Hein, et al. Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks, 2019, NeurIPS.
[38] Sanjeeb Dash, et al. Generalized Linear Rule Models, 2019, ICML.
[39] Peter Y. K. Cheung, et al. LUTNet: Rethinking Inference in FPGA Soft Logic, 2019, IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM).
[40] Matthias Bethge, et al. Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet, 2019, ICLR.
[41] Aleksander Madry, et al. On Evaluating Adversarial Robustness, 2019, ArXiv.
[42] Aleksander Madry, et al. Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability, 2018, ICLR.
[43] Wei Liu, et al. Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm, 2018, ECCV.
[44] Joao Marques-Silva, et al. PySAT: A Python Toolkit for Prototyping with SAT Oracles, 2018, SAT.
[45] Matthew Mirman, et al. Differentiable Abstract Interpretation for Provably Robust Neural Networks, 2018, ICML.
[46] J. Zico Kolter, et al. Scaling provable adversarial defenses, 2018, NeurIPS.
[47] Pushmeet Kohli, et al. Training verified learners with learned verifiers, 2018, ArXiv.
[48] Sanjeeb Dash, et al. Boolean Decision Rules via Column Generation, 2018, NeurIPS.
[49] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[50] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[51] Mark Sandler, et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[52] Russ Tedrake, et al. Evaluating Robustness of Neural Networks with Mixed Integer Programming, 2017, ICLR.
[53] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[54] Chung-Hao Huang, et al. Verification of Binarized Neural Networks via Inter-neuron Factoring (Short Paper), 2017, VSTTE.
[55] Clark W. Barrett, et al. Provably Minimally-Distorted Adversarial Examples, 2017.
[56] Leonid Ryzhyk, et al. Verifying Properties of Binarized Deep Neural Networks, 2017, AAAI.
[57] Alessio Lomuscio, et al. An approach to reachability analysis for feed-forward ReLU neural networks, 2017, ArXiv.
[58] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[59] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[60] Sepp Hochreiter, et al. Self-Normalizing Neural Networks, 2017, NIPS.
[61] Rüdiger Ehlers, et al. Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks, 2017, ATVA.
[62] Chih-Hong Cheng, et al. Maximum Resilience of Artificial Neural Networks, 2017, ATVA.
[63] Margo I. Seltzer, et al. Learning Certifiably Optimal Rule Lists, 2017, KDD.
[64] William Welser, et al. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence, 2017.
[65] Dimitris Bertsimas, et al. Optimal classification trees, 2017, Machine Learning.
[66] Mykel J. Kochenderfer, et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, 2017, CAV.
[67] Alessio Lomuscio, et al. MCMAS: an open-source model checker for the verification of multi-agent systems, 2017, International Journal on Software Tools for Technology Transfer.
[68] Andy R. Terrel, et al. SymPy: Symbolic computing in Python, 2017, PeerJ Preprints.
[69] Frank Hutter, et al. SGDR: Stochastic Gradient Descent with Warm Restarts, 2016, ICLR.
[70] Jure Leskovec, et al. Interpretable Decision Sets: A Joint Framework for Description and Prediction, 2016, KDD.
[71] Shuchang Zhou, et al. DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients, 2016, ArXiv.
[72] Francesco Visin, et al. A guide to convolution arithmetic for deep learning, 2016, ArXiv.
[73] Ali Farhadi, et al. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, 2016, ECCV.
[74] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[75] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[76] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[77] Tobias Philipp, et al. A More Compact Translation of Pseudo-Boolean Constraints into CNF Such That Generalized Arc Consistency Is Maintained, 2014, KI.
[78] Alex Alves Freitas, et al. Comprehensible classification models: a position paper, 2014, SIGKDD Explor.
[79] Steffen Hölldobler, et al. A Compact Encoding of Pseudo-Boolean Constraints into SAT, 2012, KI.
[80] Mark H. Liffiton, et al. A Cardinality Solver: More Expressive Constraints for Free (Poster Presentation), 2012, SAT.
[81] Albert Oliveras, et al. BDDs for Pseudo-Boolean Constraints - Revisited, 2011, SAT.
[82] Gaël Varoquaux, et al. The NumPy Array: A Structure for Efficient Numerical Computation, 2011, Computing in Science & Engineering.
[83] Barry O'Sullivan, et al. Minimising Decision Tree Size as Combinatorial Optimisation, 2009, CP.
[84] Fei-Fei Li, et al. ImageNet: A large-scale hierarchical image database, 2009, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[85] Sanjeev Arora, et al. Computational Complexity: A Modern Approach, 2009.
[86] Niklas Sörensson, et al. Translating Pseudo-Boolean Constraints into SAT, 2006, J. Satisf. Boolean Model. Comput.
[87] Tiziano Villa, et al. Complexity of two-level logic minimization, 2006, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems.
[88] João Gama, et al. Learning with Drift Detection, 2004, SBIA.
[89] Pierre Marquis, et al. A Knowledge Compilation Map, 2002, J. Artif. Intell. Res.
[90] L. Breiman. Random Forests, 2001, Machine Learning.
[91] Saburo Muroga, et al. Binary Decision Diagrams, 2000, The VLSI Handbook.
[92] Yoram Singer, et al. A simple, fast, and effective rule learner, 1999, AAAI.
[93] Nir Friedman, et al. Bayesian Network Classifiers, 1997, Machine Learning.
[94] Ute St. Clair, et al. Fuzzy Set Theory: Foundations and Applications, 1997.
[95] William W. Cohen. Fast Effective Rule Induction, 1995, ICML.
[96] J. Ross Quinlan, et al. C4.5: Programs for Machine Learning, 1992.
[97] J. Ross Quinlan, et al. Simplifying decision trees, 1987, Int. J. Hum. Comput. Stud.
[98] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.
[99] Randal E. Bryant, et al. Graph-Based Algorithms for Boolean Function Manipulation, 1986, IEEE Transactions on Computers.
[100] J. Ross Quinlan, et al. Induction of Decision Trees, 1986, Machine Learning.
[101] C. Y. Lee. Representation of switching circuits by binary-decision programs, 1959.
[102] Willard Van Orman Quine, et al. The Problem of Simplifying Truth Functions, 1952.
[103] Archie Blake, et al. Corrections to Canonical expressions in Boolean algebra, 1938, Journal of Symbolic Logic.
[104] L. Locascio, et al. Artificial Intelligence Risk Management Framework (AI RMF 1.0), 2023.
[105] Cho-Jui Hsieh, et al. Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification, 2021, ArXiv.
[106] Nikil D. Dutt, et al. pyEDA: An Open-Source Python Toolkit for Pre-processing and Feature Extraction of Electrodermal Activity, 2021, ANT/EDI40.
[107] Adnan Darwiche, et al. Verifying Binarized Neural Networks by Local Automaton Learning, 2019.
[108] Adnan Darwiche, et al. Compiling Neural Networks into Tractable Boolean Circuits, 2019.
[109] Ran El-Yaniv, et al. Binarized Neural Networks, 2016, ArXiv.
[110] Ronald L. Rivest, et al. Learning decision lists, 1987, Machine Learning.