Xiaoxiao Li | Kai Li | Zhao Song | Binghui Peng | Yangsibo Huang
[1] David P. Woodruff, et al. Relative Error Tensor Low Rank Approximation, 2017, Electron. Colloquium Comput. Complex.
[2] Arthur Jacot, et al. Neural Tangent Kernel: Convergence and Generalization in Neural Networks, 2018, NeurIPS.
[3] Jakub Konecný, et al. Federated Optimization: Distributed Optimization Beyond the Datacenter, 2015, ArXiv.
[4] Uriel Feige, et al. Relations between average case complexity and approximation complexity, 2002, STOC '02.
[5] Eero P. Simoncelli, et al. Image quality assessment: from error visibility to structural similarity, 2004, IEEE Transactions on Image Processing.
[6] Varun Kanade, et al. Reliably Learning the ReLU in Polynomial Time, 2016, COLT.
[7] Raghu Meka, et al. Learning Deep ReLU Networks Is Fixed-Parameter Tractable, 2021, IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS).
[8] Huaimin Wang, et al. Mixup Based Privacy Preserving Mixed Collaboration Learning, 2019, IEEE International Conference on Service-Oriented System Engineering (SOSE).
[9] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[10] Ran Raz, et al. Two Query PCP with Sub-Constant Error, 2008, 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS).
[11] Prasad Raghavendra, et al. A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs, 2016, ICALP.
[12] David P. Woodruff, et al. Weighted low rank approximations with provable guarantees, 2016, STOC.
[13] Pasin Manurangsi, et al. On the parameterized complexity of approximating dominating set, 2017, Electron. Colloquium Comput. Complex.
[14] Alexandros G. Dimakis, et al. Inverting Deep Generative Models, One Layer at a Time, 2019, NeurIPS.
[15] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[16] Alexander A. Sherstov, et al. Cryptographic Hardness for Learning Intersections of Halfspaces, 2006, 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06).
[17] Pasin Manurangsi, et al. Parameterized Approximation Algorithms for Directed Steiner Network Problems, 2017, ESA.
[18] Yuanzhi Li, et al. Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers, 2018, NeurIPS.
[19] Saibal Mukhopadhyay, et al. Edge-Host Partitioning of Deep Neural Networks with Feature Space Encoding for Resource-Constrained Internet-of-Things Platforms, 2018, 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS).
[20] Johan Håstad, et al. On bounded occurrence constraint satisfaction, 2000, Inf. Process. Lett.
[21] Ruosong Wang, et al. Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks, 2019, ICML.
[22] Massoud Pedram, et al. JointDNN: An Efficient Training and Inference Engine for Intelligent Mobile Cloud Computing Services, 2018, IEEE Transactions on Mobile Computing.
[23] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[24] Pasin Manurangsi, et al. The Computational Complexity of Training ReLU(s), 2018, ArXiv.
[25] David P. Woodruff, et al. Low rank approximation with entrywise ℓ1-norm error, 2017, STOC.
[26] Aggelos K. Katsaggelos, et al. Generative Adversarial Networks and Perceptual Losses for Video Super-Resolution, 2018, 25th IEEE International Conference on Image Processing (ICIP).
[27] David P. Woodruff, et al. A PTAS for 𝓁p-Low Rank Approximation, 2019, SODA.
[28] Barnabás Póczos, et al. Gradient Descent Provably Optimizes Over-parameterized Neural Networks, 2018, ICLR.
[29] Chen Wang, et al. Supervised Contrastive Learning, 2020, NeurIPS.
[30] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[31] Omri Weinstein, et al. Training (Overparametrized) Neural Networks in Near-Linear Time, 2020, ITCS.
[32] Pasin Manurangsi, et al. Almost-polynomial ratio ETH-hardness of approximating densest k-subgraph, 2016, STOC.
[33] Vitaly Shmatikov, et al. Privacy-preserving deep learning, 2015, 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton).
[34] H. T. Kung, et al. Distributed Deep Neural Networks Over the Cloud, the Edge and End Devices, 2017, IEEE 37th International Conference on Distributed Computing Systems (ICDCS).
[35] Li Fei-Fei, et al. Perceptual Losses for Real-Time Style Transfer and Super-Resolution, 2016, ECCV.
[36] Samet Oymak, et al. Toward Moderate Overparameterization: Global Convergence Guarantees for Training Shallow Neural Networks, 2019, IEEE Journal on Selected Areas in Information Theory.
[37] Johan Håstad, et al. Some optimal inapproximability results, 2001, JACM.
[38] Sanjeev Arora, et al. Computational Complexity: A Modern Approach, 2009.
[39] Yuanzhi Li, et al. Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data, 2018, NeurIPS.
[40] Song Han, et al. Deep Leakage from Gradients, 2019, NeurIPS.
[41] Yuanzhi Li, et al. Convergence Analysis of Two-layer Neural Networks with ReLU Activation, 2017, NIPS.
[42] Russell Impagliazzo, et al. Which problems have strongly exponential complexity?, 1998, Proceedings of the 39th Annual Symposium on Foundations of Computer Science (FOCS).
[43] Richard Nock, et al. Advances and Open Problems in Federated Learning, 2019, Found. Trends Mach. Learn.
[44] Inderjit S. Dhillon, et al. Towards Fast Computation of Certified Robustness for ReLU Networks, 2018, ICML.
[45] Ronald L. Rivest, et al. Training a 3-node neural network is NP-complete, 1988, COLT '88.
[46] Trevor N. Mudge, et al. Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge, 2017, ASPLOS.
[47] Zhao Song, et al. Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality, 2020, NeurIPS.
[48] Somesh Jha, et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures, 2015, CCS.
[49] David P. Woodruff, et al. Learning Two Layer Rectified Neural Networks in Polynomial Time, 2018, COLT.
[50] Thomas Brox, et al. Inverting Visual Representations with Convolutional Networks, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[51] Pasin Manurangsi, et al. Parameterized Intractability of Even Set and Shortest Vector Problem from Gap-ETH, 2018, Electron. Colloquium Comput. Complex.
[52] Dawn Song, et al. The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks, 2020, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[53] Dacheng Tao, et al. Perceptual Adversarial Networks for Image-to-Image Transformation, 2017, IEEE Transactions on Image Processing.
[54] Marc Tommasi, et al. Decentralized Collaborative Learning of Personalized Models over Networks, 2016, AISTATS.
[55] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[56] Luca Trevisan, et al. From Gap-ETH to FPT-Inapproximability: Clique, Dominating Set, and More, 2017, IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS).
[57] Ronald G. Dreslinski, et al. A hybrid approach to offloading mobile image classification, 2014, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[58] Inderjit S. Dhillon, et al. Recovery Guarantees for One-hidden-layer Neural Networks, 2017, ICML.
[59] Heekuck Oh, et al. Neural Networks for Pattern Recognition, 1993, Adv. Comput.
[60] Roi Livni, et al. On the Computational Efficiency of Training Neural Networks, 2014, NIPS.
[61] Kai Li, et al. TextHide: Tackling Data Privacy for Language Understanding Tasks, 2020, Findings of EMNLP.
[62] Pasin Manurangsi, et al. ETH-Hardness of Approximating 2-CSPs and Directed Steiner Network, 2018, ITCS.
[63] Mengdi Wang, et al. Generalized Leverage Score Sampling for Neural Networks, 2020, NeurIPS.
[64] Ruby B. Lee, et al. Model inversion attacks against collaborative inference, 2019, ACSAC.
[65] Xin Yang, et al. Quadratic Suffices for Over-parametrization via Matrix Chernoff Bound, 2019, ArXiv.
[66] Ruosong Wang, et al. On Exact Computation with an Infinitely Wide Neural Net, 2019, NeurIPS.
[67] Sanjeev Arora, et al. Computing a nonnegative matrix factorization -- provably, 2011, STOC '12.
[68] Kai Li, et al. InstaHide: Instance-hiding Schemes for Private Distributed Learning, 2020, ICML.
[69] Irit Dinur, et al. Mildly exponential reduction from gap 3SAT to polynomial-gap label-cover, 2016, Electron. Colloquium Comput. Complex.
[70] Ramesh Raskar, et al. Split learning for health: Distributed deep learning without sharing raw patient data, 2018, ArXiv.
[71] Yuanzhi Li, et al. On the Convergence Rate of Training Recurrent Neural Networks, 2018, NeurIPS.
[72] Amit Daniely, et al. Complexity Theoretic Limitations on Learning DNF's, 2014, COLT.
[73] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[74] Yuanzhi Li, et al. A Convergence Theory for Deep Learning via Over-Parameterization, 2018, ICML.
[75] Amit Daniely, et al. Complexity theoretic limitations on learning halfspaces, 2015, STOC.
[76] Inderjit S. Dhillon, et al. Learning Non-overlapping Convolutional Neural Networks with Multiple Kernels, 2017, ArXiv.
[77] Amit Daniely, et al. Hardness of Learning Neural Networks with Natural Weights, 2020, NeurIPS.
[78] Luca Trevisan, et al. Non-approximability results for optimization problems on bounded degree instances, 2001, STOC '01.
[79] Tengyu Ma, et al. Learning One-hidden-layer Neural Networks with Landscape Design, 2017, ICLR.