Jimeng Sun | Cao Xiao | Shenda Hong | Hongyan Li | Tengfei Ma | Linda Qiao
[1] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[2] Bowen Zhou, et al. A Structured Self-attentive Sentence Embedding, 2017, ICLR.
[3] Nikos Komodakis, et al. Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer, 2016, ICLR.
[4] Peter Szolovits, et al. MIMIC-III, a freely accessible critical care database, 2016, Scientific Data.
[5] Yoshua Bengio, et al. Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach, 2011, ICML.
[6] François Laviolette, et al. Domain-Adversarial Training of Neural Networks, 2015, J. Mach. Learn. Res.
[7] Shahrokh Valaee, et al. Training Neural Networks with Very Little Data - A Draft, 2017, ArXiv.
[8] Yu Zhang, et al. Hierarchical Attention Transfer Network for Cross-Domain Sentiment Classification, 2018, AAAI.
[9] Yiannis Kompatsiaris, et al. Proceedings of the 2016 ACM on Multimedia Conference, 2016, MM 2016.
[10] Marc Toussaint, et al. Probabilistic inference for solving discrete and continuous state Markov Decision Processes, 2006, ICML.
[11] Didier Stricker, et al. Creating and benchmarking a new dataset for physical activity monitoring, 2012, PETRA '12.
[12] Mark Sandler, et al. Convolutional recurrent neural networks for music classification, 2016, 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[13] H. S. Wolff, et al. iRun: Horizontal and Vertical Shape of a Region-Based Graph Compression, 2022, Sensors.
[14] Ralf Bousseljot, et al. Nutzung der EKG-Signaldatenbank CARDIODAT der PTB über das Internet [Use of the PTB ECG signal database CARDIODAT via the Internet], 2009.
[15] Kaiming He, et al. Data Distillation: Towards Omni-Supervised Learning, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[16] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[17] Xiangji Huang, et al. Bi-Transferring Deep Neural Networks for Domain Adaptation, 2016, ACL.
[18] Vineeth N. Balasubramanian, et al. Deep Model Compression: Distilling Knowledge from Noisy Teachers, 2016, ArXiv.
[19] Rich Caruana, et al. Do Deep Nets Really Need to be Deep?, 2013, NIPS.
[20] Daniel Roggen, et al. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition, 2016, Sensors.
[21] Mark Goadrich, et al. The relationship between Precision-Recall and ROC curves, 2006, ICML.
[22] William W. Cohen, et al. Proceedings of the 23rd International Conference on Machine Learning, 2006, ICML 2006.
[23] Bernhard Schölkopf, et al. Unifying distillation and privileged information, 2015, ICLR.
[24] Fillia Makedon, et al. Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments, 2017, PETRA.
[25] Geoffrey E. Hinton, et al. Distilling a Neural Network Into a Soft Decision Tree, 2017, CEx@AI*IA.
[26] Meng Wu, et al. ENCASE: An ENsemble ClASsifiEr for ECG classification using expert features and deep neural networks, 2017, 2017 Computing in Cardiology (CinC).
[27] Naiyan Wang, et al. Like What You Like: Knowledge Distill via Neuron Selectivity Transfer, 2017, ArXiv.
[28] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[29] Jimeng Sun, et al. Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review, 2018, J. Am. Medical Informatics Assoc.
[30] Junmo Kim, et al. A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[31] George Trigeorgis, et al. Domain Separation Networks, 2016, NIPS.
[32] Mohan S. Kankanhalli, et al. Attention Transfer from Web Images for Video Recognition, 2017, ACM Multimedia.
[33] Kilian Q. Weinberger, et al. Marginalized Denoising Autoencoders for Domain Adaptation, 2012, ICML.