暂无分享,去创建一个
[1] Andrea Vedaldi,et al. Understanding deep image representations by inverting them , 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Yoshua Bengio,et al. Understanding intermediate layers using linear classifier probes , 2016, ICLR.
[3] K. Ilk. On the Regularization of Ill-Posed Problems , 1987 .
[4] Alex Smola,et al. Kernel methods in machine learning , 2007, math/0701907.
[5] Zachary Chase Lipton. The mythos of model interpretability , 2016, ACM Queue.
[6] Felipe Cucker,et al. On the mathematical foundations of learning , 2001 .
[7] Rich Caruana,et al. Do Deep Nets Really Need to be Deep? , 2013, NIPS.
[8] Rob Fergus,et al. Visualizing and Understanding Convolutional Networks , 2013, ECCV.
[9] Vladimir Vapnik,et al. Statistical learning theory , 1998 .
[10] Thomas Brox,et al. Inverting Visual Representations with Convolutional Networks , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[11] Yu Liu,et al. Regression learning based on incomplete relationships between attributes , 2018, Inf. Sci..
[12] Bolei Zhou,et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations , 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[13] A. N. Tikhonov,et al. The regularization of ill-posed problems , 1963 .
[14] Percy Liang,et al. Understanding Black-box Predictions via Influence Functions , 2017, ICML.