[1] Patrick Pérez, et al. Automatic Face Reenactment, 2014, 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Xiaoming Liu, et al. Disentangled Representation Learning GAN for Pose-Invariant Face Recognition, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Joshua Correll, et al. The Chicago face database: A free stimulus set of faces and norming data, 2015, Behavior Research Methods.
[4] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[5] Jianfei Cai, et al. Pluralistic Image Completion, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[6] Qiang Chen, et al. Network In Network, 2013, ICLR.
[7] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[8] Tieniu Tan, et al. Geometry Guided Adversarial Facial Expression Synthesis, 2017, ACM Multimedia.
[9] Fei Yang, et al. Expression flow for 3D-aware face component transfer, 2011, SIGGRAPH 2011.
[10] Maneesh Kumar Singh, et al. DRIT++: Diverse Image-to-Image Translation via Disentangled Representations, 2019, International Journal of Computer Vision.
[12] Skyler T. Hawk, et al. Presentation and validation of the Radboud Faces Database, 2010.
[13] P. Ekman, et al. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS), 2005.
[14] Jianfei Cai, et al. Deep Adaptive Attention for Joint Facial Action Unit Detection and Face Alignment, 2018, ECCV.
[15] Yu-Ding Lu, et al. DRIT++: Diverse Image-to-Image Translation via Disentangled Representations, 2020, International Journal of Computer Vision.
[16] Fei Yang, et al. Facial expression editing in video using a temporally-smooth factorization, 2012, 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Rama Chellappa, et al. ExprGAN: Facial Expression Editing with Controllable Expression Intensity, 2017, AAAI.
[18] Alexei A. Efros, et al. Image-to-Image Translation with Conditional Adversarial Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[19] Yuichi Yoshida, et al. Spectral Normalization for Generative Adversarial Networks, 2018, ICLR.
[20] M. Pantic, et al. Induced Disgust, Happiness and Surprise: An Addition to the MMI Facial Expression Database, 2010.
[21] Davis E. King. Dlib-ml: A Machine Learning Toolkit, 2009, J. Mach. Learn. Res.
[22] Jiaya Jia, et al. Homomorphic Latent Space Interpolation for Unpaired Image-to-Image Translation, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[23] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[24] Peter Robinson, et al. Cross-dataset learning and person-specific normalisation for automatic Action Unit detection, 2015, 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG).
[25] Raymond Y. K. Lau, et al. Least Squares Generative Adversarial Networks, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[27] Jung-Woo Ha, et al. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[28] Alexei A. Efros, et al. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[29] Francesc Moreno-Noguer, et al. GANimation: Anatomically-Aware Facial Animation from a Single Image, 2018, ECCV.
[30] Iasonas Kokkinos, et al. Deforming Autoencoders: Unsupervised Disentangling of Shape and Appearance, 2018, ECCV.
[31] Hui Chen, et al. Emotional facial expression transfer from a single image via generative adversarial nets, 2018, Comput. Animat. Virtual Worlds.
[33] Maja Pantic, et al. Web-based database for facial expression analysis, 2005, 2005 IEEE International Conference on Multimedia and Expo (ICME).