In recent years, a large number of makeup images have been shared on social media. Most of these images lack information about the cosmetics used, such as their color or glitter, and such attributes are difficult to infer because of variations in skin color and lighting conditions. In this paper, our goal is to estimate cosmetic features from a single makeup image. Previous work has measured the material parameters of cosmetic products from pairs of images showing the same face with and without makeup, but such comparison images are not always available; moreover, that method cannot represent local effects such as pearl or glitter because it relies on physically-based reflectance models. We propose a novel image-based method that extracts cosmetic features covering both color and local effects by decomposing the target image into makeup and skin color using a Difference of Gaussians (DoG). Because this decomposition separates makeup from skin, our method applies to single, standalone makeup images and is robust to differences in skin color. Experimental results demonstrate that our method is more robust to skin color differences than prior methods and captures the characteristics of each cosmetic product.
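The abstract does not specify the decomposition pipeline, but the core idea of a Difference-of-Gaussians band-pass can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sigma values and the interpretation of the two layers (low-frequency base as skin/makeup color, band-pass detail as local effects such as glitter) are assumptions for demonstration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_decompose(image, sigma_fine=2.0, sigma_coarse=8.0):
    """Split an image into a coarse base layer (approximating overall
    skin/makeup color) and a Difference-of-Gaussians detail layer
    (approximating local high-frequency effects such as glitter).
    Sigma values are illustrative, not taken from the paper.
    """
    image = image.astype(np.float64)
    fine = gaussian_filter(image, sigma=sigma_fine)
    coarse = gaussian_filter(image, sigma=sigma_coarse)
    detail = fine - coarse   # band-pass: local, small-scale variation
    base = coarse            # low-pass: smooth color component
    return base, detail

# Example: a synthetic 64x64 gray patch with one bright speck ("glitter")
patch = np.full((64, 64), 0.5)
patch[32, 32] = 1.0
base, detail = dog_decompose(patch)
```

In this toy example the bright speck survives in the `detail` layer while the `base` layer stays close to the uniform skin tone, which mirrors the intuition of separating local cosmetic effects from overall color.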