Detecting Sexist MEME On The Web: A Study on Textual and Visual Cues

In recent years there has been growing interest in the role of women within society and, in particular, in the way we approach and refer to them. At the same time, sexism, as a form of discrimination against women, has spread rapidly across the web, especially in the form of memes. Memes, which typically combine pictorial and textual components, can convey messages ranging from stereotyping of women, shaming, and objectification to violence. To counter this phenomenon, in this paper we provide a first insight into the automatic detection of sexist memes, investigating both unimodal and multimodal approaches in order to understand the respective contributions of textual and visual cues.
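
To make the unimodal versus multimodal distinction concrete, the sketch below shows one possible setup: a text-only baseline on the meme's overlaid text, and a simple late-fusion model that averages the predictions of independently trained textual and visual classifiers. This is not the paper's pipeline; the feature choices (TF-IDF, a color histogram) and the averaging fusion rule are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of unimodal vs.
# multimodal (late-fusion) meme classification. Feature choices and the
# equal-weight fusion rule are assumptions for illustration only.
import numpy as np
from PIL import Image
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def visual_features(image_path, bins=8):
    """Toy visual cue: a normalized RGB color histogram of the meme image."""
    img = np.asarray(Image.open(image_path).convert("RGB"))
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-9)


def train_unimodal_text(texts, labels):
    """Unimodal baseline: classify memes from their overlaid text alone."""
    vec = TfidfVectorizer(min_df=1)
    X = vec.fit_transform(texts)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    return vec, clf


def train_late_fusion(texts, image_paths, labels):
    """Multimodal late fusion: average the class probabilities of a text
    model and a visual model trained independently on the same memes."""
    vec, text_clf = train_unimodal_text(texts, labels)
    Xv = np.stack([visual_features(p) for p in image_paths])
    img_clf = LogisticRegression(max_iter=1000).fit(Xv, labels)

    def predict_proba(new_texts, new_paths):
        pt = text_clf.predict_proba(vec.transform(new_texts))
        pv = img_clf.predict_proba(
            np.stack([visual_features(p) for p in new_paths]))
        return 0.5 * pt + 0.5 * pv  # equal-weight averaging (an assumption)

    return predict_proba
```

In such a setup, comparing the text-only model against the fused one on held-out memes is what reveals how much the visual channel adds beyond the overlaid text.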
