This repository contains the code developed for "From Explanation to Detection: Multimodal Insights into Disagreement in Misogynous Memes". It provides several scripts to reproduce the results presented in the paper. The scripts allow both to estimate the Element Disagreement Score (EDS) for each constituent in the training dataset, and to estimate Sentence Disagreement Scores (SDS) according to the four proposed strategies: Sum, Mean, Median, and Minimum.
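The four aggregation strategies can be sketched as follows. This is a minimal illustration, assuming EDS values are plain floats per constituent; the function and variable names are hypothetical and do not reflect the repository's actual API.

```python
import statistics

def sentence_disagreement(eds_scores, strategy="mean"):
    """Aggregate per-constituent EDS values into a single SDS.

    Hypothetical sketch of the four strategies named above:
    Sum, Mean, Median, and Minimum.
    """
    strategies = {
        "sum": sum,
        "mean": statistics.mean,
        "median": statistics.median,
        "min": min,
    }
    return strategies[strategy](eds_scores)

# Example: EDS values for the constituents of one meme's text
eds = [0.1, 0.4, 0.25, 0.6]
print(sentence_disagreement(eds, "mean"))  # 0.3375
```

Each strategy trades off sensitivity differently: Sum grows with sentence length, Mean and Median normalize for it, and Minimum flags a sentence only as agreed-upon as its least ambiguous constituent.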
The datasets are exclusively reserved for the participants of SemEval-2022 Task 5 and are not to be used freely. The data may be distributed upon request and for academic purposes only. To request the datasets, please fill out the following form: https://forms.gle/AGWMiGicBHiQx4q98 After submitting the required information, participants will receive a link to a folder containing the datasets in zip format (trial, training, and development) and the password to uncompress the files.
The data provided by the challenge have been enriched with a synthetic dataset. More information about the creation of the synthetic dataset can be found in the paper, while instructions for downloading it can be found here.
If you find our work useful, please cite our papers: Unraveling Disagreement Constituents in Hateful Speech
@inproceedings{rizzi2024unraveling,
title={Unraveling Disagreement Constituents in Hateful Speech},
author={Rizzi, Giulia and Astorino, Alessandro and Rosso, Paolo and Fersini, Elisabetta},
booktitle={European Conference on Information Retrieval},
pages={21--29},
year={2024},
organization={Springer}
}
SemEval-2022 Task 5: Multimedia Automatic Misogyny Identification
@inproceedings{fersini2022semeval,
title={SemEval-2022 Task 5: Multimedia automatic misogyny identification},
author={Fersini, Elisabetta and Gasparini, Francesca and Rizzi, Giulia and Saibene, Aurora and Chulvi, Berta and Rosso, Paolo and Lees, Alyssa and Sorensen, Jeffrey},
booktitle={Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)},
pages={533--549},
year={2022}
}