Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models
Published in PLOS ONE, 2023
While the prevalence of large pre-trained language models has led to significant improvements in the performance of NLP systems, recent research has demonstrated that these models inherit societal biases extant in natural language. In this paper, we explore a simple method to probe pre-trained language models for gender bias, which we use to effect a multi-lingual study of gender bias towards politicians. We construct a dataset of 250k politicians from most countries in the world and quantify adjective and verb usage around those politicians' names as a function of their gender. We conduct our study in 7 languages across 6 different language modeling architectures. Our results demonstrate that stance towards politicians in pre-trained language models is highly dependent on the language used. Finally, contrary to previous findings, our study suggests that larger language models do not tend to be significantly more gender-biased than smaller ones.
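For illustration, here is a minimal sketch of the kind of probe the abstract describes: querying a pre-trained masked language model for the descriptive words it predicts around a politician's name. This is not the paper's released code; the model, template, and names below are assumptions chosen for demonstration.

from transformers import pipeline

# A minimal sketch (not the paper's implementation) of probing a masked
# language model for gendered word associations: for a politician's name,
# query the model's fill-in-the-blank distribution and inspect which
# descriptive words it assigns high probability to.
unmasker = pipeline("fill-mask", model="bert-base-multilingual-cased")

# Hypothetical example names; the paper's dataset covers ~250k politicians.
names = ["Angela Merkel", "Barack Obama"]

for name in names:
    # Hypothetical cloze template; the probe compares predicted words
    # around names as a function of the politician's gender.
    template = f"{name} is a [MASK] politician."
    predictions = unmasker(template, top_k=10)
    top_words = [p["token_str"] for p in predictions]
    print(f"{name}: {top_words}")

Aggregating such predicted adjectives and verbs over many names, grouped by gender, is one way to quantify the kind of association bias the paper studies; repeating the probe across languages and model architectures mirrors its multi-lingual setup.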
@article{stanczak-etal-2023-quantifying,
author = {
Karolina Stańczak and
Sagnik Ray Choudhury and
Tiago Pimentel and
Ryan Cotterell and
Isabelle Augenstein
},
journal = {PLOS ONE},
title = {Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models},
year = {2023},
url = {https://arxiv.org/abs/2104.07505},
}