Scientific Presentation: Gender Bias in Natural Language Processing: Origins and Consequences

September 19, 2024 / IRIS3D

The talk will discuss the origins and implications of gender biases in NLP models, focusing on how these biases emerge from language. Through a series of case studies, I will examine inequalities found in commonly used data sources such as Wikipedia, instructional texts, and online discussion forums. These case studies will highlight how subtle biases in data can influence the outputs of NLP models, potentially perpetuating harmful stereotypes and inequities.

The talk aims to give the summer school participants insight into why mitigating bias matters in their own work with deep learning models. The discussion will emphasize the need for careful data selection and model development so that the tools they create contribute to more equitable outcomes.

LINK TO THE PRESENTATION

Contact

Dr. Agnieszka Faleńska
