Publication Information

[Download PDF]
Luke Haliburton, Jan Leusmann, Robin Welsch, Sinksar Ghebremedhin, Petros Isaakidis, Albrecht Schmidt, Sven Mayer
Uncovering Labeler Bias in Machine Learning Annotation Tasks
In AI and Ethics, 2024
  As artificial intelligence becomes increasingly pervasive, it is essential that we understand the implications of bias in machine learning. Many developers rely on crowd workers to generate and annotate datasets for machine learning applications. However, this step risks embedding training data with labeler bias, leading to biased decision-making in systems trained on these datasets. To characterize labeler bias, we created a face dataset and conducted two studies where labelers of different ethnicity and sex completed annotation tasks. In the first study, labelers annotated subjective characteristics of faces. In the second, they annotated images using bounding boxes. Our results demonstrate that labeler demographics significantly impact both subjective and accuracy-based annotations, indicating that collecting a diverse set of labelers may not be enough to solve the problem. We discuss the consequences of these findings for current machine learning practices to create fair and unbiased systems.
Legal Notice – Privacy Policy – Contact  |  Last modified 05.02.2007 by Richard Atterer (rev 1481)