
Publication Information

[Download PDF]
Luke Haliburton, Sinksar Ghebremedhin, Robin Welsch, Albrecht Schmidt, Sven Mayer
Investigating Labeler Bias in Face Annotation for Machine Learning
In Frontiers in Artificial Intelligence and Applications: HHAI 2024 (bib)
  In a world increasingly reliant on artificial intelligence, it is more important than ever to consider its ethical implications. One key under-explored challenge is labeler bias, the bias introduced by the individuals who label datasets, which can produce inherently biased training data and subsequently lead to inaccurate or unfair decisions in healthcare, employment, education, and law enforcement. Hence, we conducted a study ($N$=98) to investigate and measure labeler bias in a labeling task using images of people of different ethnicities and sexes. Our results show that participants hold stereotypes that influence their decision-making process and that labeler demographics affect the assigned labels. We also discuss how labeler bias influences datasets and, subsequently, the models trained on them. Overall, a high degree of transparency must be maintained throughout the entire artificial intelligence training process to identify and correct biases in the data as early as possible.
Last modified 05.02.2007 by Richard Atterer (rev 1481)