
Is human overreliance on AI provoked by study designs?

Master thesis (2022)

Status: in progress
Student: Sven Tong
Advisor: Tony Zhang
Professor: Prof. Dr. H. Hußmann
Period: 2022/05/16 - 2022/11/16

Task

A popular application of AI is to support human decision-making with AI-generated suggestions, often in high-stakes domains like medical diagnosis, loan approval, or hiring. The expectation is that AI can help humans make better decisions, while humans in turn notice when AI suggestions are inappropriate and override the AI in such cases. To help humans decide when to follow and when to override the AI, such decision support systems usually not only provide AI-generated suggestions but also explain the AI output to the decision-maker. However, a growing number of empirical studies show that in many cases, humans tend to overrely on AI, following AI suggestions even when they are inappropriate. Explanations appear to be ineffective in solving this and often even compound the issue. Clearly, this is highly problematic for high-stakes applications.

Recent research suggests that the overreliance effect occurs when people judge the appropriateness of AI suggestions through error-prone heuristic thinking instead of more deliberate analytic thinking. At the same time, a common feature of experimental studies on human-AI decision-making is that participants have to make dozens of decisions in a row, which might be tiring. Could it be that this study design nudges participants toward less tiring heuristic thinking, and hence provokes the overreliance effect? Would the overreliance effect still occur if participants had to make fewer decisions, or even only a single one, at a time? In this thesis, you will answer these questions by designing and conducting a controlled experiment. As a test bed, you will employ one of the common tasks researched in human-AI decision-making, such as sentiment analysis (e.g. deciding whether an IMDb review is positive or negative).
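To make the dependent measure concrete: overreliance is commonly operationalized as the fraction of trials with an inappropriate (wrong) AI suggestion on which the participant nevertheless followed the AI. A minimal sketch in Python, using hypothetical trial data (the `Trial` fields and the example labels are illustrative, not part of any existing study infrastructure):

```python
from dataclasses import dataclass

@dataclass
class Trial:
    ai_suggestion: str       # label suggested by the AI (e.g. "positive")
    ground_truth: str        # correct label for the review
    participant_choice: str  # participant's final decision

def overreliance_rate(trials):
    """Fraction of trials with a wrong AI suggestion on which the
    participant nevertheless followed the AI."""
    wrong = [t for t in trials if t.ai_suggestion != t.ground_truth]
    if not wrong:
        return 0.0
    followed = sum(1 for t in wrong if t.participant_choice == t.ai_suggestion)
    return followed / len(wrong)

# Hypothetical session: the AI is wrong on two trials; the participant
# follows the wrong suggestion once and overrides it once.
trials = [
    Trial("positive", "positive", "positive"),  # AI correct
    Trial("negative", "positive", "negative"),  # AI wrong, followed
    Trial("positive", "negative", "negative"),  # AI wrong, overridden
    Trial("negative", "negative", "negative"),  # AI correct
]
print(overreliance_rate(trials))  # → 0.5
```

Comparing this rate between a many-decisions-in-a-row condition and a few-decisions condition would directly address the research question above.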

This thesis topic is light on technical implementation but requires rigor in designing and conducting the experiment. Familiarity with machine learning is nevertheless a plus.

Suggested Reading:

  • Z. Buçinca, M. B. Malaya, and K. Z. Gajos, “To trust or to think: cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making,” PACM HCI, vol. 5, no. CSCW1, p. 188:1-188:21, Apr. 2021, doi: 10.1145/3449287.

Keywords

Human-AI Interaction, Human-Centered AI, Decision Support, Explainable AI, User Study, Experimental Study
Last modified on 2020-04-11 by Changkun Ou (rev 35667)