The Fine Line Between Assistance and Dependence: The Impact of Task Difficulty and AI Explanation on User Overreliance
Master's thesis (2023)
Status | in progress
Student | Felicitas Buchner
Advisor | Tony Zhang
Professor | Prof. Dr. Andreas Butz
Period | 2023/03/10 – 2023/09/09
Task
A popular application of AI is to support human decision-making with AI-generated suggestions, often in high-stakes domains such as medical diagnosis, loan approval, or hiring. The expectation is that AI can supplement humans to make better decisions, while humans, in turn, notice when AI suggestions are inappropriate and override the AI in such cases. To help humans decide when to follow and when to override the AI, such decision support systems usually not only give AI-generated suggestions, but also explain the AI output to the decision-maker. However, a growing number of empirical studies show that humans often overrely on AI, following its suggestions even when they are inappropriate. Explanations appear to be ineffective at preventing this and often even compound the issue. Clearly, this is highly problematic for high-stakes applications.
But when does this overreliance occur? The effectiveness of AI-based decision support depends on a wide range of factors, many of them related to the specifics of the decision task. In this thesis, you will study the relationship between overreliance and task difficulty: some decisions are easy for humans, others are difficult. Is there a difference between easy and difficult decisions in how prone humans are to overreliance? Does the effect of explanations differ between easy and difficult decisions? To answer these questions, you will design and conduct a controlled experiment on the research platform LabintheWild.
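To make the outcome measure concrete, the sketch below shows one common way such an experiment's data could be analyzed: overreliance is often operationalized as the rate at which participants follow the AI on trials where its suggestion is wrong, compared across conditions. This is only a minimal illustration, not part of the thesis materials; the column names, condition labels, and the small example data are hypothetical.

```python
# Minimal sketch of an overreliance analysis for a hypothetical 2x2 design:
# task difficulty (easy/hard) x explanation (shown/none).
# All names and data below are illustrative assumptions, not study results.
import pandas as pd

# Hypothetical per-trial log: one row per participant decision, recording the
# condition, whether the AI suggestion was correct, and whether it was followed.
trials = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "difficulty":  ["easy", "hard", "easy", "hard", "easy", "hard", "easy", "hard"],
    "explanation": ["shown", "shown", "none", "none", "shown", "shown", "none", "none"],
    "ai_correct":  [False, False, False, False, False, False, False, False],
    "followed_ai": [False, True, True, True, False, True, False, True],
})

# Overreliance is typically measured on trials where the AI is wrong:
# the proportion of such trials on which participants still follow it.
wrong_ai = trials[~trials["ai_correct"]]
overreliance = (
    wrong_ai.groupby(["difficulty", "explanation"])["followed_ai"]
    .mean()
    .rename("overreliance_rate")
)
print(overreliance)
```

Comparing these per-condition rates (e.g., with a suitable statistical test) would then show whether overreliance, and the effect of explanations on it, differs between easy and difficult decisions.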
This thesis topic is less heavy on technical implementation, but requires rigor in designing and conducting the experiment. Nevertheless, familiarity with machine learning is still a plus.
Suggested Reading:
- G. Bansal et al., "Does the whole exceed its parts? The effect of AI explanations on complementary team performance", in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, May 2021, pp. 81:1–81:16. doi: 10.1145/3411764.3445717.
- P. Schmidt and F. Biessmann, “Calibrating human-AI collaboration: impact of risk, ambiguity and transparency on algorithmic bias”, in Machine Learning and Knowledge Extraction, Dublin, Ireland, Aug. 2020, pp. 431–449. doi: 10.1007/978-3-030-57321-8_24.