
Supporting Decisions With AI: Exploring Alternatives to Fully Automatic Suggestions

Master's thesis (2022)

Status: in progress
Student: Jenny Phu
Advisor: Tony Zhang
Professor: Prof. Dr. H. Hußmann
Period: 2022/01/31 - 2022/07/31

Task

A common application of AI is in decision support systems, often in high-stakes domains such as medical diagnostics, criminal justice, or finance. The usual approach is to give decision makers fully automatic, AI-based suggestions. To help decision makers evaluate these suggestions, the AI systems are often designed to explain their outputs. This setup creates the problem of trust calibration: decision makers should neither rely on the AI system when it is wrong (overtrust) nor dismiss it when it is correct (undertrust). Trust calibration is hard to achieve, and current explainable AI approaches show limited effectiveness in this regard.

A mostly disregarded alternative would be to design decision support systems in a way that avoids the trust calibration problem as much as possible. Instead of automatically suggesting decisions with AI, a decision support system could be designed to interactively support users in reaching a decision themselves, e.g. by highlighting important features.
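
As an illustration, here is a minimal, hypothetical sketch of the "highlight important features" idea in Python (the language suggested under Requirements): instead of outputting an accept/reject suggestion, the system shows the decision maker which features of a credit application speak for or against creditworthiness. The feature names, toy data, and the logistic regression model are illustrative assumptions, not part of the thesis specification.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Illustrative features and toy training data: 6 applicants, label 1 = creditworthy.
feature_names = ["income", "debt", "credit_history_len", "num_late_payments"]
X = np.array([
    [52_000,  5_000, 10, 0],
    [23_000, 18_000,  2, 4],
    [75_000,  2_000, 15, 0],
    [31_000, 12_000,  4, 2],
    [44_000,  9_000,  8, 1],
    [28_000, 15_000,  3, 3],
], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def feature_highlights(applicant):
    # Per-feature contribution to the log-odds (coefficient * standardized value),
    # shown to the user instead of an automatic accept/reject suggestion.
    z = scaler.transform(np.array([applicant], dtype=float))[0]
    contributions = model.coef_[0] * z
    order = np.argsort(-np.abs(contributions))
    return [(feature_names[i], contributions[i]) for i in order]

for name, weight in feature_highlights([35_000, 14_000, 5, 2]):
    direction = "supports" if weight > 0 else "speaks against"
    print(f"{name:>20}: {weight:+.2f}  ({direction} creditworthiness)")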

In this thesis, you will explore such an alternative design for AI decision support systems, using creditworthiness decisions as an example. What could such a design look like? How does it compare to the common approach of fully automatic suggestions? Is it a viable way to support decisions with AI without running into the problem of trust calibration?

Tasks:

  • Development of a concept for supporting decision makers with AI in a more engaging and interactive manner.
  • Implementation of the concept, using creditworthiness decisions as an example.
  • Implementation of a comparison system that relies on fully automatic suggestions (see the sketch after this list).
  • Design and execution of a user study with both systems.
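
For contrast, a similarly hypothetical sketch of the fully automatic baseline mentioned above: the system reduces its output to a single suggested decision plus a confidence value, which the decision maker can follow or override. The toy data and model repeat the previous sketch so the snippet runs on its own.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Same toy credit data as in the sketch above: 6 applicants, label 1 = creditworthy.
X = np.array([
    [52_000,  5_000, 10, 0],
    [23_000, 18_000,  2, 4],
    [75_000,  2_000, 15, 0],
    [31_000, 12_000,  4, 2],
    [44_000,  9_000,  8, 1],
    [28_000, 15_000,  3, 3],
], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def automatic_suggestion(applicant):
    # The baseline's entire output: a suggested decision and the model's
    # confidence that the applicant is creditworthy, with no interaction.
    p = model.predict_proba(scaler.transform(np.array([applicant], dtype=float)))[0, 1]
    return ("grant credit" if p >= 0.5 else "deny credit"), p

decision, p = automatic_suggestion([35_000, 14_000, 5, 2])
print(f"AI suggestion: {decision} (confidence of creditworthiness: {p:.2f})")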

Requirements:

  • Familiarity with machine learning.
  • Programming experience, preferably in Python.

Suggested Reading:

  • Zhang, Z. T., Liu, Y., & Hussmann, H. (2021, July). Forward Reasoning Decision Support: Toward a More Complete View of the Human-AI Interaction Design Space. In CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter (pp. 1-5).

Keywords

Human-AI Interaction, Human-Centered AI, Decision Support, Explainable AI, User Study