Publication Details
Tony Zhang, Yuanting Liu, Andreas Butz
Designing AI for Appropriation Will Calibrate Trust
CHI TRAIT '23: Workshop on Trust and Reliance in AI-Assisted Tasks at CHI 2023
Calibrating users' trust in AI to an appropriate level is widely considered one of the key mechanisms for managing brittle AI performance. However, trust calibration is hard to achieve: numerous interacting factors can tip trust in one direction or the other. In this position paper, we argue that instead of focusing on trust calibration to achieve resilient human-AI interactions, it may be helpful to first design AI systems for appropriation, i.e., to allow users to use an AI system according to their own intentions, beyond what the designer explicitly anticipated. We observe that rather than suggesting end results without human involvement, appropriable AI systems tend to offer users incremental support. Such systems do not eliminate the need for trust calibration, but we argue that they may calibrate users' trust as a side effect and thereby achieve an appropriate level of trust by design.