Publication Information
Does Belief Shape How We Read Scatterplots? An Eye-Tracking Study
BT/MT
| Status | open |
| Advisor | Kathrin Schnizer |
| Professor | Prof. Dr. Sven Mayer |
Task
Description
In visualization comprehension research, the majority of work evaluates viewers' performance using accuracy-based measures [1-6]. Users are presented with a predefined task and assessed on whether they can correctly extract the requested information from a visualization. This approach has proven effective for quantifying basic chart-reading skills and has informed the development of widely used literacy assessments such as VLAT [1] and CALVI [3]. However, it captures only one dimension of how people engage with visualized data: whether they arrive at a correct answer.
Interacting with a data visualization is not limited to extracting individual values. Viewers also interpret relationships, trends, and patterns, and, critically, they bring their own knowledge and beliefs to the interaction. Most evaluation studies control for this by using arbitrary or unfamiliar data categories, ensuring that prior knowledge does not confound performance [7, 8]. But in real-world settings, visualizations carry meaning. A chart showing the relationship between vaccination rates and disease incidence, or between income and level of education, is not neutral to the viewer. Users may agree or disagree with the relationship depicted, and this may shape how they process the visualization.
Understanding whether and how agreement with visualized relationships influences viewing behavior has direct practical relevance. Characterizing how viewers react to information that confirms or contradicts their expectations contributes a complementary dimension of visualization comprehension beyond task accuracy. Furthermore, if agreement is reflected in gaze, this opens the long-term possibility of detecting disagreement or misconceptions from viewing behavior, enabling adaptive systems that respond to what users believe. Before any such applications are feasible, however, we need to establish whether agreement is reflected in gaze at all.
Prior work suggests that agreement may be detectable through gaze behavior. Gaze and facial expressions can be used to infer when users disagree with the output of a machine learning system [9], and eye-tracking attention maps differ systematically when viewers arrive at disagreeing interpretations of the same visual stimulus [10]. These findings have not yet been extended to the domain of data visualization.
In this thesis, we investigate whether the strength of a viewer's agreement with the content of a data visualization is reflected in their viewing behavior. To address this, we examine how gaze patterns relate to self-reported agreement when viewers inspect visualizations depicting relationships that align with or contradict their prior beliefs.
Research Phases
- Literature review: Review existing work on gaze behavior in the context of agreement, expectation violation, surprise, and cognitive conflict, focusing on eye-tracking studies in information processing and visualization comprehension. Identify which gaze metrics (e.g., fixation duration, revisits, scanpath patterns) have been linked to belief-congruent vs. belief-incongruent processing in related domains.
- Pilot study (online): Design and run an online survey (minimum 10 participants) to identify real-world data relationships (e.g., education and salary, smoking and life expectancy) for which there is strong population-level consensus on the expected direction. Select a balanced set of congruent and incongruent relationships to serve as the basis for stimulus design.
- Stimulus design: Create scatterplot visualizations depicting the selected relationships. For each relationship, produce both a congruent version (matching the expected direction) and an incongruent version (opposing the expected direction). Control for visual complexity, data legibility, and number of data points across stimuli (see the stimulus-generation sketch after this list).
- Experiment implementation: Implement the laboratory experiment using PsychoPy and EyeLink. Each trial presents a data visualization followed by a continuous agreement rating on a Likert scale, with an opt-out option ("I have no prior belief about this relationship"). Implement trial randomization and counterbalancing (see the trial-loop sketch after this list).
- Data collection: Conduct the laboratory study (minimum 30 participants).
- Data analysis: Preprocess gaze data using an existing feature extraction pipeline. Analyze the relationship between agreement strength (continuous Likert rating) and gaze metrics using regression-based methods (see the analysis sketch after this list).
- Thesis and presentation: Summarize motivation, method, results, and implications in a written thesis and present findings to an audience.
- (Optional) Co-author a research paper based on the study results.
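As a rough illustration of the stimulus-design phase, the sketch below generates a matched congruent/incongruent scatterplot pair from the same underlying data by flipping only the sign of the correlation, while holding the number of points and correlation strength constant. The topic labels, correlation value, figure styling, and file names are illustrative assumptions, not part of the task specification.

```python
# Minimal sketch of matched stimulus generation (assumed labels, r, and paths).
import numpy as np
import matplotlib.pyplot as plt

def make_scatterplot(x_label, y_label, direction, n_points=60, r=0.6,
                     seed=0, out_path="stimulus.png"):
    """Draw a scatterplot whose correlation sign is +1 (congruent with the
    expected direction) or -1 (incongruent), holding n and |r| constant."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_points)
    noise = rng.normal(size=n_points)
    # Mix signal and noise so the sample correlation is approximately direction * r.
    y = direction * (r * x + np.sqrt(1 - r**2) * noise)

    fig, ax = plt.subplots(figsize=(6, 6))
    ax.scatter(x, y, s=25, color="black")
    ax.set_xlabel(x_label)
    ax.set_ylabel(y_label)
    ax.set_xticks([])  # hide numeric ticks; only the depicted direction should carry meaning
    ax.set_yticks([])
    fig.savefig(out_path, dpi=150)
    plt.close(fig)

# Same underlying data, opposite directions -> one congruent/incongruent stimulus pair.
make_scatterplot("Years of education", "Salary", +1, out_path="edu_salary_congruent.png")
make_scatterplot("Years of education", "Salary", -1, out_path="edu_salary_incongruent.png")
```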
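For the experiment-implementation phase, the sketch below shows one possible trial-list construction with counterbalanced congruency and a PsychoPy rating phase. The stimulus naming scheme, the parity-based counterbalancing rule, the 7-point slider, and the "n" key as opt-out ("I have no prior belief about this relationship") are assumptions; EyeLink recording via pylink is only indicated as a placeholder comment.

```python
# Minimal PsychoPy sketch: counterbalanced trial list plus agreement rating.
import random
from psychopy import visual, event, core

# Topics come from the pilot study; this list is illustrative.
TOPICS = ["edu_salary", "smoking_lifeexp"]

def build_trials(participant_id, topics=TOPICS):
    """Assign each topic a congruent or incongruent plot so that versions are
    counterbalanced across participants (here via participant-ID parity)."""
    order = topics[:]
    random.shuffle(order)                        # trial randomization
    trials = []
    for i, topic in enumerate(order):
        version = "congruent" if (i + participant_id) % 2 == 0 else "incongruent"
        trials.append({"topic": topic, "version": version,
                       "image": f"stimuli/{topic}_{version}.png"})
    return trials

win = visual.Window(fullscr=True, color="white", units="pix")
slider = visual.Slider(win, ticks=(1, 7),
                       labels=("strongly disagree", "strongly agree"),
                       granularity=0, size=(800, 40), pos=(0, -300))

for trial in build_trials(participant_id=1):
    stim = visual.ImageStim(win, image=trial["image"])
    slider.reset()
    # ... start EyeLink recording (pylink) and present the plot on its own here ...
    while slider.getRating() is None:            # rating phase
        stim.draw()
        slider.draw()
        win.flip()
        if event.getKeys(keyList=["n"]):         # opt-out: no prior belief
            trial["rating"] = None
            break
    else:
        trial["rating"] = slider.getRating()

win.close()
core.quit()
```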
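The regression-based analysis could, for instance, relate a single gaze metric to the continuous agreement rating in a mixed-effects model with a random intercept per participant. The CSV file and column names below are assumptions about the output of the feature extraction pipeline; the same model can equally be fit in R (e.g., with lme4).

```python
# Minimal sketch of a mixed-effects regression of a gaze metric on agreement.
import pandas as pd
import statsmodels.formula.api as smf

# One row per trial: participant, stimulus, agreement rating, and gaze features
# (assumed column names; produced by the feature extraction pipeline).
df = pd.read_csv("gaze_features.csv")
df = df.dropna(subset=["agreement"])   # drop opt-out trials without a rating

# Random intercept per participant accounts for repeated measures.
model = smf.mixedlm("mean_fixation_duration ~ agreement",
                    data=df, groups=df["participant_id"])
result = model.fit()
print(result.summary())
```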
You Will
- Conduct a literature review on gaze correlates of agreement, expectation violation, and cognitive conflict.
- Design and run an online pilot study to select stimulus topics.
- Create scatterplot stimuli for the laboratory experiment.
- Implement the experiment in PsychoPy with EyeLink eye-tracking integration.
- Run a laboratory study with a minimum of 30 participants.
- Analyze gaze data in relation to agreement strength using regression-based methods.
- Document your work in a thesis and present your findings.
- (Optional) Contribute to co-authoring a research publication.
You Need
- Good written and verbal communication skills in English.
- Solid Python skills for stimulus generation and experiment implementation.
- Basic knowledge of R for statistical analysis.
- Familiarity with eye-tracking is a plus, but not required.
References
- [1] S. Lee, S.-H. Kim, and B. C. Kwon, "VLAT: Development of a Visualization Literacy Assessment Test," IEEE Trans. Vis. Comput. Graph., vol. 23, no. 1, pp. 551–560, Jan. 2017, doi: 10.1109/TVCG.2016.2598920.
- [2] S. Pandey and A. Ottley, "Mini-VLAT: A Short and Effective Measure of Visualization Literacy," Comput. Graph. Forum, vol. 42, no. 3, pp. 1–11, 2023, doi: 10.1111/cgf.14809.
- [3] L. W. Ge, Y. Cui, and M. Kay, "CALVI: Critical Thinking Assessment for Literacy in Visualizations," in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, in CHI '23. New York, NY, USA: Association for Computing Machinery, Apr. 2023, pp. 1–18. doi: 10.1145/3544548.3581406.
- [4] Y. Cui, L. W. Ge, Y. Ding, F. Yang, L. Harrison, and M. Kay, "Adaptive Assessment of Visualization Literacy," Aug. 27, 2023, arXiv: arXiv:2308.14147. Accessed: Aug. 27, 2024. [Online]. Available: http://arxiv.org/abs/2308.14147
- [5] J. Boy, R. A. Rensink, E. Bertini, and J.-D. Fekete, "A Principled Way of Assessing Visualization Literacy," IEEE Trans. Vis. Comput. Graph., vol. 20, no. 12, pp. 1963–1972, Dec. 2014, doi: 10.1109/TVCG.2014.2346984.
- [6] G. J. Quadri, A. Z. Wang, Z. Wang, J. Adorno, P. Rosen, and D. A. Szafir, "Do You See What I See? A Qualitative Study Eliciting High-Level Visualization Comprehension," in Proceedings of the CHI Conference on Human Factors in Computing Systems, in CHI '24. New York, NY, USA: Association for Computing Machinery, May 2024, pp. 1–26. doi: 10.1145/3613904.3642813.
- [7] D. Peebles and N. Ali, "Expert interpretation of bar and line graphs: The role of graphicacy in reducing the effect of graph format," Frontiers in Psychology, vol. 6, p. 1673, 2015.
- [8] E. E. Firat, A. Joshi, and R. S. Laramee, "Interactive visualization literacy: The state-of-the-art," Information Visualization, vol. 21, no. 3, pp. 285–310, 2022.
- [9] O. Bhatti, M. Barz, and D. Sonntag, "Leveraging implicit gaze-based user feedback for interactive machine learning," in German Conference on Artificial Intelligence (Künstliche Intelligenz). Cham: Springer International Publishing, 2022.
- [10] S. Hindennach, L. Shi, and A. Bulling, "Explaining Disagreement in Visual Question Answering Using Eye Tracking," in Proceedings of the 2024 Symposium on Eye Tracking Research and Applications, 2024.
