Toward a Multimodal Assessment of Visualization Literacy: Integrating EEG and Eye Tracking
Bachelor Thesis
Status: open
Advisor: Kathrin Schnizer
Professor: Prof. Dr. Sven Mayer
Task Description
Understanding how users interpret and process data visualizations is crucial for improving visualization literacy assessment. Existing assessment approaches have extensively examined factors such as visualization type [6-8], task complexity [7,8], and data volume [9] in relation to user performance. However, these studies primarily rely on response accuracy, a measure that can be influenced by guessing and thus fails to reliably capture the underlying cognitive processes involved in comprehension.
To address this gap, our research systematically evaluates user behavior during a standardized visualization literacy assessment: the Visualization Literacy Assessment Test (VLAT) [1]. By collecting event-related potentials (ERPs) and eye-tracking data during the test, we aim to identify physiological markers that correlate with performance and detect signs of guessing behavior in multiple-choice assessments. These insights could lead to a deeper understanding of how individuals engage with existing visualization literacy tests, moving beyond binary correctness.
Ultimately, this work aims to augment traditional questionnaire-based assessments with physiological sensing, providing a richer and more reliable evaluation of visualization literacy and contributing more precise, insightful methods for assessing visualization comprehension.
Research Phases
The research consists of the following phases:
- Literature review: Review current work on visualization literacy assessment and physiological data collection in HCI.
- Implementation: Integrate the standardized VLAT into an experimental framework (e.g., PsychoPy), including response logging, confidence ratings, and synchronized eye-tracking and EEG data capture (see the sketch after this list).
- Experimentation: Conduct a user study collecting EEG, eye-tracking, and behavioral data across multiple VLAT items.
- Data analysis: Compile the dataset and perform exploratory analysis focusing on uncertainty and gaze behavior.
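To make the implementation phase concrete, below is a minimal sketch of a single VLAT trial in PsychoPy, using a Lab Streaming Layer (LSL) marker stream so that EEG and eye-tracking recordings can be aligned with stimulus onsets and responses. Stream names, marker strings, stimulus paths, and answer keys are illustrative assumptions, not a prescribed setup.

```python
# Minimal sketch of one VLAT trial with synchronized LSL markers.
# Assumes PsychoPy and pylsl are installed and that the EEG amplifier
# and eye tracker record the same LSL marker stream; all names below
# (stream id, marker strings, stimulus path) are hypothetical.
from psychopy import visual, event, core
from pylsl import StreamInfo, StreamOutlet

# One string-valued, irregular-rate marker channel shared by all
# recorders, giving EEG, gaze, and behavior a common timeline.
outlet = StreamOutlet(StreamInfo('VLATMarkers', 'Markers', 1, 0,
                                 'string', 'vlat-marker-stream'))

win = visual.Window(fullscr=True, color='white', units='height')
clock = core.Clock()

def run_trial(item_id, image_path, answer_keys=('1', '2', '3', '4')):
    """Show one VLAT item, log the keyed response and a confidence rating."""
    visual.ImageStim(win, image=image_path).draw()
    win.flip()
    outlet.push_sample([f'stim_on:{item_id}'])  # epoch anchor for ERP/gaze
    clock.reset()

    # Wait for a multiple-choice response; timeStamped returns (key, RT).
    key, rt = event.waitKeys(keyList=list(answer_keys), timeStamped=clock)[0]
    outlet.push_sample([f'response:{item_id}:{key}'])

    # Post-decision confidence rating (1 = pure guess, 5 = certain),
    # intended to flag likely guessing alongside the physiological data.
    scale = visual.Slider(win, ticks=(1, 2, 3, 4, 5),
                          labels=['guess', '', '', '', 'certain'],
                          granularity=1)
    prompt = visual.TextStim(win, text='How confident are you?',
                             pos=(0, 0.25), color='black')
    while scale.rating is None:  # Slider reads the mouse while drawn
        prompt.draw()
        scale.draw()
        win.flip()
    return {'item': item_id, 'key': key, 'rt': rt, 'confidence': scale.rating}

# Example: result = run_trial('vlat_01', 'stimuli/vlat_01.png')
```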
You Will
- Conduct a literature review on visualization literacy and physiological computing.
- Set up and implement the VLAT test within an experimental environment (PsychoPy).
- Extend the experiment to collect responses, confidence ratings, EEG, and gaze data.
- Design and run a user study with EEG and eye-tracking.
- Compile and structure the resulting dataset for analysis.
- Perform data analysis to uncover patterns in cognitive engagement and comprehension (a possible starting point is sketched after this list).
- Document your work in a thesis and present your findings.
- (Optional) Contribute to co-authoring a research publication.
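For the exploratory analysis, one possible starting point is to epoch the EEG around the stimulus-onset markers and contrast trials by reported confidence, e.g., with MNE-Python. This is a sketch under assumptions: the recording has already been converted to FIF with the LSL markers stored as annotations, and the file name and the confidence_low/confidence_high condition labels are hypothetical.

```python
# Sketch of an exploratory ERP contrast between low- and high-confidence
# trials, assuming the raw EEG was converted to FIF with the LSL markers
# stored as annotations (file name and condition labels are hypothetical).
import mne

raw = mne.io.read_raw_fif('sub-01_vlat_raw.fif', preload=True)
raw.filter(0.1, 30.0)  # broadband ERP filter

# Turn annotation strings (from the marker stream) into an event array.
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)

# Compare ERPs for trials participants rated as guesses vs. confident;
# gaze features (e.g., fixation counts per trial) could be merged in as
# epoch metadata for a joint analysis.
evokeds = {'low confidence': epochs['confidence_low'].average(),
           'high confidence': epochs['confidence_high'].average()}
mne.viz.plot_compare_evokeds(evokeds)
```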
You Need
- Good written and verbal communication skills in English.
- Solid Python skills for experiment implementation and data analysis.
References
- [1] S. Lee, S.-H. Kim, and B. C. Kwon, "VLAT: Development of a Visualization Literacy Assessment Test," IEEE Trans. Vis. Comput. Graph., vol. 23, no. 1, pp. 551–560, Jan. 2017, doi: 10.1109/TVCG.2016.2598920.
- [2] S. Pandey and A. Ottley, "Mini-VLAT: A Short and Effective Measure of Visualization Literacy," Comput. Graph. Forum, vol. 42, no. 3, pp. 1–11, 2023, doi: 10.1111/cgf.14809.
- [3] L. W. Ge, Y. Cui, and M. Kay, "CALVI: Critical Thinking Assessment for Literacy in Visualizations," in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23), New York, NY, USA: Association for Computing Machinery, Apr. 2023, pp. 1–18, doi: 10.1145/3544548.3581406.
- [4] Y. Cui, L. W. Ge, Y. Ding, F. Yang, L. Harrison, and M. Kay, "Adaptive Assessment of Visualization Literacy," arXiv:2308.14147, Aug. 2023. [Online]. Available: http://arxiv.org/abs/2308.14147
- [5] J. Boy, R. A. Rensink, E. Bertini, and J.-D. Fekete, "A Principled Way of Assessing Visualization Literacy," IEEE Trans. Vis. Comput. Graph., vol. 20, no. 12, pp. 1963–1972, Dec. 2014, doi: 10.1109/TVCG.2014.2346984.
- [6] G. J. Quadri, A. Z. Wang, Z. Wang, J. Adorno, P. Rosen, and D. A. Szafir, "Do You See What I See? A Qualitative Study Eliciting High-Level Visualization Comprehension," in Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), New York, NY, USA: Association for Computing Machinery, May 2024, pp. 1–26, doi: 10.1145/3613904.3642813.
- [7] S. Lee, B. C. Kwon, J. Yang, B. C. Lee, and S.-H. Kim, "The Correlation between Users' Cognitive Characteristics and Visualization Literacy," Appl. Sci., vol. 9, no. 3, Art. no. 3, Jan. 2019, doi: 10.3390/app9030488.
- [8] C. Nobre, K. Zhu, E. Mörth, H. Pfister, and J. Beyer, "Reading Between the Pixels: Investigating the Barriers to Visualization Literacy," in Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), New York, NY, USA: Association for Computing Machinery, May 2024, pp. 1–17, doi: 10.1145/3613904.3642760.
- [9] J. Talbot, V. Setlur, and A. Anand, "Four Experiments on the Perception of Bar Charts," IEEE Trans. Vis. Comput. Graph., vol. 20, no. 12, pp. 2152–2160, Dec. 2014, doi: 10.1109/TVCG.2014.2346320.