
Publication Information

Toward a Multimodal Assessment of Visualization Literacy: Integrating EEG and Eye Tracking

Bachelor Thesis

Status open
Advisor Kathrin Schnizer
Professor Prof. Dr. Sven Mayer

Task

Description

Understanding how users interpret and process data visualizations is crucial for improving visualization literacy assessment. Existing assessment approaches have extensively examined factors such as visualization type [6–8], task complexity [7,8], and data volume [9] in relation to user performance. However, these studies primarily rely on response accuracy, a measure that can be influenced by guessing and thus fails to reliably capture the underlying cognitive processes involved in comprehension.

To address this gap, our research systematically evaluates user behavior during a standardized visualization literacy assessment, the Visualization Literacy Assessment Test (VLAT) [1]. By collecting event-related potential (ERP) and eye-tracking data during the test, we aim to identify physiological markers that correlate with performance and to detect signs of guessing behavior in multiple-choice assessments. These insights could lead to a deeper understanding of how individuals engage with existing visualization literacy tests, moving beyond binary correctness.

Ultimately, this work aims to augment traditional questionnaire-based assessments with physiological sensing, providing a richer and more reliable evaluation of visualization literacy and contributing to more precise and insightful methods for assessing visualization comprehension.
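
As a rough illustration of the ERP side of this approach, the sketch below cuts stimulus-locked epochs out of a recorded EEG file with MNE-Python. The file name "sub-01_eeg.fif" and the "item_onset" annotation are hypothetical placeholders; the concrete preprocessing pipeline is part of the thesis work.

    # Sketch: extract stimulus-locked ERP epochs from an EEG recording (MNE-Python).
    # Assumes a hypothetical file "sub-01_eeg.fif" with "item_onset" annotations
    # written at stimulus onset (e.g., derived from synchronization markers).
    import mne

    raw = mne.io.read_raw_fif("sub-01_eeg.fif", preload=True)
    raw.filter(l_freq=0.1, h_freq=30.0)  # basic band-pass before ERP analysis

    # Turn the onset annotations into events and cut epochs around stimulus onset
    events, event_id = mne.events_from_annotations(raw)
    epochs = mne.Epochs(raw, events, event_id=event_id["item_onset"],
                        tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

    evoked = epochs.average()  # average ERP across trials for this participant
    evoked.plot()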

Research Phases

The research consists of the following phases:

  1. Literature review: Review current work on visualization literacy assessment and physiological data collection in HCI.
  2. Implementation: Integrate the standardized VLAT test into an experimental framework (e.g., PsychoPy), including response logging, confidence ratings, and synchronized eye-tracking and EEG data capture; see the trial sketch after this list.
  3. Experimentation: Conduct a user study collecting EEG, eye-tracking, and behavioral data across multiple VLAT items.
  4. Data analysis: Compile the dataset and perform exploratory analysis focusing on uncertainty and gaze behavior.
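
As an illustration of phase 2, below is a minimal sketch of a single VLAT-style trial in PsychoPy. The stimulus image "vlat_item_01.png", the question text, and the marker names are hypothetical placeholders; the sketch logs the chosen option, response time, and a confidence rating, and pushes Lab Streaming Layer (LSL) markers against which EEG and eye-tracking recordings could be synchronized.

    # Minimal sketch of one VLAT-style trial (placeholder item and texts).
    # Assumes the EEG/eye-tracking recorder also records the LSL marker stream.
    from psychopy import visual, event, core
    from pylsl import StreamInfo, StreamOutlet

    # Marker stream used to time-align behavioral events with EEG and gaze data
    outlet = StreamOutlet(StreamInfo("VLATMarkers", "Markers", 1, 0, "string", "vlat_exp"))

    win = visual.Window(size=(1280, 800), color="white", units="norm")
    clock = core.Clock()

    # Present the visualization stimulus and the multiple-choice question
    stim = visual.ImageStim(win, image="vlat_item_01.png", pos=(0, 0.25))
    question = visual.TextStim(win, text="Question text here (press 1-4 to answer)",
                               pos=(0, -0.7), color="black", height=0.06)
    stim.draw()
    question.draw()
    win.flip()
    outlet.push_sample(["item01_onset"])  # onset marker for later ERP epoching
    clock.reset()

    # Log the chosen option and the response time
    key, rt = event.waitKeys(keyList=["1", "2", "3", "4"], timeStamped=clock)[0]
    outlet.push_sample(["item01_response_" + key])

    # Collect a confidence rating (1 = guessing, 7 = certain)
    rating = visual.TextStim(win, text="How confident are you? (1-7)",
                             color="black", height=0.06)
    rating.draw()
    win.flip()
    confidence = event.waitKeys(keyList=[str(i) for i in range(1, 8)])[0]

    print({"item": "item01", "answer": key, "rt": rt, "confidence": confidence})
    win.close()
    core.quit()

In the full experiment, this trial logic would be wrapped in a loop over all VLAT items, with responses written to a log file rather than printed.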

You Will

  • Conduct a literature review on visualization literacy and physiological computing.
  • Set up and implement the VLAT test within an experimental environment (PsychoPy).
  • Extend the experiment to collect responses, confidence ratings, EEG, and gaze data.
  • Design and run a user study with EEG and eye-tracking.
  • Compile and structure the resulting dataset for analysis.
  • Perform data analysis to uncover patterns in cognitive engagement and comprehension; a minimal analysis sketch follows this list.
  • Document your work in a thesis and present your findings.
  • (Optional) Contribute to co-authoring a research publication.
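
To give a flavor of the analysis step, the sketch below assumes two hypothetical per-trial exports, "trials.csv" (columns item, correct, confidence, rt_s) and "fixations.csv" (columns item, duration_ms), and compares confidence and gaze behavior between correct and incorrect answers. Column names and the concrete exploratory questions are placeholders and will depend on the recorded dataset.

    # Exploratory sketch: relate correctness and confidence to gaze behavior.
    # Assumes hypothetical exports "trials.csv" (item, correct, confidence, rt_s)
    # and "fixations.csv" (item, duration_ms) produced from the study logs.
    import pandas as pd

    trials = pd.read_csv("trials.csv")
    fixations = pd.read_csv("fixations.csv")

    # Aggregate gaze features per VLAT item: fixation count and mean fixation duration
    gaze = (fixations.groupby("item")
            .agg(n_fixations=("duration_ms", "size"),
                 mean_fix_ms=("duration_ms", "mean"))
            .reset_index())

    data = trials.merge(gaze, on="item", how="left")

    # Compare correct vs. incorrect responses on confidence, response time, and gaze
    print(data.groupby("correct")[["confidence", "rt_s", "n_fixations", "mean_fix_ms"]].mean())

    # Correct answers given with low confidence are candidates for guessing behavior
    guess_candidates = data[(data["correct"] == 1) & (data["confidence"] <= 2)]
    print(len(guess_candidates), "correct answers were given with low confidence")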

You Need

  • Good written and verbal communication skills in English.
  • Solid Python skills for experiment implementation and data analysis.

References

  • [1] S. Lee, S.-H. Kim, and B. C. Kwon, “VLAT: Development of a Visualization Literacy Assessment Test,” IEEE Trans. Vis. Comput. Graph., vol. 23, no. 1, pp. 551–560, Jan. 2017, doi: 10.1109/TVCG.2016.2598920.
  • [2] S. Pandey and A. Ottley, “Mini-VLAT: A Short and Effective Measure of Visualization Literacy,” Comput. Graph. Forum, vol. 42, no. 3, pp. 1–11, 2023, doi: 10.1111/cgf.14809.
  • [3] L. W. Ge, Y. Cui, and M. Kay, “CALVI: Critical Thinking Assessment for Literacy in Visualizations,” in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, in CHI ’23. New York, NY, USA: Association for Computing Machinery, Apr. 2023, pp. 1–18. doi: 10.1145/3544548.3581406.
  • [4] Y. Cui, L. W. Ge, Y. Ding, F. Yang, L. Harrison, and M. Kay, “Adaptive Assessment of Visualization Literacy,” arXiv preprint arXiv:2308.14147, Aug. 2023. [Online]. Available: http://arxiv.org/abs/2308.14147
  • [5] J. Boy, R. A. Rensink, E. Bertini, and J.-D. Fekete, “A Principled Way of Assessing Visualization Literacy,” IEEE Trans. Vis. Comput. Graph., vol. 20, no. 12, pp. 1963–1972, Dec. 2014, doi: 10.1109/TVCG.2014.2346984.
  • [6] G. J. Quadri, A. Z. Wang, Z. Wang, J. Adorno, P. Rosen, and D. A. Szafir, “Do You See What I See? A Qualitative Study Eliciting High-Level Visualization Comprehension,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, in CHI ’24. New York, NY, USA: Association for Computing Machinery, May 2024, pp. 1–26. doi: 10.1145/3613904.3642813.
  • [7] S. Lee, B. C. Kwon, J. Yang, B. C. Lee, and S.-H. Kim, “The Correlation between Users’ Cognitive Characteristics and Visualization Literacy,” Appl. Sci., vol. 9, no. 3, Art. no. 3, Jan. 2019, doi: 10.3390/app9030488.
  • [8] C. Nobre, K. Zhu, E. Mörth, H. Pfister, and J. Beyer, “Reading Between the Pixels: Investigating the Barriers to Visualization Literacy,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, in CHI ’24. New York, NY, USA: Association for Computing Machinery, May 2024, pp. 1–17. doi: 10.1145/3613904.3642760.
  • [9] J. Talbot, V. Setlur, and A. Anand, “Four Experiments on the Perception of Bar Charts,” IEEE Trans. Vis. Comput. Graph., vol. 20, no. 12, pp. 2152–2160, Dec. 2014, doi: 10.1109/TVCG.2014.2346320.

Keywords

visualization literacy assessment, data visualizations, EEG, eye tracking, dataset