Publication Details
Xingyu Long, Sven Mayer, Francesco Chiossi
Multimodal Detection of External and Internal Attention in Virtual Reality using EEG and Eye Tracking Features. Proceedings of Mensch und Computer 2024 (MuC'24), Association for Computing Machinery, 2024-09-01.
Future VR environments will sense users' context, enabling a wide range of intelligent interactions and thus supporting diverse applications and improved usability through attention-aware VR systems. However, attention-aware VR systems based on EEG data suffer from long training periods, hindering generalizability and widespread adoption. At the same time, there remains a gap in research regarding which physiological features (EEG and eye tracking) are most effective for decoding attention direction in the VR paradigm. We addressed this issue by evaluating several classification models using EEG and eye tracking data. We recorded the training data simultaneously during tasks that required internal attention (an N-Back task) or external attention allocation (Visual Monitoring). We used linear and deep learning models to compare classification performance under several uni- and multimodal feature sets alongside different window sizes. Our results indicate that multimodal features improve prediction for both classical and modern classification models. We discuss approaches for assessing the importance of physiological features and for achieving automatic, robust, and individualized feature selection.
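To illustrate the kind of comparison the abstract describes, here is a minimal sketch of evaluating uni- versus multimodal feature sets with a linear classifier. It is not the authors' code: the array shapes, feature counts, and the use of logistic regression with 5-fold cross-validation are illustrative assumptions, with random data standing in for the windowed EEG and eye tracking features.

```python
# Hypothetical sketch: compare EEG-only, eye-only, and combined feature sets
# for a binary internal/external attention label. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows = 400                           # assumed number of sliding windows
eeg = rng.normal(size=(n_windows, 32))    # e.g., band-power features per channel
eye = rng.normal(size=(n_windows, 8))     # e.g., fixation/pupil features
y = rng.integers(0, 2, n_windows)         # 0 = external, 1 = internal attention

feature_sets = {
    "EEG only": eeg,
    "eye tracking only": eye,
    "multimodal": np.hstack([eeg, eye]),  # simple feature-level fusion
}

for name, X in feature_sets.items():
    # Mean 5-fold cross-validated accuracy for each feature set
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```

On real recordings, repeating this loop per window size and per participant would mirror the paper's comparison of feature sets and window lengths; the deep learning models mentioned in the abstract would replace the linear classifier in the same loop.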