GravitySpot 2.0: Guiding Multiple Users in Front of Public Displays Using On-Screen Visual Cues
Master's thesis
Status | Open |
Advisors | Ville Mäkelä, Dr. Mohamed Khamis |
Task
Description
In this project, the student will extend a previous project called GravitySpot (link; video; talk). GravitySpot is a system that guides users to specific spots in front of large interactive displays by showing on-screen cues that influence their position implicitly. The optimal position depends on the interaction technique: for a display operated via mid-air gestures, it may be 2-3 meters away from the display, whereas for eye-gaze interaction it may be 30-60 cm away from the eye tracking device.
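The core idea above, an on-screen cue that fades out as the user approaches the optimal position, can be sketched as a simple mapping from the user's distance to a cue intensity. The target range and the linear falloff below are illustrative assumptions, not the actual GravitySpot parameters:

```python
# Hypothetical sketch: map a tracked user's distance from the display
# to the intensity of an on-screen cue (e.g. blur), so that the cue
# fully disappears once the user stands in the target range.
def cue_intensity(distance_m: float,
                  target_min_m: float = 2.0,
                  target_max_m: float = 3.0,
                  falloff_m: float = 2.0) -> float:
    """Return a cue intensity in [0, 1]; 0 means the user is in the sweet spot."""
    if target_min_m <= distance_m <= target_max_m:
        return 0.0
    # Deviation from the nearest edge of the target range.
    if distance_m < target_min_m:
        deviation = target_min_m - distance_m
    else:
        deviation = distance_m - target_max_m
    # Scale linearly up to `falloff_m`, then clamp at full intensity.
    return min(deviation / falloff_m, 1.0)
```

A rendering loop would call this each frame and, for instance, use the returned value as a blur radius or overlay opacity.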
In this project, we aim to implement GravitySpot 2.0, which guides multiple users in front of large displays to one or more optimal interaction positions. We attempted to implement GravitySpot 2.0 before, starting with the same approach adopted in the first version, i.e., showing on-screen cues that guide each user in a different way. However, we quickly realized that a prerequisite is to make sure each user knows which on-screen cue they should follow. This led us to investigate how users identify themselves on public displays, and we published this work on the problem (link; video).
Tasks
- Build on the two previous projects to implement and evaluate GravitySpot 2.0, taking into consideration, e.g., the kinds of situations GravitySpot 2.0 should cover
- Review previous work on interaction with public displays
- Conduct user studies and evaluate the resulting data
- Handle depth imaging data from a motion sensor (e.g., Kinect)
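As a rough illustration of the depth-data handling involved, the user's distance can be estimated from a Kinect-style depth frame. The sketch below assumes the frame is a 2-D array of depth values in millimetres with 0 marking invalid pixels, and that the user's approximate pixel position is already known (e.g., from skeleton tracking); these are assumptions for illustration, not the project's required pipeline:

```python
from typing import Optional
import numpy as np

def user_distance_m(depth_frame: np.ndarray, x: int, y: int,
                    window: int = 10) -> Optional[float]:
    """Median depth (in metres) in a window around (x, y), or None if no valid pixels."""
    h, w = depth_frame.shape
    # Crop a small patch around the user's tracked position.
    patch = depth_frame[max(0, y - window):min(h, y + window + 1),
                        max(0, x - window):min(w, x + window + 1)]
    valid = patch[patch > 0]  # drop invalid (zero) readings
    if valid.size == 0:
        return None
    # Median is robust to stray foreground/background pixels in the patch.
    return float(np.median(valid)) / 1000.0  # mm -> m
```

The estimated distance could then feed directly into the cue-selection logic for each tracked user.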
Requirements
- Good programming skills
- Interest in conducting user studies