
Open Thesis Topics

  • Information on Finding a Topic
  • List of Open Topics
  • Further Topics

Information on Finding a Topic

You can find information in the FAQs.

List of Open Topics

This page lists open Bachelor's, Master's, and project thesis topics offered by our staff. The beginning of each row indicates which type of thesis the topic is suitable for. Click on a topic for further information.


Type Advisor Title
MT/BT/PT Prof. Dr. Florian Alt, Doruntina Murtezaj, Verena Winterhalter, Oliver Hein, Felix Dietz, Viktorija Paneva, Sarah Delgado Rodriguez, Lukas Mecke
Abschlussarbeiten im Bereich Human-Centered Security and Privacy

Below you will find focus areas in the research field "Human-Centered Security and Privacy" for which we offer Bachelor's and Master's theses. For a specific topic and any questions about these focus areas, please contact the relevant person.

Public Security User Interfaces

The rapid development of digital technologies and the growing threat of cyberattacks have led to an increasing need for innovative security solutions in public spaces. So-called Public Security User Interfaces are one example of user interfaces that can improve security behavior. These are interfaces positioned in shared, non-personal areas that offer information or interactions on security-related topics. Such interfaces play an important role in providing security information, improving situational awareness, and promoting secure behavior. The main goal of this research is to investigate the design, implementation, and impact of user interfaces that enhance security behavior, in order to facilitate the transition from cybersecurity awareness to habitual secure behavior.

The theses in this area deal with topics such as:

  • Behavior analysis of user interaction with Public Security User Interfaces
  • Personalization strategies to support secure behavior
  • Selection of content and dynamic adaptation to the target group and contextual factors

Recommended knowledge and interests

  • Knowledge in human-centered design
  • Experience in conducting user studies
  • Interest in conducting a thorough literature review
  • Independent thinking and creative problem solving
  • Optional: Interest in public display research

Contact

Interested students are asked to submit their CV, academic transcript, and intended start date.

Doruntina Murtezaj

Social Engineering

Cybercrime currently causes global economic losses amounting to several trillion euros. According to expert analyses, up to 90% of these damages result directly or indirectly from attacks in which the human element is at the center. Attackers exploit authority, fear, curiosity, or helpfulness with the goal of manipulating their victims into revealing sensitive data. Examples include phone calls to obtain user login credentials, emails containing malware attachments to gain access to protected networks, or deepfakes used to impersonate someone else.

Theses in this area address a variety of questions:

  • How do people behave during social engineering attacks?
  • How can social engineering attacks be detected?
  • Which contextual factors facilitate social engineering attacks?
  • How can user interfaces be developed to protect against social engineering attacks?

Recommended knowledge and interests

  • Interest in human-centered attacks
  • Knowledge of qualitative and/or quantitative research methods
  • Interest in conducting a thorough literature review
  • Independent thinking and creative problem solving

Contact

Interested students are asked to submit their CV, academic transcript, and intended start date.

Felix Dietz

Security and Privacy in Mixed Reality

Mixed reality devices are quickly finding their way into users’ daily lives, particularly in the form of head-mounted displays. Users can immerse themselves in virtual worlds or enrich the virtual world with physical content, supporting a wide range of applications in the areas of entertainment, work, education, and well-being. While these technologies support an ever-increasing number of features in the aforementioned areas, they also present challenges and create opportunities for security and privacy.

Theses in this area essentially deal with topics in the context of two general questions: (1) How can mixed reality solve existing challenges in terms of privacy and security? (2) What challenges in terms of privacy and security arise in the context of mixed reality, and how can these be addressed?

Recommended knowledge and interests

  • Interest in VR/AR technology
  • Knowledge of qualitative and/or quantitative research methods
  • Interest in conducting a thorough literature review
  • Willingness to learn, e.g., Unity

Readings | Literature

  • Ethics Emerging: the Story of Privacy and Security Perceptions in Virtual Reality
    https://www.usenix.org/system/files/conference/soups2018/soups2018-adams.pdf
  • Exploring the Unprecedented Privacy Risks of the Metaverse
    https://arxiv.org/pdf/2207.13176.pdf

Contact

Interested students are asked to submit their CV, academic transcript, and intended start date.

Verena Winterhalter

Viktorija Paneva

On-Body Security and Privacy Interfaces

The rapid integration of wearable sensors and head-mounted displays (HMDs) makes on-body computing increasingly relevant for security and privacy research. In this area, we focus on biometric authentication, privacy-preserving wearables, physiological sensing, and secure interaction paradigms for augmented reality (AR) and virtual reality (VR). Possible topics include the development of novel authentication methods for wearable devices, privacy-preserving approaches to continuous physiological monitoring, secure interaction concepts in AR and VR environments, and adaptive security/privacy mechanisms that enhance user trust and system reliability. By addressing current challenges and future opportunities, we aim to develop resilient, privacy-conscious, and user-friendly on-body systems that prioritize both security and seamless interaction experiences.

Recommended knowledge and interests

  • Interest in wearables / hardware prototyping
  • Knowledge of qualitative and/or quantitative research methods
  • Interest in conducting a thorough literature review
  • Willingness to learn (e.g., Unity)

Contact

Interested students are asked to submit their CV, academic transcript, and intended start date.

Oliver Hein

Tangible Security and Privacy User Interfaces

In the age of ubiquitous computing, users' IT security and privacy are at risk almost all the time. IT security and privacy assistants help users become aware of these risks and take appropriate measures to protect their data. However, these systems are often too complex, unintuitive, and not visually appealing. To enable even less technologically savvy or inexperienced individuals to use IT security and privacy assistants, such mechanisms must become tangible, i.e., physically manipulable and touchable by humans.

Recommended knowledge and interests

  • Interest in Usable Security
  • Knowledge in the field of Human-Computer Interaction and qualitative and/or quantitative research methods
  • Independent thinking and creative problem solving
  • For some projects: Interest in Fabrication (e.g., 3D modeling/printing, electronics, soldering)

Readings | Literature

  • Take Your Security and Privacy Into Your Own Hands! Why Security and Privacy Assistants Should be Tangible https://dl.gi.de/handle/20.500.12116/37360
  • Making Privacy Graspable: Can we Nudge Users to use Privacy Enhancing Techniques? https://arxiv.org/abs/1911.07701
  • Privacy Itch and Scratch: On Body Privacy Warnings and Controls https://dl.acm.org/doi/10.1145/2851581.2892475
  • Privacy Care: A Tangible Interaction Framework for Privacy Management https://dl.acm.org/doi/10.1145/3430506

Contact

Interested students are asked to submit their CV, academic transcript, and intended start date.

Sarah Delgado Rodriguez

Behavioral Biometrics

The use of biometric mechanisms, i.e., authentication based on unique features of a user's physiology or behavior, is a convenient and fast alternative to classical token- or knowledge-based authentication. Popular examples include fingerprint recognition, facial recognition, and typing-behavior biometrics. However, these systems typically rely on machine learning algorithms, making their decisions both difficult for the user to comprehend and subject to manipulation.

In this research area, we investigate novel approaches that enable users to understand and influence the results of biometric (black-box) systems, and develop new approaches with a focus on the user.
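
To make this concrete, here is a minimal sketch of the classic template-matching baseline for one behavioral biometric, typing dynamics: enrollment builds a per-user statistical template from inter-key timings, and authentication compares a new attempt against it. All names, the feature choice, and the threshold are illustrative assumptions, not part of a specific thesis topic.

    import numpy as np

    def enroll(timing_samples):
        """Build a per-user template from inter-key timing vectors (ms)."""
        data = np.vstack(timing_samples)            # (n_samples, n_features)
        return {"mean": data.mean(axis=0), "std": data.std(axis=0) + 1e-6}

    def distance(template, attempt):
        """Mean absolute z-score; lower means the attempt fits the template."""
        return float(np.mean(np.abs((attempt - template["mean"]) / template["std"])))

    def authenticate(template, attempt, threshold=1.5):
        return distance(template, attempt) < threshold

    # Hypothetical enrollment: five typings of the same passphrase, each
    # reduced to eight inter-key intervals in milliseconds.
    rng = np.random.default_rng(0)
    samples = [180 + 20 * rng.standard_normal(8) for _ in range(5)]
    template = enroll(samples)
    print(authenticate(template, samples[0]))          # genuine attempt
    print(authenticate(template, samples[0] + 120.0))  # impostor-like attempt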

The following questions are particularly interesting:

  • How can users explore and understand influences on the decision-making process of biometric systems?
  • How can user interfaces for biometric systems be designed to more clearly communicate the robustness and accuracy of predictions?
  • How can users influence how they are recognized, i.e., by changing their behavior?
  • How can users be encouraged to exhibit more distinctive behavior?
  • How can biometric authentication be embedded in natural interaction?

Concrete research approaches include, among others, investigating (real) user behavior (e.g., through observations, interviews, surveys) and designing, implementing, and evaluating novel security and privacy concepts.

Recommended knowledge and interests

  • General interest in biometrics, authentication, and machine learning
  • Knowledge of qualitative and/or quantitative research methods
  • Solid programming skills (e.g., Python or Android)

Readings | Literature

  • Comparing passwords, tokens, and biometrics for user authentication http://www.nikacp.com/images/10.1.1.200.3888.pdf
  • An introduction to biometric recognition https://www.cse.msu.edu/~rossarun/pubs/RossBioIntro_CSVT2004.pdf
  • Touch me once and I know it’s you! Implicit Authentication based on Touch Screen Patterns https://www.medien.ifi.lmu.de/pubdb/publications/pub/deluca2012chi/deluca2012chi.pdf

Example Thesis

Reauthentication Concepts for Biometric Authentication Systems on Mobile Devices

Contact

Interested students are asked to submit their CV, academic transcript, and intended start date.

Lukas Mecke


BT/MT Florian Bemmann, Doruntina Murtezaj
Public AI Displays for Society

Please check the PDF announcement on my webspace: Thesis MA(/BA): Public AI Displays for Society
14.04.2025


BT/MT Francesco Chiossi
Design of a Virtual Reality Adaptive System based on Electrodermal Activity phasic components

Description

Electrodermal activity (EDA) denotes the measurement of continuous changes in the electrical conductance of the skin in response to sweat secretion by the sweat glands. EDA is modulated by sympathetic nervous system (SNS) activity, a component of the autonomic nervous system (ANS) that is involved in the control of involuntary bodily functions as well as cognitive and emotional states. Specifically, phasic EDA activity correlates with stress, cognitive load, and attention orienting, so measuring phasic EDA responses can give us information about the user's state. In this thesis project, we want to develop an adaptive system that modifies the visual complexity of the VR environment based on changes in phasic EDA. Specifically, we want to use new signal-processing methodologies termed adaptive thresholding and Gaussian filtering. The research consists of three main stages: (1) validation of the psychophysiological inference underpinning the adaptive system, (2) implementation of a working VR prototype, and (3) an evaluation of the adaptive environment.
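
The signal-processing idea can be sketched in a few lines of Python (a simplification under assumed parameters, not the pipeline from the references): Gaussian filters separate a fast and a slow component of the signal, and a rolling mean-plus-k-standard-deviations rule provides the adaptive threshold for response detection.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def detect_scrs(eda, fs, win_s=10.0, k=2.0):
        """Detect skin conductance responses (SCRs) in an EDA signal.

        Approximates the phasic component as a Gaussian-smoothed signal
        minus a much slower tonic estimate, then flags samples whose first
        difference exceeds a rolling mean + k * std (adaptive threshold)."""
        smooth = gaussian_filter1d(eda, sigma=0.2 * fs)   # ~200 ms smoothing
        tonic = gaussian_filter1d(eda, sigma=4.0 * fs)    # slow-drift estimate
        phasic = smooth - tonic
        slope = np.diff(phasic, prepend=phasic[0]) * fs   # µS per second
        win = int(win_s * fs)
        onsets = []
        for i in range(win, len(slope)):
            ref = slope[i - win:i]
            if slope[i] > ref.mean() + k * ref.std():     # adaptive threshold
                onsets.append(i / fs)                     # onset time in s
        return phasic, onsets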

You will

  • Perform a literature review
  • Modify an existing VR environment
  • Implement a preprocessing pipeline for phasic EDA detection
  • Collect and analyze electroencephalographic (EEG), electrodermal activity (EDA) and electrocardiography (ECG) data
  • Summarize your findings in a thesis and present them to an audience
  • (Optional) Co-write a research paper

You need

  • Strong communication skills in English
  • Good knowledge of Unity
  • Good knowledge of Python libraries for scientific computing (e.g., SciPy, MNE)

References

  • Fairclough, S. H. (2009). Fundamentals of physiological computing. Interacting with computers, 21(1-2), 133-145.
  • Chiossi, F., Welsch, R., Villa, S., Chuang, L., & Mayer, S. (2022). Virtual Reality Adaptation Using Electrodermal Activity to Support the User Experience. Big Data and Cognitive Computing, 6(2), 55.
  • Babaei, E., Tag, B., Dingler, T., & Velloso, E. (2021, May). A critique of electrodermal activity practices at chi. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-14).
  • Kleckner, I., Wormwood, J. B., Jones, R. M., Siegel, E., Culakova, E., Heathers, J., ... & Goodwin, M. (2021). Adaptive thresholding increases ability to detect changes in rate of skin conductance responses to psychologically arousing stimuli.

BT/MT Francesco Chiossi
Physiologically adaptive MR Blending

Description

Mixed reality (MR) systems refer to the entire broad spectrum that ranges from physical to virtual reality (VR). It includes instances that overlay virtual content on physical information, i.e., Augmented Reality (AR), and those that rely on physical content to increase the realism of virtual environments, i.e., Augmented Virtuality (AV). Such instances tend to be pre-defined in their blend of physical and virtual content. To what extent can MR systems rely on physiological inputs to infer user state and expectations and, in doing so, adapt their visualization in response? Measurement sensors for eye and body motion, autonomic arousal (e.g., respiration, electrodermal and heart activity), and cortical activity (e.g., EEG, fNIRS) are widely used in psychological and neuroscience research to infer hidden user states, such as stress, overt/covert attention, and working memory load. However, it is unclear whether such inferences can serve as useful real-time inputs for controlling the presentation parameters of MR environments. In this thesis project, we will investigate whether this blend can be adaptive to user states inferred from physiological measurements derived from gaze behavior, peripheral physiology (e.g., electrodermal activity (EDA), electrocardiography (ECG)), and cortical activity (i.e., electroencephalography (EEG)). In other words, we will investigate the viability and usefulness of MR use scenarios that vary their blend of virtual and physical content according to user physiology. In particular, we will focus on understanding how physiological readings can passively determine the appropriate amount of visual information to present within an MR system.

You will

  • Perform a literature review
  • Modify an MR environment
  • Adapt an existing processing pipeline for EEG and EDA data
  • Collect and analyze electroencephalographic (EEG), electrodermal activity (EDA), and electrocardiography (ECG) data
  • Summarize your findings in a thesis and present them to an audience
  • (Optional) Co-write a research paper

You need

  • Strong communication skills in English
  • Good knowledge of Unity
  • Good knowledge of Python libraries for scientific computing (e.g., SciPy, NeuroKit)

References

  • Lotte, F., Faller, J., Guger, C., Renard, Y., Pfurtscheller, G., Lecuyer, A., & Leeb, R. (2012). Combining BCI with virtual reality: towards new applications and improved BCI. In Towards practical brain-computer interfaces (pp. 197-220). Springer, Berlin, Heidelberg.
  • McGill, M., Boland, D., Murray-Smith, R., & Brewster, S. (2015, April). A dose of reality: Overcoming usability challenges in vr head-mounted displays. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 2143-2152).
  • Fairclough, S. H. (2009). Fundamentals of physiological computing. Interacting with computers, 21(1-2), 133-145.

BT/MT Francesco Chiossi
Evaluation of an Adaptive VR environment that Uses EEG Measures as Inputs to a Biocybernetic Loop

Description

Biocybernetic adaptation is a form of physiological computing where real-time physiological data from the brain and the body can be used as an input to adapt the user interface. In this way, from the physiological data, we can infer the user's state and design implicit interactions in VR to change the scene to support certain goals. This thesis aims to develop and evaluate an adaptive VR environment designed to maximize users' performance by exploiting changes in real-time electroencephalography (EEG) to adjust the level of visual complexity. The research consists of three main stages: (1) validation of the input EEG measures underpinning the loop; (2) implementation of a working VR prototype; and (3) an evaluation of the adaptive environment. Specifically, we aim to demonstrate the sensitivity of EEG power in the (frontal) theta and (parietal) alpha bands for adapting levels of visual complexity.
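
As a rough illustration of such a loop (not the project's actual pipeline), the sketch below estimates frontal theta and parietal alpha power with Welch's method and maps them to a discrete visual-complexity level; the band thresholds are placeholders that would come from a per-user calibration.

    import numpy as np
    from scipy.signal import welch

    def band_power(signal, fs, lo, hi):
        """Average spectral power in [lo, hi] Hz (Welch's method)."""
        freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
        mask = (freqs >= lo) & (freqs <= hi)
        return float(np.trapz(psd[mask], freqs[mask]))

    def next_complexity_level(frontal, parietal, fs, level, theta_hi, alpha_hi):
        """One loop step: simplify the scene under apparent overload, add
        complexity under apparent underload (illustrative rule only)."""
        theta = band_power(frontal, fs, 4.0, 8.0)    # frontal theta ~ load
        alpha = band_power(parietal, fs, 8.0, 13.0)  # parietal alpha ~ idling
        if theta > theta_hi:
            return max(level - 1, 0)                 # overloaded: reduce detail
        if alpha > alpha_hi:
            return level + 1                         # underloaded: add detail
        return level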

You will

  • Perform a literature review
  • Modify an existing VR environment
  • Implement an online biocybernetic loop using EEG
  • Collect and analyze EEG, EDA, and ECG data
  • Summarize your findings in a thesis and present them to an audience
  • (Optional) Co-write a research paper

You need

  • Strong communication skills in English
  • Good knowledge of Unity and/or C#
  • Good knowledge of Python libraries for scientific computing (e.g., SciPy, MNE)

References

  • Biocybernetics Loops and Physiological Computing
  • Development of an Adaptive Game using EEG frequencies

BT/MT Francesco Chiossi
Modulating distraction by adapting the perceptual load: implementation of a biocybernetic loop to support performance and prevent distraction

Description

Research from cognitive science using computerized displays of simple stimuli has shown that perceptual load is a critical factor for modulating distraction. Perceptual load is the amount of information involved in processing task stimuli. According to Lavie (1995), our attentional resources are limited and mainly directed towards task-relevant goals, but we may be more prone to distraction if we have spare cognitive resources. Previous research showed that human faces have greater distracting power than non-face objects. This project aims to assess the potential distracting effect of human avatars in a social VR scenario. We aim to transfer traditional paradigms that assess attention and distraction to immersive VR. Lastly, we adapt the target-distractor recognizability to evaluate whether a physiologically adaptive system that optimizes for perceptual load can support task performance. The research consists of three main stages: (1) validation of the psychophysiological inference underpinning the physiological loop, (2) implementation of a working VR prototype, and (3) an evaluation of the adaptive environment.

You will

  • Perform a literature review
  • Modify an existing VR environment
  • Implement an online biocybernetic loop using EEG and/or EDA
  • Collect and analyze electroencephalography (EEG), electrodermal activity (EDA), and electrocardiography (ECG) data
  • Summarize your findings in a thesis and present them to an audience
  • (Optional) Co-write a research paper

You need

  • Strong communication skills in English
  • Good knowledge of Unity and/or C#
  • Good knowledge of Python libraries for scientific computing (e.g., SciPy, NeuroKit, MNE)

References

  • Perceptual Load of faces
  • Perceptual load and task engagement
  • Evaluating perceptual load in VR

BT/MT Francesco Chiossi
Design of a physiological loop situated in a Social VR scenario to support task performance and user experience

Description

Physiological computing is a multidisciplinary research field in HCI wherein the interaction depends on measuring and responding to the user's physiological activity in real-time (Fairclough, 2009). Physiological computing allows for implicit interaction: by monitoring the physiological signals of the user, the computer can infer, e.g., whether the task demands are too challenging or too easy and adapt the difficulty level accordingly, or notice when users are getting distracted from the task and give them a notification. Measuring the psychological state of the user creates intriguing possibilities for Social VR scenarios, as we can adapt the number of displayed avatars, their form, or even their proxemic distance. This thesis aims to develop an adaptive Social VR environment designed to support users' performance when engaged in a cognitive task, using a measure of physiological state (electrodermal activity, EDA) as input for adaptation. The research consists of three main stages: (1) validation of the psychophysiological inference underpinning the physiological loop, (2) implementation of a working VR prototype, and (3) an evaluation of the adaptive environment.

You will

  • Perform a literature review
  • Modify an existing VR environment
  • Implement an online biocybernetic loop using EDA
  • Collect and analyze EEG, electrodermal activity (EDA) and electrocardiography (ECG) data
  • Summarize your findings in a thesis and present them to an audience
  • (Optional) Co-write a research paper

You need

  • Strong communication skills in English
  • Good knowledge of Unity
  • Good knowledge of Python libraries for scientific computing (e.g., SciPy, NeuroKit)

References

  • Biocybernetics Loops and Physiological Computing
  • Adapting task complexity of a Social VR environment based on skin conductance

BT/MT Francesco Chiossi, Abdallah El Ali
Designing and Evaluating Mixed Reality Transition Visualizations

Description

Prior work has explored transition visualizations between VR environments, or specific interaction techniques for transferring objects between VR and AR views. However, less attention has been paid to which transitions across the reality-virtuality continuum are most effective. The focus of this work is to (a) identify suitable MR transitions, (b) create a mapping to common tasks where such transitions may be applicable (e.g., keyboard typing), (c) prototype different transitions, from R -> AR -> AV -> VR and vice versa (VR -> AV -> AR -> R), and empirically investigate different parameters of each, and (d) run a user evaluation to assess perceived UX, comfort, sickness, etc. This project extends the work in "Keep it simple? Evaluation of Transitions in Virtual Reality" by exploring MR transitions instead of only transitions across different VR environments. Evaluation metrics will involve both objective and subjective measures.

RQ1: What are the most effective methods for transitioning users across the reality-virtuality spectrum?

RQ2: How do these transition visualizations influence user experience, user physiological state, workload, and acceptance across tasks?

You will

  • Perform a literature review
  • Develop an MR environment
  • Implement a preprocessing pipeline for phasic EDA detection
  • Collect and analyze electroencephalographic (EEG), electrodermal activity (EDA) and electrocardiography (ECG) data
  • Summarize your findings in a thesis and present them to an audience
  • (Optional) Co-write a research paper

You need

  • Strong communication skills in English
  • Good knowledge of Unity
  • Good knowledge of Python libraries for scientific computing (e.g., SciPy, MNE)
  • Knowledge of physiological sensing

References

  • Nico Feld, Pauline Bimberg, Benjamin Weyers, and Daniel Zielasko. 2023. Keep it simple? Evaluation of Transitions in Virtual Reality. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA '23). Association for Computing Machinery, New York, NY, USA, Article 196, 1-7. https://doi.org/10.1145/3544549.3585811
  • Dimitar Valkov and Steffen Flagge. 2017. Smooth immersion: the benefits of making the transition to virtual environments a continuous process. In Proceedings of the 5th Symposium on Spatial User Interaction (SUI '17). Association for Computing Machinery, New York, NY, USA, 12-19. https://doi.org/10.1145/3131277.3132183
  • Han, Jihae, Robbe Cools, and Adalberto L. Simeone. "The Body in Cross-Reality: A Framework for Selective Augmented Reality Visualisation of Virtual Objects." In XR@ ISS. 2020. https://ceur-ws.org/Vol-2779/paper6.pdf
  • Cools, R., Esteves, A., & Simeone, A. L. (2022, October). Blending spaces: Cross-reality interaction techniques for object transitions between distinct virtual and augmented realities. In 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 528-537). IEEE.
  • Pointecker, F., Friedl, J., Schwajda, D., Jetter, H.C. and Anthes, C., 2022, October. Bridging the gap across realities: Visual transitions between virtual and augmented reality. In 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 827-836). IEEE.
  • Uwe Gruenefeld, Jonas Auda, Florian Mathis, Stefan Schneegass, Mohamed Khamis, Jan Gugenheimer, and Sven Mayer. 2022. VRception: Rapid Prototyping of Cross-Reality Systems in Virtual Reality. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 611, 1-15. https://doi.org/10.1145/3491102.3501821

MT Sarah Delgado Rodriguez
Offline QKD II - Perceived vs. "Real" Security

Imagine Bob's office is connected via a (quantum-)encrypted connection to a server. How could Bob access this server from his home office if he does not have the necessary hardware at home? Well, he could get keys in his office and save them on his personal key-safe token. He could subsequently use the token at home and connect to the server.

The topic of offline distribution of cryptographic keys is interesting for researchers and practitioners alike, even outside the QKD context. Your thesis would revolve around the evaluation of already existing consumer devices that could be used to store and transport QKD keys (or symmetric cryptographic keys in general).
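
To make the scenario concrete, here is a minimal sketch of the classical-crypto half of the workflow, independent of how the key was generated: a symmetric key obtained in the office is stored on a token and later used at home for authenticated encryption. It uses the Python `cryptography` package; the token path and key format are assumptions.

    from os import urandom
    from pathlib import Path
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    TOKEN = Path("/media/token/qkd_key.bin")   # assumed mount point of the token

    def store_key_on_token(key: bytes) -> None:
        """Office step: persist a (QKD-derived) 256-bit symmetric key."""
        TOKEN.write_bytes(key)

    def encrypt_at_home(plaintext: bytes) -> bytes:
        """Home step: load the transported key, return nonce || ciphertext."""
        key = TOKEN.read_bytes()
        nonce = urandom(12)                    # 96-bit nonce, unique per message
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    # store_key_on_token(urandom(32))                 # in the office
    # blob = encrypt_at_home(b"data for the server")  # in the home office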


BT/MT Jesse Grootjen
Adaptive RSVP System Based on Pupil Dilation

Description

Project Overview
This thesis project presents a unique opportunity for students to contribute to innovative research on adaptive RSVP (Rapid Serial Visual Presentation) systems, following up on [1]. The focus is on developing an intelligent RSVP system that adapts to user attention and cognitive load by leveraging pupil dilation data. Pupil dilation has been shown to correlate with cognitive processing, providing valuable insight into the user's mental state while reading or processing visual stimuli. By incorporating this biometric feedback into the RSVP system, the project aims to create a more intuitive and personalized reading experience, especially for users with attention challenges or disabilities.

Project Motivation

Traditional RSVP systems often rely on fixed speeds or manual adjustments, which may not suit every user's cognitive capacity. This project seeks to enhance user engagement and efficiency by using real-time pupil dilation data to adjust the speed and presentation style dynamically. By doing so, the RSVP system can become more responsive to individual reading habits, reducing cognitive overload and improving comprehension and retention of information. This work has important implications for accessibility, enabling better interaction for users with reading difficulties or neurological impairments.
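
One way to picture the core control loop (a simplification, not the system design the thesis would produce): treat baseline-normalized pupil dilation as a crude load proxy and nudge the words-per-minute rate accordingly. All thresholds and step sizes below are illustrative and would need per-user calibration.

    import numpy as np

    def next_wpm(pupil_mm, baseline_mm, wpm, lo=0.05, hi=0.15, step=20.0):
        """One control step for the RSVP stream. `pupil_mm` is a short
        window of pupil-diameter samples; relative dilation over a resting
        baseline is used as a cognitive-load proxy."""
        dilation = (np.median(pupil_mm) - baseline_mm) / baseline_mm
        if dilation > hi:
            wpm -= step          # high load: slow down the word stream
        elif dilation < lo:
            wpm += step          # low load: the reader can go faster
        return float(np.clip(wpm, 100.0, 700.0))

    # e.g., next_wpm(window, baseline_mm=3.2, wpm=300.0)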

Project Goals

This thesis will explore the development and evaluation of an adaptive RSVP system, with a focus on the following key objectives:

  • Experiment Design: Participants will engage with an RSVP system where reading speeds are adjusted in real-time based on pupil dilation data, providing insights into the correlation between cognitive load and visual presentation speed.
  • Model Development: Develop a model that interprets pupil dilation changes and optimizes the RSVP presentation in response to varying cognitive loads, ensuring that reading pace and comprehension are maximized.

You will

  • Conduct a literature review on pupil dilation and cognitive load in relation to visual stimuli
  • Develop or modify an RSVP system to integrate real-time pupil tracking data
  • Implement a preprocessing pipeline to analyze pupil dilation data during RSVP tasks
  • Collect and analyze data, focusing on how pupil dilation correlates with reading performance and user engagement
  • Summarize findings in a thesis and present them to an audience
  • (Optional) Co-write a research paper based on the results

You need

  • Strong communication skills in English
  • Experience with eye-tracking technologies and software
  • Basic knowledge of machine learning for modeling data (e.g., Python, TensorFlow)

References

  • [1] Grootjen, J., Thalhammer, P., & Kosch, T. (2024). Your eyes on speed: Using pupil dilation to adaptively select speed-reading parameters in virtual reality. In Proceedings of the 26th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '24). ACM. https://doi.org/10.1145/3676531

BT/MT Jesse Grootjen, Prof. Dr. Sven Mayer
Investigating Gaze Estimation Accuracy in Collaborative Virtual Environments (CVEs)

Description

Project Overview

This thesis project offers an exciting opportunity for students to contribute to cutting-edge research on gaze estimation in interactive systems. The focus is on enhancing the accuracy of gaze interpretation within Collaborative Virtual Environments (CVEs), where effective communication often depends on understanding where participants are looking. Gaze serves as a vital non-verbal communication cue, yet people frequently struggle to accurately determine another person's gaze direction (i.e., where someone is looking), especially over distances.

Project Motivation

In CVEs, precise gaze estimation is crucial for natural and effective interaction. While previous research has explored distant pointing as an interaction mechanism, this project shifts the focus to gaze estimation. By addressing common inaccuracies in gaze prediction, this research aims to significantly improve how users interpret each other's gaze during virtual interactions, ultimately enhancing the overall immersive experience.

Project Goals

This thesis will investigate how accurately gaze estimation can be performed in CVEs, focusing on two main aspects:
  • Gaze Estimation Experiments: Participants will perform gaze tasks directed at targets on a screen from two different distances. The data collected will help evaluate the performance of current gaze estimation methods in these scenarios.
  • Model Development: Using the insights from distant pointing research, the project aims to develop a mathematical model to correct (potential) systematic displacements in gaze estimation (a sketch of this idea follows below).
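
As a sketch of what such a correction model could look like (one plausible approach, not the prescribed one): a quadratic least-squares mapping from raw gaze estimates to known target positions.

    import numpy as np

    def features(points):
        """Quadratic expansion of raw gaze estimates, shape (n, 2) -> (n, 6)."""
        x, y = points[:, 0], points[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    def fit_correction(estimated, targets):
        """Least-squares model mapping gaze estimates to true target positions."""
        coef, *_ = np.linalg.lstsq(features(estimated), targets, rcond=None)
        return coef                               # shape (6, 2)

    def correct(estimated, coef):
        return features(estimated) @ coef

    # Hypothetical calibration: `estimated` are raw gaze points recorded while
    # participants fixate known targets `targets`; the fitted model is then
    # applied to later trials to remove the systematic displacement.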

You will

  • Perform a literature review
  • Modify an existing VR environment
  • Implement a preprocessing pipeline for eye-tracking data
  • Collect and analyze eye-tracking data, focusing on developing a model to correct potential systematic displacement in gaze estimation
  • Summarize your findings in a thesis and present them to an audience
  • (Optional) Co-write a research paper

You need

  • Strong communication skills in English
  • Good knowledge of Unity

References

  • [1] Schweigert, R., Schwind, V., & Mayer, S. (2019). EyePointing: A gaze-based selection technique. In Proceedings of Mensch und Computer 2019. ACM. https://doi.org/10.1145/3340764.3344897
  • [2] Mayer, S., Schwind, V., Schweigert, R., & Henze, N. (2018). The effect of offset correction and cursor on mid-air pointing in real and virtual environments. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 653:1–653:13). ACM. https://doi.org/10.1145/3173574.3174227
  • [3] Mayer, S., Wolf, K., Schneegass, S., & Henze, N. (2015). Modeling distant pointing for compensating systematic displacements. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 4165–4168). ACM. https://doi.org/10.1145/2702123.2702332

MT Dominik Hirschberg
Design of a Study-Tool to Collect Context-Based Probes

Background

Probes are a powerful and widely used method in Human-Computer Interaction (HCI) and related disciplines such as social sciences. They enable researchers to gather rich, contextual insights into participants’ lived experiences, uncovering underlying needs, emotions, intentions, and desires. By prompting participants to repeatedly reflect on and document specific aspects of their daily lives — often in the form of diary entries — researchers can gain a nuanced, reliable understanding of what truly matters to participants. This depth of insight is essential for the design of applications that are not only user-centered but also contextually relevant and personally meaningful.

Probes are meant to be easy and engaging for participants while at the same time supporting multiple data types, such as written text, photos, log data, drawings, audio or video recordings, and individual designs. With the growing importance of context-adaptive applications, there is an increasing need to investigate well-defined situations based on human and environmental context factors (e.g., location, activity, intentions, cognitive states, and emotions). Existing tools often lack the ability to effectively prompt participants to respond within these specific contexts.

Thesis Goal

The objective of this thesis is to design and develop a mobile application that functions as an innovative and engaging probe tool. It should support the collection of diverse data formats, including photos, videos, audio, and creative user inputs such as sketches or prototypes. Additionally, the application should utilize contextual triggers (e.g., location data, physiological data from smartwatches) to send timely notifications prompting users to provide input.
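
The trigger logic itself can be pictured as a small rule engine. The sketch below is illustrative Python rather than mobile code, and the context fields and rules are invented examples; a real tool would fill the context from platform APIs (location services, smartwatch sensors).

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Context:
        at_home: bool
        heart_rate_bpm: float
        hour: int

    @dataclass
    class Trigger:
        name: str
        condition: Callable[[Context], bool]
        prompt: str

    TRIGGERS = [
        Trigger("evening_at_home", lambda c: c.at_home and c.hour >= 19,
                "How did you experience technology at home today?"),
        Trigger("elevated_arousal", lambda c: c.heart_rate_bpm > 100,
                "What is happening right now? Add a photo or audio note."),
    ]

    def due_prompts(ctx: Context) -> list[str]:
        """Return the probe prompts whose contextual conditions hold."""
        return [t.prompt for t in TRIGGERS if t.condition(ctx)]

    print(due_prompts(Context(at_home=True, heart_rate_bpm=72, hour=20)))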

You will

  • Conduct a literature review on the implementation of mobile probe tools
  • Design and develop an innovative Android/iOS application for collecting context-based probes
  • Implement a secure backend for storing research data, including features for export and analysis
  • Conduct a short pre-test study involving real participants
  • Document your findings in a written thesis and present them in a final presentation
  • (Optional) Co-author a research paper based on the application artifact and results

Recommended Background

  • Strong interest in mobile app development, including the use of APIs to access contextual data
  • Creativity in exploring new and untested methods for probe design
  • Solid programming skills in iOS and/or Android development

Readings | Literature

  • Eghtebas, C., Klinker, G., Boll, S., & Koelle, M. (2023, July). Co-speculating on dark scenarios and unintended consequences of a ubiquitous(ly) augmented reality. In Proceedings of the 2023 ACM Designing Interactive Systems Conference (pp. 2392-2407).
  • Gaver, B., Dunne, T., & Pacenti, E. (1999). Design: cultural probes. interactions, 6(1), 21-29.
  • Graham, C., Rouncefield, M., Gibbs, M., Vetere, F., & Cheverst, K. (2007, November). How probes work. In Proceedings of the 19th Australasian conference on Computer-Human Interaction: Entertaining User Interfaces (pp. 29-37).
  • Boehner, K., Vertesi, J., Sengers, P., & Dourish, P. (2007, April). How HCI interprets the probes. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 1077-1086).
  • Ibrahim, S. B., Antle, A. N., Kientz, J. A., Pullin, G., & Slovák, P. (2024, June). A Systematic Review of the Probes Method in Research with Children and Families. In Proceedings of the 23rd Annual ACM Interaction Design and Children Conference (pp. 157-172).

Note

Please note that the scope and complexity of this project are more appropriate for a Master’s thesis. However, exceptionally qualified Bachelor’s students—based on academic transcript, motivation, or relevant experience—are also welcome to apply.

Contact

Interested students are invited to submit their CV, academic transcript, and intended start date to dominik.hirschberg ät unibw.de.


MT Jan Leusmann, Prof. Dr. Sven Mayer
Enhancing Action Detection for Robot Curiosity: Integrating Online and Offline Learning

Topic

This thesis builds upon a previous master's thesis on online action learning and aims to improve the approach by integrating both online and offline action detection. The goal is to refine the system to better detect when a robot should exhibit curiosity based on human activity. Improvements will focus on optimizing classification accuracy, reducing latency, and enhancing adaptability. Additionally, a user study will be conducted to evaluate the system's effectiveness in real-world human-robot interaction scenarios.
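
One simple way to combine the two models (an assumption about the architecture, for illustration only) is late fusion of their per-class probabilities, gating the robot's curiosity behavior on the fused prediction:

    import numpy as np

    def fuse(online_probs, offline_probs, w_online=0.4):
        """Late fusion of per-class probabilities from the fast online model
        and the slower offline model; the weight is a tunable assumption."""
        fused = w_online * online_probs + (1.0 - w_online) * offline_probs
        cls = int(np.argmax(fused))
        return cls, float(fused[cls])

    def show_curiosity(cls, confidence, curious_classes, min_conf=0.6):
        """Trigger the curiosity behavior only for selected activities and
        only when the fused prediction is confident enough."""
        return cls in curious_classes and confidence >= min_conf

    # cls, conf = fuse(p_online, p_offline)
    # if show_curiosity(cls, conf, curious_classes={2, 5}): ...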

You will:

  • Review literature on action recognition and curiosity-driven learning
  • Analyze and improve the existing online action learning approach
  • Develop and integrate a combined online-offline detection model
  • Implement improvements and optimize system performance
  • Conduct a user study to evaluate the system
  • Summarize findings in a thesis
  • (Optional) Co-author a research paper

You need:

  • Experience with Machine Learning and Computer Vision
  • Strong programming skills in Python
  • Familiarity with ROS (Robot Operating System)
  • Knowledge of study design and data analysis
  • Strong statistical evaluation skills
  • Strong English communication skills

References

  • Automatic Class Discovery and One-Shot Interactions for Acoustic Activity Recognition

MT Jan Leusmann, Steeven Villa, Prof. Dr. Sven Mayer
Eliciting Expressions for On-Task Communication for a Humanoid Robot

Topic

In this thesis, we aim to elicit non-verbal expressions for a semi-humanoid robot to enhance its on-task communication. Applying the expression elicitation approach ([1]), the goal is to develop a set of understandable expressions for task-related interactions. These expressions may include showing curiosity, interrupting to correct mistakes, or signaling attentiveness.

You will:

  • Perform a literature review
  • Identify relevant design dimensions
  • Implement expressions on a semi-humanoid robot
  • Conduct a user study to evaluate expression clarity
  • Analyze and summarize findings in a thesis
  • (Optional) Co-author a research paper

You need:

  • High research interest in Human-Robot Interaction
  • Experience with ROS and Python
  • Knowledge of study design and data analysis
  • Strong English communication skills

References

  • Expression Elicitation Approach

MT Teodora Mitrevska
Assessing Image Similarity via EEG

Description

Project Overview

For human-in-the-loop (HITL) systems, it is important to understand and quantify users' perceptions to make the next predictions in line with the user's intention. HITL systems that employ visual recognition often require inferring similarity between a perceived object and a mental target. This is often difficult to determine when it comes to complex stimuli like faces. While HITL systems traditionally rely on explicit user input, implicit EEG responses can support the decision-making process effortlessly.

Project Goals

In this project, we will be exploring brain signals (more accurately, ERP components) in similarity prediction for visual stimuli.

  • Experiment Design: Participants will be shown a target image followed by a number of non-target images. For every image pair (target & non-target), they will rate similarity on a scale. During the experiment, they will provide keyboard input, eye-tracking data, and EEG data.
  • Data Analysis: Preprocess the recorded data and analyze ERP components (see the sketch below).
  • Model Training: Train a model on the collected data that predicts different levels of similarity between images.
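
For the ERP analysis step, a minimal MNE-Python sketch of extracting a P3-like measure (mean amplitude, 300-500 ms, parietal site) could look as follows; the channel name, event ids, and time window are placeholder assumptions.

    import mne

    def p3_difference(raw, channel="Pz"):
        """Target-minus-nontarget mean amplitude in a typical P3 window.

        Assumes `raw` is a preprocessed mne.io.Raw recording whose stimulus
        channel encodes targets as 1 and non-targets as 2."""
        events = mne.find_events(raw, stim_channel="STI 014")
        epochs = mne.Epochs(raw, events, event_id={"target": 1, "nontarget": 2},
                            tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

        def mean_amp(cond):
            evoked = epochs[cond].average()
            return float(evoked.copy().pick(channel).crop(tmin=0.3, tmax=0.5).data.mean())

        # A larger difference would suggest the component indexes perceived
        # similarity to the mental target, as in the references below.
        return mean_amp("target") - mean_amp("nontarget")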

You will

  • Generate visual stimuli for the experiment (via Midjourney or similar) OR explore previously used datasets of images from similar studies
  • Integrate the stimuli in a given system
  • Conduct a user study with EEG and Eye Tracking
  • Collect and analyze the collected data, focusing on finding correlations between assessed similarity and ERP components
  • Summarize findings in a thesis and present them to an audience
  • (Optional) Co-write a research paper based on the results

You need

  • Strong communication skills in English
  • Knowledge of machine learning and data analysis (e.g., Python)

References

  • De la Torre Ortiz, The P3 indexes the distance between perceived and target image https://helda.helsinki.fi/bitstream/10138/572292/1/main_1.pdf
  • De la Torre Ortiz, Cross-Subject EEG Feedback for Implicit Image Generation https://helda.helsinki.fi/server/api/core/bitstreams/d2a54f39-7d87-4311-9b08-0d71578a36ca/content

MT/BT Teodora Mitrevska
Brain Tracking Opportunities for Neurodivergents

Description

Project Overview

There has been rapid development in technologies that aim to support the needs of neurodivergent individuals in navigating everyday life and improving their wellbeing. Addressing differences in perception and cognition, multiple mechanisms are available to ease self-regulation, focus improvement, and stress management. One very promising approach is neurofeedback, a method of brain tracking tied to a specific activity. In the past, this non-invasive method used to be practiced in a lab with the assistance of a professional. Nowadays, however, there are tracking opportunities via consumer devices that can be used in the comfort of one's home. We would like to investigate the need for these devices and the contexts in which they could provide maximum support to suit the needs of neurodivergent individuals.

Project Goals

In this project, you will develop a scenario for inspecting the usefulness of brain tracking for neurodivergent individuals and how it could be improved.

You will

  • Design a scenario where the participants can be familiarized with the device and try it out
  • Develop the experimental setup and a questionnaire
  • Conduct a user study with neurodivergent (ND) participants
  • Collect and analyze the data
  • Summarize findings in a thesis and present them to an audience
  • (Optional) Co-write a research paper based on the results

You need

  • Strong communication skills in English
  • Strong motivation and interest in qualitative research

References

  • Burtcher et al., Neurodivergence and Work in Human-Computer Interaction: Mapping the Research Landscape https://dl.acm.org/doi/fullHtml/10.1145/3663384.3663386
  • Hall et al., Designing for Strengths: Opportunities to Support Neurodiversity in the Workplace https://dl.acm.org/doi/10.1145/3613904.3642424

PT Julian Rasch
Camera-Based Wave Prediction and Calm Surface Detection Using Optical Flow and Machine Learning (+ Robotic Arm Control)

Individual Practical (6 ECTS)

Start Date: Flexible

Supervisor: Julian Rasch (julian.rasch ät um.ifi.lmu.de)

This project is a collaboration with the Munich-based artist Philip Gröning and is part of a larger initiative.

Project Overview

The objective of this project is to create a camera-based system to predict wave movements and identify the calmest surface point on a defined, square water body. This system will employ computer vision techniques, specifically optical flow, to track wave motion across video frames. Machine learning models will be utilized to predict future wave behavior, consistently detecting regions with minimal motion, representing calm areas. The calmest point will serve as the primary output and will be forwarded to a 7-axis robotic arm. The project includes real-time video processing, optical flow analysis, and machine learning for wave pattern forecasting.
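
A minimal sketch of the optical-flow part with OpenCV, assuming two grayscale frames and a simple grid aggregation (the prediction model and the ROS communication are out of scope here):

    import cv2
    import numpy as np

    def calmest_point(prev_gray, gray, grid=8):
        """Find the calmest cell of the water surface between two frames.

        Dense Farnebäck optical flow yields per-pixel motion; the frame is
        split into a grid x grid lattice and the cell with the lowest mean
        flow magnitude is returned as pixel coordinates (x, y). The flow
        parameters are common defaults, not tuned values."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)            # per-pixel motion magnitude
        h, w = mag.shape
        ch, cw = h // grid, w // grid
        cells = mag[:ch * grid, :cw * grid].reshape(grid, ch, grid, cw).mean(axis=(1, 3))
        gy, gx = np.unravel_index(np.argmin(cells), cells.shape)
        return gx * cw + cw // 2, gy * ch + ch // 2   # target for the robot arm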

Project Objectives

  • Capture video of water surfaces and process it using optical flow to detect wave motion.
  • Analyze optical flow data to locate regions with the least movement (calm surface points).
  • Utilize machine learning models to predict wave dynamics from historical wave movement patterns, identifying calm surface areas.
  • Communicate the identified area to a 7-axis robotic arm for position alignment.
  • Optional: Develop a real-time visualization tool to highlight calm areas on the water surface.

Expected Deliverables

  • A fully functional system for detecting waves and calm surface regions using live camera footage.
  • A machine learning model for wave dynamics prediction.
  • ROS (Robot Operating System) communication functionality.
  • Optional: A visualization tool for real-time water surface monitoring.

Required Skills & Knowledge

  • Python Programming
  • Computer Vision (e.g., OpenCV)
  • Familiarity with ML frameworks (e.g., TensorFlow, PyTorch, Keras)
  • Basic understanding of ROS

This project offers practical experience in computer vision and machine learning applied to a real-world problem, making it ideal for students interested in AI, robotics, environmental monitoring, and computational fluid dynamics. As part of a larger art project, an interest in the creative domain is beneficial but not mandatory.

Please send a brief motivation letter, CV, and transcript of records if you are interested in this project.


MT Sarah Christin Reichmann
Infotainment Systems for Motorbikes [Exchange Australia]

We, the Centre for Accident Research & Road Safety - Queensland (CARRS-Q) at the Queensland University of Technology (QUT) in Australia, offer you a unique position for your Master/Bachelor thesis in the areas of human-computer interaction and infotainment systems for motorbikes.

Join us on the journey of shaping the digital future and break the cycle with the newest innovative technology approaches. We are a dedicated research team based in Brisbane, in the Sunshine State of Australia, and we are looking for creative, out-of-the-box-thinking minds to join our team onshore.

We work on the most difficult challenges in the automotive industry, where the only limit is our own imagination. Digitalization will be key to ensuring a safe riding experience in the future. Come join our creative team to shape the future of motorbikes. All in?

What awaits you?

We will work with you to shape and scope your thesis project to align with any of the following activities:

  • You will prepare real-world rider studies covering various traffic scenarios and different interaction concepts.
  • You will evaluate various output media for a safe riding experience.
  • You will build a pretotype with an innovative mixed approach of augmented reality and voice interaction.

In addition, this unique placement provides:

  • the ability to present and discuss your solutions with industry experts at BMW Motorrad Munich at a higher level.
  • the chance to discuss the newest trends in smart voice agents with other creative minds within industry and with relevant stakeholders such as the Queensland state police, with the aim of learning from them and helping to shape infotainment solutions for the future.

What should you bring along?

  • Currently enrolled full time in an undergraduate or graduate program at an accredited college or university studying Human-Computer Interaction (HCI), or related fields like Computer Science Engineering, Software Engineering, Informatics, Data Science. Master’s degree preferred.
  • Proven experience in software development, software engineering, prototyping or similar role.
  • Experience in software development life cycles (SDLC).
  • Experience with software design and development in a test-driven environment.
  • Knowledge of coding languages (e.g., C++, Java, JavaScript, Python) and platforms like Unity.
  • An elevated mindset that is willing to think outside of the box and sees opportunity in every difficulty.
  • Being able to work in a team is key for this position.

Challenge accepted? Apply now!

Earliest starting date: 01.05.2023
Duration: 6 - 12 months
Working hours: Full-time


MT Sophia Sakel, Luke Haliburton, Francesco Chiossi
Investigating the impact of short-form video interruptions and device use on conversation quality and social bonding

MT Clara Sayffaerth
Visualizing the Expert's Perspective in Extended Reality

Master Thesis

Start Date: Flexible

Supervisor: Clara Sayffaerth (clara.sayffaerth@um.ifi.lmu.de)

Overview

This project aims to explore how different Extended Reality (XR) visualizations of an expert's perspective impact learning and memory retention. With more people retiring from the workforce than entering it, combined with the rapid evolution of technology, transferring practical and procedural knowledge has become increasingly challenging. Currently, this transfer often relies on in-person demonstrations, which can be limiting. XR technology offers solutions by enabling learning that is not restricted by time or place while also presenting information in an immersive, three-dimensional way. Additionally, XR allows us to visually adapt reality. By experimenting with how a virtual expert is visualized, we can go beyond replicating realistic instructions. These adaptations have the potential to enhance the learning experience and improve knowledge transfer for future learners.

Objectives

  • Conduct a literature review on perspectives and their visualizations in XR (AR/VR/MR)
  • Design multiple perspective visualizations
  • Build an HMD-AR application (with Unity)
  • Conduct a user study
  • Analyze the data statistically
  • Summarize your findings in a thesis and present them
  • (Optional) Co-author a research paper based on your findings

Required Skills & Knowledge

  • Strong communication skills in English.
  • Experience with XR development, preferably AR/HMDs in Unity
  • Motivation to learn new skills
  • (Optional) Interest in building hardware

Please send a brief motivation letter, CV, and transcript of records if you are interested in this Master thesis.


MT/BT Clara Sayffaerth, Beat Rossmy
UX-Design "Museumsbuddy" An Interactive Audioguide for Children at Museum Brandhorst

The "Museumsbuddy" is a unique interactive audioguide that will accompany children (from the age of 4) on a scavenger hunt through the Museum Brandhorst. Spira, the spirit of art and the museum, shares inspiring stories, dream journeys and soundscapes with the listeners. Thanks to a cooperation with tonies®, we have 40 Tonieboxes at our disposal. By interacting with the device (with figures and RFID tags), children are playfully activated to discover the museum and the works of art there, thus creating an active experience instead of a purely passive one.

You will

  • Develop new functions and feedback elements (based on visitor feedback) intended to change and optimize the Toniebox for museum visits. For this purpose, tonies® kindly granted us access to the firmware.
  • Find answers to the following design challenge(s):
  • How can the Toniebox be modified to make it easier and more exciting to use in the museum?
  • How can the Toniebox support non-linear storytelling?
  • How can the Toniebox support a shared listening experience?
  • Completion by the end of May 2025

You need (preferably)

  • Interest in art, design, technology and UX design
  • Enjoy helping to shape art education at the museum

The project is carried out in cooperation with the Museum Brandhorst and the Creative Technologist Dr. Beat Rossmy. For further information on the project, please contact the project coordinator Janina Horn (janina.horn ät museum-brandhorst.de).


BT/MT Steeven Villa
Neurotechnologies to Augment Human Cognitive Skills

Description

Neurotechnology has typically been used in the medical domain. However, it can bring huge benefits to healthy individuals as well. In this thesis, we will use transcranial direct current stimulation in a controlled environment to test inhibition control in individuals (how good a person is at stopping an instinctive action). You will conduct a series of user studies following an established protocol and analyze whether transcranial stimulation improves participants' inhibition control. This work moves the field of human-computer interaction forward in enhancing human capabilities.

You will

  • Perform a review of current literature
  • Collect data in a lab experiment
  • Learn how to stimulate the brain using medical grade devices

You need

  • User Study Experience
  • Good communication skills

References

  • Steeven Villa, Jasmin Niess, Takuro Nakao, Jonathan Lazar, Albrecht Schmidt, and Tonja-Katrin Machulla. 2023. Understanding Perception of Human Augmentation: A Mixed-Method Study.
  • Albrecht Schmidt. Augmenting Human Intellect and Amplifying Perception and Cognition.

MT Christoph Weber
AI in Sound Design: Exploring Automatic Sound Recommendation and Generation for Film Production

Master Thesis

Start Date: Flexible

Supervisor: Christoph Weber (c.weber ät hff-muc.de) HFF Munich

Overview

This thesis project offers an exciting opportunity for students interested in artificial intelligence, film production, and audio design to contribute to cutting-edge research. Sound design plays a crucial role in shaping emotional engagement and narrative coherence in film, yet the process remains highly manual, experience-dependent, and time-consuming. The primary goal of this research is to explore how AI-driven systems can automatically classify selected film scenes and subsequently generate and/or suggest appropriate sound design elements from a database. By systematically examining scene classification and AI-based audio generation, this research aims to develop a practical solution that integrates seamlessly into professional audio workflows—reducing manual workload and enhancing creative possibilities for sound designers—while also investigating its impact on user experience, as well as key factors such as control and sense of ownership.
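
As one illustrative architecture for the recommendation step (not a prescribed design), scene clips and library sounds could be embedded into a shared vector space and matched by cosine similarity:

    import numpy as np

    def recommend_sounds(scene_emb, sound_embs, sound_names, k=5):
        """Rank library sounds by cosine similarity to a scene embedding.

        How the embeddings are produced (a video classifier for the scene,
        an audio encoder for the library) is left open; this sketches only
        the retrieval step of the recommendation pipeline."""
        scene = scene_emb / np.linalg.norm(scene_emb)
        sounds = sound_embs / np.linalg.norm(sound_embs, axis=1, keepdims=True)
        order = np.argsort(sounds @ scene)[::-1]      # most similar first
        return [sound_names[i] for i in order[:k]]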

Objectives

  • Perform a literature review
  • Design, conduct, and analyze expert interviews
  • Develop an AI-based prototype for automatic scene classification and sound design recommendation and/or generation
  • Demonstrate the integration of your prototype within an established audio production workflow (e.g., Avid Pro Tools)
  • Evaluate the effectiveness of your prototype
  • Summarize your findings in a thesis and present them
  • (Optional) Co-author a research paper based on your findings

Required Skills & Knowledge

  • Good programming skills in Python
  • Interest in film production and sound design
  • Basic knowledge of audio and video editing (ideally Avid Pro Tools or similar)
  • Knowledge of artificial intelligence, machine learning, and ideally deep learning
  • Motivation to learn new skills
  • You have reviewed and familiarized yourself with the provided references
  • (Optional, but beneficial) Experience with the JUCE Framework

References (selection)

  • Kamath, Purnima, et al. "Sound designer-generative ai interactions: Towards designing creative support tools for professional sound designers." Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. 2024.
  • Cheng, Ho Kei, et al. "Taming multimodal joint training for high-quality video-to-audio synthesis." arXiv preprint arXiv:2412.15322 (2024).
  • Zhang, Yiming, et al. "Foleycrafter: Bring silent videos to life with lifelike and synchronized sounds." arXiv preprint arXiv:2407.01494 (2024).

Please send a brief motivation letter, CV, and transcript of records if you are interested in this Master thesis to c.weber ät hff-muc.de.


MT Maximiliane Windl, Philipp Thalhammer
Squishy 2.0

Squishy is a tangible/deformable interface in the form of a plushy that can be used to interact with generative AI beyond traditional chat-based input methods. After entering a prompt into an AI chat window, users often correct the AI's output by entering another prompt. Squishy enables users to react by interacting with a tangible interface instead, making the process more engaging and playful (a heuristic sketch of detecting these gestures follows the list below):

  • Hitting > The user is not pleased with the output
  • Shaking > The user is confused by the output
  • Petting > The user is happy with the output
  • ...
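
A heuristic sketch of how these gestures might be distinguished from accelerometer data; the thresholds are illustrative and would be tuned on recordings from the actual plushy.

    import numpy as np

    def classify_gesture(accel, fs):
        """Map a short window of 3-axis accelerometer samples (in g) to one
        of Squishy's reactions."""
        mag = np.linalg.norm(accel, axis=1)       # (n, 3) -> (n,)
        peak = mag.max()
        centered = mag - mag.mean()
        zero_cross_rate = np.mean(np.diff(np.sign(centered)) != 0) * fs
        if peak > 4.0:
            return "hitting"                      # one strong spike
        if zero_cross_rate > 3.0 and peak > 1.5:
            return "shaking"                      # sustained oscillation
        return "petting"                          # gentle, slow movement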

This concept has already been verified in a previous study. However, we want to improve the previous version by adding the following features:

  • Voice recognition to enable the user to use voice input instead of prompting by text
  • Text to speech to add feedback from Squishy to the user that goes beyond simple sounds
  • A desktop application that serves as the frontend (displays the generated output, chat history, etc.)

What we expect:

  • You have experience with hardware (working with microcontrollers, etc.)
  • You have experience with (web) programming (frontend/backend)
  • You can work independently

What you get:

  • Two committed supervisors, weekly meetings, and hands-on advice
  • (Co-)writing a scientific paper
  • A master's thesis


BT = bachelor thesis - PT = project thesis - MT = master thesis - PWAL = practical research course

Further Topics

fortiss Research Institute

Since 2017, the fortiss institute has had a working group on "Human-Centered Engineering", headed by Prof. Hußmann.

The contact person at fortiss is Dr. Yuanting Liu.

Further information

Phonetics - Media Informatics

If you are interested in a Bachelor's thesis, please contact Christoph Draxler.

Further information

Institut für Digitales Management und Neue Medien

Topics for students with the minor subject Medienwirtschaft (media business). Supervision by the business administration faculty can be arranged in consultation with the examination board.

Further information

Lehrstuhl für Ergonomie (TUM-LFE)

The Chair of Ergonomics at the Technical University of Munich (TUM) (Prof. Bengler) offers student theses in topic areas including interaction with future assistance systems and highly automated systems, investigation of multimodal human-machine interaction, and digital human modeling.

Further information

Lehrstuhl für Architekturinformatik (TUM)

The Chair of Architectural Informatics at the Technical University of Munich (TUM) (Prof. Petzold) offers student theses in the topic area: Gamification - cooperative planning.

Further information

The contact person at the Chair of Architectural Informatics is Gerhard Schubert.

Lehrstuhl für Architekturinformatik (TUM)

The Chair of Architectural Informatics at the Technical University of Munich (TUM) (Prof. Petzold) offers student theses in the topic area: USP - Augmented Reality in communication.

Further information

The contact person at the Chair of Architectural Informatics is Gerhard Schubert.

Lehrstuhl für Architekturinformatik (TUM)

The Chair of Architectural Informatics at the Technical University of Munich (TUM) (Prof. Petzold) offers student theses in the topic area: visual exploration and supporting and documenting the (architectural) design process.

Further information

The contact person at the Chair of Architectural Informatics is Ata Zahedi.

Lehrstuhl für Fahrzeugtechnik (FTM)

The Chair of Automotive Technology (FTM) at the Technical University of Munich (TUM) (Prof. Lienkamp) offers student theses in the areas of driver assistance systems, human-machine interaction, and driving simulation. As a development and testing tool, the chair operates a dynamic truck driving simulator.

Further information

Global Drive

  • Joint projects of the Chair of Automotive Technology with partner universities abroad
  • A stay abroad
  • International and interdisciplinary teamwork
  • Student projects (semester, Bachelor's, or Master's theses)
  • Projects supported by industrial companies
  • Personal development through soft-skills seminars (ECTS)

Further information

Lehrstuhl für Medientechnik (LMT-TUM)

The Chair of Media Technology (LMT) at the Technical University of Munich (TUM) (Prof. Steinbach) offers student theses in topic areas including compression and coding of multimedia information.

Further information

Lehrstuhl für Mensch-Maschine-Kommunikation (MMK-TUM)

The Chair of Human-Machine Communication (MMK) at the Technical University of Munich (TUM) (Prof. Rigoll) offers student theses in topic areas including pattern recognition, psychoacoustics, and signal processing.

Further information

Lancaster University

In the UK, you can write your thesis at our partner university in Lancaster.

Further information and a list of topics

Queensland University of Technology

It is also possible to write your thesis at our partner university QUT in Brisbane, Australia.

Irish Software Engineering Research Centre (LERO)

In Ireland, you can write your thesis at the Irish Software Engineering Research Centre (LERO). If interested, please contact Andreas Pleuss (a former doctoral student of LMU Media Informatics).

Institut für Maschinelles Sehen und Darstellen

The Institut für Maschinelles Sehen und Darstellen has openly available thesis topics, particularly in the areas of AR/VR and image processing. Supervision is arranged in consultation with Prof. Butz.

Further information
