Internships / Theses
Available Theses
Development of a novel Simulation Framework for a Wearable Eye-Tracking Sensor (MA/BA Thesis / Guided Research / IDP)
Key words: Eye-tracking, Sensors, Signal Processing, Machine Learning, Computer Vision
Eye-tracking is a technology with numerous application areas in market and behaviour research, education, human-machine interaction, healthcare, and more. Recent advancements in sensor hardware and processing are paving the way for widespread adoption even in consumer products, which will enable exciting new opportunities and has the potential to introduce a paradigm shift in how we interact with our environment.
As part of the EU-funded VIVA project (vivaproject.eu), we are currently researching a completely novel wearable eye-tracking sensor based on lasers, which requires the development of a camera-based reference eye-tracker. For large parts of the required algorithm pipeline, there are no published robust state-of-the-art implementations, which makes this an interesting area for fundamental research in Computer Vision and an opportunity to fill research gaps.
The main task is the development of a simulation tool that will be used to research new algorithms for the sensor. Depending on personal interests, the scope of the project can be extended to algorithm research.
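As a rough illustration of what one building block of such a camera-based simulation might look like (not part of the project specification), the sketch below rotates a simplified eyeball, places the pupil centre in 3D, and projects it onto a pinhole-camera image plane. All geometry and parameter values are assumptions chosen for illustration, not the actual VIVA sensor model.

import numpy as np

EYE_RADIUS_MM = 12.0                         # assumed eyeball radius
EYE_CENTRE_MM = np.array([0.0, 0.0, 50.0])   # assumed eye centre in camera coordinates (z = optical axis)
K = np.array([[600.0,   0.0, 320.0],         # assumed camera intrinsics (fx, fy, cx, cy)
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pupil_pixel(yaw_deg: float, pitch_deg: float) -> np.ndarray:
    """Return the pixel position of the pupil centre for a given gaze direction."""
    yaw, pitch = np.radians([yaw_deg, pitch_deg])
    # Gaze direction in camera coordinates (camera looks along +z towards the eye).
    gaze = np.array([np.sin(yaw) * np.cos(pitch),
                     np.sin(pitch),
                     -np.cos(yaw) * np.cos(pitch)])
    pupil_3d = EYE_CENTRE_MM + EYE_RADIUS_MM * gaze   # pupil centre on the eyeball surface
    uvw = K @ pupil_3d                                # pinhole projection
    return uvw[:2] / uvw[2]                           # homogeneous -> pixel coordinates

if __name__ == "__main__":
    for yaw in (-15.0, 0.0, 15.0):
        print(yaw, pupil_pixel(yaw, 0.0))

A real simulation framework would of course add eyelids, corneal refraction, glints, noise models, and so on; the sketch only shows the basic geometry-to-image step.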
Prerequisites:
- Programming skills, especially Python or MATLAB
- Solid foundations in Computer Science and Algorithms
- Fundamental knowledge of Computer Vision and Image Processing
- Ideally: Experience in sensor simulation technology, especially for camera-based systems.
Supervision is offered in German or English and for students from all relevant TUM Schools.
Contact (feel free to ask questions): alexander.zimmer(at)tum.de
Bachelor/Master Thesis: Can GenAI foundation models realistically reproduce human scanpaths?

Human scanpaths (sequences of fixations and saccades) capture how people visually explore scenes and tasks. Modeling these sequences can provide insights into attention, perception, and human cognition, but collecting eye-tracking data is expensive, privacy-sensitive, and often domain-limited. This project investigates whether modern Generative AI foundation models (e.g. multimodal transformers or diffusion models) can learn to generate synthetic scanpaths that resemble real human gaze behavior.
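To make the modeling target concrete, the sketch below shows one possible way a scanpath could be represented and tokenised for a sequence model. The fixation values and the 32x32 spatial grid are illustrative assumptions; the actual data format and generative model are open questions of the thesis.

from dataclasses import dataclass

@dataclass
class Fixation:
    x: float          # normalised horizontal position in [0, 1]
    y: float          # normalised vertical position in [0, 1]
    duration_ms: float

# A scanpath is an ordered list of fixations; saccades are the
# transitions between consecutive fixations.
scanpath = [Fixation(0.12, 0.30, 220.0),
            Fixation(0.55, 0.42, 180.0),
            Fixation(0.58, 0.75, 310.0)]

GRID = 32  # discretise the image into a 32 x 32 grid of spatial tokens

def to_tokens(path: list[Fixation]) -> list[int]:
    """Map each fixation to a discrete spatial token, as an autoregressive
    transformer (or similar generative model) might consume it."""
    tokens = []
    for f in path:
        col = min(int(f.x * GRID), GRID - 1)
        row = min(int(f.y * GRID), GRID - 1)
        tokens.append(row * GRID + col)
    return tokens

print(to_tokens(scanpath))  # one spatial token per fixation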
Requirements:
- Strong Python skills
- Interest in computer vision, eye tracking, and generative AI
- Basic knowledge of deep learning architectures (transformers, VAEs, diffusion)
Contact: yasmeen.abdrabou[at]tum[dot]de or virmarie.maquiling[at]tum[dot]de
Bachelor / Master Thesis: VR Classroom

Virtual Reality (VR) technology has revolutionized the way we learn and interact with information. In this project, we aim to design a VR classroom that leverages the power of generative behavior and text models to enhance the learning experience for students. The project is primarily suited to students from Computer Science: Games Engineering, but is also open to all other students who are interested in developing VR applications that integrate artificial intelligence.
For more information contact ozdelsuleyman(at)tum.de.
Ongoing Theses
Call for Bachelor/Master Thesis: Leveraging Generative AI Chatbots in VR Museums for Enhanced Educational Experiences (ongoing)

Generative AI (GenAI) has gained immense popularity, particularly for its applications in education. Within the virtual reality (VR) realm, where interactive environments are pivotal, GenAI, especially Large Language Models (LLMs), offers promising potential to enrich user experiences. An exemplary use case is the creation of immersive VR museums, where LLM-based chatbots serve as interactive guides. This project aims to propose, develop, and assess an immersive VR museum enhanced with LLM-based chatbots, employing advanced machine learning and eye-tracking technology.
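As a minimal sketch of the chatbot side (assuming an OpenAI-style chat API; the model name, persona prompt, and exhibit description are placeholders, and the thesis does not prescribe a specific provider or integration), such a museum guide could be queried roughly like this:

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

GUIDE_PROMPT = ("You are a museum guide inside a VR exhibition. "
                "Answer visitor questions about the exhibit they are looking at, "
                "briefly and in an engaging tone.")

def ask_guide(exhibit_description: str, visitor_question: str) -> str:
    """Send the currently gazed-at exhibit (e.g. selected via eye tracking)
    and the visitor's question to the chatbot and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": GUIDE_PROMPT},
            {"role": "user", "content": f"Exhibit: {exhibit_description}\n"
                                        f"Question: {visitor_question}"},
        ],
    )
    return response.choices[0].message.content

# Example call (requires network access and an API key):
# print(ask_guide("A 17th-century astrolabe", "What was this used for?"))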
Prerequisites: Unity/Unreal, Python, C#, Machine learning
Contact: hong.gao(at)tum.de
Evaluating Immersive VR and Generative AI-Based Learning Environments for Scottish Cultural Heritage

Virtual Reality (VR) and Generative Artificial Intelligence (AI) are two innovative technologies with the potential to transform educational practices. VR can create immersive learning environments, enabling students to experience different cultures and historical periods firsthand. AI can personalize learning experiences by providing students with tailored content and feedback. This bachelor thesis explores the potential of combining VR and generative AI to enhance the teaching and preservation of Scottish cultural heritage. Central to this study is the development of a prototype VR experience, Scottish Curling VR, which focuses on the Scottish Bonspiel, an outdoor curling tournament held on a frozen lake. The prototype was showcased at the Festival der Zukunft, where participants were observed and asked to complete a survey. These observations led to improvements to the prototype and yielded valuable insights into the public perception of VR and AI in educational settings. The research suggests that combining VR and generative AI can create immersive and personalized learning environments, and it reveals a positive public perception of employing these technologies in educational frameworks. Nevertheless, certain limitations persist: generative AI models occasionally introduce inaccuracies or misinformation, demanding either expert supervision or a discerning user. Despite these limitations, the findings suggest that VR and AI can improve educational learning experiences. Future research should focus on quantifying the improvements in learning outcomes achieved by combining VR and generative AI in educational settings.
Effects of Gamification on Learners' Motivation and Engagement in Digital Learning Experiences

“Gamification” is the use of game design elements in non-game contexts.
This bachelor thesis focuses on the effects of gamification on motivation and engagement in digital learning. It includes a three-phase user study with 40 participants divided into two groups: one using a gamified platform (DataCamp) to learn Python, and the other using a non-gamified medium.
This study is supported by DataCamp, the most intuitive learning platform for data science and analytics. Learn any time, anywhere and become an expert in R, Python, SQL, and more. DataCamp’s learn-by-doing methodology combines short expert videos and hands-on-the-keyboard exercises to help learners retain knowledge. DataCamp offers 350+ courses by expert instructors on topics such as importing data, data visualization, and machine learning. They’re constantly expanding their curriculum to keep up with the latest technology trends and to provide the best learning experience for all skill levels. Join over 6 million learners around the world and close your skills gap. Find out more here: datacamp.com/groups/education.
GenAI tool for reading
Key words: Educational Technologies, GenAI, eye-tracking, reading acquisition
The goal of this project is to design, implement, and test a prototype of an Intelligent Tutoring System (ITS) that helps students learn how to read and strengthen this skill. The project focuses on the conceptualisation of the reading tool (modality of feedback, simple interactive UX design) as well as a first implementation (prototype development and testing).
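Purely as an illustration of one possible feedback signal such a tutoring system could use (not the project specification), the sketch below flags words on which a reader's total fixation time is far above their average; the data format, threshold, and example values are assumptions.

from statistics import mean

def words_needing_feedback(word_fixations: dict[str, float],
                           factor: float = 2.0) -> list[str]:
    """Return words whose total fixation duration exceeds `factor` times
    the mean fixation duration across all words."""
    avg = mean(word_fixations.values())
    return [w for w, dur in word_fixations.items() if dur > factor * avg]

# Example: total fixation duration (ms) per word of one sentence.
durations = {"the": 90.0, "quick": 140.0, "hippopotamus": 620.0, "runs": 150.0}
print(words_needing_feedback(durations))  # -> ['hippopotamus']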
For more information contact Franziska Kaltenberger (franziska.kaltenberger[at]tum.de) and Enkeleda Thaqi (enkeleda.thaqi[at]tum.de)
Benchmarking webcam eye-tracking

Key words: Webcam-based eye-tracking, benchmarking, comparative study
Eye-tracking allows the collection of eye-movement data, which can be used to analyse cognitive processes, for example in the context of reading difficulties. However, most high-quality eye-tracking technologies rely on dedicated hardware, limiting their accessibility and scalability. This is particularly challenging in educational technologies, where eye-tracking-based tools for learning and especially reading assessment should ideally run on standard consumer devices such as laptops or tablets without sacrificing accuracy. In these areas, it is essential to obtain high-quality data using webcam-based eye-tracking systems.
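As a minimal sketch of one standard benchmarking metric, the snippet below computes gaze accuracy in degrees of visual angle between a webcam-based estimate and a reference tracker. The screen size, resolution, viewing distance, and sample values are assumptions for illustration only.

import numpy as np

SCREEN_W_CM, SCREEN_H_CM = 53.0, 30.0   # assumed physical screen size
SCREEN_W_PX, SCREEN_H_PX = 1920, 1080   # assumed resolution
VIEW_DIST_CM = 60.0                     # assumed viewing distance

def angular_error_deg(est_px: np.ndarray, ref_px: np.ndarray) -> np.ndarray:
    """Per-sample angular error (degrees) between estimated and reference gaze points."""
    px_to_cm = np.array([SCREEN_W_CM / SCREEN_W_PX, SCREEN_H_CM / SCREEN_H_PX])
    diff_cm = (est_px - ref_px) * px_to_cm
    offset_cm = np.linalg.norm(diff_cm, axis=1)
    return np.degrees(np.arctan2(offset_cm, VIEW_DIST_CM))

# Example: three gaze samples from a webcam tracker vs. the reference tracker.
webcam = np.array([[960.0, 540.0], [500.0, 300.0], [1400.0, 800.0]])
reference = np.array([[950.0, 535.0], [520.0, 310.0], [1380.0, 790.0]])
print(angular_error_deg(webcam, reference).mean(), "deg mean error")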
For more information contact Franziska Kaltenberger (franziska.kaltenberger[at]tum.de) and Enkeleda Thaqi (enkeleda.thaqi[at]tum.de)