Chair of Human-Centered Technologies for Learning
Research Clusters
AI for Empowerment and Learning

Focus: Developing AI systems that amplify human learning, creativity, and agency through collaborative human-AI partnerships.
Technologies: Artificial Intelligence (AI), machine learning, natural language processing, generative models.

Our research pioneers human-centered AI that transforms how individuals learn, create, and thrive. By fostering collaborative partnerships between humans and AI, we enhance educational experiences, spark innovative thinking, and promote agency in domains like professional development, social interaction, and lifelong learning.

Immersive Environments for Human Augmentation

Focus: Advancing human perception, collaboration, and innovation through immersive technologies.
Technologies: Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), spatial computing.

We create immersive environments that augment human perception and capabilities, empowering users to explore virtual worlds, design innovative solutions, or collaborate in enhanced realities. By integrating human-centered AI, these systems adapt to user needs, enabling applications in fields like education, training, entertainment, and social interaction.
Multimodal and Adaptive Systems for Empowered Interaction

Focus: Enabling intuitive, personalized, and inclusive human-technology interaction through dynamic, multi-sensory systems.
Technologies: AI, VR, AR, eye-tracking, multimodal sensing.

We develop multimodal and adaptive systems that empower users by making technology responsive, intuitive, and tailored to individual needs. By combining multi-sensory interfaces with human-centered AI, these systems support seamless interaction for creative expression, professional workflows, and inclusive applications, ensuring accessibility for diverse users across contexts like collaboration, productivity, and innovation.

Eye Tracking and Gaze-Based Interaction

Focus: Harnessing eye-tracking to enhance human attention, intent, and social connection in interactive systems.
Technologies: Eye-tracking, AI, multimodal sensing, VR/AR integration.

Our gaze-based research augments cognitive and social capabilities by using eye-tracking to capture user intent and enhance interaction. Integrated with human-centered AI, these systems empower users in real-time collaboration, creative design, and inclusive communication, with applications spanning education, healthcare, gaming, and professional environments, ensuring accessibility and engagement for all.
News
17.06.2025: Prof. Enkelejda Kasneci Gave a Keynote Speech at IS-EUD 2025

Professor Enkelejda Kasneci gave a keynote speech titled “Can Learners Design Their Future? Promoting Agency with Large Language Models in Education” at the International Symposium on End-User Development (IS-EUD) 2025.
More information is available on the IS-EUD 2025 Keynote Speakers page.
09.06.2025: Prof. Enkelejda Kasneci Delivered Invited Lecture at University of Tokyo

Professor Enkelejda Kasneci delivered an invited lecture titled “Augmenting Human Potential through Human-Centered AI and Attention-Aware Systems” at the University of Tokyo’s Institute of Industrial Science, hosted by the Interactive Visual Intelligence Lab under the leadership of Professor Yusuke Sugano.
28.05.2025: Best Paper Honorable Mention at ETRA 2025
We are thrilled to announce that Virmarie Maquiling, doctoral researcher at the Chair of Human-Centered Technologies for Learning, has received a Best Paper Honorable Mention at the ACM Symposium on Eye Tracking Research & Applications (ETRA 2025) held in Tokyo, Japan.
Her paper, which explores imperceptible gaze guidance in virtual reality, was recognized for its innovative contribution to the field of user-centered eye-tracking research.
Congratulations on this well-deserved honor!
Paper DOI: https://doi.org/10.1145/3725839
15.04.2025: Paper Acceptances at ETRA and DSP!
We are thrilled to share that four papers from our group have been accepted to the ACM Symposium on Eye Tracking Research & Applications (ETRA) and three to the International Conference on Digital Signal Processing (DSP) this year.
Congratulations to all the authors for their outstanding contributions!
12.03.2025: Supporting Socially Disadvantaged Children with AI

The pioneering educational project initiated by the Roland Berger Foundation and the Technical University of Munich (TUM) aims to impart artificial intelligence (AI) skills to socially disadvantaged children. Within the project, children begin learning to use AI responsibly and critically as early as the third grade.
Under the scientific leadership of Prof. Enkelejda Kasneci, director of the TUM Center for Educational Technologies, the three-year model project focuses on AI literacy and on enhancing writing and language skills through AI-supported tools.
The project also collaborates with 70 partner schools, supporting 650 talented children. Guided by Prof. Kasneci, the initiative aims to fully develop the individual potential of these children and open up future opportunities for them: by 2035, there will hardly be a job in Germany that does not involve AI.