Chair of Human-Centered Technologies for Learning
Research Clusters
AI for Empowerment and Learning

Focus: Developing AI systems that amplify human learning, creativity, and agency through collaborative human-AI partnerships.
Technologies: Artificial Intelligence (AI), machine learning, natural language processing, generative models.

Our research pioneers human-centered AI that transforms how individuals learn, create, and thrive. By fostering collaborative partnerships between humans and AI, we enhance educational experiences, spark innovative thinking, and promote agency in domains like professional development, social interaction, and lifelong learning.

Immersive Environments for Human Augmentation

Focus: Advancing human perception, collaboration, and innovation through immersive technologies.
Technologies: Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), spatial computing.

We create immersive environments that augment human perception and capabilities, empowering users to explore virtual worlds, design innovative solutions, or collaborate in enhanced realities. By integrating human-centered AI, these systems adapt to user needs, enabling applications in fields like education, training, entertainment, and social interaction.
Multimodal and Adaptive Systems for Empowered Interaction

Focus: Enabling intuitive, personalized, and inclusive human-technology interaction through dynamic, multi-sensory systems.
Technologies: AI, VR, AR, eye-tracking, multimodal sensing.

We develop multimodal and adaptive systems that empower users by making technology responsive, intuitive, and tailored to individual needs. By combining multi-sensory interfaces with human-centered AI, these systems support seamless interaction for creative expression, professional workflows, and inclusive applications, ensuring accessibility for diverse users across contexts like collaboration, productivity, and innovation.

Eye Tracking and Gaze-Based Interaction

Focus: Harnessing eye-tracking to enhance human attention, intent, and social connection in interactive systems.
Technologies: Eye-tracking, AI, multimodal sensing, VR/AR integration.

Our gaze-based research augments cognitive and social capabilities by using eye-tracking to capture user intent and enhance interaction. Integrated with human-centered AI, these systems empower users in real-time collaboration, creative design, and inclusive communication, with applications spanning education, healthcare, gaming, and professional environments, ensuring accessibility and engagement for all.
News
07.11.2025: Our Team Wins Second Place in the AIAI Competition

We are pleased to announce that our team — Ivo Bueno, Ruikun Hou, Dr. Babette Bühler, and Dr. Tim Fütterer — has won second place in the AI for Advancing Instruction (AIAI) Competition 2025, organized by DrivenData in collaboration with the University of Virginia.
The AIAI challenge invited participants to develop machine learning models capable of automatically identifying instructional activities in classroom videos and discourse content in anonymized audio transcripts.
Our submission focused on transformer-based architectures optimized for multimodal data, achieving excellent performance across both video and audio tasks. The competition comprised two phases: model development on labeled training data, followed by evaluation on an unseen test set. Final rankings were determined by the accuracy of the predicted instructional activity and discourse labels.
This competition brought together leading research teams from around the world, advancing the state of the art in AI-assisted education research.
We warmly congratulate Ivo, Ruikun, Babette, and Tim on this remarkable achievement, and extend our thanks to the competition organizers and all participating teams for their inspiring contributions to this important field.
Zugang Gestalten Conference, Leipzig

Carrie Lau was invited by the German Commission for UNESCO to speak at the Zugang Gestalten Conference, held at the Deutsche Nationalbibliothek in Leipzig. Her talk explored how Virtual Reality (VR) and Generative AI can democratize access to cultural heritage.
🔗 Watch the talk on YouTube
31.08 – 03.09.2025: Route2Vec Earns Honorable Mention at MuC 2025 in Chemnitz

We are pleased to announce that Philipp Hallgarten has received an Honorable Mention at Mensch und Computer (MuC) 2025, held in Chemnitz, Germany, for the paper “Route2Vec: Enabling Efficient Use of Driving Context through Contextualized Route Representations.”
The paper presents Route2Vec, an attention-based self-supervised framework that encodes contextual data from driving routes into compact embeddings and thus enables the design of context-aware in-vehicle interfaces.
Paper DOI: https://doi.org/10.1145/3743049.3743056
28.08.2025: Babette Bühler Wins FUTURE EDUCATION Early Career Award

We are proud to announce that our postdoctoral researcher, Dr. Babette Bühler, has been awarded the FUTURE EDUCATION Early Career Award 2025 in the category “Educational Technology(ies): interdisciplinary, innovative, disruptive”.
She received the award for her paper: “Temporal Dynamics of Meta-Awareness of Mind Wandering During Lecture Viewing: Implications for Learning and Automated Assessment Using Machine Learning.” https://doi.org/10.1037/edu0000903
The award was presented on 28 August 2025 at the EARLI Conference at the University of Graz. This year marked the second edition of the FUTURE EDUCATION Early Career Awards, which celebrate excellent transdisciplinary research in education, learning, development, and teaching. Awardees were selected through a rigorous single-blind peer review process, with submissions evaluated by both FUTURE EDUCATION network reviewers and EARLI community experts.
We warmly congratulate Babette on this well-deserved recognition!
22.07.2025: Papers Accepted at ICMI 2025 and ECAI 2025

We are pleased to announce that our group has had two full papers and one doctoral consortium paper accepted at the International Conference on Multimodal Interaction (ICMI 2025), and one paper accepted at the European Conference on Artificial Intelligence (ECAI 2025).
ICMI 2025:
- "Multimodal Behavioral Patterns Analysis with Eye-Tracking and LLM-Based Reasoning"
- “Adaptive Gen-AI Guidance in Virtual Reality: A Multimodal Exploration of Engagement in Neapolitan Pizza-Making”
- Designing and Evaluating Gen-AI for Cultural Resilience (Doctoral Consortium Track)
ECAI 2025:
- "TRUCE-AV: A Multimodal Dataset for Trust and Comfort Estimation in Autonomous Vehicles"