Chair of Human-Centered Technologies for Learning
Research Clusters
AI for Empowerment and Learning

Focus: Developing AI systems that amplify human learning, creativity, and agency through collaborative human-AI partnerships.
Technologies: Artificial Intelligence (AI), machine learning, natural language processing, generative models.

Our research pioneers human-centered AI that transforms how individuals learn, create, and thrive. By fostering collaborative partnerships between humans and AI, we enhance educational experiences, spark innovative thinking, and promote agency in domains such as professional development, social interaction, and lifelong learning.

Immersive Environments for Human Augmentation

Focus: Advancing human perception, collaboration, and innovation through immersive technologies.
Technologies: Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), spatial computing.

We create immersive environments that augment human perception and capabilities, empowering users to explore virtual worlds, design innovative solutions, and collaborate in enhanced realities. By integrating human-centered AI, these systems adapt to user needs, enabling applications in fields such as education, training, entertainment, and social interaction.
Multimodal and Adaptive Systems for Empowered Interaction

Focus: Enabling intuitive, personalized, and inclusive human-technology interaction through dynamic, multi-sensory systems.
Technologies: AI, VR, AR, eye-tracking, multimodal sensing.

We develop multimodal and adaptive systems that empower users by making technology responsive, intuitive, and tailored to individual needs. By combining multi-sensory interfaces with human-centered AI, these systems support seamless interaction for creative expression, professional workflows, and inclusive applications, ensuring accessibility for diverse users across contexts such as collaboration, productivity, and innovation.

Eye Tracking and Gaze-Based Interaction

Focus: Harnessing eye-tracking to enhance human attention, intent, and social connection in interactive systems.
Technologies: Eye-tracking, AI, multimodal sensing, VR/AR integration.

Our gaze-based research augments cognitive and social capabilities by using eye-tracking to capture user intent and enhance interaction. Integrated with human-centered AI, these systems empower users in real-time collaboration, creative design, and inclusive communication, with applications spanning education, healthcare, gaming, and professional environments.
News
Paper Accepted at AIED 2026
We are happy to announce that our paper has been accepted at the International Conference on Artificial Intelligence in Education (AIED) 2026, taking place in Seoul, South Korea.
Should AI Ask First? Investigating the Effects of Proactive vs Reactive AI Mentoring in Self-Directed Learning - Khaoula Otmani, Anna Bodonhelyi, Babette Bühler, Enkelejda Kasneci
Congratulations to all authors!
10 Papers Accepted at ETRA 2026
We are excited to announce that our group has 10 papers accepted at the ACM Symposium on Eye Tracking Research & Applications (ETRA) 2026, taking place June 1–4 in Marrakesh, Morocco.
Our contributions span four full papers, one short paper, and five workshop papers, covering topics from LLM-based eye-tracking event detection and privacy-preserving scanpath comparison to misinformation susceptibility and affordable wearable eye-tracking platforms.
Congratulations to all authors!
ACL 2026 Paper Accepted
From Scoring to Explanations: Evaluating SHAP and LLM Rationales for Rubric-based Teaching Quality Assessment
This paper introduces a general framework for generating and evaluating sentence-level explanations in rubric-based teaching quality assessment by combining SHAP attributions with LLM-generated rationales. Experiments on classroom transcripts show that while fine-tuned language models outperform LLMs in scoring accuracy, SHAP provides significantly more faithful and transferable explanations than LLM rationales, which are often inconsistent and weakly aligned with model predictions. Overall, the work highlights the limitations of current LLM explanations and offers a principled approach to improving transparency in high-stakes educational assessment settings.
Journal Paper Accepted in Computers & Education: AI
With generative AI (GenAI) becoming increasingly prevalent in higher education, concerns about overreliance and declining critical engagement highlight the need to better understand learning-to-learn as a lifelong learning skill and how it can be supported. Based on a PRISMA-ScR scoping review, the paper synthesizes existing definitions into a three-layered framework that can help clarify the concept across levels of broadness. Connecting the framework to emerging GenAI research, it offers a structured starting point for developing pedagogically grounded AI-supported learning practices and systems, reframing GenAI from a potential source of cognitive offloading into a tool that can support effective learning processes.
DOI: https://doi.org/10.1016/j.caeai.2026.100575
06.03.2025: Paper Acceptance at CHI'26!

We are thrilled to share that Carrie Lau, a doctoral researcher in our group, will be presenting her paper at CHI 2026, the ACM Conference on Human Factors in Computing Systems. Her paper explores how the appearance of AI avatars influences job applicants' perceptions of trust, fairness, and bias in AI-conducted interviews, offering design insights for more equitable AI hiring systems. The paper has also received a CHI 2026 Honourable Mention Award, recognizing its originality, rigor, and potential impact in the field of human-computer interaction. Congratulations to Carrie and her co-authors for their outstanding contributions!
Read More: Skin-Deep Bias: How Avatar Appearances Shape Perceptions of AI Hiring