Research

Our research revolves around Human-Centered Technologies and AI systems that sense and infer a user's cognitive state, level of task-related expertise, actions, and intentions from multimodal data, and that provide this information to media and assistive technologies across many everyday activities, especially in the context of learning.


PEER: An AI-based educational platform aimed at enhancing essay writing skills

PEER (Paper Evaluation and Empowerment Resource) is a tool designed to assist students in writing essays, from primary school to university level and across various genres. The tool leverages AI to provide personalized feedback and improvement suggestions based on an analysis of essays submitted either as a photograph or as direct text input, while anonymizing the data collected for continuous improvement.

SARA: Smart AI Reading Assistant

SARA was developed to help students improve their reading and comprehension skills. The tool combines artificial intelligence with eye-movement tracking to identify passages in a text where comprehension may be difficult and offers individually tailored support.

VR Classroom

The VR Classroom project builds an immersive virtual learning environment that supports data tracking across multiple modalities, such as eye tracking and hand tracking, using corresponding devices and sensors, in particular head-mounted displays (HMDs).

Privacy Preserving Eye-tracking Applications

We develop privacy-preserving eye-tracking applications using techniques such as differential privacy, federated learning, domain adaptation, and randomized encryption. Our work emphasizes applications in virtual environments, and part of it will be integrated into our VR Classroom project.
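To illustrate one of the techniques mentioned above, here is a minimal sketch of a Laplace mechanism applied to gaze data, assuming normalized screen coordinates in [0, 1]; the function name, parameters, and sensitivity value are illustrative assumptions, not part of our actual pipeline.

```python
import numpy as np

def privatize_gaze(points, epsilon=1.0, sensitivity=1.0, rng=None):
    """Perturb normalized (x, y) gaze samples with Laplace noise.

    Laplace mechanism: noise scale = sensitivity / epsilon; smaller
    epsilon means stronger privacy and noisier output. Both the
    function name and the sensitivity of 1.0 (full coordinate range)
    are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    points = np.asarray(points, dtype=float)
    scale = sensitivity / epsilon
    noisy = points + rng.laplace(loc=0.0, scale=scale, size=points.shape)
    # Clamp back to the valid normalized screen range.
    return np.clip(noisy, 0.0, 1.0)

# Example: three consecutive normalized gaze samples.
gaze = [(0.42, 0.31), (0.44, 0.33), (0.47, 0.35)]
noisy_gaze = privatize_gaze(gaze, epsilon=2.0)
```

In practice, the privacy budget epsilon would be tuned per application, and repeated queries over the same recording consume additional budget; this sketch only shows the per-sample perturbation step.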

Assisting the remote video learner with self-regulation support

Online video learning is becoming increasingly important in educational contexts. However, remote video learning challenges not only students' self-regulation but also teachers' ability to detect self-regulation problems. This project addresses the issue at the interface of psychology, educational science, and computer science: potential self-regulation problems will be detected automatically, and supporting measures, for example optimized instructional videos, will be developed.

The Museum Gaze

Recent studies have used cutting-edge mobile eye-tracking technology to capture eye movements in natural, uncontrolled settings such as museums. Our collaboration with the Cognitive Research in Art History (CReA) Lab at the University of Vienna aims to comprehensively investigate this complex phenomenon, which has long been of interest in art history and psychology. In partnership with the experienced curatorial team at the Belvedere Museum in Vienna, we are conducting four distinct research studies, each focusing on a different aspect of the museum gaze.

ArtisanXR: Immersive Learning Experience with Conversational AI

ArtisanXR aims to safeguard and promote intangible cultural heritage (ICH), especially endangered or vanished arts and craftsmanship, by harnessing cutting-edge technologies to preserve, document, and disseminate the knowledge, skills, and expressions that define the rich cultural identities of communities worldwide. The project fosters cross-cultural understanding and uses conversational AI agents to convey cultural and historical background and to guide learning. By creating a comprehensive digital archive, we hope to facilitate knowledge preservation and inspire future generations through accessible, engaging learning experiences.