Anuschka Schmitt

Wissenschaftliche Mitarbeiterin
Müller-Friedbergstrasse 8
9000 St. Gallen
+41 71 224 3225


Thiemo Wambsganss, Anuschka Schmitt, Thomas Mahning, Anja Ott, Sigita Soellner, Ngoc Anh Ngo, Jerome Geyer-Klingeberg, Janina Nakladal, Jan Marco Leimeister
Educational process mining (EPM) offers new possibilities to discover, monitor, improve, or predict students’ learning processes using data about their learning activities captured in technology-mediated information systems (IS). Although EPM has recently attracted considerable research interest, there is still limited shared knowledge about the distinctive design characteristics of EPM from an integrative perspective. To address this gap, we conducted a systematic literature review to identify EPM characteristics. Building on a technology-mediated learning perspective, we developed a taxonomy that classifies EPM characteristics into four major categories (i.e., purpose, user, input, analysis). We evaluated and refined our taxonomy with ten domain experts, identified three clusters in the reviewed literature, and derived six archetypes of EPM scenarios based on our categorization. Finally, we formulate a novel research agenda to guide researchers in systematizing and synthesizing research on different technological embeddings of EPM in a student’s learning process.

Beyond AI-based systems’ potential to augment decision-making, reduce organizational resources, and counter human biases, the unintended consequences of such systems have been largely neglected so far. Researchers are undecided on whether erroneous advice acts as an impediment to system use or is blindly relied upon. As part of an experimental study, we examine the impact of incorrect system advice and how to design for failure-prone AI. In an experiment with 156 subjects, we find that, although incorrect algorithmic advice is trusted less, users adapt their answers to a system’s incorrect recommendations. While transparency on a system’s accuracy levels fosters trust and reliance in the context of incorrect advice, an opposite effect is found for users exposed to correct advice. Our findings point towards a paradoxical gap between stated trust and actual behavior. Furthermore, transparency mechanisms should be deployed with caution, as their effectiveness is intertwined with system performance.

Voice assistants’ increasingly nuanced and natural communication opens new opportunities for user experiences and task automation, while challenging existing patterns of human-computer interaction. A fragmented research field, together with constant technological advancements, impedes a common understanding of the prevalent design features of voice-based interfaces. As part of this study, 86 papers across domains are systematically identified and analysed to arrive at a common understanding of voice assistants. The review highlights perceptual differences to other human-computer interfaces and points out relevant auditory cues. Key findings regarding those cues’ impact on user perception and behaviour are discussed along with three design strategies: 1) personification, 2) individualization, and 3) contextualization. Avenues for future research are lastly deduced. Our results provide relevant opportunities for researchers and designers alike to advance the design and deployment of voice assistants.