[AIP Distinguished Lecture] Talk by Prof. Klaus-Robert Müller (BIFOLD, Berlin / TU Berlin, ML Group / Korea University, Seoul / Max Planck Institut für Informatik, Saarbrücken)
This talk will be held in a hybrid format, both in person at the AIP Open Space of RIKEN AIP (Nihonbashi office) and online via Zoom. AIP Open Space: *available only to AIP researchers.
DATE & TIME
November 17, 2025: 3:30 pm - 4:30 pm (JST)
TITLE
How can the behavior of AI systems be more closely aligned with that of human users?
SPEAKER
Prof. Klaus-Robert Müller (BIFOLD, Berlin / TU Berlin, ML Group / Korea University, Seoul / Max Planck Institut für Informatik, Saarbrücken)
ABSTRACT
Deep neural networks have achieved success across a wide range of applications, including as models of human behavior in vision tasks.
However, neural network training and human learning differ in fundamental ways, and neural networks often fail to generalize as robustly as humans do, raising questions about how similar their underlying representations are. What is missing for modern learning systems to exhibit more human-like behavior? We highlight a key misalignment between vision models and humans: whereas human conceptual knowledge is hierarchically organized, from fine- to coarse-grained distinctions, model representations do not accurately capture all of these levels of abstraction. We therefore propose human-aligned models that more accurately approximate human behavior and uncertainty. Interestingly, they also perform better on a diverse set of machine learning tasks, improving generalization and out-of-distribution robustness. Infusing neural networks with additional human knowledge thus yields a best-of-both-worlds representation that is both more consistent with human cognition and more practically useful, paving the way toward more robust, interpretable, and human-like artificial intelligence systems.
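As a rough illustration of what "infusing neural networks with human knowledge" can look like in practice, the minimal Python sketch below learns a linear transform on top of frozen image embeddings so that their similarities better predict human odd-one-out judgments on triplets of stimuli. This is an assumed, generic alignment setup, not necessarily the method discussed in the talk; the embeddings, triplet data, and dimensions are synthetic placeholders.

# Illustrative sketch (assumed setup, not the speaker's published method):
# align frozen image embeddings with human similarity judgments by learning a
# linear transform so that dot-product similarity predicts which pair humans
# judged most similar in each triplet (odd-one-out task).
import torch
import torch.nn as nn

torch.manual_seed(0)

n_items, dim = 200, 64                      # hypothetical: 200 stimuli, 64-d frozen features
embeddings = torch.randn(n_items, dim)      # stand-in for a frozen vision model's embeddings

# Hypothetical human data: triplets (i, j, k) where humans judged k the odd one
# out, i.e. the pair (i, j) was perceived as most similar.
triplets = torch.randint(0, n_items, (5000, 3))

transform = nn.Linear(dim, dim, bias=False) # the learned alignment map
optimizer = torch.optim.Adam(transform.parameters(), lr=1e-3)

for epoch in range(20):
    z = transform(embeddings)               # human-aligned representation space
    i, j, k = triplets.T
    sim_ij = (z[i] * z[j]).sum(-1)          # similarity of the human-chosen pair
    sim_ik = (z[i] * z[k]).sum(-1)
    sim_jk = (z[j] * z[k]).sum(-1)
    # Softmax over the three pairwise similarities; the human-chosen pair (i, j)
    # at index 0 should receive the highest probability.
    logits = torch.stack([sim_ij, sim_ik, sim_jk], dim=-1)
    loss = nn.functional.cross_entropy(
        logits, torch.zeros(len(triplets), dtype=torch.long))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In such a setup, the frozen backbone keeps its task performance while the learned transform reshapes the similarity structure toward human judgments; the transformed space can then be evaluated for both behavioral alignment and downstream robustness.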