Automotive Research Center

Human-Autonomy Interaction

Annual Plan

Multimodal Real-Time Cognitive Load and Emotional State Detection for Human-Machine Interactive Formations

Project Team

Principal Investigator

Rada Mihalcea, University of Michigan
Mihai Burzo, University of Michigan-Flint
Bogdan Epureanu, University of Michigan

Government

Matthew Castanier, Terrance Tierney, US Army GVSC

Industry

Glenn Taylor, Soar Technology Inc

Paul Rybski, Neya Systems

Student

Hussein Kokash (Postdoc), University of Michigan
PhD student TBD, University of Michigan

Project Summary

Project begins in 2025.

The growing presence of autonomous systems has significantly expanded our capacity to explore vast and uncharted areas. However, it has also sharply increased the volume of information that the users of these systems must process, often in real time; the resulting cognitive load can degrade performance, raise stress, and lead to unsafe vehicle operation. Adaptive automation has the potential to fundamentally change how users interact with autonomous systems by tailoring interface responses to the user’s state, enhancing bidirectional trust between autonomous systems and humans and increasing the performance of human-machine integrated formations. Yet the precise, real-time measurement of the user’s state, especially their cognitive load and emotional state, remains a critical challenge.

The primary goal of this research is to develop a multimodal, real-time system for detecting cognitive load and emotional state during human-machine interactions. By integrating physiological signals, facial expressions, linguistic cues, and task contexts, the project aims to build accurate and adaptive data-driven models that can measure these mental states in operational environments. Our key objective is to develop predictive models for cognitive load and emotional state that also account for their interdependence, with the ultimate aim of enhancing the adaptability of autonomous systems in various human-machine integrated formations.
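As a rough illustration of what such a multimodal, data-driven model could look like, the sketch below fuses pre-extracted feature vectors from the physiological, facial, linguistic, and task-context streams and predicts a cognitive load score alongside an emotional state class. The feature dimensions, layer sizes, and label set are illustrative assumptions, not the project's actual design.

```python
# Minimal late-fusion sketch (PyTorch); all dimensions are hypothetical.
import torch
import torch.nn as nn

class MultimodalStateEstimator(nn.Module):
    def __init__(self, phys_dim=32, face_dim=128, text_dim=768, task_dim=16,
                 hidden=64, n_emotions=6):
        super().__init__()
        # One small encoder per modality: physiological, facial, linguistic, task context
        self.phys = nn.Sequential(nn.Linear(phys_dim, hidden), nn.ReLU())
        self.face = nn.Sequential(nn.Linear(face_dim, hidden), nn.ReLU())
        self.text = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.task = nn.Sequential(nn.Linear(task_dim, hidden), nn.ReLU())
        # Fused representation feeds two heads: scalar cognitive load and emotion class
        self.fuse = nn.Sequential(nn.Linear(4 * hidden, hidden), nn.ReLU())
        self.load_head = nn.Linear(hidden, 1)               # regression: cognitive load
        self.emotion_head = nn.Linear(hidden, n_emotions)   # classification: emotional state

    def forward(self, phys, face, text, task):
        # Concatenate the per-modality embeddings (late fusion) and predict both states
        z = torch.cat([self.phys(phys), self.face(face),
                       self.text(text), self.task(task)], dim=-1)
        h = self.fuse(z)
        return self.load_head(h), self.emotion_head(h)

# Random inputs standing in for one time window of sensor features
model = MultimodalStateEstimator()
load, emotion_logits = model(torch.randn(1, 32), torch.randn(1, 128),
                             torch.randn(1, 768), torch.randn(1, 16))
```

In an operational setting the same structure would be fed by streaming feature extractors (e.g., heart-rate variability, facial action units, speech transcripts) rather than random tensors; late fusion is only one of several candidate integration strategies the project could evaluate.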

We will address two fundamental research questions:

  1. How can multimodal data (physiological, visual, linguistic, acoustic, user-based, and task-based information) be effectively integrated to measure cognitive load and emotional state in real time, and what modeling techniques are most effective?
  2. How do cognitive load and emotional state modulate each other, and how can we leverage their interaction to develop better predictive models? (One possible form of this coupling is sketched after this list.)
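To make the second question concrete, one simple way to let the two predictions inform each other is to condition each output head on the other's initial estimate, so the model can learn how cognitive load and emotional state modulate one another. The coupling scheme, layer sizes, and joint loss below are illustrative assumptions only, not the project's committed approach.

```python
# Hedged sketch of coupled prediction heads over fused multimodal features (PyTorch).
import torch
import torch.nn as nn

class CoupledHeads(nn.Module):
    def __init__(self, hidden=64, n_emotions=6):
        super().__init__()
        self.load_init = nn.Linear(hidden, 1)                      # first-pass load estimate
        self.emotion_init = nn.Linear(hidden, n_emotions)          # first-pass emotion estimate
        self.load_head = nn.Linear(hidden + n_emotions, 1)         # load, conditioned on emotion
        self.emotion_head = nn.Linear(hidden + 1, n_emotions)      # emotion, conditioned on load

    def forward(self, h):
        # First pass: independent estimates from the fused features h
        load0 = self.load_init(h)
        emo0 = self.emotion_init(h)
        # Second pass: each head also sees the other state's estimate
        load = self.load_head(torch.cat([h, emo0], dim=-1))
        emo = self.emotion_head(torch.cat([h, load0], dim=-1))
        return load, emo

# Joint objective: regression loss for load plus classification loss for emotion
heads = CoupledHeads()
h = torch.randn(8, 64)                 # fused multimodal features for a batch of windows
load_true = torch.randn(8, 1)          # e.g., NASA-TLX-style load scores
emo_true = torch.randint(0, 6, (8,))   # emotion class labels
load_pred, emo_pred = heads(h)
loss = (nn.functional.mse_loss(load_pred, load_true)
        + nn.functional.cross_entropy(emo_pred, emo_true))
```

Training the two heads jointly, rather than as separate models, is one way to exploit the interdependence the second research question targets; other options include shared-then-split multitask architectures or explicitly modeling one state as an input feature for the other.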

#2.24