Automotive Research Center

Human-Autonomy Interaction

Annual Plan

Dynamic Allocation of Information Through Auditory, Visual and Haptic Interfaces to Minimize Cognitive Burden and Maximize Performance During Crew-AI Teaming

Project Team

Principal Investigator

Wing-Yue Geoffrey Louie, Oakland University; Jennifer Vonk, Oakland University

Government

Mark Brudnak, Ryan Wood, Chris Mikulski, US Army GVSC

Student

Andrea Macklem-Zabel, Absalat Getachew, Oakland University

Project Summary

Project begins 2025.

The US Army Next Generation Combat Vehicle (NGCV) program is expected to produce closed-hatch vehicles with the crew situated inside to reduce risk. Crews are then envisioned to operate in hybrid virtual-physical environments to accomplish missions, using technologies that may include extended reality (XR), sensors, and intelligent agents (IA) that replace or augment existing crew capabilities. Hybrid virtual-physical environments refer to the convergence of digital and physical spaces or elements that soldiers interact with to support them in completing mission objectives. The future force will therefore have multiple sensors (on and off platform) combined with AI/ML agents, all vying for the soldier's attention. This brings new challenges in situational awareness (SA), communication, workload, and task allocation.

The overall objective of this ARC project is to develop approaches that dynamically allocate information to each crew member through auditory, visual, and haptic interfaces to maximize overall team mission performance. Herein, we define a task as an activity with a single goal and a mission as an activity with multiple tasks/goals. Each task can be accomplished only by attending to task-unique information and the situational awareness that crew members use to choose the correct decisions and/or actions. This proposal specifically leverages the unique capabilities afforded by XR technologies to address challenges in sensory and meaning-making processing highlighted by Multiple Resource Theory (MRT). Namely, MRT asserts that processing information over sensory channels in parallel, specifically visual and auditory, decreases the competition for cognitive resources that would occur with single-channel processing. XR technologies make it possible to manipulate: 1) the sensory channels available to a user during a task, 2) the type of information delivered on those channels, and 3) the way the information is delivered. We expect that by controlling the quantity, type, and level of processing required of the information on each sensory channel, we can improve individual task performance and, thereby, overall team performance by allocating tasks contingent on each member's sensory processing burden (a simple illustrative allocation sketch follows the research questions below). Hence, the core research questions (RQ) for this project are:

RQ1) How do we leverage in-vehicle XR technologies to design effective and efficient mappings of information to auditory, visual, and/or haptic interfaces?

RQ2) How do we design interfaces that maximize information delivery within the cognitive-load constraints of each sensory channel and optimize crew performance through optimal allocation of information?
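
To make the allocation idea behind these questions concrete, the minimal Python sketch below shows one way information items could be greedily assigned to crew members and sensory channels under per-channel cognitive-load budgets, in the spirit of MRT. All channel names, capacity values, load estimates, and the greedy policy are illustrative assumptions for exposition only; they do not represent the project's actual algorithm or data.

# Minimal, hypothetical sketch of MRT-inspired information allocation.
# All capacities, loads, and the greedy policy are assumed for illustration.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class InfoItem:
    name: str
    priority: float          # mission importance (higher = more urgent)
    load: Dict[str, float]   # estimated cognitive load per channel, 0..1


@dataclass
class CrewMember:
    name: str
    # Remaining capacity per sensory channel (assumed unit budgets).
    capacity: Dict[str, float] = field(default_factory=lambda: {
        "visual": 1.0, "auditory": 1.0, "haptic": 1.0})


def allocate(items: List[InfoItem],
             crew: List[CrewMember]) -> List[Tuple[str, str, str]]:
    """Greedily assign each item, highest priority first, to the crew member /
    channel pair with the most spare capacity, skipping assignments that
    would overload a channel."""
    plan = []
    for item in sorted(items, key=lambda i: -i.priority):
        best = None  # (member, channel, spare capacity after assignment)
        for member in crew:
            for channel, cost in item.load.items():
                spare = member.capacity[channel] - cost
                if spare >= 0 and (best is None or spare > best[2]):
                    best = (member, channel, spare)
        if best is not None:
            member, channel, spare = best
            member.capacity[channel] = spare
            plan.append((item.name, member.name, channel))
    return plan


if __name__ == "__main__":
    items = [
        InfoItem("threat alert", priority=0.9,
                 load={"auditory": 0.3, "visual": 0.5, "haptic": 0.2}),
        InfoItem("route update", priority=0.6,
                 load={"visual": 0.4, "auditory": 0.5}),
        InfoItem("vehicle status", priority=0.3,
                 load={"visual": 0.2, "haptic": 0.1}),
    ]
    crew = [CrewMember("commander"), CrewMember("driver")]
    for assignment in allocate(items, crew):
        print(assignment)

Running the sketch prints one (information item, crew member, channel) assignment per line; in practice, the per-channel load estimates would be derived from workload measures rather than fixed constants.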

Leveraging previous ARC project:

#2.25