Automotive Research Center

Systems of Systems & Integration

Annual Plan

LLM-Enabled Operation Management of Multi-Agent Systems

Project Team

Principal Investigator

Bogdan Epureanu, University of Michigan

Government

Phil Frederick, US Army GVSC

Industry

Chenyu Yi, Mercedes-Benz

Student

Soham Purhoit, University of Michigan

Project Summary

Project begins in 2025.

Autonomous vehicles will be used widely in military operations, teaming with humans. In particular, human-machine integrated formations (HMIF) with novel autonomous vehicles are set to revolutionize military operations by redefining traditional tactics and strategies. Operating alongside human teams, autonomous agents offer significant advantages such as increased operational efficiency and reduced risk to human personnel. However, their deployment also introduces substantial challenges, particularly in communication, coordination, and command structures between human leaders and autonomous systems. This complexity presents a significant barrier to human-machine integration in operations where humans must quickly and efficiently manage multiple agents simultaneously.

This work proposes a system of integrated software and algorithms, demonstrated using the ARC software and vehicle platforms, that facilitates efficient, environment-aware two-way communication and control of 10+ agents by a human operator. Leveraging Large Language Models (LLMs), the system enhances human-autonomy interaction by reducing cognitive load and optimizing the abstraction and de-abstraction of information. In the forward communication path, high-level commands are dynamically de-abstracted into an optimized list of feasible subtasks, adaptable in response to real-time agent observations. Conversely, in the backward communication path, multimodal data is abstracted and fused to highlight critical information contextualized to the intent of the original command, enabling the LLM and human operator to make higher-level interpretations of the environment and operation state more efficiently. Thus, this research aims to answer the following two fundamental research questions. Addressing these two questions together will result in a new LLM-based bi-directional communication and control method.

RQ1: How can a group of (10+) autonomous vehicles be operated simultaneously by a human without requiring the human to specify detailed tasks for each agent? The novelty is to create, for the first time, a specialized active LLM system that generates detailed operational demands from high-level abstract human commands. Thus, a simple high-level command such as "scout the right flank behind the tree line" is translated into a detailed list of intent-based instructions to be executed by formations of autonomous agents. For example, an intent-based instruction generated by the LLM may be "advance to provide cover to another agent by taking a position that is hard for the enemy to detect, 200 meters ahead of that agent's current location". Once the list of instructions is created, each individual agent decides which parts of the demand to satisfy using its task allocation algorithms (e.g., those developed in a previous ARC project).
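The forward communication path described in RQ1 can be sketched as a small pipeline: a high-level command is de-abstracted into intent-based instructions, which the agents then allocate among themselves. This is only an illustrative sketch; in the proposed system an LLM would perform the de-abstraction, whereas here a rule-based stub stands in for it, and the names `IntentInstruction`, `deabstract_command`, and `allocate` are hypothetical, not part of any ARC software.

```python
from dataclasses import dataclass

@dataclass
class IntentInstruction:
    intent: str        # e.g., "provide_cover"
    constraints: dict  # e.g., {"standoff_m": 200, "concealment": "high"}

def deabstract_command(command: str) -> list[IntentInstruction]:
    """Expand a high-level command into intent-based subtasks.

    A real implementation would prompt an LLM with the command plus
    mission context; this stub returns a fixed expansion for the
    example command from the project summary.
    """
    if "scout" in command.lower():
        return [
            IntentInstruction("advance_and_scout",
                              {"sector": "right flank", "cover": "tree line"}),
            IntentInstruction("provide_cover",
                              {"standoff_m": 200, "concealment": "high"}),
        ]
    return []

def allocate(instructions: list[IntentInstruction],
             agents: list[str]) -> dict[str, str]:
    """Greedy placeholder for the agents' own task-allocation algorithms."""
    return {agent: instr.intent
            for agent, instr in zip(agents, instructions)}

tasks = allocate(deabstract_command("scout the right flank behind the tree line"),
                 ["UGV-1", "UGV-2"])
```

In the envisioned system the allocation step is decentralized: the operator issues one command, the LLM produces the instruction list, and each agent selects its own share of the demand.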

RQ2: How can large multi-modal sensor and observation data from (10+) autonomous agents be communicated to a human with minimal cognitive load, while still providing intent-based contextual summarization to enable effective communication? The novelty is to create, for the first time, an LLM-based system combined with transformer algorithms capable of fusing, summarizing, and abstracting data and observations from groups of agents, making the information contextual and easy to communicate to a human with minimal cognitive workload.
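The backward communication path of RQ2 can be sketched in the same spirit: observations from many agents are filtered and fused relative to the intent of the original command before reaching the operator. In the proposed system the relevance scoring and summary would come from the LLM/transformer stack; this stub substitutes simple keyword overlap, and `relevance` and `fuse_and_summarize` are hypothetical names used only for illustration.

```python
def relevance(observation: str, intent_keywords: set[str]) -> int:
    """Placeholder relevance score: keyword overlap with the command intent."""
    return len(intent_keywords & set(observation.lower().split()))

def fuse_and_summarize(observations: dict[str, str], intent: str) -> str:
    """Fuse per-agent reports into one intent-contextualized summary line.

    Only observations relevant to the original command's intent are
    surfaced, abstracting away agent chatter (e.g., routine status).
    """
    keywords = set(intent.lower().split())
    relevant = {agent: obs for agent, obs in observations.items()
                if relevance(obs, keywords) > 0}
    body = "; ".join(f"{a}: {o}" for a, o in sorted(relevant.items()))
    return f"{len(relevant)}/{len(observations)} agents report on '{intent}': {body}"

summary = fuse_and_summarize(
    {"UGV-1": "tree line clear, no contact",
     "UGV-2": "vehicle tracks near tree line",
     "UGV-3": "battery nominal"},
    "scout tree line")
```

Here the routine "battery nominal" report is abstracted away, so the operator sees only the two observations that bear on the scouting intent, which is the cognitive-load reduction the backward path targets.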

Previous ARC project outcomes leveraged: Dynamic Teaming of Autonomous Vehicles to Address Intelligent Adversarial Actions

#5.24