Automotive Research Center

Human-Autonomy Interaction


Latent-space Generative Learning for Gameplay Scenario Generation at Scale

Project Team

Principal Investigator

Alex Gorodetsky, University of Michigan

Government

David Gorsich, U.S. Army GVSC

Faculty

Shravan Veerapaneni, University of Michigan

Industry

Santi Adavani, PostgresML

Student

Doruk Aksoy (Aero), Brian Chen (Math), University of Michigan

Project Summary

Project began in 2024.

The U.S. Army increasingly relies on video-game environments for simulating the battlefield, training soldiers, assessing system performance, planning missions, and improving command and control. These environments exhibit many of the difficulties that arise in human-centered design and human-autonomy teaming. Given recorded player experiences in such an environment, it is desirable to learn the strategies and behaviors that distinguish good players from bad ones. This problem can be posed as one of imitation learning or behavioral cloning: can we clone good players from observations of their actions?
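The behavioral-cloning formulation can be illustrated with a deliberately small sketch: treat an expert's (observation, action) pairs as supervised data and fit a policy to them. Everything below is an illustrative assumption, not the project's actual setup (which involves Atari-style image sequences and learned models): a toy one-dimensional tracking task, a scripted expert, and a linear least-squares policy standing in for a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expert" in a 1-D tracking task: observe position x, act to move toward 0.
def expert_action(x):
    return 1 if x < 0 else 0  # 1 = move right, 0 = move left

# 1. Collect demonstrations as (observation, action) pairs.
states = rng.uniform(-1.0, 1.0, size=(500, 1))
actions = np.array([expert_action(x[0]) for x in states])

# 2. Fit a simple policy by least squares on the action labels
#    (a stand-in for the neural-network policies used in practice).
X = np.hstack([states, np.ones((len(states), 1))])  # add a bias column
w, *_ = np.linalg.lstsq(X, actions, rcond=None)

def cloned_policy(x):
    return int(np.dot([x, 1.0], w) > 0.5)

# 3. The clone should agree with the expert on held-out states.
test_states = rng.uniform(-1.0, 1.0, size=200)
agreement = np.mean([cloned_policy(x) == expert_action(x) for x in test_states])
```

The same recipe scales up by replacing the linear fit with a richer function class; the supervised-learning structure of the problem is unchanged.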

We approach this problem from an imitation-learning perspective under three constraints: (1) limited access to the underlying game engine; (2) reliance primarily on post-processed results from existing games; and (3) a significantly limited ability to obtain feedback from human players. Moreover, we approach the problem through a mathematical abstraction so that the underlying limitations of the modeling paradigm can be examined. A number of computational problems must be addressed before direct deployment: (1) generating useful scenarios; (2) learning from relatively small numbers of high-dimensional data points (video-game image sequences); and (3) effectively cloning human and AI players to extract their strategies. Our proposed methodology is to investigate generative modeling in reduced-dimensional spaces, tackling the limited extrapolatory power of generative models trained on data confined to small regions of the space.
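The reduced-dimensional generative pipeline described above can be sketched minimally under simplifying assumptions: PCA stands in for a learned encoder/decoder, a Gaussian stands in for the latent generative model, and synthetic points near a low-dimensional subspace stand in for game-image sequences. None of these choices reflect the project's actual models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "frames": high-dimensional points that actually live near a
# low-dimensional subspace (a crude stand-in for game-image data).
d_ambient, d_latent, n = 64, 3, 400
basis, _ = np.linalg.qr(rng.standard_normal((d_ambient, d_latent)))
latent_true = rng.standard_normal((n, d_latent))
data = latent_true @ basis.T + 0.01 * rng.standard_normal((n, d_ambient))

# 1. Learn a linear encoder/decoder via PCA (an autoencoder would replace this).
mean = data.mean(axis=0)
_, _, Vt = np.linalg.svd(data - mean, full_matrices=False)
encode = lambda x: (x - mean) @ Vt[:d_latent].T
decode = lambda z: z @ Vt[:d_latent] + mean

# 2. Fit a simple generative model (here a Gaussian) in the latent space.
z = encode(data)
mu, cov = z.mean(axis=0), np.cov(z, rowvar=False)

# 3. Generate new scenarios: sample latent codes, then decode.
z_new = rng.multivariate_normal(mu, cov, size=10)
samples = decode(z_new)

# Decoded samples lie on the learned manifold by construction:
recon_err = np.linalg.norm(decode(encode(samples)) - samples)
```

The point of working in the latent space is that the generative model is fit over 3 coordinates rather than 64, which is where the data-efficiency argument in the text comes from.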

The traditional approach to learning from gaming data typically involves extracting hand-designed feature sets. Our research objective is to develop methods that work more broadly and with less human intervention, operating directly on the video-game images, player actions, and any additional auxiliary information. The basic research problems that arise in this context stem from the resource constraints encountered when applying state-of-the-art methods in realistic environments. Specifically, we work in a setting with significant restrictions on the resources available for both data generation and computation. Furthermore, we seek to answer foundational questions regarding how best to use sparse data and how to carefully tailor the process of obtaining additional data.

We seek provably optimal and scalable mathematical formulations that address the following questions:

  1. Can datasets be compressed into latent spaces that facilitate accurate development of generative models?
  2. Can we ensure or constrain generative models to generate data that exhibit expected structure?
  3. Can these generative models be used to enhance the training of game players by facilitating the rapid creation of game scenarios?

Moreover, we consider these fundamental questions in the context of computational and data resources that are fairly limited, with only a few GPUs available for training.
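The first question above concerns compressing datasets into latent representations, and the team's prior work centers on tensor-train (TT) decompositions. As a toy illustration only (a basic TT-SVD, not the incremental algorithm from the publication list below), the sketch assumes a small dense tensor and a fixed target rank:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a full tensor into tensor-train (TT) cores via sequential SVDs."""
    shape = tensor.shape
    d = len(shape)
    cores, r = [], 1
    mat = tensor.reshape(r * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        rk = min(max_rank, len(S))        # truncate to the target TT-rank
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        mat = S[:rk, None] * Vt[:rk]      # carry the remainder forward
        r = rk
        if k < d - 2:
            mat = mat.reshape(r * shape[k + 1], -1)
    cores.append(mat.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into a full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([full.ndim - 1], [0]))
    return full.reshape([c.shape[1] for c in cores])

# Build a tensor that is exactly TT-rank 2, then compress and reconstruct it.
rng = np.random.default_rng(2)
true_cores = [rng.standard_normal(s) for s in [(1, 4, 2), (2, 5, 2), (2, 6, 1)]]
tensor = tt_reconstruct(true_cores)
cores = tt_svd(tensor, max_rank=2)
err = np.linalg.norm(tt_reconstruct(cores) - tensor) / np.linalg.norm(tensor)
```

The appeal in the low-resource setting described above is that the cores store far fewer numbers than the full tensor (here 2x4x2 + 2x5x2 + 2x6 = 48 entries versus 120), with the gap widening rapidly in higher dimensions.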

Publications from Prior Work closely related to the proposed project:

  1. D. Aksoy, D. J. Gorsich, S. Veerapaneni, and A. A. Gorodetsky, "An Incremental Tensor Train Decomposition Algorithm," accepted, SIAM Journal on Scientific Computing, 2023. Available: http://arxiv.org/abs/2211.12487
  2. B. Chen, S. Tandon, D. Gorsich, A. Gorodetsky, and S. Veerapaneni, "Behavioral Cloning in Atari Games Using a Combined Variational Autoencoder and Predictor Model," in 2021 IEEE Congress on Evolutionary Computation (CEC), Kraków, Poland, Jun. 2021, pp. 2077–2084. doi: 10.1109/CEC45853.2021.9505001
  3. A. A. Gorodetsky, C. Safta, and J. D. Jakeman, "Reverse-mode differentiation in arbitrary tensor network format: with application to supervised learning," Journal of Machine Learning Research, 23(1), pp. 6400–6428, 2022.
  4. A. A. Gorodetsky and J. D. Jakeman, "Gradient-based optimization for regression in the functional tensor-train format," Journal of Computational Physics, 374, pp. 1219–1238, 2018.
  5. A. Gorodetsky, S. Karaman, and Y. Marzouk, "A continuous analogue of the tensor-train decomposition," Computer Methods in Applied Mechanics and Engineering, 347, pp. 59–84, 2019.
  6. A. Gorodetsky, S. Karaman, and Y. Marzouk, "High-dimensional stochastic optimal control using continuous tensor decompositions," The International Journal of Robotics Research, 37(2–3), pp. 340–377, 2018.
  7. S. De, E. Corona, P. Jayakumar, and S. Veerapaneni, "Tensor-train compression of discrete element method simulation data," arXiv preprint arXiv:2210.08399, 2022.
  8. E. Corona, D. Gorsich, P. Jayakumar, and S. Veerapaneni, "Tensor train accelerated solvers for nonsmooth rigid body dynamics," Applied Mechanics Reviews, 71(5), 050804, 2019.

#2.A115