In collaboration with GAIDG Lab external collaborators Prof. Petros Faloutsos (York University), Prof. Glen Berseth (Université de Montréal and Mila), and Prof. Mubbasir Kapadia, Prof. Vladimir Pavlovic, and Ph.D. candidate Kaidong Hu (Rutgers University), we developed Heterogeneous Crowd Simulation using Parametric Reinforcement Learning, a multi-agent reinforcement learning approach that uses parameter sharing and goal conditioning to learn a single shared policy, affording diverse crowd simulations without centralized control or communication.

https://doi.org/10.1109/TVCG.2021.3139031

Abstract

Agent-based synthetic crowd simulation affords the cost-effective, large-scale simulation and animation of interacting digital humans. Model-based approaches have successfully generated a plethora of simulators with a variety of foundations. However, prior approaches rest on statically defined models predicated on simplifying assumptions, limited video-based datasets, or homogeneous policies. Recent work has applied reinforcement learning to learn navigation policies, but these approaches may learn static homogeneous rules, typically generalize poorly beyond their training scenarios, and are of limited use in synthetic crowd domains. In this paper, we present a multi-agent reinforcement learning-based approach that learns a parametric predictive collision avoidance and steering policy. We show that training over a parameter space produces a model that is flexible across crowd configurations; that is, our goal-conditioned approach learns a parametric policy that affords heterogeneous synthetic crowds. The approach is model-free and requires no centralization of internal agent information, control signals, or agent communication. Extensive evaluation shows that the learned policy generalizes across unseen scenarios, agent parameters, and out-of-distribution parameterizations, with computational performance comparable to traditional methods. Qualitatively, the model produces both expected (laminar flow, shuffling, bottleneck) and unexpected (side-stepping) emergent behaviours; quantitatively, the approach is performant across measures of movement quality.
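
To make the parameter-sharing and goal-conditioning ideas concrete, below is a minimal sketch in Python/PyTorch. It is not the paper's actual architecture: the SharedCrowdPolicy name, the network shape, and the observation, goal, and parameter dimensions are illustrative assumptions. The point it demonstrates is that every agent evaluates the same network on its own local observation, goal, and behaviour parameters, so heterogeneity comes from the conditioning inputs rather than from per-agent weights or inter-agent communication.

# A minimal sketch (not the paper's exact architecture) of a goal-conditioned,
# parameter-shared policy: every agent runs the same network, conditioned on
# its own observation, goal, and behaviour parameters, with no centralized
# controller and no inter-agent communication.
import torch
import torch.nn as nn

class SharedCrowdPolicy(nn.Module):
    def __init__(self, obs_dim: int, goal_dim: int, param_dim: int,
                 action_dim: int, hidden: int = 128):
        super().__init__()
        # One network shared by all agents (parameter sharing).
        self.net = nn.Sequential(
            nn.Linear(obs_dim + goal_dim + param_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, hidden),
            nn.Tanh(),
            nn.Linear(hidden, action_dim),  # e.g., speed and heading change
        )

    def forward(self, obs, goal, agent_params):
        # Goal conditioning: concatenate each agent's local observation with
        # its goal and per-agent parameters (e.g., preferred speed, radius).
        # Varying agent_params at run time yields heterogeneous behaviour
        # from the single shared policy.
        return self.net(torch.cat([obs, goal, agent_params], dim=-1))

# Decentralized execution: each row is an independent agent; the same
# weights are evaluated per agent, with no shared internal state.
policy = SharedCrowdPolicy(obs_dim=32, goal_dim=2, param_dim=4, action_dim=2)
n_agents = 8
actions = policy(torch.randn(n_agents, 32),   # local sensor observations
                 torch.randn(n_agents, 2),    # relative goal positions
                 torch.rand(n_agents, 4))     # per-agent behaviour parameters

Because the weights are shared, adding agents does not grow the model, and varying the per-agent parameter vector at run time produces distinct steering styles from one policy, consistent with the decentralized, communication-free setting described in the abstract.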