PMP: Learning to Physically Interact with Environments using Part-wise Motion Priors

SIGGRAPH 2023
Seoul National University

Our method utilizes multiple part-wise motion priors to enable a character to physically interact with its environment.

Abstract

We present a method to animate a character incorporating multiple part-wise motion priors (PMP). While previous works allow creating realistic articulated motions from reference data, the range of motion is largely limited by the available samples. Especially in interaction-rich scenarios, it is impractical to acquire every possible interaction motion, as the number of combinations of physical parameters grows exponentially. The proposed PMP allows us to assemble multiple part skills to animate a character, creating a diverse set of motions from different combinations of existing data. In our pipeline, we can train an agent with a wide range of part-wise priors: each body part can obtain kinematic insight into style from motion captures, or at the same time extract dynamics-related information from an additional part-specific simulation. For example, we can first train a general interaction skill, e.g., grasping, only for the dexterous part, and then combine the expert trajectories from the pre-trained agent with the kinematic priors of the other limbs. Eventually, our whole-body agent learns a novel physical interaction skill even in the absence of object trajectories in the reference motion sequence.
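As an illustration of how part-wise priors can drive a single whole-body policy, consider an AMP-style least-squares discriminator reward per part. The sketch below is ours, not a formula from the paper: the per-part reward follows the standard AMP form, and the multiplicative fusion across the \(K\) parts is an assumption made for illustration.

\[
r_k^{\text{style}}(o_k, o_k') \;=\; \max\!\left(0,\; 1 - \tfrac{1}{4}\bigl(D_{\phi_k}(o_k, o_k') - 1\bigr)^2\right),
\qquad
r^{\text{style}}(s, s') \;=\; \prod_{k=1}^{K} r_k^{\text{style}}(o_k, o_k').
\]

A product keeps the total style reward high only when every part matches its own prior; a weighted sum over parts would be a natural alternative fusion rule.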

Method

Visualization of the pipeline of our system. Kinematic style discriminators \(\{D_{\phi_k}\}_K\) are trained with part-wise motion captures, and interaction discriminators \(\{D_{\psi_n}\}_N\) are trained with demo trajectories from the pretrained interaction gym. Note that the partial observations \(\{o_k\}_K,\{u_n\}_N\) and hand actions \(\{y_n\}_N\) are subsets of the whole-body agent's state \(s\) and action \(a\), respectively.
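To make the pipeline concrete, the following minimal PyTorch sketch shows how several part-wise discriminators could be trained on partial observations and how their style rewards might be combined for the whole-body agent. It is not the authors' implementation: the class and function names (PartDiscriminator, part_obs_dims, combined_reward) and the multiplicative reward fusion are our assumptions; only the least-squares discriminator loss and the clipped style reward follow the standard AMP formulation.

# Minimal sketch (assumptions noted above), not the authors' code.
import torch
import torch.nn as nn

class PartDiscriminator(nn.Module):
    """Scores transitions (o_t, o_{t+1}) of a single body part."""
    def __init__(self, obs_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, next_obs):
        return self.net(torch.cat([obs, next_obs], dim=-1))

def discriminator_loss(disc, ref_obs, ref_next, agent_obs, agent_next):
    # Least-squares objective: reference transitions -> +1, agent transitions -> -1.
    d_ref = disc(ref_obs, ref_next)
    d_agent = disc(agent_obs, agent_next)
    return ((d_ref - 1.0) ** 2).mean() + ((d_agent + 1.0) ** 2).mean()

def style_reward(disc, obs, next_obs):
    # AMP-style reward in [0, 1] derived from the discriminator score.
    d = disc(obs, next_obs)
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)

def combined_reward(part_discs, part_obs, part_next_obs):
    # Fuse part-wise rewards multiplicatively so every part must match its
    # prior (an assumed combination rule, for illustration only).
    batch = next(iter(part_obs.values())).shape[0]
    reward = torch.ones(batch)
    for name, disc in part_discs.items():
        reward = reward * style_reward(disc, part_obs[name], part_next_obs[name]).squeeze(-1)
    return reward

if __name__ == "__main__":
    # Hypothetical part-wise observation sizes (e.g., upper body, legs, hand).
    part_obs_dims = {"upper_body": 60, "lower_body": 42, "hand": 24}
    part_discs = {k: PartDiscriminator(d) for k, d in part_obs_dims.items()}

    batch = 8
    part_obs = {k: torch.randn(batch, d) for k, d in part_obs_dims.items()}
    part_next = {k: torch.randn(batch, d) for k, d in part_obs_dims.items()}

    # Discriminator update for one part against (random stand-in) reference data.
    loss_hand = discriminator_loss(
        part_discs["hand"],
        torch.randn(batch, 24), torch.randn(batch, 24),
        part_obs["hand"], part_next["hand"],
    )
    print(loss_hand.item())
    print(combined_reward(part_discs, part_obs, part_next).shape)  # torch.Size([8])

In this sketch, the kinematic discriminators would consume part-wise motion-capture transitions as reference data, while an interaction discriminator would consume demo trajectories from a pretrained interaction gym; both plug into the same reward fusion.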

Video

BibTeX

@inproceedings{10.1145/3588432.3591487,
author = {Bae, Jinseok and Won, Jungdam and Lim, Donggeun and Min, Cheol-Hui and Kim, Young Min},
title = {PMP: Learning to Physically Interact with Environments using Part-wise Motion Priors},
year = {2023},
isbn = {9798400701597},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3588432.3591487},
doi = {10.1145/3588432.3591487},
abstract = {We present a method to animate a character incorporating multiple part-wise motion priors (PMP). While previous works allow creating realistic articulated motions from reference data, the range of motion is largely limited by the available samples. Especially for the interaction-rich scenarios, it is impractical to attempt acquiring every possible interacting motion, as the combination of physical parameters increases exponentially. The proposed PMP allows us to assemble multiple part skills to animate a character, creating a diverse set of motions with different combinations of existing data. In our pipeline, we can train an agent with a wide range of part-wise priors. Therefore, each body part can obtain a kinematic insight of the style from the motion captures, or at the same time extract dynamics-related information from the additional part-specific simulation. For example, we can first train a general interaction skill, e.g. grasping, only for the dexterous part, and then combine the expert trajectories from the pre-trained agent with the kinematic priors of other limbs. Eventually, our whole-body agent learns a novel physical interaction skill even with the absence of the object trajectories in the reference motion sequence.},
booktitle = {ACM SIGGRAPH 2023 Conference Proceedings},
articleno = {64},
numpages = {10},
keywords = {Data-driven Animation, Deep Reinforcement Learning, Physics-Based Simulation, Whole-body Control},
location = {Los Angeles, CA, USA},
series = {SIGGRAPH '23}
}