NVlabs/ProtoMotions
ProtoMotions is a GPU-accelerated simulation and learning framework for training physically simulated digital humans and humanoid robots.
Overview
Our mission is to provide a fast prototyping platform for simulated humanoid learning tasks and environments, bridging efforts across animation, robotics, and reinforcement learning for researchers and practitioners in all three domains.
Modularity, extensibility, and scalability are at the core of ProtoMotions3. It is permissively released under the Apache-2.0 license.
Also check out MimicKit, our sibling repository: a lightweight framework for motion imitation learning.
What You Can Do with ProtoMotions3
🏃 Large-Scale Motion Learning
Train your fully physically simulated character to learn motion skills from the entire public AMASS human animation dataset (40+ hours) within 12 hours on 4 A100s.
📈 Scalable Multi-GPU Training
Scale training to even larger datasets, with each GPU handling a subset of motions. For example, we have trained on 24 A100s with 13K motions per GPU.
🔄 One-Command Retargeting
Transfer (retarget) the entire AMASS dataset to your favorite robot with the built-in PyRoki-based optimizer—in one command.
Note: As of v3, we use PyRoki for retargeting. Earlier versions used Mink.
🤖 Train Any Robot
Train your robot to perform AMASS motor skills in 12 hours by changing a single command argument (--robot-name=smpl → --robot-name=h1_2) and preparing retargeted motions (see here).
🔬 Sim2Sim Testing
One-click testing (--simulator=isaacgym → --simulator=newton) of robot control policies for H1_2 or G1 across different physics engines, including the newly released NVIDIA Newton, built on MuJoCo Warp. The policies shown below use only observations you could actually obtain from real hardware.
🎨 High-Fidelity Rendering
Test your policy in IsaacSim 5.0+, which can load beautifully rendered Gaussian-splatting backgrounds (via Omniverse NuRec; the rendered scene is not yet physically interactive).
🎬 Motion Authoring Integration
With a motion authoring model (not included in ProtoMotions), generate a motion from a text prompt and author a matching scene in ProtoMotions, so that both the animation character and the G1 robot can perform the stunt.
Image Credit: NVIDIA Human Motion Modeling Research
🏗️ Procedural Scene Generation
Procedurally generate many scenes for scalable Synthetic Data Generation (SDG): start from a seed motion set and use RL to adapt the motions to the augmented scenes.
🎭 Generative Policies
Train a generative policy (e.g., MaskedMimic) that can autonomously choose its "move" to finish the task.
⛰️ Terrain Navigation
Train your robot to hike across challenging terrain!
🎯 Custom Environments
Have a new task? Implement your own custom environment.
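A custom environment typically boils down to subclassing a base environment and overriding its reset and step hooks. The sketch below is a minimal, self-contained illustration of that pattern; the class and method names are hypothetical, not the actual ProtoMotions API, so consult the repository's environment base class for the real interface.

```python
# Illustrative sketch of the subclass-and-override environment pattern.
# Names here are hypothetical stand-ins, NOT the ProtoMotions API.

class BaseEnv:
    """Stand-in for the framework's environment base class."""
    def __init__(self, num_envs: int):
        self.num_envs = num_envs

    def reset(self):
        raise NotImplementedError

    def step(self, actions):
        raise NotImplementedError


class ReachTargetEnv(BaseEnv):
    """Toy task: drive a 1-D state toward a target position."""
    def __init__(self, num_envs: int, target: float = 1.0):
        super().__init__(num_envs)
        self.target = target
        self.state = [0.0] * num_envs

    def reset(self):
        self.state = [0.0] * self.num_envs
        return list(self.state)

    def step(self, actions):
        # Apply actions, then reward = negative distance to the target.
        self.state = [s + a for s, a in zip(self.state, actions)]
        rewards = [-abs(s - self.target) for s in self.state]
        return list(self.state), rewards
```

The point of the pattern is that only the task logic lives in the subclass; batching, resets, and the training loop stay in the framework.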
🧪 New RL Algorithms
Want to try a new RL algorithm? Thanks to the modularized design, algorithms like ADD can be implemented in ProtoMotions in ~50 lines of code:
📄 protomotions/agents/mimic/agent_add.py
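The reason a new algorithm can stay this small is that a subclass only overrides the pieces that differ from the base agent. Below is a self-contained sketch of that idea with hypothetical class and method names (and a placeholder squared-penalty reward, which is not the actual ADD objective); see protomotions/agents/ for the real interface.

```python
# Sketch of the modular-agent idea: a new algorithm overrides only
# the parts that change. Names and the reward term are illustrative
# placeholders, NOT ProtoMotions' actual agent API or the ADD loss.

class MimicAgent:
    """Stand-in base agent: tracking reward + generic update loop."""
    def compute_reward(self, pose_error: float) -> float:
        return -pose_error

    def update(self, pose_errors):
        # A real agent would run a policy-gradient step here; we just
        # aggregate rewards to keep the sketch self-contained.
        return sum(self.compute_reward(e) for e in pose_errors)


class MyNewAgent(MimicAgent):
    """A 'new algorithm' = override just the reward term.

    The update loop, rollout handling, etc. are inherited unchanged,
    which is what keeps the diff to a few dozen lines.
    """
    def compute_reward(self, pose_error: float) -> float:
        return -pose_error ** 2
```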
🔧 Custom Simulators
Would like to use your own simulator? Implement the APIs that interface between the different simulators:
📄 protomotions/simulator/base_simulator/
Refer to this community-contributed example:
📄 protomotions/simulator/genesis/
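The core idea of a simulator interface is that every backend implements the same small API, so environments stay engine-agnostic. The sketch below illustrates that with a hypothetical two-method interface; the method names are assumptions for illustration, not the actual base_simulator definitions.

```python
# Self-contained sketch of an engine-agnostic simulator interface.
# Method names are illustrative, NOT the actual base_simulator API.
from abc import ABC, abstractmethod


class BaseSimulator(ABC):
    @abstractmethod
    def reset(self) -> list:
        """Return initial joint positions."""

    @abstractmethod
    def step(self, torques: list) -> list:
        """Advance physics one tick and return new joint positions."""


class ToySimulator(BaseSimulator):
    """Trivial backend: joints integrate applied torques directly."""
    def __init__(self, num_dofs: int):
        self.q = [0.0] * num_dofs

    def reset(self):
        self.q = [0.0] * len(self.q)
        return list(self.q)

    def step(self, torques):
        self.q = [q + t for q, t in zip(self.q, torques)]
        return list(self.q)
```

Swapping physics engines then means swapping the concrete class while everything upstream (environments, agents) is untouched.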
🤖 Add Your Own Robot
Want to add your own robot? Follow these steps:
- Add your `.xml` MuJoCo spec file to `protomotions/data/robots/`
- Fill in the config fields (see examples like `protomotions/robot_configs/g1.py`)
- Register it in `protomotions/robot_configs/factory.py`
And you're good to go!
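The registration step above follows a standard register-then-construct factory pattern. Here is a minimal, self-contained sketch of it; the function names, config fields, and `my_robot` entry are hypothetical examples, not the actual contents of `factory.py`.

```python
# Minimal sketch of a robot-config factory. All names and fields are
# hypothetical; the real registration lives in
# protomotions/robot_configs/factory.py.

ROBOT_CONFIGS = {}


def register_robot(name: str, config: dict) -> None:
    """Make a robot config retrievable by name."""
    ROBOT_CONFIGS[name] = config


def get_robot_config(name: str) -> dict:
    """Look up a registered robot, failing loudly on unknown names."""
    try:
        return ROBOT_CONFIGS[name]
    except KeyError:
        raise ValueError(f"Unknown robot: {name!r}") from None


# Registering a new robot mirrors the steps above: point the config
# at the MuJoCo spec file, then fill in the remaining fields.
register_robot("my_robot", {
    "mjcf_path": "protomotions/data/robots/my_robot.xml",
    "num_dofs": 23,
})
```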
Documentation
- Installation Guide
- Quick Start
- AMASS Data Preparation
- PHUMA Data Preparation
- Tutorials
- API Reference
Contributing
We welcome contributions! Please read our Contributing Guide before submitting pull requests.
License
ProtoMotions3 is released under the Apache-2.0 License.
Citation
If you use ProtoMotions3 in your research, please cite:
@misc{ProtoMotions,
title = {ProtoMotions3: An Open-source Framework for Humanoid Simulation and Control},
author = {Tessler*, Chen and Jiang*, Yifeng and Peng, Xue Bin and Coumans, Erwin and Shi, Yi and Zhang, Haotian and Rempe, Davis and Chechik†, Gal and Fidler†, Sanja},
year = {2025},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/NVLabs/ProtoMotions/}},
}