
Axellwppr / motion_tracking

156 stars · 16 forks · 2 issues
Languages: Python, Shell

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing Axellwppr/motion_tracking in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/Axellwppr/motion_tracking)
Preview: Analyzed by RepoMind

Repository Overview (README excerpt)


Whole Body Motion Tracking

This repository contains the training, evaluation, and deployment assets for a whole-body motion tracking policy built on top of the GentleHumanoid codebase. The main focus of this repository is:

• training a **universal**, **robust**, and **highly dynamic** whole-body motion tracking policy,
• supporting upper-body **compliance-aware** behavior for contact-rich interaction,
• supporting robust live **VR teleoperation** through a separate teleop stack.

The simulation and training backend is based on **mjlab**. A demo of the pretrained policy, showing one model generalizing across diverse and highly dynamic motions, is available here:

https://github.com/user-attachments/assets/263dd3cc-8d23-4d67-bd36-37fe89f525de
https://github.com/user-attachments/assets/4d210dbf-8023-4270-b094-ab6a2353deda

Instructions for deployment and runtime usage are available in the corresponding folder.

Installation

If you do not have uv installed, you can install it by following the instructions in the uv documentation.

Motion Dataset Preparation

Quick Start: Download Preprocessed Dataset (Google Drive)

Download link:
• Google Drive Dataset

After downloading, extract it into the repository directory, and you should have the following structure:

Build Dataset from AMASS/LAFAN with GMR

Retargeting with GMR

We use GMR to retarget the AMASS and LAFAN datasets. The output is a dataset of npz files, each containing:

• the frame rate,
• the root position,
• the root rotation in quaternion format (xyzw),
• the degrees-of-freedom (joint) positions,
• the local body positions,
• the local body rotations,
• the list of body names,
• the list of joint names.

You can use the modified version of GMR to directly export npz files that meet these requirements.
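As a rough illustration of the npz motion format described above, the sketch below writes and reloads a clip with NumPy. The exact field names are not shown in this excerpt, so the keys (`fps`, `root_pos`, etc.), the frame count, and the joint/body counts used here are hypothetical placeholders matching only the described contents.

```python
import numpy as np

T = 120  # hypothetical number of frames in the clip

# Illustrative placeholder keys -- the excerpt elides the actual field names.
motion = {
    "fps": np.array(30),                            # frame rate
    "root_pos": np.zeros((T, 3)),                   # root position per frame
    "root_rot": np.tile([0.0, 0.0, 0.0, 1.0], (T, 1)),  # root quaternion (xyzw)
    "dof_pos": np.zeros((T, 29)),                   # degrees-of-freedom positions
    "local_body_pos": np.zeros((T, 24, 3)),         # local body positions
    "local_body_rot": np.zeros((T, 24, 4)),         # local body rotations
    "body_names": np.array(["pelvis", "torso"]),    # list of body names
    "dof_names": np.array(["hip_pitch", "knee"]),   # list of joint names
}
np.savez("motion_clip.npz", **motion)

# Reload and sanity-check the clip before feeding it to the dataset builder.
clip = np.load("motion_clip.npz", allow_pickle=True)
assert clip["root_rot"].shape == (T, 4)
print(sorted(clip.files))
```

A quick check like this can catch shape or quaternion-convention mismatches before dataset building.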
You should organize the processed datasets in the following structure:

Dataset Building

Point the dataset-building configuration to your dataset root directory, then run the build script to generate the dataset. The dataset will be generated in the output directory, and the code will automatically load it from there. You can also set an environment variable to specify the dataset root path.

Training

You can use the provided script to run the full training pipeline. Set your WandB account and other parameters in the global configuration section, then run the script.

Under standard settings, training takes approximately 15 hours on 4× A100 GPUs. If GPU memory is constrained, tune the corresponding parameters in their respective configuration files; such adjustments may increase training time and could affect training performance to some extent.

Evaluation

If you export a deployment policy, the exported checkpoint will be written under the export directory. To use it in the deployment runtime:

• Copy the exported policy folder (with all of its exported files) into the deployment runtime.
• Update the deployment configuration so that it points to the new ONNX file.

For the actual deployment-side test procedure, see the deployment README, which also explains how to use the UDP motion selector and the VR motion source during deployment.
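The dataset-root override described above could be resolved as in the minimal sketch below. The actual environment variable name is elided in this excerpt, so `MOTION_DATASET_ROOT` and the `datasets` fallback are illustrative placeholders, not the repository's real names.

```python
import os
from pathlib import Path

# MOTION_DATASET_ROOT is a hypothetical placeholder -- the excerpt elides
# the real environment variable name used by the repository.
def resolve_dataset_root(default: str = "datasets") -> Path:
    """Prefer the environment variable, fall back to the in-repo default."""
    return Path(os.environ.get("MOTION_DATASET_ROOT", default))

# With the variable set, it takes precedence over the default.
os.environ["MOTION_DATASET_ROOT"] = "/tmp/my_dataset"
print(resolve_dataset_root())
```

An environment-variable override like this lets the same training scripts run unchanged on machines that store datasets in different locations.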