aim-uofa/StaMo
Unsupervised Learning of Generalizable Robot Motion from Compact State Representation
StaMo helps robotics researchers develop and train models that learn to control robot movements from static images. You provide existing robot demonstration data, formatted as images and converted to JSON, and the output is a trained model capable of generating new, generalized robot motions. It is aimed at researchers and academics working on robot learning and control.
No commits in the last 6 months.
Use this if you want to train a robot motion generation model with unsupervised learning from a compact state representation of robot demonstrations.
Not ideal if you need a pre-trained model for immediate deployment, or if you are not comfortable with machine learning training workflows.
Stars: 35
Forks: —
Language: Python
License: —
Category:
Last pushed: Oct 09, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/aim-uofa/StaMo"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ZhengYinan-AIR/Diffusion-Planner
[ICLR 2025 Oral] The official implementation of "Diffusion-Based Planning for Autonomous Driving...
intuitive-robots/MoDE_Diffusion_Policy
[ICLR 25] Code for "Efficient Diffusion Transformer Policies with Mixture of Expert Denoisers...
caio-freitas/GraphDiffusionImitate
Diffusion-based graph generative policies for imitation learning in robotics tasks 🧠🤖
LeCAR-Lab/model-based-diffusion
Official implementation for the paper "Model-based Diffusion for Trajectory Optimization"....
Weixy21/SafeDiffuser
Safe Planning with Diffusion Probabilistic Models