vlongle/articulate-anything
[ICLR 2025] Official implementation of Articulate-Anything
This project helps robotics engineers and 3D artists automatically create detailed, articulated 3D models of objects. You can provide a text description, an image, or a video of an object, and the system will output a 3D model with its movable parts and how they move. This is ideal for quickly generating realistic 3D assets for simulations or virtual environments.
176 stars. No commits in the last 6 months.
Use this if you need to quickly generate 3D models of objects with realistic joint movements from various inputs like text, images, or videos.
Not ideal if you need precise manual control over every detail of the model's articulation, or if you want a lightweight solution for basic static models.
Stars
176
Forks
12
Language
HTML
License
—
Category
ml-frameworks
Last pushed
Jul 08, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/vlongle/articulate-anything"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
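For scripted access, the same endpoint can be called from Python. This is a minimal sketch assuming the URL pattern shown in the curl example above (`/api/v1/quality/<category>/<owner>/<repo>`); the shape of the JSON response is not documented here, so the fetch is left as an optional step.

```python
import json
import urllib.request

# Base endpoint taken from the curl example on this card.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repo quality URL (pattern assumed from the curl example)."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "vlongle", "articulate-anything")
print(url)

# Uncomment to fetch (no key needed for up to 100 requests/day;
# the response fields are not specified on this card):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```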
Higher-rated alternatives
MaximeVandegar/Papers-in-100-Lines-of-Code
Implementation of papers in 100 lines of code.
kk7nc/RMDL
RMDL: Random Multimodel Deep Learning for Classification
OML-Team/open-metric-learning
Metric learning and retrieval pipelines, models and zoo.
miguelvr/dropblock
Implementation of DropBlock: A regularization method for convolutional networks in PyTorch.
PaddlePaddle/models
Officially maintained, supported by PaddlePaddle, including CV, NLP, Speech, Rec, TS, big models...