thuml/iVideoGPT
Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223
This project helps robotics researchers and engineers predict future actions and visual outcomes for robotic systems. From existing records of how robots manipulate objects (such as pick-and-place tasks or general interaction), it generates predictions of what the robot will do next or how the environment will look. It's designed for those developing or evaluating advanced robotic control systems.
172 stars. No commits in the last 6 months.
Use this if you are a robotics researcher or engineer building or evaluating advanced robot control systems and need to predict future robot actions or environmental states based on past data.
Not ideal if you are looking for an out-of-the-box solution for immediate real-world robot deployment without a deep technical understanding of model-based reinforcement learning.
Stars: 172
Forks: 17
Language: Python
License: MIT
Category:
Last pushed: Sep 23, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/thuml/iVideoGPT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter visual multimodal VLM from scratch in just 1 hour! 🌏
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model