pittisl/PhyT2V
Official code repository for the CVPR 2025 paper "PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation".
This project helps video creators, animators, and content producers generate more realistic videos from text descriptions. You provide a text prompt describing the desired video, and the system uses AI to refine it, producing a video that adheres better to real-world physics and common sense. It's designed for anyone needing to create visually coherent video content from text.
No commits in the last 6 months.
Use this if you need to generate videos from text prompts and want the resulting animation to obey real-world physical laws and common sense, especially for unusual or complex scenarios.
Not ideal if you primarily need stylized, abstract, or non-physical animations, or if you prefer direct control over every aspect of video generation without AI-driven prompt refinement.
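The refinement loop described above (generate a video, check its physical plausibility, let an LLM rewrite the prompt, repeat) can be sketched as follows. This is a minimal illustration of the idea only: the function names, the scoring rule, and the `"[refined]"` marker are stand-in stubs invented for this sketch, not the repo's actual API or pipeline.

```python
# Stand-in stubs illustrating an LLM-guided iterative self-refinement loop.
# None of these functions exist in the PhyT2V codebase; they only model the
# generate -> score -> refine cycle the project description outlines.

def generate_video(prompt: str) -> str:
    """Stub: pretend to render a video for the prompt."""
    return f"video({prompt})"

def score_physics(video: str) -> float:
    """Stub: pretend the physics-adherence score rises with each refinement."""
    return min(1.0, 0.2 + 0.2 * video.count("[refined]"))

def refine_prompt(prompt: str, video: str) -> str:
    """Stub: an LLM would rewrite the prompt using feedback on the video."""
    return prompt + " [refined]"

def self_refine(prompt: str, rounds: int = 3, target: float = 0.8):
    """Generate, score, and refine until the score clears the target."""
    score = 0.0
    for _ in range(rounds):
        video = generate_video(prompt)
        score = score_physics(video)
        if score >= target:
            break  # good enough; stop refining
        prompt = refine_prompt(prompt, video)
    return prompt, score
```

With these stubs, each round raises the score until the loop either hits the target or exhausts its round budget; the real system would replace the stubs with a T2V model, a physics critic, and an LLM rewriter.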
Stars: 64
Forks: 4
Language: Python
License: —
Category:
Last pushed: Jul 31, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/pittisl/PhyT2V"
Open to everyone: 100 requests/day with no key needed. A free API key raises the limit to 1,000/day.
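The same endpoint can be called from code. A minimal Python sketch, assuming only the URL shown in the curl command above; the JSON field names in the sample payload are an assumption based on the stats listed on this page, not a documented schema, so check a live response for the real shape:

```python
import json
from urllib.parse import quote

# Base URL taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("pittisl", "PhyT2V")

# Hypothetical response body for illustration; field names are assumed.
sample = json.loads('{"stars": 64, "forks": 4, "language": "Python"}')
```

To actually fetch the data, pass `url` to any HTTP client (e.g. `urllib.request.urlopen(url)`) and decode the body with `json.loads`.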
Higher-rated alternatives:
- hao-ai-lab/FastVideo: A unified inference and post-training framework for accelerated video generation.
- ModelTC/LightX2V: A lightweight video generation inference framework.
- thu-ml/TurboDiffusion: 100–200× acceleration for video diffusion models.
- PKU-YuanGroup/Helios: A real-time long-video generation model.
- PKU-YuanGroup/MagicTime: [TPAMI 2025🔥] Time-lapse video generation models as metamorphic simulators.