TimeBlindness/time-blindness
[CVPR 2026 🔥] Time Blindness: Why Video-Language Models Can't See What Humans Can?
This project introduces a new way to test how well AI models understand videos when information is conveyed over time rather than within individual frames. It uses specially created videos in which each frame looks like random noise, but clear words, shapes, or objects emerge when the frames are watched as a sequence. It is aimed at researchers and developers who build and evaluate video-language models.
Use this if you are developing or evaluating video-language AI models and need to rigorously test their ability to perceive information presented purely through temporal changes, rather than through spatial features within individual frames.
Not ideal if you are looking for an AI tool to directly analyze typical videos where visual information is clearly present in static frames.
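The stimulus idea described above can be illustrated with a minimal sketch. This is not the repository's actual generation code; the shape, resolution, and encoding scheme (masked pixels that flip value every frame, i.i.d. noise elsewhere) are all assumptions chosen to show the principle: each frame is indistinguishable from noise, but the shape becomes trivial to recover from frame-to-frame changes.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 64, 32, 32  # hypothetical clip length and resolution

# Hypothetical hidden "shape": a centered square mask.
mask = np.zeros((H, W), dtype=bool)
mask[8:24, 8:24] = True

# Start from pure binary noise in every frame.
frames = rng.integers(0, 2, size=(T, H, W)).astype(np.uint8)

# Inside the mask, force each pixel to flip every frame (a purely temporal
# signal); outside, leave i.i.d. noise. Any single frame still looks like
# 50/50 binary noise, so the shape carries no spatial signature.
for t in range(1, T):
    frames[t][mask] = 1 - frames[t - 1][mask]

# Temporal decoding: masked pixels change on every step (mean change = 1.0),
# background pixels change about half the time (mean change ~ 0.5).
change = np.abs(np.diff(frames.astype(int), axis=0)).mean(axis=0)
recovered = change > 0.75  # thresholding the change map reveals the square
```

A frame-by-frame model sees only noise statistics; only a model (or viewer) integrating across frames can recover `recovered`, which is the failure mode the benchmark probes.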
Stars: 62
Forks: 2
Language: Python
License: MIT
Category: Computer Vision
Last pushed: Jan 28, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/TimeBlindness/time-blindness"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
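The curl command above can also be issued from Python. This is a minimal sketch, assuming the endpoint is a plain GET returning JSON; the URL path layout (`quality/<category>/<owner>/<repo>`) is inferred from the example, and the response schema is not documented here, so the payload is returned as an untyped dict.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL (path layout inferred from
    the curl example on this page)."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body. Assumes no auth header is
    needed for the free 100 requests/day tier."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("computer-vision", "TimeBlindness", "time-blindness"))
```

With a free API key, you would presumably attach it to the request; the exact header or query-parameter name is not stated on this page, so it is omitted here.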
Higher-rated alternatives
col14m/cadrille
[ICLR2026] cadrille: Multi-modal CAD Reconstruction with Online Reinforcement Learning
filaPro/cad-recode
[ICCV2025] CAD-Recode: Reverse Engineering CAD Code from Point Clouds
pengsongyou/openscene
[CVPR'23] OpenScene: 3D Scene Understanding with Open Vocabularies
worldbench/3EED
[NeurIPS 2025 DB Track] 3EED: Ground Everything Everywhere in 3D
cambrian-mllm/cambrian-s
Cambrian-S: Towards Spatial Supersensing in Video