TimeBlindness/time-blindness

[CVPR 2026 🔥] Time Blindness: Why Video-Language Models Can't See What Humans Can?

Score: 38 / 100 (Emerging)

This project introduces a new way to test how well AI models understand videos when information is conveyed over time rather than in individual frames. It uses specially constructed videos in which each frame looks like random noise, but clear words, shapes, or objects emerge when the frames are watched as a sequence. It is intended for researchers and developers who build and evaluate video-understanding AI.

Use this if you are developing or evaluating video-language AI models and need to rigorously test their ability to perceive information presented purely through temporal changes, rather than spatial features within frames.

Not ideal if you are looking for an AI tool to directly analyze typical videos where visual information is clearly present in static frames.

video-AI-evaluation temporal-perception AI-benchmarking computer-vision-research video-understanding
No Package · No Dependents
Maintenance 10 / 25
Adoption 8 / 25
Maturity 15 / 25
Community 5 / 25

How are scores calculated?

Stars: 62
Forks: 2
Language: Python
License: MIT
Last pushed: Jan 28, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/TimeBlindness/time-blindness"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
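The same request can be made from Python with the standard library. This is a minimal sketch mirroring the curl example above; the response schema is not documented here, so the helper simply returns the decoded JSON as-is (assuming the endpoint returns JSON).

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    # Path segments follow the curl example: category/owner/repo.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, timeout=10):
    # Schema is undocumented here, so just return the parsed JSON.
    with urllib.request.urlopen(quality_url(category, owner, repo),
                                timeout=timeout) as resp:
        return json.load(resp)

url = quality_url("computer-vision", "TimeBlindness", "time-blindness")
```

With a free API key (1,000 requests/day), you would presumably pass it as a header or query parameter; the exact mechanism is not shown on this page, so check the API docs before relying on it.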