katha-ai/VELOCITI

VELOCITI Benchmark Evaluation and Visualisation Code

14 / 100 (Experimental)

This tool helps researchers and engineers rigorously test the compositional reasoning abilities of video-language AI models (such as CLIP or Video-LLMs). You provide a video-language model and the VELOCITI dataset; it outputs detailed evaluation metrics and predictions that show how well the model understands complex relationships in videos. It's designed for AI researchers and practitioners working on advanced video understanding.

No commits in the last 6 months.

Use this if you need to benchmark and understand the strengths and weaknesses of your video-language AI models in interpreting complex visual and linguistic information.

Not ideal if you intend to use the VELOCITI dataset to train or fine-tune your models, as it is strictly designed as a test set for evaluation.

Tags: AI model evaluation, video understanding, natural language processing, computer vision research, AI benchmarking
No License · Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 4 / 25
Maturity 8 / 25
Community 0 / 25

How are scores calculated?
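The four category scores above appear to sum to the overall score; a quick arithmetic check (the summation rule is an assumption, not documented on this page):

```python
# Category scores as shown on the page.
scores = {"Maintenance": 2, "Adoption": 4, "Maturity": 8, "Community": 0}

# Assumed rule: overall score = sum of the four category scores (out of 100).
total = sum(scores.values())
print(total)  # 14, matching the 14/100 overall score
```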

Stars: 8
Forks:
Language: Python
License:
Last pushed: Apr 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/katha-ai/VELOCITI"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
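The same data can be fetched programmatically. A minimal Python sketch using only the standard library; the URL layout is taken from the curl example above, and the JSON response schema is an assumption:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    # Build the endpoint URL for a repo; path layout follows the curl example.
    return f"{BASE}/{collection}/{owner}/{repo}"

def fetch_quality(collection: str, owner: str, repo: str) -> dict:
    # Fetch and decode the JSON payload (the response being JSON is an
    # assumption based on the API-style endpoint).
    with urllib.request.urlopen(quality_url(collection, owner, repo)) as resp:
        return json.load(resp)

# Example (live network call, rate-limited to 100 requests/day without a key):
# data = fetch_quality("ml-frameworks", "katha-ai", "VELOCITI")
```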