NVlabs/OmniVinci

OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.

Quality score: 51 / 100 (Established)

OmniVinci helps you understand and reason about video, audio, and text information together, much like humans do. You provide it with various media — videos, audio clips, or text — and it generates descriptive analyses and answers based on all the input. This is ideal for researchers, AI developers, and anyone building applications that need to interpret complex, real-world multimedia.


Use this if you need an AI model that can jointly analyze and respond to queries involving intertwined visual, auditory, and textual information.

Not ideal if your task only involves processing a single type of media (e.g., just text or just images) or if you require an extremely lightweight model for edge devices.

multimedia-analysis AI-development video-understanding audio-comprehension robotics-perception
No package · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 15 / 25
Community 16 / 25


Stars: 639
Forks: 51
Language: Python
License: Apache-2.0
Last pushed: Feb 26, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/NVlabs/OmniVinci"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
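The same endpoint can be called from a script instead of curl. A minimal sketch in Python using only the standard library, assuming the path shown above; the helper names (`quality_url`, `fetch_quality`) and the shape of the returned JSON are assumptions, not documented here.

```python
# Sketch: fetching this quality record from the public API.
# The endpoint path is taken from the curl example on this page;
# JSON field names are not documented here and may differ.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo (hypothetical helper)."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality record (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("NVlabs", "OmniVinci"))
```

With a free API key (1,000 requests/day), the key would presumably be passed as a header or query parameter; the exact mechanism is not specified on this page.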