bytedance/video-SALMONN-2

video-SALMONN 2 is an audio-visual large language model (LLM) that generates high-quality audio-visual video captions. It is developed by the Department of Electronic Engineering at Tsinghua University together with ByteDance.

50 / 100 · Established

This project helps content creators, marketers, and educators by automatically generating high-quality captions for videos, taking into account both what is seen and heard. You provide video files, and it outputs detailed, accurate captions that enhance accessibility and understanding. It's designed for anyone needing to quickly and efficiently caption video content.


Use this if you need to generate descriptive captions for video content, leveraging both visual and audio cues for better accuracy and detail.

Not ideal if you primarily need to transcribe spoken dialogue without needing detailed descriptions of on-screen actions or sounds.

video-captioning content-accessibility media-production e-learning social-media-marketing
No package · No dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 15 / 25

How are scores calculated?
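From the breakdown above, the 100-point total reads as the sum of four categories, each scored out of 25. A minimal sketch of that arithmetic (the function name and signature are assumptions for illustration, not part of the site's actual scoring code):

```python
def overall_score(maintenance: int, adoption: int, maturity: int, community: int) -> int:
    """Sum four 25-point category scores into a 100-point total.

    Hypothetical illustration of how the displayed score appears to
    be composed; each category is capped at 25.
    """
    parts = (maintenance, adoption, maturity, community)
    if not all(0 <= p <= 25 for p in parts):
        raise ValueError("each category is scored out of 25")
    return sum(parts)

# The four sub-scores listed on this page:
print(overall_score(10, 10, 15, 15))  # → 50
```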

Stars

167

Forks

19

Language

Python

License

Apache-2.0

Last pushed

Feb 23, 2026

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/bytedance/video-SALMONN-2"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
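The same request can be made from Python with the standard library; a small sketch, assuming only the endpoint shape shown in the curl command above (the JSON fields returned are not documented here, so the response is treated as an opaque dict):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository,
    mirroring the curl example above."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    No API key is needed on the free tier (100 requests/day),
    per the note above.
    """
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("transformers", "bytedance", "video-SALMONN-2"))
```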