Hyeongkeun/LAVCap
Official PyTorch implementation of "LAVCap: LLM-based Audio-Visual Captioning using Optimal Transport" (ICASSP 2025)
LAVCap generates descriptive captions for audio content, leveraging visual information when it is available: given audio recordings and the corresponding video frames, it produces detailed textual captions. It is aimed at researchers and developers working on multimedia content analysis and accessibility features.
No commits in the last 6 months.
Use this if you need to automatically generate precise, context-rich captions for audio content by leveraging both the sound and the associated video.
Not ideal if you only have audio data, or if you need to process large volumes of data without GPU acceleration.
| Stat | Value |
| --- | --- |
| Stars | 10 |
| Forks | 1 |
| Language | Python |
| License | Apache-2.0 |
| Category | |
| Last pushed | Apr 14, 2025 |
| Commits (30d) | 0 |
Get this data via API:

```shell
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Hyeongkeun/LAVCap"
```

Open to everyone at 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
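The same endpoint can be called from a script. A minimal sketch in Python, assuming only what the page states (the URL pattern and that no key is required); the shape of the JSON response is not documented here, so the code returns it as-is:

```python
import json
import urllib.request

# Base URL taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def repo_quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub repo."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch quality data (100 requests/day without a key).

    The response schema is an assumption; inspect the returned
    dict rather than relying on specific field names.
    """
    with urllib.request.urlopen(repo_quality_url(owner, repo)) as resp:
        return json.load(resp)


print(repo_quality_url("Hyeongkeun", "LAVCap"))
```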
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter visual multimodal VLM from scratch in just 1 hour!
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model