nasib-ullah/video-captioning-models-in-Pytorch

A PyTorch implementation of state-of-the-art video captioning models (2015-2019) on the MSVD and MSRVTT datasets.

Score: 35 / 100 (Emerging)

This project helps researchers and AI practitioners automatically generate descriptive text captions for video content. It takes raw video files or pre-extracted video features as input and outputs natural-language sentences describing the actions and objects in the video. It is designed for computer vision and natural language processing researchers developing and evaluating video understanding models.

No commits in the last 6 months.

Use this if you are a researcher needing to benchmark or implement various state-of-the-art video captioning models for your experiments.

Not ideal if you are looking for a ready-to-use application to caption your personal videos or for production deployment.

video-analysis natural-language-generation computer-vision AI-research multimodal-AI
Badges: No License · Stale (6m) · No Package · No Dependents

Score breakdown:
- Maintenance: 0 / 25
- Adoption: 9 / 25
- Maturity: 8 / 25
- Community: 18 / 25


Repository stats:
- Stars: 74
- Forks: 16
- Language: Python
- License: None
- Last pushed: Jul 30, 2023
- Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nasib-ullah/video-captioning-models-in-Pytorch"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
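The curl call above can also be sketched in Python. This is a minimal example using only the standard library; it assumes the endpoint returns JSON, and the response field names are not documented here, so none are hard-coded.

```python
import json
from urllib.request import urlopen

# Same endpoint as the curl example above.
url = (
    "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/"
    "nasib-ullah/video-captioning-models-in-Pytorch"
)

def fetch_quality(endpoint: str) -> dict:
    """Fetch the quality data and parse it as JSON.

    Assumes a JSON response body; raises urllib.error.HTTPError
    on non-2xx status or json.JSONDecodeError on a non-JSON body.
    """
    with urlopen(endpoint) as resp:
        return json.load(resp)
```

Calling `fetch_quality(url)` counts against the 100 requests/day anonymous quota noted above.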