Abdelrhman-Yasser/video-content-description
A video content description model that generates text descriptions for unconstrained videos
This tool automatically generates concise text descriptions for any video, even if it covers an unusual or unpredictable topic. You provide a video, and it outputs a sentence or two summarizing the visual content. This is useful for anyone working with large video collections, such as content managers, archivists, or accessibility specialists.
Use this if you need to quickly understand or categorize the content of many videos without watching them all.
Not ideal if you require highly detailed, nuanced, or subjective interpretations of video content.
Note: no commits in the last 6 months.
Stars: 15
Forks: 11
Language: Jupyter Notebook
License: GPL-3.0
Category:
Last pushed: Jul 05, 2019
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Abdelrhman-Yasser/video-content-description"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
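The same endpoint can also be called from Python. Below is a minimal sketch using only the standard library; since the response schema is not documented on this page, no field names are assumed and the code simply prints whatever JSON the API returns:

```python
import json
import urllib.request

# Endpoint taken verbatim from the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/nlp/"
       "Abdelrhman-Yasser/video-content-description")

def fetch_quality_data(url: str = URL, timeout: float = 10.0) -> dict:
    """Fetch the repo-quality record and decode the JSON response.

    The response structure is undocumented here, so the raw decoded
    object is returned as-is rather than picking specific fields.
    """
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(fetch_quality_data(), indent=2))
```

Calls beyond the daily quota will presumably be rejected by the server, so production use should add error handling around the request.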
Higher-rated alternatives
ntrang086/image_captioning: generate captions for images using a CNN-RNN model that is trained on the Microsoft Common...
fregu856/CS224n_project: Neural Image Captioning in TensorFlow.
vacancy/SceneGraphParser: a Python toolkit for parsing captions (in natural language) into scene graphs (as symbolic...
ltguo19/VSUA-Captioning: code for "Aligning Linguistic Words and Visual Semantic Units for Image Captioning", ACM MM 2019
kozodoi/BMS_Molecular_Translation: image-to-text translation of chemical molecule structures with deep learning (top-5% Kaggle solution)