doubledaibo/compcaption_neurips2018
A Neural Compositional Paradigm for Image Captioning
This tool helps researchers and content creators automatically generate diverse and descriptive captions for images. Given an input image, it produces multiple distinct sentences describing the visual content, varying both the sentence structure and the focus of attention. It's designed for those who need to quickly produce varied text descriptions for visual assets.
No commits in the last 6 months.
Use this if you need to generate multiple unique text captions for an image, exploring different ways to describe the same visual information.
Not ideal if you need to caption videos or require very short, keyword-based image tags rather than full sentences.
Stars
9
Forks
—
Language
Lua
License
—
Category
—
Last pushed
Apr 03, 2019
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/doubledaibo/compcaption_neurips2018"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
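The same endpoint can be called from a script. Below is a minimal sketch in Python using only the standard library, with the URL pattern taken from the curl example above; the response schema is not documented on this page, so the code simply decodes whatever JSON the service returns. The helper names (`quality_url`, `fetch_quality`) are illustrative, not part of the API.

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for one repository.

    Response fields are undocumented here, so the result is returned
    as a plain dict for the caller to inspect.
    """
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Reproduces the URL from the curl example:
url = quality_url("nlp", "doubledaibo", "compcaption_neurips2018")
# fetch_quality("nlp", "doubledaibo", "compcaption_neurips2018")
```

The anonymous tier (100 requests/day) needs no key; for the 1,000/day tier, obtaining and attaching a key is left out here since the page does not show how the key is passed.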
Higher-rated alternatives
ntrang086/image_captioning
generate captions for images using a CNN-RNN model that is trained on the Microsoft Common...
fregu856/CS224n_project
Neural Image Captioning in TensorFlow.
vacancy/SceneGraphParser
A python toolkit for parsing captions (in natural language) into scene graphs (as symbolic...
ltguo19/VSUA-Captioning
Code for "Aligning Linguistic Words and Visual Semantic Units for Image Captioning", ACM MM 2019
Abdelrhman-Yasser/video-content-description
Video content description model for generating descriptions for unconstrained videos