ltguo19/VSUA-Captioning

Code for "Aligning Linguistic Words and Visual Semantic Units for Image Captioning", ACM MM 2019

Quality score: 41 / 100 (Emerging)
This project helps computer-vision researchers automatically generate descriptive captions for images. It takes raw images and pre-extracted visual features as input, along with existing image-caption pairs for training, and produces human-readable sentences that accurately describe the image content. It is aimed at researchers working at the intersection of machine vision and natural language processing.

258 stars. No commits in the last 6 months.

Use this if you are a researcher focused on advancing image captioning models and want to build upon a system that aligns linguistic words with visual semantic units.

Not ideal if you need an out-of-the-box solution for generating image captions in a production environment or if you do not have access to GPU hardware and Python development experience.

image-captioning computer-vision natural-language-generation machine-learning-research deep-learning
Status: Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 258
Forks: 24
Language: Python
License: MIT
Category: image-captioning
Last pushed: Oct 18, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/ltguo19/VSUA-Captioning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
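The curl command above follows a predictable URL shape. A minimal Python sketch for composing that endpoint, assuming the `/api/v1/quality/<category>/<owner>/<repo>` path pattern generalizes to other repositories (only the VSUA-Captioning URL is confirmed by this page):

```python
# Build the quality-API URL for a repository.
# Assumption: the path pattern /api/v1/quality/<category>/<owner>/<repo>
# generalizes beyond the single example shown on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the API endpoint for a repo's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"

print(quality_url("nlp", "ltguo19", "VSUA-Captioning"))
# → https://pt-edge.onrender.com/api/v1/quality/nlp/ltguo19/VSUA-Captioning
```

Fetch the URL with any HTTP client; unauthenticated requests are limited to 100 per day per the note above.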