inuwamobarak/Image-captioning-ViT
Image Captioning Vision Transformers (ViTs) are transformer models that generate descriptive captions for images by combining the strengths of Transformers and computer vision. This project leverages state-of-the-art pre-trained ViT models to produce those captions.
This project helps generate descriptive captions for images, automating a task that typically requires manual observation and typing. You input an image, and it outputs a human-like textual description of what's in the picture. This is useful for anyone working with large collections of images, such as content managers, digital archivists, or e-commerce professionals.
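As a sketch of the general approach (not the notebook's exact code; the checkpoint named here, `nlpconnect/vit-gpt2-image-captioning`, is a popular public ViT encoder / GPT-2 decoder model and an assumption on my part), captioning an image with Hugging Face `transformers` looks roughly like this:

```python
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer
from PIL import Image

# Checkpoint is an assumption; the repository's notebook may use a different one.
model_id = "nlpconnect/vit-gpt2-image-captioning"
model = VisionEncoderDecoderModel.from_pretrained(model_id)
processor = ViTImageProcessor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

def caption(path: str) -> str:
    # ViT expects fixed-size RGB input; the processor handles resizing and normalization.
    image = Image.open(path).convert("RGB")
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    # Beam search tends to give more fluent captions than greedy decoding.
    output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Feeding `caption("photo.jpg")` any image file returns a short textual description of its contents.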
No commits in the last 6 months.
Use this if you need to automatically generate clear, concise text descriptions for a collection of images.
Not ideal if you're looking for a ready-to-use application without any development or coding experience, as this is a developer-focused tool.
Stars
40
Forks
5
Language
Jupyter Notebook
License
—
Category
—
Last pushed
Oct 14, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/inuwamobarak/Image-captioning-ViT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
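The curl command above can also be called from Python. A minimal sketch, assuming the endpoint returns JSON (the response shape and the bearer-token header name are assumptions, not documented here):

```python
import json
from urllib.request import Request, urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/generative-ai"

def build_url(owner: str, repo: str) -> str:
    # Path layout taken from the curl example above: /{owner}/{repo}.
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, api_key: str = None) -> dict:
    req = Request(build_url(owner, repo))
    if api_key:
        # Header name is a guess; check the API docs for the real scheme.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urlopen(req) as resp:
        return json.load(resp)
```

For example, `fetch_quality("inuwamobarak", "Image-captioning-ViT")` would hit the same URL as the curl command, within the 100 requests/day keyless limit.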
Higher-rated alternatives
stevan-milovanovic/LiteRT-for-Android
Image Classification, Image Captioning and LLM inference with LiteRT
ABX9801/Image-Caption-Generator
A Web App to generate caption for Images. VGG-16 Model is used to encode the images and...
ekkonwork/qwen3-vl-autotagger-cli
Standalone CLI for Qwen3-VL auto-tagging with optional XMP embedding.
floydhub/pix2code-template
Build a neural network to code a basic HTML and CSS website based on a picture of a design mockup.
regiellis/ecko-cli
ecko-cli is a simple CLI tool that streamlines the process of processing images in a directory,...