FoundationVision/Groma

[ECCV2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization

Score: 41 / 100 (Emerging)

Groma is a grounded multimodal large language model: given an image and a natural-language query describing something in it, the model returns a precise bounding box for the referenced region, or a textual response grounded in the visual context. It is aimed at researchers and developers working on vision-language tasks such as image assistants and visual search.
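To make that input/output contract concrete, here is a minimal Python sketch. It is illustrative only: run_groma is a hypothetical stand-in, not the repository's actual API; the real entry points live in the repo's own inference scripts.

from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixel coordinates

def run_groma(image_path: str, query: str) -> Tuple[str, List[Box]]:
    """Hypothetical stand-in for Groma inference.

    A real call would load a Groma checkpoint and run the model on the
    image; this stub only illustrates the input/output contract described
    above and raises if actually called.
    """
    raise NotImplementedError("use the repository's own inference scripts")

# Intended shape of a call: an image plus a referring expression go in,
# a grounded answer plus the bounding boxes it refers to come out.
# answer, boxes = run_groma("kitchen.jpg", "the red mug next to the sink")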

584 stars. No commits in the last 6 months.

Use this if you need a multimodal AI that can accurately pinpoint specific objects or areas within an image based on a detailed text description.

Not ideal if your primary goal is general image classification or object detection without the need for language-based regional grounding.

Tags: visual-language-processing, image-understanding, referring-expression-comprehension, computer-vision, multimodal-ai
Status: Stale (no commits in 6 months) · No package published · No known dependents

Score breakdown (the four 25-point subscores sum to the overall score: 0 + 10 + 16 + 15 = 41):

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 15 / 25

Stars: 584
Forks: 45
Language: Python
License: Apache-2.0
Last pushed: Jun 07, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/FoundationVision/Groma"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
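For scripted access, the same endpoint can be fetched from Python using only the standard library. This is a minimal sketch: the URL is taken verbatim from the curl command above, while the X-API-Key header name for the keyed tier is an assumption, since the listing does not document how the free key is supplied.

import json
import urllib.request

# Endpoint copied verbatim from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/FoundationVision/Groma"

def fetch_quality(api_key: str | None = None) -> dict:
    """Fetch the quality record; keyless access is capped at 100 requests/day."""
    req = urllib.request.Request(URL)
    if api_key:
        # ASSUMPTION: the listing does not say how the key is passed;
        # an X-API-Key header is a common convention, used here as a guess.
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(fetch_quality(), indent=2))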