kyegomez/PALI3

Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger".

Score: 51/100 (Established)

This project helps researchers and developers explore and utilize a powerful vision-language model. It takes an image and a text prompt as input and generates relevant text output, combining visual and textual information. This is designed for AI researchers and practitioners building applications that require advanced understanding of both images and language.

146 stars. Available on PyPI.

Use this if you are a researcher or developer who needs to experiment with or integrate a state-of-the-art vision-language model for tasks like image captioning or visual question answering.

Not ideal if you are a non-technical user looking for a ready-to-use application, as this requires programming knowledge to implement.

vision-language-models image-captioning visual-question-answering multimodal-ai deep-learning-research
Maintenance: 10/25
Adoption: 10/25
Maturity: 25/25
Community: 6/25
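The page does not state how the overall score is derived, but the four sub-scores (each out of 25) sum exactly to the 51/100 total. A quick check:

```python
# Sub-scores as shown on this page, each out of 25.
subscores = {
    "Maintenance": 10,
    "Adoption": 10,
    "Maturity": 25,
    "Community": 6,
}

total = sum(subscores.values())
print(total)  # 51, matching the 51/100 overall score
```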


Stars: 146
Forks: 4
Language: Python
License: MIT
Last pushed: Jan 17, 2026
Commits (30d): 0
Dependencies: 5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/PALI3"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
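For scripted access, the curl command above can be reproduced in Python. This is a minimal sketch: the endpoint and the `transformers/kyegomez/PALI3` path come from this page, but the shape of the JSON response is not documented here, so the example only builds the URL and fetches the raw record without assuming any field names.

```python
import json
import urllib.request

# Base endpoint as shown on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(registry: str, repo: str) -> str:
    """Build the quality-API URL for a registry namespace and repo slug."""
    return f"{API_BASE}/{registry}/{repo}"


def fetch_quality(registry: str, repo: str) -> dict:
    """Fetch and decode one quality record (requires network access;
    subject to the 100 requests/day unauthenticated limit)."""
    with urllib.request.urlopen(quality_url(registry, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("transformers", "kyegomez/PALI3"))
```

With a free API key the daily limit rises to 1,000 requests; how the key is passed (header or query parameter) is not specified on this page.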

Compare