kyegomez/PALI
Democratization of "PaLI: A Jointly-Scaled Multilingual Language-Image Model"
This project provides the core architecture of a multilingual vision-language model: it takes an image and a text prompt as input and generates text as output. It is aimed at AI developers and researchers who want to build or experiment with advanced multi-modal systems.
No commits in the last 6 months.
Use this if you are an AI researcher or developer building a new multi-modal model and need a robust, scalable architecture for combining vision and language (see the usage sketch below).
Not ideal if you are a non-technical user looking for an out-of-the-box solution or a pre-trained model: the repository provides only the architecture and requires significant technical expertise and further training.
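Intended usage is roughly as follows. This is a minimal sketch, not confirmed API: the Pali class name, constructor defaults, tensor shapes, and forward-pass signature are all assumptions; check the repository's README for the real interface.

import torch
from pali import Pali  # assumed entry point

# Hypothetical usage: shapes and the call signature are illustrative assumptions.
model = Pali()

img = torch.randn(1, 3, 256, 256)               # one RGB image, 256x256
prompt = torch.randint(0, 256, (1, 1024))       # tokenized text prompt
output_text = torch.randint(0, 256, (1, 1024))  # target token ids

# The forward pass is assumed to return logits over the vocabulary
# for the output sequence.
logits = model(img, prompt, output_text)
print(logits.shape)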
Stars: 94
Forks: 8
Language: Python
License: MIT
Category: transformers
Last pushed: Mar 20, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/PALI"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
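If you would rather hit the endpoint from Python than from curl, here is a minimal sketch using only the standard library. The endpoint URL is copied from the curl example above; the response schema is not documented here, so the snippet just pretty-prints whatever JSON comes back rather than assuming field names.

import json
import urllib.request

# Same endpoint as the curl example; no key needed up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/PALI"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# The schema is undocumented here, so inspect the raw payload.
print(json.dumps(data, indent=2))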
Higher-rated alternatives
kyegomez/RT-X
PyTorch implementation of the models RT-1-X and RT-2-X from the paper "Open X-Embodiment:...
kyegomez/PALI3
Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER"
chuanyangjin/MMToM-QA
[🏆Outstanding Paper Award at ACL 2024] MMToM-QA: Multimodal Theory of Mind Question Answering
lyuchenyang/Macaw-LLM
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
Muennighoff/vilio
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle