kyegomez/qformer
Implementation of the Q-Former from BLIP-2 in Zeta Lego blocks.
This project provides a building block for AI developers creating systems that understand both images and text. It takes visual data (such as an image) and textual data (such as a description or query), processes them together, and outputs a combined representation usable for tasks such as image captioning, visual question answering, or multimodal search. It is aimed at machine learning engineers and researchers working on advanced AI applications.
No commits in the last 6 months. Available on PyPI.
Use this if you are a machine learning engineer building a multimodal AI model and need a component that processes and relates information from both images and text; see the usage sketch below.
Not ideal if you are an end user looking for a pre-built application or a low-code tool for image and text analysis: this is a foundational model component that requires programming expertise.
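For orientation, here is a minimal usage sketch in Python. The constructor arguments and tensor shapes are assumptions based on the README-style examples typical of Zeta modules, not a documented contract; verify names, argument order, and expected shapes against the repository before relying on them.

# Minimal usage sketch; signature and shapes are assumptions, check the repo README.
import torch
from qformer import QFormer  # pip install qformer

text = torch.randn(1, 32, 512)  # dummy text features: (batch, seq_len, dim) -- assumed shape
img = torch.randn(1, 32, 512)   # dummy image features, assumed pre-embedded to the same dim

# Assumed positional arguments: dim, heads, depth, dropout, and two block-depth values
model = QFormer(512, 8, 8, 0.1, 2, 2)

out = model(text, img)  # fused multimodal representation
print(out.shape)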
Stars
48
Forks
2
Language
Python
License
MIT
Category
Transformers
Last pushed
Nov 11, 2024
Commits (30d)
0
Dependencies
4
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/qformer"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
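For programmatic access, something along these lines should work. The JSON field names below are assumptions inferred from the stats shown on this page, not a documented schema; inspect the actual payload to confirm.

# Sketch of an API call; field names are assumptions, not a documented schema.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/qformer"
resp = requests.get(url, timeout=10)  # no key needed for up to 100 requests/day
resp.raise_for_status()

data = resp.json()
print(data.get("stars"), data.get("forks"), data.get("license"))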
Higher-rated alternatives
kyegomez/RT-X
Pytorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment:...
kyegomez/PALI3
Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER"
chuanyangjin/MMToM-QA
[🏆Outstanding Paper Award at ACL 2024] MMToM-QA: Multimodal Theory of Mind Question Answering
lyuchenyang/Macaw-LLM
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
Muennighoff/vilio
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle