kyegomez/qformer

Implementation of the Q-Former from BLIP-2 in Zeta Lego blocks.

Score: 38 / 100 (Emerging)

This project provides a building block for AI developers creating systems that understand both images and text. It takes in visual data (such as an image) and textual data (such as a description or query), processes them together, and outputs a combined representation that can be used for tasks such as image captioning, visual question answering, or multimodal search. Its audience is machine learning engineers and researchers working on advanced AI applications.

No commits in the last 6 months. Available on PyPI.

Use this if you are a machine learning engineer building a multimodal AI model and need a component to effectively process and relate information from both images and text.

Not ideal if you are an end-user looking for a pre-built application or a low-code tool for image and text analysis, as this is a foundational AI model component requiring programming expertise.
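
To make the intended usage concrete, here is a minimal sketch in the spirit of the repository's examples. The qformer import path, the QFormer class, and its constructor arguments are assumptions based on the project's README-style usage and may differ from the installed package; check the source before relying on them.

import torch
from qformer import QFormer  # assumed import path

# Dummy text and image token embeddings: (batch, sequence, dim)
text = torch.randn(1, 32, 512)
img = torch.randn(1, 32, 512)

# Assumed constructor: dim, heads, depth, dropout, and two block depths
model = QFormer(512, 8, 8, 0.1, 2, 2)

# Fuse both modalities into a combined representation
out = model(text, img)
print(out.shape)  # a (batch, sequence, dim) tensor is expected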

Multimodal AI · Deep Learning · Computer Vision · Natural Language Processing · AI Model Development
Stale: 6 months since last commit

Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 25 / 25
Community: 5 / 25

The four subscores sum to the overall score of 38 / 100.


Stars: 48
Forks: 2
Language: Python
License: MIT
Last pushed: Nov 11, 2024
Commits (30d): 0
Dependencies: 4

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/qformer"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
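
For programmatic access, a minimal Python equivalent of the curl call above might look like the following. The endpoint URL is taken from the example; the response schema is not documented here, so the sketch simply prints whatever JSON the API returns.

import requests

# Same endpoint as the curl example above
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/qformer"

resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g. rate limiting)
print(resp.json())       # response schema undocumented; print raw JSON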