kyegomez/MultiModalCrossAttn

An open source implementation of the cross-attention mechanism from the paper "Jointly Training Large Autoregressive Multimodal Models".

Quality score: 26 / 100 (Experimental)

This project offers a pre-built component for developers working on advanced AI models that combine different types of data, like text and images. It provides the core mechanism to allow these distinct data streams to 'understand' each other. Developers can integrate this to create AI that processes information more holistically.
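
For illustration, here is a minimal sketch of the cross-attention idea in plain PyTorch. The class name, shapes, and interface below are assumptions for this example, not the repo's actual API.

import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Lets one stream (e.g. text tokens) attend over another (e.g. image patches)."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # Queries come from the text stream; keys and values come from the image
        # stream, so each text token can pull in relevant visual context.
        attended, _ = self.attn(query=text, key=image, value=image)
        return self.norm(text + attended)  # residual keeps the original text signal

# Toy usage: batch of 2, 16 text tokens, 49 image patches, 512-dim embeddings.
text = torch.randn(2, 16, 512)
image = torch.randn(2, 49, 512)
out = CrossAttention(dim=512)(text, image)
print(out.shape)  # torch.Size([2, 16, 512])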

No commits in the last 6 months.

Use this if you are an AI/ML engineer or researcher building large autoregressive models that need to process and integrate information from multiple modalities, such as text and images, to generate unified outputs.

Not ideal if you are looking for an end-user application or a complete multimodal AI system, as this is a foundational building block for developers.

Tags: AI-development, machine-learning-engineering, multimodal-AI, deep-learning-research, natural-language-processing
Flags: Stale (6 months), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 3 / 25

Stars: 37
Forks: 1
Language: Python
License: MIT
Last pushed: Mar 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/MultiModalCrossAttn"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
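
As a rough sketch, the same endpoint can also be queried from Python with the requests library; this assumes the endpoint returns JSON, so check the actual response shape.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/MultiModalCrossAttn"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # raise on HTTP errors (e.g. rate limiting)
print(resp.json())       # assumed JSON payload carrying the quality data shown above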