kyegomez/GATS

Implementation of GATS from the paper "GATS: Gather-Attend-Scatter" in PyTorch and Zeta

Quality score: 20 / 100 (Experimental)

This project offers a specialized building block for deep learning models that can process and combine information from various data types like text, images, audio, and video simultaneously. It takes in these different data streams and outputs a unified representation that captures relationships across them. This is for AI researchers and machine learning engineers developing advanced multi-modal AI systems.
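
To make the idea concrete, below is a minimal sketch of a gather-attend-scatter style fusion block written in plain PyTorch. It illustrates the concept only and is not the API exposed by this repository or by Zeta; the class name, dimensions, and the use of nn.MultiheadAttention are assumptions.

    # Illustrative sketch of a gather-attend-scatter style fusion block.
    # NOT the kyegomez/GATS API; names and shapes are assumptions.
    import torch
    from torch import nn

    class GatherAttendScatterSketch(nn.Module):
        def __init__(self, dim: int, heads: int = 8):
            super().__init__()
            # Self-attention over the concatenated ("gathered") token stream.
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, *modalities: torch.Tensor) -> tuple[torch.Tensor, ...]:
            # Gather: concatenate tokens from every modality along the sequence axis.
            lengths = [m.shape[1] for m in modalities]
            gathered = torch.cat(modalities, dim=1)        # (batch, total_len, dim)
            # Attend: every token attends to tokens from all modalities.
            attended, _ = self.attn(gathered, gathered, gathered)
            fused = self.norm(gathered + attended)
            # Scatter: split the fused sequence back into per-modality streams.
            return tuple(torch.split(fused, lengths, dim=1))

    if __name__ == "__main__":
        block = GatherAttendScatterSketch(dim=512)
        text = torch.randn(2, 16, 512)    # e.g. text tokens
        image = torch.randn(2, 64, 512)   # e.g. image patches
        audio = torch.randn(2, 32, 512)   # e.g. audio frames
        text_out, image_out, audio_out = block(text, image, audio)
        print(text_out.shape, image_out.shape, audio_out.shape)
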

No commits in the last 6 months.

Use this if you are building complex AI models that need to understand and integrate information from multiple diverse data sources at once.

Not ideal if you are working with single-modality data (e.g., only text or only images) or are not a machine learning practitioner.

multi-modal-ai deep-learning-architecture ai-research machine-learning-engineering generative-ai
Stale 6m · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 8
Forks:
Language: Python
License: MIT
Last pushed: Nov 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/GATS"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
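
For scripted access, here is a minimal Python sketch using only the standard library. It calls the endpoint shown above, assumes the response is JSON, and relies on no particular field names.

    import json
    from urllib.request import urlopen

    # Fetch the quality report for kyegomez/GATS and pretty-print it.
    url = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/GATS"
    with urlopen(url) as resp:
        data = json.load(resp)          # assumes the endpoint returns JSON
    print(json.dumps(data, indent=2))   # no particular response fields are assumed
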