kyegomez/M2PT

Implementation of M2PT in PyTorch from the paper: "Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities"

Score: 36 / 100 (Emerging)

This project helps machine learning engineers and researchers improve the performance of their existing transformer models by integrating information from other data types, even if that data is normally considered 'irrelevant.' It takes as input an existing transformer model and linear layers from other models, and outputs a refined transformer model with enhanced capabilities. This is for professionals building advanced AI models that process multiple forms of data, such as text and images.
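The weight-merging idea behind this approach can be sketched roughly as follows. This is a minimal illustration, not the repository's actual API: the class name `CrossModalLinear`, the zero-initialized learnable scale, and the shape check are all assumptions made for the example. It merges a target linear layer with an auxiliary linear layer taken from a model trained on a different modality:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalLinear(nn.Module):
    """Illustrative sketch (hypothetical class, not the repo's API):
    augment a target linear layer with an auxiliary layer from another
    modality, computing y = x @ (W + scale * W_aux)^T + b."""

    def __init__(self, target: nn.Linear, auxiliary: nn.Linear):
        super().__init__()
        # The two layers must have matching shapes to be merged.
        assert target.weight.shape == auxiliary.weight.shape
        self.target = target
        # Freeze a copy of the auxiliary weights as a parameter.
        self.aux_weight = nn.Parameter(auxiliary.weight.detach().clone())
        # Learnable mixing scale; starting at zero leaves the target
        # layer's behavior unchanged at initialization (an assumption).
        self.scale = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        merged = self.target.weight + self.scale * self.aux_weight
        return F.linear(x, merged, self.target.bias)
```

With the scale initialized to zero, the merged layer initially reproduces the target layer exactly, and training can then learn how much auxiliary-modality information to blend in.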

No commits in the last 6 months. Available on PyPI.

Use this if you are developing AI models that process different types of data (like text and images) and want to boost your model's accuracy by cleverly incorporating insights from auxiliary data sources.

Not ideal if you are looking for a complete, out-of-the-box multimodal AI model, as this project provides a technique for enhancing existing models rather than a standalone solution.

multimodal-ai deep-learning-optimization transformer-architectures model-enhancement
Stale: 6 months
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 25 / 25
Community: 6 / 25


Stars: 14
Forks: 1
Language: Python
License: MIT
Last pushed: Mar 11, 2024
Commits (30d): 0
Dependencies: 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/M2PT"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.