zerovl/ZeroVL

[ECCV2022] Contrastive Vision-Language Pre-training with Limited Resources

Score: 34 / 100 (Emerging)

This project helps machine learning researchers train models that understand both images and text, even with limited compute and data. You feed it image-text pairs, and it produces a contrastively pre-trained vision-language model suited to tasks such as image-text retrieval and image classification. It is designed for academic researchers and practitioners who need strong vision-language models without access to supercomputers or massive datasets.
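
For context, ZeroVL follows the CLIP-style contrastive pre-training recipe: embeddings of matched image-text pairs are pulled together while mismatched pairs in the same batch are pushed apart. The sketch below shows that generic objective in PyTorch; names and shapes are illustrative only, and this is not ZeroVL's actual API.

# Generic sketch of CLIP-style contrastive pre-training on image-text pairs.
# Illustrates the technique the paper builds on; NOT ZeroVL's actual code.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_emb, text_emb: (batch, dim) outputs of any two encoders
    (shapes are hypothetical; any matching encoder pair works).
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # (batch, batch) cosine similarities; matched pairs sit on the diagonal.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Classify in both directions: image-to-text and text-to-image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2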

No commits in the last 6 months.

Use this if you are a machine learning researcher or practitioner aiming to pre-train vision-language models but are constrained by typical academic or small-scale industry computing environments and data availability.

Not ideal if you already have access to vast computational resources (hundreds of GPUs, specialized TPUs) and billion-scale datasets for model training.

vision-language-modeling deep-learning-research multi-modal-ai resource-constrained-ml representation-learning
Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 46
Forks: 5
Language: Python
License: MIT
Last pushed: Sep 29, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zerovl/ZeroVL"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
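
A minimal sketch of the same call from Python, assuming the endpoint returns JSON. The response field layout is not documented here, so the payload is printed rather than parsed; if you have an API key, pass it per the service docs (the mechanism is not shown).

# Minimal sketch of fetching this quality data from Python.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/zerovl/ZeroVL"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()
print(data)  # inspect the returned quality scores and repo stats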