filipbasara0/simple-clip

A minimal but effective implementation of CLIP (Contrastive Language-Image Pretraining) in PyTorch

Score: 50 / 100 (Established)

This project helps machine learning engineers and researchers quickly train models that understand both images and text. You feed it a large dataset of images paired with their descriptions, and it produces a trained model that links visual content with natural language. The resulting model can then perform tasks like image classification or visual reasoning without task-specific fine-tuning.
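At the core of CLIP-style pretraining is a symmetric contrastive objective: matched image-text pairs in a batch are pulled together while mismatched pairs are pushed apart. The sketch below is illustrative only and assumes pre-computed embeddings; the function name and signature are not this repo's API.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Normalize so dot products become cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Pairwise similarity logits: row i compares image i to every caption.
    logits = image_emb @ text_emb.t() / temperature
    # Matched pairs sit on the diagonal, so targets are 0..N-1.
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: image->text and text->image directions.
    loss_i = F.cross_entropy(logits, targets)
    loss_t = F.cross_entropy(logits.t(), targets)
    return (loss_i + loss_t) / 2
```

During real training, `image_emb` and `text_emb` would come from the image and text encoders applied to the same batch of pairs.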

No commits in the last 6 months. Available on PyPI.

Use this if you are an AI/ML practitioner looking to pre-train a versatile model to understand visual and textual relationships, especially for zero-shot learning tasks.

Not ideal if you're a business user looking for a ready-to-use image recognition application without any machine learning setup or training.
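The zero-shot use case mentioned above works by scoring an image embedding against text embeddings of candidate class prompts (e.g. "a photo of a dog"). This is a minimal sketch of that idea with made-up helper names, not the project's actual interface.

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_emb, class_text_embs):
    # Cosine similarity between the image and each class-prompt embedding.
    image_emb = F.normalize(image_emb, dim=-1)
    class_text_embs = F.normalize(class_text_embs, dim=-1)
    sims = image_emb @ class_text_embs.t()
    # Softmax over classes gives a probability-like score per label.
    probs = sims.softmax(dim=-1)
    return probs.argmax(dim=-1), probs
```

In practice the class embeddings come from encoding one prompt per label with the trained text encoder, so new label sets need no retraining.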

computer-vision natural-language-processing zero-shot-learning image-classification model-pretraining
Stale: 6 months
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 25 / 25
Community: 17 / 25


Stars: 42
Forks: 8
Language: Jupyter Notebook
License: MIT
Last pushed: Feb 14, 2024
Commits (30d): 0
Dependencies: 8

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/filipbasara0/simple-clip"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.