leaderj1001/CLIP

CLIP: Connecting Text and Image (Learning Transferable Visual Models From Natural Language Supervision)

Quality score: 39 / 100 (Emerging)

This project helps machine learning engineers and researchers evaluate how well a model connects images with text descriptions. You input an image dataset and the corresponding text labels, and it outputs accuracy metrics showing how effectively the model understands and links visual content with natural language. This is useful for those developing or assessing AI models that need to interpret both images and text.
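To make the evaluation concrete, here is a minimal sketch of how a CLIP-style accuracy metric is typically computed: both image and text embeddings are L2-normalized, their scaled dot products form a similarity matrix, and each image counts as correct if its highest-scoring text is the one at the matching index. The function name, temperature value, and random embeddings are illustrative assumptions, not this repository's actual API.

```python
import numpy as np

def clip_style_accuracy(image_emb, text_emb, temperature=0.07):
    """Illustrative CLIP-style matching metric (not this repo's API):
    L2-normalize both embedding sets, take temperature-scaled dot
    products, and count how often each image's best-matching text
    is the one at the same index (its paired label)."""
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # shape: (n_images, n_texts)
    predictions = logits.argmax(axis=1)     # best text index per image
    return (predictions == np.arange(len(img))).mean()

# Toy check: identical embeddings should match perfectly.
emb = np.random.default_rng(0).normal(size=(4, 8))
print(clip_style_accuracy(emb, emb))  # 1.0
```

In practice the two embedding sets would come from the model's image and text encoders; the metric itself is just this argmax over cosine similarities.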

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher who needs to evaluate the performance of models designed to link visual information with textual descriptions.

Not ideal if you are looking for a ready-to-use application for everyday tasks like image captioning or image search without programming.

Tags: machine-learning-evaluation · computer-vision · natural-language-processing · AI-model-assessment · multimodal-AI

Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 83
Forks: 11
Language: Python
License: MIT
Last pushed: Jan 19, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/leaderj1001/CLIP"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
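For programmatic access, the endpoint above can be called from Python. The helper below only assumes the path shape visible in the sample curl command (ecosystem/owner/repo); the response schema is not documented here, so the fetch function simply returns the parsed JSON.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, owner, repo):
    """Build the quality-score endpoint URL. The path shape is
    inferred from the sample curl command above."""
    return f"{BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(ecosystem, owner, repo):
    """Fetch and parse the JSON quality data (schema not documented)."""
    with urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("transformers", "leaderj1001", "CLIP"))
# https://pt-edge.onrender.com/api/v1/quality/transformers/leaderj1001/CLIP
```

Unauthenticated calls are limited to 100 requests per day, so cache responses if you poll many repositories.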