CoIR-team/coir

(ACL 2025 Main) A Comprehensive Benchmark for Code Information Retrieval.

41 / 100 (Emerging)

This project provides a standardized benchmark for evaluating how well AI models find relevant code snippets. It collects a range of code retrieval datasets and measures how accurately models match natural language queries to code or retrieve similar code. It is intended for AI researchers and developers building code search engines or code-understanding models.
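A minimal sketch of what an evaluation run might look like with the project's Python package. The API below (get_tasks, COIR, and a dense-encoder model wrapper) is assumed from the upstream README and is not documented on this page; names may differ between versions.

# Assumed coir-eval usage; identifiers follow the upstream README and may differ.
import coir
from coir.data_loader import get_tasks
from coir.evaluation import COIR
from coir.models import YourCustomDEModel

model_name = "intfloat/e5-base-v2"

# Wrap the embedding model to be evaluated as a dense retrieval encoder.
model = YourCustomDEModel(model_name=model_name)

# Load one of the benchmark's code retrieval tasks.
tasks = get_tasks(tasks=["codetrans-dl"])

# Run the evaluation and write the retrieval metrics to an output folder.
evaluation = COIR(tasks=tasks, batch_size=128)
results = evaluation.run(model, output_folder=f"results/{model_name}")
print(results)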

148 stars. No commits in the last 6 months.

Use this if you are developing or evaluating AI models designed to search for or understand code, and you need a standardized benchmark to measure their performance.

Not ideal if you are an end-user simply looking to find code snippets or use an existing code search tool.

AI model evaluation · code search · natural language processing for code · information retrieval · AI development
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 13 / 25


Stars: 148

Forks: 14

Language: Python

License: Apache-2.0

Last pushed: Jun 30, 2025

Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/CoIR-team/coir"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
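The same endpoint can be called from a script. The sketch below assumes the endpoint returns JSON; the response fields are not documented on this page, so the result is simply printed.

import requests

# Fetch the quality data for CoIR-team/coir (assumes a JSON response).
url = "https://pt-edge.onrender.com/api/v1/quality/rag/CoIR-team/coir"
response = requests.get(url, timeout=10)
response.raise_for_status()
data = response.json()
print(data)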