CoIR-team/coir
(ACL 2025 Main) A Comprehensive Benchmark for Code Information Retrieval.
This project provides a standardized way to evaluate how well AI models retrieve relevant code. It bundles multiple code-retrieval datasets and measures how accurately models match natural-language queries to code or find similar code snippets. It is intended for AI researchers and developers building code search engines or code-understanding models.
148 stars. No commits in the last 6 months.
Use this if you are developing or evaluating AI models designed to search for or understand code, and you need a standardized benchmark to measure their performance.
Not ideal if you are an end-user simply looking to find code snippets or use an existing code search tool.
Stars: 148
Forks: 14
Language: Python
License: Apache-2.0
Category:
Last pushed: Jun 30, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/CoIR-team/coir"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
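For programmatic access, the curl command above can be wrapped in a small Python helper. This is a minimal sketch: only the endpoint URL comes from the example; the JSON response shape is an assumption, so callers should inspect the payload themselves.

```python
# Hypothetical helper around the pt-edge quality API shown above.
# Only the endpoint path is taken from the curl example; the JSON
# response structure is an assumption.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch and decode the quality record for a repository.

    Requires network access; raises urllib.error.URLError on failure.
    Anonymous use is rate-limited (100 requests/day per the note above).
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)
```

Usage: `fetch_quality("CoIR-team", "coir")` requests the same URL as the curl example.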
Higher-rated alternatives
Kain-90/RAG-Play
An interactive visualization tool for understanding Retrieval-Augmented Generation (RAG) pipelines.
rryam/LumoKit
Swift package for on-device Retrieval-Augmented Generation (RAG)
harvard-lil/warc-gpt
WARC + AI - Experimental Retrieval Augmented Generation Pipeline for Web Archive Collections.
constacts/ragtacts
RAG (Retrieval-Augmented Generation) for Evolving Data
giuliano-t/openAI-to-freeCAD-workflow
This project uses a Large Language Model (LLM) with Retrieval-Augmented Generation (RAG) to...