izam-mohammed/ragrank

🎯 Your free LLM evaluation toolkit: assess factual accuracy, contextual understanding, tone, and more, so you can see how well your LLM applications actually perform.

Score: 52 / 100 (Established)

This toolkit helps you assess the performance of Retrieval-Augmented Generation (RAG) applications. You supply your RAG pipeline's questions, the contexts it retrieves, and its generated responses; it returns metrics on factual accuracy, contextual understanding, and tone. It is aimed at AI/ML engineers, data scientists, and product managers who build and deploy LLM applications and need their RAG systems to deliver high-quality, reliable outputs.

Use this if you are developing RAG-based LLM applications and need to systematically measure and improve their factual accuracy, contextual understanding, and overall response quality.

Not ideal if you want to evaluate foundation LLMs directly rather than the end-to-end performance of a RAG system.

Tags: LLM application development, RAG system evaluation, AI model quality assurance, Natural Language Processing, Generative AI
No package · No dependents

Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 18 / 25

Stars: 45
Forks: 14
Language: Python
License: Apache-2.0
Last pushed: Feb 14, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/izam-mohammed/ragrank"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
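The same request can be made from Python. Below is a minimal sketch using only the standard library; it assumes the endpoint returns a JSON body (the response schema is not documented here), and the `fetch_quality` helper is a hypothetical wrapper around the documented URL pattern:

```python
"""Fetch a repo's quality data from the pt-edge API (sketch)."""
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the endpoint and parse the body as JSON (assumed format)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Uses the anonymous tier (100 requests/day, no key needed).
    data = fetch_quality("izam-mohammed", "ragrank")
    print(json.dumps(data, indent=2))
```

Within the free anonymous tier this works as-is; how a key for the 1,000/day tier is passed (header or query parameter) is not specified here, so that part is left out.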