LLM Evaluation Benchmarking RAG Tools
There are 2 LLM evaluation benchmarking tools tracked. The highest-rated is TJ-Neary/AI-Eval-Pro, scoring 13/100 with 0 stars.
Get all 2 projects as JSON:

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=rag&subcategory=llm-evaluation-benchmarking&limit=20"
```
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
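For programmatic access, the same endpoint can be queried from Python. This is a minimal sketch that only assembles the request URL from the parameters shown in the curl example above; the `api_key` parameter name is an assumption, since the note above only says a free key raises the daily limit.

```python
import urllib.parse

BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def build_quality_url(domain, subcategory, limit=20, api_key=None):
    """Build the dataset-quality endpoint URL used in the curl example.

    api_key is hypothetical -- the actual key parameter name is not
    documented on this page.
    """
    params = {"domain": domain, "subcategory": subcategory, "limit": limit}
    if api_key:
        params["key"] = api_key  # hypothetical parameter name
    return BASE + "?" + urllib.parse.urlencode(params)

url = build_quality_url("rag", "llm-evaluation-benchmarking")
print(url)
```

The resulting URL can then be fetched with any HTTP client (e.g. `urllib.request.urlopen` or `requests.get`) and parsed as JSON.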
| # | Tool | Score | Tier |
|---|------|-------|------|
| 1 | TJ-Neary/AI-Eval-Pro: Commercial LLM evaluation service, hardware-aware benchmarking across text... | 13 | Experimental |
| 2 | nshkrdotcom/rats: Experimental framework for testing and measuring AI system capabilities,... | | Experimental |