Contextualist/lone-arena

Self-hosted LLM chatbot arena, with yourself as the only judge

Score: 34 / 100 (Emerging)

This tool helps you manually compare and evaluate responses from different fine-tuned language models. You input your specific prompts and model endpoints, and it presents you with pairs of responses for you to judge. It's designed for researchers or practitioners who need to assess LLM performance in specialized domains where automated benchmarks or third-party evaluations aren't suitable.
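As an illustration of the workflow described above (not the project's actual interface or configuration format), a minimal blind pairwise-judging loop might look like the sketch below. The endpoint URLs, the query_model helper, the prompt list, and the response JSON shape are all assumptions for the sake of the example.

import random
import requests

# Hypothetical model endpoints and prompts; lone-arena's real
# configuration format may differ.
ENDPOINTS = {
    "model-a": "http://localhost:8001/v1/completions",
    "model-b": "http://localhost:8002/v1/completions",
}
PROMPTS = [
    "Summarize the attached contract clause.",
    "Explain retrieval-augmented generation in one paragraph.",
]

def query_model(url: str, prompt: str) -> str:
    """Send a prompt to a completion endpoint and return its text (assumed schema)."""
    resp = requests.post(url, json={"prompt": prompt, "max_tokens": 256}, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

wins = {name: 0 for name in ENDPOINTS}
for prompt in PROMPTS:
    # Shuffle so the judge cannot tell which model produced which answer.
    names = list(ENDPOINTS)
    random.shuffle(names)
    answers = [query_model(ENDPOINTS[n], prompt) for n in names]
    print(f"\nPrompt: {prompt}")
    for i, text in enumerate(answers, start=1):
        print(f"--- Response {i} ---\n{text}\n")
    choice = int(input("Which response is better? (1/2): ")) - 1
    wins[names[choice]] += 1

print("Wins per model:", wins)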

No commits in the last 6 months.

Use this if you need a confidential, customizable way to human-evaluate multiple large language models on your specific tasks and data.

Not ideal if you prefer fully automated benchmarking or if your evaluation criteria can be adequately addressed by existing public benchmarks.

Tags: LLM evaluation, NLP research, model comparison, private data analysis, domain-specific AI
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 41
Forks: 5
Language: Python
License: MIT
Last pushed: Feb 06, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Contextualist/lone-arena"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
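For a programmatic alternative to the curl call above, the short Python sketch below fetches the same endpoint with requests. The response is assumed to be a JSON object; the field names are not documented here, so the example just prints whatever top-level keys come back.

import requests

# Same URL as the curl example above; the JSON shape is an assumption.
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Contextualist/lone-arena"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
data = resp.json()

# Print whatever top-level fields the API returns.
for key, value in data.items():
    print(f"{key}: {value}")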