dokimos-dev/dokimos

Evaluation Framework for LLM applications in Java and Kotlin

Quality score: 38 / 100 (Emerging)

Dokimos is an evaluation framework for developers building applications with Large Language Models (LLMs) in Java and Kotlin. It lets you assess LLM responses against criteria such as hallucination and relevance, and track performance changes over time. Developers can integrate Dokimos into their existing test suites to ensure their LLM applications maintain high quality before deployment.
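Dokimos's actual API is not documented on this page, but the test-suite integration pattern it describes can be sketched as follows. This is an illustration only: the class, the toy `relevance` scorer, and the threshold are hypothetical and are not Dokimos's API; a real evaluator would score with an LLM judge or embeddings rather than keyword overlap.

```java
// Hypothetical sketch of scoring an LLM response inside a test suite.
// None of these names come from Dokimos; they only illustrate the pattern
// of computing a quality metric and failing the build below a threshold.
import java.util.Arrays;
import java.util.List;

public class RelevanceCheckSketch {
    // Toy relevance metric: fraction of query terms present in the answer.
    static double relevance(String query, String answer) {
        List<String> terms = Arrays.asList(query.toLowerCase().split("\\s+"));
        String lower = answer.toLowerCase();
        long hits = terms.stream().filter(lower::contains).count();
        return (double) hits / terms.size();
    }

    public static void main(String[] args) {
        String query = "capital of France";
        String answer = "The capital of France is Paris.";
        double score = relevance(query, answer);
        // The integration point a framework like Dokimos occupies:
        // a quality assertion that gates deployment.
        if (score < 0.5) {
            throw new AssertionError("Relevance too low: " + score);
        }
        System.out.println("relevance=" + score);
    }
}
```

In a real suite the same check would live in a JUnit test method, so regressions in response quality fail CI like any other test.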

Use this if you are a Java or Kotlin developer building LLM-powered applications and need a robust way to automatically test and evaluate your LLM's responses and agent behavior.

Not ideal if you are not a Java or Kotlin developer, or if you are looking for a no-code solution to evaluate LLMs.

Tags: LLM development · Java development · Kotlin development · Application quality assurance · Automated testing
No package · No dependents
Maintenance 10 / 25
Adoption 6 / 25
Maturity 13 / 25
Community 9 / 25


Stars: 18
Forks: 2
Language: Java
License: MIT
Last pushed: Mar 08, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/dokimos-dev/dokimos"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.