tommasocerruti/detllm

Deterministic-mode checks for LLM inference: measure run/batch variance, generate repro packs, and explain why outputs differ.

Score: 32 / 100 (Emerging)

When you're working with AI models that generate text, you might notice that the same prompt can sometimes give different answers. This tool helps you understand why: give it your prompts and a model, and it shows whether the outputs vary across repeated runs or different batch sizes. It's for AI engineers, data scientists, or anyone developing or testing large language models who needs consistent, predictable results.
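The core check (does the same prompt produce the same output across runs?) can be sketched in a few lines of Python. This is a minimal illustration, not detllm's actual API: `generate` here is a hypothetical stand-in for whatever model call you are testing.

```python
import hashlib
from collections import Counter

def output_fingerprints(generate, prompt, runs=5):
    """Call the model repeatedly on one prompt and hash each output."""
    return [hashlib.sha256(generate(prompt).encode()).hexdigest()
            for _ in range(runs)]

def variance_report(fingerprints):
    """Summarize how many distinct outputs appeared across the runs."""
    counts = Counter(fingerprints)
    return {"runs": len(fingerprints), "distinct_outputs": len(counts)}

# Usage with a deterministic stand-in model: every run hashes identically,
# so a deterministic model shows exactly one distinct output.
report = variance_report(output_fingerprints(lambda p: p.upper(), "hello"))
```

A real harness would also sweep batch sizes and record seeds, library versions, and hardware info for a repro pack, but the run-to-run comparison above is the essential idea.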

Use this if you need to ensure that your language model consistently produces the same output for the same input, or if you need to diagnose why it isn't.

Not ideal if you are troubleshooting determinism issues in distributed or multi-process AI inference environments, or if you don't need strict reproducibility for your model outputs.

Tags: AI model testing, LLM evaluation, AI model reliability, AI debugging, MLOps
No package · No dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 11 / 25
Community: 5 / 25


Stars: 18
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Jan 17, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/tommasocerruti/detllm"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.