yanzai-4/Contrix

Turn any LLM into a reliable local JSON API | Contract-first local LLM API builder for structured JSON outputs, model/prompt comparison, schema validation, repair/retry, model switching, and runtime testing.

Score: 29 / 100 (Experimental)

Contrix helps AI, backend, and full-stack engineering teams integrate Large Language Models (LLMs) into their applications reliably. You define the expected JSON output structure through a user interface; the tool then generates optimized prompts, validates LLM responses against your specification, and handles errors. This lets developers get consistent, structured data from LLMs into their software.
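The validate-then-repair loop described above can be sketched in TypeScript. This is a minimal illustration, not Contrix's actual API: the contract type, the `validateUser` guard, and the `callModel` callback are all hypothetical names invented for this example.

```typescript
// Hypothetical contract: the JSON shape we expect the model to return.
type User = { name: string; age: number };

// Hand-written type guard standing in for schema validation.
function validateUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.name === "string" && typeof v.age === "number";
}

// Parse and validate the model's output; on failure, retry with a
// repair hint appended to the prompt (the core idea behind repair/retry).
async function getStructured(
  callModel: (prompt: string) => Promise<string>,
  prompt: string,
  maxRetries = 2
): Promise<User> {
  let lastError = "";
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const raw = await callModel(
      attempt === 0
        ? prompt
        : `${prompt}\nPrevious output was invalid (${lastError}). Return only valid JSON.`
    );
    try {
      const parsed: unknown = JSON.parse(raw);
      if (validateUser(parsed)) return parsed;
      lastError = "schema mismatch";
    } catch {
      lastError = "not valid JSON";
    }
  }
  throw new Error(`No valid output after ${maxRetries + 1} attempts`);
}
```

Because `callModel` is just a function parameter, the same loop works against any provider, which is the premise behind Contrix's model switching.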

Use this if you are building software that needs to reliably receive structured JSON data from various LLMs and providers, ensuring consistency and handling potential output errors.

Not ideal if you only need one-off, unstructured text outputs from LLMs or if you are comfortable manually managing prompt engineering, schema validation, and error recovery for every LLM call.

Tags: AI-engineering, backend-development, API-integration, software-development, data-extraction
No package · No dependents
Maintenance: 13 / 25
Adoption: 7 / 25
Maturity: 9 / 25
Community: 0 / 25


Stars: 30
Forks:
Language: TypeScript
License: Apache-2.0
Category: output-parsing
Last pushed: Apr 03, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/yanzai-4/Contrix"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
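The curl command above can also be issued from Node. The URL path layout is taken directly from that command; the shape of the JSON payload is not documented here, so the helper below returns it as `unknown`, and treating "prompt-engineering" as a fixed path segment is an assumption.

```typescript
// Build the quality-API URL for a given owner/repo, mirroring the curl
// example (category segment "prompt-engineering" assumed fixed).
function qualityUrl(owner: string, repo: string): string {
  return `https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/${owner}/${repo}`;
}

// Fetch and parse the payload using Node's built-in fetch (Node 18+).
// The response shape is undocumented, so no type is asserted.
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```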