microsoft/interwhen

A framework for verifiable reasoning with language models.

Score: 44 / 100 (Emerging)

This project helps ensure the reliability and accuracy of language model outputs, especially for critical tasks where errors are unacceptable. Given a prompt and a language model, it checks the model's intermediate reasoning steps against explicit rules as they are produced, and it corrects a step, revises it, or halts generation when a check fails. It is designed for AI developers and engineers building high-stakes applications, such as those in law, healthcare, or robotics, where verifiable and correct AI reasoning is paramount. (A sketch of this check-correct-or-halt loop follows the guidance below.)

Use this if you are developing language model-powered applications where individual errors are costly or dangerous, and you need to guarantee that outputs adhere to explicit, verifiable constraints.

Not ideal if your application prioritizes speed over absolute correctness, or if the domain lacks clear, verifiable rules to check against.
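
The repository's actual API is not shown on this page, so the following is only a minimal sketch of the check-correct-or-halt loop described above; every name in it (guarded_generate, generate_step, Rule) is a hypothetical stand-in, not interwhen's real interface.

# Hypothetical sketch of the check-correct-or-halt loop described above.
# None of these names come from interwhen; they stand in for whatever
# the framework's real API exposes.
from typing import Callable, List

Rule = Callable[[str], bool]  # one verifiable constraint on a reasoning step

def guarded_generate(prompt: str,
                     generate_step: Callable[[str], str],
                     rules: List[Rule],
                     max_steps: int = 10,
                     max_retries: int = 3) -> List[str]:
    """Generate reasoning step by step, checking each step against rules.

    A step that violates any rule is regenerated; if no valid step is
    produced after max_retries attempts, generation halts early rather
    than emit an unverified step.
    """
    steps: List[str] = []
    context = prompt
    for _ in range(max_steps):
        for _ in range(max_retries):
            candidate = generate_step(context)
            if all(rule(candidate) for rule in rules):
                break  # candidate passed every check
        else:
            return steps  # no valid step found: stop generation
        steps.append(candidate)
        context += "\n" + candidate
    return steps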

Tags: AI-safety, reliable-AI, language-model-development, responsible-AI, AI-verification
No package · No dependents

Maintenance: 13 / 25
Adoption: 5 / 25
Maturity: 11 / 25
Community: 15 / 25


Stars: 13
Forks: 4
Language: Python
License: MIT
Last pushed: Mar 19, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/microsoft/interwhen"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
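
For scripted use, here is a minimal Python equivalent of the curl call above. It assumes the endpoint returns JSON; the exact response fields are not documented on this page.

# Fetch the quality data for this repo from the API shown above.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/microsoft/interwhen"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g. hitting the rate limit)
print(resp.json())       # response schema is not documented on this page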