AlignTrue/aligntrue

The system of record for AI. Models commoditize, trust doesn’t.

Score: 42 / 100 (Emerging)

This platform gives AI development teams a system of record for AI outputs: it records model decisions and actions and produces an auditable, verifiable history of why an AI made each decision. It is aimed at AI engineers and operations teams who need their models to be explainable, traceable, and reliable in production.

Available on npm.

Use this if you need to prove what your AI saw, why it decided, and what it did, especially for compliance, debugging, or critical operations.

Not ideal if you are looking for a simple model deployment tool without a strong emphasis on auditability or detailed behavioral record-keeping.

Tags: AI Governance, AI Operations, Model Audit, Behavioral Traceability, Production AI Reliability
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 22 / 25
Community: 4 / 25


Stars: 24
Forks: 1
Language: TypeScript
License: MIT
Last pushed: Feb 16, 2026
Commits (30d): 0
Dependencies: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/AlignTrue/aligntrue"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
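The same endpoint can be called programmatically. Below is a minimal TypeScript sketch: the base URL and path pattern come from the curl example above, while the helper names (`buildQualityUrl`, `fetchQuality`) and the use of the global `fetch` are illustrative assumptions, and the response shape is left as `unknown` since it is not documented here.

```typescript
// Base URL taken from the curl example above.
const BASE = "https://pt-edge.onrender.com/api/v1/quality/agents";

// Build the quality-API URL for a given owner/repo pair.
function buildQualityUrl(owner: string, repo: string): string {
  return `${BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

// Fetch the quality record as parsed JSON. The response schema is not
// documented on this page, so the result is typed as `unknown`.
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(buildQualityUrl(owner, repo));
  if (!res.ok) {
    throw new Error(`Quality API request failed: HTTP ${res.status}`);
  }
  return res.json();
}
```

For the repo on this page, `fetchQuality("AlignTrue", "aligntrue")` requests the same URL as the curl command.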