isagawa-co/isagawa-kernel

The self-improving harness for AI coding agents. Drop-in enforcement that the agent builds, follows, and improves — mechanically.

Score: 42 / 100 (Emerging)

This tool helps AI coding agents stay on track and deliver consistent, high-quality code by preventing them from drifting or making repeat mistakes during development tasks. It takes your project repository and, optionally, a domain-specific knowledge specification, then outputs code that adheres to strict internal rules and improves over time. This is for software development teams, QA engineers, and anyone managing AI agents for coding tasks who needs reliable, governed output.

Use this if your AI coding agents frequently ignore instructions, skip steps, or produce inconsistent results on long or complex coding tasks.

Not ideal if you prefer manual oversight and intervention for every decision your AI agent makes, or if your coding tasks are very short and simple.

Tags: AI-assisted-development, software-quality-assurance, test-automation, code-generation, developer-productivity
Package: none · Dependents: none
Maintenance: 13 / 25
Adoption: 5 / 25
Maturity: 11 / 25
Community: 13 / 25

How are scores calculated?
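The overall 42 / 100 matches the plain sum of the four 25-point subscores above (13 + 5 + 11 + 13 = 42). A minimal sketch of that aggregation, assuming simple summation; the real methodology may weight the categories differently:

```python
# Subscores as listed on this page (each out of 25).
# Assumption: the overall score is the unweighted sum of the four
# subscores, which matches the displayed numbers.
subscores = {
    "Maintenance": 13,
    "Adoption": 5,
    "Maturity": 11,
    "Community": 13,
}

overall = sum(subscores.values())  # out of 100
print(overall)  # → 42
```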

Stars: 10
Forks: 2
Language: Python
License: MIT
Last pushed: Mar 18, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/isagawa-co/isagawa-kernel"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.