isagawa-co/isagawa-kernel
The self-improving harness for AI coding agents. Drop-in enforcement that the agent builds, follows, and improves — mechanically.
This tool helps AI coding agents stay on track and deliver consistent, high-quality code by preventing drift and repeat mistakes during development tasks. It takes your project repository and, optionally, a domain-specific knowledge specification, and outputs code that adheres to strict internal rules and improves over time. It is aimed at software development teams, QA engineers, and anyone managing coding agents who needs reliable, governed output.
Use this if your AI coding agents frequently ignore instructions, skip steps, or produce inconsistent results on long or complex coding tasks.
Not ideal if you prefer manual oversight and intervention for every decision your AI agent makes, or if your coding tasks are very short and simple.
Stars: 10
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Mar 18, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/isagawa-co/isagawa-kernel"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
mastersof-ai/harness
Agent runtime with full system prompt control. Define agents in markdown, run them in a terminal...
Chorus-AIDLC/Chorus
The Agent Harness for AI-Human Collaboration, inspired by the AI-DLC (AI-Driven Development Lifecycle)
L-Forster/open-jet
Agent Harness & TUI for Edge devices
inngest/utah
Universally Triggered Agent Harness - An OpenClaw-like Inngest-powered personal agent
jrenaldi79/harness-engineering
Context engineering for coding agents - CLAUDE.md templates, mechanical enforcement, and a field...