YerbaPage/MGDebugger
Multi-Granularity LLM Debugger [ICSE2026]
This tool helps developers debug code generated by Large Language Models (LLMs) by systematically identifying and fixing errors. You supply the LLM-generated code, and it pinpoints issues at multiple granularities, from individual functions up to the whole program. It is aimed at software developers integrating LLMs into their code-generation workflows.
No commits in the last 6 months.
Use this if you are a software developer frequently working with LLMs for code generation and need a reliable way to automatically debug and improve the correctness of the generated code.
Not ideal if you are debugging human-written code or do not use Large Language Models for code generation.
Stars: 96
Forks: 10
Language: Python
License: MIT
Category:
Last pushed: Jul 06, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/YerbaPage/MGDebugger"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
OWASP/www-project-top-10-for-large-language-model-applications
OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)
esbmc/esbmc-ai
Automated Code Repair suite powered by ESBMC and LLMs.
cla7aye15I4nd/PatchAgent
[USENIX Security 25] PatchAgent is an LLM-based practical program repair agent that mimics human...
iSEngLab/AwesomeLLM4APR
[TOSEM 2026] A Systematic Literature Review on Large Language Models for Automated Program Repair
Mohannadcse/AlloySpecRepair
An Empirical Evaluation of Pre-trained Large Language Models for Repairing Declarative Formal...