jpmorganchase/CodeQuest
CodeQUEST is a generalizable framework that leverages LLMs to iteratively evaluate and enhance code quality across multiple dimensions for a variety of programming languages.
This framework helps software developers and engineers automatically assess and improve the quality of their code. You provide existing code; it returns a detailed evaluation across dimensions such as readability and security, then suggests improved code based on that feedback. This is ideal for developers looking to systematically enhance their codebase.
Use this if you are a software developer or engineering manager looking to improve code quality, maintainability, and security across a project or team.
Not ideal if you are looking for a basic linter or a simple formatter rather than in-depth quality analysis and iterative improvement.
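The evaluate-then-improve loop described above can be sketched in a few lines. This is a minimal illustration only, not CodeQUEST's actual API: the evaluator and improver below are stand-in stubs where a real run would prompt an LLM, and the dimension names and function signatures are assumptions for the sketch.

```python
# Minimal sketch of an iterative evaluate -> improve loop in the
# CodeQUEST style. evaluate() and improve() are toy stubs; the real
# framework would call an LLM for both steps.

DIMENSIONS = ["readability", "maintainability", "security", "testability"]

def evaluate(code: str) -> dict:
    """Stub evaluator: score each quality dimension from 1 to 5.
    (A real implementation would prompt an LLM per dimension.)"""
    # Toy heuristic: commented, more elaborate code scores higher.
    base = 2 + ("#" in code) + (len(code) > 80)
    return {dim: min(base, 5) for dim in DIMENSIONS}

def improve(code: str, scores: dict) -> str:
    """Stub improver: a real implementation would ask an LLM to rewrite
    the code using the per-dimension feedback in `scores`."""
    return code + "\n# TODO: clarified per quality feedback"

def quest(code: str, target: float = 4.0, max_iters: int = 5):
    """Iterate evaluate -> improve until the mean score meets the target
    or the iteration budget runs out."""
    mean = 0.0
    for _ in range(max_iters):
        scores = evaluate(code)
        mean = sum(scores.values()) / len(scores)
        if mean >= target:
            break
        code = improve(code, scores)
    return code, mean

final_code, final_score = quest("def f(x): return x * 2")
```

The loop terminates either when the averaged dimension scores reach the target or after a fixed iteration budget, which is the usual shape of LLM-driven refinement loops.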
Stars: 17
Forks: 5
Language: Python
License: Apache-2.0
Last pushed: Feb 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jpmorganchase/CodeQuest"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
Higher-rated alternatives
NVIDIA-NeMo/Curator
Scalable data preprocessing and curation toolkit for LLMs
MigoXLab/dingo
Dingo: A Comprehensive AI Data, Model and Application Quality Evaluation Tool
data-prep-kit/data-prep-kit
Open source project for data preparation for GenAI applications
TheDataStation/pneuma
LLM-Powered Data Discovery System for Tabular Data
cleanlab/cleanlab-studio
Client interface to Cleanlab Studio