yueyueL/ChatGPT-CodeGenAnalysis

Exploring and improving the quality of ChatGPT-generated code for LeetCode programming tasks.

Overall score: 25 / 100 (Experimental)

This project helps software engineering researchers and practitioners evaluate and improve the quality of code generated by large language models such as ChatGPT for competitive programming tasks. It takes ChatGPT-generated Python or Java solutions to LeetCode problems and outputs an analysis of their correctness and quality, including whether they pass tests and details on any errors. The tool is aimed at researchers studying AI-generated code, educators assessing solutions to programming exercises, and anyone who wants to systematically benchmark language models on coding challenges.

No commits in the last 6 months.

Use this if you need to systematically test and analyze the correctness and quality of code snippets generated by ChatGPT for LeetCode-style programming problems.

Not ideal if you're looking for a general-purpose code quality checker for production applications or a tool to help you write code yourself.

Tags: AI-generated code evaluation, competitive programming analysis, software quality research, language model benchmarking, code correctness testing
Flags: No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 12 / 25

Stars: 11
Forks: 2
Language: Python
License: none
Last pushed: Jan 19, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/yueyueL/ChatGPT-CodeGenAnalysis"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
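The same endpoint can be called from a script. A minimal sketch in Python, assuming only the URL shape shown in the curl command above; the response field names (`maintenance`, `adoption`, etc.) are guesses based on the scores listed on this page, not a documented schema:

```python
import urllib.request
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repo quality endpoint URL, escaping path segments.
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

def summarize(payload: dict) -> str:
    # Format the score breakdown. These keys are assumptions drawn from
    # the "Maintenance / Adoption / Maturity / Community" scores above.
    keys = ("maintenance", "adoption", "maturity", "community")
    return ", ".join(f"{k}: {payload.get(k, '?')}" for k in keys)

if __name__ == "__main__":
    url = quality_url("yueyueL", "ChatGPT-CodeGenAnalysis")
    # Fetching and JSON-decoding is left to the caller, e.g.:
    #   import json
    #   with urllib.request.urlopen(url) as resp:
    #       print(summarize(json.load(resp)))
    print(url)
```

Here `quality_url` reproduces the path from the curl example, so the same script works for any `owner/repo` pair tracked by the API.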