yueyueL/ChatGPT-CodeGenAnalysis
Exploring and improving the quality of ChatGPT-generated code for LeetCode programming tasks.
This project helps software engineering researchers and practitioners evaluate and improve the quality of code generated by large language models such as ChatGPT for competitive programming tasks. It takes ChatGPT-generated Python or Java solutions to LeetCode problems and reports on their correctness and quality: whether each solution passes its tests, plus details of any errors. The tool suits researchers studying AI-generated code, educators assessing solutions to programming tasks, and anyone who wants to systematically benchmark language models on coding challenges.
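The core idea — running a generated solution against known test cases and recording pass/fail details — can be sketched as follows. This is a hypothetical minimal harness, not the project's actual code; the `candidate` solution, `run_tests` helper, and test cases are all illustrative assumptions.

```python
# Hypothetical sketch of checking a ChatGPT-generated LeetCode solution
# against test cases (not the project's actual harness).
candidate = """
class Solution:
    def twoSum(self, nums, target):
        seen = {}
        for i, n in enumerate(nums):
            if target - n in seen:
                return [seen[target - n], i]
            seen[n] = i
"""

def run_tests(source, cases):
    """Execute the generated source, then check each (args, expected) case."""
    ns = {}
    exec(source, ns)  # load the generated class into a fresh namespace
    sol = ns["Solution"]()
    results = []
    for args, expected in cases:
        try:
            results.append(sol.twoSum(*args) == expected)
        except Exception:
            # A crash counts as a failure, mirroring how error details
            # would be collected for analysis.
            results.append(False)
    return results

cases = [(([2, 7, 11, 15], 9), [0, 1]), (([3, 2, 4], 6), [1, 2])]
print(run_tests(candidate, cases))  # → [True, True]
```

A real pipeline would additionally capture the exception type and message for each failure, since the project reports details on errors, not just pass/fail.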
No commits in the last 6 months.
Use this if you need to systematically test and analyze the correctness and quality of code snippets generated by ChatGPT for LeetCode-style programming problems.
Not ideal if you're looking for a general-purpose code quality checker for production applications or a tool to help you write code yourself.
Stars: 11
Forks: 2
Language: Python
License: —
Category: —
Last pushed: Jan 19, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/yueyueL/ChatGPT-CodeGenAnalysis"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
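The curl call above can also be scripted. The sketch below builds the endpoint URL from the documented pattern; the `Authorization: Bearer` header for keyed access and any JSON field names are assumptions, since only the URL and rate limits are documented here.

```python
# Hedged sketch of calling the quality API shown above.
import json
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner, repo):
    """Build the endpoint URL for a given GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, api_key=None):
    """Fetch the quality record; pass api_key for the 1,000/day tier.
    The Bearer auth scheme is an assumption, not documented here."""
    req = Request(quality_url(owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed scheme
    with urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("yueyueL", "ChatGPT-CodeGenAnalysis"))
```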
Higher-rated alternatives
openai/openai-cookbook
Examples and guides for using the OpenAI API
rgbkrk/dangermode
Execute IPython & Jupyter from the comforts of chat.openai.com
CogStack/OpenGPT
A framework for creating grounded instruction based datasets and training conversational domain...
Declipsonator/GPTZzzs
Large language model detection evasion through grammar and vocabulary modification.
antononcube/Python-JupyterChatbook
Python package of a Jupyter extension that facilitates the interaction with LLMs.