CodeEff/ECCO

[EMNLP 2024] Code for the paper "ECCO: Can We Improve Model-Generated Code Efficiency Without Sacrificing Functional Correctness?"

Score: 25/100 (Experimental)

This project evaluates and improves the efficiency of code generated by large language models, checking that it runs faster without sacrificing functional correctness. It takes as input either a natural language instruction or existing code with its edit history, and outputs metrics on the correctness and runtime performance of the generated code. Developers and researchers working with code generation models can use it to benchmark and refine their systems.

No commits in the last 6 months.

Use this if you need to compare how efficiently different large language models generate code, or if you want to test whether model-generated code is both correct and fast.

Not ideal if you are looking for a tool to optimize existing human-written code or to debug functional errors in your own applications.

Tags: code-generation · large-language-models · software-benchmarking · code-efficiency · AI-development
No License · Stale (6 months) · No Package · No Dependents

Maintenance: 0/25
Adoption: 4/25
Maturity: 8/25
Community: 13/25

Stars: 7
Forks: 2
Language: Python
License: None
Last pushed: Oct 03, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/CodeEff/ECCO"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
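
If you would rather consume the endpoint from code than from curl, here is a minimal Python sketch that fetches the same URL and pretty-prints the JSON response. It only assumes the endpoint returns JSON (suggested by the API path); nothing about the response schema is assumed, which is why it prints the full payload instead of guessing field names.

import json
import urllib.request

# Same public endpoint as the curl example above; no key needed
# for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/ai-coding/CodeEff/ECCO"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Pretty-print the whole payload; inspect it to see which fields
# (score, subscores, stats) the API actually returns.
print(json.dumps(data, indent=2))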