zorazrw/odex

[EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation

Score: 36 / 100 (Emerging)

This project evaluates how well AI models generate code from natural language descriptions. You provide a dataset of natural language prompts and the code a model generated for them, and it assesses each sample's correctness by executing it against predefined tests. Its primary users are researchers and developers working on code generation models who need to measure model performance accurately.
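The core idea, running generated code against predefined tests and passing only if nothing fails, can be sketched as follows. This is a minimal illustration of execution-based evaluation in general, not ODEX's actual harness; the `evaluate` function and the sample prompts are hypothetical.

```python
# Minimal sketch of execution-based evaluation (hypothetical helper,
# not ODEX's real harness). A candidate solution passes iff it defines
# the required names and its predefined tests raise no exception.
def evaluate(candidate_code: str, test_code: str) -> bool:
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)  # define the candidate function(s)
        exec(test_code, namespace)       # run the predefined assert-based tests
        return True
    except Exception:
        return False

# Two candidates for the same prompt ("add two numbers"):
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
print(evaluate("def add(a, b): return a + b", tests))  # True
print(evaluate("def add(a, b): return a - b", tests))  # False
```

Real harnesses additionally sandbox execution and enforce timeouts, since generated code is untrusted; `exec` in the host process is only suitable for a sketch.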

No commits in the last 6 months.

Use this if you are developing or comparing AI models that generate executable code from human language instructions and need a robust, execution-based evaluation framework.

Not ideal if you are looking for an AI tool to generate code directly for your own projects, rather than evaluating the underlying code generation models themselves.

Tags: AI model evaluation, code generation research, natural language processing, software engineering research, machine learning development
Badges: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 12 / 25


Stars: 49
Forks: 6
Language: Python
License: CC-BY-SA-4.0
Last pushed: Dec 22, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/zorazrw/odex"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.