zorse-project/COBOLEval

Evaluate LLM-generated COBOL

Score: 34 / 100 (Emerging)

COBOLEval evaluates how well Large Language Models can generate COBOL code. You supply an LLM, it produces COBOL solutions to the benchmark's problems, and each solution is checked for functional correctness. It is aimed at developers working with LLMs, especially those tasked with modernizing or maintaining legacy COBOL systems with AI.
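At its core, the evaluation loop is: generate a COBOL solution, compile it, run it, and compare the output to the expected result. Below is a minimal Python sketch of one such check, assuming GnuCOBOL's cobc compiler is installed; the helper and file names are illustrative only, not COBOLEval's actual harness.

import subprocess
import tempfile
from pathlib import Path

def check_solution(cobol_source: str, expected_output: str) -> bool:
    """Compile an LLM-generated COBOL program and compare its output to the expected result."""
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "solution.cbl"
        exe = Path(tmp) / "solution"
        src.write_text(cobol_source)

        # Compile to a standalone executable with GnuCOBOL (cobc -x).
        compiled = subprocess.run(
            ["cobc", "-x", "-o", str(exe), str(src)],
            capture_output=True, text=True,
        )
        if compiled.returncode != 0:
            return False  # the generated code did not even compile

        # Run the program and check its stdout against the expected output.
        ran = subprocess.run([str(exe)], capture_output=True, text=True, timeout=10)
        return ran.returncode == 0 and ran.stdout.strip() == expected_output.strip()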

No commits in the last 6 months.

Use this if you need to objectively measure the quality and correctness of COBOL code generated by different Large Language Models.

Not ideal if you're looking for a general COBOL compiler or a tool to help write COBOL code manually.

Tags: LLM-evaluation, COBOL-modernization, code-generation-benchmarking, AI-software-development, legacy-system-AI
Badges: Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 10 / 25
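These four components, each scored out of 25, sum to the overall score: 0 + 8 + 16 + 10 = 34 out of 100.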

How are scores calculated?

Stars: 43

Forks: 4

Language: Python

License: MIT

Last pushed: May 09, 2024

Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zorse-project/COBOLEval"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
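The same data can be fetched from Python. A minimal sketch using the requests package, assuming the endpoint returns JSON as the curl example implies:

import requests

# Same endpoint as the curl command above; no API key is needed within the free daily limit.
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zorse-project/COBOLEval"
response = requests.get(url, timeout=30)
response.raise_for_status()
print(response.json())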