zorse-project/COBOLEval
Evaluate LLM-generated COBOL
COBOLEval evaluates how well Large Language Models generate COBOL code. Point it at an LLM and it collects the model's COBOL completions for a set of benchmark problems, then checks each completion for functional correctness against the benchmark's tests. It is aimed at developers working with LLMs, especially those modernizing or maintaining legacy COBOL systems with AI assistance.
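The flow is the familiar HumanEval-style loop: for each benchmark problem, ask the model for a completion, compile it, run it, and count the fraction that pass. The sketch below illustrates that loop only; the JSONL field names, the generate_completion hook, and the stdin/stdout check are hypothetical placeholders rather than COBOLEval's actual interface, and it assumes GnuCOBOL's cobc compiler is on PATH.

```python
# Hypothetical sketch of a HumanEval-style harness for COBOL completions.
# Field names, file layout, and generate_completion are stand-ins, not
# COBOLEval's real interface. Requires GnuCOBOL (`cobc`) to be installed.
import json
import subprocess
import tempfile
from pathlib import Path

def compiles_and_passes(cobol_source: str, stdin_data: str, expected_stdout: str) -> bool:
    """Compile a candidate COBOL program, run it, and compare its output."""
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "candidate.cbl"
        exe = Path(tmp) / "candidate"
        src.write_text(cobol_source)
        build = subprocess.run(
            ["cobc", "-x", "-o", str(exe), str(src)],
            capture_output=True, text=True,
        )
        if build.returncode != 0:
            return False  # a completion that fails to compile counts as incorrect
        try:
            run = subprocess.run(
                [str(exe)], input=stdin_data,
                capture_output=True, text=True, timeout=10,
            )
        except subprocess.TimeoutExpired:
            return False  # hung programs also count as incorrect
        return run.returncode == 0 and run.stdout.strip() == expected_stdout.strip()

def evaluate(problems_path: str, generate_completion) -> float:
    """Return pass@1 over a JSONL file of {prompt, stdin, expected} problems."""
    problems = [json.loads(line) for line in open(problems_path)]
    passed = sum(
        compiles_and_passes(
            generate_completion(p["prompt"]),  # your LLM call goes here
            p["stdin"],
            p["expected"],
        )
        for p in problems
    )
    return passed / len(problems)
```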
No commits in the last 6 months.
Use this if you need to objectively measure the quality and correctness of COBOL code generated by different Large Language Models.
Not ideal if you're looking for a general COBOL compiler or a tool to help write COBOL code manually.
Stars: 43
Forks: 4
Language: Python
License: MIT
Category: LLM tools
Last pushed: May 09, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zorse-project/COBOLEval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
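The same data can be fetched from Python. This is a minimal standard-library sketch that simply pretty-prints whatever JSON the endpoint returns; the response schema isn't documented here, so no fields are assumed.

```python
# Minimal sketch: fetch the repo metrics from Python instead of curl.
# No key is needed for up to 100 requests/day.
import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zorse-project/COBOLEval"
with urllib.request.urlopen(url, timeout=10) as resp:
    data = json.loads(resp.read().decode("utf-8"))
print(json.dumps(data, indent=2))
```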
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
vibrantlabsai/ragas
Supercharge Your LLM Application Evaluations 🚀
open-compass/VLMEvalKit
Open-source evaluation toolkit for large multi-modality models (LMMs); supports 220+ LMMs and 80+ benchmarks
EuroEval/EuroEval
The robust European language model benchmark.
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents