terryyz/ice-score
[EACL 2024] ICE-Score: Instructing Large Language Models to Evaluate Code
This tool evaluates the quality of code snippets generated by large language models. You provide a coding problem, the generated code, and, optionally, a reference solution; the tool returns an evaluation score covering aspects such as usefulness. It is aimed at developers working with AI code generation who need to grade the output of different models systematically.
No commits in the last 6 months.
Use this if you are a developer or researcher who needs an automated, consistent way to assess the quality of code produced by AI code-generation models.
Not ideal if you need a code evaluation system that performs full static analysis, dynamic testing, or comprehensive security vulnerability checks beyond simple scoring.
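The flow described above follows the LLM-as-judge pattern: assemble an instruction prompt from the problem, the candidate code, and an optional reference, ask a judge model for a rating, and parse the number it returns. The sketch below illustrates that pattern only; it is not the ice-score package's actual API, and call_judge_model is a hypothetical placeholder for whichever LLM client you use.

```python
# Minimal sketch of the LLM-as-judge idea: build an evaluation prompt, send it
# to a judge model, and parse a numeric score from the reply.
import re
from typing import Optional

PROMPT_TEMPLATE = """You will be given a coding problem and a candidate solution.
Rate the usefulness of the candidate solution on a scale from 0 to 4.

Problem:
{problem}

Candidate solution:
{code}
{reference_block}
Reply with a single integer between 0 and 4."""


def build_prompt(problem: str, code: str, reference: Optional[str] = None) -> str:
    reference_block = f"\nReference solution:\n{reference}\n" if reference else ""
    return PROMPT_TEMPLATE.format(
        problem=problem, code=code, reference_block=reference_block
    )


def call_judge_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM API call; wire up your preferred client here.
    raise NotImplementedError


def parse_score(reply: str) -> Optional[int]:
    # Pull the first digit in the 0-4 range out of the judge's reply.
    match = re.search(r"\b([0-4])\b", reply)
    return int(match.group(1)) if match else None


if __name__ == "__main__":
    prompt = build_prompt(
        problem="Write a function that returns the n-th Fibonacci number.",
        code="def fib(n):\n    return n if n < 2 else fib(n - 1) + fib(n - 2)",
    )
    print(prompt)
```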
Stars: 80
Forks: 10
Language: Python
License: MIT
Category: llm-tools
Last pushed: Jun 16, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/terryyz/ice-score"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
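If you prefer to fetch the record programmatically rather than with curl, a plain HTTP GET works the same way. The response schema is not documented here, so this sketch simply pretty-prints whatever JSON the endpoint returns.

```python
# Fetch the repo-quality record for terryyz/ice-score and print the raw JSON.
import json
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/terryyz/ice-score"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))
```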
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
vibrantlabsai/ragas
Supercharge Your LLM Application Evaluations 🚀
open-compass/VLMEvalKit
Open-source evaluation toolkit of large multi-modality models (LMMs), support 220+ LMMs, 80+ benchmarks
EuroEval/EuroEval
The robust European language model benchmark.
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents