jie-jw-wu/human-eval-comm

HumanEvalComm: Evaluating Communication Skill of Code LLM and LLM Agent

Score: 38 / 100 (Emerging)

This project evaluates how well Large Language Models (LLMs) handle ambiguous, inconsistent, or incomplete coding problems. You feed it a problem description (modified from the HumanEval benchmark), and it assesses the LLM's ability to ask relevant clarifying questions before generating correct code. This is useful for AI researchers and developers who are building or evaluating LLMs for code generation tasks.

Use this if you need to benchmark the communication and code generation skills of various LLMs when faced with imperfect problem specifications.

Not ideal if you're looking for a tool to automatically fix or debug your own code, or to generate production-ready code from vague requirements.
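To give a flavor of what this kind of evaluation involves, here is a minimal sketch of a harness that checks whether a model asks a clarifying question or jumps straight to code. The query_model helper and the question heuristic are hypothetical illustrations, not the repository's actual interface.

    def query_model(prompt: str) -> str:
        # Placeholder for a call to the LLM under evaluation; returns a canned
        # reply here so the sketch runs end to end.
        return "Should the function remove duplicates, or keep them and sort?"

    def asks_clarifying_question(response: str) -> bool:
        # Crude heuristic: a reply that contains a question mark and no code
        # is treated as a clarifying question rather than a solution attempt.
        return "?" in response and "def " not in response

    ambiguous_problem = "Write a function that processes the list."  # deliberately underspecified
    response = query_model(ambiguous_problem)
    if asks_clarifying_question(response):
        print("Model asked for clarification (the desired behavior here).")
    else:
        print("Model produced code without asking any questions.")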

LLM-evaluation code-generation AI-benchmarking natural-language-understanding software-engineering-AI
No License · No Package · No Dependents
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 15 / 25

How are scores calculated?
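The methodology is not reproduced on this page, but the overall score matches a simple sum of the four 25-point category scores listed above:

    # Assumed relationship, inferred only from the numbers shown on this page.
    categories = {"Maintenance": 10, "Adoption": 5, "Maturity": 8, "Community": 15}
    print(sum(categories.values()))  # 38, matching the 38 / 100 headline score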

Stars: 11
Forks: 4
Language: Python
License: none
Last pushed: Feb 08, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jie-jw-wu/human-eval-comm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
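If you prefer Python over curl, here is a minimal sketch using the requests library; the structure of the JSON response is not documented on this page, so the snippet simply prints whatever comes back.

    import requests  # third-party: pip install requests

    URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "transformers/jie-jw-wu/human-eval-comm")

    resp = requests.get(URL, timeout=10)
    resp.raise_for_status()
    print(resp.json())  # the payload's field names are not documented here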