mrconter1/PullRequestBenchmark
Evaluating LLM performance in PR reviews as an indicator of their capability to create PRs.
This tool helps software engineering leaders and AI researchers understand how well large language models (LLMs) can review code changes. It takes in realistic pull request details, including the full Git history, and outputs a binary decision: Approved or Rejected. The primary users are those evaluating AI capabilities for automating software development tasks.
No commits in the last 6 months.
Use this if you need to benchmark the performance of AI models in the critical task of reviewing pull requests, focusing on their decision-making accuracy against human expert judgment.
Not ideal if you want to evaluate an LLM's ability to generate code fixes for specific bugs or issues, which is a different kind of coding task.
Stars: 13
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Apr 10, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mrconter1/PullRequestBenchmark"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
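For scripted access, the same endpoint can be called from Python. A minimal sketch using only the standard library; the response schema and the authentication header name are not documented here, so both are assumptions to verify against the API docs:

```python
import json
import urllib.request

# Base URL taken from the curl example above.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub repository."""
    return f"{BASE_URL}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, api_key: str = "") -> dict:
    """Fetch quality data as parsed JSON.

    Passing an API key raises the rate limit from 100 to 1,000
    requests/day; the Bearer header scheme here is an assumption.
    """
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed scheme
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


# URL construction only (no network call is made here):
print(quality_url("mrconter1", "PullRequestBenchmark"))
```

The `fetch_quality` helper is optional; for one-off checks the curl command above is simpler, and the unkeyed 100 requests/day limit is enough for small batch lookups.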
Higher-rated alternatives
allenai/RL4LMs
A modular RL library to fine-tune language models to human preferences
emredeveloper/Mem-LLM
Mem-LLM is a Python library for building memory-enabled AI assistants that run entirely on local...
cloudguruab/modsysML
Human reinforcement learning (RLHF) framework for AI models. Evaluate and compare LLM outputs,...
ManasVardhan/bench-my-llm
🏎️ Dead-simple LLM benchmarking CLI - latency, cost, and quality metrics
modal-labs/stopwatch
A tool for benchmarking LLMs on Modal