Copilot-Eval-Replication-Package/CopilotEvaluation

The replication package for the paper "GitHub Copilot AI pair programmer: Asset or Liability?", submitted to the Journal of Systems and Software in June 2022.

Score: 33 / 100 (Emerging)

This project helps software engineering researchers evaluate the effectiveness of AI code generation tools like GitHub Copilot. It takes raw code suggestions from Copilot for fundamental algorithms and compares them against human-written code. The output is a detailed analysis of Copilot's code correctness, efficiency, and similarity to human solutions, allowing researchers to draw conclusions about its utility as a pair programmer.
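
To make that evaluation concrete, the sketch below shows one way such a comparison could look: a candidate (Copilot-style) solution scored against a human-written reference on test-case correctness and token-level source similarity. This is an illustrative sketch only, not code from the replication package; all function names and test cases here are hypothetical.

import difflib
import inspect

def reference_search(items, target):
    # Human-written reference: classic iterative binary search.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def candidate_search(items, target):
    # Stand-in for a Copilot suggestion: a naive linear scan.
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def correctness(candidate, reference, cases):
    # Fraction of test cases where the candidate matches the reference output.
    return sum(candidate(*c) == reference(*c) for c in cases) / len(cases)

def similarity(candidate, reference):
    # Token-level similarity between the two function sources.
    a = inspect.getsource(candidate).split()
    b = inspect.getsource(reference).split()
    return difflib.SequenceMatcher(None, a, b).ratio()

cases = [([1, 3, 5, 7], 5), ([1, 3, 5, 7], 2), ([], 1)]
print("correctness:", correctness(candidate_search, reference_search, cases))
print("similarity:", round(similarity(candidate_search, reference_search), 2))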

No commits in the last 6 months.

Use this if you are a software engineering researcher or academic studying the capabilities and implications of AI code assistants like GitHub Copilot.

Not ideal if you are looking for a tool to improve your own coding skills or integrate AI code generation into a production workflow.

Tags: software-engineering-research, ai-evaluation, code-analysis, algorithmic-assessment, programming-education
Flags: Stale (6m), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 16 / 25
Community: 13 / 25

Stars: 7
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Feb 06, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/Copilot-Eval-Replication-Package/CopilotEvaluation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
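
For scripted access, here is a minimal Python sketch using only the standard library. The URL is the endpoint shown above; the shape of the JSON payload is not documented here, so the script simply prints it for inspection.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/ai-coding/"
       "Copilot-Eval-Replication-Package/CopilotEvaluation")

# Fetch the quality report from the endpoint shown above.
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# The response schema is not documented here, so dump the raw payload.
print(json.dumps(data, indent=2))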