lin-tan/clm

For our ICSE23 paper "Impact of Code Language Models on Automated Program Repair" by Nan Jiang, Kevin Liu, Thibaud Lutellier, and Lin Tan

Score: 40 / 100 (Emerging)

This project helps software researchers and academics evaluate how well different code language models can automatically fix bugs in Java programs. It takes buggy Java code or code benchmarks as input and outputs potential code patches generated by various pre-trained or fine-tuned language models. Anyone studying automated program repair or the effectiveness of AI in software engineering would find this useful.

No commits in the last 6 months.

Use this if you are a researcher in automated program repair and want to reproduce or extend experiments on how code language models perform at fixing bugs.

Not ideal if you are a software developer looking for a production-ready bug-fixing tool or a general-purpose code generation assistant.

Tags: software-engineering-research, automated-program-repair, java-development, code-analysis, academic-research
Status: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 16 / 25

How are scores calculated?

Stars: 63
Forks: 11
Language: Python
License:
Last pushed: Oct 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/lin-tan/clm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
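The same endpoint can be called from Python. A minimal sketch using only the standard library, with the `lin-tan`/`clm` path segments taken from the curl example above; the shape of the JSON response is not documented here, so the code returns it as a plain dict without assuming any fields:

```python
import json
import urllib.request

# Base URL from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a given GitHub repo."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality report (100 requests/day without an API key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("lin-tan", "clm"))
```

Requesting a free key raises the limit to 1,000 requests/day, as noted above.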