lin-tan/clm
Companion repository for the ICSE 2023 paper "Impact of Code Language Models on Automated Program Repair" by Nan Jiang, Kevin Liu, Thibaud Lutellier, and Lin Tan
This project helps software researchers evaluate how well code language models can automatically fix bugs in Java programs. It takes buggy Java code or repair benchmarks as input and outputs candidate patches generated by various pre-trained or fine-tuned code language models. Anyone studying automated program repair or the effectiveness of AI in software engineering may find it useful.
No commits in the last 6 months.
Use this if you are a researcher in automated program repair and want to reproduce or extend experiments on how code language models perform at fixing bugs.
Not ideal if you are a software developer looking for a production-ready bug-fixing tool or a general-purpose code generation assistant.
Stars: 63
Forks: 11
Language: Python
License: —
Category: —
Last pushed: Oct 16, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/lin-tan/clm"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
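The curl command above can also be issued from Python. The sketch below is a minimal, hypothetical example built only from the URL shown in the curl line: the `quality_url` helper name is invented here, and the assumption that the endpoint returns JSON is not confirmed by this page.

```python
import json
import urllib.request

# Endpoint base taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(repo: str) -> str:
    """Build the quality-data URL for an owner/name repo path (hypothetical helper)."""
    return f"{API_BASE}/{repo}"

if __name__ == "__main__":
    # Unauthenticated access is rate-limited, and the response schema is
    # not documented here, so we just pretty-print whatever comes back.
    try:
        with urllib.request.urlopen(quality_url("lin-tan/clm"), timeout=10) as resp:
            data = json.load(resp)  # assumes a JSON body
            print(json.dumps(data, indent=2))
    except Exception as exc:  # network errors, rate limits, non-JSON bodies
        print(f"Request failed: {exc}")
```

With an API key, the authentication mechanism (header vs. query parameter) is not specified on this page, so consult the service's documentation before adding credentials.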
Related models
waroad/losver
Source Code for LOSVER: Line-Level Modifiability Signal-Guided Vulnerability Detection and Classification
thanhlecongg/Invalidator
Invalidator: Automated Patch Correctness Assessment via Semantic and Syntactic Reasoning (IEEE TSE)
nghiempt/llm-analysis-privacy-policy
Unveiling Discrepancies in Android App Data Safety Declarations and Privacy Policies: An...
martin-wey/R2Vul
R2Vul: Learning to Reason about Software Vulnerabilities with Reinforcement Learning and...
garghub/VulnerabilityCouplingMutants
On the Coupling between Vulnerabilities and LLM-generated Mutants: A Study on Vul4J dataset, The...