yueyueL/ReliableLM4Code

Collections of research, benchmarks and tools towards more robust and reliable language models for code; LM4Code; LM4SE; reliable LLM; LLM4Code

Quality score: 21 / 100 (Experimental)

This resource helps software engineering researchers and practitioners understand and address the common pitfalls that hinder the reliability of large language models used for code intelligence tasks. It provides a curated collection of research papers, benchmarks, and tools. The input is existing research and models for code-related tasks, and the output is a clearer understanding of potential issues and solutions to build more robust systems. This is for anyone researching, developing, or deploying AI-powered tools for code, such as automated bug repair or test case generation.

No commits in the last 6 months.

Use this if you work with large language models for software engineering tasks and need to identify, understand, and mitigate reliability issues and pitfalls in their design or application.

Not ideal if you are looking for an off-the-shelf development library or a tutorial on basic LLM implementation for code, rather than research insights into reliability challenges.

software-engineering-research code-intelligence large-language-models-reliability automated-bug-repair test-case-generation
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 6 / 25
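The overall score appears to be the sum of the four 25-point category scores shown above. This is an assumption inferred from the numbers on this page, since the scoring formula itself is not documented here:

```python
# Category scores as listed above. Treating the 100-point total as the
# sum of the four 25-point categories is an assumption, not documented
# behavior of the scoring service.
scores = {"Maintenance": 0, "Adoption": 7, "Maturity": 8, "Community": 6}
total = sum(scores.values())
print(f"{total} / 100")
```

With the values shown here the sum is 21, matching the displayed overall score.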


Stars: 30
Forks: 2
Language: —
License: none
Last pushed: Dec 14, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/yueyueL/ReliableLM4Code"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
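For scripted access, the endpoint URL can be built from the owner and repository name. A minimal sketch, assuming the `/api/v1/quality/ai-coding/{owner}/{repo}` path pattern from the curl example above (the response schema is not documented on this page):

```python
BASE = "https://pt-edge.onrender.com/api/v1/quality/ai-coding"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("yueyueL", "ReliableLM4Code")
print(url)

# To actually fetch the JSON (requires network; unauthenticated access is
# limited to 100 requests/day per the note above), something like:
#   import json, urllib.request
#   with urllib.request.urlopen(url) as resp:
#       data = json.load(resp)
```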