martin-wey/peft-llm-code

Replication package of the paper "Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models" (TOSEM 2025)

Score: 36 / 100 (Emerging)

This project helps machine learning researchers and practitioners investigate how different parameter-efficient fine-tuning (PEFT) techniques affect large language models on code generation tasks. It takes pre-trained LLMs and code-related datasets as input, fine-tunes the models with each technique, and evaluates the results, reporting metrics such as EM@k and CodeBLEU. Researchers and ML engineers working on code generation with LLMs are the primary users.
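
As a rough illustration of the kind of technique the paper studies, the sketch below applies LoRA to a causal code LLM with Hugging Face's peft library. The model name and hyperparameters are illustrative assumptions, not the paper's exact configuration.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative base model; the paper evaluates several code LLMs.
model_name = "Salesforce/codegen-350M-mono"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the base weights and trains small low-rank update matrices.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                # rank of the low-rank decomposition
    lora_alpha=16,      # scaling factor for the update
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

From here the wrapped model can be trained with a standard transformers Trainer loop; only the adapter weights receive gradients.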

No commits in the last 6 months.

Use this if you are an ML researcher or engineer exploring the effectiveness of PEFT methods like LoRA or QLoRA for adapting large language models to generate code more efficiently.
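
For the QLoRA variant mentioned above, here is a minimal sketch, assuming a CUDA GPU with bitsandbytes installed; the model name and settings are again illustrative rather than the paper's setup.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA loads the frozen base model in 4-bit precision...
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls
)
model = AutoModelForCausalLM.from_pretrained(
    "Salesforce/codegen-350M-mono",         # illustrative code LLM
    quantization_config=bnb_config,
)
# ...then trains LoRA adapters in higher precision on top of it.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16))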

Not ideal if you want a plug-and-play code generation tool without digging into fine-tuning and evaluation details, or if your primary goal is not research into PEFT techniques.

Tags: Machine Learning Research, Code Generation, Large Language Models, Parameter-Efficient Fine-Tuning, ML Experimentation
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 13 / 25

The four category scores sum to the overall score of 36 / 100.


Stars: 25
Forks: 4
Language: Python
License: MIT
Last pushed: Oct 14, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/martin-wey/peft-llm-code"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
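
A minimal Python equivalent of the curl call above, assuming the endpoint returns JSON:

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/martin-wey/peft-llm-code")
response = requests.get(url, timeout=30)
response.raise_for_status()   # raise on HTTP errors (e.g. rate limiting)
print(response.json())        # same quality data as shown on this page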