martin-wey/peft-llm-code
Replication package of the paper "Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models" (TOSEM 2025)
This project helps machine learning researchers and practitioners investigate how different parameter-efficient fine-tuning (PEFT) techniques affect large language models on code generation tasks. It takes pre-trained large language models and code-related datasets as input, fine-tunes the models, and evaluates their performance, reporting metrics such as EM@k and CodeBLEU. Its primary users are researchers and ML engineers working on code generation with LLMs.
No commits in the last 6 months.
Use this if you are an ML researcher or engineer exploring the effectiveness of PEFT methods like LoRA or QLoRA for adapting large language models to generate code more efficiently.
Not ideal if you want a plug-and-play code generation tool without diving into fine-tuning and evaluation details, or if research into PEFT techniques is not your primary goal.
Stars: 25
Forks: 4
Language: Python
License: MIT
Category:
Last pushed: Oct 14, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/martin-wey/peft-llm-code"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
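The curl call above can also be scripted. Below is a minimal Python sketch using only the standard library; the endpoint path follows the curl example, but the response schema is not documented here, so the sketch just returns the raw JSON, and the `Authorization: Bearer` header used for the optional key is an assumption, not a documented part of the API.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, api_key: str = "") -> dict:
    """GET the quality endpoint and return the parsed JSON payload.

    The key header name is an assumption; the page above does not
    specify how the optional API key should be sent.
    """
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed header
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same endpoint as the curl example above.
    print(quality_url("martin-wey", "peft-llm-code"))
```

Without a key this stays within the 100 requests/day limit; pass `api_key` once you have registered for the higher quota.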
Higher-rated alternatives
jncraton/languagemodels
Explore large language models in 512MB of RAM
microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
haizelabs/verdict
Inference-time scaling for LLMs-as-a-judge.
albertan017/LLM4Decompile
Reverse Engineering: Decompiling Binary Code with Large Language Models
bytedance/Sa2VA
Official Repo For Pixel-LLM Codebase