IBM/DEFT

Official PyTorch code for "From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers" (AAAI 2025)

Score: 22 / 100 (Experimental)

DEFT lets machine learning engineers fine-tune large language models more efficiently. It takes a pre-trained language model and applies a novel density loss during parameter-efficient fine-tuning (PEFT) methods such as LoRA or Adapters. The result is a fine-tuned model with significantly reduced activation density, which can enable faster inference on specialized hardware while maintaining performance on tasks such as text classification and question answering.

No commits in the last 6 months.

Use this if you are a machine learning engineer looking to reduce the computational cost and improve the inference speed of your fine-tuned transformer models without sacrificing performance.

Not ideal if you are a non-developer, or if your primary goal is simply to use an off-the-shelf fine-tuned model without optimizing its underlying efficiency for deployment.

large-language-models model-optimization deep-learning-deployment natural-language-processing transformer-models
Stale (6 months) · No package · No dependents
Maintenance: 2 / 25
Adoption: 4 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 7
Forks:
Language: Python
License: Apache-2.0
Last pushed: Sep 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/IBM/DEFT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
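The curl command above can also be reproduced programmatically. A minimal Python sketch, assuming the URL path follows the `quality/<category>/<owner>/<repo>` pattern shown in the example; the `quality_url` helper and its `category` parameter are illustrative, and the field names in the JSON response are not documented here:

```python
import json
import urllib.request


def quality_url(owner: str, repo: str, category: str = "ml-frameworks") -> str:
    """Build the quality-endpoint URL for a repository.

    Path format inferred from the curl example above; the category
    segment ("ml-frameworks") is an assumption based on that URL.
    """
    return f"https://pt-edge.onrender.com/api/v1/quality/{category}/{owner}/{repo}"


url = quality_url("IBM", "DEFT")

# Fetching and decoding the response (field names are not documented here,
# so inspect `data` before relying on any particular key):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

If you have a key for the higher 1,000-requests/day tier, pass it however the service expects (not specified on this page).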