iboing/CorDA

CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024)

Overall score: 31 / 100 (Emerging)

This tool helps AI engineers efficiently adapt large language models (LLMs) to new tasks. It takes a pre-trained LLM and a task-specific dataset as input, then produces a fine-tuned LLM that performs better on the target task while preserving existing knowledge. AI/ML practitioners working with LLMs will find this useful for customizing models without extensive retraining.
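To make the idea concrete, here is a minimal NumPy sketch of the context-oriented decomposition described in the paper: build a covariance matrix from task activations, take an SVD of the covariance-weighted weight, and split the weight into a frozen part plus a small low-rank adapter. The shapes, the toy data, and the single "knowledge-preserving" mode shown here are simplified assumptions for illustration, not the repository's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy shapes: a pre-trained weight matrix and a batch of
# activations sampled from the target task.
d_out, d_in, n_samples, rank = 64, 32, 256, 4
W = rng.standard_normal((d_out, d_in))
X = rng.standard_normal((n_samples, d_in))  # task-specific inputs

# Context covariance of the task inputs.
C = X.T @ X / n_samples

# Context-oriented decomposition: SVD of the covariance-weighted weight.
U, S, Vt = np.linalg.svd(W @ C, full_matrices=False)

# Knowledge-preserving split (one of the paper's two modes, simplified):
# freeze the principal components and fine-tune only the smallest `rank`
# components as a low-rank adapter A @ B.T.
A = U[:, -rank:] * S[-rank:]          # (d_out, rank)
B = np.linalg.pinv(C) @ Vt[-rank:].T  # (d_in, rank), maps back to weight space
W_frozen = W - A @ B.T

# Before any fine-tuning, frozen part + adapter reconstructs W exactly
# (up to numerical precision), so the model's behavior is unchanged.
assert np.allclose(W_frozen + A @ B.T, W)
```

During fine-tuning only A and B would receive gradients, which is what makes the adaptation parameter-efficient while the frozen principal components preserve existing knowledge.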

No commits in the last 6 months.

Use this if you need to fine-tune a Llama-2-7b model for specific tasks like answering questions, solving math problems, or generating code, and want to improve performance while reducing computational cost compared to full fine-tuning.

Not ideal if you are not working with large language models, or if you need out-of-the-box support for LLM architectures beyond Llama-2 without manual configuration.

LLM fine-tuning, natural language processing, AI model adaptation, machine learning engineering, computational linguistics
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 7 / 25


Stars: 55
Forks: 3
Language: Python
License: Apache-2.0
Last pushed: Jan 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/iboing/CorDA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
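The curl call above can also be made from Python. The sketch below only builds the endpoint URL from the path segments shown in the curl example (the meaning of the "transformers" segment is an assumption); the live request is left commented out so the snippet stays side-effect free.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(section: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL, mirroring the curl example above."""
    return f"{API_BASE}/{section}/{owner}/{repo}"

url = quality_url("transformers", "iboing", "CorDA")

# Uncomment to fetch live data (anonymous tier: 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```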