nitinvetcha/DeGAML-LLM

DeGAML-LLM: Decoupling Generalization and Adaptation in Meta-Learning for Large Language Models

Score: 25 / 100 (Experimental)

If you work with large language models (LLMs) and need them to learn new, unseen tasks quickly without extensive retraining, this project can help. It takes a task description or a few examples and generates specialized adapters, so your LLM can perform well on new reasoning, math, or coding challenges. It is aimed at researchers, MLOps engineers, and data scientists building advanced AI applications.

Use this if you need your LLMs to generalize effectively to a wide range of new tasks and adapt rapidly without fine-tuning the entire model for each one.

Not ideal if you are looking for a plug-and-play solution without any technical expertise in meta-learning or LLM adaptation strategies.

LLM-adaptation meta-learning few-shot-learning AI-research model-generalization
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 6 / 25
Maturity: 13 / 25
Community: 0 / 25


Stars: 16
Forks:
Language: Python
License: Apache-2.0
Last pushed: Jan 08, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/nitinvetcha/DeGAML-LLM"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
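For programmatic use, the same endpoint can be called from Python. A minimal sketch, assuming only the URL pattern shown in the curl command above and that the endpoint returns JSON (the `quality_url` and `fetch_quality` helper names are illustrative, not part of any official client):

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score URL for a repository."""
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record; no API key needed up to the daily limit."""
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


url = quality_url("llm-tools", "nitinvetcha", "DeGAML-LLM")
# data = fetch_quality("llm-tools", "nitinvetcha", "DeGAML-LLM")
```

The response schema is not documented on this page, so the sketch leaves the parsed JSON unstructured rather than assuming field names.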