nitinvetcha/DeGAML-LLM
DeGAML-LLM: Decoupling Generalization and Adaptation in Meta-Learning for Large Language Models
If you work with large language models (LLMs) and need them to learn new, unseen tasks quickly without extensive retraining, this project can help. It takes a task description or a few examples and generates specialized adapters, allowing your LLM to perform well on new reasoning, math, or coding challenges. It is aimed at researchers, MLOps engineers, and data scientists building advanced AI applications.
Use this if you need your LLMs to generalize effectively to a wide range of new tasks and adapt rapidly without fine-tuning the entire model for each one.
Not ideal if you are looking for a plug-and-play solution without any technical expertise in meta-learning or LLM adaptation strategies.
Stars: 16
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Jan 08, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/nitinvetcha/DeGAML-LLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
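If you prefer to consume the endpoint from code rather than curl, a minimal Python sketch is below. Note the response schema is not documented on this page, so the field names used here ("repo", "stars", "language") are assumptions for illustration only; the offline sample payload lets the parsing logic be checked without a network call.

```python
import json
import urllib.request

# Endpoint shown in the curl example above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/nitinvetcha/DeGAML-LLM"

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch and decode the JSON payload for one repository."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Offline illustration with a HYPOTHETICAL payload (field names are
# assumptions, not taken from the API's actual schema):
sample = '{"repo": "nitinvetcha/DeGAML-LLM", "stars": 16, "language": "Python"}'
data = json.loads(sample)
print(data["stars"])  # → 16
```

In practice you would call `fetch_quality()` and inspect the returned dictionary to discover the real field names before relying on them.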
Higher-rated alternatives
aalok-sathe/surprisal
A unified interface for computing surprisal (log probabilities) from language models! Supports...
EvolvingLMMs-Lab/lmms-engine
A simple, unified multimodal models training engine. Lean, flexible, and built for hacking at scale.
reasoning-machines/pal
PaL: Program-Aided Language Models (ICML 2023)
FunnySaltyFish/Better-Ruozhiba
[Per-entry processing complete] A curated QA dataset of selected Ruozhiba questions, with every entry manually reviewed and revised
microsoft/monitors4codegen
Code and Data artifact for NeurIPS 2023 paper - "Monitor-Guided Decoding of Code LMs with Static...