StringNLPLAB/MGS

Repository for the paper "Advancing General-Purpose Reasoning Models with Modular Gradient Surgery"

Score: 30 / 100 (Emerging)

This project helps AI researchers and practitioners improve Large Language Models (LLMs) by balancing multiple training objectives like mathematical reasoning, general conversation, and instruction following. It takes an existing LLM and training data from diverse sources (e.g., math problems, chat logs), and outputs a more versatile LLM capable of strong performance across these different domains. This tool is for those who are fine-tuning or training LLMs for multi-skill applications.

Use this if you need to train a single LLM that performs well across distinct tasks such as complex mathematical reasoning, general chat, and accurately following instructions, without sacrificing performance in any one area.

Not ideal if you are looking for a pre-trained, ready-to-use LLM for a single, highly specialized task, or if you don't have the technical expertise to fine-tune advanced models.
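The repository's exact modular gradient-surgery procedure is defined in the paper and code; as a rough illustration of the general idea behind gradient surgery, here is a minimal PCGrad-style projection step (a hypothetical sketch, not the paper's method): when two task gradients conflict (negative inner product), one is projected onto the normal plane of the other so the tasks stop pulling directly against each other.

```python
import numpy as np

def surgery_step(g_task_a, g_task_b):
    """PCGrad-style conflict resolution (illustrative, not MGS itself):
    if the two task gradients conflict (negative inner product),
    project g_task_a onto the normal plane of g_task_b."""
    dot = g_task_a @ g_task_b
    if dot < 0:
        # Remove the component of g_task_a that opposes g_task_b.
        g_task_a = g_task_a - (dot / (g_task_b @ g_task_b)) * g_task_b
    return g_task_a

# Toy conflicting gradients (dot product is -1, i.e. negative):
g_math = np.array([1.0, -2.0])
g_chat = np.array([1.0, 1.0])
g_fixed = surgery_step(g_math, g_chat)
# After projection, g_fixed no longer conflicts with g_chat
# (their inner product is zero).
```

Non-conflicting gradients pass through unchanged, so the surgery only intervenes where objectives actually compete.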

LLM-fine-tuning AI-model-training multi-task-learning natural-language-processing AI-research
No package · No dependents
Maintenance 13 / 25
Adoption 6 / 25
Maturity 11 / 25
Community 0 / 25


Stars: 19
Forks:
Language: Python
License: MIT
Last pushed: Mar 15, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/StringNLPLAB/MGS"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.