Chunjiang-Intelligence/low-rank-decay

The official implementation of "Low-Rank Decay".

Quality score: 42 / 100 (Emerging)

This project offers a technique for training large language models (LLMs) more effectively, particularly when data is limited. It applies a rank-penalizing regularizer to a transformer's weight matrices during training. The result is a model that "groks": it learns the underlying rules and generalizes well, rather than merely memorizing the training data.
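To make the idea concrete, here is a minimal illustrative sketch of one standard low-rank regularizer: the nuclear norm (sum of singular values), a common convex surrogate for matrix rank. This is an assumption for illustration only; the repository's actual "Low-Rank Decay" penalty may be defined differently.

```python
import numpy as np

def nuclear_norm_penalty(W: np.ndarray, strength: float = 1e-3) -> float:
    """Return strength * sum of W's singular values (a rank surrogate)."""
    return strength * float(np.linalg.svd(W, compute_uv=False).sum())

rng = np.random.default_rng(0)
full_rank = rng.standard_normal((64, 64))                              # rank ~64
low_rank = rng.standard_normal((64, 2)) @ rng.standard_normal((2, 64))  # rank 2

# Normalize to equal Frobenius norm so only the rank structure differs.
full_rank /= np.linalg.norm(full_rank)
low_rank /= np.linalg.norm(low_rank)
```

At equal Frobenius norm, the rank-2 matrix incurs a much smaller penalty than the full-rank one, which is why adding such a term to the training loss biases weight matrices toward low-rank structure.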

Use this if you are a machine learning researcher or engineer developing large language models whose models memorize training data or fail to generalize, especially in data-scarce settings.

Not ideal if you need a plug-and-play solution for common machine learning tasks outside deep learning research, or if you are not working with scale-invariant transformer architectures.

large-language-models deep-learning-research model-generalization transformer-optimization machine-learning-training
No package · No dependents
Maintenance 6 / 25
Adoption 6 / 25
Maturity 13 / 25
Community 17 / 25

How are scores calculated?

Stars: 17
Forks: 8
Language: Python
License: GPL-3.0
Last pushed: Nov 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Chunjiang-Intelligence/low-rank-decay"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
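The same endpoint can be called without curl; for example, from Python's standard library. The response schema is not documented here, so the sketch below simply decodes whatever JSON the API returns.

```python
import json
import urllib.request

# Same URL as the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/Chunjiang-Intelligence/low-rank-decay")

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality record for the repository as a dict."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

No API key is needed at the 100 requests/day tier; for 1,000/day, pass your free key however the API documentation specifies (the authentication mechanism is not shown on this page).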