Chunjiang-Intelligence/low-rank-decay
Official implementation of "Low-Rank Decay".
This project offers a training technique for large language models (LLMs), aimed especially at settings where data is limited. It applies a rank-penalizing regularization to a transformer's weight matrices during training. The result is a model that "groks" — one that learns the underlying rules and generalizes well, rather than merely memorizing the training data.
Use this if you are a machine learning researcher or engineer training large language models and struggling with memorization or poor generalization, especially in data-scarce environments.
Not ideal if you are looking for a plug-and-play solution for common machine learning tasks outside of deep learning research, or if you are not working with scale-invariant transformer architectures.
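This listing does not spell out the regularizer itself, so the snippet below is only a minimal NumPy sketch of one standard way to bias a weight matrix toward low rank: singular-value soft-thresholding, the proximal step of a nuclear-norm penalty. The function name `singular_value_shrink`, the matrix sizes, and the threshold `tau` are illustrative assumptions, not this repo's API.

```python
import numpy as np

def singular_value_shrink(W, tau):
    """One proximal step for a nuclear-norm penalty: soft-threshold
    the singular values of W. Singular values below tau are zeroed,
    so the returned matrix has lower rank than W. (Illustrative; not
    necessarily the decay rule used by this repo.)"""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)

# Build a matrix that is "really" rank 2 (two dominant directions)
# but has higher numerical rank because of a small noise term.
Q1, _ = np.linalg.qr(rng.standard_normal((10, 10)))
Q2, _ = np.linalg.qr(rng.standard_normal((10, 10)))
A = Q1 @ np.diag([3.0, 2.0] + [0.0] * 8) @ Q2
W = A + 0.01 * rng.standard_normal((10, 10))

print(np.linalg.matrix_rank(W))                              # noise fills many directions
print(np.linalg.matrix_rank(singular_value_shrink(W, 0.5)))  # shrinkage recovers rank 2
```

In a real training loop, a step like this (or a differentiable surrogate, such as adding the nuclear norm to the loss) would be applied to each transformer weight matrix alongside the usual task gradient, steering the weights toward the low-rank structure that supports generalization.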
Stars
17
Forks
8
Language
Python
License
GPL-3.0
Category
Last pushed
Nov 25, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Chunjiang-Intelligence/low-rank-decay"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
lmcinnes/umap
Uniform Manifold Approximation and Projection
pyRiemann/pyRiemann
Machine learning for multivariate data through the Riemannian geometry of positive definite...
geomstats/geomstats
Computations and statistics on manifolds with geometric structures.
higra/Higra
Hierarchical Graph Analysis
pavlin-policar/openTSNE
Extensible, parallel implementations of t-SNE