an-yongqi/systematic-outliers

[ICLR 2025] Systematic Outliers in Large Language Models.

Quality score: 34 / 100 (Emerging)

This project helps AI researchers and machine learning engineers analyze the behavior of Large Language Models (LLMs). It takes existing LLM architectures and training data, and outputs visualizations and analyses of 'systematic outliers' within the model's weights, activations, and attention mechanisms. The goal is to understand how these outliers impact performance and efficiency, ultimately leading to better model design.

No commits in the last 6 months.

Use this if you are an AI researcher or machine learning engineer focused on understanding, debugging, and optimizing Large Language Models by examining their internal outlier phenomena.

Not ideal if you are looking for an off-the-shelf solution for general LLM fine-tuning or deployment without needing deep architectural analysis.

Tags: LLM research, model interpretability, neural network analysis, deep learning optimization, AI model debugging
Badges: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 13 / 25


Stars: 9
Forks: 2
Language: Python
License: MIT
Last pushed: Feb 11, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/an-yongqi/systematic-outliers"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
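For programmatic use, the endpoint above can be consumed from Python. The JSON field names below (`name`, `score`, `tier`, `breakdown`) are an assumption for illustration, mirroring the numbers shown on this page; inspect the actual response before relying on them.

```python
import json

# Endpoint from the curl command above; no key needed up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/an-yongqi/systematic-outliers"

# Hypothetical response body, echoing the figures shown on this page.
sample = """
{"name": "an-yongqi/systematic-outliers",
 "score": 34,
 "tier": "Emerging",
 "breakdown": {"maintenance": 0, "adoption": 5, "maturity": 16, "community": 13}}
"""

def summarize(payload: dict) -> str:
    """Render a one-line summary of a quality payload (assumed schema)."""
    parts = ", ".join(f"{k} {v}/25" for k, v in payload["breakdown"].items())
    return f"{payload['name']}: {payload['score']}/100 ({payload['tier']}; {parts})"

print(summarize(json.loads(sample)))
```

To fetch live data instead of the sample, replace `json.loads(sample)` with `json.load(urllib.request.urlopen(URL))`.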