AmanPriyanshu/GPT-OSS-MoE-ExpertFingerprinting

ExpertFingerprinting: Behavioral Pattern Analysis and Specialization Mapping of Experts in GPT-OSS-20B's Mixture-of-Experts Architecture

Quality score: 41 / 100 (Emerging)

This project offers specialized, smaller AI models derived from GPT-OSS-20B, optimized for specific tasks like scientific reasoning or legal analysis. It helps AI developers and researchers create more efficient language models by providing pre-trained, pruned models that maintain performance in a chosen domain while reducing computational overhead. You input a general language model and get a smaller, domain-focused model ready for deployment.

Use this if you need to deploy a powerful, domain-specific large language model but are constrained by computational resources or seek enhanced performance in a narrow field.

Not ideal if you require a general-purpose language model that performs equally well across all domains without any specialization.

Tags: AI model optimization · natural language processing · computational efficiency · specialized AI · large language models
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 15 / 25
Community: 10 / 25


Stars: 24
Forks: 3
Language: HTML
License: Apache-2.0
Last pushed: Feb 03, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/AmanPriyanshu/GPT-OSS-MoE-ExpertFingerprinting"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
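The same endpoint can also be queried from a script. A minimal sketch using only the Python standard library; the response schema is not documented here, so the JSON is returned as-is, and the `fetch_quality` helper name is illustrative rather than part of any official client:

```python
import json
from urllib.request import urlopen

# Base of the quality API, as shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API endpoint URL for a given GitHub repository."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record; assumes the endpoint returns a JSON object."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the URL for this repository (no network call made here).
    print(quality_url("AmanPriyanshu", "GPT-OSS-MoE-ExpertFingerprinting"))
```

Note the unauthenticated tier is rate-limited to 100 requests/day, so a production script should cache responses or use a key.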