AmanPriyanshu/GPT-OSS-MoE-ExpertFingerprinting
ExpertFingerprinting: Behavioral Pattern Analysis and Specialization Mapping of Experts in GPT-OSS-20B's Mixture-of-Experts Architecture
This project provides smaller, specialized models derived from GPT-OSS-20B, targeting tasks such as scientific reasoning or legal analysis. It helps AI developers and researchers build more efficient language models by supplying pre-trained, pruned models that preserve performance in a chosen domain while cutting computational overhead: you start from a general language model and get a smaller, domain-focused model ready for deployment.
Use this if you need to deploy a capable, domain-specific language model but are constrained by compute, or if you want stronger performance in a narrow field.
Not ideal if you need a general-purpose model that performs uniformly well across all domains without specialization.
Stars
24
Forks
3
Language
HTML
License
Apache-2.0
Category
Last pushed
Feb 03, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/AmanPriyanshu/GPT-OSS-MoE-ExpertFingerprinting"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
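The same endpoint can be called from Python. A minimal sketch using only the standard library, assuming the API returns JSON (the response schema is not documented here, so inspect the payload before relying on specific field names):

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def tool_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_tool(owner: str, repo: str) -> dict:
    """Fetch and decode one tool record (network call; rate-limited
    to 100 requests/day without an API key)."""
    with urllib.request.urlopen(tool_url(owner, repo)) as resp:
        return json.load(resp)


# Example: the URL for the repository described on this page.
print(tool_url("AmanPriyanshu", "GPT-OSS-MoE-ExpertFingerprinting"))
```

`fetch_tool` performs the actual request; `tool_url` is split out so the URL construction can be reused or tested without network access.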
Higher-rated alternatives
InternLM/xtuner
A Next-Generation Training Engine Built for Ultra-Large MoE Models
arm-education/Advanced-AI-Mixture-of-Experts
Hands-on course materials for ML engineers to implement and optimize Mixture of Experts models:...
SuperBruceJia/Awesome-Mixture-of-Experts
Awesome Mixture of Experts (MoE): A Curated List of Mixture of Experts (MoE) and Mixture of...
sumitdotml/moe-emergence
a project highlighting the emergent expert specialization in Mixture of Experts (MoEs) across 3...
iahuang/cosmoe
Enabling inference of large mixture-of-experts (MoE) models on Apple Silicon using dynamic offloading.