dmis-lab/Monet
[ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers
Monet is a large language model (LLM) architecture designed to make model behavior more interpretable and controllable. Like other LLMs it takes raw text or code as input and generates text or code as output, but by routing computation through monosemantic experts — each intended to capture a single human-interpretable concept — it lets AI researchers and ML engineers see more clearly how the model processes information and selectively tailor its knowledge.
No commits in the last 6 months.
Use this if you are a machine learning researcher or engineer focused on developing more transparent and controllable large language models.
Not ideal if you are an end-user simply looking to apply an off-the-shelf LLM for content creation or customer service without needing to understand its internal mechanisms.
Stars
76
Forks
4
Language
Python
License
Apache-2.0
Category
Last pushed
Jun 23, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/dmis-lab/Monet"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
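The endpoint above follows a simple path pattern. A minimal Python sketch that builds the request URL and fetches it; the category segment ("transformers") and a JSON response body are both assumptions, as neither is documented here:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Path pattern inferred from the example curl command; "transformers"
    # appears to be a category segment (assumption, not documented).
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("transformers", "dmis-lab", "Monet")

# Fetching and decoding (assumes the endpoint returns JSON):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

The same helper covers any repo in the directory by swapping the owner and repo segments.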
Higher-rated alternatives
EfficientMoE/MoE-Infinity
PyTorch library for cost-effective, fast and easy serving of MoE models.
raymin0223/mixture_of_recursions
Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation...
AviSoori1x/makeMoE
From scratch implementation of a sparse mixture of experts language model inspired by Andrej...
thu-nics/MoA
[CoLM'25] The official implementation of the paper
jaisidhsingh/pytorch-mixtures
One-stop solutions for Mixture of Expert modules in PyTorch.