raymin0223/mixture_of_recursions
Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation (NeurIPS 2025)
This project implements Mixture-of-Recursions (MoR), a method for making large language models (LLMs) more efficient. Instead of a deep stack of distinct layers, MoR reuses a shared block of layers recursively and trains a lightweight router that assigns each token its own recursion depth, so harder tokens receive more compute while easy ones exit early. This cuts parameters and per-token FLOPs without sacrificing accuracy. It is aimed at AI researchers and machine learning engineers who build or deploy LLMs and need better inference speed and resource usage.
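The core idea, per-token adaptive recursion over a shared block, can be illustrated with a toy sketch. This is not the repository's implementation; the block, router weights, and quantile-based depth assignment below are simplified stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, max_depth, n_tokens = 8, 3, 5

# Shared parameters: ONE block reused at every recursion step (the MoR idea),
# plus a lightweight router scoring how much compute each token needs.
# Both weight matrices here are random, hypothetical stand-ins.
W_block = rng.normal(scale=0.1, size=(d_model, d_model))
w_router = rng.normal(size=(d_model,))

def shared_block(h):
    """One recursion step: the same weights are applied at every depth."""
    return np.tanh(h @ W_block) + h  # residual update

def route_depths(h, max_depth):
    """Assign each token a recursion depth from its router score (simplified)."""
    scores = h @ w_router
    ranks = scores.argsort().argsort()             # rank 0..n-1 per token
    return 1 + (ranks * max_depth) // len(scores)  # depths in {1, ..., max_depth}

def mor_forward(h, max_depth):
    depths = route_depths(h, max_depth)
    out = h.copy()
    for step in range(1, max_depth + 1):
        active = depths >= step        # only tokens routed deeper keep computing
        out[active] = shared_block(out[active])
    return out, depths

h = rng.normal(size=(n_tokens, d_model))
out, depths = mor_forward(h, max_depth)
print(depths)     # per-token recursion depths between 1 and 3
print(out.shape)  # (5, 8)
```

The savings come from the `active` mask: at each recursion step, only tokens whose assigned depth has not been reached are pushed through the shared block, so average per-token compute drops below that of a fixed-depth model.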
548 stars. No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher looking to significantly improve the inference speed and training efficiency of your large language models while maintaining or enhancing their performance.
Not ideal if you are a practitioner who wants a ready-to-use LLM for a specific application and does not want to delve into model architecture optimization.
| Stat | Value |
| --- | --- |
| Stars | 548 |
| Forks | 78 |
| Language | Python |
| License | Apache-2.0 |
| Category | |
| Last pushed | Sep 26, 2025 |
| Commits (30d) | 0 |
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/raymin0223/mixture_of_recursions"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
- EfficientMoE/MoE-Infinity: PyTorch library for cost-effective, fast and easy serving of MoE models.
- AviSoori1x/makeMoE: From scratch implementation of a sparse mixture of experts language model inspired by Andrej...
- thu-nics/MoA: [CoLM'25] The official implementation of the paper
- jaisidhsingh/pytorch-mixtures: One-stop solutions for Mixture of Expert modules in PyTorch.
- CASE-Lab-UMD/Unified-MoE-Compression: The official implementation of the paper "Towards Efficient Mixture of Experts: A Holistic Study...