raymin0223/mixture_of_recursions

Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation (NeurIPS 2025)

Overall quality score: 47 / 100 (Emerging)

This project offers a novel method for making large language models (LLMs) more efficient without losing accuracy. Rather than spending the same fixed amount of computation on every token, Mixture-of-Recursions learns a dynamic recursion depth per token, so easier tokens exit early while harder tokens receive more passes through shared layers, yielding faster inference and lower computational overhead. It is aimed at AI researchers and machine learning engineers who build or deploy LLMs and need to improve their performance and resource usage.
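To make the idea concrete, below is a minimal, self-contained sketch of token-level recursion-depth routing: one shared block is applied repeatedly, and a small router decides per token how many passes it receives. This is illustrative only and is not the repository's implementation; the class names, block design, and argmax routing rule are assumptions.

# Conceptual sketch of per-token dynamic recursion depth (not the repo's code).
import torch
import torch.nn as nn

class RecursiveBlock(nn.Module):
    """A single shared feed-forward block that is reused across recursion steps."""
    def __init__(self, d_model: int):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return self.norm(x + self.ff(x))

class MixtureOfRecursionsSketch(nn.Module):
    """Routes each token to a number of recursions through one shared block."""
    def __init__(self, d_model: int, max_recursions: int = 3):
        super().__init__()
        self.block = RecursiveBlock(d_model)
        self.router = nn.Linear(d_model, max_recursions)  # per-token depth logits
        self.max_recursions = max_recursions

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        # Hard argmax routing is used here for simplicity; a real router would
        # need a differentiable or trainable routing mechanism.
        depths = self.router(x).argmax(dim=-1) + 1  # (batch, seq_len), values 1..max
        out = x
        for step in range(1, self.max_recursions + 1):
            updated = self.block(out)
            # Only tokens whose assigned depth reaches this step are updated.
            mask = (depths >= step).unsqueeze(-1)
            out = torch.where(mask, updated, out)
        return out

if __name__ == "__main__":
    model = MixtureOfRecursionsSketch(d_model=64)
    tokens = torch.randn(2, 8, 64)
    print(model(tokens).shape)  # torch.Size([2, 8, 64])

Tokens routed to depth 1 exit after a single pass through the shared block, while harder tokens receive additional refinement; that asymmetry is where the compute savings over a fixed-depth stack come from.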

548 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher looking to significantly improve the inference speed and training efficiency of your large language models while maintaining or enhancing their performance.

Not ideal if you are a practitioner looking for a ready-to-use LLM for specific applications without needing to delve into model architecture optimization.

Tags: Large Language Models, AI Efficiency, Model Optimization, Neural Networks, Deep Learning, Research
Flags: Stale (6 months), No Package, No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 15 / 25
Community 20 / 25


Stars: 548
Forks: 78
Language: Python
License: Apache-2.0
Last pushed: Sep 26, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/raymin0223/mixture_of_recursions"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
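If you prefer Python over curl, a minimal sketch of the same request is below. The response is assumed to be JSON; no specific field names are assumed, so the script simply prints whatever top-level keys the API returns.

# Sketch of calling the quality API shown above from Python.
# The JSON schema is an assumption; inspect the raw response to confirm it.
import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/raymin0223/mixture_of_recursions")

resp = requests.get(URL, timeout=30)
resp.raise_for_status()
data = resp.json()

# Print whatever top-level fields the API returns.
for key, value in data.items():
    print(f"{key}: {value}")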