mit-han-lab/neurips-micronet

[JMLR'20] NeurIPS 2019 MicroNet Challenge: Efficient Language Modeling, Champion

Quality score: 38 / 100 (Emerging)

This project provides an efficient approach to language modeling. It takes large text datasets, such as Wikipedia articles, and outputs a highly optimized language model capable of predicting the next word in a sequence. This is ideal for researchers and practitioners focused on natural language processing who need to deploy language models with minimal computational resources.

No commits in the last 6 months.

Use this if you need to build or evaluate a compact and efficient language model for text generation or prediction, especially when resource constraints (like memory or processing power) are a major concern.

Not ideal if you want a simple, off-the-shelf language model and don't plan to study or tune its internal architecture for efficiency.

natural-language-processing computational-linguistics text-generation resource-constrained-ai model-optimization
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 41
Forks: 7
Language: Jupyter Notebook
License: MIT
Last pushed: Feb 26, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/mit-han-lab/neurips-micronet"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
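For scripted use, the same endpoint can be called from Python with only the standard library. This is a minimal sketch mirroring the curl example above; the helper names (`quality_url`, `fetch_quality`) are illustrative, and the JSON response shape is assumed, not documented here.

```python
"""Sketch of calling the quality API programmatically (stdlib only)."""
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL: /<category>/<owner>/<repo>."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Plain GET request; no API key needed on the free tier (100 req/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example (makes a live network call).
    print(fetch_quality("nlp", "mit-han-lab", "neurips-micronet"))
```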