nawnoes/reformer-language-model

Reformer Language Model

Score: 36 / 100 (Emerging)

This project offers an experimental, pre-trained Korean language model built on the Reformer architecture, which is designed to handle very long text sequences efficiently. It takes a large Korean text corpus (such as Korean Wikipedia) as input and produces a language model capable of masked language modeling, autoregressive text generation, or replaced-token detection. It is aimed at researchers and machine learning engineers working on Korean natural language processing tasks.

Use this if you are a researcher or ML engineer experimenting with large-scale Korean language models and need an efficient, pre-trained base for tasks like text generation or understanding, especially with long text sequences.

Not ideal if you need a plug-and-play solution for a specific application, as this project is a research snapshot requiring some technical setup and customization.

Tags: Korean NLP, language modeling, natural language processing, deep learning, research, text generation
No License · No Package · No Dependents
Maintenance: 6 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 16 / 25

How are scores calculated?

Stars: 22
Forks: 7
Language: Jupyter Notebook
License: none
Last pushed: Jan 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/nawnoes/reformer-language-model"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
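The curl command above can also be issued programmatically. The sketch below builds the same endpoint URL and fetches it with Python's standard library; only the URL path is taken from the curl example, while the `quality_url` / `fetch_quality` helper names and the assumption of a JSON response body are this sketch's own, not part of the documented API.

```python
import json
import urllib.request

# Base path taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub repo (hypothetical helper)."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the response; assumes the endpoint returns JSON."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Same endpoint as the curl example above.
    print(quality_url("nlp", "nawnoes", "reformer-language-model"))
```

Without an API key this sketch stays within the 100 requests/day anonymous tier; a keyed request (1,000/day) would presumably add an auth header, whose exact name the page does not specify.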