eole-nlp/eole
Open language modeling toolkit based on PyTorch
EOLE is an open-source toolkit for working with large language models: it lets you train, fine-tune, and run models efficiently. Starting from existing pre-trained models or custom architectures, it produces optimized, high-performance language models for tasks such as text generation, translation, and transcription. It is aimed at researchers and practitioners in natural language processing and AI development who need to build or deploy performant language models.
176 stars. Available on PyPI.
Use this if you need to rapidly train, fine-tune, or deploy large language models and require top-tier inference speed on GPU hardware.
Not ideal if you are a casual user looking for a simple drag-and-drop solution without needing to dive into model configurations or code.
Stars: 176
Forks: 25
Language: Python
License: MIT
Category: —
Last pushed: Mar 12, 2026
Commits (30d): 0
Dependencies: 29
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/eole-nlp/eole"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Related models
openvinotoolkit/nncf
Neural Network Compression Framework for enhanced OpenVINO™ inference
huggingface/optimum
🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers...
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
huggingface/optimum-intel
🤗 Optimum Intel: Accelerate inference with Intel optimization tools
huggingface/optimum-habana
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)