rosinality/halite
Acceleration framework for Human Alignment Learning
This framework helps machine learning engineers accelerate the development, training, and deployment of large language models. It takes model architectures and training configurations as input and produces optimized, aligned, ready-to-use LLMs. Engineers working on custom or cutting-edge language models will find it useful for their projects.
Use this if you are an ML engineer building or fine-tuning large language models and need a flexible, high-performance framework for rapid experimentation, alignment, and inference.
Not ideal if you prefer simple, YAML-based configurations or are not comfortable defining model architectures and training pipelines in Python code.
Stars: 13 · Forks: 2 · Language: Python · License: MIT
Last pushed: Dec 03, 2025 · Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rosinality/halite"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
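The same endpoint can be called from Python instead of curl. This is a minimal sketch using only the standard library; the response schema and the mechanism for passing an API key are not documented here, so the code simply decodes whatever JSON the endpoint returns.

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a GitHub owner/repo slug."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict.

    The response schema is undocumented on this page, so we decode
    the JSON as-is rather than assuming any particular fields.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Network call; counts against the 100 requests/day anonymous quota.
    print(fetch_quality("rosinality", "halite"))
```

The network call is kept behind `__main__` so the URL-building helper can be reused or tested without spending quota.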
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.