stochasticai/xTuring
Build, personalize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6
This tool helps you customize open-source large language models (LLMs) with your own specific data. You provide your unique datasets, and it trains a personalized LLM that understands and generates text tailored to your needs. This is ideal for developers, researchers, or companies looking to create specialized AI assistants, chatbots, or content generation tools without sending proprietary data to third-party services.
2,668 stars. Available on PyPI.
Use this if you need to fine-tune open-source LLMs such as LLaMA, GPT-OSS, or Qwen on your private data for specialized applications, and you want control over data privacy and compute efficiency.
Not ideal if you want a pre-trained, off-the-shelf LLM that requires no customization, or if you're not comfortable working in a Python coding environment.
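As a sketch of the workflow described above, here is a minimal fine-tuning run based on xTuring's documented quick-start. The dataset path and the `llama_lora` model key are illustrative placeholders; an actual run needs the `xturing` package installed, an instruction-format dataset on disk, and a GPU.

```python
# Minimal xTuring fine-tuning sketch (illustrative; paths and model key are
# placeholders, and this requires the `xturing` package plus a GPU to run).
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

# An instruction-tuning dataset in Alpaca-style instruction/input/output format.
dataset = InstructionDataset("./my_instruction_data")

# Create a LLaMA model with LoRA adapters for parameter-efficient fine-tuning.
model = BaseModel.create("llama_lora")

# Fine-tune on the private dataset, then generate with the personalized model.
model.finetune(dataset=dataset)
outputs = model.generate(texts=["Summarize our internal style guide."])
print(outputs)
```

Because LoRA trains only small adapter matrices rather than all model weights, this kind of run fits on a single consumer GPU for 7B-class models.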
Stars: 2,668
Forks: 211
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 04, 2026
Commits (30d): 0
Dependencies: 19
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/stochasticai/xTuring"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
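The same endpoint can be called from Python. The helper below is a hypothetical convenience wrapper around the URL pattern shown in the curl example; the `transformers` ecosystem segment and the JSON response shape are assumptions inferred from that example, not documented here.

```python
import json
import urllib.request

# Base of the quality API, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch quality data; response is assumed to be a JSON object."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


print(quality_url("transformers", "stochasticai", "xTuring"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/stochasticai/xTuring
```

Without an API key, requests count against the shared 100/day limit, so cache responses locally if you poll more than one repository.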
Related models
unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama,...
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
modelscope/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5,...
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training