UCSC-REAL/TokenCleaning
[ICML 2025] Official implementation of paper "Token Cleaning: Fine-Grained Data Selection for LLM Supervised Fine-Tuning"
When fine-tuning large language models (LLMs), not all tokens in your training data are equally useful. This project helps you identify and remove uninformative tokens from your existing datasets, so that your LLM focuses on the most relevant parts of the text during supervised fine-tuning. It takes your prepared text datasets and outputs cleaner, more effective training data, primarily benefiting AI researchers and machine learning engineers working on LLM development.
Use this if you are a machine learning engineer or AI researcher looking to improve the performance and efficiency of your large language models by fine-tuning them on higher-quality, task-specific data.
Not ideal if you are not directly involved in the supervised fine-tuning of large language models or if you are looking for methods to generate new training data rather than refining existing datasets.
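The paper's actual scoring pipeline lives in the repository; as a rough illustration of the idea only, the sketch below ranks tokens by a per-token score (assuming higher means more informative, e.g. a reference model's excess loss), keeps the top fraction, and masks the rest with the standard `-100` ignore index so cross-entropy skips them during SFT. Function names and the `keep_ratio` parameter are illustrative, not the repo's API.

```python
def clean_tokens(token_scores, keep_ratio=0.6):
    """Return a boolean keep-mask selecting the highest-scoring tokens.

    token_scores: one score per token; higher = more informative
    (an assumption for this sketch, not the paper's exact metric).
    """
    n_keep = max(1, int(len(token_scores) * keep_ratio))
    ranked = sorted(range(len(token_scores)),
                    key=lambda i: token_scores[i], reverse=True)
    kept = set(ranked[:n_keep])
    return [i in kept for i in range(len(token_scores))]

def mask_labels(labels, keep_mask, ignore_index=-100):
    # Tokens outside the keep-mask get ignore_index, so a standard
    # cross-entropy SFT loss ignores them entirely.
    return [lab if keep else ignore_index
            for lab, keep in zip(labels, keep_mask)]
```

For example, with scores `[0.1, 2.3, 0.5, 1.8, 0.05]` and `keep_ratio=0.6`, the three highest-scoring tokens survive and the other two labels become `-100`.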
Stars: 51
Forks: 5
Language: Python
License: —
Category: —
Last pushed: Feb 14, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/UCSC-REAL/TokenCleaning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
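If you prefer calling the endpoint from Python rather than curl, a minimal stdlib sketch follows. The URL path mirrors the curl example above; the response schema is not documented here, so the code returns the decoded JSON as-is and callers should inspect keys defensively. The function names are illustrative.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(hub, owner, repo):
    # Mirrors the curl example: /quality/<hub>/<owner>/<repo>
    return f"{API_BASE}/{hub}/{owner}/{repo}"

def fetch_quality(hub, owner, repo, timeout=10):
    # Decodes whatever JSON the endpoint returns; no schema is assumed.
    with urllib.request.urlopen(build_url(hub, owner, repo),
                                timeout=timeout) as resp:
        return json.load(resp)
```

Note the anonymous tier allows 100 requests/day, so batch lookups should be rate-limited or use a free API key.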
Higher-rated alternatives
DaoD/INTERS
This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in...
declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca...
Haiyang-W/TokenFormer
[ICLR2025 Spotlight] Official Implementation of TokenFormer: Rethinking Transformer Scaling...
hkust-nlp/deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
kehanlu/DeSTA2
Code and model for ICASSP 2025 Paper "Developing Instruction-Following Speech Language Model...