UCSC-REAL/TokenCleaning

[ICML 2025] Official implementation of paper "Token Cleaning: Fine-Grained Data Selection for LLM Supervised Fine-Tuning"

Score: 35 / 100 (Emerging)

When fine-tuning large language models (LLMs), not all tokens in your training data are equally useful. This project helps you identify and remove uninformative tokens from existing datasets, so that the model focuses on the most relevant parts of the text during supervised fine-tuning. It takes your prepared text datasets and outputs cleaner, more effective training data, primarily benefiting AI researchers and machine learning engineers working on LLM development.

Use this if you are a machine learning engineer or AI researcher looking to improve the performance and efficiency of your large language models by fine-tuning them on higher-quality, task-specific data.

Not ideal if you are not directly involved in the supervised fine-tuning of large language models or if you are looking for methods to generate new training data rather than refining existing datasets.
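For intuition, here is a minimal, hypothetical sketch of token-level selection: score each response token by a reference model's per-token loss and keep only the best-scoring fraction in the SFT loss mask. The function name, the scoring heuristic, and the keep_ratio parameter are illustrative assumptions, not this repository's actual API; see the paper and code for the real method.

    import torch
    import torch.nn.functional as F

    def token_cleaning_mask(ref_logits, labels, keep_ratio=0.6):
        """Illustrative token selection: keep the keep_ratio fraction of
        tokens that a reference model finds easiest to predict, and mask
        the rest out of the supervised fine-tuning loss.

        ref_logits: (batch, seq_len, vocab) logits from a reference model.
        labels:     (batch, seq_len) target token ids.
        Returns a boolean mask of shape (batch, seq_len).
        """
        # Per-token cross-entropy under the reference model.
        per_token_loss = F.cross_entropy(
            ref_logits.transpose(1, 2),  # (batch, vocab, seq_len)
            labels,
            reduction="none",
        )  # (batch, seq_len)
        # Threshold at the k-th smallest loss per sequence (ties may keep
        # a few extra tokens, which is fine for a sketch).
        k = max(1, int(keep_ratio * per_token_loss.shape[1]))
        threshold = per_token_loss.kthvalue(k, dim=1, keepdim=True).values
        return per_token_loss <= threshold

During training, the returned mask would zero out the loss on discarded tokens, e.g. loss = (per_token_loss * mask).sum() / mask.sum().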

Tags: LLM fine-tuning, natural language processing, AI model training, data quality, model performance optimization
No License · No Package · No Dependents
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 7 / 25
Community: 10 / 25

Stars: 51
Forks: 5
Language: Python
License: None
Last pushed: Feb 14, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/UCSC-REAL/TokenCleaning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
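If you prefer Python over curl, a minimal equivalent sketch follows. The response is assumed to be JSON; its field names are not documented here, so inspect the raw output to confirm the schema.

    import requests

    # Fetch the same quality report as the curl command above.
    url = (
        "https://pt-edge.onrender.com/api/v1/quality/"
        "transformers/UCSC-REAL/TokenCleaning"
    )
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    # Print the full JSON body to discover the available fields.
    print(resp.json())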