VityaVitalich/STASC

[ICLR 2025 SSI-FM] Self-Taught Self-Correction for Small Language Models

Score: 37 / 100 (Emerging)

This project helps machine learning engineers and researchers improve the performance of small language models on specific tasks such as question answering. Its self-taught self-correction (STASC) algorithm takes an existing small language model and a dataset, then iteratively refines the model's answers by teaching it to correct its own mistakes. The output is a more accurate, fine-tuned small language model optimized for the given task.
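The iterative loop described above can be sketched in a few lines. This is an illustrative, STaR-style outline only, assuming a generate-then-revise scheme with a correctness checker; every name here (the stub model, `is_correct`, `stasc_round`) is a hypothetical placeholder, not the repository's actual API.

```python
# Hypothetical sketch of one self-taught self-correction round.
# The "model" is stubbed as a dict so the example runs standalone;
# a real implementation would sample from a language model.

def generate_answer(model, question):
    # Stub for sampling an initial answer from the model.
    return model.get(question, "unknown")

def generate_correction(model, question, draft):
    # Stub for prompting the model to revise its own draft answer.
    return model.get((question, draft), draft)

def stasc_round(model, dataset, is_correct):
    """Collect (question, draft, correction) triples where the model's
    self-correction turns a wrong draft into a correct answer. These
    triples would then be used to fine-tune the model before repeating."""
    training_triples = []
    for question, gold in dataset:
        draft = generate_answer(model, question)
        if is_correct(draft, gold):
            continue  # draft already correct; no correction to learn from
        fixed = generate_correction(model, question, draft)
        if is_correct(fixed, gold):
            training_triples.append((question, draft, fixed))
    return training_triples
```

Running repeated rounds, fine-tuning on the collected triples each time, is the gist of the iterative refinement the description mentions.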

No commits in the last 6 months.

Use this if you need to boost the accuracy and reliability of a small language model for specific applications, especially when working with limited computational resources compared to large models.

Not ideal if you are looking for a pre-trained, ready-to-use large language model without the need for specialized fine-tuning and iterative self-correction.

natural-language-processing machine-learning-engineering model-fine-tuning question-answering small-language-models
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 14 / 25


Stars: 11
Forks: 3
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Sep 19, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/VityaVitalich/STASC"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.