VityaVitalich/STASC
[ICLR 2025 SSI-FM] Self-Taught Self-Correction for Small Language Models
This project helps machine learning engineers and researchers improve the performance of smaller language models on specific tasks such as question answering. It applies a self-taught self-correction (STASC) algorithm: given an existing small language model and a dataset, it iteratively fine-tunes the model to detect and correct its own mistakes. The output is a more accurate small language model optimized for the given task.
No commits in the last 6 months.
Use this if you need to boost the accuracy and reliability of a small language model for a specific application, especially when your computational resources are too limited for large models.
Not ideal if you are looking for a pre-trained, ready-to-use large language model without the need for specialized fine-tuning and iterative self-correction.
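The iterative refinement loop described above can be sketched with a toy example. This is a minimal illustration of the general self-taught self-correction idea, not the repository's actual API: the model is stood in for by a lookup table, and all function names (`generate`, `self_correct`, `stasc_round`) are hypothetical.

```python
# Toy sketch of one self-taught self-correction round. Assumptions:
# the "model" is a plain dict standing in for a language model, and
# "fine-tuning" is simulated by absorbing verified corrections into it.

def generate(model, question):
    # Produce an initial answer from the stand-in model.
    return model.get(question, "unknown")

def self_correct(model, question, draft):
    # Hypothetical correction step; a real system would re-prompt the LM
    # to revise its own draft answer.
    return model.get(("fix", question, draft), draft)

def stasc_round(model, dataset):
    """One iteration: collect corrections that turn a wrong draft into a
    correct answer, then update the model on those verified traces."""
    traces = []
    for question, gold in dataset:
        draft = generate(model, question)
        revised = self_correct(model, question, draft)
        # Keep only corrections verified against the gold answer.
        if draft != gold and revised == gold:
            traces.append((question, revised))
    for question, answer in traces:  # stand-in for a fine-tuning step
        model[question] = answer
    return model

dataset = [("q1", "a1"), ("q2", "a2")]
model = {"q1": "wrong", ("fix", "q1", "wrong"): "a1"}
model = stasc_round(model, dataset)
accuracy = sum(generate(model, q) == g for q, g in dataset) / len(dataset)
```

Running further rounds would repeat the same generate/correct/filter/update cycle, with the model improving only on examples where its self-correction was verified to help.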
Stars
11
Forks
3
Language
Jupyter Notebook
License
Apache-2.0
Category
Last pushed
Sep 19, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/VityaVitalich/STASC"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jncraton/languagemodels
Explore large language models in 512MB of RAM
microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
haizelabs/verdict
Inference-time scaling for LLMs-as-a-judge.
albertan017/LLM4Decompile
Reverse Engineering: Decompiling Binary Code with Large Language Models
bytedance/Sa2VA
Official Repo For Pixel-LLM Codebase