seonghyeonye/Flipped-Learning

[ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners

Quality score: 31/100 (Emerging)

This project implements Flipped Learning, a meta-training method that makes language models like T5 stronger zero-shot learners. Instead of generating a label from a task instruction, the model is trained to generate the instruction given the input instance and label; at inference time, it selects the label option under which the task instruction is most likely. Machine learning researchers and NLP practitioners working with LLMs would find this useful.
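For intuition, here is a minimal sketch of flipped inference, assuming a T5-style seq2seq model loaded through Hugging Face transformers. The checkpoint, prompt format, and function names are illustrative assumptions, not the repository's actual API.

import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")  # stand-in checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.eval()

def instruction_log_likelihood(input_text, label, instruction):
    # Flipped inference: condition on the input instance plus a candidate
    # label, and score how likely the task instruction is as the output.
    source = tokenizer(f"{input_text} {label}", return_tensors="pt")
    target = tokenizer(instruction, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**source, labels=target).loss  # mean token cross-entropy
    return -loss.item() * target.shape[1]  # approximate total log-likelihood

def predict(input_text, instruction, label_options):
    # Pick the label under which the instruction is most probable.
    return max(label_options,
               key=lambda lab: instruction_log_likelihood(input_text, lab, instruction))

print(predict("The movie was fantastic.",
              "Is this review positive or negative?",
              ["positive", "negative"]))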

117 stars. No commits in the last 6 months.

Use this if you are developing or fine-tuning large language models and want to improve their performance on new, unseen tasks without extensive retraining.

Not ideal if you are looking for a plug-and-play application for end-users, rather than a research framework to enhance existing language models.

Tags: natural-language-processing, machine-learning-research, zero-shot-learning, large-language-models, AI-model-improvement
Flags: No License · Stale (6 months) · No Package · No Dependents

Maintenance: 2/25
Adoption: 10/25
Maturity: 8/25
Community: 11/25

Stars: 117
Forks: 10
Language: Python
License: none
Last pushed: Jun 28, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/seonghyeonye/Flipped-Learning"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
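For Python users, a hypothetical equivalent of the curl call above is sketched below. The endpoint URL is copied from this page, but the response schema is not documented here, so the example prints the raw JSON rather than assuming field names.

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/seonghyeonye/Flipped-Learning")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g. rate limiting) early
print(resp.json())       # inspect the payload instead of guessing field names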