xuanzebi/Paper-Knowledge_Distillation-Adversarial_Training-NLP
A collection of papers on Knowledge Distillation and Adversarial Training in NLP
This project is a curated collection of research papers on two techniques in Natural Language Processing: Knowledge Distillation and Adversarial Training. It gives researchers and NLP practitioners a categorized list of influential papers, from foundational works to recent advances, to help them understand and build more robust and efficient NLP models. The papers tackle challenges such as model size, inference speed, and robustness to adversarial inputs.
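For orientation, here is a minimal PyTorch-style sketch of the two core ideas. It is not taken from any paper in the list; the function names, temperature T, mixing weight alpha, and step size epsilon are all illustrative assumptions. It shows the classic soft-target distillation loss (Hinton et al., 2015) and an FGSM-style adversarial perturbation applied to word embeddings, in the spirit of adversarial training for text.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL divergence between the temperature-softened
    # teacher and student distributions, scaled by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

def fgsm_embedding_perturb(embeddings, loss, epsilon=1e-2):
    # One-step adversarial perturbation of the continuous embedding
    # inputs; `embeddings` must have requires_grad=True and feed into
    # `loss`. Epsilon is an illustrative default, not a tuned value.
    grad, = torch.autograd.grad(loss, embeddings, retain_graph=True)
    return (embeddings + epsilon * grad.sign()).detach()

In adversarial training, the model is then optimized on a loss computed from the perturbed embeddings in addition to the clean ones; the papers in this list cover many variants of both ideas.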
No commits in the last 6 months.
Use this if you are an NLP researcher or practitioner looking for a structured collection of academic papers on Knowledge Distillation and Adversarial Training to enhance your models.
Not ideal if you are a non-technical user looking for a ready-to-use software solution or a tutorial on basic NLP concepts.
Stars: 10
Forks: —
Language: —
License: —
Category: —
Last pushed: Mar 03, 2020
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/xuanzebi/Paper-Knowledge_Distillation-Adversarial_Training-NLP"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
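The same request from Python, as a minimal sketch; it assumes the endpoint returns a JSON body, since the response schema is not documented here:

import json
import urllib.request

# Hypothetical usage: fetch the quality data and pretty-print whatever
# JSON comes back, without assuming any particular field names.
url = ("https://pt-edge.onrender.com/api/v1/quality/nlp/"
       "xuanzebi/Paper-Knowledge_Distillation-Adversarial_Training-NLP")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)  # assumes a JSON response
print(json.dumps(data, indent=2))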
Higher-rated alternatives
airaria/TextBrewer
A PyTorch-based knowledge distillation toolkit for natural language processing
sunyilgdx/NSP-BERT
The code for our paper "NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original...
princeton-nlp/CoFiPruning
[ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408
kssteven418/LTP
[KDD'22] Learned Token Pruning for Transformers
georgian-io/Transformers-Domain-Adaptation
[DEPRECATED] Adapt Transformer-based language models to new text domains