xuanzebi/Paper-Knowledge_Distillation-Adversarial_Training-NLP

A collection of papers on Knowledge Distillation and Adversarial Training in NLP

Quality score: 13 / 100 (Experimental)

This project is a curated collection of research papers on two techniques in Natural Language Processing: Knowledge Distillation and Adversarial Training. It gives researchers and NLP practitioners a categorized list of influential papers, from foundational works to recent advances, as a starting point for building more efficient and more robust NLP models. The papers address challenges such as model size, inference speed, and robustness to adversarial inputs.
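For orientation, below is a minimal PyTorch sketch of the two techniques the papers cover: a Hinton-style distillation loss and an FGM-style adversarial perturbation of embeddings in the spirit of Miyato et al. Function names and hyperparameters (T, alpha, epsilon) are illustrative assumptions, not taken from any paper in the list.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-smoothed distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps soft-target gradients on the same scale
    # Hard targets: ordinary cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def fgm_perturbation(embeddings, loss, epsilon=1.0):
    # Adversarial step on the embeddings: move along the loss gradient,
    # L2-normalized so epsilon controls the perturbation size.
    grad, = torch.autograd.grad(loss, embeddings, retain_graph=True)
    norm = grad.norm()
    return embeddings + epsilon * grad / norm if norm > 0 else embeddings

In an adversarial training loop, the perturbed embeddings would be passed through the model a second time and the resulting loss added to the clean loss before the optimizer step.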

No commits in the last 6 months.

Use this if you are an NLP researcher or practitioner looking for a structured collection of academic papers on Knowledge Distillation and Adversarial Training to enhance your models.

Not ideal if you are a non-technical user looking for a ready-to-use software solution or a tutorial on basic NLP concepts.

Tags: Natural Language Processing, Machine Learning, Research, Model Robustness, Model Compression, AI Security
No License, Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25

The overall score is the sum of the four category scores above, each out of 25: 0 + 5 + 8 + 0 = 13 / 100.

Stars: 10
Forks: (not listed)
Language: (not listed)
License: None
Last pushed: Mar 03, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/xuanzebi/Paper-Knowledge_Distillation-Adversarial_Training-NLP"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
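As a sketch, the same endpoint can be queried from Python using only the standard library. The response is assumed to be JSON; since its field names are not documented on this page, the example simply prints whatever the API returns.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/nlp/"
       "xuanzebi/Paper-Knowledge_Distillation-Adversarial_Training-NLP")

# No API key is needed for up to 100 requests/day.
with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))  # inspect the returned fields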