JetRunner/BERT-of-Theseus
⛵️ The official PyTorch implementation for "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020).
This project helps machine learning engineers and researchers cut the computational cost of large BERT models with little loss in accuracy. It takes an existing, fine-tuned BERT model and produces a smaller, faster version, making it a good fit for deploying natural language processing models in resource-constrained environments.
315 stars. No commits in the last 6 months.
Use this if you need to compress a BERT model to make it run faster or use less memory, especially for deployment on edge devices or in high-throughput systems.
Not ideal if you are looking for a pre-trained model for non-natural language processing tasks or if your primary concern is improving model accuracy rather than efficiency.
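The technique behind the repo, progressive module replacing, can be sketched without any framework: during fine-tuning, each compact "successor" block randomly stands in for its group of original "predecessor" layers, with a replacement probability that rises linearly over training until the successors take over entirely. The schedule constants `k` and `b` and the plain-function blocks below are illustrative assumptions, not the repo's actual API.

```python
import random

def replacement_prob(step, k=0.0002, b=0.3):
    """Linear replacement-rate schedule: p = min(1, k * step + b).
    k and b here are illustrative values, not the repo's defaults."""
    return min(1.0, k * step + b)

def forward_with_replacement(predecessor_blocks, successor_blocks, x, step, rng=random):
    """One forward pass. Each successor block replaces a fixed-size group of
    predecessor blocks (e.g. one successor layer per two BERT layers) with
    probability p; otherwise the original group runs unchanged."""
    p = replacement_prob(step)
    group = len(predecessor_blocks) // len(successor_blocks)
    for i, successor in enumerate(successor_blocks):
        if rng.random() < p:
            x = successor(x)  # compact module stands in for its group
        else:
            for predecessor in predecessor_blocks[i * group:(i + 1) * group]:
                x = predecessor(x)
    return x
```

Late in training p reaches 1, so the model runs entirely through the successor blocks, which are then extracted as the compressed model.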
Stars: 315
Forks: 39
Language: Python
License: Apache-2.0
Category:
Last pushed: Jun 12, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/JetRunner/BERT-of-Theseus"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
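The same call from Python, using only the standard library. Only the URL comes from the curl line above; the `X-API-Key` header name used for the optional key is a guess and should be checked against the API's own documentation.

```python
import json
from urllib.request import Request, urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, owner, repo):
    # Path segments follow the curl example: /quality/<ecosystem>/<owner>/<repo>
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem, owner, repo, api_key=None):
    """Fetch a repo's quality record as a dict. Pass api_key for the
    higher rate limit; the header name below is an assumption."""
    req = Request(quality_url(ecosystem, owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)  # assumed header name
    with urlopen(req) as resp:
        return json.load(resp)
```

For example, `fetch_quality("transformers", "JetRunner", "BERT-of-Theseus")` requests the same record as the curl command.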
Higher-rated alternatives
Tongjilibo/bert4torch
An elegant PyTorch implementation of transformers
nyu-mll/jiant
jiant is an NLP toolkit
lonePatient/TorchBlocks
A PyTorch-based toolkit for natural language processing
monologg/JointBERT
PyTorch implementation of JointBERT: "BERT for Joint Intent Classification and Slot Filling"
grammarly/gector
Official implementation of the paper "GECToR – Grammatical Error Correction: Tag, Not Rewrite"...