jiangtaoxie/SoT
SoT: Delving Deeper into Classification Head for Transformer
This project helps machine learning engineers and researchers improve the accuracy of transformer classification models for both image and text data. It takes in structured image datasets (like ImageNet) or text datasets (like GLUE) and outputs a trained classification model with higher accuracy. It is aimed at professionals building and optimizing deep learning systems for computer vision and natural language processing tasks.
No commits in the last 6 months.
Use this if you are building or fine-tuning transformer-based classification models and want to achieve state-of-the-art accuracy by better utilizing all available information in your input data.
Not ideal if you are a business user or practitioner without a deep understanding of deep learning frameworks and model training, as this is a technical implementation for model developers.
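The project's premise, per its title, is that the standard transformer classification head discards useful information by looking only at the class token. A common simplification of that idea is to feed the classifier both the class token and a pooled summary of the word tokens. The sketch below illustrates only this simplified pooling; the actual SoT head is more elaborate, and all shapes and the mean-pooling choice here are assumptions for illustration:

```python
from statistics import fmean

def pooled_features(class_token, word_tokens):
    """Concatenate the class token with mean-pooled word-token features,
    so the classifier sees both. This is an assumed simplification for
    illustration; the repository's own head is more sophisticated."""
    dim = len(class_token)
    # Average each feature dimension across all word tokens.
    mean_pool = [fmean(tok[d] for tok in word_tokens) for d in range(dim)]
    return class_token + mean_pool  # length 2 * dim

# Toy 2-dimensional features: one class token, two word tokens.
cls = [1.0, 0.0]
words = [[0.0, 2.0], [2.0, 0.0]]
print(pooled_features(cls, words))  # [1.0, 0.0, 1.0, 1.0]
```

The resulting feature vector would then be passed to an ordinary linear classifier in place of the class token alone.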
Stars: 50
Forks: 7
Language: Python
License: —
Category: —
Last pushed: Dec 24, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jiangtaoxie/SoT"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
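The same endpoint can be called from Python instead of curl. A minimal sketch, assuming only the URL pattern shown above; the response schema is not documented here, so the fields in the sample payload are assumptions based on the stats on this page:

```python
import json
from urllib.parse import quote

# Base endpoint taken from the curl command in this listing.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository."""
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("ml-frameworks", "jiangtaoxie", "SoT")
print(url)
# https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jiangtaoxie/SoT

# Hypothetical response body, parsed offline; real field names may differ.
sample_response = json.loads(
    '{"stars": 50, "forks": 7, "language": "Python", "commits_30d": 0}'
)
print(sample_response["stars"], sample_response["forks"])
```

Fetch the URL with any HTTP client (e.g. `urllib.request.urlopen`) and parse the body with `json.loads` as shown.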
Higher-rated alternatives
lvapeab/nmt-keras
Neural Machine Translation with Keras
dair-ai/Transformers-Recipe
🧠A study guide to learn about Transformers
SirawitC/Transformer_from_scratch_pytorch
Build a transformer model from scratch using pytorch to understand its inner workings and gain...
jaketae/ensemble-transformers
Ensembling Hugging Face transformers made easy
lof310/transformer
PyTorch implementation of the current SOTA Transformer. Configurable, efficient, and...