AmirMansurian/AttnFD
[WACV'26] Attention as Geometric Transformation: Revisiting Feature Distillation for Semantic Segmentation
This project helps computer vision engineers and researchers build more accurate and efficient models for image analysis. It applies feature distillation: a large pre-trained 'teacher' model (e.g., DeepLabV3+ with a ResNet-101 backbone) transfers knowledge to a smaller, faster 'student' (e.g., a ResNet-18 backbone), which then performs significantly better than the same student trained with standard supervision alone. The result is a compact semantic segmentation model that identifies and outlines objects in images, useful for applications such as autonomous vehicles or medical imaging.
Use this if you need to improve the performance of a smaller, more efficient semantic segmentation model without increasing its size or computational requirements.
Not ideal if your primary goal is to train a large, complex model from scratch without leveraging knowledge from a pre-trained 'teacher' model.
Stars
42
Forks
2
Language
Python
License
—
Category
ML frameworks
Last pushed
Jan 06, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AmirMansurian/AttnFD"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
deepinv/deepinv
DeepInverse: a PyTorch library for solving imaging inverse problems using deep learning
yjxiong/tsn-pytorch
Temporal Segment Networks (TSN) in PyTorch
mhamilton723/STEGO
Unsupervised Semantic Segmentation by Distilling Feature Correspondences
fidler-lab/polyrnn-pp
Inference Code for Polygon-RNN++ (CVPR 2018)
pyxu-org/pyxu
Modular and scalable computational imaging in Python with GPU/out-of-core computing.