AmirMansurian/AttnFD

[WACV'26] Attention as Geometric Transformation: Revisiting Feature Distillation for Semantic Segmentation

Quality score: 27 / 100 (Experimental)

This project helps computer vision engineers and researchers build more accurate and efficient models for image analysis. It applies a technique called feature distillation: a large pre-trained "teacher" model (e.g., ResNet-101 with DeepLabV3+) guides the training of a smaller, faster "student" model (e.g., ResNet-18), so the student performs significantly better than it would with standard training alone. The result is a compact model for semantic segmentation — labeling every pixel of an image by object class — which is useful in domains such as autonomous driving and medical imaging.
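As a rough illustration of the teacher-to-student transfer described above, here is a minimal sketch of an attention-style feature-distillation loss in NumPy. This is a generic formulation (spatial attention as the channel-wise sum of squared activations, compared by MSE), not the repository's exact AttnFD method; the function names are illustrative.

```python
import numpy as np

def attention_map(feat):
    # feat: (C, H, W) feature tensor -> flattened spatial attention map.
    # Sum squared activations over channels, then L2-normalize.
    a = (feat ** 2).sum(axis=0).ravel()
    return a / (np.linalg.norm(a) + 1e-8)

def distillation_loss(student_feat, teacher_feat):
    # MSE between the normalized attention maps of student and teacher.
    # Works even when the two backbones have different channel counts.
    s = attention_map(student_feat)
    t = attention_map(teacher_feat)
    return float(np.mean((s - t) ** 2))

rng = np.random.default_rng(0)
teacher = rng.standard_normal((256, 8, 8))  # e.g., a ResNet-101 stage output
student = rng.standard_normal((64, 8, 8))   # e.g., a ResNet-18 stage output
loss = distillation_loss(student, teacher)
```

In training, this loss would be added to the student's usual segmentation loss, pulling the student's spatial attention toward the teacher's at matching network stages.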

Use this if you need to improve the performance of a smaller, more efficient semantic segmentation model without increasing its size or computational requirements.

Not ideal if your primary goal is to train a large, complex model from scratch without leveraging knowledge from a pre-trained 'teacher' model.

semantic-segmentation computer-vision model-optimization deep-learning image-analysis
No license · No package · No dependents
Maintenance: 6 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 5 / 25


Stars: 42
Forks: 2
Language: Python
License: none
Last pushed: Jan 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AmirMansurian/AttnFD"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
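The same endpoint can be called from Python. A minimal sketch using only the standard library is below; `quality_url` and `fetch_quality` are illustrative names, and the response schema is not documented here, so `fetch_quality` simply returns the parsed JSON as-is.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the endpoint URL shown in the curl example above.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Live request: needs network access and counts against the daily quota.
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("ml-frameworks", "AmirMansurian", "AttnFD")
```

Calling `fetch_quality("ml-frameworks", "AmirMansurian", "AttnFD")` performs the same request as the curl command.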