NVlabs/FAN
Official PyTorch implementation of Fully Attentional Networks
This project offers robust computer vision models, called Fully Attentional Networks (FAN), that maintain high accuracy even when images are corrupted by noise, blur, or other common distortions. Given input images that may be imperfect, the models output reliable classifications or object detections. It is designed for researchers and practitioners building image recognition systems that need to perform consistently in real-world, less-than-ideal conditions.
480 stars. No commits in the last 6 months.
Use this if you need to build or evaluate computer vision systems that perform reliably on images that might be noisy, blurry, or otherwise degraded.
Not ideal if your primary concern is raw accuracy on perfectly clean datasets, or if you need to classify non-visual data.
Stars: 480
Forks: 28
Language: Python
License: —
Category:
Last pushed: Mar 31, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/NVlabs/FAN"
Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.
Higher-rated alternatives
Jittor/jittor
Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators.
berniwal/swin-transformer-pytorch
Implementation of the Swin Transformer in PyTorch.
zhanghang1989/ResNeSt
ResNeSt: Split-Attention Networks
NVlabs/FasterViT
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with...
ViTAE-Transformer/ViTPose
The official repo for [NeurIPS'22] "ViTPose: Simple Vision Transformer Baselines for Human Pose...