EvgenyKashin/non-leaking-conv
PyTorch implementation of the paper "Spectral Leakage and Rethinking the Kernel Size in CNNs".
This project offers an alternative way to build Convolutional Neural Networks (CNNs) by applying principles from signal processing. It provides code for CNN layers that reduce spectral-leakage artifacts, the frequency-domain distortions that abruptly truncated convolution kernels can introduce. Machine learning engineers and researchers can use it to experiment with different CNN architectures for computer vision tasks.
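The core signal-processing idea is to taper a convolution kernel toward zero at its borders with a window function, so the kernel's frequency response has weaker sidelobes. A minimal sketch of that idea in NumPy (the helper names `hamming_window_2d` and `windowed_kernel` are illustrative, not this repo's API, and the repo's actual layers are implemented in PyTorch):

```python
import numpy as np

def hamming_window_2d(k):
    # Separable 2D Hamming window: outer product of two 1D windows.
    w = np.hamming(k)
    return np.outer(w, w)

def windowed_kernel(kernel):
    # Taper the kernel's border weights toward zero to reduce
    # spectral leakage (hypothetical helper for illustration).
    k = kernel.shape[-1]
    return kernel * hamming_window_2d(k)

# A uniform 5x5 kernel before and after windowing:
kernel = np.ones((5, 5))
tapered = windowed_kernel(kernel)
# The center weight stays at 1.0 while corner weights shrink
# to hamming(5)[0]**2 = 0.08 * 0.08.
```

In practice the windowed weights would be applied inside a convolution layer (e.g. by multiplying the layer's weight tensor by the window before each forward pass).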
No commits in the last 6 months.
Use this if you are a machine learning researcher or engineer interested in exploring new CNN architectures to potentially improve model performance or understand the impact of kernel design on image processing.
Not ideal if you are looking for a plug-and-play solution that guarantees out-of-the-box performance improvements for your existing computer vision models without requiring architectural modifications.
Stars: 14
Forks: —
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Feb 03, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/EvgenyKashin/non-leaking-conv"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
Jittor/jittor
Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators.
zhanghang1989/ResNeSt
ResNeSt: Split-Attention Networks
berniwal/swin-transformer-pytorch
Implementation of the Swin Transformer in PyTorch.
NVlabs/FasterViT
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with...
ViTAE-Transformer/ViTPose
The official repo for [NeurIPS'22] "ViTPose: Simple Vision Transformer Baselines for Human Pose...