keyu-tian/SparK
[ICLR'23 Spotlight] The first successful BERT/MAE-style pretraining on any convolutional network; PyTorch implementation of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling"
SparK is a self-supervised pretraining technique that significantly improves the performance of convolutional neural networks (CNNs) on image analysis tasks. Given a standard image dataset and a CNN architecture, it applies BERT/MAE-style sparse masked modeling to produce a pretrained backbone, which can then be fine-tuned to reach higher accuracy on image classification and other vision benchmarks. It is aimed at machine learning researchers and practitioners working with computer vision models.
1,368 stars. No commits in the last 6 months.
Use this if you want to improve the performance of your existing convolutional neural network models for image-related tasks without needing a large amount of labeled data for initial training.
Not ideal if you are working with non-image data or if your project does not involve deep learning for computer vision.
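The core idea behind this kind of pretraining is simple: hide a large fraction of image patches and train the network to reconstruct them. A minimal, dependency-free sketch of the random patch-masking step is below; the 14×14 patch grid and 0.6 mask ratio are illustrative assumptions, not SparK's actual defaults.

```python
import random

def make_patch_mask(h_patches, w_patches, mask_ratio=0.6, seed=None):
    """Randomly pick patches to hide, BERT/MAE-style.

    Returns a set of (row, col) patch coordinates to mask; the model
    is trained to reconstruct the pixels at these positions.
    """
    rng = random.Random(seed)
    coords = [(r, c) for r in range(h_patches) for c in range(w_patches)]
    n_masked = int(round(len(coords) * mask_ratio))
    return set(rng.sample(coords, n_masked))

mask = make_patch_mask(14, 14, mask_ratio=0.6, seed=0)
print(len(mask))  # 118 of the 196 patches are hidden
```

In SparK, the twist is that the surviving patches are processed sparsely, so the same masking recipe works for hierarchical CNNs rather than only for ViT-style encoders.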
Stars
1,368
Forks
84
Language
Python
License
MIT
Category
ml-frameworks
Last pushed
Jan 23, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/keyu-tian/SparK"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
InterDigitalInc/CompressAI
A PyTorch library and evaluation platform for end-to-end compression research
quic/aimet
AIMET is a library that provides advanced quantization and compression techniques for trained...
tensorflow/compression
Data compression in TensorFlow
baler-collaboration/baler
Repository of Baler, a machine learning based data compression tool
thulab/DeepHash
An Open-Source Package for Deep Learning to Hash (DeepHash)