bartbussmann/BatchTopK

Implementation of the BatchTopK activation function for training sparse autoencoders (SAEs)

Quality score: 37 / 100 (Emerging)

This helps machine learning researchers and engineers train sparse autoencoders more efficiently. It takes a batch of feature activations and applies an activation function that keeps the top activations across the entire batch, rather than a fixed number per individual sample. This adaptive allocation of the sparsity budget can improve both the performance and the sparsity of the autoencoder model.
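The idea can be sketched as follows. This is a hypothetical NumPy illustration of the batch-level top-k selection described above, not the repository's own implementation (which is written in PyTorch and may differ in detail); the function name `batch_topk` and the parameter `k` (average active features per sample) are assumptions for the example.

```python
import numpy as np

def batch_topk(activations: np.ndarray, k: int) -> np.ndarray:
    """Keep the batch_size * k largest activations across the whole
    flattened batch and zero out the rest.

    Unlike per-sample top-k, samples with stronger features can
    claim more of the shared activation budget.
    """
    batch_size = activations.shape[0]
    n_keep = batch_size * k
    flat = activations.ravel()
    if n_keep >= flat.size:
        return activations.copy()
    # Indices of the n_keep largest values across the flattened batch.
    top_idx = np.argpartition(flat, -n_keep)[-n_keep:]
    out = np.zeros_like(flat)
    out[top_idx] = flat[top_idx]
    return out.reshape(activations.shape)
```

With `k = 1` and a batch of two samples, two activations survive in total; both may come from the same sample if its features dominate, which is the behavior that distinguishes this from a per-sample top-k.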

No commits in the last 6 months.

Use this if you are working with sparse autoencoders and want an alternative method to select the most active features across a batch, potentially leading to better model training and representation learning.

Not ideal if you are looking for a general-purpose machine learning tool and are not specifically involved in the research or development of sparse autoencoders.

machine-learning-research sparse-autoencoders representation-learning neural-network-training deep-learning-engineering
Flags: Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 61
Forks: 6
Language: Python
License: MIT
Last pushed: Jul 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/bartbussmann/BatchTopK"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.