alsaniie/Understanding-Activation-functions-in-Neural-Networks
An activation function takes the weighted sum of a neuron's inputs plus its bias and transforms it into the neuron's output, determining whether (and how strongly) the neuron is activated.
This project helps anyone learning about neural networks understand the role and behavior of different activation functions. It illustrates how these functions introduce non-linearity, which is essential for neural networks to learn complex, real-world patterns, and shows how the choice of activation function affects model accuracy and training efficiency. It is aimed at students, researchers, and practitioners in machine learning and AI.
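As a minimal sketch of the idea (not code from this repository), three of the most common activation functions can be written in a few lines of NumPy. Each one maps a neuron's pre-activation value (weighted sum plus bias) to its output in a non-linear way:

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs into (0, 1); historically popular, but saturates for large |x|.
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Zero-centered cousin of sigmoid; output lies in (-1, 1).
    return np.tanh(x)

def relu(x):
    # Rectified Linear Unit: max(0, x); a common default in modern networks.
    return np.maximum(0.0, x)

# Example pre-activations (weighted sums plus bias) for three neurons.
z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))
print(tanh(z))
print(relu(z))
```

Without a non-linearity like these, stacking layers would collapse into a single linear transformation, which is why activation functions are essential for learning complex patterns.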
No commits in the last 6 months.
Use this if you need a clear, visual explanation of how various activation functions work and why they are essential in neural networks.
Not ideal if you are looking for advanced research or highly technical implementations for cutting-edge neural network architectures.
Stars
20
Forks
1
Language
—
License
Apache-2.0
Category
Last pushed
Feb 21, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/alsaniie/Understanding-Activation-functions-in-Neural-Networks"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
digantamisra98/Mish
Official Repository for "Mish: A Self Regularized Non-Monotonic Neural Activation Function" [BMVC 2020]
Sentdex/nnfs_book
Sample code from the Neural Networks from Scratch book.
itdxer/neupy
NeuPy is a TensorFlow-based Python library for prototyping and building neural networks.
vzhou842/cnn-from-scratch
A Convolutional Neural Network implemented from scratch (using only numpy) in Python.
nicklashansen/rnn_lstm_from_scratch
How to build RNNs and LSTMs from scratch with NumPy.