Here is a full Python 🐍 implementation of a neural network from scratch in less than 20 lines of code!
The example shows it learning 5 logic functions (but it's powerful enough to learn much more).
An excellent exercise in learning how feedforward and backpropagation work!
A quick rundown of the code:
⚫️ X → input
⚫️ layer → hidden layer
⚫️ output → output layer
⚫️ W1 → set of weights between X and layer
⚫️ W2 → set of weights between layer and output
⚫️ error → how far off our prediction is after every epoch
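To make that rundown concrete, here's a minimal setup sketch. The hidden-layer size of 8, the random seed, and the 5 logic functions I picked (AND, OR, XOR, NAND, NOR) are my assumptions, not necessarily what the original code uses:

```python
import numpy as np

# X -> input: every combination of two bits
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Targets: one column per logic function (AND, OR, XOR, NAND, NOR) -- assumed choice
y = np.array([[0, 0, 0, 1, 1],
              [0, 1, 1, 1, 0],
              [0, 1, 1, 1, 0],
              [1, 1, 0, 0, 0]])

np.random.seed(0)                       # reproducibility (assumed)
W1 = np.random.uniform(-1, 1, (2, 8))   # weights between X and the hidden layer
W2 = np.random.uniform(-1, 1, (8, 5))   # weights between the hidden layer and the output
```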
I'm using a sigmoid as the activation function. You'll recognize it from its formula:
sigmoid(x) = 1 / (1 + exp(-x))
It would have been nicer to extract it as a separate function, but then the code wouldn't be as compact 🙂
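For reference, here's what that extracted function would look like (just a quick sketch):

```python
import numpy as np

def sigmoid(x):
    """Squash any real number into the (0, 1) range."""
    return 1 / (1 + np.exp(-x))

print(sigmoid(np.array([-2.0, 0.0, 2.0])))  # ~[0.12, 0.5, 0.88]
```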
Within the loop, we first update the values of both layers. This is "forward propagation."
Then we compute the error.
Then we update the weights (starting with the last set). This is "backpropagation."
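Here's a minimal, self-contained sketch of that loop, reusing the setup from above. The epoch count, the implicit learning rate of 1, and the weight initialization are my assumptions, not necessarily what the original code uses:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # input
y = np.array([[0, 0, 0, 1, 1],                   # AND, OR, XOR, NAND, NOR (assumed)
              [0, 1, 1, 1, 0],
              [0, 1, 1, 1, 0],
              [1, 1, 0, 0, 0]])

np.random.seed(0)
W1 = np.random.uniform(-1, 1, (2, 8))            # weights between X and the hidden layer
W2 = np.random.uniform(-1, 1, (8, 5))            # weights between the hidden layer and the output

for epoch in range(10_000):
    # Forward propagation: update both layers
    layer = 1 / (1 + np.exp(-X @ W1))
    output = 1 / (1 + np.exp(-layer @ W2))

    # How far off the prediction is this epoch
    error = y - output

    # Backpropagation: update the weights, last set first
    d_output = error * output * (1 - output)
    d_layer = d_output @ W2.T * layer * (1 - layer)
    W2 += layer.T @ d_output
    W1 += X.T @ d_layer

print(output.round(2))  # should be close to y after enough epochs
```

Note the `+=`: because `error = y - output`, the deltas already point in the direction that reduces the error, so we add them directly (that's the implicit learning rate of 1).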
If you'd like to play with the code, here is the link to it: