Neural Network implemented from scratch in Python🔥
Here is the step-by-step explanation with code.
Thread🧵👇
Below is a simple neural network consisting of 2 layers:
- Hidden Layer
- Output Layer
First, initialize the layer sizes along with the weights & biases.
Also define the sigmoid activation function, which is key to introducing non-linearity, & its derivative.
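The setup above can be sketched as follows. A 2-4-1 architecture and the seed are illustrative assumptions, not values from the thread:

```python
import numpy as np

# layer sizes (illustrative assumption: 2 inputs, 4 hidden units, 1 output)
input_size, hidden_size, output_size = 2, 4, 1

rng = np.random.default_rng(42)
W1 = rng.normal(0, 0.5, (input_size, hidden_size))   # input -> hidden weights
b1 = np.zeros((1, hidden_size))                      # hidden-layer bias
W2 = rng.normal(0, 0.5, (hidden_size, output_size))  # hidden -> output weights
b2 = np.zeros((1, output_size))                      # output-layer bias

def sigmoid(x):
    # squashes any real number into (0, 1), introducing non-linearity
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(s):
    # expects s = sigmoid(x), i.e. the already-activated value
    return s * (1.0 - s)
```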
Forward Pass:
Here the input data is passed through the neural network to obtain the predicted output.
First, calculate the hidden layer's pre-activation & apply the sigmoid activation to it:
hidden_output = sigmoid(X•W1 + b1)
Then pass that through the output layer, applying sigmoid again:
output = sigmoid(hidden_output•W2 + b2)
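The two forward-pass steps in code — a minimal sketch in NumPy; the shapes in the usage example are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, W1, b1, W2, b2):
    """One forward pass; returns the hidden activations and the final output."""
    hidden_output = sigmoid(X @ W1 + b1)       # hidden layer: X·W1 + b1, then sigmoid
    output = sigmoid(hidden_output @ W2 + b2)  # output layer repeats the same pattern
    return hidden_output, output

# quick usage with random parameters (2 inputs, 4 hidden units, 1 output)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
hidden_output, output = forward(X, W1, b1, W2, b2)
```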
Backward Pass:
First compute the error & the gradient at the output layer.
error = (y - output)
d_output = error * sigmoid_derivative(output)
Now calculate d_W2, which is the gradient of the loss function with respect to W2.
d_W2 = hidden_output.T • d_output
Similarly calculate d_W1, d_b2 & d_b1:
d_W1: gradient of the loss wrt W1 (propagate d_output back through W2, then multiply by X.T)
d_b2: gradient of the loss wrt b2 (bias of the output layer)
d_b1: gradient of the loss wrt b1 (bias of the hidden layer)
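All four gradients together — a sketch assuming the sigmoid helpers from earlier; the toy shapes at the bottom are assumptions for a quick check:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(s):
    return s * (1.0 - s)

def backward(X, y, hidden_output, output, W2):
    error = y - output                             # per-sample error
    d_output = error * sigmoid_derivative(output)  # delta at the output layer
    d_W2 = hidden_output.T @ d_output
    d_b2 = d_output.sum(axis=0, keepdims=True)
    # chain rule: push the output delta back through W2, then through the sigmoid
    d_hidden = (d_output @ W2.T) * sigmoid_derivative(hidden_output)
    d_W1 = X.T @ d_hidden
    d_b1 = d_hidden.sum(axis=0, keepdims=True)
    return d_W1, d_b1, d_W2, d_b2

# toy shapes for a quick check (2 inputs, 4 hidden units, 1 output)
rng = np.random.default_rng(0)
X, y = rng.normal(size=(5, 2)), rng.uniform(size=(5, 1))
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
hidden_output = sigmoid(X @ W1 + b1)
output = sigmoid(hidden_output @ W2 + b2)
d_W1, d_b1, d_W2, d_b2 = backward(X, y, hidden_output, output, W2)
```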
Now Update the Weights:
Here the learning rate is the hyperparameter!
A low learning rate can cause the model to get stuck in a local optimum, while a high learning rate can cause it to overshoot the optimal solution.