If you look at this example, you probably figured out the rule.
Each row is a node, and each element represents a directed and weighted edge; zero elements mean there is no edge.
The element in the 𝑖-th row and 𝑗-th column corresponds to an edge going from 𝑖 to 𝑗.
To unwrap the definition a bit, let's check the first row, which corresponds to the edges outgoing from the first node.
(Notice how there's no edge for the value 0.)
Similarly, the first column corresponds to the edges incoming to the first node.
Here is the full picture, with the nodes explicitly labeled.
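To see the rule in code, here is a minimal sketch with a made-up weighted edge list (not the example from the figures):

```python
# Minimal sketch: building the matrix of a small weighted directed graph.
# The edge list is made up for illustration.
import numpy as np

n = 3  # number of nodes, labeled 0, 1, 2
A = np.zeros((n, n))

# (source, target, weight) triples; A[i, j] is the edge going from i to j
edges = [(0, 1, 2.0), (1, 2, 0.5), (2, 0, 1.0), (0, 2, 3.0)]
for i, j, w in edges:
    A[i, j] = w

print(A)  # zero entries mean "no edge", exactly as described above
```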
Why is the directed graph representation beneficial?
For example, the powers of the matrix correspond to walks in the graph.
Take a look at the elements of the squared matrix A². The element in the 𝑖-th row and 𝑗-th column is the sum of the products a(𝑖,𝑘)·a(𝑘,𝑗) over all 𝑘, and each term corresponds to a 2-step walk from 𝑖 to 𝑗 through 𝑘. So all possible 2-step walks are accounted for in the sum defining the elements of A².
If the directed graph represents the states of a Markov chain, the square of its transition probability matrix gives the two-step transition probabilities: the (𝑖, 𝑗) element is the probability of moving from state 𝑖 to state 𝑗 in exactly two steps.
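A tiny sketch, with a made-up 0/1 adjacency matrix and a made-up transition matrix, shows both claims:

```python
import numpy as np

# Made-up unweighted example: A[i, j] = 1 iff there is an edge i -> j
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]])

# (A @ A)[i, j] = sum over k of A[i, k] * A[k, j],
# which counts the 2-step walks i -> k -> j
print(A @ A)

# For a Markov chain, use a row-stochastic transition matrix P instead:
P = np.array([[0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
print(P @ P)  # (P @ P)[i, j] = probability of going from i to j in 2 steps
```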
There is much more to this connection.
For instance, it gives us a deep insight into the structure of nonnegative matrices.
To see what graphs show about matrices, let's talk about the concept of strongly connected components.
A directed graph is strongly connected if every node can be reached from every other node.
If this fails for even a single pair of nodes, the graph is not strongly connected.
Below, you can see an example of both.
Matrices that correspond to strongly connected graphs are called irreducible. All other nonnegative matrices are called reducible.
Soon, we'll see why.
(For simplicity, I assumed each edge to have a unit weight, but each weight can be an arbitrary nonnegative number.)
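A quick sketch of both cases, assuming networkx is available (the two graphs are made up):

```python
import networkx as nx

# A directed cycle is strongly connected: you can reach any node from any other.
cycle = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
print(nx.is_strongly_connected(cycle))  # True -> its matrix is irreducible

# A one-way path is not: node 0 cannot be reached from node 2.
path = nx.DiGraph([(0, 1), (1, 2)])
print(nx.is_strongly_connected(path))   # False -> its matrix is reducible
```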
Back to the general case!
Even though not all directed graphs are strongly connected, we can partition the nodes into strongly connected components: maximal sets of nodes that are all reachable from each other.
Let's label the nodes of this graph and construct the corresponding matrix!
(For simplicity, assume that all edges have unit weight.)
Do you notice a pattern?
The matrix of our graph can be reduced to a simpler form!
Its diagonal comprises blocks whose graphs are strongly connected. (That is, the blocks are irreducible.) Furthermore, everything below the diagonal blocks is zero.
In general, this block-matrix structure is called the Frobenius normal form.
Let's reverse the question: can we transform an arbitrary nonnegative matrix into the Frobenius normal form?
Yes, and with the help of directed graphs, this is much easier to show than purely using algebra.
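Here is a sketch of that construction on a made-up matrix, using networkx: find the strongly connected components, order them topologically via the condensation, and permute rows and columns accordingly.

```python
import networkx as nx
import numpy as np

# Made-up example: nodes {0, 1} and {2, 3} each form a 2-cycle,
# with a single edge 1 -> 2 connecting the two components.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])

G = nx.from_numpy_array(A, create_using=nx.DiGraph)

# The condensation contracts each strongly connected component into a single
# node, yielding a DAG; a topological sort of the DAG orders the components.
C = nx.condensation(G)
order = [v for comp in nx.topological_sort(C) for v in C.nodes[comp]["members"]]

# Permuting rows and columns by this order gives the block-triangular
# Frobenius normal form: irreducible blocks on the diagonal, zeros below.
P = A[np.ix_(order, order)]
print(P)
```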
This is just the tip of the iceberg. For example, with the help of matrices, we can define the eigenvalues of graphs!
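For a taste, a sketch with a toy matrix of my choosing: the eigenvalues of a graph are simply the eigenvalues of its matrix.

```python
import numpy as np

# Made-up example: the complete graph on 3 nodes
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

print(np.linalg.eigvals(A))  # the graph's spectrum: 2, -1, -1
```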
Utilizing the relation between matrices and graphs has been extremely profitable for both graph theory and linear algebra.
This thread is just ~30% of the full post, which you can find in Tivadar's book.
GPT-4o is slower than Flash, more expensive, chatty, and very stubborn (it doesn't like to stick to my prompts).
Next week, I'll post a step-by-step video on how to build this.
The first request takes longer (the model is warming up), but things are faster from that point on.
A few opportunities to improve this:
1. Stream answers from the model instead of waiting for the full answer (see the sketch after this list).
2. Add the ability to interrupt the assistant.
3. Run Whisper on the GPU.
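For the first item, here is a minimal streaming sketch with the OpenAI Python client (the model name and prompt are placeholders, not my actual setup):

```python
# Minimal streaming sketch with the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

stream = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,  # yields chunks as they are generated
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # some chunks carry no text
        print(delta, end="", flush=True)
```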
Unfortunately, no local model supports text+images (as far as I know), so I'm stuck running online models.
The TTS API (synthesizing text to audio) can also be replaced by a local version. I tried, but the available voices suck (too robotic), so I kept OpenAI's.
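For reference, the OpenAI TTS call is roughly this shape (the voice, text, and output file are illustrative, not my actual configuration):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

audio = client.audio.speech.create(
    model="tts-1",
    voice="alloy",                   # one of the built-in voices
    input="Hello! How can I help?",  # placeholder text
)
with open("reply.mp3", "wb") as f:
    f.write(audio.content)           # save the synthesized speech
```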
I feel so sorry for anyone who bought the Rabbit R1.
It’s not just that the product is non-functional (as we learned from all the reviews); the real problem is that the whole thing seems to be a lie.
None of what they pitched exists or functions the way they said.
They sold the world on a Large Action Model (LAM), an intelligent AI model that would understand applications and execute the actions requested by the user.
In reality, they are using Playwright, a web automation tool.
No AI. Just dumb, click-around, hard-coded scripts.
Their foundational AI model is just ChatGPT + scripts.
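For context, here is a made-up example of what a hard-coded Playwright script looks like (illustrative only, not Rabbit's actual code):

```python
# A made-up example of a hard-coded Playwright "action" -- not Rabbit's code.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/login")   # hypothetical URL
    page.fill("#email", "user@example.com")  # hard-coded selectors,
    page.fill("#password", "secret")         # hard-coded flow:
    page.click("button[type=submit]")        # no model "understands" anything
    browser.close()
```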
Rabbit’s founder lied in their marketing videos, during interviews, when he presented the product, and on Discord when answering questions from early supporters.
1. Mojo 🔥 went open-source
2. Claude 3 beats GPT-4
3. $100B supercomputer from MSFT and OpenAI
4. Andrew Ng and Harrison Chase discussed AI Agents
5. Karpathy talked about the future of AI
...
And more.
Here is everything that will keep you up at night:
Mojo 🔥, the programming language that turns Python into a beast, went open-source.
This is a huge step and great news for the Python and AI communities!
With Mojo 🔥 you can write Python code or scale all the way down to metal code. It's fast!