Santiago
Computer scientist. I teach hard-core AI/ML Engineering at https://t.co/THCAAZcBMu. YouTube: https://t.co/pROi08OZYJ
Jun 6 6 tweets 2 min read
Bye-bye, virtual assistants! Here is the most useful agent of 2025.

An agent with access to your Gmail, Calendar, and Drive, and the ability to do things for you is pretty mind-blowing.

I asked it to read my emails and reply to every cold outreach message.

My mind is blown! AI Secretary and the folks at @genspark_ai will start printing money!

You can try this out here: genspark.ai

Check their announcement video and you'll see some of the crazy things it can do for you.
Jun 5 5 tweets 2 min read
You can now have a literal army of coding interns working for you while you sleep!

Remote Agent is now generally available. This is how we all get to experience what AI is really about.

Here is what you need to know:

Remote Agent is a coding agent based on @augmentcode. They were gracious enough to partner with me on this post.

Remote Agent:

• Runs in the cloud
• Works autonomously
• Can handle small tasks from your backlog

Here is a link to try it out: fnf.dev/4jobOrw
Jun 4 4 tweets 2 min read
Knowledge graphs are infinitely better than vector search for building the memory of AI agents.

With five lines of code, you can build a knowledge graph with your data.

When you see the results, you'll never go back to vector-mediocrity-land.

Here is a quick video:

Cognee is open-source and outperforms any basic vector search approach in terms of retrieval relevance.

• Easy to use
• Reduces hallucinations (by a ton!)
• Open-source

Here is a link to the repository: github.com/topoteretes/co…
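For reference, the "five lines of code" claim maps to Cognee's add → cognify → search flow. Here is a minimal sketch of that pattern; exact function signatures vary between Cognee versions, and it assumes LLM credentials are already configured in your environment:

```python
import asyncio
import cognee  # pip install cognee; assumes LLM credentials are set via environment variables

async def main():
    # 1) Ingest raw data (text, documents, transcripts)
    await cognee.add("Graphs connect entities explicitly; vectors only measure similarity.")

    # 2) Build the knowledge graph from everything added so far
    await cognee.cognify()

    # 3) Query the graph instead of doing plain vector search
    #    (the search signature is an assumption; check the repo for your version)
    results = await cognee.search(query_text="How do graphs differ from vectors?")
    print(results)

asyncio.run(main())
```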
May 26 4 tweets 2 min read
Cursor, Windsurf, and Copilot suck with Jupyter notebooks. They are great when you are writing regular code, but notebooks are a different monster.

Vincent is an extension fine-tuned to work with notebooks.

10x better than the other tools!

Here is a quick video:

You can try Vincent for free. Here is a link to the extension: marketplace.visualstudio.com/items?itemName…

It works with any of the VSCode forks, including Cursor and Windsurf. The free plan will give you enough to test it out.
May 19 4 tweets 3 min read
I added a Knowledge Graph to Cursor using MCP.

You gotta see this working!

Knowledge graphs are a game-changer for AI Agents, and this is one example of how you can take advantage of them.

How this works:

1. Cursor connects to Graphiti's MCP Server. Graphiti is a very popular open-source Knowledge Graph library for AI agents.

2. Graphiti connects to Neo4j running locally.

Now, every time I interact with Cursor, the information is synthesized and stored in the knowledge graph. In short, Cursor now "remembers" everything about our project.

Huge!

Here is the video I recorded.

To get this working on your computer, follow the instructions on this link:

github.com/getzep/graphit…
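If you are curious what the MCP server is doing underneath, the Graphiti library itself talks to the local Neo4j instance roughly like this. This is a minimal sketch based on Graphiti's documented quickstart; parameter names and defaults may differ by version, and it assumes OPENAI_API_KEY is set because Graphiti uses an LLM for entity extraction:

```python
import asyncio
from datetime import datetime, timezone

from graphiti_core import Graphiti              # pip install graphiti-core
from graphiti_core.nodes import EpisodeType

async def main():
    # Neo4j running locally on the default bolt port; credentials are placeholders
    graphiti = Graphiti("bolt://localhost:7687", "neo4j", "password")
    try:
        await graphiti.build_indices_and_constraints()

        # Store one piece of project context as an "episode"
        await graphiti.add_episode(
            name="cursor-session-1",
            episode_body="We decided to use FastAPI for the backend and Postgres for storage.",
            source=EpisodeType.text,
            source_description="Cursor chat",
            reference_time=datetime.now(timezone.utc),
        )

        # Later, recall it with Graphiti's hybrid graph + semantic search
        results = await graphiti.search("What did we pick for the backend?")
        print(results)
    finally:
        await graphiti.close()

asyncio.run(main())
```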

Something super cool about using Graphiti's MCP server:

You can use one model to develop the requirements and a completely different model to implement the code. This is a huge plus because you can use the model that is strongest at each stage.

Also, Graphiti supports custom entities, which you can use when running the MCP server.

You can use these custom entities to structure and recall domain-specific information, which can increase the accuracy of your results tenfold.

Here is an example of what these look like:

github.com/getzep/graphit…
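Concretely, custom entities are plain Pydantic models whose fields describe what you want Graphiti to extract. The models below are made-up examples, and the entity_types parameter name is taken from Graphiti's docs, so treat it as an assumption and check the linked examples:

```python
from pydantic import BaseModel, Field

class Requirement(BaseModel):
    """A product requirement captured during a planning conversation."""
    project_name: str | None = Field(None, description="Project the requirement belongs to")
    description: str | None = Field(None, description="What the requirement asks for")

class Preference(BaseModel):
    """A preference the user has stated (tools, libraries, style)."""
    category: str | None = Field(None, description="For example 'backend framework' or 'database'")
    preference: str | None = Field(None, description="The option the user prefers")

# Passed when adding an episode, e.g. (parameter name assumed from Graphiti's docs):
# await graphiti.add_episode(..., entity_types={"Requirement": Requirement, "Preference": Preference})
```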
Apr 30 11 tweets 3 min read
Improve your LLM-based applications by 200%:

Build an LLM-as-a-Judge evaluator and integrate it with your system.

This sounds harder than it is.

Here is how to do it and the things you need to keep in mind:

1/11

(LLM-as-a-Judge is one of the topics I teach in my cohort. The next iteration starts next week. You can join at ml.school.)

LLM-as-a-Judge is a technique that uses an LLM to evaluate the quality of the outputs from your application.

2/11
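To make this concrete, here is a minimal sketch of an LLM-as-a-Judge evaluator using the OpenAI Python client. The judge model, the 1-5 faithfulness rubric, and the prompt wording are illustrative choices, not a fixed recipe:

```python
import json
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

JUDGE_PROMPT = """You are grading the output of an AI application.
Score the ANSWER for faithfulness to the CONTEXT on a 1-5 scale.
Return JSON: {{"score": <int>, "reason": "<short explanation>"}}

CONTEXT: {context}
QUESTION: {question}
ANSWER: {answer}"""

def judge(context: str, question: str, answer: str) -> dict:
    # Ask a (usually stronger) model to grade the application's output
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whichever judge model you trust
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            context=context, question=question, answer=answer)}],
        response_format={"type": "json_object"},
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)

print(judge("The invoice total is $120.", "What is the total?", "The total is $120."))
```

Run the judge over a sample of real traces and track the scores over time; that is the "integrate it with your system" part.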
Apr 18 8 tweets 3 min read
Falling off ladders to claim insurance checks is a multi-million-dollar fraud business in the US.

People bury insurance companies in paperwork to steal from them.

Enter RAG.

Here is how RAG is becoming the cheaters' worst nightmare (and how you can do the same):

1/8

An insurance claim can easily have 20,000 pages, and somebody must read them all!

I work with @EyeLevel, and we built a fraud detection system using their GroundX platform.

Best RAG use case I've seen—and you can use GroundX to build your own in any vertical.

2/8
Mar 7 4 tweets 2 min read
Here is an explanation of what MCP is, how it works, and why I think it's awesome.

I will also show you the MCP server I'm building.

This is good stuff. For those who like YouTube better:



By the way, I won't like you anymore if you don't subscribe to my channel.
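For a sense of scale, an MCP server can be tiny. Here is a minimal sketch using the official Python SDK's FastMCP helper; the word_count tool is a made-up example, not the server mentioned above:

```python
from mcp.server.fastmcp import FastMCP  # pip install mcp

# The name an MCP client (Claude Desktop, Cursor, etc.) will see
mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # Exposes the tool over stdio so an MCP client can launch and call it
    mcp.run()
```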
Jan 16 11 tweets 1 min read
AWS is irrefutable proof that the right software with a great backend can succeed despite horrible UI/UX.

Craigslist: “hold my beer”
Nov 12, 2024 4 tweets 2 min read
This is worth 1,000+ hours of engineering work every year:

1. Reproducing a bug
2. Getting detailed debug data
3. Writing how to reproduce it
4. Putting it all together in a good bug report

This tool can do all of this and cut the time it takes to fix the bug by 70%+: Jam.dev

It makes the reporting and fixing process really fast!

Click once, and engineers get:

• Console logs
• Network requests
• Timing waterfall
• Repro steps
• Session & user details
• Device & OS
• Backend logs

Check the attached video.
Oct 1, 2024 13 tweets 5 min read
My new soon-to-be Linux laptop right before I start assembling it.

RAM and SSD are now installed. Took me 1 minute and I didn't even read the manual.
Sep 16, 2024 9 tweets 3 min read
How can you build a good understanding of math for machine learning?

Here is a complete roadmap for you.

In essence, three fields make this up:

• Calculus
• Linear algebra
• Probability theory

Let's take a quick look at them!

This thread is courtesy of @TivadarDanka.

3 years ago, he started writing a book about the mathematics of Machine Learning.

It's the best book you'll ever read: tivadardanka.com/books/mathemat…

Nobody explains complex ideas like he does.
Aug 12, 2024 18 tweets 6 min read
The single most undervalued fact of linear algebra:

Matrices are graphs, and graphs are matrices.

Encoding matrices as graphs is a cheat code, making complex behavior simple to study.

Let me show you how!

By the way, this thread is courtesy of @TivadarDanka. He allowed me to republish it.

3 years ago, he started writing a book about the mathematics of Machine Learning.

It's the best book you'll ever read: tivadardanka.com/books/mathemat…

Nobody explains complex ideas like he does.
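To make the "matrices are graphs" idea concrete, here is a tiny illustrative sketch (mine, not from the book): read a square matrix as a weighted directed graph, where entry A[i, j] is the weight of the edge i → j, and matrix powers suddenly have a graph meaning:

```python
import numpy as np

# A square matrix read as a weighted directed graph:
# A[i, j] != 0 means there is an edge i -> j with weight A[i, j].
A = np.array([
    [0, 2, 0],
    [0, 0, 3],
    [5, 0, 0],
])

edges = [(i, j, int(A[i, j]))
         for i in range(A.shape[0])
         for j in range(A.shape[1])
         if A[i, j] != 0]
print(edges)  # [(0, 1, 2), (1, 2, 3), (2, 0, 5)]

# The graph view pays off immediately: (A @ A)[i, j] sums, over every
# length-2 walk i -> k -> j, the product of the two edge weights.
print(A @ A)  # rows: [0 0 6], [15 0 0], [0 10 0]
```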
Jul 12, 2024 10 tweets 4 min read
A common fallacy:

If it's raining, the sidewalk is wet. But if the sidewalk is wet, is it raining?

Reversing the implication is called "affirming the consequent." We usually fall for this.

But surprisingly, it's not entirely wrong!

Let's explain it using Bayes Theorem:

1/10

This explanation is courtesy of @TivadarDanka. He allowed me to republish it.

He is writing a book about the mathematics of Machine Learning. It's the best book I've read: tivadardanka.com/books/mathemat…

Nobody explains complex ideas like he does.

2/10
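The punchline in one formula: seeing a wet sidewalk does not prove rain, but by Bayes' theorem it raises the probability of rain. A worked example with made-up numbers (not the ones from the thread):

```latex
P(\text{rain}\mid\text{wet})
  = \frac{P(\text{wet}\mid\text{rain})\,P(\text{rain})}{P(\text{wet})}
  = \frac{P(\text{wet}\mid\text{rain})\,P(\text{rain})}
         {P(\text{wet}\mid\text{rain})\,P(\text{rain})
          + P(\text{wet}\mid\lnot\text{rain})\,P(\lnot\text{rain})}

% Illustrative numbers: rain always wets the sidewalk, it rains on 10% of days,
% and sprinklers etc. wet the sidewalk on 20% of dry days.
P(\text{wet}\mid\text{rain}) = 1,\quad
P(\text{rain}) = 0.1,\quad
P(\text{wet}\mid\lnot\text{rain}) = 0.2
\;\Longrightarrow\;
P(\text{rain}\mid\text{wet}) = \frac{0.1}{0.1 + 0.18} \approx 0.36
```

So "the sidewalk is wet, therefore it rained" is invalid as logic, but the observation still moves the probability of rain from 10% to roughly 36%.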
Jun 12, 2024 6 tweets 2 min read
Some of the skills you need to start building AI applications:

• Python and SQL
• Transformer and diffusion models
• LLMs and fine-tuning
• Retrieval Augmented Generation
• Vector databases

Here is one of the most comprehensive programs you'll find online:

"Generative AI for Software Developers" is a 4-month online course.

It's a 5 to 10-hour weekly commitment, but you can dedicate as much time as you want to finish early.

Here is the link to the program:

I also have a PDF with the syllabus: bit.ly/4aNOJdy
Jun 10, 2024 15 tweets 5 min read
There's a stunning, simple explanation behind matrix multiplication.

This is the first time it clicked in my brain, and it will be the best thing you read all week.

Here is a breakdown of the most crucial idea behind modern machine learning:

1/15

This explanation is courtesy of @TivadarDanka. He allowed me to republish it.

3 years ago, he started writing a book about the mathematics of Machine Learning.

It's the best book you'll ever read: tivadardanka.com/books/mathemat…

Nobody explains complex ideas like he does.

2/15
May 28, 2024 4 tweets 1 min read
This assistant has 169 lines of code:

• Gemini Flash
• OpenAI Whisper
• OpenAI TTS API
• OpenCV

GPT-4o is slower than Flash, more expensive, chatty, and very stubborn (it doesn't like to stick to my prompts).

Next week, I'll post a step-by-step video on how to build this.

The first request takes longer (warming up), but things work faster from that point.

A few opportunities to improve this:

1. Stream answers from the model (instead of waiting for the full answer).

2. Add the ability to interrupt the assistant.

3. Run Whisper on a GPU.
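If you want to sketch the same pipeline before the video is out, the core loop (minus the OpenCV camera part) looks roughly like this. It is a rough sketch, not the actual 169-line implementation; model names, voices, and file paths are placeholders:

```python
import os

import whisper                       # pip install openai-whisper (needs ffmpeg)
import google.generativeai as genai  # pip install google-generativeai
from openai import OpenAI            # pip install openai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
llm = genai.GenerativeModel("gemini-1.5-flash")  # Gemini Flash answers the questions
stt = whisper.load_model("base")                 # Whisper transcribes locally (CPU here)
tts_client = OpenAI()                            # used only for the TTS API

def respond(audio_path: str) -> str:
    # 1) Speech to text with Whisper
    question = stt.transcribe(audio_path)["text"]
    # 2) Ask Gemini Flash for an answer
    answer = llm.generate_content(question).text
    # 3) Text to speech with OpenAI's TTS API
    speech = tts_client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
    speech.stream_to_file("reply.mp3")
    return answer

print(respond("question.wav"))  # placeholder audio file
```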
May 25, 2024 4 tweets 2 min read
I'm so sorry for anyone who bought the rabbit r1.

It's not just that the product is non-functional (as we learned from all the reviews); the real problem is that the whole thing seems to be a lie.

None of what they pitched exists or functions the way they said.

They sold the world on a Large Action Model (LAM), an intelligent AI model that would understand applications and execute the actions requested by the user.

In reality, they are using Playwright, a web automation tool.

No AI. Just dumb, click-around, hard-coded scripts.
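For context, "hard-coded scripts" means something like the sketch below: Playwright drives a real browser through a fixed sequence of clicks, with no model deciding anything. The site and selectors are made up purely to illustrate the pattern:

```python
from playwright.sync_api import sync_playwright  # pip install playwright && playwright install

# A pre-scripted sequence of actions; nothing here adapts or reasons.
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/login")   # made-up URL
    page.fill("#email", "user@example.com")  # hard-coded selectors
    page.fill("#password", "hunter2")
    page.click("button[type=submit]")
    page.click("text=Order again")           # replay a canned action
    browser.close()
```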
Mar 31, 2024 10 tweets 4 min read
What a week, huh?

1. Mojo 🔥 went open-source
2. Claude 3 beats GPT-4
3. $100B supercomputer from MSFT and OpenAI
4. Andrew Ng and Harrison Chase discussed AI Agents
5. Karpathy talked about the future of AI
...

And more.

Here is everything that will keep you up at night:

Mojo 🔥, the programming language that turns Python into a beast, went open-source.

This is a huge step and great news for the Python and AI communities!

With Mojo 🔥 you can write Python code or scale all the way down to metal code. It's fast!

modular.com/blog/the-next-…
Mar 13, 2024 14 tweets 4 min read
The batch size is one of the most important parameters when training neural networks.

Here is everything you need to know about the batch size:

1 of 14

I trained two neural networks.

Same architecture, loss, optimizer, learning rate, momentum, epochs, and training data. Almost everything is the same.

Here is a plot of their losses.

Can you guess what the only difference is?

2 of 14
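For reference, the batch size is just the batch_size argument of your data loader; everything else in the two runs stays identical. A PyTorch sketch with placeholder data, not the exact setup behind the plot:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset: 1,024 samples, 10 features, binary labels
data = TensorDataset(torch.randn(1024, 10), torch.randint(0, 2, (1024,)))

# The only difference between the two runs: how many samples per gradient step
small_batches = DataLoader(data, batch_size=8, shuffle=True)    # noisy, frequent updates
large_batches = DataLoader(data, batch_size=512, shuffle=True)  # smoother, infrequent updates

print(len(small_batches), "steps per epoch vs", len(large_batches))  # 128 vs 2
```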
Jan 5, 2024 5 tweets 2 min read
I had an amazing machine learning professor.

The first thing I learned from him was how to interpret learning curves. (Probably one of the best skills I built and refined over the years.)

Let me show you 4 pictures and you'll see how this process flows:

1/5

I trained a neural network. A simple one.

I plotted the model's training loss. As you can see, it's too high.

This network is underfitting. It's not learning.

I need to make the model larger.

2/5
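The habit itself is simple: log the loss per epoch and plot it before changing anything. A sketch with made-up numbers that reproduce the "high and flat" shape of an underfitting model:

```python
import matplotlib.pyplot as plt

# Made-up training losses: high and barely improving, the classic underfitting shape
epochs = list(range(1, 21))
train_loss = [2.30, 2.25, 2.21, 2.18, 2.16, 2.15, 2.14, 2.13, 2.13, 2.12,
              2.12, 2.11, 2.11, 2.11, 2.10, 2.10, 2.10, 2.10, 2.10, 2.10]

plt.plot(epochs, train_loss, label="training loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.title("High, flat training loss: the model is underfitting")
plt.legend()
plt.show()
```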