May 19, 2025 • 4 tweets • 3 min read
I added a Knowledge Graph to Cursor using MCP.
You gotta see this working!
Knowledge graphs are a game-changer for AI Agents, and this is one example of how you can take advantage of them.
How this works:
1. Cursor connects to Graphiti's MCP Server. Graphiti is a very popular open-source Knowledge Graph library for AI agents.
2. Graphiti connects to Neo4j running locally.
Now, every time I interact with Cursor, the information is synthesized and stored in the knowledge graph. In short, Cursor now "remembers" everything about our project.
Huge!
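Under the hood, the MCP server is wrapping Graphiti's Python library. Here's a minimal sketch of the kind of calls it makes against Neo4j, based on graphiti_core's quickstart; treat the exact signatures as assumptions and check the repo for the current API:

```python
import asyncio
from datetime import datetime, timezone

from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType


async def main():
    # Connect Graphiti to the Neo4j instance running locally.
    graphiti = Graphiti("bolt://localhost:7687", "neo4j", "password")
    await graphiti.build_indices_and_constraints()  # one-time setup

    # Store one interaction ("episode"). Graphiti extracts entities and
    # relationships from the text and writes them to the graph.
    await graphiti.add_episode(
        name="cursor-session-1",
        episode_body="We decided to use FastAPI for the backend service.",
        source=EpisodeType.text,
        source_description="Cursor chat",
        reference_time=datetime.now(timezone.utc),
    )

    # Later, recall what the graph "remembers" about the project.
    results = await graphiti.search("Which framework did we pick for the backend?")
    for edge in results:
        print(edge.fact)


asyncio.run(main())
```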
Here is the video I recorded.
To get this working on your computer, follow the instructions on this link:
Something super cool about using Graphiti's MCP server:
You can use one model to develop the requirements and a completely different model to implement the code. This is a huge plus because you can pick the strongest model for each stage.
Also, Graphiti supports custom entities, which you can use when running the MCP server.
You can use these custom entities to structure and recall domain-specific information, which can dramatically improve the accuracy of your results.
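A custom entity is just a Pydantic model describing the domain-specific fields you want extracted. A hedged sketch, assuming graphiti_core's entity_types parameter (the Requirement entity here is a made-up example; the MCP server has its own way of registering these, so check the docs):

```python
from pydantic import BaseModel, Field


class Requirement(BaseModel):
    """A product requirement discussed during a coding session."""

    project_name: str = Field(..., description="Project the requirement belongs to")
    description: str = Field(..., description="What the requirement states")


# Passed alongside an episode so extraction is structured instead of
# free-form (assumed signature; verify against Graphiti's docs):
# await graphiti.add_episode(..., entity_types={"Requirement": Requirement})
```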
GPT-4o is slower than Flash, more expensive, chatty, and very stubborn (it doesn't like to stick to my prompts).
Next week, I'll post a step-by-step video on how to build this.
The first request takes longer (warming up), but things work faster from that point.
A few opportunities to improve this:
1. Stream answers from the model instead of waiting for the full response (see the sketch below).
2. Add the ability to interrupt the assistant.
3. Run Whisper on a GPU.
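The first one is mostly a flag on the API call. A minimal sketch with the OpenAI Python client (the model name is just an example):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask for a streamed response and print tokens as they arrive,
# instead of blocking until the full answer is ready.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain knowledge graphs in two sentences."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```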
May 25, 2024 • 4 tweets • 2 min read
I’m so sorry for anyone who bought the rabbit r1.
It’s not just that the product is non-functional (as we learned from all the reviews); the real problem is that the whole thing seems to be a lie.
None of what they pitched exists or works the way they said.
They sold the world on a Large Action Model (LAM), an intelligent AI model that would understand applications and execute the actions requested by the user.
In reality, they are using Playwright, a web automation tool.
No AI. Just dumb, click-around, hard-coded scripts.
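To be concrete about what that means, here's the shape of a hard-coded Playwright script. The site and selectors are made up, not rabbit's actual code, but this is the category of automation:

```python
from playwright.sync_api import sync_playwright

# A hard-coded "action": no model in the loop, just a scripted
# sequence of clicks against a fixed page layout.
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example-music-service.com")  # illustrative URL
    page.fill("#search", "lo-fi beats")             # breaks if the selector changes
    page.click("button#search-submit")
    page.click("text=Play")
    browser.close()
```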
Mar 31, 2024 • 10 tweets • 4 min read
What a week, huh?
1. Mojo 🔥 went open-source
2. Claude 3 beats GPT-4
3. $100B supercomputer from MSFT and OpenAI
4. Andrew Ng and Harrison Chase discussed AI Agents
5. Karpathy talked about the future of AI
...
And more.
Here is everything that will keep you up at night:
Mojo 🔥, the programming language that turns Python into a beast, went open-source.
This is a huge step and great news for the Python and AI communities!
With Mojo 🔥, you can write Python code or scale all the way down to the metal. It's fast!
Here are 40+ free lessons and practical projects on building advanced RAG applications for production:
This is one of the most comprehensive courses you'll find. It covers all of LangChain and LlamaIndex.
And it's 100% FREE!
@activeloopai, @towards_AI, and @intel Disruptor collaborated with @llama_index to develop it.
The best real-life Machine Learning program out there:
"I have seen hundreds of courses; this is the best material and depth of knowledge I've seen."
That's what a professional Software Engineer finishing my program said during class. This is the real deal.
I teach a hard-core live class. It's the best program to learn about building production Machine Learning systems.
But it's not a $9.99 online course. It's not about videos or a bunch of tutorials you can read.
This program is different.
It's 14 hours of live sessions where you interact with me, like in any other classroom. It's tough, with 30 quizzes and 30 coding assignments.
Online courses can't compete with that.
I'll teach you pragmatic Machine Learning for Engineers. This is the type of knowledge every company wants to have.
The program's next iteration (Cohort #8) starts on November 6th. The following (Cohort #9) on December 4th.
It will be different from any other class you've ever taken. It will be tough. It will be fun. It's the closest thing to sitting in a classroom.
And for the first time, the next iteration includes an additional 9 hours of pre-recorded materials to help you as much as possible!
You'll learn about Machine Learning in the real world. You'll learn to train, tune, evaluate, register, deploy, and monitor models. You'll learn how to build a system that continually learns and how to test it in production.
You'll get unlimited access to me and the entire community. I'll help you through the course, answer your questions, and help with your code.
You get lifetime access to all past and future sessions. You get access to every course I've created for free. You get access to recordings, job offers, and many people doing the job you want to do.
No monthly payments. Ever.
The link to join is in the attached image and in the following tweet.
The link to join the program: ml.school
The cost to join is $385.
November and December are the last two iterations remaining at that price. The cost will go up starting in January 2024.
Today, there are around 800 professionals in the community.
Oct 2, 2023 • 8 tweets • 3 min read
AI is changing how we build software.
A few weeks ago, I talked about using AI for code reviews. Many dismissed the idea, saying AI can't help beyond trivial suggestions.
They were wrong.
Here are a few examples of what you can do with @CodiumAI's open-source pull request agent:
Here, the agent generated the description of a pull request.
It looks at every commit and file involved and summarizes what's happening automatically.
You can do this by using the "/describe" command.
Sep 21, 2023 • 5 tweets • 2 min read
There is considerable risk in building with Large Language Models.
Prompt lock-in is a big issue, and I'm afraid many people will find out about it the hard way.
There's no cross-compatibility for many of your prompts. If you change your model, your prompts will stop working.
Here are two examples:
First, an application where an LLM generates marketing copy for a site. Here, you expect open-ended responses. A prompt like that will work across different models with little or no modification. Use cases like this have high prompt portability.
Second, an LLM that interprets and classifies a customer request. This use case requires terse and structured responses. These prompts are model-dependent and have low portability.
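To make the contrast concrete, here is what the two styles look like side by side (both prompts are made up for illustration):

```python
# High portability: open-ended generation. Most models handle this
# prompt without any changes.
marketing_prompt = (
    "Write a short, friendly headline for a site that sells handmade candles."
)

# Low portability: terse, strictly structured output. Getting every model
# to comply usually takes model-specific wording, examples, or stop rules.
classification_prompt = """Classify the customer request into exactly one label.
Labels: BILLING, SHIPPING, RETURNS, OTHER.
Respond with a JSON object: {"label": "<LABEL>"} and nothing else.

Request: "My package arrived broken and I want my money back."
"""
```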
Here is what makes matters worse:
The more complex the responses, the more time you need to spend writing prompts, and the less portable those prompts are. In other words, the more you invest, the more you lock your implementation into one specific model.
What's the solution?
First, be careful how much you invest in writing prompts for a model that could stop working any day. Having to migrate to a different model will come at a steep cost.
Second, it's too early to understand how these models will evolve. Don't outsource too much to a Large Language Model. The more you do, the more significant the risk.
If you are using an LLM as part of a product, how are you protecting against this?
The biggest issue is not whether the model has the capacity to answer a prompt.
The problem is the variability of that answer. For example, this is an issue when you require a strictly formatted response.
You can solve a problem using GPT-3.5, GPT-4, and Llama 2. But in many cases, you'll need different prompts for each of these models.
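One way to protect against that variability: validate every structured response and retry (or fail loudly) when a model drifts from the format. A minimal sketch, where call_model is a stand-in for whatever client you use:

```python
import json

VALID_LABELS = {"BILLING", "SHIPPING", "RETURNS", "OTHER"}


def call_model(prompt: str) -> str:
    """Stand-in for your actual LLM client call."""
    raise NotImplementedError


def classify(prompt: str, retries: int = 2) -> str:
    # Re-ask when the response doesn't match the strict format, so a
    # model swap shows up as retries and errors, not silent breakage.
    for _ in range(retries + 1):
        raw = call_model(prompt)
        try:
            label = json.loads(raw)["label"]
        except (json.JSONDecodeError, KeyError, TypeError):
            continue
        if label in VALID_LABELS:
            return label
    raise ValueError("Model did not return the required format")
```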