I build & teach AI stuff. Founder @TakeoffAI where we’re building an AI coding tutor. Come learn to code + build with AI at https://t.co/oJ8PNoAutE.
Aug 7 • 5 tweets • 2 min read
My honest GPT-5 review:
- It is a *phenomenal* everyday chat model
- I will default to it for all normal chats
- API pricing is incredible, major points here
But code?
I will still be using Claude Code + Opus.
And now a long list of bullet points.
- Reminder: I do not do, and have never done, paid promos, and on days like today I’m glad I made that decision a long time ago.
- The API pricing is unbelievable. Like seriously. My favorite thing from today. Hat tip to the team.
- I really love the personality they landed on for GPT-5. It’s like if o3 were slightly more friendly.
- Not sycophantic. I personally could probably have it be even more disagreeable, but I think they landed on the right slider setting for mass market.
- The fewer-hallucinations thing is real. I feel like I can actually notice the behavior difference there. Enjoyed this more than I thought.
- It’s very generally smart. Tyler Cowen’s review probably hit this the best. Can talk about niche things without feeling like it’s BSing you.
- Latency is good.
- I totally get the why of it, but I absolutely hate the model router thing. Hope we can override it.
- A solid improvement on chat, worthy of the GPT-5 name. Feels like they definitely overhyped it though. I personally don’t mind the vagueposting; I actually think it activates the community in a fun way, but it was a touch overboard imo.
For code:
- Claude Code with Opus is still king and frankly it’s not close.
- I’m wildly suspicious of people who claim otherwise, which leads me to…
Today please remember…
- You would be *shocked* how many people posting strong opinions and elaborate pieces today don’t actually use the models that much, especially for code. Many people you respect on here are complete airheads when it comes to actually using the models + tools. Seriously.
- SF in many ways is a political game. Keep that in mind today as you read certain opinions. I live away from it for a reason.
Jul 23 • 4 tweets • 2 min read
To 10x AI coding agents you need to *obsess* over context engineering above all else.
Great Context = Great Plan = Great Result
AI models are geniuses who start from scratch every time.
So onboard them by going overboard on context.
Use this prompt as a starting point.
It is the single highest leverage thing you can do to improve perf of coding agents.
Go even further by building them tools to effectively search and build context - more on this later.
I use the below prompt as a /onboard custom command in Claude Code when starting new tasks.
-
# Onboard
You are given the following context:
$ARGUMENTS
## Instructions
"AI models are geniuses who start from scratch on every task." - Noam Brown
Your job is to "onboard" yourself to the current task.
Do this by:
- Using ultrathink
- Exploring the codebase
- Asking me questions if needed
The goal is to get you fully prepared to start working on the task.
Take as long as you need to get yourself ready. Overdoing it is better than underdoing it.
Record everything in a .claude/tasks/[TASK_ID]/onboarding.md file. This file will be used to onboard you to the task in a new session if needed, so make sure it's comprehensive.
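For reference, here is one way to wire the prompt above up as a project-level slash command. Claude Code picks up custom commands from markdown files in `.claude/commands/`, with the filename becoming the command name (a sketch; verify the path against your installed version's docs):

```shell
# Save the onboarding prompt as a /onboard command for this project.
# The quoted 'EOF' keeps $ARGUMENTS literal so Claude Code can substitute it.
mkdir -p .claude/commands
cat > .claude/commands/onboard.md <<'EOF'
# Onboard
You are given the following context:
$ARGUMENTS
## Instructions
Onboard yourself to the current task: explore the codebase, ask questions
if needed, and record everything in .claude/tasks/[TASK_ID]/onboarding.md.
EOF
```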
Mar 9 • 13 tweets • 4 min read
Watch for a 14min demo of me using Manus for the 1st time.
It’s *shockingly* good.
Now imagine this in 2-3 years when:
- it has >180 IQ
- never stops working
- is 10x faster
- and runs in swarms by the 1000s
AGI is coming - expect rapid progress.
Manus is way better than everything on the market, but it’s also not going to automate you away rn.
I hope the US labs respond with a great wave of releases!
It’s good that things like this are starting to drop - the avg person needs to be aware.
Thinking of this more & more:
Feb 9 • 8 tweets • 2 min read
I know *nothing* about ads.
Used OpenAI Deep Research all week to help me start using Google Ads.
The campaign it helped me create is driving ~$600/day in <5 days on a *very* small starter budget.
Every day I give new data to o1 pro, iterate, and numbers go up.
It’s crazy.
I literally have a $200/mo AI growth engineer on my team now.
And it’s *actually* good.
LLMs have been useful for code for a while.
But these models are now starting to be able to do other types of legitimately useful economically viable work.
It’s incredible.
Feb 5 • 4 tweets • 1 min read
My friend wanted to learn to code.
Bought him 1mo of ChatGPT Pro and sent a GitHub link to my starter repo.
Didn’t hear about his progress - figured he quit.
Turns out he just asks o1 pro endless questions and now his AI invoice app is at $3k MRR.
AI + coding in 2025 is *very* real.
He works in sales and sells to friends and connections in-network.
This is honestly super replicable if you’re high-agency.
Spend 1-2hrs every night using o1 pro to build a tool in your industry that you *know* people would buy.
With AI the skill acquisition happens VERY fast.
Jan 19 • 5 tweets • 2 min read
Since January 1st I’ve cancelled 7 subscriptions to B2B SaaS products.
With AI it’s taken me ~6hrs (1 night!) to replace 100% of the value I was getting from *all* of them.
This will save me $7,500+ in 2025.
The SaaS model is breaking.
How?
- only need a subset of each product’s features
- cancelled those 7 bc the feature(s) I need are easily cloneable by AI
- o1 pro was able to 1-shot 2 of them entirely from a starting template
- other 5 were mostly a few iterations of Cursor composer
- don’t need good design
Dec 20, 2024 • 7 tweets • 3 min read
We now live in a different world.
Acceleration is imminent.
You *will* need to adjust your worldview.
This is what the early days of the singularity look like.
And you are living through them.
There is absolutely no situation in which you will outcompete someone who is using o3 and you are not.
This clearly seems like the model that will begin to actually spark a real AGI debate.
Based on the numbers they’re showing today?
Not sure I’d argue against it.
Dec 15, 2024 • 4 tweets • 2 min read
I asked o1 pro to implement 6 things I had on my todo list for a project today.
- It thought for 5m 25s.
- Modified 14 files.
- 64,852 input tokens.
- 14,740 output tokens.
Got it 100% correct - saved me 2 hours.
Absolute powerhouse.
Including my o1 workflow video + GitHub link to my xml parser for the ai code cowboys out there who want to come explore the wild west.
Here’s how to use OpenAI’s new o1 pro model to maximize coding productivity.
I’ve used this workflow for the last 48hrs and I estimate it has 2x’d my output.
Watch the full 19min tutorial.
Prompt below.
Actual workflow demo at start.
17:00ish for tool stack.
Here’s the full o1 XML prompt:
—
You are an expert software engineer.
You are tasked with following my instructions.
Use the included project instructions as a general guide.
You will respond with 2 sections: a summary section and an XML section.
Here are some notes on how you should respond in the summary section:
- Provide a brief overall summary
- Provide a 1-sentence summary for each file changed and why.
- Provide a 1-sentence summary for each file deleted and why.
- Format this section as markdown.
Here are some notes on how you should respond in the XML section:
- Respond with the XML and nothing else
- Include all of the changed files
- Specify each file operation with CREATE, UPDATE, or DELETE
- If it is a CREATE or UPDATE include the full file code. Do not get lazy.
- Each file should include a brief change summary.
- Include the full file path
- I am going to copy/paste that entire XML section into a parser to automatically apply the changes you made, so put the XML block inside a markdown codeblock.
- Make sure to enclose the code with <![CDATA[__CODE HERE__]]>
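For context, the parser side of this workflow might look something like the following. This is a minimal sketch, not the author's actual parser; the `<file path="…" operation="…">` element shape (and the `parseFileOps` name) is my assumption about what the XML section contains:

```typescript
// Sketch of a parser for an XML change format like the one described above.
// The <file> element shape (path/operation attributes, CDATA body) is
// assumed, not taken from the author's actual parser.
type FileOp = {
  path: string;
  operation: "CREATE" | "UPDATE" | "DELETE";
  code: string;
};

function parseFileOps(xml: string): FileOp[] {
  const ops: FileOp[] = [];
  // Match <file path="..." operation="..."> with an optional CDATA body
  // (DELETE operations carry no code).
  const re =
    /<file\s+path="([^"]+)"\s+operation="(CREATE|UPDATE|DELETE)">\s*(?:<!\[CDATA\[([\s\S]*?)\]\]>)?\s*<\/file>/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(xml)) !== null) {
    ops.push({
      path: m[1],
      operation: m[2] as FileOp["operation"],
      code: m[3] ?? "",
    });
  }
  return ops;
}
```

Applying the ops is then just a loop over each `path`: write or overwrite the file for CREATE/UPDATE, delete it for DELETE.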
My #1 takeaway so far after using OpenAI’s new o1 model…
We’re about to have the ChatGPT moment for agentic coding systems.
o1’s ability to think, plan, and execute is off the charts.
The wave of products that will be built with this will be unlike anything we’ve ever seen.
Expect the Cursor Composers, Replit Agents, Devins, etc of the world to take a massive leap.
Will take a little bit of time bc standard prompting techniques aren’t that effective, so we need to learn the system.
But expect many more tools like the above for various professions.
Aug 29, 2024 • 5 tweets • 5 min read
In Cursor I’m able to generate a fully functional backend with a single prompt.
A working database in <2min.
Composer is pure magic.
Full prompt below.
PUT THIS PROMPT IN A `` FILE:
--
# Backend Setup Instructions
Use this guide to setup the backend for this project.
It uses Supabase, Drizzle ORM, and Server Actions.
Write the complete code for every step. Do not get lazy. Write everything that is needed.
Your goal is to completely finish the backend setup.
## Helpful Links
If the user gets stuck, refer them to the following links:
```ts
export type InsertExample = typeof exampleTable.$inferInsert;
export type SelectExample = typeof exampleTable.$inferSelect;
```
- [ ] Export the example table in the `/schema/index.ts` file like so:
```ts
export * from "./example-schema";
```
- [ ] Create a new file called `example-queries.ts` in the `/queries` folder with the following code:
```ts
"use server";
import { eq } from "drizzle-orm";
import { db } from "../db";
import { InsertExample, SelectExample } from "../schema/example-schema";
import { exampleTable } from "../schema/example-schema";
```
We’re at the point with AI codegen where Cursor + Claude 3.5 Sonnet is a legit technical cofounder.
The ceiling on complexity that it can handle will continue to go up over time, and this will happen quite quickly.
We are still early, and it’s already this good.
Learn how to communicate clearly and manage context effectively.
Do not let others tell you what you can’t build.
Aug 7, 2024 • 4 tweets • 1 min read
Here’s a 17min deep dive on advanced prompting techniques for LLMs.
Fully demonstrated on a real-world, multi-step AI workflow.
Watch for a complete breakdown.
The video covers:
- prompt chaining
- chain-of-thought with <scratchpad> tags
- xml tags
- system vs. user messages
- output parsing
- prefilling
- information hierarchy
- role prompting
- goal prompting
- recursive llm calls
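As a concrete taste of two of these, here is what chain-of-thought with `<scratchpad>` tags plus output parsing can look like on the receiving end. The function name and exact tag handling are my own sketch, not taken from the video:

```typescript
// Sketch of "output parsing": separate the model's <scratchpad> reasoning
// (which you prompted it to produce) from its final answer.
function splitScratchpad(output: string): {
  scratchpad: string;
  answer: string;
} {
  const match = output.match(/<scratchpad>([\s\S]*?)<\/scratchpad>/);
  return {
    scratchpad: match ? match[1].trim() : "",
    answer: output.replace(/<scratchpad>[\s\S]*?<\/scratchpad>/, "").trim(),
  };
}
```

The same pattern generalizes to any XML tag you instruct the model to emit, which is why XML tags pair so well with downstream parsing.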
Lots of good stuff in there!
Mar 13, 2024 • 5 tweets • 1 min read
I’m blown away by Devin.
Watch me use it for 27min.
It’s insane.
The era of AI agents has begun.
Devin feels like the ChatGPT moment for AI agents.
Exceptional work from the Cognition team.
It’s going to be fun to experiment and figure out where it’s most useful in its current state.
This is the worst it’ll ever be - the future is bright!
Jun 8, 2023 • 4 tweets • 2 min read
ChatGPT just killed Siri.
You can now:
- use ChatGPT with Siri
- start new chats
- continue old chats
- sync chats to ChatGPT app
I built “Let’s Chat” so everyone can take advantage of this and have a more powerful AI voice assistant!
AI is bringing in a *massive* new wave of people who are learning to code.
Why?
They want to run & build AI programs!
One of the interesting developments around this is that GitHub is becoming a sort of AI App Store.
And git clone is now the download button for AI apps.
I’ve seen a lot of pathetic gatekeepy behavior from programmer vets towards our new friends.
“Oh noooo auto-gpt has more stars than PyTorch now what are we gonna dooooo.”
How about encourage them?
More people are discovering the magical world of software - welcome them! :)
Apr 16, 2023 • 4 tweets • 1 min read
AI music is here.
This is the 1st example of AI generated music that *really* wowed me.
This guy ghostwriter977 on TikTok made a Drake x The Weeknd track that’s actually kind of insane?
You’ll soon be able to make unlimited music by your favorite artists on demand with AI.
If you told me this was a leak from an old mixtape I would’ve 100% believed you.
Imagine where this is in a year…
Obviously there are a ton of major copyright questions and whatnot, but you can’t deny that this is going to become a huge thing *really* quickly.