Mckay Wrigley
I build & teach AI stuff. Founder @TakeoffAI where we’re building an AI coding tutor. Come learn to code + build with AI at https://t.co/oJ8PNoAutE.
Dec 6 4 tweets 7 min read
Here are my Opus 4.5 thoughts after ~2 weeks of use.

First some general thoughts, then some practical stuff.

--- THE BIG PICTURE ---

THE UNLOCK FOR AGENTS

It's clear to anyone who's used Opus 4.5 that AI progress isn't slowing down.

I'm surprised more people aren't treating this as a major moment. I suspect its release right before Thanksgiving, combined with everyone being at NeurIPS this week, has delayed the discourse on it by 2 weeks. But this is the best model for both code and agents, and it's not close.

The analogy has been made that this is another 3.5 Sonnet moment, and I agree. But what does that mean?

Every few generations we get a major model unlock - a moment that unlocks a new way of working. GPT-4 was the unlock for chat, Sonnet 3.5 was the unlock for code, and now Opus 4.5 is the unlock for agents. Thanks to Opus 4.5, agents can now work reliably on increasingly longer time horizons and get real-world work done on your behalf.

Opus 4.5 is like a Waymo. You tell it "take me from A to B", and it takes you there. After a few of these experiences your brain realizes "oh. ok. we live in this world now". And then you're hooked.

From that moment on, you'll never work the same way again.

THE YEAR OF AGENTS

2025 has been touted as the year of agents, and Opus 4.5 + Claude Agent SDK is the pairing that makes that phrase true.

The Claude Agent SDK is the best open secret in AI right now. An agent's harness matters almost as much as its model. If you have a bad harness, then you may as well have a bad model. With the SDK you get a world-class agentic harness out-of-the-box which you can now pair with Opus 4.5 to build real-world agents that actually work.

I'm reminded of Alan Kay's quote "People who are really serious about software should make their own hardware". The agent version of this is "people who are serious about models should make their own harness". Anthropic clearly believes this, and it's working. The pairing of these tools is magic.

I would describe myself as being "unhobblings-pilled", and the Claude Agent SDK + Opus 4.5 is the next major unhobbling. There's now another OOM of new latent economic value stuck in this combo, and it's the job of builders to get it out.

If you were bearish on agents, now is the time to turn bullish.

"ALL OF THIS IS REAL"

"You know what's crazy? That all of this is real". This was Ilya's opening line about the state of AI in his Dwarkesh interview, and I echo that sentiment. I can't believe that Opus 4.5 is real.

There have been several times as Opus 4.5's been working where I've quite literally leaned back in my chair and given an audible laugh over how wild it is that we live in a world where it exists and where agents are this good.

Nat Friedman has this great question on his website: "Where do you get your dopamine?"

Increasingly, I get mine from Claude.

LONG ANTHROPIC

I saw a post yesterday where someone said that Opus 4.5 was the most important thing to happen to them in their professional career. This will be true for more people going forward.

Every year for the past 3 years, Anthropic has grown revenue by 10x. $1M to $100M in 2023, $100M to $1B in 2024, and $1B to $10B in 2025. In Dario's recent DealBook interview he expressed that he wasn't sure if that 10x pattern would hold for 2026.

While he's probably right, I do expect Anthropic's revenue at the end of next year to be much higher than everyone expects. It wouldn't surprise me if they passed OpenAI in valuation by early 2027.

Opus 4.5 is too good of a model, Claude Agent SDK is too good of a harness, and their focus on the enterprise is too obviously correct.

Claude Opus 4.5 is a winner.

And Anthropic will keep winning.

--- REVIEW AND RECOMMENDATIONS ---

Now for some more practical stuff. The following are a few things I love about Opus 4.5 and that I've found to be useful.

If you want to hear from more people, I found this post to be a solid summary of Opus 4.5. It aggregates a lot of great anecdotes about the model. You'll find that it's universally heralded as an absolute gem.

GENERAL

- The best mental model for Opus 4.5 is to think of it as a coworker. A true collaborator that you can trust to get things done. Lean into trusting it more than you think you should. Doing this will train your mind to adapt to the future of work, and it will pay off both in the short-term and the long-term.

- Trust the model. Give it more complex tasks. Let it work for longer. Look over its shoulder less. If you're not occasionally dialing it back, then you're not trusting it enough.

- Just ramble to it. If you're still not using voice as input, you're working in the stone age. Opus 4.5 can easily turn a 5min voice braindump into a completed task, just as you'd expect a great teammate to.

- Opus 4.5 is more efficient than Sonnet 4.5.

- Opus 4.5's image input capabilities are significantly improved. Play around with it. Screenshot-to-code in particular is now on a whole new level.

- Use Opus 4.5 with your Obsidian vault. I have a YouTube video on this here. It's a bit outdated, and I'm working on a new one, but you'll get the idea.

- Play around with Opus 4.5 + computer use. It's not production-ready yet, but even in its current toy-like state it's enough to get the gears turning in your head. I expect 2026 to be a big year for computer use, and it's worth getting a head start here. This is clearly the next major step for agents.

- If you want to get adventurous, try working with agent swarms. A useful starting point is to have a chatroom.md file that a team of agents can use to communicate and collaborate in. If you really want to get crazy with swarms, then you'll find hooks in the Claude Agent SDK to be essential.
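To make the chatroom.md idea concrete, here's a minimal sketch. The file format (a `**agent**: message` bullet per post) and the helper names are my own assumptions, not an official convention:

```typescript
// Minimal sketch of a shared chatroom.md that a team of agents reads and
// appends to. Format and helper names are illustrative assumptions.
import { appendFileSync, readFileSync, writeFileSync } from "node:fs";

const CHATROOM = "chatroom.md";

// Start (or reset) the shared chatroom.
function initChatroom(): void {
  writeFileSync(CHATROOM, "# Agent Chatroom\n\n");
}

// Each agent appends its message; a shared append-only file keeps
// coordination dead simple.
function post(agent: string, message: string): void {
  appendFileSync(CHATROOM, `- **${agent}**: ${message}\n`);
}

// Agents poll the file to catch up on the conversation so far.
function readChat(): string {
  return readFileSync(CHATROOM, "utf8");
}

initChatroom();
post("planner", "Split the task into backend and frontend work.");
post("backend-agent", "Claimed backend. Starting on the schema.");
console.log(readChat());
```

Each agent just gets told "post your status and read others' posts in chatroom.md" as part of its instructions; the file itself is the whole coordination layer.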
Aug 7 5 tweets 2 min read
My honest GPT-5 review:

- It is a *phenomenal* everyday chat model
- I will default to it for all normal chats
- API pricing is incredible, major points here

But code?

I will still be using Claude Code + Opus. And now a long list of bullet points.

- Reminder: I do not do, and have never done, paid promos, and on days like today I’m glad I made that decision a long time ago.
- The API pricing is unbelievable. Like seriously. Favorite thing from me today. Tip my hat to the team.
- I really love the personality they landed on for GPT-5. It’s like if O3 was slightly more friendly.
- Not sycophantic. I personally could probably have it be even more disagreeable, but I think they landed on the right slider setting for mass market.
- The reduced-hallucinations thing is real. I feel like I can actually notice the behavior difference there. Enjoyed this more than I thought I would.
- It’s very generally smart. Tyler Cowen’s review probably hit this the best. Can talk about niche things without feeling like it’s BSing you.
- Latency is good.
- I totally get the why of it, but I absolutely hate the model router thing. Hope we can override it.
- Solid improvement on chat that's worthy of the GPT-5 name. Feels like they definitely overhyped it though. I personally don't mind the vagueposting - I actually think it activates the community in a fun way - but this was a touch overboard imo.

For code:
- Claude Code with Opus is still king and frankly it’s not close.
- I’m wildly suspicious of people who claim otherwise, which leads me to…

Today please remember…
- You would be *shocked* how many people posting strong opinions and elaborate pieces today don’t actually use the models that much, especially for code. Many people you respect on here are complete airheads on actually using the models + tools. Seriously.
- SF in many ways is a political game. Keep that in mind today as you read certain opinions. I live away from it for a reason.
Jul 23 4 tweets 2 min read
To 10x AI coding agents you need to *obsess* over context engineering above all else.

Great Context = Great Plan = Great Result

AI models are geniuses who start from scratch every time.

So onboard them by going overboard on context.

Use this prompt as a starting point. It is the single highest-leverage thing you can do to improve the performance of coding agents.

Go even further by building them tools to effectively search and build context - more on this later.

I use the below prompt as a /onboard custom command in Claude Code when starting new tasks.

-

# Onboard

You are given the following context:
$ARGUMENTS

## Instructions

"AI models are geniuses who start from scratch on every task." - Noam Brown

Your job is to "onboard" yourself to the current task.

Do this by:

- Using ultrathink
- Exploring the codebase
- Asking me questions if needed

The goal is to get you fully prepared to start working on the task.

Take as long as you need to get yourself ready. Overdoing it is better than underdoing it.

Record everything in a .claude/tasks/[TASK_ID]/onboarding.md file. This file will be used to onboard you to the task in a new session if needed, so make sure it's comprehensive.
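For reference, Claude Code picks up custom slash commands from markdown files in `.claude/commands/`, so saving the prompt above there makes it available as /onboard, with `$ARGUMENTS` replaced by whatever you type after the command:

```
.claude/
  commands/
    onboard.md    <- the prompt above; the filename becomes the /onboard command
```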
Mar 9 13 tweets 4 min read
Watch for a 14min demo of me using Manus for the 1st time.

It’s *shockingly* good.

Now imagine this in 2-3 years when:
- it has >180 IQ
- never stops working
- is 10x faster
- and runs in swarms by the 1000s

AGI is coming - expect rapid progress.

Manus is way better than everything on the market, but it's also not going to automate you away rn.

I hope the US labs respond with a great wave of releases!

It’s good that things like this are starting to drop - the avg person needs to be aware.

Thinking of this more & more.
Feb 9 8 tweets 2 min read
I know *nothing* about ads.

Used OpenAI Deep Research all week to help me start using Google Ads.

The campaign it helped me create is driving ~$600/day in <5 days on a *very* small starter budget.

Every day I give new data to o1 pro, iterate, and numbers go up.

It’s crazy. I literally have a $200/mo AI growth engineer on my team now.

And it’s *actually* good.

LLMs have been useful for code for a while.

But these models are now starting to be able to do other types of legitimately useful economically viable work.

It’s incredible.
Feb 5 4 tweets 1 min read
My friend wanted to learn to code.

Bought him 1mo of ChatGPT Pro and sent a GitHub link to my starter repo.

Didn’t hear on progress - figured he quit.

Turns out he just asks o1 pro endless questions and now his AI invoice app is at $3k MRR.

AI + coding in 2025 is *very* real.

He works in sales and sells to friends and connections in-network.

This is honestly super replicable if you’re high-agency.

Spend 1-2hrs every night using o1 pro to build a tool in your industry that you *know* people would buy.

With AI the skill acquisition happens VERY fast.
Jan 19 5 tweets 2 min read
Since January 1st I’ve cancelled 7 subscriptions to B2B SaaS products.

With AI it’s taken me ~6hrs (1 night!) to replace 100% of the value I was getting from *all* of them.

This will save me $7,500+ in 2025.

The SaaS model is breaking. How?

- only need a subset of each product’s features
- cancelled those 7 bc the feature(s) I need are easily cloneable by AI
- o1 pro was able to 1-shot 2 of them entirely from a starting template
- other 5 were mostly a few iterations of Cursor composer
- don’t need good design
Dec 20, 2024 7 tweets 3 min read
We now live in a different world.

Acceleration is imminent.

You *will* need to adjust your worldview.

This is what the early days of the singularity look like.

And you are living through them.
There is absolutely no situation in which you will outcompete someone who is using o3 and you are not.

This clearly seems like the model that will begin to actually spark a real AGI debate.

Based on the numbers they’re showing today?

Not sure I’d argue against it.
Dec 15, 2024 4 tweets 2 min read
I asked o1 pro to implement 6 things I had on my todo list for a project today.

- It thought for 5m 25s.
- Modified 14 files.
- 64,852 input tokens.
- 14,740 output tokens.

Got it 100% correct - saved me 2 hours.

Absolute powerhouse.
Including my o1 workflow video + GitHub link to my xml parser for the ai code cowboys out there who want to come explore the wild west.

It’s nice over here.

GitHub: github.com/mckaywrigley/o…

Workflow Video: x.com/mckaywrigley/s…
Dec 8, 2024 5 tweets 2 min read
Here’s how to use OpenAI’s new o1 pro model to maximize coding productivity.

I’ve used this workflow for the last 48hrs and I estimate it has 2x’d my output.

Watch the full 19min tutorial.

Prompt below. Actual workflow demo at start.

17:00ish for tool stack.

Here’s the full o1 XML prompt:



You are an expert software engineer.

You are tasked with following my instructions.

Use the included project instructions as a general guide.

You will respond with 2 sections: a summary section and an XML section.

Here are some notes on how you should respond in the summary section:

- Provide a brief overall summary
- Provide a 1-sentence summary for each file changed and why.
- Provide a 1-sentence summary for each file deleted and why.
- Format this section as markdown.

Here are some notes on how you should respond in the XML section:

- Respond with the XML and nothing else
- Include all of the changed files
- Specify each file operation with CREATE, UPDATE, or DELETE
- If it is a CREATE or UPDATE include the full file code. Do not get lazy.
- Each file should include a brief change summary.
- Include the full file path
- I am going to copy/paste that entire XML section into a parser to automatically apply the changes you made, so put the XML block inside a markdown codeblock.
- Make sure to enclose the code with <![CDATA[__CODE HERE__]]>

Here is how you should structure the XML:

<code_changes>
  <changed_files>
    <file>
      <file_summary>**BRIEF CHANGE SUMMARY HERE**</file_summary>
      <file_operation>**FILE OPERATION HERE**</file_operation>
      <file_path>**FILE PATH HERE**</file_path>
      <file_code><![CDATA[
__FULL FILE CODE HERE__
]]></file_code>
    </file>
    **REMAINING FILES HERE**
  </changed_files>
</code_changes>

So the XML section will be:

```xml
__XML HERE__
```
[[PUT CURSOR RULES HERE]]

[[PUT YOUR INSTRUCTIONS HERE]]
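To show what the apply-changes side of this workflow can look like, here's a hedged sketch of a parser. The tag names (`<file>`, `<file_operation>`, `<file_path>`, `<file_code>`) are assumptions for illustration - match them to whatever tags your actual prompt emits:

```typescript
// Hypothetical sketch of parsing the model's XML response. Tag names are
// assumptions; adjust to your prompt. CDATA keeps the file contents from
// colliding with the surrounding XML.
interface FileChange {
  operation: string; // CREATE | UPDATE | DELETE
  path: string;
  code: string;
}

function parseChanges(xml: string): FileChange[] {
  const changes: FileChange[] = [];
  for (const m of xml.matchAll(/<file>([\s\S]*?)<\/file>/g)) {
    const body = m[1];
    const tag = (name: string): string =>
      body.match(new RegExp(`<${name}>([\\s\\S]*?)</${name}>`))?.[1] ?? "";
    // Strip the CDATA wrapper the prompt asks the model to emit.
    const code = tag("file_code")
      .replace(/^\s*<!\[CDATA\[/, "")
      .replace(/\]\]>\s*$/, "");
    changes.push({
      operation: tag("file_operation").trim(),
      path: tag("file_path").trim(),
      code
    });
  }
  return changes;
}

// Tiny usage example with a fabricated model response:
const sample = `
<code_changes><changed_files><file>
<file_operation>CREATE</file_operation>
<file_path>src/hello.ts</file_path>
<file_code><![CDATA[console.log("hi");]]></file_code>
</file></changed_files></code_changes>`;

console.log(parseChanges(sample));
// From here you'd switch on operation and create/update/delete files accordingly.
```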
Sep 26, 2024 4 tweets 1 min read
ChatGPT’s Advanced Voice mode is the most magical product I’ve ever used.

Total game changer.

And imagine when OpenAI makes it available via API…

Add basic retrieval + function calling to voice and you’ll have an on-demand virtual assistant for anything.

The future is here.

I basically had this moment with Advanced Voice yesterday.

It’s literal magic.

(OpenAI let me pay you for more use!!)
Sep 12, 2024 6 tweets 1 min read
My #1 takeaway so far after using OpenAI’s new o1 model…

We’re about to have the ChatGPT moment for agentic coding systems.

o1’s ability to think, plan, and execute is off the charts.

The wave of products that will be built with this will be unlike anything we've ever seen.

Expect the Cursor Composers, Replit Agents, Devins, etc. of the world to take a massive leap.

Will take a little bit of time bc standard prompting techniques aren’t that effective so we need to learn the system.

But expect many more tools like the above for various professions.
Aug 29, 2024 5 tweets 5 min read
In Cursor I’m able to generate a fully functional backend with a single prompt.

A working database in <2min.

Composer is pure magic.

Full prompt below. PUT THIS PROMPT IN A `setup-backend.md` FILE:

--

# Backend Setup Instructions

Use this guide to setup the backend for this project.

It uses Supabase, Drizzle ORM, and Server Actions.

Write the complete code for every step. Do not get lazy. Write everything that is needed.

Your goal is to completely finish the backend setup.

## Helpful Links

If the user gets stuck, refer them to the following links:

- [Supabase Docs]()
- [Drizzle Docs]()
- [Drizzle with Supabase Quickstart]()

## Install Libraries

Make sure the user knows to install the following libraries:

```bash
npm i drizzle-orm dotenv postgres
npm i -D drizzle-kit
```

## Setup Steps

- [ ] Create a `/db` folder in the root of the project

- [ ] Create a `/types` folder in the root of the project

- [ ] Add a `drizzle.config.ts` file to the root of the project with the following code:

```ts
import { config } from "dotenv";
import { defineConfig } from "drizzle-kit";

config({ path: ".env.local" });

export default defineConfig({
  schema: "./db/schema/index.ts",
  out: "./db/migrations",
  dialect: "postgresql",
  dbCredentials: {
    url: process.env.DATABASE_URL!
  }
});
```

- [ ] Add a file called `db.ts` to the `/db` folder with the following code:

```ts
import { config } from "dotenv";
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";
import { exampleTable } from "./schema";

config({ path: ".env.local" });

const schema = {
  exampleTable
};

const client = postgres(process.env.DATABASE_URL!);

export const db = drizzle(client, { schema });
```

- [ ] Create 2 folders in the `/db` folder:

- `/schema`
- Add a file called `index.ts` to the `/schema` folder
- `/queries`

- [ ] Create an example table in the `/schema` folder called `example-schema.ts` with the following code:

```ts
import { integer, pgTable, text, timestamp, uuid } from "drizzle-orm/pg-core";

export const exampleTable = pgTable("example", {
  id: uuid("id").defaultRandom().primaryKey(),
  name: text("name").notNull(),
  age: integer("age").notNull(),
  email: text("email").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
  updatedAt: timestamp("updated_at")
    .notNull()
    .defaultNow()
    .$onUpdate(() => new Date())
});

export type InsertExample = typeof exampleTable.$inferInsert;
export type SelectExample = typeof exampleTable.$inferSelect;
```

- [ ] Export the example table in the `/schema/index.ts` file like so:

```ts
export * from "./example-schema";
```

- [ ] Create a new file called `example-queries.ts` in the `/queries` folder with the following code:

```ts
"use server";

import { eq } from "drizzle-orm";
import { db } from "../db";
import { exampleTable, InsertExample, SelectExample } from "../schema/example-schema";

export const createExample = async (data: InsertExample) => {
  try {
    const [newExample] = await db.insert(exampleTable).values(data).returning();
    return newExample;
  } catch (error) {
    console.error("Error creating example:", error);
    throw new Error("Failed to create example");
  }
};

export const getExampleById = async (id: string) => {
  try {
    const example = await db.query.exampleTable.findFirst({
      where: eq(exampleTable.id, id)
    });
    if (!example) {
      throw new Error("Example not found");
    }
    return example;
  } catch (error) {
    console.error("Error getting example by ID:", error);
    throw new Error("Failed to get example");
  }
};

export const getAllExamples = async (): Promise<SelectExample[]> => {
  return db.query.exampleTable.findMany();
};

export const updateExample = async (id: string, data: Partial<InsertExample>) => {
  try {
    const [updatedExample] = await db.update(exampleTable).set(data).where(eq(exampleTable.id, id)).returning();
    return updatedExample;
  } catch (error) {
    console.error("Error updating example:", error);
    throw new Error("Failed to update example");
  }
};

export const deleteExample = async (id: string) => {
  try {
    await db.delete(exampleTable).where(eq(exampleTable.id, id));
  } catch (error) {
    console.error("Error deleting example:", error);
    throw new Error("Failed to delete example");
  }
};
```

- [ ] In `package.json`, add the following scripts:

```json
"scripts": {
  "db:generate": "npx drizzle-kit generate",
  "db:migrate": "npx drizzle-kit migrate"
}
```

- [ ] Run the following command to generate the tables:

```bash
npm run db:generate
```

- [ ] Run the following command to migrate the tables:

```bash
npm run db:migrate
```

- [ ] Create a folder called `/actions` in the root of the project for server actions

- [ ] Create a folder called `/types` in the root of the project for shared types (if it doesn't already exist)

- [ ] Create a file called `action-types.ts` in the `/types` folder for server action types with the following code:

```ts
export type ActionState = {
  status: "success" | "error";
  message: string;
  data?: any;
};
```

- [ ] Create a file called `/types/index.ts` and export all the types from the `/types` folder like so:

```ts
export * from "./action-types";
```

- [ ] Create a file called `example-actions.ts` in the `/actions` folder for the example table's actions:

```ts
"use server";

import { createExample, deleteExample, getAllExamples, getExampleById, updateExample } from "@/db/queries/example-queries";
import { InsertExample } from "@/db/schema/example-schema";
import { ActionState } from "@/types";
import { revalidatePath } from "next/cache";

export async function createExampleAction(data: InsertExample): Promise<ActionState> {
  try {
    const newExample = await createExample(data);
    revalidatePath("/examples");
    return { status: "success", message: "Example created successfully", data: newExample };
  } catch (error) {
    return { status: "error", message: "Failed to create example" };
  }
}

export async function getExampleByIdAction(id: string): Promise<ActionState> {
  try {
    const example = await getExampleById(id);
    return { status: "success", message: "Example retrieved successfully", data: example };
  } catch (error) {
    return { status: "error", message: "Failed to get example" };
  }
}

export async function getAllExamplesAction(): Promise<ActionState> {
  try {
    const examples = await getAllExamples();
    return { status: "success", message: "Examples retrieved successfully", data: examples };
  } catch (error) {
    return { status: "error", message: "Failed to get examples" };
  }
}

export async function updateExampleAction(id: string, data: Partial<InsertExample>): Promise<ActionState> {
  try {
    const updatedExample = await updateExample(id, data);
    revalidatePath("/examples");
    return { status: "success", message: "Example updated successfully", data: updatedExample };
  } catch (error) {
    return { status: "error", message: "Failed to update example" };
  }
}

export async function deleteExampleAction(id: string): Promise<ActionState> {
  try {
    await deleteExample(id);
    revalidatePath("/examples");
    return { status: "success", message: "Example deleted successfully" };
  } catch (error) {
    return { status: "error", message: "Failed to delete example" };
  }
}
```
- [ ] Implement the server actions in the `/app/page.tsx` file to allow for manual testing.

- [ ] The backend is now set up.
Aug 24, 2024 20 tweets 5 min read
So people *really* want to learn Cursor.

Already 1,056 people learning in my Cursor course.

And my “Building a pro full-stack app with AI” course launches this week to pair with it.

Come learn to build with AI.

25% launch discount - link below.

Beyond excited for what we have in store.

Expect announcements soon.

JoinTakeoff.com/courses/cursor
Aug 22, 2024 4 tweets 1 min read
We’re at the point with AI codegen where Cursor + Claude 3.5 Sonnet is a legit technical cofounder. The ceiling on complexity that it can handle will continue to go up over time, and this will happen quite quickly.

We are still early, and it’s already this good.

Learn how to communicate clearly and manage context effectively.

Do not let others tell you what you can’t build.
Aug 7, 2024 4 tweets 1 min read
Here’s a 17min deep dive on advanced prompting techniques for LLMs.

Fully demonstrated on a real-world, multi-step AI workflow.

Watch for a complete breakdown. The video covers:

- prompt chaining
- chain-of-thought with <scratchpad> tags
- xml tags
- system vs. user messages
- output parsing
- prefilling
- information hierarchy
- role prompting
- goal prompting
- recursive llm calls

Lots of good stuff in there!
Mar 13, 2024 5 tweets 1 min read
I’m blown away by Devin.

Watch me use it for 27min.

It’s insane.

The era of AI agents has begun.

Devin feels like the ChatGPT moment for AI agents.

Exceptional work from the Cognition team.

It’s going to be fun to experiment and figure out where it’s most useful in its current state.

This is the worst it’ll ever be - the future is bright!
Jun 8, 2023 4 tweets 2 min read
ChatGPT just killed Siri.

You can now:
- use ChatGPT with Siri
- start new chats
- continue old chats
- sync chats to ChatGPT app

I built “Let’s Chat” so everyone can take advantage of this and have a more powerful AI voice assistant!

Install: icloud.com/shortcuts/8c4c…

How to adjust starting prompt and switch to using GPT-4.
May 11, 2023 4 tweets 2 min read
LangChain 101: Models is live!

Come learn the basics of using models in @LangChainAI - great for beginners!

This is part 1 of @TakeoffAI’s 100% free 6 part LangChain 101 course.

Click the link to start the lesson as a project in @Replit.

Take Course: replit.com/@MckayWrigley/…

The 101 track will be relatively basic.

I’m a strong believer in both understanding fundamentals and in helping onboard beginners!

But I plan on doing 201, 301, and 401 next to build a series of stepping stones to become a pro AI developer.

Part 2 (prompts) drops tomorrow.
Apr 24, 2023 4 tweets 1 min read
Can you imagine if Drake made his own AI music app where anyone could use his voice to create new songs?

It would become the #1 app overnight.

Millions of paying users in hours.

It’s all anyone would post about.

There are *wildly* interesting opportunities for artists in AI. Grimes gets it.
Apr 16, 2023 4 tweets 1 min read
AI is bringing in a *massive* new wave of people who are learning to code.

Why?

They want to run & build AI programs!

One of the interesting developments around this is that GitHub is becoming a sort of AI App Store.

And git clone is now the download button for AI apps.

I've seen a lot of pathetic gatekeepy behavior from programmer vets towards our new friends.

“Oh noooo auto-gpt has more stars than PyTorch now what are we gonna dooooo.”

How about encourage them?

More people are discovering the magical world of software - welcome them! :)