Morgan Profile picture
born and raised in the bay now living in tahoe, cofounder/cto @boldmetrics, early @ sonos, engineering @ carnegie mellon, not an expert - always learning.
Mar 26 7 tweets 3 min read
I’ve found myself explaining LLMs to more and more friends and family.

One component I’ve been covering a lot lately is model weights.

If you aren’t totally clear on what these are, here’s a simple(ish) overview - no calculus knowledge required.

A “what the heck are model weights” 🧵 First - what is a weight.

I like analogies so what I usually tell people is - imagine you’re trying to predict the chance that you are going to get to the airport on time.

We’ve all been in this situation.

There’s some key inputs you’d use to understand you’ll make it to the airport on time - things like how much traffic there is, how early you left your house, distance from the airport.

Now, instead of asking - what’s the most likely to happen, you assign a strength or influence to each.

So traffic, yeah that sucks, and it can really influence if you make it on time, same with leaving early. Distance from the airport might be more of a medium influence, etc.

Now for the magical mathematical part where I’ll leave the math out and keep it high level.

You essentially multiply each input by how important they are and get a final score.

The importance values you just calculated, yup - that’s the weight.

Boom - you now understand in super simple terms, what model weights are.

But this is a thread, so now let’s go deeper.
Mar 6 11 tweets 4 min read
Every day since @perplexity_ai released Computer, I have built something new.

It has been an awesome experience, and as I've said many times, I'm not one-shotting something, showing a screenshot, and calling it done.

The first prompt, and initial build, is the start, not the end, and definitely not a finished product.

So, now that I've got a bunch of projects I'm refining the code on, I thought I'd share how I'm refining them, and some prompts and workflows you can use to go beyond the first shot.

I'm going to use this fun little stock portfolio analyzer someone suggested I build as an example.

A Perplexity Computer code optimization thread 🧵Image The first thing I do after the initial build is evaluate the codebase, and you don't have to leave Perplexity Computer, you can do this right in there with a prompt like this. Image
Feb 26 18 tweets 7 min read
Whoa, it did it. @perplexity_ai Computer just one-shotted a ful-stack fund in a box.

Over 4,500 lines of code, and it works.

The goal was to build a system that could credibly run a small fund's core workflow with 1-2 humans vs. the current model which is 10 analysts on terminals.

I came up with the idea by asking what could I build with computer that would be more valuable than a $30,000/year Bloomberg terminal.

Here's a screenshot of the fully working web app.

More details below, in what I think my might be the world's first Perplexity Computer Thread 🧵Image First, here's the idea I worked on with Perplexity. Image
Feb 16 12 tweets 4 min read
I’ve had a lot of people ask me about running models locally lately.

So here’s essentially what I keep sending to all my friends, and thought why not share with all of you.

And you honestly don’t need to know anything about how LLMs work under-the-hood to follow this.

A running LLMs locally thread 🧵 First things first. I’m heavily biased towards Macs, and you should be too.

Most software engineering today is done on a Mac, and all the cool new stuff comes out for Mac first, like the Codex Desktop app.

When it comes to running LLMs locally, Apple Silicon changed everything.

The unified memory architecture means the CPU and GPU share the same memory pool. For LLMs, that’s gold.

Models need big contiguous memory. On a Mac with 64–128GB unified memory, you can run models that would choke on many consumer GPUs.

If you’re choosing hardware, a Mac Studio with 64GB+ unified memory opens far more doors than a base Mac mini. Once you hit 128GB unified memory, you’re in serious territory. That’s when 70B-parameter class models become playable with quantization.
Jan 25 7 tweets 3 min read
ClawdBot is amazing - it absolutely deserves all the attention it’s getting.

But, a lot of people are going to get hacked.

And that’s because way too many people are diving in without thinking about security.

Security researchers have already shown how prompt injection can be used to delete ALL of your email 😳

So there’s a few things you should know about ClawdBot security before you let it lose - it won’t take too long, but yes, it’s worth the time.

A ClawdBot security 🧵Image First, there’s a Sandbox Mode - enable it. Image
Jul 2, 2025 11 tweets 3 min read
I've been using @diabrowser for a few weeks now and it's become one of those rare products I become completely fanatical about.

My timeline lately has been wild, so many people having the same experience. I thought I'd share my top ten favs in a 🧵 Image
Jun 24, 2025 10 tweets 4 min read
After using @diabrowser for a couple of weeks I can confirm, this isn't just another browser, it's a paradigm shift in how we use the Internet.

I don't think I can go back to using a traditional browser ever again.

Here's a good example of how I use Dia, daily. A 🧵 Image While there's a lot of awesome stuff in Dia, the key feature I use daily, constantly, is the ability to chat with tabs.

I've said it before and I'll say it again, LLMs are a new foundational software layer. This means you shouldn't have to go from a traditional app to an LLM, it means there's a new generation of apps with LLMs built in.

For too long people have been calling these wrappers. They aren't wrappers - this is the next generation of software, with LLMs inside.
Apr 30, 2025 8 tweets 3 min read
I moved our engineering org from @Jira to @linear and it's honestly one of the best tool changes I've ever made.

Here's a few reasons why I love Linear so much, and no, they aren't paying me to write this or giving me any kind of discount, this is just pure customer fanaticism.

A Linear love session thread 🧵 First, let's talk speed. Linear is fast, like blazing fast, which makes it much easier during Zoom meetings to jump around from ticket to ticket and team to team.

JIRA, as we all know, is a beast, it's slow and clunky and honestly, if I were to put it into relative terms, I would say Linear feels 5x - 6x faster, it's like butter 🧈