Whoa, it did it. @perplexity_ai Computer just one-shotted a full-stack fund in a box.
Over 4,500 lines of code, and it works.
The goal was to build a system that could credibly run a small fund's core workflow with 1–2 humans, versus the current model of 10 analysts on terminals.
I came up with the idea by asking: what could I build with Computer that would be more valuable than a $30,000/year Bloomberg terminal?
Here's a screenshot of the fully working web app.
More details below, in what I think might be the world's first Perplexity Computer Thread 🧵
First, here's the idea I worked on with Perplexity.
I then had it build me a prompt, and it built a monster. I'll share it in a few segments, because to one-shot this, I needed a serious prompt.
Prompt Part 1:
You are an autonomous engineering, product, and research team building Thesium(.)finance, an AI‑native fund operating system where agents maintain live theses on every name and theme, and humans supervise a workstation called Thesium Desk.
Your goal is to design and implement an MVP of Thesium(.)finance that can credibly run a small fund’s core workflow end‑to‑end (research → risk → execution), with 1–2 humans supervising instead of a floor of analysts.
Prompt Part 2:
1. Goal
Build Thesium(.)finance as an end‑to‑end “fund OS” that:
Continuously ingests and normalizes multi‑source market and alternative data.
Uses specialized agents (macro, factor, microstructure, alternative data) to maintain live Thesium thesis objects on tickers and themes.
Produces auditable, backtestable, position‑sized trade plans rather than raw signals.
Integrates with retail‑friendly brokers (e.g., Robinhood, Interactive Brokers) via paper / sandbox first, under explicit risk guardrails.
Logs every decision into an investment‑committee style memo and compliance trail automatically.
Deliver a working system plus documentation that I can deploy and operate with minimal additional coding.
Prompt Part 3:
2. Constraints and assumptions
Target user: serious retail investors and small funds (AUM low‑ to mid‑7 figures) who want institutional‑grade process without institutional headcount.
Jurisdiction assumption: U.S. fund / investor context; design with typical U.S. regulatory expectations (disclaimers, logs, separation of research vs. execution) in mind, but do not implement full legal workflows.
MVP first: prioritize correctness, auditability, and UX over ultra‑low latency microstructure; execution can be batched or near‑real‑time, not HFT.
Tech stack preferences:
Backend: TypeScript or Python, with a clean modular architecture that can evolve into a multi‑agent system.
Data: Postgres for state, plus an append‑only event log (e.g., separate table) for decisions and orders.
Frontend: React or Next.js dashboard that feels like a modern Thesium Desk, with views for theses, portfolios, open risk, and logs.
Assume you’re running in a hosted cloud sandbox (e.g., Computer’s environment) with standard browser and file‑system tools.
Prompt Part 4:
3. Core workflows to implement
Design and implement these workflows end‑to‑end for Thesium(.)finance:
3.1 Data ingestion & normalization
Connect to at least one free or low‑friction market data source (e.g., polygon(.)io demo, Yahoo Finance, Alpha Vantage) plus one simple alt‑data proxy (e.g., news sentiment or ETF holdings).
Normalize into a unified schema: instruments, prices, fundamentals, events, news.
Implement scheduled ingestion (cron or task runner) and simple backfill.
3.2 Agentic research loop
Implement modular Thesium research agents for: macro, factor, microstructure‑lite, and alternative data.
Each agent should:
Read normalized data plus configuration for its mandate.
Maintain a live Thesium thesis object per instrument or theme (thesis text, conviction score, horizon, key drivers).
Emit proposed actions (e.g., “increase position in X by Y% with stop at Z”), annotated with the thesis that supports the action.
3.3 Risk and portfolio engine
Implement portfolio state (positions, cash, P&L) and constraints (max exposure per name, sector, factor, and an overall VaR‑style or volatility‑based limit if feasible).
Build a risk engine that:
Evaluates proposed actions against constraints.
Performs simple scenario or factor sensitivity checks using available data.
Approves, scales down, or rejects proposed trades, annotating the decision and linking back to the originating Thesium thesis objects and agents.
3.4 Execution & broker integration (paper first)
Implement a paper‑trading execution engine that:
Translates approved orders into simulated fills with a simple, configurable slippage model.
Updates portfolio state and logs full execution details.
Abstract broker connectivity via a BrokerAdapter interface so that adding real brokers later (Robinhood, IBKR) is a matter of plugging in an adapter.
3.5 IC memo & compliance logging
On each decision cycle, automatically generate a “Thesium IC Memo – {date}” artifact that:
Summarizes current macro view, key factor tilts, notable Thesium thesis objects per major name, and proposed changes.
Links each trade to the agents, data, risk rules, and thesis objects involved.
Store IC memos and full event logs in the database plus human‑readable files (e.g., Markdown or PDF export).
3.6 User interface – Thesium Desk
Build a single‑page Thesium Desk UI with:
Brand: top‑left wordmark “Thesium Desk” and tagline “Live theses, auditable trades.”
Sections:
Today – P&L, risk snapshot, upcoming events.
Theses – table of names/themes with current thesis text, conviction, horizon, and last change timestamp.
Orders & Executions – open orders, recent fills, and links back to IC memos.
IC Memos – list and detail view for each Thesium IC Memo, with filters (date range, symbol, agent).
Prompt Part 5:
4. System design and architecture
Before coding, produce a concise system design package for Thesium(.)finance:
Context diagram of components: data ingestion, Thesium research agents, risk engine, execution engine, broker adapter, Thesium Desk UI, and database.
Description of the multi‑agent pattern you’ll use (e.g., orchestrated vs. decentralized agent mesh) and why it fits this MVP.
Data model: key tables and how events (theses, orders, fills, memos) are persisted, including an append‑only event log for auditability.
Extensibility notes: how to add new agents, new data sources, and new broker adapters without major refactors.
Prompt Part 6:
5. Deliverables
At the end of the run, produce:
Source code for backend, frontend (Thesium Desk), and any infra scripts (e.g., Dockerfiles, simple deployment config).
Schema and migration scripts for the database.
A “Thesium Runbook” in Markdown that covers:
How to set up API keys and environment variables.
How to run ingestion, the research loop, and Thesium Desk locally or in a simple cloud environment.
How to switch between paper trading and a (mocked) real broker adapter.
A Thesium product doc (2–4 pages) that explains:
The user persona and primary jobs‑to‑be‑done.
The end‑to‑end workflow (from data ingestion to Thesium IC Memo and execution).
Key design decisions and how Thesium could evolve into a more sophisticated multi‑agent, low‑latency architecture later.
Prompt Part 7:
6. Quality, safety, and guardrails
All order generation must respect explicit risk limits; never bypass the risk engine.
Default mode must be paper trading; any path toward real execution must require an explicit configuration flag and human confirmation in Thesium Desk.
Include clear disclaimers in the UI and docs that Thesium(.)finance does not provide investment advice; it is a research and execution automation tool operated under user control.
Use this spec to design, implement, and document the MVP of Thesium(.)finance and Thesium Desk in a single, coherent build.
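To make the spec's BrokerAdapter idea concrete, here's a minimal Python sketch of what that abstraction and the configurable slippage model from section 3.4 could look like. The class and method names are my own guesses for illustration, not taken from the generated code:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Fill:
    symbol: str
    qty: int          # positive = buy, negative = sell
    price: float      # fill price after slippage

class BrokerAdapter(ABC):
    """Swap in a real broker (Robinhood, IBKR) later by implementing this interface."""
    @abstractmethod
    def submit_order(self, symbol: str, qty: int, ref_price: float) -> Fill: ...

class PaperBroker(BrokerAdapter):
    """Simulated fills with a simple, configurable slippage model."""
    def __init__(self, slippage_bps: float = 5.0):
        self.slippage_bps = slippage_bps

    def submit_order(self, symbol: str, qty: int, ref_price: float) -> Fill:
        # Buys fill slightly above the reference price, sells slightly below.
        direction = 1 if qty > 0 else -1
        price = ref_price * (1 + direction * self.slippage_bps / 10_000)
        return Fill(symbol, qty, round(price, 4))
```

The point of the interface is exactly what the prompt asks for: the risk and execution engines only ever talk to a BrokerAdapter, so going from paper to a live broker is plugging in a new adapter, not a refactor.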
And then, after building, it tested away.
And shared some key files like the Runbook.
Product Doc.
And Architecture.
Over 4,500 lines of code.
Here's details on the backend.
And the frontend.
And finally, some key design decisions.
Totally insane. Perplexity may have just leapfrogged everyone with this one.
And no, I don't work for Perplexity, and I'm not an investor.
Just an engineer, playing around with it at 7:45pm, and I dunno, I just want to keep going, might not sleep.
I’ve found myself explaining LLMs to more and more friends and family.
One component I’ve been covering a lot lately is model weights.
If you aren’t totally clear on what these are, here’s a simple(ish) overview - no calculus knowledge required.
A “what the heck are model weights” 🧵
First - what is a weight?
I like analogies so what I usually tell people is - imagine you’re trying to predict the chance that you are going to get to the airport on time.
We’ve all been in this situation.
There’s some key inputs you’d use to understand you’ll make it to the airport on time - things like how much traffic there is, how early you left your house, distance from the airport.
Now, instead of just asking what's most likely to happen, you assign a strength, or influence, to each of those inputs.
So traffic, yeah that sucks, and it can really influence if you make it on time, same with leaving early. Distance from the airport might be more of a medium influence, etc.
Now for the magical mathematical part where I’ll leave the math out and keep it high level.
You essentially multiply each input by how important it is, then add everything up to get a final score.
Those importance values you just assigned - yup, those are the weights.
Boom - you now understand in super simple terms, what model weights are.
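The airport analogy above fits in a few lines of Python. All the numbers here are invented for illustration - the point is the shape of the calculation, not the values:

```python
import math

# Inputs, scaled 0-1: how bad traffic is, how early you left, distance to the airport
inputs  = {"traffic": 0.8, "left_early": 0.9, "distance": 0.4}

# Weights: how strongly each input pushes the outcome (negative = hurts your chances)
weights = {"traffic": -2.0, "left_early": 3.0, "distance": -1.0}

# Multiply each input by its weight and sum them up - that's the score
score = sum(inputs[k] * weights[k] for k in inputs)

# Squash the score into a probability between 0 and 1 (a sigmoid)
prob_on_time = 1 / (1 + math.exp(-score))
```

Real models do this same multiply-and-sum across billions of weights, layered many times, but the core operation is this.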
But this is a thread, so now let’s go deeper.
In LLMs, the weights are numbers, and those numbers are stored in - bingo, you guessed it, a file.
These numbers are stored as floats, just think - numbers with decimals and the ability to have more numbers after the decimal, like 18.23 vs just 18.
Now here’s the kinda wild part that blows some people’s minds.
The file, with all these numbers - that’s the model.
Load weights into memory, send in tokens, get outputs.
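You can see the "model is just a file of floats" idea for yourself with the standard library. Real formats like GGUF or safetensors add metadata and structure on top, but at the bottom it's this - a sketch, with made-up weight values:

```python
import struct

weights = [18.23, -0.5072, 3.1415]  # made-up weights

# Save: pack each weight as 4 bytes (float32), like many model file formats do
with open("weights.bin", "wb") as f:
    f.write(struct.pack(f"{len(weights)}f", *weights))

# Load: read the bytes back into floats - this is "loading the weights into memory"
with open("weights.bin", "rb") as f:
    raw = f.read()
loaded = struct.unpack(f"{len(raw) // 4}f", raw)
```

Note the round-trip isn't exact: float32 only carries about 7 significant digits, which is also why quantization (storing weights in even fewer bits) is such a big lever for model size.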
Every day since @perplexity_ai released Computer, I have built something new.
It has been an awesome experience, and as I've said many times, I'm not one-shotting something, showing a screenshot, and calling it done.
The first prompt, and initial build, is the start, not the end, and definitely not a finished product.
So, now that I've got a bunch of projects I'm refining the code on, I thought I'd share how I'm refining them, and some prompts and workflows you can use to go beyond the first shot.
I'm going to use this fun little stock portfolio analyzer someone suggested I build as an example.
A Perplexity Computer code optimization thread 🧵
The first thing I do after the initial build is evaluate the codebase. You don't have to leave Perplexity Computer; you can do it right in there with a prompt like this.
And you'll get back some really nice detailed analysis, broken down into sections like I specified in the prompt, i.e. what's good, needs work, and my favorite - glaringly wrong.
I’ve had a lot of people ask me about running models locally lately.
So here’s essentially what I keep sending to all my friends, and thought why not share with all of you.
And you honestly don’t need to know anything about how LLMs work under-the-hood to follow this.
A running LLMs locally thread 🧵
First things first. I’m heavily biased towards Macs, and you should be too.
Most software engineering today is done on a Mac, and all the cool new stuff comes out for Mac first, like the Codex Desktop app.
When it comes to running LLMs locally, Apple Silicon changed everything.
The unified memory architecture means the CPU and GPU share the same memory pool. For LLMs, that’s gold.
Models need big contiguous memory. On a Mac with 64–128GB unified memory, you can run models that would choke on many consumer GPUs.
If you’re choosing hardware, a Mac Studio with 64GB+ unified memory opens far more doors than a base Mac mini. Once you hit 128GB unified memory, you’re in serious territory. That’s when 70B-parameter class models become playable with quantization.
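The back-of-envelope math behind that 128GB claim: a 70B-parameter model at 4-bit quantization needs roughly half a byte per weight, before you add the KV cache and whatever the OS is using. A quick sanity check (numbers are approximate):

```python
params = 70e9          # 70B parameters
bits_per_weight = 4    # typical 4-bit quantization (e.g., Q4 builds in llama.cpp)

# Total weight storage: parameters * bits, converted to gigabytes
model_gb = params * bits_per_weight / 8 / 1e9
print(f"~{model_gb:.0f} GB just for the weights")  # ~35 GB
```

So ~35GB for the weights alone - workable on 64GB, comfortable on 128GB with room left for context and everything else.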
Now the software stack. There are lots of options, but I just recommend easy mode to all my friends.
And the easiest entry point is Ollama.
It’s basically “Docker for LLMs.” You install it, then:
ollama run llama3
And suddenly you have a local model chatting with you.
It handles:
– Model downloads
– Quantized builds
– Metal acceleration
– Simple REST API
It uses llama.cpp under the hood, which is highly optimized for Apple’s Metal GPU framework.
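That REST API really is simple. Here's a sketch of calling a locally running Ollama server from Python using only the standard library - this assumes Ollama's default port (11434) and that you've already pulled the model you name:

```python
import json
import urllib.request

def build_payload(prompt: str, model: str = "llama3") -> dict:
    """The JSON body Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str, model: str = "llama3",
               host: str = "http://localhost:11434") -> str:
    """Send a prompt to a local Ollama server and return the response text."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` set to False you get one JSON object back with the full answer in its `response` field; leave streaming on and you get newline-delimited chunks instead.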
This is the smoothest path for:
– Llama models
– Mistral
– Mixtral
– Code models
– Small 7B–13B experimentation
After using @diabrowser for a couple of weeks I can confirm: this isn't just another browser, it's a paradigm shift in how we use the Internet.
I don't think I can go back to using a traditional browser ever again.
Here's a good example of how I use Dia, daily. A 🧵
While there's a lot of awesome stuff in Dia, the key feature I use daily, constantly, is the ability to chat with tabs.
I've said it before and I'll say it again: LLMs are a new foundational software layer. You shouldn't have to jump from a traditional app over to an LLM; instead, there's a new generation of apps with LLMs built in.
For too long people have been calling these wrappers. They aren't wrappers - this is the next generation of software, with LLMs inside.
So back to Dia ☀️
Chat with tabs. What does that mean?
Well, before Dia, I was going to ChatGPT and asking it to summarize the news for me. And I'm a big fan of @FinancialTimes so that's typically how I start my mornings.