swyx io
Dec 27, 2021 · 11 tweets
In 1976, Stuart Feldman made Make, the build system that secretly runs ~all open source.

In 2021, @jaredpalmer spent the year working on a new tool that is up to 85% faster. @Vercel snapped it up last month.

Why @TurboRepo will blow up in 2022:

dev.to/swyx/why-turbo…
The origin story starts with a long-standing open issue on TSDX from @n_moore. As an early helper on TSDX, I shied away from it because I thought it should be solved by dedicated tooling like @yarnpkg.

Jared went one step further and *built it*.

Big monorepo shops like Facebook and Google have loads of tooling to make monorepos work, but this class of tooling never made it out into open source.

@TurboRepo does 3 things:
- Make monorepos zero config
- Make monorepos incrementally adoptable
- Make monorepos scale
What @TurboRepo does:

- Reads your build pipeline from the `turbo` key in package.json
- Generates a dependency graph
- Fingerprints each task's inputs
- Executes each task in turn
- Caches outputs/logs

On subsequent runs, if a task's fingerprint matches, it restores outputs from the cache and replays the logs.
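Here's a minimal sketch of that `turbo` key in package.json, assuming the launch-era config format; the task names and output globs are illustrative, not taken from any real repo:

```json
{
  "turbo": {
    "pipeline": {
      "build": {
        "dependsOn": ["^build"],
        "outputs": ["dist/**"]
      },
      "test": {
        "dependsOn": ["build"]
      },
      "lint": {}
    }
  }
}
```

`"^build"` roughly means "build my package dependencies first", which is what gives Turbo a DAG it can fingerprint and parallelize against.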
The API surface area of @TurboRepo is shockingly small:

1️⃣`npx create-turbo@latest myrepo` scaffolds a new monorepo
2️⃣`turbo run build` runs the `build` task
3️⃣configure the pipeline under the `turbo` key in package.json

That's it! Turbo parallelizes tasks based on the DAG turborepo.org/docs/reference…
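A quick end-to-end sketch of those three steps (the repo name is made up; depending on your setup you may invoke `turbo` via npx, yarn, or an npm script):

```bash
# 1. scaffold a new monorepo
npx create-turbo@latest myrepo
cd myrepo

# 2. run the `build` task across the repo; the first run executes everything and fills the cache
npx turbo run build

# 3. run it again with no changes: fingerprints match, so outputs and logs are replayed from cache
npx turbo run build
```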
Remote Caching: Dropbox for your dist dir

The showstopper, still in beta, but the reason why @Vercel's acquisition makes total business sense (apart from gaining the imprimatur of having @jaredpalmer on staff).

Available for *FREE*.
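As I understand the beta docs, wiring a repo up to the remote cache looks roughly like this (command names assumed from the docs at the time; exact commands and flags may differ):

```bash
# authenticate against the remote cache provider (Vercel)
npx turbo login

# link this repo to a shared remote cache so teammates and CI reuse each other's artifacts
npx turbo link

# subsequent builds read from and write to the shared cache automatically
npx turbo run build
```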
How is @TurboRepo so fast?

- An efficient scheduler + rebuilder system ensures you never recompute work that has already been done
- It parallelizes as much as possible
- It's 74% written in Go (see: "Systems core, Scripting shell" in the Third Age thesis)
Why @TurboRepo will be a big deal in 2022:

- "Zero config" - lots of value out of the box
- Declarative build pipeline
- Great debugging/profiling
- Great docs/marketing
- Devs have been hearing nonstop about the benefits of monorepos but have always been held back by tooling
For more on TurboRepo, including notes on the future roadmap, check out my blog post:

dev.to/swyx/why-turbo…

Note that I am not affiliated with the project; I'm just excited about it and sharing what I #LearnInPublic. All inaccuracies are my fault.
If you have time, watch @jaredpalmer and @leeerob go through the @turborepo demo:



As a former monorepo skeptic, IMO this is the time to really dig into them as they are set to explode in 2022.
Added a section to give some spotlight to @NxDevTools as well, as @jeffbcross has been patiently answering questions in the replies. They have a comparison page up on their docs: nx.dev/l/r/guides/tur…

Others have also created perf benchmarks.
