The origin story starts with this looongstanding open issue on TSDX from @n_moore. As an early helper on TSDX I shied away from it bc I thought that it should be solved by dedicated tooling like @yarnpkg.
Big monorepo shops like Facebook and Google have loads of tooling to make monorepos work, but this class of tooling never made it out into open source.
@TurboRepo does 3 things:
- Make monorepos zero config
- Make monorepos incrementally adoptable
- Make monorepos scale
How a `turbo run` works:
- Reads your build pipeline from the `turbo` config in package.json
- Generates a dependency graph
- Fingerprints the inputs of each task
- Executes each task in turn
- Caches outputs/logs
On subsequent runs, if a task matches a fingerprint, it restores from cache and replays logs.
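To make the caching model concrete, here's a tiny TypeScript sketch of the fingerprint-and-replay idea. This is illustrative only, not Turborepo's actual code; `fingerprint`, `run`, and the in-memory `cache` are hypothetical names.

```ts
// Illustrative sketch of content-addressed task caching (not Turborepo's code).
import { createHash } from "node:crypto";

// Hypothetical cache: fingerprint -> saved outputs and logs.
const cache = new Map<string, { outputs: string; logs: string }>();

// Hash everything that can affect a task: its name, its input files,
// and the fingerprints of the tasks it depends on.
function fingerprint(task: string, inputs: string[], depHashes: string[]): string {
  const h = createHash("sha256");
  h.update(task);
  for (const file of [...inputs].sort()) h.update(file); // real tools hash file *contents*
  for (const dep of depHashes) h.update(dep);
  return h.digest("hex");
}

function run(key: string, exec: () => { outputs: string; logs: string }) {
  const hit = cache.get(key);
  if (hit) {
    console.log(hit.logs); // cache hit: restore outputs, replay logs
    return hit;
  }
  const result = exec(); // cache miss: do the work once, remember it
  cache.set(key, result);
  return result;
}
```

Because each fingerprint folds in the upstream hashes, changing one package only invalidates that package and its dependents.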
The API surface area of @TurboRepo is shockingly small:
1️⃣`npx create-turbo@latest myrepo` scaffolds a new monorepo
2️⃣`turbo run build` runs the `build` task
3️⃣configure pipeline in package.json `turbo` key
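For reference, here's roughly what that `turbo` key looks like; the `pipeline`/`dependsOn`/`outputs` shape follows the beta docs, but treat the details as a sketch:

```json
{
  "turbo": {
    "pipeline": {
      "build": {
        "dependsOn": ["^build"],
        "outputs": ["dist/**"]
      },
      "test": {
        "dependsOn": ["build"]
      }
    }
  }
}
```

The `^` prefix means "my dependencies' build tasks run first," which is exactly what gives turbo a dependency graph to parallelize and cache against.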
The showstopper: still in beta, but the reason @Vercel's acquisition makes total business sense (apart from gaining the imprimatur of having @jaredpalmer on staff).
- Efficient scheduler + rebuilder system ensures you never recompute work that has been done before
- Parallelizes as much as possible (sketched after this list)
- 74% written in Go (see: "Systems core, Scripting shell" in Third Age thesis)
- "Zero config" - lots of value out of the box
- Declarative build pipeline
- Great debugging/profiling
- Great docs/marketing
- Devs have been hearing nonstop about the benefits of monorepos but have always been held back by tooling
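The scheduler piece is easy to picture: walk the dependency graph and start every task the moment its dependencies finish. A minimal TypeScript sketch (my illustration, not Turborepo's scheduler; `runGraph` and the task names are made up):

```ts
// Run each task as soon as all of its dependencies have finished;
// independent tasks run concurrently.
type Graph = Record<string, string[]>; // task -> tasks it depends on

async function runGraph(graph: Graph, exec: (task: string) => Promise<void>) {
  const started = new Map<string, Promise<void>>();
  const runTask = (task: string): Promise<void> => {
    const existing = started.get(task);
    if (existing) return existing; // each task runs exactly once
    const p = Promise.all((graph[task] ?? []).map(runTask)) // deps first, in parallel
      .then(() => exec(task));
    started.set(task, p);
    return p;
  };
  await Promise.all(Object.keys(graph).map(runTask));
}

// lib-a and lib-b build concurrently; app waits for both.
runGraph(
  { "lib-a#build": [], "lib-b#build": [], "app#build": ["lib-a#build", "lib-b#build"] },
  async (task) => console.log("ran", task),
);
```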
For more on TurboRepo, including notes on future roadmap, check out my blogpost:
As a former monorepo skeptic, IMO this is the time to really dig into them as they are set to explode in 2022.
Added a section to give some spotlight to @NxDevTools as well, as @jeffbcross has been patiently answering questions in the replies. They have a comparison page up on their docs: nx.dev/l/r/guides/tur…
this neurips is really going to be remembered as the "end of pretraining" neurips
notes from doctor @polynoamial's talk on scaling test time compute today
(thank you @oh_that_hat for organizing)
all gains to date have been from scaling data and pretrain compute, and yet LLMs can't solve simple problems like tic-tac-toe
however inference costs have scaled much less.
goes back to libratus/pluribus work
poker model scaling from 2012-2015 - scaled 5x each year, but still lost dramatically (9 big bets per hundred) to poker pros in 80k hands
recalls familiar insight about humans taking longer to think for harder problems.
added 20s of search - distance from Nash equilibrium reduced by a factor of 7 - roughly the equivalent of scaling up model size by 100,000x
Here’s my @OpenAIDevs day thread for those following along. everyone else gotchu with videos and stuff so i will just give personal notes and aha moments thru the day
after some nice screenshots of CoCounsel, time for @romainhuet's legendary live demos. o1 one-shots an iOS app and does the frontend/backend to control a drone.
ai controlled drones, what could go wrong?
Realtime API announced!
starting with speech to speech support
all 6 advanced voice mode voices supported
just realized NotebookLM is @GoogleDeepMind's ChatGPT moment
- "low key research preview"/"experimental"
- not monetized
- GPUs/TPUs immediately on fire
- SOTA proprietary new model buried in there, with upgrades that weren't previously announced
- new AI UX that cleverly embeds LLM usage natively within the product features
in this case NBLM nailed multimodal RAG and I/O in a way that @ChatGPTapp never did (or for that matter, @GeminiApp). The multiple rounds of preprocessing described by @stevenbjohnson also raise the quality of the audio conversation dramatically at the cost of extreme latency (took an efficient model that was advertised as capable of generating 30s of audio in 0.5s, and slapped on like 200s of LLM latency haha)
like, i put my podcast into it and it made a podcast of my podcast and... it was good.
do u guys know we spend 1-2 hrs writing up the show notes and now it's a button press in NBLM
Gemini really took pride in topping @lmsysorg for a hot second, and then @OpenAI said "oh no u dont" and put out 4 straight bangers pounding everyone into the dust by 50 elo points
V high bar set for Gemini 2, Grok 2.5, and Claude 4 this fall.
They'll have to compete on multiple fronts - reasoning, multiturn chat tuning, instruction following, and coding.
anyway we finally did a @latentspacepod paper club on STaR and friends, swim on by
i hastily sketched out a "paper stack" of what the "literature of reasoning" could look like, but this is amateur work - would love @teortaxesTex or @arattml to map out a full list of likely relevant papers for o1
holy shit @ideogram_ai thumbnails are untapped alpha
notable reveals from today's iphone 16 event, especially Apple Visual Intelligence:
- Mail and Notifications will show summaries instead of str[:x]
- Siri now knows iPhone, becomes the ultimate manual on how to use the increasingly complicated iOS 18
and can read your texts (!) to suggest actions with Personal Context Understanding
(also it will try to advertise apple tv shows to you... i'm SURE it will be totally objective and aligned to your preferences amirite)
- new iphone 16 camera control button is PRIME real estate - notice how OpenAI/ChatGPT is now next to Google search, and both are secondary clicks to Apple's visual search, which comes first
- camera adds events to calendar!
"all done on device" and on cloud (though craig doesnt say that haha)