After discussion w/ @ttaylorr_b, we can implement stacked PRs/PR groups already (in fact, we kind of do with Copilot), but restacking (automatically fanning out changes from the bottom of the stack upwards) would be wildly inefficient. To do it right, we need to migrate @GitHub to use git reftables instead of packed-refs so that multi-ref updates / restacking will be O(n) instead of ngmi.
This will take some time but has been greenlit.
To be clear, packed-refs doesn't make restacking impossible by itself, and we already can/do batch reference updates into a single transaction; any individual transaction rewrites the packed-refs file at most once.
What @ttaylorr_b et al. are more worried about is the sheer number of additional references stacked PRs would create (e.g., refs/pull/NNN/v1, refs/pull/NNN/upstack, etc.) and the slowdown that would cause for ref deletions everywhere, not just in stacked PRs.
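For a rough picture of what that batching looks like from the outside, here's a minimal sketch using `git update-ref --stdin`, which applies every listed command as one atomic transaction. The refs, PR numbers, and 40-hex object IDs below are placeholders, not real GitHub internals:

```ts
// Sketch: batch many ref updates/deletions into a single atomic
// `git update-ref --stdin` transaction, so the packed-refs file is
// rewritten at most once no matter how many refs change.
// All refs and object IDs here are illustrative placeholders.
import { spawnSync } from "node:child_process";

const commands = [
  "update refs/pull/123/v1 1111111111111111111111111111111111111111",
  "update refs/pull/123/upstack 2222222222222222222222222222222222222222",
  "delete refs/pull/122/v1",
];

// git applies all commands together: either every ref moves, or none do.
const result = spawnSync("git", ["update-ref", "--stdin"], {
  input: commands.join("\n") + "\n",
  encoding: "utf8",
});

if (result.status !== 0) {
  throw new Error(`ref transaction failed: ${result.stderr}`);
}
```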
• • •
This should be available in Next.js canaries starting next week
@turborepo This means Turbopack will work with Babel, Less, SCSS, PostCSS, SVGR, MDX, and more. @wSokra + team are mad scientists. To get this to work, they built a custom IPC layer to talk with child Node.js processes from Rust. The same layer is also used to read next.config.js 🤯
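A toy sketch of what the Node side of a bridge like that could look like, not Turbopack's actual protocol: the Rust host spawns a child Node.js process, which loads next.config.js and hands the resolved config back as a line of JSON on stdout. The message shape here is made up:

```ts
// Toy sketch (not Turbopack's real IPC): a Node child process loads
// next.config.js and streams the result to its host as one JSON line.
import { pathToFileURL } from "node:url";

async function main() {
  const configPath = process.argv[2] ?? "./next.config.js";
  const mod = await import(pathToFileURL(configPath).href);
  // next.config.js may export a plain object or a function of
  // (phase, { defaultConfig }); the function case is elided here.
  const config = mod.default ?? mod;
  process.stdout.write(JSON.stringify({ ok: true, config }) + "\n");
}

main().catch((err) => {
  process.stdout.write(JSON.stringify({ ok: false, error: String(err) }) + "\n");
  process.exit(1);
});
```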
After joining @vercel and launching @turborepo, @wSokra presented me with a vision of what a next-gen bundler and build system could be. What if there was no distinction? What if you could parallelize and cache work all the way down to the function level?
🧵
There are two ways to make a process faster: do less work or do work in parallel. We knew if we wanted to make the fastest bundler possible, we’d need to pull hard on both levers.
So we created a new low-level Turbo engine for incremental (and soon distributed) computation.
The Turbo engine works like a scheduler for function calls, allowing calls to be parallelized across all available cores.
The engine also caches the result of all the functions it schedules. It never needs to do the same work twice. It does the minimum work at maximum speed.
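A toy model of those two ideas, concurrent scheduling plus whole-result memoization, might look like this. It's an illustration, not the Turbo engine's actual API; `transpile` and `build` are made-up stand-ins for units of build work:

```ts
// Sketch: run independent function calls concurrently and cache every
// result by (function, arguments), so the same work never runs twice.
type AsyncFn<A extends unknown[], R> = (...args: A) => Promise<R>;

const cache = new Map<string, Promise<unknown>>();

function memoize<A extends unknown[], R>(name: string, fn: AsyncFn<A, R>): AsyncFn<A, R> {
  return (...args: A) => {
    const key = `${name}:${JSON.stringify(args)}`;
    // Reuse in-flight or completed work; only compute on a cache miss.
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key) as Promise<R>;
  };
}

// A made-up unit of work standing in for transpiling one file.
const transpile = memoize("transpile", async (file: string) => `compiled:${file}`);

// Independent calls are awaited together so they can run concurrently.
async function build(files: string[]) {
  return Promise.all(files.map((f) => transpile(f)));
}

async function main() {
  await build(["a.ts", "b.ts"]);
  await build(["a.ts", "b.ts", "c.ts"]); // only c.ts does new work
}
main();
```

Keying the cache on arguments is also what makes incremental rebuilds cheap: a second build only recomputes the calls whose inputs actually changed.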