I believe this will truly transform frontend applications.
The use-cases are endless: smart auto-completion, error correction and validation, natural language filtering, auto-filling forms, UI suggestions, first-pass summarization and search…
But what we're getting now is:
• Faster AI – the model is tiny and runs on-device, at the speed of keystrokes.
• Optimistic AI – just like `useOptimistic` is a React hook that commits a local state change ahead of the server roundtrip to provide a better UX, I believe we'll see "multi-tiered AI", where the on-device AI does the first pass and the cloud AI enhances it further.
• Cost-efficient AI – many use cases will be served end-to-end by the local AI alone. And if the device doesn't support it, the @vercel AI SDK helps you share code, execute it in the cloud, and just swap out the `provider` to use a fast cloud model like @groqinc
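The tiering described above can be sketched in a few lines of TypeScript. `runLocal` and `runCloud` here are hypothetical stand-ins for whatever you'd actually call (e.g. the AI SDK's `generateText` with the provider swapped out), so the sketch is self-contained:

```typescript
// Hedged sketch of "multi-tiered AI": try the on-device model first,
// fall back to a cloud provider when local inference is unavailable.
type Generate = (prompt: string) => Promise<string>;

async function generateTiered(
  prompt: string,
  runLocal: Generate | undefined, // undefined when the device has no local model
  runCloud: Generate,
): Promise<{ text: string; tier: "local" | "cloud" }> {
  if (runLocal) {
    try {
      // First pass on-device: keystroke-speed, no network roundtrip.
      return { text: await runLocal(prompt), tier: "local" };
    } catch {
      // Local model failed mid-flight; fall through to the cloud tier.
    }
  }
  // Same code path, different provider: this is where a fast cloud
  // model (e.g. via Groq) would be plugged in.
  return { text: await runCloud(prompt), tier: "cloud" };
}
```

The point is that the calling code never changes; only the provider behind each tier does.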
This is how simple the resulting API is.
h/t @jeasonstudio for implementing `chrome-ai`
• • •
This one set out to prove two things:
✓ React now has PHP-like levels of code simplicity
✓ A mutation that re-renders the page from the server is fast enough
The speed is absolutely nuts. I'm seeing TTFB as low as 15ms from my home Wi-Fi in SF. The final result streams from us-east, where the core Pokémon database lives.
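The pattern being demonstrated (a mutation that re-renders the page from the server) looks roughly like a Next.js Server Action. This is a hedged sketch, not the demo's actual code: `revalidatePath` is stubbed as a stand-in for `next/cache`'s function, and an in-memory `Map` stands in for the Pokémon database, so the snippet is self-contained:

```typescript
// Stand-in for next/cache's revalidatePath, stubbed so the sketch runs anywhere.
const revalidated: string[] = [];
const revalidatePath = (path: string) => { revalidated.push(path); };

// Hypothetical in-memory view counter standing in for the real database.
const views = new Map<string, number>();

// A Server Action: runs on the server, mutates state, then tells the
// framework to re-render the affected route and stream fresh HTML down.
async function incrementViews(slug: string): Promise<number> {
  "use server"; // directive marking this as a Server Action in Next.js
  const next = (views.get(slug) ?? 0) + 1;
  views.set(slug, next);
  revalidatePath(`/${slug}`); // the page re-renders from the server
  return next;
}
```

No client-side cache juggling: the server is the source of truth, and it's fast enough to feel optimistic.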
I re-designed rauchg.com with App Router
Migration highlights 🧵
◆ 100 PSI perf *out of the box*
◆ Using AI to migrate
◆ React Server Components are unreal (ft. <Tweet>)
◆ Generating Dynamic OG
◆ Reaching Dark Mode nirvana
◆ Integrating Rust-powered MDX (+ RSC)
1️⃣ To start: is it really a framework if it's not blazing fast?
(or "why not an MPA?")
◆ 100 PSI (mobile!) with no work
◆ Yet the page got *more complex* (realtime views)
◆ Yet I get:
◇ blazing page transitions
◇ state (e.g.: mutated view count)
◇ persistent layout (nav)
2️⃣ As part of this work, I converted
◆ `.jsx` → `.mdx` for all posts
◆ `<style jsx>` to Tailwind 🐐
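For the Rust-powered MDX piece, Next.js exposes its Rust-based MDX compiler behind an experimental flag. A minimal `next.config.js` sketch, assuming `@next/mdx` is installed (flag names as of Next.js 13.4, so check against your version):

```javascript
// next.config.js — minimal sketch, assuming @next/mdx is installed.
const withMDX = require("@next/mdx")();

module.exports = withMDX({
  // Let .md/.mdx files act as pages/components alongside .js/.tsx
  pageExtensions: ["js", "jsx", "ts", "tsx", "md", "mdx"],
  experimental: {
    // Opt in to the Rust-powered MDX compiler (plays well with RSC)
    mdxRs: true,
  },
});
```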