I wonder if our early messaging about Concurrent Mode should have been focused on mounts rather than updates. Some of the conversation I’m seeing assumes we could just “do less work” which is not an option for rendering *new* subscreens — where granular rerendering doesn’t help.
This depends on the app — some apps, like dataviz, almost exclusively do “updates”. So the dataviz example, while visually effective, may have been a misdirection. In consumer apps, a lot of the interactions we want to make smoother are mounts — like switching tabs or infinite scroll.
There’s also a question of responsibility. When you have hundreds of components that each run a little bit of code, userland code dwarfs library overhead in CPU time. We consider what happens in that situation *our* responsibility. We can’t just wash our hands and say “don’t write slow code”.
Granular rerendering is certainly useful (and disproportionately useful for dataviz and graphical editors). There are several ways you can solve it. The problem with many popular solutions is that they preclude solving mounts. This is why we start from the other end.
There are many features your library gets if you solve non-blocking mounts. Like pre-rendering the contents of a hidden tab optimistically without blocking the initial paint or delaying user input. Or rendering long lists in visually intentional chunks.
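To make that concrete: a minimal sketch of the hidden-tab idea using React 18’s useTransition. The tab names and ExpensivePanel are hypothetical, and this is an illustration of the idea rather than our actual implementation.

```tsx
import { useState, useTransition } from "react";

// Stands in for a deep tree where every layer runs a bit of user code.
function ExpensivePanel({ tab }: { tab: string }) {
  return <div>{tab} contents</div>;
}

export default function Tabs() {
  const [tab, setTab] = useState("home");
  const [isPending, startTransition] = useTransition();

  function selectTab(next: string) {
    // The click feedback stays urgent; mounting the new deep tree is
    // marked non-urgent, so React can prepare it without blocking input.
    startTransition(() => setTab(next));
  }

  return (
    <>
      {["home", "feed", "profile"].map((name) => (
        <button key={name} onClick={() => selectTab(name)}>
          {name}
        </button>
      ))}
      <div style={{ opacity: isPending ? 0.5 : 1 }}>
        <ExpensivePanel tab={tab} />
      </div>
    </>
  );
}
```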
I don’t care if the library authors of today see this as a problem worth solving. But I want to inspire the authors of libraries of tomorrow to at least consider it. We have several tabs, each has a deep tree inside, each layer has a fixed user code cost. Now make it responsive.
Maybe this problem by itself isn’t worth your effort. It is hard. But if you spend enough time on it, you might discover that other areas — animations, data fetching, hydration, code splitting — now allow new solutions you couldn’t consider before. If your model allows it.
i feel bittersweet sharing i’m leaving my job at meta in a few weeks. working in the react org at meta has been an honor. i am thankful to my past and present colleagues for taking me in, letting me make mistakes, helping me see my strengths, being kind, and sharing their time.
for the past three years, i kept saying i’d leave “in a year or so” but the moment never felt right. i wanted to (1) finish the new docs and (2) see a broadly usable Suspense data fetching integration ship. after years of work from the team, both have shipped this spring.
i felt hesitant leaving earlier because not too long ago, leaving meta used to mean leaving the react team. that would feel too sad for me. but it is not true anymore. react has become a multi-company project, and there are several independent engineers on the team too.
fwiw i expected the article to be clickbait (and the title is) but it’s actually pretty balanced. imo it gets a few things wrong so i’ll provide an alternative perspective (tiny thread)
the framing of “existing features like useState / react-query / CSS-in-JS don’t work” is misleading at best.
to understand why, first consider the React you already know…
… in the RSC paradigm, all of these things keep working! we are not *replacing* that layer — we are adding a *new* layer that can run at the build or request time. that’s Server Components. the only thing they can do is pass data to the “React you already know”…
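a minimal sketch of that layering, assuming a Next.js-style App Router setup (the file layout and the db helper are hypothetical, not something RSC prescribes):

```tsx
// file: app/note/[id]/page.tsx (Server Component: no "use client" directive)
// Runs at build or request time, reads data directly, and passes plain
// props down to the layer below.
import { db } from "@/lib/db"; // hypothetical data source
import Editor from "./Editor";

export default async function NotePage({ params }: { params: { id: string } }) {
  const note = await db.notes.find(params.id); // never ships to the browser
  return <Editor initialText={note.text} />;
}
```

```tsx
// file: app/note/[id]/Editor.tsx (the "React you already know", unchanged)
"use client";
import { useState } from "react";

export default function Editor({ initialText }: { initialText: string }) {
  const [text, setText] = useState(initialText); // useState keeps working
  return <textarea value={text} onChange={(e) => setText(e.target.value)} />;
}
```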
yeah i thought this was nice. idk if “spatial computing” will catch on or will stay as an apple-esque “we’re too good to use the industry terms” thing, but i thought it’s funny that this launch simultaneously validated meta’s bet *and* made meta’s branding feel instantly obsolete
mark’s meta announcement felt corny because they had to come up with a vision of mainstream aesthetics for a medium that has no mainstream community yet. of course it’s not believable! apple stuck with floating 2d stuff in the presentation because it feels familiar.
i think this is great news for meta too. i imagine it will be easier to motivate sweating the details and making them cohesive after apple resets the expectations of what this medium is supposed to feel like.
curious what the actual apple vision (not pro) looks like
vision is such a dope name for a product. focuses it on the human (what function does it serve you) rather than on the place you’re supposedly in (whatever reality). “apple vision” also kinda says “this is *the* thing we’re working on”
i mean i sorta get the point but also if a ballpen wrote stuff by itself and contained much of humanity’s collective knowledge within, maybe people would have a point being a bit more concerned about ballpens too? it’s more like a phone line with an alien made out of our voices
which is maybe fine, who knows! the internet is pretty good imo and it sure sounds a lot more dangerous than a ballpen. but like idk it’s just such a freaky vibes piece of technology, both natural and freaky like golems or acid. you don’t see language itself reanimated every day.
the closest positive emotional reference i can think of is something like talking to ancestor spirits. and even those stories typically have preexisting oracles instead of groups of people competing to discover and create them. it’s freaky
real talk. modern frameworks like Next.js and Gatsby have sort of an “SPA mode”. the main difference from classical SPAs is that they produce several entry HTML files (one per route). this means a purely static (not Node!) deploy needs a tiny URL -> path config. this trips people up.
we need to get past this hurdle collectively. it is ridiculous if this is the reason we’re delaying adoption of better tools. SPAs with multiple HTML entry files are much better SPAs! we just need some standard way to deploy these across providers.
ideas welcome. i know there are scripts that generate config eg for nginx and apache. cool. i also know some providers infer these paths by default for next and gatsby. also cool. but can we have one obvious way to do it across the ecosystem? so that every single shop knows how.
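for illustration, a hypothetical shape such a script could take: walk the static export and emit nginx rules mapping each clean URL to its HTML entry file (the “out” directory and the output format are assumptions, not a standard):

```ts
// Hypothetical sketch: walk a static export directory and emit nginx
// "location" rules mapping clean URLs to per-route HTML entry files.
import { readdirSync, statSync, writeFileSync } from "fs";
import { join, relative } from "path";

function collectHtmlFiles(dir: string, root = dir): string[] {
  return readdirSync(dir).flatMap((name) => {
    const full = join(dir, name);
    if (statSync(full).isDirectory()) return collectHtmlFiles(full, root);
    return name.endsWith(".html") ? [relative(root, full)] : [];
  });
}

const rules = collectHtmlFiles("out").map((file) => {
  // "about/index.html" -> "/about", "index.html" -> "/"
  const route =
    "/" + file.replace(/(^|\/)index\.html$/, "").replace(/\.html$/, "");
  return `location = ${route} { try_files /${file} =404; }`;
});

writeFileSync("routes.conf", rules.join("\n") + "\n");
```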