Thread comparing V-buffer and hardware TBDR approaches.
Some of my TBDR/POSH thoughts here:
When I was designing a V-buffer style renderer in 2015, I was a bit concerned about having to run the vertex shader 3 times per pixel. People might say that this is fine if you are targeting 1:1 pixel:triangle density like Nanite does, but that's cutting corners...
If you look at a generic triangle grid (like terrain or a highly tessellated object surface), you have N*N vertices and (N-1) * (N-1) * 2 triangles. Shading each vertex once and sharing the results costs N^2. Shading 3 vertices per triangle costs 3 * 2 * (N-1)^2 ≈ 6 * N^2, roughly 6x more. That's a significant cost.
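To put numbers on that (a quick check of the math above):

```cpp
#include <cstdio>

int main() {
    // For an N x N vertex grid: N^2 vertices, 2*(N-1)^2 triangles.
    const int N = 256;
    long long shared     = (long long)N * N;           // shade each vertex once
    long long triangles  = 2LL * (N - 1) * (N - 1);
    long long nonIndexed = 3 * triangles;              // shade 3 vertices per triangle
    printf("shared: %lld, non-indexed: %lld, ratio: %.2fx\n",
           shared, nonIndexed, (double)nonIndexed / shared);  // ~5.95x for N=256
    return 0;
}
```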
A common V-buffer implementation is thus equivalent to non-indexed geometry, even on 1:1 pixel:triangle geometry. So you pay the 6x overhead. Having a roughly constant number of triangles on screen is of course a massive algorithmic gain, but we still don't want the 6x overhead.
A hardware TBDR implementation thus runs POSH (position-only shading) first to bin the triangles to tiles and then does per-tile index deduplication before running the full attribute (vertex) shader. IMR (immediate mode rendering) index buffer hardware is similar, so this is nothing dramatically new.
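A minimal CPU-side sketch of that binning step (made-up types and layout; real hardware does this on chip after a position-only transform):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Float2 { float x, y; };

// Bin each triangle into every screen tile its bounding box overlaps.
// A POSH pass does this after position-only vertex shading; per-tile
// index deduplication then runs before the full attribute shader.
std::vector<std::vector<uint32_t>> binTriangles(
    const std::vector<Float2>& pos,          // post-transform screen positions
    const std::vector<uint32_t>& indices,
    int tilesX, int tilesY, float tileSize)
{
    std::vector<std::vector<uint32_t>> tileTris(tilesX * tilesY);
    for (uint32_t t = 0; t + 2 < indices.size(); t += 3) {
        Float2 a = pos[indices[t]], b = pos[indices[t + 1]], c = pos[indices[t + 2]];
        int x0 = std::clamp((int)(std::min({a.x, b.x, c.x}) / tileSize), 0, tilesX - 1);
        int x1 = std::clamp((int)(std::max({a.x, b.x, c.x}) / tileSize), 0, tilesX - 1);
        int y0 = std::clamp((int)(std::min({a.y, b.y, c.y}) / tileSize), 0, tilesY - 1);
        int y1 = std::clamp((int)(std::max({a.y, b.y, c.y}) / tileSize), 0, tilesY - 1);
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                tileTris[y * tilesX + x].push_back(t);  // store first-index offset
    }
    return tileTris;
}
```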
But there's a cost: fetching the vertex positions (and skinning matrices, etc.) and running the vertex transform twice. The index/vertex buffer abstraction is not perfect for TBDR. The same is true for mesh shaders. You want to deduplicate offline and split the mesh into small meshlets.
When you have preprocessed your mesh into meshlets and you know tight local bounds for each meshlet, you can do fine-grained per-meshlet viewport, backface and occlusion culling first. In this pass you don't have to access per-vertex data at all, which is a big saving.
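A sketch of that culling pass using only precomputed meshlet bounds (bounding sphere plus normal cone, in the style of meshoptimizer's cone culling; an HZB occlusion test would slot in the same way):

```cpp
#include <cmath>

struct Float3 { float x, y, z; };

struct MeshletBounds {
    Float3 center;   float radius;       // bounding sphere
    Float3 coneAxis; float coneCutoff;   // normal cone for backface culling
};

static float dot(Float3 a, Float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Float3 sub(Float3 a, Float3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float len(Float3 v) { return std::sqrt(dot(v, v)); }

// Cull a whole meshlet from its precomputed bounds alone; no per-vertex
// data is touched in this pass.
bool meshletVisible(const MeshletBounds& m, const float planes[6][4], Float3 cameraPos)
{
    // Frustum: sphere vs. the six planes (plane normals point inward).
    for (int i = 0; i < 6; ++i) {
        float d = planes[i][0]*m.center.x + planes[i][1]*m.center.y +
                  planes[i][2]*m.center.z + planes[i][3];
        if (d < -m.radius) return false;
    }
    // Backface cone: reject if every triangle in the meshlet faces away.
    Float3 toMeshlet = sub(m.center, cameraPos);
    if (dot(m.coneAxis, toMeshlet) >= m.coneCutoff * len(toMeshlet) + m.radius)
        return false;
    return true;
}
```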
If we assume that our clusters are small and spatially local, we can simply run the vertex shader on all vertices in each visible meshlet and rasterize them. With mesh shaders you can do this in a single pass without a memory roundtrip, but this works only in a forward shading setup.
The simplest approach with a V-buffer is to run the visible clusters' vertex shaders in a compute shader and write the results to memory. This is a bit similar to ARM's older mobile architectures. But you only write visible meshlets, not all the geometry, which is a big improvement.
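A sketch of that compute pass (hypothetical vertex layout; shadeVertex stands in for the full vertex shader):

```cpp
#include <cstdint>
#include <vector>

struct Float3 { float x, y, z; };
struct Float4 { float x, y, z, w; };

struct Meshlet { uint32_t vertexOffset, vertexCount; };

// Stand-in for the full vertex shader: a 4x4 transform (column-major).
Float4 shadeVertex(const Float3& p, const float m[16]) {
    return { m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12],
             m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13],
             m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14],
             m[3]*p.x + m[7]*p.y + m[11]*p.z + m[15] };
}

// "Compute shader" pass: transform only the vertices of visible meshlets
// and write them to a compacted buffer. Invisible geometry never runs.
std::vector<Float4> transformVisible(
    const std::vector<Meshlet>& meshlets,
    const std::vector<uint32_t>& visible,   // IDs of meshlets that survived culling
    const std::vector<Float3>& positions,
    const float mvp[16])
{
    std::vector<Float4> out;
    for (uint32_t m : visible) {
        const Meshlet& ml = meshlets[m];
        for (uint32_t v = 0; v < ml.vertexCount; ++v)
            out.push_back(shadeVertex(positions[ml.vertexOffset + v], mvp));
    }
    return out;
}
```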
The way to do this without any memory roundtrip is to bin the clusters to screen space tiles, just like a TBDR architecture bins triangles to screen space tiles. But there's an order of magnitude less overhead, since clusters are more coarse.
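On a GPU you can't append to per-tile lists directly, so this binning is typically a count + prefix sum + scatter pattern. A CPU-side model of that (assumed data layout):

```cpp
#include <cstdint>
#include <vector>

struct ScreenRect { int x0, y0, x1, y1; };  // tile-space bounds, inclusive

// Two-pass binning: count clusters per tile, prefix-sum the counts into
// offsets, then scatter cluster IDs into one flat array per frame.
void binClusters(const std::vector<ScreenRect>& bounds, int tilesX, int tilesY,
                 std::vector<uint32_t>& tileOffsets, std::vector<uint32_t>& clusterIds)
{
    int tileCount = tilesX * tilesY;
    std::vector<uint32_t> counts(tileCount, 0);
    for (const ScreenRect& r : bounds)                 // pass 1: count
        for (int y = r.y0; y <= r.y1; ++y)
            for (int x = r.x0; x <= r.x1; ++x)
                ++counts[y * tilesX + x];

    tileOffsets.assign(tileCount + 1, 0);              // exclusive prefix sum
    for (int i = 0; i < tileCount; ++i)
        tileOffsets[i + 1] = tileOffsets[i] + counts[i];

    clusterIds.resize(tileOffsets[tileCount]);
    std::vector<uint32_t> cursor(tileOffsets.begin(), tileOffsets.end() - 1);
    for (uint32_t c = 0; c < bounds.size(); ++c)       // pass 2: scatter
        for (int y = bounds[c].y0; y <= bounds[c].y1; ++y)
            for (int x = bounds[c].x0; x <= bounds[c].x1; ++x)
                clusterIds[cursor[y * tilesX + x]++] = c;
}
```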
Now when you are shading your tile, you basically run a mesh shader for each visible cluster: one mesh wave per cluster, generating one chunk of triangles for the rasterizer.
This works fine. But even if you have 1:1 triangle:pixel dense geometry, many of your clusters will span multiple screen space tiles and be transformed 2-4 times, depending on the tile size of course. If you have a tiny local tile cache (similar to groupshared memory), then the overhead is big.
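Back-of-envelope: a w x h cluster at a random screen offset touches about (w/T + 1) * (h/T + 1) tiles of size T, and each touched tile transforms the cluster again. A tiny sketch with assumed numbers:

```cpp
#include <cstdio>

int main() {
    // Expected tiles touched by a square cluster of extent w at a random
    // offset, with T x T tiles: (w/T + 1)^2. Each touch is one transform.
    const float T = 32.0f;                    // tile size in pixels (assumed)
    const float extents[] = {16, 32, 64};     // cluster screen extents (assumed)
    for (float w : extents) {
        float expected = (w / T + 1) * (w / T + 1);
        printf("%gpx cluster, %gpx tiles: ~%.2f transforms per cluster\n",
               w, T, expected);               // 2.25x, 4x, 9x
    }
    return 0;
}
```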
So you would want to do per-triangle culling in the mesh shader. This is already possible with the triangle bit mask. But you still pay extra overhead for the vertex processing. This is hard to improve without repacking the vertex waves.
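A model of that per-triangle mask: one bit per triangle in the cluster, set only if the triangle's bounds overlap the current tile. Note the vertex work still runs for the whole cluster, which is the overhead discussed above.

```cpp
#include <algorithm>
#include <cstdint>

struct Float2 { float x, y; };

// Build a cull mask for one cluster within one tile. A set bit means
// "rasterize this triangle here". Assumes clusters of at most 64
// triangles with 8-bit local indices.
uint64_t triangleMask(const Float2* pos, const uint8_t* tri, int triCount,
                      float tileMinX, float tileMinY, float tileMaxX, float tileMaxY)
{
    uint64_t mask = 0;
    for (int t = 0; t < triCount && t < 64; ++t) {
        Float2 a = pos[tri[t*3]], b = pos[tri[t*3+1]], c = pos[tri[t*3+2]];
        float minX = std::min({a.x, b.x, c.x}), maxX = std::max({a.x, b.x, c.x});
        float minY = std::min({a.y, b.y, c.y}), maxY = std::max({a.y, b.y, c.y});
        bool overlaps = maxX >= tileMinX && minX <= tileMaxX &&
                        maxY >= tileMinY && minY <= tileMaxY;
        if (overlaps) mask |= 1ull << t;
    }
    return mask;
}
```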
If you wanted to do a hardware solution for TBDR GPUs, you would likely want to have an index list per meshlet, deduplicate those per tile, and run compacted vertex waves per tile. A bit like HW index buffering. This has no memory roundtrip and no 2x vertex shading.
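A software model of that per-tile deduplication (a hash map standing in for what would be a small on-chip cache in hardware):

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Remap a tile's triangle indices so each unique vertex is shaded exactly
// once per tile. 'unique' is the compacted vertex wave; 'localIndices'
// drive the rasterizer within the tile.
void dedupTileIndices(const std::vector<uint32_t>& tileIndices,
                      std::vector<uint32_t>& unique,
                      std::vector<uint16_t>& localIndices)
{
    std::unordered_map<uint32_t, uint16_t> remap;
    for (uint32_t idx : tileIndices) {
        auto [it, inserted] = remap.try_emplace(idx, (uint16_t)unique.size());
        if (inserted) unique.push_back(idx);  // first time seen: shade it once
        localIndices.push_back(it->second);   // tile-local compacted index
    }
}
```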
In 2015 I also thought about a couple of different software compaction schemes that I could run on my tile threads first, then run vertex work on those threads, and then pixel work. But load balancing is iffy when you only have a fixed number of threads for all the steps.
And that's why we ended up with the UV-buffer approach in the GPU-driven renderer I presented in 2015. I just didn't want to amplify the vertex workload by 3x+ (the equivalent of non-indexed geometry per pixel). I wanted a fixed cost with a very good cache hit rate.
But the UV-buffer has severe limitations and is not general purpose enough for generic engines. Nanite shows that the V-buffer is shippable today on high-end hardware, if you are willing to lean on a temporal upscaler to reduce the extra per-pixel overhead.
advances.realtimerendering.com/s2015/aaltonen…
But on mobile you want a more optimal solution. Especially on low/mid tier devices that are optimized for fast uniform buffer usage patterns instead of firing dozens of raw memory loads per pixel to fetch three vertices and their attributes.
