1 million SDF cubes (950 MB SDF volume). This is the unoptimized baseline benchmark, before the sparse data structure.
Running time is around 150% of normal when profiling in NSight (9.8 ms without it).
Some analysis below...
PCI-E 4.0 bandwidth usage is now 6.9%. This was a bottleneck when my index buffer was in system memory; I measured 28 GB/s of PCI-E 4.0 bandwidth with that setup. Impressive numbers, but moving this buffer to GPU memory still made the simple cube test (no sphere tracing) 5x faster...
L1$ hit rate hovers at around 90%, and L2$ hit rate is around 60%. The first version didn't have any volume texture mip levels, and showed an L1$ hit rate of 30% and an L2$ hit rate of 20%.
Performance got 3x faster on the RTX 3090 after that change, and I can't see any visual difference.
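As a rough illustration of how mips tend to be used in sphere tracing (a sketch and a guess on my part, not necessarily what the change above does): when the current distance is large, the step is long anyway, so a coarser mip is good enough, and coarse mips touch far fewer cache lines per step.

```cpp
// Rough illustration only (not the actual shader): pick a coarser SDF mip when
// the current distance is large. Long steps don't need fine detail, and coarse
// mips touch far fewer cache lines, which would fit the L1$/L2$ hit rate jump
// above. The one-mip-per-distance-doubling heuristic is an assumption.
#include <algorithm>
#include <cmath>

float mipForDistance(float currentDistance, float texelSize, float maxMip)
{
    float distanceInTexels = currentDistance / texelSize;
    return std::clamp(std::log2(std::max(distanceInTexels, 1.0f)), 0.0f, maxMip);
}
```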
"L1TEX Texture Data Throughput" is the bottleneck. That unit throughput has lots of 50% peaks. But never above it. I guess NSight has trouble showing volume texture bottlenecks (volume texture needs two samples). Even if I artificially add sample instructions it stays in 50%.
The ALU pipelines are, unsurprisingly, almost idle during sphere tracing. This kind of workload used less than 50% of the ALU on AMD GCN2 in Claybook, and Ampere has double the ALU pipes, so I never expected to see an ALU bottleneck here. Even the optimized version should not show much ALU usage.
I am guessing we see a lot of the ROP stall problem here: we have large bounding volumes, and only a small number of rays really do a lot of work.
Unfortunately, no PC tool offers a fine-grained enough warp timeline visualization to show this.
The new sparse algorithm renders small cubes (an 8x8x8 SDF each), making the pixel shader cost similar across lanes, so the above problem should be much smaller. Of course, this needs to be combined with GPU-driven occlusion culling to avoid overdraw.
With resizable BAR support getting more adoption, standard swizzle is becoming more important too: docs.microsoft.com/en-us/windows/…
With standard swizzle, you can do the swizzle on the CPU side (even store swizzled textures on disk) and write to GPU memory directly, without a copy.
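A concept sketch of that CPU-side swizzle path. Morton order is used here only because it's the simplest swizzle to show; it is not the exact D3D12 standard swizzle bit pattern, and the square power-of-two RGBA8 texture is an assumption for the example.

```cpp
// Concept sketch only: Morton-order swizzle, NOT the exact D3D12 standard
// swizzle pattern. Shows the idea of swizzling on the CPU while writing
// directly into GPU-visible (resizable BAR) memory, skipping the copy pass.
// Assumes a square, power-of-two, RGBA8 (32-bit texel) texture.
#include <cstdint>

static uint32_t spreadBits(uint32_t v)  // b15..b0 -> 0 b15 0 b14 ... 0 b0
{
    v &= 0x0000ffffu;
    v = (v | (v << 8)) & 0x00ff00ffu;
    v = (v | (v << 4)) & 0x0f0f0f0fu;
    v = (v | (v << 2)) & 0x33333333u;
    v = (v | (v << 1)) & 0x55555555u;
    return v;
}

void swizzleIntoGpuMemory(const uint32_t* linearTexels, uint32_t* gpuMappedTexels, uint32_t size)
{
    for (uint32_t y = 0; y < size; ++y)
        for (uint32_t x = 0; x < size; ++x)
            gpuMappedTexels[spreadBits(x) | (spreadBits(y) << 1)] = linearTexels[y * size + x];
}
```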
I am just wondering how good the standard swizzle support is nowadays. AFAIK only Intel supported this feature in the beginning. What's the current situation? Is it supported on Nvidia and AMD? If yes, is it fast?
If the optimal tiling layout is 5%+ faster than standard swizzle, then there's no point in using standard swizzle: just pay the GPU copy cost for better runtime performance. But if the difference is tiny, then simply use RBAR memory for everything :)
Why is it better to have the full GPU memory visible from the CPU side, instead of just a small 256 MB region?
Thread...
Traditionally people allocate an upload heap, which is CPU system memory visible to the GPU.
The CPU writes data there, and the GPU can read it directly over the PCI-E bus. Recently I measured 28 GB/s of GPU read bandwidth from CPU system memory over PCI-E 4.0.
The two most common use cases are:
1. Dynamic data: CPU writes to upload heap. GPU reads it from there directly in pixel/vertex/compute shader. Examples: constant buffers, dynamic vertex data...
2. Static data: CPU writes to the upload heap, then a GPU-timeline copy moves it into a GPU memory resource (both cases are sketched below).
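A minimal D3D12 sketch of both cases, assuming the device, command list, source data, and DEFAULT-heap destination buffer already exist; error handling and resource barriers are omitted.

```cpp
// Minimal D3D12 sketch of the two upload heap use cases above. Assumes the
// device, command list, source data and DEFAULT-heap destination buffer
// already exist; error handling and resource barriers are omitted.
#include <d3d12.h>
#include <cstring>

ID3D12Resource* CreateUploadBuffer(ID3D12Device* device, UINT64 sizeBytes)
{
    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_UPLOAD;          // CPU system memory, GPU-visible over PCI-E

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width = sizeBytes;
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.Format = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ID3D12Resource* buffer = nullptr;
    device->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_NONE, &desc,
        D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, IID_PPV_ARGS(&buffer));
    return buffer;
}

void UploadData(ID3D12Resource* uploadBuffer, const void* cpuData, size_t sizeBytes,
                ID3D12GraphicsCommandList* cmdList, ID3D12Resource* gpuBuffer /* DEFAULT heap */)
{
    // Case 1 (dynamic data): CPU writes, GPU shaders read the upload heap directly over PCI-E.
    void* mapped = nullptr;
    D3D12_RANGE emptyReadRange = { 0, 0 };            // we never read this memory back on the CPU
    uploadBuffer->Map(0, &emptyReadRange, &mapped);
    std::memcpy(mapped, cpuData, sizeBytes);
    uploadBuffer->Unmap(0, nullptr);

    // Case 2 (static data): a GPU-timeline copy moves it into fast GPU memory.
    cmdList->CopyBufferRegion(gpuBuffer, 0, uploadBuffer, 0, sizeBytes);
}
```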
I am going to implement a depth pyramid based approach first.
I would also like to test the new Nvidia extension that eliminates all pixels of a triangle after the first one passes. That way you don't even need a depth pyramid; just write to a visibility bitfield in the pixel shader.
New GPU culling algorithm:
1. Render last frame's visible list
2. Generate a depth pyramid from the Z buffer
3. Do a 2x2 sample test for each instance using gather (refer to my SIGGRAPH 2015 presentation; a sketch of this step is below)
4. Write newly visible instances also to buffer B
5. Render visible list B
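A CPU-side sketch of step 3, not the actual shader: project the instance bounds to a screen rect, pick the pyramid mip where the rect fits a roughly 2x2 texel footprint, and compare the instance's closest depth against the farthest of those four depths. The DepthPyramid layout, the larger-is-farther depth convention, and the mip selection details are assumptions here.

```cpp
// CPU-side sketch of step 3 above -- not the actual shader. Assumes a depth
// pyramid where each mip stores the MAX (farthest) depth of its 2x2 footprint,
// larger depth = farther away, and a screen rect already clamped to the screen.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct DepthPyramid
{
    std::vector<std::vector<float>> mips;   // mips[0] = full resolution
    std::vector<uint32_t> widths, heights;

    float maxDepth(uint32_t mip, uint32_t x, uint32_t y) const
    {
        x = std::min(x, widths[mip] - 1);
        y = std::min(y, heights[mip] - 1);
        return mips[mip][y * widths[mip] + x];
    }
};

// rect = instance bounds in mip 0 pixels, instanceMinDepth = its closest depth.
bool isPotentiallyVisible(const DepthPyramid& pyr, float minX, float minY,
                          float maxX, float maxY, float instanceMinDepth)
{
    // Pick the mip where the rect spans roughly 2 texels per axis (one gather).
    // A badly misaligned rect can straddle three texels; bumping mip by one fixes that.
    float longestSide = std::max(maxX - minX, maxY - minY);
    uint32_t mip = (uint32_t)std::ceil(std::log2(std::max(longestSide * 0.5f, 1.0f)));
    mip = std::min(mip, (uint32_t)pyr.mips.size() - 1);

    float texel = float(1u << mip);
    uint32_t x0 = (uint32_t)(minX / texel), y0 = (uint32_t)(minY / texel);
    uint32_t x1 = (uint32_t)(maxX / texel), y1 = (uint32_t)(maxY / texel);

    // The 2x2 (gather) footprint; its farthest depth is the conservative occluder depth.
    float farthest = std::max(std::max(pyr.maxDepth(mip, x0, y0), pyr.maxDepth(mip, x1, y0)),
                              std::max(pyr.maxDepth(mip, x0, y1), pyr.maxDepth(mip, x1, y1)));

    // Visible if the instance can be in front of everything the pyramid stores there.
    return instanceMinDepth <= farthest;
}
```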
What should I implement next in my Rust Vulkan prototype?
The scene has plenty of occlusion potential, even though it's a sparse asteroid field of 1 million instances. I should be able to cull 90%+ easily...
I need the occlusion culling for efficient rendering of the sparse volume; otherwise the brick rasterization results in overdraw. However, the backfaces of SDF bricks terminate the root finding immediately, since the ray starts from inside the surface (see the sketch below). I could early-out the normal calculation too...
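A tiny sketch of that early-out; sampleSdf() is just an analytic placeholder standing in for the 8x8x8 brick volume fetch.

```cpp
// Tiny sketch of the early-out above, not the actual shader. sampleSdf() is a
// placeholder (analytic unit sphere) standing in for the brick volume fetch.
#include <cmath>

struct Vec3 { float x, y, z; };

float sampleSdf(const Vec3& p)
{
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;   // placeholder SDF
}

bool sphereTrace(Vec3 origin, Vec3 dir, float maxT, float& hitT)
{
    float t = 0.0f;
    for (int i = 0; i < 64 && t <= maxT; ++i)
    {
        Vec3 p = { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
        float d = sampleSdf(p);
        if (d <= 0.0f) { hitT = t; return true; }   // a ray started on a backface exits here at i == 0
        t += d;                                     // safe step: no surface closer than d
    }
    return false;
}
```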
It was about image kernels and their memory access patterns. It was filled with GCN architecture specifics, but the most noteworthy detail was the LDS sliding window algorithm.
Thread...
Blur kernels are very popular, and the most annoying part of writing one is avoiding fetching the same neighborhood again and again. Tiny changes in execution order can have a massive effect on cache utilization. The problem is especially tricky in separable X/Y Gaussian blurs.
A naive separable Gaussian blur fetches a long strip along the X axis, and each pixel does the same. Pixels at rows Y and Y+n share zero input pixels with each other, while pixels along the X axis do share inputs. But if the kernel is wide enough, it's hard to keep all of that data reliably in the caches.
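Here's a CPU sketch of the sliding-window idea for one row. The GCN version keeps the window in LDS and lets a workgroup walk a whole strip, but the access pattern is the same: after priming, each output pixel fetches exactly one new source texel instead of re-reading the whole kernel footprint. The function shape and clamp-to-edge handling are just for this example.

```cpp
// CPU sketch of the sliding-window idea for one row. After priming the window,
// each output pixel fetches exactly one new source texel instead of re-reading
// the whole kernel footprint, which is the reuse the LDS version gets on GPU.
#include <algorithm>
#include <cstddef>
#include <vector>

void blurRowSlidingWindow(const float* srcRow, float* dstRow, int width,
                          const std::vector<float>& weights)   // weights.size() == 2 * radius + 1
{
    const int radius = (int)weights.size() / 2;
    std::vector<float> window(weights.size());

    // Prime the window with the first kernel footprint (clamp to edge).
    for (int i = -radius; i <= radius; ++i)
        window[i + radius] = srcRow[std::clamp(i, 0, width - 1)];

    for (int x = 0; x < width; ++x)
    {
        float sum = 0.0f;
        for (size_t k = 0; k < weights.size(); ++k)
            sum += window[k] * weights[k];
        dstRow[x] = sum;

        // Slide by one: drop the oldest texel, fetch exactly one new one.
        window.erase(window.begin());   // a ring buffer would avoid this O(radius) shift
        window.push_back(srcRow[std::clamp(x + radius + 1, 0, width - 1)]);
    }
}
```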