So I had this idea for an improvement to the way I render lines in Mars First Logistics. I've had a lot of people ask about how that works, so here's a (somewhat technical) thread about it and the improvement I recently made.
The lines are rendered using edge detection. This is a post-process effect: first everything is rendered to a texture, and then we read that texture to work out where the lines should go. Here's what it looks like before and after applying the post-process shader:
With edge detection we're looking for pixels whose "value" differs from that of their neighbouring pixels (I'll get into what "value" means shortly). We can then darken these pixels to make lines along the edges.
I look at several values to determine if a pixel is along an edge: its depth (distance from camera), its surface normal, and its colour. This info is encoded in two textures: the depth texture generated by Unity and the camera’s 4-channel colour buffer (16 bits per channel).
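In shader terms, the neighbour comparison looks roughly like this. This is a minimal sketch, not the game's actual shader: names like _DepthThreshold are made up, and it assumes UnityCG.cginc is included for Linear01Depth.

```hlsl
sampler2D _MainTex;            // the camera's encoded colour buffer
sampler2D _CameraDepthTexture; // the depth texture generated by Unity
float4 _MainTex_TexelSize;     // x = 1/width, y = 1/height
float _DepthThreshold;         // made-up tuning value

struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

float4 frag(v2f i) : SV_Target
{
    float2 texel = _MainTex_TexelSize.xy;
    float2 offsets[4] = { float2(texel.x, 0), float2(-texel.x, 0),
                          float2(0, texel.y), float2(0, -texel.y) };

    float depth = Linear01Depth(tex2D(_CameraDepthTexture, i.uv).r);
    float4 centre = tex2D(_MainTex, i.uv);

    bool edge = false;
    for (int n = 0; n < 4; n++)
    {
        float2 uv = i.uv + offsets[n];
        float neighbourDepth = Linear01Depth(tex2D(_CameraDepthTexture, uv).r);
        float4 neighbour = tex2D(_MainTex, uv);

        // An edge is a big enough jump in depth, or any change in the
        // values encoded in the colour buffer's channels.
        edge = edge
            || abs(neighbourDepth - depth) > _DepthThreshold
            || any(abs(neighbour - centre) > 0.001);
    }

    // Darken edge pixels; otherwise pass the encoded values through
    // (palette decoding omitted in this sketch).
    return edge ? float4(0, 0, 0, 1) : centre;
}
```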
Colours are stored as an index into a palette in the blue channel. This means I only need one channel for colours, and it gives me complete control over the colours at different times of day. Here’s what the daytime palette looks like:
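Resolving the index back to a colour in the post-process pass is then just a texture lookup. A rough sketch, where the palette strip texture and _PaletteSize are my assumed names:

```hlsl
sampler2D _Palette;  // a 1 x N strip of palette colours (assumed setup)
float _PaletteSize;  // number of palette entries

float3 DecodePaletteColour(float index)
{
    // Sample the centre of the palette entry; swapping the _Palette
    // texture is all it takes to change the time of day.
    float u = (index + 0.5) / _PaletteSize;
    return tex2D(_Palette, float2(u, 0.5)).rgb;
}
```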
Instead of trying to store the x, y and z components of the normal separately, I store the dot product of the normal with the view direction, and with another direction orthogonal to it. These two dot products are stored in the green and alpha channels.
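Packed into code, the encoding looks something like this (a sketch; the choice of the second, orthogonal direction here is my assumption, not necessarily what the game uses):

```hlsl
// Assumes UnityCG.cginc for _WorldSpaceCameraPos.
float2 EncodeNormal(float3 worldNormal, float3 worldPos)
{
    float3 viewDir = normalize(_WorldSpaceCameraPos - worldPos);
    // One way to build a direction orthogonal to the view direction
    // (ignoring the degenerate straight-down case).
    float3 orthoDir = normalize(cross(viewDir, float3(0, 1, 0)));
    // x goes into the green channel, y into the alpha channel.
    return float2(dot(worldNormal, viewDir), dot(worldNormal, orthoDir));
}
```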
This gives pretty good results, but it’s not perfect. You can see little gaps in the lines where the dot products are not sufficiently different to be regarded as an edge:
Another issue is that on curved surfaces, at particular distances, the normals can change too rapidly from pixel to pixel and the whole surface gets detected as an edge:
The idea I had to fix both these issues was to pre-compute the “surface IDs” of each mesh and use these values instead of the normals for edge detection. A surface here means a set of vertices that share triangles.
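One plausible way to precompute this (a sketch of the idea, not necessarily how the game does it) is a union-find over the mesh's triangle indices: union the corners of each triangle, and each resulting connected component is one surface.

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class SurfaceIds
{
    public static int[] Compute(Mesh mesh)
    {
        // Union-find over vertex indices.
        int[] parent = new int[mesh.vertexCount];
        for (int v = 0; v < parent.Length; v++) parent[v] = v;

        int Find(int v) => parent[v] == v ? v : (parent[v] = Find(parent[v]));
        void Union(int a, int b) => parent[Find(a)] = Find(b);

        // Vertices that share a triangle belong to the same surface.
        // Along sharp edges vertices are duplicated, so surfaces on
        // either side of the edge never get unioned together.
        int[] tris = mesh.triangles;
        for (int t = 0; t < tris.Length; t += 3)
        {
            Union(tris[t], tris[t + 1]);
            Union(tris[t], tris[t + 2]);
        }

        // Relabel each component with a small integer id.
        var ids = new int[parent.Length];
        var labels = new Dictionary<int, int>();
        for (int v = 0; v < parent.Length; v++)
        {
            int root = Find(v);
            if (!labels.TryGetValue(root, out int id))
            {
                id = labels.Count;
                labels.Add(root, id);
            }
            ids[v] = id;
        }
        return ids; // bake these into a mesh channel, e.g. a spare UV set
    }
}
```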
This works because vertices along sharp edges of a mesh do not share triangles, so the pixels around sharp edges will have different surface ids. Here’s what the game looks like with all the surfaces coloured differently:
And here’s with edge detection applied. Perfect!
I was feeling pretty pleased with myself, but then I started noticing some weird artifacts on some surfaces, like this:
Using RenderDoc, I tracked this down to the surface ids losing precision somewhere. This is a closeup showing pixels that should all have the same surface id:
It wasn’t an issue with the texture channel precision, because the surface ids are small enough to be exactly represented by 16 bit floats (they’re all integers in the 0-600 range, and half precision floats represent integers up to 2048 exactly).
The weird thing was that even if I set all vertices to have the same surface id in the vertex shader, the values arriving in the fragment shader would still differ slightly from pixel to pixel.
I eventually figured out it was the interpolation done during rasterization that was messing up the values. I guess this calculation must be done at a fairly low precision. If I turned off interpolation on surface ids the problem went away.
The problem now was that Unity’s surface shaders don’t support nointerpolation. I could have reproduced the surface shader features I needed in an unlit shader (basically just shadows and directional lighting), but that felt like it would be harder to maintain.
In the end a simple round(surfaceid) in the fragment shader seemed to fix the problem. Phew!
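Roughly, with my own field names:

```hlsl
struct v2f
{
    float4 pos : SV_POSITION;
    float surfaceId : TEXCOORD1;
    // nointerpolation float surfaceId : TEXCOORD1; would be the "proper"
    // fix, but Unity's surface shaders don't support it.
};

float GetSurfaceId(v2f i)
{
    // The ids are exact integers (0-600), so snapping the interpolated
    // value back to the nearest integer undoes the precision loss.
    return round(i.surfaceId);
}
```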
I did have to clean up a few of my models where I hadn’t marked surfaces as smooth that should have been, but that was worth doing anyway, if only to reduce the vertex count.
Even with surface ids, I do still keep the dot product of normal and view direction in a channel, because that’s still useful when using the depth to detect edges.
Consider the case where a surface is almost parallel to the camera’s view direction. The depth can change very rapidly from pixel to pixel, leading to false edges like this:
This can be fixed by biasing the depth edge threshold by the dot product of the normal and view direction: the closer the dot product is to zero, the bigger the depth difference has to be before we count it as an edge.
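Concretely, one way to apply that bias (a sketch; _DepthThreshold and the clamp value are made up):

```hlsl
float _DepthThreshold;

bool DepthEdge(float depth, float neighbourDepth, float nDotV)
{
    // As the surface turns edge-on to the camera, n·v approaches zero
    // and the required depth jump grows, suppressing the false edges.
    float threshold = _DepthThreshold / max(abs(nDotV), 0.01);
    return abs(neighbourDepth - depth) > threshold;
}
```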
Finally the red channel is used for gradients between palette colours, which is useful for things like sunsets.
Shadows are stored in the sign bit of the blue channel. Here’s my final layout of data in the colour buffer:
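In text form, piecing together the channels mentioned in this thread (which of green/alpha holds which value is my inference from the dust tidbit below):

R: gradient between palette colours
G + A: dot(normal, view direction) and the surface id
B: palette colour index, with shadows in the sign bit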
Thanks for reading! Here’s a bonus debug screenshot.
Extra tidbit: The wireframe effect behind dust is achieved using a colour mask. The dust shader only writes to the red and blue channels, preserving the surface-ids and normal-dot-view-dirs of the objects behind the dust, while replacing the colour ids.
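In ShaderLab that kind of mask is a single render-state line; a sketch with the pass contents omitted:

```shaderlab
Pass
{
    // Write only red and blue: the palette index is replaced while the
    // green and alpha channels of whatever is behind remain untouched.
    ColorMask RB

    // ... dust vertex/fragment program ...
}
```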