Ian MacLarty
Mar 3, 2022 · 26 tweets
So I had this idea for an improvement to the way I render lines in Mars First Logistics. I've had a lot of people ask about how that works, so here's a (somewhat technical) thread about it and the improvement I recently made.
The lines are rendered using edge detection. This is a post process effect where first everything is rendered to a texture and then we read that texture to work out where the lines should go. Here's what it looks like before and after applying the post process shader:
With edge detection we're looking for pixels whose "value" differs from that of their neighbouring pixels (I'll get into what "value" means shortly). We can then darken these pixels to make lines along the edges.
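The core idea can be sketched on a plain 2D grid of values. This is a minimal illustration with an assumed fixed threshold, not the game's shader (which runs per-fragment on the GPU over the rendered texture):

```python
def detect_edges(values, threshold):
    """Mark pixels whose value differs from any 4-neighbour by more than threshold."""
    h, w = len(values), len(values[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    if abs(values[ny][nx] - values[y][x]) > threshold:
                        edges[y][x] = True
    return edges

# A grid with a sharp 0 -> 1 boundary down the middle.
grid = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# Pixels on either side of the boundary get flagged; the rest don't.
print(detect_edges(grid, 0.5))
```

In the real effect you'd sample the neighbouring texels in the fragment shader and darken the flagged pixels instead of returning a boolean grid.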
I look at several values to determine if a pixel is along an edge: its depth (distance from camera), surface normal and colour. This info is encoded in 2 textures: the depth texture generated by Unity and the camera’s 4 channel colour buffer (16 bits per channel).
Colours are stored as an index into a palette in the blue channel. This means I only need to use one channel for colours and it gives me complete control of the colours at different times of day. Here's what the daytime palette looks like:
Instead of trying to store the x, y and z components of the normal separately, I store the dot product of the normal with the view direction and another orthogonal direction. These two dot products are stored in the green and alpha channels.
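The encoding amounts to two dot products per pixel. A quick sketch of the maths (the direction vectors here are made up for illustration; the thread doesn't specify which orthogonal direction the game uses):

```python
def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

view_dir = (0.0, 0.0, 1.0)   # assumed: camera looking down +z
ortho_dir = (1.0, 0.0, 0.0)  # assumed: any direction orthogonal to view_dir

# A surface normal tilted 45 degrees between "up" and "towards camera".
normal = (0.0, 0.707, 0.707)

# Two scalars instead of three components; stored in green and alpha.
encoded = (dot(normal, view_dir), dot(normal, ortho_dir))
print(encoded)  # (0.707, 0.0)
```

Two channels instead of three is what frees up the blue channel for the palette index described above.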
This gives pretty good results, but it’s not perfect. You can see little gaps in the lines where the dot products are not sufficiently different to be regarded as an edge:
Another issue is that on curved surfaces, at particular distances, the normals can change too rapidly from pixel to pixel and the whole surface gets detected as an edge:
The idea I had to fix both these issues was to pre-compute the “surface IDs” of each mesh and use these values instead of the normals for edge detection. A surface here means a set of vertices that share triangles.
This works because vertices along sharp edges of a mesh do not share triangles, so the pixels around sharp edges will have different surface ids. Here’s what the game looks like with all the surfaces coloured differently:
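One way to pre-compute these ids is a union-find over the mesh's triangle list: vertices that appear in the same triangle get merged into one surface. This is a sketch with made-up mesh data, not the game's actual tooling:

```python
def compute_surface_ids(num_vertices, triangles):
    """Group vertices into surfaces: vertices sharing a triangle share a surface."""
    parent = list(range(num_vertices))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b, c in triangles:
        union(a, b)
        union(b, c)

    # Relabel each root as a small consecutive integer id.
    ids, surface = {}, []
    for v in range(num_vertices):
        root = find(v)
        surface.append(ids.setdefault(root, len(ids)))
    return surface

# Two quads that share no vertices -> two surfaces. A sharp crease is
# modelled by duplicating the vertices along it, so the two sides never merge.
tris = [(0, 1, 2), (1, 2, 3), (4, 5, 6), (5, 6, 7)]
print(compute_surface_ids(8, tris))  # [0, 0, 0, 0, 1, 1, 1, 1]
```

The resulting small integer per vertex is then written into the colour buffer in place of the normal data.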
And here’s with edge detection applied. Perfect!
I was feeling pretty pleased with myself, but then I started noticing some weird artifacts on some surfaces, like this:
Using RenderDoc, I tracked this down to the surface ids losing precision somewhere. This is a closeup showing pixels that should all have the same surface id:
It wasn’t an issue with the texture channel precision, because the surface ids are small enough to be exactly represented by 16 bit floats (they’re all integers in the 0-600 range).
The weird thing was that even if I set all vertices to have the same surface id in the vertex shader, the values in the fragment shader would still be inconsistently different.
I eventually figured out it was the interpolation done during rasterization that was messing up the values. I guess this calculation must be done at a fairly low precision. If I turned off interpolation on surface ids the problem went away.
The problem now was that Unity’s surface shaders don’t support nointerpolation. It was possible to reproduce the surface shader features I needed in an unlit shader (basically just shadows and directional lighting), but this felt like it would be harder to maintain.
In the end a simple round(surfaceid) in the fragment shader seemed to fix the problem. Phew!
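The fix works because the true ids are exact integers: as long as the interpolation error stays well under 0.5, rounding snaps the perturbed value back to the original id. A trivial sketch (the error value here is invented):

```python
# The surface id written by the vertex shader is an exact small integer.
true_id = 137

# Low-precision interpolation during rasterization perturbs it slightly
# (illustrative value -- the real error depends on the hardware).
interpolated = 136.9961

# round() in the fragment shader recovers the exact id,
# provided the error never reaches 0.5.
recovered = round(interpolated)
print(recovered)  # 137
```

This is why keeping the ids in a small range (0–600) matters: larger values would leave less precision headroom for the error to stay under half a unit.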
I did have to clean up a few of my models where surfaces that should have been marked smooth weren’t, but that was worth doing anyway, if only to reduce the vertex count.
Even with surface ids, I do still keep the dot product of normal and view direction in a channel, because that’s still useful when using the depth to detect edges.
Consider the case where a surface is almost parallel to the camera’s view direction. The depth can change very rapidly from pixel to pixel, leading to false edges like this:
This can be fixed by biasing the depth edge threshold by the dot product of the normal and view direction. If the dot product is close to zero, then we don’t detect edges.
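One plausible shape for that bias (the thread doesn't give the exact formula, so this is an assumed version): scale the depth threshold up as n·v approaches zero, so grazing-angle surfaces need an implausibly large depth jump before they count as an edge.

```python
def is_depth_edge(depth_delta, n_dot_v, base_threshold=0.01):
    """Depth-edge test biased by the normal/view-direction dot product.

    Assumed formula: divide the threshold by |n·v|, so surfaces nearly
    parallel to the view direction (n·v near 0) suppress depth edges.
    """
    biased_threshold = base_threshold / max(abs(n_dot_v), 1e-4)
    return depth_delta > biased_threshold

# Surface facing the camera: a small depth jump is a real edge.
print(is_depth_edge(0.05, n_dot_v=1.0))   # True
# Same jump on a surface nearly edge-on to the camera: ignored.
print(is_depth_edge(0.05, n_dot_v=0.05))  # False
```

This is why the n·v value stays in the colour buffer even after surface ids replaced the normals for edge detection.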
Finally the red channel is used for gradients between palette colours, which is useful for things like sunsets.
Shadows are stored in the sign bit of the blue channel. So the final layout of data in the colour buffer is:
- Red: gradients between palette colours
- Green and alpha: surface id and the normal/view-direction dot product
- Blue: colour palette index, with the shadow flag in the sign bit
Thanks for reading! Here’s a bonus debug screenshot.
Extra tidbit: The wireframe effect behind dust is achieved using a colour mask. The dust shader only writes to the red and blue channels, preserving the surface-ids and normal-dot-view-dirs of the objects behind the dust, while replacing the colour ids.
