A mini Importance Sampling adventure: imagine a signal that we need to integrate (sum its samples) over its domain. It could, for example, be an environment map convolution for diffuse lighting (1/6).
Capturing and processing many samples is expensive, so we often randomly select a few and sum only those. If we select samples uniformly (with equal probability), though, we risk missing important features in the signal, e.g. areas with large radiance (2/6).
If the signal is non-negative (like an image), we can normalise its values (divide by the sum of all values) and treat it as a probability density function (pdf). From the pdf we can compute the cumulative distribution function (CDF) (3/6).
The value of the CDF at position X is the sum of the pdf values up to that point. The CDF has some nice properties: it is always increasing, up to a maximum of 1, and big changes in the original signal appear as steep slopes, while slowly/weakly changing areas are flatter (4/6).
This means that if we uniformly select random Y values on the vertical axis and project them horizontally (along the X axis) until they meet the curve, larger features in the signal will receive more samples, while small ones, where the CDF curve is flatter, will receive fewer (5/6).
Finally, if we use those X positions to sample the original signal, most samples will fall on its large features rather than being wasted, allowing us to represent the original signal better (6/6).
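The whole pipeline above — normalise to a pdf, build the CDF, then project uniform Y values onto X — can be sketched in a few lines of Python. The signal values here are made up purely for illustration:

```python
import numpy as np

# Hypothetical 1-D signal standing in for e.g. one row of an environment map.
signal = np.array([0.1, 0.2, 8.0, 0.3, 6.0, 0.2, 0.1, 0.1])

# Normalise into a pdf, then build the CDF with a running sum.
pdf = signal / signal.sum()
cdf = np.cumsum(pdf)

# Uniform random Y values on the vertical axis...
rng = np.random.default_rng(0)
u = rng.random(10_000)

# ...projected onto the X axis: the index where each u first crosses the CDF.
samples = np.searchsorted(cdf, u)

# Count how often each position of the signal was chosen: most samples land
# on the two large features (indices 2 and 4).
counts = np.bincount(samples, minlength=len(signal))
```

The two big features carry about 93% of the signal's energy, so roughly 93% of the samples should land on indices 2 and 4 — exactly the "samples aren't wasted" behaviour the thread describes.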

More from @KostasAAA

1 May
In graphics programming we use a lot of awesome-sounding names for techniques, which often trigger fantastic mental imagery (as well as actual imagery). There are too many to list them all, but my top 3 favourites, in no particular order, are probably: (1/4)
1) "Ambient occlusion": the percentage of rays cast from a point over the hemisphere centred around the surface normal that are not occluded by (do not collide with) geometry. A value of 0 means all rays collide, 1 means none does. (2/4)
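That definition translates directly into a Monte Carlo estimator. A minimal sketch, assuming a hypothetical scene query `occluded(direction)` (not a real API) and a toy "wall" scene for testing:

```python
import numpy as np

def ambient_occlusion(occluded, normal, n_rays=4096, seed=1):
    """Monte Carlo AO as defined above: the fraction of rays over the
    hemisphere around `normal` that do NOT hit geometry.
    `occluded(direction)` is a hypothetical scene query."""
    rng = np.random.default_rng(seed)
    blocked, total = 0, 0
    while total < n_rays:
        d = rng.normal(size=3)           # uniform direction on the sphere
        d /= np.linalg.norm(d)
        if np.dot(d, normal) <= 0.0:     # keep only the upper hemisphere
            continue
        total += 1
        blocked += occluded(d)
    return 1.0 - blocked / n_rays        # 1 = fully open, 0 = fully occluded

# Toy scene: a wall occludes every direction with x > 0 (half the hemisphere),
# so the estimate should come out near 0.5.
ao = ambient_occlusion(lambda d: d[0] > 0.0, normal=np.array([0.0, 0.0, 1.0]))
```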
2) "Shadow pancaking": project shadow-casting meshes that lie in front of the near plane of a light (and would normally get culled) onto the near plane, so that they still cast shadows. Used to enable tightening the shadow projection volume and increasing its resolution. (3/4)
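In practice the pancake is often just a depth clamp in the shadow pass. A tiny sketch, assuming light-space depth grows away from the light and the shadow near plane sits at a hypothetical `z_near` (values invented for illustration):

```python
# Casters with z < z_near lie in front of the near plane and would be culled;
# clamping flattens ("pancakes") them onto the plane so they still occlude.
z_near = 1.0
caster = [0.2, 0.7, 3.5]                     # hypothetical vertex depths
pancaked = [max(z, z_near) for z in caster]  # clamped onto the near plane
```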
4 Oct 20
During my years in graphics there have been many great conference presentations, but also a few that I found "eye-opening" and that changed the way I think about and approach gfx programming. My top 3, in no particular order, are probably (1/4):
"Uncharted 2: HDR Lighting" from GDC 2010, slideshare.net/ozlael/hable-j… by @FilmicWorlds, a great introduction to linear lighting and its importance in graphics (2/4)
"Physically Based Shading" from Siggraph 2010, renderwonk.com/publications/s… by @renderwonk, the seminal introduction to physically based rendering (3/4)
29 Aug 20
People starting to learn graphics techniques, and a graphics API to implement them, may find the whole process intimidating. In that case, one option is to use a rendering framework that hides the API complexity and handles asset and resource management. (1/4)
There are quite a few frameworks out there, for example:

bgfx: github.com/bkaradzic/bgfx
The Forge: github.com/ConfettiFX/The…
Falcor: github.com/NVIDIAGameWork…
Cauldron: github.com/GPUOpen-Librar… (2/4)
Some are closer to the API, some hide it completely. They still offer the opportunity to learn about asset loading, shaders, render states, render targets etc. at a more granular level than a full-blown engine, while allowing the user to focus on the gfx tech implementation. (3/4)
18 Aug 20
Great question from DMs: "How bad are small triangles, really?" Let me count the ways:

1) When a triangle shrinks to around pixel size, it may miss the pixel centre and not get rasterised at all, wasting all the work done during vertex shading docs.microsoft.com/en-us/windows/… (1/6)
2) Even if it does get rasterised, since the GPU shades pixels in 2x2 quads, any work done for pixels in the quad not covered by the triangle will be wasted, leading to quad overshading. blog.selfshadow.com/publications/o…, blog.selfshadow.com/2012/11/12/cou… (2/6)
3) GPUs perform a coarse rasterisation pass to decide which (e.g. 8x8 pixel) tiles are touched by a triangle. Small tris that cover few pixels waste much of the tile, and thin, long triangles can have the same effect. g-truc.net/post-0662.html, fgiesen.wordpress.com/2011/07/06/a-t… (3/6)
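The quad-overshading cost from point 2 is easy to estimate: the GPU launches pixel shading in 2x2 quads, so every quad a triangle touches costs 4 invocations even when only one of its pixels is covered. A rough sketch with invented pixel coordinates:

```python
def overshade_factor(covered_pixels):
    """Shader invocations launched per usefully covered pixel,
    given (x, y) pixel coordinates covered by one triangle."""
    quads = {(x // 2, y // 2) for x, y in covered_pixels}
    invocations = 4 * len(quads)             # whole 2x2 quads are launched
    return invocations / len(covered_pixels)

# A one-pixel triangle pays for a full quad: 4x the useful work.
tiny = overshade_factor([(5, 7)])

# A quad-aligned 4x4 block of pixels wastes nothing.
block = overshade_factor([(x, y) for x in range(4) for y in range(4)])
```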
20 Jun 20
Good DM question: "is it better to dispatch 1 threadgroup with 100 threads or 100 groups with 1 thread in each?" The GPU will assign a threadgroup to a Compute Unit (or SM), and will batch its threads into wavefronts (64 threads, on AMD GCN) or warps (32 threads on NVidia). (1/4)
Those wavefronts/warps are executed on the CU's SIMDs, 64/32 threads at a time (per clock), in lockstep. If you have only one thread in the threadgroup, you will waste most of the wavefront/warp, since a wavefront/warp can't contain threads from different threadgroups. (2/4)
The general advice is that the threadgroup should fill at least a few wavefronts/warps, e.g. 128 or 256 threads on GCN. The ideal number also depends on the registers used per thread (to achieve good occupancy) and on whether the threads of the group need to share data. (3/4)
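The back-of-envelope maths behind the answer can be sketched as a SIMD-lane utilisation estimate. The function name and 64-lane default (AMD GCN) are mine, for illustration:

```python
import math

def lane_utilisation(threads_per_group, groups, wave=64):
    """Fraction of launched SIMD lanes that do useful work, given that a
    wavefront never mixes threads from different threadgroups."""
    waves_per_group = math.ceil(threads_per_group / wave)
    used = threads_per_group * groups
    launched = waves_per_group * wave * groups
    return used / launched

u_100_groups_of_1 = lane_utilisation(1, 100)   # 100 groups of 1 thread each
u_1_group_of_100 = lane_utilisation(100, 1)    # 1 group of 100 threads
```

With 64-lane wavefronts, 100 single-thread groups use only 1/64 of the launched lanes, while one 100-thread group fills 100 of the 128 lanes in its two wavefronts — hence the advice to fill at least a few wavefronts per group.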
26 Jan 20
Question from DMs: "So can the GPU automatically generate a Hi-Z pyramid?"

The confusion comes from a GPU feature often called HiZ (especially on AMD GPUs): for every tile of pixels (say 4x4 or 8x8), the GPU stores a min and a max depth value in a special buffer while rendering. (1/4)
Every time a pixel tile belonging to a triangle arrives, the GPU compares the min/max values stored in that buffer for the tile against the min/max depth values of the incoming pixel tile. (2/4)
If, for example, the min depth of all the pixels in the new tile is larger than the maximum depth stored in the corresponding Hi-Z entry, the GPU rejects the whole tile. Otherwise, it updates the min/max values of the Hi-Z entry and goes on to process the tile further. (3/4)
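The coarse test described above can be sketched in a few lines, assuming a "less" depth comparison (smaller depth = closer). The per-tile dictionary is a stand-in for what the GPU keeps in dedicated on-chip storage:

```python
def hiz_test(hiz, tile_id, tile_min, tile_max):
    """Coarse Hi-Z test: reject an incoming pixel tile if it is entirely
    behind the depths already stored for that tile, else widen the cache."""
    stored_min, stored_max = hiz[tile_id]
    if tile_min > stored_max:
        return "rejected"                # every new pixel is behind the tile
    hiz[tile_id] = (min(stored_min, tile_min), max(stored_max, tile_max))
    return "processed"                   # shade/process the tile further

hiz = {0: (0.2, 0.5)}                    # tile 0 already holds depths 0.2-0.5
behind = hiz_test(hiz, 0, 0.6, 0.9)      # fully behind: whole tile rejected
closer = hiz_test(hiz, 0, 0.1, 0.3)      # overlaps: processed, cache updated
```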
