I saw @iquilezles' sdFbm article (iquilezles.org/www/articles/f…) and tried it in Blender.
5 octaves of noise, 16 raymarching steps. That's all OpenGL can handle before hitting uniform limits with nodes.
As you can imagine, lots of nodes :D
The node editor lags to death, but the viewport is still nice and fast (since unrolled loops end up running about the same on the GPU anyway)
Note there are no drivers in the shader, as that *wouldn't* run anywhere near as fast!
Here's an overview of the nodes, with a little bit of annotation to show what each chunk of nodes is doing
And let's dive in.
loop - the main raymarch loop: just sample "map" and move along (a rough GLSL sketch of all these groups follows after this list)
map - the main SDF function
sdFbm - the algorithm described in the article, implemented with nodes
sdFbmLoop - the iterations of the algorithm
sdBase - the "sphere grid" as described in the article
sph - a single random-radius sphere SDF
There are no integers in Blender shaders, so the integer hash is replaced with a float one (the "White Noise" texture node).
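Since node trees are hard to read in a screenshot, here's roughly what those groups compute, written out in GLSL. This is a loose sketch based on the article, not read off the actual nodes: the smoothing widths, the hash, the per-octave transform (the article rotates the domain between octaves; I just scale it here), and the base shape are all stand-ins.

```glsl
// float hash standing in for Blender's "White Noise" texture
// (the article uses an integer hash; this is an illustrative substitute)
float hash(vec3 p)
{
    return fract(sin(dot(p, vec3(127.1, 311.7, 74.7))) * 43758.5453);
}

// smooth min/max for blending octaves together
float smin(float a, float b, float k)
{
    float h = max(k - abs(a - b), 0.0) / k;
    return min(a, b) - h * h * k * 0.25;
}
float smax(float a, float b, float k)
{
    return -smin(-a, -b, k);
}

// sph: one random-radius sphere, sitting on a corner of the current grid cell
float sph(vec3 i, vec3 f, vec3 c)
{
    float rad = 0.5 * hash(i + c);
    return length(f - c) - rad;
}

// sdBase: the "sphere grid" -- min over the 8 corner spheres of the cell
float sdBase(vec3 p)
{
    vec3 i = floor(p);
    vec3 f = fract(p);
    float d = 1e10;
    for (int x = 0; x < 2; x++)
    for (int y = 0; y < 2; y++)
    for (int z = 0; z < 2; z++)
        d = min(d, sph(i, f, vec3(x, y, z)));
    return d;
}

// sdFbm / sdFbmLoop: layer octaves of sdBase onto an existing distance d
float sdFbm(vec3 p, float d)
{
    float s = 1.0;
    for (int octave = 0; octave < 5; octave++)  // 5 octaves, as in the thread
    {
        float n = s * sdBase(p);
        n = smax(n, d - 0.1 * s, 0.3 * s);      // carve away from the surface
        d = smin(n, d, 0.3 * s);                // add detail onto it
        p = 2.0 * p;                             // next octave: double frequency,
        s = 0.5 * s;                             //   halve amplitude
    }
    return d;
}

// map: the main SDF -- fbm detail on some base shape (a unit sphere here)
float map(vec3 p)
{
    return sdFbm(p, length(p) - 1.0);
}

// loop: the raymarcher -- sample map and step along, 16 steps as in the thread
float raymarch(vec3 ro, vec3 rd)
{
    float t = 0.0;
    for (int i = 0; i < 16; i++)
    {
        float d = map(ro + t * rd);
        if (d < 0.001) break;
        t += d;
    }
    return t;
}
```

In the node version, every iteration of those loops is a copy-pasted node group, which is why the tree gets so big.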
That's it, pretty straightforward actually; it just looks like a lot of nodes at first with all the unrolled iterations.
fuck i'm moving in a day i'm meant to be packing my things why am i getting fixated on balls of goo🥲
I've been very stressed and not blendering much, but I had a random idea for how to do fake caustics with geometry nodes 🤔
I'm sure this has been thought of before lol, but I thought it was a neat way of doing it:
basically, just displace the water surface mesh along the light direction refracted through each point's normal, then compare the face areas before and after to figure out how bright each spot should be:
In theory it should actually be pretty physically accurate, but getting properly calibrated numbers out is probably not as simple as just dividing the original face area by the displaced one, I guess
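Not the actual node setup, but a minimal sketch of the math in GLSL terms, assuming a directional light and a flat floor (all names here are mine):

```glsl
// Hedged sketch: push each water-surface vertex along the light direction
// refracted through its normal until it hits a flat floor; how much each
// face stretched or shrank during that becomes the caustic brightness.
vec3 causticHit(vec3 p, vec3 n, vec3 lightDir, float eta, float floorY)
{
    // eta would be about 1.0/1.33 for air -> water
    vec3 r = refract(normalize(lightDir), normalize(n), eta);
    float t = (floorY - p.y) / r.y;   // intersect the plane y = floorY
    return p + t * r;
}

// then, per face, after displacing its vertices:
//   brightness ~= originalFaceArea / displacedFaceArea
// (faces that converge get brighter, faces that spread out get dimmer)
```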
here's some random thing I made ages ago for doing laser shots through water
never got around to making it animate in a good way though :/
here's how it looks in the viewport - all it's doing is twisting the mesh, but it also instances some spheres on it, which let it keep its volume and "blobbiness"; without them it looks kinda flat (compare the last 2 images)
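The twist itself is just the standard deform; a minimal sketch, assuming the twist axis is Z and `rate` is what gets animated (both assumptions, not read off the nodes):

```glsl
// Hedged sketch: classic twist deform, rotating each point around the Z axis
// by an angle proportional to its height
vec3 twist(vec3 p, float rate)
{
    float a = rate * p.z;
    float c = cos(a), s = sin(a);
    return vec3(c * p.x - s * p.y, s * p.x + c * p.y, p.z);
}
```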
Here are the nodes for the shader as well - I annotated them a tiny bit. The two vectors going off to the right in the geo nodes screenshot only go to the group output so they can be fed into the shader
So, there's a concept I've been thinking about for a while, and I couldn't seem to find any existing examples...
How useful, from a modeler's perspective, would "sparse" UV mapping be? What I'm envisioning is something that removes the [0-1] UV bounds, so instead of packing tightly you
do your UV unwrapping *and all your texture work* on an open, "unconstrained" area, without any concern for wasted space in between - think of a PureRef board. The end result could still be backed by a single texture, but you can freely change the texel density of each island.
Basically you'd do any texture painting etc. on a (theoretically infinite) virtual texture, but at export the UVs get automatically packed, taking whatever parts of this virtual texture they cover along with them. Instead of working in that tiny 0-1 box.
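In code terms, the export step would amount to an affine remap per island; a minimal sketch, with every name here hypothetical:

```glsl
// Hedged sketch: at export, each island's UVs in the open "virtual" space get
// an affine remap into the packed 0-1 atlas. islandScale is where the freely
// chosen texel density gets baked in; the packer picks atlasOffset per island.
vec2 packUV(vec2 virtualUV, vec2 islandOrigin, float islandScale, vec2 atlasOffset)
{
    return atlasOffset + islandScale * (virtualUV - islandOrigin);
}
```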
Back home, realised I hadn't used blender in almost a full month :/
still thinking about Girls' Last Tour...
wanted to do some stuff with the custom nodes to show them off a bit more; didn't have too long to work on this (maybe 1 hour? 1.5? something like that)
original reference image:
I used 2 curvature nodes, one for "AO" and one for the edge stuff. It's more obvious if I show just the shading, then one, then the other, then both:
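Roughly how the two passes combine, as a hedged sketch (the actual setup uses the custom curvature nodes; the strengths and mixing here are made up):

```glsl
// Hedged sketch: one curvature pass darkens cavities ("AO"), the other
// lightens convex edges; both applied on top of the base shading.
vec3 applyCurvature(vec3 shaded, float cavity, float edge)
{
    shaded *= mix(1.0, 0.6, cavity);   // cavity pass: darken concave areas
    shaded += edge * vec3(0.15);       // edge pass: lighten sharp convex edges
    return shaded;
}
```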