Keenan Crane
Jun 14, 2021 · 12 tweets
Uniform edge lengths are a reasonable proxy for mesh quality, since equilateral triangles have good angles. But if you have more specific criteria, you can usually do better. "Hodge-Optimized Triangulations" provides a nice discussion of this perspective: geometry.caltech.edu/pubs/MMdGD11.p…
A popular approach in graphics is something like Botsch & Kobbelt, "Remeshing for Multiresolution Modeling" (Sec. 4): graphics.rwth-aachen.de/media/papers/r…

Basic idea: iteratively move vertices to equalize areas, split long edges, collapse short edges, & flip edges to improve vertex valence.
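To make the vertex-relaxation step concrete, here is a minimal numpy sketch of one simple variant of it (my own toy code, not the authors'; function and parameter names are assumptions): move each vertex toward an area-weighted centroid of its incident faces, keeping only the tangential component of the motion. A full remesher would alternate this with the split/collapse/flip passes.

```python
import numpy as np

def tangential_relaxation(V, F, n_iters=10, step=0.5):
    """Toy vertex-relaxation pass: pull each vertex toward an area-weighted
    centroid of its incident faces, restricted to the tangent plane so the
    surface shape is (approximately) preserved.
    V: (n,3) vertex positions, F: (m,3) triangles (every vertex in some face)."""
    V = V.copy()
    for _ in range(n_iters):
        e1, e2 = V[F[:, 1]] - V[F[:, 0]], V[F[:, 2]] - V[F[:, 0]]
        face_n = np.cross(e1, e2)                    # area-weighted face normals
        face_area = 0.5 * np.linalg.norm(face_n, axis=1)
        face_bary = V[F].mean(axis=1)                # face barycenters

        centroid, weight, normal = np.zeros_like(V), np.zeros(len(V)), np.zeros_like(V)
        for k in range(3):
            np.add.at(centroid, F[:, k], face_area[:, None] * face_bary)
            np.add.at(weight, F[:, k], face_area)
            np.add.at(normal, F[:, k], face_n)
        centroid /= weight[:, None]
        normal /= np.linalg.norm(normal, axis=1, keepdims=True) + 1e-12

        d = step * (centroid - V)                                 # pull toward centroid...
        d -= (d * normal).sum(axis=1, keepdims=True) * normal     # ...tangentially only
        V += d
    return V
```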
In this class of algorithms, I really like Chen & Holst, "Efficient mesh optimization schemes based on Optimal Delaunay Triangulations": math.uci.edu/~chenlong/Pape…

Here there's a clear notion of optimality, & an efficient preconditioner. For surfaces, one restricts to tangential motions.
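For intuition, here is a minimal planar sketch of the ODT-style vertex update these schemes build on: each interior vertex moves to the area-weighted average of the circumcenters of its incident triangles. This is my own toy code (names and defaults are mine); a real implementation would interleave it with Delaunay edge flips and the preconditioning described in the paper.

```python
import numpy as np

def odt_smooth_2d(V, F, is_boundary, n_iters=20):
    """Toy ODT-style smoothing in the plane: move each interior vertex to the
    area-weighted average of the circumcenters of its incident triangles.
    V: (n,2) positions, F: (m,3) triangles, is_boundary: (n,) bool mask of fixed vertices."""
    V = V.copy()
    for _ in range(n_iters):
        A, B, C = V[F[:, 0]], V[F[:, 1]], V[F[:, 2]]
        # unsigned triangle areas
        area = 0.5 * np.abs((B[:, 0]-A[:, 0])*(C[:, 1]-A[:, 1])
                            - (B[:, 1]-A[:, 1])*(C[:, 0]-A[:, 0]))
        # circumcenters (standard closed-form expression)
        d = 2.0 * (A[:, 0]*(B[:, 1]-C[:, 1]) + B[:, 0]*(C[:, 1]-A[:, 1]) + C[:, 0]*(A[:, 1]-B[:, 1]))
        a2, b2, c2 = (A**2).sum(1), (B**2).sum(1), (C**2).sum(1)
        cx = (a2*(B[:, 1]-C[:, 1]) + b2*(C[:, 1]-A[:, 1]) + c2*(A[:, 1]-B[:, 1])) / d
        cy = (a2*(C[:, 0]-B[:, 0]) + b2*(A[:, 0]-C[:, 0]) + c2*(B[:, 0]-A[:, 0])) / d
        circ = np.stack([cx, cy], axis=1)
        # area-weighted average of circumcenters around each vertex
        num, den = np.zeros_like(V), np.zeros(len(V))
        for k in range(3):
            np.add.at(num, F[:, k], area[:, None] * circ)
            np.add.at(den, F[:, k], area)
        new_V = num / den[:, None]
        V[~is_boundary] = new_V[~is_boundary]
    return V
```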
Another classic is Alliez et al, "Isotropic Surface Remeshing": hal.inria.fr/inria-00071991…

Here the idea is to parameterize the mesh & sample it in the plane according to a spatially varying density. My sense is that this idea isn't used much these days, due to the need to parameterize.
Though on second thought, there are methods that use sophisticated field-aligned parameterization to get very high-quality almost-regular meshes with few irregular vertices, such as Nieser et al, "Hexagonal Global Parameterization of Arbitrary Surfaces": mi.fu-berlin.de/en/math/groups…
More recently a popular tool is the award-winning "Instant Meshes" by @wenzeljakob et al: igl.ethz.ch/projects/insta…

Combines the strengths of field-aligned & local iterative methods to get extreme performance/scalability, albeit with more irregular vertices than, say, Nieser et al
Finally, a fun example where meshes with unit edge lengths show up is in Isenburg et al., "Connectivity Shapes": cs.unc.edu/~isenburg/rese…

The idea: if you have perfect unit edge lengths, you can throw away vertex positions and recover the geometry *purely* from the connectivity!
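As a toy illustration of that idea (my own sketch, not Isenburg et al.'s actual algorithm, which also uses fairness terms and more careful initialization): given only the edge list, drive all edge lengths toward 1 by gradient descent, starting from a random embedding.

```python
import numpy as np

def positions_from_connectivity(edges, n_vertices, n_iters=2000, lr=0.05, seed=0):
    """Recover vertex positions from connectivity alone by minimizing
    the sum over edges (i,j) of (|x_i - x_j| - 1)^2."""
    rng = np.random.default_rng(seed)
    X = 0.1 * rng.normal(size=(n_vertices, 3))     # random initial embedding
    E = np.asarray(edges)
    for _ in range(n_iters):
        d = X[E[:, 0]] - X[E[:, 1]]
        length = np.linalg.norm(d, axis=1, keepdims=True) + 1e-12
        g = 2.0 * (length - 1.0) * d / length      # d/dx_i of (|x_i - x_j| - 1)^2
        grad = np.zeros_like(X)
        np.add.at(grad, E[:, 0],  g)
        np.add.at(grad, E[:, 1], -g)
        X -= lr * grad
    return X
```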
In simulation, the award-winning "El Topo" package of Brochu & Bridson uses near-unit edge lengths to provide high-quality free surface tracking: github.com/tysonbrochu/el…

(By the way, "award winning" in these posts refers to the SGP software award @GeometryProcess!)
@BrunoLevy01 has a great package called GEOGRAM that provides a lot of regular meshing functionality: alice.loria.fr/software/geogr…

Bruno will be able to give better details about the algorithms used, and interesting applications like optimal transport that I didn't mention yet.
It's also important to mention that uniform edge lengths provide more detail than necessary in large regions with low curvature.

Other methods adapt to local curvature/feature size, like Dunyach et al, "Adaptive Remeshing for Real-Time Mesh Deformation": hal.archives-ouvertes.fr/hal-01295339/d…
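One common way such methods choose the local target edge length is from a tolerance ε on how far an edge may deviate from the surface: a chord of a circle of radius r = 1/κ with sagitta ε has length 2√(2rε − ε²). A small helper along those lines (my own sketch, not necessarily the exact formula used in the papers above):

```python
import numpy as np

def adaptive_edge_length(kappa, eps=1e-3, l_min=0.01, l_max=1.0):
    """Curvature-adaptive sizing: the longest chord of a circle of radius
    r = 1/|kappa| that stays within distance eps of the circle, clamped
    to a user-specified range [l_min, l_max]."""
    r = 1.0 / np.maximum(np.abs(kappa), 1e-12)
    l = 2.0 * np.sqrt(np.maximum(2.0 * r * eps - eps**2, 0.0))
    return np.clip(l, l_min, l_max)
```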
Likewise, the Chen & Holst paper mentioned earlier allows spatial adaptivity by providing a density function: math.uci.edu/~chenlong/Pape…
There are sure to be countless references I'm missing both to algorithms & applications—but that should provide a few useful pointers!


More from @keenanisalive

Apr 17
Here's a nice "proof without words":

The sum of the squares of several positive values can never be bigger than the square of their sum.

This picture helps make sense of how ℓ₁ and ℓ₂ norms sparsify and regularize solutions (resp.). [1/n]
These pictures are often batting around in my brain when I think about optimization/learning problems, but can take some time to communicate to students, etc. So, I thought I'd make some visualizations. [2/n]
Suppose we minimize the squared length of a vector x, equal to the sum of squares of its components.

To avoid the trivial solution x=0, we'll also require that the components sum to a nonzero value.

Equivalently: minimize the ℓ₂ norm ‖x‖₂, subject to ‖x‖₁=1. [3/n]
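A small calculation behind those pictures (my notation, not the thread's figures):

```latex
For positive components $x_i$,
\[
  \Bigl(\sum_i x_i\Bigr)^2 = \sum_i x_i^2 + \sum_{i \neq j} x_i x_j \;\ge\; \sum_i x_i^2,
\]
so $\|x\|_2 \le \|x\|_1$, with equality only when a single component is nonzero.
Minimizing $\|x\|_2^2 = \sum_i x_i^2$ subject to $\sum_i x_i = 1$ via a Lagrange
multiplier gives $2x_i = \lambda$ for every $i$, i.e.\ $x_i = 1/n$: the $\ell_2$
penalty spreads the budget evenly. Swapping the roles (minimize $\|x\|_1$ subject
to $\|x\|_2 = 1$) instead puts all the weight on one coordinate: the sparse solution.
```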
Apr 7
We often use discretization to approximate continuous laws of physics, but it also goes the other way:

You can use continuous equations to approximate the behavior of discrete systems!

Here we'll see how electrical circuits can be modeled using the Laplace equation Δφ=0. [1/n]
The Laplacian Δ is central to numerous (continuous) physical equations like the heat equation, the wave equation, and so on.

I have a whole video about it here: [2/n]
The discrete or graph Laplacian L is typically viewed as a numerical approximation of Δ, giving the difference between the value u_i at a node of the graph and a weighted average of the values u_j at its neighbors j:

(Lu)_i := Σ_j w_ij (u_i − u_j)

Here the w_ij are edge weights. [3/n]
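A tiny concrete example of this correspondence (my own, not from the thread): for a resistor network, take w_ij = 1/R_ij (the conductances); Kirchhoff's current law then becomes the linear system Lu = f relating node voltages u to injected currents f.

```python
import numpy as np

# Small resistor network: edges (i, j, conductance w_ij = 1/R_ij) on nodes 0..3.
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 0.5), (1, 3, 1.0)]
n = 4

# Assemble the graph Laplacian: (Lu)_i = sum_j w_ij (u_i - u_j).
L = np.zeros((n, n))
for i, j, w in edges:
    L[i, i] += w
    L[j, j] += w
    L[i, j] -= w
    L[j, i] -= w

# Inject 1 A at node 0, extract it at node 3, and ground node 3 (u_3 = 0).
f = np.array([1.0, 0.0, 0.0, -1.0])
free = [0, 1, 2]                                   # unknown (non-grounded) nodes
u = np.zeros(n)
u[free] = np.linalg.solve(L[np.ix_(free, free)], f[free])
print(u)                                           # node voltages relative to node 3
```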
Dec 9, 2024
Entropy is one of those formulas that many of us learn, swallow whole, and even use regularly without really understanding.

(E.g., where does that “log” come from? Are there other possible formulas?)

Yet there's an intuitive & almost inevitable way to arrive at this expression.
When I first heard about entropy, there was a lot of stuff about "free states" and "disorder." Or about the number of bits needed to communicate a message.

These are ultimately important connections—but it's not clear it's the best starting point for the formula itself.
A better starting point is the idea of "surprise."

In particular, suppose an event occurs with probability p. E.g., if the bus shows up on time about 15% of the time, p = 0.15.

How *surprising*, then, is an event with probability p? Let's call this quantity S(p).
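For reference, here is where this line of reasoning ends up, stated in formulas (a summary in my notation, not a transcription of the remaining tweets):

```latex
Requiring that surprise decreases with $p$, that certain events ($p = 1$) carry no
surprise, and that independent events add, $S(pq) = S(p) + S(q)$, essentially forces
\[
  S(p) = -\log p ,
\]
and entropy is then the expected surprise of a distribution:
\[
  H(p_1,\dots,p_n) \;=\; \sum_i p_i\, S(p_i) \;=\; -\sum_i p_i \log p_i .
\]
For the on-time bus with $p = 0.15$, the surprise is $-\log_2 0.15 \approx 2.74$ bits.
```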
Nov 14, 2024
We often think of an "equilibrium" as something standing still, like a scale in perfect balance.

But many equilibria are dynamic, like a flowing river which is never changing—yet never standing still.

These dynamic equilibria are nicely described by so-called "detailed balance"
In simple terms, detailed balance says that if you have less "stuff" at point x, and more "stuff" at point y, then to maintain a dynamic equilibrium, the fraction of stuff that moves from x to y needs to be bigger than the fraction that moves from y to x.
Detailed balance is also the starting point for algorithms that efficiently generate samples of a given distribution, called "Markov chain Monte Carlo" algorithms.

The idea is to "design" a random procedure that has the target distribution as the equilibrium distribution.
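Here is a generic sketch of that recipe, in its Metropolis variant with a symmetric random-walk proposal (my own toy code; names and defaults are assumptions):

```python
import numpy as np

def metropolis(log_p, x0, n_steps=10_000, step=0.5, seed=0):
    """Random-walk Metropolis sampler. With a symmetric proposal, accepting y with
    probability min(1, p(y)/p(x)) satisfies detailed balance, so the target
    distribution p is the equilibrium distribution of the chain."""
    rng = np.random.default_rng(seed)
    x, samples = float(x0), []
    for _ in range(n_steps):
        y = x + step * rng.normal()                 # symmetric proposal
        if np.log(rng.uniform()) < log_p(y) - log_p(x):
            x = y                                   # accept; otherwise stay at x
        samples.append(x)
    return np.array(samples)

# Example: sample a standard normal via its log-density -x^2/2 (up to a constant).
samples = metropolis(lambda x: -0.5 * x**2, x0=0.0)
```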
Nov 12, 2024
Even though Newton's laws are deterministic, the behavior of many interacting bodies is so chaotic that it looks essentially "random."

Statistical mechanics effectively says: why bother with all those complex trajectories? Just go ahead and replace them with truly random motion.
A good way to think about the difference is to imagine actually simulating the particles.

With Newtonian simulation, you track both positions & velocities. Velocities increase or decrease according to forces (like attraction/repulsion); positions are then updated by velocities.
With (overdamped) Langevin simulation, you track just the positions. Positions follow the same forces (i.e., the potential gradient), plus some random "noise" that models all that chaotic motion.

Near equilibrium, these two simulators yield motions with very similar statistics.
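A minimal sketch of that overdamped Langevin update (Euler-Maruyama discretization with unit mobility; my own code, names and defaults are assumptions):

```python
import numpy as np

def overdamped_langevin(grad_U, x0, n_steps=10_000, dt=1e-3, kT=1.0, seed=0):
    """x_{k+1} = x_k - grad_U(x_k) dt + sqrt(2 kT dt) * xi,  xi ~ N(0, I).
    Positions follow the potential gradient plus noise; near equilibrium the
    samples are distributed like exp(-U/kT)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    traj = []
    for _ in range(n_steps):
        x = x - grad_U(x) * dt + np.sqrt(2.0 * kT * dt) * rng.normal(size=x.shape)
        traj.append(x.copy())
    return np.array(traj)

# Example: a particle in the quadratic potential U(x) = |x|^2 / 2, so grad_U(x) = x.
traj = overdamped_langevin(lambda x: x, x0=np.zeros(2))
```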
Sep 11, 2024
When you study statistics, people just start talking at you like you're supposed to understand the difference between "probability" and "likelihood."

If you're confused, you're not alone: these words have an *identical* meaning in English—but an important distinction in math. [image: cartoon contrasting probability and likelihood]
In particular:

𝗣𝗿𝗼𝗯𝗮𝗯𝗶𝗹𝗶𝘁𝘆 assumes you've already chosen a fixed model θ, and asks how often given events x occur in this model

𝗟𝗶𝗸𝗲𝗹𝗶𝗵𝗼𝗼𝗱 assumes you've already observed some events x, and asks how plausible it is that a given model θ explains these events
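A tiny numerical version of the distinction, using a coin-flip (binomial) model; the example is mine, not from the thread:

```python
from math import comb

def p_heads(x, n, theta):
    """Binomial model: probability of seeing x heads in n flips with bias theta."""
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

# Probability: fix the model (theta = 0.5) and ask how likely outcomes x are.
print(p_heads(7, 10, 0.5))                  # ~0.117

# Likelihood: fix the observation (x = 7 of 10) and ask which models theta explain it.
thetas = [i / 100 for i in range(1, 100)]
lik = [p_heads(7, 10, t) for t in thetas]
print(thetas[lik.index(max(lik))])          # maximized at theta = 0.7
```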
In the cartoon above:

- 𝒳 is the "state space," describing all possible outcomes x of an experiment

- Θ is the "configuration space," describing all possible parameters θ for the model.
