In short: no matter how awful your mesh is, we compute beautiful high-quality texture coordinates.
(1/n)
For instance, just as a stress test we can try to flatten this crazy #Thingi10k model as a single piece, rather than cutting it up into many charts. We still get excellent texture coordinates, with no flipped triangles, no angle distortion, and little scale distortion.
(2/n)
In general, the algorithm is guaranteed to produce maps that do not locally fold over and are perfectly conformal in a discrete sense, for absolutely any triangle mesh. It also gives globally 1-to-1 maps to the sphere, and handles any configuration of "cone singularities" (3/n)
Our starting point is the "uniformization theorem": any smooth surface—no matter how crazy it looks—can be deformed into a constant-curvature space (e.g. a sphere) by a "conformal" or angle-preserving map
See here for an intro:
(4/n)
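In symbols (the standard smooth statement, nothing specific to our paper): a conformal map rescales the metric g by a positive factor e^{2u}, and uniformization says the log scale factor u can be chosen so that the new Gaussian curvature is constant:

```latex
\tilde{g} = e^{2u}\, g, \qquad K_{\tilde{g}} = \text{const.}
```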
A recent "discrete uniformization theorem" provides a perfect analog of the smooth one: every polyhedral surface—no matter how crazy it looks—can be mapped to a discretely conformally equivalent one w/ given curvature
Great theorem—but how do you actually compute the map?! (5/n)
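Concretely, "discretely conformally equivalent" means there are numbers u_i at the vertices (log scale factors) rescaling every edge length, in direct analogy with the smooth factor e^{2u} above:

```latex
\tilde{\ell}_{ij} = e^{(u_i + u_j)/2}\, \ell_{ij}
```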
A key idea of the discrete uniformization theorem is to allow the mesh connectivity to change as you flatten the surface. This ensures that you always arrive at a solution—but leads to other challenges that need to be addressed if you want a real, working algorithm. (6/n)
We add three key pieces to ensure things really always work in practice:
A. Improved optimization via "Ptolemy flips" to skirt invalid edge lengths
B. A new integer-based data structure to track how triangulations intersect
C. High-quality interpolation in the "light cone" (7/n)
A. One key idea is that every triangle mesh can also be viewed as an "ideal hyperbolic polyhedron." And even if an edge flip yields lengths that violate the triangle inequality, the hyperbolic picture still makes sense! The algorithm can hence continue without any trouble. (8/n)
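As a sketch (illustrative code, not the implementation from the paper): in the quad with corners i, k, j, l and diagonal ij, a Ptolemy flip computes the new diagonal kl from the Ptolemy relation, which always yields a positive length, triangle inequality or not.

```python
def ptolemy_flip(l_ij, l_ik, l_kj, l_jl, l_li):
    """Flip the diagonal ij of the quad i,k,j,l; return the length of kl.

    Interpreting edge lengths as Penner coordinates of ideal hyperbolic
    triangles, the Ptolemy relation (product of diagonals = sum of the
    products of opposite sides) gives the new diagonal -- always positive,
    so the algorithm never gets stuck on a violated triangle inequality.
    """
    return (l_ik * l_jl + l_kj * l_li) / l_ij
```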
B. Another nice idea is that the way two triangulations cross can be recovered from just the *number* of crossings on each edge. We augment these so-called "normal coordinates" (common in topology) with a new mechanism called "roundabouts," needed to recover the final map. (9/n)
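To give the flavor of the update rule (in the simplest case, where no curve terminates inside the quad; the remaining corner cases need separate treatment): flipping the diagonal ij of the quad i, k, j, l transforms crossing counts by a "tropical" Ptolemy relation, with + replaced by max and × replaced by +:

```latex
n_{kl} = \max\!\left(n_{ik} + n_{jl},\; n_{kj} + n_{li}\right) - n_{ij}
```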
C. Finally, it's not enough to just know how triangulations cross—we must also map data (texture coordinates, color, etc.) back to the input surface. Here, the "light cone" model of hyperbolic geometry helps significantly improve on ordinary piecewise linear interpolation. (10/n)
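Very roughly (see the paper for the real story): hyperbolic space sits inside Minkowski space, whose "light cone" is the set of null vectors under the Minkowski inner product,

```latex
\langle x, x \rangle := x_1^2 + x_2^2 - x_3^2 = 0.
```

Interpolating linearly between points lifted onto the light cone, then projecting back, yields a projective (rather than piecewise linear) interpolation that plays nicely with the hyperbolic picture above.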
The final algorithm is extremely robust, flattening almost all 32,744 connected components from #Thingi10k—a feat never achieved with prior "guaranteed injective" methods. Yet we guarantee not only injectivity, but also *quality*: all maps are exactly discretely conformal. (11/n)
There's much more to say here, and the theory runs pretty deep. Here are some good entry points:
Thanks to @SaulSchleimer, by the way, for some helpful discussions about/references to normal coordinates. (Forgot to acknowledge you on the webpage, but got it right in the paper!)
Entropy is one of those formulas that many of us learn, swallow whole, and even use regularly without really understanding.
(E.g., where does that “log” come from? Are there other possible formulas?)
Yet there's an intuitive & almost inevitable way to arrive at this expression.
When I first heard about entropy, there was a lot of stuff about "free states" and "disorder." Or about the number of bits needed to communicate a message.
These are ultimately important connections—but it's not clear they're the best starting point for the formula itself.
A better starting point is the idea of "surprise."
In particular, suppose an event occurs with probability p. E.g., if the bus shows up on time about 15% of the time, p = 0.15.
How *surprising*, then, is an event with probability p? Let's call this quantity S(p).
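The standard punchline, for completeness: ask that surprise be additive over independent events, S(pq) = S(p) + S(q), and decrease as p grows. Up to the base of the logarithm, the only continuous solution is S(p) = -log p, and entropy is then just the *expected surprise*:

```latex
H(p_1, \ldots, p_n) = \sum_i p_i \, S(p_i) = -\sum_i p_i \log p_i
```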
We often think of an "equilibrium" as something standing still, like a scale in perfect balance.
But many equilibria are dynamic, like a flowing river, which is never changing—yet never standing still.
These dynamic equilibria are nicely described by the principle of "detailed balance."
In simple terms, detailed balance says that if you have less "stuff" at point x, and more "stuff" at point y, then to maintain a dynamic equilibrium, the fraction of stuff that moves from x to y needs to be bigger than the fraction that moves from y to x.
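In symbols (the textbook form): writing π for the equilibrium distribution and P(x → y) for the transition probability,

```latex
\pi(x)\, P(x \to y) = \pi(y)\, P(y \to x),
```

so if π(x) < π(y), the outgoing fraction P(x → y) must exceed P(y → x), exactly as described above.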
Detailed balance is also the starting point for "Markov chain Monte Carlo" (MCMC) algorithms, which efficiently generate samples from a given distribution.
The idea is to "design" a random procedure that has the target distribution as the equilibrium distribution.
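A minimal sketch of the classic example, Metropolis sampling (illustrative code; the Gaussian proposal and all parameters are arbitrary choices): accepting proposed moves with probability min(1, π(y)/π(x)) bakes detailed balance with respect to π into the chain.

```python
import numpy as np

def metropolis(log_density, x0, n_steps, step=0.5, rng=None):
    """Sample a 1D target distribution pi via the Metropolis algorithm.

    `log_density` is the (unnormalized) log of pi. With a symmetric
    Gaussian proposal, accepting with probability min(1, pi(y)/pi(x))
    enforces detailed balance, so pi is the equilibrium distribution.
    """
    rng = rng or np.random.default_rng()
    x = x0
    samples = []
    for _ in range(n_steps):
        y = x + step * rng.standard_normal()              # propose a move
        if np.log(rng.random()) < log_density(y) - log_density(x):
            x = y                                         # accept; else stay put
        samples.append(x)
    return np.array(samples)

# Example: sample a standard normal (log density -x^2/2, up to a constant)
samples = metropolis(lambda x: -0.5 * x**2, x0=0.0, n_steps=10_000)
```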
Even though Newton's laws are deterministic, the behavior of many interacting bodies is so chaotic that it looks essentially "random."
Statistical mechanics effectively says: why bother with all those complex trajectories? Just go ahead and replace them with truly random motion.
A good way to think about the difference is to imagine actually simulating the particles.
With Newtonian simulation, you track both positions & velocities. Velocities increase or decrease according to forces (like attraction/repulsion); positions are then updated by velocities.
With (overdamped) Langevin simulation, you track just the positions. Positions follow the same forces (i.e., minus the potential gradient), plus some random "noise" that models all that chaotic motion.
Near equilibrium, these two simulators yield motions with very similar statistics.
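A minimal sketch of both simulators, for one particle in a potential V (illustrative code with a toy quadratic potential; forward Euler and Euler-Maruyama steps, with arbitrary constants):

```python
import numpy as np

rng = np.random.default_rng()

def grad_V(x):
    return x  # toy potential V(x) = x^2 / 2 (a spring), so grad V = x

def newton_step(x, v, dt=1e-3):
    """Newtonian: track positions AND velocities (mass = 1)."""
    v = v - grad_V(x) * dt          # forces change the velocity
    x = x + v * dt                  # velocity changes the position
    return x, v

def langevin_step(x, dt=1e-3, beta=1.0):
    """Overdamped Langevin: positions only, plus noise for the chaos."""
    noise = np.sqrt(2.0 * dt / beta) * rng.standard_normal()
    return x - grad_V(x) * dt + noise
```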
I work on fundamental algorithms for geometric and visual computing. Here's a taste of our group's work, as a list of "explainer" threads posted on Twitter/X over the years. (🧵)
Signed distance functions (SDFs) are an important surface representation, which can be directly visualized via the “sphere tracing” algorithm.
At #SIGGRAPH2024 we showed how to sphere trace a whole new class of surfaces, based on *harmonic functions* rather than SDFs. [1/n]
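For reference, here's the classic SDF version of sphere tracing (a baseline sketch, not the harmonic-function method from the paper): the SDF value at a point bounds the distance to the surface, so it's always a safe step size along the ray.

```python
import numpy as np

def sphere_trace(sdf, origin, direction, max_steps=128, eps=1e-4):
    """March a ray (direction assumed unit length) until the SDF says
    we're within eps of the surface, or give up after max_steps."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)  # distance bound at current point
        if d < eps:
            return t                     # hit: within eps of the surface
        t += d                           # the SDF value is a safe step
    return None                          # miss (or ran out of steps)

# Example: a unit sphere at the origin, viewed from z = -3 looking toward +z
sphere = lambda p: np.linalg.norm(p) - 1.0
t = sphere_trace(sphere, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
# t comes out near 2.0: the ray hits the sphere at z = -1
```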
Harmonic functions are everywhere in geometric & visual computing, as well as in math, engineering, and physics. So, it's pretty powerful to be able to visualize them directly.
We show how they open up a variety of applications beyond what people currently do with SDFs. [2/n]
For instance, want to visualize a point cloud as a smooth surface?
Don't need to run a reconstruction algorithm (or fit a neural net): just trace some rays, and voila! [3/n]