Keenan Crane
Dec 7, 2021 · 14 tweets
Excited to share *two* papers appearing at #SIGGRAPHAsia2021, on "Repulsive Curves" and "Repulsive Surfaces."

Tons of graphics algorithms find nice distributions of points by minimizing a "repulsive" energy.

But what if you need to nicely distribute curves or surfaces? (1/14)
We take a deep dive into this question, building fast algorithms for optimizing the recently developed "tangent-point energy" from surface theory.

Talk video:

Repulsive Curves: cs.cmu.edu/~kmcrane/Proje…
Repulsive Surfaces: cs.cmu.edu/~kmcrane/Proje…

(2/14)
The basic problem? Given a collection of points, curves, or surfaces in space, find a well-distributed arrangement that avoids (self-)intersection.

To make things interesting, you also usually have some constraints—like: the geometry is contained inside a bunny!
(3/14)
Why would you want to do this?

Well, *point* repulsion is already used in every major area of computer graphics: from image stippling to mesh generation to density estimation to fluid simulation!
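For a concrete flavor of point repulsion, here's a minimal NumPy sketch (not from the papers): gradient descent on a Coulomb-style energy Σ 1/|xᵢ−xⱼ|, with the points re-projected into the unit disk as a stand-in for a containment constraint (think: "inside a bunny"). The function name and parameters are illustrative.

```python
import numpy as np

def repel_points(points, n_steps=200, step=1e-3):
    """Spread points apart by gradient descent on a Coulomb-style
    repulsive energy E = sum_{i<j} 1/|x_i - x_j|, keeping them
    inside the unit disk as a simple containment constraint."""
    x = np.array(points, dtype=float)
    for _ in range(n_steps):
        diff = x[:, None, :] - x[None, :, :]       # (n, n, 2) pairwise offsets
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)             # ignore self-interaction
        # -grad of 1/r points away from the neighbor: r_hat / r^2
        force = (diff / dist[..., None]**3).sum(axis=1)
        x += step * force
        # project back into the unit disk
        r = np.linalg.norm(x, axis=-1, keepdims=True)
        x /= np.maximum(r, 1.0)
    return x
```

Even this toy version shows the qualitative behavior: a tight cluster relaxes into a well-separated arrangement that fills the allowed region.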

But surprisingly little work has been done on repulsive curves & surfaces… (4/14)
And yet there are *all sorts* of things you can do with higher-dimensional repulsion—from better graph layouts, to generative modeling, to robotic path planning, to intersection-free modeling & illustration, to artificial tissue design, to unraveling mathematical mysteries! (5/14)
Repulsion goes far beyond (literally) traditional collision-aware design & modeling: rather than slamming on the brakes right before the moment of impact, repulsive optimization is all about finding a harmonious *global* balance of forces, long before collisions occur. (6/14)
Imagine, for instance, playing a game of pool with charged particles rather than rigid bodies.

In contrast to collision, where you can aggressively prune away distant barrier functions, the whole point of repulsive optimization is to get long-range forces into equilibrium. (7/14)
Beyond just dealing with O(n²) interactions, repulsive energies lead to some rich & interesting challenges. For one thing, repulsive energies for points don't nicely generalize to curves & surfaces. For another, you have to deal w/ optimization of "fractional derivatives." (8/14)
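To make the O(n²) cost concrete, here's a naive sketch of a discretized tangent-point-style energy for a closed 2D polyline. The kernel |T(x) × (y−x)|^α / |x−y|^β follows the general form of the tangent-point energy, but the exponents (α=2, β=4), the midpoint discretization, and the function name are purely illustrative; the papers use a more careful discretization, different parameter choices, and fast (hierarchical) evaluation rather than this all-pairs loop.

```python
import numpy as np

def tangent_point_energy(verts, alpha=2.0, beta=4.0):
    """Naive O(n^2) tangent-point-style energy for a closed 2D polyline.
    Sums the kernel |T_i x (m_j - m_i)|^alpha / |m_j - m_i|^beta over
    pairs of edge midpoints, weighted by edge lengths."""
    v = np.asarray(verts, dtype=float)
    e = np.roll(v, -1, axis=0) - v          # edge vectors of the closed loop
    L = np.linalg.norm(e, axis=-1)          # edge lengths
    T = e / L[:, None]                      # unit tangents
    m = v + 0.5 * e                         # edge midpoints
    E = 0.0
    n = len(v)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = m[j] - m[i]
            r = np.linalg.norm(d)
            cross = abs(T[i, 0] * d[1] - T[i, 1] * d[0])  # 2D cross product
            E += (cross**alpha / r**beta) * L[i] * L[j]
    return E
```

The key property survives even in this toy version: nearby pieces of curve that point *along* each other (like neighboring segments) contribute little, while nearby pieces that face each other (near self-contact) blow the energy up.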
…Rather than regurgitate everything here on Twitter, I'll point again to this talk:

Since I know it's easy to get lost in the math & switch off the video, I gave each section a level of difficulty. Don't feel bad about skipping to the fun stuff! (9/14)
You might also be interested in checking out this longer Twitter thread, which also goes deeper into the details for curves:

But the generalization to repulsive surfaces lets us do some really beautiful things like never before. (10/14)
To end with just one cool example, consider this pair of handcuffs in a "linked" and "unlinked" configuration.

Do you think it's possible to unlink the handcuffs, without breaking them apart or letting them pass through each other? (11/14)
If you said "yes," you were right (hey, you had a 50% chance!)

But can you show me how?

There are lots of beautiful drawings of this counter-intuitive transition—but unless you have a good geometric imagination, they can be pretty hard to follow. (12/14)
However, we can use repulsive optimization to make this remarkable topological phenomenon come to life.

Just minimize tangent-point energy starting in both linked and unlinked configurations, then join the two movies together (the 2nd one playing backwards). Voilà! (13/14)
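The "join the two movies" trick is literally just concatenation: play the first frame sequence forward, then the second one in reverse, so both halves meet at (roughly) the same minimal-energy configuration. A hypothetical helper (frame format is up to you):

```python
def join_movies(forward_frames, backward_frames):
    """Concatenate two optimization 'movies' into one transition:
    the first plays forward, the second plays in reverse, so the
    seam lands where both optimizations converged."""
    return list(forward_frames) + list(reversed(backward_frames))
```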
The paper/videos have lots of other examples of new & different things you can do with repulsive shape optimization.

But the real limitation is *our* own creativity!

I'd love to hear what ideas this sparks for you, and what problems, challenges, & creations it leads to… (14/14)

