In ML, the term "manifold" generally refers to the image of a differentiable map ℝᵏ→ℝⁿ. It's sometimes assumed (explicitly or implicitly) to have a differentiable inverse back to ℝᵏ, i.e. to be a differentiable embedding. Either way, it's not the same notion of "manifold" used in math—(1/n)
With the invertibility assumption, this *is* a manifold, but an extremely special type of manifold: while manifold theory offers a lot of complexity around gluing together local Euclidean "charts", typical representation-learning learns just one *global* Euclidean chart! (2/n)
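The "one global chart" picture can be made concrete with a toy example (illustrative, not from the thread): a decoder whose image is a helix in ℝ³, with an encoder that inverts it. The names `decode`/`encode` are hypothetical stand-ins for a learned representation map.

```python
import numpy as np

def decode(z):
    """A single global chart: a differentiable map from latent R^1 into R^3.
    Its image is a helix, a 1-dimensional submanifold of R^3."""
    z = np.asarray(z, dtype=float)
    return np.stack([np.cos(z), np.sin(z), z], axis=-1)

def encode(x):
    """Differentiable inverse on the image: the latent coordinate can be
    read straight off the third component."""
    return np.asarray(x, dtype=float)[..., 2]

# One chart covers the whole "manifold": encode(decode(z)) == z everywhere,
# with no gluing of overlapping local charts required.
z = np.linspace(-5.0, 5.0, 11)
assert np.allclose(encode(decode(z)), z)
```

A compact manifold like the circle or torus can never be covered this way by a single chart, which is exactly why manifold theory bothers with atlases at all.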
To boot, an ML "manifold" is an isometrically embedded submanifold of ℝⁿ. By the Nash embedding theorem, an isometric embedding into some ℝⁿ exists for any Riemannian manifold, but finding one is not usually so trivial. (3/n)
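A standard illustration of the nontriviality (my example, not from the thread): the flat torus. The map f(u,v) = (cos u, sin u, cos v, sin v) embeds it isometrically in ℝ⁴—check that |f_u|² = |f_v|² = 1 and f_u·f_v = 0, so the induced metric is du² + dv², i.e. flat. Yet no C² isometric embedding of the flat torus into ℝ³ exists (curvature obstructions); Nash–Kuiper guarantees only a wrinkly C¹ one.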
If we *don't* assume invertibility, then it might not be a manifold at all, due to self-intersections! We can speculate about whether this happens in practice, or whether it happens only on a measure-zero set (so we can safely "pretend", as with the differentiability of ReLU)… (4/n)
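A concrete self-intersection (again an illustrative example, not from the thread): the figure-eight curve γ(t) = (sin 2t, sin t). It's the image of a smooth map ℝ→ℝ², yet it crosses itself at the origin, so the image is not a manifold there.

```python
import numpy as np

def gamma(t):
    """Smooth map R -> R^2 whose image is a figure-eight curve."""
    t = np.asarray(t, dtype=float)
    return np.stack([np.sin(2.0 * t), np.sin(t)], axis=-1)

def gamma_prime(t):
    """Derivative of gamma, used to show the two branches cross transversally."""
    t = np.asarray(t, dtype=float)
    return np.stack([2.0 * np.cos(2.0 * t), np.cos(t)], axis=-1)

# Two distinct parameters map to the same point...
assert np.allclose(gamma(0.0), gamma(np.pi))
# ...with different tangent directions, so near the origin the image looks
# like an "X", not a line: no chart can flatten it into a copy of R.
assert not np.allclose(gamma_prime(0.0), gamma_prime(np.pi))
```

Here γ'(0) = (2, 1) while γ'(π) = (2, −1): the two branches meet at the origin with genuinely different tangents.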
To sum up, ML manifolds are either not necessarily true differentiable manifolds, or they're necessarily very special ones, depending on context. If you find yourself reading about charts & atlases, this won't help you understand the objects that ML researchers have in mind.(5/5)
