A year ago in Nature Biotechnology, Becht et al. argued that UMAP preserved global structure better than t-SNE. Now @GCLinderman and I have written a comment showing that their results were entirely due to different initialization choices: biorxiv.org/content/10.110…. Thread. (1/10)
Here is the original paper: nature.com/articles/nbt.4… by @EtienneBecht @leland_mcinnes @EvNewell1 et al. They used three data sets and two quantitative evaluation metrics: (1) preservation of pairwise distances and (2) reproducibility across repeated runs. UMAP won 6/6. (2/10)
UMAP and t-SNE optimize different loss functions, but the implementations used in Becht et al. also used different default initialization choices: t-SNE was initialized randomly, whereas UMAP was initialized using the Laplacian eigenmaps (LE) embedding of the kNN graph. (3/10)
Were the results due to the different loss functions or to the different initializations? George extended the code of Becht et al. to add UMAP with random initialization and t-SNE (using FIt-SNE) with PCA initialization to the benchmark comparison. This is the result. (4/10)
Turns out, it was *entirely* due to initialization! UMAP with random initialization preserved global structure as poorly as t-SNE with random initialization, whereas t-SNE with informative (PCA) initialization performed as well as UMAP with informative (LE) initialization. (5/10)
This is particularly obvious for the reproducibility metric: of course if one runs t-SNE with random initialization and different random seeds, one can get very different global arrangements of clusters. People tend to think it is not true for UMAP, but we show that it is. (6/10)
In our view, the results of Becht et al. do not actually support the claim that UMAP preserves global structure better than t-SNE, which is how it's been cited in the field. The real lesson is that one should not be using random initialization for either of these methods. (7/10)
This is in agreement with the recommendation to use PCA initialization (rather than random initialization) for t-SNE made in the recent paper by @CellTypist and me. (8/10)
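For readers who want to try this: a PCA initialization can be computed directly and passed to most t-SNE implementations (argument names vary: `initialization` in openTSNE, `init` in scikit-learn). Here is a minimal numpy sketch; the `target_std=1e-4` scaling reflects the small initial spread that standard t-SNE implementations expect, and the function name is mine:

```python
import numpy as np

def pca_init(X, n_components=2, target_std=1e-4):
    """PCA-based initial embedding for t-SNE (a sketch of the
    recommendation to replace random initialization with PCA).

    X: (n_samples, n_features) data matrix.
    Returns an (n_samples, n_components) array to pass as the
    initial embedding of a t-SNE implementation.
    """
    Xc = X - X.mean(axis=0)                      # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Y = U[:, :n_components] * S[:n_components]   # PCA scores
    Y = Y / np.std(Y[:, 0]) * target_std         # rescale so PC1 has std = target_std
    return Y

# toy usage on random data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Y0 = pca_init(X)
```

Because the initialization is deterministic, repeated runs then start from the same global arrangement, which is exactly what drives the reproducibility results above.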
Just to be clear: this is *not* an attack on UMAP! I think UMAP is great :-) But I also think t-SNE is great. And there is plenty of room for further improvements and for better conceptual understanding of this whole family of embedding methods. (9/10)
Deciding which algorithm is more faithful to single-cell data will require further research. Our Comment argues that the Becht et al. paper does not answer that question. (10/10)
A follow-up thread can be found here:
UPDATE: Our comment was published in @NatureBiotech. There was no formal response from the authors so I assume we are all in agreement :-)

Free to read link: rdcu.be/cezFs

Journal link:
nature.com/articles/s4158…

More from @hippopedoid

21 Oct 20
Remember the galaxy-like UMAP visualization of integers from 1 to 1,000,000 represented as prime factors, made by @jhnhw?

I did t-SNE of the same data, and figured out what the individual blobs are. Turns out, the swirly and spaghetti UMAP structures were artifacts :-(

[1/n]
Here is the original tweet by @jhnhw. His write-up: johnhw.github.io/umap_primes/in…. UMAP preprint v2 by @leland_mcinnes et al. has a figure with 30,000,000 (!) integers.

But what are all the swirls and spaghetti?

Unexplained mystery since 2018. CC @ch402. [2/n]
The input here is a 1,000,000 x 78,628 matrix X with X_ij = 1 if integer i is divisible by the j'th prime number, and 0 otherwise. So columns correspond to 2, 3, 5, 7, 11, etc. The matrix is large but very sparse: only 0.0036% of entries are 1s. We'll use cosine similarity. [3/n]
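A small-scale sketch of how such a divisibility matrix can be built with scipy's sparse format (the thread uses integers up to 1,000,000; here any `n_max` works, and filling happens prime-by-prime so the dense matrix is never materialized):

```python
import numpy as np
from scipy.sparse import csr_matrix

def divisibility_matrix(n_max):
    """Binary matrix over integers 1..n_max and primes <= n_max:
    entry (i-1, j) is 1 iff the j'th prime divides integer i.
    A small-scale sketch of the input described in the thread.
    """
    # sieve of Eratosthenes for primes up to n_max
    is_prime = np.ones(n_max + 1, dtype=bool)
    is_prime[:2] = False
    for p in range(2, int(n_max ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = False
    primes = np.flatnonzero(is_prime)

    # one column per prime; its nonzero rows are the multiples of that prime
    rows, cols = [], []
    for j, p in enumerate(primes):
        multiples = np.arange(p, n_max + 1, p)
        rows.append(multiples - 1)               # row i-1 holds integer i
        cols.append(np.full(len(multiples), j))
    rows = np.concatenate(rows)
    cols = np.concatenate(cols)
    X = csr_matrix((np.ones(len(rows), dtype=np.int8), (rows, cols)),
                   shape=(n_max, len(primes)))
    return X, primes

X, primes = divisibility_matrix(1000)
```

With cosine similarity on the rows, two integers are similar when they share prime divisors, which is what the kNN graph behind the embeddings captures.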
20 Jul 20
New preprint on attraction-repulsion spectrum in t-SNE => continuity-discreteness trade-off!

We also show that UMAP has higher attraction due to negative sampling, and not due to its loss. 🤯 Plus we demystify FA2.

With @jnboehm and @CellTypist.
arxiv.org/abs/2007.08902 [1/n]
We get the spectrum by changing the "exaggeration" in t-SNE, i.e. multiplying all attractive forces by a constant factor ρ. Prior work by @GCLinderman et al. showed that ρ->inf corresponds to Laplacian eigenmaps. We argue that the entire spectrum is interesting. [2/n]
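To make the exaggeration explicit, here is a toy O(n²) implementation of the t-SNE gradient with the attractive term scaled by ρ. This is a sketch for intuition only, not the Barnes-Hut or FIt-SNE approximations used in practice:

```python
import numpy as np

def tsne_gradient(Y, P, rho=1.0):
    """Gradient of the t-SNE loss with the attractive term scaled
    by rho ("exaggeration"). rho = 1 recovers standard t-SNE; as
    rho grows, attraction dominates, which prior work links to
    Laplacian eigenmaps.

    Y: (n, d) embedding; P: (n, n) symmetric affinities, zero diagonal.
    """
    diff = Y[:, None, :] - Y[None, :, :]          # pairwise differences y_i - y_j
    w = 1.0 / (1.0 + (diff ** 2).sum(-1))         # Student-t kernel
    np.fill_diagonal(w, 0.0)
    q = w / w.sum()                               # low-dimensional affinities
    attraction = 4.0 * ((rho * P * w)[..., None] * diff).sum(axis=1)
    repulsion = -4.0 * ((q * w)[..., None] * diff).sum(axis=1)
    return attraction + repulsion

# tiny example: gradients at two exaggeration levels
rng = np.random.default_rng(0)
Y = rng.normal(size=(10, 2))
P = rng.random((10, 10)); P = (P + P.T) / 2
np.fill_diagonal(P, 0.0); P /= P.sum()
g1, g4 = tsne_gradient(Y, P, rho=1.0), tsne_gradient(Y, P, rho=4.0)
```

The gradient is linear in ρ, so sweeping ρ smoothly trades attraction against repulsion, which is the spectrum the preprint explores.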
Stronger attraction preserves continuous manifold structure. Stronger repulsion brings out discrete cluster structure.

Here is a toy dataset with 20 Gaussians arranged on a line, like a necklace. With LE one sees the string. With t-SNE one sees the individual beads. [3/n]
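A toy "necklace" dataset along these lines is easy to generate. The sizes, spacing, and the 2-D ambient space below are my assumptions, not necessarily the exact setup in the preprint:

```python
import numpy as np

def necklace_data(n_clusters=20, n_per_cluster=100,
                  spacing=10.0, std=1.0, seed=0):
    """Gaussian clusters with means equally spaced along a line,
    like beads on a string (a sketch of the 20-Gaussians example).
    Returns X of shape (n_clusters * n_per_cluster, 2) and labels.
    """
    rng = np.random.default_rng(seed)
    centers = np.stack([spacing * np.arange(n_clusters),
                        np.zeros(n_clusters)], axis=1)
    labels = np.repeat(np.arange(n_clusters), n_per_cluster)
    X = centers[labels] + rng.normal(scale=std, size=(len(labels), 2))
    return X, labels

X, labels = necklace_data()
```

Embedding this with strong attraction (LE-like) should recover the string; with strong repulsion (t-SNE-like), the individual beads.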
26 Mar 20
Spent some time investigating history of "double descent". As a function of model complexity, I haven't seen it described before 2017. As a function of sample size, it can be traced to 1995; earlier research seems less relevant. Also: I think we need a better term. Thread. (1/n)
The term "double descent" was coined by Belkin et al 2019 pnas.org/content/116/32… but the same phenomenon was also described in two earlier preprints: Spigler et al 2019 iopscience.iop.org/article/10.108… and Advani & Saxe 2017 arxiv.org/abs/1710.03667 (still unpublished?) (2/n)
I don't like the term "double descent" because it has nothing to do with gradient descent. And nothing is really descending. It's all about bias-variance tradeoffs, so maybe instead of the U-shaped tradeoff one should talk about \/\-shaped? И-shaped? UL-shaped? ʯ-shaped? (3/n)
12 Feb 20
Becht et al.: UMAP preserves global structure better than t-SNE.

@GCLinderman & me: only because you used random init for t-SNE but spectral init for UMAP.

@NikolayOskolkov: that's wrong; init does not matter; the loss function does.

This thread is a response to Nikolay. (1/n)
@NikolayOskolkov is the only person I saw arguing with that. Several people provided further simulations showing that UMAP with random init can mess up the global structure. I saw @leland_mcinnes agreeing that init can be important. It makes sense. (2/n)
But @NikolayOskolkov argued against. Here is his popular UMAP write-up: towardsdatascience.com/how-exactly-um…, and here: towardsdatascience.com/why-umap-is-su… he explicitly disagreed with our Comment. I think his UMAP posts are great and I like them a lot, but on this point I believe he is mistaken. (3/n)
16 Dec 19
"The art of using t-SNE for single-cell transcriptomics" by @CellTypist and myself was published two weeks ago: nature.com/articles/s4146…. This is a thread about the initialisation, the learning rate, and the exaggeration in t-SNE. I'll use MNIST to illustrate. (1/16)
FIRST, the initialisation. Most implementations of t-SNE use random initialisation: points are initially placed randomly and gradient descent then makes similar points attract each other and collect into clusters. We argue that random initialisation is often a bad idea (2/16).
The t-SNE loss function only cares about preserving local neighbourhoods. With random initialisation, the global structure is usually not preserved, meaning that the arrangement of isolated clusters is largely arbitrary and depends mostly on the random seed. (3/16)
