Interested in representation learning? Want to understand your time-series data? Want a method that outperforms t-SNE and UMAP at finding consistent latent factors in time-series data? Got your attention? 🔥 #cebraAI 1/n
We’ve developed a new (patent-pending) method called CEBRA for you: supervised contrastive learning with continuous and discrete labels, where the sampling process adapts based on the label information. 2/n
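To make tweet 2 concrete, here is a minimal sketch of fitting CEBRA with a continuous behavior label, assuming the scikit-learn-style API from the cebra.ai docs (pip install cebra); the arrays and hyperparameters below are illustrative placeholders, not recommendations.

```python
# Minimal sketch: fit CEBRA with a continuous behavior label so the
# label information guides the contrastive sampling.
# Assumes the scikit-learn-style API documented at cebra.ai.
import numpy as np
import cebra

neural_data = np.random.randn(1000, 50)   # placeholder: timesteps x neurons
behavior = np.random.randn(1000, 1)       # placeholder: continuous label, e.g. position

model = cebra.CEBRA(
    model_architecture="offset10-model",
    batch_size=512,
    temperature=1.0,
    output_dimension=3,    # size of the learned latent space
    max_iterations=5000,
)
model.fit(neural_data, behavior)          # labels shape the positive/negative sampling
embedding = model.transform(neural_data)  # -> (1000, 3) latent embedding
```

In the documented API, passing a discrete label array to fit instead selects discrete-label sampling, and calling fit without any labels uses the unsupervised, time-contrastive mode.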
We show that contrastive learning outperforms generative-model-based methods in consistency, i.e., in identifying ground-truth latents, on real-world datasets! 3/n
Practically speaking: (1) you can do unsupervised & supervised clustering to test, and rule out, hypotheses about what your population data actually represents; (2) use the latents for high-performance decoding (sketch below); (3) compare data: how consistent are the latents across brain regions, tasks, or cell types? 4/n
(4) use the latents as a behavioral classifier; discover what behaviors (latents) your @DeepLabCut data holds! And compare across animals, “scorecard” style… so much more to come. Read the UPDATED preprint at cebra.ai 5/n
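Continuing the sketch above, here is a hedged illustration of point (2): decoding behavior from the learned latents with an off-the-shelf scikit-learn kNN regressor, a generic stand-in decoder rather than necessarily the decoding setup used in the paper.

```python
# Hedged illustration of point (2): decode behavior from the CEBRA latents
# with a generic kNN regressor. Continues the `embedding` and `behavior`
# arrays from the fitting sketch above.
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

X_train, X_test, y_train, y_test = train_test_split(
    embedding, behavior, test_size=0.2, shuffle=False  # keep time order intact
)
decoder = KNeighborsRegressor(n_neighbors=25)
decoder.fit(X_train, y_train)
print("decoding R^2:", decoder.score(X_test, y_test))  # higher = more decodable latents
```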
And many thanks to @matplotlib for the plotting inspiration! See tweet below 👌 #SciArt
Embedding and dynamics of latents!
- top left: neural data
- bottom left: CEBRA decoding performance
- top right: latent-dimension dynamics