Self-supervised multimodal ML is promising the next AI breakthrough. In our new work published in @Nature, we debut @cebraAI for self-supervised hypothesis- and discovery-driven science.
First, this is a story about people: @stes_io & @jinnnnnlee are co-first authors, and it was an absolute pleasure to work with them & see the CEBRA magic come to life. A few happy/fun bits are in the @Nature research briefing!
The work: we focused on 4 open-access datasets (synthetic, hippocampus, sensorimotor, and vision) across species & recording methods to show the general performance and many features of #CEBRA.
The strength of CEBRA is its flexibility & performance: no generative model means no restrictions on the data, & it can be used unsupervised (CEBRA-Time) or with labels (CEBRA-Behavior).
Here, we demo it on hippocampus data with CEBRA-Time and then do hypothesis testing for a quantitative readout of model fit (minimal sketch below)!
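For readers who want to try this: a minimal sketch of the two modes, assuming the open-source `cebra` Python package and its scikit-learn-style API. The arrays and hyperparameters below are illustrative placeholders, not the paper's exact settings.

```python
# Minimal sketch of the two CEBRA modes, assuming `pip install cebra`.
# `neural` (timesteps x neurons) and `position` (timesteps,) are
# hypothetical placeholder arrays.
import numpy as np
from cebra import CEBRA

neural = np.load("hippocampus_neural.npy")      # placeholder file
position = np.load("hippocampus_position.npy")  # placeholder file

# CEBRA-Time: fully self-supervised; positive pairs come from temporal proximity.
time_model = CEBRA(model_architecture="offset10-model",
                   conditional="time",
                   output_dimension=3,
                   max_iterations=5000,
                   batch_size=512)
time_model.fit(neural)                      # no labels needed
time_latents = time_model.transform(neural)

# CEBRA-Behavior: same encoder, but behavior labels guide contrastive sampling.
behavior_model = CEBRA(model_architecture="offset10-model",
                       conditional="time_delta",
                       output_dimension=3,
                       max_iterations=5000,
                       batch_size=512)
behavior_model.fit(neural, position)        # labels shape the positive pairs
behavior_latents = behavior_model.transform(neural)
```

Hypothesis testing then amounts to fitting the same model against different candidate labels (or shuffled controls) and comparing the resulting embeddings quantitatively.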
One big #CEBRA moment for us was seeing the impressive video decoding results:
- high performance: >95% accuracy at frame prediction (see the sketch after this list)
- highly similar latents across Neuropixels & 2-photon data
- differences in performance across the mouse visual system
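To make "frame prediction" concrete: a hypothetical sketch where the movie frame shown on screen is predicted from CEBRA latents with a simple kNN classifier from scikit-learn. The arrays, split, and k below are assumptions, not the paper's exact pipeline.

```python
# Hypothetical frame-decoding sketch: predict which movie frame was shown
# from CEBRA latents using a kNN classifier. Arrays are placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

latents = np.load("visual_cortex_cebra_latents.npy")  # (timesteps x dims), placeholder
frame_ids = np.load("movie_frame_ids.npy")            # (timesteps,), placeholder

# Split without shuffling so train/test stay in contiguous time blocks.
X_train, X_test, y_train, y_test = train_test_split(
    latents, frame_ids, test_size=0.2, shuffle=False)

knn = KNeighborsClassifier(n_neighbors=5, metric="cosine")
knn.fit(X_train, y_train)
print(f"frame-prediction accuracy: {knn.score(X_test, y_test):.1%}")
```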
To be continued...
We won't tweet all the science today, so stay tuned for more highlights later!
Our Research Briefing can now be found at rdcu.be/dbhwe
- the problem, our solution, and future directions, including how CEBRA is not limited to neural data. If you use t-SNE or UMAP, consider using CEBRA for more consistent and higher-accuracy results (drop-in sketch below).
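To make the t-SNE/UMAP swap concrete: a minimal sketch, assuming the umap-learn and cebra packages. `X` is stand-in data, and real use of CEBRA-Time assumes time-series structure; the settings are illustrative assumptions, not recommendations.

```python
# Illustrative drop-in comparison: a UMAP embedding vs. a CEBRA-Time embedding
# of the same array. All inputs and settings here are placeholders.
import numpy as np
import umap                # pip install umap-learn
from cebra import CEBRA    # pip install cebra

X = np.random.randn(1000, 50).astype(np.float32)  # placeholder (samples x features)

umap_embedding = umap.UMAP(n_components=2).fit_transform(X)

cebra_model = CEBRA(conditional="time", output_dimension=2,
                    max_iterations=1000, batch_size=512)
cebra_model.fit(X)
cebra_embedding = cebra_model.transform(X)
```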