Discover and read the best of Twitter Threads about #neurips22

Most recent (5)

Delighted to share #stablediffusion with Core ML on Apple Silicon built on top of @huggingface diffusers! 🧵
Today's releases of macOS Ventura 13.1 Beta 4 and iOS and iPadOS 16.2 Beta 4 include optimizations that let Stable Diffusion run with improved efficiency on the Apple Neural Engine as well as on the Apple Silicon GPU.
We share sample code for converting models from PyTorch to Core ML, along with example Python pipelines that run text-to-image generation on the converted Core ML models using coremltools and diffusers.
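For intuition, here is a minimal sketch of what a PyTorch-to-Core ML conversion with coremltools looks like. The tiny module and shapes below are illustrative stand-ins, not the actual Stable Diffusion components from the release:

```python
# Minimal sketch of a PyTorch -> Core ML conversion with coremltools.
# TinyDenoiser is a hypothetical stand-in, not the real Stable Diffusion UNet.
import torch
import coremltools as ct

class TinyDenoiser(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(4, 4, kernel_size=3, padding=1)

    def forward(self, latents):
        return self.conv(latents)

model = TinyDenoiser().eval()
example = torch.randn(1, 4, 64, 64)  # SD v1 latents are 4 x 64 x 64
traced = torch.jit.trace(model, example)

# Convert the traced graph to an ML Program so Core ML can schedule it
# on the Neural Engine / GPU via compute units.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="latents", shape=example.shape)],
    convert_to="mlprogram",
    compute_units=ct.ComputeUnit.ALL,
)
mlmodel.save("TinyDenoiser.mlpackage")
```

The same pattern (trace, convert, save as an .mlpackage) applies per component; a text-to-image pipeline then chains the converted text encoder, UNet, and VAE decoder.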
How could you even begin to find theoretical guarantees for zero-shot learning? Our team takes a step in this direction, providing the first non-trivial theoretical guarantees for zero-shot learning with attributes.

Poster #329, Hall J, Thurs., Dec. 1, 4-6 PM

[1/3] #NeurIPS22
Alessio, @CriMenghini, and the team show how to quantify the quality of the information in attribute-based descriptions of unseen classes. Analyzing such descriptions, they derive non-trivial lower bounds such that *no* algorithm can guarantee better performance. [2/3]
Paper: openreview.net/forum?id=tzNWh…
Poster: neurips.cc/virtual/2022/p…

P.S. @CriMenghini is on the job market this year! She's looking for industry research positions, and she has top-notch skills in both computational social science *AND* machine learning!
I will be taking interns and hiring FTEs for my advanced ML team at @insitro.
If you are interested in #AI4science and want to work on methods inspired by real-world problems in drug discovery, reach out. I am also at #neurips22 until the end of the week.
Topics! Links in 🧵1/4
We're interested in:
Generative models
Causal inference
Robustness/uncertainty
Domain-specific models for biology/genetics/chemistry/imaging
Probabilistic/deep modeling
Decision making & experimentation

Internships (available throughout the year):
jobs.lever.co/Insitro/d8d395…

2/4
Simple trick to improve weak supervision: prune your training data! Our embedding-aware pruning method can boost the accuracy of weak supervision pipelines by up to 19%, and it takes only a few lines of code!

Come by #NeurIPS22 poster 611 today at 4pm to hear more, or read on 🧵
Most existing weak supervision setups (Snorkel, etc.) use all the weakly-labeled data to train a classifier. But there's an intuitive tradeoff between coverage and accuracy of the weak labels. If we cover *less* training data w/ higher accuracy, do we get a more accurate model?
Our theory and experiments say yes! We use a pruning method based on the "cut statistic" (Muhlenbach et al., 2004), which clusters examples by their embeddings and picks the examples with the least noisy neighborhoods. Intuitively, homogeneous regions of embedding space are more likely to be correctly labeled.
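For intuition, here is a rough sketch of the neighborhood-agreement flavor of this idea; it is a simplification of the cut statistic, not the authors' code, and the function name, k, and keep fraction are illustrative assumptions:

```python
# Rough sketch of embedding-aware pruning for weak supervision:
# keep the examples whose nearest neighbors in embedding space mostly
# agree with their weak label, and drop the rest before training.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def prune_by_neighborhood_agreement(embeddings, weak_labels, k=10, keep_frac=0.5):
    """Return indices of the `keep_frac` fraction of examples whose
    k-NN neighborhoods are most label-homogeneous."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    _, idx = nn.kneighbors(embeddings)          # idx[:, 0] is the point itself
    neighbor_labels = weak_labels[idx[:, 1:]]   # labels of the k neighbors
    agreement = (neighbor_labels == weak_labels[:, None]).mean(axis=1)
    n_keep = int(keep_frac * len(weak_labels))
    return np.argsort(-agreement)[:n_keep]      # most homogeneous first

# Usage: train the end classifier on the pruned subset instead of all
# weakly labeled data, e.g.
#   keep = prune_by_neighborhood_agreement(X_emb, y_weak)
#   clf.fit(X_emb[keep], y_weak[keep])
```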
Our paper on using Optimal Transport to compare human cortical surfaces was accepted at #NeurIPS22 🥳
We implement a new OT solver, Fused Unbalanced Gromov-Wasserstein (FUGW), to compare human brains even though their anatomy and functional activity patterns differ substantially.
arxiv.org/abs/2206.09398
github.com/alexisthual/fu…
Comparing brains and deriving meaningful statistics from groups of individuals is hard, mostly because (i) brain anatomy varies across individuals and (ii) two individuals presented with the same stimuli will show different activity patterns.
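To make the fused-OT idea concrete (matching vertices using both functional features and intrinsic geometry), here is a toy sketch using POT's balanced fused Gromov-Wasserstein as a simplified stand-in for FUGW, which additionally relaxes the marginal constraints; all sizes and data below are synthetic placeholders:

```python
# Toy sketch of fused optimal transport between two "brains":
# match vertices using a functional feature cost M plus within-subject
# geometry C1, C2. Uses POT's *balanced* fused Gromov-Wasserstein as a
# simplified stand-in for FUGW; the data here are random placeholders.
import numpy as np
import ot

rng = np.random.default_rng(0)
n1, n2, d = 100, 120, 20            # vertices per subject, feature dim

feats1 = rng.normal(size=(n1, d))   # functional activity per vertex
feats2 = rng.normal(size=(n2, d))
coords1 = rng.normal(size=(n1, 3))  # vertex positions on each cortex
coords2 = rng.normal(size=(n2, 3))

M = ot.dist(feats1, feats2)         # cross-subject feature cost
C1 = ot.dist(coords1, coords1)      # within-subject geometry
C2 = ot.dist(coords2, coords2)
p, q = ot.unif(n1), ot.unif(n2)

# alpha trades off the geometry (Gromov) term vs the feature term
plan = ot.gromov.fused_gromov_wasserstein(M, C1, C2, p, q, alpha=0.5)
print(plan.shape)  # (n1, n2): a soft vertex-to-vertex correspondence
```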
We also built a web application to get a better intuition of how much brains differ across humans.
It lets you explore the IBC dataset: an in-depth fMRI study of 12 human subjects who have each been scanned for more than 50 hours!
Try it now bit.ly/3VpgEcD 🤓
