Discover and read the best of Twitter Threads about #ai4science

Most recent (4)

Beautiful new work, “Del-Dock”, marrying DELs and docking. It stems from the summer internship of @KShmilovich with us, was led by him and Benson Chen, and is our first of many works with @mmsltn and yours truly.

What did we do? A brief 🧵1/6
Paper link:
arxiv.org/abs/2212.00136
In this work we learn deep representations using attention models on top of sampled docking poses, combined with readouts from DNA-encoded libraries, to predict binding affinity. We introduce a likelihood that accounts for background matrix binding.
See what DELs do here 2/6
Why does all this matter? Docking poses are typically sampled from somewhat misspecified energy functions. Using the target-binding information, we can learn representations that rank the sampled poses not by their likelihood under the energy function, but by how well they bind, retaining the relevant poses 3/6
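The idea of pooling many sampled poses with learned attention can be sketched as follows. This is a toy NumPy sketch under stated assumptions, not the paper's implementation: the pose featurization, weight vectors, and shapes are all hypothetical.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_pool_poses(pose_feats, w_score, w_out):
    """Pool featurized docking poses with learned attention weights.

    pose_feats: (n_poses, d) features of sampled poses for one molecule (hypothetical).
    w_score:    (d,) scoring vector producing one attention logit per pose.
    w_out:      (d,) readout vector mapping the pooled vector to a binding score.
    """
    logits = pose_feats @ w_score      # (n_poses,) one relevance logit per pose
    weights = softmax(logits)          # poses weighted by learned relevance, not energy
    pooled = weights @ pose_feats      # (d,) attention-weighted molecule representation
    return float(pooled @ w_out), weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))       # 8 sampled poses, 16-dim features
score, w = attention_pool_poses(feats, rng.normal(size=16), rng.normal(size=16))
print(round(float(w.sum()), 6))        # attention weights sum to 1
```

In training, the attention weights would be fit against the DEL readouts, so poses end up ranked by how much they explain binding rather than by docking energy.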
I will be taking interns and hiring FTEs for my advanced ML team at @insitro.
If you are interested in #AI4science and want to work on methods inspired by real-world problems in drug discovery, reach out. I am also at #neurips22 until the end of the week.
Topics! Links in 🧵1/4
We’re interested in:
Generative models
Causal inference
Robustness/uncertainty
Domain-specific models for biology/genetics/chemistry/imaging
Probabilistic/deep modeling
Decision making & experimentation

Internships (available throughout the year):
jobs.lever.co/Insitro/d8d395…

2/4
Our work has been published in @Nature!!

(G)NNs can successfully guide the intuition of mathematicians & yield top-tier results -- in both representation theory & knot theory.

dpmd.ai/nature-maths
arxiv.org/abs/2111.15161
arxiv.org/abs/2111.15323

See my 🧵 for more insight...
It’s hard to overstate how happy I am to finally see this come together, after years of careful progress towards our aim -- demonstrating that AI can be the mathematician’s 'pocket calculator of the 21st century'.

I hope you’ll enjoy it as much as I enjoyed working on it!
I led the GNN modelling on representation theory, working towards settling the combinatorial invariance conjecture, a long-standing open problem in the area.

My work earned me the co-credit of 'discovering math results', an honour I never expected to receive.
Seems like a great time to plug AI/ML research at @Caltech!

A thread below 👇

1/N
We studied how to personalize the (implanted) neuromodulation of paralyzed subjects to help them stand. This led to novel bandit & Bayesian optimization algorithms that learn from subjective preferences while exploring safely. Work by @YananSui

arxiv.org/abs/1806.07555
arxiv.org/abs/1705.00253

2/N
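Learning from subjective preferences can be sketched as a dueling bandit: the subject compares two stimulation configurations and reports which one felt better. A toy NumPy sketch, not the papers' algorithms; the logistic choice model, the hidden utilities, and random pairing are all assumptions for the demo.

```python
import numpy as np

def preferred(a, b, true_util, rng):
    """Simulated subject feedback: True if config a is preferred over b.
    Logistic choice model over hidden utilities -- an assumption for this demo."""
    p = 1.0 / (1.0 + np.exp(-(true_util[a] - true_util[b])))
    return rng.random() < p

def dueling_bandit(n_arms, n_rounds, true_util, seed=0):
    """Compare random pairs of configurations, track empirical win rates,
    and return the configuration preferred most often (Borda-style winner)."""
    rng = np.random.default_rng(seed)
    wins = np.zeros(n_arms)
    plays = np.ones(n_arms)                # avoid division by zero
    for _ in range(n_rounds):
        a, b = rng.choice(n_arms, size=2, replace=False)
        if preferred(a, b, true_util, rng):
            wins[a] += 1
        else:
            wins[b] += 1
        plays[a] += 1
        plays[b] += 1
    return int(np.argmax(wins / plays))

best = dueling_bandit(5, 2000, true_util=np.array([0.0, 0.5, 2.0, 1.0, -1.0]))
print(best)  # arm 2 has the highest hidden utility
```

The real setting adds the hard part the tweet highlights: exploring safely, i.e., never proposing configurations that could harm the subject, which the naive random pairing above does not address.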
We built upon this work to address personalized gait optimization of exoskeletons.

arxiv.org/abs/1909.12316
(@icra2020 Best Paper Award)

Project: roams.caltech.edu
3/N