Suchi Saria (@suchisaria), 23 tweets
Just resurfacing from two weeks off over the holidays. Visited South Africa: saw gorgeous 🦒🐘🦁s, birds, seaside towns, & spent time thinking! Highlights from 2018 (warning: long thread). It's been a hard year, so this is both bragging & celebrating. Looking forward to all that’s in store for 2019! 1/
First, on the research front, we made headway in multiple open directions.

As ML is being deployed in domains like healthcare, education, & recruiting, it’s critical we understand scenarios in which model outputs may be unreliable during deployment. 2/
A key source of unreliability is unanticipated shift in the data distribution between the training and deployment environments. Previously, we’d discussed one such type, policy shift, and how it gets introduced when deploying a decision-support tool: arxiv.org/abs/1703.10651 3/
Other scenarios include shifts in inputs, labels, and selection bias in the training set. Using graph-based tools, we show a means for discovering whether a model is susceptible to unreliable behavior due to any shift: arxiv.org/abs/1808.03253 (UAI’18). 4/
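Some context on why such checks matter. A simple, generic way to notice a shift in inputs (a standard heuristic, not the graph-based susceptibility test from the UAI’18 paper) is a "domain classifier": train a model to tell training rows from deployment rows, and if it separates them far better than chance, the input distribution has moved. A minimal sketch in Python, with all names made up for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def covariate_shift_score(X_train, X_deploy):
    """AUC of a classifier trained to separate training rows from deployment rows.
    Roughly 0.5 means no detectable input shift; close to 1.0 means a strong shift."""
    X = np.vstack([X_train, X_deploy])
    z = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_deploy))])
    clf = GradientBoostingClassifier()
    return cross_val_score(clf, X, z, cv=5, scoring="roc_auc").mean()

# Toy check: deployment inputs drawn from a shifted distribution.
rng = np.random.default_rng(0)
X_tr = rng.normal(0.0, 1.0, size=(2000, 5))
X_dep = rng.normal(0.5, 1.0, size=(2000, 5))
print(covariate_shift_score(X_tr, X_dep))  # well above 0.5 -> input shift detected
```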
Further, we provide a means for determining which parts of the data distribution to fit in order to proactively correct for likely shifts: arxiv.org/abs/1812.04597 (AISTATS’19). 5/

A key advantage of this approach is that it does not force you to abandon the machinery for training complex neural models; rather, you can view it as a pre-processing step that removes “unsafe” dependencies & specifies which conditionals to fit during training. It comes with nice performance guarantees under certain assumptions. 6/
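To make the “drop unsafe dependencies, fit only stable conditionals” intuition concrete, here is a toy sketch (mine, not the algorithm from the AISTATS’19 paper): a feature whose relationship to the label shifts between environments hurts a model that relies on it, while a model restricted to the stable conditional p(y | x1) holds up at deployment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_strength):
    """y depends on x1 in the same way everywhere; x2 tracks y only as
    strongly as `spurious_strength`, which changes across environments."""
    y = rng.integers(0, 2, size=n)
    x1 = y + rng.normal(0.0, 1.0, size=n)                      # stable dependency
    x2 = spurious_strength * y + rng.normal(0.0, 1.0, size=n)  # unstable dependency
    return np.column_stack([x1, x2]), y

X_tr, y_tr = make_data(5000, spurious_strength=2.0)  # training environment
X_te, y_te = make_data(5000, spurious_strength=0.0)  # deployment: x2 now uninformative

full = LogisticRegression().fit(X_tr, y_tr)            # leans on the unsafe x2 dependency
stable = LogisticRegression().fit(X_tr[:, [0]], y_tr)  # fits only p(y | x1)

print("full model, deployment accuracy:  ", full.score(X_te, y_te))
print("stable model, deployment accuracy:", stable.score(X_te[:, [0]], y_te))
```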
Another interesting direction we pursued is providing a way to audit individual predictions at test time (arxiv.org/pdf/1901.00403… (AISTATS’19)). A model may be good on average but bad at specific data points (e.g., due to lack of support around the test point in the training data). 7/
Can post-training, pointwise audit tools help rule out bad predictions?
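A crude way to see the “lack of support” intuition (an illustrative stand-in, not the auditing procedure from the paper): flag test points whose nearest training neighbors are unusually far away; those are the points where a model that looks good on average may still be unreliable. The function name and thresholds below are invented for the example.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def low_support_mask(X_train, X_test, k=10, quantile=0.95):
    """Flag test points whose mean distance to their k nearest training
    neighbors exceeds the `quantile` of the same statistic on the training set."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_train)
    d_tr, _ = nn.kneighbors(X_train)          # includes each point itself at distance 0
    train_scores = d_tr[:, 1:].mean(axis=1)   # drop the self-match
    d_te, _ = nn.kneighbors(X_test, n_neighbors=k)
    test_scores = d_te.mean(axis=1)
    return test_scores > np.quantile(train_scores, quantile)

# Predictions at flagged points could be withheld or routed to a human reviewer.
```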

Some slides on these topics:
Healthcare, Time, and Causality: The Tricky Trio dropbox.com/s/77gms79wjsvm…

Reliable Predictions by Leveraging Causal Reasoning dropbox.com/s/j4w920elxmsd…
We also made significant progress in multiple disease areas. Our work in Parkinson’s showed first-of-their-kind results in measuring the disease at home. This required ML innovations that address bottlenecks other mobile studies have faced. More here: 9/
Our ML tools have now been used for tens of thousands of patients. Through this, we made headway in expert-in-the-loop decision-making. We received a new NSF grant to study principles of augmentation, so more on this in the coming years. We have room for new grads/postdocs to join. 9/
Back when I was on the faculty job market, I was commonly asked whether my goal was to impact healthcare or computer science. The question often posed a false choice, and I worried about trying to make deep contributions in both. 10/
2018 was particularly gratifying as it came with some of the highest honors from both fields. Personal highlights for me were being selected for the Sloan Research Fellowship in CS, and two translational awards for work impacting patient care: 11/
Two other big surprises were being selected for the National Academy of Medicine’s Emerging Leaders in Health and the World Economic Forum’s Young Global Leaders (widgets.weforum.org/ygl-2018/north…). 12/
I’m reminded that many (my dean, chair, mentors, colleagues anonymously making nominations, and even reviewers) have quietly enabled me without asking for acknowledgement. I’m grateful and have been working to pay it forward. 13/
This was the year of overcommitment. I’d hoped to cut down on travel significantly but, despite that, ended up giving ~45 invited talks, including 8 keynotes. Many of these felt like important opportunities to disseminate work and influence key decision-makers, so I couldn’t triage them. 14/
Served my two years as NeurIPS workshop co-chair, this year alongside @jquinonero. Loved seeing how meticulous he is. We had a successful year with ~120 submissions, ~39 selected workshops, and new changes to increase rigor and transparency around the selection process. 15/
This was the year ML became mainstream in medical research. With the wonderful PLoS editorial team, Atul Butte, Aziz Sheikh & I co-guest-edited PLoS Medicine’s special issue on ML and health to facilitate more downstream publications in this space. More here: 16/
Highlights on the personal front: I focused on making way for others, listening more, and understanding that sometimes when people are not good listeners, it’s because they are struggling to get their voice heard. 17/
I also had to lead much larger teams and work alongside teammates with very diverse expertise. Some good books that helped:
amazon.com/Measure-What-M…
amazon.com/Crucial-Conver…
and amazon.com/High-Output-Ma… 18/
The bulk of my learning came from the company we’re building. We need to be doing more to bring the benefits of ML+health innovation to patients. More on this in 2019. 19/
Received a named chair, an honor typically given to very senior faculty. What’s extra special about this is that these chairs were given to scale the very types of ideas my lab seeks to advance, ideas that were entirely new & seen with skepticism by many five years ago. 20/
Reminds me of the time in college when one of my mentors told me I should consider a different career instead of a CS PhD. I was both confused and shattered at the time. But glad that I didn’t listen. 21/
This year will likely be a year of many firsts again. The first time is often the hardest, especially if you’re attempting to do something uncharacteristic. Don’t let the naysayers discourage you.

Closing on a high note, here is a picture I took yesterday before taking off. To 2019!