Security and Privacy of Machine Learning @Uoft @VectorInst @Google 🇫🇷🇪🇺🇨🇦 Co-author https://t.co/VJF39DQPCu; @CentraleLyon + @PSUEngineering alumnus. Opinions mine
Feb 10, 2023 • 5 tweets • 2 min read
Very excited to start the 3rd day of @satml_conf with a tutorial by @thegautamkamath on differential privacy!
We have an ambitious goal: Gautam will take us from 0 to training ML models with differential privacy in the space of 1h!
Large-scale machine learning and statistics inevitably violate individual privacy. Best-effort or heuristic privacy measures (e.g., aggregation, anonymization) don't work. Rigorous privacy guarantees are essential to preserve user trust.
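For a concrete sense of what training with differential privacy involves, here is a minimal sketch of the per-example gradient clipping and Gaussian noising at the core of DP-SGD; the clip norm, noise multiplier, and toy data below are illustrative placeholders, not values from the tutorial.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
    """One DP-SGD update: clip each example's gradient, sum, add Gaussian noise.

    per_example_grads: array of shape (batch_size, num_params).
    clip_norm and noise_multiplier are illustrative, not tuned values.
    """
    # Clip each per-example gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))

    # Sum the clipped gradients and add noise calibrated to the clip norm.
    noisy_sum = clipped.sum(axis=0) + np.random.normal(
        scale=noise_multiplier * clip_norm, size=clipped.shape[1]
    )

    # Average and return the update direction.
    return -lr * noisy_sum / len(per_example_grads)

# Toy usage: 32 examples, 10 parameters.
update = dp_sgd_step(np.random.randn(32, 10))
```

Clipping bounds each individual's influence on the update, and the noise hides whatever influence remains, which is what makes the privacy accounting possible.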
Feb 10, 2023 • 9 tweets • 3 min read
A thread about @timnitGebru's thought-provoking keynote "Eugenics and the Promise of Utopia through Artificial General Intelligence" @satml_conf
(Note that we will release the recording of the keynote soon, with the rest of the @satml_conf talks)
Timnit started by asking "what is AGI?" She pointed out that AGI, as it is often presented, is an unscoped system with the apparent goal of doing everything for everyone in any environment.
She then asked: why?
Feb 8, 2023 • 8 tweets • 2 min read
SaTML @satml_conf 2023 kicks off with @zicokolter giving a retrospective on robustness in ML!
Is the toy problem of ℓp-norm robustness even possible to solve?
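For context, ℓp-norm robustness asks that a model's prediction stay unchanged for every perturbation whose p-norm is at most some budget epsilon. A minimal sketch of the projected gradient attack commonly used to probe this, assuming an ℓ∞ ball and illustrative eps/step values (not from the talk):

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.03, step=0.01, iters=10):
    """Projected gradient ascent on the loss inside an L-infinity ball of radius eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Take a signed gradient step, then project back onto the eps-ball.
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach().requires_grad_()
    return (x + delta).detach()

# Toy usage with a linear model on random data (shapes are illustrative).
model = torch.nn.Linear(20, 5)
x, y = torch.randn(8, 20), torch.randint(0, 5, (8,))
x_adv = pgd_linf(model, x, y)
```

A model is considered robust on an input if no perturbation inside the ball flips its prediction; attacks like this only provide an upper bound on that robustness.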
May 21, 2019 • 10 tweets • 4 min read
CleverHans blog post with @nickfrosst: we explain how the Deep k-Nearest Neighbors (DkNN) and soft nearest-neighbor loss (SNNL) help recognize data that is not from the training distribution. The post includes an interactive figure (credit goes to Nick): cleverhans.io/security/2019/…
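As a rough sketch of the soft nearest-neighbor loss the post builds on (a measure of how entangled different classes are in representation space), here is one way to compute it over a batch; the temperature and toy batch are illustrative, not the post's settings:

```python
import numpy as np

def soft_nearest_neighbor_loss(reps, labels, temperature=10.0):
    """Soft nearest-neighbor loss over a batch of representations.

    For each point, compares the soft probability of picking a same-class
    neighbor against picking any neighbor, with squared distances scaled
    by a temperature; higher values mean classes are more entangled.
    """
    reps, labels = np.asarray(reps, dtype=float), np.asarray(labels)
    # Pairwise squared Euclidean distances, turned into similarities.
    dists = ((reps[:, None, :] - reps[None, :, :]) ** 2).sum(-1)
    sims = np.exp(-dists / temperature)
    np.fill_diagonal(sims, 0.0)  # exclude self-similarity

    same_class = labels[:, None] == labels[None, :]
    eps = 1e-12
    per_point = np.log((sims * same_class).sum(1) + eps) - np.log(sims.sum(1) + eps)
    return -per_point.mean()

# Toy usage: 6 two-dimensional representations from two classes.
loss = soft_nearest_neighbor_loss(np.random.randn(6, 2), [0, 0, 0, 1, 1, 1])
```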
Models are deployed with little input validation, which boils down to expecting the classifier to correctly classify any input. This goes against one of the fundamental assumptions of ML: models should be presented at test time with inputs that fall on their training manifold.
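A rough sketch of the DkNN-style intuition for adding that missing input validation: look up a test input's nearest neighbors among the training set's representations and flag the input when few neighbors support the predicted label. The representation dimensionality, k, and threshold below are illustrative placeholders, not the paper's exact credibility computation.

```python
import numpy as np

def dknn_style_support(rep, train_reps, train_labels, predicted_label, k=25):
    """Fraction of the k nearest training representations sharing the predicted
    label; low support suggests the input lies off the training manifold."""
    dists = np.linalg.norm(train_reps - rep, axis=1)
    neighbors = np.argsort(dists)[:k]
    return (train_labels[neighbors] == predicted_label).mean()

# Toy usage: flag an input when fewer than half its neighbors agree.
train_reps = np.random.randn(1000, 64)
train_labels = np.random.randint(0, 10, size=1000)
support = dknn_style_support(np.random.randn(64), train_reps, train_labels, predicted_label=3)
flag_as_out_of_distribution = support < 0.5
```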