Discover and read the best of Twitter Threads about #sp19

The web is global, but privacy laws differ by country. Which privacy rules do websites follow? We (@rvaneijk, @hadi_a, @__phw, and I) studied this by accessing 1,500 websites from 18 countries each, and analyzing the cookie notices shown. New paper: papers.ssrn.com/sol3/papers.cf…
@Hadi will present this paper today at the Workshop on Technology and Consumer Protection ieee-security.org/TC/SPW2019/Con…

Check out the event if you are at IEEE S&P #SP19.
This is (at least) the 34th study to use OpenWPM, an open-source web privacy measurement tool developed by @s_englehardt & others, first at Princeton and now at Mozilla webtransparency.cs.princeton.edu/webcensus/inde…

It's getting hard to keep track—if you know of other studies that use it, let me know!
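To make the setup concrete, here is a minimal sketch of the multi-vantage-point idea behind the study: fetch the same page through proxies in different countries and apply a crude keyword heuristic for cookie notices. The proxy endpoints and keyword list are illustrative assumptions; the actual study crawled with OpenWPM-instrumented browsers and analyzed the notices far more carefully.

```python
# Minimal sketch, NOT the paper's pipeline: fetch the same site from several
# country vantage points and check whether a cookie notice appears.
import requests

# Hypothetical per-country proxy endpoints (placeholders).
VANTAGE_POINTS = {
    "DE": "http://proxy-de.example.net:8080",
    "US": "http://proxy-us.example.net:8080",
    "JP": "http://proxy-jp.example.net:8080",
}

# Crude keyword heuristic; real studies detect notices far more robustly.
NOTICE_KEYWORDS = ("cookie", "consent", "gdpr")

def has_cookie_notice(url: str, country: str) -> bool:
    """Fetch `url` through the given country's proxy and scan for notice keywords."""
    proxy = VANTAGE_POINTS[country]
    resp = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=15)
    return any(kw in resp.text.lower() for kw in NOTICE_KEYWORDS)

for country in VANTAGE_POINTS:
    print(country, has_cookie_notice("https://example.com", country))
```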
Vitaly Shmatikov and I are delighted to receive a Test of Time award from the IEEE Security & Privacy community for our paper on de-anonymization. #SP19

What have we learned from the last decade of de-anonymization research? Here's our take: randomwalker.info/publications/d…
1. The core idea behind de-anonymization is at least 60 years old (!)
2. Attacks only get better with time. Don't underestimate the power of auxiliary data (see the sketch after this list).
3. The burden of proof should be on data controllers to affirmatively show that anonymized data _can't_ be linked to individuals.
4. Beware privacy theater that merely makes users feel safe. We need sociotechnical infrastructures to close the gap b/w perceived & actual privacy.
5. Many privacy threats today go beyond de-anonymization. For real impact, researchers must engage w/ policymakers & privacy advocates.
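To make point 2 concrete, here is a toy sketch of auxiliary-data record linkage in the spirit of the Netflix Prize attack. The data is made up and the scoring and margin test are simplified stand-ins for the weighted scoring and eccentricity measure of the original attack: match a few known ratings against each "anonymized" record and accept a candidate only if its score clearly stands out from the runner-up.

```python
# Anonymized database: record id -> {item: rating}
database = {
    "u1": {"A": 5, "B": 3, "C": 1, "D": 4},
    "u2": {"A": 2, "C": 5, "E": 4},
    "u3": {"B": 3, "C": 1, "D": 4, "F": 2},
}

# Auxiliary information: a few (possibly noisy) ratings known about the target.
aux = {"A": 5, "C": 1, "D": 4}

def score(record: dict, aux: dict) -> float:
    """Count matching items, with partial credit for ratings off by one."""
    s = 0.0
    for item, rating in aux.items():
        if item in record:
            if record[item] == rating:
                s += 1.0
            elif abs(record[item] - rating) == 1:
                s += 0.5
    return s

scores = {rid: score(rec, aux) for rid, rec in database.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
best, runner_up = ranked[0], ranked[1]

# Only claim a match if the best score clearly separates from the second
# best (a crude version of the attack's eccentricity test).
if scores[best] - scores[runner_up] >= 1.0:
    print(f"likely match: {best} ({scores[best]} vs {scores[runner_up]})")
else:
    print("no confident match")
```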
CleverHans blog post with @nickfrosst: we explain how the Deep k-Nearest Neighbors (DkNN) and soft nearest-neighbor loss (SNNL) help recognize data that is not from the training distribution. The post includes an interactive figure (credit goes to Nick): cleverhans.io/security/2019/…
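For readers who want the loss in code: below is a minimal NumPy sketch of the soft nearest-neighbor loss, assuming the standard definition from the SNNL line of work (for each point, the negative log-probability of sampling a same-class neighbor under a Gaussian kernel at temperature T). It is written for clarity, not efficiency.

```python
import numpy as np

def soft_nearest_neighbor_loss(x: np.ndarray, y: np.ndarray, temperature: float = 1.0) -> float:
    """x: (b, d) batch of representations; y: (b,) integer class labels."""
    # Pairwise squared Euclidean distances, turned into kernel similarities.
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    sim = np.exp(-d2 / temperature)
    np.fill_diagonal(sim, 0.0)          # a point is not its own neighbor
    same = (y[:, None] == y[None, :])   # same-class indicator
    eps = 1e-12
    # For each point: probability of sampling a same-class neighbor.
    p_same = (sim * same).sum(axis=1) / (sim.sum(axis=1) + eps)
    return float(-np.mean(np.log(p_same + eps)))

x = np.random.randn(8, 4)
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
print(soft_nearest_neighbor_loss(x, y, temperature=1.0))
```

A low SNNL means classes are disentangled in the representation space; the post uses this measure to reason about where off-distribution inputs land relative to the training data.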
Models are deployed with little input validation, which boils down to expecting the classifier to correctly classify any input. This goes against one of the fundamental assumptions of ML: models should be presented at test time with inputs that fall on their training manifold.
If we deploy a model on inputs that may fall outside of this data manifold, we need mechanisms for figuring out whether a specific input/output pair is acceptable for a given ML model. In security, we sometimes refer to this as admission control (see arxiv.org/abs/1811.01134).
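Here is one way such an admission check could look in code: a minimal sketch loosely following the DkNN idea, where low label agreement among a test point's nearest training neighbors (in a hidden-representation space) flags the input as off-manifold. The single-layer simplification and the 0.5 threshold are my assumptions; the actual DkNN calibrates a credibility score with conformal prediction across multiple layers.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_admission(train_reprs: np.ndarray, train_labels: np.ndarray, k: int = 10):
    """Index the training representations; return a credibility scorer."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_reprs)

    def credibility(test_repr: np.ndarray, predicted_label: int) -> float:
        """Fraction of the k nearest training neighbors sharing the prediction."""
        _, idx = nn.kneighbors(test_repr.reshape(1, -1))
        return float(np.mean(train_labels[idx[0]] == predicted_label))

    return credibility

# Usage: admit the prediction only if enough neighbors agree with it.
# Random data stands in for real hidden-layer activations.
rng = np.random.default_rng(0)
train_reprs = rng.normal(size=(200, 32))
train_labels = rng.integers(0, 3, size=200)
credibility = fit_admission(train_reprs, train_labels)

score = credibility(rng.normal(size=32), predicted_label=1)
print("admit" if score >= 0.5 else "reject", f"(agreement={score:.2f})")
```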
