CVE-2019-20634 is a 💣🎇

@moo_hax shows that the #MachineLearning system powering @proofpoint email protection (versions up to 2019-09-08) is vulnerable to model stealing & evasion attacks.

nvd.nist.gov/vuln/detail/CV…

Adversarial ML is now an #infosec problem. Wow.

THREAD 1/
To the best of my knowledge, this is the first CVE assigned to an adversarial ML attack, with a CRITICAL rating from @NISTcyber no less. Wow.

And all hot on the heels of @CERT_Division's first vuln note on the topic a week back. Wow again.

Trailblazing, @moo_hax! 1/
Part I: Attack summary. Here is what @moo_hax & @monoxgas do:

1) Model Stealing - Query Proofpoint's email protection, observe the response. From these (query, response) pairs, create a surrogate model

2) Evasion - Attack the surrogate model offline, and find samples that evade it (toy sketch below) 2/
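To make the stealing step concrete, here is a toy sketch (mine, not code from the Proof-Pudding repo; every name in it is illustrative). A simple additive keyword scorer stands in for the black-box filter, and the surrogate is a bag-of-words logistic regression fit on the observed (query, response) pairs:

```python
# Toy sketch of step 1 (model stealing). NOT the Proof-Pudding tool's code.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

GOOD = ("budget", "review", "minutes", "standup", "quarterly")
BAD = ("prize", "winner", "free", "claim")

def target_oracle(text: str) -> int:
    """Stand-in for the black-box filter: an additive keyword score,
    flagged as spam (1) when the score is positive."""
    words = text.lower().split()
    score = sum(w in BAD for w in words) - sum(w in GOOD for w in words)
    return int(score > 0)

# 1) Probe the target with emails you control and record its verdicts.
probes = [
    "claim your free prize today",
    "you are our lucky winner",
    "minutes from this morning standup",
    "quarterly budget attached for review",
    "winner winner prize inside",
    "lunch at noon works for me",
]
responses = [target_oracle(p) for p in probes]   # [1, 1, 0, 0, 1, 0]

# 2) Fit a surrogate on the (query, response) pairs.
vec = CountVectorizer()
surrogate = LogisticRegression().fit(vec.fit_transform(probes), responses)
```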
And because adversarial examples transfer, you replay the attack samples that evaded the offline model against the real online model -- and voila -- you have evaded Proofpoint's email protection system (sketch continues below)

Here is their tool for you to play with: github.com/moohax/Proof-P…

3/
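Continuing the toy sketch from above (it reuses surrogate, vec and target_oracle): attack the surrogate offline by padding the spam with tokens the surrogate weighs toward "clean", then replay the winning sample against the "live" target:

```python
# Toy sketch of steps 2 and 3, continuing the snippet above.
spam = "claim your free prize today"

# Offline: read the clean-leaning tokens straight off the surrogate's weights.
weights = surrogate.coef_[0]
vocab = vec.get_feature_names_out()
clean_tokens = [w for w, c in zip(vocab, weights) if c < 0]

# Pad the spam until the surrogate waves it through.
evasive = spam + " " + " ".join(clean_tokens * 2)
print(surrogate.predict(vec.transform([evasive]))[0])   # 0 -> surrogate evaded

# Replay: the crafted sample transfers to the real target, zero extra queries.
print(target_oracle(evasive))                           # 0 -> target evaded
```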
Part II: Reasoning the CRITICAL Rating

@NISTcyber assigned 9.1, which pushes this into the CRITICAL range. (Yes, I know CVSS scores have their own problems, but we'll come back to that later.)

4/
The thing that stands out for me is @NISTcyber's breakdown of the Exploitability.

@moo_hax's attack is:
EASY to mount (low attack complexity)
+ NO PRIVILEGES required (you are basically querying and observing the response)
+ NO user interaction required

O-M-G! 🔥🔥🔥 5/
And for mounting such a low-cost attack, the attacker's rewards are sweeeeeeet!

You steal the ML model (high confidentiality impact) and you evade the email protection system (high integrity impact)

Once again, say O-M-G and take a sip of water

6/
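If you want to see where the 9.1 comes from, here is the CVSS v3.1 base-score arithmetic, assuming the vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N (which matches every metric called out in tweets 5 and 6):

```python
import math

# CVSS v3.1 base score for AV:N / AC:L / PR:N / UI:N / S:U / C:H / I:H / A:N
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85   # Network, Low, None, None
C, I, A = 0.56, 0.56, 0.0                 # High, High, None

exploitability = 8.22 * AV * AC * PR * UI             # ~3.9, near the ceiling
impact = 6.42 * (1 - (1 - C) * (1 - I) * (1 - A))     # scope unchanged
base = math.ceil(min(impact + exploitability, 10) * 10) / 10

print(base)   # 9.1 -> CRITICAL (threshold is 9.0)
```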
It is interesting to see @NISTcyber classify this as "Improper Input Validation"

To me, this shows @NISTcyber (through no fault of their own) trying to shoehorn a new vuln paradigm (adversarial ML) into the outmoded traditional vuln paradigm 7/
Big picture: Adversarial ML is now going to show the #infosec community how ML systems can be tricked, stolen, evaded and downright exploited.

8/
Broadly, defenders are just not equipped to think about attacks on ML systems as systematically as they do about attacks on "traditional" software systems. Which is crazy -- ML is software.

That's why @jsnover's remark on needing @MITREattack for ML systems is so on point

cc: @stromcoffee 9/