New Paper 📢: Adversarial Machine Learning - Industry Perspectives

TL;DR:
- 25 out of 28 organizations we interviewed noted that they don't have the right tools in place to secure their ML assets
- The SDL for industry-grade ML models has lots of open questions

arxiv.org/abs/2002.05646 1/
Part I:
There is an adversarial ML research explosion - 150 papers in the last 2 years - and the field has been around since 2004 (see @biggiobattista's paper sciencedirect.com/science/articl… )

We ask a broader question: what does adversarial ML mean to ML and security engineers in industry? 2/
We spoke to 18 large organizations and 10 SMBs. Most of them clustered around cybersecurity, but we also covered "security-sensitive" applications like healthcare and banking.
We spoke to ML engineers and security analysts in these organizations to learn how they approach adversarial ML 3/
We wanted to understand three themes:
1) What process does the org currently follow when securing ML systems?
2) What kind of attack would impact their org the most?
3) If their ML system were under attack, how would the security analyst approach it? 4/
Some of the results:

a) Traditional security is more important
As one security analyst put it, "Our top threat vector is spearphishing and malware on the box. This [adversarial ML] looks futuristic."

Large organizations and governments are spearheading change. SMBs, not so much. 5/
b) Poisoning attacks have caught everyone's attention. "We use ML systems to suggest tips and financial products for our users. The integrity of our ML system matters a lot. Worried about inappropriate recommendation like attack on Tay" 6/
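(A minimal sketch of what that worry looks like in code, not from the paper: a label-flipping poisoning attack against a toy scikit-learn classifier. The dataset, model, and 20% flip rate are all illustrative assumptions.)

```python
# Minimal sketch of a label-flipping poisoning attack. Illustrative only:
# the dataset, model, and 20% flip rate are assumptions, not from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker flips the labels of 20% of the training set.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```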
Also, model stealing is on people's minds. As a large retail org put it, "We run a proprietary algorithm to solve our problem and it would be worrisome if someone can reverse engineer it"

No kidding. 7/
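(A hedged sketch of why that worry is realistic: a surrogate trained only on a black-box victim's query responses can approximate it. The victim/surrogate models and the 1,000-query budget below are assumptions for illustration.)

```python
# Minimal sketch of model extraction: fit a surrogate on the victim's
# predicted labels. Models and the 1,000-query budget are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=20, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X[:2000], y[:2000])

# The attacker only sees input/output behavior: send queries, record labels.
queries = np.random.default_rng(1).normal(size=(1000, 20))
stolen_labels = victim.predict(queries)

surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# How often does the surrogate agree with the victim on held-out data?
holdout = X[2000:]
agreement = (surrogate.predict(holdout) == victim.predict(holdout)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of held-out inputs")
```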
Part II:

There are a lot of insights into how orgs secure their "vanilla" software today. For instance, 122 orgs follow some form of Security Development Lifecycle to design, develop, deploy, and safeguard vanilla software (see @Lipner safecode.org/wp-content/upl…)

8/
@NicolasPapernot was the first to take a security approach to ML, applying Saltzer and Schroeder's principles (incidentally, @Lipner references these principles as well). Nicolas also pointed to gaps in auditing and monitoring - arxiv.org/abs/1811.01134

9/
Building on top of @NicolasPapernot's work, we asked: what does trustworthy ML mean in an industry setting? Specifically, how can ML engineers and security analysts develop, deploy, and secure industry-grade ML models?

We took the Security Development Lifecycle and looked for gaps. 10/
To use forensics as an example: today we know how to do forensics of traditional systems reasonably well. NIST catalogs different tools for memory capture and analysis on traditional computer systems - toolcatalog.nist.gov/search/index.p…

11/
Lots of open questions when it comes to ML systems.

What are the artifacts that should be analyzed when an ML system is under attack? The model file? The queries that were scored? Training data? Architecture? Telemetry? Hardware? How should these artifacts be collected? 12/
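(One hypothetical way to start closing that gap: wrap the model so every scored query is logged as a forensic artifact. The AuditedModel wrapper and its log fields below are my assumptions, sketched only to make the question concrete; hashing inputs rather than storing them raw is one privacy-motivated design choice.)

```python
# Hypothetical sketch: log every scored query so there is *something* to
# analyze after an attack. The wrapper and log fields are assumptions; the
# thread only asks which artifacts should be collected.
import hashlib
import json
import time

class AuditedModel:
    def __init__(self, model, log_path="queries.jsonl"):
        self.model = model        # any object with a .predict(X) method
        self.log_path = log_path  # append-only JSON-lines audit log

    def predict(self, X):
        preds = self.model.predict(X)
        with open(self.log_path, "a") as f:
            for x, p in zip(X, preds):
                f.write(json.dumps({
                    "ts": time.time(),
                    # Hash inputs instead of storing them raw (privacy).
                    "input_sha256": hashlib.sha256(
                        json.dumps(list(map(float, x))).encode()
                    ).hexdigest(),
                    "prediction": int(p),
                }) + "\n")
        return preds

# Usage: audited = AuditedModel(victim); audited.predict(queries)
```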
Is forensics of ML systems platform dependent? For instance, would it depend on ML frameworks (PyTorch vs. TensorFlow), ML paradigms (e.g., reinforcement learning vs. supervised learning), and ML environment (running on host vs. cloud vs. edge)? 13/
If you are an #infosec veteran (looking at you @dinodaizovi @alexstamos @SwiftOnSecurity @wendynather), this is so not new: Yet another emerging tech is insecure.

But that same tech is controlling everything: from your finances to healthcare to the video you'll watch on Netflix 14/
In @CamlisOrg 2019, Nicholas Carlini likened the current state of adversarial ML field to "crypto pre-Shannon".

This sums it up well. For me, we are deploying ML systems like it is 2020 but securing them like it is 1910.
If you are a researcher in the adversarial ML space and want to work in any of these gaps, please reach out to me or @drhyrum! We are open for business and believe team work = dream work.
P.S.: If you are working in the adversarial ML space, work more with old-school crypto/security folks. They keep it real.

- @JohnLaTwC is the root cause of SDL in the industry
- Magnus, @MSwannMSFT and @goertzel are security gurus
- @AndiC1122 and @sharonxia get it out of ya.