3 parts to this thread:
- Notes on the attack surface of ML systems
- Notes on the implications of adversarial ML for national security
- Follow-ups on the NatSec angle
Security of ML systems begins with basic software security. 2/
That's a problem when Python is the lingua franca of ML engineers: zdnet.com/article/github… 3/
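To make the "basic software security" point concrete, here is a minimal sketch (my own illustration, not from the linked article) of why deserializing untrusted ML artifacts with Python's `pickle` is dangerous: unpickling can execute arbitrary code, not just restore data.

```python
import pickle

# Hypothetical payload: __reduce__ tells pickle to call a function on
# load. Here it harmlessly calls eval("1 + 1"); an attacker shipping a
# "model file" could call anything instead.
class Payload:
    def __reduce__(self):
        return (eval, ("1 + 1",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # code runs during deserialization
print(result)                # 2 -- proof that eval executed on load
```

This is why loading model weights from untrusted sources is a software-security problem before it is ever an ML problem.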
arxiv.org/pdf/1708.06733… 4/
@JohnLaTwC "Githubification" post medium.com/@johnlatwc/the… shows how threat hunters like @Cyb3rWard0g are increasingly using Jupyter notebooks for hunting. 5/
At least with Meltdown, the vulnerability was localized to the CPU.
How many orgs have a detailed inventory of the ML systems in their org, spanning cloud, federated learning, and ML on the edge? 6/
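The inventory question above can be sketched as a tiny data structure. This is a hypothetical illustration (field names and records are my own assumptions, not from the thread) of the kind of record such an inventory would track across cloud, federated, and edge deployments.

```python
from dataclasses import dataclass, field

# Hypothetical ML asset inventory record; all fields are illustrative.
@dataclass
class MLAsset:
    name: str
    deployment: str            # "cloud", "federated", or "edge"
    model_source: str          # e.g. "internal", "vendor", "open-source"
    data_sources: list = field(default_factory=list)

inventory = [
    MLAsset("fraud-scorer", "cloud", "internal", ["payments-db"]),
    MLAsset("keyboard-predict", "federated", "open-source", ["on-device"]),
]

# A simple query: which assets live outside the org's cloud perimeter?
outside_cloud = [a.name for a in inventory if a.deployment != "cloud"]
print(outside_cloud)  # ['keyboard-predict']
```

Even a flat list like this answers the question most orgs cannot: where do our models run, and what data feeds them?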
The report details how ML is currently used in national security (FRT, riot control, crisis prediction, recon, intelligence gathering) and offers more interesting observations, like ML countermeasures. 7/
Think of the global supply chain for hardware and software in general, but, as the report puts it, "every other state might depend on US/China for powering their militaries" 8/
One of the proposed bans was on Deep Learning. Let that sink in. 9/
In a simple case, who do you attribute blame to when your autonomous vehicle crashes because of an errant adversarial example? 11/
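For readers new to the term, an adversarial example can be sketched in a few lines. This is a minimal FGSM-style illustration of the idea (my own toy setup with a linear model, not any system mentioned in the thread): a tiny, sign-of-gradient perturbation flips the model's decision.

```python
import numpy as np

# Toy linear "classifier": score = w @ x, decision = sign(score).
rng = np.random.default_rng(0)
w = rng.normal(size=16)            # fixed model weights
x = -w / np.linalg.norm(w) * 0.1   # a small input the model scores negative

score = float(w @ x)               # clean prediction (< 0)

# FGSM step: for a linear model, d(score)/dx = w, so perturb each
# input coordinate by eps in the direction that raises the score.
eps = 0.2
x_adv = x + eps * np.sign(w)
adv_score = float(w @ x_adv)       # perturbed prediction (> 0)

print(score, adv_score)            # decision flips from negative to positive
```

The perturbation is bounded by eps per coordinate, yet the decision flips; on image classifiers the same trick produces changes invisible to humans.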
Here are some follow-ups if you are interested in this:
1) @Gregory_C_Allen's AI and National Security is essential reading - belfercenter.org/sites/default/…
2) China's AI Investment report by @CSETGeorgetown - cset.georgetown.edu/wp-content/upl… 12/
4) @Miles_Brundage's mammoth and awesome Malicious AI report maliciousaireport.com 13/