Very cool work showing the feasibility of an adversarial-example-based attack on self-driving cars 😈 I’ve been working on a similar hobby project and love how thorough this write-up is. I have some comments on the real-world feasibility of these attacks:
They attack the autowipers and lane-following features through both digital and physical means. For the digital attack they show you can inject adversarial examples onto the GPU by hooking t_cuda_std_tmrc::compute. This is obviously much harder to accomplish IRL but absolutely worth considering.
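For anyone wondering what injecting adversarial examples at that hook looks like in practice, here's a toy Python sketch (mine, not from the write-up - the real attack patches the native call in GPU memory, and predict_fn / make_hooked_predict / eps are all hypothetical names):

```python
import numpy as np

def make_hooked_predict(predict_fn, perturbation, eps=8 / 255):
    """Wrap a (hypothetical) vision predict function so every camera
    frame gets an adversarial delta added before inference - a
    Python-level analogue of hooking the native compute call."""
    delta = np.clip(perturbation, -eps, eps)  # keep the noise small/imperceptible

    def hooked(frame):
        # frame: HxWx3 float32 image in [0, 1]
        adv = np.clip(frame + delta, 0.0, 1.0)  # stay a valid image
        return predict_fn(adv)

    return hooked
```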
They expertly demonstrate why you should never put a browser on the same network as the CAN bus :P You need physical access once and can then run the attack remotely - also note that you can do the injection without root!
Supposing every Tesla shows up on a tool like Shodan, a network vuln means somebody can inject a noisy image impacting 400,000+ cars. Even if the noisy image fools only 0.01% of the fleet, that's still ~40 cars, and the potential impact is massive because every Tesla except the Roadster has Autopilot...
They target the autowipers with a Worley noise image on an electronic display. However, they DON'T say how effectiveness changes with screen size or relative orientation -> big difference between the feasibility of a billboard-ad attack and a display that must sit right in front of the windshield.
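The write-up doesn't publish its noise generator, but Worley (cellular) noise is easy to reproduce if you want to poke at the size/orientation question yourself. A minimal sketch (mine, arbitrary parameters):

```python
import numpy as np

def worley_noise(h=256, w=256, n_points=20, seed=0):
    """Grayscale Worley (cellular) noise: each pixel's value is its
    distance to the nearest of n_points random feature points,
    normalized to [0, 1]."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0, 1, size=(n_points, 2)) * [h, w]  # random feature points
    ys, xs = np.mgrid[0:h, 0:w]
    # distance from every pixel to every feature point, keep the minimum
    d = np.sqrt((ys[..., None] - pts[:, 0]) ** 2 +
                (xs[..., None] - pts[:, 1]) ** 2).min(axis=-1)
    return d / d.max()
```

E.g. render the output on displays of different sizes and at different angles and see where the misclassification rate drops off.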
They target lane following by adding small white dots to an intersection to cause the car to go into the wrong lane. They acknowledge this attack is human-detectable in clear road conditions, but I guarantee it can be obfuscated in icy conditions with carefully placed sand/salt :P
The downside is that bad weather makes the road messier, so it's harder to orchestrate specific behavior without a lot of planning and intervention, which increases the adversary's operational risk. To me, weather robustness is the bigger safety issue here.
Bad weather might inadvertently produce inputs that look like an adversarial-example attack to the system, leading to similarly harmful outcomes. This needs to be explored more fully by autonomous car manufacturers as a point of general robustness rather than purely as a security concern.
Keen Lab’s write-up offers a concrete example of security risks in AI/ML systems - ML researchers, please take note that the threat model focuses more on system architecture than on adversarial examples themselves!
This also fits nicely with the excellent paper by @jmgilmer @ryan_p_adams @goodfellow_ian et al about practical adversarial example threat models: arxiv.org/pdf/1807.06732…
Also check out the excellent blog post by @catherineols from earlier this week about conflating unsolved research problems with real-world threat models: medium.com/@catherio/unso…
This thread keeps on giving - just remembered relevant work presented at NeurIPS SecML 2018 by @jhasomesh et al. about needing to consider system specifications and semantics when developing robust ML for self-driving cars and other cyber-physical systems.