Thread by halvarflake, 12 tweets, 2 min read
Clarification for the mitigation discussion: I do not think mitigations are all useless. Shipping the combo of NX, ASLR, and GS jointly constrained an overabundance of trivially exploitable bugs. But I also think mitigations need to be evaluated soberly and critically. This translates into a few things: (1) Every mitigation should have a clearly stated attacker model and precise claims about what it is supposed to achieve. (2) A mitigation should have a number (3 or more?) of real-world past bugs that are rendered *unexploitable* by the mitigation. (3) The way a mitigation deteriorates on repeated exploitation of similar bugs should be considered: is it hard every time, or can I re-use a recipe next time? (4) The cost of the mitigation in terms of complexity, performance, and debuggability should be discussed explicitly.
ASLR+NX killed non-interactive single-shot attacks for the most part. GS killed many stack overflows. These are good, useful technologies.
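To make the GS point concrete, here is a toy sketch of the check that stack-canary mitigations (MSVC /GS, GCC -fstack-protector) insert: a secret value sits between the local buffer and the saved return address and is verified before the function returns. The frame layout, the fixed canary constant, and the function names are all invented for illustration; real implementations randomize the canary per process and abort on mismatch.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative only: real canaries are per-process random values. */
#define CANARY 0xDEADC0DEu

/* Model a stack frame as raw bytes: a 16-byte buffer with the canary
 * stored just above it, where it shields the saved return address.
 * Returns 1 if the copy clobbered the canary (the mitigation fires),
 * 0 if the canary survived. */
static int copy_with_canary(const char *input, size_t len) {
    unsigned char frame[20];            /* 16-byte buffer + 4-byte canary */
    uint32_t canary = CANARY;
    memcpy(frame + 16, &canary, 4);     /* canary sits just above the buffer */
    memcpy(frame, input, len);          /* the unchecked copy a buggy function does */
    memcpy(&canary, frame + 16, 4);     /* re-read before "returning" */
    return canary != CANARY;            /* __stack_chk_fail would abort here */
}
```

A write of up to 16 bytes leaves the canary intact; a 20-byte write overruns the buffer and trips the check, which is exactly the class of stack overflows GS killed.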

Heap layout randomization introduced more issues than it fixed (at least one Linux kernel privesc?). Safe unlinking was quite possibly a waste of time. Fine-grained ASLR against ROP is almost certainly a waste of time. CFI is useful in some scenarios, but a massive engineering and complexity investment when measured against the benefits. CFI in the browser is the same engineering complexity without the benefits.
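For readers who haven't seen it: "safe unlinking" is the integrity check glibc added to doubly-linked-list removal in the heap, verifying that a chunk's neighbours actually point back at it before writing through them, so a corrupted fd/bk pair no longer hands the attacker an arbitrary write. A minimal sketch, with invented structure and function names:

```c
#include <stddef.h>

/* Toy heap chunk: just the forward/backward free-list pointers. */
typedef struct chunk {
    struct chunk *fd, *bk;
} chunk;

/* Safe unlinking: refuse to unlink p unless its neighbours point back
 * at it. Returns 0 on success; returns -1 on corruption, where glibc
 * would call malloc_printerr() and abort instead of performing the
 * attacker-controlled writes. */
static int safe_unlink(chunk *p) {
    if (p->fd->bk != p || p->bk->fd != p)
        return -1;                  /* "corrupted double-linked list" */
    p->fd->bk = p->bk;
    p->bk->fd = p->fd;
    return 0;
}
```

The check costs two loads and two compares per unlink; the debate above is about whether, against realistic attackers who long ago moved past classic unlink overwrites, that check ever rendered a real bug unexploitable.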
My criticism is not directed against mitigations, but against muddled thinking about them. People do not articulate their attacker models clearly; they do not document the assumptions or interactions; they confuse “shuffling stuff around to make it annoying” with “securing”.
So I am all for *good* mitigations. Memory tagging (if it can be made to work) is scary effective. I would trade all browser-CFI engineering, all heap-randomization engineering, and all fine-grained-ASLR-against-ROP engineering for working memory tagging.
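Why memory tagging is in a different class: every allocation granule carries a small tag, the pointer carries a matching tag in otherwise-unused bits, and every access is checked, so use-after-free and linear overflows fault probabilistically regardless of what the attacker knows about layout. A deliberately simplified software model (in the spirit of ARM MTE; all sizes, names, and the handle encoding are invented for illustration):

```c
#include <stdint.h>

#define GRANULES 8                      /* toy "heap" of 8 granules */

static uint8_t mem_tag[GRANULES];       /* tag stored alongside memory */
static uint8_t next_tag = 1;

/* "Allocate" granule g: stamp it with a fresh 4-bit tag and return a
 * tagged handle, with the tag in the high bits the way MTE uses the
 * unused top byte of a 64-bit pointer. */
static uint16_t mte_alloc(unsigned g) {
    mem_tag[g] = next_tag++ & 0xF;
    return (uint16_t)((mem_tag[g] << 8) | g);
}

/* Access check: 1 = tags match, 0 = mismatch (hardware would fault). */
static int mte_access_ok(uint16_t handle) {
    unsigned g = handle & 0xFF;
    return (handle >> 8) == mem_tag[g];
}
```

Re-tagging a granule on reallocation is what makes a stale pointer from the previous lifetime fault on its next use, which is why a working deployment would invalidate whole bug classes rather than merely shuffle layout.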
Engineering decisions are also resource allocation decisions. Poor mitigations introduce complexity & risk for scant benefit, and misdirect resources from more useful endeavors.
Mitigations also don’t fix the code. Nobody will be able to mitigate themselves out of sufficiently poor code. Re-introducing the same JS JIT bug and slight variants of easily reachable kernel bugs every 12 months is not something mitigations will solve.
Lastly: It is easy to assume that rising 0day prices mean exploits are getting harder to build, but that makes a bunch of assumptions about the market that don't seem to be true. Tech salaries are rising - but not because there are fewer programmers or because programming got harder. Demand for 0day seems inelastic to price increases, and I see no evidence of the economics for 0day vendors having gotten worse.
Enough tweetstorm for the night :-) - perhaps the above should really be a blog post.