Dino A. Dai Zovi
Jul 15, 2023 4 tweets 1 min read
Where @dotMudge makes an important point at @SummerC0n: real data on ATOs shows that SMS 2FA is fine for the vast majority of users. It prevented 100% of 3.3B automated password stuffing attacks, 96% of 12M bulk phishing attacks, and even 76% of the <10k targeted attacks seen over the last year.

The footnote on the slide points to "Data Breaches, Phishing, or Malware?: Understanding the Risks of Stolen Credentials" by Thomas et al. (2017):

static.googleusercontent.com/media/research…
Jul 16, 2022 5 tweets 1 min read
Once I started to see security/risk management in terms of closed-loop control systems, I couldn't stop and can't imagine it ever working any other way.

Whatever org/product you are trying to secure is a system. Control theory is a great way to think about systems. Three observations:

1. Without low-latency and high-fidelity feedback, you fail.

2. Without sufficient ability to affect the state of the system, you also fail.

3. If the feedback loop is disconnected from the process to affect the state of the system, you still fail.
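To make the control-system framing concrete, here is a minimal, hypothetical sketch of a security control loop; the setpoint, sensor, and actuator names are made up for illustration and are not from the thread.

```python
# Hypothetical sketch: security posture as a closed-loop control system.
# All names and thresholds here are illustrative, not from the thread.

import time

TARGET_OPEN_CRITICALS = 0       # desired state (setpoint)
POLL_INTERVAL_SECONDS = 3600    # how often we sample the system

def measure_open_criticals() -> int:
    """Feedback: low-latency, high-fidelity measurement of system state
    (observation 1). Stubbed out for the sketch."""
    return 3

def file_remediation_work(count: int) -> None:
    """Actuator: the ability to actually change the system state
    (observation 2). Stubbed out for the sketch."""
    print(f"opening {count} remediation tickets")

def control_loop() -> None:
    while True:
        observed = measure_open_criticals()        # feedback
        error = observed - TARGET_OPEN_CRITICALS   # compare to setpoint
        if error > 0:
            # Observation 3: the feedback has to drive the actuator;
            # measuring without acting (or vice versa) still fails.
            file_remediation_work(error)
        time.sleep(POLL_INTERVAL_SECONDS)
```

The point of the sketch is the wiring, not the specifics: a sensor, a setpoint, and an actuator connected so the measurement actually drives the change.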
Jul 20, 2021 5 tweets 2 min read
One benefit of studying adjacent disciplines is that you can find some really good ideas to borrow and apply.

For example, SRE studies failure and resilience a lot (e.g. cascading failures). What does a cascading *security* failure look like in your env?

infoq.com/presentations/…

For example, imagine that there is a security failure whereby a malicious entity can launch malware on an authorized user's machine (I know, suspend your disbelief).

How far can they get toward their goals using the authorized access of the most privileged users in your environment?
Mar 20, 2021 10 tweets 3 min read
So much of my timeline is talking about SolarWinds and so little of it is talking about how to properly harden your CI infra to make that kind of attack more difficult to pull off. I'll put some things that I consider good ideas in this thread.

I don't know your environment, so I don't have any good advice on what'll work best for you.
Jul 24, 2020 11 tweets 3 min read
A thread on security culture anti-patterns that I've seen first-hand over the last 25 years that I've been in charge of security for one thing or another.

My thesis: the farther security decisions are made from functional and operational concerns, the worse all three become.

We'll start with my high school's Linux server that was used for our Adventures in Supercomputing program. I was doing independent study in that class; when it got hacked, I could show the teacher how, so it became my job to run it. I was in charge of keeping it usable *and* secure.
Apr 11, 2020 5 tweets 2 min read
One of the greatest superpowers is the right shortcut to thinking. One of the greatest weaknesses is the wrong shortcut to thinking.

Here is a good reminder from @farnamstreet on why "The Map is Not the Territory":

fs.blog/2015/11/map-an…

An example from @nntaleb:

"There is an internal contradiction between measuring risk (i.e. standard deviation) and using a tool [VAR] with a higher standard error than that of the measure itself. [...] The risk management objective function is survival, not profits and losses."
Oct 4, 2019 6 tweets 2 min read
1/n: There is a lot of risk in layering disparate security models because they often leave exploitable gaps at the seams. When you run k8s in the cloud, you are layering many security models: cloud IAM, k8s RBAC, the k8s Pod "sandbox", Linux containers, Unix users/groups, etc.

2/n: When I kick the tires on k8s clusters, I go straight for the seam between k8s and the cloud IAM permissions for maximal privilege escalation. This usually gives you access to powerful IAM roles in large, shared cloud accounts. It's a high-risk, large-blast-radius design.
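As an illustration of what that seam can look like, here is a minimal, hypothetical sketch (mine, not from the thread) assuming an AWS-style node where pods can reach the EC2 instance metadata service: any pod that can hit 169.254.169.254 can read the node's IAM role credentials unless that path is closed off.

```python
# Hypothetical seam check from inside a pod, assuming an AWS-style node
# exposing IMDSv1 at 169.254.169.254. Function names are illustrative.

import json
import urllib.request

IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def node_iam_credentials(timeout: float = 2.0) -> dict | None:
    """Return the node's IAM role credentials if the pod can reach the
    instance metadata service, or None if the seam is closed off."""
    try:
        role = urllib.request.urlopen(IMDS, timeout=timeout).read().decode().strip()
        creds = urllib.request.urlopen(IMDS + role, timeout=timeout).read().decode()
        return json.loads(creds)
    except OSError:
        return None  # metadata endpoint blocked (IMDSv2-only, NetworkPolicy, etc.)

if __name__ == "__main__":
    creds = node_iam_credentials()
    if creds:
        print("pod can read the node's IAM role credentials")
    else:
        print("instance metadata not reachable from this pod")
```

Closing this particular seam typically means requiring IMDSv2 with a hop limit of 1, blocking the metadata IP from pods with a NetworkPolicy, and giving workloads narrowly scoped identities (e.g. IAM roles for service accounts) instead of the node's role.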
Sep 28, 2019 8 tweets 2 min read
1/n: My rant about the new Checkm8 BootROM exploit and what it means for the security of iOS devices.

2/n: It is super cool technically and I'm looking forward to playing with it on my older iOS devices.
Aug 11, 2019 5 tweets 2 min read
My #blackhat keynote in a tweet thread.

I spent years focusing on technical offense: red teaming, pen-testing, and security research. I felt that it wasn't having enough impact, so I pivoted to defensive security engineering.

I learned 3 key lessons:

1. We should reverse engineer our "jobs to be done" by talking to our internal "customers" and understanding their struggle. Every security role can benefit from more customer orientation and understanding of those impacted by our work.
Aug 7, 2019 8 tweets 4 min read
There are a few talks that I wanted to highlight this year at @BlackHatEvents, and they just happen to be the ones that I'm most excited about seeing.

First off, this talk by George Williams, Jonathan Saunders, and Alex Comerford on detecting deep fakes is really important, and unfortunately I missed it:

blackhat.com/us-19/briefing…
Jun 1, 2019 4 tweets 1 min read
Tweeting an IRL rant from the last week: the biggest reason that many orgs are having trouble keeping up with cybersecurity, IMHO, is that attack surface scales with software, but their security orgs try to secure it by scaling with human toil. That's the opposite of leverage.

Until we treat securing the org as a problem that we build and maintain custom in-house software to manage, we'll fail to keep up. That means treating security experts as product owners for cross-functional agile software engineering teams that own security management systems.
Jan 19, 2019 5 tweets 1 min read
Doug McIlroy on the Unix Philosophy:

"(i) Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features." "(ii) Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input."