For example, imagine that there is a security failure whereby a malicious entity can launch malware on an authorized user's machine (I know, suspend your disbelief).
How far can they get on their goals using the authorized access of the most privileged users in your environment?
For the purposes of this analysis, treat exploiting another vulnerability or crossing another security boundary as out of scope. Assume they have the user's creds and can re-authenticate.
How many users in your env have unilateral access to RCE your entire endpoint fleet?
If you think about security through the lens of resilience, any capability to maliciously side-effect your entire fleet at once is a strategic risk.
Perhaps changes should cascade through segments of the fleet with separate individuals' manual approvals at 1%, 10%, 50%, etc.
Or just have a gazillion Active Directory Domain Admin accounts for whatever systems management infra or tasks are needed. You do you.
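A minimal sketch of what that cascade could look like. `Stage` and `plan_rollout` are illustrative names, not a real tool; the point is that no single approver can push a change past a small slice of the fleet.

```python
# Hypothetical sketch of a staged fleet rollout: each stage widens the
# blast radius only after a *different* human approves it.
from dataclasses import dataclass


@dataclass
class Stage:
    percent: int   # share of the fleet this stage may touch
    approver: str  # who signed off; must differ from earlier approvers


def plan_rollout(approvals: list[tuple[int, str]]) -> list[Stage]:
    """Build a rollout plan, refusing any stage whose approver
    already approved an earlier (smaller) stage."""
    seen: set[str] = set()
    stages: list[Stage] = []
    for percent, approver in approvals:
        if approver in seen:
            raise ValueError(f"{approver} already approved an earlier stage")
        seen.add(approver)
        stages.append(Stage(percent, approver))
    return stages


plan = plan_rollout([(1, "alice"), (10, "bob"), (50, "carol"), (100, "dave")])
```

The separate-approver rule is the whole game: compromising one person's account no longer side-effects the entire fleet at once.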
So much of my timeline talking about SolarWinds and so little of my timeline talking about how to properly harden your CI infra to make that kind of attack more difficult to pull off.
I'll put some things that I consider good ideas in this thread. I don't know your environment, so I can't say what'll work best for you.
Build systems that hash their inputs to derive the name of the resulting output and cache results in content-addressable storage (CAS) take a little effort to understand, but they're such a powerful security idea:
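A minimal sketch of the idea, assuming SHA-256 keys and a local cache directory (both illustrative — real systems like distributed build caches work the same way at larger scale):

```python
# Content-addressable build caching: the output's name is the hash of
# everything that went into producing it, so identical inputs always hit
# the cache and any tampered input changes the key.
import hashlib
from pathlib import Path

CACHE = Path("/tmp/cas-cache")  # illustrative cache location


def cache_key(tool: str, flags: list[str], sources: dict[str, bytes]) -> str:
    h = hashlib.sha256()
    h.update(tool.encode())
    for flag in flags:
        h.update(flag.encode())
    for name in sorted(sources):  # stable ordering keeps keys deterministic
        h.update(name.encode())
        h.update(hashlib.sha256(sources[name]).digest())
    return h.hexdigest()


def build(tool, flags, sources, compile_fn):
    key = cache_key(tool, flags, sources)
    out = CACHE / key
    if out.exists():               # cache hit: inputs provably unchanged
        return out.read_bytes()
    artifact = compile_fn(sources)  # cache miss: actually build
    CACHE.mkdir(parents=True, exist_ok=True)
    out.write_bytes(artifact)
    return artifact
```

The security property: a build-server compromise that silently swaps a source file or compiler flag produces a *different* key, so it can't poison the cached artifact that trusted inputs resolve to.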
A thread on security culture anti-patterns that I've seen first-hand over the last 25 years that I've been in charge of security for one thing or another.
My thesis: the farther security decisions are made from functional and operational concerns, the worse all three become.
We'll start with my high school's Linux server that was used for our Adventures in Supercomputing program. I was doing independent study in that class, it got hacked, I could show the teacher how, so it became my job to run it. I was in charge of keeping it usable *and* secure.
I had to lock things down but I had to keep it usable for students to *telnet* in and develop their fortran projects. If it broke, it was my job to fix it. If it got broken into again, it was also my job to recover from it. I owned resilience: uptime, backups, and security.
"There is an internal contradiction between measuring risk (i.e. standard deviation) and using a tool [VAR] with a higher standard error than that of the measure itself. [...] The risk management objective function is survival, not profits and losses."
In a former life, I assessed all security risks to the firm where I was head of security using CVSS. It prioritized my work and helped make nice charts about how much risk I reduced each quarter. The model didn't properly capture the biggest security risks and discovery suffered.
1/n: There is a lot of risk in layering disparate security models because they often leave exploitable gaps at the seams. When you run k8s in the cloud, you are layering many security models: IAM, k8s RBAC, the k8s Pod "sandbox", Linux containers, Unix users/groups, etc.
2/n: When I kick the tires on k8s clusters, I go straight for the seam between k8s and the cloud IAM permissions for maximal privilege escalation. This usually gives you access to powerful IAM roles in large, shared cloud accounts. It's a high-risk, large blast radius design.
3/n: Compare to the approach of baking app+OS into a single immutable AMI and embracing IAM security model with roles, sub-accounts/projects. App and OS vulns are roughly equivalent because they can only escalate to app's IAM role. If you use sub-accounts per app, even better.
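A sketch of why that seam is so valuable to an attacker, assuming an AWS node exposing the classic IMDSv1 endpoint and a pod with no egress restrictions. The `fetch` parameter is injectable purely for illustration:

```python
# The k8s/cloud-IAM seam: a pod that can reach the node's metadata
# service can mint credentials for the *node's* IAM role, escalating
# past anything k8s RBAC says about that pod.
import urllib.request

# AWS's well-known link-local metadata endpoint (IMDSv1).
METADATA = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"


def node_role_creds(fetch=lambda url: urllib.request.urlopen(url, timeout=2)
                    .read().decode()):
    """fetch() is injectable so the flow can be shown without a live node."""
    role = fetch(METADATA).strip()   # first request lists the attached role
    return fetch(METADATA + role)    # second returns AccessKeyId/SecretAccessKey/Token
```

Blocking pod access to the metadata service, or requiring IMDSv2 with a hop limit of 1, closes this particular seam — which is exactly the kind of gap the single-model IAM approach in 3/n avoids by construction.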
1/n: My rant about the new Checkm8 BootROM exploit and what it means for security of iOS devices.
2/n: it is super cool technically and I’m looking forward to playing with it on my older iOS devices.
3/n: There is a world of difference in the security of iOS-based devices between the last public BootROM exploit (limera1n) and now due to the introduction of the Secure Enclave. With limera1n, you could boot a ramdisk and brute force a 4-digit PIN in roughly 18 minutes.
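For scale, the quoted ~18-minute figure implies the following, assuming a straight linear sweep of the keyspace with no retry throttling (the BootROM exploit bypasses iOS's retry limits, leaving only the hardware key-derivation time):

```python
# Back-of-the-envelope: what per-attempt cost does "all 4-digit PINs
# in ~18 minutes" imply?
pins = 10 ** 4                            # every 4-digit PIN
total_sec = 18 * 60                       # the quoted worst case
per_attempt_ms = total_sec / pins * 1000  # -> 108.0 ms per attempt
print(f"~{per_attempt_ms:.0f} ms per PIN attempt")
```

That cost is set by the device's key derivation, not software policy — which is why the Secure Enclave's enforced escalating delays, which survive a BootROM compromise, changed the picture so much.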
I spent years focusing on technical offense: red teaming, pen-testing, and security research. I felt that it wasn't having enough impact, so I pivoted to defensive security engineering.
I learned 3 key lessons:
1. We should reverse engineer our “jobs to be done” by talking to our internal “customers” and understanding their struggle. Every security role can benefit from more customer orientation and understanding of those impacted by our work.
2. Seeking and applying leverage through better feedback loops and delivering software will help us better scale to meet our challenges. Software and data science are force multipliers that we should all strive to fully embrace.