There’s a lot to unpack in @MsftSecIntel’s latest blog on the CVE-2021-40444 vulnerability. Here’s a thread of some of the details that I think are notable
The volume of initial exploitation was limited. Most security orgs I talked to didn’t observe it directly in their telemetry
“In August…MSTIC identified a small number of attacks (less than 10) that attempted to exploit a remote code execution vulnerability in MSHTML”
The attribution behind the various components involved in the campaigns is a little more complicated than I typically see (we’ll unpack that more shortly).
But the end motivation was human-operated ransomware by cyber criminals
Attack surface reduction rules (ASR) mitigated the specific attack vector we observed in the wild.
I feel like ASR rules are a defensive capability overlooked by many enterprises. Seriously - go read up on ASR and implement as many rules as you can docs.microsoft.com/en-us/microsof…
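If you want a starting point, here's a minimal sketch (assuming Windows + Defender and PowerShell's Add-MpPreference cmdlet) that turns on the documented "Block all Office applications from creating child processes" rule - double-check the GUID against Microsoft's docs before relying on it:

```python
# Hedged sketch: enable one ASR rule via Defender's PowerShell cmdlets.
# The GUID below is the documented ID for "Block all Office applications
# from creating child processes" -- verify against current Microsoft docs.
import subprocess

OFFICE_CHILD_PROC_RULE = "D4F940AB-401B-4EFC-AADC-AD5F3C50688A"

def enable_asr_rule(rule_id: str, action: str = "Enabled") -> None:
    """Run Add-MpPreference (needs an elevated prompt); action can be
    Enabled or AuditMode."""
    subprocess.run(
        [
            "powershell.exe", "-NoProfile", "-Command",
            f"Add-MpPreference -AttackSurfaceReductionRules_Ids {rule_id} "
            f"-AttackSurfaceReductionRules_Actions {action}",
        ],
        check=True,
    )

if __name__ == "__main__":
    # Start in AuditMode to measure impact before enforcing.
    enable_asr_rule(OFFICE_CHILD_PROC_RULE, action="AuditMode")
```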
The exploit chain involved a series of steps that, when combined, sort of resemble a “cyber Rube Goldberg machine”
These “features” will play into future detection opportunities for defenders because of the specific processes and command-line arguments that have to be spawned
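To make that concrete, here's a hedged toy check over process-creation telemetry for the Office → control.exe spawn with ".cpl:../" in the arguments - field names like parent_name/image_name/command_line are assumptions about your EDR's schema, not any particular product's:

```python
# Hedged sketch: flag the Office -> control.exe spawn pattern from the
# CVE-2021-40444 chain in process-creation telemetry. Field names are
# assumptions about your telemetry schema.
import re

OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe"}
CPL_TRAVERSAL = re.compile(r"\.cpl:\.\./", re.IGNORECASE)

def is_suspicious_spawn(event: dict) -> bool:
    return (
        event.get("parent_name", "").lower() in OFFICE_PARENTS
        and event.get("image_name", "").lower() == "control.exe"
        and bool(CPL_TRAVERSAL.search(event.get("command_line", "")))
    )

print(is_suspicious_spawn({
    "parent_name": "WINWORD.EXE",
    "image_name": "control.exe",
    "command_line": r".cpl:../../../AppData/Local/Temp/payload.inf",
}))  # True
```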
Every cyber security company tracks threat actors in a unique way based on their telemetry and insights. At @MsftSecIntel - we track threat actors initially as DEV-### clusters that may get collapsed into other clusters (based on new analysis) or “promoted” to a proper name
Now this is where attribution gets challenging. It’s clear that a large set of C2 infrastructure was registered by a distinct set of operators
But the same infrastructure has been used by multiple different ransomware actors
I feel like this (possibly new) as-a-service model can really throw intel attribution a curveball (I know we had internal convos trying to make heads or tails of the disparate activity we were/are seeing)
We saw the same CS-C2aaS infrastructure delivering Trickbot/BazaLoader payloads w/o CVE-2021-40444
One explanation for the sophistication of the lure document (or lack thereof) vs. the use of an unpatched exploit: the group deploying the exploit may not have developed the exploit
Cc: @HackingLZ
It probably doesn’t take a rocket 🚀 scientist to figure out - I’m no expert in oleObject relationships and what is normal vs. anomalous. Fortunately I work with folks on the #MSTIC team who are gifted at spotting nuanced anomalies in maldocs - and dig deeper to get to the truth
Getting back to the “cyber Rube Goldberg machine”
Because the exploit requires path traversal & URL protocol handlers, it makes for a more durable hunting signal that is difficult for attackers to evade
Note: this may pick up activity that isn’t related to CVE-2021-40444
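A hedged example of what that hunt could look like over logged command lines/URLs - the mhtml: handler and ".cpl:../" traversal markers come from the published chain; expect unrelated hits, so triage rather than block:

```python
# Hedged sketch: sweep logged command lines / document-embedded URLs for
# two durable markers of this chain: the mhtml: protocol handler and
# ".cpl:../" path traversal. Expect some unrelated hits -- triage, don't block.
import re

MARKERS = [
    re.compile(r"mhtml:https?://", re.IGNORECASE),
    re.compile(r"\.cpl:(\.\./)+", re.IGNORECASE),
]

def hunt(lines):
    for line in lines:
        if any(m.search(line) for m in MARKERS):
            yield line

sample = [
    'Target="mhtml:http://example.com/word.html"',           # handler abuse
    r"control.exe .cpl:../../../AppData/Local/Temp/x.inf",   # traversal
    "control.exe intl.cpl",                                  # benign
]
print(list(hunt(sample)))  # first two lines match
```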
If you want to learn more about the infrastructure related to the possible Cobalt Strike C2-as-a-Service (CS-C2aaS) (aka DEV-0365) - check out @RiskIQ’s companion blog
🧵 on the ongoing outage caused by a CrowdStrike content update. Insights here are mostly based on my time working on/helping build a competitor product, Mandiant Intelligent Response/HX
First & foremost, this sucks for both CrowdStrike & their customers - no one wants to see this happen
What happened? A security content update was released that caused the issue
Note: every security company pushes out content updates routinely. Depending on the architecture of the software and the type of update, each vendor usually has unique processes for rolling these out
Most security software on Windows has two components: a driver/kernel part and a userland part.
At Mandiant we used a driver to generate the realtime events (e.g. create file, create process, etc.) & for very specific actions like raw disk/memory access for forensic collection.
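Purely as illustration (none of this is MIR/HX code - every name here is hypothetical), the split looks roughly like a kernel side emitting events and a userland agent consuming them:

```python
# Hedged toy model of the driver/userland split: the kernel side emits
# realtime events, the userland agent consumes & dispatches them.
# All names/fields are hypothetical illustration, not any product's code.
from dataclasses import dataclass
from queue import Queue

@dataclass
class RealtimeEvent:
    kind: str   # e.g. "create_file", "create_process"
    path: str
    pid: int

def userland_agent(events: Queue) -> None:
    handlers = {
        "create_file": lambda e: print(f"file created: {e.path}"),
        "create_process": lambda e: print(f"process started: {e.path} ({e.pid})"),
    }
    while not events.empty():
        ev = events.get()
        handlers.get(ev.kind, lambda e: None)(ev)

q = Queue()
q.put(RealtimeEvent("create_process", r"C:\Windows\System32\control.exe", 4242))
userland_agent(q)
```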
One (potentially overlooked) aspect of today’s latest breach news is the recent trend of password-stealer malware as the initial vector to gain access to orgs
See those “LOGID-“ files in the screenshot? They are output files from password stealers (e.g. RedLine, Raccoon Stealer)
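For defenders who get their hands on these dumps, here's a hedged sketch of pulling out creds that touch your own domains - real log layouts vary by stealer family, so the URL/USER/PASS triplet format below is an assumption:

```python
# Hedged sketch: scan leaked stealer-log dumps for credentials that touch
# your org's domains. Real layouts vary by family (RedLine, Raccoon...);
# the URL/USER/PASS triplet format is an assumed common shape.
from pathlib import Path

WATCH_DOMAINS = ("example.com",)  # your corp domains -- placeholder

def find_exposed_creds(dump_dir: str):
    for path in Path(dump_dir).rglob("*.txt"):
        record = {}
        for line in path.read_text(errors="ignore").splitlines():
            key, _, value = line.partition(":")
            record[key.strip().upper()] = value.strip()
            if {"URL", "USER", "PASS"} <= record.keys():
                if any(d in record["URL"] for d in WATCH_DOMAINS):
                    yield path.name, record["URL"], record["USER"]
                record = {}

for hit in find_exposed_creds("./stealer_dumps"):
    print(hit)
```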
We discussed how DEV-0537/LAPSUS$ used this technique to gain initial access to a low privileged identity at targeted orgs in our ransomware ecosystem compendium microsoft.com/security/blog/…
Infostealer malware has been around for a long time. What’s changed is the focus has shifted from consumers (and their bank accounts) to enterprises
An interesting business “innovation” is selling all/most credentials/secrets obtained in bulk for a low monthly fee ($100-$250)
The LAPSUS$ Group/DEV-0537 was not on my 2022 bingo card - given the impact of their activities, @MsftSecIntel wanted to detail their unique blend of tradecraft. I've personally given dozens of threat briefings in the last few weeks
They monetize (some of their) intrusions by extorting orgs to prevent public data release. Nowadays we associate that with ransomware gangs, but this isn't a new trend
Reminds me of an investigation I led at the South Carolina Department of Revenue in 2012 oag.ca.gov/system/files/M…
There are multiple DEV-0537 tactics that I haven't observed (outside of red teams):
-Phone based social engineering
-SIM swapping
-Coercing (through 💵) employees at target organizations to divulge creds
Each of these is hard for most orgs to defend against - especially the last
This shouldn’t be news to anyone, but human-operated ransomware is a problem that has gotten completely out of control
The reasons are relatively straightforward:
The cost to pay is often significantly less than the cost of the business impact from downtime (rough math below)
The “supply” of possible targets is significantly higher than in traditional financial crime, which has to target payment/gift cards, banks (or related orgs)
Monetization is also wayyyyy easier
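Rough math with made-up numbers to show why the incentive is what it is:

```python
# Back-of-envelope math with made-up numbers -- purely illustrative.
ransom_demand = 2_000_000            # USD, hypothetical
revenue_per_day = 1_500_000          # USD, hypothetical
recovery_days_without_paying = 14
downtime_cost = revenue_per_day * recovery_days_without_paying

print(f"pay: ${ransom_demand:,} vs. don't pay: ~${downtime_cost:,}")
# With numbers like these, the 'rational' short-term choice is obvious --
# which is exactly why the problem keeps growing.
```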
I don’t think you will see a material change in % of orgs who pay ransom unless governments make payment of ransoms illegal (which will have a lot of other unintended consequences).
Do you think governments should outlaw payment of cyber ransoms?
One of the most undervalued aspects of incident response is incident documentation
In my experience as a consultant, step 1 is interviewing the client & reviewing whatever scattered notes 🔖📝 they have about an incident & organizing it in a logical manner b/c most orgs do this poorly 🙈
The challenge is analysts (due to the crisis) move fast to respond quickly & most orgs don’t experience impactful breaches often
This leads to scattered knowledge/understanding & each analyst documenting things in their own way - efficient for them, but not for the overall investigation
In my experience - here are the most important things to track (a spreadsheet is my preferred tool; minimal sketch after the list)
One table for each of the following:
-Timeline of forensic artifacts
-Systems
-Indicators (I prefer separate table for host and network)
-Compromised accounts
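A minimal sketch of that structure as CSVs (column choices are my assumptions of typical fields - adjust per investigation):

```python
# Minimal sketch of the tracker structure as CSVs -- column choices are my
# assumption of typical fields, not a prescribed standard.
import csv

TABLES = {
    "timeline.csv": ["timestamp_utc", "system", "artifact", "summary", "analyst"],
    "systems.csv": ["hostname", "ip", "role", "first_evidence", "status"],
    "indicators_host.csv": ["type", "value", "context", "source"],
    "indicators_network.csv": ["type", "value", "context", "source"],
    "compromised_accounts.csv": ["account", "domain", "first_seen", "how_compromised"],
}

for name, columns in TABLES.items():
    with open(name, "w", newline="") as f:
        csv.writer(f).writerow(columns)
print(f"created {len(TABLES)} tracker tables")
```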
After more than a decade - today is my last day @FireEye.
Taking a job @Mandiant was one of the best decisions I've ever made & I wanted to share some of the stories & experiences of what it was like, as well as recognize some of the people that helped me learn and grow
When I started @Mandiant in 2009 the infosec space (it was called information security and not cyber security for starters) was so different from today. It was fairly rare for companies to get breached and when they did there was an amazing amount of stigma associated with that.
I was employee 63 (not because there were 63 active employees but because I was the 63rd employee hired since the inception of the company in ~2005). There were offices in 3 cities (DC, NY, LA) & the company split roughly 50/50 between consultants and software devs working on MIR